Saturday, May 14, 2011

Controlling Robotic Arms Is Child's Play

"The input device contains various movement sensors, also called inertial sensors," says Bernhard Kleiner of the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, who leads the project. The individual micro-electromechanical systems themselves are not expensive. What the scientists have spent time developing is how these sensors interact."We have developed special algorithms that fuse the data of individual sensors and identify a pattern of movement. That means we can detect movements in free space," summarizes Kleiner.

What may at first appear to be a trade show gimmick is in fact a technology that offers numerous advantages in industrial production and logistical processes. The system could be used to simplify the programming of industrial robots, for example. To date, this has been done with the aid of laser tracking systems: An employee demonstrates the desired motion with a hand-held baton that features a white marker point. The system records this motion by analyzing the light reflected from a laser beam aimed at the marker. Configuring and calibrating the system takes a lot of time. The new input device should eliminate the need for these steps in the future -- instead, employees need only pick up the device and show the robot what it is supposed to do.

The system has numerous applications in medicine, as well. Take, for example, gait analysis. Until now, cameras have made precise recordings of patients as they walk back and forth along a specified path. The films reveal to the physician such things as how the joints behave while walking, or whether incorrect posture in the knees has been improved by physical therapy. Installing the cameras, however, is complex and costly, and patients are restricted to a predetermined path. The new sensor system can simplify this procedure: Attached to the patient's upper thigh, it measures the sequences and patterns of movement -- without limiting the patient's motion in any way.

"With the inertial sensor system, gait analysis can be performed without a frame of reference and with no need for a complex camera system," explains Kleiner. In another project, scientists are already working on comparisons of patients' gait patterns with those patterns appearing in connection with such diseases as Parkinson's.

Another medical application for the new technology is the control of active prostheses containing numerous small actuators. Whenever the patient moves, the prosthesis in turn also moves; this makes it possible for a leg prosthesis to roll the foot while walking. Here, too, the sensor could be attached to the patient's upper thigh and could analyze the movement, helping to regulate the motors of the prosthesis. Research scientists are currently working on combining the inertial sensor system with an electromyographic (EMG) sensor. Electromyography is based on the principle that when a muscle tenses, it produces an electrical voltage, which a sensor can then measure by way of an electrode. If the sensor is placed, for example, on the muscle responsible for lifting the patient's foot, the sensor registers when the patient tenses this muscle -- and the prosthetic foot lifts itself. EMG sensors like this are already available but are difficult to position.

"While standard EMG sensors consist of individual electrodes that have to be positioned precisely on the muscle, our system is made up of many small electrodes that attach to a surface area. This enables us to sense muscle movements much more reliably," says Kleiner.


Source

Tuesday, May 10, 2011

Robotics: A Tiltable Head Could Improve the Ability of Undulating Robots to Navigate Disaster Debris

Researchers at the Georgia Institute of Technology recently built a robot that can penetrate and "swim" through granular material. In a new study, they show that varying the shape or adjusting the inclination of the robot's head affects the robot's movement in complex environments.

"We discovered that by changing the shape of the sand-swimming robot's head or by tilting its head up and down slightly, we could control the robot's vertical motion as it swam forward within a granular medium," said Daniel Goldman, an assistant professor in the Georgia Tech School of Physics.

Results of the study will be presented on May 10 at the 2011 IEEE International Conference on Robotics and Automation in Shanghai. Funding for this research was provided by the Burroughs Wellcome Fund, National Science Foundation and Army Research Laboratory.

The study was conducted by Goldman, bioengineering doctoral graduate Ryan Maladen, physics graduate student Yang Ding and physics undergraduate student Andrew Masse, all from Georgia Tech, and Northwestern University mechanical engineering adjunct professor Paul Umbanhowar.

"The biological inspiration for our sand-swimming robot is the sandfish lizard, which inhabits the Sahara desert in Africa and rapidly buries into and swims within sand," explained Goldman."We were intrigued by the sandfish lizard's wedge-shaped head that forms an angle of 140 degrees with the horizontal plane, and we thought its head might be responsible for or be contributing to the animal's ability to maneuver in complex environments."

For their experiments, the researchers attached a wedge-shaped block of wood to the head of their robot, which was built with seven connected segments, powered by servo motors, packed in a latex sock and wrapped in a spandex swimsuit. The doorstop-shaped head -- which resembled the sandfish's head -- had a fixed lower length of approximately 4 inches, height of 2 inches and a tapered snout. The researchers examined whether the robot's vertical motion could be controlled simply by varying the inclination of the robot's head.

Before each experimental run in a test chamber filled with quarter-inch-diameter plastic spheres, the researchers submerged the robot a couple of inches into the granular medium and leveled the surface. Then they tracked the robot's position until it reached the end of the container or swam to the surface.

The researchers investigated the vertical movement of the robot when its head was placed at five different degrees of inclination. They found that when the sandfish-inspired head, whose leading edge formed an angle of 155 degrees with the horizontal plane, was set flat, negative lift force was generated and the robot moved downward into the medium. As the tip of the head was raised from zero to 7 degrees relative to the horizontal, the lift force increased until it became zero. At inclines above 7 degrees, the robot rose out of the medium.

"The ability to control the vertical position of the robot by modulating its head inclination opens up avenues for further research into developing robots more capable of maneuvering in complex environments, like debris-filled areas produced by an earthquake or landslide," noted Goldman.

The robotics results matched the research team's findings from physics experiments and computational models designed to explore how head shape affects lift in granular media.

"While the lift forces of objects in air, such as airplanes, are well understood, our investigations into the lift forces of objects in granular media are some of the first ever," added Goldman.

For the physics experiments, the researchers dragged wedge-shaped blocks through a granular medium. Blocks with leading edges that formed angles with the horizontal plane of less than 90 degrees resembled upside-down doorstops, the block with a leading edge equal to 90 degrees was a square, and blocks with leading edges greater than 90 degrees resembled regular doorstops.

They found that blocks with leading edges that formed angles with the horizontal plane less than 80 degrees generated positive lift forces and wedges with leading edges greater than 120 degrees created negative lift. With leading edges between 80 and 120 degrees, the wedges did not generate vertical forces in the positive or negative direction.
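
Those thresholds amount to a simple three-way rule of thumb; here it is written out directly, with the angle boundaries taken from the ranges reported above.

```python
def lift_direction(leading_edge_deg):
    """Classify the vertical force on a dragged wedge from the angle its
    leading edge forms with the horizontal, per the drag experiments."""
    if leading_edge_deg < 80:
        return "positive lift (rises)"
    elif leading_edge_deg <= 120:
        return "no net vertical force"
    else:
        return "negative lift (sinks)"

print(lift_direction(155))  # the sandfish-inspired head: sinks when flat
```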

Using a numerical simulation of object drag and building on the group's previous studies of lift and drag on flat plates in granular media, the researchers were able to describe the mechanism of force generation in detail.

"When the leading edge of the robot head was less than 90 degrees, the robot's head experienced a lift force as it moved forward, which resulted in a torque imbalance that caused the robot to pitch and rise to the surface," explained Goldman.

Since this study, the researchers have attached a wedge-shaped head to the robot that can be dynamically adjusted to specific angles. With this improvement, the researchers found that the direction of movement of the robot is sensitive to slight changes in the orientation of the head, further validating the results from their physics experiments and computational models.

Being able to precisely control the tilt of the head will allow the researchers to implement different strategies of head movement during burial and determine the best way to wiggle deep into sand. The researchers also plan to test the robot's ability to maneuver through material similar to the debris found after natural disasters and plan to examine whether the sandfish lizard adjusts its head inclination to ensure a straight motion as it dives into the sand.

This material is based on research sponsored by the Burroughs Wellcome Fund, the National Science Foundation (NSF) under Award Number PHY-0749991, and the Army Research Laboratory (ARL) under Cooperative Agreement Number W911NF-08-2-0004.


Source

Friday, May 6, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells them online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond robotics: its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software need to be installed beyond what is used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.
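
To give a flavor of what a first Finch assignment might look like, here is a hypothetical Python sketch; the module, class and method names are illustrative assumptions, not the documented API at www.finchrobot.com.

```python
# Hypothetical first assignment: every name below is assumed, not the
# actual Finch API; consult www.finchrobot.com for the real interface.
from finch import Finch  # assumed module and class name

robot = Finch()
if robot.temperature() < 15.0:       # a chilly room...
    robot.set_beak_led(0, 0, 255)    # ...makes the beak glow blue
else:
    robot.set_beak_led(255, 128, 0)
robot.set_wheels(0.5, -0.5)          # spin in place: a little dance
robot.wait(2.0)
robot.stop()
```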

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Thursday, May 5, 2011

Robots Learn to Share: Why We Go out of Our Way to Help One Another

Altruism, the sacrificing of individual gains for the greater good, appears at first glance to go against the notion of "survival of the fittest." But altruistic gene expression is found in nature and is passed on from one generation to the next. Worker ants, for example, are sterile and make the ultimate altruistic sacrifice by not transmitting their genes at all in order to ensure the survival of the queen's genetic makeup. The sacrifice of the individual in order to ensure the survival of a relative's genetic code is known as kin selection. In 1964, biologist W.D. Hamilton proposed a precise set of conditions under which altruistic behavior may evolve, now known as Hamilton's rule of kin selection. Here's the gist: If an individual family member shares food with the rest of the family, it reduces his or her personal likelihood of survival but increases the chances of family members passing on their genes, many of which are common to the entire family. Hamilton's rule simply states that whether or not an organism shares its food with another depends on its genetic closeness (how many genes it shares) with the other organism.
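
In symbols, Hamilton's rule says altruistic behavior is favored when r × B > C, where r is the genetic relatedness between the two organisms, B is the benefit to the recipient and C is the cost to the altruist, both measured in reproductive fitness. A two-line check makes the trade-off concrete:

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: altruism can evolve when r * B > C."""
    return relatedness * benefit > cost

# Siblings share half their genes on average (r = 0.5), so sharing food
# pays off only if the recipient gains more than twice what the donor loses.
print(altruism_favored(0.5, benefit=3.0, cost=1.0))    # True
print(altruism_favored(0.125, benefit=3.0, cost=1.0))  # cousins: False
```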

Testing the evolution of altruism using quantitative studies in live organisms has been largely impossible because experiments need to span hundreds of generations and there are too many variables. However, Floreano's robots evolve rapidly using simulated gene and genome functions and allow scientists to measure the costs and benefits associated with the trait. Additionally, Hamilton's rule has long been a subject of much debate because its equation seems too simple to be true. "This study mirrors Hamilton's rule remarkably well to explain when an altruistic gene is passed on from one generation to the next, and when it is not," says Keller.

Previous experiments by Floreano and Keller showed that foraging robots doing simple tasks, such as pushing seed-like objects across the floor to a destination, evolve over multiple generations. Those robots not able to push the seeds to the correct location are selected out and cannot pass on their code, while robots that perform comparatively better see their code reproduced, mutated, and recombined with that of other robots into the next generation -- a minimal model of natural selection. The new study by EPFL and UNIL researchers adds a novel dimension: once a foraging robot pushes a seed to the proper destination, it can decide whether it wants to share it or not. Evolutionary experiments lasting 500 generations were repeated for several scenarios of altruistic interaction -- how much is shared and to what cost for the individual -- and of genetic relatedness in the population. The researchers created groups of relatedness that, in the robot world, would be the equivalent of complete clones, siblings, cousins and non-relatives. The groups that shared along the lines of Hamilton's rule foraged better and passed their code onto the next generation.
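
A toy version of that selection scheme, with each genome reduced to a single number (the probability of sharing a found seed), shows how the dynamic plays out. The population size, mutation rate and fitness bookkeeping below are simplifying assumptions, not the EPFL/UNIL setup.

```python
import random

def evolve(pop_size=100, generations=500, r=0.5, benefit=3.0, cost=1.0):
    """Evolve a 'sharing' gene: keeping a seed is worth 1 fitness unit,
    sharing costs the donor but credits relatives via relatedness r."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [1.0 - p * cost + p * r * benefit for p in population]
        # Fitness-proportionate selection, then small mutations
        parents = random.choices(population, weights=fitness, k=pop_size)
        population = [min(1.0, max(0.0, p + random.gauss(0, 0.02)))
                      for p in parents]
    return sum(population) / pop_size  # mean sharing tendency

print(evolve(r=0.5))    # r*B > C: sharing spreads (mean drifts toward 1)
print(evolve(r=0.125))  # r*B < C: sharing collapses (mean drifts toward 0)
```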

The quantitative results matched the predictions of Hamilton's rule surprisingly well, even in the presence of multiple interactions. Hamilton's original theory takes a limited and isolated vision of gene interaction into account, whereas the genetic simulations run in the foraging robots integrate effects of one gene on multiple other genes, with Hamilton's rule still holding true. The findings are already proving useful in swarm robotics. "We have been able to take this experiment and extract an algorithm that we can use to evolve cooperation in any type of robot," explains Floreano. "We are using this altruism algorithm to improve the control system of our flying robots and we see that it allows them to effectively collaborate and fly in swarm formation more successfully."

This research was funded by the Swiss National Science Foundation, the European Commission ECAgents and Swarmanoids projects, and the European Research Council.


Source

Thursday, April 28, 2011

Caterpillars Inspire New Movements in Soft Robots

Some caterpillars have the extraordinary ability to rapidly curl themselves into a wheel and propel themselves away from predators. This highly dynamic process, called ballistic rolling, is one of the fastest wheeling behaviours in nature.

Researchers from Tufts University, Massachusetts, saw this as an opportunity to design a robot that mimics this behaviour of caterpillars and to develop a better understanding of the mechanics behind ballistic rolling.

The study, published on April 27 in IOP Publishing's journal Bioinspiration & Biomimetics, also includes a video of both the caterpillar and robot in action, which can be found at http://www.youtube.com/watch?v=wZe9qWi-LUo.

To simulate the movement of a caterpillar, the researchers designed a 10 cm long soft-bodied robot, called GoQBot, made out of silicone rubber and actuated by embedded shape memory alloy coils. It was named GoQBot as it forms a "Q" shape before rolling away at over half a meter per second.

The GoQBot was designed to specifically replicate the functional morphologies of a caterpillar, and was fitted with 5 infrared emitters along its side to allow motion tracking using one of the latest high speed 3D tracking systems. Simultaneously, a force plate measured the detailed ground forces as the robot pushed off into a ballistic roll.

To change its body conformation so quickly (in less than 100 ms), GoQBot requires a significant degree of mechanical coordination during ballistic rolling. The researchers believe such coordination is mediated by nonlinear muscle coupling in the animals.

The researchers were also able to explain why caterpillars don't use the ballistic roll more often as a default mode of transport; despite its impressive performance, ballistic rolling is only effective on smooth surfaces, demands a large amount of power, and often ends unpredictably.

Not only did the study provide an insight into the fascinating escape system of a caterpillar, it also put forward a new locomotor strategy which could be used in future robot development.

Many modern robots are modelled after snakes, worms and caterpillars for their talents in crawling and climbing into difficult spaces. However, their limbless bodies severely limit these robots' speed in the open. On the other hand, there are many robots that employ a rolling motion in order to travel with speed and efficiency, but they struggle to gain access to difficult spaces.

Lead author Huai-Ti Lin from the Department of Biology, Tufts University, said: "GoQBot demonstrates a solution by reconfiguring its body and could therefore enhance several robotic applications such as urban rescue, building inspection, and environmental monitoring."

"Due to the increased speed and range, limbless crawling robots with ballistic rolling capability could be deployed more generally at a disaster site such as a tsunami aftermath. The robot can wheel to a debris field and wiggle into the danger for us."


Source

Saturday, March 19, 2011

Bomb Disposal Robot Getting Ready for Front-Line Action

The organisations have come together to create a lightweight, remote-operated vehicle, or robot, that can be controlled by a wireless device, not unlike a games console, from a distance of several hundred metres.

The innovative robot, which can climb stairs and even open doors, will be used by soldiers on bomb disposal missions in countries such as Afghanistan.

Experts from the Department of Computer & Communications Engineering, based within the university's School of Engineering, are working on the project alongside NIC Instruments Limited of Folkestone, manufacturers of security search and bomb disposal equipment.

Much lighter and more flexible than traditional bomb disposal units, the robot is easier for soldiers to carry and use when out in the field. It has cameras on board, which relay images back to the operator via the hand-held control, and includes a versatile gripper which can carry and manipulate delicate items.

The robot also includes nuclear, biological and chemical weapons sensors.

Measuring just 72cm by 35cm, the robot weighs 48 kilogrammes and can move at speeds of up to eight miles per hour.


Source

Thursday, March 10, 2011

How Do People Respond to Being Touched by a Robotic Nurse?

The research is being presented March 9 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"What we found was that how people perceived the intent of the robot was really important to how they responded. So, even though the robot touched people in the same way, if people thought the robot was doing that to clean them, versus doing that to comfort them, it made a significant difference in the way they responded and whether they found that contact favorable or not," said Charlie Kemp, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

In the study, researchers looked at how people responded when a robotic nurse, known as Cody, touched and wiped a person's forearm. Although Cody touched the subjects in exactly the same way, they reacted more positively when they believed Cody intended to clean their arm versus when they believed Cody intended to comfort them.

These results echo similar studies done with nurses.

"There have been studies of nurses and they've looked at how people respond to physical contact with nurses," said Kemp, who is also an adjunct professor in Georgia Tech's College of Computing."And they found that, in general, if people interpreted the touch of the nurse as being instrumental, as being important to the task, then people were OK with it. But if people interpreted the touch as being to provide comfort… people were not so comfortable with that."

In addition, Kemp and his research team tested whether people responded more favorably when the robot verbally indicated that it was about to touch them versus touching them without saying anything.

"The results suggest that people preferred when the robot did not actually give them the warning," said Tiffany Chen, doctoral student at Georgia Tech."We think this might be because they were startled when the robot started speaking, but the results are generally inconclusive."

Since many useful tasks require that a robot touch a person, the team believes that future research should investigate ways to make robot touch more acceptable to people, especially in healthcare. Many important healthcare tasks, such as wound dressing and assisting with hygiene, would require a robotic nurse to touch the patient's body.

"If we want robots to be successful in healthcare, we're going to need to think about how do we make those robots communicate their intention and how do people interpret the intentions of the robot," added Kemp."And I think people haven't been as focused on that until now. Primarily people have been focused on how can we make the robot safe, how can we make it do its task effectively. But that's not going to be enough if we actually want these robots out there helping people in the real world."

In addition to Kemp and Chen, the research group consists of Andrea Thomaz, assistant professor in Georgia Tech's College of Computing, and postdoctoral fellow Chih-Hung Aaron King.


Source

Wednesday, March 9, 2011

How Can Robots Get Our Attention?

The research is being presented March 8 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"The primary focus was trying to give Simon, our robot, the ability to understand when a human being seems to be reacting appropriately, or in some sense is interested now in a response with respect to Simon and to be able to do it using a visual medium, a camera," said Aaron Bobick, professor and chair of the School of Interactive Computing in Georgia Tech's College of Computing.

Using the socially expressive robot Simon, from Assistant Professor Andrea Thomaz's Socially Intelligent Machines lab, researchers wanted to see if they could tell when he had successfully attracted the attention of a human who was busily engaged in a task and when he had not.

"Simon would make some form of a gesture, or some form of an action when the user was present, and the computer vision task was to try to determine whether or not you had captured the attention of the human being," said Bobick.

With close to 80 percent accuracy Simon was able to tell, using only his cameras as a guide, whether someone was paying attention to him or ignoring him.

"We would like to bring robots into the human world. That means they have to engage with human beings, and human beings have an expectation of being engaged in a way similar to the way other human beings would engage with them," said Bobick.

"Other human beings understand turn-taking. They understand that if I make some indication, they'll turn and face someone when they want to engage with them and they won't when they don't want to engage with them. In order for these robots to work with us effectively, they have to obey these same kinds of social conventions, which means they have to perceive the same thing humans perceive in determining how to abide by those conventions," he added.

Researchers plan to go further with their investigations into how Simon can read communication cues, studying whether he can tell from a person's gaze, use of language or other actions whether they are paying attention.

"Previously people would have pre-defined notions of what the user should do in a particular context and they would look for those," said Bobick."That only works when the person behaves exactly as expected. Our approach, which I think is the most novel element, is to use the user's current behavior as the baseline and observe what changes."

The research team for this study consisted of Bobick, Thomaz, doctoral student Jinhan Lee and undergraduate student Jeffrey Kiser.


Source

Tuesday, March 8, 2011

Teaching Robots to Move Like Humans

The research was presented March 7 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"It's important to build robots that meet people's social expectations because we think that will make it easier for people to understand how to approach them and how to interact with them," said Andrea Thomaz, assistant professor in the School of Interactive Computing at Georgia Tech's College of Computing.

Thomaz, along with Ph.D. student Michael Gielniak, conducted a study in which they asked how easily people can recognize what a robot is doing by watching its movements.

"Robot motion is typically characterized by jerky movements, with a lot of stops and starts, unlike human movement which is more fluid and dynamic," said Gielniak."We want humans to interact with robots just as they might interact with other humans, so that it's intuitive."

Using a series of human movements taken in a motion-capture lab, they programmed the robot, Simon, to perform the movements. They also optimized that motion to allow for more joints to move at the same time and for the movements to flow into each other in an attempt to be more human-like. They asked their human subjects to watch Simon and identify the movements he made.
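
The paper's optimization itself is more involved, but the "flow into each other" part can be illustrated with the simplest possible device: crossfading the tail of one joint-angle trajectory into the head of the next instead of stopping and restarting. A sketch, assuming trajectories stored as lists of per-frame joint-angle tuples; this is not the authors' algorithm.

```python
def blend(traj_a, traj_b, overlap):
    """Crossfade two joint-angle trajectories over `overlap` frames so the
    second motion begins while the first is still finishing. Requires
    overlap <= len(traj_a) and overlap <= len(traj_b)."""
    head, tail = traj_a[:-overlap], traj_a[-overlap:]
    lead, rest = traj_b[:overlap], traj_b[overlap:]
    blended = []
    for i, (a, b) in enumerate(zip(tail, lead)):
        w = (i + 1) / (overlap + 1)  # weight ramps from traj_a to traj_b
        blended.append(tuple((1 - w) * x + w * y for x, y in zip(a, b)))
    return head + blended + rest
```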

"When the motion was more human-like, human beings were able to watch the motion and perceive what the robot was doing more easily," said Gielniak.

In addition, they tested the algorithm they used to create the optimized motion by asking humans to perform the movements they saw Simon making. The thinking was that if the movement created by the algorithm was indeed more human-like, then the subjects should have an easier time mimicking it. Turns out they did.

"We found that this optimization we do to create more life-like motion allows people to identify the motion more easily and mimic it more exactly," said Thomaz.

The research that Thomaz and Gielniak are doing is part of a broader effort to get robots to move more like humans do. In future work, the pair plan to look at how to get Simon to perform the same movements in various ways.

"So, instead of having the robot move the exact same way every single time you want the robot to perform a similar action like waving, you always want to see a different wave so that people forget that this is a robot they're interacting with," said Gielniak.


Source

Wednesday, March 2, 2011

New Technique for Improving Robot Navigation Systems

An autonomous mobile robot is a robot that is able to navigate its environment without colliding or getting lost. Such robots are also able to recover from spatial disorientation. Conducted by Sergio Guadarrama, a researcher at the European Centre for Soft Computing, and Antonio Ruiz, assistant professor at the Universidad Politécnica de Madrid's Facultad de Informática, and published in the Information Sciences journal, the research focuses on map building. Map building is one of the skills related to autonomous navigation, where a robot is required to explore an unknown environment (enclosure, plant, buildings, etc.) and draw up a map of the environment. Before it can do this, the robot has to use its sensors to perceive obstacles.

The main sensor types used for autonomous navigation are vision and range sensors. Although vision sensors can capture much more information from the environment, this research used range, specifically ultrasonic, sensors, which are less accurate, to demonstrate that the model builds accurate maps from few and imprecise input data.

Once it has captured the ranges, the robot has to map these distances to obstacles on the map. Point clouds are used to draw the map, as the imprecision of the range data rules out the use of straight lines or even isolated points. Even so, the resulting map is by no means an architectural blueprint of the site, because not even the robot's location is precisely known, and there is no guarantee that each point cloud is correctly positioned. In actual fact, one and the same obstacle can be viewed properly from one robot position, but not from another. This can produce contradictory information (obstacle and no obstacle) about the same area of the map under construction. Which of the two interpretations is correct?

Exploring unknown spaces

The solution is based on linguistic descriptions of the antonyms "vacant" and "occupied" and inspired by computing with words and the computational theory of perceptions, two theories proposed by L.A. Zadeh of the University of California at Berkeley. Whereas other published research views obstacles and empty spaces as complementary concepts, this research assumes that, rather than being complements, obstacles and vacant spaces are a pair of opposites.

For example, we can infer that an occupied space is not vacant, but we cannot infer that an unoccupied space is empty. This space could be unknown or ambiguous, because the robot has limited information about its environment. Contradictions between "vacant" and "occupied" are also explicitly represented.

This way, the robot is able to make a distinction between two types of unknown spaces: spaces that are unknown because information is contradictory and spaces that are unknown because they are unexplored. This would lead the robot to navigate with caution through the contradictory spaces and explore the unexplored spaces. The map is constructed using linguistic rules, such as "If the measured distance is short, then assign a high confidence level to the measurement" or "If an obstacle has been seen several times, then increase the confidence in its presence," where "short," "high" and "several" are fuzzy sets, as defined in fuzzy set theory. Contradictions are resolved by relying more heavily on shorter ranges and by combining multiple measurements.
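
The flavor of those rules is easy to capture in code. In the sketch below, "occupied" and "vacant" are tracked as separate degrees of confidence rather than complements, exactly so that "both low" (unexplored) and "both high" (contradictory) remain distinguishable; the membership function and cutoffs are illustrative stand-ins, not the published model.

```python
def short_membership(distance, near=0.5, far=3.0):
    """Fuzzy membership of 'short range': 1 near the sensor, 0 past `far`.
    Shorter ranges earn higher confidence, as in the rules above."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)

class Cell:
    """One map cell with independent 'occupied' and 'vacant' evidence."""

    def __init__(self):
        self.occupied = 0.0
        self.vacant = 0.0

    def update(self, saw_obstacle, distance):
        confidence = short_membership(distance)
        if saw_obstacle:
            self.occupied = max(self.occupied, confidence)
        else:
            self.vacant = max(self.vacant, confidence)

    def state(self):
        if self.occupied < 0.2 and self.vacant < 0.2:
            return "unexplored"              # worth exploring
        if self.occupied > 0.5 and self.vacant > 0.5:
            return "contradictory"           # navigate with caution
        return "occupied" if self.occupied > self.vacant else "vacant"
```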

Compared with the results of other methods, the outcomes show that the maps built using this technique better capture the shape of walls and open spaces, and contain fewer errors from incorrect sensor data. This opens opportunities for improving the current autonomous navigation systems for robots.


Source

Friday, February 18, 2011

Controlling a Computer With Thoughts?

The projects build upon ongoing research conducted in epilepsy patients who had the interfaces temporarily placed on their brains and were able to move cursors and play computer games, as well as in monkeys that through interfaces guided a robotic arm to feed themselves marshmallows and turn a doorknob.

"We are now ready to begin testing BCI technology in the patients who might benefit from it the most, namely those who have lost the ability to move their upper limbs due to a spinal cord injury," said Michael L. Boninger, M.D., director, UPMC Rehabilitation Institute, chair, Department of Physical Medicine and Rehabilitation, Pitt School of Medicine, and a senior scientist on both projects."It's particularly exciting for us to be able to test two types of interfaces within the brain."

"By expanding our research from the laboratory to clinical settings, we hope to gain a better understanding of how to train and motivate patients who will benefit from BCI technology," said Elizabeth Tyler-Kabara, M.D., Ph.D., a UPMC neurosurgeon and assistant professor of neurological surgery and bioengineering, Pitt Schools of Medicine and Engineering, and the lead surgeon on both projects.

In one project, funded by an $800,000 grant from the National Institutes of Health, a BCI based on electrocorticography (ECoG) will be placed on the motor cortex surface of a spinal cord injury patient's brain for up to 29 days. The neural activity picked up by the BCI will be translated through a computer processor, allowing the patient to learn to control computer cursors, virtual hands, computer games and assistive devices such as a prosthetic hand or a wheelchair.

The second project, funded by the Defense Advanced Research Projects Agency (DARPA) for up to $6 million over three years, is part of a program led by the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Md. It will further develop technology tested in monkeys by Andrew Schwartz, Ph.D., professor of neurobiology, Pitt School of Medicine, and also a senior investigator on both projects.

The second interface is a tiny, 10-by-10 array of electrodes that is implanted on the surface of the brain to read activity from individual neurons. Those signals will be processed and relayed to maneuver a sophisticated prosthetic arm.
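
Schwartz's animal studies used population-vector-style decoding, in which each recorded neuron "votes" for its preferred movement direction in proportion to how far its firing rate rises above baseline. A minimal sketch of that idea follows; the channel count, baselines and tuning directions are placeholders, not the clinical decoder.

```python
import numpy as np

def population_vector(rates, preferred_dirs, baselines):
    """Sum each channel's preferred 3-D direction, weighted by its
    firing-rate modulation, to get a movement command for the arm."""
    weights = rates - baselines          # modulation above baseline
    command = weights @ preferred_dirs   # (n,) @ (n, 3) -> (x, y, z)
    norm = np.linalg.norm(command)
    return command / norm if norm > 0 else command

rng = np.random.default_rng(0)
dirs = rng.normal(size=(100, 3))         # a 10-by-10 array: 100 channels
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rates = rng.poisson(25, size=100).astype(float)
print(population_vector(rates, dirs, baselines=np.full(100, 20.0)))
```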

"Our animal studies have shown that we can interpret the messages the brain sends to make a simple robotic arm reach for an object and turn a mechanical wrist," Dr. Schwartz said."The next step is to see not only if we can make these techniques work for people, but also if we can make the movements more complex."

In the study, which is expected to begin by late 2011, participants will get two separate electrodes. In future research efforts, the technology may be enhanced with an innovative telemetry system that would allow wireless control of a prosthetic arm, as well as a sensory component.

"Our ultimate aim is to develop technologies that can give patients with physical disabilities control of assistive devices that will help restore their independence," Dr. Boninger said.


Source

Saturday, February 5, 2011

Future Surgeons May Use Robotic Nurse, 'Gesture Recognition'

Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The"vision-based hand gesture recognition" technology could have other applications, including the coordination of emergency response activities during disasters.

"It's a concept Tom Cruise demonstrated vividly in the film 'Minority Report,'" Wachs said.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria.

The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.

At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency, Wachs said.

Findings from the research will be detailed in a paper appearing in the February issue of Communications of the ACM, the flagship publication of the Association for Computing Machinery. The paper was written by researchers at Purdue, the Naval Postgraduate School in Monterey, Calif., and Ben-Gurion University of the Negev, Israel.

Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively.

He is now working to extend the system's capabilities in research with Purdue's School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.

"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," Wachs said."You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use."

Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures.

"Say the surgeon starts talking to another person in the operating room and makes conversational gestures," Wachs said."You don't want the robot handing the surgeon a hemostat."

A scrub nurse assists the surgeon and hands the proper surgical instruments to the doctor when needed.

"While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room," Wachs said."In that case, a robotic scrub nurse could be better."

The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university's School of Veterinary Medicine.

Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, little work has been done in the area of gesture recognition, Wachs said.

"Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed," he said.

Wachs is developing advanced algorithms that isolate the hands and apply "anthropometry," or predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.

"Another contribution is that by tracking a surgical instrument inside the patient's body, we can predict the most likely area that the surgeon may want to inspect using the electronic image medical record, and therefore saving browsing time between the images," Wachs said."This is done using a different sensor mounted over the surgical lights."

The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is used in new consumer gaming systems that can track a person's hands without the use of a wand.

"You just step into the operating room, and automatically your body is mapped in 3-D," he said.

Accuracy and gesture-recognition speed depend on advanced software algorithms.

"Even if you have the best camera, you have to know how to program the camera, how to use the images," Wachs said."Otherwise, the system will work very slowly."

The research paper defines a set of requirements, including recommendations that the system should:

  • Use a small vocabulary of simple, easily recognizable gestures.
  • Not require the user to wear special virtual reality gloves or certain types of clothing.
  • Be as low-cost as possible.
  • Be responsive and able to keep up with the speed of a surgeon's hand gestures.
  • Let the user know whether it understands the hand gestures by providing feedback, perhaps just a simple "OK."
  • Use gestures that are easy for surgeons to learn, remember and carry out with little physical exertion.
  • Be highly accurate in recognizing hand gestures.
  • Use intuitive gestures, such as two fingers held apart to mimic a pair of scissors.
  • Be able to disregard unintended gestures by the surgeon, perhaps made in conversation with colleagues in the operating room.
  • Be able to quickly configure itself to work properly in different operating rooms, under various lighting conditions and other criteria.

"Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition," Wachs said."Much is already known about voice recognition."

The work is funded by the U.S. Agency for Healthcare Research and Quality.


Source

Saturday, January 22, 2011

For Robust Robots, Let Them Be Babies First

Or at least that's not too far off from what University of Vermont roboticist Josh Bongard has discovered, as he reports in the January 10 online edition of the Proceedings of the National Academy of Sciences.

In a first-of-its-kind experiment, Bongard created both simulated and actual robots that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolved, spending less time in "infant" tadpole-like forms and more time in "adult" four-legged forms.

These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, in their final form, the changing robots had developed a more robust gait -- better able to deal with, say, being knocked with a stick -- than the ones that had learned to walk using upright legs from the beginning.

"This paper shows that body change, morphological change, actually helps us design better robots," Bongard says."That's never been attempted before."

Robots are complex

Bongard's research, supported by the National Science Foundation, is part of a wider venture called evolutionary robotics. "We have an engineering goal," he says, "to produce robots as quickly and consistently as possible." In this experimental case: upright four-legged robots that can move themselves to a light source without falling over.

"But we don't know how to program robots very well," Bongard says, because robots are complex systems. In some ways, they are too much like people for people to easily understand them.

"They have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there's neurons and there's sensors and motors and they're all turning on and off in parallel," Bongard says,"and the emergent behavior from the complex system which is a robot, is some useful task like clearing up a construction site or laying pavement for a new road." Or at least that's the goal.

But, so far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments.

Which is why Bongard, an assistant professor in UVM's College of Engineering and Mathematical Sciences, and other robotics experts have turned to computer programs to design robots and develop their behaviors -- rather than trying to program the robots' behavior directly.

His new work may help.

To the light

Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a 3-dimensional space. "It looks like a modern video game," he says. Each creature -- or, rather, each generation of the creatures -- then runs a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait -- based on its body plan -- that can get it to the light source without tipping over.

"The robots have 12 moving parts," Bongard says."They look like the simplified skeleton of a mammal: it's got a jointed spine and then you have four sticks -- the legs -- sticking out."

Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others ran the full set of simulations with upright legs, like mammals.

And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs, ultimately perform better, getting to the desired behavior faster?

"The snake and reptilian robots are, in essence, training wheels," says Bongard,"they allow evolution to find motion patterns quicker, because those kinds of robots can't fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it's able to tackle the balance problem after already solving the movement problem."

Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?

"Yes," says Bongard,"We're copying nature, we're copying evolution, we're copying neural science when we're building artificial brains into these robots." But the key point is that his robots don't only evolve their artificial brain -- the neural network controller -- but rather do so in continuous interaction with a changing body plan. A tadpole can't kick its legs, because it doesn't have any yet; it's learning some things legless and others with legs.

And this may help to explain the most surprising -- and useful -- finding in Bongard's study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn't before faced, like efforts to tip them over.

Bongard is not exactly sure why this is, but he thinks it's because controllers that evolved in the robots whose bodies changed over generations learned to maintain the desired behavior over a wider range of sensor-motor arrangements than controllers evolved in robots with fixed body plans. It seems that learning to walk while flat, squat, and then upright gave the evolving robots the resilience to stay upright when faced with new disruptions. Perhaps what a tadpole learns before it has legs makes it better able to use its legs once they grow.

"Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies," Bongard writes in his PNAS paper"This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world."

"One thing that has been left out all this time is the obvious fact that in nature it's not that the animal's body stays fixed and its brain gets better over time," he says,"in natural evolution animals bodies and brains are evolving together all the time." A human infant, even if she knew how, couldn't walk: her bones and joints aren't up to the task until she starts to experience stress on the foot and ankle.

That hasn't been done in robotics for an obvious reason: "It's very hard to change a robot's body," Bongard says. "It's much easier to change the programming inside its head."

Lego proof

Still, Bongard gave it a try. After running 5000 simulations, each taking 30 hours on the parallel processors in UVM's Vermont Advanced Computing Center --"it would have taken 50 or 100 years on a single machine," Bongard says -- he took the task into the real world.

"We built a relatively simple robot, out of a couple of Lego Mindstorm kits, to demonstrate that you actually could do it," he says. This physical robot is four-legged, like in the simulation, but the Lego creature wears a brace on its front and back legs."The brace gradually tilts the robot," as the controller searches for successful movement patterns, Bongard says,"so that the legs go from horizontal to vertical, from reptile to quadruped.

"While the brace is bending the legs, the controller is causing the robot to move around, so it's able to move its legs, and bend its spine," he says,"it's squirming around like a reptile flat on the ground and then it gradually stands up until, at the end of this movement pattern, it's walking like a coyote."

"It's a very simple prototype," he says,"but it works; it's a proof of concept."


Source

Friday, January 21, 2011

Robotic Ghost Knifefish Is 'Born'

The robot -- created after observing and creating computer simulations of the black ghost knifefish -- could pave the way for nimble robots that could perform underwater recovery operations or long-term monitoring of coral reefs.

Led by Malcolm MacIver, associate professor of mechanical and biomedical engineering at Northwestern's McCormick School of Engineering and Applied Science, the team's results are published in the Journal of the Royal Society Interface.

The black ghost knifefish, which hunts at night in rivers of the Amazon basin, senses its prey using a weak electric field around its entire body and moves both forward and backward using a ribbon-like fin on the underside of its body.

MacIver, a robotics expert who served as a scientific consultant for "Tron: Legacy" and is science advisor for the television series "Caprica," has studied the knifefish for years. Working with Neelesh Patankar, associate professor of mechanical engineering and co-author of the paper, he has created mechanical models of the fish in hopes of better understanding how the nervous system sends messages throughout the body to make it move.

Planning for the robot -- called GhostBot -- began when graduate student Oscar Curet, a co-author of the paper, observed a knifefish suddenly moving vertically in a tank in MacIver's lab.

"We had only tracked it horizontally before," said MacIver, a recent recipient of the Presidential Early Career Award for Scientists and Engineers."We thought, 'How could it be doing this?'"

Further observations revealed that while the fish uses only one traveling wave along the fin during horizontal motion (forward or backward, depending on the direction of the wave), it uses two waves while moving vertically. One of these moves from head to tail, and the other moves from tail to head. The two waves collide and stop at the center of the fin.

The team then created a computer simulation that showed that when these "inward counterpropagating waves" are generated by the fin, horizontal thrust is canceled and the fluid motion generated by the two waves is funneled into a downward jet from the center of the fin, pushing the body up. The flow structure looks like a mushroom cloud with an inverted jet.
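
The kinematics of that collision can be written out directly (the fluid dynamics required the full simulation): two equal counterpropagating traveling waves sum to a standing wave, 2A·sin(kx)·cos(ωt), whose opposed horizontal components cancel, with a node mid-fin where the waves meet and stop.

```python
import math

def fin_deflection(x, t, amplitude=1.0, wavelength=1.0, freq=2.0):
    """Superpose the two counterpropagating fin waves:
    A*sin(kx - wt) + A*sin(kx + wt) = 2A*sin(kx)*cos(wt),
    a standing wave whose opposed horizontal thrusts cancel."""
    k = 2 * math.pi / wavelength
    w = 2 * math.pi * freq
    head_to_tail = amplitude * math.sin(k * x - w * t)
    tail_to_head = amplitude * math.sin(k * x + w * t)
    return head_to_tail + tail_to_head

# With one wavelength spanning the fin, the collision point at mid-fin
# (x = 0.5) is a node: the deflection there stays zero as the waves meet.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, round(fin_deflection(x, t=0.1), 3))
```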

"It's interesting because you're getting force coming off the animal in a completely unexpected direction that allows it to do acrobatics that, given its lifestyle of hunting and maneuvering among tree roots, makes a huge amount of sense," MacIver said.

The group then hired Kinea Design, a design firm founded by Northwestern faculty that specializes in human interactive mechatronics, and worked closely with its co-founder, Michael Peshkin, professor of mechanical engineering, to design and build a robot. The company fashioned a forearm-length waterproof robot with 32 motors giving independent control of the 32 artificial fin rays of the Lycra-covered artificial fin. (That means the robot has 32 degrees of freedom. In comparison, industrial robot arms typically have fewer than 10.) Seven months and $200,000 later, the GhostBot came to life.

The group took the robot to Harvard University to test it in a flow tunnel in the lab of George V. Lauder, professor of ichthyology and co-author of the paper. The team measured the flow around the robotic fish by placing reflective particles in the water, then shining a laser sheet into the water. That allowed them to track the flow of the water by watching the particles, and the test showed the water flowing around the biomimetic robot just as computer simulations predicted it would.

"It worked perfectly the first time," MacIver said."We high-fived. We had the robot in the real world being pushed by real forces."

The robot is also outfitted with an electrosensory system that works similarly to the knifefish's, and MacIver and his team hope to next improve the robot so it can autonomously use its sensory signals to detect an object and then use its mechanical system to position itself near the object.

Humans excel at creating high-speed, low-maneuverability technologies, like airplanes and cars, MacIver said. But studying animals provides a platform for creating low-speed, high-maneuverability technologies -- technologies that don't currently exist. Potential applications for such a robot include underwater recovery operations, such as plugging a leaking oil pipe, or long-term monitoring of oceanic environments, such as fragile coral reefs.

While the applied work on the robot moves ahead in the lab, the group is pursuing basic science questions as well. "The robot is a tool for uncovering the extremely complicated story of how to coordinate movement in animals," MacIver said. "By simulating and then performing the motions of the fish, we're getting insight into the mechanical basis of the remarkable agility of a very acrobatic, non-visual fish. The next step is to take the sensory work and unite the two."


Source