Friday, December 31, 2010

New Cognitive Robotics Lab Tests Theories of Human Thought

"The real world has a lot of inconsistency that humans handle almost without noticing -- for example, we walk on uneven terrain, we see in shifting light," said Professor Vladislav Daniel Veksler, who is currently teaching Cognitive Robotics. "With robots, we can see the problems humans face when navigating their environment."

Cognitive Robotics marries the study of cognitive science -- how the brain represents and transforms information -- with the challenges of a physical environment. Advances in cognitive robotics transfer to artificial intelligence, which seeks to develop more efficient computer systems patterned on the versatility of human thought.

Professor Bram Van Heuveln, who organized the lab, said cognitive scientists have developed a suite of elements -- perception/action, planning, reasoning, memory, decision-making -- that are believed to constitute human thought. When properly modeled and connected, those elements are capable of solving complex problems without the raw power required by precise mathematical computations.

"Suppose we wanted to build a robot to catch fly balls in an outfield. There are two approaches: one uses a lot of calculations -- Newton's laws, mechanics, trigonometry, calculus -- to get the robot to be in the right spot at the right time," said Van Heuveln. "But that's not the way humans do it. We just keep moving toward the ball. It's a very simple solution that doesn't involve a lot of computation, but it gets the job done."
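Van Heuveln's fly-ball example is a classic illustration of the "gaze heuristic": a fielder standing where the ball will land sees the tangent of its elevation angle grow at a constant rate, so catching reduces to running until that rate stops changing -- no trajectory equations required. A minimal sketch of the underlying signal (the launch numbers are illustrative, not from the lab):

```python
G = 9.8                      # gravity, m/s^2
VX, VY = 20.0, 20.0          # launch velocity components (illustrative values)
T_FLIGHT = 2.0 * VY / G      # time aloft for a drag-free projectile
X_LAND = VX * T_FLIGHT       # where the ball comes down

def tan_elevation(fielder_x, t):
    """Tangent of the ball's elevation angle, as seen from fielder_x."""
    x = VX * t                       # ball's horizontal position
    y = VY * t - 0.5 * G * t * t     # ball's height
    return y / (fielder_x - x)

def second_diffs(fielder_x, dt=0.05, horizon=0.8):
    """Discrete second differences of tan(elevation) over the early flight."""
    ts = [i * dt for i in range(1, int(horizon * T_FLIGHT / dt))]
    tans = [tan_elevation(fielder_x, t) for t in ts]
    return [tans[i + 1] - 2.0 * tans[i] + tans[i - 1]
            for i in range(1, len(tans) - 1)]
```

Standing at the landing point, the second differences vanish (the tangent grows linearly); standing short of it, they are positive ("run back"); standing too deep, negative ("run in"). That sign is the whole control signal -- a perceptual check replaces the calculus.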

Robotics is an ideal testing ground for that principle because robots act in the real world, where a correct cognitive solution must withstand unexpected variables.

"The physical world can help us to drive science because it's different from any simulated world we could come up with -- the camera shakes, the motors slip, there's friction, the light changes," Veksler said. "This platform -- robotics -- allows us to see that you can't rely on calculations. You have to be adaptive."

The lab is open to all students at Rensselaer. In its first semester, the lab has largely attracted computer science and cognitive science students enrolled in a Cognitive Robotics course taught by Veksler, but Veksler and Van Heuveln hope it will attract more engineering and art students as word of the facility spreads.

"We want different students together in one space -- a place where we can bring the different disciplines and perspectives together," said Van Heuveln. "I would like students to use this space for independent research: they come up with the research project, they say 'let's look at this.'"

The lab is equipped with five "Create" robots -- essentially a Roomba robotic vacuum cleaner paired with a laptop; three hand-eye systems; one Chiara (which looks like a large metal crab); and 10 LEGO robots paired with the Sony Handy Board robotic controller.

On a recent day, Jacqui Brunelli and Benno Lee were working on their robot "cat" and "mouse" pair, which try to chase and evade each other respectively; Shane Reilly was improving the computer "vision" of his robotic arm; and Ben Ball was programming his robot to maintain a fixed distance from a pink object waved in front of its "eye."

"The thing that I've learned is that the sensor data isn't exact -- what it 'sees' constantly changes by a few pixels -- and to try to go by that isn't going to work," said Ball, a junior and student of computer science and physics.

Ball said he is trying to pattern his robot on a more human approach.

"We don't just look at an object and walk toward it. We check our position, adjusting our course," Ball said. "I need to devise an iterative approach where the robot looks at something, then moves, then looks again to check its results."

The work of the students, who program their robots with the Tekkotsu open-source software, could be applied in future projects, said Van Heuveln.

"As a cognitive scientist, I want this to be built on elements that are cognitively plausible and that are recyclable -- parts of cognition that I can apply to other solutions as well," said Van Heuveln. "To me, that's a heck of a lot more interesting than the computational solution."

Their early investigations in a generic domain show how a more cognitive approach employing limited resources can outpace more powerful computers using a brute-force approach, said Veksler.

"We look to humans not just because we want to simulate what we do, which is an interesting problem in itself, but also because we're smart," said Veksler. "Some of the things we have, like limited working memory -- which may seem like a bad thing -- are actually optimal for solving problems in our environment. If you remembered everything, how would you know what's important?"


Source

Saturday, November 27, 2010

I Want to See What You See: Babies Treat 'Social Robots' as Sentient Beings

Curiosity drives their learning. At 18 months old, babies are intensely curious about what makes humans tick. A team of University of Washington researchers is studying how infants tell which entities are "psychological agents" that can think and feel.

Research published in the October/November issue of Neural Networks provides a clue as to how babies decide whether a new object, such as a robot, is a sentient being or an inanimate object. Four times as many babies who watched a robot interact socially with people were willing to learn from the robot as babies who did not see the interactions.

"Babies learn best through social interactions, but what makes something 'social' for a baby?" said Andrew Meltzoff, lead author of the paper and co-director of the UW's Institute for Learning and Brain Sciences. "It is not just what something looks like, but how it moves and interacts with others that gives it special meaning to the baby."

The UW researchers hypothesized that babies would be more likely to view the robot as a psychological being if they saw other friendly human beings socially interacting with it. "Babies look to us for guidance in how to interpret things, and if we treat something as a psychological agent, they will, too," Meltzoff said. "Even more remarkably, they will learn from it, because social interaction unlocks the key to early learning."

During the experiment, an 18-month-old baby sat on its parent's lap facing Rechele Brooks, a UW research assistant professor and a co-author of the study. Sixty-four babies participated in the study, and they were tested individually. They played with toys for a few minutes, getting used to the experimental setting. Once the babies were comfortable, Brooks removed a barrier that had hidden a metallic humanoid robot with arms, legs, a torso and a cube-shaped head containing camera lenses for eyes. The robot -- controlled by a researcher hidden from the baby -- waved, and Brooks said, "Oh, hi! That's our robot!"

Following a script, Brooks asked the robot, named Morphy, if it wanted to play, and then led it through a game. She would ask, "Where is your tummy?" and "Where is your head?" and the robot pointed to its torso and its head. Then Brooks demonstrated arm movements and Morphy imitated. The babies looked back and forth as if at a ping pong match, Brooks said.

At the end of the 90-second script, Brooks excused herself from the room. The researchers then measured whether the baby thought the robot was more than its metal parts.

The robot beeped and shifted its head slightly -- enough of a rousing to capture the babies' attention. The robot turned its head to look at a toy next to the table where the baby sat on the parent's lap. Most babies -- 13 out of 16 -- who had watched the robot play with Brooks followed the robot's gaze. In a control group of babies who had been familiarized with the robot but had not seen Morphy engage in games, only three of 16 turned to where the robot was looking.
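The 13-of-16 versus 3-of-16 split can be checked for statistical significance with a one-sided Fisher exact test. This reanalysis is purely illustrative -- the paper's own statistical treatment may differ:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability, under independence, of a result at least this extreme."""
    n = a + b + c + d
    row1 = a + b          # size of group 1
    col1 = a + c          # total number of 'successes'
    def hyper(k):         # hypergeometric probability of k successes in group 1
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return sum(hyper(k) for k in range(a, min(row1, col1) + 1))
```

With 13 gaze-followers out of 16 in the social-interaction group against 3 of 16 in the control group, the one-sided p-value comes out near 0.0005 -- far below conventional significance thresholds, consistent with the strong effect the researchers report.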

"We are using modern technology to explore an age-old question about the essence of being human," said Meltzoff, who holds the Job and Gertrud Tamaki Endowed Chair in psychology at the UW. "The babies are telling us that communication with other people is a fundamental feature of being human."

The study has implications for humanoid robots, said co-author Rajesh Rao, UW associate professor of computer science and engineering and head of UW's neural systems laboratory. Rao's team helped design the computer programs that made Morphy appear social. "The study suggests that if you want to build a companion robot, it is not sufficient to make it look human," said Rao. "The robot must also be able to interact socially with humans, an interesting challenge for robotics."

The study was funded by the Office of Naval Research and the National Science Foundation. Aaron Shon, who graduated from UW with a doctorate in computer science and engineering, is also a co-author on the paper.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.


Source

Friday, November 26, 2010

McSleepy Meets DaVinci: Doctors Conduct First-Ever All-Robotic Surgery and Anesthesia

“Collaboration between DaVinci, a surgical robot, and the anesthetic robot McSleepy seemed an obvious fit; robots in medicine can provide health care of higher safety and precision, thus ultimately improving outcomes,” said Dr. TM Hemmerling of McGill University and MUHC’s Department of Anesthesia, who is also a neuroscience researcher at the Research Institute (RI) of the MUHC.

“The DaVinci allows us to work from a workstation operating surgical instruments with delicate movements of our fingers with a precision that cannot be provided by humans alone,” said Dr. A. Aprikian, MUHC urologist in chief and Director of the MUHC Cancer Care Mission, and also a researcher in the Cancer Axis at the RI MUHC. He and his team of surgeons operate the robotic arms from a dedicated workstation via video control with unsurpassed 3D HD image quality.

“Providing anesthesia for robotic prostatectomy can be challenging because of the specific patient positioning and the high degree of muscle relaxation necessary to maintain perfect conditions for the surgical team,” added Dr. Hemmerling. “Automated anesthesia delivery via McSleepy guarantees the same high quality of care every time it is used, independent of the subjective level of expertise. It can be configured exactly to the specific needs of different surgeries, such as robotic surgery.”

“Obviously, there is still some work needed to perfect the all-robotic approach – from technical aspects to space requirements for the robots,” added Dr. Hemmerling. “Whereas robots have been used in surgery for quite some time, anesthesia has finally caught up. Robots will not replace doctors but help them to perform to the highest standards.”

Combining both robots, the specialists at the MUHC can deliver the most modern and accurate patient care. The researchers will use the results of this project to test all-robotic surgery and anesthesia in a larger group of patients and in various types of surgery. “This should allow for faster, safer and more precise surgery for our patients,” concluded Dr. Aprikian.



Source

Thursday, November 25, 2010

Robotic Gripper Runs on Coffee ... and Balloons

They call it a universal gripper, as it conforms to the object it's grabbing rather than being designed for particular objects, said Hod Lipson, Cornell associate professor of mechanical engineering and computer science. The research is a collaboration between the groups of Lipson, Heinrich Jaeger at the University of Chicago, and Chris Jones at iRobot Corp. It is published Oct. 25 online in Proceedings of the National Academy of Sciences.

"This is one of the closest things we've ever done that could be on the market tomorrow," Lipson said. He noted that the universality of the gripper makes future applications seemingly limitless: the military could use it to dismantle explosive devices or move potentially dangerous objects, and it could serve on robotic arms in factories, on the feet of a robot that walks on walls, or on prosthetic limbs.

Here's how it works: An everyday party balloon filled with ground coffee -- any variety will do -- is attached to a robotic arm. The coffee-filled balloon presses down and deforms around the desired object, and then a vacuum sucks the air out of the balloon, solidifying its grip. When the vacuum is released, the balloon becomes soft again, and the gripper lets go.
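The press-evacuate-release cycle described above can be captured as a small state machine. The states and transition names below are invented for illustration; they are not from the paper or from iRobot's control software:

```python
from enum import Enum, auto

class GripperState(Enum):
    OPEN = auto()      # balloon soft, at ambient pressure
    PRESSED = auto()   # balloon deformed around the object
    JAMMED = auto()    # vacuum applied, coffee grains locked solid

class JammingGripper:
    """Toy state machine for the universal gripper's grip cycle."""
    def __init__(self):
        self.state = GripperState.OPEN

    def press(self):
        assert self.state is GripperState.OPEN
        self.state = GripperState.PRESSED   # conform to the object's shape

    def evacuate(self):
        assert self.state is GripperState.PRESSED
        self.state = GripperState.JAMMED    # grains jam; the grip solidifies

    def release(self):
        assert self.state is GripperState.JAMMED
        self.state = GripperState.OPEN      # air returns; the balloon softens
```

The asserts encode the physical constraint that each step only makes sense from the previous one -- the vacuum can only solidify a balloon that is already pressed against its target.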

Jaeger said coffee is an example of a particulate material, which is characterized by large aggregates of individually solid particles. Particulate materials have a so-called jamming transition, which turns their behavior from fluid-like to solid-like when the particles can no longer slide past each other.

The phenomenon will be familiar to anyone who has handled vacuum-packed coffee, which is hard as a brick until the package is unsealed.

"The ground coffee grains are like lots of small gears," Lipson said. "When they are not pressed together they can roll over each other and flow. When they are pressed together just a little bit, the teeth interlock, and they become solid."

Jaeger explains that the concept of a "jamming transition" provides a unified framework for understanding and predicting behavior in a wide range of disordered, amorphous materials -- many liquids, colloids, emulsions and foams, as well as particulate matter consisting of macroscopic grains. All of these materials can be driven into a 'glassy' state in which they respond like a solid yet structurally resemble a liquid.

"What is particularly neat with the gripper is that here we have a case where a new concept in basic science provided a fresh perspective in a very different area -- robotics -- and then opened the door to applications none of us had originally thought about," Jaeger said.

Eric Brown, a postdoctoral researcher, and Nick Rodenberg, a physics undergraduate, worked with Jaeger on characterizing the basic mechanisms that enable the gripping action. Prototypes of the gripper were built and tested by Lipson and Cornell graduate student John Amend as well as at iRobot.

As for the right particulate material, anything that can jam will do in principle, and early prototypes involved rice, couscous and even ground-up tires. They settled on coffee because it is light but jams well, Amend said; sand jammed even better but was prohibitively heavy. What sets the jamming-based gripper apart is its good performance with almost any object, including a raw egg or a coin -- both notoriously difficult for traditional robotic grippers.

The project was supported by the Defense Advanced Research Projects Agency.


Source

Wednesday, November 24, 2010

Underwater Robots on Course to the Deep Sea

Even when equipped with compressed-air tanks and diving regulators, humans reach their limits under water very quickly. In contrast, unmanned submersible vehicles connected by cable to a control center permit long and deep dives. Today remote-controlled diving robots are used for research, inspection and maintenance work. The possible applications of this technology are limited, however, by the length of the cable and the skill of the operator. No wonder researchers are working on autonomous underwater robots that orient themselves under water and carry out jobs without any human help.

In the meantime, there are AUVs (autonomous underwater vehicles) that collect data independently or take samples before returning to their starting points. "For the time being, the technology is too expensive to carry out routine work, such as inspections of bulkheads, dams or ships' hulls," explains Dr. Thomas Rauschenbach, Director of the Application Center System Technology AST in Ilmenau, Germany, at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB. This may change soon. Together with researchers at four Fraunhofer Institutes, Rauschenbach's team is working on a generation of autonomous underwater robots that will be smaller, more robust and cheaper than previous models. The AUVs should find their bearings as well in clear mountain reservoirs as in turbid harbor water, and they will be suited both to work on the deep-sea floor and to inspections of the shallow concrete foundations that offshore wind power stations are mounted on.

The engineers from the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation in Karlsruhe, Germany are working on the "eyes" of the underwater robots. Optical perception is based on a special exposure and analysis technology that permits orientation even in turbid water. First the system determines the distance to the object; then the camera emits a laser pulse, which is reflected by the object, such as a wall. Microseconds before the reflected flash of light arrives, the camera opens its aperture and the sensors capture the incident light pulses.

At the Ilmenau branch of the institute, Rauschenbach's team is developing the "brain" of the robot: a control program that keeps the AUV on course in currents, for example at a fixed distance from the wall being examined. The Fraunhofer Institute for Biomedical Engineering IBMT in St. Ingbert provides the silicone encapsulation for the pressure-tolerant construction of the electronic circuits, as well as the "ears" of the new robot: ultrasound sensors that permit the inspection of objects. In contrast to conventional sonar technology, the researchers use high-frequency sound waves, which are reflected by obstacles and registered by the sensor. The powerful but lightweight lithium batteries from the Fraunhofer ISIT in Itzehoe that supply the AUV with energy are likewise encapsulated in silicone.
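The shutter timing of such a range-gated camera follows directly from the round-trip travel time of the laser pulse through water. The sketch below is illustrative, not Fraunhofer's implementation; the refractive index is an approximate textbook value:

```python
C_VACUUM = 299_792_458.0     # speed of light in vacuum, m/s
N_WATER = 1.33               # approximate refractive index of water

def gate_delay_ns(distance_m):
    """Time after firing the laser at which the reflected pulse returns,
    i.e. when the camera should open its aperture (nanoseconds)."""
    v = C_VACUUM / N_WATER           # light travels slower in water
    return 2.0 * distance_m / v * 1e9
```

At a range of 5 meters the gate opens roughly 44 nanoseconds after the pulse; the microsecond-scale delays mentioned above correspond to targets on the order of a hundred meters away. Opening the aperture only in that window rejects the light scattered by turbid water closer to the camera.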

A special energy management system that researchers at the Fraunhofer Institute for Environmental, Safety and Energy Technology UMSICHT in Oberhausen, Germany have developed saves power and ensures that the data are saved in emergencies before the robot runs out of energy and has to surface.

A torpedo-shaped prototype two meters long and equipped with eyes, ears, a brain, a motor and batteries will go on its maiden voyage this year in a new tank in Ilmenau. The tank is only three meters deep, but "that's enough to test the decisive functions," affirms Dr. Rauschenbach. In autumn 2011, the autonomous diving robot will put to sea for the first time from the research vessel POSEIDON; several dives to depths of up to 6,000 meters are planned.


Source