Robots are getting smarter, but they still need step-by-step instructions for tasks they haven't performed before. Before you can tell your household robot "Make me a bowl of ramen noodles," you'll have to teach it how to do that. Since we're not all computer programmers, we'd prefer to give those instructions in English, just as we'd lay out a task for a child.

But human language can be ambiguous, and some instructors forget to mention important details. Suppose you told your household robot how to prepare ramen noodles, but forgot to mention heating the water or tell it where the stove is.

In his Robot Learning Lab, Ashutosh Saxena, assistant professor of computer science at Cornell University, is teaching robots to understand instructions in natural language from various speakers, account for missing information, and adapt to the environment at hand.

Saxena and graduate students Dipendra K. Misra and Jaeyong Sung will describe their methods at the Robotics: Science and Systems conference at the University of California, Berkeley, July 12-16.

The robot may have a built-in programming language with commands like find(pan); grasp(pan); carry(pan, water tap); fill(pan, water); carry(pan, stove) and so on. Saxena's software translates human sentences, such as "Fill a pan with water, put it on the stove, heat the water. When it's boiling, add the noodles," into robot language. Notice that the speaker never said, "Turn on the stove." The robot has to be smart enough to fill in that missing step.
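The idea of expanding a spoken instruction into primitive commands, including a prerequisite the speaker forgot, can be sketched in a few lines. This is a toy illustration, not Cornell's actual software; the command names and the prerequisite table are invented for the example.

```python
# Toy planner sketch (hypothetical, not the Cornell system): expand
# high-level steps into primitive commands, inserting prerequisite
# steps the human instructor left out.

# Known prerequisites: heating something on the stove requires the
# stove to be turned on first.
PREREQUISITES = {
    ("heat", "stove"): [("turn_on", "stove")],
}

def plan(steps):
    """Return a command list with any missing prerequisite steps added."""
    commands = []
    for verb, *args in steps:
        key = (verb, args[-1] if args else None)
        for prereq in PREREQUISITES.get(key, []):
            if prereq not in commands:      # insert the forgotten step once
                commands.append(prereq)
        commands.append((verb, *args))
    return commands

steps = [
    ("find", "pan"),
    ("fill", "pan", "water"),
    ("carry", "pan", "stove"),
    ("heat", "water", "stove"),
]
# The planner inserts ("turn_on", "stove") just before the heat step.
print(plan(steps))
```

A real planner reasons over learned models rather than a hand-written table, but the shape of the problem — spotting and filling the gap between "heat the water" and what the hardware can execute — is the same.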

Saxena's robot, equipped with a 3-D camera, scans its environment and identifies the objects in it, using computer vision software previously developed in Saxena's lab. The robot has been trained to associate objects with their capabilities: A pan can be poured into or poured from; stoves can have other objects set on them, and can heat things. So the robot can identify the pan, locate the water faucet and stove and incorporate that information into its procedure. If you tell it to "heat water" it can use the stove or the microwave, depending on which is available. And it can carry out the same actions tomorrow if you've moved the pan, or even moved the robot to a different kitchen.
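Associating objects with their capabilities amounts to an affordance lookup: given what the camera sees, pick any object that can perform the needed action. The sketch below is a hypothetical illustration of that idea with invented names, not the lab's computer vision pipeline.

```python
# Hypothetical affordance table (invented for illustration): each object
# maps to the set of actions it supports.
AFFORDANCES = {
    "pan":       {"pour_into", "pour_from", "placeable"},
    "stove":     {"support", "heat"},
    "microwave": {"enclose", "heat"},
}

def find_heater(objects_in_scene):
    """Return the first visible object that can heat things, or None."""
    for obj in objects_in_scene:
        if "heat" in AFFORDANCES.get(obj, set()):
            return obj
    return None
```

This is why the robot can use the microwave in a kitchen with no stove: it searches for the affordance "heat," not for a specific appliance.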

Other researchers have attacked these problems by giving a robot a set of templates for common actions and parsing sentences one word at a time. Saxena's research group instead uses techniques computer scientists call "machine learning" to train the robot's computer brain to associate entire commands with flexibly defined actions. The computer is fed animated video simulations of the action -- created by humans in a process similar to playing a video game -- accompanied by recorded voice commands from several different speakers.

The computer stores the combination of many similar commands as a flexible pattern that can match many variations, so when it hears "Take the pot to the stove," "Carry the pot to the stove," "Put the pot on the stove," "Go to the stove and heat the pot" and so on, it calculates the probability of a match with what it has heard before, and if the probability is high enough, it declares a match. A similarly fuzzy version of the video simulation supplies a plan for the action: Wherever the sink and the stove are, the path can be matched to the recorded action of carrying the pot of water from one to the other.
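The matching step described above — score a heard sentence against stored variants and declare a match only when the probability is high enough — can be sketched with a crude word-overlap score. This is a toy stand-in (Jaccard similarity on word sets) for the learned model the article describes; the action name and example sentences are assumptions for illustration.

```python
# Toy fuzzy command matcher (not the actual Cornell model): compare the
# heard sentence to stored example phrasings of each known action and
# accept the best-scoring action only above a threshold.
STORED = {
    "carry_pot_to_stove": [
        "take the pot to the stove",
        "carry the pot to the stove",
        "put the pot on the stove",
    ],
}

def jaccard(a, b):
    """Word-overlap similarity between two sentences, in [0, 1]."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def match(heard, threshold=0.5):
    """Return the best-matching action, or None if nothing scores highly."""
    best_action, best_score = None, 0.0
    for action, examples in STORED.items():
        score = max(jaccard(heard, ex) for ex in examples)
        if score > best_score:
            best_action, best_score = action, score
    return best_action if best_score >= threshold else None
```

The real system learns a far richer probabilistic model from many speakers, but the decision rule is the same in spirit: compute a match probability, and commit to the action only when it clears a confidence bar.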

Of course, the robot still doesn't get it right every time. To test the system, the researchers gave it instructions for preparing ramen noodles and for making affogato -- an Italian dessert combining coffee and ice cream: "Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture."

The robot performed correctly up to 64 percent of the time, even when the commands were varied or the environment was changed, and it was able to fill in missing steps. That was three to four times better than previous methods, the researchers reported, though "there is still room for improvement."

You can teach a simulated robot to perform a kitchen task at the "Tell Me Dave" website, and your input there will become part of a crowdsourced library of instructions for the Cornell robots. Aditya Jami, a visiting researcher at Cornell, is helping Tell Me Dave scale the library to millions of examples. "With crowdsourcing at such a scale, robots will learn at a much faster rate," Saxena said.
