May 29, 2013

Robot Can Use Algorithms To Predict Human Tasks

A robot developed at Cornell University can predict human actions, allowing it to lend a helping hand with tasks like opening a refrigerator door or pouring a glass of beer.

According to researchers from the Ithaca, New York university's Personal Robotics Lab, the machine uses algorithms to anticipate what a person will do in the immediate future. Once it considers all of the possible variables and makes its prediction, it responds accordingly, providing assistance with any number of tasks.

The robot uses a Microsoft Kinect 3-D camera and a database of 3-D videos to identify the task it is observing. It considers the possible uses associated with each object in the scene it is viewing, then determines how each of them fits with the activities being performed by individuals.
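The object-to-uses reasoning described above can be sketched as a simple affordance lookup. This is a minimal illustration, not the actual Cornell model: the object names, affordance labels, and the `uses_relevant_to` helper are all hypothetical.

```python
# Hypothetical sketch of affordance matching: the robot considers what
# uses are associated with each object in the scene, then checks which
# of those uses fit the activity it is observing.
# All object -> affordance mappings below are illustrative only.
AFFORDANCES = {
    "cup":    {"drinkable", "pourable", "movable"},
    "fridge": {"openable", "closable"},
    "plate":  {"placeable", "movable"},
}

def uses_relevant_to(activity_verbs, scene_objects):
    """Return, for each object in the scene, the subset of its
    affordances that overlaps with the observed activity."""
    return {
        obj: AFFORDANCES[obj] & activity_verbs
        for obj in scene_objects
        if AFFORDANCES.get(obj, set()) & activity_verbs
    }

# A scene containing a cup and a fridge, observed during a "drinking"
# sequence that involves opening something and drinking:
matches = uses_relevant_to({"drinkable", "openable"}, ["cup", "fridge"])
print(matches)
```

Here the cup matches via "drinkable" and the fridge via "openable", so both objects would be treated as relevant to the unfolding activity.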

Next, it creates a list of possible ways the action could play out, including eating, drinking, cleaning or putting away. It determines which activity is most probable to occur next, and as the person's movement continues, it continuously updates, alters and refines its predictions.
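The update loop described above can be sketched as a simple Bayesian re-weighting over candidate activities. This is a toy illustration under assumed numbers, not the researchers' actual model: the candidate list comes from the article, but the likelihood values and the `update` function are hypothetical.

```python
# Hypothetical sketch of the prediction loop: keep a probability for
# each candidate activity, and as new motion is observed, multiply
# each candidate's belief by how well it explains the observation,
# then renormalize. The likelihood numbers below are made up.

CANDIDATES = ["eating", "drinking", "cleaning", "putting away"]

def update(beliefs, likelihoods):
    """One refinement step: prior belief times observation likelihood,
    renormalized so the beliefs sum to 1."""
    posterior = {a: beliefs[a] * likelihoods[a] for a in beliefs}
    total = sum(posterior.values())
    return {a: p / total for a, p in posterior.items()}

# Start with a uniform belief over the candidate activities.
beliefs = {a: 1.0 / len(CANDIDATES) for a in CANDIDATES}

# Illustrative observation: the person's hand moves toward a cup,
# which explains "drinking" far better than the alternatives.
beliefs = update(beliefs, {"eating": 0.2, "drinking": 0.7,
                           "cleaning": 0.05, "putting away": 0.05})
print(max(beliefs, key=beliefs.get))
```

Each new observation triggers another `update` call, which is one way to read the article's point that the robot "continuously updates, alters and refines its predictions" as the movement unfolds.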

“We extract the general principles of how people behave,” Ashutosh Saxena, an assistant professor of computer science at Cornell and co-author of a study linked to the research, said in a statement. “Drinking coffee is a big activity, but there are several parts to it,” and as the robot performs its functions, it builds a “vocabulary” of minor tasks which can be combined in different ways to perform larger activities, he added.

The robot correctly made predictions 82 percent of the time when looking ahead one second, 71 percent when looking ahead three seconds, and 57 percent when looking ahead 10 seconds, Saxena's team said.

Their work was supported by the US Army Research Office, the Alfred P. Sloan Foundation and Microsoft.

“Even though humans are predictable, they are only predictable part of the time,” Saxena said. “The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond.”