Just imagine all the things we don’t want robots to figure out how to use!

Robot hands are amazing, but compared to human hands, they’re junk. A robot has real trouble catching things, or letting go of an object in order to shift its grip. You and I can reposition a pencil in our fingers with no effort at all. A robot will probably drop it.


And the problem isn’t the hand itself. It’s the brain. A robot needs to be told what to do, and how to do it, with every action broken down into steps. The Adroit robot from the University of Washington, though, can work things out for itself, learning through trial and error how to complete surprisingly complex actions.

“Usually people look at a motion and try to determine what exactly needs to happen,” co-author Emo Todorov told the Washington Edu News. “The pinky needs to move that way, so we’ll put some rules in and try it and if something doesn’t work, oh the middle finger moved too much and the pen tilted, so we’ll try another rule.”


The Adroit platform takes UW’s own $300,000 custom robo-hand and connects it to learning software. The trick is to teach a computer simulation of the hand to perform basic tasks first, then proceed to more complex maneuvers like spinning a cylinder on its palm. Then the real hand goes to work on the same task, only in real life.

“It’s like sitting through a lesson, going home and doing your homework to understand things better, and then coming back to school a little more intelligent the next day,” lead researcher Vikash Kumar told the site.

The hand and its computer brain are hooked up to an array of sensors and cameras that feed their results into learning algorithms. With this setup, the hand can learn to do complex, real-world tasks in as few as fifteen iterations, with five exploratory movements in each one.
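The iterate-and-explore schedule can be sketched in miniature. This is a toy hill-climbing loop, not the UW team's actual algorithm (their system uses far more sophisticated optimization over a real physics model); the single control parameter and the reward function here are hypothetical stand-ins for illustration.

```python
import random

def learn_by_trial_and_error(reward, n_iterations=15, n_rollouts=5,
                             step=0.5, seed=0):
    """Toy trial-and-error learner: hill-climb one control parameter,
    keeping whichever exploratory perturbation scores best."""
    rng = random.Random(seed)
    theta = 0.0                      # initial controller parameter
    best = reward(theta)
    for _ in range(n_iterations):
        for _ in range(n_rollouts):  # exploratory movements per iteration
            candidate = theta + rng.uniform(-step, step)
            r = reward(candidate)
            if r > best:             # keep only improvements
                theta, best = candidate, r
    return theta, best

# Hypothetical task: reward peaks when the parameter reaches 3.0.
theta, best = learn_by_trial_and_error(lambda t: -(t - 3.0) ** 2)
```

With fifteen iterations of five exploratory movements each, even this crude random search homes in on the target; the real system does the same kind of refinement, but over thousands of actuator parameters at once.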


Interestingly, the system is quite resistant to “noise,” or unexpected interference. Adding noise at the beginning of the learning process spoiled things, but if it was introduced after the action was partially learned, noise actually “resulted in much more robust controllers,” says the paper. That is, making things difficult for the robot made it learn better.
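The intuition behind that finding can be shown with a toy selection example. This is an illustration of why noisy evaluation favors robust behavior, not the paper's actual method; both "controllers" and their reward curves are hypothetical.

```python
import random

# Hypothetical controllers, scored by task reward as a function of an
# unexpected perturbation p.
brittle = lambda p: 1.0 if abs(p) < 0.01 else 0.0  # perfect in ideal conditions, fails otherwise
robust = lambda p: 0.9 - 0.1 * abs(p)              # slightly worse, degrades gracefully

def pick_controller(controllers, noise, n_rollouts=5, seed=0):
    """Return the name of the controller with the best average reward
    across rollouts, each hit by uniform noise of the given magnitude."""
    rng = random.Random(seed)
    def avg_reward(name):
        c = controllers[name]
        return sum(c(rng.uniform(-noise, noise))
                   for _ in range(n_rollouts)) / n_rollouts
    return max(controllers, key=avg_reward)

controllers = {"brittle": brittle, "robust": robust}
quiet_winner = pick_controller(controllers, noise=0.0)  # ideal conditions
noisy_winner = pick_controller(controllers, noise=0.5)  # perturbed rollouts
```

Under ideal conditions the brittle controller looks best, but once rollouts are perturbed, only the controller that tolerates interference keeps scoring well, so training under noise steers learning toward robustness.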

The robot can learn how to use its hand from scratch each time, right down to controlling the pneumatic actuators, and quickly comes to master the required task. But it isn’t good at adapting to new situations, and it can’t start the learning process without initial guidance. The team is working on that, though, with additional plans to add haptics (touch feedback) and vision to the ‘bot. This could lead to the robot being able to figure out a new situation for itself, for instance working out how to manipulate a completely unknown object, probably one we wish it wasn’t learning to manipulate.


About the author

Previously found writing at Wired.com, Cult of Mac and Straight No filter.