
This five-fingered robot hand is close to human in functionality

Machine learning algorithms allow it to master new tasks autonomously

May 10, 2016

This five-fingered robot hand developed by University of Washington computer science and engineering researchers can learn how to perform dexterous manipulation — like spinning a tube full of coffee beans — on its own, rather than having humans program its actions. (credit: University of Washington)

A University of Washington team of computer scientists and engineers has built what they say is one of the most highly capable five-fingered robot hands in the world. It can perform dexterous manipulation and learn from its own experience without needing humans to direct it.

“Hand manipulation is one of the hardest problems that roboticists have to solve,” said lead author Vikash Kumar, a UW doctoral student in computer science and engineering. “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”

The UW research team has developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they apply the model to the robot hardware and to real-world tasks like rotating an elongated object.

Autonomous machine learning

With each attempt, the robot hand gets progressively more adept at spinning the tube, thanks to machine learning algorithms that help it model both the basic physics involved and plan which actions it should take to achieve the desired result. (This demonstration begins at 1:47 in the video below.)

University of Washington | ADROIT Manipulation Platform

This autonomous-learning approach developed by the UW Movement Control Laboratory contrasts with robotics demonstrations that require people to program each individual movement of the robot’s hand to complete a single task.

Building a dexterous, five-fingered robot hand poses challenges, both in design and control. The first involved building a mechanical hand with enough speed, strength, responsiveness, and flexibility to mimic basic behaviors of a human hand.

The UW’s dexterous robot hand — which the team built at a cost of roughly $300,000 — uses a Shadow Hand skeleton actuated with a custom pneumatic system. It can move faster than a human hand and has 24 degrees of freedom (types of movement). It is too expensive for routine commercial or industrial use, but it allows the researchers to push core technologies and test innovative control strategies.

The team first developed algorithms that allowed a computer to model highly complex five-fingered behaviors and plan movements to achieve different outcomes — like typing on a keyboard or dropping and catching a stick — in simulation. Then they transferred the models to work on the actual five-fingered hand hardware. As the robot hand performs different tasks, the system collects data from various sensors and motion capture cameras and employs machine learning algorithms to continually refine and develop more realistic models.
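The refinement step described above, improving a model from logged sensor data, comes down to fitting dynamics from recorded (state, control, next-state) triples. A minimal sketch of that fitting step, using ordinary least squares on a synthetic system (all names and the toy dynamics are illustrative, not the UW team's code):

```python
import numpy as np

def fit_linear_dynamics(X, U, X_next):
    """Least-squares fit of x_{t+1} ~ A x_t + B u_t + c
    from logged states X, controls U, and next states X_next."""
    N = X.shape[0]
    # Design matrix with a bias column for the offset c
    Phi = np.hstack([X, U, np.ones((N, 1))])
    W, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)
    n, m = X.shape[1], U.shape[1]
    A = W[:n].T
    B = W[n:n + m].T
    c = W[n + m]
    return A, B, c

# Synthetic check: recover known dynamics from logged rollouts
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])
B_true = np.array([[0.0], [0.5]])
c_true = np.array([0.01, -0.02])
X = rng.normal(size=(200, 2))
U = rng.normal(size=(200, 1))
X_next = X @ A_true.T + U @ B_true.T + c_true
A, B, c = fit_linear_dynamics(X, U, X_next)
print(np.allclose(A, A_true))  # True
```

In the paper's setting the models are time-varying and iteratively refitted around the current trajectory, but the least-squares core of the fit is the same.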

So far, the team has demonstrated local learning with the hardware system, which means the hand can continue to improve at a discrete task that involves manipulating the same object in roughly the same way. Next steps include beginning to demonstrate global learning, which means the hand could figure out how to manipulate an unfamiliar object or a new scenario it hasn’t encountered before.

The research was funded by the National Science Foundation and the National Institutes of Health.

Abstract of Optimal Control with Learned Local Models: Application to Dexterous Manipulation

We describe a method for learning dexterous manipulation skills with a pneumatically-actuated tendon-driven 24-DoF hand. The method combines iteratively refitted time-varying linear models with trajectory optimization, and can be seen as an instance of model-based reinforcement learning or as adaptive optimal control. Its appeal lies in the ability to handle challenging problems with surprisingly little data. We show that we can achieve sample-efficient learning of tasks that involve intermittent contact dynamics and under-actuation. Furthermore, we can control the hand directly at the level of the pneumatic valves, without the use of a prior model that describes the relationship between valve commands and joint torques. We compare results from learning in simulation and on the physical system. Even though the learned policies are local, they are able to control the system in the face of substantial variability in initial state.
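The abstract's pairing of fitted linear models with trajectory optimization can be illustrated, in simplified form, by a finite-horizon LQR solve on a fixed linear model with quadratic costs. This is a sketch of the underlying control step only, not the paper's full algorithm (which refits time-varying models iteratively); all names and the toy system are assumptions for illustration:

```python
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Backward Riccati recursion for finite-horizon LQR.
    Returns feedback gains K_t (t = 0..T-1) so that u_t = -K_t @ x_t
    minimizes the summed quadratic cost x'Qx + u'Ru."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[0] applies at t = 0

# Toy example: steer a discrete double integrator to the origin.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics
B = np.array([[0.0], [dt]])             # control pushes on velocity
Q, R = np.eye(2), np.array([[0.1]])
gains = lqr_gains(A, B, Q, R, T=80)

x = np.array([1.0, 0.0])                # start 1 unit from the goal
for K in gains:
    x = A @ x + B @ (-K @ x)
print(np.linalg.norm(x))                # close to 0
```

Substituting a model fitted from data for the hand-written `A`, `B` above gives the basic model-based loop: fit dynamics, solve for a controller, execute, and refit.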

Comments (10)

The five-fingered hand appears to have most of the dexterity of the human hand. Yet little information is given about how it senses the world, whether by sight or touch or both. Likewise, nothing is said about how the machine actually determines the nature of an object, although they do say that it learns by itself. What is the representation it uses in its memory when it learns some properties of a new object? Is it expressed numerically or algebraically? Whatever way it learns, once it learns an object, hopefully that information can be easily transferred to another similar robot hand, which will “know” the object and not have to relearn it. If this is possible, then rather quickly you can get a large memory of differing objects.

The study of objects and how we learn to manipulate them is pretty crude. Some objects really have no defined shape except in the moment: for example, a piece of cloth that is to be made into clothing, or wrinkled cloth which is to be ironed. There are many other shapeless objects, chewing gum for example. We find it almost impossible to automate procedures that involve objects that are difficult to categorize. The mental equipment of any robot is the most important part in determining its useful functioning.

:) Long way to go. The human hand is complex, alive, self-repairing, multi-signalling, and the skin ‘sees’. We detect air changes between the fingers, as well as tension, temperature, and movement – signalling with the brain for predictive & retrodictive actions.

This and other models, essentially led by the Shadow Robot Company, are centred round degrees of freedom and sensing.

Really brilliant, and still human-lab designed, but awesome for coming androids using deep-learning systems.

I am curious why they used a clear tube filled with coffee beans. I like the idea that they can have a stick that, as they move it, may shift weight and thus require a better gripping system, but then maybe they need to put heavy metal balls inside that will roll and shift the weight.

Besides ready availability, the beans may make it easier to see any “roll” rotation, and they have enough mass to add a convenient amount of inertia. (The tube seems to have been connected to a sensor, and without it, may have moved in inconvenient, irrelevant ways.)

Agree with stevewaclo; accurate spelling goes a long way with me, too.

I’d like to see the “Luke Hand” team adopt this kind of model for a new version of their hand. It looks like it could potentially do everything that hand could, and it is optimized for a more realistic cosmetic appearance if overlaid with the right “skin.” 3D printing should reduce the cost of a copycat model, especially if they work with this team and give them credit.

Connect this hand to a robot that is already bought and paid for, like one of the winners of last year’s DARPA challenge, and the price really drops once the hands are busy printing out copies of themselves, copies of the printers they used, and copies of the solar array that powers the entire process.

Couple them to trash grinders so they can use recycled materials for raw stock, then with every replication, the price goes down.

After 18 replications, you have 262,144 hands, for a little over a dollar apiece. In 19 reps, your 524,288 hands cost less than a buck each.

Shortly after his firm invented the Baxter robot, I heard an interview with Rodney Brooks where he essentially said that robotics would progress slowly. Recently, I heard an interview where he changed his mind, saying that biomimicry would allow for rapid progress. I cannot give references here; you just have to take my word for it, and I am getting old and senile.
Also, if you go to bigthink.com/experts/alecross, this interview with this “think tank” type speaks of a new mathematics called “mapping belief space” that is also going to cause a quantum leap in robotics.
I did a search to see what mapping belief space is, but couldn’t find anything. Anyone know where I can find info about this?