In Search of a Robot More Like Us

MENLO PARK, Calif. — The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers.

The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than I.B.M.'s celebrated victory on "Jeopardy!" this year with its question-answering computer system, Watson.

Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world.

Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States (albeit with fewer workers).

“All these problems where you want to duplicate something biology does, such as perception, touch, planning or grasping, turn out to be hard in fundamental ways,” said Gary Bradski, a vision specialist at Willow Garage, a robot development company based here in Silicon Valley.

AT YOUR SERVICE A robot programmed at the University of California, Berkeley, folds laundry — very slowly.

Last month President Obama traveled to Carnegie Mellon University in Pittsburgh to unveil a $500 million effort to create advanced robotic technologies needed to help bring manufacturing back to the United States. But lower-cost computer-controlled mechanical arms and hands are only the first step.

There is still significant debate about how even to begin to design a machine that might be flexible enough to do many of the things humans do: fold laundry, cook or wash dishes. That will require a breakthrough in software that mimics perception.

Today’s robots can often do one such task in limited circumstances, but researchers describe their skills as “brittle.” They fail if the tiniest change is introduced. Moreover, they must be reprogrammed in a cumbersome fashion to do something else.

Many robotics researchers are pursuing a bottom-up approach, hoping that by training robots on one task at a time, they can build a library of tasks that will ultimately make it possible for robots to begin to mimic humans.

The limits of today’s most sophisticated robots can be seen in a towel-folding demonstration that a group of students at the University of California, Berkeley, posted on the Internet last year: In spooky, anthropomorphic fashion, a robot deftly folds a series of towels, eyeing the corners, smoothing out wrinkles and neatly stacking them in a pile.

It is only when the viewer learns that the video is shown at 50 times normal speed that the meager extent of the robot’s capabilities becomes apparent. (The students acknowledged this spring that they were only now beginning to tackle the further challenges of folding shirts and socks.)

DON'T LOOK NOW Robert Bolles, left, and Pablo Garcia of SRI International are working on robotic arms and hands like the one picking Mr. Garcia's pocket. Credit: Ramin Rahimian for The New York Times

Even the most ambitious and expensive robot arm research has not yet yielded impressive results.

In February, for example, Robonaut 2, a dexterous robot developed in a partnership between NASA and General Motors, was carried aboard a space shuttle mission to be installed on the International Space Station. The developers acknowledged that the software required by the system, which is humanoid-shaped from the torso up, was unfinished and that the robot was sent up then only because a rare launching window was available.

“We’re in a funny chicken-and-egg situation,” Dr. Brooks said. “No one really knows what sensors or perceptual algorithms to use because we don’t have a working hand, and because we don’t have a grasping strategy nobody can figure out what kind of hand to design.”


Dr. Brooks is also tackling the problem: In 2008 he founded Heartland Robotics, a Boston-based company that is intent on building a generation of low-cost robots.

And the three competing efforts to develop robotic arms and hands with Darpa financing — at SRI International, Sandia National Laboratories and iRobot — offer some reasons for optimism.

Recently at an SRI laboratory here, two Stanford University graduate students, John Ulmen and Dan Aukes, put the finishing touches on a significant step toward human capabilities: a four-finger hand that will grasp with a human’s precise sense of touch.

Each three-jointed finger is made in a single manufacturing step by a three-dimensional printer and is then covered with “skin” derived from the same material used to make the touch-sensitive displays on smartphones.

“Part of what we’re riding on is there has been a very strong push for tactile displays because of smartphones,” said Pablo Garcia, an SRI robot designer who is leading the project along with Robert Bolles, an artificial intelligence researcher.

An SRI hand is covered with "skin" like that on smartphone displays. Credit: Ramin Rahimian for The New York Times

“We’ve taken advantage of these technologies,” Mr. Garcia went on, “and we’re banking on the fact they will continue to evolve and be made even cheaper.”

“The world is composed of continuous objects that have various shapes” that can obscure one another, he said. “A perception system needs to figure this out, and it needs the common sense of a child to do that.”

At Willow Garage, Dr. Bradski and a group of artificial intelligence researchers and roboticists have focused on “hackathons,” in which the company’s PR2 robot has been programmed to do tasks like fetching beer from a refrigerator, playing pool and packing groceries.

In May, with support from the White House Office of Science and Technology Policy, Dr. Bradski helped organize the first Solutions in Perception Challenge. A prize of $10,000 was offered to the first team to design a robot able to recognize 100 items commonly found on the shelves of supermarkets and drugstores; part of the prize would go to the first team whose robot could recognize 80 percent of the items.
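The contest's scoring rule is simple to state: count how many of the items a robot labels correctly and check the result against the 80 percent bar. A minimal sketch of that arithmetic, with hypothetical item names and function names (nothing here is taken from the actual contest software):

```python
# Hypothetical sketch of scoring a perception-challenge entry:
# compare predicted labels against ground truth and test the
# 80 percent threshold. Names and data are illustrative only.

def score_entry(predictions, ground_truth, threshold=0.8):
    """Return (accuracy, passed) for a list of predicted labels."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    return accuracy, accuracy >= threshold

# Toy run: 4 of 5 supermarket items recognized -> exactly 80 percent.
truth = ["soup", "soap", "cereal", "juice", "rice"]
preds = ["soup", "soap", "cereal", "juice", "beans"]
acc, passed = score_entry(preds, truth)
print(acc, passed)  # 0.8 True
```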

At the contest, held during a robotics conference in Shanghai, none of the contestants reached the 80 percent goal. The team that did best was the laundry-folding team from Berkeley, which has named its robot Brett, for Berkeley Robot for the Elimination of Tedious Tasks.

“Our end goal right now is to do an entire laundry cycle,” said Pieter Abbeel, a Berkeley computer scientist who leads the group, “from dirty laundry in a basket to everything stacked away after it’s been washed and dried.”

A version of this article appears in print on July 12, 2011, on Page D1 of the New York edition with the headline: Race to Build A Robot More Like Us.