DARPA’s Robot Challenge

What can a robot do if it needs to get inside an A.T.V.? How about a Fosbury flop?

The Pentagon’s Defense Advanced Research Projects Agency, or DARPA, is currently holding a kind of robot triathlon to encourage researchers to build humanoid robots that can work in disaster zones. The first round had three events: the entries had to walk on uneven ground, get into a utility vehicle, and attach a fire hose to a spigot and turn on a valve. The point of these challenges was to test autonomy. It’s one thing to build a remote-controlled robot that is guided, puppetlike, by a human holding a joystick; it’s another to instruct a robot to get into a car, and have the robot complete the directive itself.

Twenty-six teams qualified for the first round of this year’s competition; today, the top seven groups move on. In the first round, everything was done via computer simulation. The Open Source Robotics Foundation, based in Mountain View, California, created a software-based physics simulator that computes and displays what happens as mock robots move through a computer-based world. Physics simulators aren’t new—they’re used in most video games—but this one is particularly realistic, and is designed to capture the actions and sensory inputs of robots.

DARPA’s hope is that the actions taken in O.S.R.F.’s simulator will match actions taken in the real world: if the model robot can pick up a grape in the simulator, the real robot will be able to pick up a grape in your kitchen. For years, robotics labs have tended to build their own software, and tests were done with expensive, and sometimes fragile, robots. Teams didn’t share advances with each other. For about a decade, though, a team of programmers, including Nate Koenig, now at O.S.R.F., has dreamt of fixing that by making a single, free, common platform that everyone could share. Now, DARPA is doing one of the things that governments do particularly well: imposing a software standard on an entire field, to the benefit of all. Even if the robots fail this year’s challenge, the field will win.

One fascinating thing about the competition is watching humanoid robots act like alien creatures; the flop into the car is a case in point. In the simulations, the robots look a lot like steel versions of human beings, but, judging from the clips I saw, they sometimes solve problems in ways very different from ours. The competitors are likely to be constrained more by physics than by experience, settling for any solution that works rather than finding something that seems elegant. Most humans, for example, enter cars—even convertibles with their tops down—through the door, by pulling on the handle and climbing in. The robots tried a different technique: the flop. For now, anyway, robots have vastly less common sense than people do (a robot may not even recognize a door handle), and they also have less cultural experience. Unless a robot had analyzed a huge library of human videos, it might not have a clue about how a human would handle a given situation. Left to their own devices, robots may wind up with solutions very different from our own, sometimes better and sometimes worse.

My own intuition is that DARPA is asking for too much in the contest. Getting a robot into a car is one thing; getting it to figure out how to turn a key or control a manual transmission is another, especially when each car is a little different from the next. DARPA simplifies this matter by standardizing locks and keys, and always using a single kind of car, but the real world requires more flexibility than that. A robot controlling a forklift needs to be able to make its own choices and respond to rapid changes—a big challenge, as I noted in earlier pieces on robotics and artificial intelligence. Moore’s law applies to computers, but not to the complexities of artificial intelligence or the challenges in building an arm that can manipulate real things. Most artificial intelligence to date has been purpose-built for specific tasks like recognizing syllables and filling out tax returns; working with a vast array of human artifacts (like screwdrivers, fire hoses, and forklifts) requires a real-world understanding that has so far been elusive.

A decade ago, one might have expressed the same skepticism about another, early series of DARPA challenges aimed at encouraging research labs to build self-driving cars. When DARPA announced its initial self-driving-car “Grand Challenge,” in July, 2002, lots of people scoffed. The original prize, for driving unaided through the desert, went unclaimed. But in the next round, in 2005, several teams completed the course, driving a hundred and thirty-one miles in less than ten hours. By 2007, six different teams were able to complete a harder, urban version of the original challenge. Google picked up where Stanford and DARPA left off. Driverless cars are now street-legal in three states; the remaining obstacles are largely political and legal rather than technological. So, maybe, it won’t actually be that long until a robot flops into your car and drives away.

Gary Marcus is a professor of cognitive science at N.Y.U. and the author of “Guitar Zero.”