Why Making Robots Is So Darn Hard

Robots, all of a sudden, are all over the news. Mitsubishi has just announced that it’s built a robot specially designed to help clean up the Fukushima nuclear power plant; NASA is working on a robotic handyman suitable for use in outer space. The past two weeks have also seen the release of a new iPad guide to robots (featuring a hundred and twenty-six robots from nineteen countries), and a fresh investment in a Web site called robotappstore.com, from which you can download apps for your Roomba. Romo, one of the latest hits on Kickstarter, slated to ship in March, 2013, is a tiny tank-like robot that uses an iPhone for an electronic brain.

This is hardly the first time someone has tried to go after the consumer-robotics market. Perhaps most famously, Nolan Bushnell, the founder of the video-game pioneer Atari, tried in the early nineteen-eighties, with a company called Androbot. He invested, and ultimately lost, twenty-three million dollars of his own money. Thirty years later, computers have gotten a lot faster, but robots still seem kind of primitive. If you walk into a Brookstone or Best Buy, the only robots you are likely to find are children’s toys and glorified self-propelled vacuum cleaners, not all-purpose helpers that are versatile enough to change diapers, hold conversations, and cook dinner.

The two biggest challenges to making general-purpose robots are, as they always have been, hardware and software. Neither challenge is insuperable, but both are harder than one might think. On the hardware side, there are now lots of robots that can do incredibly cool things. One robot runs faster than the fastest human; another dances Gangnam Style. Still another, PR2, folds towels and fetches beer. The catch is that, at the moment, each new robot is like a proof of concept. The ones that are fast and physically powerful, like AlphaDog, a quadruped robot, and the headless but amazing PETMAN, are, for now, still dependent on hydraulic actuators powered by industrial-strength pumps and gasoline engines; they work fine in a laboratory-test environment, but you wouldn’t want one roaming around your home. Others, like Baxter and PR2, are capable of fairly sophisticated movements, but at speeds that are still too slow to be practical around the home. It might take five minutes just for PR2 to grab you a beer.

Computer processors keep getting faster and faster—roughly doubling in power every eighteen months, the rate predicted by the so-called Moore’s Law—and memory gets cheaper and cheaper. But the motors and actuators that move robots aren’t improving nearly as fast. (Battery technology, too, is key; it is moving quickly, but not quite keeping pace with Moore.) In the words of Erico Guizzo, the robotics editor at IEEE Spectrum, “Lots of people have been working on humanoid robots for decades, but the electric motors needed to drive a robot’s legs and arms are too big, heavy, and slow. Today’s most advanced humanoid robots are still big hulking pieces of metal that are unsafe to operate around people.”

As Rodney Brooks, founder of Rethink Robotics, explained to me, the key difference is between information and physics. “If you want to use a blue pile of sand to represent a zero, and a red pile of sand to represent a one, you can split each pile of sand in half, and still have the same information. You can keep doing that until you have a single grain of sand [in each pile]. And that’s basically what’s happened with computers, and why we can keep making them smaller. But the laws of mass and motion aren’t the same as the laws of information. If you move an arm with half the force, you only get half the result, which means, for example, that you can’t miniaturize a robot arm and expect it to lift the same heavy objects.”

Meanwhile, whether a robot looks like a human or a hockey puck, it is only as clever as the software within. And artificial intelligence is still very much a work in progress, with no machine approaching the full flexibility of the human mind. There is no shortage of strategies—ranging from simulations of biological brains to deep learning to older techniques drawn from classical artificial intelligence—but there is still no machine remotely flexible enough to deal with the real world. The best robot-vision systems, for example, work far better with isolated objects than with complex scenes involving many objects; a robot can easily learn to tell the difference between a person and a basketball, but it’s far harder for it to learn why the people are passing the ball a certain way. Visual recognition of complex, flexible objects, like strands of cooked spaghetti or human hands opening and closing, presents tremendous challenges, too. Even further away is a robust way of endowing computers with common sense.

In virtually every robot that’s ever been built, the key challenge is generalization—moving things from the laboratory to the real world. It’s one thing to get a robot to fold a colorful towel in an empty room; it’s another to get it to succeed in a busy apartment filled with visual distractions that the machine can’t quite parse. Likewise, the demo of a robot running at cheetah speed is amazing, but it’s conducted on the flat, level ground of a treadmill, not in the uneven territory of the real world. “Film and fiction have raised everyone’s expectations about what robots may be able to do,” Tandy Trower, of Hoaloha Robotics and formerly of Microsoft Robotics, said. “I don’t believe we are anywhere near affordable, safe manipulation on a mobile robot that can generalize such features into consumer operations for at least ten to twenty years.” Brooks, who also founded iRobot, offered remarkably similar predictions.