Robots build our cars, milk our cows, perform unassisted heart surgery, and, at least in Japan, take care of both the young and the old. Advances in robot technology in the home and workplace are impressive, but the best droids around (on our planet, at least) are out on the battlefield. For years, robotic soldiers have taken on the military’s most undesirable tasks: placing and destroying explosives, performing reconnaissance, and detecting and cleaning up nuclear and biological agents—basically everything that gets left out when kids “play war.”

And now, the robots are armed. Gun-slinging robots are patrolling the borders of South Korea and Israel—where robots are trained in the “See-Shoot” school of military discretion. And the United States, the proud leader of military robotics, is flying unmanned aerial vehicles armed with Hellfire missiles. The UAVs are piloted, and their missiles fired, remotely from Nevada, so humans are still in the loop—but, well, things still don’t always go as planned.

As technology advances, the robots are becoming more autonomous—and humans are getting out of the picture. The robots will eventually decide where, when, and whom to kill; the South Koreans, for example, are already working on a robot sentry that can detect human movement and shoot intruders on sight. In December, the US released an “Unmanned Systems Roadmap” for 2007–2030, and autonomous robots that make their own life-and-death decisions are high on the agenda.

Professor Noel Sharkey, a computer scientist best known for his role on the BBC2 television series Robot Wars and the show’s less destructive spin-off, TechnoGames, is worried that we’re on the verge of an “international robot arms race”—a concern he voiced today in a keynote address to the Royal United Services Institute (RUSI). This goes beyond Asimov’s Three Laws—the concern isn’t that robots will take over the world, it’s that robots will make it too easy for people to take over the world.

A critical (and obvious) issue is whether the robots can effectively identify their targets—distinguishing not just machines from humans but combatants from civilians, animals, and allies. Autonomous robots aren’t bound by the Geneva Conventions (yet), but Sharkey is calling for international legislation and a code of ethics for autonomous robots at war. And even if the technology were perfect, we’d still have a big problem. Robot manufacturing is getting cheaper and easier, and it might not be long before “the bad guys” get their hands on the technology (think of what would happen if robots replaced suicide bombers). And I know I don’t want to be around when someone hacks the Reaper.

Photo: An unmanned robot catches a bomb. Credit: U.S. Department of Defense