First Robotics Company Joins Campaign to Stop Killer Robots

Their name is sort of hilarious, but their mission is very serious, not to mention valuable. The Campaign to Stop Killer Robots, launched in April of last year, aims to ban the creation and use of fully autonomous robotic weapons. Today, robotic vehicle manufacturer Clearpath Robotics became the first robotics company to publicly support the Campaign.

In an open letter, co-founder Ryan Gariepy wrote: "To the people against killer robots: we support you. This technology has the potential to kill indiscriminately and to proliferate rapidly; early prototypes already exist. Despite our continued involvement with Canadian and international military research and development, Clearpath Robotics believes that the development of killer robots is unwise, unethical, and should be banned on an international scale."

Their support for the Campaign is somewhat surprising, considering that they often work with the Department of National Defence and the Navy. According to their marketing communications manager Meghan Hennessey, "Even though we're not building weapons now, that might become an opportunity for us in the future. We're choosing to value our ethics over potential future revenue."

The "killer robots" in question are not all robots that serve the military, only those that can operate without human supervision, or those that "choose and fire on targets on their own, without any human intervention." According to the Campaign's website, "Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology."

"Killer robots" is an incendiary term, but there are potential benefits to fully autonomous weapons. Many have argued that it would be safer to trust life-and-death decisions to a robot, which will not make irrational decisions out of fear or stress. Plus, replacing human troops with robots could save the lives of countless soldiers.

That being said, there is no shortage of ethical problems associated with autonomous weapons. Remote-controlled drones have already dehumanized war to a large extent; autonomous robot soldiers would decrease accountability even further. Fielding robots in place of human troops may also make the decision to go to war much easier, even though war would likely still cause the deaths of many civilians. Furthermore, unless these robots were advanced enough to have a sense of ethics, they would not be able to perform the complex, subjective moral reasoning that's necessary on a battlefield, nor would they presumably be able to flout authority in the face of inhumane orders. (And if they were capable of ethical reasoning, then we would have to deal with the ethical implications of treating them as military slaves and sending them to their deaths en masse.)

"We need to have this discussion now and take a stance," wrote Gariepy. "The robotics revolution has arrived and is not going to wait for these debates to occur."