Should We Ban ‘Killer Robots’? Human Rights Group Thinks So

As if deploying drones — unmanned aerial vehicles — on the battlefield wasn’t controversial enough, here’s an even more disturbing question: Should we allow weapon-wielding robots that can “think” for themselves to attack people?

Imagine a drone that didn’t require a human controller remotely pulling its strings from some secure remote location — a drone that could make decisions about where to go, who to surveil…or who to liquidate.

No one’s deployed a robot like that yet, but international human rights advocacy group Human Rights Watch sees it as an issue we need to deal with before the genie’s out of the bottle. The group is calling for a preemptive ban on all such devices “because of the danger they pose to civilians in armed conflict.” It has even drafted a 50-page report titled “Losing Humanity: The Case Against Killer Robots,” which lays out the case against autonomous weaponized machines.

“There’s nothing in artificial intelligence or robotics that could discriminate between a combatant and a civilian,” argues Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, in the HRW video above. “It would be impossible to tell the difference between a little girl pointing an ice cream at a robot, or someone pointing a rifle at it.”

And that’s the chief concern: that a robot given autonomy to choose who to attack, without human input, could misjudge and either injure or kill unlawful targets like civilians in combat. Autonomous, non-sentient robots would also, obviously, lack human compassion, as well as the ability to assess a situation proportionally — gauging whether risk of harm to civilians in a given situation outweighs the need to use force.

Autonomous weaponized robots also raise the thorny philosophical question of who would be accountable should such a robot injure or kill a civilian (or anyone else, unlawfully). Remember, autonomous doesn’t equal conscious, so punishing the robot’s out. Who then? The operational personnel who programmed or deployed it? The researchers who designed it? The military or government in general?

Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries – including China, Germany, Israel, South Korea, Russia, and the United Kingdom – have also been involved. Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, and some think even sooner.

What does HRW recommend we do? Establish international as well as domestic laws that prohibit the development, production or use of such weapons, then initiate reviews of existing technologies that could serve as precursors to autonomous weapons, and create a professional code among scientists to consider the many ethical and legal issues as we roll forward.

Maybe it’s time we revisited author Isaac Asimov’s three laws of robotics, codified in a 1942 short story, and — pun half-intended — foundational in getting people talking about the ethics of artificial intelligence.

Absurd logic. Machines with better 'decision making' (taking great liberties in using the phrase right now) have a variety of other applications that can save lives just as easily as wage war. The machines are a means to an end, be it reconnaissance, attack or a rescue operation. The software coding and technology should not be blamed for its application; that is an entirely human failing.

In the decades to come, I think it will be demonstrated that a robot will indeed be able to quickly and reliably distinguish an ice cream cone from a rifle. It will do so with better vision and without the handicaps of adrenaline and combat anxiety. Human ability to gauge a situation is a purely subjective matter, seen through the eyes of someone with preconceived beliefs and emotions. An AI construct, free from ideology and primal fear instincts, could be programmed to make an objective and well measured assessment. We can't do that ourselves right now. How many police shootings have been attributed to human mistakes or emotions? Friendly fire? I think at some point we will trust the machines more. They will be... predictable.

Right now, a moderately skilled individual can build a drone in their garage in a few days. Even if these machines are outlawed in the future, there will be rogue elements creating more complex and armed machines. We will have no choice but to create our own in defense. It's just a matter of time at this point.

Thinking about DOD contracts and research/development programs going out into the 2040's, yes, this is definitely the direction most defense technology is heading. Autonomy. Not very hard to hear THIS train a'comin'......