United Nations To Decide Whether to Ban the Development of Killer Robots

It appears the future is upon us. We have now reached the crucial stage where we are to decide whether the human race will permit the development of autonomous weapons - affectionately known as 'Killer Robots'. The Terminator warned us this day would come.

This week, legal experts have warned the United Nations that killer robots should be banned before they can even be developed. But why? Today at Unlock the Law we look at the legal reasoning for refusing to permit killer robots to reach the development stage.

So, what could killer robots do?

Fully autonomous weapons would be able to choose and engage targets without any meaningful human intervention. Such weapons would differ from their precursors, such as armed drones, in that they would not be controlled by human beings, but would make their own decisions as to when and what to kill. This autonomy has raised concerns about the robots' ability to meet international humanitarian law standards, especially the rules of distinction, proportionality, and military necessity.

What are the legal problems with killer robots?

Ahead of the United Nations Convention being held in Geneva this week, Human Rights Watch issued a report, in partnership with Harvard Law School's International Human Rights Clinic, entitled "Mind the Gap: The Lack of Accountability for Killer Robots". The report outlines the most important legal concerns relating to the development of killer robots.

Accountability

Programmers, manufacturers, and military personnel could potentially escape liability for unlawful deaths and injuries caused by killer robots. The report describes the difficulties of assigning personal liability for the actions of killer robots in both criminal and civil law. Bonnie Docherty, senior Arms Division researcher at Human Rights Watch and the report's lead author, said:

"No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party.The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons."

Violation of International Humanitarian Law

Another important concern regarding fully autonomous weapons is that they may have the potential to cause large numbers of civilian casualties - in violation of international humanitarian and human rights law. Without meaningful human control, it would be incredibly difficult to hold anyone liable for such atrocious war crimes.

Whilst military commanders and operators could be found guilty if they intentionally deployed a fully autonomous weapon to commit a crime, it would be easy to avoid liability in the more likely situations where they were unable to foresee a killer robot's unlawful attack, or were able to foresee it but unable to stop it.

"A fully autonomous weapon could commit acts that would rise to the level of war crimes if a person carried them out, but victims would see no one punished for these crimes."

Civil Law

Human Rights Watch also warns that it could be equally difficult to hold individuals liable under the rules of civil law - particularly in the United States, where the law grants immunity to the military and its contractors, and where faulty product claims face many evidentiary hurdles. Many other countries have similarly difficult legal structures.

Furthermore, even if a civil claim were successful, the victim would only be awarded compensation. The perpetrator, or the human individual responsible, would not be truly held to account for their negligent actions. In terms of the development of the criminal law, deterrence, and retributive justice, compensating victims is of little use without criminal accountability and the attribution of legal fault.

The Convention held this week, which is attended by 120 countries, could decide the fate of killer robots and the future of warfare and the human race. And these are only the legal concerns - discussion of the ethical and political concerns could prove to be far more harrowing.