Steve Goose, Arms Division director at Human Rights Watch, said in a statement:

“Giving machines the power to decide who lives and dies on the battlefield would take technology too far. Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”

“Losing Humanity” is the first major publication about fully autonomous weapons by a nongovernmental organization and is based on extensive research into the law, technology, and ethics of these proposed weapons. It is jointly published by Human Rights Watch and the Harvard Law School International Human Rights Clinic.

It is a bit odd that a group as prominent as Human Rights Watch would dedicate so much time and effort to such a research paper, but it truly seems as if they are afraid that the robots we train to kill our enemies today might someday become the enemies that try to kill us.

It is a rational argument to make. A robot with artificial intelligence that is trained to kill and to learn and adapt should, in theory, become self-aware at some point and jump on the evolutionary train to “no human left to oppose me” land.

There is also, of course, the distinct possibility that a Dr. Evil might circumvent the ban if it were put in place and make the world pay him one miiiiiiiilllllllllion dollars to deactivate the killer robots.

Do you think world governments need to get on the banning-Terminator-robots bandwagon, or is the issue just not that important these days?