A group of the world’s leading AI researchers and humanitarian organizations is warning about lethal autonomous weapons systems, or “killer robots,” which select and kill targets without human control.

The group alleges that killer robots now exist and that the bulk of these technological developments are military-funded in the UK, China, Israel, Russia, and the United States. Although fully autonomous weapons systems have not yet been deployed on the battlefield, the group is pushing for a preemptive ban before the technology falls into the wrong hands. The group calls on the citizens of the world to contact their representatives, and on countries to work together to form international treaties, before it’s too late.

The video portrays a not-too-distant future in which a military firm unveils a drone fitted with shaped explosives that can target and kill humans on its own. Partway through, the video abruptly changes pace when bad actors get hold of the technology and unleash swarms of killer robots onto the streets of Washington, D.C. and various academic institutions.

The video is aggressive and graphic, but it makes the point that if the technology were misused it could have severe consequences, such as civilian mass-casualty events.

Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, showed the video above to the United Nations Convention on Conventional Weapons on Monday. He said, “The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance.”

Russell wants a preemptive ban on the technology before it’s too late. He claims the window to halt such technologies is closing, and he warns that autonomous weapons such as drones, tanks, and automated machine guns are imminent.

The military has been one of the largest funders and adopters of artificial intelligence technology.

These computing techniques help robots fly, navigate terrain, and patrol territories under the seas. Hooked up to a camera feed, image-recognition algorithms can scan video footage for targets better than a human can.

An automated sentry that guards South Korea’s border with the North draws on this technology to spot and track targets up to 4 km away.

“Given the rapid pace of development of military robotics and the pressing dangers that these pose to peace and international security and to civilians in conflict, we call upon the international community for a legally binding treaty to prohibit the development, testing, production and use of autonomous weapon systems in all circumstances.”

Human Rights Watch is another organization calling for preventive measures to stop the machines.

The development of fully autonomous weapons (“killer robots”) that could select and engage targets without human intervention needs to be stopped to prevent a future of warfare and policing outside of human control and responsibility.

Human Rights Watch investigates these and other problematic weapons systems and works to develop and monitor international standards to protect civilians from armed violence.

Conclusion: Targeted killing practices have certainly evolved toward something along the lines of the Terminator, a science-fiction film in which Skynet, an artificial intelligence system, gained self-awareness and decided to wage war on humans. Will Stuart Russell and his fellow AI researchers be able to stop the trend in autonomous weapons systems before it’s too late?