For the second time in a year, technology leaders signed a pledge not to participate in the manufacture, trade, or use of lethal autonomous weapons. The latest pledge was signed by 150 companies and more than 2,400 people from 90 countries during the International Joint Conference on Artificial Intelligence in Stockholm, Sweden.

Signatories include CEOs, engineers, and scientists from across the technology industry, among them Google DeepMind, the XPRIZE Foundation, and Elon Musk. The pledge was organized by the Future of Life Institute.

Almost a year ago, AI and robotics experts signed an open letter calling on the United Nations to halt the use of autonomous weapons, which they said threaten a "third revolution in warfare."

Killer robots are weapons that can identify, target, and kill autonomously. That is, no person makes the final decision to authorize lethal force: the decision about whether or not someone will die is left to the system.

Toby Walsh, professor of artificial intelligence at the University of New South Wales, highlighted the ethical issues surrounding lethal autonomous weapon systems: "We cannot hand over the decision as to who lives and who dies to machines."

Walsh was also part of a group of Australian robotics and artificial intelligence researchers who, in November 2017, called on Prime Minister Malcolm Turnbull to take a stand against AI weaponry. The researchers signed a letter urging the Turnbull government to become the twentieth country to ban lethal autonomous weapons at a forthcoming United Nations conference on the Convention on Certain Conventional Weapons.

The full text of the signed pledge follows:

Artificial intelligence is poised to play an increasing role in military systems. There is an urgent opportunity and need for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position: we should not allow machines to make life-taking decisions for which others, or no one, will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical, and biological weapons: the unilateral actions of a single group could too easily trigger an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We call upon governments and government leaders to create a future with strong international norms, regulations, and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.