Thousands of artificial intelligence experts are calling on governments to take preemptive action before it’s too late.

The list is extensive and includes some of the most influential names in the overlapping worlds of technology, science and academia.

Among them are billionaire inventor and OpenAI co-founder Elon Musk, Skype co-founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, as well as the three co-founders of Google DeepMind — the company’s premier machine learning research group.

In total, more than 160 organizations and 2,460 individuals from 90 countries promised this week not to participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons “to create a future with strong international norms.”

“Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” the pledge says.

“Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage,” the pledge adds.

Lethal autonomous weapons systems can identify, target and kill without human input, according to the Future of Life Institute, a Boston-based charity that organized the pledge and seeks to reduce risks posed by AI. The organization notes that this definition does not include drones, which rely on human pilots and decision-makers to operate.

According to Human Rights Watch, autonomous weapons systems are being developed in many nations around the world — “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.” FLI claims autonomous weapons systems will be at risk for hacking and likely to end up on the black market. The organization argues the systems should be subject to the same sort of international bans as biological and chemical weapons.

FLI has even coined a name for these weapons systems — “slaughterbots.”

The lack of human control also raises troubling ethical questions, according to Toby Walsh, a Scientia professor of artificial intelligence at the University of New South Wales in Sydney, who helped to organize the pledge.

“We cannot hand over the decision as to who lives and who dies to machines,” Walsh said, according to a statement from FLI. “They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Musk — arguably the pledge’s most recognizable name — has become an outspoken critic of autonomous weapons and the rise of autonomous machines. The Tesla chief executive has said that artificial intelligence is more of a risk to the world than North Korea.

Last year, he joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban autonomous weapons.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” Musk and 115 other experts, including Alphabet’s artificial intelligence expert, Mustafa Suleyman, warned in an open letter in August.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.”

According to the letter, “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Fighting killer robots with public declarations might seem ineffective, but Yoshua Bengio — an AI expert at the Montreal Institute for Learning Algorithms — told the Guardian that the pledge could rally public opinion against autonomous weapons.

“This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning land mines,” he said. “American companies have stopped building land mines.”

Peter Holley is a technology reporter at The Washington Post. Before joining The Post in 2014, he was a features writer at the Houston Chronicle and a crime reporter at the San Antonio Express-News.