The luminaries argue that for the last 20 years the field has been steadily, and dangerously, advancing toward autonomous AI with decision-making powers.

“Artificial Intelligence (AI) technology has reached a point where the deployment of [autonomous] systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” they write in a letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires.

The danger lies in a combination of abilities the machines have gained over the years, including systems integration, machine learning, raw processing power and competence at a growing number of component tasks, as outlined in the group’s previous letter published by the Future of Life Institute.

The researchers acknowledge that sending robots to war instead of soldiers could have the benefit of reducing casualties. The flipside, they write, is that it “[lowers] the threshold for going to battle.”

The authors have no doubt that AI weaponry development by any nation will result in a “virtually inevitable global arms race” with autonomous weapons becoming “the Kalashnikovs of tomorrow.”

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” they write.

Nothing, the researchers argue, would then stop such weapons from reaching the black market. “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” they continue.

The message is not that AI is bad in and of itself, but that there is a danger of using it in ways diametrically opposed to its proper purpose – doing good, healing and alleviating human suffering.

The group speaks out against the select few who would “tarnish” a field that, like biology or chemistry, has no general interest in building weapons. Any such misuse could turn the public against AI for good, denying the field the chance to deliver real benefits in the future, the visionaries write.

None of this comes as a surprise to anyone who follows their careers or keeps up to date with AI news. Wozniak, who co-founded Apple with Steve Jobs, has been very vocal on the issue. In June, he posited that a smart AI would wish to control nature itself – and therefore, humans. As “pets.” He even joked about feeding his dog fillet steak, because “do unto others…”

“They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. So I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally,” he told an Austin audience at the Freescale Technology Forum 2015.

Musk is another skeptic. A recent authorized biography recounts his worry that his friend, Google CEO Larry Page, could “produce something evil by accident.” (Google has been making huge strides in robotics and AI, including its purchase of Boston Dynamics, the company that unnerved everybody with its four-legged, animal-like robots.)

“The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most,” Musk wrote a few months ago in a leaked private comment to an internet publication about the dangers of AI. “Please note that I am normally super pro-technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.”

Like Wozniak, Musk has been donating millions to the cause of steering AI in a direction beneficial to humanity.

World-famous physicist Stephen Hawking, who relies on a form of artificial intelligence to communicate, told the BBC that if technology could match human capabilities, “it would take off on its own, and re-design itself at an ever increasing rate.”

He also said that biological limitations would leave humans unable to keep pace with such rapidly developing technology.

“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” the physics genius said. “The development of full artificial intelligence could spell the end of the human race.”