Whether you realize it or not, artificial intelligence (AI) is already everywhere, from the phone you text on to the car you drive. Elon Musk, the man who made his billions from PayPal and who has gambled a chunk of his fortune on the race for space, has warned frequently that AI represents humanity’s greatest existential threat.

To help keep the future safe for humanity, Musk and a group of high-tech heavyweights have teamed up to launch a new research non-profit called OpenAI. He is joining forces with other tech entrepreneurs to establish a $1 billion investment fund for researchers to pursue applications with a positive social impact and to try to stay one step ahead of the technology. “Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach,” they said in a statement. “When it does, it’ll be important to have a leading research institution which can prioritise a good outcome for all over its own self-interest.” The statement reflects the debate within the science and technology worlds about the threats and benefits of rapid advances in computer intelligence, and about whether legislative safeguards, or even a total moratorium on research, are needed. The idea of super-intelligent computers that become so indispensable to human life that they eventually make us redundant and take over has moved from the pages of science fiction to scientific journals. “If I were to guess what our biggest existential threat is, it’s probably that,” Musk has said of AI.

Yesterday, Tesla’s boss, along with a band of prominent tech executives including LinkedIn co-founder Reid Hoffman and PayPal co-founder Peter Thiel, announced the creation of OpenAI, a nonprofit devoted to “[advancing] digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The company’s founders are already backing the initiative with $1 billion in research funding over the years to come. The aim is to ensure that someone is looking at the pros and cons, free from the financial constraints of the research and development departments at the likes of Google or IBM, which have spent billions of dollars on research. “Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely,” the founders of OpenAI said on its website. As Sam Altman explained in an interview, the premise of OpenAI is essentially that artificial intelligence systems are coming, and we’d like to share the development of that technology amongst everyone, not just Google’s shareholders.

The weird part is the justification for doing so: essentially, Musk and Altman seem to think kickstarting the open-AI revolution is the only way to save us from Skynet. Here’s Altman’s response to a question about whether accelerating AI technology might empower people seeking to gain power or oppress others: “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else,” Altman said. “If that one thing goes off the rails, or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.”

Last summer, Musk, along with Stephen Hawking, Apple co-founder Steve Wozniak and AI researchers, penned a letter urging world leaders to ban autonomous weapons.

If Altman’s reasoning sounds eerily reminiscent of the “a good guy with a gun would’ve stopped that bad guy with a gun” argument, that’s because it’s exactly the same logic. The letter, for its part, warned of a future arms race and cautioned that inexpensive autonomous weapons could fall into the wrong hands and lead to all sorts of atrocities.