Elon Musk launches $1bn fund to save world from AI

Beyond Elon Musk, artificial intelligence startups are having a moment.

The man who made his billions from PayPal, and who has gambled a chunk of his fortune on the race for space, has warned frequently that AI represents humanity’s greatest existential threat.

He is joining forces with other tech entrepreneurs to establish a $1 billion investment fund for researchers to pursue applications with a positive social impact and to try to stay one step ahead of the technology. “Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach,” they said in a statement. “When it does, it’ll be important to have a leading research institution which can prioritise a good outcome for all over its own self-interest.”

The statement is a reflection of the debate within the science and technology worlds about the threats and benefits offered by rapid advances in computer intelligence, and whether legislative safeguards – or even a total moratorium on research – are needed. The idea of super-intelligent computers that become so indispensable to human life they eventually make us redundant and take over has moved from the pages of science fiction to scientific journals. “If I were to guess what our biggest existential threat is, it’s probably that,” Musk has said.

Yesterday, Tesla’s boss, along with a band of prominent tech executives including LinkedIn co-founder Reid Hoffman and PayPal co-founder Peter Thiel, announced the creation of OpenAI, a nonprofit devoted to “[advancing] digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The company’s founders are already backing the initiative with $1 billion in research funding over the years to come.

On multiple occasions, right after finally convincing hiring managers to use his high-tech recruiting tool, John Jersin has received the same bewildering question from new customers: will this end up eliminating my job, or, you know, destroying my entire industry? Such is the nature of working in the artificial intelligence field today. “They understand that this technology is powerful enough that they need to take advantage of it, but they are a little bit concerned about the impact it may have on their industry in the long term,” says Jersin, CEO of Connectifier and a former Google product manager.

The aim of OpenAI is to ensure that someone is looking at the pros and cons – free from the financial constraints of research and development departments at the likes of Google or IBM, which have spent billions of dollars on research. “Since our research is free from financial obligations, we can better focus on a positive human impact,” the founders said. As Sam Altman explained in an interview, the premise of OpenAI is essentially that artificial intelligence systems are coming, and the founders would like to share the development of that technology with everyone, not just Google’s shareholders.

It’s an unusual love/hate (or love/fear) dynamic that has come to define much of the new, fast-growing market for startups that rely on artificial intelligence technology. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely,” said the founders of OpenAI on its website.

After decades of not being taken seriously, artificial intelligence is enjoying a renaissance thanks to better data and computing capacity, as well as all the attention showered on products like IBM’s dashing Watson, Facebook’s personal assistant M and large AI acquisitions by Google and Apple.

The weird part is the justification for doing so: essentially, Musk and Altman seem to think kickstarting the open-AI revolution is the only way to save us from SkyNet. Here’s Altman’s response to a question about whether accelerating AI technology might empower people seeking to gain power or oppress others: “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else,” Altman said.

If this sounds eerily reminiscent of the “a good guy with a gun would’ve stopped that bad guy with a gun” argument, that’s because it’s the exact same logic.

There is, at least for now, a vast disconnect between all the fearful predictions about artificial intelligence and what businesses are actually working to build. The startups that have launched and raised funding in the past couple of years promise to crunch data and use algorithms to automate and improve hiring decisions (Connectifier) or recommend treatment options to doctors (Enlitic). That’s by design: almost all companies working on these products have decided to make their virtual assistants more human, rather than less, to engage with users. “The very first choice you have to make if you go create any one of these is if you want to humanize it.

“Does that take away from her humanity, or just suggest that she’s a shitty human?” Babak Hodjat is the founder and chief scientist (a popular title at AI startups) of Sentient Technologies, which is by far the best-funded of the current crop of artificial intelligence startups, with nearly $150 million from private investors, the bulk of which came late last year. At some point, he believes, this assistant may even be able to create renderings of new products, designed specially for you, that don’t yet exist. “And this is just shopping,” Hodjat continues. “Take that to healthcare: a very personalized medical and healthcare regimen for individuals based on their daily activities, their vitals, their Fitbit or Apple Watch. I think AI will be there helping us make decisions in the future, pervasively.”

The initial surge in AI startups began in earnest two to three years ago, but this year countless companies have latched on to the term in an effort to appear trendier. “AI is a very broad term, kind of like what on-demand was a year ago,” says Marvin Liao, an investing partner at 500 Startups, who says he noticed an uptick in these companies four to six months ago. “Every company was an on-demand company; now every startup has some AI.

“There’s a bit of a flock of sheep mentality.” Of the companies that claim to be artificial intelligence-driven, Liao estimates that fewer than 10% really fit the bill.

Kristian Hammond founded the University of Chicago’s Artificial Intelligence Laboratory and currently serves as the chief scientist (that title again) at Narrative Science, which launched in 2010 and has made a name for itself in recent years by using artificial intelligence to automatically write news stories. In the beginning, however, Narrative Science chose not to identify itself as an artificial intelligence service. “All that people could think of with AI systems were things that didn’t work,” Hammond says. As he points out, even when IBM’s Watson made its star turn on “Jeopardy!”, it was referred to as a “cognitive system” rather than AI. “They too knew AI was a term that was in disrepute.”

“Living in a world where people think AI isn’t possible because we can’t figure it out isn’t a fun world to live in,” Hammond says. “Living in a world where people are afraid of it, I’m happy to have those conversations; those are easier conversations.”