“I’m really quite close, very close to the cutting edge in AI. It scares the hell out of me,” Musk said. “It’s capable of vastly more than almost anyone on Earth, and the rate of improvement is exponential.”

Musk cited Google’s AlphaGo, AI software that plays the ancient Chinese board game Go, as evidence of the rise of the machines. In 2017, AlphaGo clinched a decisive win over the world’s number-one Go player; Go is widely regarded as one of the world’s most demanding strategy games.

Musk also predicted that advances in AI will let self-driving cars handle “all modes of driving” by the end of 2019. He thinks Tesla’s Autopilot 2.0 will be “at least 100 to 200 percent” safer than human drivers within two years, and imagines that drivers will someday be able to sleep at the wheel.

The rate of improvement both excites and worries Musk. He called for regulation of AI development to ensure humanity’s safety, though he did not say who should do the regulating.

“I think the danger of AI is much bigger than the danger of nuclear warheads by a lot,” Musk said. “Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes.”

Musk wants to create a Plan B society on Mars

Musk has a backup plan in case nuclear war – or AI – wipes out the human race.

In the event of nuclear devastation, Musk said, “we want to make sure there’s enough of a seed of civilisation somewhere else to bring civilisation back and perhaps shorten the length of the dark ages. I think that’s why it’s important to get a self-sustaining base, ideally on Mars, because it’s more likely to survive than a moon base.”

Musk has yet to detail exactly how hypothetical Mars colonists will survive for months or years on end.