Over at Aeon magazine, Ross Andersen has a fascinating story about a group of futurists who are trying to prepare humanity for the intelligence explosion: the moment when artificial intelligence surpasses humanity in its ability to control the planet. For these thinkers, AI is a far deadlier threat than asteroids, global warming, or nuclear war.

Writes Andersen:

'To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can't answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren't essential components of intelligence. They're incidental software applications, installed by aeons of evolution and culture. [University of Oxford futurist] Bostrom told me that it's best to think of an AI as a primordial force of nature, like a star system or a hurricane - something strong, but indifferent.'