PatrickRIot writes: In a wide-ranging interview, Nick Bostrom of Oxford's Future of Humanity Institute goes through the cosmic, grisly and robotic ways our species might eventually check out.

A big chunk is given over to a possible superintelligence or global brain, making artificial intelligence comparable to reverse-engineering God. The Old Testament kind, too.

'To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can't answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren't essential components of intelligence. They're incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it's best to think of an AI as a primordial force of nature, like a star system or a hurricane: something strong, but indifferent.'