Nick Bostrom is not a typical speaker for the Association for the Advancement of Artificial Intelligence (AAAI). This year's conference, AAAI-16, held February 12-17 in Phoenix, Arizona, is one of the premier events in the AI world, drawing more than a thousand students, professors, and researchers, many of them specializing in computer science, engineering, and mathematics.

Bostrom is a philosopher.

Bostrom heads the Future of Humanity Institute, a multidisciplinary research program at Oxford that tackles the question of how humanity should prepare for its future, and he is widely regarded as a prophet of an "intelligent" future. He brought significant attention to the field of AI with his 2014 bestseller, Superintelligence: Paths, Dangers, Strategies, which outlines the possible paths by which we could reach an age of superintelligence, defined as a general machine intelligence that exceeds human capability.

The birth of superintelligence, Bostrom told the audience, will be a monumental event in human history. "We can compare the rise of a superintelligence," he said, "to the rise of Homo sapiens, in the first place."

But the ape-to-human analogy may, in fact, be too modest. The transition from human to machine intelligence is "even more radical" than the transition from animals to humans, he said.

Bostrom's keynote focused on what's possible when considering superintelligence. What's real? he asked. And, "what should we leave to the science fiction authors to explore?"

He urged the group to consider a "view of the landscape ahead. What are the practical implications if you zoom out?" This broad view, Bostrom said, can inform the questions we ask today.

He proposed three categories: the short-term future, which includes technological advances like self-driving cars; the long-term future, which includes things like AI assistants and humanoid robot companions; and the deep future, which could include a cure for aging, "uploading," "ancestor simulations," and more.

Bostrom's talk, echoing the message of his book, was a call to action. He warned of the dangers that will arise once superintelligence is reached. We cannot, he said, presume that superintelligent agents will adopt human values. The most likely scenario, he believes, is that they will pose a threat to any humans who stand "in its way."

"Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb," Bostrom wrote in Superintelligence. With no adults around, it hardly matters that most of the children are sensible.

"Some little idiot is bound to press the ignite button just to see what happens," he said.

Informally, many people I spoke to at the conference agreed that Bostrom had an important message for the AI community. Vincent Conitzer, a professor at Duke University, said there would be nothing wrong with a "breath of fresh air."

Yet there was an undercurrent of resistance as well. Many researchers believe that the ideas Bostrom presented are too far off to be of immediate concern. Oren Etzioni, director of the Allen Institute for Artificial Intelligence, tweeted: "We run code while Bostrom runs arguments. Philosophy is not science or engineering—it is highly speculative."

Pushback also emerged in the Q&A session, where several attendees asked why Bostrom does not focus instead on a potential world in which machines and humans coexist.