A futurist who's right 85% of the time says machines will be conscious by 2025 — and it'll be 'the beginning of the end'

Google DeepMind's artificial-intelligence program AlphaGo made history when it won the complex
game of Go against Lee Sedol, one of the world's greatest players.

As Elon Musk pointed out at the time, experts in
the field thought AI was a decade away from reaching that
milestone. The momentous event showed that AI was gaining
skills typically reserved for humans far faster than we expected.

And that very fact could be a problem, Ian Pearson, a futurist
with an 85% accuracy track record, told Tech Insider.

"You could end up with superhuman machines going down that road,"
Pearson said. "Google's DeepMind isn't there yet, but really
I'm sure they'll probably discover things along the way and, by
2025, it's possible their computer could be superhuman and could
be conscious."

Pearson isn't the only one who thinks we're inching closer to
seeing machines with human levels of consciousness. Ray Kurzweil,
an AI expert for Google, wrote in his book "The Singularity Is Near"
that current efforts to reverse-engineer the brain will allow us
to simulate it in computers by 2030.

The more computers can think like humans, the better they'll be
at performing tasks. We often take for granted how easy it is for
us to do basic tasks like, say, reaching into a fridge and grabbing
a beer. But asking a robot to do that same task is really difficult
right now: it needs to understand what it's reaching for, how to
get it without breaking or knocking anything over, and how to
transport it safely.

If we want AI to do everything from buying us plane tickets through
our phones to powering robots that help around the house, it
needs to understand the world more the way humans do.

But there are also risks associated with AI evolving to the level
of consciousness Pearson and Kurzweil are expecting within the
next 14 years.

"Advanced AI could write its own codes and algorithms and take
over other machines to make secondary AI, and people may not know
where it is, never mind how to switch it off," Pearson said.

AI with human levels of consciousness could also figure out how
to work around any restrictions it was initially programmed with,
he added.

"There's a certain amount of naivety [with the idea] that you can
explain the task and it'll do so," Pearson said. "People
underestimate the potential for a smart machine to choose not to
follow the rules."


Tesla CEO Elon Musk has been particularly outspoken about the
threats AI poses as it gets smarter.

He's funded a number of research projects to
ensure AI doesn't turn evil, and he's not afraid of hyperbole on the subject: "With artificial
intelligence we are summoning the demon," Musk said at MIT's
Aeronautics and Astronautics department's Centennial Symposium in
2014.

In fact, Musk co-founded OpenAI, a non-profit research company
focused on advancing "digital intelligence in the way that is
most likely to benefit humanity as a whole." An OpenAI researcher
recently teamed up with AI experts at Google to outline "Concrete
Problems in AI Safety."

So it does seem that some experts in the field are aware of the
potential threats posed by advanced AI. Still, Pearson said he is
skeptical of people who think it can be easily controlled.

"Everyone wants to get all the benefits without any of the
problems, but if you look at any area of engineering or
bioengineering you see things going wrong occasionally and that's
in spite of people's best interests," he said. "It can't easily
be constrained."