Meanwhile, two researchers from the University of Oxford have estimated that computerization will put nearly half the jobs in the United States in jeopardy, including some creative professions that were thought to be immune.

“Occupations that require subtle judgment are also increasingly susceptible to computerization,” wrote Carl Benedikt Frey and Michael A. Osborne. “To many such tasks, the unbiased decision making of an algorithm represents a comparative advantage over human operators.”

Even for investing commentary? Just kidding — I hope.

So, are the machines really taking over? I interviewed two leading researchers in AI and came away a little reassured, but not much. AI is progressing, but technical barriers may rule out any immediate quantum leap in machine intelligence.

Still, over time intelligent machines will do more and more complex mental calculations, if not replicate the highly sophisticated operations of the human brain itself.

IBM researchers estimate that a human brain can make 36.8 quadrillion calculations per second. The total computational power of the world’s supercomputers is now eight times that, and the largest supercomputer, in Guangzhou, China, has nearly the computational capacity of one human brain.
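The figures above imply a simple back-of-envelope comparison, sketched below (the calculations-per-second numbers are the article's; treating them as directly comparable is an assumption for illustration):

```python
# Rough arithmetic on the article's figures, taken at face value.
BRAIN_CPS = 36.8e15  # IBM estimate: calculations per second for one human brain

# Article: the world's supercomputers total eight times one brain's capacity,
# and the largest machine (in Guangzhou) roughly matches one brain.
world_supercomputers_cps = 8 * BRAIN_CPS
largest_machine_cps = 1 * BRAIN_CPS

print(f"One brain:            {BRAIN_CPS:.3e} calc/s")
print(f"World supercomputers: {world_supercomputers_cps:.3e} calc/s")
print(f"Brains' worth of supercomputing worldwide: "
      f"{world_supercomputers_cps / BRAIN_CPS:.0f}")
```

On these numbers, all the world's supercomputers together amount to about eight human brains of raw calculation, with the largest single machine accounting for roughly one.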

Driving the exponential increase in computational power, as Hawking noted, is Moore’s Law, which says the number of transistors on an integrated circuit doubles every 18 months (or every two years, as Gordon Moore stated in 1975).
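Moore's Law as stated above is just repeated doubling, which compounds dramatically. A minimal sketch (the 1971 Intel 4004 baseline of roughly 2,300 transistors is an assumption for illustration, not from the article):

```python
# Moore's Law as a doubling formula: count doubles every `period` years.
def transistors(years_elapsed, start=2300, period=2.0):
    """Projected transistor count after `years_elapsed` years,
    starting from `start` (assumed 1971 Intel 4004 baseline) and
    doubling every `period` years."""
    return start * 2 ** (years_elapsed / period)

# Over four decades at a two-year doubling period, counts grow
# about a million-fold (2**20).
growth = transistors(40) / transistors(0)
print(growth)  # 1048576.0
```

Shortening the period to 18 months, as the more aggressive reading has it, yields roughly 2^26, or about 67 million-fold growth over the same 40 years.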

In 2007, Yale University economist William D. Nordhaus calculated that computer performance “has improved since manual computing by a factor between 1.7 trillion and 76 trillion.”

“Computers are approaching the complexity and computational capacity of the human brain,” Nordhaus wrote. “Perhaps computers will prove to be the ultimate outsourcer.”

Peter Bock, professor emeritus of engineering at the George Washington University, has followed AI’s progress for four decades and says it’s tracking Moore’s Law closely.

“We are precisely on schedule” to have the capacity to create the human-like brain of Mada, a fictional intelligent machine he described in the 1993 book “The Emergence of Artificial Cognition.” Bock’s target date: 2024.

That intelligent machine would have to be able to learn new things “in its ever-expanding intelligence capacity, just as happens with human cognition,” he wrote in a follow-up email.

In some ways, computers are ahead of us, too. Even in a whole lifetime, a human couldn’t search billions of documents to find answers to questions the way Google can in less than a tenth of a second, observed Stuart Russell, professor of electrical engineering and computer sciences at UC Berkeley.

And yet thinking machines still can’t perform certain common-sense tasks, as Google’s own experiences with self-driving cars show. “You have to make decisions quickly or the car will run you over,” Bock told me. Good point.

That and some technical limitations facing Moore’s Law may slow progress. But lack of computing power may not be the biggest obstacle.

Right now, said Russell, “we still don’t understand” how to program a thinking machine to act like a human brain. “There are many conceptual and technical problems to solve in the design of algorithms that generate intelligent behavior,” he wrote in a follow-up email.

But companies like IBM, Microsoft, Facebook and especially Google are throwing lots of money and (human) brain power at the problem. Google, which bought the British company DeepMind Technologies for an estimated $400 million, has scooped up many top AI researchers.

I believe in free enterprise, but given AI’s potential dangers, should corporations driven by profit and stock market performance be the ultimate arbiters of a technology that can have such far-reaching consequences? Or certain governments, for that matter?

Google has an ethics committee monitoring DeepMind (whose own co-founder, like Musk, called AI “the number-one risk this century”). But remember what Wall Street’s risk-management departments did to prevent the financial crisis? Zilch. When big breakthroughs promise big profits, ethics goes out the window. Google declined to comment.

Stuart Russell, who serves with Hawking on the Advisory Board of the Centre for the Study of Existential Risk at the University of Cambridge, believes scientists should act now to “understand and eliminate potential risks.”

“As you approach greater and greater amounts of AI capacity, you must develop better and better controls,” he told me.

“We’re not going to know what we’ve got until we get there,” said Bock. “There is a grave danger for us at the end of this path.”

How many more genies can we let out of the bottle? Let’s hope we humans don’t learn this lesson the hard way, as we have so many times before.
