AI Winter Isn’t Coming

Artificial intelligence is all the rage, with headline-grabbing advances being announced at a dizzying pace, and companies building dedicated AI teams as fast as they can.

Can the boom last?

Andrew Ng, chief scientist at Baidu Research, and a major figure in the field of machine learning and AI, says improvements in computer processor design will keep performance advances and breakthroughs coming for the foreseeable future. “Multiple [hardware vendors] have been kind enough to share their roadmaps,” Ng says. “I feel very confident that they are credible and we will get more computational power and faster networks in the next several years.”

The field of AI has gone through phases of rapid progress and hype in the past, quickly followed by a cooling in investment and interest, often referred to as “AI winters.” The first chill occurred in the 1970s, as progress slowed and government funding dried up; another struck in the 1980s as the latest trends failed to have the expected commercial impact.

Then again, there’s perhaps been no boom to match the current one, propelled by rapid progress in training machines to do useful tasks. Artificial intelligence researchers are now offered huge wages to perform fundamental research, as companies build research teams on the assumption that commercially important breakthroughs will follow.


The advances seen in recent years have come thanks to the development of powerful “deep learning” systems (see “10 Breakthrough Technologies 2013: Deep Learning”). Starting a few years ago, researchers found that very large, or deep, neural networks could be trained, using labeled examples, to recognize all sorts of things with human-like accuracy. This has led to stunning advances in image and voice recognition and elsewhere.

Ng says these systems will only become more powerful. This might not only increase the accuracy of existing deep learning tools, but also allow the technique to be leveraged in new areas, such as parsing and generating language.

“There are multiple experiments I’d love to run if only we had a 10× increase in performance,” Ng adds. For instance, he says, instead of having several different image-processing algorithms, greater computing power might make it possible to build a single algorithm capable of doing all sorts of image-related tasks.

The world’s leading AI experts convened in Barcelona this week for a prominent event called the Neural Information Processing Systems conference. The scale of the gathering, which has grown from several hundred people a few years ago to more than 6,000 this year, offers some sense of the surging interest in artificial intelligence.

“There’s definitely hype,” adds Ng, “but I think there’s such a strong underlying driver of real value that it won’t crash like it did in previous years.”

Richard Socher, chief scientist at Salesforce and a well-known expert on machine learning and language, says availability of huge amounts of data, combined with advances in machine-learning algorithms, will also keep progress going.

Salesforce offers cloud tools for managing sales leads and communication with customers. The company’s AI effort took shape after it acquired Socher’s startup, MetaMind, earlier this year. Salesforce now also provides simple machine-learning tools to companies, such as an image recognition system.

Until now, machine learning has mostly been demonstrated by a few big companies in the consumer space, Socher says. Making such technology available more broadly could have a huge impact, he says. “If we were to make the 150,000 companies that use Salesforce 1 percent more efficient through machine learning, you would literally see that in the GDP of the United States,” he says.

Socher believes the application of machine learning in industries will maintain interest in AI for a while. “I can’t imagine an AI winter in the future that could be as cold as previous ones,” he says.

Will Knight is MIT Technology Review’s Senior Editor for Artificial Intelligence. He covers the latest advances in AI and related fields, including machine learning, automated driving, and robotics. Will joined MIT Technology Review in 2008 from the UK science weekly New Scientist magazine.