As the AI revolution gains momentum, NVIDIA founder and CEO Jensen Huang took the stage Tuesday in Beijing to show the latest technology for accelerating its mass adoption.

His talk — to more than 3,500 scientists, engineers and press gathered for the three-day event — kicks off a GTC world tour where, in the months ahead, we’ll bring our story to an expected live audience of some 22,000 in Munich, Tel Aviv, Taipei, Washington and Tokyo.

“At no time in the history of computing have such exciting developments been happening, and such incredible forces in computing been affecting our future,” said Huang, clad in his trademark leather jacket, after striding onto the stage at the gleaming Beijing International Hotel convention center.

Demand is surging for technology that can accelerate the delivery of AI services of all kinds. And NVIDIA’s deep learning platform — which the company updated Tuesday with new inferencing software — promises to be the fastest, most efficient way to deliver these services.

In a nearly two-hour keynote, Huang explained how the neural networks that power AI applications are growing exponentially more complex, even as far more consumers use them. As a result, we’re in an entirely new era of computing, with new kinds of demands, Huang said.

“What technology increases in complexity by a factor of 350 in five years? We don’t know any. What algorithm increases in complexity by a factor of 10? We don’t know any,” Huang said. “We are moving faster than Moore’s law.”

The combination of TensorRT 3 with NVIDIA GPUs delivers the world’s fastest inferencing on the widely used TensorFlow framework for AI-enabled services — such as image and speech recognition, natural language processing, visual search and personalized recommendations. Coupled with our Tesla V100 GPU accelerators, TensorRT can process as many as 5,700 images a second, versus just 140 using today’s CPUs, Huang said.
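The per-chip throughput gap Huang cited works out to a roughly 40x speedup. A minimal sketch of that arithmetic (variable names are mine, chosen for illustration):

```python
# Throughput figures from the keynote for image inference:
# Tesla V100 + TensorRT 3 vs. a contemporary CPU.
gpu_ips = 5700   # images per second, V100 with TensorRT 3
cpu_ips = 140    # images per second, CPU

speedup = gpu_ips / cpu_ips
print(f"GPU speedup over CPU: ~{speedup:.0f}x")  # ~41x
```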

The speed and efficiency TensorRT 3 offers when paired with NVIDIA GPUs translates into incredible savings, Huang explained. It takes 160 dual-CPU servers — costing $600,000 to $700,000, including networking and power delivery — that consume 65 kilowatts of power, to crank through 45,000 images per second.

By contrast, the same work can be done with a single NVIDIA HGX server equipped with eight Tesla V100 GPUs that consumes just 3 kilowatts of power.

That means “less carbon footprint, less space. And this is the part I love best,” Huang said with a grin. “Save money.”
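The fleet comparison Huang drew can be checked with simple arithmetic. A sketch using the keynote's figures (variable names are mine):

```python
# Fleet comparison from the keynote: 160 dual-CPU servers vs. one
# HGX server with eight Tesla V100s, both delivering 45,000 images/sec.
cpu_fleet_power_kw = 65.0   # 160 dual-CPU servers
hgx_power_kw = 3.0          # single 8x-V100 HGX server
throughput_ips = 45000      # images per second in both cases

power_reduction = cpu_fleet_power_kw / hgx_power_kw
cpu_eff = throughput_ips / (cpu_fleet_power_kw * 1000)  # images/sec per watt
hgx_eff = throughput_ips / (hgx_power_kw * 1000)

print(f"Power reduction: ~{power_reduction:.0f}x")   # ~22x less power
print(f"CPU fleet efficiency: {cpu_eff:.2f} images/sec/W")
print(f"HGX efficiency: {hgx_eff:.0f} images/sec/W")
```

On these numbers the HGX server is roughly 22x more power-efficient for the same throughput, which is the basis for the carbon, space and cost savings claimed above.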

In addition to TensorRT 3, Huang announced software to accelerate AI, including the DeepStream SDK — which provides real-time, low-latency video analytics at scale — and CUDA 9, the latest version of our accelerated computing software to speed up HPC and deep learning applications.

China Cloud Service Providers, OEMs Adopt Tesla V100

And there are now more options than ever for those looking to put this technology to work.

Huang announced that Alibaba, Baidu and Tencent are all deploying Tesla V100 GPU accelerators in their cloud services. Plus, China’s top OEMs — Huawei, Inspur and Lenovo — have all adopted our HGX server architecture to build a new generation of accelerated data centers with Tesla V100 GPUs.

In addition, Huang announced that all five of the leading internet and AI companies in China — Alibaba, Tencent, Baidu, JD.com and iFLYTEK — have adopted NVIDIA’s GPU inferencing platform.

“Every single internet transaction, every piece of traffic that goes through the data center will in the future touch a neural network, or many neural networks,” Huang said.

New Metropolis Partners

And with the addition of Alibaba and Huawei as new partners — and the general availability of the NVIDIA DeepStream SDK — we’ve added more building blocks to our smart city foundation at GTC China this week.

DeepStream simplifies the development of scalable intelligent video analytics powered by deep learning for AI cities and hyperscale data centers.

Alibaba and Huawei join more than 50 of the world’s leading companies already using NVIDIA Metropolis. Together, we’re taking advantage of the more than 1 billion video cameras that will be in our cities by the year 2020 to solve such problems as traffic congestion, emergency notifications and locating lost persons.

“Artificial intelligence is going to revolutionize how cities are built in the future,” Huang said. “We call it AI City.”

NVIDIA Drive Adopted by Autonomous Vehicle Efforts Around China

One of AI’s greatest impacts will be the autonomous vehicle — the self-driving car, Huang explained.

To meet this challenge, NVIDIA has created a platform for autonomous vehicles called DRIVE. It addresses every aspect of autonomous vehicles, and partners can utilize all — or some — of this platform.

To power autonomous efforts such as these — machines that can perceive their surroundings, understand the situation, reason about what to do, and control themselves as they interact with the world — NVIDIA created Xavier, the world’s first processor for autonomous machines. Xavier is the most complex SoC ever created.

It will be available early next year to early access partners, with general availability in the fourth quarter of 2018.

Pillars in Place to Invent Next AI Era

“Our vision is to enable every researcher everywhere to enable AI for the goodness of mankind,” Huang said. “We believe we now have the fundamental pillars in place to invent the next era of artificial intelligence, the era of autonomous machines.”