Deep Learning Pioneers Boost Research at NVIDIA AI Labs Around the World

The world’s top researchers are pushing the boundaries of artificial intelligence at the NVIDIA AI Labs, known as NVAIL, located at 20 top universities around the globe.

University of Toronto researchers are developing affordable self-driving cars. At the Université de Montréal, researchers aim to use genetic data to predict and prevent disease. And at the University of California, Berkeley, they’re developing robots that can handle tasks they’ve never been trained on.

Our NVAIL program helps us keep these AI pioneers ahead of the curve with support for students, assistance from our researchers and engineers, and access to the industry’s most advanced GPU computing power.

Indeed, NVAIL researchers were among the first to receive our DGX-1 AI supercomputer, beginning nearly a year ago.

That geographic diversity is no accident. NVAIL partner institutions are located in regions that are the research hubs of deep learning. Their research ranges from advancing deep learning itself to improving breast cancer screening (New York University) and automated lip reading (Oxford University).

Read on for a look at a few of their most promising projects.

Raquel Urtasun of Uber and the University of Toronto aims to lower the cost of self-driving cars. Image courtesy of Uber.

Urtasun wants to make the technology affordable. “So no matter what your income is, you can get the benefits of self-driving cars,” she said.

The technology in some autonomous cars — lidar, 3D sensors and hand-annotated maps — can cost more than $100,000, Urtasun said. Her team develops algorithms for perception, localization and mapping that use technologies like cheap sensors and satellite data.

In addition to computing power and technical support, the partnership with NVIDIA gives the University of Toronto something just as valuable, Urtasun said. “We get to have a say about the computing of the future, which will help our researchers.”

Université de Montréal researchers are advancing deep learning for genomics.

Modern genotyping methods target as many as 5 million variations in the human genome, some of which may point to the risk of developing a certain disease. Researchers use deep learning to try to determine how useful each variation is for predicting disease, how variations relate to each other, and how to weight the relative importance of these factors.

It’s a tall order because there are far more genetic variables to consider than there are patient genomic samples to learn from, said Adriana Romero, a Université de Montréal researcher. As a result, it’s hard to train a deep learning system that can make reliable predictions.

To find a better way, her research team — which includes Romero’s adviser, AI pioneer Yoshua Bengio — experimented with predicting genetic ancestry based on mutations. They came up with a deep learning architecture that makes predictions while using fewer parameters (the weights assigned to each variable). (See related paper, “Diet Networks: Thin Parameters for Fat Genomics.”)
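The core trick in the Diet Networks paper is to stop learning the huge input layer directly: an auxiliary network predicts that layer's weights from a small per-variant feature embedding, so the number of free parameters depends on the embedding size rather than the number of variants. The sketch below illustrates the idea with toy sizes and a single linear map as the auxiliary network; the variable names and the genotype-frequency embedding are illustrative simplifications, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: real genotyping panels have up to ~5 million variants.
n_samples, n_snps, n_hidden = 200, 5000, 64

# Fake genotype matrix: each entry counts alternate alleles (0, 1 or 2).
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)

# Per-SNP embedding: frequency of each genotype value across samples,
# one of the per-feature embeddings explored in the Diet Networks paper.
embed = np.stack([(X == g).mean(axis=0) for g in (0, 1, 2)], axis=1)  # (n_snps, 3)

# Auxiliary net (here just one linear map) predicts the columns of the
# "fat" input layer from the embeddings instead of learning them directly.
aux_W = rng.normal(scale=0.1, size=(embed.shape[1], n_hidden))  # 3 x 64 free params
fat_W = embed @ aux_W                                           # (n_snps, n_hidden)

hidden = np.tanh(X @ fat_W)  # input-layer forward pass, (n_samples, n_hidden)

direct_params = n_snps * n_hidden  # what a plain dense input layer would learn
diet_params = aux_W.size           # what the auxiliary net learns instead
print(direct_params, diet_params)
```

Even in this toy setting, the dense input layer would need 320,000 learned weights while the auxiliary map needs 192, which is why the approach helps when samples are scarce relative to variants.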

“Our next step is to tackle disease prediction and work toward the possibility of having personalized medicine,” Romero said.

Versatile Robots

Most robots today can do one thing well: delivering packages, vacuuming the floor or assisting with surgery, for example. But when they’re faced with a new task, they’re stumped.

Chelsea Finn, a doctoral researcher at UC Berkeley, wants robots to understand situations they’ve never seen before — without any help from engineers. She collaborates with her advisers, Pieter Abbeel and Sergey Levine, to create robots that are able to adapt to new environments.

To do this, Finn uses GPU-accelerated deep learning to train the robot to understand the results of its actions and then predict what it needs to do to accomplish the next task. (See related paper, “Deep Visual Foresight for Planning Robot Motion.”)
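The planning loop described above can be sketched as sampling-based model-predictive control: propose many candidate action sequences, use the learned predictor to imagine each outcome, and execute the sequence whose imagined outcome best matches the goal. In this minimal sketch the learned visual-prediction model is replaced by a hand-written stand-in over a 2-D object position (the real system predicts camera images), and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_outcome(state, actions):
    """Stand-in for the learned prediction model: roll the state forward
    under a candidate action sequence. (The real model predicts video.)"""
    for a in actions:
        state = state + a  # toy dynamics: each action displaces the object
    return state

def plan(state, goal, horizon=5, n_candidates=256):
    """Try many random action sequences, score each imagined outcome by
    distance to the goal, and return the best-scoring sequence."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, 2))
    outcomes = np.array([predict_outcome(state, seq) for seq in candidates])
    costs = np.linalg.norm(outcomes - goal, axis=1)
    return candidates[np.argmin(costs)]

state = np.zeros(2)           # toy 2-D object position instead of an image
goal = np.array([3.0, -1.0])  # where we want the robot to push the object
best = plan(state, goal)
print(np.linalg.norm(predict_outcome(state, best) - goal))
```

In practice the first action of the best sequence is executed, the camera observes the result, and the robot replans, which is what lets it correct course on the fly.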

“We need to process data quickly so that the robot can learn on the fly,” she said. “Without the speed of GPUs, a lot of my research wouldn’t be possible.”