Known as the World Cup for computer vision and machine learning, the challenge pits teams from academia and industry against one another to tackle fiendishly difficult deep learning-based object recognition tasks.

The winners are well known. What’s news: 90% of the ImageNet teams used GPUs. Now it’s time for some of those teams to talk about how they used their not-so-secret weapons.

Teams from Adobe, the National University of Singapore, and Oxford University will share how GPU accelerators helped them break new ground at the contest by improving the object recognition accuracy of their deep learning algorithms.

It’s just one example of how GPUs are taking the deep learning world by storm.

Adoption of GPUs for Deep Learning Explodes

Around the world, deep learning researchers and enterprises are flocking to GPU acceleration. They’re tackling tasks ranging from face and speech recognition and supercharged web search to image auto-tagging and personalized product recommendations.

NVIDIA GPUs are helping scientists train computers to recognize a wide array of objects.

Deep learning is one of the fastest growing segments of the machine learning field. It involves training computers to teach themselves by sifting through massive amounts of data. For example, a system learns to identify a dog by analyzing lots of images of dogs, ferrets, jackals, raccoons and other animals.
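To make the idea of learning from labeled examples concrete, here is a toy sketch: a single perceptron (far simpler than the deep networks described in this article) learns to separate two classes of made-up feature vectors. All of the data and feature names below are invented for illustration.

```python
# Toy "learn from examples" sketch: a perceptron nudges its decision
# boundary toward each misclassified sample until it separates the classes.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches labels (+1/-1)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else -1
            if pred != y:  # misclassified: move the boundary toward this sample
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else -1

# Invented 2-D features; +1 = "dog", -1 = "not dog"
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # → [1, 1, -1, -1]
```

A real deep network stacks many such units into layers and trains them with backpropagation on millions of images, which is where GPU acceleration comes in.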

But deep learning algorithms also depend on massive amounts of computing power to process mountains of data. Doing this with CPU-based servers can require thousands of machines, which is both expensive and impractical.

GPUs are a different story. These high-performance parallel processors crunch through a broad variety of visual computing problems quickly and efficiently.

To make it easier for deep learning pioneers to advance their work, NVIDIA and the University of California at Berkeley are putting the power of GPU acceleration in the hands of many more individuals around the world.

U.C. Berkeley researchers have integrated cuDNN into Caffe, one of the world’s most popular and actively developed deep learning frameworks – one that many of the ImageNet contestants used for their work.

With cuDNN and GPU acceleration, Caffe users can now rapidly iterate on new training models to develop powerful, more accurate algorithms.
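As a rough illustration of what iterating on a training run looks like in Caffe, here is a minimal sketch of a solver definition. The field names come from Caffe's solver configuration format; the network path and hyperparameter values are placeholders, not recommendations, and `solver_mode: GPU` only uses cuDNN when Caffe is built with cuDNN support.

```
# Hypothetical Caffe solver sketch; "models/train_val.prototxt" is a placeholder.
net: "models/train_val.prototxt"
base_lr: 0.01          # starting learning rate
momentum: 0.9
weight_decay: 0.0005
lr_policy: "step"      # drop the learning rate in fixed steps
stepsize: 100000
gamma: 0.1
max_iter: 450000
solver_mode: GPU       # train on the GPU (cuDNN-accelerated when built with it)
```

Switching `solver_mode` between `CPU` and `GPU` is all it takes to move the same training run onto the accelerator, which is what makes rapid iteration on new models practical.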