GPU Accelerators in Today’s Data Center: Performance & Efficiency

NVIDIA is a leading provider of GPU accelerators used in many high performance computing (HPC) environments. This whitepaper, authored by Dan Olds of the Gabriel Consulting Group, explains the need for this new generation of hardware in today's data center and looks at which new technologies users are actually asking for.

NVIDIA's latest GPU, the Tesla P100, has 3,584 CUDA cores and 16 GB of on-card memory with up to 720 GB/s of memory bandwidth. A single PCIe-attached card can provide up to 4.7 TFlop/s of double precision performance, 9.3 TFlop/s single precision, and 18.7 TFlop/s half precision, which is stunning when you consider that this amounts to a simple server upgrade.
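The quoted peak figures follow the Pascal architecture's pattern of roughly doubling throughput with each halving of precision. A quick sanity check of the ratios, using only the numbers cited above:

```python
# Peak throughput figures for a PCIe-attached Tesla P100,
# as quoted in the text (TFlop/s).
peaks = {"double": 4.7, "single": 9.3, "half": 18.7}

# Each halving of precision roughly doubles peak throughput.
ratio_single = peaks["single"] / peaks["double"]
ratio_half = peaks["half"] / peaks["double"]

print(f"single/double: {ratio_single:.1f}x, half/double: {ratio_half:.1f}x")
# single/double: 2.0x, half/double: 4.0x
```

The 4x half-precision figure is what makes these cards especially attractive for deep learning workloads, where reduced precision is often acceptable.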

Even higher performance can be gained by using NVIDIA's innovative NVLink interconnect, a specialized GPU-to-GPU link that provides 160 GB/s of bandwidth, five times that of a x16 PCIe connection. Many major manufacturers now sell servers and motherboards with NVLink connectors, making it a widely available option for users looking to maximize their GPU investment.
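The 5x claim checks out if both figures are read as bidirectional bandwidth; the ~32 GB/s bidirectional figure for PCIe 3.0 x16 is a standard spec value and an assumption on our part, not a number quoted in the article:

```python
# Bidirectional bandwidth, GB/s.
nvlink_bw = 160.0       # NVLink figure quoted in the text
pcie_x16_bw = 32.0      # PCIe 3.0 x16, ~16 GB/s per direction (assumed spec value)

print(f"NVLink advantage: {nvlink_bw / pcie_x16_bw:.0f}x")
# NVLink advantage: 5x
```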

Even without the benefit of NVLink, the performance of GPU-enabled systems is profound. On many applications, a single two-socket node with four interconnected NVIDIA P100s can deliver the same processing power as 32 traditional CPU-only nodes. This performance differential shifts even further toward GPU-based systems when NVLink is factored into the equation. Greater performance not only translates into faster time to solution; it also means significantly lower acquisition costs and better energy efficiency.
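A rough back-of-the-envelope calculation shows where the 32-node figure could come from. The per-node CPU throughput below (~0.6 TFlop/s double precision for a dual-socket server of that era) is our assumption, not a number from the article, and real application speedups depend heavily on the workload:

```python
p100_per_node = 4
p100_dp_tflops = 4.7          # double-precision peak per P100, from the text
cpu_node_dp_tflops = 0.6      # assumed dual-socket CPU node peak (not from the article)

gpu_node_tflops = p100_per_node * p100_dp_tflops      # ~18.8 TFlop/s aggregate
equivalent_cpu_nodes = gpu_node_tflops / cpu_node_dp_tflops

print(f"One 4x P100 node ~ {equivalent_cpu_nodes:.0f} CPU-only nodes")
# One 4x P100 node ~ 31 CPU-only nodes
```

On peak throughput alone this lands right around the 32-node claim; applications that scale well on GPUs can approach it in practice.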

Wide Array of Applications for GPU Accelerators

A surprising number of applications can take advantage of GPU acceleration: more than 400 GPU-optimized applications exist today. These include nine of the top ten HPC applications, all of the major deep learning frameworks, and a host of other applications in the areas shown in the table below.

Given the performance and efficiency benefits of GPUs, it is not surprising that the number of GPU-enabled applications has increased radically over the past several years and will certainly rise further. Many customers have also ported their own applications to the GPU. NVIDIA offers a free "GPU Test Drive" program that gives customers the opportunity to run their own apps on a remote GPU-based cluster.

Over the next few weeks we will look at different aspects of GPUs using the latest acceleration hardware from NVIDIA:

Resource Links:

Industry Perspectives

In this NVIDIA podcast, Bryan Catanzaro of Baidu describes how machines with deep learning capabilities are now better at recognizing objects in images than humans are. "AI gets better and better until it kind of disappears into the background," says Catanzaro, NVIDIA's head of applied deep learning research, in conversation with host Michael Copeland on this week's edition of the new AI Podcast. "Once you stop noticing that it's there because it works so well — that's when it's really landed." [Read More...]

White Papers

This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions—without undue stress and organizational chaos.