NVIDIA Targets Need for Speed With Ultra-Fast GPU Interconnect

At its annual GPU Technology Conference (GTC) in San Jose this week, NVIDIA (NVDA) laid the foundation for its Pascal GPU architecture with NVLink high-speed integration, launched a GPU rendering appliance, and introduced a new Tegra K1-powered development kit for the embedded market. The conference conversation can be followed on Twitter hashtag #GTC14.

NVIDIA announced plans to integrate a high-speed interconnect into its future GPUs. The NVIDIA NVLink will enable GPUs and CPUs to share data 5 to 12 times faster than they can today. This will eliminate a longstanding bottleneck and help pave the way for a new generation of exascale supercomputers that are 50 to 100 times faster than today's most powerful systems.

NVLink will be part of the Pascal GPU architecture, due in 2016. IBM is co-developing the interconnect and will incorporate it in future versions of its POWER CPUs. NVLink joins IBM POWER CPUs with NVIDIA Tesla GPUs to fully leverage GPU acceleration for a diverse set of applications, such as high-performance computing, data analytics and machine learning.

Overcoming the PCIe Bottleneck

“NVLink enables fast data exchange between CPU and GPU, thereby improving data throughput through the computing system and overcoming a key bottleneck for accelerated computing today,” said Bradley McCredie, vice president and IBM Fellow at IBM. “NVLink makes it easier for developers to modify high-performance and data analytics applications to take advantage of accelerated CPU-GPU systems. We think this technology represents another significant contribution to our OpenPOWER ecosystem.”

The NVLink interface addresses the bottleneck with PCI Express, which limits the GPU’s ability to access the CPU memory system. PCIe is an even greater bottleneck between the GPU and IBM POWER CPUs, which have more bandwidth than x86 CPUs. NVLink will match the bandwidth of typical CPU memory systems, and it will enable GPUs to access CPU memory at its full bandwidth. GPUs have fast but small memories, and CPUs have large but slow memories. Accelerated computing applications typically move data from the network or disk storage to CPU memory, and then copy the data to GPU memory before it can be crunched by the GPU. With NVLink, the data moves between the CPU memory and GPU memory at much faster speeds.
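The staging pattern described above — data arrives in CPU memory and must be explicitly copied to GPU memory before the GPU can work on it — is what today's CUDA code looks like. A minimal sketch using the standard CUDA runtime API (the kernel and data sizes are made up for illustration; the `cudaMemcpy` transfers are the step that NVLink is intended to accelerate):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical kernel: doubles each element in place.
__global__ void scale(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Step 1: data lands in CPU memory (e.g. read from network or disk).
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h[i] = 1.0f;

    // Step 2: explicit copy into GPU memory over the interconnect
    // (PCIe today; NVLink aims to make this transfer much faster).
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // Step 3: only now can the GPU crunch the data.
    scale<<<(n + 255) / 256, 256>>>(d, n);

    // Step 4: copy results back to CPU memory for further use.
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);

    cudaFree(d);
    free(h);
    return 0;
}
```

Because the GPU's on-board memory is small relative to CPU memory, large workloads must repeat these copies, which is why the interconnect's bandwidth matters so much.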

The Unified Memory feature will simplify GPU accelerator programming by allowing the programmer to treat the CPU and GPU memories as one block of memory. NVIDIA GPUs will continue to support PCIe, but NVLink is substantially more energy efficient per bit transferred than PCIe. NVIDIA has designed a module to house GPUs based on the Pascal architecture with NVLink. This new GPU module is one-third the size of the standard PCIe boards used for GPUs today. Connectors at the bottom of the Pascal module enable it to be plugged into the motherboard, improving system design and signal integrity.
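Unified Memory, introduced in CUDA 6, already shows what "one block of memory" programming looks like: a single `cudaMallocManaged` allocation is visible to both processors, with the runtime migrating data behind the scenes. A minimal sketch (the kernel and sizes are illustrative):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical kernel: doubles each element in place.
__global__ void scale(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main(void) {
    const int n = 1 << 20;
    float *data;

    // One allocation, addressable from both CPU and GPU code;
    // no explicit cudaMemcpy calls are needed.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; i++) data[i] = 1.0f;  // CPU writes

    scale<<<(n + 255) / 256, 256>>>(data, n);     // GPU reads and writes
    cudaDeviceSynchronize();                      // wait before CPU touches it

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

With a fast interconnect like NVLink underneath, the migrations this model hides become far cheaper, which is what makes the programming model practical at scale.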

GPU Rendering Appliance

NVIDIA also launched a GPU rendering appliance that dramatically accelerates ray tracing, enabling professional designers to largely replace the lengthy, costly process of building physical prototypes. The new Iray Visual Computing Appliance (VCA) combines hardware and software to greatly accelerate the work of NVIDIA Iray, a photorealistic renderer integrated into leading design tools like Dassault Systèmes' CATIA and Autodesk's 3ds Max. Multiple Iray appliances can be linked, accelerating by hundreds of times or more the simulation of light bouncing off real-world surfaces. As a result, automobiles and other complex designs can be viewed seamlessly at high visual fidelity from all angles. This enables the viewer to move around a model while it's still in the digital domain, as if it were a 3D physical prototype.

“Iray VCA lets designers do what they’ve always wanted to – interact with their ideas as if they were already real,” said Jeff Brown, vice president and general manager of Professional Visualization and Design at NVIDIA. “It removes the time-consuming step of building prototypes or rendering out movies, enabling designs to be explored, tweaked and confirmed in real time. Months, even years – and enormous cost – can be saved in bringing products to market.”

About the Author

John Rath is a veteran IT professional and regular contributor at Data Center Knowledge. He has served many roles in the data center, including support, system administration, web development and facility management.
