GPU Computing

GPU Computing Creates New Paradigm

High-performance computing (HPC) originated with clusters of CPUs, and more recently with clusters of scores of multi-core CPUs, to reach higher levels of computational power. The GPU (Graphics Processing Unit), and the more recent advent of GPU computing, have created a fundamental shift in computing architecture by introducing a hybrid model in which GPU add-in cards work in conjunction with CPUs.

Servers Designed for GPU Computing

While CPUs are excellent sequential processors adept at serial operations, GPUs were designed from the start to excel at workloads that can be split into many independent pieces and processed in parallel. This is because GPUs contain hundreds of parallel cores capable of running thousands of parallel threads.
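To make the split-into-pieces idea concrete, here is a minimal sketch in CUDA, one common GPU programming model. The kernel name `scale` and the scaling operation are illustrative choices, not anything specific to the products described here; the point is that each of the thousands of GPU threads handles exactly one piece of the data.

```cuda
#include <cuda_runtime.h>

// Each GPU thread scales one element of the array: the same simple
// operation applied to many independent pieces of data in parallel.
__global__ void scale(float *data, float factor, int n)
{
    // Compute this thread's global index from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)            // guard: the last block may extend past n
        data[i] *= factor;
}
```

Because no element depends on any other, the GPU is free to run as many of these threads at once as its cores allow.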

Maximizing GPU performance, however, begins with the underlying hardware architecture. At Trenton, that means single-board computers and PCI Express backplanes designed from the start with data throughput in mind.

Strategic NVIDIA Partnership

Programmers take advantage of this architecture by directing the most performance-critical sections of their programs to run across multiple GPU cores. By offloading this work from the CPU, 10x or greater performance increases are common, and this factor is likely to grow as future generations of GPU computing technology are developed.
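The offload pattern the paragraph describes can be sketched with the CUDA runtime API. This is a generic outline, not code tied to any particular Trenton or NVIDIA product: the host copies data to the GPU, launches a kernel (a hypothetical `scale` operation here) across many thread blocks, and copies the result back.

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

// Illustrative kernel: each thread doubles one array element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);               // allocate GPU memory
    cudaMemcpy(dev, host, bytes,           // offload input to the GPU
               cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int blocks = (n + 255) / 256;
    scale<<<blocks, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, bytes,           // retrieve the results
               cudaMemcpyDeviceToHost);

    cudaFree(dev);
    free(host);
    return 0;
}
```

In a real application, only the performance-critical section is moved to the GPU this way, while the CPU continues to handle the serial portions of the program.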