CUDA Spotlight: GPU-Accelerated Science and Computing

This week's spotlight is on Dr. Vincent Natoli, president and founder of Stone Ridge Technology.

Dr. Natoli is a computational physicist with 20 years of experience in high performance computing. Previous roles include senior physicist at ExxonMobil Corporation and Technical Director at High Performance Technologies Inc. (HPTi).

NVIDIA: Vincent, tell us a bit about Stone Ridge Technology.
Vincent: Stone Ridge provides products and services to the HPC market. I had the idea for the company back in 2002 and started it full time in 2005. I wanted to build a company that solves difficult problems in science and engineering on leading edge hardware platforms. It’s the border between science and computing that I love.

NVIDIA: What services do you provide?
Vincent: Most of our customers are in the oil and gas industry. We port, optimize, and develop from scratch high performance technical codes for some of the biggest corporations in the world.

It used to be easier to be both an expert in your science domain and able to do a decent job writing simulation code when people were writing mainly scalar Fortran code. It’s much more complex today with multi-core, multi-node cluster solutions, GPU computing and object-oriented languages. It’s a rare person who can do both well, and those are the people I hire!

NVIDIA: Why is GPU computing so relevant today?
Vincent: It’s relevant because it’s the leading compute platform for solving challenging, real world applications. All of the Tier 1 cluster vendors now offer GPU compute options.

NVIDIA: What kinds of applications benefit the most from GPU computing?
Vincent: The applications that benefit most from GPU computing are those that are compute-bound and floating-point intensive. In those cases you are able to take advantage of the large gap in peak FLOP performance between GPU and CPU.
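As a rough illustration of that distinction (the kernels and sizes here are my own invented examples, not from the interview): a kernel that does almost no arithmetic per byte it moves is limited by memory bandwidth, while one with high arithmetic intensity can approach the GPU's peak floating-point rate.

```cuda
#include <cstdio>
#include <cmath>

// Memory-bound: one load, one store, almost no arithmetic per element.
__global__ void scale(const float *in, float *out, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a * in[i];
}

// Compute-bound: many FLOPs per byte moved (high arithmetic intensity),
// so throughput tracks the GPU's peak floating-point rate instead.
__global__ void heavy(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = in[i];
        for (int k = 0; k < 256; ++k)   // artificial arithmetic load
            x = sinf(x) * cosf(x) + 0.5f;
        out[i] = x;
    }
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    int threads = 256, blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(in, out, 2.0f, n);  // bandwidth-limited
    heavy<<<blocks, threads>>>(out, out, n);       // FLOP-limited
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The compute-bound kernel is the kind of workload where the GPU's FLOP advantage over the CPU pays off most.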

NVIDIA: What advice do you have for a developer who hasn’t yet made the leap to GPU computing?
Vincent: I would say that if performance is important to you then GPU computing is well worth a try. Take a look at CUDA Zone to find a code similar to your own to see what kind of performance you can expect from the GPU. Profile your code and try porting the most significant hotspot first. The potential gain is well worth a few weeks of investigation. Along the way you will learn a lot about your code.
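A minimal sketch of that first-hotspot port, assuming a hypothetical stencil loop flagged by the profiler (the loop and names are illustrative, not from any real customer code): only the hot loop moves to a kernel, and the rest of the program stays untouched.

```cuda
#include <cstdio>

// Hypothetical hotspot found by profiling. CPU original:
//   for (int i = 1; i < n - 1; ++i)
//       out[i] = 0.5f * in[i] + 0.25f * (in[i - 1] + in[i + 1]);
__global__ void stencil(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= 1 && i < n - 1)
        out[i] = 0.5f * in[i] + 0.25f * (in[i - 1] + in[i + 1]);
}

int main() {
    const int n = 1 << 16;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));   // unified memory keeps the port small
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    int threads = 256, blocks = (n + threads - 1) / threads;
    stencil<<<blocks, threads>>>(in, out, n);    // only the hotspot runs on the GPU
    cudaDeviceSynchronize();

    printf("out[1] = %f\n", out[1]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Once the first hotspot validates against the CPU version, the same profile-port-validate cycle can be repeated on the next one.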

NVIDIA: What advice would you offer to CIOs who are looking at adopting GPU computing in their organizations?
Vincent: There are many considerations that organizations must carefully weigh when considering a new technology. (My recent article in HPCwire explores some of these issues.)

For a new technology to even get on the radar, it should offer superior performance, it should not depend on the HPC market exclusively for its success, and it should have a mature development environment. GPUs qualify on all three fronts.

On the issue of development environment, NVIDIA deserves a lot of credit for delivering CUDA to the community. I’ve said before that the most enduring contribution to HPC from GPU computing may well be the CUDA programming model.

NVIDIA: Tell us about the ROI of using GPUs in a heterogeneous computing environment, in terms of speedup, man months, etc.
Vincent: The ROI is going to vary for each company as the benefits and the costs of GPU computing will be weighed differently. In general, on the return side, applications that are memory-bound can see about a 6X improvement and compute-bound applications can see from 8X to over 20X.

Cost savings are realized by getting answers more quickly and by reducing infrastructure footprint and power budget. Those savings are worth different amounts to a firm in finance, oil and gas, or bioinformatics. On the cost side there are direct costs like the hardware and code ports and indirect costs like potential inefficiencies introduced by changes to the IT infrastructure. For a data point on the cost of porting, optimizing, validating and integrating code, our projects range in duration from three to nine months.

NVIDIA: As computing becomes more powerful, what will the future hold?
Vincent: I like to point out that the field of computing, and HPC in particular, is not in the middle of its development; it’s not even at the beginning of its development. It’s at the beginning of the beginning. It’s only been 50 years since computers began to have a significant impact on business. It’s humbling to think what the field will be like 50 and 100 years in the future.

The near term three-to-five year timeframe is more accessible to prognostication. With respect to general computing, I believe the cloud will dominate. The economies of scale there make so much sense. Computing will become more like a utility to which you subscribe similar to the way we subscribe now to music and video.

In the HPC realm I believe the trend of leveraging easily-accessible systems from industry leaders like NVIDIA will continue. I am not as enthusiastic about the convergence of CPU and GPU architectures as others who see it around the corner. In the implementations I’ve seen, the integration basically throws out all the advantages of GPUs by reducing the number of cores and the memory bandwidth.

I believe there will continue to be advances in data parallel programming models such as CUDA which will allow developers to focus on writing optimal kernel code that scales more transparently. Finally, I see increasing attention and emphasis on power/FLOP and power/Byte as we move to larger and larger systems.