GPUs have greater raw computational power than conventional CPUs, but have a more limited repertoire of tasks. Combining hundreds of individual processors, they excel at applying simple repetitive calculations to large bodies of data.
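The pattern GPUs exploit can be sketched in ordinary array code. The following NumPy snippet is purely illustrative (it runs on a CPU, not a GPU): it applies one identical, simple calculation to a million values at once, which is exactly the kind of "simple repetitive calculation on a large body of data" that a GPU's hundreds of processors can split up and run in parallel.

```python
import numpy as np

# Data parallelism in miniature: one simple operation applied
# independently to every element of a large array. A GPU runs
# thousands of such element-wise operations simultaneously.
pixels = np.arange(1_000_000, dtype=np.float32)

# A brightness adjustment: identical arithmetic on each of a
# million values, with no dependency between elements.
adjusted = pixels * 1.5 + 10.0

print(adjusted[:3])  # first few adjusted values
```

Because each output element depends only on its own input element, the work divides cleanly among any number of processors.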

Nicolas Pinto of the Massachusetts Institute of Technology is using them in his efforts to crack the brain’s formula for recognising objects in images. “The interesting thing about a GPU is that they are made to produce a visual world,” he says. “What we want to do is reverse that process.

Hidden rules

“When an object moves across your retina, it will obey certain rules, the physical rules of the world,” Pinto says. “We are trying to learn these rules from scratch.”

Last year, for less than $3000, he built a 16-GPU “monster” desktop supercomputer to generate and test over 7000 possible variations of an object-recognition algorithm on video clips.

To test each model, Pinto’s makeshift supercomputer performed statistical analysis in both space and time on thousands of frames of video to find objects moving through the scene. By selecting the models best able to decipher the action, he was able to match or better more traditional approaches.
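The article does not describe Pinto’s actual algorithms, but a hypothetical toy version of spatio-temporal analysis conveys the idea: measure how much each pixel varies over time across a stack of frames, and flag the pixels that change as candidate motion. The synthetic "video" and threshold below are illustrative assumptions, not his method.

```python
import numpy as np

# Hypothetical illustration, NOT Pinto's algorithm: locate motion by
# measuring each pixel's variation across time.

# Synthetic "video": 10 frames of an 8x8 static background...
frames = np.zeros((10, 8, 8), dtype=np.float32)
# ...with a bright "object" moving one column per frame along row 4.
for t in range(10):
    frames[t, 4, t % 8] = 1.0

# Temporal variance per pixel: static pixels score exactly 0,
# pixels the object passed through score higher.
variance = frames.var(axis=0)
moving = variance > 0

print(int(moving.sum()))  # → 8 pixels touched by motion (row 4, cols 0-7)
```

Each pixel's statistic is computed independently of its neighbours, so this kind of analysis maps naturally onto a GPU's many processors.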

He says this kind of work would previously have been possible only with a fully fledged supercomputer.

“If we weren’t newcomers in this field and could apply for multi-million dollar grants, then yes, we could probably get one of these massive computers from IBM,” he says. “But if money is an issue, or you are a newcomer, that is too expensive. It’s very cheap to buy a GPU and explore.”

Easy power

The latest graphics cards, from manufacturers ATI and Nvidia, have 512 individual processors. By dividing the work among these processors, the cards can reach speeds of half a trillion calculations per second.
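A back-of-envelope check shows how the half-a-trillion figure arises. Assuming (hypothetically, since the article gives neither number) that each of the 512 processors completes roughly one calculation per cycle at a clock rate of about 1 GHz:

```python
# Back-of-envelope sanity check of the "half a trillion" figure.
# The clock rate and one-calculation-per-cycle throughput are
# assumptions for illustration, not figures from the article.
processors = 512
clock_hz = 1e9        # assumed ~1 GHz clock
ops_per_cycle = 1     # assumed one calculation per cycle per processor

peak = processors * clock_hz * ops_per_cycle
print(f"{peak:.2e} calculations per second")  # → 5.12e+11, about half a trillion
```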

Harnessing that power has not always been easy, however: early on, scientists had to disguise their calculations as graphics operations. “Things were in computer graphics shader languages and texture coordinates – none of the stuff we were used to in scientific computing,” says Chris Johnson, director of the Scientific Computing and Imaging Institute at the University of Utah in Salt Lake City. “It was extraordinarily difficult to map your problem to a GPU.”

Exaflops beckons

While GPUs make desktop supercomputing accessible to a wide range of researchers, flagship computing centres such as Oak Ridge National Laboratory in Tennessee have also taken notice. Oak Ridge announced last October that its next supercomputer, predicted to be the world’s fastest, would be built from GPUs.

“As we look at how to get the next 1000 times faster, to an exaflops, or 10^18 calculations per second, we see a lot of big challenges,” says Buddy Bland, a project director at Oak Ridge.

He says that the lab already uses clusters of GPUs for some number-crunching tasks such as climate modelling and simulations of supernovas. He says that increased precision and speed, along with reduced power consumption, make the cards an attractive option for the next generation of supercomputers. “We think this is one path to getting the higher-performance computing that we need.”