updated 09:15 pm EDT, Tue August 12, 2008

NVIDIA questions Larrabee

Plenty of buzz surrounds Intel's Larrabee GPU project, especially with the SIGGRAPH conference taking place this week in Los Angeles. Larrabee is being developed by Intel for release in 2009 or 2010 to compete with NVIDIA and AMD/ATI products, and the new graphics chip will use multiple complete x86 processor cores. NVIDIA has presented its viewpoint on the situation, questioning many of Intel's claims about Larrabee.

Intel has claimed the x86 instruction set and "new languages" are the solution to parallel computing. The shift to multi-core programming has been difficult for many developers, even when scaling from just 2 to 4 cores. Intel hasn't released the official number of cores that will be used in Larrabee, but graphs of projected scenarios presented at SIGGRAPH showed a range from 8 to 48. NVIDIA asks, "if it'll be easy to program 32 cores with 16-wide SIMD, why aren't more developers using quad cores with 4-wide SIMD? And if Ct is the answer, then why not use it on their CPU's?" Ct is Intel's programming model developed for future multi-core chips.

NVIDIA suggests the real challenge in parallel computing lies elsewhere. Developers must first decompose a problem into parallel parts and then design software to exploit a parallel processor. GPUs have long handled graphics workloads in parallel, and developers have written complex programs to run graphics, and even physics, through a standard graphics pipeline. NVIDIA introduced its CUDA computing architecture in 2006, adding new instructions and architectural features along with a compiler that supports the C language. NVIDIA claims CUDA is fully programmable in C, and is not a new language despite comments suggesting otherwise.
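The decomposition step NVIDIA describes can be sketched in plain C. In the loop below no iteration depends on any other, which is exactly the property that lets a problem map onto a parallel processor; under CUDA the loop body would become a kernel and each iteration would run as its own thread. This is a minimal illustrative sketch, not NVIDIA's code, and the function name is invented for the example.

```c
#include <stddef.h>

/* Serial reference for a data-parallel problem: every iteration is
   independent of the others, so each one could be assigned to its own
   thread on a parallel processor. In CUDA the loop body would become a
   kernel and the loop index would become the thread index. */
void vec_add(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        c[i] = a[i] + b[i];   /* no iteration reads another's result */
    }
}
```

A loop with a dependence between iterations (for example, a running sum) would not decompose this way, which is the kind of restructuring work NVIDIA argues is the real cost of parallel programming.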

More questions were raised regarding Intel's suggestion that Larrabee will be 'seamless' for developers. With uncertainty still surrounding the possibility that Larrabee will use a new programming model, there could be compatibility issues. NVIDIA questions whether current applications for Intel CPUs will need to be modified to run on Larrabee. Conversely, will applications written for the new GPU run without modification on Intel's current production multi-core CPUs? NVIDIA suggests there could be further compatibility issues if Larrabee uses a different SIMD instruction set than the CPUs do.

NVIDIA's own architecture, by contrast, has proven to scale from 8 to 240 cores, giving developers the advantage of writing an application once and running it across multiple platforms. The compatibility of CUDA across GPUs and CPUs was demonstrated with an astrophysics program running on an 8-core GPU integrated into a chipset, a G80-class GPU, and a quad-core CPU. The GPUs all used the same binary code, and the source code was shared between the CPUs and GPUs. Additionally, CUDA works on all NVIDIA GPUs introduced in the past two years, and there are already more than 90 million C-language-enabled GPUs in use.

So far Intel has not described Larrabee's development environment. Given the possibility that a new language will be required, and the existing difficulties of multi-threaded programming, developers might have to learn an entirely new programming model. NVIDIA states that "Parallel computing problems are not solved with device level instruction sets, these problems are solved in computing languages with a computing architecture that is quick to learn and easy to use." Until some of these questions are answered, it will be difficult to gauge developers' reactions to the new hardware. Larrabee is poised to have a significant impact on the GPU market, and consumer reaction has been generally positive; there is always a market for more powerful graphics hardware. If developers see Larrabee as worth the potential challenges of a new architecture, it could change the focus of graphics programming.