What Should We Expect from Intel’s Larrabee Graphics Cards?

There is a lot of excitement about Larrabee; it's not every day that Intel decides to enter a new market. But does Nvidia really have anything to worry about? For GPGPU, yes. In outright consumer graphics, maybe not so much.

So Is Larrabee that Different from Existing GPUs?

We compared Larrabee to existing graphics processor architectures in this article and found that the differences are not as large as one might expect. One difference that does stand out, the lack of dedicated ROP hardware, potentially puts Larrabee at a disadvantage. But we focused on the graphics side, and therefore the vector units, of Larrabee. As the block diagram on the right shows, that's only one part of each Larrabee core.

Each Larrabee vector unit is wrapped in its own CPU core. This adds a great deal of programmability, but the benefits are largely on the GPGPU side. Those CPU components don't necessarily add to Larrabee's graphics prowess, yet they still take up die space and draw power. That may not be a good trade-off for a graphics card.

Intel has other differences it would rather you focus on. The company feels that Larrabee is a radical departure from the current graphics pipeline, and the slide below left, from an Intel presentation, shows how it wants people to see things.


While the clouds-versus-squares metaphor is less than crystal clear, the point can be deciphered: the traditional pipeline is made of inflexible yellow squares, while the new one is made of flexible clouds. Of course, the clouds don't tell you that while they mean flexible hardware shaders in the DX column, they mean software rendering in parts of the Larrabee column. Once you know that, Intel's own slide shows it is heading back to pre-1996 land with Larrabee.

The other problem is that the DX8-DX10 column really only represents things up to DX9. The DX10 pipeline (pictured above and at the right) can kick data back to the thread processor to be split up and sent through the stream processors again for further shading work. DX10 also unified the shaders, which is when Nvidia started calling them stream processors.

So the Larrabee pipeline is more flexible, but not by as much as Intel wants you to think, and parts of it have to be rendered in software to get that flexibility. When all is said and done, you still have a bunch of SIMD units crunching vectors at the heart of the matter.
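The "software rendering on SIMD" idea can be sketched very simply: instead of fixed-function hardware shading one pixel at a time, a software renderer processes pixels in batches matching the vector width. The 16-lane width below matches what has been reported for Larrabee's vector unit; the shading math itself is made up purely for illustration.

```python
# Illustrative sketch: a software renderer on a SIMD machine shades
# pixels in vector-width batches rather than one at a time.
# The 16-lane width follows reports about Larrabee's vector unit;
# the lighting formula is a stand-in, not any real shader.

VECTOR_WIDTH = 16  # reported width of Larrabee's vector unit


def shade_batch(colors, light):
    """One 'vector instruction': a multiply-add applied to every lane at once."""
    return [min(1.0, c * light + 0.1) for c in colors]


def shade_scanline(pixels, light=0.5):
    """Walk a scanline in vector-width batches, as the SIMD hardware would."""
    shaded = []
    for i in range(0, len(pixels), VECTOR_WIDTH):
        shaded.extend(shade_batch(pixels[i:i + VECTOR_WIDTH], light))
    return shaded


print(len(shade_scanline([0.2] * 32)))  # 32 pixels shaded, two batches of 16
```

The flexibility Intel is advertising comes from the fact that `shade_batch` is ordinary code: swap the formula and you have a different pipeline stage, with no fixed-function silicon in the way.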

How Fast Will It Be?

Obviously it's far too early to tell; we probably won't see a Larrabee-based card until next year. But some of the guesses and rumours we've seen aren't all that inspiring. The software rendering is one concern; another is whether all of those CPU bits, which are great for GPGPU work, will make Larrabee too large, hot, expensive, and power hungry for traditional GPU use.

A popular guess for the number of cores on the Larrabee card is 32, which would make it competitive with currently available high-end cards, at least in terms of performance. And, assuming a 45nm manufacturing process, it will also be large, hot, expensive, and power hungry.
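The rumoured numbers make for simple back-of-the-envelope arithmetic. Taking the 32-core figure and the reported 16-wide vector unit as given, and guessing at a fused multiply-add (2 flops per lane per cycle) and a 2 GHz clock — both assumptions, since Intel has announced neither — peak throughput works out as follows:

```python
# Back-of-the-envelope peak throughput for the rumoured 32-core Larrabee.
# Core count and 16-wide vector unit follow the rumours discussed above;
# the 2 GHz clock and multiply-add per cycle are guesses for illustration.

cores = 32
vector_lanes = 16
flops_per_lane_per_cycle = 2  # one multiply + one add per lane
clock_ghz = 2.0               # hypothetical clock speed

peak_gflops = cores * vector_lanes * flops_per_lane_per_cycle * clock_ghz
print(f"{peak_gflops:.0f} GFLOPS peak (single precision)")  # prints "2048 GFLOPS peak (single precision)"
```

That would put it in the same ballpark as today's high-end GPUs on paper, but peak numbers say nothing about how much of that survives the software-rendering overhead discussed above.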

Further rumours have indicated that Larrabee will need a 12-layer circuit board. That's more layers than any graphics card to date, and it won't come cheap. One should never count Intel out, but it looks like the Larrabee graphics card won't have buyers lining up for it.

Larrabee obviously has a lot of potential as an HPC GPGPU part, a market Nvidia currently enjoys with its Tesla line of products running CUDA. It's a tiny market compared to consumer graphics, but Larrabee could turn into a real winner there. As a consumer graphics card, it may be more of a flamboyant gesture towards Nvidia's core business than something Intel really expects to dominate with.