
Benchmarking The Intel Ivy Bridge Gallium3D Driver

Phoronix: Benchmarking The Intel Ivy Bridge Gallium3D Driver

While Intel only supports their classic Mesa DRI driver when it comes to their open-source 3D driver on Linux, there is also an independently developed Gallium3D driver for the Sandy Bridge and Ivy Bridge generations of Intel graphics processors. In this article are benchmarks of the new Intel (i965) Gallium3D driver on Ivy Bridge HD 4000 hardware.

These results are impressive. They managed to achieve 30 to 70% of the performance with a tenth or less of the manpower and only a few months of development... I wonder what this could become with 20+ full-time developers working on it.

The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.

Originally Posted by log0

The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.

This.
The performance difference looks nearly the same as radeon vs. Catalyst and nouveau vs. NVIDIA. The same 50%.
I really wonder if radeon and nouveau are actually slower due to Gallium?!

Originally Posted by log0

The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.

Now that Gallium supports both TGSI and LLVM IR (i.e. more than one kind), it probably wouldn't be too hard to add support for GLSL IR and drop in the classic driver's compiler backend. Less than a week, I'd imagine.

On that hardware the fixed-function path will be implemented using shaders. The question is whether these shaders are compiled at runtime or are available in some intermediate/assembly form, bypassing the compiler.

Now that Gallium supports both TGSI and LLVM IR (i.e. more than one kind), it probably wouldn't be too hard to add support for GLSL IR and drop in the classic driver's compiler backend. Less than a week, I'd imagine.

Yep, I'm pretty sure that's it, and plumbing GLSL IR through Gallium would allow it to reuse the existing shader compiler backend that Intel has spent so much time optimizing.

However, I don't think Gallium support would be that easy. Right now I don't think it supports LLVM IR directly; it just allows translating TGSI into it before it goes to the driver. I don't think you'd want to go from GLSL IR -> TGSI -> GLSL IR inside Gallium, which means you'd have to add the ability to remove TGSI from the pipeline altogether, and I think that's still quite a bit of work before that can happen. Maybe not, though.
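A toy Python sketch of the two shader paths under discussion, with IRs modeled as tagged strings. Every function name here is invented for illustration and is not Mesa API; the point is only that the round trip adds translation passes without changing the result:

```python
# Invented pass names: each function models one IR translation stage.
def glsl_ir_to_tgsi(ir):
    return ir.replace("GLSL_IR", "TGSI")

def tgsi_to_glsl_ir(ir):
    return ir.replace("TGSI", "GLSL_IR")

def glsl_ir_to_i965(ir):
    return ir.replace("GLSL_IR", "i965")

# Round-trip path the post objects to: the state tracker lowers GLSL IR
# to TGSI, then the driver converts back to GLSL IR for the classic
# compiler backend.
def round_trip_path(shader):
    return glsl_ir_to_i965(tgsi_to_glsl_ir(glsl_ir_to_tgsi(shader)))

# Direct path: hand GLSL IR straight to the backend, one pass instead
# of three.
def direct_path(shader):
    return glsl_ir_to_i965(shader)
```

Both paths end at the same backend code; removing TGSI from the middle is purely about skipping the lossy, redundant translations.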

Gallium tries to translate everything into shaders. The fixed function calls get translated into TGSI, and then the driver is responsible for calling into the hardware correctly, which is generally going to be by creating shaders.

Originally Posted by kbios

These results are impressive. They managed to achieve 30 to 70% speed with 1/10th or less of the manpower and only a few months of development... I wonder what this could become with 20+ full time developers working on it.

This does build on top of quite a bit of work already done by the Intel devs. For example, it shares all the same kernel driver code, and I imagine quite a bit of the hardware driver code was just copied out of the i965 driver.

I told you so!

Originally Posted by log0

The question is whether this perf difference is really due to gallium or the driver backend pieces? And if it is gallium, this project can be a great opportunity to identify and fix those slow paths. I know, not so exciting for Intel, as they've got nothing to gain here.

Intel has some "egg on their face" and "foot in their mouth" to gain here:
1. if someone can finally prove that the Gallium architecture is performant.
2. leveraging Gallium allowed one or two guys to do what took 20 Intel engineers.

And users have much less chance of being denied features
simply because Intel chooses not to implement them.

Ah, the sweet satisfaction that comes from those simple words: "I told you so!".
It almost makes it worth the wait!