
Romanick believes the open-source Gallium3D graphics drivers are slow because of excessive CPU overhead, with Gallium3D attempting to do and support too much.

Well, it's hard to say how well Gallium would perform if you threw as many developers at it as Intel currently has in its FOSS GPU driver department.
Am I missing something? I mean, which Gallium driver has been developed with nearly the same momentum?


I have developed, and continue to develop, quite considerable OpenGL applications across multiple platforms, so I am not just throwing peanuts here but stating things I have empirically tested over a long time. And, to be completely unfair to Intel: actually, it's quite easy to say how it would perform if you threw as many developers at it as Intel currently has in its FOSS GPU driver department - very, VERY abysmally. The Windows driver is still far and away the better performer: with heavy modern OpenGL code on Ivy Bridge, framerates continue to be about twice as good on the Windows side. And even then the Windows Ivy Bridge drivers are lackluster. So what does that make the Linux drivers? Rhetorical question.

Ivy Bridge has within it the potential to be a very decent gaming GPU, but Intel has squandered that potential by being criminally stingy with its driver development teams on ALL platforms - even for Direct3D, not just OpenGL. It seems Intel is far more interested in a long-term marketing effort, not a development effort, to squeeze its rivals (Nvidia, AMD) from below with the verisimilitude, not the actuality, of a good IGP. It makes no sense - the hardware is there - but they refuse to use it responsibly by adequately staffing their driver teams with people with the right technical expertise to make performance scream. Why put a hot-rod engine in the vehicle if you're going to disconnect the gas pedal? To sell it as a marketing bullet point, apparently.


Semi-retraction: Upon discussion, it would appear the issue is not quite as dire as I made it out to be with regard to team staffing anymore, so I apologize for shamelessly venting frustrations based on outdated information - but the empirical results on performance still stand.

Intel has avoided Gallium3D, and it looks like they will continue to do so for the foreseeable future. While some users are fond of Gallium3D for its "universal" architecture and for state trackers that go beyond 3D/OpenGL (such as the aforementioned 2D, video acceleration, GPGPU/OpenCL, D3D, etc.), that flexibility leads to way too much overhead. Other users think Gallium3D is faster, or want an Intel Gallium3D driver simply because it's the latest "popular" topic as of late.


Why should they contribute to Gallium and fix performance for the other GPUs? It would make their own hardware look rather bad in comparison.

I kinda agree with Intel on not using Gallium3D. They don't need an extra software layer to develop drivers for video cards they designed and made themselves. It would just slow things down unnecessarily and add more overhead.

I think the real reason some Gallium drivers are so slow is that they are just incomplete. Most drivers don't have a good optimizing shader compiler (if they have a compiler at all), and R600 still has 2D tiling on DRI buffers disabled by default. There are a lot of apps that are not CPU-bound, and there the overhead of Gallium doesn't matter. Yes, Gallium adds some overhead, but it's really not that much - though, yes, it used to be a lot worse in 2009. The strong advantages of Gallium are:
- It can be thread-offloaded very easily, and it would not need any synchronization with the driver thread for user buffer uploads or for uploads with the discard flag (map_buffer_range). It's ridiculous how easy this is with Gallium.
- It filters out most of the redundant state changes core Mesa makes (core Mesa's state management is a disaster, BTW: OpenGL states map very poorly to the _NEW flags, which in turn map very poorly to drivers, even the classic ones - but I guess you know that already).
- With immutable state objects, you can take a set of states and obtain a GPU command buffer setting those states instantly.
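The last two points can be sketched in C. This is a hypothetical, heavily simplified illustration of how immutable state objects make redundant-state filtering a cheap pointer compare - the struct and function names here are made up for the example, not the real Gallium API:

```c
#include <stddef.h>

/* An "immutable state object": filled in once at creation time and
 * never modified afterwards, so two identical binds can be detected
 * by comparing pointers instead of comparing every field. */
typedef struct {
    int blend_enable;
} blend_state;

typedef struct {
    const blend_state *bound_blend; /* currently bound state object */
    int hw_updates;                 /* how often we actually touched "hardware" */
} context;

static void bind_blend_state(context *ctx, const blend_state *bs)
{
    if (ctx->bound_blend == bs)
        return;            /* redundant state change: filtered out for free */
    ctx->bound_blend = bs;
    ctx->hw_updates++;     /* pretend we emit GPU commands for this state here */
}
```

Because the state object can't change after creation, the driver could also precompute the GPU command words for it once and replay them on every bind, which is the "obtain a command buffer instantly" part.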

Hypothetically speaking, all the additional overhead Gallium adds today could be removed by simply dropping all the classic drivers and building core Mesa around Gallium. Hell, we might even end up a lot better off than the classic drivers are today.

And finally, it's really easy to pick one underperforming driver and use it as an argument that something else (Gallium) is slow.