
ATI's Gallium3D Driver Is Still Playing Catch-Up


Yesterday we delivered benchmarks showing how the open-source ATI Radeon graphics driver stack in Ubuntu 10.04 compares to older releases of the proprietary ATI Catalyst Linux driver. Sadly, the latest open-source ATI driver is still no match even for a two- or four-year-old proprietary driver from ATI/AMD, but that was with the classic Mesa DRI driver. To yesterday's results we have now added results from ATI's Gallium3D (R300g) driver, using a Mesa 7.9-devel Git snapshot from yesterday, to see how it runs against the older Catalyst drivers.

Did any work go into optimizing, or is it only work on features right now?
What I'm actually asking is: can we expect performance jumps if one of the developers finds time to spend on optimizing the drivers?


I expect OpenCL performance is going to be driven by two things - shader compiler efficiency packing multiple instructions into a VLIW, and memory manager maturity. Both are hard but not impossible, and I *think* the OpenCL use cases should be less varied than OpenGL and easier to deal with. Other than that, raw power of the GPU should drive things.
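To make the VLIW packing point concrete, here is a toy sketch of greedy instruction bundling. This is a hypothetical IR and scheduler, not the actual r300 compiler: instructions whose operands are already computed can share a bundle, up to the machine's issue width, and better packing means fewer bundles, i.e. fewer GPU cycles per shader.

```python
def pack_vliw(instructions, width):
    """Greedily pack instructions into VLIW bundles.

    instructions: list of (name, set_of_dependency_names), assumed acyclic.
    width: number of issue slots per bundle.
    """
    done = set()               # results available before the current bundle
    remaining = list(instructions)
    bundles = []
    while remaining:
        bundle, deferred = [], []
        for name, deps in remaining:
            # Issue only if every dependency was produced in an EARLIER bundle
            # and a slot is free; otherwise defer to a later bundle.
            if len(bundle) < width and deps <= done:
                bundle.append(name)
            else:
                deferred.append((name, deps))
        bundles.append(bundle)
        done |= set(bundle)    # results become visible on the next cycle
        remaining = deferred
    return bundles

# Hypothetical 5-instruction shader with a small dependency chain:
shader = [
    ("a", set()), ("b", set()),
    ("c", {"a"}), ("d", {"a", "b"}),
    ("e", {"c", "d"}),
]
print(pack_vliw(shader, width=2))  # [['a', 'b'], ['c', 'd'], ['e']]
```

Five instructions fit in three bundles instead of five serial slots; a smarter scheduler (reordering, latency awareness) can do better still, which is exactly the kind of headroom the comment is pointing at.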

Remember that there hasn't been a lot of optimization on the Gallium3D code base yet, just a push to get to classic Mesa levels so that 300g can replace 300.

BTW the shape of the resolution vs performance curves (generally flat) indicates that performance is basically CPU limited right now, so a round of profiling and optimizing should make a hefty difference. The question is whether the bulk of the CPU time is being used in the HW driver stack or in the upper level Mesa code.
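The heuristic behind that observation can be sketched in a few lines (illustrative numbers, not figures from the article): if the frame rate barely moves as the pixel count grows, the GPU isn't the bottleneck, so the CPU-side driver code is.

```python
def looks_cpu_limited(fps_by_resolution, tolerance=0.10):
    """Rough test for a CPU-bound benchmark.

    fps_by_resolution: {(width, height): average_fps}.
    Returns True when the spread between the fastest and slowest
    resolution is within `tolerance` of the best rate, i.e. the
    resolution-vs-performance curve is essentially flat.
    """
    rates = list(fps_by_resolution.values())
    return (max(rates) - min(rates)) / max(rates) <= tolerance

# Hypothetical results: a flat curve vs. one that scales with pixel count.
flat    = {(800, 600): 61.0, (1280, 1024): 59.5, (1920, 1080): 57.8}
scaling = {(800, 600): 120.0, (1280, 1024): 75.0, (1920, 1080): 45.0}
print(looks_cpu_limited(flat))     # True  -> profile the CPU-side driver path
print(looks_cpu_limited(scaling))  # False -> GPU-side work dominates
```

A profiler run (e.g. sysprof or oprofile) on a flat-curve case would then show whether the cycles land in the hardware driver or in the shared upper-level Mesa code.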