
If you mean integrate the shader compiler from mesa into the ddx, that is a lot of work. If you mean use the 3D driver to generate the GPU asm, that should be possible. You'll have to write the program in either GLSL or TGSI, then you can dump the shaders. Afterwards they may require a bit of tweaking to handle differences in pipeline state between the 3D driver and the ddx.
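As a concrete sketch of the dumping step: Mesa has long supported a `MESA_GLSL` debug environment variable that makes the GLSL compiler print its output, so you can run any small GL test program under it and capture the generated shader code (the app name here is just an example; the exact dump format depends on your Mesa version and driver):

```shell
# MESA_GLSL=dump asks Mesa's GLSL compiler to print the shaders it
# compiles (source plus generated IR/asm) to stderr as the app runs.
MESA_GLSL=dump glxgears 2> shader-dump.txt
```

From there you would hand-edit the dumped program to match the ddx's pipeline state, as described above.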

It's not really an issue with 2D vs. 3D engines. 2D engines suck for RENDER too. The reason vesa or old drivers seem faster for certain things is because they use shadowfb or XAA (which ends up being shadowfb because offscreen acceleration has been disabled for years due to bit rot in XAA). Shadowfb is pure software rendering. Pure CPU rendering is almost always faster than mixed CPU/GPU rendering since there is no ping-ponging between GPU and CPU rendering.

If you want to compare, you can enable shadowfb in the radeon driver by setting Option "NoAccel" "True" in the Device section of your xorg config.
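A minimal Device section with acceleration disabled might look like the following (the Identifier string is arbitrary; adjust it to match your existing config):

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    # Disables all GPU acceleration; the driver falls back to
    # shadowfb, i.e. pure CPU rendering.
    Option     "NoAccel" "True"
EndSection
```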

Option "NoAccel" "True" also disables Xv acceleration, which makes this driver configuration not very useful in practice. It does, however, perform roughly the same as xf86-video-fbdev.