
Using Gallium3D On AMD FirePro Workstation GPUs

03-30-2011, 06:00 AM

Phoronix: Using Gallium3D On AMD FirePro Workstation GPUs

How well do AMD's FireGL/FirePro workstation graphics cards work with the open-source graphics drivers for Linux? It's something we have never really focused on up to this point, since most workstation users are satisfied with the proprietary display drivers on Linux. After all, it is the workstation market that drives proprietary Linux driver development at AMD and NVIDIA; that is the real focus of development, not Linux gamers or enthusiasts. But curiosity got the best of me, so here's what happens if you try to use an expensive FirePro graphics card with the open-source driver stack and the Mesa Gallium3D driver.

Comment

As a point of interest: if anyone tries to run an application whose own background processing will pretty much peg your CPU, you won't see much of a difference between r600g and Catalyst. Your FPS will top out around 30 - 50 fps on average.

Back in the old days, my favorite example of this "damn app is a CPU hog all by itself" effect was the MMO game, Anarchy Online. MMOs do indeed have a lot of background processing to do with all the dynamic content flying around. I remember vastly upgrading my GPU and not getting any FPS improvement in AO. It was only years later when I picked it up again using a quad core CPU with more cache that I could finally start to get FPS in the 60 and above range.

Nowadays, my favorite example is Second Life (and related viewers). It's an MMO by some definitions, a metaverse by others. But what makes SL king of the CPU is that it gobbles down cycles like no tomorrow because ALL the content (the textures, the meshes, the animations, everything but the shaders) is dynamically generated based on actions of other users. So the engine cannot be optimized ahead of time to consume low amounts of CPU during render-time.

What you find out is that things normally done during "loading" time for most games are done during render-time for Second Life, hence the high CPU usage. Things like:

-Decoding audio streams
-Decoding JPEG2000 images (textures)
-Decoding meshes which are sent in a binary format over the network
-Converting the serialized mesh data into actual OpenGL triangles
-Applying shaders and lighting to rendered surfaces (and applying it again and again as they change in ways that the client cannot predict ahead of time)
-Applying animations to wireframes to make them move as expected frame to frame
-All the network I/O that is necessary to pull down all of this
-Plus all the standard MMO stuff, such as chat, internal OpenGL-based window management, avatar positions, inventory updates, etc.
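A back-of-the-envelope way to see why work like the above caps FPS: if the render thread has to spend a fixed chunk of CPU time per frame on decoding and animation, the frame rate is bounded by that work no matter how fast the GPU draws. A minimal sketch of that arithmetic (the millisecond figures are hypothetical, not measured from Second Life):

```python
# Toy model: per-frame CPU work on the render thread caps FPS,
# independent of GPU speed. All numbers are illustrative.

def max_fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """Frame time is bounded by the slower of the two stages
    when the CPU-side work runs on the render thread."""
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / frame_ms

# Hypothetical SL-style frame: ~40 ms of decode/animation work
# on the CPU each frame, regardless of how fast the GPU is.
slow_gpu = max_fps(cpu_ms_per_frame=40.0, gpu_ms_per_frame=15.0)
fast_gpu = max_fps(cpu_ms_per_frame=40.0, gpu_ms_per_frame=3.0)

print(round(slow_gpu))  # 25
print(round(fast_gpu))  # 25 -- a much faster GPU changes nothing
```

This is also why swapping r600g for Catalyst (or a faster GPU for a slower one) barely moves the needle in such an application: the CPU term dominates the max().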

If you look at the above list, most of that stuff is done on the CPU. So even though I have one of the fastest desktop GPUs on the market, my "above average but not astonishingly powerful" Core i7 920 still yields crappy FPS, Catalyst or no.

In fact, lowering my detail settings doesn't even help. Nor does running Windows (but then, what would lead one to believe that would help anyway?) -- try as I might, I can't seem to get more than 20 or 25 FPS out of Second Life in areas of "typical" complexity. In areas of above-average complexity (such as 20 or 30 avatars on the screen), the FPS can drop to 10 and below.

So the funny thing is, I get about 20 - 25 FPS on both r600g and Catalyst in this application. You'd think that if r600g were really that CPU-hungry, then it would further degrade framerate compared to Catalyst because it'd be robbing the userspace application of some of that CPU power, right? -- No, not that I can measure.

My conclusion is that, if you're just running applications that are either very simple (e.g. desktop compositing, YouTube) or very complex (e.g. Second Life), r600g should work indistinguishably from Catalyst. The only problems arise when you run apps that either (a) demand OpenGL > 2.1, (b) demand features of OpenGL 2.1 that are not yet supported, or (c) are very taxing on the GPU but easy on the CPU. For my particular workload, the only app I can claim hits any of these is Unigine OilRush, and right now I'm not enthusiastic enough about it to care. I have much better peace of mind just using the open-source drivers.

But no -- if you are using the open drivers with a workstation card in a production CAD environment, you need to have your head examined.

Comment

What you find out is that things normally done during "loading" time for most games are done during render-time for Second Life, hence the high CPU usage ... try as I might, I can't seem to get more than 20 or 25 FPS out of Second Life in areas of "typical" complexity. In areas of above-average complexity (such as 20 or 30 avatars on the screen), the FPS can drop to 10 and below.

That explains why, every time I see a shot of Second Life on TV, it looks like it's being played on a Pentium 2 with a really crappy video card (S3 ViRGE). That's right, I never tried to play/use/whatever it myself.

So the funny thing is, I get about 20 - 25 FPS on both r600g and Catalyst in this application. You'd think that if r600g were really that CPU-hungry, then it would further degrade framerate compared to Catalyst because it'd be robbing the userspace application of some of that CPU power, right? -- No, not that I can measure.

If it's not CPU-bound, then what is it? Not optimized enough?

Comment

I think it's due to missing or only partly working OpenGL extensions. Someone should write a small application that uses only the most basic OpenGL by default. Then you could enable OpenGL extensions one by one, or in groups (if they don't work independently), to see whether any of them hurts performance.
That would let us see which extensions need more work, and would enable the community to optimize that extension / extension group.
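As a shortcut, Mesa already ships a hook for this kind of bisecting: the MESA_EXTENSION_OVERRIDE environment variable hides (or exposes) extensions from an application without rebuilding anything. A rough sketch of the workflow -- glxgears stands in here for whatever benchmark you prefer, and the specific extension names are just examples:

```
# Baseline run, with every extension the driver advertises.
glxgears

# Hide one extension from the application and compare frame rates.
# A leading "-" removes the name from the advertised GL_EXTENSIONS string.
MESA_EXTENSION_OVERRIDE="-GL_ARB_vertex_buffer_object" glxgears

# Several extensions can be toggled at once, space-separated.
MESA_EXTENSION_OVERRIDE="-GL_ARB_occlusion_query -GL_EXT_framebuffer_object" glxgears
```

Note this only controls what the driver *advertises*; an application that calls an extension's entry points unconditionally will ignore the override.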

If I had never written anything for Mesa, I would be happy just trying to optimize some of the existing code instead of writing it from scratch. This could potentially attract new developers.

Do the current developers already have a list of partially working / not-yet-implemented OpenGL extensions?