
Radeon Gardenshed DRM + Gallium3D Benchmarks

It has been about a month since we last delivered ATI/AMD Radeon Linux benchmarks comparing the performance of the open-source driver against the high-performance proprietary driver. Since then there have been various improvements to the Mesa/Gallium3D driver, and the latest Radeon DRM code has been merged for the next kernel, which will likely be called Linux 3.0 but was referred to in the DRM pull request as Gardenshed. Here are these benchmarks on several different Radeon graphics cards.

Just a thought, but wouldn't a comparison of Gallium3D r600 at one-month intervals (i.e. comparing today's Gallium with last month's, March's, ...) be far more interesting than this "oh, one month on, we still haven't closed that order-of-magnitude gap" exercise?

Looks like there are some serious bottlenecks... the fps hardly drops between a low and a high resolution, but Catalyst reacts as expected, with a real difference when you raise the resolution. When fps barely changes with resolution, it usually suggests the driver is CPU-limited rather than GPU-limited.
Does anybody have some clues about the features needed to unleash the power of those cards? I mean, even if performance were 50% lower than Catalyst's, as long as you could observe the fps dropping with resolution you could say everything is in order and some optimizations are just missing. But these benchmarks feel strange...

Anyway, I really appreciate all the work on r300g, r600g and nouveau! Keep up the good work, guys!

Here's an idea of what to test next (if it was already done, then please ignore):

Open-source Radeon vs Nouveau performance compared to their blob counterparts. Radeon is supported as open source by AMD and has access to hardware specs, while Nouveau is not supported by NVIDIA and has to reverse-engineer a black box. Did open-source Radeon do a better job of coming close to 100% performance while having access to specs? Did Nouveau do a better job? And if Nouveau is the winner, what's the conclusion? That not having specs and support leads to better drivers? If both are on par, what does that mean? That specs and support don't help and got wasted? If Radeon wins, does that mean Nouveau is probably never going to become a true replacement for the blob?

IANAMD (I am not a Mesa developer), but IIRC the major bottlenecks are in the shader compiler and in the handling of state changes.

Both bits are being worked on by the dedicated volunteers who keep our X stack moving, but neither is what you could call low-hanging fruit, and both need to be killed by a thousand paper cuts, if you get what I mean.


Open Source Radeon vs Nouveau performance compared to their blob counterparts. [...] Not having specs and support leads to better drivers?

Similar studies have been done before and the conclusion is always the same. Having a higher percentage of the developers dress in black leads to better drivers.

Given the relatively high performance of the r300g stack, it seems like better questions would be:

- What performance-related features are enabled in r600g relative to r300g at this time?
- Are there differences in hardware architecture between the pre-r600 and r600+ generations that could require design changes between r300g and r600g? Were those changes made, and were they correct in hindsight?
- What other design changes were made between r300g and r600g?

My 10,000-foot impression of the answers is:

- Programming some of the performance-related features is a lot trickier on r600+ hardware than on earlier hardware, and as a consequence a number of those features are enabled in r300g but not yet enabled in r600g.

- r600+ hardware has a larger number of registers and a different grouping of registers, so a different approach to state management was taken in r600g relative to r300g. In hindsight, the mapping of registers to state changes is more complex than first expected, so more work on state management is probably needed.

- The shader compiler for r600+ started off as more of a 1:1 IR-to-hardware translator than a real compiler, although recent work may have improved that a lot (I haven't had time to look).
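To make the "1:1 translator versus real compiler" distinction concrete, here's a toy sketch in C. This is not actual Mesa/r600g code; the IR format, opcodes, and the assumption that the hardware accepts inline literal operands are all invented for illustration. A straight translator emits one hardware instruction per IR operation, while even a single trivial optimization such as constant folding can shrink the emitted program.

```c
/* Toy sketch only -- not actual Mesa/r600g code. Contrasts a 1:1
 * IR-to-hardware translator (one emitted instruction per IR op) with a
 * translator that performs one trivial optimization: constant folding. */

enum op { OP_INPUT, OP_CONST, OP_ADD };

/* a and b index earlier instructions in the IR array */
struct ins { enum op op; int a, b; };

/* 1:1 translation: every IR instruction becomes one hardware instruction. */
static int translate_1to1(const struct ins *ir, int n)
{
    (void)ir;
    return n;
}

/* With constant folding: constants become inline literals (no emitted
 * instruction, assuming the hardware accepts literal operands), and an
 * ADD of two constants is folded away into a constant itself. */
static int translate_folding(const struct ins *ir, int n)
{
    int emitted = 0;
    int is_const[64] = { 0 };   /* toy limit: at most 64 IR instructions */
    for (int i = 0; i < n; i++) {
        switch (ir[i].op) {
        case OP_CONST:
            is_const[i] = 1;            /* becomes a literal operand */
            break;
        case OP_ADD:
            if (is_const[ir[i].a] && is_const[ir[i].b])
                is_const[i] = 1;        /* folded at compile time */
            else
                emitted++;              /* real ALU instruction */
            break;
        case OP_INPUT:
            emitted++;                  /* fetch from an input register */
            break;
        }
    }
    return emitted;
}
```

For an IR like `input; const; const; add(const, const); add(input, folded)`, the 1:1 translator emits five instructions while the folding one emits two, which is the kind of gap a "real compiler" back end starts to close.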
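On the state-management point, the generic technique involved (again a toy C sketch with invented names and register counts, not actual r600g code) is to cache the last value written to each hardware register and only emit command-stream writes for registers that actually changed; the hard part in a real driver is mapping high-level GL state onto the right groups of registers, which is where the extra r600+ complexity bites.

```c
/* Illustrative sketch only -- not actual r600g code. Demonstrates the
 * generic "dirty state" technique: cache the last value written to each
 * GPU register and skip redundant writes, so a state change only costs
 * command-stream space for registers that actually differ. */

#define NUM_REGS 8  /* toy register file; real hardware has far more */

struct state_cache {
    unsigned regs[NUM_REGS];   /* last value emitted per register */
    int      valid[NUM_REGS];  /* has this register been written yet? */
};

/* Returns the number of register writes actually emitted. */
static int emit_state(struct state_cache *c, const unsigned *new_regs)
{
    int writes = 0;
    for (int i = 0; i < NUM_REGS; i++) {
        if (!c->valid[i] || c->regs[i] != new_regs[i]) {
            /* In a real driver this would append a packet to the
             * command stream; here we just record the write. */
            c->regs[i] = new_regs[i];
            c->valid[i] = 1;
            writes++;
        }
    }
    return writes;
}
```

With a cold cache every register gets written; re-emitting identical state writes nothing; changing one register writes exactly one. The more registers the hardware has, and the murkier the mapping from GL state to register groups, the more this bookkeeping matters.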