
AMD R600g Performance Patches Yield Mixed Results

11-02-2012, 10:00 AM

Phoronix: AMD R600g Performance Patches Yield Mixed Results

Following the performance benchmark results I published earlier this week comparing open-source Radeon and AMD Catalyst driver performance under Ubuntu 12.10, Marek, the well-known independent open-source graphics driver developer, set out to explore some of the performance issues in the open-source driver. One day later, he published a patch that could quadruple the frame-rate of the Radeon Gallium3D driver. He went on to push another performance-focused patch for this R600g driver as well. This article presents a fresh round of benchmarks of the open-source driver to look at the wins and losses attributed to this new code.

Comment

It seems to me that the _only_ game where it didn't pay off is Xonotic, and looking at the frame rates there, I'm guessing that either that game does something weird or something else is going on (like hitting a software fallback path somewhere, for example).

Michael, you need an attitude change.
How about being positive for a change instead of outrageously negative...

Comment

The closed blob's performance advantage is not due to tweaks like this. I have an extremely simple test case which renders a single static rectangular texture, and the open source driver is half the speed. This is the simplest fundamental operation, and alas we are slower than we should be. Until fundamental problems like that are addressed, tweaks here and there for this or that game are not likely to have the expected result.

Comment

Based on the way only some games are affected, it looks like something very specific is broken. Especially since, according to that bug report, ETQW's performance is usually better and only sometimes worse. My intuition is that this should be fixable without reverting the optimization, although I'm not familiar with the r600g code.

If it turns out that the regressions can't be fixed without reverting, well, the blobs have game-specific hacks, why shouldn't r600g?

Comment

Can we have Nouveau tests of GPUs without reclocking support vs. the Nvidia blob **reclocked** to the frequencies used by Nouveau?

Right now we know how good Nvidia is, and how good Nouveau is without actually using 100% of the GPU's computational power. If we could see how Nvidia behaves at lower frequencies, we could compare the relative capabilities of those GPU drivers with better accuracy (and "somehow" scale up Nouveau's performance to the "would-be" scenario where Nouveau knows how to reclock).

As for the article: is Marek around to comment?

PS: While crowdfunding Mesa/drivers is not practical now, maybe crowdfunding X.Org efforts for continuous integration could be doable? Michael, what do you think? Maybe you could talk about it with the X.Org Foundation?