
17 GPU comparison in Steam using Serious Sam 3

I am nearly dead ...

... but serious enough to present my latest work on Steam for Ubuntu 12.04, based on Serious Sam 3 benchmarks. I used 17 different GPUs, provided by my extreme-overclocker friends, in order to compare and contrast their in-game performance. I faced many problems with the drivers, so I had to re-run the benchmarks all over again. So after 3 weeks of benching,

Throw an Intel HD 4000 in there, I wanna see if it's capable of running Steam games.

Also compare the performance with the same card and same system under Windows 8.

Hey there, thanks for the feedback.
I would throw in an Intel HD 4000 only if I had an Intel Ivy Bridge CPU. In the article I am using a Sandy Bridge 2500 (K SKU) model, overclocked up to 4.5 GHz in order to avoid a bottleneck. However, I will do some HD 3000 tests if you want. But bear in mind that all these integrated embedded GPUs are considered entry-level gaming, not even mediocre.

In the case of Windows 8, I made some benches back in the day. DirectX performance is still about 10-15 FPS better than OpenGL.

bmkResults() - Print the previous benchmarking results to the console, if any.
bmkStartBenchmarking(float tmStartIn, float tmDuration) - Start benchmarking, starting in tmStartIn seconds and lasting for tmDuration seconds. When done, print the results to the console.
bmkStopBenchmarking() - Immediately stop benchmarking and print the results. This does nothing if benchmarking isn't currently being done.
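Assuming these commands are typed into the in-game console, a typical run might look like the snippet below: the first call starts a measurement 5 seconds later, lasting 60 seconds (both values are illustrative, not from the thread), and the second call re-prints those results afterwards.

```
bmkStartBenchmarking(5, 60)
bmkResults()
```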

As you can see, in some cases there is huge scaling, and in some others there is not. In Serious Sam 3 I got that scaling. Sorry but... I did.
I suggest you start benchmarking with real games instead of the obsolete OpenArena, Doom 3, Unreal Tournament and similar stuff. If you want to see scaling you have to use Unigine Heaven and start messing with settings. Check my last review of the Radeon HD 6970 and Radeon HD 6950 in CrossFire using DirectX 9, 10 & 11. The review is in Greek, so you'll have to use Google Translate to figure out what I am saying in there.

I am not saying CF is not working on Windows, but you claim to benchmark on Linux. Also, you cannot replay demos with SS3, so do you always play the same section, or do you benchmark a static scene?

I tried to always play the "same" pattern; that's why it took me 2.5 weeks of benching. Way too time-consuming. I wish there were a timedemo for SS3...

That said, I had to play the same thing several times and then take an average. Most of the time I got:
1st run: 55 FPS
2nd run: 53 FPS
3rd run: 55 FPS
4th run: 54 FPS
5th run: 55 FPS

Sum and divide ... it's 54.4 ~ 54 FPS.
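The averaging described above can be sketched in a few lines of Python, using the five FPS numbers quoted in this post:

```python
# Average the FPS of repeated manual runs, as described above.
runs = [55, 53, 55, 54, 55]  # FPS from the five runs listed in the post

average = sum(runs) / len(runs)
print(average)         # 54.4
print(round(average))  # 54
```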

Same level, same duration, and as much as I could to simulate the same walkthrough. My results are not pinpoint accurate, but they are not fake as you imply. Since there is no timedemo, there is no other way to benchmark the game. So if anyone would like to know about Serious Sam 3 performance (and not Steam games in general), my article is okay. It may not be accurate, but it is definitely not fake.

IF there were a timedemo, I would have already used it. BUT since there is not, I tried my best to give you a glimpse of my experience with SS3 on my setup. That's all it is: a first taste, nothing more. Everything is in an experimental, beta phase, from the drivers and SS3 to Steam itself. The numbers are expected to be inaccurate, but not fake.

That said, I will now benchmark with the Team Fortress 2 beta, which does have a timedemo feature, in order to replay the same demo and avoid such misunderstandings.

As for CFX, find a couple of HD 6970s or 6950s and run the Unigine Heaven benchmark for Linux (not Windows).

I am pretty sure that there is no Heaven CF profile for OpenGL. I did lots of benchmarking in April and only DX had a CF profile (on W7). Why should I get a new card? I already have 2x HD 5670 for exactly that purpose and installed them again yesterday in my box for testing. I could not even see an improvement with Doom 3 (which showed some speed differences in an early Phoronix test years ago). I only noticed that when you rename the binary you get around 15% more speed; that works for dhewm3 as well, when the binary (not a symlink) is called doom.x86, and it does not matter if I enable or disable CF.
So to sum up, CF is a Windows-only feature: if a game/benchmark has two renderers, usually only the DX codepath has a CF profile. The only interesting thing I found was that Rage was accelerated by CF as well, whereas Nvidia does not provide an SLI profile.

Yesterday I repeated my Heaven 3.0 tests on Windows and Linux, and the result was the same as 7 months ago on Windows. On Linux the rendering artefacts have been fixed, but there is no speed increase. On Windows you can enable the CF logo, but you also see it when you run Heaven with the OpenGL renderer; only the DX renderer, which is 50% faster than the OpenGL one without Tessellation, scales perfectly (over 90% increase). OpenGL shows no speed difference on Windows or Linux. So who tested things better, you or me?