
My big beef here is that you use a *fast* system for your tests. How about running each OS on a netbook, where the slower CPU, more limited memory, and slower disk would all highlight the differences better?

I've also wondered how many times each test is run, and whether you collect meaningful statistics. I'd love to see the mean, median, and standard deviation for each test. I think in a lot of cases we'd see that the systems really are just tied, or show only a small improvement.

It would also be nice to see how reproducible each sub-test really is, which would tell us a lot about how useful each test really is.
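A minimal sketch of the kind of summary the poster is asking for, assuming each sub-test's results are simply a list of per-run timings in seconds (the data layout and numbers here are hypothetical, not how PTS actually stores results):

```python
import statistics

def summarize(runs):
    """Return mean, median, and sample standard deviation for a list of
    per-run timings. With these three numbers you can judge whether two
    systems are genuinely apart or effectively tied within run-to-run noise."""
    return {
        "mean": statistics.mean(runs),
        "median": statistics.median(runs),
        "stdev": statistics.stdev(runs) if len(runs) > 1 else 0.0,
    }

# Hypothetical timings for the same sub-test on two systems:
system_a = [41.2, 40.8, 41.5, 41.0, 41.3]
system_b = [41.0, 41.4, 40.9, 41.6, 41.1]
print(summarize(system_a))
print(summarize(system_b))
```

If the difference between the means is smaller than a couple of standard deviations, the honest conclusion is "tied", not "X wins".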

Another tweak worth showing would be to run each test multiple times, both with and without dropping the VM caches between runs, to see how much the VM and its caching help.
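For reference, dropping the caches between runs is a one-liner on Linux via the standard `/proc/sys/vm/drop_caches` interface; this sketch guards the write since it requires root:

```shell
# Drop the Linux page cache plus dentries and inodes between benchmark
# runs, so each run starts cold instead of benefiting from cached data.
sync                                   # flush dirty pages to disk first
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches  # 1=pagecache, 2=dentries/inodes, 3=both
else
    echo "re-run as root to actually drop caches" >&2
fi
```

Running the benchmark once with cold caches and once warm, and comparing the two distributions, would separate filesystem performance from cache behavior.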

I do like these benchmarks; they're certainly improving over time, but they could be better. More data, please!

I've also wondered how many times each test is run, and whether you collect meaningful statistics... It would also be nice to see how reproducible each sub-test really is, which would tell us a lot about how useful each test really is.

All of that data is readily available through the Phoronix Test Suite.

Michael: are you planning a 64-bit comparison? It would be nice to add it to the mix. Also, would it be possible to run Ubuntu on ext4 to see how much of the disk performance is coming from other factors (different kernel, different compiler, perhaps different libraries)?

As the article says on the first page: "The x86_64 builds of both Fedora 11 and Ubuntu 9.04 were used."

It is a bad idea to run compilation benchmarks with different versions of a compiler, because the speed of the compilers themselves adds a variable to any measurement of OS speed. In fact, the difference in speed between the compilers might be the only reason for a speed difference between the OSes.

Also, as a developer, compilation speed is one of the last factors I'd consider when choosing a compiler. I'd be primarily concerned with the speed of the generated executables and with the supported languages, standards, and libraries.

Most (casual) developers don't look too closely at the compiler that ships with the system; if it has gcc, they start using it. It is only in more formal environments that there is real tool selection (or when an individual developer is really focused on some metric like the above).

I have built *way* too many compilers myself (crosstool really rocks), but for most purposes I don't bother looking too closely at the compiler for general ad-hoc development tasks.

Finally, remember that a lot of people _like_ to stay on the bleeding edge with end-user software: kernels, GNOME, Firefox, etc. They aren't focused on the speed or size of the result, but they know they will be building on a regular basis. They, too, don't focus on selecting the right compiler; they grab whatever is easily available.

Maybe it would be better to give a GUI rating rather than a performance rating. I prefer solutions executed by scripts: run it, and something works. Of course, lots of distros provide extra GUIs for this and that. I haven't tried Mandriva or Fedora lately, but historically Mandriva's GUI tools are maybe just behind SuSE's YaST. When you like the tools a distro provides, in addition to its preconfiguration and stability, that's usually the logical reason to choose it. Nobody would use Mac OS X because it is faster in a few benchmarks, and the same applies to any distro. I don't know of anybody who selects a distro because some apps run slightly faster.

Since PTS mainly compares self-compiled binaries, it would not be that hard to bootstrap a newer compiler if needed. I never did that because the default compiler was slower or generated slower binaries. The only time I definitely had to compile an older gcc (2.95) instead of using the default (some 2.96 prerelease) on old Red Hat 7.x systems was because that compiler was so broken it could not build binaries from standard source code; I still have no idea how many changes Red Hat made to compile everything they shipped precompiled. I preferred not to fix the code, so I just added another compiler, all in my home directory only, so nothing could hurt the rest of the system.
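The home-directory approach described above is just the standard GNU-style out-of-tree build with a private prefix; a rough sketch, with version numbers and paths as placeholders:

```shell
# Build and install a private gcc under $HOME so the system compiler
# is left untouched. gcc-src is a placeholder for an unpacked source tree.
mkdir -p "$HOME/toolchain" gcc-build
cd gcc-build
../gcc-src/configure --prefix="$HOME/toolchain" \
    --enable-languages=c,c++         # build only the frontends you need
make -j"$(nproc)"
make install                          # installs under $HOME, no root needed
export PATH="$HOME/toolchain/bin:$PATH"  # pick up the private gcc first
```

Because everything lands under `$HOME/toolchain`, removing it later is a single `rm -rf`, and the distro's packaged compiler never sees a change.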