"On Monday I posted Geekbench results for my Sun Ultra 20 M2 running Solaris and Windows. Afterwards, I received a number of requests asking how Linux performed on the same hardware. Now that I've finally managed to download Fedora Core 6, here are the Geekbench results for Fedora Core 6 (and Solaris, as a comparison) on a Sun Ultra 20 M2."

Either way, I'm impressed. In the benchmarks from the previous Solaris/Windows test, Solaris slammed WinXP in just about every category, and in this set of benchmarks, it looks like Linux is a real competitor. I'm sure they're out there, but I'd like to see the same set of benchmarks for Linux vs. WinXP and Vista.

You may think Linux is super on the desktop, even though it only has a 0.4% market share there. But in the server arena, Linux is just a toy. Proprietary Unixes like Solaris, SGI, HP-UX, AIX, XENIX, etc., and FreeBSD were out there for a long time. Recently Solaris went open source, so people can compare it with Linux.

"Proprietary Unixes like Solaris, SGI, HP-UX, AIX, XENIX, etc., and FreeBSD were out there for a long time"

Solaris? Then why does Sun commit its resources to Linux?
SGI? It's nearly bankrupt.
HP-UX? Does anyone still use it today?
AIX? IBM has committed resources to Linux; maintaining the AIX source costs money.
XENIX? Come on, are you kidding me? What's next, DOS?
FreeBSD: I respect it, but its licensing scheme is problematic. Linux succeeded because of the GPL, not technical merit.

I believe the first release of Solaris x86 was Solaris x86 2.1 in May of '93, and Linux 1.0 was March '94. But those are release versions; there was all kinds of development activity before that. Interesting that the chart didn't include some of the other versions of x86 Unix back then, such as Interactive, Microport, Wyse, Esix and, if I remember correctly, Dell. But then again, you can throw a lot of money at a software project and that doesn't mean you're going to get the "best of breed". I'd say back then Solaris x86 and/or the other x86 unices were more complete for commercial purposes. The hardware for those platforms was more of a pain.

Even if it is primarily dependent on the compiler, I think that this is still very appropriate. As is, the benchmark will reflect the relative performance of the common configurations of the two operating environments. When people install Solaris, chances are they'll be using the Sun compilers. When people install Linux, chances are they'll be using GCC. Sure, you can use GCC on Solaris and Sun Studio on Linux, but I would venture to say that most people don't.

Yes, looking at the FAQ, the multi-threaded tests have four threads and this test was run on a 2-core system so there will be some context switching. However, I suspect the impact of the OS is much smaller than the compiler; given that the test has two independent variables it's impossible to tell for sure.

Indeed, and if you look at all the Geekbench benchmarks it's clear that their purpose is to compare "systems", aka "different machines with different hardware". The benchmarks are compiler-biased, which is great for comparing different systems running the same OS, but not for two different OSes using different compilers. The one remotely interesting benchmark seems to be the "memory benchmarks", which measure stdlib performance (aka libc).

Why do John and Matt use Fedora Core as a target when they could use CentOS or a similar "release quality" Linux distribution? If nothing else, it would eliminate the "it's beta software" comments that will probably appear as people read and interpret the results.

Or, they could benchmark Fedora Core against a recent Solaris Express build and try to level the playing field.

More information on how the system was setup with each OS and compile options would be helpful as well.

Overall it looks like Solaris is the better performer, but I wish more data was provided to explain WHY... as in what are the code differences between the two that would account for the differences.

The other score I found interesting was the bzip test. Would it have anything to do with disk performance in that case? How does Geekbench generate the scores? Are they all time-based? Yeah, I could go and read the Geekbench docs, I reckon... but a quick blurb about this stuff would have been extra nice.

I'm getting really tired of these nonsense benchmarks. All these comparisons always test the wrong things. With the near-hatred between OS camps, it's *amazing* to me that there are VERY few decent comparisons between the OSes on real-world tasks.

Somebody send me a Sun box that can run Solaris and Linux, and I'll put it through some REAL paces. You know, webserving, database work, etc. I can't believe people have yet to do respectable benchmarking, it's not exactly rocket science.

"Somebody send me a Sun box that can run Solaris and Linux, and I'll put it through some REAL paces. You know, webserving, database work, etc."

We're utilizing that program for testing production operations, and those boxes can't be dedicated to benchmarking against Linux at this point (we have no interest in running Linux, and cannot spare the try-before-you-buy boxes for benchmarking).

Again, I'll make it clear. Heck, if anybody in Hawaii has a Sun box sitting around, I'll gladly travel to your location and give it a go. Or, you're welcome to come by my data center and set it up in there.

" I can't believe people have yet to do respectable benchmarking, it's not exactly rocket science"

Well, most benchmarks are useless and don't really represent the real world. You would have to build a real-world application that makes good, fair use of OS facilities to test performance. Even then, what works best for you would not work well for others, so it's rather pointless. Look, most businesses are using Windows as their platform despite all its problems with performance and security.

What is needed with any benchmark is an explanation of why the results came out a certain way. As far as I can tell, Geekbench focuses on integer/floating-point/memory-bandwidth tests, which do not depend very much on operating system structure and depend a lot on compiler quality. If Geekbench benchmarked I/O performance, it would be a more appropriate measure of operating system performance.

Why would the raw integer/floating point performance be different under Windows vs. Solaris vs. Linux? What part of the operating system affects this?

Let's see... benchmarking a cross-platform C compiler on a beta operating system (designed for testing components for the enterprise version) against a fully released OS and a C compiler targeted directly at the hardware it's running on and designed for production use.