
Why Software Defaults Are Important & Benchmarked

03-08-2011, 08:40 PM

Phoronix: Why Software Defaults Are Important & Benchmarked

Almost every time benchmarks are published on Phoronix, at least a handful of people, or more, will immediately say the benchmarks are flawed, meaningless, or just plain wrong. Why? Because the software configuration is tested with its default (stock) settings. These users then go on to say that the defaults are not optimized for performance and that "everyone else knows better" and uses a particular set of options, etc. But it's my firm belief that it's up to the upstream maintainer, whether that is the project itself developing the software in question or the distribution vendor packaging and maintaining the given component, to choose the most sane and reliable settings, and that's what most people use. In addition, with open-source software, there are endless possibilities for how a given piece of software can be tuned and tweaked. Here are some numbers backing up these beliefs about testing software at its defaults...

One thing that I do ask, Michael, is that when you bench openSUSE in your articles you list the exact kernel package installed. The openSUSE installer makes a decision during installation based on the test system's hardware configuration, and it could install either the generic "-default" kernel packages or the "-desktop" kernel packages.
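For reporting purposes, the installed flavour can usually be read straight from the running kernel. A minimal sketch, assuming the openSUSE convention that the kernel release string ends in the package flavour suffix (e.g. "2.6.37.1-1.2-desktop"); on other distributions the suffix will mean something else:

```python
import platform

# Kernel release string, e.g. "2.6.37.1-1.2-desktop" on openSUSE.
release = platform.release()

# By openSUSE convention the last dash-separated token is the kernel
# flavour ("default", "desktop", ...); this is an assumption, and on
# other distros the suffix carries different information.
flavor = release.rsplit("-", 1)[-1]

print(f"kernel release: {release}")
print(f"flavour suffix: {flavor}")
```

On an RPM-based system one could also query the package database directly (e.g. `rpm -qa 'kernel-*'`) to get the exact package name and version for the article.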


> One thing that I do ask, Michael, is that when you bench openSUSE in your articles you list the exact kernel package installed. The openSUSE installer makes a decision during installation based on the test system's hardware configuration, and it could install either the generic "-default" kernel packages or the "-desktop" kernel packages.

That will always be listed on OpenBenchmarking.org, now that all results are to be hosted there.


Maybe you could write an article about xorg.conf settings and how they affect performance, with actual data.

I think the black magic of xorg.conf could use that.

Well, xorg.conf is supposed to be gone already (interesting, when you think about the topic of this post).

Michael and I have been pondering ways of doing that. It's not technically OpenBenchmarking.org-specific, but effectively fuzzing the configuration options to find a maximal-performance combination is an incredibly interesting idea.

It applies equally well to compiler flag optimization and kernel option configuration too...
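As a sketch of what such configuration fuzzing could look like, here is a minimal random search over a toy compiler-flag space. The option names and the cost function are made up for illustration; a real harness would build and time an actual workload (e.g. through the Phoronix Test Suite) instead of computing a synthetic score:

```python
import random

# Hypothetical option space; the flag names are illustrative only.
OPTIONS = {
    "opt_level": ["-O1", "-O2", "-O3"],
    "lto": ["", "-flto"],
    "native": ["", "-march=native"],
}

def benchmark(config):
    """Stand-in cost function (lower is better); a real harness would
    compile and time a workload built with these flags."""
    score = {"-O1": 3.0, "-O2": 2.0, "-O3": 1.5}[config["opt_level"]]
    if config["lto"]:
        score -= 0.2
    if config["native"]:
        score -= 0.3
    return score

def random_search(trials=50, seed=0):
    """Randomly sample configurations, keeping the fastest seen."""
    rng = random.Random(seed)
    best_cfg, best_time = None, float("inf")
    for _ in range(trials):
        cfg = {key: rng.choice(vals) for key, vals in OPTIONS.items()}
        t = benchmark(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time
```

Random search is the crudest possible strategy here; the same loop structure works for kernel config options or xorg.conf settings, and smarter samplers (hill climbing, genetic search) slot in without changing the harness.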

I am sure Michael and I will have something baked towards the end of the year.

Matthew


As a Gentoo user, I can see the advantage to doing benchmarks with the default options.

I use an AMD Athlon II X4, which doesn't get tested at all on Phoronix, so any benchmark that has Core i7 optimizations won't necessarily apply to me. I'd rather see what the base performance is than have to figure out whether something is fast merely because of i7 optimizations.


For doing an Ubuntu benchmark I would agree. I'm also not going to spend a lot of time tweaking Windows.

But when you are talking about how a particular Linux patch/feature (e.g. BKL removal) affects performance, and then just benchmark the kernel with some apps in a serialised fashion, I do not agree, as that benchmark would be pointless for the article. Using the results as a kernel snapshot benchmark would be a different story.

Michael, it is your job as a journalist to investigate, not just to test. Testing might be a part of an investigation, but it is not always set up to fulfill the investigative role.

Just saying. I still like the website.


Hmm, telling Michael what his job is might be a bit rude, I think.
Anyway, I'd say this whole discussion comes down to interests.
Some people would like to see the topmost performance of the software being developed, stable or not, just to see what is possible and how the developers are doing. And as far as I remember, Phoronix defined itself as a regression-finding site to help the devs, right? So that's one point.
A different one is looking at stock settings, for usability: testing a well-known, reliable system. That is mostly important for new users and people in need of reliability, probably companies/vendors running Linux who don't want to apply one-liner patches on hundreds of desktops and/or laptops.
Those two groups want totally different things. From the dev and regression-finding viewpoint, "flaw in testing" seems correct to me, whereas the same would not apply for the general/newbie user.
I'd say one needs tests as done in the past: some for the default stack, and some highly specialized tests on different machines (like the ones enabling avm or whatever the name was, the vector graphics thing).


> Maybe you could write an article about xorg.conf settings and how they affect performance, with actual data.
>
> I think the black magic of xorg.conf could use that.

Well, I'll be damned, I was just about to say exactly what you said. A guide to xorg.conf settings (or xorg.conf.d/*) for:

fglrx
nvidia-blob
nouveau
the intel, nvidia and ati xorg drivers

and perhaps also some generic settings. I've been a Linux user since 2002 and I still use the defaults, except for EXA (is that even the right thing to use anymore?) and SoftwarePointer "No".
Do we need to enable Composite in the extensions anymore? Etc., etc.

Most documentation is from the days before Compiz reached a fairly stable point (I still don't think it's there) and compositing was done via XGL and other weird stuff.
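To make the request concrete, a tweak of this sort usually lives in a small drop-in file. A hypothetical fragment is shown below; the AccelMethod values are those documented for the intel driver, and other drivers spell their options differently, so check the relevant man pages (`man intel`, `man nouveau`, etc.) before copying anything:

```
# Hypothetical /etc/X11/xorg.conf.d/20-device.conf fragment.
Section "Device"
    Identifier "Card0"
    Driver     "intel"
    # intel driver acceleration method: "sna" or "uxa" (older drivers
    # used "exa"); other drivers use different option names entirely.
    Option     "AccelMethod" "sna"
EndSection
```

Whether any given option actually helps is exactly the kind of question only benchmarking, per driver and per workload, can answer.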

Pleaaaase, Master, teaaach us!


It's true; stock configurations are all that 90% of users will ever see. Even 80-90% of Phoronix readers are newbies, judging from previous years' Phoronix Linux Graphics Survey results. And Phoronix readers are more savvy than the general populace.

There should be a huge challenge afoot in the distribution marketplace to create optimized distros that, while able to run on old hardware, can adapt to new hardware and run fast there as well. I'm not just talking about SSE, but maybe things like having certain very performance-sensitive programs specially compiled on install. Not everything is performance sensitive, but the 3D graphics stack certainly is, and so is the kernel.

I've seen a few binary distros around that purport to focus on optimization, but I think I'd rather have the mainstream distros focus on optimization. The mainstream distros are where most of the casual users are going, and we can afford to give them a better out of the box experience.