
Linux Kernel Benchmarks Of 2.6.24 Through 2.6.39

05-09-2011, 06:00 AM

Phoronix: Linux Kernel Benchmarks Of 2.6.24 Through 2.6.39

With the recent look at the major power regressions taking place within the Linux kernel, some initially wondered whether the increase in power consumption was correlated with an increase in system performance. Unfortunately, it is now clear that this is not the case. With that said, here are some performance benchmarks of all major kernel releases going back to Linux 2.6.24 and ending with the Linux 2.6.39 kernel.

Comment

I agree that one should use Energy/Task (not energy per second, i.e. watts) and measure idle too, of course...
But I wonder what can bring the performance down. What is different between the high-performing kernels and the poorly performing ones? Is it not possible to combine the well-working parts and leave them unchanged? Or were there other issues... like instability... in the "good"-performing kernels?
Is the kernel becoming too large?
There must be some answer to the question of the kernel becoming slower... maybe someone can shed some light...

Comment

Surely one problem that a lot of desktop users will face is that driver support is continually being added to the Linux kernel. Older kernels may be 'leaner and meaner' but they also may not support new devices (a problem I have experienced myself).

Comment

I don't want to say much about the power management regression itself, but as with any complex piece of software, power management (which mostly relies on how the kernel signals the various components of your PC to enter their power-saving states) will not keep its algorithms unchanged.
Vista/7 is certainly faster than XP in many areas, because of an optimized file layout (live defragmentation, a feature OS X has had since 10.4 Tiger, plus aggressive prefetching) and better scheduling algorithms; yet because it uses more components to run (for example, it uses the GPU to render the Aero UI), a low-power state may work out better on an older OS (here XP compared with Windows 7).
Complaining about energy expenditure with a fuzzy bug report (everyone complains that recent kernels consume more energy, without pinpointing whether it is the CPU being slower to reach its idle state, or the network card, or the sound card, or the disk drive) makes the premise and the benchmarks of the article meaningless.
It is like comparing the redraw speed of Windows XP vs. 7 based on an increase in boot time. I am not saying there is no relation between the two (for example, Win7 may load more things by default, and if you launched the corresponding components in XP, your system might load even slower with an XP-like setup; or the fragmentation from longer XP usage may eventually make Win7 faster than XP).
I am not an expert on benchmarking, but the article's premise piles up loaded statements with no meaningful supporting data.
Part 2
On a similar note, I would also like (or dislike) to complain about other articles that follow the same pattern. For example, the AVX instructions in Sandy Bridge (which will also be in the upcoming AMD Bulldozer CPU) get tested with GCC/LLVM at some relevant revision to look for speedups and regressions, using tests that DO NOT TAKE ADVANTAGE of those instructions, which makes you think it is phony to buy a Sandy Bridge since there is no support. I'm not sure about you, but in general a workstation CPU is bought for gains you expect to benefit from in the years to come (say, about 3 years), so you want that, by the time your CPU starts feeling slow, the supported extensions still make your software perform worthwhile.
Let me give an example: it is 2007, your money buys one of two NVIDIA cards, and you are undecided between a 5800 card and a 6600. Many people would likely buy the 7600 card, not only because of the number, but because the extra effects the video card supports will not be offloaded onto the CPU. Also, if you had bought the 6600 card along with, for that time, an AMD X2 3800+, which was a popular CPU back then, you could use all of today's accelerations: 64-bit support, decent Compiz graphics, and so on.
I am not here to complain that benchmarking is meaningless, but without setting a reasonably balanced context (excluding, perhaps, the bias toward what happens to work well on Unix/Linux), a benchmark/article/rant does not, I think, make for good reading.
Sandy Bridge may give an edge in highly parallel AVX computations, and AMD Bulldozer will give an edge in multi-threaded benchmarks (like a rendering system).
Promoting and pushing those in benchmarks would also give substance to what can be achieved, and would convince users to ask software creators to use those features.
If you think AVX brings just a 15% speedup, when in some specific cases it brings something like an 80% speedup, then most likely no one will say: "you developers should rewrite (if possible) this GIMP filter to take advantage of AVX" (or have the compiler find the patterns using GCC's Graphite framework).
That is all; this is a developer's view, and I understand that I probably take too rational a stance and miss the "journalism" behind it.
I also want to say that I really appreciate what Phoronix is, and as a Linux promotion site it does a fairly good job, so my sincere congratulations!


Comment

How can increased performance affect power consumption negatively...

Doing the same number of calculations in less time means LESS power consumption, since the CPU can reach deeper sleep states sooner and stay there longer; it's so clear...

My understanding is that 'power consumption' is a colloquial term that means 'energy'. You are definitely thinking of 'energy' (Wh), and on that point you are right, but I think he meant power (watts). Doing the "same number of calculations in less time" can mean more 'power', because you are probably using more 'resources' to achieve the performance boost (better use of the hardware, doing more things in parallel, etc.). Overall, depending on the performance boost, you may consume less energy (or your average power will be lower, if you like), since, as you said, you finish faster, thus allowing the CPU to idle sooner and for longer.

Having said all that, the kernel regression seems to affect battery life, so it is about energy...
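The race-to-idle arithmetic in the exchange above can be sketched numerically. Note this is only an illustration: the wattages and durations below are made-up values, not measurements from the article.

```python
def energy_joules(active_watts, active_secs, idle_watts, window_secs):
    """Energy drawn over a fixed window: an active phase at active_watts,
    then idle at idle_watts for the remainder (the 'race to idle' pattern)."""
    idle_secs = window_secs - active_secs
    return active_watts * active_secs + idle_watts * idle_secs

# Two hypothetical kernels finishing the same task within a 20 s window;
# the "fast" one draws more instantaneous power but reaches idle sooner.
fast_kernel = energy_joules(active_watts=30.0, active_secs=10.0,
                            idle_watts=2.0, window_secs=20.0)  # 320 J
slow_kernel = energy_joules(active_watts=25.0, active_secs=15.0,
                            idle_watts=2.0, window_secs=20.0)  # 385 J

# Higher peak power, yet less total energy per task — which is why
# Energy/Task, not watts, is the fair metric for battery life.
print(fast_kernel, slow_kernel)
```

With these numbers, the kernel that looks "worse" on an instantaneous power meter (30 W vs. 25 W) still uses less energy over the window, which is what actually determines battery life.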