I'm curious. Why would it be of historical interest only, if we can
produce numbers showing that the system worked much more efficiently
in the past? (I.e., getting the same services done took less real time,
on the same hardware, under similar loads.)

I'm not interested in using benchmark numbers to grind my axe. If you
actually look at what Gregory/Andrew posted, to which I responded, you'll
see that some parts of the system got faster, and some parts got slower.
From macrobenchmarks (e.g. the mysql runs Andy does) and from Sun's
microbenchmarks we know that for many important workloads, as well as in
many basic ways lmbench is too old or insufficiently comprehensive to
measure, the system is, in fact, much faster than it used to be.

Looking at that post, very few parts have gotten faster. Quite a few
parts have become slower, and some by a very significant margin. That is
disturbing.

You can't tune a system for every workload -- or every piece of hardware
it might ever run on -- at once. Given that, trying to resuscitate my decade-old
system that I benchmarked 1.2 on seems like a significant waste of time.

Well, if we're talking about system tuning, then we're talking about all
subsystems. When you look at it in detail like this, you'd expect gains
in some areas offset by losses in others. I can't really discern any
pattern like that here. (Whether the chosen trade-offs are optimal for a
given workload is a separate question, but that's not what I'm talking
about.)

Investigating the parts of Gregory's results that make me scratch my head
and say, "huh, that's strange" -- that's of more interest to me, at least.

Yes, that was my point. It's not only of historical interest. There are
numbers in there that really do look strange.