The Megahertz Myth: Why Clock Rate Is Unreliable.

Stand back, folks. This article’s going to be just a touch more technical than some of my other stuff.

So, here’s the deal. You see a new desktop processor on the market. It runs at 2 GHz. Now, those of you with any computing experience will know right off that 2 GHz in a desktop is a pretty abysmal clock rate (the clock rate being the number of cycles the processor completes per second). Obviously, that processor over there that runs at 2.6 GHz is better, right?

Not exactly.

This is something that’s been circulating the web for a while. It’s known as the megahertz (or gigahertz) myth. No one’s quite sure how – or why – it started, though it likely came about because people saw the clock rate of a processor as a simple, no-fuss way to determine how fast or powerful that processor was. Overzealous tech bloggers didn’t do a great deal to debunk the myth, either. Nor did Intel, which pushed the “higher clock rate = better processor” line for quite some time.

One CPU might seem better than another simply because it completes twice as many cycles per second (hence, it has a higher MHz/GHz value). However, the other CPU could very easily do twice as much work with each cycle, meaning that both CPUs would ultimately process the same amount of information in the same time.
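To put rough numbers on that: a back-of-the-envelope throughput model (the figures below are made up for illustration, not taken from any real CPU) just multiplies cycles per second by work done per cycle:

```python
def instructions_per_second(clock_hz, work_per_cycle):
    """Rough model: throughput = cycles per second * work done each cycle."""
    return clock_hz * work_per_cycle

# Hypothetical CPUs: A clocks twice as fast, B does twice the work per cycle.
cpu_a = instructions_per_second(2.0e9, 1.0)  # 2 GHz, 1 instruction per cycle
cpu_b = instructions_per_second(1.0e9, 2.0)  # 1 GHz, 2 instructions per cycle

print(cpu_a == cpu_b)  # True: both process the same amount per second
```

Real processors are messier than this (caches, branch prediction, memory stalls), but the basic point stands: clock rate is only one factor in the product.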

It’s not just the clock rate that determines how powerful a processor is. The microarchitecture of the processor also plays a HUGE role in its performance.

I’ll use an analogy that most anyone should be able to understand. Let’s say you have two factories, A and B. Each factory represents a different processor design. Now, let’s say the workers in factory A work for eight hours per day (we’ll say that equates to 2.4 GHz). The workers in factory B, on the other hand, only work for four hours per day (1.2 GHz). Naturally, you’d expect the workers at factory A to produce more stuff than those at factory B.

Thing is, though, factory B has a far better assembly line, and harder-working employees, than factory A. As a result, they end up putting out the same volume of products as factory A, even though they only put in half the hours.

It’s not a great analogy, but it should at least give some idea as to why clock rate isn’t the be-all and end-all of measuring a processor’s raw power.

“people saw the clock rate of a processor as a simple, no-fuss way to determine how fast or powerful that processor was”

That’s exactly what it was. Go back to the mid-1990s, when AMD, Cyrix, and Evergreen were each tweaking integer and floating-point math on x86 (and x87).
FSB + multipliers did determine performance.
There was no SMP (SMP-aware OSes were inefficient) and no multi-core or hyperthreading. Instruction-set extensions that let compilers generate more efficient code, like SSE and 3DNow!, were not available. Just raw clock and multiplier.

A Cyrix 6×86 166+ (running at 133/66) was outperformed by an Intel P5 133/66…
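For anyone puzzled by the 133/66 notation: it’s core clock over front-side-bus clock, and in that era the core clock really was just the bus clock times a multiplier (the 66.6 MHz figure below is the actual bus speed behind the nominal “66”):

```python
def core_clock_mhz(fsb_mhz, multiplier):
    """Mid-90s x86: core clock = front-side bus clock * clock multiplier."""
    return fsb_mhz * multiplier

# A "133/66" part: 66.6 MHz bus with a 2x multiplier, marketed as 133 MHz.
print(core_clock_mhz(66.6, 2.0))  # 133.2, sold as "133"
```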

Enter multi-core and non-x86-exclusive compiling, and 8- or 10- or 12- or 14-stage (whatever it was) x86 pipelines, and the MHz (or GHz) mark began to lose meaning. (Around the time the Slot A Athlon made Intel look like it was standing still due to FPU performance in the late ’90s.)

It got worse when making comparisons between the Athlon XP and the early P4… IMO, clock ratings began to lose meaning (agreeing with you) in the late 1990s to early 2000s.

This article is incomplete. It should just be starting where it ends. The analogy is nice, but how about relating it back to how processors compare to one another? The whole “half the time, but same work” bit sounded like a comparison between a 64-bit and a 32-bit processor. Also, I can’t help but nitpick that 2 GHz is not the rate at which the computer completes a single cycle. It means 2e9 cycles are completed per second.
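The commenter’s correction checks out arithmetically: a clock rate counts cycles per second, so the duration of a single cycle is its reciprocal:

```python
clock_hz = 2e9               # 2 GHz = two billion cycles per second
cycle_time_s = 1 / clock_hz  # time taken by one cycle

print(cycle_time_s)  # 5e-10 seconds, i.e. half a nanosecond per cycle
```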