Intel doesn't believe in the Megahertz Myth anymore. AMD started it with their naming scheme, and now Intel has followed: they no longer name their products based on clock frequency. They are also abandoning the NetBurst architecture, whose long pipeline allowed such high frequencies to be reached. Their new Pentium M core is the basis of their chips, and you can see that it runs at a much lower frequency.
_________________
Computer Engineer
Junior, Brown University
15" NC8430 HP Laptop
1.42 GHz PPC Mac Mini, 1 GB RAM, 1st Gen
40GB G4 iPod
2GB Black iPod Nano

They're still not getting as much work done per clock as the G4 core: the Pentium M core is still basically the same P6 core introduced with the Pentium Pro. The only problem with the G4 core is that its 166 MHz bus is so much slower than the Pentium M's 400- and 533-MHz busses.

The Freescale MPC8641, with its dual 667 MHz memory busses, would have kicked the Pentium M's butt, watt-for-watt, even without throwing the "D" in. Too bad we won't get to see it happen now.

Quote:

They're still not getting as much work done per clock as the G4 core: the Pentium M core is still basically the same P6 core introduced with the Pentium Pro. The only problem with the G4 core is that its 166 MHz bus is so much slower than the Pentium M's 400- and 533-MHz busses.

I agree that the G4 core, and especially the G5, is more efficient clock-for-clock than the Pentium M. Intel isn't using the Megahertz Myth against anyone anymore. If anyone's still spreading it, it's Steve Jobs, who's still obsessed with the magic 3 GHz. Of course, it's hard to beat a single-core G5 clock-for-clock, and there aren't any viable alternatives right now, so the only way to go is with higher clock speed.

Last edited by Susurrus on Wed Aug 03, 2005 6:49 pm; edited 1 time in total

Quote:

If anyone's still spreading it, it's Steve Jobs, who's still obsessed with the magic 3 GHz.

Steve Jobs and his damn 3 GHz.

I wouldn't be surprised to find that a G4 would beat a G5 clock-for-clock if they got rid of the crappy front-side bus. A 7-stage pipeline vs. a 20-stage pipeline? I mean, the G5 does a better job of dealing with a long pipeline than most anything I know of... but still, that's gotta hurt.

The thing is that the G5 is 64-bit, so if you're executing 64-bit code, the G5 would be your only choice. That's one of the reasons things run faster on a G5: OS X executes 64-bit code on them. At least, that's my understanding. The thing is, with good branch prediction, a longer pipeline isn't that bad. The problem with the NetBurst architecture was that Intel's branch prediction is crap.

If the G4 had its FSB bumped up, it would be a good processor, but I think that the design went to the G5 for a reason.

Quote:

The thing is that the G5 is 64-bit, so if you're executing 64-bit code, the G5 would be your only choice. That's one of the reasons things run faster on a G5: OS X executes 64-bit code on them. At least, that's my understanding.

The whole 64-bit thing is so overhyped it's not funny.

If you need 64-bit addresses, you know it, and I don't have to explain why you know it because you know why. You're also probably working at Oracle or Pixar or the National Weather Service or Lawrence Labs.

If you're even slightly unsure why you might need 64 bits, if you can't explain in detail how you'd improve the algorithm with bigger pointers or integers, you don't need 64 bits.

And if you don't need 64 bits, you're better off in 32 bits, because your code will run faster in 32 bit mode.

I've been working on 64-bit systems for ten years now, using the DEC Alpha, supporting programmers who are doing things like simulating the electricity grids of states and small countries. I know why you need 64 bits. And I doubt more than one tenth of one percent of the people running on a G5 fit into that category.

But, you want to know what is funny? Hold on, I'll get there...

"What about AMD64"?

That's really a funny story. See, the thing about the 80x86 is that it's got a teeny tiny register file. Like, you have half or a quarter the number of general-purpose registers that any half-decent RISC has. This means that compilers have to keep copying stuff from registers to memory and back again, and that's a really slow thing to do even if you have cache and a fast bus. What AMD did was come up with a 64-bit address mode that had TWICE AS MANY REGISTERS. So people bought it because 64-bit was cool, and recompiled their code because 64-bit was cool, and lo and behold it was faster.

And that's why AMD64 is faster: the compiled code spends less time copying data between registers and memory, not because it's 64-bit. It'd actually be a bit faster still if it had those extra registers in 32-bit mode. But nobody would recompile their code for a different 32-bit mode, so it wouldn't sell... so by making it just a bit slower than it could be, they end up letting a lot of people run code that's faster.

Pretty funny, eh?

Quote:

The thing is, with good branch prediction, a longer pipeline isn't that bad. The problem with the NetBurst architecture was that Intel's branch prediction is crap.

The big problem with NetBurst is that the best branch prediction in the world won't help you if you keep having to stall the pipeline waiting for stuff to get copied between registers and memory for no good reason, because you've got a crappy instruction set. Shadow registers help, some, but at some point you still have to wait for memory writes or reads because, well, you don't KNOW that the compiler wasn't just making room for another temporary... you have to treat them all as if they're sacred writes.

Demand is high. I ordered one Tuesday night and the expected ship date was yesterday. I received an email from Apple saying that because of demand the expected ship date is now the 10th.
_________________
Sincerely,
XForce11