USS Clueless Dissected

One of the articles on the Mac Web getting a lot of comments this
week, both pro and con, has been USS
Clueless: An Unbelievable Kludge by Steve Den Beste. The article
has generated a lot of heat and raised a lot of questions.

I'm quoting the bulk of Den Beste's original article and two
follow-up pieces, which should be displayed in your browser in
light blue type. My comments are in the site's regular typeface.

Den Beste raises many important points about the G4 processor, DDR
memory, and the Power Mac motherboard. As I read and reread his three articles to separate the wheat from the chaff, I am
increasingly convinced that the G4 is a dead end for multiple processor
computing - and that IBM's Power architecture seems to be the most
promising path to Apple's future.

Stardate 20020814.1823

The "new" 1.25 GHz G4's aren't new. They're 1 GHz G4's
which are being overclocked 25%. Apple is selecting G4's which can run
that speed, and they've designed their new top-end system around
them.

The first question we must ask: Where is your evidence of this,
since Apple isn't even shipping the 1.25 GHz G4s yet? Later in the
week, a reader wrote to report that the G4s (MPC7455) in his 1 GHz
model were clearly marked as 1 GHz chips - more on that below.

Although the current
MPC7455 Product Summary on Motorola's site states a maximum
processor speed of 1 GHz, it is definitely within the realm of
possibility that Motorola will have either a 1.25 GHz version of the
MPC7455 or a newer version of the G4 designed to run at that speed in
September. The information on this page may already be outdated, since
it also reflects a maximum bus speed of 133 MHz, and Apple is currently
selling systems using a 167 MHz bus.

There is no need to assume that the only way Apple will be able to
sell a G4/1.25 GHz is by overclocking the current top-end 1.0 GHz 7455
processor.

But though they determined that enough 7.5x G4's could
run at 167 MHz, SDRAM cannot. Its base speed is 133 MHz but it's
possible to buy selected SDRAM which will run at 150 MHz. But there
isn't enough of it that will run at 167 MHz, so for these new machines
Apple had to switch to a faster RAM technology, DDR-SDRAM.

Assuming Apple has some reason for sticking with a 7.5x multiplier,
this would explain why they revised the motherboard to support both a
faster bus and a different kind of memory. However, there doesn't
appear to be any such limitation on the CPU. We are already seeing 800
MHz G4 upgrades that run on a 50 MHz system bus (16x multiplier) and
1 GHz upgrades that run on a 66 MHz bus (15x).
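
To make the arithmetic explicit: a G4's core clock is simply the bus clock times the multiplier. Here's a minimal sketch, in Python, that reproduces the numbers above (bus speeds rounded the way vendors round them):

```python
# Core clock = front side bus clock x multiplier. All speeds in MHz.
# Figures taken from the systems and upgrade cards discussed above.
def core_clock(bus_mhz: float, multiplier: float) -> float:
    return bus_mhz * multiplier

examples = [
    ("800 MHz upgrade on a 50 MHz bus", 50.0, 16.0),
    ("1 GHz upgrade on a 66 MHz bus", 66.6, 15.0),
    ("dual 1 GHz Power Mac", 133.3, 7.5),
    ("new dual 1.25 GHz Power Mac", 166.6, 7.5),
]

for label, bus, mult in examples:
    print(f"{label}: {bus} x {mult} = {core_clock(bus, mult):.0f} MHz")
```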

Apple could run a 2 GHz G4 on a 133 MHz bus if it existed and Apple
so desired. The G4 is capable of multipliers beyond 7.5x, so Apple's
engineers undoubtedly have good reasons for designing the G4/1.25 GHz
to use a 7.5x multiplier instead of a higher one. I suspect that a big
part of the equation is that the delay in accessing motherboard memory
is in proportion to the multiplier. Den Beste seems to feel the same
way about "starving the CPU":

That, in turn, meant that they had to design a new
mobo controller, which had a DDR interface on its backside, because
otherwise the new 1.25 GHz machines would be even more starved for RAM
bandwidth than the existing ones are. But even if the RAM is capable of
doing 266 MHz, the real bandwidth into the CPUs is bottlenecked on the
167 MHz FSB. (I have a sneaking suspicion that they're underclocking
the RAM to synchronize it with the FSB.)

A new mobo controller chip isn't something that you
conjure up in two weeks, and the fact that they actually designed an
entirely new piece of silicon to support this kludge means that Apple
has been making its engineering plans based on the assumption of
another Moto speed stall.

In other words, Moto has stopped developing the G4,
and Apple has known it for a long time, long enough to develop this new
bus controller chip. If Moto were continuing to work on faster G4's and
expected to release new ones with higher multipliers, Apple wouldn't
have bothered doing something like this.

This was a desperation move, a way of wringing one
final speed bump out of a terminated processor design.

If Murphy's Law applies anywhere in the CPU industry, it applies at
Motorola. While Apple waits for the next generation CPU from Motorola
or IBM, it has to make the best of what it can get today. I'm pretty
sure the 167 MHz bus is just a stopgap until something better comes
along.

If Motorola has stopped development of the G4 - and that's something
Den Beste provides no evidence of - it would undoubtedly be to pave the
way for the G5. Or maybe they're just stopping development of this G4,
the 7455, to make way for the next generation G4, the rumored 7470
(which could be the G5 - who knows).

And it's probably the last speed bump, too. It's hard
to believe that they could wring even more speed out of a trick like
this, so until such time as they come up with something else entirely,
this is the end. Either Moto against expectation actually delivers the
G5 (and rumor is that they canceled it a year ago when they started
making major cuts in their semiconductor group) or IBM comes through
with its desktop Power4 and Apple releases an entirely new class of
machines based on that, which is what I now expect.

What Den Beste labels a trick is Apple moving the Xserve and Power Mac
G4 to DDR memory on a tricked-out system bus. I have to agree that this
is a hack, as was the Yikes! motherboard in the early entry-level G4s.
It's not an elegant engineering solution; it's a way to make the most
of what's available today.

Like Den Beste, I hope Apple gives Motorola the heave ho and puts
the future of the Power Mac in IBM's hands. Then instead of waiting for
a 1.25 GHz model with two CPUs, we could already have a 1.3 GHz machine
with a dual-core processor.

Apple's continued reliance on Motorola has helped it lose the MHz
war in the eyes of the public. The sooner Apple can shift to IBM
processors, the better.

But there's absolutely no way to know when IBM will be
ready with the new Power4 chip in quantity; it could happen in October
or it could be a year from now. Until it happens, Apple is stuck with
what they've just released, to compete against PCs which are expected
to use processors from AMD and Intel which will continue to increase in
speed and drop in price. But we can make a pretty shrewd guess that
Apple expects the Power4 to be later rather than sooner. If they
expected Power4's in two months they would not have designed these
systems. They wouldn't have used such a large part of their engineering
on a stopgap if something much better was coming shortly
thereafter.

Intel is already talking about 3 GHz Pentium 4 processors, and the
AMD Athlon 2600 outperforms the current 2.53 GHz Intel P4. By
comparison, today's 1 GHz G4 just doesn't sound fast, giving Apple
the unenviable task of marketing convenience in an industry that puts
way too much focus on performance and raw GHz numbers.

I suspect Apple is designing the next generation Power Mac and that
several motherboards are being created around the Power4 architecture.
I also suspect that they are far from ready and that the need to market
newer, faster models every six months or so forced Apple's hand. Maybe
we'll see Power4 machines at the January Expo.

Until then, at least we've seen a 25% boost in CPU performance to
tide us over and keep the Mac from looking too underpowered from the
x86 side of the street.

Brian also asks why these machines are so expensive.
It's because the number of 7.5x G4's which can actually run this fast
is limited, and they need to price them high so that they don't sell
very many and outstrip their supply of parts. The 1.0 GHz G4's have to
remain attractive and sell in quantity, because they have to move a lot
of slower 7.5x G4's for every fast one they ship.

That may be part of the picture, but Apple has always sold its
fastest models at a significant premium. Customers who demand the best
are willing to pay top dollar, and the pro line (Power Macs and
PowerBooks) remains the more profitable side of Apple's hardware
business. It's not just parts and availability; Apple needs to keep
turning a profit.

Update 20020816: I made a mistake on this, though it's
not a serious one. The new dual 1.25 GHz system is indeed a 7.5:1 G4
running 25% overclocked. The new dual 1.0 GHz system is not the same
chip running slower, rather they're using the 6:1 G4 (nominally "800
MHz") and also overclocking it 25%, so that it uses the same mobo with
the same 167 MHz FSB. There are probably two reasons for that. The only
difference between these two new systems is which version of the CPU
Apple plugs in, which will raise volume on the rest of the system and
help their manufacturing a bit. And it means they can try to claim that
the new dual 1 GHz is faster than the previous dual 1 GHz because
of a 25% increase in FSB bandwidth to somewhat relieve the memory
bottleneck.

Again, Den Beste offers no evidence that Apple is using CPUs rated
at 800 MHz in the 1 GHz machine, and with the 1.25 GHz model not
yet available, he has no way of demonstrating that it is based on an
overclocked 1 GHz CPU.

Further, there is no such thing as a 6:1 G4. As is true of all
modern PowerPC processors, the G4 can operate at several different
multipliers. The same 1 GHz chip could use a 6x multiplier on a
167 MHz bus, 7.5x on 133 MHz, or 15x on 66 MHz. All Apple needs is for
Motorola to certify 1 GHz G4s for a 167 MHz bus, and everything is
kosher.

The likely reason is that though DDR-SDRAM, which is
used by the new systems, has substantially greater throughput than
SDRAM as used in the older ones, the new systems are largely wasting it
because of the FSB bottleneck. On the other hand, DDR-SDRAM has more
latency than SDRAM, which they're eating in full. When the Athlon went
to DDR they ran it at full speed so they more than made up in bandwidth
what they lost in latency. But Apple is incapable of taking advantage
of most of the DDR bandwidth, but is fully affected negatively by the
latency. It appears to be nearly a wash, and the systems are still
badly bottlenecked on the pipe between the CPUs and mobo control chip,
a problem which L3 cache ultimately can't solve. The bottleneck is the
FSB itself.... A complete redesign of the FSB is what's needed, but
that would require Moto to respin the chip, which they have not done,
and which I now think they will never do. And since Apple can't
unilaterally alter the FSB interface on the CPU's, there is ultimately
nothing they can do to really relieve this problem.

Agreed. If Apple is to continue working with Motorola, the new FSB
and new CPUs must fully support the double data rate of DDR
memory. Or they need to move to a newer, better memory architecture,
should such be available.
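
To put rough numbers on that bottleneck: both the G4's bus and the DIMMs are 64 bits (8 bytes) wide, but DDR transfers data twice per clock while the FSB transfers only once. A back-of-the-envelope sketch - peak theoretical figures only, and assuming the RAM really is clocked to match the 167 MHz FSB, as Den Beste suspects:

```python
# Peak theoretical bandwidth = bus width (bytes) x clock (MHz) x transfers per clock.
# These are ceiling figures, not measured throughput.
BUS_WIDTH_BYTES = 8    # 64-bit data path on both the FSB and the DIMMs
CLOCK_MHZ = 167        # FSB clock; assume the RAM is synchronized to it

fsb_mb_s = BUS_WIDTH_BYTES * CLOCK_MHZ * 1    # single data rate
ddr_mb_s = BUS_WIDTH_BYTES * CLOCK_MHZ * 2    # double data rate

print(f"FSB peak: {fsb_mb_s} MB/s")    # 1336 MB/s, about 1.3 GB/s
print(f"DDR peak: {ddr_mb_s} MB/s")    # 2672 MB/s, about 2.7 GB/s
# The CPUs can never see more than the FSB figure, so roughly half of
# the DDR bandwidth is wasted - exactly the point Den Beste is making.
```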

For that, we'll have to either look to the x86 side of the street,
since it's unlikely anyone would develop a new type of RAM just for the
Mac market, or work with companies such as IBM that can build CPUs,
FSBs, and RAM optimized for throughput. (IBM's Power4 servers are
powerful.)

Update 20020819: I have been informed that the L3
cache in Apple duallies is per-CPU, and access to the L3 cache doesn't
rely on the shared FSB to the mobo controller. That is definitely a
good thing because it will somewhat alleviate the bottleneck of the
mobo bus. But this also explains some of the apparent lack of
improvement of the new 167 MHz FSB machines compared to the previous 1
GHz duallie. That machine had a slower bus to the mobo controller,
but it also had twice as much L3 cache as the new machines. Since the
new machines have half as much L3 cache, the processors are both
competing more heavily for access to the mobo controller. What they
gain in speed, they are losing in wasted cycles due to bus contention.
It's hardly surprising that the result overall is a wash.

The only time that the new machines will substantially
outperform the older one is when they're doing a lot of work on a small
amount of data and code, and as a result are disproportionately able to
operate out of their caches. On the other hand, if they are running
code which is large, or operating on large amounts of data, then the
cache will help less and the bus bottleneck will impede the system
more, and relative performance will drop considerably. The new machines
run out of cache sooner, so bus contention kicks in more commonly.

Stardate 20020821.0911

Today, Macintouch put my link on the front
page again, along with a response from some anonymous person who
wrote:

Mr. Den Beste is just that, clueless. Apple is using no
"overclocked" chips. I can state unequivocally that the 1.2 GHz chips
are a new rev and are not overclocked! [...] When the new systems (1.2
GHz) are available his claims will easily be disproven.

It's interesting that he provides no evidence at all.
Either he is pretending to be an insider, or else he has manifest faith
in the infallibility of Apple and Motorola. Good luck to him.

The writer provides every bit as much evidence as Den Beste has. No
more. No less. The writer operates on faith (or perhaps inside
information), while Den Beste relies on his "spider sense" (see
below).

If you want to quibble about what the word "overclock"
means, be my guest, but that doesn't change the substance of my
analysis, which is that these CPUs are using the same clock multipliers
as the old ones, and that the purported increase in speed is entirely
due to increasing the base clock rate.

In the entire rest of the industry, the term "overclock" means to
run components beyond their rated speed. Apple is running a 1 GHz
G4 on a 167 MHz bus and will soon offer a 1.25 GHz CPU on the same bus.
The fact that the bus and top-end CPU are 25% faster doesn't mean that
they are overclocked; Apple may well be using parts at their rated
speed.

Preliminary tests have shown that the new machines are
essentially the same speed as older ones which used the 133 MHz bus,
and at this point it seems that the most likely explanation is that
what Apple gained by increasing the FSB bandwidth, they lost by cutting
the L3 cache from 2 MB to 1 MB (mostly, I suspect, to get the
cost down because cache RAM is spendy). The result is more or less a
wash, overall. (The new machines are going to be somewhat faster at
some things and slower at others).

Well said, clearly stated, and right on the money as far as I can
tell. We'll know more when the 1.25 GHz machines can be put through the
wringer in September. With a faster bus, faster CPU, and the same size
(but faster) L3 cache, it should provide almost exactly 25% more
performance than the old dual CPU 1 GHz machine.

When I first considered the new systems, the one thing
that leaped out at me was the fact that the two CPUs share a single FSB
to the mobo controller. If they're so damned choked on FSB bandwidth,
why not double it? This morning I realized the answer: they
can't.

There are two major engineering issues involved in
making symmetrical multiprocessing work. The first is software,
designing the OS scheduler to distribute the jobs between the two CPUs
without designating one the system master that tells the other what to
do. (That's what the "symmetric" part means.) The other problem is
hardware, and it's a bitch.

When either processor writes to memory, that
potentially makes the other processor's cache obsolete if it happens to
be holding that memory location. So every time each processor writes to
any memory location, the other processor has to know so that it can
update its on-chip cache if necessary. (I suspect what they do is just
cease to mark that address as being cached, so that the next access to
it goes to main memory instead to retrieve the new value. Actually
updating the cache value would be much too difficult.)

On an SMP system with a shared FSB, each processor
watches the bus while the other is using it, and grabs the address from
every memory write so that it can prevent anachronisms in its own
cache. If Apple had designed its mobo controller to give each processor
its own FSB, the two CPUs would no longer have been able to keep their
caches synchronized, and the system would fail. (Cache anachronisms
would be fatal at the software level; I doubt that the OS would even
boot.)

The FSB would have to actually be designed in such a
way that the mobo controller could feed each processor's memory writes
out the other bus to the other CPU so it could see it happen. That must
be what the Athlon duallies do, because they do indeed each have a
separate bus to the mobo controller. But that would be a different kind
of bus cycle, and the CPUs would have to be designed for it. P4
duallies share a bus, but the bandwidth of that bus is immense so they
don't have a bottleneck (yet).

The G4's used in these new systems are not new
designs, and they were originally designed to share the FSB. Each
processor must be watching for the other processor's writes, and it's
difficult to see how a mobo controller could manipulate separate FSBs
so as to fool each into thinking it was seeing another CPU doing a
write. At the very least, that would make the mobo controller chip
itself extremely complicated because some of the time it would have to
drive pins it ordinarily listened to (while pretending to be another
CPU), and if it's possible at all it may be beyond the capabilities of
Apple's controller designers (for reasons of time and budget, most
likely).

So Apple didn't have any choice. Separating the FSBs
would have alleviated the bottleneck, but the resulting system would
fail because of unresolved cache anachronisms. They had to leave both
processors on the same FSB, and as a result both CPUs are seriously
starved for memory bandwidth.

Another cogent explanation of a subject that's difficult for the layman
to understand, and yet another argument for Apple moving to the IBM
Power4 architecture. Power4 was designed from the ground up for
multiple CPU cores and many, many processors. The latest models support
up to 32 CPUs, something no Pentium or Athlon can do.
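
For readers who'd like to see the snooping idea spelled out, here's a toy model of the invalidate-on-write scheme Den Beste describes. This is a sketch for illustration only - the class names are mine, and a real G4 implements a full hardware cache-coherency protocol rather than anything this simple:

```python
# Toy model of bus snooping on a shared FSB: when one CPU writes to a
# memory address, every other CPU watching the bus drops its own cached
# copy of that address, so its next read fetches the new value.

class SharedFSB:
    def __init__(self):
        self.cpus = []
        self.memory = {}    # stand-in for main memory: address -> value

    def attach(self, cpu):
        self.cpus.append(cpu)

    def broadcast_write(self, writer, addr, value):
        self.memory[addr] = value
        for cpu in self.cpus:
            if cpu is not writer:
                cpu.snoop(addr)    # every other CPU sees the write go by

class SnoopingCPU:
    def __init__(self, name, bus):
        self.name = name
        self.cache = {}     # address -> cached value
        self.bus = bus
        bus.attach(self)

    def write(self, addr, value):
        self.cache[addr] = value
        self.bus.broadcast_write(self, addr, value)

    def read(self, addr):
        if addr in self.cache:                # hit: no bus traffic needed
            return self.cache[addr]
        value = self.bus.memory.get(addr, 0)  # miss: fetch over the FSB
        self.cache[addr] = value
        return value

    def snoop(self, addr):
        # Simplest policy: stop marking the address as cached rather than
        # trying to update the value - the option Den Beste suspects.
        self.cache.pop(addr, None)

bus = SharedFSB()
cpu0 = SnoopingCPU("cpu0", bus)
cpu1 = SnoopingCPU("cpu1", bus)
cpu0.write(0x1000, 42)
print(cpu1.read(0x1000))    # 42, fetched from memory
cpu1.write(0x1000, 7)       # invalidates cpu0's cached copy
print(cpu0.read(0x1000))    # 7 - not a stale 42
```

Give each CPU a private bus without something playing the role of broadcast_write, and cpu0's second read would return the stale 42 - Den Beste's fatal "cache anachronism."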

I have no doubt that the marketing value of
finally switching to DDR was at least as much of a motivation
as the ability to keep the new slightly-faster bus fed.

Good point. Companies should be market-driven and product-driven,
never marketing-driven. That puts the cart before the horse.
(Like Intel did with the P4.)

On the other hand, most Mac users don't really care about the
technical details of memory. We just want to know how much RAM our Macs
support, if it's compatible, and how much it costs.

In the meantime, Rob at Bare Feats continues to do
testing of the new machines and discovers something curious.
When they try to run two unrelated CPU-bound tasks at once, these
systems choke badly. Which is more than a little weird, since that's
one of the things you want a dually for. The idea of running a
CPU-crunch job in the background while using an interactive program is
the entire point, and the new systems should have more than enough CPU
power to handle it. But what this does is to heavily stress the FSB,
both on the old and new versions of the Mac dually. If there were
adequate FSB bandwidth to memory, both programs would run at full speed
with little degradation.

It sounds more and more like the technical limitations of the
current G4 architecture have become a compelling reason to go Power4 in
the next round. I'm sure Apple is working with it; I hope Apple will
adopt it.
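
A crude contention model shows why two CPU-bound tasks choke where one doesn't. The demand figures below are invented for illustration; only the shape of the result matters:

```python
# When the combined memory-bandwidth demand of all running tasks exceeds
# what the FSB can carry, every task slows down in proportion.
FSB_MB_S = 1336    # peak FSB bandwidth from the earlier sketch

def throughput(demands_mb_s):
    total = sum(demands_mb_s)
    scale = min(1.0, FSB_MB_S / total)    # fraction of its demand each task gets
    return [round(d * scale) for d in demands_mb_s]

print(throughput([900]))         # [900] - one task runs at full speed
print(throughput([900, 900]))    # [668, 668] - each gets about 74%
```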

Stardate 20020822.1255

(Captain's log): After being linked for the third day this
week from the front page of Macintouch, my refer traffic from there has
been quite amazing. It's also amazing how disgruntled and outraged some
people have been about my use of the term "overclock".

Dylan writes:

Just to let you know . . . I opened up one of the new dual 1 GHz G4
systems today and removed the heat sink....

The CPU is an MPC7455, and is a 1000 MHz
part, not an 800 MHz part that's been overclocked as you
asserted in your article. Even if Apple were to be overclocking these
machines, the 933 would be a better choice to overclock to 1000 MHz . . .
and overclocking the 7455 from 1000 to 1250 (25% higher) without
serious cooling modifications would likely be very unstable. I find it
hard to believe that Apple has folks individually testing their entire
stock of 7455/1000 CPU's to find those rare gems that would withstand
such a brutal oc'ing.

Also, there are legal issues with selling 1000 MHz parts as 1250 MHz, and
800 MHz parts at 1250 MHz. Apple would surely face class action down the
road if they were doing this. I haven't been able to inspect a dual
1250 up close, but I would wager heavily that they are MPC7455/1250
parts. Certainly I'd concede otherwise, as mot.com doesn't currently
list a 1250 part, but incidentally, they only recently posted the info
about the 7455 reaching 1 GHz . . . so this may not be reliable.

Also, it's entirely possible that the 1.25 is, in fact, a 7470 chip.
This is less likely, but still possible. Only time will tell at this
point I think.

I'm not at all surprised to learn that the new CPUs
are labeled by Motorola at the speed Apple is selling them at. Apple
doesn't have the technical ability to test the parts to see if they'll
run faster.

Moto does, because Moto has to have that ability
anyway. Every CPU which comes off the line has to go through a testing
process which is extremely complicated and elaborate, performed by
equipment of unbelievable sophistication. That testing process is
intended to guarantee that all parts of the processor work as designed
at the specified clock rate.

This hasn't been my understanding, although things may have changed
in recent years. In earlier times, a few chips from a wafer would be
tested and the entire batch rated at the lowest speed all chips passed.
Testing every individual CPU is a costly undertaking, but if there is
sufficient reward (how much is Motorola charging Apple for each 1.25
GHz CPU?), it could make economic sense.
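
If Motorola were double-testing parts this way, the binning step might look something like the following sketch. Everything here is hypothetical - the test margin and the spread of chip speeds are invented purely for illustration:

```python
import random

# Hypothetical speed binning: every chip is tested slightly above its
# target bus speed, and chips that pass the faster test get the faster
# (and higher-priced) label.
MARGIN = 1.05    # test "a little bit faster than" the rated speed

def bin_chip(max_stable_bus_mhz):
    if max_stable_bus_mhz >= 167 * MARGIN:
        return "167 MHz bus part (premium price)"
    if max_stable_bus_mhz >= 133 * MARGIN:
        return "133 MHz bus part"
    return "reject"

random.seed(2002)
# Pretend each chip off the line tops out somewhere between 120 and 190 MHz.
chips = [random.uniform(120, 190) for _ in range(10_000)]

bins = {}
for top_speed in chips:
    label = bin_chip(top_speed)
    bins[label] = bins.get(label, 0) + 1

for label, count in sorted(bins.items()):
    print(f"{label}: {count}")
```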

But Moto has to do that anyway, because they have to
confirm that every chip works before they can ship it. Their chip
testers do so at (a little bit faster than) the rated clock speed of
the part, but if the testers are capable of running a lot faster (which
would be expected) then it wouldn't be too hard for Moto to test
certain groups of chips twice, once at 133 MHz and if they pass that
then again at (a bit faster than) 167 MHz. The main reason for
resisting that is that it doubles the testing time, and testing already
takes too long and costs too much. Testing is a potential bottleneck on
most IC lines because the process is inherently slow and extremely
capital intense. But if Apple was willing to pay a higher price for
parts certified to run at a higher speed, then that would make it
worthwhile for Moto.

If that is what Moto and Apple are now doing, I don't
think it's dishonorable for them, either. That's not the point I was
trying to make.

I was using the term "overclocking" not in the
pejorative hobbyist sense, but rather to mean that these are old parts,
running faster. It may be that we have a disagreement on whether that
is actually correctly designated "overclocking", but that's
unimportant. Irrespective of what words we use to describe it, I still
think that's going on, and that's because if Moto were still developing
new G4's, then it seems as if they could have released ones which were
not only rated for a higher clock but which also had a higher
multiplier. I am extremely suspicious of the fact that the new 1.25 GHz
Macs are still using a 7.5:1 clock multiplier, just as the old 1 GHz
Macs did. Why isn't the multiplier higher?

That's a question to address to Apple's engineers. As noted above,
latency in accessing motherboard memory increases in proportion to the
multiplier, so at some point, a higher multiplier only results in a
higher clock speed, not increased performance. (Does this sound like
the P4 vs. PIII to anyone?)
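
Here's the latency point as a worked example. The stall length is illustrative, not a measured figure:

```python
# A main-memory access that stalls the bus for a fixed number of bus
# cycles costs the CPU (stall x multiplier) of its own clock cycles -
# so the higher the multiplier, the more work the CPU loses per stall.
STALL_BUS_CYCLES = 10    # illustrative only

for multiplier in (6.0, 7.5, 10.0, 15.0):
    cpu_cycles_lost = STALL_BUS_CYCLES * multiplier
    print(f"{multiplier:4.1f}x multiplier: {cpu_cycles_lost:5.0f} CPU cycles lost per stall")
```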

Spidey-sense

When you've worked as an engineer long enough, you
gain a sort of spidey-sense about what other engineers do, a kind of
intuition which permits you to work backwards from results to causes.
When you see something which at first seems inexplicable, you can get
inside the heads of the guys who designed it, and try to figure out
what might have driven them to do what they did. You make the basic
assumption that they're not incompetent fools and that if they did
something strange then there must have been a good reason why. And long
experience with engineering will give you an intuition about the kinds
of things which might have driven them to do what they did. In
particular, sometimes engineering decisions are driven by business
realities.

That's what I'm basing my speculation on. Let's be
clear that I have no inside information at all. What I have is the
inexplicable fact that the new fastest Mac will be using a processor
whose multiplier is the same as the old fastest Mac. Why would they do
that?

It's because they didn't have any choice. That's what
my engineering spidey sense tells me. This engineering decision was
indeed driven by a business reality.

It's a lot easier for Motorola to rate old 7.5:1 CPUs
to run at a higher FSB than to actually design a new part entirely, if
in fact they discovered that a lot of the old parts actually could run
faster (which would not be particularly surprising). A higher rating on
an old design only involves changing the testing and labeling.

My spidey-sense tells me that the reason Motorola is
willing to test and certify old parts for a higher clock rate, and that
Apple actually designed a new computer to use them, is because Motorola
doesn't plan to bring out new chips which are substantially faster any
time soon (or maybe ever) and Apple knows it.

Pure speculation, Mr. Parker, er, Den Beste.

Motorola has been losing mammoth amounts of money on
its business with Apple for a long time. The number of high performance
CPUs that Apple buys, and the price Apple is willing to pay for them,
isn't enough to fund the capital investment required for the kind of
design effort needed to create them and stay competitive. And no one
else besides Apple is interested in PPCs that fast from Motorola; all
its other PPC users are doing embedded applications and they're less
concerned about high speed than they are about low power consumption.
So if Motorola's management is finally determined to stop hemorrhaging
money from the semiconductor group, high performance CPUs would be a
good candidate for the chopping block. Motorola is in deep trouble
financially, and the
semiconductor group has routinely been its biggest money loser. It
can't go on like this. One more year at their current rate of losses
and Moto will end up in bankruptcy.

The semiconductor group has been cut deeply. Is it any
surprise to think that at least some of the cuts are in engineering?
And if you have fewer engineers, you have to kill projects. So why not
kill projects which would never be expected to make a profit? The
losses in Motorola's semiconductor group have not been solely due to
its relationship with Apple, but Apple has sure helped.

So what my engineering spidey sense tells me is that a
few months ago, Motorola's management decided that Apple's business was
a luxury it could no longer afford, and told Apple that it was going to
substantially decrease its investment in development of high
performance PPCs. The two dickered and someone suggested the stopgap
possibility of trying to run existing CPUs at a higher clock rate to
eke out a bit of a performance gain, without requiring Moto to actually
create a new CPU, to hold Apple while it began to look around for
another alternative. (And Moto may also have agreed to finish one more
chip design already in process, while refusing to make any more
engineering starts. So we may indeed see one more incremental
improvement in the G4 before it stalls completely, which was actually
begun before the decision.)

And what I think happened then was that Apple
convinced IBM to produce a cut-down desktop version of the Power4,
which Apple will switch to and finally gain that dream
processor they need, while permitting Moto to stop losing money on new
investments in designs of high performance PPCs.

If that happens, we all win.

No matter how you cut it, the fundamental
characteristics of these new 167 MHz PowerMacs clearly show engineering
and marketing desperation. You only need a stopgap when you've got a
gap to fill. Apple has announced what is clearly a stopgap, and the
only gap that makes sense for it to fill is in sources for faster
CPUs.

Preach it, brother!

The fact that Moto is involved in testing and
certifying these CPUs for a higher bus rate and labeling them as such
doesn't change the fact that they are old CPUs being run faster, rather
than new CPUs. That is what I meant when I said they were being
"overclocked". Some who have written to me have claimed that
"overclocking" means running the part beyond its certified speed, and
since Moto is certifying them then it isn't overclocking. QED.

This is the crux of our arguments against USS Clueless. If
Motorola is testing and certifying parts at a specific speed, they
cannot be considered overclocked by definition - whether they are
the same design as earlier CPUs or not.

As Den Beste himself notes, components are usually designed, tested,
and certified at slightly higher than their rated specifications,
providing a margin of error. If Motorola has discovered that the
current MPC7455 design can function reliably on at least a 167 MHz bus
and at least 1.25 GHz, bravo.

I don't care what marketdroids write in the spec
sheet; to me as an engineer "overclocking" means to run the part beyond
what the design engineers expected. While it may have developed that a
substantial number of these CPUs actually can run reliably at a higher
clock rate, I don't believe that was a deliberate action of Moto's CPU
design group.

Conclusion

After all this, here's what we're left with:

Motorola is able to produce, test, and certify some MPC7455
processors on a 167 MHz bus and at speeds of up to 1.25 GHz. Den Beste
considers this overclocking, which is at variance with the typical use
of the word.

To support a 167 MHz bus, Apple was forced to move to faster
memory. The decision to choose DDR may have been driven as much by
marketing as by engineering concerns.

The current Power Mac architecture cannot take full advantage of
DDR memory. Specifically, the FSB transfers data only once per clock
cycle, so it can never use more than half of DDR's peak bandwidth.

There are serious limitations to the dual processor G4 architecture
which must be addressed in future systems.

Moving to 167 MHz is probably a temporary step to fill the need for
a speed bump every six months or so and provide a faster computer until
the next generation Power Mac (whether based on a faster G4, a G5, or a
Power4 derivative) can be released.

So where does that leave us today?

That's an important question, because I'm working with a client (and
potential employer) to set up a killer desktop publishing system for
his new business. He's not a computer expert and is easily swayed by
the hype, and he's got the money to buy a top-end system complete with
a Cinema Display,
wide Epson printer, gobs of RAM, and all the necessary software.

Value

I'm Dutch, and my people have a reputation for being frugal. (That's
part of the reason Low End Mac puts the
focus on computing value.) I've done desktop publishing on systems
ranging from a 25 MHz Mac IIci through
Quadras and various Power Macs.

Except for the memory-starved IIci (8 MB of RAM and virtual
memory on), all of the systems felt powerful enough for the work. The
last machine I used at that job was a G4/400, and it was far more than I really needed
- but it was also the low end at that point.

I'd like 1 GB of RAM, and memory is less costly for the older
model. But we don't need a SuperDrive. Architecturally, though, the
older 1 GHz model has a larger L3 cache and reduced latency when
accessing motherboard memory, something Den Beste's articles help me
understand.

Still, it costs about $500 more than the new G4/867, and two 867 MHz
CPUs are far more power than we'll need for our work, so I'll probably
end up recommending the lower cost solution as the best value.

And in the end, that's the important thing. Whether the new G4s are
a stopgap or not, they do offer more power for less money than any Mac
before them. Regardless of the technical pros and cons of DDR memory,
memory controllers, bus speed, and possible overclocking, they work -
and they work fast.

So while it might make sense to wait for the next generation of
Power Mac if you don't need a new computer now, don't let all of this
technical talk keep you from buying today's Power Mac now if now is
when you need it.

Dan Knight has been using Macs since 1986,
sold Macs for several years, supported them for many more years, and
has been publishing Low End Mac since April 1997. If you find Dan's articles helpful, please consider making a donation to his tip jar.
