Posted
by
timothy
on Sunday July 09, 2000 @09:54PM
from the obligatory-note-benchmarks-=/=-real-world dept.

SteveM wrote, citing a Semiconductor Business News article which begins: "SANTA CLARA, Calif. -- Here's a surprise. Benchmark test results from Intel Corp. show its new 815E chip set with PC133 SDRAMs beating the performance of its 820 chip set with Direct Rambus memories. Moreover, Intel has posted those unexpected test results on its Web site, not intending to show PC133 SDRAMs beating the Direct Rambus memory format, which is favored by the Santa Clara chip giant." The results actually show some fairly unspectacular differences, but those differences lean overwhelmingly in favor of the SDRAM. Surely someone will come up with a benchmark that always makes RDRAM look better.

I also ought to point out that the i840 was beaten by the 440BX overclocked to 133 MHz in a fair number of tests, and that while the i840 was competitive, it didn't have any kind of edge. This is based on the Tom's Hardware benchmark the original poster mentioned. --Shoeboy

No, I fully understand your point and agree with what you are saying. All I'm trying to point out is that benchmarks that deal exclusively with performance and do not mention cost are necessary. In this case there is a clear winner and a clear loser, but that isn't always the situation, so as a general approach it doesn't work....

Not to criticize your English, because I know not everyone on here is a native English speaker, and I know I make countless mistakes myself... but "best" can be used as a verb, and it has the same meaning as "beat".

Perhaps they're just slowly cutting their ties to RAMBUS. Coming straight out and saying "we didn't know what the f*ck we were doing, we just wanted the stock options" doesn't make them look very good.

The fact that Intel decided to use SDRAM and/or DDR SDRAM in its next generation of chipsets instead of RDRAM shows outright that Intel knows a bit better than to push technology that is at best marginally better at 5x the cost. The conspiracy really isn't much; I quote from Tom's Hardware:

When Intel 'decided' to go for Rambus technology some three years ago, it wasn't out of pure belief in the technology and certainly not just 'for the good of its customers', but simply because they got an offer they couldn't refuse. Back then Rambus authorized a contingency warrant for 1 million shares of its stock to Intel, exercisable at only $10 a share, in case Chipzilla ships at least 20% of its chipsets with RDRAM support in back-to-back quarters. As of today Intel could make a nifty $158 million once it fulfills that goal.
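Just to sanity-check that figure, here's a hedged back-of-envelope in Python. The share count, exercise price, and $158 million total come from the Tom's Hardware quote above; the implied market price is my inference, not something the article states.

```python
# Numbers from the Tom's Hardware quote: 1 million shares,
# $10/share exercise price, ~$158 million potential gain.
shares = 1_000_000
exercise_price = 10            # dollars per share
quoted_gain = 158_000_000      # dollars

# Gain = shares * (market - exercise), so the implied market price is:
implied_market_price = quoted_gain / shares + exercise_price
print(f"Implied RMBS share price: ${implied_market_price:.0f}")  # prints $168
```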

20% of the market is quite a bit, but Intel doesn't have to be a RAMBUS zealot to pull this off. If RAMBUS really does work better for, say, the server market, this is achievable without the incredible loads of propaganda we've seen from them over the last year and much of this one.

The fact that Intel itself would come out and say DDR SDRAM is better than RDRAM pretty much ends the conspiracy theory. But that doesn't mean they're not still biased towards it.

Adding a second RIMM channel also reduces the likelihood you'll take a "bank hit" in the RDRAM, and it allows the chipset to prefetch on the second channel if it thinks there's going to be a subsequent access over there when it sees an access on the first channel.

Of course, CPU and chipset designers have never been all that good at ESP. And, as on-chip caches grow larger, the traffic at the CPU boundary looks increasingly random because all of the redundant and predictable traffic has been absorbed/filtered by the cache, making ESP all the more important. (And yes, I mean Extra Sensory Perception, as in the chipset needs to psychically know where the CPU's going next.)

The other comments about making the channel wider rather than deeper to reduce latency also apply.
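To make the width-versus-depth point concrete, here's a toy Python model with made-up numbers (the latencies, widths, and clocks below are illustrative, not measured): a cache-line fetch pays a fixed access latency plus a transfer time, and widening the channel shrinks only the transfer term.

```python
def cache_line_fetch_ns(latency_ns, line_bytes, width_bytes, clock_mhz):
    """Total time to fetch one cache line: fixed access latency plus
    the transfer time, which shrinks as the channel gets wider."""
    transfers = line_bytes / width_bytes
    cycle_ns = 1000 / clock_mhz
    return latency_ns + transfers * cycle_ns

# Hypothetical parts: a narrow 2-byte channel at 400 MHz (RDRAM-like)
# vs. an 8-byte channel at 133 MHz (SDRAM-like); 64-byte line, 50 ns latency.
narrow = cache_line_fetch_ns(50, 64, 2, 400)   # 50 + 32 * 2.5   = 130 ns
wide   = cache_line_fetch_ns(50, 64, 8, 133)   # 50 + 8 * ~7.52 ~= 110 ns
```

Even with less raw per-pin speed, the wide channel finishes the line sooner because it needs far fewer transfers.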

Okay, I'll buy that. Some people will always want maximum performance, and benchmarks are always going to be slanted towards a certain group, or excluding another group.

However, in this case it doesn't matter until RDRAM gets cheaper, or it gets a killer app that works massively better with it. I think it could make a pretty good long-term, frequently accessed data cache; maybe something like a BIOS shadow?--- pb Reply or e-mail; don't vaguely moderate [ncsu.edu].

Hey The Grammar Jew, why is this the first time that I've read of you correcting grammar?

You need to hang out here more often, I presume.

Are you trying to build up karma before losing it as a grammar corrector?

Let's set things straight.

My goal in life is not building up Karma. Better crack is available in my area, and it's also a lot cheaper than the $3 stuff I keep hearing about.

My goal in life is also not cleansing Slashdot of bad grammar, bad spelling, or pretty much anything else. I just can't help but notice that a guy like myself, who isn't a native English speaker, doesn't live in an English-speaking country, and has poor formal training, can still spell better than some of the participants in this forum. Some, but not all! Indeed, almost everything that I know about the English language I've learned here.

Are you supposed to be my enemy or something? Why did you pick the name, 'The Grammar Jew'?

I'm not supposed to be your enemy, and the reasons I picked this nick have nothing to do with my spelling-correcting posts. For spelling is the area where I, well, duh, rule. Grammar (English grammar, mind you) is not.

I have no problems with poeple of differing races/religions.

I truly appreciate that (apart from "poeple", which is supposed to be a typo, right?), but then you've picked the name "* nazi" which, you should realize, may lead to problems with some folks of European, and especially Jewish, origin. I don't have this problem because I realize that the name is not supposed to represent those real nazis. Be warned that not everybody is like me. Try to avoid assuming such names in the future, and you might save yourself a lot of trouble.

Thank you for your time. I hope you didn't find it terribly difficult to finish reading this long, dull submission (which I'm even bothering to spellcheck right now...ok, no tyops).

> I agree with your entire position on Intel, but logically you cannot exempt AMD from your ire. While they are surely less evil than Intel, they are still evil for contributing to the continued existence of x86.

Actually, I never said that I personally think x86 is bad, evil, or otherwise undesirable. I used the phrase "since everyone here hates the x86 architecture so much"--and generally they do, but I'm an exception. In a recent post [ http://slashdot.org/comments.pl?sid=00/06/29/2227257&cid=170 ], I argued that x86 is the "open-source ISA" since anyone can use it, while Intel and HP will demand steep royalties from anyone wanting to do IA64 processors. As long as you don't have to code in assembler for it--and few code in assembler these days, anyway--there's nothing wrong with x86, since modern x86 CPUs are really a RISC core with an x86 decoder tacked on, which according to Ars Technica only adds about a 1% penalty to the processor's speed. My point was that I find it contradictory that so many people hate x86 but love Intel. People hate x86 because it's old and ugly as an ISA, but these days that's not much of a real-world problem, since few people code in hand assembler. The ISA is really less important than how efficient the actual RISCy core of a modern CPU is; a 1% speed penalty is insignificant in exchange for compatibility with the last 20 years' worth of x86 apps, and despite people claiming for the last 5 years that x86 is going to hit a performance ceiling "soon", it still hasn't and probably won't for some time.

So, I never said Intel was evil for pushing x86 for so long; I said that it's dumb for people to hate x86 but not fault Intel for not creating a better ISA long ago. That leaves AMD in the clear as far as I'm concerned, since I'm glad they're going to extend x86 to 64 bits, maintain backwards compatibility, and maintain an open, freely usable ISA--putting the next big ISA into Intel's licensing control is a very, very, very dangerous idea--I'll keep incurring that 1% penalty in exchange for keeping an open chip platform, thank you. The reasons Intel is evil include its sloth, especially in keeping the P6 core for so long, and its predatory, M$-like nature. I congratulate AMD for starting out with really crappy, inferior processors but making honest and huge leaps with almost every generation, almost every year, while Intel sat on its hands with the P6 core *for 5+ years*. AMD processors are now at least equal to their Intel brethren, and most benchmarks give them a slight edge now that the cache is all on-die, while in price/performance they whomp Intel completely and mercilessly.

> Quality, high-performance workstations from Sun, SGI, and Decompaq can be had for less than USD 5000

Yes, I agree that the PC architecture is woefully lacking, but the openness of that platform is what allowed the Internet boom and the Information Age to happen. Cheap commodity hardware that even people who live in trailer parks can afford, but which scales up to performance powerhouses that equal the horsepower (for most applications, though obviously not all) of a RISC Unix workstation for a fraction of the price. The sheer brute force and clockspeed of a commodity x86 processor, even on the hobbled buses of the PC platform, make Alphas and UltraSPARCs unnecessary for all but the highest-end uses. It may take an 800MHz Athlon to get the FP performance of a 400MHz Alpha, but when the Athlon and its mobo are so inexpensive, there's no contest as to which is more useful. Why in God's name would I pay $5000 for a DEC or Sun box which won't run most things any faster than a $2500 x86 box I could build myself? For the elegance? Fuck elegance, give me just as fast for half the price and I'll take x86 ugliness any day. Depending on which processor the DEC or Sparc has, either an Athlon Tbird or SMP P!!!s could get equal performance for between $1600 and $2500 total, nowhere near the $5000 for a non-x86 workstation or server. If you need those big caches, the 500MHz Xeon with 2MB cache goes for between $700 and $900, though for most applications regular P!!!s at higher clockspeed with a smaller cache would be better, or a regular 1GHz Athlon Tbird. Jeezus, one could build a quad Xeon for less than the price of a typical DEC workstation: mobo $2500, P!!! Xeon 733MHz processors $500 each, add a hard disk and video card to taste. Unfortunately, AMD is still behind with its multiprocessor solutions...
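For what it's worth, tallying the quad-Xeon parts quoted above (a sketch using only the post's own mid-2000 prices):

```python
# Prices as quoted in the post above (mid-2000 street prices).
mobo = 2500                  # quad-capable motherboard
cpus = 4 * 500               # four P!!! Xeon 733MHz parts at ~$500 each
base_system = mobo + cpus
print(base_system)           # 4500 -- under the ~$5000 quoted for a
                             # non-x86 workstation, before adding a hard
                             # disk and video card "to taste"
```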

Most PC platform problems could be cured by moving to faster and wider buses and a Unified Memory Architecture like the one SGI used on its short-lived line of Wintel workstations. And most existing operating systems, and the software which runs on them, would work fine with just a minor OS patch, like the one SGI used to get NT 4.0 to run on its UMA Visual Workstations.

Hi. I remember hearing about the same problem at the beginning of the AGP era. This is not only a matter of technology but also of drivers, ROM/BIOS routines, etc. The problem might disappear as soon as some "tuning" is done.

BTW, benchmarks usually involve some very specific tests where certain low-level aspects are considered more important than those related to ergonomics: user comfort, etc. It would be good to know what they actually found. --

I do not believe that RDRAM is still a first-generation implementation, since they've been around for a long time. Check their history [rambus.com]. Since they were founded, standard PC RAM has evolved from DRAM to EDO RAM to Synchronous DRAM, and DDR SDRAM is just around the corner. Decide for yourself how Rambus has progressed in 10 years compared to "normal" DRAM.

Torture. Instead of correcting these folks, hunt them down and start removing body parts, using increasingly painful techniques, for each error they've made. I suggest some sort of standard punishment scale be published, so that other freelance grammar police can tell at a glance what atrocities upon the English language a given person has committed in the past, and take that into account in their enforcement activities. "An eye for I before E", kind of thing.

I argued that x86 is the "open-source ISA" since anyone can use it, while Intel and HP will demand steep royalties for anyone wanting to do IA64 processors.

Anyone remember Intel's lawsuits against AMD for implementing this open ISA? Huh. How quickly we forget. The only reason Intel gave up is that they have something supposedly better now. On the other hand you can buy a license to manufacture as many SPARC chips as you like for $99. Total, not each. SPARC is an open ISA. x86 is only open because Intel no longer cares to defend it.

which according to Ars Technica only adds about 1% penalty to the processor's speed.

While I doubt this number, I have no other, so I will not contest it. Regardless of the performance penalty, there are most certainly much larger penalties: a) a power penalty (power consumption is proportional to die size), b) a heat output penalty (ditto), and c) an elegance penalty. To me, the elegance penalty is the killer. It's cruft. It's a nasty hack to try to get performance from something that was never designed for it. It's a marketing decision laid down in silicon. Even if you don't care about elegance, consider this: how much faster would the CPU be if the extra silicon were a) cache, or b) logic directly related to processing, not translation? It's inexcusable.
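The power argument can be made concrete with a rough sketch. Assuming, as the post does, that power scales with active die area, and picking a purely hypothetical 10% area share for the x86 decode/translation logic:

```python
def decoder_power_overhead_w(decoder_area_fraction, core_power_w):
    """Under the simple assumption that power consumption scales with
    active die area, area spent on decode logic costs proportional power."""
    return core_power_w * decoder_area_fraction

# Hypothetical numbers: decode/translation as 10% of a ~30 W core.
extra_watts = decoder_power_overhead_w(0.10, 30.0)
print(extra_watts)   # 3.0 -- a far bigger share of the budget than the
                     # ~1% speed penalty would suggest
```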

Fuck elegance, give me just as fast for half the price and I'll take x86 ugliness any day.

I'm sorry you feel this way. I don't think I could live without an appreciation for beauty.

If you need those big caches, the 500MHz Xeon with 2MB cache goes for between $700 and $900

Uh... the street price on a 2MB Xeon 500 is $3000-4000. That is, higher than a 400MHz 4MB UltraSparc II, and significantly lower in performance as well.

Unfortunately, AMD is still behind with its multiprocessor solutions...

As is Intel. The practical effects of inelegance.

Most PC platform problems could be cured by moving to faster and wider buses, and a Unified Memory Architecture like SGI used on its short-lived line of Wintel workstations.

Sure. But that's where all the cost is, not the CPU. And if you're going to spend the money on a nice architecture, why not put in the extra $100 for a better CPU as well? Then you can kick Intel's kiester for their anticompetitive behaviour as well.

And, most existing operating systems and the software which run on them would work fine with just a minor OS patch, like the one SGI used to get NT 4.0 to run on its UMA Visual Workstations.

The operating systems that are used to actually get things done already run on the CPUs that don't suck. Linux runs on virtually everything. As does NetBSD (no SMP though). You can get realtime OSs for nearly every CPU, and there are vendor Unix OSs that work fine for most platforms. Who cares about enntee? Nobody who values his job uses it anyway. And all the useful OSs already have code to handle the I/O architectures. Why patch when useful OSs are already available?

Uh, it's because the i815 is based on the i810 design, but with more bells and whistles and separate AGP support (the i810 just used the embedded graphics controller; the i815 lets you disable it). The i810 was intended as a low-end "bridge" chipset for use in entry-level systems while the i820 and i840 became established.

I really think Intel wasn't backing Rambus out of any sinister conspiracy scheme. I think they really thought that PC100/PC133 wasn't going to hold up long-term in their roadmap and they needed something better. They had the Rambus investment and didn't foresee DDR SDRAM. That's why they got caught flat-footed with the i810 as their only non-Rambus chipset, which opened the door to both Via and AMD.

I bet if they could do it all over again, they would have started with the i815 as the low-end chipset, which would have both closed the window of opportunity that Via and AMD used to win business and eliminated the demand for SDRAM support on the i820 (and we all know how that worked out...), since there would have been an equivalent-performing SDRAM chipset.

The same profits AMD is receiving with THEIR license from Rambus. Chipsets define the RAM, not chips. No, Rambus and Intel have not been 100% aboveboard in the way they've approached the market, but Rambus has designed a product with great potential for technical superiority. Anand Tech [anandtech.com] has two interesting articles discussing the ramifications of RDRAM, DDR SDRAM and SDRAM.

I don't know what planet you're from if you consider US$5k for a workstation (even a high-end workstation) "surprisingly inexpensive" either.

Look, I really hate to use buzzwords (is it ok if they have fallen out of use?) but you need to think about total cost of ownership. If you have to pay someone $50 an hour plus benefits and taxes to fix things when they break, that $3500 peecee suddenly looks pretty expensive. Real workstations are much more reliable, and when they do break it's just a matter of pulling out the broken piece and popping in the new one. If you've ever worked on real hardware you know what I mean. Any repair job is 2 minutes, and there are no bloody hands and extra screws to deal with. If we're talking about individual-use systems, then the TCO depends on how much you value your time. I consider playing around in cramped, cable-rat's-nest-ified, sharp-edged, poorly labeled peecee cases to be a complete waste of my time. It's well worth the extra money to have a machine that always works; and even if it doesn't, it's trivial to fix it. If you've never owned a real workstation, you can't really argue with me. Try it; you'll never go back.

I consider playing around in cramped, cable-rat's-nest-ified, sharp-edged, poorly labeled peecee cases to be a complete waste of my time. It's well worth the extra money to have a machine that always works; and even if it doesn't, it's trivial to fix it. If you've never owned a real workstation, you can't really argue with me. Try it; you'll never go back.

Oh yeah. I fixed an Indy once for a friend (small problem, PSU fan died, very easy to fix), and I couldn't get over how wonderfully easy it was to pull apart the system (once I'd figured out how it was held together) and get at everything.

It was like the difference between an AT-layout x86 and an ATX-layout x86. Only better. Lots better. No more digging through little scraps of ribbon cable connecting on-board serial ports to the connectors on the back of the card cage.

Actually, it was almost as good as working on a Mac G3/G4. (And, even then, Macs hold stature only because of familiarity.)

Perhaps Intel is trying to scare Rambus through some public relations... Rambus may be demonstrating attitudes that Intel doesn't like, or thinks will threaten Intel's investment in them. Subjective qualities like becoming complacent, cocky, or too aggressive in their dealings are not something Intel wants to see.

Perhaps Intel is just doing this to keep Rambus on its toes, making sure they're always using notch 11 on the 10-notch amp, for that little bit of extra energy.

OK, just for the hell of it, I'll bite. According to Tom's Hardware, Intel stands to make about $158 million off of Rambus. That's pocket change for Intel -- they probably spend more money than that on office cleaning supplies. But by buying into RDRAM, Intel gets to confuse AMD and force it to spend money licensing the technology -- money that could be better spent on research. In the meantime, motherboard manufacturers scramble to license RDRAM and incorporate it into their products. Only a small number of mavericks try to stick with SDRAM after mighty Intel has spoken.

Then suddenly Intel does some benchmarks and plays innocent -- "those Rambus bastards lied to us!" So Intel does an about-face and brings back SDRAM. Maybe it even buys out a couple of those mavericks (who are probably hurting for cash) and sticks Intel labels on their mobos to get them out the door quickly.

Where does this leave AMD and competing mobo makers? Up a creek, that's where. The big PC makers want to follow Intel's lead and go with SDRAM; mobo manufacturers can't afford to switch back to SDRAM quickly enough -- Intel wipes out a bunch of competitors and solidifies its grip on the mobo market in one fell swoop. AMD is pushed away from the PC mainstream and relegated to the extreme low-end and hobbyist markets -- again. And Intel thaws out Elvis in time for the launch of Itanium.

From what I recall, SynchLink was 800Mb/s per pin (that is, a small 'b', as in megabits per second). So you'd need a 16-pin interface to reach the same bandwidth as RAMBUS. (Hey wow, that's the same number of pins as RAMBUS uses. Think that's a random coincidence? Think again.) I remember hearing about SyncLink before they'd added the 'h' to become SynchLink, when their bandwidth per pin was still 400Mbit/s. From what I recall, they upped it to be competitive with RDRAM.
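The pin arithmetic checks out; here's the calculation spelled out (peak signalling rates only, ignoring any protocol overhead):

```python
def peak_bandwidth_mbytes(mbit_per_pin, data_pins):
    """Peak channel bandwidth in MB/s from the per-pin signalling rate."""
    return mbit_per_pin * data_pins / 8   # 8 bits per byte

# 800 Mbit/s per pin over a 16-pin data path -- same as RDRAM's channel:
print(peak_bandwidth_mbytes(800, 16))     # 1600.0 MB/s

# At the old 400 Mbit/s per-pin rate, matching 1600 MB/s would have taken:
pins_needed = 1600 * 8 / 400              # 32 pins
```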

RDRAM lost EVERY test except FPU throughput with the L2 cache disabled. HOWEVER, when the cache was enabled, it got trounced! The theory of RDRAM operation and performance is OK (not great), but the reality is a different story.

for he pulled a spelling trick worthy of a fourth grader! Spelling the same word differently in two different places is a definite no-no. Both occurrences will be treated as spelling errors by your teacher.

I told our friend, CMiYC, to be more careful with his grammar. Surely he will take this advice and apply it to all of his future posts. I can only cleanse Slashdot one user at a time and, even then, I can only do it in a preventative manner. I wish there were a better way. Do you have any suggestions?

Maybe it's just me, but there sure appears to be a lot of belief here that RDRAM is a decent solution. The truth is, a lot of effort was put into making it, and the result was an overly complex protocol that traded latency for bandwidth.

It has no real position in today's market, since it is too expensive to be used for personal workstations, and is too slow with multiple chips, which rules out the lucrative server market (notice how Intel's new Xeon-style solutions recommend SDRAM).

Plus, RDRAM and PC133 SDRAM are in two totally separate leagues. It would have been better to compare RDRAM to DDR SDRAM (PC200?), which is proven to smoke RDRAM in both latency AND bandwidth.

I'm not talking about a performance rating, or a number on a box; I'm talking about taking it into consideration when you make a comparison.

I made some other comments on this same topic as well. However, I believe my point was "RDRAM is too expensive *and* it doesn't offer a real performance boost, for general-purpose memory". Do you see why price/performance would be an important metric here? (or even some consideration or mention of price?)
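A sketch of the metric being asked for, with made-up numbers (the scores and prices below are hypothetical, chosen only to mirror the mid-2000 situation of near-equal performance at very different prices):

```python
def perf_per_dollar(benchmark_score, system_price_usd):
    """Price/performance: benchmark score per dollar spent."""
    return benchmark_score / system_price_usd

# Hypothetical systems: a small RDRAM performance edge,
# a large RDRAM price premium.
sdram_value = perf_per_dollar(100, 1500)
rdram_value = perf_per_dollar(103, 2000)
winner = "SDRAM" if sdram_value > rdram_value else "RDRAM"
print(winner)   # SDRAM -- the small edge doesn't survive the price premium
```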

Also, benchmarks are fundamentally flawed in the first place. Depending on how they are conducted, and the *exact* components, software and hardware, for the entire system, plus configuration tweaks, the result can vary by a huge amount! So I wouldn't argue that performance doesn't change. The system I buy won't be anything like the one they benchmarked; I might not be using the same chipset, operating system, or bus, let alone the tweaked settings in my nonexistent "Windows Registry". So performance can be just as artificial as price.

Your other point about letting the reader compare for themselves is valid, but I wasn't intending to advocate eliminating performance metrics entirely; I just wanted to see someone mention how *#@$ expensive RDRAM is now, and how useless it is to buy it for performance as system RAM. Also, any decent benchmark should have full disclosure as to how the performance numbers were achieved, and all the information possible about the testbed, so that people can recreate the results, or change another parameter and compare to those results. In a perfect world, that is...--- pb Reply or e-mail; don't vaguely moderate [ncsu.edu].

I fail to see your argument for AMD. I agree with your entire position on Intel, but logically you cannot exempt AMD from your ire. While they are surely less evil than Intel, they are still evil for contributing to the continued existence of x86. And any proportion of evil makes for complete evil. And to get right down to it, x86 itself is nowhere near as bad as the peecee architecture in general. The product of 20 years of non-design and corner-cutting cheapness, this architecture offers atrocious performance, maintenance nightmares, and outrageous total cost of ownership. So if you really want to make a difference, stop buying peecees altogether. Your case for AMD is weak at best. If Intel is evil, so is AMD.

There are plenty of options out there; many are surprisingly inexpensive. Quality, high-performance workstations from Sun, SGI, and Decompaq can be had for less than USD 5000, often less than half of that, which do not use x86 nor the peecee architecture. You'd better hurry, though, before everyone drops their quality architectures for IA64 and gives Intel the market chokehold it has been lusting after for years.

I'm ready to buy a decent higher end box, but am trying not to go much over $2,500 (the price I paid for my first computer, a Mac 128K in 1984 -- tradition!)

Comparing similar boxes based on the i815 and i820, I can get an i815-based box with 256 megs of RAM for less than a 128-meg i820 box, and if I even wanted to go to 256 on an i820, it'd cost me an extra $500 or so. And -- you have to be really careful. Dell has apparently been shipping PC600 or PC700 with many of their units to keep costs down. And PC600 RDRAM should *really* be called PC534, but it's been rounded up. If it doesn't say PC800 in the "configurator," be suspicious.

Bottom line: screw minor benchmark differences. When it comes down to it, RDRAM's cost is prohibitive, and if you compare boxes of the same cost, with the SDRAM-based box loaded up with extra RAM, you'll be better off with SDRAM.

Now this HAS to tell people something. Does Intel think that its end users are going to remain dumb forever? I think Intel needs to come clean as to why exactly it's still pushing Rambus memory so hard. I've seen so many benchmarks that show PC-133 on top that I have to wonder why a company that claims to be the processor leader would choose an inferior product.

I agree with you in saying that x86-based processors may not be as efficient as more advanced RISC-based processors (Alpha, Sun, etc.); however, you also need a BIG dose of reality. The fact of the matter is that a Sun workstation worth less than $5000 will have much lower performance than a high-quality PC which would cost only $3000. In our research group we have a Sun Ultra Enterprise server with a 333MHz processor, and a PII 350MHz beats it on many integer and floating-point calculation benchmarks. The first one costs $20,000 while the second one costs much less than $5000. We also have many Sun Ultra 10 workstations (they cost around $5000) with 333MHz chips, and their performance is inferior (in Matlab applications) to similarly configured PCs. The reality is that if you don't have around $50,000 to buy the TOP of the line Sun or Alpha, you can get MUCH MORE PERFORMANCE from a PC based on an Athlon or Pentium processor for much less money. Personally I think you are much better off using a PC with Linux and the latest top-of-the-line processor (whether it is the Thunderbird or the Athlon) than buying a very expensive Sun machine.

On a different subject, I find it very funny that even INTEL itself admits that PC133 SDRAM is better than RDRAM. This is not new, since it's been on many websites like Tom's Hardware Guide (www.tomshardware.com). I guess we will have to see what Intel's true intentions are with respect to memory in the future.

As far as load is concerned, RDRAM is optimized for throughput, SDRAM is optimized for latency. Something that hits many cache rows in more or less random order taking only a little data from each will work well with SDRAM. Something that processes large amounts of data in more or less linear order will work well with RDRAM. It depends on what you're doing.
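That trade-off is easy to model. Here's a hedged sketch with invented numbers (the latencies and peak bandwidths below are illustrative, not datasheet values): when every access pays the full latency, small random reads favour the low-latency part and long linear streams favour the high-bandwidth part.

```python
def sustained_mbytes_per_sec(latency_ns, peak_mbytes_s, bytes_per_access):
    """Sustained throughput when each access pays the full access latency.
    peak_mbytes_s is treated as bytes per microsecond (1 MB/s = 1 B/us)."""
    transfer_ns = bytes_per_access / peak_mbytes_s * 1000
    total_ns = latency_ns + transfer_ns
    return bytes_per_access / total_ns * 1000

# Illustrative parts: "RDRAM-like" (90 ns, 1600 MB/s) vs.
# "SDRAM-like" (60 ns, 1064 MB/s).
rand_rdram = sustained_mbytes_per_sec(90, 1600, 64)     # small random reads
rand_sdram = sustained_mbytes_per_sec(60, 1064, 64)
strm_rdram = sustained_mbytes_per_sec(90, 1600, 4096)   # long linear streams
strm_sdram = sustained_mbytes_per_sec(60, 1064, 4096)
# Low latency wins the random case; high bandwidth wins the streaming case.
```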

My personal opinion? RDRAM is a bad implementation of a good idea. In five years we might see something better. For now, buy DDR SDRAM. YMMV.

I can't help but mention that I did call myself "a guy". By the way, this "Nazi/Jew" contraction is disgusting. Finally, I refuse to manually process Fishbabble (TM) in order to turn it into something remotely human-readable. There's an RFC [faqs.org] that deals with exactly these things.

I think Intel needs to come clean as to why exactly it's still pushing Rambus memory so hard.

Other than the fact that they own Rambus? How about profits from licensing Rambus technology? How about using patents to put the squeeze on SDRAM manufacturers? How about designing future CPUs and chipsets so that Rambus is the ONLY memory that is supported?

We love to bash M$ because we are visibly affected by their evilness on a daily basis, but I think most people would be surprised by the kind of nasty stuff that Intel gets away with (just ask Intergraph!)

It doesn't look like the PC133 results were massively better, but almost all of these tests showed minor performance increases. The article states that the tests were done in the same lab; I'm kind of surprised that no one realized they were getting nearly the same results. If the same group of people did the measurements, I would think that someone would have gone, "hey, those numbers look familiar." Hmmm.

How is it "insightful" that AMD's evil because they sell a chip that consumers want to buy??? And encouraging people to swap architectures to machines that cost 100%-500% MORE than equivalent x86 machines? It's not like those chips make any bit of difference in the consumer's mind.

Any company that walks away from the x86 processor business is a dumb company. That'd be like leaving money on the table. Buckets full of money. And x86 is really the only market to be in right now... hmmm? Should I go after 10% of 100 million units a year or 50% of 5 million units??? Which way does the math work best?

> That's sorta true - however, the 8086, which was BEFORE the 8088, had a 16 bit bus.

That's almost true:) The 8086 was not *before* the 8088; the 8085, 8086, and 8088 were introduced simultaneously.

> The 8 bit bus was actually from the 8080 (or the competing Z80), which was an 8 bit processor.

I remember those quite well. I even have an 8080. :) However, the busses used varied widely. To the best of my knowledge, the IBM PC bus did not relate to any 8 bit busses . . .

>The 8086 was short lived for cost reasons,

Short-lived? It was in wide use by almost everyone except IBM until the 286 became commonplace. At that point, it fell largely out of use, and the 8088 was used in budget machines.

> so most people (i.e. you) associate the 16 bit bus with the 286.

Uhh, no. Aside from the fact that I remember all of this from when it was happening, I most certainly do not make any such association.

However, the *particular* 16 bit bus that was being discussed is the IBM PC/AT bus, which was introduced with the attached 286 and extended the 8 bit bus of the IBM PC, which used an 8088. There were several other 16 bit busses at the time, including Olivetti's and Vector's, which extended the 8 bit PC bus, and an extension to the S-100 favored by companies such as Compupro.

The point is... RDRAM might be great for server apps and stuff.. buuuuuuuut... THAT'S NOT WHAT INTEL IS DEVELOPING IT FOR!

Seems their whole desire is to push it into desktop machines as soon as possible...

RDRAM may be good for some things.. but personally I don't see any reason to spend that much extra cash on a motherboard that supports it, let alone the cost of the RAM itself, just for my desktop machine... I mean.. let's get real, folks.

SDRAM still has a ways to go... figure 200MHz will be enough to hold us over for a few years at least.. then in the meantime the server boys can suck up the developmental costs of RDRAM, push the prices down to a reasonable level for us desktop users, and hopefully get the bugs worked out in the meantime.

Well, consider the fact that SDRAM, and even DDR SDRAM, is considerably cheaper than RDRAM. Thus, for (nearly) the same performance, you can have a cheaper solution that holds up to the more expensive one. Now which do you choose?

Please, like slashdot readers never complain about misleading benchmarks like Mindcraft. The simple truth is that in most real-world applications, Rambus handily outperforms PC133 DIMMs, and is worth the extra expense (which means little to companies who want the extra bandwidth).

Sweet troll, man. Two biters with the +1 bonus already and neither of them has any suspicion. God, you'd think at least one of them would have bothered to read your fucking post. "The simple truth is that in most real-world applications, Rambus handily outperforms PC133 DIMMs, and is worth the extra expense." Yeah, that's a well-supported assertion ;) --Shoeboy

Wait a second and flip the numbers over... let's assume for a moment that the RDRAM was 3% faster than the SDRAM. Is that 3% worth the expense?

Now let's come back to reality. According to the benchmarks I've seen, SDRAM comes close to and usually beats RDRAM at a much lower cost. I wouldn't be surprised if some tech-head came and showed me RDRAM spanking the benchmarks for big server apps, but why do I care?

I would like to see a benchmark comparing Linux with SDRAM against Windows with RDRAM. Of course, we'll have to get Mindcraft to do the tests, as they have experience in this area..

In the end, it depends on your perspective... Do you spend the extra cash on the processor or the memory? I'd love to see someone put 600MHz RDRAM on a 500MHz processor...

OTOH, SDRAM has been around a bit longer than RDRAM (it's also an..."open standard"...). Do you want to pile your cash into a STILL unproven technology that could easily be squashed in a few months?

I've been worried recently that SDRAM prices will skyrocket and RDRAM prices will plummet for the ONLY reason of big business pushing around the little consumer.

11 or 15. Six for decode (FETCH, SCAN, ALIGN1/MECTL, ALIGN2/MEROM, EDEC, IDEC/Rename), or maybe five depending on how you feel about IDEC/Rename. For integer instructions (at least direct path ones) you then have SCHED (which can take multiple cycles, depending on how long it takes for all inputs and an appropriate functional unit to become available), EXEC, ADDGEN, DC-ACC, RESP (DC-ACC and RESP are cache accesses, I'm not sure where the write back is -- they may have left the retirement out of the document I'm looking at). The FP pipeline (FP instructions, MMX and 3D Now! instructions as well) is longer, 15 stages (including the first 7 above), more for FMUL.

Intel P4 (aka Willamette) has 20 stage pipeline, and it remains to be seen whether the high clock rates this enables makes up for the hits it'll take due to latency and branch mispredict penalty.

I thought the Willamette's was more like 25 pipestages for the integer unit, and an undisclosed (I assume higher) number for FP operations. The PPro's is pretty long already, like 18 or so (that may be for the FP). I assume about the same length in the P-II and P-III since they share the same microarchitecture.

I'd like to read some nice conspiracies of how Rambus is controlling Intel and how Aliens from [insert name of distant planet] are controlling the whole thing from above.

Yeah, it makes sense that the i815 is newer and based on the i810.. still, they could have called it the i-gothitovertheheadwithacluebat chipset and they might have gotten it a bit closer to the truth ;).

Just like they changed the processor serial number (PSN) to be the WPSNYTB? (What PSN are You Talkin' aBout?)

The U5 and U10 are not deserving of the name they carry. They are peecees with a different CPU. I won't consider discussing the merits of these machines separately from peecees, since they are identical. Someone at Sun loaded up on crack cocaine and the 5/10 resulted. End of story. Let's hope he got the help he needed.

And you're welcome to spend $1000+ for a Creator 3D card that's probably no faster than a $200 PC video card.

I paid $80 for mine. FFB2+. Very nice.

And then you can get screwed when you need a patch for Solaris that's only available to contract customers.

So don't use Solaris. It isn't very good anyway. Linux runs exceptionally well on Sun hardware, much faster and more reliable than on peecee hardware.

You think everyone would rather spend 3 times as much because you're too lazy to work inside a computer for a few minutes longer?

Laziness has nothing to do with it. I was discussing cost. If something takes longer, it costs more. In an environment where you're paid to do so, the costs are immediate and direct. In other environments you must evaluate the worth of your time. Personally, I'd rather just use my computers to do the work I want to do and not spend lots of time screwing around trying to get broken, misdesigned hardware to function. YMMV of course.

It's not easy to do that. Everyone has different weightings that they put on those categories. A Honda gives fantastic price/performance, but if you want to win the Indy 500, it is definitely not the right choice. Some people will pay a lot more for a small gain in performance because they need all the performance they can get. Others will take a significantly inferior product for even a small price drop because they just don't have the extra $100, period.

Why? Well, upcoming Intel "whitebox" servers WILL NOT USE INTEL CHIPSETS! They will use chipsets from Reliance Computer Corporation (RCC), now known as Serverworks.

spot on. i just bought a Tyan Thunder 2500 (based on the ServerWorks IIIHE chipset), mainly because it has proper support for 133MHz SDRAM, without any hacks or kludges. that, and the fact that it has 64 bit PCI slots, 8 SDRAM sockets, and of course dual CPU support.

if you can actually find one of these boards, it's a pretty mean piece of bad azz mofo hardware.

Hey The Grammar Jew, why is this the first time that I've read of you correcting grammar? Are you trying to build up karma before losing it as a grammar corrector? A true grammar nazi doesn't need to build karma since his posts are enlightening and grammar-correcting. Positive moderation is the reward for the grammar nazi.

Are you supposed to be my enemy or something? Why did you pick the name, 'The Grammar Jew'? As a grammar nazi, I'm trying to cleanse Slashdot of bad grammar. I have no problems with people of differing races/religions. If you wish to be my enemy then please start using lousy grammar. If you wish to coexist and team up against the lousy English on Slashdot, then welcome to the club!

Test them on price/performance instead of performance; for general-purpose memory, I see no compelling reason to use RDRAM except to say that you're using it. (As in, "Wow, RDRAM, that's new, isn't it? I bet that set you back quite a bit...")

While many end users are actually more interested in price/performance than they are in performance per se, the idea of listing price per performance is still a bad one. There are two main reasons for this:

Different users have different willingnesses to spend extra for more performance. By making a composite yourself, you deprive the reader of the ability to make that choice himself.

Prices for computer components are well known for being unstable both in time and location, while performance fluctuates less. By factoring in the price at the time you ran the test and where you bought the components, you muddy the comparison for users buying components in a different environment.

Both of these factors suggest that rating by price/performance is a bad idea, and that rating just by performance is much better.
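The price-instability point can be made concrete with a small sketch (all scores and prices below are invented for illustration): the same two products swap rank in a price/performance table purely because one street price moved.

```python
def price_perf(score, price_usd):
    """Composite rating: benchmark score per dollar (higher is better)."""
    return score / price_usd

# Hypothetical parts: A is faster, B is cheaper (all numbers invented).
rating_a = price_perf(100, 250)
rating_b = price_perf(97, 120)

# A month later B's street price spikes; its performance is unchanged,
# yet the published composite ranking silently flips.
rating_b_later = price_perf(97, 300)

print(rating_b > rating_a)        # True  -- B ranked first at review time
print(rating_a > rating_b_later)  # True  -- A ranked first a month later
```

A performance-only table avoids this: readers can divide by whatever prices they actually face where and when they buy.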

A marginal speed increase in some benchmarks related to desktop applications is hardly significant and can easily be attributed to differences in the chipset, different BIOS settings for memory timing, and so on.

If RDRAMs are indeed faster than SDRAMs, you'll see that in situations where high memory bandwidth is essential. For example, a comparison on a large SMP database server could be really interesting. Usually, desktop applications do not have these memory bandwidth requirements. Even for so-called multimedia applications, the PCI bus bandwidth is the limiting factor most of the time.

Intel's philosophy is no different from Microsoft's: Embrace, extend, extinguish. I'm just amazed that your typical Microsoft-bashing/.ers aren't Intel bashers, too, because Intel deserves a big ol' can of whoopass opened right by their corporate asses. Let's examine a little...

First off, Intel has been in the process of developing standards for the PC architecture for some time, as well it should. However, they've been doing it the same way Microsoft has been "contributing" to Internet standards. For example, they developed AGP up to 4x, which has proven to be very useful; however, rumours are churning out from reputable sources discussing an Intel project to create a successor to AGP 4x, and this successor is to be limited to Intel chipsets and chipsets made by select Intel partners--i.e., anyone who annoys Intel will get left behind. Intel developed the PC-100 memory standard--a great service, but...then it refused to develop a PC-133 standard or DDR-SDRAM specifications, because of its own interest in RDRAM as a wholesale replacement for all SDRAM.

Many have questioned whether Intel has much to gain from Rambus becoming the new standard instead of DDR-SDRAM; after all, contrary to popular belief, Intel doesn't completely own Rambus, and their deal with Rambus would only give them compensation in the tens of millions, which isn't much for a company whose revenues are in the billions each year. But what Intel has to gain isn't direct monetary compensation from Rambus, it's *control* over the standards for memory and memory controllers--and the rights to manufacture and license those memory controller technologies. This is exactly what MS did with IE--it didn't directly make a profit by developing a new web browser and bundling it with Windows; it gained market control and the ability to manipulate the Internet protocols so that all its products, from IIS to Frontpage to NT Server and the rest, had an advantage of guaranteed interoperability and increased functionality over competing products.

Intel wants to do the same with RDRAM and its new IA64 architecture, and its new forays into the emerging appliance market. Intel will make royalties on all chipsets which support RDRAM. Intel will make direct profits on its IA64 processors and has probably been hoping to licence the ISA to competitors once x86 plateaus. Intel has purchased the StronARM and other embedded/appliance hardware companies, hoping to leverage its market dominance to push it into every area. And, let's not forget that they tried and tried and tried to force their way into the graphics market, but failed there due to too-short product cycles and competitors with much more graphics experience.

It's clear that Intel wants to be the Microsoft of the hardware world. If they leverage enough tech patents on all fronts, they can force use of their products in the same unfair ways Microsoft leveraged itself into every crevice: big OEMs unable to get the best prices on Intel desktop processors unless they agree to use StrongARM in their embedded/appliance products instead of Transmeta or MIPS, or unable to get hold of short-supplied IA64 for workstations/servers unless they use P4 in their desktops, VIA unable to make the most advanced RDRAM chipsets unless they cut back on DDR or agree not to pursue QDR, etc. Don't think it won't happen, even with M$ as an example: there are many sneaky, below-the-board ways to hint at such matters without bluntly making demands.

And, since everyone here hates the x86 architecture so much, why the Hell are so many /.ers such big Intel fans? They're the company which kept pushing x86 for decades instead of developing something new and improved and more RISCy, so why so many Intel apologists and AMD naysayers? After all, as good and serviceable as the P6 core was, it didn't deserve to stay in service for 5+ years. AMD may have been a dog back then, but at least it made radical improvements with almost every product cycle; Intel just wasn't trying at all. And look at the disaster which is the new Celeron/Culeron: it may be overclockable to 900MHz easily, but because of the set associativity lost by savagely destroying half the cache like Huns sacking Rome, it barely rivals a P!!! 700MHz and gets blown away by a lower-clocked Duron too--and the Duron is also very OCable. Intel is being just as evil as M$.

"You are a stupid fuck." Ouch. "In real-world applications, Rambus does perform better." Where is your fucking PROOF, asshole? Is Bryce 4 not a 'real world application'? How about CorelDraw 9? Naturally Speaking? Quake III? Netscape Communicator? Paradox 9.0? Photoshop 5.5? Powerpoint 2000? Windows Media Encoder 4.0? Word 2000? I think all of these are 'real world applications' and guess what, the 440BX at 133 smacks the i820 all over the fucking place. The only real world app where I saw Rambus with an advantage was Excel 2000. --Shoeboy

It does not take much imagination to picture the horror-show consumer environment that a world without AMD, Cyrix and Winchip would have been. Intel would have dropped prices on the 12th of never, and would have innovated at a much slower and, for them, more profitable pace. These companies all work for a profit, and that is good, but the market pressure brought on by AMD's latest technological gains in surpassing Intel at a lower price point is great for everyone. If you're willing to accept less stability than the Sun-type workstations, a 1-gig box can be built for well under 1,000 USD (no 22" monitor or RDVD), but that's what was supercomputer performance just a few years ago. WOW! Little black helicopters? If you only knew.......

At last, a /. reader with an intelligent question. A few percentage points in benchmark testing are not a big deal to the average home user. I offer my own example. I built an "upgrade kit", which allowed me to use my SCSI drives in a new, faster computer, a 16MHz to 300MHz CPU speed change. Following the supplier's suggestions, I ended up with a computer running 100MHz memory at 66MHz, and the CPU at 150MHz. There were several reasons to limit memory speed, cool running and conservative practice being two. However, the CPU slowdown was just a goof: the technician thought there was another doubler in the CPU. Eventually, an update to X-windows was unacceptably slow to load. After checking the setup, I changed the CPU multiplier to get 300MHz. Everything worked without a problem, and the only noticeable speedup was the X-windows load. Mind you, these are hardly small, incremental differences I am discussing. The answer for the average home user is still the same. Determine the equipment that will do your job, then buy it. For most of us poor boys, this makes price a powerful parameter, far more significant than a few percentage points in an irrelevant benchmark.

As long as you don't have to code in assembler for it--and few code in assembler these days, anyway--there's nothing wrong with x86 since modern x86 CPUs are really a RISC core with an x86 decoder tacked on, which according to Ars Technica only adds about 1% penalty to the processor's speed.

I don't see how they came up with the 1% number. Here are a few counter arguments...

The x86 has reached some pretty impressive speeds. 1GHz is shockingly fast. Even 800MHz is quite speedy. Intel has done this by using extremely long pipelines, some 15-22 pipestages depending on the operation. AMD has done the same. A longer pipeline increases the latency of many operations, and makes sequential dependencies in code cost more and more. The same goes for branch penalties and load cache misses. IBM has the PowerAS running at 600MHz with a 5-pipestage machine (that is fewer pipe stages than AMD uses just to decode instructions!). It smashes the PPro through P-III and AMD in anything that has lots of poorly predicted branches, like DB code. It also does better on code that does lots of pointer chasing (like linked-list walks).

(The PowerAS has a zero-to-one-cycle penalty for "mispredicted" branches (its prediction method is "always taken" or "never taken", I forget which); the Intel has a penalty of more like 11 to 20 cycles, with a maximum penalty of 44 or so cycles of work discarded from the ROB. The Intel has a very good branch prediction scheme for predictable branching patterns; when it gets to hard-to-predict code it sucks big time.)
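As a rough back-of-the-envelope check on this trade-off, here's a sketch of the usual CPI arithmetic (the workload figures are made up for illustration, not measurements of any real chip): fold the mispredict penalty into cycles-per-instruction and compare average time per instruction.

```python
def ns_per_instruction(clock_mhz, base_cpi, branch_frac, mispredict_rate, penalty):
    """Average time per instruction once the branch mispredict penalty
    is folded into CPI. All workload figures here are illustrative."""
    cpi = base_cpi + branch_frac * mispredict_rate * penalty
    return cpi / clock_mhz * 1e3  # (cycles/insn) / MHz -> ns per instruction

# Branchy, hard-to-predict "DB-style" code: 20% branches, 30% mispredicted.
short_pipe = ns_per_instruction(600, 1.0, 0.20, 0.30, penalty=1)    # ~1.77 ns
long_pipe = ns_per_instruction(1000, 1.0, 0.20, 0.30, penalty=15)   # ~1.90 ns
print(short_pipe < long_pipe)  # True: the clock advantage evaporates
```

With well-predicted branches (say a 5% mispredict rate) the long pipeline's higher clock wins again, which matches the "great on predictable patterns, sucks on DB code" observation above.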

The P-III and AMD manage to decode 3 instructions per cycle, quite an accomplishment with an irregularly sized instruction set. They have only now gotten to this point; the SuperSPARC decoded four instructions per cycle back in 1992 or 1993ish. That means the best the x86 can do over the long term is to execute three instructions per cycle (because even if they have spare functional units, they will run out of instructions in the reorder buffer if they manage to execute more than 3 instructions per cycle for long). RISCs have grown a few more decoders in the intervening 8 years. Some of them, at least.
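The decode-width ceiling can be illustrated with a toy steady-state model (the widths and ROB size below are arbitrary, for illustration only): extra execution units can't help once the reorder buffer drains faster than the decoders refill it.

```python
def sustained_ipc(decode_width, exec_width, rob_size=40, cycles=10_000):
    """Toy steady-state model: instructions enter the reorder buffer at
    the decode rate and leave at the execute rate; returns avg IPC."""
    rob, executed = 0, 0
    for _ in range(cycles):
        rob = min(rob + decode_width, rob_size)  # decoders fill the ROB
        done = min(exec_width, rob)              # functional units drain it
        rob -= done
        executed += done
    return executed / cycles

# Five execution units cannot lift a 3-wide decoder past 3 IPC.
print(sustained_ipc(decode_width=3, exec_width=5))  # 3.0
```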

If the x86 is only one percent slower than RISCs, why is the ancient (2-year-old?) Alpha 21264 at a mere 667MHz still turning in better SPEC2000 FP numbers than the "shipping only to select OEMs, and not many units either" 1GHz Intel part?

Try to get a STREAM benchmark number in the same ballpark as a real Alpha (not one based on PC chipsets) with a Xeon. Intel hasn't made a memory system that can compete. And the memory system is half the price of the damn Alphas.

RISC may have lost the commercial war to CISC, but there is no need to stomp on its accomplishments. There are really impressive RISC CPUs built for a fraction of the research dollars Intel (and AMD!) spend.

So, I never said Intel was evil for pushing x86 for so long; I said that it's dumb for people to hate x86 but not fault Intel for failing to create a better ISA long ago.

Oh, but they have. The i960 is a different ISA; I never used it, but I'm sure it is quite different from the x86. The i860 was also very different. It had a pretty nice ISA as long as you didn't put it into streaming mode. The VLIW mode was a bit odd to me, but it wasn't a huge deal.

People even used them. Just apparently not enough people used the i860. I dunno what the deal was with the i960. It was extremely popular 5 years ago, but doesn't seem to be now.

Maybe I'm just not enough of a hardware junkie, but are a few percentage points difference that big a deal?

I think the big deal is the fact that RDRAM is supposed to be so much better in terms of performance than SDRAM. The very fact that SDRAM matches or beats it, or loses by so little, makes one wonder why spend the extra $$$ for RDRAM. So, no... in terms of performance only, a few percentage points don't matter. But if you look at the overall picture: price, availability, compatibility, APPLICATION... which technology do you really need?

That being said, I also think that RDRAM may not be dead. Look at the Celeron. The first Celerons were crap. Now it is just about the most common low-end processor out there. There may be a little more lag time since RDRAM isn't being developed directly by Intel, but I think that Rambus will do whatever Intel tells it to. (At least they better!)

RAMBus is over eight years old. This is something like the fourth major revision (in '92 it was a 400MHz 8-bit interface). This is not a repeat of the Celeron story.

Try putting in 8 slots of fully interleaved RDRAM vs. SDRAM and you will find that the RDRAM has one hell of a lot more bandwidth.

If you fully interleave the SDRAM it has pretty impressive bandwidth numbers too. Of course that takes (about) four times as many pins. In fact, 8-way interleaved PC100 SDRAM exceeds the bandwidth Intel and AMD can get off their CPUs, so the only thing that will matter is latency, which'll make the SDRAM a better choice...
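The arithmetic behind that claim is just bus width times effective clock times channel count; here's a quick sketch (assuming a 64-bit, 100MHz SDRAM channel and a 16-bit, 800MHz-effective Rambus channel):

```python
def peak_bw_bytes(bus_bytes, effective_mhz, channels=1):
    """Peak transfer rate: bus width x effective clock x channel count."""
    return bus_bytes * effective_mhz * 1_000_000 * channels

pc100_8way = peak_bw_bytes(8, 100, channels=8)  # 8-way interleaved PC100
pc800_rdram = peak_bw_bytes(2, 800)             # one 16-bit PC800 channel

print(pc100_8way)   # 6400000000 -> 6.4 GB/s, 4x a single Rambus channel
print(pc800_rdram)  # 1600000000 -> 1.6 GB/s
```

That 6.4 GB/s is well past what a 64-bit, 100-133MHz front-side bus can consume, which is why latency, not peak bandwidth, would decide the comparison.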

If you can come up with all those pins. If.

Low pin count is one of RDRAM's few remaining advantages (RDRAM systems with no CPU L2 cache run about as well as systems with a small L2 cache and a normal memory system -- but that's not a good deal with L2 caches so large these days... I can list other obsolete advantages if you like). You can four-way interleave RDRAM with (about) the same number of pins you need to interface to straight (not interleaved) SDRAM. So if IBM's high-density packaging catches on, RDRAM loses that (as more pins will be cheap). If DDR SDRAM really uses a 16-bit interface, RDRAM loses its advantage.

Of course, I don't see many chipsets using this advantage. Where are the motherboard chipsets with four RDRAM controllers?

Test them on price/performance instead of performance; for general-purpose memory, I see no compelling reason to use RDRAM except to say that you're using it. (As in, "Wow, RDRAM, that's new, isn't it? I bet that set you back quite a bit...")

Now, for some special-purpose applications, RDRAM might be an excellent choice, just like in some circumstances, a P-III might work out better than an Athlon, or an 8086 might be the better choice than a G4, or a hammer might work better than a screwdriver. But for general purpose, plain old RAM, RDRAM is underwhelming.

...now watch the price of RAMBUS drop. I can hear the screams from here. :) --- pb Reply or e-mail; don't vaguely moderate [ncsu.edu].

Someone has *got* to be fired over this. I forget the exact figure, but Intel has invested a huge amount in Rambus... and you can be sure to expect Rambus's stock price to go down a bit if word of this spreads far and wide... This incident reminds me of the time the current CEO of Microsoft remarked that 'tech stocks are overpriced'. Oops. There go MSFT stocks, down $20 a share. I doubt this is proof of the beginning of Intel's favoring of SDRAM over RDRAM. I just think it's another dumb mistake in a series of small dumb mistakes by the lumbering giant, and it foretells that it's on the way down... Maybe the mistake was not as conscious as the one to make Intel's i820 non-overclockable (thereby losing a part of the market, not a big one, but a piece of the pie nonetheless)... Hail to the king (AMD), baby :)

I am definitely not a Rambus lover; G-d knows I just had to spend a fortune on some Rambus RAM. I wanted to go 933, could not wait for the i815, and needed an extremely stable solution for my workstation. I ended up with an Intel OR840 mainboard, 256 megs of RDRAM, and a P3-933. This thing is rock stable but cost a small fortune (the RAM cost more than the CPU).

RAMBUS RAM is completely out of control when it comes to price. It's nearly impossible to track down (if you're building your own system), and I can't see it competing well against the dual-channel SDRAM solutions that are due out soon. If a single-channel configuration with 800MHz RDRAM can't beat out a PC133 solution, there's no way a two-channel RDRAM solution will beat DDR SDRAM.

The big problem we run into is system stability. Call me crazy, but I simply can't compromise by going to a VIA chipset anytime in the future to avoid RDRAM. I've seen too many problems with them in the past, and I simply can't take the chance on it. I need stable machines at my shop, otherwise my life is miserable (plenty of Lusers here!)

Intel's primary reason for sticking with Rambus is the amount of money they've made off of Rambus stock. I don't expect them to abandon it anytime soon. The big question is whether or not we'll see a dual channel SDRAM solution coming out of Intel.

What's really pissing me off is that Rambus is going to make money regardless of the memory technology. They are winning lawsuits against DDR SDRAM manufacturers, and it looks like there won't be a stick of ram produced that won't have a royalty fee going to the big R. Can you smell antitrust?

"I wouldn't be surprised if some tech-head came and showed me RDRAM spanking the benchmarks for big server apps, but why do I care?" Really? It would shock the hell out of me. RDRAM latency degenerates rapidly as you add chips. Get a quad-proc system with 4GB of RDRAM and you'll see some truly abysmal benchmarks. That's why Intel was trying to position RDRAM as a desktop/workstation tech for Willamette (the P4) while pushing SDRAM for Foster (the P4 Xeon). --Shoeboy

I think that the benchmarks make you step back and think. Do you really need to spend the money on Rambus? Think of it this way: if you were about to invest in a Rambus system just because you thought it was faster than PC133, you might be surprised to find out that, whatever your application is, SDRAM performs just as well.

So, think of it in that respect: it all depends on the application and whether the application warrants the cost. If your specific application won't gain anything from it, why spend the money? On the other hand, you might be able to rest assured that the money is well spent... (which I know most people here won't think that way; they'll just look at the numbers, but hey, that's life).

I think that it's obvious (as kirkb mentioned above) that Intel has a lot of financial incentives to back Rambus. It's a good strategy, really: buy 10% of the company for dirt cheap, then force the technology down the throats of users. Now sales are up, the stock price skyrockets ("well, Intel says it's the next big thing, and who would know better than Intel?") and their investment increases tenfold.

It's a classic scam... just not usually pulled by a company the size of Intel.

Benchmark: RDRAM vs. SDRAM General Purpose RAM as a memory system for a PC Rated on Price/Performance and Performance.

Since RDRAM, if anything, tends to be slower, *and* it is massively more expensive, it loses.

Any other uses for it are just that--other uses. i.e. not what I would be benchmarking, and not what I was talking about.

Also, I'm going to buy a new computer, and I'm going to get an Athlon with PC133 SDRAM, both for cost and for performance. If you could find me an equivalently priced and performing Pentium III with RDRAM, I'd buy it. Do you see the relevance of this metric now? If their performance were *significantly* better for the general tasks I'd perform, then we could change the weights on price/performance. Until then, it's a sucker bet. --- pb Reply or e-mail; don't vaguely moderate [ncsu.edu].

I also heard something about individual RAM cards integrating CPUs onto them. Not fully-fledged 32-bit or 64-bit CPUs, but 1-bit or 4-bit or 8-bit CPUs. The main CPU could farm out operations to the smaller processors, saving a lot of the main CPU's time for the bigger operations.

Perhaps a vector processor on every chip would have value, allowing SIMD operations to be performed with much more efficiency than if a central vector unit had to do it, like AltiVec or whatever name Intel has come up with for their technology this week.

That strikes me as a pretty smart technology... cheap and effective, hopefully. Add RAM, add a small amount of processing power. We will see it eventually, but it would be good for some company to do it now and get an early lead in the "iRAM" field.

I'm not sure. That one PDF from their web page says 11 for (most) int instructions and 15 for (most) FP/MMX/3DNow! instructions. It also leaves out any pipestages that look like retirement/register writeback, so unless those are folded in with other stages, which is unlikely, I think they left something out.

AMD has pretty decent tech docs on their page; go look. My memory said more than 11 minimum, so I was just as surprised to see the 11 as you are by the 15 (which is also listed).

First of all, the Sun Ultra 10 machines I was talking about aren't PCs; I am not sure where you got that from.

Open one up sometime. Look at the chips. You will see that they use standard peecee components like ATI graphics, IDE, and Goldstar (yes, Goldstar) CD drives. In my book, that makes it a peecee.

I wonder if you use Matlab; it doesn't seem that you do.

It's not my primary application, no. I use my systems for development. Since I know for a fact that matlab does not run on sparc-sun-linux (I do admin matlab, I just don't normally use it), I would strongly suggest that your disappointment with your Suns is the fault of your choice of operating systems, not hardware. Solaris has a reputation, backed up by benchmarks for whatever they're worth, for offering poor performance, especially on fewer than 16 processors.

I once worked on a project to translate a matlab program into C. I do not know whether the original program played to matlab's strengths or weaknesses, but I can say that my portable ISO C program averaged 23 times the performance of the matlab version. The point? I don't think matlab is a very good benchmark. Obviously, it's your application so it's the only benchmark you care about, but I suspect that in the grand scheme of things matlab doesn't necessarily mean much. It also has no way whatever to test things like disk I/O and internal bandwidth which are nearly irrelevant to matlab but of critical importance for virtually every other application, areas in which peecees lose to any real workstation, often by a factor of 5 or more.

My point is, if an Alpha or Sun with a 500MHz processor costs $5000 and I can buy a 900MHz Athlon for $2500, I prefer the Athlon, and I KNOW it has a very good chance of outperforming the others.

Good for you. I'm glad you've found systems that work well for your application. I'm sure you'll enjoy repairing them numerous times in the six months before they stop working completely. *shrug* It's your maintenance nightmare, not mine.

Granted, the PC architecture sucks. Look at IDE! How many more band-aids are we going to see placed over what is essentially a 16-bit interface designed for the 286? How about "standard SVGA"? The closest there was to a standard was VESA, and everyone reading this should know VESA is useless except when used with DOS real mode -- too slow otherwise. Our sound card standard is pretty much the SB Pro, with Windows Sound System for 16-bit audio -- and you can't even depend on that! USB is nice, but then again its implementation is also a band-aid. I could go on, but I think I've made your point for you: the PC, as a platform, sucks.

The other part of your argument... AMD is evil because they make Wintel-class chips? I think not. AMD would be out of business if they made some little off-brand CPU architecture. With more than 90% of the installed base of desktops and workstations running on the PC architecture, you'd be a fool not to consider making hardware for it! Even SGI has been moving their software to the PC platform because there's just more of it out there, and they know they can't keep up when it comes to price vs. performance.

I don't know what planet you're from if you consider US$5k for a workstation (even a high-end workstation) "surprisingly inexpensive" either. I can build a pretty damned sweet workstation by any standard for US$3.5k, and that's including a monitor better than my current 21" and some very nice (if expensive) input devices. You said it yourself: the PC architecture wasn't planned beyond building something that "works"(?) as cheaply as possible. Other architectures need to deliver as much or more performance at a comparable or lower cost in the mid- to low-end workstation range as well as the high end, while our respective mothers can still play Solitaire and Minesweeper. The resurgence of unix and unix-like platforms, especially those developed portably and openly with a focus on ease of use, may as they mature make it easier to throw away the tired PC architecture. That time just ain't here yet. Until then, AMD looks like a mighty promising choice the next time I build a box.

RDRAM is serial, unlike most RAM. This means that 800MHz RDRAM, with its narrow serial bus, has about the same peak transfer rate as 100MHz DDR SDRAM on a standard 64-bit bus. RDRAM can go faster, but due to its serial nature there have been problems. Here are some general peak transfer rate stats for RAM.
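The "800MHz serial equals 100MHz DDR" claim above is just width-times-clock arithmetic; a minimal sketch, assuming a 16-bit-wide PC-800 RDRAM channel and a 64-bit double-pumped DDR SDRAM bus (the function name and figures are illustrative, not from the original post):

```python
def peak_bw_gbs(bus_bits, clock_mhz, transfers_per_clock=1):
    """Peak transfer rate in GB/s: bus width (bytes) * clock * transfers/clock."""
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# PC-800 RDRAM: 16-bit channel at an effective 800MHz
rdram = peak_bw_gbs(16, 800)
# 100MHz DDR SDRAM: 64-bit bus, two transfers per clock
ddr = peak_bw_gbs(64, 100, 2)

print(rdram, ddr)  # both come out to 1.6 GB/s
```

The narrow-but-fast channel and the wide-but-slow bus land on the same peak number, which is exactly the poster's point.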

Tom's Hardware Guide [tomshardware.com] alludes to an e-mail he received from Rambus stating that the performance problems he observed in his benchmarks were the result of Intel bungling the technology. I doubt Intel would be too happy with such comments flying around. That would be enough reason for Intel to start being truthful...

"Intel's high-end RDRAM motherboard beat the hell out of SDRAM systems. It had two interleaved RIMM slots, doubling effective bandwidth." Wrong. The PIII has a 64-bit memory bus operating at 133MHz. That's 1.06GB/s. Adding a second channel of PC-800 RDRAM - theoretical max bandwidth of 1.6GB/s - does not give you 3.2GB/s of effective bandwidth; you're still limited by the CPU. A PIII can't handle any more bandwidth than PC-133 delivers. The reason the i840 outperforms the i820 is that it reduces latency. RDRAM latency gets worse the more sticks you add, so a system with 2 RIMMs on two channels will have lower latency than a system with 2 RIMMs on one channel. --Shoeboy
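Shoeboy's bus-limit argument can be sketched with the same kind of arithmetic: the front-side bus caps usable bandwidth no matter how many RDRAM channels feed the chipset. A toy illustration (constants are the figures quoted in the post; the min() model is a simplification that ignores latency, which is Shoeboy's actual explanation for the i840's edge):

```python
FSB_BITS = 64            # PIII front-side bus width
FSB_MHZ = 133            # 133MHz bus clock
PC800_CHANNEL_GBS = 1.6  # theoretical peak of one PC-800 RDRAM channel

# Front-side bus peak: 8 bytes * 133MHz ~= 1.06 GB/s
fsb_gbs = FSB_BITS / 8 * FSB_MHZ * 1e6 / 1e9

for channels in (1, 2):
    # Usable bandwidth is the smaller of what memory can supply
    # and what the CPU bus can accept.
    usable = min(channels * PC800_CHANNEL_GBS, fsb_gbs)
    print(f"{channels} channel(s): {usable:.2f} GB/s usable")
```

Both cases print the same ~1.06 GB/s: doubling the RDRAM channels raises the memory side to 3.2GB/s on paper but moves the usable number not at all.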

If you bought your machines directly from Sun, that's your own fault. You can buy them elsewhere for a fraction of the cost. The Ultra 10 is a peecee, so I don't see how it fits into the comparison. Of course it sucks, it's a peecee. Go buy a used dual CPU ultra 2. It's elegant, fast, and inexpensive. The performance of peecees compared with real computers depends, as always, on application. If you want to spin your cpu in a tight loop of integer instructions, then a real computer is not necessary. If you actually want to get anything done, you need some I/O bandwidth and floating point performance, things that aren't available in anything with a BIOS in it.

"Rambus handily outperforms PC133 DIMMs, and is worth the extra expense (which means little to companies who want the extra bandwidth)." As was mentioned before, it all depends on the application. Some applications will run slower on RDRAM; it's just that simple. The problem is that Intel doesn't make any distinction about the differences.

They present it as if it is a straight upgrade path, the same as upgrading from a 486 to a Pentium. They "forget" to mention that the technology is completely different and will perform differently (sometimes radically) under different circumstances.

That being said, I also think that RDRAM may not be dead. Look at the Celeron. The first Celerons were crap. Now it's just about the most common low-end processor out there. There may be a little more lag time since RDRAM isn't being developed directly by Intel, but I think that Rambus will do whatever Intel tells it to. (At least they'd better!)

As far as the SDRAM patent issue goes, I don't think Rambus has a chance in hell, and they are wasting their time and resources trying. The way US patent law is written, something that has been in common use for over a year cannot have a patent put on it retroactively. I don't know all the details of the case, but the question Rambus is going to have to answer is "Why didn't you deal with this earlier?"