
"I work for a major research organization. Of late, a lot of the normal big computer companies have been visiting and preaching the gospel of Itanium. My question to them, and to the assembled masses here at Slashdot, is: what happens next, when Itanium is real? My world view is that Itanium-based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)? In other words, has Intel finally done in most of their customers by obliterating all the other CPU choices (except IBM Power4 [& friends G4, et al] and AMD Hammer) and turned the remainder of the marketplace into raw commodity goods? Lest you defend the other CPUs... Sparc is dead; Sun doesn't have the money (more than US$1B, we'll guess) to do another round. PA-RISC is done, as HP has given away the architecture group. MIPS lacks funding (and perhaps even the idea people at this point). Alpha is gone too (also because of the heavy investment problem, no doubt). Most other CPUs don't have an installed base that makes any difference, especially in the high-end computing world. So what's next? I don't like the single-track future that Intel has, just because it is a single track!"

You seem to miss the point on this a little bit. Although there will be compilers available, there is an extreme difference between a compiler and a good compiler. A compiler works; a good compiler is able to utilize an architecture to its fullest (or at least close to it).

LIW and VLIW were tried before. They flopped, because compilers were dumb then. Compilers stayed dumb until midway through the RISC era. Now RISC and CISC have converged, compilers are reasonably bright, and Intel is trying its own hacky LIW thing. Compilers are smart enough for a generation-1 LIW design to work, but there's little indication they'll stay smart enough. And as each successive subarchitecture of IA-64 appears, the compiler will need to change, or the chip will need to handle previous-generation instructions. Intel is not true LIW in this regard - you're supposed to be able to run unmodified IA64-1 binaries on IA64-2 chips.

So, some brains are still in the IA-64 chip, meaning the compiler won't have to be _as_ smart, but it will still need to be smart, and you'll still need a new compiler for each IA-64 implementation to get maximum performance.

Not likely; it would take a couple of weeks at most for the first compilers to appear.

You obviously know nothing about Itanium, EPIC, VLIW, or pretty much anything else on this topic.

The issue isn't whether or not there's a compiler available. The issue is how GOOD the compiler is. In the case of a Very Long Instruction Word (VLIW) CPU like the Itanium, the compiler is the bottleneck for system performance. Why? Because the premise of these CPUs is that while they have a low clock speed (750-800 MHz for Itanium), they execute many instructions per cycle - 10 or more. So while "slower", they get more done per cycle, resulting in faster overall execution. It's up to the compiler to properly structure the executable machine code to take maximum advantage of this layout and keep all execution units of the CPU busy at all times, as well as reduce disparate memory accesses and so forth.

The initial compilers released with these machines do the job, but not as well as they could. In fact, compiler writers are still trying to grasp the issues with pipelining on modern CPUs and their much lower number of execution units, and this is without utilizing special instructions that explicitly do non-conflicting operations at once. We're still years away from writing fully optimized compilers for contemporary CPUs. And while there's been a great deal of work done on VLIW already (prior to Itanium), there's even more yet to be done. A decade for a "good" compiler is probably optimistic.

You may be wondering, what's the point anyway? If VLIW is so damn hard, why bother? Just ramp up that clock speed and get more CPU power! Well, that's nice, but it doesn't work in reality. We're starting to bump up against physical limitations in CPU speeds. Electrons are not magical particles that travel instantaneously. Signals propagate at well under the speed of light; once real interconnect is involved, figure roughly 1 cm per nanosecond. This doesn't seem like a big deal until you realize that on a 2.0 GHz CPU each clock cycle is 0.5 nanoseconds. So if you have to fetch an instruction or data from main memory, and that memory is a mere 5 cm away, under optimal conditions you've just sat around for 10 clock cycles waiting on that fetch. And that's ignoring propagation delays, latch delays, and other things. So go ahead, pump that CPU up to 10 GHz and waste even more clock cycles waiting on data. Or redesign the entire thing, expect the compiler to do the work and properly feed you data and instructions such that you can do 10x as much in the same amount of time, all with no wasted CPU instructions.

That's the theory at least.

Reality is that not only does the compiler have to properly organize the machine code, it also has to have some idea of what the code is doing in order to do so. Compile the code with profiling, run it against a "realistic" data set, then recompile it, feeding it the profile data. Many compilers can do this now, but it's rarely done: it's hard to guess a "realistic" data set, hard to acquire one, how you expect the code to be used and how it actually is used are rarely the same, and there's extra development time involved in all of this. So most companies don't bother. And despite what I said above, at 2.0 GHz the CPU still isn't sitting on its ass more than it's doing work. Until we start approaching that point there's little incentive to put in the R&D time necessary to switch to a new CPU architecture.

And, of course, on top of all of the above is the issue that Joe Sixpack will invariably see 2 GHz as faster than 750 MHz no matter what. Have fun with that one.

This is exactly why 'virtual machines' (VMs) or 'Just In Time' (JIT) compilers will eventually replace the current generation of compile-to-native compilers.

Actually... Java/.NET and JIT compilers are exactly why "Merced" or "Itanic" isn't well suited for the very things it was supposed to be good at. You see, for a VLIW machine like those, the degree of compiler optimization required to achieve good performance is much greater than for a traditional RISC-ish machine (in which I'm including x86, for reasons I'm not going into). Essentially, getting maximum performance requires a great deal of compilation, profiling, and compiling again. This is all front-end overhead on your process. The whole idea behind JIT is that it's supposed to be fast, and occur when you download new code... but now the opposite is true. At that point, you're just as well off using a traditional-style compiler/profiler that produces traditional binaries.

So basically you're saying that computers are magical radio-wave transceivers? Funny, I thought computers were based on capacitively switched [Bi]CMOS transistors. This means the "logical operation" travels at the speed of the capacitor charge/discharge times. After the ramp-up and ramp-down time (further delayed by the inefficiencies of junctions), the signal travels at the drift velocity of the electrons trapped within the conduction band; significantly slower than a stream of free-flowing electrons, much less a single electron going full tilt.

In fact, when electrons start going close to the speed of light within silicon, there's typically an avalanche effect (utilized in Zener diodes). Channel breakdown can easily occur under such conditions (caused by relatively high voltages).

To my understanding, the single biggest speedup in the past several years was the introduction of bipolar transistors into the CMOS framework. Bipolars are very fast (non-capacitively switched), with high current and high amplification, but they are power hogs and require difficult geometries to manufacture. My understanding of BiCMOS is that FETs are used everywhere, but when a FET needs to be charged quickly (or generally requires high current output), a bipolar device is attached to the output as an amplifier. You get the best of both worlds (with the possible exception of the geometry limitations).

Wiring obviously was an issue too: the new copper-based CPUs run cooler and faster.

I only have an undergraduate understanding of the processes, but the simple point is that there are parasitics all throughout the architecture, and we're discovering efficiencies every day which provide percentage increases in overall performance. Thus it's not the speed, but the sophistication of the design.

There's lots of work going into light-based computing, but I don't think it will ever win out, because it's plagued with even bigger interconnect problems and thus parasitics.

Think for a minute how long we've been using 32-bit processors. If (and when) 64-bit becomes mainstream, I imagine it will be around for a LONG time, as it becomes standardized and slowly takes over a majority of the market. Also, we'll have the other contenders butting in with equivalent and cheaper options, like Cyrix (tried) and AMD (did).

Just because Intel will pave the way for mainstream 64-bit processors using the Itanium doesn't mean it will monopolize the market until it comes out with a 128-bit processor. No matter what, it will probably be years from now before we have to worry.

The only problem with AMD's 64 bit line is that it isn't going to be compatible with the Itanium. That is both good and bad. Good in that it is an alternative, bad in that it is going to cause a lot of confusion.

I think a lot of people are too confident that Itanium is going to be successful, let alone quickly. It is going to require a lot of changes to software in order to take advantage of it, because it isn't just a 64-bit x86; it is a whole new architecture, one more closely related to HP's PA-RISC than to x86. It also may not do a very good job of running existing 32-bit code, which could slow down its acceptance, particularly in desktop systems. The last time Intel made a big push (with the iAPX 432) to create a whole new non-x86 processor family, it was less than successful. Although, to be fair, the iAPX 432 was a radically different proposition, and the Itanium with its more proven PA-RISC roots looks a lot more sound.

AMD's Hammer architecture, on the other hand, is more conservative, being an x86-family processor extended to 64 bits. It should require fewer modifications to existing software to take advantage of it, although an argument could be made that it won't have as much advantage to take, having more legacy issues with the aging x86 architecture. It also may perform a lot better on existing 32-bit code than Itanium. And if AMD's track record holds true, it will probably be significantly less expensive than the Itanium.

A lot of whether it is Intel or AMD that paves the way for mainstream 64-bit CPUs will probably come down to which of them is first to offer an attractively priced product that runs existing 32-bit software well while being marketable as a 64-bit chip. Unfortunately for AMD, the marketable part is, as always, going to be tough. While AMD has been hugely successful in "white box" sales where customers can choose their CPU, they've had a much more difficult time penetrating the big-name PC markets, particularly in higher-end systems. This despite the fact that in many cases an Athlon or Duron would offer better performance than a PIII or P4 at a better price.

Not really. Quantum computers aren't very good at adding, subtracting or a lot of other things that most programmers find come in handy from time to time these days. Boolean logic will still be prevalent for a VERY long time to come. It may not happen on silicon for that long, however.

AMD's newest chip is supposedly fairly remarkable (don't have specifics, see Tom's Hardware's search engine). What about the Crusoe? VIA's purchase of (I believe) the M3? I wouldn't look at companies that are currently in the business only - I would tend to look at companies that might move into the business, either via investment, startup, or outright purchase.

I'm not too worried about Itaniums, and I don't see them becoming prevalent for quite a while. While the Pentium II, III, and IV moved through the marketplace fairly rapidly, they all offered compatibility at some level. If I recall correctly, 32-bit programs that are not rewritten for 64-bit run SLOWER on the Itanium than they do on the equivalent Pentium line.

In essence consider this: it's like a brand new operating system attempting to break into the monopoly that Microsoft has. (Parallels drawn out of necessity.) While it may be better, faster, superior in every way it doesn't have 20+ years of legacy code behind it - and that will end up being what drags it down.

If I recall correctly, 32-bit programs that are not rewritten for 64-bit run SLOWER on the Itanium than they do on the equivalent Pentium line.

When Apple transitioned from the M68K line to the PPC, they were in the same situation - 68K code would run faster on a 40 MHz 68040 than on a 40 MHz PPC 601. The reason consumers didn't mind was that the PPC 601 started at 60 MHz (approximately the break-even point for the emulation layer) and, to the end user, didn't cost significantly more.

Until Intel gets the Itanium cost down to the point where they run 32-bit code at equivalent speed to a Pentium at the same cost, Itanium probably isn't ready for the consumer market.

When Apple transitioned from the M68K line to the PPC, they were in the same situation - 68K code would run faster on a 40 MHz 68040 than on a 40 MHz PPC 601. The reason consumers didn't mind was that the PPC 601 started at 60 MHz (approximately the break-even point for the emulation layer) and, to the end user, didn't cost significantly more.

While that's a valid point, it also bears pointing out that the Pentium IV is at 2200 MHz whereas Itanium is at 800 MHz -- about one third the clock speed. That ratio is going to remain for a while, too: McKinley will come out at 1000 MHz, while the Pentium IV continues its mad march toward 3000 MHz and beyond. You acknowledge this fact implicitly with your next statement (re: Itanium not viable until approximately the same speed at approximately the same cost), but I felt it'd be interesting to point out just how large the gap is.

These ratios spell doom for hardware-level emulation of the Pentium on the Itanium. Unless Intel has some serious magic, even a 100% cycle-for-cycle perfect emulation of the Pentium III or Pentium IV on the Itanium die will never run better than one third the speed of the real thing, since the fundamental clock rate is so far off. The only real way to get close is to do a software-level translation and get a boost from scheduling for the native hardware.

It's interesting to note, BTW, that HP's Dynamo [hp.com] project does a software translation of PA-8000 code targeting (guess what) a PA-8000 CPU, and rather than slowing things down, it actually gets 20% speedups! Ars Technica [arstechnica.com] also did a piece on this. Perhaps that's why HP doesn't have hardware-level translation from PA-RISC to Itanium on the die like Intel does -- they (HP) are in a better position to just translate the PA-RISC code to IA-64 when needed. (Also, in the UNIX world, it's simply less necessary.)

While 800/2200MHz is a large difference, you fail to mention something that everyone here should know by now, that clock speed does not equal performance.

Clock speed does not equal performance. This is a fact of life, especially with 20-stage pipelines and the like. AMD and Apple have been trying to teach this to the world, and on the surface most geeks understand, but they don't believe it in their hearts.

Now, I'm not saying that the PIV won't be faster than Itanium for a good while here, and I honestly have no idea whether it will be. We just need to stop using MHz for comparisons unless we're comparing the same chip.

PPC 601 started at 60 MHz (approximately the break-even point for the emulation layer)

Actually, the break-even point wasn't reached until about 100 MHz or so, though I'm not sure. But I do remember that when the first PPCs came out, they were definitely slower than the old 040s. I still don't know how Apple pulled that one off (selling new computers that were essentially slower than the previous models).

Well, eventually that will happen, without a doubt. Moore's law pretty much assures it, in fact. The big question mark is whether the Itanium can match the price/performance of the Pentium line before someone else does. Seeing as the Itanium currently runs at clock speeds around 800 MHz when it would need about 1600 MHz to be equivalent to a P4, even Intel isn't betting on this (hence the Yamhill), and they're seemingly relegating the Itanium to high-end servers (to take over where the Suns and Alphas left off), which seems to be where it's best suited. At least for now, it looks like x86-64 (Hammer/Yamhill) is the platform of the future, and Itanium will be just another expensive non-consumer platform.

The luxury Apple had in this situation was control of the operating system, which Intel doesn't have. Ironically, Apple will also be moving to a 64-bit architecture within the year (conservative rumors say Q3/Q4 2002). The transition is supposed to go very smoothly, as developers are being told to prepare their programs with the 64-bit OS X libs, and 64-bit OS X is being developed concurrently with the 32-bit version. FAT binaries helped immensely in the 68k-PPC transition, and probably will again for the G4-G5 transition.

Though honestly, if Microsoft gets what it wants with the entire .NET plan (not the framework, the entire plan), then architecture will become largely irrelevant. In any case, I doubt that many people will need frequent execution of their old 32-bit apps much more than two years after any sort of major switch happens. It happened with Mac OS, and it'll happen with Windows. Linux is irrelevant here, as most Linux software can be easily patched and recompiled.

The ONLY reason the Pentium Pro didn't catch on was because Microsoft released a 16-bit OS and told everyone it was a 32-bit one (Windows 95).

SCO Unix, OS/2, and to some degree Windows NT ran quite a bit faster on the 32-bit-optimized PPro than on a Pentium at the same clock.

Because of Microsoft's great PR, even Intel was caught off guard and scrambled out a hack called MMX to give the appearance of progress in the CPU market. While the MMX-based Pentiums were getting press/air time, Intel was hacking at the Pentium Pro core to get it to run THE 16-bit OS (Windows) faster. That was the Pentium II.

IBM did some speed tests of OS/2 on the PPro and in some cases they saw a 100% speed increase on the 32bit optimized PPro.

This reminds me of the six-degrees-from-Kevin-Bacon game. It seems that many failures in the computer industry are only about three degrees from Microsoft. And the failure is never due to competition, but more likely marketing and market control. IMHO.

The PPro was a darn good CPU. It finally took 32-bit-ness seriously, though about 10 years after the 32-bit i386 was released. As much as I like the simplicity of RISC, Intel will never get the Titanicium off the ground, and AMD's Hammer will force Intel to follow their lead with a 64-bit extension to the x86 instruction set.
IMHO.

"the ONLY reason the Pentium Pro didn't catch on was because Microsoft released a 16bit OS and told everyone it"

I wouldn't say ONLY. There was also the slight problem of the double-chip package (separate cache and CPU dies mounted on one substrate) being horrendously expensive to produce. Looks like Itanium will have the same problem [slashdot.org].

This seems to be a recurring problem in a number of technology-based industries. Once you get to a certain level of high tech, only the (very) big boys can even compete.

So here's the question: how do you keep competition alive when the initial investment runs into the billions of dollars? For any company smaller than Intel, a single bad product cycle spells complete doom. That's no kind of market to be in.

Also, wasn't this inevitable? There are a few Beowulf jokes being posted, but that's really what's going on. Increasingly, high-performance tasks (Google, render farms, etc.) are using massive arrays of low-power CPUs. It costs a lot of money to develop big-iron chips, and if people aren't buying them, there's no point in investing that much money.

What I'm worried about are the isolated markets that still require massively powerful, low-processor-count architectures. Not everything splits into nice Distributed.net packages.

Also, wasn't this inevitable? There are a few Beowulf jokes being posted, but that's really what's going on. Increasingly, high-performance tasks (Google, render farms, etc.) are using massive arrays of low-power CPUs. It costs a lot of money to develop big-iron chips, and if people aren't buying them, there's no point in investing that much money.

The problem is that a massively parallel computer is only useful for certain classes of problem. There are many types of problem where communications load goes up very rapidly with the number of processors, which makes a cluster (with its relatively poor communications bandwidth) impractical. This is what Big Iron is designed to be useful for.

Speaking of badass mainframe processors: I was an intern at IBM in the mid '80s. The top-of-the-line mainframes used a central processor comprising about 100 custom ECL chips mounted on a 4-inch-square, 100-layer ceramic substrate.

The whole thing was encased in a shiny metal module. Each chip had its own spring-loaded heat slug that transferred heat to the cooling liquid sent through the module's plumbing. (100 ECL chips == major kilowattage.)

They told me each CPU cost about $50,000. On a factory tour, I saw an entire pallet of these sitting on the floor, kind of like gold at Fort Knox.

These things may not perform like today's chips, but they gave meaning to the term "Big Iron".

Actually, I was just transferred to the UltraSPARC 4 project at Sun [sun.com] in Burlington, MA. I don't know of the official release date, though I've heard rumors of early 2003. I'm amazed at the quality of FUD in this "article" and that it actually made it to the front page of Slashdot.

Go take a look at Sun's sales numbers for 2001 and Q1 2002. Given that they have X86 machines ready to hit the market in June, the chances of Sun being able to convince already reluctant buyers that Sparc systems are still worth the money are rather low, especially now that big iron is being replaced with clusters of cheap systems. Sparc may not be dead, but Sparc's future as a commodity item is dim at best.

McKinley is 464 mm^2! That's a huge CPU. It will be very expensive to produce, even though Intel will probably be subsidising it with their profits from x86. Current Itanium systems start at about $8000 - I doubt McKinley will be much cheaper. It'll take a long time for volume to build up, especially as so little software has been ported to it. Even with Intel's money, you still can't just create a new platform overnight. Intel optimistically expects it to be about 2005 before Itanium has any real market presence.

Sun's CPU division is 1300+ strong and they're planning to hire another 100-200 in the next 2 years.

A lot of HP's PA-RISC customers (and Compaq's Alpha customers) are quite unhappy with being forced to change architectures and are jumping ship to Sun and IBM - HP had a 7% drop in Unix sales from Q3 to Q4 last year, while Sun had a 10% rise. By 2003 the significant majority of the $100k+ system market will be owned by Sun and IBM. There's very little reason for any of those customers to switch to Itanium, so it'll mostly just eat Xeon sales.

If Itanium fails you can be sure Intel will release the Yamhill [slashdot.org], a chip much like AMD's Hammer.

"It's pretty well understood that Itanium will not provide leadership x86 performance. That's Hammer's great hope, in fact. AMD's strategy depends on Intel mistakenly abdicating its x86 throne leaving Hammer and its descendants the heirs apparent to a software kingdom.

Would Intel so cavalierly jeopardize its legacy? Not on your life. To no one's great surprise, Intel is rumored to be developing something that will give future Pentium processors--not IA-64 processors--a performance kick. In a perverse reversal of roles, Intel may actually be following AMD's lead in 64-bit x86 extensions. A "Hammer killer" technology, code-named Yamhill, may appear in chips late next year, about the time Hammer makes its debut. It's suggested that Intel's forthcoming Prescott processor will be based on Pentium 4, but with Yamhill 64-bit extensions that coincidentally mimic Hammer's. (Prescott is also rumored to be built on a 0.09 micron process and implement HyperThreading.)

Naturally, the very existence of Yamhill, if it exists at all, is a diplomatically touchy subject at Intel HQ. The company doesn't want to undermine its outward confidence in Itanium and IA-64, but neither can it afford the possibility of ceding x86 dominance to a competitor. Besides, whether they appear in future Pentium derivatives or not, Intel's 64-bit extensions could appear in future IA-64 processors instead. New IA-64 features plus competitive x86 performance--now that's a compelling product."

The problem with discussions of Intel vs every other chip maker is they ignore the extraordinary differences in scale between the players.

Let's compare: Sun is a company that produces operating systems (Solaris), computers, CPUs, motherboards, and a host of peripherals. (Plus it has to invent Java, J2EE, etc.) Its R&D budget was $2.0bn in 2001.

Intel is 95% CPUs. It spent $3.8bn on R&D in 2001.

Intel has the world's most productive fabs. Its capex budget is so huge that it can order the lithography companies and the like to build to order inside its factories. Result: its yields are 25% better at the start, and still 10-12% better after 6-9 months.

It is incredibly difficult for anyone to keep up with the Intel machine. I wish it weren't so; but it is.

Why not G5s? Or x86-256s? Or those wacky 25x's [colorforth.com]? Who knows? (Rhetorical.) Slashdot is not a magic eight ball, and the folks who do have a clue are most likely under NDAs. My guess is either a G(some large number here) or an Itanium(some other large number here) with a 128-bit bus. And God willing, whichever wins will run Irix.

Having recently participated in an NDA session with Sun regarding the SPARC processor (and even with the knowledge I had walking into the meeting), SPARC is not dead or dying. In fact, I'd say that Sun squarely recognizes it as a strength. Their competition (HP, for example), however, is wishing they hadn't knifed their baby.

As far as money to go another round: remember, Sun doesn't fab CPUs. What Sun does is design them, and it turns the designs over to Texas Instruments for production. And TI has its own reasons to keep up to date with the latest production technologies, so Sun doesn't eat that cost.

BTW: I really wish that I could talk about the SPARC presentation. I liked it a whole lot better than the NDA I attended with HP talking about their Itanic future.

I heard SPARC chips are so fucking scared of the multi-GHz x86 clones that they are running their instructions out of order! Some of the Sparc instructions think they can even hide in a delay slot (under a jump) so the x86 clones won't find them and kick their sorry out-of-date asses!

Sun's strength isn't in the performance of its servers. You don't buy a Sun because you want the fastest thing out there. You buy it for the support, reliability, software base, and probably a number of other things. As long as Sun's processing performance is "on par" with competitors, it isn't going to be a liability.

When I talk with management about servers, they don't ask me which one has the fastest CPUs. They've got a "short list" of hardware vendors (IBM/Sun, then further down HP/NT).

Sun doesn't have to worry about raw CPU power because their machines are not designed to write Word documents or play a game of Wolfenstein. Compare a Sun machine to almost any PC out there and it will smash the PC on memory and system bus bandwidth. For the kinds of tasks Sun machines usually handle, that matters much more to the throughput people buy Sun machines for.

Hell, most PCs don't even have enough PCI bandwidth to fully saturate a gigabit ethernet connection unless you have a totally bare PCI bus or a system which provides each PCI slot with its own dedicated bus, as most Sun PCI systems do.

Let's not even compare the stability, scalability, and workmanship of PC and Sun hardware. That would just be unfair to 99% of the "business" PC workstations and servers on the market.

Given the tremendous capital requirements of building a state-of-the-art fab, along with the incredible number of engineering man-hours required to leap to the next level, I think we are seeing a situation similar to the one for airliners: Airbus or Boeing. They are the only two that matter, because the cost of entry into the airliner market is so prohibitive. This does not necessarily apply to Microsoft and its OS monopoly, as the Linux community has illustrated. Mindshare and marketshare are not always linked.

I have hopes for Intel producing the world's best microprocessors, as that would benefit us all. Simply advocating a move to Itanium for marketing reasons, or to meet revenue targets, does a disservice to the computer industry.

The huge die size of the Itanium and its upcoming successor make the chip far more expensive than the Pentium series, so I would not expect Itanium machines for $2K. So far, the CPUs alone cost several thousand dollars. I also haven't seen where its performance is that impressive. x86 code performance, since it's emulated, is poor. Recompile or else. Intel has sold, what, 500 Itanium CPUs?

The upcoming AMD Hammer series, OTOH, is supposed to be about 30% faster clock-to-clock than the current Athlon XP series (which is considerably faster clock-to-clock than the Intel P4) and start at 2GHz. Sun's recent announcement of Linux x86 platform support, with details to come midyear, suggests that they'll be moving to the Hammer (to ship Q4). Sun would certainly love to take a swipe at Intel, and Sun has made positive comments about AMD's x86-64 Hammer architecture.

Now that the G4 has finally gotten past the 1 GHz mark, and Apple has a brand spanking new Unix-based OS running on it (and if you don't like it, you can run others), this opens up a whole new choice for the researcher looking for a new platform.

It is my opinion that once Microsoft makes its Common Language Runtime a forced de facto standard, and once they manage to implement it on other CPU architectures, they'll essentially have a hardware-independent Windows platform. Once that happens, Microsoft will have sole leverage over the PC business. That means Intel will NOT be needed at all for running future versions of Windows-compatible programs. Who knows, maybe this could spell a revival of new and innovative CPU architectures, since they all will now be able to run the CLR. Side note: we *could* do this today with Java, but sadly Sun doesn't have the leverage on the PC business that Microsoft's monopoly does.

... that a runtime environment where "Hello World" will require, let's say, several GB of disk, a few hundred MB of RAM, continuous online updating (also requiring continuous hardware updating), and hundreds of old and newly-arriving security holes and exploits, is going to "take over the world."

Granted, it's going to be popular for a while. But isn't what's popular *always* sucky?

They already tried that. Guess what? NT was supposed to be multiplatform! And geez do you see any of the non-X86 versions out there? Nope....

In fact, NT was developed on MIPS. And M$ is in no way interested in having the CLR running on non windows based platforms. CLR is not designed to make code machine-independent, but rather location-independent. M$ still wants you to be using Windows, it just wants to have a tighter grip on you no matter where you go.

First, you are assuming that Itanium will succeed and drive all other choices from the market. At the moment, this is far from clear, and even Intel is said to be hedging their bets with a P4 follow-on.

Second, what will drive the price of the Itanium down? Historically, Intel have announced that their latest superchip is "targeted at servers, not desktops" about a week before releasing a flood of them into the desktop marketplace (usually the ones that didn't pass spec at the higher speed level), thus driving down the price of the server chips to where no one else could compete. What will be the driver this time? Businesses aren't buying desktops, and when they do start buying again it will be pure commodity: there is zero appeal for Itanium on a business desktop. And treble for home desktops.

Which leaves high-end servers. I don't think that any datacentre manager worth his pay is going to pull out $100,000 HP N-Class boxes in favor of $2,000 Intel clones. There's a bit more that goes into a server than the CPU.

SPARC dead? I'm not sure where you come across that idea. Having listened to a few talks down at JavaOne and chatted briefly with Marc Tremblay (head chip dude down there, father of MAJC and one designer of SPARC) they've already got design down on the next two levels of SPARC as the IV is experimental, and the V is the next production level as I understand it. MAJC seems to be the experimental platform they are using for smaller implementations and alternative ideas to be tried, based on some of Tremblay's theories.

I may be off base on some of the details, but Sun has a unified approach from top to bottom, from tools to silicon for the systems they plan to deliver. I doubt it will just throw in the towel. Ultimately, Sun ships iron, and they lead the market in their segment.

I don't see the basis for your assertion, and where you pulled 1B out of for cost I also don't know.

Alpha is AMD now, as that's where a good chunk of the people went. MIPS is still kicking, with the 14000 so far, but I won't speak to the future of that chip line. There's a lot of chip heads on this site with much better info than I on many of the lines.

A fast CPU is nice, but how about upgrading the rest of the standard PC architecture and peripherals to the same level?

Weren't we all supposed to be using high-speed serial connections by now instead of a cocktail of SCSI (1/2/3, wide, fast, hold the mayo), IDE (ATA-33/66/100), parallel, 8-bit serial, USB, Firewire, PS/2, PCI, ISA (which is finally disappearing), etc.? Heck, I'd be happy if the motherboard ran at even half to a third the speed of the CPU. :P

Using a 20 year old peripheral port on last weeks multi-gig cpu is like sucking a McDonalds shake through a coffee stirrer!

Agreed. We're disproportionately favoring the CPU when the real gains would be seen in high-speed interconnects, especially with storage devices. Most of my CPU's time is spent waiting for instructions, and even when they are sent it could stand to receive them both faster and in greater numbers.

Yes. In one of my CS classes we were told a statistic (which was probably made up, and I've since forgotten it) about how long it takes to read the entire contents of a hard drive. If current trends keep up, it'll soon take weeks just to read everything we can store on one drive. Anyone have "hard" figures?
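A quick back-of-the-envelope sketch along those lines, with assumed (era-ballpark) capacity and sustained-transfer figures rather than measured ones:

```python
# Rough sketch: hours needed to stream an entire drive end-to-end.
# The capacity/rate pairs below are illustrative assumptions, not data.
def full_read_hours(capacity_gb, sustained_mb_s):
    """Hours to read the whole drive at the sustained transfer rate."""
    return (capacity_gb * 1024) / sustained_mb_s / 3600

# Capacity grows much faster than sustained transfer rate,
# so the full-read time keeps climbing:
for year, cap_gb, rate in [(1992, 0.2, 1.5), (1997, 4, 8), (2002, 80, 40)]:
    print(year, round(full_read_hours(cap_gb, rate), 2), "hours")
```

Under these assumed figures, the full-read time goes from a couple of minutes to over half an hour in a decade, which is the trend the statistic was presumably about.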

A fast CPU is nice, but how about upgrading the rest of the standard PC architecture and peripherals to the same level?

Weren't we all supposed to be using high-speed serial connections by now instead of a cocktail of SCSI (1/2/3, wide, fast, hold the mayo), IDE (ATA-33/66/100), parallel, 8-bit serial, USB, Firewire, PS/2, PCI, ISA (which is finally disappearing), etc.? Heck, I'd be happy if the motherboard ran at even half to a third the speed of the CPU. :P

The good news is that USB is well on its way to completely replacing serial and parallel ports, and that PCI has been the One True Bus for the past couple of years now. Everything south of the southbridge is slowly fading away.

IMO, if we'd switched to 66 MHz 64-bit PCI years ago, we'd have no further problems on this front. In practice, PCI-X may finally be pushed through by Intel, and that will serve most internal communications needs. Motherboard chipsets are modular enough that it doesn't really matter what flavour of IDE/SCSI/firewire your drive is hanging off of; the drive controller is just another PCI device to the processor. You have enough bandwidth and DMA functionality on the PCI bus to handle it.

The only peripherals that are currently bottlenecks are RAM and the video card. RAM is handled by upgrading the memory bus every couple of years. This is easy to do, because peripherals don't care what happens on the other side of the northbridge. The video card was handled adequately by the hack that is AGP (64-bit 66 MHz PCI would have been a much better idea, but that wouldn't have given Intel its nice AGP port to license).

The only peripheral that *might* be a problem in the future will be the network card (when gigabit cards finally come into vogue), and that will probably be what forces motherboard makers to put wider/faster PCI on to midrange boards and not just high-end boards.

In summary, this is less of a problem than it first appears to be.

The only serious bottleneck for performance is RAM latency, and that's not because of legacy peripherals.

4X AGP is a 32-bit bus running at an effective 266 MHz (66 MHz, quad-pumped). That's more throughput than is possible with plain PCI.

Unless you buy into Intel's PCI-X, which is 64/133.
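Working those peak numbers out from width x clock x transfers-per-clock (theoretical maxima, not sustained rates; just a sketch):

```python
# Peak theoretical bandwidth of the buses under discussion, in MB/s.
# width in bits, clock in MHz, transfers per clock cycle.
def peak_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
    return width_bits / 8 * clock_mhz * transfers_per_clock

buses = {
    "PCI 32/33":    peak_mb_s(32, 33),       # ~132 MB/s
    "PCI 64/66":    peak_mb_s(64, 66),       # ~528 MB/s
    "PCI-X 64/133": peak_mb_s(64, 133),      # ~1064 MB/s
    "AGP 4x":       peak_mb_s(32, 66, 4),    # ~1056 MB/s (266 MT/s effective)
}
for name, bw in buses.items():
    print(f"{name}: {bw:.0f} MB/s")
```

So 4X AGP and PCI-X land in roughly the same ballpark, both far above plain 32/33 PCI.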

And most graphics cards are not limited by bus bandwidth with *any* flavour of AGP (see the various Tom's Hardware benchmarks). The usual limit is fill rate for new cards, and lack of geometry processing for old cards (assuming you're playing a new game). Textures are stored on-card by any sane game, so the only thing going across the bus is lists of triangles.

AGP doesn't have contention with other devices on the bus so it doesn't have to do any logic for mastering or controlling and can allocate all its clocks to doing a data transfer.

While this would be an issue for very short data transfers, graphics cards will likely be transferring large batches of data. This is done in burst mode, which gives one transfer per clock.

Why would you want PCI? The only advantage PCI gives is that you can hang multiple devices off of it. But while that lets you get multiple monitor support easier, it will really kill your limited bandwidth.

You have bandwidth to spare; all you'd be doing in a multi-monitor setup is sending the same triangle lists over the bus, not cutting and pasting image data or doing texturing. Have one dominant card and leave the others snooping traffic, and you have zero extra overhead for this.

The real benefit of having multiple video cards is that it lets you easily do render farming for things like games. Have each card render half the screen, and copy all cards' partial renderings to one card's frame buffer. 32/33 PCI is too slow to be practical for this, but 64/66 has more than enough bandwidth. I studied the feasibility of this at one of my past jobs.
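A rough feasibility check of that render-farming scheme, with assumed resolution and frame-rate figures (not the numbers from the study mentioned):

```python
# Back-of-envelope: MB/s needed to copy a partial frame between cards
# every frame. All figures below are illustrative assumptions.
def copy_mb_s(width, height, bytes_per_pixel, fps, fraction):
    """MB/s needed to move `fraction` of each frame across the bus."""
    return width * height * bytes_per_pixel * fps * fraction / 1e6

# Half of a 1024x768, 32-bit frame at 60 fps:
need = copy_mb_s(1024, 768, 4, 60, 0.5)   # ~94 MB/s
pci_32_33 = 32 / 8 * 33    # ~132 MB/s peak: too close for comfort
pci_64_66 = 64 / 8 * 66    # ~528 MB/s peak: plenty of headroom
print(f"{need:.0f} MB/s needed; 32/33 PCI peaks at {pci_32_33:.0f}, 64/66 at {pci_64_66:.0f}")
```

Which is consistent with the claim above: 32/33 PCI would be saturated by the copy alone, while 64/66 leaves room for everything else on the bus.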

My own guess for the desktop is that NVidia will put a CPU core, probably from AMD, in the next generation of their nForce part. That puts CPU, graphics, networking, sound, disk control, and the motherboard logic on a single chip. Their current nForce part already has all of that but the CPU.

If you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.

if you look at the transistor counts, NVidia's graphic chips already are more complicated than most CPU parts. This is quite do-able.

There's more to [CG]PU complexity than transistor count. Look at the 512Mbit memory cells that run for only a couple dollars a chip.

The trick is inter-related logic complexity. To my understanding, the existing GPUs have no issues with backward compatibility (so much of the x86 overhead is avoided), and the core itself is pipelined and modular, so the complexity is spread out across the whole chip (independent teams can work on their own components with little concern for sister components, whereas every ounce of performance is being squeezed out of x86s, which require complete coordination). Further, graphics acceleration is simply the implementation of graphical algorithms in silicon. While I'm not quite sure which algorithms there are, the possibilities are endless. Imagine a fast Fourier transform implemented as a SIMD floating point instruction. You create an array of floating point logic units and interconnect them. The floating point unit is pretty much a common off-the-shelf design, so the only real logic you apply is the interconnectivity.

I'm not saying that GPUs are easy to design, I'm just saying that hardware filters are designed this way all the time, and I wouldn't be surprised if a large percentage of the nVidia chips weren't stock logic modules.
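To picture why an FFT maps so naturally onto an array of interconnected FP units, here's a toy radix-2 FFT sketch (my own illustrative code, not anything shipped in silicon): every butterfly within a stage is independent of the others, which is exactly the kind of structure you'd wire up in hardware.

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):   # each iteration is one "butterfly" --
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t            # independent of the other
        out[k + n // 2] = even[k] - t   # butterflies, so each could
    return out                          # run on its own FP unit

print(fft([1, 1, 1, 1, 0, 0, 0, 0]))
```

The interconnect between stages (the even/odd shuffling above) is the only "real logic" in the hardware analogy; the butterflies themselves are just stock multiply-add units.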

Sure, build your own box for $2k instead of buying one for three times that much -- if you don't mind being fired.

You don't pay $6k or $8k for a server just because there's high markup on the parts. A lot of it is due to tighter tolerances required for high-availability or high-reliability equipment. There's greater consideration for issues of heat, RF, power consumption and stability -- and then there's the built-in redundancy for many components (power supplies, fans, etc).

You talk a lot about Sparc, MIPS, and Alpha in that question of yours. Yes, those are all relatively low-volume products, and yes, they do cost a lot of money. However, the Itanium is almost like Intel's version of those products, done in a slightly different way. Even though they are made in lower volumes they are still profitable, because the people buying them will pay a lot more for a system. Sun can sell a 64-processor UltraSparc III system for in the realm of a million dollars and more. If you don't think they are making a nasty profit off of that you are nuts. That is why they keep advancing the technology.

People love to throw around buzzwords like 64-bit vs. 32-bit, but when it comes down to it, what do you need on your desktop? If you are using your PC for basic development or coding there is not much to be gained from a 64-bit core at all. You don't really need any more precision. If you are talking about scientific applications then maybe you do need the 64-bit core.

I am not saying that desktop PCs won't eventually go to 64-bit cores. However, even if you were to get a cheap Itanium right now, it would perform no better, and possibly worse, than your high-end AMD and Intel x86 processors, because few of your applications would take advantage of the core.

This question will be better asked when Intel puts a processor on their desktop timeline that utilizes IA-64 technology.
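The precision point above can be made concrete with the actual ranges at stake (a quick sketch):

```python
# Concrete numbers behind the 32-bit vs. 64-bit precision point:
max_s32 = 2**31 - 1    # ~2.1 billion: largest signed 32-bit integer
max_s64 = 2**63 - 1    # ~9.2 quintillion: largest signed 64-bit integer
addr_32 = 2**32        # 4 GB flat address space

print(f"signed 32-bit max: {max_s32:,}")
print(f"signed 64-bit max: {max_s64:,}")
print(f"32-bit address space: {addr_32 // 2**30} GB")
# Most desktop code never counts or indexes past ~2 billion;
# scientific and large-database workloads are the exception.
```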

Umm... Given how well Sun is entrenched in the financial world, I think you saying the platform is dead is just plain FUD. Check with the IT department at any major financial company and ask them how many 4500 or better systems they have. (I know, I used to work for one) And yes, a lot of them are upgrading to the new UltraSparc III machines.

And for those folks doing hard research (or special effects companies with lots o' money) SGI is still king. Despite what nvidia would like us to believe, SGI's not going anywhere anytime soon for big 3d rendering projects.

At the moment, Itanium systems are worth their money only if you have large address space requirements. Intel seems to focus on optimizing the Pentium 4 compiler, and not the Itanium compiler. I doubt that the Itanium architecture will surpass IA32/x86 on the desktop (where 4GB is enough for everyone;-) anytime soon.

That's why I doubt that we are going to see affordable IA64 systems soon. After all, the transition is quite rough, thanks to Itanium's abysmal IA32 emulation (performance-wise), so there isn't even much market demand.

In the future, Intel may well decide to switch to the IA64 instruction set before it is really time for it, just to make things a bit more complicated for AMD.

IA32 can currently handle up to 64 GB in one node, via PAE (an EMS-like hack). This means that you can put more than 4 GB in your machine (actually, PCI devices need address space too, so you hit the barrier at 3.9 GB or so), and still use all of it.

On the other hand, the per-process address space is still limited to 4 GB. I don't think this is a concern for the pro user who wants to show off his RAM size, though.
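The address arithmetic behind that scheme, as a sketch (the PCI aperture size below is an assumed round number, not a spec value):

```python
# Physical vs. virtual address limits in the PAE ("EMS-like") scheme:
pae_physical = 2**36          # PAE widens physical addresses to 36 bits
per_process  = 2**32          # each process still sees a 32-bit virtual space
pci_aperture = 100 * 2**20    # assume ~100 MB reserved for PCI devices

print(f"PAE physical ceiling: {pae_physical // 2**30} GB")
print(f"Per-process virtual:  {per_process // 2**30} GB")
print(f"Usable below 4 GB:    {(per_process - pci_aperture) / 2**30:.1f} GB")
```

Which reproduces both numbers in the post: 64 GB of physical memory per node, but only ~3.9 GB directly visible to any one process without bank-switching tricks.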

"At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)?"

Is missing something. HP, Compaq and Dell provide more than the hardware; they provide services that go along with the HW. They use the hardware to suck you into using their services. While small companies can build these systems on their own for cheaper, the larger companies are the ones that need to outsource some of the things that HP, Compaq and Dell's services provide.

Also, it's kind of silly to think that these IA-64 systems will be able to be built for $2k each, given the cost of similarly performing Sparcs and IBMs. Intel is hoping for their backwards compatibility and clout to push ISVs into programming for their systems. Once they have those vendors in their camp, the chip and server prices will go up again.

And finally, most people that would need a 64-bit solution will probably need multiproc systems. OEMs will be able to provide the small systems, but once you go past the 4-8 way space, there really isn't a cheap way of scaling up any higher (and btw, clustering is really only a solution for tasks that don't involve time-sensitive sharing of large amounts of data between processors). Which is where HP, Compaq, Fujitsu, NEC, and IBM will be with their high-end systems. I doubt I will ever see Dell release a system with more than 8 IA-64 processors.

Of course only time will tell what will happen next.
OH, one last thing. The guy who posted should be informed that HP did not sell any processor guys; they sold some chipset guys to Intel. I'm surprised that someone in a processor research group would not know this.
Checkout:
http://slashdot.org/comments.pl?sid=22319&threshold=0&commentsort=3&tid=118&mode=thread&cid=0

I had a professor last semester that worked at Intel, and several things he told me reminded me of something: it's still a business. In my opinion Intel will not make any huge move until they KNOW that they will profit off of it. This means that they won't make any major move until the consumer market is there. For example, he was telling us that there have been times where they have come up with ideas that would in fact increase performance; HOWEVER, due to their wonderful job of brainwashing the entire public into thinking that clock speed is THE measure of performance, they scrapped the ideas because they noticed that they would cost too much to implement and would result in no frequency increase. (Thanks Intel)

I also think that while AMD has shown that they can provide honest competition in terms of performance, it is going to be stuck following Intel's every move, for the mere reason that Intel is "sleeping with" so many big OEMs (*cough* Dell *cough*), leaving it as the CPU for the hobbyist.

You're not going to be getting an Itanium based system for $2000 anytime soon.

First of all, Intel has said ever since the Itanium's much-delayed release that it couldn't really compete and is primarily released to get some infrastructure ready for when the McKinley is ready (IIRC, it's scheduled for about 3 months from now...).

Secondly, the die size for the McKinley is HUGE. On today's top-of-the-line .13 micron process, the manufacturing costs are likely to be too high for this chip to make it into high-end workstations, let alone $2000 consumer computers.

Thirdly, the competition isn't dead yet. Sparc and PA-RISC may be dead, but Sun offers competition, and IBM's Power4 will be a decent competitor. Alpha does indeed look to have disappeared, but I thought I heard something about some Japanese company buying rights to some Alpha stuff, and planning on a big die shrink and integrating a large cache (which is all the Alpha really needs to compete, for the near future).

Fourth of all, the performance of even the McKinley is questionable. Compilers for its IA64 instruction set are still quite poor, with little sign of the anticipated improvements. Its predecessor, the Merced/Itanium, was dog-slow at most tasks (though good at floating point). The most recent benchmarks show the McKinley's 32-bit performance as terrible, though its floating-point performance is supposed to be stellar, and its integer performance decent (when combined with an enormous on-die L3 cache...).

Anyway. Intel just likes the Itanium because the instruction set is sufficiently complex that the prohibitive cost of designing a compatible chip would raise the cost of entry to the market enough to give them a more secure monopoly for the next decade.

The implicit assumption that the author is making here is that 64-bit CPUs such as Itanium will be the 'next big thing'. I'm not sure - 64-bit CPUs really are only necessary for machines that need more than 4 GB of VM space - and with the x86 addressing extensions (PAE), some IA32 CPUs can address up to 64 GB of physical memory.

Now don't get me wrong - 64-bit filesystems are great, and necessary - being limited to 2GB or 4GB files is terrible. But no 64-bit CPU is necessary for that kind of thing, the filesystem just has to be written as 64-bit (which is easier said than done, and could easily sacrifice backwards-compatibility with various API's, but I digress...).
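That point is easy to demonstrate: the file-size limit lives in the *offset* width the filesystem APIs use, not the CPU word width. A sketch (assumes a filesystem that supports sparse files, so the 5 GB file costs almost no disk):

```python
# File offsets, not CPU word size, set the file-size ceiling. Seeking
# past the old 2/4 GB marks only needs 64-bit offsets in the API.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(5 * 2**30)      # jump 5 GB in: past any 32-bit offset limit
    f.write(b"\x00")       # creates a sparse file on most filesystems
    size = f.tell()        # the offset the OS is now tracking

print(f"file offset reached: {size} bytes ({size / 2**30:.2f} GB)")
os.unlink(f.name)          # clean up the sparse scratch file
```

This works fine on 32-bit hardware, which is exactly the post's point: the filesystem and its APIs have to be written for 64-bit offsets, but the CPU doesn't care.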

That being said - Intel might very well be moving down the wrong path - the Itanium is a huge, expensive, hot, completely new chip. Even Intel is hedging its bets [theregister.co.uk] on whether or not Itanium will take off - and AMD is poised to eat Intel's lunch with their new Hammer design [com.com].

Who knows, perhaps all CPU's from now on will be compatible with x86 IA32, and innovation will be in the various processing units that sit behind the instruction-set decoder. Take a look at AMD or Transmeta for examples of that, already.

Just look at the auto industry. GM, Ford, Chrysler began the North American market by consolidating all the smaller auto companies and dominated for years. Then along came Honda, Toyota, Nissan and now they have made huge gains.

The fact is that even though it looks impossible to overcome Intel at this point, someday someone will.

Rewriting standard applications to take advantage of the Itanium is one thing. However, companies that need a $10k+ server usually have programs that are specialized. After 20 years of the x86 standard there's a large codebase, albeit with a few improvements along the way. If you read the FreeDOS article a little while back, companies are still running DOS in production systems, because it *works*. Porting it to Itanium will be a lot worse than porting it to x86-64 and Hammer. Let's face it, the hardware cost is usually minimal today. Software programmers, however, are not cheap.

You won't see anybody building an Itanium for $2K, since the chips cost more than that when you buy 1000 of them at a time.

Maybe 10 years from now, but that's too far off.

1) HP's PA-RISC is as dead as Intel's x86

2) Alpha should regain the speed crown with the EV7 for a while, so they aren't dead yet. They've just announced they'll be dead in a few years :)

3) IBM's POWER4 is the current speed king and is likely to be around for a long long time.

4) MIPS.. Aren't these the most popular RISC chips in the world, due to their embedded use? (N64, Playstation, networking) At 500Mhz in SGI's machines they are pretty dead, but various MIPS chips are doing quite well in emerging areas. In fact, AMD just bought a MIPS company.

5) Sparc has never been that great CPU vs. CPU against the other companies, but I expect them to be around for a fairly long time still, just based on their installed base. Their customers never really bought on performance (otherwise Alpha would still be around!), but on service and reliability. As long as they can provide good enough performance they'll be around.

The next Itanium is HUGE, making it very expensive to produce (meaning you won't ever build a system for under $2K with one!). It requires a LOT of optimization in software to get acceptable performance (meaning it'll suck unless you run active profiling optimizations, and I doubt most game companies will even do that), it uses a lot of power and creates a lot of heat (it makes the Athlon/P4 look like embedded chips!), and it isn't really compatible with existing software. Nobody is going to run Win98, WinXP, or even GNU/Linux on it on the desktop.

The next Itanium will be more popular than the last, but it won't even register on people's radars as it won't provide the best performance, it won't have a bunch of software written for it, and it'll be expensive. Apple will sell more iBooks than Intel sells Itaniums for the next few years.

There is little compelling need for desktop users (the ones that create the volume for commoditization) to move to 64 bit systems.

Until there is a breakthrough brought on by computing speed, we will see a stall in computer upgrading, as we have seen in the past.

I expect we will see more things like the iMac (very cool computers) before we see a press for new computers for speed.

The two things I think will create the next-level breakthrough:

Real-time CGI imaging at Toy Story/Monsters Inc./FF level of quality. We can probably predict precisely WHEN that will be possible by mapping the development speed of 3D hardware, memory, software breakthroughs, and polygon density to date, and where the predictable bottlenecks will appear. (My suspicion is that we are 5-8 years away.)

The other breakthrough which I think would do it - and right now it is very difficult to predict when it will happen, though I suspect adoption would be pretty rapid - is real-time voice interaction that is five-nines accurate. This is likely to appear after a certain speed level of computers, and a breakthrough understanding/algorithm for speech recognition.

However, I suspect the AMD x86-64 solution may be adopted much faster than the Itanium solution. Likely there is an app out there that may have a large enough niche to require 64 bit apps, and the rest of the apps on the computer would be 32 bit. I suspect that the app will be imaging or video related, and that will create an adoption around the AMD solution, before the Itanium moves out of the server market to the desktop market where it will be commoditized.

While the Power4 will no doubt compete with the Itanium in the server space, since many people are talking about when 64-bit chips will hit the desktop, you should note that its "friend" the G4, which has been out since before the P4, is by no means meant to compete with new Intel offerings; the Goldfish PowerPC 8500 ("G5") is aimed squarely at dominating the desktop space before Intel can get to it with 64-bit chips. Its ability to run 32-bit code at much better speed than the other 64-bit offerings makes it much more appealing to people looking to transition to 64-bit on the desktop, and if they can pull off the .13 SOI, 500MHz RapidIO bus, etc., it should reassert A.I.M.'s competitiveness in high-end desktops. Now when it will actually ship, how much of this will get implemented, and at what frequency it starts is anyone's guess.

Fud, fud, fud.
I can't speak for the other companies, but Sun can easily afford to fund R&D on the next-generation SPARC chip; they've got $6 billion cash in hand [sun.com], let alone investments, and have had for over 2 years.
BTW, the current generation is UltraSPARC III; UltraSPARC IV is just a fabrication improvement. Work is already underway on the UltraSPARC V's design.
Sun's crown jewels are SPARC/Solaris, when Sun stops working on their own OS/CPU/Server platform it's time to stop investing in them.

My world view is that Itanium based systems will become commodity products very quickly after good silicon is available in reasonable volume. At that point, why should one spend $8-10k for that hardware from the likes of HP, Compaq, Dell and others when one can build it for $2k (or even less)?

When people start buying Itanium systems in volume, the prices on Itanium systems will drop. The reason they're expensive is not that the chips are hard to come by, but that no one wants to buy them right now.

However, this comment alone makes me wonder about the poster's cluelessness. He obviously hasn't worked in any real production environment. You people should realize that you simply can't build the kind of systems that Dell, HP, etc. sell -today- out of commodity components. Take a look at a typical high-end SMP Dell server: proprietary OEM motherboard, proprietary case, hot-swap hard drives, hot-swap redundant power supplies and cooling, LOM support, etc. All components have been carefully designed to work together to produce a reliable and scalable server system. You will never build the same kind of system on your own, and if you do, it's not going to be cheaper than buying one. Plus you don't get the vendor support.

The comment about SPARC being dead is completely astonishing at a time when Sun is -THE- Unix market leader. SPARC CPUs were never faster than the competition, but that didn't worry Sun users as long as they were up to par with the competitors. The reason people buy Sun hardware is not the CPUs (a CPU alone is useless) but Solaris, which is THE enterprise-class OS, and its applications, Sun's excellent support, the massive multiprocessor scalability of Sun systems, massive I/O bandwidth, etc.

The current Sun chip is not bad at all (UltraSPARC III), and Sun is working on the UltraSPARC V.

Sparc is dead - FUD. You're right. The new 1.05GHz Cu chip is pretty frickin' fast - and speed has NOT always been Sun's selling point.

PA-RISC is done - FUD. Not true. HP is moving to IA-64 - even their boxes are starting to be wired to ship with either PA-RISC or McKinley.
McKinley is essentially an HP design... PA has lived longer than expected, but that's just because IA-64 is so late.

MIPS lacks funding - FUD. Actually, this is probably true. SGI is not a well company and they will probably need to move to a new chip arch soon. There are R14Ks and even G5s rumored - who knows?

Alpha is gone - FUD. Nope - it's gone. Intel bought it and swallowed it whole. No new development, no new generations; it'll only live on in some parts of IA-64.

This guy works for either IBM or Intel. Probably IBM, as he favors the Power4 and G4. Don't take him seriously!

I can't say where he works, but he has a point. Maybe you should look at the recent server chip landscape before dismissing this guy's claims.

SPARC: Who buys them anymore? For *every* application in the last year that I have heard about, management has stated that they will buy a commodity PC rather than a Sun workstation because of price/performance ratios.

PA-RISC: Not enough info to comment on.

MIPS: I know one hardware guy who is trying to build an embedded MP3 player using a MIPS CPU. That's it. I've never heard of anybody using them commercially.

Alpha: Compaq stopped making them last year (go and check their old press releases for March), nobody makes systems based on them, nobody buys them that I have ever heard of. In fact, I can't remember ever seeing one.

Others: Programming the 8-bit CPU that runs the engine in your car can be fun if you like machine code, but it's not really satisfying.

Anything whose sales are decreasing, zero, or that is not being manufactured anymore is dead. The Z-80 is dead after the longest run of any CPU out there (26 years!), but it is gone. Alpha is gone. SPARC is going. MIPS is going. The world will be a poorer place for their loss.

(If you have evidence to the contrary, please post. I'd like to be wrong.)

Nice idea, but keep in mind that static compilers are extremely difficult to create for Itanium. Performance results I've seen show that while the theoretical maximum for IA-64 is pretty impressive, the actual results static compilers are generating are not so hot.

Now, try to write a dynamic JIT compiler for Itanium, which is even harder than writing a static compiler. I haven't seen any Java or CLR performance numbers for IA-64, and suspect I know the reason why. :-)

Virtual machines rely on things like delayed compiling that are fairly antithetical to the whole idea of Itanium, where they push enormous amounts of work previously handled by the CPU out to the compiler. Personally, I believe that VLIW for general purpose processors was a really bad idea that was disproven a good decade ago. Intel is in the middle of giant train wreck, and the market doesn't even know it yet.

Consider the downside of pushing the majority of your branch prediction to the compiler. For example, the compiler doesn't know about multiple processes and how they interact with each other! This means it's likely that Itanium boxes won't even serve transactions very well, which raises the question of what Itanium will be useful for. If it's not for the desktop, and it's not for transaction service, what the heck is it for? High-end scientific computing? Competing for Alpha's market share is a big mistake, in my mind.
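The trade-off being criticized can be sketched in miniature. IA-64-style predication has the compiler emit both arms of a branch and keep one result, instead of letting runtime branch-prediction hardware guess. This is a hypothetical Python illustration of the idea (my own function names), not IA-64 code:

```python
# Branchy version: the CPU's branch predictor guesses the direction
# at runtime, using behavior the compiler could never see statically.
def branchy(a, b):
    if a > b:
        return a - b
    return b - a

# Predicated version: compute BOTH arms unconditionally, then keep one
# result via a predicate -- the decision is baked in at compile time.
def predicated(a, b):
    p = a > b            # predicate register, in effect
    r1 = a - b           # both arms execute regardless...
    r2 = b - a
    return r1 if p else r2   # ...and the predicate selects the result

assert branchy(7, 3) == predicated(7, 3) == 4
```

The catch, per the post above: predication only pays off when the compiler can see the workload's behavior, and cross-process interaction on a busy transaction server is exactly what it cannot see.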

If I worked for a major "research" organisation which couldn't even find out simple stuff like the short term future of the microprocessor industry then I would want to remain anonymous too.

One sees this comment in every Ask Slashdot thread. It is not only tiresome, it is wrong as well. Seeking multiple opinions from diverse sources is certainly part of research, similar to skimming all the current trade and academic publications.

Sorting out the meaningful comments from the slush is part of good research.

It's different because they haven't signed exclusive deals and used marketing to force other competitors out of the fray. Essentially, they will have priced the competitors out of the building. I'm not saying they aren't a monopoly, but realistically, it's harder to argue they did it illegally or unjustly.

However, I still think that there will be room for others. AMD will probably succeed doing what they do best: outpacing Intel in quality and lowering the price by ~10%. This has been successful (I hope it continues; I own stock) and will probably continue. And I doubt Sun is out. There may be changes coming, but I figure McNealy would sell his baby before using Intel chips. As for the others, they fell and never recovered. You can't charge super high premiums when your competition is charging super low ones. A lot of corps assumed you could get away with it, and look what happened.

The future is unwritten, so any sort of prediction is just fantasy for the most part. Step back to 95 and tell me who predicted 2000 or 2001? Reality is far more interesting than any professional opinion from the Gartner Group et al.

Anyone who sees the recent Sun announcements (re: Linux) as the end of SPARC or Solaris, clearly doesn't know anything about the business world or about Sun.

Yes, Sun has made an announcement to start supporting Linux. This is no big surprise, especially after the Cobalt acquisition.

This doesn't mean that they are switching to Intel or giving up on the SPARC architecture.

SPARC is far from dead. All you have to do is talk to anyone within Sun to see the U4 and U5 roadmaps. Sun firmly believes in their architecture and has spent/will spend the R&D to continue to develop it.

Plus, the install base of these technologies is much too large for them to just give up on them.

Look at HP, for example... Here is a company that is part of the engineering process for Itanium. They've already committed to using Itanium on their higher-end servers, but they aren't completely giving up on their PA series CPUs (yet). All of their new systems take both.