"Many people in the industry assumed that Itanium had a low - and poor - profile among end users. That was what the folks at IDC assumed until recently, when they surveyed 500 members of their Enterprise Server Customer Panel. The results were somewhat surprising, they said. Not only was there a high level of awareness among the users - more than 80 percent knew of the platform - but their intent to buy an Itanium system was fairly strong. About 24 percent of those polled said they had bought at least one Itanium system, though only 13 percent of non-HP users had done so. However, more than a third of all participants said they were highly likely to buy an Itanium system within the next 12 to 18 months."

I'd say x86 is the dying platform; sooner or later this 30-year-old architecture will hit the scalability and performance wall.

The x86's demise has been predicted for at least 20 years, so you're gonna have to come up with some more convincing evidence for your assertion.

The x86 ISA has been extended and adapted so often and successfully that the scalability argument is just silly. And if x86 is so bad, why does nobody, including Intel themselves, manage to beat it (and not just for special applications) at the same transistor budget?

x86 may not be pretty, but it certainly does the job. And with its compact code it's actually quite well suited to today's requirements, where memory bandwidth
and latency are much more important than the size of the instruction decoder.

The x86's demise has been predicted for at least 20 years, so you're gonna have to come up with some more convincing evidence for your assertion.

So was the end of lithographic technology predicted, and it still holds on, but that doesn't prove it won't reach its practical/physical limits at some point. It WILL.

Technology/engineering will always find its way around, but that doesn't mean it's the best way. Transition costs and compatibility are really the key terms in this issue, so the industry always tends to postpone such gigantic transitions (LCD vs. CRT, etc.).

The x86 ISA has been extended and adapted so often and successfully that the scalability argument is just silly.

You think so? Just look at the figures showing real performance gains over the past decade. You'll be surprised how the curve flattens out due to different factors; x86 just happens to be one of them.

And if x86 is so bad, why does nobody, including Intel themselves, manage to beat it (and not just for special applications) at the same transistor budget?

And who could beat that mammoth application base and its software developers? Like I said, compatibility is really the key issue here.

x86 may not be pretty, but it certainly does the job. And with its compact code it's actually quite well suited to today's requirements, where memory bandwidth
and latency are much more important than the size of the instruction decoder.

Now, you don't have any clue what you're talking about, do you? Bandwidth is always traded off against latency, and the instruction decoder is just a way to save bandwidth at the cost of latency. Furthermore, it limits the CPU's ability to process data by delaying and limiting the number of instructions fed to its pipelines. Out-of-order execution just makes things worse when it comes to a branch prediction miss (pipeline flush). It's not that simple, you know.