Intel’s Itanium server CPUs shuffle one step closer to the grave

Intel is quietly scaling back the roadmap for its ill-fated 64-bit CPUs.

When last we heard about Intel's Itanium server CPUs, it sounded like Intel was planning a slow, controlled slide into obsolescence for the processors and their accompanying architecture. Kittson, an Itanium revision due in 2015 or so, would be socket-compatible with the company's more popular Xeon CPUs, removing the need for (and, crucially, the R&D costs of) Itanium-specific motherboards and chipsets. Now, Intel appears to be backing down from even those modest plans—PC World has just noticed an Intel posting from late January, saying that Kittson would remain socket-compatible with the current Itanium 9300 and 9500 CPUs.

Sticking a new processor in an older motherboard can still yield speed improvements, but you'll miss out on new, chipset-dependent advancements—support for faster RAM, newer RAM standards (like the upcoming DDR4), and new versions of PCI Express, SATA, and USB, among other things. Intel's decision to keep future Itanium revisions on today's chipsets will make it easier to upgrade existing systems, but it won't do much to sell new ones. Intel will also keep Kittson CPUs on its 32nm manufacturing process rather than moving to the more energy-efficient 22nm process it uses for its Ivy Bridge CPUs, reflecting a further reluctance to continue investing in the architecture. By the time Kittson ships, this process will be even further behind the curve than it is now.

While Intel told PC World that using the same motherboard and chipsets for Itanium and Xeon "will be evaluated for future implementation opportunities," this is just the latest sign that Itanium is on Intel's back burner—Microsoft's server software no longer supports the architecture and a lawsuit from HP is the only thing keeping Oracle's database software for Itanium alive. Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM while also fighting off competing RISC architectures, but the Itanium instruction set didn't maintain backward-compatibility with existing x86 software. In 2003, AMD introduced its first Opteron server CPUs, which implemented 64-bit instructions while maintaining 32-bit compatibility; Intel eventually licensed these instructions for use in its own chips, and Itanium has been in a long, slow decline ever since.
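For the curious, the 4GB figure is simply the ceiling of a 32-bit address space, a bit of arithmetic worth making explicit (a quick illustrative check, in Python):

# A 32-bit pointer can name at most 2**32 distinct byte addresses.
limit = 2**32
assert limit == 4 * 1024**3    # 4,294,967,296 bytes = 4 GiB
print(limit // 2**30, "GiB")   # prints: 4 GiB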

Promoted Comments

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

That's certainly what it accomplished, and if you ask Intel today, they'll tell you that Itanium was never meant to do anything else. But if you asked Intel in 2000, Itanium was absolutely supposed to be a post-x86 architecture. Rather than entrenching x86 dominance, Itanium was originally meant to kill x86. A lot of people at the time assumed that Itanium would move downmarket to desktops, but that the margins on high-end gear would let the likes of MIPS and SPARC stay competitive in servers for a long time. Remember, at that point Intel really had no credibility in that market. The P6 core was impressive, but it was still something that small offices used for servers because they couldn't afford 'proper' server hardware. Hindsight makes the present world seem obvious, but at the time many people just sort of laughed at Intel, without realising that they were already obsolete and Intel was about to kill them.

I wonder what would have happened if Itanium had ever caught on. I've skimmed the story over the years, but from what I understand it basically boils down to AMD's release of x86-64 providing a more cost-efficient enterprise-level CISC processor which the market loved, forcing Intel to develop a 64-bit implementation of x86 and pretty much hamstringing their own high-end offering.

Wasn't IA-64 supposed to be a from-the-ground-up redesign of what a modern CPU should be, without having to support all of the legacy x86 bits? If Itanium, and therefore IA-64, had captured relevant market share, would it have eventually trickled down into the desktop space and replaced x86 as the de facto standard for general computing? Without having to hang on to the legacy x86 stuff, might we all actually be in a better place with more powerful systems today had Itanium not floundered?

EDIT:

tigas wrote:

Good thing that AMD took the logical step and brought us AMD64. Imagine if this clusterf. had become the dominant CPU architecture.

Itanium today is a mess, sure, but how much of that is due to Intel devoting little in the way of resources to it? If Itanium had become dominant and received as much funding and development time as x86 has since then, isn't it likely today's IA-64 processors would be more powerful than today's x86 processors?

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

Was it really because of Itanium, or merely because the x86 and x64 CPUs produced by Intel and AMD became ever more relevant, efficient, and powerful, which, combined with the huge software library, erased the x86 legacy burden?

Looks to me like the problem is solved. Intel releases a dual-compatibility chip, so it doesn't have to keep building for an old platform, and gets a way to phase out the design altogether, all without harming its customers down the road. By that time all the old mobos should be about to die anyway. Everyone should be happy.

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

Wrong again. IA64 was a way of grabbing the high-end all for Intel, by strangling MIPS, Power, Alpha and PA-RISC (mostly by introducing an element of wait-and-see for possible clients of alternative IAs, while remaining in vapour mode for years), AND using an IA that AMD couldn't license.

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

Wrong again. IA64 was a way of grabbing the high-end all for Intel, by strangling MIPS, Power, Alpha and PA-RISC (mostly by introducing an element of wait-and-see for possible clients of alternative IAs, while remaining in vapour mode for years), AND using an IA that AMD couldn't license.

That's a stretch. All were essentially dying anyway, strangled by economic factors beyond their control that made their continued development unprofitable and unjustifiable (except POWER, which lives on). Itanium didn't help with that, but it didn't really hurt much either.

Good thing that AMD took the logical step and brought us AMD64. Imagine if this clusterf. had become the dominant CPU architecture.

Itanium today is a mess, sure, but how much of that is due to Intel devoting little in the way of resources to it? If Itanium had become dominant and received as much funding and development time as x86 has since then, isn't it likely today's IA-64 processors would be more powerful than today's x86 processors?

No, no, hell, no. The VLIW idea has been successfully applied to GPUs, but it failed dramatically for general-purpose code because too much performance depended on optimization at compile time. They took one of the tenets of RISC (let the compiler worry about lots of things) and pushed it so far that they came out with something even more convoluted than CISC.

Wasn't IA-64 supposed to be a from-the-ground-up redesign of what a modern CPU should be, without having to support all of the legacy x86 bits?

Well, if you define "early 1990s" as "modern", then I suppose. It was basically an attempt to replace a dated 30-year-old ISA with a dated 20-year-old ISA. It's debatable how much, if any, real advantage that would have given for desktop applications.

NulloModo wrote:

If Itanium, and therefore IA-64, had captured relevant market share, would it have eventually trickled down into the desktop space and replaced x86 as the de facto standard for general computing? Without having to hang on to the legacy x86 stuff, might we all actually be in a better place with more powerful systems today had Itanium not floundered?

VLIW never really caught on in any space. Even in highly data-parallel environments like GPUs it was eventually phased out in favor of more conventional, SIMD-based designs which seem to have outperformed it.

I don't think Intel really had a realistic chance of getting Itanium to catch on for the desktop, but if it had I suspect in the long run Intel would have been forced into hacking around a lot of its VLIW weirdness much as they hack around CISC bits and pieces now.

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

Wrong again. IA64 was a way of grabbing the high-end all for Intel, by strangling MIPS, Power, Alpha and PA-RISC (mostly by introducing an element of wait-and-see for possible clients of alternative IAs, while remaining in vapour mode for years), AND using an IA that AMD couldn't license.

That's a stretch. All were essentially dying anyway, strangled by economic factors beyond their control that made their continued development unprofitable and unjustifiable (except POWER, which lives on). Itanium didn't help with that, but it didn't really hurt much either.

Like a bee sting after being shot in the head.

MIPS and PA-RISC were in trouble, but Alpha was going gangbusters and even POWER was doing OK, considering IBM still regarded it as something to keep its hardware chops honed.

Picture Intel coming into this market and saying "we have the fabs that we use for mainstream processors, and we're going to kill all these guys with our manufacturing expertise, so don't buy anything until we show our future processor" - HP killed PA-RISC immediately in exchange for a seat at the table, MIPS choked, DEC panicked and the corpse was split between Intel and Compaq, and IBM noticed that if they couldn't sell POWER workstations any more, they could still run mainframe code on POWER.

Itanium is shades of EISA and MCA. Try to dominate the market and the market reacts. Itanium had some good ideas ahead of their time, but the execution was poor, and the world wasn't ready to give up x86 (still isn't).

MIPS and PA-RISC were in trouble, but Alpha was going gangbusters and even POWER was doing OK, considering IBM still regarded it as something to keep its hardware chops honed.

That Alpha was building fast processors does not change the fact that its business model was ultimately doomed by insufficient volume.

tigas wrote:

Picture Intel coming into this market and saying "we have the fabs that we use for mainstream processors, and we're going to kill all these guys with our manufacturing expertise,

That's what they did... using x86. That's the thing about manufacturing expertise: you don't need a good ISA as long as you have volume. Whatever the desktop PC market picked, no matter how terrible, would have won, because in the long term its volume made it a viable business, whereas Alpha, MIPS, SPARC, PPC, etc. were all ultimately going to become unprofitable as fab and engineering costs continued to grow. And in the end, bad but profitable wins out over [insert adjective here] and unprofitable.

If Itanium, and therefore IA-64, had captured relevant market share, would it have eventually trickled down into the desktop space and replaced x86 as the de facto standard for general computing? Without having to hang on to the legacy x86 stuff, might we all actually be in a better place with more powerful systems today had Itanium not floundered?

VLIW never really caught on in any space. Even in highly data-parallel environments like GPUs it was eventually phased out in favor of more conventional, SIMD-based designs which seem to have outperformed it.

I don't think Intel really had a realistic chance of getting Itanium to catch on for the desktop, but if it had I suspect in the long run Intel would have been forced into hacking around a lot of its VLIW weirdness much as they hack around CISC bits and pieces now.

When "graphic accelerators" (rendering raster pipelines at high speed) became GPUs, VLIW was doomed.

"Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM while also fighting off competing RISC architectures, but the Itanium instruction set didn't maintain backward-compatibility with existing x86 software."

Well that's a very neutral way to put it. The issue is not, and never was, that Itanium was not compatible with x86 software. The issue was that

(a) Intel could not straddle the divide between making Itanium truly take off and protecting x86. And so Intel was never willing to price the product where it needed to be priced, given the existence of Xeon at the low end and POWER at the high end.

The relevance of this to today is that we are seeing a version of the same struggle within Intel regarding Atom. People imagine that Atom is the tiger chip, just waiting to pounce, but, just like Itanium, Intel is terrified of hurting its desktop business. The specifics are very different, but the underlying reality is the same: like Microsoft so far (and unlike Apple), Intel can't see a way to cross the chasm between the money it's making today from chips of the past and the money it may one day make from chips of the future.

(b) Itanium was DELIBERATELY made insanely complicated. Intel started with a technical philosophy akin to RISC (although the details were different), namely "let's move all the complexity associated with superscalar execution from the CPU into the compiler, and simply offer a 'language' by which the compiler can indicate the superscalar details to the CPU". But then some bright MBA decided that a chip that was simple for Intel to design would be easy for anyone else to design, and so one bizarre item after another was added to complicate the chip. The end result was something every bit as complicated to design as an x86 --- but with a thousandth the size of the potential market. And so no money for a steady stream of annual tick-tock updates (and the MASSIVE pool of design engineers working in parallel to keep the thing competitive). And so IBM (starting with a cleaner slate in the form of POWER, with no demand for weird and pointless complexity) is able to remain basically competitive without untenable design effort.

The relevance is, once again, to Atom. As I've said before, the problem Intel has with Atom is not that the x86 overhead costs a lot of power or a lot of area, it's that it costs so much in terms of design and test manpower. And once again Itanium shows us that, for all Intel's skills, especially in fab, the insane levels of complexity in these chips make it much harder than people imagine to simply crank out a new and improved version. I, for one, find it hard to see how Intel could establish a pipeline manned as aggressively as the desktop pipeline to keep up the pressure on ARM. But without that level of manpower, all the arguments about how Intel COULD compete with ARM are moot.

The really pathetic thing is that this death by complexity in both cases (Itanium and Atom) was absolutely self-imposed. There was no technical reason for it, and Intel could have got pretty much everything it needed from both products by maintaining an engineering focus that was willing to look forward confidently, rather than backwards in paranoia, and design a much simpler chip. There's an interesting book waiting to be written here about Intel --- the technical side (fabs and architecture), the business side, and how the business side has managed to so warp the engineering side.

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

The thing is that Alpha was killed not by any performance or technical metric but via business politics. Compaq bought DEC, which started it all. Then HP decided to buy Compaq, but HP was working with Intel at the time to release Itanium. There was a bit of a conflict of interest, so Compaq spun off the Alpha design team into its own company so the HP-Compaq merger could get through regulators. So what happened to that Alpha company? It was purchased by Intel, and the death blow was properly delivered.

MIPS, on the other hand, was already in decline in the market as SGI, the platform's big supporter, was faltering. To further compound the issue, SGI had already publicly endorsed Itanium as the future platform for IRIX. MIPS' main use in the market suddenly became the embedded space, where it has lingered.

Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM

WTF? 64-bit processors had been there long before 2001. Itanium's job was to kill Alpha and MIPS, ensuring x86 dominance. Mission accomplished.

That's certainly what it accomplished, and if you ask Intel today, they'll tell you that Itanium was never meant to do anything else. But if you asked Intel in 2000, Itanium was absolutely supposed to be a post-x86 architecture. Rather than entrenching x86 dominance, Itanium was originally meant to kill x86. A lot of people at the time assumed that Itanium would move downmarket to desktops, but that the margins on high-end gear would let the likes of MIPS and SPARC stay competitive in servers for a long time. Remember, at that point Intel really had no credibility in that market. The P6 core was impressive, but it was still something that small offices used for servers because they couldn't afford 'proper' server hardware. Hindsight makes the present world seem obvious, but at the time many people just sort of laughed at Intel, without realising that they were already obsolete and Intel was about to kill them.

Wasn't it AMD64 that killed Alpha and MIPS? Itanium (aka Unobtanium) was a huge failure and disappointment. It hardly achieved much more than a few chuckles from the watching crowd.

Good thing that AMD took the logical step and brought us AMD64. Imagine if this clusterf. had become the dominant CPU architecture.

Itanium today is a mess, sure, but how much of that is due to Intel devoting little in the way of resources to it? If Itanium had become dominant and received as much funding and development time as x86 has since then, isn't it likely today's IA-64 processors would be more powerful than today's x86 processors?

No, no, hell, no. The VLIW idea has been successfully applied to GPUs, but it failed dramatically for general-purpose code because too much performance depended on optimization at compile time. They took one of the tenets of RISC (let the compiler worry about lots of things) and pushed it so far that they came out with something even more convoluted than CISC.

I don't think that's really correct. EPIC, as a concept, was a way for the compiler to pass more information to the CPU than a standard ISA can, meaning the CPU had to do less work to figure out that information. It allowed for cheaper superscalar dispatch. The real problems were:

[technical]
- it didn't do much to help OoO execution, and that's the harder (and more important) part than the superscalar execution
- it didn't do much to help memory performance, and so you (especially the target markets of servers and HPC) were paying for an unbalanced machine --- too much computation for the memory capabilities. The only way to rebalance was to add what were, for the time, huge amounts of expensive cache.

[business]
- it exposed (i.e. made compiler-visible) a machine model that was too rich for the time, so there was no easy way to scale it down to cheaper parts. Intel learned nothing from IBM's 360 experience, or from its own (which even at that time had been clear, from things like the 8086 vs. the 8088, the 386SX/486SX, or the split into a desktop model and a Xeon model).

I don't think it's useful to say the VLIW ISA was what killed it. A better answer is that EPIC solved what was not a serious problem (superscalar), did not solve what was a serious problem (OoO and memory), and, as implemented, allowed for some questionable business decisions, which a traditional ISA would have ameliorated. I think it is still possible (technically, though probably not commercially) to do better. An ISA could be devised that is per-CPU specific, exposing both superscalar and OoO info to the compiler. BUT what would be distributed as binaries would be something like LLVM intermediate code, which would, on "installation" on each CPU, be compiled down to the exact code appropriate for that CPU. What this would do, however, is limit the extent to which one could write assembly language for such a system. This is essentially how the GPU guys have dealt with the issue, and it has allowed them to create ISAs that are very appropriate for their HW, but that aren't frozen to a HW model appropriate to a particular point in time.

If anyone can pull this off, it will be Apple, and it will be very interesting to see if they ever go in this direction. They, after all, have all the pieces in place: they control LLVM (and can modify it and its intermediate representation if necessary), they control the app store (and can insist that subsequent app submission be in LLVM code rather than ARM as today), and they control the iOS CPU...
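To make the "distribute LLVM intermediate code, compile on installation" idea above concrete, here is a minimal sketch in Python, assuming only that clang is installed; the file name app.bc and the flags are illustrative, not any vendor's actual pipeline:

import subprocess

def finalize_on_install(bitcode: str, output: str) -> None:
    """Lower distributed LLVM bitcode to native code for this exact CPU."""
    subprocess.run(
        ["clang", bitcode,   # e.g. "app.bc", shipped instead of a native binary
         "-O2",
         "-march=native",    # tune for the CPU being installed on, not a frozen HW model
         "-o", output],
        check=True,
    )

finalize_on_install("app.bc", "app")  # run once, at install time

The point of the sketch is that the frozen distribution format is the IR rather than the machine ISA, which is exactly what lets the underlying hardware model change from generation to generation, as the comment describes for GPUs.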

AFAIK, Itanium was HP's attempt to outsource the development (and production) of the next generation of PA-RISC processors to Intel. Intel was looking for a way into the high-end server/workstation market and agreed to work with HP. As it turned out, Intel got into that market on their own by making an inferior but significantly cheaper product - the Xeon. Why share your profits when you can keep everything to yourself?! That left HP hanging - dependent on Intel (and later Oracle) and with a CPU that was even less competitive than the PA-RISC and Alpha architectures they controlled. It's a shame that a really good architecture like PA-RISC (arguably the most efficient one at the time) had to end up like that.

"Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM while also fighting off competing RISC architectures, but the Itanium instruction set didn't maintain backward-compatibility with existing x86 software."

I don't think it's useful to say the VLIW ISA was what killed it. A better answer is that EPIC solved what was not a serious problem (superscalar), did not solve what was a serious problem (OoO and memory), and, as implemented, allowed for some questionable business decisions, which a traditional ISA would have ameliorated.

This is probably the best way to put it. VLIW solves a problem that seemed big 20 years ago but has since been largely forgotten. It does not really do much to address the actual constraints facing modern processors.

"Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM while also fighting off competing RISC architectures, but the Itanium instruction set didn't maintain backward-compatibility with existing x86 software."

Well that's a very neutral way to put it. The issue is not, and never was, that Itanium was not compatible with x86 software. The issue was that

(a) Intel could not straddle the divide between making Itanium truly take off and protecting x86. And so Intel was never willing to price the product where it should to be priced, given the existence of Xeon at the low end and POWER at the high end.

The relevance of this to today is we are seeing a version of the same struggle within Intel regarding Atom. People imagine that Atom is the tiger chip, just waiting to pounce but, just like Itanium, Intel is terrified of hurting its desktop business. The specifics are very different, but the underlying reality is the same: like Microsoft so far, (and unlike Apple) Intel can't see a way to cross the chasm between the money it's making today from chips of the past, and the money it may one day make from chips of the future.

(b) Itanium was DELIBERATELY made insanely complicated. Intel started with a the technical philosophy of something akin to RISC (although the details were different), namely "let's move all the complexity associated with superscalar execution from the CPU into the compiler, and simply offer a 'language' by which the compiler can indicate to the CPU the superscalar details". But then some bright MBA decided that a chip that was simple to design by Intel would be easy to design by anyone else, and so one bizarre item after another was added to complexify the chip. The end result was something every bit as complicated to design as an x86 --- but with a thousandth the size of the potential market. And so no money for a steady stream of annual tick-tock updates (and the MASSIVE pool of design engineers working in parallel to keep the thing competitive). And so IBM (starting with a cleaner slate in the form of POWER, with no demand for weird and pointless complexity) is able to basically remain competitive without untenable design effort.

The relevance is, once again, to Atom. As I've said before, the problem Intel has with Atom is not that the x86 overhead costs a lot of power or a lot of area, it's that it costs so much in terms of design and test manpower. And once again Itanium shows us that, for all Intel's skills, especially in fab, the insane levels of complexity in these chips make it much harder than people imagine to simply crank out a new and improved version. I, for one, find it hard to see how Intel could establish a pipeline manned as aggressively as the desktop pipeline to keep pushing ARM aggressively. But without that level of manpower, all the arguments about how Intel COULD compete with ARM are moot.

The really pathetic thing is that this death by complexity in both cases (Itanium and Atom) was absolutely self-imposed. There was no technical reason for it, and Intel could have got pretty much everything it needed from both products by maintaining an engineering focus that was willing to look forward confidently, rather than backwards in paranoia, and design a much simpler chip. There's an interest book waiting to be written here about Intel --- the technical side (fads and architecture), the business side, and how the business side has managed to so warp the engineering side.

"Intel introduced the Itanium chips in 2001 as a way for servers to move to 64-bit operating systems that could handle more than 4GB of RAM while also fighting off competing RISC architectures, but the Itanium instruction set didn't maintain backward-compatibility with existing x86 software."

Well that's a very neutral way to put it. The issue is not, and never was, that Itanium was not compatible with x86 software. The issue was that

(a) Intel could not straddle the divide between making Itanium truly take off and protecting x86. And so Intel was never willing to price the product where it needed to be priced, given the existence of Xeon at the low end and POWER at the high end.

They could initially, as 64-bit support was considered a premium feature. Intel's master plan was to keep x86 32-bit and, as the market slowly wanted 64-bit software, slip in the Itanium. There was a timetable, and even the possibility of a NetBurst/Itanium hybrid chip that would slip into Socket 775 and assist in the transition. (Note that the first few iterations of the Itanium chip effectively had a 486 on-die to provide x86 backwards compatibility. This hardware was dropped once software emulation was able to provide greater performance.)

The problem wasn't Intel's plan but rather AMD's intention of extending the x86 architecture into the 64-bit realm. Upon the Opteron's release, Intel had a competitor to its NetBurst architecture and a design that diminished the value of Intel's Itanium line while costing less than either. Nearly a perfect storm.

What made the storm perfect were two factors. MS dropped Windows XP support for Itanium workstations (i.e. Intel's path to the desktop) and announced an x86-64 port. This would keep Itanium locked into the server market. And in the server market itself, Itanium was having troubles as IBM's POWER line was beating it at the high end.

name99 wrote:

The relevance of this to today is that we are seeing a version of the same struggle within Intel regarding Atom. People imagine that Atom is the tiger chip, just waiting to pounce, but, just like Itanium, Intel is terrified of hurting its desktop business. The specifics are very different, but the underlying reality is the same: like Microsoft so far (and unlike Apple), Intel can't see a way to cross the chasm between the money it's making today from chips of the past and the money it may one day make from chips of the future.

Agreed. Intel right now doesn't seem to want to challenge ARM by aggressively positioning the Atom core. Instead, Intel is bringing extremely low-voltage mainstream x86 parts (i.e. the 7W/10W Ivy Bridge) to challenge ARM. The high efficiency of those designs is indeed trumping ARM on performance in tablets, but they still consume more power and cost significantly more. Intel has seemingly ignored the idea that the smartphone and tablet markets are heavily driven by price, with performance only needing to be 'good enough' for simple usage scenarios. Anything more compute-demanding is likely running desktop software, where desktop user interfaces would be more appropriate (running them on faster desktop/laptop hardware is a nice bonus).

The amusing thing is that Intel has the technical, design, and manufacturing expertise to actually win in the mobile market if they were allowed to compete head-on without regard to their high margins. It seems that by the time Intel is willing to tell their shareholders that they need to reduce margins to take over the mobile market, the battle for that market will already have been won by ARM.

name99 wrote:

(b) Itanium was DELIBERATELY made insanely complicated. Intel started with a technical philosophy akin to RISC (although the details were different), namely "let's move all the complexity associated with superscalar execution from the CPU into the compiler, and simply offer a 'language' by which the compiler can indicate the superscalar details to the CPU". But then some bright MBA decided that a chip that was simple for Intel to design would be easy for anyone else to design, and so one bizarre item after another was added to complicate the chip. The end result was something every bit as complicated to design as an x86 --- but with a thousandth the size of the potential market. And so no money for a steady stream of annual tick-tock updates (and the MASSIVE pool of design engineers working in parallel to keep the thing competitive). And so IBM (starting with a cleaner slate in the form of POWER, with no demand for weird and pointless complexity) is able to remain basically competitive without untenable design effort.

Looking back at Itanium's architecture and design philosophy, something they added had to have crippled clock speed growth. Until the most recent Poulson chip, Itanium had not broken the 2 GHz mark. Itanium was supposed to run at roughly half the clock speed of their NetBurst chips, and the first few Itanium chips did roughly that. However, clock speeds on Itanium chips only moved from 1.6 GHz to 1.73 GHz over the course of nearly a decade. Itanium should have hit around 3 GHz on a 65 nm process.
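As a back-of-the-envelope check on that "around 3 GHz" expectation (my own rough reading, assuming naive Dennard-style scaling where clock speed grows roughly in proportion to the process shrink):

# Early Itanium 2 (McKinley) shipped at roughly 1 GHz on a 180 nm process.
base_ghz = 1.0
scale = 180 / 65                      # naive linear scaling down to a 65 nm process
print(f"{base_ghz * scale:.1f} GHz")  # ~2.8 GHz, i.e. "around 3 GHz"

Real designs rarely scale this cleanly, but it shows why the sub-2 GHz plateau looked so anomalous.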

name99 wrote:

The relevance is, once again, to Atom. As I've said before, the problem Intel has with Atom is not that the x86 overhead costs a lot of power or a lot of area, it's that it costs so much in terms of design and test manpower. And once again Itanium shows us that, for all Intel's skills, especially in fab, the insane levels of complexity in these chips make it much harder than people imagine to simply crank out a new and improved version. I, for one, find it hard to see how Intel could establish a pipeline manned as aggressively as the desktop pipeline to keep up the pressure on ARM. But without that level of manpower, all the arguments about how Intel COULD compete with ARM are moot.

Actually, the x86 overhead does cost enough power to be worth mentioning in the smartphone market. The wonderful thing for Intel is that their fabs are a generation ahead of all the other manufacturers', and Intel's x86 designers are exploiting that lead. The x86 power overhead is there, but it is currently mitigated by Intel's manufacturing advantage.

One approach Intel could use, and has already started using in a more limited fashion, is designing internal components as discrete building blocks. Intel currently has its own CPU blocks, L3 cache blocks, and GPU blocks. The ARM consortium has its own internal bus design that allows licensees to swap components or add new blocks as new technologies become available. I would fathom that there has been a bit of a disconnect between Intel's CPU design teams and its chipset design teams: when it came time to integrate the various IO that was typically handled by the chipset team, they couldn't just plug the IO blocks into the internal bus that the CPU design team had envisioned. Intel needs to be more aggressive in this area.
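As a toy illustration of the building-block idea above (the names and structure here are entirely hypothetical, not Intel's or ARM's actual interconnect), the key property is a shared bus interface that every block implements, so a new IO or GPU block can be attached without touching the CPU side:

from typing import Protocol

class BusDevice(Protocol):
    """Anything attachable to the shared internal bus."""
    def read(self, addr: int) -> int: ...
    def write(self, addr: int, value: int) -> None: ...

class Interconnect:
    """Routes accesses to whichever block claims an address range."""
    def __init__(self) -> None:
        self.devices: list[tuple[range, BusDevice]] = []

    def attach(self, addrs: range, dev: BusDevice) -> None:
        # Plugging in a new block requires no changes to existing blocks.
        self.devices.append((addrs, dev))

    def read(self, addr: int) -> int:
        for addrs, dev in self.devices:
            if addr in addrs:
                return dev.read(addr)
        raise ValueError(f"no device at {addr:#x}")

Swapping in a different cache or USB block then amounts to a different attach() call, which is the flexibility the comment credits to ARM's standard internal bus.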

name99 wrote:

The really pathetic thing is that this death by complexity in both cases (Itanium and Atom) was absolutely self-imposed. There was no technical reason for it, and Intel could have got pretty much everything it needed from both products by maintaining an engineering focus that was willing to look forward confidently, rather than backwards in paranoia, and design a much simpler chip. There's an interesting book waiting to be written here about Intel --- the technical side (fabs and architecture), the business side, and how the business side has managed to so warp the engineering side.

I'd love to read such a book. I suspect there would be a wide fear of 'not designed here' technology. It'd also have to dive into the real reasons why Intel's CEO is stepping down this year.

Agreed. Intel right now doesn't seem to want to challenge ARM by aggressively positioning the Atom core. Instead, Intel is bringing extremely low-voltage mainstream x86 parts (i.e. the 7W/10W Ivy Bridge) to challenge ARM. The high efficiency of those designs is indeed trumping ARM on performance in tablets, but they still consume more power and cost significantly more. Intel has seemingly ignored the idea that the smartphone and tablet markets are heavily driven by price, with performance only needing to be 'good enough' for simple usage scenarios. Anything more compute-demanding is likely running desktop software, where desktop user interfaces would be more appropriate (running them on faster desktop/laptop hardware is a nice bonus).

If Peter's article is any indication, ARM will take the "thin client" on one end, and the real computing horsepower will be in the cloud. As for who will power that, multiple players are vying for the position.

MIPS' main use in the market suddenly became the embedded space, where it has lingered.

Their processors are respectable, even if not a household name like ARM.

MIPS isn't bad in the embedded role. ARM clearly has the momentum now from a design and market standpoint. ARM's dominance in the smartphone/tablet market is basically a matter of offering the right designs at the right time to the right companies, right before the market exploded in popularity. The thing that held MIPS back from being in that position is that selling embedded CPU designs for use in system-on-a-chip devices was relatively new to MIPS and rather traditional for ARM. In the end it was business politics more than technical factors that kept MIPS from being in ARM's place.

The same could be said for Motorola/Freescale and IBM's embedded PowerPC business.

MIPS' main use in the market suddenly became the embedded space, where it has lingered.

Their processors are respectable, even if not a household name like ARM.

MIPS isn't bad in the embedded role. ARM clearly has the momentum now from a design and market standpoint. ARM's dominance in the smartphone/tablet market is basically a matter of offering the right designs at the right time to the right companies, right before the market exploded in popularity. The thing that held MIPS back from being in that position is that selling embedded CPU designs for use in system-on-a-chip devices was relatively new to MIPS and rather traditional for ARM. In the end it was business politics more than technical factors that kept MIPS from being in ARM's place.

In addition, ARM was strongly bolstered by Intel's support of the ISA, which pushed it into a lot of consumer-facing mobile devices (e.g. a lot of early-to-mid-2000s Windows CE devices). At that point it was a no-brainer for Apple to pick it up for the iPods. Then, rather than start from scratch, a lot of the iPod bits found their way into the first iPhones, which naturally ran ARM. During this time ColdFire, MIPS, and SH fell behind.

Meanwhile, Android was supposed to run on lots of ISAs (using Java), but everything but ARM was quickly deprecated because of the huge momentum that ARM had.

Agreed. Intel right now doesn't seem to want to challenge ARM by aggressively positioning the Atom core. Instead, Intel is bringing extremely low-voltage mainstream x86 parts (i.e. the 7W/10W Ivy Bridge) to challenge ARM. The high efficiency of those designs is indeed trumping ARM on performance in tablets, but they still consume more power and cost significantly more. Intel has seemingly ignored the idea that the smartphone and tablet markets are heavily driven by price, with performance only needing to be 'good enough' for simple usage scenarios. Anything more compute-demanding is likely running desktop software, where desktop user interfaces would be more appropriate (running them on faster desktop/laptop hardware is a nice bonus).

If Peter's article is any indication, ARM will take the "thin client" on one end, and the real computing horsepower will be in the cloud. As for who will power that, multiple players are vying for the position.

The amusing thing is that three years later things really haven't changed much. The bottleneck is still the network, though not to the same degree. ARM's increase in compute capabilities has been used to enhance the end-user experience. More passive features, like storage, have successful cloud applications, as they just depend on a network being there, not necessarily on its quality or speed.

As for the cloud backend, it is currently x86 and will remain so over the short term. ARM has announced, but has yet to ship, 64-bit chips. For low-end web servers, where 64-bit support isn't absolutely necessary (especially when web traffic is easily distributed across numerous 32-bit instances), ARM has a chance of breaking through. The difficulty is scaling up in the server market, where RAS features become increasingly important and performance less so. High-end x86 servers today offer features like hot-swap memory in the event of a DIMM failure. Future x86 chips are expected to offer hot-plug processors and hardware lock-step execution mirroring for redundancy. I personally haven't seen many ARM boards with expandable memory, much less ECC or hot-plug capabilities. The more important the cloud becomes, the more important the RAS features of the cloud hardware become. ARM can bridge that gap over the long term but has no chance in the short term.

x86 does have some competition in the back end. IBM's POWER line is going strong and is one of the few architectures right now to best Intel's x86 offerings in performance. POWER has also been inheriting RAS features from IBM's mainframe line. Speaking of which, IBM's mainframes are no longer embarrassingly slow, just slow... and virtually immortal. Oracle has inherited the SPARC architecture and has been pushing out new designs that aren't performance-leading but are competitive, with similarly high RAS features. The problem IBM and Oracle have is the same one Intel is facing in the mobile sector: pricing. Neither IBM nor Oracle wants to sacrifice their hardware margins to drive up market share. This is a bit worrisome, as the UNIX market as a whole is shrinking, but IBM and Oracle are sitting steady as HP customers migrate either to them or to Linux on x86.

MIPS' main use in the market suddenly became the embedded space, where it has lingered.

Their processors are respectable, even if not a household name like ARM.

MIPS isn't bad in the embedded role. ARM clearly has the momentum now from a design and market standpoint. ARM's dominance in the smartphone/tablet market is basically a matter of offering the right designs at the right time to the right companies, right before the market exploded in popularity. The thing that held MIPS back from being in that position is that selling embedded CPU designs for use in system-on-a-chip devices was relatively new to MIPS and rather traditional for ARM. In the end it was business politics more than technical factors that kept MIPS from being in ARM's place.

In addition, ARM was strongly bolstered by Intel's support of the ISA, which pushed it into a lot of consumer-facing mobile devices (e.g. a lot of early-to-mid-2000s Windows CE devices). At that point it was a no-brainer for Apple to pick it up for the iPods. Then, rather than start from scratch, a lot of the iPod bits found their way into the first iPhones, which naturally ran ARM. During this time ColdFire, MIPS, and SH fell behind.

Meanwhile, Android was supposed to run on lots of ISAs (using Java), but everything but ARM was quickly deprecated because of the huge momentum that ARM had.

True, but one interesting tidbit missing from that time frame is that Intel sold off its ARM division to Marvell. ARM had gotten support early on from Intel's acquisition of StrongARM from DEC, which morphed into the XScale line in the early 2000s. But as the iPhone was nearing release, Intel ditched its XScale line in favor of its then-unannounced Atom, which shipped a few years later.

As pointed out, much of the iPod's technology found its way into the first iPhone. Intel could have wedged its way into the first phone via XScale, as Apple and Intel had a good relationship after pulling off the PowerPC-to-Intel transition for Macs. Intel would love for Atom to become the centerpiece of the iPhone now. This isn't just to get a foothold in the mobile market but also to protect Intel's x86 efforts on the Mac side. Apple is rumored to be further merging iOS and OS X to the point of a common hardware platform. The fear is that Apple will drop x86 support on low-end Macs and go ARM. Further bolstering Intel's fears, Apple isn't just an ARM chip customer anymore: Apple is an ARM CPU designer. Losing Apple wouldn't mean the loss of Intel's biggest customer, but rather one of its highest-profile customers. With the PC market contracting now, this would have implications for the rest of the ailing PC market.

A few people seem to think VLIW was only in desktop chips. TI has shipped hundreds of millions of VLIW DSP chips - the TI C6000 series, either as standalone DSP chips or in a lot of their OMAP chips, OMAP3 and 4 (in a fair few phones and tablets): http://www.ti.com/lsds/ti/dsp/c6000_dsp/overview.page

VLIW is used in a few other DSP chips, and was in a good lot of set-top boxes and TVs (some of ST's chips and some from NXP): the ST200 series (40 million+ shipped), NXP TriMedia, and Fujitsu also has a VLIW core (the FR series, in some digital cameras - Leica?).

A few years back (2008-2010) most mobile base stations had VLIW cores (TI). There are still a good few, but a lot more FPGAs are used now.

Wasn't IA-64 supposed to be a from-the-ground-up redesign of what a modern CPU should be, without having to support all of the legacy x86 bits? If Itanium, and therefore IA-64, had captured relevant market share, would it have eventually trickled down into the desktop space and replaced x86 as the de facto standard for general computing? Without having to hang on to the legacy x86 stuff, might we all actually be in a better place with more powerful systems today had Itanium not floundered?

Back in the day, HP was using its own RISC CPUs (PA-RISC) for its high-end products (with success). But they became convinced that using out-of-order execution as a means to achieve higher performance would hit a brick wall, and developed the idea of creating a new ISA with better support for explicitly expressing parallelism (EPIC, Explicitly Parallel Instruction Computing). And they decided they did not want the burden of developing CPUs alone, so they took their research to Intel, which liked the idea, and IA-64/Itanium was born.

At the end of the day, the theoretical performance advantage of Itanium over RISC/CISC OoO CPUs never materialized. AMD's x86-64 and later Intel's own x86-64 CPUs pretty much wiped the floor with it on the low/mid end. On the high end, IBM's POWERx also pretty much wiped the floor with it, though maybe more due to superior system integration than CPU performance itself.

As for the cloud backend, it is currently x86 and will remain so over the short term. ARM has announced, but has yet to ship, 64-bit chips. For low-end web servers, where 64-bit support isn't absolutely necessary (especially when web traffic is easily distributed across numerous 32-bit instances), ARM has a chance of breaking through.

True, but let's not forget Nvidia's entry into the "gaming server" end of "gaming thin clients". It's a niche, and a very specialized one. But as GPGPU has shown, there is a market there, and Intel's not in it. ATI could be, since they have all the pieces.
