More than a decade ago, Intel ran into an issue trying to deliver what was to be the world's top-ranked supercomputer: it looked possible that the new Pentium Pro processors at the heart of the system might not arrive in time. As a result, the chipmaker made an unusual move, paying Hewlett-Packard $100,000 to evaluate building the system with HP's PA-RISC processors instead, said Paul Prince, now Dell's chief technology officer for enterprise products but then Intel's system architect for the supercomputer. Called ASCI Red and housed at Sandia National Laboratories, it was designed to be the first supercomputer to cross the threshold of a trillion math calculations per second.

Just a simple question. What is the exact relationship between Itanium and PA-RISC?

Itanium started out as an HP project to create a successor to PA-RISC; hence, from the start, a compatibility layer was part of the design so that people could run PA-RISC binaries on Itanium unmodified.

The problem with Itanium is that it placed far too much hope in the skill of compiler engineers: HP believed that all the scheduling decisions conventional hardware resolves at run time could instead be handled at compile time. The net result is what we've seen today: lacklustre CPU performance that seems more a by-product of university theory than of business practicality.

Btw, this isn't the first time a VLIW-like processor has been attempted; it's one of those ideas that came out of academia when it should have stayed there. Nice on the blackboard when teaching kids, but in practice it throws out reality in favour of a perfect set of scenarios.

There are plenty of VLIW processors out there, especially in the GPU/DSP/embedded arena. Be careful when making such blanket statements about VLIW as a viable idea.

I am talking about VLIW in regard to CPUs, not GPUs or DSPs. Intel tried it with the i860, Sun created the MAJC processor, and I believe IBM may have experimented with it at one stage as well. VLIW used for a general-purpose CPU is ultimately an epic failure: its performance depends on a perfect stream of data flowing into the processor, which of course rarely happens in reality.

Does the VLIW design have hope in specialised areas like GPUs or encryption acceleration? Sure, but I never suggested that it couldn't be used in specialised roles.