I remember at the time asking one of them what made them think that VLIW was going to be a good idea… this time… when it had failed every other attempt?

AMD/ATI R500 and later GPUs use VLIW, so it isn't totally worthless. VLIW doesn't work well with statically compiled code, as on the Itanium: it is difficult to predict exactly which set of instructions to bundle together when the data isn't available, and for a statically compiled program it is not. On the R500 and later, however, the driver compiles the code at runtime, so it can exploit the advantages VLIW offers while minimizing the disadvantages, because the data is in hand.
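To make the bundling problem concrete, here's a toy sketch in Python (nothing to do with AMD's or Intel's actual compilers; the op format and dependence model are made up for illustration). A greedy scheduler packs operations into VLIW bundles, and a chain of dependent ops defeats the packing entirely, which is exactly the kind of thing a static compiler can't always predict:

```python
# Toy sketch (hypothetical model, not a real VLIW compiler): greedily pack
# a linear stream of ops into VLIW bundles. Each op is (dest, srcs); an op
# can join the current bundle only if it doesn't read a result produced
# earlier in that same bundle. Fewer dependences -> wider bundles.

def pack_bundles(ops, width=4):
    """Greedily pack ops into bundles of up to `width` independent slots."""
    bundles, current, written = [], [], set()
    for dest, srcs in ops:
        # Start a new bundle if the current one is full, or if this op
        # depends on a result written earlier in the current bundle.
        if len(current) == width or any(s in written for s in srcs):
            bundles.append(current)
            current, written = [], set()
        current.append((dest, srcs))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

# Independent ops pack densely; a dependent chain forces one op per bundle.
independent = [("r1", []), ("r2", []), ("r3", []), ("r4", [])]
chain = [("r1", []), ("r2", ["r1"]), ("r3", ["r2"]), ("r4", ["r3"])]
print(len(pack_bundles(independent)))  # 1 bundle holding all 4 ops
print(len(pack_bundles(chain)))        # 4 bundles of 1 op each
```

The interesting case is everything in between those two extremes, where knowing the actual data and machine state at compile time makes the difference between full and half-empty bundles.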

The presence of that runtime compiler is also probably why IL is the lowest level you'd want to target for GPGPU on AMD GPUs: the actual low-level code generated by the compiler will vary with the conditions at runtime.
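In the same toy spirit (again a made-up model, not AMD's actual IL), here's why targeting anything below IL is shaky: the same abstract op stream lowers to different machine-level schedules depending on a parameter the runtime compiler knows but you don't, with bundle width standing in for all such runtime conditions:

```python
# Toy illustration (hypothetical, not AMD's IL): lowering the same "IL"
# stream of already-independent ops onto machines with different bundle
# widths yields different low-level code.

def lower(il_ops, width):
    """Split a stream of independent IL ops into width-sized bundles."""
    return [il_ops[i:i + width] for i in range(0, len(il_ops), width)]

il = ["mul", "add", "mad", "mov", "add", "mul"]
print(lower(il, 5))  # one full 5-wide bundle, plus a leftover op
print(lower(il, 4))  # a different split on a 4-wide machine
```

If your hand-written code baked in one of those layouts, it would be wrong on the other machine; stopping at IL lets the driver pick.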

Could you imagine what the world would have been like if Intel had really had its way at the turn of the century? All the servers and big iron would be Itanium. We'd all be on Pentium-whatevers, or I guess desktop Itaniums by now. I wonder if we'd even have an integrated memory controller yet? We'd probably be using Rambus DDR RAM too. Maybe we'd even be using an Intel video card with a bunch of Pentium CPUs glued onto it.

And while Intel relishes this thought of "their" world, this is the one thing I never want to see happen. I'm glad that for once, somehow, some things didn't quite work out for Intel in this grand scheme of theirs. I wish Intel the best of luck and innovation ahead, but I wish them that luck with AMD and other companies around to challenge them and keep them honest, or at least accountable.