What at first seemed like a relatively minor decision by Oracle to stop creating software meant to run on machines containing Intel’s technically sophisticated, but not widely used Itanium processors has escalated into a full-blown war of words between Oracle and Hewlett-Packard.

First came the initial announcement. Then came responses, first from Intel, indignant at Oracle's suggestion that the Itanium processor was on a death watch, then from HP, saying it was "shocked" at Oracle's decision. HP later went on, in comments to Bloomberg, to describe Oracle's move as a "shameless gambit" to harm competition.

That is very easy to answer, but there are multiple answers that need to be considered at the same time.

In no particular order:

1. Intel/HP created a new instruction set, which of course meant that everything had to be recompiled or rewritten to run without emulation.

2. AMD released its 64-bit extension for x86, and Itanium suddenly lost a chunk of market share; it lost even more a couple of years later when Intel adopted AMD's 64-bit extension.

3. HPC (high-performance computing) in various forms started eating into the market traditionally dominated by heavy-duty processors with non-x86 instruction sets, Itanium included. This was primarily due to the price/performance advantage x86 clusters had over Itanium and the others.

4. Raw performance: for decades people craved more and more performance, and some can now hit their required level of performance with x86-64.

5. Software support: Intel told companies like Oracle, MS and Red Hat that it was "focusing on x86 chips". That was the beginning of the end for Itanium, and Intel started it. No doubt Intel will in the future announce that it is finally dropping Itanium due to a lack of software support.

SPARC is open and there's some competition (or there used to be). I don't know who's actively making Power hardware or how compatible the various products are, but I bet there's still more than one big corporation involved. So they're not like Itanium. Plus, there's lock-in and there's lock-in: it would take a special kind of fool to trust Intel nowadays.

SPARC and Power have a historical base. They used to provide clear benefits and to some extent they still do.

Back in the Apple PowerPC days, and again with Itanium, wasn't x86 supposed to run out of steam? It had inherited so much legacy cruft that a clean-room implementation was supposed to be faster.

That was what I heard as well. It (obviously) didn't happen in reality. I suspect a large reason was simply the power of the x86 market; creating an entirely new market is very expensive. I believe the number being touted 10 years ago was something like 2 billion USD from Intel, plus an unknown amount from HP.

Now when you look at the revenue made on sales of this new product, it does not look good. Have a look at what AMD did by adding 64-bit support to the x86 architecture. Did they spend 2 billion? Who knows. What I do know is that they sold vastly more chips and, more importantly, earned much greater revenue.

I personally believe that Itanium was doomed from the outset because it had to emulate x86. Why did they not integrate an x86 component to avoid emulation? Why did they not make it 128-bit from the outset? 64-bit merely competed with existing technology rather than surpassing it. It was also doomed from a manufacturing standpoint: the best part of a decade ago they had already made a 1-billion-transistor Itanium. It was not a great architecture either, and again I am not sad to see it going away, and neither is Intel.

As far as PowerPC for the Mac is concerned, the same points apply: it's not x86, and the revenues were not there to fund investment; they were for x86, so PowerPC has bitten the dust. They were essentially on a knife edge for years, because I can't even name another user of PowerPC chips other than Apple.

Quote:

Wasn't the DEC Alpha an example of what could be done with RISC?

DEC were destroyed by arseholes (HP, I think it was). They had some of the best ideas and performance, and were going places even with minimal investment, but they were sadly destroyed.

Regarding RISC, remember that part of the core of AMD processors some years ago was (is?) based on a RISC architecture that they licensed from Alpha, along with a bus architecture that they stopped using once they had developed HyperTransport.

The interesting things for the future are ARM and GPGPU, along with, of course, how much more can be squeezed out of x86. 128-bit, perhaps? AMD is already there (Bulldozer) with a 128-bit floating-point unit.

Quote:

As far as PowerPC for the Mac is concerned, the same points apply: it's not x86, and the revenues were not there to fund investment; they were for x86, so PowerPC has bitten the dust. They were essentially on a knife edge for years, because I can't even name another user of PowerPC chips other than Apple.

PowerPC was based on early IBM RISC designs and has been used for 20 years on their midrange UNIX servers. Today the IBM Power architecture can run AIX (IBM's UNIX), Linux, and iSeries (formerly known as the IBM AS/400) operating systems. It is a wildly successful architecture for IBM in the high-end server market; it powered the machine that beat reigning World Chess Champion Garry Kasparov in 1997, and the machine that this year won the Jeopardy challenge against the two best human Jeopardy players on the planet. IBM Power CPUs run many of the most powerful supercomputers in the world and are widely used by large corporations. Not sure why you think Apple is/was the only one to use it.

There are a lot of theories as to why Apple switched to Intel from PowerPC. One of Apple's public claims was that the PowerPC ran too hot for laptops (although IBM made PowerPC laptops running AIX for years). Another theory is that Apple just got a better pricing deal with Intel. Also consider that manufacturing processors is not a big business for IBM, and it is hard to predict whether they will even be making CPUs in-house 5-10 years down the road. That is one reason why Apple insisted on the IBM/Motorola partnership for PowerPC manufacturing when Apple first started using it: they did not trust that IBM (at one time a competitor of Apple in the PC market) would be motivated to supply them with enough chips in the long term. IBM has tried several times to sell some of its chip manufacturing plants (or partner with others), so their commitment to processor manufacturing has been suspect over the years. One thing we know with reasonable certainty is that Intel will always be making processors, so it was a safe move for Apple.

But as noted above, the current Power7 processors (IBM's latest incarnation of the PowerPC CPU) make for killer machines. Just ask Ken Jennings and Brad Rutter for their opinion.

Quote:

DEC were destroyed by arseholes (HP, I think it was). They had some of the best ideas and performance, and were going places even with minimal investment, but they were sadly destroyed.

DEC was already on life-support by the time Compaq (not HP) bought them. That is why Compaq could afford to buy them. HP bought Compaq later.

I think DEC's main problem was that they concentrated mainly on high-end scientific workstations and minicomputers rather than the general business and consumer market. The market for scientific workstations is relatively small compared to the explosion of computers used for general business and consumers starting in the mid-1990s. DEC also got hurt by the ability of Intel/AMD to run UNIX and Linux much more cheaply than DEC, which made it hard for DEC to compete on price. When MS dropped support for NT running on the DEC Alpha chips, that was the beginning of the end for DEC. Other companies that focused almost exclusively on the high-end workstation/scientific market didn't do so well either. It was mostly Wintel that killed DEC, not Compaq/HP.

The only reason why Apple survived (not being part of the Wintel freight train) was because of the iPod, iBook, iPhone, iPad, iTunes/Store (which focused on consumers instead of high-end workstation markets), and not because of the Mac. The Mac would be dead by now if not for the other products.

Basically he says that in theory it was better, but in real-world usage it strained to keep up with the older Intel architecture... and was always inferior.

He ought to know. He even worked for a RISC company.

Depends on what it is used for. Early RISC systems worked better for scientific computing and not so well for business or consumer systems. Not sure that holds true anymore, at least regarding PowerPC (IBM's Power7 being the latest incarnation).

If both Larry Ellison and Linus Torvalds can agree on something, it must be right. No?

Torvalds is a pretty credible source. He ports operating systems from hardware platform to hardware platform for a living. He and his team have more experience doing this than I believe any other team in the world.

You can't do that without being intimately familiar with each CPU's individual warts and all. And he worked for a RISC hardware company for a while as well.

The point he is making isn't whether or not PowerPC or Itanium might be theoretically better. They are. Theoretically.

It is just that in his personal experience, theory is just that: theory. From his personal experience (a pretty vast one), the x86 architecture, while ugly... with all its warts... works. RISC can work, but works at a disadvantage to x86.

I am not defending Itanium. But you seemed to suggest that Torvalds has problems with all RISC processors. I personally don't know about any problems with RISC, nor do I have a stake in it; I am just curious whether he really has a problem with PowerPC or just Itanium. I have used many systems running RH Linux on PowerPC and never noticed any issues compared to running RH Linux on an Intel or AMD platform.

With regard to Larry Ellison, Oracle supports a very large number of hardware and operating system platforms. That does not mean that Oracle recommends any of them. The decision to drop support for Itanium is market-driven (based on how many customers are actually using Itanium and will continue to do so in the future) and not any reflection of what Ellison or Oracle thinks about it. Oracle will support any OS where there are a lot of potential Oracle database customers.

Quote:

I am not defending Itanium. But you seemed to suggest that Torvalds has problems with all RISC processors.

It appears that he does. It is not anything you or I would be in a position to notice; it is pretty hard for us to do a clean A/B comparison.

Someone who spends their time optimizing code to run on hardware, or optimizing hardware to run code: those are the people to whom the relative weaknesses and strengths become evident. It may be that he was specifically targeting Itanium, but his remarks seem applicable to all RISC instruction-set CPUs. The premise underlying the advantages of RISC appears to have been a mirage. It wasn't evident at the time, but through trial and error it has become evident.

The article ends: In what could be the best news for AMD, Torvalds summarised his thoughts on Itanium. "Code size matters. Price matters. Real world matters. And ia-64 at least so far falls flat on its face on ALL of these."

I read the Torvalds link, and although he has negative comments about specific RISC processors, he seemed to say that IBM may avoid those problems with their RISC processors, so I am not sure exactly what he is saying.

With regard to the comments about AMD, I don't think that the demise of Itanium is going to be a panacea for AMD. Intel already makes (and sells) many x86-64 processors for servers (in addition to desktops). AMD already has stiff competition from Intel in the x86-64 arena.

Quote:

I read the Torvalds link, and although he has negative comments about specific RISC processors, he seemed to say that IBM may avoid those problems with their RISC processors, so I am not sure exactly what he is saying.

Here is my take on it (which admittedly is not that well informed). I used to know a little bit about this... and I now know even less. But here goes:

1. He thinks these are important points: "Code size matters. Price matters. Real world matters."

2. "Code size matters." Almost by definition, RISC code is going to be bulkier. Smaller code speeds everything up: it takes less time to move down a bus, and it effectively increases the size of the cache. Apparently this is more important than I thought; Torvalds puts it first.

3. Price matters. This is perhaps Itanium-specific? IBM, Sun and Itanium chips are probably some expensive silicon. On the other hand, ARM chips aren't.

4. Real world matters. That covers a lot of ground. It could mean the following:

If you have something complex that needs to be executed, you can do it in software or in hardware. RISC moves it to the software, which permits the hardware to be simpler.

Seems like the x86 architecture is evolving in the other direction. For example, Sandy Bridge has new instructions to speed up encoding.

ARM has done well because it doesn't use all these fancy extra instructions. As a result it keeps the die size down, and with less silicon it uses less electricity. RISC permits it to do that.

But in recent generations they seem to be doing a good job of shutting down silicon that isn't being used. So you can have lots of extra specialized silicon that speeds things up but doesn't draw extra power except when in use; and when in use, it does a lot more per cycle and a lot more per watt.

Reducing the silicon footprint is perhaps no longer an advantage.

5. IBM

If Torvalds believes the IBM chip doesn't suffer from RISC's drawbacks to the same extent as Itanium, I would suspect that is because it isn't as pure a RISC architecture. It is pretty clear to me that Torvalds's problem with Itanium isn't that it is a poor implementation of RISC but that it is an exemplary one. They went all the way; that is the basis of his objection. If they put back some of the missing instructions, it would just make it less bad.

6. Direction of Technology

It seems to me that the direction CPU technology is taking is to use more and more silicon as a substitute for GHz. This used to cost wattage. Now the silicon can be lit up, turned down, or switched off as needed, which reduces the burden of carrying that extra silicon.

RISC used to be a route to reduced energy usage. The future route will be to get more done per work cycle and have more idle cycles.
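
Point 2 above ("code size matters") can be given rough numbers. This is a back-of-the-envelope sketch, not a measurement: the 3.5-byte x86 average is an illustrative assumption (real averages vary by workload), while the IA-64 figure follows from its 128-bit bundles holding three instructions.

```python
# Back-of-the-envelope: how many instructions fit in a 32 KiB instruction
# cache under different encodings. Average sizes are illustrative assumptions.
ICACHE_BYTES = 32 * 1024

x86_avg_bytes = 3.5     # variable-length x86 encoding, assumed average
risc_bytes = 4.0        # classic fixed 32-bit RISC instruction
ia64_bytes = 16 / 3     # IA-64: three instructions per 128-bit bundle

for name, size in [("x86", x86_avg_bytes), ("RISC", risc_bytes), ("IA-64", ia64_bytes)]:
    print(f"{name:6s} ~{ICACHE_BYTES / size:,.0f} instructions per 32 KiB icache")
```

On these assumed numbers, the denser x86 encoding fits roughly 50% more instructions in the same cache than IA-64 does, which is the "icache win" Torvalds talks about below.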

I get the impression from the Torvalds article that there are two x86 64-bit implementations. Intel only makes a disabled version available on the market. The other, which I think the article refers to as the Hammer, exists but you can't buy it. Apparently Torvalds has access to it and believes the existing 64-bit x86 implementation is a kludge that needlessly shelters Itanium from enterprise competition with x86.

I know that enterprise vs desktop is much more complex than that. But perhaps there is still some kind of hope for AMD????

Quote:

I get the impression from the Torvalds article that there are two x86 64-bit implementations. Intel only makes a disabled version available on the market. The other, which I think the article refers to as the Hammer, exists but you can't buy it. Apparently Torvalds has access to it and believes the existing 64-bit x86 implementation is a kludge that needlessly shelters Itanium from enterprise competition with x86.

I know that enterprise vs desktop is much more complex than that. But perhaps there is still some kind of hope for AMD????

Nah.

I am not an expert on hardware (I do software for a living), although I use many large Intel x86-64 servers at work (running Linux) and I personally own 3 small systems with AMD x86-64 running Linux and Windows (some dual-boot). I was under the distinct impression that Intel x86-64 is a different chip than Itanium.

As I mentioned before, the fact that Oracle is dropping future support for Itanium is an indication that corporations are not using it much anymore (if they ever did). I have not seen Itanium used where I work, which I know because when I download vendor software for Linux that runs on these servers, there are different versions for Intel/AMD x86-64 (same software version for both), and Intel Itanium 64-bit.

He goes on to write, "As far as I know, the _only_ things Itanium 2 does better on is (a) FP kernels, partly due to a huge cache and (b) big databases, entirely because the P4 is crippled with lots of memory". That crippling with lots of memory is due to what many people describe as a major kludge in the Pentium architecture called Physical Address Extension (PAE). According to Torvalds, "the only real major failure of the x86 is the PAE crud".

There are quite a few Xeons in particular that you will see advertised with 8GB or 16GB of memory. Astute observers will have wondered how a 32-bit processor can address more than 4GB of memory. PAE is the answer. It allows 36-bit addressing using 'pages' of memory. According to Torvalds, the Pentium 4 is crippled "because Intel refuses to do a 64-bit version (because they know it would totally kill ia-64)."
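
The addressing arithmetic behind those numbers can be sketched in a few lines. Nothing here is specific to any one Xeon; the 2+9+9+12 address split is the standard PAE layout for a 32-bit virtual address.

```python
# 32-bit physical addressing vs PAE's 36-bit physical addressing.
GiB = 2**30

plain_32bit = 2**32   # 4 GiB addressable without PAE
pae_36bit = 2**36     # 64 GiB addressable with PAE

print(f"32-bit: {plain_32bit // GiB} GiB, PAE: {pae_36bit // GiB} GiB")

# PAE widens page-table entries to 64 bits and adds a third level of
# translation: 2 bits (PDPT index) + 9 (directory) + 9 (table) + 12
# (page offset) = a 32-bit virtual address. Each process still sees
# only a 32-bit virtual space; PAE widens *physical* addresses only,
# which is part of why it is described as a kludge.
assert 2 + 9 + 9 + 12 == 32
```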

In article <[email protected]>, William Lee Irwin III <[email protected]> wrote:

>On Sun, Feb 23, 2003 at 12:07:50AM -0800, David Lang wrote:
>> Garrit, you missed the preior posters point. IA64 had the same fundamental problem as the Alpha, PPC, and Sparc processors, it doesn't run x86 binaries.
>
>If I didn't know this mattered I wouldn't bother with the barfbags. I just wouldn't deal with it.

Why? The x86 is a hell of a lot nicer than the ppc32, for example. On the x86, you get good performance and you can ignore the design mistakes (ie segmentation) by just basically turning them off. On the ppc32, the MMU braindamage is not something you can ignore, you have to write your OS for it and if you turn it off (ie enable soft-fill on the ones that support it) you now have to have separate paths in the OS for it.

And the baroque instruction encoding on the x86 is actually a _good_ thing: it's a rather dense encoding, which means that you win on icache. It's a bit hard to decode, but who cares? Existing chips do well at decoding, and thanks to the icache win they tend to perform better - and they load faster too (which is important - you can make your CPU have big caches, but _nothing_ saves you from the cold-cache costs).

The low register count isn't an issue when you code in any high-level language, and it has actually forced x86 implementors to do a hell of a lot better job than the competition when it comes to memory loads and stores - which helps in general. While the RISC people were off trying to optimize their compilers to generate loops that used all 32 registers efficiently, the x86 implementors instead made the chip run fast on varied loads and used tons of register renaming hardware (and looking at _memory_ renaming too).

IA64 made all the mistakes anybody else did, and threw out all the good parts of the x86 because people thought those parts were ugly. They aren't ugly, they're the "charming oddity" that makes it do well. Look at them the right way and you realize that a lot of the grottyness is exactly _why_ the x86 works so well (yeah, and the fact that they are everywhere).

The only real major failure of the x86 is the PAE crud. Let's hope we'll get to forget it, the same way the DOS people eventually forgot about their memory extenders.

(Yeah, and maybe IBM will make their ppc64 chips cheap enough that they will matter, and people can overlook the grottiness there. Right now Intel doesn't even seem to be interested in "64-bit for the masses", and maybe IBM will be. AMD certainly seems to be serious about the "masses" part, which in the end is the only part that really matters).

Quote:

I get the impression from the Torvalds article that there are two x86 64-bit implementations. Intel only makes a disabled version available on the market. The other, which I think the article refers to as the Hammer, exists but you can't buy it. Apparently Torvalds has access to it and believes the existing 64-bit x86 implementation is a kludge that needlessly shelters Itanium from enterprise competition with x86.

I haven't read the article, but Hammer was the codename for the K8 design, which is AMD64; hence the early core names Sledgehammer and Clawhammer. Intel uses a slightly modified implementation, originally codenamed Yamhill.

Quote:

I haven't read the article, but Hammer was the codename for the K8 design, which is AMD64; hence the early core names Sledgehammer and Clawhammer. Intel uses a slightly modified implementation, originally codenamed Yamhill.

Yes. I noticed that after the fact. It still seems like Torvalds finds a material deficiency in the Intel 64 architecture... at least for enterprise database purposes.

Quote:

I haven't read the article, but Hammer was the codename for the K8 design, which is AMD64; hence the early core names Sledgehammer and Clawhammer. Intel uses a slightly modified implementation, originally codenamed Yamhill.

Yes. I noticed that after the fact. It still seems like Torvalds finds a material deficiency in the Intel 64 architecture... at least for enterprise database purposes.

Probably due to scalability. The off-die memory controller used by early Intel x86_64 implementations does not scale to multi-socket systems well at all. The K8 and especially K10 cores scale extremely well.

Quote:

Probably due to scalability. The off-die memory controller used by early Intel x86_64 implementations does not scale to multi-socket systems well at all. The K8 and especially K10 cores scale extremely well.

Torvalds refers to the "PAE crud". The other article refers to that as "a major kludge in the Pentium architecture called Physical Address Extension (PAE)" that limits its efficiency in addressing large memory. Is that what you are referring to with the "off-die memory controller"? It seems Torvalds's complaint is that it is still with us.

Quote:

Torvalds refers to the "PAE crud". The other article refers to that as "a major kludge in the Pentium architecture called Physical Address Extension (PAE)" that limits its efficiency in addressing large memory. Is that what you are referring to with the "off-die memory controller"? It seems Torvalds's complaint is that it is still with us.

No, it is not. PAE is an x86 extension which is present on nearly every x86 CPU since the Pentium Pro. It serves no purpose in long mode, and is not used.
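
For contrast, a quick sketch of why PAE is redundant in long mode: the 9-9-9-9-12 split below is the standard x86-64 four-level paging layout introduced with the K8.

```python
# x86-64 long mode: four levels of 512-entry page tables (9 index bits
# each) plus a 12-bit page offset give a 48-bit virtual address.
levels = 4
bits_per_level = 9
page_offset_bits = 12

virtual_bits = levels * bits_per_level + page_offset_bits
print(f"virtual address bits: {virtual_bits}")           # 48
print(f"virtual space: {2**virtual_bits // 2**40} TiB")  # 256 TiB
```

Since this scheme reaches well beyond PAE's 36-bit physical limit as a matter of course, long mode simply has no use for PAE.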

ces, no offense but I seriously think you should rewind your life back to 1992, and pay more attention the second time through. Make that 1982 or even 1972 if you want to insist on making such self-important encyclopaedic posts. Honestly if you don't understand the basics and the history you've got no hope of understanding what Torvalds is saying, nor those who know more than Torvalds (of which there are many).

I've about had it with the recent lack of respect between those with differing viewpoints.

Disagree with each other (vehemently if you must), but at least stay away from the personal attacks. A simple, "I disagree, and here's why..." is all that's necessary to convey your point and still promote healthy discussion.

But if the purpose is to bully someone into not responding....

I've seen quite a few people from the US, UK, and some from non-English-speaking countries with this attitude. I can forgive the non-English speakers on the basis that they can't effectively convey their point through translation, but native speakers should know better.

If we're talking about Linus Torvalds' history and the history of RISC and the IA64 architecture, then I can assume we're all old enough to have jobs and know what it's like to work in a professional environment. Why we are not treating this forum (one of my favorites, because of the respect most members have for one another) like we would a meeting among colleagues, escapes me.
