Well before Oracle was even close to buying Sun Microsystems, the company was kind enough to tweak its per-core pricing for its eponymous database software to make it competitive on processors with fewer threads and higher clock speeds. Now, the company is making its software pricing less attractive on servers using Itanium 9300 …

Admission of defeat by Oracle?

1) An admission by Oracle that HP's Itanium 9300 cores are 8x as fast as T3 cores

-or-

2) An admission by Oracle that their SPARC business is failing and they need to give it a crutch to stand on

- the second one is questionable in terms of fair competition and Oracle abusing their market position, but I won't hold my breath on anything happening on that front.

The crazy thing here is that if Larry stopped and thought about this for one moment he'd realise there are only 2 winners in this play:

IBM - who must be killing themselves laughing at this

Microsoft - no doubt HP will push SQL Server on scale up x86 harder in accounts where these core factor issues become a problem

Either way, for customers already running anything but SPARC this is not good news - I will certainly be asking for an "explanation" from my Oracle account manager on why he's hiking my database costs for no good reason.

RE: Admission of defeat by Oracle?

Actually, this is a bit of a non-story, just more of TPM's "bash-Itanium-at-any-cost" mantra. Seeing as the core count on the new Tukzilla Itaniums has doubled per socket compared to the older Montecito/Montvale dual-cores, Oracle is simply maintaining the status quo. In fact, seeing as the new cores have more oomph than the old cores, which means you can do more Oracle grinding with fewer cores, Oracle will probably see LESS money per socket out of the new hp Integrity servers than the previous generation even after the new pricing. And as hp has halved the price of hp-ux licences, the result is overall a lower cost for new Oracle instances on hp Integrity regardless. It's simply a non-issue, as TPM should know. What would have been a declaration of war is if Larry had quadrupled the per-core cost of Oracle on Tukzilla and also doubled it on the older Itaniums. Cutting the costs on SPARC was an obvious and long-expected tactic. Can I suggest TPM looks for some real news?

Wrong

Matt....good to see you are alive.

Oracle just doubled the cost of all of their software on Itanium systems. This is big news for any HP customer who still buys Itanium.

The new cores don't have more oomph than the old cores. The chip has 2X the performance, but that is because it has twice the cores. The big reason they don't have more oomph is that the EPIC architecture requires large amounts of cache:

Montvale: 24MB cache / 2 cores = 12MB per core

Tukwila: 24MB cache / 4 cores = 6MB per core

Ask HP for their Performance Query Reporting Tool (PQRT) report and it will show the per-core performance has not improved from Montvale to Tukwila. I believe a company's own capacity planning tool, not some blinded technology drone.

And we will make their systems prohibitively expensive with our software pricing. HP is a parasite to Oracle.

"Seeing as the core count on the new Tukzilla Itaniums has doubled per socket compared to the older Montecito/Montvale dual-cores, Oracle is simply maintaining the status quo" This just does not make sense... the price per core (and that is how Oracle prices software) is 2X what it was in November.

HP didn't exactly halve the price of HP-UX; they just decided to price per socket, and given the poor market share and declining business it is a marketing play.

I do agree that SPARC is very low performance and this helps address the discrepancy in pricing.
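The arithmetic being disputed here can be sketched in a few lines. The core factors and the list price below are illustrative placeholders (not Oracle's actual price list); the point is only how the bill scales with cores and the core factor:

```python
import math

# Hedged sketch of Oracle's processor-licensing arithmetic.
# The core factors and list price are illustrative placeholders,
# NOT Oracle's actual published figures.

def oracle_license_cost(sockets, cores_per_socket, core_factor, price_per_license):
    """Processor licenses = total cores x core factor, rounded up."""
    licenses = math.ceil(sockets * cores_per_socket * core_factor)
    return licenses * price_per_license

LIST_PRICE = 47_500  # assumed per-processor list price (USD)

# One quad-core socket at an assumed 0.5 core factor vs. an assumed 1.0 factor:
before = oracle_license_cost(1, 4, 0.5, LIST_PRICE)
after = oracle_license_cost(1, 4, 1.0, LIST_PRICE)
print(before, after)  # doubling the core factor doubles the per-socket bill
```

Under these assumptions, doubling the core factor doubles the cost of licensing the same socket, which is the per-core doubling being argued over; whether a per-socket comparison against the older dual-core generation comes out flat depends entirely on which factor applied to that generation.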

RE: Wrong

".....Oracle just doubled the cost of all of their software on Itanium systems...." What a surprise, Allison can't even start without getting it wrong from the word go! Larry has doubled the price on the new Tukzilla kit only, which have twice the core count, so the per-socket price compared to the previous generation is the same.

".....The new cores don't have more oomph than the old cores...." Sorry, still wrong! I have some demo kit, I've benched it with several of our stacks (including Oracle DB), and we're seeing between four and six times improvement. You are forgetting that hp's Tukzilla Integrity kit isn't just faster cores, it's also DDR3 and wider bandwidth to storage, all of which seems to help in keeping things spinning nicely. I know you IBMers like to talk core speeds and avoid discussing the rest of the system, so I'm not surprised the performance gains in the new Tukzilla kit come as a surprise to you.

"....Ask HP for their Performance Query Reporting Tool (PQRT) report ...." Seeing as PQRT is hp internal only (which I would have thought an IBM marketing drone would know), you know it's very unlikely that anyone outside hp will be seeing it, so you can spout whatever rubbish you like and claim it is gospel. But I've always put much more stock in benchmarking in our own environment with our own stack, and there we are seeing increases in performance. Sorry if you're not used to hearing results from the real world rather than some IBM benchmarking session.

".....Ellison Says Oracle Will 'Go After' H-P...." Yes, please do point out one area where Larry has made one actual and tangible move that could be considered even remotely hostile. Has he removed hp ProLiant or Integrity or NonStop from his list of supported platforms? Has he even ramped up their licensing? No he hasn't. The changes for Tukzilla licensing mean zero real-world changes as they just balance out the core count increase. It's all marketing bluster to grab headlines. And why is he making the most noise about hp and not IBM? Well, if you want people to think you want to be number one, you don't start carping on about number two, do you?

".....HP didn't exactly halve the price of HP-UX, they just decided to price per socket...." Hmmm, that sounds like the most pointless evasion of the truth I've heard in a long time! It's even too lame for a politician. Seeing as most upgrades start from the rule-of-thumb that you will want at least the same number of cores to ensure at least the same performance, the majority of new hp Integrity servers replacing old ones will have at most the same core count, so the hp-ux licensing costs for the replacements will be half those of the replaced servers. Even a marketing droid like yourself should be able to follow that maths. But then we also have to look at power and cooling savings - Tukzilla means more cores for less of both - and savings in rackspace, as the new Integrity blades are not only more dense than the old Integrity rack servers, they also save space by using the embedded switches in the C-class chassis, removing the need for top-of-rack switches. And I don't think you want me to point out the savings in admin costs and the advantages of hp's Virtual Connect technology, especially compared to IBM's weak offering!

".....given the poor market share and declining business...." Newsflash - Allison is wrong again! Actually, I'm waiting for Allison to ever be right! The recent reg article here (http://www.channelregister.co.uk/2010/12/02/idc_q3_2010_server_numbers/) on Q3 results from IDC included the line "....The Unix midrange, bolstered by new products from IBM and HP, showed both shipment and revenue increases in Q3....", so it sounds like hp's new Tukzilla kit is doing quite nicely. I'd also like to remind you that the same article still puts hp top of the server heap (with more than twice the growth of IBM), top of the blades heap (how many quarters in a row, I've lost count!), and that's in an article from TPM who seems to just hate having to say anything nice about hp almost as much as you do!

real database tests

How about the newer and more reliable database test, TPC-E, which is a real trade broker database? The top 10 entries are all MS SQL Server, and have been for the last 2 years. Where are the other entrants?

Also see how Oracle tried to cheat on the TPC-C test - just google it. Let's see if they ever manage anything top 10 in TPC-E, no matter how many SSD disks they use!

Another way of looking at it

@Kevin Elliott

Sure, that could be true if Sun hardware were, say, slow.

But it seems you have missed it: Sun hardware now holds, e.g., the TPC-C world record. A shocking 30 million tpmC. And HP's record is 4 million tpmC. The T3 CPU has several world records, too. In other words, this proves Sun hardware is not slow; actually it is fastest in the world in some respects.

In other words, reasoning the way you describe is not logically sound, but contradictory.

If you were to argue that Sun hardware is slow while it holds several world records, your logic would be flawed.

Re: Kebabbert

> In other words, this proves Sun hardware not to be slow, actually it is fastest in the world

Do you have wet dreams?

Sun/Oracle needed 27 servers to reach 30 million tpmC -- that's about 1.1M tpmC per system, but this is a cluster which does not scale linearly (they will hide this fact and will never show a single-system tpc-c result), so my guess is a single 64-core T3 can do probably about 2M tpmC. Now, HP's single system is 4M tpmC, IBM's is 6M tpmC. All of them can work in clustered configs. Which hardware is the fastest?
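The per-node arithmetic behind that guess works out like this; the scaling-efficiency figure is a pure assumption, chosen only to reproduce the commenter's ~2M estimate:

```python
# Back-of-envelope check of the cluster arithmetic above.
total_tpmc = 30_000_000  # Sun/Oracle clustered TPC-C result (approx.)
nodes = 27

per_node = total_tpmc / nodes
print(round(per_node))  # roughly 1.1M tpmC per node

# If cluster scaling were, say, 55% efficient (a pure assumption),
# a single node on its own might do roughly double its in-cluster share:
assumed_efficiency = 0.55
single_node_guess = per_node / assumed_efficiency
print(round(single_node_guess / 1e6, 1))  # ~2M tpmC, matching the commenter's guess
```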

Hey, Coward

Maybe you missed it, but Niagara T3 has several world records. So, yes, T3 can be considered to be fast, actually, fastest in the world in some benchmarks.

Regarding TPC-C, well, it is not trivial to scale. If anyone could have done it, they would have. We will see if IBM can answer the clustered TPC-C. If not, they couldn't get DB2 to scale. Oracle can do that with out-of-the-box solutions. IBM cannot, I have heard. IBM used some tailor-made software to get their clustered record, I heard. Can anyone confirm?

If you need the highest TPC-C performance in the world, you have no choice but Oracle. IBM's result of 6 million tpmC is chicken shit compared to 30.2 million tpmC.

So "....Which hardware is the fastest?...." the records say it is Oracle. Unless you will start that old IBM talk again about "IBM having faster cores, therefore IBM still has the world TPC-C record"? Will you start that bull sh*t again? If not, I suggest you look at the TPC-C list and see who is fastest in the world.

CPU licensing factors apply to named user plus model too

The article suggests that the named user plus licensing model can be significantly cheaper than the processor licensing model if a workload has a modest number of users. And it ends with the remark that for named user plus licensing "there is no processor architecture scaling factor".

However, the final paragraph of http://www.oracle.com/us/corporate/pricing/databaselicensing-070584.pdf clearly states that the scaling factors do apply.
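For illustration, the way the core factor feeds into Named User Plus counts can be sketched as below. The 25-users-per-processor minimum is Oracle's commonly cited Enterprise Edition rule, but treat the whole calculation as illustrative rather than a quote from the price list:

```python
import math

def nup_licenses_needed(users, cores, core_factor, min_per_proc=25):
    """Named User Plus licences must cover the actual user count AND the
    per-processor minimum, where 'processors' = cores x core factor, rounded up.
    The minimum of 25 is an assumption based on Oracle's commonly cited EE rule."""
    processors = math.ceil(cores * core_factor)
    return max(users, processors * min_per_proc)

# 40 named users on one 4-core chip: doubling the core factor doubles the minimum.
print(nup_licenses_needed(40, 4, 0.5))  # 50: the 2-processor minimum exceeds 40 users
print(nup_licenses_needed(40, 4, 1.0))  # 100: the 4-processor minimum dominates
```

So even a shop with a modest user count feels a core-factor change, exactly as the pricing document's final paragraph implies.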


About time...

Well, can't say I'm surprised, it's about bloody time to be honest, I would have expected them to do this a bit sooner but with hindsight that would have looked too brash, even for Larry. They have timed it well to coincide with the Sunrise Supercluster jobby (love these new names..) and the Sparc T3's/T4's and a decent roadmap now in the public domain, not to mention the HP spat.

Some will cry foul that Oracle is abusing its monopoly in the enterprise DB market, and it is, but this is just good business sense and is of course flippin' obvious... IBM has done this for years, not to mention M$. HP is the focus for Larry's aggression, and I don't see anyone at HP who can tussle with him. HP don't make their own chips, have no enterprise DB, no real middleware and no standout business apps, hence no integration and poor relative performance compared to Oracle and IBM.

HP will soon be finished in the high end; they will be left to compete with the likes of Dell at the bottom end of the x86 tin pusher game, which to be frank is where they belong, because in this new highly integrated hardware/software world we are heading into, they have nothing to offer.

RE: About time...

Oooh, goodie, a new Sunshiner to poke fun at! They've been rather thin on the ground since the Sunset.

"Well, can't say I'm surprised...." Seeing as the move had been widely predicted/leaked for the last few months?

"....and the Sparc T3's/T4's and a decent roadmap...." Erm, isn't that just a mild rehash of the old Sun roadmap, still full of optimism and just as unlikely to happen as it was under McNeedy and Ponytail? And still with a Niagara hackjob that can't provide the single-threaded performance that you Sunshiners insisted Niagara didn't need (just like you lot insisted it didn't need more cache - LOL!). So what happens when Fudgeitso stops making SPARC64? All those customers will be looking at Pee7 and Tukzilla kit, and with hp taking advantage of their massive installed base of C-class blade chassis they are well positioned to mop up Larry's customers as they migrate off SPARC. I'm not surprised you forgot the vast majority of that installed SPARC base is on eight-socket or smaller UltraSPARC servers which can easily be out-performed by a dual-socket BL460c G7 running Linux.

"......HP don't make their own chips, no Enterprise DB, no real middleware and no standout business app's so hence no integration and poor relative performance compared to Oracle and IBM....." Just the biggest server seller on the planet, the number one database server provider (Oracle and M$ SQL), the number one enterprise CRM server provider (SAP, Oracle and M$), and still making more profit in a quarter from printer sales than Snoreacle makes in all hardware sales in a year. Even IBM will grudgingly admit more IBM software goes on hp servers than on their own! Don't let those little facts upset your rickety applecart of an argument. Truth is, despite Larry's grandstanding, he has done zilch to really threaten hp's position in the market, because Larry knows he is dependent on hp to provide the platform for over 50% of his Oracle software sales. Even doubling the per-core pricing on Tukzilla just matches the doubling in core count over the previous generation; it doesn't even take into account the increase in per-core performance.

"....HP will soon be finished in the High End...." Yes, and weren't you Sunshiners singing that song around the time you were expecting UltraSPARC V? And again when waiting for Rock? And when Niagara debuted? And all that happened was Sun disappeared. Please, try and grasp the fact that despite the strength of your delusions, just because you want it to be so doesn't mean it's going to be so.

".....because in this new highly integrated hardware/software world we are heading in to, they have nothing to offer...." Apart from the leading server platforms, a growing networking bizz topped by the leading network management tools, one of the leading storage offerings, and really integrated management and deployment tools for hp-ux, OpenVMS, Linux and Windows that Snoreacle can only dream of. Oh, and let's not forget the ink, 'cos I know that really winds up you Sunshiners!

"....Enter Matt Bryant..." Well, when you paint that large a bullseye on your forehead it's hard not to resist showing you up!

You just don't get it dude...

I'm not contesting that in the heavy single-threaded arena Sun/Oracle are at a disadvantage at the moment; Power and Intel's CPUs have the edge. You seem to think this is JUST about the hardware; trust me, it isn't, as you probably know yourself. A large proportion of the Unix market runs the Oracle DB, as it is bloody good. It's all about that DB and the overall cost of that DB; what sits beneath it really doesn't matter all that much...

If Oracle sell you a completely integrated stack, tested at every level, which outperforms anything else for a given price point both in terms of cap-ex and support costs, who the hell cares if the thing is using a larger number of weaker single-thread CPUs rather than an IBM/HP box which has a smaller number of meatier CPUs???

True, HP sells a lot of servers (low-margin x86 tin mostly) running every type of app out there, but are you seriously going to tell me that in the future HP will be able to outdo Oracle for price/performance running the dominant ORACLE database?? Impossible. Even if they did technically do it, Oracle can just undercut and make up the margin elsewhere in the stack - DB, Fusion middleware or whatever - and that's where the real money is made, and its market share is huge. You failed to answer my question about HP's lack of DBs/middleware/apps, by the way. Name one that is even in the same league as Oracle's portfolio... IBM have plenty, and Oracle acknowledges and respects this quite openly. Software sells servers, my friend, not clock speeds and single-threaded performance.

Growing networking biz? Leading network management tools (that sh1te called OpenView, you serious!??) and leading storage!!! Please, don't make me laugh, mate. You use the word "lead(ing)" willy-nilly; HP doesn't "lead" in networking or storage - Cisco, Juniper, Hitachi, EMC, NetApp and a bunch of others would disagree with you there.

Best bet for HP: buy SAP. It's all about the software, son... without good software, HP is just going to be a cheap tin pusher while the other giants like IBM/Oracle mop up the high-margin markets.

RE: You just don't get it dude...

Whilst I agree that Oracle DB is the enterprise DB of choice (despite MS SQL being the faster growing in overall market share), you're forgetting that a lot of us have previous experience of Slowaris on SPARC and the failings of Sun, both in product delivery and support.

".....If Oracle sell you a completely integrated stack, tested at every level, which outperforms anything else for a given price point both in terms of cap-ex and support costs, who the hell cares if the thing is using a larger number of weaker single-thread CPUs rather than an IBM/HP box which has a smaller number of meatier CPUs???...." Here's the problem for you - a lot of us used to buy Sun SPARC, we used it for Oracle, and we saw it fall behind plain-Jane Xeon, let alone Power or PA-RISC and Itanium. But Sun dropped their pants and sold us kit at a loss just to get the business, so we carried on buying. Then the losses meant they had support "savings" which caused us pain, and promised products never arrived, and things got to the point where even offering kit at a loss and free for a year couldn't interest the board. What you don't get is that people's jobs ride on the kit they recommend their companies buy, and a lot of them got burnt by Sun. The massive decline in Sun sales in the last few years of Sun's existence came despite the fact Sun was offering solutions cheaper than IBM and hp. But when your board looks at a Sun solution that is 20% cheaper than the equivalent hp solution and still buys the hp one, you begin to realise how deep the loss of faith is.

".....Growing Networking biz?...." Yeah, it's called ProCurve, it's number two in the market behind Cisco, and is growing whilst Cisco's share is shrinking. Juniper isn't even close. Try reading some industry news rather than relying on Sunshine. But please do supply details of Snoreacle's networking bizz - I seem not to be able to find any such beast anywhere on the Web?

"....Leading Network Management tools...." As an example, Network Node Manager still manages more of the Internet than any other management product. Please name the Sun/Oracle network management tool that even made a dent in NNM sales? You can't, because most Sun shops run CA, OpenView or Tivoli, all non-Sun products. In fact, for many years Sun's own salesforce resold hp's OpenView products as they didn't have a product of their own to market.

".....buy SAP.... it's all about the software son...." Ah, the typical fixation of the Sunshiner - one product and one product only, a bit like that old mantra of "Solaris on SPARC and nothing else" - that worked out well for you! Instead, I suggest you look a little further afield and consider a statement by Steve Ballmer - "Developers, developers, developers!" Sun's Slowaris used to be the development platform of choice for many software companies, but now it's just about dead. What hp have done differently was to think not one OS on one platform, but as many options on as many platforms as they can make money on, and then as many applications on top as possible, which is what the developers like. Instead of one CRM vendor, hp can flog you solutions around Oracle, SAP, IBM, Microsoft and even open-source solutions. Hence why hp is making money out of hp-ux, OpenVMS, NonStop, Linux and Windows, whilst Snoreacle is struggling to get anyone interested in Slowaris. Which is why hp made $3.92bn in server sales in Q3 whilst Snoreacle only made $0.78bn - because hp offered better products, with more choice, and with more applications and more developer support. Oh, and also why hp is the number one IT company in the world and was the first to report revenues in excess of $100bn. Should hp buy a CRM software company such as SAP, it would then have only one real competitor, and that's IBM. But I expect that instead, hp will continue to be the software whore of the industry and continue being number one. If anyone is going to do any mopping up, it's not going to be Snoreacle; it is more likely that Snoreacle will become an acquisition target for one of the bigger players.

Matt Bryant

If Niagara suffers from a small cache, how can it be fastest in the world in some benchmarks? How can a 1.6GHz CPU with a too-small cache contest and even win benchmarks against 5GHz CPUs with huge caches? Shouldn't that be impossible if Niagara really suffered from a small cache?

See for instance entry no. 14 at the bottom under "References":

http://en.wikipedia.org/wiki/SPARC_T3

How do you explain that world record if Niagara suffers from a too-small cache? What you say does not add up with the facts and benchmarks. Either you are wrong, or the benchmarks are lying.

RE SPLITBRAIN.

"If Oracle Sell you an completely integrated stack, tested at every level, which outperforms anything else for a given price point both in terms of cap-ex or support costs, "

Well, the problem is that the product Oracle is benchmarking and the products they are selling are not the same. The recent clustered TPC-C benchmark, for example, is not an Exadata solution, and it isn't an Exalogic solution either. It's a RAC cluster of standalone machines, using Solaris as an intelligent disk system. It's a brilliantly executed benchmark, but... it has nothing to do with the products that Oracle is peddling to their high-end clients. Although they will, just as you are right now, put an equals sign between the price of the solution in this benchmark and an Exadata/Exalogic solution. The problem is just:

1) You aren't buying the software in the benchmark; you are leasing it for 3 years. If you were to buy it and pay 3 years of support, the _list_ price of the Oracle software would be 59M USD rather than 24M USD.

2) Furthermore, judging from the price Oracle is charging for the Exadata x86 hardware compared to the actual price of a rack of Sun x86 servers, the price of an Exadata solution based upon the T3-4 would be significantly higher than the price listed.

3) You would also have to add the cost of the 10,000 USD list price per disk for the Exadata Storage Server software (that is 12M USD, list price, over 3 years if the benchmarked solution were an Exadata solution).

So your claim about cap-ex and support costs being low might very well be right for the solution that Oracle has benchmarked. Although I doubt that it's an easy/cheap solution to support with 27 nodes and 97 "storage systems", but that is another story.

But the price of the benchmarked solution has NOTHING to do with the price of the "completely integrated stack, tested at every level" that Oracle is selling under the Exalogic and Exadata names.

So your claim is IMHO not right.
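For what it's worth, the commenter's own figures tally as follows; every number here is the commenter's claimed list price, not an independently verified one:

```python
# Tallying the list-price claims quoted above. All figures are the
# commenter's own claims, not verified Oracle prices.
benchmark_software_lease_3yr = 24e6  # claimed 3-year lease price in the benchmark
buy_plus_support_3yr = 59e6          # claimed list price to buy plus 3 years' support
exadata_storage_sw_3yr = 12e6        # claimed Exadata Storage Server software over 3 years

gap = (buy_plus_support_3yr + exadata_storage_sw_3yr) - benchmark_software_lease_3yr
print(f"{gap / 1e6:.0f}M USD")  # the claimed shortfall vs. an equivalent bought Exadata stack
```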

"who the hell cares if the thing is using a larger number of weaker single-thread CPUs rather than an IBM/HP box which has a smaller number of meatier CPUs???"

One thing is a TPC-C benchmark. The problem is that real-world code is often serial. We have been hearing that parallelisation is the holy grail for 25 years now, and time and time again in the real world we face single-CPU throughput problems.

And I am not only talking about code here; running an IT infrastructure on fat cores is different from running it on thin ones. For example, installation of software needs to be multi-threaded too. One of my wife's good friends, who is an Oracle DBA, actually quit her job out of frustration with her UNIX department's inability to understand the differences between SPARC64 and T systems.

RE: Matt Bryant

Kebbie, just take a deep breath, then go look at the changes between the T2 and the new T3 chips - see all that added cache? It's there for a reason, and it ain't just decoration! Then please consider that you were assuring us not so long ago that the Niagara designs didn't need cache, that there was no way it would have more cache in the next generation, etc, etc.

@Matt Bryant

"....Kebbie, just take a deep breath then go look at the changes between T2 and the new T3 chips - see all that added cache? It's there for a reason, and it ain't just decoration! ...Then please consider that you were assuring us not so long ago that the Niagara designs didn't need cache,...."

So? What is your point? I don't get it. According to Wikipedia:

T2 has 8 cores and 4MB L2 cache, which means each core has 0.5 MB cache.

T3 has 16 cores and 6MB L2 cache, which means each core has 0.375 MB cache.

We see that each T3 core has less cache than a T2 core. This aligns with my earlier claim that "Niagara does not suffer from a small cache", contrary to what you say. The T3 should be even more cache-starved than the T2 - if you were right. But no, it has several world records. This proves it has no problem feeding all its threads. It does not suffer from a small cache. He who claims that should study the Niagara a bit more.
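The cache-per-core figures thrown around in this thread all check out arithmetically (chip specs as quoted by the commenters themselves):

```python
# Cache-per-core arithmetic using the figures quoted in this thread:
# (total shared cache in MB, core count), specs as the commenters state them.
chips = {
    "Montvale (Itanium)": (24, 2),
    "Tukwila (Itanium)": (24, 4),
    "Niagara T2": (4, 8),
    "Niagara T3": (6, 16),
}

for name, (cache_mb, cores) in chips.items():
    print(f"{name}: {cache_mb / cores} MB per core")
```

Whether less cache per core matters is, of course, exactly what the two posters are arguing about; the division itself is not in dispute.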


".... that there was no way it would have more cache in the next generation...." No, I never said that. The next-gen CPUs will probably have more cores, higher Hz, more cache, etc. That is natural evolution. But I never said anything about smaller cache in the next-gen T3. You cannot cite me on this, because I never said it.


In short, I repeat: "Niagara does NOT suffer from a small cache", no matter how many times you claim it does. Why? Because it surpasses and wins over 5GHz CPUs with huge caches. That would be impossible if Niagara suffered from too small a cache.


BTW, compare the tiny 6MB cache and 16 cores of the T3 to the new IBM z196 mainframe CPU, "the world's fastest", just released in September:

http://www-03.ibm.com/press/us/en/pressrelease/32414.wss

Also, I quote another source here:

"A 4-node [z196] system is equipped with 19.5MB of SRAM for L1 private cache, 144MB for L2 private cache, 576MB of eDRAM for L3 cache, and a massive 768MB of eDRAM for a level-4 cache."

And how fast is this uber CPU with half a gigabyte of cache, running at 5.26GHz? Well, it is 50% faster than the mainframe z10 CPU. How fast is the z10 CPU? It is 50% faster than the z9. How fast is the z9 CPU? It is 20% slower than a single-core Intel Xeon at 900MHz!!!

This means a z196 is 1.5 x 1.5 = 2.25 times faster than a z9 CPU. That is, it is roughly twice as fast as a single-core Xeon at 900MHz. That means an IBM mainframe z196 CPU is about as fast as a 1.8GHz single-core Xeon. Let us not even compare this to an 8-core Intel Nehalem-EX at 2.4GHz.
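The relative-speed chain above multiplies out as claimed; the generation-to-generation percentages are the commenter's figures taken at face value, not IBM's:

```python
# Multiplying out the relative-speed chain as claimed above.
# Every ratio is the commenter's figure, taken at face value.
z196_vs_z10 = 1.5    # "z196 is 50% faster than z10"
z10_vs_z9 = 1.5      # "z10 is 50% faster than z9"
z9_vs_xeon900 = 0.8  # "z9 is 20% slower than a 900MHz single-core Xeon"

z196_vs_z9 = z196_vs_z10 * z10_vs_z9
z196_vs_xeon900 = z196_vs_z9 * z9_vs_xeon900
print(z196_vs_z9, round(z196_vs_xeon900, 2))  # 2.25 1.8 -> "as fast as a 1.8GHz Xeon" by this logic
```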

So, Matt Bryant, how important is a big cache? Even with ~400MB of cache and record-breaking speed at 5.26GHz, we see that the IBM "world's fastest CPU" can be slow.

On the other hand, a small 6MB cache can be really fast - if you know what you are doing and give the CPU a radical new design.

The T3 crushes the z196 easily. As does any modern x86 with less cache and a lower Hz speed. Cache is not important if you know what you are doing.

RE: @Matt Bryant

But Kebbie, you assured us that Niagara doesn't need cache! Surely, as the chip gets developed and gets "better", it shouldn't need ANY cache? After all, that is what you originally posted. Or could it be that all those stalled threads means cache is vital, otherwise Niagara sucks even worse than it does already?

@Matt Bryant

But Mattie, I just explained that to you, and answered your question. I can rewrite it here, just for you, Mattie:

I have never claimed that Niagara does not need a cache. Try to cite me on this.

I have explained numerous times that "Niagara does not suffer from a too-small cache", which is what you claim it does. Do we have any proof of who is correct, you or me? Yes, we have proof that I am correct: Niagara has several world records, and can even beat 5GHz CPUs with huge caches.

If you were correct, Niagara would be slowest of all cpus, which it is not. Just look at the benchmarks yourself.

In short: you claim Niagara is slow. But the record books show it holds several world records, which means it is pretty fast. Actually, fastest in the world on some workloads. So you are not really correct when you claim Niagara is slow. Neither are you correct when you claim it suffers from a small cache - because then the performance would suck. But it is actually fastest in the world at some things, even though it has 6MB cache in total.

Conclusion: cache size does not really matter that much if you have a radical new design. Try a 4-6MB cache on a POWER6 or POWER7 or Itanium and see what happens. They will suffer, and performance will maybe be halved or so. Their designs need large caches; Niagara's doesn't.

They have different designs. What you know about POWER and Itanium (must have huge cache and high Hz) is not valid for Niagara (has small cache and low Hz).

I suggest you be careful when you extrapolate your knowledge of ordinary legacy CPU designs to radical new designs that circumvent old problems (such as the CPU idling 50% under full load because of cache misses).

RE: @Matt

The interesting thing with Windoze Server and M$ SQL is that my company hasn't lost its appetite for either. Despite our taking up Linux, that has been used more as a tool to replace UNIX boxes and has made little inroads into the number of Win servers. VMware has been accepted as a cost saver, but only in that it allows us to consolidate and reduce hardware; if anything we end up running more Windoze instances! MySQL has been passed over as M$ SQL has offered a better solution with superior support at a price the business can swallow. Through all our recent spending constraints, M$ still got plenty of money from us, all going on x64 tin. Others I know see similar trends. You say you will only consider M$ SQL "enterprise" when it runs on your choice of OS, but you fail to notice that the OS of choice for many companies still comes from Redmond. Whilst good old anti-M$ snobbery is fun, it smacks a bit too much of sticking your head in the sand.

totally agree with you!

I have a recent, very live and real example of a Sybase database that was migrated to SQL Server. My basic 4GB, 2-processor Windows box was 2-8 times faster than the 16-processor, 32GB RAM Solaris box. Although that may be a problem with Sybase, they also blamed the hard disks on Unix! Excuse my ignorance, but is this not close to 100K of kit you are talking about, Unix guys? If a hard disk I can buy for £100 is faster than your supa-dupa Unix disk then something is wrong here. I can see more and more migrations from Solaris to Windows and Linux; people are waking up to basic facts.