Twenty years ago open systems was the battle cry that shook the absurdly profitable proprietary mainframe and minicomputer markets.
The proliferation of powerful and less costly x64-based systems that can run Solaris, Linux or Windows is making more than a few Unix shops think the unthinkable: migrating away from Unix for their …

COMMENTS


Throughput is key

The first was a Power 7 beast (there is no other word for it) running AIX and Linux.

The other was a quad-socket Intel Xeon (latest models) rig running Linux.

The application software was set up identically on the three configs.

The Power 7/AIX combo managed 36,000 messages per second.

Running Linux on the same hardware gave us 26,500 per second.

The Intel machine managed a paltry 14,240 per second.

Sure, the Power 7/AIX combo is expensive, but the x86 world (in these particular circumstances) lags well behind the RISC system.

Then add into the mix the LPAR management in the Power range and it is one hell of a solution.

IMHO, the x86 architecture is well past its sell-by date. Intel recognised this. Itanic was not the answer. Simply die-shrinking x86 to improve performance will not make up for the obvious shortcomings in the CPU architecture.

That'll give you performance per unit of time per unit of currency. THAT will tell you which platform is better, because throughput is not key. Throughput for investment is key.

The Power7/AIX system is just under 253% the performance of the Intel/Linux system -- which means that if it costs 253% of the cost of the Intel/Linux system, it's not as cost-effective overall. (Theoretically, of course -- in real life, you do have some overhead managing workloads across multiple servers which would have to be factored in.)

* (= seconds/year)

**(I hope that one was damned near zero -- otherwise you got ripped off ; )
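The throughput-for-investment arithmetic can be sketched in a few lines; the message rates are the ones quoted earlier in the thread, while the x86 price is a hypothetical placeholder, not a real quote:

```python
# Throughput-per-currency comparison. The message rates are from the
# benchmark quoted above; the x86 price is a hypothetical placeholder.
def cost_effectiveness(msgs_per_sec, price):
    """Messages per second per unit of currency -- higher is better."""
    return msgs_per_sec / price

power_aix = 36000     # msg/s, Power 7 + AIX
intel_linux = 14240   # msg/s, quad-socket Xeon + Linux

ratio = power_aix / intel_linux   # ~2.53, i.e. "just under 253%"
print(f"Power 7/AIX delivers {ratio:.1%} of the Intel/Linux throughput")

# Break-even: the Power box is only more cost-effective if it costs
# less than `ratio` times the x86 box (ignoring management overheads).
x86_price = 500_000   # hypothetical
print(f"Break-even Power price: ${x86_price * ratio:,.0f}")
```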

RE: Throughput is key

".....sure the Power 7/AIX combo is expensive but the X86 world (in these particular circumstances) lags well behind the RISC system....." Congratulations, you found the perfect solution. For your application. To pretend that means you will see the same performance across all applications in all environments is either naive or deceitful.

".....IMHO, the X86 architecture is well past its sell by date......" Seems to be going quite well, though. By the way, did you stop to consider that RISC is at the point where future performance gains from anything more than a die shrink are unlikely? Why do you think there is so little info on the IBM Power public roadmaps? It's because there is little more they can add to a geriatric design.

Hmmm

Lots of good points, but that last one is not:

> THAT will tell you which platform is better, because throughput is not key. Throughput for investment is key.

Sometimes, throughput is all that matters, irrespective of investment, because customers (here: banks, trading floors, etc.) need to run their workload during a fixed time window which is incompressible. E.g. they have to run their jobs within a 4-hour window, and saving $1M on an x86 system that runs in 5h is simply not an option.

Missed the point.

"Sometimes, throughput is all that matters, irrespective of investment, because customers (here: banks, trading floors, etc.) need to run their workload during a fixed time window which is incompressible. E.g. they have to run their jobs within a 4-hour window, and saving $1M on an x86 system that runs in 5h is simply not an option."

NO. You missed the point. It's not that throughput doesn't matter. It's that you have to get the throughput you need in _the most cost-effective_ manner. In the scenario above, I wouldn't recommend getting a slower system; I'd recommend getting the most cost-effective system that performed the task needed. Let me give you two example cost scenarios that fit that scenario:

1. Say the x86 system mentioned costs $500,000 while the RISC system costs $1,500,000. If you bought two x86 systems and ran them in parallel, assuming a 10% overhead for synchronization, you could have the jobs run in under 3 hours and still save $500,000.

2. On the other hand, if the x86 system costs $4 million while the RISC system costs $5 million then it doesn't matter how many of each system you get because they're on the same price/performance curve. So you get the biggest that fits your budget, which would probably be the RISC system.
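Scenario 1 can be checked with a quick sketch; the prices and the 10% synchronisation overhead are the hypothetical figures from the scenario above, not measured ones:

```python
# Scenario 1: split a 5-hour job across two x86 boxes, assuming a
# hypothetical 10% synchronisation overhead.
def parallel_runtime(single_runtime_h, n_systems, overhead=0.10):
    """Idealised runtime when the job is split across n systems."""
    return single_runtime_h / n_systems * (1 + overhead)

x86_single = 5.0                            # hours on one x86 system
two_x86 = parallel_runtime(x86_single, 2)   # 2.75 h -- inside the 4 h window
print(f"Two x86 systems: {two_x86:.2f} h")

# Cost side of the same scenario (hypothetical prices):
saving = 1_500_000 - 2 * 500_000            # one RISC box vs two x86 boxes
print(f"Saving vs the RISC box: ${saving:,}")
```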

All of this is before factoring in the cost of actually running and maintaining the systems, which could very well be the difference as well. This is also not to mention that you won't have 1 x86 system to pick vs 1 RISC system. You'll have multiple vendors, with multiple solution points per vendor.

So even in the scenario you mentioned, throughput is only part of the equation -- and the other major part, cost, can still be important enough to change the answer.

Re:Missed the point

"NO. You missed the point."

"1. Say the x86 system mentioned costs $500,000 while the RISC system costs $1,500,000. If you bought two x86 systems and ran them in parallel, assuming a 10% overhead for synchronization, you could have the jobs run in under 3 hours and still save $500,000."

I'd say that in responding to the previous poster's banking example it is you that has unfortunately missed the point. These people aren't numpties sat there running sequential batches. I can assure you that where tasks can be run in parallel, they are. However, that only gets you so far, after which you need to up the hardware, and it is this fixed-window issue that the original poster is referring to. If they could run a job more cost-effectively then believe me they would, as any spare cash goes in some fat bastard's bonus pool. Ergo throughput is still king, no matter (within reason) what the cost is.

check out the new Power roadmap

re: check out the new Power roadmap

One generation and just stating the word "more" does not make a roadmap. IBM still has the worst roadmap of the three (Oracle, Intel, IBM). Of course, as many have said, they do not have as much need to be public with their roadmaps as Intel and Oracle. They've had trouble meeting the timelines of their roadmaps in the past (like Oracle and Intel), so it makes sense to be vague. This way they can say they've met all of their promises.

@Steven Knox

bye Unix

We are most probably migrating to a Windows Server set-up due to the shortcomings of a certain DBMS (won't mention names) running on Unix. The database people said that if we run this database on Linux, apparently it can run faster. It can run faster because the x86 processors have better technology (I have no comment to make on this because I have never seen the benchmarks). That's the last chance to salvage anything.

But the bomb has been dropped: the Windows Server on the other DBMS offers 3-5 times the performance at a fifth of the price. I am sure with some tweaking it can be made a little faster. You can't argue with figures like this.

@Steven Knox

You're right to suggest that some sort of performance metric should be calculated for a candidate IT solution, but you can't tell everyone what their metrics should be.

Google apparently use a metric of searches per watt. Sensible - searches are their business, energy is their highest cost. A banking system is more likely to be measured in terms of transactions per watt-second; banking systems are sort of real-time because there is an expectation of performance, but energy costs will be a factor too. But ultimately it is for the individual business to decide what is important to them. For example, a bank somewhere cold might not care about cooling costs!
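Those per-watt metrics are easy to define precisely; the numbers below are made up purely for illustration:

```python
# Two efficiency metrics along the lines suggested above.
def searches_per_watt(searches_per_sec, power_watts):
    """Google-style metric: sustained search rate per watt drawn."""
    return searches_per_sec / power_watts

def transactions_per_watt_second(transactions, power_watts, elapsed_s):
    """Banking-style metric: work done per joule (1 J = 1 W*s)."""
    return transactions / (power_watts * elapsed_s)

# Made-up illustrative numbers:
print(searches_per_watt(50_000, 2_000))                    # 25.0
print(transactions_per_watt_second(1_000_000, 500, 3600))  # ~0.56
```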

I think that it is safe to conclude from IBM's sales figures that a fair proportion of businesses are analysing the performance metrics of x86, RISC, etc. and are deciding that a mainframe is the way ahead. IBM sell so much kit that not all their customers can be wrong!

vmware

What is changing in the industry has more to do with 'virtualisation' than with 'architecture'. Many enterprises are going for virtualisation on VMware, and yes, VMware only runs on x86. Therefore we find that Windows and Linux are on the rise... not because they're per se so good, but because they are easy to virtualise, which cuts TCO. That said, Solaris CAN run on VMware as well, but currently is not (yet?) considered, probably for the simple reason that most IT managers simply don't realise that it runs on VMware.

Big-iron RISC servers during an economy in trouble are obviously going to see a decline because of their cost. But when workloads need more than 4 CPU cores and the best in I/O performance, then Windows simply can't do what Solaris, AIX and HP-UX can do. Linux can do a lot but it also has its limits, not least because Linux isn't an OS, it's a kernel. Built around it we have SuSE, Red Hat, Ubuntu... So choices need to be made, because although these are very much alike, in an enterprise you want them to be IDENTICAL to keep down the cost of administration.

So Big Iron isn't going away any time soon; what is going away is 'midrange iron', and it's being replaced by VMware on blades, but only for the 'small' workloads, essentially Tier 1.

What ?

Eh... you don't do this for a living, I hope. If you did, you would know that running Oracle under VMware means that you pay the licence for the whole physical machine, as Oracle doesn't count VMware as a hard partitioning technology, as they do with all the UNIX virtualisation technologies.

Running Oracle under VMWare

only means paying for the whole physical machine if you're stupid enough not to haggle and threaten to port to MySQL. Oracle will wave the licence under your nose but will do a deal if you push back. If you need help, ask VMWare.

But you shouldn't run a database in a virtualised environment

Databases are I/O heavy. Even Microsoft's SQL Server bypasses the Windows HAL and writes directly to hardware for performance reasons. Moving that to a virtual environment is just dumb. I work for a company that has done that and there are myriad problems that go along with it. Build a big honking server and put your databases on it. You can virtualise the rest of your kit. Much less expensive to do it that way, and you won't have the inherent performance problems that come with trying to virtualise everything into a shared environment.

License costs drive hardware choice

Beyond IBM's chips - what RISC chips are out there that are remotely competitive with x64 chips for servers? Not some specific benchmark - but TCO including software.

For years we were a Sun SPARC / Windows shop. About 4 years ago we dumped SPARC for x64 (AMD in that case). We still run Solaris / Windows, but on x64.

Why? Performance per socket or core depending on licensing. Software licenses are MUCH more expensive to buy and maintain per unit than hardware. The RISC cores were just too slow to justify their costs.

Instead of buying more licenses as loads increase, we just buy the latest / quickest hardware available - it is much cheaper to do so, especially for the socket license models where you can now have 8+ cores per socket.

One of our big-cost apps is Oracle Enterprise with Spatial and Label Security. The last go-around we moved from a Sun SPARC V880 to a 16-core monster AMD box. This year we are dumping that and moving DOWN to a 12-core Intel box with TMS flash for all storage. The money we save in licence renewals will pay for the new hardware. And we calculate that it will be about an order of magnitude quicker to boot. THAT is a no-brainer upgrade - even for the US Govt.

X86_64 was always a band-aid

X86 was always the bottom end of the performance range (actual and per watt) - but what it is, is _cheap_.

Power7 boxes might be faster, and so is AIX, but for the cost differential a company can have several X86 boxes and a couple of spares.

Wintel boxes will always be slower until they can remove all the compromises which are required to still boot DOS. There are dedicated X86 systems out there which are a lot better optimised (but the price goes up)

Linux is invariably faster with tuning - the defaults are for a wide range of operations - so I take performance comparisons like this with a bucket of salt unless full details of the configurations and tuning mechanisms are provided (I've achieved speedups of 10-20x or more with appropriate tuning of boxes for the tasks they're performing).

The _BIG_ advantage of Linux is portability. Source code written on X86 should compile and run happily on MIPS/ARM/Power/Big Iron/Itanic/Whatever comes along.

Linux may not be Unix, but it's virtually identical in every respect that matters - and there is more/cheaper support than there is for the older *nixes. Because of that the market is really Win/*nix/VMS/Big Iron - and yes I still have VMS systems (brand new) in $orkplace for specific tasks because they're best suited to the task.

Unix vs Linux is a strawman unless you start breaking the *nixes up into their component flavours and assess competition between them.

I'm a happy Linux admin, but I also admin other Unixes, VMS and Windows(when I'm forced to). The point is that one should choose the software for the task then the hardware and OS it best runs on. Anything else is the tail wagging the dog.

Right now I'm looking forward to the arrival of MIPS and ARM based systems for testing. If they work as I expect them to then we'll be achieving far higher throughputs per rack with far lower power consumption figures - speed isn't everything.

Mine's the one with the fondleslab in the pocket, setup for remote X work.

Portability?

> The _BIG_ advantage of Linux is portability. Source code written on X86 should compile and run happily on MIPS/ARM/Power/Big Iron/Itanic/Whatever comes along.

Really? Maybe, provided you've got all the right libraries installed, the right versions of those libraries, the right GCC setup, and that your distribution's fs layout is along the same lines as the one used by the software developer, etc. etc. And then you may also have to worry about hardware architectural issues such as endianness. And then you have to wonder whether the software you have just compiled is actually running as the writer intended, or is there a need for some thorough testing?
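The endianness point is easy to demonstrate: the same 32-bit value has a different byte layout depending on byte order, which is why code that dumps native-order structs to disk or the wire isn't portable. A minimal Python illustration:

```python
import struct
import sys

value = 0x01020304

# Serialise the same 32-bit integer with explicit byte orders.
big = struct.pack(">I", value)     # big-endian:    01 02 03 04
little = struct.pack("<I", value)  # little-endian: 04 03 02 01
print(big.hex(), little.hex())     # 01020304 04030201

# Native-order ("=") output read back on a machine of the opposite
# endianness comes out scrambled -- hence explicit byte orders, or
# htonl()/ntohl() in C code.
print("this host is", sys.byteorder, "endian")
```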

The idea of distributing a source code tarball and then expecting ./configure, etc. to work first time for everyone on every platform is crazy. Pre-compiled packages are a joke in the Linux world too; deb or rpm? Why is there more than one way of doing things? There is no overall benefit to be gained.

It is asking a lot of a software developer to maintain up-to-date rpms, debs and tarballs for each version of each Linux distribution on each platform. Quite understandably they don't do it. If we're lucky the distribution builders do that for them.

IA64 itself is largely worthless

Yes, largely worthless.

Any value IA64 has is either part of the system into which it is built (especially in a really high end system), or is added by the OS, facts which TPM and many commentators (and HP/Intel) seem to struggle with.

There are a few restrictions on AMD64-based systems in terms of scalability vs IA64 - Chumpaq's biggest AMD64 box maxes out at 48 cores and 512GB rather than the 32 cores and 1TB in the Intel equivalent; in both cases rather fewer cores and rather less memory than a high-end IA64 system. (1) Does this matter to any but a tiny minority of customers? (2) Is this an instruction set restriction, a chip restriction or a system restriction? AFAICT it's not the instruction set that's holding things back, it's the stuff outside The Chip Inside (tm): the system design and the supporting software. And now that Intel's AMD64 clones and HP's IA64 boxes have basically identical memory interfaces and similar IO interfaces (QPI or whatever it's called this month), what exactly are the relevant hardware differences at a system level?

As for the "IA64 has better RAS" fiction which used to be regularly trotted out: if TPM or Matt or anyone can find me a commercially relevant real world example where IA64 has significantly better RAS features than AMD64, I'll be amazed. Hint: in recent months even Intel VPs have realised this argument is largely worthless.

Pretty much anything you can sensibly do on an IA64 system could in principle be done just as well and more cost effectively on a decent AMD64 system (or maybe even on an Intel clone) - IFF the relevant software (specifically, HP-UX, NSK, or VMS) was available. There are no technical issues making this happen (the OSes have already been ported more than once), only commercial ones.

Why doesn't this happen? Your guess is as good as mine, but I'll bet it involves more politics than technology.

Did IA64 even get mentioned in the recent Intel Investors Conferences (US and Europe)?

@Steve Davies: you don't mention the OS which gave you the x86 result. It wouldn't be Windows by any chance, would it? A Linux result on the same hardware would be interesting if that was the case.

One word missing - "enough"

For years Unix vendors made a very good living out of claiming that their systems had higher throughput, reliability and stability than the alternatives. Unfortunately, Moore's Law has caught up with them: if the systems are designed properly, a white-box x86 solution is more than likely to be reliable ENOUGH, have throughput ENOUGH and (except if running Windows) stability ENOUGH that a 3-to-1 price disadvantage just can't be justified. Brandishing selective stellar benchmark figures for a particular processor is almost irrelevant - there are very few organisations that can make use of the full unbridled power of a single-image fully-loaded Power 7 or Integrity server.

Use the right systems for the right workload

Surely the best long term answer is to rationalise/standardise and get into a position where you can deploy different systems optimised for different workloads within a coherent environment of development, deployment and management tools.

In other words, use a stack of OS, hypervisor, middleware and apps that can run on a range of systems that could vary in CPU from blades, racks and appliances to large SMP systems offering different price, performance and RAS, and don't worry about betting on a particular CPU.


Is it just me, or...

Does anyone else find the idea of "Linux vs Unix" nonsensical?

In my little world view, "Unix" is a generic term that encompasses a wealth of OS implementations, including: AIX, Solaris, IRIX, HP-UX, Mac OS X, BSD, and "Linux", amongst others (and, yeah, I get to work with old stuff). None of the above are interchangeable, and they all have strengths and costs and weaknesses... but I submit that the differences between Red Hat, SuSE and Ubuntu are not qualitatively different from the differences between Solaris and Red Hat, or SuSE and AIX, etc. But throw someone comfortable with any of those into VMS and watch them flounder...

[ The differences change depending on viewpoint: from the perspective of a driver developer, all Linuxen tend towards looking the same, but very different from e.g. AIX; from the perspective of a developer using an X-based toolkit, they all tend to look similar with trivial differences in (e.g.) type faces right up until you get to integration with desktops. Etc. ]

In sum, this article is really not talking about "Unix" vs anything, but proprietary hardware vs commodity hardware. Turns out the former is more expensive but tends towards "better", while the latter is cheaper.

Those systems offer a unique value

Core performance on Power and Itanium has been consistently good or very good for most of the last decade, but that's never been what you pay for. Ever since the Pentium Pro, Intel has offered most of the core performance of RISC platforms at a fraction of the price. What you get with the proprietary UNIX boxes is a fast system that scales far higher than most commercial x64 solutions, with very high reliability and certain features (PowerVM, for instance) that are frequently worth the premium you pay. Judging UNIX systems by list prices is also silly, since most actual purchases involve big discounts.

Core performance on all three major RISC platforms (Itanium, SPARC, Power) will probably make massive leaps in the next few years. Oracle's public performance targets are especially aggressive, although it remains to be seen whether they can make it happen.

re: Eh no.

Don't get out the bagpipes just yet...

Migration away from proprietary RISC Unix hardware has been going on for a long time. A lot of this has been due to lackluster performance from the "market leader" (namely Sun). Over the years there has been some back and forth on this and you've had some shops realize that they might not need everything that proprietary RISC servers have to offer. The same also applies to big name proprietary Unix apps.

Although for really big jobs and large environments, the "little iron" from the Unix vendors still does things that bulked up desktop PCs still can't. The RISC platforms don't stand still either. The situation is not nearly as simple as some pundits would like it to be.

Longevity

The single outstanding feature of applications deployed to SPARC/Solaris is their longevity. Time and again, the only thing which is driving migration is the impending withdrawal of hardware support. This never seems to be the case with either Windows or GNU/Linux. Why is that?

Longevity

I've seen plenty of corporate systems that are out of support. Some part of the system has been de-supported long ago, whether it's the hardware, the OS, or the app. How this goes over in large corporations is a mystery to me.

The length of support issue is a universal one. It is not something where Unix has a magical advantage. Enterprise Unix vendors will gladly screw you over in that respect just as Microsoft or Apple would. It's not just a "small systems" or "PC thing".

It depends....

There is no right answer, no best system. It all depends on so many different factors. The move to "commodity boxes" is driven by the same numpties that drove the dot-com bubble; working in lots of industries, they all seem to be jumping on the Linux bandwagon, away from the big iron, without really considering the whole picture and the consequences.

Yep, the X64 boxes are way cheaper than the RISC ones, but what about the lack of features for partitioning and virtualisation at the hardware and OS level that just aren't in Linux or X64 (or Windows)?

You have to consider where the business is now and where it needs to be in the next 3 years, balancing the hardware, licensing, performance and human costs.

I moved a major operation from RISC to X64 because, in 2006, the latter was so much faster. A major business model:

Old RISC: 4h45m

New RISC: 3h15m

X64 (AMD): 1h20m

But by mid-2010 I was recommending the move back to RISC: with the workload and bandwidth required, X64 couldn't cope. Luckily the business was using an OS that ran on both RISC and X64 - no new skills or data migration needed.
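For what it's worth, the run times quoted above work out to the following speedups (straightforward arithmetic on the poster's figures):

```python
# Speedups implied by the job times quoted above.
def to_minutes(hours, minutes):
    return 60 * hours + minutes

old_risc = to_minutes(4, 45)   # 285 min
new_risc = to_minutes(3, 15)   # 195 min
x64_amd = to_minutes(1, 20)    #  80 min

print(f"X64 vs old RISC: {old_risc / x64_amd:.2f}x faster")  # 3.56x
print(f"X64 vs new RISC: {new_risc / x64_amd:.2f}x faster")  # 2.44x
```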

However, a year later, with the arrival of the E7, the rise of flash storage, the prevalence of integrated 10GbE, 8Gb FC, etc... tough call.

Given that the business has a lack of human resources, I'd still stick with big RISC iron, as fewer boxes is best for them and it doesn't make an iota of difference on licensing between the cores.

The big challenge with X64 now is the sheer number of cores: how do you partition a large box? VMware and Xen have an overhead but also limits on resources per guest. Linux is very immature, though RH6 is starting to get some decent features that the big boys (AIX, Solaris, Tru64, HP-UX etc.) have had for years.

Unix is on the rise in our shop

IBM continues to grow and be the torch bearer for Unix. Power7 is unmatched in the industry.

Yes, our x86 boxes are doing more than before, but there are still serious issues with reliability... we see it being about a 5-year mean time between failures, which is about 100 failures per year... yes, we have 500 x86 servers.
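Those failure figures are internally consistent; as a sketch:

```python
# With a 5-year mean time between failures per server, a fleet of
# 500 servers sees roughly 500 / 5 = 100 failures per year.
def expected_failures_per_year(n_servers, mtbf_years):
    return n_servers / mtbf_years

print(expected_failures_per_year(500, 5))  # 100.0
```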

Scalability... we cannot put big workloads on it because each virtual machine can only scale to half of a Nehalem chip. V5 will get scalability to 2 chips but we won't have that certified in production till next year. I/O is also a major bottleneck.

Performance... good performance, but we don't buy anything past 4 sockets. It's funny: everyone says how x86 has such big revenue and box counts, but that is because it's cheap and we have to buy so many of them. I would rather buy a box that is twice as expensive but only have to buy 1/10th of them.

SPARC and Itanium are dead to us and off the approved platform list.

x86 (Intel) and Power are the standards.

It is ridiculous that people call Unix proprietary when, if you look at our "open x86", you find we have standardized on Intel/HP/VMware/Red Hat/Oracle. It's not only proprietary but a mess to support.

x86 is a steaming turd.

1. Intel kept on cranking up the GHz to match the more power-efficient RISC chips,

2. AMD went ahead and jerry-rigged 64-bit support onto the crappy x86 processor,

3. Windows took over the desktop market, and it mostly runs on x86.

Points 1 and 2 turned x86 into a crappy but capable trashware processor capable of mimicking what the good processor archs were doing. Point 3 meant that x86 took over the desktop market, while the RISC alternatives began dying off, with Apple being the last one to vacate the PPC arch. Intel tried to move on with Itanium, but it failed in its first attempts and AMD seized the day with their jerry-rigged x86-64. Ironically, competition killed the path out of x86, while lack of competition was what stuck us to x86 in the first place.

Umm- isn't there more than just performance?

One of the neatest tricks you can pull off with IBM iron is to rip a full box out of a live stack, because its VM even has the processors virtualised so it can use slices of them, and you can add capacity in pretty much the same way (read: no downtime).

I'm admittedly not entirely up to date on high-end systems, but the ability to scale up on demand or seamlessly fail over is IMHO another part of that equation. The discussion so far has only been about bang for buck, but keeping critical things running is another criterion.

Unwilling to move?

"Organisations that have Unix skills are similarly unwilling to move to a new server architecture and operating system at the same time (although if they are using packaged software and migrating to a new version, this kind of transition can be done less painfully than actually porting home-grown applications from a Unix box to a Windows or Linux system)."

The bank I used to work for moved their realtime risk system from Solaris/Websphere to Linux/JBoss due to the fact they could have many more machines to share the load and pay a lot less for the privilege.

Some excellent points

"The point is that one should choose the software for the task then the hardware and OS it best runs on. Anything else is the tail wagging the dog."

Still, IMHO the best advice in business. Any other view is likely coming from con-sultant/con-tractor/reseller BS.

And some remarkably restrained commenting as well.

In reality companies have inertia. It takes a *very* strong management to say "We know that to cost justify our future plans we are going to migrate some of our *core* systems to a platform we have *no* current experience of (but we're going to acquire it)" and actually do it.

IRL buying more (or an upgrade) of something is always easier than buying different.

Some of those software licensing prices are quite incredible. I suspect COTB's comment is nearer the mark than a lot of people would like to admit.

But I think scalability is *very* much under appreciated. It's understanding *those* issues *before* they become issues that separates the professionals from the amateurs.

Re john Smith 19

I agree with you about this point.

"Some of those software licensing prices are quite incredible. I suspect COTB's comment is nearer the mark than a lot of people would like to admit."

My finger points squarely at Oracle. AFAIK they have been raising prices for non-Oracle H/W since their Sun takeover. Luckily our DBs run on Sun kit. Our main systems will remain on AIX for the foreseeable future.

As has been mentioned, the H/W VM capabilities and the facility to easily add extra CPUs when the workload demands it are really great.

If the likes of Intel/AMD get their finger out and make their h/w work like that of a Power 7/595 then I'll be pleased. Yes, the costs for Intel/Linux/JBoss are less, but there is no way that they can touch the P7/AIX/WebSphere setup for raw throughput.

System throughput

Despite all the changes with multicore, RISC systems still have an advantage due to a switched, non-blocking crossbar architecture. This allows much greater overall system throughput. Not certain why the x86 companies never adopted this...?

UNIX is determined by The Open Group

Something is UNIX when it is certified as such by The Open Group (www.unix.org). Many people believe that the various Linux distros are UNIX, until a non-trivial application must be ported (either between them or to real UNIX; the problems are substantial).

Perspective...

Actually I find this article kind of funny...

Right now UNIX and its bastard sibling Linux are perhaps at the height of their might. I mean, you find them in everything: music players (iPods), smartphones, pads, 99% of the systems in the Top500 supercomputers, a solid part of the server market, and lastly a rising presence in the PC market.

Sure, it's sad that in the server market it's looking like AIX and Linux are going to be the only members of the UNIX family left in the long run.

// Jesper

PS Sure, you could talk about x86 versus the 'other processors', but in the OS war it's looking more and more like the UNIX family is the winner.

Hmm...

My former employer (a top-5 global bank) is migrating off all proprietary Unix (except for tactical essentials) and replacing it with virtualised Windows/Linux (RHEL) plus z/OS and z/Linux. The reasoning is simply that the bang-for-buck of the RISC processors running most modern UNIX OSes doesn't stack up against a modern x86_64 chip. This will be augmented with some of the new database appliances coming onto the market.

If they need really big throughput in Linux it goes on a Z server; otherwise it's VMware.

The UNIX guys really weren't very happy about this at all (why do people working in such a fast-moving industry resist change so much?), as there was the typical Unix-guy suspicion of Linux. However, they're coming round to the idea and can't really argue that much with the sheer cost savings involved.