Posted
by
Hemos
on Monday October 03, 2005 @08:59AM
from the learn-how-to-do-more dept.

An anonymous reader writes "Logical partitioning provides POWER processor-based servers with the capability to do server consolidation and optimize system resources. Dynamic logical partitioning enhances this capability by providing control of the allocation of the resources without impacting the logical partitions' availability. Linux on POWER supports dynamic LPAR for changes to physical I/O, virtual I/O, and processor resources."

And for once I drool over something I have only a vague idea of what it does.

What it does is allow reconfiguration of system resources, such as I/O cards, memory, or CPUs (or, on POWER5 with AIX 5.3, portions of a CPU), on the fly, without having to reboot your server to acknowledge them. AIX has had this capability since 5.2.

It's great for being able to juggle your resources on the fly, but it really comes in handy for moving your DVD drive between partitions on a frame without having to reboot. Having to reboot 2 servers just for that is a royal PITA.

The POWER series has this on-chip, so it's a whole lot faster than doing it via software and it doesn't require a reboot. The dynamic partitioning is the real difference between the POWER series and the PowerPC chips IBM sold to Apple. This is a feature carried over from IBM's mainframe days, and if you actually need it, it is very cool.

DLPAR is way cool. It's generations ahead of any VM technology on the market. The big difference is that DLPAR uses a hardware-based hypervisor that manages the different partitions, and it has the ability to provision or de-provision LPARs on the fly based upon a server's load.
LPARs go way back with IBM's zSeries.

You need a few things to be able to do this. One is what IBM calls the hypervisor: essentially privileged instructions on the chip that are used to control access to underlying system resources. These instructions are isolated so that the operating system cannot execute them. Two is a service processor: a separate, separately powered, special-purpose processor that CAN access the hypervisor instructions; it also does things like control power. Additionally you need to give each PCI card

Not quite the same. While Intel hyperthreading is in broad terms a similar idea the implementation is different.

As I understand it, SMT duplicates all of the stuff before the guts of the processor, so there are two complete pipelines, whereas Intel has a single pipeline: when the processor switches from one thread to another, there is overhead involved in refilling the pipeline from the second thread that does not exist in the POWER implementation.

I've worked with it a few times, and it's still a bit buggy, but IBM never ceases to amaze me by pulling out new patches on a daily/weekly basis. With time, this technology will mature, and when it does, it will really rock; for now, I'd still go with a BladeCenter + SAN.

Well, in the case of LPAR, when your host OS decides to pull a quickie on you, you find yourself at a standstill with an entire company of 100+ users without their apps running. I wouldn't like to be in the shoes of the iSeries admin in that case.

It depends. (Do note: I'm not the expert; I was just forced to play (hack) with it when things didn't work.)

The OS/400-centric implementation tends to manage the partitions from the OS/400 operating system, feeding virtual SCSI devices to the neighbour partitions, which is logical in a way, since you confine all your backups to one operating system.

The LPAR will pool your CPU and memory resources, but you still have to feed it an I/O subsystem for disks, Ethernet, etc. I aven't se

Well, the interesting thing is, with the i5 boxes, they dumped the mandatory host partition for Linux or AIX partitions. On the p5 systems, you can still have dedicated-resource partitions (own disk adapters and such), but you also have the option of running a host partition for shared resources like Ethernet or disk. It's called the VIOS. It's basically a skinnied-down version of AIX, and all it does is manage the physical resources that are being virtualized to the client partitions.

Well, yes and no. You can configure the LPAR to be entirely segregated from a hardware standpoint. However, for the best rate of consolidation, you'd need to use VIO servers (virtual I/O) for the sharing of disk (at a Logical Volume level) and the sharing of Ethernet cards (since you can't split the 2-port cards over 2 LPARs). In that case, the "loss" of the VIO server would impact any client LPARs. Which is why a lot of people create two VIO servers, to provide redundancy across two separate LPARs. Tha

To expand on the other reply you got: there are two ways you can do LPARs. If you have enough available physical I/O resources for each LPAR, then you can assign each LPAR its own NIC, hard disk controller, SCSI controller for tape, etc.

If you don't have enough physical I/O, then you can have an instance of your favorite OS (Linux, AIX, or OS/400) "own" all of the physical I/O and serve out its resources to the other LPARs.

_or_ you can have a mix of the two. For example, you could have one LPAR running Linux f

blades don't buy you much. they still price about the same as a bunch of 1U boxes, and if you want hot-swap scsi drives, bladecenter doesn't increase density. if you go with boot from san like you're implying, your density doubles, but I've not known anyone who's had a good experience with boot from san. You can go with the IDE drives and still have the double density, but do you really want ide in your production environment?

After walking into a number of clients to whom a previous consultant had sold HP BL10e's, I can categorically state that that is a big fat negative. Those things had their laptop HDDs die left and right after just 2-3 years; much less reliable than just about any storage solution I have ever run into, other than the troubled IBM Deskstar line.

The current 2-way (HS20) Xeon DP EM64T-based blades use SCSI drives (not hot-swappable). So you can put 14 2-way servers in a 7U chassis. In addition to density, this also helps save on other infrastructure: network cabling, KVM, and power.

The 4-way blades (HS40) are still junk: old IA-32-only Xeon MP chips, IDE drives, etc. If you have an application that must scale above 2-way, then don't use a blade for that application.

The problem is, they can't get the BladeCenters stable. :( I can tell you we did 4 BIOS upgrades just in the first half of 2005 on 4 fully populated BladeCenters. That SUCKED! All because they couldn't get the BIOS right, causing all traffic to just stop. Hrm. But they got it where it's at. My new company wants to swear off of them, but they can't STAND the HP or Dell blades.

The entire world will move to POWER in the next 10 years. POWER5 is where it's going, folks. Great IBM hardware is paving the way for the great OSes of the world to run like champs. I have been an AS/400 (now iSeries) admin for over 15 years, and POWER5 is awesome. Good to see some Slashdot coverage on the topic of POWER. IBM is still trying to figure out what to do with Linux, and maybe this is it. We'll have to wait and see what happens next.

If you are looking for a $999 server, I think you might be looking in the wrong place. If you are going to haul freight, you don't go get a Toyota pickup with a 4-banger; you go get a big rig and get hauling. If you have an application that is a heavy load, you go get a POWER5, and you might pay more, but you know it is going to hold up under the strain of the load. You can get a small POWER5 system from IBM for around $9000, and the same HP or Dell would not cost much less

They said the same thing 10 years ago about PowerPC, and look what has happened.

Target market for POWER5: servers. Not mainframes (those are a different area), not HPC (horrible FPU/$ performance compared to the main competitors), not small servers (the PPC970 is more or less dead in the water, and POWER5 with its horribly expensive MCMs isn't cost-effective in the more "normal" environment).

So I _seriously_ doubt "the entire world" will be using POWER in 10 years. They can be happy if they keep their market share.

"So I _seriously_ doubt "the entire world" will be using POWER in 10 years. They can be happy if they keep their market share."

With the PS3, Xbox 360, and Revolution containing some flavor of POWER (Cell certainly inherited POWER5 technologies), I'm pretty sure "the entire world" will have these powerful capabilities. Unlike workstations and PCs, these game machines won't be so hacker-friendly, but then I doubt most people of "the entire world" would care....

Well then, in that case, the next Sony PlayStation would have to be a failure for your comments to be true, and since I would guess that I am going to get a new PlayStation, and so are the 10 guys near my desk, I can assume that the majority of the world will be using POWER5 and the same technology that makes the iSeries work like a champ. Although on a smaller scale, but nonetheless not too far from what is running my iSeries right now, just in a smaller form factor.
Now you can take the Power/5 hardware that r

If your post were an advertisement, the following legalese would be stated near the bottom.

"10 guys near my desk" is not an accepted marketing survey technique and the results should not be used to estimate global demand. Other demographic groups, particularly females, may show significantly different preferences. Sony and IBM are registered trademarks. So are Dell, HP, Gateway, Intel and AMD but since they are not using POWER this statement is not needed.

Since when are mainframes a totally different area? The IBM zSeries machines now use POWER5 chips, and have even better LPAR functionality than is mentioned in the article (more partitions, more processors per system, etc.). Also, no small servers? I don't know who you've been buying from, but tear up your contract with them right now. IBM offers really low-end SMB servers (i.e. the i5 520 Express Edition) that use POWER5 chips and have the same price point as their Intel-based xSeries brothers.

This news is great to see on Slashdot. I think companies will start to adopt the POWER architecture mainly due to its long-term roadmap. POWER6 and 7 are going to be even better. The only thing that will slow adoption is price: they are quite expensive, which was one of Apple's reasons for moving to Intel.

I respectfully and completely disagree. The world enjoys using a 64-bit extension to the 4004 architecture. We like using a single-accumulator processor with 3 "general purpose" registers. We adore the massively irregular instruction set, we like saying "push bp/mov bp,sp" every four instructions. We like the whole notion of putting values in certain (and only those) registers, so we can say "repne scasb", or "mul" or "div". The segmented memory architecture and the segment registers are, in a word, brilliant. The notion of "near" and "far" calls and jumps, and the fact that the segment and offset are pushed in the wrong order, is an endless source of delight for us. The floating point unit, and its instruction set, are nothing short of poetry in silicon. The pipelining and branch prediction are the epitome of efficiency.

In other words, you are just another sadly mistaken fanboy of an inferior processor architecture.

What did you say... huh... fanboy? Yes. You have to be a fan of something! I have no idea what you were saying, but I sure wish I did. Man, you sure are smart. Sadly mistaken? I really don't think so, but time will tell, and so far, from what we have seen, I am winning that argument already.

I wouldn't say POWER is the way everyone is going. I would instead say "virtualization" is the way everyone is going. While the POWER architecture and IBM have a big lead in this, Microsoft, HP-UX, Sun Solaris, Intel, and AMD are making headway. With the success of VMware, hardware and OS vendors are looking for ways to capitalize on this trend of system utilization.

I have yet to see anything other than a statement that AMD and Intel will be adding features to their next line of chips that will allow Xen to run virtualized OSes without modification. But if that comes to pass, we may be looking at the gap between POWER5 and AMD/Intel virtualization becoming much smaller.

I like you, Brunson (not like that), you're my kind of "sailor" and bad-ass, and lately I am not sure if even Jesus loves me. Thanks for the comments. Nice to see that there is someone smart in Colorado.
Rock On Power/5....I need to keep my job!

It's different for the POWER5 chip, because now the hypervisor runs in firmware but has some hooks into the OS.

But, yeah. It's all from the mainframe world.

I mean, without this you can still run Linux on an LPAR, and you can still give that LPAR a fractional CPU allotment (capped or uncapped) and fractional I/O (Ethernet, hard disk). Now you can just do it on the fly.

This has been available from IBM for years. I only started with POWER4, but I think it was available a few versions back as well, with limited functionality on Linux. The more advanced features recently became available with the release of the POWER5 processor. Nearly all of the RAS features that have been available for AIX are now available for Linux on IBM hardware.

POWER4 systems with AIX 5.1 were the first LPAR-capable systems. But even 5.1 didn't support DLPAR operations: anytime you wanted to rearrange your resources, you had to bounce the affected partitions. AIX 5.2 was the first OS to support DLPAR operations. Linux just started to support this with the 2.6 kernel as well.

Actually, LPAR on POWER5 is _quite_ different if you are used to LPAR on the iSeries. The pSeries and zSeries guys have had an HMC (HSC, actually) for years. Now that IBM is trying to make it more friendly for the iSeries folks, it actually has a GUI. I've talked to quite a few AIX guys that really like the interface, and just as many that want their command line back.

Of course those guys are very happy to learn that a quick click on the HMC desktop, and you have a shell.

You know, that's pretty much exactly what I thought when I saw this 'article'. I suppose IBM's LPAR technology on Power is useful, cool tech. But this seems more like a press release than 'news'. This stuff has been around for a few years now (granted, a lot of folks still haven't heard of it, so I suppose in a way it's 'news' to them).

If you want a better idea of how LPARs are set up on POWER5 hardware, check out episode 5 of The Packet Sniffers. They show a 570 system and some brief menus of the HMC console one uses for LPAR resource management and DLPAR. Not exactly DLPAR on Linux, but the process is the same for Linux as for AIX. If you are curious how this works, then check it out. http://www.packetsniffers.org/ [packetsniffers.org]

This document just touches on the capabilities. If you want to see a little more detail about running Linux LPARs on a POWER5 system, I suggest heading here [ibm.com].

This is a good technology, and if there are people wanting to get LPAR capabilities without having to purchase all those extra IBM OSes (AIX, i5/OS), you might look into the OpenPower line: 2-way or 4-way POWER5 systems that run only Linux and can create up to 40 LPARs on one system. That's basically like having 40 different Linux servers all running at the same time on 4 total processors.

I agree this technology has some limitations as of right now, but it may not be a bad idea to look at it. And remember, this is PPC Linux, not your standard Intel Linux. While your boss won't know the difference, you should.
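The 40-partition figure is just the micro-partitioning floor at work: a POWER5 micro-partition can be entitled to as little as 0.1 processing units, so a 4-way box divides into at most 4 / 0.1 = 40 slices. A toy check of the arithmetic (plain math, not any IBM tool):

```python
# The arithmetic behind "40 LPARs on 4 processors": POWER5
# micro-partitioning allows an entitlement as small as 0.1
# processing units per partition.

MIN_ENTITLEMENT = 0.1   # smallest slice one micro-partition can hold

def max_lpars(physical_cpus: int) -> int:
    """Upper bound on micro-partitions for a given CPU count."""
    return int(round(physical_cpus / MIN_ENTITLEMENT))

print(max_lpars(4))   # 40  (the OpenPower 4-way figure)
print(max_lpars(2))   # 20
```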

It's a pretty cool technology. We're running multiple AIX 5.3 partitions on IBM pSeries boxes. Setup of the systems is pretty easy, and allocating memory/CPUs is straightforward. The only concern is that the overhead increases substantially with the number of LPARs. The only problem I have with pSeries Linux is that it somewhat negates the cost advantage of Linux on Intel. Well, make that obliterates the cost advantage. IBM's AIX is free with the hardware; Linux is licensed per instance. So it's cheaper to ru

While AIX isn't "free" (you're charged per active processor), almost all pSeries (soon to be IBM system p5) have AIX included with the system purchase. This does increase the value of using AIX in micropartitions instead of linux.

This type of technology has been available from IBM for years. I remember those old AS/400 machines during my undergrad that had removable, hot-swappable boards containing extra processors. One of my professors told me about when he took operating systems: he built his OS on an IBM machine and was able to use one of the six processors available in his own little virtual space without interfering with anyone else's simulations.

Correct me if I'm wrong, but it seems like IBM has placed into hardware what systems like Xen currently do in software: allocating virtual space for different operating systems to share resources and execute simultaneously.

Does Xen allow you to change a host OS from having 0.1 of a virtual processor to 2 virtual processors on the fly? I don't know Xen. That's what dynamic LPARs do: you can change these things on the fly.

Does Xen allow you to change a host OS from having 0.1 of a virtual processor to 2 virtual processors on the fly?

Not without CPU level hardware support. I don't know if this [theregister.co.uk] tiny mention of support means they will have the features required for that level of control, but it's something that piqued my interest.
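For the curious, the bookkeeping being described above, growing a partition from 0.1 to 2 processing units out of a shared pool without a reboot, can be modeled with a toy sketch. The class and method names here are invented for illustration; this is not the HMC interface or Xen's API:

```python
# Toy model of on-the-fly entitlement changes against a shared
# processor pool. Nothing here is a real IBM or Xen interface.

class SharedPool:
    """Tracks how many processing units remain free in the pool."""

    def __init__(self, processing_units: float):
        self.free = processing_units
        self.lpars: dict[str, float] = {}

    def set_entitlement(self, lpar: str, units: float) -> None:
        """Grow or shrink an LPAR's slice without a 'reboot',
        as long as the pool can cover the change."""
        delta = units - self.lpars.get(lpar, 0.0)
        if delta > self.free + 1e-9:
            raise ValueError("pool exhausted")
        self.free -= delta
        self.lpars[lpar] = units

pool = SharedPool(4.0)                 # 4-way box, all CPUs pooled
pool.set_entitlement("linux1", 0.1)    # start tiny
pool.set_entitlement("linux1", 2.0)    # 0.1 -> 2 processors, on the fly
print(round(pool.free, 1))             # 2.0 units left for other LPARs
```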

Correct me if I'm wrong, but this seems like IBM has placed into hardware what systems like Xen currently does in software, allocating virtual space for different operating systems to share resources and execute simultaneously.

You're wrong. :P
The reason you're wrong is that the LPAR concept has been in big iron for quite a few years, long pre-dating Xen and even pre-dating VMware. Saying that IBM has put the Xen concept into hardware makes it sound like IBM is the one who copied an idea. Xen is a

Nice write-up from IBM, but it's important to remember that the Linux kernel only supports dynamic changes to CPU and PCI devices; you can't move memory around. AIX allows dynamic memory; the Linux kernel will need some fundamental changes to enable this. POWER5 is indeed the coolest technology around today, but dynamic LPAR started on the POWER4 back in 2001, so this is kinda old news.
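The CPU half of that story rides on the Linux CPU hotplug interface, which exposes an `online` file per CPU under sysfs. A rough sketch of flipping it follows; the helper and its `sysroot` parameter are made up for illustration, and on a real box this writes to sysfs, which requires root and a hotplug-capable kernel:

```python
import os

def set_cpu_online(cpu: int, online: bool,
                   sysroot: str = "/sys/devices/system/cpu") -> None:
    """Flip a CPU's hotplug state by writing its sysfs 'online' file.

    The sysroot parameter exists only so this sketch can be pointed
    at a scratch directory; on a real system it targets sysfs and
    needs root privileges.
    """
    path = os.path.join(sysroot, f"cpu{cpu}", "online")
    with open(path, "w") as f:
        f.write("1" if online else "0")

# e.g. set_cpu_online(1, False)  # offline cpu1 before the
#                                # hypervisor reclaims it
```

A DLPAR processor remove is roughly "offline the CPU in the OS, then reclaim it in the hypervisor"; memory had no equivalent path in the 2.6 kernel of the day, which is exactly the gap the comment above describes.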

This is more than enough for me to stick with AIX. Granted, a lot of the time the LPARs never change, but if you had spare memory and you wanted to allocate it without taking the partition down, you can't do it with Linux... yet. AIX is nice like that.

I've got a p570 sitting in a data center waiting for its install right now. I'm going to be using AIX 5.3 and DB2 8.2.

Linux would be fine, but there's no price advantage, and AIX is more mature. But dynamic memory isn't the real issue there: most of the memory will be consumed by the database, so it's DB2's ability to dynamically change its memory footprint that is more critical.

I'd love to see a version of "Linux" that executed functions solely to allocate system resources. Authenticated access by processes to each other and to the hardware that some of them represent (drivers). All other threads/processes would be userland apps. This LPAR system would offer enough flexibility under the hood of the actual OS that the rest of the system could be highly efficient, while also simple, secure and distributable.

That's exactly what the Virtual I/O Server is. You can get it with any p5 system that has APV (Advanced Power Virtualization). However, it's a skinnied down version of AIX instead of Linux.

The VIOS acts as a broker of disk and communication resources. You can have one Ethernet adapter assigned to a VIOS and create up to 20 virtual LANs and basically an unlimited number of SEAs (Shared Ethernet Adapters) for client partitions.

As for disk, you can have one SCSI controller and create vSCSI adapters

Since logical partitioning is the only cool new technology I have heard of in Solaris 10, Linux needs this LPAR support (everywhere) to keep making inroads in Fortune 500 companies' datacenters.

We currently have a 570 and it's awesome. We can allocate resources on the fly to any of our AIX partitions, and we can also run Linux on an LPAR and AIX on an LPAR and even OS/400 (or whatever they call it now, as AS/400s are now basically POWER machines). DLPARing lets you allocate memory, processors, and disk from one partition to another without needing to take it down. IBM makes THE BEST hardware around... BAR NONE when it comes to reliability and availability. It is GREAT stuff.

it's possible to take server consolidation too far. suppose it costs you $2e6 to buy an IBM mainframe that can support 200 LPARs (I mean real, active ones, not idle ones.) when is this better than putting each on a $2k server of its own? sure, sometimes it is: the LPAR can react more dynamically, and some aspects of TCO would be lower. but we have to be honest when making this comparison - let's assume the separate servers are auto-provisioned, for instance, and have IPMI and some sort of intelligent st

I agree, to a point. In some situations it makes more sense to fill up a rack with 1U 2-way Intel servers and run Windows or Linux as the OS. However, if some of the servers are running at 10-20% processor utilization and others are getting maxed out, how long does it take to redistribute your processing resources? With virtualization, provided you have the processors assigned to a pool, the system will automatically move processing power from one LPAR that is not maxed on processor res
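The uncapped redistribution being described can be sketched as a weight-proportional split of spare cycles. This is a deliberate simplification for illustration, with an invented function name, not IBM's actual dispatch algorithm:

```python
# Toy sketch: split spare processing units among uncapped LPARs in
# proportion to their weights.

def share_spare(spare_units: float, weights: dict) -> dict:
    """Divide spare units proportionally to each LPAR's weight."""
    total = sum(weights.values())
    return {name: spare_units * w / total for name, w in weights.items()}

# An idle web LPAR's slack flows toward a maxed-out database LPAR.
extra = share_spare(1.5, {"db": 2, "web": 1})
print(extra)   # {'db': 1.0, 'web': 0.5}
```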

I think an even more compelling situation is one in which you max out your 2-CPU or 4-CPU box and have nowhere to go for extra cycles/bandwidth/etc. In an LPAR situation you can still easily provide the extra resources; with a more traditional server configuration you'll have to rehost, which will completely waste the cost of your prior server, plus you've got to pay for a larger one as well, which may only be maxed out once a month. Then there's the labor, outage, and risk involved in the migration - whi

This has been around for a while. Did Anonymous Coward just now find out about it? Or is Anonymous Coward IBM marketing just trying to get the word out? Anyway, I have been using it for about two years. It's pretty impressive technology. The recent addition of sub-processor partitioning is really cool.
However, the one item that seems out of place is the fact that you need a separate system (a Hardware Management Console, HMC) to manage it. This is a separate Linux x86 box (a $4,000 beige box) that you have to

I agree. The HMC is definitely a bad part of the LPAR management. It will be interesting if IBM addresses this with the new POWER5+ launch slated for the next few weeks.

But the HMC does provide a very good way of supporting multiple systems from one console. I'm not sure if you're working with the p4 or p5 architecture, but on the p5 they've moved it from a serial network to an IP network for managing the servers. Much better to have one Cat 5 cable out of that HMC than to have a few RANs hanging off th

It just so happens that we had a guy talking about the new POWER5/LPAR/VM stuff at the Ohio LinuxFest Saturday. I won't claim to know very much about the topic, but the presentation was very clear about the fact that the new VM/virtualization stuff in POWER5/Linux isn't your father's LPAR. While you can have up to 40 LPARs, you can have many, many more VM servers on top of that, or even on top of the hardware without LPARs at all. I'm trying to find the PowerPoint from the speaker in question (Scott Cour

I was a part of the first bunch of regulars to get to look at LPARs on the iSeries back in 2000. They are great. The AIX group gave us the HMC, and as a side note, my brand new 570 with the HMC showed up this AM and man, I am giddy!

I thought you would be very interested in this. IBM just announced the IVM. It allows you to partition a single server without the need for an HMC; that functionality has been moved internal to the system. Check out the latest announcement [ibm.com].