About this blog

AIXpert Blog is about the AIX operating system from IBM running on POWER-based machines called Power Systems, and related software such as PowerVM for virtualisation, PowerVC for deploying VMs and PowerSC for security, plus performance monitoring and nmon

To extreme micro-partition or to Workload-Partition, that is the question?

So I got asked, as an example of a configuration that forces lots of workload per CPU:

Given a 16 CPU POWER machine and a need to run 100 workloads, would I recommend 100 LPARs or 100 WPARs?

In case you are not familiar with POWER technology:

LPAR = Logical Partition, which uses PowerVM to split CPU, memory and I/O across multiple virtual machines (also called virtual servers). The I/O is handled by a special-purpose LPAR called the Virtual I/O Server, which provides virtual network, virtual disks, virtual optical and virtual tape. Each LPAR has its own OS - in this case AIX, but it could be IBM i or Linux on Power. Typically, an LPAR gets AIX installed over the network using the NIM feature.

WPAR = Workload Partition, an AIX 6 and AIX 7 feature which behaves like a small copy of AIX running on top of a shared AIX kernel. It provides strong security, performance resource controls, shared binaries in memory and rapid creation, and it does not involve any extra layer of software or overhead. WPARs are AIX-only. The "master" copy of AIX is called the Global AIX and can access all the files in the WPARs.

The answer is obviously:

100 Workload Partitions in half a dozen LPARs

Perhaps I should explain why?

1) Memory

100 LPARs are going to run 100 separate copies of AIX (or other operating systems), and a basic starting memory size is 1 GB per copy (this is on the small side). So there is 100 GB of memory just to get the LPARs running. This has a serious cost.

100 WPARs - We had better assume that not every LPAR or WPAR can be at an identical AIX technology level (TL) and service pack (SP). So let's assume with WPARs that we are going to run a bucket system - this means half a dozen LPARs, each one running a different Global AIX at a particular TL and SP. Each WPAR will be added to the Global AIX which matches its required TL and SP. If we use the recommended shared /usr and /opt approach, the memory requirement is roughly 6 x 1 GB for the Global AIX LPARs plus 100 x 60 MB for the WPARs = roughly 12 GB.
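The arithmetic above can be sketched as a quick shell calculation, using the rough estimates from the text (1 GB per full AIX instance, 60 MB per WPAR):

```shell
# Rough memory totals, in MB, for the two approaches.
# Assumptions from the text: 1 GB per AIX instance, 60 MB per WPAR.
AIX_MB=1024
WPAR_MB=60

LPAR_TOTAL=$((100 * AIX_MB))                 # 100 full AIX copies
WPAR_TOTAL=$((6 * AIX_MB + 100 * WPAR_MB))   # 6 Global AIX + 100 WPARs

echo "100 LPARs:           $(((LPAR_TOTAL + 512) / 1024)) GB"
echo "6 LPARs + 100 WPARs: $(((WPAR_TOTAL + 512) / 1024)) GB"
```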

For bucket theory: see previous AIXpert blogs

In both cases the application memory requirements are in addition to the above, but again WPARs can share in-memory binaries and drastically reduce memory needs.

Given we are running 100 LPARs or 6 LPARs, a pair of Virtual I/O Servers is recommended and will take similar resources - actually the WPAR setup will take slightly less with just 6 LPAR connections, but that is nit-picking! Assume each VIOS takes 4 GB of memory, but that is the same for LPARs or WPARs.

LPARs = 100 GB and WPARs = 12 GB. WPARs win!

2) Installing

I would much rather install 6 LPARs than 100 LPARs. Note the 100 would not all be the same - they would have six combinations of TL and SP.
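By contrast, creating a WPAR is a one-line command per partition. Below is a dry-run sketch that just prints the mkwpar commands it would run - the wpar1..wpar100 names are my illustration, and on a real Global AIX you would drop the leading "echo":

```shell
# Dry-run: print the AIX mkwpar command for each of the 100 workloads.
i=1
while [ "$i" -le 100 ]; do
    echo "mkwpar -n wpar$i -h wpar$i"   # create a system WPAR with that name/hostname
    i=$((i + 1))
done
```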

Of course, the LUNs would take a month to be set up by the SAN team (according to the SAN-team response-time questions I have asked at Technical Universities).

WPARs have more flexible disk options and need less disk space. WPARs win!

3) Setting up Backup

LPAR: 100 LUNs would mean setting this up 100 times with 100 tests - say 1 hour each (just a guess on my part) = 3 weeks of effort

WPAR: 6 Global AIX backups, as from the Global AIX you can access all the WPAR files = 1 day
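Because the Global AIX can see every WPAR's files (by default under /wpars/&lt;name&gt;), one backup job per Global AIX covers all of its WPARs. A dry-run sketch - the gaix1..gaix6 hostnames and the /dev/rmt0 tape device are illustrative:

```shell
# Dry-run: one backup job per Global AIX LPAR instead of 100 per-LPAR jobs.
# /wpars is the default WPAR base directory.
count=0
for gaix in gaix1 gaix2 gaix3 gaix4 gaix5 gaix6; do
    echo "ssh $gaix 'backup -0 -u -f /dev/rmt0 /wpars'"
    count=$((count + 1))
done
echo "$count backup jobs instead of 100"
```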

WPAR takes less work to set up and organise. WPAR wins!

4) Performance Monitoring

LPAR: This is complicated as there are different options:

IBM Tivoli Monitoring can do this, but it would take a lot of setup to monitor 100 LPARs on a single graph. Then we have just too many lines on one graph, or a big screen for a massive spreadsheet-style output.

topas -C would also have too many lines of output - only 20% of the LPARs would be visible at one time.

WPAR: We would monitor the six Global AIX LPARs and then drill into the WPARs within them if necessary:

IBM Tivoli Monitoring will work well

topas -C to spot hot Global AIX LPARs

Six nmon sessions on-screen, one per Global AIX, showing the hot WPARs in each
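A dry-run sketch of that monitoring setup - the gaix1..gaix6 hostnames are illustrative:

```shell
# Dry-run: print one nmon session per Global AIX plus one cross-partition topas.
for gaix in gaix1 gaix2 gaix3 gaix4 gaix5 gaix6; do
    echo "ssh $gaix nmon"    # interactive nmon on each Global AIX
done
echo "topas -C"              # cross-partition view to spot the hot LPARs
```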

Tuning CPU and memory resource use could be scripted for LPARs (a remote script to the HMC to perform Dynamic LPAR changes, which can take time, particularly memory changes) or scripted for WPARs (simple local commands in the Global AIX that complete in under a second). WPAR is simpler, but not greatly different from LPARs, and both have much slower GUI interfaces.
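For example, the same CPU change is a remote HMC command for an LPAR but a simple local command for a WPAR - the system, LPAR and WPAR names and the values here are my illustration:

```shell
# The same CPU tweak expressed both ways (printed, not executed):
lpar_cmd="chhwres -r proc -m mysystem -o a -p mylpar --procunits 0.5"
wpar_cmd="chwpar -R shares_CPU=200 mywpar"

echo "LPAR (remote, via the HMC): $lpar_cmd"
echo "WPAR (local, sub-second):   $wpar_cmd"
```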

WPARs can be monitored easily. WPARs win!

5) Efficiency

100 LPARs require the Hypervisor to switch between LPARs on shared CPU cores at roughly a 1 millisecond level with zero shared memory cache between LPARs, so a binary in one LPAR will push the same binary in a different LPAR out of the memory caches.

100 WPARs require the Global AIX to switch between processes, which it can do at roughly a 1 microsecond level (1000 times faster), and given the WPARs can share binary code in memory, there is higher memory-cache efficiency.

WPARs Win!

6) AIX Updates

LPARs: 100 copies to update - aaaaaaaaaaaaaaaaargh! A 1 year project!!! Or we never get round to doing it and end up running out of date, missing loads of security updates and with RAS fixes missing for years.

WPARs: We shut down the WPAR, redeploy it to the new Global AIX at a later AIX TL and SP, run syncwpar and then start it up. Roughly five minutes per WPAR.
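That per-WPAR move-and-sync flow can be sketched as a dry-run script. The app42 WPAR name, gaix-new hostname and /backups path are my illustration; stopwpar, savewpar, restwpar, syncwpar and startwpar are the real AIX commands:

```shell
# Dry-run: print the steps to move one WPAR to an updated Global AIX.
w=app42
target=gaix-new

echo "stopwpar $w"                                 # stop it on the old Global AIX
echo "savewpar -f /backups/$w.bff $w"              # save the WPAR image
echo "scp /backups/$w.bff $target:/backups/"       # copy it to the new Global AIX
echo "ssh $target 'restwpar -f /backups/$w.bff'"   # restore it there
echo "ssh $target 'syncwpar $w'"                   # sync software to the new TL/SP
echo "ssh $target 'startwpar $w'"                  # and start it up again
```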

WPAR Wins!

Well, I could go on, but these are the big items and enough to prove my case. If you disagree - please comment.

You might disagree with the LPAR set-up times and have local automated scripting because you are a clever, advanced POWER user, but you are never going to convince me that installing AIX into an LPAR is faster than creating a WPAR ... unless you know differently!