"The viosbr command automatically creates a backup, whenever there are any configuration changes. This functionality is known as the autoviosbr backup. It is triggered every hour, and checks if there are any configuration changes, or any other changes. If it detects any changes, a backup is created. Otherwise, no action is taken. The backup files resulting from the autoviosbr backup are located under the default path /home/padmin/cfgbackups with the names autoviosbr_SSP.<cluster_name>.tar.gz for cluster level and autoviosbr_<hostname>.tar.gz for node level. The cluster-level backup file is present only in the default path of the database node.

The -autobackup flag is provided for the autoviosbr backup functionality. By default, autoviosbr backup is enabled on the system. To disable the autoviosbr backup, use the stop parameter; to enable it, use the start parameter. When the autoviosbr backup is disabled, no autoviosbr-related tar.gz file is generated.

To check whether the autoviosbr backup file present in the default path is up to date, use the status parameter. To access the cluster-level backup file on any node of the cluster, use the save parameter. This step is necessary because the cluster-level backup file is present only in the default path of the database node.

If the node is part of a cluster, you can use the -type flag to specify the backup type. The type can be either cluster or node, depending on whether it is a cluster-level or a node-level backup."

Flags""

On your VIOS, under oem_setup_env (as root), you'll find the following new entry in root's crontab:

# crontab -l | grep autoviosbr

0 * * * * /usr/ios/sbin/autoviosbr -start 1>/dev/null 2>/dev/null

This entry will check for any configuration changes and generate a new backup if necessary. Here's an example from my VIOS, running 2.2.5.10.

$ ioslevel

2.2.5.10

I create a new virtual optical device, which should trigger a new backup the next time the autoviosbr script runs (once per hour). Prior to creating the device, viosbr shows the autobackup status as Complete (no changes). I ensure that autobackup is configured by stopping and starting it with the viosbr command.

$ viosbr -autobackup stop -type node

Autobackup stopped successfully.

$ viosbr -autobackup start -type node

Autobackup started successfully.

$ viosbr -autobackup status -type node

Node configuration changes: Complete.

$ mkvdev -fbo -vadapter vhost34

vtopt13 Available

Immediately after the vtopt device is created, the autobackup status displays as Pending (something has changed but has not yet been backed up).

$ viosbr -autobackup status -type node

Node configuration changes: Pending.

The autoviosbr file is created in /home/padmin/cfgbackups.

$ r oem

oem_setup_env

# cd /home/padmin/cfgbackups

# ls -tlr

total 72

-rw-r--r-- 1 root staff 12189 May 10 14:00 autoviosbr_s824vio2.tar.gz

On the hour, the autoviosbr script runs, notices that the configuration has changed and generates a new viosbr backup file. The viosbr autobackup status changes to Complete.
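If you want to confirm this yourself, re-running the earlier status command after the top of the hour should report Complete again, and a fresh autoviosbr tar.gz timestamp should appear in /home/padmin/cfgbackups (the output shown here is simply what you'd expect based on the earlier examples, not a new capture from my system):

$ viosbr -autobackup status -type node

Node configuration changes: Complete.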

This post introduces two new features that I came across recently and found rather interesting. The first relates to PowerVP (VCPU affinity) and the second to POWER8 (Flexible SMT).

I’m particularly impressed by this new feature in PowerVP version 1.1.2 (SP1). You can view CPU and memory affinity information directly from the PowerVP GUI.

From the PowerVP Installation and User Guide:

“If you go to the View menu and select the Display CPU affinity information, the CPU utilization information will be replaced in the Core columns by the partition affinity information for the cores. If you hover your mouse over a core, you will see a tool tip showing the virtual CPU affinity by partition and will see the LPAR ID and the number of virtual CPUs assigned to the partition on that core. This information can be helpful when analyzing the processor affinity of your system. Note that for shared partitions, a partition could have affinity for multiple cores. Also, just because a partition has affinity for a core, that partition will not necessarily be dispatched to that core when it runs. Partition dispatching is performed by the hypervisor, if you want more information on this, refer to documentation on the hypervisor in the IBM Infocenter.”

Once I selected the “Display CPU affinity information” option, I noticed that the cores shown in the “node drill down” view displayed partition affinity using different colours for each LPAR. Hovering my mouse over a core showed each of the LPAR IDs and their associated virtual CPU count assigned to the core.

I was able to do the same with memory. The boxes next to the memory controllers (MC0 or MC1) are memory affinity boxes. The colours in these boxes show the percentage of memory that is assigned to a partition on that particular memory controller. Hovering the mouse over this box showed the LPAR ID and the percentage of memory assigned to each LPAR. This information may be useful if you are reviewing a particular partition's memory affinity.

To make it easier to read, I was able to obtain a list of LPARs and their associated colours from the Edit menu with the “Select Visible LPARs” option.

I have noticed that if you have partitions configured with dedicated processors and you click on the LPAR name (in the partition list), PowerVP will highlight the cores with the colour assigned to that dedicated partition. However, if your partitions are configured with shared processors, they are all highlighted with the same colour (blue). At this time, PowerVP does not differentiate between different shared processor pools. Perhaps this feature will appear in the future?

You can learn more about PowerVP from the following Redbook on the topic:

Something else I wanted to mention that is related to CPU affinity is Flexible SMT. This new feature is available on POWER8 systems and is covered in more detail in section 4.2 of the new POWER8 tuning Redbook. What is interesting is that, compared to previous generations of the POWER processor, the performance characteristics of a thread are the same regardless of which hardware thread is active. This allows for more equal execution of work on any thread of the processor. It also means that techniques such as rsets and bindprocessor may no longer be required on POWER8.

On POWER7 and POWER7+, there is a correlation between the hardware thread number (0-3) and the hardware resources within the processor. Matching the thread numbers to the number of active threads was required for optimum performance. For example, if only one thread was active, it was thread0; if two threads were active, they were thread0 and thread1.

On POWER8, the same performance is obtained regardless of which thread is active. The processor balances resources according to the number of active threads. There is no need to match the thread numbers with the number of active tasks. Thus, when using the bindprocessor command or API, it is not necessary to bind the job to thread0 for optimal performance.

With the POWER8 processor cores, the SMT hardware threads are designed to be more equal in the execution implementation, which allows the system to support flexible SMT scheduling and management.

On POWER8, any process or thread can run in any SMT mode. The processor balances the processor core resources according to the number of active hardware threads. There is no need to match the application thread numbers with the number of active hardware threads.

Hardware threads on the POWER8 processor have equal weight, unlike the hardware threads under POWER7. Therefore, as an example, a single process running on thread 7 would run just as fast as running on thread 0, presuming nothing else is on the other hardware threads for that processor core. AIX will dynamically adjust between SMT and ST mode based on the workload utilization.
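As a concrete (if contrived) illustration of the bindprocessor point, the commands below are the sort of thing you could run on an AIX LPAR; the PID used here is just a placeholder:

# smtctl

# bindprocessor -q

# bindprocessor 1234567 7

smtctl (with no arguments) reports the current SMT mode and the state of each hardware thread, bindprocessor -q lists the available logical CPUs, and the final command binds the process with PID 1234567 to logical CPU 7. On POWER7 you would have cared about landing hot work on a core's primary thread; on POWER8, logical CPU 7 (a secondary SMT thread) performs just as well as logical CPU 0, assuming the rest of the core is idle.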

The first time I ran the 'viostat -adapter' command I expected to find non-zero values for kbps, tps, etc., for each vfchost adapter. However, the values were always zero, no matter how much traffic traversed the adapters.

$ viostat -adapter 1 10

...

vadapter:              Kbps      tps    bkread   bkwrtn

vfchost0                0.0      0.0       0.0      0.0

...

vadapter:              Kbps      tps    bkread   bkwrtn

vfchost1                0.0      0.0       0.0      0.0

I wondered if this was expected behaviour. Was the output supposed to report the amount of pass-thru traffic per vfchost? In 2011, I posed this question on the IBM developerWorks PowerVM forum. One of the replies stated:

"viostat
does not give statistics for NPIV devices. The vfchost adapter is just a
passthru, it doesn't know what the commands it gets are."

I appreciated someone taking the time to answer my question, but I was still curious. I tested the same command again (in 2013) on a recent VIOS level (2.2.2.1), but I received the same result. It was time to seek an official answer on this behaviour.

Here is the official response I received from IBM:

1. FC adapter stats in viostat/iostat do not include NPIV.

2. viostat & iostat are an aggregate of all the stats from the underlying disks, which of course NPIV doesn't have. There's really no way for the vfchost adapter to monitor I/O, since it doesn't know what the commands it gets are. He's just a passthru, passing the commands he gets from the client directly to the physical FC adapter.

3. You can run fcstat on the VIOS but that has the same issues/limitations mentioned above.

Intent here was that customers would use tools on the client to monitor this sort of thing.

To summarize the comments from Development: “viostat does NOT give statistics for NPIV devices.”

This made sense, but I wondered why the tool hadn't been changed to exclude vfchost adapters from the output (to avoid customer confusion); there's obviously no valid reason to ever display any information for this type of adapter. I also understood that it was expected that I/O would be monitored at the client LPAR level. But I must say that an option for monitoring VFC I/O from a VIO server would be advantageous, i.e. a single-source view of all I/O activity for all VFC clients, particularly when there are several hundred partitions on a frame. The response was:

“…the way the vfchost driver currently works is that it calls iostadd to register a dkstat structure, resulting in the adapter being listed when viostat is called. This is misleading, however, since the vfchost driver does not actually track I/O. The commands coming from the client partition are simply passed as-is to the physical FC adapter, and we don't know if a particular command is an I/O command or not. The iostadd call is left over from porting the code from the vscsi driver, and Development agrees it should probably have been removed before shipping the code.

There has also been mention of a DCR #MR0413117456 (Title: FC adapter stats in viostat/iostat does not include NPIV) which you can follow up with Marketing to register your interest/track progress if that is something you're interested in pursuing.”
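In the meantime, if you need per-client VFC numbers, the practical option (as Development suggests) is to collect them on the client LPAR itself. A minimal example follows; the adapter name fcs0 is just an assumption, so substitute whatever lsdev reports on your client.

On the AIX client LPAR:

# lsdev -Cc adapter | grep fcs

# fcstat fcs0

fcstat on the client reports the traffic statistics for the virtual FC adapter, which is exactly the I/O that the VIOS-side vfchost cannot account for.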

In a previous post I discussed how you can identify some of the different types of PowerVM Capacity on Demand (CoD) activation keys from IBM.

Recently I had to activate Active Memory Expansion (AME) on a couple of POWER7 systems. I discovered that all of the keys contained a similar string. It appears that if a CoD key contains the string CA1F0000000800 then it is safe to assume it will activate AME for a particular system. e.g.

9741EF3AE6969F17CA1F0000000800419D

937A1240F00F5B05CA1F0000000800413D
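A quick, purely illustrative way to eyeball a new key for that marker (using one of the keys above):

$ echo "9741EF3AE6969F17CA1F0000000800419D" | grep CA1F0000000800

9741EF3AE6969F17CA1F0000000800419D

grep simply echoes the key back if the AME marker string is present.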

And while I’m talking about AME, I thought I’d share this tip as well.

I was performing a demo of AME for my team and wanted to change the AME expansion factor using DLPAR during the demo. I did not want to use the HMC GUI but rather the HMC command line (as it’s faster).

To change the expansion factor for an LPAR (that’s enabled for AME), you can use the chhwres command from the HMC CLI.

During the demo I highlighted the current (running) expansion factor for the LPAR (using the lshwres command).
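For reference, the commands I'm describing look roughly like the following; the managed system name (Server-8286-42A) and LPAR name (lpar1) are placeholders, and the mem_expansion/curr_mem_expansion attribute names are from memory, so double-check them against the lshwres and chhwres man pages on your HMC level:

$ lshwres -r mem -m Server-8286-42A --level lpar --filter "lpar_names=lpar1" -F lpar_name,curr_mem,curr_mem_expansion

$ chhwres -r mem -m Server-8286-42A -o s -p lpar1 -a "mem_expansion=1.5"

The first command lists the LPAR's current memory and running expansion factor; the second dynamically sets the expansion factor to 1.5.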

Until recently, if you were configuring a new LPAR with virtual FC adapters you couldn’t force it to log into the SAN before an operating system (such as AIX) was installed. I’ve written about this before (see link below). I also offered a way to work around this issue.

I’ve successfully used this method on both POWER6 (595) and POWER7 (795) systems. After configuring a new LPAR profile with a single VFC adapter, the VIOS reported that the client was not logged into the SAN:
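(The original VIOS output isn't reproduced here, but the check itself is just lsmap on the VIOS; the vfchost name below is an example only.)

$ lsmap -npiv -vadapter vfchost0

For a client that has not yet booted or logged into the fabric, the Status field in that output shows NOT_LOGGED_IN.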

If you run out of space in the root file system, odd things can happen when you try to map virtual devices to virtual adapters with mkvdev.

For example, a colleague of mine was attempting to map a new hdisk to a vhost adapter on a pair of VIOS. The VIOS was running a recent version of code. He received the following error message (see below). It wasn’t a very helpful message. At first I thought it was due to the fact that he had not set the reserve_policy attribute for the new disk to no_reserve on both VIOS. Changing the value for that attribute did not help.

I found the same issue on the second VIOS, i.e. a full root file system due to a core file (from cimserver). I also found no trace of a full file system event in the error report. Perhaps someone had taken it upon themselves to “clean house” at some point and had removed entries from the VIOS error log.

Make sure you monitor file system space on your VIOS. Who knows what else might fail if you run out of space in a critical file system.
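A quick manual check, run as root under oem_setup_env (the find is just one way to hunt for the sort of stray core file that caused this particular problem):

# df -g /

# find / -xdev -name core -exec ls -l {} \;

The first command shows how full the root file system is; the second lists any core files sitting in the root file system, such as the cimserver core mentioned above.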

This little tip was passed on to me by a friendly IBM hardware engineer many years ago.

When entering a capacity on demand (CoD) code into a Power system, you can tell how many processors and how much memory will be activated just by looking at the code you’ve been given by IBM.

For example, the following codes, when entered for the appropriate Power system, will enable 4 processors (POD) and 64GB of memory (MOD). I can also tell* that once the VET code is entered, this system will be licensed for PowerVM Enterprise Edition (2C28).

Or the PDF. It's not as interesting or fun, as the animations don't work but you get the idea. I'm sharing the PDF because it appears that you can open this file on Windows fine (in read-only mode) but on a Mac it prompts for a password without an option for opening in read-only. Shame.