I’ve received a couple of requests for an example of using a
post-migration script with nimadm.
What follows is a simple example of using such a resource with NIM. If you are
not familiar with the nimadm tool, then
perhaps you’d like to start by reading my article on using nimadm
to migrate to AIX 6.1.

The nimadm utility
can perform both pre- and post-migration tasks. This is accomplished by running
NIM scripts either before or after a migration. The tool accepts the following
flags for the pre- and post-migration script resources (-a and -z respectively):

pre-migration

This
script resource is run on the NIM master, but in the environment of the
client's alt_inst file system, which is mounted on the master (this is done by
using the chroot command). This
script is run before the migration begins.

post-migration

This
script resource is similar to the pre-migration script, but it is executed
after the migration is complete.

We are going to focus on post-migration only, although the
configuration is the same for both.

In this example I need to uninstall and reinstall a third-party
device fileset for a storage device, and I need to perform this task as part
of the migration process. To protect the innocent, I have not named the storage
vendor in this post, but I will say that it was not IBM storage we were dealing
with in this case.

Before we start, we collect all the necessary device
filesets that provide support for this type of storage on AIX. We place them
into a local directory on the NIM master. Along with the software, I also place
a copy of my NIM script in the same directory. The script
name is XYZpost.ksh.

root@nim1 : /usr/local/XYZ # ls -ltr
total 544
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1001I
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1002U
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1003U
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1004U
-r-xr-xr-x    1 root     system        51200 May 18 16:39  MPIO_1005U
-r-xr-xr-x    1 root     system          715 May 24 16:57  XYZpost.ksh
-rw-r--r--    1 root     system         2310 May 25 14:57  .toc

The contents of my script are simple. The script
de-installs the old device fileset and then immediately installs the latest
version of Vendor XYZ’s device fileset. It then changes the attributes
for the vendor’s storage to more appropriate default values.
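As a rough illustration, a script of this kind might look something like the sketch below. The fileset name (XYZ.mpio.rte), device type (XYZdisk) and attribute values are placeholders of my own, not the real vendor’s names, and the exact installp and ODM steps should always be taken from the vendor’s install instructions:

```shell
#!/usr/bin/ksh
# XYZpost.ksh - sketch of a nimadm post-migration script.
# All fileset, device and attribute names below are placeholders.

set -e

# De-install the old vendor device fileset (and any dependents).
installp -ug XYZ.mpio.rte

# Install the latest version from the local directory we copied over.
installp -acgXd /usr/local/XYZ XYZ.mpio.rte

# Change an ODM (PdAt) default attribute for the vendor's disk type,
# e.g. a more sensible default queue depth.
odmget -q "uniquetype=disk/fcp/XYZdisk and attribute=queue_depth" PdAt | \
    sed 's/deflt = .*/deflt = "32"/' > /tmp/pdat.new
odmchange -o PdAt \
    -q "uniquetype=disk/fcp/XYZdisk and attribute=queue_depth" /tmp/pdat.new
```

Remember that, with nimadm, this runs in the client’s chroot’d alt_inst environment on the NIM master, so any paths the script references must exist in that environment.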

At this point I copy the same directory and all of its contents
to the NIM client.

root@nim1 : /usr/local # scp -pr XYZ lparaix01:/usr/local/

…etc…

lparaix01 : /usr/local/XYZ # ls -ltr
total 0
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1001I
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1002U
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1003U
-r-xr-xr-x    1 root     system        51200 Mar 11 2011   MPIO_1004U
-r-xr-xr-x    1 root     system        51200 May 18 16:39  MPIO_1005U
-r-xr-xr-x    1 root     system          715 May 24 16:57  XYZpost.ksh
-rw-r--r--    1 root     system         2310 May 25 14:57  .toc

Make sure that any scripts you write for use with nimadm start with an appropriate ‘hashbang’ line to announce
that it is a shell script and which shell must be used to execute it, e.g. #!/usr/bin/ksh.
If you forget to do this, nimadm will
fail to execute your script and will report an error message similar to the
following:

/lparaix01_alt/alt_inst/tmp/.alt_mig_chroot_script.11731036: Cannot run a file that does not have a valid format.

The next step is to define the script as a NIM resource so that nimadm can call the resource during the
migration process. I’ve decided to call this new NIM resource XYZPOST.

This is easily achieved using smit nim_mkres:

root@nim1 : / # smit nim_mkres

  script = an executable file which is executed on a client

                           Define a Resource

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                 [Entry Fields]
* Resource Name                               [XYZPOST]
* Resource Type                                script
* Server of Resource                          [master]                      +
* Location of Resource                        [/usr/local/XYZ/XYZpost.ksh]  /
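If you prefer the command line to SMIT, the same resource can be defined directly with the nim command, using the same name and location as in this example:

```shell
# Define the post-migration script resource from the command line.
nim -o define -t script -a server=master \
    -a location=/usr/local/XYZ/XYZpost.ksh XYZPOST
```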

We can confirm that the NIM script resource is now available
using the lsnim command.

root@nim1 : / # lsnim -t script
XYZPOST          resources       script

root@nim1 : / # lsnim -l XYZPOST
XYZPOST:
   class       = resources
   type        = script
   Rstate      = ready for use
   prev_state  = unavailable for use
   location    = /usr/local/XYZ/XYZpost.ksh
   alloc_count = 0
   server      = master

Now that the script is in place and defined to NIM, we are
ready to test it. We will migrate the system from AIX 5.3 to AIX 6.1 using nimadm. Once the migration phase is
complete (phases 1 to 6), the post-migration script will be executed in the NIM
client's nimadm (chroot) environment
on the NIM master. Once this is finished, the NIM client's data is synced back to
the NIM client's alternate disk and the boot image is created. The migration
process is then complete.

We add the -z flag to
our nimadm command line options to
specify the post-migration script resource.

In normal operation we would simply let nimadm run all phases in sequence.
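A representative nimadm invocation might look like the sketch below. The spot, lpp_source, cache volume group and target disk names are assumptions for illustration; only the client name and the XYZPOST resource come from this example:

```shell
# Illustrative nimadm run: all phases in sequence, with our
# post-migration script resource passed via the -z flag.
# spot6100, lpp6100, nimadmvg and hdisk1 are placeholder names.
nimadm -c lparaix01 -s spot6100 -l lpp6100 \
       -j nimadmvg -d hdisk1 -z XYZPOST -Y
```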

Phase 6 has completed successfully. The NIM client's rootvg data
has been migrated from AIX 5.3 to 6.1 on the NIM master. The data has not yet been
synced back to the NIM client.

At this stage we can now run phase 7 separately and ensure that
it performs the required task. We expect it to de-install the device fileset,
install the latest version and change the ODM default attributes for the device
type. Again, you'll notice that we specify the -P flag for phase 7 only.

Great news! Our script has worked as expected. The old fileset
was de-installed, the new fileset was installed and the PdAt default attributes
were changed successfully.

Note:
You can also review the post-migration script output at a later
date if you wish. All nimadm
activities are logged, on the NIM master, to /var/adm/ras/alt_mig/NIMclientname_alt_mig.log
(where NIMclientname is the name
of the NIM client being migrated by nimadm).

With regard to nimadm
log files, please be aware that if you choose to run nimadm in phases (as I’ve shown in this example), each run will
generate a new log file. So in my
case, when I ran phases 1 to 6, this created a log file named
lparaix01_alt_mig.log. When I ran phase 7, the original log file was moved to
lparaix01_alt_mig.log.prev, and a new
log file was created and used for phase 7. Then, when I ran phases 8 to 12, the
phase 7 log file was moved to lparaix01_alt_mig.log.prev and a new log file was
used for phases 8 to 12. For this reason you may want to back up each log file to a
unique file name as you execute each phase group, so that you do not lose any of
the information logged to the .log or .log.prev files.
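One simple way to do this is to copy the current log to a phase-specific name immediately after each run. A minimal sketch (the function name is mine; the log location is as described above):

```shell
# Copy the current nimadm log to a unique, phase-specific name so it
# survives the next run's rotation to .log.prev.
backup_nimadm_log() {
    logdir=$1     # e.g. /var/adm/ras/alt_mig
    client=$2     # e.g. lparaix01
    phases=$3     # e.g. 1-6
    cp "$logdir/${client}_alt_mig.log" \
       "$logdir/${client}_alt_mig.log.phases${phases}"
}

# Example, after running phases 1 to 6 on the NIM master:
# backup_nimadm_log /var/adm/ras/alt_mig lparaix01 1-6
```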

Now we can complete the rest of the migration and execute the
remaining phases, 8 through 12.