
I have found reading blogs really useful on many occasions, to find out about new features and, more importantly, how to get them working. In this case I’ll put one up about how to get SAN heartbeat working on PowerHA 7.1. The redbook PowerHA SystemMirror 7.1 for AIX (SG24-7845) also has good information on this topic. One thing I set up recently was a two-node PowerHA 7.1.1 cluster using SAN heartbeat. PowerHA 7.1 no longer supports a disk heartbeat; in its place, SAN heartbeat can be used. The first thing that needs to be done is to... [More]
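For what it's worth, the key enabler (per the redbook procedure) is turning on target mode on the FC adapters that carry the heartbeat zone. A minimal sketch, assuming fcs0 is one of those adapters (repeat for each adapter on both nodes):
# chdev -l fcs0 -a tme=yes -P    <- enable target mode; -P applies it at the next reboot
# shutdown -Fr
# lsdev -C | grep sfwcomm        <- after the reboot, the sfwcomm devices used for SAN heartbeat should be Available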

Recently I came across an interesting problem during a migration from NetApp storage to a new V7000 Unified storage system. The migration from a block perspective was very straightforward: we used VMware vMotion to migrate from NetApp datastores to new V7000 block datastores, and we would then put the NetApp array behind the V7000 to perform image-mode migrations. This went as planned and all was good. We were replicating using Global Mirror to another V7000 Unified storage system, so the next step was to create the auxiliary vdisks, and... [More]
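A hedged sketch of that next step, with made-up pool, volume and cluster names: create an auxiliary vdisk on the DR cluster the same size as its master, then define and start the Global Mirror relationship from the production cluster:
# mkvdisk -mdiskgrp dr_pool -size 100 -unit gb -name aux_vdisk01    <- run on the DR cluster
# mkrcrelationship -master prod_vdisk01 -aux aux_vdisk01 -cluster DR_CLUSTER -global -name rel_vdisk01
# startrcrelationship rel_vdisk01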

I had an instance where an AIX LPAR had the rootvg LUN it was booting from made unavailable. The obvious fix was to ensure that the rootvg LUN was available again and restart the LPAR. What I found interesting was that the LPAR was still responding to a ping, even with no operating system disk, and it was still up! Obviously I wasn't able to log in; the LPAR was pretty much dead, so after restarting it all was OK. I was reading about the "rootvg event" in PowerHA 7.1. There is a good post about it here:... [More]

There is a really good article here about how to create JFS2 snapshots in AIX: http://www.ibm.com/developerworks/aix/library/au-jfs2_snapshot.html This is a good way to get a point-in-time copy of an AIX filesystem if your storage system doesn't have the capability to take snapshots. The next problem is that of backing up the snapshots to TSM, and getting your filespace naming correct. If you take a backup of /data by taking a snapshot of /data and mounting it on /mnt/data, that means that TSM knows about /mnt/data, which makes the... [More]
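To make the /data example concrete, a minimal sketch, assuming the snapshot device AIX creates is /dev/fssnap00 (device names vary):
# snapshot -o snapfrom=/data -o size=1G                  <- create an external snapshot of /data
# mount -v jfs2 -o snapshot /dev/fssnap00 /mnt/data      <- mount the read-only snapshot
The backup then runs against /mnt/data, which is exactly where the filespace naming issue comes from: TSM records the filespace as /mnt/data rather than /data.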

One thing that is great to have is a VIO media repository. You can store AIX base media, mksysb images or any other .iso image in your VIO repository. There is a good post on how to do it here: https://www.ibm.com/developerworks/mydeveloperworks/blogs/AIXDownUnder/entry/vio_server_virtual_media_library?lang=en I always schedule mksysb backups to go to a NIM server regularly. One thing I take a mksysb of is the VIO servers' rootvg. The process is to:
- Mount an NFS filesystem (e.g. /export/nim/mksysb) to the VIO server
- Run the backupios... [More]
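A hedged sketch of those two steps, with made-up server and file names, run as padmin on the VIO server:
$ mount nim01:/export/nim/mksysb /mnt
$ backupios -file /mnt/vio1.mksysb -mksysb
The -mksysb flag makes backupios write a plain mksysb image (restorable via NIM) rather than the full nim_resources.tar backup.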

This is mentioned in the NIM on AIX redbook: http://www.redbooks.ibm.com/abstracts/sg247296.html If you have NFS reserved ports enabled in AIX, then you need to make sure these are set on the NIM master AND in your NIM client definition. If you don't, mksysb restores may hang on 0611, which is an issue with NFS. Progress code 0611 - Explanation: Remote mount of the NFS file system failed. To turn on NFS reserved ports:
# nfso -po portcheck=1
# nfso -po nfs_use_reserved_ports=1
The fast path smitty nim_global_nfs can allow you to... [More]
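To verify the settings afterwards, nfso without a value just displays the current setting:
# nfso -o portcheck
# nfso -o nfs_use_reserved_ports
Both should come back as 1 on the NIM master if reserved ports are in force.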

The most powerful thing about TSM is how flexible it can be. There is a way for TSM to meet pretty much any data management requirement. The difficult part is working out how long you need to keep data for and what rules need to apply. Once you know that, setting up TSM is just a matter of going through the motions. TSM performs backups and archives (as well as restores and retrieves!); both are illustrated below:
- Backup is typically the file-level "incremental forever" type of backup.
- Archive is a full backup (e.g. a monthly backup) to keep for X amount of days... [More]
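As a quick hedged illustration of the difference from the BA client (the management class name is made up):
# dsmc incremental /data                                    <- incremental-forever backup
# dsmc archive "/data/*" -subdir=yes -archmc=MONTHLY_365    <- point-in-time archive bound to a 365-day archive copy group
The retention rules themselves live in the backup and archive copy groups of the management class.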

Since the 6.3 code on SVC and V7000, low-bandwidth Global Mirror has been available; it is available in the GUI and called Global Mirror with Change Volumes. This uses change volumes, which are space-efficient volumes the same size as the primary and auxiliary volumes: a FlashCopy of the primary volume is taken, it is then copied to the DR side, and then applied to the auxiliary volume. This is good for low-bandwidth FCIP links. The diagram below shows how this works. Getting this going is relatively simple, and below is the command line (the... [More]
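A hedged sketch of the CLI side, with made-up volume, change-volume and cluster names (exact flags vary a little between code levels, so treat this as an outline):
# mkrcrelationship -master vol01 -aux vol01_aux -cluster DR_CLUSTER -global -name rel01
# chrcrelationship -cyclingmode multi rel01              <- switch the relationship to change-volume cycling
# chrcrelationship -masterchange vol01_cv rel01          <- attach the master change volume
# chrcrelationship -auxchange vol01_aux_cv rel01         <- attach the auxiliary change volume (on the DR cluster)
# startrcrelationship rel01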

I had to create an AIX volume group and some filesystems on an AIX LPAR using NPIV connected to an HDS VSP storage array. The first thing I did was install the ODM drivers, then HDLM, and set the queue depth on my LUNs, and I was ready to create an AIX volume group.
root@aix01:/home/root # mkvg -S -f -y my_vg -s 256 -P 64 hdisk1 hdisk2
0516-1254 mkvg: Changing the PVID in the ODM.
0516-1254 mkvg: Changing the PVID in the ODM.
my_vg
0516-021 /usr/sbin/varyonvg: The varyonvg failed because the volume group's major number was... [More]
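The usual remedy for that 0516-021 message (not necessarily the fix the full post lands on; the number here is an example) is to pick a free major number with lvlstmajor and pass it to mkvg with -V:
# lvlstmajor                                             <- list the free major numbers on this LPAR
# mkvg -S -f -V 100 -y my_vg -s 256 -P 64 hdisk1 hdisk2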

In the case that you have a PowerHA cluster containing multiple resource groups that are related in some way and need to always exist on the same node, it is best to have dependencies configured, so that when you fail over, both resource groups end up active on the same node. I came across this today and opened up # smitty cm_rg_dependencies_menu, and there were two ways to go about it (see the sketch after this list):
1. Have a parent/child dependency, where one resource group is a child of the other.
2. Configure an online-on-same-node dependency. ... [More]
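From the command line, clmgr can do the same thing; the syntax below is from memory and worth double-checking against clmgr's built-in help, and the resource group names are made up:
# clmgr add dependency PARENT=rg_db CHILD=rg_app         <- parent/child dependency
# clmgr add dependency SAME=NODE GROUPS="rg_db,rg_app"   <- online-on-same-node dependency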

Recently IBM have introduced compression into the SVC and V7000 storage systems for block volumes. Our business runs on V7000, so we updated it to the 6.4 code (required for compression) and I had a play around with it today. Compression is a licensed feature, and on a V7000 it is licensed per tray of disk. There are two ways to use it (both sketched below):
1. Create a new compressed volume.
2. Add a compressed copy to an existing volume, and then remove the original non-compressed copy.
Creating a non-compressed volume is easy; here is how. What's... [More]
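A hedged CLI sketch of both approaches, with made-up pool and volume names (-rsize and -autoexpand appear here because compressed volumes are space-efficient under the covers):
# mkvdisk -mdiskgrp pool0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name cvol01
# addvdiskcopy -mdiskgrp pool0 -rsize 2% -autoexpand -compressed vol01
# rmvdiskcopy -copy 0 vol01                              <- only once the compressed copy has finished synchronising
The first command is option 1 (a new compressed volume); the second pair is option 2 (compressing an existing volume in place).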

For the last few years, I have been using TPC (typically the TPC for Disk component) to manage IBM storage, ranging from DS4000s, DS5000s and DS8000s to SVCs and, more recently, V7000s. In terms of functionality TPC is great. The statistic collection and report generation are good, and the ease of use is okay, but in comparison to the GUIs that the storage systems themselves have, it's nothing spectacular. That is, until now: version 5.1 has just been released, IBM have put the XIV-style GUI into TPC, and it looks awesome. The other thing that I noticed in TPC... [More]

Recently I was looking at a TSM server and could see that TSM database backups to tape were working and expiring without issue; however, database backups taken to disk were not being removed and were filling up the filesystem. There are three types of backups you can take of the TSM database:
- Full. This is a backup of the entire TSM database, and it will truncate the TSM server's active and archive logs.
- Incremental. This backs up the changes in the TSM database between the current point in time and the last full database backup.
-... [More]
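Disk (FILE device class) database backups generally have to be pruned via the volume history; a hedged one-liner from the admin command line:
delete volhistory type=dbbackup todate=today-7
That removes the database backup volume history entries older than a week and, for disk backups, deletes the underlying files as well.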

Typically when a disk fails in a V7000, you just go into Events and follow the procedure to replace the disk, and the drive is rebuilt automatically. I have done this before without issue on firmware older than 6.4, so this is possibly a 6.4 code issue. Today we had an issue where a disk failed, but when we looked in Events we could see that a spare disk was in use, yet we had no procedure to replace the failed drive. Under the internal storage tab, the drive was offline. The fix? Work out which drive is offline, in the... [More]
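A hedged sketch of working out which drive is offline from the CLI (the drive ID is an example):
# lsdrive -filtervalue status=offline                    <- list offline drives
# lsdrive 12                                             <- show enclosure and slot for drive 12
Once the physical drive is replaced, chdrive can bring the new one back when no fix procedure is offered:
# chdrive -use candidate 12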

As part of an HDLM device driver upgrade, we had to remove any native MPIO disks from a Virtual I/O server, install the driver and re-create the mappings. Since this was a new HDLM install, we had to do the following to get HDLM installed (a couple of these steps are sketched below):
- Disable paths for VIO #1 on our client LPAR.
- Record the VIO mappings.
- Remove any MPIO hdisk devices. Luckily our VIO servers are booting from internal disk.
- Install HDLM.
- Put the VIO mappings back.
- Enable paths for VIO #1 on our client LPAR.
- Repeat for VIO #2.
The plan was to do one VIO server at a... [More]
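A hedged sketch of the first two steps, with made-up device names (this assumes vSCSI clients; for NPIV the parent would be an fscsi device). On the client LPAR, disable the paths that run via VIO #1:
# chpath -l hdisk0 -p vscsi0 -s disable
On the VIO server, as padmin, record the current mappings somewhere safe:
$ lsmap -all > /home/padmin/mappings_before.txt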