About this blog

UNIX - IBM POWER, AIX, PowerHA, PowerVM, PowerVC and now PureFlex and Flex. Linux, Red Hat, SUSE, Ubuntu, Solaris, HP-UX and more.
I'll also cover HA, VIO, Systems Director, FSM and other related parts of the POWER AIX environment.
Recently, as I've started a new role, I'll be adding information about LoZ, Docker and lots more.

VIOS Shared Storage Pools

I've spent some time over the last few months working with an IBM SmartCloud Entry system running on an IBM POWER P740, connecting to storage services via IBM Systems Director on a V7000 storage system. Setting it up and connecting all the other IBM POWER systems in the cloud can be complicated and time consuming, and the setup of Systems Director, along with ensuring all the licenses are working, can be difficult. So I thought that, with the increased support for Shared Storage Pools and what looks like a much simpler setup, I should be able to get my system configured in half the time. It should also give me the flexibility to easily add more storage and servers to the cluster and disk pool. So first I'm going to go through my setup of the Shared Storage Pool on a number of POWER servers - in this example a POWER7 P740 and two HMC-managed POWER7 PS703 blades.

First, a little info about Shared Storage Pools (SSP) -

A VIOS SSP allows up to 16 VIOS across a number of machines to operate as a cluster sharing a set of SAN LUNs in a pool, from which you can then allocate disk space to a new or existing LPAR in around a second, thin or thick provisioned regardless of the underlying disks. This means the provisioning of systems is almost instantaneous, vastly reducing deployment time.
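As a sketch of that allocation step - the cluster, pool, backing device and adapter names below are placeholders for your own values - a logical unit can be carved out of the pool and mapped to a client vhost adapter in a single command from the VIOS restricted shell:

# mkbdsp -clustername cluster01 -sp pool01 20G -bd lpar1_rootvg -vadapter vhost0

By default the logical unit is thin provisioned; add the -thick flag if you want the space reserved up front.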

Assuming you use virtual networks for your VIOS clients, you are then completely ready for Live Partition Mobility (LPM), as SSP-based LPARs are available across the whole cluster: there is no extra work regarding LUNs and SAN zones, because the disks are already set up on your VIOS cluster.

Shared Storage Pool Setup gotchas -

These are a few things I have found that I have had to ensure are correct on the systems first. Make sure that /etc/resolv.conf is set up on your VIO server (via oem_setup_env), as this causes numerous issues if not. Example -

domain <domain.com>
nameserver <name-ip>
nameserver <name-ip>

Hostname: make sure your VIOS systems have the fully qualified domain name as their hostname, for example vios.domain.com.

You need to do this for every node before you add it to the cluster below, as otherwise the node may not come online.
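A quick way to check and, if needed, correct this from the root shell (oem_setup_env) - the hostname below is a placeholder:

# hostname
# chdev -l inet0 -a hostname=vios02.domain.com

The first command shows the current name; setting the hostname attribute on inet0 with chdev makes the fully qualified name persistent across reboots.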

Creating a new cluster -

So let's try to create our cluster - this is where I first hit the qualified host name issue mentioned above.
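The create command looks something like this - a sketch, with the cluster name, repository disk, pool disks and hostname all placeholders for your own values:

# cluster -create -clustername cluster01 -repopvs hdisk1 -spname pool01 -sppvs hdisk2 hdisk3 -hostname vios01.domain.com

Here -repopvs names the repository disk used for cluster metadata and -sppvs the disks that make up the shared pool; all of them need to be SAN LUNs visible to every VIOS that will join the cluster.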

Next we want to add all the other VIO Servers into the cluster so they have access to the pool. Hopefully you have already mapped the disks to these servers, so running this command on the first node in the cluster will go off, check that it can talk to them and confirm they see the correct disks -

# cluster -addnode -clustername cluster01 -hostname vios02.domain.com

If you haven't set up the hostname and such correctly, you'll see a message similar to this -

# cluster -addnode -clustername cluster01 -hostname vios02.domain.com
Warning: Failed to add node to the cluster subsystem.
This may be due to network connectivity issues,
node being unavailable to be added to the cluster,
or other errors during cleanup..

vios02.domain.com

Warning: Retrying of operation or user intervention may be required to complete the request.

So fix the issue and give it another go. Once you have one or more VIOS in your cluster, its status will look like this -
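The status comes from the cluster command again - assuming the cluster name from the earlier examples:

# cluster -status -clustername cluster01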