I have a scenario where I need to LPM servers between datacenters. The storage is presented from one DC to the other and can be zoned so the target VIO servers see the same storage. BUT the target site cannot host the same VLAN tags as the source site; it can host the same IPs/subnets, just over a different VLAN.

Is there a way to perform an LPM between two servers with the same storage connectivity but different VLANs, LIVE? Is there some way within the LPAR definitions, or within the LPM command line, to force a change of VLAN for the LPAR clients' virtual Ethernet adapters, or to force a VLAN translation within the target VIO servers, so that I can LPM live between the buildings?

I've come to the conclusion that it's not possible, but I've thought of a few options:

Take an outage on each LPAR prior to LPM, tear down the Ethernet adapters and recreate them as EtherChannels, adding a second virtual Ethernet adapter on the target VLAN to the EtherChannel in NIB mode. This is just a thought; I'm not sure if the LPM validation task will still fail or not.

Take an outage and force-LPM the shut-down LPARs to the new server, update the VLAN tags and start them up.

I'm hoping there is some tricky way to do this, as I'm sure I'm not the only person who needs to LPM between sites with networks running over different VLAN tags. Any advice welcome.
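For what it's worth, the NIB EtherChannel option described above might look something like this on the client LPAR. This is a dry-run sketch only: the adapter names (ent0, ent1, en2) and the IP/netmask are placeholders, and whether LPM validation passes with this layout is exactly the open question.

```shell
#!/bin/sh
# Dry-run sketch of the NIB EtherChannel idea: ent0 = existing virtual
# adapter (source VLAN), ent1 = new virtual adapter on the target VLAN,
# en2 = the interface of the resulting EtherChannel. All names and the
# IP address are placeholders.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*"; if [ "$DRY_RUN" = "1" ]; then echo "DRY: $*"; else "$@"; fi; }

# Take the outage: detach the IP from the old interface.
run chdev -l en0 -a state=detach

# Build a NIB EtherChannel: ent0 active, ent1 as backup.
run mkdev -c adapter -s pseudo -t ibm_ech \
    -a adapter_names=ent0 -a backup_adapter=ent1

# Re-plumb the IP on the EtherChannel's interface.
run chdev -l en2 -a netaddr=10.1.2.3 -a netmask=255.255.255.0 -a state=up
```

Set DRY_RUN=0 only once you are happy the names match your environment.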

Sedman, now THIS is a very interesting challenge and thank you for sharing it!!!

My shortest answer to your main question (and I could be wrong) is this:
To LPM PowerHA clustered nodes, VLANs do not matter.
Why?
Because regardless of VLANs configured on VIOSes, as long as the configuration stays exactly like it is on the client LPARs, then LPM will be successful.

I will find time to dig into this matter and get back again with info tomorrow.

Have you tried validation? What was the response?
It will give a lot of information and may give some suggestions as well.

Another option:
Copy the running profile.
Edit it and keep a bare-minimum configuration, then LPM to the destination frame.
Edit the profile there, add the required adapters and restart.
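From the HMC CLI, that minimal-profile procedure might be sketched roughly like this. It is a dry-run sketch: the frame names, LPAR name and profile name are placeholders, and the profile-editing steps in between are done by hand.

```shell
#!/bin/sh
# Dry-run sketch of the "minimal profile" procedure, as HMC CLI
# commands. SRC_FRAME, DST_FRAME, mylpar and lpm_minimal are
# placeholders.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*"; if [ "$DRY_RUN" = "1" ]; then echo "DRY: $*"; else "$@"; fi; }

# 1. Save the running configuration as a new profile to edit.
run mksyscfg -r prof -m SRC_FRAME -o save -p mylpar -n lpm_minimal --force

# (Edit lpm_minimal on the HMC to strip it down to the bare minimum.)

# 2. Validate first, then migrate to the destination frame.
run migrlpar -o v -m SRC_FRAME -t DST_FRAME -p mylpar
run migrlpar -o m -m SRC_FRAME -t DST_FRAME -p mylpar

# 3. On DST_FRAME, edit the profile to add the adapters back, then
#    restart the LPAR from that profile.
```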

thanks
Nandi


Jaco Bezuidenhout

August 22, 2017 06:18 AM

Sedman, I need to better understand what you explained.
Is your current PowerHA cluster running nodeA and nodeB on one server or two servers? If two servers, are they inside one data center?
The way you explained it, you want to set up LPM capability so as to migrate a PowerHA cluster from one or two servers to another data center that has one or two servers there?

The point is this: you have one server with two VIOSes. Both VIOSes have EtherChannels configured and SEA failover detailing the specific VLAN. The VLAN number was given by the networks team, who configured a specific IP range on that VLAN. At the OTHER site you have a server with two VIOSes with EtherChannel and SEA failover, with a different VLAN specified but the same IP range configured on that VLAN. Logically it will work.
Do a test: create a test AIX LPAR in your source environment with a single virtual network device, give it an IP and let it live inside the VIOSes' allocated VLAN. LPM it to the other environment. If you can successfully LPM it, then you can LPM your cluster to and fro.
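The validation step of that test can be run on its own from the HMC before attempting a real move. A dry-run sketch, with frame and LPAR names as placeholders:

```shell
#!/bin/sh
# Dry-run sketch: validate an LPM of the test LPAR before touching the
# cluster nodes. SRC_FRAME, DST_FRAME and testlpar are placeholders.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*"; if [ "$DRY_RUN" = "1" ]; then echo "DRY: $*"; else "$@"; fi; }

# Validate only (-o v): reports problems such as missing VLANs on the
# target VIOS without moving anything.
run migrlpar -o v -m SRC_FRAME -t DST_FRAME -p testlpar

# Check the partition's migration state and capability.
run lslparmigr -r lpar -m SRC_FRAME --filter lpar_names=testlpar
```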

DO NOTE that the long distance can MAYBE cause heartbeat issues and perhaps have the cluster fail over.

I can basically suggest only one other thing that may help:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips1184.html
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips1185.html

I hope it helps

Jaco Bezuidenhout
Senior Compute Engineer


Jaco Bezuidenhout

August 22, 2017 06:25 AM

Look at this https://www.ibm.com/developerworks/aix/library/au-LPM_troubleshooting/index.html

and do a find (ctrl+f) of "VLAN" to see this:

"When we migrate at the target we see only virtual adapter configured with IP and etherchannel; HEA will not be migrated. Also, make sure the VLANs used in virtual adapters to create etherchannel are added to both source and target VIOS."

What this means is that you add the destination VLAN ID (where you want to LPM to) to the VIOSes in your current environment, where your cluster currently is, AND you add your current environment's VLAN ID to the two VIOSes in your destination environment.
Makes sense? I'd love to test this, but have no such environment to do it in.
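If the network side permits it, adding the "other" VLAN ID to a VIOS trunk adapter can be attempted dynamically from the HMC. A dry-run sketch; the frame name, VIOS name, slot number, VLAN ID and the `addl_vlan_ids+=` syntax are all assumptions to verify against your HMC level:

```shell
#!/bin/sh
# Dry-run sketch: dynamically add an extra VLAN ID to a VIOS trunk
# (SEA backing) adapter from the HMC. SRC_FRAME, vios1, slot 3 and
# VLAN 200 are placeholders; confirm the attribute syntax on your HMC.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*"; if [ "$DRY_RUN" = "1" ]; then echo "DRY: $*"; else "$@"; fi; }

run chhwres -r virtualio --rsubtype eth -m SRC_FRAME -o s \
    -p vios1 -s 3 -a "addl_vlan_ids+=200"
```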

I put my money on the notion that it can be done.

Jaco Bezuidenhout
Senior Compute Engineer


Jaco Bezuidenhout

August 22, 2017 06:34 AM

Sedman, the more I think about this, the easier it seems.
You have boot IPs for your cluster hosts. You have the floating service IP for the cluster.
You can create additional service IPs on the cluster nodes for redundancy and use those to keep comms going, BUT it might be an issue if they are on the same subnet.
I think I am suggesting over-complicated configs that might just screw things up...

I think the easiest way you can do that is to leave the adapters as trunks on the VIOS using an untagged VLAN, so LPM will work, and tag the adapter on the server side. After you move the machine you just take down the VLAN adapter, change the VLAN ID and bring it back up, or recreate it over the same EtherChannel if you want.
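On the AIX client, that client-side tagging might look like the sketch below (the CLI equivalent of the smitty vlan path, as I understand it). Dry-run only; ent0, the resulting ent1, and both VLAN IDs are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of the untagged-trunk approach: tag on the AIX client
# instead of on the VIOS. ent0, ent1 and VLAN IDs 100/200 are
# placeholders; verify the vlan device attributes on your AIX level.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*"; if [ "$DRY_RUN" = "1" ]; then echo "DRY: $*"; else "$@"; fi; }

# Before LPM: client-side VLAN device on the source tag (creates ent1).
run mkdev -c adapter -s vlan -t eth \
    -a base_adapter=ent0 -a vlan_tag_id=100

# After LPM: drop the VLAN device and recreate it with the target tag.
run rmdev -dl ent1
run mkdev -c adapter -s vlan -t eth \
    -a base_adapter=ent0 -a vlan_tag_id=200
```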

Thanks for all your interest. Yes, it's a hairy one.
The new environment hasn't been completed yet, so ATM it's just a thought experiment.

The current VIO servers use VLAN tagging end to end; the switch ports demand matching tags from the client virtual Ethernet adapters, and the VIOSes carry all the tags, with the default tag being a non-matching tag.

One of the requirements of LPM is that the source and target VIOSes need to carry the same VLAN tags.

We can't stretch the existing VLANs for architectural reasons (client range to MSP range). The subnets can be routed through new VLAN IDs, but we can't duplicate the current VLAN tags in the target environment.

Once I have the new environment up, or have time to do some tests between our existing hardware VIO pairs, I'll have a better idea.

Our current plan is to create new LPAR profiles on the target hardware, maintaining all the slot numbers, MACs and WWPNs but using the new VLANs, then do a shutdown on the source and a startup on the target with the new profile.

I'll have to do the 'SMS mode > disk scan > select rootvg disk' dance, but the systems should start up OK.
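To seed those new profiles, the existing adapter definitions (slots, MACs, WWPNs live in these attributes) can be pulled from the HMC first. A dry-run sketch; the frame and LPAR names are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: capture the source LPAR's virtual adapter definitions
# before recreating the profiles on the target frame. SRC_FRAME and
# mylpar are placeholders.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS $*"; if [ "$DRY_RUN" = "1" ]; then echo "DRY: $*"; else "$@"; fi; }

run lssyscfg -r prof -m SRC_FRAME --filter "lpar_names=mylpar" \
    -F name,virtual_eth_adapters,virtual_fc_adapters
```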

I would have thought IBM would have come up with a way to do this seamlessly, with some way to switch the source and target VLANs on the client virtual adapters. It manages to swap around partition IDs and adapter slot IDs on the fly during LPM; if it just gave an option to select a target VLAN for each virtual Ethernet adapter per client, it'd be a pain via the GUI, but from the HMC CLI it'd be easy.