Online Migration Of vSphere NFS Mount Points

Chris Wahl · Posted on 2012-12-14 (updated 2020-05-07)

An interesting conversation was struck up about migrating virtual machine workloads between NFS datastores on the same array. In this particular situation, the original subnet chosen for NFS storage was also shared with other types of traffic.

The choice had been made to create a new, dedicated subnet and VLAN specifically for NFS. This presented a challenge – how could one remap the mount points on a vSphere host without incurring an outage or requiring storage vMotions?

Is it even possible? I was curious to know, so I took on the challenge!

A Few Failed Tactics

One of the first things I tried was to put a host into maintenance mode, unmount the “legacy” volume (mounted via the old VLAN), and then mount the volume again using the new VLAN. The goal was to introduce the new mount method on a single host and use it to “jump” VMs onto the new network. However, vCenter is smart enough to figure this out and quickly adds a (1) to the end of the datastore name.
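For reference, this is roughly the remap sequence I attempted from the ESXi shell. The datastore label, server IP, and export path below are made up for illustration, and the commands assume an ESXi 5.x host that is already in maintenance mode:

```shell
# List current NFS mounts to confirm the legacy datastore label
esxcli storage nfs list

# Unmount the legacy datastore (mounted via the old subnet)
esxcli storage nfs remove -v ISO_Datastore

# Remount the same export using the array's address on the new NFS VLAN
esxcli storage nfs add -H 10.0.50.10 -s /vol/iso_datastore -v ISO_Datastore
```

Even with a matching label supplied on the command line, vCenter spots that this host's mount differs from the one the rest of the cluster is using and tacks the (1) onto the name.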

In the lab, I toyed with a safe volume (one I use for ISOs) to demonstrate the behavior. To try and “fool” vCenter, I logged directly into the host and added the datastore. You can see where vpxuser (the privileged account that vCenter uses to control the host) immediately renames it to include a (1) at the end, even when I try renaming it back (I’m the root user).

Rename! Rename! Rename! OK, I give up.

I also tried mixing the mount methods – the IP address on one host and the DNS name on another. This, again, did not work.

Nice try!

Future Design: Mount Using A DNS Entry?

Because the environment was already using an IP address to mount all of the datastores, I wouldn’t be able to fix the new mount point with a simple DNS entry change. I’m actually somewhat opposed to using DNS entries for mounting any IP storage – it seems like a good way to add complexity to the mix. After all, DNS entries are what made vSphere HA a huge pain in the butt for years – right?

As we moved forward with the project, a compromise was made:

We’d add a local DNS entry on all of the critical hosts that ran the DCs, DNS, and vCenter. Even if the DNS server was down, those hosts would still be able to resolve the storage name to its IP.

Future mount points would be made to the DNS entry. The non-critical hosts would point to a DNS server farm.

If a change needs to occur to the NFS subnet in the future, only the A records in DNS will need to be updated, along with the local /etc/hosts entries on the critical vSphere hosts.
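A sketch of what that compromise could look like on one of the critical hosts. The hostname, IP, and export path are hypothetical, and the static entry assumes the host consults /etc/hosts before querying DNS:

```shell
# Static entry on the critical hosts so the storage name resolves
# even when the DNS servers are down
echo "10.0.50.10  nfs-array.lab.local" >> /etc/hosts

# Future mount points reference the DNS name instead of a raw IP
esxcli storage nfs add -H nfs-array.lab.local -s /vol/vm_datastore -v VM_Datastore
```

If the NFS subnet moves later, only the A record and that static entry need the new IP – the datastore mounts themselves stay pointed at the name.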

Thoughts

Unfortunately, I was not able to figure out a way to avoid using storage vMotion to shift running workloads to the new mount points.

I did begrudgingly give a little more credit to DNS in these corner cases. Realistically, I’d say the best way forward is to pick a completely isolated subnet that is earmarked entirely for NFS and forgo any use of DNS. Additionally, we had to present an entirely new volume to vSphere for the migrations, because the VM folders already existed on the original datastore and the migrations would have created new folders with a (1) after them. After the migrations were completed, the legacy mount point and volume were destroyed.

I’m curious what your thoughts are on the topic of NFS mount points (or even iSCSI targets) – do you use DNS entries or IP addresses?