Month: January 2015

This is a short recipe guide for testing VMware vMotion in a home lab with not-too-costly equipment. I used a 2012 MacBook Pro (2.7GHz Core i7, 16GB RAM, 512GB SSD) with VMware Fusion, but a similar setup will run on VMware Workstation on Linux or Windows with some adaptations.

This lab basically has an NFS-exported folder as a shared datastore for two VMware ESXi hosts, which is a requirement for vMotion, plus two ESXi 5.0.0 virtual machines inside VMware Fusion, which in turn can migrate a Linux or Cisco UCS emulator between them. The OS X Server component serves as the DNS service for the environment. The MacBook needs to keep a fixed IP address, and the setup works over the AirPort wireless interface without issues. In a real deployment, the ESXi hosts would be physical machines, blade servers or instances on some platform, generally connected to a SAN (Storage Area Network) sharing their datastores so vMotion could occur.

This requires some previous experience with standalone VMware ESXi, VMware Fusion and Windows Server 2003, as well as licenses for vCenter. I will not describe the whole Windows Server and vCenter installation process. Scott Lowe's excellent book covers ESXi and vCenter installation, a full vMotion explanation and the whole ESXi feature set > http://amzn.com/1118661141

Here is what you will need:

– VMware Fusion 7 and OS X Yosemite with the Server app ($30 in the App Store).
– A folder created in /Users/Shared for NFS sharing.
– Two ESXi standard licenses (one per virtual ESXi host) plus a vCenter license.
– VMware vCenter and a Windows Server 2003 R2 64-bit installation disk.
– VMware ESXi 5.0.0 ISO files. Minor versions like 5.1 and 5.5 should work as well.
– A Linux distribution or any other light operating system to test with.
– A Core i7 machine with 512GB SSD and 16GB RAM running OS X 10.8 or later and VMware Fusion 5 or later. In this setup I am running the latest versions, OS X 10.10 and Fusion 7 Professional, but that does not seem required.

So, how is this set up?

VMware vCenter will control a data center entity named Macbook and, inside it, a cluster (Cluster1) that has the two virtual ESXi servers as members. With both ESXi virtual hosts sharing a local NFS-exported folder, and a secondary network adapter in a vSwitch for vMotion, vCenter will enable a live migration with little interruption.
This is not the only way to set up a vMotion and variations can be tried for real data center application simulation.

vMotion requires both ESXi hosts to be on compatible versions and, under some conditions, compatible hardware (for example, moving a virtual machine from an AMD processor to an Intel one might not be possible if the virtual machine was allowed to use processor-specific features). Also, the hosts must belong to a cluster and have shared storage, known as a datastore in VMware, plus a dedicated network interface reachable between the hosts (usually on a vMotion VLAN).

Once all is set up and running, you can right-click a virtual machine to initiate a live migration from one host to another. With this setup, you should lose one echo packet before the machine is reachable and responding again. In large virtual machines and environments this can be slower, but a data center would usually be designed to minimize this to no loss at all.
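To put a number on that blip, you can run a simple probe loop against the guest while the migration happens. This is a minimal sketch; the target address is an assumption (it defaults to localhost for a dry run), so point it at your test VM's address.

```shell
#!/bin/sh
# Probe the guest during the migration and count lost echo requests.
# TARGET is a placeholder: pass your test VM's address (e.g. 192.168.1.x)
# as the first argument while the vMotion is in progress.
TARGET=${1:-127.0.0.1}
SENT=10
LOST=0
i=1
while [ "$i" -le "$SENT" ]; do
  # -c 1: single probe; -W 1: 1-second reply timeout (Linux ping syntax;
  # BSD/OS X ping uses -t for an overall timeout instead)
  if ! ping -c 1 -W 1 "$TARGET" > /dev/null 2>&1; then
    LOST=$((LOST + 1))
  fi
  i=$((i + 1))
done
echo "sent=$SENT lost=$LOST"
```

With this lab you should see a single lost probe around the cutover; more than that usually points at the vMotion network or the shared datastore.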

Setup of VMware Fusion, DNS server, share and ESXi hosts.

Quick recipe:

Set the IP address of the Mac interface to 192.168.1.100/24 with gateway 192.168.1.1.
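This can be done from System Preferences or, equivalently, from the Terminal with networksetup. The service name "Ethernet" below is an assumption; check yours with the list command first.

```shell
# List the Mac's network services, then pin the chosen one to the
# static lab address (IP, netmask, gateway).
networksetup -listallnetworkservices
sudo networksetup -setmanual "Ethernet" 192.168.1.100 255.255.255.0 192.168.1.1
```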

You should have a new nfs folder. Now, let’s export it so it can be accessed by NFS clients. Go to the folder /private/etc/ using the command ‘cd /private/etc’ and edit the exports file to add the nfs folder to be exported by the NFS server.

Type ‘sudo pico exports’ to edit the file (it will be created if it does not exist). You should have the text editor open like this.

Start a new line if needed and add the line /Users/Shared/nfs -maproot=root:wheel to enable sharing of the nfs folder with root rights. A warning here: this is not the most secure way to do this, and outside a lab you should use a non-privileged account for this (where a Mac server or any other Unix or NFS server could serve a specific deployment; I won’t get into details on how to do it, but keep in mind you should have quotas, access control and possibly auditing). Type <Control-X> and <Y> to save it.

If the export does not become active, you might want to restart nfsd (the NFS daemon) with the command ‘sudo nfsd restart’.
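Putting the NFS pieces together, the exports line and the daemon commands look like this (lab-only settings; the maproot option is insecure outside a lab):

```shell
# /private/etc/exports -- a single line, shared with root rights (lab only):
#   /Users/Shared/nfs -maproot=root:wheel
sudo nfsd enable            # start nfsd if it is not already running
sudo nfsd restart           # make nfsd re-read the exports file
showmount -e localhost      # the /Users/Shared/nfs export should be listed
```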

Now you should have a proper server to host the 3 virtual machines and a shared data store.

Create two ESXi Hosts on Fusion

This time we are going to create two very similar virtual machines for the ESXi hosts. So, open the VMware Fusion virtual machine library and click on the plus sign to create a new virtual machine, choosing:

Virtual machine type as VMware ESXi v5.

Two network adapters set to auto-detect.

Two processor cores and 2048MB (2GB) of RAM, with the Enable hypervisor applications in this virtual machine checkbox checked.

Make a virtual disk of 40GB, and uncheck the Split into multiple files and Pre-allocate disk space checkboxes. We are not going to use anywhere near that much disk space.

Set up CD/DVD to the ESXi v5 ISO image.

Save the virtual machine and start it up with the ISO mounted to begin the ESXi installation. Following the regular process, you should end up with a new ESXi host; name it esxi-host1.lab.inc, give it the IPv4 address 192.168.1.61/24, and write down the root administration user and password. Please repeat the virtual machine creation process for the second host, esxi-host2, using IPv4 address 192.168.1.62/24. Every time you power up those virtual machines, OS X will ask for an administrator user's password to enable the network interfaces to monitor the system, once per interface, so twice per virtual machine, even if your own user is an administrator.

Create the vCenter server on Fusion

This is the part that takes some time and Windows knowledge: deploying the vCenter server. Again, a new virtual machine needs to be created, with 2 processors, 2GB of RAM, Windows Server 2003 64-bit Enterprise Edition and 400GB of disk. Again, the disk is thin provisioned, so the space actually used will be much less than that. Remember to point the server to 192.168.1.100 as its DNS server, and give it a secondary public-facing address so it can reach updates.

Install Windows Server 2003, service packs and updates (assuming here that they won’t break vCenter later). Once stable, mount the vCenter ISO and install it using an internal SQL database. Remember, for larger deployments the database could, and probably should, be an external server sized for the data center. The vCenter installation here is not complicated, but some previous knowledge helps with issues, which are rare by the way.

Create data center and cluster on vCenter.

Once vCenter is running, connect to it using the vSphere client directly on the Windows Server where you installed vCenter, and create a data center by going to the Inventory > Hosts and Clusters menu. Name it Macbook and, inside this data center, create a cluster named Cluster1 with the HA option not enabled.

Now you can add both virtual ESXi hosts to this cluster. Select Cluster1 before adding the hosts; you should be able to add them by their FQDNs, such as ‘esxi-host1.lab.inc’. At this point you will be asked to confirm or enter the VMware licenses for this vCenter server.

Now we can add a new shared datastore to them. Click on the first host, then on the Configuration tab, and select Storage on the left panel. You should see the first datastore on the 40GB disk we created for this ESXi host. Click on the Add Storage command on the upper right side of the screen.

Select the Network File System type.

On the server line enter macbook.lab.inc and on the folder line enter /Users/Shared/nfs/, which is the folder we shared before.

Name the data store as NFS (capital letters).

Click on Finish and make sure you have a new datastore listed for this host. If any errors occur, it is because the NFS export is not working or there are access rights issues. You might want to grant your OS X user rights to the nfs folder using the Finder and retry.

Repeat this process for the second virtual machine hosting ESXi-Host2.
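If you prefer the command line, the same mount can be done from each host's ESXi shell with esxcli (ESXi 5.x syntax; the hostname and path below are the ones used in this lab):

```shell
# Mount the Mac's NFS export as a datastore named NFS, then verify it.
esxcli storage nfs add --host=macbook.lab.inc \
    --share=/Users/Shared/nfs --volume-name=NFS
esxcli storage nfs list
```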

A warning about Storage I/O Control is normal for this lab, but you should still be able to see the free space and use it.

It is time to add a vMotion interface to both ESXi hosts. This is a requirement for vMotion to work, so click on the Networking option on the left panel to bring up the network interfaces and virtual switches.

Click on Add Networking, select the VMkernel type and click <Next>.

Select a new virtual switch that will be used with this vMotion network interface.

On the next screen, name the network label as vMotion and click on Enable this port group for vMotion.

Enter an IP address on another network and, for now, choose to use only IPv4. (By the way, the Mac runs IPv6 natively very well, as do vCenter under Windows Server 2003 and ESXi v5, so you can enhance this lab later to use IPv6 only on vMotion if you want to try it with address autoconfiguration and link-local addresses.)

Before clicking on Finish, make sure you have the second network interface attached to the new vSwitch. You should end up with something similar to this:

I used the IP 10.0.0.1/8 for this, which is unusual but fits the lab. Please repeat the process on the second ESXi host using the address 10.0.0.2/8.
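The same vMotion interface can be sketched from each host's ESXi shell. The vSwitch, uplink and vmk names below are assumptions matching a default second adapter, so adjust them to your hosts (and use 10.0.0.2 on the second one):

```shell
# Create a second vSwitch on the second NIC, a vMotion port group, and a
# VMkernel interface with a static IPv4 address (ESXi 5.x syntax).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 \
    --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 \
    --portgroup-name=vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=10.0.0.1 --netmask=255.0.0.0 --type=static
vim-cmd hostsvc/vmotion/vnic_set vmk1   # mark vmk1 for vMotion traffic
```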

We should be all set to test a live VM migration.

Well, go ahead and create a small VM on ESXi-Host1, using the shared datastore to hold the virtual machine and its disk(s), and power it on. In a previous version of this lab I used a Slackware 14 64-bit install with 16GB of disk (one partition for swap and the rest for the root file system and volume). This video shows a live migration, and you should have similar results.

I know this setup can go wrong in a few places and, being far from a VMware expert, there are still unknowns here for me. Be aware that shared datastore creation might not work and you might end up with two of them. Also, remember to keep both ESXi hosts in a single cluster in a data center so the migration wizard lists the server you want to move the virtual machine to.

Fire up the virtual machine inside the virtual hypervisors (how cool is that?) and, once it has started up and quieted down, right-click it and choose Migrate. Select the Change Host option and, under the Cluster1 view, the second host (you should see a validation succeeded message before going on). Click <Next>, select High Priority, <Next>, check the summary screen and click <Finish>. You should see the migration going on live like the screen below (using the Cisco UCS Platform Emulator this time). Hope this is useful to get you started on vMotion. There is much more to explore on > https://www.youtube.com/watch?v=iQfTuAdLfYw