Posts Tagged ‘vmotion’

Here’s a maintenance note for SMB (small and medium business) environments attempting 100% virtualization and relying on SAN-based file shares to simplify backup and storage management: beware the chicken-and-egg scenario on restart before going home to capture much-needed Zzz’s. If your domain controller is virtualized and its VMDK file lives on the SAN/NAS, you’ll need to restart SMB services on the NexentaStor appliance before leaving the building.
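For reference, here is a minimal sketch of that post-reboot fix, assuming shell access to the appliance’s underlying OpenSolaris layer (NexentaStor’s NMC exposes equivalent service commands, and exact service names vary by release):

# Once at least one AD/DNS server is back on-line, re-sync time and restart the
# SMB/CIFS service so it can re-acquire its AD credentials.
svcadm restart ntp                       # NTP service FMRI varies by release
svcadm restart svc:/network/smb/server   # restart the kernel CIFS/SMB server
svcs -x smb/server                       # confirm the service came back on-line
smbadm list                              # verify AD domain membership was restored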

Here’s the scenario:

An after-hours SAN upgrade in a non-HA environment (maybe Auto-CDP for BC/DR, but no active fail-over);

Shutdown of the SAN requires shutdown of all dependent VMs, including domain controllers (AD);

AD-based DNS will not be available at SAN re-boot if all DNS servers are virtual and only one SAN is involved;

Lack of DNS resolution on re-boot will cause DNS-name-based NTP synchronization to fail;

NexentaStor SMB service will fail to properly initialize AD credentials;

VMware 4.1 now pushes AD authentication all the way to ESX hosts, enabling better credential management and security but creating a potential AD dependency as well;

Using auto-startup order on the remaining ESX host for AD and vCenter could automate the process (steps 17 & 19); however, I prefer the “manual” approach after a SAN upgrade in case an upgrade failure is detected only after the ESX host is restarted (e.g. storage service interaction with NFS/iSCSI after the upgrade).

SOLORI’s Take: This is a great opportunity to re-think storage resources in the SMB as the linchpin to 100% virtualization. Since most SMBs will have a tier-2 or backup NAS/SAN (auto-sync or auto-CDP) for off-rack backup, leveraging a shared LUN/volume from that SAN/NAS for a backup domain controller is a smart move. Since tier-2 SANs may not have the IOPS to run ALL mission-critical applications during the maintenance interval, the presence of at least one valid AD server will promote a quicker RTO, post-maintenance, than coming up cold. [This even works with DAS on the ESX host]. Solution – add the following two steps and you can ignore step 15 (a LUN-provisioning sketch follows them):

3a. Migrate always-on AD server to LUN/volume on tier-2 SAN/NAS;

24. Migrate always-on AD server from LUN/volume on tier-2 SAN/NAS back to tier-1;
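For reference, provisioning that shared LUN/volume on the tier-2 appliance might look something like the following at the underlying OpenSolaris/COMSTAR layer (NexentaStor’s NMC/NMV normally wraps these steps; the pool name, zvol size and GUID shown are illustrative):

zfs create -V 40g tank2/ad-backup                   # zvol to back the shared LUN
sbdadm create-lu /dev/zvol/rdsk/tank2/ad-backup     # register the zvol as a SCSI logical unit
stmfadm add-view 600144f0deadbeef0000000000000000   # expose the LU (use the GUID reported by create-lu)
itadm create-target                                 # create an iSCSI target for the ESX host to mount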

Since even vSphere Essentials Plus has vMotion now (a much-requested and timely addition), collapsing all remaining VMs onto a single ESX host is a no-brainer. However, migrating the storage is another issue, one which cannot be resolved without either a shutdown of the VM (off-line storage migration) or an Enterprise/Enterprise Plus edition of vSphere. That is why the migration of the AD server from tier-2 is reserved for last (step 17) – it will likely need to be shut down to migrate its storage between SAN/NAS appliances.

VirtualBox’s teleportation feature is described in its documentation as “moving a virtual machine over a network from one VirtualBox host to another, while the virtual machine is running. This works regardless of the host operating system that is running on the hosts: you can teleport virtual machines between Solaris and Mac hosts, for example.”

Teleportation operates like an in-place replacement of a VM’s facilities, requiring that the “target” host has a virtual machine in VirtualBox with exactly the same hardware settings as the “source” VM. The source and target VMs must also share the same storage: they must use either the same VirtualBox-accessible iSCSI targets or some other network storage (NFS or SMB/CIFS), and neither may have snapshots.

“The hosts must have fairly similar CPUs. While VirtualBox can simulate some CPU features to a degree, this does not always work. Teleporting between Intel and AMD CPUs will probably fail with an error message.”

The recipe for teleportation begins on the target and is given here by example, using VirtualBox’s VBoxManage command syntax:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>

On the source host, the running virtual machine is then instructed to teleport with the following:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

For testing, same-host teleportation is allowed (source and target on the same host, via loopback). Obviously, a preparation and clean-up script would be involved to copy the settings to the target location, perform the teleport and clean up the source VM configuration that the teleportation renders obsolete. In the case of an error, the running VM stays running on the source host and the target VM fails to initialize.
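Putting the two together, a minimal end-to-end sketch might look like this (the VM name, target host and port are illustrative):

# On the target host: a "lab-vm" with identical hardware settings and the same
# shared storage already exists; arm it to wait for the incoming teleport.
VBoxManage modifyvm "lab-vm" --teleporter on --teleporterport 6000
VBoxManage startvm "lab-vm"       # starts and waits for the teleport request

# On the source host: push the running "lab-vm" to the waiting target.
VBoxManage controlvm "lab-vm" teleport --host target.example.com --port 6000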

SOLORI’s Take: This represents the writing on the wall for VMware and vMotion. Perhaps the shift from VMotion to vMotion telegraphs the reduced value VMware already sees in the “now standard” feature. Adding vMotion to vSphere Essentials and Essentials Plus would garner a lot of adoption from the SMB market that is moving quickly to Hyper-V over Citrix and VMware. With VirtualBox’s obvious play in desktop virtualization – where minimalist live migration features would be less of a burden – VMware’s market could quickly become divided in 2010 with some crafty third-party integration along with open VDI. It’s a ways off, but the potential is there…

There are many features in vSphere worth exploring but to do so requires committing time, effort, testing, training and hardware resources. In this feature, we’ll investigate a way – using your existing VMware facilities – to reduce the time, effort and hardware needed to test and train-up on vSphere’s ESXi, ESX and vCenter components. We’ll start with a single hardware server running VMware ESXi free as the “lab mule” and install everything we need on top of that system.

Part 1, Getting Started

To get started, here are the major hardware and software items you will need to follow along:

Recommended Lab Hardware Components

One 2P, 6-core AMD “Istanbul” Opteron system

Two 500-1,500GB Hard Drives

24GB DDR2/800 Memory

Four 1Gbps Ethernet Ports (4×1, 2×2 or 1×4)

One 4GB SanDisk “Cruzer” USB Flash Drive

Either of the following:

One CD-ROM with VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso burned to it

An IP/KVM management card to export ISO images to the lab system from the network

For the hardware items to work, you’ll need to check your system components against the VMware HCL and community-supported hardware lists. For best results, always disable (in BIOS) or physically remove all unsupported or unused hardware – this includes communication ports, USB, software RAID, etc. Doing so will reduce potential hardware conflicts from unsupported devices.

The Lab Setup

We’re first going to install VMware ESXi 4.0 on the “test mule” and configure the local storage for maximum use. Next, we’ll create three (3) virtual machines to build our “virtual testing lab” – deploying ESX, ESXi and NexentaStor running directly on top of our ESXi “test mule.” All subsequent test VMs will run in either of the virtualized ESX platforms, using shared storage provided by the NexentaStor VSA.
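One caveat worth flagging, offered as an assumption based on community guidance from the ESX(i) 4.0 era rather than anything official: for the virtualized ESX/ESXi instances to power on their own nested VMs, their .vmx files generally needed a tweak along these lines:

# Community-documented .vmx setting for the virtualized ESX/ESXi guests
# (assumption based on period guidance, not official VMware documentation):
monitor_control.restrict_backdoor = "TRUE"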
