Storage - Support for 62TB VMDK - vSphere 5.5 increases
the maximum size of a virtual machine disk file (VMDK) to 62TB (note that the
maximum VMFS volume size is 64TB, while the maximum VMDK file size is 62TB). The maximum size for a Raw Device Mapping
(RDM) has also been increased to 62TB.

16Gb End-to-End Support – In vSphere 5.5, 16Gb end-to-end
FC support is now available. Both the
HBAs and the array controllers can run at 16Gb, as long as the FC switch between the
initiator and target supports it.

Graphics acceleration is now possible on Linux guest operating systems.

vSphere App HA - works in conjunction with vSphere HA host
monitoring and VM monitoring to improve application uptime. It can be configured
to restart an application service when an issue is detected, and can also reset the VM if
the application fails to start.

VMware
Enhanced vMotion Compatibility (EVC) - On the hardware side, Intel & AMD put
functions in their CPUs that allow the CPU ID value returned
by the CPUs to be modified. Intel calls this functionality FlexMigration; AMD
embedded it into the AMD-V virtualization extensions. On the software side, VMware
created software that takes advantage of this hardware functionality to create a
common CPU ID baseline for all servers within the cluster. Introduced in
ESX/ESXi 3.5 Update 2.
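
A minimal sketch of the baseline idea (pure Python, purely illustrative; the hostnames and feature-flag values are made up, and this is not how ESXi actually computes EVC masks): the common baseline is effectively the bitwise AND of the feature bits every host in the cluster can offer.

```python
# Hypothetical CPUID feature-flag values for three hosts of
# different CPU generations (newest first).
host_feature_flags = {
    "esxi-01": 0b1111_1111,
    "esxi-02": 0b1110_0111,
    "esxi-03": 0b1100_0011,
}

def evc_baseline(flags):
    """A feature is exposed only if every host in the cluster has it."""
    baseline = ~0                 # start with all bits set
    for value in flags.values():
        baseline &= value
    return baseline

baseline = evc_baseline(host_feature_flags)
print(f"cluster baseline: {baseline:#010b}")   # 0b11000011

# Every guest sees the masked value, regardless of which host runs it:
for host, value in host_feature_flags.items():
    assert value & baseline == baseline
```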

Storage vMotion enables live migration of running virtual
machine disk files from one storage location to another with no downtime or
service disruption.

Benefits:

Simplifies storage array migrations and storage upgrades.

Dynamically optimizes storage I/O performance.

Efficiently utilizes storage and manages capacity.

Manually balances the storage load.

Storage vMotion process:

vSphere first copies the non-volatile files that make up the VM:
the .vmx configuration file, swap file, logs & snapshots.

vSphere starts a ghost or shadow VM on the destination
datastore. Because the ghost VM does not yet have a virtual disk (it hasn't
been copied over yet), it sits idle, waiting for its virtual disks.

Storage vMotion then creates the destination disk and a
mirror device - a new driver that mirrors I/Os between the source & destination.

With I/O mirroring in place, vSphere makes a single-pass copy of
the virtual disks from source to destination. As changes are made to the
source, the mirror driver ensures that they are also reflected at the
destination.

When the virtual disk copy completes, vSphere quickly
suspends & resumes the virtual machine in order to transfer control to the ghost VM on the
destination datastore.
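
For completeness, here is a hedged sketch of kicking off a Storage vMotion programmatically with pyVmomi (the vCenter address, credentials, VM name and datastore name are placeholders; error handling and certificate verification are omitted for brevity):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(content, vimtype, name):
    """Walk the inventory for the first managed object with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
ds = find_by_name(content, vim.Datastore, "new-datastore")

# A RelocateSpec carrying only a datastore = Storage vMotion:
# the disks move while the VM keeps running on the same host.
spec = vim.vm.RelocateSpec(datastore=ds)
task = vm.RelocateVM_Task(spec)
print("Storage vMotion started:", task.info.key)

Disconnect(si)
```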

VMware DRS aggregates the computing capacity across a
collection of servers and intelligently allocates the available resources among
the virtual machines based on predefined rules. When a virtual machine
experiences increased load, DRS evaluates its priority against those rules and,
if justified, redistributes virtual machines among the hosts to give it
additional resources.

VMware DRS allows you to control the placement of virtual
machines on the hosts within the cluster by using affinity rules. By default, VMware
DRS checks every 5 minutes to see whether the cluster's workload is balanced. DRS
must be enabled for resource pools to be created on a cluster.

DRS is invoked by certain actions in the cluster:

adding or removing an ESXi host

changing resource settings on a VM

Automatic DRS mode determines the best possible distribution
of virtual machines, while manual DRS mode provides recommendations for optimal
placement of the virtual machines and leaves it to the system administrator to
decide.

Manual – every time you power on a VM, the cluster prompts
you to select the ESXi host where the VM should be hosted. DRS also recommends migrations.

Partially Automatic – every time you power on a VM, DRS
automatically selects the ESXi host & recommends migrations.

Fully Automatic – every time you power on a VM, DRS
automatically selects the ESXi host & performs migrations automatically. The
migration threshold is scaled from Conservative to Aggressive.

At the Aggressive end, DRS applies all recommendations that promise even a
slight improvement to the cluster's load balance.
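
As a sketch, the automation level and migration threshold can also be set programmatically. The following pyVmomi fragment assumes a connection `si` like the one in the Storage vMotion example above; the cluster name is a placeholder:

```python
from pyVmomi import vim

def find_cluster(content, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.DestroyView()

content = si.RetrieveContent()
cluster = find_cluster(content, "Prod-Cluster")

drs = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=3,   # migration threshold midpoint (the slider runs 1-5)
)
spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```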

There are three major elements here:

Migration Threshold

Target host load standard deviation

Current host load standard deviation

When you change the Migration Threshold, the value of the
Target host load standard deviation (THLSD) also changes. For example, a two-host cluster with the
threshold set to three has a THLSD of 0.2, while a three-host cluster has a THLSD of
0.163.
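
The balance check itself is just a standard-deviation comparison. A small illustrative Python sketch (the per-host load numbers are invented; real DRS derives them from VM entitlements and host capacity):

```python
from statistics import pstdev

host_loads = [0.8, 0.3, 0.4]   # normalized load per host (made up)
thlsd = 0.163                  # e.g. 3-host cluster, threshold 3 (see above)

chlsd = pstdev(host_loads)     # current host load standard deviation
print(f"CHLSD = {chlsd:.3f}, THLSD = {thlsd}")
if chlsd > thlsd:
    print("cluster imbalanced -> DRS generates migration recommendations")
```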

VMware vMotion enables live migration of running virtual
machines from one physical server to another with zero downtime, continuous service availability
& complete transaction integrity. This feature makes it possible to perform hardware
maintenance without disrupting business operations.

VMware vMotion is enabled by three underlying technologies:

1) The entire state of a virtual machine is encapsulated
by a set of files stored on shared storage.

2) The active memory pages & system state of the virtual
machine (the preCopy) are rapidly transferred over a high-speed vMotion network,
allowing the VM to switch from the source host to the destination host. vMotion keeps track of
on-going memory transactions in a memory bitmap. Once the entire memory & system
state have been copied to the destination host, the source virtual machine is quiesced.
The memory bitmap does not contain the contents of memory; instead it holds the addresses of
that memory (also called dirty memory). The target host reads the addresses in the
memory bitmap and requests the contents of those addresses from the source
host. After the bitmap is copied to the target host, the virtual machine resumes on
the target ESXi host. The entire switchover takes < 2 seconds.
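
A toy model of this preCopy-plus-bitmap flow (pure Python, purely illustrative; the page count and dirty rate are made up):

```python
import random

PAGES = 1024
source_memory = {p: f"data-{p}" for p in range(PAGES)}
dest_memory = {}
dirty_bitmap = set()      # addresses only, never contents

# 1) preCopy: stream all pages while the guest keeps running and
#    dirtying some of them; dirtied addresses land in the bitmap.
for page in range(PAGES):
    dest_memory[page] = source_memory[page]
    if random.random() < 0.02:            # guest writes during the copy
        touched = random.randrange(PAGES)
        source_memory[touched] = f"data-{touched}-v2"
        dirty_bitmap.add(touched)

# 2) Quiesce the source VM, then hand the bitmap (addresses, not data)
#    to the destination host.
# 3) The destination requests the contents of each dirty address
#    from the source.
for page in dirty_bitmap:
    dest_memory[page] = source_memory[page]

# 4) Resume on the destination; steps 2-4 are the sub-2-second window.
assert dest_memory == source_memory
print(f"{len(dirty_bitmap)} dirty pages fetched after quiesce")
```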

3) The network is also virtualized, ensuring that even
after the migration the virtual machine's network identity & network connections
are preserved. VMware vMotion manages the virtual MAC address. Once the destination machine
is activated, vMotion sends a RARP message to the physical switch to ensure that
it is aware of the new physical location of the virtual MAC address. Once the virtual
machine is successfully operating on the target host, the memory it used on the source host is
freed.
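
For illustration, here is what such a RARP announcement frame looks like on the wire, built with nothing but Python's struct module (the virtual MAC is a made-up example from VMware's 00:50:56 OUI range; actually sending it would need a raw socket):

```python
import struct

def rarp_frame(vm_mac: bytes) -> bytes:
    broadcast = b"\xff" * 6
    eth_header = broadcast + vm_mac + struct.pack("!H", 0x8035)  # RARP
    # RARP payload: htype=Ethernet, ptype=IPv4, hlen=6, plen=4,
    # opcode 3 = "request reverse"; sender/target MAC = the VM's MAC.
    payload = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    payload += vm_mac + b"\x00" * 4 + vm_mac + b"\x00" * 4
    return eth_header + payload

frame = rarp_frame(bytes.fromhex("005056a1b2c3"))
print(frame.hex(" "))   # 42-byte frame; the switch relearns the MAC's port
```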

VMware vMotion also migrates the resource allocation (CPU &
memory) from one host to another. On a bad day, a continuous ping (ping -t) may lose
a single packet during the switchover; applications can typically withstand the loss of
more than a packet or two.

2) All port groups to which the virtual machine being
migrated is attached must exist on both ESXi hosts - port group names are case
sensitive, and their VLAN configuration must match.

3) Processors must be compatible.

A successful vMotion relies on the following virtual machine conditions:

1) The virtual machine must not be connected to any
physical device available to only one ESXi host.

2) The virtual machine must not be connected to an
internal-only virtual switch.

3) The virtual machine must not have CPU affinity set
to a specific CPU.

4) The virtual machine must have all of its files on a VMFS
or NFS datastore accessible to both ESXi hosts.

A high-priority migration does not proceed if the resources
aren't available to be reserved for the migration. A standard-priority migration might
proceed slowly, and might fail to complete if enough resources aren't available.
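
A hedged pyVmomi sketch of a host-to-host vMotion with high priority (reusing the connection `si` and the `find_by_name` helper from the Storage vMotion example above; the VM and host names are placeholders):

```python
from pyVmomi import vim

content = si.RetrieveContent()
vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
target = find_by_name(content, vim.HostSystem, "esxi-02.example.com")

# priority maps to the high/standard distinction above: a high-priority
# migration reserves resources up front or does not proceed at all.
task = vm.MigrateVM_Task(
    host=target,
    priority=vim.VirtualMachine.MovePriority.highPriority,
)
```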

At 14%, a pause occurs while the hosts establish communications
and gather information about the pages in memory to be migrated.

At 65%, another pause occurs while the source virtual machine is quiesced and the dirty
memory pages are fetched from the source host.

Sometimes the vMotion process fails at a particular percentage.
Below are the reasons for vMotion failures at certain percentages:

3) 10% - if the log.rotateSize value
in the virtual machine's .vmx file is set to a very low value, it causes
the vmware.log file to rotate so quickly that the file may already have rotated
again by the time the destination host requests the vmware.log file's VMFS lock.
The destination host is then unable to acquire a proper file
lock, and this causes the vMotion migration to fail (see the sketch after this list).

4) 14% - fails if there are multiple VMkernel ports
in the same network, or if incorrect VMkernel interfaces are selected for vMotion.
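
As a quick sanity check for the 10% case, a small Python sketch that parses a .vmx and flags a low log.rotateSize (the path and the 100 KB floor are arbitrary illustration values):

```python
MIN_ROTATE_SIZE = 100_000   # bytes; made-up sanity floor for this sketch

def check_vmx(path: str) -> None:
    settings = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip().strip('"')
    raw = settings.get("log.rotateSize")
    # 0 (or absent) disables size-based rotation, so only a small
    # positive value is suspicious.
    if raw is not None and 0 < int(raw) < MIN_ROTATE_SIZE:
        print(f"warning: log.rotateSize={raw} may rotate vmware.log "
              "too fast for the destination host to take its lock")

check_vmx("/vmfs/volumes/datastore1/app-vm-01/app-vm-01.vmx")
```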