Overview

VMware introduced the vStorage APIs for Array Integration (VAAI) in vSphere 4.1 via a vendor plugin, and made VAAI support native in vSphere 5. VAAI significantly enhances the integration of storage and servers by enabling seamless offload of locking and block operations onto the storage array. LinuxIO provides native VAAI support for vSphere 5.

Delete is disabled by default; see below for more details.

Primitives

ATS

ATS is arguably one of the most valuable storage technologies to come out of VMware. It enables locking of block storage devices at much finer granularity than with the preceding T10 Persistent Reservations, which can only operate on full LUNs. Hence, ATS allows more concurrency and thus significantly higher performance for shared LUNs.

For instance, Hewlett-Packard reported that it can support six times more VMs per LUN with VAAI than without it.

Note: If a new VMFS-5 datastore is created on a non-ATS storage device, SCSI-2 reservations are used instead.

Note: When creating a multi-extent datastore where ATS is used, vCenter Server filters out non-ATS devices, so that only devices that support the ATS primitive can be used.
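
On the array side, ATS corresponds to the SCSI COMPARE AND WRITE command. As a minimal sketch of how the emulation can be toggled on an LIO backstore through configfs (the backstore path iblock_0/disk1 is an assumption for illustration; emulate_caw is the attribute controlling COMPARE AND WRITE emulation):

~ # echo 1 > /sys/kernel/config/target/core/iblock_0/disk1/attrib/emulate_caw  # path is illustrative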

Zero

Thin provisioning is difficult to get right because storage arrays don't know what's going on in the hosts. VAAI includes a generic interface for communicating unused space to the array, allowing large ranges of blocks to be zeroed out at once.

Zero uses the T10 WRITE_SAME command, and defaults to a 1 MB block size. Zeroing only works for capacity inside a VMDK. vSphere 5 can use WRITE_SAME in conjunction with the T10 UNMAP command.
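
How a given LIO backstore advertises this can be controlled through its configfs attributes. A minimal sketch, assuming a backstore at iblock_0/disk1 and the emulate_tpws attribute (which controls WRITE SAME with the UNMAP bit):

~ # echo 1 > /sys/kernel/config/target/core/iblock_0/disk1/attrib/emulate_tpws  # path is illustrative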

This change takes immediate effect, without requiring a 'Rescan All' from VMware.

Clone

This is the signature VAAI command. Instead of reading each block of data from the array and then writing it back, the ESX hypervisor can command the array to duplicate a range of data on its behalf. If Clone is supported and enabled, VMware operations like VM cloning and Storage vMotion can become very fast. Speed-ups of a factor of ten or more are achievable, particularly with fast flash-based backstores behind slow network links, such as 1 GbE.
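
Clone is carried by the SCSI EXTENDED COPY (XCOPY) command. A minimal sketch of toggling the corresponding emulation on an LIO backstore, again assuming a backstore at iblock_0/disk1 and the emulate_3pc attribute (third-party copy):

~ # echo 1 > /sys/kernel/config/target/core/iblock_0/disk1/attrib/emulate_3pc  # path is illustrative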

This change takes immediate effect, without requiring a 'Rescan All' from VMware.

Delete

VMFS operations like cloning and Storage vMotion didn't include any hints to the storage array to clear unused VMFS space. Hence, some of the biggest storage operations couldn't be accelerated or "thinned out". The Delete primitive closes this gap with the T10 UNMAP command, which lets ESX tell the array which blocks it can reclaim.
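
Since Delete is disabled by default (see Overview), it must be enabled explicitly before ESX can reclaim space. A minimal sketch, assuming a backstore at iblock_0/disk1 and the emulate_tpu attribute (UNMAP emulation):

~ # echo 1 > /sys/kernel/config/target/core/iblock_0/disk1/attrib/emulate_tpu  # path is illustrative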

Performance

Demo: cloning VMware VMs in 25 s over 1 GbE on an LIO SAN with VAAI and Fusion-io ioDrive PCIe flash memory.

Performance improvements offered by VAAI can be grouped into three categories:

Reduced time to complete VM cloning and Block Zeroing operations.

Reduced use of server compute and storage network resources.

Improved scalability of VMFS datastores in terms of the number of VMs per datastore and the number of ESX servers attached to a datastore.

The actual improvement seen in any given environment depends on a number of factors, discussed in the following section. In some environments, improvement may be small.

Cloning, migrating and zeroing VMs

The biggest factor for Full Copy and Block Zeroing operations is whether the bottleneck sits on the front end or the back end of the storage controller. If the throughput of the storage network is lower than what the backstore can handle, offloading the bulk work of reading and writing virtual disks for cloning and migration, and writing zeroes for virtual disk initialization, can help immensely.

One example where substantial improvement is likely is when the ESX servers use 1 GbE iSCSI to connect to an LIO storage system with flash memory. The front end at 1 Gbps doesn't support enough throughput to saturate the back end. When cloning or zeroing is offloaded, however, only small commands with small payload go across the front, while the actual I/O is completed by the storage controller itself directly to disk.
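
As a back-of-the-envelope illustration (the 40 GB disk size is assumed for round numbers): a 1 Gbps front end tops out at roughly 120 MB/s, and cloning a 40 GB virtual disk without offload moves the data across the wire twice, once to read and once to write:

40 GB read + 40 GB write = 80 GB ≈ 81,920 MB
81,920 MB / 120 MB/s ≈ 683 s, or about 11 minutes

With Full Copy offloaded, only small XCOPY commands cross the front end, and the copy proceeds at whatever rate the backstore can sustain.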

VMFS datastore scalability

Documentation from various sources, including VMware professional services best practices, has traditionally recommended 20 to 30 VMs per VMFS datastore, and sometimes even fewer. Documents for VMware Lab Manager suggest limiting the number of ESX servers in a cluster to eight. These recommended limits are due in part to the effect of SCSI reservations on performance and reliability.

Extensive use of some features, such as VMware snapshots and linked clones, can trigger large numbers of VMFS metadata updates, which require locking. Before vSphere 4.1, reliable locks on smaller objects were obtained by briefly locking the entire LUN with a SCSI reservation. Any other server trying to access the LUN during the reservation would fail, then wait and retry, up to 80 times by default. This waiting and retrying added to perceived latency and reduced throughput in VMs. In extreme cases, if the other server exceeded the number of retries, errors were logged in the VMkernel logs and I/Os could be returned to the VM as failures.

When all ESX servers sharing a datastore support VAAI, ATS can eliminate these SCSI reservations, at least the reservations used to obtain smaller locks. As a result, datastores can be scaled to more VMs and more attached servers than before.

Datera has tested up to 128 VMs in a single VMFS datastore on LIO. Testing was capped at 128 VMs because the maximum addressable LUN size in ESX is 2 TB, which means each VM can occupy at most 16 GB, including its virtual disk, virtual swap, and any other files. Virtual disks much smaller than that generally do not leave enough space to be practical for an OS and an application.

Load was generated and measured on the VMs with Iometer. Tests such as starting, stopping, and suspending sets of VMs were run with Iometer workloads active on the VMs that stayed running. Additional tests were run with all VMs running Iometer while VMware snapshots were created and deleted as quickly as possible on all, or a large subset, of the VMs.

The results of these tests demonstrated that the performance impact measured without VAAI was either eliminated or substantially reduced when using VAAI, to the point that datastores could reliably be scaled to 128 VMs on a single LUN.

Statistics

The VMware esxtop command in ESX 5 has two new sets of counters for VAAI operations, available under the disk device view. Both sets cover the three key VAAI primitives. To view VAAI statistics with esxtop, follow these steps from the ESX 5 CLI:

~ # esxtop

Press 'u' to change to the disk device stats view.

Press 'f' to select fields, or 'o' to change their order. Note: This selects sets of counters, not individual counters.

Devices that support VAAI (LUNs on a supported storage system) are listed by their NAA ID. You can get the NAA ID for a datastore from the datastore properties in vCenter, from the Storage Details - SAN view in Virtual Storage Console, or with the vmkfstools -P /vmfs/volumes/<datastore> command. LIO LUNs start with naa.6001405.
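
To confirm which primitives ESX has detected for a given device, the esxcli namespace in ESX 5 also provides a per-device VAAI status view (the <naa_id> placeholder follows the same convention as above):

~ # esxcli storage core device vaai status get -d <naa_id>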

Note: Devices or datastores other than LUNs on an external storage system, such as CD-ROM drives, internal disks (which may be physical disks or LUNs on internal RAID controllers), and NFS datastores, are listed but show all zeroes for the VAAI counters.

CLONE_RD: Number of Full Copy reads from this LUN.
CLONE_WR: Number of Full Copy writes to this LUN.
CLONE_F: Number of failed Full Copy commands on this LUN.
MBC_RD/s: Effective throughput of Full Copy reads from this LUN, in megabytes per second.
MBC_WR/s: Effective throughput of Full Copy writes to this LUN, in megabytes per second.
ATS: Number of successful lock commands on this LUN.
ATSF: Number of failed lock commands on this LUN.
ZERO: Number of successful Block Zeroing commands on this LUN.
ZERO_F: Number of failed Block Zeroing commands on this LUN.
MBZERO/s: Effective throughput of Block Zeroing commands on this LUN, in megabytes per second.

Counters that count operations are cumulative and return to zero only when the server is rebooted. Throughput counters read zero when no commands of the corresponding primitive are in progress.

Clones between VMFS datastores, and Storage vMotion operations that use VAAI, increment clone reads on one LUN and clone writes on another. In any case, the totals of the clone read and clone write columns should be equal.