VMware vSphere Storage Appliance 5.1 Features

VMware vSphere Storage Appliance 5.1 provides a distributed shared storage solution that abstracts the computing and internal hard disk resources of two or three ESXi hosts to form a VSA cluster. The VSA cluster enables vSphere High Availability and vSphere vMotion.

VSA Cluster Features

A VSA cluster provides the following set of features:

Shared datastores for all hosts in the datacenter

One replica of each shared datastore

VMware vMotion and VMware High Availability

Failover and failback capabilities from hardware and software failures

Replacement of a failed VSA cluster member. Note: A VSA cluster member is an ESXi host with a running vSphere Storage Appliance that participates in a VSA cluster.

Recovery of management of an existing VSA cluster after a fatal vCenter Server failure

Collection of VSA cluster logs

VSA Cluster Components

vCenter Server

Two or three ESXi hosts

vSphere Storage Appliance

vSphere Storage Appliance Manager

vSphere Storage Appliance Cluster Service (used only in a VSA cluster with two ESXi hosts)

What's New in vSphere Storage Appliance 5.1

vSphere Storage Appliance 5.1 includes many new features and enhancements that improve the management, performance, and security of VSA clusters.

Support for multiple VSA clusters managed by a single vCenter Server instance.

Ability to run vCenter Server on a subnet different from a VSA cluster.

Support for vCenter Server run locally on one of the ESXi hosts in the VSA cluster.

Ability to install VSA on existing ESXi hosts that have virtual machines running on their local datastores.

Ability to run the VSA cluster service independently from a vCenter Server instance in the same subnet as the VSA cluster, installed on either Linux or Windows. The VSA cluster service is required for a cluster with two members. For information about how to install the service, see the Install and Configure the VSA Cluster Environment section in the VMware vSphere Storage Appliance Installation and Administration documentation.

Improved supportability for VSA clusters in an offline state. VSA has improved its ability to identify issues that cause a cluster to be offline, and also provides an option to return a cluster to an online state.

Ability to specify and increase the storage capacity of a VSA cluster.

Support for memory over-commitment in a VSA cluster that includes ESXi 5.1 hosts. A VSA cluster that includes hosts running the earlier ESXi 5.0 version does not support memory over-commitment.

A new licensing model. In addition to being an add-on for vCenter Server, VSA can now have its own standalone license. Use the add-on license if you manage a single cluster. To manage multiple clusters, you need to obtain the standalone VSA license.

In addition, VSA 5.1 extends support for the following:

RAID: Support for hardware RAID5 and RAID6 on the ESXi host.

Updated: The VMFS heap size is increased to 256MB. This, combined with support for larger drives, allows for a per-node VSA virtual storage capacity of 24TB. This capacity maximum applies in aggregate across all VMFS datastores created on the ESXi host.

Installation Notes

vSphere Storage Appliance supports the VSA Manager Installer wizard and the VSA Automated Installer script as installation workflows. The following table shows a comparison of the requirements for each workflow. Read the VMware vSphere Storage Appliance Installation and Administration documentation to learn more about each workflow.

Installation Requirements

VSA Manager Installer

VSA Automated Installer

Updated: Hardware Requirements

Two or three servers with homogenous hardware configuration

CPU: 64-bit x86 Intel or AMD processor, 2GHz or faster

Memory

6GB, minimum

24GB, recommended

72GB, maximum tested. Note: You can have more than 72GB of memory per ESXi host because there is no memory limitation for the VSA cluster.

Hard disks: All disks that are used in each host must have the same model, capacity, and performance characteristics. Do not mix SATA and SAS disks. For possible configurations, see RAID Settings and Requirements. Although disk drive configurations with heterogeneous vendor and model combinations, drive capacities, and drive speeds might work, the write I/O performance of the RAID adapter is a multiple of the slowest drive in the RAID set, and the capacity is a multiple of the smallest drive in the RAID set. VMware strongly recommends that you do not use hybrid disk drive configurations, except where you must rebuild a RAID set after acquiring a replacement drive from the server manufacturer that differs only slightly. Substituting a drive smaller than the current minimum-capacity drive causes the RAID set rebuild to fail and is not supported.

Yes

Yes

vCenter Server 5.1 system on a physical or virtual machine. You can run vCenter Server on one of the
ESXi hosts in the VSA cluster. The following configuration is required for vCenter Server installation:

CPU: 64-bit x86 Intel or AMD processor, 2GHz or faster

Memory and hard disk space: The amount of memory and hard disk space needed depends on your vCenter Server configuration. For more details, see the vSphere Installation and Setup documentation.

NIC: 1 Gigabit Ethernet NIC or 10 Gigabit Ethernet NIC

For VSA Manager, additional hard disk space is required:

VSA Manager: 10GB

VSA Cluster Service: 2GB

Yes

Yes

Network Hardware Requirements and Configuration

Two 1 Gigabit/10 Gigabit Ethernet switches, recommended. Note: A network configuration with two switches eliminates a single point of failure in the physical network layer.

One 1 Gigabit/10 Gigabit Ethernet switch, minimum

Yes

Yes

Static IP addresses. vCenter Server and VSA Manager do not need to be in the same subnet as VSA clusters. Members of each VSA cluster, including the VSA cluster service for a 2-member configuration, need to be in the same subnet.

Yes

Yes

(Optional) One or two VLAN IDs configured on the Ethernet switches

Yes

Yes

Software Installations

ESXi 5.1 on each host

Yes

Yes

Windows Server 2003 or Windows Server 2008 64-bit installation

Yes

Yes

vCenter Server 5.1 on a physical system or a virtual machine. You can run vCenter Server on one of the
ESXi hosts in the VSA cluster.

Yes

Yes

vSphere Client or vSphere Web Client

Yes

Yes


Updated: RAID Settings and Requirements

RAID Type

SATA Drives

SAS Drives

RAID5

Not supported

These configurations are supported:

10 X 0.5T => 4.5T VMFS datastore

8 X 0.75T => 5.25T VMFS datastore

7 X 1T => 6T VMFS datastore

6 X 1.5T => 7.5T VMFS datastore

5 X 2T => 8T VMFS datastore

3 X 3T => 6T VMFS datastore

4 X 2.5T => 7.5T VMFS datastore

4 X 3T => 9T VMFS datastore

RAID6

Maximum supported VMFS datastore limit per host is 24T.

Maximum supported VMFS datastore limit per host is 24T. This capacity maximum applies in aggregate across all VMFS datastores created on the ESXi host.

RAID10

Maximum supported VMFS datastore limit per host is 8T. This is not a VMFS limitation; it is the limit imposed by the expected aggregate disk drive resiliency of a RAID set. Beyond this limit, storage resiliency falls below the acceptable level.

Disk Rotational Speed

At least 7200 RPM

At least 10000 RPM

Note: For best performance, select 15000 RPM disks.
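The RAID5 datastore sizes in the table follow the usual RAID5 arithmetic: one drive's worth of capacity is consumed by parity, so usable space is (number of drives − 1) × drive size. A quick sketch of that calculation (generic RAID math, not a VMware utility):

```python
def raid5_usable_tb(num_drives, drive_tb):
    """Usable RAID5 capacity: one drive's worth of space goes to parity."""
    if num_drives < 3:
        raise ValueError("RAID5 requires at least 3 drives")
    return (num_drives - 1) * drive_tb

# Supported SAS configurations from the RAID5 row above:
configs = [(10, 0.5), (8, 0.75), (7, 1), (6, 1.5), (5, 2), (3, 3), (4, 2.5), (4, 3)]
for n, size in configs:
    print(f"{n} x {size}T => {raid5_usable_tb(n, size)}T VMFS datastore")
```

The computed values match the table entries, for example 10 × 0.5T drives yield a 4.5T datastore.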

Known Issues

The known issues in this vSphere Storage Appliance release are grouped as follows:

Installation Issues

New: The Select Datacenter page of the VSA Installer displays an error
If any of the ESXi hosts in the datacenter that you select for the VSA cluster uses a distributed vSwitch, the VSA Installer displays the following error: java.security.InvalidParameterException : Invalid gateway: null. This issue occurs even when you do not intend to use the host with the distributed vSwitch for the VSA cluster. Workaround:
All participating ESXi hosts must use a standard vSwitch for the management network. If your datacenter includes non-participating hosts that use distributed vSwitches, move those hosts to a different datacenter.

VSA Manager installation fails with an error
When installing VSA Manager, you might see the following error message:
Port port_number is already in use. This error indicates that another process might be using the port required by VSA Manager. Workaround:

Use the netstat command to find the PID of the process using that port: netstat -ano | findstr port_number.

Stop the process.

When the process stops, use netstat again to ensure that the port is available before continuing with VSA Manager installation.
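The steps above can also be scripted. A minimal sketch (a hypothetical helper built on Python's standard socket module) that reports whether a TCP port can currently be bound, that is, whether it is free for VSA Manager:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if the TCP port can be bound, i.e. no other process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True   # nothing else is bound to this port
        except OSError:
            return False  # the port is held by another process
```

Run the check again after stopping the offending process, and continue with the VSA Manager installation once it reports the port as free.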

VSA 5.1 installation fails with the Error 2896: Executing action failed message
This problem might occur when the location of the temp drive is set to a drive other than C:, where VSA Manager is to be installed. Workaround:
Make sure that the user and system TEMP and TMP variables point to a location on the C: drive.

Attempts to uninstall VSA Manager fail when vCenter Server is uninstalled first
If you have uninstalled vCenter Server from the system where both vCenter Server and VSA Manager run, you might not be able to uninstall VSA Manager.
Workaround:
If you need to uninstall vCenter Server, make sure to uninstall VSA Manager and other plug-ins first.

Entering a license key that does not include VSA support results in an incorrect warning message
When you enter an incorrect license key on the License Information page of the VSA Manager installer, the following message appears: The license of this vCenter Server and/or Virtual Storage Appliance has expired. Provide a valid license key to continue with the installation.
This message is incorrect because the entered license key has not expired. The message should indicate that the license key does not support VSA and, as a result, installation cannot proceed.
Workaround:
Enter a license key that includes VSA support.

You can have only one instance of the VSA cluster service per physical server
One instance of the VSA cluster service is always installed with VSA Manager. Use the installer for the VSA cluster service only when installing the VSA cluster service on a separate server without VSA Manager. For information about how to install the service, see the Install and Configure the VSA Cluster Environment section in the vSphere Storage Appliance Installation and Administration documentation.
Workaround:
None.

Upgrade Issues

Before upgrading to VSA 5.1, verify that the VSA cluster is up and functioning properly.

After a failed upgrade to VSA 5.1, VMFS heap size remains set to 256MB
During the VSA upgrade, the heap size of a VMFS datastore is increased to 256MB. If the upgrade fails and VSA Manager reverts to its initial state, the heap size of the VMFS datastore does not revert to the original value and remains set to 256MB. Workaround: Manually reset the VMFS heap size to the original value.

Attempts to upgrade VSA Manager fail with an error
You see the following error message: A cluster is not available. This failure might occur when you upgrade VSA Manager to version 5.1 but do not have a previously created VSA cluster. Workaround: Uninstall the earlier version of VSA Manager before installing VSA Manager 5.1.

After an upgrade to VSA 5.1, a VSA cluster might take longer to exit maintenance mode
When you upgrade from VSA 1.0 to VSA 5.1, the VSA cluster might take more than 20 minutes to exit cluster maintenance mode. Workaround: None.

A failed VSA upgrade might leave an orphaned VSA virtual machine
If the VSA upgrade failure is caused by lost communication with an ESXi host during or after virtual machine deployment, an orphaned virtual machine might be left in the datacenter after the upgrade rollback. Workaround: Manually delete the orphaned virtual machines before you retry the upgrade.

Upgrade to VSA 5.1 fails if a VSA cluster was recovered in VSA 1.0
If a VSA cluster was recovered in VSA 1.0, and the vApp properties were not manually restored on the VSA virtual machines, upgrade to VSA 5.1 might fail. Workaround: Restore the vApp configuration and provide the correct networking properties. For details, see VMware Knowledge Base article 2033916.

Updating your system without using the recommended upgrade order causes upgrade failures with vSphere Storage Appliance
If you do not follow the recommended order of component upgrades, the upgrades fail. Specifically, upgrading ESXi before vSphere Storage Appliance (VSA) causes the VSA upgrade to fail. Upgrading VSA before upgrading vCenter Server causes VSA to stop working when vCenter Server is upgraded, due to metadata and licensing changes. Follow the recommended order when upgrading your system. The following is the recommended order for upgrade:

Upgrade vCenter Server from version 5.0 to 5.1.

Upgrade the vSphere Storage Appliance from version 1.0 to 5.1.

Enter cluster maintenance mode.

Upgrade the ESXi hosts from version 5.0 to 5.1.

Exit cluster maintenance mode.

Workaround:

If you have already upgraded VSA to version 5.1 before upgrading vCenter Server, upgrade vCenter Server (selecting the Use the existing DB option) and then uninstall and reinstall VSA Manager.

If you have already upgraded ESXi to version 5.1 before upgrading VSA, reinstall ESXi 5.0 on your hosts preserving the local VMFS datastore, and then restore certain configurations before you can upgrade VSA. For information about reinstalling ESXi 5.0 and restoring the appropriate configurations, see the VMware Knowledge Base article 2034424.

Interoperability with vSphere Issues

Storage vMotion task of a virtual machine fails when a vSphere Storage Appliance recovers from failure
If you use Storage vMotion to migrate a virtual machine while a vSphere Storage Appliance is recovering from a failure, the Storage vMotion process might fail. While the failed vSphere Storage Appliance is recovering, the migration process might become slow, and the vSphere Client might display the Timed out waiting for migration data error message. Workaround: After the vSphere Storage Appliance recovers from the failure, restart the Storage vMotion task.

vSphere Update Manager scan and remediation tasks fail on ESXi hosts that are part of a VSA cluster
When you perform scan and remediation tasks with vSphere Update Manager on ESXi hosts that are part of a VSA cluster, the tasks might fail. Workaround: Before you perform scan and remediation tasks, place the VSA cluster member in maintenance mode.

On the VSA Manager tab, click Appliances.

In the Host column, right-click the ESXi host for which you want to perform scan and remediation tasks and select Enter Appliance Maintenance Mode.

In the confirmation dialog box, click Yes.
The status of the VSA cluster member changes to Maintenance.

In the Entering Maintenance Mode dialog box, click Close.

Perform scan and remediation tasks on the ESXi host that accommodates the VSA virtual machine that is in maintenance mode.

On the VSA Manager tab, click Appliances.

Right-click the VSA cluster member that is in maintenance mode and select Exit Appliance Maintenance Mode.

In the Exiting Maintenance Mode dialog box, click Close.

Repeat the steps for each ESXi host for which you want to perform scan and remediation tasks.

VSA datastores do not support virtual machines with Fault Tolerance. Workaround: None.

Performance Issues

I/O throughput to VSA datastores is slower when virtual machines perform disk writes whose sizes are not multiples of 4KB or are not aligned on a 4KB boundary
If an application is configured to perform disk writes that are not multiples of 4KB or are not aligned on a 4KB boundary, the I/O throughput on the VSA datastore that contains the virtual disks is affected by the need to read the contents of the data blocks before writing them to the VSA datastore. Workaround: To avoid the issue, make sure that your configuration meets the following conditions:

Disk partitions of the virtual machine start on a 4KB boundary

Applications that are either bypassing a file system or writing directly to files are issuing I/O that is both aligned and sized to 4KB multiples
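The two conditions above reduce to simple modular arithmetic: a write avoids the read-modify-write penalty only if both its offset and its length are multiples of 4KB. A sketch of the check (illustrative helper names, not a VMware API):

```python
BLOCK = 4096  # 4KB block size assumed by the VSA datastore

def write_is_aligned(offset, length, block=BLOCK):
    """True if the write covers only whole blocks, so no block must be read first."""
    return offset % block == 0 and length % block == 0

def partial_blocks(offset, length, block=BLOCK):
    """Number of partially covered blocks whose contents must be read before writing."""
    touched = set()
    if offset % block:
        touched.add(offset // block)     # write starts inside a block
    end = offset + length
    if end % block:
        touched.add((end - 1) // block)  # write ends inside a block
    return len(touched)
```

For example, a 4KB write at offset 512 straddles two blocks and forces both to be read first, while the same write at offset 8192 touches exactly one whole block and needs no reads.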

If a VSA cluster member fails in a three-member VSA cluster, you can perform only up to two Storage vMotion tasks
If one of the VSA cluster members fails in a three-member VSA cluster, you cannot perform more than two Storage vMotion tasks between the VSA datastores. If you run three simultaneous Storage vMotion tasks, one of the tasks might time out. Workaround: Do not run more than two Storage vMotion tasks in a VSA cluster.

Maintenance Issues

One of the VSA datastores appears as Degraded in the VSA Manager UI after a backend path fault has been resolved
You might also see in the task list a datastore synchronization task that has not started or appears stalled.
This issue might occur when a fault affects both backend paths.
Such a fault results from a loss of network communication on all backend network interfaces. When communication is lost for a considerable period of time, the cluster is placed in the degraded state.
Workaround: Reboot the VSA node that exports the degraded VSA datastore.

Reconfigure VSA Cluster Network wizard does not check for IP address conflicts in the back-end network
When you perform network reconfiguration of the VSA cluster with the Reconfigure VSA Cluster Network wizard, the wizard does not check for IP address conflicts in the back-end network. The VSA cluster back-end network uses IP addresses in the 192.168.x.x subnet.
Workaround: No workaround is available. You must ensure that the addresses you assign to the back-end network are not used by other hosts or devices.
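Since the wizard performs no conflict check, one way to guard against duplicates is to compare your planned back-end addresses against addresses already in use before running the reconfiguration. A minimal sketch (hypothetical helper and example addresses):

```python
def find_ip_conflicts(planned, in_use):
    """Return planned back-end IPs that are already assigned to other hosts or devices."""
    return sorted(set(planned) & set(in_use))

# Hypothetical addresses in the 192.168.x.x back-end subnet:
planned = ["192.168.10.1", "192.168.10.2", "192.168.10.3"]
in_use = ["192.168.10.2", "192.168.10.50"]
print(find_ip_conflicts(planned, in_use))  # a non-empty result means a conflict
```

An empty result does not prove the addresses are safe; it only confirms they do not clash with the in-use list you supplied.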

Documentation Issues

New: Correction to the brownfield network configuration port group names in the documentation
The Network Configuration of the vSphere Storage Appliance documentation incorrectly defines the five port groups configured on each host as VSA Front End Network, VM Network, Management Network, VSA Back End Network, and VSA vMotion.
Entering the port group names shown in the documentation causes the installation to fail.
Workaround:
Use the correct port group names. The port groups must be named exactly as follows:

VSA-Front End

VM Network

Management Network

VSA-Back End

VSA-VMotion
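Because any deviation from these names causes the installation to fail, it can help to verify the configured port groups before installing. A minimal sketch (hypothetical helper; the required names are copied verbatim from the list above):

```python
REQUIRED_PORT_GROUPS = {
    "VSA-Front End",
    "VM Network",
    "Management Network",
    "VSA-Back End",
    "VSA-VMotion",
}

def port_group_problems(configured):
    """Report required port groups that are missing and names that do not match exactly."""
    configured = set(configured)
    return {
        "missing": sorted(REQUIRED_PORT_GROUPS - configured),
        "unexpected": sorted(configured - REQUIRED_PORT_GROUPS),
    }
```

For example, a host configured with the documentation's incorrect name "VSA Front End Network" would be reported as missing "VSA-Front End".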

The Select When to Format the Disks page in the Online Help contains incorrect information
The Select When to Format the Disks page indicates that for ESXi 5.1 hosts, disks are automatically configured using the optimized eager zeroed format. This is incorrect because the format is not currently supported.
Workaround: When formatting disks, you must select one of the options: Format disks on first access or Format disks immediately.

Resolved Issues

The following issues have been resolved since the last release of vSphere Storage Appliance. The list of resolved issues below pertains to this release of vSphere Storage Appliance only.