What's New

The following information provides highlights of some of the enhancements available in this release of VMware ESXi:

VMware View 4.0 support – This release adds support for VMware View 4.0, a solution built specifically for delivering desktops as a managed service, from the protocol to the platform.

Windows 7 and Windows 2008 R2 support – This release adds support for 32-bit and 64-bit versions of Windows 7 as well as 64-bit Windows 2008 R2 as guest OS platforms. In addition, the vSphere Client is now supported and can be installed on a Windows 7 platform. For a complete list of supported guest operating systems with this release, see the VMware Compatibility Guide.

Enhanced Clustering Support for Microsoft Windows – Microsoft Cluster Server (MSCS) for Windows 2000 and 2003 and Windows Server 2008 Failover Clustering is now supported on a VMware High Availability (HA) and Distributed Resource Scheduler (DRS) cluster in a limited configuration. HA and DRS functionality can be effectively disabled for individual MSCS virtual machines, as opposed to disabling HA and DRS on the entire ESX/ESXi host. Refer to the Setup for Failover Clustering and Microsoft Cluster Service guide for additional configuration guidelines.

Enhanced VMware Paravirtualized SCSI Support – Support for boot disk devices attached to a Paravirtualized SCSI (PVSCSI) adapter has been added for Windows 2003 and 2008 guest operating systems. Floppy disk images containing the driver are also available for use during Windows installation; press F6 during setup to install additional drivers. Floppy images can be found in the /vmimages/floppies/ folder.

Improved vNetwork Distributed Switch Performance – Several performance and usability issues have been resolved, resulting in the following:

Improved performance when making configuration changes to a vNetwork Distributed Switch (vDS) instance when the ESX/ESXi host is under a heavy load

Improved performance when adding or removing an ESX/ESXi host to or from a vDS instance

Increase in vCPU per Core Limit – The limit on vCPUs per core has been increased from 20 to 25. This change raises the supported limit only; it does not include any additional performance optimizations. Raising the limit gives users more flexibility to configure systems based on specific workloads and to take the most advantage of increasingly faster processors. The achievable number of vCPUs per core depends on the workload and the specifics of the hardware. For more information, see the Performance Best Practices for VMware vSphere 4.0 guide.

Enablement of Intel Xeon Processor 3400 Series – Support for the Xeon processor 3400 series has been added. For a complete list of supported third party hardware and devices, see the VMware Compatibility Guide.

Resolved Issues – In addition, this release delivers a number of bug fixes that are documented in the Resolved Issues section.

Before You Begin

ESXi, vCenter Server, and vSphere Client Version Compatibility

The VMware vSphere Compatibility Matrixes provide details on the compatibility of current and previous versions of VMware vSphere components, including ESXi, vCenter Server, the vSphere Client, and optional VMware products. In addition, check the vSphere 4.0 Compatibility Matrixes for information on supported management and backup agents before installing ESXi or vCenter Server.

Hardware Compatibility

Learn about hardware compatibility:

The Hardware Compatibility Lists are available on the Web-based Compatibility Guide at http://www.vmware.com/resources/compatibility. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides and provides the option to search the guides, and save the search results in PDF format.

After successful installation of ESXi Installable or successful boot of ESXi Embedded, several configuration steps are essential. In particular, some licensing, networking, and security configuration is necessary. Refer to the following guides in the vSphere documentation for guidance on these configuration tasks.

Future releases of VMware vSphere might not support VMFS version 2 (VMFS2). VMware recommends upgrading or migrating to VMFS version 3 or higher. See the vSphere Upgrade Guide Update 1.

Future releases of VMware vCenter Server might not support installation on 32-bit Windows operating systems. VMware recommends installing vCenter Server on a 64-bit Windows operating system. If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide Update 1 for instructions on installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

The VMware Tools RPM installer, which was previously available on the VMware Tools ISO image for Linux guest operating systems, has been removed for ESXi. VMware recommends using the tar.gz installer to install VMware Tools on virtual machines with Linux guest operating systems.

Management Information Base (MIB) files related to ESXi are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0. All MIB files can be downloaded from the VMware Web site at http://www.vmware.com/download.

Upgrading or Migrating to ESXi 4.0 Update 1

vSphere 4.0 Update 1 offers the following applications that you can use to upgrade to ESXi 4.0 Update 1:

vSphere Host Update Utility—For standalone hosts. A standalone host is an ESX/ESXi host that is not managed by vCenter Server. See the vSphere Upgrade Guide Update 1 for more information.

Upgrading VMware Tools

Patches Contained in this Release

In addition to ISO images, the ESXi 4.0 Update 1 release, both embedded and installable, is distributed as a patch that can be applied to existing installations of ESXi 4.0 software.

The patch bundle can be downloaded from the VMware Download Patches page, or can be applied using VMware Update Manager.
The patch bundle contains the same bulletins as those shown individually in VMware Update Manager. The following is the patch bundle:

This release also contains all patches for the ESXi Server software that were released prior to the release date of this product. See the VMware Download Patches page for more information on the individual patches.

ESXi 4.0 Update 1 also contains all fixes in the following previously released bundles:

CIM and API

ESXi Lockdown mode not handled correctly
The CIMOM on an ESXi host shows these problems while the host is in Lockdown mode:

The PowerManagementService.Reboot() method incorrectly reports success while the host is in lockdown mode.

The RecordLog.ClearLog() method succeeds, but it should be rejected.

During Lockdown mode, the CIMOM makes system calls in a loop, which could increase CPU utilization to 100%.

The repeated system calls cause authentication failure messages to be written to system logs.

These problems have been corrected.
The CIMOM now rejects the following extrinsic method calls during lockdown:

Requests authenticated using the 'root' username are always rejected during lockdown.

The PowerManagementService.Reboot() method always fails during lockdown because the PowerManagementService provider always authenticates with the host when it executes. During lockdown, the host does not accept authentication requests.

The CIMOM accepts the following extrinsic method calls during lockdown mode:

Extrinsic method calls other than PowerManagementService.Reboot() that are authenticated with a user name other than 'root', if the user name has vSphere administrative privileges on the host. These method calls continue to be authenticated from the authentication cache.

Extrinsic methods that are authenticated with a 'ticket' acquired from an AcquireCIMServicesTicket() request to the vSphere Web Services API. The ticket can be issued only if the host is managed by vCenter Server, and only before the host enters lockdown mode. Ticket authentication is valid for the RecordLog.ClearLog() method; however, the PowerManagementService.Reboot() method is an exception.

New: Overestimated memory usage of guest operating system causes alarms in vCenter Server to go off spuriously
A guest operating system's memory usage might be overestimated on Intel systems that support EPT technology or on AMD systems that support RVI technology. This might cause the memory alarms in the vCenter Server to go off even if the guest operating system is not actively accessing a lot of memory. This issue is resolved in this release.

Font rendering issue in virtual machines when viewing in widescreen
When a virtual machine is configured for widescreen resolution, fonts appear distorted in Microsoft Office applications.
This issue appears when the resolution is set to 2560 x 1024.

The issue is resolved in this release.

New: System commands fail while using the guest SDK Library on vMA 4.0 system
When you try to use the guest SDK Library on a vSphere Management Assistant (vMA) 4.0 system, the system commands fail, and the following error might be displayed: Cannot read file data: Error 21
This issue occurs because the guest SDK libraries are not located in any directory known to the ldconfig command.

This issue is resolved in this release.

If you upgrade VMware Tools or the VSS components in VMware Tools to version 4.0, applications that require the msvcp71.dll file fail to start when a virtual machine is rebooted

This issue is resolved in this release.

Fixes an issue where e1000 vNIC emulation does not function properly under the OS/2 guest operating system
This fix includes updated e1000 vNIC emulation to work around the issue.

Miscellaneous

Patches continue to be listed as installed after you restore factory settings or remove custom extensions
Even after you select Restore Factory Settings or Remove Custom Extensions in the direct console user interface, an esxupdate or vihostupdate query displays all patches as Installed.

This issue is resolved in this release.

ESX/ESXi 4.0 service console applications repeatedly call a driver's poll function rather than using a blocking method to wait for external notification from the driver
Applications running on ESX/ESXi 4.0 that use poll- or select-based event notification, poll() or select(), on VMkernel character devices might repeatedly call the device until the device reports that events are available to the application. Because the driver is called repeatedly rather than waiting until called by an external event, this behavior results in a high CPU load.

This issue is resolved in this release.

New: After starting ESX/ESXi host a warning message is logged in /var/log
After powering on or restarting the ESX/ESXi host, the following warning message is logged in the /var/log/messages file: Peer table full for sfcbd.
You can ignore this message. It does not indicate any issues with ESX/ESXi.

This issue is resolved in this release.

A series of vmklinux heap allocation warnings are followed by an ESX/ESXi system failure
This issue is caused by an erroneous response to a legitimate overcommitment of memory. When memory runs low from heavy swapping or vMotion use, a vmklinux limitation might be encountered. Specifically, the problem is triggered by a shortage of memory located below address 4GB. In such a situation, a series of log messages warn of a failure to allocate memory for the vmklinux heap. ESX/ESXi then becomes unavailable, logging exception 14 in a helper world. The following log excerpt is indicative of the messages logged:

While this issue might in theory occur on ESXi, all observations have been with ESX installations. ESX has a higher vulnerability due to the use of low memory by the service console.

This issue is resolved in this release.

Networking

When Intel MT and PT dual-port NICs are in promiscuous mode, the VLAN filter is turned on
This results in dropped Cisco Discovery Protocol (CDP) packets. This fix adds checks for CDP packets with VLAN tags, sanity checks on incoming packets, and logging output for parsing errors.

Fixes an issue with the NetXen nx_nic network driver on NX2031 cards where ESX might stop responding after a queue stop on systems with more than 32GB of memory

Fixes an issue where ESX might disable the CDP daemon in the console operating system

Updates the description of the Broadcom NetXtreme BCM5722 network adapter in vCenter for some Dell servers
The vCenter description of the Broadcom NetXtreme BCM5722 network adapter on some Dell servers contained a few unnecessary words; the description is now NetXtreme BCM5722 Gigabit Ethernet. The description is updated for Dell PowerEdge T105, R300, and T300 servers.

Fixes an issue where ESX might fail with the nx_nic driver after decoding an Ethernet header as an IP header
This fix sets the LRO enabled variable to 0 because Large Receive Offload (LRO) is not supported in ESX 4.0 Update 1.

Security

Fixes an issue where the NTPD daemon might have a stack-based buffer overflow if it is configured to use the autokey security model
The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-1252 to this issue.

A stack-based buffer overflow in ISC dhclient might allow remote DHCP servers to execute arbitrary code by using a crafted subnet-mask option
The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-0692 to this issue.

Server Configuration

On ESX/ESXi 4.0, a TPM-related warning is issued even though TPM is unavailable on the system (KB 1011452)

ESXi 4.0 server sometimes becomes unresponsive when host memory is heavily over-committed
ESXi 4.0 server sometimes becomes unresponsive when the host memory is over-committed to more than 200% usage and the server is configured as a Long Uptime Server.
When the server stops responding, all virtual machines fail. This issue is resolved in this release.

Fixes an issue where the BIOS version reported by CIM OMC_SMASHFirmwareIdentity is different from dmidecode on some machines

Fixes an issue where the DRAC gets the operating system name through IPMI and displays VMware ESX Server for both ESX and ESXi
This fix replaces the hard coded name with VMware_HypervisorSoftwareIdentity.ElementName.

Fixes an issue where the version information retrieved for the VMware_HypervisorSoftwareIdentity is hard coded at build time
If CIM components are patched individually, the build values for this provider might not match the build number or version information in other components or applications. After this fix is applied, version information is retrieved through a common mechanism used by other components in the system.

New: ESXi DCUI fails if management network test is interrupted and then restarted
The ESXi Direct Console User Interface (DCUI) fails if you interrupt a management network test by pressing the Esc key and then start the test again.
This issue is resolved in this release.

Storage

Fixes an issue where the Dell MegaRAID SAS1078R controller is not identified as PERC 6 because the PCI ID subsystem information is not handled properly

ESX/ESXi 4.0 does not support HP Enterprise Virtual Arrays with Egenera BladeFrame BF400S2
When you try to install ESX/ESXi 4.0 on an Egenera BF400S2 BladeFrame that boots from HP EVA series arrays, the installation fails and an error message appears stating that no disks are found. This happens because the installer does not recognize the disks that are presented through the Egenera BF400S2 BladeFrame. HP EVA with Egenera BF400S2 is not supported.

This issue is resolved in this release.

The size of virtual RDMs with snapshots cannot be expanded
When you increase the size of an underlying LUN for a virtual machine with a virtual RDM attached, the size is not updated. You cannot expand the size of virtual RDMs that are independent and nonpersistent, or of virtual RDMs with snapshots. This type of disk is read-only inside the VMkernel, and metadata updates are not performed on it. Therefore, the virtual machine does not see the new size of the LUN.

This issue is resolved in this release.

Setting VMkernel:Boot.storageHeapMaxSize to a value of 2147483647 or higher can cause a non-responsive server
If you use the Advanced Settings dialog box on the vSphere Client Configuration tab to set the VMkernel:Boot.storageHeapMaxSize option to a value of 2147483647 or higher, the ESX/ESXi host will fail with a purple screen after you reboot it.

New: Applications might fail and display I/O error during a LUN reset
During a LUN or SCSI bus reset on an RHEL 5 guest operating system, the applications in the virtual machine might fail due to an I/O error. This issue occurs only when you are using the PVSCSI adapter and ESX iSCSI initiator. This issue is resolved in this release.

Proxy file path access to an SMB/CIFS shared storage fails
Booting a virtual machine from a CD-ROM fails when the ISO file is located on a share mounted using the SMB/CIFS protocol.
This issue occurs because the proxy file path access is denied when using SMB/CIFS protocol. This issue is resolved in this release.

This fix increases the memory heap for the HPSA driver so that it can handle more than 2 controllers

New: SCSI abort failure reported during testing of multiple virtual machines
Because the HP Smart Array driver does not support SCSI abort, a device reset is issued instead. The immediacy of the reset can disrupt concurrent I/O, causing additional I/O failures and SCSI abort failures.

This issue is resolved in this release: the reset is now issued after a slight delay.

Supported Hardware

vCenter issues an error when the Power.CpuPolicy configuration option is changed to dynamic
When you change the Power.CpuPolicy option from static to dynamic, vCenter issues the following error message:

The value entered is not valid. Enter another value

This error appears because ESX/ESXi 4.0 attempts to change the system's CPU power management policy to dynamic even when the BIOS does not properly support processor performance states (P-states).

This issue is resolved in this release.

Upgrade and Installation

Upgrading ESX Server 3i 3.5 to ESXi 4.0 fails in specific cases
This issue occurs only with installations on serial attached SCSI (SAS) disks or Fibre Channel (FC) disks. In such cases, when you attempt to upgrade ESX Server 3i 3.5 installed on a SAS or FC disk, the following error occurs during the upgrade:

Unsupported boot disk. The boot device layout on the host will not support the upgrade

Note that this issue is one of a variety that can cause the preceding error to appear.

This issue is resolved in this release. However, be aware that an in-place upgrade of ESXi Installable is not possible on a boot LUN that also contains a VMFS partition.

Update: ESXi Installable upgrade through vSphere Host Update Utility fails with the error “ERROR: Unsupported boot disk”
This error can occur when a large amount of local storage exists for the boot device of the host. In such a scenario, a precheck script for the upgrade fails because the script does not have large-integer support, which becomes necessary when the storage size reaches a certain point. Therefore, the precheck fails because of a limitation in the script, not because of an upgrade support issue.

The full error for this issue is as follows:
ERROR: Unsupported boot disk
The boot device layout on the host does not support upgrade

This issue is resolved in this release.
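The failure mode described above, a script without large-integer support mishandling big storage sizes, can be illustrated with a short sketch. This is not VMware's actual precheck script, only a hedged illustration of the class of bug:

```python
import ctypes

def disk_bytes_32bit(sectors, sector_size=512):
    # Simulate a size computation stored in a 32-bit signed integer,
    # as a script without large-integer support might behave.
    return ctypes.c_int32(sectors * sector_size).value

def disk_bytes(sectors, sector_size=512):
    # Correct computation with arbitrary-precision integers.
    return sectors * sector_size

# A device of about 5 GB already exceeds the 32-bit signed range:
sectors = 9765625                    # 9765625 * 512 = 5,000,000,000 bytes
print(disk_bytes(sectors))           # 5000000000
print(disk_bytes_32bit(sectors))     # 705032704 (silently wrapped around)
```

A size check built on the truncated value would misjudge large boot devices, which matches the behavior the precheck fix addresses.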

Fixes an issue where installing VMware Tools overwrites the existing virtual printer drivers "TPOG" and "TPOGPS" if ThinPrint's .print server is installed
This fix checks for a registry entry created by .print; if the entry is detected, the virtual printer drivers bundled with VMware Tools are not installed.

vCenter Server and vSphere Client

On IBM x3655 Systems, the vSphere Client Configuration tab might list a power distribution sensor with an unknown status
If the vSphere Client is installed on an IBM x3655 system that is equipped with certain types of power supplies, the Power Distribution sensor might indicate the health status of that power supply as Unknown. The Unknown status results from a discrepancy between the IPMI firmware and the power supply sensors. The IPMI firmware reports the presence of a Power Distribution entity in the system. However, the available power supply sensors cannot determine the status of the Power Distribution entity. As a result, the vSphere Client cannot indicate the current status of the Power Distribution entity.

This issue is resolved in this release.

vSphere Client does not show correct identifier for FreeBSD operating system
The Summary tab in the vSphere Client does not display the correct identifier for FreeBSD guest operating system.
This issue is resolved in this release.

Fixes an issue where ESX might fail if an excessive number of synctime RPC messages build up in the queue, which results in VMX running out of memory
This fix limits the number of synctime RPC messages in the queue to 1.

New: Networking performance data is missing when the VMXNET3 adapter is used
The Networking panel is missing in the Performance tab of a virtual machine when a guest is using a VMXNET Generation 3 adapter. If a virtual machine has a mix of virtual adapters, the Networking panel for adapters of a type other than VMXNET3 is still displayed.

This issue is resolved in this release.

Fixes an issue where a virtual machine's heartbeat status might appear healthier than it is

vMotion and Storage vMotion

The fix for a previously identified vMotion failure might prevent migration (of virtual machines with video RAM greater than 30MB only) to a host without the fix
ESX/ESXi 4.0 Update 1 fixes the vMotion failure described in KB 1011971. However, using vMotion to migrate a virtual machine with video RAM greater than 30MB to an ESX/ESXi 4.0 Update 1 host might prevent you from migrating it back to a host that does not have this fix.

New: Applications such as MPlayer and MEncoder running in a virtual machine fail with an illegal instruction
A variety of applications, such as MPlayer and MEncoder, use Supplemental Streaming SIMD Extensions 3 (SSSE3) instructions.

These applications, when run in a virtual machine, fail with an illegal instruction error following a vMotion migration or a suspension and resumption of the virtual machine. On rare occasions, this type of failure occurs during normal execution.

This issue is resolved in this release.

New: When using the vMotion feature, a migration failure occurs followed by a system failure of the destination host
When a virtual machine migration using vMotion fails due to a rare resume error, the source virtual machine might retain a stale swap state. Any subsequent migration attempt from the source virtual machine can result in a destination-host system failure.

This issue is resolved in this release.

VMware High Availability and Fault Tolerance

Enabling HA fails when the ESX/ESXi host does not have DNS connectivity
If the ESX/ESXi host does not have DNS connectivity and the host's short name is not populated in the /etc/hosts file, enabling or configuring VMware HA might fail. This issue is resolved in this release.

When enabling FT, the Secondary VM starts for a few seconds and then fails, which causes the Primary VM to go into the Need Secondary VM state
When Primary and Secondary VMs for FT run on hosts with mixed steppings of Intel Xeon 5400 or 5200 Series Processors (CPUID Family 6, Model 23, steppings 6 and 10), the Secondary VM starts for a few seconds and then fails, which causes the Primary VM to go into the Need Secondary VM state.

VMware Consolidated Backup (VCB) is not supported with Fault Tolerance
A VCB backup performed on an FT-enabled virtual machine powers off both the primary and the secondary virtual machines and might render the virtual machines unusable.

Workaround: None

Guest Operating System

Removing the disk from a virtual machine with a RHEL3 guest operating system without informing the guest causes the virtual machine to fail
For a 32-bit virtual machine with a RHEL3 guest operating system and a BusLogic driver, hot removing the disk without informing the guest OS about the disk removal causes the virtual machine operation to fail.

Workaround: Remove the disk from the guest explicitly. To remove the disk, first get the disk details from /proc/scsi/scsi for the disk that you want to remove:

Get the HOST, CHAN, ID, and LUN numbers for the device from /proc/scsi/scsi

Run the following command in the RHEL console:

echo "scsi remove-single-device HOST CHAN ID LUN" > /proc/scsi/scsi
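The two steps above can be combined in a small shell sketch (the numbers are illustrative; substitute the values reported by /proc/scsi/scsi for the disk you want to remove):

```shell
# Values read from the /proc/scsi/scsi entry for the disk (illustrative;
# the argument order is host, channel, id, lun).
HOST=0 CHAN=0 ID=1 LUN=0

# Compose the removal command. On the RHEL 3 guest it would be written to
# the proc interface, for example:
#   echo "scsi remove-single-device $HOST $CHAN $ID $LUN" > /proc/scsi/scsi
echo "scsi remove-single-device $HOST $CHAN $ID $LUN"
```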

Solaris 10 U4 virtual machine becomes nonresponsive during VMware Tools upgrade
Upgrading or restarting VMware Tools in a Solaris 10 U4 virtual machine with an advanced vmxnet adapter might cause the guest operating system to become nonresponsive and the installation to be unable to proceed.

Solaris 10 U5 and later versions are not affected by this issue.

Workaround: Before installing or upgrading VMware Tools, temporarily reconfigure the advanced vmxnet adapter by removing its autoconfiguration files in /etc/ or removing the virtual hardware.

Workaround: From the VMware Web site, download an alternative Linux Tools ISO image that contains VMware Tools for supported as well as a variety of older and unsupported Linux guest operating systems. Alternatively, you can compile kernel modules for the unsupported Linux release by using the install-vmware.pl script distributed as part of VMware Tools.

Devices attached to hot-added BusLogic adapter are not visible to Linux guest
Devices attached to hot-added BusLogic adapter are not visible to a Linux guest if the guest previously had another BusLogic adapter. In addition, hot removal of the BusLogic adapter might fail. This issue occurs because the BusLogic driver available in Linux distributions does not support hot plug APIs. This problem does not affect performing hot add of disks to the adapter, only performing hot add of the adapter itself.

Workaround: Use a different adapter, such as an LSI Logic parallel or SAS adapter, for hot add capabilities. If a BusLogic adapter is required, attempt a hot remove of the adapter after unloading the BusLogic driver in the guest. You can also attempt to get control of the hot-added adapter by loading another instance of the BusLogic driver, by running the command modprobe -o BusLogic1 BusLogic (replacing BusLogic1 with BusLogic2, BusLogic3, and so on, for each subsequent hot add operation).
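The instance-naming scheme in this workaround can be sketched as follows (illustrative only; the commands are printed rather than executed, because they require a live Linux guest with the BusLogic driver):

```shell
# Print the modprobe command that loads the Nth separately named
# BusLogic driver instance (BusLogic1, BusLogic2, ...), one per
# hot add operation.
buslogic_cmd() {
    echo "modprobe -o BusLogic$1 BusLogic"
}

buslogic_cmd 1   # first hot-added adapter
buslogic_cmd 2   # second hot-added adapter
```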

Virtual machines with Windows NT guests require a response to a warning message generated when the virtual machine attempts to automatically upgrade VMware Tools
If you set the option to automatically check and upgrade VMware Tools before each power-on operation for Windows NT guests, the following warning message appears:

Set up failed to install the vmxnet driver Automatically, This driver will have to be installed manually

Workaround: The upgrade stops until the warning is acknowledged. To complete the upgrade, log in to the Windows NT guest and acknowledge the warning message.

Multiple DNS suffixes are not applied correctly after image customization of Linux distributions
DNS suffixes are automatically appended when a Linux distribution tries to resolve a DNS domain name. When more than one DNS suffix is customized, only the last DNS suffix is applied. Depending on the Linux distribution, not all customized DNS suffixes appear in the Linux distribution user interface.

Workaround: None

Creating a virtual machine of Ubuntu 7.10 Desktop can result in the display of a black screen
When you run the installation for the Ubuntu 7.10 Desktop guest on a virtual machine with paravirtualization enabled on an AMD host, the screen of the virtual machine might remain blank. The correct behavior is for the installer to provide instructions for you to remove the CD from the tray and press return.

Workaround: Press the return key. The installation proceeds to reboot the virtual machine. This issue does not occur if you start the installation on a virtual machine with two or more virtual processors.

New: Wake-on-LAN does not work with the e1000 vNIC on newer Windows guests
For ESX/ESXi hosts, the Wake-on-LAN feature (turning on a host with a network message) is not available with the e1000 vNIC on certain Windows guests. Specifically, it does not work on Windows Vista and later, or on 64-bit Windows versions.

Workaround: Use a vNIC type that supports Wake-on-LAN, such as VMXNET3.

For Windows Vista and Windows 7 guests, video output might be displayed incorrectly
Windows Media Player in a Windows Vista or Windows 7 guest might incorrectly display video files when the video is scaled.

Workaround: Either of the following actions works around this issue:

Play video in full screen mode (ALT + ENTER).

Uncheck Fit Video to player on resize.

Windows 7 operating system with Media Player 11 is not supported
A Microsoft Windows 7 guest operating system that includes Windows Media Player 11 is not supported on a virtual machine. If you run Windows Media Player and maximize its window on a virtual machine that is running the Microsoft Windows 7 operating system, the virtual machine might fail.

Internationalization

Parallel/serial port output file name does not accept non-ASCII characters and displays an error message
When configuring a virtual machine, filenames that include non-ASCII characters might be rejected with an error message. The validation for filenames is not localization-safe and might result in rejection of a valid name. This problem affects output files for serial and parallel ports, and it might affect ISO and FLP names or disk (VMDK) filenames.

Workaround: Restrict all datastore contents (directories and filenames) to ASCII.
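A quick way to apply the workaround's restriction is to audit names before using them for port output files (an illustrative helper, not part of vSphere; function name is an assumption):

```python
def non_ascii_names(names):
    # Flag entries whose names contain non-ASCII characters, since the
    # filename validation described above might reject them.
    return [n for n in names if not n.isascii()]

print(non_ascii_names(["vm1.vmdk", "boot.iso", "sortie-s\u00e9rie.out"]))
# ['sortie-série.out']
```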

Licensing

A host with a single server license that fails to be added to vCenter Server is not given the option to correct licensing during a subsequent add host operation
When an ESX or ESXi host configured with a single server license is added to a licensed vCenter Server, vCenter Server displays an error message explaining that the host cannot be added.

Workaround: Remove the disconnected host, and add it again with a non-single server license.

Virtual machines cannot power on if certain licenses are installed during a scripted or interactive installation
If you do not have the correct license serial numbers for your hardware when you install ESX/ESXi, you might encounter a licensing error. This problem occurs because vendor and resource-check validation of license keys is not performed during installation. After a license is validated with lib/licensecheck, a subsequent check is needed to verify that the installed system is within the limits imposed by the license. However, the installer does not perform this second check.

Workaround: Switch to evaluation mode, and then get the appropriate license from the portal.

Purchased add-on licenses are not displayed in the License list on the vSphere Client Licensing page
When you view your purchased licenses on the vSphere Client Licensing page, a separate product line item for add-on editions is not displayed. For example, if you purchased a vSphere 4.0 Standard with vMotion license, or a vSphere 4.0 Standard with vMotion and Data Recovery license, only the vSphere 4.0 Standard license appears.

Workaround: To view the product features and add-on features for a license key, follow these steps:

On the vSphere Home page, click Licensing.

In the upper-right corner, click Manage vSphere Licenses to launch the License wizard.

Click Next to go to the Assign Licenses page.

Move your cursor over the host license key to see the available product and add-on features.

Virtual machines with multi-way virtual SMP support might fail to start and might report a license failure immediately after upgrading a host's license
Immediately after upgrading the license for a host, vCenter Server fails to power on virtual machines. For example, if you upgrade from an edition that supports 4 vCPUs to an edition that supports 8 vCPUs, vCenter Server reports a license failure such as: machine has 8 virtual cpus, but host only supports 4....

Workaround: For the license upgrade to take effect, wait at least one minute before you power on virtual machines. To manually initiate the license change, go to Home > Licensing and click Refresh.

Miscellaneous

ESXi runs in degraded mode and some operations might fail if insufficient memory is available
ESXi Embedded and ESXi Installable require at least 3GB of physical memory. If the available memory is less than 3GB, ESXi performance is degraded and some operations might fail. Features that are most likely to fail include VMware High Availability and any operation that you perform when the vSphere Client is connected directly to an ESXi host.

Workaround: Increase the amount of available memory.

Diagnostic data from vCenter might be contained in a file that cannot be decompressed
While extracting a .tgz file that contains diagnostic data from vCenter, a dialog lists files that cannot be extracted, as well as an error message:

Symbolic link points to missing files.

Workaround: None

ESX/ESXi host's core dump partition setting is not persistent under certain conditions
If you change the core dump partition from /root to a second location and experience an ESX/ESXi host failure within an hour after this change is made but before the host is rebooted, the core dump partition reverts to its original setting of /root.

Workaround: After changing the core dump partition, immediately run esxcfg-boot.
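On ESX, the result can be checked from the service console; a minimal sketch, assuming the standard esxcfg-dumppart and esxcfg-boot utilities (the partition name is a placeholder, following the document's own <device_name> convention):

```shell
# List the configured and active diagnostic (core dump) partitions
esxcfg-dumppart -l

# Re-apply the intended partition if it has reverted
# (<partition_name> stands in for a device listed by esxcfg-dumppart -l)
esxcfg-dumppart -s <partition_name>

# Update the boot configuration so the setting persists across reboots
esxcfg-boot -b
```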

An ESX/ESXi host or a virtual machine's guest operating system might fail to operate if running multi-threaded applications using VMCI Sockets SDK
A known issue with the VMCI Sockets library can cause a host or a guest operating system to fail to operate when running multi-threaded applications that use the VMCI Sockets SDK.

Workaround: Avoid running multi-threaded VMCI Sockets applications that exhibit this behavior.

New: The diagnostic partition is not initialized until a system failure occurs
By default the diagnostic partition (or dump partition) is not initialized. Trying to collect the information in the diagnostic partition, for example by running the vm-support command, creates a harmless error message indicating that the diagnostic partition is not initialized.

Workaround: None. This issue does not affect the processing of the vm-support command. You can safely ignore the error message.

Stopping or restarting the vCenter Server service through the Windows Services Control MMC plug-in might lead to an error message
Under certain circumstances, the vCenter Server service might take longer than usual to start. Stopping and restarting the vCenter Server service through the Windows Services Control MMC plug-in might lead to the following error message:

Service failed to respond in a timely manner.

This message indicates that the time required to shut down or start up vCenter Server was more than the configured system-wide default timeout for starting or stopping the service.

Workaround: Refresh the Services Control screen after a few minutes, which should show that the service has been correctly stopped and restarted.

Help menu items appear inactive or clicking other links for Help results in an error
In vSphere Client installations on machines running non-English, non-Japanese, non-German, and non-Simplified Chinese Windows operating systems, Help menu items appear inactive. In addition, if you click other links or buttons for Help within the vSphere Client the following error message appears:

Ensure that only the contents of the vSphere Client online help subdirectory (VIC40) is copied to this level.

To view the online help for other product components under C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Help\en, double-click the index.html file in the subdirectory for that help system. For example, to view the DRS Troubleshooting help system, double-click the index.html file in the DRS40 subdirectory. The subdirectories for the online help systems of other vSphere modules at this location vary depending on the vSphere products you have installed.

Networking

Removing an ESX/ESXi host configured with a vDS from a vCenter Server system results in inconsistent networking state on the host
If you remove an ESX/ESXi host configured with a vDS from a vCenter Server system, the host cannot reconnect to the vDS. When you add the host back to the vCenter Server system, a warning similar to the following appears:

The distributed Virtual Switch corresponding to the proxy switches d5 6e 22 50 dd f2 94 7b-a6 1f b2 c2 e6 aa 0f bf on the host does not exist in vCenter or does not contain the host.

The virtual machines continue to function on their respective ports, but new virtual machines are not allowed to power on. You cannot modify the vDS settings for this host by using a vSphere Client connected to the vCenter Server system.

Workaround: Perform the following steps:

Use a vSphere Client to connect directly to the ESX/ESXi host. This workaround requires a direct connection.

Migrate the virtual machines off of the invalid vDS ports one by one by editing the settings of each virtual machine. This results in a prolonged network interruption to the virtual machines.

In a vSphere Client connected to the vCenter Server system, refresh the network settings of the host. The errors are cleared.

Add the host back to the vDS, either manually or by using a host profile.

Migrate the virtual machines back to their respective ports or portgroups on the vDS. To do so, right-click the vDS and choose Migrate Virtual Machine Networking. This process also results in network interruption to the virtual machines.

VLAN tagging does not work in SLES10 guest operating systems when Oplin NIC is used in passthrough mode (FPT)
This issue occurs when an Oplin 10GB adapter is assigned to a virtual machine running the SUSE Enterprise Linux 10 (SLES10) 32-bit or 64-bit guest operating system as an FPT (fixed passthrough) device and the guest operating system is configured to perform VLAN tagging. In such a case, TCP traffic deteriorates and a call to netperf terminates prematurely with an error message. ICMP traffic still goes through and you can ping.

Workaround: Run tcpdump while the TCP traffic is active. Running tcpdump puts the NIC in promiscuous mode, which ensures that the traffic flows properly but consumes a high number of CPU cycles on the guest operating system.
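A sketch of this workaround inside the guest (eth0 is an example interface name; substitute the VLAN interface actually in use):

```shell
# Running tcpdump places the NIC in promiscuous mode for the duration
# of the capture; writing the capture to /dev/null limits the overhead
tcpdump -i eth0 -n -w /dev/null
```

Leave the capture running while TCP traffic is active; stopping it returns the NIC to normal mode, and the problem can recur.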

For VMDirectPath Gen I, sharing a dual-function QLogic 2532 adapter between a virtual machine and either another virtual machine or the VMkernel might result in data corruption
When you configure a dual-function QLogic 2532 adapter for VMDirectPath IO, and assign the first PCI function to a virtual machine and the second to either the VMkernel or another virtual machine, data corruption might occur. This happens because both ports use the same credentials to log in to the fabric and have the same storage visibility. VMware does not support this configuration for VMDirectPath IO.

Workaround: If you cannot avoid sharing the dual-function adapter between a virtual machine and the VMkernel, assign the first PCI function to the virtual machine and the second to the VMkernel. The PCI functions cannot be split between two virtual machines.

NetXen chipset does not have hardware support for VLANs
The NetXen NIC does not display Hardware Capability Support for VMNET_CAP_HW_TX_VLAN and VMNET_CAP_HW_RX_VLAN. This occurs because the NetXen chipset does not have hardware support for VLANs. NetXen VLAN support is available in software.

The Custom creation of a virtual machine allows a maximum of four NICs to be added
During the creation of a virtual machine using the Custom option, the vSphere Client provides the Network configuration screen. On that screen, you are asked how many NICs you want to connect. The drop-down menu allows up to four NICs only, although 10 NICs are supported on ESX/ESXi 4.0 Update 1.

Workaround: After the virtual machine is created, add the remaining NICs by editing the virtual machine settings.

Using the vSphere Client, navigate to Home > Inventory > VMs and Templates, right-click the virtual machine, and select Edit Settings.

VMware Distributed Power Management (DPM) fails to wake a host from standby when the host is configured using Wake-On-LAN and a NetXen NIC for its vMotion network
The driver for NetXen NICs (nx_nic) that is included in this release advertises Wake-on-LAN support for all NetXen NICs, but the Wake-on-LAN feature does not work on most NICs when using this version of the driver. The only NetXen NIC for which Wake-on-LAN is known to work with this driver is the NetXen HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter, commonly found in the HP ProLiant ML370 G6. Because the driver advertises Wake-on-LAN support for other NetXen NICs as well, DPM is unaware that the support does not work for those NICs and will attempt to use it if configured to do so.

Workaround: If the host has support for IPMI or iLO as a wake protocol, configure DPM to use one of those protocols instead of Wake-on-LAN. Otherwise, install a NIC with working Wake-on-LAN support, or disable DPM on this host.

Switching a VMkernel NIC using DHCPv6 from static DNS attribution to DHCP DNS will not update the DNS name servers
Using the service console, vSphere CLI, or vSphere Client to switch an Internet Protocol version 6 (IPv6) VMkernel NIC that uses Dynamic Host Configuration Protocol version 6 (DHCPv6) from static Domain Name System (DNS) attribution to DHCP DNS does not update the DNS name servers until the next DHCPv6 lease renewal.

Workaround: Manually disable and then re-enable the VMkernel NIC to acquire the new DNS name servers. You can accomplish this by selecting the Restart Management Network option in the Direct Console User Interface, the keyboard-only user interface for ESXi. If you take no action, the DNS name server is acquired when the DHCPv6 lease is renewed.

The VmwVmNetNum of VM-INFO MIB displays as Ethernet0 when running snmpwalk
When snmpwalk is run for the VM-INFO MIB on an ESX/ESXi host, the VmwVmNetNum value is displayed as Ethernet0 instead of Network Adapter1, while the MOB URL in the VmwVmNetNum description displays Network Adapter1.

Workaround: None

Removing a virtual machine from the inventory might fail if the virtual machine is associated with an invalid vNetwork Distributed Switch
If a virtual machine is connected to an invalid vNetwork Distributed Switch, attempting to remove it from the inventory results in a NotFound exception. This problem occurs when you perform the following steps:

Disconnect a host that is a member of a vNetwork Distributed Switch.

Remove the host from the vNetwork Distributed Switch or the vSphere Client inventory.

Reconnect the host or add the host back to vSphere Client inventory.

Remove the virtual machine from the vSphere Client inventory.

Workaround: Perform one of the following tasks before removing the virtual machine from the inventory:

If the vNetwork Distributed Switch still exists, reconnect the host to it to make the device backing valid.

Reconfigure the virtual machine's vNIC to connect to a valid network.

Remove the vNIC from the virtual machine.

Applications that use VMCI Sockets might fail after virtual machine migration
If you have applications that use Virtual Machine Communication Interface (VMCI) Sockets, the applications might fail after virtual machine migration if the VMCI context identifiers used by the application are already in use on the destination host. In this case, VMCI stream or Datagram sockets that were created on the originating host stop functioning properly. It also becomes impossible to create new stream sockets.

Workaround: For Windows guest operating systems, reload the guest VMCI driver by rebooting the guest operating system or enabling the device through the device manager. For Linux guests, shut down applications that use VMCI Sockets, remove and reload the vsock kernel module and restart the applications.
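For a Linux guest, the module reload can be sketched as follows (the module name follows the text above; run as root after shutting down the affected applications):

```shell
# Unload and reload the VMCI sockets kernel module to clear stale
# context identifiers, then restart the applications
modprobe -r vsock
modprobe vsock
```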

ESX 4.0/ESXi 4.0 does not support configuring port binding with DVS-enabled VMkernel NICs
If you configure port binding with vNetwork Distributed Switch-enabled VMkernel NICs, the operation fails when you run the esxcli swiscsi nic add -n vmkx -d vmhbaxx or vmkiscsi-tool -V -a vmkx vmhbaxx command through the service console or the vSphere CLI.

Workaround: Only use legacy vSwitch VMkernel NICs for port binding.
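Using the command quoted above with a legacy vSwitch VMkernel NIC might look like this (vmk1 and vmhba33 are example names):

```shell
# Bind a VMkernel NIC that resides on a standard vSwitch to the
# software iSCSI initiator
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Verify the binding
esxcli swiscsi nic list -d vmhba33
```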

Applying port groups with multiple statically assigned VMKNICs or VSWIFs results in repeated prompts for an IP address
Applying a vDS portgroup with multiple statically assigned VMKNICs or VSWIFs repeatedly prompts you to enter an IP address. DHCP-assigned interfaces are not affected.

Workaround: Use only one statically assigned VMKNIC or VSWIF per portgroup. If multiple statically assigned VMKNICs are desired on the same vDS portgroup, then assign each VMKNIC or VSWIF to a unique set of services (for example, vMotion, Fault Tolerance, and other services).

DPM cannot put a host in standby mode when the VMkernel vMotion NIC is part of a vDS and the host is configured to use Wake-on-LAN for remote power-on
If a host's VMkernel vMotion NIC is part of a vDS, the NIC is configured to not support Wake-on-LAN. Therefore, unless the host is configured to use IPMI or iLO for remote power-on, it is considered incapable of remote power-on and DPM cannot automatically move it into standby mode. DPM selects other hosts to put into standby mode, if possible. If you attempt to put the host into standby mode manually, the attempt fails and an Enter Standby Stopped dialog box appears.

Workaround: For hosts that support IPMI or iLO, configure the IPMI or iLO credentials on each host so that one of these protocols is used for power-on instead of Wake-on-LAN. Alternatively, if you need to use Wake-on-LAN to power on hosts, configure the VMkernel vMotion interface on a vNetwork Standard Switch (vSwitch) rather than on a vDS.

The console for the guest operating system fails and you cannot access the guest through the console if you set MTU to less than 1500 for a vNetwork Distributed Switch or a vSwitch that has the service console port group or the management network port group
If you set the MTU for the vNetwork Distributed Switch or the vSwitch that includes the service console port group for ESX or the Management Network port group for ESXi Embedded to less than 1500, the console for the guest operating system fails and you cannot access the guest through the console. The service console port group for ESX and the management network port group for ESXi Embedded must be connected to a vSwitch or vNetwork Distributed Switch with an MTU set to 1500 or higher.

Workaround: Avoid setting the MTU value to less than 1500 for the vNetwork Distributed Switch or the vSwitch that includes the service console port group for ESX or the Management Network port group for ESXi Embedded.

Retrieval of DNS and host name information from the DHCP server might be delayed or prevented

New: Changing the network settings of an ESX/ESXi host prevents some hardware health monitoring software from auto-discovering it

After the network settings of an ESX/ESXi host are changed, third-party management tools that rely on the CIM interface (typically hardware health monitoring tools) cannot automatically discover the host through the Service Location Protocol (SLP) service.
Workaround: Manually enter the hostname or IP address of the host in the third-party management tool. Alternatively, restart slpd and sfcbd-watchdog by using the applicable method:

On ESXi: Restart the management agents from the Direct Console User Interface (DCUI). This restarts other agents on the host in addition to the ones impacted by this defect and might be more disruptive.

On ESX: In the ESX service console, run the following commands:

/etc/init.d/slpd restart
/etc/init.d/sfcbd-watchdog restart

Server Configuration

Host profiles do not capture or duplicate physical NIC duplex information
When you create a new host profile, the physical NIC duplex information is not captured. This is the intended behavior. Therefore, when the reference host's profile is used to configure other hosts, the operation negotiates the duplex configuration on a per physical NIC basis. This provides you with the capability to generically handle hosts with a variety of physical NIC capabilities.

Workaround: To set the physical NIC duplex value uniformly across NICs and hosts that are to be configured using the reference host profile, modify the host profile after it is created and reapply the parameters.

To edit the profile, follow the steps below.

On the vSphere Client Home page, click Host Profiles.

Select the host profile in the inventory list, then click the Summary tab and click Edit Profile.

Select Network configuration > Physical NIC configuration > Edit.

Select Fixed physical NIC configuration in the drop-down menu and enter the speed and duplex information.

ESX/ESXi might fail to discover second port on certain IBM servers with dual-port FC HBAs
When you use IBM x3650 servers with dual-port FC HBAs, ESX/ESXi might fail to discover the second port. This problem might also occur on other IBM servers with the same BIOS version.

Workaround: Depending on the type of adapter your server has, do one of the following:

For QLogic HBAs, upgrade the IBM BIOS to the latest BIOS version (version 1.2).

For Emulex HBAs, the following solutions exist:

If you use ESX booting from SAN, disable the local disk in the IBM server's BIOS.

If you use ESX booting from the local disk or ESXi, disable BootBIOS for both ports on the Emulex HBA.

The health status of the ESX/ESXi host server components does not appear on the Hardware Status tab
If you change the HTTPS port number in the SFCB configuration file (sfcb.cfg) to a port other than the default and restart the SFCB (CIM) server, the health status of the ESX/ESXi host server components does not appear on the Hardware Status tab. This behavior is also seen if you log directly in to an ESX/ESXi host and click the Configuration tab to view the health status. Status information for the server components does not appear. This problem occurs because vCenter Server and the SFCB server are communicating on different ports.

Workaround: Make sure that the SFCB server communicates only through the default port.
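To confirm which port SFCB is using, you can inspect its configuration file; a sketch, assuming the usual /etc/sfcb/sfcb.cfg location:

```shell
# Show the port settings in the SFCB configuration file
grep -i port /etc/sfcb/sfcb.cfg
```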

SNMP PowerOn traps generated during vmware_hostd restart
When you restart vmware_hostd, only the Warm Start trap message should be generated by default. However, PowerOn trap messages are also generated for all virtual machines running on the host.

Workaround: You can ignore the PowerOn trap messages.

Storage

When installing ESXi Installable hosts on a LUN on network storage, only one ESXi host boots from that image and booting from the LUN fails for additional ESXi hosts
Host configuration files are stored in the same LUN where the host image resides and have a one-to-one mapping relationship. Therefore, each host requires a unique LUN for booting ESXi from network storage. Multiple hosts cannot boot from the same LUN.

Note: Network booting is an experimental feature for ESXi.

Workaround: Create a unique LUN per host with a boot image for each host.

Entering additional static discovery targets for hardware iSCSI might fail
When you configure your hardware iSCSI adapter, an attempt to enter additional static discovery targets might fail. This occurs when a new target has the same iSCSI name as the existing one, even though their IP addresses are different.

Workaround: When configuring hardware iSCSI, use static discovery targets with different iSCSI names.

The path status for the CLARiiON iSCSI storage system changes from dead to active and from active to dead when the system is accessed through the ESX/ESXi software iSCSI initiator
When you use the software iSCSI initiator to access a CLARiiON iSCSI storage system, the path status frequently changes from dead to active and from active to dead. This occurs because CLARiiON does not support the advanced Delayed Ack parameter enabled on your ESX/ESXi host by default.

Workaround: Disable the Delayed Ack parameter on your ESX/ESXi host by performing the following steps:

Log in to the vSphere Client, and select a host from the inventory panel.

Click the Configuration tab, and click Storage Adapters.

Select the software iSCSI initiator to configure and click Properties.

On the General tab, click Advanced.

Deselect Delayed Ack.

When you use the PSP_RR path selection policy with Failover Clustering, shared disks experience problems and the cluster may not operate
Failover Clustering conducts SCSI-3 reservations on shared disks. SCSI-3 registration sent down one path allows the cluster to do SCSI-3 reservations only on that path. When PSP_RR later switches to another path, Failover Clustering may be unable to do a reservation or use other SCSI-3 commands that depend on the reservation.

Workaround: Do not switch devices used for shared disks to PSP_RR. Instead, use the PSP_MRU or PSP_FIXED policies depending on the normal default for the array.

Adding a QLogic iSCSI adapter to an ESX/ESXi system fails if an existing target with the same name but a different IP address exists
Adding a static target for a QLogic hardware iSCSI adapter fails if an existing target has the same iSCSI name, even if the IP address is different.

A target on a QLogic iSCSI adapter is identified by its iSCSI name alone, not by the combination of IP address and iSCSI name. In addition, the driver and firmware do not support multiple sessions to the same storage end point.

Workaround: None. Do not use the same iSCSI name when you add targets.

Booting from an iSCSI LUN might be too slow or fail
If any iSCSI configuration data is present before you start configuring an iSCSI boot device through the QLogic BIOS, you can create duplicate iSCSI sessions for the same target. When this occurs, I/O operations might be very slow and might fail.

Workaround: Perform the following steps:

In the BIOS, select the Clear Persistent Targets option to remove any existing iSCSI configuration data.

Add iSCSI boot configuration data.

Changing the Maximum Outstanding R2T iSCSI parameter on your ESX/ESXi host to a value greater than one, might result in the EMC CX3 Series storage system not working properly
If you change the default value of the Maximum Outstanding R2T iSCSI parameter on your ESX/ESXi host to a value greater than one, the EMC CX3 Series storage system might not work properly.

Workaround: Do not change the default value of one for the Maximum Outstanding R2T parameter.

Connecting to a tape library through an Adaptec card with aic79xx driver might cause ESX to fail
If your ESX Server system is connected to a tape library with an Adaptec HBA (for example: AHA-39320A) that uses the aic79xx driver, you might encounter a server crash when the driver tries to access a freed memory area. Such a condition is accompanied by an error message similar to:

Loop 1 frame=0x4100c059f950 ip=0x418030a936d9 cr2=0x0 cr3=0x400b9000.

Workaround: None

The ESX/ESXi host does not register a path added from Storage Manager Application
When you add a new port on a storage system using Storage Manager Application, your ESX/ESXi host does not display a new path to the storage system.

Workaround: Perform the following steps:

Make sure the port is accessible by the ESX/ESXi host.

Remove the physical connection for the newly added port.

Wait for the Device Delay Missing timer to expire.

Reconnect the physical connection.

Path for a device cannot be unclaimed after you disable autoclaiming
You cannot unclaim a path for a device after you set the autoclaiming option to Off/Disabled.

Workaround: The autoclaiming option is not supported in ESX/ESXi 4.0.

On rare occasions, after repeated SAN path failovers, operations that involve VMFS changes might fail for all ESX/ESXi hosts accessing a particular LUN
On rare occasions, after repeated path failovers to a particular SAN LUN, attempts to perform such operations as VMFS datastore creation, vMotion, and so on might fail on all ESX/ESXi hosts accessing this LUN. The following warnings appear in the log files of all hosts:

I/O failed due to too many reservation conflicts.

Reservation error: SCSI reservation conflict

If you see the reservation conflict messages on all hosts accessing the LUN, this indicates that the problem is caused by the SCSI reservations for the LUN that are not completely cleaned up.

Workaround: Run the following LUN reset command from any system in the cluster to remove the SCSI reservation:

vmkfstools -L lunreset /vmfs/devices/disks/<device_name>

vCenter Server fails to open an RDM after the RDM's LUN number changes
A raw device mapping (RDM) file resides on a VMFS datastore and points to a LUN. The LUN number shows the position of the LUN within the target, and VMware does not support LUN number (position) changes within the target. When this number (or position) changes, the vml identifier (vml_ID) for the RDM file also changes, and vCenter Server fails to open the RDM that is built on that LUN. For example, you cannot disconnect VMFS datastores and reconnect them in a different order. Doing so changes the identification of the LUN so that it is no longer accessible, and vCenter Server prevents the virtual machine from powering on. The vSphere Client uses the vml_ID for backward compatibility.

Workaround: Remove the RDM and re-create it. This generates a new vml_ID that corresponds to the LUN's current position.

vSphere Client displays drive fault alerts when the drive is not present on ESX and ESXi hosts with multi-node IBM systems
On some multi-node IBM systems, the BMC firmware reports a drive fault for drive slots when no drive is present. The vSphere Client reports the Drive Fault sensor as being in an Alert state. The same faults are shown in the IBM iLOM interface.

Workaround: None. A defect report is filed with IBM to address this issue.

NAS datastores report incorrect available space
When you view the available space for an ESX/ESXi host by using the df (ESXi) or vdf (ESX) command in the host service console, the space reported for ESX/ESXi NAS datastores is free space, not available space. The space reported for NFS volumes in the Free column when you select Storage > Datastores on the vSphere Client Configuration tab, also reports free space, and not the available space. In both cases, free space can be different from available space.

ESX file systems do not distinguish between free blocks and available blocks, but always report free blocks for both block types (specifically, the f_bfree and f_bavail fields of struct statfs). For NFS volumes, free blocks and available blocks can be different.

Workaround: You can check NFS servers to get correct information regarding available space. No workarounds are available for ESX/ESXi.
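The free-versus-available distinction itself can be observed on any Linux system with GNU stat, which exposes both counters (a generic illustration, not an ESX command):

```shell
# Filesystem mode of GNU stat: %b = total blocks, %f = free blocks,
# %a = blocks available to unprivileged users. "Free" includes any
# reserved blocks; "available" does not, so the two can differ.
set -- $(stat -f -c '%b %f %a' /)
total=$1 free=$2 avail=$3
echo "free=$free available=$avail"
```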

Harmless warning messages concerning region conflicts are logged in the VMkernel logs for some IBM servers
When the SATA/IDE controller works in legacy PCI mode in the PCI config space, warning messages concerning region conflicts might appear in the VMkernel logs.

Workaround: Such error messages are harmless and can be safely ignored.

Using Storage vMotion to relocate a virtual machine back to its source volume might result in an insufficient disk space error
When you use Storage vMotion to move a virtual machine to another datastore and then back to its source volume, the vSphere Client does not immediately refresh the size of the source datastore, resulting in an error.

Workaround: Refresh the datastore in the vSphere Client. If the reported size of the datastore does not change after one attempt, wait for 30 minutes and refresh again.

vmfs-undelete utility is not available for ESX/ESXi 4.0
ESX/ESXi 3.5 Update 3 included a utility called vmfs-undelete, which could be used to recover deleted .vmdk files. This utility is not available with ESX/ESXi 4.0.

Workaround: None. Deleted .vmdk files cannot be recovered.

Cannot perform port binding in conjunction with IPv6 ports
Port binding is a mechanism of identifying certain VMkernel ports for use by the iSCSI storage stack. Port binding is necessary to enable storage multipathing policies, such as the VMware round-robin load-balancing, MRU, or fixed-path, to apply to iSCSI NIC ports and paths.
Port binding does not work in combination with IPv6. When users configure port binding, they expect to see additional paths for each bound VMkernel NIC. However, when they configure the array under an IPv6 global scope address, the additional paths are not established. Users see only the paths established on the IPv6 routable VMkernel NIC. For instance, if users have two target portals and two VMkernel NICs, they see four paths when using IPv4 but only two paths when using IPv6. Because there are no paths for failover, path policy setup does not make sense.

Workaround: Use IPv4 and port binding, or configure the storage array and the ESX/ESXi host with local scope IPv6 addresses on the same subnet (switch segment). You cannot currently use global scope IPv6 with port binding.

Supported Hardware

VMware ESXi Embedded fails to boot on HP DL385 G2 servers when BIOS uses USB 1.1 controller
VMware ESXi Embedded systems do not recognize the USB 1.1 controller on HP DL385 G2 servers. As a result, the ESXi system fails to boot. This problem always occurs on HP DL385 G2 servers when the BIOS is set to use the USB 1.1 Controller.

Workaround: During the boot phase of an ESXi Embedded system, enable the USB 2.0 controller in the BIOS settings. In some installations, this controller appears as V1.1+V2.0.

VMware ESX might fail to boot on Dell 2900 servers
If your Dell 2900 server has a version of BIOS earlier than 2.1.1, ESX VMkernel might stop responding while booting. This is due to a bug in the Dell BIOS, which is fixed in BIOS version 2.1.1.

Workaround: Upgrade BIOS to the version 2.1.1 or later.

No CIM indication alerts are received when the power supply cable and power supply unit are reinserted into HP servers
No new SEL(IML) entries are created for power supply cable and power supply unit reinsertion into HP servers when recovering a failed power supply. As a result, no CIM indication alerts are generated for these events.

Workaround: None

Core Dump fails with a timeout message
Configuring a device connected to a Perc 4/DC controller as a core dump device on which to store crash dumps in the event of a system crash can lead to timeouts and unsaved core dumps. This timeout behavior has been observed with different firmware versions on this controller (for example, 352B and 352D) and only for system crashes. No issues have been observed with I/O to the same device when the system is running.

Workaround: Do not configure a device connected to the Perc 4/DC controller as a core dump device for ESX/ESXi 4.0 systems.

Slow performance during virtual machine power-on or disk I/O on ESX/ESXi on the HP G6 platform with a P410i or P410 Smart Array controller
Hosts on this platform might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, which causes large numbers of error messages to be logged to /var/log/messages.

Some Dell BIOSes may have duplicate interrupt routing entries in the ACPI table (KB 1013804)

On certain versions of vSphere Client, the battery status might be incorrectly listed as an alert
In vSphere Client from the Hardware Status tab, when the battery is in its learn cycle, the battery status provides an alert message indicating that the health state of the battery is not good. However, the battery level is actually fine.

Workaround: None.

A "Detected Tx Hang" message appears in the VMkernel log
Under a heavy load, due to hardware errata, some variants of e1000 NICs might lock up. ESX/ESXi detects the issue and automatically resets the card. This issue is related to Tx packets, TCP workloads, and TCP Segmentation Offloading (TSO).

Workaround: You can disable TSO by setting the /adv/Net/UseHwTSO option to 0 in the esx.conf file.
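On many ESX builds the same setting can be changed from the command line instead of editing esx.conf by hand. The following is a hedged sketch that assumes the advanced option is exposed as /Net/UseHwTSO on your host; verify with the read form first:

```shell
# Sketch: disable hardware TSO through the advanced-configuration utility.
# Assumes the option path /Net/UseHwTSO (corresponding to /adv/Net/UseHwTSO
# in esx.conf); run from the service console or Tech Support Mode.
esxcfg-advcfg -g /Net/UseHwTSO    # show the current value
esxcfg-advcfg -s 0 /Net/UseHwTSO  # set it to 0 (disabled)
```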

Event messages from StoreLib of IR cards show incorrect timestamp
The IndicationTime in the event messages from the StoreLib shows an incorrect timestamp for LSI 1078 and 1068E Integrated RAID (IR) controllers.

Problems with TEAC DV-28E-V DVD drive
If an ESX/ESXi host is physically connected to a TEAC DV-28E-V DVD drive with old firmware (for example, C.AB), the virtual machine, host daemon, or ESX/ESXi host might become nonresponsive. The problem does not occur each time, and it is more likely to occur with Windows virtual machines.

Workaround: Upgrade the DVD drive firmware to the latest version or replace the DVD drive with a different model.

Upgrade and Installation

vSphere Client installation might fail with Error 1603 if you do not have an active Internet connection
You can install the vSphere Client in two ways: from the vCenter Server media or by clicking a link on the ESX, ESXi, or vCenter Server Welcome screen. The installer on the vCenter Server media (.iso file or .zip file) is self-contained, including a full .NET installer in addition to the vSphere Client installer. The installer called through the Welcome screen includes a vSphere Client installer that makes a call to the Web to get .NET installer components.

If you do not have an Internet connection, the second vSphere Client installation method will fail with Error 1603 unless you already have .NET 3.0 SP1 installed on your system.

Workaround: Establish an Internet connection before attempting the download, install the vSphere Client from the vCenter Server media, or install .NET 3.0 SP1 before clicking the link on the Welcome screen.

Opening performance charts after an upgrade results in an error message
After you perform an upgrade using the Microsoft SQL Express edition database, the vSphere Client displays the error message Perf Charts service experienced an internal error when you open performance charts. This happens because the installer does not restart the database service after making changes in the database settings.

Workaround: Perform the following steps:

Stop the VMware VirtualCenter Server service in Windows.

Restart the database service.

Start the VMware VirtualCenter Server service.

Open a new vSphere Client instance and log into vCenter Server.
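The service stop/start steps above can be sketched as Windows commands. The service names used here are assumptions (vpxd for the VMware VirtualCenter Server service and MSSQL$SQLEXP_VIM for the bundled SQL Server Express instance); confirm the actual names in the Services control panel before running:

```shell
:: Sketch only; service names are assumptions -- check services.msc.
net stop vpxd
net stop "MSSQL$SQLEXP_VIM"
net start "MSSQL$SQLEXP_VIM"
net start vpxd
```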

The vCenter Server system's Database Upgrade wizard might overestimate the disk space requirement during an upgrade from VirtualCenter 2.0.x to vCenter Server 4.0
During the upgrade of VirtualCenter 2.0.x to vCenter Server 4.0, the Database Upgrade wizard can show an incorrect value in the database disk space estimation. The estimation shown is typically higher than the actual space required.

Workaround: None

If SQL Native Client is already installed, you cannot install vCenter with the bundled SQL Server 2005 Express database
When you are installing vCenter with the bundled SQL Server 2005 Express database, if SQL Native Client is already installed, the installation fails with the following error message:

An Installation package for the product Microsoft SQL Native Client cannot be found. Try the installation using a valid copy of the installation package sqlcli.msi.

Workaround: Uninstall SQL Native Client if it is not used by another application. Then, install vCenter with the bundled SQL Server 2005 Express database.

The vmxnet driver is not installed automatically when you install or upgrade VMware Tools
When you install or upgrade VMware Tools on a virtual machine running the Windows NT guest operating system, the vmxnet driver is not installed automatically.

Workaround: Install the vmxnet driver manually. To do this, perform the following steps:

Log in to the virtual machine and right-click Network Neighborhood.

Click Properties and select the Adapters tab.

Click Have Disk and enter the path to the driver: C:\Program Files\VMware\VMware Tools\Drivers\vmxnet\

Reboot the virtual machine.

Cannot reinstall or uninstall product after terminating the uninstallation of vSphere Client 4.0
If vSphere Client installation is interrupted, a subsequent installation or uninstallation of the vSphere Client 4.0 results in the following error message:

Workaround: Navigate to the installation directory and delete the Virtual Infrastructure Client directory.

A minimum of 650MB of free space on the boot drive is required to install vCenter Server
Although vCenter Server itself does not need to be installed on the boot drive, some required components must be installed on the boot drive. 650MB of free space is required at installation time to accommodate these required components as well as temporary files used during the installation.

Workaround: Ensure that you have at least 650MB of free space on the boot drive before installing vCenter Server.

vSphere Client 4.0 download times out with an error message when you connect VI Client 2.0.x on a Windows 2003 machine to vCenter Server or an ESX/ESXi host
If you connect a VI Client 2.0.x instance to vCenter Server 4.0 or an ESX/ESXi 4.0 host, vSphere Client 4.0 is automatically downloaded onto the Windows machine where the VI Client resides. This operation relies on Internet Explorer to perform the download. By default, Internet Explorer on Windows 2003 systems blocks the download.

ESX/ESXi installation might fail on IBM x336 machines due to BIOS incompatibility
On some IBM x336 machines, the ESX/ESXi installation process might stop. This is caused by a bug in the machine's BIOS.

Workaround: Update the machine's BIOS to version 1.15 prior to installing ESX or ESXi Installable.

vCenter Server database upgrade fails for Oracle 10gR2 database with certain user privileges
If you upgrade VirtualCenter Server 2.x to vCenter Server version 4.0 and you have connect, create view, create any sequence, create any table, and execute on dbms_lock privileges on the database (Oracle 10gR2), the database upgrade fails. The VCDatabaseUpgrade.log file shows the following error:

Workaround: As database administrator, enlarge the user tablespace or grant the unlimited tablespace privilege to the user who performs the upgrade.

vCenter Server installation fails on Windows Server 2008 when using a nonsystem user account
When you specify a non-system user during installation, vCenter Server installation fails with the following error message:

Failure to create vCenter repository

Workaround: On the system where vCenter Server is being installed, turn off the User Account Control option under Control Panel > User Accounts before you install vCenter Server. Specify the non-system user during vCenter Server installation.

Incompatible legacy plug-ins appear as enabled in the vSphere Plug-in Manager after upgrading to vCenter Server 4.0
If you have VirtualCenter 2.5 installed with VMware Update Manager 1.0 or VMware Converter Enterprise for VirtualCenter 2.5, and you upgrade to vCenter Server 4.0, the legacy plug-ins appear as installed and enabled in the vSphere Client Plug-in Manager. However, earlier versions of the plug-in modules are not compatible with vCenter Server 4.0. In such cases, the plug-ins might be available, but are not functional.

Cannot log in to VirtualCenter Server 2.5 after installing VI Client 2.0.x, 2.5, and vSphere Client 4.0 and then uninstalling VI Client 2.0.x on a Windows Vista system
After you uninstall the VI Client 2.0.x on a Windows Vista machine where the VI Client 2.0.x, 2.5, and the vSphere Client 4.0 coexist, you cannot log in to vCenter Server 2.5. Log-in fails with the following message:

Class not registered(Exception from HRESULT:0x80040154(REGDB_E_CLASSNOTREG))

Workaround: Disable the User Account Control setting on the system where VI Client 2.0.x, 2.5, and vSphere Client 4.0 coexist, or uninstall and reinstall VI Client 2.5.

The ESX/ESXi installer lists local SAS storage devices in the Remote Storage section
When displaying storage locations for ESX or ESXi Installable to be installed on, the installer lists a local SAS storage device in the Remote Storage section. This happens because ESX/ESXi cannot determine whether the SAS storage device is local or remote and always treats it as remote.

Workaround: None

During installation, VMware ESXi Installable does not automatically create a VMFS partition on certain local SAS hardware storage devices
By default, VMware ESXi partitions occupy the remainder of the installation media disk to create a VMFS volume. Certain SAS hardware storage devices, although installed directly on the host, present themselves as remote storage devices. As a result, ESXi Installable does not create the VMFS volume on this type of device. After the installation process, the remainder of the SAS storage device remains empty.

Workaround: Create a VMFS datastore manually using vSphere Client by completing the following steps:

Open the vSphere Client and select the ESXi Installable host from the inventory.

Click Storage on the Configuration tab.

Click Add Storage and step through the Add Storage Wizard to create a VMFS partition on the SAS storage volume.

If vSphere Host Update Utility loses its network connection to the ESX host, the host upgrade might not work
If you use vSphere Host Update Utility to perform an ESX/ESXi host upgrade and the utility loses its network connection to the host, the host might not be completely upgraded. When this happens, the utility might stop, or you might see the following error message:

Failed to run compatibility check on the host.

Workaround: Close the utility, fix the network connection, restart the utility, and rerun the upgrade.

Workaround: Delete libeay32.dll and ssleay32.dll located at the following path: C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher
Alternatively, uninstall VI Client version 2.5.

When vSphere Client 4.0 and VI Client 2.5 are installed on the same system, depending on the order in which you uninstall the applications, the desktop shortcut is not updated
If you install the vSphere Client 4.0 application on a system that includes an instance of the VI Client 2.5 application, only the vSphere Client 4.0 desktop shortcut appears on the desktop. You can launch both applications from the shortcut.

However, if you uninstall the vSphere Client 4.0 application but do not uninstall the VI Client 2.5 application, the vSphere Client 4.0 desktop shortcut remains on the system. You can continue to use the shortcut to log in to VI Client 2.5, but if you attempt to log in to vSphere Client 4.0, you are prompted to download the application.

Workaround: Perform one of the following steps:

If you uninstall only the vSphere Client 4.0 application, rename the desktop shortcut or reinstall the VI Client 2.5 application so that the link correctly reflects the installed client.

If you uninstall both applications, remove any nonworking shortcuts.

vCenter Server installation on Windows Server 2008 with a remote SQL Server database fails in some circumstances
If you install vCenter Server on Windows 2008, using a remote SQL Server database with Windows authentication for SQL Server, and a domain user for the DSN that is different from the vCenter Server system login, the installation does not proceed and the installer displays the following error message:

25003.Setup failed to create the vCenter repository

Workaround: In these circumstances, use the same login credentials for vCenter Server and for the SQL Server DSN.

Upgrading the hardware version of Windows virtual machines might require driver updates
Upgrading the hardware version of a Windows virtual machine to hardware version 7 on an ESX 4.0 host causes the Flexible network adapter to incorrectly use the AMD PCNet Family PCI Ethernet Adapter (pcnetpci5.sys) driver, which limits it to 10Mbps. The correct driver is the VMware Accelerated AMD PCNet Adapter (vmxnet.sys) driver.

Workaround: Manually update the driver for the Flexible NIC to the VMware Accelerated AMD PCNet Adapter (vmxnet.sys) driver by pointing to C:\Program Files\VMware\VMware Tools\Drivers\vmxnet\vmware-nic.inf from within the Windows guest.

The Next run time value for some scheduled tasks is not preserved after you upgrade from VirtualCenter 2.0.2.x to vCenter Server 4.0
If you upgrade from VirtualCenter 2.0.2.x to vCenter Server 4.0, the Next run time value for some scheduled tasks might not be preserved and the tasks might run unexpectedly. For example, if a task is scheduled to run at 10:00 am every day, it might run at 11:30 am after the upgrade.

This problem occurs because of differences in the way that VirtualCenter 2.0.2.x and vCenter Server 4.0 calculate the next run time. You see this behavior only when the following conditions exist:

You have scheduled tasks, for which you edited the run time after the tasks were initially scheduled so that they now have a different Next run time.

The newly scheduled Next run time has not yet occurred.

Workaround: Perform one of the following actions:

Wait for the tasks to run at their scheduled Next run time before upgrading.

After you upgrade from vCenter 2.0.x to vCenter Server 4.0, edit and save the scheduled task. This process recalculates the Next run time of the task to its correct value.

ESX Server 2.5 virtual machines with non-persistent disks upgraded to ESX 4.0 might enter a suspended state
When you upgrade the virtual hardware of an ESX Server 2.5 virtual machine with non-persistent disks (identifiable by config version = 6, hardware version = 3) in ESX 4.0, the virtual machine will be incorrectly set to autorevert. If you take snapshots on this virtual machine (identifiable by config version = 8, hardware version = 7) in ESX 4.0, the virtual machine might enter a suspended state while reconfiguring its virtual hardware devices in the powered-off state.

Workaround: Remove the entry snapshot.action = "autoRevert" from the configuration file manually after you upgrade the virtual machine.

Default alarms new with vCenter Server 4.0 are not added to the system during upgrade
When upgrading to vCenter Server 4.0, the default alarms that are new with 4.0 are not added to the system. The following is a list of missing default alarms:

Installation or upgrade of vCenter Server 4.0 might fail with disk space error
During installation of vCenter Server 4.0, when you provide the amount of free space estimated by the installer, the installation might fail and issue a Not enough disk space error message. As a result, you might have to rerun the installation.

Workaround: Provide at least 1GB of free space in addition to the amount recommended by the installer.

Virtual machine hardware upgrades from version 4 to version 7 cause Solaris guests to lose their network settings
Upgrading virtual machine hardware from version 4 to version 7 changes the PCI bus location of virtual network adapters in the guest. Solaris treats the adapters as new devices and changes the numbering of its network interfaces (for example, e1000g0 becomes e1000g1). Because Solaris IP settings are associated with interface names, the network settings appear to be lost and the guest is likely not to have proper connectivity.

Workaround: Determine the new interface names after the virtual machine hardware upgrade by using the prtconf -D command, and then rename all the old configuration files to their new names. For example, e1000g0 might become e1000g1, so every /etc/*e1000g0 file should be renamed to its /etc/*e1000g1 equivalent.
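The rename step can be scripted. The following minimal sketch runs against a temporary directory rather than the guest's real /etc, and the file names (hostname.e1000g0, dhcp.e1000g0) and new interface name are assumptions; use the names actually reported by prtconf -D on your guest:

```shell
# Demo of the bulk rename in a temporary directory (hypothetical file names).
etc=$(mktemp -d)
touch "$etc/hostname.e1000g0" "$etc/dhcp.e1000g0"

old=e1000g0   # interface name before the hardware upgrade
new=e1000g1   # interface name reported by prtconf -D after the upgrade

# Rename every /etc/*e1000g0 file to its *e1000g1 equivalent.
for f in "$etc"/*"$old"; do
    mv "$f" "${f%"$old"}$new"
done

ls "$etc"   # now lists dhcp.e1000g1 and hostname.e1000g1
```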

The vCenter Server installer cannot detect service ports if the services are not running
When you install vCenter Server and accept the default ports, if those ports are being used by services that are not running, the installer cannot validate the ports. The installation fails, and an error message might appear, depending on which port is in use.

This problem does not affect IIS services. IIS services are correctly validated, regardless of whether the services are running.

Workaround: Verify which ports are being used for services that are not running before beginning the installation and avoid using those ports.

vCenter Server installer reports incorrect warning message during an installation or upgrade
During installation or upgrade, the vCenter Server installer reports a warning message prompting you to enable TCP/IP and named pipes for remote connections. This message is reported if you use a local SQL Server database and enter a server name other than (local) or "." when you create the DSN.

Workaround: Ignore the warning and click OK to continue the installation or upgrade.

vCenter Server service might not start if vCenter Server is installed as a local system account on a local Microsoft SQL Server database with Integrated Windows NT Authentication
If you install an instance of vCenter Server as a local system account on a local SQL Server database with Integrated Windows NT Authentication and then add an Integrated Windows NT Authentication user to the local database server with the same default database as vCenter Server, vCenter Server might not start.

Workaround: Remove the Integrated Windows NT Authentication user from the local SQL database server. Alternatively, change the default database for the local system user account to the vCenter Server database for the SQL Server user account setup.

Updated: Upgrades where two versions of ESXi co-exist on the same machine fail
Running two versions of ESXi on the same machine is not supported. You must remove one of the versions. The following workarounds apply to the possible combinations of two ESXi versions on the same machine.

Workarounds:

If ESXi Embedded and ESXi Installable are on the same machine and you choose to remove ESXi Installable and only use ESXi Embedded, follow the steps below.

Make sure you can boot the machine from the ESXi Embedded USB flash device.

Copy the virtual machines from the ESXi Installable VMFS datastore to the ESXi Embedded VMFS datastore.
This is a best practice to prevent loss of data.

Remove all partitions except for the VMFS partition on the disk with ESXi Installable installed.

Reboot the machine and configure the boot setting to boot from the USB flash device.

If ESXi Embedded and ESXi Installable are on the same machine and you choose to remove ESXi Embedded and only use ESXi Installable, follow the steps below.

Boot the system from ESXi Installable.

Reboot the machine and configure the boot setting to boot from the hard disk where you installed ESXi rather than the USB disk.

If you can remove the ESXi Embedded USB device, remove it. If the USB device is internal, clear or overwrite the USB partitions.

If two versions of ESXi Embedded or two versions of ESXi Installable are on the same machine, remove one of the installations.

The vihostupdate command can fail on ESXi 4.0 hosts for which the scratch directory is not configured
Depending on the configuration of the scratch directory, some bundles, for example the ESXi 4.0 Update 1 bundle, might be too large for an ESXi 4.0 host. In such cases, when you perform an installation with vihostupdate, if the scratch directory is not configured to use disk-backed storage, the installation fails.

Workaround: You can change the configuration of the scratch directory by using the VMware vSphere Client or the VMware Update Manager. The following steps illustrate the use of the client.

Check the configuration of the scratch directory.
The following is the navigation path in the vSphere Client: Configuration > Advanced Settings > ScratchConfig

For an ESXi host the following applies:

When the scratch directory is set to /tmp/scratch, the size of the bundle is limited. For example, you can apply a patch bundle of 163 MB, but you cannot apply an update bundle, such as an ESXi 4.0 update bundle of 281 MB.

When the scratch directory is set to the VMFS volume path, /<vmfs-volume-path>, you can apply bundles as large as an ESXi 4.0 bundle of 281 MB.

Change the scratch directory to the appropriate setting using vSphere Client.
The following is the navigation path in the vSphere Client: Configuration > Advanced Settings > ScratchConfig.

Reboot the ESXi host for the edited settings to take effect.

Issue the vihostupdate.pl command to install the bundle.
For example, you can issue a command such as the following, replacing the place holders as appropriate:
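The example command itself is missing from these notes; the following is a hedged reconstruction using standard vSphere CLI options, where the host name, user, and bundle path are placeholders you must substitute:

```shell
# Hypothetical example; <esxi-host> and the bundle path are placeholders.
vihostupdate.pl --server <esxi-host> --username root \
    --install --bundle /path/to/ESXi-4.0-update-bundle.zip
```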

Patch installation using vihostupdate fails on ESXi hosts for file sizes above 256MB
Patch installation fails on an ESXi 4.0 host if you use the vihostupdate command on a server that does not have a scratch directory configured and the downloaded file size is above 256MB. The installation usually fails on a host machine that does not have an associated LUN, such as an ESXi 4.0 installation on a Fibre Channel or Serial Attached SCSI (SAS) disk.
Verify the scratch directory settings on the ESXi server and make sure that the scratch directory is configured. When the ESXi server boots initially, the system tries to configure the scratch directory automatically. If storage is not available for the scratch directory, the scratch directory is not configured and points to a temporary directory.

Workaround: To work around the limitation on single file size, configure a scratch directory on a VMFS volume using the vSphere Client. To configure the scratch directory:

Connect to the host with vSphere Client.

Select the host in the Inventory.

Click the Configuration tab.

Select Advanced Settings from the Software settings list.

Find ScratchConfig in the parameters list and set the value for ScratchConfig.ConfiguredScratchLocation to a directory on a VMFS volume connected to the host.

Click OK.

Reboot the host machine to apply your changes to the host.
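For hosts managed over the network, the same ScratchConfig change can typically be made with the vSphere CLI instead of the client GUI. A sketch, with placeholder host and datastore names:

```shell
# Sketch; <esxi-host> and <datastore> are placeholders. A reboot is still
# required afterwards for the new scratch location to take effect.
vicfg-advcfg --server <esxi-host> --username root \
    -s "/vmfs/volumes/<datastore>/.scratch" ScratchConfig.ConfiguredScratchLocation
```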

When ESXi 3.5 is upgraded to ESXi 4.0 Update 1, the esxupdate query command does not show the installed bulletins
Bulletins are installed as part of the upgrade from ESXi 3.5 to ESXi 4.0 Update 1. However, after the upgrade, the esxupdate query command does not list any bulletins.

Workaround: The issue does not affect the core functionality of the host. No workaround.

The WS-Management service is not started automatically on a host upgraded from ESXi 3.5.x to ESXi 4.0 Update 1
This upgrade can prevent the WS-Management (wsman) service from starting automatically. You can observe this by issuing the status command: /etc/init.d/wsman status

Workaround:

Start the wsman service from Tech Support Mode.
See KB 1003677 for information on using Tech Support Mode. The following serves as an example of how to start this service: /etc/init.d/wsman start

Check the status of the wsman service to ensure that it is running.
For example: /etc/init.d/wsman status

Rename the WS-Management service entry from wsmand to wsman in the /etc/chkconfig.db file to preserve the change across reboots.
The entry name should match the name of the service's start script, for example: /etc/init.d/wsman
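The rename amounts to a one-line edit of /etc/chkconfig.db. A minimal sketch, run here against a temporary copy with invented contents rather than the real file; on the host, edit /etc/chkconfig.db itself from Tech Support Mode:

```shell
# Demo against a temporary copy; the two entries below are invented.
db=$(mktemp)
printf 'hostd on\nwsmand on\n' > "$db"

# Rename the wsmand entry to wsman so the service autostarts after reboot.
sed -i 's/^wsmand/wsman/' "$db"

cat "$db"
```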

vCenter Server and vSphere Client

Using the vSphere Client Delete All option to remove virtual machine snapshots might leave the snapshot disk files in the virtual machine folder
This behavior occurs only when you previously used the snapshot to create linked clones and then deleted them from the vCenter Server. If you now attempt to remove the snapshot using the Snapshot Manager's Delete All option, the snapshot is deleted. However, the snapshot disk is not consolidated with the parent disk and is left undeleted in the virtual machine folder.

Workaround: Instead of using the "Delete All" option, use the "Delete" option to remove the snapshot.

The Hardware Status tab does not display the ESX/ESXi host's hardware status information
The Hardware Status tab in the vSphere Client does not populate the hardware status of the selected ESX/ESXi host when the vCenter Server machine or ESX/ESXi host are running in pure IPv6 mode.

Workaround: Add and configure the IPv4 interface for both the vCenter Server machine and the ESX/ESXi host.

If a system has virtual network adapters, Guided Consolidation might compute a larger number of NICs for that system than the number of physical NICs
The number of NICs for a system computed by Guided Consolidation can be larger than the number of physical NICs for the system if the system has virtual network adapters. In this case, you might get the following warning during the Plan Consolidation phase: "Host does not have the desired number of VM networks. A consolidation will result in the mapping of multiple networks of the physical computer to a single VM network." This happens for any machine with virtual NICs (for example, for any virtual machine and any (physical or virtual) machine running VMware Workstation or other hosted virtualization platform).

Workaround: No workaround needed. You can ignore the warning.

Refresh delay for powered-on Virtual Machines field on the vSphere Client resource pool summary page
The resource pool summary page in the vSphere Client displays the number of powered-on virtual machines for the selected resource pool and all of its nested resource pools. The information in the Powered on Virtual Machines field might be out of date at any given moment because the field is refreshed periodically (usually in less than two minutes), not when a change occurs.

Workaround: Count the number of powered on virtual machines by viewing the virtual machine list or by expanding the inventory tree.

The vSphere Client might take longer than expected to display newly installed extensions in the list of installed extensions
After the installation of extensions is finished, 30-60 seconds pass before newly installed extensions appear in the list of installed extensions.

Workaround: Restart the vSphere Client.

Guided Consolidation cannot import systems that are running vCenter Converter
Guided Consolidation import operations encounter a problem when the source system (the imported system) is running vCenter Converter. Guided Consolidation imports the system and attempts to uninstall vCenter Converter from the source system. The import operation succeeds but the following error is displayed when Guided Consolidation attempts to uninstall vCenter Converter:

VMware Converter Agent Install failed.

Workaround: Uninstall vCenter Converter from source systems before attempting to import them using Guided Consolidation.

For VMDirectPath Gen I, sharing a dual-function QLogic 2532 adapter between a virtual machine and either another virtual machine or the VMkernel might result in data corruption
When you configure a dual-function QLogic 2532 adapter for VMDirectPath IO, and assign the first PCI function to a virtual machine and the second to either the VMkernel or another virtual machine, data corruption might occur. This happens because both ports use the same credentials to log in to the fabric and have the same storage visibility. VMware does not support this configuration for VMDirectPath IO.

Workaround: If you cannot avoid sharing the dual-function adapter between a virtual machine and the VMkernel, assign the first PCI function to the virtual machine and the second to the VMkernel. The PCI functions cannot be split between two virtual machines.

The vSphere Client does not update sensors that are associated with physical events
The vSphere Client does not always update sensor status. Some events can trigger an update, such as a bad power supply or the removal of a redundant disk. Other events, such as chassis intrusion and fan removal, might not trigger an update to the sensor status.

Workaround: None

In the vSphere Client, clicking Close Tab on the Getting Started tab for certain objects (cluster, host, virtual machine) does not result in any action
In the vSphere Client, clicking Close Tab [x] on the Getting Started tab for certain objects (cluster, host, virtual machine) does not result in any action. This issue occurs only if the vSphere Client is running on a machine whose operating system is configured to disable JavaScript.

Workaround: None

The Overview performance charts do not display when vCenter Server uses an Oracle database
You view the performance charts through the Overview view of the Performance tab. If your vCenter Server uses an Oracle database, the charts do not appear when you open this view. Instead, the following error message appears:

Overwrite the VMware placeholder file with the ojdbc5.jar file you downloaded. By default, this file is installed in the C:\Program Files\VMware\Infrastructure\tomcat\lib directory.

Restart vCenter Server Web Service.

Alarms with health status trigger conditions are not migrated to vSphere 4.0
The vSphere 4.0 alarm triggers functionality has been enhanced to contain additional alarm triggers for host health status. In the process, the generic Host Health State trigger was removed. As a result, alarms that contained this trigger are no longer available in vSphere 4.0.

Workaround:
Use the vSphere Client to recreate the alarms. You can use any of the following preconfigured VMware alarms to monitor host health state:

Host battery status

Host hardware fan status

Host hardware power status

Host hardware temperature status

Host hardware system board status

Host hardware voltage

Host memory status

Host processor status

Host storage status

If the preconfigured alarms do not handle the health state you want to monitor, you can create a custom host alarm that uses the Hardware Health changed event trigger. You must manually define the conditions that trigger this event alarm. In addition, you must manually set which action occurs when the alarm triggers.

Note: The preconfigured alarms already have default trigger conditions defined for them. You only need to set which action occurs when the alarm triggers.

vCenter Server allows addition of the same ESX/ESXi system twice with two different IPv6 addresses
If you add an ESX/ESXi system to the vCenter inventory, and if that system is already managed by vCenter under a different IP address, the vCenter Server does not detect the problem.
The ESX/ESXi system appears in the inventory with a new IP address, with status disconnected. Connections to the ESX/ESXi system that use the old IP address remain active.

Workaround: Do not add the same ESX/ESXi system twice.

Adapter Type drop-down menu missing vmxnet3 option on virtual machine running SUSE Enterprise Linux
A virtual machine running SLES 10 or SLES 11 for which SLES is selected as the guest operating system type does not include vmxnet3 in the Adapter Type drop-down menu. The problem is most likely to occur in virtual machines that are migrated from ESX Server 3.x to ESX 4.x, but it might also occur in other circumstances.

Workaround: The vmxnet3 option becomes available if you change the guest operating system type from SLES to SLES10 or SLES11.

Power off the virtual machine.

Right-click the virtual machine and select Edit Settings.

In the Options tab, click General Options.

In the Version field, select either SLES10 or SLES11.

Virtual machines disappear from the virtual switch diagram in the Networking View for host configuration
In the vSphere Client Networking tab for a host, virtual machines are represented in the virtual switch diagram. If you select another host and then return to the Networking tab of the first host, the virtual machines might disappear from the virtual switch diagram.

Workaround: Select a different view in the Configuration tab, such as Network Adapters, Storage, or Storage Adapters, and return to the Networking tab.

If you change the HTTPS port number in the SFCB configuration file (sfcb.cfg) to a port other than the default and restart the SFCB (CIM) server, the health status of the ESX/ESXi host server components does not appear on the Hardware Status tab
This behavior is also seen if you log directly in to an ESX/ESXi host and click the Configuration tab to view the health status. Status information for the server components does not appear.

This problem occurs because vCenter Server and the SFCB server are communicating on different ports.

Workaround: Make sure that the SFCB server communicates only through the default port.
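If you need to confirm that the SFCB server is using the default port, the relevant entries in the SFCB configuration file look like the following sketch. The file location and the default CIM-over-HTTPS port 5989 are assumptions based on a standard installation; verify them on your host.

```
# /etc/sfcb/sfcb.cfg -- default HTTPS port expected by vCenter Server
enableHttps:    true
httpsPort:      5989
```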

Starting or stopping the vctomcat Web service at the Windows command prompt might result in an error message
On Windows operating systems, if you use the net start and net stop commands to start and stop the vctomcat Web service, the following error message might appear:

The service is not responding to the control function.
More help is available by typing NET HELPMSG 2186.

Workaround: You can ignore this error message. If you want to stop the error message from occurring, modify the registry to increase the default timeout value for the service control manager (SCM).
For more information, see the following Microsoft KB article: http://support.microsoft.com/kb/922918.
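The KB article describes increasing the timeout through the ServicesPipeTimeout registry value. A hedged sketch of the change follows; the 60-second value (60000 ms, 0xEA60) is only an example, and a reboot is required for the new timeout to take effect.

```
Windows Registry Editor Version 5.00

; Increase the service control manager timeout to 60000 milliseconds (example value).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
"ServicesPipeTimeout"=dword:0000ea60
```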

Overview performance charts do not display after upgrade from vCenter Server 2.5 with SQL Express bundled database
If you upgrade from vCenter Server 2.5 to vCenter Server 4.0 with the bundled SQL Express database, the Overview performance charts do not display. When you open the Overview view of the Performance tab, the following error appears:

STATs Report service internal error

This error occurs because the vCenter Server upgrade tool cannot reconfigure the existing database. You must manually perform the configuration.

Error message appears if you add a second virtual disk to a virtual machine
Suppose you create a virtual machine with default options by using Web Access connected to ESX/ESXi 4.0. If you then connect from vSphere Web Access to the vCenter Server that manages the ESX/ESXi host and add a second virtual disk to the same virtual machine with the option Create a New Virtual Disk, the error The specified file already exists on the server appears.

Workaround: Use the vSphere Client to connect to vCenter Server and add a second virtual disk to the virtual machine.

The vc-support command uses a 64-bit DSN application and cannot gather data from the vCenter Server database
When you use the VMware cscript vc-support.wsf command to retrieve data from the vCenter Server database, the default Microsoft cscript.exe application is used. This application is configured to use a 64-bit DSN rather than a 32-bit DSN, which is required by the vCenter Server database. As a result, errors occur and you cannot retrieve the data.

Workaround: At a system prompt, run the vc-support.wsf command with the 32-bit DSN cscript.exe application:

%windir%\SysWOW64\cscript.exe vc-support.wsf

The vSphere Client Roles menu does not display role assignments for all vCenter Server systems in a Linked Mode group
When you create a role on a remote vCenter Server system in a Linked Mode group, the changes you make are propagated to all other vCenter Server systems in the group.
However, the role appears as assigned only on the systems that have permissions associated with the role. If you remove a role, the operation only checks the status
of the role on the currently selected vCenter Server system. However, it removes the role from all vCenter Server systems in the Linked Mode group without issuing a warning that the role might be in use on the other servers.

Workaround: Before you delete a role from a vCenter Server system, ensure that the role is not being used on other vCenter Server systems. To see whether a role is in use, go to the Roles view and use the navigation bar to select each vCenter Server system in the group. The role's usage is displayed for the selected vCenter Server system.

Snapshot deletion and virtual machine hot clone operations might take a long time when the virtual machine runs a heavy workload
Deleting a snapshot or cloning a powered-on virtual machine might take a long time to complete when the virtual machine is running a heavy input/output workload. For example, when the virtual machine writes to its local disks, the input/output load is very heavy.

Workaround: Avoid these operations while the virtual machine is writing to its local disks or generating other heavy input/output workloads. This reduces the completion time.

Joining a Linked mode group after installation is unsuccessful if UAC is enabled on Windows Server 2008
When User Account Control (UAC) is enabled on Windows Server 2008 32- or 64-bit operating systems and you try to join a machine to a Linked Mode group on a system that is
already running vCenter Server, the link completes without any errors, but it is unsuccessful. Only one vCenter Server appears in the inventory list.

Workaround: Complete the following procedures.

After installation, perform the following steps to turn off UAC before joining a Linked Mode group:

Deselect Use User Account Control (UAC) to help protect your computer and click OK.

Reboot the machine when prompted.

Removing a virtual machine's virtual switch that is being used might result in an error message
If you try to remove a virtual switch that a powered-on virtual machine is using, an error message appears. This warning alerts you that the virtual switch is in use and cannot be removed. Removing a virtual switch in such cases might cause the virtual machine to become unusable.

Workaround: Do not remove a virtual switch that is in use.

Search in the vSphere Client does not support the colon character
Queries in the advanced search form that contain a colon result in a Bad Request error.

Workaround: Modify the query to remove unsupported characters. For example, replace the colon character in the search text with a space or an asterisk.

The vCenter Server Resource Manager does not update the host tree after a migration in a cluster that has neither DRS nor HA enabled
If you use vMotion to power on or migrate a virtual machine on clusters that have neither HA nor DRS enabled, the operations might fail with one of the following messages:

The host does not have sufficient memory resources to satisfy the reservation.

The host does not have sufficient CPU resources to satisfy the reservation.

This problem occurs even when the host appears to have sufficient capacity available and occurs only if both hosts are in the same cluster.

When vMotion is used to migrate a virtual machine to a host or power it on under the new host, vCenter Server assesses whether the host has sufficient unreserved resources to meet the requirements
of the virtual machine. However, the internal data structures used for this assessment are not updated when you use vMotion to migrate a virtual machine from a source host to a destination host in a cluster after having
powered on the virtual machine on the source host. Any future admission control calculations for the source host consider the reservation of this virtual machine, even though it is no longer running on the host.
This behavior might cause power-on and vMotion operations that target the source host to fail.

Note: These failures are reported as the following faults in the log file:

vim.fault.InsufficientHostCpuCapacityFault

vim.fault.InsufficientHostMemoryCpuCapacityFault

Workaround: Reconfigure the virtual machine's reservation, or power the machine on or off. These actions force vCenter Server to update its internal data structures.

You cannot re-display the toolbar in the Reports view of the Storage Views tab after you hide it
The Reports view of the Storage Views tab has a toolbar that contains an object filter menu and a search field. These controls enable you to filter the reports tables based on object type, storage attributes, and keywords. If you hide the toolbar by selecting Hide from the toolbar's right-click menu, there is no mechanism to re-display it.

Workaround: Close and reopen the vSphere Client.

Joining two vCenter Server instances fails with an error message in status.txt about failure to remove VMwareVCMSDS
Joining an existing standalone vCenter Server instance to a Linked Mode group causes the vCenter Server installer to fail. When this happens, vCenter Server does not start on the machine where you are performing the installation. Also, messages indicating problems with LDAP connectivity or the LDAP service being unreachable are written to the <TEMP>/status.txt file, where <TEMP> is the temporary directory defined on your Windows system. To diagnose this problem, open the status.txt file and look for the following message: [2009-03-06 21:44:55 SEVERE] Operation "Join instance VMwareVCMSDS" failed: : Action: Join Instance
Action: Removal of standalone instance
Action: Remove Instance
Problem: Removal of instance VMwareVCMSDS failed: The removal wizard was not able to remove all of the components. To complete removal, run "Adamuninstall.exe /i:<instance>" after resolving the following error:

Folder '<vCenter Server installation directory>\VMwareVCMSDS' could not be deleted.
The directory is not empty.

Workaround: Perform the following steps:

From a command prompt with administrator-level privileges, change directories to the vCenter Server installation directory.

Networking problems and errors might occur when analyzing machines with VMware Guided Consolidation
When a large number of machines are under analysis for Guided Consolidation, the vCenter Collector Provider Services component of Guided Consolidation might be mistaken for a virus or worm by the operating system on which the Guided Consolidation functionality is installed. This occurs when the analysis operation encounters a large number of machines that have invalid IP addresses or name resolution issues. As a result, a bottleneck occurs in the network and error messages appear.

Workaround: Do not add machines for analysis if they are unreachable. If you add machines by name, make sure the NetBIOS name is resolvable and reachable. If you add machines by IP address, make sure the IP address is static.

Multiple SSL warning messages appear when vCenter Server systems are joined in a Linked Mode group
If multiple vCenter Server systems are joined in a Linked Mode group and do not use SSL certificates for authentication, multiple SSL warnings might be displayed in the vSphere Client when you log in.

Workaround: Address each warning individually by selecting the Always ignore this certificate option on each host. To prevent the warnings, configure the vCenter Server systems to use SSL certificates.

vSphere Client displays inaccurate information in the General section of the Summary tab for hosts
Under heavy load, the right-hand panel in the vSphere Client might fail to refresh, and displays inaccurate information in the General section.

Workaround: Refresh the vSphere Client manually by selecting a different host, and then selecting the first host again.

When you run the Linked Mode Configuration Wizard after linking a vCenter Server system to a group in a pure IPv6 environment, there is no option to isolate the vCenter Server system from Linked Mode
If vCenter Server is linked in a pure IPv6 environment (Windows 2008 x32 or Windows 2008 x64) and you invoke the Linked Mode Configuration Wizard, the Join vCenter Server instance to existing Linked Mode group or another instance option is enabled. There is no option to isolate the vCenter Server system from Linked Mode.

Workaround: Configure the vCenter Server system with mixed mode (IPv4 and IPv6) and invoke the Linked Mode Configuration Wizard: Start > VMware > Linked Mode Configuration Wizard. The Join option is disabled and the Isolate this vCenter server instance from linked mode group option is enabled.

Note: If you configure a vCenter Server system with mixed mode, the domain controller must also be in mixed mode. If you do not want to use mixed mode in your IPv6 environment, you must uninstall vCenter Server to isolate the system from Linked Mode.

For large vCenter Server inventories, when you open the vSphere Client in Linked Mode with the inventories of all vCenter Server systems fully expanded, the vSphere Client might be nonresponsive for several minutes
Fully expanded vSphere Client inventories are those with clusters and datacenters expanded. If you close the vSphere Client after fully expanding the inventories, the next time you open it, the expanded inventory view is loaded. As a result, the vSphere Client might become nonresponsive for several minutes, depending on the number of vCenter Server systems and the number of objects in each vCenter Server system's inventory. The vSphere Client will start responding after it loads all inventory objects.

Workaround: As a best practice, do not expand the nodes of every vCenter Server system in the inventory of a Linked Mode group. Collapse the nodes before you close the vSphere Client to avoid loading the expanded nodes at start-up.

Some alarms remain in a triggered state after the virtual machine is migrated using vMotion
Metric-based alarms (for example, CPU usage or memory usage) that apply to virtual machines might remain in a triggered state after the virtual machines are migrated to another host with vMotion.

Workaround: Any of the following actions will solve the problem:

Reconfigure the triggered alarm.

Delete the triggered alarm.

Change the virtual machine's power state.

Induce a workload on the virtual machine such that the alarm triggers, and then remove the workload. The alarm is cleared.

vCenter Server might fail if permissions for vpxuser account are changed
When you use a vSphere Client to connect to an ESX/ESXi host directly, if you select the Permission tab, it is possible to change the permissions for the vpxuser account. You might consider changing the permissions, for example, so vpxuser does not have administrator privileges on all host inventory objects. However, vCenter Server might fail after such a change.

Workaround: Set the vpxuser account on the host to have administrator privileges from the root folder down. You can do so by connecting to the host with a vSphere Client, then selecting the Permissions tab.

Disabled alarms for inventory objects are enabled if vCenter Server is restarted
If an alarm for an inventory object, such as a host, virtual machine, or datastore, is disabled in vCenter Server and vCenter Server is restarted, the alarm is enabled again after the restart completes.

Workaround: Disable the alarms on the appropriate inventory objects again after vCenter Server restarts.

Virtual machine templates stored on shared storage become unavailable after Distributed Power Management (DPM) puts a host in standby mode or when a host is put in maintenance mode
The vSphere Client associates virtual machine templates with a specific host. If the host storing the virtual machine templates is put into standby mode by DPM or into maintenance mode, the templates appear disabled in the vSphere Client. This behavior occurs even if the templates are stored on shared storage.

Workaround: Disable DPM on the host that stores the virtual machine templates. When the host is in maintenance mode, use the Datastore browser on another host that is not in maintenance or standby mode and also has access to the datastore on which the templates are stored to find the virtual machine templates. Then you can provision virtual machines using those templates.

You might encounter a LibXML DLL module load error when you install vSphere CLI for the first time on some Windows platforms, such as 32-bit Windows Vista Enterprise SP1

New: Incorrect links on ESX and ESXi Welcome page
The download links in the vSphere Remote Command Line and vSphere Web Services SDK sections, and the links to download the vSphere 4 documentation and VMware vCenter, are mapped incorrectly on the Welcome page of ESX and ESXi.
Workaround: Download the products from the VMware Web site.

On Nexus 1000V, distributed power management cannot put a host into standby
If a host does not have Integrated Lights-Out (iLO) or Intelligent Platform Management Interface (IPMI) support for distributed power management (DPM), the host can still use DPM provided that all of the host's physical NICs added to the Nexus 1000V DVS have Wake-on-LAN support. If even one of those physical NICs does not support Wake-on-LAN, DPM cannot put the host into standby.

Workaround: None.

Data displayed on the vSphere Client Performance tab does not relate to the selected object
When you select a certain sequence of objects from the inventory list, the vSphere Client might display charts pertaining to the wrong object on the Overview page of the Performance tab. This problem occurs regardless of whether you select the objects in order from top to bottom or randomly.

Workaround: Select a different inventory object. If this does not fix the problem, restart the vSphere Client.

Virtual Machine Management

Prompts to install a PCI standard PCI-PCI bridge occur after upgrading Windows 2000 virtual machines from hardware version 4 to hardware version 7
When you upgrade a Windows 2000 virtual machine from hardware version 4 to hardware version 7, you might see numerous pop-up messages prompting you to install a PCI standard PCI-PCI bridge. These messages are harmless.

Workaround: Accept all prompts to complete the hardware version upgrade.

Custom scripts assigned in vmware-toolbox for suspend power event do not run when you suspend the virtual machine from the vSphere Client
If you have assigned a custom script to the suspend power event in the Script tab of vmware-toolbox and have configured the virtual machine to run VMware Tools scripts on suspend, the custom script does not run when you suspend the virtual machine from the vSphere Client.

Workaround: None

Automatic VMware Tools upgrade on guest power on reboots the guest automatically without issuing a reboot notification
If you select to automatically update VMware Tools on a Windows Vista or Windows 2008 guest operating system, when the operating system powers on, VMware Tools is updated and the guest operating system automatically reboots without issuing a reboot notification message.

Cloning virtual machines with customization might result in a dialog box for Sysprep file information
When you clone a virtual machine with customization, the cloning process might not finish and a Sysprep dialog box might prompt you for additional files.

Workaround: Perform the following steps:

Note the list of missing files that Windows mini-setup cannot find.

Copy the required files (for example, c_20127.nls) from the source machine to the Sysprep install files folder, c:\sysprep\i386. The files that Sysprep prompts for are usually in the following location on the virtual source machine: C:\Windows\system32.

Perform the cloning with customization.

Note: The Sysprep directory is removed after the virtual machine is started and customization is completed.

Virtual machines running a Windows NT guest operating system require a reinstall of the network adapter driver after upgrading virtual hardware from version 4 to version 7
After you upgrade the virtual hardware on a Windows NT guest operating system, the virtual machine is unable to get an IP address because Windows NT does not fully support the plug-and-play specification.

Workaround: After upgrading virtual hardware from version 4 to version 7 on a Windows NT virtual machine, reinstall the network adapter driver by following the steps below.

Right-click Network Neighborhood and select Properties.

Select the Adapters tab.

Remove the existing adapter.

Add a new adapter.

For an AMD PCNet driver, select AMD PCNET Family Ethernet adapter and specify the path as C:\winnt\system32.
For a vmxnet driver, click Have disk and specify path as C:\Program Files\VMware\VMware Tools\Drivers\vmxnet\.

Reboot the virtual machine.

An IDE hard disk added to a hardware version 7 virtual machine is defined as Hard Disk 1 even if a SCSI hard disk is already present
If you have a hardware version 7 virtual machine with a SCSI disk already attached as Hard Disk 1 and you add an IDE disk, the virtual machine alters the disk numbering. The IDE disk is defined as Hard Disk 1 and the SCSI disk is changed to Hard Disk 2.

Workaround: None. However, if you decide to delete one of the disks, do not rely exclusively on the disk number. Instead, verify the disk type to ensure that you are deleting the correct disk.

Reverting to snapshot might not work if you cold migrate a virtual machine with a snapshot from an ESX/ESXi 3.5 host to an ESX/ESXi 4.0 host
You can cold migrate a virtual machine with snapshots from an ESX/ESXi 3.5 host to ESX/ESXi 4.0 host. However, reverting to a snapshot after migration might not work.

Workaround: None

The vCenter Server fails when the delta disk depth of a linked virtual machine clone is greater than the supported depth of 32
If the delta disk depth of a linked virtual machine clone is greater than the supported depth of 32, the vCenter Server fails and the following error message appears:

Win32 exception: Stack overflow

In such instances, you cannot restart the vCenter Server unless you remove the virtual machine from the host or clean the vCenter Server database. Removing the virtual machine from the host is the safer choice.

Workaround: Perform the following steps:

Log in to the vSphere Client on the host.

Display the virtual machine clone in the inventory.

Right-click the virtual machine and choose Delete from Disk.

Restart the vCenter Server.

Note: After you restart the vCenter Server, if the virtual machine is listed in the vSphere Client inventory and the Remove from Inventory option is disabled in the virtual machine context menu, you must manually remove the virtual machine entry from the vCenter database.

New: Creating a new SCSI disk in a virtual machine can result in an inaccurate error message
When you create a new SCSI disk in a virtual machine and you set the SCSI bus to virtual, an error message is issued with the following line:

Please verify that the virtual disk was created using the "thick" option.

However, thick by itself is not a valid option. The correct option is eagerzeroedthick.

Workaround: Using the command line, create the SCSI disk with the vmkfstools command and the eagerzeroedthick option.
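For example, an eagerly zeroed disk can be created with vmkfstools as in the following sketch; the 10 GB size and the datastore path are placeholders for illustration.

```
# Create a 10 GB virtual disk that is zeroed out in full at creation time
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm_1.vmdk
```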

The Installation Boot options for a virtual machine are not exported to OVF
When you create an OVF package from a virtual machine that has the Installation Boot option enabled, this option is ignored during export. As a result, the OVF descriptor is missing the InstallSection element, which provides information about the installation process. When you deploy an OVF package, the InstallSection element is parsed correctly.

Workaround: After exporting the virtual machine to OVF, manually create the InstallSection parameters in the OVF descriptor. If a manifest (.mf) file is present, you must regenerate it after you modify the OVF descriptor.

The inclusion of the InstallSection parameters in the descriptor informs the deployment process that an install boot is required to complete deployment. The ovf:initialBootStopDelay attribute specifies the boot delay.

See the OVF specification for details.

The SCSI driver floppy image required for Windows virtual machines that use the BusLogic virtual SCSI adapter is not displayed in the vSphere Client floppy drive configuration screen
Certain Windows virtual machines that use the BusLogic virtual SCSI adapter require a SCSI driver to operate properly. By default, the driver floppy image is available on the ESXi host. However, when you attempt to connect the floppy image to your virtual machine using the Virtual Machine Properties dialog box, the file is not displayed for your selection in the datastore browser.

Workaround: To connect the floppy image file to your virtual machine, enter the following in the Use existing floppy image in datastore text box: /usr/lib/vmware/floppies/vmscsi-1.2.1.0-signed.flp

A virtual machine cloned from a snapshot of a virtual machine with an LSI SAS controller might be erroneously configured with a BusLogic controller
If you take a snapshot of a virtual machine with an LSI SAS controller and then clone a virtual machine from the snapshot, the cloned virtual machine might have a BusLogic controller configured in the virtual machine properties instead of an LSI SAS controller.

Workaround: After cloning a virtual machine from a snapshot of a virtual machine with an LSI SAS controller, check the controller type for the cloned virtual machine in the Snapshot.config property. Reconfigure the controller type for the cloned virtual machine if necessary.

New: Following the suspension and resumption of a virtual machine, disabling of the VMXNET3 adapter can fail
If the network connection enters an undefined state between suspension and resumption, for example because the port-group name changes, the virtual network device cannot report the new network connection to the driver. This state prevents the VMXNET3 adapter from being disabled, uninstalled, or updated.

Workaround: Reconnect the adapter to a valid port group.

vMotion and Storage vMotion

Reverting to a snapshot might fail after reconfiguring and relocating the virtual machine
If you reconfigure the properties of a virtual machine and move it to a different host after you have taken a snapshot of it, reverting to the snapshot of that virtual machine might fail.

Workaround: Avoid moving virtual machines with snapshots to hosts that differ significantly from the source host (for example, in ESX/ESXi version or CPU type).

Using Storage vMotion to migrate a virtual machine with many disks might time out
A virtual machine with many virtual disks might be unable to complete a migration with Storage vMotion. The Storage vMotion process requires time to open, close, and process disks during the final copy phase. Storage vMotion migrations of virtual machines with many disks might time out because of this per-disk overhead.

Workaround: Increase the Storage vMotion fsr.maxSwitchoverSeconds setting in the virtual machine configuration file to a larger value. The default value is 100 seconds. Alternatively, at the time of the Storage vMotion migration, avoid running a large number of provisioning operations, migrations, power on, or power off operations on the same datastores the Storage vMotion migration is using.
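To raise the timeout, add or edit the setting in the virtual machine's .vmx configuration file while the virtual machine is powered off. A minimal sketch follows; the 300-second value is only an example, chosen here to triple the default.

```
fsr.maxSwitchoverSeconds = "300"
```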

Storage vMotion does not support source RDM conversion to target NFS volumes
Disk-only Storage vMotion fails for virtual mode RDMs when you convert the disks to flat/sparse format on NFS volumes.

Workaround: Perform the following steps to migrate virtual mode RDMs to NFS volumes:

Use Storage vMotion to convert an RDM virtual machine disk to the flat/sparse disk type using an intermediate SAN, local, or iSCSI volume.

Use Storage vMotion to relocate the converted disks from the SAN, local, or iSCSI volume to an NFS volume.

The disk format selected during Storage vMotion to an NFS volume might be overridden by the NFS server
When you use Storage vMotion to migrate a virtual disk to an NFS volume or perform other virtual machine provisioning that involves NFS volumes, the disk format is determined by the NFS server where the destination NFS volume resides. This overrides any selection you made in the Disk Format menu.

Workaround: None

If ESX/ESXi hosts fail or reboot during Storage vMotion, the operation can fail and virtual machines might become orphaned
If hosts fail or reboot during Storage vMotion, the vMotion operation can fail. The destination virtual machine's virtual disks might show up as orphaned in the vSphere inventory after the host reboots. Typically, the virtual machine's state is preserved before the host shuts down.

If the virtual machine does not show up in an orphaned state, check to see if the destination VMDK files exist.

Workaround: You can manually delete the orphaned destination virtual machine from the vSphere inventory. Locate and delete any remaining orphaned destination disks if they exist on the datastore.

File[vol] <vmpath> is larger than the maximum size supported by datastore <destination datastore>.

Workaround: None. Do not configure a virtual machine disk to its maximum volume limit if you plan to migrate the virtual machine.

Storage vMotion conflicts with remote CD/DVD and floppy disk device connections
Remote CD/DVD and floppy devices are not supported with Storage vMotion. However, when you perform Storage vMotion on a powered-on virtual machine hosted by ESX/ESXi 4.0, the toolbar icon for connecting and disconnecting CD/DVD and floppy devices remains enabled, allowing you to add these devices while Storage vMotion is in progress, even though this might cause failures.

Workaround: Before initiating Storage vMotion, disconnect all remote CD/DVD and floppy devices attached to the virtual machine by clicking the CD/DVD and floppy device connect/disconnect icon.

Storage vMotion failure mode can result in power off of virtual machine
When Storage vMotion is used on an ESX/ESXi 4.0 host, if moving the data fails because of a transient error (for example, out of memory), Storage vMotion might not complete successfully, migration performance might degrade, or the source virtual machine might power off.

Workaround: Power on the virtual machine.

Storage vMotion on ESX/ESXi 3.5 hosts does not display correct disk type if disk type is changed during Storage vMotion
The Storage vMotion wizard presents an option to convert disk type (from thick to thin or from thin to thick) for virtual machines on any ESX/ESXi host version. After a disk is converted and Storage vMotion is complete, the disk type is not reflected properly for ESX/ESXi 3.5 hosts. The vSphere Client still reflects the old disk type.

Workaround: Power off the virtual machine, unregister it, and then re-register it.

Virtual machines stored on a local datastore are not migrated off of the host when the host is placed in maintenance mode

Workaround: Manually move virtual machines stored on local datastores to another host if they need to remain available while their current host is in maintenance mode.

Using Storage vMotion to relocate a virtual machine back to its source volume might result in an insufficient disk space error
When you use Storage vMotion to move a virtual machine to another datastore and then back to its source volume, the vSphere Client does not immediately refresh the size of the source datastore, resulting in an error.

Workaround: Refresh the datastore in the vSphere Client. If the reported size of the datastore does not change after one attempt, wait for 30 minutes and refresh again.

VMware High Availability and Fault Tolerance

Failover to VMware FT secondary virtual machine produces error message on host client
When VMware Fault Tolerance fails over to a secondary virtual machine, if the host chosen for the secondary virtual machine has recently booted, the client connected to the host sees the attempt as failing and displays the following error message:

Login failed due to a bad username or password.

This error message is seen because the host has recently booted and it is possible that it has not yet received an SSL thumbprint from the vCenter Server. After the thumbprint is pushed out to the host, the failover succeeds. This condition is likely to occur only if all hosts in an FT-enabled cluster have failed, causing the host with the secondary virtual machine to be freshly booted.

Workaround: None. The failover succeeds after a few attempts.

Changing the system time on an ESX/ESXi host produces a VMware HA agent error
If you change the system time on an ESX/ESXi host, after a short time interval, the following HA agent error appears:

HA agent on <server> in <cluster> in <data center> has an error.

This error is displayed in both the event log and the host's Summary tab in the vSphere Client.

Workaround: Correct the host's system time and then restart vpxa by running the service vmware-vpxa restart command.
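On a classic ESX host, this workaround can be performed from the service console. The following is a hedged sketch; the date string is a placeholder, and on ESXi you would instead adjust the clock from the vSphere Client or via NTP.

```shell
date -s "2009-11-20 10:15:00"   # set the correct system time (placeholder value)
service vmware-vpxa restart     # restart the vCenter Server agent (vpxa)
```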

Turning on Fault Tolerance for a powered-on virtual machine with lazy-zeroed virtual disks fails
If you try to turn on FT for a powered-on virtual machine with lazy-zeroed virtual disks, the operation fails and the secondary virtual machine is not created.

Workaround: Power off the virtual machine, turn on FT, and power the virtual machine back on. The virtual disks are correctly converted to eager-zeroed disks and the operation to turn on FT succeeds.

Trying to change the disk format of an FT-enabled virtual machine while migrating it across datastores results in a fault
If you attempt to change the disk format of a powered-off, FT-enabled virtual machine while migrating it across datastores, the vSphere Client displays an InvalidArgument error message indicating that the operation has failed. The correct behavior is for the vSphere Client to disable the option to change the disk format.

Workaround: When you relocate an FT-enabled virtual machine to another datastore, keep the default option, Same format as source.

Nonresponsive Secondary VMs or copies of virtual machines with possibly different names might remain in the host inventory if there is an interruption when turning on Fault Tolerance
If you have a virtual machine on which VMware HA is enabled and you turn on Fault Tolerance, a nonresponsive Secondary VM might be added to the cluster's inventory, or you might end up with multiple copies of the virtual machine with different names. This situation occurs if the destination ESX/ESXi host of the Secondary VM loses connectivity to its managing vCenter Server (through a reboot, power loss, or disconnection from the network) while the secondary copy is being created, and it might result in incomplete configuration settings on the Secondary VM.

Secondary VM remains in inventory after Fault Tolerance has been turned off for Primary VM
In some rare cases, selecting Turn Off Fault Tolerance in the vSphere Client for a Primary VM succeeds, but the associated Secondary VM object is left in the inventory. This occasionally happens when a failover operation has just occurred and the new Secondary VM has not yet been started. There are no serious consequences because the Secondary VM's files should have already been deleted.

Workaround: Manually delete the Secondary VM.

When a virtual machine running on a NAS datastore is configured to shut down in response to host isolation, the virtual machine might attempt to run simultaneously on two hosts in a network isolation event
If a virtual machine is configured with the default setting to shut down on host isolation, and a multiple network failure causes host isolation and the loss of network access to the datastore, the virtual machine might fail to respond indefinitely while trying to shut down. HA tries to power off the virtual machine and restart it on another host, and two instances of the virtual machine might appear in the vSphere Client. There is no data corruption, because HA and VMFS correctly control access to the virtual machine's data, but the original virtual machine becomes nonresponsive. You can clear the condition by connecting directly to the host that was isolated and dismissing the question.

Workaround: In NFS environments, select Power Off as the host isolation response, either as the cluster default or for the affected virtual machines.

Upgrading from an ESX/ESXi 3.x host to an ESX/ESXi 4.0 host results in a successful upgrade, but VMware HA reconfiguration might fail
When you use vCenter Update Manager 4.0 to upgrade an ESX/ESXi 3.x host that is part of an HA or DRS cluster to ESX/ESXi 4.0, the upgrade succeeds and the host is reconnected to vCenter Server, but HA reconfiguration might fail and an error message displays on the host Summary tab.

Configuring VMware High Availability (HA) on a heavily loaded system might result in an error message
If you enable HA on a host that is experiencing a heavy load from its guest virtual machines, HA configuration might be interrupted for the host and the following error message is displayed:

HA Agent on the host failed

Workaround: Reconfigure HA for the host, preferably after reducing the load either by powering off virtual machines or by migrating them to another host in the cluster using vMotion.

Suspended virtual machines with independent nonpersistent disks do not fail over on VMware HA hosts
If you have suspended or powered-off virtual machines with disks configured as independent nonpersistent on a host that has VMware HA enabled, failover does not occur. These virtual machines are not migrated to another host if the host fails, is powered off, or is placed in maintenance mode.
Migrating these virtual machines is currently not supported with HA because the machines are not compatible with any other host in the cluster.

Workaround: Unregister the virtual machine and register it on a compatible host.

VMware HA might report misleading timeout errors when powering on or failing over a host with many VMs
VMware HA timeout errors might appear a few minutes after powering on or failing over (using VMware HA) a host with many VMs (more than 70). These timeout errors disappear after most of the VMs are powered on.

Workaround: None. These errors can be safely ignored.

VMware Fault Tolerance does not support IPv6 addressing
If the VMkernel NICs for Fault Tolerance (FT) logging or vMotion are assigned IPv6 addresses, enabling Fault Tolerance on virtual machines fails.

Workaround: Configure the VMkernel NICs with IPv4 addressing.
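For example, an FT-logging VMkernel NIC can be recreated with an IPv4 address from the host command line. This is a hedged sketch only; the port group name FT-Logging and the addresses are placeholders for your environment.

```shell
esxcfg-vmknic -d "FT-Logging"   # delete the vmknic that was configured with IPv6
esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 "FT-Logging"   # recreate it with IPv4
```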

Hot-plugging of devices is not supported on FT-enabled virtual machines
The hot-plugging feature is not supported on virtual machines with VMware Fault Tolerance enabled. You must turn off Fault Tolerance temporarily before you hot-plug a device, and turn FT back on afterward. After the hot removal of a device, reboot the virtual machine before turning FT back on.