This issue occurs when there is semaphore contention between the CIM server and the providers.

This issue is resolved in this release.

ESXi host might fail to send CIM indications from sfcb to ServerView Operations Manager after reboot
An ESXi host might fail to send CIM indications from sfcb to ServerView Operations Manager after you reboot the host. An error similar to the following is written to the syslog file:

Unable to monitor hardware status with vCenter Server
If the CIM client sends two Delete Instance requests to the same CIM indication subscription, the sfcb-vmware_int service might stop responding due to memory contention. You might not be able to monitor the hardware status with vCenter Server and ESXi.

This issue is resolved in this release.

After upgrading firmware, false alarms might appear in the Hardware Status tab
After upgrading firmware, false alarms appear in the Hardware Status tab of the vSphere Client even if the system has been idle for two to three days. Error messages similar to the following might be logged in the /var/log/syslog.log file:

Openwsman might not support createInstance()
The ESXi WSMAN agent (Openwsman) included in ESXi 5.0 Update 3 or ESXi Patch Release ESXi500-201406001, ESXi 5.1 Update 2 or ESXi Patch Release ESXi510-201407001, or ESXi 5.5 Update 2 might not support an array parameter to createInstance(). When you run the wsmand service to create a CIM instance with an array type property value, using createInstance() in Openwsman, messages similar to the following are displayed:

Unable to monitor hardware status on an ESXi host
An ESXi host might report an error in the Hardware Status tab due to an unresponsive hardware monitoring service (sfcbd). An error similar to the following is written to the syslog.log file:

The openwsmand service might stop responding when RAID controller properties are changed
The openwsmand service might stop responding when you change RAID controller properties using the ModifyInstance option. This happens when properties for the following are changed:

Rebuild priority

Consistency check priority

Patrol read priority

This issue is resolved in this release.

CIM client might display an error due to multiple enumeration
When you execute multiple enumerate queries on the VMware Ethernet port class using the CBEnumInstances method, servers running ESXi 6.0 might report an error message similar to the following:

CIM error: enumInstances Class not found

This issue occurs when the management software fails to retrieve information provided by the VMware_EthernetPort() class. When the issue occurs, a query on memstats might display the following error message:

MemStatsTraverseGroups: VSI_GetInstanceListAlloc failure: Not found.

This issue is resolved in this release.

Miscellaneous Issues

Unable to end the sfcb process when the UserWorld is stuck in HeapMoreCore
When the UserWorld is stuck in HeapMoreCore with an infinite timeout due to an improper stop order, you are unable to end the sfcb process. An error message similar to the following is displayed:

failed to kill /sbin/sfcbd (8314712): No such process

This issue is resolved in this release.

Networking Issues

ESXi host might become unusable with no connectivity until reboot
When an ESXi host has three or more vmknics, if you reset network settings from DCUI or apply a Host Profile where the vmknics are on a DVS, including the management vmknic, a Hostctl exception might occur. This might cause the host to become unusable with no connectivity until it is rebooted.

This issue is resolved in this release.

Throughput statistics for TX and RX might be very high causing unnecessary remapping of source ports
The values of TX and RX throughput statistics might be very high, leading to unnecessary remapping of source ports to different VMNICs. This might be due to a miscalculation of statistics by the Load-Based Teaming algorithm.

Attempts to create more than 16 TB of VMFS5 datastore on storage device fail
An ESXi host might fail when you attempt to expand a VMFS5 datastore beyond 16 TB. An error message similar to the following is written to the vmkernel.log file:

Overall ESXi utilization might decrease when you set the CPU limit of a single processor virtual machine
When you set the CPU limit of a single processor virtual machine, the overall ESXi utilization might decrease due to a defect in the ESXi scheduler. This happens when the ESXi scheduler makes incorrect CPU load balancing estimations and considers the virtual machines as running. For more details, see Knowledge Base article 2096897.

This issue is resolved in this release.

The iSCSI network port-binding might fail even when there is only one active uplink on a switch
The iSCSI network port-binding fails even when there is only one active uplink on a switch.

The issue is resolved in this release by counting only the active uplinks to determine whether the VMkernel interface is compliant.

iSCSI initiator name allowed when enabling software iSCSI via esxcli
This release provides the option to pass an iSCSI initiator name to the esxcli iscsi software set command.
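A command of the following shape might be used (this is a sketch of the new option; the IQN shown is a placeholder, and you should confirm the exact flag name with esxcli iscsi software set --help on your build):

```shell
# Enable the software iSCSI adapter and pass an initiator name in one step.
# The IQN below is a placeholder, not a real initiator name.
esxcli iscsi software set --enabled=true --name iqn.1998-01.com.example:host01
```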

Persistently mounted VMFS snapshot might not get mounted
Persistently mounted VMFS snapshot volumes might not get mounted after you reboot the ESXi host. Log messages similar to the following are written to the syslog file:

Reduced IOPS than the configured limit for the read-write operation
When you limit the I/O operations per second (IOPS) value for a disk from a virtual machine, you see lower IOPS than the configured limit for read-write operations (I/O) if the size of the read-write operation is greater than or equal to 32 KB. This is because the I/O scheduler considers 32 KB as one scheduling cost unit of an I/O operation. Any operation larger than 32 KB is counted as multiple operations, which results in throttling of I/O.

The issue is resolved in this release by making the SchedCostUnit value configurable per the application requirement.

To view the current value, run the following command:
esxcfg-advcfg -g /Disk/SchedCostUnit

To set a new value, run the following command:
esxcfg-advcfg -s 65536 /Disk/SchedCostUnit
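The accounting described above can be sketched as a simplified model (this is illustrative only, not actual ESXi scheduler code; the 32 KB default for SchedCostUnit is taken from the text):

```python
import math

def effective_iops(iops_limit, io_size_bytes, sched_cost_unit=32 * 1024):
    """Model how an I/O larger than SchedCostUnit is charged as
    multiple scheduling cost units, throttling the effective IOPS."""
    # Each I/O is charged ceil(size / SchedCostUnit) cost units.
    cost_units = math.ceil(io_size_bytes / sched_cost_unit)
    # The limit is enforced in cost units, so larger I/Os admit fewer ops.
    return iops_limit // cost_units

# A 64 KB I/O under a 1000 IOPS limit is charged as 2 units per I/O,
# so only about half the configured operations are admitted.
print(effective_iops(1000, 64 * 1024))             # 500
# Raising SchedCostUnit to 64 KB restores the configured limit.
print(effective_iops(1000, 64 * 1024, 64 * 1024))  # 1000
```

This models why raising SchedCostUnit to 65536 (as in the esxcfg-advcfg example above) removes the throttling for 64 KB I/Os.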

The vmkiscsid process might stop responding
The vmkiscsid process might stop responding when you run an iSCSI adapter rescan operation using IPv6.

This issue is resolved in this release.

ESXi host might not receive SNMP v3 traps with third-party management tool
An ESXi host might not receive SNMP v3 traps when you are using a third-party management tool to collect SNMP data. Entries similar to the following are written to the /var/snmp/syslog.log file:

Attempts to boot an ESXi 6.0 host from an iSCSI SAN might fail
Attempts to boot an ESXi 6.0 host from an iSCSI SAN might fail. This happens when the ESXi host is unable to detect the iSCSI Boot Firmware Table (iBFT), causing boot to fail. This issue might occur with any iSCSI adapter, including Emulex and QLogic.

This issue is resolved in this release.

The setPEContext VASA API call to a provider might fail
The setPEContext VASA API call to a provider might fail. An error message similar to the following might be reported in the vvold.log file:

Random EMC targets might not recognize the initiator
Applying a host profile initially assigns a randomly generated iSCSI initiator name and then renames it to the user-defined name. This might cause some EMC targets to not recognize the initiator.

This issue is resolved in this release.

IBM BladeCenter HS23 might be unable to write the coredump file when the active coredump partition is configured
An IBM BladeCenter HS23 that boots from a USB device is unable to write the coredump file when the active coredump partition is configured on a USB device. A purple screen displays a message that the dump is initiated but is not completed.

This issue is resolved in this release.

Upgrade and Installation Issues

First boot of VMware ESXi 6.0 on a Dell PowerEdge VRTX might halt server due to a segmentation fault
The first boot of VMware ESXi 6.0 on a Dell PowerEdge VRTX halts server after loading vmw_satp_alua module due to a segmentation fault during the process of discovering controllers.

The lsu-lsi-lsi-mr3-plugin and lsu-lsi-megaraid-sas-plugin VIBs are updated to upgrade the Storelib from version 4.26 to 4.30 to resolve this issue.

This issue is resolved in this release by extending support for the IA32_PAT MSR to all versions of virtual hardware.

Note: This support is limited to recording the guest's PAT in the IA32_PAT MSR. The guest's PAT does not actually influence the memory types used by the virtual machine.

Deleting a VDI environment enabled desktop pool might delete VMDK files from a different desktop pool
When you delete a VDI environment enabled desktop pool, VMDK files of virtual machines from a different desktop pool might get deleted. Multiple virtual machines from different desktop pools might be affected. This happens when, after deleting the disk, the parent directory gets deleted due to an error where the directory is perceived as empty, even though it is not. The virtual machine might fail to power on with the following error:

VMDK deletion occurs when a particular virtual machine's guest operating system and user data disk are spread across different datastores. This issue is not visible when all VM files reside in the same datastore.

This issue is resolved in this release.

Automatic option for a virtual machine startup or shutdown might not work
The Automatic option for virtual machine startup or shutdown might not work when the vmDelay variable value is set to more than 1800 seconds. This might occur in the following situations:

If the vmDelay variable is set to 2148 seconds or more, the automatic virtual machine startup or shutdown might not be delayed, and might cause the hostd service to fail.

If the vmDelay variable is set to more than 1800 seconds, then the vim-cmd command hostsvc/autostartmanager/autostart might not delay the auto startup or shutdown tasks on a virtual machine. This is because the command might time out if the task is not completed within 30 minutes.

Note: Specify the blockingTimeoutSeconds value in the hostd configuration file, /etc/vmware/hostd/config.xml. If the sum of delays is larger than 1800 seconds, then you must set blockingTimeoutSeconds to a value larger than 1800 seconds.

For example:
<vimcmd><soapStubAdapter><blockingTimeoutSeconds>7200</blockingTimeoutSeconds></soapStubAdapter></vimcmd>
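The sizing rule in the note above can be sketched as follows (an illustrative helper, not hostd code; the function name and the idea of summing per-VM delays are assumptions based on the text):

```python
def required_blocking_timeout(vm_delays, default_timeout=1800):
    """Return a blockingTimeoutSeconds value large enough to cover the
    summed per-VM startup/shutdown delays (all values in seconds)."""
    total = sum(vm_delays)
    # The default timeout is 30 minutes (1800 s); when the summed delays
    # exceed it, blockingTimeoutSeconds must be raised at least that high.
    return max(total, default_timeout)

# Three VMs delayed 1000 s each sum to 3000 s, exceeding the 1800 s
# default, so blockingTimeoutSeconds must be set to at least 3000.
print(required_blocking_timeout([1000, 1000, 1000]))  # 3000
```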

This issue is resolved in this release.

Virtual SAN Issues

ESXi host in a Virtual SAN cluster with 40 or more nodes might display a purple diagnostic screen
An ESXi host that is part of a Virtual SAN cluster with 40 or more nodes might display a purple diagnostic screen due to a limit check when the nodes are added back into the membership list of a new master after master failover.

This issue is resolved in this release.

Reducing the proportionalCapacity policy does not affect the disk usage
Reducing the proportionalCapacity policy does not affect the disk usage. This is because modifications made to the policy parameters are not passed on to the components on which they are applied.

This issue is resolved in this release.

Provisioning a virtual machine using a storage policy with the Flash Read Cache Reservation attribute might fail
Attempts to provision a virtual machine using a storage policy with the Flash Read Cache Reservation attribute fail in a Virtual SAN all-flash cluster environment.

This issue is resolved in this release.

Lightweight Virtual SAN Observer capable of collecting statistics without requiring hostd introduced
The Virtual SAN Observer is unable to collect statistics when hostd is not reachable as the collection happens through hostd. This release introduces a lightweight Virtual SAN Observer capable of collecting statistics without requiring hostd.

This issue is resolved in this release.

VMware Tools Issues

VMware Tools might fail to automatically upgrade when the VM is powered on for the first time
When a virtual machine is deployed or cloned with guest customization and the VMware Tools Upgrade Policy is set to allow the VM to automatically upgrade VMware Tools at next power on, VMware Tools might fail to automatically upgrade when the VM is powered on for the first time.

This issue is resolved in this release.

Attempts to open telnet using the start telnet://xx.xx.xx.xx command might fail
After installing VMware Tools on a Windows 8 or Windows Server 2012 guest operating system, attempts to open telnet using the start telnet://xx.xx.xx.xx command fail with the following error message:

Make sure the virtual machine's configuration allows the guest to open host applications

This issue is resolved in this release.

The vShield Endpoint drivers renamed as Guest Introspection drivers
The vShield Endpoint drivers are renamed as Guest Introspection drivers, and two of these drivers, the NSX File Introspection driver (vsepflt.sys) and the NSX Network Introspection driver (vnetflt.sys), can now be installed separately. This allows you to install the file driver without installing the network driver.

This issue is resolved in this release.

Applications such as QuickTime might experience a slowdown in performance with Unidesk
When you use Unidesk in conjunction with VMware View or vSphere with vShield Endpoint enabled, applications such as QuickTime might experience a slowdown in performance. This is due to an interoperability issue that is triggered when the Unidesk volume serialization filter driver and the vShield driver are present on the stack. For each file opened by the application, even if it is just for reading the attributes, the vShield driver calls FltGetFileNameInformation, causing further processing to be performed on the files. As a result, the Unidesk driver opens directories and causes overall application performance degradation.

This issue is resolved in this release.

IPv6 Router Advertisements do not function as expected when tagging 802.1q with VMXNET3 adapters on a Linux virtual machine
IPv6 Router Advertisements (RA) do not function as expected when tagging 802.1q with VMXNET3 adapters on a Linux virtual machine, as the IPv6 RA address intended for the VLAN interface is delivered to the base interface.

This issue is resolved in this release.

Quiesced snapshot might fail during snapshot initialization
A quiesced snapshot might fail due to a race condition during snapshot initialization. An error message similar to the following is displayed on the Tasks and Events tab of the vCenter Server:

An error occurred while saving the snapshot

You might also see the following information in the guest event log:

System Event Log
Source: Microsoft-Windows-DistributedCOM
Event ID: 10010
Level: Error
Description: The server {nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn} did not register with DCOM within the required timeout.

This issue is resolved in this release.

Performing a quiesced snapshot on a virtual machine running Microsoft Windows 2008 or later might fail
Attempts to perform a quiesced snapshot on a virtual machine running Microsoft Windows 2008 or later might fail, and the VM might panic with a blue screen and an error message similar to the following:

A problem has been detected and Windows has been shut down to prevent damage to your computer.
If this is the first time you've seen this Stop error screen, restart your computer. If this screen appears again, follow these steps:

Disable or uninstall any anti-virus, disk defragmentation or backup utilities. Check your hard drive configuration, and check for any updated drivers. Run CHKDSK /F to check for hard drive corruption, and then restart your computer.