Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name}

CustomFieldDefAddedEvent

Custom field definition added

info

Created new custom field definition {name}

CustomFieldDefRemovedEvent

Custom field definition removed

info

Removed field definition {name}

CustomFieldDefRenamedEvent

Custom field definition renamed

info

Renamed field definition from {name} to {newName}

CustomFieldValueChangedEvent

Custom field value changed

info

Changed custom field {name} on {entity.name} to {value}

Changed custom field {name} on {entity.name} to {value}

Changed custom field {name} on {entity.name} to {value}

Changed custom field {name} to {value}

Changed custom field {name} on {entity.name} in {datacenter.name} to {value}

CustomizationFailed

Cannot complete customization

info

Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details.

CustomizationLinuxIdentityFailed

Customization Linux Identity Failed

error

An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details.

CustomizationNetworkSetupFailed

Cannot complete customization network setup

error

An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details.

CustomizationStartedEvent

Started customization

info

Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS.

CustomizationSucceeded

Customization succeeded

info

Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS.

CustomizationSysprepFailed

Cannot complete customization Sysprep

error

The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information.

CustomizationUnknownFailure

Unknown customization error

error

An error occurred while customizing VM {vm.name}. For details, see the log file {logLocation} in the guest OS.

All shared datastores failed on the host {hostName} in cluster {computeResource.name}

All shared datastores failed on the host {hostName}

All shared datastores failed on the host {hostName}

All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}

com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent

Host complete network failure

error

All VM networks failed on the host {hostName} in cluster {computeResource.name}

All VM networks failed on the host {hostName}

All VM networks failed on the host {hostName}

All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}

com.vmware.vc.HA.HeartbeatDatastoreChanged

vSphere HA changed a host's heartbeat datastores

info

Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name}

Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name}

Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on this host

Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}
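
The event type IDs in this reference (for example, com.vmware.vc.HA.HeartbeatDatastoreChanged above) can be used to filter the vCenter Server event stream programmatically. The following is a minimal sketch only, assuming pyVmomi is installed; VC_HOST, USER, and PASSWORD are placeholders and not part of this reference:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import ssl

    # Placeholder connection details; certificate verification is disabled for brevity.
    context = ssl._create_unverified_context()
    si = SmartConnect(host="VC_HOST", user="USER", pwd="PASSWORD", sslContext=context)

    # Restrict the query to one event type ID taken from this reference.
    spec = vim.event.EventFilterSpec(
        eventTypeId=["com.vmware.vc.HA.HeartbeatDatastoreChanged"])
    for event in si.content.eventManager.QueryEvents(spec):
        print(event.createdTime, event.fullFormattedMessage)

    Disconnect(si)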

com.vmware.vc.HA.HeartbeatDatastoreNotSufficient

vSphere HA heartbeat datastore number for a host is insufficient

warning

The number of vSphere HA heartbeat datastores for host {host.name} in cluster {computeResource.name} is {selectedNum}, which is less than required: {requiredNum}

The number of vSphere HA heartbeat datastores for host {host.name} is {selectedNum}, which is less than required: {requiredNum}

The number of vSphere HA heartbeat datastores for this host is {selectedNum}, which is less than required: {requiredNum}

The number of vSphere HA heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum}

com.vmware.vc.HA.HostAgentErrorEvent

vSphere HA agent on a host has an error

warning

vSphere HA agent for host {host.name} has an error in {computeResource.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason}

vSphere HA agent for host {host.name} has an error: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason}

vSphere HA agent for this host has an error: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason}

vSphere HA agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason}

com.vmware.vc.HA.HostDasErrorEvent

vSphere HA agent error

error

vSphere HA agent on host {host.name} has an error: {reason.@enum.com.vmware.vc.HA.HostDasErrorEvent.HostDasErrorReason}

vSphere HA agent on host {host.name} has an error. {reason.@enum.com.vmware.vc.HA.HostDasErrorEvent.HostDasErrorReason}

vSphere HA agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised.

vSphere HA agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised.

vSphere HA agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised.

com.vmware.vc.HA.NotAllHostAddrsPingable

vSphere HA agent cannot reach some cluster management addresses

info

The vSphere HA agent on the host {host.name} in cluster {computeResource.name} cannot reach some of the management network addresses of other hosts, and thus HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs}

The vSphere HA agent on the host {host.name} cannot reach some of the management network addresses of other hosts, and thus HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs}

The vSphere HA agent on this host cannot reach some of the management network addresses of other hosts, and HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs}

The vSphere HA agent on the host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs}

Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} lost access to {datastore}

Virtual machine {vm.name} on host {host.name} lost access to {datastore}

Virtual machine {vm.name} lost access to {datastore}

Virtual machine lost access to {datastore}

Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore}

com.vmware.vc.vcp.VmNetworkFailedEvent

Virtual machine lost VM network accessibility

error

Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} lost access to {network}

Virtual machine {vm.name} on host {host.name} lost access to {network}

Virtual machine {vm.name} lost access to {network}

Virtual machine lost access to {network}

Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network}

com.vmware.vc.vcp.VmPowerOffHangEvent

VM power off hang

error

HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} successfully after trying {numTimes} times and will keep trying

HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} successfully after trying {numTimes} times and will keep trying

HA VM Component Protection could not power off virtual machine {vm.name} successfully after trying {numTimes} times and will keep trying

HA VM Component Protection could not power off virtual machine successfully after trying {numTimes} times and will keep trying

HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying

com.vmware.vc.vcp.VmWaitForCandidateHostEvent

No candidate host to restart

error

HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} after waiting {numSecWait} seconds and will keep trying

HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} after waiting {numSecWait} seconds and will keep trying

HA VM Component Protection could not find a destination host for virtual machine {vm.name} after waiting {numSecWait} seconds and will keep trying

HA VM Component Protection could not find a destination host for this virtual machine after waiting {numSecWait} seconds and will keep trying

HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying

com.vmware.vc.vflash.SsdConfigurationFailedEvent

Operation on the SSD device failed

error

Configuration on disk {disk.path} failed. Reason : {fault.msg}

Configuration on disk {disk.path} failed. Reason : {fault.msg}

com.vmware.vc.vm.DstVmMigratedEvent

Virtual machine migrated successfully

info

Virtual machine {vm.name} {newMoRef} in {computeResource.name} was migrated from {oldMoRef}

Virtual machine {vm.name} {oldMoRef} in {computeResource.name} in {datacenter.name} was migrated to {newMoRef}

com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent

Virtual NIC reservation is not satisfied

error

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} is not satisfied

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} is not satisfied

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on this host is not satisfied

Reservation of Virtual NIC {deviceLabel} is not satisfied

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is not satisfied

com.vmware.vc.vm.VmAdapterResvSatisfiedEvent

Virtual NIC reservation is satisfied

info

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} is satisfied

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} is satisfied

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on this host is satisfied

Reservation of Virtual NIC {deviceLabel} is satisfied

Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is satisfied

com.vmware.vc.vm.VmStateFailedToRevertToSnapshot

Failed to revert the virtual machine state to a snapshot

error

Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId}

Failed to revert the execution state of the virtual machine {vm.name} on host {host.name} to snapshot {snapshotName}, with ID {snapshotId}

Failed to revert the execution state of the virtual machine {vm.name} to snapshot {snapshotName}, with ID {snapshotId}

Failed to revert the execution state of the virtual machine to snapshot {snapshotName}, with ID {snapshotId}

Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId}

com.vmware.vc.vm.VmStateRevertedToSnapshot

The virtual machine state has been reverted to a snapshot

info

The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}

The execution state of the virtual machine {vm.name} on host {host.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}

The execution state of the virtual machine {vm.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}

The execution state of the virtual machine has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}

The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}

com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent

vSphere HA detected application heartbeat status change

warning

vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for {vm.name} on {host.name} in cluster {computeResource.name}

vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for {vm.name} on {host.name}

vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for {vm.name}

vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for this virtual machine

vSphere HA detected that the application heartbeat status changed to {status.@enum.VirtualMachine.AppHeartbeatStatusType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}

com.vmware.vc.vmam.VmAppHealthStateChangedEvent

vSphere HA detected application state change

warning

vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name}

vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name}

vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name}

vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for this virtual machine

vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}

com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent

Virtual SAN disk that does not support checksum

error

Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} does not support checksum

Virtual SAN disk {disk} on {host.name} does not support checksum

Virtual SAN disk {disk} does not support checksum

Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} does not support checksum

Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} does not support checksum

com.vmware.vc.vsan.DatastoreNoCapacityEvent

Virtual SAN datastore {datastoreName} does not have capacity

error

Virtual SAN datastore {datastoreName} in cluster {computeResource.name} does not have capacity

Virtual SAN datastore {datastoreName} does not have capacity

Virtual SAN datastore {datastoreName} in cluster {computeResource.name} in datacenter {datacenter.name} does not have capacity

Virtual SAN datastore {datastoreName} in cluster {computeResource.name} in datacenter {datacenter.name} does not have capacity

com.vmware.vc.vsan.HostCommunicationErrorEvent

Host cannot communicate with all other nodes in the Virtual SAN enabled cluster

error

Host {host.name} in cluster {computeResource.name} cannot communicate with all other nodes in the Virtual SAN enabled cluster

Host {host.name} cannot communicate with all other nodes in the Virtual SAN enabled cluster

Host cannot communicate with all other nodes in the Virtual SAN enabled cluster

Host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} cannot communicate with all other nodes in the Virtual SAN enabled cluster

Host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} cannot communicate with all other nodes in the Virtual SAN enabled cluster

Found another host participating in the Virtual SAN service which is not a member of this host's vCenter cluster

error

Found host(s) {hostString} participating in the Virtual SAN service which is not a member of this host's vCenter cluster {computeResource.name}

Found host(s) {hostString} participating in the Virtual SAN service which is not a member of this host's vCenter cluster

Found host(s) {hostString} participating in the Virtual SAN service which is not a member of this host's vCenter cluster

Found host(s) {hostString} participating in the Virtual SAN service in cluster {computeResource.name} in datacenter {datacenter.name} which is not a member of this host's vCenter cluster

Found host(s) {hostString} participating in the Virtual SAN service in cluster {computeResource.name} in datacenter {datacenter.name} which is not a member of this host's vCenter cluster

com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent

Failed to turn off the disk locator LED

error

Failed to turn off the locator LED of disk {disk.path}. Reason : {fault.msg}

Failed to turn off the locator LED of disk {disk.path}. Reason : {fault.msg}

com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent

Failed to turn on the disk locator LED

error

Failed to turn on the locator LED of disk {disk.path}. Reason : {fault.msg}

Failed to turn on the locator LED of disk {disk.path}. Reason : {fault.msg}

com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent

Virtual SAN cluster needs disk format upgrade

warning

Virtual SAN cluster {computeResource.name} has one or more hosts that need disk format upgrade: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation

Virtual SAN cluster has one or more hosts for which disk format upgrade is recommended: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation

Virtual SAN cluster {computeResource.name} has one or more hosts that need disk format upgrade: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation

Virtual SAN cluster {computeResource.name} has one or more hosts that need disk format upgrade: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation

A 3rd party component, {1}, running on ESXi has reported an error. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

esx.problem.3rdParty.info

A 3rd party component on ESXi has reported an informational event.

info

A 3rd party component, {1}, running on ESXi has reported an informational event. If needed, please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

esx.problem.3rdParty.warning

A 3rd party component on ESXi has reported a warning.

warning

A 3rd party component, {1}, running on ESXi has reported a warning related to a problem. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

{1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.

esx.problem.iorm.badversion

Storage I/O Control version mismatch

info

Host {1} cannot participate in Storage I/O Control (SIOC) on datastore {2} because the version number {3} of the SIOC agent on this host is incompatible with number {4} of its counterparts on other hosts connected to this datastore.

esx.problem.iorm.nonviworkload

Unmanaged workload detected on SIOC-enabled datastore

info

An unmanaged I/O workload is detected on a SIOC-enabled datastore: {1}.

LACP error: Duplex mode across all uplink ports must be full, VDS {1} uplink {2} has different mode.

esx.problem.net.lacp.uplink.fail.speed

uplink speed is different

error

LACP error: Speed across all uplink ports must be same, VDS {1} uplink {2} has different speed.

esx.problem.net.lacp.uplink.inactive

All uplinks must be active

error

LACP error: All uplinks on VDS {1} must be active.

esx.problem.net.lacp.uplink.transition.down

uplink transition down

warning

LACP warning: uplink {1} on VDS {2} is moved out of link aggregation group.

esx.problem.net.migrate.bindtovmk

Invalid vmknic specified in /Migrate/Vmknic

warning

The ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.
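
As a hypothetical remediation sketch (not part of this reference), the /Migrate/Vmknic option corresponds to the advanced setting key Migrate.Vmknic and could be updated through the vSphere API, for example with pyVmomi; the ServiceInstance si, the host name ESX_HOST, and the vmknic vmk1 below are placeholders:

    from pyVmomi import vim

    # Assumes an authenticated ServiceInstance `si`; see the earlier connection sketch.
    host = si.content.searchIndex.FindByDnsName(dnsName="ESX_HOST", vmSearch=False)

    # Point Migrate.Vmknic at a valid vmknic, or use an empty string to clear the binding.
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Migrate.Vmknic", value="vmk1")])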

esx.problem.net.migrate.unsupported.latency

Unsupported vMotion network latency detected

warning

ESXi has detected {1}ms round-trip vMotion network latency between host {2} and {3}. High latency vMotion networks are supported only if both ESXi hosts have been configured for vMotion latency tolerance.

esx.problem.net.portset.port.full

Failed to apply for free ports

error

Portset {1} has reached the maximum number of ports ({2}). Cannot apply for any more free ports.

esx.problem.net.portset.port.vlan.invalidid

VLAN ID of the port is invalid

error

{1} VLANID {2} is invalid. VLAN ID must be between 0 and 4095.

esx.problem.net.proxyswitch.port.unavailable

Virtual NIC connection to switch failed

warning

Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.

QErr cannot be changed on device. Please change it manually on the device if possible.

warning

QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The system is not configured to change the QErr setting of device. The QErr value supported by system is 0x{3}. Please check the SCSI ChangeQErrSetting configuration value for ESX.

esx.problem.scsi.device.io.qerr.changed

Scsi Device QErr setting changed

warning

QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The device was originally configured to the supported QErr setting of 0x{3}, but this has been changed and could not be changed back.

esx.problem.scsi.device.is.local.failed

Plugin's isLocal entry point failed

warning

Failed to verify if the device {1} from plugin {2} is a local - not shared - device

esx.problem.scsi.device.is.pseudo.failed

Plugin's isPseudo entry point failed

warning

Failed to verify if the device {1} from plugin {2} is a pseudo device

esx.problem.scsi.device.is.ssd.failed

Plugin's isSSD entry point failed

warning

Failed to verify if the device {1} from plugin {2} is a Solid State Disk device

esx.problem.scsi.device.limitreached

Maximum number of storage devices

error

The maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created.

A VFAT filesystem, being used as the host's scratch partition, is full.

error

The host's scratch partition, which is the VFAT filesystem {1} (UUID {2}), is full.

esx.problem.visorfs.inodetable.full

The root filesystem's file table is full.

error

The root filesystem's file table is full. As a result, the file {1} could not be created by the application '{2}'.

esx.problem.visorfs.ramdisk.full

A ramdisk is full.

error

The ramdisk '{1}' is full. As a result, the file {2} could not be written.

esx.problem.visorfs.ramdisk.inodetable.full

A ramdisk's file table is full.

error

The file table of the ramdisk '{1}' is full. As a result, the file {2} could not be created by the application '{3}'.

esx.problem.vm.kill.unexpected.fault.failure

A VM could not fault in a page. The VM is terminated as further progress is impossible.

error

The VM using the config file {1} could not fault in a guest physical page from the hypervisor level swap file at {2}. The VM is terminated as further progress is impossible.

esx.problem.vm.kill.unexpected.forcefulPageRetire

A VM contains a host physical page scheduled for immediate retirement and is forcefully powered off to prevent system instability.

error

The VM using the config file {1} contains the host physical page {2} which was scheduled for immediate retirement. To avoid system instability the VM is forcefully powered off.

esx.problem.vm.kill.unexpected.noSwapResponse

A VM did not respond to swap actions and is forcefully powered off to prevent system instability.

error

The VM using the config file {1} did not respond to {2} swap actions in {3} seconds and is forcefully powered off to prevent system instability.

esx.problem.vm.kill.unexpected.vmtrack

A VM is allocating too many pages while the system is critically low on free memory. It is forcefully terminated to prevent system instability.

error

The VM using the config file {1} is allocating too many pages while the system is critically low on free memory. It is forcefully terminated to prevent system instability.

esx.problem.vmfs.ats.incompatibility.detected

Multi-extent ATS-only VMFS Volume unable to use ATS

error

Multi-extent ATS-only volume '{1}' ({2}) is unable to use ATS because HardwareAcceleratedLocking is disabled on this host: potential for introducing filesystem corruption. Volume should not be used from other hosts.

esx.problem.vmfs.ats.support.lost

Device Backing VMFS has lost ATS Support

error

ATS-Only VMFS volume '{1}' not mounted. Host does not support ATS or ATS initialization has failed.

esx.problem.vmfs.error.volume.is.locked

VMFS Locked By Remote Host

error

Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover.

esx.problem.vmfs.extent.offline

Device backing an extent of a file system is offline.

error

An attached device {1} may be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.

esx.problem.vmfs.extent.online

Device backing an extent of a file system came online

info

Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available.

Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.

esx.problem.vmfs.heartbeat.unrecoverable

VMFS Volume Connectivity Lost

error

Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed.

esx.problem.vmfs.journal.createfailed

No Space To Create VMFS Journal

error

No space for journal on volume {1} ({2}). Volume will remain in read-only metadata mode with limited write support until journal can be created.

esx.problem.vmfs.lock.corruptondisk

VMFS Lock Corruption Detected

error

At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.

esx.problem.vmfs.lockmode.inconsistency.detected

Inconsistent VMFS lockmode detected.

error

Inconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

Inconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

Inconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too.

esx.problem.vmfs.spanned.lockmode.inconsistency.detected

Inconsistent VMFS lockmode detected on spanned volume.

error

Inconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume.

Inconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume.

Inconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume.

esx.problem.vmfs.spanstate.incompatibility.detected

Incompatible VMFS span state detected.

error

Incompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

Incompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

Incompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

esx.problem.vmsyslogd.remote.failure

Remote logging host has become unreachable.

error

The host "{1}" has become unreachable. Remote logging to this host has stopped.

esx.problem.vmsyslogd.storage.logdir.invalid

The configured log directory cannot be used. The default directory will be used instead.

error

The configured log directory {1} cannot be used. The default directory {2} will be used instead.

esx.problem.vmsyslogd.unexpected

Log daemon has failed for an unexpected reason.

error

Log daemon has failed for an unexpected reason: {1}

esx.problem.vpxa.core.dumped

Vpxa crashed and a core file was created.

warning

{1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.

The ESX advanced config option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Please update the config option with a valid vmknic or, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.

vprob.net.proxyswitch.port.unavailable

Virtual NIC connection to switch failed

warning

Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.

Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover.

vprob.vmfs.extent.offline

Device backing an extent of a file system is offline.

error

An attached device {1} might be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.

vprob.vmfs.extent.online

Device backing an extent of a file system is online.

info

Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available.

vSphere HA host monitoring is disabled. No virtual machine failover will occur until Host Monitoring is re-enabled for cluster {computeResource.name} in {datacenter.name}

com.vmware.vc.HA.FailedRestartAfterIsolationEvent

vSphere HA failed to restart a network isolated virtual machine

error

vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} after it was powered off in response to a network isolation event

vSphere HA was unable to restart virtual machine {vm.name} after it was powered off in response to a network isolation event

vSphere HA was unable to restart virtual machine {vm.name} after it was powered off in response to a network isolation event

vSphere HA was unable to restart this virtual machine after it was powered off in response to a network isolation event

vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on.

vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} because vCloud Distributed Storage is enabled but the host does not support that feature

vSphere HA cannot be configured on host {host.name} because vCloud Distributed Storage is enabled but the host does not support that feature

vSphere HA cannot be configured because vCloud Distributed Storage is enabled but the host does not support that feature

vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature

vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature

com.vmware.vc.HA.HostHasNoIsolationAddrsDefined

Host has no vSphere HA isolation addresses

error

Host {host.name} in cluster {computeResource.name} has no isolation addresses defined as required by vSphere HA

Host {host.name} has no isolation addresses defined as required by vSphere HA

This host has no isolation addresses defined as required by vSphere HA

Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA.

com.vmware.vc.HA.HostHasNoMountedDatastores

vSphere HA cannot be configured on this host because there are no mounted datastores.

error

vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} because there are no mounted datastores.

vSphere HA cannot be configured on {host.name} because there are no mounted datastores.

vSphere HA cannot be configured on this host because there are no mounted datastores.

vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores.

vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores.

com.vmware.vc.HA.HostHasNoSslThumbprint

vSphere HA requires a SSL Thumbprint for host

error

vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified.

vSphere HA cannot be configured on {host.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified.

vSphere HA cannot be configured on this host because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for this host has been verified.

vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified.

com.vmware.vc.HA.HostIncompatibleWithHA

Host is incompatible with vSphere HA

error

The product version of host {host.name} in cluster {computeResource.name} is incompatible with vSphere HA.

The product version of host {host.name} is incompatible with vSphere HA.

The product version of this host is incompatible with vSphere HA.

The product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with vSphere HA.

com.vmware.vc.HA.HostPartitionedFromMasterEvent

vSphere HA detected a network-partitioned host

warning

vSphere HA detected that host {host.name} is in a different network partition than the master to which vCenter Server is connected in {computeResource.name}

vSphere HA detected that host {host.name} is in a different network partition than the master to which vCenter Server is connected

vSphere HA detected that this host is in a different network partition than the master to which vCenter Server is connected

vSphere HA detected that host {host.name} is in a different network partition than the master {computeResource.name} in {datacenter.name}

com.vmware.vc.HA.HostUnconfigureError

vSphere HA agent unconfigure failed on host

warning

There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name}. To solve this problem, reconnect the host to vCenter Server.

There was an error unconfiguring the vSphere HA agent on host {host.name}. To solve this problem, reconnect the host to vCenter Server.

There was an error unconfiguring the vSphere HA agent on this host. To solve this problem, reconnect the host to vCenter Server.

There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, reconnect the host to vCenter Server.

com.vmware.vc.HA.VMIsHADisabledIsolationEvent

vSphere HA did not perform an isolation response for vm because its VM restart priority is Disabled

info

vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} because its VM restart priority is Disabled

vSphere HA did not perform an isolation response for {vm.name} because its VM restart priority is Disabled

vSphere HA did not perform an isolation response for {vm.name} because its VM restart priority is Disabled

vSphere HA did not perform an isolation response because its VM restart priority is Disabled

vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled

com.vmware.vc.HA.VMIsHADisabledRestartEvent

vSphere HA did not attempt to restart vm because its VM restart priority is Disabled

info

vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} because its VM restart priority is Disabled

vSphere HA did not attempt to restart {vm.name} because its VM restart priority is Disabled

vSphere HA did not attempt to restart {vm.name} because its VM restart priority is Disabled

vSphere HA did not attempt to restart vm because its VM restart priority is Disabled

vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled

Virtual machine {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected.

com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull

vSphere HA has unprotected out-of-disk-space VM

info

vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} because it ran out of disk space

vSphere HA has unprotected virtual machine {vm.name} because it ran out of disk space

vSphere HA has unprotected virtual machine {vm.name} because it ran out of disk space

vSphere HA has unprotected this virtual machine because it ran out of disk space

vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space

com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore

vSphere HA did not terminate a VM affected by an inaccessible datastore: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

warning

vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

vSphere HA did not terminate this VM affected by an inaccessible datastore: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

com.vmware.vc.HA.VmcpStorageFailureCleared

Datastore {ds.name} mounted on this host was inaccessible. vSphere HA detected that the condition was cleared and the datastore is now accessible

info

Datastore {ds.name} mounted on host {host.name} in cluster {computeResource.name} was inaccessible. vSphere HA detected that the condition was cleared and the datastore is now accessible

Datastore {ds.name} mounted on host {host.name} was inaccessible. vSphere HA detected that the condition was cleared and the datastore is now accessible

Datastore {ds.name} mounted on this host was inaccessible. vSphere HA detected that the condition was cleared and the datastore is now accessible

Datastore {ds.name} mounted on host {host.name} was inaccessible. The condition was cleared and the datastore is now accessible

com.vmware.vc.HA.VmcpStorageFailureDetectedForVm

vSphere HA detected that a datastore was inaccessible. This affected the VM with files on the datastore

warning

vSphere HA detected that a datastore mounted on host {host.name} in cluster {computeResource.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore

vSphere HA detected that a datastore mounted on host {host.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore

vSphere HA detected that a datastore mounted on this host was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore

vSphere HA detected that a datastore was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected the VM with files on the datastore

vSphere HA detected that a datastore mounted on host {host.name} in cluster {computeResource.name} in {datacenter.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore

com.vmware.vc.HA.VmcpTerminateVmAborted

vSphere HA was unable to terminate VM affected by an inaccessible datastore after it exhausted the retries

error

vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} after {retryTimes} retries

vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} after {retryTimes} retries

vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on this host after {retryTimes} retries

vSphere HA was unable to terminate this VM affected by an inaccessible datastore after {retryTimes} retries

vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries

com.vmware.vc.HA.VmcpTerminatingVm

vSphere HA attempted to terminate a VM affected by an inaccessible datastore

warning

vSphere HA attempted to terminate VM {vm.name} on host {host.name} in cluster {computeResource.name} because the VM was affected by an inaccessible datastore

vSphere HA attempted to terminate VM {vm.name} on host {host.name} because the VM was affected by an inaccessible datastore

vSphere HA attempted to terminate VM {vm.name} on this host because the VM was affected by an inaccessible datastore

vSphere HA attempted to terminate this VM because the VM was affected by an inaccessible datastore

vSphere HA attempted to terminate VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM was affected by an inaccessible datastore

HA VM Component Protection protects virtual machine {vm.name} on host {host.name} as non-FT virtual machine since it has been in the needSecondary state too long

HA VM Component Protection protects virtual machine {vm.name} on host {host.name} as non-FT virtual machine because it has been in the needSecondary state too long

HA VM Component Protection protects virtual machine {vm.name} as non-FT virtual machine because it has been in the needSecondary state too long

HA VM Component Protection protects this virtual machine as non-FT virtual machine because it has been in the needSecondary state too long

HA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long

com.vmware.vc.vcp.VcpNoActionEvent

No action on VM

info

HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} due to the feature configuration setting

HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} due to the feature configuration setting

HA VM Component Protection did not take action on virtual machine {vm.name} due to the feature configuration setting

HA VM Component Protection did not take action due to the feature configuration setting

HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting

{host.name} with Virtual SAN service enabled is not in the vCenter cluster {computeResource.name}

{host.name} with Virtual SAN service enabled is not in the vCenter cluster

Host with Virtual SAN service enabled is not in the vCenter cluster

{host.name} with Virtual SAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name}

{host.name} with Virtual SAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name}

com.vmware.vc.vsan.HostNotInVsanClusterEvent

Host is in a Virtual SAN cluster but does not have Virtual SAN service enabled

error

{host.name} is in a Virtual SAN cluster {computeResource.name} but does not have Virtual SAN service enabled

{host.name} is in a Virtual SAN cluster but does not have Virtual SAN service enabled

Host is in a Virtual SAN cluster but does not have Virtual SAN service enabled

{host.name} is in a Virtual SAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have Virtual SAN service enabled

{host.name} is in a Virtual SAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have Virtual SAN service enabled

com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent

Virtual SAN host vendor provider has been successfully unregistered

info

Virtual SAN vendor provider {host.name} has been successfully unregistered

Virtual SAN vendor provider {host.name} has been successfully unregistered

Virtual SAN vendor provider {host.name} has been successfully unregistered

Virtual SAN vendor provider {host.name} has been successfully unregistered

com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent

Virtual SAN host vendor provider registration succeeded

info

Virtual SAN vendor provider {host.name} has been successfully registered

Virtual SAN vendor provider {host.name} has been successfully registered

Virtual SAN vendor provider {host.name} has been successfully registered

Virtual SAN vendor provider {host.name} has been successfully registered

com.vmware.vc.vsan.NetworkMisConfiguredEvent

Virtual SAN network is not configured

error

Virtual SAN network is not configured on {host.name} in cluster {computeResource.name}

Virtual SAN network is not configured on {host.name}

Virtual SAN network is not configured

Virtual SAN network is not configured on {host.name}, in cluster {computeResource.name}, and in datacenter {datacenter.name}

Virtual SAN network is not configured on {host.name}, in cluster {computeResource.name}, and in datacenter {datacenter.name}

esx.audit.dcui.defaults.factoryrestore

Restoring factory defaults through DCUI.

warning

The host has been restored to default factory settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

esx.audit.dcui.disabled

The DCUI has been disabled.

info

The DCUI has been disabled.

esx.audit.dcui.enabled

The DCUI has been enabled.

info

The DCUI has been enabled.

esx.audit.dcui.host.reboot

Rebooting host through DCUI.

warning

The host is being rebooted through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

esx.audit.dcui.host.shutdown

Shutting down host through DCUI.

warning

The host is being shut down through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

esx.audit.dcui.hostagents.restart

Restarting host agents through DCUI.

info

The management agents on the host are being restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

esx.audit.dcui.network.factoryrestore

Factory network settings restored through DCUI.

warning

The host has been restored to factory network settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

esx.audit.esximage.install.novalidation

Attempting to install an image profile with validation disabled.

warning

Attempting to install an image profile with validation disabled. This may result in an image with unsatisfied dependencies, file or package conflicts, and potential security violations.

esx.audit.host.boot

Host has booted.

info

Host has booted.

esx.audit.host.stop.reboot

Host is rebooting.

info

Host is rebooting.

esx.audit.host.stop.shutdown

Host is shutting down.

info

Host is shutting down.

esx.audit.lockdownmode.disabled

Administrator access to the host has been enabled.

info

Administrator access to the host has been enabled.

esx.audit.lockdownmode.enabled

Administrator access to the host has been disabled.

info

Administrator access to the host has been disabled.

esx.audit.lockdownmode.exceptions.changed

List of lockdown exception users has been changed.

info

List of lockdown exception users has been changed.

esx.audit.maintenancemode.canceled

The host has canceled entering maintenance mode.

info

The host has canceled entering maintenance mode.

esx.audit.maintenancemode.entered

The host has entered maintenance mode.

info

The host has entered maintenance mode.

esx.audit.maintenancemode.entering

The host has begun entering maintenance mode.

info

The host has begun entering maintenance mode.

esx.audit.maintenancemode.exited

The host has exited maintenance mode.

info

The host has exited maintenance mode.

esx.audit.net.firewall.disabled

Firewall has been disabled.

warning

Firewall has been disabled.

esx.audit.shell.disabled

The ESXi command line shell has been disabled.

info

The ESXi command line shell has been disabled.

esx.audit.shell.enabled

The ESXi command line shell has been enabled.

info

The ESXi command line shell has been enabled.

esx.audit.ssh.disabled

SSH access has been disabled.

info

SSH access has been disabled.

esx.audit.ssh.enabled

SSH access has been enabled.

info

SSH access has been enabled.

esx.audit.usb.config.changed

USB configuration has changed.

info

USB configuration has changed on host {host.name} in cluster {computeResource.name}.

USB configuration has changed on host {host.name}.

USB configuration has changed.

USB configuration has changed on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

esx.audit.vmfs.lvm.device.discovered

LVM device discovered.

info

One or more LVM devices have been discovered on this host.

esx.audit.vsan.clustering.enabled

Virtual SAN clustering services have been enabled.

info

Virtual SAN clustering and directory services have been enabled.

Virtual SAN clustering and directory services have been enabled.

esx.audit.vsan.net.vnic.added

Virtual SAN virtual NIC has been added.

info

Virtual SAN virtual NIC has been added.

Virtual SAN virtual NIC has been added.

esx.clear.coredump.configured

A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.

info

A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.

A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.

esx.clear.coredump.configured2

At least one coredump target has been configured. Host core dumps will be saved.

info

At least one coredump target has been configured. Host core dumps will be saved.

At least one coredump target has been configured. Host core dumps will be saved.

esx.problem.coredump.unconfigured

No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.

warning

No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.

No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.

esx.problem.coredump.unconfigured2

No coredump target has been configured. Host core dumps cannot be saved.

warning

No coredump target has been configured. Host core dumps cannot be saved.

No coredump target has been configured. Host core dumps cannot be saved.

esx.problem.cpu.amd.mce.dram.disabled

DRAM ECC not enabled. Please enable it in BIOS.

error

DRAM ECC not enabled. Please enable it in BIOS.

esx.problem.cpu.intel.ioapic.listing.error

Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform.

error

Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform.

esx.problem.cpu.mce.invalid

MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware.

error

MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware.

esx.problem.host.coredump

An unread host kernel core dump has been found.

warning

An unread host kernel core dump has been found.

esx.problem.migrate.vmotion.default.heap.create.failed

Failed to create default migration heap

warning

Failed to create default migration heap. This might be the result of severe host memory pressure or virtual address space exhaustion. Migration might still be possible, but will be unreliable in cases of extreme host memory pressure.

esx.problem.scsi.apd.event.descriptor.alloc.failed

No memory to allocate APD Event

warning

No memory to allocate APD (All Paths Down) event subsystem.

esx.problem.scsi.device.io.invalid.disk.qfull.value

SCSI device queue parameters incorrectly set.

warning

QFullSampleSize should be bigger than QFullThreshold. LUN queue depth throttling algorithm will not function as expected. Please set the QFullSampleSize and QFullThreshold disk configuration values in ESX correctly.
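
The values referred to here are the host advanced settings Disk.QFullSampleSize and Disk.QFullThreshold. A hedged pyVmomi sketch follows, assuming host is a vim.HostSystem that has already been looked up; the values 32 and 8 are illustrative, the only requirement stated by the event being that the sample size exceed the threshold.

    # Illustrative sketch: set the queue-full throttling options so that
    # Disk.QFullSampleSize is larger than Disk.QFullThreshold.
    from pyVmomi import vim

    opt_mgr = host.configManager.advancedOption   # the host's OptionManager
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Disk.QFullSampleSize", value=32),
        vim.option.OptionValue(key="Disk.QFullThreshold", value=8),
    ])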

esx.problem.syslog.config

System logging is not configured.

warning

System logging is not configured on host {host.name}.

System logging is not configured on host {host.name}. Please check Syslog options for the host under Configuration -> Software -> Advanced Settings in vSphere client.
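
One way to confirm whether a remote log target has been configured is to read the Syslog.global.logHost advanced option on the affected host. A minimal pyVmomi sketch, assuming host is the vim.HostSystem in question:

    # Minimal sketch: report whether a remote syslog target is configured on a host.
    # 'host' is assumed to be a vim.HostSystem obtained elsewhere.
    opts = host.configManager.advancedOption.QueryOptions("Syslog.global.logHost")
    log_host = opts[0].value if opts else None
    if log_host:
        print("Remote syslog target:", log_host)
    else:
        print("No remote syslog target configured; logs remain local to the host.")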

esx.problem.syslog.nonpersistent

System logs are stored on non-persistent storage.

warning

System logs on host {host.name} are stored on non-persistent storage.

System logs on host {host.name} are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

esx.problem.visorfs.failure

An operation on the root filesystem has failed.

error

An operation on the root filesystem has failed.

esx.problem.vmsyslogd.storage.failure

Logging to storage has failed.

error

Logging to storage has failed. Logs are no longer being stored locally on this host.

A host local port is created to recover from management network connectivity loss.

info

A host local port {hostLocalPort.portKey} is created on vSphere Distributed Switch {hostLocalPort.switchUuid} to recover from management network connectivity loss on virtual NIC device {hostLocalPort.vnic}.

A host local port {hostLocalPort.portKey} is created on vSphere Distributed Switch {hostLocalPort.switchUuid} to recover from management network connectivity loss on virtual NIC device {hostLocalPort.vnic} on the host {host.name}.

HostMissingNetworksEvent

Host is missing vSphere HA networks

error

Host {host.name} does not have the following networks used by other hosts for vSphere HA communication:{ips}. Consider using vSphere HA advanced option das.allowNetwork to control network usage

Host {host.name} does not have the following networks used by other hosts for vSphere HA communication:{ips}. Consider using vSphere HA advanced option das.allowNetwork to control network usage

HostMonitoringStateChangedEvent

vSphere HA host monitoring state changed

info

vSphere HA host monitoring state in {computeResource.name} changed to {state.@enum.DasConfigInfo.ServiceState}
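
The state reported here is the cluster's dasConfig.hostMonitoring setting, which can be toggled with a cluster reconfiguration. A hedged pyVmomi sketch, assuming cluster is the vim.ClusterComputeResource to modify:

    # Hedged sketch: re-enable vSphere HA host monitoring on a cluster.
    from pyVmomi import vim

    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(hostMonitoring="enabled"),  # or "disabled"
    )
    # modify=True merges the spec into the existing configuration rather than replacing it.
    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)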

The MTU configured in the vSphere Distributed Switch does not match the physical switch connected to the physical NIC.

error

The MTU configured in the vSphere Distributed Switch does not match the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name}

The MTU configured in the vSphere Distributed Switch does not match the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name}

The MTU configured in the vSphere Distributed Switch does not match the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}

NASDatastoreCreatedEvent

NAS datastore created

info

Created NAS datastore {datastore.name} on {host.name}

Created NAS datastore {datastore.name} on {host.name}

Created NAS datastore {datastore.name}

Created NAS datastore {datastore.name} on {host.name} in {datacenter.name}

NetworkRollbackEvent

Network configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server.

error

Network configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server.

Network configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server.

Network configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server.

Network configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server.

NoAccessUserEvent

No access for user

error

Cannot log in user {userName}@{ipAddress}: no permission

NoDatastoresConfiguredEvent

No datastores configured

info

No datastores have been configured

No datastores have been configured on the host {host.name}

NoLicenseEvent

No license

error

A required license {feature.featureName} is not reserved

NoMaintenanceModeDrsRecommendationForVM

No maintenance mode DRS recommendation for the VM

info

Unable to automatically migrate {vm.name}

Unable to automatically migrate from {host.name}

Unable to automatically migrate {vm.name} from {host.name}

NonVIWorkloadDetectedOnDatastoreEvent

Unmanaged workload detected on SIOC-enabled datastore

info

An unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}.

An unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}.

An unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}.

An unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}.

An unmanaged I/O workload is detected on a SIOC-enabled datastore: {datastore.name}.

Insufficient resources to fail over {vm.name} in {computeResource.name}. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault}

Insufficient resources to fail over {vm.name}. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault}

Insufficient resources to fail over {vm.name}. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault}

Insufficient resources to fail over this virtual machine. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault}

Insufficient resources to fail over {vm.name} in {computeResource.name} that resides in {datacenter.name}. vSphere HA will retry the fail over when enough resources are available. Reason: {reason.@enum.fdm.placementFault}

OutOfSyncDvsHost

The vSphere Distributed Switch configuration on some hosts differed from that of the vCenter Server.

warning

The vSphere Distributed Switch configuration on some hosts differed from that of the vCenter Server.

The vSphere Distributed Switch configuration on some hosts differed from that of the vCenter Server.

PermissionAddedEvent

Permission added

info

Permission created for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate}
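
Events of this kind are raised by AuthorizationManager operations. Below is a minimal pyVmomi sketch that grants a role on an inventory object, which in turn produces a PermissionAddedEvent like the message above; the principal, role name, and target entity are placeholders for your environment.

    # Minimal sketch: grant a role to a user on an inventory object.
    from pyVmomi import vim

    auth = si.content.authorizationManager
    # Resolve the roleId of an existing role by its display name (e.g. "ReadOnly").
    role_id = next(r.roleId for r in auth.roleList if r.name == "ReadOnly")
    perm = vim.AuthorizationManager.Permission(
        principal="VSPHERE.LOCAL\\some-user",   # hypothetical principal
        group=False,
        roleId=role_id,
        propagate=True,
    )
    # 'entity' is assumed to be a managed entity (datacenter, folder, VM, ...) looked up elsewhere.
    auth.SetEntityPermissions(entity=entity, permission=[perm])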

PermissionRemovedEvent

Permission removed

info

Permission rule removed for {principal} on {entity.name}

PermissionUpdatedEvent

Permission updated

info

Permission changed for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate}

ProfileAssociatedEvent

Profile attached to host

info

Profile {profile.name} has been attached.

Profile {profile.name} has been attached.

Profile {profile.name} has been attached to the host.

Profile {profile.name} attached.

ProfileChangedEvent

Profile was changed

info

Profile {profile.name} was changed.

Profile {profile.name} was changed.

Profile {profile.name} was changed.

Profile {profile.name} was changed.

ProfileCreatedEvent

Profile created

info

Profile is created.

ProfileDissociatedEvent

Profile detached from host

info

Profile {profile.name} has been detached.

Profile {profile.name} has been detached.

Profile {profile.name} has been detached from the host.

Profile {profile.name} detached.

ProfileReferenceHostChangedEvent

The profile reference host was changed

info

The profile {profile.name} reference host was changed to {referenceHost.name}.

The profile {profile.name} reference host was changed to {referenceHost.name}.

The profile {profile.name} reference host was changed to {referenceHost.name}.

Profile {profile.name} reference host changed.

ProfileRemovedEvent

Profile removed

info

Profile {profile.name} was removed.

Profile {profile.name} was removed.

Profile was removed.

RecoveryEvent

Recovery completed on the host.

info

The host {hostName} network connectivity was recovered on the virtual management NIC {vnic}. A new port {portKey} was created on vSphere Distributed Switch {dvsUuid}.

The host {hostName} network connectivity was recovered on the virtual management NIC {vnic}. A new port {portKey} was created on vSphere Distributed Switch {dvsUuid}.

The host {hostName} network connectivity was recovered on the management virtual NIC {vnic} by connecting to a new port {portKey} on the vSphere Distributed Switch {dvsUuid}.

The configured VLAN in the vSphere Distributed Switch was trunked by the physical switch.

info

The configured VLAN in the vSphere Distributed Switch was trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name}.

The configured VLAN in the vSphere Distributed Switch was trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name}.

The configured VLAN in the vSphere Distributed Switch was trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}.

UplinkPortVlanUntrunkedEvent

Not all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch.

error

Not all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name}.

Not all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name}.

Not all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}.

{vm.name} on {host.name} in cluster {computeResource.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}.

{vm.name} on {host.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}.

{vm.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}

This virtual machine reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}

{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by vSphere HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}.

vSphere HA unsuccessfully failed over {vm.name} on {host.name} in cluster {computeResource.name}. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg}

vSphere HA unsuccessfully failed over {vm.name} on {host.name}. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg}

vSphere HA unsuccessfully failed over {vm.name}. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg}

vSphere HA unsuccessfully failed over this virtual machine. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg}

vSphere HA unsuccessfully failed over {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. vSphere HA will retry if the maximum number of attempts has not been exceeded. Reason: {reason.msg}

VmFaultToleranceStateChangedEvent

VM Fault Tolerance state changed

info

Fault Tolerance state of {vm.name} on host {host.name} in cluster {computeResource.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState}

Fault Tolerance state of {vm.name} on host {host.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState}

Fault Tolerance state of {vm.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState}

Fault Tolerance state changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState}

Fault Tolerance state of {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState}

VmFaultToleranceTurnedOffEvent

VM Fault Tolerance turned off

info

Fault Tolerance protection has been turned off for {vm.name} on host {host.name} in cluster {computeResource.name}

Fault Tolerance protection has been turned off for {vm.name} on host {host.name}

Fault Tolerance protection has been turned off for {vm.name}

Fault Tolerance protection has been turned off for this virtual machine

Fault Tolerance protection has been turned off for {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}

VmFaultToleranceVmTerminatedEvent

Fault Tolerance VM terminated

info

The Fault Tolerance VM {vm.name} on host {host.name} in cluster {computeResource.name} has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}

The Fault Tolerance VM {vm.name} on host {host.name} has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}

The Fault Tolerance VM {vm.name} has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}

The Fault Tolerance VM has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}

The Fault Tolerance VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}

VMFSDatastoreCreatedEvent

VMFS datastore created

info

Created VMFS datastore {datastore.name} on {host.name}

Created VMFS datastore {datastore.name} on {host.name}

Created VMFS datastore {datastore.name}

Created VMFS datastore {datastore.name} on {host.name} in {datacenter.name}

vSphere HA shut down this virtual machine on the isolated host {isolatedHost.name}: {shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation}

vSphere HA shut down {vm.name} on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}: {shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation}