Upgrade ESXi

To upgrade the ESXi software on a host, you must power off all virtual machines on that host or migrate them to a different host.

Instructions for installing or upgrading to a specific ESXi release are available in the release notes for that version. For example, see the VMware ESX 4.1 Update 1 Release Notes.

After upgrading ESXi, or whenever moving a virtual machine to a different host running a different version of ESXi, VMware Tools must be upgraded on the UC applications so that the tools version shows "Up to Date" in the vSphere Client (see VMware Tools).
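One quick way to confirm the installed tools version from inside a guest is the tools command-line utility. This is a sketch, assuming a Linux guest with a recent VMware Tools release installed (older releases ship vmware-toolbox instead):

```shell
# Inside the guest OS: report the installed VMware Tools version.
# vmware-toolbox-cmd ships with recent VMware Tools releases.
vmware-toolbox-cmd -v
```

The vSphere Client remains the authoritative place to check for the "Up to Date" status after a host upgrade.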

Monitor Your Hardware Health

When deployed in a virtualized environment, the UC applications do not monitor the hardware. Hardware must be monitored independently from the UC applications. There are multiple ways to do this:

Out-of-Band Hardware Monitoring

In-Band Hardware Monitoring

Each method is explained below.

Out-of-Band Hardware Monitoring

Out-of-Band Hardware Monitoring uses the Intelligent Platform Management Interface (IPMI) to communicate with the Baseboard Management Controller (BMC). Common IPMI interfaces are HP Integrated Lights Out (iLO) and Cisco Integrated Management Controller (CIMC). IPMI allows the user to inspect system sensors, power cycle the chassis, obtain a remote console, and manipulate virtual media. UCS C-series hardware provides IPMI over LAN for use by tools such as ipmitool.

CIMC

CIMC provides GUI and CLI interfaces to hardware sensors and other status. This information is available via ipmitool using IPMI over LAN. The login screen of CIMC shows hardware status:

See the selections for hardware sensors and inventory in the following image:

IPMI over LAN (ipmitool, ipmiutil)

ipmiutil connects to the CIMC over LAN on UDP port 623 (ASF remote management). The command has a huge number of options, some of which can be used to set up Platform Event Filtering (PEF) and alerting. We have not set up alerting from ipmitool.
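As a sketch of reading sensor data out-of-band with ipmitool, assuming IPMI over LAN is enabled on the CIMC and an admin account is configured (the host name, user, and password below are placeholders):

```shell
# Read the sensor data repository and chassis power state from the BMC.
# The lanplus interface speaks IPMI v2.0/RMCP+ over UDP port 623.
ipmitool -I lanplus -H <cimc-address> -U admin -P <password> sdr list
ipmitool -I lanplus -H <cimc-address> -U admin -P <password> chassis status
```

The same sensor readings are what the CIMC GUI displays on its hardware status pages.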

In-Band Hardware Monitoring

In-Band hardware monitoring talks to an agent or provider on the host operating system via the Common Information Model (CIM). CIM defines standard XML-based representations of computer equipment, including RAM, CPU, and peripherals such as disks, RAID controllers, and other options.

CIM data is monitored by:

Third-party tools like IBM Director

Linux tools such as wbemcli

vCenter

ESXi Access

ESXi is a closed platform. The following network ports are provided:

TCP 22 (ssh) Same as "unsupported" login, only if enabled

TCP 80 (http)

TCP 443 (vSphere, VMware API)

TCP 902 (older VMware API)

UDP 427 (SLP)

TCP 5989 (WBEM)
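A quick reachability check of these management ports can be done with nmap from a monitoring station; this is a sketch, with <esxi-host> as a placeholder:

```shell
# Probe the TCP management services listed above.
nmap -Pn -p 22,80,443,902,5989 <esxi-host>
# SLP runs over UDP and needs a separate UDP scan.
nmap -sU -p 427 <esxi-host>
```

Note that TCP 22 will only show open if the "unsupported" SSH access has been enabled on the host.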

vSphere Client

vSphere Client, when logged in directly to an ESXi host, provides a Health and Status selection under the Configuration tab. From vCenter, this information is available from the Hardware tab when a host is selected in the Hosts and Clusters view. Health and Status gives an overview of individual physical drives and logical drive groups (RAID arrays). This information is transferred via the following CIM classes in the root/cimv2 namespace:

VMware_HHRCDiskDrive: information about each physical drive

VMware_HHRCStorageVolume: information about the logical volumes (RAID groups), including:

Member drives (<drive number>e<enclosure number>)

Array state (Optimal, Degraded, etc.)
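These classes can also be queried directly over WBEM. A sketch using the sblim wbemcli client, assuming root credentials and TCP 5989 reachable on the host (<esxi-host> and <password> are placeholders):

```shell
# Enumerate instances (ei) of the HHRC disk and volume classes
# in the root/cimv2 namespace. -noverify skips certificate
# checking on the https connection.
wbemcli ei -noverify 'https://root:<password>@<esxi-host>:5989/root/cimv2:VMware_HHRCDiskDrive'
wbemcli ei -noverify 'https://root:<password>@<esxi-host>:5989/root/cimv2:VMware_HHRCStorageVolume'
```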

Here is an image of the vSphere Client showing sensor data, captured by connecting the vSphere Client directly to the ESXi host.

VMware API

The VMware API is a set of Perl and Java libraries and classes that talks SOAP over port 443 to the ESXi host to implement the functions used by vSphere Client and vCenter.

Everything that can be done from vSphere Client can be done from Perl scripts using the VMware Perl SDK, including virtual machine creation, configuration, and management; performance statistics collection; virtual switch administration; and virtual host administration.
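As a minimal connectivity check, the vSphere SDK for Perl ships sample scripts; the invocation below is a sketch assuming the SDK samples are installed and that connect.pl is among them (<esxi-host> and <password> are placeholders):

```shell
# Verify SOAP/API connectivity to the host over port 443 using a
# bundled SDK sample script (path relative to the SDK samples tree).
perl apps/general/connect.pl --server <esxi-host> --username root --password <password>
```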

SLP discovery of ESXi hosts has been unreliable, at least in our lab. Perhaps the multicast group registrations via IGMP are not being seen or processed by our lab switches, and the multicast traffic is not routed to the management LAN of some of these ESXi hosts.
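One way to test SLP discovery from a machine on the same LAN segment is OpenSLP's slptool; this sketch assumes the host advertises its CIM server under the standard service:wbem service type:

```shell
# Query for WBEM servers via SLP multicast. If the ESXi host's SLP
# advertisement is reachable, its service URL is printed; no output
# suggests the multicast traffic is being dropped somewhere.
slptool findsrvs service:wbem
```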

wbemcli

wbemcli is a command line interface to CIM servers, and can be used to dump information from the host's CIM providers. The LSI CIM namespace is lsi/lsimr12; the following command enumerates all classes in that namespace:
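A sketch of such an enumeration using the sblim wbemcli client's ecn (enumerate class names) operation, assuming root credentials and TCP 5989 reachable (<esxi-host> and <password> are placeholders):

```shell
# Enumerate class names (ecn) in the LSI provider namespace
# lsi/lsimr12. -noverify skips TLS certificate verification.
wbemcli ecn -noverify 'https://root:<password>@<esxi-host>:5989/lsi/lsimr12'
```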

While the RAID array is being rebuilt we see the following events in the Event Log:

LSI MegaRAID Storage Manager (MSM)

The LSI MSM Release 3.6 (downloaded as 2.91) is able to monitor, configure, and repair RAID arrays while ESXi is active. This LSI MSM depends on SLP via multicast to find servers. Multicast to an ESXi host appears to be unreliable, and server discovery of ESXi hosts suffers as a result. This release of MSM does not support alerting on specified events.

LSI MSM 6.9 does appear to support email alerting. However it still depends on SLP discovery and so must be installed on a VM that shares vSwitch0 with its ESXi host. Once the server is discovered:

syslog

Direct inspection of ESXi syslog requires logging in to the console via the CIMC KVM:

Press ALT-F1

Enter unsupported followed by the root password

vi /var/log/messages

It is possible to enable remote syslog in ESXi by specifying the host name or IP address of a syslog server under the Configuration tab > Advanced Settings (Syslog.Remote.Hostname). We used our CUCM publisher as the syslog server and it worked fine.
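The same setting can be applied without the GUI. This is a sketch using the vicfg-syslog command that ships with the vSphere CLI (<esxi-host> and <syslog-host> are placeholders):

```shell
# Point the host's syslog output at a remote syslog server
# (equivalent to setting Syslog.Remote.Hostname in Advanced Settings).
vicfg-syslog --server <esxi-host> --username root --setserver <syslog-host>
```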

The following syslog messages are generated for RAID drive rebuild progress indication:

Upgrading Firmware on your TRCs

When running virtualized, UC applications do not manage the firmware on the physical server (host).

The customer must manage the firmware manually, monitoring new releases of firmware published by the hardware vendor and upgrading when necessary based upon the recommendations of the hardware vendor and VMware.

For deployments on Cisco UC hardware, instructions for upgrading the firmware are available in the Release Notes for each version of posted firmware. It is important to understand that the upgrade procedure may vary between releases of firmware. For example, see the firmware releases for the UCS C-series for details on the rackmount servers.

For installation and configuration information on UCS servers, refer to the documentation roadmap for either the B-Series or C-Series servers:

Backup, Restore, and Server Recovery

Disaster recovery for Cisco Unified Communications application virtual machines uses the same in-host techniques as Cisco Unified Communications applications on physical servers: the same backup options are available with Cisco Unified Communications running on ESXi as on physical servers.

Other backup and restore techniques available in a virtualized environment are currently not supported.