
Outbound Option with SIP Dialer (co-locate the SIP Dialer and MR PG with the Agent PG in the same VM guest; a Generic PG can also be co-located with the Agent PG in the same VM guest; the published agent capacity formula with Outbound Option applies).

Unified CVP is supported with Unified CCE in the UCS B-Series solution. Refer to the CVP product-specific pages for details.

The following deployments and Unified CCE components have not been qualified and are not supported in virtualization:

Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.

In addition, note the following expectations for UCS hardware points of failure:

In single-point-of-failure testing of the communication path performed by Cisco on the UCCE UCS B-Series High Availability (HA) deployment, system call handling was observed to be degraded for up to 45 seconds while the system recovered from the fault, depending upon the subsystem faulted. Single points of failure will not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.

Multiple points of failure on the UCCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.

=== B-Series Considerations ===

When deploying Clustering Over the WAN with B-Series hardware, use of the Cisco UCS M81KR Virtual Interface Card is mandatory.

New B-Series deployments using Clustering Over the WAN must use a Nexus 7000 Series / Nexus 5000 Series vPC infrastructure, or a Cisco Catalyst 6500 Series Virtual Switching Supervisor Engine 720-10G.

=== C-Series Considerations ===

If deploying Clustering Over the WAN with C-Series hardware, do not trunk public and private networks. You must use separate physical interfaces off of the C-Series servers to create the public and private connections. See the configuration guidelines in Network Requirements for C-210 M1 Servers.

== Notes for Deploying UCCE Applications on UCS B-Series Hardware with SAN ==

In Storage Area Network (SAN) architecture, storage consists of a series of arrays of Redundant Array of Independent Disks (RAIDs). A Logical Unit Number (LUN) that represents a device identifier can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.

In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.

Keep the following considerations in mind when deploying UCCE applications on UCS B-Series hardware with SAN:

Each Historical Data Server (HDS) requires a dedicated LUN and a datastore with a 2 MB block size. No other application can reside on the same datastore as the HDS. The HDS requires a 2 MB block size to accommodate the 500 GB OVA disk size, which exceeds the 256 GB file size supported by the default 1 MB block size for datastores. The HDS block size is configured in VMware at datastore creation.
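The 256 GB figure comes from the VMFS-3 block-size limits (1 MB block supports a 256 GB maximum file size, 2 MB supports 512 GB, 4 MB supports 1 TB, 8 MB supports 2 TB). As a rough illustration only (not a Cisco or VMware tool), the following sketch picks the smallest block size that can hold a given virtual disk:

<source lang="python">
# Illustrative check: choose the smallest VMFS-3 block size whose maximum file
# size can hold a given virtual disk, e.g. the 500 GB HDS vDisk from the OVA table.

# Standard VMFS-3 limits: block size (MB) -> maximum file size (GB)
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def minimum_block_size_mb(vdisk_gb: float) -> int:
    """Return the smallest VMFS-3 block size (MB) that can hold a vdisk_gb GB file."""
    for block_mb in sorted(VMFS3_MAX_FILE_GB):
        if vdisk_gb <= VMFS3_MAX_FILE_GB[block_mb]:
            return block_mb
    raise ValueError(f"{vdisk_gb} GB exceeds the largest VMFS-3 file size")

if __name__ == "__main__":
    hds_vdisk_gb = 500  # HDS OVA disk size from the component capacity table
    print(minimum_block_size_mb(hds_vdisk_gb))  # -> 2, matching the 2 MB requirement
</source>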

To help keep your system running most efficiently, schedule automatic database purging to run when your system is least busy.

Kernel Disk Command Latency should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. A high Kernel Command Latency indicates that there is a lot of queuing in the ESXi kernel.

The SAN design and configuration must keep the following Windows performance counters on UCCE VMs within these limits (a simple sanity-check sketch appears after these storage considerations):

AverageDiskQueueLength must remain less than 1.5 × (the total number of disks in the array).

% Disk Time must remain less than 60%.

The total size of all virtual machines on a datastore (total size = VM disk size + RAM copy) must not exceed 90% of the capacity of that datastore.

Any given SAN array must be designed to have an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be designed for the 95th percentile IOPS values published in this wiki. For other UC applications, follow their respective IOPS requirements and guidelines.

IOPS utilization should be monitored for each application to ensure that the aggregate IOPS is not exceeding the capacity of the array. Prolonged buffering of IOPS against an array may result in degraded system performance and delayed reporting data availability.

Unified CCE requires application storage to be on VMFS; Raw Device Mapping (RDM) is not supported.
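The storage rules above lend themselves to a simple sanity check. The sketch below is illustrative only (the function and example inputs are hypothetical); it applies the disk-queue, disk-time, datastore-headroom, and IOPS-budget limits stated in this section:

<source lang="python">
# Illustrative sanity check of the SAN guidance above; not a Cisco tool.
# All names and inputs here are hypothetical examples.

def check_storage(disks_in_array, avg_disk_queue, pct_disk_time,
                  datastore_gb, vm_sizes_gb, array_iops_capacity, app_iops_95th):
    problems = []
    # AverageDiskQueueLength must stay below 1.5 x number of disks in the array.
    if avg_disk_queue >= 1.5 * disks_in_array:
        problems.append("AverageDiskQueueLength too high")
    # % Disk Time must stay below 60%.
    if pct_disk_time >= 60:
        problems.append("% Disk Time too high")
    # Total VM size (vDisk + RAM copy) must not exceed 90% of the datastore.
    if sum(vm_sizes_gb) > 0.9 * datastore_gb:
        problems.append("Datastore over 90% committed")
    # Array IOPS capacity must exceed the sum of 95th-percentile application IOPS.
    if sum(app_iops_95th) >= array_iops_capacity:
        problems.append("IOPS budget exceeded")
    return problems

# Example: an HDS (500 GB vDisk + 4 GB RAM copy) alone on a 600 GB datastore.
print(check_storage(disks_in_array=5, avg_disk_queue=4, pct_disk_time=35,
                    datastore_gb=600, vm_sizes_gb=[504],
                    array_iops_capacity=2000, app_iops_95th=[1200]))
</source>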

=== Example of SAN Configuration for UCCE ROGGER Deployment up to 2000 Agents ===

The following SAN configuration was a tested design, though it is generalized here for illustration. It is not the only possible way to provision SAN arrays, LUNs, and datastores for UC applications; however, you must adhere to the guidance given earlier in this section.

== Unified CCE Component Capacities and VM Configuration Requirements ==

This table shows the supported Unified CCE components, their capacities, and the VM computing resource requirements. You must use the OVA virtual machine templates to create the Unified CCE component VMs.

{| class="wikitable"
|-
! Unified CCE Component !! Capacity !! vCPU !! RAM (GB) !! vDisk (GB) !! vNIC !! Template Name
|-
| Router || 8,000 agents || 2 || 4 || 80 || 2 || UCCE_router_8000_v1.0_vmv7.ova
|-
| Logger || 8,000 agents || 4 || 4 || 150 || 2 || UCCE_logger_8000_v1.0_vmv7.ova
|-
| Agent PG || 2,000 agents || 2 || 4 || 80 || 2 || UCCE_agtpg_2000_v1.0_vmv7.ova
|-
| Agent PG || 450 agents || 1 || 2 || 80 || 2 || UCCE_agtpg_450_v1.0_vmv7.ova
|-
| MR PG || 2,000 agents, 10 PIMs || 2 || 4 || 80 || 2 || UCCE_agtpg_2000_v1.0_vmv7.ova
|-
| MR PG || 1,000 agents, 5 PIMs || 1 || 2 || 80 || 2 || UCCE_agtpg_450_v1.0_vmv7.ova
|-
| VRU PG || 9,600 ports, 10 PIMs || 2 || 2 || 80 || 2 || UCCE_vrupg_9600_v1.0_vmv7.ova
|-
| VRU PG || 1,200 ports, 4 PIMs || 1 || 2 || 80 || 2 || UCCE_vrupg_1200_v1.0_vmv7.ova
|-
| Administration Server - AW || 25 clients || 1 || 2 || 40 || 1 || UCCE_aw_v1.0_vmv7.ova
|-
| AW-CONFIG || 50 clients || 1 || 2 || 40 || 1 || UCCE_aw_config_v1.0_vmv7.ova
|-
| AW-HDS || 200 reporters || 4 || 4 || 500 || 1 || UCCE_aw_hds_v1.0_vmv7.ova
|-
| AW-HDS-DDS || 200 reporters || 4 || 4 || 500 || 1 || UCCE_aw_hds_dds_v1.0_vmv7.ova
|-
| HDS-DDS || 200 reporters || 4 || 4 || 500 || 1 || UCCE_hds_dds_v1.0_vmv7.ova
|-
| Administration Client (Client AW) || 1 user || 1 || 2 || 40 || 1 || UCCE_clientaw_v1.0_vmv7.ova
|-
| Support Tools || n/a || 1 || 2 || 40 || 1 || UCCE_support_tools_v1.0_vmv7.ova
|-
| Rogger || 4,000 agents || 4 || 4 || 150 || 2 || UCCE_logger_8000_v1.0_vmv7.ova
|}

== Unified CCE Component Co-Residency and Sample Deployments ==

You can have one or more Unified CCE VMs co-resident on the same ESXi server. However, you must follow the rules described below (a short resource-budget sketch follows these rules):

* You can have any number of Unified CCE virtual machines, in any combination of co-residency, on an ESXi server as long as the sum of all the virtual machine CPU and memory resource allocations does not overcommit the available ESXi server computing resources.
* You must not have CPU overcommit on an ESXi server that is running Unified CCE real-time application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPUs available on the ESXi server. In the case of the Cisco UCS B-200 M1 and C-210 M1, the total number of CPUs available is 8.
* You must not have memory overcommit on an ESXi host that is running UC real-time applications. You must allocate a minimum of 2 GB of memory for the ESXi kernel. For example, if an ESXi server on B-200 M1 hardware has 36 GB of memory, after you allocate 2 GB for the ESXi kernel, you have 34 GB available for the virtual machines. The total memory allocated for all the virtual machines on an ESXi server must not be greater than 34 GB in this case.
* VM co-residency with Unified Communications and third-party applications not covered in the following examples is not supported.
* On a C-Series server, the HDS cannot co-reside with a Router, Logger, or a PG.
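As a rough illustration of these rules, the short sketch below (hypothetical host values; the VM figures are taken from the capacity table above) totals the vCPU and memory of a proposed set of co-resident VMs against the physical CPU count and the memory left after reserving 2 GB for the ESXi kernel:

<source lang="python">
# Illustrative co-residency check; names and values are examples, not a Cisco tool.

HOST_CPUS = 8          # e.g. UCS B-200 M1 / C-210 M1
HOST_MEMORY_GB = 36    # example host memory
ESXI_KERNEL_GB = 2     # reserved for the ESXi kernel

# Proposed co-resident VMs: (name, vCPU, RAM GB), taken from the OVA capacity table.
vms = [("Rogger A", 4, 4), ("Agent PG A", 2, 4), ("Domain Controller A", 1, 2)]

total_vcpu = sum(v for _, v, _ in vms)
total_ram = sum(r for _, _, r in vms)
available_ram = HOST_MEMORY_GB - ESXI_KERNEL_GB

assert total_vcpu <= HOST_CPUS, "CPU overcommit: too many vCPUs for this host"
assert total_ram <= available_ram, "Memory overcommit: reduce co-resident VMs"
print(f"OK: {total_vcpu}/{HOST_CPUS} vCPU, {total_ram}/{available_ram} GB RAM")
</source>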

=== Sample CCE Deployments ===

'''Legend'''

Grey denotes solution "optional", meaning that not all customers may choose that option in their deployment.

'''Notes'''

The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.

Although the sample deployments in these tables reflect the C-Series restriction that the HDS cannot co-reside with a Router, Logger, or a PG, this restriction is '''not''' present on a B-Series hardware platform.

For deployments where Historical Data Servers (HDSs) are co-resident, two RAID 5 groups (one for each HDS) are recommended.

==== ROGGER (up to 2000 Agents) ====

{| class="wikitable"
|-
! colspan="4" | Chassis 1 (B-Series)/Rack of C-Series Rack Mount Servers
! colspan="4" | Chassis 2 (B-Series)/Rack of C-Series Rack Mount Servers
|-
! ESXi Server !! Component !! # vCPUs !! Memory !! ESXi Server !! Component !! # vCPUs !! Memory
|-
| rowspan="4" | ESXi Server-A-1 || Rogger A || 4 vCPU || 4GB RAM || rowspan="4" | ESXi Server-B-1 || Rogger B || 4 vCPU || 4GB RAM
|-
| Agent PG A || 2 vCPU || 4GB RAM || Agent PG B || 2 vCPU || 4GB RAM
|-
| Domain Controller A || 1 vCPU || 2GB RAM || Domain Controller B || 1 vCPU || 2GB RAM
|-
| || || || Support Tools || 1 vCPU || 2GB RAM
|-
| rowspan="2" | ESXi Server-A-2 || AW-HDS-DDS 1 || 4 vCPU || 4GB RAM || rowspan="2" | ESXi Server-B-2 || AW-HDS-DDS 2 || 4 vCPU || 4GB RAM
|-
| AW-HDS-DDS 3 || 4 vCPU || 4GB RAM || AW-HDS-DDS 4 || 4 vCPU || 4GB RAM
|-
| rowspan="4" | ESXi Server-A-3 || UCM Publisher || 2 vCPU || 6GB RAM || rowspan="4" | ESXi Server-B-3 || UCM Subscriber 2 || 2 vCPU || 6GB RAM
|-
| UCM Subscriber 1 || 2 vCPU || 6GB RAM || UCM Subscriber 4 || 2 vCPU || 6GB RAM
|-
| UCM Subscriber 3 || 2 vCPU || 6GB RAM || CUP Server 1 || 2 vCPU || 4GB RAM
|-
| IPIVR 1A || 2 vCPU || 4GB RAM || IPIVR 1B || 2 vCPU || 4GB RAM
|}

==== ROGGER (up to 4000 Agents) ====

{| class="wikitable"
|-
! colspan="4" | Chassis 1 (B-Series)/Rack of C-Series Rack Mount Servers
! colspan="4" | Chassis 2 (B-Series)/Rack of C-Series Rack Mount Servers
|-
! ESXi Server !! Component !! # vCPUs !! Memory !! ESXi Server !! Component !! # vCPUs !! Memory
|-
| rowspan="3" | ESXi Server-A-1 || Rogger A || 4 vCPU || 4GB RAM || rowspan="3" | ESXi Server-B-1 || Rogger B || 4 vCPU || 4GB RAM
|-
| Agent PG 1A || 2 vCPU || 4GB RAM || Agent PG 1B || 2 vCPU || 4GB RAM
|-
| Agent PG 2A || 2 vCPU || 4GB RAM || Agent PG 2B || 2 vCPU || 4GB RAM
|-
| rowspan="2" | ESXi Server-A-2 || AW-HDS-DDS 1 || 4 vCPU || 4GB RAM || rowspan="2" | ESXi Server-B-2 || AW-HDS-DDS 2 || 4 vCPU || 4GB RAM
|-
| AW-HDS-DDS 3 || 4 vCPU || 4GB RAM || AW-HDS-DDS 4 || 4 vCPU || 4GB RAM
|-
| rowspan="4" | ESXi Server-A-3 || Domain Controller A || 1 vCPU || 2GB RAM || rowspan="4" | ESXi Server-B-3 || Domain Controller B || 1 vCPU || 2GB RAM
|-
| UCM Publisher || 2 vCPU || 6GB RAM || UCM Subscriber 2 || 2 vCPU || 6GB RAM
|-
| UCM Subscriber 1 || 2 vCPU || 6GB RAM || UCM Subscriber 4 || 2 vCPU || 6GB RAM
|-
| UCM Subscriber 3 || 2 vCPU || 6GB RAM || Support Tools || 1 vCPU || 2GB RAM
|-
| rowspan="4" | ESXi Server-A-4 || UCM Subscriber 5 || 2 vCPU || 6GB RAM || rowspan="4" | ESXi Server-B-4 || UCM Subscriber 6 || 2 vCPU || 6GB RAM
|-
| UCM Subscriber 7 || 2 vCPU || 6GB RAM || UCM Subscriber 8 || 2 vCPU || 6GB RAM
|-
| CUP Server 1 || 2 vCPU || 4GB RAM || CUP Server 2 || 2 vCPU || 4GB RAM
|-
| IPIVR 1A || 2 vCPU || 4GB RAM || IPIVR 1B || 2 vCPU || 4GB RAM
|}

==== Router/Logger (up to 8000 Agents) ====

{| class="wikitable"
|-
! colspan="4" | Chassis 1 (B-Series)/Rack of C-Series Rack Mount Servers
! colspan="4" | Chassis 2 (B-Series)/Rack of C-Series Rack Mount Servers
|-
! ESXi Server !! Component !! # vCPUs !! Memory !! ESXi Server !! Component !! # vCPUs !! Memory
|-
| rowspan="5" | ESXi Server-A-1 || Router A || 2 vCPU || 4GB RAM || rowspan="5" | ESXi Server-B-1 || Router B || 2 vCPU || 4GB RAM
|-
| Support Tools || 1 vCPU || 2GB RAM || Domain Controller B || 1 vCPU || 2GB RAM
|-
| Domain Controller A || 1 vCPU || 2GB RAM || Agent PG 1B || 2 vCPU || 4GB RAM
|-
| Agent PG 1A || 2 vCPU || 4GB RAM || Agent PG 3B || 2 vCPU || 4GB RAM
|-
| Agent PG 3A || 2 vCPU || 4GB RAM || || ||
|-
| rowspan="3" | ESXi Server-A-2 || Logger A || 4 vCPU || 4GB RAM || rowspan="3" | ESXi Server-B-2 || Logger B || 4 vCPU || 4GB RAM
|-
| Agent PG 2A || 2 vCPU || 4GB RAM || Agent PG 2B || 2 vCPU || 4GB RAM
|-
| Agent PG 4A || 2 vCPU || 4GB RAM || Agent PG 4B || 2 vCPU || 4GB RAM
|-
| rowspan="4" | ESXi Server-A-3 || UCM 1 Subscriber 1 || 2 vCPU || 6GB RAM || rowspan="4" | ESXi Server-B-3 || UCM 1 Subscriber 2 || 2 vCPU || 6GB RAM
|-
| UCM 1 Subscriber 3 || 2 vCPU || 6GB RAM || UCM 1 Subscriber 4 || 2 vCPU || 6GB RAM
|-
| UCM 2 Subscriber 1 || 2 vCPU || 6GB RAM || UCM 2 Subscriber 2 || 2 vCPU || 6GB RAM
|-
| UCM 2 Subscriber 3 || 2 vCPU || 6GB RAM || UCM 2 Subscriber 4 || 2 vCPU || 6GB RAM
|-
| rowspan="2" | ESXi Server-A-4 || AW-HDS 1 || 4 vCPU || 4GB RAM || rowspan="2" | ESXi Server-B-4 || AW-HDS 2 || 4 vCPU || 4GB RAM
|-
| HDS-DDS-1 || 4 vCPU || 4GB RAM || HDS-DDS-2 || 4 vCPU || 4GB RAM
|-
| rowspan="2" | ESXi Server-A-5 || AW-HDS 3 || 4 vCPU || 4GB RAM || rowspan="2" | ESXi Server-B-5 || AW-HDS 4 || 4 vCPU || 4GB RAM
|-
| AW-HDS 5 || 4 vCPU || 4GB RAM || AW-HDS 6 || 4 vCPU || 4GB RAM
|-
| rowspan="4" | ESXi Server-A-6 || UCM 1 Subscriber 5 || 2 vCPU || 6GB RAM || rowspan="4" | ESXi Server-B-6 || UCM 1 Subscriber 6 || 2 vCPU || 6GB RAM
|-
| UCM 1 Subscriber 7 || 2 vCPU || 6GB RAM || UCM 1 Subscriber 8 || 2 vCPU || 6GB RAM
|-
| UCM 2 Subscriber 5 || 2 vCPU || 6GB RAM || UCM 2 Subscriber 6 || 2 vCPU || 6GB RAM
|-
| UCM 2 Subscriber 7 || 2 vCPU || 6GB RAM || UCM 2 Subscriber 8 || 2 vCPU || 6GB RAM
|-
| rowspan="4" | ESXi Server-A-7 || UCM 1 Publisher || 2 vCPU || 6GB RAM || rowspan="4" | ESXi Server-B-7 || CUP Server 2 || 2 vCPU || 4GB RAM
|-
| UCM 2 Publisher || 2 vCPU || 6GB RAM || IPIVR 1B || 2 vCPU || 4GB RAM
|-
| CUP Server 1 || 2 vCPU || 4GB RAM || || ||
|-
| IPIVR 1A || 2 vCPU || 4GB RAM || || ||
|}

== Creating Virtual Machines from OVA VM Templates ==

Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual appliances. Files in this format have an extension of .ova. The naming convention for the templates is PRODUCT_COMPONENT_USERCOUNT_VERSION_VMVER.ova.
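For example, UCCE_router_8000_v1.0_vmv7.ova breaks down into product UCCE, component router, user count 8000, version v1.0, and VM hardware version vmv7. The following parser is purely illustrative (not part of any Cisco tooling) and simply splits a template name according to that convention:

<source lang="python">
# Illustrative parser for the template naming convention
# PRODUCT_COMPONENT_USERCOUNT_VERSION_VMVER.ova (user count is absent in some templates).

def parse_ova_name(filename: str) -> dict:
    parts = filename[:-len(".ova")].split("_")
    product, vm_version, version = parts[0], parts[-1], parts[-2]
    middle = parts[1:-2]
    user_count = middle.pop() if middle and middle[-1].isdigit() else None
    return {"product": product, "component": "_".join(middle),
            "user_count": user_count, "version": version, "vm_version": vm_version}

print(parse_ova_name("UCCE_router_8000_v1.0_vmv7.ova"))
# {'product': 'UCCE', 'component': 'router', 'user_count': '8000',
#  'version': 'v1.0', 'vm_version': 'vmv7'}
</source>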

Download the OVA templates from cisco.com to a local datastore that the vSphere Client can access.

=== Downloading OVA Templates ===

# To download a single OVA file, click the Download File button next to that file. To download multiple OVA files, click the Add to Cart button next to each file that you want to download, then click the Download Cart link. A Download Cart page appears.
# Click the Proceed with Download button on this page. A Software License Agreement page appears.
# Read the Software License Agreement, then click the Agree button.
# On the next page, click either the Download Manager link (requires Java) or the Non Java Download Option link. A new browser window appears.
#* If you selected Download Manager, a Select Location dialog box appears. Specify the location where you want to save the file, and click Open to save the file to your local machine.
#* If you selected Non Java Download Option, click the Download link in the new browser window. Specify the location and save the file to your local machine.

=== Creating Virtual Machines by Deploying the OVA Templates ===

In the vSphere Client, perform the following steps to deploy the virtual machines.

# Highlight the host or cluster to which you wish the VM to be deployed.
# Select File > Deploy OVF Template.
# Click the Deploy from File radio button and specify the name and location of the file you downloaded in the previous section, or click the Deploy from URL radio button and specify the complete URL in the field, then click Next.
# Verify the details of the template, and click Next.
# Give the VM you are about to create a name, choose an inventory location on your host, then click Next.
# Choose the datastore on which you would like the VM to reside; be sure there is sufficient free space to accommodate the new VM, then click Next.
# Choose a virtual network for the VM, then click Next.
# Verify the deployment settings, then click Finish.

'''Notes'''

VM CPU affinity is not supported. You do not need to set CPU affinity for the VMs that are running Unified CCE applications on the VMware ESXi on UCS platform.

VM resource reservation is not supported for the VMs that are running Unified CCE applications on the VMware ESXi on UCS platform. Keep the default reservation setting for the VM computing resources, which is no resource reservations.

You cannot change the computing resource configuration of your VM at any time.

You can never go below the minimum VM computing resource requirements as defined in the OVA templates.

ESXi Server hyperthreading is enabled by default.

== Remote Control of the Virtual Machines ==

For administrative tasks, you can use either Windows Remote Desktop or the VMware Infrastructure Client for remote control. The contact center supervisor can access the ClientAW VM using Windows Remote Desktop.

== Installing VMware Tools ==

VMware Tools must be installed on each of the VMs, and all of the VMware Tools default settings should be used. Refer to the VMware documentation for instructions on installing or upgrading VMware Tools on a VM with a Windows operating system.

== Installing Unified CCE Components on Virtual Machines ==

You can install the Unified CCE components after the configuration of the VMs. Installation of these Unified CCE components on a VM is the same as the installation of these components on physical hardware.

Refer to the Unified CCE documentation for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

== Migrating Unified CCE Components to Virtual Machines ==

You can migrate the Unified CCE components from physical hardware or another virtual machine after the configuration of the VMs. Migration of these Unified CCE software components to a VM is the same as the migration of these components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise & Hosted Release 8.0(1).

== Performance Requirements ==

CPU usage (average) should not exceed 60% for the ESXi Server and for each of the individual processors, and for each VM.

Memory usage (average) should not exceed 80% for the ESXi Server and for each of the VMs.

VM snapshots are not supported in production because they have a significant impact on system performance.

The SAN must be able to handle the following Unified CCE application disk I/O characteristics.

== Timekeeping Best Practices for Windows ==

ESXi hosts and domain controllers should synchronize the time from the same NTP source.

When Unified CCE virtual machines join the domain, they synchronize the time with the domain controller automatically using w32time.

Be sure that the "Time synchronization between the virtual machine and the host operating system" option in the VMware Tools toolbox GUI of the Windows Server 2003 guest operating system remains deselected; this checkbox is deselected by default.

== System Performance Monitoring Using ESXi Counters ==

Make sure that you follow VMware's ESXi best practices and your SAN vendor's best practices for optimal system performance.

VMware provides a set of system monitoring tools for the ESXi platform and the VMs. These tools are accessible through the VMware Infrastructure Client or through VirtualCenter.

You can use Windows Performance Monitor to monitor the performance of the VMs. Be aware that the CPU counters may not reflect the physical CPU usage since the Windows Operating System has no direct access to the physical CPU.
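For example, the built-in Windows typeperf utility can sample the standard processor and physical-disk counters referenced in the SAN guidance earlier on this page. The wrapper below is an illustrative sketch only; the counter paths are the standard Windows names, and nothing here is Cisco-specific:

<source lang="python">
# Illustrative use of the built-in Windows typeperf tool from inside a Unified CCE VM
# to sample the counters referenced in the SAN guidance. Not a Cisco-specific tool.
import subprocess

COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\PhysicalDisk(_Total)\% Disk Time",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
]

# Collect five one-second samples and print the CSV output.
result = subprocess.run(["typeperf", *COUNTERS, "-sc", "5"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
</source>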

You can use Unified CCE Serviceability Tools and Unified CCE reports to monitor the operation and performance of the Unified CCE system.

The ESXi Server and the virtual machines must operate within the limit of the following ESXi performance counters.

You can use the following ESXi counters as performance indicators.

{| class="wikitable"
|-
! Category !! Object !! Measurement !! Units !! Description !! Performance Indication and Threshold
|-
| CPU || ESXi Server<br>VM || CPU Usage (Average) || Percent || CPU usage average, in percent, for the ESXi server and for each virtual machine. || Less than 60%.
|-
| CPU || ESXi Server Processor#<br>VM vCPU# || CPU Usage 0 - 7 (Average) || Percent || CPU usage average for ESXi server processors 0 to 7 and for the virtual machine vCPUs. || Less than 60%.
|-
| CPU || VM || CPU Ready || mSec || The time a virtual machine or other process waits in the queue in a ready-to-run state before it can be scheduled on a CPU. || Less than 150 mSec. If it is greater than 150 mSec during system failover, you should investigate and understand why the machine is so busy.
|-
| Memory || ESXi Server<br>VM || Memory Usage (Average) || Percent || Memory Usage = Active / Granted * 100 || Less than 80%.
|-
| Memory || ESXi Server<br>VM || Memory Active (Average) || KB || Memory that is actively used or being referenced by the guest OS and its applications. When it exceeds the amount of memory on the host, the server starts to swap. || Less than 80% of the Granted memory.
|-
| Memory || ESXi Server<br>VM || Memory Balloon (Average) || KB || ESXi uses the balloon driver to recover memory from less memory-intensive VMs so it can be used by those with larger active sets of memory. || Since we do not overcommit memory, this should be 0 or very low. Note: ESXi performs memory ballooning before memory swap.
|-
| Memory || ESXi Server<br>VM || Memory Swap Used (Average) || KB || ESXi Server swap usage; the disk is used for RAM swap. || Since we do not overcommit memory, this should be 0 or very low.
|-
| Disk || ESXi Server<br>VM || Disk Usage (Average) || KBps || Disk Usage = Disk Read rate + Disk Write rate || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Read Rate || KBps || Rate of reading data from the disk. || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Write Rate || KBps || Rate of writing data to the disk. || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Commands Issued || Number || Number of disk commands issued on this disk in the period. || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Command Aborts || Number || Number of disk commands aborted on this disk in the period. A disk command aborts when the disk array takes too long to respond to the command (command timeout). || This counter should be zero. A non-zero value indicates a storage performance issue.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Command Latency || mSec || The average amount of time taken for a command from the perspective of the guest OS. Disk Command Latency = Kernel Command Latency + Physical Device Command Latency. || Kernel Command Latency should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. Kernel Command Latency can be high, or even higher than the Physical Device Command Latency, if there is a lot of queuing in the ESXi kernel.
|-
| Network || ESXi Server<br>VM || Network Usage (Average) || KBps || Network Usage = Data receive rate + Data transmit rate || Less than 30% of the available network bandwidth. For example, it should be less than 300 Mbps for a 1 Gbps network.
|-
| Network || ESXi Server vmnic ID<br>VM vmnic ID || Network Data Receive Rate || KBps || The average rate at which data is received on this Ethernet port. || Less than 30% of the available network bandwidth. For example, it should be less than 300 Mbps for a 1 Gbps network.
|-
| Network || ESXi Server vmnic ID<br>VM vmnic ID || Network Data Transmit Rate || KBps || The average rate at which data is transmitted on this Ethernet port. || Less than 30% of the available network bandwidth. For example, it should be less than 300 Mbps for a 1 Gbps network.
|}
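The thresholds in this table can be folded into simple alerting logic. The following sketch is illustrative only; the counter names and sample values are hypothetical, and how you collect the values (vCenter statistics, esxtop exports, and so on) depends on your monitoring tooling:

<source lang="python">
# Illustrative threshold check against the ESXi counter guidance above.
# Counter names and sample values are hypothetical; collect real values with your tooling.

THRESHOLDS = {
    "cpu_usage_pct": 60,              # less than 60%
    "cpu_ready_msec": 150,            # less than 150 mSec
    "memory_usage_pct": 80,           # less than 80%
    "memory_balloon_kb": 0,           # should be 0 (no memory overcommit)
    "memory_swap_used_kb": 0,         # should be 0 (no memory overcommit)
    "disk_command_aborts": 0,         # should be 0
    "network_usage_pct_of_link": 30,  # less than 30% of link bandwidth
}

def violations(sample: dict) -> list:
    """Return the counters in `sample` that exceed the thresholds above."""
    return [name for name, limit in THRESHOLDS.items()
            if sample.get(name, 0) > limit]

sample = {"cpu_usage_pct": 45, "cpu_ready_msec": 220, "memory_usage_pct": 70,
          "memory_balloon_kb": 0, "memory_swap_used_kb": 0,
          "disk_command_aborts": 0, "network_usage_pct_of_link": 12}
print(violations(sample))  # -> ['cpu_ready_msec']
</source>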