| Updated the ''Creating Virtual Machines from OVA VM Templates'' section.<br>Removed OVA&nbsp;list from this page and added a link.<br>Added the Scalability Assumptions section.<br>Updated the CUIC&nbsp;RAM (GB) numbers in the sample CCE&nbsp;deployments.<br>Updated the sample deployment tables and highlighted optional items.


|-


| December 22, 2010

| Updated the pointer links for the Bill of Materials, the CCMP virtualization page, and the Hybrid Deployment section.


|-


| December 20, 2010


| Updated the Component Capacities section, the VM configuration requirements table, the Hybrid Deployment Options section, the steps for installing/migrating Unified CCE Components, the Support for Virtualization on the ESXi/UCS Platform section, and the Hardware Requirements section.

| This page, [http://docwiki.cisco.com/wiki/UCCE_on_UCS_Deployment_Certification_Requirements_and_Ordering_Information UCCE on UCS Deployment Certification Requirements and Ordering Information], and [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE UCS Network Configuration for UCCE] updated to reflect support of UCS C-Series hardware on UCCE.


|-


| September&nbsp;28,&nbsp;2010


| CVP added to list of supported components/deployments.


|-


| September&nbsp;21,&nbsp;2010


| IPIVR added to list of supported components/deployments and as an optional component in the sample deployment tables at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Sample_CCE_Deployments Sample CCE Deployments].

'''It is important that partners who are planning to sell UCS products on Unified Contact Center Enterprise read the DocWiki page''' [http://docwiki.cisco.com/wiki/UCCE_on_UCS_Deployment_Certification_Requirements_and_Ordering_Information UCCE on UCS Deployment Certification Requirements and Ordering Information].


This page contains essential information for partners about the following:

<sup>2</sup>On UCCE Release 8.5(4), an Engineering Special (ES) is required to support ICME on UCS on Windows Server 2008. Please contact TAC to obtain the required ES before proceeding with deployment. Additionally, if the deployment is greater than 8,000 agents for ICME on UCS, a manual change of the ICM router, logger, peripheral gateways, and HDS/DDS components is necessary. The virtual machine specifications (vCPU, RAM, and CPU and Memory reservations) must be changed to match the UCCE 9.0 OVAs. For details, see the [http://docwiki.cisco.com/wiki/OVA_Template_Details_for_Unified_Contact_Center_Enterprise_Release_9.x CCE 9.x OVA specifications] Wiki page.


<sup>3</sup>Starting with Release 9.0(3), virtualization of the Unified ICME with more than 12,000 agents and/or the use of NIC, SIGTRAN (for up to 150 PGs) is also supported on Cisco Unified Computing Systems (UCS) B Series and C Series hardware.

<sup>4</sup>Maximum Agents scales with call rate; refer to the BOM for further guidance.

<br>The following deployments and Unified CCE components have not been qualified and '''are not supported''' in virtualization:


*Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.


*Unified CCH (multi-instance)


*Unified ICMH (multi-instance)


*Outbound Option with SCCP Dialer


*WebView Server


*Expert Advisor

*SPAN-based Silent Monitoring on the UCS B-Series chassis. For support on UCS C-Series, consult the VMware Community/Knowledge Base regarding the possible use of promiscuous mode to receive the monitored traffic from the SPAN port.

*Cisco Unified CRM Connector


*Agent PG with more than 1 Agent PIM (2nd PIM for UCM CTI RP is allowed as per SRND)


*Multi-Instance CTIOS

*IPsec. UCS does not support IPsec off-board processing; therefore, IPsec is not supported in virtualization.

<br>


== Virtual and non-virtual servers deployment model ==

The hybrid (virtual and non-virtual servers) deployment model is supported. However, for paired components having side A and side B (e.g., Rogger A and Rogger B), the server hardware must be identical on both sides (e.g., you cannot mix bare-metal MCS on side A of the paired component and virtual UCS on side B). You may, however, deploy a mixture of UCS B and C series for the separate duplex pair sides, so long as the UCS server and processor generation align (UCS-B200M2-VCS1 is equal in generation to UCS-C210M2-VCD2, for example). Hybrid support means that non-paired components that are not yet virtualized can continue to run on MCS in UCCE on UCS deployments. See the [http://docwiki.cisco.com/wiki/Virtualization_for_Unified_CCE#Hybrid_Deployment_Options Hybrid Deployment Options] section for more information.

[http://www.cisco.com/web/partners/incentives_and_promotions/cisco_smartplay_promo.html Cisco SmartPlay Solution Packs for UC], which are pre-configured bundles (value UC bundles) based on '''UCS B200M2 or C210M2''' hardware and an ordering alternative to the UC on UCS TRCs above, are '''supported with caveats''':

:*For '''B200M2 and C210M2 Solution Packs (''Value UC Bundles'')''' that have better specifications than the UC on UCS B200M2/C210M2 TRC models (e.g., 6 cores per CPU of the same family), the UC on UCS spec-based hardware support policy must be followed. These bundles are supported by UCCE/CVP as an exception, provided the UCCE VM co-residency rules are followed and no more than two CVP Call Server VMs are placed on the same server/host.

:*''Other spec-based servers'' that, per the UC on UCS Spec-based Hardware Policy, have specifications equal to or better than the UC on UCS B200M2/C210M2 TRCs '''may''' be used for UCCE/CVP once validated in the Customer Collaboration DMS/A2Q (Design Mentoring Session/Assessment To Quality) process. This also means that a particular desired spec-based server model may '''not be approved''' for use after the server design review in the DMS/A2Q session.

<br>Unified CCE supports MCS-7845-I3-CCE2 with virtualization. For a list of supported virtualized components on MCS servers, see the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted]. Unified CCE does not support UCS C200.

:*If you are upgrading ESXi software, see the section [http://docwiki.cisco.com/wiki/Ongoing_Virtualization_Operations_and_Maintenance Upgrade ESXi].


:*The Windows, SQL, and other third party software requirements for the Unified CCE applications in the ESXi/UCS platform are the same as in the physical server. For more information see the appropriate release of [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise &amp; Hosted].


=== ESXi 4.1 and 5.0 Software Requirements ===

When Cisco Unified CCE is running on ESXi 4.1 or 5.0, you must install or upgrade VMware Tools on each of the VMs to match the running ESXi version, and use all of the VMware Tools default settings. For more information, see the section VMware Tools. You must do this every time the ESXi version is upgraded.

Starting with CCE/CVP Release 9.x in a Windows Server 2008 environment, disabling Large Receive Offload (LRO) in ESXi 5.0 (and later) is no longer a requirement.

{{note| You must use the OVA VM templates to create the Unified CCE component VMs.}}


<br>For instructions on how to obtain the OVA templates, see [http://docwiki.cisco.com/wiki/Downloading_OVA_Templates_for_UC_Applications Downloading OVA Templates for UC Applications]. <br><br>


== Unified CCE Scalability Impacts ==

The [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise capacity sizing information] is based on the operating conditions published in the ''Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)'', Release 8.x/9.x, Chapter 10, ''Operating Conditions'', and in the ''Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted'', Release 8.x/9.x, Section 5. Both documents are available at [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco.com].


The following features reduce the scalability of certain components below the agent count of the respective OVA [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise capacity sizing information].

*Extended Call Context (ECC) usage greater than the level noted in the Operating Conditions will have a performance and scalability impact on critical components of the Unified CCE solution. As noted in the SRND, the capacity impact will vary based on ECC configuration, therefore, guidance must be provided on a case-by-case basis.

:*'''QoS must be enabled''' for both the Private network connections (between Side A and B) and the public network connections (between the Router and the PG) in Unified CCE Setup. Refer to Chapter 12 of the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)] for more details.


== Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware ==


You can deploy the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS hardware.


When you implement this deployment model, be sure to follow the best practices outlined in the section "IPT: Clustering Over the WAN" in the [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)].


In addition, note the following expectations for UCS hardware points of failure:

*In communication-path single-point-of-failure testing performed by Cisco on the Unified CCE UCS B-Series High Availability (HA) deployment, system call handling was observed to be degraded for up to 45 seconds while the system recovered from the fault, depending on the subsystem faulted. Single points of failure do not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.


*Multiple points of failure on the Unified CCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.


=== B-Series Considerations ===


Cisco recommends use of the M81KR Virtual Interface Card (VIC) for Unified CCE deployments, though the M71KR(-E/Q) and M72KR(-E/Q) may also be used as per reference design 2 detailed at the dedicated networking page linked below. M51KR, M61KR and 82598KR are not supported for Contact Center use in UCS B series blades.

New B-Series deployments are recommended to use Nexus 5000/7000 series data center switches with vPC PortChannels. This technology has been shown to provide considerable advantages to Contact Center applications in fault recovery scenarios.


See the configuration guidelines in [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE#UCCE_on_UCS_B-Series_Network_Configuration UCCE on UCS B-Series Network Configuration].


=== C-Series Considerations ===


If deploying Clustering Over the WAN with C-Series hardware, '''do not''' trunk public and private networks. You '''must''' use separate physical interfaces off of the C-Series servers to create the public and private connections. See the configuration guidelines in [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE#UCCE_on_UCS_C-Series_Network_Configuration UCCE on UCS C Series Network Configuration].


<br>


= Notes for Deploying Unified CCE Applications on UCS B Series Hardware with SAN =

In a Storage Area Network (SAN) architecture, storage consists of a series of RAID (Redundant Array of Independent Disks) arrays. A Logical Unit Number (LUN), which represents a device identifier, can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.


In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.

Keep the following considerations in mind when deploying UCCE applications on UCS B-Series hardware with SAN:


*SAN disk arrays must be configured as RAID 5 or RAID 10. Note: RAID 6/ADG is also supported as an extension of RAID 5.

**Kernel Disk Command Latency should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. A high Kernel Command Latency indicates a lot of queuing in the ESXi kernel.


*The SAN design and configuration must meet the following Windows performance counters on UCCE VMs:


**AverageDiskQueueLength must remain less than (1.5 ∗ (the total number of disks in the array)).


**&nbsp;%Disktime must remain less than 60%.<br>


*Any given SAN array must be designed to have an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be designed for the 95th percentile IOPS values published in&nbsp;[http://docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements]

*vSphere alarms when free disk space on any datastore drops below 20%. Provision at least 20% free space overhead; a minimum of 10% overhead is '''required'''.

*Deploy 4-8 VMs per LUN/datastore, so long as IOPS and space requirements can be met; the supported range is 1-10. (A scripted sanity check for these storage thresholds follows this list.)
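These storage thresholds lend themselves to a simple scripted sanity check. The following is a minimal sketch, assuming the counter values have already been collected (for example, from Windows Performance Monitor exports); all function names and sample numbers are illustrative, not part of any Cisco or VMware API.

<pre>
# Minimal sketch: validate one SAN array against the thresholds above.
# Inputs are assumed to be collected separately; values are illustrative.

def check_array(disks_in_array, avg_disk_queue, pct_disk_time,
                array_iops_capacity, app_iops_95th, datastore_free_pct):
    """Return a list of threshold violations for one SAN array."""
    problems = []
    if avg_disk_queue >= 1.5 * disks_in_array:
        problems.append("AverageDiskQueueLength >= 1.5 x disks in array")
    if pct_disk_time >= 60:
        problems.append("%Disktime >= 60%")
    if sum(app_iops_95th) >= array_iops_capacity:
        problems.append("95th-percentile app IOPS exceed array capacity")
    if datastore_free_pct < 10:
        problems.append("free space below the required 10% overhead")
    elif datastore_free_pct < 20:
        problems.append("free space below the recommended 20% overhead")
    return problems

# Example: an 8-disk RAID 5 array hosting two UC applications.
print(check_array(disks_in_array=8, avg_disk_queue=9.5, pct_disk_time=45,
                  array_iops_capacity=1800, app_iops_95th=[600, 450],
                  datastore_free_pct=15))
# -> ['free space below the recommended 20% overhead']
</pre>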


<br>


See below for an example of SAN configuration for Rogger 2000 agent deployment. This example corresponds to the 2000 agent Sample CCE Deployment for UCS B-Series described in: [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Unified_CCE_Component_Coresidency_and_Sample_Deployments Unified CCE Component Coresidency and Sample Deployments].


== Example of SAN Configuration for Unified CCE ROGGER Deployment up to 2000 Agents ==

The following SAN configuration was a tested design, though it is generalized here for illustration. It is not the only possible way to provision SAN arrays, LUNs, and datastores for UC applications. However, you must adhere to the guidance given earlier in this section.

Follow the steps and references below to install the Unified CCE components on virtual machines. You can use these instructions to install or upgrade systems running Unified CCE 8.0(2) and later. You can also use these instructions to migrate virtualized systems from Unified CCE 7.5(x) to Unified CCE 8.0(2) or later, including the Avaya PG and other selected TDM PGs that were supported on Unified CCE 7.5(x). Not all TDM PGs supported in Unified CCE 7.5(x) are supported in Unified CCE 8.0(x)/9.0(x); for more information, see the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].


#Acquire the supported servers for Unified CCE 8.0(2) or later release.

#*MCS-7845-I3-CCE2 is specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].

#*If PG VMs are running on the older MCS-7845-H2 or MCS-7845-I2 servers, replace those servers with supported servers.

#Install, set up, and configure the servers.


#Configure the network. See reference at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#UCS_Network_Configuration UCS Network Configuration]. {{note |Configuring the network for the MCS servers is the same as configuring the network for the UCS C-Series servers.}}

#Create the Unified CCE virtual machines from the OVA templates. See reference at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Creating_Virtual_Machines_from_OVA_VM_Templates Creating Virtual Machines from OVA VM Templates]. This is a requirement for all components running Unified CCE 8.0(2) and later. In Unified CCE 7.5(x), this is not a requirement.

#Install VMware Tools on the virtual machines, using the same version of VMware Tools as the ESXi software.

#Install the Windows OS and SQL Server (for Logger and HDS components) on the created virtual machines. {{note| Microsoft Windows Server 2008 R2 ''Standard Edition'' and Microsoft SQL Server 2008 R2 ''Standard Edition'' should be used for virtual machine guests. See related information in the links below. Deployments created before the Unified CCE 9.0 release that use Windows Server 2003 and SQL Server 2005 can continue to use them on those older Unified CCE releases.}}

You can have one or more Unified CCE VMs co-resident on the same ESXi server; however, you must follow the rules described below:

:*You can have any number of Unified CCE virtual machines, and any combination of co-resident Unified CCE virtual machines, on an ESXi server as long as the sum of all the virtual machine CPU and memory resource allocations does not overcommit the available ESXi server computing resources.

:*You must not overcommit CPU on an ESXi server that is running Unified CCE realtime application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPU cores available on the ESXi server (not counting hyper-threading cores). UCS servers commonly have two physical CPU sockets with 4-10 cores each.

:*You must not overcommit memory on an ESXi host running UC realtime applications. You must allocate a minimum of 2GB of memory for the ESXi kernel. For example, a B200M2 server with 48GB of memory would allow up to 46GB for virtual machine allocation; the total memory allocated for all the virtual machines on that ESXi server must not be greater than 46GB. Note that the ESXi kernel memory allocation can vary with the hardware server platform type, so take care not to overallocate. (A validation sketch for these rules follows this list.)

:*VM co-residency with Unified Communications '''and''' third party applications (for example, WFM) is '''not''' supported unless it is specifically indicated in the following subsection.
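The CPU and memory rules above reduce to simple arithmetic per host. Below is a minimal sketch of that validation; the VM names and host sizes are illustrative, and the fixed 2GB reservation is the minimum stated above (it can be larger on some platforms).

<pre>
# Minimal sketch of the co-residency arithmetic above: no vCPU overcommit
# against physical cores (hyper-threading not counted) and no memory
# overcommit after reserving at least 2 GB for the ESXi kernel.

def validate_host(physical_cores, host_memory_gb, vms):
    """vms: list of (name, vcpus, memory_gb) tuples on one ESXi host."""
    total_vcpus = sum(vcpus for _, vcpus, _ in vms)
    total_mem = sum(mem for _, _, mem in vms)
    usable_mem = host_memory_gb - 2  # minimum 2 GB kept for the ESXi kernel
    errors = []
    if total_vcpus > physical_cores:
        errors.append(f"{total_vcpus} vCPUs exceed {physical_cores} cores")
    if total_mem > usable_mem:
        errors.append(f"{total_mem} GB of VM memory exceeds {usable_mem} GB usable")
    return errors

# Example: a B200M2-class host (2 sockets x 4 cores, 48 GB of memory).
vms = [("Rogger-A", 4, 8), ("PG1-A", 2, 4), ("AW-HDS", 4, 16)]
print(validate_host(physical_cores=8, host_memory_gb=48, vms=vms))
# -> ['10 vCPUs exceed 8 cores']  (this layout would not be supported)
</pre>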


<br>


The following tables show the supported Unified CCE component co-residencies. A '''diamond''' indicates that co-residency is allowed, whereas an '''asterisk''' with a number denotes limited or conditional support; check the Exceptions and Notes below the table for guidance on those co-residencies.

The co-residency tables reference the [[UC Virtualization Supported Hardware#UC_on_UCS_Tested_Reference_Configurations|UCS Tested Reference Configurations]], where S is the same as BE6000 MD, S+ is the same as BE6000&nbsp;HD, and M includes BE7000.

:*An HDS (all types) '''cannot''' co-reside with a Logger, Rogger, CVP Reporting Server, or another HDS unless those applications are deployed on separate DAS (Direct Attached Storage) RAID arrays. The standard RAID array for virtualization is 8 disk drives in RAID 5 or 10; two arrays would therefore require 16 drives to allow for co-residency. For example, the UCS-C210M2-VCD2 does not allow a Rogger and an HDS on the same server, as it has only a single 8-disk RAID 5 array. A UCS-C260M2-VCD2 has 16 drives in two 8-disk RAID 5 arrays, which allows a Rogger and an HDS to be deployed on that single C-Series server, so long as each application is installed to a separate array. (A placement-check sketch follows this note.)
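This DAS rule is easy to check programmatically when planning VM placement. A minimal sketch follows; the array names and assignments are illustrative.

<pre>
# Minimal sketch of the DAS placement rule above: at most one I/O-heavy
# database application per physical RAID array.

IO_HEAVY = {"Logger", "Rogger", "Progger", "HDS", "CVP Reporting Server"}

def check_das_placement(arrays):
    """arrays: dict mapping RAID array name -> list of application names."""
    violations = {}
    for array, apps in arrays.items():
        heavy = [app for app in apps if app in IO_HEAVY]
        if len(heavy) > 1:
            violations[array] = heavy
    return violations

# A C260M2 with two 8-disk RAID 5 arrays: the Rogger and HDS must be split.
print(check_das_placement({"array1": ["Rogger", "HDS"], "array2": ["AW"]}))
# -> {'array1': ['Rogger', 'HDS']}  (move the HDS to array2)
</pre>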

<br>The following section depicts sample CCE UCS deployments compliant with the co-residency rules described above.

<br>


== Sample Unified CCE Deployments ==


'''Notes'''


#The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.


#The sample deployments in these tables reflect the enhanced CCE 9.x (and later) VM co-residency rules. Note that the C-Series restriction that prevented the HDS from co-residing with a Router, Logger, or a PG has been changed. See the preceding note about Historical Data Servers (HDS) VM co-residency rule with other applications.&nbsp;&nbsp;

#Any deployment with more than 2000 agents requires at least two chassis.


#It may be preferable to place your domain controller on bare metal rather than in the UCS B-series chassis itself. When a power failure occurs, the vCenter login credentials are dependent on Active Directory, leading to a potential chicken-and-egg problem if the domain controller is down as well.


#ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; these components are deployed on separate hardware. Please review the product SRND for more details.

#For large multi-core UCS models (more than 8 cores per ESXi host), you can still use the sample deployments below (which are based on 8 cores per ESXi host) and then collapse them onto the actual available cores. For example, VMs compliant with co-residency rules on two C210M2-VCD2 TRC hosts can be collapsed onto a single C260M2 TRC or C240M3 TRC host. Extra cores on large multi-core UCS models may not all be utilized by the CCE on UCS solution due to storage constraints. This is subject to verification in the DMS/A2Q process.


All Unified CCE VM vDisks must be deployed Thick Provisioned (Lazy Zeroed or Eager Zeroed). Thin Provisioned vDisks are not supported.

Storage Area Networks and Specs-based policy storage solutions must be able to handle the following Unified CCE application disk I/O characteristics. Tested Reference Configurations (TRC) for Cisco UCS C-Series servers do not need to apply these requirements against their Direct Attached Storage Arrays (DAS).


The data below is based on CCE 10.0(1) running on '''Windows Server 2008 R2 Enterprise'''.

{| class="wikitable FCK__ShowTableBorders"
|-
! rowspan="2" | '''Unified CCE Component'''
! colspan="3" | '''IOPS'''
! colspan="3" | '''Disk Read KBytes / sec'''
! colspan="3" | '''Disk Write KBytes / sec'''
! rowspan="2" | '''Operating Conditions'''
|-
| '''Peak'''
| '''Avg.'''
| '''95th Percentile'''
| '''Peak'''
| '''Avg.'''
| '''95th Percentile'''
| '''Peak'''
| '''Avg.'''
| '''95th Percentile'''
|}

<br>

= Hybrid Deployment Options =

Some Unified Contact Center deployments support a "hybrid" deployment. In hybrid deployments, you deploy certain components on (bare-metal) Media Convergence Servers (MCS) or generic servers. You deploy other components in VM guests on Unified Computing System (UCS), MCS, or third-party specs-based servers that meet the requirements for your specific release. The following sub-sections provide further details on these hybrid deployment options.

*You can deploy a NAM on bare-metal servers as specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted] (BOM) for your release. In Release 9.0(3) and later, you can deploy a NAM on a VM matching an appropriate template listed on the OVA Template Details page for your release.


*Each customer instance central controller (CICM) connecting to the NAM may be deployed in its own virtual machine as a Rogger or separate Router/Logger pair on UCS hardware. Multiple CICM instances are not supported collocated in one VM. Existing published rules and capacities apply to CICM Rogger and Router/Logger VMs. (Note: CICMs are not supported on bare-metal UCS.)


*As in Enterprise deployments, each Agent PG is deployed in its own virtual machine. Multi-instance Agent PGs are not supported in a single VM. Existing published rules and capacities apply to PGs in Hosted deployments.


== Parent/Child Deployments ==

*You can deploy the Parent ICM on (bare-metal) servers as specified in the appropriate BOM or, if supported by your specific release, on VMs that meet the capacity requirements outlined in the VM Templates for that release.

*The Unified Contact Center Enterprise Gateway PG and System PG are each deployed in their own virtual machine; agent capacity (and the resources allocated to the VM) are the same as for the Unified CCE Agent PG at 2,000-agent capacity. Use the same virtual machine OVA template to create the CCE Gateway or System PG VM.

= Unified CCE Network Configuration in a Virtualized Environment =

'''QoS must be enabled''' for Unified CCE Private network communications in Web Setup for the Router and in PG Setup. For additional details, refer to the appropriate section in the [http://www.cisco.com/c/en/us/support/customer-collaboration/unified-contact-center-enterprise/tsd-products-support-install-and-upgrade.html Cisco Unified Contact Center Enterprise Installation and Upgrade Guide.]

Microsoft Windows Server software-based IPsec is not supported.

<br>

= Cisco Unified CCE-Specific Information for OVA Templates =

You can download the OVAs for each release on this web page: [http://www.cisco.com/cisco/software/release.html?mdfid=268439622&release=1.1&relind=AVAILABLE&flowid=5210&softwareid=283914286&rellifecycle=&reltype=latest Unified CCE OVA Templates]


== Creating Virtual Machines by Deploying the OVA Templates ==

In the vSphere client, perform the following steps to deploy the virtual machines.

#Highlight the host or cluster to which you want the VM deployed.
#Select '''File''' &gt; '''Deploy OVF Template'''.
#Click the '''Deploy from File''' radio button and specify the name and location of the file you downloaded in the previous section, '''or''' click the '''Deploy from URL''' radio button and specify the complete URL in the field. Then click '''Next'''.
#Verify the details of the template, and click '''Next'''.
#Give the VM you are about to create a name, and choose an inventory location on your host. Then click '''Next'''.
#Choose the datastore on which you would like the VM to reside. Be sure there is sufficient free space to accommodate the new VM. Then click '''Next'''.
#Choose a virtual network for the VM, then click '''Next'''.
#Verify the deployment settings, then click '''Finish'''.
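If you prefer to script deployments instead of using the vSphere client wizard, VMware's standalone ovftool utility can deploy the same OVA files. The following is a minimal sketch; the names, paths, and the vi:// locator are placeholders, and you should verify the flags against the documentation for your ovftool version.

<pre>
# Minimal sketch: scripted OVA deployment with VMware ovftool instead of
# the vSphere client wizard. All names and paths are placeholders.
import subprocess

def deploy_ova(ova_path, vm_name, datastore, network, vi_target):
    cmd = [
        "ovftool",
        f"--name={vm_name}",         # name of the VM to create
        f"--datastore={datastore}",  # datastore with sufficient free space
        f"--network={network}",      # virtual network for the VM
        ova_path,                    # source OVA template
        vi_target,                   # target host or vCenter locator
    ]
    subprocess.run(cmd, check=True)

# deploy_ova("UCCE_Rogger.ova", "Rogger-A", "datastore1", "VM Network",
#            "vi://administrator@vcenter.example.com/DC1/host/UCS-Cluster")
</pre>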

== Notes ==

:*VM CPU affinity is not supported. You may not set CPU affinity for Unified CCE application VMs on vSphere.

:*Starting with Unified CCE 9.0(1), VM resource reservation is supported, and the computing resources have a default setting when deployed from the OVA for CCE 9.0.

:*You must not change the computing resource configuration of your VM at any time.

:*You must never go below the minimum VM computing resource requirements as defined in the OVA templates.

:*ESXi Server hyperthreading is enabled by default.

:*VM vDisks must be deployed as either Thick Eager-Zeroed or Thick Lazy-Zeroed. Thin provisioning of Unified CCE VM vDisks is '''not''' supported. Per VMware best practices, Cisco recommends using Eager-Zeroed for the best performance of application VMs on initial writes. For more information, refer to [http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware UC Virtualization Supported Hardware.]

<br>

= Unified CCE Co-residency Support Policy =

The Unified CCE co-residency policy is stated as Limited, though it may actually align to one of the other policies. This is due to I/O limitations for the Direct Attached Storage (DAS) arrays used with Unified CCE database applications.

'''Note:''' For any single DAS array, you cannot have two or more of the following on that array: Logger, Rogger, Progger, HDS, or CVP Reporting Server.

Unified CCE otherwise adheres to the following co-residency policies by version:

{| class="wikitable FCK__ShowTableBorders"
|-
! scope="col" | Version
! scope="col" | Co-residency Policy
|-
| 10.0(x)
| Full
|-
| 8.0(2+) - 9.0(x)
| UC on UC only
|}

<br>

== Preparing for Windows Installation ==


In the vSphere client, perform the following steps to prepare for operating system installation.


#Right click on the virtual machine you want to edit and select '''Edit Settings'''. A Virtual Machine Properties dialog appears.

For administrative tasks, you can use either Windows Remote Desktop or the VMware Infrastructure Client for remote control. The contact center supervisor can access the ClientAW VM using Windows Remote Desktop.


Install the Unified CCE components after you create and configure the virtual machine. Installation of the Unified CCE components on a virtual machine is the same as the installation of the components on physical hardware.

Refer to the [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/prod_installation_guides_list.html Unified CCE documentation] for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

Migrate the Unified CCE components from physical hardware or another virtual machine after you create and configure the virtual machine. Migration of these Unified CCE software components to a VM is the same as the migration of the components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_installation_guides_list.html Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise &amp; Hosted].

= Hybrid Unified CCE Deployment Options =

Unified CCE 10.0(x) and later supports only virtualized deployment on VMware vSphere ESXi. This is inclusive of Unified ICME, CCH, and ICMH.

Unified CCE allows application VMs to be deployed on different supported servers, whether Cisco UCS or 3rd Party Specs-based, within the same or multiple data center sites, except for duplex and synchronized-operation applications, which have a more restrictive requirement.

The following Unified CCE applications are hardware-constrained for side A and B pairings:


* Router


* Logger


* Rogger


* Peripheral Gateway (PG, all types)

These require that the A and B application instances be installed on separate host servers that exactly match on the following (a small comparison sketch follows this list):


* Processor make, model and speed


* Memory type and speed


* Mainboard chipset controller
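A quick way to enforce this pairing rule is to compare the hardware facts of the two candidate hosts. Below is a minimal sketch; the field names and sample values are illustrative, and the real facts should come from your hardware inventory.

<pre>
# Minimal sketch of the side A/B matching rule above: the hosts for a
# duplex pair must match exactly on processor, memory, and chipset.

MATCH_FIELDS = ("cpu_model", "cpu_speed_ghz", "memory_type",
                "memory_speed", "chipset")

def mismatched_fields(host_a, host_b):
    """Return the hardware facts on which the two hosts differ."""
    return [f for f in MATCH_FIELDS if host_a.get(f) != host_b.get(f)]

side_a = {"cpu_model": "Xeon E5640", "cpu_speed_ghz": 2.66,
          "memory_type": "DDR3", "memory_speed": "1333MHz",
          "chipset": "5520"}
side_b = dict(side_a, memory_speed="1066MHz")
print(mismatched_fields(side_a, side_b))
# -> ['memory_speed']  (this pair would not be compliant)
</pre>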

This section is applicable to storing virtual machines on C-210 local storage. The C-210 server comes with default local storage configured as two RAID groups: disks 1-2 are RAID 1, while the remaining disks (3-10) are RAID 5.

The creation of the virtual machine for the Unified CCE Administration and Data Server requires a large virtual disk. Before you deploy the OVAs for the following Unified CCE components, you must follow the steps described below to configure the ESXi datastore block size to 2MB so that it can handle the Administration and Data Server virtual disk size requirement:

:*AW-HDS

:*AW-HDS-DDS


:*HDS-DDS


<br>Steps to configure the ESXi data store block size to 2MB:


#After you install ESXi on the first disk array group (RAID 1 with disk 1 and disk 2), boot ESXi and use VMware vSphere Client to connect to the ESXi host.


#On the Configuration tab for the host, select Storage in the box labeled Hardware. Select the second disk array group with RAID-5 configuration, and you will see in the formatting of “Datastore Details” that the block size is by default 1MB.


#Right-click on this data store and delete it. We will add the data store back in the following steps.


#Click on the “Add Storage…” and select the Disk/LUN.


#The data store that was just deleted is now available to add; select it.


#In the configuration for this data store, you can now select the block size; select 2MB, and finish adding the storage to the ESXi host. This storage is now available for deployment of virtual machines that require a large disk size, such as the Administration and Data Servers.
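The 2MB requirement comes from the VMFS-3 relationship between datastore block size and maximum file (vDisk) size: 1MB blocks allow files up to 256GB, 2MB up to 512GB, 4MB up to 1TB, and 8MB up to 2TB. A small sketch of that lookup, useful when planning datastores for other large-disk components:

<pre>
# Minimal sketch: pick the smallest VMFS-3 block size whose maximum file
# size accommodates a required vDisk (block size in MB -> max file GB).

VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def required_block_size_mb(vdisk_gb):
    for block_mb in sorted(VMFS3_MAX_FILE_GB):
        if vdisk_gb <= VMFS3_MAX_FILE_GB[block_mb]:
            return block_mb
    raise ValueError("vDisk exceeds the 2 TB VMFS-3 file size limit")

# A data disk larger than 256 GB (for example, 500 GB) needs 2 MB blocks.
print(required_block_size_mb(500))  # -> 2
</pre>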

= Installing VMware Tools =

You must install VMware Tools on each of the VMs, and you must use all of the VMware Tools default settings. Refer to the VMware documentation for instructions on installing or upgrading VMware Tools on a VM with the Windows operating system.

= Timekeeping Best Practices for Windows =


You should follow the best practices outlined in the VMware Knowledge Base article [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 VMware KB: Timekeeping best practices for Windows].

*ESXi hosts and domain controllers must synchronize the time from the same NTP source.

*When Unified CCE virtual machines join the domain, they synchronize the time with the domain controller automatically using w32time.

*Be sure that '''Time synchronization between the virtual machine and the host operating system''' in the VMware Tools tool box GUI of the Windows Server 2008 R2 guest operating system remains deselected; this check box is deselected by default.
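Because the ESXi hosts and the domain controllers must point at the same NTP source, a quick drift check against that source can catch timekeeping misconfiguration early. Below is a minimal SNTP query sketch using only the Python standard library; the server address is a placeholder for your deployment's NTP source.

<pre>
# Minimal sketch: query an NTP server (SNTP, RFC 4330) and report the
# local clock offset, to spot machines drifting from the common source.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_offset_seconds(server, timeout=5):
    packet = b"\x1b" + 47 * b"\0"  # SNTP version 3, client mode request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    transmit = struct.unpack("!I", data[40:44])[0]  # server transmit time
    return (transmit - NTP_EPOCH_OFFSET) - time.time()

# print(f"offset vs NTP source: {ntp_offset_seconds('10.0.0.1'):+.3f} s")
</pre>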


<br>

= System Performance Monitoring Using ESXi Counters =

:*Make sure that you follow VMware's ESXi requirements and the SAN vendor's instructions for optimal system performance.

:*VMware provides a set of system monitoring tools for the ESXi platform and the VMs. These tools are accessible through the VMware Infrastructure Client or through VirtualCenter.

:*You can use Windows Performance Monitor to monitor the performance of the VMs. Be aware that the CPU counters may not reflect the physical CPU usage, since the Windows operating system has no direct access to the physical CPU.

:*You can use Unified CCE Serviceability Tools and Unified CCE reports to monitor the operation and performance of the Unified CCE system.

The ESXi Server and the virtual machines must operate within the limits of the following ESXi performance counters.


<br>

Use the following ESXi counters as performance indicators.

{| class="wikitable FCK__ShowTableBorders"
|-
! Category
! Object
! Measurement
! Units
! Description
! Performance Indication and Threshold
|-
| CPU
| ESXi Server<br>VM
| CPU Usage (Average)
| Percent
| CPU usage average in percentage for the ESXi server or the virtual machine.
| Less than 60%.
|-
| CPU
| ESXi Server Processor#<br>VM_vCPU#
| CPU Usage 0 - 7 (Average)
| Percent
| CPU usage average for ESXi server processors 0 to 7, or for the virtual machine vCPUs.
| Less than 60%.
|-
| CPU
| VM
| CPU Ready
| mSec
| The time a virtual machine or other process waits in the queue in a ready-to-run state before it can be scheduled on a CPU.
| The raw counter value cannot be used by itself; convert it to a percentage of time spent waiting for a CPU to become ready: CPU Ready % = (CPU Ready value in ms / (sample interval in seconds x 1,000)) x 100. For example, with a 20-second sample interval and a CPU Ready value of 400 ms: (400 / 20,000) x 100 = 2%.
|-
| Memory
| ESXi Server<br>VM
| Memory Active (Average)
| KB
| Memory that is actively used or being referenced by the guest OS and its applications. When it exceeds the amount of memory on the host, the server starts a memory swap.
| Less than 80% of the Granted memory.
|-
| Memory
| ESXi Server<br>VM
| Memory Balloon (Average)
| KB
| ESXi uses the balloon driver to recover memory from less memory-intensive VMs so it can be used by those with larger active sets of memory.
| Since the memory is not overcommitted, this should be 0 or very low. Note: ESXi performs memory ballooning before memory swap.
|-
| Memory
| ESXi Server<br>VM
| Memory Swap used (Average)
| KB
| ESXi Server swap usage. Use the disk for RAM swap.
| Since the memory is not overcommitted, this should be 0 or very low.
|-
| Disk
| ESXi Server<br>VM
| Disk Usage (Average)
| KBps
| Disk Usage = Disk Read rate + Disk Write rate
| Ensure that the SAN is configured to handle this amount of disk I/O.
|-
| Disk
| ESXi Server vmhba ID<br>VM vmhba ID
| Disk Usage Read rate
| KBps
| Rate of reading data from the disk
| Ensure that the SAN is configured to handle this amount of disk I/O.
|-
| Disk
| ESXi Server vmhba ID<br>VM vmhba ID
| Disk Usage Write rate
| KBps
| Rate of writing data to the disk
| Ensure that the SAN is configured to handle this amount of disk I/O.
|-
| Disk
| ESXi Server vmhba ID<br>VM vmhba ID
| Disk Commands Issued
| Number
| Number of disk commands issued on this disk in the period.
| Ensure that the SAN is configured to handle this amount of disk I/O.
|-
| Disk
| ESXi Server vmhba ID<br>VM vmhba ID
| Disk Command Aborts
| Number
| Number of disk commands aborted on this disk in the period. A disk command aborts when the disk array takes too long to respond to the command (command timeout).
| This counter should be zero. A non-zero value indicates a storage performance issue.
|-
| Disk
| ESXi Server vmhba ID<br>VM vmhba ID
| Disk Command Latency
| mSec
| The average amount of time taken for a command from the perspective of a Guest OS. Disk Command Latency = Kernel Command Latency + Physical Device Command Latency.
| A latency of 24ms or greater is indicative of a possibly overutilized, misbehaving, or misconfigured disk array.
|-
| Disk
| ESXi Server vmhba ID<br>VM vmhba ID
| Kernel Disk Command Latency
| mSec
| The average time spent in ESXi Server VMKernel per command.
| '''Kernel Command Latency must be very small in comparison to the Physical Device Command Latency, and it must be close to zero. Kernel Command Latency can be high, or even higher than the Physical Device Command Latency, if there is a lot of queuing in the ESXi kernel.'''
|-
| Network
| ESXi Server<br>VM
| Network Usage (Average)
| KBps
| Network Usage = Data receive rate + Data transmit rate
| Less than 30% of the available network bandwidth; for example, less than 300 Mbps for a 1G network.
|-
| Network
| ESXi Server vmnic ID<br>VM vmnic ID
| Network Data Receive Rate
| KBps
| The average rate at which data is received on this Ethernet port
| Less than 30% of the available network bandwidth; for example, less than 300 Mbps for a 1G network.
|-
| Network
| ESXi Server vmnic ID<br>VM vmnic ID
| Network Data Transmit Rate
| KBps
| The average rate at which data is transmitted on this Ethernet port
| Less than 30% of the available network bandwidth; for example, less than 300 Mbps for a 1G network.
|}
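The CPU Ready conversion shown in the table is easy to apply when reviewing exported performance data. A minimal sketch of the standard conversion; the 20-second interval matches vCenter real-time sampling:

<pre>
# Minimal sketch: convert a raw CPU Ready counter (milliseconds summed
# over the sample interval) into the percentage used for the threshold.

def cpu_ready_percent(cpu_ready_ms, sample_interval_s=20):
    return cpu_ready_ms / (sample_interval_s * 1000.0) * 100.0

# The table's example: 400 ms over a 20-second interval is 2% CPU Ready.
print(cpu_ready_percent(400))  # -> 2.0
</pre>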

You must comply with the requirements described in the System Performance Monitoring section of the [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Unified Contact Center Enterprise Design Guide] and in the Performance Counters section in the Release 10.0(1) [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_installation_and_configuration_guides_list.html Serviceability Best Practices Guide for Unified ICM/Contact Center Enterprise.]
