| Updated the ''Creating Virtual Machines from OVA VM Templates'' section.<br>Removed OVA&nbsp;list from this page and added a link.<br>Added the Scalability Assumptions section.<br>Updated the CUIC&nbsp;RAM (GB) numbers in the sample CCE&nbsp;deployments.<br>Updated the sample deployment tables and highlighted optional items.

|-

| December 22, 2010

| Updated the pointer links for the Bill of Material, CCMP virtualization page, and the Hybrid Deployment section.

|-

| December 20, 2010

| Updated the Component Capacities section, the VM configuration requirements table, the Hybrid Deployment Options section, the steps for installing/migrating Unified CCE Components, the Support for Virtualization on the ESXi/UCS Platform section, and the Hardware Requirements section.

|-

| October&nbsp;1,&nbsp;2010


| This page, [http://docwiki.cisco.com/wiki/UCCE_on_UCS_Deployment_Certification_Requirements_and_Ordering_Information UCCE on UCS Deployment Certification Requirements and Ordering Information], and [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE UCS Network Configuration for UCCE] updated to reflect support of UCS C-Series hardware on UCCE.

|-

| September&nbsp;28,&nbsp;2010

|-

| September&nbsp;21,&nbsp;2010

| IPIVR added to list of supported components/deployments and as an optional component in the sample deployment tables at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Sample_CCE_Deployments Sample CCE Deployments].

'''It is important that partners who are planning to sell UCS products on Unified Contact Center Enterprise read the DocWiki page''' [http://docwiki.cisco.com/wiki/UCCE_on_UCS_Deployment_Certification_Requirements_and_Ordering_Information UCCE on UCS Deployment Certification Requirements and Ordering Information].

This page contains essential information for partners about the following:

<sup>2</sup>On UCCE Release 8.5(4), an Engineering Special (ES) is required to support ICME on UCS on Windows Server 2008. Please contact TAC to obtain the required ES before proceeding with deployment. Additionally, if the deployment is greater than 8,000 agents for ICME on UCS, a manual change of the ICM router, logger, peripheral gateways, and HDS/DDS components is necessary. The virtual machine specifications (vCPU, RAM, and CPU and Memory reservations) must be changed to match the UCCE 9.0 OVAs. For details, see the [http://docwiki.cisco.com/wiki/OVA_Template_Details_for_Unified_Contact_Center_Enterprise_Release_9.x CCE 9.x OVA specifications] Wiki page.


:*Administration Client


<sup>3</sup>Starting with Release 9.0(3), virtualization of the Unified ICME with more than 12,000 agents and/or the use of NIC, SIGTRAN (for up to 150 PGs) is also supported on Cisco Unified Computing Systems (UCS) B Series and C Series hardware.

<sup>4</sup>Maximum Agents scales with call rate; please refer to the BOM for further guidance.

:*Outbound Option with SIP Dialer (collocate SIP Dialer and MR PG with Agent PG in the same VM guest. Generic PG can also be collocated with the Agent PG in the same VM guest. Published agent capacity formula with Outbound Option applies.)


:*Support Tools


:*Rogger (a Router and a Logger in the same VM)


:*The Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE is supported; see the section [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Support_for_UCM_Clustering_Over_the_WAN_with_Unified_CCE_on_UCS_Hardware Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware] for important information.


:*Unified IP-IVR is supported with Unified CCE on the UCS B-Series solution, and on UCS C-Series with the [http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/solution_overview_c22-597556.html UCS-C210-VCD2] model only. Please refer to the IP-IVR product-specific pages for details.


<br>The following deployments and Unified CCE components have not been qualified and '''are not supported''' in virtualization:


*Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.


*Unified CCH (multi-instance)


*Unified ICMH (multi-instance)


*Outbound Option with SCCP Dialer


*WebView Server


*Expert Advisor


*Span based Silent Monitoring on UCS B-series chassis. For support on UCS C-series, please consult the VMware Community/Knowledge Base regarding the possible use of promiscuous mode to receive the monitored traffic from Span port.


*Cisco Unified CRM Connector


*Agent PG with more than 1 Agent PIM (2nd PIM for UCM CTI RP is allowed as per SRND)


*Multi-Instance CTIOS


*IPsec. UCS does not support IPsec off-board processing, therefore IPsec is not supported in virtualization.

<br>

== Virtual and non-virtual servers deployment model ==


The hybrid (virtual and non-virtual servers) deployment model is supported. However, for paired components having side A and side B (e.g., Rogger A and Rogger B), the server hardware must be identical on both sides (e.g., you cannot mix bare-metal MCS on side A of the paired component and virtual UCS on side B). You may, however, deploy a mixture of UCS B and C Series for the separate duplex pair sides, so long as the UCS server and processor generations align (UCS-B200M2-VCS1 is equal in generation to UCS-C210M2-VCD2, for example). Hybrid support also means that non-paired components that are not yet virtualized can continue to run on MCS in UCCE on UCS deployments. See the [http://docwiki.cisco.com/wiki/Virtualization_for_Unified_CCE#Hybrid_Deployment_Options Hybrid Deployment Options] section for more information.

== Cisco SmartPlay Solution Packs for UC/Hardware Bundles Support ==

[http://www.cisco.com/web/partners/incentives_and_promotions/cisco_smartplay_promo.html Cisco SmartPlay Solution Packs for UC], the pre-configured bundles (''Value UC Bundles'') based on '''UCS B200M2 or C210M2''' that are offered as an ordering alternative to the UC on UCS TRCs above, are '''supported with caveats''':


:*For '''B200M2 and C210M2 Solution Packs (''Value UC Bundles'')''' that have better specifications than the UC on UCS B200M2/C210M2 TRC models (for example, 6 cores in the same CPU family), the UC on UCS spec-based hardware support policy must be followed. These bundles are supported by UCCE/CVP as an exception, provided the same UCCE VM co-residency rules are met and no more than two CVP Call Server VMs run on the same server/host.


:*''Other spec-based servers'' under the UC on UCS Spec-based Hardware Policy, with specifications equal to or better than the UC on UCS B200M2/C210M2 TRCs, '''may''' be used for UCCE/CVP once validated in the Customer Collaboration DMS/A2Q (Design Mentoring Session/Assessment To Quality) process. This also means a particular desired spec-based server model may '''not be approved''' for use after the server design review in the DMS/A2Q session.


== Hardware Requirements for Unified CCE Virtualized Systems ==


Requirements for Cisco Unified CCE systems using UCS B200 or C210 hardware are located on the [[Unified Computing System Hardware]] page.

Unified CCE supports MCS-7845-I3-CCE2 with virtualization. For a list of supported virtualized components on MCS servers, see the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted]. Unified CCE does not support UCS C200.


:*If you are upgrading ESXi software, see the section [http://docwiki.cisco.com/wiki/Ongoing_Virtualization_Operations_and_Maintenance Upgrade ESXi].


:*The Windows, SQL, and other third party software requirements for the Unified CCE applications in the ESXi/UCS platform are the same as in the physical server. For more information see the appropriate release of [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise &amp; Hosted].


=== ESXi 4.1 and 5.0 Software Requirements ===


When Cisco Unified CCE is running on ESXi 4.1 and 5.0, you must install or upgrade VMware Tools for ESXi to match the running version on each of the VMs and use all of the VMware Tools default settings. For more information, see the section VMware Tools. You must do this every time the ESXi version is upgraded.


Starting with CCE/CVP Release 9.x in a Windows Server 2008 environment, disabling LRO in ESXi 5.0 (and later) is no longer a requirement.


=== ESXi 4.1 Software Requirements ===


When Cisco Unified CCE is running on ESXi 4.1, you must perform the following steps:


:*You must install or upgrade VMware Tools for ESXi 4.1 on each of the VMs and use all of the VMware Tools default settings. For more information, see the section [http://docwiki.cisco.com/wiki/VMware_Tools VMware Tools].


:*You must disable Large Receive Offload (LRO). For details, see the section [http://docwiki.cisco.com/wiki/Disable_LRO Disable LRO]; one possible scripted approach is sketched below.
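Where many hosts must be updated, these advanced settings can also be changed from a script. The sketch below is illustrative only; it assumes SSH access is enabled on the ESXi host and the third-party ''paramiko'' library is installed, and the host name, credentials, and the vmxnet2/vmxnet3 advanced option names (taken from VMware's published LRO guidance) should be verified against the Disable LRO page before use.

<source lang="python">
# Illustrative sketch: disable LRO-related advanced settings on an ESXi 4.1
# host over SSH. The host name and credentials below are placeholders.
import paramiko

LRO_SETTINGS = [
    "/Net/VmxnetSwLROSL",
    "/Net/Vmxnet2SwLRO",
    "/Net/Vmxnet2HwLRO",
    "/Net/Vmxnet3SwLRO",
    "/Net/Vmxnet3HwLRO",
]

def disable_lro(host, user, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        for setting in LRO_SETTINGS:
            # esxcfg-advcfg -s 0 <option> sets the advanced option to 0 (off).
            stdin, stdout, stderr = client.exec_command(
                "esxcfg-advcfg -s 0 " + setting)
            print(stdout.read().decode(), stderr.read().decode())
    finally:
        client.close()

disable_lro("esxi-host.example.com", "root", "password")
</source>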

{{note| You must use the OVA VM templates to create the Unified CCE component VMs.}}


<br>For instructions on how to obtain the OVA templates, see [http://docwiki.cisco.com/wiki/Downloading_OVA_Templates_for_UC_Applications Downloading OVA Templates for UC Applications]. <br><br>
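One way to script the creation of VMs from the downloaded OVA templates is VMware's ovftool command-line utility. The sketch below is a minimal example, assuming ovftool is installed; the OVA file name, VM name, datastore, and vi:// target are placeholders for your environment.

<source lang="python">
# Illustrative sketch: deploy a downloaded OVA template to an ESXi host with
# VMware's ovftool utility. All names below are placeholders.
import subprocess

def deploy_ova(ova_path, vm_name, datastore, target):
    subprocess.run(
        [
            "ovftool",
            "--acceptAllEulas",
            "--name=" + vm_name,
            "--datastore=" + datastore,
            "--diskMode=thick",  # thin-provisioned vDisks are not supported
            ova_path,
            target,              # e.g. "vi://root:password@esxi-host/"
        ],
        check=True,
    )

deploy_ova("UCCE_Rogger.ova", "rogger-a", "datastore1",
           "vi://root:password@esxi-host/")
</source>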

== Unified CCE Scalability Impacts ==

The [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise capacity sizing information] is based on the operating conditions published in the ''Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)'', Release 8.x/9.x, Chapter 10, ''Operating Conditions'', and the ''Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted'', Release 8.x/9.x, Section 5. Both documents are available at [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco.com].


The following features reduce the scalability of certain components below the agent count of the respective OVA [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise capacity sizing information].

*Extended Call Context (ECC) usage greater than the level noted in the Operating Conditions will have a performance and scalability impact on critical components of the Unified CCE solution. As noted in the SRND, the capacity impact varies with the ECC configuration; therefore, guidance must be provided on a case-by-case basis.


:*'''QoS must be enabled''' for both the private network connection (between Side A and Side B) and the public network connection (between the Router and the PG) in a Unified CCE setup. Refer to Chapter 12 of the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)] for more details.


== Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware ==

You can deploy the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS hardware.


When you implement this deployment model, be sure to follow the best practices outlined in the section "IPT: Clustering Over the WAN" in the [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)].

In addition, note the following expectations for UCS hardware points of failure:

*For communication path single point of failure testing performed by Cisco on the Unified CCE UCS B-Series High Availability (HA) deployment, system call handling was observed to be degraded for up to 45 seconds while the system recovered from the fault, depending upon the subsystem faulted. Single points of failure will not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.

*Multiple points of failure on the Unified CCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.

=== B-Series Considerations ===


Cisco recommends use of the M81KR Virtual Interface Card (VIC) for Unified CCE deployments, though the M71KR(-E/Q) and M72KR(-E/Q) may also be used as per reference design 2 detailed at the dedicated networking page linked below. M51KR, M61KR and 82598KR are not supported for Contact Center use in UCS B series blades.


New B-Series deployments are recommended to use Nexus 5000/7000 Series data center switches with vPC port channels. This technology has been shown to add considerable advantages for Contact Center applications in fault recovery scenarios.


See the configuration guidelines in [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE#UCCE_on_UCS_B-Series_Network_Configuration UCCE on UCS B-Series Network Configuration].

=== C-Series Considerations ===

If deploying Clustering Over the WAN with C-Series hardware, '''do not''' trunk public and private networks. You '''must''' use separate physical interfaces off of the C-Series servers to create the public and private connections. See the configuration guidelines in [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE#UCCE_on_UCS_C-Series_Network_Configuration UCCE on UCS C Series Network Configuration].

= Notes for Deploying Unified CCE Applications on UCS B Series Hardware with SAN =

In Storage Area Network (SAN) architecture, storage consists of a series of Redundant Array of Independent Disks (RAID) arrays. A Logical Unit Number (LUN) that represents a device identifier can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.


In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.


Keep the following considerations in mind when deploying UCCE applications on UCS B Series hardware with SAN.


*SAN disk arrays must be configured as RAID 5 or RAID 10. Note: RAID 6/ADG is also supported as an extension of RAID 5.


*Each Historical Data Server (HDS) requires a dedicated LUN and a datastore with a 2 MB block size. No other application can reside on the same datastore as the HDS. The HDS requires a 2 MB block size to accommodate the 500 GB OVA disk size, which exceeds the 256 GB file size supported by the default 1 MB block size for datastores. The HDS block size is configured in VMware at datastore creation.


*To help keep your system running most efficiently, schedule automatic database purging to run when your system is least busy.<br>

**Kernel Disk Command Latency – It should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. A high Kernel Command Latency indicates there is a lot of queuing in the ESXi kernel.

*The SAN design and configuration must meet the following Windows performance counter requirements on UCCE VMs:

**AverageDiskQueueLength must remain less than (1.5 ∗ (the total number of disks in the array)).

**&nbsp;%Disktime must remain less than 60%.<br>

*Any given SAN array must be designed to have an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be designed for the 95th percentile IOPS values published in&nbsp;[http://docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements]


*vSphere will alarm when disk free space is less than 20% free on any datastore. Recommendation is to provision at least 20% free space overhead, with 10% overhead '''required'''.


*Deploy 4-8 VMs per LUN/datastore so long as IOPS and space requirements can be met; the supported range is 1-10 VMs per LUN/datastore. A small sizing sketch follows this list.
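The sizing rules above can be sanity-checked with simple arithmetic. The following sketch illustrates them with assumed example figures; the array sizes, counter samples, and IOPS numbers are placeholders, not measurements.

<source lang="python">
# Illustrative arithmetic for the SAN guidance above; all inputs are examples.

def max_vmfs3_file_gb(block_size_mb):
    # VMFS-3 maximum file size scales with block size: a 1 MB block allows
    # files up to 256 GB, so the 500 GB HDS vDisk needs a 2 MB block size.
    return 256 * block_size_mb

def disk_queue_ok(avg_disk_queue_length, disks_in_array):
    # AverageDiskQueueLength must stay below 1.5 * (disks in the array).
    return avg_disk_queue_length < 1.5 * disks_in_array

def disk_time_ok(pct_disk_time):
    # %Disktime must stay below 60%.
    return pct_disk_time < 60.0

def array_iops_ok(array_iops_capacity, app_95th_percentile_iops):
    # The array's IOPS capacity must exceed the sum of the 95th-percentile
    # IOPS of all resident UC applications.
    return array_iops_capacity > sum(app_95th_percentile_iops)

def free_space_ok(free_gb, total_gb):
    # Provision at least 20% free datastore space (10% is the hard minimum).
    return free_gb / total_gb >= 0.20

print(max_vmfs3_file_gb(2))               # 512 GB -> fits the 500 GB HDS vDisk
print(disk_queue_ok(9.0, 8))              # 9.0 < 12.0 -> True
print(array_iops_ok(5000, [1800, 1200]))  # 5000 > 3000 -> True
print(free_space_ok(120, 500))            # 24% free -> True
</source>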

<br>


See below for an example of SAN configuration for Rogger 2000 agent deployment. This example corresponds to the 2000 agent Sample CCE Deployment for UCS B-Series described in: [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Unified_CCE_Component_Coresidency_and_Sample_Deployments Unified CCE Component Coresidency and Sample Deployments].

== Example of SAN Configuration for Unified CCE ROGGER Deployment up to 2000 Agents ==

The following SAN configuration was a tested design, though generalized here for illustration. It is not the only possible way in which to provision SAN arrays, LUNs, and datastores to UC applications. However, you must adhere to the guidance given earlier in this section.


Follow the steps and references below to install the Unified CCE components on virtual machines. You can use these instructions to install or upgrade systems running with Unified CCE 8.0(2) and later. You can also use these instructions to migrate virtualized systems from Unified CCE 7.5(x) to Unified CCE 8.0(2) or later, including the Avaya PG and other selected TDM PGs that were supported on Unified CCE 7.5(x). Not all TDM PGs supported in Unified CCE 7.5(x) are supported in Unified CCE 8.0(x)/9.0(x). For more information, see the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].


#Acquire the supported servers for Unified CCE 8.0(2) or later release.


#*MCS-7845-I3-CCE2 is specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].

#*If there are PG VMs running on the older MCS-7845-H2 or MCS-7845-I2 servers, replace those servers with supported servers.

#Configure the network. See reference at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#UCS_Network_Configuration UCS Network Configuration]. {{note |Configuring the network for the MCS servers is the same as configuring the network for the UCS C-Series servers.}}


#If VMware VirtualCenter is used for virtualization management, install or update to VMware vCenter Server 4.0 or later.


#Create the Unified CCE virtual machines from the OVA templates. See reference at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Creating_Virtual_Machines_from_OVA_VM_Templates Creating Virtual Machines from OVA VM Templates]. This is a requirement for all components running Unified CCE 8.0(2) and later; in Unified CCE 7.5(x), it is not a requirement.

#Install VMware Tools on the virtual machines, matching the version of the ESXi software running on the host.

#Install the Windows OS and SQL Server (for Logger and HDS components) on the created virtual machines. {{note| Microsoft Windows Server 2008 R2 ''Standard Edition'' and Microsoft SQL Server 2008 R2 ''Standard Edition'' should be used for virtual machine guests. See related information in the links below. If a deployment prior to Unified CCE Release 9.0 uses Windows Server 2003 and SQL Server 2005, continue to use them on those older Unified CCE releases.}}


You can have one or more Unified CCE VMs co-resident on the same ESXi server; however, you must follow the rules described below:


:*You can have any number of Unified CCE virtual machines, in any combination of co-residency, on an ESXi server as long as the sum of all the virtual machines' CPU and memory resource allocations does not overcommit the available ESXi server computing resources.


:*You must not overcommit CPU on an ESXi server that is running Unified CCE realtime application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPU cores available on the ESXi server (not counting Hyper-Threading cores). Commonly, UCS servers have two physical CPU sockets with 4-10 cores each.


:*You must not overcommit memory on an ESXi host running UC realtime applications. You must allocate a minimum of 2 GB of memory for the ESXi kernel. For example, a B200M2 server with 48 GB of memory would allow up to 46 GB for virtual machine allocation; the total memory allocated for all the virtual machines on that ESXi server must not be greater than 46 GB. Note that the ESXi kernel memory allocation can vary with the hardware server platform type, so take care not to overallocate. (A small validation sketch follows this list.)


:*VM co-residency with Unified Communications '''and''' third party applications (for example, WFM) is '''not''' supported unless it is specifically indicated in the following subsection.
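The following is a minimal sketch of the CPU and memory rules above, using assumed host and VM figures; the example VM sizes are placeholders, and the values from the OVA templates should be used in practice.

<source lang="python">
# Illustrative check of the co-residency sizing rules above.
ESXI_KERNEL_GB = 2  # reserve a minimum of 2 GB for the ESXi kernel

def placement_ok(vms, host_cores, host_ram_gb):
    """vms: list of (vcpus, ram_gb) tuples for the proposed guests."""
    total_vcpus = sum(vcpus for vcpus, ram in vms)
    total_ram_gb = sum(ram for vcpus, ram in vms)
    # No CPU overcommit: total vCPUs must not exceed physical cores
    # (Hyper-Threading logical cores do not count).
    if total_vcpus > host_cores:
        return False
    # No memory overcommit after the ESXi kernel allocation.
    return total_ram_gb <= host_ram_gb - ESXI_KERNEL_GB

# Example: a 4 vCPU/8 GB VM plus two 2 vCPU/4 GB VMs on an 8-core, 48 GB host.
print(placement_ok([(4, 8), (2, 4), (2, 4)], host_cores=8, host_ram_gb=48))  # True
</source>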

<br>


The following tables show supported co-residency for Unified CCE components. A '''diamond''' indicates that co-residency is allowed; an '''asterisk''' with a number denotes limited or conditional support. Check the Exceptions and Notes below each table for guidance on those co-residencies.

:*An HDS (all types) '''cannot''' co-reside with a Logger, Rogger, CVP Reporting Server, or another HDS unless those applications are deployed on separate DAS (Direct Attached Storage) RAID arrays. The standard RAID array for virtualization is 8 disk drives in RAID 5 or RAID 10, so two arrays require 16 drives to allow for co-residency. For example, the UCS-C210M2-VCD2 would not allow a Rogger and an HDS on the same server, as it has only a single 8-disk RAID 5 array; a UCS-C260M2-VCD2 has 16 drives in two 8-disk RAID 5 arrays, which allows a Rogger and an HDS to be deployed on that single C-Series server, so long as each application is installed to a separate array.


For co-residency restrictions specific to individual Unified Communications applications that run on VMs, see the Unified Communications Virtualization Sizing Guidelines DocWiki page.

The following section depicts sample CCE UCS deployments compliant with the co-residency rules described above.


== Sample Unified CCE Deployments ==

'''Notes'''

#The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.


#The sample deployments in these tables reflect the enhanced CCE 9.x (and later) VM co-residency rules. Note that the C-Series restriction that prevented the HDS from co-residing with a Router, Logger, or PG has been changed; see the preceding note about the Historical Data Server (HDS) VM co-residency rule with other applications.

#Any deployment > 2k agents requires at least 2 chassis.

#It may be preferable to place your domain controller on bare metal rather than in the UCS B-series chassis itself. When a power failure occurs, the vCenter login credentials are dependent on Active Directory, leading to a potential chicken-and-egg problem if the domain controller is down as well.

#ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; these components are deployed on separate hardware. Please review the product SRND for more details.

#For large multi-core UCS models (more than 8 cores per ESXi host), you can still use the sample deployments below (which are based on 8 cores per ESXi host) and then collapse them onto the actual available cores. For example, VMs compliant with the co-residency rules on 2 x C210M2-VCD2 TRCs can be collapsed onto a single C260M2 TRC or C240M3 TRC host. Extra cores on large multi-core UCS models may not all be utilized by the CCE on UCS solution due to storage constraints; this is subject to verification in the DMS/A2Q process.


*NAM Rogger is deployed on a (bare-metal) quad CPU server as specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].

*Each customer instance central controller (CICM) connecting to the NAM may be deployed in its own virtual machine as a Rogger or separate Router/Logger pair on UCS hardware. Multiple CICM instances are not supported collocated in one VM. Existing published rules and capacities apply to CICM Rogger and Router/Logger VMs. (Note: CICMs are not supported on bare-metal UCS.)

*As in Enterprise deployments, each Agent PG is deployed in its own virtual machine. Multi-instance Agent PGs are not supported in a single VM. Existing published rules and capacities apply to PGs in Hosted deployments.


== Parent/Child Deployments ==


*The parent ICM is deployed on (bare-metal) servers as specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].

*The Unified Contact Center Enterprise Gateway PG and System PG are each deployed in its own virtual machine; agent capacity (and resources allocated to the VM) are the same as the Unified CCE Agent PG @ 2,000 agent capacity. Use the same virtual machine OVA template to create the CCE Gateway or System PG VM.


:*VM CPU affinity is not supported. You may not set CPU affinity for Unified CCE application VMs on vSphere.


:*''Starting with Unified CCE 9.0(1) and later, VM resource reservation is supported, and the computing resources have a default setting when deployed from the CCE 9.0 OVA.''


:*You must not change the computing resource configuration of your VM at any time.


:*You must never go below the minimum VM computing resource requirements as defined in the OVA templates.


:*ESXi Server hyperthreading is enabled by default.


:*VM vDisks must be deployed as either Thick Eager-Zeroed or Thick Lazy-Zeroed. Thin provisioning of Unified CCE VM vDisks is '''not''' supported. Per VMware best practices, Cisco recommends using Eager-Zeroed for best performance of application VMs on initial writes. For more information, refer to [http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware UC Virtualization Supported Hardware.]

<br>


Install the Unified CCE components after you create and configure the virtual machine. Installation of the Unified CCE components on a virtual machine is the same as the installation of the components on physical hardware.


Refer to the [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/prod_installation_guides_list.html Unified CCE documentation] for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

= Migrating Unified CCE Components to Virtual Machines =


Migrate the Unified CCE components from physical hardware or another virtual machine after you create and configure the virtual machine. Migration of these Unified CCE software components to a VM is the same as the migration of the components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_installation_guides_list.html Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise &amp; Hosted].


#After you install ESXi on the first disk array group (RAID 1 with disk 1 and disk 2), boot ESXi and use VMware vSphere Client to connect to the ESXi host.

#On the Configuration tab for the host, select Storage in the box labeled Hardware. Select the second disk array group with the RAID 5 configuration; the “Datastore Details” show that the block size is 1 MB by default.

#Right-click on this datastore and delete it. We will add the datastore back in the following steps.


= Timekeeping Best Practices for Windows =


You should follow the best practices outlined in the VMware Knowledge Base article [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 VMware KB: Timekeeping best practices for Windows].

*ESXi hosts and domain controllers should synchronize the time from the same NTP source.

:*The ESXi Server and the virtual machines must operate within the limit of the following ESXi performance counters.

:*The counters are measured against the following objects: the ESXi Server and the VM; ESXi Server processors (0 to 7) and VM vCPUs; ESXi Server and VM vmhba IDs (storage adapters); and ESXi Server and VM vmnic IDs (network adapters).

= System Performance Monitoring Using Windows Perfmon Counters =

You must comply with the best practices described in the System Performance Monitoring section of the [http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/ipcc_enterprise/ipccenterprise8_0_1/design/guide/uccesrnd80.pdf Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND), Release 8.0] and in Chapter 8, Performance Counters, of the [http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/ipcc_enterprise/ipccenterprise8_0_1/configuration/guide/icm80srvg.pdf Serviceability Best Practices Guide for Unified ICM/Contact Center Enterprise].
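As one illustration, the disk counters referenced earlier on this page can be sampled on a UCCE VM with the built-in Windows typeperf utility. The counter paths below are standard PhysicalDisk counters, and the sample count is an arbitrary example.

<source lang="python">
# Illustrative sketch: sample Windows disk counters with typeperf (run on the
# UCCE VM). typeperf prints one CSV line per sampling interval.
import subprocess

COUNTERS = [
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\PhysicalDisk(_Total)\% Disk Time",
]

def sample_counters(samples=5):
    result = subprocess.run(
        ["typeperf"] + COUNTERS + ["-sc", str(samples)],
        capture_output=True, text=True, check=True)
    return result.stdout

print(sample_counters())
</source>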

Updated the pointer links for the Bill of Material, CCMP virtualization page, and the Hybrid Deployment section.

December 20, 2010

Updated the Component Capacities section, the VM configuration requirements table, the Hybrid Deployment Options section, the steps for installing/migrating Unified CCE Components, the Support for Virtualization on the ESXi/UCS Platform section, and the Hardware Requirements section.

2On UCCE Release 8.5(4), an Engineering Special (ES) is required to support ICME on UCS on Windows Server 2008. Please contact TAC to obtain the required ES before proceeding with deployment. Additionally, if the deployment is greater than 8,000 agents for ICME on UCS, a manual change of the ICM router, logger, peripheral gateways, and HDS/DDS components is necessary. The virtual machine specifications (vCPU, RAM, and CPU and Memory reservations) must be changed to match the UCCE 9.0 OVAs. For details, see the CCE 9.x OVA specifications Wiki page.

3Starting with Release 9.0(3), virtualization of the Unified ICME with more than 12,000 agents and/or the use of NIC, SIGTRAN (for up to 150 PGs) is also supported on Cisco Unified Computing Systems (UCS) B Series and C Series hardware.

4Maximum Agents scales with call rate; please refer to BOM for further guidance.

The following deployments and Unified CCE components have not been qualified and are not supported in virtualization:

Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.

Unified CCH (multi-instance)

Unified ICMH (multi-instance)

Outbound Option with SCCP Dialer

WebView Server

Expert Advisor

Span based Silent Monitoring on UCS B-series chassis. For support on UCS C-series, please consult the VMware Community/Knowledge Base regarding the possible use of promiscuous mode to receive the monitored traffic from Span port.

Cisco Unified CRM Connector

Agent PG with more than 1 Agent PIM (2nd PIM for UCM CTI RP is allowed as per SRND)

Multi-Instance CTIOS

IPsec. UCS does not support IPsec off-board processing, therefore IPsec is not supported in virtualization.

Virtual and non-virtual servers deployment model

The hybrid (virtual and non-virtual servers) deployment model is supported. However, for paired components having side A and side B (e.g., Rogger A and Rogger B), the server hardware must be identical on both sides (e.g., you cannot mix bare metal MCS on side A of the paired component and virtual UCS on side B of the paired component). You may, however, deploy a mixture of UCE B and C series for the separate Duplex pair sides, so long as the UCS server and processor generation aligns (UCS-B200M2-VCS1 is equal generation to UCS-C210M2-VCD2, for example). The hybrid support is for non-paired components that are not yet virtualized can continue to be on MCS in the UCCE on UCS deployments. See the Hybrid Deployment Options section for more information.

Cisco SmartPlay Solution Packs for UC/Hardware Bundles Support

Cisco SmartPlay Solution Packs for UC, which are the pre-configured bundles (value UC bundles) based on UCS B200M2 or C210M2 as an alternative ordering to UC on UCS TRCs above are supported with caveats:

For B200M2 and C210M2 Solution Packs (Value UC Bundles) that have better specification than the UC on UCS B200M2/C210M2 TRC models (e.g., 6 cores per same cpu family, etc.), the UC on UCS spec-based HW support policy needs to be followed and these bundles are supported by UCCE/CVP as an exception providing the same UCCE VM co-residency rules are compliant and the number of CVP Call Server VMs cannot be more than two on the same server/host.

For other Spec-based Servers according to UC on UCS Spec-based Hardware Policy that have specification equal to or better than the UC on UCS B200M2/C210M2 TRCs, they may be used for UCCE/CVP once validated in the Customer Collaboration DMS/A2Q (Design Mentoring Session/Assessment To Quality) process. This also means a particular desired spec-based server model may not be approved for use after the server design review in the DMS/A2Q session.

ESXi 4.1 and 5.0 Software Requirements

When Cisco Unified CCE is running on ESXi 4.1 and 5.0, you must install or upgrade VMware Tools for ESXi to match the running version on each of the VMs and use all of the VMware Tools default settings. For more information, see the section VMware Tools. You must do this every time the ESXi version is upgraded.

Starting with CCE/CVP Release 9.x with Windows Server 2008 environment, disabling the LRO in ESXi 5.0 (and later) is no longer a requirement.

Unified CCE Scalability Impacts

The capacity sizing information is based on the operating conditions published in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND), Release 8.x/9.x', Chapter 10, Operating Conditions and the Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted',8.x/9.x', Section 5. Both documents are available at Cisco.com.

The following features reduce the scalability of certain components below the agent count of the respective OVA capacity sizing information.

Extended Call Context (ECC) usage greater than the level noted in the Operating Conditions will have a performance and scalability impact on critical components of the Unified CCE solution. As noted in the SRND, the capacity impact will vary based on ECC configuration, therefore, guidance must be provided on a case-by-case basis.

In addition, note the following expectations for UCS hardware points of failure:

For communication path single point of failure performed by Cisco on the Unified CCE UCS B-series High Availability (HA) deployment, system call handling was observed to be degraded for up to 45 seconds while the system recovered from the fault, depending upon the subsystem faulted. Single points of failure will not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.

Multiple points of failure on the Unified CCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.

B-Series Considerations

Cisco recommends the M81KR Virtual Interface Card (VIC) for Unified CCE deployments, though the M71KR(-E/Q) and M72KR(-E/Q) may also be used, per reference design 2 detailed on the dedicated networking page linked below. The M51KR, M61KR, and 82598KR are not supported for Contact Center use in UCS B-Series blades.

New B-Series deployments should use Nexus 5000/7000 Series data center switches with vPC PortChannels. This technology has been shown to provide considerable advantages for Contact Center applications in fault recovery scenarios.

C-Series Considerations

If deploying Clustering Over the WAN with C-Series hardware, do not trunk the public and private networks. You must use separate physical interfaces on the C-Series servers to create the public and private connections. See the configuration guidelines in UCCE on UCS C Series Network Configuration.

Notes for Deploying Unified CCE Applications on UCS B Series Hardware with SAN

In Storage Area Network (SAN) architecture, storage consists of a series of arrays of Redundant Array of Independent Disks (RAIDs). A Logical Unit Number (LUN) that represents a device identifier can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.

In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.

Keep the following considerations in mind when deploying UCCE applications on UCS B Series hardware with SAN.

SAN disk arrays must be configured as RAID 5 or RAID 10. Note: RAID 6/ADG is also supported as an extension of RAID 5.

Kernel Disk Command Latency – It should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. A high Kernel Command Latency indicates there is a lot of queuing in the ESXi kernel.

The SAN design and configuration must meet the following Windows performance counters on UCCE VMs:

AverageDiskQueueLength must remain less than 1.5 × (the total number of disks in the array).
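As a concrete illustration of this rule, the following minimal Python sketch (hypothetical sample values, not a Cisco tool) checks a sampled AverageDiskQueueLength against the 1.5 × disk-count limit:

<pre>
# Minimal sketch of the AverageDiskQueueLength rule. The sampled value below is
# a hypothetical example; on a live UCCE VM you would read the counter from
# Windows Performance Monitor (PhysicalDisk \ Avg. Disk Queue Length).

def max_disk_queue_length(disks_in_array: int) -> float:
    """Upper bound for AverageDiskQueueLength per the SAN design rule."""
    return 1.5 * disks_in_array

sampled = 9.8   # hypothetical Performance Monitor sample
disks = 8       # standard 8-disk RAID 5/10 array

limit = max_disk_queue_length(disks)  # 12.0 for an 8-disk array
if sampled < limit:
    print(f"OK: {sampled} < {limit}")
else:
    print(f"Limit exceeded: {sampled} >= {limit}; revisit the SAN design")
</pre>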

Example of SAN Configuration for Unified CCE ROGGER Deployment up to 2000 Agents

The following SAN configuration was a tested design, generalized here for illustration. It is not the only way to provision SAN arrays, LUNs, and datastores for UC applications; however, you must adhere to the guidance given earlier in this section.

Rogger Side A

Rogger Side B

Follow the steps and references below to install the Unified CCE components on virtual machines. You can use these instructions to install or upgrade systems running with Unified CCE 8.0(2) and later. You can also use these instructions to migrate virtualized systems from Unified CCE 7.5(x) to Unified CCE 8.0(2) or later, including the Avaya PG and other selected TDM PGs that were supported on Unified CCE 7.5(x). Not all TDM PGs supported in Unified CCE 7.5(x) are supported in Unified CCE 8.0(x)/9.0(x). For more information, see the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.

Acquire the supported servers for Unified CCE 8.0(2) or later release.

Create the Unified CCE virtual machines from the OVA templates. See the reference at Creating Virtual Machines from OVA VM Templates. This is a requirement for all components running Unified CCE 8.0(2) and later; in Unified CCE 7.5(x) it is not a requirement.

Install VMware Tools on the virtual machines, matching the version of the ESXi software.

Install Windows OS and SQL Server (for Logger and HDS components) on the created virtual machines.

Note:

Microsoft Windows Server 2008 R2 Standard Edition and Microsoft SQL Server 2008 R2 Standard Edition should be used for virtual machine guests. See the related information in the links below. If you have a deployment prior to Unified CCE Release 9.0 that uses Windows Server 2003 and SQL Server 2005, continue to use them with those older Unified CCE releases.

Unified CCE Component VM Co-residency and Sample Deployments

You can have one or more Unified CCE VMs co-resident on the same ESXi server; however, you must follow the rules described below:

You can have any number and combination of co-resident Unified CCE virtual machines on an ESXi server as long as the sum of all the virtual machines' CPU and memory resource allocations does not overcommit the available ESXi server computing resources.

You must not have CPU overcommitment on an ESXi server that is running Unified CCE real-time application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPU cores available on the ESXi server (not counting Hyper-Threading cores). UCS servers commonly have two physical CPU sockets with 4-10 cores each.

You must not have memory overcommitment on an ESXi host running UC real-time applications. You must allocate a minimum of 2GB of memory for the ESXi kernel. For example, a B200M2 server with 48GB of memory leaves up to 46GB available for virtual machine allocation, so the total memory allocated to all the virtual machines on that ESXi server must not exceed 46GB. The ESXi kernel memory allocation can vary with the hardware platform, so take care not to overallocate. (A validation sketch follows these rules.)

VM co-residency with Unified Communications and third-party applications (for example, WFM) is not supported unless specifically indicated in the following subsection.
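Because the CPU and memory rules above are simple arithmetic, a pre-deployment check can be scripted. The following Python sketch is illustrative only; the host capacity and per-VM figures are hypothetical examples, and the 2GB ESXi kernel reservation is the minimum noted above:

<pre>
# Illustrative pre-deployment check of the CPU and memory co-residency rules.
# All host and VM figures are hypothetical; substitute your own inventory and
# the vCPU/RAM values from the Cisco OVA templates.

HOST_CORES = 8          # physical cores; Hyper-Threading cores are not counted
HOST_MEMORY_GB = 48     # total physical memory (for example, a B200M2)
ESXI_KERNEL_GB = 2      # minimum reservation for the ESXi kernel (may be larger)

vms = [                 # (name, vCPUs, RAM in GB) -- example values only
    ("Rogger A", 4, 8),
    ("PG 1A", 2, 4),
    ("AW-HDS", 4, 16),
]

total_vcpus = sum(vcpu for _, vcpu, _ in vms)
total_ram_gb = sum(ram for _, _, ram in vms)
available_ram_gb = HOST_MEMORY_GB - ESXI_KERNEL_GB   # 46GB in this example

assert total_vcpus <= HOST_CORES, "CPU overcommitted: reduce co-resident VMs"
assert total_ram_gb <= available_ram_gb, "Memory overcommitted: reduce VMs"
print(f"OK: {total_vcpus}/{HOST_CORES} cores, {total_ram_gb}/{available_ram_gb} GB")
</pre>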

The following tables show the supported Unified CCE component co-residencies. A diamond indicates that co-residency is allowed, whereas an asterisk with a # denotes limited or conditional support; see the Exceptions and Notes below the table for guidance on those combinations.

An HDS (all types) cannot co-reside with a Logger, Rogger, CVP Reporting Server, or another HDS unless those applications are deployed on separate DAS (Direct Attached Storage) RAID arrays. The standard RAID array for virtualization is 8 disk drives in RAID 5 or RAID 10, so two arrays would require 16 drives to allow for co-residency. For example, the UCS-C210M2-VCD2 would not allow a Rogger and an HDS on the same server because it has only a single 8-disk RAID 5 array. A UCS-C260M2-VCD2 has 16 drives in two 8-disk RAID 5 arrays, which allows a Rogger and an HDS to be deployed on that single C-Series server, provided each application is installed on a separate array.

The following section depicts sample CCE deployments on UCS that comply with the co-residency rules described above.

Sample Unified CCE Deployments

Notes

The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.

The sample deployments in these tables reflect the enhanced CCE 9.x (and later) VM co-residency rules. Note that the C-Series restriction that prevented the HDS from co-residing with a Router, Logger, or a PG has been changed. See the preceding note about Historical Data Servers (HDS) VM co-residency rule with other applications.

Any deployment with more than 2,000 agents requires at least two chassis.

It may be preferable to place your domain controller on bare metal rather than in the UCS B-series chassis itself. When a power failure occurs, the vCenter login credentials are dependent on Active Directory, leading to a potential chicken-and-egg problem if the domain controller is down as well.

ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; these components are deployed on separate hardware. Please review the product SRND for more details.

For large multi-core UCS models (more than 8 cores per ESXi host), you can still use the sample deployments below (which are based on 8 cores per ESXi host) and collapse them onto the available cores. For example, VMs that comply with the co-residency rules on 2 x C210M2-VCD2 TRC hosts can be collapsed onto a single C260M2 TRC or C240M3 TRC host. Not all of the extra cores on large multi-core UCS models may be usable by the CCE on UCS solution due to storage constraints; this is verified in the DMS/A2Q process.

Hybrid Deployment Options

Some Unified Contact Center deployments are supported in a "hybrid" fashion whereby certain components must be deployed on (bare-metal) Media Convergence Servers (MCS) or generic servers, and other components are deployed in virtual machine guests on Unified Computing System (UCS) or MCS servers. The following sub-sections provide further details on these hybrid deployment options.

Each customer instance central controller (CICM) connecting to the NAM may be deployed in its own virtual machine as a Rogger or separate Router/Logger pair on UCS hardware. Multiple CICM instances are not supported collocated in one VM. Existing published rules and capacities apply to CICM Rogger and Router/Logger VMs. (Note: CICMs are not supported on bare-metal UCS.)

As in Enterprise deployments, each Agent PG is deployed in its own virtual machine. Multi-instance Agent PGs are not supported in a single VM. Existing published rules and capacities apply to PGs in Hosted deployments.

The Unified Contact Center Enterprise (or Express) child may be deployed virtualized according to existing published VM requirements.

The Unified Contact Center Enterprise Gateway PG and System PG are each deployed in their own virtual machine; the agent capacity (and the resources allocated to the VM) are the same as for a Unified CCE Agent PG at 2,000-agent capacity. Use the same virtual machine OVA template to create the CCE Gateway or System PG VM.

Cisco Unified CCE-Specific Information for OVA Templates

Creating Virtual Machines by Deploying the OVA Templates

In the vSphere client, perform the following steps to deploy the virtual machines.

Highlight the host or cluster to which you wish the VM to be deployed.

Select File > Deploy OVF Template.

Click the Deploy from File radio button and specify the name and location of the file you downloaded in the previous section, or click the Deploy from URL radio button and specify the complete URL in the field. Then click Next.

Verify the details of the template, and click Next.

Give the VM you are about to create a name, and choose an inventory location on your host, then click Next.

Choose the datastore on which you would like the VM to reside - be sure there is sufficient free space to accommodate the new VM, then click Next.

Choose a virtual network for the VM, then click Next.

Verify the deployment settings, then click Finish.
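The same deployment can be scripted rather than clicked through. The sketch below drives VMware's ovftool CLI from Python; the host name, credentials, datastore, network, and vi:// inventory path are all placeholders, and the flags should be verified against your installed ovftool version:

<pre>
# Sketch: deploy a CCE OVA with VMware's ovftool CLI. All names below are
# placeholders; verify the flags against your ovftool version's documentation.
import subprocess

OVA = "UCCE_Rogger.ova"                                    # downloaded OVA template
TARGET = "vi://admin@vcenter.example.com/DC1/host/esxi01"  # placeholder locator

subprocess.run(
    [
        "ovftool",
        "--name=rogger-a",              # VM name in the inventory
        "--datastore=datastore1",       # must have sufficient free space
        "--network=VM Network",         # virtual network for the VM
        "--diskMode=eagerZeroedThick",  # thick provisioning (see Notes below)
        OVA,
        TARGET,
    ],
    check=True,
)
</pre>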

Notes

VM CPU affinity is not supported. You may not set CPU affinity for Unified CCE application VMs on vSphere.

VM Resource Reservation - VM resource reservation is not supported for Unified CCE application VMs on vSphere prior to Release 9.0. The VM computing resources should keep the default reservation setting, which is no resource reservation.

Starting with Unified CCE 9.0(1) and later, VM resource reservation is supported and the computing resources have a default setting when deployed from the OVA for CCE 9.0.

You must not change the computing resource configuration of your VM at any time.

You must never go below the minimum VM computing resource requirements as defined in the OVA templates.

ESXi Server hyperthreading is enabled by default.

VM vDisks must be deployed as either Thick Eager-Zeroed or Thick Lazy-Zeroed. Thin provisioning of Unified CCE VM vDisks is not supported. Per VMware best practices, Cisco recommends using Eager-Zeroed for best performance of application VMs on initial writes. For more information, refer to UC Virtualization Supported Hardware.
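To audit existing VMs for unsupported thin-provisioned vDisks, a sketch along the following lines can be used with pyVmomi, VMware's Python vSphere SDK. The connection details are placeholders, and the exact SmartConnect arguments vary by pyVmomi version (older releases need an explicit sslContext for self-signed certificates):

<pre>
# Sketch: report the provisioning mode of every vDisk via pyVmomi.
# Connection details are placeholders; adjust SmartConnect for your version.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            backing = dev.backing  # flat backing exposes provisioning flags
            if getattr(backing, "thinProvisioned", False):
                mode = "thin (NOT supported for Unified CCE)"
            elif getattr(backing, "eagerlyScrub", False):
                mode = "thick eager-zeroed"
            else:
                mode = "thick lazy-zeroed"
            print(vm.name, dev.deviceInfo.label, mode)

Disconnect(si)
</pre>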

Preparing for Windows Installation

In the vSphere client, perform the following steps to prepare for operating system installation.

Right click on the virtual machine you want to edit and select Edit Settings. A Virtual Machine Properties dialog appears.

On the Hardware tab, select CD/DVD Drive 1. Under Device Type, select Datastore ISO File and enter the location of the operating system ISO.

Click OK to save setting changes.

Power up your VM and continue with operating system installation.
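The same Edit Settings change can be made programmatically. This pyVmomi sketch uses placeholder names throughout and assumes the VM already has a CD/DVD drive:

<pre>
# Sketch: point an existing CD/DVD drive at a datastore ISO via pyVmomi.
# Placeholder names throughout; assumes the VM already has a CD/DVD device.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
vm = content.searchIndex.FindByInventoryPath("DC1/vm/rogger-a")  # placeholder

cdrom = next(d for d in vm.config.hardware.device
             if isinstance(d, vim.vm.device.VirtualCdrom))
cdrom.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(
    fileName="[datastore1] iso/windows2008r2.iso")   # datastore ISO path
cdrom.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    connected=True, startConnected=True)             # attach at power on

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation="edit", device=cdrom)])
vm.ReconfigVM_Task(spec)   # then power up the VM and install the OS
</pre>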

Remote Control of the Virtual Machines

For administrative tasks, you can use either Windows Remote Desktop or the VMware Infrastructure Client for remote control. The contact center supervisor can access the ClientAW VM using Windows Remote Desktop.

Installing VMware Tools

VMware Tools must be installed on each of the VMs, and all of the VMware Tools default settings should be used. Refer to the VMware documentation for instructions on installing or upgrading VMware Tools on a VM with a Windows operating system.

Installing Unified CCE Components on Virtual Machines

Install the Unified CCE components after you create and configure the virtual machine. Installation of the Unified CCE components on a virtual machine is the same as the installation of the components on physical hardware.

Refer to the Unified CCE documentation for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

Migrating Unified CCE Components to Virtual Machines

Migrate the Unified CCE components from physical hardware or another virtual machine after you create and configure the virtual machine. Migration of these Unified CCE software components to a VM is the same as the migration of the components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise & Hosted.

This section applies to storing virtual machines on C210 local storage. The C210 server comes with default local storage configured as two RAID groups: disks 1-2 are RAID 1, and the remaining disks (3-10) are RAID 5.

Creating the virtual machine for the Unified CCE Administration and Data Server requires a large virtual disk. You must follow the steps below to configure the ESXi data store block size to 2MB, so that it can accommodate the Administration and Data Server virtual disk size, before you deploy the OVAs for the following Unified CCE components:

AW-HDS

AW-HDS-DDS

HDS-DDS

Steps to configure the ESXi data store block size to 2MB:

After you install ESXi on the first disk array group (RAID 1 with disk 1 and disk 2), boot ESXi and use VMware vSphere Client to connect to the ESXi host.

On the Configuration tab for the host, select Storage in the Hardware box. Select the second disk array group (the RAID 5 configuration); under "Datastore Details" you will see that the block size is 1MB by default.

Right-click this data store and delete it. You will add the data store back in the following steps.

Click "Add Storage…" and select the Disk/LUN.

The data store that you just deleted is now available to add; select it.

In the configuration for this data store you can now select the block size; select 2MB and finish adding the storage to the ESXi host. This storage is now available for deploying virtual machines that require a large disk size, such as the Administration and Data Servers.
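For reference, the same re-creation can be done from the ESXi shell with vmkfstools instead of the vSphere Client. This is a hedged sketch: the device path is a hypothetical placeholder, and the flags should be verified against the vmkfstools documentation for your ESXi release:

<pre>
# Sketch: recreate the second RAID group's datastore as VMFS-3 with a 2MB block
# size using vmkfstools (run on the ESXi host). The partition path below is a
# hypothetical placeholder; verify the flags against your ESXi documentation.
import subprocess

DEVICE = "/vmfs/devices/disks/naa.600508b1001c0000:1"  # placeholder partition

subprocess.run(
    ["vmkfstools",
     "-C", "vmfs3",        # create a VMFS-3 file system
     "-b", "2m",           # 2MB block size for large vDisks
     "-S", "datastore2",   # placeholder volume label
     DEVICE],
    check=True,
)
</pre>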

Timekeeping Best Practices for Windows

ESXi hosts and domain controllers should synchronize the time from the same NTP source.

When Unified CCE virtual machines join the domain, they synchronize the time with the domain controller automatically using w32time.

In the VMware Tools toolbox GUI of the Windows Server 2003 guest operating system, be sure that "Time synchronization between the virtual machine and the host operating system" remains deselected; this checkbox is deselected by default.
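You can verify this setting across all VMs with a short pyVmomi sketch; the connection details are placeholders as in the earlier sketches, and syncTimeWithHost is the Tools property behind that checkbox:

<pre>
# Sketch: flag any VM whose VMware Tools host time synchronization is enabled.
# Connection details are placeholders, as in the earlier sketches.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    tools = vm.config.tools   # vim.vm.ToolsConfigInfo
    if tools is not None and tools.syncTimeWithHost:
        print(f"{vm.name}: host time sync ENABLED; deselect it so the VM "
              "synchronizes via w32time from the domain controller")
</pre>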

System Performance Monitoring Using ESXi Counters

Make sure that you follow VMware's ESXi best practices and SAN vendor's best practices for optimal system performance.

VMware provides a set of system monitoring tools for the ESXi platform and the VMs. These tools are accessible through the VMware Infrastructure Client or through VirtualCenter.

You can use Windows Performance Monitor to monitor the performance of the VMs. Be aware that the CPU counters may not reflect the physical CPU usage since the Windows Operating System has no direct access to the physical CPU.

You can use Unified CCE Serviceability Tools and Unified CCE reports to monitor the operation and performance of the Unified CCE system.

The ESXi server and the virtual machines must operate within the limits of the following ESXi performance counters.

You can use the following ESXi counters as performance indicators.

{| border="1"
|-
! Category !! Object !! Measurement !! Units !! Description !! Performance Indication and Threshold
|-
| CPU || ESXi Server<br>VM || CPU Usage (Average) || Percent || CPU usage average, in percent, for the ESXi server and for each virtual machine. || Less than 60%.
|-
| CPU || ESXi Server Processor#<br>VM vCPU# || CPU Usage 0-7 (Average) || Percent || CPU usage average for ESXi server processors 0 to 7 and for the virtual machine vCPUs. || Less than 60%.
|-
| CPU || VM || CPU Ready || mSec || The time a virtual machine or other process waits in the queue in a ready-to-run state before it can be scheduled on a CPU. || Less than 150 mSec. If it is greater than 150 mSec, investigate and understand why the machine is so busy.
|-
| Memory || ESXi Server<br>VM || Memory Usage (Average) || Percent || Memory Usage = Active / Granted × 100 || Less than 80%.
|-
| Memory || ESXi Server<br>VM || Memory Active (Average) || KB || Memory that is actively used or referenced by the guest OS and its applications. When it exceeds the amount of memory on the host, the server starts to swap. || Less than 80% of the granted memory.
|-
| Memory || ESXi Server<br>VM || Memory Balloon (Average) || KB || ESXi uses the balloon driver to recover memory from less memory-intensive VMs so it can be used by those with larger active sets of memory. || Because memory is not overcommitted, this should be 0 or very low. Note: ESXi performs memory ballooning before memory swapping.
|-
| Memory || ESXi Server<br>VM || Memory Swap Used (Average) || KB || ESXi server swap usage; the disk is used for RAM swap. || Because memory is not overcommitted, this should be 0 or very low.
|-
| Disk || ESXi Server<br>VM || Disk Usage (Average) || KBps || Disk Usage = Disk Read Rate + Disk Write Rate || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Read Rate || KBps || Rate of reading data from the disk. || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Write Rate || KBps || Rate of writing data to the disk. || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Commands Issued || Number || Number of disk commands issued on this disk in the period. || Ensure that your SAN is configured to handle this amount of disk I/O.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Command Aborts || Number || Number of disk commands aborted on this disk in the period. A disk command aborts when the disk array takes too long to respond (command timeout). || This counter should be zero. A non-zero value indicates a storage performance issue.
|-
| Disk || ESXi Server vmhba ID<br>VM vmhba ID || Disk Command Latency || mSec || The average amount of time taken for a command, from the perspective of the guest OS. Disk Command Latency = Kernel Command Latency + Physical Device Command Latency. || Kernel Command Latency should be very small in comparison to the Physical Device Command Latency, and close to zero. Kernel Command Latency can be high, or even higher than the Physical Device Command Latency, if there is a lot of queuing in the ESXi kernel.
|-
| Network || ESXi Server<br>VM || Network Usage (Average) || KBps || Network Usage = Data Receive Rate + Data Transmit Rate || Less than 30% of the available network bandwidth; for example, less than 300 Mbps on a 1 Gbps network.
|-
| Network || ESXi Server vmnic ID<br>VM vmnic ID || Network Data Receive Rate || KBps || The average rate at which data is received on this Ethernet port. || Less than 30% of the available network bandwidth; for example, less than 300 Mbps on a 1 Gbps network.
|-
| Network || ESXi Server vmnic ID<br>VM vmnic ID || Network Data Transmit Rate || KBps || The average rate at which data is transmitted on this Ethernet port. || Less than 30% of the available network bandwidth; for example, less than 300 Mbps on a 1 Gbps network.
|}
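These counters can also be sampled programmatically. The pyVmomi sketch below queries a few of the counters above (CPU ready, active memory, balloon, and swap) for a single VM; the names and paths are placeholders, and intervalId=20 selects the 20-second real-time sampling interval:

<pre>
# Sketch: sample a few of the ESXi counters above for one VM via the pyVmomi
# PerformanceManager. Placeholder connection details and inventory path.
from pyVim.connect import SmartConnect
from pyVmomi import vim

WANTED = {"cpu.ready.summation", "mem.active.average",
          "mem.vmmemctl.average", "mem.swapused.average"}

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
pm = content.perfManager

ids = {}  # map counter IDs to dotted counter names
for c in pm.perfCounter:
    name = f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}"
    if name in WANTED:
        ids[c.key] = name

vm = content.searchIndex.FindByInventoryPath("DC1/vm/rogger-a")  # placeholder
spec = vim.PerformanceManager.QuerySpec(
    entity=vm, maxSample=1, intervalId=20,    # 20-second real-time interval
    metricId=[vim.PerformanceManager.MetricId(counterId=cid, instance="")
              for cid in ids])

for series in pm.QueryPerf(querySpec=[spec])[0].value:
    print(ids[series.id.counterId], series.value[-1])
</pre>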