This article provides specifics and examples to aid in sizing Unified Communications applications for the UCS B-series and C-series servers.


== Application Co-residency Support Policy ==


Cisco UC virtualization only supports application co-residency under the specific conditions described below and as clarified in [http://www.cisco.com/en/US/products/ps6884/products_tech_note09186a0080bbd913.shtml TAC Technote Document ID 113520].


The maximum number of virtual machines supported per physical server depends on the hardware selected and on the quantity and resource usage of the selected virtual machine OVAs.

{{note | UC app VM performance is only guaranteed when installed on a [[Tested Reference Configurations (TRC) | UC on UCS Tested Reference Configuration]], and only if all other conditions in this policy are followed.}} <br>


"Application co-residency" in this UC support policy is defined as VMs sharing the same physical server and the same virtualization software host:


*E.g. VMs running on the same VMware vSphere ESXi host on the same physical rack-mount server, such as Cisco UCS C-Series.


*E.g. VMs running on the same VMware vSphere ESXi host on the same physical blade server in the same blade server chassis, such as Cisco UCS B-Series.


*"Co-resident application mix" in this UC support policy refers to the set of VMs sharing a physical server and a virtualization software host.


*VMs running on different virtualization hosts and different physical servers are not co-resident.


**E.g. VMs running on two different Cisco UCS C-Series rack-mount servers are not co-resident.


**E.g. VMs running on two different Cisco UCS B-Series blade servers in the same UCS 5100 blade server chassis are not co-resident.


[[Image:UConUCS co-res 1 whatis.JPG|600px|Co-residency defined]] <br>


<br>For purposes of this UC support policy, VMs are categorized as Cisco UC app VMs, Cisco non-UC VMs, or 3rd-party application VMs.

Each Cisco UC app supports one of the following four types of co-residency:<br>


{{note | Troubleshooting UC VMs co-resident with non-UC/3rd-party app VMs may require the changes described at <span style="color:#ff0000"> <<in this TAC Technical Tip>> </span>. To be supported by Cisco TAC, customers must agree to these changes if required by Cisco TAC.}} <br>


#'''None:''' Co-residency is not supported. The UC app only supports a single instance of itself in a single VM on the virtualization host / physical server. No co-residency with ANY other VM is allowed, whether Cisco UC app VM, Cisco non-UC VM, or 3rd-party application VM.


#'''Limited:''' The co-resident application mix is restricted to specified VM combinations only. Click on the "Limited" entry in the tables below to see which VM combinations are allowed. Co-residency with any VMs outside these combinations - including other Cisco VMs - is not supported (these applications must be placed on a separate physical server). The deployment must also follow the [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]] listed below.


#'''UC with UC only:''' The co-resident application mix is restricted to VMs for UC apps listed at [[Unified Communications Virtualization Supported Applications]]. Co-residency with Cisco non-UC VMs and/or 3rd-party application VMs is not supported; those VMs must be placed on a separate physical server. The deployment must also follow the [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]] rules below.


#'''Full:''' The co-resident application mix may contain UC app VMs, Cisco non-UC VMs, and 3rd-party application VMs. The deployment must follow the [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]] rules below. The deployment must also follow the [[#Special_Rules_for_non-UC_and_3rd-party_Co-residency|Special Rules for non-UC and 3rd-party Co-residency]] below.


<br>


[[Image:UConUCS co-res 2 types.JPG|500px|Types of Co-residency]] <br>


<br>


==== General Rules for Co-residency and Physical/Virtual Hardware Sizing ====


See the tables after the rules for the co-residency policy of each UC app. <br>


{{note | Virtualization and co-residency support varies by UC app '''version''', so double-check inter-UC-app version compatibility; see [http://www.cisco.com/go/unified-techinfo Cisco Unified Communications System Documentation].}} <br>

:'''"Matching" Support Policies'''

*All co-resident apps must support the chosen physical hardware option.
**E.g. if you want to host co-resident apps on UCS C260 M2 TRC#1, all co-resident apps must have a hardware support policy that permits this.


**E.g. if you want to deploy instead as [[Specification-Based Hardware Support|UC on UCS Specs-based]] with a diskless UCS C260 M2 and a SAN/NAS storage array, all co-resident apps must support this.


**You must pick a hardware option that all the co-resident apps can support. For example, some UC apps do not support [[Specification-Based Hardware Support|Specs-based for UC on UCS or HP/IBM]], and some UC apps do not support certain Tested Reference Configurations such as UC on UCS C200 M2 TRC#1 (as opposed to [[Specification-Based Hardware Support|UC on UCS C200 M2 specs-based]]).


*Same support for virtualization software product and version.


**E.g. one app supports vSphere 5.0, the other app only supports vSphere 4.1. vSphere 5.0 may not be used for this co-resident application mix.


*All apps must support a co-residency policy that permits the desired co-resident application mix.


**E.g. one app has a "Full" policy, another app has "UC with UC" policy. Co-resident non-UC or 3rd-party app VMs are not allowed.


**E.g. one app has a "UC with UC" policy, another app has "Limited" policy. Even though all apps will be UC, the desired combination may not be allowed by the "Limited" app.


**E.g. one app has "None" policy. No other apps can be co-resident with this app regardless of their policies.


*If support policies of a given co-resident app mix do not match, then the "least common denominator" is required.
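The "least common denominator" rule above can be expressed as a small validation sketch. The following Python is illustrative only (the function, data shapes and app names are invented, not part of any Cisco tool); "Limited" mixes still require checking the product tables on this page.

```python
# Hypothetical sketch of the "least common denominator" policy check.
# Policy names and VM categories follow this page; everything else is invented.

def mix_allowed(apps):
    """apps: list of (name, category, policy) where category is
    'UC', 'non-UC' or '3rd-party' and policy is one of
    'None', 'Limited', 'UC with UC only', 'Full'.
    Returns True/False, or None when manual table lookup is needed."""
    # A "None" app must run alone on the virtualization host.
    if len(apps) > 1 and any(p == "None" for _, _, p in apps):
        return False
    # "UC with UC only" forbids any non-UC or 3rd-party VM in the mix.
    if any(p == "UC with UC only" for _, _, p in apps):
        if any(c != "UC" for _, c, _ in apps):
            return False
    # "Limited" combinations must be looked up in the product tables;
    # this sketch conservatively flags them for manual review.
    if any(p == "Limited" for _, _, p in apps):
        return None
    return True

# A "Full"-policy UC app may share a host with a 3rd-party VM...
print(mix_allowed([("CUCM", "UC", "Full"), ("DNS", "3rd-party", "Full")]))
# ...but not when any co-resident app is "UC with UC only".
print(mix_allowed([("CUCM", "UC", "UC with UC only"), ("DNS", "3rd-party", "Full")]))
```

The check mirrors the text: the most restrictive policy in the mix always wins.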

:'''No Hardware Oversubscription'''

:All VMs require a one-to-one mapping between virtual hardware and physical hardware. See specifics below. <br>


::'''CPU'''


*Must map 1 VM vCPU core to 1 physical CPU core.


**For example, if you have a host with 12 total physical cores, then you can deploy any combination of virtual machines where the total number of vCPU on those virtual machines adds up to 12.


**The requirement is based on ''physical'' cores, not ''logical'' cores.


***Logical cores may exceed physical cores if CPU hyperthreading is used. See [[UC Virtualization Supported Hardware]] for recommendation on hyperthreading and other BIOS settings. See screenshot below for physical cores vs. logical cores (as viewed from either VMware vCenter or vSphere Client) for a UCS C220 M3S server with CPU hyperthreading DISABLED. If hyperthreading is ENABLED, you will see 16 logical cores despite only 8 physical cores, but UC sizing rules are still limited by 8 physical cores.

**The requirement is based on physical cores on CPU architectures that Cisco has verified have equivalent performance ([[UC Virtualization Supported Hardware#Processors_.2F_CPUs|click here for details]]). E.g. for UC sizing purposes, one core on E5-2600 at 2.5+ GHz is equivalent to one core on E7-2800 at 2.4+ GHz, which are both equivalent to one core on 5600 at 2.53+ GHz.


*Cisco Unity VMs also require VMware CPU Affinity.


*If there is at least one live Unity Connection VM on the physical server, then one CPU core per physical server must be left unused (it is used by the ESXi scheduler).


**For example, if you have a host with 12 total physical cores and one or more of the VMs on that host will be Unity Connection, then you can deploy any combination of virtual machines where the total number of vCPU on those virtual machines adds up to 11, with the 12th core left unused. This is regardless of how many Unity Connection VMs are on that host.

*CPU reservations on the VMs are not required. Use of CPU reservations in lieu of one-to-one CPU core mappings is not supported.


**Even if some of the virtual machines have a reservation, the above one-to-one vCPU to physical core rule still applies – it overrides the reservation.


**For example, if you have a host with a total of 4 physical cores, and you want to run the CUCM 2500 user OVA (which has 800 MHz reservation and requires 1 vCPU) along with other virtual machines, you still must deploy the VMs with a one to one mapping of vCPU to physical core. If you do not follow this rule, your deployment is unsupported.
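The 1:1 vCPU-to-core rule and the Unity Connection spare-core rule reduce to simple arithmetic. The following Python sketch (a hypothetical helper, not a Cisco tool; VM names and data shapes are invented) illustrates the check:

```python
# Minimal sketch of the CPU sizing rule: total vCPUs must fit 1:1 into
# physical cores, with one core left unused when any Unity Connection
# VM is on the host.

def cpu_plan_ok(physical_cores, vms):
    """vms: list of (name, vcpus, is_unity_connection)."""
    usable = physical_cores
    if any(is_cxn for _, _, is_cxn in vms):
        usable -= 1  # one core reserved for the ESXi scheduler
    return sum(vcpus for _, vcpus, _ in vms) <= usable

# 12 physical cores, one Unity Connection VM present: only 11 cores usable.
vms = [("CUCM-7500", 2, False), ("CUC", 4, True), ("CUCCX", 2, False)]
print(cpu_plan_ok(12, vms))  # 8 vCPU fit within 11 usable cores
```

Note the sketch counts physical cores only; logical cores exposed by hyperthreading do not increase the budget, per the rule above.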

::'''Memory/RAM'''

*Must map 1 GB of VM vRAM to 1 GB of physical RAM. Memory oversubscription is not supported for Cisco UC VMs.

*The sum of virtual machines' vRAM may not exceed the total physical memory on the physical server.


*Additional 2 GB of physical RAM must be provisioned for VMware ESXi itself (this is to cover ESXi overhead to run VMs; for more details see [http://pubs.vmware.com/vsp40_e/resmgmt/wwhelp/wwhimpl/common/html/wwhelp.htm#href=r_overhead_memory_on_virtual_machines.html&single=true "Understanding Memory Overhead" on vmware.com]).

::'''Storage'''

*The sum of virtual machines' vDisks may not exceed the physical server's logical volume capacity (i.e. capacity net of overhead for the VM itself, VMFS in vSphere and the physical RAID configuration).


**Cisco recommends 10% buffer on top of vDisk values to handle overhead within the VM (such as swap files which are the size of the VM's vRAM). See [[Shared Storage Considerations]] for more details.


*The DAS, NAS or SAN storage solution must also supply enough performance to handle the total load of the VMs.


**Must provide enough IOPS to handle the sum of the VMs' IOPS loads.


**Kernel command latency must not exceed 2-3 milliseconds.


**Physical device command latency must not exceed 15-20 milliseconds.


**See [[Shared Storage Considerations]] and [[IO Operations Per Second (IOPS)|Storage Performance Requirements]] for more details.


*If the above capacity or performance requirements are not met, the storage system is overloaded and must be "fixed" by either moving virtual machines to alternate storage, or improving storage hardware.
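The storage rules above can be collected into one capacity/performance check. This Python sketch is illustrative only (function name and inputs are invented; the thresholds are the ones stated on this page):

```python
# Sketch of the storage sizing rules: vDisk capacity with the recommended
# 10% buffer, aggregate IOPS, and the kernel/device latency ceilings.

def storage_ok(logical_volume_gb, array_iops, kernel_latency_ms,
               device_latency_ms, vdisks_gb, vm_iops):
    """vdisks_gb: per-VM vDisk sizes; vm_iops: per-VM IOPS loads."""
    capacity_needed = sum(vdisks_gb) * 1.10   # 10% buffer over vDisk values
    return (capacity_needed <= logical_volume_gb
            and sum(vm_iops) <= array_iops
            and kernel_latency_ms <= 3        # kernel command latency: 2-3 ms max
            and device_latency_ms <= 20)      # device command latency: 15-20 ms max

# Two VMs (100 GB + 200 GB, 500 + 800 IOPS) on a 500 GB / 2000 IOPS volume:
print(storage_ok(500, 2000, 2, 15, [100, 200], [500, 800]))
```

If any clause fails, the storage system is overloaded in the sense described above and must be remedied by moving VMs or improving the storage hardware.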


<br>


::'''Network/LAN'''


*The aggregate networking load of the co-resident virtual machines must be met with the physical networking interface(s) on the host.

**See the UC application design guides (http://www.cisco.com/go/srnd) to size network utilization by UC app VMs. In general, most UC app VMs will not saturate a 1GbE link. Deployments leveraging non-FC-storage (iSCSI, NFS or Unified Fabric/FCoE including UCS B-Series FEX) must account for network traffic from both VM LAN access and VM storage access.

*For other network hardware best practices, see [[QoS Design Considerations for Virtual UC with UCS]].


*If the above capacity or performance requirements are not met, the networking hardware is congested and must be "fixed" by either moving virtual machines to a host with different network access, or provisioning more physical network interfaces.


<br>


::'''Maximum VM Count per Physical Server'''


*For hardware other than UCS C200 M2 TRC#1, you may mix and match Cisco UC app VM size and quantity as long as you follow all of the sizing rules described above. The maximum number of virtual machines per physical server that can be supported depends on several factors:

**E.g. using the above physical/virtual sizing rules for CPU, a physical server with 8 total physical cores can only host 4 of the "CUCM 7.5K user OVAs" since those are 2 vCPU each. If the physical server instead had 20 total physical cores, it could host 10 of these VMs (assuming memory, network and storage hardware are also sufficient using the UC sizing rules immediately below).


**All [[Tested Reference Configurations (TRC)|UC on UCS Tested Reference Configurations]] are sized for co-residency except for UCS C210 M1 Tested Reference Configuration #1 (which is only sized to host a single CUCM VM of 7500 user capacity). Note UCS C200 M2 Tested Reference Configuration #1 has special restrictions on choice of UC VM, and its allowed VMs are at lower capacity per VM than for other Tested Reference Configurations.


**[[Specification-Based Hardware Support|UC on UCS Specs-based and HP/IBM Specs-based]] deployments allow hardware options that may support a higher or lower max VM count than a [[Tested Reference Configurations (TRC)|UC on UCS Tested Reference Configuration]]. E.g. UCS C210 M2 TRC#1 is a dual-4-core CPU, but UCS C210 M2 specs-based could be configured with dual-6-core (for possibly more VMs) or a single 4-core (for possibly a single VM).


*Note the max VM count may also be further restricted by UC apps that only support "Limited" co-residency as described in the tables after the rules.
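The core-count arithmetic in the example above can be made explicit. A minimal sketch (illustrative only; the 2-vCPU figure for the "CUCM 7.5K user OVA" is taken from the example in the text):

```python
# Worked example of the max-VM-count arithmetic: with a 1:1 vCPU-to-core
# mapping, the VM count is bounded by physical cores divided by vCPUs per VM.

def max_vms(physical_cores, vcpus_per_vm):
    return physical_cores // vcpus_per_vm

print(max_vms(8, 2))    # 8-core server hosts at most 4 such 2-vCPU VMs
print(max_vms(20, 2))   # 20-core server hosts at most 10 (if RAM, disk and
                        # network also fit under the rules above)
```

This is only the CPU bound; memory, storage and network limits (and any "Limited" co-residency restrictions) can lower the actual maximum.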


::'''Special Requirements for UCS C200 M2 TRC#1 Hardware'''


*For UCS C200 M2 TRC #1, there are additional TRC-specific restrictions since it uses a slower CPU than other TRCs or specs-based (i.e. E5506 / 2.13 GHz instead of CPU with 2.53 GHz speed or higher). A C200 M2 configured with a different CPU allowed by [[Specification-Based Hardware Support|UC on UCS Specs-based]] does not have these TRC-specific restrictions. Follow these rules for UCS C200M2 TRC#1 with E5506 / 2.13 GHz CPU:


**Any combination of CUCM, CER, CUC/UCxn, CUCCX and CUxAC, provided you keep a 1:1 vCPU to physical core ratio, leave at least 1 physical core unused if one or more instances of CUC/UCxn are present, and only use OVA templates supported for UCS C200 M2 TRC#1.


**Any other combination provided you follow the [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]], and only use the OVA templates supported for UCS C200 M2 TRC#1.

<br>

See the Sizing Examples at the bottom of this page for examples of using these guidelines.


<br>


==== Special Rules for non-UC and 3rd-party Co-residency ====


See the tables after these rules for the co-residency policy of each Cisco UC app.


Non-UC VMs and 3rd-party app VMs that will be co-resident with Cisco UC app VMs are required to align with all of the following:

:All co-resident VMs must follow the '''"Matching" Support Policies''' rule in [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]]. Note that http://www.cisco.com/go/uc-virtualized does not describe support policies for Cisco non-UC apps or 3rd-party apps.


<br>


:'''Virtual Machine Templates'''


:Cisco non-UC VMs and 3rd-party app VMs must provide their own definition of their supported VM OVA templates (or the specs for one to be created), similar to what Cisco UC app VMs require in [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]]. http://www.cisco.com/go/uc-virtualized does not describe VM templates for Cisco non-UC apps or 3rd-party apps.

:'''Memory'''

:*To enforce "no memory oversubscription", each co-resident VM - whether UC, non-UC or 3rd-party - must have a reservation for vRAM that includes all the vRAM of the virtual machine. For example, if you have a virtual machine that is configured with 4 GB of vRAM, then that virtual machine must also have a reservation of 4 GB of vRAM.


:*Otherwise all co-resident VMs - including non-UC VMs and 3rd-party app VMs - must follow the '''No Hardware Oversubscription''' rules for '''Memory/RAM''' in [[#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|General Rules for Co-residency and Physical/Virtual Hardware Sizing]]. The 2 GB for VMware vSphere is in addition to the sum of the vRAM reservations for the VMs.
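The reservation rule above is mechanical to verify. This Python sketch is illustrative only (function and VM names are invented): every co-resident VM must reserve all of its vRAM, and the reservations plus 2 GB for VMware vSphere must fit in physical RAM.

```python
# Sketch of the vRAM reservation rule for mixed UC / non-UC / 3rd-party hosts.

def memory_plan_ok(physical_ram_gb, vms):
    """vms: list of (name, vram_gb, reservation_gb)."""
    if any(res < vram for _, vram, res in vms):
        return False  # a partial reservation would allow oversubscription
    # 2 GB is provisioned for VMware vSphere on top of the VM reservations.
    return sum(res for _, _, res in vms) + 2 <= physical_ram_gb

# A 48 GB host with a 6 GB UC VM and a fully reserved 4 GB 3rd-party VM:
print(memory_plan_ok(48, [("CUCM", 6, 6), ("3rd-party-app", 4, 4)]))
```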


<br>


:'''Storage'''


:*Non-UC VMs and 3rd-party app VMs must define their storage capacity requirements (ideally in an OVA template) and storage performance requirements. These requirements are not captured at http://www.cisco.com/go/uc-virtualized.

:*If DAS storage is to be used with non-UC / 3rd-party app VMs, it is highly recommended that pre-deployment testing be conducted in which all VMs are pushed to their highest level of IOPS generation. This is because DAS environments are generally more capacity/performance-constrained and more dependent on adapter caches in RAID controllers, and because Cisco's DAS testing is done only for UC apps on UCS Tested Reference Configurations.


<br>


:'''Network/LAN'''


:*Non-UC VMs and 3rd-party app VMs must define the network capacity/performance requirements of their VM OVA templates. These requirements are not captured at http://www.cisco.com/go/uc-virtualized.

Cisco does not support non-UC or 3rd-party application VMs running on "Cisco UC Virtualization Hypervisor" or "Cisco UC Virtualization Foundation" (as described at [[Unified Communications VMware Requirements]]). If you want to deploy non-UC / 3rd-party applications, you must deploy on VMware vSphere Standard, Advanced, Enterprise or Enterprise Plus Edition.


== Redundancy and Failover Considerations ==

Application-layer considerations (such as Unified CM Cluster over WAN or Unified CCE Remote Redundancy) are the same for virtualized (UC on UCS) or non-virtualized (MCS 7800) deployments.

However, since there is no longer a 1:1 relationship between hardware and application instances, "placement logic" must be taken into account to minimize the impact of hardware unavailability or unreachability:

*Avoid placing a primary VM and a backup VM on the same server, chassis or site
*For failover groups, avoid placing all actives on the same server, chassis or site
*Avoid placing all VMs of the same role on the same server, chassis or site
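The placement guidance above amounts to an anti-affinity check across servers, chassis and sites. The following Python sketch is illustrative only (the data shapes, VM names and function are invented, not a Cisco tool):

```python
# Flag any primary/backup VM pair that shares a server, chassis or site,
# per the placement guidance above.

def placement_violations(pairs, location):
    """pairs: list of (primary_vm, backup_vm);
    location: vm name -> (server, chassis, site) tuple."""
    issues = []
    for primary, backup in pairs:
        for level, a, b in zip(("server", "chassis", "site"),
                               location[primary], location[backup]):
            if a == b:
                issues.append((primary, backup, level))
    return issues

# Publisher and subscriber on the same blade in the same chassis at HQ:
loc = {"CUCM-pub": ("srv1", "ch1", "HQ"), "CUCM-sub": ("srv1", "ch1", "HQ")}
print(placement_violations([("CUCM-pub", "CUCM-sub")], loc))
```

An empty result means the pair is spread across servers, chassis and sites; each reported tuple names a level where the hardware-failure impact is shared.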

Note this example does not include non-UC applications (such as Cisco Nexus 1000V or Cisco Network Registrar) or 3rd-party applications such as customer-provided DNS / DHCP / TFTP servers, directories, email, groupware or other business applications. These applications need to run on separate physical servers and are not allowed to be co-resident with UC at this time. See the Co-residency section on this page for more details.

See below for details on the server layout and application/VM placement at each site. Note that Branch B is using UCS C200 M2 TRC #1 so has restrictions on which VM OVAs were able to be used.

HQ server detail:

Branch A server detail:

Branch B server detail:

== Sizing and Ordering Tools ==

The suite of tools listed below can assist you with the sizing, configuring and quoting of Cisco Unified Communications solutions on the Unified Computing System.

==== Cisco Solution Expert ====

Cisco Solution Expert assists Cisco field and Cisco Unified Communications specialized channel partners in designing and quoting UC on UCS solutions using Cisco Unified Workspace Licensing or the traditional design model. Solution Expert delivers a Bill of Materials for the Unified Communications software and for the UCS B-series Blade Servers and VMware ordered as Collaboration SKUs.

==== Netformx DesignXpert ====

Netformx DesignXpert is a third party application used to design and quote the Cisco Unified Computing System B-series. DesignXpert has two advisor modules that can be used to quote a Unified Communications solution with the Unified Computing System:

*'''UC Advisor''' – a designing and quoting solution used to quote Unified Communications software. The UCS B-series Blade Servers and VMware ordered as Collaboration SKUs can be quoted when ordering separate from the Unified Computing System. Other UCS B-series components must be configured via UCS Advisor below.

==== Cisco Configuration Tool ====

Cisco Configuration Tool (need link here) is part of the suite of Internet Commerce Tools for managing online ordering of Cisco products. It enables you to configure products and view lead times and prices for each selection. The Cisco Configuration Tool, also known as the Dynamic Configuration Tool, is used to configure the Unified Communications products and the B-series SKUs and VMware ordered as Collaboration SKUs.

==== Ordering Guides ====

Ordering Guides for Unified Communications System 8.x releases are available for Cisco sales, partners, and customers.