UC apps will specify [[Unified Communications VMWare Requirements|the required version of VMware vSphere ESXi]]. Customers should follow server vendor guidelines for what to use with this VMware version.

*BIOS
*Firmware
*Drivers

For Cisco UCS:

*UCS Software or UCS Manager Software in UCS 6x00 hardware: use the latest recommended version for the VMware vSphere ESXi version
*Other B-Series / C-Series BIOS, firmware, drivers: use the latest recommended version for the VMware vSphere ESXi version
**Note that the resultant "Logical Cores" do not factor into [[Unified Communications Virtualization Sizing Guidelines#General_Rules_for_Co-residency_and_Physical.2FVirtual_Hardware_Sizing|UC sizing rules for co-residency]]. UC still requires mapping one physical core to one vCPU core (not to one "Logical Core").

| colspan="3" |
There are no UC-specific requirements.

<br>

|-
! Mechanical and Environmental<br>
| colspan="3" |
{{ note | Energy-saving features that cause reduction in CPU performance or real-time relocation/powering-down of virtual machines (such as '''CPU throttling''' or '''VMware Dynamic Power Management''') '''are not supported'''.
}}

Otherwise, there are no UC-specific requirements for form factor, rack mounting hardware, cable management hardware, power supplies, fans or cooling systems. Follow server vendor guidelines for these components.

If you use a Cisco UCS bundle SKU, note that the rail kit, cable management and power supply options may not match what is available with non-bundled Cisco UCS.

Redundant power supplies are highly recommended, particularly for '''UC on UCS'''.

For Cisco UCS, it is strongly recommended to use the Cisco default rail kit, unless you have different rack types such as telco racks or racks proprietary to another server vendor. Cisco does not sell any other types of rack-mounting hardware; you must purchase such hardware from a third party.

|}

<br>


*Compatible with the VMware HCL and compatible with the supported server model used
:* For DAS-only TRCs (including [[Cisco Business Edition 6000]]), thin provisioning (either from VMware or from the storage array) is '''not''' supported. Thick provisioning must be used.
:* For diskless TRCs and any Specs-based server, thin provisioning (either from VMware or from the storage array) is allowed with the caveat that disk space must be available to the VM as needed. Running out of disk space due to thin provisioning will crash the application and corrupt the virtual disk (which may also prevent restore from a backup on the virtual disk).

<br> {{ note | '''UC on UCS TRCs''' using only DAS storage (such as C220 M3S TRC#1) have been pre-designed and tested to meet the above requirements for any UC with UC co-residency scenario that will fit on the TRC. Detailed capacity planning is not required unless deploying


<br> See below for supported storage hardware options.

{| width="770" style="" class="wikitable FCK__ShowTableBorders"
|-
! <br>
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)


<br> '''DAS Support Details'''

{| width="1200" style="" class="wikitable FCK__ShowTableBorders"
|-
! <br>
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)


*Compatible with the VMware HCL and compatible with the server model used
*The storage solution must be compatible with the VMware HCL. For example, refer to the “SAN/Storage” tab at http://www.vmware.com/resources/compatibility/search.php?sourceid=ie7&amp;rls=com.microsoft:en-us:IE-SearchBox&amp;ie=&amp;oe=
*No UC requirements on disk size, speed, technology (SAS, SATA, FC disk), form factor or RAID configuration as long as requirements for compatibility, latency, performance and capacity are met. "Tier 1 Storage" is generally recommended for UC deployments. See the [[UC Virtualization Storage System Design Requirements]] for an illustration of a best-practices storage array configuration for UC.


<br>

<br> <br> '''Removable Media'''

{| width="770" style="" class="wikitable FCK__ShowTableBorders"
|-
! <br>
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)


! Boot from USB devices or SD cards
|
*Not allowed or supported for UC apps. Must boot them from DAS or FC SAN depending on Table 1.
*Not allowed or supported for VMware vSphere ESXi. Must boot from DAS or FC SAN depending on Table 1.
*Note that all current TRCs are either diskless blades or C-Series DAS/HDD. SD cards in C-Series TRCs are used for convenience to access the UCS utilities (like SCU and HUU), in lieu of a DVD drive.
|
*Not allowed or supported for UC apps. Must boot them from DAS, SAN or NAS per Specs-based storage requirements.
*Allowed for VMware vSphere ESXi (with the same support demarcations as with "boot from FC SAN").
|}


<br>

= End of Sale UC on UCS TRC Bills of Material (BOMs) =

=== B200 M2 TRC#1 ===

This BOM was also quotable via fixed-configuration bundle UCS-B200M2-VCS1. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

{| class="prettytable"
|-
| '''Quantity'''
| '''Cisco Part Number'''
| '''Description'''
|-
| '''1'''
| '''N20-B6625-1'''
| UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
|-
| '''2'''
| '''A01-X0109'''
| 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
|-
| '''12'''
|
Either:
*'''N01-M304GB1'''
*'''A02-M304GB2-L'''
*'''UCS-MR-1X041RX-A'''
| <br>
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
|-
| '''2'''
| Either:
*'''A03-D146GC2'''
*'''UCS-HDD300GI2F105'''
| <br>
*146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
*300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
|-
| '''1'''
| '''N20-AC0002'''
| UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1
|-
| '''2'''
| '''N20-BHTS1'''
| Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

=== B200 M2 TRC#2 ===

Memory and hard drive changes are due to industry transitions, not UC app requirements.

{| class="prettytable"
|-
| '''Quantity'''
| '''Cisco Part Number'''
| '''Description'''
|-
| '''1'''
| '''N20-B6625-1'''
| UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
|-
| '''2'''
| '''A01-X0109'''
| 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
|-
| '''12'''
|
Either:
*'''N01-M304GB1'''
*'''A02-M304GB2-L'''
*'''UCS-MR-1X041RX-A'''
| <br>
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
|-
| <br>
| <br>
| Diskless
|-
| '''1'''
| '''N20-AC0002'''
| UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1
|-
| '''2'''
| '''N20-BHTS1'''
| Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

<br> <br>

=== B200 M1 TRC#1 ===

This configuration was also quotable as UCS-B200M2-VCS1.

{| class="prettytable"
|-
| '''Quantity'''
| '''Cisco Part Number'''
| '''Description'''
|-
| '''1'''
| '''N20-B6620-1'''
| UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
|-
| '''2'''
| '''N20-X00002'''
| 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
|-
| '''8'''
| '''N01-M304GB1'''
| 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
|-
| '''2'''
| '''A03-D146GA2'''
| 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
|-
| '''1'''
| '''N20-AQ0002'''
| UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
|-
| '''2'''
| '''N20-BHTS1'''
| Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

=== B200 M1 TRC#2 ===

{| class="prettytable"
|-
| '''Quantity'''
| '''Cisco Part Number'''
| '''Description'''
|-
| '''1'''
| '''N20-B6620-1'''
| UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
|-
| '''2'''
| '''N20-X00002'''
| 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
|-
| '''8'''
| '''N01-M304GB1'''
| 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
|-
| <br>
| <br>
| Diskless
|-
| '''1'''
| '''N20-AQ0002'''
| UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
|-
| '''2'''
| '''N20-BHTS1'''
| Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

=== C210 M2 TRC#1 ===

This configuration was also quotable as UCS-C210M2-VCD2. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

"TRC" used by itself means "UC on UCS Tested Reference Configuration (TRC)". "UC on UCS" used by itself refers to both UC on UCS TRC and UC on UCS Specs-based. "Specs-based" used by itself refers to the common rules of UC on UCS Specs-based and Third-party Server Specs-based.

Below is a comparison of the hardware support options. Note that the following are identical regardless of the support model chosen:

= VMware Requirements =

VMware virtualization software is required for Cisco TAC support.

See the Introduction for basic virtualization software requirements, including what is optional and what is mandatory.

For Cisco UCS, no UC applications run or install directly on the server hardware; all applications run only as virtual machines. Cisco UC does not support a physical, bare-metal, or nonvirtualized installation on Cisco UCS server hardware.

All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features as described here.

{{ note | For UC on UCS Specs-based and Third-party Server Specs-based, use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory. Click here for how to configure VMware vCenter to capture these logs. If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems. }}


= Processors / CPUs =

Cisco Collaboration is a set of mission-critical, "Tier 0" applications, where customer expectations for availability, stability and predictable performance are higher than for traditional business applications. Cisco Collaboration apps are real-time and latency-sensitive, with extremely different resource footprints and operational characteristics than traditional business applications. Therefore note the following:

{{ note | Explicit qualification of new or existing CPU architectures is required.
* Until qualification occurs, new CPU architectures are not allowed or supported, even if they are believed to be "better" than currently supported CPU models.
* Some CPU architectures may never be allowed or supported by Cisco Collaboration (e.g. Intel desktop architecture or Xeon 65xx). Within an allowed or supported CPU architecture, some CPU models may never be supported (e.g. due to insufficient physical core speed).
* Collaboration application support for new CPU architectures/models may lag the release date from Intel and/or server vendors.
}}

For purposes of sizing rules and co-residency, virtualized UC apps see equivalent performance from one physical CPU core on any of the above architectures. E.g. UC apps perform equivalently on 1 physical core of 5600 at 2.53+ GHz, 1 physical core of E5-2600 at 2.50+ GHz, or 1 physical core of E7-2800 at 2.40+ GHz.

Physical CPU core speed must be the same or higher than that of the TRC BOM's CPU model in Table 1. E.g. if the TRC BOM was tested with 2-socket Intel Xeon E5-2680 (Romley/SandyBridge-EP, 8-core, 2.7 GHz), then 2 sockets of any 8-core CPU of 2.7 GHz or higher with the E5-26xx/46xx (Intel Xeon Romley/SandyBridge-EP architecture) may be substituted.

Per these policies, recall that physical CPU cores may not be over-subscribed for UC VMs, i.e. one physical CPU core must equal one VM vCPU core. Hyper-threading on the CPU should be enabled when available, but the resulting Logical Cores do not change UC app rules. UC rules are based on a 1:1 mapping of physical cores to vCPUs, not Logical Cores to vCPUs.

Cisco TAC is not obligated to troubleshoot UC app issues in deployments with insufficient physical processor cores or speed.
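The core-sizing arithmetic above can be sketched in a few lines. This is an illustrative helper, not a Cisco tool; the function name and parameters are assumptions made for the example. The key point it demonstrates: hyperthreaded "Logical Cores" exist but never enter the capacity check.

```python
# Hypothetical sketch of the 1:1 sizing rule above: UC VMs map one vCPU to
# one PHYSICAL core, so hyperthreaded logical cores do not add capacity.

def uc_core_capacity_ok(sockets: int, cores_per_socket: int,
                        hyperthreading: bool, vm_vcpus: list[int]) -> bool:
    physical_cores = sockets * cores_per_socket
    # Hyperthreading may be enabled, doubling the logical core count,
    # but logical cores do NOT factor into UC co-residency sizing.
    logical_cores = physical_cores * (2 if hyperthreading else 1)
    needed = sum(vm_vcpus)          # total vCPUs across all UC VMs
    return needed <= physical_cores  # never compare against logical_cores

# Example: a 2-socket, 8-core host (16 physical / 32 logical cores)
# holds four 4-vCPU UC VMs exactly; a fifth 4-vCPU VM does not fit.
print(uc_core_capacity_ok(2, 8, True, [4, 4, 4, 4]))     # True
print(uc_core_capacity_ok(2, 8, True, [4, 4, 4, 4, 4]))  # False
```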

= Memory / RAM =

{{ note | Virtualization software licenses such as Cisco UC Virtualization Foundation or VMware vSphere limit the amount of total vRAM that can be used (and therefore the amount of physical RAM that can be used for UC VMs, due to UC sizing rules). See Unified Communications VMware Requirements for these limits. In general, larger deployments, or deployments with high VM counts, will require very high vRAM totals and will therefore need to use VMware vSphere instead of Cisco UC Virtualization Foundation. If using high-memory-capacity servers, use VMware vSphere instead to ensure use of all physical memory. }}

Size memory while following co-residency support policy rules. Per these rules, recall that UC does not support physical memory oversubscription (1 GB of vRAM must equal 1 GB of physical RAM). Cisco TAC is not obligated to troubleshoot UC app issues if the deployment has insufficient physical RAM.
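A minimal sketch of the no-oversubscription memory rule above. The helper name and the hypervisor reserve value are assumptions for illustration (the actual ESXi overhead varies by host configuration); the rule it encodes is from the text: total vRAM across UC VMs must fit within physical RAM, 1 GB for 1 GB.

```python
# Illustrative check, not a Cisco tool: UC does not allow physical memory
# oversubscription, so the sum of vRAM must fit in physical RAM.

ESXI_OVERHEAD_GB = 2  # assumed hypervisor reserve; actual value varies

def uc_ram_ok(physical_ram_gb: int, vm_vram_gb: list[int]) -> bool:
    total_vram = sum(vm_vram_gb)  # 1 GB vRAM must equal 1 GB physical RAM
    return total_vram + ESXI_OVERHEAD_GB <= physical_ram_gb

# 96 GB host: three 24 GB VMs fit; a fourth would oversubscribe.
print(uc_ram_ok(96, [24, 24, 24]))      # True
print(uc_ram_ok(96, [24, 24, 24, 24]))  # False
```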


{{ note | UC on UCS TRCs using only DAS storage (such as C220 M3S TRC#1) have been pre-designed and tested to meet the above requirements for any UC with UC co-residency scenario that will fit on the TRC. Detailed capacity planning is not required unless deploying:
* non-UC/3rd-party apps
* VM OVA templates created later than the TRC
* VM OVA templates with very large vDisks (300GB+)
}}

{{ note | All of the above requirements must be met for Cisco UC to function properly. Except for UC on UCS TRCs using DAS only, it is the customer's responsibility to design a storage system that meets the above requirements. Cisco TAC is not obligated to troubleshoot UC app issues when customer-provided storage is insufficient, overloaded or otherwise not meeting the above requirements. }}

* '''B-Series TRCs''' may use the disk size/speed listed in Table 1 BOMs, or any other orderable size/speed for the blade server (since local disks are only used to boot VMware).
* '''C-Series TRCs''': both disk size and speed must be the same or higher than the specs listed in Table 1. E.g. for a TRC tested with 300 GB 10K rpm disks:
** 300GB 15K rpm is supported (faster)
** 146GB 10K rpm is not supported (too small)
** 7.2K rpm disks of any size are not supported (too slow)
* DAS is supported with customer-determined disk size, speed, quantity, technology, form factor and RAID configuration as long as:
** it is compatible with the VMware HCL and compatible with the server model used
** all UC latency, performance and capacity requirements are met. To ensure optimum UC app performance, be sure to use battery-backup cache or SuperCap on RAID controllers for DAS.
* TRC BOMs are updated as orderable disk drive options change. E.g. UCS C210 M2 TRC#1 was tested with 146GB 15K rpm disks, but due to 146GB disk EOL, the BOM now specifies 300GB 15K rpm disks (still supported as a TRC since both size and speed are "same or higher" than what was tested).

'''Disk Quantity, Technology, Form Factor'''
* UC on UCS TRC: must exactly match what is listed in Table 1. E.g. if the TRC was tested with ten 2.5" SAS drives, then that must be used regardless of disk size or speed.
* Specs-based: no UC requirements on disk size, speed, technology (SAS, SATA, FC disk), form factor or RAID configuration as long as requirements for compatibility, latency, performance and capacity are met. "Tier 1 Storage" is generally recommended for UC deployments. See the UC Virtualization Storage System Design Requirements for an illustration of a best-practices storage array configuration for UC.
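The "same or higher" disk-substitution rule for C-Series TRCs can be expressed as a simple two-condition check. This is a sketch with illustrative names, not a Cisco utility; it reproduces the worked examples from the text (TRC tested with 300 GB 10K rpm disks).

```python
# Sketch of the C-Series TRC disk-substitution rule: a replacement drive is
# acceptable only if BOTH size and rotational speed are the same or higher
# than the tested TRC spec. Names are illustrative, not from any Cisco tool.

def disk_substitution_ok(tested_gb: int, tested_krpm: float,
                         candidate_gb: int, candidate_krpm: float) -> bool:
    return candidate_gb >= tested_gb and candidate_krpm >= tested_krpm

# TRC tested with 300 GB 10K rpm disks:
print(disk_substitution_ok(300, 10, 300, 15))   # True  (faster)
print(disk_substitution_ok(300, 10, 146, 10))   # False (too small)
print(disk_substitution_ok(300, 10, 600, 7.2))  # False (too slow)
```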

* There is no UC-specific requirement for NFS version. Use what VMware and the server vendor recommend for the vSphere ESXi version required by UC.
* Use of storage network and array "features" (such as thin provisioning or EMC PowerPath) is allowed.
* Otherwise any shared storage configuration is allowed as long as UC requirements for VMware HCL, server compatibility, latency, capacity and performance are met.


Otherwise, there are no UC-specific requirements or restrictions. The different methods of installing UC apps into VMs can leverage the following distribution types of Cisco UC software:

UCS B-Series TRCs may use either the adapters listed in Table 1 BOMs or substitute any other supported adapter for the blade server model. Which adapter "should" be used depends on the deployment, design and UC apps.

The customer is also responsible for configuring redundant devices on the server (e.g. redundant NIC, HBA, VIA or CNA adapters).

There are no UC restrictions on hardware vendors for I/O devices other than that the VMware HCL and the server vendor/model must be compatible with them and support them.

'''IO Capacity and Performance'''

In most cases detailed capacity planning is not required for LAN IO or storage access IO. TRC adapter choices have been made to accommodate the IO of all UC on UCS app co-residency scenarios that will fit on the TRC. For guidance on active vs. standby network ports, see the Cisco UC Design Guide and QoS Design Considerations for Virtual UC with UCS. It is the customer's responsibility to ensure the external LAN and storage access meet UC app design requirements.

* LAN access adapters must be able to accommodate the LAN usage of UC VMs (described in UC app design guides).
* Storage access adapters must be able to accommodate the storage IOPS (described in the Storage section of this policy).

Cisco TAC is not obligated to troubleshoot UC app issues in a deployment with insufficient or overloaded I/O devices.
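The capacity-planning point above reduces to summing per-VM demand against a path budget. This is a hedged sketch: the function, VM names and per-VM IOPS figures below are placeholders invented for the example, not Cisco-published numbers; consult the UC app design guides for real values.

```python
# Illustrative aggregate-IOPS check, not a Cisco tool: the storage path
# (adapters plus array) must accommodate the combined IOPS of all
# co-resident UC VMs on the host.

def storage_path_ok(per_vm_iops: dict[str, int], path_iops_budget: int) -> bool:
    return sum(per_vm_iops.values()) <= path_iops_budget

# Placeholder per-VM IOPS figures for three co-resident UC VMs:
vms = {"cucm-pub": 80, "cucm-sub": 80, "unity-conn": 150}
print(storage_path_ok(vms, 500))  # True: 310 <= 500
```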