This document describes how the Cisco Unified Computing System™ can be used in conjunction with EMC® CLARiiON® storage systems to implement an Oracle Real Application Clusters (RAC) system that is an Oracle Certified Configuration. The Cisco Unified Computing System provides the compute, network, and storage access components of the cluster, deployed as a single cohesive system. The result is an implementation that addresses many of the challenges that database administrators and their IT departments face today, including needs for a simplified deployment and operation model, high performance for Oracle RAC software, and lower total cost of ownership (TCO). The document introduces the Cisco Unified Computing System and provides instructions for implementing it; it concludes with an analysis of the cluster’s performance and reliability characteristics.

Data powers essentially every operation in a modern enterprise, from keeping the supply chain operating efficiently to managing relationships with customers. Oracle RAC brings an innovative approach to the challenges of rapidly increasing amounts of data and demand for high performance. Oracle RAC uses a horizontal scaling (or scale-out) model that allows organizations to take advantage of the fact that the price of one-to-four-socket x86-architecture servers continues to drop while their processing power increases unabated. The clustered approach allows each server to contribute its processing power to the overall cluster’s capacity, enabling a new approach to managing the cluster’s performance and capacity.

Cisco is the undisputed leader in providing network connectivity in enterprise data centers. With the introduction of the Cisco Unified Computing System, Cisco is now equipped to provide the entire clustered infrastructure for Oracle RAC deployments. The Cisco Unified Computing System provides compute, network, virtualization, and storage access resources that are centrally controlled and managed as a single cohesive system. With the capability to scale to 160 servers and incorporate both blade and rack-mount servers in a single system, the Cisco Unified Computing System provides an ideal foundation for Oracle RAC deployments.

Historically, enterprise database management systems have run on costly symmetric multiprocessing servers that use a vertical scaling (or scale-up) model. However, as the cost of one-to-four-socket x86-architecture servers continues to drop while their processing power increases, a new model has emerged. Oracle RAC uses a horizontal scaling, or scale-out, model, in which the active-active cluster uses multiple servers, each contributing its processing power to the cluster, increasing performance, scalability, and availability. The cluster balances the workload across the servers in the cluster, and the cluster can provide continuous availability in the event of a failure.

All components in an Oracle RAC implementation must work together flawlessly, and Cisco has worked closely with EMC and Oracle to create, test, and certify a configuration of Oracle RAC on the Cisco Unified Computing System. Cisco’s Oracle Certified Configuration provides an implementation of Oracle Database 10g Release 2 and Oracle Database 11g Release 1 with Real Application Clusters technology consistent with industry best practices. For back-end Fibre Channel storage, it uses an EMC CLARiiON storage system with a mix of Fibre Channel drives and state-of-the-art Enterprise Flash Drives (EFDs) to further speed performance.

Because the entire cluster runs on a single cohesive system, database administrators no longer need to painstakingly configure each element in the hardware stack independently. The system’s compute, network, and storage-access resources are essentially stateless, provisioned dynamically by Cisco® UCS Manager. This role- and policy-based embedded management system handles every aspect of system configuration, from a server’s firmware and identity settings to the network connections that carry storage traffic to the destination storage system. This capability dramatically simplifies the process of scaling an Oracle RAC configuration or rehosting an existing node on an upgraded server. Cisco UCS Manager uses the concept of service profiles and service profile templates to consistently and accurately configure resources. The system automatically configures and deploys servers in minutes, rather than the hours or days required by traditional systems composed of discrete, separately managed components. Indeed, Cisco UCS Manager can simplify server deployment to the point where it can automatically discover, provision, and deploy a new blade server when it is inserted into a chassis.

The system is based on a 10-Gbps unified network fabric that radically simplifies cabling at the rack level by consolidating both IP and Fibre Channel traffic onto the same rack-level 10-Gbps converged network. This “wire-once” model allows in-rack network cabling to be configured once, with network features and configurations all implemented by changes in software rather than by error-prone changes in physical cabling. This Oracle Certified Configuration not only supports physically separate public and private networks but also provides redundancy with automatic failover.

The Cisco UCS B-Series Blade Servers used in this certified configuration feature Intel Xeon 5500 series processors that deliver intelligent performance, automated energy efficiency, and flexible virtualization. Intel Turbo Boost Technology automatically increases processor frequency when workloads demand it and thermal conditions permit, and Intel Hyper-Threading Technology adds thread-level parallelism to deliver high performance.

The patented Cisco Extended Memory Technology offers twice the memory footprint (384 GB) of any other server using 8-GB DIMMs, or the economical option of a 192-GB memory footprint using inexpensive 4-GB DIMMs. Both choices for large memory footprints can help speed database performance by allowing more data to be cached in memory.
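The two footprints follow directly from the blade's DIMM count. As a quick sanity check (the 48-DIMM-slot count is an assumption used here for illustration, not stated above):

```shell
# Arithmetic behind the two extended-memory footprints; the 48-DIMM-slot
# count is an illustrative assumption, not taken from this document.
dimm_slots=48
echo "$((dimm_slots * 8)) GB using 8-GB DIMMs"   # 384 GB
echo "$((dimm_slots * 4)) GB using 4-GB DIMMs"   # 192 GB
```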

Cisco and Oracle are working together to promote interoperability of Oracle’s next-generation database and application solutions with the Cisco Unified Computing System, helping make the Cisco Unified Computing System a simple and safe platform on which to run Oracle software. In addition to the certified Oracle RAC configuration described in this document, Cisco, Oracle and EMC have:

This document introduces the Cisco Unified Computing System and discusses the ways it addresses many of the challenges that database administrators and their IT departments face today. The document provides an overview of the certified Oracle RAC configuration along with instructions for setting up the Cisco Unified Computing System and the EMC CLARiiON storage system, including database table setup and the use of EFDs. The document reports on Cisco’s performance measurements for the cluster and a reliability analysis that demonstrates how the system continues operation even when hardware faults occur.

The system includes an embedded, end-to-end management system deployed in a high-availability active-standby configuration. Cisco UCS Manager uses role- and policy-based management that allows IT departments to continue to use subject-matter experts to define server, network, and storage access policy. After a server and its identity, firmware, configuration, and connectivity are defined, the server, or a number of servers like it, can be deployed in minutes, rather than the hours or days that it typically takes to move a server from the loading dock to production use. This capability relieves database administrators of tedious, manual assembly of individual components and makes scaling an Oracle RAC configuration a straightforward process.

The Cisco Unified Computing System represents a radical simplification compared to the way that servers and networks are deployed today. It reduces network access-layer fragmentation by eliminating switching inside the blade server chassis. It integrates compute resources on a unified I/O fabric that supports standard IP protocols as well as Fibre Channel through FCoE encapsulation. The system eliminates the limitations of fixed I/O configurations with an I/O architecture that can be changed through software on a per-server basis to provide needed connectivity using a just-in-time deployment model. The result of this radical simplification is fewer switches, cables, adapters, and management points, helping reduce cost, complexity, power needs, and cooling overhead.

The system’s blade servers are based on the fastest Intel Xeon 5500 series processors. These processors adapt performance to application demands, increasing the clock rate on specific processor cores as workload and thermal conditions permit. These processors, combined with patented Cisco Extended Memory Technology, deliver database performance along with the memory footprint needed to support large in-server caches. The system is integrated within a 10 Gigabit Ethernet–based unified fabric that delivers the throughput and low-latency characteristics needed to support the demands of the cluster’s public network, storage traffic, and high-volume cluster messaging traffic.

The system used to create the certified configuration is designed to be highly scalable, with up to 20 blade chassis and 160 blade servers connected by a single pair of low-latency, lossless fabric interconnects. New compute resources can be put into service quickly, enabling Oracle RAC configurations to be scaled on demand with the compute resources they require.

The system gives Oracle RAC room to scale while anticipating future technology investments. The blade server chassis, power supplies, and midplane are capable of handling future servers with even greater processing capacity. Likewise, the chassis is built to support future 40 Gigabit Ethernet standards when they become available.

The Cisco Unified Computing System used for the certified configuration is based on Cisco B-Series Blade Servers; however, the breadth of Cisco’s server and network product line suggests that similar product combinations will meet the same requirements. The Cisco Unified Computing System uses a form-factor-neutral architecture that will allow Cisco C-Series Rack-Mount Servers to be integrated as part of the system using capabilities planned to follow the product’s first customer shipment (FCS). Similarly, the system’s core components (high-performance compute resources integrated using a unified fabric) can be integrated manually today using Cisco C-Series servers and Cisco Nexus™ 5000 Series Switches.

The system used to create the Oracle Certified Configuration is built from the hierarchy of components illustrated in Figure 1:

●The Cisco UCS 6120XP 20-Port Fabric Interconnect provides low-latency, lossless, 10-Gbps unified fabric connectivity for the cluster. The interconnect provides connectivity to blade server chassis and the enterprise IP network. Through an 8-port, 4-Gbps Fibre Channel expansion card, the interconnect provides native Fibre Channel access to the EMC CLARiiON storage system. Two fabric interconnects are configured in the cluster, providing physical separation between the public and private networks and also providing the capability to securely host both networks in the event of a failure.

●The Cisco UCS 2104XP Fabric Extender brings the unified fabric into each blade server chassis. The fabric extender is configured and managed by the fabric interconnects, eliminating the complexity of blade-server-resident switches. Two fabric extenders are configured in each of the cluster’s two blade server chassis. Each one uses two of the four available 10-Gbps uplinks to connect to one of the two fabric interconnects.

●The Cisco UCS 5108 Blade Server Chassis houses the fabric extenders, up to four power supplies, and up to eight blade servers. As part of the system’s radical simplification, the blade server chassis is also managed by the fabric interconnects, eliminating another point of management. Two chassis were configured for the Oracle RAC described in this document.

●The blade chassis supports up to eight half-width blades or up to four full-width blades. The certified configuration uses eight (four in each chassis) Cisco UCS B200 M1 Blade Servers, each equipped with two quad-core Intel Xeon 5500 series processors at 2.93 GHz. Each blade server was configured with 24 GB of memory. A memory footprint of up to 384 GB can be accommodated through the use of a Cisco UCS B250 M1 Extended Memory Blade Server.

●The blade server form factor supports a range of mezzanine-format Cisco UCS network adapters, including a 10 Gigabit Ethernet network adapter designed for efficiency and performance, the Cisco UCS M81KR Virtual Interface Card designed to deliver the system’s full support for virtualization, and a set of Cisco UCS M71KR converged network adapters designed for full compatibility with existing Ethernet and Fibre Channel environments. These adapters present both an Ethernet network interface card (NIC) and a Fibre Channel host bus adapter (HBA) to the host operating system. They make the existence of the unified fabric transparent to the operating system, passing traffic from both the NIC and the HBA onto the unified fabric. Versions are available with either Emulex or QLogic HBA silicon; the certified configuration uses the Cisco UCS M71KR-Q QLogic Converged Network Adapter, which provides 20 Gbps of connectivity by connecting to each of the two chassis fabric extenders.

The configuration presented in this document is based on the Oracle Database 10g Release 2 with Real Application Clusters technology certification environment specified for an Oracle RAC and EMC CLARiiON CX4-960 system (Figure 2).

Figure 2 illustrates the 8-node configuration with EMC CLARiiON CX4-960 storage and the Cisco Unified Computing System running Oracle Enterprise Linux (OEL) Version 5.3. This scalable configuration enables users to scale horizontally (by adding nodes) and vertically (by adding processor, memory, and storage resources within each node).

In the figure, the blue lines indicate the public network connecting to Fabric Interconnect A, and the green lines indicate the private interconnects connecting to Fabric Interconnect B. The public and private VLANs spanning the fabric interconnects help ensure the connectivity in case of link failure. Note that the FCoE communication takes place between the Cisco Unified Computing System chassis and fabric interconnects (red and green lines). This is a typical configuration that can be deployed in a customer's environment. The best practices and setup recommendations are described in subsequent sections of this document.

As shown in Figure 3, two chassis housing four blades each were used for this eight-node Oracle RAC solution. Tables 1 through 5 list the configuration details for all the server, LAN, and SAN components that were used for testing.

Figure 3. Detailed Topology of the Public Network and Oracle RAC Private Interconnects

The first step is to establish connectivity between the blades and fabric interconnects. As shown in Figure 4, four public (two per chassis) links go to Fabric Interconnect A (ports 5 through 8). Similarly, four private links go to Fabric Interconnect B. These ports should be configured as server ports as shown in Figure 4.

Before configuring the service profile, you should perform the following steps:

●Configure the SAN: On the SAN tab, set the VSANs to be used in the SAN (if any). You should also set up pools of worldwide node names (WWNNs) and worldwide port names (WWPNs) for assignment to the blade server virtual HBAs (vHBAs).

●Configure the LAN: On the LAN tab, set the VLAN assignments for the virtual NICs (vNICs). You can also set up MAC address pools for assignment to vNICs. For this setup, the default VLAN (VLAN ID 1) was used for the public interfaces, and a private VLAN (VLAN ID 100) was created for the Oracle RAC private interfaces.

Note: It is very important that you create a VLAN that is global across both fabric interconnects. This way, VLAN identity is maintained across the fabric interconnects in case of failover.

The following screenshot shows two VLANs.
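The same VLANs can also be defined from the Cisco UCS Manager CLI. The following transcript is a hedged sketch from memory (command names and scopes can differ between UCS Manager releases, so verify against the CLI configuration guide for your version); creating the VLAN under the global Ethernet uplink scope makes it available on both fabric interconnects, satisfying the note above:

```
# Sketch only -- verify syntax against your UCS Manager CLI reference.
scope eth-uplink
  create vlan oraclepriv 100
    exit
  exit
commit-buffer
```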

After these preparatory steps have been completed, you can generate a service profile template for the required hardware configuration. You can then create the service profiles for all eight nodes from the template.

Service profiles are the central concept of the Cisco Unified Computing System. Each service profile serves a specific purpose: to help ensure that the associated server hardware has the configuration required to support the applications it will host.

The service profile maintains configuration information about:

●Server hardware

●Interfaces

●Fabric connectivity

●Server and network identity

This information is stored in a format that can be managed through Cisco UCS Manager. All service profiles are centrally managed and stored in a database on the fabric interconnect.

Initial templates create new service profiles with the same attributes, but the child service profiles are not updated when a change is made to the original template. If you select Updating Template, child profiles are updated immediately when the template changes, which can cause all servers associated with those profiles to reboot, so you should use updating templates with care.

c)Click Next.

3.On the Storage screen (to create vHBAs for SAN storage):

a)In the How would you like to configure SAN storage? options, select Expert.

b)Click Add to add an HBA.

4.On the Create vHBA screen:

a)In the Name field, enter vHBA1.

b)In the Select VSAN drop-down list, choose VSAN default.

For simplicity, this configuration uses the default VSAN for both HBAs. You may need to make a different selection depending on what is appropriate for your configuration.

c)If you have created SAN pin groups for pinning Fibre Channel traffic to a specific Fibre Channel port, specify appropriate pin groups, using the Pin Group drop-down list.

Pinning in a Cisco Unified Computing System is relevant only to uplink ports, where you can pin Ethernet or FCoE traffic from a given server to a specific uplink Ethernet (NIC) port or uplink (HBA) Fibre Channel port. When you pin the NIC and HBA of both physical and virtual servers to uplink ports, you get finer control over the unified fabric. This control helps ensure better utilization of uplink port bandwidth. However, manual pinning requires an understanding of network and HBA traffic bandwidth across the uplink ports. The configuration described here does not use pin groups.

The screenshot shows the configuration for vHBA1 assigned to Fabric Interconnect A.

d)Click OK.

5.On the Storage screen (to create the second vHBA for SAN storage):

a)Click Add to add an HBA.

6.On the Create vHBA screen, create the second vHBA:

a)In the Name field, enter vHBA2.

b)In the Select VSAN drop-down list, choose VSAN default.

For simplicity, this configuration uses the default VSAN for both HBAs. You may need to make a different selection depending on what is appropriate for your configuration.

c)If you have created SAN pin groups for pinning Fibre Channel traffic to a specific Fibre Channel port, specify appropriate pin groups, using the Pin Group drop-down list.

The screenshot shows the configuration for vHBA2 assigned to Fabric Interconnect B.

d)Click OK.

7.On the Storage screen, click Finish.

Two vHBAs have now been created, which completes the SAN configuration.

Follow these steps to create the vNICs and then associate them with the appropriate VLANs:

1.On the Networking screen:

a)In the How would you like to configure LAN connectivity? options, select Expert.

b)Click Add.

2.On the Create vNICs screen:

a)In the Name field, enter vNIC1.

b)For the Fabric ID options, select Fabric A and Enable Failover.

c)For the VLAN Trunking options, select Yes.

VLAN trunking allows multiple VLANs to use a single uplink port on the system.

d)In the VLANs area, select the associated check boxes for default and oraclepriv.

e)Click OK.

vNIC1 is now assigned to use Fabric Interconnect A for the public network.

Create the second vNIC in Step 3.

3.On the Networking screen:

a)Click Add to add vNIC2.

4.On the Create vNICs screen:

a)In the Name field, enter vNIC2.

b)For the Fabric ID options, select Fabric B and Enable Failover.

c)For the VLAN Trunking options, select Yes.

d)In the VLANs area, select the associated check boxes for default and oraclepriv.

5.Click OK.

vNIC2 is now assigned to use Fabric Interconnect B for the Oracle RAC private network.

The Networking screen lists the vNICs that you have created.

The setup created here did not use SAN boot or any other policies. You can configure these in the screens that follow the Networking screen. You may be required to configure these policies if you choose to boot from the SAN or if you associate any specific policies with your configuration.

This document provides a general overview of the storage configuration for the database layout. However, it does not supply details about host connectivity or logical unit number (LUN) and RAID configuration. For more information about EMC CLARiiON storage, refer to http://powerlink.emc.com.

Follow these steps to configure storage for the Cisco Unified Computing System data center solution:

1.Ensure host connectivity.

If each host has the EMC Navisphere Agent® package installed, the agent automatically registers the HBA initiators.

2.If the package is not installed, register all initiators manually to complete the host registration.

3.Create the RAID groups.

Testing for the Cisco Unified Computing System solution used:

●EMC CLARiiON CX4-960 with 105 Fibre Channel spindles

●15 EFDs

Figure 5 illustrates the RAID groups created for database testing.

Figure 5. RAID Groups Used in Database Testing

4.Create the LUNs.

Note: It is extremely important that you choose an appropriate storage processor as the default owner for each LUN so that the load is evenly balanced between the two storage processors. The Cisco Unified Computing System data center solution creates one LUN per RAID group for Fibre Channel drives and four LUNs per RAID group for EFDs.

Table 7 provides the LUN configuration data.

Table 7. LUN Configuration Data

RAID Group and Type              LUN              Size     Purpose                     Owner (Storage Processor)
RAID Group 0 (RAID-5 4+1)        LUNs 0 and 1     256 MB   Voting disks                SP-A
                                 LUN 2            256 MB   OCR disk                    SP-B
RAID Group 1 (RAID-5 4+1)        LUN 3            256 MB   Voting disk                 SP-B
                                 LUN 4            256 MB   OCR disk                    SP-A
RAID Group 2, EFDs (RAID-5 4+1)  LUNs 5 and 6     66 GB    Data disks for Oracle ASM   SP-A
                                 LUNs 7 and 8     66 GB    Data disks for Oracle ASM   SP-B
RAID Group 3, EFDs (RAID-5 4+1)  LUNs 9 and 10    66 GB    Data disks for Oracle ASM   SP-A
                                 LUNs 11 and 12   66 GB    Data disks for Oracle ASM   SP-B
RAID Group 4, EFDs (RAID-5 4+1)  LUNs 13 and 14   66 GB    Data disks for Oracle ASM   SP-A
                                 LUNs 15 and 16   66 GB    Data disks for Oracle ASM   SP-B
RAID Group 5 (RAID-5 4+1)        LUN 50           512 GB   Redo logs                   SP-A
RAID Group 6 (RAID-5 4+1)        LUN 17           512 GB   Redo logs                   SP-B
RAID Group 7 (RAID-5 4+1)        LUN 18           512 GB   Redo logs                   SP-A
RAID Group 8 (RAID-5 4+1)        LUN 19           512 GB   Redo logs                   SP-B
RAID Group 9 (RAID-5 4+1)        LUN 20           768 GB   Temp                        SP-A
RAID Group 10 (RAID-5 4+1)       LUN 21           768 GB   Temp                        SP-B
RAID Group 11 (RAID-5 4+1)       LUN 22           768 GB   Temp                        SP-A
RAID Group 12 (RAID-5 4+1)       LUN 23           768 GB   Temp                        SP-B
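The even SP-A/SP-B balancing called out in the note above can be double-checked mechanically by tallying the Table 7 assignments. This is an illustrative sketch (the file name lun_map.txt is arbitrary; sizes are converted to MB):

```shell
# Tally Table 7's default-owner assignments per storage processor.
# Columns: LUN  owner  size-in-MB (66 GB = 67584, 512 GB = 524288, 768 GB = 786432).
cat > lun_map.txt <<'EOF'
0 SP-A 256
1 SP-A 256
2 SP-B 256
3 SP-B 256
4 SP-A 256
5 SP-A 67584
6 SP-A 67584
7 SP-B 67584
8 SP-B 67584
9 SP-A 67584
10 SP-A 67584
11 SP-B 67584
12 SP-B 67584
13 SP-A 67584
14 SP-A 67584
15 SP-B 67584
16 SP-B 67584
50 SP-A 524288
17 SP-B 524288
18 SP-A 524288
19 SP-B 524288
20 SP-A 786432
21 SP-B 786432
22 SP-A 786432
23 SP-B 786432
EOF

for sp in SP-A SP-B; do
  awk -v sp="$sp" '$2 == sp { n++; mb += $3 }
       END { printf "%s: %d LUNs, %d MB\n", sp, n, mb }' lun_map.txt
done
```

The counts differ by only one LUN and the capacities by only 256 MB, confirming the near-even split between the two storage processors.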

5.Follow the additional recommendations for configuring storage and LUNs:

a)Turn off the read and write caches for EFD-based LUNs. In most situations, it is better to turn off both the read and write caches on all the LUNs that reside on EFDs, for the following reasons:

●The EFDs are extremely fast: When the read cache is enabled for LUNs residing on them, the read cache lookup performed for each read request adds proportionally more overhead than it does for Fibre Channel drives, and the application profile here is not expected to get many read cache hits in any case. It is generally much faster to read the block directly from the EFD.

●In typical situations, the storage array is shared by several other applications in addition to the database, particularly when the array deploys mixed drive types that may include slower SATA drives. The write cache can become fully saturated, placing the EFDs in a forced-flush situation that adds latency. In these situations, it is therefore better to write blocks directly to the EFDs than to the storage system’s write cache.

b)Distribute database files for EFDs. Refer to Table 8 for recommendations about distributing database files based on the type of workload.

●Sequentially read and written, but I/O is performed in 1-MB units; not enough to amortize seeks

●Lower latency: Get in and get out

Redo Log Files

●Sequential I/O

●Read, write, and commit latency already handled by cache in the storage controller

Undo Tablespace

●Sequential writes and random reads by Oracle Flashback

●Generally, reads are for recently written data that is likely to be in the buffer cache

Large Table Scans (If Single Stream)

The configuration described here employs most of EMC’s best practices and recommendations for LUN distribution in the database. It also adopts the layout for a mixed storage environment consisting of Fibre Channel disks and EFDs.

Follow these steps to install the OS and enable the environment settings:

1.Install 64-bit OEL 5.3, Update 3, on all eight nodes.

2.Update the Intel ixgbe driver by applying the latest errata kernel.

Because of a bug in OEL and Red Hat Enterprise Linux (RHEL) 5.3, systems with 16 or more logical processors that use network devices requiring the ixgbe driver have intermittent network connectivity or can experience a kernel panic. To help ensure network stability, follow the recommendations in the article at http://kbase.redhat.com/faq/docs/DOC-16041.

3.Install the Oracle Validated RPM package.

Use of this RPM package can simplify preparation of Linux for Oracle Clusterware and RAC installation. The RPM downloads (or updates) all necessary RPM packages on the system, resolves dependencies, and creates Oracle users and groups. It also sets all appropriate OS and kernel specifications, depending on the system configuration. The appendix lists kernel settings if you decide to set them manually.
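For reference, the kernel settings that the Oracle Validated RPM applies resemble the following /etc/sysctl.conf fragment. These are the generic values from Oracle's installation guides, shown for illustration only; the RPM computes some values (notably kernel.shmmax, which depends on installed memory) from the system configuration, so treat this as a sketch rather than the exact result:

```
# Typical Oracle-recommended kernel settings (illustrative values only;
# kernel.shmmax is omitted because it is sized from installed memory)
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```

After editing the file, the settings can be loaded with sysctl -p.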

For more information about creating and populating databases for OLTP (Order Entry) and DSS (Sales History) workloads, refer to the Oracle SwingBench documentation at http://dominicgiles.com/swingbench.html.

To evaluate workload performance, the cluster was stressed for 24 hours with a sustained load. During the 24-hour run of both the OLTP (Order Entry) and the DSS (Sales History) workloads, no crashes or degradation of performance was observed.

The following workload performance metrics were detected and recorded:

●Sustained FCoE-based I/O ranging between 1.8 and 2.0 GB per second, which could be further divided into 1.4 GB per second of Fibre Channel I/O and approximately 450 MB per second of interconnect communication

●No occurrence of I/O bottlenecks or wait times

●Excellent I/O service times for storage

The consistent workload performance can be attributed to:

●The simplified, excellent architectural design of the Cisco Unified Computing System based on a 10-Gbps unified fabric

●The pairing of the Cisco Unified Computing System with EMC CLARiiON storage with high-performance EFDs

Note: This is a testing, not a performance benchmarking, exercise. The numbers presented here should not be used for comparison purposes. The intent here is to look at the Cisco Unified Computing System supporting a sustained load over a long time period. Note that no tuning was performed, and the lack of resource saturation indicates that significant headroom is available to support greater performance than that shown here.

Figure 6 shows the Order Entry workload running 1500 users in the eighth hour of a 24-hour run.

Figure 6. Order Entry Workload

A typical OLTP Oracle application has some write activity because of Data Manipulation Language (DML) operations such as updates and inserts. Figure 7 shows the DML operations per minute for the OLTP workload.

Unlike the OLTP workload, the DSS workload is set to run from the command line. DSS workloads are generally very sequential and read intensive. For DSS workloads, it is common practice to set the parallel queries and the degree of parallelism on heavily read tables. This practice was followed in the test environment and achieved excellent performance, as indicated in the Tablespace and File IO Stats information from the Oracle Automated Workload Repository (AWR) report (90-minute duration) shown in Tables 9 and 10.

Table 9. Oracle AWR Report Tablespace IO Stats Information

As Table 9 shows, about 134 read operations occur per second. Each read fetches about 122 data blocks, and each data block is 8 KB in size; consequently, each read operation fetches roughly 1 MB (122.44 x 8 KB). In other words, this instance performs about 130 MB per second of read operations on the SH tablespace. Similar behavior was observed across all eight nodes, for a total of approximately 130 MB per second x 8 instances, or about 1 GB per second of read operations for the DSS workload.
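The arithmetic can be replayed mechanically; note that 122.44 blocks x 8 KB is slightly under 1 MB, so the per-instance figure computes to about 128 MB per second, which the text rounds up:

```shell
# Recompute the DSS read throughput from the AWR figures in Table 9.
awk 'BEGIN {
  reads_per_sec   = 134      # read requests per second on the SH tablespace
  blocks_per_read = 122.44   # average data blocks fetched per read
  block_kb        = 8        # database block size in KB

  mb_per_sec = reads_per_sec * blocks_per_read * block_kb / 1024
  printf "Per instance: %.0f MB/s\n", mb_per_sec
  printf "Cluster (8 nodes): %.2f GB/s\n", mb_per_sec * 8 / 1024
}'
```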

Table 10. Oracle AWR Report File IO Stats Information

The File IO Stats information indicates that all ASM-managed files have evenly spread read operations (18 to 20 operations per second). However, the benefit of the EFD drives is clearly reflected in the Av Rd(ms) column.

Generally speaking, rotating Fibre Channel drives perform well in a single stream of queries. However, addition of multiple concurrent streams (or parallel queries) causes additional seek and rotational latencies, thereby reducing the overall per-disk bandwidth. In contrast, the absence of any moving parts in EFDs enables sustained bandwidth regardless of the number of concurrent queries running on the drive.

Figure 8 provides a sample from a 24-hour stress run using the workload. It shows the combined FCoE read and write traffic observed at the fabric interconnects. This I/O is the combination of Oracle RAC interconnect traffic (approximately 450 MB per second) and Fibre Channel I/O (1.4 GB per second).
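As a cross-check, the two components quoted above sum to a figure inside the 1.8 to 2.0 GB-per-second FCoE range reported earlier:

```shell
# Sum the Fibre Channel I/O and RAC interconnect components of the FCoE traffic.
awk 'BEGIN {
  fc_gb = 1.4    # GB/s of Fibre Channel storage I/O
  ic_mb = 450    # MB/s of Oracle RAC interconnect traffic
  printf "Combined FCoE traffic: %.2f GB/s\n", fc_gb + ic_mb / 1024
}'
```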

Previous sections described Cisco Unified Computing System installation, configuration, and performance. This section examines the Cisco Unified Computing System’s nearly instant failover capabilities to show how they can improve overall availability after unexpected, but common, hardware failures attributed to ports and cables.

Figure 10 shows some of the failure scenarios (indicated by numbers) that were tested under the stress conditions described in the preceding section, “Testing Workload Performance.”

Figure 10. Sample Failure Scenarios

Table 11 summarizes the failure scenarios (each indicated by a number in Figure 10) and describes how the Cisco Unified Computing System architecture sustains unexpected failures related to ports, links, and the fabric interconnect (a rare occurrence).

Designed using a new and innovative approach to improve data center infrastructure, the Cisco Unified Computing System unites compute, network, storage access, and virtualization resources into a scalable, modular architecture that is managed as a single system.

For the Cisco Unified Computing System, Cisco has partnered with Oracle because Oracle databases and applications provide mission-critical software foundations for the majority of large enterprises worldwide. In addition, the architecture and large memory capabilities of the Cisco Unified Computing System connected to the industry-proven and scalable CLARiiON storage system enable customers to scale and manage Oracle database environments in ways not previously possible.

Both database administrators and system administrators will benefit from the Cisco Unified Computing System combination of superior architecture, outstanding performance, and unified fabric. They can achieve demonstrated results by following the documented best practices for database installation, configuration, and management outlined in this document.

The workload performance testing included a realistic mix of OLTP and DSS workloads, which generated a sustained load on the eight-node Oracle RAC configuration for a period of 72 hours. This type of load far exceeds the demands of typical database deployments.

Despite the strenuous workload, the following high-performance metrics were achieved: