About the Authors

Niranjan Mohapatra, Technical Marketing Engineer, SAVBU, Cisco Systems

Niranjan Mohapatra is a Technical Marketing Engineer in the Cisco Systems Data Center Group (DCG) and a specialist in Oracle RAC RDBMS. He has over 14 years of extensive experience with Oracle RAC databases and associated tools. Niranjan has worked as a TME and as a DBA handling production systems in various organizations. He holds a Master of Science (MSc) degree in Computer Science and is an Oracle Certified Professional (OCP-DBA) and a NetApp accredited storage architect. Niranjan also has a strong background in Cisco UCS, NetApp storage, and virtualization.

Acknowledgment

For their support and contribution to the design, validation, and creation of the Cisco Validated Design, I would like to thank:

•Siva Sivakumar - Cisco

•Vadiraja Bhatt - Cisco

•Tushar Patel - Cisco

•Ramakrishna Nishtala - Cisco

•John McAbel - Cisco

•Steven Schuettinger - NetApp

About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit:

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

FlexPod is a pretested data center solution built on a flexible, scalable, shared infrastructure consisting of Cisco UCS servers with Cisco Nexus® switches and NetApp unified storage systems running Data ONTAP. The FlexPod components are integrated and standardized to help you eliminate the guesswork and achieve timely, repeatable, consistent deployments. FlexPod has been optimized with a variety of mixed application workloads and design configurations in various environments such as virtual desktop infrastructure and secure multitenancy environments.

One main benefit of the FlexPod architecture is the ability to customize the environment to suit a customer's requirements. This is why the reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of an FCoE-based storage solution. A storage system capable of serving multiple protocols across a single interface gives customers choice and investment protection.

Large enterprises that are adopting virtualization have much higher I/O requirements, and for them FCoE is a better solution. Customers who have adopted Cisco® MDS 9000 family switches will probably prefer FCoE, as it offers inherent coexistence with Fibre Channel and removes the need to migrate existing Fibre Channel infrastructures. FCoE will take a large share of the SAN market; it will not make iSCSI obsolete, but it will reduce its potential market.

Virtualization started as a means of server consolidation, but IT needs are evolving as data centers become service providers. An isolated hypervisor cannot provide the speed and time to market required to deploy a complete application stack. To realize the full benefits of virtualization, Oracle offers integrated virtualization from the desktop to the data center, enabling you to virtualize and manage your complete hardware and software stack.

Oracle Real Application Clusters (RAC) allows an Oracle database to run any packaged or custom application, unchanged, across a pool of servers. This provides the highest levels of RAS (Reliability, Availability, and Scalability). If a server in the pool fails, the Oracle database continues to run on the remaining servers. When you need more processing power, simply add another server to the pool without taking users offline. Oracle Real Application Clusters provides a foundation for Oracle's private cloud architecture, and Oracle RAC 11g Release 2 additionally enables customers to build a dynamic private cloud infrastructure.

FlexPod Data Center with Oracle RAC on Oracle VM includes NetApp storage, Cisco® networking, Cisco UCS, and Oracle virtualization software in a single package. This solution is deployed and tested on a defined set of hardware and software.

This Cisco Validated Design describes how the Cisco Unified Computing System™ can be used in conjunction with NetApp FAS storage systems to implement an optimized system to run Oracle Real Application Clusters (RAC) in Oracle VM.

Business Needs

Business applications are moving into integrated stacks consisting of compute, network, and storage. This FlexPod solution helps reduce the cost and complexity of a traditional Oracle Database 11g Release 2 RAC deployment. This solution addresses the following business needs for an Oracle Database 11g Release 2 RAC deployment on Oracle VM:

•Reduced risk for a solution that is tested for end-to-end interoperability of compute, storage, and network.

•Reduced costs, power, and lab space requirements through fewer physical servers.

•Enable a global virtualization policy.

•Create a balanced configuration that yields predictable purchasing guidelines at the computing, network, and storage tiers for a given workload.

•Additional high availability achieved with Oracle VM and Oracle RAC, which are complementary technologies.

•Oracle VM application-driven server virtualization is designed for rapid application deployment and ease of lifecycle management. Using Oracle VM Templates, entire application stacks can be deployed into your new FlexPod architecture in hours rather than days or weeks, helping to accelerate time to value while standardizing your application deployment process to ensure reliability and minimize risk.

•Oracle offers a complete applications-to-disk stack, and virtualization is fully integrated across all layers. Oracle can provision and manage applications, middleware, and databases.

Solution Overview

This solution provides an end-to-end architecture with Cisco UCS, Oracle, and NetApp technologies, and demonstrates the implementation, capabilities, and advantages of Oracle Database 11g Release 2 RAC and Oracle VM on FlexPod.

The following infrastructure and software components are used for this solution:

•Cisco Unified Computing System*

•Cisco Nexus 5548UP switches

•NetApp storage components

•NetApp OnCommand® System Manager 2.1

•Oracle VM

•Oracle Database 11g Release 2 RAC

•Swingbench benchmark kit for OLTP and DSS workloads.

* Cisco Unified Computing System includes all the hardware and software components required for this deployment solution.

Figure 1 shows the architecture and the connectivity layout for this deployment model.

Figure 1 Solution Architecture

Let us look at individual components that define this architecture.

Technology Overview

Cisco Unified Computing System

Figure 2 Cisco Unified Computing System

The Cisco Unified Computing System is a third-generation data center platform that unites computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce TCO and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet (10GbE) unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform in which all the resources participate in a unified management domain that is controlled and managed centrally.

Figure 3 Cisco UCS Components

Figure 4 Cisco UCS Components

The main components of the Cisco UCS are:

•Compute

The system is based on an entirely new class of computing system that incorporates blade servers based on Intel Xeon® E5-2600 Series Processors. Cisco UCS B-Series Blade Servers work with virtualized and non-virtualized applications to increase performance, energy efficiency, flexibility and productivity.

•Network

The system is integrated onto a low-latency, lossless, 80-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.

•Storage access

The system provides consolidated access to both storage area networks (SANs) and network-attached storage (NAS) over the unified fabric. By unifying storage access, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with options for storage access and investment protection. Additionally, server administrators can reassign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity.

•Management

The system uniquely integrates all the system components which enable the entire solution to be managed as a single entity by the Cisco UCS Manager. The Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all the system configuration and operations.

The Cisco UCS is designed to deliver:

•A reduced Total Cost of Ownership (TCO), increased Return on Investment (ROI) and increased business agility.

•Increased IT staff productivity through just-in-time provisioning and mobility support.

•A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced and tested as a whole.

•Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.

•Industry standards supported by a partner ecosystem of industry leaders.

Cisco UCS Blade Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis.

The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2208XP Fabric Extenders.

A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 80 Gigabit Ethernet standards.

Cisco UCS B200 M3 Blade Server

The Cisco UCS B200 M3 Blade Server is a half-width, two-socket blade server. The system uses two Intel Xeon® E5-2600 Series Processors, up to 384 GB of DDR3 memory, two optional hot-swappable small form factor (SFF) serial attached SCSI (SAS) disk drives, and two VIC adapters that provide up to 80 Gbps of I/O throughput. The server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads.

Figure 6 Cisco UCS B200 M3 Blade Server

Cisco UCS Virtual Interface Card 1240

A Cisco innovation, the Cisco UCS VIC 1240 is a four-port 10 Gigabit Ethernet, FCoE-capable modular LAN on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1240 capabilities can be expanded to eight ports of 10 Gigabit Ethernet.

Cisco UCS 6248UP Fabric Interconnect

•The fabric interconnects provide a single point of connectivity and management for the entire system. Typically deployed as an active-active pair, the system's fabric interconnects integrate all the components into a single, highly available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server or virtual machine's topological location in the system.

•Cisco UCS 6200 Series Fabric Interconnects support the system's 80-Gbps unified fabric with low-latency, lossless, cut-through switching that supports IP, storage, and management traffic using a single set of cables. The fabric interconnects feature virtual interfaces that terminate both physical and virtual connections equivalently, establishing a virtualization-aware environment in which blade servers, rack servers, and virtual machines are interconnected using the same mechanisms. The Cisco UCS 6248UP is a 1-RU fabric interconnect that features up to 48 universal ports that can support 10 Gigabit Ethernet, Fibre Channel over Ethernet, or native Fibre Channel connectivity.

Figure 7 Cisco UCS 6248UP Fabric Interconnect

Cisco UCS Manager

Cisco UCS Manager is an embedded, unified manager that provides a single point of management for Cisco UCS. Cisco UCS Manager can be accessed through an intuitive GUI, a command-line interface (CLI), or the comprehensive open XML API. It manages the physical assets of the server and storage and LAN connectivity, and it is designed to simplify the management of virtual network connections through integration with several major hypervisor vendors. It provides IT departments with the flexibility to allow people to manage the system as a whole, or to assign specific management functions to individuals based on their roles as managers of server, storage, or network hardware assets. It simplifies operations by automatically discovering all the components available on the system and enabling a stateless model for resource use.
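For example, the XML API can be exercised directly with an HTTP client. The sketch below is an illustrative assumption; the host name and credentials are placeholders, not values from this deployment.

# Authenticate to the Cisco UCS Manager XML API endpoint and obtain a session cookie
curl -k -d '<aaaLogin inName="admin" inPassword="password" />' https://ucsm-vip/nuova
# The outCookie value returned in the response is passed in subsequent API requests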

Some of the key elements managed by Cisco UCS Manager include:

•Cisco UCS Integrated Management Controller (IMC) firmware

•RAID controller firmware and settings

•BIOS firmware and settings, including server universal user ID (UUID) and boot order

Cisco UCS is designed from the start to be programmable and self-integrating. A server's entire hardware stack, ranging from server firmware and settings to network profiles, is configured through model-based management. With Cisco virtual interface cards (VICs), even the number and type of I/O interfaces is programmed dynamically, making every server ready to power any workload at any time.

With model-based management, administrators manipulate a desired system configuration and associate a model's policy driven service profiles with hardware resources, and the system configures itself to match requirements. This automation accelerates provisioning and workload migration with accurate and rapid scalability. The result is increased IT staff productivity, improved compliance, and reduced risk of failures due to inconsistent configurations. This approach represents a radical simplification compared to traditional systems, reducing capital expenditures (CAPEX) and operating expenses (OPEX) while increasing business agility, simplifying and accelerating deployment, and improving performance.

UCS Service Profiles

Figure 8 Traditional Provisioning Approach

A server's identity is made up of many properties, such as UUID, boot order, IPMI settings, BIOS firmware, BIOS settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP addresses, number of HBAs, HBA WWNs, HBA firmware, FC fabric assignments, QoS settings, VLAN assignments, and remote keyboard/video/monitor settings. It is a long list of configuration points that must be set to give a server its identity and make it unique from every other server in the data center. Some of these parameters are kept in the hardware of the server itself (such as BIOS firmware version, BIOS settings, boot order, and FC boot settings), while other settings are kept on the network and storage switches (such as VLAN assignments, FC fabric assignments, QoS settings, and ACLs). This results in the following server deployment challenges:

Limited OS and application mobility

Cisco UCS has uniquely addressed these challenges with the introduction of service profiles (see Figure 9), which enable integrated, policy-based infrastructure management. UCS service profiles hold the DNA for nearly all configurable parameters required to set up a physical server. A set of user-defined policies (rules) allows quick, consistent, repeatable, and secure deployments of UCS servers.

Figure 9 Service Profiles

UCS Service Profiles contain values for a server's property settings, including virtual network interface cards (vNICs), MAC addresses, boot policies, firmware policies, fabric connectivity, external management, and high availability information. By abstracting these settings from the physical server into a Cisco Service Profile, the Service Profile can then be deployed to any physical compute hardware within the Cisco UCS domain. Furthermore, Service Profiles can, at any time, be migrated from one physical server to another. This logical abstraction of the server personality separates the dependency of the hardware type or model and is a result of Cisco's unified fabric model (rather than overlaying software tools on top).

This innovation is still unique in the industry despite competitors claiming to offer similar functionality. In most cases, these vendors must rely on several different methods and interfaces to configure these server settings. Furthermore, Cisco is the only hardware provider to offer a truly unified management platform, with UCS Service Profiles and hardware abstraction capabilities extending to both blade and rack servers.

Some of the key features and benefits of UCS service profiles are:

•Service Profiles and Templates

A service profile contains configuration information about the server hardware, interfaces, fabric connectivity, and server and network identity. The Cisco UCS Manager provisions servers utilizing service profiles. The UCS Manager implements a role-based and policy-based management focused on service profiles and templates. A service profile can be applied to any blade server to provision it with the characteristics required to support a specific software stack. A service profile allows server and network definitions to move within the management domain, enabling flexibility in the use of system resources.

Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by server, network, and storage administrators. Service profile templates consist of server requirements and the associated LAN and SAN connectivity. Service profile templates allow different classes of resources to be defined and applied to a number of resources, each with its own unique identities assigned from predetermined pools.

The UCS Manager can deploy the service profile on any physical server at any time. When a service profile is deployed to a server, the Cisco UCS Manager automatically configures the server, adapters, Fabric Extenders, and Fabric Interconnects to match the configuration specified in the service profile. A service profile template parameterizes the UIDs that differentiate between server instances.

This automation of device configuration reduces the number of manual steps required to configure servers, Network Interface Cards (NICs), Host Bus Adapters (HBAs), and LAN and SAN switches.

•Programmatically Deploying Server Resources

Cisco UCS Manager provides centralized management capabilities, creates a unified management domain, and serves as the central nervous system of the Cisco UCS. Cisco UCS Manager is embedded device management software that manages the system from end-to-end as a single logical entity through an intuitive GUI, CLI, or XML API. Cisco UCS Manager implements role- and policy-based management using service profiles and templates. This construct improves IT productivity and business agility. Now infrastructure can be provisioned in minutes instead of days, shifting IT's focus from maintenance to strategic initiatives.

•Dynamic Provisioning

Cisco UCS resources are abstract in the sense that their identity, I/O configuration, MAC addresses and WWNs, firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin groups, and threshold policies) all are programmable using a just-in-time deployment model. A service profile can be applied to any blade server to provision it with the characteristics required to support a specific software stack. A service profile allows server and network definitions to move within the management domain, enabling flexibility in the use of system resources. Service profile templates allow different classes of resources to be defined and applied to a number of resources, each with its own unique identities assigned from predetermined pools.

Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP is a 1RU 1 Gigabit and 10 Gigabit Ethernet switch offering up to 960 Gbps of throughput and scaling up to 48 ports. It offers 32 fixed 1/10 Gigabit Ethernet enhanced Small Form-Factor Pluggable (SFP+) Ethernet/FCoE or 1/2/4/8-Gbps native Fibre Channel unified ports and one expansion slot, which can be configured with a combination of Ethernet/FCoE and native Fibre Channel ports.

Figure 10 Cisco Nexus 5548UP switch

The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity, and business agility, with support for networking standards. For traditional, virtualized, unified, and high-performance computing (HPC) environments, it offers a long list of IT and business advantages, including:

–The expansion slot can support any of three modules: unified ports, 1/2/4/8-Gbps native Fibre Channel, or Ethernet/FCoE

–Throughput of up to 960 Gbps

NetApp Storage Technologies and Benefits

The NetApp storage platform can handle different types of files and data from various sources—including user files, e-mail, and databases. Data ONTAP is the fundamental NetApp software platform that runs on all NetApp storage systems. Data ONTAP is a highly optimized, scalable operating system that supports mixed NAS and SAN environments and a range of protocols, including Fibre Channel, iSCSI, FCoE, NFS, and CIFS. The platform includes the Write Anywhere File Layout (WAFL®) file system and storage virtualization capabilities. By leveraging the Data ONTAP platform, the NetApp Unified Storage Architecture offers the flexibility to manage, support, and scale to different business environments by using a common knowledge base and tools. This architecture enables users to collect, distribute, and manage data from all locations and applications at the same time. This allows the investment to scale by standardizing processes, cutting management time, and increasing availability. Figure 11 shows the various NetApp Unified Storage Architecture platforms.

Figure 11 NetApp Unified Storage Architecture Platforms

The NetApp storage hardware platform used in this solution is the FAS3270A. The FAS3200 series is an excellent platform for primary and secondary storage for an Oracle Database 11g Release 2 Grid Infrastructure deployment.

A number of NetApp tools and enhancements are available to augment the storage platform. These tools assist in deployment, backup, recovery, replication, management, and data protection. This solution makes use of a subset of these tools and enhancements.

Storage Architecture

The storage design for any solution is a critical element that is typically responsible for a large percentage of the solution's overall cost, performance, and agility.

The basic architecture of the storage system's software is shown in the figure below. A collection of tightly coupled processing modules handles CIFS, FCP, FCoE, HTTP, iSCSI, and NFS requests. A request starts in the network driver and moves up through network protocol layers and the file system, eventually generating disk I/O, if necessary. When the file system finishes the request, it sends a reply back to the network. The administrative layer at the top supports a command line interface (CLI) similar to UNIX® that monitors and controls the modules below. In addition to the modules shown, a simple real-time kernel provides basic services such as process creation, memory allocation, message passing, and interrupt handling.

The networking layer is derived from the same Berkeley code used by most UNIX systems, with modifications made to communicate efficiently with the storage appliance's file system. The storage appliance provides transport-independent seamless data access using block- and file-level protocols from the same platform. The storage appliance provides block-level data access over an FC SAN fabric using FCP and over an IP-based Ethernet network using iSCSI. File access protocols such as NFS, CIFS, HTTP, or FTP provide file-level access over an IP-based Ethernet network.

Figure 12 Storage Architecture

RAID-DP

RAID-DP® is NetApp's implementation of double-parity RAID 6, which is an extension of NetApp's original Data ONTAP WAFL® RAID 4 design. Unlike other RAID technologies, RAID-DP provides the ability to achieve a higher level of data protection without any performance impact, while consuming a minimal amount of storage. For more information on RAID-DP, see: http://www.netapp.com/us/products/platform-os/raid-dp.html

Snapshot

Creating Snapshot copies has minimal performance impact because data is never moved, as it is with other copy-out technologies. The cost of Snapshot copies is only the rate of block-level changes, not 100 percent of the data for each backup, as it is with mirror copies. Using Snapshot can reduce storage costs for backup and restore purposes and opens up a number of efficient data management possibilities.

FlexVol

With FlexVol you can improve—even double—the utilization of your existing storage and save the expense of acquiring more disk space. In addition to increasing storage efficiency, you can improve I/O performance and reduce bottlenecks by distributing volumes across all the available disk drives.

NetApp Flash Cache

Flash Cache speeds data access through intelligent caching of recently read user data and NetApp metadata. No setup or ongoing administration is needed, although operations can be tuned. Flash Cache works with all the NetApp storage protocols and software.

NetApp OnCommand System Manager 2.1

System Manager is a powerful management tool for NetApp storage that allows administrators to manage a single NetApp storage system as well as clusters, quickly and easily.

Some of the benefits of the System Manager Tool are:

•Easy to install

•Easy to manage from a Web browser

•Does not require storage expertise

•Increases storage productivity and response time

•Cost effective

•Leverages storage efficiency features such as thin provisioning and compression

Oracle VM 3.1.1

Oracle VM is a platform that provides a fully equipped environment with all the latest benefits of virtualization technology. Oracle VM enables you to deploy operating systems and application software within a supported virtualization environment. Oracle VM is a Xen-based hypervisor that runs at nearly bare-metal speeds.

•Oracle VM Server

A self-contained virtualization environment designed to provide a lightweight, secure, server-based platform for running virtual machines. Oracle VM Server is based on an updated version of the underlying Xen hypervisor technology and includes Oracle VM Agent.

•Oracle VM Agent

Installed with Oracle VM Server. It communicates with Oracle VM Manager for management of virtual machines.

The combination of Oracle VM and Oracle RAC enables better server consolidation (RAC databases with underutilized CPU resources or peaky CPU utilization can often benefit from consolidation with other workloads using server virtualization), sub-capacity licensing, and rapid provisioning. Oracle RAC on Oracle VM also supports the creation of non-production virtual clusters on a single physical server for product demos and test/dev environments. This deployment combination permits dynamic changes to pre-configured database resources for agile responses to changing service-level requirements common in consolidated environments.

Oracle VM is the only software-based virtualization solution that is fully supported and certified for Oracle Real Application Clusters.

There are several reasons why you may want to run Oracle RAC in an Oracle VM environment. The more common reasons are:

•Server consolidation

Oracle RAC databases or Oracle RAC One Node databases with underutilized CPU resources or variable CPU utilization can often benefit from consolidation with other workloads using server virtualization. A typical use case for this scenario is the consolidation of several Oracle databases (Oracle RAC, Oracle RAC One Node, or Oracle single-instance databases) into a single Oracle RAC database or multiple Oracle RAC databases, where the hosting Oracle VM guests have pre-defined resource limits configured for each VM guest.

•Sub-capacity licensing

The current Oracle licensing model requires the Oracle RAC database to be licensed for all CPUs on each server in the cluster. Sometimes customers wish to use only a subset of the CPUs on the server for a particular Oracle RAC database. Oracle VM can be configured in such a way that it is recognized as a hard partition. Hard partitions allow customers to license only those CPUs used by the partition instead of licensing all CPUs on the physical server. More information on sub-capacity licensing using hard partitioning can be found in the Oracle partitioning paper. For more information on using hard partitioning with Oracle VM, refer to the "Hard Partitioning with Oracle VM" white paper.

•Create a virtual cluster

Oracle VM enables the creation of a virtual cluster on a single physical server. This use case is particularly interesting for product demos, educational settings, and test environments. This configuration should never be used to run production Oracle RAC environments. The following are valid deployments for this use case:

–Test and development cluster

–Demonstration cluster

–Education cluster

•Rapid provisioning

The provisioning time of a new application consists of the server (physical or virtual) deployment time, and the software install and configuration time. Oracle VM can help reduce the deployment time for both of these components. Oracle VM supports the ability to create deployment templates. These templates can then be used to rapidly provision new (Oracle RAC) systems.

For Oracle RAC, currently only para-virtualized VM (PVM) mode is supported. Some of the advantages of using para-virtualized VM mode are described in the next subsection.

Para-virtualized VM (PVM)

Guest virtual machines running on Oracle VM Server should be configured in para-virtualized (PVM) mode. In this mode, the kernel of the guest operating system is modified to recognize that it is running on a hypervisor rather than on bare-metal hardware. As a result, I/O operations and system clock timers in particular are handled more efficiently than in non-para-virtualized systems, where I/O hardware and timers have to be emulated in the operating system. Oracle VM supports PV kernels for Oracle Linux and Red Hat Enterprise Linux, offering better performance and scalability.

Oracle Database 11g Release 2 RAC

Oracle Database 11g Release 2 provides the foundation for IT to successfully deliver more information with higher quality of service, reduce the risk of change within IT, and make more efficient use of IT budgets.

Oracle Database 11g Release 2 Enterprise Edition provides industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers with a wide range of options to meet user needs. Cloud computing relieves users from concerns about where data resides and which computer processes the requests. Users request information or computation and have it delivered - as much as they want, whenever they want it. For a DBA, the cloud is about resource allocation, information sharing, and high availability. Oracle Database with Real Application Clusters provides the infrastructure for your database cloud. Oracle Automatic Storage Management provides the infrastructure for a storage cloud. Oracle Enterprise Manager Cloud Control provides holistic management of your cloud.

Oracle Database 11g Direct NFS Client

Direct NFS client is an Oracle-developed, integrated, and optimized client that runs in user space rather than within the operating system kernel. This architecture provides enhanced scalability and performance over traditional NFS v3 clients. Unlike traditional NFS implementations, Oracle supports asynchronous I/O across all operating system environments with the Direct NFS client. In addition, performance and scalability are dramatically improved with its automatic link aggregation feature. This allows the client to scale across as many as four individual network pathways, with the added benefit of improved resiliency when network connectivity is occasionally compromised. It also allows the Direct NFS client to achieve near-block-level performance. For more information on how the Direct NFS client compares to block protocols, see: http://media.netapp.com/documents/tr-3700.pdf.
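Although the Direct NFS configuration is performed later in this deployment, a minimal sketch of what it typically involves on an 11g Release 2 home is shown below; the controller name, IP addresses, export path, and mount point are placeholder values for illustration.

# Relink the Oracle home to enable the Direct NFS ODM library (run as the oracle user)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Sample $ORACLE_HOME/dbs/oranfstab entry describing two paths in separate subnets
server: controller-a
path: 192.168.120.10
path: 192.168.121.10
export: /vol/DB_VOL_A mount: /u02/oradata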

Cisco UCS Networking and NetApp NFS Storage Topology

This section explains the Cisco UCS networking and computing design considerations when deploying Oracle Database 11g Release 2 RAC in an NFS storage design. In this design, the NFS traffic is isolated from the regular management and application data network on the same Cisco UCS infrastructure by defining logical VLAN networks to provide better data security. Figure 14 presents a detailed view of the physical topology and some of the main components of Cisco UCS in an NFS network design.

Figure 14 Cisco UCS Networking and NFS Storage Network Topology

Table 2 vPC Details

Network            vPC    VLAN ID
Public             33     760,761,191,120,121
Private            34     760,761,191,120,121
NetApp-Storage1    3      120,121
NetApp-Storage2    4      120,121

As shown in Figure 14, a pair of Cisco UCS 6248UP fabric interconnects carries both storage and network traffic from the blades with the help of the Cisco Nexus 5548UP switches. The 10 Gb FCoE traffic leaves the UCS fabrics through the Nexus 5548 switches to the NetApp array. Large enterprises adopting virtualization have much higher I/O requirements, and FCoE boot is a better solution for handling them effectively.

Both the fabric interconnect and the Cisco Nexus 5548UP switch are clustered with the peer link between them to provide high availability. Two virtual Port Channels (vPCs) are configured to provide public network, private network and storage access paths for the blades to northbound switches. Each vPC has VLANs created for application network data, NFS storage data, and management data paths. For more information about vPC configuration on the Cisco Nexus 5548UP Switch, see:

As illustrated in Figure 14, 8 (4 per chassis) links go to Fabric Interconnect A (ports 1 through 8). Similarly, 8 links go to Fabric Interconnect B. Fabric Interconnect A links are used for Oracle Public network and NFS Storage Network traffic and Fabric Interconnect B links are used for Oracle private interconnect traffic and NFS Storage network traffic.

Note For an Oracle RAC configuration on Cisco UCS, we recommend keeping all private interconnect traffic local on a single fabric interconnect. In that case, the private traffic stays local to that fabric interconnect and is not routed through the northbound network switch. In other words, all inter-blade (Oracle RAC node private) communication is resolved locally at the fabric interconnect, which significantly reduces latency for Oracle Cache Fusion traffic.

Cisco UCS Manager Configuration Overview

High Level Steps for Cisco UCS Configuration

The high-level steps involved in the Cisco UCS configuration are given below:

1. Configuring Fabric Interconnects for Chassis and Blade Discovery

a. Configure Global Policies

b. Configuring Server Ports

2. Configuring LAN and SAN on UCS Manager

a. Configure and Enable Ethernet LAN uplink Ports

b. Configure and Enable FC SAN uplink Ports

c. Configure VLAN

d. Configure VSAN

3. Configuring UUID, MAC, WWNN and WWPN Pool

a. UUID Pool Creation

b. IP Pool and MAC Pool Creation

c. WWNN Pool and WWPN Pool Creation

4. Configuring vNIC and vHBA Template

a. Create vNIC templates

b. Create Public vNIC template

c. Create Private vNIC template

d. Create Storage vNIC template

e. Create HBA templates

5. Configuring Ethernet Uplink Port Channels

6. Create Server Boot Policy for SAN Boot

Details of each step are discussed in the following sections.

Configuring Fabric Interconnects for Blade Discovery

The Cisco UCS 6248UP Fabric Interconnects are configured for redundancy, which provides resiliency in case of failures. The first step is to establish connectivity between the blades and the fabric interconnects.

Configure Global Policies

To configure global policies, follow these steps:

1. Log into UCS Manager.

2. Click the Equipment tab in the navigation pane.

3. Choose Equipment > Policies > Global Policies.

4. Under the Chassis/FEX Discovery Policy field, select 4-link from the Action drop-down list.

5. Select the desired number of ports by using the CTRL key and clicking the combination.

6. Right-click and choose Configure as Uplink Port as shown in Figure 18.

Figure 18 Configure Ethernet LAN Uplink Ports

As shown in Figure 18, we have selected ports 31 and 32 on Fabric Interconnect A and configured them as Ethernet uplink ports. Repeat the same steps on Fabric Interconnect B to configure ports 31 and 32 as Ethernet uplink ports. We have also selected ports 29 and 30 on both fabrics and configured them as FCoE uplink ports for FCoE boot.

Note You will use these ports to create port channels in later sections.

Important Oracle RAC Best Practices and Recommendations for VLAN and vNIC Configuration

•For Direct NFS clients running on Linux, the best practice is to always configure multiple paths in separate subnets. If multiple paths are configured in the same subnet, the operating system invariably picks the first available path from the routing table; all the traffic then flows through this path, and load balancing and scaling do not work as expected. Refer to Oracle MetaLink note 822481.1 for more details.

For this configuration, we have created VLAN 120 and VLAN 121 for storage access, and VSAN 101 and VSAN 102 for FCoE boot.

•Oracle Grid Infrastructure can activate a maximum of four private network adapters for availability and bandwidth requirements. If you want to configure HAIP for the Grid Infrastructure, you will need to create additional vNICs. We strongly recommend using a separate VLAN for each private vNIC. For Cisco UCS, a single 10GE private vNIC configured with fabric failover does not require a HAIP configuration from a bandwidth and availability perspective. As a general best practice, it is a good idea to localize all private interconnect traffic on a single fabric interconnect. For more information on Oracle HAIP, refer to Oracle MetaLink note 1210883.1.

Note After deciding on the VLANs and vNICs, you can configure the VLANs for this setup.

Configure VLAN

To configure VLAN, follow these steps:

1. Log into Cisco UCS Manager.

2. Click the LAN tab in the navigation pane.

3. Choose LAN > LAN Cloud > VLAN.

4. Right click and choose Create VLANs.

In this solution, we need to create five VLANs:

•One for private (VLAN 191)

•One for public network (VLAN 760)

•Two for storage traffic (VLAN 120 and 121)

•One for live migration (VLAN 761).

Note These five VLANs will be used in the vNIC templates.

Figure 19 Create VLAN for Public Network

Figure 19 highlights the creation of VLAN 760 for the public network. It is very important that you create all VLANs as global across both fabric interconnects. This way, VLAN identity is maintained across the fabric interconnects in case of a NIC failover.

Create VLANs for the public, storage, and live migration networks. If you are using the Oracle HAIP feature, you may have to configure additional VLANs to be associated with the additional vNICs as well.

Here is the summary of VLANs once you complete VLAN creation.

•VLAN ID 760 for public interfaces.

•VLAN ID 191 for Oracle RAC private interconnect interfaces.

•VLAN ID 120 and VLAN 121 for storage access.

•VLAN ID 761 for live migration.

Note Even though private VLAN traffic stays local within the UCS domain during normal operating conditions, it is necessary to configure entries for these private VLANs in the northbound network switches. This allows the switches to route interconnect traffic appropriately in case of partial link failures. These scenarios and the resulting traffic routing are discussed in detail in later sections.

Figure 20 summarizes all the VLANs for Public and Private network and Storage access.

After creating the FCoE boot policies for Fabric A and Fabric B, you can view the boot order in the UCS Manager GUI. To view the boot order, navigate to Servers > Policies > Boot Policies. Select Boot Policy Boot-FCoE-OVM-A to view the boot order for Fabric A in the right pane of the UCS Manager. Similarly, select Boot Policy Boot-FCoE-OVM-B to view the boot order for Fabric B in the right pane of the UCS Manager. Figure 53 and Figure 54 show the boot policies for Fabric A and Fabric B respectively in the UCS Manager.

Create Device Aliases for FCoE Zoning

Cisco Nexus 5548 A

To configure device aliases and zones for the primary boot paths of switch A on <<var_nexus_A_hostname>>, follow these steps:

From the global configuration mode, run the following commands:

1. Login as admin user

2. Run the following commands

conf t

device-alias database

device-alias name Storage-FlexPod-A-5a pwwn 50:0a:09:85:9d:93:40:7f

device-alias name Storage-FlexPod-B-5a pwwn 50:0a:09:85:8d:93:40:7f

device-alias name OVM-Host-FlexPod-01-A pwwn 20:00:00:25:b5:01:0a:00

device-alias name OVM-Host-FlexPod-02-A pwwn 20:00:00:25:b5:01:0a:01

device-alias name OVM-Host-FlexPod-03-A pwwn 20:00:00:25:b5:01:0a:02

device-alias name OVM-Host-FlexPod-04-A pwwn 20:00:00:25:b5:01:0a:03

exit

device-alias commit

Cisco Nexus 5548 B

To configure device aliases and zones for the boot paths of switch B on <<var_nexus_B_hostname>>, follow these steps:

From the global configuration mode, run the following commands:

1. Login as admin user

2. Run the following commands

conf t

device-alias database

device-alias name Storage-FlexPod-A-5b pwwn 50:0a:09:86:9d:93:40:7f

device-alias name Storage-FlexPod-B-5b pwwn 50:0a:09:86:8d:93:40:7f

device-alias name OVM-Host-FlexPod-01-B pwwn 20:00:00:25:b5:01:0b:00

device-alias name OVM-Host-FlexPod-02-B pwwn 20:00:00:25:b5:01:0b:01

device-alias name OVM-Host-FlexPod-03-B pwwn 20:00:00:25:b5:01:0b:02

device-alias name OVM-Host-FlexPod-04-B pwwn 20:00:00:25:b5:01:0b:03

exit

device-alias commit

Create Zones

Cisco Nexus 5548 A

To create zones for the service profiles on switch A, follow these steps:

1. Create a zone for each service profile.

Login as admin user.

Run the following commands:

conf t

zone name OVM-Host-FlexPod-01-A vsan 101

member device-alias OVM-Host-FlexPod-01-A

member device-alias Storage-FlexPod-A-5a

member device-alias Storage-FlexPod-B-5a

exit

zone name OVM-Host-FlexPod-02-A vsan 101

member device-alias OVM-Host-FlexPod-02-A

member device-alias Storage-FlexPod-A-5a

member device-alias Storage-FlexPod-B-5a

exit

zone name OVM-Host-FlexPod-03-A vsan 101

member device-alias OVM-Host-FlexPod-03-A

member device-alias Storage-FlexPod-A-5a

member device-alias Storage-FlexPod-B-5a

exit

zone name OVM-Host-FlexPod-04-A vsan 101

member device-alias OVM-Host-FlexPod-04-A

member device-alias Storage-FlexPod-A-5a

member device-alias Storage-FlexPod-B-5a

exit

2. After the zone for the Cisco UCS service profiles has been created, create the zone set and add the necessary members.

zoneset name FlexPod-OVM vsan 101

member OVM-Host-FlexPod-01-A

member OVM-Host-FlexPod-02-A

member OVM-Host-FlexPod-03-A

member OVM-Host-FlexPod-04-A

exit

3. Activate the zone set.

zoneset activate name FlexPod-OVM vsan 101

exit

copy run start

Cisco Nexus 5548 B

To create zones for the service profiles on switch B, follow these steps:

1. Create a zone for each service profile.

Login as admin user.

Run the following commands:

zone name OVM-Host-FlexPod-01-B vsan 102

member device-alias OVM-Host-FlexPod-01-B

member device-alias Storage-FlexPod-A-5b

member device-alias Storage-FlexPod-B-5b

exit

zone name OVM-Host-FlexPod-02-B vsan 102

member device-alias OVM-Host-FlexPod-02-B

member device-alias Storage-FlexPod-A-5b

member device-alias Storage-FlexPod-B-5b

exit

zone name OVM-Host-FlexPod-03-B vsan 102

member device-alias OVM-Host-FlexPod-03-B

member device-alias Storage-FlexPod-A-5b

member device-alias Storage-FlexPod-B-5b

exit

zone name OVM-Host-FlexPod-04-B vsan 102

member device-alias OVM-Host-FlexPod-04-B

member device-alias Storage-FlexPod-A-5b

member device-alias Storage-FlexPod-B-5b

exit

2. After all of the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.

zoneset name FlexPod-OVM vsan 102

member OVM-Host-FlexPod-01-B

member OVM-Host-FlexPod-02-B

member OVM-Host-FlexPod-03-B

member OVM-Host-FlexPod-04-B

exit

3. Activate the zone set.

zoneset activate name FlexPod-OVM vsan 102

exit

copy run start

When configuring the Cisco Nexus 5548UP with vPCs, be sure that the status for all the vPCs is up for the connected Ethernet ports by running the commands shown in Figure 68 from the CLI on the Cisco Nexus 5548UP Switch.

Figure 68 Port Channel Status on Cisco Nexus 5548UP

The show vpc command should confirm that the peer link and all configured vPCs are up.
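Assuming the standard NX-OS CLI, the following commands can be used on each Cisco Nexus 5548UP switch to verify that the peer link, peer keepalive, and port channels are healthy:

show vpc

show vpc peer-keepalive

show port-channel summary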

Storage Configuration for NFS Storage Network

Create and Configure Aggregate, Volumes

NetApp FAS3270HA Controller A

1. Create DB_Aggr_A with a RAID group size of 10, with 40 disks, and RAID_DP redundancy for hosting NetApp FlexVol volumes, as shown in Table 3.

FlexPod-Oracle-A > aggr create DB_Aggr_A -t raid_dp -r 10 40

2. Create NetApp FlexVol volumes on DB_Aggr_A for the OLTP and DSS data files as shown in Table 3. These volumes are exposed directly to the guest VMs that are part of the Oracle RAC nodes.

FlexPod-Oracle-A > vol create DB_VOL_A DB_Aggr_A 3072g

FlexPod-Oracle-A > vol create DB_VOL_DSS_A DB_Aggr_A 2048g

FlexPod-Oracle-A > vol create LOG_VOL_A DB_Aggr_A 500g

FlexPod-Oracle-A > vol create LOG_VOL_DSS_A DB_Aggr_A 500g

FlexPod-Oracle-A > vol create OCR_VOTE_VOL DB_Aggr_A 20g

NetApp FAS3270HA Controller B

1. Create DB_Aggr_B with a RAID group size of 10, with 40 disks, and RAID_DP redundancy for hosting NetApp FlexVol volumes, as shown in Table 3.

FlexPod-Oracle-B > aggr create DB_Aggr_B -t raid_dp -r 10 40

2. Create NetApp FlexVol volumes on DB_Aggr_B for the OLTP and DSS data files as shown in Table 3. These volumes are exposed directly to the guest VMs that are part of the Oracle RAC nodes.

FlexPod-Oracle-B > vol create DB_VOL_B DB_Aggr_B 3072g

FlexPod-Oracle-B > vol create DB_VOL_DSS_B DB_Aggr_B 2048g

FlexPod-Oracle-B > vol create LOG_VOL_B DB_Aggr_B 500g

FlexPod-Oracle-B > vol create LOG_VOL_DSS_B DB_Aggr_B 500g

NFS export all the flexible volumes (data volumes, redo log volumes, and OCR and voting disk volumes) from both Controller A and Controller B, providing read/write access to the root user of all hosts created in the previous steps.
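On Data ONTAP operating in 7-Mode, the exports can be created with the exportfs command. The sketch below is an assumption for illustration; the host names are placeholders for the hosts in your environment, and each volume is exported in the same way.

FlexPod-Oracle-A > exportfs -p rw=orarac1:orarac2:orarac3:orarac4,root=orarac1:orarac2:orarac3:orarac4 /vol/DB_VOL_A

FlexPod-Oracle-A > exportfs

Running exportfs with no arguments lists the currently exported paths so the rules can be verified.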

Create and Configure VIF Interface (Multimode)

Ensure that the NetApp multimode virtual interface (VIF) feature is enabled on the NetApp storage systems on the 10 Gigabit Ethernet ports (e5a and e5b) used for NFS storage access. We used the same VIF to access all the flexible volumes created to store the Oracle Database files that use the NFS protocol. Your best practices may vary depending on your setup.
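A minimal sketch of creating the multimode VIF on a 7-Mode controller is shown below; the interface group name, load-balancing method, and IP addressing are assumptions for illustration (older Data ONTAP releases use the equivalent vif commands).

FlexPod-Oracle-A > ifgrp create multi VIF0-a -b ip e5a e5b

FlexPod-Oracle-A > ifconfig VIF0-a mtusize 9000

FlexPod-Oracle-A > ifconfig VIF0-a 192.168.120.10 netmask 255.255.255.0 partner VIF0-b up

For the configuration to persist across reboots, the corresponding entries are also added to the /etc/rc file on each controller.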

Check the NetApp Configuration

Ensure that the MTU is set to 9000 and that jumbo frames are enabled on the Cisco UCS static and dynamic vNICs and on the upstream Cisco Nexus 5548UP switches.
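On the Cisco Nexus 5548UP switches, jumbo frames are typically enabled through a network-qos policy similar to the sketch below; this reflects the standard NX-OS approach and should be adapted to your existing QoS configuration. On the Cisco UCS side, the MTU is set to 9000 on the vNICs and the corresponding QoS system class in Cisco UCS Manager.

policy-map type network-qos jumbo

class type network-qos class-default

mtu 9216

system qos

service-policy type network-qos jumbo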

Figure 72 shows the virtual interface "VIF0-a" created with the MTU size set to 9000 and the trunk mode set to multiple, using two 10 Gigabit Ethernet ports (e5a and e5b) on NetApp storage Controller A. Verify the same on NetApp Controller B.

Figure 72 Virtual Interface (VIF) on NetApp Storage

This completes storage configuration. Next, we will review boot from FCoE details.

UCS Servers and Stateless Computing via FCoE Boot

Boot from FCoE Benefits

Booting from FCoE is another key feature that helps in moving toward stateless computing, in which there is no static binding between a physical server and the OS and applications it is tasked to run. The OS is installed on a SAN LUN, and the boot from FCoE policy is applied to the service profile template or the service profile. If the service profile is moved to another server, the pWWNs of the HBAs and the Boot from SAN (BFS) policy move along with it. The new server then takes on the exact identity of the old server, providing the stateless nature of the UCS blade server.

The key benefits of booting from the network:

•Reduce Server Footprints

Boot from FCoE alleviates the necessity for each server to have its own direct-attached disk, eliminating internal disks as a potential point of failure. Thin diskless servers also take up less facility space, require less power, and are generally less expensive because they have fewer hardware components.

•Disaster and Server Failure Recovery

All the boot information and production data stored on a local SAN can be replicated to a SAN at a remote disaster recovery site. If a disaster destroys functionality of the servers at the primary site, the remote site can take over with minimal downtime.

Recovery from server failures is simplified in a SAN environment. With the help of snapshots, mirrors of a failed server can be recovered quickly by booting from the original copy of its image. As a result, boot from SAN can greatly reduce the time required for server recovery.

•High Availability

A typical data center is highly redundant in nature - redundant paths, redundant disks and redundant storage controllers. When operating system images are stored on disks in the SAN, it supports high availability and eliminates the potential for mechanical failure of a local disk.

•Rapid Redeployment

Businesses that experience temporary high production workloads can take advantage of SAN technologies to clone the boot image and distribute the image to multiple servers for rapid deployment. Such servers may only need to be in production for hours or days and can be readily removed when the production need has been met. Highly efficient deployment of boot images makes temporary server usage a cost effective endeavor.

With Boot from SAN, the image resides on a SAN LUN, and the server communicates with the SAN through a host bus adapter (HBA). The HBA's BIOS contains the instructions that enable the server to find the boot disk. All the FC-capable Converged Network Adapter (CNA) cards supported on Cisco UCS B-Series Blade Servers support Boot from SAN.

After the power-on self-test (POST), the server hardware fetches the device designated as the boot device in the hardware BIOS settings. Once the hardware detects the boot device, it follows the regular boot process.

Quick Summary for Boot from SAN Configuration

At this point, we have completed the following steps that are essential for the Boot from SAN configuration.

•SAN Zoning configuration on the Nexus 5548UP switches

•NetApp Storage Array Configuration for Boot LUN

•Cisco UCS configuration of Boot from SAN policy in the service profile

At this point, you are ready to perform the OS install. Next, we will cover the steps to complete the OS install in an FCoE boot configuration.

Oracle VM Server Install Steps and Recommendations

For this solution, we configured a four-node Oracle Database 11g Release 2 RAC cluster using four guest VMs, each created on one Oracle VM Server. The four Cisco UCS B200 M3 servers use boot from SAN to enable stateless computing, in case a need arises to replace or swap a server using the unique UCS service profile capabilities. While the OS boots using FCoE, the databases and Grid Infrastructure components are configured to use the NFS protocol on the NetApp storage. Oracle VM Server 3.1.1 with Patch 819 (Oracle VM Server 3.1.1.819) is installed on each server.

This patch allows you to enable jumbo frames (MTU=9000) on the Ethernet ports of the Oracle VM Server as well as the guest VMs. Without this patch, the Oracle VM Server and the guest VMs reboot when you set the MTU size to 9000 on their Ethernet ports.

Note Ensure that you use OVS build 3.1.1.819 or later. Contact Oracle Support to download it.

1. Attach the OVS ISO image as virtual media through the KVM console.

Figure 73 OVS ISO Attached as Virtual Media to the KVM Console

2. Click Reset to restart the server and begin the installation.

Figure 74 Starting the Installation

3. The NetApp LUN is discovered through all the FCoE paths.

Figure 75 NetApp LUN Discovered

4. Press Enter to continue the Installation.

Figure 76 Ready for Installation

Figure 77 OVS Installation Status

Figure 78 shows the completion of the Oracle VM Server installation after all the required values have been provided during the installation, such as configuring the management Ethernet interface with the appropriate IP address. Verify that the displayed MAC address matches the static vNIC Ethernet interface created in the service profile for public/management access.

Figure 78 Completion of Installation

Use the above Oracle VM Server installation steps to complete the installation on all four Cisco UCS B200 M3 servers.

Oracle VM Server Network Architecture

Oracle VM Manager Installation

Oracle VM Manager is installed as a production-level installation; this is the preferred installation type, with options for selecting an Oracle SE or EE database as the location for the Oracle VM Manager repository, as well as setting individual passwords for each component. Ensure that the Oracle SE database is installed prior to installing Oracle VM Manager. Follow the steps below to successfully install Oracle VM Manager.

Please wait while WebLogic configures the applications... This can take up to 5 minutes.

Installation Summary

--------------------

Database configuration:

Database host name : ovmmanager

Database instance name (SID): orcl

Database listener port : 1521

Application Express port : None

Oracle VM Manager schema : ovs1

Weblogic Server configuration:

Administration username : weblogic

Oracle VM Manager configuration:

Username : admin

Core management port : 54321

UUID : 0004fb00000100000a5c59c7f7487ffe

Passwords:

There are no default passwords for any users. The passwords to use for Oracle VM Manager, Oracle Database 11g XE, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all the passwords are the same.

Oracle VM Manager UI:

http://ovmmanager:7001/ovm/console

https://ovmmanager:7002/ovm/console

Log in with the user 'admin', and the password you set during the installation.

Please note that you need to install tightvnc-java on this computer to access a virtual machine's console.

For more information about Oracle Virtualization, please visit:

http://www.oracle.com/virtualization/

Oracle VM Manager installation complete.

Please remove configuration file /tmp/ovm_configzFYrq_.

After the Oracle VM Manager installation, apply the Oracle VM Manager 3.1.1 Patch Update (Build 365) [ID 1530546.1]. This helps resolve timeout issues seen during creation operations in Oracle VM 3.1.1.

Oracle VM Server Configuration Using Oracle VM Manager

Some of the important steps to configure the Oracle VM environment are illustrated in the figure below.

Figure 80 Oracle VM Server Configuration and Guest VM Creation Steps

1. Discover the Oracle VM Servers. The servers are listed under Unassigned Servers on the Servers and VMs tab.

Figure 81 Oracle VM Servers Listed in the VM Manager

2. Configure the Oracle VM Server network.

Figure 82 Network Configuration

3. Configure all the Ethernet ports of each Oracle VM Server appropriately and set the MTU size as planned (9000 for jumbo frames); a quick verification sketch follows Figure 83.

Figure 83 Setting MTU Size
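As referenced in step 3, a quick way to confirm that jumbo frames are in effect end to end after the MTU is set in Oracle VM Manager. The interface name below is an illustrative assumption; the target address is one of the NFS storage addresses used later in this document.

# Confirm the configured MTU on the Oracle VM Server interface
ip link show eth1 | grep mtu
# Send a do-not-fragment 9000-byte frame (8972-byte ICMP payload plus headers) toward the storage controller
ping -M do -s 8972 -c 3 120.191.1.5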

4. Create the server pool with the cluster LUN as the repository.

Figure 84 Cluster Pool

Figure 85 Status of all the Servers

5. Create a storage repository for each of the data LUNs configured for the Oracle VM Servers.

Figure 86 Storage Repository

6. Create one guest VM on each Oracle VM Server as illustrated in Figure 87. In accordance with Oracle recommendations, PVM guest VMs are created. We created four guest VMs for the Oracle RAC nodes, one on each Oracle VM Server, to configure the four-node Oracle RAC cluster.

This section describes the high-level steps for the Oracle Database 11g Release 2 RAC installation. Prior to the Grid Infrastructure and database installation, verify that all the prerequisites are completed. You can install the Oracle Validated RPM, which ensures that most of the OS prerequisites are met before the Oracle Grid Infrastructure installation; a minimal installation sketch follows this paragraph. This document does not cover the step-by-step Oracle Grid Infrastructure installation but provides a partial summary of the relevant details. As a best practice recommended by Oracle, ready-to-go Oracle VM Templates for Oracle RAC can be downloaded from the Oracle Software Delivery Cloud for faster deployment.
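A minimal sketch, assuming the RAC guest VMs run Oracle Linux 5 with the Oracle public yum repository or ULN channel configured (on Oracle Linux 6 the equivalent package is oracle-rdbms-server-11gR2-preinstall):

# Install the Oracle Validated RPM to set kernel parameters, resource limits, required packages, and the oracle user/groups
yum install -y oracle-validated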

Use the following Oracle document for pre-installation tasks, such as setting up kernel parameters, installing RPM packages, creating users, and so on.

3. Edit the /etc/fstab file on each Oracle RAC node and add entries for all the database and Grid NFS volumes with the appropriate mount options; an illustrative entry is shown below. Note that the local mount point directories need to be created first.
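The following is an illustrative /etc/fstab entry only. The storage address, exported volume, and local mount point are taken from the oranfstab example later in this document, and the mount options reflect the rsize and wsize of 65536 noted below; confirm the exact options against the Oracle and NetApp guidance referenced in this section.

120.191.1.5:/vol/OVM_OLTP_Data_A /oltp_data_A nfs rw,bg,hard,nointr,rsize=65536,wsize=65536,tcp,actimeo=0,vers=3,timeo=600 0 0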

Note Oracle Direct NFS (dNFS) configuration steps will need to be performed at a later stage after database creation.

Here is sample output from the mount command on node 1:

[root@orarac1 ~]# mount

To determine the proper mount options for different file systems of Oracle 11g Release 2, see:

Note An rsize and wsize of 65536 are supported by NFS v3 and are used in this configuration to improve performance.

4. Configure the private and public NICs with the appropriate IP addresses.

5. Identify the virtual IP addresses and SCAN IPs and have them set up in DNS as per Oracle's recommendation; see Oracle Real Application Clusters - Overview of SCAN (PDF). Alternatively, if DNS services are not available, you can update the /etc/hosts file with all the details (private, public, SCAN, and virtual IPs), as sketched below.
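If DNS services are not available, a minimal /etc/hosts layout such as the following can be replicated on all RAC nodes. The host names (other than orarac1) and all addresses are illustrative assumptions, not values from this configuration, and a single SCAN entry in /etc/hosts is a functional fallback only; Oracle recommends three SCAN addresses in DNS.

# Public
10.10.10.11 orarac1
10.10.10.12 orarac2
# Virtual IPs
10.10.10.21 orarac1-vip
10.10.10.22 orarac2-vip
# Private interconnect
192.168.10.11 orarac1-priv
192.168.10.12 orarac2-priv
# SCAN
10.10.10.31 orarac-scan

Repeat the public, virtual IP, and private entries for nodes 3 and 4.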

6. Create the files for the OCR and voting devices under the local /ocrvote directories as follows.

Log in as the "grid" user on any one node and create the following files:

dd if=/dev/zero of=/ocrvote/ocr/ocr1 bs=1M count=1024

dd if=/dev/zero of=/ocrvote/ocr/ocr2 bs=1M count=1024

dd if=/dev/zero of=/ocrvote/ocr/ocr3 bs=1M count=1024

dd if=/dev/zero of=/ocrvote/vote/vote1 bs=1M count=1024

dd if=/dev/zero of=/ocrvote/vote/vote2 bs=1M count=1024

dd if=/dev/zero of=/ocrvote/vote/vote3 bs=1M count=1024

7. Configure passwordless ssh for the oracle and grid users; a minimal sketch follows the note below. For more information about ssh configuration, refer to the Oracle installation documentation.

Note You generally do not have to perform these steps if Oracle Validated RPM is installed.
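If passwordless ssh is configured manually rather than through the installer's SSH connectivity option, a minimal sketch for the grid user is shown below; repeat the same steps for the oracle user. The remote node name is an illustrative assumption.

# On each node, as the grid user, generate an RSA key pair with no passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to every other node (repeat for each node in the cluster)
ssh-copy-id grid@orarac2
# Verify that a command runs without a password prompt
ssh grid@orarac2 date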

9. Configure hugepages.

HugePages is a method of using a larger memory page size, which is useful when working with very large amounts of memory. For Oracle databases, using HugePages reduces the operating system's maintenance of page states and increases the Translation Lookaside Buffer (TLB) hit ratio.

Advantages of HugePages

•HugePages are not swappable, so there is no page-in/page-out mechanism overhead.

•HugePages use fewer pages to cover the physical address space, so the amount of bookkeeping (mapping from virtual to physical addresses) decreases; fewer TLB entries are required, and the TLB hit ratio improves.

•HugePages reduce page table overhead.

•Eliminated page table lookup overhead: Since the pages are not subject to replacement, page table lookups are not required.

•Faster overall memory performance: On virtual memory systems, each memory operation is actually two abstract memory operations. Since there are fewer pages to work with, the potential bottleneck on page table access is avoided.

For our configuration, we used HugePages for both the OLTP and DSS workloads. Refer to Oracle support note 361323.1 for HugePages configuration details; a minimal sketch is shown below.
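As a minimal sketch only (the definitive page count should be derived with the script in the Oracle support note above while the instances are running), the settings for a 48 GB SGA (as used for the OLTP workload in this solution) with 2 MB HugePages would look roughly as follows; the margin and memlock values are illustrative assumptions.

# /etc/sysctl.conf - reserve 2 MB HugePages to hold the 48 GB SGA (48 GB / 2 MB = 24576 pages, plus a small margin)
vm.nr_hugepages = 24704

# /etc/security/limits.conf - allow the oracle user to lock at least the SGA size in memory (values in KB; illustrative)
oracle soft memlock 52428800
oracle hard memlock 52428800

# Apply the kernel setting and verify HugePages allocation
sysctl -p
grep Huge /proc/meminfo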

Once HugePages are configured, you are ready to install Oracle Grid Infrastructure and Oracle Database 11g Release 2, including Oracle RAC.

Installing Oracle RAC 11g Release 2

It is not within the scope of this document to include the specifics of an Oracle RAC installation; refer to the Oracle installation documentation for specific installation instructions for your environment. For best practices recommended by Oracle, see:

6. Run the dbca tool as the oracle user to create the OLTP and DSS databases; an illustrative silent-mode invocation is shown below. Ensure that the datafiles, redo logs, and control files are placed in the directory paths created in the steps above. Additional details about the OLTP and DSS schema creation are discussed in the workload section.
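Purely as an illustration, an equivalent silent-mode dbca invocation for the OLTP database might look like the following. The database name, password placeholders, node names (other than orarac1), character set, and memory value are assumptions for illustration only, not values captured from this configuration; validate the syntax against the dbca documentation for your release.

dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbName oltp -sid oltp \
-sysPassword <sys_password> -systemPassword <system_password> \
-nodelist orarac1,orarac2,orarac3,orarac4 \
-storageType FS -datafileDestination /oltp_data_A \
-characterSet AL32UTF8 -totalMemory 49152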

For improved NFS performance, Oracle recommends using the Direct NFS Client shipped with Oracle 11g. The Direct NFS Client looks for NFS details in the following locations, in order:

–$ORACLE_HOME/dbs/oranfstab

–/etc/oranfstab

–/etc/mtab

In a RAC configuration with Direct NFS, oranfstab must be configured on all the nodes. Here is the oranfstab configuration from RAC node 1.

[oracle@orarac1 dbs]$ vi oranfstab

server: 120.191.1.5

path: 120.191.1.5

path: 121.191.1.5

server: 121.191.1.6

path: 121.191.1.6

path: 120.191.1.6

export:/vol/FlexPod_OVM_OCR mount:/ocrvote

export:/vol/OVM_OLTP_Data_A mount:/oltp_data_A

export:/vol/OVM_OLTP_LOG_A mount:/oltp_log_A

export:/vol/OVM_OLTP_Data_B mount:/oltp_data_B

export:/vol/OVM_OLTP_LOG_B mount:/oltp_log_B

export:/vol/OVM_DSS_Data_A mount:/dss_data_A

export:/vol/OVM_DSS_LOG_A mount:/dss_log_A

export:/vol/OVM_DSS_Data_B mount:/dss_data_B

export:/vol/OVM_DSS_LOG_B mount:/dss_log_B

Since the NFS mount point details are defined in /etc/fstab, and therefore also in /etc/mtab, no extra connection details need to be configured. When setting up your NFS mounts, refer to the Oracle documentation for guidance on what types of data can and cannot be accessed through the Direct NFS Client. For the client to work, the libodm11.so library must be switched for the libnfsodm11.so library, as shown below.
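A common way to perform this switch on each node, as the oracle user and with the database instances shut down, is shown below; the paths are the standard ones for Oracle Database 11g Release 2, so verify them against the Oracle documentation for your release.

cd $ORACLE_HOME/lib
# Preserve the default ODM stub library and point libodm11.so at the Direct NFS ODM library
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so

Alternatively, 11g Release 2 provides a make target that performs the same switch:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on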

Workloads and Database Configuration

We used Swingbench for workload testing. Swingbench is a simple-to-use, free, Java-based tool for generating database workloads and performing stress testing using different benchmarks in Oracle database environments. Swingbench provides four separate benchmarks, namely Order Entry, Sales History, Calling Circle, and Stress Test. For the tests described in this paper, the Swingbench Order Entry benchmark was used for OLTP workload testing and the Sales History benchmark was used for DSS workload testing. The Order Entry benchmark is based on the SOE schema and is similar to TPC-C in the types of transactions. The workload uses a fairly balanced read/write ratio of around 60/40 and can be designed to run continuously, testing the performance of a typical Order Entry workload against a small set of tables and producing contention for database resources. The Sales History benchmark is based on the SH schema and is similar to TPC-H. The workload is query (read) centric and is designed to test the performance of queries against large tables.

As discussed in the previous section, two independent databases were created earlier for the Swingbench OLTP and DSS workloads. The next step is to pre-create the Order Entry and Sales History schemas for the OLTP and DSS workloads. The Swingbench Order Entry (OLTP) workload uses the SOE tablespace and the Sales History workload uses the SH tablespace. We pre-created these schemas in order to associate multiple datafiles with the tablespaces and to distribute them evenly across the two storage controllers. For our setup, we created 90 datafiles for the SOE tablespace, with odd-numbered files on storage controller A and even-numbered files on storage controller B. In the same way, we used 50 datafiles for the Sales History workload and distributed them evenly across both storage controllers. Once the workload schemas were created, we populated both databases with the Swingbench data generator as shown below.

Performance Data from the Tests

Once the databases were created, we started with OLTP database calibration for the number of users and the database configuration. For the Order Entry workload, we used a 48 GB SGA and ensured that HugePages were in use. Each OLTP scalability test was run for at least 12 hours, and we ensured that the results were consistent for the duration of the full run.

OLTP Workload

For OLTP workloads, the common measurement metrics are transactions per minute (TPM), user scalability with IOPS, and CPU utilization. Here are the scalability charts for the Order Entry workload.

Figure 103 OLTP Transactions

For the OLTP TPM tests, we ran tests with 50, 100, 200, and 400 users across the 4-node cluster. During the tests, we validated that the Oracle SCAN listener evenly load-balanced the users across all 4 nodes of the cluster. We also observed appropriate scalability in TPM as the number of users across the cluster increased. The next graph shows the increase in IO and scalability as the number of users increased.

Figure 104 OLTP IOPs and Scalability

As indicated in the graph, we observed about 26,850 IO/sec across the 4-node cluster. The Oracle AWR report below also summarizes physical reads/sec and physical writes/sec per instance. During the OLTP tests, we observed some resource utilization variation due to the random nature of the workload, as depicted by the 200-user IOPS. We ran each test multiple times to ensure that the numbers presented in this solution are consistent.

The table below shows the interconnect traffic for the 4-node Oracle RAC cluster during the 400-user run. The average interconnect traffic was 215 MB/sec for the duration of the run.

The chart below indicates cluster CPU utilization as the number of users scales from 12 users/node to 100 users/node.

Figure 105 CPU Utilization

DSS Workload

DSS workloads are generally sequential in nature, read intensive, and exercise large IO sizes. DSS workloads run a small number of users that typically issue extremely complex queries that run for hours. For our tests, we ran the Swingbench Sales History workload with 12 users. The charts below show the DSS workload results.

Figure 106 DSS Workload - I/O Bandwidth

For the 24-hour DSS workload test, we observed total IO bandwidth ranging between 1.5 GB/sec and 1.7 GB/sec. As indicated in the charts, the IO was evenly distributed across both NetApp FAS storage controllers, and we did not observe any significant dips in performance or IO bandwidth over a sustained period of time.

Mixed Workload

The next test runs both the OLTP and DSS workloads simultaneously. This test confirms that the configuration can sustain the small random queries presented by the OLTP workload along with the large, sequential transactions submitted by the DSS workload. We ran the tests for 24 hours. Here are the results.

Figure 107 Mixed Workload - I/O Bandwidth

For the mixed workloads running for 24 hours, we observed approximately 1.4 GB/sec of IO bandwidth. The OLTP transactions averaged between 220K and 230K transactions per minute.

Figure 108 Mixed Workload - TPM

Destructive and Hardware Failover Tests

The goal of these tests is to ensure that the reference architecture withstands commonly occurring failures, whether due to unexpected crashes, hardware failures, or human errors. We simulated many hardware, software (process kill), and OS-specific failures that represent real-world scenarios under stress conditions. In the destructive testing, we also demonstrated the unique failover capabilities of the Cisco VIC 1240 adapter. Some of those test cases are highlighted below.

Figure 109 FlexPod Test Details

Conclusion

FlexPod is built on leading computing, networking, storage, and infrastructure software components. With a FlexPod-based solution, customers can leverage a secure, integrated, and optimized stack that includes compute, network, and storage resources sized, configured, and deployed as a fully tested unit running industry-standard applications such as Oracle Database 11g RAC over dNFS (Direct NFS). The following factors make the combination of Cisco UCS with NetApp storage so powerful for Oracle environments:

•The Cisco UCS stateless computing architecture, provided by the service profile capability of UCS, allows fast, non-disruptive workload changes to be executed simply and seamlessly across the integrated UCS infrastructure and Cisco x86 servers.

•All of this is made possible by Cisco's Unified Fabric with its focus on secure IP networks as the standard interconnect for the server and data management solutions.

The availability of Oracle VM overcomes this obstacle. By providing a software-based virtualization infrastructure (Oracle VM) together with the market-leading high-availability solution Oracle Real Application Clusters (RAC), Oracle now offers a highly available, grid-ready virtualization solution for the data center, combining all the benefits of a fully virtualized environment. The combination of Oracle VM and Oracle RAC enables better server consolidation (RAC databases with underutilized or peaky CPU utilization can often benefit from consolidation with other workloads using server virtualization), sub-capacity licensing, and rapid provisioning. The following are the major advantages of using Oracle RAC on Oracle VM:

•Server Consolidation

•Sub-Capacity Licensing

•Create Virtual Cluster

•Rapid Provisioning

As a result, customers can achieve dramatic cost savings by leveraging Ethernet-based products and can deploy any application on a scalable, shared IT infrastructure built on Cisco and NetApp technologies. Finally, FlexPod, jointly developed by NetApp and Cisco, is a flexible infrastructure platform composed of pre-sized storage, networking, and server components. It is designed to ease IT transformation and operational challenges with maximum efficiency and minimal risk.