Real Application Clusters (RAC) in conjunction with Oracle 12c databases have some special hardware requirements that single-instance, non-RAC databases don't have. The hardware areas to focus on include network interfaces, central storage, and nodes.

Nodes and Real Application Clusters in Oracle 12c

A node is a server that runs an Oracle instance. A true RAC configuration has at least two nodes.

The number of nodes in your RAC configuration depends on hardware and software limitations. According to Oracle's documentation and support websites, the Oracle software itself can support upwards of 100 nodes, but other parts of your stack (the operating system, the clusterware, or the storage) may limit you to fewer.

If you’re getting into lots of nodes (more than eight), check with all your hardware and software vendors to see what your limit is.

Add nodes as you scale your cluster. You can add and remove them with minimal or no service interruption to your application. This ensures high availability. Typically, each node will have its own installation of the Oracle software.
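As a quick illustration, two utilities that ship with Oracle Grid Infrastructure show the current shape of the cluster. This is a sketch; the database name orcl is hypothetical, so substitute your own:

```shell
# List the nodes in the cluster, with node numbers and status.
olsnodes -n -s

# Show which instances of the database are running, and on which nodes.
# Replace "orcl" with your database's unique name.
srvctl status database -d orcl
```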

You can have one central, shared software directory for each node to use. However, a configuration like this limits your high-availability capabilities.

For example, one advantage to installing the Oracle software on each node is the ability to patch the nodes individually by taking them down one at a time. This rolling patch avoids a complete application outage. You can't apply all patches this way, so check the patch documentation to be sure. On the other hand, one central installation requires you to shut down the entire cluster to apply a patch.
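For patches that do support rolling application, the per-node cycle looks roughly like this sketch (the database and instance names are hypothetical, and the authoritative steps always come from the patch README):

```shell
# Stop only this node's instance; the remaining instances keep serving users.
srvctl stop instance -d orcl -i orcl1

# Apply the patch to this node's Oracle home with OPatch.
$ORACLE_HOME/OPatch/opatch apply

# Restart the patched instance, then repeat the cycle on the next node.
srvctl start instance -d orcl -i orcl1
```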

Each node should have its own Oracle software code tree if you want high availability.

Central storage and Real Application Clusters in Oracle 12c

The following are some RAC configuration central storage requirements:

All your database files, control files, redo logs, archive logs, and spfile should be on shared storage. This way, each of the nodes has access to all the required files for data access, recovery, and configuration.
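You can confirm from any node that these files actually live on shared storage (for example, in an ASM disk group, whose paths start with a + sign). A minimal check from SQL*Plus, run as a privileged user, might look like this:

```shell
# Every path returned should point at shared storage,
# not at one node's local file system.
sqlplus -s / as sysdba <<'EOF'
SELECT name   FROM v$datafile;
SELECT member FROM v$logfile;
SELECT name   FROM v$controlfile;
SHOW PARAMETER spfile
EOF
```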

Attach the central storage to each node with some form of high-speed media. Many high-speed connection types (Fibre Channel or iSCSI, for example) are available from different storage vendors.

Make sure the storage and attachments are certified for Oracle RAC before making your decisions. (For example, NFS-mounting drives to each server isn't typically a certified configuration.) With enough research and testing, you can make almost any shared storage configuration work, but staying with certified configurations keeps you supported.

When choosing a storage vendor, consider your applications’ performance needs. Your disk subsystem should be able to scale as easily as your RAC nodes. As you add nodes, you may need to add physical disks to support the increased demand on the storage subsystem. You should be able to do this with little or no downtime.

The disk on the shared storage subsystem must be configured for shared access. Depending on your platform, you may have several choices for this, such as Oracle Automatic Storage Management (ASM), a certified cluster file system, or NFS on a certified network-attached storage device:

You may have to combine options. For example, you might use Oracle ASM for your database files, but you might want something other than ASM for RMAN backup files.
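If ASM holds your database files, the asmcmd utility (run as the Grid Infrastructure owner) gives a quick view of the shared disk groups every node sees:

```shell
# List the ASM disk groups, their redundancy level, and usable free space.
asmcmd lsdg
```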

Cluster interconnect and Real Application Clusters in Oracle 12c

The cluster interconnect is a dedicated piece of hardware that manages all the inter-instance communication. A lot of communication across instances occurs in a RAC configuration: maintaining consistency, sharing lock information, and transferring data blocks.

Cache Fusion, the mechanism that ships data blocks between instance caches across the interconnect, is a critical component for getting RAC to perform well. The interconnect needs to run at gigabit speed or better.

When cluster communication performance issues arise, the first thing questioned is whether the interconnect can provide the required bandwidth. A proper interconnect is a necessary expense when setting up a RAC environment. Would you spend thousands of dollars on a race car and then put street tires on it?
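When you do suspect the interconnect, a first step is to confirm that each instance is actually using the private network rather than the public one. The GV$CLUSTER_INTERCONNECTS view shows this; the sketch below assumes a SYSDBA connection:

```shell
sqlplus -s / as sysdba <<'EOF'
-- One row per instance: the interface name and IP address each instance
-- uses for interconnect traffic. IS_PUBLIC should be NO.
SELECT inst_id, name, ip_address, is_public
FROM   gv$cluster_interconnects;
EOF
```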

Network interfaces and Real Application Clusters in Oracle 12c

Make sure each server has the right network interfaces for proper communication. At the very least, a RAC configuration should have two network interface cards:

One for the public network, for user and application connections to the machine

One for the private network, for cluster interconnect traffic across the nodes

The public network carries all client connections to the cluster, from your applications and end users (including you and the sys admin).
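To see which interface Oracle Clusterware has assigned to each role, use the oifcfg utility from the Grid Infrastructure home. Interface names and subnets vary by platform; the output shown in the comment is only an illustration:

```shell
# Typical output maps one NIC to the public role and another to the
# cluster interconnect, for example:
#   eth0  192.168.1.0  global  public
#   eth1  10.0.0.0     global  cluster_interconnect
oifcfg getif
```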