Oracle Blog

Musings on Fusion Middleware and SOA

Off the RAC

Configuring a RAC Cluster for SOA

To get the highest availability from a SOA cluster, the back-end database needs to be highly available too. So in this post I will walk through the minimum requirements to get a RAC cluster up and running, ready for use by SOA. Note that this configuration is not suitable for production, but it is useful for developing and testing in an environment similar to production.

Target

I decided to go for an 11gR2 RAC cluster running on Oracle Enterprise Linux 5.5. I used two Linux servers as the database machines and Openfiler as the NFS server providing shared storage. I created all of these as virtual machines under VirtualBox.

NFS Preparation

I brought up Openfiler and, after the initial configuration to use the internal RAC LAN, created a single volume group (rac) and then the following volumes with associated shares.

Volume    Size    Share Location           Description
db        10GB    /mnt/rac/db/share        RAC Database Software
grid      10GB    /mnt/rac/grid/share      RAC Grid Software
cluster   1.5GB   /mnt/rac/cluster/share   RAC Cluster Files
data      10GB    /mnt/rac/data/share      RAC Data Files

The shares were configured with public guest access and RW access permissions. UID/GID mapping was set to no_root_squash, I/O mode to sync, write delay to no_wdelay, and request origin port to insecure (>1024).
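For reference, these Openfiler share settings correspond to NFS export options that, on a plain Linux NFS server, would be written by hand roughly like this (the 10.0.0.0/24 subnet is a placeholder for the internal RAC LAN, not a value from the original setup):

```shell
# /etc/exports -- hand-written equivalent of the Openfiler share options
# (rw, sync, no_wdelay, insecure, no_root_squash); subnet is assumed
/mnt/rac/db/share       10.0.0.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/mnt/rac/grid/share     10.0.0.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/mnt/rac/cluster/share  10.0.0.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/mnt/rac/data/share     10.0.0.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
```

The no_root_squash and insecure options matter here: the Oracle installers run some steps as root, and NFS clients on Linux may use source ports above 1024.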

OS Preparation

The first step was to install the OS and configure it to use yum. After updating the packages to the latest revisions I could then apply the packages needed by RAC. The easiest way to do this was to install the oracle-validated package (yum install oracle-validated), as it automatically installs all the packages required by RAC and sets the necessary system parameters.
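As a sketch, the package preparation on each RAC node comes down to two commands (run as root; oracle-validated is specific to Oracle Enterprise Linux):

```shell
# Bring all installed packages up to the latest revisions
yum -y update

# Pull in every RAC prerequisite package and have the kernel
# parameters, shell limits and default oracle user set up for us
yum -y install oracle-validated
```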

I also modified the /etc/sysconfig/ntpd file to add a -x flag at the start of the options, allowing the clock to slew rather than step.
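The resulting OPTIONS line looks like the following (the -u and -p values are the Oracle Enterprise Linux 5 defaults, assumed unchanged here); ntpd must be restarted for it to take effect:

```shell
# /etc/sysconfig/ntpd -- the -x flag makes ntpd slew the clock rather
# than step it, which Oracle Clusterware requires
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```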

Users

I then created the following user and the appropriate groups.

User     Default Group   Groups
oracle   oinstall        oinstall, oracle, dba
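Creating the groups and user from the table comes down to something like the following (run as root; the post does not specify UIDs/GIDs, so the system defaults are assumed):

```shell
# Create the groups, then the oracle user with oinstall as its
# default group and oracle/dba as supplementary groups
groupadd oinstall
groupadd oracle
groupadd dba
useradd -g oinstall -G oracle,dba oracle
passwd oracle
```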

I also added the following to the .bash_profile, changing the ORACLE_HOSTNAME and ORACLE_SID as appropriate.
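The profile additions themselves were not reproduced here; a typical set for this layout would be as follows, with the grid and database homes matching the install locations used later in this post and everything else an assumption:

```shell
# Additions to ~oracle/.bash_profile -- change ORACLE_HOSTNAME and
# ORACLE_SID per node (e.g. rac1/rac2 and the per-node instance name)
export ORACLE_HOSTNAME=rac1
export ORACLE_BASE=/u01/app
export GRID_HOME=/u01/app/11gR2/grid
export ORACLE_HOME=$ORACLE_BASE/product/11gR2/db
export ORACLE_SID=rac1
export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH
```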

After mounting the NFS directories it was necessary to rerun the chown and chmod commands executed earlier to set permissions correctly on the NFS folders.
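The exact chown and chmod commands are not shown in the post; typical ones for this layout would be (run as root, after the NFS shares are mounted, with nas1 as the Openfiler host):

```shell
# Mount a share (repeat for the other shares/mount points), then
# re-apply ownership -- the earlier chown/chmod only affected the
# empty local mount points, not the NFS file systems now on top
mount nas1:/mnt/rac/cluster/share /u01/cluster/storage
chown -R oracle:oinstall /u01
chmod -R 775 /u01
```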

Background

My final OS preparation step was to set a different desktop background on each machine so that I could tell which machine I was on at a glance. This helps avoid the unfortunate incident of running a command on the wrong machine.

Snapshot

Having prepared everything, I shut down the three virtual machines (nas1, rac1 and rac2) and took a snapshot of each virtual image, labeling them pre-grid. That way, if there were problems later, I could revert to the configuration from just before any software was installed. When starting the virtual machines I always started Openfiler first so that the RAC servers would be able to find it.
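From the host, the same snapshots can be scripted with VBoxManage (assuming the VM names match the guest hostnames, and that the VMs are already shut down):

```shell
# Take a pre-grid snapshot of each powered-off VM
for vm in nas1 rac1 rac2; do
  VBoxManage snapshot "$vm" take pre-grid
done
```

Reverting is the mirror image: VBoxManage snapshot "$vm" restore pre-grid.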

Grid Install

With the OS prepared I logged in as the oracle user and kicked off the grid install, choosing the advanced install option. I identified my nodes as rac1 and rac2, with the internal RAC network as the private interface and the external RAC network as the public interface. I used the shared file system storage option and claimed external redundancy, setting the OCR file location to /u01/cluster/storage/ocr and the voting disk location to /u01/cluster/storage/vdsk. I installed the software onto the shared disk at /u01/app/11gR2/grid; the installer automatically installs the software on both RAC nodes.

During the verification step you may find that you are still missing a couple of packages and that some settings are not correct. The packages can be added using yum without aborting the install, and the installer generates scripts, run as root, to adjust any parameters that need modifying.

Snapshot

After installing the grid software I again shut down all the servers and took a snapshot of each of them, labeling it grid.

DB Install

With the cluster services installed and running I logged in as the oracle user and kicked off the database install, choosing the software-only option and selecting a RAC install on the rac1 and rac2 nodes. I set the software location to /u01/app/product/11gR2/db.

Snapshot

After installing the database software I again shut down all the servers and took a snapshot of each of them, labeling it db.

Database Creation

With the database software installed I ran the $ORACLE_HOME/bin/dbca utility to create a RAC database. I chose the advanced install option, selected the rac1 and rac2 nodes, and chose the AL32UTF8 character set. On my machine dbca took about 10 hours to complete, but it did finish successfully.
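Once dbca finishes, the new database can be checked from either node with srvctl (the database name rac is taken from the shutdown command used below):

```shell
# Confirm both instances are registered with the cluster and running
srvctl status database -d rac

# Show the stored configuration for the database resource
srvctl config database -d rac
```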

Snapshot

After creating the database I shut it down using srvctl stop database -d rac, then shut down all the servers and took a snapshot of each of them, labeling it rac. At this point I deleted some of the earlier snapshots to reduce disk usage and potentially improve virtual machine performance a little.

Next Steps

With a RAC database available I am now ready to install and configure a SOA cluster, which I will cover in the next few posts.

About

Antony works with customers across the US and Canada in implementing SOA and other Fusion Middleware solutions.
Antony is the co-author of the SOA Suite 11g Developers Cookbook, the SOA Suite 11g Developers Guide and the SOA Suite Developers Guide.