Introduction

One of the biggest obstacles preventing people from setting up test RAC environments is the requirement for shared storage. In a production environment, shared storage is often provided by a SAN or high-end NAS device, but both of these options are very expensive when all you want to do is get some experience installing and using RAC. A cheaper alternative is to use a FireWire disk enclosure to allow two machines to access the same disk(s), but that still costs money and requires two servers. A third option is to use virtualization to fake the shared storage.

Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks, overcoming the obstacle of expensive shared storage.

Before you launch into this installation, here are a few things to consider.

The finished system includes the host operating system, two guest operating systems, two sets of Oracle Grid Infrastructure (Clusterware + ASM) and two Database instances all on a single server. As you can imagine, this requires a significant amount of disk space, CPU and memory.

Following on from the last point, the VMs will each need at least 2G of RAM (3G for 11.2.0.2), preferably 4G if you don’t want the VMs to swap like crazy. As you can see, 11gR2 RAC requires much more memory than 11gR1 RAC. Don’t assume you will be able to run this on a small PC or laptop. You won’t.

This procedure provides a bare bones installation to get the RAC working. There is no redundancy in the Grid Infrastructure installation or the ASM installation. To add this, simply create twice the number of shared disks and select the “Normal” redundancy option when it is offered. Of course, this will take more disk space.

During the virtual disk creation, I always choose not to preallocate the disk space. This makes virtual disk access slower during the installation, but saves on wasted disk space. The shared disks must have their space preallocated.

This is not, and should not be considered, a production-ready system. It’s simply to allow you to get used to installing and using RAC.

The Single Client Access Name (SCAN) should really be defined in the DNS or GNS and round-robin between three addresses, which are on the same subnet as the public and virtual IPs. In this article I’ve defined it as a single IP address in the “/etc/hosts” file, which is wrong and will cause the cluster verification to fail, but it allows me to complete the install without the presence of a DNS.

The virtual machines can be limited to 2G of swap, which causes a prerequisite check failure, but doesn’t prevent the installation from working. If you want to avoid this, define 3G or more of swap.
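If you have already built a VM with only 2G of swap and want to clear the warning, one option is to add a swap file inside the guest. This is a minimal sketch, assuming the path “/extraswap” and at least 1G of free space on the root filesystem; adjust the size and path to suit.

# Create, format and enable a 1G swap file (path and size are assumptions).
dd if=/dev/zero of=/extraswap bs=1M count=1024
chmod 600 /extraswap
mkswap /extraswap
swapon /extraswap

# Make the change permanent across reboots.
echo "/extraswap swap swap defaults 0 0" >> /etc/fstab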

The “rac1” VM will appear on the left-hand pane. Scroll down the “Details” tab on the right and click on the “Network” link.

Make sure “Adapter 1” is enabled, set to “Bridged Adapter” and “eth0”, then click on the “Adapter 2” tab.

Make sure “Adapter 2” is enabled, set to “Bridged Adapter” and “eth0”, then click on the “OK” button.

The virtual machine is now configured so we can start the guest operating system installation.

Guest Operating System Installation

Place the Oracle Linux 5 DVD in the DVD drive and start the virtual machine by clicking the “Start” button on the toolbar. The resulting console window will contain the Oracle Linux boot screen.

Continue through the Oracle Linux 5 installation as you would for a normal server. A general pictorial guide to the installation can be found here. More specifically, it should be a server installation with a minimum of 2G swap (3-4G if you want to avoid warnings), firewall and SELinux disabled and the following package groups installed:

GNOME Desktop Environment
Editors
Graphical Internet
Text-based Internet
Development Libraries
Development Tools
Server Configuration Tools
Administration Tools
Base
System Tools
X Window System

To be consistent with the rest of the article, the following information should be set during the installation:

hostname: rac1.localdomain
IP Address eth0: 192.168.2.101 (public address)
Default Gateway eth0: 192.168.2.1 (public address)
IP Address eth1: 192.168.0.101 (private address)
Default Gateway eth1: none

You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.

Once the basic installation is complete, install the following packages whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages.
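The full package list is not reproduced in this extract. As an illustration only, assuming the server can reach a yum repository for Oracle Linux 5, the “oracle-validated” package pulls in most of the required RPMs and applies the recommended kernel settings in a single step; it is a convenience, not a substitute for checking the documented list.

# Install the Oracle-supplied prerequisites package (assumes a configured yum repository).
yum install -y oracle-validated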

Note. The SCAN address should not really be defined in the hosts file. Instead it should be defined in the DNS to round-robin between three addresses on the same subnet as the public IPs. For this installation, we will compromise and use the hosts file.

If you are using DNS, then only the first line should be present in the “/etc/hosts” file. The other entries are defined in the DNS, as described here.
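The following is a sketch of what the compromise “/etc/hosts” file might look like, using the public and private addresses defined earlier. The virtual IP and SCAN addresses shown here (192.168.2.111, 192.168.2.112 and 192.168.2.201) and the “-vip”/“rac-scan” names are assumptions; substitute whatever values you have chosen on the public subnet.

127.0.0.1       localhost.localdomain   localhost

# Public
192.168.2.101   rac1.localdomain        rac1
192.168.2.102   rac2.localdomain        rac2

# Private
192.168.0.101   rac1-priv.localdomain   rac1-priv
192.168.0.102   rac2-priv.localdomain   rac2-priv

# Virtual (assumed addresses)
192.168.2.111   rac1-vip.localdomain    rac1-vip
192.168.2.112   rac2-vip.localdomain    rac2-vip

# SCAN (a single address in the hosts file, as discussed above)
192.168.2.201   rac-scan.localdomain    rac-scan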

Click on the “Devices > Install Guest Additions” menu option at the top of the VM screen, then run the following commands.

cd /media/VBOXADDITIONS_3.2.8_64453
sh ./VBoxLinuxAdditions-amd64.run

The VM will need to be restarted for the additions to be used properly. The next section requires a shutdown so no additional restart is needed at this time.

Create Shared Disks

Shut down the “rac1” virtual machine using the following command.

# shutdown -h now

Create 5 sharable virtual disks and associate them as virtual media using the “VBoxManage” utility on the host server. The exact commands differ depending on whether you are using a version of VirtualBox prior to 4.0.0, or 4.0.0 and later.
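The original command listings are not reproduced in this extract. As a rough sketch for VirtualBox 4.0.0 and later, assuming the VM uses a SATA controller named “SATA Controller”, each shared disk is 5G, and the “.vdi” files are created in the current directory, the first disk could be created, attached to “rac1” and marked shareable as follows. Repeat for the remaining disks (asm2.vdi to asm5.vdi) on SATA ports 2 to 5, and attach the same files to the “rac2” VM once it exists.

# Create a fixed-size 5G disk image. Shared disks must be preallocated, hence the fixed variant.
VBoxManage createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed

# Attach the image to the rac1 VM and mark the medium as shareable.
VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage modifyhd asm1.vdi --type shareable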

Start the “rac1” virtual machine by clicking the “Start” button on the toolbar. When the server has started, log in as the root user so you can configure the shared disks. The current disks can be seen by issuing the following commands.

# cd /dev
# ls sd*
sda sda1 sda2 sdb sdc sdd sde sdf
#

Use the “fdisk” command to partition the disks sdb to sdf, creating a single primary partition spanning the whole of each disk.
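A rough non-interactive equivalent of the interactive fdisk session, assuming the device names match those listed above, is shown below. Once the disks are partitioned, ASMLib is configured using “oracleasm configure -i”, as shown in the output that follows (this assumes the ASMLib packages matching your kernel are already installed).

# Create one primary partition covering each whole disk (sdb to sdf).
# The piped input mirrors the interactive fdisk answers: n, p, 1, <Enter>, <Enter>, w.
for disk in sdb sdc sdd sde sdf; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/$disk
done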

# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
#
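After the configuration above, the partitions would typically be stamped as ASM disks from this node and then picked up on the second node with a rescan. This is a brief sketch; the DISK1 to DISK5 labels are arbitrary, and the partitions are assumed to be /dev/sdb1 to /dev/sdf1 as created earlier.

# Load the ASMLib driver, then stamp each partition (on the first node only).
oracleasm init
oracleasm createdisk DISK1 /dev/sdb1
oracleasm createdisk DISK2 /dev/sdc1
oracleasm createdisk DISK3 /dev/sdd1
oracleasm createdisk DISK4 /dev/sde1
oracleasm createdisk DISK5 /dev/sdf1

# On the second node (once it is available), rescan for the stamped disks and list them.
oracleasm scandisks
oracleasm listdisks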

Start the “rac2” virtual machine by clicking the “Start” button on the toolbar. Ignore any network errors during the startup.

Log in to the “rac2” virtual machine as the root user and start the “Network Configuration” tool (System > Administration > Network).

Remove the devices with the “%.bak” nicknames. To do this, highlight a device, deactivate, then delete it. This will leave just the regular “eth0” and “eth1” devices. Highlight the “eth0” interface and click the “Edit” button on the toolbar and alter the IP address to “192.168.2.102” in the resulting screen.

Click on the “Hardware Device” tab and click the “Probe” button. Then accept the changes by clicking the “OK” button.

Repeat the process for the “eth1” interface, this time setting the IP Address to “192.168.0.102”, and making sure the default gateway is not set for the “eth1” interface.

Click on the “DNS” tab and change the host name to “rac2.localdomain”, then click on the “Devices” tab.

Once you are finished, save the changes (File > Save) and activate the network interfaces by highlighting them and clicking the “Activate” button. Once activated, the screen should look like the following image.

Edit the “/home/oracle/.bash_profile” file on the “rac2” node to correct the ORACLE_SID and ORACLE_HOSTNAME values.

Also, amend the ORACLE_SID setting in the “/home/oracle/db_env” and “/home/oracle/grid_env” files.
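As a sketch of the kind of change required, assuming the database is named “RAC” (as it is later in this article), so the instance on the second node is “RAC2” and its ASM instance is “+ASM2”, the amended lines end up looking something like this.

# In /home/oracle/.bash_profile on rac2.
export ORACLE_SID=RAC2
export ORACLE_HOSTNAME=rac2.localdomain

# The ORACLE_SID line in /home/oracle/db_env on rac2.
export ORACLE_SID=RAC2

# The ORACLE_SID line in /home/oracle/grid_env on rac2 (assumed ASM instance naming).
export ORACLE_SID=+ASM2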

Start the “rac1” virtual machine and restart the “rac2” virtual machine. When both nodes have started, check they can both ping all the public and private IP addresses using the following commands.

ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv

At this point the virtual IP addresses defined in the “/etc/hosts” file will not work, so don’t bother testing them.

Prior to 11gR2 we would probably use the “runcluvfy.sh” utility in the clusterware root directory to check that the prerequisites have been met. If you intend to configure SSH connectivity using the installer, this check should be omitted, as it will always fail. If you want to set up SSH connectivity manually, then once it is done you can run “runcluvfy.sh” with the following command.
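The command itself is not shown in this extract. A typical invocation, run as the oracle user from the directory into which the grid infrastructure media was unzipped (the “/home/oracle/grid” path is an assumption), looks like this.

cd /home/oracle/grid
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose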

Install the Grid Infrastructure

Make sure the “rac1” and “rac2” virtual machines are started, then log in to “rac1” as the oracle user and start the Oracle installer.

./runInstaller

Select the “Install and Configure Grid Infrastructure for a Cluster” option, then click the “Next” button.

Select the “Typical Installation” option, then click the “Next” button.

On the “Specify Cluster Configuration” screen, click the “Add” button.

Enter the details of the second node in the cluster, then click the “OK” button.

Click the “SSH Connectivity…” button and enter the password for the “oracle” user. Click the “Setup” button to configure SSH connectivity, and the “Test” button to test it once it is complete.

Click the “Identify network interfaces…” button and check the public and private networks are specified correctly. Once you are happy with them, click the “OK” button and the “Next” button on the previous screen.

Enter “/u01/app/11.2.0/grid” as the software location and “Automatic Storage Manager” as the cluster registry storage type. Enter the ASM password and click the “Next” button.

Set the redundancy to “External”, select all 5 disks and click the “Next” button.

Accept the default inventory directory by clicking the “Next” button.

Wait while the prerequisite checks complete. If you have any issues, either fix them or check the “Ignore All” checkbox and click the “Next” button.

If you are happy with the summary information, click the “Finish” button.

Wait while the setup takes place.

When prompted, run the configuration scripts on each node.

The output from the “orainstRoot.sh” file should look something like that listed below.

Provided the cluster verification failure (caused by defining the SCAN in the “/etc/hosts” file rather than the DNS, as discussed earlier) is the only error, it is safe to ignore it and continue by clicking the “Next” button.

Click the “Close” button to exit the installer.

The grid infrastructure installation is now complete.
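Before moving on, it can be reassuring to confirm the clusterware is up on both nodes. A quick sanity check, assuming the “/u01/app/11.2.0/grid” software location used above:

# Check the cluster stack on all nodes.
/u01/app/11.2.0/grid/bin/crsctl check cluster -all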

Install the Database

Make sure the “rac1” and “rac2” virtual machines are started, then log in to “rac1” as the oracle user and start the Oracle installer.

./runInstaller

Uncheck the security updates checkbox and click the “Next” button.

Accept the “Create and configure a database” option by clicking the “Next” button.

Accept the “Server Class” option by clicking the “Next” button.

Make sure both nodes are selected, then click the “Next” button.

Accept the “Typical install” option by clicking the “Next” button.

Enter “/u01/app/oracle/product/11.2.0/db_1” for the software location. The storage type should be set to “Automatic Storage Manager”. Enter the appropriate passwords and database name, in this case “RAC.localdomain”.

Wait for the prerequisite check to complete. If there are any problems either fix them, or check the “Ignore All” checkbox and click the “Next” button.

If you are happy with the summary information, click the “Finish” button.

Wait while the installation takes place.

Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.

Once the Database Configuration Assistant (DBCA) has finished, click the “OK” button.

When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the “OK” button.

Click the “Close” button to exit the installer.

The RAC database creation is now complete.

Check the Status of the RAC

There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
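For example, assuming the database was created with the name “RAC” as above, the following commands show its configuration and the status of its instances.

# Show the configuration of the RAC database.
srvctl config database -d RAC

# Show which instances are running, and on which nodes.
srvctl status database -d RAC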