Wednesday, April 15, 2015

How to install Oracle RAC 12c using Oracle Linux 6.4 on VMware ESXi 5.x

A new release of Oracle means it’s time for a new walkthrough. In
this fourth “RAC on ESX” walkthrough, I’ll go over the process of
building an Oracle 12c RAC cluster on VMware ESXi 5 from start to
finish. My goal in this walkthrough is to have you up and running with
a virtualized Oracle cluster with minimal hassle. Since this guide is
step by step, you don't need to be an expert to follow along, but the
more experience you have the better.
The following diagram will give you a conceptual idea of the cluster.

As I’ve mentioned in previous walkthroughs, this configuration is
meant only for testing, and to give you a way to learn RAC without
buying the expensive hardware a traditional RAC cluster entails. If
you’re building a production RAC cluster, I suggest you read the Grid
Infrastructure and RAC installation guides, and the RAC Administration
and Deployment Guide. The following MOS (My Oracle Support) notes will
also provide you with guidance (this requires an Oracle support
subscription):

Network Requirements
We will need 9 IP addresses for the RAC cluster: 2 public IP addresses, 2
Virtual IPs (VIPs), 2 Interconnect IP addresses, and 3 SCAN (Single Client
Access Name) addresses. The public IP addresses, SCAN addresses, and VIP
addresses need to be on the same segment. The private Interconnect addresses
need to be on their own segment. This is how it looks in my network:

Node Hostname          Public IP        Interconnect IP    VIP

node1.example.com      192.168.2.220    10.0.0.1           192.168.2.222

node2.example.com      192.168.2.221    10.0.0.2           192.168.2.223

Lastly, our SCAN addresses will be 192.168.2.117, 192.168.2.118, and
192.168.2.119. The SCAN addresses should be configured in DNS as 3
round-robin A records rather than in the hosts file. If you don't have
DNS configured, or are unable to configure it, you can place the SCAN
addresses in the hosts file, although doing so is against best practices. I
gave them the name clus-scan. Make sure these addresses are resolvable from
both nodes.
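
For reference, this is roughly what the round-robin A records look like in a BIND-style zone file (the name and addresses are the ones used in this guide; adjust them for your DNS server):

clus-scan    IN    A    192.168.2.117
clus-scan    IN    A    192.168.2.118
clus-scan    IN    A    192.168.2.119

If you do fall back to the hosts file, only a single SCAN address can realistically be resolved, for example one line such as 192.168.2.117 clus-scan.example.com clus-scan on both nodes.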

Your networking environment is probably different from mine. Feel
free to configure the IPs to be on whatever network segment you use,
just make sure that the Public IPs, SCAN addresses, and VIPs are on the
same segment. If you do use different addresses, make sure to use them
during the OS install and update /etc/hosts appropriately.
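
To give you an idea, here is a sketch of the /etc/hosts entries I would use on both nodes with the addresses above. The -priv and -vip hostnames are just illustrative choices on my part; use whatever naming you prefer, and keep the SCAN in DNS if you can:

127.0.0.1       localhost localhost.localdomain

# Public
192.168.2.220   node1.example.com   node1
192.168.2.221   node2.example.com   node2

# Private (Interconnect)
10.0.0.1        node1-priv.example.com   node1-priv
10.0.0.2        node2-priv.example.com   node2-priv

# Virtual IPs
192.168.2.222   node1-vip.example.com   node1-vip
192.168.2.223   node2-vip.example.com   node2-vip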

Hypervisor Configuration and Virtual Machine Creation

Each RAC node requires two network connections: one for public
communication, and the other for the Interconnect. In order to isolate
Interconnect traffic, we will create a virtual switch as shown below. The
Interconnect would probably work without this step, but I haven't tested
that, and in the real world Interconnect traffic is supposed to be isolated
on its own VLAN or switch anyway.

Virtual Switch Creation
Log into the ESXi host using the vSphere client, select the host, and click the "Configuration" tab.

Click "Networking" in the "Hardware" box.

Click "Add Networking..." in the upper right corner of the pane.

Select "Virtual Machine" as the connection type and click the "Next" button.

Make sure "Create a vSphere standard switch" is selected, and click the "Next" button.

I used "RAC Interconnect" as my network label. Feel free to use any label you want to, and click the "Next" button.

Click "Finish" to create the virtual switch.

Virtual Machine Creation

Right click on the ESXi host, and click "New Virtual Machine..." to begin the process.

Select the "Custom" configuration option.

Set "node1.example.com" as the virtual machine name.

Select the storage location for the virtual machine files.

Select "Virtual Machine Version: 8."

Select "Linux" as the guest operating system, and then select "Oracle Linux 4/5/6 (64-bit)" from the version dropdown.

I just left this screen set to the defaults, but you can change them if needed.

Set the memory size to 4236MB. You'll notice this is slightly more
than the required 4GB. I am doing this because the virtual machine
reserves a small amount of memory that isn't visible to the guest
operating system. Setting this amount of memory allows the guest to have
a full 4GB available. The Cluster Verification Utility memory check
will fail otherwise.

Configure the networking as shown below. Make sure that this setting is consistent across all RAC nodes.

I left the controller setting at the default, but you can change it if you have a specific reason to do so.

Select "Create a new virtual disk."

Set the disk size to 30GB, and select "Thick Provision Eager Zeroed."

I left these settings at their defaults.

Click the "Finish" button to create the virtual machine. This may take some time to complete.

Next, the second cluster node virtual machine will be created. Repeat
the same process you used for node1, but use "node2.example.com" as the
name of the virtual machine instead.

Oracle Linux Installation

As listed in the prerequisites section, you'll need the Oracle Linux
6.4 installation ISO to follow this guide. You can mount it in the
virtual machine by selecting the virtual machine, then mounting the ISO
as shown below. The screenshot below has me mounting the ISO from the
datastore, but you can also mount it from a local ISO image (which means
the installation will run over the network). On my hypervisor that
option is grayed out until I turn on the virtual machine.

Select the newly created virtual machine, and click the play button to start it.

Once the virtual machine is started, you can view its console by right clicking on it and then clicking "Open Console."

From the console of the virtual machine, you can mount the ISO.

From my virtual machine, I clicked "Send Ctrl+Alt+del" in order to restart it and boot from the installation ISO.

The virtual machine will boot from the ISO. Press enter to proceed with the default installation option.

I selected "Skip." Feel free to test the installation media if you want to.

The graphical installation will commence.

Select your desired language.

Select your desired keyboard.

Select "Basic Storage Devices."

You may see a warning pop up, in which case you can click "Yes, discard any data."

Type in "node1.example.com" or a different hostname if you prefer. Click "Configure Network."

Select "System eth0," and then click "Edit."

Configure the network settings for the public interface as required.
Make sure that "Connect automatically" is checked. I left IPv6 turned
off, which is the default. Click "Apply" when you're finished.

Select "System eth1," and then click "Edit."

Configure the network settings for the private interface as required.
Make sure that "Connect automatically" is checked. I left IPv6 turned
off, which is the default. Click "Apply" when you're finished, and then
click the "Close" button when the "Network Connections" screen pops up.

The dependency check will run and then the installation process will begin.

The installation is finished.

The Oracle Linux installation is now complete on node1. Repeat the
process on node2, but make sure to use the correct hostname and IP
address. The fully-qualified hostname for node2 is node2.example.com.
Use the same root password on both nodes.

Pre-installation Tasks

All steps are run on both nodes as the root user, unless specified otherwise.

Disable SELinux
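
If you'd rather do this from the command line, here is a minimal sketch. It assumes you're comfortable disabling SELinux outright; setting it to permissive works as well:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0    # takes effect immediately; the config change applies from the next boot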

Configure NTP

Feel free to use any NTP servers you want, or leave the defaults in place. I
changed the NTP servers in my configuration to avoid the
"PRVF-5408 : NTP Time Server is common only to the following node" errors that occur when the Cluster Verification Utility runs.
Synchronize time on both nodes using ntpdate (use any NTP server you want):

ntpdate nist1-ny.ustiming.org

Start ntpd:

/etc/init.d/ntpd start
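
You'll probably also want ntpd to come back after a reboot, which on Oracle Linux 6 is just:

chkconfig ntpd on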

If the node clocks drift too far apart, the Cluster Verification Utility will report an error along these lines:

PRVF-5413 : Node "node1" has a time offset of -2355.8 that is beyond permissible limit of 1000.0 from NTP Time Server ".LOCL."

Installation Media Preparation
I transferred the compressed grid installation media to the grid user's
home directory from a Windows system using WinSCP, which can be downloaded for free from http://winscp.net/eng/download.php.
I connected to the node as the grid user, which ensures that the grid user
owns the compressed media. I did the same thing for the database
installation media, except I connected as the oracle user.
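
If you prefer the command line, here is a rough sketch of unzipping the media in place (the zip names are the ones I used; yours may differ):

# as the grid user
cd /home/grid
for f in V38501-01*.zip; do unzip "$f"; done

# as the oracle user
cd /home/oracle
for f in V38500-01*.zip; do unzip "$f"; done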

You may need to free up some space by removing the compressed media
files after you unzip them. I did this by running the following as root:

rm -rf /home/oracle/V38500-01* /home/grid/V38501-01*

Installing CVUQDISK
CVUQDISK is required by the cluster verification utility. Below are
the steps I took to install it. I unzipped the grid installation media
to /home/grid. If you placed it elsewhere, you'll need to adjust the
commands below.
Install the package on node1 first, then copy it over to node2 and install it there as well:
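
A sketch of those steps as root, assuming the grid media was unzipped under /home/grid/grid; the cvuqdisk version in your media may differ, and oinstall is assumed to be your inventory group:

cd /home/grid/grid/rpm
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -ivh cvuqdisk-1.0.9-1.rpm

# copy the package to node2 and install it there too
scp cvuqdisk-1.0.9-1.rpm node2:/tmp
ssh node2 "CVUQDISK_GRP=oinstall rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm"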

Shared Disk Creation

Run the commands below to create each shared disk file on the ESX host.
You'll need to substitute the IP after -server with the IP of your ESX
host, and use the correct path for your datastore. I placed my shared disks
in a folder called 12cR1RAC. If you want to create your own folder you can
do so using the Datastore Browser. This will take a while.
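
A sketch of the disk creation using the vSphere CLI version of vmkfstools; the server IP, datastore name, and sizes below are placeholders, so adjust them for your environment:

vmkfstools -server 192.168.2.200 -username root -c 10G -d eagerzeroedthick "[datastore1] 12cR1RAC/crs.vmdk"
vmkfstools -server 192.168.2.200 -username root -c 20G -d eagerzeroedthick "[datastore1] 12cR1RAC/data.vmdk"
vmkfstools -server 192.168.2.200 -username root -c 15G -d eagerzeroedthick "[datastore1] 12cR1RAC/fra.vmdk"

If you'd rather SSH into the ESXi host and run it there, the equivalent local form is vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/12cR1RAC/crs.vmdk.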

Now that we've created our shared disks, we'll add them to our virtual machines.
Shut down the virtual machines by executing shutdown -h now on each one. Once they are powered off, right click node1 and select "Edit Settings...."

Click the "Add..." button.

Select "Hard Disk."

Select "Use an existing virtual disk."

Click "Browse..." and select the crs.vmdk file you just created.

Under "Virtual Device Node," select "SCSI (1:0)". This will create a new disk controller. Select "Independent" and "Persistent."

There are two more drives to add. Repeat the drive-adding process, using an
incrementing virtual device node (1:0, 1:1, and 1:2). Select the new SCSI
controller and set its SCSI Bus Sharing to "Physical." Now, go ahead and
click "OK" to have the changes take effect.

Repeat this process on node2. Make sure that the drives have matching
virtual device node IDs on each RAC node. After adding crs.vmdk, I
added data.vmdk and finally fra.vmdk. Do this in the same order on both nodes.
Start the nodes. They should now have the drives attached to them. A listing of the devices should be similar to the following.
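
You should see something along these lines, with sda being the OS disk and sdb, sdc, and sdd the newly attached shared disks:

[root@node1 ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd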

Configure Storage Devices
Next, we'll partition the disks we added to the virtual machines. These disks will be used by ASM. Run the following on node1:

fdisk /dev/sdb

Create a new partition with n, then select primary, partition number 1, and use the defaults for the starting and ending cylinder. Type w to write changes.

[root@node1 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x61f217af.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Repeat the same partitioning for /dev/sdc and /dev/sdd so that each of the three shared disks ends up with a single primary partition.

Now, we will configure our 3 shared disks to use ASMlib. If you haven't initialized ASMlib on the nodes yet, do that first (see the sketch below). The createdisk commands only need to be run on node1.
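
A sketch of initializing ASMlib on each node; the grid owner and asmadmin group shown in the prompts are assumptions based on a typical role-separated install, so answer with the user and groups you actually created:

/etc/init.d/oracleasm configure
# Default user to own the driver interface []: grid
# Default group to own the driver interface []: asmadmin
# Start Oracle ASM library driver on boot (y/n) [n]: y
# Scan for Oracle ASM disks on boot (y/n) [y]: y

With ASMlib initialized, mark the partitions on node1: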

# /etc/init.d/oracleasm createdisk CRS01 /dev/sdb1

Marking disk "CRS01" as an ASM disk: [ OK ]

# /etc/init.d/oracleasm createdisk DATA01 /dev/sdc1

Marking disk "DATA01" as an ASM disk: [ OK ]

# /etc/init.d/oracleasm createdisk FRA01 /dev/sdd1

Marking disk "FRA01" as an ASM disk: [ OK ]

On node2, run the following so that the disks are available on node2 as well:

/etc/init.d/oracleasm scandisks

Once this is complete, oracleasm listdisks should show the newly created ASMlib disks on both nodes:

[root@node1 rpm]# /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01

[root@node2 grid]# /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01

Pre-installation Cluster Verification
We should now be ready to install Grid Infrastructure. You can use
the Cluster Verification Utility to make sure there are no major
underlying problems with the node configuration. I ran the following
command as user grid on node1.
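
The command looks something like this, assuming the grid media was unzipped into /home/grid/grid:

cd /home/grid/grid
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose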

The only check that failed was the membership check for the grid user
in the dba group. Since this is intentional, you can ignore this. If
you'd like to see what my output looked like, you can download it here: cluvfy_results.txt

Installing Grid Infrastructure

We can now install Grid Infrastructure as the grid user from a VNC session on node1, working through the installer screens below. If the installer reports "[INS-40718] Single Client Access Name (SCAN): clus-scan could not be resolved," double-check that the SCAN name resolves from both nodes before continuing.
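
A sketch of launching the installer, assuming the grid media was unzipped to /home/grid/grid:

[grid@node1 ~]$ vncserver

Then, from a terminal inside the VNC session:

[grid@node1 ~]$ cd /home/grid/grid
[grid@node1 grid]$ ./runInstaller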

Click the "Add" button to add the additional node, and then click "Next."

For the eth1 interface, select "Private" from the "Use for" dropdown.

Select "Yes."

Select "Use Standard ASM for storage."

This disk group will be used by the clusterware. Configure it as shown below.

This cluster is just for testing purposes, so I used a single password.

Select "Do not use Intelligent Platform Management Interface (IPMI)."

By default the correct operating system groups should be selected.

By default the correct software locations should be filled in.

By default the correct inventory location should be filled in.

You can fill this in if you want to have the installer execute the root scripts for you.

The prerequisite checks will now run. There shouldn't be any failures or errors.

Review the summary screen to make sure everything is correct, and then click "Install."

This will generally take a while, depending on your hardware. If you opted
to automate the root script execution, you'll see a popup asking you to
confirm before the installer actually runs the scripts.

If you see the following screen, it means there were no issues with the
installation. You're ready to install the database software.

Installing Oracle Database 12c

Run the following as the oracle user to start a vncserver session.

vncserver

Connect to the session with your VNC client as we did before. Because there
may now be two VNC sessions running, you may need to append :2 to the
address to reach the second session. From the VNC session, as the oracle
user, run the installer.
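
A sketch of launching it, assuming the database media was unzipped to /home/oracle/database:

[oracle@node1 ~]$ cd /home/oracle/database
[oracle@node1 database]$ ./runInstaller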

Select "Enterprise Edition" in order to be able to test the full feature set of Oracle.

By default the correct locations should be filled in.

By default the correct OS groups should be selected.

The prerequisite checks will run. There should be no warnings or
errors, and the installer should automatically go to the next screen.

Review the summary screen to make sure everything is correct, and then click "Install."

The installation runs.

Run the root script on each node.

You should now see the following screen. Click "Close."

Creating a RAC Database

We're just about done. Now that the software is installed, let's create a RAC database!
We still have two ASM disks that we need to create disk groups with.
We'll be doing this from a vnc session as the grid user. Start the ASMCA
by running asmca.

[grid@node1 ~]$ vncserver

Click the "Disk Groups" tab and click "Create."

Configure the "DATA", Externel(None) disk group as shown below, and click "OK" to create the disk group.

Configure the "FRA" disk group as shown below, Externel(None), and click "OK" to create the disk group.

The disk groups should be listed as shown below.

Feel free to click the "ASM Instances" tab to verify that ASM is running on both nodes. Click "Exit."

We will now use the Database Configuration Assistant to create a RAC
database. From another VNC session running as the oracle user, run dbca. "Create Database" should be selected.

Select "Advanced Mode."

Select "Admin-Managed" as the configuration type.

Type "ORCL" as the "Global Database Name," which will cause the SID prefix to automatically be filled in.

Add node2 to the "Selected" list.

I left this screen at its defaults.

Since this is for testing, I used the same password for the administrative accounts.

Select the "+DATA" disk group as the common location for all database
files. I specified 10,000MB as the size of my FRA, and enabled
archiving.

I added the sample schemas, and left the other settings at their defaults.

You can leave this at the defaults. The only thing I changed was enabling Automatic Memory Management.

"Create Database" should be checked by default.

The prerequisite checks will run. There should be no warnings or
errors, and the installer should automatically go to the next screen.

Review the summary, then click "Finish" to have DBCA create the database.

The creation process runs.

The following popup should eventually appear, indicating that the database was successfully created. Click "Exit."

You should see the following screen. Click "Close."

If you've made it this far, you've successfully completed the installation and created a functional RAC database!

Post-installation Tasks

Clear Temp Files
The various installers used /tmp for their storage location. To free up some space you can run the following to clean this location out. Always double check what you've typed before pressing enter when using rm -rf. Run the following as root on each node.

rm -rf /tmp/*

Edit /etc/oratab

Add a new line to the bottom of /etc/oratab on node1 and node2, so that ORCL1 and ORCL2 are the SID values, respectively. An example follows.
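
Something like the following, assuming the default Oracle home path from the installer (adjust it if you chose a different location):

# on node1
ORCL1:/u01/app/oracle/product/12.1.0/dbhome_1:N

# on node2
ORCL2:/u01/app/oracle/product/12.1.0/dbhome_1:N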

And that's it! If you've made it this far, you've finished the
install and verified that RAC is up and running. You'll probably want to
study the documentation in more detail at this point to get a better
understanding of RAC concepts and administration. I truly hope this
article has been of use to you!

Miscellaneous Notes

Oracle RDBMS Pre-Install RPM
You may have noticed that I didn't use the oracle-rdbms-server-12cR1-preinstall.x86_64
RPM. I actually did initially, but it left so many things out that I
decided not to bother with it and just configure everything myself. The
side benefit of this is that the installation process I documented will
more closely align with installations on other distributions, such as
RHEL, that do not have the pre-install RPM.

The Disk I/O Scheduler
I didn't need to configure the Deadline I/O scheduler because the UEK
kernel uses it by default.