If you are installing Oracle Clusterware
on a node that already has
a single-instance Oracle Database 9i installation, then stop the
existing instances. After Oracle Clusterware is installed, start up
the instances again.

You can upgrade some or all nodes of an existing Cluster Ready Services
installation. For example, if you have a six-node cluster, then you can
upgrade two nodes in each of three upgrade sessions. Base the number of
nodes that you upgrade in each session on the load the remaining nodes
can handle. This is called a "rolling upgrade."

Creating Quorum File:

I have used the OCFS partition /dev/sda2 (mounted on /u02/oradata/ocfs) to
store the database files as well as the Quorum File, so I have created the
Quorum File under this mount point. Because this mount point is shared by
all the nodes in the cluster, the file is created from ONLY one node.
After running the command below, press Ctrl-D to create the empty file.

[oracle@node1-pub oracle]$ cat > /u02/oradata/ocfs/QuorumFile

If you get the error below when starting the installer, then apply the
patch described next to fix it.

[oracle@node1-pub oracle]$ /mnt/cdrom/runInstaller
Initializing Java Virtual Machine from /tmp/OraInstall2005-12-16_02-19-25AM/jre/bin/java. Please wait...
Error occurred during initialization of VM
Unable to load native library: /tmp/OraInstall2005-12-16_02-19-25AM/jre/lib/i386/libjava.so: symbol __libc_wait, version GLIBC_2.0 not defined in file libc.so.6 with link time reference

Download the patch p3006854_9204_LINUX.zip from Metalink and apply it as shown below.
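As a rough sketch, applying the patch usually amounts to unzipping it and
running the script it ships with as root. The directory name and script
name below are assumptions; the README that comes with the patch is
authoritative.

```
[oracle@node1-pub oracle]$ unzip p3006854_9204_LINUX.zip
[oracle@node1-pub oracle]$ cd 3006854
[root@node1-pub 3006854]# sh rhel3_pre_install.sh
```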

Leave the default value as it is and then CLICK Next. We are not going to
use Watchdog anyway; as you will see in the next section, I have
configured Cluster Manager to use the hangcheck-timer module instead of
the watchdog daemon.

Enter the Quorum file we created in the previous section and then CLICK Next.

CLICK Install

CLICK Exit

Verifying Cluster Manager Configuration:
At this point, make sure that the clusterware is configured correctly on
all the nodes by verifying the contents of the
$ORACLE_HOME/oracm/admin/cmcfg.ora file. It should look like the one
below. This file MUST contain all the public and private node names. If
any of the nodes is missing, then you have not completed the network
configuration correctly as mentioned in the pre-installation tasks. The
HostName variable MUST also be set to the private hostname of the node.
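For reference, here is a sketch of what a two-node cmcfg.ora might look
like at this stage. The hostnames, quorum-file path, and timing values
are assumptions based on the setup described in this guide; check them
against your own file rather than copying them verbatim.

```
HeartBeat=15000
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=210
PrivateNodeNames=node1-prv node2-prv
PublicNodeNames=node1-pub node2-pub
ServicePort=9998
CmDiskFile=/u02/oradata/ocfs/QuorumFile
HostName=node1-prv
WatchdogSafetyMargin=5000
WatchdogTimerMargin=60000
```

Note that both PrivateNodeNames and PublicNodeNames list every node in
the cluster, while HostName carries this node's own private name.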

Optionally, you can burn it onto a CD:

[root@node1-pub root]# mkisofs -r 3095277 | cdrecord -v dev=1,1,0 speed=20 -

Insert the newly burned CD into the cdrom and start the runInstaller as
oracle as shown below. If you have not copied this file onto the disk,
then you can start the runInstaller from the directory where you unzipped
this file.

Follow the instructions and enter the appropriate values. Most of the
screens are the same as the ones we saw while installing the 9.2.0.1
Clusterware.

CLICK Next

CLICK Install

CLICK Exit

Modifying Cluster Manager Files:

Once you upgrade Cluster Manager to 9.2.0.4, you no longer need the
watchdog daemon. Instead, you can use the hangcheck-timer module that
comes with the Linux kernel by default. In the pre-installation tasks, I
configured the hangcheck-timer module, so we need to let Cluster Manager
know that it has to use hangcheck-timer instead of watchdog. Update the
cmcfg.ora, ocmargs.ora, and ocmstart.sh files and remove or comment out
the watchdog-related entries ON BOTH THE NODES.
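As a sketch, the edits look roughly like the following. The exact lines
vary with your installation, so treat the parameter values here as
assumptions to check against your own files.

```
# $ORACLE_HOME/oracm/admin/cmcfg.ora
# Remove or comment out the watchdog margins and point Cluster Manager
# at the hangcheck-timer kernel module instead:
# WatchdogSafetyMargin=5000
# WatchdogTimerMargin=60000
KernelModuleName=hangcheck-timer

# $ORACLE_HOME/oracm/admin/ocmargs.ora
# Comment out the watchdogd line:
# watchdogd

# $ORACLE_HOME/oracm/bin/ocmstart.sh
# Comment out the lines that start and check watchdogd.
```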

I have seen that after some time the Cluster Manager dies on its own on
all the nodes. To overcome this issue, you need to zero out some of the
blocks of the QuorumFile as shown below. I got this solution from
Puschitz.com. Thank you, Puschitz.
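A minimal sketch of the zeroing step, wrapped in a small helper function.
The quorum-file path and the block count in the example call are
assumptions; adjust them to your own setup.

```shell
# Hypothetical helper: zero out the first N 1-KB blocks of a file.
# conv=notrunc overwrites in place without truncating the rest of the file.
zero_quorum() {
    dd if=/dev/zero of="$1" bs=1024 count="$2" conv=notrunc 2>/dev/null
}

# Run from ONE node only, e.g. (path and count are assumptions):
# zero_quorum /u02/oradata/ocfs/QuorumFile 1024
```

Because dd writes in place, the rest of the quorum file beyond the zeroed
region is left untouched.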