Now perform the same steps on the other nodes in the cluster. When those steps
are done on every node, copy each authorized_keys.<nodeX> file to all the
nodes, into $HOME/.ssh/.

For example, with 3 nodes, after the copy each node's .ssh directory will
contain 3 files named authorized_keys.<nodeX>.
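The copy step above can be sketched as a small loop. This is a hypothetical
dry run: the hostnames node1/node2/node3 are placeholders, and the function
only prints the scp commands it would run, so you can review them before
executing anything.

```shell
#!/bin/sh
# Dry-run sketch: print the scp commands that would push this node's
# key file to every node in the cluster. Hostnames are illustrative;
# adjust them to your cluster before running the printed commands.
print_copy_cmds() {
  this_node=$1; shift
  for dst in "$@"; do
    echo "scp \$HOME/.ssh/authorized_keys.$this_node $dst:\$HOME/.ssh/"
  done
}

# Example: the commands to run from node1 in a 3-node cluster
print_copy_cmds node1 node1 node2 node3
```

Run the equivalent loop on each node (substituting that node's own hostname)
so that every node ends up holding every node's key file.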

Then, on EACH node, continue the SSH configuration as follows:

$ cd $HOME/.ssh
$ cat *.node* >> authorized_keys
$ chmod 600 authorized_keys

To test that everything is working correctly, execute the command:

$ ssh <hostnameX> date

For example, in a 3-node environment:

$ ssh node1 date
$ ssh node2 date
$ ssh node3 date

Repeat this 3 times on each node, including an ssh back to the node itself
(nodeX is the hostname of the node).

The first time, you will be asked to add the node to a file called
'known_hosts'; this is expected, so answer the question with 'yes'. After
that, when SSH is correctly configured, the date is returned and you are not
prompted for a password.
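The verification above can be wrapped in a small loop. This is a sketch with
illustrative hostnames; BatchMode makes ssh fail instead of hanging at a
password prompt, which itself signals that equivalence is not yet configured.

```shell
#!/bin/sh
# Sketch of the equivalence check to run on each node. Hostnames are
# illustrative; BatchMode=yes disables password prompts so a
# misconfigured node shows up as FAIL rather than a hang.
check_equivalence() {
  for n in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$n" date >/dev/null 2>&1; then
      echo "PASS: $n"
    else
      echo "FAIL: $n"
    fi
  done
}

check_equivalence node1 node2 node3
```

Every node should report PASS for every node (including itself) before you
continue.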

3) To verify that the new node can be part of the cluster, run the following
command from one of the existing nodes.

[oracle@rac1 .ssh]$ cluvfy stage -post hwos -n rac3

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
Verification of the hosts config file successful

Node connectivity passed for subnet "192.168.1.0" with node(s) rac3
TCP connectivity check passed for subnet "192.168.1.0"

Node connectivity passed for subnet "192.168.0.0" with node(s) rac3
TCP connectivity check passed for subnet "192.168.0.0"

Interfaces found on subnet "192.168.1.0" that are likely candidates for a private interconnect are:
rac3 eth0:192.168.1.17

Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac3 eth1:192.168.0.102

WARNING:
Could not find a suitable set of interfaces for VIPs

Node connectivity check passed

Check for multiple users with UID value 0 passed

Post-check for hardware and operating system setup was successful.

4) From an existing node, run 'cluvfy' to check inter-node compatibility: