Re: Cluvfy and private interconnects

From: Dan Norris <dannorris@xxxxxxxxxxxxx>

To: jeffthomas24@xxxxxxxxx

Date: Wed, 12 Mar 2008 07:16:34 -0500

I'm not sure why it is failing. In previous versions, when you used
RFC-reserved networks, the tool couldn't find a network suitable for
VIPs since it assumed that all reserved networks were for private
interconnects. It looks like you have roughly the opposite problem. I'm
not sure of the reason, and there are no other notes on MetaLink
explaining it, so I'd file an SR on it. I expect that VIPCA should
still run fine anyway.
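Before filing the SR, it may help to run just the node connectivity
component check with verbose output, so you can see which subnets
cluvfy classifies as public versus private (a sketch; node1 and node2
are placeholders for your actual host names):

```shell
# Run only the node connectivity check, with verbose output,
# to see how cluvfy categorizes each subnet it finds.
# node1,node2 below are example names -- substitute your own.
./runcluvfy.sh comp nodecon -n node1,node2 -verbose
```

The verbose listing of interfaces and subnets would also be useful
detail to attach to the SR.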

I noticed that you have two NICs configured with separate IPs on each of
the two networks. If you want to build some redundancy for the NICs on
your servers, you need to investigate interface bonding. Simply putting
two NICs on the same subnet with different IP addresses isn't
sufficient to create a redundant NIC configuration. Instead, you'll
need to bond the two physical interfaces together (with the bonding
software driver) and then use a single IP address on the bonded
pseudo-interface (typically called bond0, bond1, etc.). Search MetaLink
for "linux ethernet bonding" and you'll find a few helpful notes.
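On Linux, a minimal active-backup bonding setup might look like the
following (a sketch assuming RHEL-style network scripts; the device
names, addresses, and mode are examples, not a recommendation for your
environment):

```shell
# /etc/modprobe.conf -- load the bonding driver
# (mode=1 is active-backup; miimon=100 polls link state every 100 ms)
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
# Single IP on the bonded pseudo-interface (example private address)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth1
# Enslave a physical NIC to bond0 (repeat for the second NIC, e.g. eth2)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

The MetaLink notes cover the mode choices (active-backup versus the
load-balancing modes) in more detail.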

Dan

Jeffery Thomas wrote:

Solaris 10. We are in the process of prepping two boxes
for a 10g RAC cluster. I downloaded the 11g cluvfy (as recommended by
Oracle) and our config passes every check, but for some reason cluvfy
cannot find suitable interconnects.

My question would be: what exactly is cluvfy looking for when it is
scanning for the interconnects? User equivalence checks out, we are
using switches, and so on.