Establishing a New Cluster or New Cluster Node

This section provides information and procedures to establish a new cluster or to add a node to an existing cluster. Before you start to perform these tasks, ensure that you installed software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software.

The following task map lists the tasks to perform. Complete the procedures in the order that is indicated.

Table 3–1 Task Map: Establish the Cluster

1. Use one of the following methods to establish a new cluster or add a node to an existing cluster:

(New clusters only) Use the scinstall utility to establish the cluster. For instructions, see How to Configure Sun Cluster Software on All Nodes (scinstall).

(New clusters only) Use an XML configuration file to establish the cluster. For instructions, see How to Configure Sun Cluster Software on All Nodes (XML).

(New clusters or added nodes) Set up a JumpStart install server. Then create a flash archive of the installed system. Finally, use the scinstall JumpStart option to install the flash archive on each node and establish the cluster. For instructions, see How to Install Solaris and Sun Cluster Software (JumpStart).

(Added nodes only) Use the clsetup command to add the new node to the cluster authorized-nodes list. If necessary, also configure the cluster interconnect and reconfigure the private network address range. Then configure Sun Cluster software on the new node by using the scinstall utility or by using an XML configuration file. For instructions, see How to Prepare the Cluster for Additional Cluster Nodes.

How to Configure Sun Cluster Software on All Nodes (scinstall)

Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster.

Note –

This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.

Before You Begin

Perform the following tasks:

Ensure that the Solaris OS is installed to support Sun Cluster software.

If Solaris software is already installed on the node, you must ensure that the Solaris installation
meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

Determine which mode of the scinstall utility you will use, Typical or Custom. For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.

Component                           Default Value
Private-network address             172.16.0.0
Private-network netmask             255.255.248.0
Cluster-transport adapters          Exactly two adapters
Cluster-transport switches          switch1 and switch2
Global-devices file-system name     /globaldevices
Installation security (DES)         Limited

Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode.

Typical Mode Worksheet - If you will use Typical mode and accept all defaults, complete the following worksheet.

Component

Description/Example

Answer

Cluster Name

What is the name of the cluster that you want to establish?

Cluster Nodes

List the names of the other cluster nodes planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)

Cluster Transport Adapters and Cables

What are the names of the two cluster-transport adapters that attach the node to the private interconnect?

First

Second

(VLAN adapters only)

Will this be a dedicated cluster transport adapter? (Answer No if using tagged VLAN adapters.)

Yes | No

Yes | No

If no, what is the VLAN ID for this adapter?

Quorum Configuration

(two-node cluster only)

Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server or a Network Appliance NAS device as a quorum device.)

Yes | No

Check

Do you want to interrupt cluster creation for sccheck errors?

Yes | No

Custom Mode Worksheet - If you will use Custom mode and customize the configuration data, complete the following worksheet.

Note –

If you are installing a single-node cluster, the scinstall utility automatically assigns the default private network address and netmask, even though the cluster does not use a private network.

Component

Description/Example

Answer

Cluster Name

What is the name of the cluster that you want to establish?

Cluster Nodes

List the names of the other cluster nodes planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)

Authenticating Requests to Add Nodes

(multiple-node cluster only)

Do you need to use DES authentication?

No | Yes

Network Address for the Cluster Transport

(multiple-node cluster only)

Do you want to accept the default network address (172.16.0.0)?

Yes | No

If no, which private network address do you want to use?

___.___.___.___

Do you want to accept the default netmask (255.255.248.0)?

Yes | No

If no, what is the maximum number of nodes and the maximum number of private networks that you expect to configure in the cluster?

_____ nodes

_____ networks

Which netmask do you want to use? Choose from the values calculated by scinstall or supply your own.

___.___.___.___

Minimum Number of Private Networks

(multiple-node cluster only)

Should this cluster use at least two private networks?

Yes | No

Point-to-Point Cables

(multiple-node cluster only)

If this is a two-node cluster, does this cluster use switches?

Yes | No

Cluster Switches

(multiple-node cluster only)

Transport switch name:

Defaults: switch1 and switch2

First

Second

Cluster Transport Adapters and Cables

(multiple-node cluster only)

Node name (the node from which you run scinstall):

Transport adapter name:

First

Second

(VLAN adapters only)

Will this be a dedicated cluster transport adapter? (Answer No if using tagged VLAN adapters.)

Yes | No

Yes | No

If no, what is the VLAN ID for this adapter?

Where does each transport adapter connect to (a switch or another adapter)?

Switch defaults: switch1 and switch2

First

Second

If a transport switch, do you want to use the default port name?

Yes | No

Yes | No

If no, what is the name of the port that you want to use?

Do you want to use autodiscovery to list the available adapters for the other nodes?

If no, supply the following information for each additional node:

Yes | No

Specify for each additional node

(multiple-node cluster only)

Node name:

Transport adapter name:

First

Second

(VLAN adapters only)

Will this be a dedicated cluster transport adapter? (Answer No if using tagged VLAN adapters.)

Yes | No

Yes | No

If no, what is the VLAN ID for this adapter?

Where does each transport adapter connect to (a switch or another adapter)?

Defaults: switch1 and switch2

First

Second

If a transport switch, do you want to use the default port name?

Yes | No

Yes | No

If no, what is the name of the port that you want to use?

Quorum Configuration

(two-node cluster only)

Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server or a Network Appliance NAS device as a quorum device.)

Yes | No

Global Devices File System

(specify for each node)

Do you want to use the default name of the global-devices file system (/globaldevices)?

Yes | No

If no, do you want to use an already-existing file system?

Yes | No

What is the name of the file system that you want to use?

Check

(multiple-node cluster only)

Do you want to interrupt cluster creation for sccheck errors?

Yes | No

(single-node cluster only)

Do you want to run the sccheck utility to validate the cluster?

Yes | No

Automatic Reboot

(single-node cluster only)

Do you want scinstall to automatically reboot the node after installation?

Yes | No

Follow these guidelines to use the interactive scinstall utility in this procedure:

Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

On the cluster node from which you intend to configure the cluster, become superuser.

Start the scinstall utility.

phys-schost# /usr/cluster/bin/scinstall

Type the number that corresponds to the option for Create a new cluster or add a cluster node and press the Return key.

*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1

The New Cluster and Cluster Node Menu is displayed.

Type the number that corresponds to the option for Create a new cluster and press the Return key.

The Typical or Custom Mode menu is displayed.

Type the number that corresponds to the option for either Typical or Custom and press the Return key.

The Create a New Cluster screen is displayed.
Read the requirements, then press Control-D to continue.

Follow the menu prompts to supply your answers from
the configuration planning worksheet.

The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
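One way to verify this is with the svcs command; the output shown is representative, and the STIME value will differ on your system:

phys-schost# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default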

If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.

exclude:lofs

The change to the /etc/system file becomes effective after the next system reboot.

Note –

You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

Disable LOFS.

Disable the automountd daemon.

Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.

However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback
file systems.

Example 3–1 Configuring Sun Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall Typical mode. The other cluster node is phys-schost-2. The adapter names are qfe2 and qfe3. The automatic selection of a quorum device is enabled.

Troubleshooting

Unsuccessful configuration - If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

If you installed a single-node cluster, cluster establishment is complete. Go to Creating Cluster File Systems to install volume management software and configure the cluster.

If you installed a multiple-node cluster and declined automatic quorum configuration, perform postinstallation setup. Go to How to Configure Quorum Devices.

How to Configure Sun Cluster Software on All Nodes (XML)

Perform this procedure to configure a new cluster by using an XML cluster configuration file. The new cluster can be a duplication of an existing cluster that runs Sun Cluster 3.2 software.

This procedure configures the following cluster components:

Cluster name

Cluster node membership

Cluster interconnect

Global devices

Before You Begin

Perform the following tasks:

Ensure that the Solaris OS is installed to support Sun Cluster software.

If Solaris software is already installed on the node, you must ensure that the Solaris installation
meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

Copy the configuration file to the potential node from which you will configure the new cluster.

You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

Become superuser on the potential node from which you will configure the new cluster.

Modify the cluster configuration XML file as needed.

Open your cluster configuration XML file for editing.

If you are duplicating an existing cluster, open the file that you created with the cluster export command.

If you are not duplicating an existing cluster, create a new file.

Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.

Modify the values of the XML elements to reflect the cluster configuration that you want to create.

To establish a cluster, the following components must have valid values in the cluster configuration XML file:

Cluster name

Cluster nodes

Cluster transport

The cluster is created with the assumption that the partition /globaldevices exists on each node that you configure as a cluster node. The global-devices namespace is created on this partition. If you need to use a different file-system name on which to create
the global devices, add the following property to the <propertyList> element for each node that does not have a partition that is named /globaldevices.
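A minimal sketch of such an entry, assuming the globaldevfs property name that is described in the clconfiguration(5CL) man page (verify the exact name there) and a hypothetical file-system name of /localdisk/gdevs:

<propertyList>
<property name="globaldevfs" value="/localdisk/gdevs"/>
</propertyList>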

If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.

See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.

Validate the cluster configuration XML file.

phys-schost# /usr/share/src/xmllint --valid --noout clconfigfile

See the xmllint(1) man page for more information.

From the potential node that contains the cluster configuration XML file, create the cluster.

phys-schost# cluster create -i clconfigfile

-i clconfigfile

Specifies the name of the cluster configuration XML file to use as the input source.

For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.

If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.

exclude:lofs

The change to the /etc/system file becomes effective after the next system reboot.

Note –

You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

Disable LOFS.

Disable the automountd daemon.

Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.

However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback
file systems.

To duplicate quorum information from an existing cluster, configure the quorum device by using the cluster configuration XML file.

You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create
a required quorum device, go instead to How to Configure Quorum Devices.

If you are using a quorum server for the quorum device, ensure that the quorum server is set up and running.
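For example, assuming that the exported configuration file is named clusterconf.xml and that d3 is the shared device to configure (both names are from Example 3–2 below), the command would look similar to the following:

phys-newhost-1# clquorum add -i clusterconf.xml d3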

Example 3–2 Configuring Sun Cluster Software on All Nodes By Using an XML File

The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Solaris 10 OS and is not configured with non-global zones. The cluster configuration is exported from the existing
cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new
cluster is d3.

The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.

Troubleshooting

Unsuccessful configuration - If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

Next Steps

See Also

After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the cluster configuration you are adding the component
to. For example, if you are duplicating resource groups, ensure that the <resourcegroupNodeList> entry contains the valid node names for the new cluster, and not the node names from the cluster that you duplicated unless the node names are the same.

To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate.
The following table lists the cluster components that you can create from a cluster configuration XML file after the cluster is established and the man page for the command that you use to duplicate the component.

You can use the -a option of the clresource, clressharedaddress, or clreslogicalhostname command to also duplicate the resource type and resource group that are associated with the resource that you duplicate.

Otherwise, you must first add the resource type and resource group to the cluster before you add the resource.
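As a sketch, to duplicate resource groups you would first export the configuration from the existing cluster. The file name rgconfig.xml is hypothetical; see the clresourcegroup(1CL) man page for the exact export syntax:

phys-oldhost# clresourcegroup export -o rgconfig.xml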

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris OS and Sun Cluster software
on all cluster nodes and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

Before You Begin

Perform the following tasks:

Ensure that the hardware setup is complete and connections are verified before you install Solaris software. See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details on how to set up the hardware.

Determine the Ethernet address of each cluster node.

If you use a naming service, ensure that the following information is added to any naming services that clients use to access cluster services. See Public Network IP Addresses for planning guidelines. See your Solaris system-administrator
documentation for information about using Solaris naming services.

Address-to-name mappings for all public hostnames and logical addresses

On the server from which you will create the flash archive, ensure that all Solaris OS software, patches, and firmware that are necessary to support Sun Cluster software are installed.

If Solaris software is already installed on the server, you must ensure that the Solaris installation meets the requirements for Sun Cluster software
and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

Determine which mode of the scinstall utility you will use, Typical or Custom. For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.

Component                           Default Value
Private-network address             172.16.0.0
Private-network netmask             255.255.248.0
Cluster-transport adapters          Exactly two adapters
Cluster-transport switches          switch1 and switch2
Global-devices file-system name     /globaldevices
Installation security (DES)         Limited

Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode. See Planning the Sun Cluster Environment for planning guidelines.

Typical Mode Worksheet - If you will use Typical mode and accept all defaults, complete the following worksheet.

Component

Description/Example

Answer

JumpStart Directory

What is the name of the JumpStart directory to use?

Cluster Name

What is the name of the cluster that you want to establish?

Cluster Nodes

List the names of the cluster nodes that are planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)

Cluster Transport Adapters and Cables

First node name:

Transport adapter names:

First

Second

(VLAN adapters only)

Will this be a dedicated cluster transport adapter? (Answer No if using tagged VLAN adapters.)

Yes | No

Yes | No

If no, what is the VLAN ID for this adapter?

Specify for each additional node

Node name:

Transport adapter names:

First

Second

Quorum Configuration

(two-node cluster only)

Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server or a Network Appliance NAS device as a quorum device.)

Yes | No

Custom Mode Worksheet - If you will use Custom mode and customize the configuration data, complete the following worksheet.

Note –

If you are installing a single-node cluster, the scinstall utility automatically uses the default private network address and netmask, even though the cluster does not use a private network.

Component

Description/Example

Answer

JumpStart Directory

What is the name of the JumpStart directory to use?

Cluster Name

What is the name of the cluster that you want to establish?

Cluster Nodes

List the names of the cluster nodes that are planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)

Authenticating Requests to Add Nodes

(multiple-node cluster only)

Do you need to use DES authentication?

No | Yes

Network Address for the Cluster Transport

(multiple-node cluster only)

Do you want to accept the default network address (172.16.0.0)?

Yes | No

If no, which private network address do you want to use?

___.___.___.___

Do you want to accept the default netmask (255.255.248.0)?

Yes | No

If no, what is the maximum number of nodes and the maximum number of private networks that you expect to configure in the cluster?

_____ nodes

_____ networks

Which netmask do you want to use? Choose from the values that are calculated by scinstall or supply your own.

___.___.___.___

Minimum Number of Private Networks

(multiple-node cluster only)

Should this cluster use at least two private networks?

Yes | No

Point-to-Point Cables

(two-node cluster only)

Does this cluster use switches?

Yes | No

Cluster Switches

(multiple-node cluster only)

Transport switch name, if used:

Defaults: switch1 and switch2

First

Second

Cluster Transport Adapters and Cables

(multiple-node cluster only)

First node name:

Transport adapter name:

First

Second

(VLAN adapters only)

Will this be a dedicated cluster transport adapter? (Answer No if using tagged VLAN adapters.)

Yes | No

Yes | No

If no, what is the VLAN ID for this adapter?

Where does each transport adapter connect to (a switch or another adapter)?

Switch defaults: switch1 and switch2

If a transport switch, do you want to use the default port name?

Yes | No

Yes | No

If no, what is the name of the port that you want to use?

Specify for each additional node

(multiple-node cluster only)

Node name:

Transport adapter name:

First

Second

Where does each transport adapter connect to (a switch or another adapter)?

Switch defaults: switch1 and switch2

If a transport switch, do you want to use the default port name?

Yes | No

Yes | No

If no, what is the name of the port that you want to use?

Global Devices File System

Specify for each node

Do you want to use the default name of the global-devices file system (/globaldevices)?

Yes | No

If no, do you want to use an already-existing file system?

Yes | No

If no, do you want to create a new file system on an unused partition?

Yes | No

What is the name of the file system?

Quorum Configuration

(two-node cluster only)

Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server or a Network Appliance NAS device as a quorum device.)

Yes | No

Follow these guidelines to use the interactive scinstall utility in this procedure:

Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

Set up your JumpStart install server.

Ensure that the JumpStart install server meets the following requirements.

The install server is on the same subnet as the cluster nodes, or on the Solaris boot server for the subnet that the cluster nodes use.

The install server is not itself a cluster node.

The install server installs a release of the Solaris OS that is supported by the Sun Cluster software.

A custom JumpStart directory exists for JumpStart installation of Sun Cluster software. This jumpstart-dir directory must meet the following requirements:

Contain a copy of the check utility.

Be NFS exported for reading by the JumpStart install server.

Each new cluster node is configured as a custom JumpStart installation client that uses the custom JumpStart directory that you set up for Sun Cluster installation.

On a cluster node or another machine of the same server platform, install the Solaris OS and any necessary patches, if you have not already done so.

If Solaris software is already installed on the server, you must ensure that the Solaris installation meets the requirements for Sun Cluster software
and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

The path /export/suncluster/sc31/ is
used here as an example of the JumpStart installation directory that you created. In the media path, replace arch with sparc or x86 (Solaris 10 only) and replace ver with 9 for Solaris 9 or 10 for Solaris 10.

Type the number that corresponds to the option for Configure a cluster to be JumpStarted from this install server and press the Return key.

This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.

*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
* 2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 2

Follow the menu prompts to supply your answers from
the configuration planning worksheet.

The scinstall command stores your configuration information and copies the autoscinstall.class default class file to the /jumpstart-dir/autoscinstall.d/3.2/ directory. This file is similar to the following example.
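A representative default class file looks similar to the following; the slice layout and sizes shown here are illustrative and vary with your configuration choices:

install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750  swap
filesys         rootdisk.s3 512  /globaldevices
filesys         rootdisk.s7 20
cluster         SUNWCuser        add
package         SUNWman          add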

If necessary, make adjustments to the autoscinstall.class file to configure JumpStart to install the flash archive.

Modify entries as necessary to match configuration choices that you made when you installed the Solaris OS on the flash archive machine or when you ran the scinstall utility.

For example, if you assigned slice 4 for the global-devices
file system and specified to scinstall that the file-system name is /gdevs, you would change the /globaldevices entry of the autoscinstall.class file to the following:
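filesys         rootdisk.s4 512  /gdevs

The 512 size value here is illustrative; use the size that matches the slice that you actually assigned.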

The autoscinstall.class file installs the End User Solaris Software Group (SUNWCuser). If you install the End User Solaris Software Group (SUNWCuser), add to the autoscinstall.class file any additional Solaris software packages that you might need.

The following table lists Solaris packages that are required to support some Sun Cluster functionality.
These packages are not included in the End User Solaris Software Group. See Solaris Software Group Considerations for more information.

You can make changes to the autoscinstall.class file in one of the following ways:

Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.

Update the rules file to point to other profiles, then run the check utility to validate the rules file.

As long as the Solaris OS installation profile meets minimum Sun Cluster file-system allocation requirements, Sun Cluster software places no restrictions on other changes to the installation profile. See System Disk Partitions for partitioning guidelines
and requirements to support Sun Cluster software.

To install required packages for any of the following features or to perform other postinstallation tasks, set up your own finish script.

Remote Shared Memory Application Programming Interface (RSMAPI)

SCI-PCI adapters for the interconnect transport

RSMRDT drivers

Note –

Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.

Navigate to the listed IBA that is connected to the same network as the JumpStart PXE install server and move it to the top of the boot order.

The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the
right of the IBA boot choices corresponds to the higher Ethernet port number.

Save your change and exit the BIOS.

The boot sequence begins again. After further processing, the GRUB menu is displayed.

Immediately select the Solaris JumpStart entry and press Enter.

Note –

If the Solaris JumpStart entry is the only entry listed, you can alternatively wait for the selection screen to time out. If you do not respond in 30 seconds, the system automatically continues the boot sequence.

JumpStart installs the Solaris OS and Sun Cluster software on each node. When the installation is successfully completed, each node is fully installed as a new cluster node. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

When the BIOS screen again appears, immediately press Esc+2 or press the F2 key.

Note –

If you do not interrupt the BIOS at this point, it automatically returns to the installation type menu. There, if no choice is typed within 30 seconds, the system automatically begins an interactive installation.

After further processing, the BIOS Setup Utility is displayed.

In the menu bar, navigate to the Boot menu.

The list of boot devices is displayed.

Navigate to the Hard Drive entry and move it back to the top of the boot order.

Save your change and exit the BIOS.

The boot sequence begins again. No further interaction with the GRUB menu is needed to complete booting into cluster mode.

For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.

If you are installing a new node to an existing cluster, create mount points on the new node for all existing cluster file systems.

From another cluster node that is active, display the names of all cluster file systems.

phys-schost# mount | grep global | egrep -v node@ | awk '{print $1}'

On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.

phys-schost-new# mkdir -p mountpoint

For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.

Note –

The mount points become active after you reboot the cluster in Step 24.

If VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster, view the vxio number on each VxVM–installed node.

phys-schost# grep vxio /etc/name_to_major
vxio NNN

Ensure that the same vxio number is used on each of the VxVM-installed nodes.

Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.

(Optional) To use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file on each node in the cluster.

set kernel_cage_enable=1

This entry becomes effective after the next system reboot. See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.
See your server documentation for more information about dynamic reconfiguration.

If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.

exclude:lofs

The change to the /etc/system file becomes effective after the next system reboot.

Note –

You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

Disable LOFS.

Disable the automountd daemon.

Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.

However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback
file systems.

x86: Set the default boot file.

The setting of this value enables you to reboot the node if you are unable to access a login prompt.

On the Solaris 9 OS, set the default to kadb.

phys-schost# eeprom boot-file=kadb

On the Solaris 10 OS, set the default to kmdb in the GRUB boot parameters menu.

grub edit> kernel /platform/i86pc/multiboot kmdb

If you performed a task that requires a cluster reboot, follow these steps to reboot the cluster.

The following are some of the tasks that require a reboot:

Adding a new node to an existing cluster

Installing patches that require a node or cluster reboot

Making configuration changes that require a reboot to become active

On one node, become superuser.

Shut down the cluster.

phys-schost-1# cluster shutdown -y -g0 clustername

Note –

Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in
installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

Cluster nodes remain in installation mode until the first time that you run the clsetup command.
You run this command during the procedure How to Configure Quorum Devices.

Reboot each node in the cluster.

On SPARC based systems, do the following:

ok boot

On x86 based systems, do the following:

When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:
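A representative GRUB menu (the entry names vary with the installed Solaris release):

GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
+-------------------------------------------------------------------------+
| Solaris 10 11/06 s10x_u3wos_10 X86                                      |
| Solaris failsafe                                                        |
|                                                                         |
+-------------------------------------------------------------------------+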

The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

(Optional) If you did not perform Step 24 to reboot the nodes, start the Sun Java Web Console web server manually on each node.
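A sketch of the manual start, assuming that the Sun Java Web Console smcwebserver(1M) command is installed in its default location:

phys-schost# /usr/sbin/smcwebserver start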

If you installed a single-node cluster, cluster establishment is complete. Go to Creating Cluster File Systems to install volume management software and configure the cluster.

Troubleshooting

Disabled scinstall option - If the JumpStart option of the scinstall command is not preceded by an asterisk, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 14 to correct JumpStart setup, then restart the scinstall utility.

Error
messages about nonexistent nodes - Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number
of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages.
See How to Configure Network Time Protocol (NTP) for information about how to suppress these messages under otherwise normal cluster conditions.

How to Prepare the Cluster for Additional Cluster Nodes

Perform this procedure on existing cluster nodes to prepare the cluster for the addition of new cluster nodes.

How to Change the Private Network Configuration When Adding Nodes or Private Networks

Perform this task to change the cluster private IP address range to accommodate an increase in the number of nodes, non-global zones, or private networks, or any combination of these. You can also use this procedure to decrease the private IP address range.

Note –

This procedure requires you to shut down the entire cluster.

Become superuser on a node of the cluster.

From one node, start the clsetup utility.

# clsetup

The clsetup Main Menu is displayed.

Switch each resource group offline.

If the node contains non-global zones, any resource groups in the zones are also switched offline.

Type the number that corresponds to the option for Resource groups and press the Return key.

The Resource Group Menu is displayed.

Type the number that corresponds to the option for Online/Offline or Switchover a resource group and press the Return key.

Follow the prompts to take offline all resource groups and to put them in the unmanaged state.

When all resource groups are offline, type q to return to the Resource Group Menu.

Disable all resources in the cluster.

Type the number that corresponds to the option for Enable/Disable a resource and press the Return key.

Choose a resource to disable and follow the prompts.

Repeat the previous step for each resource to disable.

When all resources are disabled, type q to return to the Resource Group Menu.

Quit the clsetup utility.

Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.

# cluster status -t resource,resourcegroup

-t

Limits output to the specified cluster object

resource

Specifies resources

resourcegroup

Specifies resource groups

From one node, shut down the cluster.

# cluster shutdown -g0 -y

-g

Specifies the wait time in seconds

-y

Prevents the prompt that asks you to confirm a shutdown from being issued

Boot each node into noncluster mode.

On SPARC based systems, perform the following command:

ok boot -x

On x86 based systems, perform the following commands:

In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

The GRUB boot parameters screen appears similar to the following:

GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a) |
| kernel /platform/i86pc/multiboot |
| module /platform/i86pc/boot_archive |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.

Add -x to the command to specify that the system boot into noncluster mode.

Press Enter to accept the change and return to the boot parameters screen.

The screen displays the edited command.

GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a) |
| kernel /platform/i86pc/multiboot -x |
| module /platform/i86pc/boot_archive |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.

Type b to boot the node into noncluster mode.

Note –

This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.

From one node, start the clsetup utility.

When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.

Type the number that corresponds to the option for Change IP Address
Range and press the Return key.

The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.

To change either the private-network IP address or the IP address range, type yes and press the Return key.

The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.

Change or accept the private-network IP address.

To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.

The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter
your response.

To change the default private-network IP address, perform the following substeps.

Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

The clsetup utility will prompt for the new private-network IP address.

Type the new IP address and press the Return key.

The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

Change or accept the default private-network IP address range.

The default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster.

How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing cluster.

Note –

This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.

Before You Begin

Perform the following tasks:

Ensure that the Solaris OS is installed to support Sun Cluster software.

If Solaris software is already installed on the node, you must ensure that the Solaris installation
meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

Determine which mode of the scinstall utility you will use, Typical or Custom. For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.

Typical Mode Worksheet - If you will use Typical mode and accept all defaults, complete the following worksheet.

Component

Description/Example

Answer

Sponsoring Node

What is the name of the sponsoring node?

Choose any node that is active in the cluster.

Cluster Name

What is the name of the cluster that you want the node to join?

Check

Do you want to run the sccheck validation utility?

Yes | No

Autodiscovery of Cluster Transport

Do you want to use autodiscovery to configure the cluster transport?

If no, supply the following additional information:

Yes | No

Point-to-Point Cables

Does the node that you are adding to the cluster make this a two-node cluster?

Yes | No

Does the cluster use switches?

Yes | No

Cluster Switches

If used, what are the names of the two switches?

Defaults: switch1 and switch2

First

Second

Cluster Transport Adapters and Cables

Transport adapter names:

First

Second

Where does each transport adapter connect to (a switch or another adapter)?

Switch defaults: switch1 and switch2

For transport switches, do you want to use the default port name?

Yes | No

Yes | No

If no, what is the name of the port that you want to use?

Automatic Reboot

Do you want scinstall to automatically reboot the node after installation?

Yes | No

Custom Mode Worksheet - If you will use Custom mode and customize the configuration data, complete the following worksheet.

Component

Description/Example

Answer

Sponsoring Node

What is the name of the sponsoring node?

Choose any node that is active in the cluster.

Cluster Name

What is the name of the cluster that you want the node to join?

Check

Do you want to run the sccheck validation utility?

Yes | No

Autodiscovery of Cluster Transport

Do you want to use autodiscovery to configure the cluster transport?

If no, supply the following additional information:

Yes | No

Point-to-Point Cables

Does the node that you are adding to the cluster make this a two-node cluster?

Yes | No

Does the cluster use switches?

Yes | No

Cluster Switches

Transport switch name, if used:

Defaults: switch1 and switch2

First

Second

Cluster Transport Adapters and Cables

Transport adapter name:

First

Second

Where does each transport adapter connect to (a switch or another adapter)?

Switch defaults: switch1 and switch2

If a transport switch, do you want to use the default port name?

Yes | No

Yes | No

If no, what is the name of the port that you want to use?

Global Devices File System

What is the name of the global-devices file system?

Default: /globaldevices

Automatic Reboot

Do you want scinstall to automatically reboot the node after installation?

Yes | No

Follow these guidelines to use the interactive scinstall utility in this procedure:

Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

On the cluster node to configure, become superuser.

Start the scinstall utility.

phys-schost-new# /usr/cluster/bin/scinstall

The scinstall Main Menu is displayed.

Type the number that corresponds to the option for Create a new cluster or add a cluster node and press the Return key.

*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1

The New Cluster and Cluster Node Menu is displayed.

Type the number that corresponds to the option for Add this machine as a node in an existing cluster and press the Return key.

Follow the menu prompts to supply your answers from
the configuration planning worksheet.

The scinstall utility configures the node and boots the node into the cluster.

If you intend to use Sun Cluster HA for NFS on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.

exclude:lofs

The change to the /etc/system file becomes effective after the next system reboot.

Note –

You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you choose to add Sun Cluster HA for NFS on a highly available local file system, you must make one of the following configuration changes.

Disable LOFS.

Disable the automountd daemon.

Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.

However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If Sun Cluster HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.

See The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback
file systems.

Example 3–3 Configuring Sun Cluster Software on an Additional Node

The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.

Troubleshooting

Unsuccessful configuration - If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

How to Configure Sun Cluster Software on Additional Cluster Nodes (XML)

Perform this procedure to configure a new cluster node by using an XML cluster configuration file. The new node can be a duplication of an existing cluster node that runs Sun Cluster 3.2 software.

This procedure configures the following cluster components on the new node:

Cluster node membership

Cluster interconnect

Global devices

Before You Begin

Perform the following tasks:

Ensure that the Solaris OS is installed to support Sun Cluster software.

If Solaris software is already installed on the node, you must ensure that the Solaris installation
meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

The GRUB boot parameters screen appears similar to the following:

GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a) |
| kernel /platform/i86pc/multiboot |
| module /platform/i86pc/boot_archive |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.

Add -x to the command to specify that the system boot into noncluster mode.

Press Enter to accept the change and return to the boot parameters screen.

The screen displays the edited command.

GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
+----------------------------------------------------------------------+
| root (hd0,0,a) |
| kernel /platform/i86pc/multiboot -x |
| module /platform/i86pc/boot_archive |
+----------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press 'b' to boot, 'e' to edit the selected command in the
boot sequence, 'c' for a command-line, 'o' to open a new line
after ('O' for before) the selected line, 'd' to remove the
selected line, or escape to go back to the main menu.

Type b to boot the node into noncluster mode.

Note –

This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.

Unconfigure Sun Cluster software from the potential node.

phys-schost-new# /usr/cluster/bin/clnode remove

If you are duplicating a node that runs Sun Cluster 3.2 software, create a cluster configuration XML file.

Become superuser on the cluster node that you want to duplicate.

Export the existing node's configuration information to a file.

phys-schost# clnode export -o clconfigfile

-o

Specifies the output destination.

clconfigfile

The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.
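
The exported file can then be copied to the potential node and used to configure that node as a cluster member. A minimal sketch of that step, assuming phys-schost-1 as the sponsoring node and the file name used above, follows; verify the exact options against the clnode(1CL) man page:

phys-schost-new# clnode add -n phys-schost-1 -i clconfigfile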

Troubleshooting

Unsuccessful configuration - If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Sun Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Sun Cluster software packages. Then rerun this procedure.

How to Update Quorum Devices After Adding a Node to a Cluster

If you added a node to a cluster, you must update the configuration information of the quorum devices, regardless of whether you use SCSI devices, NAS devices, a quorum server, or a combination. To do this, you remove all quorum devices and update the global-devices namespace. You can optionally
reconfigure any quorum devices that you still want to use. This registers the new node with each quorum device, which can then recalculate its vote count based on the new number of nodes in the cluster.

Any newly configured SCSI quorum devices will be set to SCSI-3 reservations.

Before You Begin

Ensure that you have completed installation of Sun Cluster software on the added node.

On any node of the cluster, become superuser.

View the current quorum configuration.

Command output lists each quorum device and each node. The following example output shows the current SCSI quorum device, d3.

phys-schost# clquorum list
d3
…

Note the name of each quorum device that is listed.

Remove the original quorum device.

Perform this step for each quorum device that is configured.

phys-schost# clquorum remove devicename

devicename

Specifies the name of the quorum device.

Verify that all original quorum devices are removed.

If the removal of the quorum devices was successful, no quorum devices are listed.

phys-schost# clquorum status

Update the global-devices namespace.

phys-schost# cldevice populate

Note –

This step is necessary to prevent possible node panic.

On each node, verify that the cldevice populate command has completed processing before you attempt to add a quorum device.

The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.

phys-schost# ps -ef | grep scgdevs

(Optional) Add a quorum device.

You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.

(Optional) If you want to choose a new shared device to configure as a quorum device, display all devices that the system checks.

The following example identifies the original SCSI quorum device d2, removes that quorum device, lists the available shared devices, updates the global-device namespace, configures d3 as a new SCSI quorum device, and verifies the new device.
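
A minimal sketch of that command sequence, with output abridged and device names as described above, follows; treat the exact output as illustrative:

phys-schost# clquorum list
d2
…
phys-schost# clquorum remove d2
phys-schost# cldevice list -v
…
phys-schost# cldevice populate
phys-schost# clquorum add d3
phys-schost# clquorum list
d3
…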

Ensure that network switches that are directly connected to cluster nodes meet one of the following criteria:

The switch supports Rapid Spanning Tree Protocol (RSTP).

Fast port mode is enabled on the switch.

One of these features is required to ensure immediate communication between cluster nodes and the quorum server. If this communication is significantly delayed by the switch, the cluster interprets the delay as loss of the quorum device.

Have available the following information:

A name to assign to the configured quorum device

The IP address of the quorum server host machine

The port number of the quorum server
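
For reference, this information can also be supplied to the clquorum command to add a quorum-server quorum device from the command line. In the following sketch, the type name quorum_server, the property names qshost and port, and all values are assumptions to verify against the clquorum(1CL) man page:

phys-schost# clquorum add -t quorum_server -p qshost=10.11.124.84 -p port=9000 quorumserver1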

To configure a Network Appliance network-attached storage (NAS) device as a quorum device, do the following:

See the following Network Appliance NAS documentation for information about creating and setting up a Network Appliance NAS device and LUN. You can access the following documents
at http://now.netapp.com.

Task

Network Appliance Documentation

Setting up a NAS device

System Administration File Access Management Guide

Setting up a LUN

Host Cluster Tool for Unix Installation Guide

Installing ONTAP software

Software Setup Guide, Upgrade Guide

Exporting volumes for the cluster

Data ONTAP Storage Management Guide

Installing NAS support software packages on cluster nodes

Log in to http://now.netapp.com. From the Software Download page, download the Host Cluster Tool for Unix Installation Guide.

To use a quorum server as a quorum device, prepare the cluster to communicate with the quorum server.

If the public network uses variable-length subnetting, also called Classless Inter-Domain Routing (CIDR), modify the following files on each node.

If you use classful subnets, as defined in RFC 791, you do not need to perform these steps.

Add to the /etc/inet/netmasks file an entry for each public subnet that the cluster uses.

The following is an example entry that contains a public-network IP address and netmask:

10.11.30.0 255.255.255.0

Append netmask + broadcast + to the hostname entry in each /etc/hostname.adapter file.

nodename netmask + broadcast +
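
For example, if the node name is phys-schost-1 and the public adapter is bge0 (both names are illustrative assumptions), the /etc/hostname.bge0 file would contain:

phys-schost-1 netmask + broadcast +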

Ensure that the IP address of the quorum server is included in the /etc/inet/hosts or /etc/inet/ipnodes file on each node in the cluster.

If you use a naming service, ensure that the quorum server is included in the name-to-address mappings.

On one node, become superuser.

To use a shared SCSI disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.

From one node of the cluster, display a list of all the devices that the system checks.
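
One way to produce this list, consistent with the scdidadm output that a later step in this procedure references, is the following; scdidadm -L lists the device-ID mappings from all nodes:

phys-schost# scdidadm -L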

Ensure that the output shows all connections between cluster nodes and storage devices.

Determine the global device-ID name of each shared disk that you are configuring as a quorum device.

Note –

Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.

Use the scdidadm output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d2 is shared by phys-schost-1 and phys-schost-2.

Start the clsetup utility.

phys-schost# clsetup

The Initial Cluster Setup screen is displayed.

Note –

If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 9.

Answer the prompt Do you want to add any quorum disks?

If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes to configure one or more quorum devices.

If your cluster has three or more nodes, quorum device configuration is optional.

Type No if you do not want to configure additional quorum devices. Then skip to Step 8.

Troubleshooting

Interrupted clsetup processing -
If the quorum setup process is interrupted or fails to be completed successfully, rerun clsetup.

Changes to quorum vote count - If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List”
in Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.

How to Verify the Quorum Configuration and Installation Mode

Perform this procedure to verify that quorum configuration was completed successfully and that cluster installation mode is disabled.
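
A minimal sketch of one way to perform both checks, using commands that appear elsewhere in this chapter (the installmode property name is an assumption to verify against your release), is:

phys-schost# clquorum status
phys-schost# cluster show -t global | grep installmode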

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

How to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the node's numeric ID), that are assigned during Sun Cluster software installation.

Note –

Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts.
If any applications or data services are running, stop them before you perform this procedure.

Perform this procedure on one active node of the cluster.

Become superuser on a cluster node.

Start the clsetup utility.

phys-schost# clsetup

The clsetup Main Menu is displayed.

Type the number that corresponds to the option for Private hostnames and press the Return key.

The Private Hostname Menu is displayed.

Type the number that corresponds to the option for Change a private hostname and press the Return key.

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

How to Configure Network Time Protocol (NTP)

If you installed your own /etc/inet/ntp.conf file before you installed Sun Cluster software, you do not need to perform this procedure.

Perform this task to create or modify the NTP configuration file after you perform any of the following tasks:

Install Sun Cluster software

Add a node to an existing cluster

Change the private hostname of a node in the cluster

If you added a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node as well as to the new node.

The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time. Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes.
You are free to configure NTP as best meets your individual needs if this basic requirement for synchronization is met.

See the Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines on how to configure NTP for
a Sun Cluster configuration.

Become superuser on a cluster node.

If you have your own /etc/inet/ntp.conf file, copy your file to each node of the cluster.

If you do not have your own /etc/inet/ntp.conf file to install, use the /etc/inet/ntp.conf.cluster file as your NTP configuration file.

Note –

Do not rename the ntp.conf.cluster file as ntp.conf.

If the /etc/inet/ntp.conf.cluster file does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. Sun Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file only if an /etc/inet/ntp.conf file is not already present on the node. If so, instead perform the following edits on that ntp.conf file.

Use your preferred text editor to open the NTP configuration file on one node of the cluster for editing.

Ensure that an entry exists for the private hostname of each cluster node.

If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.
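
For example, in a two-node cluster that keeps the default private hostnames, the peer entries might look like the following sketch, which is modeled on the /etc/inet/ntp.cluster template; the prefer keyword is an assumption:

peer clusternode1-priv prefer
peer clusternode2-priv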

If necessary, make other modifications to meet your NTP requirements.

Copy the NTP configuration file to all nodes in the cluster.

The contents of the NTP configuration file must be identical on all cluster nodes.
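
One way to copy the file, assuming root ssh access between nodes and a second node named phys-schost-2 (both assumptions), is:

phys-schost# scp /etc/inet/ntp.conf phys-schost-2:/etc/inet/ntp.conf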

Stop the NTP daemon on each node.

Wait for the command to complete successfully on each node before you proceed to Step 5.

SPARC: For the Solaris 9 OS, use the following command:

phys-schost# /etc/init.d/xntpd stop

For the Solaris 10 OS, use the following command:

phys-schost# svcadm disable ntp

Restart the NTP daemon on each node.

If you use the ntp.conf.cluster file, run the following command:

phys-schost# /etc/init.d/xntpd.cluster start

The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file.

If the ntp.conf file exists, the script exits immediately without starting the NTP daemon.

If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon. In this case, the script uses the ntp.conf.cluster file as the NTP configuration file.
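
The decision logic that the script follows is roughly equivalent to the following shell sketch; the daemon path and the -c option reflect the Solaris 9 xntpd, and the sketch is otherwise a simplified assumption rather than the actual script:

#!/bin/sh
# Simplified sketch of the xntpd.cluster startup decision logic.
if [ -f /etc/inet/ntp.conf ]; then
        # ntp.conf exists: exit without starting the NTP daemon.
        exit 0
elif [ -f /etc/inet/ntp.conf.cluster ]; then
        # Start the NTP daemon with ntp.conf.cluster as its configuration file.
        /usr/lib/inet/xntpd -c /etc/inet/ntp.conf.cluster
fi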

If you use the ntp.conf file,
run one of the following commands:

SPARC: For the Solaris 9 OS, use the following command:

phys-schost# /etc/init.d/xntpd start

For the Solaris 10 OS, use the following command:

phys-schost# svcadm enable ntp

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.