Oracle Solaris Cluster System Hardware and Software Components

This information is directed primarily to hardware service providers. These concepts can help
service providers understand the relationships between the hardware components before they install, configure,
or service cluster hardware. Cluster system administrators might also find this information useful as
background to installing, configuring, and administering cluster software.

A cluster is composed of several hardware components, including the following:

Cluster nodes with local disks (unshared)

Multihost storage (disks/LUNs are shared between cluster nodes)

Removable media (tapes and CD-ROMs)

Cluster interconnect

Public network interfaces

Figure 2-1 illustrates how the hardware components work with each other.

Figure 2-1 Oracle Solaris Cluster Hardware Components

An administrative console and console access devices are used to reach the cluster nodes
or the terminal concentrator as needed. The Oracle Solaris Cluster software enables you
to combine the hardware components into a variety of configurations. The following sections
describe these configurations.

SPARC: Oracle Solaris Cluster software supports from one to sixteen cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of SPARC based systems. See SPARC: Oracle Solaris Cluster Topologies for the supported configurations.

x86: Oracle Solaris Cluster software supports from one to eight cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of x86 based systems. See x86: Oracle Solaris Cluster Topologies for the supported configurations.

Cluster nodes are generally attached to one or more multihost storage devices. Nodes
that are not attached to multihost devices can use a cluster file system
to access the data on multihost devices. For example, one scalable services configuration
enables nodes to service requests without being directly attached to multihost devices.

In addition, nodes in parallel database configurations share concurrent access to all the
disks.

Public network adapters attach nodes to the public networks, providing client access to
the cluster.

Cluster members communicate with the other nodes in the cluster through one or
more physically independent networks. This set of physically independent networks is referred to as
the cluster interconnect.

Every node in the cluster is aware when another node joins or
leaves the cluster. Additionally, every node in the cluster is aware of the
resources that are running locally as well as the resources that are running
on the other cluster nodes.
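As a quick illustration, you can verify cluster membership from any node with the clnode command. The following is a minimal sketch; the node names and exact output layout are hypothetical:

    # clnode status

    Node Name                    Status
    ---------                    ------
    phys-schost-1                Online
    phys-schost-2                Online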

Nodes in the same cluster should have the same OS and architecture, as
well as similar processing, memory, and I/O capability, to enable failover to occur
without significant degradation in performance. Because of the possibility of failover, every node
must have enough excess capacity to support the workload of all nodes for
which it is a backup or secondary.

Software Components for Cluster Hardware Members

To function as a cluster member, a cluster node must have the
following software installed:

Oracle Solaris Operating System

Oracle Solaris Cluster software

Data service applications

Figure 2-3 shows a high-level view of the software components that work together to
create the Oracle Solaris Cluster software environment.

Figure 2-3 Oracle Solaris Cluster Software Architecture

Multihost Devices

LUNs that can be connected to more than one cluster node at
a time are multihost devices. A quorum device is a shared storage device or
quorum server that is shared by two or more nodes and that contributes votes
that are used to establish a quorum. The cluster can operate only when a
quorum of votes is available. A two-node cluster requires a quorum device;
clusters with more than two nodes do not. For more information about quorum
and quorum devices, see Quorum and Quorum Devices.
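As a sketch of how to inspect quorum on a running cluster, the clquorum command lists the configured quorum voters (nodes and quorum devices) and reports current vote counts. The device and node names below are hypothetical:

    # clquorum list
    d4
    phys-schost-1
    phys-schost-2

    # clquorum status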

Local Disks

Local disks are the disks that are connected to only a single cluster
node. Local disks are therefore not protected against node failure (they are not highly
available). However, all disks, including local disks, are included in the global namespace
and are configured as global devices. Therefore, the disks themselves are visible from all
cluster nodes.

See the section Global Devices for more information about global devices.
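As an illustration of the global namespace, each disk is assigned a device ID (DID) and appears under the same path on every node. The device numbers below are hypothetical:

    # cldevice list
    d1
    d2
    d3

    # ls /dev/global/dsk
    d1s0  d1s2  d2s0  d2s2  d3s0  d3s2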

Removable Media

Removable media such as tape drives and CD-ROM drives are supported in a
cluster. In general, you install, configure, and service these devices in the same
way as in a nonclustered environment. Refer to Oracle Solaris Cluster 4.0 Hardware Administration Manual for information about
installing and configuring removable media.

See the section Global Devices for more information about global devices.

Cluster Interconnect

The cluster interconnect is the physical configuration of devices that is used to transfer
cluster-private communications and data service communications between cluster nodes.

Only nodes in the cluster can be connected to the cluster interconnect. The
Oracle Solaris Cluster security model assumes that only cluster nodes have physical access
to the cluster interconnect.

You can set up from one to six cluster interconnects in a
cluster. Although a single cluster interconnect reduces the number of adapter ports that
are used for the private interconnect, it provides no redundancy and lower availability. If
a single interconnect fails, the cluster is also at a higher risk of
having to perform automatic recovery. Whenever possible, install two or more cluster interconnects
to provide redundancy and scalability, and therefore higher availability, by avoiding a single
point of failure.
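To check whether redundant interconnect paths are up, the clinterconnect command reports the status of each transport path. A minimal sketch, with hypothetical node and adapter names:

    # clinterconnect status

    Endpoint1                Endpoint2                Status
    ---------                ---------                ------
    phys-schost-1:net1       phys-schost-2:net1       Path online
    phys-schost-1:net2       phys-schost-2:net2       Path online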

The cluster interconnect consists of three hardware components: adapters, junctions, and cables. The
following list describes each of these hardware components.

Adapters – The network interface cards that are located in each cluster node. Their names are constructed from a driver name immediately followed by a physical-unit number (for example, bge2). Some adapters have only one physical network connection, but others, like the bge card, have multiple physical connections. Some adapters combine both the functions of a NIC and an HBA.

A network adapter with multiple interfaces could become a single point of failure if the entire adapter fails. For maximum availability, plan your cluster so that the paths between two nodes do not depend on a single network adapter. On Oracle Solaris 11, adapter names are visible through the dladm show-phys command, as shown in the example after this list. For more information, see the dladm(1M) man page.

Junctions – The switches that are located outside of the cluster nodes. In a two-node cluster, junctions are not mandatory. In that case, the nodes can be connected to each other through back-to-back network cable connections. Configurations with more than two nodes generally require junctions.

Cables – The physical connections that you install either between two network adapters or between an adapter and a junction.
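A short sketch of identifying adapters on Oracle Solaris 11 with dladm show-phys; the link names and underlying devices shown here are hypothetical and depend on your hardware:

    # dladm show-phys
    LINK         MEDIA         STATE   SPEED   DUPLEX   DEVICE
    net0         Ethernet      up      1000    full     bge0
    net1         Ethernet      up      1000    full     bge1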

Figure 2-4 shows how two nodes are connected by a transport adapter, cables,
and a transport switch.

Figure 2-4 Cluster Interconnect

Public Network Interfaces

Clients connect to the cluster through the public network interfaces.

You can set up cluster nodes to include multiple public network
interface cards that perform the following functions:

Allow a cluster node to be connected to multiple subnets

Provide public network availability by having interfaces acting as backups for one another (through IPMP)

No special hardware considerations relate to clustering for the public network interfaces.
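As a sketch of the IPMP arrangement described above, on Oracle Solaris 11 you can group two public network interfaces with ipadm so that they act as backups for each other. The interface names and address below are hypothetical:

    # ipadm create-ip net0
    # ipadm create-ip net1
    # ipadm create-ipmp ipmp0
    # ipadm add-ipmp -i net0 -i net1 ipmp0
    # ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4

If net0 fails, IPMP fails the data address over to net1 so that client access to the cluster continues.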

Logging Into the Cluster Remotely

You must have console access to all cluster nodes in the cluster.
You can use the Parallel Console Access (pconsole) utility from the command line to
log into the cluster remotely. The pconsole utility is part of the Oracle
Solaris terminal/pconsole package. Install the package by executing pkg install terminal/pconsole. The pconsole utility
creates a host terminal window for each remote host that you specify on
the command line. The utility also opens a central, or master, console window
that propagates what you type there to each of the connections that you
open.
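For example, to install the utility and open consoles to three cluster nodes (the host names here are hypothetical):

    # pkg install terminal/pconsole
    # pconsole phys-schost-1 phys-schost-2 phys-schost-3

Commands typed in the central console window are sent to all three host windows at once.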

The pconsole utility can be run from within X Windows or in console
mode. Install pconsole on the machine that you will use as the administrative
console for the cluster. If you have a terminal server connected to your
cluster nodes' serial ports (serial consoles), you can access a serial console port
by specifying the IP address of the terminal server and the relevant port
on the terminal server (terminal-server-IP:port-number).
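For example, to reach the serial consoles of two nodes through a terminal server (the IP address and port numbers below are hypothetical):

    # pconsole 192.0.2.50:5001 192.0.2.50:5002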