Cluster Nodes

In a cluster that runs on any version of the Solaris OS released
before the Solaris 10 OS, a node is a physical machine that
contributes to cluster membership and is not a quorum device. In a cluster
that runs on the Solaris 10 OS, the concept of a node changes: a node
is a Solaris zone that is associated with a cluster.
In this environment, a Solaris host, or simply host, is one of the following hardware or software configurations that
runs the Solaris OS and its own processes:

A “bare metal” physical machine that is not configured
with a virtual machine or as a hardware domain

SPARC: Sun Cluster software supports from one to sixteen
Solaris hosts in a cluster. Different hardware configurations impose additional
limits on the maximum number of hosts that you can configure in a cluster
composed of SPARC-based systems. See SPARC: Sun Cluster Topologies for the supported configurations.

x86: Sun Cluster software supports from one to eight Solaris
hosts in a cluster. Different hardware configurations impose additional limits
on the maximum number of hosts that you can configure in a cluster composed
of x86-based systems. See x86: Sun Cluster Topologies for
the supported configurations.
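The platform limits above can be expressed as a quick sanity check. This is an illustrative sketch only, not part of Sun Cluster; the function name and structure are assumptions, and the numbers come from the text above (specific hardware topologies may impose stricter limits).

```python
# Illustrative sketch (not a Sun Cluster tool): check a proposed host
# count against the per-platform maximums described above.
PLATFORM_MAX_HOSTS = {"sparc": 16, "x86": 8}

def valid_host_count(platform: str, hosts: int) -> bool:
    """Return True if `hosts` is within 1..max for the platform.

    Note: particular hardware configurations can lower the effective
    maximum; see the Sun Cluster Topologies sections.
    """
    limit = PLATFORM_MAX_HOSTS[platform.lower()]
    return 1 <= hosts <= limit
```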

Solaris hosts are generally attached to one or more multihost devices.
Hosts that are not attached to multihost devices use the cluster file system
to access the multihost devices. For example, one scalable services configuration
enables hosts to service requests without being directly attached to multihost
devices.

In addition, hosts in parallel database configurations share concurrent
access to all the disks.

All nodes in the cluster are grouped under a common name (the cluster
name), which is used for accessing and managing the cluster.

Public network adapters attach hosts to the public networks, providing
client access to the cluster.

Cluster
members communicate with the other hosts in the cluster through one or more
physically independent networks. This set of physically independent networks
is referred to as the cluster interconnect.

Every node in the cluster is aware when another node joins or leaves
the cluster. Additionally, every node in the cluster is aware of the resources
that are running locally as well as the resources that are running on the
other cluster nodes.
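The membership and resource awareness described above can be sketched as a per-node view of the cluster. This is a conceptual model, not Sun Cluster code; the class, method names, and the `phys-schost-*` host names used below are hypothetical.

```python
# Conceptual sketch (hypothetical names): each node's view of cluster
# membership and of which node hosts each resource.
class ClusterView:
    def __init__(self) -> None:
        self.members: set[str] = set()
        self.resources: dict[str, str] = {}  # resource -> hosting node

    def node_joined(self, node: str) -> None:
        """Every node learns when another node joins the cluster."""
        self.members.add(node)

    def node_left(self, node: str) -> None:
        """Every node learns when a node leaves; in a real cluster,
        resources on the departed node would fail over elsewhere."""
        self.members.discard(node)
        self.resources = {r: n for r, n in self.resources.items()
                          if n != node}

    def place(self, resource: str, node: str) -> None:
        """Record which cluster member is running a resource."""
        assert node in self.members
        self.resources[resource] = node
```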

Hosts in the same cluster should have similar processing, memory, and
I/O capability to enable failover to occur without significant degradation
in performance. Because of the possibility of failover, every host must have
enough excess capacity to support the workload of every host for which it
is a backup or secondary.
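The capacity-planning rule above amounts to simple arithmetic: a host must be sized for its own workload plus, in the worst case, the workloads of all hosts it backs up. The function below is a minimal sketch of that calculation; the name and the load units are assumptions, not part of Sun Cluster.

```python
# Hypothetical sketch: worst-case capacity a host needs so that
# failover occurs without significant performance degradation.
def required_capacity(own_load: float, backed_up_loads: list[float]) -> float:
    """Own workload plus the workloads of every host for which
    this host is the backup or secondary (all in the same units)."""
    return own_load + sum(backed_up_loads)
```

For example, a host carrying 40 units of its own work that is the secondary for hosts carrying 30 and 20 units must be sized for 90 units.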