Licensing

Ensure that you have available all necessary license certificates before you begin software installation. Sun Cluster software does not require a license certificate, but each node installed with Sun Cluster software must be covered under your Sun Cluster software license agreement.

For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.

Software Patches

After installing each software product, you must also install any required patches. For proper cluster operation, ensure that all cluster nodes maintain the same patch level.

For information about current required patches, see Patches and Required Firmware Levels in Sun Cluster Release Notes or consult your Sun service provider.

Public-Network IP Addresses

You must set up a number of public-network IP addresses for various Sun Cluster components, depending on your cluster configuration. Each Solaris host in the cluster configuration must have at least one public-network connection to the same set of public subnets.

The following table lists the components that need public-network IP addresses
assigned. Add these IP addresses to the following locations:

Any naming services that are used

The local /etc/inet/hosts file on each global-cluster node, after you install Solaris software

For IPv6 IP addresses on the Solaris 9 OS, the local /etc/inet/ipnodes file on each global-cluster node, after you install Solaris software
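As an illustration, entries in each node's local /etc/inet/hosts file might look like the following for a two-node cluster. The hostnames and addresses shown here are hypothetical placeholders, not defaults:

```
# /etc/inet/hosts fragment (hypothetical names and addresses)
192.168.10.11   phys-schost-1     # public address of global-cluster node 1
192.168.10.12   phys-schost-2     # public address of global-cluster node 2
192.168.10.101  schost-lh-1       # logical hostname used by a data service
```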

Console-Access Devices

You must have console access to all cluster nodes. If you install Cluster Control Panel software on an administrative console, you must provide the hostname and port number of the console-access device that is used to communicate with the cluster nodes.

A terminal concentrator is used to communicate between the administrative console and the global-cluster node consoles.

A Sun Enterprise 10000 server uses a System Service Processor (SSP) instead of a terminal concentrator.

A Sun Fire server uses a system controller instead of a terminal concentrator.

Alternatively, if you connect an administrative console directly to cluster nodes or through a management network, you instead provide the hostname of each global-cluster node and its serial port number that is used to connect to the administrative console or the management network.

Logical Addresses

Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed.

Public Networks

Public networks communicate outside the cluster. Consider the following points when you plan your public-network configuration:

Separation of public and private network – Public networks and the private network (cluster interconnect) must use separate adapters, or you must configure tagged VLAN on tagged-VLAN capable adapters and
VLAN-capable switches to use the same adapter for both the private interconnect and the public network.

Minimum – All cluster nodes must be connected to at least one public network. Public-network connections can use different subnets for different nodes.

Maximum – You can have as many additional public-network
connections as your hardware configuration allows.

Scalable services – All nodes that run a scalable service must either use the same subnet or set of subnets or use different subnets that are routable among themselves.

IPMP groups – Each public-network adapter that is used for data-service traffic must belong to an IP network multipathing (IPMP) group. If a public-network adapter is not used for data-service traffic,
you do not have to configure it in an IPMP group.

In the Sun Cluster 3.2 11/09 release, the scinstall utility no longer automatically configures a single-adapter IPMP group on each unconfigured public-network adapter
during Sun Cluster creation. Instead, the scinstall utility automatically configures a multiple-adapter IPMP group for each set of public-network adapters in the cluster that uses the same subnet. On the Solaris 10 OS, these groups are probe based.

The scinstall utility ignores adapters that are already configured in an IPMP group. You can use probe-based IPMP groups or link-based IPMP groups
in a cluster. But probe-based IPMP groups, which test the target IP address, provide the most protection by recognizing more conditions that might compromise availability.

If any adapter in an IPMP group that the scinstall utility configures will not
be used for data-service traffic, you can remove that adapter from the group.
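As a sketch of that removal on the Solaris 10 OS, an adapter can be taken out of its IPMP group by assigning it the empty group name with ifconfig. The adapter name ce1 here is hypothetical; run the commands as superuser on the node that hosts the adapter:

```shell
# Remove hypothetical adapter ce1 from its IPMP group (Solaris 10)
ifconfig ce1 group ""
# Verify that ce1 no longer reports a groupname
ifconfig ce1
```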

Local MAC address support – All public-network adapters must use network interface cards (NICs) that support local MAC address assignment. Local MAC address assignment is a requirement of IPMP.

local-mac-address? setting – The local-mac-address? variable must use the default value true for Ethernet adapters. Sun Cluster software does not support a local-mac-address? value of false for Ethernet adapters. This requirement is a change from Sun Cluster 3.0, which required a local-mac-address? value of false.
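On SPARC based nodes, the current value can be inspected, and if necessary corrected, with the eeprom command. This is a sketch; a change to the variable takes effect at the next reboot:

```shell
# Display the current setting (run as superuser)
eeprom "local-mac-address?"
# The output for a Sun Cluster node must be: local-mac-address?=true
# If the value is false, set it to true and reboot the node:
eeprom "local-mac-address?=true"
```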

Quorum Servers

You can use Sun Cluster Quorum Server software to configure a machine as a quorum server and then configure the quorum server as your cluster's quorum device. You can use a quorum server instead of or in addition to shared disks and NAS filers.

Consider the following points when you plan the use of a quorum server in a Sun Cluster configuration.

Network connection – The quorum-server computer connects to your cluster through the public network.

Supported hardware – The supported hardware platforms for a quorum server are the same as for a global-cluster node.

Service to multiple clusters – You can configure a quorum server as a quorum device to more than one cluster.

Mixed hardware and software – You do not have to configure a quorum server on the same hardware and software platform as the cluster or clusters that it provides quorum to. For example, a SPARC based machine that runs the Solaris 9 OS can
be configured as a quorum server for an x86 based cluster that runs the Solaris 10 OS.

Spanning tree algorithm – You must disable the spanning tree algorithm on the Ethernet switches for the ports that are connected
to the cluster public network where the quorum server will run.

Using a cluster node as a quorum server – You can configure a quorum server on a cluster node to provide quorum for clusters other than the cluster that the node belongs to. However, a quorum server that
is configured on a cluster node is not highly available.

NFS Guidelines

Consider the following points when you plan the use of Network File System (NFS) in a Sun Cluster configuration.

NFS client – No Sun Cluster node can be an NFS client of a Sun Cluster HA for NFS-exported file system that is being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files
among global-cluster nodes.

NFSv3 protocol – If you are mounting file systems on the cluster nodes from external NFS servers, such as NAS filers, and you are using the NFSv3 protocol, you cannot run NFS client mounts and the Sun Cluster HA for NFS data
service on the same cluster node. If you do, certain Sun Cluster HA for NFS data-service activities might cause the NFS daemons to stop and restart, interrupting NFS services. However, you can safely run the Sun Cluster HA for NFS data service if you use the NFSv4 protocol to mount external NFS file systems on the cluster
nodes.

Locking – Applications that run locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local locking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd(1M)). During restart, a blocked local process might be granted a lock that is intended to be reclaimed by a remote client. This would cause unpredictable behavior.

NFS security features – Sun Cluster software does not support the following options of the share_nfs(1M) command:

secure

sec=dh

However, Sun Cluster software does support the following security features for NFS:

The use of secure ports for NFS. You enable secure ports for NFS by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
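The following sketch shows one way to add that entry idempotently. It operates on a scratch copy at /tmp/system.new; on an actual cluster node you would edit /etc/system itself, repeat the change on every node, and reboot for the setting to take effect:

```shell
# Append the secure-ports tuning only if it is not already present.
# /tmp/system.new stands in for /etc/system in this sketch.
cp /etc/system /tmp/system.new 2>/dev/null || touch /tmp/system.new
grep -q 'nfssrv:nfs_portmon' /tmp/system.new ||
    echo 'set nfssrv:nfs_portmon=1' >> /tmp/system.new
grep 'nfs_portmon' /tmp/system.new
```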

No fencing support for NAS devices in non-global zones – Sun Cluster software does not provide fencing support for NFS-exported file systems from a NAS device when such file systems are used in a non-global zone, including nodes of a zone cluster. Fencing support
is provided only for NFS-exported file systems in the global zone.

Service Restrictions

Observe the following service restrictions for Sun Cluster configurations:

Routers – Do not configure cluster nodes as routers (gateways), for reasons that include the following:

Routing protocols might inadvertently broadcast the cluster interconnect as a publicly reachable network to other routers, despite the setting of the IFF_PRIVATE flag on the interconnect interfaces.

NIS+ servers – Do not configure cluster nodes as NIS or NIS+ servers. There is no data service available for NIS or NIS+. However, cluster
nodes can be NIS or NIS+ clients.

Boot and install servers – Do not use a Sun Cluster configuration to provide a highly available boot or installation service on client systems.

RARP – Do not use a Sun Cluster configuration to provide an rarpd service.

RPC program numbers – If you install an RPC service on the cluster, the service must not use any of the following program numbers:

100141

100142

100248

These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and pmfd, respectively.

If the RPC service that you install also uses one of these program numbers, you must change that RPC service to use
a different program number.
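A candidate program number can be screened against the reserved list with a few lines of shell. The MYPROG value below is the hypothetical program number of a service being evaluated:

```shell
# Sun Cluster reserved RPC program numbers:
# 100141 (rgmd_receptionist), 100142 (fed), 100248 (pmfd)
RESERVED="100141 100142 100248"
MYPROG=100248        # hypothetical program number of the service to install
conflict=no
for p in $RESERVED; do
    [ "$MYPROG" -eq "$p" ] && conflict=yes
done
echo "conflict=$conflict"    # -> conflict=yes
```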

Scheduling classes – Sun Cluster software does not support the running of high-priority process scheduling classes on cluster nodes. Do
not run either of the following types of processes on cluster nodes:

Processes that run in the time-sharing scheduling class with a high priority

Processes that run in the real-time scheduling class

Sun Cluster software relies on kernel threads that do not run in the real-time scheduling class. Other time-sharing processes that run at higher-than-normal priority or real-time processes can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.

Network Time Protocol (NTP)

Synchronization – The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time.

Accuracy – Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs if this basic requirement for synchronization is met.

Error messages about nonexistent nodes – Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs
a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon
might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information about how to suppress these messages under otherwise normal cluster conditions.

See the Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines about how to configure NTP
for a Sun Cluster configuration.
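As an illustrative sketch modeled on the /etc/inet/ntp.cluster template, a two-node cluster's ntp.conf typically peers the nodes over their private hostnames. Consult the template itself for the authoritative layout:

```
# Illustrative ntp.conf fragment for a two-node cluster; private
# hostnames follow the default clusternodeN-priv convention.
peer clusternode1-priv prefer
peer clusternode2-priv
```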

Sun Cluster Configurable Components

This section provides guidelines for the following Sun Cluster components that you configure:

Global-Cluster Voting-Node Names

The name of a voting node in a global cluster is the same name that you assign to the physical or virtual host when you install it with the Solaris OS. See the hosts(4) man page for information about naming requirements.

In single-host cluster installations, the default cluster name is the name of the voting node.

During Sun Cluster configuration, you specify the names of all voting nodes that you are installing in the global cluster.

For information about node names in a zone cluster, see Zone Clusters.

Zone Names

On versions of the Solaris 10 OS that support Solaris brands, a non-global zone of brand native is a valid potential node of a resource-group node list. Use the naming convention nodename:zonename to specify a non-global zone to a Sun Cluster command.

The nodename is the name of the Solaris host.

The zonename is the name that you assign to the non-global zone when you create the zone on the voting node. The zone name must be unique on the node. However, you can use the same zone name on different voting nodes. The different node name in nodename:zonename makes the complete non-global zone name unique in the cluster.

To specify the global zone, you need to specify only the voting-node name.

For information about a cluster of non-global zones, see Zone Clusters.

Private Network

Note –

You do not need to configure a private network for a single-host global cluster. The scinstall utility automatically assigns the default private-network address and netmask, even though a private network is not used by the cluster.

Sun Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Sun Cluster software. A Sun Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Sun Cluster software on the first node of the cluster, you either accept the default private-network address and netmask or specify different values in one of the following ways:

On the Solaris 10 OS, the default netmask is 255.255.240.0. This IP address range supports a combined maximum of 64 voting nodes and non-global zones, a maximum of 12 zone clusters, and a maximum of 10 private networks.

On the Solaris 9 OS, the default netmask is 255.255.248.0. This IP address range supports a combined maximum of 64 nodes and a maximum of 10 private networks.

Note –

The maximum number of voting nodes that an IP address range can support does not reflect the maximum number of voting nodes that the hardware or software configuration can currently support.

Specify a different allowable private-network address and accept the default netmask.

Accept the default private-network address and specify a different netmask.

Specify both a different private-network address and a different netmask.

If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. On the Solaris 10 OS, the utility also prompts you for the number of zone clusters
that you want to support. The number of global-cluster nodes that you specify should also include the expected number of unclustered non-global zones that will use the private network.

The utility calculates the netmask for the minimum IP address range that will support the number of nodes, zone clusters, and private networks that you specified. The calculated netmask might support more than the supplied number of nodes, including non-global zones, zone clusters, and private
networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes, zone clusters, and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the
IP address range.

The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.

Note –

Changing the cluster private IP-address range might be necessary to support the addition of voting nodes, non-global zones, zone clusters, or private networks.

However, on the Solaris 10 OS the cluster can remain in cluster mode if you use the cluster set-netprops command to change only the netmask. For any
zone cluster that is already configured in the cluster, the private IP subnets and the corresponding private IP addresses that are allocated for that zone cluster will also be updated.
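For example, a netmask-only change might be made with a command like the following. The property name and value are shown as a sketch; verify the exact syntax against the cluster(1CL) man page, and note that changes beyond the netmask require the nodes to be in noncluster mode:

```shell
# Change only the private-network netmask (Solaris 10); for this
# netmask-only change the cluster can remain in cluster mode.
cluster set-netprops -p private_netmask=255.255.248.0
```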

If you specify a private-network address other than the default, the address must meet the following requirements:

Address and netmask sizes – The private network address cannot be smaller than the netmask. For example, you can use a private network address of 172.16.10.0 with a netmask of 255.255.255.0. But you
cannot use a private network address of 172.16.10.0 with a netmask of 255.255.0.0.
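This size rule is equivalent to requiring that the address have no bits set outside the netmask, that is, address AND netmask must equal the address. A small portable sh sketch of the check, using the two examples above:

```shell
# Check that a private-network address has no bits set outside the
# netmask (address AND netmask must equal the address itself).
check() {
    addr=$1 mask=$2
    oldifs=$IFS
    IFS=.
    set -- $addr
    a=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    set -- $mask
    m=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    IFS=$oldifs
    if [ $(( a & m )) -eq "$a" ]; then
        echo "$addr/$mask: ok"
    else
        echo "$addr/$mask: invalid"
    fi
}
check 172.16.10.0 255.255.255.0    # -> 172.16.10.0/255.255.255.0: ok
check 172.16.10.0 255.255.0.0      # -> 172.16.10.0/255.255.0.0: invalid
```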

Acceptable addresses – The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain copies of RFCs or view RFCs online at http://www.rfcs.org.

Use in multiple clusters – You can use the same private-network address in more than one cluster, provided that the clusters are on different private networks. Private IP network addresses are not accessible from outside the physical cluster.

For Sun Logical Domains (LDoms)
guest domains that are created on the same physical machine and that are connected to the same virtual switch, the private network is shared by such guest domains and is visible to all these domains. Proceed with caution before you specify a private-network IP address range to the scinstall utility
for use by a cluster of guest domains. Ensure that the address range is not already in use by another guest domain that exists on the same physical machine and shares its virtual switch.

IPv6 – Sun Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses
on the private-network adapters to support scalable services that use IPv6 addresses. But internode communication on the private network does not use these IPv6 addresses.

See Planning Your TCP/IP Network (Tasks), in System Administration Guide: IP Services (Solaris 9 or Solaris 10) for more information about private networks.

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Sun Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the naming convention clusternodenodeid-priv, where nodeid is the numeral of the internal node ID. During Sun Cluster configuration, the node ID number is automatically assigned to each voting node when the node becomes a cluster member. A voting
node of the global cluster and a node of a zone cluster can both have the same private hostname, but each hostname resolves to a different private-network IP address.

After a global cluster is configured, you can rename its private hostnames by using the clsetup(1CL) utility. Currently, you cannot rename the private
hostname of a zone-cluster node.

For the Solaris 10 OS, the creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.

Cluster Interconnect

The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected either between two transport adapters (a point-to-point connection) or between a transport adapter and a transport switch.

You do not need to configure a cluster interconnect for a single-host cluster. However, if you anticipate eventually adding more voting nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use.

During Sun Cluster configuration, you specify configuration information for one or two cluster interconnects.

If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network. For more information, see the guidelines for tagged VLAN adapters in Transport Adapters.

You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and less availability. If a single interconnect fails, the cluster is
at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.

You can configure additional cluster interconnects, up to six interconnects total, after the cluster is established by using the clsetup(1CL) utility.

Transport Adapters

For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-host cluster, you also specify whether your interconnect
is a point-to-point connection (adapter to adapter) or uses a transport switch.

Consider the following guidelines and restrictions:

IPv6 – Sun Cluster software
does not support IPv6 communications over the private interconnects.

Local MAC address assignment – All private network adapters must use network interface cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which are required on private-network adapters to support IPv6 public-network
addresses, are derived from the local MAC addresses.

Tagged VLAN adapters – Sun Cluster software supports tagged Virtual
Local Area Networks (VLANs) to share an adapter between the private cluster interconnect and the public network. To configure a tagged VLAN adapter for the cluster interconnect, specify the adapter name and its VLAN ID (VID) in one of the following ways:

Specify the usual adapter name, which is the device name plus the instance number or physical point of attachment (PPA). For example, the name of instance 2 of a Cassini Gigabit Ethernet adapter would be ce2. If the scinstall utility asks whether
the adapter is part of a shared virtual LAN, answer yes and specify the adapter's VID number.

Specify the adapter by its VLAN virtual device name. This name is composed of the adapter name plus the VLAN instance number. The VLAN instance number is derived from the formula (1000*V)+N, where V is
the VID number and N is the PPA.

As an example, for VID 73 on adapter ce2, the VLAN instance number would be calculated as (1000*73)+2. You would therefore specify the adapter name as ce73002 to indicate that it is part
of a shared virtual LAN.
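The derivation can be confirmed with shell arithmetic, using the VID and PPA values from the example above:

```shell
# VLAN virtual device name: adapter name + ((1000 * VID) + PPA)
VID=73    # VLAN ID from the example
PPA=2     # physical point of attachment (instance number of ce2)
vlan_device="ce$(( 1000 * VID + PPA ))"
echo "$vlan_device"    # -> ce73002
```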

See the scconf_trans_adap_*(1M) family of man pages for information about a specific transport adapter.

Transport Switches

If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You
can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.

Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the Solaris host that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types, such as SCI-PCI.

Note –

Clusters with three or more voting nodes must use transport switches. Direct connection between voting cluster nodes is supported only for two-host clusters.

If your two-host cluster is direct connected, you can still specify a transport switch for the interconnect.

Tip –

If you specify a transport switch, you can more easily add another voting node to the cluster in the future.

Global Fencing

Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations. By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default
global fencing setting of pathcount. With the pathcount setting, the fencing protocol for each shared disk is chosen based on the number of DID paths that are attached to the disk.

In Custom Mode, the scinstall utility prompts you whether to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing to support the following situations:

Caution –

If you disable fencing under other situations than the following, your data might be vulnerable to corruption during application failover. Examine this data corruption possibility carefully when you consider turning off fencing.

The shared storage does not support SCSI reservations.

If you turn off fencing for a shared disk that you then configure as a quorum device, the device uses the software quorum protocol. This is true regardless of whether the disk supports SCSI-2 or SCSI-3 protocols.
Software quorum is a protocol in Sun Cluster software that emulates a form of SCSI Persistent Group Reservations (PGR).

You want to enable systems that are outside the cluster to gain access to storage that is attached to the cluster.

If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster. After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol
of a quorum device, you must first unconfigure the quorum device. Then set the new fencing protocol of the disk and reconfigure it as a quorum device.
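Sketched with the Sun Cluster 3.2 command set, for a hypothetical DID device d3 and a target protocol of scsi3 (verify the exact options against the clquorum(1CL) and cldevice(1CL) man pages):

```shell
# 1. Unconfigure the quorum device (hypothetical DID device d3).
clquorum remove d3
# 2. Set the new fencing protocol on the underlying shared disk.
cldevice set -p default_fencing=scsi3 d3
# 3. Reconfigure the disk as a quorum device.
clquorum add d3
```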

Quorum Devices

Sun Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a voting node, the quorum device prevents amnesia or split-brain problems when the voting cluster node attempts to rejoin the cluster. For more information
about the purpose and function of quorum devices, see Quorum and Quorum Devices in Sun Cluster Concepts Guide for Solaris OS.

During Sun Cluster installation of a two-host cluster, you can choose to let the scinstall utility automatically configure as a quorum device an available shared disk in the configuration. Shared disks include any Sun NAS device that is configured for use as a shared
disk. The scinstall utility assumes that all available shared disks are supported as quorum devices.

If you want to use a quorum server or a Network Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.

After installation, you can also configure additional quorum devices by using the clsetup(1CL) utility.

Note –

You do not need to configure quorum devices for a single-host cluster.

If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.

Consider the following points when you plan quorum devices.

Minimum – A two-host cluster must have at least one quorum device, which can be a shared disk, a quorum server, or a NAS device. For other topologies, quorum devices are optional.

Odd-number rule – If more than one quorum device is configured in a two-host cluster, or in a pair of hosts directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices
have completely independent failure pathways.

Distribution of quorum votes – For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by voting nodes. Otherwise,
the nodes cannot form a cluster if all quorum devices are unavailable, even if all nodes are functioning.

Connection – You must connect a quorum device to at least two voting nodes.

SCSI fencing protocol – When a SCSI shared-disk quorum device is configured, its fencing protocol is automatically set to SCSI-2 in a two-host cluster or SCSI-3 in a cluster with three or more voting nodes.

Changing the fencing protocol of quorum devices – For SCSI disks that are configured as a quorum device, you must unconfigure the quorum device before you can enable or disable its SCSI fencing protocol.

Software quorum protocol – You can configure supported shared disks that do not support the SCSI protocol, such as SATA disks, as quorum devices. You must disable fencing for such disks. The disks then use the software quorum protocol, which emulates SCSI PGR.

SCSI shared disks also use the software quorum protocol if fencing is disabled for those disks.

ZFS storage pools – Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost. The disk can then no longer provide
a quorum vote to the cluster.

After a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can unconfigure the quorum device, add it to the storage pool, then reconfigure the disk as a quorum device.

Zone Clusters

On the Solaris 10 OS, a zone cluster is a cluster of non-global zones. All nodes of a zone cluster are configured as non-global zones of the cluster brand. No other brand type is permitted in a zone cluster. You can run supported services on the zone cluster similar to
a global cluster, with the isolation that is provided by Solaris zones.

Consider the following points when you plan the creation of a zone cluster.

Global-Cluster Requirements and Guidelines

Global cluster – The zone cluster must be configured on a global Sun Cluster configuration. A zone cluster cannot be configured without an underlying global cluster.

Minimum Solaris OS – The global cluster must run at least the Solaris 10 5/08 OS.

Cluster mode – The global-cluster voting node from which you create or modify a zone cluster must be in cluster mode. If any other voting nodes are in noncluster mode when you administer a zone cluster, the changes that you make are propagated
to those nodes when they return to cluster mode.

Adequate private IP addresses – The private IP-address range of the global cluster must have enough free IP-address subnets for use by the new zone cluster. If the number of available subnets is insufficient, the creation of the zone cluster fails.

Changes to the private IP-address range – The private IP subnets and the corresponding private IP-addresses that are available for zone clusters
are automatically updated if the global cluster's private IP-address range is changed. If a zone cluster is deleted, the cluster infrastructure frees the private IP-addresses that were used by that zone cluster, making the addresses available for other use within the global cluster and by any
other zone clusters that depend on the global cluster.

Supported devices – Devices that are supported with Solaris zones can be exported to a zone cluster.

Zone-Cluster Requirements and Guidelines

Distribution of nodes – You cannot host multiple nodes of the same zone cluster on the same node of the global cluster. A global-cluster node can host multiple zone-cluster nodes as long as each node is a member of a different zone cluster.

Node creation – You must create at least one zone-cluster node at the time that you create the zone cluster. The names of the nodes must be unique within the zone cluster. The infrastructure automatically creates an underlying non-global
zone on each global-cluster node that hosts the zone cluster. Each non-global zone is given the same zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster. For example, if you create a zone cluster that is named zc1,
the corresponding non-global zone name on each global-cluster node that hosts the zone cluster is also zc1.

Cluster name – The name of the zone cluster must be unique
throughout the global cluster. The name cannot also be used by a non-global zone elsewhere in the global cluster, nor can the name be the same as that of a global-cluster node. You cannot use “all” or “global” as a zone-cluster name, because these are reserved names.

Private hostnames – During creation of the zone cluster, a private hostname is automatically created for each node of the zone cluster, in the
same way that hostnames are created in global clusters. Currently, you cannot rename the private hostname of a zone-cluster node. For more information about private hostnames, see Private Hostnames.

Solaris zones brand – All nodes of a zone cluster are configured as non-global zones of the cluster brand. No other brand type is permitted
in a zone cluster.

Conversion to a zone-cluster node – You cannot add an existing non-global zone to a zone cluster.

File systems –
You can use the clzonecluster command to add the following types of file systems for use by a zone cluster. You configure an HAStoragePlus resource to manage the mounting of the file system:

Local file systems

QFS shared file systems, only when used to support Oracle Real Application Clusters

ZFS storage pools

To add a local file system that is not managed by an HAStoragePlus resource to a zone cluster, you instead use the zonecfg command as you normally would in a stand-alone system.
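As a sketch, a UFS file system could be added to a hypothetical zone cluster named zc1 in a clzonecluster configuration session similar to the following. The device paths are placeholders, and an HAStoragePlus resource would then manage the mounting of the file system:

```
# Interactive clzonecluster session (zone-cluster name and paths
# are hypothetical).
phys-schost-1# clzonecluster configure zc1
clzc:zc1> add fs
clzc:zc1:fs> set dir=/global/appdata
clzc:zc1:fs> set special=/dev/md/datads/dsk/d100
clzc:zc1:fs> set raw=/dev/md/datads/rdsk/d100
clzc:zc1:fs> set type=ufs
clzc:zc1:fs> end
clzc:zc1> commit
clzc:zc1> exit
```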

No fencing support for NAS devices in non-global zones – Sun Cluster software does not provide fencing support for NFS-exported file systems from a NAS device when such file systems are used in a non-global zone, including nodes of a zone cluster. Fencing support
is provided only for NFS-exported file systems in the global zone.