Planning the Sun Cluster HA for N1 Service Provisioning System Installation and
Configuration

This section contains the information you need to plan your Sun Cluster HA for N1 Service Provisioning System installation
and configuration.

N1 Grid Service Provisioning System and Solaris Containers

Sun Cluster HA for N1 Service Provisioning System is supported in Solaris Containers. Sun Cluster
offers two concepts for Solaris Containers.

Zones are containers that boot automatically after a reboot of
the node. These containers are combined with resource groups
that use nodename:zonename as a valid “nodename”
in the resource group's nodename list.
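For the first concept, the zone appears directly in the resource group's nodename list. A minimal sketch, assuming the scrgadm command is available and using placeholder physical node and zone names:

```shell
# Sketch: create a failover resource group whose nodename list uses the
# nodename:zonename form (phys-node1, phys-node2, and myzone are
# placeholder names).
scrgadm -a -g n1sps-rg -h phys-node1:myzone,phys-node2:myzone
```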

Failover zone containers are managed by the Solaris Container
agent and are represented by a resource in a resource group.

Configuration Restrictions

This section lists the software and hardware configuration
restrictions that apply only to Sun Cluster HA for N1 Service Provisioning System.

For restrictions that apply to all data services, see the Sun Cluster Release Notes.

Caution –

Your data service configuration might not be supported if you
do not adhere to these restrictions.

Restriction for the N1 Grid Service Provisioning System Data Service
Configuration

Sun Cluster HA for N1 Service Provisioning System can be configured only as a failover data service.
Because each component of N1 Grid Service Provisioning System can operate only as a failover data
service, all components of Sun Cluster HA for N1 Service Provisioning System must be configured
to run as failover data services.

Restriction for the N1 Grid Service Provisioning System Storage Configuration

Install the N1 Grid Service Provisioning System components on shared storage. The Master
Server and the Local Distributor must be installed on the shared storage.
Remote Agents that are configured to bind to the logical host must
be installed on the shared storage as well.

Note –

This restriction is automatically satisfied in failover zone configurations.

Restriction for Configuring the N1 Grid Service Provisioning System Remote
Agent

Configure a Sun Cluster resource for the N1 Grid Service Provisioning System Remote Agent
only if the Remote Agent uses raw or ssl communication. As long as the Remote Agent
is configured for ssh communication, the Master Server starts and stops
the Remote Agent on every connection, so no Sun Cluster resource is needed.
In the ssh scenario, you have to install the N1 Grid Service Provisioning System Remote Agent
on the shared storage and copy the ssh keys from one node to the remaining
nodes of the cluster. This ensures that all cluster nodes present the same
ssh identity.

Note –

There is no need to copy ssh keys in failover zone configurations.
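One way to copy the ssh keys is to distribute the host keys from the node where the Remote Agent was installed. A minimal sketch, assuming OpenSSH default key paths and a placeholder node name phys-node2; adapt it to your cluster and key layout:

```shell
# Sketch (assumptions: OpenSSH default key paths; phys-node2 is a
# placeholder node name). Copy the ssh host keys from the node where the
# Remote Agent was installed to the remaining cluster node, so the Master
# Server sees the same ssh identity wherever the agent runs.
scp -p /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub \
    phys-node2:/etc/ssh/
# Restart the ssh service on the receiving node to pick up the new keys.
ssh phys-node2 svcadm restart svc:/network/ssh:default
```

Repeat the copy for every remaining node of the cluster.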

Restriction for the N1 Grid Service Provisioning System smf Service
Name in a Failover Zone

The N1 Grid Service Provisioning System configuration in a failover zone uses the smf component
of Sun Cluster HA for Solaris Containers. The registration of the N1 Grid Service Provisioning System data
service in a failover zone defines an smf service to control
the N1 Grid Service Provisioning System database. The name of this smf service
is generated in this naming scheme: svc:/application/sczone-agents:resource-name. No other smf service with
exactly this name can exist.

The associated smf manifest is automatically created
during the registration process in this location and naming scheme: /var/svc/manifest/application/sczone-agents/resource-name.xml.
No other manifest can coexist with this name.
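Before registering, you can derive the names that the registration will generate and confirm that they are free. A minimal sketch, using the placeholder resource name n1sps-rs:

```shell
# Sketch: derive the smf service FMRI and manifest path that registration
# generates for a given resource name (n1sps-rs is a placeholder).
RS=n1sps-rs
FMRI="svc:/application/sczone-agents:${RS}"
MANIFEST="/var/svc/manifest/application/sczone-agents/${RS}.xml"
echo "${FMRI}"
echo "${MANIFEST}"
```

On the cluster node, svcs "${FMRI}" should report no such service, and the manifest file should not yet exist.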

Configuration Requirements

These requirements apply to Sun Cluster HA for N1 Service Provisioning System only. You must meet these
requirements before you proceed with your Sun Cluster HA for N1 Service Provisioning System installation and
configuration.

Caution –

Your data service configuration might not be supported if you
do not adhere to these requirements.

Configure the N1 Grid Service Provisioning System base directory on
shared storage in a failover file system

Create the N1 Grid Service Provisioning System base directory on the shared storage. The
base directory can reside on a global file system (GFS) or
on a failover file system (FFS) with an HAStoragePlus resource.
It is best practice to store it on an FFS.

The FFS is required because the Master Server uses the directory structure
to store its configuration, logs, deployed applications, database, and so on.
The Remote Agent and the Local Distributor store their caches below the base
directory. Splitting the installation, with the binaries on local storage
and the dynamic parts of the data on shared storage, is not recommended.

Note –

It is best practice to mount Global File Systems with the /global
prefix and to mount Failover File Systems with the /local prefix.

N1 Grid Service Provisioning System components and dependencies –

You can configure the Sun Cluster HA for N1 Service Provisioning System data service to protect one
or more N1 Grid Service Provisioning System instances or components. Each instance or component
needs to be covered by one Sun Cluster HA for N1 Service Provisioning System resource. The dependencies between
the Sun Cluster HA for N1 Service Provisioning System resource and other necessary resources are described
in the following table.

Table 3 Dependencies
Between Sun Cluster HA for N1 Service Provisioning System Components in Failover Configurations

Component: N1 Grid Service Provisioning System resource in a Solaris 10 global zone or
non-global zone, or in Solaris 9.

Dependencies: SUNW.HAStoragePlus (required only
if the configuration uses a failover file system or file systems in a zone);
SUNW.LogicalHostName.

Component: N1 Grid Service Provisioning System resource in a Solaris 10 failover zone.

Dependencies: the Sun Cluster HA for Solaris Containers boot resource;
SUNW.HAStoragePlus;
SUNW.LogicalHostName (required only if the zone's boot resource does not manage
the zone's IP address).
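These dependencies are expressed through variables in the component's configuration file (described under Configuration and Registration Files). A minimal sketch with placeholder values:

```shell
# Sketch: the register scripts encode the table's dependencies through
# the config-file variables (all values are placeholders).
LH=n1sps-lh            # SUNW.LogicalHostname resource
HAS_RS=n1sps-has-rs    # SUNW.HAStoragePlus resource (failover file system)
# In a failover zone, additionally:
ZONE_BT=n1sps-zone-rs  # Sun Cluster HA for Solaris Containers boot resource
```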

Note –

For more detailed information about N1 Grid Service Provisioning System, refer to
the product documentation on the docs.sun.com webpage
or the documentation delivered with the product.

Configuration and Registration Files

Each component of Sun Cluster HA for N1 Service Provisioning System has configuration and registration
files in the directory /opt/SUNWscsps/component-dir/util, where component-dir is one of the
directory names master, localdist, or remoteagent. These files let you register
the N1 Grid Service Provisioning System component with Sun Cluster.

# cd /opt/SUNWscsps/remoteagent
#
# ls -l util
total 34
-r-xr-xr-x 1 root bin 1363 Jun 6 13:54 spsra_config
-r-xr-xr-x 1 root bin 7556 Jun 6 13:54 spsra_register
-r-xr-xr-x 1 root bin 4478 Jun 6 13:54 spsra_smf_register
-r-xr-xr-x 1 root bin 1347 Jun 6 13:54 spsra_smf_remove
# more util/spsra_config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsra_config.ksh 1.2 06/03/17 SMI"
# This file will be sourced in by spsra_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS - name of the resource for the application
# RG - name of the resource group containing RS
# PORT - name of the port number to satisfy GDS registration
# LH - name of the LogicalHostname SC resource
# USER - name of the owner of the remote agent
# BASE - name of the directory where the N1 Service Provisioning Server
# is installed
# HAS_RS - name of the HAStoragePlus SC resource
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
# smf service.
# If the variable is not set it will be translated as :default for
# the smf credentials.
# Optional
#
RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=

The spsra_register script validates the variables
of the spsra_config script and registers the resource for
the remote agent.
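For illustration, a possible completed util/spsra_config for a Remote Agent that uses raw communication; every value below, including the port number, is a placeholder:

```shell
# Illustrative values only (all placeholders).
RS=n1sps-ra-rs            # resource name for the Remote Agent
RG=n1sps-rg               # resource group that also holds LH and HAS_RS
PORT=1131                 # port the Remote Agent listens on
LH=n1sps-lh               # SUNW.LogicalHostname resource
USER=n1sps                # owner of the Remote Agent installation
BASE=/local/n1sps/agent   # Remote Agent install directory on the FFS
HAS_RS=n1sps-has-rs       # SUNW.HAStoragePlus resource
```

After saving the file, run /opt/SUNWscsps/remoteagent/util/spsra_register to create the resource.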

# cd /opt/SUNWscsps/localdist
#
# ls -l util
total 34
-r-xr-xr-x 1 root bin 1369 Jun 6 13:54 spsld_config
-r-xr-xr-x 1 root bin 7550 Jun 6 13:54 spsld_register
-r-xr-xr-x 1 root bin 4501 Jun 6 13:54 spsld_smf_register
-r-xr-xr-x 1 root bin 1347 Jun 6 13:54 spsld_smf_remove
# more util/spsld_config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)spsld_config.ksh 1.2 06/03/17 SMI"
# This file will be sourced in by spsld_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
# RS - name of the resource for the application
# RG - name of the resource group containing RS
# PORT - name of the port number to satisfy GDS registration
# LH - name of the LogicalHostname SC resource
# USER - name of the owner of the local distributor
# BASE - name of the directory where the N1 Service Provisioning Server
# is installed
# HAS_RS - name of the HAStoragePlus SC resource
#
#
# The following variables need to be set only if the agent runs in a
# failover zone
#
# ZONE - Zonename where the zsmf component should be registered
# ZONE_BT - Resource name of the zone boot component
# PROJECT - A project in the zone, that will be used for the PostgreSQL
# smf service.
# If the variable is not set it will be translated as :default for
# the smf credentials.
# Optional
#
RS=
RG=
PORT=22
LH=
USER=
BASE=
HAS_RS=
# failover zone specific options
ZONE=
ZONE_BT=
PROJECT=

The spsld_register script validates the variables
of the spsld_config script and registers the resource for
the local distributor.
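Registration follows the same pattern for every component. A minimal sketch for the Local Distributor, using a placeholder resource name:

```shell
# Sketch: register and enable the Local Distributor resource after
# completing util/spsld_config (n1sps-ld-rs is a placeholder name).
/opt/SUNWscsps/localdist/util/spsld_register
scswitch -e -j n1sps-ld-rs
```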