Chapter 4 SPARC: Installing and Configuring VERITAS Volume Manager

Install and configure your local and multihost disks for VERITAS Volume Manager (VxVM) by using the procedures in this chapter, along with the planning information in Planning Volume Management. See your VxVM documentation for additional details.

SPARC: Setting Up a Root Disk Group Overview

For VxVM 3.5, you must create a root disk group on each cluster node after you install VxVM. This root disk group is used by VxVM to store configuration information and has the following restrictions.

Access to a node's root disk group must be restricted to only that node.

Remote nodes must never access data stored in another node's root disk group.

Do not use the scconf(1M) command to register the root disk group as a disk device group.

Whenever possible, configure the root disk group for each node on a nonshared disk.

Sun Cluster software supports the following methods to configure the root disk group.

Encapsulate the node's root disk – This method enables the root disk to be mirrored, which provides a boot alternative if the root disk is corrupted or damaged. To encapsulate the root disk you need two free disk slices as well as free
cylinders, preferably at the beginning or the end of the disk.

Use local nonroot disks – This method provides an alternative to encapsulating the root disk. If a node's root disk is encapsulated, certain tasks that you might later perform, such as upgrading the Solaris OS or performing disaster recovery procedures, could be more complicated than if the root disk is not encapsulated. To avoid this potential added complexity, you can instead initialize or encapsulate local nonroot disks for use as root disk groups.

A root disk group that is created on local nonroot disks is local to that node; it is neither globally accessible nor highly available. As with the root disk, to encapsulate a nonroot disk you need two free disk slices as well as free cylinders at the beginning or the end of the disk.
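
One way to check whether a disk has the two free slices and the free cylinders that encapsulation needs is to inspect the disk's label (a sketch; the device name c1t1d0s2 is hypothetical):

# prtvtoc /dev/rdsk/c1t1d0s2

The output lists each assigned slice with its first sector and sector count. Slice numbers that are absent from the partition map are unassigned, and the geometry reported in the header helps you locate unallocated cylinders.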

See your VxVM installation documentation for more information.

SPARC: How to Install VERITAS Volume Manager Software

Perform this procedure to install VERITAS Volume Manager (VxVM) software on each node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM only on the nodes that are physically connected to the storage devices that VxVM will manage.

Before You Begin

Perform the following tasks:

Ensure that all nodes in the cluster are running in cluster mode.

Obtain any VERITAS Volume Manager (VxVM) license keys that you need to install.

Have available your VxVM installation documentation.

Steps

Become superuser on a cluster node that you intend to install with VxVM.
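
The remaining installation steps depend on your VxVM release and installation media. As a rough sketch (the media mount point /cdrom/cdrom0 and the package set are assumptions; follow your VxVM installation documentation for the authoritative procedure):

# scstat -n
# pkgadd -d /cdrom/cdrom0 VRTSvlic VRTSvxvm

The scstat -n command verifies that all nodes report Online, which confirms cluster mode, and pkgadd adds the VxVM packages from the media.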

SPARC: How to Create a Root Disk Group on a Nonroot Disk

Perform this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. Root disk groups are required for VxVM 3.5. For VxVM 4.0 and later, root disk groups are optional. See your VxVM documentation for more information.

Before You Begin

If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders
to each VxVM slice.

Steps

Become superuser on the node.

Start the vxinstall utility.

# vxinstall

When prompted, make the following choices or entries.

If you intend to enable the VxVM cluster feature, supply the cluster feature license key.

Choose Custom Installation.

Do not encapsulate the boot disk.

Choose any disks to add to the root disk group.

Do not accept automatic reboot.

If the root disk group that you created contains one or more disks that connect to more than one node, enable the localonly property.

Use the following command to enable the localonly property of the raw-disk device group
for each shared disk in the root disk group.

# scconf -c -D name=dsk/dN,localonly=true

When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.
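
To verify the setting, you can print the device-group configuration and confirm that the localonly property is now true (a sketch, using the same dsk/dN placeholder as in the command above):

# scconf -pvv | grep dsk/dN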

SPARC: How to Mirror the Encapsulated Root Disk

After you install VxVM and encapsulate the root disk, perform this procedure on each node whose encapsulated root disk you want to mirror.

Steps

Mirror the encapsulated root disk.

Follow the procedures in your VxVM documentation. For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.

Caution –

Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.

Display the DID mappings.

# scdidadm -L

From the DID mappings, locate the disk that is used to mirror the root disk.

Extract the raw-disk device-group name from the device-ID name of the root-disk mirror.

The name of the raw-disk device group follows the convention dsk/dN, where N is a number. In the scdidadm output, the device-ID name at the end of each line takes the form /dev/did/rdsk/dN; the dN portion of that name is the same dN that you use in the raw-disk device-group name dsk/dN.
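
For example, scdidadm -L output lines resemble the following (hypothetical devices); for the mirror disk c1t1d0 below, the raw-disk device-group name is dsk/d2:

1         phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2         phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2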

If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.

Only the node whose root disk you mirrored should remain in the node list for the raw-disk
device group.

# scconf -r -D name=dsk/dN,nodelist=node

-D name=dsk/dN

Specifies the cluster-unique name of the raw-disk device group

nodelist=node

Specifies the name of the node or nodes to remove from the node list

Enable the localonly property of the raw-disk device group.

# scconf -c -D name=dsk/dN,localonly=true

When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.

Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.

Example 4–1 SPARC: Mirroring the Encapsulated Root Disk

The following example shows a mirror created of the root disk for the node phys-schost-1. The mirror is created on the disk c1t1d0, whose raw-disk device-group name is dsk/d2. Disk c1t1d0 is a multihost disk, so the node
phys-schost-3 is removed from the disk's node list and the localonly property is enabled.
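
A sketch of the commands that this example describes, using the device and node names given above:

# scdidadm -L
…
2        phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
…
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
# scconf -c -D name=dsk/d2,localonly=true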

SPARC: How to Create and Register a Disk Device Group

Use this procedure to create VxVM disk groups and volumes and to register them as Sun Cluster disk device groups.

Steps

Become superuser on the node that will own the disk group.

Create a VxVM disk group and volume.

If you are installing Oracle Real Application Clusters, create shared VxVM disk groups by using the cluster feature of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. Otherwise, create VxVM disk groups by using the standard
procedures that are documented in the VxVM documentation.
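
As a minimal sketch of creating a standard (nonshared) disk group and a mirrored volume (the disk group name oradg, the disk names, and the volume size are hypothetical):

# vxdg init oradg oradg01=c1t1d0 oradg02=c2t1d0
# vxassist -g oradg make vol01 2g layout=mirror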

Note –

You can use Dirty Region Logging (DRL) to decrease volume recovery time if a node failure occurs. However, DRL might decrease I/O throughput.

If the VxVM cluster feature is not enabled, register the disk group as a Sun Cluster disk device group.
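
For example, registration might look like the following scconf command (the disk group name oradg and the node names are hypothetical; the scsetup utility generates a similar command for you):

# scconf -a -D type=vxvm,name=oradg,nodelist=phys-schost-1:phys-schost-2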

Troubleshooting

Failure to register the device group – If you encounter the error message scconf: Failed to add device group - in use when you attempt to register the disk device group, reminor the disk device group. Use the procedure SPARC: How to Assign a New Minor Number to a Disk Device Group. This procedure enables you to assign a new minor number that does not conflict with minor numbers that are used by existing disk device groups.

Stack overflow – If a stack overflows when the disk device group is brought online, the default value of the thread stack size might be insufficient. On each node, add the entry set cl_comm:rm_thread_stacksize=0xsize to the /etc/system file, where size is a number greater than 8000, which is the default setting.
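
For example, to double the default stack size you might add the following line to /etc/system (the value is illustrative):

set cl_comm:rm_thread_stacksize=0x10000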

Configuration changes – If you change any configuration information for a VxVM disk group or volume, you must register the configuration changes by using
the scsetup utility. Configuration changes you must register include adding or removing volumes and changing the group, owner, or permissions of existing volumes. See Administering Disk Device Groups in Sun Cluster System Administration Guide for Solaris OS for procedures to register configuration changes to a disk device group.

SPARC: How to Assign a New Minor Number to a Disk Device Group

If disk device group registration fails because of a minor-number conflict with another disk group, you must assign the new disk group a new, unused minor number. Perform this procedure to reminor a disk group.

Steps

Become superuser on a node of the cluster.

Determine the minor numbers in use.

# ls -l /global/.devices/node@1/dev/vx/dsk/*

Choose any other multiple of 1000 that is not in use to become the base minor number for the new disk group.

Assign the new base minor number to the disk group.

# vxdg reminor diskgroup base-minor-number

Example 4–2 SPARC: How to Assign a New Minor Number to a Disk Device Group

In this example, the minor numbers 16000-16002 and 4000-4001 are already in use. The vxdg reminor command reminors the new disk device group to use the base minor number 5000.
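
A sketch of the commands for this example (the disk group names dg1, dg2, and dg3 and the volume names are hypothetical):

# ls -l /global/.devices/node@1/dev/vx/dsk/*
/global/.devices/node@1/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
/global/.devices/node@1/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000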

SPARC: How to Unencapsulate the Root Disk

Perform this procedure to unencapsulate the root disk.

Steps

Remove from the root disk group the VxVM volume that corresponds to the global-devices file system.

# vxedit -g rootdiskgroup -rf rm rootdiskxNvol

In this command, rootdiskgroup is the name of your root disk group and rootdiskxNvol is the name of the VxVM volume that contains the global-devices file system.

Caution –

Do not store data other
than device entries for global devices in the global-devices file system. All data in the global-devices file system is destroyed when you remove the VxVM volume. Only data that is related to global devices entries is restored after the root disk is unencapsulated.

Unencapsulate the root disk.

Note –

Do not accept the shutdown request from the command.

# /etc/vx/bin/vxunroot

See your VxVM documentation for details.

Use the format(1M) command to add a 512-Mbyte partition to the root disk to use for the global-devices file system.

Tip –

Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.
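
As a sketch of re-creating the file system on that slice (the device name c0t0d0s3 is hypothetical; use the slice from your pre-encapsulation /etc/vfstab entry):

# newfs /dev/rdsk/c0t0d0s3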