Upgrading With Solaris Live Upgrade and Installed
Non-Global Zones (Overview)

Starting with the Solaris 10 8/07 release, you can upgrade or patch
a system that contains non-global zones with Solaris Live Upgrade. If you
have a system that contains non-global zones, Solaris Live Upgrade is the
recommended program to upgrade and to add patches. Other upgrade programs
might require extensive upgrade time, because the time required to complete
the upgrade increases linearly with the number of installed non-global zones.
If you are patching a system with Solaris Live Upgrade, you do not have
to take the system to single-user mode and you can maximize your system's
uptime. The following list summarizes changes to accommodate systems that
have non-global zones installed.

A new package, SUNWlucfg, is required to
be installed with the other Solaris Live Upgrade packages, SUNWlur and SUNWluu. This package is required for any system, not just a system
with non-global zones installed.

The lumount command now provides non-global
zones with access to their corresponding file systems that exist on inactive
boot environments. When the global zone administrator uses the lumount command
to mount an inactive boot environment, the boot environment is mounted for
non-global zones as well. See Using the lumount Command on a System That Contains Non-Global Zones.

Understanding Solaris Zones and Solaris Live Upgrade

The Solaris Zones
partitioning technology is used to virtualize operating system services and
provide an isolated and secure environment for running applications. A non-global
zone is a virtualized operating system environment created within a single
instance of the Solaris OS, the global zone. When you create a non-global
zone, you produce an application execution environment in which processes
are isolated from the rest of the system.

Solaris Live Upgrade is a mechanism to copy the currently running system
onto new slices. When non-global zones are installed, they can be copied to
the inactive boot environment along with the global zone's file systems.

Figure 9–1 shows a non-global
zone that is copied to the inactive boot environment along with the global
zone's file system.

Figure 9–1 Creating a Boot Environment – Copying
Non-Global Zones

In this example of a system with a single disk, the root (/) file system is copied to c0t0d0s4. All non-global
zones that are associated with the file system are also copied to s4.
The /export and /swap file systems
are shared between the current boot environment, bootenv1,
and the inactive boot environment, bootenv2. The lucreate command is the following:

# lucreate -c bootenv1 -m /:/dev/dsk/c0t0d0s4:ufs -n bootenv2

In this example of a system with two disks, the root (/) file system is copied to c0t1d0s0. All non-global
zones that are associated with the file system are also copied to s0.
The /export and /swap file systems
are shared between the current boot environment, bootenv1,
and the inactive boot environment, bootenv2. The lucreate command is the following:

# lucreate -c bootenv1 -m /:/dev/dsk/c0t1d0s0:ufs -n bootenv2

Figure 9–2 shows that a non-global
zone is copied to the inactive boot environment.

Figure 9–2 Creating a Boot Environment –
Copying a Shared File System From a Non-Global Zone

In this example of a system with a single disk, the root (/) file system is copied to c0t0d0s4. All non-global
zones that are associated with the file system are also copied to s4.
The non-global zone, zone1, has a separate file system
that was created by the zonecfg add fs command. The zone
path is /zone1/root/export. To prevent this file system
from being shared by the inactive boot environment, the file system is placed
on a separate slice, c0t0d0s6. The /export and /swap file systems are shared between the current boot environment, bootenv1, and the inactive boot environment, bootenv2.
The lucreate command is the following:
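Based on the -m syntax described later in this chapter, the lucreate invocation for this figure might look like the following sketch. The /export mount point for zone1's entry is an assumption drawn from the figure description; the slices match the text above.

```shell
# Sketch only: copy root (/) to c0t0d0s4, and copy zone1's separate
# file system to its own slice, c0t0d0s6, so it is not shared.
# The :zone1 suffix is the optional zonename field of the -m option.
lucreate -c bootenv1 \
    -m /:/dev/dsk/c0t0d0s4:ufs \
    -m /export:/dev/dsk/c0t0d0s6:ufs:zone1 \
    -n bootenv2
```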

In this example of a system with two disks, the root (/) file system is copied to c0t1d0s0. All non-global
zones that are associated with the file system are also copied to s0.
The non-global zone, zone1, has a separate file system
that was created by the zonecfg add fs command. The zone
path is /zone1/root/export. To prevent this file system
from being shared by the inactive boot environment, the file system is placed
on a separate slice, c0t1d0s4. The /export and /swap file systems are shared between the current boot environment, bootenv1, and the inactive boot environment, bootenv2.
The lucreate command is the following:
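For the two-disk case, a corresponding sketch follows; again, the /export mount point for zone1's entry is an assumption, and the slices match the description above.

```shell
# Sketch only: copy root (/) to c0t1d0s0 on the second disk, and
# copy zone1's separate file system to its own slice, c0t1d0s4.
lucreate -c bootenv1 \
    -m /:/dev/dsk/c0t1d0s0:ufs \
    -m /export:/dev/dsk/c0t1d0s4:ufs:zone1 \
    -n bootenv2
```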

Creating a Boot Environment When a Non-Global Zone
Is on a Separate File System

Creating
a new boot environment from the currently running boot environment remains
the same as in previous releases with one exception. You can specify a destination
disk slice for a shared file system within a non-global zone. This exception
occurs under the following conditions:

If on the current boot environment the zonecfg add
fs command was used to create a separate file system for a non-global
zone

If this separate file system resides on a shared file system,
such as /zone/root/export

To prevent this separate file system from being shared in the new boot
environment, the lucreate command enables specifying a
destination slice for a separate file system for a non-global zone. The argument
to the -m option has a new optional field, zonename.
This new field places the non-global zone's separate file system on a separate
slice in the new boot environment. For more information about setting up a
non-global zone with a separate file system, see zonecfg(1M).
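As a sketch of how such a separate file system might be configured in the first place, the following zonecfg session adds a UFS file system to zone1. The slice name is hypothetical; see zonecfg(1M) for the authoritative syntax.

```shell
# Hypothetical example: give zone1 a separate /export file system
# backed by its own slice. The resulting zone path then contains
# /zone1/root/export.
zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/export
zonecfg:zone1:fs> set special=/dev/dsk/c0t0d0s6
zonecfg:zone1:fs> set raw=/dev/rdsk/c0t0d0s6
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> end
zonecfg:zone1> exit
```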

Note –

By default, any file system other than the critical file systems
(root (/), /usr, and /opt file
systems) is shared between the current and new boot environments. Updating
shared files in the active boot environment also updates data in the inactive
boot environment. For example, the /export file system
is a shared file system. If you use the -m option and the zonename option, the non-global zone's file system is copied
to a separate slice and data is not shared. This option prevents non-global
zone file systems that were created with the zonecfg add fs command
from being shared between the boot environments.

Upgrading With Solaris Live Upgrade When Non-Global
Zones Are Installed on a System (Tasks)

The following procedure provides detailed instructions for upgrading
with Solaris Live Upgrade for a system with non-global zones installed.

Install required patches.

Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com.
Search for the info doc 72099 on the SunSolve web site.

From the SunSolveSM web site,
obtain the list of patches.

Become superuser or assume an equivalent role.

Install the patches with the patchadd command.

# patchadd path_to_patches

path_to_patches is the path where the patches
are located.

Reboot the system if necessary. Certain patches require a reboot
to be effective.

x86 only:
Rebooting the system is required; otherwise, Solaris Live Upgrade fails.

# init 6

Remove existing Solaris Live Upgrade packages.

The
three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed
to upgrade by using Solaris Live Upgrade. These packages include existing
software, new features, and bug fixes. If you do not remove the existing packages
and install the new packages on your system before using Solaris Live Upgrade,
upgrading to the target release fails.

# pkgrm SUNWlucfg SUNWluu SUNWlur

Install the Solaris Live Upgrade packages.

Insert the Solaris DVD or CD.

This media contains
the packages for the release to which you are upgrading.

Install the packages in the following order from the installation
media or network installation image.

# pkgadd -d path_to_packages SUNWlucfg SUNWlur SUNWluu

In the following example, the packages are installed from the installation
media.
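A sketch of the installation command follows. The media path is a typical mount point and is an assumption; substitute the path to your DVD, CD, or network installation image.

```shell
# Hypothetical media path; the Product directory holds the packages.
pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
```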

-n BE_name

The name of the boot environment to be created. BE_name must be unique on the system.

-A 'BE_description'

(Optional) Enables the creation of a boot environment description
that is associated with the boot environment name (BE_name). The description
can be any length and can contain any characters.

-c BE_name

Assigns the name BE_name to the
active boot environment. This option is not required and is only used when
the first boot environment is created. If you run lucreate for
the first time and you omit the -c option, the software creates
a default name for you.

-m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...]

Specifies the file systems' configuration of the new boot
environment in the vfstab. The file systems that are
specified as arguments to -m can be on the same disk or they
can be spread across multiple disks. Use this option as many times as needed
to create the number of file systems that are needed.

mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

device field can be one of the
following:

The name of a disk device, of the form /dev/dsk/cwtxdysz

The name of a Solaris Volume Manager volume, of the form
/dev/md/dsk/dnum

The name of a Veritas Volume Manager volume, of the form
/dev/md/vxfs/dsk/dnum

The keyword merged, indicating that the
file system at the specified mount point is to be merged with its parent

fs_options field can be one of
the following:

ufs, which indicates a UFS file system.

vxfs, which indicates a Veritas file system.

swap, which indicates a swap file system.
The swap mount point must be a - (hyphen).

For file systems that are logical devices (mirrors), several
keywords specify actions to be applied to the file systems. These keywords
can create a logical device, change the configuration of a logical device,
or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

zonename specifies that a non-global
zone's separate file system be placed on a separate slice. This option is
used when the zone's separate file system is in a shared file system such
as /zone1/root/export. This option copies the zone's
separate file system to a new slice and prevents this file system from being
shared. The separate file system was created with the zonecfg add
fs command.

In the following example, a new boot environment named newbe is
created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied
to the new boot environment. The non-global zone named zone1 is
given a separate mount point on c0t1d0s1.
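A sketch of this lucreate command follows. The /export mount point for zone1's separate file system is an assumption; the slices match the description above.

```shell
# Sketch only: create newbe with root (/) on c0t1d0s4 and zone1's
# separate file system copied to its own slice, c0t1d0s1.
lucreate -n newbe \
    -m /:/dev/dsk/c0t1d0s4:ufs \
    -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
```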

Note –

By default, any file system other than the critical file systems
(root (/), /usr, and /opt file
systems) is shared between the current and new boot environments. The /export file system is a shared file system. If you use the -m option,
the non-global zone's file system is placed on a separate slice and data is
not shared. This option prevents zone file systems that were created with
the zonecfg add fs command from being shared between the
boot environments. See zonecfg(1M) for
details.

BE_name specifies the name of the boot environment
that is to be activated.

Note –

For an x86 based system, the luactivate command
is required when booting a boot environment for the first time. Subsequent
activations can be made by selecting the boot environment from the GRUB menu.
For step-by-step instructions, see x86: Activating a Boot Environment With the GRUB Menu.

To successfully activate a boot environment, that boot environment must
meet several conditions. For more information, see Activating a Boot Environment.
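Assuming the new boot environment is named newbe as in the earlier example, the activation step might look like this sketch:

```shell
# Activate the upgraded boot environment; the switch takes effect
# at the next reboot performed with init or shutdown.
luactivate newbe
```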

Reboot.

# init 6

Caution –

Use only the init or shutdown commands
to reboot. If you use the reboot, halt,
or uadmin commands, the system does not switch boot environments.
The most recently active boot environment is booted again.

The boot environments have switched and the new boot environment is
now the current boot environment.

Upgrading With Solaris Live Upgrade When Non-Global
Zones Are Installed on a System

The following example provides abbreviated
descriptions of the steps to upgrade a system with non-global zones installed.
In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 10 release. This
system has non-global zones installed and has a non-global zone with a separate
file system on a shared file system, /zone1/root/export.
The new boot environment is upgraded to the Solaris 10 8/07 release
by using the luupgrade command. The upgraded boot environment
is activated by using the luactivate command.

Ensure that you have the most recently
updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 72099 on the SunSolve
web site. In this example, /net/server/export/patches is
the path to the patches.

# patchadd /net/server/export/patches
# init 6

Remove the Solaris Live Upgrade packages
from the current boot environment.

# pkgrm SUNWlucfg SUNWluu SUNWlur

Insert the Solaris DVD or CD. Then
install the replacement Solaris Live Upgrade packages from the target release.

In the following example, a new boot environment named newbe is
created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied
to the new boot environment. A separate file system was created with the zonecfg add fs command for zone1. This separate file system, /zone1/root/export, is placed on a separate slice, c0t1d0s1. This placement prevents the separate file system from being shared between the current boot environment and the new boot environment.
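The commands for this part of the example might look like the following sketch. The luupgrade source path (-s) is hypothetical and stands for a Solaris 10 8/07 installation image; the /export mount point for zone1's entry is likewise an assumption.

```shell
# Sketch only: create newbe, upgrade it to the target release,
# activate it, and reboot into it.
lucreate -n newbe \
    -m /:/dev/dsk/c0t1d0s4:ufs \
    -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
luupgrade -u -n newbe -s /net/install/export/s10u4
luactivate newbe
init 6
```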

-i infile

Compare files that are listed in infile.
The files to be compared should have absolute file names. If the entry in
the file is a directory, the comparison is recursive to the directory. Use
either this option or -t, not both.

-t

Compare only nonbinary files. This comparison uses the file(1) command on each file to determine if the file is a text file.
Use either this option or -i, not both.

-o outfile

Redirect the output of differences to outfile.

BE_name

Specifies the name of the boot environment that is compared
to the active boot environment.

Example 9–2 Comparing Boot Environments

In this example, the current boot environment (source) is compared to the second_disk boot environment, and the results are sent to a file.
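A sketch of such a comparison follows; the infile and outfile names are hypothetical.

```shell
# Compare second_disk to the active boot environment, limiting the
# comparison to the files listed in the infile, and write the
# differences to an output file.
lucompare -i /etc/lu/compare -o /var/tmp/compare.out second_disk
```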

Using the lumount Command on a
System That Contains Non-Global Zones

The lumount command
provides non-global zones with access to their corresponding file systems
that exist on inactive boot environments. When the global zone administrator
uses the lumount command to mount an inactive boot environment,
the boot environment is mounted for non-global zones as well.

In the following example, the appropriate file systems are mounted for
the boot environment, newbe, on /mnt in
the global zone. For non-global zones that are running, mounted, or ready,
their corresponding file systems within newbe are also
made available on /mnt within each zone.
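A sketch of the mount, and the corresponding unmount, follows, assuming the boot environment name newbe from the example:

```shell
# Mount newbe's file systems at /mnt in the global zone. Running,
# mounted, or ready non-global zones also see their corresponding
# newbe file systems on /mnt within each zone.
lumount newbe /mnt

# When finished, unmount the inactive boot environment.
luumount newbe
```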