Part II Upgrading and Migrating With Solaris Live Upgrade to a ZFS Root Pool

This part provides an overview and instructions for using Solaris Live
Upgrade to create and upgrade an inactive boot environment on ZFS storage
pools. Also, you can migrate your UFS root (/) file system
to a ZFS root pool.

Chapter 11 Solaris Live Upgrade and ZFS (Overview)

With Solaris Live Upgrade, you can migrate your UFS file systems
to a ZFS root pool and create ZFS root file systems from an existing ZFS root
pool.

Note –

Creating boot environments with Solaris Live Upgrade is new in
the Solaris 10 10/08 release. When performing
a Solaris Live Upgrade for a UFS file system, both the command-line parameters
and operation of Solaris Live Upgrade remain unchanged. To perform a Solaris
Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.

What's
New in the Solaris 10 10/09 Release

Starting with the Solaris 10 10/09 release,
you can set up a JumpStart profile to identify a flash archive of a ZFS root
pool.

A Flash archive can be created on a system that is running a UFS root
file system or a ZFS root file system. A Flash archive of a ZFS root pool
contains the entire pool hierarchy, except for the swap and dump volumes,
and any excluded datasets. The swap and dump volumes are created when the
Flash archive is installed.

You can use the Flash archive installation method as follows:

Generate a Flash archive that can be used to install and boot
a system with a ZFS root file system.

Perform a JumpStart installation of a system by using a ZFS
Flash archive.

Note –

Creating a ZFS Flash archive backs up an entire root pool, not
individual boot environments. Individual datasets within the pool can be excluded
by using the -D option of the flarcreate and flar commands.
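For example, a minimal sketch of creating a ZFS Flash archive while excluding a dataset (the archive path and dataset name are illustrative):

# flarcreate -n zfsBE -D rpool/export /net/server/export/archives/zfsBE.flar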

Introduction to Using Solaris Live Upgrade With ZFS

If you have a UFS file system, Solaris Live Upgrade works the same as
in previous releases. You can now migrate from UFS file systems to a ZFS root
pool and create new boot environments within a ZFS root pool. For these tasks,
the lucreate command has been enhanced with the -p option.
The command syntax is the following:

# lucreate [-c active_BE_name] -n BE_name [-p zfs_root_pool]

The -p option specifies the ZFS pool in which a new
boot environment resides. This option can be omitted if the source and target
boot environments are within the same pool.

Migrating From a UFS File System to
a ZFS Root Pool

If you create a boot environment from the currently running system,
the lucreate command copies the UFS root (/)
file system to a ZFS root pool. The copy process might take time, depending
on your system.

When you are migrating from a UFS file system, the source boot environment
can be a UFS root (/) file system on a disk slice. You
cannot create a boot environment on a UFS file system from a source boot
environment on a ZFS root pool.

Migrating From a UFS root (/)
File System to ZFS Root Pool

The following commands create a ZFS root pool and a new boot environment
from a UFS root (/) file system in the ZFS root pool.
A ZFS root pool must exist before the lucreate operation
and must be created with slices rather than whole disks to be upgradeable
and bootable. The disk cannot have an EFI label; instead, the disk must have an SMI label.
For more limitations, see System Requirements and Limitations When Using Solaris Live Upgrade.

Figure 11–1 shows the zpool command that creates a root pool, rpool, on
a separate slice, c0t1d0s5. The disk slice c0t0d0s0 contains
a UFS root (/) file system. In the lucreate command,
the -c option names the currently running system, c0t0d0, which has a UFS root (/) file system. The -n option assigns the name to the boot environment to be created, new-zfsBE. The -p option specifies where to place
the new boot environment, rpool. The UFS /export file
system and the /swap volume are not copied to the new
boot environment.

Figure 11–1 Migrating From a UFS File System to a ZFS Root Pool

This example shows the same commands as in Figure 11–1. The commands create a new root pool, rpool, and
create a new boot environment in the pool from a UFS root (/)
file system. In this example, the zfs list command shows
the ZFS root pool created by the zpool command. The next zfs list command shows the datasets created by the lucreate command.
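A minimal sketch of this command sequence, using the slice, pool, and boot environment names from the figure (the zfs list output is omitted):

# zpool create rpool c0t1d0s5
# zfs list
# lucreate -c c0t0d0 -n new-zfsBE -p rpool
# zfs list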

The new boot environment is rpool/ROOT/new-zfsBE.
The boot environment, new-zfsBE, is ready to be upgraded
and activated.

Migrating a UFS File System With Solaris Volume Manager
Volumes Configured to a ZFS Root File System

You can migrate your UFS file system if your system has Solaris Volume
Manager (SVM) volumes. To create a UFS boot environment from an existing SVM
configuration, you create a new boot environment from your currently running
system. Then create the ZFS boot environment from the new UFS boot environment.

Overview of Solaris Volume Manager (SVM)

ZFS uses the concept of storage pools to manage physical storage. Historically,
file systems were constructed on top of a single physical device. To address
multiple devices and provide for data redundancy, the concept of a volume
manager was introduced to provide the image of a single device. Thus, file
systems would not have to be modified to take advantage of multiple devices.
This design added another layer of complexity. This complexity ultimately
prevented certain file system advances because the file system had no control
over the physical placement of data on the virtualized volumes.

ZFS storage pools replace SVM. ZFS
completely eliminates the need for volume management. Instead of forcing you to create
virtualized volumes, ZFS aggregates devices into a storage pool. The storage
pool describes the physical characteristics of the storage, such as device layout and
data redundancy, and acts as an arbitrary data store from which file systems
can be created. File systems are no longer constrained to individual devices,
enabling them to share space with all file systems in the pool. You no longer
need to predetermine the size of a file system, as file systems grow automatically
within the space allocated to the storage pool. When new storage is added,
all file systems within the pool can immediately use the additional space
without additional work. In many ways, the storage pool acts as a virtual
memory system. When a memory DIMM is added to a system, the operating system
doesn't force you to invoke some commands to configure the memory and assign
it to individual processes. All processes on the system automatically use
the additional memory.

When migrating a system with SVM volumes, the SVM volumes are ignored.
You can set up mirrors within the root pool as in the following example.

In this example, the lucreate command with the -m option creates a new boot environment from the currently running
system. The disk slice c1t0d0s0 contains a UFS root (/) file system configured with SVM volumes. The zpool command
creates a root pool, c1t0d0s0, and a RAID-1 volume (mirror), c2t0d0s0. In the second lucreate command, the -n option assigns the name to the boot environment to be created, c0t0d0s0. The -s option identifies the UFS root
(/) file system. The -p option specifies
where to place the new boot environment, rpool.
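A minimal sketch of setting up a mirrored root pool and migrating into it (a sketch only; the slice, pool, and boot environment names are illustrative and do not reproduce the example above):

# zpool create rpool mirror c2t0d0s0 c3t0d0s0
# lucreate -c ufsBE -n new-zfsBE -p rpool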

Creating a New Boot Environment Within
the Same Root Pool

When creating a new boot environment within the same ZFS root pool,
the lucreate command creates a snapshot from the source
boot environment and then a clone is made from the snapshot. The creation
of the snapshot and clone is almost instantaneous and the disk space used
is minimal. The amount of space ultimately required depends on how many files
are replaced as part of the upgrade process. The snapshot is read-only, but
the clone is a read-write copy of the snapshot. Any changes made to the clone
boot environment are not reflected in either the snapshot or the source boot
environment from which the snapshot was made.

When the current boot environment resides on the same ZFS pool, the -p option is omitted.

Figure 11–2 shows the creation
of a ZFS boot environment from a ZFS root pool. The slice c0t0d0s0 contains
the ZFS root pool, rpool. In the lucreate command,
the -n option assigns the name to the boot environment to
be created, new-zfsBE. A snapshot of the original root
pool, rpool@new-zfsBE, is created. The snapshot is used
to make the clone that is a new boot environment, new-zfsBE.
The boot environment, new-zfsBE, is ready to be upgraded
and activated.

Figure 11–2 Creating a New Boot Environment on the Same Root
Pool

Example 11–3 Creating a Boot Environment Within the Same ZFS
Root Pool

This example shows the same command as in Figure 11–2 that creates a new boot environment in the same root pool. The lucreate
command names the currently running boot environment with the -c zfsBE option, and the -n new-zfsBE option names
the new boot environment to be created. The zfs list command shows the
ZFS datasets with the new boot environment and snapshot.
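A minimal sketch of this command sequence, using the boot environment names given above (the zfs list output is omitted):

# lucreate -c zfsBE -n new-zfsBE
# zfs list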

Creating a New Boot Environment on Another Root
Pool

You can use the lucreate command to copy an existing
ZFS root pool into another ZFS root pool. The copy process might take some
time depending on your system.

Figure 11–3 shows the zpool command that creates a ZFS root pool, rpool2,
on c0t1d0s5 because a bootable ZFS root pool does not yet
exist. The lucreate command with the -n option
assigns the name to the boot environment to be created, new-zfsBE.
The -p option specifies where to place the new boot environment.

Figure 11–3 Creating a New Boot Environment on Another Root
Pool

Example 11–4 Creating a Boot Environment on a Different ZFS
Root Pool

This example shows the same commands as in Figure 11–3 that create a new root pool and then a new boot environment in the
newly created root pool. In this example, the zpool create command
creates rpool2. The zfs list command
shows that no ZFS datasets are created in rpool2. The datasets
are created with the lucreate command.
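A minimal sketch of this command sequence, using the slice, pool, and boot environment names from the figure (the zfs list output is omitted):

# zpool create rpool2 c0t1d0s5
# zfs list
# lucreate -n new-zfsBE -p rpool2
# zfs list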

The new boot environment, new-zfsBE, is created on rpool2 along with the other datasets, ROOT, dump and swap. The boot environment, new-zfsBE, is ready to be upgraded and activated.

Creating a New Boot Environment From a Source
Other Than the Currently Running System

If you are creating a boot environment from a source other than the
currently running system, you must use the lucreate command
with the -s option. The -s option works the
same as for a UFS file system. The -s option provides the
path to the alternate root (/) file system. This alternate
root (/) file system is the source for the creation of
the new ZFS root pool. The alternate root can be either a UFS (/)
root file system or a ZFS root pool. The copy process might take time, depending
on your system.

Example 11–5 Creating a Boot Environment From an Alternate Root
(/) File System

The following command creates a new ZFS root pool from an existing ZFS
root pool. The -n option assigns the name to the boot environment
to be created, new-zfsBE. The -s option
specifies the boot environment, source-zfsBE, to be used
as the source of the copy instead of the currently running boot environment.
The -p option specifies to place the new boot environment
in newpool2.

# lucreate -n new-zfsBE -s source-zfsBE -p rpool2

The boot environment, new-zfsBE, is ready to be upgraded
and activated.

Creating a ZFS Boot Environment on a System With
Non-Global Zones Installed

Chapter 12 Solaris Live Upgrade for ZFS (Planning)

This chapter provides guidelines and requirements for review before
performing a migration of a UFS file system to a ZFS file system or before
creating a new ZFS boot environment from an existing ZFS root pool.

Note –

Creating boot environments with Solaris Live Upgrade is new in
the Solaris 10 10/08 release. When you
perform a Solaris Live Upgrade for a UFS file system, both the command-line
parameters and operation of Solaris Live Upgrade remain unchanged. To perform
a Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of
this book.

Migrating from a UFS file system to a ZFS root pool with Solaris Live
Upgrade or creating a new boot environment in a root pool is new in the Solaris 10 10/08 release. This release contains the
software needed to use Solaris Live Upgrade with ZFS. You must have at least
this release installed to use ZFS.

Disk space

The minimum amount of available pool space for a bootable ZFS root file
system depends on the amount of physical memory, the disk space available,
and the number of boot environments to be created.

When you migrate, shared file systems cannot be
copied to a separate slice on the new ZFS root pool.

For example, when performing a Solaris Live Upgrade with a UFS root
(/) file system, you can use the -m option
to copy the /export file system to another device. The -m option
is not available for copying a shared file system to a ZFS pool.

When you are migrating a UFS root file system that contains non-global
zones, shared file systems are not migrated.

On a system with a UFS root (/) file system and
non-global zones installed, a non-global zone is migrated as part of the UFS to
ZFS migration if the zone resides in a critical (non-shared) file system. Alternatively,
the zone is cloned when you upgrade within the same ZFS pool. If a non-global
zone exists in a shared UFS file system, you must first upgrade the zone, as in
previous Solaris releases, before you can migrate to a ZFS root pool.

Do not rename your ZFS pools or file systems if you have existing boot
environments that you want to continue to use. The Solaris Live Upgrade feature
is unaware of the name change, and subsequent commands, such as ludelete,
will fail.

Set dataset properties before the lucreate command
is used.

Solaris Live Upgrade creates the datasets for the boot environment and
ZFS volumes for the swap area and dump device but does not account for any
existing dataset property modifications. This means that if you want a dataset
property enabled in the new boot environment, you must set the property before
the lucreate operation. For example:
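A minimal sketch, assuming you want compression enabled on the datasets of the new boot environment (the dataset name is illustrative):

# zfs set compression=on rpool/ROOT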

When creating a ZFS boot environment within the same ZFS root pool,
you cannot use the lucreate command's include and exclude
options to customize the content.

You cannot use the -f, -o, -y, -Y, and -z options to include or exclude files from
the primary boot environment when creating a boot environment in the same ZFS root pool. However, you can use these options
in the following cases:

Creating a boot environment from a UFS file system to a UFS
file system

Creating a boot environment from a UFS file system to a ZFS
root pool

Creating a boot environment from a ZFS root pool to a different
ZFS root pool

Chapter 13 Creating a Boot Environment for ZFS Root Pools

This chapter provides step-by-step
procedures on how to create a ZFS boot environment when you use Solaris Live
Upgrade.

Note –

Migrating from a UFS file system to a ZFS root pool or creating
ZFS boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. To use Solaris Live Upgrade
on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.

Migrating a UFS File System to a ZFS File System

This
procedure describes how to migrate a UFS file system to a ZFS file system.
Creating a boot environment provides a method of copying critical file systems
from an active UFS boot environment to a ZFS root pool. The lucreate command
copies the critical file systems to a new boot environment within an existing
ZFS root pool. User-defined (shareable) file systems are not copied and are
not shared with the source UFS boot environment. Also, /swap is
not shared between the UFS file system and ZFS root pool. For an overview
of critical and shareable file systems, see File System Types.

How to Migrate a UFS File System to a ZFS File System

Note –

To migrate an active UFS root (/) file system
to a ZFS root pool, you must provide the name of the root pool. The critical
file systems are copied into the root pool.

Before running Solaris Live Upgrade for the first time, you must
install the latest Solaris Live Upgrade packages from installation media and
install the patches listed in the SunSolve Infodoc 206844. Search for the Infodoc 206844 (formerly 72099)
on the SunSolve web
site.

The latest packages and patches ensure that you have all
the latest bug fixes and new features in the release. Ensure that you install
all the patches that are relevant to your system before proceeding to create
a new boot environment.

The following substeps describe the steps
in the SunSolve Infodoc
206844.

Note –

Using Solaris Live Upgrade to create new ZFS boot environments
requires at least the Solaris 10 10/08 release to be installed. Previous releases
do not have the ZFS and Solaris Live Upgrade software to perform the tasks.

Become superuser or assume an equivalent role.

From the SunSolve web site, follow the instructions in Infodoc 206844 to remove and
add Solaris Live Upgrade packages.

The three Solaris Live Upgrade
packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live
Upgrade. These packages include existing software, new features, and bug fixes.
If you do not remove the existing packages and install the new packages on
your system before using Solaris Live Upgrade, upgrading to the target release
fails. The SUNWlucfg package is new starting
with the Solaris 10 8/07 release. If you are using Solaris Live
Upgrade packages from a release previous to Solaris 10 8/07, you do not need
to remove this package.
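A ZFS root pool must exist on a slice before the new boot environment can be created. A minimal sketch of the pool creation and the lucreate invocation, using illustrative slice, pool, and boot environment names that match the option descriptions that follow:

# zpool create rpool c0t0d0s4
# lucreate -c ufsBE -n new-zfsBE -p rpool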

-c ufsBE

Assigns the name ufsBE to the current
UFS boot environment. This option is not required and is used only when the
first boot environment is created. If you run the lucreate command
for the first time and you omit the -c option, the software
creates a default name for you.

-n new-zfsBE

Assigns the name new-zfsBE to the
boot environment to be created. The name must be unique on the system.

-p rpool

Places the newly created ZFS root (/)
file system into the ZFS root pool defined in rpool.

The creation of the new ZFS boot environment might take a while. The
UFS file system data is being copied to the ZFS root pool. When the inactive
boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

(Optional) Verify that the boot environment is complete.

In this example, the lustatus command reports whether
the boot environment creation is complete and bootable.

The mount points listed for the new boot environment are temporary until
the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment,
but are shared within the ZFS root pool and boot environments within the root
pool.

You can now upgrade and activate the new boot environment.
See Example 13–1.

Example 13–1 Migrating a UFS Root (/) File System to a ZFS
Root Pool

In this example, the new ZFS root pool, rpool,
is created on a separate slice, c0t0d0s4. The lucreate command migrates the currently running UFS boot environment, c0t0d0, to the new ZFS boot environment, new-zfsBE,
and places the new boot environment in rpool.

In this example, the new boot environment is upgraded by using the luupgrade command from an image that is stored in the location indicated
with the -s option.

# luupgrade -n zfsBE -u -s /net/install/export/s10/combined.s10
51135 blocks
miniroot filesystem is <lofs>
Mounting miniroot at
</net/install/export/solaris_10/combined.solaris_10_wos
/Solaris_10/Tools/Boot>
Validating the contents of the media
</net/install/export/s10/combined.s10>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version <10_1008>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live
Upgrade requests.
Creating upgrade profile for BE <zfsBE>.
Determining packages to install or upgrade for BE <zfsBE>.
Performing the operating system upgrade of the BE <zfsBE>.
CAUTION: Interrupting this process may leave the boot environment
unstable or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot
environment <zfsBE> contains a log of the upgrade operation.
INFORMATION: The file var/sadm/system/data/upgrade_cleanup on boot
environment <zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all
of the files are located on boot environment <zfsBE>.
Before you activate boot environment <zfsBE>, determine if any
additional system maintenance is required or if additional media
of the software distribution must be installed.
The Solaris upgrade of the boot environment <zfsBE> is complete.

The new boot environment can be activated anytime after it is created.

# luactivate new-zfsBE
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:
setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a
3. Boot to the original boot environment by typing:
boot
**********************************************************************
Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.

# init 6
# svc.startd: The system is coming down. Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

If you fall back to the UFS boot environment, then you need to import
again any ZFS storage pools that were created in the ZFS boot environment
because they are not automatically available in the UFS boot environment.
You will see messages similar to the following example when you switch back
to the UFS boot environment.

# luactivate c0t0d0
WARNING: The following files have changed on both the current boot
environment <new-zfsBE> zone <global> and the boot environment
to be activated <c0t0d0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <zfsBE> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized
from the current boot environment <new-zfsBE> when boot environment <c0t0d0>

Creating a Boot Environment Within the Same ZFS Root
Pool

If you have an existing ZFS
root pool and want to create a new ZFS boot environment within that pool,
the following procedure provides the steps. After the inactive boot environment
is created, the new boot environment can be upgraded and activated at your
convenience. The -p option is not required when you create
a boot environment within the same pool.

How to Create a ZFS Boot Environment Within the Same
ZFS Root Pool

Before running Solaris Live Upgrade for the first time, you must
install the latest Solaris Live Upgrade packages from installation media and
install the patches listed in the SunSolve Infodoc 206844. Search for the Infodoc 206844 (formerly 72099)
on the SunSolve web
site.

The latest packages and patches ensure that you have all
the latest bug fixes and new features in the release. Ensure that you install
all the patches that are relevant to your system before proceeding to create
a new boot environment.

The following substeps describe the steps
in the SunSolve Infodoc
206844.

Note –

Using Solaris Live Upgrade to create new ZFS boot environments
requires at least the Solaris 10 10/08 release to be installed. Previous releases
do not have the ZFS and Solaris Live Upgrade software to perform the tasks.

Become superuser or assume an equivalent role.

From the SunSolve web site, follow the instructions in Infodoc 206844 to remove and
add Solaris Live Upgrade packages.

The three Solaris Live Upgrade
packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live
Upgrade. These packages include existing software, new features, and bug fixes.
If you do not remove the existing packages and install the new packages on
your system before using Solaris Live Upgrade, upgrading to the target release
fails. The SUNWlucfg package is new starting
with the Solaris 10 8/07 release. If you are using Solaris Live
Upgrade packages from a release previous to Solaris 10 8/07, you do not need
to remove this package.

Note –

The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you
are using Solaris Live Upgrade packages from a previous release, you do not
need to remove this package.
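A minimal sketch of installing the required patches (a sketch only; patchadd -M applies the listed patches from the specified directory, and the patch IDs are placeholders):

# patchadd -M /var/tmp/lupatches patch_id patch_id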

path-to-patches is the path to the patch
directory, such as /var/tmp/lupatches. patch_id is
the patch number or numbers. Separate multiple patch names with a space.

Note –

The patches need to be applied in the order that is specified
in Infodoc 206844.

Reboot the system if necessary. Certain patches require a
reboot to be effective.

x86 only:
Rebooting the system is required or Solaris Live Upgrade fails.

# init 6

You now have the packages and patches necessary for a successful creation
of a new boot environment.

Create the new boot environment.

# lucreate [-c zfsBE] -n new-zfsBE

-c zfsBE

Assigns the name zfsBE to the current
boot environment. This option is not required and is used only when the first
boot environment is created. If you run lucreate for the
first time and you omit the -c option, the software creates
a default name for you.

-n new-zfsBE

Assigns the name to the boot environment to be created. The
name must be unique on the system.

The creation of the new boot environment is almost instantaneous. A
snapshot is created of each dataset in the current ZFS root pool, and a clone
is then created from each snapshot. Snapshots are very disk-space efficient,
and this process uses minimal disk space. When the inactive boot environment
has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

(Optional) Verify that the boot environment is complete.

The lustatus command reports whether the boot environment
creation is complete and bootable.

In this example, the ZFS root pool is named rpool,
and the @ symbol indicates a snapshot. The new boot environment mount points
are temporary until the luactivate command is executed.
The /dump and /swap volumes are
shared with the ZFS root pool and boot environments within the root pool.

You can now activate the new boot environment with the luactivate command. For example:

# luactivate new-zfsBE
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:
setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a
3. Boot to the original boot environment by typing:
boot
**********************************************************************
Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.

# init 6
# svc.startd: The system is coming down. Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

Creating a Boot Environment In a New Root Pool

If you have an existing ZFS
root pool and want to create a new ZFS boot environment in a new root pool,
the following procedure provides the steps. After the inactive boot environment
is created, the new boot environment can be upgraded and activated at your
convenience. The -p option is required to specify where to place
the new boot environment. The ZFS root pool that will hold the new boot environment
must already exist and must be on a separate slice to be bootable and upgradeable.

How to Create a Boot Environment on a New ZFS Root
Pool

Before running Solaris Live Upgrade for the first time, you must
install the latest Solaris Live Upgrade packages from installation media and
install the patches listed in the SunSolve Infodoc 206844. Search for the Infodoc 206844 (formerly 72099)
on the SunSolve web
site.

The latest packages and patches ensure that you have all
the latest bug fixes and new features in the release. Ensure that you install
all the patches that are relevant to your system before proceeding to create
a new boot environment.

The following substeps describe the steps
in the SunSolve Infodoc
206844.

Note –

Using Solaris Live Upgrade to create new ZFS boot environments
requires at least the Solaris 10 10/08 release to be installed. Previous releases
do not have the ZFS and Solaris Live Upgrade software to perform the tasks.

Become superuser or assume an equivalent role.

From the SunSolve web site, follow the instructions in Infodoc 206844 to remove and
add Solaris Live Upgrade packages.

The three Solaris Live Upgrade
packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live
Upgrade. These packages include existing software, new features, and bug fixes.
If you do not remove the existing packages and install the new packages on
your system before using Solaris Live Upgrade, upgrading to the target release
fails. The SUNWlucfg package is new starting
with the Solaris 10 8/07 release. If you are using Solaris Live
Upgrade packages from a release previous to Solaris 10 8/07, you do not need
to remove this package.

Note –

The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you
are using Solaris Live Upgrade packages from a previous release, you do not
need to remove this package.
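A minimal sketch of creating the new root pool and the new boot environment (a sketch only; the slice name is illustrative, and the pool and boot environment names match the option descriptions that follow):

# zpool create rpool2 c0t1d0s5
# lucreate -n new-zfsBE -p rpool2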

-n new-zfsBE

Assigns the name to the boot environment to be created. The
name must be unique on the system.

-p rpool2

Places the newly created ZFS root boot environment into the
ZFS root pool defined in rpool2.

The creation of the new ZFS boot environment might take a while. The
file system data is being copied to the new ZFS root pool. When the inactive
boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

(Optional) Verify that the boot environment is complete.

The lustatus command reports whether the boot environment
creation is complete and bootable.

The following example displays the names of all datasets on the system.
The mount points listed for the new boot environment are temporary until the luactivate command is executed. The new boot environment shares
the volumes, rpool2/dump and rpool2/swap,
with the rpool2 ZFS boot environment.

You can now upgrade and activate the new boot environment. See Example 13–3.

Example 13–3 Creating a Boot Environment on a New Root Pool

In this example, a new ZFS root pool, rpool, is
created on a separate slice, c0t2d0s5. The lucreate command
creates a new ZFS boot environment, new-zfsBE. The -p option
is required, because the boot environment is being created in a different
root pool.

Creating a Boot Environment From a Source Other Than
the Currently Running System

If you have an existing
ZFS root pool or UFS boot environment that is not currently used as the active
boot environment, you can use the following example to create a new ZFS boot
environment from this boot environment. After the new ZFS boot environment
is created, this new boot environment can be upgraded and activated at your
convenience.

If you are creating a boot environment from a source other than the
currently running system, you must use the lucreate command
with the -s option. The -s option works the
same as for a UFS file system. The -s option provides the
path to the alternate root (/) file system. This alternate
root (/) file system is the source for the creation of
the new ZFS root pool. The alternate root can be either a UFS (/)
root file system or a ZFS root pool. The copy process might take time, depending
on your system.

The following example shows how the -s option is used
when creating a boot environment on another ZFS root pool.

Example 13–4 How to Create a Boot Environment From a Source
Other Than the Currently Running System

The following command creates a new ZFS root pool from an existing ZFS
root pool. The -n option assigns the name to the boot environment
to be created, new-zfsBE. The -s option
specifies the boot environment, rpool3, to be used as the
source of the copy instead of the currently running boot environment. The
-p option specifies to place the new boot environment in rpool2.
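A minimal sketch of the command the example describes, using the boot environment and pool names given above:

# lucreate -n new-zfsBE -s rpool3 -p rpool2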

Falling Back to a ZFS Boot Environment

If a failure is detected after upgrading or if the application is not
compatible with an upgraded component, you can fall back to the original boot
environment with the luactivate command.

When you have migrated to a ZFS root pool from a UFS boot environment
and you then decide to fall back to the UFS boot environment, you again need
to import any ZFS storage pools that were created in the ZFS boot environment.
These ZFS storage pools are not automatically available in the UFS boot environment.
You will see messages similar to the following example when you switch back
to the UFS boot environment.

# luactivate c0t0d0
WARNING: The following files have changed on both the current boot
environment <new-ZFSbe> zone <global> and the boot environment
to be activated <c0t0d0>: /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <ZFSbe> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized
from the current boot environment <new-ZFSbe> when boot
environment <c0t0d0>

Migrating from a UFS root (/) file system to a ZFS root pool or
creating ZFS boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. When you perform a Solaris
Live Upgrade for a UFS file system, both the command-line parameters and operation
of Solaris Live Upgrade remain unchanged. To perform a Solaris Live Upgrade
on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.

Creating a ZFS Boot Environment on a System With
Non-Global Zones Installed (Overview and Planning)

You can use Solaris Live Upgrade to migrate your UFS root (/) file system
with non-global zones installed to a ZFS root pool. All non-global zones that
are associated with the file system are also copied to the new boot environment.
The following non-global zone migration scenarios are supported:

Pre-Migration Root File System and Zone Combination / Post-Migration Root File System and Zone Combination:

UFS root file system with the non-global zone root directory in the
UFS file system

UFS root file system with the non-global zone root directory in a ZFS
root pool

On a system with a UFS root (/) file system and
non-global zones installed, a non-global zone is migrated as part of the UFS to
ZFS migration if the zone resides in a non-shared file system. Alternatively,
the zone is cloned when you upgrade within the same ZFS pool. If a non-global
zone exists in a shared UFS file system, you must first upgrade the non-global zone,
as in previous Solaris releases, before you can migrate to another ZFS root pool.

This chapter provides step-by-step instructions for migrating from a
UFS root (/) file system to a ZFS root pool on a system
with non-global zones installed. No non-global zones are on a shared file
system in the UFS file system.

How to Migrate a UFS File System to a ZFS Root Pool
on a System With Non-Global Zones

The lucreate command creates a boot environment of
a ZFS root pool from a UFS root (/) file system. A ZFS
root pool must exist before the lucreate operation and
must be created with slices rather than whole disks to be upgradeable and
bootable. This procedure shows how an existing non-global zone associated
with the UFS root (/) file system is copied to the new
boot environment in a ZFS root pool.

In the following example, the existing non-global zone, myzone,
has its non-global zone root in a UFS root (/) file system.
The zone zzone has its zone root in a ZFS file system in
the existing ZFS storage pool, pool. Solaris Live Upgrade
is used to migrate the UFS boot environment, c2t2d0s0,
to a ZFS boot environment, zfs2BE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool,
that is created before the Solaris Live Upgrade operation. The ZFS-based non-global
zone, zzone, is cloned but retained in the ZFS pool, pool, and migrated to the new zfs2BE boot environment.

Complete the following steps the first time you perform a Solaris
Live Upgrade.

Note –

Using Solaris Live Upgrade to create new ZFS boot environments
requires at least the Solaris 10 10/08 release to
be installed. Previous releases do not have the ZFS and Solaris Live Upgrade
software to perform the tasks.

Remove existing Solaris Live Upgrade packages on your system if
necessary. If you are upgrading to a new release, you must install the packages
from that release.

The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg,
comprise the software needed to upgrade by using Solaris Live Upgrade. These
packages include existing software, new features, and bug fixes. If you do
not remove the existing packages and install the new packages on your system
before using Solaris Live Upgrade, upgrading to the target release fails.
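A minimal sketch of the boot environment creation step (a sketch only; the boot environment and pool names are illustrative, match the option descriptions that follow, and assume the ZFS root pool already exists):

# lucreate -c ufsBE -n new-zfsBE -p rpool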

-c ufsBE

Assigns the name ufsBE to the current
UFS boot environment. This option is not required and is used only when the
first boot environment is created. If you run the lucreate command
for the first time and you omit the -c option, the software
creates a default name for you.

-n new-zfsBE

Assigns the name new-zfsBE to the
boot environment to be created. The name must be unique on the system.

-p rpool

Places the newly created ZFS root (/)
file system into the ZFS root pool defined in rpool.

All nonshared non-global zones are copied to the new boot environment
along with critical file systems. The creation of the new ZFS boot environment
might take a while. The UFS file system data is being copied to the ZFS root
pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or
activate the new ZFS boot environment.

(Optional) Verify that the boot environment is complete.

The lustatus command reports whether the boot environment
creation is complete and bootable.

The mount points listed for the new boot environment are temporary until
the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment,
but are shared within the ZFS root pool and boot environments within the root
pool.

In the following example, the existing non-global zone, myzone,
has its non-global zone root in a UFS root (/) file system.
The zone zzone has its zone root in a ZFS file system in
the existing ZFS storage pool, pool. Solaris Live Upgrade
is used to migrate the UFS boot environment, c2t2d0s0,
to a ZFS boot environment, zfs2BE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool,
that is created before the Solaris Live Upgrade operation. The ZFS-based,
non-global zone, zzone, is cloned but retained in the ZFS
pool, pool, and migrated to the new zfs2BE boot
environment.

Next, use the luactivate command to activate the
new ZFS boot environment. For example:

# luactivate zfsBE
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:
setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a
3. Boot to the original boot environment by typing:
boot
**********************************************************************
Modifying boot archive service
Activation of boot environment <ZFSbe> successful.

Reboot the system to the ZFS BE.

# init 6
# svc.startd: The system is coming down. Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

Confirm the new boot environment and the status of the migrated zones
as in this example.
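A minimal sketch of commands that confirm the boot environment status and the migrated zones (a sketch only; the output is omitted):

# lustatus
# zoneadm list -cv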

If you fall back to the UFS boot environment, then you again need to
import any ZFS storage pools that were created in the ZFS boot environment
because they are not automatically available in the UFS boot environment.
You will see messages similar to the following when you switch back to the
UFS boot environment.

# luactivate c1t2d0s0
WARNING: The following files have changed on both the current boot
environment <ZFSbe> zone <global> and the boot environment to be activated <c1t2d0s0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <ZFSbe> zone <global> and the boot environment to be
activated <c1t2d0s0>. These files will not be automatically synchronized
from the current boot environment <ZFSbe> when boot environment <c1t2d0s0>

Additional Resources

For additional information about the topics included in this chapter,
see the resources listed in Table 14–1.

Table 14–1 Additional Resources

Resource

Location

For information about non-global zones, including overview, planning,
and step-by-step instructions