Migrating a UFS Root File System to a ZFS Root File
System (Oracle Solaris Live Upgrade)

Oracle Solaris Live Upgrade features related to UFS components are still
available, and they work as in previous Solaris releases.

The following features are also available:

When you migrate your UFS root file system to a ZFS root file
system, you must designate an existing ZFS storage pool with the -p option.

If the UFS root file system has components on different slices,
they are migrated to the ZFS root pool.

You can migrate a system with zones, but the supported configurations
are limited in the Solaris 10 10/08 release. More zone configurations are
supported starting in the Solaris 10 5/09 release. For more information, see
the following sections:

The basic process for migrating a UFS root file system to a ZFS root
file system follows:

Install the Solaris 10 10/08, Solaris 10
5/09, Solaris 10 10/09, or Oracle Solaris 10 9/10 release
or use the standard upgrade program to upgrade from a previous Solaris
10 release on any
supported SPARC based or x86 based system.

When you are running at least the Solaris
10 10/08 release, create a ZFS storage pool for your ZFS root file system.

Use Oracle Solaris Live Upgrade to migrate your UFS root file
system to a ZFS root file system.
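The three steps above can be sketched as a command sequence. This is an illustrative outline, not taken verbatim from this document: the disk devices, pool name (rpool), and BE names (ufsBE, zfsBE) are assumptions.

```shell
# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6
```

Note that the pool is built from slices (s0) rather than whole disks, which is required for booting, and that the system is restarted with init rather than reboot, as required after luactivate.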

ZFS Migration Issues With Oracle Solaris Live Upgrade

Review the following issues before you use Oracle Solaris Live Upgrade
to migrate your UFS root file system to a ZFS root file system:

The Oracle Solaris installation GUI's standard upgrade option
is not available for migrating from a UFS to a ZFS root file system. To migrate
from a UFS file system, you must use Oracle Solaris Live Upgrade.

You must create the ZFS storage pool that will be used for
booting before the Oracle Solaris Live Upgrade operation. In addition, due
to current boot limitations, the ZFS root pool must be created with slices
instead of whole disks. For example:

# zpool create rpool mirror c1t0d0s0 c1t1d0s0

Before you create the new pool, ensure that the disks to be used in
the pool have an SMI (VTOC) label instead of an EFI label. If the disk is
relabeled with an SMI label, ensure that the labeling process did not change
the partitioning scheme. In most cases, all of the disk's capacity should
be in the slices that are intended for the root pool.
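One way to check the label and slice layout (a sketch; the device name is illustrative) is to print the VTOC with the prtvtoc utility and, if the disk needs relabeling, to run the format utility in expert mode, where the label subcommand offers a choice between SMI and EFI labels:

```shell
# prtvtoc /dev/rdsk/c1t0d0s2
# format -e c1t0d0
```

After relabeling, run prtvtoc again to confirm that the slice layout still matches what the root pool expects.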

You cannot use Oracle Solaris Live Upgrade to create a UFS
BE from a ZFS BE. If you migrate your UFS BE to a ZFS BE and you retain your
UFS BE, you can boot from either your UFS BE or your ZFS BE.

Do not rename your ZFS BEs with the zfs rename command
because the Oracle Solaris Live Upgrade feature cannot detect the name change.
Subsequent commands, such as ludelete, will fail. In fact,
do not rename your ZFS pools or file systems if you have existing BEs that
you want to continue to use.

When creating an alternative BE that is a clone of the primary
BE, you cannot use the -f, -x, -y, -Y, and -z options to include or exclude files from
the primary BE. You can still use the inclusion and exclusion option set in
the following cases:

UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)
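For example, a UFS-to-ZFS migration can still exclude a directory from the copy. This is a sketch: the BE names, pool name, and excluded path are illustrative, not taken from this document.

```shell
# lucreate -c ufsBE -n zfsBE -p rpool -x /var/tmp
```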

Although you can use Oracle Solaris Live Upgrade to upgrade
your UFS root file system to a ZFS root file system, you cannot use Oracle
Solaris Live Upgrade to upgrade non-root or shared file systems.

You cannot use the lu command to create
or migrate a ZFS root file system.

Using Oracle Solaris Live Upgrade to Migrate to a
ZFS Root File System (Without Zones)

The following examples show how to migrate a UFS root file system to a ZFS
root file system.

If you are migrating or updating a system with
zones, see the following sections:

Example 5–3 Using Oracle Solaris Live Upgrade to Migrate a
UFS Root File System to a ZFS Root File System

The
following example shows how to create a BE of a ZFS root file system from
a UFS root file system. The current BE, ufsBE, which contains
a UFS root file system, is identified by the -c option. If
you do not include the optional -c option, the current BE
name defaults to the device name. The new BE, zfsBE, is
identified by the -n option. A ZFS storage pool must exist
before the lucreate operation.

The
ZFS storage pool must be created with slices rather than with whole disks
to be upgradeable and bootable. Before you create the new pool, ensure that
the disks to be used in the pool have an SMI (VTOC) label instead of an EFI
label. If the disk is relabeled with an SMI label, ensure that the labeling
process did not change the partitioning scheme. In most cases, all of the
disk's capacity should be in the slice that is intended for the root pool.
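Given those constraints, the lucreate operation that this example describes takes the following form. The BE names come from the example text; the pool name rpool is an assumption.

```shell
# lucreate -c ufsBE -n zfsBE -p rpool
```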

Next, use the luactivate command to activate the
new ZFS BE. For example:

# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.

Example 5–4 Creating a ZFS BE From a ZFS BE (lucreate)

# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.

Example 5–5 Upgrading Your ZFS BE (luupgrade)

You can upgrade your ZFS BE with additional packages or patches.

The
basic process follows:

Create an alternate BE with the lucreate command.

Activate and boot from the alternate BE.

Upgrade your primary ZFS BE with the luupgrade command
to add packages or patches.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
# luupgrade -p -n zfsBE -s /net/install/export/s10up/Solaris_10/Product SUNWchxge
Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.
Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>
Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.
Using </a> as the package base directory.
## Processing package information.
## Processing system information.
4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user
permission during the process of installing this package.
Do you want to continue with the installation of <SUNWchxge> [y,n,?] y
Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>
## Installing part 1 of 1.
## Executing postinstall script.
Installation of <SUNWchxge> was successful.
Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.

Using Oracle Solaris Live
Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08)
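
The command that the next paragraph refers to is not included in this excerpt. A plausible form, assuming the new boot environment is named s10BE2 (the name used in the activation step below) and the root pool is named rpool, is:

```shell
# lucreate -n s10BE2 -p rpool
```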

This command establishes datasets in the root pool for the new boot
environment and copies the current boot environment (including the zones)
to those datasets.

Activate the new ZFS boot environment.

# luactivate s10BE2

Now, the system is running a ZFS root file system, but the zone roots
on UFS are still in the UFS root file system. The next steps are required
to fully migrate the UFS zones to a supported ZFS configuration.

Reboot the system.

# init 6

Migrate the zones to a ZFS BE.

Boot the zones.

Create another ZFS BE within the pool.

# lucreate -n s10BE3

Activate the new boot environment.

# luactivate s10BE3

Reboot the system.

# init 6

This step verifies that the ZFS BE and the zones are booted.

Resolve any potential mount-point problems.

Due to
a bug in Oracle Solaris Live Upgrade, the inactive boot environment might
fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment
has an invalid mount point.
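To locate and repair such mount points, review the zfs list output for the BE datasets and reset any invalid values. This is a sketch; the dataset name rpool/ROOT/s10BE3 is illustrative.

```shell
# zfs list -r -o name,mountpoint rpool/ROOT/s10BE3
# zfs inherit -r mountpoint rpool/ROOT/s10BE3
# zfs set mountpoint=/ rpool/ROOT/s10BE3
```

Look for leftover temporary mount points in the zfs list output; after resetting the mount point of the BE dataset (and of any zone dataset with the same problem), reboot the system.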

When the option to boot a specific
boot environment is presented, either in the GRUB menu or at the OpenBoot
PROM prompt, select the boot environment whose mount points were just corrected.

How to Configure a ZFS Root File System With Zone
Roots on ZFS (Solaris 10 10/08)

This procedure explains how to set up a ZFS root file system and ZFS
zone root configuration that can be upgraded or patched. In this configuration,
the ZFS zone roots are created as ZFS datasets.

In the steps that follow, the example pool name is rpool and
the example name of the active boot environment is s10BE.
The name for the zones dataset can be any legal dataset name. In the following
example, the zones dataset name is zones.

Install the system with a ZFS root, either by using the Solaris
interactive text installer or the Solaris JumpStart installation method.

# zonecfg -z zoneA
zoneA: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zoneA> create
zonecfg:zoneA> set zonepath=/zones/zonerootA

You can enable the zones to boot automatically when the system is booted
by using the following syntax:

zonecfg:zoneA> set autoboot=true
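The zonecfg session shown above must be committed and exited before the zone can be installed. A typical ending (an assumption, since the excerpt omits it) is:

```
zonecfg:zoneA> verify
zonecfg:zoneA> commit
zonecfg:zoneA> exit
```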

Install the zone.

# zoneadm -z zoneA install

Boot the zone.

# zoneadm -z zoneA boot

How to Upgrade or Patch a ZFS Root File System With
Zone Roots on ZFS (Solaris 10 10/08)

Use this procedure when you need to upgrade or patch a ZFS root file
system with zone roots on ZFS. These updates can either be a system upgrade
or the application of patches.

In the steps that follow, newBE is the example name
of the boot environment that is upgraded or patched.

Create the boot environment to upgrade or patch.

# lucreate -n newBE

The existing boot environment, including all the zones, is cloned. A
dataset is created for each dataset in the original boot environment. The
new datasets are created in the same pool as the current root pool.
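You can confirm the cloned datasets after lucreate completes (a sketch; rpool is the assumed root pool name):

```shell
# zfs list -r rpool/ROOT
```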

Select one of the following to upgrade the system or apply patches
to the new boot environment:

Upgrade the system.

# luupgrade -u -n newBE -s /net/install/export/s10u7/latest

where the -s option specifies the location of the Solaris
installation medium.

Apply patches to the new boot environment.

# luupgrade -t -n newBE -s /patchdir 139147-02 157347-14

Activate the new boot environment.

# luactivate newBE

Boot from the newly activated boot environment.

# init 6

Resolve any potential mount-point problems.

Due to
a bug in the Oracle Solaris Live Upgrade feature, the inactive boot environment
might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot
environment has an invalid mount point.

When the option to boot a specific
boot environment is presented, either in the GRUB menu or at the OpenBoot
PROM prompt, select the boot environment whose mount points were just corrected.

Using Oracle Solaris
Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10
5/09)

You can use the Oracle Solaris Live Upgrade feature to migrate or upgrade
a system with zones starting in the Solaris 10 10/08 release. Additional sparse-root
and whole-root zone configurations are supported by Oracle Solaris Live Upgrade
starting in the Solaris 10 5/09 release.

Consider the following points when using Oracle Solaris Live Upgrade
with ZFS and zones starting in at least the Solaris 10 5/09 release:

To use Oracle Solaris Live Upgrade with zone configurations
that are supported starting in at least the Solaris 10 5/09 release, you must
first upgrade your system to at least the Solaris 10 5/09 release by using
the standard upgrade program.

Then, with Oracle Solaris Live Upgrade, you can either migrate
your UFS root file system with zone roots to a ZFS root file system or you
can upgrade or patch your ZFS root file system and zone roots.

Review the supported zone configurations before using Oracle Solaris
Live Upgrade to migrate or upgrade a system with zones.

Migrate a UFS root file system to
a ZFS root file system – The following configurations of
zone roots are supported:

In a directory in the UFS root file system

In a subdirectory of a mount point in the UFS root file system

UFS root file system with a zone root in a UFS root file system
directory or in a subdirectory of a UFS root file system mount point and a
ZFS non-root pool with a zone root

The following UFS/zone configuration is not supported: UFS root file
system that has a zone root as a mount point.

Migrate or upgrade a ZFS root file
system – The following configurations of zone roots are supported:

In a dataset in the ZFS root pool. In some cases, if a dataset
for the zone root is not provided before the Oracle Solaris Live Upgrade operation,
a dataset for the zone root (zoneds) will be created by
Oracle Solaris Live Upgrade.

In a subdirectory of the ZFS root file system

In a dataset outside of the ZFS root file system

In a subdirectory of a dataset outside of the ZFS root file
system

In a dataset in a non-root pool. In the following example, zonepool/zones is a dataset that contains the zone roots, and rpool contains the ZFS BE:
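The example itself is elided from this excerpt; a configuration of this shape could be created as follows. The disk devices are illustrative, not taken from this document.

```shell
# zpool create zonepool mirror c2t0d0 c2t1d0
# zfs create zonepool/zones
# zfs set mountpoint=/zonepool/zones zonepool/zones
```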

Do not create zone roots in nested directories, for example, zones/zone1 and zones/zone1/zone2. Otherwise,
mounting might fail at boot time.

How to Create a ZFS BE With a ZFS Root File System
and a Zone Root (at Least Solaris 10 5/09)

Use this procedure after you have performed an initial installation
of at least the Solaris 10 5/09 release to create a ZFS root file system.
Also use this procedure after you have used the luupgrade feature
to upgrade a ZFS root file system to at least the Solaris 10 5/09 release.
A ZFS BE that is created using this procedure can then be upgraded or patched.

In the steps that follow, the example Oracle Solaris 10 9/10 system
has a ZFS root file system and a zone root dataset in /rpool/zones.
A ZFS BE named zfs2BE is created and can then be upgraded
or patched.

How to Upgrade or Patch a ZFS Root File System With
Zone Roots (at Least Solaris 10 5/09)

Use this procedure when you need to upgrade or patch a ZFS root file
system with zone roots in at least the Solaris 10 5/09 release. These updates
can either be a system upgrade or the application of patches.

In the steps that follow, zfs2BE is the example
name of the boot environment that is upgraded or patched.

Example 5–6 Upgrading a ZFS Root File System With a Zone Root to an Oracle Solaris
10 9/10 ZFS Root File System

In this example, a ZFS BE (zfsBE), which was created
on a Solaris 10 10/09 system with a ZFS root file system and zone root in
a non-root pool, is upgraded to the Oracle Solaris 10 9/10 release. This process
can take a long time. Then, the upgraded BE (zfs2BE) is
activated. Ensure that the zones are installed and booted before attempting
the upgrade.

In this example, the zonepool pool, the /zonepool/zones dataset, and the zfszone zone are created as
follows:
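The creation commands are elided from this excerpt; a sequence consistent with the names given might look like the following. The disk devices are illustrative, not taken from this document.

```
# zpool create zonepool mirror c2t5d0 c2t6d0
# zfs create zonepool/zones
# zonecfg -z zfszone
zonecfg:zfszone> create
zonecfg:zfszone> set zonepath=/zonepool/zones
zonecfg:zfszone> exit
# zoneadm -z zfszone install
# zoneadm -z zfszone boot
```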

Example 5–7 Migrating a UFS Root File System With a Zone Root to a ZFS Root File
System

In this example, an Oracle Solaris 10 9/10 system with a UFS root file
system and a zone root (/uzone/ufszone), as well as a
ZFS non-root pool (pool) and a zone root (/pool/zfszone), is migrated to a ZFS root file system. Ensure that the ZFS root
pool is created and that the zones are installed and booted before attempting
the migration.
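Under those prerequisites, the migration itself follows the same pattern as Example 5–3. This is a sketch; rpool is the assumed target root pool, and the BE names are illustrative.

```shell
# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6
```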