Migrating a UFS File System to a ZFS File System

This procedure describes how to migrate a UFS file system to a ZFS
file system. Creating a boot environment provides a method of copying critical file
systems from an active UFS boot environment to a ZFS root pool. The
lucreate command copies the critical file systems to a new boot environment within
an existing ZFS root pool. User-defined (shareable) file systems are not copied and
are not shared with the source UFS boot environment. Also, /swap is not shared
between the UFS file system and ZFS root pool. For an overview of
critical and shareable file systems, see File System Types.

How to Migrate a UFS File System to a ZFS File System

Note - To migrate an active UFS root (/) file system to a ZFS root
pool, you must provide the name of the root pool. The critical file
systems are copied into the root pool.

Before running Live Upgrade for the first time, you must install the latest
Live Upgrade packages from installation media and install the patches listed in
My Oracle Support knowledge document 1004881.1 - Live Upgrade Software Patch Requirements
(formerly 206844). Search for this knowledge document on the My Oracle Support web site.

The latest packages and patches ensure that you have all the latest bug
fixes and new features in the release. Ensure that you install all the
patches that are relevant to your system before proceeding to create a new
boot environment.

The following substeps describe the steps in the My Oracle Support knowledge document
1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844).

Note - Using Live Upgrade to create new ZFS boot environments requires at least the
Solaris 10 10/08 release to be installed. Previous releases do not have the
ZFS and Live Upgrade software to perform the tasks.

From the My Oracle Support web site, follow the instructions in knowledge document
1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) to remove and add
Live Upgrade packages.

The three Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to
upgrade by using Live Upgrade. These packages include existing software, new features, and bug
fixes. If you do not remove the existing packages and install the new
packages on your system before using Live Upgrade, upgrading to the
target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07
release. If you are using Live Upgrade packages from a release previous to
Solaris 10 8/07, you do not need to remove this package.

# pkgrm SUNWlucfg SUNWluu SUNWlur

Install the new Live Upgrade packages from the release to which you are
upgrading. For instructions, see Installing Live Upgrade.
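
A minimal sketch of adding the packages with the pkgadd command follows; the path to the Product directory on the installation media is an assumption and varies by release and media.

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu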

Before running Live Upgrade, you must install the required patches. These
patches ensure that you have all the latest bug fixes and new features
in the release.

Ensure that you have the most recently updated patch list by searching for
knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844)
on the My Oracle Support web site.

If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
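
Change to the patch directory and install the patches with the patchadd command. The following is a minimal sketch; it assumes the patches were downloaded to /var/tmp/lupatches, and patch_id is a placeholder for the actual patch IDs listed in the knowledge document.

# cd /var/tmp/lupatches
# patchadd -M /var/tmp/lupatches patch_id

Reboot the system if any of the installed patches require a reboot.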

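Create the new boot environment by running the lucreate command. The following is a sketch of the command; ufsBE, new-zfsBE, and rpool are placeholder names that are described below.

# lucreate -c ufsBE -n new-zfsBE -p rpool
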
-c ufsBE

The name for the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.

-n new-zfsBE

The name for the boot environment to be created. The name must be unique on the system.

-p rpool

Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.

The creation of the new ZFS boot environment might take a while.
The UFS file system data is being copied to the ZFS root pool.
When the inactive boot environment has been created, you can use the luupgrade
or luactivate command to upgrade or activate the new ZFS boot environment.

The mount points listed for the new boot environment are temporary until the
luactivate command is executed. The dump and swap volumes are not shared
with the original UFS boot environment, but are shared among the ZFS boot
environments within the root pool.
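
You can use the lustatus command to display the status of the boot environments, including which one is active now and which one will be active after the next reboot.

# lustatus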

You can now upgrade and activate the new boot environment.

Example 12-1 Migrating a UFS Root (/) File System to a ZFS Root Pool

In this example, the new ZFS root pool, rpool, is created on a
separate slice, c0t0d0s4. The lucreate command migrates the currently running UFS boot
environment, c0t0d0, to the new ZFS boot environment, new-zfsBE, and places the new boot
environment in rpool.
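
The pool-creation and migration commands are not shown in this example; a sketch of what they might look like, using the device and boot environment names from this example, follows. Output from the lucreate command is omitted.

# zpool create rpool c0t0d0s4
# lucreate -c c0t0d0 -n new-zfsBE -p rpool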

In this example, the new boot environment is upgraded by using the
luupgrade command from an image that is stored in the location indicated with
the -s option.

# luupgrade -n zfsBE -u -s /net/install/export/s10/combined.s10
51135 blocks
miniroot filesystem is <lofs>
Mounting miniroot at
</net/install/export/s10/combined.s10/Solaris_10/Tools/Boot>
Validating the contents of the media
</net/install/export/s10/combined.s10>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version <10_1008>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live
Upgrade requests.
Creating upgrade profile for BE <zfsBE>.
Determining packages to install or upgrade for BE <zfsBE>.
Performing the operating system upgrade of the BE <zfsBE>.
CAUTION: Interrupting this process may leave the boot environment
unstable or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot
environment <zfsBE> contains a log of the upgrade operation.
INFORMATION: The file /var/sadm/system/data/upgrade_cleanup on boot
environment <zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all
of the files are located on boot environment <zfsBE>.
Before you activate boot environment <zfsBE>, determine if any
additional system maintenance is required or if additional media
of the software distribution must be installed.
The Solaris upgrade of the boot environment <zfsBE> is complete.

The new boot environment can be activated anytime after it is created.

# luactivate new-zfsBE
A Live Upgrade Sync operation will be performed on startup of boot
environment <new-zfsBE>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following
process
needs to be followed to fallback to the currently working boot
environment:
1. Enter the PROM monitor (ok prompt).
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c1t0d0s0 /mnt
4. Run <luactivate> utility with out any arguments from the current boot
environment root slice, as shown below:
/mnt/sbin/luactivate
5. luactivate, activates the previous working boot environment and
indicates the result.
6. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.

# init 6
# svc.startd: The system is coming down. Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

If you fall back to the UFS boot environment, you must re-import any
ZFS storage pools that were created in the ZFS boot environment, because they
are not automatically available in the UFS boot environment.
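
Running the zpool import command with no arguments lists the pools that are available for import; you can then import each pool by name. In the following sketch, pool_name is a placeholder for a pool that was created while the ZFS boot environment was active.

# zpool import
# zpool import pool_name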
You will see messages similar to the following example when you switch back to
the UFS boot environment.

# luactivate c0t0d0
WARNING: The following files have changed on both the current boot
environment <new-zfsBE> zone <global> and the boot environment
to be activated <c0t0d0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <new-zfsBE> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized
from the current boot environment <new-zfsBE> when boot environment <c0t0d0>