Migrating to a ZFS Root File System or Updating a ZFS Root File System (Live Upgrade)

Live Upgrade features related to UFS components are still available, and they work
as in previous releases.

The following features are available:

UFS BE to ZFS BE migration

When you migrate your UFS root file system to a ZFS root file system, you must designate an existing ZFS storage pool with the -p option.

If the UFS root file system has components on different slices, they are migrated to the ZFS root pool.

In the Oracle Solaris 10 8/11 release, you can specify a separate /var file system when you migrate your UFS root file system to a ZFS root file system.

The basic process for migrating a UFS root file system to a ZFS root file system follows:

Install the required Live Upgrade patches, if needed.

Install a current Oracle Solaris 10 release (Solaris 10 10/08 to Oracle Solaris 10 8/11), or use the standard upgrade program to upgrade from a previous Oracle Solaris 10 release on any supported SPARC based or x86 based system.

When you are running at least the Solaris 10 10/08 release, create a ZFS storage pool for your ZFS root file system.

Use Live Upgrade to migrate your UFS root file system to a ZFS root file system.

Activate your ZFS BE with the luactivate command.
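Taken together, these steps amount to a short command sequence. The following is a minimal sketch, assuming a mirrored root pool named rpool on slices c1t0d0s0 and c1t1d0s0 and a new BE named zfsBE (all names are illustrative):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6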

Patch or upgrade a ZFS BE

You can use the luupgrade command to patch or upgrade an existing ZFS BE. You can also use luupgrade to upgrade an alternate ZFS BE with a ZFS flash archive. For information, see Example 4-8.

Live Upgrade can use the ZFS snapshot and clone features when you create a new ZFS BE in the same pool. So, BE creation is much faster than in previous releases.

Zone migration support – You can migrate a system with zones, but the supported configurations are limited in the Solaris 10 10/08 release. More zone configurations are supported starting in the Solaris 10 5/09 release. For more information, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).

ZFS Migration Issues With Live Upgrade

Review the following issues before you use Live Upgrade to migrate your UFS
root file system to a ZFS root file system:

The Oracle Solaris installation GUI's standard upgrade option is not available for migrating from a UFS root file system to a ZFS root file system. To migrate from a UFS file system, you must use Live Upgrade.

You must create the ZFS storage pool that will be used for booting before the Live Upgrade operation. In addition, due to current boot limitations, the ZFS root pool must be created with slices instead of whole disks. For example:

# zpool create rpool mirror c1t0d0s0 c1t1d0s0

Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slices that are intended for the root pool.

You cannot use Oracle Solaris Live Upgrade to create a UFS BE from a ZFS BE. If you migrate your UFS BE to a ZFS BE and you retain your UFS BE, you can boot from either your UFS BE or your ZFS BE.

Do not rename your ZFS BEs with the zfs rename command because Live Upgrade cannot detect the name change. Subsequent commands, such as ludelete, will fail. In fact, do not rename your ZFS pools or file systems if you have existing BEs that you want to continue to use.

When creating an alternate BE that is a clone of the primary BE, you cannot use the -f, -x, -y, -Y, and -z options to include or exclude files from the primary BE. You can still use the inclusion and exclusion option set in the following cases:

UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)

Although you can use Live Upgrade to upgrade your UFS root file system to a ZFS root file system, you cannot use Live Upgrade to upgrade non-root or shared file systems.

You cannot use the lu command to create or migrate a ZFS root file system.

Example 4-4 Using Live Upgrade to Migrate a UFS Root File System to a ZFS Root File System

The following example shows how to migrate a UFS root file system
to a ZFS root file system. The current BE, ufsBE, which contains a UFS
root file system, is identified by the -c option. If you do not
include the optional -c option, the current BE name defaults to the device
name. The new BE, zfsBE, is identified by the -n option. A ZFS
storage pool must exist before the lucreate operation is performed.

The ZFS storage pool must be created with slices rather than with
whole disks to be upgradeable and bootable. Before you create the new pool,
ensure that the disks to be used in the pool have an SMI
(VTOC) label instead of an EFI label. If the disk is relabeled with
an SMI label, ensure that the labeling process did not change the partitioning
scheme. In most cases, all of the disk's capacity should be in the
slice that is intended for the root pool.
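For example, the pool might be created and the UFS root file system migrated as follows (a sketch; the device names are illustrative):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool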

Next, use the luactivate command to activate the new ZFS BE. For example:

# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.

If you switch back to the UFS BE, you must re-import any
ZFS storage pools that were created while the ZFS BE was booted because
they are not automatically available in the UFS BE.

If the UFS BE is no longer required, you can remove it
with the ludelete command.
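For example, assuming the UFS BE is named ufsBE as above:

# ludelete ufsBE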

Example 4-5 Using Live Upgrade to Create a ZFS BE From a UFS BE (With a Separate /var)

In the Oracle Solaris 10 8/11 release, you can use the lucreate -D option to specify that you want a separate /var file system created
when you migrate a UFS root file system to a ZFS root
file system. In the following example, the existing UFS BE is migrated to
a ZFS BE with a separate /var file system.

# lucreate -n zfs2BE -D /var
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.

Example 4-7 Update Your ZFS BE (luupgrade)

You can update your ZFS BE with additional packages or patches.

The basic process follows:

Create an alternate BE with the lucreate command.

Activate and boot from the alternate BE.

Update your primary ZFS BE with the luupgrade command to add packages or patches.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
# luupgrade -p -n zfsBE -s /net/install/export/s10up/Solaris_10/Product SUNWchxge
Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.
Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>
Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.
Using </a> as the package base directory.
## Processing package information.
## Processing system information.
4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user
permission during the process of installing this package.
Do you want to continue with the installation of <SUNWchxge> [y,n,?] y
Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>
## Installing part 1 of 1.
## Executing postinstall script.
Installation of <SUNWchxge> was successful.
Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.

Alternatively, you can create a new BE and then upgrade it to a later
Oracle Solaris release. For example:

# luupgrade -u -n newBE -s /net/install/export/s10up/latest

where the -s option specifies the location of the Solaris installation medium.

Example 4-8 Creating a ZFS BE With a ZFS Flash Archive (luupgrade)

In the Oracle Solaris 10 8/11 release, you can use the luupgrade
command to create a ZFS BE from an existing ZFS flash archive. The
basic process is as follows:
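First, create the new boot environment that the subsequent steps activate. A minimal sketch, assuming the new BE is named s10BE2 (it is activated under that name below) and the root pool is named rpool:

# lucreate -n s10BE2 -p rpool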

This command establishes datasets in the root pool for the new BE
and copies the current BE (including the zones) to those datasets.

Activate the new ZFS boot environment.

# luactivate s10BE2

Now, the system is running a ZFS root file system, but the
zone roots on UFS are still in the UFS root file system. The
next steps are required to fully migrate the UFS zones to a supported
ZFS configuration.

Reboot the system.

# init 6

Migrate the zones to a ZFS BE.

Boot the zones.

Create another ZFS BE within the pool.

# lucreate -n s10BE3

Activate the new boot environment.

# luactivate s10BE3

Reboot the system.

# init 6

This step verifies that the ZFS BE and the zones are booted.

Resolve any potential mount-point problems.

Due to a bug in Live Upgrade, the inactive BE might fail to
boot because a ZFS dataset or a zone's ZFS dataset in the BE
has an invalid mount point.
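A minimal sketch of reviewing and resetting the mount points, assuming the BE's root dataset is rpool/ROOT/s10up (the dataset name is illustrative); reboot after the mount points are reset:

# zfs list -r -o name,mountpoint rpool/ROOT/s10up
# zfs inherit -r mountpoint rpool/ROOT/s10up
# zfs set mountpoint=/ rpool/ROOT/s10up
# init 6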

When the option to boot a specific BE is presented, either at the
OpenBoot PROM prompt or in the GRUB menu, select the BE whose mount points
were just corrected.

How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)

This procedure explains how to set up a ZFS root file system
and ZFS zone root configuration that can be upgraded or patched. In this
configuration, the ZFS zone roots are created as ZFS datasets.

In the steps that follow, the example pool name is rpool and the
example name of the active boot environment is s10BE. The name for the
zones dataset can be any valid dataset name. In the following example, the
zones dataset name is zones.

Install the system with a ZFS root, either by using the interactive text
installer or the JumpStart installation method.
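Subsequent steps create the zones dataset and a zone root beneath it. A minimal sketch using the example names above (the canmount setting and the zone root name zonerootA are assumptions):

# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
# zfs mount rpool/ROOT/s10BE/zones
# zfs create rpool/ROOT/s10BE/zones/zonerootA
# chmod 700 /zones/zonerootA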


Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09)

You can use the Oracle Solaris Live Upgrade feature to migrate or
upgrade a system with zones starting in the Solaris 10 10/08 release. Additional
sparse-root and whole-root zone configurations are supported by Live Upgrade starting in the
Solaris 10 5/09 release.

Consider the following points when using Oracle Solaris Live Upgrade with ZFS and
zones in at least the Solaris 10 5/09 release:

To use Live Upgrade with zone configurations that are supported starting in the Solaris 10 5/09 release, you must first upgrade your system to at least the Solaris 10 5/09 release by using the standard upgrade program.

Then, with Live Upgrade, you can migrate your UFS root file system with zone roots to a ZFS root file system, or you can upgrade or patch your ZFS root file system and zone roots.

Review the supported zone configurations before using Oracle Solaris Live Upgrade to migrate
or upgrade a system with zones.

Migrate a UFS root file system to a ZFS root file system – The following configurations of zone roots are supported:

In a directory in the UFS root file system

In a subdirectory of a mount point in the UFS root file system

A UFS root file system with a zone root in a UFS root file system directory or in a subdirectory of a UFS root file system mount point and a ZFS non-root pool with a zone root

A UFS root file system that has a zone root as a mount point is not supported.

Migrate or upgrade a ZFS root file system – The following configurations of zone roots are supported:

In a file system in a ZFS root or a non-root pool. For example, /zonepool/zones is acceptable. In some cases, if a file system for the zone root is not provided before the Live Upgrade operation is performed, a file system for the zone root (zoneds) is created by Live Upgrade.

In a descendent file system or subdirectory of a ZFS file system as long as different zone paths are not nested. For example, /zonepool/zones/zone1 and /zonepool/zones/zone1_dir are acceptable.

In the following example, zonepool/zones is a file system that contains the zone roots, and rpool contains the ZFS BE.
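A dataset layout of this kind might look as follows (the listing is illustrative):

# zfs list -r -o name,mountpoint zonepool
NAME             MOUNTPOINT
zonepool         /zonepool
zonepool/zones   /zonepool/zones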

Live Upgrade takes snapshots of and clones the zones in zonepool and the rpool BE if you use this syntax:

# lucreate -n newBE

The newBE BE in rpool/ROOT/newBE is created. When activated, newBE provides access to the zonepool components.

In the preceding example, if /zonepool/zones were a subdirectory and not a separate file system, then Live Upgrade would migrate it as a component of the root pool, rpool.

The following ZFS and zone configuration is not supported:

Live Upgrade cannot be used to create an alternate BE when the source BE has a non-global zone with a zone path set to the mount point of a top-level pool file system. For example, if the zonepool pool has a file system mounted as /zonepool, you cannot have a non-global zone with a zone path set to /zonepool.

Do not add a file system entry for a non-global zone in the global zone's /etc/vfstab file. Instead, use the zonecfg add fs feature to add a file system to a non-global zone.
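A minimal sketch of adding a ZFS file system to a non-global zone with zonecfg (the zone name, dataset, and mount point are illustrative):

# zonecfg -z zfszone
zonecfg:zfszone> add fs
zonecfg:zfszone:fs> set type=zfs
zonecfg:zfszone:fs> set special=zonepool/shared
zonecfg:zfszone:fs> set dir=/shared
zonecfg:zfszone:fs> end
zonecfg:zfszone> exit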

Zone migration or upgrade information for both UFS and ZFS – Review the following considerations that might affect a migration or an upgrade of either a UFS or a ZFS environment:

Do not create zone roots in nested directories, for example, zones/zone1 and zones/zone1/zone2. Otherwise, mounting might fail at boot time.

How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)

Use this procedure after you have performed an initial installation of at least
the Solaris 10 5/09 release to create a ZFS root file system. Also
use this procedure after you have used the luupgrade command to upgrade a
ZFS root file system to at least the Solaris 10 5/09 release. A
ZFS BE that is created using this procedure can then be upgraded
or patched.

In the steps that follow, the example Oracle Solaris 10 9/10 system has
a ZFS root file system and a zone root dataset in /rpool/zones. A
ZFS BE named zfs2BE is created and can then be upgraded or patched.
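For example, the new BE might be created like this (names as in the text above):

# lucreate -n zfs2BE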

How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)

Use this procedure when you need to upgrade or patch a ZFS
root file system with zone roots in at least the Solaris 10 5/09
release. These updates can consist of either a system upgrade or the application
of patches.

In the steps that follow, zfs2BE is the example name of the BE that
is upgraded or patched.
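For example, the BE might be upgraded from an installation image or patched along these lines (a sketch; the image path, patch directory, and patch-id are placeholders):

# luupgrade -u -n zfs2BE -s /net/install/export/s10up/latest
# luupgrade -t -n zfs2BE -s /patchdir patch-id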

Example 4-9 Upgrading a ZFS Root File System With a Zone Root to an Oracle Solaris 10 9/10 ZFS Root File System

In this example, a ZFS BE (zfsBE), which was created on a Solaris
10 10/09 system with a ZFS root file system and zone root
in a non-root pool, is upgraded to the Oracle Solaris 10 9/10 release.
This process can take a long time. Then, the upgraded BE (zfs2BE) is activated.
Ensure that the zones are installed and booted before attempting the upgrade.

In this example, the zonepool pool, the /zonepool/zones dataset, and the zfszone
zone are created before the upgrade begins.
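A minimal sketch of that setup (the disk devices are assumptions):

# zpool create zonepool mirror c2t1d0 c2t5d0
# zfs create zonepool/zones
# chmod 700 /zonepool/zones
# zonecfg -z zfszone
zonecfg:zfszone> create
zonecfg:zfszone> set zonepath=/zonepool/zones
zonecfg:zfszone> verify
zonecfg:zfszone> exit
# zoneadm -z zfszone install
# zoneadm -z zfszone boot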

Example 4-10 Migrating a UFS Root File System With a Zone Root to a ZFS Root File System

In this example, an Oracle Solaris 10 9/10 system with a UFS
root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root
pool (pool) and a zone root (/pool/zfszone), is migrated to a ZFS
root file system. Ensure that the ZFS root pool is created and that
the zones are installed and booted before attempting the migration.
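Once the root pool exists, the migration itself is again a single lucreate operation, followed by activation (a sketch; names are illustrative):

# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6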