The purpose of this guide is to provide examples of configuring ZFS on specific systems or storage components. The ZFS configuration examples in this guide include systems with 2, 4, 8, and 48 disks and, eventually, ZFS configurations for systems with storage arrays.

Consider the following best practices when configuring ZFS on any hardware:

Run ZFS on a 64-bit system with at least 1 Gbyte of memory.

Always use ZFS redundancy, such as a mirrored or RAID-Z configuration, in a production environment, regardless of the underlying storage technology.

Set up ZFS hot spares to reduce the impact of hardware failures.

Use whole disks rather than slices for a ZFS configuration in a production environment. Whole disks generally provide better performance and a simpler replacement and recovery process.

Use two disks for a mirrored root pool, and then use additional disks to create a redundant non-root pool, if necessary.
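As a minimal sketch that combines several of these practices, a redundant non-root pool with a hot spare could be created as follows. The pool and device names are illustrative:

# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 spare c1t4d0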

For detailed ZFS best practice information, see the ZFS Best Practices Guide.

Consider the following guidelines when configuring ZFS on systems with two disks.

Starting in the Solaris Express Community Edition (SXCE) build 90 release and in the Solaris 10 10/08 release, you can install and boot from a ZFS mirrored root pool. However, a bootable ZFS root pool must be created from disk slices. For example:

mirror c0t0d0s0 and c0t1d0s0

A bootable ZFS root pool can be created during an initial installation or a Custom JumpStart installation.
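If the root pool was installed on a single slice, the second disk's slice can be attached after installation to convert the pool to a mirror. This is a minimal sketch, assuming the default root pool name rpool and the slices shown above; on x86 systems, you would also install the boot blocks on the attached disk with installgrub:

# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0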

In Solaris 10 releases that do not support a ZFS root pool, use Solaris Volume Manager (SVM) mirroring to mirror the root slice and swap areas across the two disks.

Use the remaining slices to create a ZFS mirrored configuration. For example:

mirror c0t0d0s7 and c0t1d0s7
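A minimal sketch of creating this pool follows; the pool name tank is illustrative:

# zpool create tank mirror c0t0d0s7 c0t1d0s7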

Using slices in a ZFS configuration is not recommended for a production environment. However, if you want to create a bootable ZFS pool in the SXCE build 90 release, you can create a slice that represents the entire disk. The other alternative is to add more disks so that you can create a redundant ZFS configuration across whole disks.

The x4500 has 6 controllers with 8 disks each, for a total of 48 disks. By default, this system is configured with raidz1 devices composed of disks on each of the 6 controllers. This redundant configuration is optimized for space with single-parity data protection, not for performance.
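A pool that follows this kind of layout could be created as follows. This is a sketch with illustrative device names, showing only two of the raidz1 devices, each built from one disk per controller:

# zpool create tank raidz1 c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz1 c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0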

Consider the following general configuration guidelines if the default configuration doesn't meet your needs:

In the SXCE release, or starting in the Solaris 10 10/08 release, mirror the boot disks across the controllers during an initial installation or a Custom JumpStart installation, if your configuration allows it. For example, mirror c4t0d0 and c5t0d0.

In the following example, the rzpool storage pool is created with 4 raidz2 devices of 6 disks each:

* This raidz2 configuration provides approximately 12.5 Tbytes of file system space.
* c0t0d0, c1t0d0, c6t0d0, and c7t0d0 are used as hot spares.

Depending on your shell environment, you might run into a maximum character line limit, so the commands are separated into different steps.
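One possible command sequence follows; the device names are illustrative, with each raidz2 device taking one disk from each of the 6 controllers:

# zpool create rzpool raidz2 c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz2 c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0
# zpool add rzpool raidz2 c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0 \
    raidz2 c0t4d0 c1t4d0 c4t4d0 c5t4d0 c6t4d0 c7t4d0
# zpool add rzpool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0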

In this scenario, you need to use the hardware RAID capability of the array. You must decide which RAID level to use in the array, and then how many LUNs to create and at what size. See #Choosing_Storage_Array_Redundancy_With_ZFS for a description of using redundancy at the array level with ZFS.

Presenting LUNs that consist of a set of striped disks in an array

This configuration is not recommended unless you use a ZFS mirrored configuration on top of the striped LUNs. Striped LUNs do not provide enough redundancy to be a reliable configuration. If a disk fails, you must replace the bad disk, re-create the stripe on the array, and have ZFS resilver the data, unless hot spares are configured.
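If you must use striped LUNs, a minimal sketch of the recommended approach is to mirror LUNs from separate stripes, ideally on separate arrays, so that ZFS can recover from a LUN failure. The LUN names are illustrative:

# zpool create tank mirror c2t0d0 c3t0d0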

Consider the following advantages and disadvantages when choosing a storage array redundancy configuration:

Presenting mirrored LUNs from the array to ZFS

* Advantage: If a disk fails in the array, the other disk in the mirror continues to present a viable LUN to ZFS. You
replace the disk in the array, the array does the resilvering, and ZFS remains oblivious to this operation.
* Disadvantage: Additional disk space is consumed by the mirrored configuration in the array.

* Advantage: If you create a ZFS raidz configuration on top of the mirrored LUNs, a raidz1 configuration consumes some disk space for parity but protects you from a full mirror failure. A raidz2 configuration is probably overkill, because your data is already mirrored in the array.

* For best reliability and space optimization, use a mirrored ZFS configuration on top of a hardware RAID-5 configuration rather than a ZFS raidz configuration on top of a hardware RAID-5 configuration. Otherwise, time is spent calculating parity in the software *and* the hardware. The recommended approach is shown in the sketch after this list.
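For example, a minimal sketch of a ZFS mirror across two hardware RAID-5 LUNs, where the array handles parity and ZFS handles mirroring, follows. The LUN names are illustrative:

# zpool create mpool mirror c2t0d0 c3t0d0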

The Sun StorEdge 3510 holds 12 disks that range in size up to 146 Gbytes.

If presenting 12 x 146-Gbyte disks to your FC fabric provides the granularity you need, this configuration might provide optimal space utilization. In this case, 12 x 146-Gbyte LUNs would be available to the hosts attached to the FC fabric. You could use them all on one system or divide them among multiple systems. You can expand this scenario if you have multiple arrays available.
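For example, one host could build a mirrored ZFS pool from four of the LUNs and leave the rest for other systems. This is a sketch with illustrative LUN names:

# zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0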

In this example, the x4450 system is used as a build server, where performance is more important than maximizing disk space. The system components are configured as follows:

This system contains 8 internal SATA disks that are the same size as the 24 disks on the 3510s.

The 24 disks on the 3510s are mirrored as 12 two-way mirrors.

Four of the internal SATA drives are also mirrored.

Three of the internal SATA drives are allocated as spares. This isn't an optimal best practice, but it provides some protection from a hardware failure in the JBOD arrays.
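A sketch of the resulting pool follows, assuming the 3510 disks appear as c2 and c3 targets and the internal SATA disks as c0 targets. All names are illustrative; only the first two of the 12 two-way 3510 mirrors are shown, and the pattern repeats for the remaining pairs:

# zpool create builds mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0
# zpool add builds mirror c0t1d0 c0t2d0 mirror c0t3d0 c0t4d0
# zpool add builds spare c0t5d0 c0t6d0 c0t7d0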