Size Estimates for zfs send and zfs destroy

This feature enhances OpenZFS's internal space accounting. The new accounting information is used to provide a -n (dry-run) option for zfs send, which can instantly calculate the amount of send stream data a specific zfs send command would generate, and a -n option for zfs destroy, which can instantly calculate the amount of space that would be reclaimed by a specific zfs destroy command.
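
For example, assuming a hypothetical pool tank with snapshots snap1 and snap2 (the exact -v output varies by platform and version):

    # Estimate the size of a full send stream without generating it
    zfs send -nv tank/data@snap1

    # Estimate an incremental send between two snapshots
    zfs send -nv -i tank/data@snap1 tank/data@snap2

    # Estimate how much space destroying the snapshot would reclaim
    zfs destroy -nv tank/data@snap1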

Arbitrary Snapshot Arguments to zfs snapshot

Performance

These are significant performance improvements, often requiring substantial restructuring of the source code.

Listed in chronological order (oldest first).

SA based xattrs

Improves performance of Linux-style (short) xattrs by storing them in the dnode_phys_t's bonus block. (Not to be confused with Solaris-style Extended Attributes, which are full-fledged files or "forks", like NTFS streams. This work could be extended to also improve the performance on illumos of small Extended Attributes whose permissions are the same as those of the containing file.)

Requires a disk format change and is off by default until Filesystem (ZPL) Feature Flags are implemented (not to be confused with zpool Feature Flags).
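
On ZFS on Linux, SA-based xattrs are opt-in per dataset via the xattr property; a minimal sketch, with placeholder pool and dataset names:

    # Store short xattrs in the dnode's bonus/spill area
    zfs set xattr=sa tank/fs

    # Revert to the directory-based, on-disk-compatible implementation
    zfs set xattr=on tank/fs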

Use the slog even with logbias=throughput

Asynchronous Filesystem and Volume Destruction

Destroying a filesystem requires traversing all of its data in order to return its used blocks to the pool's free list. Before this feature the filesystem was not fully removed until all blocks had been reclaimed. If the destroy operation was interrupted by a reboot or power outage, the next attempt to import the pool (probably during boot) would need to complete the destroy operation synchronously, possibly delaying boot for a long time.

With asynchronous destruction the filesystem's data is immediately moved to a "to be freed" list, allowing the destroy operation to complete without traversing any of the filesystem's data. A background process reclaims blocks from this "to be freed" list and is capable of resuming this process after reboots without slowing the pool import process.

The new freeing algorithm also brings a significant performance improvement when destroying clones. The old algorithm took time proportional to the number of blocks referenced by the clone, even if most of those blocks could not be reclaimed because they were still referenced by the clone's origin. The new algorithm takes time proportional only to the number of blocks unique to the clone.
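
Because the destroy now returns before the space has actually been reclaimed, the outstanding work is reported separately; a sketch, assuming a hypothetical pool tank with the async_destroy feature enabled:

    # Returns quickly; the filesystem's blocks move to the "to be freed" list
    zfs destroy tank/bigfs

    # Space still waiting to be reclaimed by the background process
    zpool get freeing tank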

Reduce Number of Empty bpobjs

Every time OpenZFS takes a snapshot it creates on-disk block pointer objects (bpobjs) to track blocks associated with that snapshot. In common use cases most of these bpobjs are empty, but the number of bpobjs per snapshot is proportional to the number of snapshots already taken of the same filesystem or volume. When a single filesystem or volume has many (tens of thousands of) snapshots, these unnecessary empty bpobjs waste space and cause performance problems. OpenZFS now waits to create each bpobj until the first entry is added to it, thus eliminating the empty bpobjs.

Note: The empty_bpobj feature flag must be enabled to take advantage of this.
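
On pools created before the feature existed it can be enabled explicitly (the pool name is a placeholder):

    # Enable the feature; it becomes "active" once an empty bpobj is avoided
    zpool set feature@empty_bpobj=enabled tank
    zpool get feature@empty_bpobj tank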

Single Copy ARC

OpenZFS caches disk blocks in memory in the adaptive replacement cache (ARC). Originally, when the same disk block was accessed from different clones it was cached multiple times (once for each clone accessing the block), in case a clone planned to modify the block. With these changes OpenZFS caches at most one copy of every block unless a clone is actually modifying the block.

TRIM Support

TRIM support provides the ability to pass deletes/frees through to the underlying vdevs. This helps ensure that devices such as SSDs, which rely on receiving TRIM/UNMAP requests for sectors that are no longer needed, maintain optimal performance.

The current FreeBSD implementation builds a map of regions that were freed. On every write the code consults the map and removes ranges that were freed before but are now being overwritten.

Freed blocks are not TRIMmed immediately; a low-priority thread TRIMs the accumulated ranges when the time comes.

Support for TRIM has been demonstrated to significantly improve the general performance of SSDs in the field, eliminating the need for regular secure-erase cycles on busy hosts.
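
On FreeBSD the implementation is controlled and observed through sysctl; the names below are taken from that implementation and may differ between releases:

    # Whether TRIM is enabled at all (on TRIM-capable vdevs)
    sysctl vfs.zfs.trim.enabled

    # Statistics on TRIM requests issued by ZFS
    sysctl kstat.zfs.misc.zio_trim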

FASTWRITE Algorithm

Block Freeing Performance Improvements

Performance analysis of OpenZFS revealed that the algorithms used when freeing blocks could cause significant performance problems when freeing a large amount of blocks in a single transaction or when dealing with fragmented pools. Several performance improvements were made in this area.

nop-write

ZFS supports end-to-end checksumming of every data block. When a cryptographically secure checksum is in use, OpenZFS compares the checksums of incoming writes to the checksums of the existing on-disk data and avoids issuing any write i/o for data that has not changed. This can help performance and snapshot space usage in situations where the same files are regularly overwritten with almost-identical data (e.g. regular full backups of large random-access files).
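
nop-write only engages when the dataset uses a cryptographically secure checksum; a minimal sketch with a placeholder dataset name (depending on the implementation, compression may also need to be enabled on the dataset):

    # fletcher4 (the default) is not secure enough for nop-write
    zfs set checksum=sha256 tank/backups
    zfs set compression=lz4 tank/backups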

lz4 compression

OpenZFS supports on-the-fly compression of all user data with a variety of compression algorithms. This feature adds support for the lz4 compression algorithm. lz4 is usually faster and compresses data better than lzjb, the old default OpenZFS compression algorithm.

Note: The lz4_compress feature flag must be enabled to take advantage of this.
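
Enabling it on an existing pool is a two-step process (pool and dataset names are placeholders):

    # Irreversibly enable the feature flag on the pool ...
    zpool set feature@lz4_compress=enabled tank

    # ... then opt a dataset in to lz4
    zfs set compression=lz4 tank/fs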

synctask rewrite

l2arc compression

ARC Shouldn't Cache Freed Blocks

Originally, cached blocks in the ARC remained cached until they were evicted due to memory pressure, even if the underlying disk block was freed. In some workloads these freed blocks had been accessed so frequently before they were freed that the ARC continued to cache them while evicting blocks which had not been freed yet. Since freed blocks can never be accessed again, continuing to cache them was unnecessary. In OpenZFS, ARC blocks are evicted immediately when their underlying data blocks are freed.

Improve N-way mirror read performance

Read requests are queued to the least busy leaf vdev in a mirror.

In addition to the vdev load biasing first implemented by ZFS on Linux in July 2013, the FreeBSD version from October 2013 added I/O locality and device rotational information to further enhance performance.

Smoother Write Throttle

The write throttle (dsl_pool_tempreserve_space() and txg_constrain_throughput()) is rewritten to produce much more consistent delays when under constant load. The new write throttle is based on the amount of dirty data, rather than guesses about future performance of the system. When there is a lot of dirty data, each transaction (e.g. a write() syscall) is delayed by the same small amount. This eliminates the "brick wall of wait" that the old write throttle could hit, causing all transactions to wait several seconds until the next txg opens.

One of the keys to the new write throttle is decrementing the amount of dirty data as i/o completes, rather than at the end of spa_sync(). Note that the write throttle is only applied once the i/o scheduler is issuing the maximum number of outstanding async writes. See the block comments in dsl_pool.c and above dmu_tx_delay() for more details.

The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync read, sync write, async read, async write, and scrub/resilver. The scheduler issues a number of concurrent i/os from each class to the device. Once a class has been selected, an i/o is selected from this class using either an elevator algorithm (async and scrub classes) or FIFO (sync classes). The number of concurrent async write i/os is tuned dynamically based on i/o load, to achieve good sync i/o latency when there is not a high load of writes, and good write throughput when there is. See the block comment in vdev_queue.c for more details.
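
The knobs below are the ZFS on Linux module-parameter names for these mechanisms (illumos and FreeBSD expose equivalents as kernel tunables); this is a sketch of where to look rather than a set of recommended values:

    # Dirty-data cap; per-transaction delays ramp up as it fills
    cat /sys/module/zfs/parameters/zfs_dirty_data_max
    cat /sys/module/zfs/parameters/zfs_delay_scale

    # Dynamic range for concurrent async write i/os issued per vdev
    cat /sys/module/zfs/parameters/zfs_vdev_async_write_min_active
    cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active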

Disable LBA Weighting on files and SSDs

On rotational media, the bandwidth of the outermost tracks is approximately twice that of the innermost tracks. A heuristic called LBA weighting was put into the metaslab allocator to account for this by favoring the outermost tracks over the innermost tracks. As a consequence, metaslabs tend to fill at different rates depending on their location, and the metaslabs corresponding to the outermost tracks enter the best-fit allocation strategy sooner than the others.

The best-fit allocation strategy is more CPU intensive than the typical first-fit because it looks for the smallest region of free space able to fulfill an allocation rather than picking the next available one. The CPU time is fairly excessive and is known to harm IOPS, but it exists to minimize the use of gang blocks as a metaslab becomes excessively full. Gaining a bandwidth improvement from LBA weighting at the expense of an earlier switch to the best-fit allocation behavior on the weighted metaslabs is reasonable on rotational disks. However, it makes no sense on files, where the underlying filesystem is free to place data however it sees fit, and on SSDs, where there is no bandwidth difference based on LBA.

With this change, metaslabs fill more evenly on pools whose vdevs consist of only files and SSDs, which minimizes the number of metaslabs that enter the best-fit allocation strategy when a pool is mostly full but still below 96% full. This is particularly important on SSDs, where drops in IOPS are more pronounced.
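
On ZFS on Linux the heuristic can also be turned off globally via a module parameter (the automatic skip relies on the kernel reporting a vdev as non-rotational, which may not happen behind some controllers):

    # 0 disables LBA weighting everywhere, not just on files and SSDs
    echo 0 > /sys/module/zfs/parameters/metaslab_lba_weighting_enabled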

Dataset Properties

These are new filesystem, volume, and snapshot properties which can be accessed with the zfs(1) command's get subcommand. See the zfs(1) manpage for your distribution for more details on each of these properties.

Property: refcompressratio
Description: The compression ratio achieved for all data referenced by (but not necessarily unique to) a snapshot, filesystem, or volume, expressed as a multiplier.
Platforms: illumos, FreeBSD, ZFS on Linux, OpenZFS on OS X
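
For example, comparing it with the pre-existing compressratio property (dataset and snapshot names are placeholders):

    # Compare the ratio for referenced data with the overall compressratio
    zfs get refcompressratio,compressratio tank/fs@snap1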