ZFS: My notes on using ztune.sh for random IO access pattern

ZFS’s design goal is to be self-tuning, so users shouldn’t have to tweak kernel parameters to adjust its performance characteristics. For those who wish to live on the bleeding edge of performance, I am including my notes on using the ztune.sh script below.

Note: Every workload is different, and these values may or may not work for you due to differing load and/or hardware characteristics.

I used the following values for two kernel parameters:
vdev prefetch size = 8
max_pending = 12
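I don’t have the exact contents of ztune.sh in front of me, but on Solaris this kind of ZFS tuning is typically applied either live with mdb or persistently in /etc/system. Below is a hypothetical sketch of what such a script might do; the variable names zfs_vdev_max_pending and zfs_vdev_cache_bshift are the standard Solaris ZFS kernel tunables, and whether ztune.sh drives exactly these is my assumption:

```shell
#!/bin/sh
# Hypothetical sketch of a ztune.sh-style script for Solaris.
# Assumption: "max_pending" maps to zfs_vdev_max_pending and the
# "vdev prefetch size" maps to zfs_vdev_cache_bshift.

# Apply on the live kernel with mdb (takes effect immediately):
echo 'zfs_vdev_max_pending/W0t12' | mdb -kw

# zfs_vdev_cache_bshift is expressed as a power of two:
# 13 -> 8K vdev reads instead of the default 16 -> 64K.
echo 'zfs_vdev_cache_bshift/W0t13' | mdb -kw

# Or make the change persist across reboots via /etc/system:
#   set zfs:zfs_vdev_max_pending = 12
#   set zfs:zfs_vdev_cache_bshift = 13
```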

The workload’s access pattern is 100% random, so read-ahead caching does not help performance. Instead, such techniques generate unnecessary excess I/O traffic that limits the workload’s throughput. The data block size is 8K, and setting the prefetch size to 8 essentially disables the prefetch feature: when the workload asks for 8K, ZFS will fetch only 8K, not the default 64K.

The ZFS partition was created on very fast storage, so we decreased max_pending to lower the number of I/O operations ZFS puts on hold before sending them off to the storage. This can lower response time and increase traffic to take advantage of the massive bandwidth provided by the four Fibre Channel connections to the storage.
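To make the prefetch waste concrete, here is a back-of-the-envelope calculation; the 8x figure follows directly from the 64K default and 8K request sizes mentioned above:

```shell
#!/bin/sh
# Read amplification for a purely random 8K workload, comparing the
# default 64K prefetch against the tuned 8K setting.
request_kb=8            # application read size
default_prefetch_kb=64  # default vdev prefetch
tuned_prefetch_kb=8     # tuned vdev prefetch

# amplification = bytes fetched from disk / bytes the app asked for
default_amp=$(( default_prefetch_kb / request_kb ))
tuned_amp=$(( tuned_prefetch_kb / request_kb ))

echo "default prefetch: ${default_amp}x traffic"
echo "tuned prefetch:   ${tuned_amp}x traffic"
```

Since a random workload never comes back for the prefetched neighbors, 7 of every 8 blocks read under the default setting are wasted bandwidth.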

Special thanks to Vik N. and Neel N. for the explanations of these tunables.
