I'd like to argue that ZFS and BTRFS are both incomplete file systems with their own drawbacks, and that it may still be a long time before we have something truly great.

ZFS and BTRFS are both heroic feats of engineering, created by people who are probably ten times more capable and smarter than I am. There is no question about my appreciation for these file systems and what they accomplish.

Still, as an end-user, I would like to see some features that are often either missing or not complete. Make no mistake, I believe that both ZFS and BTRFS are probably the best file systems we have today. But they can be much better.

I want to start with a quick overview of why both ZFS and BTRFS are such great file systems and why you should take an interest in them.

Then I'd like to discuss their individual drawbacks and explain my argument.

Why ZFS and BTRFS are so great

Both ZFS and BTRFS are great for two reasons:

They focus on preserving data integrity

They simplify storage management

Data integrity

ZFS and BTRFS implement two important techniques that help preserve data.

Data is checksummed and its checksum is verified to guard against bit rot due to broken hard drives or flaky storage controllers. If redundancy is available (RAID), errors can even be corrected.

Copy-on-Write (CoW): existing data is never overwritten in place, so a calamity like sudden power loss cannot leave existing data in an inconsistent state.
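The first point, checksum verification, is something you can exercise on demand with a scrub. A minimal sketch (the pool name and mount point below are made up):

zpool scrub tank                # read every block in pool 'tank', verify checksums, repair from redundancy where possible
zpool status tank               # shows scrub progress and any checksum errors found

btrfs scrub start /mnt/data     # the BTRFS equivalent for a file system mounted at /mnt/data
btrfs scrub status /mnt/data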

Simplified storage management

In the old days, we had MDADM or hardware RAID for redundancy, LVM for logical volume management and, on top of that, the file system of choice (EXT3/4, XFS, ReiserFS, etc.).

The main problem with this approach is that the layers are not aware of each other, which makes things inefficient and more difficult to administer. Each layer needs its own attention.

For example, if you simply want to expand storage capacity, you need to add drives to your RAID array and grow it. Then you have to alert the LVM layer to the extra storage and, as a last step, grow the file system.
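With MDADM, LVM and EXT4, for instance, that looks roughly like the sketch below (device, volume group and volume names are made up for illustration):

mdadm --add /dev/md0 /dev/sde               # add the new drive to the array
mdadm --grow /dev/md0 --raid-devices=5      # reshape the RAID from 4 to 5 drives (this takes a long time)
pvresize /dev/md0                           # tell LVM that the physical volume has grown
lvextend -l +100%FREE /dev/vg0/data         # grow the logical volume
resize2fs /dev/vg0/data                     # finally, grow the EXT4 file system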

Both ZFS and BTRFS make capacity expansion a simple one-line command that addresses all three steps above.
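On ZFS, for example, adding a VDEV to a pool is a single command and the extra capacity is immediately available to the file system; a sketch with made-up pool and device names:

# add a second 6-drive RAIDZ2 VDEV to pool 'tank'; no separate LVM or file system step is needed
zpool add tank raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl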

Why are ZFS and BTRFS capable of doing this? Because they incorporate RAID, LVM and the file system in one single integrated solution. Each 'layer' is aware of the others; they are tightly integrated. Because of this integration, rebuilds after a drive failure are often faster than with 'legacy RAID' solutions, because only the actual data needs to be rebuilt, not the entire drive.

And I'm not even talking about the joy of snapshots here.

The inflexibility of ZFS

The storage building block of ZFS is a VDEV. A VDEV is either a single disk (not so interesting) or some RAID scheme, such as mirroring, single-parity (RAIDZ), dual-parity (RAIDZ2) or even triple-parity (RAIDZ3).

To me, a big downside of ZFS is the fact that you cannot expand a VDEV. Well, there is one way, but it is quite convoluted: you have to replace all of the existing drives, one by one, with bigger ones, and let the VDEV rebuild each time you replace a drive. Then, when all drives are of the higher capacity, you can expand the VDEV. This is quite impractical and time-consuming, if you ask me.
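For completeness, that dance looks roughly like the sketch below, repeated for every drive in the VDEV (pool and device names are made up):

zpool set autoexpand=on tank            # allow the pool to grow once all drives have been replaced with bigger ones
zpool replace tank /dev/sda /dev/sdx    # swap one drive for a larger one
zpool status tank                       # wait for the resilver to finish before replacing the next drive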

ZFS expects you to just add extra VDEVs. So if you start with a single 6-drive RAIDZ2 (RAID6), you are expected to add another 6-drive RAIDZ2 if you want to expand capacity.

What I would want is to just add one or two more drives and grow the VDEV, as has been possible with many hardware RAID solutions and with 'mdadm --grow' for ages.

Why do I prefer this over adding VDEVs? Because it's quite evident that this is way more economical. If I could just expand my RAIDZ2 from 6 drives to 12 drives, I would only sacrifice two drives for parity. If instead I end up with two 6-drive RAIDZ2 VDEVs, I sacrifice four drives (roughly 17% versus 33% of capacity lost to parity).

I can imagine that in the enterprise world, this is just not that big of a deal, a bunch of drives are a rounding error on the total budget and availability and performance are more important. Still, I'd like to have this option.

Either you are forced to buy and implement the storage you expect to need in the future upfront, or you must add it later on, wasting drives on parity that you otherwise would not have needed.

Maybe my wish for a 'zpool grow' option is more geared towards hobbyist or home usage of ZFS, and ZFS has always been focused on enterprise needs, not the needs of hobbyists. So I'm aware of the context here.

I'm not done with ZFS, however, because there is another great inflexibility. If you don't put the 'right' number of drives in a VDEV, you may lose significant portions of storage, which is a side effect of how ZFS works.

I've seen first-hand with my 71 TiB NAS that if you don't use the optimal number of drives in a VDEV, you may lose whole drives' worth of net storage capacity. In that regard, my 24-drive chassis is very suboptimal.

The sad state of RAID on BTRFS

As far as I'm aware, BTRFS has none of the ZFS downsides described in the previous section. It has plenty of its own, though. First of all: BTRFS is still not stable, and especially the RAID 5/6 part is unstable.

The RAID 5 and RAID 6 implementations are so new that the ink they were written with is still wet (February 8th, 2015). Not something you want to trust your important data to, I suppose.

I did set up a test environment to play a bit with this new Linux kernel (3.19.0) and BTRFS to see how it works, and although it is not production-ready yet, I really like what I see.

With BTRFS you can just add drives to or remove drives from a RAID6 array as you see fit. Add two? Remove three? Whatever; the only thing you have to wait for is BTRFS rebalancing the data over the new or remaining drives.

This is friggin' awesome.

If you want to remove a drive, just wait for BTRFS to copy the data from that drive to the remaining drives and you can pull it out. You want to expand storage? Just add the drives to your storage pool and have BTRFS rebalance the data (which may take a while, but it works).
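In practice that looks something like this sketch (mount point and device names are made up):

btrfs device add /dev/sdx /dev/sdy /srv/data    # add two drives to the file system mounted at /srv/data
btrfs balance start /srv/data                   # spread the existing data over all drives (this can take a long time)

btrfs device delete /dev/sdx /srv/data          # remove a drive; BTRFS first migrates its data to the remaining drives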

But I'm still a bit sad, because BTRFS does not support anything beyond RAID6. No multiple RAID6 arrays (RAID60) and no triple-parity, which ZFS has supported for ages. As with my 24-drive file server: putting 24 drives in a single RAID6 starts to feel like asking for trouble. Triple-parity or RAID60 would probably be more reasonable. But no luck with BTRFS.

However, what really frustrates me is this article by Ronny Egner. The author of SnapRAID, Andrea Mazzoleni, has written a functional patch for BTRFS that implements not only triple-parity RAID, but even up to six parity disks per volume.

The maddening thing is that the BTRFS maintainers are not planning to include this patch in the BTRFS code base. Please read Ronny's blog. The people working on BTRFS work for enterprises that want enterprise features. They don't care about triple-parity or features like that, because they have access to something presumably better: distributed file systems, which may do away with the need for large disk arrays and thus for triple-parity.

BTRFS has been in development for a very long time and only recently has RAID 5/6 support been introduced. The risk of the write hole, something ZFS addressed ages ago, is still an open issue. Considering all of this, BTRFS is still a very long way off from being the file system of choice for larger storage arrays.

BTRFS seems to be way more flexible in terms of storage expansion and shrinking, but its slow pace of development makes it unusable for anything serious for at least the next year, I guess.

Conclusion

BTRFS addresses all the inflexibilities of ZFS, but its immaturity and lack of more advanced RAID schemes make it unusable for larger storage solutions. This is sad, because by design it seems to be the better, way more flexible option compared to ZFS.

I do understand the view of the BTRFS developers. With enterprise data sets, at that scale, it's better to handle storage and redundancy with distributed file systems than with ever larger arrays in a single system. But that kind of environment is out of reach for many.

So at the moment, compared to BTRFS, ZFS is still the better option for people who want to setup large, reliable storage arrays.

I've briefly played with LIO but the targetcli tool is interactive only. If you want to automate and use scripts, you need to learn the Python API. I wonder what's wrong with a plain old text-based configuration file.

iscsitarget or IET is broken on Debian Wheezy. If you just 'apt-get install iscsitarget', the iSCSI service will crash as soon as you connect to it. This has been the case for years. I wonder why they don't just drop this package. It is true that you can manually download the "latest" version of IET, but don't bother; it seems abandoned. The latest release dates from 2010.

It seems that SCST is at least maintained and uses plain old text-based configuration files. So it has that going for it, which is nice. SCST does not require kernel patches to run, but a patch regarding "CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION" is said to improve performance.

To use full power of TCP zero-copy transmit functions, especially
dealing with user space supplied via scst_user module memory, iSCSI-SCST
needs to be notified when Linux networking finished data transmission.
For that you should enable CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION
kernel config option. This is highly recommended, but not required.
Basically, iSCSI-SCST works fine with an unpatched Linux kernel with the
same or better speed as other open source iSCSI targets, including IET,
but if you want even better performance you have to patch and rebuild
the kernel.

So in general, patching your kernel is not always required, but an example will be given anyway.
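As a rough sketch of what that step might look like, assuming the put_page_callback patch shipped with the SCST source matches your kernel version (the paths and version numbers below are made up; check the SCST documentation for the real ones):

cd /usr/src/linux-3.x.y
patch -p1 < ~/scst/iscsi-scst/kernel/patches/put_page_callback-3.x.patch    # hypothetical patch path; pick the one matching your kernel
# then enable CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION in 'make menuconfig' and rebuild/install the kernel as usual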

After this, you can start the SCST module and connect your initiator to the appropriate LUN.

/etc/init.d/scst start
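On the initiator side, assuming a Linux client with open-iscsi, connecting looks something like this (the target IP address is made up):

iscsiadm -m discovery -t sendtargets -p 192.168.1.100    # ask the SCST target which iSCSI targets it exports
iscsiadm -m node --login                                 # log in to the discovered target(s)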

Closing words

It turned out that setting up SCST and compiling a kernel wasn't that much of a hassle. The main issue with patching kernels is that you have to repeat the procedure every time a new kernel version is released. And there is always a risk that a new kernel version breaks the SCST patches.

However, the whole process can be easily automated and thus run as a test in a virtual environment.

You have to understand that, at the time, I believed the arguments in this article were relevant, but much has changed since then, and I do believe this article is no longer relevant.

I really recommend giving ZFS a serious consideration if you are building your own NAS. It's probably the best file system you can use if you care about data integrity.

ZFS may only be available for non-Windows operating systems, but there are quite a few easy-to-use NAS distributions available that turn your hardware into a full-featured home NAS box that can be managed through your web browser. A few examples:

If you are quite familiar with FreeBSD or Linux, I do recommend this ZFS how-to article from Ars Technica. It offers a very nice introduction to ZFS and explains terms like 'pool' and 'vdev'.

My historical reasons for not using ZFS at the time

When I started with my 18 TB NAS in 2009, there was no such thing as ZFS for Linux. ZFS was only available in a stable version for OpenSolaris. We all know what happened to OpenSolaris (it's gone).

So you might ask: "Why not use ZFS on FreeBSD then?". Good question, but it was bad timing:

The FreeBSD implementation of ZFS only became stable in January 2010, six months after I built my NAS (summer 2009). So FreeBSD was not an option at that time.

One of the other objections against ZFS is the fact that you cannot expand your storage by adding single drives and growing the array as your data set grows.

A ZFS pool consists of one or more VDEVs. A VDEV is a traditional RAID array. You expand storage capacity by expanding the ZFS pool, not the VDEVs: you cannot expand a VDEV itself, you can only add VDEVs to a pool.

So ZFS either forces you to invest upfront in storage you don't need yet, or it forces you to invest later on, because you may waste quite a few extra drives on parity. For example, if you start with a 6-drive RAIDZ2 (RAID6) configuration, you will probably expand with another 6 drives, so the pool ends up with 4 parity drives out of 12 drives in total (33% loss). Investing upfront in 10 drives instead of 6 would have been more efficient, because you only lose 2 drives out of 10 to parity (20% loss).

However, despite all this, I still think that even at home it is worth biting the bullet, investing a bit more in hard drives and switching to ZFS.

So at the time, I found it reasonable to stick with what I knew: Linux & MDADM.

I wrote an article about my worry that ZFS might die with FreeBSD as its sole backing, but fortunately, I've been proven very, very wrong.

ZFS is now supported on FreeBSD and Linux. Despite some licensing issues that prevent ZFS from being integrated into the Linux kernel itself, it can still be used as a regular kernel module and it works perfectly.

There is even an open-source ZFS consortium that brings together all the developers for the different operating systems supporting ZFS.

As you can see, there is a script within a script that is executed separately by rc.local. That's a bit ugly, but it does work.

Setting up installerconfig (FreeBSD unattended install)

The 'installerconfig' script is a file in a special format used by the bsdinstall tool to automate the installation. The top part sets variables that control the unattended installation. The bottom part is a script executed post-install, chrooted into the new system.

As you can see, the post-install script enables SSH, sets the hostname and reduces the autoboot delay.
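A minimal installerconfig along those lines could look like the sketch below; the variable values, hostname and exact rc.conf/loader.conf edits are illustrative assumptions, not a copy of my actual file:

PARTITIONS=ada0                                   # install to the first SATA disk, default partitioning
DISTRIBUTIONS="kernel.txz base.txz"               # distribution sets to install

#!/bin/sh
# post-install part, runs chrooted inside the freshly installed system
echo 'sshd_enable="YES"' >> /etc/rc.conf          # enable SSH
echo 'hostname="pxe-test01"' >> /etc/rc.conf      # set the (made-up) hostname
echo 'autoboot_delay="2"' >> /boot/loader.conf    # reduce the autoboot delay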

Please note that I faced an issue where the bsdinstall program would not interpret the options set in the installerconfig script. This is why I exported them with 'setenv' in the rc.local script.

With Debian preseeding or Red Hat kickstarting, you can host the preseed or kickstart file on a web server. Changing the PXE-based installation is then just a matter of editing the preseed or kickstart file on the web server.

Because it's no fun having to generate a new image every time you want to update your unattended installation, it's recommended to host the installerconfig file on a web server, as if it were a preseed or kickstart file.

This saves you from having to regenerate the PXE-boot image file every time.
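One way to do that, as a sketch: have the rc.local inside the image download the file and feed it to bsdinstall (the URL below is made up):

fetch -o /tmp/installerconfig http://192.168.1.10/freebsd/installerconfig    # grab the current installerconfig from the web server
bsdinstall script /tmp/installerconfig                                       # run the unattended installation with it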

You can still put the installerconfig in the image itself. If you want a fixed 'installerconfig' file containing the bsdinstall instructions, put this file in the 'conf' directory as well. Next, edit the Makefile and search for this string:

.for FILE in rc.conf ttys

For me, it was at line 315. Change it to:

.for FILE in rc.conf ttys installerconfig

Building the PXE boot image

Now that everything is configured, we can generate the boot image with mfsbsd.
Run 'make'. When it fails with this error:

just run 'make' again. In my experience, make worked the second time, consistently. I'm not sure why this happens.

The end result of this whole process is a file like 'mfsbsd-se-10.1-RC3-amd64.img'.

You can copy this image to the appropriate folder on your TFTP server. In my example it would be:

/srv/tftp/BSD/FreeBSD/10.1/mfsbsd-10.1-RC3-amd64.img

Test the PXE installation

Boot a test machine from PXE and boot your custom generated image.

Final words

I'm a bit unhappy about how difficult it was to create a PXE-based unattended FreeBSD installation. The bsdinstall installation software seems buggy to me. However, it could just be me; maybe I have misunderstood how it all works. Still, I can't seem to find any documentation on how to properly use the bsdinstall system for an unattended installation.

If anyone has suggestions or ideas to implement an unattended bsdinstall script 'properly', with ZFS support, I'm all ears.

I've created a tool called procurve-watch. It creates a backup of the running switch configuration through secure shell (using scp).

It also diffs backed-up configurations against older versions, in order to keep track of changes. If you run the script from cron every hour or so, you will be notified by email of any (running) configuration changes.

The tool can back up hundreds of switches in seconds, as it copies the configurations in parallel.

A tool like Rancid may actually be the best choice for this task, but it didn't work for me. The latest version of Rancid doesn't support HP ProCurve switches (yet) and older versions created backups containing garbled characters.

I've released it on GitHub; check it out and let me know if it works for you and whether you have suggestions to improve it further.