
ZFS On Linux Is Now Set For "Wide Scale Deployment"

Phoronix: ZFS On Linux Is Now Set For "Wide Scale Deployment"

The Sun/Oracle ZFS file-system port to the Linux kernel has now been deemed ready with its new release as "ready for wide scale deployment on everything from desktops to super computers." Will you use ZFS On Linux?..

I already do... I'm attempting to use it to recover a corrupted 3-drive ZFS RAID pool that my NAS ate while I was swapping a drive out (and attempting to resize the pool at the same time).

That being said, I broke the pool through my own stupidity right after I had neglected to take a fresh backup out of impatience... So either I rewrite the parts of ZFS that handle the drive labels to remove the checksum verification and convince it that the missing drive is just offline, or I lose about 10 years of digital photos...
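For anyone in a similar spot: before patching ZFS itself, it may be worth trying the built-in recovery paths first. A sketch, assuming a pool named `tank` and example device paths (substitute your own):

```shell
# Inspect the four ZFS labels on a member device (read-only, safe to run).
zdb -l /dev/sdb1

# Try a rewind import, which discards the last few transaction groups
# instead of requiring hand-edited labels. '-n' does a dry run first.
zpool import -F -n tank
zpool import -F tank
```

No guarantee this works for a pool broken mid-resize, but it is far less invasive than modifying label-handling code.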

When I've just left the ZFS array on its own, it has performed wonderfully and reliably in the 3-drive setup in my FreeNAS box. The online scrubbing/verification and end-to-end checksums are reassuring, as is the fault tolerance for a single-drive failure. Backups are still required anyway, but it's good to know that if a drive dies I have time to find a spare and swap it in without having to scramble.

Current uptime is only ~90 days, but that's due to some power outages at the beginning of winter.

Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.

The last time I tried using it in production was the KQ Infotech port on RHEL6, but that wasn't stable under high I/O load (a BackupPC server).
Since then, I have also used the LLNL port to access data on ZFS pools for recovery, with success, but given the nature of that usage I can't generalize it into a recommendation.

If you do new ZFS benchmarks, Michael, please don't just do single-disk/SSD benchmarks; those are pointless. A comparison of a multi-disk mdadm RAID5/6 against ZFS raidz/raidz2 would be very interesting, though. It would also be informative to test how much an SSD used as a ZIL (log) or L2ARC (cache) device improves speeds.
Otherwise, thanks for the great site! :-)
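For reference, the comparison suggested above could be set up roughly like this (device names and the pool name `tank` are examples, and a real benchmark would want matched disk sets):

```shell
# mdadm RAID5 over four disks, with ext4 on top.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mkfs.ext4 /dev/md0

# Equivalent ZFS raidz pool over the same four disks.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Add one SSD as a separate ZIL (log) device and another as L2ARC (cache).
zpool add tank log /dev/sdf
zpool add tank cache /dev/sdg
```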

New benchmarks of ZFS On Linux compared to other Linux file-systems will likely come soon. The last time we did extensive ZFS Linux benchmarks at Phoronix was last summer, with ZFS On Linux With Ubuntu 12.04 LTS.

Please test ZFS pools created with different ashift values when you do these benchmarks. The default ashift is hardware-dependent: you can find the value a pool actually got by running `zdb` after creating it, and if your hardware lies about its sector size, it will likely be ashift=9. It should be ashift=12 on Advanced Format disks, which is essentially every hard disk manufactured after 2009, and ashift=13 on SSDs from roughly the same time frame. If you do not account for this, your benchmarks will be invalid.
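To spell that out: ashift can be forced at pool creation time and verified afterwards. A sketch, assuming a pool named `tank` and example devices:

```shell
# Force 4 KiB alignment (ashift=12) rather than trusting the drive's
# reported sector size; use ashift=13 for 8 KiB-page SSDs.
zpool create -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd

# Confirm what the pool actually got; ashift is recorded per vdev.
zdb -C tank | grep ashift
```

Note that ashift is fixed once a vdev is created; fixing a wrong value means rebuilding the pool.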

Originally Posted by Veerappan

Will I use ZFS on Linux?

I already do... I'm attempting to use it to recover a corrupted 3-drive ZFS RAID pool that my NAS ate while I was swapping a drive out (and attempting to resize the pool at the same time).

That being said, I broke the pool through my own stupidity right after I had neglected to take a fresh backup out of impatience... So either I rewrite the parts of ZFS that handle the drive labels to remove the checksum verification and convince it that the missing drive is just offline, or I lose about 10 years of digital photos...

When I've just left the ZFS array on its own, it has performed wonderfully and reliably in the 3-drive setup in my FreeNAS box. The online scrubbing/verification and end-to-end checksums are reassuring, as is the fault tolerance for a single-drive failure. Backups are still required anyway, but it's good to know that if a drive dies I have time to find a spare and swap it in without having to scramble.

Current uptime is only ~90 days, but that's due to some power outages at the beginning of winter.

You should join #zfs on freenode. The community should be able to help you with recovery.

Originally Posted by garegin

Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.

My understanding is that paid support is important to LLNL, which uses the Lustre filesystem on top of ZFS. With Linux, they have paid support from both Whamcloud and Red Hat. If they switched to FreeBSD, they would need to port Lustre and then would likely have to support it themselves. If they switched to Solaris (or Illumos), they could get support for the base system from a vendor, but they would still be on their own for Lustre support. Whamcloud, on the other hand, has significant interest in ZFS as a replacement for its ext4-based ldiskfs, which means LLNL can get support for Lustre from Whamcloud when using ZFS as a Lustre backend on Linux.

I should note that I am not associated with LLNL. My statements here should be taken as those of an outsider.

The last time I tried using it in production was the KQ Infotech port on RHEL6, but that wasn't stable under high I/O load (a BackupPC server).
Since then, I have also used the LLNL port to access data on ZFS pools for recovery, with success, but given the nature of that usage I can't generalize it into a recommendation.

That code had numerous issues; I know because I wrote fixes for several of them. You should have a far better experience with the latest ZFSOnLinux code.

Originally Posted by Ares Drake

If you do new ZFS benchmarks, Michael, please don't just do single-disk/SSD benchmarks; those are pointless. A comparison of a multi-disk mdadm RAID5/6 against ZFS raidz/raidz2 would be very interesting, though. It would also be informative to test how much an SSD used as a ZIL (log) or L2ARC (cache) device improves speeds.

That would be great, but I doubt it will happen. When I last spoke to Michael, he told me he was not comfortable doing multiple-disk benchmarks because he lacked appropriate enterprise hardware. This is despite the fact that ZFS works well without high-end hardware (it is a selling point!) and the old disks he has would be fine. :/

I have been using ZFS for quite a while now.
While ZFS and Btrfs have a similar design, there is a big difference between them:
ZFS was released as a stable filesystem by Sun Microsystems many years ago, and is now released as stable on Linux,
while Btrfs is still not stable after many years and is still considered experimental/in development.
If you want a filesystem with this kind of functionality in production, your only choice is ZFS.
One biased opinion to read: http://rudd-o.com/linux-and-free-sof...ter-than-btrfs

Why don't you just use FreeBSD for this job, or even better, Solaris? Solaris has the best/latest support for ZFS.

I prefer apt/Debian to FreeBSD. Before I ran Debian + ZFSonLinux (in a mostly unrecommended way), I had network stutters and similar issues on my N40L, which is a known problem; on Debian these seemed to be absent. Also, this is a home server: I run more than just file storage on it, and I simply prefer to use what I'm used to. It's rock stable for me, running a 5-disk raidz2.

Also, it's arguable whether Solaris really has the best support. It has support from Oracle, but open-source ZFS and Oracle's ZFS are now two different beasts. Either way, ZFS is a great filesystem and I don't see anything bad about this announcement.