
What's the configuration? Are all 48 drives converging onto a single ext4 filesystem? If not, then I think you really don't know what he's talking about. He was talking about ZFS managing a large number of disks and scaling performance better than Linux filesystems, and I agree with him.

We have a Linux server with 68 drives, but that alone doesn't tell us anything about how the filesystem is using those 68 drives.


He should at least know that XFS is generally the filesystem of choice under Linux when working with large numbers of disks, so pointing to some ext4 improvement as proof of Linux's failure misses the point entirely.


Hence the reason I said he has little idea what he's talking about: ext4 doesn't handle physical volumes; it isn't Btrfs or ZFS.
If you have a large RAID (say, above 6-8 drives), you don't use an ext4/LVM/software-MD combo; you simply buy an expensive SAS RAID controller with RAID 6 support and a large amount of memory. (In my case: multiple ext4 partitions over a single LVM PV running on an external FC box.)
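The layout described above (hardware RAID underneath, LVM and ext4 on top) could be sketched roughly as follows. This is a hypothetical example, not the poster's actual setup: the device name /dev/sdb, the volume group name, and the logical volume sizes are all assumptions, and it presumes the RAID controller or FC box presents the whole array as a single block device.

```shell
# Hypothetical sketch: parity/redundancy is handled entirely by the
# hardware RAID 6 controller, which exposes the array as one device
# (assumed here to be /dev/sdb). LVM only carves it up; it does no RAID.

pvcreate /dev/sdb                  # register the array as an LVM physical volume
vgcreate storage /dev/sdb          # one volume group over the single PV
lvcreate -L 2T -n data1 storage    # carve out logical volumes...
lvcreate -L 2T -n data2 storage
mkfs.ext4 /dev/storage/data1       # ...and format each with ext4
mkfs.ext4 /dev/storage/data2
```

This is the opposite division of labor from ZFS or Btrfs, where the filesystem itself manages the member disks and the redundancy.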

In August we delivered the news that Linux was soon to receive a native ZFS Linux kernel module. The Sun (now Oracle) ZFS file-system has long been sought after for Linux, though less so now that Btrfs has emerged, but incompatibilities between the CDDL and GPL licenses have barred such support from entering the mainline Linux kernel. There has been ZFS-FUSE to run the ZFS file-system in user-space, but it suffers from slow performance. There has also been work by the Lawrence Livermore National Laboratory on porting ZFS to Linux as a native Linux kernel module. This LLNL ZFS work is incomplete but still progressing under a US Department of Energy contract. It is via this work, though, that developers in India at KQ Infotech have produced a working Linux kernel module for ZFS. In this article are some new details on KQ Infotech's ZFS kernel module and our results from testing out the ZFS file-system on Linux.

Any plans to repeat the same tests with the latest code on top of the 2.6.35.10 kernel? It upgrades to a more recent ZFS codebase and fixes a lot of issues. In my own experience, the latest code is stable and performs very well, even on a USB drive.
