
Yes. But even if Btrfs doesn't die, Btrfs and its GPL licence puts other *nix systems in exactly the same position as ZFS is on Linux: they can use it, but they can't distribute it with their kernel. This isn't any better for interoperability.

What stops them from releasing ZFS under the BSD license? GPL is there for good reasons, and it's the best license when you want to compete with other projects. However, its restrictions have nothing to do with this thread or with Oracle's strange behavior.

One of the disadvantages of the BSD license to users is that it doesn't grant you a patent licence on the code, as e.g. ZFS's CDDL does.


Yes. But even if Btrfs doesn't die, Btrfs and its GPL licence puts other *nix systems in exactly the same position as ZFS is on Linux: they can use it, but they can't distribute it with their kernel. This isn't any better for interoperability.

What you said is true, but now that ZFS has gone closed source, the likelihood of Btrfs being killed is much lower.


Also, because it has gone back to closed source, ZFS nowadays has two major forks:
1. Solaris ZFS
2. Open Source ZFS

Solaris ZFS can read and import Open Source ZFS pools, but the reverse is not true: ZFS is backward compatible but not forward compatible, and Solaris ZFS has new features that Open Source ZFS does not have.
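You can see this compatibility boundary from the command line. A sketch, assuming a hypothetical pool named `tank`; `zpool upgrade -v` lists the on-disk versions the local implementation understands, and importing a pool formatted with a newer version than that list simply fails:

```shell
# Which on-disk pool versions does this ZFS implementation understand?
zpool upgrade -v

# Importing an older pool works (and 'zpool upgrade tank' would bring it
# up to the local version, a one-way step). Importing a pool created by
# a newer implementation is refused with a version error.
zpool import tank
```

This is why a pool touched by Solaris 11 can become unreadable to the open-source implementations.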

Yes. But even if Btrfs doesn't die, Btrfs and its GPL licence puts other *nix systems in exactly the same position as ZFS is on Linux: they can use it, but they can't distribute it with their kernel. This isn't any better for interoperability.

Well.
A) Oracle can't close source BTRFS like they did with ZFS. The reason being is that while they did sponsor the development originally and hosted the websites for it originally they do not own the copyrights.

B) It's a Linux filesystem that shares code extensively with the Linux VFS, so it's a derivative work and has to be licensed under the GPL.

C) Portable filesystems were never much of a priority for anybody. Unix systems, including BSD, used UFS/FFS fairly universally; even OS X supported it. However, each tended to introduce subtle changes and assumptions, so even though they shared a common code base, portability was undermined.


Also, because it has gone back to closed source, ZFS nowadays has two major forks:
1. Solaris ZFS
2. Open Source ZFS

Solaris ZFS can read and import Open Source ZFS pools, but the reverse is not true: ZFS is backward compatible but not forward compatible, and Solaris ZFS has new features that Open Source ZFS does not have.


However, with FreeBSD, OpenIndiana, and Solaris 11 around, do you really want to?

I've got a 5-and-something terabyte raidz array in my HTPC. It's been running well over a year now with few issues. Data integrity has been perfect; however, there is an annoying bug that causes the system to hang from time to time when performing small write operations on one of the files over Samba.

In theory, for reliable software RAID 5, raidz can't be beat. I think practice is still catching up to theory, but I hope they keep working on it.
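For reference, setting up such an array is a one-liner. A sketch with a hypothetical pool name (`tank`) and FreeBSD-style placeholder device names; substitute your own disks:

```shell
# Create a single-parity raidz pool over three disks: any one disk can
# fail without data loss (analogous to RAID 5), and because every block
# is checksummed and written copy-on-write, raidz avoids the classic
# RAID-5 write hole.
zpool create tank raidz da0 da1 da2

# Verify the layout and health of the pool:
zpool status tank
```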


If ZFS is to survive outside Solaris and its open-source forks, some form of dual licensing must happen so it can be included in the vanilla kernel.

Other than that, this is like the fanatics wanting to get Reiser4 into Linux 3.x. It's just a pipe dream that's going nowhere.

I disagree. ReiserFS was never that great a filesystem implementation to begin with. In the 13 years I've been running Linux on various machines, the only time I've lost data to filesystem corruption was when using Reiser, and it happened twice inside a year.

What I mean is not just a new, scalable, proper, and efficient network filesystem, but also RAID-like capabilities. That could let commodity hardware provide cheap RAID solutions to avoid data loss.

So what about an article about that, instead of beating a dead horse like ZFS? Despite all the hype in the past, it reduces interoperability between all UNIXes (and non-UNIXes).

The best I've found is SnapRAID. It is similar to unRAID, except it doesn't do realtime RAID. Using SnapRAID with mhddfs lets you "JBOD" a bunch of disks together and designate another disk or two for parity, but without the array limitations of RAID, so there's no risk of losing all your data, and it does integrity checks, so no bit rot. The developer is working on adding a third parity disk, based on ZFS code, but it's not there yet.
It's really a fantastic and well-engineered project.
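To give an idea of the setup, here is a minimal sketch of a SnapRAID config; the mount points and disk names are hypothetical placeholders for your own layout:

```shell
# /etc/snapraid.conf -- hypothetical layout: two data disks (pooled into
# one mount point with mhddfs), one dedicated parity disk
parity /mnt/parity/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

With that in place, `snapraid sync` computes the parity, and `snapraid scrub` checks the array for silent corruption, which is how it catches bit rot.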

The other thing I've heard about ZFS is that it's a RAM hog: you need at least 2 GB just to get going with it. And if I play games or just want a good desktop experience, isn't low latency essential?

It depends what you want to do with ZFS: it can work with 512 MB of RAM (I have used that size many times in virtual machines) and also with 512 GB of RAM (for serious SAN work).

ZFS can use all available memory if you allow it to, but you can limit the ARC to whatever size you want and ZFS will stop there; for example, you can 'sacrifice' 256 MB for the ZFS ARC (cache) and it will not take more.
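On ZFS on Linux, for example, that cap is the `zfs_arc_max` module parameter, given in bytes (on FreeBSD the equivalent is the `vfs.zfs.arc_max` loader tunable). A sketch for the 256 MB case:

```shell
# Cap the ZFS ARC at 256 MiB (256 * 1024 * 1024 = 268435456 bytes).
# Persistent across reboots, via a module option:
echo "options zfs zfs_arc_max=268435456" | sudo tee /etc/modprobe.d/zfs.conf

# Or change it on a running system through the module parameter:
echo 268435456 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```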

Deduplication is another matter: you need about 2-3 GB of RAM for every 1 TB of data. But if you have 40 TB, for example, you do not need 120 GB of RAM; you can successfully run ZFS with about 40 GB of RAM plus an 80 GB SSD for L2ARC. You can also run a 40 TB pool with, say, 4 GB of RAM, but reading all the hashes directly from disk will be dead slow; in deduplication, RAM is needed to hold the hash table (the DDT) for the deduplicated blocks.
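The 2-3 GB-per-TB rule of thumb can be sketched as back-of-the-envelope arithmetic. The figures below are assumptions, not exact numbers: roughly 320 bytes of RAM per DDT entry, one entry per unique block, and the default 128 KiB recordsize:

```shell
# Rough DDT sizing: entries = pool size / block size, RAM = entries * 320 B.
pool_tib=40
recordsize=$(( 128 * 1024 ))   # default 128 KiB recordsize, in bytes
ddt_entry=320                  # approximate bytes of RAM per DDT entry

blocks=$(( pool_tib * 1024 * 1024 * 1024 * 1024 / recordsize ))
ddt_gib=$(( blocks * ddt_entry / 1024 / 1024 / 1024 ))
echo "~${ddt_gib} GiB of DDT for ${pool_tib} TiB of unique data"
```

For 40 TiB this works out to roughly 100 GiB of DDT, i.e. about 2.5 GB per TB, which is where the 2-3 GB/TB rule and the "120 GB" figure above come from; smaller blocks or different entry sizes shift the estimate.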