When it comes to storage, Solaris 10 provides admins with more choices than any other operating system. Right out of the box, it offers two filesystems, two volume managers, an iSCSI target and initiator, and, naturally, an NFS server. Add a couple of Sun packages and you have volume replication, a cluster filesystem, and a hierarchical storage manager. Trust your data to the still-in-development features found in OpenSolaris, and you can have a Fibre Channel target and an in-kernel CIFS server, among other things. True, some of these features can be found in any enterprise-ready UNIX OS. But Solaris 10 integrates all of them into one well-tested package. Editor's note: This is the first of our published submissions for the 2008 Article Contest.

Feel free to describe what those layers are and what they do. ZFS certainly isn't layered into a filesystem, a volume manager, and a RAID subsystem.

So that means it isn't layered? Hmmm, what are you smoking? It's different, so it must be bad. I get it.

Breaking that layering was intentional because that layering adds nothing but more points of failure.

It's like saying electric cars are broken and a "rampant violation" of car design because they aren't powered by gas. Which is utter nonsense.

When it's been around as long as the Veritas Storage System, or indeed pretty much any other filesystem, volume manager, or software RAID implementation, give us a call.

Huh?? WTF does that have to do with anything? Whether something is easy to use has no bearing on how long it has been on the market.

Your condition will never be true. Call me when Linux has been around as long as Unix System V. System V has been around since 1983, Linux since 1991. Linux will never have been around longer than System V unless System V dies at some point and Linux continues.

BTW, ZFS has been around longer than ReiserFS 4. Oh wait, ReiserFS 4 is completely useless.

I don't see lots of Linux users absolutely desperate to start ditching what they have to use ZFS.

The first comment on Jeff Bonwick's blog post that was linked in an earlier post had some guy running a 70TB linux storage solution who was waiting to dump it for ZFS.

I'm afraid you've been at the Sun Kool-Aid drinking fountain. ZFS is not implemented in a working fashion, in any way, shape, or form, on OS X (Sun always seems to get very excited about OS X for some reason) or FreeBSD. Those ports are exceptionally experimental, pre-alpha, and integrating ZFS with existing filesystems, volume managers, and RAID systems is going to be exceptionally difficult unless they just go ZFS completely.

It is just being ported and is unstable. That doesn't mean it is impossible to port, as you claimed, because ZFS isn't layered.

It all depends on how many resources Apple wants to put into ZFS and their business plan. Your claim was directly in relation to some rubbish quote by Andrew Morton. You then, based on ill-conceived conjecture, claimed ZFS is not portable because of "rampant layering" violations. Which is just nonsense.

You can create a RAID volume and ZFS can add it to a pool. You can then create a zvol from that pool and format it with other filesystems. You can add an LVM volume to a ZFS pool, as long as it is a block device. You can even take a plain file on a filesystem and ZFS can use it in a pool. You have no idea what you are talking about.
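A quick sketch of those interoperability claims on Solaris. This is illustrative only: the device paths, pool names, and sizes below are placeholders, everything needs root, and the exact volume-manager device path depends on your setup.

```shell
# ZFS pool on top of an existing volume-manager/RAID block device
# (the /dev/md path is a placeholder for whatever your volume manager exposes):
zpool create tank /dev/md/dsk/d10

# Plain files can serve as vdevs too (handy for testing):
mkfile 256m /var/tmp/vdev1
zpool create testpool /var/tmp/vdev1

# And the reverse direction: carve a block device (zvol) out of a pool
# and format it with a different filesystem, e.g. UFS:
zfs create -V 1g tank/vol1
newfs /dev/zvol/rdsk/tank/vol1
```

So the pool layer consumes any block device (or file), and the zvol layer exposes block devices back out, which is the coexistence being described.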

You should stop drinking the anti-Sun Kool-Aid. It's no secret that you are an anti-Sun troll on OSnews.

So what? You're sitting on a Solaris box. When you have HPFS, LVM, RAID and other partitions on your system and you're working out how to consolidate them (or you're a non-Solaris OS developer trying to work that out), give us a call.

WTF are you on about again? You claimed ZFS can't coexist with other filesystems because of its design. When you have figured out basic software layering and architecture, or have at least learned how to look at some HTML code, give us a call.

So that means it isn't layered? Hmmm, what are you smoking? It's different, so it must be bad. I get it.

Thanks for side-stepping it ;-).

Breaking that layering was intentional because that layering adds nothing but more points of failure.

What points of failure?

Huh?? WTF does that have to do with anything? Whether something is easy to use has no bearing on how long it has been on the market.

Obviously filesystems and storage management software don't need to be proven. The point is that you've got lots of systems out there for storage management that people are already using, and Sun expects people to drop all that for ZFS, which does the same thing, but maybe slightly better in some areas. That's not enough.

Your condition will never be true. Call me when Linux has been around as long as Unix System V. System V has been around since 1983, Linux since 1991. Linux will never be around...........

Yadda, yadda, yadda. This was about layering violations, wasn't it? The reason Linux became popular and people moved off Solaris to it was that it ran well on x86 and generally available hardware. Sun thought everyone would move to a 'real' OS in Solaris and run on 'real' hardware. They didn't. ZFS follows in that fine tradition, as it simply will not run on 32-bit systems.

BTW, ZFS has been around longer than ReiserFS 4. Oh wait, ReiserFS 4 is completely useless.

I don't use Reiser 4, and neither does anyone else.

The first comment on Jeff Bonwick's blog post that was linked in an earlier post had some guy running a 70TB linux storage solution who was waiting to dump it for ZFS.

Very scientific: some bloke posting on someone's blog. I don't find dumping a storage setup a valuable use of time, money, or resources, and the cost/benefit just isn't there. Does ZFS have tools to help interoperability and migration, or will he be doing this himself?

It is just being ported and is unstable. That doesn't mean it is impossible to port, as you claimed, because ZFS isn't layered.

You cannot equate ZFS to existing storage systems and make them interoperate. If you go down the ZFS route it's really all or nothing. If it was layered into logical units and containers then that would be possible, and it would be possible for people like Apple and FreeBSD to reuse existing code and infrastructure.

You then, based on ill-conceived conjecture, claimed ZFS is not portable because of "rampant layering" violations. Which is just nonsense.

You proceeded to proudly claim that ZFS didn't violate any layers that you would expect to see in a storage system stack (a filesystem, a volume manager and RAID containers), and then you actually admitted it:

"Breaking that layering was intentional because that layering adds nothing but more points of failure."

Then you didn't explain how ZFS was logically structured, nor did you explain these mythical points of failure.

WTF are you on about again? You claimed ZFS can't coexist with other filesystems because of its design. When you have figured out basic software layering........

Since you haven't explained how ZFS is actually layered..........

You can't just type out a bunch of words and make them true, unfortunately. ZFS will simply not cooperate with existing filesystems and existing volume management and RAID systems. You can't, for example, have ZFS manage existing RAID systems or volumes that FreeBSD might use, nor can Apple use ZFS to manage HFS+ volumes. You just end up with duplicate systems lying around.

That was the point. ZFS cannot work with existing storage systems code, and to do so will mean picking the code apart.

Obviously filesystems and storage management software don't need to be proven. The point is that you've got lots of systems out there for storage management that people are already using, and Sun expects people to drop all that for ZFS, which does the same thing, but maybe slightly better in some areas. That's not enough.

Of course Sun wants everyone to use ZFS. Just like the Linux guys want everyone to use Linux, or Apple wants everyone to use a Mac. Should I go on? It's called a product, and the process is marketing. Every single player in the computing industry, from Red Hat to some new startup, is "guilty" of it.

Yadda, yadda, yadda. This was about layering violations, wasn't it? The reason Linux became popular and people moved off Solaris to it was that it ran well on x86 and generally available hardware. Sun thought everyone would move to a 'real' OS in Solaris and run on 'real' hardware.

Put down the crack pipe. My response was to your silly idea that something has to have been around longer than something else to be better.

They didn't. ZFS follows in that fine tradition as it simply will not run on 32-bit systems.

Again put down the crack pipe.

Very scientific: some bloke posting on someone's blog. I don't find dumping a storage setup a valuable use of time, money, or resources, and the cost/benefit just isn't there. Does ZFS have tools to help interoperability and migration, or will he be doing this himself?

Ask him. Obviously he is willing to put the time and effort into it because he finds the Linux solution inadequate.

You cannot equate ZFS to existing storage systems and make them interoperate. If you go down the ZFS route it's really all or nothing. If it was layered into logical units and containers then that would be possible, and it would be possible for people like Apple and FreeBSD to reuse existing code and infrastructure.

What nonsense. Give me a real-world example. Apple wants to replace HFS+, and many people in the Apple community are very excited by ZFS.

Apple didn't make ZFS the default in Leopard, but not because of any intrinsic limitation of ZFS's design.

You are just hand waving. Give me some cogent technical details as to why you think it is not possible. Go into as much technical detail as you would like.

You proceeded to proudly claim that ZFS didn't violate any layers that you would expect to see in a storage system stack (a filesystem, a volume manager and RAID containers), and then you actually admitted it:

It doesn't violate any layers because it is trying to redefine them. Are you just plain daft?

Your stupid claim would only make sense if someone set out to implement something that was supposed to fit in a layer and then purposefully changed it to be incompatible.

ZFS was never designed to fit in that traditional layer and it was intentional because the designers thought the traditional model was broken. There is no violation.

People who love ZFS love it because ZFS doesn't use those unnecessary layers.

Then you didn't explain how ZFS was logically structured, nor did you explain these mythical points of failure.

ZFS has three layers: the ZPL (ZFS POSIX Layer), the DMU (Data Management Unit), and the SPA (Storage Pool Allocator). All of these have end-to-end checksumming, unlike a RAID + LVM + filesystem stack.
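The end-to-end checksumming idea can be shown with a toy that has nothing ZFS-specific in it (this sketch just uses GNU coreutils' sha256sum; the file names are made up): the "pointer" to a block of data stores the block's checksum, so any silent corruption of the data is caught when the checksum is re-verified on the read path, rather than going unnoticed as it can between independent RAID/LVM/filesystem layers.

```shell
# Write a "block" and record its checksum in a separate "block pointer":
echo "file contents" > block.dat
sha256sum block.dat > block.ptr

# Read path: re-verify the data against the recorded checksum (passes):
sha256sum -c block.ptr

# Simulate silent on-disk corruption, then read again:
echo "bit rot" > block.dat
sha256sum -c block.ptr || echo "corruption detected at read time"
```

In ZFS the checksum lives in the parent block pointer all the way up the tree, so the verification covers the whole path from disk to application.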

"I've implemented and supported a Linux-based storage system (70 TB and growing) on a stack of: hardware RAID, Linux FC/SCSI, LVM2, XFS, and NFS. From that perspective: flattening the stack is good. The scariest episodes we've had have been in unpredictable interactions between the layers, when errors propagated up or down the stack cryptically or only partially (or, worse, didn't). With the experience we've had with the Linux-based system (which, admittedly, is generally working OK), it would be hard to imagine a more direct answer to every item on our list of complaints (not only reliability, but also usability issues) than ZFS, and I think the depth of the stack is ultimately behind the large majority of those complaints.

Unsurprisingly, I'm aiming to migrate the system (on the same equipment) to ZFS on Solaris as soon as we can manage to. "

Here is the comment from the blog post. A lot of real-world customers don't like the stupid layers. Get it!

Since you haven't explained how ZFS is actually layered..........

You haven't explained a lot of things. Explain again, in as much detail as you like, why ZFS cannot coexist with other filesystems. Also, what in its design makes it hard for Apple to implement?

You can't just type out a bunch of words and make them true, unfortunately. ZFS will simply not cooperate with existing filesystems and existing volume management and RAID systems. You can't, for example, have ZFS manage existing RAID systems or volumes that FreeBSD might use, nor can Apple use ZFS to manage HFS+ volumes. You just end up with duplicate systems lying around.

Yes it can. I already explained it to you and also linked to ZFS on FUSE using LVM.

It's evident you have never used ZFS and are just hanging on Andrew Morton's words and ranting. Let's get technical. I am waiting for your technical explanation. Don't just say it can't; show me exactly why it can't.

That was the point. ZFS cannot work with existing storage systems code, and to do so will mean picking the code apart.

Rubbish! Prove it. All you have done is make claims. How about backing them up with some real examples and technical discussion?