Well again, I'm all about "that depends on the workload", and environment and so on. Personally I'm not sure I would use ZFS or BtrFS on a laptop, or any other FS that goes to great lengths to preserve my data perfectly, simply because the laptop has so many other reliability issues that external backup is the only solution that is good enough. So rather than paying the price for reliability all the time, I would run "unsafe" + backup on my laptop and get the performance advantage. At work I'd much rather use ZFS on BL460c G7 blades with HDS USP-V or AMS2xxx storage backends. They are not really slow even though there are disks inside. :)

07-29-2010, 07:30 AM

kraftman

Quote:

Originally Posted by sbergman27

It goes much deeper than licensing. Even if ZFS were GPLv2 it would have an uphill battle to get into the kernel. Not to criticize Sun's (rather radical) architectural decisions regarding ZFS... but it is important to remember that they were *Sun's* decisions.

In simple words, is ZFS too bloated for Linux? I read somewhere that it would pollute too many kernel areas, but I'm not sure this was the point. It reminds me of the situation with Xen, when developers decided not to merge it in its current state because it would affect too many parts of the kernel which 'shouldn't' be affected by it.

07-29-2010, 07:39 AM

aka101

Quote:

Originally Posted by kraftman

In simple words, is ZFS too bloated for Linux? I read somewhere that it would pollute too many kernel areas, but I'm not sure this was the point. It reminds me of the situation with Xen, when developers decided not to merge it in its current state because it would affect too many parts of the kernel which 'shouldn't' be affected by it.

Or perhaps the Linux kernel is too bloated for ZFS and Xen? Also consider that two slim systems with poor matching can give a bloated result when merged. ZFS is claimed to be very slim compared to other file systems, but I can't confirm this. I have heard 25k LoC?

07-29-2010, 08:40 AM

kebabbert

Quote:

Originally Posted by Jimbo

“Linux is really bad as a Large Enterprise server. It is not because of the bad filesystems, but because of limitations in the Linux kernel”

There are numerous stories about senior consultants with 20+ years of experience being wrong. I am not saying this guy is one of them, but the data is from 2008 too, and he doesn't show any benchmark or quantitative results. Michael's test showed that btrfs was superior on threaded I/O, at least on a single desktop HDD.

First you talk about large storage servers, then you say "btrfs is superior on a single desktop HDD"? I don't get that. Those two statements have nothing to do with each other. Michael's test says nothing about large storage servers.

Quote:

Originally Posted by Jimbo

Btrfs is mainly developed because ZFS was clearly superior (features and data integrity), and only now (2010) is btrfs beginning to show some level of maturity. So new data / benchmarks on large enterprise storage hardware are required to show those points.

"If you disagree [that Linux does not scale storagewise], try it yourself. Go mkfs a 500 TB ext-3/4 or other Linux file system, fill it up with multiple streams of data, add/remove files for a few months with, say, 20 GB/sec of bandwidth from a single large SMP server and crash the system and fsck it and tell me how long it takes. Does the I/O performance stay consistent during that few months of adding and removing files? Does the file system perform well with 1 million files in a single directory and 100 million files in the file system? My guess is the exercise would prove my point: Linux file systems have scaling issues that need to be addressed before 100 TB environments become commonplace. Addressing them now without rancor just might make Linux everything its proponents have hoped for."

Let me say this. Given the current instability of BTRFS and all its bugs and so on, I doubt BTRFS handles this situation. But you are convinced it does? I also doubt any other Linux filesystem handles it, because the senior consultant explains that Linux cannot handle this scenario with any filesystem. I doubt he lies. A liar never risks a situation where his lies could be revealed; a liar always says vague things. The consultant does not say vague things. He gives a clear and concrete example which has been tried by many technicians. If the consultant lied about this concrete example, one of the technicians would have objected, it would have spread on Linux sites that the senior consultant is a liar, and he would lose credibility and get a bad reputation. A liar never gives concrete examples that are easy to double-check. They always say vague things that can never be double-checked.

Sure, you have posted another link that tries to disprove the follow-up article, but nothing concrete is said there. It is mostly people saying things like "XFS is GREAT, it runs on LARGE installations, trust me!". Only opinions, no hard facts. No scenarios. No examples. It could be FUD and lies. I am not saying that it is, but theoretically it could be. Where are the links and benchmarks and white papers I ask for? Or any concrete examples that we can try out ourselves? This senior consultant, on the other hand, gives us a concrete example to try out, and he takes a risk: if someone tries it out and finds that he lied, he will be crucified.
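For what it's worth, a scaled-down version of the consultant's experiment is easy to run yourself (the file count and single-directory layout below are my own choices, far below his 1 million files):

```python
import os
import tempfile
import time

N = 10_000  # far below the quoted 1 million files, but enough to see a trend

with tempfile.TemporaryDirectory() as d:
    # Create N small files in one directory, like the "1 million files
    # in a single directory" part of the consultant's scenario
    t0 = time.time()
    for i in range(N):
        with open(os.path.join(d, f"f{i:07d}"), "w") as f:
            f.write("x")
    create_s = time.time() - t0

    # Then time a simple directory scan over all of them
    t0 = time.time()
    names = os.listdir(d)
    list_s = time.time() - t0

print(f"created {N} files in {create_s:.2f}s, listed {len(names)} in {list_s:.3f}s")
```

Rerun with a larger N, and on different filesystems, to see whether the per-file cost stays flat; the consultant's claim is that at large scale it does not.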

Quote:

Originally Posted by smitty3268

Good old kebabbert, repeating the same things over and over again 50 times.

Until someone disproves me, I will continue to repeat these things. I only want to say true things. If someone can prove a link wrong, then I never post that link again. And if someone proves all my links wrong, then I stop posting, because I always want to post links and benchmarks and research papers - that is how it is done in the academic world. If someone takes away all my research papers, etc., then I have nothing to say. Seriously. I mean it. Disprove me, show me I am wrong. Then I stop. The thing is, all of you are wrong. I am correct. Not you. See below.

Quote:

Originally Posted by smitty3268

Btrfs will provide just as much data integrity as ZFS, since that was one of the goals. The only difference is that ZFS is a bit more mature since it's been in use longer.

I doubt this. ZFS is developed by a whole team with vast experience of storage and all the troubles that arise. BTRFS seems to be developed by one guy who probably doesn't know what is important in Enterprise storage. But the Sun guys know what is important:

http://blogs.sun.com/perrin/entry/the_lumberjack
"So that's the ZFS ZIL in a nutshell. It's design was guided by previous years of frustration with UFS logging. We knew it had to be very fast as synchronous semantics are critical to database performance. We certainly learnt from previous mistakes."
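The idea in that quote - acknowledge a synchronous write only after it is durably recorded in a log, and update the main structures later - can be sketched in a few lines (a toy model I wrote to illustrate the concept, not ZFS's actual ZIL format):

```python
import json
import os
import tempfile

class IntentLog:
    """Toy write-ahead intent log: durability first, main structure later."""

    def __init__(self, path):
        self.path = path
        self.store = {}  # the "main" structure, updated lazily

    def sync_write(self, key, value):
        # Append the intent and fsync BEFORE acknowledging the caller, so
        # the write survives a crash even if self.store was never updated.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.store[key] = value  # in a real system this may lag behind the log

    def replay(self):
        # After a "crash", rebuild the main structure from the log.
        with open(self.path) as f:
            for line in f:
                rec = json.loads(line)
                self.store[rec["key"]] = rec["value"]

path = os.path.join(tempfile.mkdtemp(), "intent.log")
log = IntentLog(path)
log.sync_write("block42", "data")

recovered = IntentLog(path)  # simulate a fresh start after a crash
recovered.replay()
print(recovered.store)  # -> {'block42': 'data'}
```

The latency-critical path is the append + fsync, which is why the quote stresses that the log "had to be very fast" for synchronous database workloads.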

How the heck do you expect that single BTRFS guy to know what is important? Is he all-knowing? Does he know as much as the whole Solaris developer team combined with the Enterprise Storage team? Is he the best in the world? Or is he just an ordinary developer with high goals, even though "BTRFS is broken by design", as a Red Hat developer wrote?

Again: doing data integrity correctly is VERY difficult. It requires VAST knowledge (see below). It is almost like trying to make software bug-free - that requires vast experience of building systems, knowing which common pitfalls there are, etc. I would not be surprised if BTRFS is the first or second filesystem he has ever developed. That is hilarious: "BTRFS is a ZFS killer", developed by a guy who knows nothing about common pitfalls? I trust an experienced team with vast experience more than a single guy who has never set foot in a large server hall. It seems that BTRFS is tailored more to desktops than to Enterprise halls. Don't you agree?

Quote:

Originally Posted by sbergman27

Moore's law is about transistor density on silicon. It has nothing to do with disk space. However, it will apply if we all move to SSD storage. But that move will be a big speed bump in the march of disk size increases. That said, at the current geometric rate of expansion, the move from 32->64 bits represents about 48 years. But some applications today absolutely need more than that. Let's be *very* generous and say that the largest applications today have a 48 bit requirement. Assuming that expansion continues at the historical rate (and that's a big if. It likely won't.) then we have about 25 years before anyone at all would care about the 64 bit "barrier". An increase in bitness today, beyond 64, is not totally ridiculous or beyond the pale. But 128 *is* ridiculous. That's just a big waste of memory and processor. (Unsurprisingly, memory and processor use are ZFS's two main points of suckiness today.) Why design a filesystem today that sacrifices performance (today) in order to scale to sizes that we won't care about for 120 years? Does someone really think that ZFS will be around in 120 years?! More likely, management and marketing like the idea of being able to claim 128 bitness as a bullet point in their glossies.

You, Kebabbert, are apparently one of the folks I was referring to who does not understand exponential growth. Exactly the sort of person who might be impressed by such a claim in a Sun/Oracle glossy.

Or at least you have not bothered to do the simple arithmetic which demonstrates how silly ZFS 128 bitness really is.

-Steve

Well Steve. First of all, I have studied more math than you ever have, which is evident because you say such ignorant things. Second, I suggest you read a bit more. You know, the hype about ZFS and DTrace, etc. - it is for real. The Solaris guys ARE good. They KNOW what they are doing. Otherwise no one would have cared about ZFS or DTrace. When the Solaris guys say things, you had better listen. They know things. You DON'T. They can see trends, they know how much storage the server halls are buying, and they see it grows exponentially. You don't have that information.

When you talk about Moore's law:
"Moore's law is about transistor density on silicon. It has nothing to do with disk space."

Let us read more on this law, regarding the hard drive part: http://en.wikipedia.org/wiki/Moore%27s_law
It turns out that hard drives have shown the same development, called Kryder's law:
"A similar law [as Moore's law] (sometimes called Kryder's Law) has held for hard disk storage cost per unit of information"

Do you know enough math to understand what this means? It means EXPONENTIAL GROWTH, the same as Moore's law. And if you don't know the theory of asymptotics, let me tell you: exponential growth is a very bad thing to underestimate, because it grows extremely fast. Therefore you are wrong on this. Your premise is false, and your entire reasoning fails. And you say I don't understand arithmetic and math? Jesus. What is it with those Linux guys out there? Are they all uneducated? I can't help but wonder why I am wasting time pointing out errors in their juvenile reasoning.
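For what it's worth, the doubling arithmetic being argued about here is easy to check. A back-of-the-envelope sketch (the 2 TB starting size and the 1.5-year doubling period are my own assumptions, not figures from this thread):

```python
import math

BYTES_2010 = 2 * 10**12   # assumption: a 2 TB drive, roughly 2010 consumer size
DOUBLING_YEARS = 1.5      # assumption: Kryder's-law-style doubling period

limit_64 = 2**64          # a 64-bit byte address space: 16 EiB
doublings = math.log2(limit_64 / BYTES_2010)
years = doublings * DOUBLING_YEARS
print(f"~{doublings:.1f} doublings, ~{years:.0f} years until a single 2^64-byte device")
```

Under these assumptions a single device hits 2^64 bytes in roughly 35 years; whether that counts as "plenty of headroom" or "around the corner" is exactly what the two posters disagree about, but the arithmetic itself is not in dispute.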

Look, if anyone can prove me wrong, then I stop posting that very thing. Until then, it is YOU who are wrong, because I have evidence that I am correct (research papers, articles, white papers, benchmarks, etc.) and you have nothing. So I am correct. I don't make things up, nor FUD, etc. I can always link to white papers, research papers, scientific journals, etc. You Linux guys cannot. And you call me a FUDer and a troll? Jesus. If you prove me wrong with articles, I stop. If you "prove" me wrong by yelling that I am dumb and a troll, I continue. It is as simple as that. As long as I am right and you are wrong, I continue. Prove me wrong. Use research papers, articles, etc.

Let me cite you: "You, Kebabbert, are apparently one of the folks I was referring to who does not understand exponential growth". Great. It takes an ignorant person to fail to see how fast hard discs have grown in size.

Quote:

Originally Posted by energyman

First of all, RAID 5 & 6 do a pretty good job at data integrity. And for some reason, Solaris is not really a datacentre powerhouse, is it?

Wrong again. RAID 5 & 6 do not do a pretty good job at data integrity.

Jesus, give me strength. Why must I correct all the Linux people's misunderstandings all the time? Why do they think "ZFS and DTrace are slightly more polished than their Linux counterparts"?

Look. RAID 5 & 6 suck big time. Your data is not safe with them. Here are some articles. Read them, and please stop saying things that are not true.

http://www.cs.wisc.edu/adsl/Publicat...ion-fast08.pdf
"Detecting and recovering from data corruption requires protection techniques beyond those provided by the disk drive. In fact, basic protection schemes such as RAID [13] may also be unable to detect these problems.
..
As we discuss later, checksums do not protect against all forms of corruption"

From the above, you cannot just sprinkle some checksums all over BTRFS and expect to get data integrity. Ok? Not even mature techniques such as RAID are safe (which many falsely believe). Do you finally understand now why I doubt that BTRFS is safe? Why must I educate all these Linux people all the time? *sigh*

Regarding RAID 6: http://kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
"The paper explains that the best RAID-6 can do is use probabilistic methods to distinguish between single and dual-disk corruption, eg. "there are 95% chances it is single-disk corruption so I am going to fix it assuming that, but there are 5% chances I am going to actually corrupt more data, I just can't tell". I wouldn't want to rely on a RAID controller that takes gambles :-)"
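The difference these papers point at can be shown with a toy model - byte-wise XOR standing in for RAID-5 parity and SHA-256 standing in for a filesystem's per-block checksums (my simplification; real implementations differ):

```python
import hashlib
from functools import reduce

def xor_parity(blocks):
    # RAID-5-style parity: byte-wise XOR across all data blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(blocks)
checksums = [hashlib.sha256(b).digest() for b in blocks]  # per-block checksums

# Silently corrupt one block (a bit flip the disk never reports)
blocks[1] = b"BxBB"

# Parity only says "some block in this stripe is wrong" - not which one
assert xor_parity(blocks) != parity

# Per-block checksums pinpoint the bad block, so it could be rebuilt from parity
bad = [i for i, b in enumerate(blocks)
       if hashlib.sha256(b).digest() != checksums[i]]
print("corrupted block index:", bad)  # -> [1]
```

That is the gamble the RAID-6 quote describes: parity alone cannot reliably identify the corrupted member, while an independent per-block checksum can.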

Let me tell you, I have PLENTY of material on silent corruption, including many research papers. Just tell me if you want more information on silent corruption. It is actually very interesting reading, to see how bad the situation is with current storage. (Please, someone ask me to post all this material! :o)

Quote:

Originally Posted by energyman

But sure, ZFS is not about speed. What next? When BTRFS is finally seen as stable? 'ZFS is not about stability but licencing'?

Actually, I never said "ZFS is not about speed". Of course it has plenty of speed. I just said that the main reason to use ZFS is data integrity.

ZFS is for Enterprise, that is, not a single disc but a ZFS raid. If you have 48 ordinary SATA discs in your PC chassis without any raid controller card, you reach 2-3 GB/sec read speed. That is plenty. If you also add SSDs, your latency drops dramatically, in some cases to a few millionths of a second, and your IOPS go far over 100,000. If you have 7 discs in a ZFS raid (without a controller card) you can reach 430 MB/sec in an ordinary PC. Do you call this "slow"? I would not. Do you want some links that prove everything I claim, or do you trust me on this? Of course I can link to every number I gave. It would be dumb of me to make up numbers and not be able to give links - no liar gives hard, concrete numbers which are easy to prove or disprove.
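Those throughput figures are mostly aggregate arithmetic: with no raid controller in the way, a stripe's sequential read speed is roughly the sum of its members. A quick check, assuming ~60 MB/s per 2010-era SATA disk (my assumed figure, not from this thread):

```python
PER_DISK_MBS = 60  # assumed sequential throughput of one 2010-era SATA disk

for disks in (7, 48):
    total = disks * PER_DISK_MBS  # striped sequential reads add up across members
    print(f"{disks} discs -> ~{total} MB/s (~{total / 1000:.1f} GB/s) aggregate")
```

With that assumption, 7 discs land near the quoted 430 MB/sec and 48 discs near the quoted 2-3 GB/sec, so the numbers are at least internally plausible.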

Still, the MAIN POINT of ZFS is data integrity, which no other common filesystem offers. Of course ZFS offers more performance than BTRFS, why would it not? We are talking about ZFS! The best.

Is your data important to you? CERN says ZFS is the only filesystem that offers data integrity, even when compared with very expensive enterprise storage systems. They are migrating away from Linux to ZFS. Is it because CERN is fooled by the ZFS hype and the Sun marketing people, or is it because ZFS is actually the best? For real?

Sbergman27,
I have read in a blog comment somewhere (not a good source, so we should not trust this information too much) that BTRFS is also a "layering violation". Does anyone know more about this?

And besides, I don't understand the fuss about layering violations. If ZFS is the best on the market, it is. I do not care if it is written in Pascal, has a layering violation, is painted blue or whatever, as long as it is the best and protects your data. Would you prefer to use an inferior filesystem that does not protect your data, but has four layers instead of three?

One thing I really like:
FreeBSD vs Linux - Linux wins. FreeBSD fanboys: 'no fair, different userland'.
Debian/FreeBSD vs Debian/Linux - Debian/Linux wins. FreeBSD fanboys: 'you should try the new drivers, they speed things up a lot. And ZFS. It is surely faster'.
FreeBSD with the new SATA driver - shows slowdowns. FreeBSD fanboys ignore that and just pound the FS drum: 'Try UFS with journaling! And ZFS rocks performance-wise'.
FreeBSD with UFS+journaling and ZFS vs Linux - Linux wins. By a large margin. FreeBSD fanboys: 'no fair, ZFS was never about speed but data integrity'.

Muhahahaha

I doubt BTRFS even handles as many discs or reaches the speeds that ZFS does today. BTRFS is too buggy. The best ZFS machine will outclass the best BTRFS machine by a large margin. BTRFS seems to be targeted at desktops, not Enterprise halls.

Regarding the rest of your post: if Linux is faster than FreeBSD at desktop things, so what? FreeBSD is a Unix, like Solaris. They win by a large margin in the Enterprise halls. They are stable. Safe. Not unstable like Linux. Do you want to see links where Linux companies' workloads grow so much that they must switch to, for instance, Solaris? Of course I have such links. It may be true that Linux is faster at desktop things - but is Linux good at Enterprise with lots of CPUs? No? Linux loses on SAP benchmarks with many cores? Ah. Many admins say Linux is not stable? Ah. Linux does not protect your data? Ah. BUT LINUX IS FASTER ON FPS! Wow.

Why are Linux people so ignorant about how things REALLY are? It IS very tricky to do data integrity; after many years, even RAID does not succeed! Not even old, mature filesystems like the "superior" XFS offer data integrity. And Linux people think that one guy will make BTRFS safe? Hilarious.

07-29-2010, 08:43 AM

kebabbert

Quote:

Originally Posted by aka101

Or perhaps the Linux kernel is too bloated for ZFS and Xen? Also consider that two slim systems with poor matching can give a bloated result when merged. ZFS is claimed to be very slim compared to other file systems, but I can't confirm this. I have heard 25k LoC?

Whereas UFS, which is more similar to ext3, is many times bigger. Read the link. The point is, ZFS has ditched layers, so the code is minimal compared to old, bloated, antique filesystems with unnecessary layers.

07-29-2010, 09:59 AM

kebabbert

Quote:

Originally Posted by cjcox

How long have you dealt with the Sun experts? I have dealt with them for MANY MANY MANY years (20+). Let me tell you first hand that these guys are MASTER of the untruth. They will hide their bugs and problems for YEARS... and then after they feel safe, they will let you into their "secrets" and show you just HOW BAD their code really was.

This is interesting. Could you tell me more about this? I want to hear more. If they use foul play, just like IBM, then I don't like that.

And you also claim the code was bad. Can you tell me more about this? Which code? Was it important code, or an unimportant shell script?

Quote:

Do you read the forums? Do you SEE the problems that people are having with ZFS?

Yes, I read the OpenSolaris forums, and I see that people have trouble with ZFS. I have never denied that. I have said here earlier, several times, that ZFS has bugs. But I also think ZFS is still the best out there. And it is rapidly maturing, thanks to the great Solaris developers.

The OpenSolaris forum is one single forum. Every ZFS user goes there to post about ZFS, or to complain. Meanwhile, ext3 and the other Linux filesystems are spread out over several different forums. I have read stories about Linux people losing data too.

Quote:

Ok.. so we have different opinions about Sun's expertise (expertise that bankrupt the company btw). That's ok... but I just want to make sure that people understand, there is ANOTHER side to the story apart from the slick marketing and persuasive arguments that Sun and its engineers tell.

I don't doubt that the Sun sales personnel sucked. But I am talking about the Sun engineers, who provably deliver great and innovative tech that no one has thought of before. Tech that many OSes want. Solaris does not only have ZFS and DTrace that are great; there is other great tech there, too.

Meanwhile, I don't see anything in Linux that makes devs drool. Sure, Linux gives better FPS in gaming, or graphics. But no one has denied Linux is a better desktop OS. Enterprise use is another thing.

07-29-2010, 10:04 AM

aka101

I wouldn't draw the conclusion that BtrFS is slower than ZFS just because ZFS is so scalable. The similarities between BtrFS and ZFS are significant, and one could say that BtrFS is an extension of ZFS. It might turn out faster and more scalable in the end, but currently calling it beta stage is too generous, IMHO. The basic problem with BtrFS is a data structure that has proven difficult to implement efficiently, and might prove impossible to implement efficiently with all desirable features.

Quote:

Originally Posted by aka101

I wouldn't draw the conclusion that BtrFS is slower than ZFS, just because ZFS is so scalable.

I don't say that BTRFS is slower than ZFS. I am only saying, to the BTRFS fans who say ZFS is slow: ZFS can give huge performance, far surpassing, for instance, BTRFS.

It is only a matter of ZFS configuration. Just add some more discs or SSDs and you get more performance. ZFS handles the extra hardware automatically for you.

So, I just want to say this: "Muhahahaha" in your face. I TOLD you that ZFS is the best out there. ZFS does everything that other filesystems do, but better. And on top of that, ZFS gives DATA INTEGRITY. That is the ONLY reason to run ZFS. Sure, ZFS is extremely fast in Enterprise settings, but that is not the reason to use ZFS. It is data safety.

BTW, Phoronix doesn't do Enterprise benchmarks. If Phoronix did, Linux would lose every benchmark. Solaris has long been targeted at the Enterprise, not the Desktop. Phoronix only does Desktop benchmarks: single computer, 8-core benchmarks, single disc, etc. That is chicken shit. When we talk about large Enterprise stuff, Linux just doesn't cut it. Linux is a great desktop OS. I admit it. Linux is a better desktop OS than OpenSolaris or Solaris.

But desktop and Enterprise are different things. People here just don't understand it. They see that BTRFS is faster on a single disc (a desktop benchmark), and draw the conclusion "BTRFS is a ZFS killer". Jesus. How can you compare Desktop vs Enterprise that easily???

If we ventured into the realm of LARGE servers giving huge performance, people would understand the true strength of Solaris, ZFS, DTrace and all the other Solaris tech. They just don't have any experience of it, so they believe you can just add some CPUs, discs, etc. to Linux and then you have Enterprise. No: up come lots of different, new scalability problems that are very hard to solve. IBM AIX, which is very mature and high-end, didn't scale well CPU-wise until recently. It IS difficult to scale well. Enterprise IS difficult.

07-29-2010, 02:02 PM

Tomservo

So, where was OpenSolaris in all this? Seeing as it's the native environment to run ZFS in?

07-29-2010, 02:11 PM

kraftman

Quote:

Originally Posted by aka101

Or perhaps the Linux kernel is too bloated for ZFS and Xen? Also consider that two slim systems with poor matching can give a bloated result when merged. ZFS is claimed to be very slim compared to other file systems, but I can't confirm this. I have heard 25k LoC?

No. Afaik Xen was too bloated (its design was messed up, so it would pollute kernel areas which weren't polluted by KVM, because KVM's design was smarter). So, in theory, ZFS could pollute some kernel areas due to a bloated design, while other Linux file systems don't pollute them.