Actually, Linus was, as he sometimes is, completely clueless. He's unaware that filesystem journaling was *NEVER* intended to give better data integrity guarantees than an ext2-crash-fsck cycle, and that the only reason for journaling was to alleviate the delay caused by fscking. All the filesystem can normally promise in the event of a crash is that the metadata will describe a valid filesystem somewhere between the last returned synchronization call and the state at the moment of the crash. If you need more than that -- and, really, you probably don't -- you have to do special things, such as running an OS that never, ever, ever crashes and putting a special capacitor in the system so the OS can flush everything to disk before the computer loses power in an outage.

On-disk state must always be consistent. That was the point of journaling: you don't have to run fsck to get back to a consistent state. You write to the journal what you are planning to do, then you do it, then you activate it by marking it done in the journal. At any point in time, if power is lost, the filesystem is in a consistent state - either the state before the operation or the state after it. You might get some half-written blocks, but that is perfectly fine, because they are not referenced in the directory structure until the final activation step is written to disk; until then those half-written blocks are still considered empty by the filesystem.
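
To make that concrete, here is a toy sketch of the write-ahead discipline (this is nothing like ext3's real journal format - all the names here are made up - but the ordering is the point):

    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Toy write-ahead journal: the on-disk order is (1) intent record,
     * (2) the operation itself, (3) commit record. If we crash before
     * (3), recovery simply ignores the half-done work. */
    static void journaled_append(int journal_fd, int data_fd,
                                 const char *buf, size_t len)
    {
        /* 1. Describe what we are about to do, and force it to disk. */
        dprintf(journal_fd, "INTENT: append %zu bytes\n", len);
        fsync(journal_fd);

        /* 2. Do it. A crash here leaves half-written blocks, but
         *    nothing references them until the commit record exists. */
        write(data_fd, buf, len);
        fsync(data_fd);

        /* 3. Activate: mark the operation done in the journal. */
        dprintf(journal_fd, "COMMIT\n");
        fsync(journal_fd);
    }

    int main(void)
    {
        int j = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int d = open("data.bin",    O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (j < 0 || d < 0)
            return 1;
        journaled_append(j, d, "hello\n", 6);
        return 0;
    }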

All the filesystem can normally promise in the event of a crash is that the metadata will describe a valid filesystem somewhere between the last returned synchronization call and the state at the moment of the crash. If you need more than that -- and, really, you probably don't -- you have to do special things, such as running an OS that never, ever, ever crashes and putting a special capacitor in the system so the OS can flush everything to disk before the computer loses power in an outage.

What about ZFS [wikipedia.org]? Doesn't ZFS have a bunch of checksumming and hardware failure tolerance functionality which you "probably need"?

Actually, he has a valid point: the user doesn't give a damn about whether their disk's metadata is consistent. They care about their actual data. If a filesystem is sacrificing user data consistency in favor of metadata consistency, then it's made the wrong tradeoff.

Some of us have discovered the 'shutdown' command. [...] Anyhow, I suggest you use it occasionally. Then perhaps you'll only have to fsck when something bad has happened.

Don't be too smug - a "shutdown" doesn't always guarantee a clean startup. I remember a bug (hopefully fixed now) where "shutdown" was completing so quickly that it powered off the computer while data was still sitting in the hard drive's volatile write cache. Even though the OS had unmounted the filesystem, the on-disk blocks were still dirty.

p.s. If any OS/kernel developers are listening - how about implementing a standard API through which drive write-caches can be flushed+disabled whenever a system starts a shutdown procedure, gets a signal that the UPS is running on battery power, or otherwise concludes that it is in a state where a temporarily-increased risk of data loss justifies slowing down I/O?
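
Pieces of this already exist - hdparm can disable the drive's write cache, and on reasonably recent kernels fsync() on the block device node also asks the drive to flush its own cache. A hypothetical UPS-on-battery handler might look like this (a sketch; the device path is an example, and the fsync-on-blockdev behavior depends on your kernel version):

    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Push everything toward the platters when the UPS says the power
     * is out. Error handling kept minimal for brevity. */
    int flush_device(const char *dev)          /* e.g. "/dev/sda" */
    {
        int fd = open(dev, O_RDONLY);
        if (fd < 0) {
            perror(dev);
            return -1;
        }
        sync();      /* kernel dirty pages -> device                */
        fsync(fd);   /* on newer kernels, also a drive cache flush  */
        close(fd);
        return 0;
    }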

Actually, Linus was, as he sometimes is, completely clueless. He's unaware that filesystem journaling was *NEVER* intended to give better data integrity guarantees than an ext2-crash-fsck cycle

Linus is not clueless in this case. I think it is a case of you misinterpreting the issue he was discussing.

Journaling is, as you say, NOT about data integrity/prevention of data loss. That is what RAID and UPSes are for. However, it IS about data CONSISTENCY. Even if a file is overwritten, truncated, or otherwise corrupted in a system failure (i.e. loss of data integrity), the journal is supposed to accurately describe things like "file X is Y bytes in length and resides in blocks 1, 2, 3..." (data/metadata consistency). Why would you update that information before you are sure the data was actually changed? A consistent journal is the WHOLE REASON why you can "alleviate the delay caused by fscking".

Linus rightly pointed out, with a degree of tact that Theo de Raadt would be proud of, that writing metadata before the actual data is committed to disk is a colossally stupid idea. If the journal doesn't accurately describe the actual data on the drive, then what is the point of the journal? In fact, it can be LESS than useless if you implicitly trust the inconsistent journal and have borked data that is never brought to your attention.

"...the idiotic ext3 writeback behavior. It literally does everything the wrong way around - writing data later than the metadata that points to it. Whoever came up with that solution was a moron. No ifs, buts, or maybes about it."

In the interests of fairness... it should be fairly easy to track down the person or group of people who did this. Code commits in the Linux world seem to be pretty well documented.

How about ASKING them rather than calling them morons?

(note: they may very well BE morons, but at least give them a chance to respond before being pilloried by Linus)

Torvalds knows exactly who it is, and most people following the discussion probably know it too. Also, there has been a fairly public discussion, including a statement by the responsible person in question.

Not naming the name is Torvalds' way of letting the person save face. Similar to a parent of two children saying: I don't know who made this mess, but when I come back, it had better be cleaned up.

Hm. Similar to a parent of two children ranting at them without taking the time to think first. Calling them morons is just going to make them grow up dysfunctional at best. No wonder the world has a dim view of the "geek" community.

Ahh... That link explains a lot. However, I have a different parenting strategy. If the kid does something wrong, let him know it. If he does something good, let him know it too. Calling them a moron is OK, as long as it's balanced out with "genius" every now and then. Of course, don't actually use the word if the kid is a moron. As with Linus, it should only be used to indicate a temporary lapse of judgment in an otherwise intelligent person.

The best ways to get a person to improve are positive and negative stimulation. Working systems are the positive stimulation; fellow programmers commenting on the dumb points of a design are the negative one.

And you need both at all times, regardless of whatever politically correct view of education is currently in fashion.

Given a choice, I'd employ the more mundane developer rather than the brilliant kid with the mouth.

For exactly the same reasons, I wouldn't want to work with him. I've had to work with loudmouths in the past, and the abuse is wearying. It saps your creativity, because you don't want to risk triggering another outburst; but the outbursts come anyway. Each time I've left such a job, it's been with renewed energy.

Right now, I have a job where everyone treats everyone else with respect, and not without jus

Torvalds knows exactly who it is, and most people following the discussion probably know it too....

Yes, Mr. Torvalds is fairly outspoken.

Yes, and the folks in that conversation are very thick-skinned and are used to such statements; it's just the way they communicate. Having Linus call you a moron is nothing. (And he's probably right.) ;)

How many times have I looked at my own code and asked, "What MORON came up with this junk?"

Well, some Linux filesystem developers (and some fanboys) have been chastising other (higher-performance) filesystems for not providing the guarantees that ext3 ordered mode provides.

Application developers were hence indirectly educated not to use fsync(), because apparently a filesystem giving anything other than the ext3 ordered-mode guarantees is just unreasonable, and ext3 fsync() performance really sucks. (The reason why you don't actually *want* what fsync implies has been explained in the previous ext4 data-loss posts.)

Some of those developers are now complaining that their "new" filesystem (designed to do away with the bad performance of the old one) is disliked by users who are losing data because applications were encouraged to be written in a bad way - and they are telling the application developers to now add fsync() anyway (instead of fixing the actual problem in the filesystem).

Moreover, they are complaining that the application developers are "weird" for expecting to be able to write many files to the filesystem without having them *needlessly* corrupted. IMAGINE THAT!

As an aside, the "next generation" btrfs, which was supposed to solve all these problems, has ordered mode by default - but it's an ordered mode that will erase your data in exactly the same way ext4 does.

Honestly, the state of filesystems in Linux is SO f***d that just blaming whoever added writeback mode is irrelevant.

Honestly, the state of filesystems in Linux is SO f***d that just blaming whoever added writeback mode is irrelevant.

I agree that the who-dun-it part is irrelevant. I disagree on the "SO f***d" part. We have three filesystems that write the journal prior to the data. Basically, we know the issue, and a similar fix can be shared amongst the three affected filesystems. We've had far more "f***d" situations than this (think etherbrick-1000) where hardware was being destroyed without a good understanding of what was happening. Everything will work out as it seems to have everyone's attention.

I agree that the who-dun-it part is irrelevant. I disagree on the "SO f***d" part. We have three filesystems that write the journal prior to the data. Basically, we know the issue, and a similar fix can be shared amongst the three affected filesystems.

I would be very surprised if the fix can be shared between the filesystems. At least the most serious one among those involved, XFS, sits on a complete intermediate compatibility layer that makes Linux look like IRIX.

Linux filesystems are seriously in a bad state. You simply cannot pick a good one: either you get one that does not actively kill your data (ext3 ordered/journal) or you get one that actually delivers decent performance (anything besides ext3) - not both.

...btrfs is starting from the ground up rather than trying to fight those camped on their domains who won't play ball... So why don't you stop talking shit, or come up with specific cases to back up your claims.

Didn't you just do that for me?

Things like XFS or JFS are badly maintained and supported because they are too complex and were lumped in from other systems. This is a problem if, for example, XFS is the only serious option for really big volumes.

Reiser3 receives no more improvements; Reiser4 is dead. That doesn't leave much besides ext3. Funnily enough, ext3 has been catching up in performance just because the other filesystems are dead. OK, maybe funny isn't the right word...

Unlike other OSes, Linux has several filesystems to choose from for whatever the user's needs are, and new ones will keep appearing, including ports from proprietary systems. You think NTFS or HFS+ is any better?

Would you care to make an educated guess at how many people run one of said three filesystems - particularly ext3 - compared to how many used an etherbrick-1000? Scale matters, even if it sucks equally much whether *your* data was eaten by a one-in-a-billion freak bug or by a common one.

fsync() (sync all pending driver buffers to disk) certainly has a major performance cost, but sometimes you do want to know that your data actually made it to disk - that's an entirely different issue from journalling and data/meta-data write ordering, which is about making sure the file system is recoverable to some consistent state in the event of a crash.

I think programmers sometimes call fsync() when they really want fflush() (flush library buffers to the kernel), which is about program behavior ("I want this data written to disk real-soon-now, not hanging around in the library buffer indefinitely") rather than a data-on-disk guarantee.
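
For the stdio case, the two levels look like this (a minimal sketch):

    #include <stdio.h>
    #include <unistd.h>

    /* fflush() moves data from the stdio buffer into the kernel;
     * fsync() moves it from the kernel onto the disk. Neither call
     * implies the other. */
    int main(void)
    {
        FILE *fp = fopen("out.txt", "w");
        if (!fp)
            return 1;
        fputs("important record\n", fp);
        fflush(fp);            /* library buffer -> kernel page cache */
        fsync(fileno(fp));     /* kernel page cache -> disk           */
        fclose(fp);
        return 0;
    }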

IMO telling programmers to flatly avoid fsync is almost as bad as having a borked meta-data/data write order - programmers should be educated about what fsync does, when they really want/need it, and when they don't. I'll also bet that if the file systems supported transactions (all-or-nothing journalling of a sequence of writes to disk), maybe via an ioctl(), many people would be using that instead.

I agree. What we need is a mechanism for an application to indicate to the OS what kind of data is being written (in terms of criticality/persistence/etc.). If it is the GIMP's swap file, chances are you can optimize differently for performance than if it is a file containing InnoDB tables.

Right now app developers are having to be concerned with low-level assumptions about how data is being written at the cache level, and that is not appropriate.

I got burned by this when my MythTV backend kept losing chunks of video whenever the disk was busy. It turns out the app developers had a tiny buffer in RAM, which they'd write out to disk and then fsync every few seconds. So if two videos were being recorded, the disk was constantly thrashing between two huge video files while also busy doing whatever else the system was supposed to be doing. When I got rid of the fsyncs and upped the buffer a little, all the issues went away. When I record video to disk, I don't care that if the system goes down, then in addition to losing the next 5 minutes of the show during the reboot I also lose the last 20 seconds. This is just bad app design, but it highlights the problems when applications start messing with low-level details like the cache.

Linux filesystems just aren't optimal. I think everybody is more interested in experimenting with new concepts in file storage than in just getting files reliably stored to disk. Sure, most of this is volunteer-driven, so I can't exactly put a gun to somebody's head and tell them that no, they need to do the boring work before investing in new ideas. However, it would be nice if things "just worked".

We need a graduated range of tiers, from a database that does its own journaling and needs to know that data is fully written to disk, down to an application swap file where it's no big deal if it never hits the disk at all (granted, such an app should just use kernel swap, but that is another issue). The OS can then decide how to prioritize actual disk I/O so that in the event of a crash, chances are the highest-priority data is saved and nothing is actually corrupted.

And I agree completely regarding transaction support. That would really help.

> We need a graduated range of tiers, from a database that does its own journaling
> and needs to know that data is fully written to disk, down to an application swap file
> where it's no big deal if it never hits the disk at all (granted, such an app should
> just use kernel swap, but that is another issue).

Actually, there already is a syscall for telling the kernel how a file will be used.
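
Presumably that means posix_fadvise(2) - which is a hint the kernel may use to tune caching and writeback, not a durability control, so it only covers part of what the parent asked for. A minimal sketch:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>

    /* posix_fadvise() declares an access pattern for a file region. */
    void advise_scratch(int fd, off_t len)
    {
        /* "I won't need this again" - swap-like scratch data */
        posix_fadvise(fd, 0, len, POSIX_FADV_DONTNEED);
    }

    void advise_streaming(int fd, off_t len)
    {
        /* "I will go through this sequentially" - e.g. big video files */
        posix_fadvise(fd, 0, len, POSIX_FADV_SEQUENTIAL);
    }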

fsync() (sync all pending driver buffers to disk) certainly has a major performance cost, but sometimes you do want to know that your data actually made it to disk - that's an entirely different issue from journalling and data/meta-data write ordering, which is about making sure the file system is recoverable to some consistent state in the event of a crash.

The two issues are very closely related, not "an entirely different issue". What the apps want is not "put this data on the disk NOW", but "put this data on the disk sometime, but do NOT kill the old data until that is done".

Applications don't want to be sure that the new version is on disk. They want to be sure that SOME version is on disk after a crash. This is exactly what some people can't seem to understand.

fsync() ensures the former at a huge performance cost; rename() + ext3 ordered mode gives you the latter. The problem is that ext4 breaks this BECAUSE of the journal ordering: the "consistent state" is broken for application data.

I'll also bet that if the file systems supported transactions (all-or-nothing journalling of a sequence of writes to disk), maybe via an ioctl(), many people would be using that instead.

Yes. But they are assuming this exists and that the API is called rename() :)
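
The pattern being assumed is the classic atomic-replace idiom (a sketch; whether you keep or drop the fsync() line is exactly the point under dispute):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Write the new version to a temp file, then rename() it over the
     * old one. rename() is atomic, so after a crash you see either the
     * complete old file or the complete new one - never a mix - PROVIDED
     * the data blocks reach the disk before the rename hits the journal. */
    int replace_file(const char *path, const char *data, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len) {
            close(fd);
            return -1;
        }
        fsync(fd);       /* the contested step: forces the data out
                            first, at a large cost on ext3 */
        close(fd);
        return rename(tmp, path);    /* atomic swap of old for new */
    }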

fsync() is for flushing *all* data to disk. That's often the wrong thing to do! If the application just needs to flush its own writes to disk, or even just one specific write, it shouldn't need to call fsync() and incur the HUGE performance hit.

sync() is for flushing *all* data to disk.

fsync() and the related fdatasync() operate on a single file descriptor. There is also a finer-grained, non-portable "sync_file_range()" introduced in kernel 2.6.17 (according to the man page).

fsync() is the correct function call for an application to use when it wants to flush its writes (for a particular fd) to disk. It is unfortunate if the implementation cannot do so without also flushing unrelated writes to disk, but that's beyond the control of a userspace application.
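
For reference, the three granularities look like this (a Linux-specific sketch; note sync_file_range() gives no metadata guarantee at all, so it is not a durability substitute for fsync()):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    void flush_styles(int fd)
    {
        fsync(fd);        /* data + metadata for this one file         */
        fdatasync(fd);    /* data, plus only the metadata needed to    */
                          /* read it back (e.g. the file size)         */

        /* Just bytes 0..1MB, write-out only, no metadata - Linux-only */
        sync_file_range(fd, 0, 1 << 20,
                        SYNC_FILE_RANGE_WAIT_BEFORE |
                        SYNC_FILE_RANGE_WRITE |
                        SYNC_FILE_RANGE_WAIT_AFTER);
    }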

"mount -o data=ordered"
Only journals metadata changes, but data updates are flushed to
disk before any transactions commit. Data writes are not atomic
but this mode still guarantees that after a crash, files will
never contain stale data blocks from old files.

"mount -o data=writeback"
Only journals metadata changes, and data updates are entirely
left to the normal "sync" process. After a crash, files may
contain stale data blocks from old files: this mode is
exactly equivalent to running ext2 with a very fast fsck on reboot.

So, switching writeback mode to write the data first would simply be using ordered data mode, which is the default...

I think it's safe to say that anyone capable of writing a filesystem module at all is far above the "moron" level on the human intelligence scale. Furthermore, anyone willing to volunteer their time by writing such software and donating it to the ungrateful world should be thanked, mistakes or not.

Linus seems to have the wrong temperament for managing a project of humans.

That's why I'm an OpenBSD developer as well... I like the abuse and scorn that Theo throws at me. It's good to see that Linus is becoming more like Theo. What's the quote? "That which doesn't kill me only makes me stronger."

FTA: "if you write your data _first_, you're never going to see corruption at all"

Agreed, but I think this still misses the point - Computers go down unexpectedly. Period.

Once upon a time, we all seemed to understand that, and considered writeback behavior (when it was available at all) a dangerous option, only for use on non-production systems with a good UPS connected. And now? We have writeback FS caching enabled by silent default, sometimes without even a way to disable it!

Yes, it gives a huge performance boost... but performance without reliability means absolutely nothing. Eventually every computer will go down without enough warning to flush the write buffers.

Data-first writing is not an option yet in Ext4; when it arrives it may not be the default, but rather an option to be set at mount time.

Currently in Ext4, the metadata in the journal is updated first, and the data is written afterwards.

This becomes problematic when software assumes that it can send commands and have them take place in the order sent, because without costly immediate writes there is a risk of losing very, very old data: the file's metadata gets updated but the data is never written.

Here's what Linus had to say, and I think he hit the nail on the head:

The point is, if you write your metadata earlier (say, every 5 sec) and the real data later (say, every 30 sec), you're actually MORE LIKELY to see corrupt files than if you try to write them together.

And if you write your data _first_, you're never going to see corruption at all.

This is why I absolutely _detest_ the idiotic ext3 writeback behavior. It literally does everything the wrong way around - writing data later than the metadata that points to it. Whoever came up with that solution was a moron. No ifs, buts, or maybes about it.

Yes! This is the whole point. I am not a filesystem guy either; I don't even know that much about filesystems. But imagine you write a program with some common data storage, and part of that common data is a pointer to some kind of matrix or whatever. Does anybody think it is a good idea to set that pointer first and then initialize the data later?

Sure, a really robust program should be able to somehow recover from corrupt data. But that doesn't mean you can just switch your brain off when writing the data.
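
In program terms, the ordering bug being complained about looks like this (a contrived sketch):

    #include <stdlib.h>

    int *shared;    /* the "metadata": where the data lives */

    /* BAD: publish the pointer before the data behind it exists.
     * Anything that looks at `shared` in the marked window finds a
     * valid-looking pointer to garbage - the in-memory twin of
     * journaling metadata before writing the data. */
    void bad_publish(void)
    {
        int *m = malloc(100 * sizeof *m);
        if (!m)
            return;
        shared = m;                 /* pointer set first...            */
        /* <-- a crash or reader here sees junk behind a live pointer  */
        for (int i = 0; i < 100; i++)
            m[i] = i;               /* ...data initialized later       */
    }

    /* GOOD: initialize the data, then publish the pointer. */
    void good_publish(void)
    {
        int *m = malloc(100 * sizeof *m);
        if (!m)
            return;
        for (int i = 0; i < 100; i++)
            m[i] = i;               /* data first...                   */
        shared = m;                 /* ...then the pointer             */
    }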

This is a potential problem when you are overwriting existing bytes or removing data.

In that case, you've removed or overwritten the data on disk, but now the metadata is invalid.

e.g. you truncated a file to 0 bytes and then wrote new data.

You started re-using those bytes for a new file that another process is creating.

Suddenly you are in a state where your metadata on disk is inconsistent, and you crash before that write completes.

Now you boot back up. You're ext3, so you only journal metadata, so that's the only thing you can revert; unfortunately, there's really nothing to roll back, since you haven't written any metadata yet.

Instead of having a 0-byte file, you have a file that appears to be the size it was before you truncated it, but its contents are silently corrupt and contain other-program-B's data.

After a truncate(), you lock the deleted blocks against writes until after you've written the updated metadata for the file. Until then, anything you write to the file has to be allocated elsewhere on the disk. But then that's part of what the reserved slack is for: to increase the probability that there is somewhere else on the disk where you can write it.

This is more of a response to the 5 other replies to this comment - but rather than post it 5 times I'll just stick it here...

What everybody else has proposed is the obvious solution, which is essentially copy-on-write. When you modify a block, you write a new block and then deallocate the old block. This is the way ZFS works, and it will also be used in btrfs. Aside from the obvious reliability improvement, it can also allow better optimization in RAID-5 configurations: if you always flush an entire stripe at once, you avoid the read-modify-write penalty.
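
A toy version of the idea (nothing like the real ZFS/btrfs code, just the shape of it):

    #include <string.h>

    #define BLKSZ  4096
    #define BLOCKS 1024

    /* Toy copy-on-write "disk": never overwrite a live block. */
    static char disk[BLOCKS][BLKSZ];
    static int  next_free = 1;      /* block 0 reserved */

    /* Write updated contents to a fresh block and return its number.
     * The caller then repoints (and journals) the old_blk -> new_blk
     * reference, and only after that flip is durable may old_blk go
     * back on the free list. A crash at any moment leaves a valid tree. */
    int cow_update(int old_blk, const char *newdata)
    {
        (void)old_blk;                          /* old block untouched */
        int new_blk = next_free++;
        memcpy(disk[new_blk], newdata, BLKSZ);  /* data written first  */
        return new_blk;
    }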

When you have less than 64K of RAM and a processor that barely has a memory management unit, some of these "extras", like copy-on-write, look like advanced features. Additionally, when your computer costs $500,000, you tend not to scrimp on stuff like a UPS.

Economics have changed much since the early days of UNIX. Many of the file system design principles still remain the same. Assumptions need to change with the times. Reasonable historical assumptions were:
- Every UNIX machine has a UPS.
- Production servers run UNIX. What's this Linux you are talking about?
- Disk space is expensive. No one will pay for unused disk space.
- RAM is expensive. As such, it can be quickly flushed to disk.
- No one has enough disk space, RAM, or disk bandwidth to experience a random fault rate of 1 part in 1 quadrillion (1E-15).
Times have changed: Linux is used on heavy servers now, and UNIX (with deference to AIX and Solaris) is almost gone from the marketplace. RAM and disk space are cheap - so cheap that random data errors can be a big issue. A UPS can cost more than a hard drive, and sometimes more than the computer it is attached to. Disk capacities are huge.

Unfortunately, the file system designers haven't kept pace. The Ext4 bug was detected, reproduced, and ultimately solved for a group of desktop Ubuntu users. Linux is used in cheap embedded applications, like home NAS servers - applications that don't have a UPS. Linux isn't just a server O/S anymore. The way we design and optimize file systems needs to change too.

Additionally, even for servers, the times have changed, and this affects file systems. It used to be that accepting data loss was OK, since you would need to rebuild a server after a failure. Today, the disk arrays are so large, that if you attempted to restore the data from backups, it would take hours (sometimes days.) As such, capabilities like "snapshots" are becoming very important to servers. Server disk storage is increasingly bandwidth limited, and not disk size limited. Today, it is possible to have 1 TB of data on a single disk, while being unable to use that disk space effectively. Under many workloads, the users are capable of changing the data faster than a backup program can copy the data off the disk. In such a case, without a snapshot capability, it is impossible to make a valid backup.

Linus seems to understand this much better than the people writing the filesystems, which is quite ironic.

It's common sense! Duh. Write data first, pointers to data second. If the system goes down, you're far less likely to lose anything. That's obvious. Those who think this is somehow not obvious don't have the right mentality to be writing kernel code.

I think the problem is that Ted Ts'o has had a slight 'works for me' attitude about it:

All I can tell you is that *I* don't run into them, even when I was using ext3 and before I got an SSD in my laptop. I don't understand why; maybe because I don't get really nic

You specifically have to choose writeback mode in the full knowledge that the datablocks will almost certainly be written after the metadata journal.

I think Ted Ts'o et al. are probably perfectly aware of how it works.

Except that ext4 loses data in ordered mode for exactly the same reason - and we had a big fuss about that over the last few weeks, because *someone* (cough) said it's the application developers' fault for not fsync()-ing.

If I were to set up a new home spare-parts server using software RAID-5 and LVM today, on kernel 2.6.28 or 2.6.29, and I really care about not losing important data in case of a power outage or system crash but still want reasonable performance (i.e. not running with -o sync), what would be my best choice of filesystem (EXT4 or XFS), and which mkfs and mount options?

I believe XFS has a similar option, and Ext4 will with the next kernel, but for a home-type system Ext3 should meet all of your needs, and Linux utilities still know it best.

Of course, you should probably use RAID-10 instead; with disk space so cheap, it is well worth it. Using the "far" disk layout you get very fast reads, and though in theory it penalizes writes (vs. RAID-0), the benchmarks I have seen show that penalty to be smaller than the theory suggests.

- Make regular backups; you'll need them eventually. Keep some off-site.
- ext3 filesystem, default "data=ordered" journal
- Disable the on-drive write-cache with 'hdparm'
- "dirsync" mount option
- Consider a "relatime" or "noatime" mount option to increase performance (depending on whether or not you use applications that care about atime)
- If you don't want the performance hit from disabling the on-drive write-cache, add a UPS and set up software to shut down your system cleanly when the power fails. You are still vulnerable to power-supply failures etc. even if you have a UPS.
- Schedule regular "smartctl" scans to detect low-level drive failures
- Schedule regular RAID parity checks (triggered through a "/sys/.../sync_action" node) to look for inconsistencies. I have a software-RAID1 mirror and I've found problems here a few times (one of which was that 'grub' had written to only one of the disks of the md device for my /boot partition).
- Periodically compare the current filesystem contents against one of your old backups. Make sure that the only files that are different are ones that you expected to be different.

If you decide to use ext4 or XFS most of the above points will still apply. I don't have any experience with ext4 yet so I can't say how well it compares to ext3 in terms of data-preservation.
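
For reference, the mount-option parts of that advice collapse into an /etc/fstab line something like this (device and mount point are examples; data=ordered is already ext3's default, spelling it out just documents the intent):

    /dev/md0  /data  ext3  data=ordered,dirsync,noatime  0  2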

Yeah, I have to second this... all the journalling filesystems in the world can't compete with a bog-standard home UPS. You just need to make ABSOLUTELY sure that the system shuts down as soon as the battery STARTS to drain (don't try to be fancy and run until the battery is exhausted) and that the system WILL shut down, no questions asked.

A UPS costs, what, £50 for a cheap, home-based one? Batteries might cost you £20 a year or so on average (and probably a lot less if you just nee

A UPS is nice, and I use one too. But it won't protect you from kernel crashes or outright hardware failures, and it would still result in corrupted disks if some filesystem decided it did not yet have to write that 2 GB of cached data. Ext3 in ordered mode is still much preferred.

Linux seriously needs to find a workaround to its licensing squabbles [blogspot.com] and find a way to get a rock-solid ZFS into the kernel. Right now, ZFS on OpenSolaris [opensolaris.org] is simply wonderful, and it is what I am deploying for file service at all my customer sites now. The scary thing about filesystem corruption is that it is often silent and can go on for a long time, until your system crashes and you find that all of your backups are also crap. I've replaced a couple of Linux servers (and more than a couple of Windows servers) after filesystem and disk corruption compounded by naive RAID implementations (RAID[1-5] without end-to-end checksumming can make your data *less* safe), and my customers couldn't be happier. Having hourly snapshots [dzone.com] and a fast in-kernel CIFS server fully integrated with ZFS ACLs [sun.com] (and with support for NTFS-style mixed-case naming) is just icing on the cake.
Now if only I could have an OpenSolaris desktop with all the nice Linux userland apps available. Oh wait, I can! [nexenta.org]

Dear Sam: As you know, I've been trying to get a decent file system into Linux for a while. Let's face it, none of these johnny-come-lately open-source arseholes can write a file system to save their life; the last one to have a chance was Reiser, and I really don't want him han

FreeBSD has ZFS. My understanding is that while ZFS is a good filesystem, it isn't without issues: it doesn't work well on 32-bit architectures because of the memory requirements, isn't reliable enough to host a swap partition, and can't be used as a boot partition when part of a pool. Here's FreeBSD's rundown of known problems: http://wiki.freebsd.org/ZFSKnownProblems [freebsd.org].

On the other hand, the new filesystems in the Linux kernel - ext4 and btrfs - are taking the lessons learned from ZFS. I'm excited about next-generation filesystems, and I don't think ZFS is the only way to go.

It's similar (at least, a lot more similar than any other Linux filesystem), but less mature.

In defense of the LK team on the whole ZFS issue, I understand that part of the reason they didn't pursue some ZFS-like features years ago was patents. Now that SUN has open-sourced ZFS (though not in a GPL-compatible way) and is defending it against Network Appliance in a lawsuit, the way looks a lot clearer for Btrfs and company to proceed.

Actually, on that thought, the IBM acquisition of SUN should get NetApp to drop that lawsuit. Going up against SUN in a MAD patent dispute is a bit risky, but (as SCO discovered) aggressive IP lawsuits against IBM come in right behind "land war in Asia".

Somebody's going to mention it, so here it is: there was a BSD Unix research project that ended up as the soft-updates implementation (currently present in all the modern free BSDs). It deals precisely with the ordering of metadata and data writes. The paper is here: http://www.ece.cmu.edu/~ganger/papers/softupdates.pdf [cmu.edu]. Regardless of what Linus says, soft updates with strong ordering also do metadata updates before data updates, and also keep track of ordering *within* metadata. It has proven to be very resilient (up to hardware problems).

Here's an excerpt:

We refer to this requirement as an update dependency, because safely writing the directory entry depends on first writing the inode. The ordering constraints map onto three simple rules:
(1) Never point to a structure before it has been initialized (e.g., an inode must be initialized before a directory entry references it).
(2) Never reuse a resource before nullifying all previous pointers to it (e.g., an inode's pointer to a data block must be nullified before that disk block may be reallocated for a new inode).
(3) Never reset the last pointer to a live resource before a new pointer has been set (e.g., when renaming a file, do not remove the old name for an inode until after the new name has been written).
The metadata update problem can be addressed with several mechanisms. The remainder of this section discusses previous approaches and the characteristics of an ideal solution.

There's some quote about this... something about those who don't know Unix and about reinventing stuff, right :P?

Sometimes I get the impression that Linus says things the way he does because the other 'powerful' guys who are really important and active in the Linux community say nothing, or even agree with him, when he talks like that. I remember a similar episode some time ago when a guy wanted to port Git to C++ or something like that. I think he cried.

I think it's more a matter of dealing with divas all day. It's pretty clear what the two sides of this issue are: technical people convinced that the correctness of the journaling system outweighs any problems with data integrity, and people who think that integrity should be paramount. For most users, data integrity IS the number one priority. It seems to me that this is a case of some people not being able to see that they're wrong.

What we need is more, and not less, of such an aggressive attitude. A real man can take it.

That depends if you're trying to construct a team of "real men" or a team of skilled developers.

People sometimes confuse the idea or the act with the person associated with it. If I propose a stupid idea or commit a stupid act, then by all means call me out and tell me that it's stupid and why. But save the ad hominem attacks. Calling somebody a moron accomplishes nothing good, and doing it in public is an extremely quick and effective way to destroy team morale.