
An anonymous reader writes "Saw this new benchmark on the linux-kernel mailing list. Although NAMESYS, the developers of ReiserFS, have many benchmarks on their site, they have only one Ext3 benchmark. The new benchmark tests Ext3 in ordered and writeback mode versus ReiserFS with and without the notail mount option. Better-than-expected results for Ext3, and a big difference between ordered and writeback modes."

However, it is clear that if your server is stable and not prone to crashing, and/or the write cache on your hard drives is battery-backed, you should strongly consider using the writeback journaling mode of Ext3 rather than ordered.
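For reference, ext3's journaling mode is chosen per mount with the data= option; a sketch of what that looks like (device names and mount points here are placeholders, not from the benchmark):

```
# /etc/fstab -- journaling mode is picked per mount via the data= option.
# Device names and mount points below are made up for illustration.
/dev/hda1   /       ext3    defaults,data=ordered     1 1
/dev/hdb1   /data   ext3    defaults,data=writeback   1 2
```

data=journal is the third mode (slowest, safest), which journals file data as well as metadata.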

I didn't see where "notail" made much of a difference on ReiserFS, though.

FWIW, the issue of whether writing to volatile storage counts as a committed transaction has been kicking around for a long time.

I remember in the mid-80s, Stratus and Tandem would duel over TPC benchmarks, and while Stratus did respectably on conventional disk-based writes, they did try to get the TPC council to allow writes to their resilient (duplicated), battery-backed memory to count too. I don't think they succeeded then, and IMHO some rather cruddy PC memory system should not be allowed to count now.

Nope. That's the problem. fsync() guarantees that the disk controller hardware is synced with the OS. It does not guarantee that the disk platters hold the data. It probably should, but implementing that is not always possible. Many controllers lie to look faster.
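On the application side, the most you can ask for is the write/flush/fsync dance; a minimal sketch (the filename and helper are made up for illustration):

```python
import os

def durable_write(path, data):
    """Write data and push it as far toward the platters as the OS allows."""
    with open(path, "wb") as f:
        f.write(data)          # data lands in user-space buffers / page cache
        f.flush()              # drain user-space buffering into the kernel
        os.fsync(f.fileno())   # ask the kernel to flush to the device
    # As noted above, a lying disk controller may still be holding the
    # data in its volatile write cache even after fsync() returns.

durable_write("example.dat", b"transaction record\n")
```

Everything past that last syscall is in the hands of the controller firmware.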

But if you look at the NAMESYS benchmark comparing ext2 to ext3 and ReiserFS, then it is clear that for sheer throughput ext2 wins...

If speed is your reason for choosing a filesystem, then writeback wins on almost everything in these examples...

But using a journaled filesystem isn't usually done for speed... unless you count the speed of booting after a crash. It's done to (more or less) guarantee filesystem integrity after a crash. You may lose data, but you only lose writes that never completed.

So, if you are choosing ext3 with writeback, is it faster than native ext2? I don't know. But it doesn't sound like it is any safer.

Of course, if you're worried about data integrity, you will have a mirror across multiple striped drives using multiple controllers. And then use a Journaled Filesystem to improve boot time.

ext3 with writeback is indeed safer than ext2, inasmuch as all corruption will be with regard to the data -- your metadata is still safe.

Now, data corruption can be a Very Bad Thing, depending on what you're doing... but in many cases, preventing metadata corruption (and thus being sure that your filesystem is always usable) is Good Enough.

Of course, if you're worried about data integrity, you will have a mirror across multiple striped drives using multiple controllers. And then use a Journaled Filesystem to improve boot time.

This is a misinformed opinion, at best. Your RAID setup will only save data in the case of hardware failure (i.e. one of your disks fails). It will do nothing about incomplete writes. The whole purpose of journaled filesystems is to ensure that writes completed, to minimize filesystem corruption. It just so happens that the way it does this allows for a faster boot, which is an added bonus.

I've seen such an option on big external RAID arrays. Makes sense, lets the write cache be written to disk before the power goes out.

I'm curious, though, do any hard drives have this feature? Maybe not a full battery, but perhaps a capacitor to store enough juice to write that 8 MB of cache data down to disk before it's gone for good? Or perhaps some sort of bolt-on option for existing internal drives?

I ask this as I'm an average joe with home-brew and cheap-label servers (I built most, a few are PII Dells and Gateways). My machines are pretty stable, but I only have about 70 minutes of battery backup from my UPS... and there's no way I could justify buying a generator.

Why bother? If you have a UPS, all you need to do is let it alert your servers to the loss of external power and the servers can begin a clean shutdown sequence, certainly well within your 70-minute range. Most APC UPSes that I know of have a serial cable hookup. If you have more than one server hooked up to one UPS, I'm sure you could devise some way of one server receiving the power-down signal and broadcasting it to all your other machines over the network.

No need to devise a way of sending out a power-down signal for those with APC UPSes. They have a product named PowerChute [apcc.com] (and even a Linux version!) that machines connected to a UPS can use to communicate with each other. It has configurable shutdown times, so mission-critical servers can stay up for the longest time possible, while not-so-important ones can be shut down immediately. We use it extensively in my office, and it really stretches the battery runtime on our UPS.

Also worth noting -- we have our Exchange server begin shutdown almost immediately after the power goes out, as it takes Exchange nearly 15 minutes just to shut down. We are actively looking for an alternative to Exchange.

There is also apcupsd. This way, you can have one machine that is hooked to the UPS (no need for additional hardware to let multiple machines monitor the UPS). When power goes down, apcupsd then lets the other servers know what is going on (power off, power on, shut down now, etc...). Ports to Unices galore, and Windows.

This all assumes that you have the network on a UPS and with the power out all machines can still talk.

Pretty nice tool with tons of options. http://www.apcupsd.org (oddly, with the exception of the what's-new pages, the URL isn't listed in the docs).

Of course, I like my option - buy a UPS with enough capacity to hold the whole room for about 30 minutes (40KW) and a big ole generator in case things go down for a while.

I second the vote for using apcupsd. However, I think it is important for me to relay my experiences with it just to avoid potential problems for those of you uptime zealots (like me).

A few months ago, I had a short (2-minute) power outage, and of course my UPS kicked in and my server stayed online as you might expect. However, when power was restored, the apcupsd scripts were (by default) configured to reboot the server after a return to utility power. Why this is the case, I cannot answer; I'm sure there is a logical explanation. In my case, I found this very unsettling, as it caused my 100+ days of uptime to return to the zero whence they came. The scripts were easy to fix, but hopefully this will serve as a warning for those of you who cannot afford the restart.

On a slightly different note, I'm still not understanding the whole journalling file system issue; I understand the benefits, but are you really crashing that much (which must mean hard locks) that you need to do a hard reset and let the journal replay the transactions? Personally, I have a tape backup and a UPS. Do I really need a journalling file system, other than for the obvious advantage of impressing the ladies? At the moment, I'm interested in XFS because of the ACLs and the "intensive disk usage" features SGI has in the IRIX version, and I'm hoping those make it into the "final" Linux version (if there ever will be a "final" version).

How can a script (software) reboot a server that has already halted?

The system wasn't halted. The UPS kicked in and ran on batteries for a couple minutes, then switched back to mains. The server remained up and running. The apcupsd daemon was set to run a script when the utility power returned, and the script was configured to be "shutdown -r now".

Well... the simple reason for a better file system is that shit happens.

On one of our old systems, the network admin asked what a button did as he pushed it. It was the power button. At another time the same guy accidentally dropped a pencil that hit the same power button (actually a rocker switch) again. Someone else was curious as to what the inside of that machine looked like, so they opened the swinging back door of the case, which caused the system to power down (oh that poor TI 1500)

Have you considered Samsung Contact [samsungcontact.com] (formerly HP Openmail)? As far as Exchange replacements it should be a viable alternative. Runs on Solaris, Linux, HP-UX or AIX on the server side and supports pretty much everything Exchange does on the client side (and of course it supports most other email clients).

Of course, if you don't need a feature-for-feature match with Exchange, there are countless cheap alternatives for mail servers.

What if the reason for the power failure is that someone tripped over the cord running from the UPS to the PC & pulled it out, or if the Power supply in the PC failed? How about if you were in the computer room & saw smoke & fire pouring out of the server? How about if the UPS failed?

There are cases where a UPS won't prevent an "unexpected downtime". In these cases, it might be helpful if the drives were able to finish their last write on their own power. It might give you something to boot after you correct the problem.

I know that I'm stupid for saying this, but after the past few years, a benchmark isn't sexy unless it has scenes of flying dragons or a copied scene from the Matrix on the screen. I must have sold my soul to the devil for saying that.

A hash collision in a ReiserFS directory (where two filenames hash out to the same value) causes the older file to BE OVERWRITTEN without so much as a warning. This is a huge design error, and I can't believe they're pushing Reiser as a production-use filesystem. The only way to ensure you never lose data to hash collisions is to use the 'slowest' hash setting; the faster the hash function, the more likely it is to create collisions and lose data. I had a large project lost to a

To prove your theory, you could take the hash function in ReiserFS and replace it with a function that always returns '1'. You would probably have to reformat your partitions for that test, though. The filesystem should still work. If it doesn't, that's a bug.
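Short of patching the kernel, you can also hunt for collisions from user space. Below is a sketch using an r5-style hash (my approximation of ReiserFS's default hash function; the in-kernel version may differ slightly) plus a brute-force search over random names. ReiserFS keys directory entries on only part of the hash bits, so purely as an illustrative assumption the sketch masks to the low 24 bits:

```python
import random
import string

def r5_hash(name: str) -> int:
    """Approximation of the r5 hash used by ReiserFS (32-bit arithmetic)."""
    a = 0
    for ch in name.encode():
        a += ch << 4
        a += ch >> 4
        a = (a * 11) & 0xFFFFFFFF
    return a

def find_collision(n_names=30000, bits=24, seed=42):
    """Hash random 10-char names until two distinct names collide in the
    low `bits` bits (an assumed stand-in for the partial hash ReiserFS
    actually keys on)."""
    mask = (1 << bits) - 1
    rng = random.Random(seed)
    seen = {}
    while len(seen) < n_names:
        name = "".join(rng.choices(string.ascii_lowercase, k=10))
        h = r5_hash(name) & mask
        if h in seen and seen[h] != name:
            return seen[h], name, h
        seen[h] = name
    return None

collision = find_collision()
if collision:
    a, b, h = collision
    print(f"{a!r} and {b!r} both hash to {h:#08x}")
```

With ~30,000 random names in a 24-bit space, the birthday effect makes a collision all but certain; whether the filesystem then loses data is exactly the question under dispute here.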

The chances of there being a bug in ReiserFS are about 100%. The same is true of ext3, though.

SuSE have been pushing ReiserFS for some time. I've certainly been using it for what seems like ages with no noticeable problems.

I'm 110% sure it's saved more files when I've lost power or when something's hung requiring a hard reset than it's deleted due to hash clashes. What's the likelihood of two files generating the same hash? You talk of increasing likelihood, but don't mention any figures. It's hard to judge without some stats.
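For some actual figures: if you treat the directory hash as an idealized uniform b-bit hash (an assumption -- a real hash like r5 is weaker than uniform), the standard birthday approximation gives the odds of at least one collision among n filenames:

```python
import math

def collision_probability(n_names: int, hash_bits: int) -> float:
    """Birthday-bound approximation: P(at least one collision) among
    n_names names hashed uniformly into 2**hash_bits buckets."""
    buckets = 2 ** hash_bits
    return 1.0 - math.exp(-n_names * (n_names - 1) / (2.0 * buckets))

for n in (1000, 10000, 77163, 1000000):
    print(f"{n:>8} files, 32-bit hash: {collision_probability(n, 32):.4%}")
```

The crossover is around 77,000 files for a full 32-bit hash (roughly even odds), and a million files in one directory makes a collision essentially certain -- so "negligible probability" depends entirely on your directory sizes and on how many hash bits the filesystem actually keeps.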

As an aside, why didn't you restore your large project from your backup? What do you mean you didn't have...

Can you document the claim that hash collisions cause silent data corruption? Or even that they cause a failure of any sort?

If this is true, surely it must be documented somewhere, or have been discussed in a credible forum? I did a little searching, and didn't find anything. Please post a URL to elevate your comment from unsubstantiated rumor to informative information.

In most hash-based indexing algorithms I know of, hash collisions incur a performance penalty, but not data loss.

I don't know how accurate this is, because it's a bit beyond my technical knowledge. However, I know that following a hash collision while using ReiserFS, my /usr/local directory vanished. So there is some truth to the parent post.

A hash collision in a ReiserFS directory (where two filenames hash out to the same value) causes the older file to BE OVERWRITTEN without so much as a warning.

This is not necessarily a bug if the probability of that happening in real world scenarios is negligible. After all, you risk data loss from many sources.

Unfortunately, programmers often seem a bit unreasonable about probabilities. They complain about a (say) 1:10^20 chance of losing a file, while at the same time writing the whole file system in C, which basically guarantees a several-fold increase in the probability of undetected software faults compared to alternatives. In fact, the fix for such a remote possibility may not only kill performance, it may actually increase the overall probability of a fault that causes data loss--because the extra code may have bugs.

So, no, this doesn't bother me. I suspect that if Reiser knows about it and he isn't fixing it, he probably thought about it and decided the probability is too remote. If you disagree, I would like to see a more detailed analysis from you.

Provide at least one pair of filepaths which generate a hash collision under whatever scenario you care to specify, so that others can test and verify the resulting effect, even if it's probabilistic and requires billions of reruns to trigger -- no problem.

If the effect isn't seen by anyone else under any conditions, then the problem doesn't exist. Conversely, if it does happen under some repeatable conditions (even if only extremely rarely) then it *is* a problem, and will be fixed.

If you want to be constructive about it, take this issue out of mythology and onto firmer ground.

My decision isn't based on performance. They both are "fast enough" for me. I used to use ReiserFS a while back and it was great. Then I installed Red Hat 7.3 on a machine and used ext3 so I didn't have to mess with anything. Yes, tinkering is fun... but only when I feel like it. Sometimes it's nice to have stuff Just Work. Haven't had any problems since, and I've had a few random power outages.

Also I like the idea that I can read the drive with an ext2 driver from an older kernel or from FreeBSD just in case. In case of what? I don't know, but somehow it makes me feel better.

Also I like the idea that I can read the drive with an ext2 driver from an older kernel or from FreeBSD just in case. In case of what? I don't know, but somehow it makes me feel better.

...How about in case you want to make a disk image with a tool like DriveImage [powerquest.com] that supports ext2, and therefore, in a round-about way, ext3?
Hard disk crash? No problem -- drop in a new drive and the CD with your partition image and you're up in 15 minutes.
Note: I'm not affiliated with PowerQuest -- I just buy their software when I've got money left over from buying a book of the new 37-cent US first-class stamps...

"Just Works", at least in this case, is partially dependent on distro. I run SuSE, and installed ReiserFS (version 7.1? 7.2? Sometime around there) and it "Just Works." I don't know if it is faster; I've never noticed the difference on my P2-400 home machine. Got to test it out the other day when the cat sat on the surge protector switch -- rebooted like nothing happened. Sweeeet.

Offtopic, but seems to me that the picture that gurulabs is using as background for their web page is ripped from the cover artwork of the album "Rally of Love" by the Finnish band 22-Pistepirkko. Wonder if they have permission for that?

Of course, could be that the album cover is a copy of something that is in the public domain...

so what's the point of running ext3 in writeback if (as the faq says) it's exactly equivalent to ext2 "with a very fast fsck"?

Consider a large tmp volume.

Anywhere where the consequences of finding stale data in a file are no worse than having the data simply missing after a crash. Even a src directory if you do a lot of big makes (since you're best off with make clean ; make after a crash anyway). Just be sure to sync after writing out a source file.

However, as long as performance is adequate, probably better safe than sorry when it comes to filesystems.

so what's the point of running ext3 in writeback if (as the faq says) it's exactly equivalent to ext2 "with a very fast fsck"? So is the _only_ gain the fsck time?

Well, ext3 with data=writeback is equivalent to how reiserfs has always operated (i.e. if you crash, you can lose data in files that were being written to). Using data=ordered is an extra benefit that doesn't have any noticeable performance hit unless you are thrashing the disk and RAM in a benchmark. FYI, there are now beta patches for reiserfs that implement data=ordered.

"Only" the fsck time can still be a big deal if you have to wait 8 hours while your 1TB storage array is fscking (8 hours is a guess; I don't have that much disk...).

Because the ext2 code is more mature than the ext3 code. I also read that the ext2 code is currently much better suited to SMP, but ext3 hasn't been worked over to work well with multiple processes/processors.

I would have wanted to also see a non-journalling filesystem compared against these. Since I'm not currently using a journalled filesystem, it would be nice to see the difference between what I use now (ext2) and the journalled fs's.

quote from AC: "Do tune2fs -j /dev/ext2_hd_dev and find out... creates a journal for ext2. Just change your fstab to try it."

You might also want to make sure you compiled ext3 support into your kernel. Not trying to be a jackass, but not everyone has the latest kernel; I just upgraded from 2.2.17 for ext3 personally. Giving sloppy advice like that to somebody could be bad. Shame on you.
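For anyone following the AC's advice, here's a sketch of the whole procedure (the device name is a placeholder, and a backup first never hurts):

```
# 1. Check that the running kernel actually has ext3 support:
#      grep ext3 /proc/filesystems
#    If it isn't listed (and isn't available as a module), stop here.
# 2. Add a journal to the existing ext2 filesystem:
#      tune2fs -j /dev/hda2        # /dev/hda2 is a placeholder
# 3. Change that filesystem's type from ext2 to ext3 in /etc/fstab
#    and remount (or reboot). If this is the root filesystem and ext3
#    is built as a module, the initrd will need to include it.
```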

Also: Slashdot (the founders/owners/editors) is notorious for saying one thing and doing another. Witness the virulent anti-DMCA stance, yet notice also how they support the very companies who forced it upon us (aka Sony). Witness their yammering about IE/MS not following standards when in fact their own HTML on their own site is grossly out of established standards.

Also: Slashdot (the founders/owners/editors) is notorious for saying one thing and doing another. Witness the virulent anti-DMCA stance, yet notice also how they support the very companies who forced it upon us (aka Sony). Witness their yammering about IE/MS not following standards when in fact their own HTML on their own site is grossly out of established standards.

Completely true. I've filed a bug on the Slashdot bug-report page at SourceForge asking for some semantic tags to be added to the ones we are allowed to use. The bug was deleted as quickly as possible, with no explanation.

Besides, not only does the HTML fail to validate, but Slashdot has also blocked [w3.org] the W3C validator! That's very stupid, as anyone can just download the page and upload it to the validator themselves. Here is the validation result [technisys.com.ar].

Back when they started subscriptions, I emailed Taco and told him I was subscribing conditionally, and expected them to clean up their act -- proofread submissions, validated links, proper HTML, etc.

He responded that they were looking into all those things.

I added two $5 blocks to my account, and then after that since none of the things I mentioned happened I am off subscriptions.

It's too bad - with just a tiny bit more effort they could turn a popular nerd-friendly site into a popular, successful, respected, nerd-friendly news site.

I'm running XFS on a couple or three systems here at home with Linux From Scratch (www.linuxfromscratch.org) installs... and it's very, very nice. I remember seeing an article linked on linuxtoday.com a while back about XFS; I believe the only downfall they mentioned is that it's a bit slower than others when deleting files.

I'd personally use XFS over any of the others any day, mainly since it's fsck-free and is a file system that is known to work well (it's been used/owned by SGI, you know).

If you are using soft updates and not running fsck after a dirty reboot, then you don't understand soft updates. You are also flirting with loss of data.

Here is what you are missing. Soft updates is a method of ensuring that disk metadata is recoverably consistent without the normal speed penalty imposed by synchronous mounting. The only guarantee that soft updates makes is that your file system can be recovered to a consistent state by running fsck. Soft updates is designed to aid the running of fsck, but does not eliminate the need.

> why did EXT get to be the defacto Linux filesystem, rather than UFS?

My understanding of the situation is that it was because, until soft updates were implemented, UFS was painful. Now, had soft updates been implemented, say, 7-10 YEARS ago when EXT became the Linux de facto filesystem, there might have been a chance.

On the flip side, seeing a good Linux implementation of a BSDish UFS with softupdates would be very nice.

EVEN IF ext2 is considerably faster than UFS (which I doubt...), that wouldn't change the fact that UFS is much more stable (I've lost several ext2 fs's). That's beside the fact that UFS supports much larger files and filesystems.

Just today I was working on getting some molecular dynamics code to work on a DEC PWS 500au. This code writes some large (3GB-500MB) files to the disk. On a fresh stripped-down (~400MB) install of Red Hat 7.1 using ext2, bonnie showed throughput of about 20MB/s for sequential reads/writes of a 512MB file.

On a fresh install of FreeBSD 4.6 using UFS, bonnie reported more than 30 MB/s on the same machine.

I know this isn't really what you were looking for but it surprised me that there was that much of a difference.

Just for the hell of it I ran the same benchmarks on one of my test boxes (FreeBSD running -current). The performance basically comes down to how much write latency you are willing to endure... the longer the latency, the better the benchmark results for the first two tests.

So, for example, with the (conservative) system defaults I only got around 250 trans/sec for mixed creations with the first postmark test, because the system doesn't allow more than around 18MB of dirty buffers to build up before it starts forcing the data out, and also does not allow large sequential blocks of dirty data to sit around. When I bumped the allowance up to 80MB and turned off full-block write_behind, the trans rate went up to 2776/sec. I got similar characteristics for the second test as well.
Unfortunately I have only one 7200 rpm hard drive on this box so I couldn't repeat the third test in any meaningful way (which is a measure mostly of disk bandwidth).

In any case, the point is clear, and the authors even mention it by suggesting that the ext3 writeback mode should only be used with NVRAM. Still, I don't think they realize that their Red Hat box likely isn't even *writing* the data to the disk/NVRAM until it absolutely has to, so arbitrarily delaying writes for what is supposed to be a mail system is not a good evaluation of performance. Postmark does not fsync() any of the operations it tests, whereas any real mail system worth its salt does, and even with three drives striped together this would put a big crimp in the reported numbers unless you have a whole lot of NVRAM in the RAID controller.
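To see how much that methodological choice matters, here's a toy postmark-ish loop -- the same small-file writes, with and without an fsync() per file, the way a careful mail system would do it. (File counts, sizes, and names are arbitrary; absolute numbers will vary wildly with hardware and caching.)

```python
import os
import tempfile
import time

def write_small_files(directory, count, sync_each):
    """Create `count` 1KB files; optionally fsync() each one, instead of
    letting the writes linger in the OS write-back cache."""
    start = time.time()
    for i in range(count):
        path = os.path.join(directory, f"msg.{i}")
        with open(path, "wb") as f:
            f.write(b"x" * 1024)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())   # force the write out, like real mail delivery
    return time.time() - start

with tempfile.TemporaryDirectory() as d1:
    lazy = write_small_files(d1, 200, sync_each=False)
with tempfile.TemporaryDirectory() as d2:
    careful = write_small_files(d2, 200, sync_each=True)
print(f"no fsync: {lazy:.3f}s   fsync per file: {careful:.3f}s")
```

On most hardware the fsync-per-file run is dramatically slower, which is exactly why a benchmark that skips fsync() flatters aggressive write-delaying configurations.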

I do not believe RedHat does the write-behind optimization that FreeBSD does. This optimization exists specifically to maximize sequential performance without blowing up system caches (vs just accumulating dirty buffers). But while this optimization is good in most production situations it also typically screws up non-sequential benchmark numbers by actually doing I/O to the drive when the benchmark results depend on I/O not having been done:-).

Last thought. Note that the FreeBSD 4.6 release has a performance issue with non-truncated file overwrites (not appends, but the 'rewrite without truncation' type of operation). This was fixed post-release in -stable.

OK, asshole. How about we start with Journaling Versus Soft Updates: Asynchronous Meta-data Protection in File Systems [usenix.org] presented at USENIX 2000? The first three authors should need no introduction, so I think it satisfies the "well known" requirement; in fact, one could hardly find a group of six people more qualified to comment on the matter. Even in the abstract, the authors clearly state the similarity between goals of journaling and soft updates:

In this paper, we explore the two most commonly used approaches for improving the performance of meta-data operations and recovery

The similarity is mentioned repeatedly elsewhere in the paper, all the way to the conclusion, but I'll let you do your homework this time.

Anybody who knows anything about filesystems - and I've been working on them for over a decade - recognizes the similarity in goals between journaling, soft updates, and phase trees. Usually it's considered too obvious even to require comment, unless an ignorant troll like you comes along demanding that the obvious be spelled out.

Yes, folks, some filesystems are faster than others for some type of file.

We benchmark ReiserFS versus all other Linux filesystems about once every 6 months or so, and the last one from about 3 months ago still places Reiser in the "significantly faster" category for our workloads, specifically web caching with Squid.

ext3 is a nice filesystem, and I use it on my home machine and my laptop. But for some high performance environments, ReiserFS is still superior by a large margin. It is also worth mentioning that I could crash a machine running ext3 at will the last time we ran some Squid benchmarks (this was on 2.4.9-31 kernel RPM from Red Hat, so things have probably been fixed by now).

All that said, I'll be giving ext3 vs. ReiserFS another run real soon now, since there does seem to be some serious performance and stability work going into ext3.

I like reiserfs because I can trust it to perform well on any file system load. I can put it on a server and know it will be fast and efficient regardless of what the users do. Ext3 gives ext2 journaling, but does not add efficient large directories or small files, two features that reiserfs has.

Sure ext3 may benchmark slightly faster in certain scenarios. But unless you know ahead of time that those are the only scenarios you are going to put on the file system, I recommend reiserfs.

I can't say much about ReiserFS. We use it on a server in one of the computer labs I admin at school, but that's the extent of my experience.

But ext3.. I've been using it since the day RH7.3 was released, during which time I'll bet power to my machine has been cut at least 150 times (we had a bad circuit breaker that would randomly flip. I replaced it a few days ago). Often power was repeatedly lost many times in a short period of time (if that would matter), and in the middle of big disk write operations.

Every single time I have been able to immediately reboot without any apparent data loss (except for the data being written at that very second) or filesystem corruption (a couple of times I forced a check just to make sure nothing was wrong, and nothing ever was).

I can't testify to the relative quality of ext3 compared to ReiserFS, but I can certainly say I have been quite pleased with the stability of ext3.

Hell, you can get blazing speed out of FAT, but do you want to use it? Ext3 turned me off the second I found out its journaling was a 'bolt-on' addition. (The journal is kept in a private file... very ugly.)

ReiserFS has eaten more megabytes than I would have liked... but that was 2 years ago. Comparing Reiser, which is a mature, next-generation FS, to ext3, a revamp which isn't even done yet, is a bad idea.

I understand your point. But their point was (paraphrased): "We need to choose a file system. Let's try to experimentally determine which of our two prime contenders is best."

You may feel that their selection of contenders is incorrect, but to select between them based on experiment is called "the experimental method" (sometimes mistakenly "the scientific method"). This is the basis of science, engineering, and technology. I.e.: don't assume ahead of time that you know the right answer; check.

If they didn't find the problems that you expected, then perhaps you need to examine why. But a hand-waving "explanation" doesn't explain very much, so I don't even really know what problems you think they should have found. FWIW, I haven't noticed any instability problems with ext3.

Does anyone have info on which of these file systems might be the better one for glitch-free playback of multitrack uncompressed audio? (I'm thinking of up to 16 simultaneous streams, so efficient throughput would be the priority -- BeOS's BFS was optimized for this sort of thing, but I don't know who in Linux-land has been focused on that aspect of performance.)

I use ext3 in ordered mode for my "/" and "/usr" partitions for its data journaling, and reiserfs with notail for my /tmp and /pub partitions (pub is an FTP/SMB fileserver, lots of activity). I think this is a good compromise between performance and non-corruptibility.

I haven't experienced any problems with ext3, and I've used it (light loads only) ever since it was a Red Hat standard file system.

OTOH, a year (I think) earlier, when Mandrake released a Reiser file system option, I tried it, had disk corruption, and couldn't find any tools that helped recovery.

Now, these are single data points, so you shouldn't take them too seriously. Also, around the same time that I had file corruption under Reiser, I also had an ext2 file system become corrupt. I even know that that problem was caused by fsck. (I was running from a secondary hard disk. I think this may have been a kernel problem.) The point is, I was able to recover from the ext2 file system corruption, but was unable to recover from the Reiser file system corruption.

So I didn't find either system to be more reliable than the other. But ext2 was recoverable, and I was unable to recover the Reiser file system.

Again, let me stress, this was under light use. The system was one that I was using for development and experimentation, not one that I did serving from or kept serious data on. So usage patterns wouldn't match a production machine.

I don't like the fact that ext3 is now included as a module. The default filesystem driver should be compiled as part of the kernel.

SGI's version of Red Hat is far preferable to Red Hat's own release for this reason.

Now I must create and maintain an initrd on my IDE system (which was never required before), and I've also been in a crazy situation where attempting to mount an installed filesystem as ext3 caused an Oops, but changing fstab to ext2 was fine.

Down with Red Hat's use of ext3 as a module! Red Hat has never handled journaling in a reasonable manner.

*ANY* journaling filesystem can recover from an unexpected power loss. With an ext3 system, if you're seeing a check taking place (and you want to prevent that), disable the periodic checks -- in general, they are a holdover from ext2:

tune2fs -c 0 -C 0 <device>

However, you should also read this, from the tune2fs man page:

You should strongly consider the consequences of disabling mount-count-dependent checking entirely. Bad disk drives, cables, memory, and kernel bugs could all corrupt a filesystem without marking the filesystem dirty or in error. If you are using journaling on your filesystem, your filesystem will never be marked dirty, so it will not normally be checked. A filesystem error detected by the kernel will still force an fsck on the next reboot, but it may already be too late to prevent data loss at that point.

OK, you lost me here. I work with a couple of NTFS partitions, containing a total of 2-4 million small files (about 40 gigs total, split on two 40-gig drives), and when my machine crashed the other day, it took OVER AN HOUR before it finished its mandatory file check. Not to mention the fact that it takes Windows about a minute and a half to "find itself" after the BIOS checks and begin booting whenever I have to restart. Hell, I even tried to format one of the drives in FAT32 since it handles small files better, but the Win2k programmers decided that no one should format FAT32 partitions larger than 32 gigs (and you can't, unless you use something other than Win2k).

My linux box (not quite as many files) recovers its ext3 journal seemingly instantly after any crash (oh wait, it doesn't crash) or forced reboot (I'll admit that sometimes it's just easier to reboot the machine than try to restart X when the screensaver won't power my monitor back up)

NTFS is a journaled file system (similar to ext3 writeback mode, I believe). It shouldn't even be running a chkdsk on bootup for an NTFS volume, unless perhaps it detects something really wacky with the file system..

FAT32, on the other hand, will always run a chkdsk whenever it wasn't unmounted cleanly. For a disk with that many small files, it would likely take even longer than a full NTFS chkdsk (whatever the reason is that that's even running), not to mention the horrific slack waste..

exactly. In my case, no keystroke works (that includes control-alt-delete). And honestly, sometimes it's just easier to reboot the machine than try to manually kill/respawn bad processes. This is a desktop machine, not a server. A little downtime won't hurt anything.

Just the other night I was trying to program during a thunderstorm. My PC was reset by power spikes at least ten times (no, I do not learn), and every time my PC came right back up without having to scan the entire partition.

Next time, I suggest standing outside with a golf club outstretched to the sky.

Actually, ReiserFS does produce a profit. Look at their web site [namesys.com]. ReiserFS is licensed under the GPL and free for anybody to use, but if you want a feature, you can pay them and they will write it for you. Then it will be available to everybody for free.

I have machines running for more than a year fully on ext3 (including root) over software RAID mirroring (it does not really matter, but maybe this increases the complexity of what the machine performs). The filesystems survived a power outage when the generator failed to start before draining the UPS, and also a hardware lock-up when a HDD controller broke down and somehow made the SCSI chain unusable (all the other HDDs became inaccessible). I am pretty happy with it, and although it is used very intensively -- e-mail, syslog from a lot of Cisco routers, netflow collection (this is really big -- about 80G of logs/month) -- it has created no problems, errors, or crashes.