While Ext4 was originally merged in 2.6.19, it was marked as a development filesystem. It has been a long time coming, but as planned, Ext4dev has been renamed to Ext4 in 2.6.28, indicating its level of maturity and paving the way for production-level deployments. Ext4 filesystem developer Ted Ts'o has also endorsed Btrfs as a multi-vendor, next-generation filesystem, and with interest from Andrew Morton as well, Btrfs is planned to be merged before 2.6.29 is released. It will follow a development process similar to Ext4's and will initially be marked as development-only.

Of course it is. Believe it or not, such technical challenges are OS-neutral.

Incorrect. Fragmentation and its consequences are filesystem (and workload) specific. Unix/Linux filesystems historically, with the notable exception of Reiser4, have been quite resistant to fragmentation.

For example, I just spot checked my busiest server, formatted ext3, which has a workload that consists of:

It has been operating for a little under a year and currently exhibits only 6.6% fragmentation.
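For anyone curious where a figure like that comes from: the fragmentation percentage ext3 reports is the share of non-contiguous files printed in the e2fsck summary line (e2fsck -fn does a read-only check). A small sketch of pulling that number out of the summary; the sample line below is illustrative, not actual output from the server in question:

```python
import re

def noncontig_percent(fsck_summary: str) -> float:
    """Parse the 'non-contiguous' percentage from an e2fsck summary line.

    A successful e2fsck run ends with a line such as:
      /dev/sda1: 181000/1831424 files (6.6% non-contiguous), ...
    """
    m = re.search(r"\(([\d.]+)% non-contiguous\)", fsck_summary)
    if m is None:
        raise ValueError("no non-contiguous figure found in fsck output")
    return float(m.group(1))

# Illustrative sample line (made-up numbers, not real output):
sample = "/dev/sda1: 181000/1831424 files (6.6% non-contiguous), 901234/7322884 blocks"
print(noncontig_percent(sample))  # 6.6
```

In practice you would run e2fsck -fn on an unmounted (or read-only) filesystem and feed its last line to a helper like this, or just eyeball the percentage directly.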

That said, there may be workloads that result in more fragmentation. But low to mid single-digit percentages are what I typically see. In fact, in my 20+ years of administering various Unix/Linux systems, I have never at any time been in a situation in which I felt any need for a defragmenter. But as a friend of mine was fond of saying, "it's better to have it and not need it than need it and not have it".

Unfortunately, considering the number of new Linux users coming from a Windows background, I expect to see lots of senseless recommendations to "defrag the hard drive" in the not too distant future. For "performance" reasons... and even as an attempt to fix problems. Remember that Linspire was forced, by popular user request, to add a virus checker to their distro. Because "everyone knows" that it's dangerous to run a computer without one because it might get a "computer virus".

Unix/Linux filesystems historically, with the notable exception of Reiser4, have been quite resistant to fragmentation.

Why is that? Someone more knowledgeable than I could probably point to some specific aspects of unix filesystem design that reduce fragmentation. But it was an issue for the designers to consider when the fs was designed, and is still an issue for people working on new filesystems today. How well that issue is dealt with by particular operating systems or particular filesystems is a separate question. (FAT certainly was notoriously bad.)

I think it likely has to do with respective history. Unix started out on the server and evolved onto the desktop. DOS/Windows started out on the desktop and evolved to the server. Unix filesystems were designed in an environment where the machine was expected to run, run, run. Downtime was expensive and to a great extent unacceptable. Defragmenting the filesystem would have been downtime, and thus unacceptable. Current community culture reflects that tradition.

Windows culture tends to look more to resigning one's self to fragmentation (and viruses for that matter) and then running a tool (defragger, antivirus) to "fix" the problem. When NTFS was designed, Windows users were already used to the routine of regular defrags, and would likely do it whether the filesystem required it or not. So why make fragmentation avoidance a high priority?

Currently F8 x86_64, though if I had it all to do over I would have stuck with CentOS. Fedora was pretty rough for the first couple of months we ran it but things have stabilized nicely.

8 GB of memory. I target about 128M per desktop user. 64-bit costs some memory up front, but has more sane memory management. I was running something like 50 desktops on 4GB on x86_32 CentOS 5, but sometimes ZONE_NORMAL was touch and go. I had to reserve a lot of memory for it, which cut into the buffer and page caches a bit. (Linux does a wonderful job with shared memory. Single-user desktop admins don't get to see all the wonders it can perform.)
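The figures above hang together as simple back-of-the-envelope arithmetic, which is worth spelling out once:

```python
# Capacity check for the numbers above: 8 GB of RAM at a
# target of roughly 128 MB per desktop user.
total_mb = 8 * 1024      # 8 GB expressed in MB
per_user_mb = 128        # per-user memory target
print(total_mb // per_user_mb)  # 64 -- headroom for ~60 users
```

That 64-user ceiling lines up with the ~60 desktop users mentioned below, with a little slack left for the kernel and caches.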

BTW, it's a dual Xeon 3.2 GHz box. And the processor usage is only moderate. (That's why I chuckle a bit when I hear people talk as if they think multicore is likely to benefit the average user. My 60 desktop users don't even keep 2 cores overly busy!)

With x86_64, no, I don't feel any great need for more servers. I don't have the luxury, for one thing. And more servers means more administrative overhead. That's one reason that virtualization is such a buzzword today.