
An anonymous reader writes "I'm hoping for a discussion about the best file system for a web hosting server. The server would serve as mail, database, and web hosting. Running CPanel. Likely CentOS. I was thinking that most hosts use ext3, but with all of the reading/writing to log files, a constant flow of email in and out, not to mention all of the DB reads/writes, I'm wondering if there is a more effective FS. What do you fine folks think?"

ZFS is funky and all, but you don't need the extra features and the additional CPU overhead is just wasteful. The only real things to care about are lack of fsck on unclean reboot, and fast reads. XFS+LVM2+mdraid (although a proper RAID controller is preferable) is perfect.

As I recall, XFS is particularly good with large files. I use it for a media volume on my NAS. Its major downside from my point of view is lack of TRIM support, which only matters if it's on an SSD of course. The other thing to consider would be the occasional defrag.

app: Hey, can you write this data out?
ext4: DONE!
app: Uhh, that wasn't long enough to actually write the data.
ext4: Sure it was, I'm super faGRRRRRRRRRRRRRst at writing too.
app: Wait, did you just cache that write and report it written but then not actually write it to disk until 30 seconds later?
ext4: Yeah, what about it?

That being said, use ext3 and mount it with the noatime flag. If you're on a web server you don't want to be hammering it with writes to update the last access time. That's just silly.

fsync() is waaaay too slow. You could at least have recommended fdatasync(), which is less slow. Or even better: opening files with the O_SYNC/O_DSYNC flag.
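For the curious, the difference between the three approaches can be sketched with Python's thin wrappers around the same syscalls (a rough illustration; O_DSYNC availability is platform-dependent, and the file path here is just a throwaway temp file):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")

# 1) fsync(): flushes file data AND metadata (size, mtime, ...).
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"entry 1\n")
os.fsync(fd)

# 2) fdatasync(): flushes the data, but skips metadata that isn't
#    needed to read the data back -- usually cheaper than fsync().
os.write(fd, b"entry 2\n")
os.fdatasync(fd)
os.close(fd)

# 3) O_DSYNC: every write() behaves as if followed by fdatasync(),
#    so the application never has to issue an explicit sync call.
fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_DSYNC)
os.write(fd, b"entry 3\n")
os.close(fd)

with open(path, "rb") as f:
    print(f.read())
```

The trade-off is latency per write (O_DSYNC) versus batching writes and paying the sync cost once (fdatasync), which is why databases tend to expose this as a tunable.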

The experimental nature of the Linux IO subsystem, its unpredictability, is one of the reasons why some actually pick *BSD instead. OK, disk IO is slower than that of Linux, but at least one has sensible IO guarantees: data may not be written right away, but they are written without any great delay. (The only major problem of *BSD is the lack of drivers for the sto

At this point, you can change that to "If you have to ask, use ext4". It's been around long enough at this point that it's ready for production use (and has been for a year or two). Especially if you have situations of multi-gigabyte files that take a long time to delete under ext3, or you want the faster fsck of ext4.

I plan on waiting until at least late next year before I'd test btrfs for production. Let others be the pioneers in that, because ext4 handles our workload just fine.

The main point for me to use ext4 over ext3 is that ext3 has broken fsync() behaviour. If you fsync() a single file descriptor on ext3 it will flush the whole filesystem buffer instead of just the dirty blocks of that file descriptor. Terrible for write concurrency, especially with databases.

I am running a CentOS 6.3 box with ext4 and it works well for me; I have no issues with the file system.
Ext4 has been available starting from Linux kernel 2.6.19.
It supports huge individual file sizes and overall file system sizes.
Maximum individual file size can be from 16 GB to 16 TB.
Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).
A directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3).
You can also mount an existing ext3 fs a
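For reference, the size limits quoted above in plain arithmetic (binary units, as the comment uses):

```python
# Binary units, matching the 1 EB = 1024 PB = 1024**2 TB convention above.
TB = 1024 ** 4            # bytes in a terabyte
PB = 1024 * TB
EB = 1024 * PB

max_file = 16 * TB        # upper bound on a single ext4 file
max_fs = 1 * EB           # maximum ext4 filesystem size

# How many maximal-size files would fit in a maximal filesystem:
print(max_fs // max_file)
```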

Typically, that is the default file system. That is how you will get the best support when there is an issue. It will also be the most stable with your OS because the developers focus on that FS. So personally, I would use whatever is the default FS for whatever OS you decide to use.
To get off topic a bit, IMHO that OS should be Debian because it is just too awesome and Debian-based OSes have the largest community. Also, it should be running on Linode.com ;)

Especially if you decide to use an SSD. Even if there's not a lot of data writing going on, the constant rewriting of the directory entries to update the last-accessed time stamp would wear an SSD and slow a regular hard drive.

It won't significantly wear any modern SSD, but shutting it off will save you wasted I/O for a function that is not terribly important for a web server, especially a web server that keeps logs storing far more complete information than most recent

From memory (I've been out of that business for 6 months) CPanel stores mail as maildirs. If you have gazillions of small files (that's a lot of email) then XFS handles it a lot better than ext3 - I've never benchmarked XFS against ext4. Back in the day, it also dealt with quotas more efficiently than ext2/3, but I really doubt that is a problem nowadays.

If you aren't handling gazillions of files, I'd be tempted to stick to ext3 or ext4 - just because it's more common and well known, not because it is necessarily the most efficient. When your server goes down, you'll quickly find advice on how to restore ext3 filesystems because gazillions of people have done it before. You will find less info about xfs (although it may be higher quality), just because it isn't as common.

XFS is probably better for large maildirs, but ext3 has much better performance on large directories starting in the late 2.6 kernels. It doesn't provide for an infinite # of files per directory, but it doesn't take a huge hit listing e.g. 4k files in a directory anymore.

it doesn't take a huge hit listing e.g. 4k files in a directory anymore.

Umm, maildirs store each message in its own file. I clean up (archive) emails from each past year in a separate folder and still easily have 8k files in each... and that is not my busiest mailbox.
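If you want to see how your own filesystem copes with maildir-sized directories, a throwaway micro-benchmark along these lines (4k files, matching the figure quoted above; the directory is a temp dir, not a real maildir) gives a rough idea:

```python
import os
import tempfile
import time

# Create a maildir-sized directory: 4000 empty message files.
d = tempfile.mkdtemp()
for i in range(4000):
    open(os.path.join(d, "msg.%d" % i), "w").close()

# Time a full directory listing, the operation maildir readers hammer.
t0 = time.perf_counter()
names = os.listdir(d)
elapsed = time.perf_counter() - t0
print(len(names), "files listed in %.4fs" % elapsed)
```

On a filesystem with hashed/indexed directories (ext3 with dir_index, ext4, XFS) the listing time stays roughly flat as the file count grows; on old unindexed ext2/3 directories it degrades badly.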

After a few thousand items of anything, the proper tool for the job is a database, not a file system. Though a file system can be described as a kind of database, and in any case there are problems common to both, such as fragmentation, specialized data storage always beats a generic one. Personally, I like what Dovecot

You're not going to be there forever, and all using a non-standard filesystem is going to accomplish is to cause headaches down the road for whoever is unfortunate enough to follow you. Use whatever comes with the OS you've decided to run - that'll make it a lot more likely the server will be kept patched and up to date.

Trust me - I've been the person who's had to follow a guy that decided he was going to do the sort of thing you're considering. Not just with filesystems - kernels too. It was quite annoying to run across grsec kernels that were two years out of date on some of our servers, because apparently he got bored with having to constantly do manual updates on the servers and so just stopped doing it...

If you need a large filesystem then go with XFS. RHEL only supports up to 16TB filesystems with ext4 and up to 100TB with XFS. I'm not sure at this point where the limitation comes from, as it is limited even with x86-64.

This isn't 1999. You have no reason to host your web server, email server, and database server on the same operating system.

You would be well advised to run your web server on one machine, your email server on another machine, and your database server on a third machine. In fact, this is pretty much mandatory. Many standards, such as PCI compliance, require that you separate all of your units.

Take advantage of the technology that has been created over the past 15 years and use a virtualized server environment. Run CentOS with Apache on one instance - and nothing else. Keep it completely pure, clean, and separate from all other features. Do not EVER be tempted to install any software on any server that is not directly required by its primary function.

Keep the database server similarly clean. Keep the email server similarly clean. Six months from now, when the email server dies, and you have to take the server offline to recover things, or when you decide to test an upgrade, you will suddenly be glad that you can tinker with your email server as much as you want without harming your web server.

After having worked for companies that do both, I honestly disagree. If you host your DBs and web servers on different machines, you wind up with a really heavy latency bottleneck which makes LAMP applications load even slower. It doesn't really make a difference in the "how many users can I fit into a machine" category. CPanel in particular is a very one-machine-centric piece of software; while you could link it to a remote database, it's really a better idea to put everything on one machine.

Run CentOS with Apache on one instance - and nothing else. Keep it completely pure, clean, and separate from all other features. Do not EVER be tempted to install any software on any server that is not directly required by its primary function.

Why is this required? Shouldn't we expect our operating systems to multitask?

It's a shame, isn't it? We have all these layers and layers of security (such as user separation, private memory address space for processes, java virtual machine...) which we do not trust and are therefore essentially nothing but configuration and performance cruft. If we're really just running one application on each (virtual) machine, that machine might as well be running DOS.

Why is this required? Shouldn't we expect our operating systems to multitask?

We should expect our servers to be secure. But they're buggy.
We should expect defense in depth to be unnecessary. But people screw up.
We should expect OS tunables to be variable on a per-process basis, but they're not (with Linux anyhow).

If you are concerned about performance and expect a constant email stream, you should host mail, database and web servers on separate computers. There is a reason any reputable host does it this way. Plus, increased load on one component doesn't affect the others.

I use an old netbook for movie watching while trying to sleep (not a pervert, only insomniac). I let the computer die by itself many times a week and so far XFS never gave up on me. Never had to fsck at boot, never complained, so far no FS corruption I can see. I would recommend XFS based on its reliability, and I trust its stability, being almost 2 decades old. Bonus point: it performs beautifully and does not require gigabytes of memory just to run. I mostly pick FS based on reliability, and disks based o

Based on the topology you have described, the last thing you need to worry about is what file system to choose, since you have decided to host ALL tasks on a single server. If performance was an issue, you would separate them all to dedicated "farms", and if security is a factor (which it should be), none of them would be in the DMZ; only your proxy(s) would live there.

There is no reason to not have various parts of the filesystem mounted from different disks or partitions on the same disk. If you do this, you can run part of the system on one filesystem, other parts on others as appropriate for their intended usage. This is commonly done on large servers for performance reasons, quite like the one you are asking about. It's also why SCSI ruled in the server world for so long since it made it easy to have multiple discs in a system.

So run most of your system on something stable, reliable and with good read performance, and the portions that are going to take a read/write beating on a separate partition/disc with the filesystem which has better read or write, whichever is needed, performance. If you segregate your filesystem like this correctly, an added benefit is that you can mount security critical portions of the filesystem readonly, making it more difficult for an attacker.

Actually, there is a reason not to have different apps using different filesystems in partitions on one disk. If those apps just use subdirectories within one filesystem, that filesystem can do a pretty good job of linearizing I/O across them all, minimizing head motion (XFS is especially good at this). If those apps use separate partitions, you'll thrash the disk head mercilessly between them if more than one is busy. Your advice is good in the multiple-disk case, but terrible in the single-disk case, a

Contrary to the majority of the people replying to this post, I emphatically DO NOT recommend ext3. ext3 by default wants to fsck every 60 or 90 days; you can disable this, but if you forget to, in a hosting environment it can be pure hell if one of your servers reboots. Usually shared hosting web servers are not redundant, for cost reasons; if one of your shared hosting boxes reboots you thus get to enjoy up to an hour of customers on the phone screaming at you while the fsck completes

XFS is a very good filesystem for hosting operations. It has superior performance to ext3, which really helps, as it means your XFS-running server can host more websites and respond to higher volumes of requests than an ext3-running equivalent. It also has a feature called Project Quotas, which allows you to define quotas not linked to a specific user or group account; this can be extremely useful for hosting environments, both for single-homed customers and for multi-homed systems where individual customer websites are not tied to UNIX user accounts. The oft-circulated myth that XFS is prone to data loss is just that; there was a bug in its very early Linux releases that was fixed ages ago, and now it's no worse than ext4 in this respect.

Ext4 is also a good option, and a better option than ext3; it is faster and more modern than ext3 and is being more actively developed. Ext4 is also more widely used than XFS, and is less likely to get you into trouble in the unlikely event that you get bit by an unusual bug with either filesystem.

Btrfs will be a great option when it is officially declared stable, but that hasn't happened yet. The main advantages for btrfs will be for hosting virtual machines and VPSes, as Btrfs's excellent copy on write capabilities will facilitate rapid cloning of VMs.

This is already a reality in the world of FreeBSD, Solaris and the various Illumos/OpenSolaris clones, thanks to ZFS. ZFS is stable and reliable, and if you are on a platform that features it, you should avail yourself of it. I would advise you steer clear of ZFS on Linux.

Finally, for clustered applications, i.e. if you want to buck the trend and implement a high availability system with multiple redundant webservers, the only Linux clustering filesystem I've found to be worth the trouble is Oracle's open source OCFS2 filesystem (avoid OCFS1; it's deprecated and non-POSIX-compliant). OCFS2 lets you have multiple Linux boxes share the same filesystem; if one of them goes offline, the others still have access to it. You can easily implement a redundant iSCSI backend for it using mpio. It's somewhat easier to do this than to set up a high availability NFS cluster, without buying a proprietary filer such as a NetApp.

Reiserfs was at one time popular for mail servers, in particular for maildirs, due to its competence at handling large numbers of small files and small I/O transactions, but in the wake of Hans Reiser's murder conviction, it is no longer being actively developed and should be avoided. JFS likewise is a very good filesystem, on a par with ext4 in terms of featureset, but for various reasons the Linux version of it has failed to become popular, and you should avoid it on a hosting box for that reason (unless your box is running AIX).

Speaking of older proprietary UNIX systems; on these you should have no qualms about using the standard UFS, which is a tried and true filesystem analogous to ext2 in terms of functionality. This is the standard on OpenBSD. NetBSD features a variant with journaling called WAPBL, developed by the now defunct Wasabi Systems. DragonFlyBSD features an innovative clustering FS called HammerFS, which has received some favorable reviews, but I haven't seen anyone using that platform in hosting yet. The main headache with hosting is the extreme cruelty you will experience in response to downtime, even when that downtime is short, scheduled or inevitable. Thus, it pays to avoid using unconventional systems that customers will use as a vector for claiming incomp

While I agree with what you say, mostly, I've got contention with a couple key points.

Btrfs will be a great option when it is officially declared stable, but that hasn't happened yet.

On the contrary, btrfs will not be a good option 'when it's officially declared stable'. It'll be a good option when it's vetted as stable without too much regressive or destructive behavior, in the wild. Until then, it's still immature and best suited for closed environments.

The main advantages for btrfs will be for hosting virtual machines and VPSes, as Btrfs's excellent copy on write capabilities will facilitate rapid cloning of VMs.

This is already a reality in the world of FreeBSD, Solaris and the various Illumos/OpenSolaris clones, thanks to ZFS. ZFS is stable and reliable, and if you are on a platform that features it, you should avail yourself of it.

I agree, but a word of caution... FreeBSD lacks the necessary stable storage controller support to make ZFS fully stable on FreeBSD on all but a hand

I spent some time late last year and earlier this year working very closely with the developers of BetterLinux, and in the work I did, I did stress testing (on a limited scale) to see how the product performed. It has some OSS components and some closed-source components, but the I/O leveling they do is pretty amazing.

I'm planning to race a Yugo kitted out with cast iron spoilers and wooden tires. Which type of decals will make me go fastest?

On topic: the choice of filesystem will have far less impact than the choice of programming language, database, webserver application and how you use those. The choice to go with CPanel (or any *Panel) means the impact of the filesystem will be unnoticeable. Nothing wrong with those panels; they drive down human cost, but if you need the absolute best performance, panels won't let you g

I used JFS on all my machines from around 2007-2011, including laptops. I had many unclean shutdowns (especially on laptops) and JFS rarely had any problems, except that one time briefly in 2009 where I did actually lose a bunch of data, but then so did my ext4 reinstall a few weeks later (bad hardware).

JFS was much, much better than ext3. Especially in low-CPU situations/hardware.

I can't remember why I went back to ext4, I guess I wanted to see if it still sucked compared to JFS. With noatime I decided I c

Go with tmpfs. It has the highest performance of any of the "standard kernel" filesystems, and if you use it for your personal webserver/blogserver/mailserver/etc, it will never lose any valuable data if the server reboots unexpectedly.

yeah - it's especially good for your log files, after all, SSD is just like a big RAM drive.....

You're going to be better off forgetting SSDs and going with lots more RAM in most cases; if you have enough RAM to cache all your static files, then you have the best solution. If you're running a dynamic site that generates stuff from a DB and that DB is continually written to, then generally putting your DB on an SSD is going to kill its performance just as quickly as if you had put /var/log on it.

RAID arrays are the fastest: striping data across 2 drives basically doubles your access speed, so stripe across an array of 4! The disadvantage is that 1 drive failure kills all data - so mirror the lot. 8 drives in a stripe+mirror (mirror each pair, then put the stripe across the pairs - not the other way round) will give you fabulous performance without worry that your SSD will start garbage collecting all the time when it starts to fill up.
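The mirror-then-stripe arithmetic above can be sanity-checked quickly (drive count from the comment; the per-drive capacity is an assumed round number for illustration):

```python
# RAID10 as described: mirror each pair first (RAID1),
# then stripe across the mirrored pairs (RAID0).
drives = 8
size_tb = 1.0                 # assumed capacity per drive, TB

pairs = drives // 2           # 4 mirrored pairs
usable_tb = pairs * size_tb   # usable capacity: half the raw total
read_speedup = pairs          # reads stripe across all 4 pairs

print(usable_tb, read_speedup)
```

The "not the other way round" advice matters for fault tolerance: with mirrored pairs under the stripe, any single drive can fail per pair; striping first and mirroring the two stripes makes a second failure far more likely to be fatal.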

Don't know the budget, but 250 GB of "RAM" for $500 looks like a good deal. And you just suggested an array of 4 drives to someone that wants the classic web server with CPanel, all stuffed in one system; that would be like $3-4k just for the disks. SSD is the way to go in these cases mainly because of the money you save. And the lifespan? I replaced way more HDDs than SSDs in the last 3 years since using them, and they are in the same ratio right now and the SSDs get way more I/O.

You're still going to want redundancy. At the very least 2 identical drives mirrored with software RAID.

If redundancy is important, 500GB/1TB "Enterprise" drives are cheap. 4 drives in RAID10 would give the best cost:redundancy:performance ratio. You can probably get 4 HDDs for the cost of the one $500 240GB SSD you mentioned.

Yep, agreed... agonizing over the FS choice isn't going to provide many gains compared to spending time optimizing the physical disk configuration and partitioning.

FS performance is only going to really matter if you're going to have directories with thousands of nodes in them. But then hopefully you have better ways to prevent that from happening.

But you do want to spend a good deal of time benchmarking different RAID and partitioning setups, where you can see some gains in the 100-200% range rather than 5-10%, especially under concurrent loads. Spend some quality time with bonnie++ and making some pretty comparison graphs. Configure jmeter to run some load tests on different parts of your system, and then all together to see how well it deals with concurrent accesses. Figure out which processes you want to dedicate resources to, and which can be well-behaved and share with other processes. Set everything up in a way to make it easier to scale out to other servers when you're ready to grow.

The FS choice is probably the least interesting aspect of the system (until you start looking at clustered FSs, like OCFS2 or Lustre)

Due to the amount of reads/writes & the life span of SSDs, they are some of the worst drives you can get for a high-availability web server. ext3 should work fine for you, especially if you're not too familiar with the different types of file systems. Two things I might recommend: first, if you're looking at really high traffic, you need to separate out your database, email, & web server into 3 different entities. If not... again the file system is not really a concern for you. Last, but not least, re

Due to the amount of reads/writes & the life span of SSDs, they are some of the worst drives you can get for a high-availability web server.

Only if you're completely ignorant about the difference between consumer and enterprise SSDs. The official rated endurance of a 200GB Intel 710 with random 4K writes (the worst case scenario) with no over-provisioning is 1.0 PB. In order to wear this drive out in a high-load scenario, you could write 100GB of data in 4k chunks to this drive every day for nearly 30 years before you approached even the official endurance.
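The "nearly 30 years" figure follows directly from the 1.0 PB rated endurance Intel quotes for the 710; a quick sanity check (assuming binary units):

```python
# 1.0 PB rated endurance, expressed in GB, divided by 100 GB/day.
rated_endurance_gb = 1.0 * 1024 * 1024   # 1 PB = 1024**2 GB
writes_per_day_gb = 100

days = rated_endurance_gb / writes_per_day_gb
years = days / 365
print(round(years, 1))  # ~28.7 years
```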

If you use a consumer SSD in a high-load enterprise scenario, you're going to get bit. If you use an enterprise SSD in a high-load enterprise scenario, you'll have no problems whatsoever with endurance, regardless of what people spreading FUD like you would have you believe.

Intel rates the endurance of the 710 at 1.0 PB and the 330 at 60 TB, so yeah, there's a pretty big difference there.

In Intel's case, specifically, the difference is between using MLC flash and MLC-HET flash. The difference is largely from binning, but it's the difference between 3k to 5k p/e cycles on typical MLC, and 90k p/e cycles on MLC-HET. SLC produces similar improvements. I could explain how they achieve this, but Anandtech and Tom's Hardware have both done pretty good write-ups explaining the difference.

It depends entirely on your workload. If you've got an enterprise workload where you don't do many writes, then a consumer drive will work just fine. And since most drives report their current wear levels, it's actually pretty safe to use a consumer drive if you monitor that.

Anandtech gave one example, when they were short on capacity and were facing a delay in getting some new enterprise SSDs; they walked out to the store, bought a bunch of consumer Intel SSDs, and slapped those into their servers. They were facing a write-heavy workload, so they wouldn't have lasted long, but they only needed them for a few months and kept an eye on the media wear indicator values, so they were fine.

My point overall is that you can't look at SSDs the same way if you're a consumer versus an enterprise user, and if you're an enterprise user, you need to pick an SSD appropriate for your workload.

One thing people don't consider is upgrade cycles. Hanging on to an SSD for ten years doesn't really make sense, because it only takes a few years for them to be replaced by drives enormously cheaper, larger, and faster. They're improving by Moore's Law, unlike HDDs. I paid $700 for a 160GB Intel G1, and three years later, I paid $135 for a much faster 180GB Intel 330. If you're going to replace an SSD in three to five years, does it matter if the lifespan is 10 or 30?

The biggest source for early XFS corruption issues was that at the time the filesystem was introduced, most drives on the market lied about write caching. XFS was the first Linux filesystem that depended on write barriers working properly. If something was declared written but not really on disk, filesystem corruption could easily result after a crash. But when XFS was released in 2001, all the cheap ATA disks in PC hardware lied about writes being complete, Linux didn't know how to work around that, and as such barriers were not reliable on them. SGI didn't realize how big of a problem this was because their own hardware, the IRIX systems XFS was developed for, used better quality drives where this didn't happen. But take that same filesystem and run it on random PC hardware of the era, and it usually didn't work.

ext4 will fail in the same way XFS used to, if you run it on old hardware. That bug was only fixed in kernel 2.6.32 [phoronix.com], with an associated performance loss on software like PostgreSQL that depends on write barriers for its own reliable operation too. Nowadays write barriers on Linux are handled by flushing the drive's cache out, all SATA drives support that cache flushing call, and the filesystems built on barriers work fine.

Many of the other obscure XFS bugs were flushed out when RedHat did QA for RHEL6. In fact, XFS is the only good way to support volumes over 16TB in size, as part of their Scalable File System [redhat.com] package, a fairly expensive add-on to RHEL6. All of the largest Linux installs I deal with are on XFS, period.

I wouldn't use XFS on a kernel before RHEL6 / Debian Squeeze though. I know the software side of the write barrier implementation, the cache flushing code, works in the 2.6.32 derived kernels they run. The bug I pointed to as fixed in 2.6.32 was specific to ext4, but there were lots of other fixes to that kernel in this area. I don't trust any of the earlier kernels for ext4 or xfs.

Red Hat spent a lot of time effectively saying to everyone that they didn't support XFS. Eventually they had to throw in the towel because it's the only Linux filesystem that genuinely works well once you start dealing in terabytes of data. It's also recently got better at handling lots of smaller files and metadata. It's an incredibly useful filesystem and unfortunate that it still gets a lot of FUD thrown at it because of many peoples' misunderstanding about data loss issues several years ago.

I read the bug report at your link and I don't see any indication that a) this has any relation to the size of the directory, or b) that this one bug makes NFS a flaky POS. I have a 1TB+ NFS volume to/from which I write/read several GB per day. Never in years has it given me one problem, but hey, that's just me.

I don't see any indication that a) this has any relation to the size of the directory

The cited bug [redhat.com] points out a problem with readdir(). This manifests itself as failures with other software, including dovecot [redhat.com] (where maildir is used) and bonnie++ [centos.org]. Some of those other bugs were reported and marked as duplicates [redhat.com], some weren't.

I review my web log files on a regular basis and look for exploit attempts to update my firewall and make sure I am not exposed. I use log files to prevent a SHTF scenario as much as possible. Now get off my lawn, kid.

I have seen as much data corruption on HP-UX/ia64, AIX/POWER and Solaris/SPARC boxes as on cheaper Linux/x86 and Linux/x64 boxes with SUSE or RHEL. I'm pretty sure in the long term Solaris/x64 would show the same results.

The cause of most corruptions was a faulty power supply or a faulty memory module (and dumb admins who had ignored the ECC errors).

The only memorable software-induced corruption I can recall right now was on HP-UX/ia64. Some weird admins have exported all local files systems via NFS. And then,