History

Arch was never a mistake. Arch Linux had already taught me so much about Linux (and will continue to do so on my other desktop). But Arch definitely requires more time and attention than I would like to spend on a server. Ideally I’d prefer to be able to forget about the server for a while until a reminder email says “um … there’s a couple of updates you should look at, buddy.”

Space isn’t free – and neither is speed

The opportunity to migrate to Ubuntu arose from the fact that I had run out of SATA ports, the ports required to connect hard drives to the rest of the computer – that 7TB RAID array uses a lot of ports! I had even given away my very old 200GB hard disk as it took up one of those ports. I also warned the recipient that the disk’s SMART monitoring indicated it was unreliable. As a temporary workaround to the lack of SATA ports, I had even migrated the server’s OS to a set of four USB sticks in an md RAID1. Crazy, I know. I wasn’t too happy about the speed. I decided to go out and buy a new reliable hard drive and a SATA expansion card to go with it.

The server’s primary Arch partition was using about 7GB of disk. A big chunk of that was a swap file, cached data and otherwise miscellaneous or unnecessary files. Overall the actual size of the OS, including the /home folder, was only about 2GB. This prompted me to look into a super-fast SSD drive, thinking perhaps a smaller one might not be so expensive. It turned out that the cheapest non-SSD drive I could find actually cost more than one of these relatively small SSDs. Yay for me. 🙂

Choice? Woah?!

In choosing the OS, I’d already decided it wouldn’t be Arch. Out of all the other popular distributions, I’m most familiar with Ubuntu and CentOS. Fedora was also a possibility – but I hadn’t yet seriously considered it for a server. Ubuntu won the round.

The next decision I had to make didn’t occur to me until Ubiquity (Ubuntu’s installation wizard) asked it of me: how to set up the partitions.

I was new to using SSDs in Linux, though I was well aware of the pitfalls of not using them correctly – chiefly the risk of poor longevity if misused.

I didn’t want to use a dedicated swap partition. I plan on upgrading the server’s motherboard/CPU/memory in the not-too-distant future. Based on that, I decided to put swap in a swap file on the existing md RAID. The swap won’t be particularly fast, but its only purpose will be for that rare occasion when something’s gone wrong and the memory isn’t available.
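Setting up a swap file like that only takes a few commands. A minimal sketch – the size and the final /swapfile path are my assumptions, not from the original setup:

```shell
# Create the swap file with dd rather than fallocate, since swap must not
# be sparse. (64 MiB here to keep the example quick; a real server would
# use something like count=2048 for 2 GiB.)
dd if=/dev/zero of=swapfile bs=1M count=64 status=none
chmod 600 swapfile
mkswap swapfile
# As root, move it into place, enable it, and make it permanent:
#   mv swapfile /swapfile
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab
```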

This then left me to give the root partition the full 60GB of an Intel 330 SSD. I considered separating /home but it just seemed a little pointless, given how little space was used in the past. I first set up the partition with LVM – something I’ve recently been doing whenever I set up a Linux box (truly, there’s no excuse not to use LVM). When it got to the part where I would configure the filesystem, I clicked the drop-down and instinctively selected ext4. Then I noticed btrfs in the same list. Hang on!!

But a what?

Btrfs (“butter-eff-ess”, “better-eff-ess”, “bee-tree-eff-ess”, or whatever you fancy on the day) is a relatively new filesystem developed to bring Linux’s filesystem capabilities back on track with current filesystem tech. The existing King-of-the-Hill filesystem, “ext” (the current version being ext4), is pretty good – but it is limited, stuck in an old paradigm (think of a brand new F22 Raptor vs. an F4 Phantom with a half-jested attempt at an equivalency upgrade) and is unlikely to be able to compete for very long with newer enterprise filesystems such as Oracle’s ZFS. Btrfs still has a long way to go and is still considered experimental (depending on who you ask and what features you need). Many consider it to be stable for basic use – but nobody is going to make any guarantees. And, of course, everyone is saying to make and test backups!

Mooooooo

The most fundamental difference between ext and btrfs is that btrfs is a “CoW” or “Copy on Write” filesystem. This means that data is never actually deliberately overwritten by the filesystem’s internals. If you write a change to a file, btrfs will write your changes to a new location on physical media and will update the internal pointers to refer to the new location. Btrfs goes a step further in that those internal pointers (referred to as metadata) are also CoW. Older versions of ext would have simply overwritten the data. Ext4 would use a journal to ensure that corruption won’t occur should the AC plug be yanked out at the most inopportune moment. The journal results in a similar number of steps required to update data.

With an SSD, the underlying hardware operates a similar CoW process no matter what filesystem you’re using. This is because SSD drives cannot actually overwrite data – they have to copy the data (with your changes) to a new location and then erase the old block entirely. An optimisation in this area is that an SSD might not even erase the old block but rather simply make a note to erase the block at a later time when things aren’t so busy. The end result is that SSD drives fit very well with a CoW filesystem and don’t perform as well with non-CoW filesystems.

To make matters interesting, CoW in the filesystem easily goes hand in hand with a feature called deduplication. This allows two (or more) identical blocks of data to be stored using only a single copy, saving space. With CoW, if a deduplicated file is modified, the separate twin won’t be affected as the modified file’s data will have been written to a different physical block.
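Btrfs exposes this block sharing directly through reflink copies. A small illustration (using --reflink=auto, which falls back to a plain copy on filesystems without CoW support):

```shell
# Make a 10 MiB file, then a lightweight copy of it. On btrfs the clone
# shares the original's blocks; nothing is duplicated until one of the
# two files is modified (copy-on-write).
dd if=/dev/zero of=original.img bs=1M count=10 status=none
cp --reflink=auto original.img clone.img
# Modifying the clone leaves the original untouched – only the changed
# blocks get their own storage:
printf 'changed' | dd of=clone.img conv=notrunc status=none
cmp -s original.img clone.img && echo same || echo different   # prints "different"
```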

CoW in turn makes snapshotting relatively easy to implement. When a snapshot is made the system merely records the new snapshot as being a duplication of all data and metadata within the volume. With CoW, when changes are made, the snapshot’s data stays intact, and a consistent view of the filesystem’s status at the time the snapshot was made can be maintained.
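On btrfs this is exposed through subvolumes. A sketch of what that looks like in practice – the paths are hypothetical, and these commands require root and an actual btrfs filesystem:

```shell
# Assuming /mnt/data is on btrfs (requires root and btrfs-progs):
btrfs subvolume create /mnt/data/www                      # a subvolume to hold data
btrfs subvolume snapshot -r /mnt/data/www /mnt/data/www-backup
# The read-only snapshot is created instantly and initially occupies no
# extra space - blocks are only duplicated as /mnt/data/www diverges.
btrfs subvolume list /mnt/data
```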

A new friend

With the above in mind, especially as Ubuntu has made btrfs available as an install-time option, I figured it would be a good time to dive into btrfs and explore a little. 🙂

The relatively new document types of Office 2007 have given some web hosts problems when their clients want to offer documents for download. Most often, the documents are served by the web server as “text/html”, which is then rendered as a ton of garbage on the web user’s screen.

The best way to resolve this is to add all the MIME types to the server’s main configuration. IIS7 for Windows already has these MIME types set up correctly by default. IIS6 and IIS5 require the MIME types to be added, as might Apache on older installations. For Apache, there is also a workaround for the individual domain owner to add the MIME types via Apache’s .htaccess file.

IIS 6 MIME type addition (for the Server Administrator)

Before this can be done, ensure that your server is set to allow direct metabase editing (in IIS Manager, right-click the local computer node and select “Properties”):

Within the “Internet Information Services” tab (usually the only tab), ensure that the “Enable Direct Metabase Edit” checkbox is checked.

Click [OK]

Be sure to back up IIS’s configuration (here for IIS5) beforehand. I won’t take any responsibility for an admin breaking his server. I have reason to believe this may also work on IIS5; however, I have just as much reason to believe that it might just give lots of errors. If an IIS5 / Windows 2000 admin is willing to test this for me after backing up their configuration, please let me know of the results.

Copy the following text into a file named msoff07-addmime.vbs and execute it once from the command line by typing cscript msoff07-addmime.vbs and pressing Enter. If you run it more than once, the MIME types will be added each time and you will have multiple identical entries:
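The original script isn’t reproduced here, but the standard ADSI approach for IIS 6 looks roughly like the sketch below – this is not the original file, the extension list is trimmed to the three core types, and it should only be run against a backed-up metabase:

```vbscript
' Sketch of msoff07-addmime.vbs: appends Office 2007 MIME types to the
' IIS 6 metabase via ADSI. Run once: cscript msoff07-addmime.vbs
Dim oMimeMap, aMimeMap, aNew, i
aNew = Array( _
  ".docx", "application/vnd.openxmlformats-officedocument.wordprocessingml.document", _
  ".xlsx", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", _
  ".pptx", "application/vnd.openxmlformats-officedocument.presentationml.presentation")

Set oMimeMap = GetObject("IIS://LocalHost/MimeMap")
aMimeMap = oMimeMap.GetEx("MimeMap")

For i = 0 To UBound(aNew) Step 2
  ReDim Preserve aMimeMap(UBound(aMimeMap) + 1)
  Set aMimeMap(UBound(aMimeMap)) = CreateObject("MimeMap")
  aMimeMap(UBound(aMimeMap)).Extension = aNew(i)
  aMimeMap(UBound(aMimeMap)).MimeType = aNew(i + 1)
Next

oMimeMap.PutEx 2, "MimeMap", aMimeMap  ' 2 = ADS_PROPERTY_UPDATE
oMimeMap.SetInfo
WScript.Echo "MIME types added."
```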

Apache MIME type addition (for the Server Administrator)

Apache stores its MIME types in a file normally located at $installpath/conf/mime.types. See the mod_mime documentation for more on how it works. Arch Linux installs its MIME types at /etc/httpd/conf/mime.types and Parallels Plesk installs it in /usr/local/psa/admin/conf/mime.types. Your distribution might have it in another place, so find your mime.types file by running locate mime.types.
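For the .htaccess workaround mentioned earlier, the three core Office 2007 types can be added per-site like this (assuming the host’s AllowOverride setting permits mod_mime directives):

```apache
AddType application/vnd.openxmlformats-officedocument.wordprocessingml.document docx
AddType application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx
AddType application/vnd.openxmlformats-officedocument.presentationml.presentation pptx
```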