Sunday, January 22, 2012

Improve Disk Performance on the Desktop

Today's tip is about improving disk performance under Linux. If you've ever looked into the topic, you've probably read various articles claiming this filesystem is better for one workload and that filesystem is better for another, based on scripted benchmarks. Those benchmarks give you an idea of what might be better in various situations, but the articles rarely describe how you can tweak a specific filesystem, nor do they mention anything about tweaking the I/O scheduler. In this article, I am going to explain how to tweak the I/O scheduler and the ext4 filesystem for desktop and gaming usage.

First, I find this to be the best tweak for the CFQ I/O schedu........ Stop! I'm not going to tell you about any tweaks for the CFQ disk scheduler because I find CFQ to be garbage. Now, I bet you're thinking, "But CFQ is designed for low latency. Shouldn't that be the best choice for a desktop?" Well, if latency is your only concern, then yes, you are absolutely correct. However, when copying large amounts of data, the scheduler can mean the difference between getting up to grab a can of soda while you wait versus fixing yourself a hot cup of coffee while you wait. After some reading and some trial and error, I found that deadline can be tweaked to give decent latency on the desktop along with the excellent throughput you would normally expect from the deadline scheduler. Now, you might be thinking, "Deadline on the desktop? That can't be right." Well, see my tweaked settings for deadline below and give it a try; you may be pleasantly surprised.
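The original settings block did not survive in this copy of the post, so here is an illustrative sketch of what deadline tuning through sysfs looks like. Only the read_ahead_kb value of 2048 is confirmed by the article itself; the deadline tunable values below are example numbers, not the author's exact settings.

```shell
# Run as root. Replace "sda" with your drive's device name.
# Switch the drive to the deadline I/O scheduler:
echo deadline > /sys/block/sda/queue/scheduler

# General disk I/O tweak (the article's value): larger read-ahead buffer.
echo 2048 > /sys/block/sda/queue/read_ahead_kb

# Example deadline tunables (illustrative values, not the author's):
echo 100  > /sys/block/sda/queue/iosched/read_expire    # read deadline in ms; lower favors read latency
echo 1500 > /sys/block/sda/queue/iosched/write_expire   # write deadline in ms
echo 4    > /sys/block/sda/queue/iosched/writes_starved # reads served per write batch
echo 16   > /sys/block/sda/queue/iosched/fifo_batch     # smaller batches trade throughput for latency
```

These settings reset on reboot, so they are typically placed in a boot script such as /etc/rc.local or a udev rule.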

Of course, you would want to repeat these lines, replacing "sda" with the device name of any additional hard drives in the system. The line "echo 2048 > /sys/block/sda/queue/read_ahead_kb" is a general disk I/O tweak, not deadline specific; it simply increases the size of the read-ahead buffer, and I find it works well with any I/O scheduler. If you are curious as to what exactly each of these tweaks does, check the documentation in the kernel source tree.

This tweak still leans a little heavier toward throughput. If it does not give you enough responsiveness, I suggest you try the BFQ scheduler (see the link at the end of the article). BFQ leans a little more toward responsiveness but still gives much better throughput than CFQ. Between the tweaked deadline scheduler and the BFQ scheduler, I'd be very surprised if you decided to go back to CFQ.

Ext4 is the fastest filesystem on Linux. "What!? No it's not! Filesystem X is way faster!" If that is what you were just thinking, you may be absolutely correct. After all, latency on the desktop can be rather subjective. I say that ext4 is the fastest filesystem on Linux because, for me, it has been. On that note, I'd like to share some tweaks for the ext4 filesystem that make it the fastest for me.

First of all, journaling just slows things down; don't use it unless the data is important, in which case use a 500 megabyte (or smaller for older/smaller disks) full data journal. Are you thinking, "What!? A full data journal makes things go fast!? Are some neurons not firing in your brain?" In some situations a full data journal can slow things down considerably, but try to think about what typically goes on in a desktop computer's filesystem. You have things like changing settings in KDE, adding bookmarks, and downloading new cookies in a web browser. What do all of these have in common? Writing out various small files. Instead of the drive having to seek all about the disk to make these writes, it can be much faster for them to be written to a small area of the disk in the journal and then copied to their various final places afterward. Now, I will list the parameters I enter for mkfs.ext4 when creating non-journaled and full data journaled ext4 filesystems.
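The author's exact mkfs.ext4 parameter list is not preserved in this copy of the post, but the two setups described above can be sketched with standard mkfs.ext4 options. These device names and flag choices are illustrative assumptions; only the 500 megabyte journal size comes from the article.

```shell
# Hypothetical examples -- adjust device names to your layout.

# Non-journaled ext4 (e.g. for /, /usr/local, and a games partition):
mkfs.ext4 -O ^has_journal /dev/sda1

# ext4 with a 500 MB internal journal, intended for full data
# journaling (e.g. for /home and a file archival partition):
mkfs.ext4 -J size=500 /dev/sda2
```

The -J size= value is given in megabytes; ^has_journal disables the journal feature entirely.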

Afterward, run tune2fs to switch the default mount behavior to full data journaling like this:

tune2fs -o journal_data /dev/sda1

You would replace /dev/sda1 with the device file of the partition holding the filesystem you want to modify.

If you are curious as to what each of the parameters does, read the manual page: run "man mke2fs" from the terminal.

I have partitions for /, /usr/local, and /home, as well as one for file archival and one for games. I use non-journaled ext4 for all except /home and the file archival partition; for those, I use full data journaled ext4.

Finally, if you are feeling especially adventurous, are running kernel 3.2 or later, and have e2fsprogs 1.42 or later, you can use ext4 clustering. Instead of having to work with your data in only four kilobyte chunks, the system can now work with your data in chunks as large as one megabyte. So far I have tried 32 kilobyte clustering with mixed results; sometimes things load faster, sometimes things load slower. There hasn't really been any noticeable difference in latency. I would like to point out a note of caution: using larger clusters will cause small files to take significantly more disk space, especially extracted source trees. Anyway, to use clustering, simply add a -C <cluster size> and add "bigalloc" to the features given with -O. You may also have to increase the value given with -i (bytes per inode) to slightly larger than the cluster size.
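Putting those pieces together, a 32 kilobyte cluster filesystem like the one described above might be created as follows. The device name and the exact -i value are illustrative assumptions; the article only says the bytes-per-inode ratio may need to be slightly larger than the cluster size.

```shell
# Example: ext4 with bigalloc and 32 KB clusters
# (requires kernel >= 3.2 and e2fsprogs >= 1.42).
# -C is the cluster size in bytes; -i is bytes per inode,
# bumped above the cluster size here as the article suggests.
mkfs.ext4 -C 32768 -O bigalloc -i 65536 /dev/sda5
```

Note that at the time of the post bigalloc was still considered experimental, so it is best tried on a non-critical partition first.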

I hope this information about improving disk performance on the desktop was helpful. Please feel free to ask questions or leave comments.

2 comments:

VERY interesting article. I agree that CFQ < deadline < BFQ. As for the filesystems, I've been using reiserfs for years and also tried and benchmarked ext2, ext3, ext4, jfs, xfs, btrfs... since reiser 4 development has stalled, ext4 is definitely the best choice out there (unless you use SSD disks, i haven't tried yet).

After using the 32k clusters ext4 for my root filesystem for about two months, I really haven't noticed much of a difference in performance or responsiveness versus a non-clustered 4k blocks ext4 filesystem. However, with all the small files on the root filesystem, there is a fair amount of space wasted. Therefore, I only recommend using the clustering option for filesystems with large files. Also with Linux 3.2 and 3.3-rc6, I have noticed that btrfs is really starting to perform quite decently. I've now been using btrfs for my applications/games partition with forced zlib compression and it has been working quite well. It's still a little sluggish at times, but not at all unbearable. I've been running World of Warcraft off of it. With previous kernel versions, it was unplayable at times. Now it still has higher than expected disk activity but is 100 percent playable with no excessive latency. This is with the inode cache, space cache, and auto defrag options enabled.