
Large HDD/SSD Linux 2.6.38 File-System Comparison

03-09-2011, 11:00 AM

Phoronix: Large HDD/SSD Linux 2.6.38 File-System Comparison

Here are the results from our largest Linux file-system comparison to date. Using the soon-to-be-released Linux 2.6.38 kernel, on a SATA hard drive and solid-state drive, we benchmarked seven file-systems on each drive with the latest kernel code as of this past weekend. The tested file-systems include EXT3, EXT4, Btrfs, XFS, JFS, ReiserFS, and NILFS2.

Comment

I'm not sure if the inclusion of a plain HDD was per my request, but either way, thanks. The results were similar on some tests but very different on others, so I think it's a good idea to keep doing it.

(I would assume the ones where the differences were greatest are the most seek-heavy workloads, which also seem like the ones it's most important to optimize in the mechanical HDD case.)

Comment

I get confused about all these different benchmarks. Is there a description of all the profiles somewhere? The names of the benchmarks are certainly not very descriptive. For example, for me the main factors in IO performance are:

1) Random write/read of small files (this accounts for ~90% of a desktop user's operations).
2) Sequential write/read speed (you are copying/moving files)
3) Parallel sequential write/read speed (you are copying two big files at once -- ideally the combined speed should be the same as (2), but in reality it is often much lower)

Which benchmarks do I need to peruse to find out how the filesystems do in the described workloads?
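For what it's worth, the three workloads above can be sketched with a tiny Python timer. This is only a rough illustration of what each workload measures, not a replacement for the actual benchmark suites; the function names and sizes here are made up for the example, and real runs would need much larger data sets to defeat the page cache.

```python
import os
import tempfile
import time

def sequential_write(path, size_mb=16, block=1 << 20):
    """Workload (2): write size_mb of data in large blocks, return MB/s."""
    buf = b"\0" * block
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force the data to the device before stopping the clock
    return size_mb / (time.perf_counter() - start)

def small_file_writes(directory, count=200, size=4096):
    """Workload (1): create many small files, fsync each, return files/s."""
    buf = b"\0" * size
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"f{i}"), "wb") as f:
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())
    return count / (time.perf_counter() - start)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(f"sequential:  {sequential_write(os.path.join(d, 'big')):.1f} MB/s")
        print(f"small files: {small_file_writes(d):.1f} files/s")
```

Workload (3) would just be two `sequential_write` calls run in parallel (e.g. via `multiprocessing`) and the combined throughput compared against the single-stream number.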

Comment

These default settings are completely idiotic: they put the user's data at risk. But that doesn't seem to count. For the ext3 developers, it's more important to post good numbers when somebody runs a standard benchmark without leveling the field via mount options.
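To make the "leveling the field" point concrete, here is roughly what the mount options in question look like. The device path and mount point are placeholders; this is an illustration of the safety/speed trade-off, not the exact configuration Phoronix used.

```shell
# ext3 of this era ships with write barriers OFF by default, which
# flatters its benchmark numbers; barrier=1 turns them on.
mount -t ext3 -o barrier=1,data=ordered /dev/sdb1 /mnt/test

# ext4, by contrast, enables barriers by default; disabling them
# (nobarrier) is how you would "level the field" in the other direction.
mount -t ext4 -o nobarrier /dev/sdb1 /mnt/test
```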