
Linux 3.16 File-System Tests On A Hard Drive

Phoronix: Linux 3.16 File-System Tests On A Hard Drive

Complementing the earlier Linux 3.16 file-system tests on an SSD (and the later Btrfs testing), here are EXT4, XFS, and Btrfs benchmarks comparing the Linux 3.15 and 3.16 kernels on a traditional rotating hard drive.

Elevator

Did you use the elevator=deadline kernel parameter with XFS, as recommended? The default CFQ elevator does not work well with XFS. Setting elevator=deadline or even elevator=noop should provide better performance.
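For reference, a minimal sketch of the two usual ways to select the deadline scheduler — per-device at runtime via sysfs, or globally at boot via the kernel command line. The device name sda and the GRUB-based setup are assumptions for illustration:

```shell
# Runtime, per device (legacy single-queue block layer):
# the bracketed entry is the scheduler currently in use.
cat /sys/block/sda/queue/scheduler
echo deadline | sudo tee /sys/block/sda/queue/scheduler

# Boot-time, for all devices: add elevator=deadline to the kernel
# command line, e.g. in /etc/default/grub on an Ubuntu-style system:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"
# then regenerate the boot configuration:
sudo update-grub
```

The runtime change takes effect immediately but does not survive a reboot; the boot parameter is persistent but applies to every block device.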

Btrfs seems to be neck and neck with Ext4, actually beating it in the most realistic benchmarks (kernel compilation and dbench). This seems to be a pattern on hard drives, while Btrfs fares worse on SSDs. I'd rather have this than the other way around, because SSDs are here to stay and much more optimization effort will go into SSDs in the future. Now if only Btrfs can get ahead in the other, more synthetic, benchmarks...

I would love to see memory-stress figures in these tests.
I bet btrfs would top the charts in memory usage, with XFS and ext4 somewhere below.
Don't get me wrong: as soon as btrfs is near stable, I will absolutely use it. But my experience is that it needs a lot of memory.
And btrfs check (btrfsck) needs a lot of tuning too. As soon as the metadata exceeds the available memory, you may as well throw the filesystem away and start over.
As a test I let btrfs check run for 6 months. I am not sure whether it ran out of memory after those 6 months or complained about something else.
The metadata was 250GB and the system had 12GB of RAM. It needed 6 months to walk through the metadata, only to stop without ever reaching a point where it could fix anything.
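For anyone wanting to try this themselves, a minimal sketch of an offline btrfs check — the device name /dev/sdb1 is an example:

```shell
# btrfs check operates on an unmounted filesystem, so unmount first.
sudo umount /dev/sdb1

# Default mode is read-only: it reports problems without writing to disk.
sudo btrfs check /dev/sdb1

# Only after reviewing the report, attempt repairs; --repair can make
# things worse on a badly damaged filesystem, so back up first.
sudo btrfs check --repair /dev/sdb1
```

Running the read-only pass first is the safe way to gauge whether the tool can even traverse the metadata within your memory budget.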
Anyway: the biggest problem with btrfs is that it is/was easy (at least up to 3.15) to corrupt the filesystem with a dual-threaded persistent write load.

With Ubuntu 14.10 and Linux 3.16, btrfs seems to have a CPU-load-versus-I/O fairness problem.

On an 8-core system running close to 100% load on all cores with heavy I/O (a Yocto compilation), X stalls completely (no mouse, no screen updates) for up to 30 seconds. This never happens with ext4 on the same setup.

I wonder if the problem is more Ubuntu-centric. I have a Core i3 Sandy Bridge laptop (USB 2.0) and a Core i5 Ivy Bridge machine (USB 3.0), both running Xubuntu 14.10. I am facing the same CPU-hogging issue when writing data to an NTFS filesystem. I tried changing kernels, upgrading to a mainline one (3.16.1), but the problem is still there: X stalls and the like. Right now I am on Debian Sid running the same kernel (from the same mainline PPA — I just dpkg -i the same files as are in use in Xubuntu) and I don't see the problem occurring here :/ (Running 3.16.1 on CentOS 7, the problem doesn't occur there either.)
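The cross-distro install described above (the same mainline kernel .debs on both Xubuntu and Debian Sid) is roughly the following; it assumes the image and headers .deb files from the Ubuntu mainline kernel PPA have already been downloaded into the current directory:

```shell
# Install the matching headers and image packages together,
# then refresh the bootloader entries and reboot into the new kernel.
sudo dpkg -i linux-headers-*.deb linux-image-*.deb
sudo update-grub
```

Since these are plain Debian packages, dpkg installs them the same way on any Debian-derived system, which is what makes this kind of apples-to-apples kernel comparison across distributions possible.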