As you know, RAID 5 can tolerate a single drive failure. If a second drive dies before the first one has been replaced and the array rebuilt, you lose the entire contents of the array.

In the article, the author argues that because drives keep getting bigger but not more reliable, the risk of losing a second drive during a rebuild has become so high that running RAID 5 is no longer safe.

You don't even need a second drive failure to lose data. A bad sector, also known as an Unrecoverable Read Error (URE), can also cause problems during a rebuild. Depending on the RAID implementation, you may lose some files or the entire array.

The author calculates that with modern high-capacity drives, the risk of hitting such a bad sector or URE during a rebuild is so high that it is almost unavoidable.

Most drives have a URE specification of 1 bit error per 10^14 bits read, or roughly one error per 12.5 TB of data. That number is often treated as an absolute, as if drives actually experience an error every 12.5 TB in daily use, but that's not true.

It's a worst-case number. A drive should produce at most one unrecoverable read error per 10^14 bits read, but in practice drives are way more reliable.
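To put a number on that worst case, here is my own back-of-the-envelope sketch, assuming (pessimistically) that UREs occur independently at exactly 10^-14 per bit read and that a rebuild has to read 12 TB; the 12 TB figure is an assumption for illustration, not from the article:

```shell
# If the 10^-14 spec were a literal error rate: the probability of
# reading 12 TB (a hypothetical rebuild) without a single URE.
awk 'BEGIN {
    bits = 12e12 * 8                    # 12 TB expressed in bits
    p_ok = exp(bits * log(1 - 1e-14))   # (1 - 1e-14)^bits
    printf "P(rebuild completes without a URE) = %.2f\n", p_ok
}'
```

Taken literally, the spec would give such a rebuild only about a 38% chance of completing cleanly, which is exactly the kind of math the article's argument rests on.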

If that worst-case number reflected reality, I would have caught some data errors by now. Yet ZFS hasn't had to correct a single byte since my system came online a few years ago.

And I've performed so many scrubs that my system has read over a petabyte of data. No silent data corruption, no regular bad sectors.
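For perspective, here is what the spec sheet would predict for that petabyte of scrub reads; this is my own arithmetic, treating the 10^-14 figure as an actual rate rather than a ceiling:

```shell
# Expected number of UREs in one petabyte of reads, if the
# 10^-14-per-bit spec were a real error rate rather than a worst case.
awk 'BEGIN {
    bits = 1e15 * 8   # 1 PB expressed in bits
    printf "expected UREs per PB read: %.0f\n", bits * 1e-14
}'
```

Roughly 80 unrecoverable read errors would have been expected over that petabyte; observing zero is a strong hint that the spec is a floor on reliability, not a description of it.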

It seems to me that these risks aren't nearly as high as they are made out to be.

I would argue that choosing RAID-5/Z in the right circumstances is reasonable.
RAID-6 is clearly safer than RAID-5 because it can survive the loss of two drives instead of one, but that doesn't make RAID-5 unsafe.

If you are going to run a RAID 5 array, make sure you run regular scrubs (also called patrol reads, depending on what your RAID solution names them). A scrub is nothing more than an attempt to read all data on the disks.

Scrubbing allows detection of bad sectors in advance, so you can replace drives before they cause real problems (like failing during a rebuild).
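On Linux software RAID, a scrub can be kicked off through sysfs. The device name md0 below is an assumption; substitute the name of your own array:

```shell
# Start a scrub ("check") on /dev/md0: md reads every sector of the
# array and counts inconsistencies in /sys/block/md0/md/mismatch_cnt,
# without rewriting any data.
echo check > /sys/block/md0/md/sync_action

# Follow the scrub's progress:
cat /proc/mdstat
```

Some distributions already ship a cron job that runs this periodically, so it may be worth checking whether yours does before adding your own.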

If you keep the number of drives in a RAID-5 array low, at most five or six, I think RAID-5 is an acceptable option for home users, who need to find a balance between cost and capacity.

When configuring a Linux RAID array, you need to choose a chunk size. But
what exactly is the chunk size?

When you write data to a RAID array that implements striping (level 0, 5, 6,
10 and so on), the data sent to the array is broken into pieces, each of
which is written to a different drive in the array. This is how striping
improves performance: the data is written to the drives in parallel.

The chunk size determines how large such a piece will be for a single drive.
For example: if you choose a chunk size of 64 KB, a 256 KB file occupies four
chunks. On a four-drive RAID 0 array, those four chunks are each written to a
separate drive, which is exactly what we want.

This also makes clear that choosing the wrong chunk size can hurt performance.
If the chunk size were 256 KB, the file would be written to a single drive,
so the RAID striping wouldn't provide any benefit, unless many such files
were written to the array, in which case the different drives would handle
different files.
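The mapping described above can be sketched with a bit of shell arithmetic. This is a simplified model of a plain striped (RAID 0) layout; real md layouts add details of their own:

```shell
# Which drive receives each 64 KB piece of a 256 KB file on a
# 4-drive striped array, for two different chunk sizes.
drives=4
for chunk_kb in 64 256; do
    echo "chunk size ${chunk_kb}K:"
    for offset_kb in 0 64 128 192; do
        # Chunk index = offset / chunk size; drives are used round-robin.
        drive=$(( (offset_kb / chunk_kb) % drives ))
        echo "  offset ${offset_kb}K -> drive $drive"
    done
done
```

With 64 KB chunks the file is spread across all four drives; with 256 KB chunks every offset maps to drive 0, so the whole file lands on a single disk.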

In this article, I will provide some benchmarks that focus on sequential read
and write performance. These benchmarks won't tell you much if the array must
sustain a random I/O workload and needs high random IOPS.

Test setup

All benchmarks are performed with a consumer grade system consisting of these
parts:

Processor: AMD Athlon X2 BE-2300, running at 1.9 GHz.

RAM: 2 GB

Disks: SAMSUNG HD501LJ (500GB, 7200 RPM)

SATA controller: Highpoint RocketRaid 2320 (non-raid mode)

Tests are performed with an array of 4 and an array of 6 drives.

All drives are attached to the Highpoint controller. The controller is not
used for RAID, only to supply sufficient SATA ports. Linux software RAID with
mdadm is used.

A single drive provides a read speed of 85 MB/s and a write speed of 88 MB/s.

The RAID levels 0, 5, 6 and 10 are tested.

Chunk sizes from 4K up to 1024K are tested.

XFS is used as the test file system.

Data is read from/written to a 10 GB file.

The theoretical max throughput of a 4-drive array is 340 MB/s. A 6-drive
array should be able to sustain 510 MB/s.
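The arithmetic behind those ceilings, plus the lower write ceiling that parity imposes on RAID 5 (one drive's worth of bandwidth goes to parity), is easy to reproduce. The RAID 5 write figures are my own addition, assuming 85 MB/s per drive:

```shell
# Theoretical sequential ceilings at 85 MB/s per drive.
awk 'BEGIN {
    s = 85
    for (n = 4; n <= 6; n += 2)
        printf "%d drives: striped max = %d MB/s, RAID 5 write max = %d MB/s\n",
               n, n * s, (n - 1) * s
}'
```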

About the data:

All tests have been performed by a Bash shell script that collected all the
data; there was no human intervention during data acquisition.

All values are the average of five runs: for every combination of RAID level
and chunk size, five tests are performed and averaged. After each run, the
RAID array is destroyed, re-created and re-formatted.

Data transfer speed is measured using the 'dd' utility with the option
bs=1M.
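A single iteration of such a run might look roughly like the sketch below. This is not the author's actual script; the device names, array parameters and mount point are all assumptions:

```shell
# One benchmark iteration: build the array, format it, measure, tear down.
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 /dev/sd[b-e]
mkfs.xfs -f /dev/md0
mount /dev/md0 /mnt/test

# Sequential write of a 10 GB file, flushed to disk before dd reports.
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240 conv=fsync

# Drop the page cache so the read test hits the disks, not RAM.
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/bigfile of=/dev/null bs=1M

umount /mnt/test
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd[b-e]
```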

Test results

Results of the tests performed with four drives:

Test results with six drives:

Analysis and conclusion

Based on the test results, several observations can be made. The first one is
that RAID levels with parity, such as RAID 5 and 6, seem to favor a smaller
chunk size of 64 KB.

The RAID levels that only perform striping, such as RAID 0 and 10, prefer a
larger chunk size, with an optimum of 256 KB or even 512 KB.

It is also noteworthy that RAID 5 and RAID 6 performance don't differ much.

Furthermore, the theoretical transfer rates that should be achievable based on
the performance of a single drive are not met. The cause is unknown to me,
but overhead and the relatively weak CPU may play a part in this, and the
XFS file system may also be a factor. Overall, software RAID does not seem to
scale well on this system. Since my big storage monster is able to perform
far better, I suspect that it is a hardware issue.