Sunday, June 23, 2013

5 Tips To Speed Up Linux Software Raid Rebuilding And Re-syncing

It is no secret that I am a pretty big fan of the excellent Linux Software RAID. Creating, assembling and rebuilding a small array is fine. But things start to get nasty when you try to rebuild or re-sync a large array. You may get frustrated when you see it is going to take 22 hours to rebuild the array. You can always increase the speed of Linux Software RAID 0/1/5/6 reconstruction using the following five tips.

Recently, I built a small NAS server running Linux for one of my clients, with 5 x 2TB disks in a RAID 6 configuration, as an all-in-one backup server for Mac OS X and Windows XP/Vista client computers. Next, I ran cat /proc/mdstat, and it reported that md0 was active and recovery was in progress. The recovery speed was around 4000K/sec and would complete in approximately 22 hours. I wanted to finish earlier.

Tip #1: speed_limit_min and speed_limit_max settings

The /proc/sys/dev/raid/speed_limit_min config file reflects the current "goal" rebuild speed for times when non-rebuild activity is current on an array. The speed is in Kibibytes per second (1 kibibyte = 2^10 bytes = 1024 bytes), and is a per-device rate, not a per-array rate. The default is 1000.

The /proc/sys/dev/raid/speed_limit_max config file reflects the current "goal" rebuild speed for times when no non-rebuild activity is current on an array. The default is 100,000.

To see the current limits, enter:

# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max

NOTE:
The following hacks are used for recovering a Linux software RAID and increasing the speed of RAID rebuilds. These options are good for tweaking the rebuild process, but they may increase overall system load, CPU and memory usage.

To increase speed, enter:

echo value > /proc/sys/dev/raid/speed_limit_min

OR

sysctl -w dev.raid.speed_limit_min=value

In this example, set it to 50000 K/Sec, enter:

# echo 50000 > /proc/sys/dev/raid/speed_limit_min

OR

# sysctl -w dev.raid.speed_limit_min=50000

If you want to make the change permanent, you could add these two lines to /etc/sysctl.conf (the values here are examples; adjust them for your hardware):

dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 200000
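After raising the limits, it is handy to keep an eye on the rebuild from another terminal. A minimal sketch (requires a running md array, so the device names are only examples):

```shell
# Refresh rebuild progress, current speed and ETA every 2 seconds
watch -n 2 cat /proc/mdstat

# Check which limits are currently in effect
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
```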

Tip #2: Set read-ahead option
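The body of this tip is missing from this copy of the article. A commonly used approach is to raise the read-ahead value on the md device with blockdev; the device name and value below are examples, not a recommendation from the original text:

```shell
# Show current read-ahead (in 512-byte sectors) for the array
blockdev --getra /dev/md0

# Set read-ahead to 32 MiB (65536 x 512-byte sectors)
blockdev --setra 65536 /dev/md0
```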

Tip #3: Set stripe-cache_size for RAID5 or RAID 6

This option is only available for RAID5 and RAID6 and can boost sync performance by 3-6 times. It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256. Valid values are 17 to 32768. Increasing this number can increase performance in some situations, at some cost in system memory. Note: setting this value too high can result in an "out of memory" condition for the system. Use the following formula:

memory_consumed = system_page_size * nr_disks * stripe_cache_size
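For instance, assuming a 4 KiB page size and the 5-disk array from the introduction, a stripe_cache_size of 16384 pages would consume:

```shell
# memory_consumed = system_page_size * nr_disks * stripe_cache_size
# 4096 bytes * 5 disks * 16384 pages, converted to MiB
echo $(( 4096 * 5 * 16384 / 1024 / 1024 ))
```

That works out to 320 MiB of RAM dedicated to the stripe cache, which is why oversizing it on a small-memory box is risky.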

To set stripe_cache_size to 16384 pages for /dev/md0, type:

# echo 16384 > /sys/block/md0/md/stripe_cache_size

To set stripe_cache_size to 32768 pages for /dev/md3, type:

# echo 32768 > /sys/block/md3/md/stripe_cache_size

Tip #4: Disable NCQ on all disks
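The body of this tip is also missing from this copy. The usual way to disable NCQ on SATA disks is to set the queue depth to 1 on each member disk; the device names below are examples for a 5-disk array:

```shell
# A queue_depth of 1 effectively disables NCQ on SATA disks
for disk in sda sdb sdc sdd sde; do
    echo 1 > /sys/block/$disk/device/queue_depth
done
```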

Tip #5: Bitmap Option

Bitmaps optimize rebuild time after a crash, or after removing and re-adding a device. Turn it on by typing the following command:

# mdadm --grow --bitmap=internal /dev/md0

Once the array is rebuilt or fully synced, disable bitmaps:

# mdadm --grow --bitmap=none /dev/md0
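To confirm the bitmap is active before the rebuild, you can inspect the array (a minimal sketch; /dev/md0 is the example device from above, so this requires a live array):

```shell
# Arrays with an internal bitmap show a "bitmap:" line in /proc/mdstat
grep -A 3 md0 /proc/mdstat

# mdadm also reports it as "Intent Bitmap : Internal" in the details
mdadm --detail /dev/md0 | grep -i bitmap
```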