Monday, December 9, 2013

I had two 1 TB drives in my Synology DS212j in RAID1 (or rather Synology Hybrid RAID), and I ran out of space.

I went and bought a single new 4 TB drive, deliberately waiting to buy the second one later from a different production batch (lower probability of both failing at the same time).

I thought it would be trivial to replace the two drives with a degraded RAID of one larger drive and increase the available space. Unfortunately, it was not.

At first, I replaced one of the drives and powered on in a degraded state. I used the Synology GUI to repair the RAID and thus sync all data to the new 4 TB drive. That worked and took some hours.

Then I removed the second 1 TB drive, keeping the RAID degraded with only the new, larger drive. I powered on my NAS, but it didn't offer to expand the space. In fact, all the options in Storage Manager were greyed out. Bad luck.

So I went the command-line way.

Now some theory: Synology Hybrid RAID (SHR) actually uses LVM (Linux logical volume management) on top of RAID1 (managed by mdadm). Unfortunately, this additional layer of LVM between the RAID and the filesystem complicated things for me.

So, in order to extend my volume, I needed to:

1. Resize the physical partition on the drive to fill all available space (eg, /dev/sda3)
2. Grow the mdadm RAID array to use the whole partition
3. Resize the LVM volume and the filesystem on top of it

1. Resizing of physical partitions is possible with parted. Note that fdisk cannot handle drives as big as 4 TB. However, newer versions of parted have the resize command removed (bastards), so you actually need to delete the partition and recreate it in its place. Scary, but it works.
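The 4 TB limit is easy to see from the arithmetic: an MBR partition table (which classic fdisk works with) stores sector offsets and counts as 32-bit values, and with 512-byte sectors that tops out at 2 TiB, so a 4 TB drive needs a GPT table, which parted handles:

```shell
# MBR stores sector counts as 32-bit values; with 512-byte sectors
# the largest addressable size is 2^32 * 512 bytes = 2 TiB,
# well short of a 4 TB drive.
echo $((2**32 * 512))
```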

For that, start parted /dev/sda, then issue these commands:

unit s - makes parted use units of sectors instead of MB/GB/etc. This is crucial for being exact when recreating your partition.

print free - lists the current partition table along with the free space at the end

rm 3 - deletes the 3rd partition (check that it is the correct number; mine was 5 for some odd reason)

mkpart ext4 - creates a new partition in its place. Make sure to specify the same start sector as was printed, and the last sector of the free space, so you use the whole disk. If it complains about alignment, press Ignore - it will still be minimally aligned.
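Put together, the session looks roughly like this. This is only a sketch: the partition number and sector values below are made-up examples, so use exactly the numbers that print free shows on your drive:

```shell
parted /dev/sda
(parted) unit s
(parted) print free                      # note the start sector of the data partition
                                         # and the last sector of the trailing free space
(parted) rm 3                            # your data partition may have a different number
(parted) mkpart ext4 9437184 7814037134  # example sectors: same start, end of free space
(parted) quit
```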

Now reboot - I didn't find a working way to force the Synology kernel to reread the partition table. Even though my data partition was /dev/sda5 and became /dev/sda3 after recreation, Linux RAID was still able to detect it (probably by UUID) after reboot and assemble the array correctly.

2. After the reboot you should see lots of space in /proc/partitions, but mdadm -D /dev/md2 will say you are still using only a fraction of it.
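You can see the discrepancy for yourself (assuming, as in my setup, that the data partition is sda3 and the array is md2 - adjust the names for yours):

```shell
grep sda3 /proc/partitions           # size of the partition, in 1 KiB blocks
mdadm -D /dev/md2 | grep 'Dev Size'  # how much of each device the array actually uses
```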

Run mdadm --grow /dev/md2 --size max - this should do the trick. But it didn't work for me; in fact, that was the trickiest part to figure out. If it actually increased the size of your RAID array, proceed to step 3; if it kept the old size, read on.

Now you need to reassemble the RAID array, asking it to update the device size. Unfortunately, you have to unmount the filesystem and deactivate LVM for this to work.
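As a sketch, the reassembly goes like this. The mount point, volume group name, and device names here are from my setup and are assumptions for yours (check with mount, vgdisplay, and cat /proc/mdstat); the key part is mdadm's --update=devicesize option, which makes assembly recompute the usable size of each member device:

```shell
umount /volume1        # mount point on my Synology; check yours with mount
vgchange -a n vg1      # deactivate the LVM volume group (name is an assumption)
mdadm --stop /dev/md2
mdadm --assemble /dev/md2 --update=devicesize /dev/sda3
```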