Introduction

Warning: Be sure to review the RAID article and be aware of all applicable warnings, particularly if you select RAID5.

Although RAID and LVM may seem like analogous technologies, each offers unique features. This article uses an example with three similar 1TB SATA hard drives and assumes that the drives are accessible as /dev/sda, /dev/sdb, and /dev/sdc. If you are using IDE drives, make sure that each drive is a master on its own separate channel for maximum performance.

Tip: It is good practice to ensure that only the drives involved in the installation are attached while performing the installation.

LVM Logical Volumes: /, /var, /swap, /home
LVM Volume Groups: /dev/VolGroupArray
RAID Arrays: /dev/md0, /dev/md1
Physical Partitions: /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sda2, /dev/sdb2, /dev/sdc2
Hard Drives: /dev/sda, /dev/sdb, /dev/sdc

Swap space

Note: If you want extra performance, just let the kernel use distinct swap partitions as it does striping by default.

Many tutorials treat the swap space differently, either by creating a separate RAID1 array or a LVM logical volume. Creating the swap space on a separate array is not intended to provide additional redundancy, but instead, to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory.
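For instance, if you do choose distinct swap partitions rather than a RAID array, giving them equal priority in /etc/fstab lets the kernel stripe across them. A sketch only; the partition numbers are assumptions and must match your own layout:

/dev/sda2 none swap defaults,pri=1 0 0
/dev/sdb2 none swap defaults,pri=1 0 0
/dev/sdc2 none swap defaults,pri=1 0 0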

MBR vs. GPT

The widespread Master Boot Record (MBR) partitioning scheme, dating from the early 1980s, imposes limitations that affect the use of modern hardware. The GUID Partition Table (GPT) is a newer standard for the layout of the partition table, defined as part of the UEFI specification that originated with Intel. Although GPT provides a significant improvement over MBR, it does require the additional step of creating a small partition at the beginning of each disk for GRUB2 (see: GPT specific instructions).

GRUB2 supports the default metadata version currently created by mdadm (i.e. 1.2) when combined with an initramfs, which in Arch Linux has replaced the traditional initrd and is generated by mkinitcpio. SYSLINUX only supports version 1.0, and therefore requires the --metadata=1.0 option.

Some boot loaders (e.g. GRUB Legacy, LILO) do not support any of the 1.x metadata versions and instead require the older 0.90 version. If you would like to use one of those boot loaders, make sure to add the option --metadata=0.90 when creating the /boot array during the RAID installation.

Each hard drive will have a 100MB /boot partition, a 2048MB /swap partition, and a / partition that takes up the remainder of the disk.

The boot partition must be RAID1, because GRUB does not have RAID drivers, and any other level will prevent your system from booting. Since each RAID1 member holds a complete copy of the data, if there is a problem with one boot partition, the boot loader can still boot normally from one of the other two partitions in the /boot array. Finally, the partition you boot from must not be striped (e.g. RAID5, RAID0).

Install gdisk

Since much disk partitioning software does not support GPT (e.g. fdisk, sfdisk), you will need to install gptfdisk to set the partition type of the boot loader partitions.
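For example, gptfdisk can be installed and the example layout created with sgdisk. This is a sketch only: the sizes follow the example above, the fd00 type code marks a partition as Linux RAID, and any BIOS boot partition that GRUB2 needs on GPT is assumed to be handled separately per the GPT specific instructions.

# pacman -S gptfdisk
# sgdisk --new=1:0:+100M --typecode=1:fd00 /dev/sda
# sgdisk --new=2:0:+2048M --typecode=2:fd00 /dev/sda
# sgdisk --new=3:0:0 --typecode=3:fd00 /dev/sda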

Repeat this process for /dev/sdb and /dev/sdc, or use the alternative sgdisk method below. You may need to reboot to allow the kernel to recognize the new partition tables.

Note: Make sure to create exactly the same partitions on each disk. If partitions of different sizes are assembled into a RAID array, it will still work, but the usable capacity will be based on the smallest partition, leaving the extra space on the larger partitions wasted.

Clone partitions with sgdisk

If you are using GPT, then you can use sgdisk to clone the partition table from /dev/sda to the other two hard drives:
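A sketch of one way to do this, where -R replicates /dev/sda's partition table onto the given target disk and -G then randomizes the disk and partition GUIDs so the clones do not share identifiers with the original:

# sgdisk -R=/dev/sdb /dev/sda
# sgdisk -G /dev/sdb
# sgdisk -R=/dev/sdc /dev/sda
# sgdisk -G /dev/sdc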

RAID installation

After creating the physical partitions, you are ready to set up the /boot, /swap, and / arrays with mdadm, an advanced tool for RAID management that will also be used to create /etc/mdadm.conf within the installation environment.

Create the / array at /dev/md0:

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]3

Create the /swap array at /dev/md1:

# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]2

Note: If you plan on installing a boot loader that does not support the 1.x RAID metadata versions, make sure to add the --metadata=0.90 option to the following command.
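Create the /boot array. The array name (/dev/md2) and member partitions below are assumptions that follow the layout described above:

# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sd[abc]1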

Synchronization

Tip: If you want to avoid the initial resync with new hard drives, add the --assume-clean flag.

After you create a RAID volume, it will synchronize the contents of the physical partitions within the array. You can monitor the progress by refreshing the output of /proc/mdstat ten times per second with:
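# watch -n .1 cat /proc/mdstat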

Once synchronization is complete the State line should read clean. Each device in the table at the bottom of the output should read spare or active sync in the State column. active sync means each device is actively in the array.
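The State line and device table mentioned above are part of the detailed array view, which can be printed with, for example:

# mdadm --detail /dev/md0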

Note: Since the RAID synchronization is transparent to the file-system you can proceed with the installation and reboot your computer when necessary.

LVM installation

This section will convert the RAID arrays into physical volumes (PVs), combine those PVs into a volume group (VG), and then divide the VG into logical volumes (LVs) that will act like physical partitions (e.g. /, /var, /home). If you did not understand that, make sure you read the LVM Introduction section.

Create physical volumes

Make the RAIDs accessible to LVM by converting them into physical volumes (PVs):

# pvcreate /dev/md0

Note: This might fail if you are creating PVs on devices that are already part of an existing volume group. If so, you might want to add the -ff option.

Confirm that LVM has added the PVs with:

# pvdisplay

Create the volume group

The next step is to create a volume group (VG) on the PVs.

Create a volume group (VG) with the first PV:

# vgcreate VolGroupArray /dev/md0

Confirm that LVM has added the VG with:

# vgdisplay

Create logical volumes

Now we need to create logical volumes (LVs) on the VG, much like we would normally prepare a hard drive. In this example we will create separate /, /var, /swap, /home LVs. The LVs will be accessible as /dev/mapper/VolGroupArray-<lvname> or /dev/VolGroupArray/<lvname>.

Create a / LV:

# lvcreate -L 20G VolGroupArray -n lvroot

Create a /var LV:

# lvcreate -L 15G VolGroupArray -n lvvar

Note: If you would like to keep the swap space inside LVM, create a /swap LV with the -C y option, which allocates a contiguous volume so that the swap space is not split across multiple disks or non-contiguous physical extents:

# lvcreate -C y -L 2G VolGroupArray -n lvswap

Create a /home LV that takes up the remainder of space in the VG:

# lvcreate -l +100%FREE VolGroupArray -n lvhome

Confirm that LVM has created the LVs with:

# lvdisplay

Tip: You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.
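If a logical volume does fill up later, it can be extended and its file system grown afterwards. A minimal sketch, assuming lvhome carries an ext4 file system:

# lvextend -L +10G VolGroupArray/lvhome
# resize2fs /dev/VolGroupArray/lvhome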

Update RAID configuration

Since the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file only contains comments explaining how to fill it in, and mdadm can generate the required entries automatically, so simply overwrite it with the current setup:

# mdadm --examine --scan > /etc/mdadm.conf

Note: Read the note in the Update configuration file section about ensuring that you write to the correct mdadm.conf file from within the installer.

Prepare hard drive

Follow the directions outlined in the Installation section until you reach the Prepare Hard Drive section. Skip the first two steps and navigate to the Manually Configure block devices, filesystems and mountpoints page. Remember to configure only the LVs (e.g. /dev/mapper/VolGroupArray-lvhome) and not the actual disks or partitions (e.g. /dev/sda1).

/etc/mkinitcpio.conf

Add the mdadm and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf after udev.
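For example, the HOOKS line might then look like this. A sketch only; the exact set and order of the other hooks depend on your system, the important part being that mdadm and lvm2 appear after udev and before filesystems:

HOOKS="base udev mdadm lvm2 autodetect block filesystems keyboard fsck"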

Conclusion

Once the installation is complete, you can safely reboot your machine:

# reboot

Install Grub on the Alternate Boot Drives

Once you have successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:
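A minimal sketch, assuming GRUB2 with BIOS booting; adjust the device names to your own disks:

# grub-install /dev/sdb
# grub-install /dev/sdc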

Archive your Filesystem Partition Scheme

Now that you are done, it is worth taking a moment to archive the partition layout of each of your drives. This makes it much easier to replace or rebuild a disk in the event that one fails. You do this with the sfdisk tool and the following steps:
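A sketch of one way to do this with sfdisk; the output directory and file names here are arbitrary choices:

# mkdir /etc/partitions
# sfdisk --dump /dev/sda > /etc/partitions/sda.dump
# sfdisk --dump /dev/sdb > /etc/partitions/sdb.dump
# sfdisk --dump /dev/sdc > /etc/partitions/sdc.dump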