* [http://www.gentoo.org/doc/en/articles/software-raid-p1.xml Software RAID in the new Linux 2.4 kernel, Part 1] and [http://www.gentoo.org/doc/en/articles/software-raid-p2.xml Part 2] in the [http://www.gentoo.org/doc/en/index.xml Gentoo Linux Docs]

Background

Although RAID and LVM may seem like analogous technologies, they each offer unique features.

RAID

Redundant Array of Independent Disks (RAID) is designed to prevent data loss in the event of a hard disk failure. There are different levels of RAID. RAID 0 (striping) is not really RAID at all, because it provides no redundancy. It does, however, provide a speed benefit. This example will utilize RAID 0 for swap, on the assumption that a desktop system is being used, where the speed increase is worth the possibility of system crash if one of your drives fails. On a server, a RAID 1 or RAID 5 array is more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example will be using RAID 1 for everything except swap. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

RAID 5 requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.

Redundancy

Warning: Installing a system with RAID is a complex process that may destroy data. Be sure to backup all data before proceeding.

RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen, or if you have multiple hard drive failures, RAID will not protect your data. Therefore it is important to make backups. Whether you use tape drives, DVDs, CD-ROMs or another computer, keep a current copy of your data out of your computer (and preferably offsite). Get into the habit of making regular backups. You can also divide the data on your computer into current and archived directories. Then back up the current data frequently, and the archived data occasionally.

LVM

LVM (Logical Volume Management) makes use of the device-mapper feature of the Linux kernel to provide a system of partitions that is independent of the underlying disks' layout. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add/remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table), and without having to move other partitions out of the way.

This is strictly an ease-of-management issue: it does not provide any additional security. However, it sits nicely with the other two technologies we are using.

Note that LVM is not used for the boot partition, because of the bootloader problem.

Introduction

This article provides an example of how to install Arch Linux with software RAID and LVM support. Not every configuration and setting is covered; instead, this article should provide a basic framework for your installation.

This example uses a computer with three similar IDE hard drives that are at least 80GB in size, installed as primary master, primary slave, and secondary master. A CD-ROM drive is installed as the secondary slave. The article refers to the three drives as /dev/sda, /dev/sdb, and /dev/sdc, and to the CD-ROM drive as /dev/sr0; adjust these names to match your system.

Note: It is also good practice to ensure that only the drives involved in the installation are attached while performing the installation.

We will create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it is so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it is for redundancy, so that your machine will not lose its swap state even if 1 or 2 drives fail.

Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.

Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of two of these physical partitions, with the equivalent of the third consumed by parity information (which is distributed across all member disks). A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.

Make sure to create exactly the same partitions on each disk. If partitions of different sizes are assembled into a RAID array, it will still work, but the redundant partition will only be a multiple of the size of the smallest partition, and the remaining space will go to waste. An example layout is sketched below.
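As a sketch (the device names, sizes and md numbering here are assumptions consistent with the rest of this example), each of the three drives would carry an identical set of partitions, all set to partition type fd (Linux RAID autodetect):

/dev/sda1  100MB   type fd  -> /boot  (RAID1, /dev/md1)
/dev/sda2  2048MB  type fd  -> swap   (RAID1, /dev/md2)
/dev/sda3  ~78GB   type fd  -> LVM    (RAID5, /dev/md0)

Repeat the same layout on /dev/sdb and /dev/sdc.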

Load the RAID Modules

Before using mdadm, you need to load the modules for the RAID levels you will be using. In this example, we are using levels 1 and 5, so we will load those. You can ignore any modprobe errors like "cannot insert md-mod.ko: File exists". Busybox's modprobe can be a little slow sometimes.

# modprobe raid1
# modprobe raid5

Create the RAID Redundant Partitions

Now that you have created all the physical partitions, you are ready to set up the three RAID arrays. The tool you use to create RAID arrays is {{codeline|mdadm}}.
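The exact commands depend on your partition layout. As a sketch, assuming the example layout above (boot partitions on sdX1, swap partitions on sdX2, and the LVM partitions on sdX3), the three arrays could be created like this:

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2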

If you want to use GRUB 0.97 (default in the Arch Linux 2010.05 release) on RAID 1, you need to specify an older version of metadata than the default. Add the option "--metadata=0.90" to the above command. Otherwise Grub will respond with "Filesystem type unknown, partition type 0xfd" and refuse to install. This may also be necessary with GRUB2.
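For example, the command for the array holding /boot from the sketch above would become:

# mdadm --create /dev/md1 --level=1 --raid-devices=3 --metadata=0.90 /dev/sda1 /dev/sdb1 /dev/sdc1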

At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drive lights will come on as they try to sync up. You can monitor the progress by typing:

# cat /proc/mdstat

You can also get particular information about, say, the root partition by typing:

# mdadm --misc --detail /dev/md0

You do not have to wait for synchronization to finish -- you may proceed with the installation while synchronization is still occurring. You can even reboot at the end of the installation with synchronization still going.

Setup LVM and Create the / (root) LVM Volume

This is where you create the LVM volumes. LVM works with abstraction layers; check out LVM and/or its documentation to learn more. In short, what you will be doing is:

Turn block devices (e.g. /dev/sda1 or /dev/md0) into Physical Volume(s) that can be used by LVM

Create a Volume Group consisting of Physical Volume(s)

Create Logical Volume(s) within the Volume Group

Note:
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. Otherwise you can skip this mounting of sysfs, unless you run into trouble. If you forget to do this, instead of giving you an intelligent error message, lvm will simply crash with a segmentation fault at various inconvenient times.

To mount the sysfs partition, do:

# mkdir /sys
# mount -t sysfs none /sys

Let us get started:

Make sure that the device-mapper module is loaded:

# modprobe dm-mod

Now what you need to do is tell LVM that you have a Physical Volume for it to use. It is really a virtual RAID volume (/dev/md0), but LVM does not know this, or really care. Do:

# pvcreate /dev/md0

This might fail if you are using RAID or creating a PV on an existing Volume Group. If so, you might want to add the -ff option.

LVM should report back that it has added the Physical Volume. You can confirm this with:

# pvdisplay

Now it is time to create a Volume Group (which I will call array) which has control over the LVM Physical Volume we created. Do:

# vgcreate array /dev/md0

LVM should report that it has created the Volume Group array. You can confirm this with:

# vgdisplay

Next, we create a Logical Volume called root in Volume Group array that fills all the free space left on the volume group:

# lvcreate -l +100%FREE array -n root

LVM should report that it created the Logical Volume root. You can confirm this with:

# lvdisplay

The LVM volume should now be available as /dev/mapper/array-root (or something similar; LVM will tell you the exact name when you issue the display command).

Activate existing RAID devices and LVM volumes

If you already have RAID partitions created on your system and you have also set up LVM, and all you want to do is enable them, follow this simple procedure. This might come in handy if you are switching distributions and do not want to lose data in /home, for example.

First you need to enable RAID support, RAID1 and RAID5 in this case.

# modprobe raid1
# modprobe raid5

Activate RAID devices: md1 for /boot and md0 for LVM where two logical volumes will reside.
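A sketch of those commands, assuming the component partitions from the example layout above (your device names may differ):

# mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3

Then make sure device-mapper is loaded and activate the existing LVM Volume Groups:

# modprobe dm-mod
# vgscan
# vgchange -ay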

We have created all our filesystems! And we are ready to install the OS!

Install and Configure Arch

This section does not attempt to teach you all about the Arch installer. It leaves out some details here and there for brevity, but still aims to be easy to follow. If you are having trouble with the installer, you may wish to seek help elsewhere in the wiki or forums.

Now you can continue using the installer to set-up the system and install the packages you need.
Here is the walkthrough:

Type /arch/setup to launch the main installer.

Select < OK > at the opening screen.

Select 1 CD_ROM to install from CD-ROM (or 2 FTP if you have a local Arch mirror on FTP).

If you have skipped the optional step (Create and Mount the Filesystems) above, and have not created a filesystem yet, select 1 Prepare Hard Drive > 3 Set Filesystem Mountpoints and create your filesystems and mountpoints here.

Now at the main menu, select 2 Select Packages and select all the packages in the base category, as well as the mdadm and lvm2 packages from the system category. Note: mdadm & lvm2 are included in the base category since arch-base-0.7.2.

Select 3 Install Packages. This will take a little while.

Note: Because the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file can simply be deleted, because it only contains comments on how to fill it correctly, and that is something mdadm can do automatically for you. So let us delete the original and have mdadm create a new one with the correct setup. Press Alt-F2 to get a new terminal and log in, then do:
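A sketch of that command, assuming the installer has the target system mounted at /mnt:

# mdadm --examine --scan > /mnt/etc/mdadm.conf

Then switch back to the installer with Alt-F1.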

Install Grub on the Primary Hard Drive

grub 0.97

This can also be done from the installer just fine now (2009.08 and should also work for 2009.02)

This is the last and final step before you have a bootable system!

As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you are effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive.
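A minimal sketch of those steps, assuming the new system is mounted at /mnt with the boot partition at /mnt/boot, and that the GRUB 0.97 stage files live in /usr/lib/grub/i386-pc (paths may differ on your install):

# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub/
# mount -t proc none /mnt/proc
# mount -o bind /dev /mnt/dev
# chroot /mnt /bin/bash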

At this point, you may no longer be able to see keys you type at your console. I am not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing reset at the prompt.

Once you have got console echo back on, type:

# grub

After a short wait while grub does some looking around, it should come back with a grub prompt. Do:

grub> root (hd0,0)
grub> setup (hd0)
grub> quit

That is it. You can exit your chroot now by hitting CTRL-D or typing exit.

Reboot

The hard part is all over! Now remove the CD from your CD-ROM drive, and type:

# reboot

Install Grub on the Alternate Boot Drives

Once you have successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:
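As a sketch, assuming the second drive is /dev/sdb (repeat the same steps for /dev/sdc), temporarily map it to hd0 so that GRUB writes a boot sector that will work when that drive is the one being booted from:

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit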

Archive your Filesystem Partition Scheme

Now that you are done, it is worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. You do this with the sfdisk tool and the following steps:
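A sketch of those steps, assuming the three drives from this example and /etc/partitions as the archive location:

# mkdir /etc/partitions
# sfdisk --dump /dev/sda > /etc/partitions/disc0.partitions
# sfdisk --dump /dev/sdb > /etc/partitions/disc1.partitions
# sfdisk --dump /dev/sdc > /etc/partitions/disc2.partitions

If a drive ever needs to be replaced, its dump can be fed back to sfdisk to recreate the same partition layout on the new disk.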

Mounting from a Live CD

Note: Live CDs like SystemRescueCD assemble the RAID arrays automatically at boot time, provided you used partition type fd when creating the arrays.
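If the arrays are not assembled automatically, a sketch of doing it by hand and mounting the root Logical Volume (device names are assumptions from the example layout):

# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3
# modprobe dm-mod
# vgchange -ay
# mount /dev/mapper/array-root /mnt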

Removing a device, stopping the array

You can remove a device from the array after you mark it as faulty.

# mdadm --fail /dev/md0 /dev/sdxx

Then you can remove it from the array.

# mdadm -r /dev/md0 /dev/sdxx

Remove a device permanently (for example, if you want to use it individually from now on).
Issue the two commands described above, then:

# mdadm --zero-superblock /dev/sdxx

After this you can use the disk as you did before creating the array.

Warning: If you reuse the removed disk without zeroing the superblock, you will LOSE all your data on the next boot, as mdadm will try to use it as part of the RAID array. DO NOT issue this command on linear or RAID0 arrays or you will LOSE all the data on the RAID array.

Stop using an array:

Unmount the target array

Repeat the three commands described at the beginning of this section on each device.

Stop the array with: mdadm --stop /dev/md0

Remove the corresponding line from /etc/mdadm.conf

Adding a device to the array

Adding new devices with mdadm can be done on a running system with the devices mounted.
Partition the new device "/dev/sdx" using the same layout as one of the devices already in the arrays, such as "/dev/sda".

First, add the new device as a Spare Device to all of the arrays. We will assume you have followed the guide and use separate arrays for /boot RAID 1 (/dev/md1), swap RAID 1 (/dev/md2) and root RAID 5 (/dev/md0).
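A sketch of those commands, assuming the new drive was partitioned as /dev/sdx1, /dev/sdx2 and /dev/sdx3 to mirror the existing layout:

# mdadm --add /dev/md1 /dev/sdx1
# mdadm --add /dev/md2 /dev/sdx2
# mdadm --add /dev/md0 /dev/sdx3

The new partitions start out as hot spares. To have the RAID 5 array actually use the extra disk for capacity, grow it (this triggers a lengthy reshape):

# mdadm --grow /dev/md0 --raid-devices=4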

Resize the LVM Physical Volume /dev/md0 (or e.g. /dev/mapper/cryptedlvm if using LUKS) to take up all the available space on the array. You can list them with the command "pvdisplay".

# pvresize /dev/md0

Resize the Logical Volume you wish to allocate the new space to. You can list them with "lvdisplay". Assuming you want to put it all to your /home volume:

# lvresize -l +100%FREE /dev/array/home

To resize the filesystem to allocate the new space use the appropriate tool. If using ext2 you can resize a mounted filesystem with ext2online. For ext3 you can use resize2fs or ext2resize but not while mounted.

You should check the filesystem before resizing.

# e2fsck -f /dev/array/home
# resize2fs /dev/array/home

Read the manuals for lvresize and resize2fs if you want to customize the sizes for the volumes.

Troubleshooting

If you are getting an error when you reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust your kernel line in /boot/grub/menu.lst accordingly. This is what happened to me, anyway.

Recovering from a broken or missing drive in the raid

You might get the above-mentioned error also when one of the drives breaks for whatever reason. In that case you will have to force the RAID to start even though it is one disk short. Type this (change device names where needed):

# mdadm --manage /dev/md0 --run

Now you should be able to mount it again with something like this (if you had it in fstab):

# mount /dev/md0

Now the RAID should be working again and available to use, although with one disk short! So, to add that disk, partition it in the way described above in #Partition_the_Hard_Drives. Once that is done you can add the new disk to the RAID by doing:

# mdadm --manage --add /dev/md0 /dev/sdd1

If you type:

# cat /proc/mdstat

you will probably see that the RAID is now active and rebuilding.

You also might want to update your /etc/mdadm.conf file by typing:

# mdadm --examine --scan > /etc/mdadm.conf

That should be about all the steps required to recover your RAID. It certainly worked for me when I lost a drive due to partition table corruption.

Benchmarking

There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.

Tiobench specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.

[http://www.coker.com.au/bonnie++/ Bonnie++] tests database type access to one or more files, and creation, reading, and deleting of small files, which can simulate the usage of programs such as Squid, INN, or Maildir format e-mail. The enclosed [http://www.coker.com.au/bonnie++/zcav/ ZCAV] program tests the performance of different zones of a hard drive without writing any data to the disk.

{{codeline|hdparm}} should '''NOT''' be used to benchmark a RAID, because it provides very inconsistent results.