Install Ubuntu With Software RAID 10

The Ubuntu Live CD installer doesn't support software RAID, and the server and alternate CDs only let you do RAID levels 0, 1, and 5. RAID 10 is the fastest RAID level that also has good redundancy, so I was disappointed that Ubuntu didn't offer it as an option for my new file server. I didn't want to shell out lots of money for a RAID controller, especially since benchmarks show little performance benefit from a hardware controller configured for RAID 10 in a file server.

1 Before you start

I'll assume you already know about RAID 10, but I'll cover some important points before you begin.

You will need 4 partitions dedicated to the RAID array, each on its own physical drive.

Only half of the disk space used for the RAID 10 volume will be usable.

All partitions used for RAID should be the same or close to the same size.

2 Prepare your disks

Use a partitioning program that can create RAID partitions; I use cfdisk, which is text based but easier to use than fdisk.
Partition your disks. Make a 50 MB partition on the first drive; this is for /boot, since GRUB doesn't support RAID well.
Set up a partition on each of the four drives as the RAID type; in cfdisk choose FD as the type. In my setup everything besides /boot will reside in one RAID 10 volume.

For best swap performance put a swap partition on each drive; I used a one GB swap partition per drive.

Boot the Ubuntu Live CD.

Run the Terminal.

sudo su
cfdisk /dev/sda

cfdisk /dev/sdb

The next two drives are partitioned the same as /dev/sdb:

cfdisk /dev/sdc
cfdisk /dev/sdd
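For reference, here is roughly the layout assumed in the commands below (illustrative only - your partition numbers and sizes may differ):

/dev/sda1   50 MB   Linux (83)        /boot
/dev/sda2   bulk    Linux RAID (FD)   RAID 10 member
/dev/sda3   1 GB    Linux swap (82)   swap
/dev/sdb1   1 GB    Linux swap (82)   swap
/dev/sdb2   bulk    Linux RAID (FD)   RAID 10 member
(/dev/sdc and /dev/sdd are partitioned like /dev/sdb)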

3 Install RAID utility, mdadm, and set up the RAID array
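The exact commands aren't reproduced here, so as a minimal sketch (assuming, as in the layout above, that the RAID member is the second partition on each drive): install mdadm in the Live CD environment and create the array that the rest of the guide calls /dev/md0.

apt-get update
apt-get install mdadm
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2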

Then create the file system on the RAID array.
Format it now, because the partitioner in the installer doesn't know how to modify or format RAID arrays. I used the XFS file system, because XFS has great large-file performance. Then create an alias for the RAID array with the ln command, because the Ubuntu installer won't find devices starting with "md".

mkfs.xfs /dev/md0
ln /dev/md0 /dev/sde

4 Ubuntu Install

Run the installer. When you are in the partitioner, choose manual and be careful not to modify the partition layout.
For the /dev/sda1 partition, choose ext3 as the file system and set the mount point to /boot.

Set your swap partitions to be used as swap.

For the RAID device, select the file system type you already formatted it with and set the mount point. Do not choose to reformat it or make partition table changes to the RAID array, because the partitioner will misconfigure it.

Click continue on the warning about the RAID not being marked for formatting.

When the installer finishes, tell it to continue using the Live CD.

5 Install RAID support inside the new install

A default Ubuntu setup won't automatically boot into a software RAID setup. You will need to chroot into the new install, with the chroot configured to see all the device information available in the Live CD environment, so that the mdadm install scripts can properly set up the config and boot files for RAID support.
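The exact commands for this step aren't listed here, so this is a minimal sketch, assuming the RAID root gets mounted at /myraid (the mount point referred to in the closing comments):

sudo su
mkdir /myraid
mount /dev/md0 /myraid
mount --bind /dev /myraid/dev       # expose the Live CD's device nodes inside the chroot
mount --bind /proc /myraid/proc
mount --bind /sys /myraid/sys
chroot /myraid
apt-get update
apt-get install mdadm               # its install scripts set up the config and boot files for RAID support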

Extra commands you may need

A helpful command that will tell you the status of the RAID and which partitions belong to a volume:

cat /proc/mdstat

If you reboot into the Live CD and want to mount your RAID array, you will need to install mdadm in the Live CD environment and activate the RAID:

sudo su
apt-get install mdadm
mdadm --assemble /dev/md0
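If mdadm complains that it can't find /dev/md0 in its config, assembling by scanning the member superblocks is a standard fallback (not a step from the original guide):

mdadm --assemble --scan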

If you need to start over or remove the RAID array

Software RAID information is embedded in a place on each RAID partition called the superblock. If you decide to change your RAID setup and start over, you can't just repartition and try to recreate the RAID array. You will need to erase the superblock first on each partition belonging to the RAID array you want to remove.

Make sure your important data has been backed up before doing these steps.
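The commands themselves aren't listed in this section, so as a rough sketch (stop the array first; partition names follow the illustrative layout above):

mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda2   # repeat for every partition that was in the array
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdc2
mdadm --zero-superblock /dev/sdd2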

Brm didn't have anything specific for me to help with. The mailing list mentioned did have some concerns that I will address.

It is possible to put RAID 10 on two drives, but while it's technically possible it's practically useless. Striping or mirroring two partitions on the same hard drive causes a nasty performance hit; RAID 1 would be better for two drives.

Yes, RAID 100 is faster than RAID 10, but I think the added overhead wouldn't speed up software RAID, and it decreases the level of redundancy. I would love to see someone benchmark it.

Putting swap on top of software RAID will add unnecessary overhead. Virtual memory in the kernel automatically optimizes the use of multiple swap partitions, and the kernel adapts if a swap partition becomes unavailable.

You can put /boot on a RAID 1, but /boot is easy to regenerate, and you will have to partition a new drive and rebuild your RAID anyway if you lose a drive. I can redo the guide to make /boot redundant if at least a few people request it.

My setup gives you a high performance storage system that lets you retain your data if a hard drive fails. If you want a system with high availability and seamless failover you will need hardware RAID with hot-swappable drive bays, but that is expensive and not required for someone who doesn't need high availability.
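As an illustration of the point about multiple swap partitions (my example, not from the original guide): giving each one the same priority in /etc/fstab lets the kernel stripe swap across the drives.

/dev/sda3  none  swap  sw,pri=1  0  0
/dev/sdb1  none  swap  sw,pri=1  0  0
/dev/sdc1  none  swap  sw,pri=1  0  0
/dev/sdd1  none  swap  sw,pri=1  0  0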

First of all, cheers for the tutorial. I learnt heaps even though I couldn't get it to work, because the partitioner would not see md0.

The reason it failed for me is this:

mkfs.xfs /dev/md0
ln /dev/md0 /dev/sde

That doesn't work when installing from the Ubuntu Server CD.

The trick is, at a system recovery or Live CD command prompt, to type

mkfs.ext3 /dev/md0

instead (don't bother with ln /dev/md0 etc.).

This formats the RAID array as ext3, which, unlike XFS, can actually be seen by the server installer!

Now when in the partitioner you select manual setup. At first you still won't see md0, but fear not! Set up your boot partition (/dev/sda1) and your swap partitions (sda3, sdb2, etc.), then go into configure software RAID. Now click finish (if you click on delete RAID array you'll see your md0 array! yay! But don't delete it, of course!). Now when back in the partition screen you will see the md0 partition!!! yay!!

I have a question - I have set my system up following your tutorial, but wanted to upgrade to Ubuntu 8.10. My /boot partition was too small at 50 MB, so I used the Live CD to resize it to 200 MB, deleting the /dev/sda2 partition in the process.

How do I resync the RAID array to bring the recreated /dev/sda2 back into the RAID? It says /dev/md0 is not started when trying to do it from the Live CD, and booting from the actual system itself I can't do it either, as I am unable to mount the RAID volume because it is in use by the system!!
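For anyone hitting the same problem, one generic mdadm approach (a sketch on my part, not a reply from the original author, and assuming the other members are intact) is to start the array degraded from the Live CD and re-add the recreated partition:

sudo su
apt-get install mdadm
mdadm --assemble --scan --run      # --run starts the array even though it is degraded
mdadm /dev/md0 --add /dev/sda2     # re-add the recreated partition; the array resyncs onto it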

First of all I want to thank you for this howto and also for the comments; as I'm quite new to Linux, I found them VERY useful.

Now to add my 2 cents, I will just share my little experience with RAID.

I had to build a server and the hardware turned out to be a fakeRAID one, so at first I thought to give fakeRAID a try: I issued a dmraid -ay from the Live CD and played a little with it. Then, after some reading about the pros and cons of fakeRAID vs software RAID, I made up my mind and took the software RAID path. As I wanted RAID 10 as the root filesystem, I made 2 partitions more or less as recommended in one of these comments, formatted them, ran the server installer, partitioned in manual mode, and it all installed OK.

To make it short, after 2 days of swearing and with quite a bit less hair on my head, I found the culprit:

dmraid

The Ubuntu boot process picks up dmraid and it grabs the devices for itself even though I didn't actually use it for the install. I had to chroot, then run dmraid -an and apt-get remove dmraid, and my problem was solved. Since I didn't find this anywhere, I thought it might help others.
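Summed up as commands (as described above, run from a chroot into the installed system):

dmraid -an               # deactivate the fakeRAID device-mapper mappings
apt-get remove dmraid    # stop dmraid from claiming the disks at boot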

I would like to say thanks for providing a very clear step-by-step tutorial on how to install RAID 10 in Ubuntu; it is working really great on my system, no doubt about it.

Just wondering if I can make a request: how to add "hot spares" to this RAID, troubleshooting on how to replace a failed drive and rebuild the RAID, and how to have an email sent to the user if one of the drives fails.

I think this would make the PERFECT RAID 10 HowTo for Ubuntu users.
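The guide doesn't cover this, but as a rough sketch of what the request asks for (device name and mail address are illustrative): mdadm can hold a hot spare and mail you when a drive fails.

mdadm /dev/md0 --add /dev/sde2                             # a same-sized partition on a spare disk; it stays idle until a member fails
echo "MAILADDR admin@example.com" >> /etc/mdadm/mdadm.conf
mdadm --monitor --scan --daemonise                         # mails MAILADDR on failure events; the spare then rebuilds automatically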

You might want to consider using swap on RAID too. If one swap disk crashes, the machine will go down, even though data stored in the RAID is still intact. And you do not need a swap disk until the system (and RAID) is up, so the boot partition is the only one needed outside the RAID. Might I suggest a USB stick for the boot partition :)

With raid10,f2 you can almost double the sequential read performance of your raid, while other performance numbers are about the same.

Using all 4 drives you can roughly quadruple your sequential read performance, and something like double other read performance measures, compared to your setup, while writing will be about the same. I would also recommend using a bigger chunk size, say 256 KiB.

Your point 3 would then be:

mdadm -C /dev/md2 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]2

I would also recommend using RAID for boot and swap, and using all 4 drives would actually let you keep running even if 3 disks crashed, plus you get the added performance of all of the drives. /boot needs to be on a standard (near layout) raid10, as GRUB and LILO can only boot RAID partitions that look like a standalone partition.

Say for /boot:

mdadm -C /dev/md1 -c 256 -n 4 -l 10 -p n4 /dev/sd[abcd]1

And for swap:

mdadm -C /dev/md3 -c 256 -n 4 -l 10 -p f4 /dev/sd[abcd]3

For /home I would not waste all the space on having 4 copies, so:

mdadm -C /dev/md4 -c 256 -n 4 -l 10 -p f2 /dev/sd[abcd]4

You may even consider running RAID5 on /home, to get more space.
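For completeness, the RAID 5 alternative for /home with the same chunk size (my own sketch, following the same command style as above):

mdadm -C /dev/md4 -c 256 -n 4 -l 5 /dev/sd[abcd]4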

There is more on the setup at http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

Compared to your setup, this would give you:

1. Survival of 3 disks crashing - your setup would not survive a crash of the disk where your /boot was placed, and your setup will stop if any of your swap partitions were damaged.

2. Almost 4 times the sequential read performance, and double the random read performance, for your basic root and swap partitions.

Hey man, just wanted to say thanks! Finally got RAID 10 up and running. Had to tweak a little though... ended up running a totally separate drive for boot and swap, as the install kept hanging on me at 15%. Also I'm a complete newbie; for all the other newbies out there, you have to run apt-get update before you run apt-get install mdadm. Cheers

Only remark on this guide - create a larger boot partition; 50 MB is not enough if you have two kernel versions (2.6.24-24-server and 2.6.24-26-server in my case). The kernel removal via aptitude or apt-get failed because there was insufficient disk space - only 10% free. I will now reinstall and create a 100 MB boot partition; that should be enough for future kernel updates and I won't have to worry about disk space.

Thanks for this guide. You saved my life. After 3-4 days of effort with no result trying to install Element OS on RAID 0, I finally came across your guide, which made all my efforts, headache and sweat worth it. I don't know how to thank you. It all finally worked out smoothly. I could at last boot into my new Element OS install. No other guides or forums helped.

and I've now developed a readily-customisable set of scripts to implement the process to your own preference, and to add the LILO boot-loader to the result. If anyone can recommend a website that would be willing to host it I'll happily pass the set on for publication. Takes just a few minutes, and saves an awful lot of careful typing!

One thing I wanted to mention though is that, in addition to some oddities I've noticed with mdadm raid10, the read speeds are very slow.

With 4 drives I get just 260 MB/s reading in RAID 10; in RAID 0 I average 520 MB/s. Given that this is approximately half the speed, I strongly suspect that RAID 10 is not stripe-reading from all 4 drives as it could, and is only reading from 2. Even RAID 5 is much faster, at ~400 MB/s.

I don't think I'll chance 4 striped drives, but after considering the performance hit, RAID 5 is much more attractive than RAID 10.

When installing Ubuntu 11.04 64-bit I had to make the /boot partition larger. Trying to install mdadm on the chrooted system failed, as I only had 7 MB of free space on that partition. After changing it to 500 MB the install worked flawlessly.

Thanks for the comments! I'm happy to see people still find this useful after 3 years. I'm not using software RAID anymore after getting a couple of second-hand LSI cards for a low price.

If I redid this guide I would do a couple of things differently: I would make the boot partition bigger. I would mount the devpts, proc, and sysfs kernel filesystems after chrooting, because that is less likely to cause problems if you have to chroot again (of course you would take /myraid off of the commands). I would also put swap on a second md1 RAID, because there is a chance, especially if you don't have enough RAM, that a process or maybe even the kernel could crash if one of the drives failed. Unless the kernel has something built in to handle one of multiple swap partitions failing; someone smarter than me would know that.
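To make that concrete, a rough sketch of the two changes (partition numbers are illustrative): the kernel filesystems mounted from inside the chroot, and swap on its own md1 array.

# inside the chroot, so no /myraid prefix is needed
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devpts devpts /dev/pts

# swap on its own RAID 10 so a failed drive can't take out in-use swap
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkswap /dev/md1
swapon /dev/md1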