This article explains how to add physical disk drives to a XenServer host, so that more capacity is available for the XenServer guests.

Create Linux LVM partition

The first step is to create a new partition of type Linux LVM on the second disk drive using the fdisk command.

# fdisk /dev/sdb
The number of cylinders for this disk is set to 182401.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      182401  1465136001   8e  Linux LVM
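The transcript above only shows the final partition table (the `p` command). A typical interactive fdisk session to create that partition on an empty disk might look roughly like this (the exact prompts vary by fdisk version):

```shell
# fdisk /dev/sdb
Command (m for help): n        # create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p                              # primary partition
Partition number (1-4): 1
First cylinder (1-182401, default 1): <Enter>      # accept default
Last cylinder (default 182401): <Enter>            # use the whole disk
Command (m for help): t        # change the partition type
Selected partition 1
Hex code (type L to list codes): 8e                # 8e = Linux LVM
Command (m for help): w        # write the partition table and exit
```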

Add new disk to XenServer LVM

The next step is to make the new disk partition known to the LVM using the pvcreate command. The pvdisplay command lists all physical volumes associated with LVM: the first physical volume is the original XenServer LVM partition on the first disk; the second entry is our new one.

If the new disk already contains a LVM partition, it should be automatically recognized as a new physical volume. In this case, the pvcreate command is not necessary.
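Assuming the new partition is /dev/sdb1 as above, the two commands are:

```shell
# Initialize the new partition as an LVM physical volume
pvcreate /dev/sdb1

# List all physical volumes; a new entry for /dev/sdb1 should
# appear alongside the original XenServer physical volume
pvdisplay
```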

The new storage can be added to the existing local storage, or it can be used to create a new, distinct local storage. The first option has the disadvantage that the local storage then depends on two physical hard disks, roughly doubling the risk of failure.

Alternative 1: Extend existing local storage

To extend the existing local storage, use the vgextend command to add the new physical volume to an existing volume group. This command needs the volume group name as a parameter, so we run the vgdisplay command first. After running vgextend, the extra storage should appear in the volume group, and the new size will also be displayed for the local storage in Citrix XenCenter.
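The sequence might look like this (the volume group name below is an example; XenServer local-storage volume groups are usually named VG_XenStorage-&lt;uuid&gt;):

```shell
# Find the name of the existing local-storage volume group
vgdisplay

# Add the new physical volume to that volume group
vgextend VG_XenStorage-<uuid> /dev/sdb1

# Verify that "Free PE / Size" has grown accordingly
vgdisplay
```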

Alternative 2: Create a new local storage

Alternatively, the xe sr-create command can be used to create a new, separate storage repository on the new partition. Be warned: if your new disk already contains an LVM partition with data on it (for example, if you followed this tutorial before, copied data onto the new storage, and re-installed XenServer on the main disk), the xe sr-create command will delete all data.
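A sketch of creating a new LVM-backed storage repository with xe sr-create (the host UUID and name-label are placeholders to adapt; this wipes the device):

```shell
# Find the UUID of this XenServer host
xe host-list

# Create a new LVM-backed local storage repository on the new
# partition (destroys any existing data on /dev/sdb1)
xe sr-create host-uuid=<host-uuid> content-type=user \
   name-label="Local storage 2" type=lvm \
   device-config:device=/dev/sdb1
```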

Using the new storage

The new storage now shows up on the server's "Storage" tab in the XenCenter administration application. To use it, create a new virtual disk in the "Storage" tab of the virtual machine.

The new disk is now accessible as /dev/xvdb in the virtual machine. Just run fdisk /dev/xvdb and mke2fs -j /dev/xvdb1 and the new storage is ready for use.
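Inside the guest, the steps might look like this (the mount point is an example):

```shell
# Partition the new virtual disk: create one primary partition,
# which becomes /dev/xvdb1
fdisk /dev/xvdb

# Create an ext3 filesystem on it (-j adds the journal)
mke2fs -j /dev/xvdb1

# Mount the new filesystem
mkdir -p /mnt/data
mount /dev/xvdb1 /mnt/data
```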

Anonymous

Thanks for this post. I'm just getting started with XenServer, and so far, am very happy with it. Your posting here helped bridge the gap between my existing CentOS knowledge and using XenServer.

My XenServer has two 1TB disks on it. When booting the CD, I went into "shell" mode so I could use dd to clear out the beginning and end of each disk (to wipe away GUID partitions). When I continued setup, I told it not to create any storage repositories. Then, after it was booted, I logged into the box with SSH, changed the LVM partition to "RAID autodetect", then used dd to copy all the disk blocks up to and including the end of the second partition to the second disk. By running fdisk on the destination disk, I could use the 'write' command to reload the copied MBR. Next, I used mdadm to create a RAID1 drive across the two RAID partitions. Finally, I used pvcreate to make the RAID volume into an LVM PV and then used vgcreate to make a usable RAID-backed LVM out of it.

At this point, I was able to use your command with the newly-mirrored LVM to create robust storage.

Because I copied the disk blocks between the disks starting at block 0, I should be able to boot the second disk if the first fails (since the boot blocks would have also been copied). I don't know what's going to happen to configuration changes though... it would be great if I could put the config on the LVM.
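The procedure this commenter describes might be sketched roughly as follows. The device names, partition numbers, and dd count are assumptions (the comment does not state which partition holds the local storage), so adapt everything before running anything like this:

```shell
# Copy the MBR and everything up to the end of the second partition
# from the first disk to the second (count is a placeholder)
dd if=/dev/sda of=/dev/sdb bs=1M count=<end-of-second-partition>

# Build a RAID1 mirror from the two "RAID autodetect" (type fd)
# partitions; sda3/sdb3 are assumed partition numbers
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Turn the mirror into an LVM physical volume and a volume group
pvcreate /dev/md0
vgcreate VG_raid /dev/md0
```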

Anonymous

Thanks for your response. The direct disk access (from the link above) has a couple of benefits:

it's simpler - LVM isn't that hard, but it's not as simple as accessing a drive straight up

I can easily pull the drive and read it from any other computer/VM

it's faster - I did a simple dd benchmark on an older spare drive ... ~50 MB/s write for LVM, ~75 MB/s write for direct disk access... this is writing to the drive from within the same VM.

XenCenter interface clutter - If I add 6 separate drives to a VM with your above method, I have 6 extra drives showing up in my main XenCenter "tree" in the interface. With direct disk access, they show up as USB drives and are only in the "Storage" tab. This is minor, but still.