
This article describes a Heartbeat2 Xen cluster using the Ubuntu (7.10) OS, drbd8 and the OCFS2 (ver. 1.39) file system. Although Ubuntu is used here, it can be done in almost the same way with Debian.

Idea

The idea behind the whole setup is to get a highly available two-node cluster with redundant data. Two identical servers are installed with the Xen hypervisor and an almost identical configuration as cluster nodes. The configuration and image files of the Xen virtual machines are stored on a drbd device for redundancy. drbd8 and OCFS2 allow simultaneous mounting on both nodes, which is required for live migration of Xen virtual machines.

Answer yes when asked to install the additional software, then reboot the system into the Xen hypervisor.
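For reference, the installation step boils down to something like this (a sketch; the "ubuntu-xen-server" metapackage name is an assumption for Ubuntu 7.10, check your release):

sudo apt-get install ubuntu-xen-server
# after the reboot, verify that dom0 is running under the hypervisor
sudo xm list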

OCFS2

OCFS2 (http://oss.oracle.com/projects/ocfs2/) is a cluster file system that allows simultaneous access from many nodes. We will set it up on our drbd device so that it can be accessed from both nodes at the same time. While configuring OCFS2 we provide the information about the nodes that will access the file system later. Every node that has an OCFS2 file system mounted must regularly write to the file system's metadata, letting the other nodes know that it is still alive.
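A minimal /etc/ocfs2/cluster.conf for our two nodes could look like this (a sketch; the node names and IP addresses are examples, and the indented parameter lines must start with a tab):

# /etc/ocfs2/cluster.conf
node:
	ip_port = 7777
	ip_address = 192.168.0.128
	number = 0
	name = node1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.0.129
	number = 1
	name = node2
	cluster = ocfs2

cluster:
	node_count = 2
	name = ocfs2

Afterwards reconfigure the cluster stack with "sudo dpkg-reconfigure o2cb" (on newer releases the package is called ocfs2-tools) and format the device once, e.g. "sudo mkfs.ocfs2 /dev/drbd0" (assuming the drbd device is /dev/drbd0).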

drbd8

The advantage of drbd8 over drbd7 is that it allows the drbd resource to be "master" (primary) on both nodes, so it can be mounted read-write on both. We will build the drbd8 modules and load them into the kernel. For that we need the packages "build-essential" and "kernel-headers-xen".
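For example (a sketch; module-assistant builds the drbd8-source package against the running Xen kernel):

sudo apt-get install build-essential kernel-headers-xen drbd8-utils drbd8-source module-assistant
sudo m-a a-i drbd8      # build and install the drbd8 module
sudo modprobe drbd      # load it into the kernel
cat /proc/drbd          # verify that the module is loaded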

The "allow-two-primaries" option in the net section of drbd.conf allows the resource to be mounted as "master" on both nodes. Copy /etc/drbd.conf to node2 and restart drbd on both nodes as shown below.
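First the relevant excerpt of /etc/drbd.conf (the resource name "r0" is only an example, adjust it to your setup), then the commands:

resource r0 {
	net {
		allow-two-primaries;
	}
	...
}

scp /etc/drbd.conf node2:/etc/drbd.conf
sudo /etc/init.d/drbd restart   # run on both nodes
sudo drbdadm primary r0         # run on both nodes to make the resource primary on both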

Heartbeat2

In Heartbeat2 the configuration and status information of the resources is stored in XML format in the file "/var/lib/heartbeat/crm/cib.xml". The syntax is very well explained by Alan Robertson in his tutorial from linux.conf.au 2007, which can be found at http://linux-ha.org/HeartbeatTutorials

This file can either be edited directly as a whole or manipulated in pieces using the "cibadmin" tool. We will use this tool, as it makes it much easier to manage the cluster. We will save the required components as XML files under /root/cluster.
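For example (cibadmin's -Q option queries the CIB, -o selects a section):

sudo cibadmin -Q                # dump the complete current CIB
sudo cibadmin -Q -o resources   # show only the resources section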

Initialization

Edit file /root/cluster/bootstrap.xml

sudo vi /root/cluster/bootstrap.xml

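A minimal bootstrap.xml could look like this (a sketch; the cluster properties and their values are only examples, adjust them for your cluster):

<cluster_property_set id="bootstrap">
	<attributes>
		<nvpair id="bootstrap01" name="stonith-enabled" value="true"/>
		<nvpair id="bootstrap02" name="symmetric-cluster" value="true"/>
		<nvpair id="bootstrap03" name="no-quorum-policy" value="stop"/>
	</attributes>
</cluster_property_set>

Then load it into the cluster:

sudo cibadmin -C crm_config -x /root/cluster/bootstrap.xml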

This will initialize the cluster with the values set in the XML file. (If the values have already been set, you can use "sudo cibadmin -M crm_config -x /root/cluster/bootstrap.xml" to modify them with our new values.)

Setting up STONITH device

STONITH prevents a "split-brain situation" (i.e. a resource unintentionally running on both nodes at the same time) by fencing the faulty node. Details can be found at http://www.linux-ha.org/STONITH. We will use STONITH over ssh to reboot the faulty machine.
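The STONITH resource runs as a clone, so that each node can fence the other. A sketch (the "external/ssh" plugin and its "hostlist" parameter ship with Heartbeat; the node names and IDs are examples):

sudo vi /root/cluster/stonith.xml

#/root/cluster/stonith.xml
<clone id="stonith_clone">
	<instance_attributes id="stonith_clone_ia">
		<attributes>
			<nvpair id="stonith_clone_01" name="clone_max" value="2"/>
			<nvpair id="stonith_clone_02" name="clone_node_max" value="1"/>
		</attributes>
	</instance_attributes>
	<primitive id="stonith_ssh" class="stonith" type="external/ssh" provider="heartbeat">
		<instance_attributes id="stonith_ssh_ia">
			<attributes>
				<nvpair id="stonith_ssh_01" name="hostlist" value="node1,node2"/>
			</attributes>
		</instance_attributes>
	</primitive>
</clone>

sudo cibadmin -C resources -x /root/cluster/stonith.xml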

Now we can add a Xen virtual machine as a cluster resource. Let's say we have a Xen paravirtualized machine called vm01. We keep the configuration and image files of vm01 under /drbd0/xen/vm01/ as vm01.cfg and vm01-disk0.img respectively.

Edit /root/cluster/vm01.xml

sudo vi /root/cluster/vm01.xml

#/root/cluster/vm01.xml

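The file could contain something like the following (a sketch; the OCF "Xen" resource agent and its "xmfile" parameter ship with Heartbeat 2, but the IDs and the monitor operation values are only examples):

<primitive id="vm01" class="ocf" type="Xen" provider="heartbeat">
	<operations>
		<op id="vm01_op01" name="monitor" interval="10s" timeout="60s"/>
	</operations>
	<instance_attributes id="vm01_ia">
		<attributes>
			<nvpair id="vm01_ia_01" name="xmfile" value="/drbd0/xen/vm01/vm01.cfg"/>
		</attributes>
	</instance_attributes>
</primitive>

Add the resource to the cluster:

sudo cibadmin -C resources -x /root/cluster/vm01.xml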

10 thoughts on “Heartbeat2 Xen cluster with drbd8 and OCFS2”

Pretty good howto, although one thing I'm missing: you never mention how the Xen guests are set up in this environment. What kind of disk devices do they have? The only thing that comes to mind would be the file backend, but I've heard some bad things about its performance. I'd appreciate it if you could clear this up.

Thank you for the howto!
Though I did run into some problems while installing this system on hardy. For example, with dpkg-reconfigure I had to use ocfs2-tools instead of o2cb, and the configuration file for the OCFS2 cluster did not work as such but needed some tuning. The parameters in the config need a tab in front:
node:
	ip_port = 7777
	ip_address = 192.168.0.128
	number = 0
	name = node1
	cluster = ocfs2

Another way is to skip OCFS2, as I believe it only complicates the setup and reduces performance. Using DRBD in your DomU config files, i.e. using drbd: instead of phy:, will also simplify the management of DRBD, as the block device driver will take care of the DRBD states. It also does not require OCFS2 or GFS, as only one node will have unconditional access. So using, for example, EXT3 works fine.

but I’d rather transfer the SSH key using the ssh-copy-id command. The ‘scp’ way presented here overwrites other RSA IDs that are permitted to log in on the other host. The ssh-copy-id command _adds_ the RSA ID on the other host, it doesn’t overwrite the IDs already stored in the authorized_keys file.

I like it the way it is. OCFS2 is necessary, because we want DomU images to reside in files (easy backup, migration to new hardware; live migration requires rw access on both nodes). If we don't want files, we end up with multiple drbd primary/primary partitions for each DomU.
What I miss here is a more complicated example – with 2 DomUs and resource stickiness, auto failback/fail-forward to spread the load.
Also drbd startup and ocfs2 startup & mount set up as clone resources. It's also interesting what happens if you type # halt on a node. (A graceful shutdown requires migrating the DomUs, stopping heartbeat, unmounting ocfs2, stopping drbd (detaching the disks) & continuing with the shutdown…)

This is totally the wrong way to roll. With this setup, any partition across the DRBD interface will cause (at best) fencing (assuming that your fencing-via-ssh hack is using a different interface), leading to (at least some) downtime, and at worst two completely different copies of your OCFS2, one of which will need to be thrown out (and those VMs' changes lost entirely).

If you used one drbd per VM disk, and drbd’s block-drbd script for Xen, you would completely ameliorate this problem AND do away with any need for disk-related pseudofencing, because (except for a handful of milliseconds during live migration) the volume is only primary in one spot at once, so any partition will be resolved automatically by drbd on reconnection.

I have a problem with this command: sudo cibadmin -C crm_config -x /root/cluster/bootstrap.xml; if I run it, I only see the help page. I use openSUSE 11.1 x64 as the operating system. Can anybody help me, what am I doing wrong? I think I did the Heartbeat2 installation/configuration wrong. Can anybody write the installation/configuration step by step?