===== Active VServer Failover =====

To perform active vserver failover from one host to another without user intervention you may use heartbeat. To do this, you must first have a mechanism to actively replicate your vserver filesystem and configuration from one host to another. This mechanism must be able to provide a consistent filesystem view to either host on demand (but not necessarily at the same time). In other words, you may use something like NFS, a clustered filesystem like OCFS2 or GFS, or a network-replicated block device like drbd, but you cannot use something like rsync, scp or ftp.


===== Organizing your VServer Directories =====


Once you have an active replication method, you will likely need to organize your vserver files to be on the same device/filesystem so that you only need one replicated device/filesystem and do not need a separate one just for your vserver configuration files. One way to do this would be to use a ''/vservers'' mount point and to have a subdirectory for each vserver in there: ''/vservers/<server-name>''. If you want to put both the ''/var'' and ''/etc'' sections of your vserver in the vserver's subdirectory and soft link to them, you may be tempted to try this arrangement:

 /vservers/<server-name>/etc
 /vservers/<server-name>/var
 /etc/vservers/<server-name> -> /vservers/<server-name>/etc
 /var/lib/vservers/<server-name> -> /vservers/<server-name>/var

But if you do this and you enable the util-vserver init script, you are likely to run into a chroot barrier problem: this init script sets a chroot barrier on the parent of each vserver's var directory, which in this layout also contains the vserver's ''etc'' directory, so the configuration symlink stops working and you will get a barrier-related error message when the vserver starts.
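One way to avoid the barrier problem is to keep the guest roots and the configuration in two separate subtrees of ''/vservers'', so the barrier lands on a directory that holds nothing but guest roots. This is only a sketch under that assumption, using a hypothetical guest named ''foo'':

```shell
#!/bin/sh
# Sketch (not from the original article): separate var/ and etc/ subtrees,
# so the chroot barrier on the guest roots' parent never covers the configs.
# PREFIX defaults to a scratch area for rehearsal; set PREFIX=/ on a real host.
PREFIX="${PREFIX:-/tmp/vserver-demo}"
NAME="foo"   # hypothetical guest name

mkdir -p "$PREFIX/vservers/var/$NAME" "$PREFIX/vservers/etc/$NAME" \
         "$PREFIX/var/lib/vservers" "$PREFIX/etc/vservers"

# util-vserver still finds everything through its usual locations:
ln -sfn "$PREFIX/vservers/var/$NAME" "$PREFIX/var/lib/vservers/$NAME"
ln -sfn "$PREFIX/vservers/etc/$NAME" "$PREFIX/etc/vservers/$NAME"
```

Both subtrees still live under one ''/vservers'' mount point, so a single replicated device still covers everything.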

Once you have an arrangement that avoids the barrier problem, you can replicate the entire ''/vservers'' directory to all hosts and you will then be able to run the vserver anywhere the ''/vservers'' directory is replicated.

===== FileSystem Fail Over =====

Finally, you will need a mechanism to start and stop your vservers on the appropriate hosts. This is where heartbeat comes in. If you do not have a permanently mounted filesystem on each node, perhaps because you are using a regular filesystem on top of a non-shared block device such as drbd, you will need to configure heartbeat to first provide the ''/vservers'' filesystem on the node which is going to be the active host.
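A minimal sketch of such a filesystem resource, assuming a heartbeat 2 style CIB with the stock Filesystem agent, a drbd device ''/dev/drbd0'' and an ext3 filesystem (the ids, device and fstype are assumptions to adjust for your setup):

```xml
<primitive id="fs_vservers" class="ocf" provider="heartbeat" type="Filesystem">
  <instance_attributes id="fs_vservers_ia">
    <attributes>
      <nvpair id="fs_vservers_device" name="device" value="/dev/drbd0"/>
      <nvpair id="fs_vservers_directory" name="directory" value="/vservers"/>
      <nvpair id="fs_vservers_fstype" name="fstype" value="ext3"/>
    </attributes>
  </instance_attributes>
</primitive>
```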


===== Multiple Vservers and Devices with DRBD 7 =====


If you have multiple vservers which you want to be able to fail over independently from one host to another with drbd 7, you might have a hard time doing this with heartbeat. The drbd agent distributed with heartbeat tends to be focused on drbd 8; if you are using drbd 7, you are expected to be using heartbeat 1, which does not use ocf agents but does provide support for multiple independent drbd devices. Instead, you may try this custom [http://www.theficks.name/bin/lib/ocf/drbd drbd ocf agent]. Here is a sample heartbeat configuration for use with this agent:
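The original sample configuration has not survived here, so the following is only a guess at a plausible primitive definition for the custom agent; the ''resource'' parameter name and the ids are hypothetical, so check the agent's meta-data action for the real names:

```xml
<primitive id="drbd_foo" class="ocf" provider="bar" type="drbd">
  <instance_attributes id="drbd_foo_ia">
    <attributes>
      <!-- hypothetical parameter: the drbd resource name from drbd.conf -->
      <nvpair id="drbd_foo_resource" name="resource" value="foo"/>
    </attributes>
  </instance_attributes>
</primitive>
```

With one such primitive per drbd device, each vserver's device can fail over independently of the others.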

===== OCF Provider =====

The ''ocf provider'' is simply a fancy term for the directory name under ''/usr/lib/ocf/resource.d/'' where you place your ocf agent (script). The ocf agents distributed with heartbeat are in the heartbeat subdirectory and therefore the provider for them is heartbeat. If you are adding a custom agent, you can either put it in the same directory (heartbeat) and use the heartbeat provider, or you can create a new ''provider'' (bar in the examples) and a directory for that provider: ''/usr/lib/ocf/resource.d/bar''.
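For example, registering a downloaded agent under a new provider named bar could look like this sketch (the PREFIX variable is only there so you can rehearse in a scratch directory):

```shell
#!/bin/sh
# Register a custom ocf agent under a new provider directory named "bar".
# PREFIX defaults to a scratch area for rehearsal; set PREFIX=/ on a real host.
PREFIX="${PREFIX:-/tmp/ocf-demo}"

# Stand-in for the downloaded VServer agent script:
printf '#!/bin/sh\n' > /tmp/VServer-agent

# A provider is nothing more than a directory under resource.d/:
mkdir -p "$PREFIX/usr/lib/ocf/resource.d/bar"
install -m 0755 /tmp/VServer-agent "$PREFIX/usr/lib/ocf/resource.d/bar/VServer"
```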


===== VServer Fail Over =====


Once you have configured your filesystem for failover you can configure the vservers themselves for failover. If you want to control more than one vserver with heartbeat, you may use the following [http://www.theficks.name/bin/lib/ocf/VServer vserver ocf agent] to do so. Be sure to specify a colocation constraint between the filesystem and your vservers. You will also need to specify an ordering constraint to be sure that the filesystem is mounted before the vservers are started. Here is a sample ocf vserver configuration for a vserver named ''foo'' and an ocf provider named ''bar'':
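The original sample is not preserved here; this sketch guesses a single ''vserver'' name parameter for the agent and shows one possible heartbeat 2 shape for the constraints against a filesystem resource assumed to be called fs_vservers. The exact constraint attribute names and ordering direction vary between heartbeat versions, so check the CIB DTD shipped with yours:

```xml
<primitive id="vserver_foo" class="ocf" provider="bar" type="VServer">
  <instance_attributes id="vserver_foo_ia">
    <attributes>
      <!-- hypothetical parameter: the name of the vserver to start/stop -->
      <nvpair id="vserver_foo_name" name="vserver" value="foo"/>
    </attributes>
  </instance_attributes>
</primitive>

<rsc_colocation id="vserver_foo_with_fs" from="vserver_foo" to="fs_vservers" score="INFINITY"/>
<rsc_order id="fs_before_vserver_foo" from="fs_vservers" type="before" to="vserver_foo"/>
```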

===== Complete VServer DRBD Example Heartbeat Config =====

The simplest way to combine related resources in heartbeat is to use a group. With a group you do not have to specify colocation and ordering constraints; they are implied. To use the above DRBD and VServer ocf resource agents together in a group, your heartbeat configuration will look something like this:
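A sketch of such a group, combining drbd, Filesystem, and VServer primitives for a vserver named ''foo''. The resource ids, the provider bar, and both custom agents' parameter names are assumptions; the Filesystem parameters (device, directory, fstype) are the stock agent's real ones:

```xml
<group id="vserver_foo_group">
  <primitive id="drbd_foo" class="ocf" provider="bar" type="drbd">
    <instance_attributes id="drbd_foo_ia">
      <attributes>
        <nvpair id="drbd_foo_resource" name="resource" value="foo"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="fs_foo" class="ocf" provider="heartbeat" type="Filesystem">
    <instance_attributes id="fs_foo_ia">
      <attributes>
        <nvpair id="fs_foo_device" name="device" value="/dev/drbd0"/>
        <nvpair id="fs_foo_directory" name="directory" value="/vservers"/>
        <nvpair id="fs_foo_fstype" name="fstype" value="ext3"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="vserver_foo" class="ocf" provider="bar" type="VServer">
    <instance_attributes id="vserver_foo_ia">
      <attributes>
        <nvpair id="vserver_foo_name" name="vserver" value="foo"/>
      </attributes>
    </instance_attributes>
  </primitive>
</group>
```

Members of a group start in the order listed and stop in reverse, so the drbd device comes up first, then the filesystem is mounted, then the vserver is started.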


Note the use of the Filesystem agent to mount your drbd device before starting your vserver.
