AndrewBeekhof

RaoulScarazzini

DanFrîncu

Legal Notice

The text of and illustrations in this document are licensed under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA")[1].

In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

In addition to the requirements of this license, the following activities are looked upon favorably:

If you are distributing Open Publication works on hardcopy or CD-ROM, you provide email notification to the authors of your intent to redistribute at least thirty days before your manuscript or media freeze, to give the authors time to provide updated documents. This notification should describe modifications, if any, made to the document.

All substantive modifications (including deletions) should be either clearly marked up in the document or described in an attachment to the document.

Finally, while it is not mandatory under this license, it is considered good form to offer a free copy of any hardcopy or CD-ROM expression of the author(s) work.

Abstract

The purpose of this document is to provide a start-to-finish guide to building an example active/passive cluster with Pacemaker and show how it can be converted to an active/active one.

The example cluster will use:

Fedora 21 as the host operating system

Corosync to provide messaging and membership services,

Pacemaker to perform resource management,

DRBD as a cost-effective alternative to shared storage,

GFS2 as the cluster filesystem (in active/active mode)

Given the graphical nature of the Fedora install process, a number of screenshots are included. However, the guide is primarily composed of commands, the reasons for executing them, and their expected outputs.

Preface

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.

In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later include the Liberation Fonts set by default.

Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.

The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a graphical interface, all presented in proportional bold and all distinguishable by context.

Mono-spaced Bold Italic or Proportional Bold Italic

Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:

To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.

Note the words in bold italics above: username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.

Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.

Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled “Important” will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla[2] against the product Pacemaker.

When submitting a bug report, be sure to mention the manual's identifier: Clusters_from_Scratch

If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.

Computer clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types.

This document will walk through the installation and setup of simple clusters using the Fedora distribution, version 21.

The clusters described here will use Pacemaker and Corosync to provide resource management and messaging. Required packages and modifications to their configuration files are described along with the use of the Pacemaker command line tool for generating the XML used for cluster control.

Pacemaker is a central component and provides the resource management required in these systems. This management includes detecting and recovering from the failure of various nodes, resources and services under its control.

When more in-depth information is required, and for real-world usage, please refer to the Pacemaker Explained manual.

It achieves maximum availability for your cluster services (aka. resources) by detecting and recovering from node- and resource-level failures, making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either Corosync or Heartbeat).

Pacemaker’s key features include:

Detection and recovery of node- and service-level failures

Storage agnostic, no requirement for shared storage

Resource agnostic, anything that can be scripted can be clustered

Supports fencing (aka. STONITH) for ensuring data integrity

Supports large and small clusters

Supports both quorate and resource-driven clusters

Supports practically any redundancy configuration

Automatically replicated configuration that can be updated from any node

Ability to specify cluster-wide service ordering, colocation and anti-colocation

Non-cluster-aware components. These pieces include the resources themselves; scripts that start, stop and monitor them; and a local daemon that masks the differences between the different standards these scripts implement.

Resource management. Pacemaker provides the brain that processes and reacts to events regarding the cluster. These events include nodes joining or leaving the cluster; resource events caused by failures, maintenance and scheduled activities; and other administrative actions. Pacemaker will compute the ideal state of the cluster and plot a path to achieve it after any of these events. This may include moving resources, stopping nodes and even forcing them offline with remote power switches.

When combined with Corosync, Pacemaker also supports popular open source cluster filesystems.[3]

Due to past standardization within the cluster filesystem community, cluster filesystems make use of a common distributed lock manager (DLM), which relies on Corosync for its messaging and membership capabilities (which nodes are up or down) and on Pacemaker for fencing services.

The CIB (Cluster Information Base) uses XML to represent both the cluster’s configuration and the current state of all resources in the cluster. The contents of the CIB are automatically kept in sync across the entire cluster and are used by the PEngine (Policy Engine) to compute the ideal state of the cluster and how it should be achieved.

This list of instructions is then fed to the Designated Controller (DC). Pacemaker centralizes all cluster decision making by electing one of the CRMd (Cluster Resource Management daemon) instances to act as a master. Should the elected CRMd process (or the node it is on) fail, a new one is quickly established.

The DC carries out the PEngine’s instructions in the required order by passing them to either the Local Resource Management daemon (LRMd) or CRMd peers on other nodes via the cluster messaging infrastructure (which in turn passes them on to their LRMd process).

The peer nodes all report the results of their operations back to the DC and, based on the expected and actual results, will either execute any actions that needed to wait for the previous one to complete, or abort processing and ask the PEngine to recalculate the ideal cluster state based on the unexpected results.

In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery. For this, Pacemaker comes with STONITHd.

STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and is usually implemented with a remote power switch.

In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) so that they can easily be monitored for failure. However, STONITHd takes care of understanding the STONITH topology, so that its clients simply request that a node be fenced, and it does the rest.

When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload.

[3]
Even though Pacemaker also supports Heartbeat, the filesystems need to use the stack for messaging and membership, and Corosync seems to be what they’re standardizing on. Technically, it would be possible for them to support Heartbeat as well, but there seems little interest in this.

Point your browser to https://getfedora.org/, choose a flavor (Server is an appropriate choice), and download the installation image appropriate to your hardware.

Burn the installation image to a DVD or USB drive[4] and boot from it, or use the image to boot a virtual machine.

After starting the installation, select your language and keyboard layout at the welcome screen.[5]

At this point, you get a chance to tweak the default installation options.

In the NETWORK & HOSTNAME section you’ll want to:

Assign your machine a host name. I happen to control the clusterlabs.org domain name, so I will use pcmk-1.clusterlabs.org here.

Assign a fixed IPv4 address. In this example, I’ll use 192.168.122.101.

Important

Do not accept the default network settings. Cluster machines should never obtain an IP address via DHCP, because DHCP’s periodic address renewal will interfere with corosync.

If you miss this step during installation, it can easily be fixed later: open the system settings, select the network section, and choose the device to configure.

In the Software Selection section (try saying that 10 times quickly), leave all Add-Ons unchecked so that we see everything that gets installed. We’ll install any extra software we need later.

Important

By default Fedora uses LVM for partitioning which allows us to dynamically change the amount of space allocated to a given partition.

However, by default it also allocates all free space to the / (aka. root) partition, which cannot be dynamically reduced in size (dynamic increases are fine, by the way).

So if you plan on following the DRBD or GFS2 portions of this guide, you should reserve at least 1GiB of space on each machine from which to create a shared volume. To do so, enter the Installation Destination section, where (after choosing which hard drive you wish to install to) you will be given an opportunity to reduce the size of the root partition. If you want the reserved space to be available within an LVM volume group, be sure to select Modify… next to the volume group name and change the Size policy: to Fixed or As large as possible.

It is highly recommended to enable NTP on your cluster nodes. Doing so ensures all nodes agree on the current time and makes reading log files significantly easier. You can do this in the DATE & TIME section.[6]

Once you’ve completed the installation, set a root password as instructed. For the purposes of this document, it is not necessary to create any additional users. After the node reboots, you’ll see a (possibly mangled) login prompt on the console. Login using root and the password you created earlier.

Note

From here on, we’re going to be working exclusively from the terminal.

During installation, we filled in the machine’s fully qualified domain name (FQDN), which can be rather long when it appears in cluster logs and status output. See for yourself how the machine identifies itself:
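(The sample output below assumes the FQDN chosen during installation; your names will differ.)

[root@pcmk-1 ~]# uname -n
pcmk-1.clusterlabs.org
[root@pcmk-1 ~]# dnsdomainname
clusterlabs.org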

The output from the second command is fine, but we really don’t need the domain name included in the basic host details. To address this, we need to use the hostnamectl tool to strip off the domain name.
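A command along these lines, run on each node with the appropriate short name, does the job:

[root@pcmk-1 ~]# hostnamectl set-hostname pcmk-1
[root@pcmk-1 ~]# uname -n
pcmk-1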

Now we need to make sure we can communicate with the machines by their name. If you have a DNS server, add additional entries for the two machines. Otherwise, you’ll need to add the machines to /etc/hosts on both nodes. Below are the entries for my cluster nodes:
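192.168.122.101	pcmk-1.clusterlabs.org	pcmk-1
192.168.122.102	pcmk-2.clusterlabs.org	pcmk-2

(The address for pcmk-2 is an assumption following the same pattern as pcmk-1; adjust both entries to match your own network.)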

SSH is a convenient and secure way to copy files and perform commands remotely. For the purposes of this guide, we will create a key without a password (using the -N option) so that we can perform remote actions without being prompted.
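Creating and distributing such a key might look like the following (the key type and file locations are one reasonable choice, not a requirement):

[root@pcmk-1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
[root@pcmk-1 ~]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
[root@pcmk-1 ~]# scp -r ~/.ssh pcmk-2: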

Warning

Unprotected SSH keys (those without a password) are not recommended for servers exposed to the outside world. We use them here only to simplify the demo.

Before the cluster can be configured, the pcs daemon must be started and enabled to start at boot time on each node. This daemon works with the pcs command-line interface to manage synchronizing the corosync configuration across all nodes in the cluster.

Start and enable the daemon by issuing the following commands on each node:

# systemctl start pcsd.service
# systemctl enable pcsd.service

The installed packages will create a hacluster user with a disabled password. While this is fine for running pcs commands locally, the account needs a login password in order to perform such tasks as syncing the corosync configuration, or starting and stopping the cluster on other nodes.

This tutorial will make use of such commands, so now we will set a password for the hacluster user, using the same password on both nodes:

# passwd hacluster
password:

Note

Alternatively, to script this process or set the password on a different machine from the one you’re logged into, you can use the --stdin option for passwd:
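# echo mysupersecretpassword | passwd --stdin hacluster

(Substitute a password of your own, of course.)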

The version of pcs shipped with Fedora 21 will bind only to the host’s IPv6 address in some circumstances. If you get errors with pcs cluster auth, add this line before the first server.run line in /usr/lib/pcsd/ssl.rb to bind to IPv4 only:

webrick_options[:BindAddress] = '0.0.0.0'

And restart pcsd:

[root@pcmk-1 ~]# systemctl restart pcsd

This is a temporary workaround that will get removed if the pcsd package is later updated.

Next, use pcs cluster setup to generate and synchronize the corosync configuration:
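First authenticate as the hacluster user, then create the cluster; the name mycluster below matches the cluster name referenced later in this guide:

[root@pcmk-1 ~]# pcs cluster auth pcmk-1 pcmk-2
Username: hacluster
Password:
pcmk-1: Authorized
pcmk-2: Authorized
[root@pcmk-1 ~]# pcs cluster setup --name mycluster pcmk-1 pcmk-2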

If you received an authorization error for either of those commands, make sure you configured the hacluster user account on each node with the same password.

Note

Early versions of pcs, such as the one shipped with Fedora 20 and earlier, require that --name be omitted from the above command.

If using a different cluster shell such as crmsh rather than pcs, you must manually create a corosync.conf and copy it to all nodes.

The pcs command will configure corosync to use UDP unicast transport; if you choose to use multicast instead, choose a multicast address carefully.[7]

The final /etc/corosync/corosync.conf configuration on each node should look something like the sample in Appendix B, Sample Corosync Configuration.

Note

With versions of Corosync before 2.0, Pacemaker could obtain membership and quorum from a custom Corosync plugin. This plugin also had the capability to start Pacemaker automatically when Corosync was started. Neither behavior is possible with Corosync 2.0 and later, as support for plugins was removed.

Because Pacemaker made use of the plugin for message routing, a cluster node using an older Corosync cannot talk to one using Corosync 2.0 or later. Rolling upgrades between these versions are therefore not possible, and an alternate strategy[8] must be used.

In the dark past, configuring Pacemaker required the administrator to read and write XML. In true UNIX style, there were also a number of different commands that specialized in different aspects of querying and updating the cluster.

All of that has been greatly simplified with the creation of unified command-line shells (and GUIs) that hide all the messy XML scaffolding.

These shells take all the individual aspects required for managing and configuring a cluster, and pack them into one simple-to-use command-line tool.

They even allow you to queue up several changes at once and commit them atomically.

There are currently two command-line shells that people use, pcs and crmsh. This edition of Clusters from Scratch is based on pcs.

Note

The two shells share many concepts but the scope, layout and syntax do differ, so make sure you read the version of this guide that corresponds to the software installed on your system.

Important

Since pcs has the ability to manage all aspects of the cluster (both corosync and pacemaker), it requires a specific cluster stack to be in use: corosync 2.0 or later with votequorum plus Pacemaker 1.1.8 or later.

As you can see, the different aspects of cluster management are separated into categories: resource, cluster, stonith, property, constraint, and status. To discover the functionality available in each of these categories, one can issue the command pcs category help. Below is an example of all the options available under the status category.

[root@pcmk-1 ~]# pcs status help
Usage: pcs status [commands]...
View current cluster and resource status
Commands:
    [status]
        View all information about the cluster and resources
    resources
        View current status of cluster resources
    groups
        View currently configured groups and their resources
    cluster
        View current cluster status
    corosync
        View current membership information as seen by corosync
    nodes [corosync|both|config]
        View current status of nodes from pacemaker. If 'corosync' is
        specified, print nodes currently configured in corosync, if 'both'
        is specified, print nodes from both corosync & pacemaker. If 'config'
        is specified, print nodes from corosync & pacemaker configuration.
    pcsd <node> ...
        Show the current status of pcsd on the specified nodes
    xml
        View xml version of status (output from crm_mon -r -1 -X)

Additionally, if you are interested in the version and supported cluster stack(s) available with your Pacemaker installation, run:
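# pacemakerd --features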

If the SNMP and/or email options are not listed, then Pacemaker was not built to support them. This may be due to the choice of your distribution, or the required libraries may not have been available. Please contact whoever supplied you with the packages for more details.

Now that corosync is configured, it is time to start the cluster. The command below will start corosync and pacemaker on both nodes in the cluster. If you are issuing the start command from a different node than the one you ran the pcs cluster auth command on earlier, you must authenticate on the current node you are logged into before you will be allowed to start the cluster.
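[root@pcmk-1 ~]# pcs cluster start --all
pcmk-1: Starting Cluster...
pcmk-2: Starting Cluster...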

In this example, we are not enabling the corosync and pacemaker services to start at boot. If a cluster node fails or is rebooted, you will need to run pcs cluster start nodename (or --all) to start the cluster on it. While you could enable the services to start at boot, requiring a manual start of cluster services gives you the opportunity to do a post-mortem investigation of a node failure before returning it to the cluster.

In order to guarantee the safety of your data,[9] STONITH[10] is enabled by default in Pacemaker. However, Pacemaker also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster would not be able to make progress if a situation requiring node fencing arose).

The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled.

We disable STONITH here only to defer the discussion of its configuration, which can differ widely from one installation to the next. See Section 8.1, “What is STONITH?” for information on why STONITH is important and details on how to configure it.
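For now, disable STONITH and verify that the configuration is valid:

[root@pcmk-1 ~]# pcs property set stonith-enabled=false
[root@pcmk-1 ~]# crm_verify -L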

Our first resource will be a unique IP address that the cluster can bring up on either node. Regardless of where any cluster service(s) are running, end users need a consistent address to contact them on. Here, I will choose 192.168.122.120 as the floating address, give it the imaginative name ClusterIP and tell the cluster to check whether it is running every 30 seconds.

Warning

The chosen address must not already be in use on the network. Do not reuse an IP address one of the nodes already has configured.
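With that caveat in mind, creating the resource might look like this (the netmask below is an assumption; use your own network's prefix length):

[root@pcmk-1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
      ip=192.168.122.120 cidr_netmask=24 op monitor interval=30s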

Notice that pcmk-1 is OFFLINE for cluster purposes (its PCSD is still Online, allowing it to receive pcs commands, but it is not participating in the cluster).

Also notice that ClusterIP is now running on pcmk-2 — failover happened automatically, and no errors are reported.

Quorum

If a cluster splits into two (or more) groups of nodes that can no longer communicate with each other (aka. partitions), quorum is used to prevent resources from starting on more nodes than desired, which would risk data corruption.

A cluster has quorum when more than half of all known nodes are online in the same partition, or for the mathematically inclined, whenever the following equation is true:

total_nodes < 2 * active_nodes

For example, if a 5-node cluster split into 3- and 2-node partitions, the 3-node partition would have quorum and could continue serving resources. If a 6-node cluster split into two 3-node partitions, neither partition would have quorum; pacemaker’s default behavior in such cases is to stop all resources, in order to prevent data corruption.

Two-node clusters are a special case. By the above definition, a two-node cluster would only have quorum when both nodes are running. This would make the creation of a two-node cluster pointless,[11] but corosync has the ability to treat two-node clusters as if only one node is required for quorum.

The pcs cluster setup command will automatically configure two_node: 1 in corosync.conf, so a two-node cluster will "just work".
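The relevant corosync.conf stanza looks something like this:

quorum {
    provider: corosync_votequorum
    two_node: 1
}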

If you are using a different cluster shell, you will have to configure corosync.conf appropriately yourself. If you are using older versions of corosync, you will have to ignore quorum at the pacemaker level, using pcs property set no-quorum-policy=ignore (or the equivalent command if you are using a different cluster shell).

Now, simulate node recovery by restarting the cluster stack on pcmk-1, and check the cluster’s status.
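For example:

[root@pcmk-1 ~]# pcs cluster start pcmk-1
pcmk-1: Starting Cluster...
[root@pcmk-1 ~]# pcs status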

In most circumstances, it is highly desirable to prevent healthy resources from being moved around the cluster. Moving resources almost always requires a period of downtime. For complex services such as databases, this period can be quite long.

To address this, Pacemaker has the concept of resource stickiness, which controls how strongly a service prefers to stay running where it is. You may like to think of it as the "cost" of any downtime. By default, Pacemaker assumes there is zero cost associated with moving resources and will do so to achieve "optimal"[12] resource placement. We can specify a different stickiness for every resource, but it is often sufficient to change the default.
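For example, to give every resource a default stickiness of 100:

# pcs resource defaults resource-stickiness=100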

Earlier versions of pcs, such as the one shipped with Fedora 20, require that rsc be added after resource in the above commands.

[9]
If the data is corrupt, there is little point in continuing to make it available

[10]
A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes

[11]
Some would argue that two-node clusters are always pointless, but that is an argument for another time

[12]
Pacemaker’s definition of optimal may not always agree with that of a human. The order in which Pacemaker processes lists of resources and nodes creates implicit preferences in situations where the administrator has not explicitly specified them.

Now that we have a basic but functional active/passive two-node cluster, we’re ready to add some real services. We’re going to start with Apache because it is a feature of many clusters and relatively simple to configure.

Before continuing, we need to make sure Apache is installed on both hosts. We also need the wget tool in order for the cluster to be able to check the status of the Apache server.

# yum install -y httpd wget

Important

Do not enable the httpd service. Services that are intended to be managed via the cluster software should never be managed by the OS.

It is often useful, however, to manually start the service, verify that it works, then stop it again, before adding it to the cluster. This allows you to resolve any non-cluster-related problems before continuing. Since this is a simple example, we’ll skip that step here.

We need to create a page for Apache to serve. On Fedora, the default Apache document root is /var/www/html, so we’ll create an index file there. For the moment, we will simplify things by serving a static site and manually synchronizing the data between the two nodes, so run this command on both nodes:
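One simple possibility:

# cat <<-END >/var/www/html/index.html
<html>
<body>My Test Site - $(hostname)</body>
</html>
END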

At this point, Apache is ready to go, and all that needs to be done is to add it to the cluster. Let’s call the resource WebSite. We need to use an OCF resource script called apache in the heartbeat namespace.[13] The script’s only required parameter is the path to the main Apache configuration file, and we’ll tell the cluster to check once a minute that Apache is still running.
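The resource creation might look like this (the statusurl parameter is an assumption; it presumes Apache's server-status handler is enabled for localhost, which the apache resource agent uses for monitoring):

[root@pcmk-1 ~]# pcs resource create WebSite ocf:heartbeat:apache \
      configfile=/etc/httpd/conf/httpd.conf \
      statusurl="http://localhost/server-status" \
      op monitor interval=1min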

By default, the operation timeout for all resources' start, stop, and monitor operations is 20 seconds. In many cases, this timeout period is less than a particular resource’s advised timeout period. For the purposes of this tutorial, we will adjust the global operation timeout default to 240 seconds.
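For example:

# pcs resource op defaults timeout=240s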

In a production cluster, it is usually better to adjust each resource’s start, stop, and monitor timeouts to values that are appropriate to the behavior observed in your environment, rather than adjust the global default.

To reduce the load on any one machine, Pacemaker will generally try to spread the configured resources across the cluster nodes. However, we can tell the cluster that two resources are related and need to run on the same host (or not at all). Here, we instruct the cluster that WebSite can only run on the host that ClusterIP is active on.

To achieve this, we use a colocation constraint that indicates it is mandatory for WebSite to run on the same node as ClusterIP. The "mandatory" part of the colocation constraint is indicated by using a score of INFINITY. The INFINITY score also means that if ClusterIP is not active anywhere, WebSite will not be permitted to run.
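The constraint might be added like so:

# pcs constraint colocation add WebSite with ClusterIP INFINITY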

Note

If ClusterIP is not active anywhere, WebSite will not be permitted to run anywhere.

Important

Colocation constraints are "directional", in that they imply certain things about the order in which the two resources will have a location chosen. In this case, we’re saying that WebSite needs to be placed on the same machine as ClusterIP, which implies that the cluster must know the location of ClusterIP before choosing a location for WebSite.

Like many services, Apache can be configured to bind to specific IP addresses on a host or to the wildcard IP address. If Apache binds to the wildcard, it doesn’t matter whether an IP address is added before or after Apache starts; Apache will respond on that IP just the same. However, if Apache binds only to certain IP address(es), the order matters: If the address is added after Apache starts, Apache won’t respond on that address.

To be sure our WebSite responds regardless of Apache’s address configuration, we need to make sure ClusterIP not only runs on the same node, but starts before WebSite. A colocation constraint only ensures the resources run together, not the order in which they are started and stopped.

We do this by adding an ordering constraint. By default, all order constraints are mandatory, which means that the recovery of ClusterIP will also trigger the recovery of WebSite.
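For example:

# pcs constraint order ClusterIP then WebSite
Adding ClusterIP WebSite (kind: Mandatory) (Options: first-action=start then-action=start)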

Pacemaker does not rely on any sort of hardware symmetry between nodes, so it may well be that one machine is more powerful than the other. In such cases, it makes sense to host the resources on the more powerful node if it is available. To do this, we create a location constraint.

In the location constraint below, we are saying the WebSite resource prefers the node pcmk-1 with a score of 50. Here, the score indicates how badly we’d like the resource to run at this location.
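# pcs constraint location WebSite prefers pcmk-1=50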

There are always times when an administrator needs to override the cluster and force resources to move to a specific location. In this example, we will force the WebSite to move to pcmk-1 by updating our previous location constraint with a score of INFINITY.
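# pcs constraint location WebSite prefers pcmk-1=INFINITY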

Once we’ve finished whatever activity required us to move the resources to pcmk-1 (in our case nothing), we can then allow the cluster to resume normal operation by removing the new constraint. Since we previously configured a default stickiness, the resources will remain on pcmk-1.
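First list the constraints with their IDs, then remove the INFINITY one (the ID shown below is illustrative; use whatever pcs constraint --full reports):

# pcs constraint --full
# pcs constraint remove location-WebSite-pcmk-1-INFINITY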

Even if you’re serving up static websites, having to manually synchronize the contents of that website to all the machines in the cluster is not ideal. For dynamic websites, such as a wiki, it’s not even an option. Not everyone can afford network-attached storage, but somehow the data needs to be kept in sync.

DRBD will need its own block device on each node. This can be a physical disk partition or logical volume, of whatever size you need for your data. For this document, we will use a 1GiB logical volume, which is more than sufficient for a single HTML file and (later) GFS2 metadata.
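Creating such a volume might look like this (the volume group name is an assumption; pick one with free space on your machine):

[root@pcmk-1 ~]# vgdisplay | grep -e Name -e Free
[root@pcmk-1 ~]# lvcreate --name drbd-demo --size 1G fedora-server_pcmk-1   # substitute your volume group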

Because we have not yet initialized the data, this node’s data is marked as Inconsistent. Because we have not yet initialized the second node, the local state is WFConnection (waiting for connection), and the partner node’s status is marked as Unknown.

Now, repeat the above commands on the second node. This time, when we check the status, it shows:

You can see the state has changed to Connected, meaning the two DRBD nodes are communicating properly, and both nodes are in Secondary role with Inconsistent data.

To make the data consistent, we need to tell DRBD which node should be considered to have the correct data. In this case, since we are creating a new resource, both have garbage, so we’ll just pick pcmk-1 and run this command on it:
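(The resource name wwwdata below is an assumption; use the name defined in your DRBD configuration.)

[root@pcmk-1 ~]# drbdadm primary --force wwwdata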

We can see that this node has the Primary role, the partner node has the Secondary role, this node’s data is now considered UpToDate, the partner node’s data is still Inconsistent, and a progress bar shows how far along the partner node is in synchronizing the data.

One handy feature pcs has is the ability to queue up several changes into a file and commit those changes atomically. To do this, start by populating the file with the current raw XML config from the CIB.

# pcs cluster cib drbd_cfg

Using the pcs -f option, make changes to the configuration saved in the drbd_cfg file. These changes will not be seen by the cluster until the drbd_cfg file is pushed into the live cluster’s CIB later.

Here, we create a cluster resource for the DRBD device, and an additional clone resource to allow the resource to run on both nodes at the same time.
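A sketch of those two steps, again assuming the DRBD resource is named wwwdata:

[root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
      drbd_resource=wwwdata op monitor interval=60s
[root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \
      master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
      notify=true
[root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg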

We can see that WebDataClone (our DRBD device) is running as master (DRBD’s primary role) on pcmk-1 and slave (DRBD’s secondary role) on pcmk-2.

Important

The resource agent should load the DRBD module when needed if it’s not already loaded. If that does not happen, configure your operating system to load the module at boot time. For Fedora 21, you would run this on both nodes:
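# echo drbd >/etc/modules-load.d/drbd.conf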

Now that we have a working DRBD device, we need to mount its filesystem.

In addition to defining the filesystem, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary was promoted).

We are going to take a shortcut when creating the resource this time. Instead of explicitly saying we want the ocf:heartbeat:Filesystem script, we are only going to ask for Filesystem. We can do this because we know there is only one resource script named Filesystem available to pacemaker, and that pcs is smart enough to fill in the ocf:heartbeat: portion for us correctly in the configuration. If there were multiple Filesystem scripts from different OCF providers, we would need to specify the exact one we wanted.

Once again, we will queue our changes to a file and then push the new configuration to the cluster as the final step.
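A sketch of the whole sequence (the DRBD device path and the plain-filesystem type are assumptions; match them to your own setup):

[root@pcmk-1 ~]# pcs cluster cib fs_cfg
[root@pcmk-1 ~]# pcs -f fs_cfg resource create WebFS Filesystem \
      device="/dev/drbd1" directory="/var/www/html" fstype="xfs"
[root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebFS with WebDataClone \
      INFINITY with-rsc-role=Master
[root@pcmk-1 ~]# pcs -f fs_cfg constraint order promote WebDataClone then start WebFS
[root@pcmk-1 ~]# pcs cluster cib-push fs_cfg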

Previously, we used pcs cluster stop pcmk-1 to stop all cluster services on pcmk-1, failing over the cluster resources, but there is another way to safely simulate node failure.

We can put the node into standby mode. Nodes in this state continue to run corosync and pacemaker but are not allowed to run resources. Any resources found active there will be moved elsewhere. This feature can be particularly useful when performing system administration tasks such as updating packages used by cluster resources.

Put the active node into standby mode, and observe the cluster move all the resources to the other node. The node’s status will change to indicate that it can no longer host resources.
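For example:

[root@pcmk-1 ~]# pcs cluster standby pcmk-1
[root@pcmk-1 ~]# pcs status

When you are finished, tell the cluster that the node may host resources again:

[root@pcmk-1 ~]# pcs cluster unstandby pcmk-1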

STONITH (Shoot The Other Node In The Head aka. fencing) protects your data from being corrupted by rogue nodes or unintended concurrent access.

Just because a node is unresponsive doesn’t mean it has stopped accessing your data. The only way to be 100% sure that your data is safe is to use STONITH to ensure that the node is truly offline before allowing the data to be accessed from another node.

STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere.

It is crucial that your STONITH device can allow the cluster to differentiate between a node failure and a network failure.

The biggest mistake people make in choosing a STONITH device is to use a remote power switch (such as many on-board IPMI controllers) that shares power with the node it controls. In such cases, the cluster cannot be sure if the node is really offline, or active and suffering from a network fault.

Likewise, any device that relies on the machine being active (such as SSH-based "devices" used during testing) is inappropriate.

[root@pcmk-1 ~]# pcs stonith describe fence_ipmilan
Stonith options for: fence_ipmilan
  ipport: TCP/UDP port to use for connection with device
  inet6_only: Forces agent to use IPv6 addresses only
  ipaddr (required): IP Address or Hostname
  passwd_script: Script to retrieve password
  method: Method to fence (onoff|cycle)
  inet4_only: Forces agent to use IPv4 addresses only
  passwd: Login password or passphrase
  lanplus: Use Lanplus to improve security of connection
  auth: IPMI Lan Auth type.
  cipher: Ciphersuite to use (same as ipmitool -C parameter)
  privlvl: Privilege level on IPMI device
  action (required): Fencing Action
  login: Login Name
  verbose: Verbose mode
  debug: Write debug information to given file
  version: Display version information and exit
  help: Display help and exit
  power_wait: Wait X seconds after issuing ON/OFF
  login_timeout: Wait X seconds for cmd prompt after login
  power_timeout: Test X seconds for status change after ON/OFF
  delay: Wait X seconds before fencing is started
  ipmitool_path: Path to ipmitool binary
  shell_timeout: Wait X seconds for cmd prompt after issuing command
  retry_on: Count of attempts to retry power on
  sudo: Use sudo (without password) when calling 3rd party software.
  stonith-timeout: How long to wait for the STONITH action to complete per a stonith device.
  priority: The priority of the stonith resource. Devices are tried in order of highest priority to lowest.
  pcmk_host_map: A mapping of host names to port numbers for devices that do not support host names.
  pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list).
  pcmk_host_check: How to determine which machines are controlled by the device.

The primary requirement for an Active/Active cluster is that the data required for your services is available, simultaneously, on both machines. Pacemaker makes no requirement on how this is achieved; you could use a SAN if you had one available, but since DRBD supports multiple Primaries, we can continue to use it here.

Before we do anything to the existing partition, we need to make sure it is unmounted. We do this by telling the cluster to stop the WebFS resource. This will ensure that other resources (in our case, Apache) using WebFS are not only stopped, but stopped in the correct order.

-j 2 indicates that the filesystem should reserve enough space for two journals (one for each node that will access the filesystem).

-t mycluster:web specifies the lock table name. The format for this field is clustername:fsname. For clustername, we need to use the same value we specified originally with pcs cluster setup --name (which is also the value of cluster_name in /etc/corosync/corosync.conf). If you are unsure what your cluster name is, you can look in /etc/corosync/corosync.conf or execute the command pcs cluster corosync pcmk-1 | grep cluster_name.
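Putting the options together, the format command looks something like this (-p lock_dlm selects the DLM-based locking required in a cluster; the device path is an assumption, so use whichever device backs your filesystem):

[root@pcmk-1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:web /dev/drbd1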

Now we can (re-)populate the new filesystem with data (web pages). We’ll create yet another variation on our home page.

There’s no point making the services active on both locations if we can’t reach them both, so let’s clone the IP address.

The IPaddr2 resource agent has built-in intelligence for when it is configured as a clone. It will utilize a multicast MAC address to have the local switch send the relevant packets to all nodes in the cluster, together with iptables clusterip rules on the nodes so that any given packet will be grabbed by exactly one node. This will give us a simple but effective form of load-balancing requests between our two nodes.

clone-max=2 tells the resource agent to split packets this many ways. This should equal the number of nodes that can host the IP.

clone-node-max=2 says that one node can run up to 2 instances of the clone. This should also equal the number of nodes that can host the IP, so that if any node goes down, another node can take over the failed node’s "request bucket". Otherwise, requests intended for the failed node would be discarded.

globally-unique=true tells the cluster that one clone isn’t identical to another (each handles a different "bucket"). This also tells the resource agent to insert iptables rules so each host only processes packets in its bucket(s).
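Combining these meta options, the clone operation might look like:

[root@pcmk-1 ~]# pcs resource clone ClusterIP \
      clone-max=2 clone-node-max=2 globally-unique=true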

Notice that when the ClusterIP becomes a clone, the constraints referencing ClusterIP now reference the clone. This is done automatically by pcs.

Now we must tell the resource how to decide which requests are processed by which hosts. To do this, we specify the clusterip_hash parameter. The value of sourceip means that the source IP address of incoming packets will be hashed; each node will process a certain range of hashes.
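For example:

[root@pcmk-1 ~]# pcs resource update ClusterIP clusterip_hash=sourceip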

Now that we have a cluster filesystem ready to go, and our nodes can load-balance requests to a shared IP address, we can configure the cluster so both nodes mount the filesystem and respond to web requests.

Clone the filesystem and Apache resources in a new configuration. Notice how pcs automatically updates the relevant constraints again.
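A sketch of that step (the CIB file name is arbitrary):

[root@pcmk-1 ~]# pcs cluster cib active_cfg
[root@pcmk-1 ~]# pcs -f active_cfg resource clone WebFS
[root@pcmk-1 ~]# pcs -f active_cfg resource clone WebSite
[root@pcmk-1 ~]# pcs cluster cib-push active_cfg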

Testing failover is left as an exercise for the reader. For example, you can put one node into standby mode, use pcs status to confirm that its ClusterIP clone was moved to the other node, and use arping to verify that packets are not being lost from any source host.

Note

You may find that when a failed node rejoins the cluster, both ClusterIP clones stay on one node, due to the resource stickiness. While this works fine, it effectively eliminates load-balancing and returns the cluster to an active-passive setup again. You can avoid this by disabling stickiness for the IP address resource:
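# pcs resource meta ClusterIP-clone resource-stickiness=0

(ClusterIP-clone is the name pcs generates for the clone by default; check pcs status for the actual name on your cluster.)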

The output shows state information automatically obtained about the cluster, including:

cluster-infrastructure - the cluster communications layer in use (heartbeat or corosync)

cluster-name - the cluster name chosen by the administrator when the cluster was created

dc-version - the version (including upstream source-code hash) of Pacemaker used on the Designated Controller

The output also shows options set by the administrator that control the way the cluster operates, including:

stonith-enabled=true - whether the cluster is allowed to use STONITH resources

This shows resource option defaults that apply to every resource that does not explicitly set the option itself. Above:

resource-stickiness - specify the aversion to moving healthy resources to other machines

Users of the services provided by the cluster require an unchanging address with which to access it. Additionally, we cloned the address so it will be active on both nodes. An iptables rule (created as part of the resource agent) is used to ensure that each request only gets processed by one of the two clone instances. The additional meta options tell the cluster that we want two instances of the clone (one "request bucket" for each node) and that if one node fails, then the remaining node should hold both.

Here, we define the DRBD service and specify which DRBD resource (from /etc/drbd.d/*.res) it should manage. We make it a master/slave resource and, in order to have an active/active setup, allow both instances to be promoted to master at the same time. We also set the notify option so that the cluster will tell the DRBD agent when its peer changes state.

The cluster filesystem ensures that files are read and written correctly. We need to specify the block device (provided by DRBD), where we want it mounted and that we are using GFS2. Again, it is a clone because it is intended to be active on both nodes. The additional constraints ensure that it can only be started on nodes with active DLM and DRBD instances.

Lastly, we have the actual service, Apache. We need only tell the cluster where to find its main configuration file and restrict it to running on nodes that have the required filesystem mounted and the IP address active.