In this post, I will demonstrate how to put a MySQL cluster behind an HAProxy load-balancer so that the database keeps serving requests even if the master node crashes. This post covers master-slave load-balancing: data is written to the master only but can be read from any node. I will write another post on master-master replication later. Let's start now 🙂

Steps to be performed:

1. Install HAProxy

This step installs HAProxy, an open-source load-balancer, on our Ubuntu server.
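The install command itself is the standard Ubuntu one; a sketch (on very old Ubuntu releases the packaged service may also need enabling in /etc/default/haproxy):

```shell
apt-get update
apt-get install haproxy
# On older Ubuntu releases, also set ENABLED=1 in /etc/default/haproxy
# so that "service haproxy start" actually starts the daemon.
```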

2. Install MySQL client

Next, we need to install mysql-client on the Ubuntu server to connect to our databases, so issue the command below to install it.

root@haproxy-server:/home/shashank# apt-get install mysql-client

Note that if you have already installed MySQL on this server, you can skip this step, as the client will already be present. Issue the mysql command to check.

3. Create users on MySQL servers.

Now we need to add 2 database users for connecting to our MySQL database servers from the HAProxy Ubuntu server. Fail-over needs root-level access to the database, hence one of these users will have equivalent privileges. You could continue with root, but that requires more configuration and it is always safer to use a dedicated user other than root. Note that the queries below have to be run on both database nodes.
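The exact queries are not reproduced here; a commonly used pattern looks like the following (the user names, the `%` host wildcard and the password are placeholders — tighten them to your HAProxy server's IP in production):

```sql
-- Run on BOTH database nodes.
-- Health-check user: HAProxy's mysql-check only needs to open a connection,
-- so this user has no password and no privileges.
CREATE USER 'haproxy_check'@'%';

-- Application user with root-equivalent privileges, used for fail-over
-- and by the application through the load-balancer.
CREATE USER 'haproxy_root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'%';
FLUSH PRIVILEGES;
```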

4. Configure HAProxy.

One main point to remember is to bind HAProxy to the proper host & port. Since my web application runs on a different server, I bound the cluster to 0.0.0.0:3306. Unlike for web servers, HAProxy has no MySQL-specific mode, so I chose tcp mode. If there are mistakes in your HAProxy configuration, the haproxy service will report errors on startup.
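The cluster block described in the text might look like the following minimal haproxy.cfg sketch (server names, IPs, the check user and the stats credentials are placeholders; the global and defaults sections are omitted):

```
# MySQL cluster: TCP mode, round-robin across both nodes
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    # mysql-check logs in as this user; it must exist on both nodes
    option mysql-check user haproxy_check
    server mysql-1 192.168.0.2:3306 check
    server mysql-2 192.168.0.3:3306 check

# Stats web UI on port 8090
listen stats
    bind 0.0.0.0:8090
    mode http
    stats enable
    stats uri /
    stats auth admin:password
```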

In my case the errors were partly because I had initially used port 8080 for the web UI, which was already in use, so I switched to 8090. I had also bound the cluster to 127.0.0.1, whereas it should have been 0.0.0.0 or an IP reachable from the application/web server.

Once this is fixed, start the service; it will start without any error.

5. Test load-balancer.

If you performed the steps correctly, you can now see your MySQL cluster being accessed through HAProxy in a round-robin manner, i.e. one node after the other 🙂 You have to use 127.0.0.1 here & not the public IP or hostname. You can also point your browser to http://<haproxy-server-IP>:8090 (or whichever port you specified in the configuration above) to see the web UI; the credentials are the ones you set in its configuration.

Great 🙂 You can access your cluster from the load-balancer, as you can see above 🙂 Now it's time to see which node is being accessed, so issue the command below on the load-balancer server 2-3 times & you will see the server-ids alternating in round-robin fashion 🙂
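A query that works for this check (using the placeholder user from step 3; each node should have a distinct server_id in its my.cnf for replication, which is what makes the round-robin visible):

```shell
mysql -h 127.0.0.1 -u haproxy_root -p -e "SHOW VARIABLES LIKE 'server_id';"
```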

6. Test your application

Now that basic testing has been done, it's time to test our setup in a live scenario. To do this, stop the mysql service on one database node & execute the same command (that you ran above) on the load-balancer. You will see the server-id of the other node every time you run it. Now bring the service back up and stop it on the other node. Run the same query again & check the result 🙂
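Sketched out, the fail-over test looks like this (service names vary by distribution; `mysql` is assumed here):

```shell
# On one database node: take MySQL down
service mysql stop

# On the load-balancer: the query should now always return
# the surviving node's server_id
mysql -h 127.0.0.1 -u haproxy_root -p -e "SHOW VARIABLES LIKE 'server_id';"

# Bring the node back, then repeat with the other node stopped
service mysql start
```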

Now, change the application code where the database connection string is hard-coded & replace that name/IP with the load-balancer's IP 🙂 Check your application by trying to read from the database: you will see that the data stays accessible even when one of your DB nodes is down 🙂
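In most applications this is a one-line change to the connection settings. A hedged Python sketch (the IPs, user and database name are placeholders, not values from the original post):

```python
# Before: the app pointed straight at the master node, e.g. "192.168.0.2".
# After: it points at the HAProxy load-balancer instead.
DB_CONFIG = {
    "host": "10.0.0.5",       # HAProxy server, NOT an individual MySQL node
    "port": 3306,
    "user": "haproxy_root",   # user created on both DB nodes in step 3
    "password": "password",
    "database": "myapp",
}

def connection_dsn(cfg):
    """Build a DSN string from the config dict (extra keys are ignored)."""
    return "mysql://{user}@{host}:{port}/{database}".format(**cfg)

print(connection_dsn(DB_CONFIG))
```

Whatever MySQL driver the application uses, only the host (and possibly the credentials) change; queries are untouched.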

With this, you have successfully set up a MySQL cluster with load-balancing 🙂 See you soon!


Conga is a server (luci) & client (ricci) based cluster-management solution for Linux. Luci is, simply put, a cluster manager & offers a GUI to create and manage clusters. This tutorial requires 3 RHEL VMs (of course, you can use any similar Linux distribution): 1 VM offers shared storage through iSCSI & also acts as the Cluster Manager (luci is installed here), while the other 2 VMs act as cluster nodes. You can also set up a Red Hat Cluster using only 2 servers, with 1 server acting as iSCSI target, Cluster Manager & cluster node. Click here to learn how to set up iSCSI.

Below are the steps to set up our Red Hat Cluster on RHEL 6.

Lab Description:

Cluster Manager / Shared storage – server.shashank.com 192.168.0.1

Cluster Node1 – node1.shashank.com 192.168.0.2

Cluster Node2 – node2.shashank.com 192.168.0.3

1. Install required packages. Install the luci package on VM1; let's call this VM server (which happens to be my server's hostname). Also set the luci service to start at boot time. Install the ricci packages on the 2 cluster nodes, node1 & node2 (these are the host-names in my lab; choose any name you like!), and also set the ricci service to start at boot time.

yum install luci* -y

chkconfig luci on

yum install ricci* -y

chkconfig ricci on

2. Start & configure required services. Start the luci service on server. Once the service is started, point your browser to https://localhost:8084 and log in with your root password. You will see a slightly overwhelming Luci console 😉 Next, assign a password to the user ricci on both nodes. We are not yet ready to create the cluster here; please read on.
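The commands for this step might look like this (the first on server, the rest on both nodes):

```shell
service luci start    # on server: starts the Luci web UI (port 8084)
passwd ricci          # on node1 & node2: set a password for the ricci user
service ricci start   # on node1 & node2: make sure the ricci agent is running
```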

3. Configure shared storage. Create a partition on server & export it as a LUN to both nodes. This is done using iSCSI & has been discussed here. Issue fdisk -l on both nodes to confirm that an additional disk is visible. If you can't see it, reboot the nodes.

4. Create a quorum disk on either node; I created it on node1. What is a quorum disk, you may ask? 😉 It is a small shared disk the cluster uses to maintain quorum: each node writes its status there and casts votes, which helps decide which node must be active at a given time. Below is the command to create a quorum disk.

mkqdisk -c /device -l quorumdisk

/device is your LUN, i.e. the iSCSI disk shared from server. Make sure you first create a small partition on the LUN and then create the quorum disk on that partition. 100 MB is plenty; it does not need to be big. -l stands for label & can be anything meaningful, so give it a nice name 😉 Now check whether the quorum disk is visible on node2. If not, reboot.
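To verify from the other node, mkqdisk can list the labels it sees; if node2 shows the label, the quorum disk is visible there:

```shell
mkqdisk -L   # lists all quorum disks (and their labels) visible to this node
```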

5. Create a GFS partition on node1. This is what we will use to demonstrate clustering. First create another partition on the LUN & format it with the GFS2 filesystem. Below is how it is done. You may need to install the gfs2 packages first, if they are not available.

mkfs.gfs2 -p lock_dlm -t shashank:GFS -j 2 /device

-p selects the locking protocol, lock_dlm in our case. -t is the lock table name, in the form clustername:fsname; the clustername part (shashank here) must match the name you will give your cluster in Luci (I will come to this shortly). -j 2 creates two journals, one per cluster node. /device is the new LUN partition, which is /dev/sdc3 in my lab. Make changes accordingly.

Now, check whether the GFS partition is visible on node2. If not, reboot. It is better to create the quorum disk & the GFS partition first & then reboot both nodes to ensure a clean configuration.

6. Create a mount-point on both nodes. Choose any name, but make sure it is the same on both of them. I created /gfs on both.

7. Now go back to the Luci console. Click Cluster & then Create. Give the cluster the name you chose while creating the GFS partition (step 5); it's shashank in my case. Select “Use the Same Password for All Nodes“. In Node Name, enter the hostname of node1 (which is node1 in my case). Type in the ricci password that we set in step 2. Click Add Another Node to add node2. Select “Enable Shared Storage Support” since we are using GFS. Now click Create Cluster to start creating your cluster. It will take some time & then you will see both nodes listed as cluster members 😉

Potential pitfalls: check whether the cman & rgmanager services are running on both nodes and configured to start at boot. If not, start them & enable them at boot time. Also disable the NetworkManager service at startup & stop it.
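On RHEL 6 these checks can be scripted as follows (run on both nodes):

```shell
# Make sure the cluster services run now and at every boot
service cman start      && chkconfig cman on
service rgmanager start && chkconfig rgmanager on
# NetworkManager interferes with cluster networking on RHEL 6
service NetworkManager stop && chkconfig NetworkManager off
```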

8. Configure a failover domain. Click Failover Domains & then click Add a New Failover Domain. Give it a name & choose Prioritized. Then enable both nodes under Members & set the priority to 1 for both.

9. Configure the quorum disk. Click Configure & then QDisk. Select Use a Quorum Disk. Then select By Device Label & enter quorumdisk; this is the quorum-disk label, so change it if you chose a different one. Under Heuristics -> Path to Program, type ping -c2 -t1 node1. Set Interval to 2 & Score to 1. Click Add Another Heuristic & do the same for node2, i.e. ping -c2 -t1 node2. These are the tests each node must pass to contribute its vote. Click Apply.

10. Create resources. Click Resources & then click Add. From the drop-down list, select GFS. Give it some nice name 😉 Provide the Mount Point as /gfs (this is what I created on both nodes; choose yours accordingly). For Device, FS Label, or UUID, enter /dev/sdc3 (again, change this to match your set-up). Select the Force Unmount option & click Submit.

11. Create a service group. Click the Service Groups tab & then click Add. Give it some really nice name 😉 (once again!) & ensure the Automatically Start This Service option is selected. Then choose the failover domain created above from the drop-down list & set the Recovery Policy to Relocate. Now click Add Resource, select the resource you created in step 10 from the drop-down list & click Submit.

12. Refresh the browser tab & you will see our service running on the Luci console.

13. Now it's time to test our cluster. We have 2 cluster nodes, each with a /gfs mount-point. If one node fails, the data in /gfs will be available from the other node (which is the very meaning of clustering here). At the moment our service is running on node1. Type the clustat command and check the output; you will find the details there. Note that /gfs is mounted here but not on node2.

Now, let's relocate this service to node2. Type

clusvcadm -r GFS -m node2

Here -r means relocate, GFS is the service name & -m specifies the node to relocate to.

Let's go to node2 & check the status. Type the clustat command & notice the output. Check the available mount-points as well.

So, that's it 🙂 We can see that the /gfs mount-point is clustered between the 2 nodes & remains available even if one of them goes down.