How to Cluster Magento, nginx and MySQL on Multiple Servers for High Availability

Magento is an open-source e-commerce platform built on Zend PHP and MySQL. It is widely adopted by online retailers, with some 150,000 sites known to use it. A single-server setup is easy to deploy, but if your store is a huge success, you will probably need to think about clustering your environment across multiple servers. Clustering is done at the web, database and file-system level, as all web nodes need access to catalog images.

This post is similar to our previous posts on scaling Drupal and WordPress performance, and focuses on how to scale Magento on multiple servers. The software used is Magento version 1.7.0.2, nginx, HAProxy, MySQL Galera Cluster and OCFS2 (Oracle Cluster File System) with shared storage, using Ubuntu 12.04.2 LTS (Precise) 64-bit.

Our setup consists of 6 nodes or servers:

NODE1: web server + database server

NODE2: web server + database server

NODE3: web server + database server

LB1: load balancer (master) + keepalived

LB2: load balancer (backup) + keepalived

ST1: shared storage + ClusterControl

We will be using OCFS2, a shared-disk file system, to serve the web files across our web servers. Each of these web servers will have an nginx web server colocated with a MySQL Galera Cluster instance. We will be using 2 other nodes for load balancing.

Our major steps would be:

Prepare 6 instances

Deploy MySQL Galera Cluster onto NODE1, NODE2 and NODE3 from ST1

Configure iSCSI target on ST1

Configure OCFS2 and mount the shared disk onto NODE1, NODE2 and NODE3

Configure nginx on NODE1, NODE2 and NODE3

Configure Keepalived and HAProxy for web and database load balancing with auto failover

Install Magento and connect it to the Web/DB cluster via the load balancer

Deploy MySQL Galera Cluster

3. The deployment takes about 15 minutes. Once it is complete, note your API key and use it to register the cluster with the ClusterControl UI by going to http://192.168.197.171/cmonapi . You will then see your MySQL Galera Cluster in the UI.

Configure iSCSI

1. The storage server (ST1) needs to export a disk through iSCSI so it can be mounted on all three web servers (NODE1, NODE2 and NODE3). iSCSI essentially tells your kernel it has a SCSI disk, and transports that access over IP. The “server” is called the “target” and the “client” that uses the iSCSI device is the “initiator”.

Install iSCSI target in ST1:

$ sudo apt-get install -y iscsitarget iscsitarget-dkms

2. Enable iscsitarget:

$ sudo sed -i "s|false|true|g" /etc/default/iscsitarget

3. It is preferable to have a separate disk for this file-system clustering purpose, so we are going to use another disk mounted in ST1 (/dev/sdb) and share it among the web server nodes. Define this in the iSCSI target configuration file:
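On Ubuntu 12.04 the iscsitarget package reads its target definitions from /etc/iet/ietd.conf. A minimal sketch of such an export, assuming /dev/sdb as in this setup (the IQN below is an example; choose your own naming):

```conf
# /etc/iet/ietd.conf -- example iSCSI target definition (IQN is illustrative)
Target iqn.2013-06.local.st1:webstorage
        # Export the whole /dev/sdb block device to the initiators
        Lun 0 Path=/dev/sdb,Type=fileio
```

After editing the file, restart the target with `sudo service iscsitarget restart` so the export becomes visible to the initiators.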

*Notes: The attribute lines under the node and cluster clauses must be indented with a tab.

** The following steps should be performed on NODE1, NODE2 and NODE3 unless specified.

5. Create the same configuration file (/etc/ocfs2/cluster.conf) on NODE2 and NODE3. This file must be identical on all nodes in the cluster, and any changes to it must be propagated to the other nodes.
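For illustration, a minimal /etc/ocfs2/cluster.conf for a three-node cluster could look like the following. The node names must match the hostnames, the IP addresses below are hypothetical placeholders for the web nodes' addresses, and each attribute line must be indented with a tab:

```conf
cluster:
	node_count = 3
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.197.161
	number = 1
	name = NODE1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.197.162
	number = 2
	name = NODE2
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.197.163
	number = 3
	name = NODE3
	cluster = ocfs2
```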

Load Balancer and Failover

Instead of using HAProxy for SQL load balancing, we will follow some of the suggestions in this article and simply have the Magento instances connect to their local MySQL server using localhost, with the following criteria:

Magento on each node will connect to the MySQL database using localhost, bypassing HAProxy.

Load balancing at the database layer is only for the mysql client/console. HAProxy will be used to balance HTTP traffic.

Keepalived will be used to hold the virtual IP: 192.168.197.150 on load balancers LB1 and LB2

If you plan to place the MySQL servers on separate hosts, the Magento instances should instead connect to the database cluster via HAProxy.
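The Keepalived side of this typically boils down to a VRRP instance like the sketch below. The interface name, router ID and priority are assumptions; LB2 would use `state BACKUP` and a lower priority so the virtual IP fails over when LB1's HAProxy dies:

```conf
# /etc/keepalived/keepalived.conf on LB1 (master) -- illustrative values
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds while an haproxy process is running
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0                # assumed NIC name
    virtual_router_id 51
    priority 101                  # backup LB2 would use e.g. 100
    virtual_ipaddress {
        192.168.197.150           # the virtual IP used in this setup
    }
    track_script {
        chk_haproxy
    }
}
```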

** The following steps should be performed on ST1

1. We have created scripts to install HAProxy and Keepalived; these can be obtained from our Git repository.

5. By default, the script configures the MySQL reverse proxy service to listen on port 33306. We need to add a few more lines to tell HAProxy to load balance our web server farm as well. Add the following lines to /etc/haproxy/haproxy.cfg:
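A sketch of such a listen block, assuming the web nodes answer HTTP on port 80 at the (hypothetical) addresses below:

```conf
# /etc/haproxy/haproxy.cfg -- balance HTTP across the three web nodes
listen web_farm
    bind *:80
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    server NODE1 192.168.197.161:80 check
    server NODE2 192.168.197.162:80 check
    server NODE3 192.168.197.163:80 check
```

The `option httpchk` health check removes a web node from rotation as soon as nginx stops answering, rather than waiting for TCP timeouts.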

Install Magento

1. Now that we have a load-balanced setup ready to support Magento, we will create the Magento database. From the ClusterControl UI, go to Manage > Schema and Users > Create Database:

2. Create the database user under Privileges tab:

3. Assign the correct privileges for magento_user on database magento_site:
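If you prefer the console over the UI, the equivalent SQL run on any one of the Galera nodes would look roughly like this (the database and user names follow the steps above; the password is a placeholder):

```sql
-- DDL and grants issued on one Galera node replicate to the whole cluster
CREATE DATABASE magento_site;
GRANT ALL PRIVILEGES ON magento_site.* TO 'magento_user'@'%' IDENTIFIED BY 'yourpassword';
FLUSH PRIVILEGES;
```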

At this point, we assume you have pointed mymagento.com and www.mymagento.com to the virtual IP, 192.168.197.150.

4. Open a web browser and go to mymagento.com. You should see an installation page similar to the screenshot below:

* Take note that we are using localhost as the host value, and that session data will be saved in the database. This allows users to keep the same session regardless of which web server they are connected to.
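After installation, these choices end up in Magento's app/etc/local.xml. A trimmed excerpt of the relevant parts might look like this (values match the examples in this post; the password is a placeholder, and the full file contains more elements around these):

```xml
<!-- app/etc/local.xml (excerpt) -- DB connection via localhost, sessions in DB -->
<connection>
    <host><![CDATA[localhost]]></host>
    <username><![CDATA[magento_user]]></username>
    <password><![CDATA[yourpassword]]></password>
    <dbname><![CDATA[magento_site]]></dbname>
</connection>
<session_save><![CDATA[db]]></session_save>
```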

Notes

** Updated on 9th Dec 2013 **

By default, Magento sets up a MyISAM table specifically for FULLTEXT indexing called catalogsearch_fulltext. MyISAM tables are tolerated within MySQL Galera Cluster, but support is only basic, primarily because the storage engine is non-transactional, so Galera cannot guarantee that the data will remain consistent within the cluster.
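For completeness: Galera does provide an experimental setting to replicate MyISAM DML, which can be enabled in my.cnf on each node. It offers no consistency guarantees and is generally not recommended for production:

```conf
# my.cnf on each Galera node -- experimental, no consistency guarantees
[mysqld]
wsrep_replicate_myisam = ON
```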

Codership has released MySQL-wsrep 5.6 with Galera 3.0 support, which was in beta at the time of this update. You could either use MySQL-wsrep 5.6, which supports InnoDB FTS, or convert all non-Galera-friendly tables to InnoDB with primary keys. Alternatively, you can use an external search engine (such as Solr or Sphinx) for FTS capabilities.

If you choose to convert the tables, you need to make them Galera-friendly by executing the following queries on one of the DB nodes:
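A hedged sketch of such a conversion for the FULLTEXT table mentioned above. Note that converting it to InnoDB only works if your MySQL build supports InnoDB FTS (MySQL 5.6+); any other MyISAM tables in your schema would be converted the same way:

```sql
-- Convert Magento's FULLTEXT table to InnoDB (requires InnoDB FTS support)
ALTER TABLE magento_site.catalogsearch_fulltext ENGINE=InnoDB;
```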

Verify The Architecture

1. Check the HAProxy statistics by logging into the HAProxy admin page on LB1, port 9600. The default username/password is admin/admin. You should see some bytes in and out in the web_farm and s9s_33306_production sections:
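You can also verify both paths from the console through the virtual IP (credentials are the example values created earlier in this post):

```shell
# Check the HTTP path through the virtual IP (Magento should answer)
$ curl -I http://192.168.197.150/

# Check the MySQL path through the HAProxy listener on port 33306
$ mysql -h 192.168.197.150 -P 33306 -u magento_user -p \
    -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
```

With all three Galera nodes joined, wsrep_cluster_size should report 3.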

There are many improvements that could be made to this setup. For example, you could provide redundancy for the shared storage server by installing DRBD. You could also add Varnish Cache on the load balancing servers to provide better caching of your static content and reduce the load on the web and database servers.