ProxySQL and Percona XtraDB Cluster (Galera) Integration

In this post, we’ll discuss how an integrated ProxySQL and Percona XtraDB Cluster (Galera) helps manage node states and failovers.

ProxySQL is deliberately designed not to perform any specialized operation on the servers it communicates with. Instead, it provides an event scheduler that can be used to extend its functionality and cover special needs.

Because ProxySQL does not natively manage specialized products like Percona XtraDB Cluster, integrating them requires designing and implementing good, efficient extensions.

In this article, I will illustrate how Percona XtraDB Cluster/Galera can be integrated with ProxySQL to get the best from both.

Brief digression

Before discussing their integration, we need to review a couple of very important concepts in ProxySQL. ProxySQL has a very important logical component: Hostgroup(s) (HG).

In ProxySQL, QueryRules (QR) can be directly mapped to an HG. Using QRs, you can send a specific user ONLY to a given HG. For instance, you may want user app1_user to go only to servers A-B-C. Simply set a QR that gives app1_user the destination hostgroup 5, where HG 5 contains the servers A-B-C:

Shell

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',5,3306,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',5,3306,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',5,3306,10);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('app1_user',5,1);

Easy, isn’t it?

Another important concept in ProxySQL also related to HG is ReplicationHostgroup(s) (RHG). This is a special HG that ProxySQL uses to automatically manage the nodes that are connected by replication and configured in Write/Read and Read_only mode.

What does this mean? Let’s say you have four nodes A-B-C-D, connected by standard asynchronous replication. A is the master and B-C-D are the slaves. What you want is to have your application pointing writes to server A, and reads to B-C (keeping D as a backup slave). Also, you don’t want to have any reads go to B-C if the replication delay is more than two seconds.

Using RHGs in conjunction with HGs, ProxySQL can manage all of this for you. Simply instruct the proxy to:

Use RHG

Define the value of the maximum latency

Using the example above:

Shell

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.5',5,3306,10,2);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.6',5,3306,10,2);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.7',5,3306,10,2);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('192.168.1.8',10,3306,10,2);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('app1_user',5,1);
INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('app1_user',6,1);
INSERT INTO mysql_replication_hostgroups (writer_hostgroup,reader_hostgroup) VALUES (5,6);

From now on, ProxySQL will split the reads and writes using the RHG and the nodes defined in HG 5.

The flexibility introduced by using HGs is obviously not limited to what I mention here. It will play a good part in the integration of Percona XtraDB Cluster and ProxySQL, as I illustrate below.

Percona XtraDB Cluster/Galera Integration

In an XtraDB cluster, a node has many different states and conditions that affect if and how your application operates on the node.

The most common one is when a node becomes a DONOR. If you’ve ever installed Percona XtraDB Cluster (or any Galera implementation), you know that when a node becomes a DONOR it changes state to DESYNC. If the node is under heavy load, the DONOR process might affect the node itself.

But that is just one of the possible node states:

A node can be JOINED but not synced

It can have wsrep_reject_queries or wsrep_sst_donor_rejects_queries enabled, or wsrep_ready set to OFF

It can be in a different segment

The number of nodes per segment is relevant.

To show what can be done and how, we will use the following setup:

Five nodes

Two segments

Applications requiring R/W split

And two options:

Single writer node

Multiple writer nodes

We’ll analyze how the proxy behaves under the use of a script run by the ProxySQL scheduler.

The use of a script is necessary for ProxySQL to respond correctly to Percona XtraDB Cluster state changes. ProxySQL ships with two scripts for Galera, but both are too basic and don’t consider many relevant conditions. I’ve written a more complete script, galera_check.pl, available at https://github.com/Tusamarco/proxy_sql_tools

This script is a prototype and requires QA and debugging, but is still more powerful than the default ones.

The script is designed to manage an arbitrary number of nodes belonging to a given HG. It works per HG, and as such it performs isolated actions/checks for each HG. It is not possible to have more than one check running on the same HG; the check creates a lock file (proxysql_galera_check_${hg}.pid) to prevent duplicates.
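The duplicate-prevention idea can be sketched in shell; the lock directory, HG number, and messages below are illustrative, not taken from the Perl script itself:

```shell
#!/bin/sh
# Sketch of a per-hostgroup lock: at most one check may run for a given HG.
hg=500
lockfile="/tmp/proxysql_galera_check_${hg}.pid"

# If a lock file exists and the pid it records is still alive,
# another check is already running for this HG: exit quietly.
if [ -f "$lockfile" ] && kill -0 "$(cat "$lockfile")" 2>/dev/null; then
    echo "check already running for HG ${hg}"
    exit 0
fi

echo "$$" > "$lockfile"            # record our own pid
echo "checking nodes of HG ${hg}"  # ...perform the per-node checks here...
rm -f "$lockfile"                  # release the lock when done
```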
galera_check will connect to the ProxySQL node and retrieve all the information regarding the nodes/proxysql configuration. It will then check in parallel each node and will retrieve the status and configuration.
galera_check analyzes and manages the following node states:

read_only

wsrep_status

wsrep_reject_queries

wsrep_sst_donor_rejects_queries

wsrep_connected

wsrep_desync_count

wsrep_ready

wsrep_provider

wsrep_segment

Number of nodes per segment

Retry loop

As mentioned, the number of nodes inside a segment is relevant. If a node is the only one in a segment, the check behaves accordingly. For example, if a node is the only one in the MAIN segment, it will not put the node in OFFLINE_SOFT when the node becomes a donor, to prevent the cluster from becoming unavailable for applications.

The script allows you to declare a segment as MAIN — quite useful when managing production and DR sites, as the script manages the segment acting as main in a more conservative way. The check can be configured to perform retries at a given interval, where the interval is the time defined in the ProxySQL scheduler. As such, if the check is set to two retries for UP and three for DOWN, it will loop that many times before doing anything.
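As a sketch, a scheduler entry for galera_check might look like the following. interval_ms is the check interval mentioned above; the admin credentials, file path, and argument names are illustrative assumptions, not the script’s documented interface:

```sql
-- Illustrative scheduler entry: runs the check every 2 seconds.
INSERT INTO scheduler (id,active,interval_ms,filename,arg1)
VALUES (10,1,2000,'/var/lib/proxysql/galera_check.pl',
        '-u=admin -p=admin -h=127.0.0.1 -P=6032 --main_segment=1 --retry_up=2 --retry_down=3 --log=/var/lib/proxysql/galeraLog');
LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;
```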

This retry feature is useful in some uncommon circumstances, because Percona XtraDB Cluster/Galera performs some actions under the hood that are not entirely correct. For example, whenever a node is set to READ_ONLY=1, Galera desyncs and resyncs the node. A check that doesn’t take this into account would set the node to OFFLINE and back for no reason.

Another important differentiation of this check is that it uses special HGs for maintenance, all in the 9000 range. So if a node belongs to HG 10 and the check needs to put it in maintenance mode, the node is moved to HG 9010. Once everything is normal again, the node is put back in its original HG.
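Conceptually, the move is equivalent to the following admin commands (hostname illustrative):

```sql
-- Park a node from HG 10 into the maintenance HG 9010 (9000 + original HG).
UPDATE mysql_servers SET hostgroup_id = 9010
 WHERE hostname = '192.168.1.5' AND hostgroup_id = 10;
LOAD MYSQL SERVERS TO RUNTIME;
-- Once the node is healthy again, restore it to its original HG.
UPDATE mysql_servers SET hostgroup_id = 10
 WHERE hostname = '192.168.1.5' AND hostgroup_id = 9010;
LOAD MYSQL SERVERS TO RUNTIME;
```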

This check does NOT modify any node states. This means it will NOT modify any variables or settings in the original node. It will ONLY change node states in ProxySQL.

Multi-writer mode

The recommended way to use Galera is in multi-writer mode. You can then play with the weights to have one node act as the MAIN node and prevent/reduce certification failures and brute force aborts (BFA) from Percona XtraDB Cluster. Use this configuration:

In this test, we will NOT use Replication HostGroup. We will do that later when testing a single writer. For now, we’ll focus on multi-writer.

Segment 1 covers HG 500 and HG 501, while segment 2 only covers HG 501. The weight for the servers in HG 500 progresses from 1 to 1 billion, in order to reduce possible random writes on the non-main nodes.
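A sketch of such a setup in ProxySQL might look like the following. IP addresses and the exact weight values are illustrative; note also that Galera segment membership is configured on the nodes themselves (gmcast.segment), not in ProxySQL:

```sql
-- Writers (HG 500): the three segment-1 nodes, with progressive weights.
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',500,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',500,3306,1000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',500,3306,1);
-- Readers (HG 501): all five nodes, including the two segment-2 nodes.
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',501,3306,1);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.8',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.9',501,3306,1000000000);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
```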

To generate a load, use the following commands (or whatever you like, but use a different one for read-only and reads/writes):

Shell

# Write
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.50 --mysql-port=3311 --mysql-user=stress_RW --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=50000 --max-requests=0 --max-time=9000 --oltp-point-selects=5 --oltp-read-only=off --oltp-dist-type=uniform --oltp-reconnect-mode=transaction --oltp-skip-trx=off --num-threads=10 --report-interval=10 --mysql-ignore-errors=all run

# Read-only
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.50 --mysql-port=3311 --mysql-user=stress_RW --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=50000 --max-requests=0 --max-time=9000 --oltp-point-selects=5 --oltp-read-only=on --num-threads=10 --oltp-reconnect-mode=query --oltp-skip-trx=on --report-interval=10 --mysql-ignore-errors=all run

The most common thing that can happen to a cluster node is becoming a donor. This is a planned activity for Percona XtraDB Cluster, and it is supposed to be managed in a relatively harmless way.

We’re going to simulate crashing a node and forcing it to elect our main node as DONOR (the one with the highest WEIGHT).

This was easy, and more or less managed by the standard script. But what would happen if my donor were set to NOT serve queries while in the DONOR state?

Wait, what?? Yes, Percona XtraDB Cluster (and Galera in general) can be set to refuse any query when the node goes into DONOR state. If not managed, this can cause issues, as the node simply rejects queries while ProxySQL still sees it as alive.

Let me show you:

MySQL

show global variables like 'wsrep_sst_donor_rejects_queries';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| wsrep_sst_donor_rejects_queries | ON    |
+---------------------------------+-------+

For the moment, let’s deactivate the check. Then perform the same stop, delete the data dir, and restart the node; SST takes place.

Immediately after, galera_check identifies that the node is requesting SST and that the DONOR is our writer. Given that it is NOT the only writer in the HG, and that it has wsrep_sst_donor_rejects_queries active, it cannot simply be set to OFFLINE_SOFT, and we do not want ProxySQL to consider it OFFLINE_HARD (because it is not).

The crashed node is restarted by the SST process, and the node comes up. But if the level of load in the cluster is mid/high, it will remain in the JOINED state for some time, becoming visible to ProxySQL again. ProxySQL will not, however, correctly recognize this state.

The difference here is that the script moves the node to the special HG to isolate it, instead of leaving it in the original HG.

The integration between ProxySQL and Percona XtraDB Cluster (Galera) works perfectly for multi-writer setups if you have a script like galera_check that correctly manages the different Percona XtraDB Cluster/Galera states.

ProxySQL and PXC using Replication HostGroup

Sometimes we might need to have 100% of the write going to only one node at a time. As explained above, ProxySQL uses weight to redirect a % of the load to a specific node.

In most cases, it will be enough to set the weight in the main writer to a very high value (like 10 billion) and one thousand on the next node to almost achieve a single writer.

But this is not 100% effective: it still allows ProxySQL to send a query to the other node(s) once every X requests. To keep it consistent with the ProxySQL logic, the solution is to use replication hostgroups.
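As a sketch, the weight skew might be applied like this; hostnames and values are illustrative, and it is the ratio between the weights, not the absolute numbers, that matters:

```sql
-- Skew weights so that one node takes almost all writes in HG 500.
UPDATE mysql_servers SET weight = 1000000000 WHERE hostname = '192.168.1.5' AND hostgroup_id = 500;
UPDATE mysql_servers SET weight = 1000       WHERE hostname = '192.168.1.6' AND hostgroup_id = 500;
LOAD MYSQL SERVERS TO RUNTIME;
```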

Replication HGs are special HGs that the proxy sees as connected for R/W operations. ProxySQL analyzes the value of the READ_ONLY variable and assigns the nodes that have it enabled to the READ_ONLY HG.

The node with READ_ONLY=0 resides in both HGs. As such, the first thing we need to do is tell ProxySQL that HG 500 and HG 501 are replication HGs.

MySQL

INSERT INTO mysql_replication_hostgroups VALUES (500,501,'');
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
select * from mysql_replication_hostgroups;
+------------------+------------------+---------+
| writer_hostgroup | reader_hostgroup | comment |
+------------------+------------------+---------+
| 500              | 501              |         |
+------------------+------------------+---------+

Now whenever I set the value of READ_ONLY on a node, ProxySQL will move the node accordingly. Let’s see how.
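For instance, assuming 192.168.1.5 is the current writer (hostname illustrative), demoting it and observing the move might look like this:

```sql
-- On the node being demoted to reader:
SET GLOBAL read_only = 1;
-- Then, from the ProxySQL admin interface, watch the node change hostgroup:
SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers
 WHERE hostname = '192.168.1.5';
```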

IF in this scenario a reader node crashes, the application will not suffer at all given the redundancy.

But if the writer crashes, THEN the issue arises, because there will be NO node available to manage the failover. The solution is either to perform the node election manually, or to have the script elect the node with the lowest read weight in the same segment as the new writer.

Below is what happens when a node crashes (bird’s-eye view):

          +---------+
          |  Crash  |
          +----+----+
               |
               v
      +--------+--------+
      |    ProxySQL     |
      |  shuns crashed  |
      |      node       |
      +--------+--------+
               |
               v
   +-----------+------------------+
+->| Does the writer HG have      |
|  | another active node?         |
|  +-----+------------------+-----+
|        | No               | Yes
|        v                  v
|  +-----+----------+  +----+-----------+
|  | ProxySQL stops |  | ProxySQL       |
|  | serving writes |  | redirects load |----+
|  +-----+----------+  | there          |    |
|        |             +----------------+    |
|        v                                   |
|  +-----+-----------+                       |
|  | ProxySQL checks |                       |
|  | READ_ONLY on    |                       |
|  | the reader HG   |                       |
|  +-----+-----------+                       |
|        |                                   |
|        v                                   |
|  +-----+----------+                        |
|  | Any node with  |                        |
|  | READ_ONLY=0 ?  |                        |
|  +--+---------+---+                        |
|     | No      | Yes                        |
|     |         v                            |
|     |    +----+-----------+                |
|     |    | ProxySQL moves |                |
+-----+    | the node to    |                |
           | the writer HG  |                |
           +----+-----------+                |
                |                            |
                v                            |
           +----+----+                       |
           |   END   |<----------------------+
           +---------+

The script should act immediately after the “ProxySQL shuns the node” step, simply replacing READ_ONLY=1 with READ_ONLY=0 on the reader node with the lowest read weight.

ProxySQL will do the rest, copying the node into the WRITER HG and keeping the low weight, such that WHEN/IF the original node comes back, the new node will not compete for traffic.

Including this special function in the check enables automatic failover. This experimental feature is active only if explicitly set in the parameters that the scheduler passes to the script. To activate it, add --active_failover to the scheduler entry. My recommendation is to have two entries in the scheduler, activate the one with --active_failover for testing, and remember to deactivate the other one.
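One way to sketch the two-entry setup; the ids, paths, and everything in the argument string other than --active_failover are illustrative, and any remaining script arguments are omitted:

```sql
-- Entry used for the failover test: active, passes --active_failover.
INSERT INTO scheduler (id,active,interval_ms,filename,arg1)
VALUES (10,1,2000,'/var/lib/proxysql/galera_check.pl','--active_failover');
-- Backup entry without the flag: kept deactivated while testing.
INSERT INTO scheduler (id,active,interval_ms,filename,arg1)
VALUES (20,0,2000,'/var/lib/proxysql/galera_check.pl','');
LOAD SCHEDULER TO RUNTIME;
```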

When the node is killed, ProxySQL shuns it, and the checks (connect and ping) report issues. During this time frame the application will experience problems, and if it is not designed to manage retries (and eventually a queue), it will crash.

The glitch happened when node 192.168.1.6 was copied over to HG 500, but there were no interruptions or errors. At this point, let’s bring back the crashed node, which on rejoining elects Node2 (192.168.1.6) as DONOR.

This was a Percona XtraDB Cluster/Galera choice, and we have to accept and manage it.

Note that the other, basic scripts would put the node in OFFLINE_SOFT, given that the node will become a DONOR. Galera_check instead recognizes that Node2 (192.168.1.6) is the only active node in the segment for that specific HG (the writer HG), while it is not the only one present in the READER HG.

As such, it puts the node in OFFLINE_SOFT only for the READER HG, trying to reduce the load on the node, but keeps it active in the WRITER HG to prevent a service interruption.

When the SST donor task is over, galera_check moves 192.168.1.6 back ONLINE as expected. At the same time, it moves the recovering node to the special HG, to avoid having it included in any activity until it is ready.

With no errors and no significant delay, our application experienced only a glitch and simply had to reconnect.

Reads had no errors or reconnects.

The connection errors were managed by ProxySQL: given that it found five in one second, it SHUNNED the node. The galera_check script was able to promote a reader, and since this is a failover, there was no delay from the retry loop. The whole thing was done in such a brief time that the application barely noticed it.

Obviously, an application with thousands of connections per second would experience a larger impact, but the time window would be very narrow. Once the failed node is ready to come back, we can either start it with READ_ONLY=1, so that it comes back as a reader, or keep it as it is, so that it comes back as the writer.

No matter what, the script manages the case just as it did in the previous (manual) exercise.

Conclusions

ProxySQL and galera_check, working together, are quite efficient in managing the cluster and its different scenarios. When using single-writer mode, automating the manual part of the failover dramatically improves production recovery, going from a few minutes to seconds or less.

The multi-writer mode remains the preferred and most recommended way to use ProxySQL/Percona XtraDB Cluster, given that it performs the failover without the need for additional scripts or extensions. It is also the preferred method when a script is required to manage the integration with ProxySQL.

In both cases, the use of a script that can identify the multiple states of Percona XtraDB Cluster and the changing node scenarios is a crucial part of the implementation. Without it, ProxySQL might not behave correctly.


Marco Tusa has had his own international practice for the past twenty-eight years.
His experience and expertise span a wide variety of information technology and information management fields, covering research, development, analysis, quality control, project management, and team management.
Marco currently works at Percona as High Availability Practice Manager; previously he managed Percona’s Consulting Rapid Response Team, starting in October 2013.
Before that he worked at Sun Microsystems as MySQL Professional Services manager for South Europe, and previously at MySQL AB.
He has worked with the Food and Agriculture Organization of the United Nations since 1994, leading the development of the Organization’s hypertextual environment and serving as team leader for the FAO corporate database support.
For several years he led the development group in the WAICENT/FAOINFO team, and he has helped define the Organization’s guidelines for the dissemination of information from both the technology and the management points of view.
He has participated in field missions to perform analysis, reviews, and evaluations of local projects, providing local support and advice.
He has collaborated with the MIT Media Lab (Massachusetts Institute of Technology laboratory) and FAO as a Sustainable Information Technology for Developing Countries specialist, in relation to FAO’s Special Programme for Food Security in Senegal.