High Availability with MySQL Fabric: Part II

This is the third post in our MySQL Fabric series. If you missed the previous two, we started with an overall introduction and then a discussion of MySQL Fabric’s high-availability (HA) features. MySQL Fabric was RC when we started this series, but it went GA recently. You can read the press release here, and see this blog post from Oracle’s Mats Kindahl for more details. In our previous post, we showed a simple HA setup managed with MySQL Fabric, including some basic failure scenarios. Today, we’ll present a similar scenario from an application developer’s point of view, using the Python Connector for the examples.

If you’re following the examples in these posts, you’ll notice that the UUIDs for the servers change. That’s because we rebuild the environment between runs; symbolic names stay the same though. That said, here’s our usual 3-node setup:

This simple script requests a MODE_READWRITE connection and then issues SELECTs in a loop. The reason it requests a RW connection is that this makes it easier for us to provoke a failure: with a MODE_READONLY connection, queries could be served by either of the two SECONDARY nodes. The SELECT includes a short sleep to make it easier to catch in SHOW PROCESSLIST. In order to work, this script needs the test.test table to exist in the mycluster group. Running the following statements on the PRIMARY node will do it:
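The original listing isn’t reproduced here, but a minimal sketch of the setup statements and the polling loop might look like the following. The table schema and the dependency-injected connect/execute helpers are assumptions for illustration; the real script uses a Fabric-aware Connector/Python connection as shown later in the post.

```python
import time

# Statements to run on the PRIMARY so the script has something to query
# (assumed schema; the post's exact listing is not reproduced here):
SETUP_STATEMENTS = [
    "CREATE DATABASE IF NOT EXISTS test",
    "CREATE TABLE IF NOT EXISTS test.test (id INT NOT NULL PRIMARY KEY)",
]

# The SLEEP(1) makes the query easy to spot in SHOW PROCESSLIST.
POLL_QUERY = "SELECT SLEEP(1), id FROM test.test"

def poll(connect, execute, iterations):
    """Run POLL_QUERY `iterations` times; on error, wait a second and
    reconnect, mirroring the script's 'sleeping 1 second and
    reconnecting' behavior."""
    cnx = connect()
    results = []
    while len(results) < iterations:
        try:
            results.append(execute(cnx, POLL_QUERY))
        except Exception:
            print("sleeping 1 second and reconnecting")
            time.sleep(1)
            cnx = connect()  # with Connector/Python: also reset_cache()
    return results
```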

The ‘sleeping 1 second and reconnecting’ line means the script got an exception while running a query (when the PRIMARY node was stopped), waited one second and then reconnected. The next lines confirm that everything went back to normal after the reconnection. The relevant piece of code that handles the reconnection is this:

fcnx = mysql.connector.connect(**config)
fcnx.set_property(group='mycluster', mode=fabric.MODE_READWRITE)
fcnx.reset_cache()

If fcnx.reset_cache() is not invoked, the driver won’t connect to the XML-RPC server again, but will use its local cache of the group’s status instead. As the PRIMARY node is offline, this will cause the reconnect attempt to fail. By resetting the cache, we’re forcing the driver to connect to the XML-RPC server and fetch up-to-date group status information. If more failures happen and there is no PRIMARY (or candidate for promotion) node in the group, the following error is received:
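To make the role of the cache concrete, here is a toy model of a driver-side group-status cache (illustrative only, not Connector/Python internals): lookups are served locally until reset_cache() clears the entry and forces a round trip to the XML-RPC server.

```python
class GroupStatusCache:
    """Toy model of a Fabric-aware driver's local cache of group status.
    An illustration of why reset_cache() matters, not the real driver."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable that hits the XML-RPC server
        self._servers = None

    def servers(self):
        if self._servers is None:    # only fetch on a cold cache
            self._servers = self._fetch()
        return self._servers

    def reset_cache(self):
        self._servers = None         # next lookup refetches from Fabric
```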

Running without MySQL Fabric

As we have discussed in previous posts, the XML-RPC server can become a single point of failure under certain circumstances. Specifically, there are at least two problem scenarios when this server is down:

When a node goes down

When new connection attempts are made

The first case is obvious enough. If MySQL Fabric is not running and a node fails, no action will be taken, and clients will get an error whenever they send a query to the failed node. This is worse if the PRIMARY fails, as failover won’t happen and the cluster will be unavailable for writes.

The second case means that while MySQL Fabric is not running, no new connections to the group can be established. This is because when connecting to a group, MySQL Fabric-aware clients first connect to the XML-RPC server to get a list of nodes and roles, and only then use their local cache for decisions. A way to mitigate this is to use connection pooling, which reduces the need for establishing new connections and therefore minimises the chance of failure due to MySQL Fabric being down. This, of course, assumes that something is monitoring MySQL Fabric and ensuring some host provides the XML-RPC service. If that is not the case, failure will be delayed, but it will eventually happen anyway.

Here is an example of what happens when MySQL Fabric is down and the PRIMARY node goes down:

This happens when a new connection attempt is made after resetting the local cache.
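The pooling mitigation mentioned above can be sketched as follows. This is a minimal, hypothetical pool, not the connector’s built-in pooling: as long as an already-established connection is available in the pool, no new connection (and thus no XML-RPC lookup) is needed.

```python
import queue

class SimplePool:
    """Minimal connection pool sketch: established connections are
    reused, so new-connection attempts (which, with Fabric-aware
    drivers, require the XML-RPC server) become rare."""

    def __init__(self, factory, size=4):
        self._factory = factory
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())   # dial Fabric once, up front

    def acquire(self):
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            return self._factory()      # pool exhausted: must dial again

    def release(self, conn):
        self._idle.put(conn)
```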

Making sure MySQL Fabric stays up

As of this writing, it is the user’s responsibility to make sure MySQL Fabric is up and running. This means you can use whatever you feel comfortable with in terms of HA, like Pacemaker. While it does add some complexity to the setup, the XML-RPC server is very simple to manage, so a simple resource manager should work. For the backend, MySQL Fabric is storage engine agnostic, so an easy way to resolve this could be to use a small MySQL Cluster setup to ensure the backend is available. The MySQL team blogged about such a setup here.

We think the ndb approach is probably the simplest for providing HA at the MySQL Fabric store level, but believe that MySQL Fabric itself should provide, or make it easy to achieve, HA at the XML-RPC server level. If ndb is used as the store, any node can take a write, which in turn means multiple XML-RPC instances should be able to write to the store concurrently. In theory, then, improving this could be as easy as allowing Fabric-aware drivers to take a list of Fabric servers instead of a single IP and port to connect to.
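That last suggestion, drivers accepting a list of Fabric endpoints, could look roughly like this. This is purely hypothetical (current Fabric-aware connectors take a single host and port); the probe callable stands in for an XML-RPC connection attempt.

```python
def first_reachable(endpoints, probe):
    """Try each (host, port) Fabric endpoint in order and return the
    first successful result from probe(); a sketch of how a driver
    could fail over between XML-RPC servers."""
    for host, port in endpoints:
        try:
            return probe(host, port)
        except OSError:
            continue    # this endpoint is down, try the next one
    raise RuntimeError("no Fabric XML-RPC server reachable")
```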

What’s next

In the past two posts, we’ve presented MySQL Fabric’s HA features, seen how it handles failures at the node level, how to use MySQL databases with a MySQL Fabric-aware driver, and what remains unresolved for now. In our next post, we’ll review MySQL Fabric’s Sharding features.


Comments

MWM

Do we have a switchover command in MySQL Fabric like the one in the mysqlrpladmin utility?

Once our downed server comes back up and becomes part of the group as a SECONDARY, deliberately taking down the primary server is not a good option. We have the server-status option to change the primary to secondary and a secondary to primary. What is the best approach to resolve this issue?

Fabric itself is just a framework to manage MySQL servers, it does not actually handle application data, this is still done by individual MySQL Servers.

A lot of mysqlfabric commands support the --update_only option, which updates Fabric’s metadata but does not make any changes on the servers (i.e. does not set up replication), so depending on how your existing setup looks, perhaps you can use this to start using Fabric to manage it.

Tim

What I am confused about is that I think we need Fabric-aware connectors to access the servers managed by Fabric. Therefore, I think the destination server for the mysqldump is the Fabric management node, and it will assign a server to handle SQL commands according to my Fabric configuration.

You’re right about the connectors, but the reason you need them is to decide which MySQL server to connect to. What I mean by this is that the connectors make routing decisions depending on HA and Sharding configuration and status; so, for example, you can ask for a R/W connection to group X, and the connector returns a (normal) MySQL connection to the PRIMARY server in that group.

Now, that is needed for day-to-day operations, but for a migration, you’ll usually know which server is the PRIMARY, and you can load a dump directly against it. That said, I think you probably don’t even need to do a dump and restore.

If you already have a master->replica setup working, you just need to use the mysqlfabric utility with the relevant commands (group create, group add, etc.) and the --update_only option, so that Fabric has information about your nodes in its data store. From that point on, you can use the connectors as described, and the mysqlfabric utility to manage the group too.

If you have a single server, I’d recommend cloning an instance and setting up as a replica, and then going back to the previous step.

Tim

I set up an HA group with two MySQL servers, A and B, and promoted A as PRIMARY. Then, I shut down the mysqld at server A.

MySQL Fabric detects that server A is unreachable several times, and decides to promote B automatically: [INFO] timestamp – Executor-0 – Master has changed from A to B. Server B is now PRIMARY and in read_write mode. It can accept write commands (e.g. insert) successfully.

I wonder whether server B had replicated all data from server A before server A was shut down. Will there be any data loss during the procedure? Are there any settings in MySQL Fabric to prevent this?

What commands should I issue when I start the mysqld at server A again to restore the original HA group configuration? What I mean is: 1. Set server A as PRIMARY and in read_write mode again. 2. Set server B as SECONDARY and in read_only mode again. Will server A synchronize all data modifications from server B that happened while server A was down? Will there be any data loss during the HA configuration restore procedure?

A proper answer to these questions requires a blog post of its own, not just because of the length of the replies, but because I need to dig a bit more into the code to get a deeper understanding of all the moving pieces here. And by ‘requires a blog post of its own’ I mean we’ll write that post, so stay tuned.

However, I can provide you with some quick replies. Fabric won’t promote any node unless it was up to date with the last changes on the master. I can back this statement up with anecdotal evidence from one test:

For this, I introduced artificial lag on the SECONDARY host by stopping replication on it for a bit, while a heavy insert load was being run on the master. Fabric waited for this node to catch up before promoting it to PRIMARY, which means that any MODE_READWRITE connections to the group did not work during the catch-up time.

This was only a test as an end user, without looking at the code to understand what it does, so I have some questions of my own that I want to answer with another blog post. One of them is what happens if the SECONDARY is not able to catch up completely, because some binlog events were lost when the master crashed. Since Fabric requires the enforce-gtid-consistency flag to be set, I suspect this scenario cannot happen, but I need to confirm that suspicion.
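For reference, the catch-up condition itself is simple to state: a candidate is promotable once its executed transaction set covers the master’s. MySQL expresses this with GTID sets and the GTID_SUBSET() function; in this sketch, plain Python sets of transaction identifiers stand in for GTID sets.

```python
def safe_to_promote(master_executed, candidate_executed):
    """A candidate is safe to promote once it has executed every
    transaction the master had (MySQL checks this with GTID sets and
    GTID_SUBSET(); Python set containment models the same idea)."""
    return master_executed <= candidate_executed
```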

As for the way to restore the original master, you need to:

1. Set it to SPARE, then SECONDARY (Fabric won’t check lag when setting a node to SECONDARY, so it may be behind in replication in this state).

2. Set it to PRIMARY. The way I’d recommend doing this is to promote the group using the --slave_id option with the host’s uuid. If it’s behind in replication when you do this, the ‘mysqlfabric group promote’ command will block until it catches up, and only then will it complete the promotion (and the corresponding demotion of the previous PRIMARY).

That said, you’ll typically want all the nodes in a group to be of equal capacity, so after a failover, the standard practice (as I usually see it in the wild, at least) is to just leave the PRIMARY role on the new node until that one fails or has to be demoted.

So to wrap up, Fabric takes a lot of steps to prevent any data loss during failover, but I think there are enough details involved to merit a separate blog post to discuss them.

Tim

Thank you a lot! Let me give you a big hand! I got the overall concept.

I highly anticipate your blog post about it. Besides, I would like to know how to solve the problem if the “SECONDARY is not able to catch up completely”, or how to handle replication errors if they happen.

Sergio

What I can’t understand very well is how to configure the connector for load balancing. In the example above, I need to set the mode, READWRITE or READONLY, for each SQL statement. But I can’t change all the code in my application. Can I configure the connector for automatic load balancing? Is it not as simple as changing my current connection string from the MySQL Server to MySQL Fabric?

@Sergio: Sorry for the delay, I’ve been on vacation for the last couple of weeks. I think what you need is not actually load balancing but r/w splitting. If my understanding is correct, that’s not supported by the connectors I’ve tried so far, and, in general, it’s something tricky to get right, since seemingly read-only queries may end up changing data as a side effect of triggers or functions, for example. So in summary, if you can’t change application code, I don’t think it will be easy to do this with Fabric right now.