This blog focuses on the ease of setting up Group Replication, on using systemd capabilities to harness that ease, and on a little shell scripting to pull it all together. The picture above shows the setup I will use here today, mainly to demonstrate how the cluster can reconfigure itself on a change in topology. In many cases, this might be the setup most application designs use.

So how does this begin?

First, we need to make sure you have the right software to follow along. This blog uses the MySQL release provided in the Labs download area just prior to Oracle OpenWorld 2016, so you may want to download the latest Group Replication labs release from that location. If it is no longer available because some time has passed since this blog was written, then a GA release is likely available; in that case, use the latest MySQL release instead.

Second, I am running my setup in a Vagrant environment using a CentOS 7 box, but any provisioned systemd Linux OS should work fine. My basic Vagrantfile setup follows. If you’re new to Vagrant and would like to use it, take a peek at my previous blogs to get started. The intention with this setup is to use one box for the MySQL instances (gr125) and one box for an app (app126) that runs against the database setup across a private network.

Vagrantfile Setup for 2 boxes


# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  #...
  config.vm.define "gr125", autostart: true do |gr125|
    gr125.vm.box = "bento/centos-7.2"
    gr125.vm.hostname = 'gr125'
    gr125.vm.box_url = "bento/centos-7.2"
    gr125.vm.network "private_network", ip: "192.168.56.125"
    gr125.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 750]
      v.customize ["modifyvm", :id, "--name", "gr2-gr125"]
    end
  end
  config.vm.define "app126", autostart: false do |app126|
    app126.vm.box = "bento/centos-7.2"
    app126.vm.hostname = 'app126'
    app126.vm.box_url = "bento/centos-7.2"
    app126.vm.network "private_network", ip: "192.168.56.126"
    app126.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 150]
      v.customize ["modifyvm", :id, "--name", "gr2-app126"]
    end
  end
end

Handling systemd & multi-instance MySQL

The next thing to consider is how to deal with systemd. The MySQL Server team has covered the basic ideas, but here is a quick rundown of the benefits. First, systemd replaces what mysqld_safe used to provide by enabling root to launch the MySQL instance as a non-privileged user; it’s great that this OS capability now handles that. Systemd handles restart control in case the mysqld process crashes. It also provides session setup for that non-privileged execution of the server, such as the location of the PID file, the number of open files allowed, and settings such as nice, which only root can change. Actual systemd usage for this simple blog will be seen in our scripting next.
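For reference, the template unit that ships with the MySQL server RPMs looks roughly like the following abridged sketch (paraphrased, not an exact copy of the file). The %I specifier is what lets one unit file serve many instances: systemd substitutes the tag after the @ into each occurrence.

```ini
# /usr/lib/systemd/system/mysqld@.service (abridged sketch, not verbatim)
[Unit]
Description=MySQL Server instance %I
After=network.target

[Service]
User=mysql
Group=mysql
Type=forking
PIDFile=/var/run/mysqld/mysqld-%I.pid
# --defaults-group-suffix makes this instance read its own [mysqld@%I] section
ExecStart=/usr/sbin/mysqld --defaults-group-suffix=@%I --daemonize --pid-file=/var/run/mysqld/mysqld-%I.pid
Restart=on-failure
LimitNOFILE=5000

[Install]
WantedBy=multi-user.target
```

With this in place, `systemctl start mysqld@gr1` starts the instance configured under [mysqld@gr1].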

Scripting a multi-instance MySQL setup

The MySQL configuration

There are a few approaches to building this setup, but I’ll pare it down to what is necessary. To get MySQL set up as intended, you need a proper my.cnf server configuration file in place first. In general, I’ll be using Matt Lord’s Quick Start Guide as a template for this build. Here is the my.cnf I intend to use, though it can vary in numerous ways and still be effective.
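A minimal multi-instance configuration in the spirit of the Quick Start Guide might look like the following sketch. The group UUID, ports, and paths here are illustrative assumptions, not the exact values from my setup, and some labs builds expect a loose- prefix on the group_replication options:

```ini
# Hypothetical /etc/my.cnf for three instances on one box -- values are illustrative
[mysqld@gr1]
datadir    = /var/lib/mysql-gr1
socket     = /var/lib/mysql-gr1/mysqld.sock
port       = 3306
server_id  = 1
log_bin    = gr1-bin
log_slave_updates = ON
binlog_format     = ROW
gtid_mode         = ON
enforce_gtid_consistency  = ON
master_info_repository    = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
plugin_load = group_replication.so
group_replication_group_name    = "550fa9ee-a1f8-4b6d-9bfe-c03c12cd1c72"
group_replication_start_on_boot = ON
group_replication_local_address = "192.168.56.125:24901"
group_replication_group_seeds   = "192.168.56.125:24901,192.168.56.125:24902,192.168.56.125:24903"

# [mysqld@gr2] and [mysqld@gr3] repeat the above with their own
# server_id, datadir, socket, port, and group_replication_local_address
```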

The Base installation

This script can be pretty basic: essentially it just installs the base RPMs and removes the default configuration that comes with them. We also use the systemctl command a number of times here: first to stop and disable the single-instance MySQL usage of systemd, then to configure the multi-instance capability for MySQL, and finally to make systemd reload and recognize the new unit file we put into place.

### if you have the repo setup you can use: # yum install -y mysql-server
yum install -y /vagrant/mysql/*.rpm
# stop and disable the single-instance service, then remove the setup
# built by the rpm install, as it's defined for a single MySQL instance
systemctl stop mysqld
systemctl disable mysqld
rm -rf /var/lib/mysql/*
echo "#### 3. Enabling the multi-instance MySQL systemd setup ####"
cp /usr/lib/systemd/system/mysqld@.service /etc/systemd/system/
systemctl daemon-reload
exit

Build and Rebuild the Instances

The next script handles some fun capabilities. It stops any running MySQL instances. Notice that it refers to them as “mysqld@gr1”, “mysqld@…”, etc., and you can see this reflected in our configuration file above. This is the pattern used by systemd for multi-instance setups: instances are defined through the “mysqld@” unit template, and each instance is identified by its own tag name following the @ symbol.

Additionally, this script ensures the defined datadir directory for the MySQL instance files is prepared and ready for a fresh new file setup. Once the files are initialized, it starts the server instance, dynamically fetches the temporary root password, and updates the root user’s password to a common one of “P@ssw0rd”. Although it looks for the temporary password in the error log, there is an alternate command commented out in case you are running a version of MySQL that still uses root’s .mysql_secret file. Setting the root password dynamically can also be done in two ways, as noted below, but the second method, which keeps the password out of the binary log, is needed here.

Hopefully this eases things. It does subvert some of the security setup, but these features are there to allow flexible use cases such as this. Who knows, maybe your team has a way to dynamically generate more complex passwords and store them in a vault, in which case automating the process is the best way.

mysql-rebuild.sh script - can be run over and over again


#!/bin/bash

# -- Single server, multi-instance group replication setup, tear down and build
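The full script body is not reproduced above, so here is a condensed sketch of what it does based on the description. The instance tags, datadir paths, log locations, and socket names are all assumptions; the body is guarded behind a --run flag so the file can be sourced safely for inspection.

```shell
#!/bin/bash
# Condensed sketch of mysql-rebuild.sh -- paths, tags, and log names are assumptions
INSTANCES="gr1 gr2 gr3"
NEWPASS='P@ssw0rd'

rebuild_instance() {
  local tag="$1"
  systemctl stop "mysqld@${tag}" 2>/dev/null || true
  # wipe and recreate the per-instance datadir for a fresh build
  rm -rf "/var/lib/mysql-${tag}"
  mkdir -p "/var/lib/mysql-${tag}"
  chown mysql:mysql "/var/lib/mysql-${tag}"
  # initialize data files for this instance's [mysqld@tag] section
  mysqld --defaults-group-suffix="@${tag}" --user=mysql --initialize
  systemctl start "mysqld@${tag}"
  # fish the temporary root password out of the error log
  local tmppass
  tmppass=$(grep 'temporary password' "/var/log/mysqld-${tag}.log" | tail -1 | awk '{print $NF}')
  # older releases wrote it to root's .mysql_secret file instead:
  # tmppass=$(tail -1 /root/.mysql_secret | awk '{print $NF}')
  # reset the password; SET sql_log_bin=0 keeps it out of the binary log
  mysql --connect-expired-password -uroot -p"${tmppass}" \
        -S "/var/lib/mysql-${tag}/mysqld.sock" \
        -e "SET sql_log_bin=0; ALTER USER 'root'@'localhost' IDENTIFIED BY '${NEWPASS}';"
}

# guarded so the file can be sourced without side effects
if [ "${1:-}" = "--run" ]; then
  for tag in $INSTANCES; do rebuild_instance "$tag"; done
fi
```

Because the guard skips execution without --run, the script can be rerun any number of times once you are ready, tearing down and rebuilding all three instances in one pass.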

Configuring Group Replication – What’s all the fuss?

Ok, so we’ve got some cool scripts for building, tearing down, and rebuilding a Group Replication sandbox consisting of 3 MySQL instances, all in a single-primary group. Now what?

Well, let’s get it configured. What does that take? Not much actually, see below.

Each instance needs a replication user created and a CHANGE MASTER statement executed; Group Replication will piggyback on this setup. As in the script, when you issue the CREATE USER and GRANT statements for that user, make sure they don’t go to the binary log.

For the member you plan to use as the “bootstrap” member, you just need to enable bootstrapping before stopping and starting Group Replication on that member, and disable it again afterwards.

For the other members, just stop and start Group Replication, and that’s it!

It will get even easier than that… but give us a little time! As if it’s not easy enough already. And by the way, the only reason we need to STOP GROUP_REPLICATION first is that the configuration starts it at boot. So now that it’s configured, a rebooted member will simply rejoin the group all on its own.

install-configGR.sh script - Simply configures the members into a group


#!/bin/bash

# -- Single server, multi-instance group replication setup - enabling the group members
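The script body is not shown above, so here is a sketch of the three steps it performs: create the replication user, run CHANGE MASTER for the recovery channel, then restart Group Replication with bootstrapping enabled only on the bootstrap member. Sockets, credentials, and tag names are assumptions, and the body is again guarded behind a --run flag.

```shell
#!/bin/bash
# Sketch of install-configGR.sh -- sockets, user names, and passwords are assumptions
MEMBERS="gr1 gr2 gr3"
BOOTSTRAP="gr1"

configure_member() {
  local tag="$1"
  mysql -uroot -p'P@ssw0rd' -S "/var/lib/mysql-${tag}/mysqld.sock" <<'SQL'
-- keep the credentials out of the binary log
SET sql_log_bin=0;
CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';
SET sql_log_bin=1;
-- Group Replication piggybacks on this recovery channel
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass'
  FOR CHANNEL 'group_replication_recovery';
SQL
}

start_member() {
  local tag="$1" sql="STOP GROUP_REPLICATION; START GROUP_REPLICATION;"
  if [ "$tag" = "$BOOTSTRAP" ]; then
    # only the bootstrap member toggles bootstrapping around the restart
    sql="STOP GROUP_REPLICATION;
         SET GLOBAL group_replication_bootstrap_group=ON;
         START GROUP_REPLICATION;
         SET GLOBAL group_replication_bootstrap_group=OFF;"
  fi
  mysql -uroot -p'P@ssw0rd' -S "/var/lib/mysql-${tag}/mysqld.sock" -e "$sql"
}

# guarded so the file can be sourced without side effects
if [ "${1:-}" = "--run" ]; then
  for tag in $MEMBERS; do configure_member "$tag"; done
  for tag in $MEMBERS; do start_member "$tag"; done
fi
```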

So now what?

Well… it’s great that the group is all running. But how do you verify which member is the primary?

Fair enough. One way to do this is by probing the members for the ‘%read_only%’ status, as only the single primary should have read_only=OFF. My simple little script below does the trick… for now at least. You’ll notice in the execution of that script that super_read_only is set as well, providing extra assurance on our setup.
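A simple probe along those lines might look like the following sketch; the socket paths and password are assumptions carried over from the earlier scripts, and the body is guarded behind a --run flag:

```shell
#!/bin/bash
# Sketch of a primary-finder: only the primary runs with read_only=OFF
find_primary() {
  local tag ro
  for tag in gr1 gr2 gr3; do
    # dumps read_only and super_read_only for each member
    ro=$(mysql -uroot -p'P@ssw0rd' -S "/var/lib/mysql-${tag}/mysqld.sock" \
           -N -e "SHOW GLOBAL VARIABLES LIKE '%read_only%';")
    echo "=== ${tag} ==="
    echo "$ro"
  done
}

if [ "${1:-}" = "--run" ]; then
  find_primary
fi
```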

I hope that this review of MySQL Group Replication, systemd in a multi-instance setup, and some fun scripting has provided another way of looking at the new things coming from the MySQL team. Please share any comments. Feedback is always welcome!