Preparing MySQL Instances for Use

I use a set of scripts to build and rebuild my clustered test setups. I can fully automate a Group Replication clustered setup, but doing so requires that I explicitly implement every configuration and know, with precision, that I am not missing anything fundamental in the setup.

With InnoDB Cluster setups, the initial design is intended as an “easy & reliable” step-through process…and with 8.0, it really is easy & reliable. Let’s take a look.

Building a base instance

To get the new instances prepared, on my lab system, I do the following things:

(1) Initial Security Setups

Since this is a lab environment, the initial step is to set SELinux to permissive mode.

In a real setup, you would need to grant port access for the MySQL Group Communication System in your SELinux policy and in your firewall. Windows may require similar configuration.

InnoDB Cluster uses 33061 as its default GCS port number.

** GCS is the distributed communication layer of the system

You can choose any port that works best for your environment; you just need to configure it up front. Depending on your customizations, you may need to build a core Group Replication setup first, then adopt that GR setup as an InnoDB Cluster by using MySQL Shell.
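On an EL7 lab host, the SELinux and firewall steps above might look like the following. This is a sketch, not a prescription: it assumes firewalld is running and the `policycoreutils-python` package (which provides `semanage`) is installed, and it uses the default GCS port of 33061.

```shell
# Lab only: put SELinux into permissive mode (does not survive a reboot)
setenforce 0

# Real setup: instead of going permissive, allow the GCS port explicitly.
# Label 33061 as a MySQL port for SELinux...
semanage port -a -t mysqld_port_t -p tcp 33061

# ...and open it in the firewall.
firewall-cmd --permanent --add-port=33061/tcp
firewall-cmd --reload
```

If you pick a non-default GCS port, substitute it in both the `semanage` and `firewall-cmd` calls.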

(2) Installing the Binaries

Install the MySQL 8.0 binaries via your package manager, or by other means suited to your operating system. Take note that the rpm package named “backup” is in fact the MySQL Enterprise Backup utility, which is provided/bundled as part of the Enterprise Edition Server packages. MySQL Router and MySQL Shell are still separate packages.

MySQL 8.0.11 rpm packages for installation

[root@gr127 gr8-1]# ls -C1 /vagrant/gr-files/rpms-8.0.11/
mysql-commercial-backup-8.0.11-1.1.el7.x86_64.rpm
mysql-commercial-client-8.0.11-1.1.el7.x86_64.rpm
mysql-commercial-common-8.0.11-1.1.el7.x86_64.rpm
mysql-commercial-libs-8.0.11-1.1.el7.x86_64.rpm
mysql-commercial-libs-compat-8.0.11-1.1.el7.x86_64.rpm
mysql-commercial-server-8.0.11-1.1.el7.x86_64.rpm
mysql-router-commercial-8.0.11-1.1.el7.x86_64.rpm
mysql-shell-commercial-8.0.11-1.1.el7.x86_64.rpm
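The packages in that listing can be installed in a single transaction, letting yum work out the inter-package dependency order (a sketch; adjust the path to wherever your RPMs live):

```shell
# Install all of the MySQL 8.0.11 RPMs from the directory in one go
yum install -y /vagrant/gr-files/rpms-8.0.11/mysql-*-8.0.11-1.1.el7.x86_64.rpm
```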

(3) Handling the initial reset of the DB Root user & other Accounts

The root account’s password reset is handled as a scripted process. In my case that is for ease of building test systems, but in the real world it could be scripted to capture a strong, randomized password for use, then register that password in a vault of some form.

Additionally, I create a remote-capable, ‘root’-like user account named ‘ICroot’ below, so that I can provision remote members all from a single host.

It’s not strictly necessary, as you can instruct the mysqlsh utility to create such an account during the configureInstance() stage of preparing a member, by providing values for the “clusterAdmin” and “clusterAdminPassword” options.
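A sketch of that alternative is below; the account name ‘ICadmin’ and the password are placeholders of my own, not values from a real setup.

```shell
# Have MySQL Shell create the cluster-admin account itself while it
# validates and configures the instance for InnoDB Cluster use.
mysqlsh --uri root@localhost:3306 -e "dba.configureInstance('root@localhost:3306', \
  {clusterAdmin: \"'ICadmin'@'%'\", clusterAdminPassword: 'StrongPassword#1'})"
```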

echo $PASSWRD | mysqlsh root@localhost:3306 --sql --passwords-from-stdin -e "SET SQL_LOG_BIN=0; CREATE USER 'ICroot'@'%' IDENTIFIED BY '${PASSWRD}'; GRANT ALL ON *.* TO 'ICroot'@'%' WITH GRANT OPTION;"

# or something like

mysqlsh root@localhost:3306 --sql --passwords-from-stdin -e "SET SQL_LOG_BIN=0; CREATE USER 'IC2root'@'%' IDENTIFIED BY '${PASSWRD}'; GRANT ALL ON *.* TO 'IC2root'@'%' WITH GRANT OPTION;" <<< $(echo "${PASSWRD}")

Notice: Resetting the ‘root@localhost’ initialized password on new server instances DOES NOT require the
--connect-expired-password option with mysqlsh.

(4) Mitigating Generation of Binary logs for localized events

Local Setups – use case: distinct root passwords on each server

If we would rather not be bound to the same password for the root@localhost user account on each server, we can suppress binary logging in one particular way. As seen above, I issue the
SET SQL_LOG_BIN=0; statement first, to be sure the transactions in my statements above don’t write to the binary log, which means no GTID is created either.

Initial-Only Setups – use case: initialization changes that never replicate

If we’ve done a lot of setup at the beginning and want to prevent all of those localized events from being replicated, we can use
RESET MASTER; which clears out “ALL” GTID references AND resets the binary logs too. It’s a bit like using a sledgehammer to drive a nail…but it’s there when we need one.

Group Replication/InnoDB Cluster – use case: Applying changes we DO NOT want replicated

Why might we want to use
RESET MASTER; to do setups before the Group Replication service is enabled?
** it prevents GTIDs being owned locally by the individual server

We might want to use
SET SQL_LOG_BIN=0; when we update the password on a DB user account and don’t want that password change replicated…or when running any SQL that we don’t want replicated.

etc….
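A minimal sketch of that password-change pattern follows; `NEW_PASSWRD` is a placeholder environment variable of my own.

```shell
# Suppress binary logging for this session only, then change the password;
# the local-only change generates no binary log event and no GTID.
mysqlsh root@localhost:3306 --sql -e "SET SQL_LOG_BIN=0; \
  ALTER USER 'root'@'localhost' IDENTIFIED BY '${NEW_PASSWRD}';"
```

Note that SQL_LOG_BIN is session-scoped, so binary logging on the server as a whole is untouched.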

Confirming that no GTIDs are set

There are two commands that I will direct you to here; both confirm the status of GTIDs.

Yes…the performance_schema view
log_status above shows us lots of information about the status of any given member. It is NEW in MySQL 8.0 and requires the
BACKUP_ADMIN privilege.
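The two checks might look like this from the shell (connecting as the ‘ICroot’ account created earlier):

```shell
# 1) gtid_executed should come back empty on a freshly reset instance
mysqlsh ICroot@localhost:3306 --sql -e "SELECT @@GLOBAL.gtid_executed;"

# 2) the new 8.0 log_status view (requires the BACKUP_ADMIN privilege)
mysqlsh ICroot@localhost:3306 --sql -e "SELECT * FROM performance_schema.log_status;"
```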

Monitoring with MySQL Enterprise Monitor

Here we confirm the individual member status of these instances in the monitoring system. We’ll use this as a comparison point when we are done.

Building the Cluster

So, now down to business: building the cluster with the shell.
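In outline, the shell session looks something like this. The seed host gr127 is from the listing earlier; the other two member host names (gr128, gr129) and the cluster name are placeholders of my own for illustration.

```shell
# Create the cluster on the seed member, then add the remaining members.
# MySQL Shell will prompt for the ICroot password at each connection.
mysqlsh --uri ICroot@gr127:3306 -e "
  var cluster = dba.createCluster('testCluster');
  cluster.addInstance('ICroot@gr128:3306');
  cluster.addInstance('ICroot@gr129:3306');
  print(cluster.status());
"
```

Each target instance must already have been prepared with dba.configureInstance() before it can be added.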

Creating a Least-Privilege Remote InnoDB Cluster “Admin” account

In the commands noted below, I added a section in “bold”. Those extra lines create an account (which is NOT binary logged) that can be used for remote management of all InnoDB Cluster members, local and remote. The caveat is that it can ONLY perform such remote management once each target member has been configured accordingly.

I mentioned earlier the creation of an account named
'ICroot'@'%', which is used to build remote members from the outset, for demonstration purposes in this blog.

This is purely so that all members can be created by operating from a single remote server.

The caveat is that this account has “full root-like” privileges with remote access.

Final Checks

What does our status look like in the MySQL Enterprise Monitor?

In the screenshot below, the members are now “auto-grouped”: a “Replication Topology” shows they’re all linked.

In line with the replication grouping noted above…the image below shows that each member has:

its own line entry, which gives quick, high-level details

Also, each line entry can be expanded so that we can easily drill into any member that needs extra attention.

What do our PERSISTed Configurations look like now?

Many more configurations have been added to each member, just from adding the additional members. The ability to PERSIST configurations in this way is a real benefit for dynamically provisioned and restart-based configurations alike, truly adding to the ease of managing our MySQL instances.
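To inspect what has been persisted on a member, the performance_schema.persisted_variables table shows everything written via SET PERSIST (again connecting as the ‘ICroot’ account from earlier):

```shell
# List every variable persisted to mysqld-auto.cnf on this member
mysqlsh ICroot@localhost:3306 --sql -e \
  "SELECT VARIABLE_NAME, VARIABLE_VALUE FROM performance_schema.persisted_variables ORDER BY VARIABLE_NAME;"
```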