Note: The procedures below assume you are using BOSH CLI v2 or later. For more information about BOSH v2, see Commands in the BOSH documentation.

When to Bootstrap

You must bootstrap a cluster that loses quorum. A cluster loses quorum when fewer than half of the nodes can communicate with each other for longer than the configured grace period. If a cluster does not lose quorum, individual unhealthy nodes automatically rejoin the cluster after the error is resolved, the node is restarted, or connectivity is restored.

You can detect lost quorum through the following symptoms:

All nodes appear “Unhealthy” on the proxy dashboard, viewable at https://BOSH-JOB-INDEX-proxy-p-mysql-ert.YOUR-SYSTEM-DOMAIN.

All responsive nodes report the value of wsrep_cluster_status as non-Primary.
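To check this from a responsive node, you can query Galera directly. This assumes you can log in to MySQL on the node with administrative credentials:

mysql> SHOW STATUS LIKE 'wsrep_cluster_status';

In a healthy cluster the value is Primary; any other value, such as non-Primary or Disconnected, means the node is not part of a component with quorum.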

Run the Bootstrap Errand

The following sections describe what the bootstrap errand is and how to use it based on the type of cluster failure.

About the Bootstrap Errand

The bootstrap errand automates the steps described in the Manual Bootstrapping section below. It finds the node with the highest transaction sequence number and asks it to start up by itself in bootstrap mode. Finally, it asks the remaining nodes to join the cluster.
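To run the errand, use a command of the following form, which assumes your deployment provides an errand named bootstrap:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT run-errand bootstrap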

The bootstrap errand does not always succeed immediately. If the errand fails, wait a few minutes and run it again.

If the errand fails after several tries, bootstrap your cluster manually. See Bootstrap Manually below.

Scenario 2: Virtual Machines Terminated or Lost

In severe circumstances, such as a power failure, it is possible to lose all your VMs.
You must recreate them before you can begin recovering the cluster.

When BOSH lists MySQL instances in the - state, the VMs are lost. The procedures in this scenario bring the instances from the - state to a failing state. Then you run the bootstrap errand as in Scenario 1 above and restore the BOSH configuration.

To recover terminated or lost VMs, do the procedures in the sections below:

WARNING: If you do not unignore each of your ignored instances, BOSH never updates those instances in future deploys. You must perform the procedure in the final section of Scenario 2,
Restore the BOSH Configuration.

Recreate the Missing VMs

The procedure in this section uses BOSH to recreate the VMs, install software on them, and try to start the jobs.

The procedure below allows you to do the following:

Redeploy your cluster while expecting the jobs to fail.

Instruct BOSH to ignore the state of each instance in your cluster. This allows BOSH to deploy the software to each instance even if the instance is failing.

To recreate your missing VMs, do the following:

If BOSH resurrection is enabled, disable it.

bosh -e YOUR-ENV update-resurrection off

Download the current manifest.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT manifest > /tmp/manifest.yml

Redeploy and expect one of the MySQL VMs to fail. Deploying causes BOSH to create new VMs and install the software on them. You re-form the cluster in a later step.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT deploy /tmp/manifest.yml

Run the following command and record the instance GUID of the VM that attempted to start. Your instance GUID is the string after mysql/ in your BOSH instances output.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT instances

Tell BOSH to ignore your MySQL instance. Ignoring the state allows BOSH to deploy software to the failed instance.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT ignore mysql/INSTANCE-GUID

Where:

YOUR-ENV is the environment where you deployed the cluster.

YOUR-DEPLOYMENT is the deployment cluster name.

INSTANCE-GUID is the string after mysql/ in your BOSH instances output.
For example, with a hypothetical GUID:
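bosh -e YOUR-ENV -d YOUR-DEPLOYMENT ignore mysql/12ab34cd-56ef-78ab-90cd-1234abcd5678  # hypothetical GUID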

Note: After you complete the bootstrap errand, you may still see instances in the failing state. Continue to the next section anyway.

Restore the BOSH Configuration

WARNING: If you do not unignore each of your ignored instances, BOSH never updates those instances in future deploys.

To restore your BOSH configuration to its previous state, unignore each instance that you previously ignored:

Unignore each ignored instance.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT unignore mysql/INSTANCE-GUID

Redeploy.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT deploy /tmp/manifest.yml

Validate that all mysql instances are in a running state.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT instances
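If you disabled BOSH resurrection while recreating the missing VMs, re-enable it now that the deployment is healthy:

bosh -e YOUR-ENV update-resurrection on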

Bootstrap Manually

If the bootstrap errand cannot automatically recover the cluster, you might need to perform the steps manually.

WARNING: The following procedures are prone to user error and can result in lost data if followed incorrectly. Follow the procedure in Bootstrap with the BOSH Errand above first, and only resort to the manual process if the errand fails to repair the cluster.

Do the procedures in the sections below to manually bootstrap your cluster. Fresh installs of PCF v2.2 use a Percona server for their internal system MySQL databases (called MySQL or mysqld below). Installations of PCF v2.2 that were upgraded from PCF v2.1 use MariaDB, unless you migrated your databases to Percona. Use the commands below that are appropriate for your installation of PCF.

If a node shut down gracefully, the seqno is in the Galera state file. Retrieve the seqno and continue to Bootstrap the First Node.
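To retrieve the seqno, read the state file on the node. The path below assumes the default MariaDB data directory used by PCF; Percona installations may keep the file elsewhere, such as under /var/vcap/store/pxc-mysql:

cat /var/vcap/store/mysql/grastate.dat

The file records the cluster uuid and the node's seqno; a gracefully stopped node shows a non-negative seqno.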

If a node crashed or was killed, the seqno in the Galera state file is recorded as -1. In this case, the seqno might still be recoverable from the database. You can start the database in recovery mode so that it logs the recovered seqno and then exits, as shown below.
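The --wsrep-recover option is standard Galera. The binary path below is an assumption based on typical BOSH package layouts for MariaDB; verify it on your VM, and on Percona installations use the equivalent mysqld binary from its package directory:

/var/vcap/packages/mariadb/bin/mysqld --wsrep-recover

The process writes a line of the form Recovered position: UUID:SEQNO to the MySQL error log and then exits; the number after the final colon is the seqno.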

If the node never connected to the cluster before crashing, it may not even have a group ID (uuid in grastate.dat). In this case, there is nothing to recover. Unless all nodes crashed this way, do not choose this node for bootstrapping.

After determining the seqno for all nodes in your cluster, identify the node with the highest seqno. If all nodes have the same seqno, you can choose any node as the new bootstrap node.

Bootstrap the First Node

After determining the node with the highest seqno, do the following to bootstrap the node:

Note: Run these bootstrap commands only on the node with the highest seqno. Otherwise, the node with the highest seqno cannot join the new cluster unless its data is abandoned, and its mariadb or mysqld process exits with an error.

On the new bootstrap node, update the state file and restart the mariadb or mysqld process, depending on which database is running. The commands are sketched below.
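The following sketch assumes the state file locations used by cf-mysql-release (mariadb) and pxc-release (mysqld); confirm the paths on your installation before writing to them. For mariadb:

echo -n "NEEDS_BOOTSTRAP" > /var/vcap/store/mysql/state.txt  # state file path assumed for cf-mysql-release
monit start mariadb_ctrl

For mysqld:

echo -n "NEEDS_BOOTSTRAP" > /var/vcap/store/pxc-mysql/state.txt  # state file path assumed for pxc-release
monit start galera-init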

It can take up to ten minutes for monit to start the mariadb or mysqld process.
To check if the mariadb or mysqld process has started successfully, run the following command:

watch monit summary

Restart Remaining Nodes

After the bootstrapped node is running, start the mariadb or mysqld process on each of the remaining nodes with monit. On each remaining node, run monit start mariadb_ctrl (for mariadb) or monit start galera-init (for mysqld).
If the Interruptor prevents a node from starting, force the node to rejoin the cluster using the manual procedure documented in the Pivotal Knowledge Base.

WARNING: Forcing a node to rejoin the cluster is a destructive procedure.
Only do the procedure with the assistance of Pivotal Support.

If the monit start command fails, it might be because the node with the highest seqno is the lowest BOSH-indexed node (mysql/0). In this case, do the following: