Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16
Talking with Percona Live attendees last year, I heard a couple of common themes. First, people told me that there is a lot of great advanced content at Percona Live, but not much for people just starting to learn the ropes with MySQL. Second, they would like us to find a way to make such introductory content less expensive.

I’m pleased to say we’re able to accommodate both of these wishes this year at Percona Live! We have created a two-day intensive track called “MySQL 101” that runs April 15-16. MySQL 101 is designed for developers, system administrators and DBAs familiar with other databases but not with MySQL. And of course it’s ideal for anyone else who would like to expand their professional experience to include MySQL. The sessions are designed to lay a solid foundation in many aspects of MySQL development, design and operations.

As for the price: just $101 for both full days, but only if you are among the first 101 people to register using the promo code “101” at checkout. After that the price returns to $400 (still a great price!). The MySQL 101 registration pass includes full access to the Percona Live expo hall (and all the fun stuff happening out there) as well as the keynotes, which will cover the most significant developments in the MySQL ecosystem.

As there is so much information to cover in the MySQL 101 track, we’re running two sessions in parallel – one geared more toward developers using MySQL and the other toward sysadmins and MySQL DBAs, focusing more on database operations.
I want to point out that you do not have to choose one track to attend exclusively; you can mix and match sessions depending on what is most relevant to your specific circumstances. I will be leading a couple of sessions myself, alongside many other Percona experts who are joining me for those two days!

Here’s a peek at just some of the many classes on the MySQL 101 agenda:

The 7 deadly sins of MySQL performance
Introduction to High Availability options for MySQL
Becoming a DBA for 5 minutes a day: The most important things to know for sysadmins and developers
Designing Effective Schema for InnoDB
MySQL Security Basics
MySQL backups: strategy, tools, recovery scenarios
Highly efficient MySQL backups
How to create a useful MySQL bug report
MySQL & InnoDB fundamentals and configuration
MySQL indexing best practices
MySQL query tuning 101
MySQL replication: setup, benefits, limitations
Overview of MySQL connectors for PHP
Practical MySQL troubleshooting and performance optimization
Running MySQL in OpenStack
Using MySQL with Java
Writing High Performance SQL Statements

You can see the full MySQL 101 agenda here. Don’t forget the promo code “101” and please feel free to ask any questions below. I hope to see you in Santa Clara at Percona Live! The conference runs April 13-16 in sunny Santa Clara, California. The post Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16 appeared first on MySQL Performance Blog.

Proposal to deprecate "compatibility" SQL Modes
In the MySQL team, we are currently discussing deprecating several of the SQL mode options which are used by mysqldump to change the output format. From the mysqldump command:
$ mysqldump --help
..
--compatible=name Change the dump to be compatible with a given mode. By
default tables are dumped in a format optimized for
MySQL. Legal modes are: ansi, mysql323, mysql40,
postgresql, oracle, mssql, db2, maxdb, no_key_options,
no_table_options, no_field_options. One can use several
modes separated by commas. Note: Requires MySQL server
version 4.1.0 or higher. This option is ignored with
earlier server versions.
To explain the rationale for this proposal:
The options mysql323 and mysql40 are designed to allow mysqldump to create output that can be restored on a MySQL Server of version 3.23 or 4.0. While we aim to support upgrades from these versions, we do not support downgrades, as restoring data to a 10-year-old release poses a number of challenges.
The options postgresql, oracle, mssql, db2, maxdb are 'special' SQL modes, in that they are not really modes themselves but aliases to switch on other SQL modes. For example:
mysql> set sql_mode = 'mssql';
Query OK, 0 rows affected (0.00 sec)
mysql> select @@sql_mode\G
*************************** 1. row ***************************
@@sql_mode: PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,MSSQL,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS
1 row in set (0.00 sec)
Having aliases creates a number of interesting challenges:
It complicates training and support, as users have multiple ways to achieve the same outcome (either listing all SQL modes individually or via one of the aliases). For a similar reason we removed unique option prefixes in MySQL 5.7, to reduce this kind of confusion.
The options that we enable for each alias imply (but have not measurably offered) compatibility with the named database. Furthermore, as those databases release newer versions, a single fixed alias name cannot account for the differences between versions.
Related to the above: if newer versions of other database products require different SQL modes, it is more flexible (and future-proof) to keep the list of behavior options to enable for each database in documentation or client programs rather than in the server itself. This allows us to avoid changing behavior in a server GA release.
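To make that last point concrete, here is a minimal sketch (in Python, purely illustrative – `ALIAS_MODES` and `expand_sql_modes` are hypothetical names, not part of any MySQL tool) of how such an alias table could live in a client program instead of the server. Only the 'mssql' entry is taken from the SELECT @@sql_mode output shown above; any other alias would need its own, version-specific list.

```python
# Hypothetical client-side table expanding "compatibility" aliases into
# the real SQL modes they switch on. The 'mssql' list mirrors the server
# output shown earlier; treat the structure, not the data, as the point.
ALIAS_MODES = {
    "mssql": ["PIPES_AS_CONCAT", "ANSI_QUOTES", "IGNORE_SPACE", "MSSQL",
              "NO_KEY_OPTIONS", "NO_TABLE_OPTIONS", "NO_FIELD_OPTIONS"],
}

def expand_sql_modes(modes: str) -> str:
    """Expand aliases in a comma-separated sql_mode string, dropping duplicates."""
    out = []
    for mode in (m.strip() for m in modes.split(",") if m.strip()):
        for expanded in ALIAS_MODES.get(mode.lower(), [mode.upper()]):
            if expanded not in out:  # keep first occurrence only
                out.append(expanded)
    return ",".join(out)
```

Keeping such a table in the client (or in documentation) means it can be revised per target-database release without changing server behavior in a GA version.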
The options no_key_options, no_field_options and no_table_options each remove a class of MySQL-specific metadata from this table definition (the original formatting highlighted the removed clauses; each example below is annotated with what the option strips):
# no_key_options (strips the index options: USING BTREE KEY_BLOCK_SIZE=2048 COMMENT 'My comment')
CREATE TABLE `t` (
`i` int(11) NOT NULL AUTO_INCREMENT,
`b` varchar(20) CHARACTER SET utf8 DEFAULT NULL,
KEY `i` (`i`) USING BTREE KEY_BLOCK_SIZE=2048 COMMENT 'My comment'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
# no_field_options (strips MySQL-specific column options, plus the same index options as no_key_options)
CREATE TABLE `t` (
`i` int(11) NOT NULL AUTO_INCREMENT,
`b` varchar(20) CHARACTER SET utf8 DEFAULT NULL,
KEY `i` (`i`) USING BTREE KEY_BLOCK_SIZE=2048 COMMENT 'My comment'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
# no_table_options (strips the table options: ENGINE=InnoDB DEFAULT CHARSET=latin1)
CREATE TABLE `t` (
`i` int(11) NOT NULL AUTO_INCREMENT,
`b` varchar(20) CHARACTER SET utf8 DEFAULT NULL,
KEY `i` (`i`) USING BTREE KEY_BLOCK_SIZE=2048 COMMENT 'My comment'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Note that no_field_options does not remove the character set attribute from column b, and is thus incomplete in its current implementation. no_field_options also removes the same metadata as no_key_options.
The no_key_options and no_table_options options remain useful, although information_schema may be better suited to building a custom view of the schema that matches the capabilities of the destination database. We are working on making information_schema queries much faster in the future, via our native data dictionary project.
To summarize this proposal:
Our plan is to deprecate the following options in MySQL 5.7, for removal in a later version:
mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb.
We are also seeking input from those that use the following SQL modes, as we may decide to deprecate these in the future:
no_field_options, no_key_options, no_table_options.
Will these changes impact you? Please leave a comment or get in touch!

Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots
As the need for information grows, so does the size of the data we need to keep in our databases. SST is unavoidable for spinning up new nodes in a PXC cluster, and when datasets reach the terabyte range this becomes ever more cumbersome, requiring many hours for a new node to synchronize. More often than not, it is necessary to implement custom “wsrep_sst” scripts or resort to manual synchronization processes. Luckily, cloud providers offer convenient disk snapshot facilities that can be used to quickly transfer data between nodes.

This article covers the actions needed to perform a snapshot on Google’s Compute Engine (GCE) infrastructure. A similar method can be used on AWS EC2 instances using EBS snapshots, or with any other form of snapshot such as LVM, ZFS or SAN. The steps described can be used to add a new node to a PXC cluster or to avoid SST, and also take advantage of the performance benefit of GCE snapshots. A similar procedure can be used for adding a regular slave, provided the binary log coordinates have been captured. This article assumes your “datadir” is on a separate disk from your operating system partition, using the “ext4” filesystem.

1. Select a suitable “donor” node; we will use “node1” for this example.

2. Stop the MySQL service on “node1”, or perform a FTWRL with the MySQL service running on a node which is in “desync/donor” mode:

# Take the snapshot from a stopped instance
[root@node1 /] service mysql stop & tail -f /var/log/mysql/error.log
# OR alternatively take the snapshot from a 'desynced' node
### desync from cluster replication
mysql> set global wsrep_desync=ON;
### get FTWRL
mysql> flush tables with read lock;

3. While the MySQL service is down on “node1”, or while the FTWRL is held, create a snapshot of the disk in the Google Developer Console or using the GCE API (this assumes that the datadir is located on a separate standalone disk). This part of the process takes around 15 minutes for a 3.5 TB disk.

gcloud compute disks snapshot node1-datadir-disk --snapshot-name node1-datadir-disk-snapshot-1

4. As soon as the snapshot has completed, start the MySQL service on “node1” (verifying the node has successfully joined the cluster) or release the FTWRL:

# Depending on the steps followed in step 2 either start MySQL on node1
[root@node1 /] service mysql start & tail -f /var/log/mysql/error.log
# OR alternatively release the FTWRL and "sync" the node
### release FTWRL
mysql> unlock tables;
### if there is high load on the cluster monitor wsrep_local_recv_queue
### until it reaches 0 before running the following command to rejoin
### the cluster replication (otherwise it can be run immediately after
### releasing the FTWRL):
mysql> set global wsrep_desync=OFF;

IMPORTANT NOTE: In case “node1” is unable to rejoin the cluster or requires an SST, you will need to re-create the snapshot from another node, or after the SST completes.

5. Now connect to the “joiner” node; we will use “node2” for this example.

6. Unmount the existing disk from “node2” (assuming the MySQL service is not running; otherwise stop the MySQL service first):

[root@node2 /] umount /var/lib/mysql

7. Detach and delete the disk containing the MySQL datadir from the “node2” instance in the Google Developer Console or using the GCE API:

gcloud compute instances detach-disk node2 --disk node2-datadir-disk
gcloud compute disks delete node2-datadir-disk

8. Create and attach a new disk to the “node2” instance in the Google Developer Console or using the GCE API, using the snapshot you created in step 3. This part of the process takes around 10 minutes for a 3.5 TB disk:

gcloud compute disks create node2-datadir-disk --source-snapshot node1-datadir-disk-snapshot-1
gcloud compute instances attach-disk node2 --disk node2-datadir-disk

9. [ LVM only ]: If you are using LVM, the device will not show up until you have activated the volume group (“vg_mysql_data” in this example):

# this command will report the available volume groups
[root@node2 /] vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_mysql_data" using metadata type lvm2
# this command will report the available logical volumes, you should see the LV INACTIVE now
[root@node2 /] lvscan
INACTIVE '/dev/vg_mysql_data/lv_mysql' [20.00 TiB] inherit
# this command will activate all logical volumes within the volume group
[root@node2 /] vgchange -ay vg_mysql_data
# this command will report the available logical volumes, you should see the LV ACTIVE now
[root@node2 /] lvscan
ACTIVE '/dev/vg_mysql_data/lv_mysql' [20.00 TiB]

10. After the device has been attached it should show up in the “node2” operating system. You can retrieve the new UUID using the following command. (If you have mounted using “/dev/disk/by-name” and the name of the new disk is the same as the previous one, you do not need to update “/etc/fstab”; this holds true, for example, for VM instances created using the Percona XtraDB click-to-deploy installer.)

[root@node2 /] ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Feb 14 15:56 4ad2d22b-500a-4ad2-b929-12f38347659c -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 19 03:12 9e48fefc-960c-456f-95c9-9d893bcafc62 -> ../../dm-0 # This is the 'new' disk

11. Add the UUID you retrieved in step 10 to “/etc/fstab” (unless you are using “/dev/disk/by-name” with the same disk name) and mount the new disk:

[root@node2 /] vi /etc/fstab
...
UUID=9e48fefc-960c-456f-95c9-9d893bcafc62 /var/lib/mysql ext4 defaults,noatime 0 0
...
[root@node2 /] mount -a

12. Verify the data is mounted correctly and that the ownership of the data directory and its contents uses the correct UID/GID for the MySQL user on the destination system (this is usually fine, but a quick check is cheap):

[root@node2 /] ls -lhtR /var/lib/mysql/

13. You are now ready to start MySQL and verify that the node has in fact initialized with IST (provided you have sufficient “gcache” available, there shouldn’t be any other issues):

[root@node2 /] service mysql start & tail -f /var/log/mysql/error.log

The Percona XtraDB click-to-deploy tool can be used for automated deployments, and further details on creating a cluster on Google Compute Engine using this method can be found in Jay Janssen’s post, “Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process.” The post Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots appeared first on MySQL Performance Blog.

MySQL Character encoding – part 2
In MySQL Character encoding – part 1 we stated that the myriad of ways in which character encoding can be controlled can lead to many situations where your data may not be available as expected.
UTF8 was designed on a placemat in a New Jersey diner one night in September or so 1992.
Setting MySQL Client and Server Character encoding.
Let’s restart MySQL with the correct setting for our purpose, UTF8. Here we can see the setting in the MySQL configuration file, in this case /etc/mysql/my.cnf.
character-set-server = utf8
This change is then reflected in the session and global variables once the instance is restarted with the new configuration parameter.
mysql> SELECT @@global.character_set_server, @@session.character_set_client;
+-------------------------------+--------------------------------+
| @@global.character_set_server | @@session.character_set_client |
+-------------------------------+--------------------------------+
| utf8 | utf8 |
+-------------------------------+--------------------------------+
1 row in set (0.00 sec)
Now we have verified the server and client are set to use UTF8, we can go ahead, continue developing our application and create a new table people.
mysql> CREATE TABLE people (first_name VARCHAR(30) NOT NULL,
last_name VARCHAR(30) NOT NULL);
Query OK, 0 rows affected (0.13 sec)
Now let’s enter some data into the new table, which was created with the server and client configured for UTF8.
Something appears to have gone terribly wrong: the accent in Maciek’s surname now appears as a question mark.
mysql> SELECT @@session.character_set_server, @@session.character_set_client;
+--------------------------------+--------------------------------+
| @@session.character_set_server | @@session.character_set_client |
+--------------------------------+--------------------------------+
| utf8 | utf8 |
+--------------------------------+--------------------------------+
1 row in set (0.00 sec)
The database settings are still UTF8; this should have worked.
mysql> USE fosdem;
mysql> SHOW CREATE TABLE people\G
*************************** 1. row ***************************
Table: people
Create Table: CREATE TABLE `people` (
`first_name` varchar(30) NOT NULL,
`last_name` varchar(30) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Looking at the table, we see that despite being created under a server set to use UTF8, it appears to be set to use latin1.
How can this be? Let’s look at the session settings.
mysql> SHOW SESSION VARIABLES LIKE 'character_set_%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | utf8 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)
We can see the server and client values are as expected, but database is set to something else.
mysql> SHOW CREATE DATABASE fosdem\G
*************************** 1. row ***************************
Database: fosdem
Create Database: CREATE DATABASE `fosdem` /*!40100 DEFAULT CHARACTER SET latin1 */
1 row in set (0.00 sec)
Since the database was created while the server was set to latin1, it inherited that charset, and the setting persists even when the server setting changes.
Can we fix this?
mysql> SET NAMES utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT last_name, HEX(last_name) FROM people;
+------------+----------------------+
| last_name | HEX(last_name) |
+------------+----------------------+
| Lemon | 4C656D6F6E |
| Müller | 4DFC6C6C6572 |
| Dobrza?ski | 446F62727A613F736B69 |
+------------+----------------------+
3 rows in set (0.00 sec)
mysql> SET NAMES latin2;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT last_name, HEX(last_name) FROM people;
+------------+----------------------+
| last_name | HEX(last_name) |
+------------+----------------------+
| Lemon | 4C656D6F6E |
| Müller | 4DFC6C6C6572 |
| Dobrza?ski | 446F62727A613F736B69 |
+------------+----------------------+
3 rows in set (0.00 sec)
Unfortunately, no matter how I try to read the data, 0x3F is ‘?’, so the ‘ń’ has been lost forever. Therefore it may not be enough to reconfigure the server, as a mismatch between client and server can permanently break data, due to the implicit conversion inside the MySQL server.
Implicit conversions happen silently when characters of one character set are inserted into a column with a different character set. This behaviour can be controlled by SQL_MODE, which allows you to force MySQL to raise an error instead.
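The lossy step can be reproduced outside MySQL. The sketch below (Python, purely illustrative – the server’s converter behaves like the 'replace' error handler here) shows how ‘Dobrzański’ degrades when forced into latin1:

```python
# 'ń' (U+0144) has no latin1 code point, so a lossy conversion
# substitutes '?' (0x3F) -- exactly the byte seen in the HEX() output.
name = "Dobrzański"
broken = name.encode("latin1", errors="replace")

print(broken.hex().upper())     # 446F62727A613F736B69, matching HEX(last_name)
print(broken.decode("latin1"))  # Dobrza?ski
```

Once the 0x3F byte is on disk there is no way back, which is why the SELECTs above return ‘?’ under every client character set.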
In MySQL Character encoding – part 1 we established there were a number of places you can control the character settings, now we can add a couple of important observations to our view of Character encoding settings.
Session settings
character_set_server
character_set_client
character_set_connection
character_set_database
character_set_results
Schema level Defaults – Affects new tables
Table level Defaults – Affects new columns
Column charsets
We have seen how a table created with no explicit charset declaration inherits the database (schema) charset, but what happens to a column when the table charset is changed?
mysql> USE fosdem;
mysql> CREATE TABLE test (a VARCHAR(300), INDEX (a));
Query OK, 0 rows affected (0.62 sec)
mysql> SHOW CREATE TABLE test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`a` varchar(300) DEFAULT NULL,
KEY `a` (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> ALTER TABLE test DEFAULT CHARSET = utf8;
Query OK, 0 rows affected (0.08 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> SHOW CREATE TABLE test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`a` varchar(300) CHARACTER SET latin1 DEFAULT NULL,
KEY `a` (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
The columns in a table inherit the table’s default charset when they are created, but do not change when the table default is changed; however, new columns added after the ALTER TABLE will inherit UTF8.
mysql> ALTER TABLE test ADD b VARCHAR(10);
Query OK, 0 rows affected (0.74 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> SHOW CREATE TABLE test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`a` varchar(300) CHARACTER SET latin1 DEFAULT NULL,
`b` varchar(10) DEFAULT NULL,
KEY `a` (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
What can you do if you detect inconsistencies in your MySQL Character encoding settings?
First of all, keep calm and don’t start by changing things. Analyse the situation and make sure you understand what settings you have and what your application expects when reading and writing data from the database.
Once you detect a problem, try to assess the extent of the damage. What is the scope, and is it consistent? Are all the rows bad, or just a subset, such as the last day’s worth of inserts? Are all the bad rows broken in the same way, or is there a mixture of problems affecting different sets of rows? Are the rows actually repairable, or has the inserted data already been destroyed, making a recovery from backup and roll-forward necessary? Has any character mapping occurred during writes (e.g. unicode over latin1/latin1)? All of this is necessary to get a good picture of where you are starting from.
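As part of that assessment it helps to classify rows automatically. Here is a minimal sketch (Python, illustrative only – the function name and heuristic are mine, not a Percona tool) of one common check: spotting UTF-8 bytes that were stored and then re-read as latin1, producing strings like ‘MÃ¼ller’:

```python
def looks_double_encoded(s: str) -> bool:
    """Heuristic: if re-encoding the string as latin1 yields bytes that
    decode as valid UTF-8 to something *different*, the row was probably
    UTF-8 data mislabelled as latin1 (double-encoded)."""
    try:
        round_trip = s.encode("latin1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return False  # not representable as latin1 bytes, or not valid UTF-8
    return round_trip != s

print(looks_double_encoded("MÃ¼ller"))  # True  -> repairable by re-decoding
print(looks_double_encoded("Müller"))   # False -> already clean
print(looks_double_encoded("Lemon"))    # False -> plain ASCII, unaffected
```

It is only a heuristic – short ASCII strings reveal nothing, and some byte sequences are valid in both encodings – which is why the manual assessment above still matters.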
Take care not to do any of the following:
Try to fix this table by table, unless you really only have a single table. Any fix will affect the application and database as a whole; fixing a single table may lead to inconsistencies and further problems elsewhere.
ALTER TABLE … DEFAULT CHARSET = …, as it only changes the default character set used for new columns.
ALTER TABLE … CONVERT TO CHARACTER SET …, as it assumes the stored data is currently valid in the declared character set; it is not for fixing broken encoding.
ALTER TABLE … MODIFY col_name … CHARACTER SET …
What needs to be fixed?
Schema default character set
ALTER SCHEMA fosdem DEFAULT CHARSET = utf8;
Tables with text columns: CHAR, VARCHAR, TEXT, TINYTEXT, LONGTEXT
What about ENUM?
The information schema can provide a list of candidate tables.
SELECT CONCAT(c.table_schema, '.', c.table_name) AS candidate_table
FROM information_schema.columns c
WHERE c.table_schema = 'fosdem'
AND c.column_type REGEXP '^(.*CHAR|.*TEXT|ENUM)(\(.+\))?$'
GROUP BY candidate_table;
You must also ensure the database and application configuration are correct, to avoid having the newly fixed tables broken again by new data being written incorrectly.
How do I fix this?
Option 1. Dump and restore (Requires downtime)
Dump the data preserving the bad configuration and drop the old database
# mysqldump -u root -p --skip-set-charset --default-character-set=latin1 fosdem > fosdem.sql
mysql> DROP SCHEMA fosdem;
Correct table definitions in the dump file by editing DEFAULT CHARSET in all CREATE TABLE statements, then create the database again and import the data.
mysql> CREATE SCHEMA fosdem DEFAULT CHARSET utf8;
# mysql -u root -p --default-character-set=utf8 fosdem < fosdem.sql
Option 2. Two step conversion (Requires downtime)
Perform a two step conversion with ALTER TABLE, converting the original encoding to VARBINARY/BLOB and then from there to the target encoding. Conversion from/to BINARY/BLOB removes character set context.
Stop applications
On each table, for each text column perform:
ALTER TABLE tbl MODIFY col_name VARBINARY(255);
ALTER TABLE tbl MODIFY col_name VARCHAR(255) CHARACTER SET utf8;
You may specify multiple columns per ALTER TABLE
Fix the problems (application and/or db configs)
Restart applications
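Generating the two-step statements by hand is error-prone across many tables, so a small helper can emit them from a column list. This sketch (Python, illustrative – the table/column names and sizes are examples, and in practice you would read them from information_schema as shown earlier) produces the BLOB-detour ALTERs for Option 2:

```python
def two_step_alters(table, columns, target_charset="utf8"):
    """Emit the two-step conversion ALTERs for (col_name, size) pairs.
    Going via VARBINARY strips the (wrong) character set context."""
    to_binary = ", ".join(
        f"MODIFY {col} VARBINARY({size})" for col, size in columns)
    to_charset = ", ".join(
        f"MODIFY {col} VARCHAR({size}) CHARACTER SET {target_charset}"
        for col, size in columns)
    return [f"ALTER TABLE {table} {to_binary};",
            f"ALTER TABLE {table} {to_charset};"]

# Multiple columns per ALTER TABLE, as noted above:
for stmt in two_step_alters("people", [("first_name", 30), ("last_name", 30)]):
    print(stmt)
```

Note that a production version would also carry over NULL/NOT NULL and DEFAULT attributes, since MODIFY redefines the whole column.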
Option 3. Online character set fix (minimal downtime, approximately 1 minute)
Using pt-online-schema-change with the PSCE plugin and a small patch for pt-online-schema-change, you can convert columns online in the live database.
Start pt-online-schema-change on all tables, one by one, with table rotation disabled (--no-swap-tables), or drop the pt-online-schema-change triggers
Wait until all tables have been converted
Stop applications
Fix the problems (application and/or db configs)
Rotate the tables – should take a minute or so
Restart applications
Currently the patch to pt-online-schema-change and the plugin are available on GitHub.
In MySQL Character encoding – part 3 we will cover the gotchas in the process of fixing broken encoding, and what best practices to follow to get it right each time you set up a new server or create a new database.

MySQL Cluster on my "Windows computer"
It's been some time since I wrote my last blog. As usual this means that I have been busy developing new things. Most of my blogs describe new developments in the MySQL Server, MySQL Cluster, MySQL partitioning and other areas I have been busy developing in. For the last year I have been quite busy working on MySQL Cluster 7.4, the newest cluster release. As usual we have been able to add some performance improvements, but for MySQL Cluster 7.4 the goal has also been to improve quality. There are a number of ways to improve the quality of a cluster: one can make it faster to restart as problems appear, and one can improve code quality. We have done both of those things.

In order to improve my own possibilities to test my new developments I decided to invest in a "Windows computer". This means a computer that I house in the window in my living room :)

There are some very interesting new computers that you can buy from Intel today that work perfectly as a test platform for a distributed database like MySQL Cluster: the Intel NUC. It is a computer that you assemble on your own. It comes equipped with a dual-core Intel i5-4250U CPU that runs at 1.3 GHz and can reach a turbo frequency of 2.3 GHz (the current generation has been upgraded to Intel i5-5250U processors). To this computer you can add up to 16 GByte of memory and a 2.5" SSD drive, and it's even possible to also fit an M.2 SSD card in addition to the 2.5" SSD drive. Personally I thought it was enough to have one SSD drive of 250 GB.

So this computer is amazingly small, but still amazingly capable. I have it placed in the window in front of my desk, behind our TV set. It is so quiet that I can't even hear it when all the rest of the equipment is shut off. Still it is capable enough to run our daily test suite with a 4 data node setup.
This test suite runs more than 400 different test cases, testing node restarts, system restarts, index scans, backups, PK lookups and so forth. One such test run takes 13 hours, so it is nice to have this running on such a small box in the background, without me having to interfere at all and without it making any noise.

It's amazing how scalable the MySQL Cluster software is: I can run test suites with a 4 data node setup on a box that fits in the palm of my hand, and at the same time I can run benchmarks using 100 2-socket servers, requiring probably 4-5 racks of computers, that achieve 200M reads per second.

Here is a little description of how you can set up a similar box to run daily test runs for MySQL Cluster, if ever you decide that you want to try to develop a new feature for MySQL Cluster.

1) First, the test suite requires an ndb_cpcd process to be running. This process takes care of starting and stopping the MySQL Cluster processes. To set it up, create a new directory; I called mine /home/mikael/cpcd. In this directory create a small script that starts ndb_cpcd. It contains the following command:

ndb_cpcd --work-dir=/home/mikael/cpcd --logfile=/home/mikael/cpcd/cpcd.log

For it to run you need to compile MySQL Cluster and copy ndb_cpcd to e.g. /usr/local/bin. This binary doesn't really change very often, so you can have one compiled and need not change it. I call this script start_ndb_cpcd.sh. Then you start ndb_cpcd in one window using:

./start_ndb_cpcd.sh

2) In order to run the test suite I created a directory called /home/mikael/mysql_clones/autotest_run. This is where I run the test suite from.
For this I need two files. The first is autotest-boot.sh, which is found in the MySQL Cluster source at storage/ndb/test/run-test/autotest-boot.sh. In addition I create here the configuration file used by this autotest-boot.sh script, called autotest.conf. In my case this file contains:

install_dir="/home/mikael/mysql_clones/autotest_install"
build_dir="/home/mikael/mysql_clones/autotest_build"
git_local_repo="/home/mikael/mysql_clones/test_74"
git_remote_repo="/home/mikael/mysql_git"
base_dir="/home/mikael/mysql_clones/autotest_results"
target="x86_64_Linux"
hosts="mikael1 mikael1 mikael1 mikael1 mikael1 mikael1"
report=
WITH_NDB_JAVA_DEFAULT=0
WITH_NDB_NODEJS=0
export WITH_NDB_JAVA_DEFAULT WITH_NDB_NODEJS
MAKEFLAGS="-j7"
export MAKEFLAGS

install_dir is the place where the build of the MySQL Cluster source is installed. build_dir is the place where the build of MySQL Cluster is placed. git_local_repo is a git branch of MySQL Cluster 7.4. git_remote_repo is a git repo of the entire MySQL source. base_dir is the directory where the results of the test run are placed in a compressed tarball. target is the computer and OS, in my case x86_64 running Linux. hosts is the list of hosts I use; there should be 6 hosts here, and in my case they are all the same host, which is called mikael1. I have report set to nothing, and in order to avoid having to build the Java API and NodeJS APIs I set WITH_NDB_JAVA_DEFAULT=0 and WITH_NDB_NODEJS=0. Finally I set MAKEFLAGS to get good parallelism when building MySQL Cluster.

In order to run I need to have git installed, CMake as well, and possibly some more things. If one uses an older git version (like I do), then one needs to change the git command in autotest-boot.sh a little bit.

Finally, one needs to add a new file to the MySQL branch from where you run, /home/mikael/mysql_clones/test_74 in my case.
This file is called, in my case, /home/mikael/mysql_clones/test_74/storage/ndb/test/run-test/conf-mikael1.cnf. autotest-boot.sh creates the config file of the cluster from this file. In my case this file contains:

# Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

[atrt]
basedir = CHOOSE_dir
baseport = 14000
#clusters = .4node
clusters = .2node
fix-nodeid=1
mt = 0

[ndb_mgmd]

[mysqld]
innodb
skip-grant-tables
socket=mysql.sock
default-storage-engine=myisam

[client]
protocol=tcp

[cluster_config.2node]
ndb_mgmd = CHOOSE_host1
#ndbd = CHOOSE_host2,CHOOSE_host3,CHOOSE_host4,CHOOSE_host5
ndbd = CHOOSE_host2,CHOOSE_host3
ndbapi= CHOOSE_host1,CHOOSE_host1,CHOOSE_host1
#mysqld = CHOOSE_host1,CHOOSE_host6
mysqld = CHOOSE_host1,CHOOSE_host4
NoOfReplicas = 2
IndexMemory = 100M
DataMemory = 500M
BackupMemory = 64M
MaxNoOfConcurrentScans = 100
MaxNoOfSavedMessages= 5
NoOfFragmentLogFiles = 4
FragmentLogFileSize = 32M
ODirect=1
MaxNoOfAttributes=2000
Checksum=1
SharedGlobalMemory=256M
DiskPageBufferMemory=256M
#FileSystemPath=/home/mikael/autotest
#FileSystemPathDataFiles=/home/mikael/autotest
#FileSystemPathUndoFiles=/home/mikael/autotest
InitialLogfileGroup=undo_buffer_size=64M;undofile01.dat:256M;undofile02.dat:256M
InitialTablespace=datafile01.dat:256M;datafile02.dat:256M
TimeBetweenWatchDogCheckInitial=60000

[cluster_config.ndbd.1.2node]
TwoPassInitialNodeRestartCopy=1

[cluster_config.ndbd.3.2node]
TwoPassInitialNodeRestartCopy=1

In the current setup it uses 2 data nodes, but it can also run easily with 4 data nodes by simply changing a few lines above. mt = 0 means that I always run with the ndbd process, which is the smallest manner to run MySQL Cluster data nodes, where all blocks run within one thread. It is possible to run them in more than 50 threads, but on this machine it makes more sense to use ndbd. The name of the file should be conf-HOST.cnf, where you replace HOST with the hostname of your test computer.

Finally, in my case I also changed one line in storage/ndb/test/run-test/atrt-gather-results.sh:

# rsync -a --exclude='BACKUP' --exclude='ndb_*_fs' "$1" .
rsync -a --exclude='BACKUP' "$1" .

The idea is that I should also get the file system of the cluster reported as part of the result tarball.
This increases the size of the result tarball significantly, but if one is looking for bugs that happen in the writing of REDO logs, UNDO logs, checkpoints and so forth, then this is required to be able to find the problem.

Finally we have come to the point where we execute the actual test run. We place ourselves in the autotest_run directory and execute the command:

./autotest-boot.sh --clone=mysql-5.6-cluster-7.4 daily-basic

During the test execution one can look into autotest_install; there is a directory there starting with "run" that contains the currently running test, and if a test fails for some reason a result.number directory is created there with the log information from the failure. Successful test cases don't get any logs produced. The file log.txt contains the current test being executed.

The tests executed for daily-basic are defined in the file /home/mikael/mysql_clones/test_74/storage/ndb/test/run-test/daily-basic-tests.txt. By adding or removing tests from this file you can add your own test cases; most of the tests are defined in the /home/mikael/mysql_clones/test_74/storage/ndb/test/ndbapi directory.

Good luck in trying out this test environment. Remember that any changes to files in the test_74 directory also need to be committed in git, since the test_74 directory is cloned off using git commands.