MySQL Cluster Manager 1.1.1 (GA) Available

The latest (GA) version of MySQL Cluster Manager is available through Oracle’s E-Delivery site. You can download the software and try it out for yourself (just select “MySQL Database” as the product pack, select your platform, click “Go” and then scroll down to find the software).

So what’s new in this version?

If you’ve looked at MCM in the past then the first thing you’ll notice is that it’s now much simpler to get up and running – in particular, configuring and running the agent has been reduced to running a single executable (called "mcmd").

The second change is that you can now stop the MCM agents from within the MCM CLI – for example "stop agents mysite" will safely stop all of the agents running on the hosts defined by "mysite".

Those two changes make it much simpler for the novice user to get up and running quickly; for the more expert user, the most significant change is that MCM can now manage multiple clusters.

Obviously, there are a bunch of more minor changes as well as bug fixes.

Refresher – So What is MySQL Cluster Manager?

MySQL Cluster Manager provides the ability to control the entire cluster as a single entity, while also supporting very granular control down to individual processes within the cluster itself. Administrators are able to create and delete entire clusters, and to start, stop and restart the cluster with a single command. As a result, administrators no longer need to manually restart each data node in turn, in the correct sequence, or to create custom scripts to automate the process.

MySQL Cluster Manager automates on-line management operations, including the upgrade, downgrade and reconfiguration of running clusters as well as adding nodes on-line for dynamic, on-demand scalability, without interrupting applications or clients accessing the database. Administrators no longer need to manually edit configuration files and distribute them to other cluster nodes, or to determine if rolling restarts are required. MySQL Cluster Manager handles all of these tasks, thereby enforcing best practices and making on-line operations significantly simpler, faster and less error-prone.

MySQL Cluster Manager is able to monitor cluster health at both an Operating System and per-process level by automatically polling each node in the cluster. It can detect if a process or server host is alive, dead or has hung, allowing for faster problem detection, resolution and recovery.

To deliver 99.999% availability, MySQL Cluster has the capability to self-heal from failures by automatically restarting failed Data Nodes, without manual intervention. MySQL Cluster Manager extends this functionality by also monitoring and automatically recovering SQL and Management Nodes.

How is it Implemented?

MySQL Cluster Manager Architecture

MySQL Cluster Manager is implemented as a series of agent processes that co-operate with each other to manage the MySQL Cluster deployment; one agent runs on each host machine that will run a MySQL Cluster node (process). The administrator uses the regular mysql command to connect to any one of the agents, using the agent’s port number (which defaults to 1862, compared with the MySQL Server default of 3306).
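As a sketch, connecting to an agent with the mysql client might look like this (the host address is hypothetical, and the mcmd/super credentials are the documented MCM defaults – change them for production use):

```
$ mysql -h 192.168.0.10 -P 1862 -u mcmd -psuper --prompt='mcm> '
mcm>
```

From the mcm> prompt you then issue MCM commands rather than SQL statements.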

How is it Used?

When using MySQL Cluster Manager to manage your MySQL Cluster deployment, the administrator no longer edits the configuration files (for example config.ini and my.cnf); instead, these files are created and maintained by the agents. In fact, if those files are manually edited, the changes will be overwritten by the configuration information which is held within the agents. Each agent stores all of the cluster configuration data, but it only creates the configuration files that are required for the nodes that are configured to run on that host.

Similarly, when using MySQL Cluster Manager, management actions must not be performed by the administrator using the ndb_mgm command (which connects directly to the management node, meaning that the agents would have no visibility of any operations performed with it).

When using MySQL Cluster Manager, the ‘angel’ processes are no longer needed (or created) for the data nodes, as it becomes the responsibility of the agents to detect the failure of the data nodes and recreate them as required. Additionally, the agents extend this functionality to include the management nodes and MySQL Server nodes.

Installing, Configuring & Running MySQL Cluster Manager

On each host that will run Cluster nodes, install the MCM agent. To do this, just download the zip file from Oracle E-Delivery and then extract the contents into a convenient location:

$ unzip V27167-01.zip
$ tar xf mysql-cluster-manager-1.1.1-linux-rhel5-x86-32bit.tar.gz
$ mv mysql-cluster-manager-1.1.1-linux-rhel5-x86-32bit ~/mcm

Starting the agent is then trivial (remember to repeat this on each host though):

$ cd ~/mcm
$ bin/mcmd &

Next, some examples of how to use MCM.

Example 1: Create a Cluster from Scratch

The first step is to connect to one of the agents and then define the set of hosts that will be used for the Cluster:
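A minimal sketch of that first step (the host addresses and the site name "mysite" are assumptions for illustration):

```
mcm> create site --hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13 mysite;
```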

Next step is to tell the agents where they can find the Cluster binaries that are going to be used, define what the Cluster will look like (which nodes/processes will run on which hosts) and then start the Cluster:
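For example (the basedir path, package name, cluster name and host-to-process mapping are all assumptions for illustration):

```
mcm> add package --basedir=/usr/local/mysql_7_1_9a 7_1_9a;
mcm> create cluster --package=7_1_9a
       --processhosts=ndb_mgmd@192.168.0.10,ndbd@192.168.0.11,ndbd@192.168.0.12,mysqld@192.168.0.13
       mycluster;
mcm> start cluster mycluster;
```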

Example 2: On-Line upgrade of a Cluster

A great example of how MySQL Cluster Manager can simplify management operations is upgrading the Cluster software. If performing the upgrade by hand then there are dozens of steps to run through which is time consuming, tedious and subject to human error (for example, restarting nodes in the wrong order could result in an outage). With MySQL Cluster Manager, it is reduced to two commands – define where to find the new version of the software and then perform the rolling, in-service upgrade:
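A sketch of those two commands, upgrading a hypothetical cluster to a newer release (the basedir path, package and cluster names are assumptions):

```
mcm> add package --basedir=/usr/local/mysql_7_1_15 7_1_15;
mcm> upgrade cluster --package=7_1_15 mycluster;
```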

Behind the scenes, each node will be halted and then restarted with the new version – ensuring that there is no loss of service.

Example 3: Automated On-Line Add-Node


Since MySQL Cluster 7.0 it has been possible to add new nodes to a Cluster while it is still in service; there are a number of steps involved and, as with on-line upgrades, a mistake by the administrator could lead to an outage.

We’ll now look at how this is automated when using MySQL Cluster Manager; the first step is to add any new hosts (servers) to the site and indicate where those hosts can find the Cluster software:
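A sketch of that sequence, including defining and starting the new processes (the new host addresses, package and cluster names are assumptions):

```
mcm> add hosts --hosts=192.168.0.14,192.168.0.15 mysite;
mcm> add package --basedir=/usr/local/mysql_7_1_9a --hosts=192.168.0.14,192.168.0.15 7_1_9a;
mcm> add process --processhosts=ndbd@192.168.0.14,ndbd@192.168.0.15 mycluster;
mcm> start process --added mycluster;
```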

I observed that you are also adding API (mysqld) nodes on the existing API servers. My question is: since each API server already has 2 mysqld instances running, do we need to change the listening port of the second mysqld instance?

Also, am I correct in thinking that for each data node added to the cluster, a corresponding API instance should be created?

You’re correct that those mysqlds would need different port numbers from the mysqlds that were already on those hosts. I’d have to double-check whether that is handled automatically – if not, then when adding the new processes you could include the option –set=mysqld:port=3307
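For example, the suggestion above might be applied when adding the process like this (the host address and cluster name are hypothetical):

```
mcm> add process --processhosts=mysqld@192.168.0.13 --set=mysqld:port=3307 mycluster;
```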

I’m actually doing my simulation on HPCloud instances but I could not make the cluster work consistently. I create a working cluster, then delete it. Then I create another, and the second one won’t come up. Sometimes I modify IndexMemory and DataMemory and then suddenly the cluster won’t come up. Sorry for not providing you with more details.

Thanks for giving a quick refresher on MCM. It helps people like me who are new to the MySQL world.

Is it possible to run MCM agents on a different subnet than the cluster processes? I’m in the process of setting up an 8-node cluster on 7.2 CGE. Each host has 2 separate VLANs/subnets, app-db and mgmt respectively. We want to use the mgmt VLAN to manage the cluster, including setting up MySQL Enterprise Monitor agents to monitor the cluster and send data to a management repository running on a separate node in the management network. The app-db VLAN will be used primarily for apps to talk to the mysqld nodes.

In terms of communications:
– The MCM agents must all be able to talk to each other
– The MCM agents running on the same host as the ndb_mgmd nodes must be able to communicate with their local ndb_mgmds over the management ‘wire protocol’
– The MCM agents on all hosts must be able to start and stop local processes

Hi Andrew,
Nice blog, thanks for catering for all the MySQL Cluster fundas on one site. I have one query: how do I enable the federated engine using MySQL Cluster Manager, as we cannot modify the config file manually? Please suggest. Thanks in advance.

at the moment there is no option in MCM to remove nodes from the Cluster. For Cluster itself, removing data nodes is a tricky process as it isn’t an on-line operation and involves performing a backup, shutdown, start-up and restore. We find that while “elastic scaling” is a popular idea, the reality is that Cluster users only ever seem to need to scale out and not back in again. Do you have a scale-in requirement/use-case?

Actually, we are currently testing MCM and exploring the various administrative tasks.

As of now there is no requirement.

Can you also tell me how we can add two clusters in one site? When I try to add hosts, the prompt hangs. When I open another mcm client session, it shows the host as added, status available, but version unknown.

Adding 2 clusters to the same site is straightforward: you just repeat the CREATE CLUSTER command and then make sure that you don’t have conflicting resources (for example, use the SET command to make sure that port and portnumber are distinct for nodes on the same host). I just tried this and it works fine.
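A sketch of the SET usage described above, overriding the port for a single mysqld (the node id 52 and cluster name are assumptions):

```
mcm> set port:mysqld:52=3307 mycluster;
```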

Adding hosts to an existing site should also work – I’ve just tested it and it worked fine. Is mcmd running on the new host (note that if you cloned the new host from one of the others then make sure you delete the mcm_data directory from it otherwise MCM will get confused and the mcmd may stop). I tested ADD HOST without mcmd running on the new host and got exactly the behaviour you describe.