This release incorporates all bugfixes and changes made in
previous MySQL Cluster releases, as well as all bugfixes and
feature changes which were added in mainline MySQL 5.1 through
MySQL 5.1.44 (see Changes in MySQL 5.1.44 (2010-02-04)).

Note

Please refer to our bug database at
http://bugs.mysql.com/ for more details about
the individual bugs fixed in this version.

Cluster API:
It is now possible to determine, using the
ndb_desc utility or the NDB API, which data
nodes contain replicas of which partitions. For
ndb_desc, a new
--extra-node-info option includes this
information in the utility's output. A new NDB API method,
Table::getFragmentNodes(), obtains
this information programmatically.
(Bug #51184)

Formerly, the REPORT and
DUMP commands returned output to all
ndb_mgm clients connected to the same MySQL
Cluster. Now, these commands return their output only to the
ndb_mgm client that actually issued the
command.
(Bug #40865)

Replication; Cluster Replication:
MySQL Cluster Replication now supports attribute promotion and
demotion for row-based replication between columns of different
but similar types on the master and the slave. For example, it
is possible to promote an INT
column on the master to a BIGINT
column on the slave, and to demote a
TEXT column to a
VARCHAR column.

The implementation of type demotion distinguishes between lossy
and non-lossy type conversions, and their use on the slave can
be controlled by setting the
slave_type_conversions global
server system variable.
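
For example, both lossy and non-lossy conversions can be
permitted on the slave by setting this variable; the statement
below assumes the documented ALL_LOSSY and
ALL_NON_LOSSY flag values:

```sql
-- Run on the slave; permits both lossy and non-lossy attribute
-- conversions during row-based replication.
SET GLOBAL slave_type_conversions = 'ALL_LOSSY,ALL_NON_LOSSY';
```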

As part of the fix for this issue, rows for empty epochs are now
recorded in the ndb_binlog_index table even
when --ndb-log-empty-epochs is 0.
(Bug #49559, Bug #11757505)

If a node or cluster failure occurred while
mysqld was scanning the
ndb.ndb_schema table (which it does when
attempting to connect to the cluster), insufficient error
handling could cause mysqld to crash in
certain cases. This could happen in a MySQL Cluster with a large
number of tables, when trying to restart data nodes while one or
more mysqld processes were restarting.
(Bug #52325)

After running a mixed series of node and system restarts, a
system restart could hang or fail altogether. This was caused by
setting the value of the newest completed global checkpoint too
low for a data node performing a node restart, which led to the
node reporting incorrect GCI intervals for its first local
checkpoint.
(Bug #52217)

When performing a complex mix of node restarts and system
restarts, the node that was elected as master sometimes required
optimized node recovery due to missing REDO
information. When this happened, the node crashed with
Failure to recreate object ... during restart, error
721 (because the DBDICT restart
code was run twice). Now when this occurs, node takeover is
executed immediately, rather than being made to wait until the
remaining data nodes have started.
(Bug #52135)

References: See also: Bug #48436.

The redo log protects itself from being filled up by
periodically checking how much space remains free. If
insufficient redo log space is available, it sets the state
TAIL_PROBLEM, which results in transactions
being aborted with error code 410 (out of redo
log). However, this state was not set following a
node restart, which meant that if a data node had insufficient
redo log space following a node restart, it could crash a short
time later with Fatal error due to end of REDO
log. Now, this space is checked during node
restarts.
(Bug #51723)

The output of the ndb_mgm client
REPORT BACKUPSTATUS command could sometimes
contain errors due to uninitialized data.
(Bug #51316)

A GROUP BY query against
NDB tables sometimes did not use
any indexes unless the query included a FORCE
INDEX option. With this fix, indexes are used by such
queries (where otherwise possible) even when FORCE
INDEX is not specified.
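
To illustrate, a query such as the following previously needed
the hint in order to use an index; the table, column, and index
names shown are placeholders:

```sql
-- Placeholder schema: t1 is an NDB table with index idx_a on column a.
-- Before this fix, omitting FORCE INDEX (idx_a) could cause a
-- GROUP BY query like this one to ignore the index entirely.
SELECT a, COUNT(*)
    FROM t1 FORCE INDEX (idx_a)
    GROUP BY a;
```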
(Bug #50736)

Issuing a command in the ndb_mgm client after
it had lost its connection to the management server could cause
the client to crash.
(Bug #49219)

The ndb_print_backup_file utility failed to
function, due to a previous internal change in the NDB code.
(Bug #41512, Bug #48673)

When the
MemReportFrequency
configuration parameter was set in
config.ini, the ndb_mgm
client REPORT MEMORYUSAGE command printed its
output multiple times.
(Bug #37632)
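
For reference, this parameter is set in the data node section of
config.ini; the value below (a reporting interval in seconds) is
illustrative only:

```ini
[ndbd default]
# Report data node memory usage to the cluster log every 30 seconds.
MemReportFrequency=30
```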

ndb_mgm -e "... REPORT ..." did not write any
output to stdout.

The fix for this issue also prevents the cluster log from being
flooded with INFO messages when
DataMemory usage reaches
100%, and ensures that when the usage is decreased, an
appropriate message is written to the cluster log.
(Bug #31542, Bug #44183, Bug #49782)

InnoDB; Replication:
Column length information generated by
InnoDB did not match that generated
by MyISAM, which caused invalid
metadata to be written to the binary log when trying to
replicate BIT columns.
(Bug #49618)

Replication:
Metadata for GEOMETRY fields was not properly
stored by the slave in its table definitions.
(Bug #49836)

References: See also: Bug #48776.

Disk Data:
Inserting blob column values into a MySQL Cluster Disk Data
table that exhausted the tablespace resulted in misleading
no such tuple error messages rather than
the expected tablespace full error.

This issue appeared similar to Bug #48113, but had a different
underlying cause.
(Bug #52201)

References: See also: Bug #48113.

Disk Data:
The error message returned after attempting to execute
ALTER LOGFILE GROUP on a
nonexistent logfile group did not indicate the reason for the
failure.
(Bug #51111)
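
As a sketch, a statement such as the following (the logfile
group name lg_missing and the undo file name are placeholders)
now produces an error message identifying the missing logfile
group as the cause of the failure:

```sql
-- lg_missing does not exist; the resulting error message now
-- states why the statement failed.
ALTER LOGFILE GROUP lg_missing
    ADD UNDOFILE 'undo_2.log'
    ENGINE NDBCLUSTER;
```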

Cluster API:
When reading blob data with lock mode
LM_SimpleRead, the lock was not upgraded as
expected.
(Bug #51034)

Cluster API:
A number of issues were corrected in the NDB API coding examples
found in the storage/ndb/ndbapi-examples
directory in the MySQL Cluster source tree. These included
possible endless recursion in
ndbapi_scan.cpp as well as problems running
some of the examples on systems using Windows or OS X due to the
lettercase used for some table names.
(Bug #30552, Bug #30737)

In rare cases, if a thread was interrupted during a
FLUSH
PRIVILEGES operation, a debug assertion occurred later
due to improper diagnostics area setup. In addition, a
KILL operation could cause a
console error message referring to a diagnostics area state
without first ensuring that the state existed.
(Bug #33982)