This release also incorporates all bugfixes and changes made in
previous MySQL Cluster releases, as well as all bugfixes and
feature changes which were added in mainline MySQL 5.1 through
MySQL 5.1.51 (see Changes in MySQL 5.1.51 (2010-09-10)).

Note

Please refer to our bug database at
http://bugs.mysql.com/ for more details about
the individual bugs fixed in this version.

Added the
--skip-broken-objects option
for ndb_restore. This option causes
ndb_restore to ignore tables corrupted due to
missing blob parts tables, and to continue reading from the
backup file and restoring the remaining tables.
(Bug #54613)

References: See also Bug #51652.
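As an illustration, the option might be used in an ndb_restore invocation such as the following; the node ID, backup ID, and backup path shown are hypothetical values, not defaults:

```shell
# Restore data from backup 5 as node 1, skipping any tables that are
# corrupted due to missing blob parts tables, and continue restoring
# the remaining tables (IDs and path are hypothetical).
ndb_restore --nodeid=1 --backupid=5 --restore-data \
    --skip-broken-objects \
    --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-5
```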

Cluster Replication:
Added the --ndb-log-apply-status
server option, which causes a replication slave to apply updates
to the master's mysql.ndb_apply_status
table to its own ndb_apply_status table using
its own server ID in place of the master's server ID. This
option can be useful in circular or chain replication setups
when you need to track updates to
ndb_apply_status as they propagate from one
MySQL Cluster to the next in the circle or chain.
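For example, the option might be enabled in the replication slave's configuration file, as in this sketch (the [mysqld] section shown assumes a standard my.cnf layout):

```ini
[mysqld]
# Apply updates to the master's mysql.ndb_apply_status table using
# this slave's own server ID, so that ndb_apply_status updates can be
# tracked as they propagate around a circular replication setup.
ndb-log-apply-status=ON
```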

Cluster API:
It is now possible to stop or restart a node even while other
nodes are starting, using the MGM API
ndb_mgm_stop4() or
ndb_mgm_restart4() function,
respectively, with the force
parameter set to 1.
(Bug #58451)

References: See also Bug #58319.
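As a sketch, forcing a stop of a single data node with the MGM API might look like the following; the node ID, the handle setup, and the helper function name are hypothetical, and the code assumes the MGM API header and client library from a MySQL Cluster installation:

```c
#include <mgmapi.h>   /* MGM API; link against the NDB MGM client library */

/* Hypothetical helper: stop data node 2 even while other nodes are
   starting, by setting the force parameter to 1. */
void stop_node_forcibly(NdbMgmHandle handle)
{
    int node_list[] = { 2 };   /* hypothetical node ID */
    int disconnect = 0;
    /* abort = 0 (graceful stop), force = 1 (stop even during start) */
    ndb_mgm_stop4(handle, 1, node_list, 0, 1, &disconnect);
}
```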

Bugs Fixed

Cluster API:
In some circumstances, very large
BLOB read and write operations in
MySQL Cluster applications can cause excessive resource usage
and even exhaustion of memory. To fix this issue and to provide
increased stability when performing such operations, it is now
possible to set limits on the volume of
BLOB data to be read or written
within a given transaction in such a way that when these limits
are exceeded, the current transaction implicitly executes any
accumulated operations. This avoids an excessive buildup of
pending data which can result in resource exhaustion in the NDB
kernel. The limits on the amount of data to be read and on the
amount of data to be written before this execution takes place
can be configured separately. (In other words, it is now
possible in MySQL Cluster to specify read batching and write
batching that is specific to BLOB
data.) These limits can be configured either on the NDB API
level, or in the MySQL Server.

For the MySQL server, two new options are added. The
--ndb-blob-read-batch-bytes
option sets a limit on the amount of pending
BLOB data to be read before
triggering implicit execution, and the
--ndb-blob-write-batch-bytes
option controls the amount of pending
BLOB data to be written. These
limits can also be set using the mysqld
configuration file, or read and set within the
mysql client and other MySQL client
applications using the corresponding server system variables.
(Bug #59113)
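For example, both limits might be set in the server configuration file, as in this sketch (the 64KB values are arbitrary illustrations, not recommendations):

```ini
[mysqld]
# Implicitly execute accumulated operations once this much pending
# BLOB data has built up in the current transaction (values are
# illustrative only).
ndb-blob-read-batch-bytes=65536
ndb-blob-write-batch-bytes=65536
```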

Two related problems could occur with read-committed scans made
in parallel with transactions combining multiple (concurrent)
operations:

When committing a multiple-operation transaction that
contained concurrent insert and update operations on the
same record, the commit arrived first for the insert and
then for the update. If a read-committed scan arrived
between these operations, it could thus read incorrect data;
in addition, if the scan read variable-size data, it could
cause the data node to fail.

When rolling back a multiple-operation transaction having
concurrent delete and insert operations on the same record,
the abort arrived first for the delete operation, and then
for the insert. If a read-committed scan arrived between the
delete and the insert, it could incorrectly assume that the
record should not be returned (in other words, the scan
treated the insert as though it had not yet been committed).

(Bug #59496)

On Windows platforms, issuing a SHUTDOWN
command in the ndb_mgm client caused
management processes that had been started with the
--nodaemon option to exit
abnormally.
(Bug #59437)

A row insert or update followed by a delete operation on the
same row within the same transaction could in some cases lead to
a buffer overflow.
(Bug #59242)

References: See also Bug #56524. This bug was introduced by Bug #35208.

The FAIL_REP signal, used inside the NDB
kernel to declare that a node has failed, now includes the node
ID of the node that detected the failure. This information can
be useful in debugging.
(Bug #58904)

When executing a full table scan caused by a
WHERE condition using
unique_key IS NULL
in combination with a join, NDB
failed to close the scan.
(Bug #58750)

References: See also Bug #57481.

Issuing EXPLAIN EXTENDED for a
query that would use condition pushdown could cause
mysqld to crash.
(Bug #58553, Bug #11765570)

In some circumstances, an SQL trigger on an
NDB table could read stale data.
(Bug #58538)

During a node takeover, it was possible in some circumstances
for one of the remaining nodes to send an extra transaction
confirmation (LQH_TRANSCONF) signal to the
DBTC kernel block, conceivably leading to a
crash of the data node trying to take over as the new
transaction coordinator.
(Bug #58453)

A query having multiple predicates joined by
OR in the WHERE clause and
which used the sort_union access method (as
shown using EXPLAIN) could return
duplicate rows.
(Bug #58280)

Trying to drop an index while it was being used to perform scan
updates caused data nodes to crash.
(Bug #58277, Bug #57057)

When handling failures of multiple data nodes, an error in the
construction of internal signals could cause the cluster's
remaining nodes to crash. This issue was most likely to affect
clusters with large numbers of data nodes.
(Bug #58240)

The functions strncasecmp and
strcasecmp were declared in
ndb_global.h but never defined or used. The
declarations have been removed.
(Bug #58204)

The number of rows affected by a statement that used a
WHERE clause having an
IN condition with a value list
containing a great many elements, and that deleted or updated
enough rows such that NDB processed
them in batches, was not computed or reported correctly.
(Bug #58040)

A query using BETWEEN as part of a
pushed-down WHERE condition could cause
mysqld to hang or crash.
(Bug #57735)

Data nodes no longer allocated all memory prior to being ready
to exchange heartbeat and other messages with management nodes,
as in NDB 6.3 and earlier versions of MySQL Cluster. This caused
problems when data nodes configured with large amounts of memory
failed to show as connected or showed as being in the wrong
start phase in the ndb_mgm client even after
making their initial connections to and fetching their
configuration data from the management server. With this fix,
data nodes now allocate all memory as they did in earlier MySQL
Cluster versions.
(Bug #57568)

In some circumstances, it was possible for
mysqld to begin a new multi-range read scan
without having closed a previous one. This could lead to
exhaustion of all scan operation objects, transaction objects,
or lock objects (or some combination of these) in
NDB, causing queries to fail with
such errors as Lock wait timeout exceeded
or Connect failure - out of connection
objects.
(Bug #57481)

References: See also Bug #58750.

Queries of the form column IS [NOT]
NULL against a table having a unique index created
with USING HASH on
column always returned an empty
result.
(Bug #57032)
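The affected query shape can be illustrated as follows; the table definition is a hypothetical example, and before this fix the SELECT always returned an empty result on NDB even when matching rows existed:

```sql
CREATE TABLE t (
  a INT NOT NULL,
  b INT,
  UNIQUE KEY (b) USING HASH
) ENGINE=NDBCLUSTER;

-- Previously always returned an empty result on NDB tables,
-- regardless of whether rows with b = NULL existed.
SELECT * FROM t WHERE b IS NULL;
```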

When a slash character (/) was used as part
of the name of an index on an NDB
table, attempting to execute a TRUNCATE
TABLE statement on the table failed with the error
Index not found, and the table was
rendered unusable.
(Bug #38914)

Partitioning; Disk Data:
When using multi-threaded data nodes, an
NDB table created with a very large
value for the MAX_ROWS option could cause
ndbmtd to crash during a system restart if this
table was dropped and a new table having fewer partitions, but the
same table ID, was created. This was because the server attempted
to examine each partition whether or not it actually existed.

This issue is the same as that reported in Bug #45154, except
that the current issue is specific to ndbmtd
instead of ndbd.
(Bug #58638)

Disk Data:
In certain cases, a race condition could occur when
DROP LOGFILE GROUP removed the
logfile group while a read or write of one of the affected files
was in progress, which in turn could lead to a crash of the data
node.
(Bug #59502)

Disk Data:
A race condition could sometimes be created when
DROP TABLESPACE was run
concurrently with a local checkpoint; this could in turn lead to
a crash of the data node.
(Bug #59501)

Disk Data:
What should have been an online drop of a
multi-column index was instead performed offline.
(Bug #55618)

Disk Data:
When at least one data node was not running, queries against the
INFORMATION_SCHEMA.FILES table took
an excessive length of time to complete because the MySQL server
waited for responses from any stopped nodes to time out. Now, in
such cases, MySQL does not attempt to contact nodes which are
not known to be running.
(Bug #54199)
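For example, a query of the following sort against the FILES table was affected; the column list and WHERE clause shown are an illustration:

```sql
-- Previously slow to return while any data node was stopped; with
-- this fix, the server does not contact nodes not known to be running.
SELECT FILE_NAME, FILE_TYPE, FREE_EXTENTS, TOTAL_EXTENTS
  FROM INFORMATION_SCHEMA.FILES
 WHERE ENGINE = 'ndbcluster';
```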

Cluster Replication:
When a mysqld performing replication of a
MySQL Cluster that uses ndbmtd is forcibly
disconnected (thus causing an API_FAIL_REQ
signal to be sent), the SUMA kernel block
iterates through all active subscriptions and disables them. If
a given subscription has no more active users, then this
subscription is also deactivated in the DBTUP
kernel block.

This process had no flow control, and when there were many
subscriptions being deactivated (more than 512), this could
cause an overflow in the short-time queue defined in the
DbtupProxy class.

The fix for this problem includes implementing proper flow
control for this deactivation process and increasing the size of
the short-time queue in DbtupProxy.
(Bug #58693)

Cluster API:
It was not possible to obtain the status of nodes accurately
after an attempt to stop a data node using
ndb_mgm_stop() failed without
returning an error.
(Bug #58319)

Cluster API:
Attempting to read the same value (using
getValue()) more
than 9000 times within the same transaction caused the
transaction to hang when executed. Now when more reads are
performed in this way than can be accommodated in a single
transaction, the call to
execute() fails
with a suitable error.
(Bug #58110)

A NOT IN predicate with a subquery containing
a HAVING clause could retrieve too many rows
when the subquery itself returned NULL.
(Bug #58818, Bug #11765815)

WHERE conditions of the following forms were
evaluated incorrectly and could return incorrect results:

WHERE null-valued-const-expression NOT IN (subquery)
WHERE null-valued-const-expression IN (subquery) IS UNKNOWN

(Bug #58628, Bug #11765642)

WHERE conditions of the following form were
evaluated incorrectly and could return incorrect results: