Changes in MySQL Cluster NDB 6.3.15 (5.1.24-ndb-6.3.15) (2008-05-30)

This is a new source release, fixing recently discovered bugs in
previous MySQL Cluster releases.

MySQL Cluster NDB 6.3 no longer in development.
MySQL Cluster NDB 6.3 is no longer being actively developed;
if you are using a MySQL Cluster NDB 6.3 release, you should
upgrade to the latest version of MySQL Cluster, which is
available from http://dev.mysql.com/downloads/cluster/ .

This release incorporates all bugfixes and changes made in the
previous MySQL Cluster NDB 6.3 release, as well as all bugfixes
and feature changes which were added in mainline MySQL 5.1
through MySQL 5.1.24 (see Changes in MySQL 5.1.24 (2008-04-08)).

Note

Please refer to our bug database at
http://bugs.mysql.com/ for more details about
the individual bugs fixed in this version.

Bugs Fixed

In certain rare situations, ndb_size.pl
could fail with the error Can't use string
("value") as a HASH ref while "strict
refs" in use.
(Bug #43022)

Attempting, within a transaction, to delete a nonexistent row
from a table containing a TEXT or
BLOB column caused the
transaction to fail.
(Bug #36756)

References: See also Bug #36851.
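A hypothetical sequence of this kind (the table and column names
are illustrative only, not taken from the original report) would
have triggered the failure:

```sql
-- Illustrative schema; names are not from the original report.
CREATE TABLE t1 (
    id INT PRIMARY KEY,
    notes TEXT
) ENGINE=NDBCLUSTER;

BEGIN;
-- Deleting a row that does not exist; prior to this fix, this
-- caused the enclosing transaction to fail.
DELETE FROM t1 WHERE id = 99;
COMMIT;
```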

If the combined total of tables and indexes in the cluster was
greater than 4096, issuing START BACKUP
caused data nodes to fail.
(Bug #36044)
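START BACKUP is issued from the ndb_mgm management client; for
example (assuming a management server reachable on the default
host and port):

```shell
# Start a cluster backup from the ndb_mgm management client.
ndb_mgm -e "START BACKUP"
```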

Where column values to be compared in a query were of the
VARCHAR or
VARBINARY types,
NDBCLUSTER passed a value padded to
the full size of the column, sending unnecessary data to the
data nodes. This wasted CPU time and network bandwidth, and also
caused condition pushdown to be disabled where it could (and
should) otherwise have been applied.
(Bug #35393)
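Whether a comparison is pushed down can be checked with EXPLAIN;
the table and column names below are illustrative only:

```sql
-- Illustrative only; requires a running cluster.
SET engine_condition_pushdown = 1;

EXPLAIN SELECT * FROM t1
    WHERE vc_col = 'abc';
-- When the condition is pushed to the data nodes, the Extra
-- column shows "Using where with pushed condition".
```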

When dropping a table failed for any reason (such as when the
cluster was in single user mode), the corresponding
.ndb file was nevertheless removed.

Replication:
When flushing tables, there was a slight chance that the flush
occurred between the processing of one table map event and the
next. Since the tables were opened one by one, subsequent
locking of tables would cause the slave to crash. This problem
was observed when replicating
NDBCLUSTER or
InnoDB tables, when executing multi-table
updates, and when a trigger or a stored routine performed an
additional insert on a table, so that two tables were in effect
inserted into by the same statement.
(Bug #36197)
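One shape of statement that exercised this code path is a
multi-table update; the table and column names here are
illustrative only:

```sql
-- Illustrative multi-table update; in row-based replication a
-- separate table map event is generated for each table touched
-- by the single statement.
UPDATE t1, t2
    SET t1.val = t2.val
    WHERE t1.id = t2.id;
```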

Cluster API:
Ordered index scans were not pruned correctly where a
partitioning key was specified with an EQ-bound.
(Bug #36950)

Cluster API:
When an insert operation involving
BLOB data was attempted on a row
which already existed, the expected duplicate key error was not
reported and the transaction was incorrectly aborted. In some
cases, the existing row could also become corrupted.
(Bug #36851)

References: See also Bug #36756.

Cluster API: NdbApi.hpp depended on
ndb_global.h, which was not actually
installed, causing the compilation of programs that used
NdbApi.hpp to fail.
(Bug #35853)
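As a sketch, once the headers are installed an NDB API program
needs only NdbApi.hpp; the include and library paths below are
typical but depend entirely on your installation layout:

```shell
# Illustrative compile command; adjust paths to your installation.
g++ -I/usr/local/mysql/include/storage/ndb \
    -I/usr/local/mysql/include/storage/ndb/ndbapi \
    myapp.cpp -o myapp \
    -L/usr/local/mysql/lib -lndbclient
```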