pt-online-schema-change

pt-online-schema-change alters a table’s structure without blocking reads or
writes. Specify the database and table in the DSN. Do not use this tool before
reading its documentation and checking your backups carefully.

Add a column to sakila.actor:

pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor

Change sakila.actor to InnoDB, effectively performing OPTIMIZE TABLE in a
non-blocking fashion because it is already an InnoDB table:

pt-online-schema-change --alter "ENGINE=InnoDB" D=sakila,t=actor

pt-online-schema-change emulates the way that MySQL alters tables internally,
but it works on a copy of the table you wish to alter. This means that the
original table is not locked, and clients may continue to read and change data
in it.

pt-online-schema-change works by creating an empty copy of the table to alter,
modifying it as desired, and then copying rows from the original table into the
new table. When the copy is complete, it moves away the original table and
replaces it with the new one. By default, it also drops the original table.

The data copy process is performed in small chunks of data, which are varied to
attempt to make them execute in a specific amount of time (see
--chunk-time). This process is very similar to how other tools, such as
pt-table-checksum, work. Any modifications to data in the original tables
during the copy will be reflected in the new table, because the tool creates
triggers on the original table to update the corresponding rows in the new
table. The use of triggers means that the tool will not work if any triggers
are already defined on the table.
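The exact trigger DDL is generated internally by the tool; the following is only a simplified sketch of the idea, assuming an original table actor with primary key actor_id and a shadow table _actor_new (all names and statements here are illustrative, not the tool's real DDL):

```shell
# Hypothetical sketch of the three triggers pt-online-schema-change relies on;
# names and exact statements are illustrative, not the tool's real DDL.
make_triggers() {
  tbl=$1; new=$2; pk=$3
  cat <<SQL
CREATE TRIGGER pt_osc_ins AFTER INSERT ON $tbl FOR EACH ROW
  REPLACE INTO $new VALUES (NEW.$pk /* , other columns... */);
CREATE TRIGGER pt_osc_upd AFTER UPDATE ON $tbl FOR EACH ROW
  REPLACE INTO $new VALUES (NEW.$pk /* , other columns... */);
CREATE TRIGGER pt_osc_del AFTER DELETE ON $tbl FOR EACH ROW
  DELETE IGNORE FROM $new WHERE $pk = OLD.$pk;
SQL
}
ddl=$(make_triggers actor _actor_new actor_id)
```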

When the tool finishes copying data into the new table, it uses an atomic
RENAME TABLE operation to simultaneously rename the original and new tables.
After this is complete, the tool drops the original table.
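As a sketch, the final swap amounts to statements like the following (table names are illustrative):

```shell
# Illustrative final swap: one atomic RENAME exchanges the tables, then the
# old copy is dropped (the default behavior; see --[no]drop-old-table).
swap_sql() {
  cat <<'SQL'
RENAME TABLE actor TO _actor_old, _actor_new TO actor;
DROP TABLE IF EXISTS _actor_old;
SQL
}
```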

Foreign keys complicate the tool’s operation and introduce additional risk. The
technique of atomically renaming the original and new tables does not work when
foreign keys refer to the table. The tool must update foreign keys to refer to
the new table after the schema change is complete. The tool supports two methods
for accomplishing this. You can read more about this in the documentation for
--alter-foreign-keys-method.

Foreign keys also cause some side effects. The final table will have the same
foreign keys and indexes as the original table (unless you specify differently
in your ALTER statement), but the names of the objects may be changed slightly
to avoid object name collisions in MySQL and InnoDB.

For safety, the tool does not modify the table unless you specify the
--execute option, which is not enabled by default. The tool supports a
variety of other measures to prevent unwanted load or other problems, including
automatically detecting replicas, connecting to them, and using the following
safety checks:

In most cases the tool will refuse to operate unless a PRIMARY KEY or UNIQUE INDEX is
present in the table. See --alter for details.

The tool refuses to operate if it detects replication filters. See
--[no]check-replication-filters for details.

The tool pauses the data copy operation if it observes any replicas that are
delayed in replication. See --max-lag for details.

The tool pauses or aborts its operation if it detects too much load on the
server. See --max-load and --critical-load for details.

The tool sets innodb_lock_wait_timeout=1 and (for MySQL 5.5 and newer)
lock_wait_timeout=60 so that it is more likely to be the victim of any
lock contention, and less likely to disrupt other transactions. These
values can be changed by specifying --set-vars.
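For example, these defaults can be overridden like this (the DSN and alter statement are placeholders):

```shell
# Placeholder example: relax the tool's lock timeouts via --set-vars.
pt-online-schema-change \
  --alter "ADD COLUMN c1 INT" \
  --set-vars innodb_lock_wait_timeout=5,lock_wait_timeout=120 \
  --execute D=sakila,t=actor
```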

pt-online-schema-change works with Percona XtraDB Cluster (PXC) 5.5.28-23.7
and newer, but there are two limitations: only InnoDB tables can be altered,
and wsrep_OSU_method must be set to TOI (total order isolation).
The tool exits with an error if the host is a cluster node and the table
is MyISAM or is being converted to MyISAM (ENGINE=MyISAM), or if
wsrep_OSU_method is not TOI. There is no way to disable these checks.

The tool ignores MySQL 5.7+ GENERATED columns, since the value for those
columns is generated according to the expression used to compute column values.

The tool prints information about its activities to STDOUT so that you can see
what it is doing. During the data copy phase, it prints --progress
reports to STDERR. You can get additional information by specifying
--print.

If --statistics is specified, a report of various internal event counts
is printed at the end.

--alter

The schema modification, without the ALTER TABLE keywords. You can perform
multiple modifications to the table by specifying them with commas. Please refer
to the MySQL manual for the syntax of ALTER TABLE.

The following limitations apply; if you attempt them, the tool will fail in
unpredictable ways:

In almost all cases a PRIMARY KEY or UNIQUE INDEX needs to be present in the table.
This is necessary because the tool creates a DELETE trigger to keep the new table
updated while the process is running.

A notable exception is when a PRIMARY KEY or UNIQUE INDEX is being created from
existing columns as part of the ALTER clause; in that case it will use these
column(s) for the DELETE trigger.

The RENAME clause cannot be used to rename the table.

Columns cannot be renamed by dropping and re-adding with the new name.
The tool will not copy the original column’s data to the new column.

If you add a column without a default value and make it NOT NULL, the tool
will fail, as it will not try to guess a default value for you; you must
specify the default.

DROP FOREIGN KEY constraint_name requires specifying _constraint_name
rather than the real constraint_name. Due to a limitation in MySQL,
pt-online-schema-change adds a leading underscore to foreign key constraint
names when creating the new table. For example, to drop this constraint:

CONSTRAINT `fk_foo` FOREIGN KEY (`foo_id`) REFERENCES `bar`(`foo_id`)

You must specify --alter "DROP FOREIGN KEY _fk_foo".

The tool does not use LOCK IN SHARE MODE with MySQL 5.0 because it can
cause a slave error which breaks replication.

The error happens when converting a MyISAM table to InnoDB because MyISAM
is non-transactional but InnoDB is transactional. MySQL 5.1 and newer
handle this case correctly, but testing reproduces the error 5% of the time
with MySQL 5.0.

This is a MySQL bug, similar to http://bugs.mysql.com/bug.php?id=45694,
but there is no fix or workaround in MySQL 5.0. Without LOCK IN SHARE MODE,
tests pass 100% of the time, so the risk of data loss or breaking replication
should be negligible.

Be sure to verify the new table if using MySQL 5.0 and converting
from MyISAM to InnoDB!

--alter-foreign-keys-method

How to modify foreign keys so they reference the new table. Foreign keys that
reference the table to be altered must be treated specially to ensure that they
continue to reference the correct table. When the tool renames the original
table to let the new one take its place, the foreign keys “follow” the renamed
table, and must be changed to reference the new table instead.

The tool supports two techniques to achieve this. It automatically finds “child
tables” that reference the table to be altered.

auto

Automatically determine which method is best. The tool uses
rebuild_constraints if possible (see the description of that method for
details), and if not, then it uses drop_swap.

rebuild_constraints

This method uses ALTER TABLE to drop and re-add foreign key constraints that
reference the new table. This is the preferred technique, unless one or more of
the “child” tables is so large that the ALTER would take too long. The tool
determines that by comparing the number of rows in the child table to the rate
at which the tool is able to copy rows from the old table to the new table. If
the tool estimates that the child table can be altered in less time than the
--chunk-time, then it will use this technique. For purposes of estimating
the time required to alter the child table, the tool multiplies the row-copying
rate by --chunk-size-limit, because MySQL’s ALTER TABLE is typically
much faster than the external process of copying rows.
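As a sketch of the estimate (all numbers and variable names here are illustrative):

```shell
# Sketch of the rebuild_constraints feasibility check. ALTER on the child is
# credited with being --chunk-size-limit times faster than row copying, and
# must fit within one --chunk-time to qualify; otherwise drop_swap is used.
child_rows=10000
copy_rate=25000      # observed rows/second while copying the parent table
chunk_time=0.5       # --chunk-time
chunk_size_limit=4   # --chunk-size-limit
est=$(awk -v r=$child_rows -v c=$copy_rate -v l=$chunk_size_limit \
      'BEGIN { printf "%.3f", r / (c * l) }')
method=$(awk -v e=$est -v t=$chunk_time \
      'BEGIN { print (e <= t) ? "rebuild_constraints" : "drop_swap" }')
```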

Due to a limitation in MySQL, foreign keys will not have the same names after
the ALTER that they did prior to it. The tool has to rename the foreign key
when it redefines it, which adds a leading underscore to the name. In some
cases, MySQL also automatically renames indexes required for the foreign key.

drop_swap

Disable foreign key checks (FOREIGN_KEY_CHECKS=0), then drop the original table
before renaming the new table into its place. This is different from the normal
method of swapping the old and new table, which uses an atomic RENAME that is
undetectable to client applications.

This method is faster and does not block, but it is riskier for two reasons.
First, for a short time between dropping the original table and renaming the
temporary table, the table to be altered simply does not exist, and queries
against it will result in an error. Secondly, if there is an error and the new
table cannot be renamed into the place of the old one, then it is too late to
abort, because the old table is gone permanently.

This method forces --no-swap-tables and --no-drop-old-table.
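Sketched as SQL (table names are illustrative), the drop_swap sequence is roughly:

```shell
# Illustrative drop_swap sequence. Between DROP and RENAME the table briefly
# does not exist, which is the risk described above.
drop_swap_sql() {
  cat <<'SQL'
SET foreign_key_checks = 0;
DROP TABLE actor;
RENAME TABLE _actor_new TO actor;
SQL
}
```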

none

This method is like drop_swap without the “swap”. Any foreign keys that
referenced the original table will now reference a nonexistent table. This will
typically cause foreign key violations that are visible in
SHOW ENGINE INNODB STATUS.

This is because the original table (in this case, sakila.staff) was renamed to
sakila.staff_old and then dropped. This method of handling foreign key
constraints is provided so that the database administrator can disable the
tool’s built-in functionality if desired.

--[no]analyze-before-swap

Execute ANALYZE TABLE on the new table before swapping with the old one.
By default, this happens only when running MySQL 5.6 and newer, and
innodb_stats_persistent is enabled. Specify the option explicitly to enable
or disable it regardless of MySQL version and innodb_stats_persistent.

This circumvents a potentially serious issue related to InnoDB optimizer
statistics. If the table being altered is busy and the tool completes quickly,
the new table will not have optimizer statistics after being swapped. This can
cause fast, index-using queries to do full table scans until optimizer
statistics are updated (usually after 10 seconds). If the table is large and
the server very busy, this can cause an outage.

--charset

Default character set. If the value is utf8, sets Perl’s binmode on
STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET
NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT
without the utf8 layer, and runs SET NAMES after connecting to MySQL.

--[no]check-alter

Parses the --alter specified and tries to warn of possible unintended
behavior. Currently, it checks for:

Column renames

In previous versions of the tool, renaming a column with
CHANGE COLUMN name new_name would lead to that column’s data being lost.
The tool now parses the alter statement and tries to catch these cases, so
the renamed columns should have the same data as the originals. However, the
code that does this is not a full-blown SQL parser, so you should first
run the tool with --dry-run and --print and verify that it detects
the renamed columns correctly.

DROP PRIMARY KEY

If --alter contains DROP PRIMARY KEY (case- and space-insensitive),
a warning is printed and the tool exits unless --dry-run is specified.
Altering the primary key can be dangerous, but the tool can handle it.
The tool’s triggers, particularly the DELETE trigger, are most affected by
altering the primary key because the tool prefers to use the primary key
for its triggers. You should first run the tool with --dry-run and
--print and verify that the triggers are correct.
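In both cases, a preview run can be sketched like this (the DSN and column names are placeholders):

```shell
# Placeholder example: preview trigger and rename handling without executing.
pt-online-schema-change \
  --alter "CHANGE COLUMN old_name new_name INT" \
  --dry-run --print D=sakila,t=actor
```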

--[no]check-plan

Check query execution plans for safety. By default, this option causes
the tool to run EXPLAIN before running queries that are meant to access
a small amount of data, but which could access many rows if MySQL chooses a bad
execution plan. These include the queries to determine chunk boundaries and the
chunk queries themselves. If it appears that MySQL will use a bad query
execution plan, the tool will skip the chunk of the table.

The tool uses several heuristics to determine whether an execution plan is bad.
The first is whether EXPLAIN reports that MySQL intends to use the desired index
to access the rows. If MySQL chooses a different index, the tool considers the
query unsafe.

The tool also checks how much of the index MySQL reports that it will use for
the query. The EXPLAIN output shows this in the key_len column. The tool
remembers the largest key_len seen, and skips chunks where MySQL reports that it
will use a smaller prefix of the index. This heuristic can be understood as
skipping chunks that have a worse execution plan than other chunks.

The tool prints a warning the first time a chunk is skipped due to
a bad execution plan in each table. Subsequent chunks are skipped silently,
although you can see the count of skipped chunks in the SKIPPED column in
the tool’s output.

This option adds some setup work to each table and chunk. Although the work is
not intrusive for MySQL, it results in more round-trips to the server, which
consumes time. Making chunks too small will cause the overhead to become
relatively larger. It is therefore recommended that you not make chunks too
small, because the tool may take a very long time to complete if you do.

--[no]check-replication-filters

Abort if any replication filter is set on any server. The tool looks for
server options that filter replication, such as binlog_ignore_db and
replicate_do_db. If it finds any such filters, it aborts with an error.

If the replicas are configured with any filtering options, you should be careful
not to modify any databases or tables that exist on the master and not the
replicas, because it could cause replication to fail. For more information on
replication rules, see http://dev.mysql.com/doc/en/replication-rules.html.

--check-slave-lag

Pause the data copy until this replica’s lag is less than --max-lag. The
value is a DSN that inherits properties from the connection options
(--port, --user, etc.). This option overrides the normal behavior of
finding and continually monitoring replication lag on ALL connected replicas.
If you don’t want to monitor ALL replicas, but you want more than just one
replica to be monitored, then use the DSN option to the --recursion-method
option instead of this option.

--chunk-index

Prefer this index for chunking tables. By default, the tool chooses the most
appropriate index for chunking. This option lets you specify the index that you
prefer. If the index doesn’t exist, then the tool will fall back to its default
behavior of choosing an index. The tool adds the index to the SQL statements in
a FORCE INDEX clause. Be careful when using this option; a poor choice of
index could cause bad performance.

--chunk-index-columns

Use only this many left-most columns of a --chunk-index. This works
only for compound indexes, and is useful in cases where a bug in the MySQL
query optimizer (planner) causes it to scan a large range of rows instead
of using the index to locate starting and ending points precisely. This
problem sometimes occurs on indexes with many columns, such as 4 or more.
If this happens, the tool might print a warning related to the
--[no]check-plan option. Instructing the tool to use only the first
N columns of the index is a workaround for the bug in some cases.

--chunk-size

Number of rows to select for each chunk copied. Allowable suffixes are
k, M, G.

This option can override the default behavior, which is to adjust chunk size
dynamically to try to make chunks run in exactly --chunk-time seconds.
When this option isn’t set explicitly, its default value is used as a starting
point, but after that, the tool ignores this option’s value. If you set this
option explicitly, however, then it disables the dynamic adjustment behavior and
tries to make all chunks exactly the specified number of rows.

There is a subtlety: if the chunk index is not unique, then it’s possible that
chunks will be larger than desired. For example, if a table is chunked by an
index that contains 10,000 of a given value, there is no way to write a WHERE
clause that matches only 1,000 of the values, and that chunk will be at least
10,000 rows large. Such a chunk will probably be skipped because of
--chunk-size-limit.

--chunk-size-limit

When a table has no unique indexes, chunk sizes can be inaccurate. This option
specifies a maximum tolerable limit to the inaccuracy. The tool uses EXPLAIN
to estimate how many rows are in the chunk. If that estimate exceeds the
desired chunk size times the limit, then the tool skips the chunk.

The minimum value for this option is 1, which means that no chunk can be larger
than --chunk-size. You probably don’t want to specify 1, because rows
reported by EXPLAIN are estimates, which can be different from the real number
of rows in the chunk. You can disable oversized chunk checking by specifying a
value of 0.

The tool also uses this option to determine how to handle foreign keys that
reference the table to be altered. See --alter-foreign-keys-method for
details.

--chunk-time

Adjust the chunk size dynamically so each data-copy query takes this long to
execute. The tool tracks the copy rate (rows per second) and adjusts the chunk
size after each data-copy query, so that the next query takes this amount of
time (in seconds) to execute. It keeps an exponentially decaying moving average
of queries per second, so that if the server’s performance changes due to
changes in server load, the tool adapts quickly.
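The adjustment can be sketched as follows (the 0.75/0.25 weighting and all numbers are illustrative, not the tool's exact constants):

```shell
# Sketch of adaptive chunk sizing: blend the newly observed copy rate into a
# decaying average, then size the next chunk to fill --chunk-time seconds.
next_chunk_size() {
  awk -v a="$1" -v n="$2" -v t="$3" \
      'BEGIN { r = 0.75 * a + 0.25 * n; printf "%d", r * t }'
}
# average rate 20000 rows/s, latest observation 40000 rows/s, --chunk-time 0.5
size=$(next_chunk_size 20000 40000 0.5)
```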

If this option is set to zero, the chunk size doesn’t auto-adjust, so query
times will vary, but query chunk sizes will not. Another way to do the same
thing is to specify a value for --chunk-size explicitly, instead of leaving
it at the default.

--critical-load

Examine SHOW GLOBAL STATUS after every chunk, and abort if the load is too high.
The option accepts a comma-separated list of MySQL status variables and
thresholds. An optional =MAX_VALUE (or :MAX_VALUE) can follow each
variable. If not given, the tool determines a threshold by examining the
current value at startup and doubling it.

See --max-load for further details. These options work similarly, except
that this option will abort the tool’s operation instead of pausing it, and the
default value is computed differently if you specify no threshold. This option
exists as a safety check in case the triggers on the original table add so much
load to the server that it causes downtime. There is probably no
single value of Threads_running that is wrong for every server, but a default of
50 seems likely to be unacceptably high for most servers, indicating that the
operation should be canceled immediately.

--default-engine

By default the new table is created with the same table options as
the original table, so if the original table uses InnoDB, then the new
table will use InnoDB. In certain cases involving replication, this may
cause unintended changes on replicas which use a different engine for
the same table. Specifying this option causes the new table to be
created with the system’s default engine.

--[no]drop-old-table

Drop the original table after renaming it. After the original table has been
successfully renamed to let the new table take its place, and if there are no
errors, the tool drops the original table by default. If there are any errors,
the tool leaves the original table in place.

--execute

Indicate that you have read the documentation and want to alter the table. You
must specify this option to alter the table. If you do not, then the tool will
only perform some safety checks and exit. This helps ensure that you have read the
documentation and understand how to use this tool. If you have not read the
documentation, then do not specify this option.

--[no]check-unique-key-change

Prevent pt-online-schema-change from running if the statement specified by
--alter tries to add a unique index. Because pt-online-schema-change uses
INSERT IGNORE to copy rows to the new table, if a row being written produces
a duplicate key, it will fail silently and data will be lost.

Using pt-online-schema-change to add a unique index on a column with duplicate
values will cause some rows to be lost due to the use of INSERT IGNORE to copy
rows from the source table. For this reason, pt-online-schema-change will fail
if it detects that the --alter parameter is trying to add a unique key, and it
will show an example query to run to detect whether there are rows that would
produce duplicates.

Even if you run the query and there are no such rows, keep in mind that after
running it, changes can still be made to the table that produce duplicate rows,
and that data will be lost.
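The detection query the tool prints is generated for your specific table; conceptually it is similar to this sketch (the table and column names here are invented for illustration):

```shell
# Illustrative duplicate check before adding a unique index on unique_id:
# any row returned would be silently lost by INSERT IGNORE during the copy.
dup_check_sql() {
  cat <<'SQL'
SELECT unique_id, COUNT(*) AS cnt
FROM source_table
GROUP BY unique_id
HAVING cnt > 1;
SQL
}
```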

ENUM values are sorted based on their index numbers, which depend on the order in
which the enumeration members were listed in the column specification.
For example, ‘b’ sorts before ‘a’ for ENUM(‘b’, ‘a’).
The empty string sorts before nonempty strings, and NULL values sort before all other
enumeration values.

To prevent unexpected results when using the ORDER BY clause on an ENUM column,
use one of these techniques:
- Specify the ENUM list in alphabetic order.
- Make sure that the column is sorted lexically rather than by index number by coding
ORDER BY CAST(col AS CHAR) or ORDER BY CONCAT(col).

The NibbleIterator in Percona Toolkit uses CONCAT(col), but doing so adds
overhead, since MySQL cannot use the column directly and has to calculate the
result of CONCAT for every row. To make this scenario visible to the user, if
there are indexes having ENUM fields with unsorted items, it is necessary to
specify the --force-concat-enums parameter.

--max-flow-ctl

Somewhat similar to --max-lag, but for PXC clusters. Checks the average time
the cluster spends paused for Flow Control, and makes the tool pause if it goes
over the percentage indicated in the option. A value of 0 makes the tool pause
when any Flow Control activity is detected. The default is no Flow Control
checking. This option is available for PXC versions 5.6 or higher.

--max-lag

Pause the data copy until all replicas’ lag is less than this value. After each
data-copy query (each chunk), the tool looks at the replication lag of
all replicas to which it connects, using Seconds_Behind_Master. If any replica
is lagging more than the value of this option, then the tool will sleep
for --check-interval seconds, then check all replicas again. If you
specify --check-slave-lag, then the tool only examines that server for
lag, not all servers. If you want to control exactly which servers the tool
monitors, use the DSN value to --recursion-method.

The tool waits forever for replicas to stop lagging. If any replica is
stopped, the tool waits forever until the replica is started. The data copy
continues when all replicas are running and not lagging too much.

The tool prints progress reports while waiting. If a replica is stopped, it
prints a progress report immediately, then again at every progress report
interval.

--max-load

Examine SHOW GLOBAL STATUS after every chunk, and pause if any status variables
are higher than their thresholds. The option accepts a comma-separated list of
MySQL status variables. An optional =MAX_VALUE (or :MAX_VALUE) can follow
each variable. If not given, the tool determines a threshold by examining the
current value and increasing it by 20%.

For example, if you want the tool to pause when Threads_connected gets too high,
you can specify “Threads_connected”, and the tool will check the current value
when it starts working and add 20% to that value. If the current value is 100,
then the tool will pause when Threads_connected exceeds 120, and resume working
when it is below 120 again. If you want to specify an explicit threshold, such
as 110, you can use either “Threads_connected:110” or “Threads_connected=110”.
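The threshold rule can be sketched as follows (the helper name and numbers are illustrative):

```shell
# Sketch of --max-load threshold handling: an explicit "Var=N" or "Var:N"
# keeps N; a bare "Var" takes the value observed at startup plus 20%.
threshold_for() {
  spec=$1; current=$2
  case $spec in
    *=*|*:*) echo "${spec##*[=:]}" ;;
    *)       awk -v c="$current" 'BEGIN { printf "%.0f", c * 1.2 }' ;;
  esac
}
t1=$(threshold_for Threads_connected 100)      # bare: startup value + 20%
t2=$(threshold_for Threads_connected=110 100)  # explicit threshold
```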

The purpose of this option is to prevent the tool from adding too much load to
the server. If the data-copy queries are intrusive, or if they cause lock waits,
then other queries on the server will tend to block and queue. This will
typically cause Threads_running to increase, and the tool can detect that by
running SHOW GLOBAL STATUS immediately after each query finishes. If you
specify a threshold for this variable, then you can instruct the tool to wait
until queries are running normally again. This will not prevent queueing,
however; it will only give the server a chance to recover from the queueing. If
you notice queueing, it is best to decrease the chunk time.

--preserve-triggers

Preserves the original table’s triggers when specified. As of MySQL 5.7.2, it
is possible to define multiple triggers for a given table that have the same
trigger event and action time. This allows the tool to add the triggers needed
for pt-online-schema-change even if the table already has its own triggers.
If this option is enabled, pt-online-schema-change will try to copy all the
existing triggers to the new table BEFORE it starts copying rows from the
original table, to ensure the old triggers can be applied after altering the
table.

For some table and trigger combinations, it is not possible to use
--preserve-triggers with an --alter like "DROP COLUMN f1", because a trigger
that references the column being dropped would fail.

After testing the triggers will work on the new table, the triggers are
dropped from the new table until all rows have been copied and then they are
re-applied.

--preserve-triggers cannot be used with these other parameters:
--no-drop-triggers, --no-drop-old-table, and --no-swap-tables, since
--preserve-triggers implies that the old triggers should be deleted and
recreated in the new table. Since it is not possible to have more than one
trigger with the same name, old triggers must be deleted in order to be able
to recreate them in the new table.

Using --preserve-triggers with --no-swap-tables will cause triggers to remain
defined for the original table. Please read the documentation for
--[no]swap-tables.

If both --no-swap-tables and --no-drop-new-table are set, the trigger will
remain on the original table and will be duplicated on the new table
(the duplicate trigger will have a random suffix, since trigger names must be
unique).

--new-table-name

New table name before it is swapped. %T is replaced with the original
table name. When the default is used, the tool prefixes the name with up
to 10 _ (underscore) to find a unique table name. If a table name is
specified, the tool does not prefix it with _, so the table must not
exist.

--null-to-not-null

Allows MODIFYing a column that allows NULL values to one that doesn’t allow
them. The rows which contain NULL values will be converted to the defined
default value. If no explicit DEFAULT value is given MySQL will assign a default
value based on datatype, e.g. 0 for number datatypes, ‘’ for string datatypes.

--pid

Create the given PID file. The tool won’t start if the PID file already
exists and the PID it contains is different than the current PID. However,
if the PID file exists and the PID it contains is no longer running, the
tool will overwrite the PID file with the current PID. The PID file is
removed automatically when the tool exits.

--plugin

Perl module file that defines a pt_online_schema_change_plugin class.
A plugin allows you to write a Perl module that can hook into many parts
of pt-online-schema-change. This requires a good knowledge of Perl and
Percona Toolkit conventions, which are beyond the scope of this
documentation. Please contact Percona if you have questions or need help.

--progress

Print progress reports to STDERR while copying rows. The value is a
comma-separated list with two parts. The first part can be percentage, time, or
iterations; the second part specifies how often an update should be printed, in
percentage, seconds, or number of iterations.
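For example (the DSN and alter statement are placeholders):

```shell
# Placeholder example: report copy progress every 30 seconds.
pt-online-schema-change --progress time,30 \
  --alter "ENGINE=InnoDB" --execute D=sakila,t=actor
```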

--recursion-method

Preferred recursion method for discovering replicas. Possible methods are:

METHOD       USES
===========  ===================
processlist  SHOW PROCESSLIST
hosts        SHOW SLAVE HOSTS
dsn=DSN      DSNs from a table
none         Do not find slaves

The processlist method is the default, because SHOW SLAVE HOSTS is not
reliable. However, the hosts method can work better if the server uses a
non-standard port (not 3306). The tool usually does the right thing and
finds all replicas, but you may give a preferred method and it will be used
first.

The hosts method requires replicas to be configured with report_host,
report_port, etc.

The dsn method is special: it specifies a table from which other DSN strings
are read. The specified DSN must specify a D and t, or a database-qualified
t. The DSN table should have the following structure:

CREATE TABLE `dsns` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `parent_id` INT DEFAULT NULL,
  `dsn` VARCHAR(255) NOT NULL,
  PRIMARY KEY (`id`)
);

To make the tool monitor only the hosts 10.10.1.16 and 10.10.1.17 for
replication lag, insert the values h=10.10.1.16 and h=10.10.1.17 into the
table. Currently, the DSNs are ordered by id, but id and parent_id are otherwise
ignored.

You can change the list of hosts while OSC is executing:
if you change the contents of the DSN table, OSC will pick it up very soon.

--skip-check-slave-lag

DSN to skip when checking slave lag. It can be used multiple times. Example:
--skip-check-slave-lag h=127.0.0.1,P=12345 --skip-check-slave-lag
h=127.0.0.1,P=12346. Please take into consideration that even though for the
MySQL driver h=127.1 is equal to h=127.0.0.1, for this parameter you need to
specify the full IP address.

--sleep

How long to sleep (in seconds) after copying each chunk. This option is useful
when throttling by --max-lag and --max-load are not possible.
A small, sub-second value should be used, like 0.1, else the tool could take
a very long time to copy large tables.

--[no]swap-tables

Swap the original table and the new, altered table. This step completes the
online schema change process by making the table with the new schema take the
place of the original table. The original table becomes the “old table,” and
the tool drops it unless you disable --[no]drop-old-table.

Using --no-swap-tables will run the whole process: it will create the new
table and copy all rows, but at the end it will drop the new table. It is
intended as a more realistic --dry-run.

--tries

How many times to try critical operations. If certain operations fail due
to non-fatal, recoverable errors, the tool waits and tries the operation
again. These are the operations that are retried, with their default number
of tries and wait time between tries (in seconds):

For example, specifying create_triggers:5:0.5,drop_triggers:5:0.5 makes the
tool try create_triggers and drop_triggers 5 times with a 0.5 second wait
between tries. So the format is:

operation:tries:wait[,operation:tries:wait]

All three values must be specified.

Note that most operations are affected only in MySQL 5.5 and newer by
lock_wait_timeout (see --set-vars) because of metadata locks.
The copy_rows operation is affected in any version of MySQL by
innodb_lock_wait_timeout.

For creating and dropping triggers, the number of tries applies to each
CREATE TRIGGER and DROP TRIGGER statement for each trigger.
For copying rows, the number of tries applies to each chunk, not the
entire table. For swapping tables, the number of tries usually applies
once because there is usually only one RENAME TABLE statement.
For rebuilding foreign key constraints, the number of tries applies to
each statement (ALTER statements for the rebuild_constraints
--alter-foreign-keys-method; other statements for the drop_swap method).

--[no]version-check

Check for the latest version of Percona Toolkit, MySQL, and other programs.

This is a standard “check for updates automatically” feature, with two
additional features. First, the tool checks its own version and also the
versions of the following software: operating system, Percona Monitoring and
Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and
Percona Toolkit. Second, it checks for and warns about versions with known
problems. For example, MySQL 5.5.25 had a critical bug and was re-released
as 5.5.25a.

The tool makes a secure connection to Percona’s Version Check database server
to perform these checks. Each request is logged by the server, including software
version numbers and unique ID of the checked system. The ID is generated by the
Percona Toolkit installation script or when the Version Check database call is
done for the first time.

Any updates or known problems are printed to STDOUT before the tool’s normal
output. This feature should never interfere with the normal operation of the
tool.

The file specified by --plugin must define a class (i.e. a package)
called pt_online_schema_change_plugin with a new() subroutine.
The tool will create an instance of this class and call any hooks that
it defines. No hooks are required, but a plugin isn’t very useful without
them.

These DSN options are used to create a DSN. Each option is given like
option=value. The options are case-sensitive, so P and p are not the
same option. There cannot be whitespace before or after the = and
if the value contains whitespace it must be quoted. DSN options are
comma-separated. See the percona-toolkit manpage for full details.

A

dsn: charset; copy: yes

Default character set.

D

dsn: database; copy: no

Database for the old and new table.

F

dsn: mysql_read_default_file; copy: yes

Only read default options from the given file.

h

dsn: host; copy: yes

Connect to host.

p

dsn: password; copy: yes

Password to use when connecting.
If the password contains commas, they must be escaped with a backslash: "exam\,ple"

You need Perl, DBI, DBD::mysql, and some core packages that ought to be
installed in any reasonably new version of Perl.

This tool works only on MySQL 5.0.2 and newer versions, because earlier versions
do not support triggers. Also, a number of privileges must be granted for
pt-online-schema-change to operate as expected: the PROCESS, SUPER, and
REPLICATION SLAVE global privileges, as well as the SELECT, INSERT, UPDATE,
DELETE, CREATE, DROP, ALTER, and TRIGGER table privileges, should be granted
on the server. A slave needs only the REPLICATION SLAVE and REPLICATION CLIENT
privileges.

The “online schema change” concept was first implemented by Shlomi Noach
in his tool oak-online-alter-table, part of
http://code.google.com/p/openarkkit/. Engineers at Facebook then built
another version called OnlineSchemaChange.php as explained by their blog
post: http://tinyurl.com/32zeb86. This tool is a hybrid of both approaches,
with additional features and functionality not present in either.

This tool is part of Percona Toolkit, a collection of advanced command-line
tools for MySQL developed by Percona. Percona Toolkit was forked from two
projects in June, 2011: Maatkit and Aspersa. Those projects were created by
Baron Schwartz and primarily developed by him and Daniel Nichter. Visit
http://www.percona.com/software/ to learn about other free, open-source
software from Percona.

This program is copyright 2011-2018 Percona LLC and/or its affiliates.

THIS PROGRAM IS PROVIDED “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, version 2; OR the Perl Artistic License. On UNIX and similar
systems, you can issue `man perlgpl’ or `man perlartistic’ to read these
licenses.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place, Suite 330, Boston, MA 02111-1307 USA.