4.5.4 mysqldump — A Database Backup Program

The mysqldump client utility performs
logical backups,
producing a set of SQL statements that can be executed to
reproduce the original database object definitions and table
data. It dumps one or more MySQL databases for backup or
transfer to another SQL server. The mysqldump
command can also generate output in CSV, other delimited text,
or XML format.

To reload a dump file, you must have the privileges required to
execute the statements that it contains, such as the appropriate
CREATE privileges for objects created by those statements.

mysqldump output can include
ALTER DATABASE statements that
change the database collation. These may be used when dumping
stored programs to preserve their character encodings. To reload
a dump file containing such statements, the
ALTER privilege for the affected database is
required.

Note

A dump made using PowerShell on Windows with output
redirection creates a file that has UTF-16 encoding. Because
UTF-16 is not permitted as a connection character set, such a
file does not load correctly.
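To work around this, have mysqldump write the file itself rather than relying on shell redirection; the --result-file option produces the file in ASCII format. A sketch (the file name is illustrative):

```shell
PS C:\> mysqldump --all-databases --result-file=dump.sql
```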

Performance and Scalability Considerations

mysqldump advantages include the convenience
and flexibility of viewing or even editing the output before
restoring. You can clone databases for development and DBA work,
or produce slight variations of an existing database for
testing. It is not intended as a fast or scalable solution for
backing up substantial amounts of data. With large data sizes,
even if the backup step takes a reasonable time, restoring the
data can be very slow because replaying the SQL statements
involves disk I/O for insertion, index creation, and so on.

For large-scale backup and restore, a
physical backup is more
appropriate: it copies the data files in their original format,
which can be restored quickly:

If your tables are primarily InnoDB
tables, or if you have a mix of InnoDB
and MyISAM tables, consider using the
mysqlbackup command of the MySQL
Enterprise Backup product. (Available as part of the
Enterprise subscription.) It provides the best performance
for InnoDB backups with minimal
disruption; it can also back up tables from
MyISAM and other storage engines; and it
provides a number of convenient options to accommodate
different backup scenarios. See
Section 24.2, “MySQL Enterprise Backup Overview”.

mysqldump can retrieve and dump table
contents row by row, or it can retrieve the entire content from
a table and buffer it in memory before dumping it. Buffering in
memory can be a problem if you are dumping large tables. To dump
tables row by row, use the
--quick option (or
--opt, which enables
--quick). The
--opt option (and hence
--quick) is enabled by
default, so to enable memory buffering, use
--skip-quick.
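For example, these invocations contrast the two retrieval modes (database and table names are illustrative):

```shell
# Row-by-row retrieval; already the default because --opt enables --quick:
shell> mysqldump --quick big_db log_table > log_table.sql

# Buffer each table's full row set in memory before writing it out:
shell> mysqldump --skip-quick small_db lookup_table > lookup_table.sql
```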

Option Syntax - Alphabetical Summary

mysqldump supports the following options,
which can be specified on the command line or in the
[mysqldump] and [client]
groups of an option file. For information about option files
used by MySQL programs, see Section 4.2.6, “Using Option Files”.

--login-path=name

Read options from the named login path in the
.mylogin.cnf login path file. A
“login path” is an option group that permits
only a limited set of options: host,
user, and password. Think
of a login path as a set of values that indicate the server
host and the credentials for authenticating with the server.
To create the login path file, use the
mysql_config_editor utility. See
Section 4.6.6, “mysql_config_editor — MySQL Configuration Utility”.
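As a sketch, a login path might be created once and then reused by mysqldump (the path name and account are illustrative):

```shell
# Create a login path named "local"; mysql_config_editor prompts for the password:
shell> mysql_config_editor set --login-path=local --host=localhost --user=backup --password

# Use those stored credentials for the dump:
shell> mysqldump --login-path=local db_name > backup-file.sql
```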

--password[=password],
-p[password]

The password to use when connecting to the server. If you
use the short option form (-p), you
cannot have a space between the option
and the password. If you omit the
password value following the
--password or -p option on
the command line, mysqldump prompts for
one.

--protocol={TCP|SOCKET|PIPE|MEMORY}

The connection protocol to use for connecting to the server.
It is useful when the other connection parameters normally
would cause a protocol to be used other than the one you
want. For details on the permissible values, see
Section 4.2.2, “Connecting to the MySQL Server”.

--secure-auth

Do not send passwords to the server in old (pre-4.1) format.
This prevents connections except to servers that use the
newer password format. This option was added in MySQL 5.7.4.

As of MySQL 5.7.5, this option is deprecated and will be
removed in a future MySQL release. It is always enabled and
attempting to disable it
(--skip-secure-auth,
--secure-auth=0) produces
an error. Before MySQL 5.7.5, this option is enabled by
default but can be disabled.

Option-File Options

--defaults-extra-file=file_name

Read this option file after the global option file but (on
Unix) before the user option file. If the file does not
exist or is otherwise inaccessible, an error occurs.
file_name is interpreted relative
to the current directory if given as a relative path name
rather than a full path name.

--defaults-file=file_name

Use only the given option file. If the file does not exist
or is otherwise inaccessible, an error occurs.
file_name is interpreted relative
to the current directory if given as a relative path name
rather than a full path name.

--defaults-group-suffix=str

Read not only the usual option groups, but also groups with
the usual names and a suffix of
str. For example,
mysqldump normally reads the
[client] and
[mysqldump] groups. If the
--defaults-group-suffix=_other
option is given, mysqldump also reads the
[client_other] and
[mysqldump_other] groups.
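For example, given an option file with suffixed groups such as the following (contents illustrative), the second command reads the extra groups while the first does not:

```shell
# ~/.my.cnf (fragment):
#   [mysqldump]
#   quick
#   [mysqldump_backup]
#   single-transaction

shell> mysqldump db_name > dump.sql
# reads [client] and [mysqldump] only

shell> mysqldump --defaults-group-suffix=_backup db_name > dump.sql
# also reads [client_backup] and [mysqldump_backup]
```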

--print-defaults

Print the program name and all options that it gets from
option files.

DDL Options

Usage scenarios for mysqldump include setting
up an entire new MySQL instance (including database tables), and
replacing data inside an existing instance with existing
databases and tables. The following options let you specify
which things to tear down and set up when restoring a dump, by
encoding various DDL statements within the dump file.

--all-tablespaces, -Y

Adds to a table dump all SQL statements needed to create any
tablespaces used by an NDB
table. This information is not otherwise included in the
output from mysqldump. This option is
currently relevant only to MySQL Cluster tables, which are
not supported in MySQL 5.7.

Debug Options

The following options print debugging information, encode
debugging information in the dump file, or let the dump
operation proceed regardless of potential problems.

--allow-keywords

Permit creation of column names that are keywords. This
works by prefixing each column name with the table name.

--comments, -i

Write additional information in the dump file such as
program version, server version, and host. This option is
enabled by default. To suppress this additional information,
use --skip-comments.

--debug[=debug_options],
-#
[debug_options]

Write a debugging log. A typical
debug_options string is
d:t:o,file_name.
The default value is
d:t:o,/tmp/mysqldump.trace.

--debug-check

Print some debugging information when the program exits.

--debug-info

Print debugging information and memory and CPU usage
statistics when the program exits.

--dump-date

If the --comments option
is given, mysqldump produces a comment at
the end of the dump of the following form:

-- Dump completed on DATE

However, the date causes dump files taken at different times
to appear to be different, even if the data are otherwise
identical. --dump-date and
--skip-dump-date
control whether the date is added to the comment. The
default is --dump-date
(include the date in the comment).
--skip-dump-date
suppresses date printing.

--force, -f

Ignore all errors; continue even if an SQL error occurs
during a table dump.

One use for this option is to cause
mysqldump to continue executing even when
it encounters a view that has become invalid because the
definition refers to a table that has been dropped. Without
--force, mysqldump exits
with an error message. With --force,
mysqldump prints the error message, but
it also writes an SQL comment containing the view definition
to the dump output and continues executing.

--no-set-names, -N

Turns off the
--set-charset setting, the
same as specifying --skip-set-charset.

--set-charset

Write SET NAMES
default_character_set
to the output. This option is enabled by default. To
suppress the SET NAMES statement, use
--skip-set-charset.

Replication Options

The mysqldump command is frequently used to
create an empty instance, or an instance including data, on a
slave server in a replication configuration. The following
options apply to dumping and restoring data on replication
master and slave servers.

--delete-master-logs

On a master replication server, delete the binary logs by
sending a PURGE BINARY LOGS
statement to the server after performing the dump operation.
This option automatically enables
--master-data.

--dump-slave[=value]

This option is similar to
--master-data except that
it is used to dump a replication slave server to produce a
dump file that can be used to set up another server as a
slave that has the same master as the dumped server. It
causes the dump output to include a
CHANGE MASTER TO statement
that indicates the binary log coordinates (file name and
position) of the dumped slave's master. These are the master
server coordinates from which the slave should start
replicating.

--dump-slave causes the coordinates from
the master to be used rather than those of the dumped
server, as is done by the
--master-data option. In
addition, specifying this option causes the
--master-data option to be overridden, if
used, and effectively ignored.

The option value is handled the same way as for
--master-data (setting no
value or 1 causes a CHANGE MASTER TO
statement to be written to the dump, setting 2 causes the
statement to be written but encased in SQL comments) and has
the same effect as --master-data in terms
of enabling or disabling other options and in how locking is
handled.

This option causes mysqldump to stop the
slave SQL thread before the dump and restart it again after.

--include-master-host-port

For the CHANGE MASTER TO
statement in a slave dump produced with the
--dump-slave option, add
MASTER_HOST and
MASTER_PORT options for the host name and
TCP/IP port number of the slave's master.

--master-data[=value]

Use this option to dump a master replication server to
produce a dump file that can be used to set up another
server as a slave of the master. It causes the dump output
to include a CHANGE MASTER TO
statement that indicates the binary log coordinates (file
name and position) of the dumped server. These are the
master server coordinates from which the slave should start
replicating after you load the dump file into the slave.

If the option value is 2, the CHANGE
MASTER TO statement is written as an SQL comment,
and thus is informative only; it has no effect when the dump
file is reloaded. If the option value is 1, the statement is
not written as a comment and takes effect when the dump file
is reloaded. If no option value is specified, the default
value is 1.
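For example, a dump taken with the value 2 contains the coordinates only as an informative comment (the file name and coordinates shown are illustrative):

```shell
shell> mysqldump --master-data=2 db_name > dump.sql
shell> grep 'CHANGE MASTER' dump.sql
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000013', MASTER_LOG_POS=154;
```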

This option requires the
RELOAD privilege and the
binary log must be enabled.

The --master-data option automatically
turns off --lock-tables.
It also turns on
--lock-all-tables, unless
--single-transaction also
is specified, in which case, a global read lock is acquired
only for a short time at the beginning of the dump (see the
description for
--single-transaction). In
all cases, any action on logs happens at the exact moment of
the dump.

It is also possible to set up a slave by dumping an existing
slave of the master, using the
--dump-slave option, which
overrides --master-data and causes it to be
ignored if both options are used.

--set-gtid-purged=value

This option enables control over global transaction ID
(GTID) information written to the dump file, by indicating
whether to add a
SET
@@global.gtid_purged statement to the output.

The following table shows the permitted option values. The
default value is AUTO.

Value

Meaning

OFF

Add no SET statement to the output.

ON

Add a SET statement to the output. An error occurs if
GTIDs are not enabled on the server.

AUTO

Add a SET statement to the output if GTIDs are
enabled on the server.
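For example, when a dump from a GTID-enabled server is to be loaded into a server whose GTID state must not change, the statement can be suppressed explicitly (names illustrative):

```shell
shell> mysqldump --set-gtid-purged=OFF db_name > dump.sql
```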

Format Options

The following options specify how to represent the entire dump
file or certain kinds of data in the dump file. They also
control whether certain optional information is written to the
dump file.

--compatible=name

Produce output that is more compatible with other database
systems or with older MySQL servers. The value of
name can be
ansi, mysql323,
mysql40, postgresql,
oracle, mssql,
db2, maxdb,
no_key_options,
no_table_options, or
no_field_options. To use several values,
separate them by commas. These values have the same meaning
as the corresponding options for setting the server SQL
mode. See Section 5.1.7, “Server SQL Modes”.

This option does not guarantee compatibility with other
servers. It only enables those SQL mode values that are
currently available for making dump output more compatible.
For example, --compatible=oracle does not
map data types to Oracle types or use Oracle comment syntax.

This option requires a server version of 4.1.0 or
higher. With older servers, it does nothing.

--quote-names, -Q

Quote identifiers (such as database, table, and column
names) within “`”
characters. If the
ANSI_QUOTES SQL mode is
enabled, identifiers are quoted within
“"” characters. This option
is enabled by default. It can be disabled with
--skip-quote-names, but this option should
be given after any option such as
--compatible that may
enable --quote-names.

--result-file=file_name,
-r file_name

Direct output to a given file. This option should be used on
Windows to prevent newline
“\n” characters from being
converted to “\r\n” carriage
return/newline sequences. The result file is created and its
previous contents overwritten, even if an error occurs while
generating the dump.

--tab=path,
-T path

Produce tab-separated text-format data files. For each
dumped table, mysqldump creates a
tbl_name.sql
file that contains the CREATE
TABLE statement that creates the table, and the
server writes a
tbl_name.txt
file that contains its data. The option value is the
directory in which to write the files.

Note

This option should be used only when
mysqldump is run on the same machine as
the mysqld server. You must have the
FILE privilege, and the
server must have permission to write files in the
directory that you specify.

By default, the .txt data files are
formatted using tab characters between column values and a
newline at the end of each line. The format can be specified
explicitly using the
--fields-xxx and
--lines-terminated-by
options.
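As a sketch, the following dumps a database into /tmp with comma-separated fields and reloads one table's data file with mysqlimport, which must be given the same delimiters (paths and names are illustrative):

```shell
# Server writes tbl_name.txt files; mysqldump writes tbl_name.sql files:
shell> mysqldump --tab=/tmp --fields-terminated-by=, db_name

# mysqlimport derives the table name from the file name:
shell> mysqlimport --fields-terminated-by=, db_name /tmp/tbl_name.txt
```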

--tz-utc

This option enables TIMESTAMP
columns to be dumped and reloaded between servers in
different time zones. mysqldump sets its
connection time zone to UTC and adds SET
TIME_ZONE='+00:00' to the dump file. Without this
option, TIMESTAMP columns are
dumped and reloaded in the time zones local to the source
and destination servers, which can cause the values to
change if the servers are in different time zones.
--tz-utc also protects against changes due
to daylight saving time. --tz-utc is
enabled by default. To disable it, use
--skip-tz-utc.

--xml, -X

Write dump output as well-formed XML.

NULL,
'NULL', and Empty Values: For
a column named column_name, the
NULL value, an empty string, and the
string value 'NULL' are distinguished
from one another in the output generated by this option as
follows.

Filtering Options

The following options control which kinds of schema objects are
written to the dump file: by category, such as triggers or
events; by name, for example, choosing which databases and
tables to dump; or even filtering rows from the table data using
a WHERE clause.

--all-databases, -A

Dump all tables in all databases. This is the same as using
the --databases option and
naming all the databases on the command line.

--databases, -B

Dump several databases. Normally,
mysqldump treats the first name argument
on the command line as a database name and following names
as table names. With this option, it treats all name
arguments as database names. CREATE
DATABASE and USE
statements are included in the output before each new
database.
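For example (database names illustrative):

```shell
shell> mysqldump --databases db1 db2 db3 > dump.sql
```

Without --databases, the same command line would dump the db2 and db3 tables of database db1 instead.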

--events, -E

Include Event Scheduler events for the dumped databases in
the output. This option requires the
EVENT privileges for those
databases.

--ignore-error=error[,error]...

Ignore the specified errors. The option value is a
comma-separated list of error numbers specifying the errors
to ignore during mysqldump execution. If
the --force option is also
given to ignore all errors,
--force takes precedence.

This option was added in MySQL 5.7.1.

--ignore-table=db_name.tbl_name

Do not dump the given table, which must be specified using
both the database and table names. To ignore multiple
tables, use this option multiple times. This option also can
be used to ignore views.

--no-data, -d

Do not write any table row information (that is, do not dump
table contents). This is useful if you want to dump only the
CREATE TABLE statement for
the table (for example, to create an empty copy of the table
by loading the dump file).
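For example, to capture only the schema (names illustrative):

```shell
shell> mysqldump --no-data db_name > schema.sql
```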

--routines, -R

Include stored routines (procedures and functions) for the
dumped databases in the output. Use of this option requires
the SELECT privilege for the
mysql.proc table. The output generated by
using --routines contains
CREATE PROCEDURE and
CREATE FUNCTION statements to
re-create the routines. However, these statements do not
include attributes such as the routine creation and
modification timestamps. This means that when the routines
are reloaded, they will be created with the timestamps equal
to the reload time.

If you require routines to be re-created with their original
timestamp attributes, do not use
--routines. Instead, dump and reload the
contents of the mysql.proc table
directly, using a MySQL account that has appropriate
privileges for the mysql database.

--tables

Override the --databases
or -B option. mysqldump
regards all name arguments following the option as table
names.

--triggers

Include triggers for each dumped table in the output. This
option is enabled by default; disable it with
--skip-triggers.

Before MySQL 5.7.2, a table cannot have multiple triggers
that have the same combination of trigger event
(INSERT,
UPDATE,
DELETE) and action time
(BEFORE, AFTER). MySQL
5.7.2 lifts this limitation and multiple triggers are
permitted. mysqldump dumps triggers in
activation order so that when the dump file is reloaded,
triggers are re-created in the same activation order.
However, if a mysqldump dump file
contains multiple triggers for a table that have the same
trigger event and action time, an error occurs for attempts
to load the dump file into an older server that does not
support multiple triggers. (For a workaround, see
Section 2.10.2.1, “Downgrading to MySQL 5.6”; you can
convert triggers to be compatible with older servers.)

--where='where_condition',
-w
'where_condition'

Dump only rows selected by the given
WHERE condition. Quotes around the
condition are mandatory if it contains spaces or other
characters that are special to your command interpreter.

Examples:

--where="user='jimf'"
-w"userid>1"
-w"userid<1"

Performance Options

The following options are the most relevant for performance,
particularly of restore operations. For large data sets, the
restore operation (processing the INSERT
statements in the dump file) is the most time-consuming part.
When it is urgent to restore data quickly, plan and test the
performance of this stage in advance. For restore times measured
in hours, you might prefer an alternative backup and restore
solution, such as MySQL
Enterprise Backup for InnoDB-only and
mixed-use databases.

--disable-keys, -K

For each table, surround the
INSERT statements with
/*!40000 ALTER TABLE
tbl_name DISABLE KEYS
*/; and /*!40000 ALTER TABLE
tbl_name ENABLE KEYS
*/; statements. This makes loading the dump file
faster because the indexes are created after all rows are
inserted. This option is effective only for nonunique
indexes of MyISAM tables.

--extended-insert, -e

Write INSERT statements using
multiple-row syntax that includes several
VALUES lists. This results in a smaller
dump file and speeds up inserts when the file is reloaded.

Because the --opt option is enabled by
default, you specify only its converse,
--skip-opt, to turn off
several default settings. See the discussion of
mysqldump
option groups for information about selectively
enabling or disabling a subset of the options affected by
--opt.

--quick, -q

This option is useful for dumping large tables. It forces
mysqldump to retrieve rows for a table
from the server a row at a time rather than retrieving the
entire row set and buffering it in memory before writing it
out.

--flush-privileges

Add a FLUSH
PRIVILEGES statement to the dump output after
dumping the mysql database. This option
should be used any time the dump contains the
mysql database and any other database
that depends on the data in the mysql
database for proper restoration.

--lock-all-tables, -x

Lock all tables across all databases. This is achieved by
acquiring a global read lock for the duration of the whole
dump. This option automatically turns off
--single-transaction and
--lock-tables.

--lock-tables, -l

For each dumped database, lock all tables to be dumped
before dumping them. The tables are locked with
READ LOCAL to permit concurrent inserts
in the case of MyISAM tables. For
transactional tables such as InnoDB,
--single-transaction is a
much better option than --lock-tables
because it does not need to lock the tables at all.

Because --lock-tables locks tables for each
database separately, this option does not guarantee that the
tables in the dump file are logically consistent between
databases. Tables in different databases may be dumped in
completely different states.

Some options, such as
--opt, automatically
enable --lock-tables. If you want to
override this, use --skip-lock-tables at
the end of the option list.

--no-autocommit

Enclose the INSERT statements
for each dumped table within SET autocommit =
0 and COMMIT
statements.

--order-by-primary

Dump each table's rows sorted by its primary key, or by its
first unique index, if such an index exists. This is useful
when dumping a MyISAM table to be loaded
into an InnoDB table, but makes the dump
operation take considerably longer.

--shared-memory-base-name=name

On Windows, the shared-memory name to use, for connections
made using shared memory to a local server. The default
value is MYSQL. The shared-memory name is
case sensitive.

The server must be started with the
--shared-memory option to
enable shared-memory connections.

--single-transaction

This option sets the transaction isolation mode to
REPEATABLE READ and sends a
START
TRANSACTION SQL statement to the server before
dumping data. It is useful only with transactional tables
such as InnoDB, because then it dumps the
consistent state of the database at the time when
START
TRANSACTION was issued without blocking any
applications.

When using this option, you should keep in mind that only
InnoDB tables are dumped in a consistent
state. For example, any MyISAM or
MEMORY tables dumped while using this
option may still change state.

While a
--single-transaction dump
is in process, to ensure a valid dump file (correct table
contents and binary log coordinates), no other connection
should use the following statements:
ALTER TABLE,
CREATE TABLE,
DROP TABLE,
RENAME TABLE,
TRUNCATE TABLE. A consistent
read is not isolated from those statements, so use of them
on a table to be dumped can cause the
SELECT that is performed by
mysqldump to retrieve the table contents
to obtain incorrect contents or fail.

The --single-transaction option and the
--lock-tables option are
mutually exclusive because LOCK
TABLES causes any pending transactions to be
committed implicitly.

To dump large tables, combine the
--single-transaction option with the
--quick option.
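A typical invocation for a consistent online dump of a large InnoDB database might look like this (the name is illustrative; --quick is already enabled by --opt, but stating it makes the intent explicit):

```shell
shell> mysqldump --single-transaction --quick db_name > dump.sql
```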

Option Groups

The --opt option turns on
several settings that work together to perform a fast dump
operation. All of these settings are on by default, because
--opt is on by default. Thus you rarely if
ever specify --opt. Instead, you can turn
these settings off as a group by specifying
--skip-opt, then optionally re-enable
certain settings by specifying the associated options later
on the command line.

The --compact option turns
off several settings that control whether optional
statements and comments appear in the output. Again, you can
follow this option with other options that re-enable certain
settings, or turn all the settings on by using the
--skip-compact form.

When you selectively enable or disable the effect of a group
option, order is important because options are processed first
to last. For example,
--disable-keys --lock-tables --skip-opt would not have the
intended effect; it is the same as
--skip-opt by itself.

Examples

To make a backup of an entire database:

shell> mysqldump db_name > backup-file.sql

To load the dump file back into the server:

shell> mysql db_name < backup-file.sql

Another way to reload the dump file:

shell> mysql -e "source /path-to-backup/backup-file.sql" db_name

mysqldump is also very useful for populating
databases by copying data from one MySQL server to another:
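A common pattern pipes the output directly into a mysql client connected to the other server (the host name is illustrative; the database must already exist on the destination server):

```shell
shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name
```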

This backup acquires a global read lock on all tables (using
FLUSH TABLES WITH READ
LOCK) at the beginning of the dump. As soon as this
lock has been acquired, the binary log coordinates are read and
the lock is released. If long updating statements are running
when the FLUSH statement is
issued, the MySQL server may get stalled until those statements
finish. After that, the dump becomes lock free and does not
disturb reads and writes on the tables. If the update statements
that the MySQL server receives are short (in terms of execution
time), the initial lock period should not be noticeable, even
with many updates.

For point-in-time recovery (also known as
“roll-forward,” when you need to restore an old
backup and replay the changes that happened since that backup),
it is often useful to rotate the binary log (see
Section 5.2.4, “The Binary Log”) or at least know the binary log
coordinates to which the dump corresponds:
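For example, such a dump might rotate the binary log and record the new coordinates as a comment, so the replay point is unambiguous (the file name is illustrative):

```shell
shell> mysqldump --single-transaction --flush-logs --master-data=2 \
         --all-databases > backup_sunday_1_PM.sql
```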

The --master-data and
--single-transaction options
can be used simultaneously, which provides a convenient way to
make an online backup suitable for use prior to point-in-time
recovery if tables are stored using the
InnoDB storage engine.

Restrictions

mysqldump does not dump the
INFORMATION_SCHEMA,
performance_schema, or (as of MySQL 5.7.8)
sys schema by default. To dump any of these,
name it explicitly on the command line. You can also name it
with the --databases option.
For INFORMATION_SCHEMA and
performance_schema, also use the
--skip-lock-tables
option.
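For example, to dump performance_schema explicitly (the output file name is illustrative):

```shell
shell> mysqldump --skip-lock-tables performance_schema > perf_schema.sql
```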

After adding "SET FOREIGN_KEY_CHECKS=0;" remember to append "SET FOREIGN_KEY_CHECKS=1;" at the end of the import file. The potential problem is that any data inconsistency that would have made a foreign key check fail during import will have made it into the database, even after the foreign keys are turned back on. This is especially likely if the foreign keys are not turned back on for a long period of time, which can happen if "SET FOREIGN_KEY_CHECKS=1;" was never appended to the import file in the first place.
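A minimal sketch of the wrapping described above, built with shell commands (file and table names are illustrative; the INSERT line stands in for a real dump):

```shell
# Illustrative stand-in for a real mysqldump output file:
printf 'INSERT INTO t VALUES (1);\n' > dump.sql

# Wrap the dump so foreign key checks are disabled during the
# import and re-enabled at the end:
printf 'SET FOREIGN_KEY_CHECKS=0;\n' > import.sql
cat dump.sql >> import.sql
printf 'SET FOREIGN_KEY_CHECKS=1;\n' >> import.sql

# The wrapped file would then be loaded with:
#   mysql db_name < import.sql
```

The final load is left as a comment because it requires a running server.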

If you want to schedule a task on Windows to back up and move your data somewhere, the lack of documentation and command-line tools on Windows can make it a real beast. I hope this helps you keep your data safe.

Secondly, you will need to use Windows FTP via the command line. It took me all day to find documentation on this, so I hope this saves some time for somebody.

Anyway, you need two files -- the batch file and a script for your ftp client. The Batch file should look like this guy (it uses random numbers in the file name so that multiple backups are not overwritten):

Corey's example is helpful, but I don't care for the random file name. Here is the manual script I use on Windows for kicking off a MYSQL backup.

You could easily add all the other bells and whistles of ZIP, FTP, and scheduling should you need it. Note that I didn't use a password or many of the other args for mysqldump, you can add those if ya need 'em.

A little reformulation of the actions that occur during an online dump with log-point registration, i.e. a dump that does not unduly disturb clients using the database during the dump (N.B.: only from 4.1.8 on!) and that can be used to start a slave server from the correct point in the logs.

Use these options:

--single-transaction --flush-logs --master-data=1 --delete-master-logs

If you have several databases that are binary-logged and you want to keep a consistent binary log you may have to include all the databases instead of just some (is that really so?):

--all-databases

Now, these are the actions performed by the master server:

1) Acquire global read lock using FLUSH TABLES WITH READ LOCK. This also flushes the query cache and the query result cache. Caused by option --single-transaction.
2) All running and outstanding transactions terminate. MySQL server stalls for further updates.
3) Read lock on all tables acquired.
4) All the logs are flushed; in particular, the binary log is closed and a new generation binary log is opened. Caused by option --flush-logs.
5) Binary log coordinates are read and written out so that the slave can position correctly in the binary log. Caused by --master-data=1.
6) Read lock is released; MySQL server can proceed with updates. These updates will also go to the binary log and can thus be replayed by the slave. Meanwhile, the InnoDB tables are dumped in a consistent state, which is the state they were in in step 5. (Not guaranteed for MyISAM tables.)
7) Dump terminates after a possibly long time.
8) Any old binary log files are deleted. Caused by --delete-master-logs.
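The sequence described above corresponds to a single invocation like the following (a sketch; the output file name is illustrative):

```shell
shell> mysqldump --single-transaction --flush-logs --master-data=1 \
         --delete-master-logs --all-databases > full_dump.sql
```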

Additionally, there are performance-influencing options:

--extended-insert: use multiple-row insert statements
--quick: do not do buffering of row data, good if tables are large

And there are format-influencing options:

--hex-blob: dump binary columns in hex
--complete-insert: use complete insert statements that include column names; works nicely with --extended-insert
--add-drop-table: add a DROP TABLE statement before each CREATE TABLE statement.

When using mysqldump on a replication master, if you want the slave(s) to follow, you may want to avoid the --delete-master-logs option, because it can delete binary logs before the "CHANGE MASTER" is read by the slaves, therefore breaking the replication (then you have to issue manually the "CHANGE MASTER" on the slave(s)). If you want to get rid of old and useless binary logs, it is better to issue a "PURGE MASTER" SQL command on the master after the mysqldump.

I moved my MySQL installation from Linux to Windows 2003 and had to create a new backup script. I was using hotcopy, but on Windows it's not available.

So, inspired by Lon B and Corey Tisdale (above), I created a batch file that will create a GZipped mysqldump file for each database and put them into separate folders. It also creates a log file. You will have to set the vars at the top to match your system.

You will also need GZip to do the compression...

It could still use some work (like no error trapping etc...) but it's in production for me now.

I used a utility "commail.exe" to send the log file to me after the backup is complete.

Here's a bash wrapper for mysqldump I cron'd to run at night. It's not the sexiest thing but it's reliable.

It creates a folder for each day, a folder for each db & single bzip2'd files for each table. There are provisions for exclusions. See below where it skips the entire tmp & test db's and in all db's, tables tbl_session & tbl_parameter. It also cleans up files older than 5 days (by that time they've gone to tape).

Be sure to update <user> & <pwd>. Ideally these would be in constants but I couldn't get the bash escaping to work.

You always wanted to BACKUP your most important database somewhere in your Linux system, as well as send the dump by email, so that you can recover the entire content if the system crashes. You can use these 2 scripts.

First step:
- Install the mutt client that will transfer emails on the command line: "apt-get install mutt" or "yum install mutt"
- Create the backup directory: "mkdir /home/backups"

- Don't forget to change the access to make them executable:
"chmod 700 auto_mysql_dump.sh"
"chmod 700 auto_mail_dump.sh"

Third step:
- Edit the crontab to schedule the execution of the two scripts: "crontab -e" (you will use the vi editor). We assume that the two scripts are in the /root directory.
- I want the dump to be executed at 8:30 every day.
- I want the mail to be sent at 9:00 every day.
Thus I add these two rows after the existing lines (hit "i" to insert new characters)...

Here's a DOS script that will back up all your databases to separate files in a new folder, zip the folder, encrypt the zip, and email the encrypted zip to one or many addresses. If the backup is larger than a specified limit, only the logfile is emailed. The unencrypted zipfile is left on your local machine.

Many thanks to Wade Hedgren whose script formed the basis for this version.

//--- Begin Batch File ---//
::
:: Creates a backup of all databases in MySQL.
:: Zip, encrypts and emails the backup file.
::
:: Each database is saved to a separate file in a new folder.
:: The folder is zipped and then deleted.
:: The zipped backup is encrypted and then emailed, unless the file exceeds the maximum filesize.
:: In all cases the logfile is emailed.
:: The encrypted backup is deleted, leaving the unencrypted zipfile on your local machine.
::
:: Version 1.1
::
:: Changes in version 1.1 (released June 29th, 2006)
:: - backups are now sent to the address specified by the mailto variable
::
:: The initial version 1.0 was released on May 27th, 2006
::
:: This version of the script was written by Mathieu van Loon (mathieu-public@jijenik.com)
:: It is based heavily on the script by Wade Hedgren (see comments at http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
::
:: This script requires several freeware libraries:
:: - zipgenius (a compression tool), www.zipgenius.it
:: - blat (an emailer tool), www.blat.net
:: - doff (extracts datetime, ignores regional formatting), www.jfitz.com/dos/index.html
::
:: Some areas where this script could be improved:
:: - include error trapping and handling
:: - make steps such as encryption and email optional
:: - allow the user to specify a single database on the command line
::
@echo off

::
:: Configuration options
::

:: The threshold for emailing the backup file. If the backup is larger
:: it will not be emailed (the logfile is always sent).
set maxmailsize=10000000

:: The passphrase used to encrypt the zipfile. Longer is more secure.
set passphrase=secret

:: Send the log file in an e-mail, include the backup file if it is not too large.
:: We use the CALL trick to enable determination of the filesize (type CALL /? at the prompt for info).
:: Note that you _must_ specify the full filename as the argument.
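For comparison, the same size-threshold test in POSIX shell (the file and its contents below are fabricated examples): mail the backup only when it is no larger than maxmailsize, and always mail the log.

```shell
#!/bin/sh
# POSIX-shell equivalent of the batch size check; wc -c gives the byte
# count of the backup file. The backup file here is a stand-in.
maxmailsize=10000000
backup=$(mktemp)
printf 'fake backup data' > "$backup"

size=$(wc -c < "$backup")
if [ "$size" -le "$maxmailsize" ]; then
    echo "mail backup and log"
else
    echo "mail log only"
fi
```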

Excellent, I had this installed and configured in about 10 minutes. I do have one minor fix however.

You aren't getting the time portion of the DOFF command captured into your variable. It appears that the output formatting string MUST NOT CONTAIN ANY BLANKS, so I changed mine to:

for /f %%i in ('doff.exe dd-mm-yyyy_at_hh:mi:ss') do set nicedate=%%i
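On Unix, the same blank-free timestamp can be produced with date(1); the format string below mirrors the doff.exe format above and is only an example:

```shell
#!/bin/sh
# Blank-free timestamp (dd-mm-yyyy_at_hh:mi:ss), analogous to doff.exe.
nicedate=$(date '+%d-%m-%Y_at_%H:%M:%S')
echo "$nicedate"
```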

This is terrific; I wish I had found it 10 hours ago (darn MySQL Administrator backup, such a waste!). Now the problem is that my backups won't restore. I am backing up multiple instances of MediaWiki, Mantis, and Joomla. I'm playing around with --max_allowed_packet=nnn, and that should fix it, based on manual backups working. Now, is that nnn bytes or an abbreviation? Hmmm.

I often get errors [MySQL 4.* and 5.*] when reloading a dump of databases containing big BLOBs. I found the solution: disable --extended-insert (which comes inside the composite --opt option, enabled by default) with --skip-extended-insert. I think this way is safer, but it is also much slower.

Here's a Python script that does rolling WinRAR'd backups on Windows. It should be trivial to change for Linux, or another compression program. Please note:
1) This was a quick hack, so please test thoroughly before using in production. Still, I hope it will be a useful basis for your own script.
2) The --single-transaction switch is used as I am backing up InnoDB tables.
3) mysqldump is run as the root user. It would be A Good Thing to make this more secure, e.g. create a backup user with read-only permissions to the tables.
4) <tab> is the tab character. Indentation is significant in Python.

It seems one needs to be careful when using --skip-opt with databases containing non-ASCII latin1 characters, especially if one has not been paying much attention to character sets.

I am just using default character sets, normally latin1. However, the dump produced by mysqldump is, perhaps surprisingly, in utf8. This seems fine, but leads to trouble with the --skip-opt option to mysqldump, which turns off --set-charset but leaves the dump in utf8. This seems to lead to a dump that will be silently reloaded incorrectly if strings in the database contain non-ASCII latin1 characters. (Is this a documentation flaw, a design flaw, or a bug?) Perhaps the fact that mysqldump uses utf8 by default, and the importance of the --set-charset option, should be more prominently documented (see the documentation for the --default-character-set option for the current mention of the use of utf8).

I am fairly new to bash scripting; however, I encountered the problem of all the databases going into one *.sql file. If you have a large number of databases to back up, it can take forever to restore your backup. This is a script I wrote to accomplish what I felt was needed. It grabs the name of each database and puts them in separate *.sql.bz2 files with a corresponding timestamp. Please let me know if this helps and perhaps if I can make it more elegant.

# SEE : http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
# SEE : http://safari.oreilly.com/0596526784/date_and_time_string_formatting_with_strftime
#
# Improved by Bill Hernandez (Plano, Texas) on Tuesday, August 21, 2007 (12:55 AM)
# ( 1 ) Backs up all info to time stamped individual directories, which makes it easier to track
# ( 2 ) Now maintains a single log that contains additional information
# ( 3 ) Includes a file comment header inside each compressed file
# ( 4 ) Used more variables instead of hard-code to make routine easier to use for something else
# ( 5 ) Where I have mysql5, you may have to replace it with mysql
#
# Posted by Ryan Haynes on July 11 2007 6:29pm

# delete old databases. I have it setup on a daily cron so anything older than 60 minutes is fine
if [ $DELETE_EXPIRED_AUTOMATICALLY == "TRUE" ]; then
    counter=0
    for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
    do
        counter=$(( counter + 1 ))
        echo "[${TS}] [Expired Backup - Deleted] $del" >> ${BACKUP_LOG_NAME}
    done
    echo "------------------------------------------------------------------------"
    if [ $counter -lt 1 ]; then
        if [ $expire_days -gt 0 ]; then
            echo There were no backup directories that were more than ${expire_days} days old:
        else
            echo There were no backup directories that were more than ${expire_minutes} minutes old:
        fi
    else
        echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
        if [ $expire_days -gt 0 ]; then
            echo These directories are more than ${expire_days} days old and they are being removed:
        else
            echo These directories are more than ${expire_minutes} minutes old and they are being removed:
        fi
        echo "------------------------------------------------------------------------"
        echo "\${expire_minutes} = ${expire_minutes} minutes"
        counter=0
        for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
        do
            counter=$(( counter + 1 ))
            echo $del
            rm -R $del
        done
    fi
fi
echo "------------------------------------------------------------------------"
cd `echo $current_dir`
echo -n "Restored working directory to : "
pwd
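The expiry test in that script can be demonstrated in isolation. Here one backup directory is backdated two hours (with GNU touch) and the other is fresh, so find's -mmin +60 selects only the old one; the directory names are fabricated examples matching the script's naming pattern:

```shell
#!/bin/sh
# Self-contained demo of the "older than 60 minutes" selection.
base=$(mktemp -d)
mkdir "$base/shop-01.30.AM" "$base/shop-02.30.PM"
touch -d '2 hours ago' "$base/shop-01.30.AM"        # GNU touch
expired=$(find "$base" -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +60)
echo "$expired"
```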

### create a pipe named "pipe"
mkfifo pipe
### compress the pipe in background
gzip < pipe > dumpfile.sql.gz &
### write directly to the pipe
mysqldump --all-databases > pipe
### get the real return code of mysqldump
result=$?
### wait until the gzip completes
wait
### now it is safe to remove the pipe
rm pipe
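Why the fifo matters: in a plain `mysqldump | gzip` pipeline, $? reports gzip's exit status, so a failed dump can go unnoticed. A self-contained demonstration with a stand-in producer (a subshell exiting with status 3 plays the role of a failing mysqldump):

```shell
#!/bin/sh
# A failing producer in a plain pipeline: $? is gzip's status, not ours.
( exit 3 ) | gzip > /dev/null
pipeline_status=$?                 # the failure is hidden

# The same failure through a named pipe is visible.
dir=$(mktemp -d); cd "$dir"
mkfifo pipe
gzip < pipe > out.gz &
( echo data; exit 3 ) > pipe
producer_status=$?                 # the real return code
wait
rm pipe
echo "$pipeline_status $producer_status"
```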

Hi all, and thanks for the great script examples. I've taken a bit from the batch files and made a Perl script for backing up MySQL databases. It's pretty crude, but it's what I'm using right now to back up the servers nightly.

In case this helps anyone. This backs up all databases and tables and keeps all files for a week, placing them in a directory structure of the format:

/backup_dir/db_name/day/table.sql.gz

Useful if you want to restore a particular day's data.

It checks that new backups differ from the last before overwriting files. This helps if you are rsyncing your filesystems, as normally mysqldump writes a date into the dump, so the files always appear to differ even if the data is the same.

It also saves a directory containing just your schema, checks and repairs tables where necessary, and defragments tables on Sundays.
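The "only overwrite when changed" comparison can be sketched as follows. The grep pattern matches the `-- Dump completed on ...` comment that mysqldump appends; the sample files and their contents are fabricated:

```shell
#!/bin/sh
# Compare two dumps while ignoring mysqldump's trailing timestamp comment,
# so an unchanged database does not produce a "new" file for rsync.
same_dump() {
    grep -v '^-- Dump completed' "$1" > /tmp/sd_a.$$
    grep -v '^-- Dump completed' "$2" > /tmp/sd_b.$$
    cmp -s /tmp/sd_a.$$ /tmp/sd_b.$$
    rc=$?
    rm -f /tmp/sd_a.$$ /tmp/sd_b.$$
    return $rc
}

# Two dumps of identical data taken at different times:
printf 'INSERT INTO t VALUES (1);\n-- Dump completed on 2009-01-01\n' > /tmp/sd_old.sql
printf 'INSERT INTO t VALUES (1);\n-- Dump completed on 2009-01-02\n' > /tmp/sd_new.sql
same_dump /tmp/sd_old.sql /tmp/sd_new.sql && echo "dumps match"
```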

BACKUP_PRIO="20" # Priority for the MySQL dump and rdiff-backup Min: 20 Max: -20
BACKUP_TMP_DIR="/var/backup/mysql_tmp" # New dumps will be stored here
BACKUP_DIFF_DIR="/var/backup/hosting/mysql" # Diffs of dumps will be stored there
SYNC_SRV="BAC.KUP.SER.VER" # Remote server for backup storage
SYNC_USER="backup_user" # User at remote storage
SYNC_SPEED="200" # Limit synchronization bandwidth to this number of KB/s
SYNC_DIR="/backup/hosting/mysql" # Directory on remote server to synchronize backups in
MYSQL_USER="admin" # MySQL user
MYSQL_PASSWD=`cat /etc/psa/.psa.shadow` # Password for MySQL. You may obtain the password from /etc/psa/.psa.shadow if you are using Plesk on your server.

This is an example of a Windows batch script that implements a rotating archive of backups on a daily, weekly, and monthly basis. It also provides an installation option for the creation of the backup directories and an option to add a scheduled task to the system to run the batch file.

I started with what Lon B posted, and many edits and revisions later this was produced. I hope you find it as useful as we have.

~~~ BEGIN FILE ~~~

@ECHO OFF
SET VERSIONMAJOR=10
SET VERSIONMINOR=6

FOR /f "tokens=1-4 delims=/ " %%a IN ('date/t') DO (
    SET dw=%%a
    SET mm=%%b
    SET dd=%%c
    SET yy=%%d
)

This can be useful if you need to empty a database in order to restore a backup made by mysqldump, but you couldn't use --add-drop-database because you don't have CREATE DATABASE privileges on the command line (e.g. you're on shared hosting). mysqldump adds DROP TABLE by default, but if tables may have been added or renamed since the time of your backup (e.g. by some sort of update process that you're trying to revert from), failing to drop those tables will likely cause serious headaches later on.

Of course, this raises the question of why MySQL doesn't support "DROP TABLE *;" (in which case mysqldump could just emit that).
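One workaround is to generate the DROP TABLE statements yourself. In the sketch below the table list is hard-coded; in practice it would come from something like `mysql -N -e "SHOW TABLES FROM dbname"`:

```shell
#!/bin/sh
# Turn a list of table names into DROP TABLE statements. The printf is a
# stand-in for `mysql -N -e "SHOW TABLES FROM dbname"`.
drops=$(printf 'accounts\ncontacts\n' \
    | awk '{ print "DROP TABLE IF EXISTS `" $1 "`;" }')
echo "$drops"
```

Feeding the result back through the mysql client then empties the database before the dump is reloaded.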

Here is a shell script for a mysqldump extractor: "mydumpsplitter". This shell script grabs the tables you want and writes each to tablename.sql. It is capable of understanding regular expressions, as I've added the sed -r option. MyDumpSplitter can also split the dump into individual table dumps.
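The core of the splitting idea can be sketched with awk, since mysqldump delimits each table's section with a "-- Table structure for table `name`" comment. The sample dump below is fabricated, and the function does not escape regex metacharacters in the table name:

```shell
#!/bin/sh
# Extract one table's section from a dump by toggling output at the
# "-- Table structure" delimiter comments that mysqldump writes.
extract_table() {     # $1 = dump file, $2 = table name
    awk -v t="$2" '
        /^-- Table structure for table / { grab = ($0 ~ "`" t "`") }
        grab
    ' "$1"
}

dump=$(mktemp)
cat > "$dump" <<'EOF'
-- Table structure for table `accounts`
CREATE TABLE accounts (id INT);
-- Table structure for table `contacts`
CREATE TABLE contacts (id INT);
EOF

extract_table "$dump" accounts
```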

There is an interesting script sample at http://www.docplanet.org/linux/backing-up-linux-web-server-live-via-ssh/ showing a way to pipe mysqldump directly into gzip and then into an ssh connection, thus creating a gzipped dump archive that never resides on the server hard drive. This can be a handy way to ensure that the backup does not fill up the server hard drive.

The post by Bradford Mitchell on July 18 2008 7:11pm gave a nice script with, among other functions, the ability to automatically create a Windows Scheduled Task. However, if the name of the script contains spaces, the creation of the scheduled task will fail.

The mysqldump command is very helpful, but sometimes you don't have the necessary permissions on the server to run it (as in shared environments).

I had this problem before, and while searching for backup tools I found MySqlBackupFTP (http://mysqlbackupftp.com). It is easy to use, and it has a free version that allows you to connect to a remote phpMyAdmin instance.

The sugarcrm.sql file will contain DROP TABLE, CREATE TABLE, and INSERT statements for all the tables in the sugarcrm database. The following is partial output of sugarcrm.sql, showing the dump information for the accounts_contacts table: