MySQL Live Binlog Backup & Restore

MySQL has many open-source backup tools: mysqldump and mydumper for logical backups, and xtrabackup for hot physical backups. They do their jobs very well and are feature-rich. However, it is common to back up the data with these tools but forget about the binary logs, which are essential for point-in-time recovery. No matter what type of backup you have, you must keep the binary logs from the moment of a full backup until the point in time you want to go back to.

mysqlbinlog --raw --read-from-remote-server --stop-never

The mysqlbinlog utility already has a great feature for synchronizing binary logs in real time to another location. The above command connects to your MySQL instance acting as a slave server (--read-from-remote-server), requesting raw binary logs without any modification (--raw) forever (--stop-never). This feature by itself has some pros and cons.
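In practice the command also needs connection options and the name of the first binary log to fetch. A minimal sketch, where the host, user, and starting file name are placeholders of my own rather than values from the article:

```shell
# Stream binary logs from a remote server into the current directory.
# Host, user, and starting file are placeholders -- adjust to your setup.
# The starting file name can be taken from SHOW BINARY LOGS on the server.
mysqlbinlog \
  --raw \
  --read-from-remote-server \
  --stop-never \
  --host=db-master.example.com \
  --user=repl_backup \
  --password \
  mysql-bin.000001
```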

Pros

No need to worry about MySQL configuration changes, binlog paths, etc., as the utility picks them up automatically

This way the original binary log files can be limited to a few days to avoid bloating the disk, while the copies can be kept longer on a different disk.

Cons

Not much flexibility; it just copies the new content to the current directory

No built-in compression or encryption option

Recently, I’ve been setting up a 2-node MySQL master-slave cluster for a business app. The data size is not expected to be very big, so I just configured xtrabackup to take a full backup each day. However, since the business is mission-critical and the data is highly relational, even a few missing records would become a nightmare. So I wanted to make sure that we have a backup of everything at any time, and I wrote a shell script to automate the backup of binary logs with a few useful options. The following command does just a few things:

./syncbinlog.sh --backup-dir=/mnt/backup --compress --rotate=30

Run mysqlbinlog, automatically restart it on failure, and continue from the last backed-up binary log file, using /mnt/backup as the destination directory.

Check for backup files that are already closed and compress them using pigz (parallel gzip).

Delete old backup files according to the given rotation policy, e.g. keeping the last 30 days of backups.
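The compression and rotation steps above can be sketched as a small shell function. This is a simplified stand-in for the actual script; the gzip fallback and the file layout are my assumptions:

```shell
# Compress finished binlog copies and prune old ones.
# Arguments mirror the script's --backup-dir and --rotate options.
compress_and_rotate() {
  dir=$1
  days=$2

  # Use pigz when available, otherwise fall back to plain gzip (assumption).
  command -v pigz >/dev/null 2>&1 && gz=pigz || gz=gzip

  # The newest uncompressed file may still be written to by mysqlbinlog,
  # so it is skipped; everything older is safe to compress.
  newest=$(ls "$dir"/mysql-bin.* 2>/dev/null | grep -v '\.gz$' | sort | tail -n 1)
  for f in "$dir"/mysql-bin.*; do
    [ -e "$f" ] || continue
    case "$f" in *.gz) continue ;; esac
    [ "$f" = "$newest" ] && continue
    "$gz" "$f"
  done

  # Drop compressed backups older than the rotation window.
  find "$dir" -name 'mysql-bin.*.gz' -mtime +"$days" -delete
}
```

For example, `compress_and_rotate /mnt/backup 30` compresses all closed binlog copies under /mnt/backup and deletes compressed copies older than 30 days.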

Now I have backup automation sufficient for a small-scale project. The script, with more details, is located here: mysql-binlog-backup

Recovery

So, what happens if something goes wrong, like a bad query dropping an important table or deleting records by mistake? There are a few possibilities.

If the slave server is configured with delayed replication, it may not be too late to promote it to master and skip the problematic query.

Otherwise, a point-in-time-recovery is necessary.
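For the first case, the delayed slave can be told to apply relayed events only up to a safe position before being promoted. A hedged sketch; the file name, position, and connection details are placeholders, not values from the article:

```shell
# On the delayed slave: roll the SQL thread forward only to just before
# the bad event. The coordinates below are placeholders.
mysql -e "STOP SLAVE SQL_THREAD;
          START SLAVE SQL_THREAD
            UNTIL MASTER_LOG_FILE = 'mysql-bin.000123', MASTER_LOG_POS = 4567;"

# Once the SQL thread reaches that point, stop replication entirely
# so the server can be promoted to master.
mysql -e "STOP SLAVE; RESET SLAVE ALL;"
```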

In the second case, the latest full backup taken before the problem must be restored using the relevant tools; doing this is trivial with xtrabackup. Since xtrabackup records the binary log file name and position at the moment of the backup, these can be fed to mysqlbinlog to create a logical dump covering the range from the backup position to the point just before the trouble. Example contents of xtrabackup_binlog_info:

mysql-bin.000111 1124

You must also find the position of the problematic query in the binary logs, first guessing which binary log file contains it. If you know the approximate time of execution, you can identify the file by checking the timestamps of the binary log files with ls -la, then dump its contents to search for the query. The following command creates a readable SQL file.
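A typical way to do this with mysqlbinlog; the file names are my placeholders, and --base64-output=decode-rows makes row-based events human-readable:

```shell
# Decode a binary log into readable SQL so the bad query can be located.
# File names are placeholders for your own binlogs.
mysqlbinlog --verbose --base64-output=decode-rows mysql-bin.000111 > readable.sql

# Then search the decoded dump for the suspect statement, e.g.:
grep -n -i 'UPDATE' readable.sql | head
```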

One must be careful while searching for the correct position. Transactions are logged as groups of events between BEGIN and COMMIT; picking a position in the middle of one will result in an error, and any other wrong position will force you to repeat the whole restore process. Here is an example piece of binary log, where the correct position would be 15720, just before the problematic UPDATE query.
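The original excerpt is not reproduced here, but decoded mysqlbinlog output typically looks like the following illustrative reconstruction (everything except position 15720 is made up). The `# at N` comments are the positions usable with --start-position and --stop-position:

```
# at 15650
COMMIT/*!*/;
# at 15720
BEGIN/*!*/;
UPDATE ...          <-- the problematic query
COMMIT/*!*/;
```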

Now the relevant logs must be replayed up to that position (assuming the full backup has already been restored).
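Putting the pieces together, the replay command can be assembled from the coordinates in xtrabackup_binlog_info and the stop position found above. The helper below is my own sketch, not part of the script; note that mysqlbinlog applies --start-position to the first file given and --stop-position to the last:

```shell
# Build the mysqlbinlog replay command from xtrabackup_binlog_info,
# which contains "<binlog file> <position>", e.g. "mysql-bin.000111 1124".
build_replay_cmd() {
  info_file=$1   # path to xtrabackup_binlog_info
  stop_file=$2   # binlog file containing the bad query
  stop_pos=$3    # position just before the bad query
  read -r start_file start_pos < "$info_file"
  # If the range spans several binlogs, every file in between must be
  # listed as well; this sketch handles only the two endpoints.
  files=$start_file
  [ "$stop_file" != "$start_file" ] && files="$start_file $stop_file"
  echo "mysqlbinlog --start-position=$start_pos --stop-position=$stop_pos $files"
}

# Pipe the generated dump into the restored server, for example:
#   $(build_replay_cmd xtrabackup_binlog_info mysql-bin.000111 15720) | mysql
```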

Notes

There are more advanced ways to do PITR (point-in-time recovery), such as restoring the backup to another instance in a VM or container, copying the binary log files there, and letting the MySQL SQL thread apply them in parallel. This is necessary for busy databases, since mysqlbinlog is single-threaded and slow. However, that topic is beyond the scope of this article.