Further Thoughts on MySQL Upgrades

I have been upgrading more MySQL database instances recently and have found a few more potential gotchas which, if you are not careful, can be rather nasty. These are not documented explicitly by MySQL, so it may be handy to know about them if you have not come across this type of thing before.

Most of the issues relate to upgrading replicated MySQL instances, either the master servers or the slaves. Some seem specific to the rpm packages I am using (MySQL enterprise or MySQL advanced rpms), though others are not.

Take care upgrading a 5.0 master when you have 5.1 slaves

It is not a good idea to run mixed major versions of MySQL in a replicated environment, so why would I be doing this? If you work in a replicated environment and have several slaves then it is recommended that you upgrade the slaves first. I work with quite a few slaves, so the process of upgrading them all takes longer than you would think: quite a long time in fact, while the different systems are tested and upgraded. Along came a newer version of 5.0 and I thought of upgrading the master, which had been giving a few issues, one of which was resolved in the latest 5.0.89. At least when using rpm packages, the package upgrade runs /usr/bin/mysql_install_db --rpm --user=mysql as part of the upgrade procedure. This ensures that the mysql db is up to date, but it also writes to the binlog if one is configured, which on a master would normally be the case. Of course these are 5.0 install commands and they are not really understood by the 5.1 slaves, which try to interpret them too. End result: broken replication, and you need to skip several transactions. If you have several slaves this can be rather painful.
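Skipping the broken transactions on each slave can at least be scripted. Here is a minimal sketch that only prints the statements to run; the host names are placeholders, and it assumes one bad event per slave, so check SHOW SLAVE STATUS first and repeat as needed before piping the output to the mysql client:

```shell
#!/bin/sh
# Print the statements needed to skip one bad replication event on a slave.
# Host names below are placeholders; run the generated SQL on each slave
# only after confirming the failing event with SHOW SLAVE STATUS.
skip_one_event() {
    cat <<EOF
-- run on $1:
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
EOF
}

for host in slave1 slave2 slave3; do
    skip_one_event "$host"
done
```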

Note: this happens because I replicate the mysql database, which I do as it is a good, quick and clean way to distribute GRANT information to all slaves. As far as I know MySQL does not discourage this, though perhaps many sites do not replicate their mysql database.

If you upgrade a master server from 5.0 to 5.1 by running mysql_upgrade after upgrading the binaries, you may have a similar issue: the mysql_upgrade script determines that some new columns or tables are needed on the master and adds them. Again this gets written to the binlog, which is good for point-in-time recovery on the server itself, but bad from a replication point of view if the slave does not run the same version of MySQL.

In fact the binlog has these two uses: (a) recording changes for point-in-time recovery, and (b) recording changes for replication to slaves. One could argue that for (a) the changes should not be written to the binlog, but I think that is wrong if you have to recover just after an upgrade. So I would suggest that the upgrade changes should probably be flagged specially in the binlog, allowing the slave to ignore them in the normal situation but also to recognise them, so that you could stop the slave, perform the same binary upgrade, and then continue replication with the new binaries exactly as had been done on the master. This behaviour should be controlled by a runtime flag which can be dynamically configured.

RPM Installs/Upgrades always start the server

rpm(8) is good but some of the design decisions in the MySQL rpms are questionable. One of these is that the current MySQL rpms are designed to always start the server, whether doing a fresh install or an upgrade. Currently it is not good practice to do rpm -Uvh MySQL-server-advanced-gpl…rpm (a 5.1 rpm) if you are running MySQL-server-enterprise-gpl…rpm (a 5.0 rpm), so normally I stop MySQL, remove the enterprise rpm and install the advanced rpm. That starts the server, and with it the slave threads, on a box which is not completely “stable”. The solution is to add skip-slave-start to /etc/my.cnf, but that should not be necessary, as immediately after running mysql_upgrade you need to remove the option again.

Also, during the upgrade process it is a good idea to avoid external client access, so sometimes I also set bind-address to 127.0.0.1 for the duration of the upgrade. That may not keep clients away if they are running on the local server, but it helps in many cases.

Conclusions

All this leads me to a simple conclusion: if you can, upgrade one box the slow way and then clone the other slaves from that. Cloning is simple and requires no thought, so it is a good idea.

The other conclusion, based on the first one, is that if you can, build a new master; that is, clone a slave to make the new master. Configure it to write its own binlogs and then you can move the existing slaves underneath it. Once all slaves are underneath the new master (leaving it as the only slave of the original master), you can simply point clients at the new master instead of the old one. That keeps downtime to a minimum and avoids many problems.
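Moving each existing slave under the new master is a CHANGE MASTER TO per slave. A minimal sketch that only prints the SQL; the host name and binlog coordinates are placeholders, and the real file/position values must be read from SHOW MASTER STATUS on the new master at a point where the slave is consistent with it:

```shell
#!/bin/sh
# Emit the SQL used to repoint a slave at the new master.  All three
# arguments are placeholders: take the real binlog file and position
# from SHOW MASTER STATUS on the new master.
repoint_slave_sql() {
    # $1 = new master host, $2 = binlog file, $3 = binlog position
    cat <<EOF
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='$1',
                 MASTER_LOG_FILE='$2',
                 MASTER_LOG_POS=$3;
START SLAVE;
EOF
}

# Example with hypothetical values:
repoint_slave_sql new-master.example.com binlog.000001 4
```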

As far as I know none of the comments I have made above are in the current 5.1 upgrade documentation. I have opened quite a few tickets requesting that the documentation be improved, and I guess that will happen slowly. For many people the points I have mentioned may seem irrelevant to their situation, but for me they have caused a few problems. If you do not need to worry then you can skip the documentation; otherwise, if this were documented it would save you a few tears when you least expect problems.

Published by

Simon J Mudd

Born in England, I now live in Spain, but spent a few years living in the Netherlands.
I previously worked in banking (financial markets) both in IT and as a broker, but IT has always had a stronger influence. Now working at booking.com as a Senior Database Administrator.
Other interests include photography, and travel.
Simon is married, with two children and lives in Madrid.

4 thoughts on “Further Thoughts on MySQL Upgrades”

After writing this article I got caught out by precisely the problems I had mentioned. In order to avoid writing 5.0 to 5.1 upgrade information to the binlogs, I had disabled them during the upgrade procedure. However, I had forgotten that the master (which had just been upgraded from 5.0 to 5.1) was also a slave, and I had not disabled starting the slave threads. So the replication threads started, collecting changes from the upstream master; these were NOT written to the binlog and thus never reached the downstream slaves. I had to rebuild 30 slaves because of this mistake. Costly!

So below is my current check list of what to do when upgrading a master which is itself a slave, especially if the upgrade is a major version change.

Intermediate Master server upgrade procedure

In the situation below the upstream master was running 5.0, the server in question was being upgraded from 5.0 to 5.1, and the downstream slaves were already running 5.1.

Problems

* I replicate the mysql database to ensure grants are replicated.
* When using rpms you can only have a single MySQL rpm installed at once.
* When performing an install or upgrade the MySQL rpm tries to reinitialise the /var/lib/mysql based db.

The intermediate master has its binlogs enabled, so these commands get written to the binlog and therefore picked up by any downstream slave. If a downstream slave is already running the upgraded version of MySQL, these replicated commands will fail.

Part of the 5.0 to 5.1 upgrade process involves running the script mysql_upgrade, which checks all tables and updates those that need updating, though some may need to be done by hand. (These are mainly InnoDB tables, and ALTER TABLE xxxxx ENGINE=InnoDB; is sufficient.) The script also makes changes to the mysql database, as this is necessary for the major version upgrade. As we replicate the mysql database, these changes would also get written to the binlog.
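Generating those per-table rebuild statements can be scripted. A minimal sketch that reads "schema table" pairs on stdin and emits the corresponding ALTER statements; the table names in the example are hypothetical, so feed it the tables mysql_upgrade actually complained about and review the output before piping it into the mysql client:

```shell
#!/bin/sh
# Read "schema table" pairs on stdin and emit the ALTER TABLE statements
# described above.  Review the output, then pipe it into the mysql client.
emit_innodb_rebuilds() {
    while read -r schema table; do
        printf 'ALTER TABLE `%s`.`%s` ENGINE=InnoDB;\n' "$schema" "$table"
    done
}

# Example with hypothetical table names:
printf 'shop orders\nshop customers\n' | emit_innodb_rebuilds
```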

So to avoid these local upgrade statements being written to the binlog you must temporarily disable this on the master.

This can be done by commenting out the following entries, which on the intermediate master are likely to be set:
log-bin=../log/binlog
log-slave-updates

Additionally, to prevent any clients connecting to the server during the upgrade process, it is convenient to disable remote access. This can be done by *enabling* the following option:

bind-address = 127.0.0.1

Finally, and this is what I forgot, the intermediate master is itself a slave. To avoid the replication threads starting and processing changes from the upstream master, which would NOT be written to the binlog during the upgrade, you must ensure the slave threads DO NOT start. This is done with:

skip-slave-start

So to actually do the upgrade perform the following steps:

* Note: This does not take into account any procedures that may be needed if you have stored procedures or triggers configured on the server. If you have these please consult the MySQL documentation and update this page accordingly.
* Adjust /etc/my.cnf as indicated above.
* Stop MySQL with /etc/init.d/mysql stop
* Remove the 5.0 rpms with rpm -e XXX
* Adjust the configuration for differences between 5.0 and 5.1, which currently means changing the line log-slow-queries = ../log/slowlog to slow_query_log_file = ../log/slowlog and adding the line slow_query_log = ON. The 5.1 server will start even if you don’t make this change.
* Install the 5.1 rpms with rpm -ivh …
* Check that the server has started and that the replication threads are NOT running.
* Run mysql_upgrade --skip-write-binlog to perform the basic upgrade. Check the output, which is probably in /var/log/mysqld.log.
* For those tables that mysql_upgrade could not upgrade, probably InnoDB tables, do ALTER TABLE xxxxx ENGINE=InnoDB;
* Your server should now be upgraded and the binlog should be untouched.
* Stop the server /etc/init.d/mysql stop
* Restore the server’s original configuration by:
** Adding back log-bin=../log/binlog
** Adding back log-slave-updates if necessary
** Removing bind-address = 127.0.0.1
** Removing skip-slave-start
* Restart the server with /etc/init.d/mysql start and check it starts and replicates normally.
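The temporary /etc/my.cnf edits in the list above can be scripted. A sketch, assuming the exact option spellings used in this article; the sed patterns are illustrative, so verify them against your own configuration file before use:

```shell
#!/bin/sh
# Apply the temporary /etc/my.cnf changes described above to the file
# named in $1.  Option names follow this article; check them against
# your own configuration before relying on this.
adjust_for_upgrade() {
    cnf="$1"
    # Disable binlogging so the upgrade statements never reach the slaves.
    sed -i -e 's/^log-bin/#log-bin/' \
           -e 's/^log-slave-updates/#log-slave-updates/' "$cnf"
    # Rename the 5.0 slow-log option to its 5.1 equivalents.
    sed -i 's|^log-slow-queries *= *\(.*\)|slow_query_log_file = \1\nslow_query_log = ON|' "$cnf"
    # Keep clients and the replication threads away during the upgrade.
    printf 'skip-slave-start\nbind-address = 127.0.0.1\n' >> "$cnf"
}
```

Remember to undo all of these changes (steps above) before putting the server back into service.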

This is a relatively slow process, and while not difficult, is rather time consuming.

Do you think there is a need to run mysql_upgrade though we are using rpms to upgrade mysql from 5.1 to 5.5?
I see ” If you install MySQL from RPM packages on Linux, you must install the server and client RPMs. mysql_upgrade is included in the server RPM but requires the client RPM because the latter includes mysqlcheck. ”

So, do you still think mysql_upgrade and the InnoDB alters are the actions that are needed? Please reply ASAP.

Yes, you do need to run mysql_upgrade when upgrading from 5.1 rpms to 5.5 rpms. There are differences between the two versions and mysql_upgrade will check those and fix any issues. That said, if you are running MySQL 5.1 now I’d look to upgrade to 5.5 only as a stepping stone to MySQL 5.6, which is now “GA”, as this will give you a lot more features, better performance under load and probably make life generally a lot better.

Note: when upgrading with rpms you normally need to upgrade both the client and server packages at the same time. This can be an inconvenience, but it is due to the way the rpms are packaged for MySQL; it is not intended that multiple versions be installed or running at the same time.