After lots of pain trying to get CentOS 6 to work on my mid-2011 Mac Mini and failing, I finally found that CentOS 6.3 does indeed install. Previous versions did not work for me, but I’m delighted to see that RedHat have done great work: the changes that made Fedora 17 work on this hardware have obviously been ported to CentOS 6. The issue in the past had been that the Mac would not boot the install DVDs.

So all that was needed was to burn the (new?) CentOS-6.3-x86_64-netinstall-EFI.iso image and press ALT on boot; the DVD booted fine, and the install went along the same way as any normal CentOS/RHEL installation. In fact I’ve been able to keep my original Fedora setup, as the Volume Group I’d previously set up on my disk was recognised, so all I needed to do was create a new “root” logical volume and install on to that.

So thanks RedHat. You now make it much easier for me to migrate my “main” but older system on to this Mac.

I saw a post on profiling memory usage today and it reminded me of several discussions I have had with different people.

Why would you want to profile memory usage? Usually to see where memory is allocated and for what purposes, and usually you only care when memory usage is higher than expected. That is, a DBA normally wants to use all available memory on a server for mysqld, whatever that size may be.

Configuration parameters may be inappropriate and need adjusting, so having a way to determine the range of memory usage based on those parameters would be most helpful. However, the configuration parameters as a whole put no limit on the memory used, so different workloads can quite easily exhaust the memory on the OS, triggering swapping, or perhaps causing the mysqld process to be killed by the kernel’s OOM killer. None of this is ideal.

So I would like to configure a cap on the maximum amount of memory that can be used, and indeed that is what most RDBMSes do. That is, the allocation is made on startup, ensuring the memory is available, and any memory requests are taken out of this pool. This technique _should_ avoid swapping (though I still see issues on NUMA architecture boxes). It also means that at some points in time the database server may not be able to honour memory requests, and therefore must fail, wait until the needed resources are available, or free existing resources to make space for the requested new one.

Doing this well is probably hard enough. Doing it in MySQL with different storage engines is probably harder still as each engine does things its own way.

Some mechanism like that would certainly help avoid strange behavioural issues when the query profiles change and this causes the server to behave in unexpected ways, such as swapping itself to death. The only solution right now seems to be to configure the server to ensure that these “edge cases” can not happen by using less memory, thus wasting valuable resources on the server.

Currently it is also pretty hard to see how memory is used. If you use InnoDB you know that the buffer pool takes up a sizeable portion of memory, but there is still a significant amount of other memory that’s used by each individual connection and this can vary significantly depending on the query profiles. It would be really nice to dynamically look inside MySQL and see how memory is used, and also see how adjusting different parameters may adjust this.

Of course none of this is in MySQL at the moment. I would first like to see better information on current memory usage (I would guess via some more tables in performance_schema), and at a later stage some way to cap memory usage to a desired level, in the hope that when those limits are reached mysqld will figure out how to free up some resources in order to continue, wait, or fail “gracefully”.

Until then we will have to continue playing the game of “configure mysqld”: see if it works, and then ensure that queries to the system do not change enough to break it and require us to go back and do it all again.

Note: there are several feature requests for this, so I do not think I am the only one wishing for more visibility and configurability of memory usage.

I have been using virtualisation at home for some time, since the days of vmware-server on Linux. It works well, and if you have a sufficiently powerful box it is a great way to experiment without destroying your own server or installing software which a few hours later you may want to remove again.

I also recently bought a 2011 Mac mini, into which I put 16 GB of RAM, hoping to set it up as a host for various virtual machines. This was going to run Linux. I wanted to keep the host as light as possible and do the real tasks inside the virtual machines. I also wanted to use this server to migrate away from an older PC I use for my day-to-day services (web, DNS, email, FTP, etc).

Getting Linux installed on the Mac was a bit trickier than expected. I have been using RedHat since version 3.0.3, so prefer that to Debian or Ubuntu, if only because it is an environment I am more familiar with. I tried in vain to get CentOS 6 to install on the Mac, but it seems the required EFI boot loader does not work; others have had problems too, I think. Less surprisingly, Ubuntu installed first time (credit to them for getting it to work on more exotic hardware), and when I tried Fedora 17 that also worked, so that is what I am using. I would still prefer to switch to CentOS as it stays stable for longer, and playing the upgrade game with Fedora gets to be a pain. Maybe some time soon…

There is a lot of noise around cloud computing now, and I wanted to use something a bit more sophisticated than virt-manager on my PC. virt-manager requires you to organise the exact setup of the boxes, network and disk images yourself, whereas cloud platforms seem to make this management easier for you.

oVirt seemed promising: there is even a Fedora 17 repo, and it is supposed to work out of the box according to various blogs and articles, but I had trouble getting it all up and running. Running it on a single host requires a lot of packages and software to be installed. After various attempts I gave up.

OpenStack is also pretty hot and has quite good documentation, though it seems complex, perhaps because of the scale it runs at in some deployments. Again I read the manuals and documentation and had trouble getting the whole environment to work. The configuration is quite complex, and it seems easy to make a mistake; if you do, figuring out what is broken, or why, is pretty hard. That is a shame, especially as this seems to be perhaps the most sophisticated of the 3 cloud environments I tried.

Family circumstances mean I have a lot less time than I would like to investigate but I did do quite a bit of homework before starting so this has been quite frustrating.

I then tried OpenNebula, version 3.8.1. The software is still evolving quite quickly and has changed quite a lot over time. It also seems to be the smallest package: the tar ball is tiny, and at least on Fedora 17 it did not take me long to get the software installed (including sunstone-server). I see it is tagged to be included in Fedora 17 but could not find actual packages to install.

There are quite a few howtos and tutorials for OpenNebula, but some of these are for older versions, and if you follow them things will not work as expected (differences in the templating options, etc.). This is confusing for a new user.

I use Fedora 17 but do not see packages for OpenNebula. Working packages for Fedora, CentOS, Ubuntu, etc. would be most welcome, as they would probably ensure that the different components were configured correctly with OS components like virt-manager.

Most of the OpenNebula daemons have start and stop commands but no status command. A status command would at least allow you to check that the configuration was correct and the daemons correctly started, especially when doing manual installs.

I would like to see some sort of “check installation” command allowing a more complete set of checks to be done, to see if the installation is correct and complete for each of the different components that makes up, or is needed by, OpenNebula. In many places the documentation says “Assuming you have a correct kvm setup …”, “Assuming you have a correct iscsi setup …”, etc., yet provides no way for you to check whether the setup of these base components actually works as OpenNebula requires. Whether those instructions live in the official documentation or on a wiki somewhere (“How to setup kvm on Fedora X for OpenNebula….”) I do not really mind. But I have had problems with PolicyKit not allowing oneadmin access to the virtual machine, with the virtual machines not seeing their disks (I think due to user/group permissions, as on Fedora 17 kvm seems to assign file ownership as qemu:qemu), and with the VNC connections in Sunstone not working. I am pretty sure most of these issues are “trivial” to fix if you have done this before and understand what OpenNebula expects and what the host operating system is doing “incorrectly”.

For initial testing, including various standard template files would be useful, if only to help confirm simple things like:

booting from a cd image works

booting from a minimal hard disk works

booting from a hard disk with networking works (perhaps bridging or NATing out from the host’s network)

booting from a CD image in a way that allows someone to install a new image on to a hard disk. This is a very typical usage pattern, and the exact operating system or Linux distribution / version may vary.

I was confused by the network documentation and exactly how that relates to the underlying host’s network configuration. This does not seem to be well documented but “assumed”.

To summarise: these technologies are here and being used quite widely, but I would guess mostly in larger environments where the people using them have time to investigate and set things up properly. I was hoping that setting them up at home would not be that hard, but I have found it much harder than anticipated just reading the documentation and trying on my own. I think all these platforms would be used more widely if they were more plug and play, that is, if they worked out of the box on most of the popular Linux distributions.

Having said that things have obviously progressed since the first virtualisation environments were created (on the x86 architecture) and probably in no time at all these technologies or similar will just work out of the box and be the norm. In the meantime I am going to keep poking away and hope to finally get one of these environments working for me.

So if you read this and have some useful pointers I would be most interested in using them to get up and running.

Again, this GTID stuff looks good, but it seems to prevent changes in the configuration of performance_schema, which I think is not appropriate, especially as P_S now has lots of extra goodies and after 5.6 will surely have even more.

I personally can think of some very nasty consequences of applying this on the slaves I manage, and the reason I’m posting the bug is that while I guess this is too late to fix in 5.6 as it’s effectively a new feature, I’m sure many sites may bump into this and be somewhat disappointed if they want to use the new GTID feature and have several slaves. Hence, if the fix/feature has to go into MySQL 5.7 then I hope it goes in sooner rather than later. We will see.

Updated: 2013-09-19

I probably should have updated this earlier but it does seem that Oracle have taken these comments on board. See: WL6559. It looks like they plan to do this for 5.7 which is good news. I’m really hoping that we will not have to wait too long for this version as it has other “goodies” which are going to be interesting. Reducing the time of the release cycle will I think allow those of us who appreciate these new features to have access to them sooner.

A colleague, Kris, recently commented on a find I made on 5.6.7-rc1 which, while not the end of the world, does seem to severely limit the use of the new GTID features, or if nothing else makes them a pain to use.

Today I found another related issue, when trying to check the syntax of one of the MySQL commands. That’s now reported as bug#67073.

All of these are sort of edge cases, but they do make 5.6 look a little rough around the edges, when the rest of the product is actually really good and a great improvement over the MySQL 5.5 GA version that everyone uses. That’s really a shame.

I did report the original problem as SR 3-6270525721: “MySQL 5.6.7-rc1, grants, replication and GTID cause problems”, but of course that’s not visible to the general MySQL community. While this specific issue with the help tables has been reported in bugs.mysql.com, I do get the feeling that other users of MySQL may report issues which are not visible to the community as a whole, which means that solutions, workarounds, or even the possibility of saying “I have that problem too” are no longer available.

For many tickets I’ve chosen to file 2 tickets, one on MySQL’s http://bugs.mysql.com and another internal Oracle ticket, but this is really a nuisance and a time waster, and in the end I don’t believe it’s helpful to anyone. I’m sure I’m not the only person who suffers from this. I also note that when browsing what’s changed in the latest release, such as http://dev.mysql.com/doc/refman/5.6/en/news-5-6-7.html, you see references to internal bug numbers which many people can not even access. I am able to see most of these tickets but many people are not, so the problem and resolution become rather opaque. That’s counterproductive for everyone. Improving this reporting of changes would help us all.

Oracle, please make this work in an easier manner, so that issues with MySQL software can be more visible (if the reporter chooses to make them so) and information on changes and bug fixes is more complete. This will save time and frustration for everyone.

I’ve blogged before about using numactl to start up mysqld, and thus to try to better spread the memory usage on larger-memory servers. This came from an article by Jeremy Cole and works fine. I recently had some issues with mysqld seeming to run out of memory on a box which appeared to have plenty free, so it seemed like a good idea to adapt a minutely collector script I run to include the numa_maps output, so that I could see if the failed memory allocations were related to this. So far so good.

Many of the clients that connect to the database servers I manage have a very short connect timeout, typically 2 seconds. In a normal network and under normal conditions this is more than enough to allow for successful operation and if the connect does not work in that time it’s an indication of some underlying issue, whether that be load on the mysqld server, or something else.

The change I implemented to collect the numa_maps information wasn’t expected to cause any issues; typically quite a lot of information is cat’d out of /proc/somewhere to collect configuration or status information.

However, after implementing this change I suddenly noticed more mysql client connection errors than before, in fact a significant increase. The cause wasn’t immediately apparent until I noticed that these errors only occurred a couple of seconds after the change in minute, at hh:mm:02, hh:mm:03, etc. Then it dawned on me that this was indeed due to the change in the collection script looking at /proc/<mysqld_pid>/numa_maps. Disabling this functionality again removed all issues. This was on servers with 192 GB of RAM.

The information provided by /proc is useful, and it would be even better if it could be collected in a way which doesn’t block the process being “investigated”. As such I have filed a bug report with RedHat, though really this is a kernel “bug”: behaviour which, while it may be known, in this particular case is very unsatisfactory.

So whilst most people do not configure such a short mysql connection timeout, if you do have a setting like this please be aware of the consequences of running numa-maps-summary.pl, or of looking at /proc/<mysqld_pid>/numa_maps directly. This was not discussed by Jeremy, and I’d guess he was unaware of it, or did not expect people to run this type of script as frequently as I was. Database memory sizes keep increasing, so while this may not have been noticeable on smaller servers it can now become an issue.
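For anyone collecting this data anyway, the per-node totals in numa_maps are simple to aggregate. Below is a minimal sketch in Python (the function name is my own invention, not part of any tool mentioned above); note that reading the real /proc/&lt;mysqld_pid&gt;/numa_maps still triggers the same kernel page walk that caused the stalls described here, so any such collector should run sparingly.

```python
import re
from collections import defaultdict

def summarize_numa_maps(text):
    """Sum per-NUMA-node memory from /proc/<pid>/numa_maps content.

    Returns {node_number: bytes}. Uses each mapping's kernelpagesize_kB
    field when present, otherwise assumes 4 kB pages.
    """
    totals = defaultdict(int)
    for line in text.splitlines():
        # Fields after the address and policy look like "anon=1024",
        # "N0=512", "kernelpagesize_kB=4", or "file=/path/to/mapping".
        fields = dict(f.split("=", 1) for f in line.split()[2:] if "=" in f)
        page_kb = int(fields.get("kernelpagesize_kB", 4))
        for key, value in fields.items():
            if re.fullmatch(r"N\d+", key):  # Nx=pages on node x
                totals[int(key[1:])] += int(value) * page_kb * 1024
    return dict(totals)

sample = (
    "7f0000000000 default anon=1024 dirty=1024 N0=512 N1=512 kernelpagesize_kB=4\n"
    "7f1000000000 default file=/usr/sbin/mysqld mapped=256 N0=256 kernelpagesize_kB=4\n"
)
print(summarize_numa_maps(sample))  # {0: 3145728, 1: 2097152}
```

In practice you would pass `open(f"/proc/{pid}/numa_maps").read()` to the function; a lopsided result across nodes is exactly the imbalance the numactl trick tries to avoid.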

New versions of MySQL are always interesting to try out. Often they have features which I may have asked for myself, so it’s satisfying to see them eventually appear on a system I use. Often other new features make life easier for the DBA. Finally, we hope overall performance will improve and that managing the server(s) will become easier.

So I had a system which needed to handle heavy writes, and performance was a problem, even when writing to SSDs. Checkpointing seemed to be the big issue, and the combined ib_logfile size in MySQL 5.5 is limited to 4 GB. That seems a lot, but once MySQL starts to fill these files (and this happens at ~70% of the total, I believe), checkpointing kicks in heavily and slows things down. So the main reason for trying out MySQL 5.6 was to see how things performed with larger ib_logfiles. (Yes, MariaDB 5.5 can do this too.)
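To put rough numbers on that, here is a sketch of the arithmetic, assuming the ~70% fill level mentioned above (the real InnoDB async/sync checkpoint watermarks are close to, but not exactly, this figure, so treat the numbers as illustrative):

```python
def redo_headroom(log_file_size_mb, files_in_group, checkpoint_ratio=0.70):
    """Total redo capacity in MB, and the approximate fill level (MB)
    at which heavy checkpointing starts. The 70% ratio is the figure
    quoted in the text, not an exact InnoDB constant.
    """
    total_mb = log_file_size_mb * files_in_group
    return total_mb, round(total_mb * checkpoint_ratio)

# MySQL 5.5's combined 4 GB maximum (e.g. 2 x 2048 MB files):
print(redo_headroom(2048, 2))    # (4096, 2867)
# The 2 x 16000 MB setup tried on MySQL 5.6:
print(redo_headroom(16000, 2))   # (32000, 22400)
```

So on 5.5 heavy checkpointing starts after less than 3 GB of redo has been written, while the larger 5.6 configuration gives roughly 22 GB of headroom before the same pressure builds up.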

Things improved a lot for my specific workload which was great news, but one thing which I noticed was that initial install / setup time of a system with these large files increased a huge amount. (I was using 2 x 16000M files.) That was noticeable even on a box with SSDs. On a box with HDs that would be horrible.

mysql_install_db now uses InnoDB to set up some system things and restarts the mysqld binary a couple of times. When you also have a large innodb_buffer_pool (I was trying on a box with 192 GB of RAM), each startup of mysqld took quite a while.

So my initial thought is that the bootstrap probably does not need to use the normal innodb_buffer_pool_size setting, but should perhaps override it with the minimal size needed.

Managing the size of the ib_logfiles is also something that requires restarting mysqld. Given the sort of experimenting I was doing, it would be nice to be able to configure this dynamically. I can see many issues with changing the sizes of the existing files, but what seems easier to implement would be the ability to increase or decrease the number of files used (removing the older, higher-numbered files once they are “used”/checkpointed), and that would give the DBA a lot more freedom.

I clone servers quite a bit, and the extra 32 GB of these redo log files is a pain and time-consuming to clone/copy. Being able to reduce the number of files and checkpoint prior to shutting the server down for cloning, then re-enable the normal value afterwards, would save a lot of cloning time. It would be a handy feature.

I also tried playing with the new GTID features. They look great, except for one thing: they expect master and slaves to have binlogs enabled, so that the GTID values are kept locally and across restarts. I manage several boxes where, for performance reasons, the slaves do not have binlogs enabled; if a slave dies it simply gets recloned. Adding back the requirement for binlogs (and log_slave_updates) on a slave to enable this feature seems wrong. I can see the need for the different GTID values to be kept somewhere, but I don’t see the need to keep all binlog information, at least on a slave. Given that the ratio of slaves to masters can be quite high, that can be an issue. If this information must be written somewhere, why not in an InnoDB table, so that on server recovery you have information consistent with the rest of the database, something that might not happen with the binlogs…?

Talking of binlogs, one new feature I’m using which I think is good is binlog_row_image = minimal. This reduces the size of the binlogs and thus I/O. I’m missing the next step, which would be to compress those binlog events and reduce binlog size further. That may not be an issue on many systems, but several servers I manage write over 100 GB of binlogs a day. Being able to compress the binlogs would be nice, and having finer control of expire_logs_days too (as the daily granularity can be too coarse in many cases for me) would help.

NOTE: In MySQL 5.6.5 with gtid_mode = ON, mysql_install_db crashes! Bug reported to Oracle, and I guess it will get fixed soon.

I have seen a few other crashes in 5.6.4 and 5.6.5; I can’t provide links unfortunately as they’re not public, and they are still not resolved. I’d like to make the details of these crashes public, as others may come across them, or may have come across them before me, but these issues are not visible in http://bugs.mysql.com. It would be nice if they were, but that requires me to duplicate information, which is a pain. Oracle, please make this easier. It is good for us all.

So with the little testing I’ve done so far MySQL 5.6 looks good. I need to spend more time with it and investigate the new features, many of which will make life easier.

When it finally goes GA I will have to spend a lot more time seeing how it behaves but I believe there are still a few issues which still need resolving prior to that happening.

Over the last few months, on different systems, I have been modifying the ib_logfiles and their sizes, increasing them due to the increase in load the systems have been experiencing. These files contain information which has been committed but not yet written to the ibdata files, or to the individual .ibd files if using innodb_file_per_table. They are used on recovery, if the server shuts down unexpectedly, to get the system back into a consistent state again.

The checkpointing in MySQL is well documented, but I was surprised just how much the throughput of heavy-write applications can change if the sizes of these files are changed. It may seem obvious, but given how disruptive it can be to shut down a system, change these values and start it up again, this is not done frequently. These files help improve performance because changes are written into them linearly, compared to changes in the data files, which are written in pages probably scattered randomly over the disk, a slower process.

I come from a Sybase background where “a database” consists of “data” and “log” files. The data files correspond with the ibdataX or .ibd files and the log files correspond with the “ib_log_fileX” files, the major difference being that (at least when I looked at this in Sybase 12.5) the log files were considered part of the database and once added you also could not reduce their size. Increasing was easy but then you couldn’t free up the space again.

So MySQL and Sybase are similar, but MySQL’s implementation keeps the ib_logfileX files independent from the database files. It struck me that it would be very useful if we could dynamically tailor the two settings InnoDB provides: innodb_log_file_size and innodb_log_files_in_group. Since MySQL already automatically creates missing files, typically when you start MySQL for the first time or when you remove the files after changing the innodb_log_file_size parameter, this sounds reasonably easy: make innodb_log_files_in_group larger, and when you get past the last available file, build a new one. If the number is decreased, you simply go back to the first one again, ignoring the later files. Changing the size of the files could be done too, but is perhaps harder. In both cases you need to be sure this works when recovering from a crash, which is a critical moment.

A change like this would help people see whether changing the size improves performance, and certainly for systems where taking downtime is expensive and difficult, this would be a welcome move, as over time initial sizing guidelines may vary and a dynamic change would resolve the problem quickly. I recently took down some old MySQL systems which had been running uninterrupted for just over a year. I’m sure I would have probably adjusted these settings earlier had I been able to, and that would have been very convenient.

I see that MySQL 5.5.21 has just been released. This sounds interesting. I’m mainly running 5.5.16 which has been broadly stable, but I have been caught by a few important issues which 5.5.21 fixes according to the change list:

MySQL, like a lot of other software, has many knobs you can tweak. Most of these knobs may affect behaviour, but more importantly most affect the memory usage of the server, so getting these settings right is very important.

Most of MySQL’s memory is really used just as a cache, in one form or another, for information that otherwise lives on disk. So ensuring you have as large a cache as possible is important. However, making these memory sizes too large will trigger swapping, and can possibly cause the server to crash or the kernel to kill the process when it runs out of memory. That’s something we want to avoid.

Certain settings affect memory allocation on a per-connection/thread basis, bounded by thread_cache_size and max_connections. If you configure for the worst case (max_connections) you may end up not actually using all the memory you have available, memory which could normally be used for other purposes.

Recently a server I managed was configured incorrectly, with a larger sort_buffer_size (4M to 256M) and a larger read_buffer_size (4M to 20M). At first glance the change looks quite innocent, and, no-one noticing that these are per-connection settings, it got rolled out. max_connections on this server was set to 1000, while normally there were ~40 connections, of which only a few were active. The mysqld memory footprint on startup looked fine, and in fact under normal usage it also worked fine. However, spiky load suddenly changed this nice behaviour: as configured, mysqld ramped up the thread count, and hence memory usage, resulting in swapping and finally server death…
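The arithmetic behind that failure is easy to sketch. This is only an illustration: real per-connection usage also includes join and tmp-table buffers among others, and MySQL often allocates these buffers lazily or only partially, so this is a worst-case bound rather than a prediction:

```python
def worst_case_connection_memory_gb(max_connections, per_conn_buffers_mb):
    """Worst case if every permitted connection allocates its
    per-connection buffers in full (a bound, not a typical value)."""
    return max_connections * sum(per_conn_buffers_mb) / 1024

# Before the change: 4M sort buffer + 4M read buffer per connection.
print(worst_case_connection_memory_gb(1000, [4, 4]))      # 7.8125 GB
# After the change: 256M sort buffer + 20M read buffer per connection.
print(worst_case_connection_memory_gb(1000, [256, 20]))   # 269.53125 GB
```

With ~40 mostly idle connections neither figure is reached, which is why the footprint looked fine at first; it took a load spike driving the thread count towards max_connections to expose the difference between an 8 GB worst case and a 270 GB one.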

The fault of course was mine for not noticing, but this has been an issue with MySQL forever. Most other RDBMSes manage memory slightly differently: you define how much memory you want to use (a maximum), this is optionally locked into memory, and further requests for different buffers are taken from this global pool. That is much safer and avoids unexpected swapping or memory over-allocation, but it does raise the question of what happens when a memory request can not be honoured.

Initially I would expect two different behaviours: either

kill a thread whose memory can not be allocated, or

wait a certain time to allocate that memory, after which it gets killed anyway.

Option (2) is probably saner, and possibly some sort of deadlock detection could kick in if all threads are waiting for memory, perhaps killing the youngest thread, or the thread which has done the least work, first. Possibly there are other, better ways of doing this?
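The pool-plus-timeout idea in option (2) can be sketched in a few lines. This is purely illustrative, not anything MySQL implements; the class and method names are my own:

```python
import threading

class MemoryPool:
    """Sketch of option (2): take memory from a fixed-size pool, wait
    up to a timeout for space to be freed, and report failure if it
    never becomes available (the caller would then abort its work)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.cond = threading.Condition()

    def acquire(self, nbytes, timeout):
        with self.cond:
            # Wait until the request fits, or give up after `timeout`.
            fits = self.cond.wait_for(
                lambda: self.used + nbytes <= self.capacity, timeout)
            if not fits:
                return False  # caller must fail "gracefully"
            self.used += nbytes
            return True

    def release(self, nbytes):
        with self.cond:
            self.used -= nbytes
            self.cond.notify_all()  # wake any waiters

pool = MemoryPool(capacity=100)
print(pool.acquire(80, timeout=0.1))   # True
print(pool.acquire(40, timeout=0.1))   # False: would exceed the cap
pool.release(80)
print(pool.acquire(40, timeout=0.1))   # True
```

The deadlock-detection refinement would sit on top of this: if every worker is blocked inside acquire(), pick a victim rather than letting all the timeouts expire.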

I can imagine that changing MySQL’s current behaviour to do something like this could be quite hard, especially as ideally the engines would also use the same memory management mechanisms, but I see it as a good thing that would make MySQL more robust, especially under load, which is after all what counts. Of course this will not happen in today’s 5.5 GA version, or in tomorrow’s 5.6 version, which is probably likely to appear some time this year; that is a major change. It would be nice if Oracle looked at this for 5.7 as a way of ensuring that when resource usage does come under pressure MySQL does not fall over, but attempts to use the allocated resources as best as possible.

In the meantime what would help would be:

better documentation, so we can see clearly how all mysql memory is allocated. There are several web pages suggesting ways to calculate this, but certainly no definitive guide.

The InnoDB engine’s documentation talks about memory usage, and most people think that innodb_buffer_pool_size is the main setting. Yet read further and there’s talk of an overhead of perhaps 1/8th. I have recently been playing with innodb_buffer_pool_instances settings > 1 (using values in the range of 20-40) and am inclined to think that this increases that overhead somewhat more, yet there’s no documentation on this, so I can’t tell whether my guess is right or not. Please, InnoDB developers, improve your documentation, if only to prove me wrong.

Ideally, some tools to tell you if your server is possibly misconfigured. Coming from a Sybase environment I’d be tempted to suggest a stored procedure in the mysql database which can tell you total memory usage and how it breaks down, as doing this with a single SELECT is going to be tricky.

Then once that is done consider adding some extra variable to enable total memory usage to be controlled. I made a feature request for this at http://bugs.mysql.com/?id=64108. If you think this feature might interest you please let Oracle know.

A couple of years ago I posted about an unlucky encounter with my Thecus N5200 Pro. On the 1st January at 06:15 I had a New Year’s present. A disk died on me. The Thecus duly notified me by starting to beep.

I still have my RAID-6 setup, so there was no real problem: I have 2 redundant disks. I went and bought a replacement, fitted it, and the Thecus duly started rebuilding the RAID array, with an estimated time to finish of about 1 day (5x1TB disks, so a ~3TB array).

Disk prices have increased significantly since I originally bought my array, but buying a single disk was not a problem. However, while the array was rebuilding, or shortly afterwards, a second disk failed, just as happened 2 years ago, when I ended up losing my array because the RAID-5 setup did not have any further redundant disks. This time there was no problem: RAID-6 saved me and the photos and backup information I had on my Thecus, so I was happy, and I went out and bought another disk. Fitting it triggered the array rebuild, which completed successfully.

That led me to think. A lot of people push the move to external disks as a backup alternative. Certainly this gives you online information and generally works well. I am using this device at home to keep my photos and other files, and also to back up my main machine to a separate location. However, one thing strikes me as clear: replacing disks is expensive. Had I been using tapes for backups, throwing away the 2 tapes and buying new ones would have been much cheaper than buying 2 new hard disks for my array. Of course, a disk array does not provide you a site backup, and also does not provide a convenient way to store multiple backups over time. For that it is not yet cost effective.

So after this experience I am tempted to look and see if there are any small tape devices I could attach to my PC to give me these extra facilities. The hardware no doubt exists, but from previous searches it is expensive and out of the price range of the average home user. That is a shame. With a lot of people now dismissing tape devices as old-fashioned, failure of drives in a small array like my 5-disk one may turn out to be pretty expensive (in money) or pretty costly (in lost data). RAID-1 or RAID-5 sound great, but can you afford for a second disk to fail while the array is rebuilding after the first disk has failed? If you can not, then arrays of this type may not be for you, and may lead you to a false sense of security.

The title of my original post was Unlucky Encounter… So have I just been unlucky (again) or is this sort of problem something which has happened to more people?

I want to keep a record of the configuration of the MySQL servers I manage. The configuration of some servers differs from others and over time the configuration may vary, partly as a result of upgrades in the mysql version or the use of the particular mysql instance, so tracking this is important.

Configuration items in MySQL can be thought of in 2 separate parts: the static configuration files which determine the behaviour of the server when it starts up (my.cnf) and the running configuration of the server in question. The latter information is usually obtained by running SHOW GLOBAL VARIABLES and SHOW SLAVE STATUS if the server is a slave.

I’d also like to compare the 2 sets of configuration so I can see if a local change has been made to the running server which is not reflected in its configuration file. I might want to correct this, or at least be aware of it.

However, collecting and comparing this configuration information is not as easy to do completely as one would really like. There are various inconsistencies which make this task troublesome.

Getting the default configuration out of the system is hard: mysqld --help --verbose sort of shows you this information but you have to filter out the junk.

Use of dashes ('-') in variable names in my.cnf or command line parameter names vs '_' in SHOW GLOBAL VARIABLES.

5.5.16 seems to try to ‘partially’ startup when it shouldn’t. See bug#63187.

You can not query the default configuration information from the running server.

Some settings are incomplete (log_bin does not provide the full name of the binlog files set in my.cnf).

etc…

So all of these issues mean that if I want to look at the /etc/my.cnf files and compare them to the running configuration, it’s pretty hard to do well. That may not seem like such a big deal until you realise that many mysql servers may have an uptime of over a year. In that time the configuration may have evolved, and this may mean you need to bring down mysql to adjust its configuration, or alternatively see the difference and accept it’s not such a big issue and can be corrected the next time the server is restarted.
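To illustrate the kind of comparison I have in mind, here is a small shell sketch. The sed/awk normalisation of dashes to underscores and the sample input are my own invention; on a real server you would feed it /etc/my.cnf and the output of SHOW GLOBAL VARIABLES dumped in the same name=value form:

```shell
#!/bin/sh
# Sketch: normalise variable assignments from a my.cnf-style file
# (lowercase names, '-' turned into '_') so they can be compared
# line-by-line against SHOW GLOBAL VARIABLES output dumped in the same
# name=value form. The sample input below is invented for illustration.

normalise() {
    # keep only name=value lines, lowercase the name, turn '-' into '_'
    sed -n 's/^[[:space:]]*\([A-Za-z0-9_-]*\)[[:space:]]*=[[:space:]]*\(.*\)/\1=\2/p' |
        awk -F= '{ name = tolower($1); gsub(/-/, "_", name); print name "=" $2 }' |
        sort
}

printf '%s\n' '[mysqld]' 'max-connections = 500' 'innodb_buffer_pool_size = 4G' |
    normalise

# On a real server you would then compare, e.g.:
#   normalise < /etc/my.cnf > /tmp/cnf.txt
#   mysql -N -B -e 'SHOW GLOBAL VARIABLES' | awk '{print tolower($1)"="$2}' | sort > /tmp/run.txt
#   comm -3 /tmp/cnf.txt /tmp/run.txt
```

Even a crude script like this stumbles over the inconsistencies listed above, which is rather the point.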

I’ve brought this up with Oracle support but guess that many in the community are so used to this that they think it’s normal. If you manage several servers then it really is helpful to be able to collect this information and also see if servers run with the expected settings. Not doing so may mean that performance is affected and no-one notices.

So I’d like to ask that Oracle consider addressing this issue: remove some of the aforementioned inconsistencies, and perhaps also provide something like a SHOW DEFAULT SETTINGS or INFORMATION_SCHEMA.DEFAULT_SETTINGS output. If you are in a company which goes through a lot of change, including to the servers being managed, addressing this would make it much easier to ensure the servers are configured properly.

Note that software from other places makes this really easy, and that can be very handy. Postfix’s postconf shows the current configuration; when called with -n it shows only the non-default options, and postconf -d shows all default settings. So this certainly can be done.

Update: 2012-02-21

It seemed useful to actually add a list of bug reports and related posts.

On several occasions it may be necessary to do some tasks around the process of starting or stopping a MySQL server such as:

Perform a warmup of the database using custom scripts and external information that the init_file configuration may not have access to.

Change configuration settings such as innodb_log_file_size and adjust the on disk files that this needs.

Change configuration settings such as log_bin (I try to use a relative path to ensure that different instances use a common setting even if datadir is different, and changing from an absolute to a relative path often requires adjusting the binlog index file).

Shutting down MySQL may need to be delayed if the SQL thread has open temporary tables or open transactions are still being processed.

etc…

You may have others and many of these are likely to be site specific. I could ask Oracle to add lots of configuration options to cover many of these special cases, but that is unlikely to work as my needs may not match others’. However, asking them to add hooks to allow me to put in the infrastructure that I need, especially if those hooks are normally disabled and require minimal changes to the current init scripts, might be acceptable. That’s why I’m curious as to what others think about this.

Some may be asking: why do you need this at all? I have certainly seen several issues such as those indicated above which I need to resolve. Also the DBA is not the only person who may have to restart a server. Sometimes this may be done by a system administrator, and on some servers by a developer. These people are not necessarily expected to know about the specifics of one MySQL server compared to another, or to know the server’s current state. It is also true that often a server may go for a long time without needing to be restarted, so many configuration changes may be pending. They only need to be done when the server is restarted.

So I was considering the idea of some hooks which would work in a similar way to a system init script, probably composed of the following parts:

An /etc/mysql/init.d directory containing any scripts that might need to be run. They can be placed there manually or by system configuration tools.

/etc/mysql/rc-start and /etc/mysql/rc-stop directories which would have files named PRE-nn-scriptname and POST-nn-scriptname, linked to the ../init.d/scriptname file. Again this is similar to the system init scripts. The nn would, like the number in the system init scripts, define the relative ordering of the execution of any scripts in that directory.

A modification to /etc/init.d/mysql so that these scripts would be run (if present) at 4 stages:

pre-start: before starting mysqld

post-start: after starting mysqld

pre-stop: before stopping mysqld

post-stop: after stopping mysqld

Not absolutely necessary but for completeness a tool which would manage the linking / unlinking of these files.
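To make the idea concrete, here is a rough sketch of what the hook runner inside /etc/init.d/mysql might look like. The MYSQL_RC_DIR override and all names here are assumptions of mine, not an existing interface:

```shell
#!/bin/sh
# Sketch of the proposed hook runner as it might be called from
# /etc/init.d/mysql. The directory layout follows the suggestion above;
# MYSQL_RC_DIR and all names here are assumptions, not an existing interface.

run_hooks() {
    # $1: PRE or POST, $2: start or stop
    stage=$1 action=$2
    dir=${MYSQL_RC_DIR:-/etc/mysql}/rc-$action
    [ -d "$dir" ] || return 0
    # nn in PRE-nn-scriptname defines the relative ordering, as with rc.d,
    # because the glob expands in sorted order
    for script in "$dir/$stage"-[0-9][0-9]-*; do
        [ -x "$script" ] && "$script" "$action"
    done
    return 0
}

# In the init script, around the existing start/stop logic:
#   run_hooks PRE start;  ... start mysqld ...;  run_hooks POST start
#   run_hooks PRE stop;   ... stop mysqld ...;   run_hooks POST stop
```

Since run_hooks returns immediately when the directory is absent, the default behaviour of an unmodified system stays exactly as it is today.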

I can certainly see the use of something like this for my work, but do not expect someone like Oracle to take this on unless they see a clear business case or technical advantage. It would also allow Oracle to provide some “standard” scripts if they have seen the need from their customers, and the community could equally share scripts written for their specific needs which may also be useful to others.

If you have a single mysql instance to manage this may seem like overkill. That’s fine: by default it would not be used. However, as MySQL is gradually being used on larger sites, I believe this sort of functionality could be quite helpful: more people may be responsible for managing the servers, some changes may need to be postponed until later, and the change in the scripts I suggest would make management of the instances simpler.

Have you solved the problem already? If so how? If not does this seem like a reasonable idea?

This comment is in regard to this InnoDB blog page. Trying to post a message says you need to be logged in, but there’s no register/login page I could see…

Anyway the page talks about a new feature which allows you to save an InnoDB buffer pool on shutdown and load it again on startup, thus ensuring that once loaded the database will perform with this “hot” cache.

That sounds interesting as I have seen on numerous occasions that if the buffer pool is not warm then performance can be an order of magnitude worse.

This looks like a very welcome feature. However, a couple of things are not clear to me.

Having some example benchmark times of using this process, compared to trying to warm up the buffer pool by hand, would be useful. While this may be heavily dependent on database content, it would at least give us an idea of how much difference this may make.

On the servers I currently have to warm up, we find performance becomes acceptable when perhaps 20% of the buffer pool has been warmed up. The implementation behind the new functionality isn’t explained, but one option which might be useful would be to save and load only a configurable percentage of the most recently used pages, not 100% of the pages as this patch seems to do. This would save time on save and on load, and possibly be good enough for most people. Many servers I use have 40 GB or 90 GB pools, so loading files of this size into memory is going to take time. Thus having some control over how much of the buffer pool is loaded / saved seems like a good idea if this is possible.
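Some back-of-envelope arithmetic shows why a partial load appeals; the 100 MB/s sequential read rate is an assumption of mine, the pool size is from the servers mentioned above:

```shell
# Back-of-envelope: time to load a buffer pool sequentially from disk.
# The 100 MB/s read rate is an assumption; the 40 GB pool size is one of
# the real pool sizes mentioned in the post.
pool_gb=40
rate_mb_s=100
full_s=$(( pool_gb * 1024 / rate_mb_s ))   # full pool load, in seconds
partial_s=$(( full_s * 20 / 100 ))         # loading only the hottest 20%
echo "full load: ${full_s}s, 20% load: ${partial_s}s"
```

Roughly seven minutes for the full pool versus under a minute and a half for the hottest 20%, which is the sort of difference that matters when you are waiting to put a server back into service.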

I assume the buffer pool is loaded and saved sequentially (in large blocks), giving mysqld and the OS the chance to read/write the pages faster in larger chunks. Is this so? Certainly when I currently see mysqld shut down (in 5.1 and I believe 5.5 too) the shutdown times seem rather slow, and from what I can see the data does not appear to be written out quickly. Perhaps the shutdown generates a lot of random I/O. So moving to a faster linear read/write seems likely to be much more efficient.

There is no mention of how the content of the file is checked for “consistency” to make sure that invalid data won’t be loaded. I’m sure there is some checking but otherwise this feature may be a way to corrupt the database inadvertently if the buffer pool cache file gets corrupted.

RHEL 6 has been out for some time and promises some nice new features. Finally CentOS 6 has been released, so I was looking forward to getting it installed and upgrading my existing system.

This is currently running CentOS 5.6 and has quite a few services on it. The best way to upgrade is usually to do a fresh install and then migrate over existing services. That’s what I was planning to do. I only have a single PC, so I was trying to figure out the best way to go about this and at the same time minimise downtime.

I use LVM quite a lot. It makes worrying about which disk is used and where the partitions are located much less of an issue. My current setup is quite simple: 2 disks with 2 partitions each. sda1 holds /boot, sdb1 is free. sda2/sdb2 are configured in RAID-1 and hold a single volume group.

Plan A

My first plan was to use a free USB disk drive and install CentOS 6 onto that, leave some space on the USB disk and migrate the CentOS 5 volume group to the USB disk. Then I could move the CentOS 6 installation to the hard disks and complete the migration. That should have left me with the option during this process of booting CentOS 5 or CentOS 6 and migrating data from one system to another.

Plan A worked in the sense that I was able to complete the install of CentOS 6 on the USB disk, but I couldn’t get the installer to install grub on /dev/sdc (MBR): it insisted on doing it on sdc1, which of course the BIOS just ignored. So I couldn’t get CentOS 6 to boot. I could have spent some more time fiddling around with this but then came up with what I thought was a better plan B.

Plan B

Plan B was simpler: given my setup I had a free partition I could boot from (/dev/sdb1) and also an existing volume group where I could put the new partitions. There was plenty of free space in the existing Volume Group so this should have been much easier. However, this does not work. The CentOS 6 installer shows the disks correctly and it is possible to edit /dev/sdb1 to be /boot. It also shows you the previously created Volume Group. However, it does not show you the existing logical volumes inside that volume group: the list is empty. So I could not tell it to reuse my swap partition, or to use /home from my existing home partition, and I was also fearful of creating new partitions and having the installer perhaps recreate the VG and thus wipe out my existing system. So I have reported the problem to the CentOS team, though this looks like a bug in RHEL 6, and will play around further to see if I can get this to work.

This makes me wish I had a bigger environment at home. In a work environment you can afford to install a new server, and set it up, migrating existing functionality from the current one, and then finally switch over. I don’t have the hardware to do that so upgrading is quite a bit more work, at least as I want to minimise downtime.

So I will continue to work out how best to get the CentOS 6 install running and then migrate everything over.

2011-07-30 Update

After further investigation Plan A seemed to be a problem of my own making. The installer first shows you a list of disks and then asks which one you want to install the OS on, the others being called “data disks”. There’s an option here to install the boot loader. I missed this, and if you do then later you are not offered the possibility of installing the bootloader on that disk’s MBR. So human error, though I would prefer the install disk’s MBR to be offered as an option anyway, even if there’s a warning: “You indicated you did not want to install the boot loader on the MBR of this disk”.

While looking at partitioning I recently made a mistake which I guess can happen to others. Often this is due to not fully reading the documentation or scanning it too quickly and misunderstanding what’s being said.

So this post is to complain about the MySQL partitioning syntax and to warn others who may easily make the same mistake without realising.

First we probably need to ask why we are partitioning a table in the first place. The main reasons for this are I think:

to improve query performance

to reduce individual .ibd file sizes for large tables (if using innodb_file_per_table)

In my case I wanted to do both. I had several tables which store a large number of rows (batches of data) based on an incremental batch number. One of these tables was around 40 GB and had about 500,000,000 rows in it. When processing data in this table, often all the data from a particular batch run would be aggregated, with the resultant data being stored elsewhere.

So I knew I wanted to partition and performance of these aggregations would be improved as the time to table scan a batch would be reduced to the time to scan the partition rather than the whole table.

The primary key of these tables was already in the form ( batch_id, other_key ) so I really wanted to just partition by the batch_id key, using in my case 64 partitions. batch_id is defined as int unsigned NOT NULL.
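As a rough sanity check on why scanning a partition beats scanning the whole table, here is the arithmetic on the figures above, assuming the batches spread evenly over the 64 partitions (my assumption):

```shell
# Rough arithmetic on the figures above: average partition size and row
# count if the ~40 GB, ~500,000,000-row table is split into 64 partitions
# and the batches spread evenly (an assumption on my part).
table_gb=40; rows=500000000; parts=64
mb_per_part=$(( table_gb * 1024 / parts ))
rows_per_part=$(( rows / parts ))
echo "per partition: ~${mb_per_part} MB, ~${rows_per_part} rows"
```

So an aggregation that touches a single batch scans something closer to 640 MB than 40 GB.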

This is where I made the mistake. There are various ways to partition tables in MySQL and they are named: RANGE, LIST, HASH, KEY. Given the batch_id was a gradually increasing value and I didn’t want to modify the partitioning once it was created RANGE and LIST seemed inappropriate. Of HASH and KEY, I incorrectly assumed that HASH would do some complex hash function of my integer batch_id, and since batch_id was part of the key that KEY would be the right way to partition.

So I incorrectly defined the table like this:

ALTER TABLE xxxxx PARTITION BY KEY ( batch_id ) PARTITIONS 64;

When you read the documentation you see that for HASH-type partitioning you provide an expression which must be numeric, and it actually determines the partition to use by doing MOD( your_expression_value, number_of_partitions ).
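In other words, for an integer expression the partition chosen under HASH is just a modulus. A quick illustration with invented batch_id values:

```shell
# HASH-type partitioning: partition = MOD(expression_value, num_partitions).
# The batch_id values below are invented for illustration.
partitions=64
for batch_id in 1 64 65 130; do
    echo "batch_id=$batch_id -> partition p$(( batch_id % partitions ))"
done
```

Consecutive batch_id values thus land in consecutive partitions, which is exactly what you want when whole batches are aggregated together.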

KEY-type partitioning works differently and only allows you to provide column names; it then uses its own internal hashing function to generate the partition id.

So using PARTITION BY KEY ( numeric_column_name ) seems correct but is likely to be more expensive to calculate. For large tables this is likely to cause additional performance issues. It looks like I’m going to have to rebuild the tables I’ve just partitioned, and that’ll be another weekend of work.

A suggestion to those at Oracle is that:

PARTITION BY KEY ( numeric_column_name ) should be modified to behave like PARTITION BY HASH ( numeric_column_name ). However, as this is likely to cause on-disk incompatibilities during a version change if it were implemented, I’m guessing it just won’t happen, unless there’s some easy way to distinguish between the current behaviour and my proposed new behaviour.

The documentation should be made a little clearer and mention this obvious case. What I see with a lot of the MySQL documentation is that it documents technically how things are done rather than documenting the problem and then how to implement the solution. Since it’s likely that many people are going to partition on one of the columns, especially if a multi-column primary key is used, this use case should be made clearer.

If I had time I’d look at how partitioning is implemented in Oracle database, Sybase, Postgres or DB2 and see whether it’s just MySQL which has chosen these unfortunate keywords for defining their partitioning methods.

A few months ago I came across the embedded platform SheevaPlug and its brother GuruPlug. I was basically looking to move my Asterisk setup from my main server to a separate box as I’ve been unable to avoid some of the latency issues I’ve reported in previous posts and think it is likely to be related to other software running on the server.

I was thus looking for a small size computer which I could tuck out of the way and hopefully have a nice simple working configuration. I also wanted to move over to FreeSwitch if I had time later.

So I bought myself a GuruPlug Plus. The hardware itself is very tiny and it comes with only 500MB of built-in storage but does have space for a MicroSD card and also external e-SATA/USB drives. It looked economical and a good option.

However, to get it working for my intended use I needed to install more than the sample version of Debian that’s provided with the box, so I decided to try to install a larger Debian system on an 8GB MicroSD card. That’s enough for the single application I want to run.

I had decided to use the latest Debian version (Squeeze). I also wanted to use LVM as this makes it easier to adjust partition sizes later, and it was an option provided by the installer. I followed the instructions on the website but failed to make the plug bootable. The reason, after a bit of looking around, is that the uInitrd image was bigger than the 0x600000 size given as an example. It would have been helpful if not only the commands but also the uboot output had been shown, as this would have made it clearer. I took out the MicroSD card and on a normal Ubuntu server checked the device, which was recognised fine:

It seems the device mapper is not being started at the right time, and hence the root partition can not be found.

I tried to extend the rootdelay. That changes the error slightly, but it seems the real problem is that the device mapper is started too early, or the lvm commands are not run again to enable the swap/root partitions to be seen.

I’ve recently been trying to clean up quite an extensive set of puppet recipes I’d written. A suggestion that was made was to remove the data from the recipes, with a pointer to the new extlookup functionality: that is, keep data and code separate and make it easier to modify the behaviour without having to modify the code itself.

In my case I manage a few sets of database servers. Similar groups of boxes are configured the same way. One specific function I was looking at to improve was this snippet.

The intention here being that on the primary master the interface would be enabled and on the standby master the interface would NOT be enabled.

Note: this snippet applies to a pair of boxes but there are several other pairs to configure in the same way, each with their own specific settings.

The extlookup() functionality only allows a single “lookup value” to be searched, and specifies implicitly rather than explicitly where to look for this value. The snippet above has 2 “parameters”, and if I have several pairs of boxes: MDB_A, MDB_B, MDB_C (the name is not part of the domain or the hostname) then using the extlookup() functionality I’m going to have to set up a lot of data files, and this looks unwieldy, especially as I have other resources defined which take several parameters.

So I thought about how I would do this to make the lookup facility more generic and visible.

What seems better to me would be something which is a bit more generic and based on looking up a < “config_item”, “lookup_value” > pair in a single location to get a single value as now. That would make the extlookup() prototype probably change to something like:

parameters being:
[1] config_item ( grouping of values to lookup )
[2] array of values to apply the lookup on, done in order as now but more explicitly shown
[3] default value
[4] explicit array of locations of the data files.

The .csv files would have 3 columns: config_item,lookup_value,result_value

Also, while in this example above I can use $hostname or $domain as a retrieval key, in several other cases I probably can’t, and might need to look up another value such as server_type or db_name. Having an explicit [ … ] list where I can add any string I want to be looked up gives more flexibility and is clearer.

Using the current (puppet 2.6) mechanism would require me, if I understand it correctly, to configure different .csv files to be searched for each “configuration parameter”. I may also need in several places to override $extlookup_precedence. In any case this can’t be overridden several times within the same module, which is what I would need if I want to look up different variables.

That doesn’t strike me as being very useful. It’s also unclear from the current extlookup() call itself where the data is being retrieved from. Using these external variables seems to me to make the calls behave like magic.

So my question was to ask others who are using the extlookup() functionality how they cope with more complex settings than system settings which depend on, say, $hostname or $domain, and whether my proposed extension makes any sense.

Thanks for your thoughts.

2011-05-12, update

Another snippet of something I’d like to do which seems easier to fulfil using my proposed extlookup() setup.

or perhaps the following if you want to only check for boxes in your domain.

$email = extlookup( 'notification_email', [ $hostname, $domain ] )

but most importantly with a single data file containing:

notification_email,host01,address1@example.com
notification_email,host02,address2@example.com
notification_email,example.com,default_email@example.com
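To sketch how such a lookup over a single 3-column file might behave, here is a small shell mock-up. The lookup() helper is invented by me purely to illustrate the semantics, but the data matches the file above:

```shell
#!/bin/sh
# Sketch of the proposed lookup semantics over a single 3-column CSV
# (config_item,lookup_value,result_value). The lookup() helper is an
# invention to illustrate the idea; the data matches the example file.
lookup() {
    # $1: config_item, $2: csv file, remaining args: lookup values in order
    item=$1 file=$2; shift 2
    for key in "$@"; do
        value=$(awk -F, -v i="$item" -v k="$key" \
            '$1 == i && $2 == k { print $3; exit }' "$file")
        if [ -n "$value" ]; then
            echo "$value"
            return 0
        fi
    done
    return 1
}

csv=$(mktemp)
cat > "$csv" <<'EOF'
notification_email,host01,address1@example.com
notification_email,host02,address2@example.com
notification_email,example.com,default_email@example.com
EOF

# host-specific entry wins; an unknown host falls back to the domain default
lookup notification_email "$csv" host02 example.com
lookup notification_email "$csv" host99 example.com
```

The lookup values are tried in order, so a host-specific row wins over the domain default, which is exactly the precedence I want.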

Various people have complained about Linux swapping unexpectedly on boxes running mysqld, when apparently mysqld was not using all the memory and there was quite a bit of free memory available.

There’s also an article by Jeremy Cole. However, his solution requires a one-line change to mysqld_safe which, while small, does not work very well if you have to maintain a large number of servers and are using packages, as a package upgrade will overwrite the modified mysqld_safe and then restart mysqld with the unmodified script. This leads to the need to repatch the script and then restart mysqld again. Not very helpful.

So I was looking for another solution and came up with this option which basically requires a minor change to /etc/my.cnf and the use of a small shell wrapper script. The change to my.cnf is simply to add the following to the [mysqld_safe] section:

[mysqld_safe]
...other options you might be using...
ledir=/directory/containing/mysqld_using_numactl
# wrapper around mysqld
mysqld=mysqld_using_numactl

The wrapper script is little more than:

#!/bin/sh
# work around to startup mysqld using numactl
numactl=/usr/bin/numactl    # Adjust the path as needed
mysqld=/usr/sbin/mysqld     # Adjust the path as needed
# use exec to avoid having an extra shell around.
exec $numactl --interleave all $mysqld "$@"

This minor change means that you can still use the standard mysql scripts and they will work even after an upgrade to a new rpm or other type of package. So if you’ve seen your mysql server swapping when you think you have the memory settings correctly adjusted and have been trying to figure out how to easily adjust the configuration to try out numactl to see if this improves things this seems to be an easier way of doing it.