
I have a MySQL DB with mostly InnoDB tables that I'm fairly certain isn't tuned properly based on what's happening, but I don't have enough know-how to pinpoint what to change. The server is an 8G 32-bit Linux system with the following in my.cnf:


Is the server dedicated only to MySQL, or are you running other services (such as Apache) on it?
– Michael - sqlbot, Mar 23 '13 at 0:56

Unfortunately the whole stack is on this system, which includes Apache.
– user21649, Mar 23 '13 at 1:10

I also just updated the original post to reflect the fact that it's an 8G system (not 4G like I originally posted).
– user21649, Mar 23 '13 at 1:13

Where are you seeing OOM messages? Is this syslog messages from the kernel (e.g. in /var/log/messages or /var/log/syslog), or messages in the MySQL error log itself?
– Michael - sqlbot, Mar 23 '13 at 1:59

Both the segfault (error 11) and OOM errors are from MySQL directly.
– user21649, Mar 23 '13 at 2:05

1 Answer

There's nothing really standing out as being poorly tuned. The innodb_buffer_pool_size is, quite possibly, the single most important configuration parameter for optimum performance, and the general rule is that you want it as large as possible, as much as 80% of system memory on a dedicated server (unless, of course, you have a very small data set)... but on a system where you're sharing resources with anything else, and especially Apache, it has to be much, much smaller.

In your case, 2G, or 25% of system memory, seems like it should be perfectly reasonable in theory, but in practice it may still be too much.

The memory for the InnoDB buffer pool is requested from the operating system in a single block when MySQL starts, and this block never grows or shrinks... so changing this parameter will, without exception, change the total baseline amount of memory MySQL requires, with no connection at all to the size of your data set or your load. Of course, reducing the size of the InnoDB buffer pool will free up some memory but will also negatively impact performance.
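To put rough numbers on that, here's a minimal back-of-the-envelope sketch. The 2G buffer pool figure comes from this question, but the other values are hypothetical placeholders; substitute the actual settings from your my.cnf:

    GB = 1024 ** 3
    MB = 1024 ** 2

    innodb_buffer_pool_size = 2 * GB    # from the question: fixed block, allocated at startup
    key_buffer_size         = 64 * MB   # hypothetical placeholder
    per_connection_buffers  = 3 * MB    # hypothetical: sort, read, join buffers, etc.
    max_connections         = 150       # hypothetical placeholder

    baseline   = innodb_buffer_pool_size + key_buffer_size
    worst_case = baseline + max_connections * per_connection_buffers

    print(f"baseline:   {baseline / GB:.2f} GB")
    print(f"worst case: {worst_case / GB:.2f} GB (every connection using its buffers at once)")

Even with these made-up numbers, the point is visible: the buffer pool dominates the baseline, and the per-connection allocations on top of it are what make the total hard to pin down.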

Many of MySQL's other memory needs are handled by allocating and freeing memory from the OS as needed, and the bottom line is that it will request exactly as much memory from the operating system as it needs in order to handle its workload:

"The truth is that you can’t put an upper bound on MySQL’s memory consumption. It is not a tightly regulated database server that controls memory allocation."

You can, and should, control the MaxRequestWorkers setting (called MaxClients on Apache 2.2 and earlier) so that your server does not spawn so many children that it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process by looking at your process list with a tool such as top, and divide that into your total available memory, leaving some room for other processes.

Well, "simple" seems like a bit of an overstatement, since if you don't allow sufficient Apache worker threads to meet your application's demands, your performance will deteriorate here, too.
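As a sketch of that calculation, assuming a Linux host where the Apache processes are named apache2 or httpd; the MySQL and OS figures below are placeholders, not measurements:

    import subprocess

    def apache_rss_kb():
        """Collect the RSS (in kB) of each Apache process via ps.
        Note: RSS double-counts pages shared between forked children,
        so this overestimates the true per-process cost."""
        out = subprocess.check_output(["ps", "-eo", "rss,comm"], text=True)
        sizes = []
        for line in out.splitlines()[1:]:          # skip the header row
            parts = line.split(None, 1)
            if len(parts) == 2 and parts[1].strip() in ("apache2", "httpd"):
                sizes.append(int(parts[0]))
        return sizes

    sizes = apache_rss_kb()

    total_ram_kb  = 8 * 1024 * 1024         # 8G system, per the question
    mysql_kb      = int(2.5 * 1024 * 1024)  # hypothetical: buffer pool plus overhead
    os_reserve_kb = 512 * 1024              # hypothetical headroom for the OS and everything else

    available_kb = total_ram_kb - mysql_kb - os_reserve_kb
    if sizes:
        avg_kb = sum(sizes) / len(sizes)
        print(f"average Apache process: {avg_kb / 1024:.0f} MB")
        print(f"suggested MaxRequestWorkers: {int(available_kb / avg_kb)}")
    else:
        print("no Apache processes found")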

It is, of course, very bad for a server running MySQL or Apache to start swapping; however, this doesn't mean you don't need swap space... you should have it for emergencies (light swapping is much better than crashing, after all)... so if you don't have it, you should add some... and if you do have it -- and it's getting maxed out, too -- then you actually already had a significant problem on your hands before the crashes began.
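On Linux, a quick way to check whether swap is configured at all is to read /proc/meminfo, where the kernel exposes these figures; a minimal sketch:

    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            meminfo[key] = int(rest.split()[0])   # values are reported in kB

    if meminfo.get("SwapTotal", 0) == 0:
        print("no swap configured -- consider adding some for emergencies")
    else:
        used = meminfo["SwapTotal"] - meminfo["SwapFree"]
        print(f"swap in use: {used} kB of {meminfo['SwapTotal']} kB")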

I recently wrote a simple stored procedure intended to crash MySQL by maxing out the system's memory, only to find that "merely" running out of memory is not actually enough to crash MySQL. Review of the source revealed that many out of memory events are handled quite gracefully, with only (for example) a query failing, not a complete meltdown... but the fact that many such events are handled well does not by any means suggest that all such events are, or even necessarily could be.

The bottom line is that unless you can find some cruft, you may simply be trying to do too much with too little... but activating some swap space (if you haven't already), then trying to tame the Apache children, and then reducing the buffer pool size would be the places I would start.

Thanks for the informative answer! I did add swap already as you suggested, and the Apache children were already under control. But reducing the buffer pool, which makes total sense now that I think about it, is not something I had considered. I'll pursue that as well, thanks!
– user21649, Mar 23 '13 at 8:01