Side load may massively impact your MySQL Performance

When we look at benchmarks, we typically run a stable workload and run it in isolation – nothing else is happening on the system. This is not, however, how things work in the real world, where load varies significantly and many things can be happening concurrently.

It is very typical to hear complaints about MySQL interactive performance – serving simple, standard web traffic is drastically impacted when heavy queries run in the background or a backup is taken with mysqldump – a lot more than you would expect from simple resource competition. I finally found some time to look further into this problem and see what can be done to remedy it.

We designed the benchmark the following way: there is a small table (200MB) which fits completely in the InnoDB buffer pool (512MB), and a larger 4GB table which does not. We run uniform sysbench OLTP on the small table and mysqldump on the larger table – first individually, then concurrently.
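A setup along these lines can be sketched with sysbench 0.4-style commands (the database name, credentials, row counts, and the `--single-transaction` flag here are illustrative assumptions, not the exact invocations used for this post; both commands obviously require a running MySQL server):

```shell
# Prepare a small sysbench table (row count chosen to land around 200MB
# on disk -- a rough assumption, adjust for your row size)
sysbench --test=oltp --mysql-user=root --mysql-db=sbtest \
         --oltp-table-size=1000000 prepare

# Run uniform OLTP against the small table
sysbench --test=oltp --mysql-user=root --mysql-db=sbtest \
         --oltp-table-size=1000000 --oltp-dist-type=uniform \
         --max-time=600 --max-requests=0 run

# Meanwhile, in another session, dump the larger table and time it
time mysqldump --single-transaction sbtest large_table > /dev/null
```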

In a perfect world, what we would like to see is performance staying about the same when we run the tests concurrently: sysbench should run completely in memory, using a lot of CPU but no disk IO, while mysqldump should have relatively little CPU need and be bound by disk. Also, these are just two “threads” running on a 4-core system, so there should be plenty of CPU to spare.

We’re using Percona Server 5.5.15 for this test, with a buffer pool size of 512MB and innodb_flush_method=O_DIRECT.
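For reference, the relevant part of the configuration would look something like this in my.cnf (a minimal sketch showing only the settings mentioned above):

```ini
[mysqld]
innodb_buffer_pool_size = 512M
innodb_flush_method     = O_DIRECT
```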

Baseline Run: When we run the tests individually, sysbench gives about 330 req/sec and mysqldump of the large table completes in about 95 seconds. If we run them concurrently, after the system reaches a steady state we get about 2 req/sec, and mysqldump takes about 180 seconds.

Yes, you read that right. Performance of sysbench OLTP on the small table drops more than 150 times when the heavy mysqldump is running concurrently. mysqldump itself also slows down about 2x.

What is going on here? To understand it, we should take a look at the buffer pool contents.

When we run sysbench OLTP on its own, the primary key of the table fits completely in the buffer pool. However, when mysqldump runs concurrently, it reads so many pages from disk that it pushes most of the smaller table out of the buffer pool, with only 12MB remaining. This makes the workload extremely IO bound, hence such a drop in performance.
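One way to observe this yourself is to break down the buffer pool contents by table and index. The query below is a sketch against the INNODB_BUFFER_PAGE table that MySQL 5.6 added to INFORMATION_SCHEMA (Percona Server 5.5, used in this post, exposes similar information through its own INFORMATION_SCHEMA tables, so check what your build provides):

```sql
-- How many pages, and how much data, each index occupies in the buffer pool
SELECT table_name, index_name,
       COUNT(*)                              AS pages,
       ROUND(SUM(data_size) / 1024 / 1024, 1) AS data_mb
FROM information_schema.innodb_buffer_page
WHERE table_name IS NOT NULL
GROUP BY table_name, index_name
ORDER BY pages DESC;
```

Running it before and after the mysqldump makes the eviction of the small table’s pages plainly visible.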

The performance of mysqldump is impacted too, because we now have two threads competing for what is a single hard drive on this test system.

It is worth noting that MySQL actually uses midpoint insertion for its buffer pool replacement policy. Unfortunately, by default it is configured in a way that makes it quite useless. Blocks are indeed first placed at the head of the “old” sublist, which means they should not push out any hot data in the “young” sublist. However, when you’re doing a mysqldump (or running some complex batch query) you are likely to access the data on a page multiple times before being done with it for good. Because there are several accesses, the page gets moved to the young sublist almost immediately, placing high pressure on the buffer pool.
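The interaction between midpoint insertion and innodb_old_blocks_time can be illustrated with a toy model (a deliberately simplified sketch of the idea, not InnoDB’s real algorithm – the capacities, page names, and timings are all made up):

```python
# Toy midpoint-insertion LRU: the list is split into a "young" (hot) and an
# "old" sublist. New pages enter at the head of "old"; a re-access promotes
# a page to "young" only once old_blocks_time_ms has passed since its first
# access. This is an illustration of the concept, not InnoDB's code.

class MidpointLRU:
    def __init__(self, capacity, old_blocks_time_ms=0, old_pct=37):
        self.capacity = capacity
        self.old_blocks_time = old_blocks_time_ms
        self.old_capacity = max(1, capacity * old_pct // 100)
        self.young = []          # index 0 = hottest
        self.old = []            # index 0 = newest old page
        self.first_access = {}   # page -> time of first access (ms)

    def access(self, page, now_ms):
        if page in self.young:                      # already hot: refresh
            self.young.remove(page)
            self.young.insert(0, page)
        elif page in self.old:                      # re-access of an old page
            if now_ms - self.first_access[page] >= self.old_blocks_time:
                self.old.remove(page)               # promote to young
                self.young.insert(0, page)
        else:                                       # miss: insert at midpoint
            self.first_access[page] = now_ms
            self.old.insert(0, page)
            self._shrink()

    def _shrink(self):
        # Demote the young tail if young outgrew its share, then evict
        # from the tail of the old sublist if the pool is over capacity.
        while len(self.young) > self.capacity - self.old_capacity:
            self.old.insert(0, self.young.pop())
        while len(self.young) + len(self.old) > self.capacity:
            victim = self.old.pop()
            self.first_access.pop(victim, None)

def run(old_blocks_time_ms):
    pool = MidpointLRU(capacity=10, old_blocks_time_ms=old_blocks_time_ms)
    # Warm up: pages h0..h4 are accessed twice, far apart in time -> hot
    for t, p in enumerate(["h%d" % i for i in range(5)] * 2):
        pool.access(p, now_ms=t * 2000)
    # Scan: 20 one-shot pages, each touched twice within the same millisecond
    for i in range(20):
        pool.access("s%d" % i, now_ms=100000 + i)
        pool.access("s%d" % i, now_ms=100000 + i)
    # How many hot pages survived the scan?
    return sum(1 for p in pool.young + pool.old if p.startswith("h"))

print("hot pages surviving, old_blocks_time=0:   ", run(0))
print("hot pages surviving, old_blocks_time=1000:", run(1000))
```

With old_blocks_time at zero, the scan’s double accesses promote every scanned page into the young sublist and the hot pages are washed out; with a 1000ms window, the scan pages never leave the old sublist and all the hot pages survive.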

There is an ingenious feature to deal with this problem, though – you just have to enable it separately. The variable innodb_old_blocks_time specifies the number of milliseconds which must pass after a block’s first access before it can be moved to the young sublist. In typical cases like mysqldump, all accesses to the majority of pages are concentrated within a very small period of time, so setting innodb_old_blocks_time to some value will prevent important data from being pushed out of the buffer pool.
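innodb_old_blocks_time is a dynamic variable, so you can change it on a live server without a restart (add it to my.cnf as well so the setting survives restarts):

```sql
-- Takes effect immediately for subsequent page accesses
SET GLOBAL innodb_old_blocks_time = 1000;

-- Verify the setting
SHOW GLOBAL VARIABLES LIKE 'innodb_old_blocks_time';
```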

Let’s repeat the benchmark with innodb_old_blocks_time=1000, which corresponds to 1 second.

Run separately, sysbench gives about 330 req/sec and mysqldump takes about 95 seconds – the same as before. Note that we ran this test on a virtualized system, so we would not be able to measure small variances in performance reliably.

Running sysbench and mysqldump concurrently now gives about 325 req/sec for sysbench and some 100 seconds for mysqldump – a dramatic improvement of over 150x for sysbench, with results now in line with what you would expect.

As you can see, the small table’s PRIMARY KEY (which is what the benchmark uses) is now not pushed out of the buffer pool at all.

For advanced tuning, you might also look into changing how the buffer pool is split into young and old sublists via the innodb_old_blocks_pct variable, though we did not need to do that in this case.
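For example, shrinking the old sublist reserves more of the pool for hot data. The default is 37%; the value below is purely illustrative:

```sql
-- Old sublist gets 20% of the LRU list instead of the default 37%
SET GLOBAL innodb_old_blocks_pct = 20;
```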

I’m not sure if there are any bad side effects from setting innodb_old_blocks_time to a non-zero value; if not, I would strongly suggest changing the default from zero in MySQL 5.6, as it would offer a much better “out of the box” user experience.

Summary: As we can see, in the default configuration the MySQL buffer pool can easily be washed away by large table scans or heavy batch jobs. If this happens, a workload which is normally in memory becomes disk IO bound, which can slow it down more than 100 times. The solution is rather easy, though: setting innodb_old_blocks_time to 1000 or another meaningful number is an easy remedy for this problem.

I want to thank Ovais Tariq for doing a lot of heavy lifting running benchmarks for this post.

About Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.

Comments

Thanks for the insight, very helpful. Though on my system here (MySQL Community Edition 5.1.58) I cannot set this variable; mysql just complains about “Unknown system variable ‘innodb_old_blocks_time'”. A “SHOW VARIABLES LIKE ‘innodb_old_blocks_time';” returns an empty set. What am I doing wrong here?