I don’t wish to steal Peter’s thunder (he’s worked enormously hard on this project), but I’m going to take this opportunity to post a couple of snippets that I took away from the first benchmark, and let Peter explain them, and the other tables, in his coming posts.

First, let’s check which were the hottest parts of the server, as far as mutexes, IO (more of which has been instrumented with Maria in the tree since my tests here, in fact partly as a result of them, I think), conditions, rw-locks and so on are concerned, whilst I was running the benchmark:
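If you want to pull a summary like this yourself, a query along these lines against the global wait summary table should do it (table name as in the early PERFORMANCE_SCHEMA trees – a sketch, not the exact query I ran):

```sql
-- Top 10 wait events server-wide, heaviest first
SELECT EVENT_NAME,
       COUNT_STAR     AS occurrences,
       SUM_TIMER_WAIT AS total_wait
  FROM performance_schema.EVENTS_WAITS_SUMMARY_GLOBAL_BY_EVENT_NAME
 ORDER BY SUM_TIMER_WAIT DESC
 LIMIT 10;
```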

The times that are being recorded above are actually CPU cycles; if you know the clock speed of your CPUs you can convert those to microseconds fairly easily – or you can tell the PERFORMANCE_SCHEMA to record in other units instead.
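Switching units is done through the SETUP_TIMERS table – something along these lines, assuming the table and timer names of the trees of this era:

```sql
-- See which timer is currently in use for wait events
SELECT * FROM performance_schema.SETUP_TIMERS;

-- Record wait times in microseconds rather than CPU cycles
UPDATE performance_schema.SETUP_TIMERS
   SET TIMER_NAME = 'MICROSECOND'
 WHERE NAME = 'wait';
```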

Next, let’s take a look at what the thread that was inserting the 1 billion rows had been doing. Yes folks, this takes SHOW PROFILES and SHOW ENGINE INNODB MUTEX to a whole different level (it will pretty much make them defunct, imho, if we can get InnoDB using this instrumentation).
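The per-thread breakdown comes from the thread-level summary table; a sketch of the kind of query involved (the THREAD_ID of 15 here is purely illustrative – look up the real one for your session first):

```sql
-- What a single thread has been waiting on, heaviest first
SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
  FROM performance_schema.EVENTS_WAITS_SUMMARY_BY_THREAD_BY_EVENT_NAME
 WHERE THREAD_ID = 15
 ORDER BY SUM_TIMER_WAIT DESC
 LIMIT 10;
```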

I’ve had the pleasure of working with the team that writes the MEM software (the “Enterprise Tools” team, internally and lovingly known as the “Merlin Team”, the codename that has survived various renames of the product!) for a little over 3 years now. I can’t say I was there at its conception, but I started working with them before the initial release of the product, and have watched (and I like to think helped shape) the product very closely whilst being the “Support Coordinator” for the Support Team for MEM. It’s a great product already, but we have many ideas – it’s going to be an awesome product in the future.

Along the way I’ve helped to write many of the graphs and rules that are released for the MySQL Enterprise Monitor within the default Advisor bundles (along with Andy Bang, one of the original team behind the concept) and hope to give MEM users some insights into how they can extend MEM to suit their own needs.

For example, many users have asked us to add disk space monitoring – we’re working towards making it more seamless in the next releases (2.0 has taken an interim step for this) – but few know that you can already extend the Monitor to do this within the new 2.0 release:

MEM Disk Monitoring

Come to the talk to find out how – and more, like collecting your own data points (from various sources), graphing them and/or alerting on them! 🙂

So I saw the tokutek challenge, and wondered to myself how Maria would get along with it. I duly downloaded a 6.0 tree, and the iiBench code, tinkered with it to make it actually build, and fired things up.

I watched it closely, for about a day, then got bored and forgot about it. I remembered today that I should take a look!

Graphs: CPU Usage (Quad Core) / Average Rows Per Second Inserted / Load Averages

You can see, in just over a day the IO load became too heavy to process efficiently.

I tinkered with Maria right from the start though: I wanted to see what a longer checkpoint interval would give, so I increased it to every 5 minutes – obviously this doesn’t seem great. 🙂 I also wanted to use the same page size as InnoDB, out of morbid curiosity. Here’s the my.cnf:
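The original listing hasn’t survived here, but as a sketch, the two settings described above would look something like this (variable names as in the Maria 6.0 tree; values inferred from the description, not the original file):

```ini
[mysqld]
# Checkpoint every 5 minutes rather than the 30 second default
maria_checkpoint_interval = 300
# Match InnoDB's 16K page size (Maria defaults to 8K)
maria_block_size = 16384
```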

I added a new custom graph for MEM, to track how the Maria Page Cache gets utilized:

Maria Page Cache Usage

I’ll be making a couple more for Maria as well – including the easy read and write physical/logical request counters from SHOW GLOBAL STATUS (to be released with MEM once Maria is ready; let me know if you want the custom graphs beforehand).
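For reference, those counters come straight out of SHOW GLOBAL STATUS – assuming the Maria_pagecache_* naming of that tree, the logical requests versus the physical reads/writes are:

```sql
-- Logical (read/write requests) vs physical (reads/writes)
-- hits against the Maria page cache
SHOW GLOBAL STATUS LIKE 'Maria_pagecache%';
```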

The server is RHEL5, Quad Xeon, with 16G RAM, and a 4 disk 10krpm RAID 10 array for the /data0 mountpoint (although using ext3, along with the noop scheduler). Taking a look at iostat when I came back to it, it’s clear that this was my barrier (well, the IO wait in the CPU graph is a pretty good indicator as well, eh!):

Maria does not make use of bulk_insert_buffer_size, unfortunately, when TRANSACTIONAL = 1. It does when TRANSACTIONAL = 0 however. It also doesn’t use something like InnoDB’s Insert Buffer, so it’s clear that there is probably some way to go when it comes to bulk inserts within Maria for the TRANSACTIONAL mode.
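For anyone wanting to try the non-transactional path, TRANSACTIONAL is just a table option – a minimal illustration (table and column names are my own, not from the benchmark schema):

```sql
-- A Maria table in non-transactional mode, where
-- bulk_insert_buffer_size does get used
CREATE TABLE t1 (
  id  BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  val INT
) ENGINE=MARIA TRANSACTIONAL=0;

-- Or flip an existing table over:
ALTER TABLE t1 TRANSACTIONAL=0;
```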

Maria does support concurrent inserts with TRANSACTIONAL = 1, however this is disabled when the table has an AUTO_INCREMENT column (or FULLTEXT/GIS indexes) – which makes this benchmark difficult in that respect too.

The IO overhead for the log files (on cciss/c0d0 above) was not huge, so it will be interesting to see how this affects things (I’ll report back). This should show how just the new page cache works out as well.