This, primitively, tells Innodb that it should start the Table monitor. The monitor itself dumps the contents of the data dictionary to the MySQL error log, which looks something like this:


===========================================
090420 12:09:32 INNODB TABLE MONITOR OUTPUT
===========================================
--------------------------------------
TABLE: name SYS_FOREIGN, id 0 11, columns 7, indexes 3, appr.rows 1
  COLUMNS: ID: DATA_VARCHAR DATA_ENGLISH len 0;
   FOR_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
   REF_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
   N_COLS: DATA_INT len 4;
   DB_ROW_ID: DATA_SYS prtype 256 len 6;
   DB_TRX_ID: DATA_SYS prtype 257 len 6;
  INDEX: name ID_IND, id 0 11, fields 1/6, uniq 1, type 3
   root page 46, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  ID DB_TRX_ID DB_ROLL_PTR FOR_NAME REF_NAME N_COLS
  INDEX: name FOR_IND, id 0 12, fields 1/2, uniq 2, type 0
   root page 47, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  FOR_NAME ID
  INDEX: name REF_IND, id 0 13, fields 1/2, uniq 2, type 0
   root page 48, appr.key vals 1, leaf pages 1, size pages 1
   FIELDS:  REF_NAME ID
--------------------------------------

Basically, you can see every table, along with every column and every index in that table. Once you have the output, you just DROP the table and the monitor stops. It’s been around forever, so it should be stable, right?
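For completeness, starting and stopping the monitor looks like this (a minimal sketch of the documented behavior; the column definition is arbitrary, only the magic table name matters):

CREATE TABLE innodb_table_monitor (a INT) ENGINE=InnoDB;

-- ...wait for the output to show up in the error log...

DROP TABLE innodb_table_monitor;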

So, back to my customer. It may be important to note that this customer had a few hundred tables. I scheduled a time with the customer during off hours to connect to their master and collect the output, so we could confirm whether Innodb really did have all the indexes or not. I collected the data just fine, and I found that the indexes Innodb knew about were indeed correct, and that the error messages the customer saw were apparently only transitory.
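As an aside, one easy way to get the MySQL-side list of indexes for that kind of comparison is INFORMATION_SCHEMA.STATISTICS (a sketch, not necessarily what I used; ‘customer_db’ is a hypothetical schema name):

-- Every index the MySQL layer knows about, to diff against
-- what the Table monitor says Innodb has:
SELECT DISTINCT TABLE_NAME, INDEX_NAME
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'customer_db'
ORDER BY TABLE_NAME, INDEX_NAME;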

However, as I was writing an email to the customer to explain this, I got an email from him saying that he had seen a big spike in io-wait on the server during the time I ran the Innodb Table monitor (it ran for just a minute or so), and that he was very glad we had waited until after hours. The server itself was fine, and there didn’t seem to be any lasting adverse effects from running the monitor, but something clearly was a lot busier than expected during execution.

He had data collected with Percona Toolkit’s pt-stalk during that window and asked me to take a look and see what had happened. I looked through the data and found not only high io-wait on the system, but also over a hundred user queries in SHOW PROCESSLIST stuck in the ‘Opening tables’ state. Uh-oh.
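If you want to spot that symptom quickly yourself, a count of threads in that state works (assuming MySQL 5.1 or later, where INFORMATION_SCHEMA.PROCESSLIST is available):

-- Connections piling up in 'Opening tables' while the Table monitor
-- runs are a hint that something is holding the dictionary mutex:
SELECT COUNT(*) AS stuck_threads
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE STATE = 'Opening tables';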

I dug through the source code and found a few interesting things about the Table monitor:

It holds the dict_sys->mutex for the entire time it runs

Before outputting the information for each table, it calls dict_update_statistics and appears to force table statistics to be regenerated for each table

It’s possible I’m mistaken, but it sure looks like dict_update_statistics is called twice for each table, at least in the latest Percona Server 5.1: once in storage/innodb_plugin/dict/dict0load.c (line 226), and again in storage/innodb_plugin/dict/dict0dict.c (line 4883)

So, the io-wait spike is explained by table statistics being regenerated, and the SHOW PROCESSLIST connections stuck in ‘Opening tables’ were waiting for the dict_sys->mutex, which is required for opening tables in Innodb.
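You can also see the waiting directly: while the monitor runs, the SEMAPHORES section of SHOW ENGINE INNODB STATUS will typically show threads waiting on a mutex created in dict0dict.c, which is the dict_sys->mutex (the exact output varies by version):

-- Check the SEMAPHORES section for waiters on the dictionary mutex:
SHOW ENGINE INNODB STATUS\G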

It would be better for the Table monitor if:

Table stats weren’t regenerated unless it was necessary

dict_sys->mutex weren’t such a big hammer

These points are probably moot, since I don’t think most people use this feature very often.


Jay joined Percona in 2011 after 7 years at Yahoo, working in a variety of fields including High Availability architectures, MySQL training, tool building, global server load balancing, multi-datacenter environments, operationalization, and monitoring. He holds a B.S. in Computer Science from Rochester Institute of Technology.

2 Comments

I was searching for the dict_sys mutex and this post was one of the results. I stumbled upon the dict_sys mutex in another place as well: both row_import_tablespace_for_mysql and fsp_get_available_space_in_free_extents (the first called for ALTER TABLE and the second for SHOW TABLE STATUS-like operations, I guess) can get into contention on it.

A few bugs exist on bugs.mysql.com for this, e.g. http://bugs.mysql.com/bug.php?id=54538, though that one also touches on other areas affected by that mutex.