Fun With 9.7 Locktimeout Analysis

I first addressed the new 9.7 methodology for deadlock and locktimeout analysis in my post Analyzing Deadlocks – The New Way. I've been playing with it a bit, and wanted to share what I've found. So far, I like it. I can't speak to overhead yet, but it integrates lock timeout analysis more tightly than the previous methods did. I went out to take a look at one database I support that generally runs pretty smoothly, thanks in part to a ridiculously oversized database server.

I used the SQL from my own original post on the topic to see what was going on. First:
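A summary query along these lines shows the overall breakdown by event type. This is just a sketch: I'm assuming the LOCK_EVENT table created in the DBA schema by EVMON_FORMAT_UE_TO_TABLES, and your schema and column names may differ.

```sql
-- Sketch: count locking events by type from the formatted
-- event-monitor table (DBA.LOCK_EVENT is an assumed name)
select substr(event_type,1,18) as lock_event,
       count(*) as count
from DBA.LOCK_EVENT
group by event_type
order by event_type
with ur
```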

The good news was that there were no deadlocks over a couple of weeks. The bad news? 600 lock timeouts. Eek, that seems a bit high. So I dug further to see what times we were seeing issues. Was it peak volume or during nighttime maintenance?
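A time breakdown can answer that question. Something like the following would do it, again as a sketch: the EVENT_TIMESTAMP column and the 'LOCKTIMEOUT' event-type value are assumptions based on the standard event-monitor table layout.

```sql
-- Sketch: bucket lock timeouts by day of week and hour
-- to spot when the spikes occur
select dayname(event_timestamp) as day,
       hour(event_timestamp) as hour,
       count(*) as count
from DBA.LOCK_EVENT
where event_type = 'LOCKTIMEOUT'
group by dayname(event_timestamp), hour(event_timestamp)
order by 3 desc
with ur
```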

That information immediately made me feel better. The spikes are Saturday mornings in the midnight hour. That's when I do my runstats and reorgs, along with an rbind. To confirm my suspicion of the connection, I took a look at the tables involved.

-bash-3.2$ db2 "select substr(lp.table_schema,1,18) as table_schema,
    substr(lp.table_name,1,30) as table_name,
    substr(le.event_type,1,18) as lock_event,
    count(*)/2 as count
  from DBA.LOCK_PARTICIPANTS lp, DBA.LOCK_EVENT le
  where lp.xmlid=le.xmlid
  group by lp.table_schema, lp.table_name, le.event_type
  order by lp.table_schema, lp.table_name, le.event_type
  with ur"

I was expecting the row with dashes for table_schema and table_name; half of each lock timeout is always reported this way. However, we should be getting values for the other side of those. So I decided to take a look at the statements involved, and sure enough I found this:
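Pulling the statement text for the lock timeouts can be done with a query along these lines. This is a sketch: I'm assuming the LOCK_PARTICIPANT_ACTIVITIES table (also created by EVMON_FORMAT_UE_TO_TABLES) and its STMT_TEXT column, joined back to the event on XMLID.

```sql
-- Sketch: show which statements are involved in lock timeouts,
-- most frequent first (table and column names assumed)
select substr(lpa.stmt_text,1,100) as stmt_text,
       count(*) as count
from DBA.LOCK_PARTICIPANT_ACTIVITIES lpa,
     DBA.LOCK_EVENT le
where lpa.xmlid = le.xmlid
  and le.event_type = 'LOCKTIMEOUT'
group by substr(lpa.stmt_text,1,100)
order by 2 desc
with ur
```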

Looking at the SQL, both of those statements relate to SYSIBM.SYSPLAN. My conclusion is that it's probably the db2rbind done during that process: the locks are on internal plans, which is why the table wasn't showing up in our list of tables.
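For reference, the rbind in question is the standard db2rbind invocation run during the maintenance window, something like the following (the database name and log path here are illustrative, not from my actual script):

```shell
# Rebind all packages in the database; hits SYSIBM.SYSPLAN
# under the covers. MYDB and the log path are placeholders.
db2rbind MYDB -l /tmp/rbind.log all
```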

Gathering this same information would not have been as easy with the old method of monitoring for locking issues. I would have had to parse through the files created for the purpose in the DIAGPATH and find the commonality that way.

Ember is always curious and thrives on change. Working in IT provides a lot of that change, but after 17 years developing top-level expertise on Db2 for mid-range servers and more than 7 years blogging about it, Ember is hungry for new challenges and looks to expand her skill set to the Data Engineering role for Data Science. With in-depth SQL and RDBMS knowledge, Ember shares both posts about her core skill set and her journey into Data Science.
Ember lives in Denver and works from home for XTIVIA, leading a team of Db2 DBAs.