CPU usage was very high on the production database server, mostly in user mode (user%). The dynamic performance view v$session_wait indicated excessive waits due to latch contention. Output from the script wait_details.sql (scripts are available at the end of this post) shows that many sessions were waiting for the ‘latch free’ event. Also, the latch address is the same for all of these waits, meaning that all of these sessions are contending for a single latch child.
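The script itself is linked at the end of the post; a minimal sketch of the kind of query wait_details.sql runs against v$session_wait might look like the following (the exact column choices are my assumption, not the original script):

```sql
-- Hedged sketch: list sessions currently waiting on 'latch free'.
-- For this event, p1/p1raw is the latch address, p2 is the latch#,
-- and p3 is the number of tries so far.
select sid, event, p1raw latch_addr, p2 latch#, p3 tries,
       state, seconds_in_wait
from   v$session_wait
where  event = 'latch free'
order  by p1raw;
```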

We can identify the SQL causing the latch contention by querying v$session_wait. From the output below, the SQL with hash_value 1509082258 is suspicious, since many sessions executing that SQL are waiting, or have recently waited, for the ‘latch free’ event.
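A hedged sketch of that identification step, joining v$session_wait to v$session to pick up the SQL hash value (the exact column list is an assumption):

```sql
-- Which SQL statements are the 'latch free' waiters running?
select s.sql_hash_value, count(*) sessions_waiting
from   v$session_wait sw, v$session s
where  sw.sid   = s.sid
and    sw.event = 'latch free'
group  by s.sql_hash_value
order  by sessions_waiting desc;
```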

Next, we need to map the child latch address to an object. Fortunately, using the script latch_cbc_to_buf.sql (also at the end of this post), written earlier, we were able to do that mapping quickly. The script prints the touch count for those buffers too.
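Conceptually, the mapping goes through x$bh, which records the hash latch address (hladdr) and touch count (tch) for each buffer. A simplified sketch, assuming SYS access (the join and substitution variable are my assumptions, not the original script):

```sql
-- Buffers protected by a given cache buffers chains child latch,
-- with their touch counts. &latch_addr is the child latch address
-- seen in v$session_wait.p1raw.
select o.object_name, bh.file#, bh.dbablk, bh.tch
from   x$bh bh, dba_objects o
where  bh.obj    = o.data_object_id
and    bh.hladdr = hextoraw('&latch_addr')
order  by bh.tch desc;
```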

From the output above, we know that the CUS_MTL_MATERIAL_TXNS_C3 index is at the heart of this latch contention issue, since that object has a much higher touch count than the other objects protected by that child latch.

SQL and execution plan

Querying v$sql, the SQL associated with this hash value was retrieved. The execution plan for this SQL is very long and has many branches combined by a ‘union all’ operation. Searching for the index CUS_MTL_MATERIAL_TXNS_C3 in the execution plan shows that it is used in the last two branches of the plan. For clarity, only part of the plan is printed below. [Note: v$sql_plan also confirmed this execution plan.]
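For reference, the SQL text and plan can be pulled with something along these lines (a sketch, not the original script):

```sql
-- Retrieve the suspect SQL text by hash value.
select sql_text
from   v$sql
where  hash_value = 1509082258;

-- And its execution plan, with the optimizer's row estimates.
select id, operation, options, object_name, cardinality
from   v$sql_plan
where  hash_value = 1509082258
order  by id;
```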

Line #128 is a key indicator of the problem. Rows from steps 129 and 134 are joined using the cartesian merge join method! Obviously, a cartesian join will generate a huge number of rows, as there is no join condition between the two row sources [similar to a cartesian product]. The resultant rows of this cartesian join are further joined to MTL_MATERIAL_TRANSACTIONS using the nested loops join method, through the index CUS_MTL_MATERIAL_TXNS_C3. The reason the CBO chose a cartesian join is that the cardinality estimate at step 129 is 1, which is incorrect [but that is a different topic altogether].

So far, we know why those index blocks are accessed so frequently: it is a side effect of the cartesian merge join producing an enormous number of rows. If this SQL is executed concurrently from many different sessions, the effect of latch contention on the index root block is magnified.

What changed ?

This is an existing application and it was working fine until a few hours earlier. So, what changed?

Statistics. As a process, we collect statistics on a cloned copy of the production database and then import those statistics into the production database. A few other reorgs were performed over the weekend, but those don’t seem to have had any negative effect. We were fortunate enough to have another development environment with month-old data and statistics. Comparing the execution plan for that branch of the SQL in the development instance revealed something peculiar and interesting.

The cardinality estimates for RCV_TRANSACTIONS_INTERFACE, for identical predicates, are 5 (step #16) in the efficient plan (development database) and 39 in the inefficient plan (production database). This increase in cardinality caused the optimizer to choose a completely different plan. Interestingly enough, RCV_TRANSACTIONS_INTERFACE is an interface table, and while statistics were being collected on this table in the pre-production environment, we had a special-case transaction in flight. This invalid state of the table generated not-so-good statistics, which were then transferred to production.
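One quick way to sanity-check such a shift is to compare the stored table and column statistics between the two environments, for example (a sketch; run in each database and diff the output):

```sql
-- Compare these numbers between development and production.
select num_rows, blocks, last_analyzed
from   dba_tables
where  table_name = 'RCV_TRANSACTIONS_INTERFACE';

select column_name, num_distinct, density
from   dba_tab_col_statistics
where  table_name = 'RCV_TRANSACTIONS_INTERFACE';
```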

The core of this blog entry concentrates on root cause analysis of latch contention. The SQL plan change is the root cause, and the cardinality change comes into the picture to answer why the CBO chose a different plan.

Yes, we were fortunate enough to have the older plan for a quick comparison. Of course, there are other ways to achieve the same result, e.g., reviewing a 10053 trace, exporting/importing older statistics to development, reviewing statspack plans, comparing real cardinality against the estimates, etc.

Hypothetically speaking, if a small table had genuinely grown into a bigger one [and caused this issue], then we would have resolved it (a) by importing the older statistics [thereby relieving the immediate pressure on us], and then (b) by analyzing this SQL further in a copy of production for a permanent solution.
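The statistics-import step in (a) could be done with the DBMS_STATS package, roughly as below (the schema and stat-table names are hypothetical):

```sql
-- One-time: create a stat table to hold exported statistics.
exec dbms_stats.create_stat_table(ownname => 'OPS', stattab => 'OLD_STATS')

-- On the system that still has the good statistics:
exec dbms_stats.export_table_stats(ownname => 'INV', -
       tabname => 'RCV_TRANSACTIONS_INTERFACE', -
       stattab => 'OLD_STATS', statown => 'OPS', cascade => true)

-- After moving the stat table to production (e.g. via export/import):
exec dbms_stats.import_table_stats(ownname => 'INV', -
       tabname => 'RCV_TRANSACTIONS_INTERFACE', -
       stattab => 'OLD_STATS', statown => 'OPS', cascade => true)
```

With `cascade => true`, index and column statistics travel along with the table statistics, so the optimizer sees a consistent picture after the import.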


Ganesh said

Great article. I’m a follower of your blog. Whenever I get an issue related to locking or contention, I first go through your blog and try to understand the real issue. The way you present every article is excellent.

But I’m not able to download the scripts in this blog. Having them would be helpful for every follower of your blog, like me. Thanks.

Hello Ganesh,
Thanks for pointing out the script issue. I have updated the blog entry. I lost all of my orainternals.com content due to a hacking incident, and I have become too busy (or too lazy?) to bring that site back up. If you find more script issues, let me know and I will upload the scripts.

Ganesh said

Great!!! Thanks very much for the reply and for uploading the scripts. We got golden eggs 🙂 If you upload the related scripts for every article and presentation, it will be really helpful for us. Really appreciated and thankful always.