Different Execution Plans when Converting SQL from SELECT to UPDATE … IN SELECT, ROWNUM Related Bug in 11.2.0.2

June 28, 2012 (Modified June 28, 2012)

Sometimes it is possible to build a very effective test case to reproduce a particular problem. Sometimes the test case drifts a bit off track, in the process revealing other information that is not directly related to the original problem that the test case attempted to simulate. You will see several examples of test case scripts used in various articles on this blog, including one that attempts to describe how to build useful test cases. It is not uncommon to see test cases used to demonstrate various types of behavior, and there are several well-known people in the Oracle Database community who use test cases not only to demonstrate problems, but also to teach methods of avoiding those problems.

The FAQ (Frequently Asked Questions) for the SQL and PL/SQL OTN forum includes links to two OTN threads that everyone posting to the forum about performance problems should read carefully:

Over the last couple of years, a handful of good test cases posted to the OTN forums have sparked interesting articles on this blog. Take, for example, this OTN thread that provided a test case script to show that a poor execution plan was generated by the Oracle Database optimizer when ROWNUM was included in the WHERE clause. Prior to Oracle Database 11.2.0.1, the cardinality calculation for steps in the execution plan that could not possibly be short-circuited early (such as a hash join with a subsequent ORDER BY) was affected by the ROWNUM predicate in the WHERE clause (of a parent query block). That test case resulted in at least one article on this blog, which showed that with a bit of modification to the SQL statement, it was possible to use the ROW_NUMBER analytic function as a replacement for ROWNUM, thus avoiding the automatic switch to a FIRST_ROWS(n) optimizer mode for the SQL statement. This particular problem was determined to be a bug, and was corrected with the release of Oracle Database 11.2.0.1.
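As a sketch of that rewrite (the table and column names here are hypothetical, not taken from the original thread), the idea looks something like this:

```sql
-- Original form: the ROWNUM predicate in the parent query block
-- silently switches the optimizer to FIRST_ROWS(n) mode for the
-- whole statement:
SELECT
  id, val
FROM
  (SELECT
     t1.id, t1.val
   FROM
     t1, t2
   WHERE
     t1.id = t2.id
   ORDER BY
     t1.val)
WHERE
  ROWNUM <= 10;

-- Rewritten form: the ROW_NUMBER analytic function returns the same
-- top 10 rows without triggering the automatic FIRST_ROWS(n) switch:
SELECT
  id, val
FROM
  (SELECT
     t1.id, t1.val,
     ROW_NUMBER() OVER (ORDER BY t1.val) rn
   FROM
     t1, t2
   WHERE
     t1.id = t2.id)
WHERE
  rn <= 10;
```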

Fast forward two and a half years, and we see another interesting, and oddly related, test case in a thread on the OTN forum. The OP (original poster) in the OTN forum thread generated the test case, posting his results from Oracle Database 10.2.0.5 (Standard Edition), while I posted my observed results from Oracle Database 11.2.0.2 in an attempt to find a more efficient method for updating rows that match specific criteria (other people helped also, possibly executing the test case script in other Oracle Database releases).

What do we learn from the OTN test case script? Here is a start:

When hinting a SQL statement to generate an expected execution plan, it is important to be as specific as possible with the hints, without boxing the SQL statement into a corner as the data volume increases in the various tables accessed by the SQL statement. The hints also must appear in the correct section of a complex SQL statement with one or more inline views or subqueries.

If you use ROWNUM in the WHERE clause of a SQL statement, and then convert that SELECT statement to an UPDATE tablex WHERE coly IN (SELECT…) statement, you may find that the execution plan for the SELECT portion of the UPDATE statement changes dramatically from what was seen when just the SELECT statement was executed. With Oracle Database 11.2.0.2, you may find that the first execution is efficient, and on the second execution find that cardinality feedback triggered a new hard parse of the SQL statement, resulting in a significantly less efficient execution plan (or possibly a significantly more efficient execution plan).
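A minimal sketch of the conversion, borrowing the DTL table and column names from the test case discussed later in this article (the actual statement in the thread is more involved):

```sql
-- Stand-alone SELECT with a ROWNUM limit; the plan may use an
-- efficient, short-circuited index access:
SELECT
  dtlid
FROM
  (SELECT
     dtlid
   FROM
     dtl
   WHERE
     process_ind = 'NEW'
   ORDER BY
     dtlid)
WHERE
  ROWNUM <= 10;

-- The same SELECT wrapped in an UPDATE ... IN (SELECT); the plan
-- generated for the subquery may change dramatically from the
-- stand-alone SELECT's plan:
UPDATE
  dtl
SET
  process_ind = 'PROCESSED'
WHERE
  dtlid IN
    (SELECT
       dtlid
     FROM
       (SELECT
          dtlid
        FROM
          dtl
        WHERE
          process_ind = 'NEW'
        ORDER BY
          dtlid)
     WHERE
       ROWNUM <= 10);
```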

AUTOTRACE in SQL*Plus lies about the execution plan sometimes, such as when cardinality feedback triggers the generation of a new execution plan.

Using a UNION ALL in an inline view will likely not permit the Oracle runtime engine to short-circuit an operation in the inline view when a ROWNUM condition is present in the parent SQL statement, even if the query optimizer’s generated execution plan indicates that this short-circuit will happen. If possible, rewrite the SQL statement to avoid the UNION ALL operation, if both sections of the UNION ALL are accessing the same tables.
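The rewrite used later in this article gives the general idea: when the two UNION ALL branches visit the same tables (matching HDR rows, and DTL rows with no HDR row), they can be collapsed into a single outer join with an OR condition. A sketch, using the column names from this article's test case:

```sql
-- UNION ALL form: both branches access the same tables, and the
-- runtime engine may not short-circuit the inline view even when
-- the parent query block contains a ROWNUM predicate:
SELECT
  dtl.dtlid
FROM
  dtl, hdr
WHERE
  dtl.hdrid = hdr.hdrid
  AND hdr_ind IN (0, 2)
UNION ALL
SELECT
  dtl.dtlid
FROM
  dtl
WHERE
  dtl.hdrid IS NULL;

-- Equivalent single-pass form using an outer join and an OR
-- condition, which the runtime engine is able to short-circuit:
SELECT
  dtl.dtlid
FROM
  dtl, hdr
WHERE
  dtl.hdrid = hdr.hdrid(+)
  AND ((hdr_ind IN (0, 2) AND dtl.hdrid IS NOT NULL)
       OR (dtl.hdrid IS NULL));
```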

The GATHER_PLAN_STATISTICS hint, when added to a SQL statement, permits the use of DBMS_XPLAN.DISPLAY_CURSOR with the ALLSTATS LAST format parameter. An execution plan generated with that format parameter will indicate whether or not short-circuiting of steps in an execution plan took place. This short-circuiting will be indicated in the A-Rows column, the Starts column, and/or the Predicate Information section of the execution plan.
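For example (the table name is from this article's test case; any statement works the same way):

```sql
-- Execute the statement of interest with the hint in place:
SELECT /*+ GATHER_PLAN_STATISTICS */
  COUNT(*)
FROM
  dtl
WHERE
  process_ind = 'NEW';

-- Display the actual execution statistics (Starts, A-Rows, A-Time)
-- for the last execution of the most recent cursor in this session:
SELECT *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```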

Within an inline view, an index-driven nested loop join that permits the elimination of an ORDER BY clause can be short-circuited by a ROWNUM condition in the parent portion of the SQL statement, while the same is not true for hash or sort-merge joins.
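A hedged sketch of nudging the inline view toward that index-driven nested loops join (the hints and the assumption of an index on DTL.DTLID are illustrative, not from the original thread):

```sql
-- If an index on DTL.DTLID returns rows already ordered by DTLID,
-- the optimizer may eliminate the ORDER BY sort, and the runtime
-- engine can then stop the nested loops join after 10 rows reach
-- the ROWNUM predicate. A hash or sort-merge join at the same
-- point must consume the full row source first.
SELECT
  dtlid
FROM
  (SELECT /*+ LEADING(dtl) USE_NL(hdr) INDEX(dtl) */
     dtl.dtlid
   FROM
     dtl, hdr
   WHERE
     dtl.hdrid = hdr.hdrid
   ORDER BY
     dtl.dtlid)
WHERE
  ROWNUM <= 10;
```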

The ROWNUM cardinality calculation bug is still present in Oracle Database 11.2.0.2 if the inline view contains a UNION ALL clause – this bug is most apparent when cardinality feedback triggers the generation of a new execution plan on the second execution of the SQL statement.

In Oracle Database 11.2.0.2, query blocks in a SQL statement that cannot be short-circuited by a parent query block’s ROWNUM predicate are still optimized with an automatic FIRST_ROWS(n) optimizer mode (where n is the number associated with the parent operation’s ROWNUM predicate). Through an apparent bug in the query optimizer, this automatic optimization may cause odd cost calculation problems, such as dropping the cost of a full table scan from 35.01 to 2.02 (with the cost of the corresponding hash join dropping from 56.76 to just 23.83); at the same time, due to cardinality feedback, the cost of an efficient index access path with a nested loops join (that may be short-circuited by the runtime engine) at the same point in the execution plan may increase from a cost of 41.32 to 2947.89. As a result of the costing change, the nested loop join with index access path is likely to be replaced with a hash join and full table scan.

A side-by-side comparison of a portion of the 10053 trace file (showing the possible bug related to ROWNUM and cardinality feedback) that was generated by the following script (using the OP’s sample data):

In the above example, cardinality feedback helped to reduce the execution time. Notice in the first plan that the E-Rows column shows that the ROWNUM predicate was pushed too far into the inline view's cardinality calculations. Now, let's look at what happens to my optimized version of the SQL statement that removes the UNION ALL:

In the above, notice that the reported A-Time increased from 0.01 seconds (the actual time could be much faster than that) to 0.02 seconds. The number of consistent gets also increased from 80 to 5,315. Cardinality feedback will NOT automatically change the execution plan back to the more efficient execution plan during later executions of the SQL statement.


> When hinting a SQL statement to generate an expected execution plan, it is important to be as specific as possible with the hints, without boxing the SQL statement into a corner as the data volume increases in the various tables accessed by the SQL statement

For once, I’m not sure I agree with you, Charles. I don’t want to take the thread off topic but …

If we were just talking about test cases and trying to reproduce a particular behaviour then I’d have nothing further to say.

But you mention giving the optimizer options as volumes change, so we must be talking about permanently embedded hints in application code, right?

But, certainly in any recent version of Oracle, if we're not talking about this small subset of statistical adjustments and other such hints as per above, but rather about hinting join mechanisms and access paths, then I think you have to make a pretty much black-and-white choice.

You either have to provide a full set of hints for a specific plan or you don’t hint.

Particularly as the optimizer gets more complex and query transformation gets more sophisticated, not boxing the SQL statement into a corner is a performance threat and I see the threat in action time and time again, especially related to upgrades.

If you can fix an execution plan issue via statistical adjustments using cardinality, dynamic_sampling or even opt_estimate then great, otherwise I think you’ve a tough choice.

Alternatively, if you want the best of both worlds (specific plans now, with the flexibility to evolve) then we're talking baselines.

Thanks for the comment and for supplying a link to one of your articles. You make a good point. Rational disagreement is of course encouraged when necessary – there are of course a lot of extremely knowledgeable people out there, and when I am wrong, I hope that those people will offer kind assistance.

The first of the above bullet points has (at least) two completely different interpretations:

* If you as the developer know that table T1 will expand from 1,000 rows in your test environment to 10,000,000,000 rows once the application is in service for a couple of years (with index clustering factor values expected to become closer to the number of rows in the table than to the number of blocks in the table), while the other tables accessed by the SQL statement will remain at roughly 1,000 rows, it might make sense to specify the access path for one of the tables, while allowing the optimizer to dynamically adjust the access path, join method, and automatic query transformation for the remainder of the query. I believe that this is the interpretation envisioned when you read the first bullet point, and it might very well be the same interpretation that many other people have also.

* When you as the developer are developing an application, the sample data that is used for development may be much smaller than what your customers will see. That 1,000 row table with beautifully built rows that are inserted in a perfect order for the indexes built on the table may in fact be a 10,000,000,000 row table with rows inserted in a nearly random order into the table's blocks, with the table having 10,000 partitions in an ASSM tablespace, and with access to parallel query. The statement "without boxing the SQL statement into a corner as the data volume increases in the various tables accessed by the SQL statement" in this case means: think about the big picture; what will the table's data volume look like in a month, a year, 10 years, etc.? If the developer boxes the optimizer into a corner with hints that produce efficient execution plans in the small sample database with OPTIMIZER_INDEX_COST_ADJ set to 1, for instance, that same SQL statement and execution plan could cause horrendous performance problems when the code is released into production environments with several years' worth of data. In short, if you do not know what you are doing with hints, and do not understand how the data will transform over time, maybe it is best to leave the hints out of the SQL statement altogether.

* Consider how the data and supporting structures might mature over time. If, as a developer, you fully hint an execution plan to take advantage of specific indexes, access paths, join methods, etc. and the query executes efficiently – great. But, what happens if the definition of "efficient" changes at some point in the future? What if the developer learns that normal B*tree indexes can contain NULLs (https://hoopercharles.wordpress.com/2012/02/28/repeat-after-me-null-values-are-not-stored-in-indexes ) and determines that a new composite index would generate a better plan, or that simply adding a NOT NULL constraint to a table definition provides the optimizer with enough information that it can now make use of an index access path that was previously unavailable? What if a hint specifies a specific index, and that index is no longer accessible to the optimizer (maybe the index is marked as UNUSABLE, maybe it was altered to be invisible, maybe it was just a redundant index and dropped, maybe it was an index with a system generated SYS_ name that does not survive an export-import)? The optimizer in an Oracle Database is quite clever at times, and may arrive at a completely stupid execution plan when trying to follow the demands specified by the remaining valid hints. If the developer had boxed in the optimizer with very specific hints, it could be difficult to fix the application code to take advantage of the more efficient access paths (I understand that there are extensive procedures that must be followed in some industries, for instance drug manufacturing, any time the source code of an application changes).

The different possible interpretations were, in this case, somewhat intentional – the second of the above bullet points was the original target of the bullet point statement, but the other interpretations are present to encourage discussion with readers who have a lot of technical experience in this area.

Anyone else have an opinion on this matter, or any of the other bullet pointed items in the article?

I believe I have managed to simulate the data pattern more closely, matching the data in the original tables (BTW, I am the OP of that OTN thread).
And I am again stuck at the stage where the SELECT part of the UPDATE, when executed on its own, uses the nice NESTED LOOP join with short-circuit. But, as expected, the UPDATE does not use the particular index and the join changes to HASH JOIN. I strongly suspect that the UPDATE could benefit from using the access path used by the standalone SELECT but can’t figure out how to make it happen.
If I have not already managed to annoy you, I can post the complete test case, either here or on OTN thread.

I think that if the DTL table is not selected as the first table in the join, the NO_USE_HASH and NO_USE_MERGE hints might not have any effect. So, we probably need to use a LEADING hint to make certain that the optimizer starts with the DTL table:
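A minimal sketch of what that hinting might look like in the inner query block (the exact hint placement depends on the full statement, and the WHERE clause here is abbreviated):

```sql
-- LEADING forces DTL as the driving table so that NO_USE_HASH and
-- NO_USE_MERGE (which apply to the table being joined TO) can take
-- effect, leaving a nested loops join to HDR as the remaining option:
SELECT /*+ LEADING(dtl) NO_USE_HASH(hdr) NO_USE_MERGE(hdr) */
  dtl.dtlid
FROM
  dtl, hdr
WHERE
  dtl.hdrid = hdr.hdrid(+)
  AND dtl.process_ind = 'NEW';
```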

Since the SELECT statement is optimized with a FIRST_ROWS(n) optimizer mode, there is a chance (probably a small chance) that a FIRST_ROWS(n) hint may override the automatic ALL_ROWS optimization:

update dtl set process_ind = 'PROCESSED' where dtlid in (
  select
    dtlid
  from
    (select
       dtlid
     from
       (select /*+ FIRST_ROWS(10) */
          dtl.dtlid,
          dtl.process_date
        from
          dtl,
          hdr
        where
          dtl.hdrid = hdr.hdrid(+)
          and dtl.process_date < (sysdate + 30)
          and dtl.process_ind = 'NEW'
          and ((hdr_ind in (0, 2) and dtl.hdrid IS NOT NULL)
               OR (dtl.hdrid IS NULL)))
     order by
       dtlid,
       process_date)
  where
    rownum <= 10);

If you are able to solve the problem, I would like to see what worked. Definitely update the OTN thread when you find a solution. If the hint does not work out, you could post the updated test case so that people are able to better reproduce the problem that you are experiencing.

The OTN forums crashed as I tried to post the following response (sorry to all of the users of the OTN forums), so I thought that I would post it here.

Your analysis of an increasing number of rows needing to be fetched from the DTL table for each execution of the UPDATE statement is correct (I think that I mentioned this issue early in this thread). So, what is happening?

First, let’s make it easy for people to create your latest version of the test case tables:

With those tables freshly created, we can start experimenting. What if we execute the update statement 20 times, outputting the execution plan each time, and executing the following SQL statement before the first update statement, and then after every 5 executions of the update statement:
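The actual monitoring script from the thread is not reproduced here; a hypothetical reconstruction along the following lines would produce the C and RN columns described below (the real script also applies the UPDATE's process_date and HDR join conditions, which are omitted from this sketch):

```sql
SELECT
  dtlid,
  process_ind,
  -- C: running count of rows that satisfy the UPDATE's WHERE clause
  SUM(CASE WHEN process_ind = 'NEW' THEN 1 ELSE 0 END)
    OVER (ORDER BY dtlid) c,
  -- RN: running counter of the rows pulled from the DTL table
  ROW_NUMBER() OVER (ORDER BY dtlid) rn
FROM
  dtl
ORDER BY
  dtlid;
```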

The above output will show you the rows that are making it out of the DTL table, which are then used to probe the primary key index on the HDR table. The C column indicates the number of rows that match the WHERE clause in your UPDATE statement, after those rows are pulled from the DTL table (once the C column reaches 100, the Oracle runtime engine is able to short-circuit the execution plan). The RN column provides a running counter of the number of rows from the DTL table.

Before the first update, notice that the first row that will be returned (based on your WHERE clause) is the first row printed, and that the 100th row that will be returned is the 211th row that is printed:

Now, I think you know what you need to do to optimize this SQL statement, if your test case correctly replicates the product environment (if it is not obvious yet, look at the output of the SELECT SQL statement that returns 2800 rows, after the 20th update).

Hints for Posting Code Sections in Comments

********************
When the spacing of text in a comment section is important for readability (execution plans, PL/SQL blocks, SQL, SQL*Plus output, etc.) please use a <pre> tag before the code section and a </pre> tag after the code section:
<pre>

SQL> SELECT
2 SYSDATE TODAY
3 FROM
4 DUAL;
TODAY
---------
01-MAR-12

</pre>
********************
When posting test case samples, it is much easier for people to reproduce the test case when the SQL*Plus line prefixes are not included - if possible, please remove those line prefixes. This:

SELECT
SYSDATE TODAY
FROM
DUAL;

Is easier to execute in a test case script than this:

SQL> SELECT
2 SYSDATE TODAY
3 FROM
4 DUAL;

********************
Greater than and less than signs in code sections are often interpreted as HTML formatting commands. Please replace these characters in the code sections with the HTML equivalents for these characters: &lt; for the less than sign and &gt; for the greater than sign.