Rolling Cursor Invalidation

While working on a demo a while back I discovered the hard way that things do indeed change! Of course this is nothing new, but by sharing my experience I hope to save others from re-inventing the same wheel and to teach you something that you might not already know.

The demo I was working on demonstrates how bind peeking works, the problems it can cause and how the Oracle11g Adaptive Cursor Sharing feature solves these problems. In this demo I call DBMS_STATS to create a histogram, and I expected that dependent cursors would be marked INVALID afterwards, but this simply didn’t happen. I almost lost my confidence in myself and Oracle ;-)

Somehow I forgot, or maybe completely missed, the fact that cursors have been invalidated in a rolling fashion since the introduction of Oracle10g. Before Oracle10g, cursors were marked INVALID immediately by DBMS_STATS (or ANALYZE .. STATISTICS).

Before diving into the Oracle10g behavior, we start by looking at the pre-Oracle10g way of invalidating cursors.

Immediate Cursor Invalidation

Let's start by creating a table, gathering object statistics on it, executing a query against it, grabbing its SQL_ID and taking a look at the child cursor in the library cache:
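The original listing did not survive; a minimal sketch of that kind of session could look like this, where the demo table T, the query text and the `&sql_id` substitution variable are all assumptions, not the names from the original post:

```sql
SQL> -- create a small demo table and gather statistics on it
SQL> create table t as select rownum id, rpad('x',100) pad
  2  from dual connect by level <= 10000;

SQL> exec dbms_stats.gather_table_stats(user, 'T')

SQL> -- execute a query against it
SQL> select count(*) from t where id > 0;

SQL> -- grab its SQL_ID and inspect the child cursor in V$SQL
SQL> select sql_id, child_number, loads, parse_calls, executions, invalidations
  2  from v$sql
  3  where sql_text like 'select count(*) from t where id > 0%';
```

Executing the query a second time would then bump PARSE_CALLS and EXECUTIONS while LOADS stays at 1, which is the soft-parse behavior the next paragraph refers to.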

As shown above, the child cursor was soft parsed and executed again. The next step is to gather object statistics again using DBMS_STATS and let DBMS_STATS invalidate dependent cursors by setting the parameter NO_INVALIDATE to FALSE:
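The listing itself is missing; a hedged sketch of the corresponding commands (table T and the `&sql_id` substitution variable are assumed placeholders, not the originals):

```sql
SQL> -- force immediate invalidation of dependent cursors
SQL> exec dbms_stats.gather_table_stats(user, 'T', no_invalidate => FALSE)

SQL> select count(*) from t where id > 0;

SQL> select child_number, loads, parse_calls, executions, invalidations
  2  from v$sql where sql_id = '&sql_id';
```

With NO_INVALIDATE => FALSE one would expect the INVALIDATIONS and LOADS columns to have incremented, reflecting a fresh hard parse of the child cursor.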

Based on the output above we can conclude that the child cursor has been hard parsed again, after being marked INVALID, and that this new incarnation of the child cursor has been executed once. All of the above will also happen if you still use the deprecated ANALYZE .. STATISTICS command instead of DBMS_STATS.
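A sketch of the equivalent run with the deprecated command (same assumed table T and `&sql_id` placeholder as above):

```sql
SQL> -- deprecated pre-DBMS_STATS way of gathering statistics
SQL> analyze table t compute statistics;

SQL> select count(*) from t where id > 0;

SQL> select child_number, loads, parse_calls, executions, invalidations
  2  from v$sql where sql_id = '&sql_id';
```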

As in the case of DBMS_STATS, we can conclude that the child cursor has been hard parsed again and re-executed after being marked INVALID.

This behavior was what I expected while working on my Oracle11g demo.

Now that we have a basic understanding of how things worked in the past, it is time to take a look at how cursor invalidations work in more recent versions of Oracle.

Rolling Cursor Invalidation

Starting with Oracle10g, cursors are marked for rolling invalidation instead of being marked INVALID immediately. On the next execution of the query the Oracle server will generate a random number between 0 and the value of the _optimizer_invalidation_period parameter, which has a default value of 18000, and the cursor will remain valid for this number of seconds. Upon every following execution Oracle will check whether this randomly selected timeout has expired. If it has, the cursor will be hard parsed again. Before demonstrating rolling cursor invalidation, we’ll lower the value of _optimizer_invalidation_period in order to avoid long waiting times:
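A sketch of lowering the parameter, analogous to the reset shown at the end of the post (60 seconds is an arbitrary low value chosen here, not necessarily the one used in the original demo):

```sql
SQL> alter system set "_optimizer_invalidation_period" = 60 scope=memory;

System altered.
```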

Instead of marking the child cursor INVALID immediately, it is now marked for rolling invalidation behind the scenes. Nothing will happen with respect to cursor invalidation unless we execute the query again:
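A sketch of that step: gathering with the default NO_INVALIDATE (DBMS_STATS.AUTO_INVALIDATE) and then running the query once more (table T and `&sql_id` are assumed placeholders):

```sql
SQL> -- NO_INVALIDATE defaults to DBMS_STATS.AUTO_INVALIDATE
SQL> exec dbms_stats.gather_table_stats(user, 'T')

SQL> select count(*) from t where id > 0;

SQL> select child_number, object_status, parse_calls, executions, invalidations
  2  from v$sql where sql_id = '&sql_id';
```

One would expect OBJECT_STATUS to still read VALID and INVALIDATIONS to be unchanged; it is this execution that starts the random timer.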

We can’t look behind the scenes, but the random invalidation timeout has been generated and the child cursor remains valid. The query will be executed without a hard parse as long as this timeout hasn’t expired. In order to see what happens when the timeout is over, we have to wait at least _optimizer_invalidation_period seconds:
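A sketch of the wait, assuming the parameter was lowered to 60 seconds earlier (DBMS_LOCK.SLEEP needs an explicit execute grant; a plain pause outside the database works just as well):

```sql
SQL> exec dbms_lock.sleep(90)

SQL> select child_number, object_status, invalidations
  2  from v$sql where sql_id = '&sql_id';
```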

Despite the fact that we know the timeout has expired, the child cursor is not marked INVALID, and as a matter of fact it never will be. Let's see what happens when we execute the query again now that the timeout is over:
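A sketch of that re-execution (same assumed placeholders as before):

```sql
SQL> select count(*) from t where id > 0;

SQL> select child_number, loads, parse_calls, executions
  2  from v$sql where sql_id = '&sql_id';
```

Two child cursors (CHILD_NUMBER 0 and 1) would now be expected, the new one hard parsed and with EXECUTIONS = 1.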

Instead of hard parsing the child cursor again as shown before, we now see another child cursor showing up in the library cache. This new child cursor was hard parsed and executed once. The reason for this new child can be found in V$SQL_SHARED_CURSOR as shown below:
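A sketch of that lookup; ROLL_INVALID_MISMATCH is the relevant column:

```sql
SQL> select child_number, roll_invalid_mismatch
  2  from v$sql_shared_cursor where sql_id = '&sql_id';
```

A 'Y' against the original child indicates it could not be shared because it had been marked for rolling invalidation.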

This indicates that the new child was born because the previous child had a mismatch due to a rolling cursor invalidation.

Before finishing this blog post we need to set the _optimizer_invalidation_period parameter back to its default value:

SQL> alter system set "_optimizer_invalidation_period" = 18000 scope=memory;
System altered.

Conclusion

Things will change without notice! ;-) Cursor invalidation has changed in the past, and the reason for this change is that previously all dependent cursors were invalidated immediately after gathering new object statistics. This massive invalidation might cause a serious degradation in performance right after statistics gathering due to a high number of hard parses. Rolling cursor invalidation spreads the invalidation of cursors out over a longer period of time, thereby avoiding the performance degradation caused by hard parsing the invalidated cursors.

Reference: Metalink note 557661.1
-Harald


This entry was posted on July 16, 2009 at 16:38 and is filed under Oracle.

mwidlake said:

The default value for “no_invalidate” is dbms_stats.auto_invalidate. This causes Oracle to decide when to invalidate the dependent cursors. I suspect (and Harald’s example seems to support this) that if you issue a GATHER_TABLE_STATS or GATHER_INDEX_STATS, Oracle decides to invalidate cursors immediately. But if you use GATHER_SCHEMA_STATS, GATHER_DATABASE_STATS or the automatic stats gathering job (which is a modified GATHER_DATABASE_STATS), Oracle decides to invalidate dependent cursors in a delayed manner, i.e. a random time period later.

mwidlake said:

This feature of delayed cursor invalidation can lead to much confusion, particularly in respect of automated statistics generation.

As a DBA you come into the office in the morning {or maybe get called in the early hours} and some SQL has started misbehaving during the night. Damn. You check when statistics were gathered on the tables. For one table it was last night! But an hour or so before the SQL started causing problems. So, the statistics gathering is not the cause of the problem. But it was :-).

I suspect that, under 10.1 and 10.2 at least, when you gather statistics on specific tables and indexes with GATHER_TABLE_STATS or GATHER_INDEX_STATS, Oracle decides to immediately invalidate the dependent cursors. I say this as it often changes the execution plan immediately. I will have to go and test this now!

Very nice, and besides the fact that you quoted that “things change without notice”, it also proves one more thing: there is always something new to learn. I didn’t know about this behavior at all. Many thanks for it.

Sumit said:

Thanks for the article.
It’s nice to read the truth about the hidden parameters.

On 10g we gather stats for a table within a procedure (dbms_stats.gather_table_stats(tabname=>’xyz’,cascade=>true);), which invalidates all the queries related to the table, and in fact we cannot see any details of the queries in any of the V$ views.
What we are not able to understand is: if the parameter is set to 30 minutes, why do the queries immediately get invalidated and removed from the V$ views?

Though when a query is executed again we can see it in v$sql with its loads and invalidations counts incremented by 1.

I think the answer is that you are not stating the value for NO_INVALIDATE and so it will default to AUTO. With AUTO, it seems that Oracle immediately invalidates dependent cursors if the stats gather was GATHER_TABLE_STATS or GATHER_INDEX_STATS. With schema or database wide gathers, Oracle invalidates the dependent cursors after the random delay.

>On the next execution of the query the Oracle server will generate a random number between 0 and the value of the _optimizer_invalidation_period parameter

Now that is very irritating.
a. The random number isn’t generated immediately after the GATHER_STATS, so it is only the next execution that generates the random number? This can also mean that different statements against the same table would “start” their timers at different times.
b. The timer is set to a random number. The DBA might as well tell his manager “dark forces in the Universe control when a new cursor will be generated.”.

Yes, I am aware of NO_INVALIDATE=>FALSE.
Oracle introduced “delayed” invalidation to prevent latch storms and a very high rate of hard parses in a busy database if a GATHER_SCHEMA_STATS is run. Unfortunately, when doing a single-table GATHER_TABLE_STATS, we would generally expect to invalidate SQLs against the table as soon as possible (else, why would we be gathering statistics while the database is busy?). However, the behaviour you describe makes invalidation unpredictable. The DBA then has to realise that he should have added NO_INVALIDATE=>FALSE!
There are other ways to cause invalidation and reparsing on demand.
I was just commenting on the unpredictability of this automation.

From the referenced MOS note 557661.1, an interesting addition to your excellent post:
“Exception: parallel SQL are immediately invalidated in order to ensure consistency between execution plans of slaves and Query Coordinator across multiple RAC instances.”
I have not checked it myself yet, though.