As the poster points out, the statistics about “Misses in library cache” correspond to calls to the optimizer to produce an execution plan for the statement (the so-called “hard parse” – even though some such “parses” take place on the execute).

The real question, presumably, is: “Why did Oracle have to re-optimise the statement so frequently prior to executing it?”

It’s hard to infer specific reasons, but we can take a couple of guesses. The statement was parsed twice (two parse calls) and executed seven times – so the client code seems to be holding the cursor open for at least some of the time. On one parse call and two execute calls the execution plan was found in the library cache, but on most occasions it could not be found. So either the plan had been invalidated, or it had been flushed from the library cache (and “reloaded”) fairly frequently.

Invalidation means that some significant change occurred to one of the objects referenced, making the current execution plan invalid. Flushes will occur if there is demand for free memory – execution plans are re-creatable, so they can be flushed from memory even while the cursor is being held open by the client.

To investigate further, we need to know whether the problem was one of simple reloads or invalidations. The first step is to go back to the raw trace file (which I assume you have since you’ve just used tkprof on it) and find the “parsing” line for the guilty statement. It will look something like this:
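In a 9i/10g trace file the entry takes roughly this shape (every value below is purely illustrative – your own trace will, of course, show different numbers and the actual statement text):

```
PARSING IN CURSOR #3 len=132 dep=0 uid=55 oct=3 lid=55 tim=1159964384940084 hv=2684326768 ad='221d3e40'
select ... {the guilty statement text appears here} ...
END OF STMT
```

The `hv=` entry is the hash_value and the `ad=` entry is the address of the parent cursor.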

I’ve only used the hash_value (that’s the efficient way into v$sql for versions before 10g – the address is useful only in the unlikely case of a hash collision occurring).
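A query along the following lines does the job – the choice of columns is mine, and you substitute the `hv=` value copied from the trace file:

```sql
select
        child_number, parse_calls, executions,
        loads, invalidations
from    v$sql
where   hash_value = &m_hash_value
;
```

Comparing the loads and invalidations columns for each child is what tells you which of the two mechanisms you are suffering from.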

As you can see, the number of loads is just a tiny bit bigger than the number of invalidations. This tells me that most of my problems were due to a structural effect, rather than losing the plan through memory starvation. If the number of loads had been much higher than the number of invalidations, that would have indicated a memory-flushing problem.

As you can see, I’ve also printed up the child_number – this isn’t information that would make a lot of sense in the trace file – it’s potentially transient and misleading – but it is possible for the same SQL text to result in many child cursors, so it’s worth making sure you check v$sql so that you can see them all.

Just for reference, one of the issues that could cause this type of “excess invalidation” (prior to 10gR2) was the use of global temporary tables (GTTs) defined as ‘on commit preserve rows’. To empty such a table, you either had to terminate your session, delete all the rows (inefficiently) or truncate the table.

But the truncate command is DDL, so if you truncated your private copy of the table every cursor referencing that table would be invalidated. I’ve seen this cause massive contention in the SQL area when such tables have been used at high levels of concurrency.
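As a sketch of the scenario (the table and column names here are invented for illustration):

```sql
create global temporary table gtt1 (n1 number)
on commit preserve rows;

-- Each session sees only its own private rows, but the
-- truncate is still DDL against the table definition:

truncate table gtt1;    -- invalidates every cursor referencing gtt1
```

With dozens of sessions truncating their private copies concurrently, the resulting storm of invalidations and re-optimisations is what produces the library cache contention.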

The significance of the “Parse” line is that the front-end code has issued a parse call to the database. Without detailed examination of things like session stats, trace files, etc. you can’t really tell what the database response was to that parse call.

It is quite possible that the OCI library had cached a pointer to the cursor in the session cursor cache so that the effect of the call was virtually identical to the use of a held cursor. The only thing we can be confident about from this tkprof output is that Oracle did not have to optimise the statement on every parse call (or we would have seen lots of library cache misses).
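If you can get at the session statistics, comparing the parse counts with the cursor cache hits gives a good clue about what those parse calls really cost. A sketch against v$mystat (the exact statistic names can vary slightly with version):

```sql
select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   ms.statistic# = sn.statistic#
and     sn.name in (
                'parse count (total)',
                'parse count (hard)',
                'session cursor cache hits'
        );
```

If the cache hits track the total parse count closely and the hard parses stay flat, the parse calls are effectively as cheap as holding the cursor.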

Answering your specific question – the presence of the sequence.nextval will not cause a “soft parse” on every call.

SEQUENCE.NEXTVAL will cause a (probably soft) parse and execute of the statement “update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1” every time it has to increment the highwater mark of the sequence – if the sequence is NOCACHE this will happen for every SEQUENCE.NEXTVAL; with CACHE 10 it will happen for every 10th SEQUENCE.NEXTVAL, and so on.
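For illustration (the sequence names are invented), the two extremes look like this:

```sql
create sequence seq_nocache nocache;       -- seq$ updated on every nextval
create sequence seq_cached  cache 10;      -- seq$ updated once per 10 values
```

So a busy NOCACHE sequence turns every nextval into a recursive update of seq$, which is one more reason to leave sequences cached.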