: The first, which dates back to v7.1 pre-betas when the SQL hash
: algorithm was being played around with and often generated huge clumps
: of SQL hashing together, is that while it is wonderful to re-use parsed
: SQL, and even essential in cases where thousands of users just need to
: quickly re-execute exactly the same one-line lookup with a different
: key, IF IT TAKES LONGER TO FIND THE MATCH THAN TO RE-PARSE, SOMETHING
: STUPID IS IN PROGRESS. So Oracle should have a bail-out timer that
: says: hey, stop looking and just parse the doggone thing. Space in the
: shared pool is important, but time is more important. If you're hung up
: on a latch, parse to a one-time area. No excuses, you've coded up lots
: harder stuff than this. Yes, writing systems intelligently in the first
: place generally avoids the problem, but that does not solve the problem
: for the huge bulk of poorly coded essential functionality that is in
: production around the world (primarily emanating from guess where!)
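The bail-out idea reduces to a time-budgeted search with a fallback: walk the hash chain, and if the budget expires before a match turns up, stop searching and hard-parse into a private one-time area. A minimal sketch; the names (`find_or_parse`, `parse_private`) and the cursor structure are hypothetical illustrations, not Oracle internals:

```python
import time

def find_or_parse(chain, sql_text, budget_seconds, parse_fn):
    """Walk one hash chain looking for a matching cursor; if the walk
    exceeds budget_seconds, bail out and hard-parse into a one-time
    area rather than keep the session hung up on the search."""
    deadline = time.monotonic() + budget_seconds
    for cursor in chain:                      # candidates that hashed together
        if cursor["text"] == sql_text:        # exact-text match: reuse the plan
            return cursor, "shared"
        if time.monotonic() > deadline:       # searching costs more than parsing
            return parse_fn(sql_text), "private"
    return parse_fn(sql_text), "shared-miss"  # no match found: normal hard parse

# Hypothetical parse routine that builds a fresh cursor.
def parse_private(sql_text):
    return {"text": sql_text, "plan": "freshly parsed"}
```

The point of the fallback is exactly the one argued above: space in the shared pool is traded away deliberately whenever the time to find a match exceeds the time to re-parse.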

10g introduces the SQLID all over the place in place of the
hash_value - this is intended to get "better uniqueness" because
SQL capture over a long period of time might otherwise cause
problems to ADDM when different text hashed to the same value.
"Better Uniqueness" should produce very short collision chains,
so the 'time to search' should be moot.
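Why a wider identifier makes the search time moot can be shown with a toy simulation: hash the same set of distinct statement texts into a narrow and a wide hash space and compare the longest collision chain. The hash here is a truncated MD5, purely for illustration, not Oracle's actual algorithm:

```python
import hashlib
from collections import Counter

def longest_chain(n_statements, hash_bits):
    """Hash n distinct statement texts into a hash_bits-wide space and
    return the length of the longest resulting collision chain."""
    mask = (1 << hash_bits) - 1
    buckets = Counter()
    for i in range(n_statements):
        text = f"select * from t where id = {i}"
        h = int.from_bytes(hashlib.md5(text.encode()).digest()[:8], "big") & mask
        buckets[h] += 1
    return max(buckets.values())

# 100,000 distinct statements: a 16-bit space (65,536 slots) is forced by
# the pigeonhole principle into multi-entry chains, while a 48-bit space
# leaves almost every statement in a chain of length one.
```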

: The second, which Jonathan delightfully referenced, is about bind
: variable peeking. Now, if you look at the actual cost elements recorded
: for some SQL that has been executed, you'll have a pretty good handle
: on the cost of parsing versus the cost of execution. If the execution
: is really cheap, go ahead, don't bother re-peeking. But if the cost of
: execution was significant compared to the cost of a parse, let's take
: another look. You might also want to mark a SQL for a re-plan if one of
: the bind variables drove a histogram-based plan choice. So big deal if
: you get a few copies of the same code. YOU COULD EVEN USE A HANDFUL OF
: WINNERS AS STARTING POINTS FOR "PLAN COSTS TO BEAT" IN THE PERMUTATION
: SEARCH for the next value you get.
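The re-peek decision described above comes down to two recorded numbers and one flag. A minimal sketch, where the threshold ratio and all parameter names are made up for illustration:

```python
def should_repeek(parse_cost, last_exec_cost, used_histogram,
                  exec_to_parse_ratio=10.0):
    """Decide whether to re-peek bind variables before the next execution.

    - Cheap executions never justify another optimization pass.
    - Executions expensive relative to the parse are worth a fresh look.
    - A plan that hinged on a histogram is sensitive to the peeked value,
      so it is always marked for a possible re-plan.
    """
    if used_histogram:
        return True                       # the bind value drove the plan choice
    return last_exec_cost > parse_cost * exec_to_parse_ratio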

Oracle records the 'last execution costs' for a statement,
so it should be able to compare actual cost with predicted
cost and make a dynamic decision to do some more
permutations. Since there is an 'optimizer last permutation'
value for each statement, Oracle could pick up where it
left off last time.

Of course, this should only be applied to statements where
the run-time cost was sufficiently different from prediction.

There's even scope, I would have thought, for keeping a short
history of bind values which cause significant variation in cost,
with a mechanism for line-by-line analysis to determine where
the most significant change appeared -- (harder, left as an
exercise to the reader ;)
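Putting those two ideas together, compare the actual cost against the prediction, and on a big miss resume the permutation search from the recorded last permutation while noting the bind values that caused the variation. A sketch under stated assumptions: the cursor record, tolerance factor, and permutation counter are all hypothetical stand-ins for whatever Oracle keeps internally:

```python
def maybe_reoptimize(stmt, actual_cost, binds, tolerance=2.0, extra_perms=100):
    """Compare a statement's run-time cost with the optimizer's prediction;
    on a sufficiently large miss, resume the plan-permutation search from
    where it stopped last time and remember the offending bind values.
    `stmt` is a hypothetical cursor record, not a real Oracle structure."""
    predicted = stmt["predicted_cost"]
    if actual_cost <= predicted * tolerance:
        return False                                # close enough: keep the plan
    stmt.setdefault("bad_binds", []).append(binds)  # short history of offenders
    start = stmt["last_permutation"]                # pick up where we left off
    stmt["last_permutation"] = start + extra_perms  # try some more permutations
    return True
```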

: The third, which regards the whole question of finding marginally or
: surprisingly better plans than an existing "good enough and stable
: plan," is: why not give us a tool to run in slack time that says: take
: a look at my most consumptive SQLs and the actual execution cost
: results, ignore my hints, and permute until the cows come home finding
: me a better plan, and let me know about it. Maybe I'll change my mind
: on the hints.

10g - dbms_sqltune - fire in a statement, or a complete workload, and
give it a time limit on working out a single execution plan, or a
co-operative set of plans.

It's a useful step forward - but it doesn't cope well with bind variables.
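The workflow behind such a slack-time tool (in 10g the real entry points live in the DBMS_SQLTUNE package) can be modelled outside the database: take the current plan's cost as the "cost to beat", evaluate permutations under a time limit, and report only a genuine win. The candidate list below is a stand-in for the optimizer's permutation search, not the real thing:

```python
import time

def slack_time_tune(current_cost, candidate_costs, time_limit_seconds):
    """Evaluate candidate plan costs until the time limit expires; report
    the best candidate only if it actually beats the current plan.
    Returns (index_of_winner_or_None, best_cost_found)."""
    deadline = time.monotonic() + time_limit_seconds
    best_cost, best_index = current_cost, None    # current plan is the baseline
    for i, cost in enumerate(candidate_costs):
        if time.monotonic() > deadline:
            break                                 # the cows have come home
        if cost < best_cost:                      # a plan that beats the baseline
            best_cost, best_index = cost, i
    return best_index, best_cost
```

If no candidate beats the baseline within the budget, the existing stable plan is kept and nothing is reported, which matches the "maybe I'll change my mind on the hints" posture: the DBA is told about a better plan, not forced onto it.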

Please see the official ORACLE-L FAQ: http://www.orafaq.com
To unsubscribe send email to: oracle-l-request_at_freelists.org
put 'unsubscribe' in the subject line.