(You might notice that I have done a RESETLOGS on 09-April, but that fact is irrelevant for the purpose of this blog entry about the *number* of ArchiveLog entries in the controlfile and RMAN listings).

After an SQL 'ALTER SYSTEM SWITCH LOGFILE;' I have :

RMAN> list archivelog all;

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name ORCL
=====================================================================

There is a way to "clear" *ALL* the entries. NOT to be done on Production databases. NOT to be done on a Primary for a Standby or on a Standby -- unless you know precisely the potential consequences and how to deal with them.

SQL> -- do *NOT* do this on Production
SQL> -- do *NOT* do this if you have a Standby database
SQL> -- the section number may not be documented to be the same in all versions
SQL> exec dbms_backup_restore.resetcfilesection(11);
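To confirm that the Archived Log section really has been cleared, the entries can be counted from V$ARCHIVED_LOG, which is read from the controlfile :

```sql
-- V$ARCHIVED_LOG exposes the Archived Log section of the controlfile ;
-- after resetcfilesection(11) this count should drop to zero
SELECT COUNT(*) FROM v$archived_log;
```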

(Apparently, the presentation of some of the executions is mangled as if these are different SQL statements. This could be a bug but it has no significant impact on my test).

Therefore, with a (default) Non-Deterministic Function, the Function (and the two SQLs therein) was executed once for each row fetched from the SALES table -- so we have two additional SQL (recursive depth 1) calls for each row ! These additional SQL calls are quite an overhead !

I know the data well. I know that my Sales records have only 3 different Sale Quantities (SALE_QTY) and 4 different Sale Prices (SALE_PRICE). Therefore, the probability of the same SALE_QTY and SALE_PRICE being repeated in the next row fetched is quite high. If I had very many different Quantities and/or Prices, the probability of repetition would be very low.

With the high probability of repetition, I know that I can instruct Oracle to assume that the computed Sales Tax would be the same for the next row from the SALES table if the SALE_QTY and SALE_PRICE were the same ! I can use a DETERMINISTIC function.
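The function itself is not reproduced in this extract. A minimal sketch of what such a function might look like is below -- the table and column names (beyond SALES_TAX_RATES itself), the predicate, and the tax arithmetic are all assumptions, not the actual code, and only one of the "two SQLs" is sketched. Note that a function that queries a table is not *truly* deterministic -- which is precisely why the table is locked during the run :

```sql
-- Hypothetical sketch only : column names, predicate and arithmetic are assumed.
-- DETERMINISTIC is a promise made to Oracle ; Oracle does not verify it.
CREATE OR REPLACE FUNCTION compute_sales_tax (
  p_sale_qty   IN NUMBER,
  p_sale_price IN NUMBER
) RETURN NUMBER
DETERMINISTIC
IS
  l_tax_rate NUMBER;
BEGIN
  -- one of the recursive SQLs : look up the applicable rate
  SELECT tax_rate
    INTO l_tax_rate
    FROM sales_tax_rates
   WHERE rate_code = 'STANDARD';   -- assumed predicate
  RETURN p_sale_qty * p_sale_price * l_tax_rate;
END compute_sales_tax;
/
```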

This time, the total computation took only 1.21 seconds ! A vast improvement ! (Had there been 3.6 million rows instead of only 36,400, the difference in time would be very, very noticeable -- provided that the pattern of SALE_QTY and SALE_PRICE held even for the larger volume).

(WHY did I lock the SALES_TAX_RATES table ? I wanted to ensure that no values were changed while my computation was running. If the rates in this table were changed, the computed Sales Tax amounts wouldn't be accurate with a DETERMINISTIC definition for the function !)
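The lock itself is a one-liner issued before the query runs (the mode is a judgement call ; SHARE mode still allows other sessions to read the rates but blocks updates until my transaction ends) :

```sql
-- Prevent rate changes for the duration of the computation ;
-- the lock is released automatically at COMMIT or ROLLBACK
LOCK TABLE sales_tax_rates IN SHARE MODE;
```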

The two SQLs in the function were executed only 12 times. That means that the function was executed only 12 times. At every row, Oracle evaluated the SALE_QTY and SALE_PRICE against the values from the previous row. It would re-execute the function only if either of these values differed. If the values are the same, it avoids the overhead of the Function call itself, as I have explicitly (with my DETERMINISTIC definition) assured Oracle that the computed value of the Function wouldn't change if the input values were the same !

This is how (if I know the data -- the pattern of SALE_QTY and SALE_PRICE values !) I could use a DETERMINISTIC definition.

UPDATE : Let me clarify that I would use a DETERMINISTIC function in this manner only if :

a. I knew that the data in the target table (SALES) follows such a pattern that the function predicates will not vary frequently

b. I knew that the reference table (SALES_TAX_RATES) would not be updated frequently. This is an important lookup table where change controls mandate that data is not updated until approved, and only in specific windows. Of course, I add the LOCK TABLE on SALES_TAX_RATES as a matter of abundant caution.

(btw : There's another way to speed up the execution as well, still using a Function but rewriting it again. ... That is actually a very simple exercise).

15 April, 2011

(before I return to my series on Deterministic functions ........... )

UPDATE March 2017: See my new series on SingleInstance DataGuard for RAC, beginning here.

This week I have had two sets of questions about standby databases -- what a coincidence ! The last time I really worked with a Standby was in 8.1.7 -- without the DataGuard "backport" from 9i. (and, before that, in 7.0). All of a sudden, I have had questions about 10g and 11 DG.

Anyways, since I had started "exploring" 11.2 DG (skipping over 9i and 10g) a few days ago, and had set up a DG environment in my VM, here are some entries from the Standby's alert.log :

There are 19 rows with 5 distinct dates but the function was executed 12 times ! Isn't there a mismatch ? Should the function have been executed 19 times or 5 times ?

.

.

.

But wait ..... a DATE datatype in Oracle captures Hours:Minutes:Seconds besides Day:Month:Year. Therefore, two rows with the "Date" '09-APR-11' may not necessarily have the same Date value stored in the database !

I rerun the query but also print the Hours:Minutes:Seconds and capture the Binds in the trace file.
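That rerun can be set up along these lines (a sketch ; event 10046 at level 12 records Bind values and Waits in the trace file -- level 4 would capture Binds alone) :

```sql
-- Show the full DATE precision, not just DD-MON-RR
ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

-- Capture Bind values in the trace file (level 12 = binds + waits)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... rerun the query here ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```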

That adds up to 12 executions. One execution afresh each time the retrieved value changes. Note : The sequence of values that appear in the trace file isn't exactly the same as that in the SQL output. The trace file shows how the function is applied to each row fetched *before* the rows are sorted. The function is applied before the ORDER BY is applied.

05 April, 2011

In my opinion, DETERMINISTIC functions are not very well explained. Some DBAs and Developers know that a Function must be DETERMINISTIC if it is to be used to create a Function Based Index. Also, the common understanding is that a Function can be DETERMINISTIC if it is guaranteed to always return the same output value if the input value supplied is not changed.

"Indicates that the function returns the same result value whenever it is called with the same values for its parameters."

Yet, how does Oracle optimise the execution of a DETERMINISTIC function ? The documentation says "When the database encounters a deterministic function in one of these contexts, it attempts to use previously calculated results when possible rather than reexecuting the function."

We know that every time we pass the same value to "trunc(:b1/100)", the result will always be the same. This function had to be executed 16 times because each value passed to the function was different -- each time a new OBJ_ID was being passed.
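The function under test is, in essence, a thin wrapper over TRUNC -- a sketch is below (the real function name is not shown in this extract ; unlike the Sales Tax example, this one is genuinely deterministic because it is pure arithmetic with no table lookups) :

```sql
-- Hypothetical name ; body matches the "trunc(:b1/100)" seen in the trace
CREATE OR REPLACE FUNCTION trunc_100 (b1 IN NUMBER) RETURN NUMBER
DETERMINISTIC
IS
BEGIN
  RETURN TRUNC(b1 / 100);
END trunc_100;
/
```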

So, what happens if the same OBJ_ID is passed to the function each time ?

SQL> UPDATE my_objects set obj_id = 1000000;

16 rows updated.

SQL> commit;

Commit complete.

SQL> pause Press ENTER to proceed with new session
Press ENTER to proceed with new session

Now, the function is executed only once ! Apparently, Oracle actually "peeks" (for want of a better word) at the value passed to the function. If it is the same as the last value (or one of the last four values ? -- see Jonathan Lewis's last paragraph in his note), it decides to not re-execute the function. That also means that it holds the return value from the previous execution and re-uses that value -- to be returned again for the next row.
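One way to watch this behaviour for yourself is to count *real* executions with a package variable -- a side effect Oracle knows nothing about (all names here are hypothetical ; MY_OBJECTS and OBJ_ID are from the test above) :

```sql
-- Counter to record how many times the function body actually runs
CREATE OR REPLACE PACKAGE exec_counter IS
  g_count NUMBER := 0;
END exec_counter;
/

CREATE OR REPLACE FUNCTION f_demo (p IN NUMBER) RETURN NUMBER
DETERMINISTIC
IS
BEGIN
  exec_counter.g_count := exec_counter.g_count + 1;  -- side effect, for the demo only
  RETURN TRUNC(p / 100);
END f_demo;
/

-- With every row carrying the same OBJ_ID, expect far fewer
-- executions than rows fetched :
SELECT f_demo(obj_id) FROM my_objects;
SELECT exec_counter.g_count FROM dual;
```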