Oracle Optimizer: Moving to and working with CBO - Part 6 - Page 2

December 23, 2003

12. Statistics for SYS schema.

One issue that has always been in doubt is whether to generate statistics for the SYS schema. Generating statistics for dictionary tables owned by SYS is not recommended in Oracle 8i; the dictionary views that reference the SYS tables execute efficiently with the Rule Based Optimizer.

You may generate statistics in Oracle 9i, but you will have to evaluate this option for your setup. As per a note I came across, Oracle does not perform any regression testing with the dictionary analyzed, and there is a possibility of performance issues. Oracle 10g and above will require statistics generation for the SYS schema, as the RBO will be desupported.

This way, RBO will be used when accessing the dictionary and CBO when your application runs. The only catch is that CBO will resort to ALL_ROWS, which may cause issues in OLTP systems. Setting the initialization parameters appropriately and making extensive use of hints for application queries will stabilize the system in due course.
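For reference, the optimizer mode discussed above is controlled by the OPTIMIZER_MODE initialization parameter, which can be set at the session or instance level. The statements below are a sketch; confirm the appropriate value for your own setup before changing it.

alter session set optimizer_mode = FIRST_ROWS;

-- or instance-wide, on setups using an spfile:
alter system set optimizer_mode = ALL_ROWS scope = both;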

2. Run your setup in ALL_ROWS or FIRST_ROWS mode. Generate statistics for application-specific schemas. Avoid doing so for the SYS schema. Make extensive use of RULE hints for dictionary queries that are slow.

This way, dictionary-related queries will still run on RBO and the application can run on CBO. Some internal recursive queries may be affected on some setups; if the time taken is significant, do raise a TAR with Oracle Support.
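A slow dictionary query forced back to the rule-based optimizer with the RULE hint would look like the following; the query itself is only illustrative.

select /*+ RULE */ owner, segment_name, bytes
from dba_segments
where tablespace_name = 'USERS';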

3. Run your setup in ALL_ROWS or FIRST_ROWS mode. Generate statistics for application-specific schemas. Generate statistics for the SYS schema! Make extensive use of the RULE hint for dictionary queries that are slow, or allow the dictionary queries to run under the cost-based optimizer.

This is not recommended for Oracle 8i. Some internal recursive queries may run slow in this scenario also.

You may of course arrive at a strategy for your own setup. From my experience, all three are good options, depending on what is appropriate for your setup.

Verifying SYS statistics

To verify whether the SYS schema has statistics, check the LAST_ANALYZED column for the dictionary tables.
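One way to perform this check is the query below (a sketch; adjust the filter to your needs). Rows returned indicate SYS tables that have been analyzed.

select table_name, last_analyzed
from dba_tables
where owner = 'SYS'
and last_analyzed is not null;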

If you are generating statistics at the database level, chances are that SYS is also being analyzed. You may remove statistics for SYS using the following option; this can be added to your auto-statistics generation process, if any.

exec dbms_stats.delete_schema_stats('SYS');

13. How to analyze execution plans in CBO?

DML performance tuning in CBO can be a challenging and interesting task. With many new dependencies present, one will have to do more than just check for the use of indexes. I present here a brief note on what developers should look at when writing or fine-tuning queries.

13.1 Basic checks

These are some basic things that one should check whenever a performance issue is reported in CBO. You may add more checks as per your setup requirements.

3. Check if the degree is set to a value greater than 1, which invokes parallel processing on the tables.

select degree from dba_tables where table_name = 'GL_INTERFACE';

4. Check whether the SYS schema is analyzed or not, as per your setup requirements.

5. Check if the parameters affecting the optimizer are set as per the original setup specifications. You may store the original parameter settings in a table and compare them with V$PARAMETER to identify any changes made to the setup. This strategy is very important for maintaining multiple installations of the same application.

E.g.: maintain a table called sys_recommended_syspar_values (or as per your standards) in the SYS schema that holds the original setup values. In case of any issues, comparing the present setup with the recommended setup can provide a clue as to what could have gone wrong.
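Assuming the reference table stores the parameter name and its recommended value (the column names here are illustrative), the comparison could be sketched as follows; any rows returned are parameters that have drifted from the recommended setup.

select p.name, p.value current_value, r.value recommended_value
from v$parameter p, sys_recommended_syspar_values r
where p.name = r.name
and p.value <> r.value;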

6. If you are not sure which query is causing a performance issue, generate a trace at the session level or for the concerned processes, and evaluate the resulting output file to identify resource-intensive and time-consuming queries.
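Session-level tracing can be enabled as shown below; the SID and SERIAL# values are illustrative and should be taken from V$SESSION for the target session. The resulting trace file can be formatted with TKPROF.

alter session set sql_trace = true;

-- for another session, identified by SID and SERIAL# from V$SESSION:
exec dbms_system.set_sql_trace_in_session(9, 190, true);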

Also consider using DBMS_PROFILER; I have found this tool very handy for identifying time-consuming lines of code.
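A typical DBMS_PROFILER run brackets the code of interest and then queries the profiler tables (created by the PROFTAB.SQL script); the procedure name below is a placeholder for your own code.

exec dbms_profiler.start_profiler('tuning run 1');
exec my_slow_procedure;  -- the code being profiled (illustrative)
exec dbms_profiler.stop_profiler;

select u.unit_name, d.line#, d.total_time
from plsql_profiler_units u, plsql_profiler_data d
where u.runid = d.runid
and u.unit_number = d.unit_number
order by d.total_time desc;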

13.2 Elements to evaluate in execution plans

With CBO, many new execution paths have been made available. The execution plan shows information about the access path chosen; this should be evaluated in terms of best throughput and response time.

Expensive operations can be identified by checking three crucial elements:

Response time (mainly for OLTP systems)

Cost

Cardinality

Bytes (optional)

Most often, if a portion of the execution plan shows an extremely high cost or cardinality, it is a good place to start tuning the query.

Response time is a crucial aspect for OLTP systems. Execution time does not form part of the generated explain plan and should be measured explicitly. One way of doing this is with the SQL*Plus TIMING option, which shows the elapsed time between the firing of the query and the return of the results. The latest versions of SQL*Plus display timings in "hours : minutes : seconds . milliseconds" format.

e.g.:

Elapsed: 00:00:00.61
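A timing line like the one above is produced by enabling the option in SQL*Plus before running the statement; the query here is only an example.

set timing on
select count(*) from dba_objects;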

13.3 Generating the execution plans

Execution plans for individual queries can be derived using Oracle's EXPLAIN PLAN facility. This can be invoked with the EXPLAIN PLAN command or by enabling AUTOTRACE in SQL*Plus. A PLAN_TABLE is required for storing the execution plans; create it using the UTLXPLAN.SQL script.

Below is an example of using the EXPLAIN PLAN command. Two scripts, UTLXPLS.SQL (for serial execution) and UTLXPLP.SQL (for parallel execution), are provided by Oracle to show formatted execution plans. You may alternatively use your own customized query on PLAN_TABLE.
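The general pattern is sketched here (the query is illustrative): EXPLAIN PLAN stores the plan in PLAN_TABLE without executing the statement, and the Oracle-supplied script then formats it.

explain plan for
select owner, count(*) from dba_objects group by owner;

@?/rdbms/admin/utlxpls.sql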