Let’s try to test a couple of ideas that were suggested either at the end of my previous blog article, or in the comments section of that blog article. What if we change…

Let’s try changing the Oracle Database release version, the PGA_AGGREGATE_TARGET, and the server. That is likely too many changes to allow identifying what changed and why it changed, but we will use this server as the baseline for today’s article. Let’s check a couple of the settings for the database instance:
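The parameters of interest can be checked with a few SQL*Plus commands similar to the following (a sketch; these are the standard parameter names from the Oracle documentation, and the actual baseline values are shown in the output above):

```sql
-- Display the instance parameters that affect these tests
SHOW PARAMETER PGA_AGGREGATE_TARGET
SHOW PARAMETER OPTIMIZER_MODE
SHOW PARAMETER OPTIMIZER_INDEX_COST_ADJ
SHOW PARAMETER OPTIMIZER_INDEX_CACHING
```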

The above plan shows two hash joins, with the second of the hash joins (the first hash join executed) performed on the ROWIDs returned by two index fast full scans. Under 11.1.0.7 the plan looked like this:

The above was a simple hash join before – telling the optimizer that we only want the first 1,000 rows was sufficient to switch from a hash join to a nested loops join. Now let’s try again, this time joining on the column that has an index with a high clustering factor caused by the descending number sequence in that column:

This time the high index clustering factor, caused by the descending number sequence in column COL2, did not cause a change in the execution plan, unlike when we joined on the column with the ascending sequence of numbers. Continuing:

The above is the same as the baseline. So, we did see some changes when the OPTIMIZER_MODE was set to FIRST_ROWS_1000. Any predictions what will happen when the OPTIMIZER_MODE is set to FIRST_ROWS_100, FIRST_ROWS_10, or FIRST_ROWS_1?
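For anyone wanting to repeat the experiment, the OPTIMIZER_MODE may be changed at the session level before re-running the test queries, roughly like this (a sketch; FIRST_ROWS_n accepts only the values 1, 10, 100, and 1000):

```sql
-- Try each of the FIRST_ROWS_n values in turn, re-running the test queries after each change
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_100;
```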

PGA_AGGREGATE_TARGET = 2000M, OPTIMIZER_MODE=FIRST_ROWS_100, just listing the changes from when OPTIMIZER_MODE=FIRST_ROWS_1000:

The above execution plan, which was originally predicted to return 100 rows, is now predicted to return 11 rows, so the one hash join became two nested loops joins. At this point all of the execution plans for the test queries show nested loops joins where they previously showed hash joins – so there is little point in looking at the output from the test case that shows what happens when the OPTIMIZER_MODE is set to FIRST_ROWS_1.

Changing the OPTIMIZER_INDEX_CACHING parameter had no effect on the execution plans when those execution plans were compared with the baseline output.
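The OPTIMIZER_INDEX_CACHING experiment may be reproduced with a session-level change similar to the following (a sketch; the parameter defaults to 0 and accepts values from 0 to 100, representing the percentage of index blocks assumed to be cached):

```sql
-- Tell the optimizer to assume 50% of index blocks are already in the buffer cache
ALTER SESSION SET OPTIMIZER_INDEX_CACHING = 50;
```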

Changing the OPTIMIZER_INDEX_COST_ADJ to a value of 50 had no effect on the execution plans, except for the last execution plan. The execution plan now shows a hash join while the baseline shows two nested loops joins. OPTIMIZER_INDEX_COST_ADJ=50:

So, we saw a small change when experimenting with OPTIMIZER_INDEX_COST_ADJ, setting it to half of its normal value. What if we try a couple more values? Don’t say that someone didn’t warn you not to adjust that parameter. With OPTIMIZER_INDEX_COST_ADJ=20:
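For reference, each OPTIMIZER_INDEX_COST_ADJ experiment may be set up with a session-level change like this (a sketch; the parameter defaults to 100 and accepts values from 1 to 10000, scaling the calculated cost of index access paths):

```sql
-- Scale index access path costs to 20% of their calculated value
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 20;
```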

I think that this could be a problem. What else do we see in the output? Just like when OPTIMIZER_INDEX_COST_ADJ was set to 50, the final execution plan now shows a hash join rather than two nested loops joins as seen in the baseline output. What happened when OPTIMIZER_INDEX_COST_ADJ was decreased to 10? With the exception of the first two sort merge joins, the execution plans were identical to those found in the baseline output – the last execution plan switched back to two nested loops joins:

So, what happens when OPTIMIZER_INDEX_COST_ADJ is set to 1, a value that I have seen recommended a couple of times in various documents and a couple of forum posts? The first two execution plans that included sort merge joins with the other tested values of OPTIMIZER_INDEX_COST_ADJ now show an execution plan similar to the following:

The baseline output showed a hash join, with the entire cost of the execution plan at 14.

Pavan Kumar decided to try a couple of modifications to my original test case, allowing the HASH_AREA_SIZE to default to 128KB with an unset PGA_AGGREGATE_TARGET and a 10 character filler value in COL3 rather than a 200 character filler value in that column. He managed to obtain at least one sort merge join in an execution plan. You can see his test case here. I might try a similar change to the original test case to see what happens.
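Reproducing that variation of the test case likely requires switching the session to manual work area sizing so that HASH_AREA_SIZE takes effect (a sketch, assuming manual PGA management; 131072 bytes is the 128KB value mentioned above):

```sql
-- Switch to manual work area sizing so HASH_AREA_SIZE is honored
ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL;
ALTER SESSION SET HASH_AREA_SIZE = 131072;  -- 128KB
```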

So, what else do we need to test? Changing the row lengths, changing the join columns from NUMBER columns to VARCHAR2 columns, and adjusting the system (CPU) statistics from the default values to something else (hopefully realistic values) as suggested by Jan-Marten Spit.
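The system statistics adjustment might be sketched with DBMS_STATS.SET_SYSTEM_STATS, roughly as follows (the numeric values below are hypothetical, for illustration only – realistic values would be gathered from the actual hardware):

```sql
-- Hypothetical system statistics values for illustration only
EXEC DBMS_STATS.SET_SYSTEM_STATS('CPUSPEEDNW', 1500)  -- noworkload CPU speed, MHz
EXEC DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 8)       -- single block read time, ms
EXEC DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 10)      -- multiblock read time, ms
EXEC DBMS_STATS.SET_SYSTEM_STATS('MBRC', 16)          -- multiblock read count
```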

I encountered a fun problem today while performing a couple of tests with the AUTOTRACE feature in SQL*Plus. I am using a 32 bit 11.2.0.1 Oracle client connected to a 64 bit 11.2.0.1 Oracle database. Roughly 50% of the time when I issue “SET AUTOTRACE OFF” or “set autotrace off” I receive the SP2-0575 error and my session is disconnected from the database – the failure appears to strike at random when disabling AUTOTRACE. Interesting problem I think. Here is a partial output from the script:
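For context, these are the standard SQL*Plus AUTOTRACE commands involved (a sketch of typical usage; the last command is the one that intermittently raised the SP2-0575 error in my session):

```sql
SET AUTOTRACE ON          -- show query rows, execution plan, and statistics
SET AUTOTRACE TRACEONLY   -- suppress the query rows, keep plan and statistics
SET AUTOTRACE OFF         -- disable AUTOTRACE; intermittently failed here
```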

Hints for Posting Code Sections in Comments

********************
When the spacing of text in a comment section is important for readability (execution plans, PL/SQL blocks, SQL, SQL*Plus output, etc.) please use a <pre> tag before the code section and a </pre> tag after the code section:
<pre>

SQL> SELECT
  2    SYSDATE TODAY
  3  FROM
  4    DUAL;

TODAY
---------
01-MAR-12

</pre>
********************
When posting test case samples, it is much easier for people to reproduce the test case when the SQL*Plus line prefixes are not included - if possible, please remove those line prefixes. This:

SELECT
  SYSDATE TODAY
FROM
  DUAL;

Is easier to execute in a test case script than this:

SQL> SELECT
  2    SYSDATE TODAY
  3  FROM
  4    DUAL;

********************
Greater than and Less than signs in code sections are often interpreted as HTML formatting commands. Please replace these characters in the code sections with the HTML equivalents for these characters: