Comments on: SELECT Statement is Fast, INSERT INTO Using the SELECT Statement is Brutally Slow 1
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/
Miscellaneous Random Oracle Topics: Stop, Think, ... Understand
Last updated: Thu, 07 Dec 2017 00:01:11 +0000

By: Charles Hooper
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/#comment-2485
Mon, 20 Dec 2010 00:58:23 +0000

Great tips for troubleshooting the problem. I wonder if the OPTIMIZER_FEATURES_ENABLE value of 10.1.0.4 causes the optimizer to unconditionally perform the complex view merging, as 9i did. There is a definite behavior change in the optimization of some queries with the OPTIMIZER_FEATURES_ENABLE parameter set to 10.1.0.4 compared to 10.1.0.5 or 10.2.0.1. I need to spend some time digging through 10053 trace files to satisfy a curiosity about this unexpected performance issue.
By: Charles Hooper
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/#comment-2484
Mon, 20 Dec 2010 00:50:00 +0000

Jonathan,

Thank you for providing the link to that article. I recall reading about that feedback loop in your “Cost-Based Oracle Fundamentals” book. Speaking of which, I am still waiting for volumes 2 and 3 of the book series. 🙂

It is interesting that the performance problem is present in the test case script (found in the follow-up article) when OPTIMIZER_FEATURES_ENABLE is set to 10.1.0.4 in both Oracle Database 10.2.0.2 and 11.2.0.1, but the problem is not present in Oracle Database 10.2.0.5 when the test is repeated in either the Standard Edition or the Enterprise Edition (this will be illustrated in part 3).

By: Charles Hooper
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/#comment-2475
Sat, 18 Dec 2010 17:00:17 +0000

Part 2 of the blog article series is now on this blog. I am able to reproduce the execution plan change with a reproducible test case on Oracle Database 11.2.0.1 (see the execution plans at the end of the second article).
By: Jonathan Lewis
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/#comment-2474
Sat, 18 Dec 2010 16:55:02 +0000

Charles,

I forgot to say – after checking what was going on, I’d try to model it with a simpler example – “insert as select from a two-table join which could be satisfied by index-only accesses” (similar to the original nested loop join section).
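A minimal sketch of the kind of model Jonathan describes might look like the following (all table and index names here are invented for illustration, not taken from the original system):

```sql
-- Two tables whose join can be satisfied entirely from indexes
CREATE TABLE t1 (id NUMBER, val NUMBER);
CREATE TABLE t2 (id NUMBER, val NUMBER);

INSERT INTO t1 SELECT ROWNUM, MOD(ROWNUM, 100) FROM dual CONNECT BY LEVEL <= 100000;
INSERT INTO t2 SELECT ROWNUM, MOD(ROWNUM, 100) FROM dual CONNECT BY LEVEL <= 100000;
COMMIT;

-- Indexes covering every column the query touches, so no table access is needed
CREATE INDEX t1_i ON t1 (id, val);
CREATE INDEX t2_i ON t2 (id, val);

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T1', CASCADE => TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T2', CASCADE => TRUE);

CREATE TABLE t3 (id NUMBER, val NUMBER);

-- Compare the plan for the bare SELECT with the plan for the INSERT AS SELECT
INSERT INTO t3
SELECT t1.id, t2.val
FROM   t1, t2
WHERE  t1.id = t2.id;
```

If the SELECT on its own uses index-only access but the INSERT AS SELECT version switches to a different join order or access path, the simple model has reproduced the symptom.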

I am a late arrival at this party – but my approach would be as follows:
Comparing lines 4 – 7 in the first plan with lines 4 – 10 in the second plan, there’s clearly an example of complex view merging taking place (“group by then join” turns into “join then group by”).
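In outline, the transformation Jonathan names looks like this (hypothetical tables, for illustration only): complex view merging rewrites the first form into something equivalent to the second.

```sql
-- "Group by then join": the in-line view aggregates first, then joins
SELECT  p.part_id, v.total_qty
FROM    parts p,
        (SELECT part_id, SUM(qty) total_qty
         FROM   transactions
         GROUP BY part_id) v
WHERE   p.part_id = v.part_id;

-- After complex view merging, effectively "join then group by"
SELECT   p.part_id, SUM(t.qty) total_qty
FROM     parts p, transactions t
WHERE    p.part_id = t.part_id
GROUP BY p.part_id;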

The complex view merging would have been unconditional in 9i – but should be costed in 10g: so (although I rarely use the 10053 trace this early in an investigation) I would assume in this case that there was a bug in the optimizer code somewhere in the handling of the “insert as select”.

I’d use the no_merge hint suggested by Flado to check that the behaviour changed to the desired behaviour. And I’d look at three trace files (select, insert unhinted, insert hinted) to see if I could spot the place where CVM (complex view merging) failed to appear when expected, or appeared with an unsuitable cost; or where the cost of not merging appeared to be higher than the cost of merging.
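The NO_MERGE check and the 10053 trace files could be gathered along these lines (the view alias and table names are hypothetical stand-ins for the real statement):

```sql
-- Enable the 10053 (cost-based optimizer) trace for this session;
-- a trace file is only written if the statement is hard parsed
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Hinted insert: NO_MERGE(v) prevents the aggregate in-line view
-- from being merged into the outer query block
INSERT INTO part_list
SELECT /*+ NO_MERGE(v) */ v.part_id, v.total_qty
FROM   (SELECT part_id, SUM(qty) total_qty
        FROM   transactions
        GROUP BY part_id) v;

ALTER SESSION SET EVENTS '10053 trace name context off';
```

Repeating this for the bare SELECT, the unhinted INSERT, and the hinted INSERT yields the three trace files to compare.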

By: Charles Hooper
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/#comment-2465
Fri, 17 Dec 2010 13:18:57 +0000

I have not had any time to investigate this particular SQL statement – the NO_QUERY_TRANSFORMATION hint generated the execution plan that I expected to see, so I moved on to other items.
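For reference, the hint belongs in the SELECT portion of the statement; a sketch against hypothetical tables (not the original objects):

```sql
-- NO_QUERY_TRANSFORMATION disables cost-based query transformations
-- (view merging, subquery unnesting, etc.) for this statement only
INSERT INTO part_list
SELECT /*+ NO_QUERY_TRANSFORMATION */ v.part_id, v.total_qty
FROM   (SELECT part_id, SUM(qty) total_qty
        FROM   transactions
        GROUP BY part_id) v;
```

Because it disables all transformations at once, it is a blunt diagnostic; the more targeted NO_MERGE hint narrows the cause to view merging specifically.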

The execution plans that were produced from the 10053 trace follow (note that if a hard parse was not performed, nothing will be written to a trace file, so that probably explains why some of the trace file results are missing):
select_10.1.0.4:

According to the timing information, the SELECT statements with OPTIMIZER_FEATURES_ENABLE set to 10.1.0.4 completed in about 0.34 seconds, while the INSERT INTO completed in about 3 minutes and 16 seconds (I am not sure why I was seeing times of about 20 minutes before). With OPTIMIZER_FEATURES_ENABLE set to 10.2.0.2 with the default values for OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING the SELECT statements completed in about 1.3 seconds and the INSERT INTO statements completed in about 2 minutes and 40 seconds.
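The timing comparison above can be repeated in outline by toggling the parameter at the session level; the sketch below uses stand-in object names, not the original statement:

```sql
-- Revert the optimizer to 10.1.0.4 behaviour for this session only
ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE = '10.1.0.4';

SET TIMING ON

-- The SELECT on its own: fast
SELECT part_id, SUM(qty)
FROM   transactions
GROUP BY part_id;

-- The same query wrapped in an INSERT: the plan may change and run far slower
INSERT INTO part_list
SELECT part_id, SUM(qty)
FROM   transactions
GROUP BY part_id;

-- Repeat with the later optimizer version for comparison
ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE = '10.2.0.2';
```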

Thanks for the explanation. Can’t wait to see your test cases to simulate this.

Would you mind validating my claim below? While investigating this performance problem, if using hints (like NO_QUERY_TRANSFORMATION or NO_MERGE) results in the original plan (for the SELECT) being used, and the estimated cost of the revised query is less than the estimated cost of the plan for the slow query, it is safe to conclude that it is an optimizer bug. This is because the optimizer has clearly failed to select the cheapest plan to execute the query.

While the estimated cost of the “fast” plan is visible above (157), the estimated cost of the “slow” plan is not. Would you mind sharing the estimated cost details for the “slow” plan above? Considering that the “slow” plan expects large TEMP usage, as opposed to the no TEMP usage expected by the “fast” plan, I would be surprised if the CBO has indeed considered the “slow” plan to be the cheapest plan.

Great idea to use the APPEND hint, and extra credit (bonus points) for mentioning some of the potential issues with implementing the APPEND hint (inserting above the high water mark, triggers, integrity constraints, etc.). In this particular case only about 900 rows were being inserted into the table (I intentionally withheld that piece of information so that people responding to this blog article, and those who later read it, would see a broader range of suggestions, along with the potential warnings that should accompany those suggestions).
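For readers unfamiliar with the hint, a hedged sketch of a direct-path insert follows (table names are hypothetical); the comments summarize the caveats raised in the discussion:

```sql
-- Direct-path insert: rows are written above the high water mark,
-- bypassing the buffer cache for the inserted blocks
INSERT /*+ APPEND */ INTO part_list
SELECT part_id, SUM(qty)
FROM   transactions
GROUP BY part_id;

-- A commit is required before this session can query part_list again
COMMIT;

-- Caveats:
--  * space is allocated above the high water mark, so existing free
--    space in the table's blocks is not reused
--  * enabled row-level triggers or referential integrity constraints
--    cause Oracle to fall back to a conventional insert
```

For a load of only about 900 rows, the overhead and restrictions of direct-path loading are unlikely to pay off.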

The ORDER BY clause was left in the INSERT statement for a couple of reasons:
* Initially I forgot to remove it when converting the SELECT statement into an INSERT INTO … SELECT type statement.
* I left the ORDER BY clause in the INSERT INTO … SELECT statement to take advantage of the nearly pre-sorted ordering of the rows in the table’s blocks. For the initial pass through the table when it is actually selected from, this will be slightly helpful. However, the INSERT INTO … SELECT is only the starting point of what actually ends up in that table. Immediately after the INSERT INTO … SELECT is performed, a recursive routine is performed that explodes up to 7 levels of bills of material for those parts, and the rows are inserted back into the same table. The CLUSTERING_FACTOR for the index that will eventually be created on the PART_ID column was a consideration.
* If I had removed the ORDER BY clause, it *might* be expected that the execution plan for the standard SELECT statement would differ from the execution plan for the INSERT INTO … SELECT SQL statement – so there is a risk that I might cause the performance to change.
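The clustering-factor consideration in the second point can be checked once the index exists, for example (index and table names are hypothetical):

```sql
-- After loading in PART_ID order, the index's clustering factor should be
-- close to the number of table blocks rather than the number of table rows
CREATE INDEX part_list_i ON part_list (part_id);

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PART_LIST', CASCADE => TRUE);

SELECT index_name, clustering_factor
FROM   user_indexes
WHERE  index_name = 'PART_LIST_I';
```

A low clustering factor makes index range scans on PART_ID cheaper, which matters once the recursive bill-of-material explosion repeatedly reads the table by that column.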

I am once again impressed with the quality of the comments that are posted to this blog.

By: Charles Hooper
https://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/#comment-2458
Thu, 16 Dec 2010 12:16:50 +0000

Good questions. WORKAREA_SIZE_POLICY is set to AUTO, and PGA_AGGREGATE_TARGET is set to 2000M. Once I build a test case that simulates the problem, I can try different parameter settings to see if I am able to identify the cause of the execution plan change.
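The two parameters mentioned can be confirmed (and, with suitable privileges, adjusted for experimentation) roughly as follows:

```sql
-- Check the automatic work-area sizing parameters
SELECT name, value
FROM   v$parameter
WHERE  name IN ('workarea_size_policy', 'pga_aggregate_target');

-- PGA_AGGREGATE_TARGET is a system-level parameter
ALTER SYSTEM SET pga_aggregate_target = 2000M;
```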