
I have one huge, complex query which I am planning to break into a few smaller queries. This huge query takes a long time to execute and performs badly.
I read in some Oracle book that a few small queries instead of one big one can improve performance. Is this true?
How can this improve performance when the small queries will be doing the same processing as the one giant query?

There are, however, a few caveats. But first, the 'conventional logic':

- Big queries use more resources in one shot. Breaking one down into smaller pieces can help.

- The optimizer can handle the smaller pieces better.

Reply to conventional logic:

- While this *might* be true, it is certainly at the cost of the performance of the individual query. In other words, this *might* be better for the server as a whole, but is worse for the individual query. Kind of the opposite of parallel queries. Parallel queries use as many resources as necessary to return the result as quickly as possible - best for query, worst for server. Breaking the query into multiple sequential pieces will use fewer resources, but at the cost of elongating the total query time - worst for the query, best for the server.
Therefore, breaking the query up is not an optimization that helps the programmer - it helps the DBA.

- The optimizer can, indeed, handle smaller queries better, but this is relative! By default, the optimizer will *not* attempt to evaluate every single join permutation once the query has more than 5 tables. For this reason, and because of the max_permutations setting as well, smaller queries can truly be optimized better than large queries.
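To see why large joins strain the optimizer, consider the raw search space: n tables can be joined in n! different orders, before even counting access paths for each order. A quick sketch of how fast that grows:

```python
from math import factorial

# Join-order permutations for an n-table query: n! possible orders.
# (The optimizer caps how many it will actually evaluate; the point
# here is just how quickly the search space explodes.)
for n in range(2, 9):
    print(f"{n} tables -> {factorial(n):>6} join orders")
```

At 5 tables there are already 120 orders to consider; at 8 tables, over 40,000.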
*However*, that is not comparing apples to apples. You already *have* a large query:
- 1 big parse
- 1 big optimization pass for an execution plan
- 1 big execution
- 1 result set
Breaking that into multiple queries gives you:
- 1 medium parse
- 1 medium optimization pass for an execution plan
- 1 medium execution
- 1 result set *inserted into a temp table!!*
...repeat, almost...
- 1 medium parse
- 1 medium optimization pass for an execution plan
- 1 medium execution *using a temp table with no index!!*
- 1 result set *inserted into another temp table!!*
...rinse and repeat!

Temp tables become a pain when they feed into further complex queries. If they don't have an index, getting the data back out is a big pain; if they do have an index, the inserts are slowed. Either way, *they are overhead!*
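As a concrete sketch of the two shapes described above - using an in-memory SQLite database as a hypothetical stand-in for Oracle, with made-up table and column names - the broken-up version has to materialize its intermediate rows in a temp table just to hand them to the next step:

```python
import sqlite3

# Toy schema standing in for a real three-table join (names are invented).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE orders    (order_id INTEGER, cust_id INTEGER);
    CREATE TABLE customers (cust_id INTEGER, region_id INTEGER);
    CREATE TABLE regions   (region_id INTEGER, name TEXT);
    INSERT INTO orders    VALUES (1, 10), (2, 11);
    INSERT INTO customers VALUES (10, 100), (11, 101);
    INSERT INTO regions   VALUES (100, 'EAST'), (101, 'WEST');
""")

# One big query: one parse, one plan, one execution, one result set.
big = cur.execute("""
    SELECT o.order_id, r.name
    FROM orders o
    JOIN customers c ON c.cust_id = o.cust_id
    JOIN regions r   ON r.region_id = c.region_id
    ORDER BY o.order_id
""").fetchall()

# Broken up: the intermediate rows must be inserted into a temp table
# (extra writes, and no index on it unless you pay to build one).
cur.executescript("""
    CREATE TEMP TABLE step1 AS
        SELECT o.order_id, c.region_id
        FROM orders o
        JOIN customers c ON c.cust_id = o.cust_id;
""")
small = cur.execute("""
    SELECT s.order_id, r.name
    FROM step1 s
    JOIN regions r ON r.region_id = s.region_id
    ORDER BY s.order_id
""").fetchall()

print(big == small)  # same rows either way; the temp table is pure overhead
```

Both paths return identical rows - the second one just pays an extra parse, an extra plan, and a round of temp-table inserts to get there.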

Again, IMHO, it is a far better idea to tune the query itself. First, make sure you have given the optimizer every chance to get it right, which primarily means: join *everything that can be joined*!!!

If you have A=B AND B=C, remember to also add AND A=C. I won't go into my full speech on the reasoning behind this - you'll have to wait for the book. Beyond that, make sure you are using bind variables, so that the parse and optimization steps can be skipped on the next call. Finally, hint the query if you need to, until it follows the best path.
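On the bind-variable point, here is a minimal sketch (again Python with SQLite as a stand-in; Oracle would use :name binds rather than ? placeholders, and the table here is invented). Because the SQL text is identical on every call, the database can reuse the parsed statement and its plan instead of re-optimizing each time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# The '?' is a bind variable: the statement text never changes, so the
# parse/optimize work is shareable across all three calls. Concatenating
# the literal into the SQL string instead would defeat that reuse.
for limit in (1, 2, 3):
    (count,) = cur.execute(
        "SELECT COUNT(*) FROM t WHERE a <= ?", (limit,)
    ).fetchone()
    print(limit, count)
```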

I would be willing to bet that any complex query can be made to run faster in a single statement with the proper care than it can be made to run as separate statements.