12cR1

12c introduces another interesting new Parallel Execution feature - the parallel evaluation of FILTER subqueries. In pre-12c versions FILTER subqueries always had to be evaluated in the Query Coordinator. This had several consequences: in particular, the data driving the FILTER subquery always had to flow through the Query Coordinator, and hence represented a forced serial part of a parallel execution plan. This limitation also meant that, depending on the overall plan shape, the parallel plan was possibly decomposed into multiple DFO trees, leading to other side effects I've already outlined in some of my other publications. In 12c the FILTER subquery can now be evaluated in the Parallel Slaves, and the driving data no longer needs to be processed in the Query Coordinator. However, the resulting plan shape can be a little bit confusing.
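As a purely illustrative sketch (the tables T1 and T2 and their columns are made up here), a correlated subquery kept as a FILTER operation via the NO_UNNEST hint, combined with a parallel full table scan, demonstrates the scenario - in 12c the FILTER operation can then appear below the PX operators in the plan, indicating evaluation in the Parallel Slaves rather than in the Query Coordinator:

-- NO_UNNEST prevents subquery unnesting, so the correlated
-- subquery is evaluated as a FILTER operation
select /*+ parallel(t1 4) */
       count(*)
from   t1
where  exists (select /*+ no_unnest */ null
               from   t2
               where  t2.id = t1.id);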

SQL plan directives are a new concept introduced in version 12.1. Their purpose is to help the query optimizer cope with misestimates. To do so, they store in the data dictionary information about the predicates that cause misestimates. Simply put, the purpose of SQL plan directives is to instruct the database engine to either use dynamic sampling or automatically create extended statistics (specifically, column groups).
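For reference, the automatically created directives can be inspected via the DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS dictionary views, for example:

-- List SQL plan directives together with the objects they refer to
select d.directive_id, o.owner, o.object_name, o.subobject_name,
       d.type, d.state, d.reason
from   dba_sql_plan_directives d,
       dba_sql_plan_dir_objects o
where  o.directive_id = d.directive_id
order  by d.directive_id;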

Since the database engine automatically maintains (e.g. creates and purges) SQL plan directives, in some situations it is necessary to copy the SQL plan directives created in one database to another one. This can be done with the help of the DBMS_SPD package.
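A minimal sketch of such a transfer, assuming a schema SCOTT (made up for illustration): pack the directives into a staging table on the source database, transport that table (e.g. via Data Pump), and unpack it on the destination:

-- Source database: create the staging table and pack the
-- directives associated with objects owned by SCOTT
exec dbms_spd.create_stgtab_directive(table_name => 'SPD_STAGE', table_owner => 'SCOTT')

declare
  n number;
begin
  n := dbms_spd.pack_stgtab_directive(table_name  => 'SPD_STAGE',
                                      table_owner => 'SCOTT',
                                      owner       => 'SCOTT');
  dbms_output.put_line(n || ' directives packed');
end;
/

-- Destination database: after transporting SPD_STAGE
declare
  n number;
begin
  n := dbms_spd.unpack_stgtab_directive(table_name  => 'SPD_STAGE',
                                        table_owner => 'SCOTT');
  dbms_output.put_line(n || ' directives unpacked');
end;
/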

When certain SQL features that force non-parallel evaluation get used in pre-12c versions, like ROWNUM or certain Analytic Functions like LAG/LEAD, then - depending on the overall plan shape - Oracle can start to decompose the parallel execution plan into several so-called DFO trees (if you want to learn more about DFO trees and DFOs I recommend watching my online tutorial on my Youtube channel). Having multiple DFO trees in a single parallel execution plan comes with several side effects that are confusing and complicate matters unnecessarily: each DFO tree allocates its own PX slave set(s), so each one can potentially end up with a different DOP, which means you can have more than one DOP in a single parallel execution plan. Depending on the overall plan shape this might also mean that a DFO tree can get started multiple times.
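As a hypothetical sketch of such a construct (tables made up), applying ROWNUM to a parallel intermediate result set that is then joined further can force this kind of decomposition in pre-12c versions - the plan then shows more than one PX COORDINATOR operation:

-- ROWNUM applied inside the inline view forces serial evaluation
-- of that step, which in pre-12c can split the parallel plan
-- into multiple DFO trees
select /*+ parallel(4) */
       count(*)
from   (select t1.id, rownum as rn
        from   t1) v,
       t2
where  v.id = t2.id;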

Continuing from the previous part - which was about the Temp Table Transformation and join cardinality estimates - and using the same simple table setup, here is a slight variation of the previously used query to demonstrate the potential impact on single table cardinality estimates:
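Since the original table setup isn't reproduced in this excerpt, the following is only a made-up sketch of the general pattern - a materialized CTE queried with an additional filter, where the single table cardinality estimate for the filter on the temp table can be compared with the estimate for the same filter applied directly to the base table:

-- MATERIALIZE forces the Temp Table Transformation; compare the
-- estimated rows of the filter on the temp table with the same
-- filter applied directly to T1
with cte as (
  select /*+ materialize */ id, attr, filler
  from   t1
)
select count(*)
from   cte
where  attr = 1;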

Having recently published two notes about the Temp Table Transformation highlighting the heuristics-based decision and other weaknesses, for example regarding the projection of columns, it's time to publish some more notes about it. The transformation can also have a significant impact on cardinality estimates, both join and single table cardinality. Let's look at the difference in the join cardinality estimates of the following simple example:
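As a hedged illustration of the pattern (the actual example isn't included in this excerpt; table and column names are made up), a materialized CTE referenced twice in a self join is the kind of query where the join cardinality estimate over the temp table can differ noticeably from the estimate for the same join performed directly on the base table:

-- Joining the materialized CTE to itself; compare the join
-- cardinality estimate with that of the same join on T1
with cte as (
  select /*+ materialize */ id, attr
  from   t1
)
select count(*)
from   cte a, cte b
where  a.id = b.id;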

Some time ago I demonstrated the non-cost based decision for applying the Temp Table Transformation when using CTEs (Common Table/Subquery Expressions). In this note I want to highlight another aspect of this behaviour. Consider the following setup, which creates a table with deliberately wide columns:
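The original creation script isn't included in this excerpt; a minimal made-up sketch of such a setup could look like this - the point being that a materialized CTE projects all referenced columns into the temp table, so deliberately wide columns inflate the temp table segment written to disk:

-- Row generator creating a table with several wide (200 byte)
-- columns to make the effect of the projection clearly visible
create table t_wide
as
select rownum as id
     , rpad('x', 200) as col1
     , rpad('x', 200) as col2
     , rpad('x', 200) as col3
     , rpad('x', 200) as col4
from   dual
connect by level <= 100000;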

This is just a heads-up for those thinking about using Edition Based Redefinition (EBR) and enabling it on an existing schema with objects. Although EBR isn't exactly a new feature, its current adoption level is probably not that high (which will probably change in the future, as Oracle E-Business Suite now uses EBR as part of its default upgrade procedure, as far as I understood). I was recently contacted by someone who enabled EBR on an existing schema using ALTER USER ... ENABLE EDITIONS and had to use the "FORCE" option since there were (according to the official ALTER USER documentation) "objects that are not editionable and that depend on editionable type objects in the schema. ...
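For reference, the statement in question looks like this (the user name is made up here); without the FORCE keyword it raises an error in the situation described:

-- FORCE is required when the schema contains non-editionable
-- objects that depend on editionable type objects
alter user scott enable editions force;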

In the final part of this instalment I want to focus on the possible optimisation of remote access that I outlined in the initial part, which is based on the idea of running multiple concurrent remote branches of a UNION ALL to overcome the Query Coordinator bottleneck of straightforward remote queries that need to transfer larger amounts of data. For that purpose I now simply change my sample query to access the serial table T2 via the DB link defined in the setup of the initial part, like this:
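The concrete query isn't reproduced here, but as a rough sketch (DB link name and ID ranges made up), the idea is to split the remote access into several UNION ALL branches covering disjoint ranges, so that with the 12c concurrent UNION ALL feature (PQ_CONCURRENT_UNION hint) the branches can run concurrently instead of being serialised through a single Query Coordinator connection:

-- Each branch covers a disjoint ID range of the remote table,
-- allowing the branches to be executed concurrently
select /*+ parallel(2) pq_concurrent_union */
       count(*)
from   (
  select * from t2@remote_db where id between       1 and 1000000
  union all
  select * from t2@remote_db where id between 1000001 and 2000000
  union all
  select * from t2@remote_db where id between 2000001 and 3000000
);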