March 5, 2014

12c pq_replicate

Another day, another airport lounge – another quick note: one of the changes that appeared in 12c was a tweak to the “broadcast” distribution option of parallel queries. I mentioned this in a footnote to a longer article a couple of months ago; this note simply expands on that brief comment with an example. We’ll start with a simple two-table hash join – which I’ll first construct and demonstrate in 11.2.0.4:
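The original script isn’t reproduced here, so the following is a minimal sketch of the kind of query being discussed – the column names and the exact hints are my assumptions, chosen to force the broadcast distribution that the commentary below describes:

```sql
select  /*+
            leading(t1)
            use_hash(t2)
            no_swap_join_inputs(t2)
            pq_distribute(t2 broadcast none)
            parallel(t1 2)
            parallel(t2 2)
        */
        t1.padding, t2.padding
from    t1, t2
where   t2.id = t1.id
;
```

The 11.2.0.4 plan would then have a shape like this (a reconstruction consistent with the plan-line references in the commentary that follows, not the original dbms_xplan output):

```
---------------------------------------------------------------------------
| Id  | Operation                | Name     |    TQ  | IN-OUT | PQ Distrib |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |        |        |            |
|   1 |  PX COORDINATOR          |          |        |        |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |  Q1,01 | P->S   | QC (RAND)  |
|*  3 |    HASH JOIN             |          |  Q1,01 | PCWP   |            |
|   4 |     PX RECEIVE           |          |  Q1,01 | PCWP   |            |
|   5 |      PX SEND BROADCAST   | :TQ10000 |  Q1,00 | P->P   | BROADCAST  |
|   6 |       PX BLOCK ITERATOR  |          |  Q1,00 | PCWC   |            |
|   7 |        TABLE ACCESS FULL | T1       |  Q1,00 | PCWP   |            |
|   8 |     PX BLOCK ITERATOR    |          |  Q1,01 | PCWC   |            |
|   9 |      TABLE ACCESS FULL   | T2       |  Q1,01 | PCWP   |            |
---------------------------------------------------------------------------
```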

In this plan slave set 2 scans table t1 in parallel and broadcasts the result set to slave set 1 (lines 5 – 7). The significance of the broadcast option is that each slave in slave set 2 sends all the rows it has read to every slave in slave set 1. For a fairly large table with a high degree of parallelism this could be a lot of inter-process communication; the total number of rows passing through the PX message pool is “DOP x number of rows filtered from t1”.

After a slave in slave set 1 has received the whole of the t1 result set it builds an in-memory hash table and starts scanning rowid ranges (PX BLOCK ITERATOR) from table t2, probing the in-memory hash table to effect the join (lines 3,4, 8,9). Since each slave has a copy of the whole result set from t1 it can scan any chunk of t2 and handle the contents locally. Moreover, because slave set 1 isn’t reading its second input from a virtual table it is able to write its output immediately to the virtual table (:TQ10001) that feeds the query coordinator with the result (lines 1,2) – we don’t have to do a “hash join buffered” operation and buffer the entire second input before starting to execute the join.

So how does 12c change things? With the same starting data and query, here’s the execution plan:
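In outline the 12c plan has a shape like the following (again a sketch consistent with the description below rather than the original output – note the single table queue :TQ10000 and, in particular, the absence of a PX BLOCK ITERATOR above t1):

```
-------------------------------------------------------------------------
| Id  | Operation              | Name     |    TQ  | IN-OUT | PQ Distrib |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |        |        |            |
|   1 |  PX COORDINATOR        |          |        |        |            |
|   2 |   PX SEND QC (RANDOM)  | :TQ10000 |  Q1,00 | P->S   | QC (RAND)  |
|*  3 |    HASH JOIN           |          |  Q1,00 | PCWP   |            |
|   4 |     TABLE ACCESS FULL  | T1       |  Q1,00 | PCWP   |            |
|   5 |     PX BLOCK ITERATOR  |          |  Q1,00 | PCWC   |            |
|   6 |      TABLE ACCESS FULL | T2       |  Q1,00 | PCWP   |            |
-------------------------------------------------------------------------
```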

Notice, in particular, that we only have one virtual table (or table queue :TQ10000) rather than two – and that’s from a parallel query slave set to the query co-ordinator, parallel to serial; the query only uses one set of parallel query slaves. Until you run the query with rowsource execution statistics enabled and look at the output from v$pq_tqstat it’s not going to be immediately obvious what has happened, but we should see that somehow Oracle is no longer broadcasting the first table even though it’s still doing something in parallel with both tables.

The run-time statistics confirm that we’ve only used one set of slaves, and each slave in the slave set has scanned the whole of table t1. This means each slave can build the full hash table and then go on to read rowid ranges from table t2. We’ve managed to get the benefit of broadcasting t1 (every slave has the whole of t1 so we don’t have to scan and distribute the big table t2 through the PX message pool) but we haven’t had to clone it multiple times through the PX message pool.
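If you want to check this for yourself, query v$pq_tqstat from the same session immediately after the parallel query completes; it reports rows and bytes per table queue per slave process, and with replication in place you should see only the one table queue. A simple query along these lines will do:

```sql
select  dfo_number, tq_id, server_type, process, num_rows, bytes
from    v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, process
;
```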

Clearly there’s a trade-off here that Oracle Corp. has decided is worth considering. I’m guessing it’s biased towards Exadata where you might run queries with a very high degree of parallelism. In that case the overhead of task switching as large numbers of messages are passed around may (and this is pure supposition) be greater than the added cost of loading the table into the buffer cache (of each instance) and having each slave scan it from there. (Reminder – 11g introduced two “opposite” changes to tablescans: “serial direct reads” and “parallel in-memory scans”.)

There’s one little oddity in this replication – there’s a pair of hints: pq_replicate and no_pq_replicate to control the effect if you think the optimizer is making the wrong choice. I would have guessed that in my example the hint would read: /*+ pq_replicate(t1) */ as it’s table t1 that is read by every single slave. Strangely, though, this is what the outline section of the execution plan showed:
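The relevant pair of hints in the outline referenced t2 – the table being joined to – rather than t1. A sketch of the relevant fragment (the query-block naming is assumed, and the rest of the outline is elided):

```sql
/*+
    ...
    PQ_DISTRIBUTE(@"SEL$1" "T2"@"SEL$1" BROADCAST NONE)
    PQ_REPLICATE(@"SEL$1" "T2"@"SEL$1")
    ...
*/
```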

Footnote

Here’s a little anomaly, and a generic warning about “optimizer_features_enable”: I found that if I used the hint /*+ optimizer_features_enable('11.2.0.4') */ in 12c I could still get the pq_replicate() hint to work. Unfortunately there are a few places where the hint (or parameter) isn’t guaranteed to take the optimizer code backwards the full 100%.

Comments

I was just thinking: if we have segments left over with a default parallel degree greater than 1, then querying or joining such tables (or using CTAS on them) would certainly lead to parallel distribution – so we should be able to reproduce the same behaviour on 11.2.0.4 and 12c.

Hi Jonathan,
Why do you think the hint syntax is an oddity? I think the PQ_REPLICATE argument identifies the join rather than the table, as with USE_HASH and PQ_DISTRIBUTE.
If you prefer to see PQ_REPLICATE(T1) you can run the query with LEADING(T2 T1) USE_HASH(T1) SWAP_JOIN_INPUTS(T1). Same plan, just a different way to hint it (and possible here only because we have 2 tables).
PQ_REPLICATE(T2) just means Oracle uses replication when joining to T2. The table to replicate is determined implicitly because it will be the build table of the hash join.
Of course, if we invert the LEADING order we will have PQ_DISTRIBUTE(T1 NONE BROADCAST) instead of PQ_DISTRIBUTE(T2 BROADCAST NONE).
Regards,
Franck.

I’ll certainly buy that as an explanation – but there’s a different oddity to explain … why have the hint at all?

If you have to look at all the hash-join related hints to work out which table is going to be replicated, why not change the pq_distribute hint instead? After all, on your interpretation the pq_replicate() hint is essentially saying: “when looking at the pq_distribute hint, change BROADCAST into REPLICATE”. So why not make (alias none replicate) and (alias replicate none) legal options for pq_distribute? (I don’t think replication occurs for anything other than changing broadcast – but I could be wrong.)