That means that only the required SKU(s) are fed to the view, and proper indexes (such as RANGES_PK in this case) can be used. So, if you need to refresh only a few skus, the response time is going to be almost instantaneous – provided that you have only a sane number of ranges (a few) per sku. Hence you can use the same view both for calculating the prices of all skus (say, in a nightly batch) and for calculating a small subset of skus (say, online), which is a great help for maintenance and testing.
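For instance (a minimal sketch – the view and column names below are assumptions, since only RANGES_PK is named here), refreshing a single sku reduces to:

```sql
-- Hypothetical names: prices_v stands for the pricing view discussed
-- here, sku for its key column. The predicate is pushed inside the
-- view, so only that sku's ranges are read (via an index such as
-- RANGES_PK) instead of scanning all skus.
select *
  from prices_v
 where sku = :sku;
```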

running in parallel

Another desirable property is that the view can operate efficiently in parallel, at least in 11.2.0.3 (I have not tested other versions):

-------------------------------------------------------------------
| Operation                                     |IN-OUT| PQ Distrib |
-------------------------------------------------------------------
| SELECT STATEMENT                              |      |            |
|  PX COORDINATOR                               |      |            |
|   PX SEND QC (RANDOM)                         | P->S | QC (RAND)  |
|    VIEW                                       | PCWP |            |
|     WINDOW SORT                               | PCWP |            |
|      VIEW                                     | PCWP |            |
|       WINDOW SORT                             | PCWP |            |
|        VIEW                                   | PCWP |            |
|         WINDOW BUFFER                         | PCWP |            |
|          VIEW                                 | PCWP |            |
|           WINDOW SORT PUSHED RANK             | PCWP |            |
|            HASH JOIN                          | PCWP |            |
|             PX RECEIVE                        | PCWP |            |
|              PX SEND HASH                     | P->P | HASH       |
|               PX BLOCK ITERATOR               | PCWC |            |
|                TABLE ACCESS FULL              | PCWP |            |
|             PX RECEIVE                        | PCWP |            |
|              PX SEND HASH                     | P->P | HASH       |
|               VIEW                            | PCWP |            |
|                WINDOW SORT                    | PCWP |            |
|                 PX RECEIVE                    | PCWP |            |
|                  PX SEND HASH                 | P->P | HASH       |
|                   VIEW                        | PCWP |            |
|                    SORT UNIQUE                | PCWP |            |
|                     PX RECEIVE                | PCWP |            |
|                      PX SEND HASH             | P->P | HASH       |
|                       UNION-ALL               | PCWP |            |
|                        PX BLOCK ITERATOR      | PCWC |            |
|                         INDEX FAST FULL SCAN  | PCWP |            |
|                        PX BLOCK ITERATOR      | PCWC |            |
|                         INDEX FAST FULL SCAN  | PCWP |            |
-------------------------------------------------------------------

There is no serialization point (all servers communicate parallel-to-parallel), the rows are distributed evenly using a hash distribution function (probably over the sku), and all operations are executed in parallel.
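As a sketch of how such a plan can be obtained (the view name is an assumption, and the degree of parallelism is arbitrary), a parallel hint plus dbms_xplan is enough:

```sql
-- Hypothetical view name prices_v; degree 4 chosen arbitrarily.
explain plan for
select /*+ parallel(4) */ *
  from prices_v;

-- Show only the basic plan shape plus the parallel columns
-- (Operation, IN-OUT, PQ Distrib), as in the listing above.
select * from table(dbms_xplan.display(null, null, 'basic +parallel'));
```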

sku subsetting and partitioning

It is well known that analytic functions use sort operations heavily, which means (whether or not you are running in parallel) that the temporary tablespace may be used a lot, possibly too much – as actually happened to me, leading (in my case) to unacceptable performance.
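While such a statement is running, the sort workareas and their spills to temp can be watched via v$sql_workarea_active; a non-zero NUMBER_PASSES, or a large TEMPSEG_SIZE, signals exactly the kind of temp overuse described here (a monitoring sketch, not tied to any particular statement):

```sql
-- Workareas (sorts, hash joins, window sorts) currently active:
-- NUMBER_PASSES = 0 means an optimal (in-memory) workarea,
-- 1 means one-pass to temp, > 1 means multi-pass (usually very slow).
select operation_type, actual_mem_used, number_passes, tempseg_size
  from v$sql_workarea_active
 order by tempseg_size nulls last;
```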

Side note: as I’m writing this, I realize that I had probably been hit by the bug illustrated by Jonathan Lewis in Analytic Agony – but of course, overuse of temp can happen for large datasets even without the bug kicking in.

A possible solution is to process only a sub-batch of the skus at a time, to keep the sorts running in memory (or at worst one-pass to temp), leveraging the predicate-pushability of the view. In my case, I went one step further: I partitioned the table “ranges” by “sku_group”, replaced every occurrence of “sku” in the view with the pair “sku_group, sku”, and then ran something like: