Sunday, December 9, 2012

Introduction

A few years ago Jonathan Lewis published a blog post describing one of the interesting side effects of Oracle's Parallel Execution implementation: sometimes operations that are usually non-blocking get turned into blocking ones. Mostly these show up as additional BUFFER SORT operations in the parallel version of an execution plan from 10g on (pre-10g does the same internally but doesn't show it in the execution plan), but there is a special case, which is the HASH JOIN BUFFERED operation that gets used with the HASH data distribution of the join row sources.
Jonathan came to the conclusion that the HASH JOIN BUFFERED turns into a blocking operation by buffering the result set of the join. He based this on different trace files (the 10104 hash join trace and the 10046 extended trace) and on the fact that when the join produced no result (no matches between the two row sources) the otherwise obvious spill to disk no longer happened.
He showed this using an example that simply joined two tables using Parallel Execution and returned the result of the join to the client. Another interesting point of this example is that the BUFFERED operation takes place although it is not entirely obvious why, if you follow the explanation that "at most two Parallel Slave Sets can be active per Data Flow Operation" and hence sometimes both sets are busy and a third set would be required to receive the data produced by the other two. This is not the case with this simple example, as the data produced by the two sets simply needs to be returned to the client by sending it to the Query Coordinator.
While preparing my "Parallel Execution" seminar I wanted to demonstrate this via a simple test case in which two rather small tables are joined but produce a quite large result set. Since the HASH JOIN BUFFERED is supposed to buffer this large result set before returning it to the client, this should make the special behaviour quite obvious.

Test Case Results

However, the test case showed some interesting results suggesting that Jonathan's conclusions weren't entirely correct:
1. Although the 10104 hash join trace file reads "BUFFER(Compile) output of the join for Parallel Query" and "BUFFER(Execution) output of the join for PQ", it looks like the HASH JOIN BUFFERED operation in fact buffers the second row source rather than the result of the join operation. I'll demonstrate below why I believe this is so.
2. Although I could reproduce Jonathan's test case, in particular that no spill to disk takes place when the two row sources do not match, I believe that the point of the HASH JOIN BUFFERED is that it can take advantage of the fact that the hash table for the first row source is already built when the second row source is accessed. So in principle the operation only buffers data from the second row source that has a match in the hash table. Data that doesn't match isn't buffered - that is the special functionality of the HASH JOIN BUFFERED that distinguishes it from separate BUFFER SORT operations, which buffer unconditionally, and it explains why no obvious buffering takes place if the two row sources don't match.
3. Looking into the 10104 hash join trace file, it becomes obvious that the spilling to disk of the odd 9th partition as described in Jonathan's post takes place before the actual probe phase seems to begin (kxhfSetPhase: phase=PROBE_2), which again suggests that it cannot be the result set that gets buffered, since the result set will only be produced once the probe phase begins.
4. The implementation restriction of Oracle's Parallel Execution that requires these additional, artificial blocking operations does not seem to be "at most two Parallel Slave Sets can be active at the same time"; more precisely it seems to be:
"At most one data distribution can be active at the same time"
This includes the final data distribution to the Query Coordinator process in the case of queries, and it explains why the simple case of a two-table join using HASH distribution results in a BUFFERED operation: the PX SEND HASH operation for the second row source would have to be active at the same time as the PX SEND QC operation returning data to the Query Coordinator, because the HASH JOIN by default is only blocking while consuming the first row source to build the hash table, but is non-blocking while processing the second row source probing the hash table.
Since it doesn't seem to be supported to have two PX SEND operations active at the same time, some artificial blocking operation needs to be introduced - in this case the HASH JOIN BUFFERED, which first consumes the second row source completely before starting the actual probe phase. By doing so, the PX SEND operation used to distribute the second row source to the Parallel Slaves performing the hash join is no longer active when the actual probe phase starts, and the result set can therefore be produced and sent to the Query Coordinator using the then only active PX SEND QC operation.
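To make the claimed mechanics concrete, here is a deliberately simplified Python model of how I believe the HASH JOIN BUFFERED behaves. This is my interpretation of the test results, not Oracle's actual implementation: the hash table is built from the first row source, the second row source is then consumed completely with only the matching rows being buffered, and only afterwards does the probe phase produce the join result.

```python
def hash_join_buffered(build_rows, probe_rows, key=lambda row: row[0]):
    """Simplified conceptual model of HASH JOIN BUFFERED (my
    interpretation, not Oracle's code). Returns (buffered_rows, result)."""
    # Phase 1: build the hash table from the first row source
    hash_table = {}
    for row in build_rows:
        hash_table.setdefault(key(row), []).append(row)

    # Phase 2: consume the second row source *completely* before probing.
    # Rows without a match in the hash table are discarded immediately -
    # they are never buffered, which would explain why no spill to disk
    # is observed when the two row sources don't match at all.
    buffered_rows = [row for row in probe_rows if key(row) in hash_table]

    # Phase 3 (PROBE_2): only now is the join result produced; at this
    # point the PX SEND distributing the second row source is no longer
    # active, so only the PX SEND QC to the Query Coordinator remains.
    join_result = [
        (build_row, probe_row)
        for probe_row in buffered_rows
        for build_row in hash_table[key(probe_row)]
    ]
    return buffered_rows, join_result
```

With no matches between the row sources nothing gets buffered at all, matching Jonathan's observation that the spill to disk disappears in that case.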
The following formatted execution plan highlights the two PX SEND operations that would have to be active at the same time if there wasn't a blocking operation in between:

I've used manual workarea sizing to make the test repeatable.
Note that I've added a comment to the query that makes the query text unique in order to generate a new parent cursor for each test run (so you need to modify this comment for each run). The only reason for this is the limitation of DBMS_XPLAN.DISPLAY_CURSOR with Parallel Execution as outlined in one of my previous posts; otherwise the "ALLSTATS ALL" option of DISPLAY_CURSOR would aggregate the statistics over all executions rather than only the last parallel execution.
Because I also tested some other costing related issues I disabled CPU costing for this test, however the results should be exactly the same when enabling CPU costing.
So there are basically two sets of data: T2 is 1,000K rows and approx. 100MB in size, and T4 has twice the rows and size; however only 100MB out of the 200MB represent data that matches T2 on either ID or FK.
The result set is approx. 200 bytes per row, so for example 200MB if 1,000K rows are produced.
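As a quick sanity check of these figures, here is the arithmetic based on the sizes quoted above (my own back-of-the-envelope numbers, assuming ~100 bytes per uncompressed source row):

```python
SOURCE_ROW_BYTES = 100   # FILLER column dominates the uncompressed row size
RESULT_ROW_BYTES = 200   # a joined row carries both FILLER columns

t2_mb = 1_000_000 * SOURCE_ROW_BYTES / 1_000_000              # T2: ~100MB
t4_mb = 2_000_000 * SOURCE_ROW_BYTES / 1_000_000              # T4: ~200MB
id_join_result_mb = 1_000_000 * RESULT_ROW_BYTES / 1_000_000  # ~200MB
```

So if the result set were buffered, roughly 200MB of TEMP traffic would be expected for the ID join; if only the matching half of T4 is buffered, roughly 100MB.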
The tables are compressed using BASIC compression, which in this case results in a very good compression ratio, as the FILLER column is 100 bytes in size but has only one distinct value and therefore benefits a lot from the symbol replacement performed by BASIC compression.
The point of this compression is that it makes very obvious in this particular case that, while you can benefit a lot from compression at the storage, I/O and Buffer Cache level in general, at SQL execution time Oracle has to process uncompressed row sources (not compressed blocks). So all the workareas used for hash tables, sorts or simple buffering won't benefit from the compression but have to be big enough for the uncompressed row source data. Likewise, any data that gets distributed via PX SEND represents uncompressed data volume.
There are two important variations possible to this test case query:
1. Instead of joining on ID, which produces 1,000K rows, join the two tables on the FK column, which results in a huge result set of 1,000M rows (each row matches 1,000 rows from the other row source)
2. Use a variation of the query that doesn't require artificial blocking because a true blocking operation gets used:

This way the impact of the BUFFERED operation can easily be separated from other potential activity, like a workarea too small for an optimal hash join. If the operation completes without TEMP I/O activity when using a true blocking operation, but spills to TEMP when running in BUFFERED mode, then the I/O activity very likely comes from the additional buffering of data.

Detailed Analysis

Small Result Set

Running the variation in BUFFERED mode where only 1,000K rows get produced, the following can be seen from the DBMS_XPLAN.DISPLAY_CURSOR output (assuming you don't run this cross-instance in RAC as DBMS_XPLAN can only show relevant statistics for the local instance), using a block size of 8KB and a 32bit version of Oracle:

The statement was executed with a Parallel Degree of 2. Notice that only 2K (16MB) and 4K (32MB) blocks respectively were processed for reading the two row sources, but the hash join had to read and write two times 50MB (100MB of uncompressed data volume in total). If it was the result set that got buffered, I would expect it to read/write 200MB in total.
If you repeat this variation with the true blocking operation it shouldn't spill to disk at all, as indicated above by the O=2 (two optimal hash joins) in the "O/1/M" column - which is interesting on its own, since it confirms that the HASH JOIN operated in optimal mode even though it spilled to disk due to the BUFFERED operation.

Large Result Set

Running the variation in BUFFERED mode that joins on FK and therefore produces a huge result set looks like this (cancelled after a short while, so not run to completion in this case):

There are a few interesting points to mention:
1. If the query is executed without the outer query that filters all data, the first rows are returned pretty quickly. If it was the result set that got buffered, this shouldn't be the case; instead a huge TEMP space usage should be observed until finally the result set is returned to the parent operation/client.
2. The second row source is consumed completely before the join operation is completed, and a steady TEMP read activity can be observed while the data is returned to the client.
3. The data volume written to TEMP corresponds roughly to what was written to TEMP in the first example, and stays like that during the whole execution. It doesn't increase any further during the join operation.
The difference in TEMP usage compared to the first example might come from the fact that I used 75000000 as the workarea size for some of my initial test runs, and the output above comes from such an early run.
So this variation pretty clearly shows that it is not the result set that gets buffered. It looks like the second row source is what gets buffered, as it is already consumed completely before the join is completed.

Large Result Set, true blocking operation

Repeating the same variant with a true blocking operation that doesn't require the BUFFERED mode of the HASH JOIN looks like this (again not run to completion):

Notice the difference: no TEMP activity, and the second row source gets gradually consumed as the join progresses and data is returned to the parent operations.

BUFFERED vs. BUFFER SORT

In order to demonstrate the cunning optimization the HASH JOIN BUFFERED can perform, let's use a different distribution method that results in the second row source getting distributed via BROADCAST, and therefore requires in this case a separate, artificial BUFFER SORT operation when receiving the broadcasted data (as otherwise again two PX SEND operations would be active at the same time):

Notice how now the complete T4 row source was buffered twice (the side effect of broadcasting it to each Parallel Slave, in this case with degree 2), resulting in more than four times the TEMP space usage of the BUFFERED variant. So the separate BUFFER SORT operation obviously wasn't able to avoid buffering the data from T4 that doesn't match T2 (in which case it should have buffered only approx. 100MB of data twice), whereas the HASH JOIN BUFFERED simply discarded that data from T4 immediately without bothering to buffer it.
It is also interesting to note that the BUFFER SORT operation was reported as "optimal" although it obviously spilled to disk (roughly two times 221MB; the 221K shown is the old defect of DBMS_XPLAN.DISPLAY_CURSOR reporting the TEMP usage in the wrong unit).
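The difference in buffered volume between the two strategies can be summarized with a small back-of-the-envelope calculation. This is a conceptual model of the two behaviours, using the approximate volumes from this test case:

```python
def buffer_sort_volume_mb(probe_source_mb, degree):
    # A separate BUFFER SORT below a BROADCAST distribution buffers the
    # received row source unconditionally - and every slave receives a
    # full copy of the broadcasted data.
    return probe_source_mb * degree

def hash_join_buffered_volume_mb(matching_mb):
    # HASH JOIN BUFFERED only buffers probe rows that find a match in
    # the already built hash table; with HASH distribution each row is
    # sent to exactly one slave, so the total is just the matching volume.
    return matching_mb

# T4 is ~200MB uncompressed, ~100MB of it matches T2, degree 2:
broadcast_mb = buffer_sort_volume_mb(200, 2)      # 400MB buffered
buffered_mb = hash_join_buffered_volume_mb(100)   # 100MB buffered
```

Which matches the "more than four times" TEMP usage observed above.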

Very Small Result Set

I also did a complementary test where the result set generated is much smaller than the second row source, just in case there is another cunning optimization in place that could decide to buffer either the second row source or the result set depending on which is estimated to be smaller.

So although the result set is estimated to be only 24MB in this case, the amount of data spilled to disk is still roughly 100MB, which seems to suggest that it is always the second row source that gets buffered.

10104 Trace File Snippet

Finally, a sample extract from the 10104 hash join trace file showing that spilling to disk takes place before the PROBE_2 phase begins:

Footnote

The tests were performed on recent (11.2) releases as well as on 10.2.0.3, which is one of the versions Jonathan used for testing in his original post. All versions tested showed the same behaviour, so the buffering of the second row source doesn't look like a change introduced in recent releases.
Furthermore, please note that my OTN mini series on Parallel Execution, which I wrote a couple of months ago but which was only published recently, doesn't include this knowledge and therefore explains the BUFFERED operation and the reasons for the blocking operations partially incorrectly.

Summary

The test case results show that the HASH JOIN BUFFERED operation seems to buffer the second row source. In principle it operates like a BUFFER SORT on the second row source, but takes advantage of the fact that it only needs to buffer data that matches the first row source.
The reason why the artificial blocking operations are introduced seems to be that at most a single PX SEND operation can be active at the same time.

Friday, December 7, 2012

Introduction

DBMS_XPLAN.DISPLAY_CURSOR can be used to get more insight into the actual resource consumption at execution plan operation level when using the GATHER_PLAN_STATISTICS hint (from 10g on), or when setting the STATISTICS_LEVEL parameter to ALL (at session level; at system level the overhead is probably prohibitive).
As soon as a SQL execution is done (successfully, cancelled or with error) the corresponding extended data in the child cursor gets populated/updated, and the additional information about the actual runtime profile can be accessed via V$SQL_PLAN_STATISTICS or V$SQL_PLAN_STATISTICS_ALL - this is what DISPLAY_CURSOR uses to populate the additional columns in the formatted output of the execution plan.
This works well for normal, serial execution where a single session performs the SQL execution and allows gathering extended information about the following statistics on execution plan line level:
- Actual cardinalities vs. estimated cardinalities
- Actual time spent on each operation (only reliable when using STATISTICS_LEVEL = ALL or setting "_rowsource_sample_freq" to 1 which can have a significant impact on runtime due to overhead)
- Logical I/O
- Physical I/O
- Memory usage of workareas
- TEMP space usage
One crucial piece of information that is not available via this interface is how the actual time was spent (CPU vs. wait events).

Parallel Execution

However, gathering these Rowsource Statistics by default doesn't work well with Parallel Execution. When using the formatting option "ALLSTATS LAST", usually recommended to obtain the extended statistics for the last execution of a SQL statement, you'll only see the statistics related to the work performed by the Query Coordinator, but no information related to the work performed by the Parallel Slaves.
Here is an example DBMS_XPLAN.DISPLAY_CURSOR output for a Parallel Execution using the "ALLSTATS LAST" option:

You can partially work around this limitation by using the format option "ALLSTATS ALL" instead. This option means that the information provided by DBMS_XPLAN.DISPLAY_CURSOR is based on a different column set of V$SQL_PLAN_STATISTICS_ALL that aggregates across all executions of the SQL statement. For Parallel Execution the statistics representing the work performed by the Parallel Slaves are added to these columns, so using "ALLSTATS ALL" includes that information.
However, you need to be careful, since this means you can't distinguish between the last and previous executions of the same cursor. So if you execute the same cursor multiple times in parallel, "ALLSTATS ALL" will show statistics covering all these executions. In a test case scenario you can work around this by deliberately modifying the SQL text, for example by adding a corresponding comment, which leads to the creation of a separate, unique parent cursor. This ensures that "ALLSTATS ALL" effectively displays only information related to the last execution, since there is only a single (parallel) execution of that cursor.
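For a scripted test harness, this uniqueness trick can be automated along these lines (a hypothetical helper of my own, not part of any Oracle API):

```python
import uuid

def uniquify(sql_text):
    """Tag a statement with a one-off comment so every test run creates
    its own parent cursor; "ALLSTATS ALL" then aggregates over exactly
    one (parallel) execution of that cursor."""
    return f"{sql_text} /* run-{uuid.uuid4().hex} */"
```

Every call produces a distinct SQL text, so Oracle parses a fresh parent cursor each time.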
Here is again the same Parallel Execution as above, this time using the "ALLSTATS ALL" option:

Notice the difference - the work performed by the Parallel Slaves is (mostly) visible now. Apart from that, the "ALL" formatting option added some columns that are not shown when using the "LAST" option; this can be customized using the more granular formatting options of DBMS_XPLAN.
Reading the output is not as simple as for serial executions, in particular because there is a mixture of wall-clock / DB time for the activities related to the Query Coordinator and aggregated DB time for the Parallel Slaves.
Furthermore, the rule that the values for time / work are cumulative, which applies to serial execution plans, is not adhered to for Parallel Execution - at least not across Parallel Slave Sets / Table Queues and the Query Coordinator.

Multiple DFOs And Cross-Instance

However, depending on the exact details of the execution plan and the actual execution of the SQL statement, V$SQL_PLAN_STATISTICS_ALL and therefore DISPLAY_CURSOR still might miss information about the Parallel Execution even when using "ALLSTATS ALL".
In particular the following two points are important to consider:
1. If the Parallel Execution is cross-instance (runs on multiple nodes of a RAC) then DBMS_XPLAN.DISPLAY_CURSOR will only show information about the work performed on the local instance, since it only gathers information from the local V$SQL_PLAN_STATISTICS_ALL dynamic performance view. DBMS_XPLAN.DISPLAY_CURSOR doesn't show the complete picture in such cases.
Here is again the same execution plan as above, this time executed cross instance on two nodes participating in the execution:

Note how only "half" of the work is reported (except for the Query Coordinator work). When running DBMS_XPLAN.DISPLAY_CURSOR for the corresponding cursor on the second participating node, I get the other "half":

2. If the parallel execution plan consists of multiple so-called "Data Flow Operations" (DFOs - you can read more about those in my OTN mini series about Parallel Execution), indicated by multiple PX COORDINATOR operations, then these different DFOs will be represented by multiple child cursors at execution time. So each DFO ends up with a separate child cursor.
Since DBMS_XPLAN.DISPLAY_CURSOR cannot aggregate information across multiple child cursors the information displayed again will be incomplete in such cases.
You can run DISPLAY_CURSOR for each of the child cursors generated, but this doesn't give you the same level of information. Furthermore, depending on the version and actual circumstances, the additional child cursors might not inherit the corresponding rowsource statistics setting, so these child cursors might not even contain any additional information in V$SQL_PLAN_STATISTICS_ALL.
Here is again a similar execution plan as above, this time using a parallel TEMP table transformation that automatically results in a separate DFO and therefore a separate child cursor at runtime. The execution in this case was using a degree of 2 and was single instance:

Notice how the output suggests that the parallel execution part of the TEMP table transformation didn't start at all and didn't perform any work. If however the second child cursor related to the other DFO is analyzed, the following information gets reported:

Here you can see the missing parallel execution work actually performed.

Real-Time SQL Monitoring And XPLAN_ASH

If you are already on 11g and have the corresponding Diagnostic + Tuning Pack license, the best way to get a complete picture about Parallel Execution is the "active" Real-Time SQL Monitoring report. It shows information while the statement is still executing, doesn't have the above-mentioned limitations (so it copes with cross-instance executions and execution plans using multiple DFOs, although there are bugs in the current versions related to such plans), and besides that offers even more insight into the execution details than DBMS_XPLAN.DISPLAY_CURSOR / V$SQL_PLAN_STATISTICS_ALL.
It is interesting to note that Real-Time SQL Monitoring doesn't measure the actual time consumed at execution plan line level the way extended Rowsource Statistics do, which explains why it doesn't come with the same overhead. Since Real-Time SQL Monitoring analyzes ASH data instead, it can still come up with a reasonable execution plan line level work distribution (including the differentiation between CPU time and waits), although not as accurate as the actual timing information that can be gathered via Rowsource Statistics.
If you don't have a Tuning Pack license but at least the Diagnostic Pack, or you're still on 10g (+ Diagnostic Pack license), then you can use my XPLAN_ASH tool to gather some interesting information about Parallel Execution. Since Active Session History is available cross-instance and isn't limited to particular child cursors, it doesn't have the above limitations and can therefore provide the full picture about a SQL execution based on ASH data. In 10g, however, the ASH data has no relation to execution plan lines and misses some other information available from 11g on, so some important analysis at execution plan line level that can be done with 11g ASH data is not available in 10g.

Footnote

If you look carefully at the above execution plans you'll notice HASH JOIN BUFFERED operations that are reported as "Optimal" hash joins. This in principle means that the hash join operation itself could be done completely in memory. Why does DBMS_XPLAN.DISPLAY_CURSOR then show Read/Write/TEMP activity for the HASH JOIN BUFFERED operation? I'll cover this in detail in my next post - I believe the explanations (including my own) published so far for that type of operation are incorrect.

Summary

DBMS_XPLAN.DISPLAY_CURSOR doesn't work very well with Parallel Execution. For simple execution plans consisting of only a single DFO, and for single-instance executions, the "ALLSTATS ALL" option can be used as a workaround.
If available, Real-Time SQL Monitoring is the tool to use for Parallel Execution analysis. My XPLAN_ASH tool also offers some unique insights, in particular regarding systematic analysis of Parallel Execution work distribution skew.

Wednesday, October 24, 2012

A new version 2.0 of the XPLAN_ASH utility introduced here is available for download.
You can download the latest version here.
The change log tracks the following changes:
- Access check
- Conditional compilation for different database versions
- Additional activity summary
- Concurrent activity information (what is/was going on at the same time)
- Experimental stuff: Additional I/O summary
- More pretty printing
- Experimental stuff: I/O added to Average Active Session Graph (renamed to Activity Timeline)
- Top Execution Plan Lines and Top Activities added to Activity Timeline
- Activity Timeline is now also shown for serial execution when TIMELINE option is specified
- From 11.2.0.2 on: We get the ACTUAL DOP from the undocumented PX_FLAGS column added to ASH
- All relevant XPLAN_ASH queries are now decorated with comments so it should be easy to identify them in the Library Cache
- More samples are now covered and a kind of "read consistency" across queries on ASH is introduced
- From 11.2.0.2 on: Executions plans are now pulled from the remote RAC instance Library Cache if necessary
- Separate Parallel Slave activity overview (similar to "Parallel" tab of Real-Time SQL Monitoring)
- Support for the limited ASH capabilities of Oracle 10.2
So that's quite a big list of changes. Particularly noteworthy is that the script now supports the limited capabilities of 10gR2 ASH. Although the coolest features of the script (Activity and Parallel Distribution information at execution plan line level) are not available with 10g, the remaining features can still be very useful - in particular the Parallel Slave and Activity Timeline information for Parallel Execution analysis - so I believe this is a nice addition for those not yet on 11g and one of the most powerful ways of analysing Parallel Execution on 10g.
When enabling Rowsource Statistics you can even get actual rowsource cardinalities; however, not all Parallel Execution plans are supported via DBMS_XPLAN.DISPLAY_CURSOR - Parallel Execution plans using multiple Data Flow Operations (DFOs, see the previous post about XPLAN_ASH for a bit more info about DFOs) will generate multiple child cursors (one per DFO) and DISPLAY_CURSOR cannot "aggregate" the statistics information across multiple child cursors (besides, the "Rowsource Statistics" attribute is not preserved when enabling it at session level and those multiple child cursors get generated).
Functionality-wise I believe the most important stuff is now covered by this version of the script, and therefore the next releases will focus on fixing the numerous bugs that certainly have been introduced by this new version, and in particular on an improved representation of the information.
Very likely an HTML report similar to the HTML version of Real-Time SQL Monitoring would give a much better overview of the information provided than the messy output of the current SQL*Plus query results.
Due to the way the queries on the ASH data are now written, it should be possible to come up with a version that also supports S-ASH, the free ASH implementation.

Summary Of Changes

1. It runs a reasonable pre-check and informs if any of the objects potentially required by the script aren't accessible. You'll see an output similar to the following in such cases:

2. Furthermore the script no longer appears to "hang" if privileges are missing or anything else unexpected happens. Actually the script wasn't hanging, but waited for input with TERMOUT set to OFF, since some of the defines populated under regular conditions weren't properly populated when things went wrong. This has been corrected; you should now see the above privilege check output along with regular follow-up error messages, but no hanging script any longer.
3. The output has been re-aligned and further sections have been added. In particular the various "Activity" / "Concurrent Activity" summaries should be helpful to get a better understanding of the overall activity profile of the query and concurrently ongoing database activity. See below for more details about these new sections.
Note that in case of a cross-instance Parallel Execution the summaries will also be provided on instance level in addition.

10.2 Support

Except for a few sections that simply aren't available when running on pre-11g, the script shows quite similar information when used with 10.2.
The biggest difference when using the script with 10.2 is that 10.2 ASH neither has a clear indicator of a SQL execution instance (the SQL_EXEC_START / SQL_EXEC_ID columns added in 11g) nor a relation to the execution plan lines (SQL_PLAN_LINE_ID etc.).
Therefore the SQL_EXEC_START parameter is mandatory with 10.2 and simply specifies the lower sample time boundary of the ASH data to query. Instead of the non-existent SQL_EXEC_ID, the script expects a SQL_EXEC_END date input similar to SQL_EXEC_START, which determines the upper sample time boundary and is also mandatory.
Likewise, on 10.2 the script doesn't support searching for the most recent execution like it does on 11g, so - as already outlined - it is up to you to determine which ASH data should be searched for a particular SQL_ID by specifying both SQL_EXEC_START and SQL_EXEC_END.
Note that if the ASH samples at SQL_EXEC_START don't include samples for the SQL_ID in question, then the script uses the "closest" samples to SQL_EXEC_START found within the range SQL_EXEC_START / SQL_EXEC_END to determine the session information that will be used to limit the ASH data.
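This "closest samples" logic can be illustrated with a simplified sketch (my reading of the script's behaviour; the real implementation naturally works on V$ACTIVE_SESSION_HISTORY, not Python lists):

```python
def closest_sample_time(sample_times, sql_exec_start, sql_exec_end):
    """Return the sample time for the SQL_ID closest to sql_exec_start
    within the mandatory [sql_exec_start, sql_exec_end] range, or None
    if the SQL_ID wasn't sampled in that range at all. Times are plain
    numbers here for simplicity (think seconds since some epoch)."""
    in_range = [t for t in sample_times
                if sql_exec_start <= t <= sql_exec_end]
    if not in_range:
        return None
    # pick the sample nearest to the specified start time
    return min(in_range, key=lambda t: abs(t - sql_exec_start))
```

The session information found at that sample time is then used to limit the ASH data considered for the analysis.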

Short Introduction

Since the initial post was a bit lengthy, allow me to use this opportunity to summarize the main purpose of the script:
1. It provides output that is to a significant degree similar to the information provided by Real-Time SQL Monitoring, even in cases where the Real-Time SQL Monitoring report is no longer available.
In particular it allows pulling this information even from the DBA_HIST* views, something that is not possible with Real-Time SQL Monitoring, and therefore allows pulling detailed reports about SQL executions as long ago as the configured AWR retention.
Note that this is applicable to both serial and Parallel Execution, and the tool can also provide information about currently executing SQL statements.
2. For Parallel Execution it offers some important clues, some of which aren't provided elsewhere, not even by Real-Time SQL Monitoring (at least not up to version 11.2):
- It shows information about the Average Active Sessions (the AVERAGE_AS column below) at different granularities. This information is crucial for understanding how well the work was distributed among the Parallel Slaves. When looking at the sample report below it becomes immediately obvious that the AVERAGE_AS figures are way below the parallel degree used.

In particular the "Parallel Slave activity" section, which corresponds to the "Parallel" tab of Real-Time SQL Monitoring, and the "Activity Timeline", which corresponds to a mixture of the "Activity" and "Metrics" tabs, allow spotting any anomalies regarding skew (both temporal and data distribution skew).
The example below shows one Parallel Slave being sampled much more often than the others. The Activity Timeline (scroll to the right to see the relevant columns) shows that the expected number of slaves were active at the same time only for a short period, followed by a long period of only one average active session.

Note that this information doesn't tell you anything about the efficiency of the execution plan performed; it just tells you how many Parallel Slaves were active concurrently on average. Whether the operation performed is efficient or not is a different question - however, even an otherwise efficient execution plan might run much longer than necessary if the work isn't distributed well among the Parallel Slaves.
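The AVERAGE_AS figure itself is straightforward to derive from ASH: with one sample per active session per second, the average active sessions over an interval is just the sample count divided by the interval length. A minimal sketch (simplified; the script computes this per time bucket and, for cross-instance executions, per instance):

```python
def average_active_sessions(sample_times):
    """sample_times: one entry per ASH sample row (one row per active
    session per 1-second sample). Average active sessions = number of
    samples divided by the number of seconds covered."""
    if not sample_times:
        return 0.0
    seconds_covered = max(sample_times) - min(sample_times) + 1
    return len(sample_times) / seconds_covered

# Two slaves fully active over 10 seconds -> 20 samples -> AAS of 2.0
samples = [t for t in range(10) for _ in range(2)]
```

If a parallel execution with degree 2 yields an AAS close to 1, roughly half the potential capacity sat idle, which is exactly the kind of anomaly the report makes visible.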
- Furthermore it allows systematic troubleshooting of data distribution skew by showing the number of samples per process at execution plan line level. This makes it easy to spot those execution plan operations that were affected by data distribution skew. Armed with this knowledge it can be systematically evaluated whether changing the data distribution method of the corresponding join operations improves the situation.
The "Parallel Distribution ASH" and "Parallel Distribution Graph ASH" columns added to the DBMS_XPLAN.DISPLAY* output provide that information.
The "Parallel Distribution ASH" shows the process names along with the number of samples found (in descending order). For Parallel Slaves, this is Pxxx, for all other process names it is the actual name found in the PROCESS column. If the execution is cross-instance, the name is prefixed with the instance number:

Ideally the number of samples per process should be quite similar; if there is a significant difference, then very likely the underlying cause is an uneven data distribution among the Parallel Slaves.
This is how it should ideally look:

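To illustrate the idea behind that column, here is a minimal Python sketch. The sample data, process names and plan line number are made up; the script itself of course derives this from the real ASH data:

```python
from collections import Counter

# Hypothetical ASH-like sample rows: (process_name, plan_line_id)
samples = [
    ("P000", 2), ("P000", 2), ("P000", 2), ("P000", 2),
    ("P001", 2), ("P002", 2), ("P003", 2),
]

# Count samples per process for one execution plan line and order
# descending - the shape of the "Parallel Distribution ASH" column
per_process = Counter(p for p, line in samples if line == 2)
distribution = sorted(per_process.items(), key=lambda kv: -kv[1])
print(distribution)
```

Here P000 with four samples versus one sample for each of the other slaves would hint at data distribution skew on that plan line.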
The "Parallel Distribution Graph ASH" attempts a graphical representation of the information that you see in the "Parallel Distribution ASH" column.
The idea of the "Graph" is that you don't need to work through all the rows of the execution plan in the "Parallel Distribution ASH" column and look at the actual sample numbers, but can simply sweep through the "Graph" to easily identify the potential skew candidates.
So, if you have a uniform distribution of work among the Parallel Slaves, you'll see something like this:
0123456789ABCDEF... (the 16 hex characters simply repeat if more than 16 slaves were sampled for an execution plan line)
But, if you have a non-uniform distribution of work among the Parallel Slaves, you'll see something like this:
000000000001111223
or in extreme cases
0000000000000
which means that only a few slaves (or only one in the last example) had to do all of the work.
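The way such a graph can be derived from per-slave sample counts can be sketched as follows. Note this is only an illustration of the idea, not the exact algorithm used by the script; the function name and width are made up:

```python
# One hex digit per slave, repeated in proportion to that slave's share
# of the total samples of the execution plan line
def distribution_graph(samples_per_slave, width=18):
    total = sum(samples_per_slave)
    graph = ""
    for idx, count in enumerate(samples_per_slave):
        digit = format(idx % 16, "X")  # digits wrap around after 16 slaves
        graph += digit * round(width * count / total)
    return graph

print(distribution_graph([4, 4, 4, 4]))     # uniform: equal runs of digits
print(distribution_graph([60, 20, 10, 2]))  # skewed: dominated by slave 0
```

The first call produces equal runs of "0" to "3", the second a long run of "0" with the remaining slaves barely visible.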
If the graph doesn't correspond to what you see in the "Parallel Distribution ASH" column - the distribution of work looks skewed, but the graph doesn't reflect that as just explained - then try the "ASH,DISTRIB,TIMELINE" option instead.
In this case even a single sample of a single Parallel Slave will show up as a very wide graph (so this option might produce false positives):
0000000000000000000000000
which you then need to cross-check with the "Parallel Distribution ASH" column for relevance: If you only have a few samples per line, then it is probably not relevant since you simply haven't spent significant time there.
But the advantage of this representation is that skew becomes visible that might not be obvious when using the default DISTRIB_REL option.
The underlying reason is that the default option shows the distribution graph relative to the operation with the greatest number of samples assigned. So if there are execution plan lines with a much higher sample count than the remaining lines (but those samples are not skewed), the graph of the other lines might be so compressed that their skew doesn't become obvious.
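A small sketch of that compression effect, again using made-up numbers and a simplified graph function rather than the script's actual logic:

```python
# Scale each line's bars relative to the plan line with the most samples.
# Line "A" (1000 evenly distributed samples) dominates; line "B" is badly
# skewed (90 of its 100 samples on one slave), but its bars get compressed
# to almost nothing, hiding the skew.
def bar(samples_per_slave, max_samples, width=20):
    return "".join(
        format(i % 16, "X") * round(width * s / max_samples)
        for i, s in enumerate(samples_per_slave)
    )

lines = {"A": [250, 250, 250, 250], "B": [90, 5, 3, 2]}
max_total = max(sum(v) for v in lines.values())  # 1000, from line A
for name, per_slave in lines.items():
    print(name, bar(per_slave, max_total))
```

Line "B" ends up as just "00", so its skew only becomes apparent with an absolute scaling like the "TIMELINE" variant.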

Similar to the previous version this section shows a general summary of the ASH samples found for this SQL execution. If a cross-instance execution is detected, the summary will in addition be shown on instance level, which allows understanding to what extent the instances were involved in the execution.

This is a new section that takes advantage of the fact that ASH data covers all activity, and therefore shows a summary of the ASH samples found during the execution that do not belong to this SQL execution. This can provide important clues as to why you might see variations in runtime although execution plans and data are similar. Note that for cross-instance executions the concurrent activity covered is limited to the instances participating in the execution. Although in principle the activity on non-participating instances can also influence this SQL execution for various reasons, I have decided to exclude that information - which can of course easily be changed in the corresponding queries.

This section is more or less unchanged from the previous version and only shows up if Parallel Execution gets used, you're running 11g+ and an execution plan could be found for the SQL statement.
It will show details about the different Data Flow Operations (DFOs) a Parallel Execution can consist of, in particular the activity related to each DFO, the Parallel Degree per DFO (which can differ from DFO to DFO) and whether the DFOs were active at the same time or not. In case of a cross-instance Parallel Execution this information will be broken down per participating instance.
Note that from 11.2.0.2 on the actual Parallel Degree is stored in the newly added PX_FLAGS column of the ASH data, so you get an additional column ACTUAL_DEGREE, whereas the ASSUMED_DEGREE uses the number of sampled slaves and the execution plan operations from the corresponding execution plan section for an educated guess about the degree.

This new section corresponds to the "Parallel" tab of the Real-Time SQL Monitoring and only shows up if Parallel Execution was detected. It is particularly helpful for Parallel Execution analysis on 10g, since there the even more helpful analysis on execution plan line level is not available.
It will show the activities found in the ASH data for every process sampled as part of the Parallel Execution.
The most useful information in this section is very likely the "Activity Graph" column, as it allows easily spotting distribution problems, although it doesn't allow the systematic troubleshooting of such issues that the execution plan line level information does, as explained above.
Depending on the version, the "Top Activities" and "Top Active Plan Lines" will also be shown per process. As usual the N of the Top N can be configured in the configuration section of the script.

This is again a new section that summarizes the activity found for this SQL execution and in principle corresponds to some part of the header information shown in Real-Time SQL Monitoring.
The activity is broken down between CPU and non-CPU, and for non-CPU the different Activity Classes will be shown. In case of a cross-instance Parallel Execution this section will be repeated, this time broken down per instance.

This new section corresponds to the previous one but shows a more granular breakdown of the activity on wait event level rather than wait class level. Apart from that the same applies as above; it will likewise be repeated on instance level in case of a cross-instance Parallel Execution.

This section was previously called "Average Active Session Graph", but since a lot of information has been added it is now called "Activity Timeline" and will also be shown for serial execution if the "TIMELINE" option was specified.
In addition to the already previously available information about the average active sessions and their activity (CPU or non-CPU) plus PGA/TEMP now the top active plan lines (11g+), activities and processes per bucket will be shown. Note that the N of the "Top N sections" can be configured in the "configuration" section of the script.
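The basic arithmetic behind the average active sessions per bucket is straightforward; here is a minimal sketch with made-up sample data, assuming one ASH sample per second and active session:

```python
from collections import defaultdict

# Hypothetical ASH-like samples: (sample_time_in_seconds, session_id).
# With one sample per second and session, the average active sessions of
# a bucket is the number of samples divided by the bucket duration.
samples = [(0, 10), (0, 11), (1, 10), (1, 11), (2, 10), (3, 10)]

BUCKET_SECONDS = 2
buckets = defaultdict(int)
for t, sid in samples:
    buckets[t // BUCKET_SECONDS] += 1

for b in sorted(buckets):
    print(f"bucket {b}: AAS = {buckets[b] / BUCKET_SECONDS}")
```

Here the first bucket shows 2 average active sessions, the second only 1, which is the kind of drop the Activity Timeline makes visible.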

This section is pretty much unchanged from the previous version and shows the execution plan (if available) along with the activity per execution plan line, as explained in the original post for version 1.0 of the script. Note that the script detects whether an execution plan is available or not and shows the activity per execution plan line in a different format if no execution plan could be found.
A note to RAC users: In versions prior to 11.2.0.2 the execution plan can only be pulled from the local instance - in the worst case this execution plan is an incorrect one, since for the same SQL_ID and child cursor number different plans could reside in the different Library Caches of the RAC instances.
From 11.2.0.2 on the correct execution plan will be pulled from the corresponding RAC instance by using an (undocumented) function that allows a "remote" instance execution of a query.

Experimental Stuff

There is a global switch _EXPERIMENTAL at the beginning of the configuration section. By default this is disabled because the information shown could be called "unreliable" and potentially "misleading". If you enable it by setting the configuration switch to an empty string, the I/O figures from the ASH data (11.2+ only) will be shown at various places of the report. Note that this data is unreliable and usually falls short of the actual activity (I've never seen it report more than the actual activity). Since sometimes unreliable figures can be much better than nothing at all, you can enable it in cases where you want, for example, to get an idea whether the I/O was in the range of MBs or GBs - this is something you should be able to tell from the ASH data.
Likewise the average and median wait times from ASH will be shown at different places of the report if experimental mode is turned on. It is important to understand what these wait times are: they are waits that were "in-flight", i.e. not yet completed when the sampling took place. Doing statistical analysis based on such sampled, in-flight wait times is sometimes called "Bad ASH math", but again, if you know what you are doing and keep reminding yourself what you're looking at, there might be cases where this information is useful. For example, if you see that hundreds or thousands of those "in-flight" waits were sampled with a typical wait time of 0.5 secs (for a multiblock read, say) where you would expect a typical wait time of 0.005 secs, this might be an indication that something was broken or went wrong and is worth further investigation.
This is what the affected sections look like with experimental turned on:

Note again that the wait times shown (average and median) are not true wait times, since the wait was still ongoing when the sample of the session was taken, which means that it wasn't completed yet; the actual wait time for that wait will have been longer, and most of the waits won't be covered by the sampling anyway, so, as already explained, you need to be very cautious with that information.
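The effect can be illustrated with a small simulation (all numbers made up): at each sample point only the elapsed part of an ongoing wait is visible, so the recorded times understate the true durations, and short waits are likely to be missed entirely.

```python
# Hypothetical waits: (start_time, true_duration) in seconds
waits = [(0.0, 0.5), (1.3, 0.5), (2.7, 0.004)]

# Sample points - note how unlikely it is to catch the 4 ms wait at all
sample_times = [0.2, 1.5, 2.701]

inflight = []
for t in sample_times:
    for start, duration in waits:
        if start <= t < start + duration:  # wait still ongoing at sample
            inflight.append(t - start)     # only the elapsed part is seen

# Every recorded value is smaller than the true duration of its wait
avg_inflight = sum(inflight) / len(inflight)
```

Averaging these elapsed fragments is exactly the "Bad ASH math" mentioned above - useful for spotting gross anomalies, not for precise wait time analysis.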

This section is only available from 11.2 on and is only shown when the so-called "experimental" mode is enabled (disabled by default, see the configuration section of the script).
From 11.2 on Oracle enriches the ASH data with accumulated statistics related to DB Time, CPU Time and I/O activity. This is a new approach since the data is not truly sampled, but accumulated from sample to sample, so in theory represents exact delta figures.
Note that this also means that for the historic ASH data written to AWR (where only every tenth sample is kept) these values need to be aggregated over all samples in between that are not written to AWR in order to have meaningful information in these columns of the historic ASH data.
However, as it turns out, at least in 11.2 apparently not all I/O is covered by these statistics; for example, I/O to the TEMP tablespace is not represented correctly and usually falls short of the actual I/O performed.
Furthermore there seem to be inconsistencies in the data that could indicate that it is not necessarily a reliable data source (the script to some extent attempts to detect such outliers and filter them out). Hence that information can be quite incorrect and therefore it is marked as "experimental". Nevertheless it can be much better than no I/O related information at all, therefore I decided to add it as a separate section.
Note that the I/O layer figures can be more than simply read plus write I/O, because these values include any software mirroring performed by Oracle via the ASM layer. So for example any write to ASM with ASM normal redundancy will be doubled in the I/O layer figures.
Please note furthermore that the Cell Offload Efficiency figure is only relevant to Exadata environments where the smart scan capabilities can reduce the actual amount of I/O. Of course you can easily end up with negative efficiency percentages if a lot of mirroring during writing takes place, or HCC compressed data gets uncompressed in the cells and sent to the compute nodes.
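A back-of-the-envelope example with made-up numbers (the exact formula the script uses is not reproduced here) shows how mirroring alone can push such a percentage below zero:

```python
# ASM normal redundancy writes every block twice, so the I/O layer
# figure is roughly double the logical write volume
logical_io_gb = 10           # write volume seen by the SQL layer
asm_copies = 2               # normal redundancy: two mirror copies
io_layer_gb = logical_io_gb * asm_copies

# A savings-style percentage relative to the logical volume goes negative
savings_pct = (1 - io_layer_gb / logical_io_gb) * 100
print(f"{savings_pct:.0f}%")  # prints -100%
```

The same arithmetic applies when HCC compressed data gets decompressed in the cells: more bytes arrive at the compute nodes than were read from disk.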

From 11.2 on, and if the "experimental" mode is enabled (as explained above already), the I/O figures per bucket will also be shown.
Although not part of the experimental stuff, please note that in particular the TEMP figures have turned out not to be very reliable; furthermore, short peaks in PGA / TEMP usage are not well represented in the ASH data.