
A Monograph on ASH - practical_ash.doc - 31 March 2011
Practical Use of Oracle Active Session History - Go-Faster Consultancy Ltd. - Confidential
A Very Brief Overview of Active Session History
Active Session History (ASH) was introduced in Oracle 10g. It samples the activity of each
active1 database session every second. The data is held in a buffer in memory in the database.
The design goal is to keep about an hour of history (your mileage will vary). If a session is not active it
will not be sampled. The in-memory buffer is exposed via a view called
v$active_session_history.
You could sort of simulate some of ASH by taking a snapshot of v$session for every session,
but the overhead would be prohibitive. ASH is built into the Oracle kernel, so its overhead is
minimal.
When an AWR snapshot is taken, 1 row in 10 from the ASH buffer is copied down into the
AWR repository. The buffer can also be flushed to disk between snapshots when it reaches
66% full, so no data is missed. The data is stored in
WRH$_ACTIVE_SESSION_HISTORY and it is exposed via dba_hist_active_sess_history.
ASH is enabled by default, but before you rush off to use it, be aware that it is a licensed
feature. It is part of the Diagnostic Pack, so you have to pay for it. I don’t like that either, but
that’s how it is.
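As a first taste, the in-memory buffer can be queried directly (remember the licensing point above; the five-minute window here is illustrative):

```sql
-- Top wait events over the last 5 minutes, from the in-memory ASH buffer.
-- Each sample row represents roughly one second of session time.
SELECT NVL(event,'CPU') event
,      COUNT(*) ash_secs
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 5/1440
GROUP BY event
ORDER BY ash_secs DESC
/
```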
1 I want to emphasise that if the session is not active it will not be sampled. You can actually
set a parameter _ash_enable_all = TRUE to force all sessions, including idle sessions, to be
sampled.
But as Doug Burns points out in his blog posting
(http://oracledoug.com/serendipity/index.php?/archives/1395-ASH-and-the-psychology-of-
Hidden-Parameters.html), these are undocumented, unsupported parameters, and they are set
this way for a reason – you have been warned.

ASH in Oracle Enterprise Manager
Of course, OEM provides a way to run ASH reports, and here you see I have picked a
particular time window, and I have specified a module name – in this case the main payroll
calculation process.
And this is great. The report is easy to produce, and it tells you lots of things: which SQL
statements are consuming the most time, which objects have the most I/O, and so on.
You can see in this example I picked a module that was responsible for 86% of the total, and
there was an average of 14.8 active sessions (I know there were 32 concurrent processes).
But, you don’t get execution plans, and for that you will need to dig deeper yourself, and learn
to use the DBMS_XPLAN package.
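As a sketch of what that digging looks like, the plan of a statement still in the library cache can be displayed with DBMS_XPLAN.DISPLAY_CURSOR (the SQL_ID here is a placeholder):

```sql
-- Display the execution plan of a cursor still in the library cache.
-- '&sql_id' is a placeholder; a NULL child number shows all children.
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ADVANCED'));
```

For statements that have already been captured into AWR, DBMS_XPLAN.DISPLAY_AWR does the same job, as used later in this document.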

What data does ASH retain?
Most of the columns on v$active_session_history are taken directly from columns of the same
name on v$session, some have different names, and there is some additional information that is
not available elsewhere.
Column on v$active_session_history   Correspondence to v$session
SAMPLE_ID                            ID of ASH sample
SAMPLE_TIME                          Time of ASH sample
IS_AWR_SAMPLE                        New in 11gR2
SESSION_ID                           V$SESSION.SID
SESSION_SERIAL#                      V$SESSION.SERIAL#
USER_ID                              V$SESSION.USER#
SQL_ID                               √
IS_SQL_ID_CURRENT                    New in 11gR2
SQL_CHILD_NUMBER                     √
FORCE_MATCHING_SIGNATURE             not on V$SESSION
SQL_OPCODE                           √
TOP_LEVEL_SQL_ID                     New in 11gR1
TOP_LEVEL_SQL_OPCODE                 New in 11gR1
SQL_PLAN_HASH_VALUE                  not on V$SESSION
SQL_PLAN_LINE_ID                     New in 11gR1
SQL_PLAN_OPERATION                   New in 11gR1
SQL_PLAN_OPTIONS                     New in 11gR1
SQL_EXEC_ID                          √ New in 11gR1
SQL_EXEC_START                       √ New in 11gR1
PLSQL_ENTRY_OBJECT_ID                √
PLSQL_ENTRY_SUBPROGRAM_ID            √
PLSQL_OBJECT_ID                      √
PLSQL_SUBPROGRAM_ID                  √
SERVICE_HASH                         V$ACTIVE_SERVICES.NAME_HASH

Comparison with SQL Trace
ASH and SQL*Trace are not the same thing, but both are valuable tools for finding out about
where processes spend time.
SQL*Trace (or event 10046, as we used to call it) has been my weapon of choice for solving
performance issues for a very long time. It is extremely effective, and there is still a place
for it.
There are difficulties with using SQL trace, especially in a production environment.
• Firstly, it does have a run time overhead. You could afford to trace a single process,
but you certainly couldn’t trace the entire database.
• You have to work with trace in a reactive way. You will probably not already be
tracing a process when you experience a performance problem, so you need to run
the process again and reproduce the poor performance with trace.
• Trace will tell you if a session is blocked waiting on a lock. However, it will not tell
you who is blocking you. ASH will do this (although there are limitations).
• A trace file records everything that happens in a session, whereas ASH samples
the session every second. Short-lived events will be missed, so the data has to be
handled statistically (see page 14).
• There are problems with both approaches if you have the kind of application where
you have lots of different SQL statements because the application uses literal values
rather than bind variables (and cursor sharing is EXACT).
• Oracle’s TKPROF trace file profiler cannot aggregate these statements, but I have
found another profiler, OraSRP (www.oracledba.ru/orasrp), that can. With ASH, you
will see different SQL_IDs, but it can be effective to group statements with the same
execution plan.
• You may have trouble finding the SQL text in the SGA (or via the DBMS_XPLAN
package) because it has already been aged out of the library cache. You may have
similar problems with historical ASH data because the statement had been aged out
when the AWR snapshot was taken.
• A trace file, with STATISTICS_LEVEL set to ALL, will give you timings for each
operation in the execution plan. So, you can see where in the execution plan the time
was spent. ASH will only tell you how long the whole statement takes to execute,
and how long was spent on which wait event.

Through the rest of this document you will see SQL_IDs. However, in a SQL trace
the statements are identified by hash_value. Those hash values do not show up if you
profile your trace file with tkprof, but they do if you use OraSRP. SQL_ID is just a
fancy representation of hash value, so you can convert from a SQL_ID to a
hash_value. Oracle supply the function DBMS_UTILITY.SQLID_TO_SQLHASH(),
but as the comment on the blog says, Tanel’s script is much cooler2.
You can’t get the whole of the SQL_ID back from the hash value (because the upper part is
trimmed off), but you can get the last 5 or 6 characters to help you find or match SQL statements3.
2 See Tanel Poder’s blog: http://blog.tanelpoder.com/2009/02/22/sql_id-is-just-a-fancy-
representation-of-hash-value/
3 And I could never have written this without seeing Tanel’s code!
CREATE OR REPLACE FUNCTION h2i (p_hash_value NUMBER) RETURN VARCHAR2 IS
  l_output VARCHAR2(10) := '';
BEGIN
  FOR i IN (
    SELECT substr('0123456789abcdfghjkmnpqrstuvwxyz'
                 ,1+floor(mod(p_hash_value/(POWER(32,LEVEL-1)),32)),1) sqlidchar
    FROM   dual
    CONNECT BY LEVEL <= LN(p_hash_value)/LN(32)
    ORDER BY LEVEL DESC
  ) LOOP
    l_output := l_output || i.sqlidchar;
  END LOOP;
  RETURN l_output;
END;
/
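A quick way to sanity-check the function is to compare its output for the hash values of cursors in the library cache with the trailing characters of their SQL_IDs (a sketch; it requires access to v$sql):

```sql
-- The last characters of each sql_id should match h2i(hash_value).
SELECT sql_id, hash_value, h2i(hash_value) tail_chars
FROM   v$sql
WHERE  ROWNUM <= 10;
```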

Application Instrumentation
Oracle has provided a package called DBMS_APPLICATION_INFO since at least Oracle 8.
It allows you to set two attributes, MODULE and ACTION, for a session. Those values then
appear in v$session, and can be very useful in identifying which database sessions relate
to which part of an application. These values are then also captured by ASH.
I cannot over-emphasise the importance of this instrumentation when analysing
performance issues. Without sensible values in these columns all you have is the program
name, and you will probably struggle to identify ASH data for the sessions which are of interest.
These values are not set by default. Instead, DBAs are dependent on developers to include
them in their code. Oracle E-Business Suite, for example, has built this into the application.
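For developers who want to add this instrumentation, a minimal sketch looks like this (the module and action values are illustrative):

```sql
-- Register the batch process on the session at start-up.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'PAYROLL_CALC'  -- illustrative name
                                  ,action_name => 'PI=1234567');  -- illustrative instance
END;
/
-- Subsequent steps can update just the action.
BEGIN
  DBMS_APPLICATION_INFO.SET_ACTION(action_name => 'PI=1234567 STEP2');
END;
/
```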
PeopleSoft Specific Instrumentation
However, other application vendors have not. PeopleSoft, for example, only writes the name
of the executable into the module, which is really no help at all because the executable name is
already held in another column.
For batch processes, I have developed a trigger which is fired by batch processes as they start
and which sets a meaningful process name, and puts the unique process instance number into
the action.
CREATE OR REPLACE TRIGGER sysadm.psftapi_store_prcsinstance
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (   (new.runstatus IN('3','7','8','9','10') OR old.runstatus IN('7','8'))
     AND new.prcstype != 'PSJob')
BEGIN
  …
  psftapi.set_action(p_prcsinstance=>:new.prcsinstance
                    ,p_runstatus=>:new.runstatus
                    ,p_prcsname=>:new.prcsname);
  …
EXCEPTION
  WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/

The results of this instrumentation are visible in Enterprise Manager.
Later, you will see the value of this instrumentation as I use it to join a combination of data in
the application about batch processes with the ASH repository to identify where a given
process spent time.

Using SQL to Analyse ASH Data
Statistical Analysis Approach
ASH data is a sample and so must be handled statistically. If something happens that lasts 10
seconds, then it will be sampled about 10 times.
However, not everything that happens is captured. If something lasts less than a
second, but happens very frequently, some of the occurrences will be captured. For example, if
something happens which lasts for 1/10th of a second, but happens 100 times, you would
expect to capture it about 10 times. In all, the 100 occurrences lasted 10 seconds. So by counting
each ASH row as worth 1 second of time you come out at the right answer. This is what
I mean by taking a statistical approach.
So, if you are looking at a current or recent process you use the raw ASH data, and the query
that you construct is something along these lines:
SELECT …
, SUM(1) ash_secs
FROM v$active_session_history
WHERE …
GROUP BY …
And if you are going further back in time then you have to work with the historical data, where only
1 row in 10 is kept, so now each row is worth 10 seconds:
SELECT …
, SUM(10) ash_secs
FROM dba_hist_active_sess_history
WHERE …
GROUP BY …
And of course, you won’t see recent data in this view until there is an AWR snapshot or the
ASH buffer fills to two-thirds and flushes.

ASH history is exposed by the view DBA_HIST_ACTIVE_SESS_HISTORY. It is
stored in the table SYS.WRH$_ACTIVE_SESSION_HISTORY, which is range partitioned
on DBID and SNAP_ID. To make the SQL work efficiently you need to specify the snap ID:
I use dba_hist_snapshot to identify the range of snapshots that I want first, so that
unwanted partitions are eliminated. You may need the LEADING
hint to force Oracle to start with the snapshot view, and then the USE_NL hint to force it to
work through each snapshot, which will guarantee single-partition access. Otherwise your
queries could run for ever!
SELECT /*+LEADING(x) USE_NL(h)*/ …
, SUM(10) ash_secs
FROM dba_hist_active_sess_history h
, dba_hist_snapshot x
WHERE x.snap_id = h.snap_id
AND x.dbid = h.dbid
AND x.instance_number = h.instance_number
AND x.end_interval_time >= …
AND x.begin_interval_time <= …
AND …
GROUP BY …
Objectives
Ask yourself what you are trying to find out.
• Are you interested in a single database session, or a group of sessions, or the whole
database?
• All ASH data, or just one wait event?
• What time window?
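Those choices simply become predicates in the ASH query. For example, restricting the raw in-memory data to one session, or to one wait event (the session and event values here are illustrative):

```sql
SELECT event
,      SUM(1) ash_secs
FROM   v$active_session_history
WHERE  session_id = 123 AND session_serial# = 4567  -- one session (illustrative)
--AND  event = 'db file sequential read'            -- or one wait event
AND    sample_time >= SYSDATE - 1/24                -- time window: the last hour
GROUP BY event
ORDER BY ash_secs DESC
/
```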

PeopleSoft Specific ASH Queries
To get the most out of ASH you need to know how to relate database sessions to processes.
That starts with using DBMS_APPLICATION_INFO to register the process name and
process instance of batch processes on the session (see page 12). But there is more.
Batch Processes
The start and end time of a batch process is recorded on the process request table, and you can
use that to identify the snapshots, and thence the active session history.
SELECT /*+LEADING(r x h) USE_NL(h)*/        /* hint to ensure good performance: start with
                                               the process request table, then go to the
                                               snapshots, finally look up the ASH data with
                                               a nested loop join */
       r.prcsinstance
,      h.sql_id
--,    h.sql_child_number
,      h.sql_plan_hash_value
,      (r.enddttm-r.begindttm)*86400 exec_secs
,      SUM(10) ash_secs
FROM   dba_hist_snapshot x
,      dba_hist_active_sess_history h
,      sysadm.psprcsrqst r                  /* this table describes the process */
WHERE  x.end_interval_time >= r.begindttm   /* identify the AWR snapshots that coincide
                                               with the period that the process was running */
AND    x.begin_interval_time <= r.enddttm
AND    h.sample_time BETWEEN r.begindttm
                         AND r.enddttm      /* filter ASH data to exactly the period that
                                               the process was running */
AND    h.snap_id = x.snap_id
AND    h.dbid = x.dbid
AND    h.instance_number = x.instance_number
AND    h.module LIKE r.prcsname             /* module is the name of the process on the
                                               process request table */
AND    h.action LIKE 'PI='||r.prcsinstance||'%' /* action includes the process instance
                                                   number */
AND    r.prcsinstance = 1956338             /* uniquely identify the process */
GROUP BY r.prcsinstance, r.prcsname, r.begindttm, r.enddttm, h.sql_id, h.sql_plan_hash_value
ORDER BY 1
/

On-Line Activity
I have used the PeopleSoft Performance Monitor (PPM) to find a period in time when the
system exhibits degraded performance.
With on-line activity it is not possible to add module and action instrumentation. At the
moment the program name is copied to module, and that is no advantage at all because I
already have the program in the ASH data.
Enhancement request: PeopleSoft added instrumentation for Performance Monitor; the
context information they use there for a PIA transaction could also be set via
DBMS_APPLICATION_INFO. Combine Component and Page into Module, and set Action as
Action.
So, all I can do is query ASH data relating to PSAPPSRV programs. If you have separate
PSQRYSRV processes, you can analyse those separately too.
SELECT /*+LEADING(x h) USE_NL(h)*/
h.sql_id
, h.sql_plan_hash_value
, SUM(10) ash_secs
FROM dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE x.end_interval_time >= TO_DATE('201002010730','yyyymmddhh24mi')
AND x.begin_interval_time <= TO_DATE('201002010830','yyyymmddhh24mi')
AND h.sample_time BETWEEN TO_DATE('201002010730','yyyymmddhh24mi')
AND TO_DATE('201002010830','yyyymmddhh24mi')
AND h.snap_id = x.snap_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module like 'PSAPPSRV%'
GROUP BY h.sql_id, h.sql_plan_hash_value
ORDER BY ash_secs DESC
/

Now I want to see what SQL statements were executed by those processes, and what
were their execution plans.
SELECT /*+LEADING(r f d x h) USE_NL(h)*/
r.prcsinstance
, h.sql_id
--, h.sql_child_number
, h.sql_plan_hash_value
, (r.enddttm-r.begindttm)*86400 exec_secs
, SUM(10) ash_secs
FROM dba_hist_snapshot x
, dba_hist_active_sess_history h
, sysadm.psprcsrqst r
, sysadm.ps_cdm_file_list f
, sysadm.psxprptdefn d
WHERE x.end_interval_time >= r.begindttm
AND x.begin_interval_time <= r.enddttm
AND h.sample_time BETWEEN r.begindttm AND r.enddttm
AND h.snap_id = x.snap_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module like r.prcsname
AND h.action LIKE 'PI='||r.prcsinstance||'%'
AND r.prcsinstance = f.prcsinstance
AND NOT f.cdm_file_type IN('AET','TRC','LOG')
AND d.report_defn_id = SUBSTR(f.filename,1,instr(f.filename,'.')-1)
AND d.report_defn_id = 'XXX_WK_LATE'
AND r.prcsname = 'PSXPQRYRPT'
AND r.begindttm BETWEEN TO_DATE('201001200000','yyyymmddhh24mi')
AND TO_DATE('201001211600','yyyymmddhh24mi')
GROUP BY r.prcsinstance, r.prcsname, r.begindttm, r.enddttm, h.sql_id, h.sql_plan_hash_value
ORDER BY 1
/
One of the challenges of PeopleSoft queries with operator row-level security is that a
predicate on the operator ID is added to the query, and the operator ID is a literal value, not a
bind variable. That means that if two different operators run the same query, they will
generate different SQL_IDs.
SQL_ID djqf1zcypm5fm
--------------------
SELECT ...
FROM PS_TL_EXCEPTION A, PS_PERSONAL_DATA B, PS_PERALL_SEC_QRY B1,
…
WHERE B.EMPLID = B1.EMPLID AND B1.OPRID = '12345678'
…
This is rather perverse considering that all the other parameters in a query are proper bind
variables, so if a user runs the same query with different parameters it will usually have the
same SQL_ID!
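One way to aggregate such statements in ASH data is the FORCE_MATCHING_SIGNATURE column (listed in the column table earlier): statements that differ only in literal values share the same signature. A sketch (the snapshot join shown earlier is omitted here for brevity, but you would normally include it):

```sql
SELECT h.force_matching_signature
,      h.sql_plan_hash_value
,      SUM(10) ash_secs
FROM   dba_hist_active_sess_history h
WHERE  h.module LIKE 'PSAPPSRV%'
GROUP BY h.force_matching_signature, h.sql_plan_hash_value
ORDER BY ash_secs DESC
/
```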

What Kind of Single Block Read
I created a temporary working storage table with a classification for each tablespace. Here my
classification is by object type in the tablespace. This is relatively easy if you have a
reasonable tablespace naming convention.
DROP TABLE dmk_data_files
/
CREATE TABLE dmk_data_files AS
SELECT tablespace_name
,      file_id
,      CASE
         WHEN f.tablespace_name LIKE 'SYS%'    THEN 'SYSTEM'
         WHEN f.tablespace_name LIKE 'UNDO%'   THEN 'UNDO'
         WHEN f.tablespace_name LIKE '%IDX%'   THEN 'INDEX'
         WHEN f.tablespace_name LIKE '%INDEX%' THEN 'INDEX'
         ELSE 'TABLE'
       END AS tablespace_type
FROM   dba_data_files f
ORDER BY tablespace_name
/
CREATE UNIQUE INDEX dmk_data_files ON dmk_data_files(file_id)
/
I recommend that you do not work directly with DBA_DATA_FILES, because the resulting
query will be slow. Instead, build a working storage table.

When ASH reports a wait on file I/O it also logs the object, file and block numbers.
Beware, though, because the values may not have been cleared out from the previous
sample.
So you know which data file, and hence which tablespace, was accessed.
It is then a simple matter to work out how much time was spent on each type of tablespace:
SELECT /*+LEADING(x h) USE_NL(h f)*/
       f.tablespace_type
,      SUM(10) ash_secs
FROM   dba_hist_snapshot x
,      dba_hist_active_sess_history h
,      dmk_data_files f
WHERE  x.end_interval_time <= TO_DATE('201002161300','yyyymmddhh24mi')
AND    x.begin_interval_time >= TO_DATE('201002161100','yyyymmddhh24mi')
AND    h.sample_time BETWEEN TO_DATE('201002161100','yyyymmddhh24mi')
                         AND TO_DATE('201002161300','yyyymmddhh24mi')
AND    h.snap_id = x.snap_id
AND    h.dbid = x.dbid
AND    h.instance_number = x.instance_number
AND    h.event LIKE 'db file%'
AND    h.p1text = 'file#'
AND    h.p2text = 'block#'
AND    h.event IS NOT NULL
AND    f.file_id = h.p1
GROUP BY f.tablespace_type
ORDER BY ash_secs DESC
/
Here we can see that we are spending more time on index reads than table reads, and very little on
the undo tablespace, so there is not too much work occurring to maintain read consistency.
TABLESPACE_TYPE   ASH_SECS
--------------- ----------
INDEX                30860
TABLE                26970
UNDO                  1370
SYSTEM                 490
Of course, you could classify your tablespaces differently. You might have different
applications all in one database, and want to know how much of the load comes
from each application.
I suppose you could go down to each individual object being accessed, but that would be
more involved, and I haven’t tried that.
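If you did want to drill down to individual objects, ASH’s CURRENT_OBJ# column can be joined to dba_objects; a sketch (subject to the same stale-value caveat as file# and block#, and again omitting the snapshot join for brevity):

```sql
SELECT o.object_name
,      o.object_type
,      SUM(10) ash_secs
FROM   dba_hist_active_sess_history h
,      dba_objects o
WHERE  h.event LIKE 'db file%'
AND    o.object_id = h.current_obj#
GROUP BY o.object_name, o.object_type
ORDER BY ash_secs DESC
/
```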

And rather reassuringly the ASH total agrees quite well with AWR. The top statement alone
is costing us nearly 5 hours.
                  SQL Plan
SQL_ID          Hash Value   ASH_SECS
------------- ------------ ----------
7qxdrwcn4yzhh   3723363341      26030
652mx4tffq415   1888029394      11230
c9jjtvk0qf649   3605988889       6090
artqgxug4z0f1      8450529        240
gtj7zuzy2b4g6   2565837323        100
Let’s look at the statements involved. They all come from the PeopleSoft Publish and
Subscribe servers.
The first statement shows a homemade sequence. PeopleSoft is a platform-agnostic
development, so it doesn’t use Oracle sequence objects. The other two statements show
updates to queue management tables.
SQL_ID 7qxdrwcn4yzhh
--------------------
UPDATE PSIBQUEUEINST SET QUEUESEQID=QUEUESEQID+:1 WHERE QUEUENAME=:2
SQL_ID 652mx4tffq415
--------------------
UPDATE PSAPMSGPUBSYNC SET LASTUPDDTTM=SYSDATE WHERE QUEUENAME=:1
SQL_ID c9jjtvk0qf649
--------------------
UPDATE PSAPMSGSUBCSYNC SET LASTUPDDTTM=SYSDATE WHERE QUEUENAME=:1
There is nothing I can do about any of these because the code is deep inside PeopleTools and
cannot be changed. This is the way that the Integration Broker works.
I cannot find the statement that is blocking these statements; Oracle doesn’t hold that
information. It is probably another instance of the same statement, but that isn’t the
question. The real question is ‘what is the session that is holding the lock doing while it is
holding the lock, and can I do something about that?’
The ASH data has three columns that help me to identify the blocking session.
• BLOCKING_SESSION_STATUS – this column has the value VALID if the
blocking session is within the same instance, but GLOBAL if it is in another instance.
• BLOCKING_SESSION – the session ID of the blocking session if it
is within the same instance; otherwise it is null.
• BLOCKING_SESSION_SERIAL# – the serial number of the blocking session
if it is within the same instance; otherwise it is null.
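These columns let you join the ASH data back to itself: take the samples of the waiting sessions, and look up what the blocking session was doing in the same sample. A sketch on the raw in-memory data (the outer join keeps samples where the blocker was idle and therefore not sampled; the event filter is illustrative):

```sql
SELECT b.sql_id
,      b.sql_plan_hash_value
,      SUM(1) ash_secs
FROM   v$active_session_history w
LEFT JOIN v$active_session_history b
  ON  b.session_id      = w.blocking_session
  AND b.session_serial# = w.blocking_session_serial#
  AND b.sample_id       = w.sample_id         -- same sample
WHERE w.blocking_session_status = 'VALID'
AND   w.event LIKE 'enq:%'                    -- lock waits (illustrative filter)
GROUP BY b.sql_id, b.sql_plan_hash_value
ORDER BY ash_secs DESC
/
```

A blank SQL_ID in the output means the blocker was not active in the database at that sample.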

This is the top of the list of statements.
Note that two of the statements that appear in this list were the original SQL_IDs that we
started with. I’ll come back to this below.
SQL_ID        SQL_PLAN_HASH_VALUE   ASH_SECS
------------- ------------------- ----------
                                       29210
5st32un4a2y92          2494504609      10670
652mx4tffq415          1888029394       7030
artqgxug4z0f1             8450529        580
7qxdrwcn4yzhh          3723363341        270
The first line in the report is blank because there is no ASH data for the session holding the
lock: it was not active in the database. This indicates that the client process was busy, or
waiting on something else outside the database. This is where the majority of the time is
spent, and there is nothing that can be done within the database to address it; it is a matter
of looking at the client process.
However, the next line in the report shows a statement that blocked other sessions for 10670 seconds.
We can look at that.
SELECT * FROM table(dbms_xplan.display_awr('5st32un4a2y92',2494504609,NULL,'ADVANCED'));
Note also that this is the execution plan captured when the query was first seen. The cost is the cost
then, not now, and the value of the peeked bind variable was the value then, not now!
SQL_ID 5st32un4a2y92
--------------------
SELECT 'X' FROM PS_CDM_LIST WHERE CONTENTID = :1
Plan hash value: 2494504609
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 22 (100)| |
| 1 | INDEX FAST FULL SCAN| PS_CDM_LIST | 1 | 5 | 22 (10)| 00:00:01 |
------------------------------------------------------------------------------------
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$1 / PS_CDM_LIST@SEL$1
Peeked Binds (identified by position):
--------------------------------------
1 - :1 (NUMBER): 17776

Did my Execution Plan Change?
We were experiencing a problem with a query in a particular report. We fixed it by adding a
hint. I wanted to prove that when the hint was put into production, the execution plan
changed. This query is very similar to the one described in Batch Processes (see page 16), but
here I want to list all the queries run by all instances of a named report, and see if the exection
plan changed.
SELECT /*+LEADING(r f d x h) USE_NL(h)*/
r.prcsinstance
, r.begindttm
, h.sql_id
--, h.sql_child_number
, h.sql_plan_hash_value
, (r.enddttm-r.begindttm)*86400 exec_secs
, SUM(10) ash_secs
FROM dba_hist_snapshot x
, dba_hist_active_sess_history h
, sysadm.psprcsrqst r
, sysadm.ps_cdm_file_list f
, sysadm.psxprptdefn d
WHERE x.end_interval_time >= r.begindttm
AND x.begin_interval_time <= r.enddttm
AND h.sample_time BETWEEN r.begindttm AND r.enddttm
AND h.snap_id = x.snap_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = r.prcsname
AND h.action LIKE 'PI='||r.prcsinstance||'%'
AND r.prcsinstance = f.prcsinstance
AND NOT f.cdm_file_type IN('AET','TRC','LOG')
AND d.report_defn_id = SUBSTR(f.filename,1,instr(f.filename,'.')-1)
AND d.report_defn_id = 'XXX_WK_LATE'
AND r.prcsname = 'PSXPQRYRPT'
AND r.begindttm >= TRUNC(SYSDATE)
ORDER BY begindttm
And we can see that after the fix was applied and the users were told they could start to run
this report again, the execution plan changed and the run time was much better.
PRCSINSTANCE BEGINDTTM SQL_ID SQL_PLAN_HASH_VALUE EXEC_SECS ASH_SECS
------------ ------------------- ------------- ------------------- ---------- ----------
1964975 08:30:52 22/01/2010 46smbgcfcrb8d 2602481067 20379 20080
1965250 09:08:51 22/01/2010 fpftdx2405zyq 2602481067 20983 20690
1968443 16:42:51 22/01/2010 3rxad5z3ccusv 3398716340 105 80
1968469 16:47:21 22/01/2010 3rxad5z3ccusv 3398716340 90 70
1968485 16:50:19 22/01/2010 3rxad5z3ccusv 3398716340 62 40
1968698 17:40:01 22/01/2010 0ku8f514k3nt0 3398716340 76 50
1968866 18:19:19 22/01/2010 cbmyvpsxzyf5n 3398716340 139 120
1968966 18:34:24 22/01/2010 5jb1sgmjc7436 3398716340 187 170
So, not only have I diagnosed a problem with ASH, I have also proven that the fix, when
applied to production, successfully resolved the issue.

What was the Effect of Stored Outlines?
I have experienced unstable execution plans with the processing of payroll calculations. The
performance of the larger pay group is fine, but some of the execution plans for the smaller
pay groups are different, and performance can be poor.
A set of stored outlines was created from a full payroll identification and calculation process
for the larger payroll, and applied to all subsequent payrolls. Now I want to prove not only
that the outlines were used, but that they had a beneficial effect.
I have three test scenarios.
1. A large streamed payroll calculation was run. It ran without using outlines for 2h
42m, which can be considered to be good performance (in fact I used this process to
collect the stored outlines).
2. A small non-streamed payroll calculation without outlines. This ran for over 8 hours
before it was cancelled. Hence, I don’t have data for all statements for this scenario.
3. A small non-streamed payroll calculation again, but this time with outlines enabled.
It ran for 2h 5m. Not great, considering it has a lot fewer payees than a single stream
of the large payroll, but better than scenario 2.
I can use the ASH data to see whether the execution plan changed, and what effect that had on
performance.
The SQL to perform the comparison looks horrendous, but it is effectively the usual query for
each test scenario in in-line views that are then joined together.
set pages 40
column sql_plan_hash_value heading 'sql_plan_hash_value' format 999999999999
column sql_plan_hash_value2 heading 'sql_plan_hash_value' format a12
SELECT /*+ LEADING(@q1 r1@q1 x1@q1 h1@q1) USE_NL(h1@q1)
LEADING(@q2 r2@q2 x2@q2 h2@q2) USE_NL(h2@q2)
LEADING(@q3 r3@q3 x3@q3 h3@q3) USE_NL(h3@q3) */
q1.sql_id
, q1.sql_plan_hash_value, q1.ash_secs
, DECODE(q1.sql_plan_hash_value,q2.sql_plan_hash_value,'**SAME**',
q2.sql_plan_hash_value) sql_plan_hash_value2
, q2.ash_secs
, DECODE(q1.sql_plan_hash_value,q3.sql_plan_hash_value,'**SAME**',
q3.sql_plan_hash_value) sql_plan_hash_value2
, q3.ash_secs
FROM (
SELECT /*+qb_name(q1)*/
h1.sql_id
, h1.sql_plan_hash_value
, (NVL(r1.enddttm,SYSDATE)-r1.begindttm)*86400 exec_secs
, SUM(10) ash_secs
FROM dba_hist_snapshot x1
, dba_hist_active_sess_history h1
, sysadm.psprcsrqst r1
WHERE x1.end_interval_time >= r1.begindttm
AND x1.begin_interval_time <= NVL(r1.enddttm,SYSDATE)
AND h1.sample_time BETWEEN r1.begindttm AND NVL(r1.enddttm,SYSDATE)
AND h1.Snap_id = x1.Snap_id