Introduction to Oracle Trace

Oracle Trace is a general-purpose data collection product and is part of the Oracle Enterprise Manager systems management product family. The Oracle server uses Oracle Trace to collect performance and resource utilization data, such as SQL Parse, Execute, and Fetch statistics, as well as Wait statistics.

See Also:

Oracle Trace User's Guide and Oracle Trace Developer's Guide in the Oracle Diagnostics Pack documentation set. These books contain a complete list of events and data that you can collect for the Oracle server as well as information on how to implement tracing in your own products and applications.

Using Oracle Trace Data

One of the many advantages of Oracle Trace is its integration with other applications. You can use Oracle Trace data collected for the Oracle server in the following applications, as shown in Figure 15-1:

Oracle Expert

You can use information collected with Oracle Trace as an optional source of SQL workload data in Oracle Expert. This SQL data is used when recommending the addition or removal of indexes. Refer to the Oracle Tuning Pack documentation for additional information.

Oracle Trace Data Viewer

Oracle Trace Data Viewer is a simple viewer for inspecting Oracle Trace collections containing SQL and Wait statistics. You can export Oracle Trace data to the following products for further analysis:

SQL Analyze

You can select one or more rows in Data Viewer and save the SQL statement text to a file that you can import into SQL Analyze. You can then use SQL Analyze to tune these individual statements.

Microsoft Excel

You can save SQL from Data Viewer to a comma-separated values (CSV) file for viewing in Microsoft Excel.

Figure 15-1 Integration of Oracle Trace with Other Applications

Importing Oracle Trace Data into Oracle Expert

You can use Oracle Trace to collect workload data for use in the Oracle Expert application. Oracle Trace collects resource utilization statistics in real time for SQL statements executing against a database, so you can capture data about all the SQL statements executing during periods of poor performance.

You control the duration of an Oracle Trace collection. For example, to obtain SQL workload data for a 15-minute period of poor performance, start the collection at the beginning of the interval and stop it immediately after the interval ends.

Importing Data Viewer SQL Into Oracle SQL Analyze

While using Data Viewer, you can select one or more rows in the top portion of the Data View window to save to a file. When you choose SQL (SQL Analyze Format) from File/Save, Oracle Trace saves a file containing the query text. You can then import this *.sql file into Oracle SQL Analyze to tune the selected statements.

Oracle SQL Analyze can show you the execution plan for individual queries and let you experiment with various optimizer modes and hints.

Importing Data Viewer Information into Microsoft Excel

While using Data Viewer, you can select one or more rows in the top portion of the Data View window to save to a file. When you choose the CSV file format, Oracle Trace creates a *.csv file that you can load into a Microsoft Excel spreadsheet.

Using Oracle Trace Manager

The Oracle server has been coded with Oracle Trace API calls to collect both SQL and Wait statistics with a minimum of overhead. Using the Oracle Trace Manager graphical user interface, you can:

Schedule collections

Filter collections by user

Filter collections by type of Wait event

Format collected data to database tables to preserve historical data

View SQL and Wait statistics using Oracle Trace Data Viewer

Managing Collections

Use and control of Oracle Trace revolves around the concept of a "collection." A collection is data collected for events that occurred while an instrumented product was running.

With the Oracle Trace Manager, you can schedule and manage collections. When creating a collection, you define the attributes of the collection, such as the collection name, the products and event sets to be included in the collection, and the start and end times. The Oracle Trace Manager includes a Collection Wizard that facilitates the creation and execution of collections.

Once you create a collection you can execute it immediately or schedule it to execute at a specific time or at specified intervals. When a collection executes, it produces a file containing the event data for the products participating in the collection. You can also use a collection as a template for creating other similar collections.

Collecting Event Data

An event is the occurrence of some activity within a product. Oracle Trace collects data for predefined events occurring within a software product created with the Oracle Trace API. That is, the product is embedded with Oracle Trace API calls. An example of an event is a parse or fetch.

There are two types of events:

Point events

Point events represent an instantaneous occurrence of something in the instrumented product. An example of a point event is an error occurrence.

Duration events

Duration events have a beginning and ending. An example of a duration event is a transaction. Duration events can have other events occur within them; for example, an error can occur within a transaction.

The Oracle server is instrumented for 13 events. Three of these events are:

Database Connection: A point event that records data such as the server login user name.

SQL Parse: One of the series of SQL processing duration events. This event records a large set of data such as sorts, resource use, and cursor numbers.

RowSource: Data about the execution plan, such as SQL operation, position, object identification, and number of rows processed by a single row source within an execution plan.

Accessing Collected Data

During a collection, Oracle Trace buffers event data in memory and periodically writes it to a binary collection file. This method keeps the overhead of the collection process low. You can access event data collected in the binary file by formatting the data to predefined tables, which makes the data available for fast, flexible access. These predefined tables are called "Oracle Trace formatter tables."

Oracle Trace Manager provides a mechanism for formatting collection data immediately after a collection or at a later time.

When formatting a collection, you identify the database where Oracle Trace Manager creates the formatted collection as follows:

Using Oracle Trace Manager, select a collection to format.

Choose the Format command.

Specify a target database where the data is to reside.

The collection you select determines which collection definition file and data collection file are used. The target database determines where the formatted collection data is stored.

Once the data is formatted, you can access the data using the Data Viewer or by using SQL reporting tools and scripts.

Also, you can access event data by running the Detail report from the Oracle Trace reporting utility. This report provides a basic mechanism for viewing a collection's results. You have limited control over what data is reported and how it is presented.

See Also:

The Oracle Trace Developer's Guide in the Oracle Diagnostics Pack documentation set for additional information about predefined SQL scripts and the Detail report.

Using Oracle Trace Data Viewer

After using Oracle Trace to collect data, run the Oracle Trace Data Viewer by selecting "View Formatted Data..." from the Oracle Trace Collection menu, or select it directly from the Oracle Diagnostics Pack toolbar. Data Viewer computes SQL and Wait statistics and resource utilization metrics from the raw data that is collected. Once Data Viewer computes these statistics, targeting resource-intensive SQL becomes a much simpler task.

Data Viewer computes SQL statistics from data collected by Oracle Trace Manager for all executions of a query during the collection period. Resource utilization during a single execution of a SQL statement may be misleading due to other concurrent activities on the database or node. Combining statistics for all executions gives a clearer picture of the typical resource utilization when a given query is executed.

Note:

You can omit recursive SQL from all data views.

Oracle Trace Predefined Data Views

SQL and Wait statistics are presented in a comprehensive set of Oracle Trace predefined data views. A data view is the definition of a query into the data collected by Oracle Trace. A data view consists of the items or statistics to be returned and, optionally, a sort order and a limit on the number of rows returned.

With the data views provided by Data Viewer, you can:

Examine important statistical data, for example, elapsed times or disk-to-logical-read hit rates.

Drill down as needed to get additional details about the statement's execution.

In addition to the predefined data views, you can define your own data views using the Oracle Trace Data View Wizard.

Once Data Viewer has computed SQL and Wait statistics, a dialog box showing the available data views appears. SQL Statistic data views are grouped by I/O, Parse, Elapsed Time, CPU, Row, Sort, and Wait statistics as shown in Figure 15-2. A description of the selected data view is shown on the right-hand side of the screen.

Logical data block reads include data block reads from both memory and disk.

Input/output is one of the most expensive operations in a database system. I/O intensive statements can monopolize memory and disk usage causing other database applications to compete for these resources.

Disk Reads

Queries that incur the greatest number of disk reads.

Disk reads for parses, executions, and fetches.

Disk reads, also known as physical I/O, are database blocks read from disk. The disk read statistic is incremented once per block read, regardless of whether the read request was for a multiblock read or a single block read. Most physical reads load data, index, and rollback blocks from disk into the buffer cache.

A high physical read count can indicate a high miss rate within the data buffer cache.

Logical Reads/Rows Fetched Ratio

Number of logical reads divided by the number of rows fetched for all executions of the current query.

Total logical I/O.

Total number of rows fetched.

The more blocks accessed relative to the number of rows actually returned, the more expensive each row is to return.

Can be a rough indication of relative expense of a query.

Disk Reads/Rows Fetched Ratio

Number of disk reads divided by the number of rows fetched for all executions of the current query.

Total disk I/O.

Total number of rows fetched.

The greater the number of blocks read from disk for each row returned, the more expensive each row is to return.

Can be a rough indication of relative expense of a query.

Disk Reads/Execution Ratio

Total number of disk reads per distinct query divided by the number of executions of that query.

Total disk I/O.

Logical I/O for the query as well as the number of executions of the query.

Indicates which statements incur the greatest number of disk reads per execution.

Disk Reads/Logical Reads Ratio

Greatest miss rate ratio of disk to logical reads.

Individual logical reads.

Disk reads for the query as well as the miss rate.

The miss rate indicates the percentage of times the Oracle server needed to retrieve a database block on disk as opposed to locating it in the data buffer cache in memory.

The miss rate for the data block buffer cache is derived by dividing the physical reads by the number of accesses made to the block buffer to retrieve data in consistent mode plus the number of blocks accessed through single block gets.

Because memory access is much faster than disk access, the greater the hit ratio, the better the performance.
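As a quick sketch of this derivation (the statistic values below are invented for illustration, not taken from any real collection), the miss rate works out to physical reads divided by the sum of consistent-mode block accesses and single block gets:

```shell
# Illustrative arithmetic only; the names mirror the derivation above and the
# sample values are made up.
physical_reads=150      # blocks read from disk
consistent_gets=4500    # block accesses in consistent mode
db_block_gets=500       # single block gets
miss_rate=$(awk -v p="$physical_reads" -v c="$consistent_gets" -v d="$db_block_gets" \
    'BEGIN { printf "%.3f", p / (c + d) }')
echo "miss rate: $miss_rate"
```

Here the miss rate is 150/5000, or 3 percent; the corresponding hit ratio is 97 percent.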

Re-Parse Frequency

Queries with the greatest reparse frequency.

Number of cache misses.

Total number of parses.

Total elapsed time parsing.

Total CPU clock ticks spent parsing.

The Oracle server determines whether there is an existing shared SQL area containing the parsed representation of the statement in the library cache. If so, the user process uses this parsed representation and executes the statement immediately.

If the statement misses in the library cache, the server must recheck it for syntax, valid objects, and security. Also, the optimizer must determine a new execution plan.

The parse count statistic is incremented for every parse request, regardless of whether the SQL statement is already in the shared SQL area.

Parse/Execution Ratio

Number of parses divided by the number of executions per statement.

Individual number of parses.

Number of executions.

The ratio of parses to executions should be as close to one as possible. If there is a high number of parses per execution, then the statement has been needlessly reparsed. This can indicate a lack of bind variables in SQL statements or poor cursor reuse.

Reparsing a query means that the SQL statement has to be re-checked for syntax, valid objects and security. Also a new execution plan will need to be determined by the optimizer.

Average Elapsed Time

Greatest average time spent parsing, executing and fetching on behalf of the query.

Individual averages for parse, execution and fetch.

The average elapsed times for all parses, executions, and fetches per execution are computed, then summed for each distinct SQL statement in the collection.

Total Elapsed Time

Greatest total elapsed time spent parsing, executing and fetching on behalf of the query.

Individual elapsed times for parses, executions and fetches.

The total elapsed time for all parses, executions and fetches are computed, then summed for each distinct SQL statement in the collection.

Parse Elapsed Time

Total elapsed time for all parses associated with a distinct SQL statement.

SQL cache misses.

Elapsed times for execution and fetching.

Total elapsed time.

During parsing the Oracle server determines whether there is an existing shared SQL area containing the parsed representation of the statement in the library cache. If so, the user process uses this parsed representation and executes the statement immediately.

If missed in the library cache, the statement needs to be rechecked for syntax, valid objects and security. Also a new execution plan will need to be determined by the optimizer.

Execute Elapsed Time

Greatest total elapsed time for all executions associated with a distinct SQL statement.

Total elapsed time.

Individual elapsed times for parsing and fetching.

The total elapsed time of all execute events for all occurrences of the query within an Oracle Trace collection.

Fetch Elapsed Time

Greatest total elapsed time for all fetches associated with a distinct SQL statement.

Number of rows fetched.

Number of fetches.

Number of executions.

Total elapsed time.

Individual elapsed times for parsing and executing.

The total elapsed time spent fetching data on behalf of all occurrences of the current query within the Oracle Trace collection.

CPU Statistics

Total CPU clock ticks spent parsing, executing and fetching on behalf of the SQL statement.

CPU clock ticks for parses, executions and fetches.

Number of SQL cache misses and sorts in memory.

When SQL statements and other types of calls are made to an Oracle server, a certain amount of CPU time is necessary to process the call. Typical calls require a small amount of CPU time. However, a SQL statement involving a large amount of data, a runaway query, in-memory sorts, or excessive reparsing can consume a large amount of CPU time.

CPU time displayed is in terms of the number of CPU clock ticks on the operating system housing the database.

Number of Rows Returned

Greatest total number of rows returned during execution and fetch for the SQL statement.

Number of rows returned during the fetch operation as well as the execution rows.

Targets queries that manipulate the greatest number of rows during fetching and execution. This may mean that high gains can be made by tuning row-intensive queries.

Rows Fetched/Fetch Count Ratio

Number of rows fetched divided by the number of fetches.

Individual number of rows fetched.

Number of fetches.

This ratio shows how many rows were fetched at a time. It may indicate the level to which array fetch capabilities have been utilized. A ratio close to one may indicate an opportunity to optimize code by using array fetches.

Sorts on Disk

Queries that did the greatest number of sorts on disk.

Sort statistics for SQL statements.

Number of in memory sorts.

Total number of rows sorted.

Sorts on disk are sorts that could not be performed entirely in memory. They are more expensive because memory access is faster than disk access.

Sorts in Memory

Queries that did the greatest number of sorts in memory.

Sort statistics for SQL statements.

Number of disk sorts.

Total number of rows sorted.

Sorts in memory are sorts that could be performed completely within the sort buffer in memory without using the temporary tablespace segments.

Rows Sorted

Queries that sorted the greatest number of rows.

Number of in memory sorts.

Number of sorts on disk.

Returns sort statistics for SQL statements ordered by queries that sorted the greatest number of rows.

Waits by Total Wait Time

Highest total wait time per distinct type of wait.

Average wait time, total wait time and number of waits per wait type.

Waits are sorted by wait description or type that had the greatest cumulative wait time for all occurrences of the wait type within the collection.

Waits by Average Wait Time

Highest average wait time per wait type.

Average wait time, total wait time and number of waits per wait type.

Waits are sorted by wait description or type that had the greatest average wait time for all occurrences of the wait type within the collection.

Waits by Event Frequency

Frequency of waits per wait type.

Number of waits per wait type, average wait time, and total wait time.

Waits are sorted by wait events or wait descriptions that appear most frequently within the collection.

Viewing Oracle Trace Data

Double-clicking a SQL or Wait event data view provided by Data Viewer causes Oracle Trace to query the collection data and display it sorted by the criteria described in the data view's description.

Double clicking the "Average Elapsed Time" data view returns data sorted by queries that took the greatest average elapsed time to parse, execute, and fetch. It also displays the average elapsed times for parsing, execution, and fetching.

Figure 15-3 shows data in the "Average Elapsed Time" data view. Query text and statistics appear in the top portion of the window. Clicking any column header causes the Data Viewer to sort rows by the statistic in that column.

Figure 15-3 Oracle Trace Data Viewer - Data View Screen

The SQL text of the currently selected query is shown in the lower portion of the window in the SQL Statement property sheet. Full statistical details about the currently selected query also appear in the Details property sheet.

When examining a data view like that shown in Figure 15-3, you can print the following:

Data view statistics, located in the top portion of the screen, or

Current SQL statement text in formatted output plus details on all statistical data collected for the currently selected query, located in the Details property page.

Window focus at the time of printing determines which portion of the screen is printed. For example, if focus is on the top portion of the screen, the tabular form of all statistics and SQL for this data view is printed.

SQL Statement Property Page

The SQL Statement property page displays the currently selected query in a formatted output.

Details Property Page

The Details property page displays a detailed report on statistics for all executions of a given query within an Oracle Trace collection. Text for the currently selected SQL statement is posted at the end of the property page.

Example of Details Property Page

Statistics for all parses, executions, and fetches of the SQL statement.

Getting More Information on a Selected Query

There are two convenient ways to obtain additional data for the currently selected SQL statement:

To modify a data view to add or remove statistics or items, select Modify from the Data View menu. You may add or remove statistics in the Items property sheet. These statistics appear as new columns in the data view. The selected query in Figure 15-3 is:

This query counts distinct values in the WAIT_TIME column of the v_192216243_f_5_e_13_8_0 table. By modifying the existing data view, you can add other statistics that may be of interest such as "Sort Rows", which is the number of rows sorted, or "Execute CPU", which is the number of CPU clock ticks during execution.

You can also remove existing columns, change the sort order, or change the default number of rows to view. You can save a modified view to a new, user-defined data view. Oracle stores user-defined data views in the Custom data view container following the Data Viewer supplied list of SQL and Wait data views.

Drill down to statistics on all parses, executions, and fetches of the selected query by clicking the Drill icon in the toolbar. The Drill-down Data View dialog is displayed, as shown in Figure 15-4.

Figure 15-4 Oracle Trace Data Viewer - Drill Down Data View Screen

Drill-down data views show individual statistics for all parses, executions, and fetches.

In Figure 15-4 the "Basic Statistics for Parse/Execute/Fetch" drill-down data view is selected. It displays statistics similar to those from TKPROF.

Table 15-2 Drill-down Data Views

Basic Statistics for Parse/Execute/Fetch

Parse, execution, and fetch statistics, which are similar to statistics from TKPROF.

CPU Statistics for Parse/Execute/Fetch

Greatest number of CPUs.

CPU total.

Pagefaults.

CPU and pagefault statistics for Parses, Executions, and Fetches of the current query.

CPU total is the number of clock ticks in both user and system mode. The clock tick granularity is specific to the operating system on which the database resides.

I/O Statistics for Parse/Execute/Fetch

Greatest number of disk I/Os.

Logical and Disk I/O statistics.

Pagefault I/O (number of hard pagefaults).

Input I/O (number of times the file system performed input).

Output I/O (number of times the file system performed output).

I/O statistics for parses, executions, and fetches.

Parse Statistics

Greatest elapsed time.

Current user identifier.

Schema identifiers.

Parse statistics, for example, whether the current statement was missed in the library cache, the Oracle optimizer mode, the current user identifier, and the schema identifier.

Row Statistics for Execute/Fetch

Greatest number of rows returned.

Number of rows returned.

Number of rows sorted.

Number of rows returned during a full table scan.

Execution and fetch row statistics.

Sort Statistics for Parse/Execute/Fetch

Greatest elapsed time.

Sorts on disk.

Sorts in memory.

Number of rows sorted.

Number of rows returned from a full table scan.

Parse, execution, and fetch sort statistics.

Wait Parameters

Wait_time.

Description.

Wait_time.

P1.

P2.

P3.

Investigating waits may help identify sources of contention.

P1, P2, and P3 parameters are values that provide more information about specific wait events. The parameters are foreign keys to views that are wait event dependent. For example, for latch waits, P2 is the latch number that is a foreign key to V$LATCH.

The meaning of each parameter is specific to each wait type.

Manually Collecting Oracle Trace Data

Though the Oracle Trace Manager is the primary interface to Oracle Trace, you can also start a collection manually by using a command-line interface, by editing initialization parameters, or by executing stored procedures.

Using the Oracle Trace Command-Line Interface

Another option for controlling Oracle Trace server collections is the Oracle Trace CLI (Command-line Interface). The CLI collects event data for all server sessions attached to the database at collection start time. Sessions that attach after the collection is started are excluded from the collection. The CLI is invoked by the OTRCCOL command for the following functions:

OTRCCOL START job_id input_parameter_file

OTRCCOL STOP job_id input_parameter_file

OTRCCOL FORMAT input_parameter_file

OTRCCOL DCF col_name cdf_file

OTRCCOL DFD col_name username password service

The JOB_ID parameter can be any numeric value, but it must be unique, and you must remember it to stop the collection. The input parameter file contains the specific parameter values required for each function, as shown in the following examples. COL_NAME (collection name) and CDF_FILE (collection definition file) are initially defined in the START function input parameter file.

The OTRCCOL START command invokes a collection based upon parameter values contained in the input parameter file. For example:

OTRCCOL START 1234 my_start_input_file

where the file MY_START_INPUT_FILE contains the following input parameters:

col_name    my_collection
dat_file    <usually same as collection name>.dat
cdf_file    <usually same as collection name>.cdf
fdf_file    <server event set>.fdf
regid       1 192216243 0 0 5 <database SID>

The server event sets that can be used as values for the fdf_file are ORACLE, ORACLEC, ORACLED, ORACLEE, and ORACLESM.
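As a hedged sketch of this step (the collection name, database SID, and job ID are invented, and the exact parameter-file layout shown here, one name-value pair per line, should be verified against the Oracle Trace User's Guide), the file could be created and the collection started like this:

```shell
# Create the START input parameter file; all names and values are examples only.
cat > my_start_input_file <<'EOF'
col_name my_collection
dat_file my_collection.dat
cdf_file my_collection.cdf
fdf_file oracled.fdf
regid 1 192216243 0 0 5 ORCL
EOF

# Then start the collection (requires an Oracle Trace installation):
# OTRCCOL START 1234 my_start_input_file
```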

The OTRCCOL STOP command halts a running collection. For example:

OTRCCOL STOP 1234 my_stop_input_file

where my_stop_input_file contains the collection name and cdf_file name.

The OTRCCOL FORMAT command formats the binary collection file to Oracle tables. An example of the FORMAT command is:

otrccol format my_format_input_file

Where my_format_input_file contains the following input parameters:

username       <database username>
password       <database password>
service        <database service name>
cdf_file       <usually same as collection name>.cdf
full_format    <0/1>

A full_format value of 1 produces a full format; a value of 0 produces a partial format.
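A hedged sketch of the FORMAT step follows; the username, password, and service values are placeholders, and the parameter-file layout (one name-value pair per line) should be verified against the Oracle Trace User's Guide:

```shell
# Create the FORMAT input parameter file; all values are placeholders.
cat > my_format_input_file <<'EOF'
username scott
password tiger
service ORCL
cdf_file my_collection.cdf
full_format 1
EOF

# Then format the binary collection file to Oracle tables:
# OTRCCOL FORMAT my_format_input_file
```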

See Also:

"Formatting Oracle Trace Data to Oracle Tables" for information on formatting part or all of an Oracle Trace collection and for other important information on creating the Oracle Trace formatting tables prior to running the format command.

Using Initialization Parameters to Control Oracle Trace

Six initialization parameters are set up by default to control Oracle Trace. By logging in to the administrator account in your database and executing the SHOW PARAMETERS TRACE command, you can see these parameters, as shown in Table 15-3:

Table 15-3 Oracle Trace Initialization Parameters

Name                           Type      Value
ORACLE_TRACE_COLLECTION_NAME   string    [null]
ORACLE_TRACE_COLLECTION_PATH   string    $ORACLE_HOME/otrace/admin/cdf
ORACLE_TRACE_COLLECTION_SIZE   integer   5242880
ORACLE_TRACE_ENABLE            boolean   FALSE
ORACLE_TRACE_FACILITY_NAME     string    oracled
ORACLE_TRACE_FACILITY_PATH     string    $ORACLE_HOME/otrace/admin/cdf

You can modify the Oracle Trace initialization parameters and use them by adding them to your initialization file.

Note:

This chapter references file path names on UNIX-based systems. For the exact path on other operating systems, please see your Oracle platform-specific documentation.

Enabling Oracle Trace Collections

The ORACLE_TRACE_ENABLE parameter is set to FALSE by default. A value of FALSE disables any use of Oracle Trace for that Oracle server.

To enable Oracle Trace collections for the server, set the parameter to TRUE. Having the parameter set to TRUE does not start an Oracle Trace collection, but instead allows Oracle Trace to be used for that server. You can then start Oracle Trace in one of the following ways:

Using the Oracle Trace Manager application supplied with the Oracle Diagnostics Pack.

Setting the ORACLE_TRACE_COLLECTION_NAME parameter.

When ORACLE_TRACE_ENABLE is set to TRUE, you can start and stop an Oracle Trace server collection either by using the Oracle Trace Manager application supplied with the Oracle Diagnostics Pack or by entering a collection name in the ORACLE_TRACE_COLLECTION_NAME parameter. The default value for this parameter is NULL; a collection name can be up to 16 characters long. You must then shut down your database and start it up again to activate the parameters. If a collection name is specified, starting the server automatically starts an Oracle Trace collection for all database sessions, which is similar in functionality to SQL Trace.

To stop the collection that was started using the ORACLE_TRACE_COLLECTION_NAME parameter, shut down the server instance and reset the ORACLE_TRACE_COLLECTION_NAME to NULL. The collection name specified in this value is also used in two collection output file names: the collection definition file (collection_name.cdf) and the binary data file (collection_name.dat).
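For example, a minimal sketch of enabling Oracle Trace and naming a collection in the initialization file (the file name initORCL.ora and the collection name mycollect are invented, and the instance must be restarted for the change to take effect):

```shell
# Hypothetical example: append the Oracle Trace parameters to an init.ora file.
# "initORCL.ora" and "mycollect" are placeholders; a collection name can be at
# most 16 characters long.
cat >> initORCL.ora <<'EOF'
ORACLE_TRACE_ENABLE = TRUE
ORACLE_TRACE_COLLECTION_NAME = mycollect
EOF
```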

Determining the Event Set that Oracle Trace Collects

The ORACLE_TRACE_FACILITY_NAME initialization parameter specifies the event set that Oracle Trace collects. The name of the DEFAULT event set is ORACLED. The ALL event set is ORACLE, the EXPERT event set is ORACLEE, the SUMMARY event set is ORACLESM, and the CACHEIO event set is ORACLEC.

If, once restarted, the database does not begin collecting data, check the following:

The event set file, identified by ORACLE_TRACE_FACILITY_NAME, with .fdf appended to it, should be in the directory specified by the ORACLE_TRACE_FACILITY_PATH initialization parameter. The exact directory that this parameter specifies is platform-specific.

The following files should exist in your Oracle Trace administrative directory: REGID.DAT, PROCESS.DAT, and COLLECT.DAT. If they do not, you must run the OTRCCREF executable to create them.

The Oracle Trace parameters should be set to the values that you changed in the initialization file. Use Instance Manager to identify Oracle Trace parameter settings.

Look for an EPC_ERROR.LOG file to see more information about why a collection failed. Oracle Trace creates the EPC_ERROR.LOG file in the current default directory of the Oracle Intelligent Agent when it runs the Oracle Trace Collection Services OTRCCOL image. Depending on whether you are running Oracle Trace from the Oracle Trace Manager or from the command-line interface, you can find the EPC_ERROR.LOG file in one of the following locations:

$ORACLE_HOME or $ORACLE_HOME/network/agent (on UNIX)

%ORACLE_HOME%\network\agent or %ORACLE_HOME%\net80\agent (on NT)

%ORACLE_HOME%\rdbmsnn on NT ($ORACLE_HOME/rdbms on UNIX)

The current working directory, if you are using the command-line interface

To find the EPC_ERROR.LOG file on UNIX, change directories to the $ORACLE_HOME directory and execute the command:

find . -name EPC_ERROR.LOG -print

Note:

On UNIX, the EPC_ERROR.LOG file name is case sensitive and is in uppercase.

Look for *.trc files in the directory specified by the USER_DUMP_DEST initialization parameter. Searching for "epc" in the *.trc files may reveal errors. These errors and their descriptions are located in the $ORACLE_HOME/otrace/include/epc.h file.

Using Stored Procedures to Control Oracle Trace

Using the Oracle Trace stored procedures you can invoke an Oracle Trace collection for your own session or for another session. To collect Oracle Trace data for your own database session, execute the following stored procedure package syntax:

Be sure the server event set file identified by SERVER_EVENT_SET exists. If there is no full file specification in this field, then the file should be located in the directory identified by ORACLE_TRACE_FACILITY_PATH in the initialization file.

The following files should exist in your Oracle Trace admin directory: REGID.DAT, PROCESS.DAT, and COLLECT.DAT. If they do not, you must run the OTRCCREF executable to create them.

The stored procedure packages should exist in the database. If the packages do not exist, run the OTRCSVR.SQL file (in your Oracle Trace admin directory) to create the packages.

Be sure the user has the EXECUTE privilege on the stored procedure.
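Because the exact stored procedure syntax is not reproduced above, the following is only a hedged sketch: the package and procedure name (DBMS_ORACLE_TRACE_USER.SET_ORACLE_TRACE) and the argument order follow the Oracle Trace documentation for starting a collection in your own session and should be verified against your release; the collection and event-set names are examples.

```shell
# Write a SQL*Plus script that starts an Oracle Trace collection for the
# current session. The package name and signature are assumptions; verify them
# against your Oracle release before use.
cat > start_my_trace.sql <<'EOF'
EXECUTE DBMS_ORACLE_TRACE_USER.SET_ORACLE_TRACE(TRUE, 'my_collection', 'oracled');
EOF

# Run it as a user with EXECUTE privilege on the package, for example:
# sqlplus scott/tiger @start_my_trace.sql
```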

Oracle Trace Collection Results

Running an Oracle Trace collection produces the following collection files: the collection definition file (collection_name.cdf) and the binary data file (collection_name.dat).

You can access the Oracle Trace data in the collection files in the following ways:

You can create Oracle Trace reports from the binary file.

The data can be formatted to Oracle tables for Data Viewer, SQL access, and reporting.

Formatting Oracle Trace Data to Oracle Tables

You can format an Oracle Trace server collection to Oracle tables for more flexible access by SQL reporting tools. Oracle Trace produces a separate table for each event collected. For example, a parse event table is created to store data for all parse events occurring during a server collection. Before you can format data, you must first set up the Oracle Trace formatter tables by executing the OTRCFMTC.SQL script on the server host machine.

If you omit user/password@database, Oracle prompts you for this information.

Oracle Trace allows data to be formatted while a collection is occurring. By default, Oracle Trace formats only the portion of the collection that has not been formatted previously. If you want to reformat the entire collection file, use the optional parameter -f.

Oracle Trace provides several SQL scripts that you can use to access the server event tables. For more information on server event tables and scripts for accessing event data and improving event table performance, refer to the Oracle Trace User's Guide.

Oracle Trace Statistics Reporting Utility

The Oracle Trace statistics reporting utility displays statistics for all items associated with each occurrence of a server event. These reports can be quite large. You can control the report output by using command parameters. Use the following command and optional parameters to produce a report:

OTRCREP [optional parameters] collection_name.CDF

First, you may want to run the process report, collection_PROCESS.txt. This report provides a listing of the specific process identifiers for which you want to run another report.

You can manipulate the output of the Oracle Trace reporting utility by using the following optional report parameters:

output_path

Specifies a full output path for the report files. If not specified, the files will be placed in the current directory.

-p

Organizes event data by process. If you specify a process ID (pid), you will have one file with all the events generated by that process in chronological order. If you omit the process ID, you will have one file for each process that participated in the collection. The output files are named collection_Ppid.txt.

-P

Produces a report called collection_PROCESS.txt that lists all processes that participated in the collection. It does not include event data. You could produce this report first to determine the specific processes for which you might want to produce more detailed reports.

-w#

Sets report width, such as -w132. The default is 80 characters.

-l#

Sets the number of report lines per page. The default is 63 lines per page.
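Putting these parameters together, a typical reporting sequence might look like the following sketch. The collection name and process identifier are invented, and because OTRCREP requires an Oracle Trace installation, the commands are composed and printed here rather than executed:

```shell
# Compose a typical OTRCREP sequence: list the participating processes first,
# then produce a wide report for one process of interest. The "-p1234" form
# follows the "-w132" style shown above; verify the exact flag syntax on your
# platform.
CDF=my_collection.cdf
echo "OTRCREP -P $CDF"             # produces my_collection_PROCESS.txt
echo "OTRCREP -p1234 -w132 $CDF"   # events for process 1234, 132-column width
```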