Oracle Unleashed- P8


If the structure of a database table is changed—using an alter table command for example—the programs that
create the SQL*Loader readable files must be changed. If this is not done, proper backups of all data within the database
object will not be taken.
For some sites, however, this type of backup is practical despite the obvious constraints. For a more detailed discussion
of the implementation of SQL*Loader, refer to the Oracle7 Server Utilities User's Guide.
Types of Database Failure
Every DBA experiences a database failure at some point. It might be a minor failure in which the users never even know
that they lost service, or it might be a severe loss that lasts for several days. Most failures fall somewhere in the middle.
Most failures result primarily from the loss or corruption of a physical data file. Of course, many other factors can cause
database problems. Indeed, problems can occur in the memory structures (the SGA), the system hardware, or even the
Oracle software that prevent the DBA from starting up the database. The following sections describe the most common
types of failures.
Tablespace
If a loss or corruption takes place in any of the database files that make up a tablespace, media recovery is required. The
extent of the media recovery needed depends largely on the extent of the data file loss or corruption. Three types of
recovery are available:
- Database recovery
- Tablespace recovery
- Data file recovery
The database recovery method is generally chosen if the SYSTEM tablespace has been damaged, in which case it syncs
all the data files within the database during the recovery procedure. The tablespace recovery method is used if recovery
is needed for multiple tablespaces that had become damaged, such as from the loss of a disk drive. The data file recovery
method is performed if only a single database file has been damaged. The commands used to implement these methods
are
recover database;
recover tablespace users;
recover datafile '/u03/oradata/norm/rbs01.dbf';
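The choice among these three commands can be scripted. The sketch below (the `recover_cmd` helper is our own name, not an Oracle utility) only prints the appropriate command so the DBA can review it before running it in Server*Manager:

```shell
#!/bin/sh
# Print the recover command that matches the scope of the damage.
# recover_cmd is our own helper name, not an Oracle utility.
# Usage: recover_cmd database | tablespace <name> | datafile <path>
recover_cmd() {
    case "$1" in
        database)   echo "recover database;" ;;
        tablespace) echo "recover tablespace $2;" ;;
        datafile)   echo "recover datafile '$2';" ;;
        *)          echo "usage: recover_cmd database|tablespace|datafile" >&2
                    return 1 ;;
    esac
}

recover_cmd tablespace users
recover_cmd datafile /u03/oradata/norm/rbs01.dbf
```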
Control File
Whenever a database loses a control file, there is generally little impact on the database itself as long as the DBA has
mirrored the control files. To recover the control file, follow these steps (which assume that the control file has been
mirrored):
1. From Oracle Server*Manager, do connect internal and perform a shutdown (or shutdown immediate) on the
database.
2. Copy one of the existing control files over the corrupted file. If it is not possible to do this, copy it to another
location and reflect the change in the CONTROL_FILES parameter of the INIT.ORA parameter file, or remove
it completely.
3. From Oracle Server*Manager, do connect internal and perform a startup on the database.
The database will bring the control file in sync with the database, and the users will experience no loss of service or
downtime.
If a control file has been lost and there is no backup, Oracle continues to run until it attempts to access the control file. At
that point, the Oracle instance aborts. Two options are available to the DBA:
- Create a new control file
- Restore the control file from backup
To create a control file, you must first create a SQL script that will adequately recreate the existing control file. If a
backup to trace is part of regular backups, the script already exists in the USER_DUMP_DEST directory. Use ls -lt in
UNIX to find the most recent one, and use view to make sure that it creates a control file and is not simply SQL*Trace
output. Perform the following steps:
1. Locate or create a SQL script.
2. From Oracle Server*Manager, do connect internal.
3. If a new create script was created, issue the startup nomount command. Execute the SQL script. Then execute
the commands
recover database;
alter system archive log all;
alter database open;
4. If the create control file script is from a backup to trace, execute the script from a shutdown database. It will
execute all the intermediate steps and open the database.
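Step 1 above, locating a usable script, can be sketched in shell. The directory below stands in for USER_DUMP_DEST and is only an assumed example path:

```shell
#!/bin/sh
# Walk USER_DUMP_DEST newest-first and report the first trace file that
# contains a CREATE CONTROLFILE statement (not plain SQL*Trace output).
# The default directory below is an assumed example path.
find_create_script() {
    dir="$1"
    for f in $(ls -t "$dir"/*.trc 2>/dev/null); do
        if grep -q "CREATE CONTROLFILE" "$f"; then
            echo "$f"
            return 0
        fi
    done
    return 1
}

find_create_script "${UDUMP:-/u01/app/oracle/admin/norm/udump}" \
    || echo "no create controlfile script found"
```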
If you choose to use a backup control file, issue the following recover command in place of the standard recover
command:
recover database using backup controlfile;
Redo Logs
As with control files, there are two possible scenarios: loss of mirrored redo logs and loss of nonmirrored redo logs. If at
least one member in each redo log group is usable and not corrupted, the database continues to function normally. You
should determine what caused the failure or corruption of the redo log member. Then you should rectify the problem by
dropping and recreating the log member.
If all the members of a redo log group became corrupted or were lost, the scenario is entirely different. Dealing with the
loss of an entire redo log group is the same as dealing with an unmirrored redo log. The two possibilities are
- The redo logs were not the active group.
- The redo logs were the active group.
If the redo log group was not the active group, the corrupt group and its members eventually cause the database to shut
down. The trick is to recognize that damage has been done and to react before the database shuts down. Restore the
online redo log from tape, or copy it from an existing redo log group if they are the same size. If the disk itself is corrupt
and unavailable, rename the redo log group. If you are lucky enough to catch the database at this time, this is the best
alternative. Otherwise, if the database attempts to access the corrupted redo log, the redo log must be recovered as if the
active redo log was lost (see below).
The more likely scenario is that the database aborted because it lost an inactive online redo log. The recovery steps are
basically the same, but they are done in an offline fashion. Recover the offending redo log group, or make a copy of an
existing group if they are the same physical size. From Oracle Server*Manager, do connect internal and start up the
database. The downtime involved should be minimal.
A loss of the current online redo log requires a limited recovery scenario. Although a full database recovery is not
actually applied, you must make the database think that one has occurred. Only then can processing continue. The
steps are
1. From Oracle Server*Manager, do connect internal. Use shutdown, shutdown immediate, or shutdown abort to
shut down the database.
2. Execute startup mount on the database instance.
3. Once the database has been mounted, issue the recover database command. At the next prompt, enter cancel.
4. Issue an alter database rename... command to move the corrupted redo logs to a new location. The new files are
created automatically.
5. Execute the alter database open resetlogs; command from Oracle Server*Manager. The database is brought
back online for continued operations.
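The steps above can be generated as a command sequence for review before anything is typed into Server*Manager. The sketch below only prints the commands; "cancel" is the response entered at the recover prompt, and the old and new log locations are hypothetical examples:

```shell
#!/bin/sh
# Print the Server*Manager command sequence for recovering from a lost
# current redo log group, for review before typing it in.  The old and
# new log locations passed in are hypothetical examples.
emit_redo_recovery() {
    old_log="$1"
    new_log="$2"
    cat <<EOF
connect internal
shutdown abort
startup mount
recover database
cancel
alter database rename file '$old_log' to '$new_log';
alter database open resetlogs;
EOF
}

emit_redo_recovery /u03/oradata/norm/redo01.log /u04/oradata/norm/redo01.log
```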
Operations that require restarting an aborted Oracle database instance can be quite complex. The complications that can
arise during an operation as sensitive as a recovery are numerous. If the recovery process does not seem to work
properly, stop and contact Oracle technical support immediately.
Archive Logs
You have been forced to tinker with startups, shutdowns, and renaming and recovering physical database files. At least
losing archive logs does not affect the continued operations of the database.
Well, almost.
Unlike losing a database file, a control file, or a redo log—which ultimately causes an Oracle database instance to
abort—losing an archive log has no visible effect on the database. After all, the logs are retained offline and are accessed
only when they are created as archives of the online redo logs and when they are used for database recovery.
Even though the loss of an archive log does not affect the continued operations of the database—which is why
NOARCHIVELOG mode is available—if anything occurs that requires database recovery before the next backup, it will
be impossible to recover the database.
Because archive logs facilitate recovery, their loss is often realized only when it is too late. It is a difficult position for a
DBA, and there is no clear right or wrong solution. It depends on the backup schedule. It is easier to wait a few hours
until the next hot backup than to wait several days for the next cold backup.
We recommend that you immediately initiate a hot backup of the database. It will slow things down and cause the
system to choke a little on processing, especially during peak usage time. It is far better, though, than waiting and hoping
that nothing will happen.
Recovery Methods
There are several methods for performing database recovery. Each method offers a trade-off between speed and
simplicity. The following sections describe the major types of recovery available through the Oracle RDBMS.
Cold Restore
In a cold restore, all the database files, control files, and redo logs are restored from tape or disk, and the database is
restarted. It is the simplest, most complete recovery operation to perform. The primary drawback is that anything done to
the database since the last backup is lost.
The steps in a cold restore are
1. Make sure that the current Oracle database instance is shut down.
2. Replace all the existing database files, control files, and redo logs with earlier versions from tape or disk.
The time and date stamps on all of the files from the recovery should be for the same period of time. If they are
not, the database will be out of sync and will not open properly.
3. From Oracle Server*Manager, do connect internal and issue a startup command. The database is now ready for
use.
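The timestamp check noted in step 2 can be automated. The sketch below warns about any restored file whose timestamp strays from the rest of the set; the one-hour tolerance and the restore directory are assumptions:

```shell
#!/bin/sh
# Flag any restored file whose timestamp strays more than an hour from
# the newest file in the set - a sign that backup pieces from different
# runs were mixed.  The tolerance and directory below are assumptions.
check_backup_set() {
    dir="$1"
    newest=0
    status=0
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        t=$(date -r "$f" +%s)               # file mtime in epoch seconds
        [ "$t" -gt "$newest" ] && newest=$t
    done
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        t=$(date -r "$f" +%s)
        if [ $((newest - t)) -gt 3600 ]; then
            echo "WARNING: $f is out of sync with the backup set"
            status=1
        fi
    done
    return $status
}

check_backup_set "${RESTORE_DIR:-/u03/oradata/norm}" || true
```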
Full Database Recovery
In a full database recovery, also called a complete recovery, data changed since the last backup can be restored. One or
more database files are restored from backup. Archive logs are then applied to them until they are in sync with the rest of
the database.
The steps in a full database recovery are
1. Make sure that the database instance is shut down.
2. Restore the data file from tape or disk.
3. From Oracle Server*Manager, do connect internal and perform startup mount on the database instance.
4. Issue the recover database automatic command from within Oracle Server*Manager. Oracle Server*Manager
responds by applying all the required changes to the database instance. Depending on the length of time since
the last backup and the size and number of the archive logs, this wait can take a few seconds or several hours.
5. After the SVRMGR> prompt returns, issue the alter database open noresetlogs; command. The database is now
completely recovered and available for use.
There are several variations of the recover database command, including recover datafile and recover tablespace.
Time-Based Recovery
Sometimes a recovery is required, but not everything in the archive logs is necessary. Suppose, for example, that an
overzealous developer deploys a job that deletes every other row in a transaction processing table. In this case, a full
recovery will not work. Because the transactions that corrupted the table are in the archive logs, a full recovery simply
restores from the last backup and processes all the transactions, including the haphazard delete. If you know that the job
ran at 2:30 p.m., you can use time-based recovery to recover until 2:29 p.m. That way, the table is exactly as it appeared
before the job ran. This is also called an incomplete recovery.
A time-based recovery is performed exactly like a full recovery, with the exception of the recover database command. The
steps are
1. Make sure that the database instance is shut down.
2. Restore the data file from tape or disk.
3. From Oracle Server*Manager, do connect internal and perform startup mount on the database instance.
4. Issue the recover database until time 'YYYY-MM-DD:HH24:MI:SS' command from within Oracle
Server*Manager. This is a mask for the time and day on which the recovery should stop. Oracle
Server*Manager responds by applying all the required changes to the database instance. Depending on the
length of time since the last backup and the size and number of the archive logs, this wait can take a few
seconds or several hours.
5. After the SVRMGR> prompt returns, issue the alter database open resetlogs; command. The database is now
completely recovered and available for use.
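The time mask in step 4 can be built with the UNIX date command. This sketch stops the recovery at the 2:29 p.m. point from the example above; today's date fills in the date portion, and the literal time should be adjusted for the actual failure window:

```shell
#!/bin/sh
# Build the mask for "recover database until time", stopping the
# recovery at 2:29 p.m.  Today's date fills in the date portion; adjust
# the literal time for the actual failure window.
STOP_AT="$(date '+%Y-%m-%d'):14:29:00"
echo "recover database until time '$STOP_AT';"
```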
Cancel-Based Recovery
Even if you do not know the exact time when an error occurred, you might feel reasonably certain that you can isolate
when to terminate the recovery based on the thread/sequence number. Perhaps there was a break in the archive logs
because you had the database out of ARCHIVELOG mode for a short time, or perhaps you want more control over what
archive logs are applied as part of the recovery. The solution is cancel-based recovery.
Under cancel-based recovery, you are prompted after each archive log is applied. The recovery process continues until
either the recovery is complete or you enter cancel at the prompt. The prompt appears within Oracle Server*Manager as
Specify log: [<RET> for suggested | AUTO | FROM logsource | CANCEL]
Once you enter cancel at the prompt, the recovery stops.
The steps in a cancel-based recovery are
1. Make sure that the database instance is shut down.
2. Restore the data file from tape or disk.
3. From Oracle Server*Manager, do connect internal and perform startup mount on the database instance.
4. Issue the recover database until cancel command from within Oracle Server*Manager. Oracle Server*Manager
responds by prompting you before each archive log is applied. The recovery ends when the database encounters
the final archive log or when you enter cancel.
5. The SVRMGR> prompt will return. If the recovery ran until completion, issue the alter database open
noresetlogs; command. If you entered cancel to end the recovery, issue the alter database open resetlogs;
command. The database will be recovered until the point of completion or cancellation.
Sample Database Backup Scripts
The code examples in the following sections show you how to set up and execute hot and cold backup schemes. These
are not highly intensive processing modules. There are certainly ways to make them more sophisticated. For example,
you could make the Oracle data dictionary determine which files to back up. Figure 14.7 shows the sample database that
the scripts back up.
Figure 14.7. Sample Oracle database layout.
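As a sketch of the data dictionary approach mentioned above, the script below prints a query against DBA_DATA_FILES that, when run through Server*Manager and spooled to a file, yields one copy command per database file. The backup destination directory is a hypothetical example:

```shell
#!/bin/sh
# Print a query that, spooled through Server*Manager, produces one cp
# command per database file currently in the data dictionary.  The
# backup destination directory is a hypothetical example.
emit_backup_query() {
    dest="$1"
    cat <<EOF
select 'cp ' || file_name || ' $dest'
from dba_data_files;
EOF
}

emit_backup_query /b03/oradata/norm
```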
Cold Backup
This cold backup script issues a shutdown immediate command to terminate database operations. It then performs a mass
copy of all the database files from the operating system to tape. When it is finished, it restarts the Oracle database
instance.
#!/bin/sh
# Oracle RDBMS Cold Backup
# shutdown the database
$ORACLE_HOME/bin/svrmgrl <<EOF
connect internal
shutdown immediate
EOF
# copy all database files, control files, and redo logs to tape
# (paths follow the sample layout in Figure 14.7)
tar cvf /dev/rmt/0hc /u03/oradata/norm /u05/oradata/norm
# startup the database
$ORACLE_HOME/bin/svrmgrl <<EOF
connect internal
startup
EOF

Hot Backup
This hot backup script shows a backup that occurs to disk instead of to tape, as in the cold backup. Whereas the cold
backup shuts down the database and does a mass file copy, the hot backup tediously copies the database files for each
tablespace. The cold backup is more dynamic than the hot backup because it uses wildcards and the OFA. Whenever a
new database file is added or changed, the hot backup script must be changed. If it is not changed, an adequate backup will
not be taken.
Unlike the cold backup script, which makes its copies to tape, the hot backup script makes copies of the Oracle files to
disk. Either type of copy is acceptable for either backup method.
#!/bin/sh
# Oracle Hot Backup Script
$ORACLE_HOME/bin/svrmgrl <<EOF
connect internal
REM ** Backup the PROD tablespace
alter tablespace prod begin backup;
!cp /u03/oradata/norm/prod01.dbf /b03/oradata/norm/prod01.dbf
!cp /u05/oradata/norm/prod02.dbf /b03/oradata/norm/prod02.dbf
alter tablespace prod end backup;
REM ** Perform Control file backup
alter database backup controlfile to '/b01/oradata/norm/control.ctl';
alter database backup controlfile to trace;
REM ** Backup OnLine Redo Logs
!cp /u03/oradata/norm/redo*.log /b03/oradata/norm
!cp /u05/oradata/norm/redo*.log /b05/oradata/norm
exit
EOF
Summary
This chapter discusses the resources available to an Oracle RDBMS to ensure database integrity and consistency. You
learned how to implement them in a real-world backup strategy for mission-critical systems.
The information in this chapter barely scratches the surface of the backup and recovery functionality of the Oracle
RDBMS. Keep in mind that an Oracle RDBMS is basically a collection of physical database files. Backup and recovery
problems are most likely to occur at this level. Three types of files must be backed up: database files, control files, and
online redo log files. If you omit any of these files, you have not made a successful backup of the database.
Cold backups shut down the database. Hot backups take backups while the database is functioning. There are also
supplemental backup methods, such as exports. Each type of backup has its advantages and disadvantages. The major
types of instance recovery are cold restore, full database recovery, time-based recovery, and cancel-based recovery.
This chapter also contains sample scripts that you can use to build your own backup scripts.
Ultimately, this task falls to the DBA. A DBA really has two levels of responsibility: actual and perceived.
Actual responsibility means the tasks for which a DBA is genuinely responsible: keeping the database available for day-
to-day business needs, creating new user accounts, monitoring the overall health of the database, and so on. Perceived
responsibility means the responsibility incurred when there is any problem with the database—or even a conflict in the
corporate IS structure. A DBA is often asked why the database is down when a link has broken in the WAN, or why the
database is performing slowly when a poorly written application is deployed into a production environment.
Because all database problems are perceived to be the responsibility of the DBA, it falls to him—whether he likes it or
not—to validate the claims or dispel the rumors. The DBA must have a solid foundation of knowledge to base his
decisions on. In many larger IS departments, the DBA may not be responsible for performance tuning. In others, the
DBA may be responsible only for database—but not application—performance tuning. At some sites, the DBA is
responsible for all performance tuning functions of the database.
This chapter deals with the art of performance tuning.
For more information about performance considerations while designing a database, see Chapter 17, "Designing
a Database."
General Concepts in Database Tuning
When you are called on to optimize or tune a system, it is of paramount importance that you distinguish between the two
levels of performance tuning: applications tuning and database tuning. They are distinct areas of expertise and are often
handled by different people. The DBA should have at least an overview of the importance and functions of each type of
tuning.
At the base of everything is the operating system, which drives the physical functionality—such as how to access the
physical disk devices. On top of this level rests the RDBMS, which interacts with the operating system to store
information physically. Applications communicate with the RDBMS to perform business tasks.
Applications Tuning
Applications tuning deals with how the various applications—forms, reports, and so on—are put together to interact with
the database. Previous chapters discussed how a database is little more than a series of physical data files. Essentially, an
application is nothing more than a program that issues calls to the database, which in turn are interpreted as physical
reads and writes from the physical data files. Applications tuning means controlling the frequency and amount of data
that the application requests from or sends to the database.
Here are some general guidelines for tuning applications:
- Generate an EXPLAIN PLAN on all the queries in the application. This helps you determine whether a query
has been properly optimized. The EXPLAIN PLAN is discussed later in this chapter.
- Check the EXPLAIN PLAN of database views. This is important because views are indistinguishable from
tables when they are used in queries. Because the SQL for a view is not executed until it is queried, an
inefficient view can drastically slow down the performance of an otherwise efficient application. Be especially
wary of joining views with other views.
- If an application that was performing acceptably begins to perform slowly, stop and determine what has
changed. In many cases, queries run fine in test environments and in the first few months of production until
data accumulates; an index might now be needed to expedite the database searches. In other cases, however, an
index that invalidates existing EXPLAIN PLANs might have been added. This is a real danger when too many
people can create indexes on production tables. The more indexes that a table has, the longer it takes to load or
change data in a database table; it also impacts the speed with which the database returns query results.
- Match SQL where possible. Applications should use the same SQL statements wherever possible to take
advantage of Oracle's Shared SQL Area. The SQL must match exactly to take advantage of this.
- Be as specific as possible. The more specific a database query is, the faster it executes. For example,
querying a table by a ROWID is far more specific than querying with the LIKE clause. Unless it is necessary to
use less specific queries in an application, always write queries that can use the PRIMARY KEY or other
indexed information.
- Be aware of how often queries are made against the database and whether they are necessary. Avoid too
frequent or unnecessary calls, such as a loop that queries the DUAL table for the name of the user.
Each time the loop executes, the query is executed. Other types of queries are even more expensive. Whenever
possible, process data in memory and refrain from querying the database.
- SQL is not a file handler. One of the most common mistakes in SQL programming is made by people who have
previous programming experience using file handlers, such as BTRIEVE or ISAM. Software developers should
be wary of writing two separate queries for master/detail relationships—that is, one query for the master and
another for the details for that master—instead of just a single query. The two-query approach involves extra
processing that can impose a substantial overhead on applications programs.
- Tuning does not solve the problems of poor design. This is the most essential truth in applications tuning. It
emphasizes what everyone who has ever worked in systems development knows: Spend time proactively, not
reactively. No matter how many indexes are created, how much optimization is done to queries, or how many
caches and buffers are tweaked and tuned—if the design of a database is faulty, the performance of the overall
system suffers.
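To illustrate the exact-match requirement for the Shared SQL Area, the two statements below differ only in case, so Oracle parses and caches them separately (the emp table of the standard demo schema is used here only as an example):

```sql
select ename from emp where empno = :empno;
SELECT ENAME FROM EMP WHERE EMPNO = :empno;
```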
These are only guidelines for applications tuning. Each site has its own specific problems and issues that affect the
problems that occur in applications. More often than not, it is the duty of the developers to tune and modify their own
programs without the involvement of the DBA. Because of perceived responsibility, however, the DBA must work with
the applications development staff to resolve these problems.
Database Tuning
Whereas applications development addresses how a task is accomplished, tuning at the database level is more of a nuts
and bolts affair. Performance tuning at the applications level relies on a methodical approach to isolating potential areas
to improve. Tuning at the database level, however, is more hit and miss. It concentrates on things such as enlarging
database buffers and caches by increasing INIT.ORA parameters or balancing database files to achieve optimum
throughput.
Unlike applications tuning, which can be done by an applications group or the DBA depending on the environment,
database tuning is the almost exclusive province of the DBA. Only in rare cases where there are multiple DBA groups,
one of which specializes in performance tuning, does database tuning fall outside the domain of the DBA.
At the database level, there are three kinds of tuning:
- Memory tuning
- I/O tuning
- Contention tuning
Each kind has a distinct set of areas that the DBA must examine. Memory tuning deals with optimizing the numerous
caches, buffers, and shared pools that reside in memory and compose the core memory structures for the Oracle
RDBMS. I/O tuning is concerned with maximizing the speed and efficiency with which the RDBMS accesses the
physical data files that make up its basic storage units. Contention tuning seeks to resolve problems in which the
database fights against itself for database resources.
There are only four basic steps involved in database tuning. They hold true for all three types of tuning:
1. Gather information.
2. Determine optimal changes.
3. Implement changes.
4. Monitor the database.
As with applications tuning, the more proactively the process is done, the more effective it is. The process is seldom
effective when it is done on the fly or without the proper amount of research.
Operating System Tuning
Tuning at the operating system level is beyond the scope of this chapter. This task falls to the system
administrator—only in rare cases to the DBA. However, it is often the role of the DBA to offer suggestions. Some issues
to consider are
- Paging and swapping. At the operating system level, paging and swapping is used to transfer information from
the system's memory (RAM) to disk and back again. This enables the system to manipulate more information
than it normally could handle in real memory. However, excessive paging and swapping can cause system
performance to degrade. The DBA and the system administrator should work together to optimize memory to
reduce or eliminate paging and swapping.
- Striping, mirroring, and RAID. In many cases, the disk drives write a piece of information across several disks
(striping), write all the information across a pair of disks (mirroring), or write all the information across every
disk in a fixed-number set (RAID). These disk drive configurations can help make I/O more efficient by
distributing reads and writes across many disks. In some cases, they increase the fault-tolerance of the system
itself. It is important for the DBA to be aware that many of these configurations have an impact on I/O
performance. RAID drives, for example, must access each disk in the set for every read and write operation.
- Shared memory. Used for communication between processes, shared memory settings are usually configured
within the operating system. The Oracle Installation and Configuration Guide gives the minimum settings for
configuring shared memory for an Oracle RDBMS. These settings are the minimum required for running the
RDBMS. In practice, they should generally be set higher.
- Maximum processes. One of the drawbacks of Oracle (or any other RDBMS) is the amount of overhead that it
requires from the system on which it runs. One of the areas in which Oracle takes extended overhead is processes.
The database itself consists of background processes—PMON, SMON, LGWR, DBWR, and so on—plus an
additional process for each user who connects to the database. Although this value can be limited at the
database level through the PROCESSES parameter in the INIT.ORA parameter file, it is important to make
certain that the operating system supports the number of processes. It is also important to ensure that the
number of available processes allows for growth.
- Maximum open files. A particular problem in an Oracle RDBMS is the maximum number of open files that a
single process can hold. The number of files is defined at the operating system and RDBMS levels. The
RDBMS sets this limit with the MAXDATAFILES parameter of the create database statement. There is also a
limit at the operating system level, which depends on the configuration of the operating system. In both cases, the
DBA can change the value.
It is not advisable to change the operating system priority of the Oracle background processes. If these values
are altered, the database might process information less efficiently. If you must modify them, set all database processes
to the same value.
Performance Tools
In tuning a database, the first and most crucial step is gathering statistics on the current database performance. The
tools described in the following sections provide a benchmark of how the database is currently performing and enable
the DBA to gauge progress by measuring improvement.
Viewing SGA and Parameter Settings
Use the Oracle Server*Manager to view current parameter settings for an Oracle RDBMS instance. The show sga
command shows the current size and makeup of the SGA. You can also display the INIT.ORA parameters with the show
parameter command. To display only a particular parameter, add it to the command. For example,
% svrmgrl
SVRMGR> Connect internal
Connected.
SVRMGR> show parameter block
All the database parameters are shown, even ones that have not been explicitly set in the INIT.ORA parameter file.
Parameters that the DBA has not set are shown with their default values. By spooling this list to a data file, the DBA can
get an accurate snapshot of a database's settings.
UTLBSTAT and UTLESTAT
To determine what needs to be fixed in an Oracle RDBMS instance, you must first determine what is broken. In some
cases, performance problems occur sporadically; however, they usually have a specific pattern. Do they occur around
lunch time? At night? Early in the morning? One of the keys to performing successful performance tuning is being able
to identify when the problem is occurring.
Oracle provides tools that enable you to examine in detail what the Oracle RDBMS was doing during a specific period of
time. They are the begin statistics utility (utlbstat) and the end statistics utility (utlestat). These scripts enable you to take
a snapshot of how the instance was performing during an interval of time. They use the Oracle dynamic performance
(V$) tables to gather information.
It is important to use the utlbstat and utlestat utilities only against a database instance that has been running for a
while. Because an Oracle RDBMS instance reinitializes its dynamic performance tables during database startup,
information gathered from a database that has not been running long enough to accumulate activity is inconclusive.
To use utlbstat and utlestat, the database must have been started with the value of TIMED_STATISTICS in the
INIT.ORA parameter file set to TRUE. Oracle does not collect some of the information required for the report if this
parameter is not set to TRUE. Setting TIMED_STATISTICS to TRUE, however, causes the database instance to incur
overhead. The amount is small—only about 4-8 percent in quantitative terms—and it is necessary to take an accurate
snapshot of the database performance. Many DBAs set this parameter to TRUE only when they gather statistics.
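In the INIT.ORA parameter file, enabling the setting is a single line, as in this illustrative fragment:

```
TIMED_STATISTICS = TRUE
```

Because the parameter is read at startup, the instance must be restarted for the change to take effect.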
Once you have set the required parameters, the database has run for a sufficient period of time, and you have identified
the window, you take the snapshot by using utlbstat. To execute either script, you must have the ability to connect
internal to the database. Running utlbstat tells the RDBMS instance to begin gathering statistics until told otherwise. It is
executed as follows:
% svrmgrl
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlbstat
From the moment when this script is executed, the Oracle RDBMS instance gathers performance statistics. It continues
to do so until you run the utlestat script, which stops gathering performance statistics. It is important that the database
remain active and not be shut down while utlbstat is running.
% svrmgrl
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlestat
When you run utlestat, the database creates a report called REPORT.TXT in the current directory, which contains the
statistical information gathered. Each report contains the following information:
q Library cache statistics
q System summary statistics
q System-wide wait event statistics
q The average length of the dirty buffer write queue
q File I/O statistics
q SGA and cache statistics
q Latch statistics
q Rollback segment statistics
q Current initialization parameter settings
q Dictionary cache statistics
q Start and stop time statistics
A sample report called REPORT.TXT is included on the CD-ROM and shows what a report produced by utlestat might
look like.
Generating the report is simple; interpreting it is another matter entirely. The rest of this chapter looks at what this
information means. The report itself gives some brief hints. When in doubt, always remember to keep hit rates high and
wait times low.
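As one illustration of the "hit rates high" rule of thumb, the buffer cache hit ratio can be computed directly from the V$SYSSTAT dynamic performance table. The following query is a sketch; the statistic names assume a standard Oracle release:

```sql
SELECT 1 - (phys.value / (gets.value + con.value)) "CACHE HIT RATIO"
FROM   v$sysstat phys, v$sysstat gets, v$sysstat con
WHERE  phys.name = 'physical reads'
AND    gets.name = 'db block gets'
AND    con.name  = 'consistent gets';
```

A ratio near 1 means most block requests are being satisfied from the buffer cache rather than from disk.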
EXPLAIN PLAN
Performance tuning does not always have to happen on a global, database-level view. In theory, most tuning should take
place at much lower, scalable levels where the performance impact is more easily measured. A fundamental truth of
database tuning and optimization is that performance tuning is not sorcery or magic. Optimizing a database will not
make a poorly tuned application run faster; the reverse is also true, though less common. It is important to examine how
the database handles processing at the application, or SQL, level.
To do this, Oracle provides a tool in the form of the EXPLAIN PLAN, which enables the DBA to pass a SQL statement
through the Oracle optimizer and learn how the statement will be executed by the database—the execution plan. That
way, it is possible to learn whether the database is performing as expected—for example, whether it uses an index on a
table instead of scanning the entire database table.
Several factors can affect the results returned by an EXPLAIN PLAN. They include
q Changes in statistics when running the database under the Cost-Based Optimizer
q The use of HINTS under the Rule-Based Optimizer that cause the query to select a particular execution path
q The addition or deletion of new indexes on one of the tables in the SQL statement when running the database
under the Rule-Based Optimizer
q Subtle changes in the WHERE or FROM clause of a SQL SELECT statement when running the database under
the Rule-Based Optimizer
q The presence of database objects with the same name as the object being referenced in the schema of the user
executing the query
It is important to understand that the results of an EXPLAIN PLAN are, therefore, by no means fixed and finite. The
DBA must be aware of changes made to database objects—such as adding new indexes—and how fast the tables are
growing.
EXPLAIN PLAN works by storing information about how a query will be executed in a table within
the user's schema. The table must exist for the EXPLAIN PLAN to work. To create the table, the user must execute the
following script. Of course, he must have the CREATE TABLE and RESOURCE or quota privileges on his default
tablespace.
% svrmgrl
SVRMGR> connect scott/tiger
Connected.
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
Statement Processed.
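The utlxplan.sql script simply creates the PLAN_TABLE. A simplified sketch of the kind of table it creates follows; the column list is abridged for illustration and is not the exact script shipped with the server:

```sql
CREATE TABLE plan_table (
    statement_id  VARCHAR2(30),   -- identifier supplied in SET STATEMENT_ID
    timestamp     DATE,           -- when the plan row was generated
    operation     VARCHAR2(30),   -- e.g. TABLE ACCESS, SORT
    options       VARCHAR2(30),   -- e.g. FULL, ORDER BY
    object_name   VARCHAR2(30),   -- table or index being accessed
    id            NUMBER,         -- step number within the plan
    parent_id     NUMBER,         -- step that consumes this step's output
    position      NUMBER          -- order among sibling steps
);
```

The id/parent_id pair is what allows the plan to be displayed as a tree with a CONNECT BY query.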
Once the table has been created, an EXPLAIN PLAN can be generated from a query by prefacing the query with the
command to perform an EXPLAIN PLAN. The following script shows how to format a query for an EXPLAIN PLAN:
CONNECT /
EXPLAIN PLAN
SET STATEMENT_ID = 'QUERY1'
INTO PLAN_TABLE FOR
SELECT O.ORDER_DATE, O.ORDERNO, O.PARTNO, P.PART_DESC, O.QTY
FROM ORDER O, PART P
WHERE O.PARTNO = P.PARTNO
Note the SET STATEMENT_ID and INTO clauses of the EXPLAIN PLAN. The value of SET STATEMENT_ID is used to
make the execution of the EXPLAIN PLAN stored within the table unique; it can be virtually any string up to 30
characters in length. Specifying a table in the INTO clause, on the other hand, tells the EXPLAIN PLAN where to place
information about the query execution. In the previous example, the execution of the query is identified as QUERY1 and
has its information stored in the table PLAN_TABLE.
Now that the EXPLAIN PLAN has loaded the table with information, there is the obvious question of how to retrieve
and interpret the information provided. Oracle provides a script in the Oracle7 Server Utilities Guide that displays
information in a tree-like fashion. It is
SELECT LPAD(' ', 2*(LEVEL-1))||operation||' '||
options||' '||object_name "QUERY PLAN"
FROM plan_table
START WITH id = 0 AND statement_id = 'QUERY1'
CONNECT BY PRIOR id = parent_id
/
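Because rows accumulate in PLAN_TABLE across runs, it is good practice to clear out a statement's rows before re-explaining it under the same identifier. One simple approach:

```sql
DELETE FROM plan_table WHERE statement_id = 'QUERY1';
COMMIT;
```

Otherwise, the display query returns rows from every prior run that used that statement identifier.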
By running a SQL query through the EXPLAIN PLAN, a pseudo-graph similar to the following is produced:
QUERY PLAN
------------------------------------------------------------------------------
SORT ORDER BY
NESTED LOOPS
FILTER
NESTED LOOPS OUTER
TABLE ACCESS FULL HEADER
TABLE ACCESS BY ROWID DETAIL
INDEX RANGE SCAN DETAIL_PK
INDEX RANGE SCAN DETAIL_PK
TABLE ACCESS FULL HEADER
TABLE ACCESS BY ROWID DETAIL
INDEX RANGE SCAN DETAIL_PK
INDEX RANGE SCAN DETAIL_PK
When you interpret the output, it is important to understand that all operations, as reported by the EXPLAIN PLAN, are
basically operation/option combinations. There is no way to discuss all these combinations or the possible interpretations
of all the EXPLAIN PLAN scenarios. As with many aspects of the IS industry—especially relational databases—the
only true teacher is experience. However, here are some of the more common operation/option pairs that EXPLAIN
PLAN returns:
FILTER Eliminates rows from a table by conditions specified in the WHERE clause of a SQL statement
INDEX/RANGE SCAN Accesses information in the table via a non-unique index (specified in the object_name column)
INDEX/UNIQUE Accesses information in the table via a unique or primary key index (specified in the object_name column)
MERGE/JOIN Combines two sorted lists of data into a single, sorted list; used on multi-table queries
SORT/GROUP BY Sorts table data as specified in a GROUP BY clause of the SQL statement
SORT/JOIN Performs a sort on the data from the tables before a MERGE JOIN operation
SORT/ORDER BY Sorts table data as specified in an ORDER BY clause of a SQL statement
SORT/UNIQUE Performs a sort on table data being returned and eliminates duplicate rows
TABLE ACCESS/FULL Performs a full scan of the database table to locate and return required data
TABLE ACCESS/ROWID Locates a row in a database table by using its unique ROWID
VIEW Returns information from a database view
The EXPLAIN PLAN is a powerful tool for software developers because it enables them to ensure that their queries are
properly tuned. Of course, changes made to database objects can adversely affect the results of the EXPLAIN PLAN, but
they are useful in determining where the performance drains on an application will occur.
SQL*Trace and TKPROF
Oracle SQL*Trace and EXPLAIN PLAN are similar in that they are both used to do performance tuning at the
application level and that they both show the manner in which the Oracle RDBMS executes a query. Unlike the
EXPLAIN PLAN, which simply shows how the database optimizer chooses to execute a query to return specified
information, SQL*Trace reveals the quantitative numbers behind the SQL execution. In addition to an execution plan,
SQL*Trace reports measures such as CPU and disk resource consumption. This is often considered
a lower-level view of how a database query is performing, for it shows factors at both the operating system and RDBMS
levels.
To use SQL*Trace, you must first set some parameters in the INIT.ORA parameter file:
MAX_DUMP_FILE_SIZE Denotes the maximum size for an Oracle-generated trace file. This value is expressed in operating system blocks (which may differ in size from database blocks).
SQL_TRACE Causes a trace file to be written for every user who connects to the database when it is set to TRUE. Because of disk space requirements and database overhead, it should be used judiciously.
TIMED_STATISTICS Causes the database to gather timing statistics when this value is set to TRUE. It causes overhead of 4 to 8 percent.
USER_DUMP_DEST The directory path where trace files will be written.
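Put together, an illustrative INIT.ORA fragment for a tracing exercise might look like the following; the values and directory path are examples only and must match your own environment:

```
MAX_DUMP_FILE_SIZE = 10240
SQL_TRACE          = FALSE
TIMED_STATISTICS   = TRUE
USER_DUMP_DEST     = /oracle/admin/orcl/udump
```

Leaving SQL_TRACE at FALSE instance-wide and enabling tracing per session keeps the overhead and disk usage confined to the sessions under study.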
Once the INIT.ORA parameters have been set, you can invoke the SQL*Trace utility manually. If the
SQL_TRACE parameter is set to TRUE, it is not necessary to invoke SQL*Trace manually because a trace file is written
automatically; however, it is more common to enable tracing manually. To invoke SQL*Trace, use either SQL or PL/SQL.
Use SQL when there is a specific query to be analyzed. For example,
% sqlplus
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
SQL> @/tmp/enter_your_query.sql
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
SQL> EXIT
You can either type in the query at the SQL prompt or source it in from an external file that contains the query.
In many cases, especially through applications such as SQL*Forms, it is necessary to invoke the trace facility by using
PL/SQL. This is especially helpful when you are dealing with a third-party application for which the SQL syntax is not
readily obvious. To invoke SQL*Trace, use the following PL/SQL statement:
BEGIN
DBMS_SESSION.SET_SQL_TRACE (TRUE);
/* PL/SQL code goes here */
As with SQL*Plus, the trace gathers information until the session disconnects or is deactivated.
/* PL/SQL code goes here */
DBMS_SESSION.SET_SQL_TRACE (FALSE);
END;
After the trace file has been generated, it must be converted into a readable format. Oracle provides the TKPROF utility
to accomplish this task. Using TKPROF, you can convert the raw trace file into a readable report.
Locating a trace file in the dump directory can be quite a task, especially if many other files exist. Two
tricks speed this process. The first is to use the UNIX command ls -lt to list the files in date order, with the newest file
listed first. The other option is to include a distinctive statement such as SELECT USERNAME FROM DUAL in the traced session and then issue a grep
USERNAME *.trc to find the trace file.
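The second trick might look like the following session, run from the USER_DUMP_DEST directory; the username, marker string, and connection details are illustrative:

```
% sqlplus scott/tiger
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
SQL> SELECT 'SCOTT_MARKER' FROM DUAL;
SQL> EXIT
% grep -l SCOTT_MARKER *.trc
```

The grep -l form prints only the name of the matching trace file, which is exactly the piece of information needed.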
Once the trace file has been located, it is necessary to run the TKPROF utility against it to produce readable output. This
information is statistical and shows how queries perform at the database and operating system level. The report produced
by TKPROF contains CPU usage, disk utilization, and the count of rows returned by the query (or queries) enclosed in
the trace file output. You can also have TKPROF return EXPLAIN PLAN information from each query in the trace.
TKPROF is invoked as follows:
% tkprof ora_4952.trc ora_4952.log
This statement takes the trace output from the ORA_4952.TRC SQL*Trace file and generates its output in the file
named ORA_4952.LOG. This particular statement does not generate an EXPLAIN PLAN for any of the queries
contained in the trace file. Supplemental options enable you to control, to a certain extent, the information that is
produced. They are
EXPLAIN Enables you to specify a username and password that will generate an EXPLAIN PLAN for each query TKPROF analyzes
INSERT Specifies where to dump both the SQL statements in the trace file and the data contained in the insert statements
PRINT Designates the number of queries in the trace file to examine; especially useful for trace files that contain many SQL statements
RECORD Enables you to specify an output file that will contain all the statements in the trace file
SORT Enables you to control the order in which the analyzed queries are displayed
SYS Indicates whether to include queries run against the SYS tables (the data dictionary) in the trace output
TABLE Specifies the schema.tablename to use when generating a report with the EXPLAIN option
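For instance, an invocation combining several of these options might look like the following sketch; the file names, credentials, and option values are illustrative, and the sort key execpu (sort by CPU time spent executing) assumes a standard TKPROF release:

```
% tkprof ora_4952.trc ora_4952.log explain=scott/tiger sort=execpu print=10 sys=no
```

This analyzes the ten most CPU-intensive statements in the trace, generates an EXPLAIN PLAN for each under the scott schema, and omits recursive data dictionary queries from the report.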
When you run the trace file through TKPROF, it generates a report. For example,
************************************************************
select o.ordid, p.partid, o.qty, p.cost, (o.qty * p.cost)