Although some Oracle tools and applications simplify or mask the use of SQL, all database operations are performed using SQL, to take advantage of the security and data integrity features built into Oracle.

Overview of SQL Statement Execution

Figure 7-1 outlines the stages commonly used to process and execute a SQL statement. In some cases, these steps might be executed in a slightly different order. For example, the DEFINE stage could occur just before the FETCH stage, depending on how your code is written.

For many Oracle tools, several of the stages are performed automatically. Most users do not need to be concerned with, or aware of, this level of detail. However, you might find this information useful when writing Oracle applications.

Identifying Extensions to SQL92 (FIPS Flagging)

The Federal Information Processing Standard for SQL (FIPS 127-2) requires a way to identify SQL statements that use vendor-supplied extensions. Oracle provides a FIPS flagger to help you write portable applications.

When FIPS flagging is active, your SQL statements are checked to see whether they include extensions that go beyond the ANSI/ISO SQL92 standard. If any non-standard constructs are found, then the Oracle Server flags them as errors and displays the violating syntax.

The FIPS flagging feature supports flagging through interactive SQL statements submitted using Enterprise Manager or SQL*Plus. The Oracle Precompilers and SQL*Module also support FIPS flagging of embedded and module language SQL.

When flagging is on and non-standard SQL is encountered, the following message is returned:

ORA-00097: Use of Oracle SQL feature not in SQL92 level Level

where level is ENTRY, INTERMEDIATE, or FULL.

Figure 7-1 The Stages in Processing a SQL Statement

Grouping Operations into Transactions

In general, only application designers using the programming interfaces to Oracle are concerned with which types of actions should be grouped together as one transaction. Transactions must be defined properly so that work is accomplished in logical units and data is kept consistent. A transaction should consist of all of the necessary parts for one logical unit of work, no more and no less. Data in all referenced tables should be in a consistent state before the transaction begins and after it ends. Transactions should consist of only the SQL statements or PL/SQL blocks that comprise one consistent change to the data.

A transfer of funds between two accounts (the transaction or logical unit of work), for example, should include the debit to one account (one SQL statement) and the credit to another account (one SQL statement). Both actions should either fail or succeed together as a unit of work; the credit should not be committed without the debit. Other non-related actions, such as a new deposit to one account, should not be included in the transfer of funds transaction.
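The funds-transfer transaction can be sketched as follows; the table, column, and account values are illustrative assumptions, not objects defined in this guide:

```sql
-- Debit one account and credit another as a single logical unit of work.
UPDATE Acct_tab SET bal = bal - 100 WHERE acctno = 7715;
UPDATE Acct_tab SET bal = bal + 100 WHERE acctno = 7720;
COMMIT;   -- both changes succeed together, or neither is made permanent
```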

Improving Transaction Performance

In addition to determining which types of actions form a transaction, when you design an application, you must also determine if you can take any additional measures to improve performance. You should consider the following performance enhancements when designing and writing your application. Unless otherwise noted, each of these features is described in Oracle9i Database Concepts.

Use the BEGIN_DISCRETE_TRANSACTION procedure to improve the performance of short, non-distributed transactions.

Use the SET TRANSACTION command with the USE ROLLBACK SEGMENT parameter to explicitly assign a transaction to an appropriate rollback segment. This can eliminate the need to dynamically allocate additional extents, an operation that can reduce overall system performance.

Use the SET TRANSACTION command with the ISOLATION LEVEL set to SERIALIZABLE to get ANSI/ISO serializable transactions.

Establish standards for writing SQL statements so that you can take advantage of shared SQL areas. Oracle recognizes identical SQL statements and allows them to share memory areas. This reduces memory usage on the database server and increases system throughput.

Use the ANALYZE command to collect statistics that can be used by Oracle to implement a cost-based approach to SQL statement optimization. You can supply additional "hints" to the optimizer as needed.

Call the DBMS_APPLICATION_INFO.SET_ACTION procedure before beginning a transaction to register and name a transaction for later use when measuring performance across an application. You should specify what type of activity a transaction performs so that the system tuners can later see which transactions are taking up the most system resources.

Increase user productivity and query efficiency by including user-written PL/SQL functions in SQL expressions as described in "Calling Stored Functions from SQL Expressions".

Create explicit cursors when writing a PL/SQL application.

When writing precompiler programs, increasing the number of cursors using the MAXOPENCURSORS precompiler option can often reduce the frequency of parsing and improve performance.
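Two of these measures can be sketched together; the rollback segment name and the action label below are illustrative assumptions, not values from this guide:

```sql
-- Assign the transaction to a specific rollback segment
-- (big_rbs is a hypothetical segment name).
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;

-- Register the transaction's activity for later performance measurement
-- (the action name is illustrative).
BEGIN
  DBMS_APPLICATION_INFO.SET_ACTION('TRANSFER_FUNDS');
END;
/
```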

Committing Transactions

To commit a transaction, use the COMMIT command. The following two statements are equivalent and commit the current transaction:

COMMIT WORK;
COMMIT;

The COMMIT command lets you include the COMMENT parameter along with a comment (less than 50 characters) that provides information about the transaction being committed. This option is useful for including information about the origin of the transaction when you commit distributed transactions.
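For example (the comment text is illustrative):

```sql
-- Commit with a comment identifying the origin of the transaction.
COMMIT COMMENT 'Sales office 223, daily funds transfer';
```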

Rolling Back Transactions

To roll back an entire transaction, or to roll back part of a transaction to a savepoint, use the ROLLBACK command. For example, either of the following statements rolls back the entire current transaction:

ROLLBACK WORK;
ROLLBACK;

The WORK option of the ROLLBACK command is optional and has no effect; it is provided for compatibility with standard SQL.

To roll back to a savepoint defined in the current transaction, use the TO option of the ROLLBACK command. For example, either of the following statements rolls back the current transaction to the savepoint named POINT1:
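A sketch of the two equivalent forms (assuming SAVEPOINT Point1 was issued earlier in the same transaction):

```sql
ROLLBACK TO SAVEPOINT Point1;
ROLLBACK TO Point1;
```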

For example, consider a transaction that issues a DELETE statement, creates savepoint A, issues two more statements, rolls back to savepoint A, issues a second INSERT statement, and then commits. The COMMIT commits all actions performed by the first DML statement (the DELETE statement) and the last DML statement (the second INSERT statement). All other statements (the second and the third statements) of the transaction were rolled back before the COMMIT, and the savepoint A is no longer active.

Privileges Required for Transaction Management

No privileges are required to control your own transactions; any user can issue a COMMIT, ROLLBACK, or SAVEPOINT statement within a transaction.

Ensuring Repeatable Reads with Read-Only Transactions

By default, the consistency model for Oracle guarantees statement-level read consistency, but does not guarantee transaction-level read consistency (repeatable reads). If you want transaction-level read consistency, and if your transaction does not require updates, then you can specify a read-only transaction. After indicating that your transaction is read-only, you can execute as many queries as you like against any database table, knowing that the results of each query in the read-only transaction are consistent with respect to a single point in time.

A read-only transaction does not acquire any additional data locks to provide transaction-level read consistency. The multi-version consistency model used for statement-level read consistency is used to provide transaction-level read consistency; all queries return information with respect to the system change number (SCN) determined when the read-only transaction begins. Because no data locks are acquired, other transactions can query and update data being queried concurrently by a read-only transaction.

Changed data blocks queried by a read-only transaction are reconstructed using data from rollback segments. Therefore, long running read-only transactions sometimes receive a "snapshot too old" error (ORA-01555). Create more, or larger, rollback segments to avoid this. You can also issue long-running queries when online transaction processing is at a minimum, or you can obtain a shared lock on the table before querying it, preventing any other modifications during the transaction.

A read-only transaction is started with a SET TRANSACTION statement that includes the READ ONLY option. For example:

SET TRANSACTION READ ONLY;

The SET TRANSACTION statement must be the first statement of a new transaction; if any DML statements (including queries) or other non-DDL statements (such as SET ROLE) precede a SET TRANSACTION READ ONLY statement, an error is returned. Once a SET TRANSACTION READ ONLY statement successfully executes, only SELECT (without a FOR UPDATE clause), COMMIT, ROLLBACK, or non-DML statements (such as SET ROLE, ALTER SYSTEM, LOCK TABLE) are allowed in the transaction. Otherwise, an error is returned. A COMMIT, ROLLBACK, or DDL statement terminates the read-only transaction; a DDL statement causes an implicit commit of the read-only transaction and commits in its own transaction.
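A sketch of a read-only reporting transaction (the queries are illustrative; Emp_tab and Dept_tab are the sample tables used elsewhere in this guide):

```sql
SET TRANSACTION READ ONLY;

SELECT SUM(sal) FROM Emp_tab;    -- consistent as of the transaction start
SELECT COUNT(*) FROM Dept_tab;   -- sees the same snapshot as the query above

COMMIT;                          -- ends the read-only transaction
```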

PL/SQL implicitly declares a cursor for all SQL data manipulation statements, including queries that return only one row. For queries that return more than one row, you can explicitly declare a cursor to process the rows individually.

A cursor is a handle to a specific private SQL area. In other words, a cursor can be thought of as a name for a specific private SQL area. A PL/SQL cursor variable enables the retrieval of multiple rows from a stored procedure. Cursor variables allow you to pass cursors as parameters in your 3GL application. Cursor variables are described in PL/SQL User's Guide and Reference.

Although most Oracle users rely on the automatic cursor handling of the Oracle utilities, the programmatic interfaces offer application designers more control over cursors. In application development, a cursor is a named resource available to a program, which can be specifically used for parsing SQL statements embedded within the application.

Declaring and Opening Cursors

Although there is no absolute limit to the total number of cursors one session can have open at one time, two constraints apply:

Each cursor requires virtual memory, so a session's total number of cursors is limited by the memory available to that process.

A limit on the number of cursors each session can have open is set system-wide by the OPEN_CURSORS initialization parameter, found in the parameter file (such as INIT.ORA).

Explicitly creating cursors for precompiler programs can offer some advantages in tuning those applications. For example, increasing the number of cursors can often reduce the frequency of parsing and improve performance. If you know how many cursors may be required at a given time, then you can make sure you can open that many simultaneously.

Using a Cursor to Execute Statements Again

After each stage of execution, the cursor retains enough information about the SQL statement to re-execute the statement without starting over, as long as no other SQL statement has been associated with that cursor. This is illustrated in Figure 7-1. Notice that the statement can be re-executed without including the parse stage.

By opening several cursors, the parsed representation of several SQL statements can be saved. Repeated execution of the same SQL statements can thus begin at the describe, define, bind, or execute step, saving the repeated cost of opening cursors and parsing.

To understand the performance characteristics of a cursor, a DBA can retrieve the text of the query represented by the cursor using the V$SQL catalog view. Because the results of EXPLAIN PLAN on the original query might differ from the way the query is actually processed, the DBA can get more precise information by examining the V$SQL_PLAN and V$SQL_PLAN_STATISTICS catalog views. The V$SQL_PLAN_ENV catalog view shows what parameters have changed from their default values, which might cause the EXPLAIN PLAN output to differ from the actual execution plan for the cursor.

Closing Cursors

Closing a cursor means that the information currently in the associated private area is lost and its memory is deallocated. Once a cursor is opened, it is not closed until one of the following events occurs:

The user program terminates its connection to the server.

The user program, if it is an OCI program or precompiler application, explicitly closes the cursor during its execution. (When such a program terminates, any cursors remaining open are implicitly closed.)

Cancelling Cursors

Cancelling a cursor frees resources from the current fetch. The information currently in the associated private area is lost, but the cursor remains open, parsed, and associated with its bind variables.

You might override the default locking of Oracle when either of the following is true:

You want transaction-level read consistency or "repeatable reads", where transactions query a consistent set of data for the duration of the transaction, knowing that the data has not been changed by any other transactions. This level of consistency can be achieved by using explicit locking, read-only transactions, serializable transactions, or overriding default locking for the system.

A transaction requires exclusive access to a resource, so that it does not have to wait for other transactions to complete before proceeding with its statements.

The automatic locking mechanisms can be overridden at two different levels:

Transaction level: Transactions that include the following SQL statements override Oracle's default locking: the LOCK TABLE command, the SELECT command including the FOR UPDATE clause, and the SET TRANSACTION command with the READ ONLY or ISOLATION LEVEL SERIALIZABLE options. Locks acquired by these statements are released after the transaction is committed or rolled back.

System level: An instance can be started with nondefault locking by adjusting the initialization parameters SERIALIZABLE and ROW_LOCKING.

The following sections describe each option available for overriding the default locking of Oracle. The initialization parameter DML_LOCKS determines the maximum number of DML locks allowed.

Although the default value is usually enough, you might need to increase it if you use additional manual locks.

Caution:

If you override the default locking of Oracle at any level, be sure that the overriding locking procedures operate correctly: Ensure that data integrity is guaranteed, data concurrency is acceptable, and deadlocks are not possible or are appropriately handled.

Choosing a Locking Strategy

A transaction explicitly acquires the specified table locks when a LOCK TABLE statement is executed. A LOCK TABLE statement manually overrides default locking. When a LOCK TABLE statement is issued on a view, the underlying base tables are locked. The following statement acquires exclusive table locks for the EMP_TAB and DEPT_TAB tables on behalf of the containing transaction:

LOCK TABLE Emp_tab, Dept_tab
IN EXCLUSIVE MODE NOWAIT;

You can specify several tables or views to lock in the same mode; however, only a single lock mode can be specified for each LOCK TABLE statement.

Note:

When a table is locked, all rows of the table are locked. No other user can modify the table.

You can also indicate whether you want to wait to acquire the lock. If you specify the NOWAIT option, then you acquire the table lock only if it is immediately available; otherwise, an error is returned to indicate that the lock is not available at this time. In this case, you can attempt to lock the resource at a later time. If NOWAIT is omitted, then the transaction does not proceed until the requested table lock is acquired. If the wait for a table lock is excessive, then you might want to cancel the lock operation and retry at a later time; you can code this logic into your applications.

When to Lock with ROW SHARE and ROW EXCLUSIVE Mode

ROW SHARE and ROW EXCLUSIVE table locks offer the highest degree of concurrency. You might use these locks if:

Your transaction needs to prevent another transaction from acquiring an intervening share, share row, or exclusive table lock for a table before the table can be updated in your transaction. If another transaction acquires an intervening share, share row, or exclusive table lock, no other transactions can update the table until the locking transaction commits or rolls back.

Your transaction needs to prevent a table from being altered or dropped before the table can be modified later in your transaction.

When to Lock with SHARE Mode

LOCK TABLE Emp_tab IN SHARE MODE;

SHARE table locks are rather restrictive data locks. You might use these locks if:

Your transaction only queries the table, and requires a consistent set of the table's data for the duration of the transaction.

You want to hold up other transactions that try to update the locked table until all transactions that hold SHARE locks on the table either commit or roll back.

Other transactions may acquire concurrent SHARE table locks on the same table, also allowing them the option of transaction-level read consistency.

Caution:

Your transaction may or may not update the table later in the same transaction. However, if multiple transactions concurrently hold share table locks for the same table, no transaction can update the table (even if row locks are held as the result of a SELECT... FOR UPDATE statement). Therefore, if concurrent share table locks on the same table are common, updates cannot proceed and deadlocks are common. In this case, use share row exclusive or exclusive table locks instead.

For example, assume that two tables, EMP_TAB and BUDGET_TAB, require a consistent set of data in a third table, DEPT_TAB. For a given department number, you want to update the information in both of these tables, and ensure that no new members are added to the department between the two updates.

Although this scenario is quite rare, it can be accommodated by locking the DEPT_TAB table in SHARE MODE, as shown in the following example. Because the DEPT_TAB table is rarely updated, locking it probably does not cause many other transactions to wait long.
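A hedged sketch of this approach (the column names and update factor are illustrative, not this guide's exact example):

```sql
-- Protect DEPT_TAB against concurrent inserts while both updates run.
LOCK TABLE Dept_tab IN SHARE MODE;

UPDATE Emp_tab
   SET sal = sal * 1.1
 WHERE deptno = 20;

UPDATE Budget_tab
   SET totsal = totsal * 1.1
 WHERE deptno = 20;

COMMIT;   -- releases the table lock
```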

Note:

You may need to set up data structures similar to the following for certain examples to work:
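One possible minimal setup (the columns are illustrative, not the manual's exact schema):

```sql
CREATE TABLE Dept_tab (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14)
);

CREATE TABLE Emp_tab (
  empno  NUMBER(4) PRIMARY KEY,
  sal    NUMBER(7,2),
  deptno NUMBER(2) REFERENCES Dept_tab (deptno)
);
```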

Your transaction also ensures transaction-level read consistency for the locked table until the transaction is committed or rolled back.

You are not concerned about low levels of data concurrency, making transactions that request exclusive table locks wait in line to update the table sequentially.

Privileges Required

You can automatically acquire any type of table lock on tables in your schema. To acquire a table lock on a table in another schema, you must have the LOCK ANY TABLE system privilege or any object privilege (for example, SELECT or UPDATE) for the table.

Letting Oracle Control Table Locking

Letting Oracle control table locking means your application needs less programming logic, but also has less control, than if you manage the table locks yourself.

Table locks are also controlled by the ROW_LOCKING and SERIALIZABLE initialization parameters. By default, SERIALIZABLE is set to FALSE and ROW_LOCKING is set to ALWAYS. In almost every case, these parameters should not be altered. They are provided for sites that must run in ANSI/ISO compatible mode, or that want to use applications written to run with earlier versions of Oracle. Only these sites should consider altering these parameters, because nondefault settings can cause significant performance degradation.

The settings for these parameters should be changed only when an instance is shut down. If multiple instances are accessing a single database, then all instances should use the same setting for these parameters.

Summary of Nondefault Locking Options

Three combinations of settings for SERIALIZABLE and ROW_LOCKING, other than the default settings, are available to change the way locking occurs for transactions. Table 7-1 summarizes the nondefault settings and why you might choose to execute your transactions in a particular way.

Table 7-1 Summary of Nondefault Locking Options

Case  Description                                              SERIALIZABLE  ROW_LOCKING
1     Equivalent to Version 5 and earlier Oracle releases
      (no concurrent inserts, updates, or deletes in a table)

Table 7-2 illustrates the difference in locking behavior resulting from the three possible settings of the SERIALIZABLE option and ROW_LOCKING initialization parameter, as shown in Table 7-1.

Table 7-2 Nondefault Locking Behavior

Statement                       Case 1       Case 2       Case 3
                                row   table  row   table  row   table
SELECT                          -     -      -     S      -     S
INSERT                          X     SRX    X     RX     X     SRX
UPDATE                          X     SRX    X     SRX    X     SRX
DELETE                          X     SRX    X     SRX    X     SRX
SELECT... FOR UPDATE            X     RS     X     S      X     S
LOCK TABLE... IN...             -     -      -     -      -     -
  ROW SHARE MODE                -     RS     -     RS     -     RS
  ROW EXCLUSIVE MODE            -     RX     -     RX     -     RX
  SHARE MODE                    -     S      -     S      -     S
  SHARE ROW EXCLUSIVE MODE      -     SRX    -     SRX    -     SRX
  EXCLUSIVE MODE                -     X      -     X      -     X
DDL statements                  -     X      -     X      -     X

Explicitly Acquiring Row Locks

You can override default locking with a SELECT statement that includes the FOR UPDATE clause. This statement acquires exclusive row locks for selected rows (as an UPDATE statement does), in anticipation of updating the selected rows in a subsequent statement.

You can use a SELECT... FOR UPDATE statement to lock a row without actually changing it. For example, several triggers in Chapter 15, "Using Triggers", show how to implement referential integrity. In the EMP_DEPT_CHECK trigger (see "Foreign Key Trigger for Child Table"), the row that contains the referenced parent key value is locked to guarantee that it remains for the duration of the transaction; if the parent key is updated or deleted, referential integrity would be violated.

SELECT... FOR UPDATE statements are often used by interactive programs that allow a user to modify fields of one or more specific rows (which might take some time); row locks are acquired so that only a single interactive program user is updating the rows at any given time.

If a SELECT... FOR UPDATE statement is used when defining a cursor, the rows in the return set are locked when the cursor is opened (before the first fetch) rather than as they are fetched from the cursor. Locks are only released when the transaction that opened the cursor is committed or rolled back, not when the cursor is closed.

Each row in the return set of a SELECT... FOR UPDATE statement is locked individually; if another transaction holds a conflicting row lock, the SELECT... FOR UPDATE statement waits until that transaction releases it. If a SELECT... FOR UPDATE statement locks many rows in a table, and if the table experiences a lot of update activity, it might be faster to acquire an EXCLUSIVE table lock instead.

When acquiring row locks with SELECT... FOR UPDATE, you can specify the NOWAIT option to indicate that you are not willing to wait to acquire the lock. If you cannot acquire the lock immediately, an error is returned to signal that the lock is not possible at this time. You can try to lock the row again later.

By default, the transaction waits until the requested row lock is acquired. If the wait for a row lock is too long, you can code logic into your application to cancel the lock operation and try again later.
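A hedged sketch of acquiring row locks without waiting (the table, columns, and predicate are illustrative):

```sql
-- Lock the selected rows now, or fail immediately if another
-- transaction already holds a conflicting row lock.
SELECT empno, sal
  FROM Emp_tab
 WHERE deptno = 20
   FOR UPDATE NOWAIT;
```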

About User Locks

You can use Oracle Lock Management services for your applications by making calls to the DBMS_LOCK package. It is possible to request a lock of a specific mode, give it a unique name recognizable in another procedure in the same or another instance, change the lock mode, and release it. Because a reserved user lock is the same as an Oracle lock, it has all the features of an Oracle lock, such as deadlock detection. Be certain that any user locks used in distributed transactions are released upon COMMIT, or an undetected deadlock can occur.

Example of a User Lock

The following Pro*COBOL precompiler example shows how locks can be used to ensure that there are no conflicts when multiple people need to access a single device.

*****************************************************************
* Print Check *
* Any cashier may issue a refund to a customer returning goods. *
* Refunds under $50 are given in cash, above that by check. *
* This code prints the check. The one printer is opened by all *
* the cashiers to avoid the overhead of opening and closing it *
* for every check. This means that lines of output from multiple*
* cashiers could become interleaved if we don't ensure exclusive*
* access to the printer. The DBMS_LOCK package is used to *
* ensure exclusive access. *
*****************************************************************
CHECK-PRINT
*
* Get the lock "handle" for the printer lock.
MOVE "CHECKPRINT" TO LOCKNAME-ARR.
MOVE 10 TO LOCKNAME-LEN.
EXEC SQL EXECUTE
BEGIN DBMS_LOCK.ALLOCATE_UNIQUE ( :LOCKNAME, :LOCKHANDLE );
END; END-EXEC.
*
* Lock the printer in exclusive mode (default mode).
EXEC SQL EXECUTE
BEGIN DBMS_LOCK.REQUEST ( :LOCKHANDLE );
END; END-EXEC.
* We now have exclusive use of the printer, print the check.
...
*
* Unlock the printer so other people can use it
*
EXEC SQL EXECUTE
BEGIN DBMS_LOCK.RELEASE ( :LOCKHANDLE );
END; END-EXEC.

Viewing and Monitoring Locks

Oracle provides two facilities to display locking information for ongoing transactions within an instance:

The UTLLOCKT.SQL script displays a simple character lock wait-for graph in tree-structured fashion. When you execute the script from any ad hoc SQL tool (such as SQL*Plus), it prints the sessions in the system that are waiting for locks and the corresponding blocking locks. The location of this script file is operating system dependent. (You must have run the CATBLOCK.SQL script before using UTLLOCKT.SQL.)

Using Serializable Transactions for Concurrency Control

By default, the Oracle Server permits concurrently executing transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.

If a transaction A attempts to update or delete a row that has been locked by another transaction B (by way of a DML or SELECT... FOR UPDATE statement), then A's DML command blocks until B commits or rolls back. Once B commits, transaction A can see changes that B has made to the database.

For most applications, this concurrency model is the appropriate one, because it provides higher concurrency and thus better performance. But some rare cases require transactions to be serializable. Serializable transactions must execute in such a way that they appear to be executing one at a time (serially), rather than concurrently. Concurrent transactions executing in serialized mode can only make database changes that they could have made if the transactions ran one after the other.

The ANSI/ISO SQL standard SQL92 defines three possible kinds of transaction interaction, and four levels of isolation that provide increasing protection against these interactions. These interactions and isolation levels are summarized in Table 7-3.

Table 7-3 Summary of ANSI Isolation Levels

Isolation Level     Dirty Read (1)  Non-Repeatable Read (2)  Phantom Read (3)
READ UNCOMMITTED    Possible        Possible                 Possible
READ COMMITTED      Not possible    Possible                 Possible
REPEATABLE READ     Not possible    Not possible             Possible
SERIALIZABLE        Not possible    Not possible             Not possible

Notes:

(1) A transaction can read uncommitted data changed by another transaction.

(2) A transaction rereads data committed by another transaction and sees the new data.

(3) A transaction can execute a query again, and discover new rows inserted by another committed transaction.

The behavior of Oracle with respect to these isolation levels is summarized below:

READ UNCOMMITTED: Oracle never permits "dirty reads." Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle.

READ COMMITTED: Oracle meets the READ COMMITTED isolation standard. This is the default mode for all Oracle applications. Because an Oracle query only sees data that was committed at the beginning of the query (the snapshot time), Oracle actually offers more consistency than is required by the ANSI/ISO SQL92 standards for READ COMMITTED isolation.

REPEATABLE READ: Oracle does not normally support this isolation level, except as provided by SERIALIZABLE.

SERIALIZABLE: You can set this isolation level using the SET TRANSACTION command or the ALTER SESSION command.

Figure 7-2 Time Line for Two Transactions

How Serializable Transactions Interact

Figure 7-3 shows how a serializable transaction (Transaction B) interacts with another transaction (A, which can be either SERIALIZABLE or READ COMMITTED).

When a serializable transaction fails with an ORA-08177 error ("cannot serialize access"), the application can take any of several actions:

Commit the work executed to that point

Execute additional, different, statements, perhaps after rolling back to a prior savepoint in the transaction

Roll back the entire transaction and try it again

Oracle stores control information in each data block to manage access by concurrent transactions. To use the SERIALIZABLE isolation level, you must use the INITRANS clause of the CREATE TABLE or ALTER TABLE command to set aside storage for this control information. To use serializable mode, INITRANS must be set to at least 3.

Setting the Isolation Level of a Transaction

You can change the isolation level of a transaction using the ISOLATION LEVEL clause of the SET TRANSACTION command, which must be the first command issued in a transaction.

Use the ALTER SESSION command to set the transaction isolation level on a session-wide basis.
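For example, the two forms can be sketched as follows (SET TRANSACTION must be the first statement of the transaction):

```sql
-- Per-transaction setting.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Session-wide setting.
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
```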

The INITRANS Parameter

Oracle stores control information in each data block to manage access by concurrent transactions. Therefore, if you set the transaction isolation level to serializable, then you must use the ALTER TABLE command to set INITRANS to at least 3. This parameter causes Oracle to allocate sufficient storage in each block to record the history of recent transactions that accessed the block. Higher values should be used for tables that will undergo many transactions updating the same blocks.
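For example (the table name is illustrative):

```sql
ALTER TABLE Emp_tab INITRANS 3;
```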

Referential Integrity and Serializable Transactions

Because Oracle does not use read locks, even in SERIALIZABLE transactions, data read by one transaction can be overwritten by another. Transactions that perform database consistency checks at the application level should not assume that the data they read will not change during the execution of the transaction (even though such changes are not visible to the transaction). Database inconsistencies can result unless such application-level consistency checks are coded carefully, even when using SERIALIZABLE transactions. Note, however, that the examples shown in this section are applicable for both READ COMMITTED and SERIALIZABLE transactions.

Figure 7-3 shows two different transactions that perform application-level checks to maintain the referential integrity parent/child relationship between two tables. One transaction checks that a row with a specific primary key value exists in the parent table before inserting corresponding child rows. The other transaction checks to see that no corresponding detail rows exist before deleting a parent row. In this case, both transactions assume (but do not ensure) that data they read will not change before the transaction completes.

Figure 7-3 Referential Integrity Check

The read issued by transaction A does not prevent transaction B from deleting the parent row, and transaction B's query for child rows does not prevent transaction A from inserting child rows. This scenario leaves a child row in the database with no corresponding parent row. This result occurs even if both A and B are SERIALIZABLE transactions, because neither transaction prevents the other from making changes in the data it reads to check consistency.

As this example shows, sometimes you must take steps to ensure that the data read by one transaction is not concurrently written by another. This requires a greater degree of transaction isolation than defined by SQL92 SERIALIZABLE mode.

Using SELECT FOR UPDATE

Fortunately, it is straightforward in Oracle to prevent the anomaly described above:

Transaction A can use SELECT FOR UPDATE to query and lock the parent row and thereby prevent transaction B from deleting the row.

Transaction B can prevent Transaction A from gaining access to the parent row by reversing the order of its processing steps. Transaction B first deletes the parent row, and then rolls back if its subsequent query detects the presence of corresponding rows in the child table.
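The first technique might be sketched as follows, assuming hypothetical PARENT and CHILD tables related by PARENT_ID:

```sql
-- Transaction A: lock the parent row so that no other transaction
-- can delete it before the child row is committed.
SELECT parent_id FROM parent
  WHERE parent_id = 123
  FOR UPDATE;

INSERT INTO child (child_id, parent_id) VALUES (1, 123);
COMMIT;  -- releases the lock on the parent row
```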

Referential integrity can also be enforced in Oracle using database triggers, instead of a separate query as in Transaction A above. For example, an INSERT into the child table can fire a BEFORE INSERT row-level trigger to check for the corresponding parent row. The trigger queries the parent table using SELECT FOR UPDATE, ensuring that the parent row (if it exists) remains in the database for the duration of the transaction inserting the child row. If the corresponding parent row does not exist, the trigger rejects the insert of the child row.
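Such a trigger might look like the following sketch (the table, column, and trigger names are assumptions):

```sql
CREATE OR REPLACE TRIGGER child_parent_check
  BEFORE INSERT ON child
  FOR EACH ROW
DECLARE
  dummy parent.parent_id%TYPE;
BEGIN
  -- Lock the parent row for the duration of this transaction.
  SELECT parent_id INTO dummy FROM parent
    WHERE parent_id = :NEW.parent_id
    FOR UPDATE;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RAISE_APPLICATION_ERROR(-20001, 'Parent row does not exist');
END;
/
```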

SQL statements issued by a database trigger execute in the context of the SQL statement that caused the trigger to fire. All SQL statements executed within a trigger see the database in the same state as the triggering statement. Thus, in a READ COMMITTED transaction, the SQL statements in a trigger see the database as of the beginning of the triggering statement's execution, and in a transaction executing in SERIALIZABLE mode, the SQL statements see the database as of the beginning of the transaction. In either case, the use of SELECT FOR UPDATE by the trigger correctly enforces referential integrity.

READ COMMITTED and SERIALIZABLE Isolation

Oracle gives you a choice of two transaction isolation levels with different characteristics. Both the READ COMMITTED and SERIALIZABLE isolation levels provide a high degree of consistency and concurrency. Both levels reduce contention, and are designed for deploying real-world applications. The rest of this section compares the two isolation modes and provides information helpful in choosing between them.

Transaction Set Consistency

A useful way to describe the READ COMMITTED and SERIALIZABLE isolation levels in Oracle is to consider:

A collection of database tables (or any set of data)

A sequence of reads of rows in those tables

The set of transactions committed at any moment

An operation (a query or a transaction) is transaction set consistent if its read operations all return data written by the same set of committed transactions. When an operation is not transaction set consistent, some reads reflect the changes of one set of transactions, and other reads reflect changes made by other transactions. Such an operation sees the database in a state that reflects no single set of committed transactions.

Oracle transactions executing in READ COMMITTED mode are transaction set consistent on a per-statement basis, because all rows read by a query must be committed before the query begins.

Oracle transactions executing in SERIALIZABLE mode are transaction set consistent on a per-transaction basis, because all statements in a SERIALIZABLE transaction execute on an image of the database as of the beginning of the transaction.

In other database systems, a single query run in READ COMMITTED mode provides results that are not transaction set consistent. The query is not transaction set consistent, because it may see only a subset of the changes made by another transaction. For example, a join of a master table with a detail table could see a master record inserted by another transaction, but not the corresponding details inserted by that transaction, or vice versa. Oracle's READ COMMITTED mode avoids this problem, and so provides a greater degree of consistency than read-locking systems.

In read-locking systems, at the cost of preventing concurrent updates, SQL92 REPEATABLE READ isolation provides transaction set consistency at the statement level, but not at the transaction level. The absence of phantom protection means two queries issued by the same transaction can see data committed by different sets of other transactions. Only the throughput-limiting and deadlock-susceptible SERIALIZABLE mode in these systems provides transaction set consistency at the transaction level.

Choosing an Isolation Level for Transactions

Choose an isolation level that is appropriate to the specific application and workload. You might choose different isolation levels for different transactions. The choice depends on performance and consistency needs, and consideration of application coding requirements.

For environments with many concurrent users rapidly submitting transactions, you must assess transaction performance against the expected transaction arrival rate and response time demands, and choose an isolation level that provides the required degree of consistency while performing well. Frequently, for high performance environments, you must trade off consistency against concurrency (transaction throughput).

Both Oracle isolation modes provide high levels of consistency and concurrency (and performance) through the combination of row-level locking and Oracle's multi-version concurrency control system. Because readers and writers do not block one another in Oracle, while queries still see consistent data, both READ COMMITTED and SERIALIZABLE isolation provide a high level of concurrency for high performance, without the need for reading uncommitted ("dirty") data.

READ COMMITTED isolation can provide considerably more concurrency with a somewhat increased risk of inconsistent results (due to phantoms and non-repeatable reads) for some transactions. The SERIALIZABLE isolation level provides somewhat more consistency by protecting against phantoms and non-repeatable reads, and may be important where a read/write transaction executes a query more than once. However, SERIALIZABLE mode requires applications to check for the "can't serialize access" error, and can significantly reduce throughput in an environment with many concurrent transactions accessing the same data for update. Application logic that checks database consistency must take into account the fact that reads do not block writes in either mode.

Application Tips for Transactions

When a transaction runs in serializable mode, any attempt to change data that was changed by another transaction since the beginning of the serializable transaction causes an error:

ORA-08177: Can't serialize access for this transaction.

When you get this error, roll back the current transaction and execute it again. The transaction gets a new transaction snapshot, and the operation is likely to succeed.

To minimize the performance overhead of rolling back transactions and executing them again, try to put DML statements that might conflict with other concurrent transactions near the beginning of your transaction.
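A retry loop for this error might be sketched as follows (the ACCOUNTS table and the retry count of 3 are assumptions):

```sql
DECLARE
  serializable_error EXCEPTION;
  PRAGMA EXCEPTION_INIT(serializable_error, -8177);
BEGIN
  FOR attempt IN 1..3 LOOP
    BEGIN
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
      UPDATE accounts SET balance = balance - 100
        WHERE account_id = 42;
      COMMIT;
      EXIT;  -- success; leave the loop
    EXCEPTION
      WHEN serializable_error THEN
        ROLLBACK;  -- discard the old snapshot, then retry
    END;
  END LOOP;
END;
/
```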

Autonomous Transactions

This section gives a brief overview of autonomous transactions and what you can do with them.

At times, you may want to commit or roll back some changes to a table independently of a primary transaction's final outcome. For example, in a stock purchase transaction, you may want to commit a customer's information regardless of whether the overall stock purchase actually goes through. Or, while running that same transaction, you may want to log error messages to a debug table even if the overall transaction rolls back. Autonomous transactions allow you to do such tasks.

An autonomous transaction (AT) is an independent transaction started by another transaction, the main transaction (MT). It lets you suspend the main transaction, do SQL operations, commit or roll back those operations, then resume the main transaction.

An autonomous transaction executes within an autonomous scope. An autonomous scope is a routine you mark with the pragma (compiler directive) AUTONOMOUS_TRANSACTION. The pragma instructs the PL/SQL compiler to mark a routine as autonomous (independent). In this context, the term routine includes:

Top-level (not nested) anonymous PL/SQL blocks

Local, standalone, and packaged functions and procedures

Methods of a SQL object type

PL/SQL triggers
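For instance, a standalone procedure marked with the pragma can log messages that persist even if the calling transaction rolls back (the DEBUG_LOG table and procedure name are illustrative):

```sql
CREATE OR REPLACE PROCEDURE log_message (msg VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO debug_log (logged_at, message) VALUES (SYSDATE, msg);
  COMMIT;  -- ends only the autonomous transaction
END;
/
```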

Figure 7-4 shows how control flows from the main routine (MT) to an autonomous routine (AT) and back again. As you can see, the autonomous routine can commit more than one transaction (AT1 and AT2) before control returns to the main routine.

Figure 7-4 Transaction Control Flow

When you enter the executable section of an autonomous routine, the main transaction suspends. When you exit the routine, the main transaction resumes. COMMIT and ROLLBACK end the active autonomous transaction but do not exit the autonomous routine. As Figure 7-4 shows, when one transaction ends, the next SQL statement begins another transaction.

A few more characteristics of autonomous transactions:

The changes autonomous transactions effect do not depend on the state or the eventual disposition of the main transaction. For example:

An autonomous transaction does not see any changes made by the main transaction.

When an autonomous transaction commits or rolls back, it does not affect the outcome of the main transaction.

The changes an autonomous transaction effects are visible to other transactions as soon as that autonomous transaction commits. This means that users can access the updated information without having to wait for the main transaction to commit.

Autonomous transactions can start other autonomous transactions.

Figure 7-5 illustrates some of the possible sequences autonomous transactions can follow.

Figure 7-5 Possible Sequences of Autonomous Transactions

Examples of Autonomous Transactions

The two examples in this section illustrate some of the ways you can use autonomous transactions.

As these examples illustrate, there are four possible outcomes that can occur when you use autonomous and main transactions. The following table presents these possible outcomes. As you can see, there is no dependency between the outcome of an autonomous transaction and that of a main transaction.

Autonomous Transaction    Main Transaction
----------------------    ----------------
Commits                   Commits
Commits                   Rolls back
Rolls back                Commits
Rolls back                Rolls back

Entering a Buy Order

In this example, a customer enters a buy order. That customer's information (such as name, address, and phone) is committed to a customer information table, even if the sale does not go through.

Figure 7-6 Example: A Buy Order

Example: Making a Bank Withdrawal

In the following banking application, a customer tries to make a withdrawal from his or her account. In the process, a main transaction calls one of two autonomous transaction scopes (AT Scope 1 and AT Scope 2).

The following diagrams illustrate three possible scenarios for this transaction.

Scenario 1: There are sufficient funds to cover the withdrawal, and therefore the bank releases the funds.

Scenario 2: There are insufficient funds to cover the withdrawal, but the customer has overdraft protection. The bank therefore releases the funds.

Scenario 3: There are insufficient funds to cover the withdrawal, the customer does not have overdraft protection, and the bank therefore withholds the requested funds.

Scenario 1:

There are sufficient funds to cover the withdrawal, and therefore the bank releases the funds.

Defining Autonomous Transactions

Note: This section is provided here to round out your general understanding of autonomous transactions. For a more thorough understanding of autonomous transactions, see PL/SQL User's Guide and Reference.

To define autonomous transactions, you use the pragma (compiler directive) AUTONOMOUS_TRANSACTION. The pragma instructs the PL/SQL compiler to mark the procedure, function, or PL/SQL block as autonomous (independent).

You can code the pragma anywhere in the declarative section of a procedure, function, or PL/SQL block. But, for readability, code the pragma at the top of the section. The syntax follows:
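A minimal sketch of the declaration follows (the procedure name is illustrative; see PL/SQL User's Guide and Reference for the complete syntax diagram):

```sql
CREATE OR REPLACE PROCEDURE process_log_entry AS
  PRAGMA AUTONOMOUS_TRANSACTION;
  -- other local declarations
BEGIN
  NULL;  -- body statements commit or roll back independently
END;
/
```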

You cannot use the pragma to mark all subprograms in a package (or all methods in an object type) as autonomous. Only individual routines can be marked autonomous. For example, the following pragma is illegal:
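For instance, a package-level pragma such as the following sketch would be rejected by the compiler (the package name is an assumption):

```sql
CREATE OR REPLACE PACKAGE banking AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- illegal at the package level
  PROCEDURE deposit (amount NUMBER);
  PROCEDURE withdraw (amount NUMBER);
END banking;
/
```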

Resuming Execution After a Storage Error Condition

When a long-running transaction is interrupted by an out-of-space error condition, your application can suspend the statement that encountered the problem and resume it after the space problem is corrected. This capability is known as resumable storage allocation. It lets you avoid time-consuming rollbacks, without the need to split the operation into smaller pieces and write your own code to track its progress.

What Operations Can Be Resumed After an Error Condition?

Queries, DML operations, and certain DDL operations can all be resumed if they encounter an out-of-space error. The capability applies if the operation is performed directly by a SQL statement, or if it is performed within a stored procedure, anonymous PL/SQL block, SQL*Loader, or an OCI call such as OCIStmtExecute().

Operations can be resumed after these kinds of error conditions:

Out-of-space errors, such as ORA-01653.

Space limit errors, such as ORA-01628.

Space quota errors, such as ORA-01536.

Limitations on Resuming Operations After an Error Condition

Certain storage errors cannot be handled using this technique. In dictionary-managed tablespaces, you cannot resume an operation if you run into the limit for rollback segments, or the maximum number of extents while creating an index or a table. Oracle encourages users to use locally managed tablespaces and automatic undo management in combination with this feature.

Writing an Application to Handle Suspended Storage Allocation

When an operation is suspended, your application does not receive the usual error code. Instead, you can perform any logging or notification by coding a trigger to detect the AFTER SUSPEND event and call the functions in the DBMS_RESUMABLE package to get information about the problem.

Within the body of the trigger, you can perform any notifications, such as sending a mail message to alert an operator to the space problem.

Alternatively, the DBA can periodically check for suspended statements using the data dictionary views DBA_RESUMABLE, USER_RESUMABLE, and V$SESSION_WAIT.

When the space condition is corrected (usually by the DBA), the suspended statement automatically resumes execution. If it is not corrected before the timeout period expires, the operation causes a SERVERERROR exception.

To reduce the chance of out-of-space errors within the trigger itself, you must declare it as an autonomous transaction so that it uses a rollback segment in the SYSTEM tablespace. If the trigger encounters a deadlock condition because of locks held by the suspended statement, the trigger is aborted and your application receives the original error condition, as if the statement had never been suspended. If the trigger encounters an out-of-space condition, the trigger and the suspended statement are rolled back. You can prevent the rollback through an exception handler in the trigger, and just wait for the statement to be resumed.

Example of Resumable Storage Allocation

This trigger handles applicable storage errors within the database. For some kinds of errors, it aborts the statement and alerts the DBA that this has happened through a mail message. For other errors that might be temporary, it specifies that the statement should wait for eight hours before resuming, with the expectation that the storage problem will be fixed by then.
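A condensed sketch of such a trigger follows; the abort-versus-wait policy and the 8-hour timeout are assumptions, and the mail notification is omitted:

```sql
CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  cur_sid      NUMBER;
  err_type     VARCHAR2(64);
  obj_type     VARCHAR2(64);
  obj_owner    VARCHAR2(64);
  ts_name      VARCHAR2(64);
  obj_name     VARCHAR2(64);
  sub_obj_name VARCHAR2(64);
  ret_value    BOOLEAN;
BEGIN
  SELECT DISTINCT sid INTO cur_sid FROM v$mystat;
  ret_value := dbms_resumable.space_error_info(err_type, obj_type,
    obj_owner, ts_name, obj_name, sub_obj_name);
  IF err_type = 'SPACE QUOTA EXCEEDED' THEN
    -- A quota error will not fix itself; abort the statement
    -- and let the DBA know through other means.
    dbms_resumable.abort(cur_sid);
  ELSE
    -- Otherwise wait up to 8 hours for the space problem
    -- to be corrected.
    dbms_resumable.set_timeout(28800);
  END IF;
  COMMIT;
END;
/
```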

Querying Data at a Point in Time (Flashback Query)

By default, operations on the database use the most recent committed data available. If you want to query the database as it was at some time in the past, you can do so with the flashback query feature. It lets you specify either a time or a system change number (SCN) and query using the committed data from the corresponding time.

Some potential applications of flashback query are:

Recovering lost data or undoing incorrect changes, even after the changes are committed. For example, a user who deletes or updates rows and then commits can immediately repair a mistake.

Comparing current data against the data at some time in the past. For example, you might run a daily report that shows the change from yesterday, rather than just the current data.

Checking the state of transactional data at a particular time. For example, you might want to verify an account balance on a certain day.

Simplifying application design by removing the need to store some kinds of temporal data.

Enabling packaged applications, such as report generation tools, to work on past versions of data.

The flashback query mechanism is most effective when you use automatic undo management. The DBA requests that undo data be kept for a specified period of time. Depending on the available storage capacity, the database might not always be able to keep all the requested undo data. If you use flashback queries, you might need to familiarize yourself with automatic undo management to understand its capabilities and limitations.

Other features are available to recover lost data. The unique feature of flashback query is that you can see the data as it was in the past, then choose exactly how to process the information; you might do an analysis, undoing the changes, or capture changed data for further processing.

Setting Up the Database for Flashback Query

Before you can perform flashback queries, enlist the help of your DBA. Ask them to:

Use automatic undo management to maintain read consistency, rather than the older technique using rollback segments. In particular, the DBA should:

Set the UNDO_RETENTION initialization parameter to a value that represents how far in the past you might want to query. The value depends on your needs. If you only need to recover data immediately after a mistaken change is committed, the parameter can be set to a small value. If you need to recover deleted data from days before, you might need several days' worth of data.

Set the initialization parameter UNDO_MANAGEMENT=AUTO.

Create an UNDO tablespace, with enough space to keep the required data. The more often the data is updated, the more space is required. Calculating the space requirements is a job for the DBA; you can find the formula in the Oracle9i Database Administrator's Guide.

Grant FLASHBACK privilege on appropriate tables, or FLASHBACK ANY TABLE privilege, to users, roles, or applications that need to perform flashback queries through the AS OF clause of SQL.

Grant EXECUTE privilege on the DBMS_FLASHBACK package to users, roles, or applications that need to perform flashback queries using this package.

Use the ALTER TABLE command with the RETENTION option to enable flashback queries on specific LOB columns, if appropriate. Because LOB columns might require significant amounts of storage to keep undo data, you must define which LOB columns can be used with flashback query.
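Taken together, the DBA's setup might look like the following sketch (the parameter values, tablespace, datafile, user, and table names are all assumptions):

```sql
-- In the initialization parameter file:
--   UNDO_MANAGEMENT = AUTO
--   UNDO_RETENTION  = 86400    -- keep one day of undo, in seconds

CREATE UNDO TABLESPACE undotbs
  DATAFILE 'undotbs.dbf' SIZE 500M;

GRANT FLASHBACK ON hr.employees TO scott;
GRANT EXECUTE ON dbms_flashback TO scott;
```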

Writing an Application that Uses Flashback Query

To use the flashback query feature in an application, use these coding techniques:

Use the AS OF clause in SQL queries to specify a past time. You can specify or omit this clause for each table, and specify different times for different tables. You can perform DDL operations such as creating and truncating tables, and DML operations such as inserts and deletes, in the same session as queries using the AS OF clause.

Put calls to the DBMS_FLASHBACK package around sets of queries that apply to data at a past time, or queries whose SQL you cannot change:

Before doing the first query, call DBMS_FLASHBACK.ENABLE_AT_TIME or DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER.

After doing the query, call DBMS_FLASHBACK.DISABLE.

Between these calls, you can only perform queries and not any DDL or DML statements.

To use the results of a flashback query in DDL or DML statements against the current state of the database, it is simplest to use the AS OF clause inside INSERT or CREATE TABLE AS SELECT statements.
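For example, the following sketch copies yesterday's version of a table into a work table in a single statement (the table names are illustrative):

```sql
CREATE TABLE employees_copy AS
  SELECT * FROM employees
    AS OF TIMESTAMP (SYSDATE - 1);
```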

If you use the DBMS_FLASHBACK package, you must open a cursor before calling DBMS_FLASHBACK.DISABLE. You can fetch results from past data from the cursor, then issue INSERT or UPDATE statements against the current state of the database.

To compare current data against past data using the DBMS_FLASHBACK package, you can open a cursor with the flashback feature enabled, then disable it and open another cursor. Fetching from the first cursor retrieves data based on the flashback time; fetching from the second cursor retrieves current data. You can store the older data in a temporary table and then use set operators such as MINUS or UNION to show differences in the data or combine past and current data.

At certain points in the application, you might call DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER. You can store the returned number, and later use it to perform flashback queries against the data at that point in time. Perform a COMMIT before calling this procedure, so that the database is in a consistent state to which you can return.

Flashback Query Restrictions

Some DDLs that alter the structure of a table, such as drop/modify column, move table, drop partition, truncate table/partition, and so on, invalidate the old undo data for the table. It is not possible to retrieve a snapshot of data from a point earlier than the time such DDLs were executed. Trying such a query results in an ORA-1466 error. This restriction does not apply to DDL operations that alter the storage attributes of a table, such as PCTFREE, INITRANS, MAXTRANS, and so on.

The time specified in DBMS_FLASHBACK.ENABLE_AT_TIME or in the AS OF TIMESTAMP clause is mapped to an SCN value. Currently, the SCN-time mapping is recorded every 5 minutes after database startup. Thus, it might appear as if the specified time is being rounded down by up to 5 minutes.

For example, assume that the SCN values 1000 and 1005 are mapped to the times 8:41 and 8:46 AM respectively. A flashback query for a time anywhere between 8:41:00 and 8:45:59 AM is mapped to SCN 1000; a flashback query for 8:46 AM is mapped to SCN 1005.

Due to this time-to-SCN mapping, a flashback query for a time immediately after creation of a table may result in an ORA-1466 error. An SCN-based flashback query therefore gives you a more precise way to retrieve a past snapshot of data.

Because SCNs are only recorded every 5 minutes for use by flashback queries, you might specify a time or SCN that is slightly after a DDL operation, but the database might use a slightly earlier SCN that is before the DDL operation. So you might get the same error as in the previous restriction if you try to perform flashback queries to a point just after a DDL operation.

Currently, the flashback query feature keeps track of times up to a maximum of 5 days. This period reflects server uptime, not wall-clock time. For example, if the server is down for a day during this period, then you can specify as far back as 6 days. To query data farther back than this, you must specify an SCN rather than a date and time. You must record the SCN yourself at the time of interest, such as before doing a DELETE.

When using the DBMS_FLASHBACK package, you must disable flashback before enabling it again for a different time. You cannot nest ENABLE/DISABLE pairs. You can call DISABLE multiple times in succession, although this introduces a little extra overhead, as shown in the following examples.

Only the state of table data is affected by a flashback query. During a query, the current state of the data dictionary is used. For example, if you have changed the character set since storing the data, queries use the current character set.

You cannot perform a flashback query on a remote table through a database link.

You cannot retrieve past data from a V$ view in the data dictionary. Performing a flashback query on such a view just returns the current data. You can perform flashback queries on other views in the data dictionary, such as USER_TABLES.

The SYS user cannot make calls to the DBMS_FLASHBACK package. This user can still use the AS OF clause and perform flashback queries that way.

Flashback queries against materialized views do not take advantage of query rewrite.

Tips for Using Flashback Query

For good performance, generate statistics on all tables involved in flashback queries by using the DBMS_STATS package, and keep the statistics current. Flashback queries always use the cost-based optimizer, which relies on these statistics.

Use the AS OF SCN clause for any operations where the data is critical, because it is not subject to the potential 5-minute rounding error. Use the AS OF TIMESTAMP clause more for ad-hoc queries and reporting, where convenience in specifying the time is more important.

Because flashback queries on a table might not work for a few minutes after the table is created, make sure to create all tables well in advance, both those with the original data and any tables used for temporary storage. This tip applies more to situations where you are demonstrating or testing flashback query, rather than production environments where the tables would already be set up.

While experimenting with flashback queries, you might accidentally leave your session with uncommitted data or in flashback mode. Make a habit of including a COMMIT or ROLLBACK statement (where appropriate) and a call to DBMS_FLASHBACK.DISABLE at the beginning of SQL scripts that perform flashback queries.

The performance of a flashback query depends on how much data must be re-created. Use flashback query mostly for selecting small sets of data using indexes, rather than queries that require full table scans. If you must do a full table scan, consider adding a parallel hint to the query.

You can create a view that refers to past data by using the AS OF clause in the SELECT statement that defines the view. If you specify a relative time by subtracting from SYSDATE, the past time is recalculated for each query. For example:
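For example, a view that always shows the data as of one day ago might be defined as follows (the view name is an assumption):

```sql
-- The offset from SYSDATE is re-evaluated each time the
-- view is queried.
CREATE OR REPLACE VIEW employees_yesterday AS
  SELECT * FROM employees
    AS OF TIMESTAMP (SYSDATE - 1);
```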

When using this technique, remember that daylight saving time and leap years can cause anomalous results. For example, SYSDATE - 1 might refer to 23 or 25 hours ago, shortly after a change to or from daylight saving time.

You can compute or retrieve the past time used in the AS OF clause, by using a function return value as the timestamp or SCN argument. For example, you can do date and time calculations by adding or subtracting an INTERVAL value to the value of the SYSTIMESTAMP function.

You can use the AS OF clause in self-joins, or in set operations such as INTERSECT and MINUS, to extract or compare data from two different times. You can store the results by wrapping the queries in CREATE TABLE AS SELECT or INSERT INTO TABLE SELECT statements.

Use the AS OF clause for SQL that you write, because it is flexible enough to do comparisons and store results in a single query. Put calls to the DBMS_FLASHBACK package around SQL that you do not control, or when you want to use the same past time for several consecutive queries.

When you use the AS OF SCN form of the clause, first call DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER to record the SCN to which you will later return.

Remember that all processing is done using the current national language and other settings, not the ones that were in effect in the past.

Examples of Flashback Query

The flashback query mechanism is flexible enough to be used in many situations. You can:

Query data as it existed in the past.

Compare current data to past data. You can compare individual rows, or do more complex comparisons such as finding the intersection or union.

Recover deleted or changed data.

Refer to a point in time using a DATE or TIMESTAMP value, or record system change numbers.

Retrieving Data in the Past: Example

-- Find out someone's salary at the beginning of a specified day.
-- Because of the limit on undo information, the date must be recent.
set serveroutput on;
set echo off;
-- Use a single SQL query.
SELECT salary FROM employees AS OF TIMESTAMP
to_timestamp('20-FEB-2002','DD-MON-YYYY') WHERE employee_id = 205;
-- Use the DBMS_FLASHBACK package. It involves more code, but several
-- operations can be performed all referring to the same past time.
DECLARE
the_salary NUMBER;
num_employees NUMBER;
BEGIN
dbms_flashback.enable_at_time(to_timestamp('20-FEB-2002','DD-MON-YYYY'));
SELECT salary INTO the_salary FROM employees WHERE employee_id = 205;
dbms_output.put_line('Salary = ' || the_salary);
SELECT COUNT(*) INTO num_employees FROM employees;
dbms_output.put_line('Number of employees = ' || num_employees);
dbms_flashback.disable;
END;
/

Remember that a date with no time represents the very beginning of that day. Because of the limited amount of mapping data that is stored for times, such a query might only be able to look back a few days in the past. To look back farther, you need to store the SCN at the time of interest. Even then, the data might be unavailable if the saved undo data does not extend back that far.

Recovering Incorrectly Updated or Deleted Data: Examples

This example makes an incorrect update, commits the changes, then immediately recovers the old information using a flashback query:
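A sketch of such a session follows (the employee ID and the time offset are assumptions; remember that timestamps can be rounded down by up to 5 minutes):

```sql
UPDATE employees SET salary = 0 WHERE employee_id = 111;  -- mistake
COMMIT;

-- Restore the prior value using the committed data from the
-- recent past.
UPDATE employees SET salary =
  (SELECT salary FROM employees
     AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
   WHERE employee_id = 111)
 WHERE employee_id = 111;
COMMIT;
```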

Using the AS OF Clause with a System Change Number: Example

-- A precise method of specifying the flashback point uses the SCN directly,
-- instead of specifying a timestamp, but also requires that we store the SCN in
-- advance. Go to this extra trouble whenever it is crucial to retrieve data
-- from a precise point.
set serveroutput on;
-- Make a copy of the EMPLOYEES table without the constraints, so we
-- can make arbitrary changes.
DROP TABLE employees2;
CREATE TABLE employees2 AS SELECT * FROM employees;
COMMIT;
-- Wait for slightly more than 5 minutes, because the table is so new.
-- Only needed in demo situations like this.
EXECUTE dbms_lock.sleep(320);
SELECT count(*) FROM employees2 WHERE salary = 9000;
DECLARE
TYPE employee_cursor IS REF CURSOR;
c employee_cursor;
cvar employees2%ROWTYPE;
old_scn NUMBER;
BEGIN
COMMIT;
dbms_flashback.disable;
old_scn := dbms_flashback.get_system_change_number;
DELETE FROM employees2 WHERE salary = 9000;
COMMIT;
-- That delete was in error. We need to recover the data.
-- Use the data as it existed immediately before the delete.
OPEN c FOR 'SELECT * FROM employees2 AS OF SCN :1 WHERE salary = 9000'
USING old_scn;
LOOP
FETCH c INTO cvar;
EXIT WHEN c%NOTFOUND;
dbms_output.put_line('Recovering employee: ' || cvar.last_name);
INSERT INTO employees2 VALUES cvar;
END LOOP;
COMMIT;
END;
/
select count(*) from employees2 where salary = 9000;

Finding All Rows That Were Added Today: Example

-- TRUNC(SYSDATE) gives us midnight this morning, the start of the day.
-- By using the MINUS operator, we can see which rows are present now but
-- were not present at the start of the day. They might be entirely new
-- employees, or employees whose data was changed today, so their row doesn't
-- match the row as of the start of the day.
DROP TABLE employees_changed_today;
CREATE TABLE employees_changed_today AS
SELECT * FROM employees
MINUS
SELECT * FROM employees AS OF TIMESTAMP TRUNC(SYSDATE);

Performing a Flashback Query Using a Subquery: Example

-- A query similar to the previous one shows the flashback query
-- can be inside a subquery.
SELECT MAX(salary) FROM
(
SELECT * FROM employees
MINUS
SELECT * FROM employees AS OF TIMESTAMP TRUNC(SYSDATE)
);

Using Explicit and Implicit Cursors with DBMS_FLASHBACK: Example

-- It is more efficient to use an explicit cursor than a FOR loop with an
-- implicit cursor. To use flashback query effectively, it helps to understand
-- different techniques for using cursors.
-- Before: many employees have null values for this column.
SELECT count(*) FROM employees WHERE commission_pct IS NULL;
-- Most efficient technique. Open the cursor, disable flashback, then use
-- the older data in DML statements on the current table.
DECLARE
    TYPE employee_cursor IS REF CURSOR;
    c employee_cursor;
    cvar employees%ROWTYPE;
BEGIN
    -- Make sure at the start that we aren't in the middle of a transaction.
    COMMIT;
    -- Make sure we are not already in flashback mode.
    dbms_flashback.disable;
    -- Flashback to a known time.
    dbms_flashback.enable_at_time(SYSDATE - 1);
    -- Open a cursor to retrieve data from the past.
    OPEN c FOR 'SELECT * FROM employees WHERE employee_id < 200';
    -- When flashback is disabled, the cursor continues to refer to past data,
    -- and we can perform DML again.
    dbms_flashback.disable;
    LOOP
        FETCH c INTO cvar;
        EXIT WHEN c%NOTFOUND;
        -- If the employee had a null value for commission_pct yesterday,
        -- then overwrite the current row, setting the value to zero.
        IF cvar.commission_pct IS NULL THEN
            cvar.commission_pct := 0;
            UPDATE employees SET ROW = cvar WHERE employee_id = cvar.employee_id;
        END IF;
    END LOOP;
    CLOSE c;
    COMMIT;
END;
/
/
-- After: no employee should have a null value for this column.
SELECT count(*) FROM employees WHERE commission_pct IS NULL;

You can also use an implicit cursor, although this is slightly less efficient because DBMS_FLASHBACK.DISABLE is called for each loop iteration:

-- Setup: create an empty table with the same definition as EMPLOYEES.
DROP TABLE yesterdays_employees;
CREATE TABLE yesterdays_employees AS SELECT * FROM employees WHERE 1 = 0;
SET SERVEROUTPUT ON SIZE 500000
-- Less efficient technique. To allow DML statements against the current
-- table within the loop body, DISABLE must be called for each loop iteration.
-- Make sure database is in a consistent state.
COMMIT;
-- First make sure flashback is turned off initially.
EXECUTE DBMS_FLASHBACK.DISABLE;
-- Then set it to a known value.
EXECUTE DBMS_FLASHBACK.ENABLE_AT_TIME(SYSDATE - 1);
BEGIN
    -- The FOR loop is examining data values from the past.
    FOR c IN (SELECT * FROM employees WHERE employee_id < 200)
    LOOP
        -- DISABLE gets called multiple times, but that's OK.
        DBMS_FLASHBACK.DISABLE;
        dbms_output.put_line('Inserting employee #' || c.employee_id);
        -- Because flashback is disabled within the loop body, we can access the
        -- present state of the data, and issue DML statements to undo the changes
        -- or store the old data somewhere else.
        INSERT INTO yesterdays_employees VALUES c;
    END LOOP;
END;
/
-- After the loop, once more make sure flashback is turned off.
EXECUTE DBMS_FLASHBACK.DISABLE;
SELECT count(*) FROM yesterdays_employees;