
1 SQL Processing for Application Developers

This chapter explains what application developers must know about how Oracle Database processes SQL statements. Before reading this chapter, read the basic information about SQL processing in Oracle Database Concepts.

Description of SQL Statement Processing

This topic provides an example of what happens during the execution of a SQL statement in each stage of processing. While this example specifically processes a data manipulation language (DML) statement, you can generalize it for other types of SQL statements. For information about how execution of other types of SQL statements might differ from this description, see "Processing Other Types of SQL Statements".

Assume that you are using a Pro*C program to increase the salary for all employees in a department. The program you are using has connected to Oracle Database and you are connected to the proper schema to update the employees table. You can embed this SQL statement in your program:
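
The statement itself is not shown in this excerpt. Assuming the standard hr sample schema and an illustrative 10% raise, it might look like this:

UPDATE employees
  SET salary = 1.10 * salary
  WHERE department_id = :department_id;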

The program variable department_id contains a value for the department number. When the SQL statement is run, Oracle Database uses the value of department_id supplied by the application program.

Stages of SQL Statement Processing

These are the stages necessary for each type of statement processing. (For a flowchart of this process, see Oracle Database Concepts.)

Open or create a cursor.

A program interface call opens or creates a cursor. The cursor is created independent of any SQL statement: it is created in expectation of a SQL statement. In most applications, cursor creation is automatic. However, in precompiler programs, cursors can be created implicitly or declared explicitly.

Parse the statement.

During parsing, the SQL statement is passed from the user process to Oracle Database, and a parsed representation of the SQL statement is loaded into a shared SQL area. Many errors can be caught during this stage of statement processing.

If the statement is a query, describe its results.

This stage is necessary only if the characteristics of a query's result are not known; for example, when a query is entered interactively by a user. In this case, the describe stage determines the characteristics (data types, lengths, and names) of a query's result.

If the statement is a query, define its output.

In this stage, you specify the location, size, and data type of variables defined to receive each fetched value. These variables are called define variables. Oracle Database performs data type conversion if necessary.

Bind any variables.

At this point, Oracle Database knows the meaning of the SQL statement but still does not have enough information to run the statement. Oracle Database needs values for any variables listed in the statement; in the example, Oracle Database needs a value for department_id. The process of obtaining these values is called binding variables.

A program must specify the location (memory address) where the value can be found. End users of applications might be unaware that they are specifying bind variables, because a database utility can simply prompt them for a value.

Because you specify the location (binding by reference), you need not rebind the variable before reexecution. You can change its value and Oracle Database looks up the value on each execution, using the memory address.

You must also specify a data type and length for each value (unless they are implied or defaulted) if Oracle Database must perform data type conversion.

Parallelize the statement, if possible.

Oracle Database can parallelize queries and some data definition language (DDL) operations such as index creation, creating a table with a subquery, and operations on partitions. Parallelization causes multiple server processes to perform the work of the SQL statement so it can complete faster.

Run the statement.

At this point, Oracle Database has all necessary information and resources, so the statement is run. If the statement is a query or an INSERT statement, no rows need to be locked, because no data is being changed. If the statement is an UPDATE or DELETE statement, however, all rows that the statement affects are locked until the next COMMIT, ROLLBACK, or SAVEPOINT for the transaction. This ensures data integrity.

For some statements you can specify multiple executions to be performed. This is called array processing. Given n number of executions, the bind and define locations are assumed to be the beginning of an array of size n.

If the statement is a query, fetch its rows.

In the fetch stage, rows are selected and ordered (if requested by the query), and each successive fetch retrieves another row of the result until the last row has been fetched.

Close the cursor.

The final stage of processing a SQL statement is closing the cursor.

Shared SQL Areas

Oracle Database automatically notices when applications send similar SQL statements to the database. The SQL area used to process the first occurrence of the statement is shared—that is, used for processing subsequent occurrences of that same statement. Therefore, only one shared SQL area exists for a unique statement. Because shared SQL areas are shared memory areas, any Oracle Database process can use a shared SQL area. The sharing of SQL areas reduces memory use on the database server, thereby increasing system throughput.

In evaluating whether statements are similar or identical, Oracle Database considers both SQL statements issued directly by users and applications and recursive SQL statements issued internally by a DDL statement.

DDL Statement Processing

The execution of DDL statements differs from the execution of DML statements and queries, because the success of a DDL statement requires write access to the data dictionary. For these statements, parsing (Stage 2) actually includes parsing, data dictionary lookup, and execution.

Transaction Control Statement Processing

In general, only application designers using the programming interfaces to Oracle Database are concerned with the types of actions that are grouped as one transaction. Transactions must be defined so that work is accomplished in logical units and data is kept consistent. A transaction consists of all of the necessary parts for one logical unit of work, no more and no less.

Data in all referenced tables should be in a consistent state before the transaction begins and after it ends.

Transactions should consist of only the SQL statements that make one consistent change to the data.

For example, a transfer of funds between two accounts (the transaction or logical unit of work) should include the debit to one account (one SQL statement) and the credit to another account (one SQL statement). Both actions should either fail or succeed as a unit of work; the credit should not be committed without the debit. Other unrelated actions, such as a deposit to one account, should not be included in the transfer of funds transaction.

Other Processing Types

Transaction management, session management, and system management SQL statements are processed using the parse and run stages. To rerun them, simply perform another EXECUTE.

Deciding How to Group Operations in Transactions

In general, deciding how to group operations in transactions is the concern of application designers who use the programming interfaces to Oracle Database. When deciding how to group transactions:

Define transactions such that work is accomplished in logical units and data remains consistent.

Ensure that data in all referenced tables is in a consistent state before the transaction begins and after it ends.

Ensure that each transaction consists only of the SQL statements or PL/SQL blocks that comprise one consistent change to the data.

For example, suppose that you write a Web application that enables users to transfer funds between accounts. The transaction must include the debit to one account, which is executed by one SQL statement, and the credit to another account, which is executed by a second SQL statement. Both statements must fail or succeed as a unit of work; the credit must not be committed without the debit. Other unrelated actions, such as a deposit to one account, must not be included in the same transaction.

Improving Transaction Performance

As an application developer, you must consider whether you can improve performance. Consider these performance enhancements when designing and writing your application:

Use the SET TRANSACTION statement with the USE ROLLBACK SEGMENT clause to explicitly assign a transaction to a rollback segment. This technique can eliminate the need to allocate additional extents dynamically, which can reduce system performance. This clause is valid only if you use rollback segments for undo. If you use automatic undo management, then Oracle Database ignores this clause.
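
For example, assuming a rollback segment named rbs_large exists, a large batch transaction could be assigned to it at the start of the transaction:

SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;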

Establish standards for writing SQL statements so that you can take advantage of shared SQL areas. Oracle Database recognizes identical SQL statements and enables them to share memory areas. This reduces memory usage on the database server and increases system throughput.

Collect statistics that can be used by Oracle Database to implement a cost-based approach to SQL statement optimization. You can supply additional "hints" to the optimizer as needed.

For the collection of most statistics, use the DBMS_STATS package, which lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine-tune your statistics collection in other ways. For more information about this package, see Oracle Database PL/SQL Packages and Types Reference.
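
As a sketch, statistics for a single table can be gathered like this (the schema and table names are illustrative):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
END;
/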

For statistics collection not related to the cost-based optimizer (such as collecting information about free list blocks), use the SQL statement ANALYZE. For more information about this statement, see Oracle Database SQL Language Reference.

Invoke the DBMS_APPLICATION_INFO.SET_ACTION procedure before beginning a transaction to register and name a transaction for later use when measuring performance across an application. Specify which type of activity a transaction performs so that the system tuners can later see which transactions are taking up the most system resources.
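
A minimal sketch of registering a transaction before it begins (the action name is illustrative):

BEGIN
  DBMS_APPLICATION_INFO.SET_ACTION(action_name => 'TRANSFER_FUNDS');
END;
/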

Committing Transactions

To commit a transaction, use the COMMIT statement. These two statements are equivalent and commit the current transaction:

COMMIT WORK;
COMMIT;

The COMMIT statement lets you include the COMMENT clause with a comment that provides information about the transaction being committed. This option is useful for including information about the origin of the transaction when you commit distributed transactions:

COMMIT COMMENT 'Dallas/Accts_pay/Trans_type 10B';

Managing Commit Redo Action

When a transaction updates the database, it generates a corresponding redo entry. Oracle Database buffers this redo entry in the redo log buffer until the transaction completes. When the transaction commits, the log writer process (LGWR) writes redo records for the commit, along with the accumulated redo entries of all changes in the transaction, to disk. By default, Oracle Database writes the redo entries to disk before the call returns to the client. This action introduces latency in the commit, because the application must wait for the redo entries to be persistent on disk.

Oracle Database lets you change the handling of commit redo depending on the needs of your application. If your application requires very high transaction throughput and you are willing to trade commit durability for lower commit latency, you can change the default COMMIT options so that the application need not wait for the database to write data to the online redo logs.

WAIT (default)

Ensures that the commit returns only after the corresponding redo information is persistent in the online redo log. When the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media.

A failure that occurs after a successful write to the log might prevent the success message from returning to the client, in which case the client cannot tell whether the transaction committed.

NOWAIT

The commit returns to the client regardless of whether the write to the redo log has completed. This behavior can increase transaction throughput.

BATCH

Buffers the redo information to the redo log, along with that of other concurrently running transactions. After collecting sufficient redo information, LGWR initiates a disk write to the redo log. This behavior is called group commit, because it writes redo information for multiple transactions to the log in a single I/O operation.

IMMEDIATE (default)

LGWR writes the transaction redo information to the log immediately. Because this option forces a disk I/O for each commit, it can reduce transaction throughput.
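
The WAIT/NOWAIT and IMMEDIATE/BATCH choices can be combined in a single statement. For example, a low-latency, lower-durability commit:

COMMIT WRITE BATCH NOWAIT;

The default behavior corresponds to COMMIT WRITE IMMEDIATE WAIT.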

Caution:

With the NOWAIT option of COMMIT, a failure that occurs after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent.

You cannot change the default IMMEDIATE and WAIT action for distributed transactions.

If your application uses OCI, then you can modify redo action by setting these flags in the OCITransCommit function within your application:

OCI_TRANS_WRITEBATCH

OCI_TRANS_WRITENOWAIT

OCI_TRANS_WRITEIMMED

OCI_TRANS_WRITEWAIT

Caution:

There is a potential for silent transaction loss when you use OCI_TRANS_WRITENOWAIT. Transaction loss occurs silently with shutdown termination, startup force, and any instance or node failure. On a RAC system asynchronously committed changes might not be immediately available to read on other instances.

Specifying the NOWAIT or BATCH option opens a small window of vulnerability in which Oracle Database can roll back a transaction that your application views as committed. Your application must be able to tolerate these scenarios:

The database host fails, which causes the database to lose redo that was buffered but not yet written to the online redo logs.

A file I/O problem prevents log writer from writing buffered redo to disk. If the redo logs are not multiplexed, then the commit is lost.

Rolling Back Transactions

To roll back an entire transaction, or to roll back part of a transaction to a savepoint, use the ROLLBACK statement. For example, either of these statements rolls back the entire current transaction:

ROLLBACK WORK;
ROLLBACK;

The WORK keyword of the ROLLBACK statement is optional and has no effect.

To roll back to a savepoint defined in the current transaction, use the TO option of the ROLLBACK statement. For example, either of these statements rolls back the current transaction to the savepoint named POINT1:
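
Assuming the savepoint was created earlier with a SAVEPOINT statement, the two equivalent forms are:

ROLLBACK TO SAVEPOINT point1;
ROLLBACK TO point1;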

A COMMIT that follows a rollback to a savepoint makes permanent only the work that was not rolled back: the statements run before the savepoint and the statements run after the ROLLBACK TO. Statements that were rolled back to the savepoint stay rolled back, and the savepoint itself is no longer active after the transaction ends.
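
A sketch of the kind of sequence this describes (table names and values assumed):

DELETE FROM temp_orders;                  -- statement 1: committed
SAVEPOINT a;
INSERT INTO temp_orders VALUES (1, 'x');  -- statement 2: rolled back
UPDATE temp_orders SET status = 'y';      -- statement 3: rolled back
ROLLBACK TO SAVEPOINT a;
INSERT INTO temp_orders VALUES (2, 'z');  -- statement 4: committed
COMMIT;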

Ensuring Repeatable Reads with Read-Only Transactions

By default, Oracle Database guarantees statement-level read consistency, but not transaction-level read consistency. With statement-level read consistency, queries in a statement produce consistent data for the duration of the statement, not reflecting changes by other statements. With transaction-level read consistency (repeatable reads), queries in the transaction produce consistent data for the duration of the transaction, not reflecting changes by other transactions.

To ensure transaction-level read consistency for a transaction that does not include DML statements, specify that the transaction is read-only. The queries in a read-only transaction see only changes committed before the transaction began, so query results are consistent for the duration of the transaction.

A read-only transaction provides transaction-level read consistency without acquiring additional data locks. Therefore, while the read-only transaction is querying data, other transactions can query and update the same data.

A read-only transaction begins with this statement:

SET TRANSACTION READ ONLY [ NAME string ];

Only DDL statements can precede the SET TRANSACTION READ ONLY statement. After the SET TRANSACTION READ ONLY statement successfully runs, the transaction can include only SELECT (without FOR UPDATE), COMMIT, ROLLBACK, or non-DML statements (such as SET ROLE, ALTER SYSTEM, and LOCK TABLE). A COMMIT, ROLLBACK, or DDL statement ends the read-only transaction.
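
A sketch of a read-only reporting transaction (table names assumed):

SET TRANSACTION READ ONLY NAME 'weekly_report';
SELECT COUNT(*) FROM orders;
SELECT SUM(amount) FROM payments;
COMMIT;  -- ends the read-only transaction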

Long-running queries sometimes fail because undo information required for consistent read (CR) operations is no longer available. This happens when committed undo blocks are overwritten by active transactions. Automatic undo management provides a way to explicitly control when undo space can be reused; that is, how long undo information is retained. Your database administrator can specify a retention period by using the parameter UNDO_RETENTION.

For example, if UNDO_RETENTION is set to 30 minutes, then all committed undo information in the system is retained for at least 30 minutes. This ensures that all queries running for 30 minutes or less, under usual circumstances, do not encounter the ORA-01555 "snapshot too old" error.
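
For instance, a DBA could set a 30-minute retention period (the parameter takes seconds) with:

ALTER SYSTEM SET UNDO_RETENTION = 1800;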

Using Cursors

PL/SQL implicitly declares a cursor for all SQL data manipulation statements, including queries that return only one row. For queries that return multiple rows, you can explicitly declare a cursor to process the rows individually.

A cursor is a handle to a specific private SQL area. In other words, a cursor can be thought of as a name for a specific private SQL area. A PL/SQL cursor variable enables the retrieval of multiple rows from a stored subprogram. Cursor variables enable you to pass cursors as parameters in your 3GL application. Cursor variables are described in Oracle Database PL/SQL Language Reference.

Although most Oracle Database users rely on the automatic cursor handling of the database utilities, the programmatic interfaces offer application designers more control over cursors. In application development, a cursor is a named resource available to a program, which can be specifically used for parsing SQL statements embedded within the application.

Explicitly creating cursors for precompiler programs has advantages in tuning those applications. For example, increasing the number of cursors can reduce the frequency of parsing and improve performance. If you know how many cursors might be required at a given time, you can open that many cursors simultaneously.

Using a Cursor to Reexecute a Statement

After each stage of execution, the cursor retains enough information about the SQL statement to reexecute the statement without starting over, if no other SQL statement was associated with that cursor. The statement can be reexecuted without including the parse stage.

By opening several cursors, the parsed representation of several SQL statements can be saved. Repeated execution of the same SQL statements can thus begin at the describe, define, bind, or run step, saving the repeated cost of opening cursors and parsing.

To understand the performance characteristics of a cursor, a DBA can retrieve the text of the query represented by the cursor using the V$SQL dynamic performance view. Because the results of EXPLAIN PLAN on the original query might differ from the way the query is actually processed, a DBA can get more precise information by examining these dynamic performance views:

View

Description

V$SQL_PLAN

Execution plan information for each child cursor loaded in the library cache.

V$SQL_PLAN_STATISTICS

Execution statistics at the row source level for each child cursor.

V$SQL_PLAN_STATISTICS_ALL

Memory usage statistics for row sources that use SQL memory (sort or hash-join). This view concatenates information in V$SQL_PLAN with execution statistics from V$SQL_PLAN_STATISTICS and V$SQL_WORKAREA.
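
For example, a DBA might locate a cursor and inspect its plan with queries like these (the search pattern and SQL_ID value are hypothetical):

SELECT sql_id, sql_text
  FROM V$SQL
  WHERE sql_text LIKE 'UPDATE employees%';

SELECT operation, options, object_name
  FROM V$SQL_PLAN
  WHERE sql_id = 'abc123xyz';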

Scrollable Cursors

Execution of a cursor puts the results of the query into a set of rows called the result set, which can be fetched sequentially or nonsequentially. Scrollable cursors are cursors in which fetches and DML statements need not be forward sequential only. Interfaces exist to fetch previously fetched rows, to fetch the nth row in the result set, and to fetch the nth row from the current position in the result set.

See Also:

Oracle Call Interface Programmer's Guide for more information about using scrollable cursors in OCI

Closing a Cursor

Closing a cursor means that the information in the associated private area is lost and its memory is deallocated. Once a cursor is opened, it is not closed until one of these events occurs:

The user program terminates its connection to the server.

The user program explicitly closes the cursor. An OCI program or precompiler application can explicitly close any open cursor during its execution. (When the program terminates, any cursors remaining open are implicitly closed.)

Canceling a Cursor

Canceling a cursor frees resources from the current fetch. The information in the associated private area is lost, but the cursor remains open, parsed, and associated with its bind variables.

Locking Tables Explicitly

Oracle Database has default locking mechanisms that ensure data concurrency, data integrity, and statement-level read consistency. However, you can override these mechanisms by locking tables explicitly. Locking tables explicitly is useful in situations such as these:

A transaction in your application needs exclusive access to a resource, so that the transaction does not have to wait for other transactions to complete.

The initialization parameter DML_LOCKS (described in Oracle Database Reference) determines the maximum number of DML locks. Although its default value is usually enough, you might need to increase it if you use explicit locks.

Caution:

If you override the default locking of Oracle Database at any level, ensure that data integrity is guaranteed, data concurrency is acceptable, and deadlocks are either impossible or appropriately handled.

Privileges Required to Acquire Table Locks

No special privileges are required to acquire any type of table lock on a table in your own schema. To acquire a table lock on a table in another schema, you must have either the LOCK ANY TABLE system privilege or any object privilege (for example, SELECT or UPDATE) for the table.

Choosing a Locking Strategy

A transaction explicitly acquires the specified table locks when a LOCK TABLE statement is executed. A LOCK TABLE statement explicitly overrides default locking. When a LOCK TABLE statement is issued on a view, the underlying base tables are locked. This statement acquires exclusive table locks for the employees and departments tables on behalf of the containing transaction:

LOCK TABLE employees, departments
IN EXCLUSIVE MODE NOWAIT;

You can specify several tables or views to lock in the same mode; however, only a single lock mode can be specified for each LOCK TABLE statement.

Note:

When a table is locked, all rows of the table are locked. No other user can modify the table. For information about locking individual rows, see "Explicitly Acquiring Row Locks".

In the LOCK TABLE statement, you can also indicate how long you want to wait for the table lock:

If you do not want to wait, specify either NOWAIT or WAIT 0.

You acquire the table lock only if it is immediately available; otherwise, an error notifies you that the lock is not available now.

To wait up to n seconds to acquire the table lock, specify WAIT n, where n is greater than 0 and less than or equal to 100000.

If the table lock is still unavailable after n seconds, an error notifies you that the lock is not available now.

If you specify neither WAIT nor NOWAIT, the database waits indefinitely until the table is available, locks it, and returns control to you. When the database is running DDL statements concurrently with DML statements, a timeout or deadlock can sometimes result. The database detects such timeouts and deadlocks and returns an error.
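
For example, this statement (using the employees table from the earlier example) waits at most 10 seconds for the lock before returning an error:

LOCK TABLE employees IN EXCLUSIVE MODE WAIT 10;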

When to Lock with ROW SHARE MODE and ROW EXCLUSIVE MODE

ROW SHARE MODE and ROW EXCLUSIVE MODE table locks offer the highest degree of concurrency. You might use these locks if:

Your transaction must prevent another transaction from acquiring an intervening share, share row, or exclusive table lock for a table before your transaction can update that table.

If another transaction acquires an intervening share, share row, or exclusive table lock, no other transactions can update the table until the locking transaction commits or rolls back.

Your transaction must prevent a table from being altered or dropped before your transaction can modify that table.

When to Lock with SHARE MODE

SHARE MODE table locks are rather restrictive data locks. You might use these locks if:

Your transaction only queries the table, and requires a consistent set of the table data for the duration of the transaction.

Other transactions that try to update the locked table are held up until all transactions that hold SHARE MODE locks on the table either commit or roll back.

Other transactions might acquire concurrent SHARE MODE table locks on the same table, also giving them the option of transaction-level read consistency.

Caution:

Your transaction might update the table later in the same transaction. However, if multiple transactions concurrently hold share table locks for the same table, no transaction can update the table (even if row locks are held as the result of a SELECT FOR UPDATE statement). Therefore, if concurrent share table locks on the same table are common, updates cannot proceed and deadlocks are common. In this case, use share row exclusive or exclusive table locks instead.

Scenario: Tables employees and budget_tab require a consistent set of data in a third table, departments. For a given department number, you want to update the information in employees and budget_tab, and ensure that no members are added to the department between these two updates.

Solution: Lock the departments table in SHARE MODE, as shown in Example 1-1. Because the departments table is rarely updated, locking it probably does not cause many other transactions to wait long.
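
Example 1-1 is not reproduced in this excerpt; a minimal sketch of the approach (column names and values assumed) is:

LOCK TABLE departments IN SHARE MODE;
UPDATE employees
  SET salary = salary * 1.1
  WHERE department_id = 20;
UPDATE budget_tab
  SET total_sal = total_sal * 1.1
  WHERE department_id = 20;
COMMIT;  -- ends the transaction and releases the lock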


Explicitly Acquiring Row Locks

You can override default locking with a SELECT statement that includes the FOR UPDATE clause. This statement acquires exclusive row locks for selected rows (as an UPDATE statement does), in anticipation of updating the selected rows in a subsequent statement.

You can use a SELECT FOR UPDATE statement to lock a row without actually changing it. For example, several triggers in Oracle Database PL/SQL Language Reference show how to implement referential integrity. In the EMP_DEPT_CHECK trigger, the row that contains the referenced parent key value is locked to guarantee that it remains for the duration of the transaction; if the parent key were updated or deleted, referential integrity would be violated.

SELECT FOR UPDATE statements are often used by interactive programs that enable a user to modify fields of one or more specific rows (which might take some time); row locks are acquired so that only a single interactive program user is updating the rows at any given time.

If a SELECT FOR UPDATE statement is used when defining a cursor, the rows in the return set are locked when the cursor is opened (before the first fetch) rather than being locked as they are fetched from the cursor. Locks are only released when the transaction that opened the cursor is committed or rolled back, not when the cursor is closed.

Each row in the return set of a SELECT FOR UPDATE statement is locked individually; if another transaction holds a conflicting lock on one of those rows, the SELECT FOR UPDATE statement waits until that transaction releases the lock. If a SELECT FOR UPDATE statement locks many rows in a table, and if the table experiences a lot of update activity, it might be faster to acquire an EXCLUSIVE table lock instead.

Note:

The return set for a SELECT FOR UPDATE might change while the query is running; for example, if columns selected by the query are updated or rows are deleted after the query started. When this happens, SELECT FOR UPDATE acquires locks on the rows that did not change, gets a read-consistent snapshot of the table using these locks, and then restarts the query to acquire the remaining locks.

This can cause a deadlock between sessions querying the table concurrently with DML statements when rows are locked in a nonsequential order. To prevent such deadlocks, design your application so that concurrent DML statements on the table do not affect the return set of the query. If this is not feasible, you might want to serialize queries in your application.

By default, the SELECT FOR UPDATE statement waits until the requested row lock is acquired. To change this behavior, use the NOWAIT, WAIT, or SKIP LOCKED clause of the SELECT FOR UPDATE statement. For information about these clauses, see Oracle Database SQL Language Reference.
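
A sketch of the row-locking forms (table, column, and bind-variable names assumed):

SELECT employee_id, salary
  FROM employees
  WHERE department_id = :dept
  FOR UPDATE NOWAIT;       -- returns an error if any selected row is locked

SELECT employee_id, salary
  FROM employees
  WHERE department_id = :dept
  FOR UPDATE SKIP LOCKED;  -- skips rows locked by other transactions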

Examples of Concurrency Under Explicit Locking

Table 1-3 shows how Oracle Database maintains data concurrency, integrity, and consistency when the LOCK TABLE statement and the SELECT statement with the FOR UPDATE clause are used. For brevity, the message text for ORA-00054 ("resource busy and acquire with NOWAIT specified") is not included. User-entered text is bold.

Using Oracle Lock Management Services (User Locks)

Your applications can use Oracle Lock Management services (user locks) by invoking subprograms in the DBMS_LOCK package. An application can request a lock of a specific mode, give it a unique name recognizable in another subprogram in the same or another instance, change the lock mode, and release it. Because a reserved user lock is an Oracle Database lock, it has all the features of a database lock, such as deadlock detection. Ensure that any user locks used in distributed transactions are released upon COMMIT; otherwise, an undetected deadlock can occur.

Example 1-2 shows how the Pro*COBOL precompiler uses locks to ensure that there are no conflicts when multiple people must access a single device.

Example 1-2 How the Pro*COBOL Precompiler Uses Locks

******************************************************************
* Print Check *
* Any cashier may issue a refund to a customer returning goods. *
* Refunds under $50 are given in cash, more than $50 by check. *
* This code prints the check. One printer is opened by all *
* the cashiers to avoid the overhead of opening and closing it *
* for every check, meaning that lines of output from multiple *
* cashiers can become interleaved if you do not ensure exclusive *
* access to the printer. The DBMS_LOCK package is used to *
* ensure exclusive access. *
******************************************************************
CHECK-PRINT
* Get the lock "handle" for the printer lock.
MOVE "CHECKPRINT" TO LOCKNAME-ARR.
MOVE 10 TO LOCKNAME-LEN.
EXEC SQL EXECUTE
BEGIN DBMS_LOCK.ALLOCATE_UNIQUE ( :LOCKNAME, :LOCKHANDLE );
END; END-EXEC.
* Lock the printer in exclusive mode (default mode).
EXEC SQL EXECUTE
BEGIN DBMS_LOCK.REQUEST ( :LOCKHANDLE );
END; END-EXEC.
* You now have exclusive use of the printer, print the check.
...
* Unlock the printer so other people can use it
EXEC SQL EXECUTE
BEGIN DBMS_LOCK.RELEASE ( :LOCKHANDLE );
END; END-EXEC.

From the Additional Monitoring Links section of the Database Performance page, click Database Locks to display user locks, blocking locks, or the complete list of all database locks. See Oracle Database 2 Day DBA for more information.

UTLLOCKT.SQL

The UTLLOCKT.SQL script displays a simple character-mode lock wait-for graph in tree-structured fashion. When you run the script with any SQL tool (such as SQL*Plus), it prints the sessions in the system that are waiting for locks and the corresponding blocking locks. The location of this script file is operating system dependent. (You must run the CATBLOCK.SQL script before using UTLLOCKT.SQL.)

Using Serializable Transactions for Concurrency Control

By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.

If a transaction A attempts to update or delete a row that has been locked by another transaction B (by way of a DML or SELECT FOR UPDATE statement), then A's DML statement blocks until B commits or rolls back. After B commits, transaction A can see the changes that B made to the database.

For most applications, this concurrency model is the appropriate one, because it provides higher concurrency and thus better performance. But some rare cases require transactions to be serializable. Serializable transactions must run in such a way that they appear to be running one at a time (serially), rather than concurrently. Concurrent transactions running in serialized mode can make only the database changes that they could make if the transactions ran one after the other.

The SQL standard defines three possible kinds of transaction interaction, and four levels of isolation that provide increasing protection against these interactions. These interactions and isolation levels are summarized in Table 1-5.

Table 1-5 ANSI Isolation Levels

Isolation Level     Dirty Read (1)   Non-Repeatable Read (2)   Phantom Read (3)
READ UNCOMMITTED    Possible         Possible                  Possible
READ COMMITTED      Not possible     Possible                  Possible
REPEATABLE READ     Not possible     Not possible              Possible
SERIALIZABLE        Not possible     Not possible              Not possible

Footnote 1 A transaction can read uncommitted data changed by another transaction.

Footnote 2 A transaction rereads data committed by another transaction and sees the new data.

Footnote 3 A transaction can run a query again, and discover rows inserted by another committed transaction.

Table 1-6 summarizes the action of Oracle Database for these isolation levels.

Table 1-6 ANSI Isolation Levels and Oracle Database

READ UNCOMMITTED: Oracle Database never permits "dirty reads." Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle Database.

READ COMMITTED: Oracle Database meets the READ COMMITTED isolation standard. This is the default mode for all Oracle Database applications. Because an Oracle Database query only sees data that was committed at the beginning of the query (the snapshot time), Oracle Database actually offers more consistency than is required by the SQL standard for READ COMMITTED isolation.

REPEATABLE READ: Oracle Database does not normally support this isolation level, except as provided by SERIALIZABLE.

SERIALIZABLE: Oracle Database does not provide this isolation level by default, but you can request it.

How Serializable Transactions Interact

Figure 1-1 shows how a serializable transaction (Transaction B) interacts with another transaction (A, which can be either SERIALIZABLE or READ COMMITTED).

When a serializable transaction fails with ORA-08177, the application can take any of several actions:

Commit the work executed to that point

Run additional, different, statements, perhaps after rolling back to a prior savepoint in the transaction

Roll back the entire transaction and try it again

Oracle Database stores control information in each data block to manage access by concurrent transactions. To use the SERIALIZABLE isolation level, you must use the INITRANS clause of the CREATE TABLE or ALTER TABLE statement to set aside storage for this control information; INITRANS must be set to at least 3. This parameter causes Oracle Database to allocate sufficient storage in each block to record the history of recent transactions that accessed the block. Use higher values for tables whose blocks are updated by many concurrent transactions.
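For example, the following statements prepare the employees table for serializable access and then run an update in a serializable transaction (the update itself is illustrative):

-- Reserve block storage for transaction history
-- (INITRANS must be at least 3 for serializable access).
ALTER TABLE employees INITRANS 3;

-- SET TRANSACTION must be the first statement of the transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

UPDATE employees
   SET salary = salary * 1.1
 WHERE department_id = 20;

COMMIT;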

Referential Integrity and Serializable Transactions

Because Oracle Database does not use read locks, even in SERIALIZABLE transactions, data read by one transaction can be overwritten by another. Transactions that perform database consistency checks at the application level must not assume that the data they read will not change during the execution of the transaction (even though such changes are not visible to the transaction). Database inconsistencies can result unless such application-level consistency checks are coded carefully, even when using SERIALIZABLE transactions.

Note:

Examples in this topic apply to both READ COMMITTED and SERIALIZABLE transactions.

Figure 1-2 shows two different transactions that perform application-level checks to maintain the referential integrity parent/child relationship between two tables. One transaction checks that a row with a specific primary key value exists in the parent table before inserting corresponding child rows. The other transaction checks to see that no corresponding detail rows exist before deleting a parent row. In this case, both transactions assume (but do not ensure) that data they read will not change before the transaction completes.

The read issued by transaction A does not prevent transaction B from deleting the parent row, and transaction B's query for child rows does not prevent transaction A from inserting child rows. This scenario leaves a child row in the database with no corresponding parent row. This result occurs even if both A and B are SERIALIZABLE transactions, because neither transaction prevents the other from making changes in the data it reads to check consistency.

As this example shows, sometimes you must take steps to ensure that the data read by one transaction is not concurrently written by another. This requires a greater degree of transaction isolation than defined by the SERIALIZABLE mode in the SQL standard.

Fortunately, it is straightforward in Oracle Database to prevent the anomaly described:

Transaction A can use SELECT FOR UPDATE to query and lock the parent row and thereby prevent transaction B from deleting the row.

Transaction B can prevent Transaction A from gaining access to the parent row by reversing the order of its processing steps. Transaction B first deletes the parent row, and then rolls back if its subsequent query detects the presence of corresponding rows in the child table.

Referential integrity can also be enforced in Oracle Database using database triggers, instead of a separate query as in Transaction A. For example, an INSERT into the child table can fire a BEFORE INSERT row-level trigger to check for the corresponding parent row. The trigger queries the parent table using SELECT FOR UPDATE, ensuring that the parent row (if it exists) remains in the database for the duration of the transaction inserting the child row. If the corresponding parent row does not exist, the trigger rejects the insert of the child row.
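A minimal sketch of such a trigger follows. The dept (parent, with primary key deptno) and emp (child) tables are hypothetical names used only for illustration:

-- Sketch: lock the parent row so it cannot be deleted while
-- this transaction inserts the child row.
CREATE OR REPLACE TRIGGER emp_dept_check
  BEFORE INSERT ON emp
  FOR EACH ROW
DECLARE
  l_deptno dept.deptno%TYPE;
BEGIN
  SELECT deptno INTO l_deptno
    FROM dept
   WHERE deptno = :NEW.deptno
     FOR UPDATE OF deptno;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    -- No parent row: reject the child insert.
    RAISE_APPLICATION_ERROR(-20001,
      'Parent department ' || :NEW.deptno || ' does not exist');
END;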

SQL statements issued by a database trigger run in the context of the SQL statement that caused the trigger to fire. All SQL statements executed within a trigger see the database in the same state as the triggering statement. Thus, in a READ COMMITTED transaction, the SQL statements in a trigger see the database as of the beginning of the triggering statement execution, and in a transaction running in SERIALIZABLE mode, the SQL statements see the database as of the beginning of the transaction. In either case, the use of SELECT FOR UPDATE by the trigger correctly enforces referential integrity.

READ COMMITTED and SERIALIZABLE Isolation

Oracle Database gives you a choice of two transaction isolation levels with different characteristics. Both the READ COMMITTED and SERIALIZABLE isolation levels provide a high degree of consistency and concurrency. Both levels reduce contention, and are designed for deploying real-world applications. The rest of this topic compares the two isolation modes and provides information helpful in choosing between them.

Transaction Set Consistency

A useful way to describe the READ COMMITTED and SERIALIZABLE isolation levels in Oracle Database is to consider:

A collection of database tables (or any set of data)

A sequence of reads of rows in those tables

The set of transactions committed at any moment

An operation (a query or a transaction) is transaction set consistent if its read operations all return data written by the same set of committed transactions. When an operation is not transaction set consistent, some reads reflect the changes of one set of transactions, and other reads reflect changes made by other transactions. Such an operation sees the database in a state that reflects no single set of committed transactions.

Oracle Database transactions running in READ COMMITTED mode are transaction set consistent on an individual-statement basis, because all rows read by a query must be committed before the query begins.

Oracle Database transactions running in SERIALIZABLE mode are transaction set consistent on an individual-transaction basis, because all statements in a SERIALIZABLE transaction run on an image of the database as of the beginning of the transaction.

In other database systems, a single query run in READ COMMITTED mode provides results that are not transaction set consistent. The query is not transaction set consistent, because it might see only a subset of the changes made by another transaction. For example, a join of a master table with a detail table can see a master record inserted by another transaction, but not the corresponding details inserted by that transaction, or vice versa. Oracle Database's READ COMMITTED mode avoids this problem, and so provides a greater degree of consistency than read-locking systems.

In read-locking systems, at the cost of preventing concurrent updates, the SQL standard's REPEATABLE READ isolation provides transaction set consistency at the statement level, but not at the transaction level. The absence of phantom protection means two queries issued by the same transaction can see data committed by different sets of other transactions. Only the throughput-limiting and deadlock-susceptible SERIALIZABLE mode in these systems provides transaction set consistency at the transaction level.

Choosing an Isolation Level for Transactions

Choose an isolation level that is appropriate to the specific application and workload. You might choose different isolation levels for different transactions. The choice depends on performance and consistency needs, and consideration of application coding requirements.

For environments with many concurrent users rapidly submitting transactions, you must assess transaction performance against the expected transaction arrival rate and response time demands, and choose an isolation level that provides the required degree of consistency while performing well. Frequently, for high-performance environments, you must trade off consistency against concurrency (transaction throughput).

Both Oracle Database isolation modes provide high levels of consistency and concurrency (and performance) through the combination of row-level locking and Oracle Database's multi-version concurrency control system. Because readers and writers do not block one another in Oracle Database, while queries still see consistent data, both READ COMMITTED and SERIALIZABLE isolation provide a high level of concurrency for high performance, without the need for reading uncommitted ("dirty") data.

READ COMMITTED isolation can provide considerably more concurrency with a somewhat increased risk of inconsistent results (from phantoms and unrepeatable reads) for some transactions. The SERIALIZABLE isolation level provides somewhat more consistency by protecting against phantoms and unrepeatable reads, and might be important where a read/write transaction runs a query more than once. However, SERIALIZABLE mode requires applications to check for the "cannot serialize access" error, and can significantly reduce throughput in an environment with many concurrent transactions accessing the same data for update. Application logic that checks database consistency must consider the fact that reads do not block writes in either mode.

Application Tips for Transactions

When a transaction runs in serializable mode, any attempt to change data that was changed by another transaction since the beginning of the serializable transaction causes ORA-08177.

When you get this error, roll back the current transaction and run it again. The transaction gets a new transaction snapshot, and the operation is likely to succeed.

To minimize the performance overhead of rolling back transactions and running them again, try to put DML statements that might conflict with other concurrent transactions near the beginning of your transaction.
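The retry logic can be sketched in PL/SQL as follows. The retry limit and the update statement are illustrative assumptions, and the sketch assumes no transaction is already open when the block starts:

DECLARE
  serializable_error EXCEPTION;
  PRAGMA EXCEPTION_INIT(serializable_error, -8177);
BEGIN
  FOR attempt IN 1 .. 3 LOOP  -- retry limit is an assumption
    BEGIN
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
      -- Put potentially conflicting DML near the start of the transaction:
      UPDATE employees
         SET salary = salary * 1.1
       WHERE department_id = 20;
      COMMIT;
      EXIT;  -- success; stop retrying
    EXCEPTION
      WHEN serializable_error THEN
        ROLLBACK;  -- discard the old snapshot, then try again
    END;
  END LOOP;
END;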

Autonomous Transactions

An autonomous transaction (AT) is an independent transaction started by another transaction, the main transaction (MT). An autonomous transaction lets you suspend the main transaction, do SQL operations, commit or roll back those operations, and then resume the main transaction.

For example, in a stock purchase transaction, you might want to commit a customer's information regardless of whether the purchase succeeds. Or, you might want to log error messages to a debug table even if the transaction rolls back. Autonomous transactions enable you to do such tasks.

An autonomous transaction runs within an autonomous scope; that is, within the scope of an autonomous routine—a routine that you mark with the AUTONOMOUS_TRANSACTION pragma. For the definition of routine in this context, see Oracle Database PL/SQL Language Reference.

Figure 1-3 shows how control flows from the main transaction (MT) to an autonomous transaction (AT) and back again. As you can see, the autonomous transaction can commit multiple transactions (AT1 and AT2) before control returns to the main transaction.

When you enter the executable section of an autonomous routine, the main transaction suspends. When you exit the routine, the main transaction resumes. COMMIT and ROLLBACK end the active autonomous transaction but do not exit the autonomous routine. As Figure 1-3 shows, when one transaction ends, the next SQL statement begins another transaction.

A few more characteristics of autonomous transactions:

The changes autonomous transactions effect do not depend on the state or the eventual disposition of the main transaction. For example:

An autonomous transaction does not see any changes made by the main transaction.

When an autonomous transaction commits or rolls back, it does not affect the outcome of the main transaction.

The changes an autonomous transaction effects are visible to other transactions as soon as that autonomous transaction commits. Therefore, users can access the updated information without having to wait for the main transaction to commit.
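For example, a logging routine can be marked autonomous so that its inserts are committed even if the main transaction later rolls back. The debug_log table here is a hypothetical name used only for illustration:

-- Sketch: log a message in an autonomous transaction, assuming a
-- hypothetical table debug_log (logged_at DATE, message VARCHAR2(400)).
CREATE OR REPLACE PROCEDURE log_message (p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO debug_log (logged_at, message)
  VALUES (SYSDATE, p_message);
  -- This COMMIT ends only the autonomous transaction; the main
  -- transaction's changes remain uncommitted and can still roll back.
  COMMIT;
END;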

Examples of Autonomous Transactions

As these examples illustrate, there are four possible outcomes when you use autonomous and main transactions (see Table 1-8). There is no dependency between the outcome of an autonomous transaction and that of a main transaction.

Table 1-8 Possible Transaction Outcomes

Autonomous Transaction    Main Transaction
Commits                   Commits
Commits                   Rolls back
Rolls back                Commits
Rolls back                Rolls back

Ordering a Product

In the example illustrated by Figure 1-5, a customer orders a product. The customer's information (such as name, address, and phone) is committed to a customer information table, even if the sale does not go through.

Resuming Execution After Storage Allocation Error

When a long-running transaction is interrupted by an out-of-space error condition, your application can suspend the statement that encountered the problem and resume it after the space problem is corrected. This capability is known as resumable storage allocation. It lets you avoid time-consuming rollbacks. It also lets you avoid splitting the operation into smaller pieces and writing code to track its progress.

What Operations Can Be Resumed After an Error Condition?

Queries, DML statements, and certain DDL statements can be resumed if they encounter an out-of-space error. The capability applies if the operation is performed directly by a SQL statement, or if it is performed within a stored subprogram, anonymous PL/SQL block, SQL*Loader, or an OCI call such as OCIStmtExecute.

Operations can be resumed after these kinds of error conditions:

Out of space errors, such as ORA-01653.

Space limit errors, such as ORA-01628.

Space quota errors, such as ORA-01536.

Certain storage errors cannot be handled using this technique. In dictionary-managed tablespaces, you cannot resume an operation if you run into the limit for rollback segments, or the maximum number of extents while creating an index or a table. Use locally managed tablespaces and automatic undo management in combination with this feature.
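Resumable behavior is enabled at the session level. For example (the timeout value and statement name here are illustrative assumptions):

-- Allow statements in this session to suspend, instead of failing,
-- for up to one hour when they hit a correctable space error.
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'nightly load';

-- ... run the long-running INSERT, index build, or other operation ...

-- Turn resumable behavior off again.
ALTER SESSION DISABLE RESUMABLE;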

Handling Suspended Storage Allocation

When a statement is suspended, your application does not receive the usual error code. Therefore, it must do any logging or notification by coding a trigger to detect the AFTER SUSPEND event and invoke functions in the DBMS_RESUMABLE package to get information about the problem.

Within the body of the trigger, you can perform any notifications, such as sending e-mail to alert an operator to the space problem.

Alternatively, the DBA can periodically check for suspended statements by using the static data dictionary view DBA_RESUMABLE and the dynamic performance view V$SESSION_WAIT.

When the space condition is corrected (usually by the DBA), the suspended statement automatically resumes execution. If not corrected before the timeout period expires, the statement raises a SERVERERROR exception.

To reduce the chance of out-of-space errors within the trigger itself, declare it as an autonomous transaction, so that it uses a rollback segment in the SYSTEM tablespace. If the trigger encounters a deadlock condition because of locks held by the suspended statement, the trigger terminates and your application receives the original error condition, as if the statement was never suspended. If the trigger encounters an out-of-space condition, both the trigger and the suspended statement are rolled back. You can prevent the rollback through an exception handler in the trigger, and wait for the statement to be resumed.

The trigger in Example 1-4 handles storage errors within the database. For some kinds of errors, it terminates the statement and alerts the DBA that this has happened through an e-mail. For other errors, which might be temporary, it specifies that the statement waits for eight hours before resuming, expecting the storage problem to be fixed by then. To run this example, you must be logged in as SYSDBA.
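A simplified sketch of the key calls such a trigger makes follows. The error-type string, the abort decision, and the trigger name are illustrative of the approach, not a reproduction of Example 1-4 (which also sends the e-mail):

-- Sketch of an AFTER SUSPEND trigger (details illustrative).
CREATE OR REPLACE TRIGGER resumable_default
  AFTER SUSPEND ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- keep trigger work out of the suspended transaction
  l_error_type  VARCHAR2(64);
  l_object_type VARCHAR2(64);
  l_owner       VARCHAR2(128);
  l_tablespace  VARCHAR2(128);
  l_object      VARCHAR2(128);
  l_sub_object  VARCHAR2(128);
BEGIN
  -- Ask DBMS_RESUMABLE what kind of space error suspended the statement.
  IF DBMS_RESUMABLE.SPACE_ERROR_INFO(l_error_type, l_object_type, l_owner,
                                     l_tablespace, l_object, l_sub_object)
  THEN
    IF l_error_type = 'SPACE QUOTA EXCEEDED' THEN
      -- A quota error will not fix itself: terminate the statement
      -- (this is where the full example also alerts the DBA).
      DBMS_RESUMABLE.ABORT(TO_NUMBER(SYS_CONTEXT('USERENV', 'SID')));
    ELSE
      -- Possibly temporary: wait up to eight hours for more space.
      DBMS_RESUMABLE.SET_TIMEOUT(8 * 3600);
    END IF;
  END IF;
  COMMIT;  -- end the autonomous transaction
END;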