9.1 Configuring and Managing a Logical Standby Database

The DBMS_LOGSTDBY PL/SQL package provides procedures to help you configure and manage a logical standby database. You can use the DBMS_LOGSTDBY PL/SQL package to perform the management tasks described in the following sections.

9.1.2 Controlling User Access to Tables in a Logical Standby Database

The SQL ALTER DATABASE GUARD statement controls user access to tables in a logical standby database. The database guard is set to ALL by default on a logical standby database.

The ALTER DATABASE GUARD statement allows the following keywords:

ALL

Specify ALL to prevent all users, other than SYS, from making changes to any data in the logical standby database.

STANDBY

Specify STANDBY to prevent all users, other than SYS, from making DML and DDL changes to any table or sequence being maintained through SQL Apply.

NONE

Specify NONE if you want typical security for all data in the database.

For example, use the following statement to enable users to modify tables not maintained by SQL Apply:

SQL> ALTER DATABASE GUARD STANDBY;

Privileged users can temporarily turn the database guard off and on for the current session using the ALTER SESSION DISABLE GUARD and ALTER SESSION ENABLE GUARD statements, respectively. This statement replaces the DBMS_LOGSTDBY.GUARD_BYPASS PL/SQL procedure that performed the same function in Oracle9i. The ALTER SESSION [ENABLE|DISABLE] GUARD statement is useful when you want to temporarily disable the database guard to make changes to the database, as described in Section 9.1.4.
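For example, a privileged user could bypass the guard, make a change, and restore protection within a single session. The table updated here is purely illustrative:

SQL> ALTER SESSION DISABLE GUARD;
SQL> UPDATE myschema.report_run_stats SET run_count = run_count + 1;
SQL> ALTER SESSION ENABLE GUARD;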

Note:

Be careful not to let the primary and logical standby databases diverge while the database guard is disabled.

9.1.3 Deleting Archived Redo Log Files No Longer Needed By SQL Apply

Periodically, you need to remove archived redo log files that are no longer needed by SQL Apply to reclaim disk space. Perform the following steps to remove archived redo log files from the file system:

To purge the logical standby session of metadata that is no longer needed, enter the following PL/SQL statement:

SQL> EXECUTE DBMS_LOGSTDBY.PURGE_SESSION;

This procedure also updates the DBA_LOGMNR_PURGED_LOG view, which displays the archived redo log files that are no longer needed.

Query the DBA_LOGMNR_PURGED_LOG view to list the archived redo log files that can be removed:
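For example:

SQL> SELECT * FROM DBA_LOGMNR_PURGED_LOG;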

Use an operating system-specific command to delete the archived redo log files listed by the query.

9.1.4 Modifying a Logical Standby Database

You can override the database guard to allow changes to the logical standby database by executing the ALTER SESSION DISABLE GUARD statement. Privileged users can issue this statement to turn the database guard off for the current session.

The following sections provide some examples. The discussions in these sections assume that the database guard is set to ALL or STANDBY.

9.1.4.1 Performing DDL on a Logical Standby Database

This section describes how to add an index to a table maintained through SQL Apply.

By default, only accounts with SYS privileges can modify the database while the database guard is set to ALL or STANDBY. If you are logged in as SYSTEM or another privileged account, you will not be able to issue DDL statements on the logical standby database without first bypassing the database guard for the session.

The following example shows how to stop SQL Apply, bypass the database guard, execute SQL statements on the logical standby database, and then reenable the guard:
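A sketch of such a sequence follows; the index name and columns are illustrative only:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE INDEX emp_ename_idx ON scott.emp (ename);
SQL> ALTER SESSION ENABLE GUARD;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;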

This sample procedure could be used to execute other DDL statements. Oracle recommends that you do not perform DML operations while the database guard bypass is enabled. Doing so can introduce deviations between the primary and standby databases that make it impossible for the logical standby database to be maintained. It is unlikely that you will be able to modify rows in a table in such a way that the logical standby database can incrementally maintain the rows.

9.1.4.2 Modifying Tables That Are Not Maintained by SQL Apply

Sometimes, a reporting application must collect summary results and store them temporarily or track the number of times a report was run. Although the main purpose of an application is to perform reporting activities, the application might need to issue DML (insert, update, and delete) operations on a logical standby database. It might even need to create or drop tables.

You can set up the database guard to allow reporting operations to modify data as long as the data is not being maintained through SQL Apply. To do this, you must:

Specify the set of tables on the logical standby database to which an application can write data by executing the DBMS_LOGSTDBY.SKIP procedure. Skipped tables are not maintained through SQL Apply.

Set the database guard to protect only standby tables.

In the following example, it is assumed that the tables to which the report is writing are also on the primary database.

The example stops SQL Apply, skips the tables, and then restarts SQL Apply so that changes can be applied to the logical standby database. The reporting application will be able to write to MYTABLES% in MYSCHEMA, and these tables will no longer be maintained through SQL Apply.
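The sequence might look like the following sketch, which assumes the MYSCHEMA.MYTABLES% naming used above:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('SCHEMA_DDL', 'MYSCHEMA', 'MYTABLES%');
SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('DML', 'MYSCHEMA', 'MYTABLES%');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;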

The example then queries the DBA_LOGSTDBY_PARAMETERS view to verify the logical standby database is updated. Verification can take a while so you might need to repeat the query until no rows are returned, as shown in the following example:

Finally, the example sets the database guard to allow updates to the tables.

SQL> ALTER DATABASE GUARD STANDBY;
Database altered.

9.1.5 How Triggers and Constraints Are Handled on a Logical Standby Database

You do not need to take any action to enable or handle triggers and constraints on logical standby databases. Triggers and constraints are enabled on the standby database but they are not executed. The following describes how triggers and constraints are handled on a logical standby database:

For triggers and constraints on tables maintained by SQL Apply:

Constraints -- Check constraints are evaluated on the primary database and do not need to be re-evaluated on the logical standby database.

Triggers -- The effects of the triggers executed on the primary database are logged and applied on the standby database.

For triggers and constraints on tables not maintained by SQL Apply:

Constraints are evaluated.

Triggers are fired.

9.1.6 Skipping SQL Statements on a Logical Standby Database

If only a subset of activity on a primary database is of interest on the standby database, use the DBMS_LOGSTDBY.SKIP procedure to define filters that prevent SQL Apply from issuing the SQL statements on the logical standby database. (See Section 4.1.1.1 for information about SQL statements that are skipped automatically.)

SQL Apply automatically filters out statements that involve unsupported datatypes or operations and continues applying SQL statements to the remaining tables. However, you must use the DBMS_LOGSTDBY.SKIP procedure to skip tables that you do not want to apply to the logical standby database. The following list shows typical examples of the types of SQL statements that can be filtered or skipped so that they are not applied on the logical standby database:

DML or DDL changes for tables

CREATE, ALTER, or DROP INDEX DDL statements

CREATE, ALTER, DROP, or TRUNCATE TABLE statements

CREATE, ALTER, or DROP TABLESPACE statements

CREATE or DROP VIEW statements

Example 9-1 demonstrates how to skip all SQL statements that reference the EMP table in a logical standby database.
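A hedged sketch of such a filter follows, assuming the table is SCOTT.EMP (the schema name is an assumption):

SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('SCHEMA_DDL', 'SCOTT', 'EMP');
SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('DML', 'SCOTT', 'EMP');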

In addition to skipping DML and DDL statements for schema and non-schema operations, you can also skip specific DML and DDL operations. Example 9-2 shows how to skip ALTER TABLESPACE and CREATE TABLESPACE non-schema DDL operations.
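Skipping these non-schema DDL operations might be sketched as:

SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('CREATE TABLESPACE');
SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('ALTER TABLESPACE');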

9.1.7 Adding or Re-Creating Tables On a Logical Standby Database

Before adding or re-creating a table, you should understand:

How to ensure table rows in the primary database can be uniquely identified

How to determine if the primary database contains datatypes or tables that are not supported by a logical standby database

The following list and Example 9-3 show how to re-create a table and resume SQL Apply on that table:

Stop SQL Apply.

Ensure no operations are being skipped for that table by querying the DBA_LOGSTDBY_SKIP view.

If any operations are being skipped for that table, resume application of each operation that is currently being skipped by using the DBMS_LOGSTDBY.UNSKIP procedure. If multiple filters were created on the table, you will need to execute the procedure multiple times.

Re-create the table in the logical standby database using the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure. In addition to creating a table, this procedure also imports the data from the primary table using a database link.

Resume SQL Apply.

Before accessing data in the newly added table, archive the current online redo log file on the primary database and ensure the archived redo log file is applied to the logical standby database.

Example 9-3 demonstrates how to add the EMP table to a logical standby database.
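The steps above might be sketched as follows; the schema name and database link name are hypothetical:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> SELECT * FROM DBA_LOGSTDBY_SKIP WHERE NAME = 'EMP';
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP ('DML', 'SCOTT', 'EMP');
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE ('SCOTT', 'EMP', 'prim_db_link');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;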

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> SELECT FIRST_CHANGE# FROM V$LOG WHERE STATUS = 'CURRENT';
When the value of the APPLIED_SCN column in the DBA_LOGSTDBY_PROGRESS view is equal to or greater than the (FIRST_CHANGE# - 1) value returned by the query of the V$LOG view, the database is consistent and you can safely run reports again.

9.1.8 Viewing and Controlling Logical Standby Events

When you query the DBA_LOGSTDBY_EVENTS view, it displays a table of events that records activity about SQL Apply. In particular, DDL execution or anything that generates an error is recorded in the events table. You can control what and how much activity is recorded in the events table. By default, 100 records are stored in this table, but you can increase this limit. For example:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED', 200);

Additionally, you can indicate what type of events you want recorded. By default, everything is recorded in the table. However, you can set the RECORD_SKIP_DDL, RECORD_SKIP_ERRORS, and RECORD_APPLIED_DDL parameters to FALSE to avoid recording these events.

Errors that cause SQL Apply to stop are always recorded in the events table (unless there is insufficient space in the system tablespace). These events are always put into the ALERT.LOG file as well, with the keyword 'LOGSTDBY' included in the text. When querying the view, select the columns in order by EVENT_TIME, COMMIT_SCN, and CURRENT_SCN. This ordering ensures a shutdown failure appears last in the view.

9.1.9 Understanding and Viewing SQL Apply Activity

SQL Apply uses a collection of parallel execution servers and background processes that apply changes from the primary database to the logical standby database. Figure 9-1 shows the flow of information and the role that each process performs.

Figure 9-1 SQL Apply Processing

The READER process reads redo records from the archived redo log files. The PREPARER processes do the heavy computing required to convert the block changes into table changes, or logical change records (LCRs). At this point, the LCRs do not represent any specific transactions.

The BUILDER process assembles completed transactions from the individual LCRs.

The ANALYZER process examines the records, possibly eliminating transactions and identifying dependencies between the different transactions.

The COORDINATOR process (LSP):

Assigns transactions

Monitors dependencies between transactions and coordinates scheduling

Authorizes the commitment of changes to the logical standby database

The APPLIER processes:

Apply the LCRs to the database

Ask the COORDINATOR process to approve transactions with unresolved dependencies

Commit the transactions

You can query the V$LOGSTDBY view to see what each process is currently doing; the TYPE column describes the task being performed. When querying the V$LOGSTDBY view, pay special attention to the HIGH_SCN column. This is an activity indicator. As long as it is changing each time you query the V$LOGSTDBY view, progress is being made. The STATUS column gives a text description of the current activity. For example:

Another place to get information about current activity is the V$LOGSTDBY_STATS view, which provides state and status information. All of the options for the DBMS_LOGSTDBY.APPLY_SET procedure have default values, and those values (default or set) can be seen in the V$LOGSTDBY_STATS view. In addition, a count of the number of transactions applied or transactions ready will tell you if transactions are being applied as fast as they are being read. Other statistics include information on all parts of the system. For example:
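A simple starting point is to list every statistic and scan for the transaction counters; the exact statistic names vary by release, so this is a sketch:

SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS;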

This query shows how long SQL Apply has been running and how many transactions have been applied in that time. It also shows how many transactions are available to be applied.

9.1.10 Enabling Real-Time Apply

By default, Data Guard waits for the full archived redo log file to arrive on the standby database before recovering it to the standby database. However, if you have configured a standby redo log on the standby database, you can optionally enable real-time apply, which recovers redo data from the standby redo log files as they are being filled up by the remote file server (RFS) process. With real-time apply enabled, SQL Apply recovers redo data from standby redo log files at the same time the log files are being written to, as opposed to when a log switch occurs. Immediately applying standby redo log files in this manner keeps the logical standby database closely caught up with the primary database, without requiring the standby redo log files to be archived at the standby site. This can result in quicker switchovers and failovers.

To start real-time apply on the logical standby database, issue the following statement:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

9.1.11 Determining How Much Redo Data Was Applied

Transaction data in the redo stream can span multiple redo log files. For this reason, logical standby databases use an SCN range of redo data, rather than individual archived redo log files, to report the progress of SQL Apply.

The DBA_LOGSTDBY_PROGRESS view displays APPLIED_SCN, NEWEST_SCN, and READ_SCN information. The APPLIED_SCN indicates that committed transactions less than or equal to that SCN were applied. The NEWEST_SCN is the maximum SCN to which data could be applied if no more redo data is received. This is usually the MAX(NEXT_CHANGE#)-1 from DBA_LOGSTDBY_LOG when there are no gaps in the list.
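For example, the three SCN values can be retrieved together:

SQL> SELECT APPLIED_SCN, NEWEST_SCN, READ_SCN FROM DBA_LOGSTDBY_PROGRESS;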

Archived redo log files with a NEXT_CHANGE# value that is less than the READ_SCN value are no longer needed. The information in those log files was applied or persistently stored in the database. The time values associated with these SCN values are only estimates based on log times. They are not meant to be accurate times of when those SCN values were written on the primary database.

To see which archived redo log files were applied or were not applied, issue the following query:
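A query along the following lines lists each log file and its applied status; it assumes the APPLIED column is present in this release of DBA_LOGSTDBY_LOG:

SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, APPLIED
  2  FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;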

9.1.12 Recovering from Errors

Logical standby databases maintain user tables, sequences, and jobs. To maintain other objects, you must reissue the DDL statements seen in the redo data stream. Tables in the SYS schema are never maintained, because only Oracle metadata is maintained in the SYS schema.

If SQL Apply fails, an error is recorded in the DBA_LOGSTDBY_EVENTS table. The following sections demonstrate how to recover from two such errors.

9.1.12.1 DDL Transactions Containing File Specifications

DDL statements are executed the same way on the primary database and the logical standby database. If the underlying file structure is the same on both databases, the DDL will execute on the standby database as expected. However, if the file system structure on the standby system differs from that on the primary system, an error might result because the DB_FILE_NAME_CONVERT initialization parameter cannot convert the filenames of one or more sets of datafiles on the primary database to valid filenames on the logical standby database.

If an error was caused by a DDL transaction that contained a file specification that does not match in the logical standby database environment, perform the following steps to fix the problem:

Use the ALTER SESSION DISABLE GUARD statement to bypass the database guard so you can make modifications to the logical standby database:

SQL> ALTER SESSION DISABLE GUARD;

Execute the DDL statement, using the correct file specification, and then reenable the database guard. For example:
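The tablespace name and datafile path below are hypothetical:

SQL> CREATE TABLESPACE new_ts DATAFILE '/disk2/oracle/dbs/new_ts.dbf' SIZE 100M;
SQL> ALTER SESSION ENABLE GUARD;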

In some situations, the problem that caused the transaction to fail can be corrected and SQL Apply restarted without skipping the transaction. An example of this might be when available space is exhausted. (Do not let the primary and logical standby databases diverge when skipping transactions. If possible, you should manually execute a compensating transaction in place of the skipped transaction.)

The following example shows SQL Apply stopping, the error being corrected, and then restarting SQL Apply:

When SQL Apply restarts, the transaction that failed will be re-executed and applied to the logical standby database.

9.1.12.2 Recovering from DML Failures

Do not use the SKIP_TRANSACTION procedure to filter DML failures. Not only is the DML that is seen in the events table skipped, but so is all the DML associated with the transaction. Thus, multiple tables might be damaged by such an action.

DML failures usually indicate a problem with a specific table. For example, assume the failure is an out-of-storage error that you cannot resolve immediately. The following steps demonstrate one way to respond to this problem.

Bypass the table, but not the transaction, by adding the table to the skip list:
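For example, assuming the SCOTT.EMP table discussed below:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP ('DML', 'SCOTT', 'EMP');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;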

From this point on, DML activity for the SCOTT.EMP table will not be applied. After you correct the storage problem, you can fix the table, provided you set up a database link to the primary database that has administrator privileges to run procedures in the DBMS_LOGSTDBY package.

Using the database link to the primary database, drop the local SCOTT.EMP table and then re-create it, and pull the data over to the standby database.
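A sketch of this step follows; the database link name is hypothetical:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE ('SCOTT', 'EMP', 'prim_db_link');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;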

Because the SCOTT.EMP table will contain records as of when the INSTANTIATE_TABLE procedure was performed (in Step 2), it is possible for the SCOTT.EMP table to contain records for a department not in the SCOTT.DEPT table.

9.1.13 Refreshing Materialized Views

Materialized views refreshed on the primary database are not automatically refreshed separately on a logical standby database. To refresh materialized views on a logical standby database, use the ALTER SESSION DISABLE GUARD and ALTER SESSION ENABLE GUARD statements. For example:
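A sketch of a complete refresh; the materialized view name is hypothetical:

SQL> ALTER SESSION DISABLE GUARD;
SQL> EXECUTE DBMS_MVIEW.REFRESH ('HR.DEPT_MV', 'C');
SQL> ALTER SESSION ENABLE GUARD;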

If you are using the DBMS_LOGSTDBY.APPLY_SET procedure but you are not using the FULL option (the default) for the TRANSACTION_CONSISTENCY parameter, you should stop SQL Apply before refreshing materialized views on the logical standby database.

9.2 Upgrading the Oracle Database Software Version

Using a logical standby database, you can upgrade Oracle database software and patch sets with almost no downtime. This section provides a conceptual overview of the upgrade process. For complete database upgrade information, see the ReadMe file for the applicable Oracle Database 10g patchset release.

Note:

If you cannot use a logical standby database because of the datatypes in your application, then perform the upgrade as documented in Oracle Database Upgrade Guide.

Figure 9-2 shows the Data Guard configuration before the upgrade begins, with the primary and logical standby databases both running the same Oracle software version.

Figure 9-2 Data Guard Configuration Before Upgrade

The Data Guard configuration operates with mixed database versions at several times so that the upgrade can be validated. The steps in this section indicate where you can end the upgrade and downgrade the software without data loss.

During these steps, consider having a second standby database in the Data Guard configuration to provide additional data protection.

Step 1 Stop SQL Apply and upgrade the logical standby database.

To begin the upgrade, stop SQL Apply and upgrade the Oracle database software on the logical standby database to version n+1.

For more information about upgrading the Oracle database software version, see the ReadMe file for the applicable Oracle Database 10g patchset release.

Figure 9-3 shows the primary database running version n, and the logical standby database running version n+1. During the upgrade, redo data accumulates on the primary system.

Figure 9-3 Upgrade the Logical Standby Database Version

Step 2 Restart SQL Apply.

Restart SQL Apply and operate with version n on the primary database and version n+1 on the standby database. The Data Guard configuration can run the mixed versions shown in Figure 9-4 for an arbitrary period while you verify the upgraded Oracle software version is running properly in the production environment.

The redo data that was accumulating on the primary system is automatically transmitted and applied on the newly upgraded logical standby database.

Figure 9-4 Running Mixed Versions

Step 3 Perform a switchover.

When you are satisfied that the upgraded software is operating properly, you can reverse the database roles by performing a switchover (see Section 7.3.1). This may take only a few seconds. Activate the user applications and services on the new primary database. If application service levels degrade for some reason, then you can open the previous primary database again, switch users back, and back out of the upgrade.

After the switchover, you cannot send redo data from the new primary database (B) that is running the new database software version to the new standby database (A) that is running an older software version. This means that:

Redo data is accumulating on the new primary database.

The new primary database is unprotected at this time.

Figure 9-5 shows the former standby database (version n+1) is now the primary database, and the former primary database (version n) is now the standby database. The users are connected to the new primary database.

Figure 9-5 After a Switchover

Step 4 Upgrade the new logical standby database.

Upgrade the Oracle database software on the new logical standby database to version n+1.

For more information about upgrading the Oracle database software version, see the ReadMe file for the applicable Oracle Database 10g patchset release. Figure 9-6 shows the system after both databases were upgraded to version n+1.

Figure 9-6 Both Databases Upgraded

Step 5 Start SQL Apply.

When you start SQL Apply, the redo that was accumulating on the primary database is sent to the logical standby database. The primary database is protected against data loss once the redo data is available on the standby database.

Step 6 Raise the compatibility level on both databases.

Raise the compatibility level of both databases by setting the COMPATIBLE initialization parameter. Set the COMPATIBLE parameter on the standby database before you set it on the primary database. See Chapter 11 for more information about the COMPATIBLE initialization parameter.

Step 7 Optionally, perform another switchover.

Optionally, perform another switchover of the databases so the original primary database is once again running in the production database role (as shown in Figure 9-2).

9.3 Recovering Through the OPEN RESETLOGS Statement

Data Guard allows recovery on a logical standby database to continue after the primary database was opened with the RESETLOGS option. When an ALTER DATABASE OPEN RESETLOGS statement is issued on the primary database, the incarnation of the database changes, creating a new branch of redo data.

When a logical standby database receives a new branch of redo data, SQL Apply stops and the logical standby process (LSP) on the standby database terminates. The following describes how to resynchronize the standby database with the primary database branch:

If the standby database has not applied redo data past the new resetlogs SCN (past the start of the new branch of redo data), then no manual intervention is necessary because SQL Apply automatically takes the new branch of redo data. Restart SQL Apply to continue applying redo data; the LSP automatically resynchronizes the standby database with the new branch of redo data.

If the standby database has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is enabled on the standby database, then the standby database is recovered in the future of the new branch of redo data. Follow the procedure in Section 10.4.2 to flash back a logical standby database.

9.4 Tuning Logical Standby Databases

The following sections describe actions you can take to increase system performance.

9.4.1 Create a Primary Key RELY Constraint

On the primary database, if a table does not have a primary key or a unique index, and you know the rows are indeed unique because you have ensured this some other way, then create a primary key RELY constraint. On the logical standby database, create an index on the columns that make up the primary key. The following query generates a list of tables with no index information that SQL Apply can use to uniquely identify rows. Creating an index on these tables can improve performance significantly.
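One way to find candidate tables is to query the DBA_LOGSTDBY_NOT_UNIQUE view; the BAD_COLUMN filter shown here is an assumption and should be verified against your release:

SQL> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE
  2  WHERE BAD_COLUMN = 'Y';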

9.4.2 Gather Statistics for the Cost-Based Optimizer

Statistics should be gathered on the standby database because the cost-based optimizer (CBO) uses them to determine the optimal query execution path. New statistics should be gathered after the data or structure of a schema object is modified in ways that make the previous statistics inaccurate. For example, after inserting or deleting a significant number of rows into a table, collect new statistics on the number of rows.

Statistics should be gathered on the standby database because DML and DDL operations on the primary database are executed as a function of the workload. While the standby database is logically equivalent to the primary database, SQL Apply might execute the workload in a different way. This is why using Statspack on the logical standby database and the V$SYSSTAT view can be useful in determining which tables are consuming the most resources and which tables are being scanned most often.

9.4.3 Adjust the Transaction Consistency

Use the TRANSACTION_CONSISTENCY parameter of the DBMS_LOGSTDBY.APPLY_SET procedure to control how transactions are applied to the logical standby database. The default setting is FULL, which applies transactions to the logical standby database in the same order in which they were committed on the primary database.

Regardless of the consistency level chosen, the data in the logical standby database will be transactionally consistent with the primary database when SQL Apply is stopped normally.

Specify one of the following values:

FULL (the default)

Transactions are applied to the logical standby database in the exact order in which they were committed on the primary database. This option results in the lowest performance, but it is recommended when the logical standby database is used for generic reporting applications.

READ_ONLY

Transactions are committed out of order, but SQL SELECT statements executed on the standby database always return consistent results based on the last consistent SCN known to SQL Apply.

The READ_ONLY option provides better performance than the FULL option, and SQL SELECT statements return read-consistent results. This is particularly beneficial when you are using the logical standby database to generate reports. The READ_ONLY option is recommended when the logical standby database is used for read-only reporting.

Note:

The READ_ONLY option should be used only when ALTER DATABASE GUARD ALL is set.

NONE

Transactions are applied out of order from how they were committed on the primary database. This results in the best performance of the three values. The NONE option is useful when the logical standby database is in catch-up mode after a temporary loss of network connectivity, or any time there are a number of log files to apply. This option also works well as long as applications that are reading the logical standby database make no assumptions about transaction order. For example:

On the primary database, one transaction adds a new customer, and a second transaction adds a new order for that customer.

On the standby database, those transactions may be reversed. The order for the new customer might be added first. If you then run a reporting application on the standby database that expects to find a customer for the new order, the reporting application might fail because constraints are not checked and triggers are not fired.

For example, the timeline in Figure 9-7 shows that Transaction 2 starts around the time Transaction 1 ends, and Transaction 2 commits soon after Transaction 1 commits. If TRANSACTION_CONSISTENCY is set to:

FULL--SQL Apply guarantees Transaction 1 commits before Transaction 2

READ_ONLY--SQL Apply guarantees either:

Transaction 1 will commit before Transaction 2.

Transaction 1 and Transaction 2 will commit at the same time.

NONE--SQL Apply does not guarantee order; it is possible that Transaction 2 will commit before Transaction 1.
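To change the setting, stop SQL Apply first. For example, to select READ_ONLY consistency:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('TRANSACTION_CONSISTENCY', 'READ_ONLY');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;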

9.4.4 Adjust the Maximum Number of Parallel Execution Processes

SQL Apply uses parallel execution processes and parallel apply algorithms to maintain a good level of performance. You can adjust the maximum number of parallel execution processes for an instance by setting the PARALLEL_MAX_SERVERS initialization parameter. The default value for this parameter is derived from the values of the CPU_COUNT, PARALLEL_AUTOMATIC_TUNING, and PARALLEL_ADAPTIVE_MULTI_USER initialization parameters. This parameter must not be set to a value less than 5 on a logical standby database. However, for best results, set PARALLEL_MAX_SERVERS to a minimum of 9.

You can use the MAX_SERVERS parameter of the DBMS_LOGSTDBY.APPLY_SET procedure to limit the number of parallel servers used by SQL Apply. The default value of this parameter is set to 9. If you set this parameter explicitly, do not set it to a value less than 5, or greater than the value of the PARALLEL_MAX_SERVERS initialization parameter.
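For example, to cap SQL Apply at nine parallel servers:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('MAX_SERVERS', 9);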

Increasing the number of parallel execution processes for an instance can speed up execution operations, but this improvement must be balanced against the consumption of additional system resources by the processes.

9.4.5 Control Memory Usage on the Logical Standby Database

You can use the MAX_SGA parameter of the DBMS_LOGSTDBY.APPLY_SET procedure to set the maximum amount of shared pool space used by SQL Apply for redo cache. By default, SQL Apply will use up to one quarter of the shared pool. Generally speaking, increasing the size of the shared pool or the amount of shared pool space used by SQL Apply will improve the performance of a logical standby database. See PL/SQL Packages and Types Reference for more information about the DBMS_LOGSTDBY.APPLY_SET procedure.
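For example, to allow SQL Apply a larger redo cache (the value is assumed to be in megabytes; verify the unit against your release):

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('MAX_SGA', 100);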