In this second part we will set up the standby database and enable Fast-Start Failover.

Standby Database Creation

Start the standby database instance in NOMOUNT mode:

SQL> startup nomount pfile=initdg2.ora;

Now that the configuration of the standby server is complete, let’s perform the duplication from the primary site.

Primary Server

Use the Recovery Manager (RMAN) to duplicate the primary database to the standby database.
Invoke RMAN; connect to the primary database as the sys user. Make an auxiliary connection to the standby instance:
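As a minimal sketch, assuming the primary and standby net service names are dg1 and dg2 (substitute your own TNS entries and SYS password):

$ rman target sys/password@dg1 auxiliary sys/password@dg2

RMAN> duplicate target database for standby from active database nofilenamecheck;

The FROM ACTIVE DATABASE clause copies the datafiles directly over the network, so no pre-existing backup of the primary is required.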

The RMAN duplication process finishes without errors. After the duplication, force a log switch on the primary database to archive the current redo log group:

SQL> alter system switch logfile;

Standby Server

On the standby database, run the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE command to start redo apply. The USING CURRENT LOGFILE clause means redo is applied as soon as it is received on the standby. The DISCONNECT option means redo apply runs in a background session.
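The full statement combining both options is:

SQL> alter database recover managed standby database using current logfile disconnect from session;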

The Observer

The server dg3 will act as the observer in the Fast-Start Failover configuration. The Oracle client binaries have been installed with the Administrator option. Confirm connectivity with both the primary and the standby databases:

The prerequisites for FSFO have been met, so FSFO can be configured, enabled, and started. The FSFO observer process will be started from a DGMGRL session and will log to a file named observer.log:
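As a sketch, assuming the broker configuration contains the databases dg1 (primary) and dg2 (standby) — the names and password here are placeholders — the DGMGRL steps could look like:

DGMGRL> connect sys/password@dg1
DGMGRL> edit database 'dg1' set property FastStartFailoverTarget='dg2';
DGMGRL> edit database 'dg2' set property FastStartFailoverTarget='dg1';
DGMGRL> enable fast_start failover;

$ nohup dgmgrl -silent sys/password@dg1 "start observer" > observer.log 2>&1 &

Running the observer under nohup in the background keeps it alive after the terminal session ends, with its output captured in observer.log.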

It all depends on your requirements: the observer can be placed at a third site, separate from the primary and standby sites, with a highly available (HA) observer setup. This means two hosts: one observer process running on the first host, and a standby observer (not running) on the second host. Only one observer can run at any one time.

31 Comments

One thing I noticed in this article: force logging is not being enabled. If it is not set and direct-path (APPEND) inserts are issued, those transactions will not come over to the standby, since they are not logged. This will cause corruption on the standby, but you will not see it until someone tries to select from those blocks on the standby.
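The commenter's point can be checked and addressed on the primary with standard commands:

SQL> select force_logging from v$database;
SQL> alter database force logging;

With force logging enabled, direct-path (NOLOGGING) operations still generate redo, so they are propagated to the standby.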

Hi Agam,
I don't have to manually create the standby control file because I am using the "DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE" command for duplication. This command creates a standby control file.
But if you use the "DUPLICATE TARGET DATABASE FOR STANDBY" command (without FROM ACTIVE DATABASE), then you will have to create a standby control file backup from the primary control file. You can always recreate a physical standby control file manually (refer to MOS note ID 459411.1).
Hope it helps,
Wissem

I wanted to share the following SQL statement, which can be used to find unrecoverable datafiles that have had append operations. Execute it on the primary database to see if any datafiles are affected. If the query returns rows and force logging was not turned on, you will need to rebuild the standby database or perform an incremental recovery of the standby database.

col name format a50
set linesize 200
SELECT name, unrecoverable_change#,
       TO_CHAR(unrecoverable_time, 'DD-MON-YYYY HH:MI:SS') AS unrecoverable_date
FROM v$datafile;

Hi,
Congratulations on a clear, easy-to-follow paper.
Since you have titled the paper "best practices", I am disappointed that you omitted details on setting parameters based on your link speed. Oracle's own best-practice paper suggests increasing the SDU and setting buffer sizes based on link speed. In practice I have found these essential to the performance of the primary database.
The same is true of force logging. It is recommended to provide blanket protection, but it is not REQUIRED. A better method is to control force logging at the tablespace level: specific tablespaces can be constructed to hold transient load and work objects, still with NOLOGGING, while persistent objects reside in tablespaces with FORCE LOGGING. This can have an enormous impact on primary performance and network bandwidth.
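As a sketch of the tablespace-level approach the commenter describes (the tablespace names here are hypothetical):

SQL> alter tablespace staging_data nologging;     -- transient load/work objects
SQL> alter tablespace app_data force logging;     -- persistent objects
SQL> select tablespace_name, logging, force_logging from dba_tablespaces;

Note that tablespace-level settings only take effect when database-level FORCE LOGGING is not enabled, since the database-level setting overrides the tablespace attribute.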
Andrew

I have a query: if DG is running in maximum performance mode and acceptable data loss is set to 30 minutes, and let's say the standby is lagging behind by 45 minutes when disaster strikes, what would happen? Will auto-failover, i.e. FSFO, work or not?

Hi,
Thanks for your very good document.
I have one question.
After an automatic failover by the observer, my standby server will change to the primary role. What should I do if I want to return my old primary to the primary role after fixing it up?

Hi, thanks for your useful blog. When I start up my old primary database I face this error:

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl"
Starting instance "orcl"...
Unable to connect to database
ORA-12545: Connect failed because target host or object does not exist
Failed.
Warning: You are no longer connected to ORACLE.
Please complete the following steps and reissue the REINSTATE command:
start up and mount instance "orcl" of database "orcl"

Can you tell me what I should do?

Very interesting post...
I've a question about this scenario. Maybe you can help me...
I've 3 servers oracle01 (primary DB), oracle02 (standby) and oracle03 (observer), all of them are virtual machines.
I need to move oracle01 to another virtual machine using vmotion.
I want to stop oracle03, oracle01 and oracle02, use vmotion to move the virtual server, and afterwards start up the servers.
I think I need to start oracle01 first, then oracle02, and finally oracle03 (the observer), but I don't know if oracle02 must be started as MOUNT STANDBY DATABASE.
Is this correct?

Hi, thanks for sharing this with us. Once you have a primary and physical standby DB set up with a separate observer, how do you do a proper JDBC connection (GridLink) with WebLogic so it can handle switchovers and failovers, since it requires an Oracle Notification Service (ONS) host? Can you configure this on the observer machine in "ons.config", and if so, would you mind sharing an example with us please?

First of all, thanks for the document you have shared.
Though it is an old thread, I am a beginner in Data Guard setup.
Here I am using ASM on both the primary and the standby database servers.
My command was:
------------------
run{
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'MIDBFL02','MIDBFL2S'
set db_unique_name='MIDBFL2S'
set db_file_name_convert='/MIDBFL02/','/MIDBFL2S/'
set log_file_name_convert='/MIDBFL02/','/MIDBFL2S/'
set control_files='/data/MIDBFL2S.ctl'
set log_archive_max_processes='5'
set fal_client='MIDBFL2S'
set fal_server='MIDBFL02'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(MIDBFL02,MIDBFL2S)'
set log_archive_dest_2='service=MIDBFL02 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MIDBFL02'
;
}
------------------------------------
And the error stack trace was:

Hi Wissem,
Please help on the following points:
(1) Can fast-start failover be enabled in maximum protection mode?
(2) If I have two standby DBs (local and remote), both SYNC in maximum protection mode, how can I set up FSFO? Which parameter decides where to fail over in the case of a primary disaster?
(3) Assume my primary goes down and the local standby has become the primary. What automatic configuration can be done to ship the archive logs from the new primary to the remote standby?

Hi,
I am not an Oracle expert; together with our DBA we are implementing a DR site using Data Guard. We need to know: if we prepare the DR servers and configure Data Guard at the main office and then move them to the DR site office, and the two offices are on different subnets, can we change the IP settings for Data Guard to communicate?
