In an Oracle DBA’s life, you’ll regularly be asked to apply a new patchset to a database, rolling it out from the development database all the way to production, and this process can be quite long depending on the organization you work for.

In an Oracle DBA’s life, you’ll also regularly be asked to refresh a development database (or any environment before production) with the production data for development, testing, or other needs. For years now, RMAN has helped us a lot to perform this kind of task easily.

And what must happen always happens: one day you will be asked to refresh your more recent patchset DEV database (let’s say 11.2.0.4) with your PROD data (let’s say it runs an 11.2.0.3 version). And let’s call a spade a spade: that can be a bit tricky, especially if you only discover that the versions are different once the RESTORE / DUPLICATE has finished, because you launched the usual refresh scripts and forgot this little detail…

A solution could be to ask the sysadmin team for a few GB, copy an 11.2.0.3 ORACLE_HOME from another server, quickly clone it on the DEV server, run an RMAN DUPLICATE / RESTORE DATABASE from the 11.2.0.3 PROD to the 11.2.0.3 DEV, and then upgrade it to 11.2.0.4. But this will probably take quite a while, and if adding some GB to a server requires procedures, validations, etc., it could take many days to refresh the DEV database, which is obviously not what anybody wants. And this option does not exist at all if you discover the issue only after the RESTORE / DUPLICATE has finished.

Fortunately, there’s a way to achieve this goal by restoring / duplicating a database directly to a more recent patchset (note that this method also works for 10g databases). Let’s explore the two cases you can face when doing a direct RESTORE / DUPLICATE to a more recent patchset database.

RESTORE / DUPLICATE DATABASE case

Whether we are restoring or duplicating the production database from a backup, here is what will happen on the DEV database:

DUPLICATE FROM ACTIVE DATABASE case

Starting with 11g, we have the cool DUPLICATE FROM ACTIVE DATABASE feature, which we can also use to perform this kind of refresh. When you perform a DUPLICATE FROM ACTIVE DATABASE operation from an 11.2.0.3 to an 11.2.0.4 version, the procedure differs from the previous one: the RESETLOGS will begin but will not be able to finish properly, and you will face this error:

At this stage, it’s not possible to open the database in UPGRADE mode, to RECOVER the database, or even to generate a BACKUP CONTROLFILE TO TRACE.

SQL> recover database using backup controlfile until cancel ;
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
SQL>

So we have to recreate the controlfile. Using the queries below, we can easily build a new CREATE CONTROLFILE statement (or we could generate a BACKUP CONTROLFILE TO TRACE from the source database and then adapt it for the destination database).

SQL> select name from v$datafile order by file#;
SQL> select group#, member from v$logfile;
SQL> select name, bytes from v$tempfile order by file#;
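Assembled from the output of those queries, the statement might look like this hypothetical example (the database name, limits, paths and sizes are all illustrative, not taken from the post):

```sql
-- Hypothetical CREATE CONTROLFILE statement; every name, path and size
-- below is illustrative and must be replaced with the query output.
CREATE CONTROLFILE REUSE DATABASE "DEV" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/oradata/DEV/redo01.log' SIZE 50M,
  GROUP 2 '/u01/oradata/DEV/redo02.log' SIZE 50M
DATAFILE
  '/u01/oradata/DEV/system01.dbf',
  '/u01/oradata/DEV/sysaux01.dbf',
  '/u01/oradata/DEV/undotbs01.dbf',
  '/u01/oradata/DEV/users01.dbf'
CHARACTER SET AL32UTF8
;
```

Tempfiles are not part of CREATE CONTROLFILE; the v$tempfile query above gives the names and sizes needed to re-add them with ALTER TABLESPACE ... ADD TEMPFILE once the database is open.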

Oracle RMAN Restore to the Same Machine as the Original Database (11 Apr 2014)

Among the most critical but often most neglected database administration tasks is testing restore from backup. But sometimes, you don’t have a test system handy, and need to test the restore on the same host as the source database. In such situations, the biggest fear is overwriting the original database. Here is a simple procedure you can follow, which will not overwrite the source.

Add an entry to the oratab for the new instance, and source the new environment:
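For example (the foo SID and the ORACLE_HOME path are assumptions; ORAENV_ASK=NO makes oraenv non-interactive):

```shell
# Illustrative oratab entry for the new instance:
#   foo:/u01/app/oracle/product/11.2.0/dbhome_1:N
export ORACLE_SID=foo
export ORAENV_ASK=NO
. oraenv
```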

Create a pfile and spfile with a minimum set of parameters for the new instance. In this case the source database is named ‘orcl’ and the new database will have a DB unique name of ‘foo’. This example will write all files to the +data ASM diskgroup, under directories for ‘foo’. You could use a filesystem directory as the destination as well. Just make sure you have enough space wherever you plan to write:
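A minimal sketch of such a pfile, with all values illustrative; db_name must stay 'orcl' to match the backup, while db_unique_name 'foo' and db_create_file_dest keep the restored files under their own directories:

```
# initfoo.ora (illustrative values)
db_name='orcl'
db_unique_name='foo'
db_create_file_dest='+DATA'
compatible='11.2.0.0.0'
sga_target=1G
```

Then, from SQL*Plus: CREATE SPFILE FROM PFILE; and STARTUP NOMOUNT; before invoking RMAN.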

Now, using the backup pieces from your most recent backup, try restoring the controlfile only. Start with the most recently written backup piece, since RMAN writes the controlfile at the end of the backup. It may fail once or twice, but keep trying backup pieces until you find the controlfile:
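One way to script that trial-and-error (the backup location and piece extension are assumptions); rman exits non-zero when a piece does not hold the controlfile, so the loop stops at the first success:

```shell
# Try backup pieces newest-first until one yields the controlfile.
for piece in $(ls -t /backups/orcl/*.bkp); do
  rman target / <<EOF && break
restore controlfile from '$piece';
exit;
EOF
done
```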

Mount the database with the newly restored controlfile, and perform a restore to the new location. The ‘set newname’ command changes the location that RMAN will write the files to: the db_create_file_dest of the new instance. The ‘switch database’ command updates the controlfile to reflect the new file locations. When the restore is complete, use media recovery to apply the archived redo logs.
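A sketch of those steps in RMAN, using the 'switch datafile all' form of the switch command to record the restored locations:

```
run {
  set newname for database to new;  # write files to foo's db_create_file_dest
  restore database;
  switch datafile all;              # update the controlfile with the new paths
}
recover database;                   # apply the archived redo logs
```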

Before opening the database, we need to re-create the controlfile so that we don’t step on any files belonging to the source database. The first step is to generate a “create controlfile” script, and to locate the trace file where it was written:
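For example, from SQL*Plus on the new instance (v$diag_info is available from 11g onward):

```sql
SQL> alter database backup controlfile to trace;
SQL> select value from v$diag_info where name = 'Default Trace File';
```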

Next, we need to edit the controlfile creation script so that all we have left is the “create controlfile … resetlogs” statement, and so that all file paths referencing the original database are removed or changed to reference the db_unique_name of the test database. Below is a pipeline of clumsy awks I created that produces a script called create_foo_controlfile.sql. It should take care of most permutations of these trace controlfile scripts.
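A sketch of such a pipeline, assuming the orcl/foo names used above; the embedded trace excerpt is only a minimal illustration of the real trace format:

```shell
# Sketch: keep only the RESETLOGS "CREATE CONTROLFILE" statement from a
# trace controlfile script and repoint every file path from orcl to foo.
# The trace excerpt below is a minimal, illustrative stand-in for the
# real trace file located in the previous step.
TRACE=sample_trace.trc
cat > "$TRACE" <<'EOF'
-- Set #1. NORESETLOGS case
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS  ARCHIVELOG
    LOGFILE
  GROUP 1 '+DATA/orcl/onlinelog/group_1.261.843241907'  SIZE 50M
    DATAFILE
  '+DATA/orcl/datafile/system.258.843241845'
;
-- Set #2. RESETLOGS case
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS  ARCHIVELOG
    LOGFILE
  GROUP 1 '+DATA/orcl/onlinelog/group_1.261.843241907'  SIZE 50M
    DATAFILE
  '+DATA/orcl/datafile/system.258.843241845'
;
EOF

# The range pattern starts only at the RESETLOGS statement (the extra
# "&& / RESETLOGS/" test does not match the NORESETLOGS line) and runs
# through the terminating ";"; sed then rewrites the orcl paths to foo.
awk '/CREATE CONTROLFILE/ && / RESETLOGS/, /^;/' "$TRACE" |
  sed 's|/orcl/|/foo/|g' > create_foo_controlfile.sql
```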

For me, every Monday morning starts with the same routine: I refresh some QA databases from the PeopleSoft production one. Even though the process is almost fully automated (all commands are automatically generated), it takes several hours to perform the final steps, because DUPLICATE FROM ACTIVE DATABASE performs image copies to the destination server.

I am so happy now, because the DUPLICATE mechanism in #DB12C has changed. It now uses a PULL method and backup sets over the network to perform duplication from an active database. This should shorten the operation, since “USING COMPRESSED BACKUPSET” and “SECTION SIZE” can now be used to optimize the process.
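A hedged sketch of such a 12c duplication (the prod/qa1 net aliases and the 500M section size are illustrative); connecting to both the target and the auxiliary over the network is what lets RMAN pull backup sets to the destination:

```
$ rman target sys@prod auxiliary sys@qa1

RMAN> duplicate target database to qa1
        from active database
        using compressed backupset
        section size 500m;
```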

I wanted to start the post by providing the results of the duplicate and the output of the operation. However, the real behaviour of new features sometimes deviates from what is claimed. The reasons for the discrepancies vary, from version restrictions to bugs, but the bottom line is that the results are not what’s expected.

That’s why many DBAs are very careful about new features. No, not afraid, just careful. They need to be 100% sure that what was claimed works properly, without problems, so it can be implemented on the systems they support and care about. There are no plans that I am aware of yet to implement a 12c database on any servers; however, I decided to test duplication from an active database to see how the compressed backup set and section size clauses influence the process… in case I am asked about it one day…

I started with a two-node 12c RAC database. The duplication without compression or the section size clause went well, but it failed with a missing-log error when section size and compression were used. I thought it might be because I used an easy connect string instead of TNS, but I tested it with TNS and got the same result. I then tested a standalone database and got the same result. Recovery of the duplicated database failed every time I used the COMPRESSED BACKUPSET and/or SECTION SIZE clauses, regardless of the connection string and of the storage type (ASM or file system), while duplication without those clauses worked without any issues.

Even though I haven’t resolved the issue, the story isn’t over. I created an SR on MOS, uploaded the required trace and log files, and am waiting for a response. I hope someone has made it work, but I have not yet heard that anyone has. It will definitely work one day, and the error I have been receiving will be overcome:

As you can see, the duplication completed nicely and the PULL method was used to get backup sets over the network from the source database. I assume that with SDU set to its maximum value of 2 MB in #DB12C, it can increase the speed of duplication and decrease the time I need to wait to complete the post-clone steps.

Thank you for reading the post and I hope to update it soon with a response from Oracle Support.

On another note, how many of you who blog bring such “unresolved” issues to the public? Do you think it’s worth publishing or better to wait until the issue has been resolved, post successful solutions, and leave it for investigation on support sites and within communities?