Oracle DBA

Monday, April 27, 2015

After a data refresh/copy from one environment to another, developers often realize that their sequences need to be advanced past the refreshed data's maximum key values. To handle this I developed a handy script. The script assumes that whoever runs it has DBA (or DBA-like) privileges: the ability to select from the DBA views and to alter any sequence.
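The script itself is not reproduced here, but the core technique can be sketched as follows. The schema, table, and sequence names (app_user, orders, orders_seq) are hypothetical placeholders; the idea is to temporarily widen the sequence's increment, pull one value to jump it forward, then restore the increment:

```sql
-- Hedged sketch only; app_user.orders and orders_seq are hypothetical names.
DECLARE
  l_max  NUMBER;
  l_next NUMBER;
  l_gap  NUMBER;
BEGIN
  -- Highest key value actually present after the refresh
  SELECT NVL(MAX(order_id), 0) INTO l_max FROM app_user.orders;

  -- Current position of the sequence
  SELECT app_user.orders_seq.NEXTVAL INTO l_next FROM dual;

  l_gap := l_max - l_next;
  IF l_gap > 0 THEN
    -- Jump the sequence forward in one NEXTVAL, then restore INCREMENT BY 1
    EXECUTE IMMEDIATE 'ALTER SEQUENCE app_user.orders_seq INCREMENT BY ' || l_gap;
    EXECUTE IMMEDIATE 'SELECT app_user.orders_seq.NEXTVAL FROM dual' INTO l_next;
    EXECUTE IMMEDIATE 'ALTER SEQUENCE app_user.orders_seq INCREMENT BY 1';
  END IF;
END;
/
```

A DBA-wide version would drive the same logic from DBA_SEQUENCES and the tables that feed them rather than hard-coded names.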

Monday, April 6, 2015

I have used DB_Unique_Name when creating standby databases. This morning a colleague, while duplicating a database from a production backup onto a non-production cluster, asked why the files in ASM were being created in a folder with the old database name rather than the new one. I reviewed the setup and found that the DB_Unique_Name parameter had been left unchanged, which caused the issue. This leads me to believe that DB_Unique_Name is used when creating the OMF file structure. This is not documented clearly in the Oracle documentation, but I found this nice article, which explains the difference between DB_Name and DB_Unique_Name: http://ora12c.blogspot.com/2012/08/difference-between-dbname-dbuniquename.html
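A quick way to catch this before the duplicate is to check and, if necessary, change the parameter on the auxiliary instance. The value newpoc below is a placeholder:

```sql
-- Show the value inherited from the source spfile
SHOW PARAMETER db_unique_name;

-- DB_UNIQUE_NAME is not dynamically modifiable;
-- set it in the spfile and restart the instance
ALTER SYSTEM SET db_unique_name='newpoc' SCOPE=SPFILE;
```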

Tuesday, March 11, 2014

We followed the steps I described in an earlier post, with some minor tweaks. The tweaks were mainly due to the fact that the earlier setup had some database instance names in uppercase and some in lowercase; to bring consistency we changed all instance names to lowercase. Similarly, the archive log destination and the RMAN backup directory were not consistent, so for easier administration we changed all the directory names to lowercase as well.

1. Migration preparation:

Identify the databases being migrated.

Collect the database and related database service configuration.

Prepare commands to recreate the services and to add the databases to the OCR on the target cluster.
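Collecting the existing configuration can be done with commands like these on the source cluster (the database name poc matches the example used below):

```shell
# Capture the current OCR registration and service definitions
# so they can be recreated on the target cluster
srvctl config database -d poc
srvctl config service -d poc
```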

2. Create a database: Here I used dbca to create the database quickly, with all database files (redo logs, control files, data files, temp files, etc.) on the SAN_POC disk group. I also created a new user and a table with a couple of rows in it; this helps validate the end-user objects after the migration. To keep this post concise, I am omitting the details of these trivial tasks. For the new database, I created a new service using the following commands:

old01 $ srvctl add service -d poc -s poc_svc -r poc1 -a poc2 -P PRECONNECT -y AUTOMATIC -q TRUE -e select -m basic -z 10 -w 5
old01 $ srvctl status service -d poc
Service poc_svc is not running.
old01 $ srvctl start service -d poc -s poc_svc
old01 $ srvctl status service -d poc

Service poc_svc is running on instance(s) poc1

3. As prep work, we can copy the database initialization file and password file from the ${ORACLE_HOME}/dbs directory to the new cluster. We also need to create the audit dump destination, archive destination, and any other OS directory structure used by the database.

4. The next step is to shut down the database and unmount the disk group on all nodes.

srvctl stop database -d poc

ASMCMD [+] > umount san_poc

5. We worked with the system admin to detach the LUNs from the existing cluster and present them to the new cluster. In this process we identified identical ASM LUN names on both clusters. To resolve this, we renamed a LUN using ASMLib on one of the existing cluster nodes; on the other node we ran the "oracleasm scandisks" command. DO NOT use the "oracleasm deletedisk" command. The system admin then released the LUNs from the OS as well as through the storage admin console.

The LUNs released from the existing cluster were presented to the new cluster. On both new cluster nodes, execute "oracleasm scandisks".
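The rename and rescan described above look roughly like this; the device path /dev/sdf1 and the labels are hypothetical, and the commands must run as root:

```shell
# On the node where the clashing disk is visible: relabel it under ASMLib
/usr/sbin/oracleasm renamedisk -f /dev/sdf1 ASM_0005_OLD

# On every other node: rescan so the new label is picked up
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks   # verify the label change
```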

Now came the exciting phase...

6. I logged in to a new cluster node. To validate that the disk was intact, I ran the kfed command:

$ kfed dev=/dev/oracleasm/disks/ASM_0005 op=read cnt=1 blknum=0

kfbh.endian: 1 ; 0x000: 0x01

kfbh.hard: 130 ; 0x001: 0x82

kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD

kfbh.datfmt: 1 ; 0x003: 0x01

kfbh.block.blk: 0 ; 0x004: blk=0

kfbh.block.obj: 2147483648 ; 0x008: disk=0

kfbh.check: 2930213453 ; 0x00c: 0xaea7824d

kfbh.fcn.base: 0 ; 0x010: 0x00000000

kfbh.fcn.wrap: 0 ; 0x014: 0x00000000

kfbh.spare1: 0 ; 0x018: 0x00000000

kfbh.spare2: 0 ; 0x01c: 0x00000000

kfdhdb.driver.provstr: ORCLDISKASM_0005 ; 0x000: length=16

kfdhdb.driver.reserved[0]: 1598903105 ; 0x008: 0x5f4d5341

kfdhdb.driver.reserved[1]: 892350512 ; 0x00c: 0x35303030

kfdhdb.driver.reserved[2]: 0 ; 0x010: 0x00000000

kfdhdb.driver.reserved[3]: 0 ; 0x014: 0x00000000

kfdhdb.driver.reserved[4]: 0 ; 0x018: 0x00000000

kfdhdb.driver.reserved[5]: 0 ; 0x01c: 0x00000000

kfdhdb.compat: 168820736 ; 0x020: 0x0a100000

kfdhdb.dsknum: 0 ; 0x024: 0x0000

kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL

kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER

kfdhdb.dskname: SAN_POC_0000 ; 0x028: length=12

kfdhdb.grpname: SAN_POC ; 0x048: length=7

kfdhdb.fgname: SAN_POC_0000 ; 0x068: length=12

Held my breath.... and attempted to mount the disk group:

ASMCMD [+] > mount san_poc

And it was successful. I repeated the same step on the other node.

7. Now it was time to make sure we could start the database (POC) that resided on this disk group. Before starting the database, I added it to the OCR by issuing the following commands:
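The registration can be sketched as follows; the ORACLE_HOME path, spfile location, and node names are assumptions for illustration:

```shell
# Register the database and its instances in the OCR on the new cluster
srvctl add database -d poc -o /u01/app/oracle/product/11.2.0/dbhome_1 \
    -p +SAN_POC/poc/spfilepoc.ora
srvctl add instance -d poc -i poc1 -n new01
srvctl add instance -d poc -i poc2 -n new02

# Start and verify
srvctl start database -d poc
srvctl status database -d poc
```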