Quite a packed program, right? For the Linux part I was merely a spectator: I could only wait, watch the server being bounced, and verify that my Oracle environment (both the databases and the listener) came back up after each bounce, so the Oracle Restart environment was performing well. After the third reboot it was time for my own actions. Below you will find a case study that I originally wanted to start with because I thought it would save time; in the end I implemented Plan B (always good to have one available). Better buckle up and let's get started.

Summary:

Below you will find three scenarios you could follow. Scenario 1 would enable you to do the installations in parallel, even a week in advance, but it needs extra steps (relinking the software, enabling the Grid Infrastructure in a number of steps). In the end, after setting this up and testing it, I came to the conclusion that it does not save that much time to do the installs before the Linux upgrade. An average installation takes about 10-15 minutes, so I recommend first having the Linux upgrade in place and, once the databases and listeners come back online after the server boots, proceeding with the installations and patching. That scenario is less complicated and, I believe, less error-prone. So this is Scenario 2, and it is the one I will follow on the environments where I am asked to do this.

Addendum

Meanwhile I have implemented the plan and upgraded the databases (25 in total). I was not very pleased with DBUA (used in a script with the silent option) because I indeed had less control over the process. I witnessed two major setbacks during the upgrade with DBUA: 1) it bugged me that it added a local listener to the init.oras during the upgrade, which crashed the upgrade (at a restart of the database with that newly generated, altered init.ora the DB would of course not restart); 2) the Grid agent kept altering the Oracle Home of the databases (so it was pointing to the wrong environment). Together with a colleague we did save the day in the end, but only because we fell back to the manual upgrade method. I have listed the activities and will add them to the scenario: Real Life Implemented plan.

PS: at the customer's request, and because the following parameter changed its default behavior (it was FALSE; in 11.2.0.3 it became TRUE), I had to make sure it was set again in the spfile:

alter system set "_use_adaptive_log_file_sync"=FALSE scope=both;

Important add-on: in all scenarios, as a baseline, I ran utlu112i.sql on all databases in scope. The good news was that all installed components were valid! And I created a list of invalid objects per schema to compare against the situation after the upgrade (as proof that this DBA did not break the application).
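That per-schema baseline can be captured with a query such as the sketch below; the spool location and formatting are my own choice, not from the original run, and you run the exact same query again after the upgrade to compare:

```sql
-- Sketch: snapshot invalid objects per schema before the upgrade,
-- to diff against the same query run after the upgrade.
set lines 200 pages 1000
spool /tmp/invalid_objects_before.lst
select owner, object_type, count(*)
  from dba_objects
 where status = 'INVALID'
 group by owner, object_type
 order by owner, object_type;
spool off
```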

As always happy reading,

Mathijs

Scenario 1 Installing software only as a preparation:

When I started my preparations it seemed best to install both software parts (GI and Rdbms) as software only and perform the needed steps after that. In this case the following would have been performed:

Install 11.2.0.3 Rdbms as "software only"

Install 11.2.0.3 GI as "software only"

After the Linux upgrade, relink the software (described below)

Perform various steps to activate the 11.2.0.3 GI

Implement PSU October 2013

Upgrade the databases.

Scenario 1 After the Linux upgrade the software would have to be fully relinked again:

Stopping the databases under control in an easy way:
As preparation for relinking the software I performed the following steps:
srvctl status home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file.status
srvctl stop home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file.dmp
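After the relink, the state file written by the stop command lets you bring everything back in one go (same home and state file as above):

```shell
# restore all resources of this home that were running at stop time
srvctl start home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file.dmp
```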
In Order to relink the Rdbms software:
After shutting down the databases (see above):
Had the ORACLE_HOME set properly
$ORACLE_HOME/bin/relink all
Note: writing relink log to: /opt/oracle/product/112_ee_64/db/install/relink.log
In Order to relink the Oracle Restart software:
Prepare the Oracle Grid Infrastructure for a Standalone Server home for modification using the following procedure:

Log in as the Oracle Grid Infrastructure software owner user and change the directory to the path Grid_home/bin, where Grid_home is the path to the Oracle Grid Infrastructure home:

cd /opt/crs/product/112_ee_64/crs/bin

Shut down the Oracle Restart stack using the following command:

crsctl stop has -f
This will show:
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'MySrvr1hr'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'MySrvr1hr'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'MySrvr1hr' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'MySrvr1hr'
CRS-2677: Stop of 'ora.evmd' on 'MySrvr1hr' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'MySrvr1hr' has completed
CRS-4133: Oracle High Availability Services has been stopped.
oracle@MySrvr1hr:/opt/crs/product/112_ee_64/crs/bin [+ASM]#
Then:
Relink Oracle Grid Infrastructure for a Standalone Server using the following procedure:

Login as root

Log in as the Oracle Grid Infrastructure for a Standalone Server owner:

Scenario 2 Upgrading the existing environment after the Linux upgrade:

As I wrote, this is my preferred scenario. It involves fewer steps, and the runInstaller will enable you to upgrade the existing environment. Please be aware that the 11.2.0.3 installations are so-called out-of-place installations requiring new Oracle Homes. For that purpose I requested (and got) an extra 15 GB in /opt/oracle and the same in /opt/crs. I followed the steps below bullet by bullet.

DBUA messed up by adding a local listener to the init.ora, and the Grid agent kept altering the oratab. That is why I recommend against DBUA for bulk upgrades. I would script the upgrade using a fixed ORACLE_HOME (the new one) and a dedicated init.ora/spfile for the migration.
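Such a scripted manual upgrade could be driven by a small generator like the sketch below. The database names and the new home path are placeholders, not the ones from my environment, and it is a dry run: nothing is executed against Oracle, the commands are only written to a file so they can be reviewed first.

```shell
#!/bin/sh
# Sketch: build one reviewable upgrade script for all databases, each
# running against the same fixed new ORACLE_HOME instead of whatever
# DBUA decides to use. Commands are only written to $OUT, not executed.
NEW_HOME=/opt/oracle/product/11203_ee_64/db   # assumed new 11.2.0.3 home
OUT=/tmp/upgrade_all.sh

: > "$OUT"
for db in MYDB1 MYDB2 MYDB3; do               # placeholder SIDs
  cat >> "$OUT" <<EOF
# --- $db ---
export ORACLE_SID=$db
export ORACLE_HOME=$NEW_HOME
export PATH=\$ORACLE_HOME/bin:\$PATH
sqlplus / as sysdba <<SQL
spool /tmp/upgrade_$db.log
startup upgrade
@\$ORACLE_HOME/rdbms/admin/catupgrd.sql
SQL
EOF
done
echo "generated $(grep -c '^# ---' "$OUT") upgrade blocks in $OUT"
```

Reviewing the generated file before running it gives back exactly the control that the silent DBUA run took away.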

Steps for Manual Upgrade:

Preferred WAY !

Create a new spfile from a migration pfile

the migration pfile has a larger shared_pool_size

1) Start sqlplus and run catupgrd.sql script from the NEW $ORACLE_HOME/rdbms/admin

sqlplus "/ as sysdba"

spool /tmp/upgrade<DB>.log

startup upgrade

set echo on

@?/rdbms/admin/catupgrd.sql;

After catupgrd.sql finishes it will shut down the database

2) Check catupgrd.sql spool file for errors.

3) Restart the database in normal mode.

4) @$ORACLE_HOME/rdbms/admin/catuppst.sql;

Post steps for the migration

5) @$ORACLE_HOME/rdbms/admin/utlrp.sql;

alter system set "_use_adaptive_log_file_sync"=FALSE scope=both;

Requested by customer

set lines 2000

select instance_name from v$instance;

Check sanity of upgrade

select * from v$version;

Check sanity of upgrade

select COMP_NAME,VERSION,STATUS,MODIFIED from dba_registry order by 1;

Check sanity of the upgrade: all installed components should be valid!


4 thoughts on “Upgrade to 11.2.0.3 Grid Infra and Rdbms with Psu October 2013”

Hi,
My scenario is exactly the same as mentioned above. I upgraded Grid Infra standalone server from 11.2.0.2 to 11.2.0.4. We upgraded one dev server with the following steps, and during these steps I experienced the same issues.
Steps:
1) Installed and upgraded Grid Infra using ./runInstaller to 11.2.0.4 with the out-of-place method.
2) Ran rootupgrade.sh at the end.
3) Verified cluster resources using crsctl.

ISSUE: -> During this step we found that the upgrade process created spfileasm.ora containing a local_listener parameter which was not in our old ASM init.ora file. Hence ASM was not starting after the Grid Infra upgrade.
-> We removed that new parameter and made the same pfile for ASM that we had in the old Grid Infra home.
-> Now the Grid Infra upgrade completed and all cluster resources ran fine, including the listener, which ran with the new Grid home.

ISSUE: When we checked the status of the resources, the databases were showing as down in the crsctl status command, but manually the databases were coming up.

We were under the impression that the upgrade process would update the new Oracle home internally, but it didn't,
so we also modified the Oracle home to point to the new one using the crsctl modify command.
After this, the databases were coming up and showing ONLINE in the cluster status.

All looking good so far... but today when I query the database, "show parameter spfile" and "show parameter pfile" are showing empty.
It is obvious, because before the upgrade we need to copy the init.ora file from the old Oracle home to the new one. Even after the upgrade, when I created an spfile from the pfile, it was created on the local filesystem under $ORACLE_HOME/dbs, not in an ASM diskgroup.

I believe database must be using spfile and that must be present in ASM diskgroup.

1)How can I fix this issue ?
2) What are the other post upgrade things/points you had performed that i need to make sure after upgrading grid.infra with standalone along with database upgrade ?

Even Oracle doesn't have a best-practice document on this scenario, so I want to be sure of each post-upgrade step before we upgrade prod.

Hi John, when I read your comment two things come to mind:
1) Use srvctl config database -d to check the settings. For an 11.2 environment this should show you something similar to:
srvctl config database -d MYDB:

And I agree that best practices are hard to find. In your scenario I would add a step after the migration to use srvctl config for the migrated database. With the information you find there, I would use srvctl modify database -d <dbname> -o <new_home> and srvctl modify database -d <dbname> -p <spfile>.

And a general thought: indeed I would prefer the manual upgrade with an appropriately sized pfile. Migrate the database, check everything after the migration, start all instance(s) by hand, shut them down, modify the cluster (inform the cluster about the change), and start the environment with srvctl thereafter.
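For the spfile question, a sketch of moving the spfile back into an ASM diskgroup and telling Oracle Restart about it; the diskgroup name +DATA, the database name MYDB, and the home path are assumptions for illustration, not values from your environment:

```shell
# in sqlplus as sysdba: recreate the spfile inside the diskgroup
#   SQL> create spfile='+DATA/MYDB/spfileMYDB.ora' from pfile;
# optionally leave an initMYDB.ora in $ORACLE_HOME/dbs containing only:
#   SPFILE='+DATA/MYDB/spfileMYDB.ora'

# then register the new home and the spfile location with Oracle Restart
srvctl modify database -d MYDB -o /opt/oracle/product/11204_ee_64/db
srvctl modify database -d MYDB -p '+DATA/MYDB/spfileMYDB.ora'
srvctl config database -d MYDB   # verify both settings
```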

Thank you for the reply. I have made notes in my upgrade docs.
Have you tried to downgrade Grid Infra standalone?
I want to test a rollback plan, just in case for any reason I need to downgrade the whole upgrade (Grid Infra standalone & DB).
I have just one test server/environment where I did the upgrade, and it went fine. I have to test the downgrade on this environment.
I am thinking of implementing the following steps as a rollback plan:
1) Downgrade the DB using Oracle best practice
2) Downgrade Grid Infra standalone (no docs so far)

Once everything goes well, I will have to upgrade this environment to 11.2.0.4 again.