lab-time: upgrading Grid Infrastructure (GI) from 12.1 to 18c – the final version

In an earlier blogpost, I was playing around in an unsupported way to upgrade my lab Grid Infrastructure from 12.1 to 18c. The problem then was that the 18c software was not officially available for on-premises installations. Now it is! During my holidays I had a little time to play around with it, and this is how I upgraded my cluster.

Reading the documentation, it seems very easy (and it really is): unzip the software and run gridSetup.sh. But I wouldn't write a blogpost if I hadn't encountered something, would I?

Installation

Software staging

Create the new directories


[root@labvmr01n01 ~]# mkdir -p /u01/app/18.0.0/grid
[root@labvmr01n01 ~]# chown -R grid:oinstall /u01/app/18.0.0
[root@labvmr01n01 ~]#

And this has to be done on all the nodes.
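Instead of logging in to every node, you can push the directory creation out from the first node; a minimal sketch, assuming root SSH equivalence between the nodes and using hypothetical names for the remaining nodes:

```shell
# Hypothetical remaining node names; adjust to your own cluster.
# Assumes passwordless root SSH between the nodes.
for node in labvmr01n02 labvmr01n03 labvmr01n04; do
  ssh "root@${node}" "mkdir -p /u01/app/18.0.0/grid && chown -R grid:oinstall /u01/app/18.0.0"
done
```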

Unzipping the software has to be done as the owner of the Grid Infrastructure, on the first node only:
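For reference, since 18c the Grid home is image-based and you unzip straight into the new home; the zip file name and staging path below are assumptions based on the 18c base release naming:

```shell
# Run as the grid user on the first node only.
# The zip name and /tmp location are assumptions; use your actual download.
cd /u01/app/18.0.0/grid
unzip -q /tmp/LINUX.X64_180000_grid_home.zip
```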

Remark: my active patch level is 3544584551. This is important, because the installer checks whether patch 21255373 is installed in your current home. It's a full rolling patch which is applied using opatchauto; I did not have any issues applying it in my environment, so I won't cover that here.
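You can check your current patch inventory up front with opatch from the existing home; the 12.1 home path below is an assumption, substitute your own:

```shell
# Run as the grid user; /u01/app/12.1.0/grid is an assumed 12.1 home path.
/u01/app/12.1.0/grid/OPatch/opatch lspatches -oh /u01/app/12.1.0/grid
```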

Setup

As written earlier, doing the upgrade is simple: unzip the software and run gridSetup.sh. When you have a response file, you can use it to do a silent upgrade; otherwise just use the GUI.
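If you do go the silent route, the invocation looks roughly like this; the response file path is hypothetical (for example one saved from an earlier interactive run):

```shell
# Run as the grid user from the new, unzipped 18c home.
# /tmp/gridsetup.rsp is a hypothetical response file.
cd /u01/app/18.0.0/grid
./gridSetup.sh -silent -responseFile /tmp/gridsetup.rsp
```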

I have an X server available and a stable network, so this time I did it interactively:

You need to verify that all your nodes are listed. My 4 nodes are correctly listed, and I have SSH key equivalence already set up from my previous installation.

I’m fine with the default Oracle Base and my Software Location (GI home) matches the directory in which the software has been unzipped previously.

Let’s be a little lazy and check if it works, so I entered my root password so Oracle can run the root.sh and other configuration scripts for me.

I actually like this option for bigger clusters. You can define batches in which the root and configuration scripts will be executed. As you will see later in the wizard, the OUI gives you the choice to execute the scripts now or at a later moment in time. I can think of some use cases in which this comes in handy, so let's try it: I created 2 batches.

you won’t escape though, the OUI will do some pre-checks as well

There we go. After unpacking the 18c software on my first node, it thinks /u01 is too small. It actually is not: this is my lab, I will remove the 12.1 software afterwards anyhow, and I have the necessary space available. So in this particular case it is safe to ignore. In a production environment I would not continue, but extend the /u01 filesystem instead.

Mandatory confirmation

and off we can go. I usually save my response file for later use.

And the installer takes off. Of course I forgot to take the initial screenshot, but the next one is interesting.

How polite. The installer asks permission before using the root password.

This executes the root scripts

and here it asks if you want to run the scripts on the second batch as well. I do, so “Execute now”.

And finally my upgrade succeeded! Yay!

Post tasks

First things first: verify that all went well


[root@labvmr01n01 ~]# which crsctl
/u01/app/18.0.0/grid/bin/crsctl
[root@labvmr01n01 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
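Besides the release version, it is worth checking the active and software versions too: the release version only tells you which software is installed, while the active version tells you what the cluster is actually running after the upgrade:

```shell
# Run as root or the grid user from the new 18c home.
crsctl query crs activeversion
crsctl query crs softwareversion
```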

In my case the ADVM volumes were not enabled by default. You can choose how to fix that: either click around in the brand new fancy asmca (in the settings box you can enter the root password, which makes life a little easier), or use the command line. It's up to you.

right click the GHCHKPT volume and click “enable on all nodes”

and in my case it threw me an error

“CRS-2501: Resource ‘ora.DATA.GHCHKPT.advm’ is disabled”

now what …

ok… back to the CLI then, because I didn’t find a quick way to do it in the asmca interface:

first check the volumes


[grid@labvmr01n01 ~]$ srvctl config volume
Diskgroup name: ACFS
Volume name: VOL_ACFS
Volume device: /dev/asm/vol_acfs-159
Volume is enabled.
Volume is individually enabled on nodes:
Volume is individually disabled on nodes:
Diskgroup name: DATA
Volume name: GHCHKPT
Volume device: /dev/asm/ghchkpt-436
Volume is disabled.
Volume is individually enabled on nodes:
Volume is individually disabled on nodes:
[grid@labvmr01n01 ~]$

Now we know the device name and we can enable it.


[grid@labvmr01n01 ~]$ srvctl enable volume -device /dev/asm/ghchkpt-436
[grid@labvmr01n01 ~]$

When we retry the same operation in the asmca GUI, it now succeeds.

Then you can ask the interface to show the acfs mount command:

and it tells you exactly what to do

So basically … you should mount the filesystem yourself. That’s ok for me.
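The mount itself is then a plain ACFS mount as root. A sketch, assuming a hypothetical mount point; use whatever path asmca showed you:

```shell
# Run as root on each node where the filesystem is needed.
# /mnt/oracle/rhpimages/chkbase is a hypothetical mount point.
mkdir -p /mnt/oracle/rhpimages/chkbase
mount -t acfs /dev/asm/ghchkpt-436 /mnt/oracle/rhpimages/chkbase
```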

Gotcha

The biggest gotcha during this upgrade was the ADVM/ACFS volume that wasn't enabled by default. Is this a problem? Not really. Just something to take into account and to check/verify, depending on whether you want the volume enabled or not.

Something else I noticed.

I did not document this here as such, but in order to perform the upgrade (coming from 12.1.0.2), you need 23.5 GB of usable free space in the diskgroup used for the cluster; in my case the +DATA diskgroup. To achieve this (on 12.1), I moved the GIMR (MGMTDB) out of ASM and put it into an ACFS filesystem: