Upgrade Grid Infrastructure 11g (11.2.0.3) to 12c (12.1.0.2)
I recently tested the upgrade to RAC Grid Infrastructure 12.1.0.2 in my test RAC environment running on Oracle VirtualBox (Linux 6.5 x86-64).
The upgrade went very smoothly, but a few things have to be taken into account, as some things have changed in 12.1.0.2 compared with Grid Infrastructure 12.1.0.1.
The most notable change regards the Grid Infrastructure Management Repository (GIMR).
In 12.1.0.1 we had the option of installing the GIMR database (MGMTDB). In 12.1.0.2 it is mandatory, and the MGMTDB database is automatically created as part of the upgrade or initial installation of 12.1.0.2 Grid Infrastructure.
The GIMR primarily stores historical Cluster Health Monitor metric data. It runs as a container database on a single node of the RAC cluster.
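Once the GIMR is in place, the Cluster Health Monitor data that it stores can be queried with the oclumon utility. A quick illustration (the five-minute window below is just an example):

[oracle@rac1 ~]$ oclumon manage -get repsize
[oracle@rac1 ~]$ oclumon dumpnodeview -allnodes -last "00:05:00"

The first command reports the retention size of the CHM repository and the second dumps the metrics collected from all nodes over the last five minutes.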
The problem I found is that the datafiles for the MGMTDB database are created in the same ASM disk group that holds the OCR and Voting Disk, and there is a prerequisite that at least 4 GB of free space is available in that disk group; otherwise error INS-43100 is returned, as shown in the figure below.
I had to cancel the upgrade process and add another disk to the +OCR ASM disk group to ensure that at least 4 GB of free space was available; after that the upgrade went through very smoothly.
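As a rough sketch (run with the environment set to the ASM instance, +ASM1 in my case; the candidate disk path is just an example from my setup), the free space can be checked with asmcmd and an additional disk added via SQL*Plus:

[oracle@rac1 ~]$ asmcmd lsdg OCR
[oracle@rac1 ~]$ sqlplus / as sysasm
SQL> alter diskgroup OCR add disk '/dev/oracleasm/disks/OCR2';

The Free_MB column reported by asmcmd lsdg should show at least 4 GB of free space once the rebalance completes.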
As this is an out-of-place upgrade, we create the directory structure for the 12.1.0.2 Grid Infrastructure home on both nodes of the RAC cluster.
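A minimal sketch, assuming oracle:oinstall as the Grid Infrastructure owner and /u02/app/12.1.0/grid as the new Grid home (the same path passed as -dest_crshome to runcluvfy.sh below); run as root on each node:

[root@rac1 ~]# mkdir -p /u02/app/12.1.0/grid
[root@rac1 ~]# chown -R oracle:oinstall /u02/app
[root@rac1 ~]# chmod -R 775 /u02/app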
It is also very important to check the health of the RAC cluster before the upgrade (via the crsctl check cluster -all command) and to run the runcluvfy.sh script to verify that all the prerequisites for the 12c GI upgrade are in place.
[oracle@rac1 bin]$ crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]
[oracle@rac1 bin]$ crsctl query crs softwareversion rac2
Oracle Clusterware version on node [rac2] is [11.2.0.3.0]
[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u02/app/12.1.0/grid -dest_version 12.1.0.2.0
[oracle@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
After the upgrade has completed on both nodes, the Clusterware release, software and active versions now report 12.1.0.2:

[oracle@rac1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.2.0]
[oracle@rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
The MGMTDB instance has been created automatically and is running on node rac1, alongside the ASM and database instances:

[oracle@rac1 ~]$ ps -ef | grep pmon
oracle 1278 1 0 14:53 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 16354 1 0 14:22 ? 00:00:00 asm_pmon_+ASM1
oracle 17217 1 0 14:23 ? 00:00:00 ora_pmon_orcl1
The CHM repository path and the MGMTDB configuration can be checked with oclumon and srvctl; note that the repository datafile is located in the +OCR disk group:

[root@rac1 bin]# ./oclumon manage -get reppath
CHM Repository Path = +OCR/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.873212089
[root@rac1 bin]# ./srvctl status mgmtdb -verbose
Database is enabled
Instance -MGMTDB is running on node rac1. Instance status: Open.
[root@rac1 bin]# ./srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: +OCR/_MGMTDB/PARAMETERFILE/spfile.268.873211787
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: rac_cluster
PDB service: rac_cluster
Cluster name: rac-cluster
Database instance: -MGMTDB