Wednesday, January 29, 2014

Oracle PSU patching - Oracle 11gR2 GRID environment - 3 node RAC

Overview:

1.Download the PSU patch and the interim (one-off) patch
2.Move them to the server
3.Check for conflicting patches
4.Bring down the instance on each node to roll back the conflicting patch
5.Roll back the conflicting patch
6.Apply the PSU patch on each node
7.On the last node, load the SQL (catbundle.sql)
8.Re-apply the rolled-back interim patch
--------------------------------------------------------------------------------------
Pre-implementation tasks

1.We need root privileges to install the PSU patch, because it needs to be installed from the Grid home
2./u01 should have a minimum of 6.5G free space available to apply the PSU patch
3.Take a count and list of invalid objects (sample queries are shown below, after item 6)
4.Take the output of dba_registry and dba_registry_history
5.Ensure the database backup has completed successfully
6.Take a backup of the Oracle home and the Grid home
cd /u01/app/oracle/product
find . -print|cpio -pdmv /u02/PSU_OCT2013/
cd /u01/app/11.2.0
find . -print|cpio -pdmv /u02/PSU_OCT2013/
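
Sample queries for items 3 and 4; the spool file name is only an example:
sqlplus / as sysdba
SQL> spool /u02/PSU_OCT2013/pre_patch_checks.log
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';
SQL> SELECT comp_id, version, status FROM dba_registry;
SQL> SELECT action_time, action, version, comments FROM dba_registry_history;
SQL> spool off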

7.Check the status of all CRS services on all nodes
$GRID_HOME/bin/crsctl check cluster -all

8. Set the environment variables.
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$PATH:$GRID_HOME/OPatch:$ORACLE_HOME/OPatch

9.Check the status of cluster resources on all nodes
crsctl stat res -t

10.Check that the OPatch utility is version 11.2.0.3.4 or later, which is required to apply this patch.
$GRID_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version

####OPatch utility upgrade

(1) Take a backup of $ORACLE_HOME/OPatch into a dedicated backup location.
We skipped this step because the Oracle home backup was already taken (pre-implementation task 6).

(2) Remove the contents of the $ORACLE_HOME/OPatch directory (do not remove the $ORACLE_HOME/OPatch directory itself).

(3) Unzip the downloaded OPatch zip into the $ORACLE_HOME directory. The zip already contains a top-level OPatch directory, so extract it into the home itself, not into $ORACLE_HOME/OPatch:
 unzip -o -d /u01/app/oracle/product/11.2.0/db_1 p6880880_112000_Linux-x86-64.zip

To check the version of the OPatch utility installed in the above step,
go to the $ORACLE_HOME/OPatch directory and run "opatch version".
$GRID_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version

11.Check whether the emctl (dbconsole) process is running.
If it is running, stop it before applying the PSU patch.
emctl stop dbconsole
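
One way to check whether it is running (assuming dbconsole was configured in this home):
emctl status dbconsole
ps -ef | grep dbconsole | grep -v grep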

12.Validate the Oracle inventory
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME
$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME
Save the output so you have the inventory status prior to the patch apply.
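
For example, the output can be redirected to files under the staging area (the file names are just examples):
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME > /u02/PSU_OCT2013/lsinv_db_before.log
$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME > /u02/PSU_OCT2013/lsinv_grid_before.log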

13.Download and unzip the patch
The staging directory should have read permission for the ORA_INSTALL group.
cd /u02/PSU_OCT2013/PATCH/
Unzip the patch as the Grid home owner.
$ unzip p17272731_112030_.zip
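
If the group permissions need fixing, something like the below works (assuming oinstall is the install group; adjust to your environment):
chgrp -R oinstall /u02/PSU_OCT2013/PATCH
chmod -R g+rX /u02/PSU_OCT2013/PATCH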

14. One-off patch conflict detection and resolution.
Go to the directory where the patch was unzipped, then go into each sub-patch directory and execute the check below against both the Oracle home and the Grid home to look for conflicts.
For ex:
cd /u02/PSU_OCT2013/PATCH/
cd 17272731
opatch prereq CheckConflictAgainstOHWithDetail -ph ./ -invPtrLoc $ORACLE_HOME/oraInst.loc
/u01/app/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc
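
Since the unzipped PSU contains more than one sub-patch directory, the same check can be run in a loop. A rough sketch for the Oracle home (repeat with the Grid home opatch and its oraInst.loc; directory names depend on the patch):
cd /u02/PSU_OCT2013/PATCH/17272731
for d in */ ; do
  ( cd "$d" && $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./ -invPtrLoc $ORACLE_HOME/oraInst.loc )
done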

Patch Implementation steps:

1. In case rollback of a conflicting patch is required, perform the below rollback steps:
Bring down the instance on each node and run the rollback command.
If the patch to be rolled back is in the GRID_HOME, run the command from GRID_HOME; if it is a database patch, run it from ORACLE_HOME.
In our case it is the Oracle home.
opatch rollback -id 13775960 -local -oh /u01/app/oracle/product/11.2.0/db_1 -invPtrLoc /u01/app/oracle/product/11.2.0/db_1/oraInst.loc
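
As a quick sanity check, you can first confirm the patch is actually present in the home:
$ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | grep 13775960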

2. Create an OCM response file ($ORACLE_HOME/OPatch/ocm/bin/emocmrsp).
It has been made mandatory.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /u02/PSU_OCT2013/unconfig.rsp
* creates the response file in the location specified by the "-output" parameter
* running without "-output /file.rsp" creates the file in the current directory with a default name

3.Patch installation
Important note:
The Oracle README mentions that to patch the GRID_HOME, opatch needs to run as the root user.
The opatch auto command will only work as the root user, and as root no other opatch options will work.

Syntax: opatch auto <patch base directory> -ocmrf <ocm response file>
For ex:
opatch auto /u02/PSU_OCT2013/PATCH -ocmrf /u02/PSU_OCT2013/unconfig.rsp

Generally the above command should work if all the prerequisite steps have been performed properly,
such as stopping dbconsole (emctl stop dbconsole) and having 6.5G of free space in the /u01 filesystem.
If the above fails, check the log files and take action based on the errors.
Log file locations:
/u01/app/11.2.0/grid/cfgtoollogs/opatch
/u01/app/11.2.0/grid/cfgtoollogs
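
A quick way to find the most recent log after a failure:
ls -ltr /u01/app/11.2.0/grid/cfgtoollogs/opatch | tail -5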
Below are the commands we used to test and apply the patch manually when the auto option failed.
opatch napply /u02/PSU_OCT2013/PATCH/17076717 -local -ocmrf /u02/PSU_OCT2013/unconfig.rsp -oh /u01/app/11.2.0/grid -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc
The option below was also tried; we got an error stating that a few Oracle binaries were still active and being accessed.
Later we found out that the emctl process was running. We brought it down and continued with the normal installation as mentioned in step 3 with the auto option.
opatch napply ./ -oh /u01/app/oracle/product/11.2.0/db_1 -invPtrLoc /u01/app/oracle/product/11.2.0/db_1/oraInst.loc -local

4. If the above opatch auto install completes successfully, then follow the same steps on all other nodes.

5. On the final node, run the below to load the SQL (catbundle.sql).
>>>>>>>Loading Modified SQL Files into the Database
The following steps load the modified SQL files into the database.
For an Oracle RAC environment, perform these steps on only one node.
For each database instance running on the Oracle home being patched,
connect to the database using SQL*Plus.
Connect as SYSDBA and run the catbundle.sql script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @catbundle.sql psu apply
SQL> QUIT
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.
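
For example, the new row can be checked with the query below (the exact columns may vary slightly by version):
SQL> SELECT action_time, action, version, id, bundle_series, comments FROM dba_registry_history ORDER BY action_time;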

Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:
catbundle_PSU_<database SID>_APPLY_<TIMESTAMP>.log
catbundle_PSU_<database SID>_GENERATE_<TIMESTAMP>.log
where <TIMESTAMP> is of the form YYYYMMMDD_HH_MM_SS.
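
A quick scan of those logs for errors (the actual file names include the SID and timestamp):
grep -iE 'ora-|error' $ORACLE_BASE/cfgtoollogs/catbundle/catbundle_PSU_*.log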

>>>>>>>>>>Upgrade Oracle Recovery Manager Catalog
If you are using an Oracle Recovery Manager catalog,
the catalog needs to be upgraded. Enter the following command to upgrade it:
$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;
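For example, assuming a hypothetical catalog owner rco and TNS alias rcatdb:
$ rman catalog rco/rco_pwd@rcatdb
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;
RMAN> EXIT
(RMAN asks for the UPGRADE CATALOG command a second time to confirm the upgrade.)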

6. If the conflicting patch was removed, then as per Oracle's advice we need to download and install the latest version of that one-off patch.
Before applying the patch, bring down the instance using srvctl
srvctl stop instance -d DBRAC -i DBRAC1

Go to the interim patch location,
for ex: cd /u02/patch/13775960
Run opatch from the Oracle home
as the oracle user

opatch apply -local
then start the instance
srvctl start instance -d DBRAC -i DBRAC1
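
To confirm the interim patch is back in the inventory and the instance is up (a quick check):
$ORACLE_HOME/OPatch/opatch lsinventory | grep 13775960
srvctl status instance -d DBRAC -i DBRAC1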

--------------------------------------------------------------------------
Few commands to keep handy when patching a Grid environment.
###check status of all services in 11gr2 rac
1.check the status of CRS on specific node:
crsctl check crs
2. check the status of services on all nodes
srvctl status nodeapps
3.check the status of complete clusterware stack on all nodes
crsctl status resource -t
4.check the status of clusterware servers
crsctl status server -f
5.to check the status of a specific instance across cluster nodes
srvctl status instance -d DBRAC -i DBRAC2
6.to check the status of all instances across the cluster nodes
srvctl status instance -d  DBRAC -i DBRAC1,DBRAC2,DBRAC3

7.check the status of database across cluster nodes
srvctl status database -d DBRAC
==============================================
##shutdown/startup commands
1.to stop CRS on a specific node
Run this command as the root user only.
./crsctl stop crs
2.to start CRS on a specific node:
./crsctl start crs
3.to stop specific Instance across cluster nodes
srvctl stop instance -d DBRAC -i DBRAC1
4.to start specific Instance across cluster nodes
srvctl start instance -d DBRAC -i DBRAC3
========================================

