Saturday, June 21, 2014

Oracle RAC - HOW DOES 11G R2 CLUSTERWARE START ASM WHEN ASM SPFILE IS STORED ON ASM ITSELF?


Beginning with version 11g Release 2, the ASM spfile is stored automatically in the first disk group created during Grid Infrastructure installation.
Since the voting disks and OCR are stored on ASM, ASM needs to be started on the node. To start up ASM, its spfile is needed, but the spfile itself is located in an ASM disk group. How does Clusterware resolve this chicken-and-egg problem?
- When a node of an Oracle Clusterware cluster restarts, OHASD is started by platform-specific means. OHASD accesses the OLR (Oracle Local Registry), stored on the local file system, to get the data needed to complete its initialization.
- OHASD brings up GPNPD and CSSD. CSSD accesses the GPnP profile stored on the local file system, which contains the following vital bootstrap data:
a. ASM_DISKSTRING parameter (if specified), used to locate the disks on which ASM is configured
b. ASM SPFILE location: the name of the disk group containing the ASM spfile
c. Location of the voting files: ASM
- CSSD scans the headers of all ASM disks (as indicated by ASM_DISKSTRING in the GPnP profile) to identify the disks containing the voting files. Using the pointers in the ASM disk headers, CSSD accesses the voting files directly on the ASM disks and is able to complete initialization and start or join an existing cluster.
- To read the ASM spfile during ASM instance startup, it is not necessary to open the disk group: all the information needed to access the data is stored in the device's header. OHASD reads the header of the ASM disk containing the ASM spfile (as read from the GPnP profile) and, using the pointers in the disk header, reads the contents of the spfile. The ASM instance is then started.
- With an ASM instance operating and its disk groups mounted, access to Clusterware's OCR is available to CRSD.
- OHASD starts CRSD with access to the OCR in an ASM disk group.
- Clusterware completes initialization and brings up the other services under its control.
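As a quick cross-check of this startup chain, the lower-stack resources that OHASD manages on a node (ora.cssd, ora.asm, ora.crsd, and so on) can be listed with the command below; the Grid home path is the one used later in this demonstration:
[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t -init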
Demonstration:
In my environment, the ASM disk group DATA, created with EXTERNAL redundancy, is used exclusively for the ASM spfile, voting files and OCR:
- Let us read the GPnP profile to find out the location of the ASM spfile:
[grid@host01 peer]$ cd /u01/app/11.2.0/grid/gpnp/host01/profiles/peer
[grid@host01 peer]$ gpnptool getpval -asm_spf
Warning: some command line parameters were defaulted. Resulting command line:
         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_spf -p=profile.xml -o-
+DATA/cluster01/asmparameterfile/registry.253.793721441
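The full profile.xml in the same directory also records the ASM discovery string and the fact that the voting files live in ASM. On my system the relevant elements look roughly like the lines below (attribute names and values are from an 11.2 test installation and may differ slightly in other environments):
<orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
<orcl:ASM-Profile id="asm" DiscoveryString="" SPFile="+DATA/cluster01/asmparameterfile/registry.253.793721441"/>
Once the ASM instance is up, the same spfile location can also be confirmed with asmcmd spget.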
- Let us find out the disks in the DATA disk group:
[grid@host01 peer]$ asmcmd lsdsk -G DATA
Path
ORCL:ASMDISK01
ORCL:ASMDISK010
ORCL:ASMDISK02
ORCL:ASMDISK03
ORCL:ASMDISK04
ORCL:ASMDISK09
- Let us find out which ASM disk maps to which partition.
   Note down the major/minor device numbers of the disks in the DATA disk group:
[root@host01 ~]# ls -lr /dev/oracleasm/disks/*
brw-rw---- 1 grid asmadmin 8, 26 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK09
brw-rw---- 1 grid asmadmin 8, 25 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK08
brw-rw---- 1 grid asmadmin 8, 24 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK07
brw-rw---- 1 grid asmadmin 8, 23 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK06
brw-rw---- 1 grid asmadmin 8, 22 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK05
brw-rw---- 1 grid asmadmin 8, 21 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK04
brw-rw---- 1 grid asmadmin 8, 19 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK03
brw-rw---- 1 grid asmadmin 8, 18 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK02
brw-rw---- 1 grid asmadmin 8, 31 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK014
brw-rw---- 1 grid asmadmin 8, 30 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK013
brw-rw---- 1 grid asmadmin 8, 29 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK012
brw-rw---- 1 grid asmadmin 8, 28 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK011
brw-rw---- 1 grid asmadmin 8, 27 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK010
brw-rw---- 1 grid asmadmin 8, 17 Nov  8 09:35 /dev/oracleasm/disks/ASMDISK01
- Let us find out the major/minor device numbers of various disk partitions
[root@host01 ~]# ls -lr /dev/sdb*
brw-r----- 1 root disk 8, 25 Nov  8 09:35 /dev/sdb9
brw-r----- 1 root disk 8, 24 Nov  8 09:35 /dev/sdb8
brw-r----- 1 root disk 8, 23 Nov  8 09:35 /dev/sdb7
brw-r----- 1 root disk 8, 22 Nov  8 09:35 /dev/sdb6
brw-r----- 1 root disk 8, 21 Nov  8 09:35 /dev/sdb5
brw-r----- 1 root disk 8, 20 Nov  8 09:35 /dev/sdb4
brw-r----- 1 root disk 8, 19 Nov  8 09:35 /dev/sdb3
brw-r----- 1 root disk 8, 18 Nov  8 09:35 /dev/sdb2
brw-r----- 1 root disk 8, 31 Nov  8 09:35 /dev/sdb15
brw-r----- 1 root disk 8, 30 Nov  8 09:35 /dev/sdb14
brw-r----- 1 root disk 8, 29 Nov  8 09:35 /dev/sdb13
brw-r----- 1 root disk 8, 28 Nov  8 09:35 /dev/sdb12
brw-r----- 1 root disk 8, 27 Nov  8 09:35 /dev/sdb11
brw-r----- 1 root disk 8, 26 Nov  8 09:35 /dev/sdb10
brw-r----- 1 root disk 8, 17 Nov  8 09:35 /dev/sdb1
brw-r----- 1 root disk 8, 16 Nov  8 09:35 /dev/sdb
- Now we can find the partitions mapping to the ASM disks in DATA by matching their
   major/minor device numbers:
 ASMDISK01     8,17     /dev/sdb1
 ASMDISK02     8,18     /dev/sdb2
 ASMDISK03     8,19     /dev/sdb3
 ASMDISK04     8,21     /dev/sdb5
 ASMDISK09     8,26     /dev/sdb10
 ASMDISK010    8,27     /dev/sdb11
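Matching major/minor numbers by eye is error-prone; a small loop (just a sketch, assuming coreutils stat and the device paths shown above) can print the mapping automatically:
for d in /dev/oracleasm/disks/*; do
  mm=$(stat -c '%t:%T' "$d")          # major:minor of the ASM disk, in hex
  for p in /dev/sdb*; do
    [ "$(stat -c '%t:%T' "$p")" = "$mm" ] && echo "$(basename "$d") -> $p"
  done
done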
- Let us scan the headers of those devices:
[root@host01 ~]# kfed read /dev/sdb1 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[root@host01 ~]# kfed read /dev/sdb2 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[root@host01 ~]# kfed read /dev/sdb3 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       16 ; 0x0f4: 0x00000010
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
[root@host01 ~]# kfed read /dev/sdb5 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[root@host01 ~]# kfed read /dev/sdb10 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[root@host01 ~]# kfed read /dev/sdb11 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
In the output above, we see that the device /dev/sdb3 contains a copy of the ASM spfile (kfdhdb.spfflg=1) and that the spfile starts at allocation unit 16 of that disk (kfdhdb.spfile=16).
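Rather than reading each header by hand, a short loop (again only a sketch, assuming kfed is in the PATH and the same six partitions) can locate the disk carrying the spfile copy:
for p in /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb5 /dev/sdb10 /dev/sdb11; do
  # kfdhdb.spfflg is 1 only on a disk holding a copy of the ASM spfile
  flg=$(kfed read "$p" | awk '/kfdhdb.spfflg/ {print $2}')
  [ "$flg" = "1" ] && echo "ASM spfile copy found on $p"
done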
Considering the allocation unit size (kfdhdb.ausize = 1048576 bytes, i.e. 1 MB), the spfile begins at byte offset 16 MB, so let's dump it from the device with dd (bs=1M with skip=16 skips the first 16 allocation units):
[root@host01 ~]#  dd if=/dev/sdb3 of=spfileASM_Copy2.ora skip=16  bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.170611 seconds, 6.1 MB/s
[root@host01 ~]# strings spfileASM_Copy2.ora
+ASM1.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM2.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM3.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM3.asm_diskgroups='FRA'#Manual Mount
+ASM2.asm_diskgroups='FRA'#Manual Mount
+ASM1.asm_diskgroups='FRA'#Manual Mount
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
a:O/
The same technique is used to access the Clusterware voting files, which are also stored in an ASM disk group. In this case, Clusterware does not need a running ASM instance to access the cluster voting files.
Let's check the location of the voting disk:
[grid@host01 peer]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   243ec3b2a3cf4fbbbfed6f20a1ef4319 (ORCL:ASMDISK01) [DATA]
Located 1 voting disk(s).
- Since the query above shows that the voting disk is stored on ASMDISK01, which maps to /dev/sdb1,
   we will scan the header of /dev/sdb1:
[root@host01 ~]# kfed read /dev/sdb1 | grep vf
kfdhdb.vfstart:                      96 ; 0x0ec: 0x00000060
kfdhdb.vfend:                       128 ; 0x0f0: 0x00000080
Here we can see that the voting file indeed resides on /dev/sdb1, between allocation units 96 (kfdhdb.vfstart) and 128 (kfdhdb.vfend).
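By analogy with the spfile dump earlier, the voting-file area could be copied out of the disk for inspection; the sketch below assumes the 1 MB allocation unit size read from this disk's header and treats vfend as the first allocation unit past the voting file:
dd if=/dev/sdb1 of=votefile_copy.dmp skip=96 bs=1M count=32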
Once the voting disk is accessible and ASM has been started using the spfile read as described above, the rest of the resources on the node can be started; the node's Oracle Local Registry (OLR) has already been read by OHASD at the start of the sequence.
