Oracle ASM Spfile Stored in an ASM Disk Group - How Does it Work?
I was recently asked an interesting question: "How is it possible to start the ASM instance if the spfile itself is stored in an ASM disk group? During ASM instance startup the disk groups themselves are closed, aren't they?"
Beginning with version 11g Release 2, the ASM spfile is automatically stored in the first disk group created during the Grid Infrastructure installation:
grid@iudb007:~/ [+ASM5] asmcmd spget
+GRID/ivorato01/asmparameterfile/registry.253.768409647
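As a cross-check, the same location can be queried from the ASM instance itself. A minimal sketch (the column layout is abbreviated here, and the value is simply the one spget returned above):
grid@iudb007:~/ [+ASM5] echo "show parameter spfile" | sqlplus -s / as sysasm
NAME     TYPE    VALUE
-------- ------- -------------------------------------------------------
spfile   string  +GRID/ivorato01/asmparameterfile/registry.253.768409647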
During startup, the Grid Plug and Play (GPnP) profile delivers the ASM discovery string, i.e. the path pattern under which the ASM disks can be found:
grid@iudb007:/u00/app/grid/product/11.2.0.3/gpnp/iudb007/profiles/peer/ [+ASM5] gpnptool getpval -asm_dis
Warning: some command line parameters were defaulted. Resulting command line:
/u00/app/grid/product/11.2.0.3/bin/gpnptool.bin getpval -asm_dis -p=profile.xml -o-
/dev/mapper/*p1
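Besides the discovery string, the GPnP profile also records the location of the ASM spfile itself, so the instance knows both where to look for disks and which file to look for. The whole profile can be dumped with gpnptool; the attribute name below (SPFile) is from my recollection of an 11.2 profile and may differ slightly in your release:
grid@iudb007:~/ [+ASM5] gpnptool get -o- 2>/dev/null | grep -o 'SPFile="[^"]*"'
SPFile="+GRID/ivorato01/asmparameterfile/registry.253.768409647"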
The discovery string is used to scan the device headers and find those that contain a copy of the ASM spfile (kfdhdb.spfflg=1). In my environment, the ASM disk group GRID, created with NORMAL redundancy, is used exclusively for the ASM spfile, the voting files and the OCR:
grid@iudb007:~/ [+ASM5] asmcmd lsdsk -G GRID
Path
/dev/mapper/grid01p1
/dev/mapper/grid02p1
/dev/mapper/grid03p1
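The candidate disks can also be listed without a running ASM instance using the kfod utility, which applies the discovery string directly. A hedged sketch (parameter names as I remember them; the exact columns printed vary by version):
grid@iudb007:~/ [+ASM5] kfod asm_diskstring='/dev/mapper/*p1' disks=all
This should print each matching device with its size and path, regardless of whether any disk group is mounted.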
Let's scan the headers of those three devices:
grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid01p1 | grep -E 'spf|ausize'
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile: 288 ; 0x0f4: 0x00000120
kfdhdb.spfflg: 1 ; 0x0f8: 0x00000001
grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid02p1 | grep -E 'spf|ausize'
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile: 59 ; 0x0f4: 0x0000003b
kfdhdb.spfflg: 1 ; 0x0f8: 0x00000001
grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid03p1 | grep -E 'spf|ausize'
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile: 0 ; 0x0f4: 0x00000000
kfdhdb.spfflg: 0 ; 0x0f8: 0x00000000
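This manual check mirrors what the ASM instance does at startup: walk the devices matched by the discovery string and test the header flag. A minimal shell sketch of that scan, assuming kfed is in the PATH and the devices are readable by the grid user:
# scan every device matched by the discovery string for a registered spfile copy
for d in /dev/mapper/*p1; do
    kfed read "$d" | awk -v dev="$d" '
        /kfdhdb.spfile:/ {au  = $2}
        /kfdhdb.spfflg:/ {flg = $2}
        END {if (flg == 1) printf "spfile copy on %s, starting at AU %d\n", dev, au}'
done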
In the output above, we see that the first two devices, /dev/mapper/grid01p1 and /dev/mapper/grid02p1, each contain a copy of the ASM spfile. On the first device /dev/mapper/grid01p1 the ASM spfile starts at an offset of 288 allocation units, on the second device /dev/mapper/grid02p1 at an offset of 59.
Considering the allocation unit size (kfdhdb.ausize = 1M), let's dump the ASM spfile from those devices:
grid@iudb007:~/ [+ASM5] dd if=/dev/mapper/grid01p1 of=spfileASM_Copy1.ora skip=288 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.115467 seconds, 9.1 MB/s
grid@iudb007:~/ [+ASM5] dd if=/dev/mapper/grid02p1 of=spfileASM_Copy2.ora skip=59 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.029051 seconds, 36.1 MB/s
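In other words, with bs=1M each dd block corresponds to exactly one allocation unit, so skip=288 starts reading at byte 288 × 1,048,576 = 301,989,888 of grid01p1 (and skip=59 at byte 61,865,984 of grid02p1), which is precisely where the disk headers say the spfile copies begin.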
The strings output, trimmed for readability:
grid@iudb007:~/ [+ASM5] strings spfileASM_Copy1.ora
...
+ASM8.asm_diskgroups='U1010','U1020',...
*.asm_diskstring='/dev/mapper/*p1'
*.asm_power_limit=10
*.db_cache_size=134217728
*.diagnostic_dest='/u00/app/oracle'
*.instance_type='asm'
*.large_pool_size=256M
*.memory_target=0
*.remote_listener='ivorato01:15300'
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1G
*.shared_pool_size=512M
The output from the second file spfileASM_Copy2.ora is of course the same.
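If you would rather verify that than take it on faith, the parameter text extracted from both dumps can be compared directly. A minimal sketch:
grid@iudb007:~/ [+ASM5] diff <(strings spfileASM_Copy1.ora) <(strings spfileASM_Copy2.ora)
No output means the visible parameter text is identical in both copies. Note that bytes beyond the spfile itself within the dumped allocation unit could in principle differ between disks, so a raw cmp of the full 1 MB files is not a reliable check.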
Conclusion:
To read the ASM spfile during ASM instance startup, it is not necessary to mount the disk group: all the information needed to locate the spfile is stored in the device headers. By the way, the same technique is used to access the Clusterware voting files, which are also stored in an ASM disk group. Clusterware does not need a running ASM instance to access the cluster voting files:
grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid03p1 | grep vf
kfdhdb.vfstart: 256 ; 0x0ec: 0x00000100   <- start offset of the voting file
kfdhdb.vfend: 288 ; 0x0f0: 0x00000120   <- end offset of the voting file
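Consistently with this, the voting file location can also be seen from the Clusterware side; crsctl query css votedisk lists each voting file together with the underlying device path and the disk group name (output omitted here):
grid@iudb007:~/ [+ASM5] crsctl query css votedisk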