Sunday, March 30, 2014

How to detect block corruption using DBVERIFY utility

The primary tool for checking for corruption in an Oracle database is DBVERIFY. It performs a physical data structure integrity check on data files, whether the database is online or offline. A big benefit is that DBVERIFY can check backup data files without adding load to the database server. You invoke DBVERIFY from the operating system command line like this:
$ dbv file=data01.dbf logfile=verify.log blocksize=8192 feedback=100
In this example, data01.dbf is the data file to check, and the tablespace this file belongs to has a block size of 8192 bytes. The feedback parameter tells DBVERIFY to print a period on the screen after every 100 pages (blocks) of the file are verified.
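If you’re not sure of the block size, you can look it up in the data dictionary first; for example, assuming the file belongs to a tablespace named USERS (the tablespace name here is just an illustration):
SQL> SELECT block_size FROM dba_tablespaces WHERE tablespace_name = 'USERS';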
In the log file you’ll see output like this:
DBVERIFY - Verification starting : FILE = data01.dbf
DBVERIFY - Verification complete
Total Pages Examined : 640
Total Pages Processed (Data) : 631
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index): 0
Total Pages Processed (Other): 9
Total Pages Empty : 0
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
The Total Pages Failing values show the number of blocks that failed either the data block or index block checking routine. The Total Pages Marked Corrupt figure shows the number of blocks whose cache header is invalid, making it impossible for DBVERIFY to identify the block type. The Total Pages Influx figure is the number of blocks for which DBVERIFY could not get a consistent image. (This can happen if the database is open while DBVERIFY runs; DBVERIFY reads a block multiple times to try to get a consistent image, but it cannot do so for a page that is continually changing.)
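If you want to verify only a portion of a data file, you can specify a starting and ending block with DBVERIFY’s start and end parameters. A quick sketch, assuming you want to check blocks 1 through 1000 of data01.dbf:
$ dbv file=data01.dbf start=1 end=1000 blocksize=8192 logfile=partial.log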
If you want to verify the entire database, you can generate a short shell script to run DBVERIFY on every data file in the database. You can do this easily using SQL*Plus:
SQL> SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
SQL> SPOOL dbv_on_all_files.sh
SQL> SELECT 'dbv file=' || file_name ||
2 ' logfile=file' || ROWNUM ||
3 '.log blocksize=8192'
4 FROM dba_data_files;
SQL> SPOOL OFF
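Depending on your SQL*Plus settings, the spooled file may still contain the statement text and the SPOOL OFF line; after trimming those few lines, make the script executable and run it:
$ chmod +x dbv_on_all_files.sh
$ ./dbv_on_all_files.sh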
After running the shell script you can quickly scan all of the DBVERIFY log files with Unix commands like:
$ grep Failing file*.log
$ grep Corrupt file*.log
$ grep Influx file*.log
You can also use DBVERIFY to validate a single data or index segment. To do this you must be logged onto the database with SYSDBA privileges. During the verification the segment is locked; if the segment is an index then the parent table is also locked.
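As a sketch of segment-level verification: the segment_id parameter takes the form tsn.file.block (tablespace number, segment header file, segment header block), which you can derive from the data dictionary. The segment name, credentials, and numbers below are illustrative only:
SQL> SELECT t.ts# || '.' || s.header_file || '.' || s.header_block
2 FROM dba_segments s, v$tablespace t
3 WHERE s.tablespace_name = t.name
4 AND s.segment_name = 'EMP';
$ dbv userid=system/manager segment_id=4.5.1282 logfile=seg_verify.log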
There are other ways to check for database corruption besides DBVERIFY. You can take a full database export, with the dump file optionally specified as a null device. This will read every row in every user table in the database, discovering any corrupted data blocks along the way. However, this technique does not access every index entry or the entire data dictionary.
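A minimal sketch using the legacy exp utility, with the dump file pointed at the null device so nothing is written to disk (the credentials are placeholders):
$ exp system/manager full=y file=/dev/null log=exp_check.log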
If you want to check one table and all of its indexes, you can use the ANALYZE statement to read every row of the table, read every entry in each of the table’s indexes, and make sure the table and index data are consistent with each other:
SQL> ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;
This will lock the table and prevent DML against it unless you specify the ONLINE keyword. Online validation reduces the amount of checking performed in order to allow concurrent activity.
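For example, to validate the same table without blocking DML, at the cost of the reduced checking just described:
SQL> ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE ONLINE;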
There are several ways to check for corruption in an Oracle database, but DBVERIFY is the most versatile. It does not limit concurrency or DML while it runs, and it can be run against a database backup. Just remember that if DBVERIFY detects corruption and you plan to recover the corrupt file from a backup, run DBVERIFY against the backup file before beginning the recovery. This will tell you whether the corruption exists in the backup as well.
