Thursday, February 19, 2015

OPatch "CheckRollbackable" On Auto-rollback Patches Failed

Prerequisite Check "CheckRollbackable" On Auto-rollback Patches Failed


    Today, I encountered the following OPatch issue while trying to roll back a patch.

    [oracle@qed1db04:DBAUPG2 DBAUPG]$ opatch rollback -id 12834800 -local
    Oracle Interim Patch Installer version 11.2.0.3.6
    Copyright (c) 2013, Oracle Corporation. All rights reserved.


    Oracle Home       : /u01/app/oracle/product/11.2.0.3/DBAUPG
    Central Inventory : /u01/app/oraInventory
       from           : /u01/app/oracle/product/11.2.0.3/DBAUPG/oraInst.loc
    OPatch version   : 11.2.0.3.6
    OUI version       : 11.2.0.3.0
    Log file location : /u01/app/oracle/product/11.2.0.3/DBAUPG/cfgtoollogs/opatch/12834800_May_13_2014_20_42_33/rollback2014-05-13_20-42-33PM_1.log

    RollbackSession rolling back interim patch '12834800' from OH '/u01/app/oracle/product/11.2.0.3/DBAUPG'
    Prerequisite check "CheckRollbackable" failed.
    The details are:

    Patch 12834800:
    Archive Action: Source file"/u01/app/oracle/product/11.2.0.3/DBAUPG/.patch_storage/12834800_Apr_24_2013_05_58_13/files/lib/libserver11.a/qksvc.o" does not exists.
    'oracle.rdbms, 11.2.0.3.0': Cannot update file '/u01/app/oracle/product/11.2.0.3/DBAUPG/lib/libserver11.a' with '/qksvc.o'

    [ Error during Prerequisite for rollback Phase]. Detail: RollbackSession failed during prerequisite checks: Prerequisite check "CheckRollbackable" failed.
    Log file location: /u01/app/oracle/product/11.2.0.3/DBAUPG/cfgtoollogs/opatch/12834800_May_13_2014_20_42_33/rollback2014-05-13_20-42-33PM_1.log

    Recommended actions: OPatch won't be able to roll back the given patch. A common reason is there are other patches that depend on this patch. You need to roll back those dependent patches before you can roll back this patch.

    OPatch failed with error code 43

    According to Prerequisite Check "CheckRollbackable" On Auto-rollback Patches Failed (Doc ID 1331900.1), this happens because $ORACLE_HOME/.patch_storage is corrupted and does not contain the files needed for the rollback.

    The suggested solution is to restore the complete .patch_storage directory from a backup of the ORACLE_HOME taken before the files went missing, and run the rollback again. In my case, another home (ORBQA) had the same patch applied, so I copied that patch's .patch_storage subdirectory from there instead:
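    Before retrying the rollback, it helps to confirm that the files OPatch complained about are actually back in place. A minimal sketch of that check; the patch directory and file list are illustrative:

```shell
# Verify that every saved file the rollback needs exists under the
# patch's .patch_storage directory; report anything that is missing.
check_patch_storage() {
  patch_dir="$1"; shift
  missing=0
  for f in "$@"; do
    if [ ! -f "$patch_dir/$f" ]; then
      echo "MISSING: $patch_dir/$f"
      missing=1
    fi
  done
  return $missing
}

# Example (paths taken from the failed session above):
# check_patch_storage "$ORACLE_HOME/.patch_storage/12834800_Apr_24_2013_05_58_13" \
#     files/lib/libserver11.a/qksvc.o
```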

    On ORBQA
    [oracle@qed1db03:ORBQA1 .patch_storage]$ cd $ORACLE_HOME/.patch_storage
    [oracle@qed1db03:ORBQA1 .patch_storage]$ cp -r 12834800_Apr_24_2013_05_58_13 /mnt/nfs/oracle.patches

    On DBAUPG
    [oracle@qed1db04:DBAUPG2 DBAUPG]$ cd $ORACLE_HOME/.patch_storage
    [oracle@qed1db04:DBAUPG2 .patch_storage]$ cp -r /mnt/nfs/oracle.patches/12834800_Apr_24_2013_05_58_13 .
    [oracle@qed1db04:DBAUPG2 .patch_storage]$ opatch rollback -id 12834800 -local
    Oracle Interim Patch Installer version 11.2.0.3.6
    Copyright (c) 2013, Oracle Corporation. All rights reserved.


    Oracle Home       : /u01/app/oracle/product/11.2.0.3/DBAUPG
    Central Inventory : /u01/app/oraInventory
       from           : /u01/app/oracle/product/11.2.0.3/DBAUPG/oraInst.loc
    OPatch version   : 11.2.0.3.6
    OUI version       : 11.2.0.3.0
    Log file location : /u01/app/oracle/product/11.2.0.3/DBAUPG/cfgtoollogs/opatch/12834800_May_13_2014_21_23_28/rollback2014-05-13_21-23-28PM_1.log

    RollbackSession rolling back interim patch '12834800' from OH '/u01/app/oracle/product/11.2.0.3/DBAUPG'

    Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
    (Oracle Home = '/u01/app/oracle/product/11.2.0.3/DBAUPG')


    Is the local system ready for patching? [y|n]
    y
    User Responded with: Y

    Patching component oracle.rdbms, 11.2.0.3.0...
    RollbackSession removing interim patch '12834800' from inventory
    Log file location: /u01/app/oracle/product/11.2.0.3/DBAUPG/cfgtoollogs/opatch/12834800_May_13_2014_21_23_28/rollback2014-05-13_21-23-28PM_1.log

    OPatch succeeded.

    Friday, February 6, 2015

    Restore database to new server using RMAN Backup


    Restore RMAN to New Host

    Steps On Source Database

     **Most of the work will be done on the target database.

     1- Get the DBID:

    SQL> select dbid from v$database;

    Or connect with rman target /; the banner shows the DBID.

    2- Get the last SCN on the database, since we will not have the active redo logs:

    SQL> select max(next_change#) from v$archived_log where archived = 'YES' group by thread#;

    Or using RMAN:

    RMAN> list backup of archivelog all;

    3- Take the backup using RMAN:

    run
    {
    allocate channel d1 type disk format 'YOUR-PATH/Database_%U';
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    allocate channel c3 type disk;
    backup database plus archivelog;
    backup current controlfile format 'YOUR_PATH/controlfile_%U';
    release channel d1;
    release channel c1;
    release channel c2;
    release channel c3;
    }

    Now move all the generated backup files to the new host, keeping the same directory path: for example, if you backed up to /u01/app on the source, the files should sit in /u01/app on the new host as well.
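    One way to do the copy, with an illustrative hostname (the directory must match the source so the backup records in the restored control file still point at valid files):

```shell
# Recreate the backup directory on the new host and copy the pieces over.
# "newhost" and /u01/app/backup are placeholders for your environment.
ssh oracle@newhost 'mkdir -p /u01/app/backup'
scp /u01/app/backup/* oracle@newhost:/u01/app/backup/
```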

    Steps On New Server 

    After copying the files, do the following steps:

     1- export ORACLE_SID=SID_SAME_AS_PROD

    $ rman target /

    RMAN> set dbid <DBID from step 1>

    RMAN> startup nomount;

    RMAN> restore spfile to pfile '/u01/app/oracle/oradata/orcl/initorcl.ora' from autobackup;

    RMAN> shutdown abort;
    Now you have to edit the restored PFILE; point at least these parameters at the new server's locations:

    *.audit_file_dest='LOCATION'
    *.control_files='LOCATION'
    *.db_recovery_file_dest_size=SIZE
    *.db_recovery_file_dest='LOCATION'
    *.diagnostic_dest='LOCATION'
    *.log_archive_dest_1='LOCATION=LOCATION'
    *.local_listener='LISTENER_ORCL'

    RMAN> startup nomount pfile='/u01/app/oracle/oradata/orcl/initorcl.ora'; 

    RMAN> restore controlfile from autobackup; 

    RMAN> alter database mount; 

    Note: now you have two options:

    1- Move the backup to the same location it had on the source server.
    2- Copy the backup to any location, but then you need to use the CATALOG command:

    RMAN> catalog start with '/path/where/you/copied/the/backup';

    You are almost done. Before restoring the database, you can relocate the datafiles to any location using SET NEWNAME (all of the steps below run inside RMAN), but remember: don't run the following commands yet:

    SET NEWNAME FOR DATAFILE 1 TO '/u01/app/oracle/oradata/orcl/system01.dbf'; 
    SET NEWNAME FOR DATAFILE 2 TO '/u01/app/oracle/oradata/orcl/sysaux01.dbf'; 
    SET NEWNAME FOR DATAFILE 3 TO '/u01/app/oracle/oradata/orcl/undotbs01.dbf'; 
    SET NEWNAME FOR DATAFILE 4 TO '/u01/app/oracle/oradata/orcl/users01.dbf';
     SET NEWNAME FOR DATAFILE 5 TO '/u01/app/oracle/oradata/orcl/example01.dbf'; 
    SET NEWNAME FOR DATAFILE 6 TO '/u01/app/oracle/oradata/orcl/test.dbf'; 
    SET NEWNAME FOR DATAFILE 7 TO '/u01/app/oracle/oradata/orcl/dbfs01.dbf'; 

    **Note: in the commands above, the path after TO is the new location on the target server. The online redo log files are relocated in the same way:


    SQL "ALTER DATABASE RENAME FILE ''/u03/app/oracle/oradata/orcl/redo/redo01.log'' TO ''/u01/app/oracle/oradata/orcl/redo/redo01.log'' "; 
    SQL "ALTER DATABASE RENAME FILE ''/u03/app/oracle/oradata/orcl/redo/redo02.log'' TO ''/u01/app/oracle/oradata/orcl/redo/redo02.log'' ";
     SQL "ALTER DATABASE RENAME FILE ''/u03/app/oracle/oradata/orcl/redo/redo03.log'' TO ''/u01/app/oracle/oradata/orcl/redo/redo03.log'' "; 

    Do you still have the SCN we captured earlier? :)
    Good. Your final RMAN script will look like this:

    run
    {
    SET NEWNAME FOR DATAFILE 1 TO '/u01/app/oracle/oradata/orcl/system01.dbf';
    SET NEWNAME FOR DATAFILE 2 TO '/u01/app/oracle/oradata/orcl/sysaux01.dbf';
    SET NEWNAME FOR DATAFILE 3 TO '/u01/app/oracle/oradata/orcl/undotbs01.dbf';
    SET NEWNAME FOR DATAFILE 4 TO '/u01/app/oracle/oradata/orcl/users01.dbf';
    SET NEWNAME FOR DATAFILE 5 TO '/u01/app/oracle/oradata/orcl/example01.dbf';
    SET NEWNAME FOR DATAFILE 6 TO '/u01/app/oracle/oradata/orcl/test.dbf';
    SET NEWNAME FOR DATAFILE 7 TO '/u01/app/oracle/oradata/orcl/dbfs01.dbf';
    SQL "ALTER DATABASE RENAME FILE ''/u03/app/oracle/oradata/orcl/redo/redo01.log'' TO ''/u01/app/oracle/oradata/orcl/redo/redo01.log'' ";
    SQL "ALTER DATABASE RENAME FILE ''/u03/app/oracle/oradata/orcl/redo/redo02.log'' TO ''/u01/app/oracle/oradata/orcl/redo/redo02.log'' ";
    SQL "ALTER DATABASE RENAME FILE ''/u03/app/oracle/oradata/orcl/redo/redo03.log'' TO ''/u01/app/oracle/oradata/orcl/redo/redo03.log'' ";
    SET UNTIL SCN <SCN from step 2>;
    RESTORE DATABASE;
    SWITCH DATAFILE ALL;
    RECOVER DATABASE;
    }

    $ sqlplus / as sysdba

    SQL> alter database open resetlogs;

    Restore RMAN backup to another server for testing disaster recovery procedures as well as for cloning


    This note explains the procedure used to restore an Oracle Suggested Strategy (OSS) incremental backup on another server.
    This is useful for periodic testing of disaster recovery procedures, where we simulate a scenario in which the complete database server has crashed and a new server has been provisioned.
    Let us assume that the directory structure on the new (target) server is different from that of the source server.
    In this case the backup has been restored to the staging location /home/oracle/stage on the new server, and all the database files will be restored to a different location than on the source: /home/oracle/sqlfun.

    Restore the SPFILE
    RMAN> startup nomount force;
    
    startup failed: ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/dbhome_2/dbs/initsqlfun.ora'
    
    starting Oracle instance without parameter file for retrieval of spfile
    Oracle instance started
    
    Total System Global Area     158662656 bytes
    
    Fixed Size                     2226456 bytes
    Variable Size                104859368 bytes
    Database Buffers              46137344 bytes
    Redo Buffers                   5439488 bytes
    
    RMAN> restore spfile from '/home/oracle/stage/o1_mf_s_819826669_8x7w7g70_.bkp';
    
    Starting restore at 04-JUL-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=162 device type=DISK
    
    channel ORA_DISK_1: restoring spfile from AUTOBACKUP /home/oracle/stage/o1_mf_s_819826669_8x7w7g70_.bkp
    channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
    Finished restore at 04-JUL-13
    Create the PFILE from SPFILE and make parameter changes as required
    $ sqlplus sys as sysdba
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Jul 4 09:43:48 2013
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Enter password:
    Connected to an idle instance.
    
    SQL> create pfile from spfile;
    
    File created.
    Note: make the required changes in the init.ora file here to account for the new directories for control files, audit dump, fast recovery area, etc.
    SQL> create spfile from pfile;
    
    File created.

    Restore the Control Files
    RMAN> restore controlfile from '/home/oracle/stage/o1_mf_s_819826669_8x7w7g70_.bkp';
    
    Starting restore at 04-JUL-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=134 device type=DISK
    
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    output file name=/home/oracle/sqlfun/control01.ctl
    output file name=/home/oracle/sqlfun/control02.ctl
    Finished restore at 04-JUL-13

    Mount the database and catalog the backup pieces which have been restored in the new location
    RMAN> alter database mount;
    
    database mounted
    released channel: ORA_DISK_1
    
    RMAN> catalog start with '/home/oracle/stage';
    
    Starting implicit crosscheck backup at 04-JUL-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=134 device type=DISK
    Crosschecked 25 objects
    Finished implicit crosscheck backup at 04-JUL-13
    
    Starting implicit crosscheck copy at 04-JUL-13
    using channel ORA_DISK_1
    Crosschecked 10 objects
    Finished implicit crosscheck copy at 04-JUL-13
    
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    
    List of Cataloged Files
    =======================
    File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/autobackup/2013_07_04/o1_mf_n_819884144_8x9ncjfl_.bkp
    File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_33_8x84ld7x_.arc
    File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_35_8x8db3hp_.arc
    File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_36_8x8dbgcm_.arc
    File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_34_8x8d9y2h_.arc
    File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_32_8x6mflv0_.arc
    
    searching for all files that match the pattern /home/oracle/stage
    
    List of Files Unknown to the Database
    =====================================
    File Name: /home/oracle/stage/o1_mf_sysaux_7zpgb1hd_.dbf
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w6zcb_.bkp
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w76jv_.bkp
    File Name: /home/oracle/stage/o1_mf_undotbs1_7zpgck9f_.dbf
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w65l9_.bkp
    File Name: /home/oracle/stage/o1_mf_example_7zgkdp6p_.dbf
    File Name: /home/oracle/stage/o1_mf_threaten_7zpgbyck_.dbf
    File Name: /home/oracle/stage/o1_mf_users_7zpgcnno_.dbf
    File Name: /home/oracle/stage/o1_mf_s_819826669_8x7w7g70_.bkp
    File Name: /home/oracle/stage/o1_mf_index_da_7zgkcz04_.dbf
    File Name: /home/oracle/stage/o1_mf_example__7zpgcg0x_.dbf
    File Name: /home/oracle/stage/o1_mf_system_7zpgbh79_.dbf
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w7b0f_.bkp
    
    Do you really want to catalog the above files (enter YES or NO)? YES
    cataloging files...
    cataloging done
    
    List of Cataloged Files
    =======================
    File Name: /home/oracle/stage/o1_mf_sysaux_7zpgb1hd_.dbf
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w6zcb_.bkp
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w76jv_.bkp
    File Name: /home/oracle/stage/o1_mf_undotbs1_7zpgck9f_.dbf
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w65l9_.bkp
    File Name: /home/oracle/stage/o1_mf_example_7zgkdp6p_.dbf
    File Name: /home/oracle/stage/o1_mf_threaten_7zpgbyck_.dbf
    File Name: /home/oracle/stage/o1_mf_users_7zpgcnno_.dbf
    File Name: /home/oracle/stage/o1_mf_s_819826669_8x7w7g70_.bkp
    File Name: /home/oracle/stage/o1_mf_index_da_7zgkcz04_.dbf
    File Name: /home/oracle/stage/o1_mf_example__7zpgcg0x_.dbf
    File Name: /home/oracle/stage/o1_mf_system_7zpgbh79_.dbf
    File Name: /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w7b0f_.bkp

    Generate the SET NEWNAME FOR DATAFILE commands
    Note:
    Since the OSS backup image copies use the Oracle Managed File (OMF) naming format, I hit the error below even though DB_FILE_NAME_CONVERT was set to account for the directory path change between source and target.
    RMAN does not restore the datafiles to the new location; instead it looks for the directory path that existed on the source database, which is not present on the new (target) server where we are doing the restore.

    RMAN> restore database;
    Starting restore at 04-JUL-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: restoring datafile 00001
    input datafile copy RECID=3060 STAMP=819884997 file name=/home/oracle/stage/o1_mf_system_7zpgbh79_.dbf
    destination for restore of datafile 00001: /u01/app/oracle/oradata/sqlfun/system01.dbf
    ORA-19504: failed to create file "/u01/app/oracle/oradata/sqlfun/system01.dbf"
    ORA-27040: file create error, unable to create file
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 1
    ORA-19600: input file is datafile-copy 3060 (/home/oracle/stage/o1_mf_system_7zpgbh79_.dbf)
    ORA-19601: output file is datafile 1 (/u01/app/oracle/oradata/sqlfun/system01.dbf)

    So, to work around this, we will generate a text file via SQL*Plus containing the SET NEWNAME commands, and call that file from RMAN.
    These are the contents of the text file rename_files.sql:
    set head off pages 0 feed off echo off verify off
    set lines 200
    spool rename_datafiles.lst
    select 'SET NEWNAME FOR DATAFILE ' || FILE# || ' TO ''' || '/home/oracle/sqlfun/' || substr(name,instr(name,'/',-1)+1) || ''';' from v$datafile;
    spool off
    exit;
    $ sqlplus sys as sysdba
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Jul 4 09:52:44 2013
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Enter password:
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> @rename_files.sql
    SET NEWNAME FOR DATAFILE 1 TO '/home/oracle/sqlfun/system01.dbf';
    SET NEWNAME FOR DATAFILE 2 TO '/home/oracle/sqlfun/sysaux01.dbf';
    SET NEWNAME FOR DATAFILE 3 TO '/home/oracle/sqlfun/undotbs01.dbf';
    SET NEWNAME FOR DATAFILE 4 TO '/home/oracle/sqlfun/users01.dbf';
    SET NEWNAME FOR DATAFILE 5 TO '/home/oracle/sqlfun/threatened_fauna_data.dbf';
    SET NEWNAME FOR DATAFILE 6 TO '/home/oracle/sqlfun/example_temp01.dbf';
    SET NEWNAME FOR DATAFILE 7 TO '/home/oracle/sqlfun/EXAMPLE_5';
    SET NEWNAME FOR DATAFILE 8 TO '/home/oracle/sqlfun/INDEX_DATA_6';
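    The rewrite that rename_files.sql performs (keep only the file's basename and prepend the new directory) can also be sketched in shell; the paths below are illustrative:

```shell
# Build one SET NEWNAME line from a datafile number and its source path.
# basename plays the role of substr(name, instr(name, '/', -1) + 1).
new_name() {
  fileno="$1"; src="$2"; dest_dir="$3"
  printf "SET NEWNAME FOR DATAFILE %s TO '%s/%s';\n" \
    "$fileno" "$dest_dir" "$(basename "$src")"
}

# new_name 1 /u01/app/oracle/oradata/sqlfun/system01.dbf /home/oracle/sqlfun
# prints: SET NEWNAME FOR DATAFILE 1 TO '/home/oracle/sqlfun/system01.dbf';
```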

    Restore and Recover the database
    Now pass this file name to the RMAN run block
    RMAN> run {
    2> @rename_datafiles.lst
    3> SET NEWNAME FOR DATAFILE 1 TO '/home/oracle/sqlfun/system01.dbf';
    4> SET NEWNAME FOR DATAFILE 2 TO '/home/oracle/sqlfun/sysaux01.dbf';
    5> SET NEWNAME FOR DATAFILE 3 TO '/home/oracle/sqlfun/undotbs01.dbf';
    6> SET NEWNAME FOR DATAFILE 4 TO '/home/oracle/sqlfun/users01.dbf';
    7> SET NEWNAME FOR DATAFILE 5 TO '/home/oracle/sqlfun/threatened_fauna_data.dbf';
    8> SET NEWNAME FOR DATAFILE 6 TO '/home/oracle/sqlfun/example_temp01.dbf';
    9> SET NEWNAME FOR DATAFILE 7 TO '/home/oracle/sqlfun/EXAMPLE_5';
    10> SET NEWNAME FOR DATAFILE 8 TO '/home/oracle/sqlfun/INDEX_DATA_6';
    11> **end-of-file**
    12> restore database;
    13> switch datafile all;
    14> recover database;
    15> }
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    executing command: SET NEWNAME
    
    Starting restore at 04-JUL-13
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=125 device type=DISK
    
    channel ORA_DISK_1: restoring datafile 00001
    input datafile copy RECID=3060 STAMP=819884997 file name=/home/oracle/stage/o1_mf_system_7zpgbh79_.dbf
    destination for restore of datafile 00001: /home/oracle/sqlfun/system01.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00001
    output file name=/home/oracle/sqlfun/system01.dbf RECID=3061 STAMP=819885296
    channel ORA_DISK_1: restoring datafile 00002
    input datafile copy RECID=3053 STAMP=819884997 file name=/home/oracle/stage/o1_mf_sysaux_7zpgb1hd_.dbf
    destination for restore of datafile 00002: /home/oracle/sqlfun/sysaux01.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00002
    output file name=/home/oracle/sqlfun/sysaux01.dbf RECID=3062 STAMP=819885313
    channel ORA_DISK_1: restoring datafile 00003
    input datafile copy RECID=3054 STAMP=819884997 file name=/home/oracle/stage/o1_mf_undotbs1_7zpgck9f_.dbf
    destination for restore of datafile 00003: /home/oracle/sqlfun/undotbs01.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00003
    output file name=/home/oracle/sqlfun/undotbs01.dbf RECID=3063 STAMP=819885321
    channel ORA_DISK_1: restoring datafile 00004
    input datafile copy RECID=3057 STAMP=819884997 file name=/home/oracle/stage/o1_mf_users_7zpgcnno_.dbf
    destination for restore of datafile 00004: /home/oracle/sqlfun/users01.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00004
    output file name=/home/oracle/sqlfun/users01.dbf RECID=3064 STAMP=819885322
    channel ORA_DISK_1: restoring datafile 00005
    input datafile copy RECID=3056 STAMP=819884997 file name=/home/oracle/stage/o1_mf_threaten_7zpgbyck_.dbf
    destination for restore of datafile 00005: /home/oracle/sqlfun/threatened_fauna_data.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00005
    output file name=/home/oracle/sqlfun/threatened_fauna_data.dbf RECID=3065 STAMP=819885327
    channel ORA_DISK_1: restoring datafile 00006
    input datafile copy RECID=3059 STAMP=819884997 file name=/home/oracle/stage/o1_mf_example__7zpgcg0x_.dbf
    destination for restore of datafile 00006: /home/oracle/sqlfun/example_temp01.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00006
    output file name=/home/oracle/sqlfun/example_temp01.dbf RECID=3066 STAMP=819885331
    channel ORA_DISK_1: restoring datafile 00007
    input datafile copy RECID=3055 STAMP=819884997 file name=/home/oracle/stage/o1_mf_example_7zgkdp6p_.dbf
    destination for restore of datafile 00007: /home/oracle/sqlfun/EXAMPLE_5
    channel ORA_DISK_1: copied datafile copy of datafile 00007
    output file name=/home/oracle/sqlfun/EXAMPLE_5 RECID=3067 STAMP=819885334
    channel ORA_DISK_1: restoring datafile 00008
    input datafile copy RECID=3058 STAMP=819884997 file name=/home/oracle/stage/o1_mf_index_da_7zgkcz04_.dbf
    destination for restore of datafile 00008: /home/oracle/sqlfun/INDEX_DATA_6
    channel ORA_DISK_1: copied datafile copy of datafile 00008
    output file name=/home/oracle/sqlfun/INDEX_DATA_6 RECID=3068 STAMP=819885339
    Finished restore at 04-JUL-13
    
    datafile 1 switched to datafile copy
    input datafile copy RECID=3069 STAMP=819885343 file name=/home/oracle/sqlfun/system01.dbf
    datafile 2 switched to datafile copy
    input datafile copy RECID=3070 STAMP=819885343 file name=/home/oracle/sqlfun/sysaux01.dbf
    datafile 3 switched to datafile copy
    input datafile copy RECID=3071 STAMP=819885343 file name=/home/oracle/sqlfun/undotbs01.dbf
    datafile 4 switched to datafile copy
    input datafile copy RECID=3072 STAMP=819885343 file name=/home/oracle/sqlfun/users01.dbf
    datafile 5 switched to datafile copy
    input datafile copy RECID=3073 STAMP=819885343 file name=/home/oracle/sqlfun/threatened_fauna_data.dbf
    datafile 6 switched to datafile copy
    input datafile copy RECID=3074 STAMP=819885343 file name=/home/oracle/sqlfun/example_temp01.dbf
    datafile 7 switched to datafile copy
    input datafile copy RECID=3075 STAMP=819885343 file name=/home/oracle/sqlfun/EXAMPLE_5
    datafile 8 switched to datafile copy
    input datafile copy RECID=3076 STAMP=819885343 file name=/home/oracle/sqlfun/INDEX_DATA_6
    
    Starting recover at 04-JUL-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting incremental datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00001: /home/oracle/sqlfun/system01.dbf
    destination for restore of datafile 00002: /home/oracle/sqlfun/sysaux01.dbf
    destination for restore of datafile 00007: /home/oracle/sqlfun/EXAMPLE_5
    destination for restore of datafile 00008: /home/oracle/sqlfun/INDEX_DATA_6
    channel ORA_DISK_1: reading from backup piece /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w65l9_.bkp
    channel ORA_DISK_1: piece handle=/home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w65l9_.bkp tag=ORA_OEM_LEVEL_0
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
    channel ORA_DISK_1: starting incremental datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00003: /home/oracle/sqlfun/undotbs01.dbf
    channel ORA_DISK_1: reading from backup piece /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w6zcb_.bkp
    channel ORA_DISK_1: piece handle=/home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w6zcb_.bkp tag=ORA_OEM_LEVEL_0
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    channel ORA_DISK_1: starting incremental datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00004: /home/oracle/sqlfun/users01.dbf
    destination for restore of datafile 00005: /home/oracle/sqlfun/threatened_fauna_data.dbf
    channel ORA_DISK_1: reading from backup piece /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w76jv_.bkp
    channel ORA_DISK_1: piece handle=/home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w76jv_.bkp tag=ORA_OEM_LEVEL_0
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    channel ORA_DISK_1: starting incremental datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00006: /home/oracle/sqlfun/example_temp01.dbf
    channel ORA_DISK_1: reading from backup piece /home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w7b0f_.bkp
    channel ORA_DISK_1: piece handle=/home/oracle/stage/o1_mf_nnnd1_ORA_OEM_LEVEL_0_8x7w7b0f_.bkp tag=ORA_OEM_LEVEL_0
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    
    starting media recovery
    
    archived log for thread 1 with sequence 33 is already on disk as file /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_33_8x84ld7x_.arc
    archived log for thread 1 with sequence 34 is already on disk as file /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_34_8x8d9y2h_.arc
    archived log for thread 1 with sequence 35 is already on disk as file /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_35_8x8db3hp_.arc
    archived log for thread 1 with sequence 36 is already on disk as file /u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_36_8x8dbgcm_.arc
    archived log file name=/u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_33_8x84ld7x_.arc thread=1 sequence=33
    archived log file name=/u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_34_8x8d9y2h_.arc thread=1 sequence=34
    archived log file name=/u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_35_8x8db3hp_.arc thread=1 sequence=35
    archived log file name=/u01/app/oracle/flash_recovery_area/SQLFUN/archivelog/2013_07_04/2013_07_03/o1_mf_1_36_8x8dbgcm_.arc thread=1 sequence=36
    unable to find archived log
    archived log thread=1 sequence=37
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 07/04/2013 09:55:58
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 37 and starting SCN of 33985482
    
    RMAN>
    The error can be ignored: RMAN has applied all the available archived log files and is now trying to apply one that does not exist.
    Open the database with RESETLOGS
    The RESETLOGS command will fail because we do not have the same directory path which existed on the source for the Online Redo Log files.
    So we create a script which will generate the new online redo log file names.
    These are the contents of the rename_logfiles.sql file
    set head off pages 0 feed off echo off verify off
    set lines 200
    spool rename_logfiles.lst
    select 'alter database rename file ''' || member || ''' ' || chr(10) || ' TO ''' || '/home/oracle/sqlfun/' || substr(member,instr(member,'/',-1)+1) || ''';' from v$logfile;
    spool off
    exit;
    $ sqlplus sys as sysdba
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Jul 4 10:05:17 2013
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Enter password:
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> @rename_logfiles.sql
    alter database rename file  '/u01/app/oracle/oradata/sqlfun/redo03.log'
    TO '/home/oracle/sqlfun/redo03.log';
    
    alter database rename file  '/u01/app/oracle/oradata/sqlfun/redo02.log'
    TO '/home/oracle/sqlfun/redo02.log';
    
    alter database rename file  '/u01/app/oracle/oradata/sqlfun/redo01.log'
    TO '/home/oracle/sqlfun/redo01.log';
    Note: we can do all of this from the RMAN prompt as well, using the RMAN SQL command; but for this example we do it from SQL*Plus.
    $ sqlplus sys as sysdba
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Jul 4 10:50:44 2013
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Enter password:
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> @rename_logfiles.lst
    
    Database altered.
    
    Database altered.
    
    Database altered.
    
    SQL> alter database open resetlogs;
    
    Database altered.

    Minimize database downtime when moving an Oracle DB to a new server


    My Oracle 10gR2 database is currently on a Windows 2003 32-bit server. I am planning to migrate to Oracle 10gR2 on a Windows 2008 64-bit server for better performance. What is the best method to do so? My main concern is database downtime. The total database size is around 200 GB.

    The first thing to do is to install Oracle 10gR2 64-bit on your new server. Then use the Database Configuration Assistant (DBCA) to precreate a new database on that server. If you want this to go as fast as possible, use exactly the same directory structure for your database files on the Windows 2008 server as on the Windows 2003 server, and make sure the new database has the same name as the old one. To minimize downtime, the fastest approach is to copy the database datafiles from the old server to the new one; you can use this method because both the original and the new platform are Windows. Follow these steps after you have precreated the database on the new server.

    1. SHUTDOWN IMMEDIATE the database on the Win 2003 server.
    2. SHUTDOWN IMMEDIATE the database on the Win 2008 server.
    3. On the Win 2008 server, delete the database datafiles, online redo log files, and the control files.
    4. Map a drive from the Win 2008 server to the Win 2003 server.
    5. Copy the datafiles, online redo logs, and control files from the Win 2003 server to the Win 2008 server, keeping the file directory placement the same.
    6. STARTUP your database on the Win 2008 server.

    The step that will take the longest is step 5. But doing a file copy is much faster than other methods.

    This procedure works because you are not changing platforms. From the perspective of Oracle's database files, 32-bit Win 2003 is the same platform as 64-bit Win 2008. We first create a dummy database on the Win 2008 server to get the server set up and configured for the database. We then copy the files over to the new server, and since the directory structure and database name are the same, it just starts right up!
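For step 5, the complete list of files to copy can be pulled from the source database before shutting it down; a quick query run as SYSDBA:

```sql
-- Every datafile, online redo log member, and control file to copy in step 5.
SELECT name FROM v$datafile
UNION ALL
SELECT member FROM v$logfile
UNION ALL
SELECT name FROM v$controlfile;
```

Save the output as a checklist so nothing is missed during the copy window.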

    How do I copy an Oracle DB from one server to another?


    I want to copy an Oracle database from one server to another. I'm going to save the entire database onto a CD and copy the exact database to another server.


    For the same server and operating system do the following. The DB will have the same name as the original.
    1. Shut source database down with the NORMAL or IMMEDIATE option.
    2. Copy all datafiles. Run select name from v$datafile; to get their names.
    3. Copy all online redo logs. Run select member from v$logfile; to get their names.
    4. Copy all control files. Run select name from v$controlfile; to get their names.
    5. Copy the parameter file. Find it in $ORACLE_HOME/dbs on Unix and $ORACLE_HOME/database on Windows. The name is initDBNAME.ora, where DBNAME is the database name.
    6. All of the files must be placed in directories that have same name as the source server directories. The names returned from the queries above are fully qualified with directory names.
    7. Create all of the directories specified in the parameter file you just copied. These will include BDUMP, CDUMP, and UDUMP. Just recreate all of the directories you find specified in your parameter file.
    8. Edit the parameter file you copied. Set REMOTE_LOGIN_PASSWORDFILE to NONE if it is set to SHARED or EXCLUSIVE.
    9. Start the database up.
    10. If you use a true temporary tablespace for sorting, you will have to recreate it. The database will still start, but the first sort that writes to it will bomb. Simple syntax is in the manuals.
    11. To use remote authentication again, run ORAPWD. Here's the syntax:
       
      orapwd
      Usage: orapwd file=<fname> password=<password> entries=<users>
      
      where:
      file = name of the password file (mandatory)
      password = password for SYS (mandatory)
      entries = maximum number of distinct DBAs and OPERs (optional)
      There are no spaces around the equal-to (=) character.
    12. Edit the parameter file to set REMOTE_LOGIN_PASSWORDFILE back to SHARED or EXCLUSIVE (depending on what it was set to before).
    13. Restart the DB for the change to take effect.
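One thing step 10 glosses over: tempfiles are listed in v$tempfile, not v$datafile, so a copy based on the step 2 query misses them. A minimal sketch of adding the tempfile back (path and size are placeholders, not from the original article):

```sql
-- The TEMP tablespace definition survives in the copied control file and
-- data dictionary, but its tempfile was never copied; add a fresh one.
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/DBNAME/temp01.dbf'
  SIZE 500M AUTOEXTEND ON;
```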

    If you want to change the DB name or directories, you must run the CREATE CONTROLFILE statement. You'll have to read the manual for that one; it involves about 30 steps. Or you can do the following.
    1. Create a database with a new name on the target server. Make it any database name you want with any directories you want.
      Since we will be using Export/Import, you will need to know this information:
      • You must precreate the tablespaces if you are going to change directory names for your data files. Oracle looks for the directory names during the import when it tries to create the tablespaces.
      • If it can't find the directories, the import will fail.
      • If you precreate the tablespaces (anywhere you want), Oracle will know that they are there and load them up with data. It won't fail because it won't attempt to create them.
    2. Export the data from the source database. Set ORACLE_SID to source database name.
      exp system/manager file=full_export_dump.dmp log=full_export_log.out full=y
      
      (We'll take defaults with everything else.)
    3. Copy full_export_dump.dmp to the target server.
    4. Import the data into the target database. Set ORACLE_SID to the target database name.
      imp system/manager file=full_export_dump.dmp log=full_import_log.out full=y ignore=y
      
      You only need ignore=y if your tablespaces are precreated. It tells the import to ignore the "already exists" error that is returned when it finds a tablespace is already there.
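A sketch of precreating one such tablespace on the target with a new datafile directory (the tablespace name, path, and size here are placeholders):

```sql
-- Precreate the tablespace so the import finds it and only loads data,
-- instead of failing on the old server's datafile path.
CREATE TABLESPACE app_data
  DATAFILE '/u02/oradata/NEWDB/app_data01.dbf' SIZE 100M AUTOEXTEND ON;
```

Repeat for each application tablespace whose datafile directory changes.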

    Wednesday, February 4, 2015

    How to restore an RMAN cold backup on a new server

    RMAN



    You have to restore your database from scratch to a new server with a new Oracle Home; your old database is not available anymore.

    I had a similar situation today. A server with hostname X was going to be replaced by another server with the same hostname X.
    By the time I had to restore the database on the new server, the old server was not available anymore, so the duplicate target database technique was not possible. I had a planned downtime of a few hours.


    Prior to the shutdown of the old database server I made a cold backup, and I installed the same Oracle version on the same OS on the new database server (a requirement). I was lucky enough to have the same filesystem layout for both the datafiles and the RMAN backup pieces (which were made on disk).



    STEP 1 : THE SERVER PARAMETER FILE

    I copy the password file and server parameter file into the new $ORACLE_HOME/dbs

    oracle@myhost ~$ cp spfilePLATINUM.ora $ORACLE_HOME/dbs
    oracle@myhost ~$ cp orapwPLATINUM $ORACLE_HOME/dbs


    I create an ASCII file from the server parameter file and make some adjustments and validations (like user_dump_dest ...)
    oracle@myhost dbs$ sqlplus / as sysdba

    SQL*Plus: Release 10.2.0.3.0 - Production on Wed Jul 1 12:31:15 2009

    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

    Connected to an idle instance.

    SQL> create pfile from spfile;

    File created.

    SQL> exit
    Disconnected

    oracle@myhost dbs$ ls -ltr
    total 40
    -rw-r-----  1 oracle oinstall  8385 Sep 11  1998 init.ora
    -rw-r--r--  1 oracle oinstall 12920 May  3  2001 initdw.ora
    -rw-r-----  1 oracle oinstall  3584 Jul  1 12:30 spfilePLATINUM.ora
    -rw-r-----  1 oracle oinstall  1536 Jul  1 12:30 orapwPLATINUM
    -rw-r--r--  1 oracle oinstall  1097 Jul  1 12:31 initPLATINUM.ora

    oracle@myhost dbs$ vi initPLATINUM.ora -- ( and I adjust the parameters )


    STEP 2 : STARTUP NOMOUNT


    I start up NOMOUNT since I don't have a control file yet. 

    oracle@myhost dbs $ sqlplus / as sysdba
    SQL*Plus: Release 10.2.0.3.0 - Production on Wed Jul 1 12:33:57 2009

    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

    Connected to an idle instance.

    SQL> create spfile from pfile;

    File created.

    SQL> startup nomount;
    ORACLE instance started.

    Total System Global Area 1895825408 bytes
    Fixed Size                  2073632 bytes
    Variable Size             436210656 bytes
    Database Buffers         1442840576 bytes
    Redo Buffers               14700544 bytes
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options

    STEP 3 : RESTORE CONTROL FILES


    I connect with RMAN, set the right DBID, and restore the control files. 

    oracle@myhost dbs$ rman target /

    Recovery Manager: Release 10.2.0.3.0 - Production on Wed Jul 1 12:34:56 2009

    Copyright (c) 1982, 2005, Oracle.  All rights reserved.

    connected to target database: PLATINUM (not mounted)


    RMAN> set dbid 706111982;

    executing command: SET DBID

    RMAN> restore controlfile from '/u01/backup/CT_4bkisua9_1_1';

    Starting restore at 01-JUL-09
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=321 devtype=DISK

    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
    output filename=/u01/oradata/PLATINUM/control01.ctl
    output filename=/u01/oradata/PLATINUM/control02.ctl
    output filename=/u01/oradata/PLATINUM/control03.ctl
    Finished restore at 01-JUL-09
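The DBID used in SET DBID has to be captured while the source database still exists; it is also printed in the banner of an RMAN session against the mounted source database, and in the backup logs. A quick way to record it beforehand:

```sql
-- Run against the source database before decommissioning it, and save the value.
SELECT dbid, name FROM v$database;
```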

    STEP 4 : MOUNT THE DATABASE


    Once the control files are restored I can mount the database.

    RMAN> sql "alter database mount";

    sql statement: alter database mount
    released channel: ORA_DISK_1

    STEP 5 : VERIFICATION AVAILABILITY OF BACKUP


    Now I validate whether the backups known in the control file are actually there; remember that I restored the latest backups to exactly the same location. I use the RMAN commands below:

    RMAN> list backup summary; --- which backups were made and are logically known in the control file?
    RMAN> list backup TAG='xyz'; --- is this the backup I would like to restore?
    RMAN> crosscheck backup TAG='xyz'; --- is this backup physically available?


    RMAN> list backup summary;
    output not posted
    RMAN> list backup TAG='TAG20090630T154910'; 
    List of Backup Sets
    ===================

    BS Key  Type LV Size       Device Type Elapsed Time Completion Time

    137     Full    115.50M    DISK        00:00:48     30-JUN-09
            BP Key: 137   Status: AVAILABLE  Compressed: YES  Tag: TAG20090630T154910
            Piece Name: /u01/backup/db_49kisu3m_1_1
      List of Datafiles in backup set 137
      File LV Type Ckp SCN    Ckp Time  Name
      ---- -- ---- ---------- --------- ----
      1       Full 2108954    30-JUN-09 /u01/oradata/PLATINUM/system01.dbf
      2       Full 2108954    30-JUN-09 /u01/oradata/PLATINUM/undotbs01.dbf
      3       Full 2108954    30-JUN-09 /u01/oradata/PLATINUM/sysaux01.dbf
      4       Full 2108954    30-JUN-09 /u01/oradata/PLATINUM/users01.dbf
      5       Full 2108954    30-JUN-09 /u01/oradata/PLATINUM/silver.dbf
      6       Full 2108954    30-JUN-09 /u01/oradata/PLATINUM/gold.dbf

    BS Key  Type LV Size       Device Type Elapsed Time Completion Time

    138     Full    1.06M      DISK        00:00:01     30-JUN-09
            BP Key: 138   Status: AVAILABLE  Compressed: YES  Tag: TAG20090630T154910
            Piece Name: /u01/backup/db_4akisu5e_1_1
      Control File Included: Ckp SCN: 2108954      Ckp time: 30-JUN-09
      SPFILE Included: Modification time: 30-JUN-09

    RMAN> crosscheck backup TAG='TAG20090630T154910';
    output not posted

    STEP 6 : RESTORE THE BACKUP


    I restore the latest backup. Since it is a cold backup I do not have to recover it; it is consistent.

    RMAN> restore database from TAG='TAG20090630T154910';
    Starting restore at 01-JUL-09
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /u01/oradata/PLATINUM/system01.dbf
    restoring datafile 00002 to /u01/oradata/PLATINUM/undotbs01.dbf
    restoring datafile 00003 to /u01/oradata/PLATINUM/sysaux01.dbf
    restoring datafile 00004 to /u01/oradata/PLATINUM/users01.dbf
    restoring datafile 00005 to /u01/oradata/PLATINUM/silver.dbf
    restoring datafile 00006 to /u01/oradata/PLATINUM/gold.dbf
    channel ORA_DISK_1: reading from backup piece /u01/backup/db_49kisu3m_1_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=/u01/backup/db_49kisu3m_1_1 tag=TAG20090630T154910
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:56
    Finished restore at 01-JUL-09

    STEP 7 : OPEN THE DATABASE WITH RESETLOGS


    RMAN> sql "alter database open resetlogs";
    sql statement: alter database open resetlogs

    STEP 8 :  MANAGE LISTENER.ORA, TNSNAMES.ORA AND PASSWORDFILE


    I copy the tnsnames.ora and listener.ora into the new $ORACLE_HOME/network/admin, and the password file into $ORACLE_HOME/dbs


    oracle@myhost ~ $ cp tnsnames.ora $ORACLE_HOME/network/admin
    oracle@myhost ~ $ cp listener.ora $ORACLE_HOME/network/admin
    oracle@myhost ~ $ cp orapwPLATINUM $ORACLE_HOME/dbs

    Tuesday, February 3, 2015

    Link to the AWS whitepapers





    http://aws.amazon.com/whitepapers/




    Migrating to Amazon RDS


    There are numerous ways to get "your data" to the cloud. Usually the simplest is export/import (with an Oracle bias here), but this process is usually slow once you start moving large data sets. There is RMAN backup/restore, but this requires you to have the same instance version in the cloud - not good if you want to upgrade at the same time. Data Pump is also available and is very useful, as you can run network-mode Data Pump jobs across database links - but again this can sometimes be slow.
    I then looked into using Amazon's Advanced Data Migration Techniques (published Nov 13 2013), decided to give it a go, and have posted my walkthrough below (quite technical):


    This was done from a local "data center" 100GB database, exported using Data Pump, copied to an m1.xlarge EC2 instance in the cloud, and then copied further to the backend DATA_PUMP_DIR on the RDS instance (to which you don't have direct filesystem access). Then a Data Pump import into RDS and the job was done - it took me approximately 12 hours in total.

    Migrate Database from host to AWS RDS via EC2


    EC2 Server to Launch:
    This can be done using an existing AMI image (which will have the defaults already configured) or one created from scratch (just note that packages such as the Oracle client, Perl, etc. will have to be installed). Both the EC2 instance and the RDS instance must be in the same availability zone, and if you have your network configured in AWS, use the same VPCs and subnets.
    Login to your AWS console and create an EC2 instance
    • Select "My AMIs" on the left and search for your "in-house" one, or use one of the "Quick Start" ones
    • Select General Purpose and M1.XLARGE
    • Select the correct Network/Subnet (we will use the testdev one – 10.10.XX.XX) in zone 1c
    • Add your storage:
    • Tag your instance:
    • Select the test security groups from existing list:
    • Review and Launch
    • Set up putty with the correct database keys:
    RDS Server to Launch:
    • Select Standard Edition One
    • Select No to multi-AZ
    • Add in the details:
    • Add in details again. Select the 11.2.0.3 DB engine, and be sure to select 1c as the zone and the correct VPCs. The security groups should be the same as (or similar to) the EC2 instance's (check with your network administrator if not sure).
    • No backups
    • Review & Launch
    • Make a note of the endpoint:
    • Test Connection:
    Log in to an EC2 AWS server in the same subnet/VPC and set up a TNS entry:
    $ setdb_dbd12c.sh
    oracle 12c Cloud DB RDBMS: cd $TNS_ADMIN
    oracle 12c Cloud DB RDBMS: vi tnsnames.ora

    RDSDB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SID = RDSDB)
        )
      )

    oracle 12c Cloud DB RDBMS: tnsping RDSDB
    TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-JAN-2014 11:03:22
    Copyright (c) 1997, 2011, Oracle.  All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SID = RDSDB)))
    OK (150 msec)

    Perform Export on Host:
    Run utlrp.sql on the source database and check how many invalid objects there are to start with:

    SQL>  select count(1) from dba_objects where status != 'VALID';
      COUNT(1)
    ----------
             6
    SQL> select distinct(owner) from  dba_objects where status != 'VALID' order by 1;
     
    OWNER
    ------------------------------
    USER09
    USER2010
    USER2011
    USER2012
    USER2013
     
    SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID';
    OWNER      OBJECT_NAME                    OBJECT_TYPE          STATUS
    ---------- ------------------------------ -------------------- -------
    USER09    APY_DRC_PKG                    PACKAGE BODY         INVALID
    USER2012  UCAS_APPLICANT_SCHOOL_CHANGE   PROCEDURE            INVALID
    USER2012  APY_DRC_PKG                    PACKAGE BODY         INVALID
    USER2011  APY_DRC_PKG                    PACKAGE BODY         INVALID
    USER2010  APY_DRC_PKG                    PACKAGE BODY         INVALID
    USER2013  APY_DRC_PKG                    PACKAGE BODY         INVALID
     
    6 rows selected.

    In this instance we will be exporting from HOST1 on host-01 using the data pump directory - /tmp/export/HOST

    host-01-oracle>pwd
    /tmp/export/HOST
    host-01-oracle>expdp system/system_HOST full=y parallel=4 logfile=HOST_dump.log job_name=HOST_rds filesize=5G dumpfile=HOST_%U.dmp directory=data_pump_dir
    Export: Release 10.2.0.5.0 - 64bit Production on Wednesday, 15 January, 2014 9:49:09
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
     
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Starting "SYSTEM"."HOST_RDS":  system/******** full=y parallel=4 logfile=HOST_dump.log job_name=HOST_rds filesize=5G dumpfile=HOST_%U.dmp directory=data_pump_dir
    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 137.9 GB
    Processing object type DATABASE_EXPORT/TABLESPACE
    Processing object type DATABASE_EXPORT/PROFILE
    Processing object type DATABASE_EXPORT/SYS_USER/USER
    Processing object type DATABASE_EXPORT/SCHEMA/USER
    Processing object type DATABASE_EXPORT/ROLE
    Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
    Processing object type DATABASE_EXPORT/RESOURCE_COST
    Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
    Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/CONTEXT
    Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
    . . exported "SVR2011_SCH"."APY_APP_APPLICATION_S"      5.965 GB 2658468 rows
    Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    . . exported "SVR2013_SCH"."APY_APP_APPLICATION_S"      5.630 GB 2375422 rows
    . . exported "SVR2012_SCH"."APY_APP_APPLICATION_S"      6.777 GB 2904690 rows
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/REFRESH_GROUP
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    ...
    ******************************************************************************
    Dump file set for SYSTEM.HOST_RDS is:
      /tmp/export/HOST/HOST_01.dmp
      /tmp/export/HOST/HOST_02.dmp
      /tmp/export/HOST/HOST_03.dmp
      /tmp/export/HOST/HOST_04.dmp
      /tmp/export/HOST/HOST_05.dmp
      /tmp/export/HOST/HOST_06.dmp
      /tmp/export/HOST/HOST_07.dmp
      /tmp/export/HOST/HOST_08.dmp
      /tmp/export/HOST/HOST_09.dmp
      /tmp/export/HOST/HOST_10.dmp
      /tmp/export/HOST/HOST_11.dmp
      /tmp/export/HOST/HOST_12.dmp
      /tmp/export/HOST/HOST_13.dmp
      /tmp/export/HOST/HOST_14.dmp
      /tmp/export/HOST/HOST_15.dmp
      /tmp/export/HOST/HOST_16.dmp
      /tmp/export/HOST/HOST_17.dmp
      /tmp/export/HOST/HOST_18.dmp
      /tmp/export/HOST/HOST_19.dmp
      /tmp/export/HOST/HOST_20.dmp
      /tmp/export/HOST/HOST_21.dmp
    Job "SYSTEM"."HOST_RDS" successfully completed at 12:56:47
     
    ## TOTAL TIME 3hrs 7 minutes ##
     
    /tmp/export/HOST
    host-01-oracle>ls -lrt
    total 196998528
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 09:53 HOST_01.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 09:56 HOST_02.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 09:57 HOST_03.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 11:05 HOST_05.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 11:05 HOST_06.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 11:05 HOST_07.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 11:58 HOST_04.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:04 HOST_10.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:08 HOST_09.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:10 HOST_13.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:12 HOST_14.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:13 HOST_15.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:36 HOST_17.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:47 HOST_08.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:50 HOST_12.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:54 HOST_20.dmp
    -rw-rw----   1 oracle   oinstall 5368709120 Jan 15 12:54 HOST_11.dmp
    -rw-rw----   1 oracle   oinstall 1061998592 Jan 15 12:56 HOST_19.dmp
    -rw-rw----   1 oracle   oinstall 4734468096 Jan 15 12:56 HOST_18.dmp
    -rw-rw----   1 oracle   oinstall 1077534720 Jan 15 12:56 HOST_21.dmp
    -rw-rw----   1 oracle   oinstall 2671546368 Jan 15 12:56 HOST_16.dmp
    -rw-rw-r--   1 oracle   oinstall  197927 Jan 15 12:56 HOST_dump.log

    ## TOTAL SIZE 100GB ##

    Test Connection from host to EC2
    host-01-oracle> ping 10.10.10.10
    no answer from 10.10.10.10

    Open up firewall rules:

    host-01-oracle>ssh oracle@10.10.10.10
    The authenticity of host '10.10.10.10 (10.10.10.10)' can't be established.
    RSA key fingerprint is 0b:8f:9f:1d:8d:62:59:93:1c:02:e2:de:c3:59:ba:1b.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.10.10.10' (RSA) to the list of known hosts.
    oracle@10.10.10.10's password:
    Last login: Wed Jan 15 14:36:43 2014 from 10.1.5.33

    Copy files from host to EC2

    $ df -h
    Filesystem     Size  Used Avail Use% Mounted on
    /dev/xvda1     99G   3.2G  95G    4% /
    none           7.4G  0     7.4G   0% /dev/shm
    /dev/xvdd       99G  196M   94G   1% /home
    /dev/xvde       99G   10G   84G  11% /u01/app/oracle
    /dev/xvdf      985G  324G  612G  35% /u01/app/oracle/fast_recovery_area
    /dev/xvdg       99G  188M   94G   1% /u05/goldengate
    /dev/xvdh      493G  220G  248G  47% /u02/app/oracle/DATA/MIDDLE
    /dev/xvdi      493G  219G  249G  47% /u02/app/oracle/DATA/MIDDLE1
    /dev/xvdj      493G  219G  249G  47% /u02/app/oracle/DATA/MIDDLE2
    /dev/xvdk      493G  219G  249G  47% /u02/app/oracle/DATA/MIDDLE3
    /dev/xvdl       20G  1.7G   18G   9% /u03/app/oracle/REDO1
    /dev/xvdm       20G  1.7G   18G   9% /u03/app/oracle/REDO2
    /dev/xvdn       99G  4.8G   89G   6% /u04/app/oracle/ARCH
    /dev/xvdo       30G  172M   28G   1% /jobdata

    host-01-oracle>scp *.dmp oracle@10.10.10.10:/u01/app/oracle/fast_recovery_area/export/
    oracle@10.10.10.10's password:
    HOST_01.dmp       100% |************************************************|  5120 MB    07:12
    HOST_02.dmp       100% |************************************************|  5120 MB    07:02
    HOST_03.dmp       100% |************************************************|  5120 MB    07:30
     
    host-01-oracle>scp *.dmp oracle@10.10.10.10:/u01/app/oracle/fast_recovery_area/export/rds       
    oracle@10.10.10.10's password:
    HOST_16.dmp       100% |************************************************|  2547 MB    03:57
    HOST_17.dmp       100% |************************************************|  5120 MB    08:38
    HOST_18.dmp       100% |************************************************|  4515 MB    06:54

    ## Approx 7min/5GB chunk to upload ##
    ## Total Time Approx 3hrs ##
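After moving ~100GB in 5GB chunks it is worth verifying the dumps arrived intact. A minimal sketch using md5sum (the file name below is a stand-in; in practice generate the checksum file on the source host, copy it alongside the dumps, and run the check on the EC2 side):

```shell
#!/bin/sh
# Stand-in for a real dump file (hypothetical name).
echo "sample dump data" > HOST_demo.dmp

# On the source host: record checksums for all dump files.
md5sum HOST_demo.dmp > dumps.md5

# On the EC2 host, after scp'ing the dumps and dumps.md5: verify.
md5sum -c dumps.md5
```

A mismatch here is far cheaper to catch now than after a failed 9-hour import.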

    Test Connection from EC2 to RDS
    Check the perl version installed (10.10.10.10):

    $ setdb_MIDDLE.sh
    oracle 11gR2 Middle:which perl
    /u01/app/oracle/product/11.2.0/db_1/perl/bin/perl
    oracle 11gR2 Middle:perl -v
     
    This is perl, v5.10.0 built for x86_64-linux-thread-multi
     
    Copyright 1987-2007, Larry Wall
     
    Perl may be copied only under the terms of either the Artistic License or the
    GNU General Public License, which may be found in the Perl 5 source kit.
     
    Complete documentation for Perl, including FAQ lists, should be found on
    this system using "man perl" or "perldoc perl".  If you have access to the
    Internet, point your browser at http://www.perl.org/, the Perl Home Page.
     
    oracle 11gR2 Middle:/u01/app/oracle/product/11.2.0/db_1/perl/bin/perl  -le 'use DBD::Oracle; print $DBD::Oracle::VERSION'
    1.20
     
    Create a test script called test.pl and enter the following details into it:
     
    oracle 11gR2 Middle:mkdir -p /u01/app/oracle/fast_recovery_area/export/rds
    oracle 11gR2 Middle:cd /u01/app/oracle/fast_recovery_area/export/rds
    oracle 11gR2 Middle:vi test.pl
     
    --------
     
    use DBI;
    use warnings;
    use strict;
     
    # RDS instance info
    my $RDS_PORT=1521;
    my $RDS_HOST="rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com";
    my $RDS_LOGIN="user/*********";
    my $RDS_SID="RDSDB";
     
    my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT,$RDS_LOGIN, '') || die ( $DBI::errstr . "\n") ;
    my $sth = $conn->prepare("select * from dual");
    $sth->execute;
    print "Got here without dying\n";
     
    --------
     
    oracle 11gR2 Middle:chmod 755 test.pl
    oracle 11gR2 Middle:perl test.pl
    Got here without dying


    Create the main script called copy_to_rds.pl and enter the following details into it:

    oracle 11gR2 Middle:vi copy_to_rds.pl
     
    use DBI;
    use warnings;
    use strict;
     
     
    # RDS instance info
    my $RDS_PORT=1521;
    my $RDS_HOST="rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com";
    my $RDS_LOGIN="user/********";
    my $RDS_SID="RDSDB";
     
    # $ARGV[0] is the file name parameter you pass into the script
    my $dirname = "DATA_PUMP_DIR";
    my $fname = $ARGV[0];
     
    my $data = "dummy";
    my $chunk = 8192;
     
    my $sql_open = "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;";
    my $sql_write = "BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;";
    my $sql_close = "BEGIN utl_file.fclose(perl_global.fh); END;";
    my $sql_global = "create or replace package perl_global as fh utl_file.file_type; end;";
     
    my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT,$RDS_LOGIN, '') || die ( $DBI::errstr . "\n") ;
     
    my $updated=$conn->do($sql_global);
    my $stmt = $conn->prepare ($sql_open);
    $stmt->bind_param_inout(":dirname", \$dirname, 12);
    $stmt->bind_param_inout(":fname", \$fname, 12);
    $stmt->bind_param_inout(":chunk", \$chunk, 4);
    $stmt->execute() || die ( $DBI::errstr . "\n");
     
    open (INF, $fname) || die "\nCan't open $fname for reading: $!\n";
    binmode(INF);
    $stmt = $conn->prepare ($sql_write);
    my %attrib = ('ora_type', '24');
    my $val = 1;
    while ($val > 0) {
      $val = read (INF, $data, $chunk);
      $stmt->bind_param(":data", $data , \%attrib);
      $stmt->execute() || die ( $DBI::errstr . "\n") ; };
    die "Problem copying: $!\n" if $!;
    close INF || die "Can't close $fname: $!\n";
      $stmt = $conn->prepare ($sql_close);
    $stmt->execute() || die ( $DBI::errstr . "\n") ;
     
    oracle 11gR2 Middle:chmod 755 copy_to_rds.pl

    Run it:

    oracle 11gR2 Middle:perl copy_to_rds.pl
    DBD::Oracle::st execute failed: ORA-29288: invalid file name
    ORA-06512: at "SYS.UTL_FILE", line 41
    ORA-06512: at "SYS.UTL_FILE", line 478
    ORA-06512: at line 1 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;" with ParamValues: :chunk=8192, :fname=undef, :dirname='DATA_PUMP_DIR'] at copy_to_rds.pl line 31.
    ORA-29288: invalid file name
    ORA-06512: at "SYS.UTL_FILE", line 41
    ORA-06512: at "SYS.UTL_FILE", line 478
    ORA-06512: at line 1 (DBD ERROR: OCIStmtExecute)

    Now run it with the dump file name as the parameter; it returns silently on success:

    oracle 11gR2 Middle:perl copy_to_rds.pl HOST_01.dmp
    oracle 11gR2 Middle:

    It is easier to create a shell script to copy all the remaining files:

    #!/bin/bash
    # Copy each remaining dump file to RDS in turn (HOST_01.dmp was copied above)
    cd /u01/app/oracle/fast_recovery_area/export/rds
    for n in $(seq -w 2 21); do
        echo "Copying HOST_${n}.dmp"
        perl copy_to_rds.pl "HOST_${n}.dmp"
    done

    One way to check that the files are going to the correct location is to watch the CloudWatch free-space metric for the RDS instance.

    Another way is to use the Amazon-provided packages to query the DATA_PUMP_DIR directory. Log in to the RDS instance as user@RDSDB and run the following:

    SQL> select grantee, privilege from dba_tab_privs where table_name='DATA_PUMP_DIR';
    GRANTEE                        PRIVILEGE
    ------------------------------ -----------------
    DBA                            READ
    EXP_FULL_DATABASE              READ
    IMP_FULL_DATABASE              READ
    RDSADMIN                       READ
    DBA                            WRITE
    EXP_FULL_DATABASE              WRITE
    IMP_FULL_DATABASE              WRITE
    RDSADMIN                       WRITE
     
    8 rows selected.
     
    SQL> SELECT * FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by 1;
    FILENAME             TYPE         FILESIZE MTIME
    -------------------- ---------- ---------- ---------
    HOST_01.dmp       file       5368709120 16-JAN-14
    HOST_02.dmp       file       5368709120 16-JAN-14
    HOST_03.dmp       file       5368709120 16-JAN-14
    HOST_04.dmp       file       5368709120 16-JAN-14
    HOST_05.dmp       file       5368709120 16-JAN-14
    HOST_06.dmp       file       5368709120 16-JAN-14
    HOST_07.dmp       file       5368709120 16-JAN-14
    HOST_08.dmp       file       5368709120 16-JAN-14
    HOST_09.dmp       file       5368709120 16-JAN-14
    HOST_10.dmp       file       5368709120 16-JAN-14
    HOST_11.dmp       file       5368709120 16-JAN-14
    HOST_12.dmp       file       5368709120 16-JAN-14
    HOST_13.dmp       file       5368709120 16-JAN-14
    HOST_14.dmp       file       5368709120 16-JAN-14
    HOST_15.dmp       file       5368709120 16-JAN-14
    HOST_16.dmp       file       2671546368 17-JAN-14
    HOST_17.dmp       file       5368709120 16-JAN-14
    HOST_18.dmp       file       4734468096 16-JAN-14
    HOST_19.dmp       file       1061998592 17-JAN-14
    HOST_20.dmp       file       5368709120 16-JAN-14
    HOST_21.dmp       file       1077534720 16-JAN-14
    datapump/            directory        4096 20-JAN-14
     
    22 rows selected.

    Check these sizes against the sizes from the EC2 instance.
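    A quick way to do that comparison is to print each local dump file with its size in bytes, so the output lines up with the listdir output above (a small sketch; it assumes GNU stat on the EC2 host and the export directory used earlier):

```shell
# List each dump file with its size in bytes, so the output can be compared
# line by line with rds_file_util.listdir on the RDS side.
list_dump_sizes() {
  for f in "$1"/*.dmp; do
    [ -e "$f" ] || continue          # no matches: print nothing
    printf '%s %s\n' "$(basename "$f")" "$(stat -c %s "$f")"
  done
}

list_dump_sizes /u01/app/oracle/fast_recovery_area/export/rds
```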

    Next, create the same tablespaces in RDS as exist on the source host.

    Export the tablespace DDL from the source host using:

    set heading off
    set echo off
    set pages 999
    set long 90000
    spool tablespace.sql
    select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;
    spool off

    Edit the "tablespace.sql" script so that only a bare CREATE TABLESPACE "<name>"; statement remains for each tablespace; the storage clauses can be discarded because RDS manages the datafiles itself, i.e.:
     
    CREATE TABLESPACE "USER_INDEX08";
    CREATE TABLESPACE "USER_DATA08";
    CREATE TABLESPACE "USER_DATA09";
    CREATE TABLESPACE "USER_INDEX09";
    CREATE TABLESPACE "JFORUM_DATA";
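    That edit can also be done mechanically (a sketch, assuming GNU grep/sed on the host and no BIGFILE tablespaces, which would need the pattern adjusted):

```shell
# Keep only the leading CREATE TABLESPACE "<name>" of each spooled DDL
# statement and terminate it with ';'.
extract_ts() {
  grep -o 'CREATE TABLESPACE "[^"]*"' "$1" | sed 's/$/;/'
}

# Usage: extract_ts tablespace.sql > create_tablespaces.sql
```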

    Then create these tablespaces in the RDS instance:

    oracle 12c Cloud DB RDBMS: sqlplus user/*********@RDSDB
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 16 09:11:44 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
    SQL> @tablespace.sql
    Tablespace created
    ...
    .

    Import the data into RDS

    Run the import from within the Amazon RDS instance by submitting a DBMS_DATAPUMP job with PL/SQL. It may be worth taking a snapshot of your RDS instance before continuing.

    declare
      ind NUMBER;              -- Loop index
      h1 NUMBER;               -- Data Pump job handle
      percent_done NUMBER;     -- Percentage of job complete
      job_state VARCHAR2(30);  -- To keep track of job state
      le ku$_LogEntry;         -- For WIP and error messages
      js ku$_JobStatus;        -- The job status from get_status
      jd ku$_JobDesc;          -- The job description from get_status
      sts ku$_Status;          -- The status object returned by get_status
    begin
     
    -- Create a (user-named) Data Pump job to do a "full" import (everything in the dump file without filtering).
    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'RDSDB4');
    dbms_datapump.add_file(handle => h1, filename => 'IMPORT.LOG', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
     
    -- Specify the dump files for the job (using the handle just returned)
    -- and directory object, which must already be defined and accessible
    -- to the user running this procedure. These are the dump file created by
    -- the export operation in the first example.
     
    dbms_datapump.add_file(handle => h1, filename => 'HOST_01.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_02.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_03.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_04.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_05.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_06.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_07.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_08.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_09.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_10.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_11.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_12.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_13.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_14.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_15.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_16.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_17.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_18.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_19.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_20.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'HOST_21.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
     
    -- add your datapump parameters
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
     
    -- Start the job. An exception is returned if something is not set up properly.
    dbms_datapump.start_job(handle => h1, skip_current => 0);
     
    -- The import job should now be running. In the following loop, the job
    -- is monitored until it completes. In the meantime, progress information is
    -- displayed.
     
      percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
               dbms_datapump.ku$_status_job_error +
               dbms_datapump.ku$_status_job_status +
               dbms_datapump.ku$_status_wip,-1,job_state,sts);
        js := sts.job_status;
     
    -- If the percentage done changed, display the new value.
     
        if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' ||
                               to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;
     
    -- If any work-in-progress (WIP) or error messages were received for the job,
    -- display them.
     
       if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;
     
    -- Indicate that the job finished and detach from it.
     
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
    dbms_datapump.detach(handle => h1);
    exception
     when others then
     raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    end;
    /
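    The 21 add_file calls in the block above are mechanical; if you have many dump files the statements can be generated rather than typed, e.g. with a small shell loop (a convenience sketch, not part of the original run):

```shell
# Emit one dbms_datapump.add_file call per dump file HOST_01 .. HOST_21.
gen_add_files() {
  for n in $(seq -w 1 21); do
    printf "dbms_datapump.add_file(handle => h1, filename => 'HOST_%s.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU\$_FILE_TYPE_DUMP_FILE);\n" "$n"
  done
}

gen_add_files
```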

    oracle 12c Cloud DB RDBMS: sqlplus user/********@RDSDB
    SQL*Plus: Release 11.2.0.3.0 Production on Tue Jan 21 08:40:34 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
    SQL>set serveroutput on
    SQL> @import.sql

    Check the state of the submitted job from another session:

    SET lines 200
    COL owner_name FORMAT a10;
    COL job_name FORMAT a20
    COL state FORMAT a11
    COL operation LIKE state
    COL job_mode LIKE state
     SELECT owner_name, job_name, operation, job_mode,
        state, attached_sessions
        FROM dba_datapump_jobs
        WHERE job_name NOT LIKE 'BIN$%'
        ORDER BY 1,2;
     
    OWNER_NAME JOB_NAME             OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
    ---------- -------------------- ----------- ----------- ----------- -----------------
    user    RDSDB4              IMPORT      FULL        EXECUTING                    1

    If you need to Stop/Kill the job:

    SQL> SET serveroutput on
    SQL> SET lines 100
    SQL> declare
      2  h1 number;
      3  begin
      4  h1 := DBMS_DATAPUMP.ATTACH('RDSDB4','user');
      5  DBMS_DATAPUMP.STOP_JOB (h1);
      6  end;
      7  /


    Back in the import session, progress messages continue until the job completes:

    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
    ORA-39082: Object type TRIGGER:"SYSMAN"."MGMT_CREDS_INS_UPD" created with compilation warnings
    ORA-39082: Object type TRIGGER:"SYSMAN"."MGMT_CREDS_INS_UPD" created with compilation warnings
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/REFRESH_GROUP
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    Job "user"."RDSDB4" completed with 374 error(s) at 13:38:05
    Job has completed
    Final job state = COMPLETED
     
    PL/SQL procedure successfully completed.

    ## APPROX 3 hrs ##

    To check progress, monitor the CloudWatch graphs.

    To read the log file, create an external table over it and select the contents:

    CREATE TABLE datapumplog
    (
    text VARCHAR2(400)
    )
    ORGANIZATION EXTERNAL
    (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY DATA_PUMP_DIR
    ACCESS PARAMETERS
    (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NODISCARDFILE NOLOGFILE
    fields terminated by 0x'09'
    MISSING FIELD VALUES ARE NULL
    (
    text
    )
    )
    LOCATION ( 'IMPORT.LOG' )
    )
    REJECT LIMIT UNLIMITED;
     
    SQL>set pages 5000
    SQL>set lines 300
    SQL>spool import.log
    SQL>select * from datapumplog;


    Alternatively, you may be able to read the contents of the log directly with the rdsadmin utility package:
    
    SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','IMPORT.LOG'));

    Recompile the schemas (this has to be done one schema at a time):

    oracle 12c Cloud DB RDBMS: sqlplus  user/*********@RDSDB
    SQL*Plus: Release 11.2.0.3.0 Production on Tue Jan 21 14:19:37 2014
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
     
    SQL> set serveroutput on
    SQL> select count(1) from dba_objects where status != 'VALID';
     
      COUNT(1)
    ----------
          1250
     
    SQL> select distinct(owner) from  dba_objects where status != 'VALID' order by 1;
     
    OWNER
    --------------------
    USER09
    USER10TRAIN
    USER11TRAIN
    USER2010
    USER2011
    USER2012
    USER2013
    USER2014
    CSMIG
    NETUSER2012
    NETUSER2013
    NETUSER2014
    FORUM2010
    FORUM2011
    FORUM2012
    FORUM2013
    SVR2011
    SVR2011_SCH
    SVR2012
    SVR2013
    SVR2013BATCH
    SVR2014
    SVR2014BATCH
    SVRBATCH2012
    SVRREADONLY
    SVRSEND
    SVRSEND_SCH
    PAYMENT
    PERFSTAT
    PUBLIC
    DISK
    DISK_SCH
    SYS
    SYSMAN
    TARIFF_FORUM
     
    SQL> exec SYS.UTL_RECOMP.RECOMP_SERIAL('DISK_SCH');
     
    PL/SQL procedure successfully completed.
     
    SQL> show errors
    No errors.

    Do for each schema above
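    The per-schema calls can be generated from the owner list returned by the query above rather than typed one by one (a sketch; the owners shown are just an illustrative subset):

```shell
# Emit one UTL_RECOMP.RECOMP_SERIAL call per schema owner.
gen_recomp() {
  for owner in "$@"; do
    echo "exec SYS.UTL_RECOMP.RECOMP_SERIAL('${owner}');"
  done
}

gen_recomp USER09 USER10TRAIN USER11TRAIN USER2010 DISK_SCH
```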


    SQL> select count(1) from dba_objects where status != 'VALID';
     
      COUNT(1)
    ----------
           513
     
    SQL> select distinct(owner) from  dba_objects where status != 'VALID' order by 1;
     
    OWNER
    --------------------
    USER09
    USER10TRAIN
    USER11TRAIN
    USER2010
    USER2011
    USER2012
    USER2013
    USER2014
    CSMIG
    PERFSTAT
    PUBLIC
    DISK_SCH
    SYS
    SYSMAN
     
    16 rows selected.
     
    Now check the objects involved for each schema. Some may be invalid due to database links or invalid materialized views.
     
    SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
     
    OWNER                OBJECT_NAME                                   OBJECT_TYPE          STATUS
    -------------------- --------------------------------------------- -------------------- -------
    USER09              ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER09              APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER09              ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER09              ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER09              ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER09              ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER09              APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER09              APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER09              APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER09              APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER10TRAIN         APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER10TRAIN         APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER10TRAIN         ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER10TRAIN         APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER10TRAIN         ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER10TRAIN         ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER10TRAIN         ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER10TRAIN         ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER10TRAIN         APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER10TRAIN         APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER11TRAIN         ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER11TRAIN         ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER11TRAIN         APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER11TRAIN         ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER11TRAIN         APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER11TRAIN         APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER11TRAIN         APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER11TRAIN         ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER11TRAIN         ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER11TRAIN         APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER2010            APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER2010            ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER2010            ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER2010            ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER2010            APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER2010            ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER2010            ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER2010            APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER2010            APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER2010            APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER2011            ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER2011            ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER2011            ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER2011            APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER2011            APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER2011            ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER2011            APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER2011            ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER2011            APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER2011            APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER2012            UCAS_APPLICANT_SCHOOL_CHANGE                  PROCEDURE            INVALID
    USER2012            APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER2012            APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER2012            APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER2012            APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER2012            APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER2012            ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER2012            ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER2012            ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER2012            ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER2012            ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER2013            APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    USER2013            ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER2013            ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER2013            ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER2013            APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER2013            APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER2013            APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER2013            APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER2013            ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER2013            ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER2014            APY_DRC_PKG                                   PACKAGE BODY         INVALID
    USER2014            ARF_VAV_VALID_VALUES_MVW                      MATERIALIZED VIEW    INVALID
    USER2014            ARF_VVD_DOMAINS_MVW                           MATERIALIZED VIEW    INVALID
    USER2014            ARF_TTL_TITLE_MVW                             MATERIALIZED VIEW    INVALID
    USER2014            ARF_GLP_GROUP_LIST_PARAM_MVW                  MATERIALIZED VIEW    INVALID
    USER2014            ARF_CTY_COUNTRY_MVW                           MATERIALIZED VIEW    INVALID
    USER2014            APY_SAD_SCHOOL_ADDRESSES_MVW                  MATERIALIZED VIEW    INVALID
    USER2014            APY_APT_APPLICANT_MVW                         MATERIALIZED VIEW    INVALID
    USER2014            APY_CGP_CONTACT_GROUP_MVW                     MATERIALIZED VIEW    INVALID
    USER2014            APY_CHC_CHOICE_MVW                            MATERIALIZED VIEW    INVALID
    PERFSTAT             STATSPACK                                     PACKAGE BODY         INVALID
    DISK_SCH            MVW_CTY_COUNTRY                               MATERIALIZED VIEW    INVALID
    DISK_SCH            MVW_QET_QUALENT                               MATERIALIZED VIEW    INVALID
     
    84 rows selected.

    Create a database link back to the middle instance in the cloud for all schemas involved:

    SQL> CREATE DATABASE LINK middle.ucas.ac.uk
     CONNECT TO middle
     IDENTIFIED BY pwd_1_middle
     USING '(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.10)(PORT = 1521)))(CONNECT_DATA = (SID = MIDDLE)))'; 
     
    Database link created.
     
    SQL>  select sysdate from dual@middle.ucas.ac.uk;
    SYSDATE
    ---------
    22-JAN-14

    Then recompile the objects again:

    SQL> select 'alter materialized view '||owner||'.'||object_name||' compile;' from dba_objects  where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC') and OBJECT_TYPE='MATERIALIZED VIEW';
     
    alter materialized view USER09.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER09.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER09.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER09.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER09.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER09.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER09.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER09.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view USER10TRAIN.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view USER10TRAIN.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER10TRAIN.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER10TRAIN.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER10TRAIN.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER10TRAIN.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER10TRAIN.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER10TRAIN.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER2010.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER2010.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER2010.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER2010.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view USER2010.APY_APT_APPLICANT_MVW compile;
    alter materialized view USER10TRAIN.APY_APT_APPLICANT_MVW compile;
    alter materialized view USER2012.APY_APT_APPLICANT_MVW compile;
    alter materialized view USER2012.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER2011.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER2011.APY_APT_APPLICANT_MVW compile;
    alter materialized view USER11TRAIN.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view USER11TRAIN.APY_APT_APPLICANT_MVW compile;
    alter materialized view USER11TRAIN.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER11TRAIN.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER11TRAIN.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER11TRAIN.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER11TRAIN.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER11TRAIN.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER11TRAIN.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER09.APY_APT_APPLICANT_MVW compile;
    alter materialized view USER2010.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER2010.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER2010.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER2010.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER2011.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER2011.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER2011.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER2011.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER2011.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER2011.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER2011.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view USER2013.APY_CHC_CHOICE_MVW compile;
    alter materialized view USER2013.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER2013.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER2013.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER2013.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER2013.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER2013.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER2013.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view USER2012.APY_CGP_CONTACT_GROUP_MVW compile;
    alter materialized view USER2012.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
    alter materialized view USER2012.ARF_CTY_COUNTRY_MVW compile;
    alter materialized view USER2012.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
    alter materialized view USER2012.ARF_TTL_TITLE_MVW compile;
    alter materialized view USER2012.ARF_VAV_VALID_VALUES_MVW compile;
    alter materialized view USER2012.ARF_VVD_DOMAINS_MVW compile;
    alter materialized view DISK_SCH.MVW_CTY_COUNTRY compile;
    alter materialized view DISK_SCH.MVW_QET_QUALENT compile;
     
    SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
     
    OWNER           OBJECT_NAME                    OBJECT_TYPE         STATUS
    --------------- ------------------------------ ------------------- -------
    USER09         APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER10TRAIN    APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER11TRAIN    APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2010       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2011       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2012       UCAS_APPLICANT_SCHOOL_CHANGE   PROCEDURE           INVALID
    USER2012       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2013       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2014       APY_DRC_PKG                    PACKAGE BODY        INVALID
    PERFSTAT        STATSPACK                      PACKAGE BODY        INVALID
     
    10 rows selected.


    Drop and re-create the synonyms for users:

    SQL> select distinct(owner) from dba_objects where status != 'VALID' and object_type='SYNONYM';
     
    OWNER
    ---------------
    PUBLIC

    select 'drop synonym '||OWNER||'.'||SYNONYM_NAME||';' from dba_synonyms where owner = 'OWNER';
    select 'create synonym '||OWNER||'.'||SYNONYM_NAME||' for '||TABLE_OWNER||'.'||TABLE_NAME||';' from dba_synonyms where owner = 'OWNER';

    (Substitute the schema found above, here PUBLIC, for 'OWNER'. Note that PUBLIC synonyms use the DROP PUBLIC SYNONYM / CREATE PUBLIC SYNONYM syntax rather than the owner-qualified form.)
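
    One way to run the generated statements is to spool them into a script and execute it; a sketch (the script name recreate_synonyms.sql is an assumption, and 'OWNER' is the placeholder schema name as above):

    set heading off feedback off pagesize 0 linesize 200
    spool recreate_synonyms.sql
    select 'drop synonym '||owner||'.'||synonym_name||';' from dba_synonyms where owner = 'OWNER';
    select 'create synonym '||owner||'.'||synonym_name||' for '||table_owner||'.'||table_name||';' from dba_synonyms where owner = 'OWNER';
    spool off
    @recreate_synonyms.sql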

    Run another count of invalid objects:

    SQL> select count(1) from dba_objects where status != 'VALID';
      COUNT(1)
    ----------
           238

    SQL> select distinct(owner) from  dba_objects where status != 'VALID' order by 1;
    OWNER
    ---------------
    USER09
    USER10TRAIN
    USER11TRAIN
    USER2010
    USER2011
    USER2012
    USER2013
    USER2014
    CSMIG
    PERFSTAT
    PUBLIC
    SYS
    SYSMAN
     
    13 rows selected.


    SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
    OWNER           OBJECT_NAME                    OBJECT_TYPE         STATUS
    --------------- ------------------------------ ------------------- -------
    USER09         APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER10TRAIN    APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER11TRAIN    APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2010       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2011       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2012       UCAS_APPLICANT_SCHOOL_CHANGE   PROCEDURE           INVALID
    USER2012       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2013       APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2014       APY_DRC_PKG                    PACKAGE BODY        INVALID
    PERFSTAT        STATSPACK                      PACKAGE BODY        INVALID
     
    10 rows selected.

    Recompile the invalid objects in each schema with UTL_RECOMP and re-check:

    SQL>  exec SYS.UTL_RECOMP.RECOMP_SERIAL('USER2010');
    PL/SQL procedure successfully completed.
    SQL>  select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('TIMH','STUARTB','PHILG','DAVIDS','DAVEF','SYSMAN','SYS','PUBLIC','CSMIG')  order by 1;
    OWNER           OBJECT_NAME                    OBJECT_TYPE         STATUS
    --------------- ------------------------------ ------------------- -------
    USER09         APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER10TRAIN    APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER11TRAIN    APY_DRC_PKG                    PACKAGE BODY        INVALID
    USER2012       UCAS_APPLICANT_SCHOOL_CHANGE   PROCEDURE           INVALID
    PERFSTAT        STATSPACK                      PACKAGE BODY        INVALID
     
    5 rows selected.
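
    Where many schemas are affected, UTL_RECOMP also offers a parallel variant that spreads the recompilation over several jobs; a sketch, using 4 recompile jobs against one of the schemas above (the thread count is an arbitrary choice):

    SQL> exec SYS.UTL_RECOMP.RECOMP_PARALLEL(4, 'USER2012');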
    Tidy-up exercises

    Delete files that are in the DATA_PUMP_DIR:

    SQL> SELECT * FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by 1;
    FILENAME             TYPE         FILESIZE MTIME
    -------------------- ---------- ---------- ---------
    HOST01.dmp       file       5368709120 16-JAN-14
    HOST02.dmp       file       5368709120 16-JAN-14
    HOST03.dmp       file       5368709120 16-JAN-14
    HOST04.dmp       file       5368709120 16-JAN-14
    HOST05.dmp       file       5368709120 16-JAN-14
    HOST06.dmp       file       5368709120 16-JAN-14
    HOST07.dmp       file       5368709120 16-JAN-14
    HOST08.dmp       file       5368709120 16-JAN-14
    HOST09.dmp       file       5368709120 16-JAN-14
    HOST10.dmp       file       5368709120 16-JAN-14
    HOST11.dmp       file       5368709120 16-JAN-14
    HOST12.dmp       file       5368709120 16-JAN-14
    HOST13.dmp       file       5368709120 16-JAN-14
    HOST14.dmp       file       5368709120 16-JAN-14
    HOST15.dmp       file       5368709120 16-JAN-14
    HOST16.dmp       file       2671546368 17-JAN-14
    HOST17.dmp       file       5368709120 16-JAN-14
    HOST18.dmp       file       4734468096 16-JAN-14
    HOST19.dmp       file       1061998592 17-JAN-14
    HOST20.dmp       file       5368709120 16-JAN-14
    HOST21.dmp       file       1077534720 16-JAN-14
    IMPORT.LOG           file             3923 16-JAN-14
    datapump/            directory        4096 20-JAN-14

    Use utl_file.fgetattr to check if a file exists:

    SQL> set serveroutput on
    SQL> DECLARE
           l_file_exists BOOLEAN;
           l_file_len    NUMBER;
           l_blocksize   BINARY_INTEGER;
         BEGIN
           utl_file.fgetattr(
             location    => 'DATA_PUMP_DIR',
             filename    => 'IMPORT.LOG',
             fexists     => l_file_exists,
             file_length => l_file_len,
             block_size  => l_blocksize);
           IF l_file_exists THEN
             dbms_output.put_line('File found, size=' || l_file_len);
           ELSE
             dbms_output.put_line('File not found.');
           END IF;
         END;
         /
     
    File found, size=3923
    PL/SQL procedure successfully completed.

    Use utl_file.fremove to delete a file:

    SQL> begin
           utl_file.fremove('DATA_PUMP_DIR', 'IMPORT.LOG');
         end;
         /
    PL/SQL procedure successfully completed.
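
    To clear out every dump file in one go, fremove can be combined with rds_file_util.listdir in a loop; a sketch, assuming the same DATA_PUMP_DIR directory object and the FILENAME/TYPE columns shown in the listing above:

    begin
      for f in (select filename
                from   table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
                where  type = 'file'
                and    filename like '%.dmp')
      loop
        utl_file.fremove('DATA_PUMP_DIR', f.filename);
      end loop;
    end;
    /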

    Run again using utl_file.fgetattr to check if a file exists:

    SQL> set serveroutput on
    SQL> DECLARE
           l_file_exists BOOLEAN;
           l_file_len    NUMBER;
           l_blocksize   BINARY_INTEGER;
         BEGIN
           utl_file.fgetattr(
             location    => 'DATA_PUMP_DIR',
             filename    => 'IMPORT.LOG',
             fexists     => l_file_exists,
             file_length => l_file_len,
             block_size  => l_blocksize);
           IF l_file_exists THEN
             dbms_output.put_line('File found, size=' || l_file_len);
           ELSE
             dbms_output.put_line('File not found.');
           END IF;
         END;
         /
     
    File not found.
    PL/SQL procedure successfully completed.

    ...and confirm by listing the directory again:

    SQL> SELECT * FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by 1;
    FILENAME             TYPE         FILESIZE MTIME
    -------------------- ---------- ---------- ---------
    HOST01.dmp       file       5368709120 16-JAN-14
    HOST02.dmp       file       5368709120 16-JAN-14
    HOST03.dmp       file       5368709120 16-JAN-14
    HOST04.dmp       file       5368709120 16-JAN-14
    HOST05.dmp       file       5368709120 16-JAN-14
    HOST06.dmp       file       5368709120 16-JAN-14
    HOST07.dmp       file       5368709120 16-JAN-14
    HOST08.dmp       file       5368709120 16-JAN-14
    HOST09.dmp       file       5368709120 16-JAN-14
    HOST10.dmp       file       5368709120 16-JAN-14
    HOST11.dmp       file       5368709120 16-JAN-14
    HOST12.dmp       file       5368709120 16-JAN-14
    HOST13.dmp       file       5368709120 16-JAN-14
    HOST14.dmp       file       5368709120 16-JAN-14
    HOST15.dmp       file       5368709120 16-JAN-14
    HOST16.dmp       file       2671546368 17-JAN-14
    HOST17.dmp       file       5368709120 16-JAN-14
    HOST18.dmp       file       4734468096 16-JAN-14
    HOST19.dmp       file       1061998592 17-JAN-14
    HOST20.dmp       file       5368709120 16-JAN-14
    HOST21.dmp       file       1077534720 16-JAN-14
    datapump/            directory        4096 20-JAN-14


    To retrieve any of the .dmp files from RDS you need a local Oracle instance to copy them to; you can then use the DBMS_FILE_TRANSFER.GET_FILE procedure to pull a file such as A.dmp from the RDS server down to your local server as ACopy.dmp:

    SQL> BEGIN
           DBMS_FILE_TRANSFER.GET_FILE(
             source_directory_object      => 'DATA_PUMP_DIR',
             source_file_name             => 'A.dmp',
             source_database              => 'to_rds',
             destination_directory_object => 'DATA_PUMP_DIR',
             destination_file_name        => 'ACopy.dmp');
         END;
         /
     
    You will need to create a database link to the RDS database (to_rds in the example) and a directory object on your local instance to receive the dump file (the example above uses DATA_PUMP_DIR; a name like LOCAL_DATA_PUMP_DIR works equally well, as long as it matches destination_directory_object).
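
    For reference, the prerequisites on the local instance might look something like this; the user name, password, TNS alias, and directory path are all placeholders for your own values:

    -- Database link from the local instance to the RDS database,
    -- matching the source_database name used by GET_FILE above.
    create database link to_rds
      connect to rds_master_user identified by rds_password
      using 'rds_tns_alias';

    -- Local directory object to receive the dump file, matching
    -- the destination_directory_object used by GET_FILE above.
    create or replace directory DATA_PUMP_DIR as '/u01/app/oracle/dpump';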