EC2 Server to Launch:
This can be done using an existing AMI (which will have the defaults already configured) or by creating one from scratch (just note that packages such as the Oracle client, Perl, etc. will have to be installed). Both the EC2 instance and the RDS instance must be in the same availability zone, and if you have your network configured in AWS, use the same VPCs and subnets.
Log in to your AWS console and create an EC2 instance:
- Select "My AMIs" on the left and search for your "in-house" one, or use one of the "Quick Start" ones
- Select General Purpose and m1.xlarge
- Select the correct Network/Subnet (we will use the testdev one – 10.10.XX.XX) in zone 1c
- Add your storage:
- Tag your instance:
- Select the test security groups from existing list:
- Review and Launch
- Set up PuTTY with the correct keys:
RDS Server to Launch:
- Select Standard Edition One
- Select No to Multi-AZ
- Add in the details:
- Add in the details again. Select the 11.2.0.3 DB engine and be sure to select 1c as the zone and the correct VPC. The security groups should be the same as (or similar to) those on the EC2 instance (check with your network administrator if not sure).
- No backups
- Review & Launch
- Make a note of the endpoint:
- Test Connection:
Log in to an EC2 AWS server in the same subnet/VPC and set up a TNS entry:
$ setdb_dbd12c.sh
oracle 12c Cloud DB RDBMS: cd $TNS_ADMIN
oracle 12c Cloud DB RDBMS: vi tnsnames.ora
RDSDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = RDSDB)
    )
  )
oracle 12c Cloud DB RDBMS: tnsping RDSDB
TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-JAN-2014 11:03:22
Copyright (c) 1997, 2011, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SID = RDSDB)))
OK (150 msec)
Perform Export on Host:
Run utlrp.sql on the source database and check how many invalid objects there are to start with:
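To run the recompile itself (not shown in the output below), a minimal sketch, assuming a privileged connection on the source database and the standard script location under ORACLE_HOME:
SQL> @?/rdbms/admin/utlrp.sql
Then count the invalid objects: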
SQL> select count(1) from dba_objects where status != 'VALID';
COUNT(1)
----------
6
SQL> select distinct(owner) from dba_objects where status != 'VALID' order by 1;
OWNER
------------------------------
USER09
USER2010
USER2011
USER2012
USER2013
SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID';
OWNER OBJECT_NAME OBJECT_TYPE STATUS
---------- ------------------------------ -------------------- -------
USER09 APY_DRC_PKG PACKAGE BODY INVALID
USER2012 UCAS_APPLICANT_SCHOOL_CHANGE PROCEDURE INVALID
USER2012 APY_DRC_PKG PACKAGE BODY INVALID
USER2011 APY_DRC_PKG PACKAGE BODY INVALID
USER2010 APY_DRC_PKG PACKAGE BODY INVALID
USER2013 APY_DRC_PKG PACKAGE BODY INVALID
6 rows selected.
In this instance we will be exporting from HOST1 on host-01 using the Data Pump directory /tmp/export/HOST.
host-01-oracle>pwd
/tmp/export/HOST
host-01-oracle>expdp system/system_HOST full=y parallel=4 logfile=HOST_dump.log job_name=HOST_rds filesize=5G dumpfile=HOST_%U.dmp directory=data_pump_dir
Export: Release 10.2.0.5.0 - 64bit Production on Wednesday, 15 January, 2014 9:49:09
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."HOST_RDS": system/******** full=y parallel=4 logfile=HOST_dump.log job_name=HOST_rdsfilesize=5G dumpfile=HOST_%U.dmp directory=data_pump_dir
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 137.9 GB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/CONTEXT
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
. . exported "SVR2011_SCH"."APY_APP_APPLICATION_S" 5.965 GB 2658468 rows
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
. . exported "SVR2013_SCH"."APY_APP_APPLICATION_S" 5.630 GB 2375422 rows
. . exported "SVR2012_SCH"."APY_APP_APPLICATION_S" 6.777 GB 2904690 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
Processing object type DATABASE_EXPORT/SCHEMA/JOB
Processing object type DATABASE_EXPORT/SCHEMA/REFRESH_GROUP
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
...
. . exported "SVR2013_SCH"."APY_APP_APPLICATION_S" 5.630 GB 2375422 rows
. . exported "SVR2012_SCH"."APY_APP_APPLICATION_S" 6.777 GB 2904690 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
"HOST_dump.log" [Read only] 2524 lines, 197927 characters
******************************************************************************
Dump file set for SYSTEM.HOST_RDS is:
/tmp/export/HOST/HOST_01.dmp
/tmp/export/HOST/HOST_02.dmp
/tmp/export/HOST/HOST_03.dmp
/tmp/export/HOST/HOST_04.dmp
/tmp/export/HOST/HOST_05.dmp
/tmp/export/HOST/HOST_06.dmp
/tmp/export/HOST/HOST_07.dmp
/tmp/export/HOST/HOST_08.dmp
/tmp/export/HOST/HOST_09.dmp
/tmp/export/HOST/HOST_10.dmp
/tmp/export/HOST/HOST_11.dmp
/tmp/export/HOST/HOST_12.dmp
/tmp/export/HOST/HOST_13.dmp
/tmp/export/HOST/HOST_14.dmp
/tmp/export/HOST/HOST_15.dmp
/tmp/export/HOST/HOST_16.dmp
/tmp/export/HOST/HOST_17.dmp
/tmp/export/HOST/HOST_18.dmp
/tmp/export/HOST/HOST_19.dmp
/tmp/export/HOST/HOST_20.dmp
/tmp/export/HOST/HOST_21.dmp
Job "SYSTEM"."HOST_RDS" successfully completed at 12:56:47
## TOTAL TIME 3hrs 7 minutes ##
/tmp/export/HOST
host-01-oracle>ls -lrt
total 196998528
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 09:53 HOST_01.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 09:56 HOST_02.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 09:57 HOST_03.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 11:05 HOST_05.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 11:05 HOST_06.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 11:05 HOST_07.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 11:58 HOST_04.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:04 HOST_10.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:08 HOST_09.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:10 HOST_13.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:12 HOST_14.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:13 HOST_15.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:36 HOST_17.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:47 HOST_08.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:50 HOST_12.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:54 HOST_20.dmp
-rw-rw---- 1 oracle oinstall 5368709120 Jan 15 12:54 HOST_11.dmp
-rw-rw---- 1 oracle oinstall 1061998592 Jan 15 12:56 HOST_19.dmp
-rw-rw---- 1 oracle oinstall 4734468096 Jan 15 12:56 HOST_18.dmp
-rw-rw---- 1 oracle oinstall 1077534720 Jan 15 12:56 HOST_21.dmp
-rw-rw---- 1 oracle oinstall 2671546368 Jan 15 12:56 HOST_16.dmp
-rw-rw-r-- 1 oracle oinstall 197927 Jan 15 12:56 HOST_dump.log
## TOTAL SIZE 100GB ##
Test Connection from host to EC2
host-01-oracle> ping 10.10.10.10
no answer from 10.10.10.10
Open up the firewall rules (EC2 security group ingress for the host's address range), then test the connection again:
host-01-oracle>ssh oracle@10.10.10.10
The authenticity of host '10.10.10.10 (10.10.10.10)' can't be established.
RSA key fingerprint is 0b:8f:9f:1d:8d:62:59:93:1c:02:e2:de:c3:59:ba:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.10.10' (RSA) to the list of known hosts.
oracle@10.10.10.10's password:
Last login: Wed Jan 15 14:36:43 2014 from 10.1.5.33
Copy files from host to EC2
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1 99G 3.2G 95G 4% /
none 7.4G 0 7.4G 0% /dev/shm
/dev/xvdd 99G 196M 94G 1% /home
/dev/xvde 99G 10G 84G 11% /u01/app/oracle
/dev/xvdf 985G 324G 612G 35% /u01/app/oracle/fast_recovery_area
/dev/xvdg 99G 188M 94G 1% /u05/goldengate
/dev/xvdh 493G 220G 248G 47% /u02/app/oracle/DATA/MIDDLE
/dev/xvdi 493G 219G 249G 47% /u02/app/oracle/DATA/MIDDLE1
/dev/xvdj 493G 219G 249G 47% /u02/app/oracle/DATA/MIDDLE2
/dev/xvdk 493G 219G 249G 47% /u02/app/oracle/DATA/MIDDLE3
/dev/xvdl 20G 1.7G 18G 9% /u03/app/oracle/REDO1
/dev/xvdm 20G 1.7G 18G 9% /u03/app/oracle/REDO2
/dev/xvdn 99G 4.8G 89G 6% /u04/app/oracle/ARCH
/dev/xvdo 30G 172M 28G 1% /jobdata
host-01-oracle>scp *.dmp oracle@10.10.10.10:/u01/app/oracle/fast_recovery_area/export/
oracle@10.10.10.10's password:
HOST_01.dmp 100% |************************************************| 5120 MB 07:12
HOST_02.dmp 100% |************************************************| 5120 MB 07:02
HOST_03.dmp 100% |************************************************| 5120 MB 07:30
host-01-oracle>scp *.dmp oracle@10.10.10.10:/u01/app/oracle/fast_recovery_area/export/rds
oracle@10.10.10.10's password:
HOST_16.dmp 100% |************************************************| 2547 MB 03:57
HOST_17.dmp 100% |************************************************| 5120 MB 08:38
HOST_18.dmp 100% |************************************************| 4515 MB 06:54
## Approx 7min/5GB chunk to upload ##
## Total Time Approx 3hrs ##
Test Connection from EC2 to RDS
Check the perl version installed (10.10.10.10):
$ setdb_MIDDLE.sh
oracle 11gR2 Middle:which perl
/u01/app/oracle/product/11.2.0/db_1/perl/bin/perl
oracle 11gR2 Middle:perl -v
This is perl, v5.10.0 built for x86_64-linux-thread-multi
Copyright 1987-2007, Larry Wall
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
oracle 11gR2 Middle:/u01/app/oracle/product/11.2.0/db_1/perl/bin/perl -le 'use DBD::Oracle; print $DBD::Oracle::VERSION'
1.20
Create a test script called test.pl and enter the following details into it:
oracle 11gR2 Middle:mkdir -p /u01/app/oracle/fast_recovery_area/export/rds
oracle 11gR2 Middle:cd /u01/app/oracle/fast_recovery_area/export/rds
oracle 11gR2 Middle:vi test.pl
--------
use DBI;
use warnings;
use strict;
# RDS instance info
my $RDS_PORT=1521;
my $RDS_HOST="rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com";
my $RDS_LOGIN="user/*********";
my $RDS_SID="RDSDB";
my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT,$RDS_LOGIN, '') || die ( $DBI::errstr . "\n") ;
my $sth = $conn->prepare("select * from dual");
$sth->execute;
print "Got here without dying\n";
--------
oracle 11gR2 Middle:chmod 755 test.pl
oracle 11gR2 Middle:perl test.pl
Got here without dying
Create the main script called copy_to_rds.pl and enter the following details into it:
oracle 11gR2 Middle:vi copy_to_rds.pl
use DBI;
use warnings;
use strict;
# RDS instance info
my $RDS_PORT=1521;
my $RDS_HOST="rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com";
my $RDS_LOGIN="user/********";
my $RDS_SID="RDSDB";
#The $ARGV[0] is a parameter you pass into the script
my $dirname = "DATA_PUMP_DIR";
my $fname = $ARGV[0];
my $data = "dummy";
my $chunk = 8192;
my $sql_open = "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;";
my $sql_write = "BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;";
my $sql_close = "BEGIN utl_file.fclose(perl_global.fh); END;";
my $sql_global = "create or replace package perl_global as fh utl_file.file_type; end;";
my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT,$RDS_LOGIN, '') || die ( $DBI::errstr . "\n") ;
my $updated=$conn->do($sql_global);
my $stmt = $conn->prepare ($sql_open);
$stmt->bind_param_inout(":dirname", \$dirname, 12);
$stmt->bind_param_inout(":fname", \$fname, 12);
$stmt->bind_param_inout(":chunk", \$chunk, 4);
$stmt->execute() || die ( $DBI::errstr . "\n");
open (INF, $fname) || die "\nCan't open $fname for reading: $!\n";
binmode(INF);
$stmt = $conn->prepare ($sql_write);
my %attrib = ('ora_type','24');
my $val = 1;
while ($val > 0) {
$val = read (INF, $data, $chunk);
$stmt->bind_param(":data", $data , \%attrib);
$stmt->execute() || die ( $DBI::errstr . "\n") ; };
die "Problem copying: $!\n" if $!;
close INF || die "Can't close $fname: $!\n";
$stmt = $conn->prepare ($sql_close);
$stmt->execute() || die ( $DBI::errstr . "\n") ;
oracle 11gR2 Middle:chmod 755 copy_to_rds.pl
Run it:
oracle 11gR2 Middle:perl copy_to_rds.pl
DBD::Oracle::st execute failed: ORA-29288: invalid file name
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at line 1 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;" with ParamValues: :chunk=8192, :fname=undef, :dirname='DATA_PUMP_DIR'] at copy_to_rds.pl line 31.
ORA-29288: invalid file name
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at line 1 (DBD ERROR: OCIStmtExecute)
Run it with a parameter (the dump file name):
oracle 11gR2 Middle:perl copy_to_rds.pl HOST_01.dmp
oracle 11gR2 Middle:
It is probably easier to create a shell script to copy all of the files:
#!/bin/bash
cd /u01/app/oracle/fast_recovery_area/export/rds
echo "Copying HOST_02.dmp"
perl copy_to_rds.pl HOST_02.dmp
echo "Copying HOST_03.dmp"
perl copy_to_rds.pl HOST_03.dmp
echo "Copying HOST_04.dmp"
perl copy_to_rds.pl HOST_04.dmp
echo "Copying HOST_05.dmp"
perl copy_to_rds.pl HOST_05.dmp
echo "Copying HOST_06.dmp"
perl copy_to_rds.pl HOST_06.dmp
echo "Copying HOST_07.dmp"
perl copy_to_rds.pl HOST_07.dmp
echo "Copying HOST_08.dmp"
perl copy_to_rds.pl HOST_08.dmp
echo "Copying HOST_09.dmp"
perl copy_to_rds.pl HOST_09.dmp
echo "Copying HOST_10.dmp"
perl copy_to_rds.pl HOST_10.dmp
echo "Copying HOST_11.dmp"
perl copy_to_rds.pl HOST_11.dmp
echo "Copying HOST_12.dmp"
perl copy_to_rds.pl HOST_12.dmp
echo "Copying HOST_13.dmp"
perl copy_to_rds.pl HOST_13.dmp
echo "Copying HOST_14.dmp"
perl copy_to_rds.pl HOST_14.dmp
echo "Copying HOST_15.dmp"
perl copy_to_rds.pl HOST_15.dmp
echo "Copying HOST_16.dmp"
perl copy_to_rds.pl HOST_16.dmp
echo "Copying HOST_17.dmp"
perl copy_to_rds.pl HOST_17.dmp
echo "Copying HOST_18.dmp"
perl copy_to_rds.pl HOST_18.dmp
echo "Copying HOST_19.dmp"
perl copy_to_rds.pl HOST_19.dmp
echo "Copying HOST_20.dmp"
perl copy_to_rds.pl HOST_20.dmp
echo "Copying HOST_21.dmp"
perl copy_to_rds.pl HOST_21.dmp
One way to check that the files are going to the correct location is to watch the CloudWatch free-space metric for the RDS instance.
Another way is to use the AWS-supplied packages to query the DATA_PUMP_DIR directory. Log in to the RDS instance as user@RDSDB and run the following:
SQL> select grantee, privilege from dba_tab_privs where table_name='DATA_PUMP_DIR';
GRANTEE PRIVILEGE
------------------------------ -----------------
DBA READ
EXP_FULL_DATABASE READ
IMP_FULL_DATABASE READ
RDSADMIN READ
DBA WRITE
EXP_FULL_DATABASE WRITE
IMP_FULL_DATABASE WRITE
RDSADMIN WRITE
8 rows selected.
SQL> SELECT * FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by 1;
FILENAME TYPE FILESIZE MTIME
-------------------- ---------- ---------- ---------
HOST_01.dmp file 5368709120 16-JAN-14
HOST_02.dmp file 5368709120 16-JAN-14
HOST_03.dmp file 5368709120 16-JAN-14
HOST_04.dmp file 5368709120 16-JAN-14
HOST_05.dmp file 5368709120 16-JAN-14
HOST_06.dmp file 5368709120 16-JAN-14
HOST_07.dmp file 5368709120 16-JAN-14
HOST_08.dmp file 5368709120 16-JAN-14
HOST_09.dmp file 5368709120 16-JAN-14
HOST_10.dmp file 5368709120 16-JAN-14
HOST_11.dmp file 5368709120 16-JAN-14
HOST_12.dmp file 5368709120 16-JAN-14
HOST_13.dmp file 5368709120 16-JAN-14
HOST_14.dmp file 5368709120 16-JAN-14
HOST_15.dmp file 5368709120 16-JAN-14
HOST_16.dmp file 2671546368 17-JAN-14
HOST_17.dmp file 5368709120 16-JAN-14
HOST_18.dmp file 4734468096 16-JAN-14
HOST_19.dmp file 1061998592 17-JAN-14
HOST_20.dmp file 5368709120 16-JAN-14
HOST_21.dmp file 1077534720 16-JAN-14
datapump/ directory 4096 20-JAN-14
22 rows selected.
Check these sizes against the sizes from the EC2 instance.
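A quick sanity check on the RDS side is to total up what has arrived; a minimal sketch (the FILENAME, TYPE and FILESIZE columns are as returned by listdir above):
SELECT count(*) AS dump_files,
       round(sum(filesize)/1024/1024/1024) AS total_gb
  FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
 WHERE type = 'file'
   AND filename LIKE 'HOST%.dmp';
The totals should match the roughly 100GB exported on the host.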
Proceed to creating the same tablespaces in RDS as on the host.
Export the metadata DDL from the host using:
set heading off;
set echo off;
set pages 999;
set long 90000;
spool tablespace.sql
select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;
spool off
Edit the "tablespace.sql" script so that it only contains simple "CREATE TABLESPACE <name>;" statements (the rest can be ignored, since RDS manages the underlying datafiles), i.e.:
CREATE TABLESPACE "USER_INDEX08";
CREATE TABLESPACE "USER_DATA08";
CREATE TABLESPACE "USER_DATA09";
CREATE TABLESPACE "USER_INDEX09";
CREATE TABLESPACE "JFORUM_DATA";
Then create these tablespaces in the RDS instance:
oracle 12c Cloud DB RDBMS: sqlplus user/*********@RDSDB
SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 16 09:11:44 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
SQL> @tablespace.sql
Tablespace created
...
.
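Optionally, confirm that the tablespaces on RDS now match the source; a minimal sketch to compare against the list spooled from the host:
SELECT tablespace_name
  FROM dba_tablespaces
 ORDER BY 1;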
Import the data into RDS
Import from within the Amazon RDS instance using the DBMS_DATAPUMP package and submit the job using PL/SQL. It may be worth taking a backup of your RDS instance before continuing.
declare
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
begin
-- Create a (user-named) Data Pump job to do a "full" import (everything in the dump file without filtering).
h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'RDSDB4');
dbms_datapump.add_file(handle => h1, filename => 'IMPORT.LOG', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
-- Specify the dump files for the job (using the handle just returned)
-- and directory object, which must already be defined and accessible
-- to the user running this procedure. These are the dump files created by
-- the export operation in the first example.
dbms_datapump.add_file(handle => h1, filename => 'HOST_01.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_02.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_03.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_04.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_05.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_06.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_07.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_08.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_09.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_10.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_11.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_12.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_13.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_14.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_15.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_16.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_17.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_18.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_19.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_20.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => 'HOST_21.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
-- add your datapump parameters
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
-- Start the job. An exception is returned if something is not set up properly.
dbms_datapump.start_job(handle => h1, skip_current => 0);
-- The import job should now be running. In the following loop, the job
-- is monitored until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(handle => h1);
exception
when others then
raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
end;
/
oracle 12c Cloud DB RDBMS: sqlplus user/********@RDSDB
SQL*Plus: Release 11.2.0.3.0 Production on Tue Jan 21 08:40:34 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
SQL>set serveroutput on
SQL>set serverout on
SQL> @import.sql
Check the state of the submitted job from another session:
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state
SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE ATTACHED_SESSIONS
---------- -------------------- ----------- ----------- ----------- -----------------
user RDSDB4 IMPORT FULL EXECUTING 1
If you need to stop/kill the job:
SQL> SET serveroutput on
SQL> SET lines 100
SQL> declare
2 h1 number;
3 begin
4 h1 := DBMS_DATAPUMP.ATTACH('RDSDB4','user');
5 DBMS_DATAPUMP.STOP_JOB (h1);
6 end;
7 /
When the import session eventually completes, the tail of its output will look like this:
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
ORA-39082: Object type TRIGGER:"SYSMAN"."MGMT_CREDS_INS_UPD" created with compilation warnings
ORA-39082: Object type TRIGGER:"SYSMAN"."MGMT_CREDS_INS_UPD" created with compilation warnings
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
Processing object type DATABASE_EXPORT/SCHEMA/JOB
Processing object type DATABASE_EXPORT/SCHEMA/REFRESH_GROUP
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
Job "user"."RDSDB4" completed with 374 error(s) at 13:38:05
Job has completed
Final job state = COMPLETED
PL/SQL procedure successfully completed.
## APPROX 3 hrs ##
To check progress, monitor the CloudWatch graphs for the RDS instance.
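Progress can also be estimated from inside the database; a minimal sketch using V$SESSION_LONGOPS, where the Data Pump master process records its estimated work under the job name (RDSDB4 in this example):
SELECT opname, sofar, totalwork,
       round(sofar/totalwork*100, 1) AS pct_done
  FROM v$session_longops
 WHERE opname = 'RDSDB4'
   AND totalwork > 0
   AND sofar <> totalwork;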
To check the log file, create an external table over the log file and select its contents:
CREATE TABLE datapumplog
(
text VARCHAR2(400)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY DATA_PUMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
NOBADFILE NODISCARDFILE NOLOGFILE
fields terminated by 0x'09'
MISSING FIELD VALUES ARE NULL
(
text
)
)
LOCATION ( 'IMPORT.LOG' )
)
REJECT LIMIT UNLIMITED;
SQL>set pages 5000
SQL>set lines 300
SQL>spool import.log
SQL>select * from datapumplog;
You may also be able to read the contents of the log directly:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','IMPORT.LOG'));
Recompile the schemas (this has to be done individually):
oracle 12c Cloud DB RDBMS: sqlplus user/*********@RDSDB
SQL*Plus: Release 11.2.0.3.0 Production on Tue Jan 21 14:19:37 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
SQL> set serverout on
SQL> set serveroutput on
SQL> select count(1) from dba_objects where status != 'VALID';
1250 rows selected
SQL> select distinct(owner) from dba_objects where status != 'VALID' order by 1;
OWNER
--------------------
USER09
USER10TRAIN
USER11TRAIN
USER2010
USER2011
USER2012
USER2013
USER2014
CSMIG
NETUSER2012
NETUSER2013
NETUSER2014
FORUM2010
FORUM2011
FORUM2012
FORUM2013
SVR2011
SVR2011_SCH
SVR2012
SVR2013
SVR2013BATCH
SVR2014
SVR2014BATCH
SVRBATCH2012
SVRREADONLY
SVRSEND
SVRSEND_SCH
PAYMENT
PERFSTAT
PUBLIC
DISK
DISK_SCH
SYS
SYSMAN
TARIFF_FORUM
SQL> exec SYS.UTL_RECOMP.RECOMP_SERIAL('DISK_SCH');
PL/SQL procedure successfully completed.
SQL> show errors
No errors.
Do this for each schema above.
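Rather than typing each call by hand, the recompile statements can be generated from the invalid-owner list; a minimal sketch (review the output before running it, and leave the Oracle-maintained schemas alone):
SELECT 'exec SYS.UTL_RECOMP.RECOMP_SERIAL(''' || owner || ''');'
  FROM (SELECT DISTINCT owner
          FROM dba_objects
         WHERE status != 'VALID'
           AND owner NOT IN ('SYS', 'PUBLIC', 'SYSMAN', 'CSMIG'));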
SQL> select count(1) from dba_objects where status != 'VALID';
513 rows selected
SQL> select distinct(owner) from dba_objects where status != 'VALID' order by 1;
OWNER
--------------------
USER09
USER10TRAIN
USER11TRAIN
USER2010
USER2011
USER2012
USER2013
USER2014
CSMIG
PERFSTAT
PUBLIC
DISK_SCH
SYS
SYSMAN
16 rows selected.
Now check the objects involved for each schema.
Some may be invalid due to database links or invalid materialized views:
SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
OWNER OBJECT_NAME OBJECT_TYPE STATUS
-------------------- --------------------------------------------- -------------------- -------
USER09 ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER09 APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER09 ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER09 ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER09 ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER09 ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER09 APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER09 APY_DRC_PKG PACKAGE BODY INVALID
USER09 APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER09 APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER10TRAIN APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER10TRAIN APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER10TRAIN ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER10TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER10TRAIN ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER10TRAIN ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER10TRAIN ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER10TRAIN ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER10TRAIN APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER10TRAIN APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER11TRAIN ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER11TRAIN ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER11TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER11TRAIN ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER11TRAIN APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER11TRAIN APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER11TRAIN APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER11TRAIN ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER11TRAIN ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER11TRAIN APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER2010 APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER2010 ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER2010 ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER2010 ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER2010 APY_DRC_PKG PACKAGE BODY INVALID
USER2010 ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER2010 ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER2010 APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER2010 APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER2010 APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER2011 ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER2011 ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER2011 ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER2011 APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER2011 APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER2011 ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER2011 APY_DRC_PKG PACKAGE BODY INVALID
USER2011 ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER2011 APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER2011 APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER2012 UCAS_APPLICANT_SCHOOL_CHANGE PROCEDURE INVALID
USER2012 APY_DRC_PKG PACKAGE BODY INVALID
USER2012 APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER2012 APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER2012 APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER2012 APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER2012 ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER2012 ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER2012 ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER2012 ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER2012 ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER2013 APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
USER2013 ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER2013 ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER2013 ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER2013 APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER2013 APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER2013 APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER2013 APY_DRC_PKG PACKAGE BODY INVALID
USER2013 ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER2013 ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER2014 APY_DRC_PKG PACKAGE BODY INVALID
USER2014 ARF_VAV_VALID_VALUES_MVW MATERIALIZED VIEW INVALID
USER2014 ARF_VVD_DOMAINS_MVW MATERIALIZED VIEW INVALID
USER2014 ARF_TTL_TITLE_MVW MATERIALIZED VIEW INVALID
USER2014 ARF_GLP_GROUP_LIST_PARAM_MVW MATERIALIZED VIEW INVALID
USER2014 ARF_CTY_COUNTRY_MVW MATERIALIZED VIEW INVALID
USER2014 APY_SAD_SCHOOL_ADDRESSES_MVW MATERIALIZED VIEW INVALID
USER2014 APY_APT_APPLICANT_MVW MATERIALIZED VIEW INVALID
USER2014 APY_CGP_CONTACT_GROUP_MVW MATERIALIZED VIEW INVALID
USER2014 APY_CHC_CHOICE_MVW MATERIALIZED VIEW INVALID
PERFSTAT STATSPACK PACKAGE BODY INVALID
DISK_SCH MVW_CTY_COUNTRY MATERIALIZED VIEW INVALID
DISK_SCH MVW_QET_QUALENT MATERIALIZED VIEW INVALID
84 rows selected.
Create a database link back to the middle instance in the cloud for all of the schemas involved:
SQL> CREATE DATABASE LINK middle.ucas.ac.uk
CONNECT TO middle
IDENTIFIED BY pwd_1_middle
USING '(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.10)(PORT = 1521)))(CONNECT_DATA = (SID = MIDDLE)))';
Database link created.
SQL> select sysdate from dual@middle.ucas.ac.uk;
SYSDATE
---------
22-JAN-14
Then recompile the objects again. Generate the compile statements for the invalid materialized views and run the output:
SQL> select 'alter materialized view '||owner||'.'||object_name||' compile;' from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC') and OBJECT_TYPE='MATERIALIZED VIEW';
alter materialized view USER09.APY_CHC_CHOICE_MVW compile;
alter materialized view USER09.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER09.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER09.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER09.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER09.ARF_TTL_TITLE_MVW compile;
alter materialized view USER09.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER09.ARF_VVD_DOMAINS_MVW compile;
alter materialized view USER10TRAIN.ARF_VVD_DOMAINS_MVW compile;
alter materialized view USER10TRAIN.APY_CHC_CHOICE_MVW compile;
alter materialized view USER10TRAIN.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER10TRAIN.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER10TRAIN.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER10TRAIN.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER10TRAIN.ARF_TTL_TITLE_MVW compile;
alter materialized view USER10TRAIN.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER2010.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER2010.ARF_TTL_TITLE_MVW compile;
alter materialized view USER2010.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER2010.ARF_VVD_DOMAINS_MVW compile;
alter materialized view USER2010.APY_APT_APPLICANT_MVW compile;
alter materialized view USER10TRAIN.APY_APT_APPLICANT_MVW compile;
alter materialized view USER2012.APY_APT_APPLICANT_MVW compile;
alter materialized view USER2012.APY_CHC_CHOICE_MVW compile;
alter materialized view USER2011.APY_CHC_CHOICE_MVW compile;
alter materialized view USER2011.APY_APT_APPLICANT_MVW compile;
alter materialized view USER11TRAIN.ARF_VVD_DOMAINS_MVW compile;
alter materialized view USER11TRAIN.APY_APT_APPLICANT_MVW compile;
alter materialized view USER11TRAIN.APY_CHC_CHOICE_MVW compile;
alter materialized view USER11TRAIN.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER11TRAIN.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER11TRAIN.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER11TRAIN.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER11TRAIN.ARF_TTL_TITLE_MVW compile;
alter materialized view USER11TRAIN.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER09.APY_APT_APPLICANT_MVW compile;
alter materialized view USER2010.APY_CHC_CHOICE_MVW compile;
alter materialized view USER2010.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER2010.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER2010.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER2011.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER2011.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER2011.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER2011.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER2011.ARF_TTL_TITLE_MVW compile;
alter materialized view USER2011.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER2011.ARF_VVD_DOMAINS_MVW compile;
alter materialized view USER2013.APY_CHC_CHOICE_MVW compile;
alter materialized view USER2013.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER2013.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER2013.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER2013.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER2013.ARF_TTL_TITLE_MVW compile;
alter materialized view USER2013.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER2013.ARF_VVD_DOMAINS_MVW compile;
alter materialized view USER2012.APY_CGP_CONTACT_GROUP_MVW compile;
alter materialized view USER2012.APY_SAD_SCHOOL_ADDRESSES_MVW compile;
alter materialized view USER2012.ARF_CTY_COUNTRY_MVW compile;
alter materialized view USER2012.ARF_GLP_GROUP_LIST_PARAM_MVW compile;
alter materialized view USER2012.ARF_TTL_TITLE_MVW compile;
alter materialized view USER2012.ARF_VAV_VALID_VALUES_MVW compile;
alter materialized view USER2012.ARF_VVD_DOMAINS_MVW compile;
alter materialized view DISK_SCH.MVW_CTY_COUNTRY compile;
alter materialized view DISK_SCH.MVW_QET_QUALENT compile;
SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
OWNER OBJECT_NAME OBJECT_TYPE STATUS
--------------- ------------------------------ ------------------- -------
USER09 APY_DRC_PKG PACKAGE BODY INVALID
USER10TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER11TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER2010 APY_DRC_PKG PACKAGE BODY INVALID
USER2011 APY_DRC_PKG PACKAGE BODY INVALID
USER2012 UCAS_APPLICANT_SCHOOL_CHANGE PROCEDURE INVALID
USER2012 APY_DRC_PKG PACKAGE BODY INVALID
USER2013 APY_DRC_PKG PACKAGE BODY INVALID
USER2014 APY_DRC_PKG PACKAGE BODY INVALID
PERFSTAT STATSPACK PACKAGE BODY INVALID
10 rows selected.
Drop and re-create the invalid synonyms for the affected users (substitute the relevant schema name for 'OWNER' in the queries below):
SQL> select distinct(owner) from dba_objects where status != 'VALID' and object_type='SYNONYM';
OWNER
---------------
PUBLIC
select 'create synonym '||OWNER||'.'||SYNONYM_NAME||' for '||TABLE_OWNER||'.'||TABLE_NAME||';' from dba_synonyms where owner = 'OWNER';
select 'drop synonym '||OWNER||'.'||SYNONYM_NAME||';' from dba_synonyms where owner = 'OWNER';
Run another count of invalid objects:
SQL> select count(1) from dba_objects where status != 'VALID';
COUNT(1)
----------
238
SQL> select distinct(owner) from dba_objects where status != 'VALID' order by 1;
OWNER
---------------
USER09
USER10TRAIN
USER11TRAIN
USER2010
USER2011
USER2012
USER2013
USER2014
CSMIG
PERFSTAT
PUBLIC
SYS
SYSMAN
13 rows selected.
SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
OWNER OBJECT_NAME OBJECT_TYPE STATUS
--------------- ------------------------------ ------------------- -------
USER09 APY_DRC_PKG PACKAGE BODY INVALID
USER10TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER11TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER2010 APY_DRC_PKG PACKAGE BODY INVALID
USER2011 APY_DRC_PKG PACKAGE BODY INVALID
USER2012 UCAS_APPLICANT_SCHOOL_CHANGE PROCEDURE INVALID
USER2012 APY_DRC_PKG PACKAGE BODY INVALID
USER2013 APY_DRC_PKG PACKAGE BODY INVALID
USER2014 APY_DRC_PKG PACKAGE BODY INVALID
PERFSTAT STATSPACK PACKAGE BODY INVALID
10 rows selected.
Check the packages and fix any problems:
SQL> exec SYS.UTL_RECOMP.RECOMP_SERIAL('USER2010');
PL/SQL procedure successfully completed.
SQL> select owner, object_name, object_type, status from dba_objects where status != 'VALID' and owner not in ('TIMH','STUARTB','PHILG','DAVIDS','DAVEF','SYSMAN','SYS','PUBLIC','CSMIG') order by 1;
OWNER OBJECT_NAME OBJECT_TYPE STATUS
--------------- ------------------------------ ------------------- -------
USER09 APY_DRC_PKG PACKAGE BODY INVALID
USER10TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER11TRAIN APY_DRC_PKG PACKAGE BODY INVALID
USER2012 UCAS_APPLICANT_SCHOOL_CHANGE PROCEDURE INVALID
PERFSTAT STATSPACK PACKAGE BODY INVALID
5 rows selected.
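To see why the remaining package bodies will not compile, the compilation errors can be pulled from DBA_ERRORS; a minimal sketch using the object name from the listing above:
SELECT owner, name, type, line, position, text
  FROM dba_errors
 WHERE name = 'APY_DRC_PKG'
 ORDER BY owner, sequence;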
Tidy Up exercises
Delete files that are in the DATA_PUMP_DIR:
SQL> SELECT * FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by 1;
FILENAME TYPE FILESIZE MTIME
-------------------- ---------- ---------- ---------
HOST01.dmp file 5368709120 16-JAN-14
HOST02.dmp file 5368709120 16-JAN-14
HOST03.dmp file 5368709120 16-JAN-14
HOST04.dmp file 5368709120 16-JAN-14
HOST05.dmp file 5368709120 16-JAN-14
HOST06.dmp file 5368709120 16-JAN-14
HOST07.dmp file 5368709120 16-JAN-14
HOST08.dmp file 5368709120 16-JAN-14
HOST09.dmp file 5368709120 16-JAN-14
HOST10.dmp file 5368709120 16-JAN-14
HOST11.dmp file 5368709120 16-JAN-14
HOST12.dmp file 5368709120 16-JAN-14
HOST13.dmp file 5368709120 16-JAN-14
HOST14.dmp file 5368709120 16-JAN-14
HOST15.dmp file 5368709120 16-JAN-14
HOST16.dmp file 2671546368 17-JAN-14
HOST17.dmp file 5368709120 16-JAN-14
HOST18.dmp file 4734468096 16-JAN-14
HOST19.dmp file 1061998592 17-JAN-14
HOST20.dmp file 5368709120 16-JAN-14
HOST21.dmp file 1077534720 16-JAN-14
IMPORT.LOG file 3923 16-JAN-14
datapump/ directory 4096 20-JAN-14
Use utl_file.fgetattr to check if a file exists:
SQL>set serverout on
SQL>set serveroutput on
SQL>SET SERVEROUTPUT ON
SQL>DECLARE
l_file_exists BOOLEAN;
l_file_len NUMBER;
l_blocksize BINARY_INTEGER;
BEGIN
utl_file.fgetattr(
location => 'DATA_PUMP_DIR',
filename => 'IMPORT.LOG',
fexists => l_file_exists,
file_length => l_file_len,
block_size => l_blocksize);
IF l_file_exists THEN
dbms_output.put_line('File found, size=' || l_file_len);
ELSE dbms_output.put_line('File not found.');
END IF;
END;
/
File found, size=3923
PL/SQL procedure successfully completed.
Use utl_file.fremove to delete a file:
begin
utl_file.fremove ('DATA_PUMP_DIR','IMPORT.LOG');
end fremove;
/
PL/SQL procedure successfully completed.
Run again using utl_file.fgetattr to check if a file exists:
SQL>set serverout on
SQL>set serveroutput on
SQL>SET SERVEROUTPUT ON
SQL>DECLARE
l_file_exists BOOLEAN;
l_file_len NUMBER;
l_blocksize BINARY_INTEGER;
BEGIN
utl_file.fgetattr(
location => 'DATA_PUMP_DIR',
filename => 'IMPORT.LOG',
fexists => l_file_exists,
file_length => l_file_len,
block_size => l_blocksize);
IF l_file_exists THEN
dbms_output.put_line('File found, size=' || l_file_len);
ELSE dbms_output.put_line('File not found.');
END IF;
END;
/
File not found.
PL/SQL procedure successfully completed.
...and confirm by listing again:
SQL> SELECT * FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) order by 1;
FILENAME TYPE FILESIZE MTIME
-------------------- ---------- ---------- ---------
HOST01.dmp file 5368709120 16-JAN-14
HOST02.dmp file 5368709120 16-JAN-14
HOST03.dmp file 5368709120 16-JAN-14
HOST04.dmp file 5368709120 16-JAN-14
HOST05.dmp file 5368709120 16-JAN-14
HOST06.dmp file 5368709120 16-JAN-14
HOST07.dmp file 5368709120 16-JAN-14
HOST08.dmp file 5368709120 16-JAN-14
HOST09.dmp file 5368709120 16-JAN-14
HOST10.dmp file 5368709120 16-JAN-14
HOST11.dmp file 5368709120 16-JAN-14
HOST12.dmp file 5368709120 16-JAN-14
HOST13.dmp file 5368709120 16-JAN-14
HOST14.dmp file 5368709120 16-JAN-14
HOST15.dmp file 5368709120 16-JAN-14
HOST16.dmp file 2671546368 17-JAN-14
HOST17.dmp file 5368709120 16-JAN-14
HOST18.dmp file 4734468096 16-JAN-14
HOST19.dmp file 1061998592 17-JAN-14
HOST20.dmp file 5368709120 16-JAN-14
HOST21.dmp file 1077534720 16-JAN-14
datapump/ directory 4096 20-JAN-14
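The dump files themselves can be removed the same way once the import has been fully verified; a minimal sketch that loops over the listdir output above rather than issuing one fremove call per file (review the file list first, as this removes every .dmp file in the directory):
BEGIN
  FOR f IN (SELECT filename
              FROM table(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
             WHERE type = 'file'
               AND filename LIKE '%.dmp')
  LOOP
    utl_file.fremove('DATA_PUMP_DIR', f.filename);
  END LOOP;
END;
/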
To retrieve any of the .dmp files (or any new ones) back to your local instance (you need to be running a local Oracle instance), you can use the DBMS_FILE_TRANSFER.GET_FILE procedure to copy, for example, A.dmp on the RDS server to ACopy.dmp on your local server:
SQL > BEGIN
DBMS_FILE_TRANSFER.GET_FILE(source_directory_object => 'DATA_PUMP_DIR', source_file_name => 'A.dmp', source_database => 'to_rds', destination_directory_object => 'DATA_PUMP_DIR', destination_file_name => 'ACopy.dmp');
END;
/
You will need to create a database link to the RDS database (to_rds) and a directory object on your local instance where you want to copy the dump file (LOCAL_DATA_PUMP_DIR).
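A minimal sketch of those two prerequisites, run on the local instance as a suitably privileged user (the user name, password and directory path are placeholders for illustration only):
CREATE DATABASE LINK to_rds
  CONNECT TO some_user IDENTIFIED BY some_password
  USING '(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rdsdb.hhhgggjfjfj.eu-west-1.rds.amazonaws.com)(PORT = 1521))(CONNECT_DATA = (SID = RDSDB)))';

CREATE DIRECTORY LOCAL_DATA_PUMP_DIR AS '/u01/app/oracle/local_dp';
GRANT READ, WRITE ON DIRECTORY LOCAL_DATA_PUMP_DIR TO some_user;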