Thursday, October 23, 2014

Create Database Manually – Step by Step Instruction

Today I'll show you how to create a database without the Database Configuration Assistant (DBCA). We're going to create the database manually, using the CREATE DATABASE statement. Sometimes we are not able to use a GUI to create a database, so we should know the database creation syntax from SQL*Plus. To create a database manually, follow the steps below:
1. First, export the environment variables. To have them exported automatically for every session, add the following lines to the /home/oracle/.bashrc file:
export ORACLE_SID=kamran
export ORACLE_HOME=/home/oracle/oracle/product/10.2.0/db_1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
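
To verify the variables in a new session, you can source the file and echo them back (a quick sanity check, assuming a Bash shell):

source /home/oracle/.bashrc
echo $ORACLE_SID $ORACLE_HOME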

2. Create a parameter file and modify it by setting the minimum required parameters:
*.db_name=kamran
*.db_block_size=8192
*.sga_target=1677721600
*.undo_management='AUTO'
*.control_files=('/home/oracle/oracle/product/10.2.0/control01.ctl')
*.user_dump_dest='/home/oracle/oracle/product/10.2.0/udump'
*.background_dump_dest='/home/oracle/oracle/product/10.2.0/bdump'
*.core_dump_dest='/home/oracle/oracle/product/10.2.0/cdump'
After creating this parameter file, create the folders below in the /home/oracle/oracle/product/10.2.0/ directory (see the shell commands after this list). Three of them are dump folders, needed for trace files and the alert.log file; we're going to keep the control files and datafiles in the oradata folder.
- oradata
- udump
- bdump
- cdump
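
For example, from the shell:

mkdir -p /home/oracle/oracle/product/10.2.0/oradata
mkdir -p /home/oracle/oracle/product/10.2.0/udump
mkdir -p /home/oracle/oracle/product/10.2.0/bdump
mkdir -p /home/oracle/oracle/product/10.2.0/cdump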

3. Create a server parameter file (SPFILE) from this parameter file and start the instance in NOMOUNT mode:
CREATE SPFILE FROM PFILE='/home/oracle/oracle/product/10.2.0/init.ora';
STARTUP NOMOUNT
Now our instance is started, the SGA is allocated, and the background processes are running.
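
You can verify this from SQL*Plus; in NOMOUNT mode the instance status should be STARTED:

SELECT status FROM v$instance;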

4. To create the new database, use the CREATE DATABASE statement. As a result, the files below will be created:
- redo log files
- system01.dbf and sysaux.dbf (datafiles for the SYSTEM and SYSAUX tablespaces)
- undo01.dbf (for the UNDO tablespace)
- temp_tbs.dbf (for the default TEMPORARY tablespace)
- users.dbf (for the DEFAULT permanent tablespace)

-- ######## Database Creation Code ########
CREATE DATABASE kamran
    USER SYS IDENTIFIED BY kamran
    USER SYSTEM IDENTIFIED BY kamran
    LOGFILE GROUP 1 ('/home/oracle/oracle/product/10.2.0/oradata/redo01.log') SIZE 50M,
            GROUP 2 ('/home/oracle/oracle/product/10.2.0/oradata/redo02.log') SIZE 50M,
            GROUP 3 ('/home/oracle/oracle/product/10.2.0/oradata/redo03.log') SIZE 50M
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXLOGHISTORY 1
    MAXDATAFILES 100
    MAXINSTANCES 1
    CHARACTER SET US7ASCII
    NATIONAL CHARACTER SET AL16UTF16
    EXTENT MANAGEMENT LOCAL
    DATAFILE '/home/oracle/oracle/product/10.2.0/oradata/system01.dbf' SIZE 325M REUSE
    SYSAUX DATAFILE '/home/oracle/oracle/product/10.2.0/oradata/sysaux.dbf' SIZE 400M REUSE
    DEFAULT TABLESPACE tbs_1 DATAFILE '/home/oracle/oracle/product/10.2.0/oradata/users.dbf' SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
    DEFAULT TEMPORARY TABLESPACE tempts1 TEMPFILE '/home/oracle/oracle/product/10.2.0/oradata/temp_tbs.dbf' SIZE 20M REUSE
    UNDO TABLESPACE undotbs DATAFILE '/home/oracle/oracle/product/10.2.0/oradata/undo01.dbf' SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
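
If the statement succeeds, the database is created and left open. A quick check while still connected as SYS:

SELECT name, created FROM v$database;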

5. Run the scripts necessary to build the data dictionary views, synonyms, and PL/SQL packages:
CONNECT / AS SYSDBA
SQL>@$ORACLE_HOME/rdbms/admin/catalog.sql
SQL>@$ORACLE_HOME/rdbms/admin/catproc.sql
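
Once these scripts complete, a quick sanity check is to count invalid objects; if any show up, the standard $ORACLE_HOME/rdbms/admin/utlrp.sql script can recompile them:

SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';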

6. Shut down the instance and start up the database. Your database is ready for use!
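
From SQL*Plus this is simply:

SHUTDOWN IMMEDIATE
STARTUP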

Monday, October 13, 2014

Create Partition in an Old Table in Oracle

You can partition a non-partitioned table in three different ways:
A) Export/import method
B) Insert with a subquery method
C) Partition exchange method
Any of these three methods will create a partitioned table from an existing non-partitioned table.

A. Export/import method

1) Export your table:

exp usr/pswd tables=numbers file=exp.dmp

2) Drop the table:

drop table numbers;

3) Recreate the table with partitions:

create table numbers (qty number(3), name varchar2(15))
partition by range (qty)
(partition p1 values less than (501),
partition p2 values less than (maxvalue));

4) Import the table with ignore=y:

imp usr/pswd file=exp.dmp ignore=y

The ignore=y flag causes the import to skip table creation and continue loading all rows.
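
To confirm the imported rows landed in the expected partitions, you can query each partition directly, for example:

SELECT COUNT(*) FROM numbers PARTITION (p1);
SELECT COUNT(*) FROM numbers PARTITION (p2);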
B. Insert with a subquery method

1) Create a partitioned table:

create table partbl (qty number(3), name varchar2(15))
partition by range (qty)
(partition p1 values less than (501),
partition p2 values less than (maxvalue));

2) Insert into the partitioned table with a subquery from the
non-partitioned table:

insert into partbl (qty, name)
select * from origtbl;

3) If you want the partitioned table to have the same name as the
original table, then drop the original table and rename the
new table:

drop table origtbl;
alter table partbl rename to origtbl;
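
If origtbl is large, a direct-path insert can make step 2 considerably faster; a common variation of the same statement:

INSERT /*+ APPEND */ INTO partbl (qty, name)
SELECT * FROM origtbl;
COMMIT;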

C. Partition Exchange method
ALTER TABLE ... EXCHANGE PARTITION can be used to convert a partition (or subpartition) into a non-partitioned table, and a non-partitioned table into a partition (or subpartition) of a partitioned table, by exchanging their data and index segments.
1) Create a table (e.g. dummy_t) with CREATE TABLE ... AS SELECT, holding the rows for the required partition.
2) ALTER TABLE ... EXCHANGE PARTITION ... WITH TABLE dummy_t;

Example
-------

SQL> CREATE TABLE p_emp
2 (sal NUMBER(7,2))
3 PARTITION BY RANGE(sal)
4 (partition emp_p1 VALUES LESS THAN (2000),
5 partition emp_p2 VALUES LESS THAN (4000));
Table created.


SQL> SELECT * FROM emp;
EMPNO ENAME JOB MGR HIREDATE SAL
--------- ---------- --------- --------- --------- ---------
7369 SMITH CLERK 7902 17-DEC-80 800
7499 ALLEN SALESMAN 7698 20-FEB-81 1600
7521 WARD SALESMAN 7698 22-FEB-81 1250
7566 JONES MANAGER 7839 02-APR-81 2975
7654 MARTIN SALESMAN 7698 28-SEP-81 1250
7698 BLAKE MANAGER 7839 01-MAY-81 2850
7782 CLARK MANAGER 7839 09-JUN-81 2450
7788 SCOTT ANALYST 7566 19-APR-87 3000
7839 KING PRESIDENT 17-NOV-81 5000
7844 TURNER SALESMAN 7698 08-SEP-81 1500
7876 ADAMS CLERK 7788 23-MAY-87 1100
7900 JAMES CLERK 7698 03-DEC-81 950
7902 FORD ANALYST 7566 03-DEC-81 3000
7934 MILLER CLERK 7782 23-JAN-82 1300
14 rows selected.

SQL> CREATE TABLE dummy_y as SELECT sal FROM emp
WHERE sal < 2000;
Table created.

SQL> CREATE TABLE dummy_z as SELECT sal FROM emp
WHERE sal BETWEEN 2000 AND 3999;
Table created.

SQL> alter table p_emp exchange partition emp_p1
with table dummy_y;
Table altered.

SQL> alter table p_emp exchange partition emp_p2
with table dummy_z;
Table altered.

Another Detailed Example using Partition Exchange method

    -- Create and populate a small lookup table.
    CREATE TABLE lookup (
      id            NUMBER(10),
      description   VARCHAR2(50)
    );

    ALTER TABLE lookup ADD (
      CONSTRAINT lookup_pk PRIMARY KEY (id)
    );

    INSERT INTO lookup (id, description) VALUES (1, 'ONE');
    INSERT INTO lookup (id, description) VALUES (2, 'TWO');
    INSERT INTO lookup (id, description) VALUES (3, 'THREE');
    COMMIT;

    -- Create and populate a larger table that we will later partition.
    CREATE TABLE big_table (
      id            NUMBER(10),
      created_date  DATE,
      lookup_id     NUMBER(10),
      data          VARCHAR2(50)
    );

    DECLARE
      l_lookup_id    lookup.id%TYPE;
      l_create_date  DATE;
    BEGIN
      FOR i IN 1 .. 1000000 LOOP
        IF MOD(i, 3) = 0 THEN
          l_create_date := ADD_MONTHS(SYSDATE, -24);
          l_lookup_id   := 2;
        ELSIF MOD(i, 2) = 0 THEN
          l_create_date := ADD_MONTHS(SYSDATE, -12);
          l_lookup_id   := 1;
        ELSE
          l_create_date := SYSDATE;
          l_lookup_id   := 3;
        END IF;
        
        INSERT INTO big_table (id, created_date, lookup_id, data)
        VALUES (i, l_create_date, l_lookup_id, 'This is some data for ' || i);
      END LOOP;
      COMMIT;
    END;
    /

    -- Apply some constraints to the table.
    ALTER TABLE big_table ADD (
      CONSTRAINT big_table_pk PRIMARY KEY (id)
    );

    CREATE INDEX bita_created_date_i ON big_table(created_date);

    CREATE INDEX bita_look_fk_i ON big_table(lookup_id);

    ALTER TABLE big_table ADD (
      CONSTRAINT bita_look_fk
      FOREIGN KEY (lookup_id)
      REFERENCES lookup(id)
    );

    -- Gather statistics on the schema objects
    EXEC DBMS_STATS.gather_table_stats(USER, 'LOOKUP', cascade => TRUE);
    EXEC DBMS_STATS.gather_table_stats(USER, 'BIG_TABLE', cascade => TRUE);


    -- Create partitioned table.
    CREATE TABLE big_table2 (
      id            NUMBER(10),
      created_date  DATE,
      lookup_id     NUMBER(10),
      data          VARCHAR2(50)
    )
    PARTITION BY RANGE (created_date)
    (PARTITION big_table_2007 VALUES LESS THAN (MAXVALUE));

    -- Add new keys, FKs and triggers.
    ALTER TABLE big_table2 ADD (
      CONSTRAINT big_table_pk2 PRIMARY KEY (id)
    );

    CREATE INDEX bita_created_date_i2 ON big_table2(created_date) LOCAL;

    CREATE INDEX bita_look_fk_i2 ON big_table2(lookup_id) LOCAL;

    ALTER TABLE big_table2 ADD (
      CONSTRAINT bita_look_fk2
      FOREIGN KEY (lookup_id)
      REFERENCES lookup(id)
    );


      ALTER TABLE big_table2
      EXCHANGE PARTITION big_table_2007
      WITH TABLE big_table
      WITHOUT VALIDATION
      UPDATE GLOBAL INDEXES;

    DROP TABLE big_table;
    RENAME big_table2 TO big_table;

    ALTER TABLE big_table RENAME CONSTRAINT big_table_pk2 TO big_table_pk;
    ALTER TABLE big_table RENAME CONSTRAINT bita_look_fk2 TO bita_look_fk;
    ALTER INDEX big_table_pk2 RENAME TO big_table_pk;
    ALTER INDEX bita_look_fk_i2 RENAME TO bita_look_fk_i;
    ALTER INDEX bita_created_date_i2 RENAME TO bita_created_date_i;

      ALTER TABLE big_table
      SPLIT PARTITION big_table_2007 AT (TO_DATE('31-DEC-2005 23:59:59', 'DD-MON-YYYY HH24:MI:SS'))
      INTO (PARTITION big_table_2005,
            PARTITION big_table_2007)
      UPDATE GLOBAL INDEXES;

      ALTER TABLE big_table
      SPLIT PARTITION big_table_2007 AT (TO_DATE('31-DEC-2006 23:59:59', 'DD-MON-YYYY HH24:MI:SS'))
      INTO (PARTITION big_table_2006,
            PARTITION big_table_2007)
      UPDATE GLOBAL INDEXES;

     EXEC DBMS_STATS.gather_table_stats(USER, 'BIG_TABLE', cascade => TRUE);


    SELECT partitioned
    FROM   user_tables
    WHERE  table_name = 'BIG_TABLE';

    PAR
    ---
    YES

    1 row selected.

    SELECT partition_name, num_rows
    FROM   user_tab_partitions
    WHERE  table_name = 'BIG_TABLE';

    PARTITION_NAME                   NUM_ROWS
    ------------------------------ ----------
    BIG_TABLE_2005                     335326
    BIG_TABLE_2006                     332730
    BIG_TABLE_2007                     334340

    3 rows selected.

Sunday, October 12, 2014

Query to find long running queries or operation.

Problem - Want to know how long you have to wait until the operation finishes?

Sometimes when you run a slow query that produces a full table scan, you may want to know how long it will take to finish. There's a simple query that shows you how many seconds have elapsed and how many seconds you still have to wait.

Recipe #1 - Get details about long running operations

Run the following query and you'll get most information you need:
SELECT s.osuser,
       sl.sql_id,
       sl.sql_hash_value,
       sl.opname,
       sl.target,
       sl.elapsed_seconds,
       sl.time_remaining
  FROM v$session_longops sl
 INNER JOIN v$session s ON sl.sid = s.sid AND sl.serial# = s.serial#
 WHERE sl.time_remaining > 0;
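
v$session_longops also exposes the SOFAR and TOTALWORK columns, so you can compute a rough percentage done yourself, for example:

SELECT sid, opname,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
  FROM v$session_longops
 WHERE totalwork > 0
   AND time_remaining > 0;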

Recipe #2 - Show long running SQL Statements

If you also want to see the SQL statement causing the long running operation, just join v$sql as shown below:
SELECT s.username,
       sl.sid,
       sq.executions,
       sl.last_update_time,
       sl.sql_id,
       sl.sql_hash_value,
       opname,
       target,
       elapsed_seconds,
       time_remaining,
       sq.sql_fulltext
  FROM v$session_longops sl
 INNER JOIN v$sql sq ON sq.sql_id = sl.sql_id
 INNER JOIN v$session s ON sl.SID = s.SID AND sl.serial# = s.serial#
 WHERE time_remaining > 0;

Resuming failed or stopped Data Pump Import execution

When a Data Pump import stops or fails, perhaps because the PuTTY session was closed by mistake or the job was stopped due to database errors, we can resume it as follows.

1. Identify the Data Pump import job that you started:

SYS@TESTDB1 AS SYSDBA 13-FEB-12> select * from dba_datapump_jobs;

OWNER_NAME                     JOB_NAME                       OPERATION                      JOB_MODE                       STATE                              DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
------------------------------ ------------------------------ ------------------------------ ------------------------------ ------------------------------ ---------- ----------------- -----------------
SYSTEM                         SYS_IMPORT_SCHEMA_01           IMPORT                         SCHEMA                         EXECUTING                               8                 0          9

SYS@TESTDB1 AS SYSDBA 13-FEB-12> exit

2. Then resume the import by attaching to the job and entering CONTINUE, as below.

$ impdp system/manager attach=SYS_IMPORT_SCHEMA_01

Import: Release 11.2.0.2.0 - Production on Mon Feb 13 00:15:40 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_IMPORT_SCHEMA_01
  Owner: SYSTEM
  Operation: IMPORT
  Creator Privs: TRUE
  GUID: B8D498476B797D54E040F10A07255A96
  Start Time: Sunday, 12 February, 2012 23:40:40
  Mode: SCHEMA
  Instance: TESTDB1
  Max Parallelism: 8
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        system/******** dumpfile=expdp_TESTDB_20121302.dmp logfile=expdp_TESTDB_20121302.log directory=dump_dir1 schemas=user parallel=8
  IMPORT Job Parameters:
     CLIENT_COMMAND        system/******** dumpfile=expdp_TESTDB_20121302.dmp logfile=impdp_expdp_TESTDB_TESTDB1_20121302.log directory=dump_dir schemas=user remap_schema=user:user_gr parallel=8
  State: EXECUTING
  Bytes Processed: 10,378,558,440
  Percent Done: 99
  Current Parallelism: 8
  Job Error Count: 0
  Dump File: /bi_oradata_01/dumps/expdp_TESTDB_20121302.dmp

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: USER_GR
  Object Name: Q1_USZIP54004_SDX
  Object Type: SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
  Completed Objects: 21
  Worker Parallelism: 8

Worker 2 Status:
  Process Name: DW01
  State: WORK WAITING

Worker 3 Status:
  Process Name: DW02
  State: WORK WAITING

Worker 4 Status:
  Process Name: DW03
  State: WORK WAITING

Worker 5 Status:
  Process Name: DW04
  State: WORK WAITING

Worker 6 Status:
  Process Name: DW05
  State: WORK WAITING

Worker 7 Status:
  Process Name: DW06
  State: WORK WAITING

Worker 8 Status:
  Process Name: DW07
  State: WORK WAITING


Import> continue 

Kill, cancel and resume or restart datapump expdp and impdp jobs

The expdp and impdp utilities are command-line driven, but when starting them from the OS prompt, one does not notice it. When you want to kill, cancel, start or resume a job, you will end up in the Data Pump command prompt… now what?!
All commands shown here can be used with both expdp and impdp.

Identifying datapump jobs

Do a SELECT from dba_datapump_jobs in SQL*Plus to get the job name:
> expdp system full=y

SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE
---------- -------------------- ---------- ---------- ------------
SYSTEM     SYS_EXPORT_FULL_01   EXPORT     FULL       EXECUTING
Or, when you use the JOB_NAME parameter when datapumping, you have already identified the job with a name and don't need to look it up afterwards…
expdp system full=y JOB_NAME=EXP_FULL

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE
---------- -------------------- ---------- ---------- ------------
SYSTEM     EXP_FULL             EXPORT     FULL       EXECUTING

Killing or stopping a running datapump job

The difference between KILL and STOP is simple to explain. When killing a job, you won't be able to resume or start it again; the log and dump files will be removed as well!
When exporting (or importing), press Ctrl-C to show the Data Pump prompt and type KILL_JOB or STOP_JOB[=IMMEDIATE]. You will be prompted to confirm if you are sure…
Adding '=IMMEDIATE' to STOP_JOB will not finish the currently running 'sub-job'; it will have to be redone when the job is started again.
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
[Ctrl-c]
Export> KILL_JOB
..or..
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([yes]/no): yes

Resuming a stopped job

Identify your job with SQL, or you already know it because you used 'JOB_NAME=' ;)
SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE      
---------- -------------------- ---------- ---------- ------------
SYSTEM     EXP_FULL             EXPORT     FULL       NOT RUNNING
Now we can ATTACH to the job by passing it as a parameter to the expdp or impdp command, and the detailed job status is shown:
> expdp system ATTACH=EXP_FULL

Job: EXP_FULL
 Owner: SYSTEM
 Operation: EXPORT
 Creator Privs: TRUE
 GUID: A5441357B472DFEEE040007F0100692A
 Start Time: Thursday, 08 June, 2011 20:23:39
 Mode: FULL
 Instance: db1
 Max Parallelism: 1
 EXPORT Job Parameters:
 Parameter Name      Parameter Value:
 CLIENT_COMMAND        system/******** full=y JOB_NAME=EXP_FULL
 State: IDLING
 Bytes Processed: 0
 Current Parallelism: 1
 Job Error Count: 0
 Dump File: /u01/app/oracle/admin/db1/dpdump/expdat.dmp
 bytes written: 520,192

Worker 1 Status:
 Process Name: DW00
 State: UNDEFINED
(Re)start the job with START_JOB; use '=SKIP_CURRENT' if you want to skip the current object. To show progress again, type CONTINUE_CLIENT (the job will be restarted if idling).
Export> START_JOB[=SKIP_CURRENT]
Export> CONTINUE_CLIENT
Job EXP_FULL has been reopened at Thursday, 09 June, 2011 10:26
Restarting "SYSTEM"."EXP_FULL":  system/******** full=y JOB_NAME=EXP_FULL

Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Done…

How to stop or kill data pump jobs in Oracle the CORRECT way


1. Get the list of datapump jobs:
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

-- locate Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
The output might look something like this:
OWNER_NAME JOB_NAME             OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
---------- -------------------- ----------- ----------- ----------- -----------------
SCHEMA_USER SYS_IMPORT_SCHEMA_01 IMPORT      SCHEMA      EXECUTING                   1
There are two things needed to perform the kill:
1. OWNER_NAME (here, SCHEMA_USER)
2. JOB_NAME (here, SYS_IMPORT_SCHEMA_01)
With that information, we can now stop and kill the job:
SET serveroutput on
SET lines 100
DECLARE
   h1 NUMBER;
BEGIN
  -- Format: DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
   h1 := DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','SCHEMA_USER');
  -- STOP_JOB(handle, immediate, keep_master): immediate = 1 aborts the
  -- worker processes instead of waiting for an orderly shutdown;
  -- keep_master = 0 drops the master table, so the job cannot be restarted.
   DBMS_DATAPUMP.STOP_JOB(h1, 1, 0);
END;
/
Check that the job has stopped:
SQL> SET lines 200
SQL> COL owner_name FORMAT a10;
SQL> COL job_name FORMAT a20
SQL> COL state FORMAT a11 
SQL> COL operation LIKE state
SQL> COL job_mode LIKE state
SQL> 
SQL> -- locate Data Pump jobs:
SQL> 
SQL> SELECT owner_name, job_name, operation, job_mode,
  2  state, attached_sessions
  3  FROM dba_datapump_jobs
  4  WHERE job_name NOT LIKE 'BIN$%'
  5  ORDER BY 1,2;

no rows selected

Generate user DDL


An Oracle DBA can use these two scripts to generate the DDL statements for a user, together with their roles and system and object privileges.
Alternatively, with Data Pump (impdp) you can use the SQLFILE=My_file.sql parameter to easily extract DDL from a dump file.
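
A minimal sketch of that Data Pump approach (the directory object dump_dir and the dump file name are assumptions for illustration):

impdp system directory=dump_dir dumpfile=exp.dmp sqlfile=My_file.sql
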
For Oracle 10g and later:
clear screen
accept uname prompt 'Enter User Name : '
accept outfile prompt  ' Output filename : '
spool &&outfile..gen
SET LONG 2000000 PAGESIZE 0 head off verify off feedback off linesize 132
SELECT dbms_metadata.get_ddl('USER','&&uname') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','&&uname') from dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','&&uname') from dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','&&uname') from dual;
spool off
More information on this package can be found in the official Oracle 12c documentation:
http://docs.oracle.com/cd/E16655_01/appdev.121/e17602/d_metada.htm#ARPLS026
For Oracle versions before 10g (it runs well with 10g and 11g too):
clear screen
accept uname prompt 'Enter User Name : '
accept outfile prompt  ' Output filename : '
col username noprint
col lne newline
set heading off pagesize 0 verify off feedback off linesize 132
spool &&outfile..gen
prompt -- generate user ddl
SELECT username, 'CREATE USER '||username||' '||
       DECODE(password, 'EXTERNAL', 'IDENTIFIED EXTERNALLY',
              'IDENTIFIED BY VALUES '''||password||''' ') lne,
       'DEFAULT TABLESPACE '||default_tablespace lne,
       'TEMPORARY TABLESPACE '||temporary_tablespace||';' lne
  FROM DBA_USERS
 WHERE USERNAME LIKE UPPER('%&&uname%')
    OR UPPER('&&uname') IS NULL
 ORDER BY USERNAME;
SELECT username, 'ALTER USER '||username||' QUOTA '||
       DECODE(MAX_BYTES, -1, 'UNLIMITED', TO_CHAR(ROUND(MAX_BYTES/1024))||'K')
       ||' ON '||tablespace_name||';' lne
  FROM DBA_TS_QUOTAS
 WHERE USERNAME LIKE UPPER('%&&uname%')
    OR UPPER('&&uname') IS NULL
 ORDER BY USERNAME;
col grantee noprint
select grantee, granted_role granted_priv,
       'GRANT '||granted_role||' to '||grantee||
       DECODE(ADMIN_OPTION, 'Y', ' WITH ADMIN OPTION;', ';')
  from dba_role_privs
 where grantee like upper('%&&uname%')
         UNION
select grantee, privilege granted_priv,
       'GRANT '||privilege||' to '||grantee||
       DECODE(ADMIN_OPTION, 'Y', ' WITH ADMIN OPTION;', ';')
  from dba_sys_privs
 where grantee like upper('%&&uname%')
 order by 1, 2;
spool off