In this article we are going to discuss the Data Pump utility, provided by Oracle since version 10g at no extra cost. The discussion is organized around the points below:

- Definition, need and use of data pump
- Architecture of data pump
- Different methods of data movement using data pump
- Different scenarios of data pump
Definition, need and use of data pump
- Data Pump is an Oracle database utility introduced in Oracle 10g as a replacement for the conventional export/import utilities (exp/imp)
- It offers high speed, fast object recovery, parallel read and write, data transformation and compression
- It provides bulk data and metadata movement between Oracle databases, independent of Oracle version and OS platform
- Data Pump supports the modes below:
- Full database mode
- Schema mode
- Table mode
- Tablespace mode
- Transportable tablespace mode
- Data Pump is much faster than conventional exp/imp due to its data access methods: direct path and external tables
- It does all processing on the server rather than on the client; speed can be increased dramatically by using the PARALLEL parameter (typically 2x the number of CPUs)
- It allows taking a point-in-time logical backup of the entire database or of any individual object.
- Data Pump jobs can be restarted without any loss of data, whether the stoppage was voluntary or involuntary. This is possible because Data Pump provides interactive command-line control of its jobs.
- Virtually any type of object can be included in or excluded from Data Pump jobs using the simple INCLUDE or EXCLUDE parameters (a short example follows this list).
- It supports loading data directly from another database over a network link without generating a dump file.
- Logging can be disabled during an import operation, so redo information is not written to disk or archived. This is very useful while transporting huge volumes of data (new in Oracle 12c).
- Data Pump provides an option to export a view as a table; all objects depending on the view are also exported as if they were defined on a table (new in Oracle 12c).
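As a quick illustration of the INCLUDE/EXCLUDE, PARALLEL and 12c logging options mentioned above, here is a minimal sketch using the user and directory from this article (the dump and log file names are just illustrative):

expdp techon_owner/techon#123 directory=datapump_dr dumpfile=schema_inc_%U.dmp logfile=schema_inc_exp.log schemas=TECHON_OWNER include=TABLE parallel=4

impdp techon_owner/techon#123 directory=datapump_dr dumpfile=schema_inc_%U.dmp logfile=schema_inc_imp.log schemas=TECHON_OWNER exclude=STATISTICS transform=disable_archive_logging:y

The %U substitution lets Data Pump write one dump file per parallel worker, and transform=disable_archive_logging:y is the 12c option that suppresses redo generation for the objects while they are imported.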
Important points about data pump
A Data Pump job manages a few types of files:
DUMPFILE: these files contain the data and metadata that are being moved.
LOGFILE: the log file related to export and import operations.
SQLFILE: the result of a SQLFILE operation; it contains the DDL of the exported objects.
Roles required for data pump operations
DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE
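For example, to let the schema owner used throughout this article run full-database export and import jobs (a minimal sketch; adjust the grantee to your own user):

SQL> grant datapump_exp_full_database, datapump_imp_full_database to techon_owner;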
Directory Objects
A Data Pump job requires a directory object to generate the dump file, log file or SQL file; this is new in Data Pump compared to conventional exp/imp.
We need to create the directory physically on the OS, then create a logical directory object pointing to it, and grant read/write permission on it to the user.
On OS level
$mkdir -p /u01/backup
On database level
create or replace directory datapump_dr as '/u01/backup';
grant read, write on directory datapump_dr to <username>;
Data Pump Architecture
Oracle Data Pump architecture consists of the 3 important parts below:
- Command line interface expdp/impdp
- DBMS_DATAPUMP package
- DBMS_METADATA package

As shown in the figure,
- expdp/impdp use the DBMS_DATAPUMP package to execute export and import operations
- when metadata needs to be handled, the DBMS_METADATA package is used to extract, manipulate and recreate the dictionary metadata
Two other parts of the architecture need attention:
External Table API
- ORACLE_DATAPUMP access driver
- This is responsible for allowing Oracle to read and write data in external tables
Direct Path API
- Responsible for passing the data directly to the load engine in the server using direct-path INSERT statements instead of conventional INSERT statements.
Different Methods of Data Movement
Data Pump uses different methods to move data in and out of the database:
- Datafile copy
- Direct path
- External tables
- Conventional path
- Network link
We will go into detail on a few of them.
Datafile Copy(Transportable Tablespace)
- This is the quickest way to transport an entire tablespace to a target system; you also need to transfer the datafiles associated with the tablespace to the target server.
- The Data Pump job exports only structural information (metadata) into the dump file.
- To transport a tablespace, it must be self-contained; to check this, use the procedure DBMS_TTS.TRANSPORT_SET_CHECK.
- Once this procedure has executed, check for any violations in the TRANSPORT_SET_VIOLATIONS view; if it does not return any rows, the tablespace is self-contained.
- Below we transport the self-contained tablespace TECHON_TBS.
SQL> exec dbms_tts.transport_set_check('TECHON_TBS',TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected
- At the source, place the tablespace in read-only mode and generate a transportable tablespace set.
SQL> alter tablespace TECHON_TBS read only;

Tablespace altered.

$ expdp userid="'/ as sysdba'" dumpfile=techon_tbs_trans_Dec2018.dmp directory=datapump_dr transport_tablespaces=TECHON_TBS logfile=techon_tbs_trans_Dec2018.log
- Copy the transportable tablespace set created (dump file and datafiles) to the target using any file transfer method (scp, sftp, ftp), then put the tablespace back in read-write mode at the source:
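SQL> alter tablespace TECHON_TBS read write;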
Tablespace altered.
- Now, at the target, import the tablespace set. First import the metadata. If the schema is not present in the target database, it needs to be created.
SQL> create user techon_owner identified by techon#123;

User created.

SQL> grant create session, resource to techon_owner;

Grant succeeded.
impdp userid="'/ as sysdba'" dumpfile=techon_tbs_trans_Dec2018.dmp directory=datapump_dr transport_datafiles=<datafile list> logfile=impdp_techon_tbs_trans_Dec2018.log
- Once the import is completed, the tablespaces remain in read-only mode; bring them into read-write mode using the command below.
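For the tablespace used in this example:

SQL> alter tablespace TECHON_TBS read write;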
Direct Path
- Direct path operations can bypass undo (hence reducing redo), so the load and unload of the data is faster. Data Pump uses the direct path method by default when the structure of a table allows it.
- Data Pump does not load tables with disabled unique indexes. To load data into such tables, the indexes should be either dropped or renamed.
External Tables
- If datafile copy (transportable tablespace) is not used and the data cannot be moved using direct path, the external table method is used.
- It creates an external table that maps the dump file data to a database table, and the SQL engine is then used to move the data.
- Data Pump uses external tables in the following cases:
- loading and unloading huge tables and partitions in situations where it is advantageous to use parallel SQL capabilities
- loading tables with global or domain indexes defined on them
- loading tables with active triggers, clustered tables, tables with encrypted columns, or tables with fine-grained access control enabled for inserts
- loading tables not created by the import operation
Conventional Path
- When Data Pump is not able to use either the direct path or the external table method, the conventional path is used. This method can affect the performance of the Data Pump operation.
Network Link
- This option is enabled by specifying the NETWORK_LINK parameter in a Data Pump import.
- It does not need to generate a dump file.
- To export from a read-only database, the NETWORK_LINK parameter is required.
Different scenarios of data pump
We will discuss the different scenarios below:
- Schema export and import
- Table export and import
- Exporting and importing a pluggable database
- Using export to estimate space
- Import with metadata only
- Export view as tables
- Import data via network link
- Interaction with datapump jobs
1. Schema export and import
I have already written a dedicated article on schema refresh; please read Oracle Database Schema Refresh.
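For completeness, a minimal schema-mode sketch using the same user and directory as the rest of this article (the dump and log file names are just illustrative):

expdp techon_owner/techon#123 directory=datapump_dr dumpfile=schema_refresh.dmp logfile=schema_refresh_exp.log schemas=TECHON_OWNER

impdp techon_owner/techon#123 directory=datapump_dr dumpfile=schema_refresh.dmp logfile=schema_refresh_imp.log schemas=TECHON_OWNER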
2. Table export and import
I am going to export the table, then drop the table and import it.
Export the table using the command below:
expdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=table_logfile.log tables=TECHON_OWNER.candidate
Export: Release 12.1.0.2.0 - Production on Thu Dec 20 04:59:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_TABLE_01": TECHON_OWNER/******** directory=datapump_dr dumpfile=table_dump.dmp logfile=table_logfile.log tables=TECHON_OWNER.candidate
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "TECHON_OWNER"."CANDIDATE"    6.023 KB    4 rows
Master table "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TECHON_OWNER.SYS_EXPORT_TABLE_01 is:
  /u01/backup/table_dump.dmp
Job "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 20 04:59:58 2018 elapsed 0 00:00:23
The next step is to drop the table, then proceed with the import.
SQL> drop table CANDIDATE purge;
Table dropped.

impdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile tables=TECHON_OWNER.candidate
Import: Release 12.1.0.2.0 - Production on Thu Dec 20 06:16:08 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Master table "TECHON_OWNER"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TECHON_OWNER"."SYS_IMPORT_TABLE_01": TECHON_OWNER/******** directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile tables=TECHON_OWNER.candidate
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TECHON_OWNER"."CANDIDATE"    6.023 KB    4 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "TECHON_OWNER"."SYS_IMPORT_TABLE_01" successfully completed at Thu Dec 20 06:16:36 2018 elapsed 0 00:00:22
This completes the export and import of the table.
3. Exporting and importing a pluggable database
Below are the steps to export and import a pluggable database.
Export the full pluggable database using the command below with the full=y parameter:
expdp TECHON_OWNER/techon#123@pdb_techon directory=datapump_dr dumpfile=full_pdb_dump.dmp logfile=full_pdb_logfile.log full=y
Now close the PDB and drop it including datafiles
SQL> alter pluggable database pdb_techon close;
Pluggable database altered.

SQL> drop pluggable database pdb_techon including datafiles;

Pluggable database dropped.
Now create a pluggable database from PDB$SEED as shown below (also read create-pluggable-database-pdb-oracle-12c) and open it in read-write mode.
SQL> create pluggable database pdb_techon admin user techon_admin identified by techon#123 roles = (DBA)
default tablespace techon_tbs datafile '+DATA01' size 100M
FILE_NAME_CONVERT=('+DATA01','+DATA01');
Pluggable database created.
SQL> alter pluggable database pdb_techon open;
Pluggable database altered.
Now we need to set up a user and grant privileges to perform the import operations.
SQL> alter session set container=pdb_techon;

Session altered.

SQL> create user techon_owner identified by techon#123 default tablespace techon_tbs quota unlimited on techon_tbs;

User created.

SQL> grant create session, resource, datapump_exp_full_database, datapump_imp_full_database to techon_owner;

Grant succeeded.

SQL> create or replace directory datapump_dr as '/u01/backup/';

Directory created.

SQL> grant read, write on directory datapump_dr to techon_owner;

Grant succeeded.
Now import the dump file using impdp:
impdp techon_owner/techon#123@pdb_techon directory=datapump_dr dumpfile=full_pdb_dump.dmp logfile=impdp_full_pdb_logfile.log
4. Using export to estimate space
We will use the ESTIMATE_ONLY parameter to estimate the amount of disk space a schema-mode export would consume. This operation does not create any dump file; it only estimates space.
expdp TECHON_OWNER/techon#123 directory=datapump_dr logfile=estimate_logfile.log schemas=TECHON_OWNER estimate_only=y
Export: Release 12.1.0.2.0 - Production on Thu Dec 20 06:47:43 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_SCHEMA_01": TECHON_OWNER/******** directory=datapump_dr logfile=estimate_logfile.log schemas=TECHON_OWNER estimate_only=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "TECHON_OWNER"."CANDIDATE"    64 KB
Total estimation using BLOCKS method: 64 KB
Job "TECHON_OWNER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Dec 20 06:47:50 2018 elapsed 0 00:00:02
5. Import with metadata only
One can use the parameter content=metadata_only to import only the metadata of a table:
impdp techon_owner/techon#123 directory =datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile content=metadata_only table_exists_action=replace tables =TECHON_OWNER.candidate
Note the table_exists_action parameter in impdp; it comes with the 4 options below:
APPEND: loads rows from the source and leaves existing rows unchanged
SKIP: leaves the table as it is and moves on to the next object. Not a valid option if CONTENT=DATA_ONLY is specified
TRUNCATE: deletes existing rows and then loads data from the source
REPLACE: drops the existing table, then creates it and loads data from the source
6. Export view as tables
To illustrate this, I have created a view candidate_view on the candidate table. Now take an export of the view as a table.
expdp TECHON_OWNER/techon#123 directory=datapump_dr dumpfile=viewastable_dump.dmp logfile=viewastable_logfile.log views_as_tables=TECHON_OWNER.candidate_view
Export: Release 12.1.0.2.0 - Production on Thu Dec 20 07:08:32 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_TABLE_01": TECHON_OWNER/******** directory=datapump_dr dumpfile=viewastable_dump.dmp logfile=viewastable_logfile.log views_as_tables=TECHON_OWNER.candidate_view
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported "TECHON_OWNER"."CANDIDATE_VIEW"    6.031 KB    4 rows
Master table "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TECHON_OWNER.SYS_EXPORT_TABLE_01 is:
  /u01/backup/viewastable_dump.dmp
Job "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 20 07:08:44 2018 elapsed 0 00:00:07
To check that it worked, we generate a SQL file using impdp; this produces the DDL statements for the exported objects.
impdp TECHON_OWNER directory=datapump_dr dumpfile=viewastable_dump.dmp sqlfile=CANDIDATE_VIEW.sql
Import: Release 12.1.0.2.0 - Production on Thu Dec 20 07:15:07 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Master table "TECHON_OWNER"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "TECHON_OWNER"."SYS_SQL_FILE_FULL_01": TECHON_OWNER/******** directory=datapump_dr dumpfile=viewastable_dump.dmp sqlfile=CANDIDATE_VIEW.sql
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
Job "TECHON_OWNER"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 20 07:15:15 2018 elapsed 0 00:00:02
$ cat CANDIDATE_VIEW.sql
-- CONNECT TECHON_OWNER
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
CREATE TABLE "TECHON_OWNER"."CANDIDATE_VIEW"
   (    "CID" NUMBER(10,0) NOT NULL ENABLE,
        "CNAME" VARCHAR2(20 BYTE),
        "CDOJ" DATE
   )
  SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "TOOLS" ;
We can see it worked successfully.
7. Import data via network link
By using Data Pump, we are able to import metadata and data directly into a target database, using a database link to connect to the source and pull information through the network.
Benefits of this:
- Dump file generation is not required
- No need to execute an export command
- It exports from the source and imports into the target immediately
The import operation must be done by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role.
We will import a table from the source into the target.
First create a database link in the target database under the desired user:
SQL> create database link techon_link connect to TECHON_OWNER identified by techon#123 using 'techon.pfizer.com';
Database link created.
Try accessing the table using the database link:
SQL> select * from CANDIDATE@techon_link;

       CID CNAME                CDOJ
---------- -------------------- ---------
       101 Rajesh               05-JUN-18
       102 Ian                  05-AUG-18
       103 Chris                25-JUN-18
       104 John                 25-JUL-18
Now we will import the table using an impdp operation through the database link, using the NETWORK_LINK parameter:
impdp TECHON_OWNER/techon#123 tables=TECHON_OWNER.CANDIDATE directory=datapump_dr network_link=techon_link logfile=impdp_network.log
We can see the table got imported without any dump file.
SQL> select * from candidate;

       CID CNAME                CDOJ
---------- -------------------- ---------
       101 Rajesh               05-JUN-18
       102 Ian                  05-AUG-18
       103 Chris                25-JUN-18
       104 John                 25-JUL-18
8. Interaction with datapump jobs
While any export or import job is running, we can interact with that job via the command line.
We can attach to a job using the command below:
$ expdp techon_owner/techon#123 attach=job_name
The job name can be extracted from the DBA_DATAPUMP_JOBS view.
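For example:

SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;

The view also exposes other useful columns such as JOB_MODE and DEGREE.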
We can stop the job, start the job, or increase the parallelism:
Export> STOP_JOB=immediate
Export> START_JOB
Export> PARALLEL=10
Export> HELP
Comment below if additional information is required.