Everything you need to know about Oracle Data Pump

Atikh Shaikh | Backup and Recovery, Oracle, Oracle 12c

In this article, we are going to discuss everything about the Data Pump utility, which Oracle has provided since version 10g at no extra cost. We will cover the following points:
  • Definition, need and use of data pump
  • Architecture of data pump
  • Different methods of data movement using data pump
  • Different scenarios of data pump
Definition, need and use of data pump
  • Data Pump is an Oracle database utility introduced in Oracle 10g as a replacement for the conventional export/import (exp/imp) utilities
  • It offers high speed, fast object recovery, parallel read and write, data transformation, and compression
  • It provides bulk data and metadata movement between Oracle databases, independent of Oracle version and OS platform
  • Data Pump supports the below modes
    • Full database mode
    • Schema mode
    • Table mode
    • Tablespace mode
    • Transportable tablespace mode
  • Data Pump is much faster than conventional exp/imp because of the data access methods it uses: direct path and external tables
  • It does all processing on the server rather than on the client; speed can be dramatically increased by using the PARALLEL parameter (typically number of CPUs x 2); see the examples after this list
  • It allows you to take a point-in-time logical backup of the entire database or of any individual object
  • Data Pump jobs can be restarted without any loss of data, whether the stoppage was voluntary or involuntary. This is possible because Data Pump provides interactive command-line control of its jobs
  • Virtually any type of object can be included in or excluded from a Data Pump job using the simple INCLUDE or EXCLUDE parameters
  • It supports loading data directly from another database over a network link, without the need to generate a dump file
  • It is possible to disable redo logging during an import operation, so redo information is not written to disk or archived. This is very useful while transporting huge volumes of data (new in Oracle 12c)
  • Data Pump provides an option to export a view as a table; all objects depending on the view are also exported as if they were defined on a table (new in Oracle 12c)
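Two of the features above map directly onto command-line parameters. Below is a minimal sketch, assuming the sample schema TECHON_OWNER and the DATAPUMP_DR directory used later in this article (dump file and log file names are illustrative): a parallel export needs multiple dump files, which the %U substitution variable provides, and the 12c TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y option suppresses redo generation during import.

$expdp techon_owner/techon#123 directory=datapump_dr dumpfile=exp_par_%U.dmp logfile=exp_par.log schemas=TECHON_OWNER parallel=4

$impdp techon_owner/techon#123 directory=datapump_dr dumpfile=exp_par_%U.dmp logfile=imp_nolog.log transform=disable_archive_logging:y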
Important points about Data Pump
A Data Pump job manages a few types of files:

DUMPFILE: these files contain the data and metadata that is being moved
LOGFILE: the log file of the export or import operation
SQLFILE: the result of a SQLFILE operation; it contains the DDL of the exported objects
Roles required for data pump operations

DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE
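These roles are typically granted by a DBA. A minimal sketch, using the sample user techon_owner that appears later in this article:

SQL> grant datapump_exp_full_database, datapump_imp_full_database to techon_owner;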

Directory Objects
A Data Pump job requires a directory object to write its dump file, log file, or SQL file; this is new in Data Pump compared to conventional exp/imp.
We need to create the directory physically on the OS, map it to a logical directory object in the database, and grant read/write permission on it to the user.

On OS level
$mkdir -p /u01/backup
On database level
create or replace directory datapump_dr as '/u01/backup';
grant read,write on directory datapump_dr to techon_owner;

Data Pump Architecture

The Oracle Data Pump architecture consists of the below three important parts:
  1. Command line interface expdp/impdp
  2. DBMS_DATAPUMP package
  3. DBMS_METADATA package
[Figure: Oracle Data Pump architecture in Oracle 12c]

As shown in the figure,
  • expdp/impdp use the DBMS_DATAPUMP package to execute export and import operations (a minimal PL/SQL sketch follows this list)
  • when metadata needs to be manipulated, it uses the DBMS_METADATA package to extract, manipulate, and recreate the dictionary metadata.
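Because DBMS_DATAPUMP is a documented PL/SQL API, the same work expdp does can be driven directly from inside the database. Here is a minimal sketch of a schema-mode export, assuming the DATAPUMP_DR directory and TECHON_OWNER schema used in this article (the job name and file names are illustrative):

DECLARE
  l_handle NUMBER;
BEGIN
  -- open a schema-mode export job
  l_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                                 job_mode  => 'SCHEMA',
                                 job_name  => 'TECHON_API_EXP');
  -- dump file and log file are written to the directory object
  DBMS_DATAPUMP.ADD_FILE(handle => l_handle, filename => 'techon_api.dmp',
                         directory => 'DATAPUMP_DR');
  DBMS_DATAPUMP.ADD_FILE(handle => l_handle, filename => 'techon_api.log',
                         directory => 'DATAPUMP_DR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => l_handle, name => 'SCHEMA_EXPR',
                                value  => 'IN (''TECHON_OWNER'')');
  DBMS_DATAPUMP.START_JOB(l_handle);
  DBMS_DATAPUMP.DETACH(l_handle);
END;
/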
Two other parts of the architecture need attention:
External Table API
  • ORACLE_DATAPUMP access driver
  • This is responsible for allowing Oracle to read and write data in external tables
Direct Path API
  • Responsible for passing data directly to the load engine in the server using direct-path INSERT statements instead of conventional INSERT statements.
Different Methods of Data Movement

In Oracle, Data Pump uses different methods to move data in and out of the database:
  • Datafile copy
  • Direct path
  • External tables
  • Conventional path
  • Network link
We will go into detail on a few of them.

Datafile Copy (Transportable Tablespace)
  • The quickest way to transport an entire tablespace to a target system; you also need to transfer the datafiles associated with the tablespace to the target server
  • The Data Pump job exports only the structural information (metadata) into the dump file
  • For transporting a tablespace, the tablespace must be self-contained; to check this, use the procedure DBMS_TTS.TRANSPORT_SET_CHECK
  • Once this procedure has executed, check for any violations in the TRANSPORT_SET_VIOLATIONS view; if it does not return any rows, the tablespace is self-contained
  • Below we are trying to transport the self-contained tablespace TECHON_TBS
SQL> exec dbms_tts.transport_set_check('TECHON_TBS',TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected

  • At the source, place the tablespace in read-only mode and generate a transportable tablespace set
SQL> alter tablespace TECHON_TBS read only;

Tablespace altered.

$expdp userid=" '/ as sysdba' " dumpfile=techon_tbs_trans_Dec2018.dmp directory=datapump_dr transport_tablespaces=TECHON_TBS logfile=techon_tbs_trans_Dec2018.log
  • Copy the transportable tablespace set (dump file and datafiles) to the target using any file transfer method (scp, sftp, ftp), then put the tablespace back in read/write mode at the source
SQL> alter tablespace TECHON_TBS read write;

Tablespace altered.
  • Now, at the target, import the tablespace set. First import the metadata. If the schema is not present in the target database, create it first.
SQL>create user techon_owner identified by techon#123;

User created.

SQL>grant create session, resource to techon_owner;

Grant succeeded.

impdp userid=" '/ as sysdba' " dumpfile=techon_tbs_trans_Dec2018.dmp directory=datapump_dr transport_datafiles='<datafile_path>' logfile=impdp_techon_tbs_trans_Dec2018.log
  • Once the import is completed, the tablespace remains in read-only mode; bring it into read/write mode using the below command.
SQL> alter tablespace techon_tbs read write;


Direct Path
  • Direct path operations bypass undo (hence reducing redo), so loading and unloading data is faster; Data Pump uses the direct path method by default when the structure of a table allows it
  • Data Pump does not use direct path to load a table with disabled unique indexes. To load data in such a situation, the indexes should either be dropped or renamed.
External Tables
  • If datafile copy (transportable tablespace) is not being used and the data cannot be moved using direct path, the external table method is used.
  • Data Pump creates an external table that maps to the dump file data for the database table, and the SQL engine is then used to move the data (see the sketch after this list).
  • Data Pump uses external tables in the following cases:
    • loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities
    • loading tables with global or domain indexes defined on them
    • loading tables with active triggers or clustered tables, with encrypted columns, or with fine-grained access control enabled for inserts
    • loading tables not created by the import operation
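The ORACLE_DATAPUMP access driver that powers this method can also be used by hand. Here is a minimal sketch that unloads the CANDIDATE table used later in this article into an external-table dump file (the table name candidate_ext and the file name are illustrative):

SQL> create table candidate_ext
     organization external (
       type oracle_datapump
       default directory datapump_dr
       location ('candidate_ext.dmp')
     )
     as select * from candidate;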
Conventional Path
  • When Data Pump is not able to use either the direct path or the external table method, the conventional path is used. This method can affect the performance of the Data Pump operation.
Network Link
  • This option is enabled by specifying the NETWORK_LINK parameter in a Data Pump import
  • This method does not need to generate a dump file
  • To export from a read-only database, the NETWORK_LINK parameter is required
Different scenarios of data pump

We will discuss the below scenarios:
  1. Schema export and import
  2. Table export and import
  3. Export and importing pluggable database
  4. Using export to estimate space
  5. Import with metadata only
  6. Export view as tables
  7. Import data via network link
  8. Interaction with datapump jobs
1. Schema export and import

I have already written a dedicated article on schema refresh; please read Oracle Database Schema Refresh. For quick reference, a minimal schema-level export and import pair is sketched below.
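This is a minimal sketch reusing the TECHON_OWNER schema and DATAPUMP_DR directory from this article (dump file and log file names are illustrative):

$expdp techon_owner/techon#123 directory=datapump_dr dumpfile=schema_dump.dmp logfile=schema_exp.log schemas=TECHON_OWNER

$impdp techon_owner/techon#123 directory=datapump_dr dumpfile=schema_dump.dmp logfile=schema_imp.log schemas=TECHON_OWNER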

2. Table export and import

I am going to export the table, then drop it and import it back.
Export the table using the below command:
expdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=table_logfile.log tables=TECHON_OWNER.candidate

Export: Release 12.1.0.2.0 - Production on Thu Dec 20 04:59:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_TABLE_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=table_dump.dmp logfile table_logfile.log tables=TECHON_OWNER.candidate
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "TECHON_OWNER"."CANDIDATE"                  6.023 KB       4 rows
Master table "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TECHON_OWNER.SYS_EXPORT_TABLE_01 is:
  /u01/backup/table_dump.dmp
Job "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 20 04:59:58 2018 elapsed 0 00:00:23 

The next step is to drop the table and then proceed with the import:
SQL> drop table CANDIDATE purge;

Table dropped.

impdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile tables=TECHON_OWNER.candidate

Import: Release 12.1.0.2.0 - Production on Thu Dec 20 06:16:08 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Master table "TECHON_OWNER"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TECHON_OWNER"."SYS_IMPORT_TABLE_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile tables=TECHON_OWNER.candidate
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TECHON_OWNER"."CANDIDATE"                  6.023 KB       4 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "TECHON_OWNER"."SYS_IMPORT_TABLE_01" successfully completed at Thu Dec 20 06:16:36 2018 elapsed 0 00:00:22

This completes the export and import of the table.

3. Export and importing pluggable database
Below are the steps to export and import a pluggable database.
Export the full pluggable database using the below command with the FULL=Y parameter:
expdp TECHON_OWNER/techon#123@pdb_techon directory=datapump_dr dumpfile=full_pdb_dump.dmp logfile=full_pdb_logfile.log full=y

Now close the PDB and drop it including datafiles
SQL> alter pluggable database pdb_techon close;

Pluggable database altered.

SQL> drop pluggable database pdb_techon including datafiles;

Pluggable database dropped.

Now create a pluggable database from PDB$SEED as shown below (also read create-pluggable-database-pdb-oracle-12c) and open it in read/write mode.

SQL> create pluggable database pdb_techon admin user techon_admin identified by techon#123 roles = (DBA)
default tablespace techon_tbs datafile '+DATA01' size 100M
FILE_NAME_CONVERT=('+DATA01','+DATA01');

Pluggable database created.



SQL> alter pluggable database pdb_techon open;


Pluggable database altered.

Now we need to set up the user and grant the privileges required to perform the import operation.


SQL> alter session set container=pdb_techon;

Session altered.
SQL>create user techon_owner identified by techon#123 default tablespace techon_tbs quota unlimited on techon_tbs;

User created.
SQL>grant create session, resource, datapump_exp_full_database, datapump_imp_full_database to techon_owner;

Grant succeeded.
SQL>create or replace directory datapump_dr as '/u01/backup/';

Directory created.
SQL>grant read,write on directory datapump_dr to techon_owner;

Grant succeeded.

Now import the dump file using impdp:
impdp techon_owner/techon#123@pdb_techon directory=datapump_dr dumpfile=full_pdb_dump.dmp logfile=impdp_full_pdb_logfile.log

4. Using export to estimate space
We will use the ESTIMATE_ONLY parameter to estimate the amount of disk space a schema-mode export would consume. This operation does not create any dump file; it only estimates the space.

expdp TECHON_OWNER/techon#123 directory=datapump_dr logfile=estimate_logfile.log schemas=TECHON_OWNER estimate_only=y

Export: Release 12.1.0.2.0 - Production on Thu Dec 20 06:47:43 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_SCHEMA_01":  TECHON_OWNER/******** directory=datapump_dr logfile=estimate_logfile.log schemas=TECHON_OWNER estimate_only=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "TECHON_OWNER"."CANDIDATE"                     64 KB
Total estimation using BLOCKS method: 64 KB
Job "TECHON_OWNER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Dec 20 06:47:50 2018 elapsed 0 00:00:02

5. Import with metadata only 
One can use the parameter CONTENT=METADATA_ONLY to import only the metadata of a table:

impdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile content=metadata_only table_exists_action=replace tables=TECHON_OWNER.candidate

Note the TABLE_EXISTS_ACTION parameter in impdp; it comes with the below four options (an example follows the list):

APPEND: loads rows from the source and leaves existing rows unchanged
SKIP: leaves the table as it is and moves on to the next object. Not a valid option if CONTENT=DATA_ONLY is specified
TRUNCATE: deletes existing rows and then loads data from the source
REPLACE: drops the existing table, then recreates and loads it from the source
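For example, to keep the existing table definition but reload its rows from the dump file created earlier, a sketch like the below could be used (same dump file and directory as above; the log file name is illustrative):

impdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_truncate.log tables=TECHON_OWNER.candidate table_exists_action=truncate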

6. Export view as tables
To illustrate this, I have created a view CANDIDATE_VIEW on the CANDIDATE table. Now take an export of the view as a table:

expdp TECHON_OWNER/techon#123 directory=datapump_dr dumpfile=viewastable_dump.dmp logfile=viewastable_logfile.log views_as_tables=TECHON_OWNER.candidate_view

Export: Release 12.1.0.2.0 - Production on Thu Dec 20 07:08:32 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_TABLE_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=viewastable_dump.dmp logfile=viewastable_logfile.log views_as_tables=TECHON_OWNER.candidate_view
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported "TECHON_OWNER"."CANDIDATE_VIEW"             6.031 KB       4 rows
Master table "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TECHON_OWNER.SYS_EXPORT_TABLE_01 is:
  /u01/backup/viewastable_dump.dmp
Job "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 20 07:08:44 2018 elapsed 0 00:00:07

To check that it worked, we generate a SQL file using impdp; this produces the DDL statements for the exported objects.

impdp TECHON_OWNER directory=datapump_dr dumpfile=viewastable_dump.dmp sqlfile=CANDIDATE_VIEW.sql

Import: Release 12.1.0.2.0 - Production on Thu Dec 20 07:15:07 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Master table "TECHON_OWNER"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "TECHON_OWNER"."SYS_SQL_FILE_FULL_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=viewastable_dump.dmp sqlfile=CANDIDATE_VIEW.sql
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
Job "TECHON_OWNER"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 20 07:15:15 2018 elapsed 0 00:00:02

$ cat CANDIDATE_VIEW.sql
-- CONNECT TECHON_OWNER
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
CREATE TABLE "TECHON_OWNER"."CANDIDATE_VIEW"
   (    "CID" NUMBER(10,0) NOT NULL ENABLE,
        "CNAME" VARCHAR2(20 BYTE),
        "CDOJ" DATE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "TOOLS" ;

We can see it worked successfully.

7. Import data via network link

Using Data Pump, we can import metadata and data directly into a target database by using a database link to connect to the source and pull the information over the network.
Benefits of this approach:
  • Dump file generation is not required
  • There is no need to execute an export command
  • Data is exported from the source and imported into the target immediately
The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role.

We will be importing a table from the source to the target.
First create a database link in the target database under the desired user:
SQL> create database link techon_link connect to TECHON_OWNER identified by techon#123 using 'techon.pfizer.com';

Database link created.

Try accessing the table using database link 

SQL> select * from CANDIDATE@techon_link ;

       CID CNAME                CDOJ
---------- -------------------- ------------------
       101 Rajesh               05-JUN-18
       102 Ian                  05-AUG-18
       103 Chris                25-JUN-18
       104 John                 25-JUL-18

Now we will import the table using impdp through the database link, with the NETWORK_LINK parameter:

impdp TECHON_OWNER/techon#123 tables=TECHON_OWNER.CANDIDATE directory=datapump_dr network_link=techon_link logfile=impdp_network.log

We can see that the table got imported without any dump file.

SQL> select * from candidate;

       CID CNAME                CDOJ
---------- -------------------- ------------------
       101 Rajesh               05-JUN-18
       102 Ian                  05-AUG-18
       103 Chris                25-JUN-18
       104 John                 25-JUL-18


8. Interaction with datapump jobs

While any export or import job is running, we can interact with it from the command line.
We can attach to a job using the below command:

$expdp techon_owner/techon#123 attach=job_name

The job name can be found in the DBA_DATAPUMP_JOBS view (see the query after the commands below).
We can stop the job, start the job, or increase the parallelism:
Export> STOP_JOB=immediate
Export> START_JOB
Export> PARALLEL=10
Export> HELP
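
The job name to attach to can be found with a query like the below against the DBA_DATAPUMP_JOBS view:

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;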

Comment below if additional information is required.