Atikh's DBA blog

User Managed Backups in Oracle

 Atikh Shaikh     Backup and Recovery, oracle, Oracle 12c

Definition: A backup is a real and consistent copy of data from the database that can be used to reconstruct the data after an incident.


There are two types of backup
1. Physical Backup
2. Logical Backup

Physical Backup
A copy of all the physical files that are required to perform restore and recovery of the database.
This type of backup includes copies of the following files:
  • Datafiles
  • Control files
  • Parameter files
  • Archived log files
The different options for taking a physical backup are:
  • User managed techniques
  • RMAN (Recovery Manager)
Logical Backup
Oracle Data Pump allows us to generate a logical backup that can be used to migrate data and even perform a partial or full recovery of the database. The utilities available are:
  • exp/expdp
  • imp/impdp

Difference between restore and recovery

Restore: The act of restoring all files that will be required to recover your database to a consistent state, for example copying all backup files from a backup location such as TAPE or DISK.

Recover: The process of applying all transactions recorded in the archived logs, rolling your database forward to a point in time.

User Managed Backups

COLD BACKUP
This is the only way for a DBA to make a consistent backup of the database regardless of its mode (archivelog or noarchivelog). The steps are explained below:
  1. If the database is running, bring it down completely in a consistent mode (using shutdown NORMAL, IMMEDIATE or TRANSACTIONAL only). This makes sure the headers of all database files are updated to the same SCN
  2. Back up all datafiles, control files and parameter files using OS copy commands
  3. Start the database
  4. Archive all unarchived redo logs using the command below and copy them to the backup location
              alter system archive log current;
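Put together, a minimal cold backup session could look like this (a sketch only; the SID TECHDB, the datafile location /u01/oradata/TECHDB, the archived log location /u01/arch and the backup destination /backup/cold are assumptions for illustration):

$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> exit
$ cp -r /u01/oradata/TECHDB /backup/cold/            # assumed path: datafiles and control files
$ cp $ORACLE_HOME/dbs/spfileTECHDB.ora /backup/cold/ # parameter file (assumed SID TECHDB)
$ sqlplus / as sysdba
SQL> startup
SQL> alter system archive log current;
SQL> exit
$ cp /u01/arch/*.arc /backup/cold/                   # assumed archived log location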
OFFLINE BACKUP
Assume you want to take an offline backup of one or more tablespaces; the offline backup method will help you achieve this. Note that the SYSTEM tablespace and an UNDO tablespace with active undo cannot be taken offline. The steps are mentioned below, assuming the tablespace is TECHON_TBS.
1. Identify all datafiles associated with tablespace using dba_data_files view
select tablespace_name, file_name from dba_data_files where tablespace_name='TECHON_TBS';

2. Take the tablespace offline using normal priority (do not use temporary or immediate; those will require recovery when bringing the tablespace back online)
alter tablespace TECHON_TBS offline normal;

3. Backup all datafiles related to tablespace using OS copy command
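For example (a sketch; the datafile name and backup destination are assumptions, use the file names returned by the query in step 1):

$ cp /u01/oradata/TECHDB/techon_tbs01.dbf /backup/offline/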

4. Bring tablespace online
alter tablespace TECHON_TBS online;

5. Archive all unarchived redo logs and copy the archived logs to the backup location
alter system archive log current;

HOT BACKUPS

Hot backups were introduced back in Oracle version 6, but initially they were bound to the tablespace level only; from version 10 the whole database is supported as well. This allows us to take a hot backup of the database or a tablespace without needing to shut down the database.
When taking a hot backup, Oracle stops recording checkpoints to the associated datafiles.
Two things happen internally when the database is put in BEGIN BACKUP mode:
  • A hot backup flag in the datafile header is set
  • A checkpoint occurs, flushing all dirty blocks from memory to disk, synchronizing all the datafile headers to the same SCN and freezing the headers for consistency, protection and recoverability
Hot backups can be made of the whole database, a tablespace or even at the container level. This requires a recovery process after the backup is restored, so always take a backup of all required archived log files.
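You can check which datafiles are currently in backup mode by querying v$backup together with v$datafile (a quick sketch; STATUS shows ACTIVE while a datafile is in backup mode, NOT ACTIVE otherwise):

select d.name, b.status, b.change#, b.time
from v$backup b, v$datafile d
where b.file# = d.file#;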

Hot Backup of Whole Database
For taking this type of backup, we need to put our database in backup mode using 'alter database begin backup;'
This is the most common type of user managed backup used by DBAs across the world.
Below are the steps

1. Place the database in backup mode
alter database begin backup;

2. Backup all datafiles and parameter files using OS copy command

3. Pull database out of backup mode
alter database end backup;

4. Archive all unarchived redo logs and copy the required archived logs to the backup location
alter system archive log current;

5. Create a copy of the control file as a trace file
alter database backup controlfile to trace as '/control_file.trc';

Hot Backup of Tablespaces
Now we will learn how to take a hot backup of one or more tablespaces, i.e. backing up a tablespace while the database is running. While a tablespace is in backup mode, the database writes whole images of changed blocks into the redo stream so that the data can be made consistent during recovery. The steps are explained below.

1. Identify all datafiles associated with tablespace using dba_data_files view
select tablespace_name, file_name from dba_data_files where tablespace_name='TECHON_TBS';

2. Place the tablespace in backup mode using the begin backup command
alter tablespace TECHON_TBS begin backup;

3. Backup all datafiles related to tablespace using OS copy command

4. Pull the tablespace out of backup mode
alter tablespace TECHON_TBS end backup;

5. Archive all unarchived redo logs and copy the archived logs to the backup location
alter system archive log current;



Hot Backup of Container Database
As introduced in Oracle 12c, you can perform a user managed backup of the whole container database, the root only, or an individual PDB.

Whole Container Database:

Below are the steps mentioned 
1. Log in to the database with a user having sysdba or sysbackup privileges.
sqlplus /nolog

connect system@container1

2. Place your database in backup mode
alter database begin backup;

3. Identify all the datafiles related to the container database using the command below
select file_name from dba_data_files;

4. Copy all the datafiles using OS copy commands
5. Pull database out of backup mode
alter database end backup;

6. Archive all unarchived redo logs and copy the required archived logs to the backup location
alter system archive log current;

7. Create a copy of the control file as a trace file
alter database backup controlfile to trace as '/control_file.trc';

ROOT only or individual PDB

Below are the simple steps to complete a hot backup of the root container or an individual pluggable database.
1. Connect to the root or the individual pluggable database with a user having sysdba or sysbackup privileges.
sqlplus /nolog
connect system

alter session set container= tech_pdb;

2. Identify all the datafiles part of PDB using dba_data_files view
select file_name from dba_data_files;

3. Place pluggable database in backup mode
alter pluggable database tech_pdb begin backup;

4. Copy all the datafiles using OS copy command.
5. Pull PDB out of begin backup mode
alter pluggable database tech_pdb end backup;

6. Archive all unarchived redo logs and copy the required archived logs to the backup location
alter system archive log current;

7. Create a copy of the control file as a trace file
alter database backup controlfile to trace as '/control_file.trc';

Control File Backup
There are two ways to take a backup of the control file:
  1. Binary backup
  2. Text file backup
Having a valid backup of the control file is crucial for a successful recovery of the database.

Binary backup
One can take a binary copy of the control file via a SQL statement. This copy contains additional information such as the archived redo log history, offline ranges for read-only and offline tablespaces, tempfile entries, and RMAN backup sets and datafile copies.
The command to take a binary backup is:

alter database backup controlfile to '<file_name>';

Text file backup
This type of backup contains a CREATE CONTROLFILE statement based on the current control file in use. It can be taken while the database is in mount or open mode. It is generated as a trace file and can easily be modified.
Below are a few statements used to create a text-based backup of the control file:

1. alter database backup controlfile to trace;
2. alter database backup controlfile to trace resetlogs;
3. alter database backup controlfile to trace noresetlogs;
4. alter database backup controlfile to trace as '<file_name>';
5. alter database backup controlfile to trace as '<file_name>' reuse;
6. alter database backup controlfile to trace as '<file_name>' resetlogs;
7. alter database backup controlfile to trace as '<file_name>' noresetlogs;

Please comment and share if you feel this is helpful.

List Databases, Tables, schemas and other basic commands in PostgreSQL

 Atikh Shaikh     PostgreSQL

In previous articles we learned about the basics of PostgreSQL, its architecture, and the installation of PostgreSQL on Windows
(read here: Introduction to PostgreSQL, Installation of PostgreSQL).
Now we will learn basic commands to administer PostgreSQL databases.

Login to PostgreSQL
We can log in to a postgres database using psql with a few options such as the database name and the postgres user. Below we log in to the tech_pg database as user postgreu.
$ psql -d tech_pg -U postgreu
psql.bin (9.6.2.7)
Type "help" for help.

tech_pg=#
 

Listing the database available
A single PostgreSQL server instance can have multiple databases, and each database is stored under its own subdirectory inside the data directory. To list the databases we can use \l or query the pg_database catalog.
tech_pg=# \l
                                             List of databases
     Name     |   Owner    | Encoding |   Collate   |    Ctype    | ICU |        Access privileges
--------------+------------+----------+-------------+-------------+-----+----------------------------------
 tech_pg      | postgreu | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     |
 postgres     | postgreu | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     |
template0     | postgreu | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/postgreu                   +
              |            |          |             |             |     | postgreu=CTc/postgreu
 template1    | postgreu | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/ postgreu                   +
              |            |          |             |             |     | postgreu=CTc/postgreu
(4 rows)

tech_pg=#
tech_pg=# select datname from pg_database;
   datname
--------------
 postgres
 tech_pg
 template1
 template0
 (4 rows)

tech_pg=#


Connect to database (Switch Database)
By default a PostgreSQL server instance has 3 databases, i.e. postgres, template1 and template0.
One can connect to a database from the psql command line tool and switch to another database using \connect or \c.

tech_pg=# \c postgres
You are now connected to database "postgres" as user "postgreu".
postgres=#
postgres=# \connect tech_pg
You are now connected to database "tech_pg" as user "postgreu".
tech_pg=#

Checking Current Database

We can check the name of the database we are currently connected to using different methods, i.e. \c, \conninfo, select * from current_catalog, and the current_database() function, as illustrated below.

tech_pg-# \c tech_pg
You are now connected to database "tech_pg" as user "postgreu".
tech_pg-#
tech_pg=# \conninfo
You are connected to database "tech_pg" as user "postgreu" via socket in "/tmp" at port "5444".
tech_pg=#
tech_pg=# \c
You are now connected to database "tech_pg" as user "postgreu".
tech_pg=#
tech_pg=# select * from current_catalog;
 current_database
------------------
 tech_pg
(1 row)

tech_pg=# select current_database() ;
 current_database
------------------
 tech_pg
(1 row)

tech_pg=#

Listing the Tables
One can simply list the tables created in a specific database using the \dt command. In the example below there are no tables present in the database; if any were present, they would be listed with their details.


tech_pg=# select current_database() ;
 current_database
------------------
 tech_pg
(1 row)

tech_pg=# \dt
No relations found.
tech_pg=#
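Alternatively, tables can be listed with a catalog query instead of the \dt meta-command (a small sketch; here we filter on the public schema, which currently has no user tables, so it returns zero rows):

tech_pg=# select schemaname, tablename, tableowner from pg_tables where schemaname = 'public';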

Get list of all Schema
Using the \dn command, one can simply list all the schemas in PostgreSQL.

tech_pg=# \dn
         List of schemas
        Name        |   Owner
--------------------+------------
 postgreu           | postgreu
 dbms_job_procedure | postgreu
 pgagent            | postgreu
 public             | postgreu
(4 rows)

One can use information_schema.schemata view to list out all the schemas in current database 

tech_pg=# select catalog_name,schema_name,schema_owner from information_schema.schemata;
 catalog_name| schema_name    | schema_owner
-------------+--------------------+--------------
 tech_pg | pg_toast           | postgreu
 tech_pg | pg_temp_1          | postgreu
 tech_pg | pg_toast_temp_1    | postgreu
 tech_pg | pg_catalog         | postgreu
 tech_pg | public             | postgreu
 tech_pg | information_schema | postgreu
 tech_pg | sys                | postgreu
 tech_pg | dbo                | postgreu
(8 rows)

Comment for feedback or in case need additional information, share the post if you like it.

Everything you need to know about Oracle Data Pump

 Atikh Shaikh     Backup and Recovery, oracle, Oracle 12c

In this article we are going to discuss everything about the Data Pump utility, provided by Oracle from version 10g at no extra cost. The discussion is based on the points below:
  • Definition, need and use of data pump
  • Architecture of data pump
  • Different methods of data movement using data pump
  • Different scenarios of data pump
Definition, need and use of data pump
  • Data Pump is an Oracle database utility introduced in Oracle 10g as a replacement for the conventional export/import (exp/imp)
  • It offers high speed, fast object recovery, parallel read and write, data transformation and compression
  • Provides bulk data and metadata movement between oracle databases independent of oracle version and OS platform
  • Data Pump supports the modes below (see the example commands after this list)
    • Full database mode
    • Schema mode
    • Table mode
    • Tablespace mode
    • Transportable tablespace mode
  • Data Pump is much faster than conventional exp/imp due to its data access methods: direct path and external tables
  • It does all processing on the server rather than on the client; speed can be dramatically increased by using the PARALLEL parameter (typically 2x the number of CPUs)
  • It allows taking a point-in-time logical backup of the entire database or of any object
  • Data Pump jobs can be restarted without any loss of data, whether the stoppage was voluntary or involuntary. This is possible because Data Pump provides interactive command line control of its jobs
  • Virtually any type of object can be included in or excluded from Data Pump jobs using the simple INCLUDE and EXCLUDE parameters
  • It supports loading data directly from another database over a network link without needing to generate a dump file
  • Redo logging can be disabled during an import operation, so redo information is not written to disk or archived. This is very useful when transporting huge volumes of data (new in Oracle 12c)
  • Data Pump provides an option to export a view as a table; all objects depending on the view will also be exported as if they were defined on a table (new in Oracle 12c)
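As a quick illustration of those modes, an expdp command looks like this in full, schema and table mode (a sketch; system/<password>, the HR schema, and the directory object datapump_dr, which is created later in this article, are placeholders):

expdp system/<password> directory=datapump_dr dumpfile=full_db.dmp logfile=full_db.log full=y
expdp system/<password> directory=datapump_dr dumpfile=hr_schema.dmp logfile=hr_schema.log schemas=HR
expdp system/<password> directory=datapump_dr dumpfile=hr_emp.dmp logfile=hr_emp.log tables=HR.EMPLOYEES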
Important points about data pump
A Data Pump job manages a few types of files:

DUMPFILE: these files contain the data and metadata being moved
LOGFILE: the log file of the export or import operation
SQLFILE: the result of a SQLFILE operation; it contains the DDL of the objects in the dump file
Roles required for data pump operations

DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE

Directory Objects
A Data Pump job requires a directory object to generate the dump file, log file or SQL file; this is new in Data Pump compared to conventional exp/imp.
We need to create the directory physically at the OS level, then map it to a directory object in the database and grant read/write permission on it to the user.

On OS level
$mkdir -p /u01/backup
On database level
create or replace directory datapump_dr as '/u01/backup';
grant read, write on directory datapump_dr to <username>;

Data Pump Architecture

Oracle Data Pump Architecture consists of below 3 important parts
  1. Command line interface expdp/impdp
  2. DBMS_DATAPUMP package
  3. DBMS_METADATA package
[Figure: Oracle Data Pump architecture]

As shown in figure,
  • expdp/impdp use the DBMS_DATAPUMP package to execute export and import operations
  • when metadata needs to be manipulated, the DBMS_METADATA package is used to extract, manipulate and recreate the dictionary metadata
Two other parts of the architecture need attention:
External Table API
  • ORACLE_DATAPUMP access driver
  • This is responsible for allowing Oracle to read and write data in external tables
Direct Path API
  • Responsible for passing the data directly to the load engine in the server, using direct-path INSERT statements instead of conventional INSERT statements
Different Methods of Data Movement

In Oracle, Data Pump uses different methods to move data into and out of a database:
  • Datafile copy
  • Direct path
  • External tables
  • Conventional path
  • Network link
We will go in detail with few of them

Datafile Copy(Transportable Tablespace)
  • The quickest way to transport an entire tablespace to a target system; you also need to transfer the datafiles associated with the tablespace to the target server
  • The Data Pump job exports only the structural information (metadata) into the dump file
  • For transporting a tablespace, the tablespace must be self contained; to check this, use the procedure DBMS_TTS.TRANSPORT_SET_CHECK
  • Once this procedure has executed, check for any violations in the TRANSPORT_SET_VIOLATIONS view; if it does not return any rows, the tablespace is self contained
  • Below we are trying to transport the self contained tablespace TECHON_TBS
SQL> exec dbms_tts.transport_set_check('TECHON_TBS',TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected

  • At the source, place the tablespace in read only mode and generate a transportable tablespace set
SQL> alter tablespace TECHON_TBS read only;

Tablespace altered.

$expdp userid=" '/ as sysdba' "  dumpfile = techon_tbs_trans_Dec2018.dmp directory=datapump_dr transport_tablespaces=TECHON_TBS  logfile=techon_tbs_trans_Dec2018.log
  • Copy the transportable tablespace set (dump file and datafiles) to the target using any file transfer method (scp, sftp, ftp), then put the tablespace back in read write mode at the source
SQL> alter tablespace TECHON_TBS read write;

Tablespace altered.
  • Now import the tablespace set at the target. First import the metadata; if the schema is not present in the target database, create it first.
SQL>create user techon_owner identified by techon#123;

User created.

SQL>grant create session, resource to techon_owner;

Grant succeeded.

impdp userid=" '/ as sysdba' "  dumpfile=techon_tbs_trans_Dec2018.dmp directory=datapump_dr transport_datafiles=<datafile_list>  logfile=impdp_techon_tbs_trans_Dec2018.log
  • Once the import is completed, the tablespace remains in read only mode; bring it into read write mode using the command below.
SQL> alter tablespace techon_tbs read write;


Direct Path
  • Direct path operations bypass undo (hence reducing redo), so loading and unloading of data is faster. Data Pump uses the direct path method by default when the structure of a table allows it
  • Data Pump cannot load a table that has disabled unique indexes; when loading data in such a situation, the indexes should either be dropped or renamed first
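To spot such indexes before a load, you can query dba_indexes (a sketch; the owner TECHON_OWNER is just the schema used elsewhere in this article):

select index_name, uniqueness, status
from dba_indexes
where owner = 'TECHON_OWNER'
and status <> 'VALID';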
External Tables
  • If datafile copy (transportable tablespace) is not being used and the data cannot be moved with direct path, the external table method is used
  • It creates an external table that maps to the dump file data for the database table; the SQL engine is then used to move the data
  • Data pump uses external table with following cases
    • loading and unloading huge tables and partition in situations where it is advantageous to use parallel SQL capabilities.
    • loading table with global or domain indexes defined on them.
    • loading tables with active triggers or clustered tables, with encrypted columns, with fine grained access control enabled for inserts
    • loading table not created by import operations.
Conventional Path
  • When Data Pump is not able to use either the direct path or the external table method, the conventional path is used. This method can affect the performance of the Data Pump operation
Network Link
  • This method is enabled by specifying the NETWORK_LINK parameter in a Data Pump import
  • It does not need to generate a dump file
  • To export from a read only database, the NETWORK_LINK parameter is required
Different scenarios of data pump

We will discuss below different scenarios 
  1. Schema export and import
  2. Table export and import
  3. Export and importing pluggable database
  4. Using export to estimate space
  5. Import with metadata only
  6. Export view as tables
  7. Import data via network link
  8. Interaction with datapump jobs
1. Schema export and import

I have already written a dedicated article on schema refresh; please read Oracle Database Schema Refresh.

2. Table export and import

I am going to export the table, then drop it and import it back.
Export the table using the command below:
expdp techon_owner/techon#123 directory=datapump_dr dumpfile=table_dump.dmp logfile=table_logfile.log tables=TECHON_OWNER.candidate

Export: Release 12.1.0.2.0 - Production on Thu Dec 20 04:59:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_TABLE_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=table_dump.dmp logfile table_logfile.log tables=TECHON_OWNER.candidate
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "TECHON_OWNER"."CANDIDATE"                  6.023 KB       4 rows
Master table "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TECHON_OWNER.SYS_EXPORT_TABLE_01 is:
  /u01/backup/table_dump.dmp
Job "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 20 04:59:58 2018 elapsed 0 00:00:23 

The next step is to drop the table and then proceed with the import.
SQL> drop table CANDIDATE purge;

Table dropped.

impdp techon_owner/techon#123 directory =datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile tables =TECHON_OWNER.candidate

Import: Release 12.1.0.2.0 - Production on Thu Dec 20 06:16:08 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Master table "TECHON_OWNER"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TECHON_OWNER"."SYS_IMPORT_TABLE_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile tables=TECHON_OWNER.candidate
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TECHON_OWNER"."CANDIDATE"                  6.023 KB       4 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "TECHON_OWNER"."SYS_IMPORT_TABLE_01" successfully completed at Thu Dec 20 06:16:36 2018 elapsed 0 00:00:22

This completes the export and import of the table.

3. Export and importing pluggable database
Below are the steps to export and import a pluggable database.
Export the full pluggable database using the command below with the full=y parameter:
expdp TECHON_OWNER/techon#123@pdb_techon directory =datapump_dr dumpfile = full_pdb_dump.dmp logfile=full_pdb_logfile.log full=y

Now close the PDB and drop it including datafiles
SQL> alter pluggable database pdb_techon close;

Pluggable database altered.

SQL> drop pluggable database pdb_techon including datafiles;

Pluggable database dropped.

Now create the pluggable database from pdb$seed as shown below (also read create-pluggable-database-pdb-oracle-12c) and open it in read write mode.

SQL> create pluggable database pdb_techon admin user techon_admin identified by techon#123 roles = (DBA)
default tablespace techon_tbs datafile '+DATA01' size 100M
FILE_NAME_CONVERT=('+DATA01','+DATA01');

Pluggable database created.



SQL> alter pluggable database pdb_techon open;


Pluggable database altered.

Now we need to set up the user and grant privileges to perform the import operation.


SQL> alter session set container=pdb_techon;

Session altered.
SQL>create user techon_owner identified by techon#123 default tablespace techon_tbs quota unlimited on techon_tbs;

User created.
SQL>grant create session, resource, datapump_exp_full_database, datapump_imp_full_database to techon_owner;

Grant succeeded.
SQL>create or replace directory datapump_dr as '/u01/backup/';

Directory created.
SQL>grant read,write on directory datapump_dr to techon_owner;

Grant succeeded.

Now import the dump file using impdp:
impdp techon_owner/techon#123@pdb_techon directory=datapump_dr dumpfile=full_pdb_dump.dmp  logfile=impdp_full_pdb_logfile.log

4. Using export to estimate space
We will use the ESTIMATE_ONLY parameter to estimate the amount of disk space a schema mode export would consume. This operation does not create any dump file; it only estimates the space.

expdp TECHON_OWNER/techon#123 directory =datapump_dr logfile=estimate_logfile.log schemas=TECHON_OWNER estimate_only=y

Export: Release 12.1.0.2.0 - Production on Thu Dec 20 06:47:43 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_SCHEMA_01":  TECHON_OWNER/******** directory=datapump_dr logfile=estimate_logfile.log schemas=TECHON_OWNER estimate_only=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "TECHON_OWNER"."CANDIDATE"                     64 KB
Total estimation using BLOCKS method: 64 KB
Job "TECHON_OWNER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Dec 20 06:47:50 2018 elapsed 0 00:00:02

5. Import with metadata only 
One can use the parameter content=metadata_only to import only the metadata of a table.

impdp techon_owner/techon#123 directory =datapump_dr dumpfile=table_dump.dmp logfile=impdp_logfile content=metadata_only table_exists_action=replace tables =TECHON_OWNER.candidate

Notice the table_exists_action parameter in the impdp command; it comes with the 4 options below:

APPEND: loads rows from the source and leaves existing rows unchanged
SKIP: leaves the table as it is and moves on to the next object; not a valid option if CONTENT=DATA_ONLY is specified
TRUNCATE: deletes existing rows and loads data from the source
REPLACE: drops the existing table, then recreates and loads it from the source

6. Export view as tables
To illustrate this, I have created a view candidate_view on the candidate table. Now take an export of the view as a table.

expdp TECHON_OWNER/techon#123 directory =datapump_dr dumpfile=viewastable_dump.dmp logfile=viewastable_logfile.log views_as_tables=TECHON_OWNER.candidate_view

Export: Release 12.1.0.2.0 - Production on Thu Dec 20 07:08:32 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Starting "TECHON_OWNER"."SYS_EXPORT_TABLE_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=viewastable_dump.dmp logfile=viewastable_logfile.log views_as_tables=TECHON_OWNER.candidate_view
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
. . exported "TECHON_OWNER"."CANDIDATE_VIEW"             6.031 KB       4 rows
Master table "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TECHON_OWNER.SYS_EXPORT_TABLE_01 is:
  /u01/backup/viewastable_dump.dmp
Job "TECHON_OWNER"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 20 07:08:44 2018 elapsed 0 00:00:07

To check that it worked, we generate a SQL file using impdp; this produces the DDL statements for the objects that were exported.

impdp TECHON_OWNER directory =datapump_dr dumpfile=viewastable_dump.dmp sqlfile=CANDIDATE_VIEW.sql

Import: Release 12.1.0.2.0 - Production on Thu Dec 20 07:15:07 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
Master table "TECHON_OWNER"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "TECHON_OWNER"."SYS_SQL_FILE_FULL_01":  TECHON_OWNER/******** directory=datapump_dr dumpfile=viewastable_dump.dmp sqlfile=CANDIDATE_VIEW.sql
Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
Job "TECHON_OWNER"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 20 07:15:15 2018 elapsed 0 00:00:02

$ cat CANDIDATE_VIEW.sql
-- CONNECT TECHON_OWNER
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/VIEWS_AS_TABLES/TABLE
CREATE TABLE "TECHON_OWNER"."CANDIDATE_VIEW"
   (    "CID" NUMBER(10,0) NOT NULL ENABLE,
        "CNAME" VARCHAR2(20 BYTE),
        "CDOJ" DATE
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  TABLESPACE "TOOLS" ;

We can see it worked successfully.

7. Import data via network link

Using Data Pump, we can import metadata and data directly into a target database by using a database link to connect to the source and pull the information over the network.
The benefits of this are:
  • Dump file generation is not required
  • There is no need to execute an export command
  • It exports from the source and imports into the target immediately
The import operation must be done by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role.

We will be importing a table from the source to the target.
First create a database link in the target database under the desired user:
SQL> create database link techon_link connect to TECHON_OWNER identified by techon#123 using 'techon.pfizer.com';

Database link created.

Try accessing the table using database link 

SQL> select * from CANDIDATE@techon_link ;

       CID CNAME                CDOJ
---------- -------------------- ------------------
       101 Rajesh               05-JUN-18
       102 Ian                  05-AUG-18
       103 Chris                25-JUN-18
       104 John                 25-JUL-18

Now we will import the table using an impdp operation through the database link, using the NETWORK_LINK parameter:

impdp TECHON_OWNER/techon#123 tables=TECHON_OWNER.CANDIDATE directory=datapump_dr network_link=techon_link logfile=impdp_network.log

We can see the table got imported without any dump file.

SQL> select * from candidate;

       CID CNAME                CDOJ
---------- -------------------- ------------------
       101 Rajesh               05-JUN-18
       102 Ian                  05-AUG-18
       103 Chris                25-JUN-18
       104 John                 25-JUL-18


8. Interaction with datapump jobs

While an export or import job is running, we can interact with that job via the command line.
We can attach to the job using the command below:

$expdp techon_owner/techon#123 attach=job_name

The job name can be extracted from the dba_datapump_jobs view.
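For example (a minimal sketch):

select owner_name, job_name, operation, state from dba_datapump_jobs;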
From the interactive prompt we can stop the job, restart it, or increase the parallelism:
Export> STOP_JOB=immediate
Export> START_JOB
Export> PARALLEL=10
Export> HELP

Comment below if additional information is required.