Atikh's DBA blog

Oracle : The oerr Utility (Oracle Error)

Atikh Shaikh     Oracle 12c, Oracle

  • The oerr utility (Oracle error) is provided only with Oracle Database on UNIX platforms
  • oerr is not an executable, but a shell script that retrieves messages from installed message files
  • oerr is not provided on Windows, since it uses awk commands to retrieve the requested text from the file
Syntax
oerr facility error

where
facility is the prefix to the error number; this includes ORA, PLS, EXP, etc.
error is the actual error number
For example
$ oerr ora 01652
01652, 00000, "unable to extend temp segment by %s in 
tablespace %s"
// *Cause:  Failed to allocate an extent of the required number of blocks for
//          a temporary segment in the tablespace indicated.
// *Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
//          files to the tablespace indicated.

$ oerr EXP 00008
00008, 00000, "ORACLE error %lu encountered"
// *Cause:  Export encountered the referenced Oracle error.
// *Action: Look up the Oracle message in the ORA message chapters of this
//          manual and take appropriate action.

The output shows the "Cause" of the error and the recommended "Action".

How oerr works

  • To best understand how oerr works, review the shell script oerr.ksh available in $ORACLE_HOME/bin
  • The commands in oerr.ksh confirm that $ORACLE_HOME is set; if not, the utility terminates.
  • Facility information is read from the facility file located at $ORACLE_HOME/lib/facility.lis. Below is a portion of the facility.lis file

aud:ord:*:
amd:cwmlite:*:
av:pfs:*:
bbed:rdbms:*:
brw:browser:*:
clsr:racg:*:
ds:office:*:
dsp:office:*:
dsw:office:*:
dsz:office:*:
ebu:obackup:*:
evm:evm:*:
exp:rdbms:*:
fmc:forms40:*:
iac:forms:*:
iad:forms:*:
lcd:rdbms:*:
oao:office:*:
obk:obackup:ebu:
omv:office:*:
opw:rdbms:*:
ora:rdbms:*:
osn:network:*:
osnq:network:*:

The facility file contains three mandatory data items and one optional data item.
Mandatory
  •   Facility
  •  Component
  •  Name of the alias for the component, if any; otherwise a * is used
In the facility.lis excerpt above, ora is the facility, rdbms is the component, and since it has no alias, * is used.
Optional
  •   Description
Using the facility name provided on the command line, oerr retrieves the component for that facility.
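The facility-to-component lookup can be sketched in shell, roughly the way oerr.ksh does it with awk. This is a minimal sketch: the real script has more error handling, and the sample facility.lis below is just the excerpt from above written to a scratch path for illustration.

```shell
# Minimal sketch of the facility -> component lookup done by oerr.ksh.
# The sample file mimics $ORACLE_HOME/lib/facility.lis (colon-separated).
cat > /tmp/facility.lis <<'EOF'
exp:rdbms:*:
obk:obackup:ebu:
ora:rdbms:*:
EOF

lookup_component() {
  # field 1 = facility, field 2 = component, field 3 = alias (or *)
  awk -F: -v f="$1" '$1 == f { print $2 }' /tmp/facility.lis
}

lookup_component ora   # prints: rdbms
lookup_component obk   # prints: obackup
```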

For example, for oerr ora 01652, oerr uses the rdbms component; using this information, the appropriate message file can be retrieved:
Msg_File=$ORACLE_HOME/$Component/mesg/${Facility}us.msg

for ORA errors


Msg_File=$ORACLE_HOME/rdbms/mesg/oraus.msg

Once this path is resolved, the message file is searched and the cause and action for the requested error in that facility are displayed.
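The final lookup step can be emulated end to end against a tiny hand-made message file. This is illustrative only: the entry format mimics oraus.msg, and the scratch ORACLE_HOME path is made up for the demo.

```shell
# End-to-end emulation of oerr against a scratch ORACLE_HOME.
ORACLE_HOME=/tmp/oerr_demo
mkdir -p "$ORACLE_HOME/rdbms/mesg"
cat > "$ORACLE_HOME/rdbms/mesg/oraus.msg" <<'EOF'
01652, 00000, "unable to extend temp segment by %s in tablespace %s"
// *Cause:  Failed to allocate an extent of the required number of blocks.
// *Action: Use ALTER TABLESPACE ADD DATAFILE to add files to the tablespace.
01653, 00000, "unable to extend table %s.%s by %s in tablespace %s"
EOF

facility=ora; component=rdbms; code=01652
msg_file="$ORACLE_HOME/$component/mesg/${facility}us.msg"

# Print the matching entry plus the // comment lines that follow it,
# stopping at the next numbered entry.
awk -v c="$code" '
  $0 ~ "^"c","    { show = 1; print; next }
  show && /^\/\// { print; next }
  show            { exit }
' "$msg_file"
```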

oerr on Windows

  • oerr is only available on UNIX, but it does not take much code to access the message file and display the message on Windows
  • To do this, the actual message file needs to be copied to the Windows machine and placed in the same directory as the Java program.
  • The Java program (oerr.java) reads the message file and displays the text associated with the error code.
  • The Java program reads the message file line by line until it encounters the actual error code
For example
C:\oracle\bin\java oerr ora 01652

Create custom Message Files

There are a number of cases where oerr does not show any message text. For such error codes, we can create our own customized message file with the error code.
For example

$ oerr ora 00942
00942, 00000, "table or view does not exist"
// *Cause:
// *Action:

The following can then be used to display the customized message:

oerr technofile 00942


where technofile is the customized error message facility. A customized message file can be created with the steps below:
  1. Add the facility in facility.lis file
  2. Create a directory that contains new message file
  3. Create the actual message file
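The three steps can be sketched in shell. This is a hedged sketch under a scratch ORACLE_HOME: in real use you would edit the actual $ORACLE_HOME/lib/facility.lis, and technofile is the made-up facility name from the example above.

```shell
# Sketch of creating a custom message facility "technofile".
# Everything runs under a scratch ORACLE_HOME for illustration only.
ORACLE_HOME=/tmp/custom_oerr
mkdir -p "$ORACLE_HOME/lib"

# Step 1: add the facility to facility.lis (facility:component:alias:)
echo 'technofile:technofile:*:' >> "$ORACLE_HOME/lib/facility.lis"

# Step 2: create the directory that will contain the new message file
mkdir -p "$ORACLE_HOME/technofile/mesg"

# Step 3: create the actual message file, following the oraus.msg format
cat > "$ORACLE_HOME/technofile/mesg/technofileus.msg" <<'EOF'
00942, 00000, "table or view does not exist"
// *Cause:  The referenced table or view does not exist, or you lack
//          the privileges to see it.
// *Action: Verify the object name and your grants, then retry.
EOF
```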

Alternatives for oerr 

ORACLE TechNet  - http://technet.oracle.com
ORACLE Metalink – http://support.oracle.com


Oracle : Drop Pluggable Database (PDB) in 12c/19c/21c/23c

Atikh Shaikh     oracle, Oracle 12c

After learning how to create a pluggable database, we will discuss dropping a pluggable database (PDB) in 12c, 19c, and later versions.
  • Dropping a pluggable database is similar to dropping any other regular database; you have two options related to its datafiles:
      • Dropping PDB including datafiles
      • Dropping PDB keeping datafiles

Here we will drop the pluggable database PDB_TECHNO; below are the steps for the same.
  • Check the status of the pluggable database you want to drop using v$containers
SQL> select con_id, name,open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PDB_1                          READ WRITE
         4 PDB_TECHNO                     READ WRITE
You can see this PDB exists and is in READ WRITE mode.
  • Close the pluggable database using the alter pluggable database command before dropping it, and check the status again
SQL> alter pluggable database PDB_TECHNO close;

Pluggable database altered.

SQL> select con_id, name,open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PDB_1                          READ WRITE
         4 PDB_TECHNO                     MOUNTED


Now drop the PDB using the INCLUDING DATAFILES clause


SQL> drop pluggable database PDB_TECHNO including datafiles;

Pluggable database dropped.

SQL> select con_id, name,open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PDB_1                          READ WRITE

In this case we dropped the pluggable database including its datafiles because we do not need them. If we need the datafiles even after dropping the pluggable database, we can simply issue the drop command without the "including datafiles" option.
  • By default Oracle keeps the datafiles when dropping a pluggable database, so both commands below are equivalent
drop pluggable database PDB_TECHNO keep datafiles;

drop pluggable database PDB_TECHNO;

If we try to drop a pluggable database that is not closed (i.e. open) using INCLUDING DATAFILES, it throws an error, but the same PDB can be dropped with the KEEP DATAFILES option

SQL> drop pluggable database PDB_TECHNO including datafiles;
drop pluggable database PDB_TECHNO including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDB_TECHNO is not closed on all instances.

SQL> drop pluggable database PDB_TECHNO keep datafiles;

Pluggable database dropped.

In this way we can drop any pluggable database. Dropping a container database is like dropping any other database, but we have to be sure before dropping it, as it drops all pluggable databases under it, including the seed PDB.

The DUAL table in oracle

Atikh Shaikh     oracle

Everyone must have come across the DUAL table in Oracle Database; most DBAs use it daily but do not know what exactly it is. Here we will discuss it.
The DUAL table is a special one-row, one-column table available by default in Oracle Database.

  • The DUAL table is owned by the SYS schema in Oracle, i.e. DUAL is part of the Oracle data dictionary
  • DUAL has only one row and a single column named DUMMY of VARCHAR2 datatype with the value X.
  • Selecting from the DUAL table is useful for computing constant expressions with a SELECT statement; as DUAL has only one row, the constant is returned only once.
  • The advantage of DUAL is that the optimizer understands DUAL is a special one-row, one-column table
  • There are times when calculations need to be performed on values that are not available in any database table. In such arithmetic calculations no table is referenced, only numeric values are used. To perform such calculations, a SELECT query can be used to output the calculated values, and a SELECT always requires a table in the FROM clause, without which it fails. DUAL fills that role.
  • Now we will perform some operations using the DUAL table
Describing DUAL table
SQL> DESC DUAL
 Name         Null?    Type
 ---------- -------- --------------
 DUMMY                 VARCHAR2(1)

Selecting from DUAL table
SQL> select * from DUAL;

D
-
X

Performing arithmetic operations
SQL> select 25*25 from dual;

     25*25
----------
       625
Selecting the date using DUAL
SQL> select sysdate from dual;

SYSDATE
---------
17-JAN-19
Generating DDL statements using DUAL
SQL> select dbms_metadata.get_ddl('TABLE', 'DUAL') from dual;

DBMS_METADATA.GET_DDL('TABLE','DUAL')
------------------------------------------------------------------
   CREATE TABLE "SYS"."DUAL"
   (    "DUMMY" VARCHAR2(1)
   ) PCTFREE 10 PCTUSED 4

The uses of DUAL are not limited to these operations; we can do much more with it.

Oracle RMAN: Fast Incremental Backups

Atikh Shaikh     Backup and Recovery, oracle, Oracle 12c, RMAN

There are two ways to achieve fast incremental backups, as mentioned in the last section of this post:
  1. Block change tracking (BCT) file
  2. Multi-section incremental backups using section size (12c)
We will discuss these two methods in detail.

Block change tracking (BCT) file
  • While performing an incremental backup, RMAN searches for modified blocks whose system change number (SCN) is higher than the incremental start SCN of the last incremental level backup
  • In a normal incremental backup strategy, identifying the modified blocks requires reading the entire datafile
  • If incremental backups need to be fast, we must skip scanning the entire datafile to find modified blocks; Oracle 10g introduced a more refined way of doing this using the block change tracking (BCT) file.
  • This file keeps an entry for each block modified since the last full backup
  • At the time of the next incremental backup, RMAN reads details from this file only and avoids scanning the whole datafile
  • The BCT file uses bitmap structures to update and maintain the information about changed blocks
  • The BCT file is neither enabled by default nor generated at the time of database creation.
  • Its status can be viewed using the database view v$block_change_tracking
SQL> select filename, status from v$block_change_tracking;

FILENAME   STATUS
---------- ----------
          DISABLED
SQL>

We can enable the block change tracking file using the command below:

SQL> alter database enable block change tracking using file '/u01/oracle/bct/config.f';

Database altered.

SQL> select filename,status from v$block_change_tracking;

FILENAME                               STATUS
-------------------------------------- ----------
/u01/oracle/bct/config.f                ENABLED

  • The default location for the BCT file is $ORACLE_HOME/dbs
  • The default size is about 10 MB, roughly 1/30,000 of the total database size at the time of creation.
  • When the BCT file is enabled, the Oracle database uses the background process change tracking writer (CTWR) to record the bitmaps of blocks being modified per datafile
  • The CTWR background process uses a memory area, the CTWR dba buffer, allocated from the large pool
Its current size can be viewed using the v$sgastat view

SQL> select pool,name, bytes from v$sgastat where name like 'CTWR%';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   CTWR dba buffer               1525808


The BCT file speeds up the already fast incremental backups even more, and it does not need any additional administration by the DBA.
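The sizing rule above can be sanity-checked with shell arithmetic. This is our reading of the rule (roughly 1/30,000 of the database size, with about 10 MB as a floor); the exact formula is internal to Oracle.

```shell
# Rough expected BCT file size for a 200 GB database, using the
# 1/30,000 rule mentioned above with a ~10 MB floor (assumption).
db_size_gb=200
db_size_mb=$((db_size_gb * 1024))   # 204800 MB
bct_mb=$((db_size_mb / 30000))      # 6 MB (integer division)
floor_mb=10
if [ "$bct_mb" -lt "$floor_mb" ]; then
  bct_mb=$floor_mb
fi
echo "expected BCT file size: ~${bct_mb} MB"   # ~10 MB
```

So for databases up to a few hundred GB, the 10 MB floor dominates; the 1/30,000 term only matters for very large databases.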

Multi-Section Incremental Backups
  • As we have discussed here, the SECTION SIZE clause helps improve the performance of backups of very large datafiles and databases; the same can be used in an incremental backup strategy as well.
  • To use the SECTION SIZE clause with level 1 incremental backups, the compatible parameter must be set to 12.0; for level 0 incremental backups, compatible can be 11.0
  • A multi-section incremental backup can be completed using the command below
RMAN> backup incremental level 1 section size 500m database;

Kindly comment below if any additional information is required
Read 
      • RMAN Introduction
      • Oracle RMAN Commands
      • Incremental Backups

ORA-19804, ORA-19809: limit exceeded for recovery files

Atikh Shaikh     Backup and Recovery, oracle, Oracle 12c, RMAN

I was taking a backup of a database as an image copy of size 200 GB when I faced ORA errors ORA-19809 and ORA-19804, as shown below


RMAN> run
{
allocate channel ch1 type disk;
backup as copy SECTION SIZE 800M database;
release channel ch1;
}2> 3> 4> 5> 6>

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=2330 device type=DISK

Starting backup at 07-JAN-19
channel ch1: starting datafile copy
input datafile file number=00013 name=+DATA/TECH_DB/DATAFILE/tech_tbs_data.277.996526534
backing up blocks 1 through 153600
released channel: ch1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch1 channel at 01/07/2019 05:52:11
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 322123595776 bytes disk space from 107374182400 limit

RMAN> 
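The two byte counts in the ORA-19804 message convert to round numbers, which makes the mismatch obvious. Simple shell arithmetic on the values from the error above:

```shell
# Convert the byte counts from the ORA-19804 message to GB.
needed_bytes=322123595776    # space RMAN could not reclaim
limit_bytes=107374182400     # db_recovery_file_dest_size
gib=$((1024 * 1024 * 1024))
echo "space needed : $((needed_bytes / gib)) GB"   # 300 GB
echo "FRA limit    : $((limit_bytes / gib)) GB"    # 100 GB
```

Roughly 300 GB of recovery area space was needed against a 100 GB limit.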


Below is the solution for the same.
I found that the database size is 200 GB while the FRA size is set to only 100 GB, which caused this error.

Commands to check the size of the FRA:

SQL> show parameter db_recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FLASH
db_recovery_file_dest_size           big integer 100G
SQL>


Commands to check the size of the database and the usage of the FRA:

SQL > select sum(bytes)/1024/1024/1024 as "Size Of DB" from dba_data_files;
              Size Of DB
-------------------------
                      200

SQL> col NAME for a10
SQL> SELECT NAME, round(space_limit/1024/1024/1024,2) TOTAL_GB, round(space_used/1024/1024/1024,2) USED_GB, round((space_limit-space_used+space_reclaimable)/1024/1024/1024,2) AVAILABLE_GB, ROUND((space_used-space_reclaimable)/space_limit * 100,1) PERCENT_FULL FROM v$recovery_file_dest;

NAME         TOTAL_GB    USED_GB AVAILABLE_GB PERCENT_FULL
---------- ---------- ---------- ------------ ------------
+FLASH            100       1.09        99.42           .6


I changed the size of the FRA by setting DB_RECOVERY_FILE_DEST_SIZE to 250 GB, after which the backup ran without any issue.

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=250G SCOPE=BOTH;


Once this is done, fire the RMAN backup command; it will initiate the backup and will not throw this error.

Oracle RMAN : File Section for backup of large datafiles

Atikh Shaikh     Backup and Recovery, oracle, Oracle 12c, RMAN

After learning the RMAN introduction and taking the first backup of a database, we will go through file sections for RMAN backups of large datafiles and databases.
  • With the introduction of BigFile tablespaces in Oracle, large datafiles are common these days. Such large datafiles take an enormous amount of time to back up.
  • By using multiple channels we can make backups faster, but the issue remains the same for a single large datafile, as channels support inter-file parallelism, not intra-file parallelism
  • To resolve this, we can logically divide a large datafile into smaller chunks using the SECTION SIZE option of the backup command
  • When a large datafile is broken into many smaller chunks, each chunk is treated as a separate file, and each chunk can be backed up by an individual channel, i.e. intra-file parallelism.
  • SECTION SIZE can be specified in KB, MB, or GB, and the backup is written in backup set format
  • The SECTION SIZE clause can be used in either a full database backup or a partial backup such as a tablespace or datafile backup
  • In the example below, we take a backup of the SYSTEM tablespace using SECTION SIZE
RMAN> run
{
allocate channel ch1 type disk;
BACKUP SECTION SIZE 800M TABLESPACE SYSTEM;
release channel ch1;
}2> 3> 4> 5> 6>

allocated channel: ch1
channel ch1: SID=2330 device type=DISK

Starting backup at 07-JAN-19
channel ch1: starting full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/TECH_DB/DATAFILE/system.285.996743219
backing up blocks 1 through 102400
channel ch1: starting piece 1 at 07-JAN-19
channel ch1: finished piece 1 at 07-JAN-19
piece handle=+FLASH/TECH_DB/BACKUPSET/2019_01_07/nnndf0_tag20190107t054719_0.328.996904041 tag=TAG20190107T054719 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:07
channel ch1: starting full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/TECH_DB/DATAFILE/system.285.996743219
backing up blocks 102401 through 204800
channel ch1: starting piece 2 at 07-JAN-19
channel ch1: finished piece 2 at 07-JAN-19
piece handle=+FLASH/TECH_DB/BACKUPSET/2019_01_07/nnndf0_tag20190107t054719_0.327.996904049 tag=TAG20190107T054719 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: starting full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/TECH_DB/DATAFILE/system.285.996743219
backing up blocks 204801 through 262144
channel ch1: starting piece 3 at 07-JAN-19
channel ch1: finished piece 3 at 07-JAN-19
piece handle=+FLASH/TECH_DB/BACKUPSET/2019_01_07/nnndf0_tag20190107t054719_0.326.996904051 tag=TAG20190107T054719 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: starting full datafile backup set
channel ch1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ch1: starting piece 1 at 07-JAN-19
channel ch1: finished piece 1 at 07-JAN-19
piece handle=+FLASH/TECH_DB/BACKUPSET/2019_01_07/ncsnf0_tag20190107t054719_0.325.996904055 tag=TAG20190107T054719 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JAN-19

released channel: ch1

RMAN>
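The piece boundaries in the log above (blocks 1-102400, 102401-204800, 204801-262144) follow from simple arithmetic, assuming an 8 KB database block size (an assumption; the block size is not stated in the post):

```shell
# How many 800 MB sections a datafile of 262144 blocks needs,
# assuming an 8 KB block size (matches the log's piece boundaries).
file_blocks=262144
block_kb=8
section_mb=800
blocks_per_section=$((section_mb * 1024 / block_kb))   # 102400 blocks
sections=$(( (file_blocks + blocks_per_section - 1) / blocks_per_section ))
echo "blocks per section: $blocks_per_section"   # 102400
echo "sections (pieces) : $sections"             # 3
```

Three data sections, plus the final control file/SPFILE piece that RMAN appends on its own.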


  • From Oracle 12c onward, the SECTION SIZE clause can be used for image copy format backups as well; the command looks like the one below
RMAN > backup as copy SECTION SIZE 500M database;


  • The SECTION SIZE method helps a lot in reducing elapsed time when the database is huge.
Feel free to comment here and share if you like it.
[Also read: RMAN disk backup]


MongoDB 4.0 New Features

Atikh Shaikh     MongoDB

MongoDB 4.0 was released on Aug 6, 2018, with tremendous new features, especially ACID transactions.
I have listed the MongoDB 4.0 new features below.

Read concern snapshot
  • MongoDB 4.0 introduces a new read concern level, snapshot, for multi-document transactions.
  • Read concern helps control the consistency and isolation properties of reads
  • This new feature ensures that a consistent view of the data is returned to the client, even while the data is being modified concurrently

Data Type Conversions
  • Data type conversion is new in MongoDB 4.0; it is part of the aggregation framework and can be used with the help of the $convert expression
New String Operators
Below are the string operators added in the new version 4.0

  • $ltrim : Removes whitespace or specified characters from the beginning of a string
  • $rtrim : Removes whitespace or specified characters from the end of a string
  • $trim : Removes whitespace or specified characters from the beginning and end of a string
Read Preference
  • In the previous release, MongoDB blocked secondary reads while oplog entries were applied.
  • Now there is improved read latency and increased throughput from the replica set, while maintaining a consistent ordering of data
Sharding Operations and Migration throughput
  • Sharded migrations are now up to 40% faster, helping distribute data better
  • Operations can list and kill queries running in a sharded cluster.
Locking System
  • By default, multi-document transactions wait 5 milliseconds to acquire the locks required by operations in the transaction
  • If a transaction cannot acquire its required locks within 5 milliseconds, the transaction aborts
The latest minor release is 4.0.5 (Dec 20, 2018).
Below will be featured in the upcoming MongoDB 4.2
  • Removal of the MMAPv1 storage engine
  • Removal of a few commands and methods like group, eval, copydb, etc.
  • Security improvements like adding TLS support and deprecating SSL
  • Aggregation improvements
  • Transaction Manager
[Also read: Introduction to MongoDB, MongoDB storage engines]


Copyright © Atikh's DBA blog | Powered by Blogger