Atikh's DBA blog

Postgres spooling file and other command line options

 Atikh Shaikh     PostgreSQL     3 comments   

I wanted to drop all the tables in a particular Postgres database. There were around 90 tables, and dropping them one by one would have been tedious and time-consuming, so I decided to generate a DROP script and then execute it. The challenge for me was spooling output to a file in psql, since it works slightly differently from SQL*Plus.
In the example below, I spool the DROP statements to a file and then execute that file after logging in to the Postgres database.

Log in to the database tech_db as user tech_usr


$ psql -d tech_db -U tech_usr

Password for user tech_usr:

psql.bin (9.6.2.7)

Type "help" for help.

tech_db=#


Check the connection information

 

tech_db=# \conninfo

You are connected to database "tech_db" as user "tech_usr" via socket in "/tmp" at port "5442".

tech_db=#

Check the tables in the database


tech_db=# \dt

                        List of relations

 Schema |             Name              | Type  |      Owner

--------+-------------------------------+-------+-----------------

 public | admin_events                  | table | tech_db

 public | associat_policy               | table | tech_db

 ...

 ...

 ...

 public | origins_tab                   | table | tech_db

(90 rows)

Create a DROP script for the tables


tech_db=# \o drop_tables.sql

tech_db=# select 'drop table if exists "' || tablename || '" cascade;'   from pg_tables where schemaname = 'public';

tech_db=# \o

The file drop_tables.sql will now have been created in the current directory:


$ ls -lsrth
8.0K -rw-r--r-- 1 technodba technodba 4.9K Jan 24 08:42 drop_tables.sql

Execute the script at the psql prompt


tech_db=# \o drop_tables.log

tech_db=# \i drop_tables.sql

tech_db=# \o

Now check the tables 


tech_db=# \dt

No relations found.

tech_db=#
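As an aside, on psql 9.6 and later the intermediate spool file can be skipped entirely with \gexec, which sends each row of a query result back to the server as a new statement. A minimal sketch of the same cleanup (assuming the tables live in the public schema; format('%I') safely quotes the identifier):

```sql
-- Generate one DROP statement per table; \gexec executes each result row
select format('drop table if exists %I cascade', tablename)
from pg_tables
where schemaname = 'public' \gexec
```

Note the query must not be terminated with a semicolon, since \gexec itself sends it to the server.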

Playing with the command line
As we saw, by default the query text is not written to the output file; only its result is. To include the query in the output file, one option is to issue \qecho with the query text:

tech_db=# \o tables.sql

tech_db=# select * from origins_tab;

tech_db=# \qecho select * from origins_tab;

tech_db=# \o

tech_db=# \q

$

$ ls -lsrth

total 4.0K

4.0K -rw-r--r-- 1 technodba technodba 77 Jan 28 04:02 tables.sql

$

$ cat tables.sql

 client_id | value

-----------+-------

(0 rows)

 

select * from origins_tab;

$
For the first query we did not use \qecho, so only the result was written to the file; for the second, \qecho wrote the query text itself.

To write the query in the output
Using \set ECHO queries, the query itself is written to the output along with its result. The difference is shown below:


tech_db=# select * from origins_tab;

 client_id | value

-----------+-------

(0 rows)

 

tech_db=#

tech_db=# \set ECHO queries

tech_db=# select * from origins_tab;

select * from origins_tab;

 client_id | value

-----------+-------

(0 rows)

 

tech_db=#


Feel free to comment and give feedback on this. Check out the other posts as well.

 

MySQL 8.0 New Features

 Atikh Shaikh     mysql     1 comment   

MySQL 8.0 has been released with a lot of new features. We will discuss them briefly; please visit the official website for full details of the features.

The changes can be divided into three types; in this post we will discuss features added in MySQL 8.0, up to and including the latest release, 8.0.14 (22 Dec 2018):
  • Features added in 8.0
  • Deprecated
  • Completely removed
Features added in 8.0
Data Dictionary
Previously the data dictionary was stored in metadata files and non-transactional tables; MySQL 8.0 incorporates a transactional data dictionary that stores information about database objects.
Atomic DDL
An atomic DDL combines the data dictionary objects, storage engine operations and binary log writes associated with DDL operation in a single atomic transaction.
Security and Account Management
  • The grant tables are now InnoDB (transactional) tables.
  • A new caching_sha2_password authentication plugin is available and is the default in MySQL 8.0. Like the sha256_password plugin, caching_sha2_password implements SHA-256 password hashing, but uses caching to address latency issues at connect time.
  • MySQL 8.0 supports roles, i.e. named collections of privileges.
  • MySQL can now maintain password history, enabling restrictions on the reuse of passwords.
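A minimal sketch of roles and password history together (the role, user, and schema names here are hypothetical):

```sql
-- Create a role, grant privileges to it, then grant the role to a user
CREATE ROLE 'app_read';
GRANT SELECT ON app_db.* TO 'app_read';

-- Forbid reuse of the last 5 passwords for this account
CREATE USER 'bob'@'localhost' IDENTIFIED BY 'S3cret!' PASSWORD HISTORY 5;
GRANT 'app_read' TO 'bob'@'localhost';
SET DEFAULT ROLE 'app_read' TO 'bob'@'localhost';
```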
Resource Management
  • MySQL now supports the creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group.
InnoDB environment
  • The current maximum auto-increment counter value is written to the redo log each time the value changes, and is saved to an engine-private system table on each checkpoint.
  • A server restart no longer cancels the effect of the AUTO_INCREMENT = N table option.
  • The new dynamic variable innodb_deadlock_detect may be used to disable deadlock detection.
  • The INFORMATION_SCHEMA.INNODB_CACHED_INDEXES table reports the number of index pages cached in the InnoDB buffer pool for each index.
  • InnoDB temporary tables are now created in the shared temporary tablespace ibtmp1.
  • The ADD PARTITION, DROP PARTITION, COALESCE PARTITION, REORGANIZE PARTITION, and REBUILD PARTITION options of ALTER TABLE are supported by native partitioning in-place APIs.
  • The InnoDB storage engine now uses the MySQL data dictionary instead of its own storage-engine-specific data dictionary.
  • The mysql system tables and data dictionary tables are now created in a single InnoDB tablespace file named mysql.ibd in the MySQL data directory.
MySQL 8.0 changes to Undo Tablespace
  • Undo logs are no longer created in the system tablespace. As of 8.0.14, additional undo tablespaces can be created in a chosen location at run time using the CREATE UNDO TABLESPACE command.
  • Default values for variables that affect buffer pool pre-flushing and flushing behavior were modified:
    • innodb_max_dirty_pages_pct_lwm = 10 (previously 0, i.e. disabled); setting it to 10 enables pre-flushing when the percentage of dirty pages exceeds 10%
    • innodb_max_dirty_pages_pct = 90 (previously 75)
  • Renaming a general tablespace is supported by the ALTER TABLESPACE ... RENAME TO command.
  • The innodb_dedicated_server variable, which is disabled by default, can be used to have InnoDB automatically configure the following options:
    • innodb_buffer_pool_size
    • innodb_log_file_size
    • innodb_flush_method
  • A new table is available under the information schema: INFORMATION_SCHEMA.INNODB_TABLESPACES_BRIEF
  • Tablespace files can be moved or restored to a new location while the server is offline using the innodb_directories option.
  • The innodb_log_buffer_size variable is now dynamic, which permits on-the-fly changes to its value.
  • The ADD DATAFILE clause of CREATE TABLESPACE is now optional; when omitted, a datafile with a unique name is created automatically.
Character Set Support
The default character set is now utf8mb4 instead of latin1; utf8mb4 supports the full range of Unicode, including supplementary characters such as emoji.
JSON and NoSQL
  • Today’s world is an internet world, and the internet is a document-based medium. A regular transactional relational database lacks the ability to match the web’s unstructured data and document-centric model, and this gap was a major factor in the rise of NoSQL databases such as MongoDB.
  • The release of MySQL 8.0 introduces a NoSQL document store, so that a single database can be used for both relational and document data.
  • In addition to the document store, MySQL 8.0 brings a lot of improvements to JSON handling; a few functions are mentioned below:
    • JSON aggregate functions
      • JSON_ARRAYAGG()
      • JSON_OBJECTAGG()
    • JSON pretty function
      • JSON_PRETTY()
    • JSON storage functions
      • JSON_STORAGE_SIZE()
      • JSON_STORAGE_FREE()
    • JSON merge function
      • JSON_MERGE_PRESERVE()
    • JSON table function
      • JSON_TABLE()
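A small sketch of two of these functions (the JSON literals are made up for illustration):

```sql
-- Pretty-print a JSON value
SELECT JSON_PRETTY('{"a": 1, "b": [2, 3]}');

-- Flatten a JSON array into relational rows
SELECT t.*
FROM JSON_TABLE('[{"x": 1}, {"x": 2}]',
                '$[*]' COLUMNS (x INT PATH '$.x')) AS t;
```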
Data Type Support
MySQL now supports the use of expressions as default values in data type specifications. This includes expressions as default values for the BLOB, TEXT, GEOMETRY, and JSON data types, which previously could not be assigned default values at all.
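A sketch of expression defaults (table and column names are hypothetical); note that in MySQL 8.0 an expression default must be enclosed in parentheses:

```sql
CREATE TABLE t1 (
  id  BINARY(16) DEFAULT (UUID_TO_BIN(UUID())),  -- expression default
  doc JSON       DEFAULT (JSON_OBJECT())         -- JSON column with a default
);
```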
Optimizer
MySQL 8.0 now supports invisible indexes and descending indexes.
MySQL 8.0 adds the INDEX_MERGE and NO_INDEX_MERGE optimizer hints, which allow the user to control index-merge behavior for an individual query without changing the optimizer switch.
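A sketch of these index features (the table and index names are hypothetical):

```sql
-- Hide an index from the optimizer without dropping it
ALTER TABLE orders ALTER INDEX idx_customer INVISIBLE;

-- Descending index: actually stored in descending order in 8.0
CREATE INDEX idx_created_desc ON orders (created_at DESC);

-- Disable index merge for this one query only
SELECT /*+ NO_INDEX_MERGE(orders) */ *
FROM orders WHERE customer_id = 7 OR status = 'NEW';
```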
Common Table Expression
MySQL 8.0 supports common table expressions, both recursive and non-recursive. These expressions enable the use of named temporary result sets, implemented by permitting a WITH clause preceding SELECT and certain other statements.
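A minimal recursive CTE sketch, generating the numbers 1 through 5:

```sql
WITH RECURSIVE seq (n) AS (
  SELECT 1                                -- anchor member
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 5       -- recursive member
)
SELECT n FROM seq;
```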
Window Functions
MySQL 8.0 supports SQL window functions, similar to grouped aggregate functions.
Window functions perform a calculation over a set of rows, e.g. COUNT or SUM. There are two kinds of window functions:
SQL aggregate functions used as window functions
COUNT, SUM, AVG, MIN, MAX, BIT_OR, BIT_AND, BIT_XOR, STDDEV_POP, STDDEV_SAMP, VAR_POP, and VAR_SAMP
Specialized window functions
RANK, DENSE_RANK, PERCENT_RANK, NTILE, ROW_NUMBER, CUME_DIST, FIRST_VALUE, LAST_VALUE, NTH_VALUE, LEAD, and LAG
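A sketch combining a specialized window function with an aggregate used over a window (the employees table and its columns are hypothetical):

```sql
-- Rank employees by salary within each department,
-- and show each department's total alongside every row
SELECT dept, name, salary,
       ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) AS rn,
       SUM(salary)  OVER (PARTITION BY dept)                      AS dept_total
FROM employees;
```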
Lateral Derived Table
A derived table may now be preceded by the LATERAL keyword to specify that it is permitted to refer to columns of preceding tables in the same FROM clause.
Internal Temporary Tables
The TempTable storage engine replaces the MEMORY storage engine as the default engine for in-memory internal temporary tables. The TempTable storage engine provides efficient storage for VARCHAR and VARBINARY columns.
Error Logging
Error logging was rewritten to use the MySQL component architecture. Traditional error logging is implemented using built-in components, while logging to the system log is implemented as a loadable component. In addition, a loadable JSON log writer is available; see the log_error_services parameter.
Replication
Replication now supports binary logging of partial updates to JSON documents using a compact binary format, saving space in the log compared to logging complete JSON documents.
Connection Management
The MySQL 8.0 server allows a TCP/IP port to be configured specifically for administrative connections.
Plugins
Plugins must now be written in C++, not C.

Please feel free to ask questions and give feedback

Oracle RMAN : Incrementally Updated Backups

 Atikh Shaikh     Backup and Recovery, oracle, Oracle 12c, RMAN     No comments   

We have learned about incremental backups and how to make them faster. Now we will discuss incrementally updated backups.
  • Assume we have a very large database, say around 50-60 TB. There are challenges we may face while performing backups, like making the backup faster and keeping recovery time as low as possible.
  • Generally recovery is done after restore, and restore takes most of the time, so we need to concentrate on reducing restore time.
  • To minimize backup time we have plenty of options with incremental backups, but to reduce restore time we turn to incrementally updated backups.
  • With the incrementally updated backup method, RMAN rolls an image copy of the database forward by applying subsequent incremental backups, so the copy is always updated with all changes since the last incremental backup.
  • Using RMAN, the database can simply be switched over to this updated image copy, rather than restoring the files back to their original location.
  • Since we are not restoring files, restore time is minimal, and to recover, only the redo since the last incremental backup needs to be applied.
  • Thus within a few minutes we can restore and recover the database.
  • One major condition of this method is a large enough FRA (Fast Recovery Area) to hold a full image copy of the database; RMAN looks for the image copy in the FRA only.
Now we will discuss how exactly this works


We can use a RUN block to execute a set of RMAN commands:
RUN
{
  RECOVER COPY OF DATABASE
    WITH TAG 'tech_db_incr';
  BACKUP
    INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'tech_db_incr'
    DATABASE;
}

Assuming you run this RUN block daily, here is what happens each day:

Day 1 : On the first day the RECOVER command does nothing; to apply incremental backups a level 0 image copy is needed, and since one does not exist yet, the BACKUP command creates a level 0 image copy of the database.
Day 2 : On the second day, since the level 0 copy now exists, the BACKUP command creates a level 1 incremental backup. There is still no work for the RECOVER command on the second day.
Day 3 onward : From the third day onward, the RECOVER command updates the image copy with the previous day's level 1 incremental backup, and BACKUP takes a new level 1 incremental.

For point-in-time recovery using this image copy, the available window only reaches back to the state of the rolled-forward copy: for example, if we created the level 0 copy on Aug 6 and have been rolling it forward since, we cannot use incrementally updated backups to recover the database to Aug 4.
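As a side note, the image copy produced by this strategy can be inspected, and in a recovery scenario switched to, with standard RMAN commands. A sketch using the tag from the RUN block above (the switch assumes the database is mounted, not open):

```
RMAN> LIST COPY OF DATABASE TAG 'tech_db_incr';

# In a recovery scenario, switch to the copy instead of restoring,
# then apply redo and open:
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
```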


Oracle : The oerr Utility (Oracle Error)

 Atikh Shaikh     Oracle 12c, Orale     No comments   

  • The oerr utility (Oracle error) is provided only with Oracle Database on the UNIX platform
  • oerr is not an executable, but a shell script that retrieves messages from the installed message files
  • oerr is not provided on Windows, since it uses awk commands to retrieve the requested text from the file
Syntax
oerr facility error
where
facility is the prefix of the error number; this includes ORA, PLS, EXP, etc.
error is the actual error number
For example
$ oerr ora 01652
01652, 00000, "unable to extend temp segment by %s in 
tablespace %s"
// *Cause:  Failed to allocate an extent of the required number of blocks for
//          a temporary segment in the tablespace indicated.
// *Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
//          files to the tablespace indicated.

$ oerr EXP 00008
00008, 00000, "ORACLE error %lu encountered"
// *Cause:  Export encountered the referenced Oracle error.
// *Action: Look up the Oracle message in the ORA message chapters of this

//          manual and take appropriate action.

The output shows the "Cause" of the error and the recommended "Action".

How oerr works

  • To best understand how oerr works, you can review the shell script oerr.ksh available in $ORACLE_HOME/bin
  • The commands in the oerr.ksh file confirm that $ORACLE_HOME is set; if not, the utility terminates.
  • Facility information is read from the facility file located at $ORACLE_HOME/lib/facility.lis. Below is a portion of the facility.lis file:

aud:ord:*:
amd:cwmlite:*:
av:pfs:*:
bbed:rdbms:*:
brw:browser:*:
clsr:racg:*:
ds:office:*:
dsp:office:*:
dsw:office:*:
dsz:office:*:
ebu:obackup:*:
evm:evm:*:
exp:rdbms:*:
fmc:forms40:*:
iac:forms:*:
iad:forms:*:
lcd:rdbms:*:
oao:office:*:
obk:obackup:ebu:
omv:office:*:
opw:rdbms:*:
ora:rdbms:*:
osn:network:*:
osnq:network:*:

The facility file contains three mandatory data items and one optional data item.
Mandatory
  • Facility
  • Component
  • Name of an alias for the component if any; otherwise a * is used
In the facility.lis entries above, ora is the facility and rdbms is the component; it has no alias, so * is used.
Optional
  • Description
Using the facility name provided on the command line, oerr retrieves the component for that facility.

For example, for oerr ora 01652, oerr uses the rdbms component; with this information the appropriate message file path is built:
Msg_File=${ORACLE_HOME}/${Component}/mesg/${Facility}us.msg

for ORA errors


Msg_File=$ORACLE_HOME/rdbms/mesg/oraus.msg

Once this path is determined, the matching entry, with its cause and action, is read from the message file for that facility.

oerr on Windows

  • oerr is only available on UNIX, but it does not take much code to access the message file and display a message on Windows
  • To do this, the actual message file needs to be copied to the Windows machine and placed in the same directory as the Java program.
  • The Java program (oerr.java) reads the message file and displays the text associated with the error code.
  • The Java program reads the message file line by line until it encounters the requested error code
For example
C:\oracle\bin\java oerr ora 01652

Create custom Message Files

There are a number of cases where oerr does not show any message; for such error codes we can create our own customized message file.
For example

$ oerr ora 00942
00942, 00000, "table or view does not exist"
// *Cause:
// *Action:

The following can then be used to show the customized message:

oerr technofile 00942


Where technofile is the customized error message file. A customized message file can be created with the steps below:
  1. Add the facility in facility.lis file
  2. Create a directory that contains new message file
  3. Create the actual message file

Alternatives to oerr

ORACLE TechNet  - http://technet.oracle.com
ORACLE Metalink – http://support.oracle.com


Oracle : Drop Pluggable Database (PDB) in 12c/19c/21c/23c

 Atikh Shaikh     oracle, Oracle 12c     1 comment   

After learning how to create a pluggable database, we will discuss dropping a pluggable database (PDB) in 12c, 19c, and later versions.
  • Dropping a pluggable database is similar to dropping any other regular database; you have two options regarding its datafiles:
      • Dropping the PDB including its datafiles
      • Dropping the PDB keeping its datafiles

Here we will drop the pluggable database PDB_TECHNO; below are the steps.
  • Check the status of the pluggable database you want to drop using v$containers
SQL> select con_id, name,open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PDB_1                          READ WRITE
         4 PDB_TECHNO                     READ WRITE
You can see the PDB exists and is in READ WRITE mode.
  • Close the pluggable database using the alter pluggable database command before dropping it, and check the status
SQL> alter pluggable database PDB_TECHNO close;

Pluggable database altered.

SQL> select con_id, name,open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PDB_1                          READ WRITE
         4 PDB_TECHNO                     MOUNTED


Now drop the PDB with the INCLUDING DATAFILES clause


SQL> drop pluggable database PDB_TECHNO including datafiles;

Pluggable database dropped.

SQL> select con_id, name,open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PDB_1                          READ WRITE

In this case we dropped the pluggable database including its datafiles because we no longer need them; if we need the datafiles even after dropping the pluggable database, we can simply omit the "including datafiles" clause from the drop command.
  • By default Oracle keeps the datafiles when dropping a pluggable database, so the following two commands are equivalent
drop pluggable database PDB_TECHNO keep datafiles;

drop pluggable database PDB_TECHNO;

If we try to drop a pluggable database that is not closed (i.e. still open) with INCLUDING DATAFILES, it throws an error, but the same PDB can be dropped with the KEEP DATAFILES option:

SQL> drop pluggable database PDB_TECHNO including datafiles;
drop pluggable database PDB_TECHNO including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDB_TECHNO is not closed on all instances.

SQL> drop pluggable database PDB_TECHNO keep datafiles;

Pluggable database dropped.

In this way we can drop any pluggable database. Dropping a container database is like dropping any other database, but we must be sure before dropping it, as it will drop all pluggable databases present under it, including the seed PDB.

Copyright © Atikh's DBA blog | Powered by Blogger