Atikh's DBA blog

All about Oracle database auditing

 Atikh Shaikh     12c, oracle, Oracle 12c, oracle 19c, Oracle user     No comments   

As your application and database grow, more and more users connect to the database, and it becomes difficult to keep track of all the privileges granted to individual users. For this, Oracle provides a feature called AUDITING.

Users with DBA privileges can do a lot with the database, so it is important to make sure they do not harm it in any way. To that end, the security team or database administrator can enable different levels of auditing.

Before enabling any auditing, keep in mind that auditing puts extra load on the database, so enable it as selectively as possible.

 

Auditing SYSDBA activity

SYSDBA activity is tracked in an operating-system-level audit trail file, based on the value of the parameter AUDIT_SYS_OPERATIONS. If AUDIT_SYS_OPERATIONS is set to TRUE, every statement executed by a user connected "as sysdba" or "as sysoper" is audited. The location of the audit trail file is controlled by the parameter AUDIT_FILE_DEST.
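For example, you can check the current settings and enable SYSDBA auditing as follows (a minimal sketch; AUDIT_SYS_OPERATIONS is a static parameter, so the change needs scope=spfile and an instance restart to take effect):

SQL> show parameter audit_sys_operations

SQL> show parameter audit_file_dest

SQL> alter system set audit_sys_operations=TRUE scope=spfile;

System altered.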

 

Database Auditing

Database auditing is controlled using the parameter AUDIT_TRAIL; there are several values associated with it (a setting example follows the list below).

 

NONE (FALSE) – database auditing is disabled

OS – audit records are written to the OS-level audit trail; the location is controlled by AUDIT_FILE_DEST

DB – audit records are written to the database table SYS.AUD$

DB_EXTENDED – records at the database level, but also includes the SQL statements with bind variables

XML – auditing is done at the OS level, formatted with XML tags

XML_EXTENDED – formatted as XML tags and includes the SQL statements with bind variables
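For example, to enable database-level auditing (AUDIT_TRAIL is a static parameter, so an instance restart is required after the change):

SQL> alter system set audit_trail=DB scope=spfile;

System altered.

SQL> shutdown immediate
SQL> startup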

 

Database auditing is configured with the AUDIT command. For example:

 SQL> audit update any table; 

Audit succeeded. 

SQL> audit select any table by session; 

Audit succeeded. 

SQL>

Assume that a few users have been granted the "update any table" privilege; apart from regular work, this can also be used to harm the database. In order to record which tables are being updated, you can simply turn on auditing for that privilege.

Auditing can generate one record per session or one record per occurrence, depending on the option used (see the example below):

BY SESSION – one record for each session, no matter how many times the audited action occurs (DEFAULT)

BY ACCESS – one record for every occurrence
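For example, to record every individual occurrence rather than one row per session:

SQL> audit update any table by access;

Audit succeeded.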

 

Auditing can also be enabled on specific objects, for example:

SQL> audit select on SYS.DBA_USERS whenever successful; 

Audit succeeded. 

SQL> 

This statement generates a record for every successful SELECT against the view SYS.DBA_USERS.

WHENEVER SUCCESSFUL – records only when the audited statement succeeds

WHENEVER NOT SUCCESSFUL – records only when the audited statement fails

By default, both successful and unsuccessful statements are recorded.

 

When AUDIT_TRAIL is set to OS or XML, you check audit records in the OS-level audit trail file; when it is set to DB or DB_EXTENDED, you can fetch records from SYS.AUD$ or use the DBA_AUDIT_TRAIL view, which exposes around 50 columns.

There are several subset views of DBA_AUDIT_TRAIL that can be used to narrow down the results (a sample query follows the list below):

DBA_AUDIT_OBJECT

DBA_AUDIT_STATEMENT

DBA_AUDIT_SESSION
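A quick sketch of querying the audit trail (the columns shown – USERNAME, OBJ_NAME, ACTION_NAME, TIMESTAMP – are a small subset of those available):

select username, obj_name, action_name, timestamp
from dba_audit_trail
where obj_name = 'DBA_USERS'
order by timestamp;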

 

Auditing with triggers

Auditing enabled with the AUDIT command produces a single record for each statement, but it does not capture the exact statement that made the change (unless the extended trail options are used). Sometimes you need to watch the statements themselves.

Database triggers can help capture the change based on a condition you define. For example, if an update trigger is defined on a table, any update to that table can generate an audit record and write the relevant row into another table defined in the trigger.
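A minimal sketch of such a trigger, assuming a hypothetical EMPLOYEES table and a separate EMPLOYEES_AUDIT table created beforehand to hold the captured rows:

CREATE OR REPLACE TRIGGER trg_employees_audit
AFTER UPDATE OF salary ON employees
FOR EACH ROW
BEGIN
  -- record the old and new salary, who changed it, and when
  INSERT INTO employees_audit (emp_id, old_salary, new_salary, changed_by, changed_on)
  VALUES (:OLD.emp_id, :OLD.salary, :NEW.salary, USER, SYSDATE);
END;
/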

 

Fine-Grained Auditing (FGA)

So far we have discussed auditing at the database or table level. What if you want to capture audit records only for specific rows in a table or view? FGA helps you achieve this.

FGA is configured through the DBMS_FGA package; to add an FGA audit policy you use the ADD_POLICY procedure.

To view the records, you use the DBA_FGA_AUDIT_TRAIL view. Generally, FGA is enabled on data that is critical, such as salary, budget, or revenue columns.

DBMS_FGA provides procedures to add, drop, enable, and disable policies:

 

SQL> desc dbms_FGA

PROCEDURE ADD_POLICY

PROCEDURE DISABLE_POLICY

PROCEDURE DROP_POLICY

PROCEDURE ENABLE_POLICY 
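A minimal sketch of adding an FGA policy, assuming a hypothetical HR.EMPLOYEES table whose SALARY column holds the sensitive data:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'AUDIT_EMP_SALARY',
    audit_condition => 'SALARY > 10000',   -- audit only rows matching this condition
    audit_column    => 'SALARY',           -- audit only when this column is referenced
    statement_types => 'SELECT,UPDATE');
END;
/

Matching statements then show up in the DBA_FGA_AUDIT_TRAIL view.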

This covers the theory part of Oracle database auditing.


Oracle 19c : METRICS parameter in expdp

 Atikh Shaikh     oracle, oracle12c, Oracle12cR2 New Feature     No comments   

Here we will discuss the METRICS parameter of the expdp/impdp utilities. As per Oracle's definition, it reports additional information about the export job in the export log file. Let's see this with the help of an example.

 

METRICS

Report additional job information to the export log file [NO].

 

I performed export with 2 methods

1. Without metrics parameter

2. With metrics parameter
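The two commands used were essentially the following (they can also be seen on the Starting lines in the logs below):

expdp userid=techno_user@technopdb directory=export_dir dumpfile=expdp_techno_users1.dmp logfile=expdp_techno_users.log

expdp userid=techno_user@technopdb directory=export_dir dumpfile=expdp_techno_users.dmp logfile=expdp_techno_users.log metrics=yes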

 

Here are the results.

Without METRICS:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Starting "TECHNO_USER"."SYS_EXPORT_SCHEMA_01":  userid=techno_user/********@technopdb directory=export_dir dumpfile=expdp_techno_users1.dmp logfile=expdp_techno_users.log

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

. . exported "TECHNO_USER"."DEPTS"                       393.4 KB   12002 rows

. . exported "TECHNO_USER"."DUMMY"                       5.085 KB       3 rows

. . exported "TECHNO_USER"."DUMMY1"                      5.085 KB       2 rows

. . exported "TECHNO_USER"."EMPLOYEES"                   393.4 KB   12002 rows

. . exported "TECHNO_USER"."PERSONS"                     425.6 KB   13002 rows

Master table "TECHNO_USER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for TECHNO_USER.SYS_EXPORT_SCHEMA_01 is:

  C:\DOWNLOADS\ARCH\EXPDP_TECHNO_USERS1.DMP

Job "TECHNO_USER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Jun 2 19:55:23 2022 elapsed 0 00:00:20

 

With METRICS (metrics=yes):

 

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Starting "TECHNO_USER"."SYS_EXPORT_SCHEMA_01":  userid=techno_user/********@technopdb directory=export_dir dumpfile=expdp_techno_users.dmp logfile=expdp_techno_users.log metrics=yes

W-1 Startup took 0 seconds

W-1 Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

W-1 Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

W-1 Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

W-1      Completed 5 TABLE_STATISTICS objects in 0 seconds

W-1 Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

W-1      Completed 1 MARKER objects in 5 seconds

W-1 Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

W-1      Completed 1 PROCACT_SCHEMA objects in 0 seconds

W-1 Processing object type SCHEMA_EXPORT/TABLE/TABLE

W-1      Completed 5 TABLE objects in 13 seconds

W-1 Processing object type SCHEMA_EXPORT/TABLE/COMMENT

W-1 Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ

W-1      Completed 2 AUDIT_OBJ objects in 0 seconds

W-1 Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

W-1 . . exported "TECHNO_USER"."DEPTS"                       393.4 KB   12002 rows in 0 seconds using direct_path

W-1 . . exported "TECHNO_USER"."DUMMY"                       5.085 KB       3 rows in 0 seconds using direct_path

W-1 . . exported "TECHNO_USER"."DUMMY1"                      5.085 KB       2 rows in 0 seconds using direct_path

W-1 . . exported "TECHNO_USER"."EMPLOYEES"                   393.4 KB   12002 rows in 0 seconds using direct_path

W-1 . . exported "TECHNO_USER"."PERSONS"                     425.6 KB   13002 rows in 0 seconds using direct_path

W-1      Completed 5 SCHEMA_EXPORT/TABLE/TABLE_DATA objects in 0 seconds

W-1 Master table "TECHNO_USER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for TECHNO_USER.SYS_EXPORT_SCHEMA_01 is:

  C:\DOWNLOADS\ARCH\EXPDP_TECHNO_USERS.DMP

Job "TECHNO_USER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Jun 2 19:53:15 2022 elapsed 0 00:00:33

 

Here is the conclusion:

Export without the METRICS parameter gives a simple log file without much information about the objects.

Export with the METRICS parameter gives the number of objects of each type, the time taken to export them, and details about the workers.



ORA-29283: invalid file operation: unexpected "LFI" error (1509)[29437]

 Atikh Shaikh     oracle, oracle 19c, Oracle12cR2 New Feature     2 comments   

I was trying to export a schema on my Windows PC, and it failed with the below error.

 

C:\Users\shaik\Videos\technodba exp>expdp userid=techno_user@technopdb directory=export_dir dumpfile=expdp_techno_users.dmp logfile=expdp_techno_users.log 

 

Export: Release 19.0.0.0.0 - Production on Wed Jun 1 16:13:15 2022

Version 19.3.0.0.0

 

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Password:

 

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

ORA-39002: invalid operation

ORA-39070: Unable to open the log file.

ORA-29283: invalid file operation: unexpected "LFI" error (1509)[29437]

 

On further investigation I found that the directory I had created and pointed to did not have write permission. Though I was part of the admin group on my PC, Oracle was not allowed to write to the C drive.

Once I changed the directory location, I was able to perform the export.
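A minimal sketch of repointing the directory object to a writable path (C:\Downloads\arch here, matching the dump file location shown below; the grant is only needed if the exporting user does not own the directory):

SQL> create or replace directory export_dir as 'C:\Downloads\arch';

Directory created.

SQL> grant read, write on directory export_dir to techno_user;

Grant succeeded.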

 

C:\Users\shaik\Videos\technodba exp>expdp userid=techno_user@technopdb directory=export_dir dumpfile=expdp_techno_users.dmp logfile=expdp_techno_users.log

 

Export: Release 19.0.0.0.0 - Production on Wed Jun 1 20:54:31 2022

Version 19.3.0.0.0

 

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Password:

 

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Starting "TECHNO_USER"."SYS_EXPORT_SCHEMA_01":  userid=techno_user/********@technopdb directory=export_dir dumpfile=expdp_techno_users.dmp logfile=expdp_techno_users.log

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

. . exported "TECHNO_USER"."DEPTS"                       393.4 KB   12002 rows

. . exported "TECHNO_USER"."DUMMY"                       5.085 KB       3 rows

. . exported "TECHNO_USER"."DUMMY1"                      5.085 KB       2 rows

. . exported "TECHNO_USER"."EMPLOYEES"                   393.4 KB   12002 rows

. . exported "TECHNO_USER"."PERSONS"                     425.6 KB   13002 rows

Master table "TECHNO_USER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for TECHNO_USER.SYS_EXPORT_SCHEMA_01 is:

  C:\DOWNLOADS\ARCH\EXPDP_TECHNO_USERS.DMP

Job "TECHNO_USER"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Jun 1 20:54:56 2022 elapsed 0 00:00:20

 

 

C:\Users\shaik\Videos\technodba exp>


ORA-38706: Cannot turn on FLASHBACK DATABASE logging. ORA-38707: Media recovery is not enabled.

 Atikh Shaikh     Backup and Recovery, oracle, oracle 19c     No comments   

I was trying to turn on flashback for an Oracle database and it failed with the below error.

 

SQL> alter database flashback on;

alter database flashback on

*

ERROR at line 1:

ORA-38706: Cannot turn on FLASHBACK DATABASE logging.

ORA-38707: Media recovery is not enabled.

 

On further checking I found the database was in NOARCHIVELOG mode. This is how we can check the archive log mode:

 

SQL> archive log list

Database log mode              No Archive Mode

Automatic archival             Disabled

Archive destination            C:\Downloads\arch

Oldest online log sequence     137

Current log sequence           139

SQL>

 

In order to bring it into archive log mode, I followed the steps mentioned here: convert database to archive log mode (sketched below). Once your database is in archive log mode, you can simply turn on flashback without any issues.
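The basic commands to switch to archive log mode (a minimal sketch, assuming the archive destination is already set as above):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;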

Then follow the below commands to turn on flashback for the database.

 


 

SQL> alter database flashback on;

 

Database altered.

 

SQL> select database_name, flashback_on from v$database;

 

DATABASE_NAME        FLASHBACK_ON

-------------------- ------------------

TECHNODB             YES

 

 

 


ORA-16024: parameter LOG_ARCHIVE_DEST_1 cannot be parsed

 Atikh Shaikh     oracle, oracle 19c     No comments   

While working on database activities, we come across different errors. Below is one of the errors reported while updating the parameter LOG_ARCHIVE_DEST_1 in an Oracle database.

 

SQL> alter system set log_archive_dest_1='C:\Downloads\arch' scope=both;

alter system set log_archive_dest_1='C:\Downloads\arch' scope=both

*

ERROR at line 1:

ORA-02097: parameter cannot be modified because specified value is invalid

ORA-16024: parameter LOG_ARCHIVE_DEST_1 cannot be parsed

 

This is purely a syntax error: the LOCATION keyword is required before the path, so the correct syntax is as below.

alter system set log_archive_dest_1='location=C:\Downloads\arch\';

With the LOCATION keyword in place, I was able to update the parameter without any issue.

 

SQL> alter system set log_archive_dest_1='location=C:\Downloads\arch\';

 

System altered.


Also read: Statspack installation check

Also read: switching between archive log and noarchivelog



* ERROR at line 1: ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only

 Atikh Shaikh     Error Code: 1419, oracle     No comments   

As a DBA, you must have come across the below error while working on pluggable databases

 

ERROR at line 1:

ORA-01219: database or pluggable database not open: queries allowed on fixed

tables or views only

Generally, this error is reported whenever you try to access a view or table in a pluggable database that is not in the open state. Here I was trying to check the users in the database and was hit with this error; when I checked the status of my pluggable database, I found it was in MOUNTED state and not open.

[Image: oracle pluggable database not open]
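You can confirm the state with show pdbs; in my case the output looked roughly like this (illustrative):

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 TECHNOPDB                      MOUNTED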

I then opened the database using the below command, after which I was able to query the table.

SQL> alter pluggable database open;

 

Pluggable database altered.

 

SQL> show pdbs

 

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         3 TECHNOPDB                      READ WRITE NO

SQL>  select count (*) from dba_users;

 

  COUNT(*)

----------

        40

SQL>



So whenever you see this type of error, check the status of the database, and if it is not open, bring it up.


Snowflake - what is the account name in Snowflake login

 Atikh Shaikh     Snowflake     No comments   

Snowflake has been a really hot topic in 2021 and 2022 due to its benefits over traditional database offerings. Here I will show you how to find the account name in the Snowflake portal. To log in to Snowflake you can use the link https://app.snowflake.com/; this is how the login page looks, and the first thing it asks for is the account name.

[Image: snowflake login page - account name]

Once you enter the account name, it will ask for your username and password; the username is the one you entered when creating the Snowflake demo account.

[Image: snowflake login page]

Once you enter your username and password, you land on the main Snowflake portal. Now click on the bottom-left corner and you will see the below screen.


[Image: snowflake account name]




Now just click on the highlighted part; it will give you a URL that looks like the below:

https://i****3.ap-southeast-1.snowflakecomputing.com

The highlighted part is your account name, which can be used for login purposes.

[Also read -Introduction and Architecture of Snowflake]



How to know if STATSPACK is installed

 Atikh Shaikh     12c, Orale, Performance     No comments   

 

Statspack is an important tool for a DBA investigating performance issues in an Oracle database; it makes the DBA's life easier in such cases. Most of the time, Statspack is not installed automatically unless it is included in a custom script used during database creation.

The question is: how do you check whether Statspack is installed? There are a number of ways to detect that; we will discuss a few here.

 

1. PERFSTAT user

Statspack uses the PERFSTAT user to perform its operations, so check in DBA_USERS whether the user is present or not:

select * from dba_users where USERNAME='PERFSTAT';

[Image: statspack user perfstat]


2. Presence of the table STATS$DATABASE_INSTANCE

Statspack creates its repository tables, including STATS$DATABASE_INSTANCE, during installation and records instance information in them. Check with desc STATS$DATABASE_INSTANCE: if the table exists, Statspack is installed; otherwise it is not.

[Image: stats$database_instance - statspack check]


3. Running a report using spreport.sql

For the DBA's convenience, Oracle provides a number of scripts in the rdbms/admin folder under the Oracle home directory; spreport.sql is the script used to generate a Statspack report. Try running this SQL file: if it asks for a begin snap id and an end snap id, Statspack is installed; otherwise it will throw an error.

[Image: statspack check - running spreport.sql]
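Putting the three checks together, a quick sketch run from SQL*Plus (the ? in the script path expands to ORACLE_HOME):

SQL> select username, account_status from dba_users where username = 'PERFSTAT';

SQL> desc stats$database_instance

SQL> @?/rdbms/admin/spreport.sql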

 

 


Different files involved in Oracle database

 Atikh Shaikh     12c, oracle, Oracle 12c, oracle 19c     1 comment   

As Oracle DBAs, we all know the Oracle database server is a combination of physical files, processes, and memory. In this article we are going to discuss the different files involved in the Oracle database architecture.

There are three most important files:

·       Data files

·       Control files

·       Redo log files

There are other supplementary files as well: parameter files, password files, and archive log files.

 

Let's discuss each in some more detail.

Datafiles 

Data files contain the actual data of the database; they are formatted according to the block size chosen during database setup.

There are a few types of data files:

 

1.    SYSTEM – the SYSTEM datafile is critical to the database; it holds the data dictionary.

2.    User datafiles – application-specific datafiles that contain application data and can be customized as per requirement.

3.    UNDO datafile – holds the old image of data while DML (insert, update, delete) is performed; this old data is used for read consistency of queries and to roll back changes or recover.

4.    TEMP datafile – when a user orders data in a query, the database needs space to sort it before returning the results. Sorting is first performed in memory, and if memory is not sufficient, it spills to the TEMP tablespace.

 

Control Files

The control file contains the structure of the database, such as the locations of all data files and redo log files.

It is critical to the operation of the database and is one of the first files read by the instance at startup.

 

Redo logfile

Redo log files contain the changes made to the database and can be used for recovery.

Copies of redo log files are called archived log files.

 

Other files in oracle database 

Password file – this file authenticates the users who are allowed to start up the database.

Parameter files –

    This file contains the list of parameters used by the instance. Typically, there are two types of parameter files:

PFILE – a text-based parameter file; parameter changes must be added or modified manually in the file, and a restart is required for them to take effect.

SPFILE – a binary parameter file; parameter changes can be carried out on the fly.
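For example, to check whether the instance was started with an SPFILE (the value shown is empty when a PFILE was used) and to create an SPFILE from an existing PFILE:

SQL> show parameter spfile

SQL> create spfile from pfile;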

 



SQL Functions - LENGTH, ASCII, SUBSTR

 Atikh Shaikh     oracle     No comments   

In this article we will discuss a few SQL functions that are useful in daily work. SQL functions are predefined routines that can be used in many contexts; let's discuss them one by one.

LENGTH

Syntax - LENGTH(String)

This function returns the total number of characters present in the string

Example

SQL>  SELECT LENGTH('Interest') from dual;

 

LENGTH('INTEREST')

------------------

                 8

 

ASCII

Syntax – ASCII(char)

This function returns the ASCII value of the given character.

Example

SQL>  SELECT ASCII('B') from dual;

 

ASCII('B')

----------

        66

 

SQL>  SELECT ASCII('I') from dual;

 

ASCII('I')

----------

        73

 

SUBSTR

Syntax - SUBSTR (string, position, sub-string_length)

This function returns a portion of string, starting at character position and sub-string_length characters long.

SUBSTR calculates lengths using characters as defined by the input character set.

SQL> select SUBSTR('entertainment',4,10) from dual;

 

SUBSTR('EN

----------

ertainment

 

SQL>
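These functions can also be combined; for example, using LENGTH inside SUBSTR to grab the last four characters of a string (output trimmed for readability):

SQL> SELECT SUBSTR('entertainment', LENGTH('entertainment') - 3) AS last_four FROM dual;

LAST_FOUR
---------
ment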
