
Creating User Account in MongoDB


For any database, whether RDBMS or NoSQL, user accounts are a very important aspect and are required to connect to the database.
There are different types of users depending on usage, such as admin accounts, read-only accounts, and read-write accounts.
Users can be created or added to a database using the db.createUser() method available in MongoDB.
The db.createUser() method accepts a document that lets you specify the username, password, and roles for the user to be created.

Definition of db.createUser() method
db.createUser(user, writeConcern)

Field        | Type     | Description
-------------+----------+------------------------------------------------------------
user         | document | The document with authentication and access information about the user
writeConcern | document | Optional. The level of write concern for the creation operation

The user document has the following fields

Field                      | Type     | Description
---------------------------+----------+------------------------------------------------------------
user                       | string   | The name of the new user
pwd                        | string   | The user's password
customData                 | document | Optional. Any arbitrary information about the user
roles                      | array    | The roles granted to the user; can be an empty array [] to create a user without any roles
authenticationRestrictions | array    | Optional. The authentication restrictions the server enforces on the user; specifies a list of IP addresses and CIDR ranges from which the user is allowed to connect
mechanisms                 | array    | Optional. The specific SCRAM mechanism(s) to use when creating the SCRAM user
passwordDigestor           | string   | Optional. Indicates whether the server or the client digests the password
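
For instance, authenticationRestrictions and mechanisms can be combined to limit where a user may connect from. The sketch below is illustrative only (the user name, password, and address range are made up for this example):

> db.createUser({ user : "tech_restricted",
... pwd : "techo!23",
... roles : [ "readWrite" ],
... authenticationRestrictions : [ { clientSource : ["192.168.0.0/24"] } ],
... mechanisms : [ "SCRAM-SHA-1" ]
... });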

Roles
In the roles field, you can specify built-in roles as well as user-defined roles, either directly by role name or in the form of a document,
e.g.
"readWrite" or { role : "<role>", db : "<database>" }

External Credentials
Users created on the $external database should have their credentials stored externally to MongoDB.
Local database
Users cannot be created on the local database.
Examples
Create account with roles assigned
> db.createUser({ user : "tech_owner",
... pwd : "techo!23",
... customData : {user : "blogUser"},
... roles :[{role: "clusterAdmin",
... db : "admin" },
... "readWrite"]}
... );
Successfully added user: {
        "user" : "tech_owner",
        "customData" : {
                "user" : "blogUser"
        },
        "roles" : [
                {
                        "role" : "clusterAdmin",
                        "db" : "admin"
                },
                "readWrite"
        ]
}
> 

Create an account with roles

> db.createUser({ user : "tech_user",
... pwd : "techo!23",
... roles :["readWrite", "dbAdmin"]}
... );
Successfully added user: { "user" : "tech_user", "roles" : [ "readWrite", "dbAdmin" ] }
> 
> 

Create user without any roles

> db.createUser({ user : "tech_wo_roles",
... pwd : "techo!23",
... roles :[]
... }
... );
Successfully added user: { "user" : "tech_wo_roles", "roles" : [ ] }
> 
> 

Listing users created

> db.getUsers();
[
        {
                "_id" : "techno_db.tech_owner",
                "user" : "tech_owner",
                "db" : "techno_db",
                "customData" : {
                        "user" : "blogUser"
                },
                "roles" : [
                        {
                                "role" : "clusterAdmin",
                                "db" : "admin"
                        },
                        {
                                "role" : "readWrite",
                                "db" : "techno_db"
                        }
                ],
                "mechanisms" : [
                        "SCRAM-SHA-1"
                ]
        },
        {
                "_id" : "techno_db.tech_user",
                "user" : "tech_user",
                "db" : "techno_db",
                "roles" : [
                        {
                                "role" : "readWrite",
                                "db" : "techno_db"
                        },
                        {
                                "role" : "dbAdmin",
                                "db" : "techno_db"
                        }
                ],
                "mechanisms" : [
                        "SCRAM-SHA-1"
                ]
        },
        {
                "_id" : "techno_db.tech_wo_roles",
                "user" : "tech_wo_roles",
                "db" : "techno_db",
                "roles" : [ ],
                "mechanisms" : [
                        "SCRAM-SHA-1"
                ]
        }
]
> 
> 

Removing/Dropping User in MongoDB
First, list the users using the db.getUsers() method and decide which one to drop,
then use the db.dropUser() method to drop the user; db.dropUser() takes the username as an argument.
We will drop the tech_wo_roles user.

> db.dropUser("tech_wo_roles");
true
> 

If you want to drop all the users, you can use the db.dropAllUsers() method.
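A minimal sketch, run against the database whose users you want to remove:

> db.dropAllUsers();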

More will come on MongoDB; follow this space to explore more.

Table Creation in PostgreSQL


We created the database tech_db in a previous article; now we will create tables in that PostgreSQL database. Table creation in Postgres is very similar to creating a table in any other RDBMS database, and we can make use of the data types described here.

In Oracle we use desc to see the definition of a table; in PostgreSQL, we use \d or \d+ to see the list of columns and their descriptions.

We will create a table named employee with a few columns in it.

CREATE TABLE employee (
emp_id int NOT NULL,
emp_name varchar(255) NOT NULL,
age int NOT NULL,
PRIMARY KEY(emp_id)
);

You can see that we have defined all columns as NOT NULL and emp_id as the primary key; the same table can also be defined as below.


CREATE TABLE employee (
emp_id int NOT NULL PRIMARY KEY,
emp_name varchar(255) NOT NULL,
age int NOT NULL
);

The list of tables can be fetched using \dt
tech_db=# \dt
          List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | employee | table | postgres
(1 row)
tech_db=#

Create a table using CREATE TABLE ... AS SELECT


create table emp_2019 as select * from employee where age>26;

List tables
tech_db=# \dt
          List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | emp_2019 | table | postgres
 public | employee | table | postgres
(2 rows)

tech_db=#

Create unlogged table
Unlogged tables skip the write-ahead log, which makes writes to them faster, but they are not crash-safe (their contents are truncated after a crash) and they cannot be replicated. The syntax is similar to CREATE TABLE, just with the additional keyword UNLOGGED. For example


CREATE UNLOGGED TABLE student (
student_id int NOT NULL,
student_name varchar(255),
DOB date,
DOJ date
);

Describing table
tech_db=# \d employee;
            Table "public.employee"
  Column  |          Type          | Modifiers
----------+------------------------+-----------
 emp_id   | integer                | not null
 emp_name | character varying(255) | not null
 age      | integer                | not null
Indexes:
    "employee_pkey" PRIMARY KEY, btree (emp_id)

tech_db=#
tech_db=# \d+ employee;
                                Table "public.employee"
  Column  |          Type          | Modifiers | Storage  | Stats target | Description
----------+------------------------+-----------+----------+--------------+-------------
 emp_id   | integer                | not null  | plain    |              |
 emp_name | character varying(255) | not null  | extended |              |
 age      | integer                | not null  | plain    |              |
Indexes:
    "employee_pkey" PRIMARY KEY, btree (emp_id)

tech_db=#


Recommendations while creating tables
  • use lower-case names for tables
  • use lower-case names for columns
PostgreSQL column constraints
NOT NULL - the value cannot be null
UNIQUE - the value must be unique
PRIMARY KEY - not null + unique
CHECK - checks the value against a condition before insert or update
REFERENCES - constrains the column to values that exist in a column of another table (foreign key)

PostgreSQL table constraints
Similar to column constraints, except that they are applied to the whole table rather than to a single column; see the sketch below.
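The following is a hedged sketch showing both column-level constraints and a table-level constraint against the employee table created above (the salary_payment table and its columns are illustrative only):

CREATE TABLE salary_payment (
payment_id int PRIMARY KEY,
emp_id int NOT NULL REFERENCES employee (emp_id),
amount numeric(10,2) NOT NULL CHECK (amount > 0),
pay_month date NOT NULL,
UNIQUE (emp_id, pay_month)   -- table-level constraint: one payment per employee per month
);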


Different Server Logs in MySQL


Log files are always an important part of any database, whether it is an RDBMS or a NoSQL database.
Log files make troubleshooting easier and offer a lot of ways to analyze performance issues.
Below is the list of logs available in MySQL


Log type          | Information written to log
------------------+------------------------------------------------------------
Error log         | Problems encountered during starting, running, and stopping the daemon process (mysqld)
General query log | Established connections and statements received from clients
Binary log        | Statements that change data (also used for replication)
Relay log         | Data changes received from a replication master server
Slow query log    | Queries that took more than long_query_time seconds to execute
DDL log           | Metadata operations performed by DDL statements

By default, the log file location is the data directory. We will discuss these logs in detail.

The Error Log
  • As the name suggests, the error log contains diagnostic messages such as errors, warnings, and notes generated during startup and shutdown of the daemon process (mysqld) and while the server is running.
  • The default error log file name ends with .err
  • If mysqld_safe is used to start mysqld, it may write messages to the error log, e.g. when mysqld_safe notices an abnormal mysqld exit, it restarts mysqld and writes a "mysqld restarted" message to the error log
  • The log_error_services system variable controls which log components to enable for error logging
select @@log_error_services;
+---------------------------------------------+
| @@log_error_services                        |
+---------------------------------------------+
| log_filter_internal, log_sink_internal      |
+---------------------------------------------+
  • The output indicates that log events first go through the built-in filter component log_filter_internal and then through the built-in log writer (sink) component log_sink_internal
  • A filter modifies log events as seen by components named later in the log_error_services value
  • A sink is a destination for log events
  • The location of the error log can be defined using --log-error while starting the daemon process, or can be set using log_error in the configuration file.
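For example, a hedged configuration-file sketch (e.g. my.cnf; the file path is illustrative, not a MySQL default):

[mysqld]
log_error = /var/log/mysql/error.log
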
Error log in JSON format
  • The MySQL error log can be written in JSON format by modifying the log_error_services parameter value.
  • To make use of a JSON-format error log file, we first need to install the JSON sink component with INSTALL COMPONENT 'file://component_log_sink_json';
  • and then enable this component using
set GLOBAL log_error_services = 'log_filter_internal, log_sink_json';
Format of logfile message
timestamp thread_id [severity] [error_code] [subsystem] message

The General Query Log
  • The server writes information to this log when a client connects or disconnects, and it also logs each statement received from clients
  • If you want to track or monitor activity, or for troubleshooting purposes, you should activate the general query log using the general_log option
  • It can be disabled or enabled at runtime using the commands below
Disable - set GLOBAL general_log=OFF;
Enable - set GLOBAL general_log=ON;
  • This does not require a restart of the MySQL service
  • The log file location can be set using the general_log_file variable in the configuration file
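For example, a hedged configuration-file sketch (the file path is illustrative):

[mysqld]
general_log = ON
general_log_file = /var/log/mysql/general.log
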
The Binary Log
  • The binary log contains "events" that describe database changes such as table creation operations or changes to table data
  • The binary log has two important purposes
    • For replication, the binary log on the master replication server provides a record of data changes to be sent to slave servers
    • Data recovery operations require use of the binary log
  • Binary logging is enabled by default, i.e. the log_bin system variable is set to ON
  • The --log-bin option is used to specify the base name for binary log files
  • mysqld appends a numeric extension to the binary log base name to generate binary log file names; the number increases each time the server creates a new log file
  • The maximum size of each file is controlled by max_binlog_size
  • To keep track of which binary log files have been used, mysqld also maintains a binary log index file that contains the names of the binary log files
  • You can delete all binary log files with RESET MASTER, or a subset of them with PURGE BINARY LOGS
  • To display the contents of a binary log file, you can use the mysqlbinlog utility as below
    mysqlbinlog <binary_log_file_name>
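As mentioned above, binary logs can also be listed and purged with SQL statements; a hedged sketch (the file name and date are illustrative):

SHOW BINARY LOGS;
PURGE BINARY LOGS TO 'binlog.000010';
PURGE BINARY LOGS BEFORE '2019-04-01 00:00:00';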

The Slow Query Log
  • This log lists all the queries that exceed a predefined amount of time, long_query_time (specified in seconds).
  • Any query that takes more time than this parameter value will be recorded in the slow query log
  • The slow query log is enabled by the parameters below (see the configuration sketch after this list)
     slow_query_log=ON
     slow_query_log_file=
  • Slow query log entries contain:
    • Query_time : duration (in seconds)
    • Lock_time : duration (in seconds)
    • Rows_sent : N
    • Rows_examined : N
    • Thread_id : ID
    • errno : error number
  • There are a lot of parameters available for the slow query log
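A hedged configuration-file sketch enabling the slow query log (the file path and the 2-second threshold are illustrative):

[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
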
The DDL Log
  • The DDL log records metadata operations generated by data definition statements such as DROP TABLE or ALTER TABLE 
  • MySQL uses this log to recover from crashes occurring during metadata operations
  • A record of metadata operations such as those just described is written to the file ddl_log.log in the MySQL data directory
  • This is a binary file (not human-readable)
  • The ddl_log.log file can hold up to 1048573 entries, equivalent to 4GB in size
Relay Log
  • The relay log, like the binary log, consists of a set of numbered files containing events that describe database changes, and an index file that contains the names of all used relay log files.
  • Relay log files have the same format as binary log files and can be read using the mysqlbinlog utility.

Database Creation in PostgreSQL


Creating a database in PostgreSQL is a very simple task; it's like creating a directory structure. We will discuss the steps to create a new database.
Log in to the Postgres instance using the admin account

$ psql -d postgres -U postgres
psql.bin (9.6.2.7)
Type "help" for help.

postgres=# 

Create a database using the below command 

postgres# create database tech_db;

The above command takes default values for all parameters; the full syntax is defined below

create database database_name
owner = role_name
template = template
encoding = encoding
lc_collate = collate
lc_ctype = ctype
tablespace = tablespace_name
connection limit = max_concurrent_connections;
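For example, a hedged sketch that sets a few of these options explicitly (the database name and values are illustrative):

create database tech_db2
owner = postgres
encoding = 'UTF8'
connection limit = 100;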

We will describe each parameter in the below table

Parameter        | Description
-----------------+------------------------------------------------------------
owner            | The role that will own the new database (defaults to the user executing the command)
template         | The template database to copy from (defaults to template1)
encoding         | Character set encoding for the new database
lc_collate       | Collation (sort) order to use in the new database
lc_ctype         | Character classification to use in the new database
tablespace       | The default tablespace for the new database
connection limit | Maximum number of concurrent connections (-1 means no limit)

Using default parameters, the database can be created with the command below

postgres=# create database tech_db;
CREATE DATABASE
postgres=#

The list of databases can be checked using \l or the pg_database catalog


postgres=# \l
                                                 List of databases
   Name    |  Owner   | Encoding |          Collate           |           Ctype            |   Access privileges
-----------+----------+----------+----------------------------+----------------------------+-----------------------
 postgres  | postgres | UTF8     | English_United States.1252 | English_United States.1252 | =Tc/postgres         +
           |          |          |                            |                            | postgres=CTc/postgres
 tech_db   | postgres | UTF8     | English_United States.1252 | English_United States.1252 |
 template0 | postgres | UTF8     | English_United States.1252 | English_United States.1252 | =c/postgres          +
           |          |          |                            |                            | postgres=CTc/postgres
 template1 | postgres | UTF8     | English_United States.1252 | English_United States.1252 | =c/postgres          +
           |          |          |                            |                            | postgres=CTc/postgres
(4 rows)

postgres=# select datname from pg_database;
  datname
-----------
 template1
 template0
 postgres
 tech_db
(4 rows)

postgres=#

Your database is ready to use. There is another way to create a database as well, using the createdb command, which needs to be run from the UNIX shell; that will be discussed later on.
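As a quick preview, a minimal hedged sketch of that approach (the database name is illustrative):

$ createdb -U postgres tech_db3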

Data Types in PostgreSQL


There are a number of data types in PostgreSQL, and users can create their own data types, as PostgreSQL is identified as an object-oriented database.

Data types in PostgreSQL are differentiated on the basis of use and type; below is the list of data types available in PostgreSQL
  • Numeric types
  • Monetary types
  • Character types
  • Binary types
  • Date/Time types
  • Boolean types
  • Enumerated types
  • Geometric types
  • Network Address types
  • Bit string types
  • Text search types
  • UUID types
  • XML types
  • JSON types
  • Arrays
  • Composite types
  • Range types
  • Domain types
  • Object oriented types
  • pg_lsn types
  • pseudo types
Now we will discuss each one in detail.

Numeric types
The table below lists all numeric data types available

Name             | Storage Size | Description                      | Range
-----------------+--------------+----------------------------------+------------------------------------------------------------------------
smallint         | 2 bytes      | Small-range integer              | -32768 to +32767
integer          | 4 bytes      | Typical choice for integer       | -2147483648 to +2147483647
bigint           | 8 bytes      | Large-range integer              | -9223372036854775808 to +9223372036854775807
decimal          | variable     | User-specified precision, exact  | Up to 131072 digits before the decimal point, up to 16383 digits after
numeric          | variable     | User-specified precision, exact  | Up to 131072 digits before the decimal point, up to 16383 digits after
real             | 4 bytes      | Variable precision, inexact      | 6 decimal digits precision
double precision | 8 bytes      | Variable precision, inexact      | 15 decimal digits precision
smallserial      | 2 bytes      | Small auto-incrementing integer  | 1 to 32767
serial           | 4 bytes      | Auto-incrementing integer        | 1 to 2147483647
bigserial        | 8 bytes      | Large auto-incrementing integer  | 1 to 9223372036854775807
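
A hedged sketch using a few of these types (the table and column names are illustrative):

CREATE TABLE invoice (
invoice_id bigserial PRIMARY KEY,   -- large auto-incrementing integer
item_count smallint NOT NULL,
amount numeric(10,2) NOT NULL,      -- exact, user-specified precision
tax_rate real                       -- inexact floating point
);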

Monetary types
Below is the monetary data type available to use in PostgreSQL
Name  | Storage Size | Description     | Range
------+--------------+-----------------+-----------------------------------------------
money | 8 bytes      | Currency amount | -92233720368547758.08 to +92233720368547758.07

Character types
Below are described character types in PostgreSQL
Name                             | Description
---------------------------------+-------------------------------
character varying(n), varchar(n) | Variable length with limit
character(n), char(n)            | Fixed length, blank padded
text                             | Variable, unlimited length

Binary types
Name  | Storage Size                           | Description
------+----------------------------------------+-------------------------------
bytea | 1 or 4 bytes plus actual binary string | Variable-length binary string

bytea is a binary data type that allows storing binary strings.
bytea Hex format:

Select '\xDEADBEEF';

bytea Escape format:

SET bytea_output = 'escape';

SELECT 'abc \153\154\155 \052\251\124'::bytea;
     bytea
----------------
 abc klm *\251T

Date/Time types
Below are the available data types related to Date/time in PostgreSQL
Name                                    | Storage Size | Description                           | Low value        | High value      | Resolution
----------------------------------------+--------------+---------------------------------------+------------------+-----------------+--------------
timestamp [ (p) ] [ without time zone ] | 8 bytes      | Both date and time (no time zone)     | 4713 BC          | 294276 AD       | 1 microsecond
timestamp [ (p) ] with time zone        | 8 bytes      | Both date and time, with time zone    | 4713 BC          | 294276 AD       | 1 microsecond
date                                    | 4 bytes      | Date (no time of day)                 | 4713 BC          | 5874897 AD      | 1 day
time [ (p) ] [ without time zone ]      | 8 bytes      | Time of day (no date)                 | 00:00:00         | 24:00:00        | 1 microsecond
time [ (p) ] with time zone             | 12 bytes     | Time of day (no date), with time zone | 00:00:00+1459    | 24:00:00-1459   | 1 microsecond
interval [ fields ] [ (p) ]             | 16 bytes     | Time interval                         | -178000000 years | 178000000 years | 1 microsecond

Boolean type
The boolean data type is common and is much the same in every database system.

Name    | Storage Size | Description
--------+--------------+---------------------------
boolean | 1 byte       | State of true or false

Enumerated types
Enumerated data types comprise a static, ordered set of values. They are equivalent to the enum types supported in a number of programming languages.
Examples: days of the week, or a set of status values for a piece of data.
This kind of type is created as below (note that order itself is a reserved word, so the type here is named order_status)

CREATE TYPE order_status AS ENUM ('Yes', 'No', 'NA');
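
A hedged sketch of using the enum type in a table (the table and column names are illustrative):

CREATE TABLE customer_order (
order_id int PRIMARY KEY,
confirmed order_status NOT NULL
);

INSERT INTO customer_order VALUES (1, 'Yes');
-- INSERT INTO customer_order VALUES (2, 'Maybe');  -- fails: 'Maybe' is not a valid order_status value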

Geometric type
Geometric data types represent two-dimensional spatial objects; below are the available data types in PostgreSQL

Name    | Storage Size | Description         | Representation
--------+--------------+---------------------+--------------------------------------
point   | 16 bytes     | Point on a plane    | (x,y)
line    | 32 bytes     | Infinite line       | {A,B,C}
lseg    | 32 bytes     | Finite line segment | ((x1,y1),(x2,y2))
box     | 32 bytes     | Rectangular box     | ((x1,y1),(x2,y2))
path    | 16+16n bytes | Closed path         | ((x1,y1),...)
path    | 16+16n bytes | Open path           | [(x1,y1),...]
polygon | 40+16n bytes | Polygon             | ((x1,y1),...)
circle  | 24 bytes     | Circle              | <(x,y),r> (center point and radius)

Network types
PostgreSQL offers data types to store IPv4, IPv6 and MAC addresses
Name     | Storage Size  | Description
---------+---------------+----------------------------------
cidr     | 7 or 19 bytes | IPv4 and IPv6 networks
inet     | 7 or 19 bytes | IPv4 and IPv6 hosts and networks
macaddr  | 6 bytes       | MAC addresses
macaddr8 | 8 bytes       | MAC addresses (EUI-64 format)

Bit string types
Bit string types are strings of 1s and 0s. They can be used to store or visualize bit masks
bit(n) --> fixed length, exact match
bit varying(n) --> variable length up to a limit of n
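
For example (a minimal sketch):

SELECT B'101'::bit(3);
SELECT B'10'::bit varying(5);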

Text Search types
There are two data types to support full-text search
tsvector
A tsvector value is a sorted list of distinct lexemes, which are words that have been normalized to merge different variants of the same word.
For example
SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector;
                      tsvector
----------------------------------------------------
 'a' 'and' 'ate' 'cat' 'fat' 'mat' 'on' 'rat' 'sat' 

tsquery
A tsquery value stores lexemes that are to be searched for, and combines them using the Boolean operators & (AND), | (OR), and ! (NOT), as well as the phrase search operator <->

SELECT 'fat & rat'::tsquery;
    tsquery    
---------------
 'fat' & 'rat' 

UUID type
  • This data type stores universally unique identifiers (UUIDs) as defined by the relevant standards.
  • This identifier is a 128-bit quantity generated by an algorithm chosen to make it very unlikely that the same identifier will be generated by anyone else in the known universe using the same algorithm
  • A UUID is written as a sequence of lower-case hexadecimal digits in several groups separated by hyphens: a group of 8 digits, followed by three groups of 4 digits, followed by a group of 12 digits, for a total of 32 digits representing 128 bits
  • for example
           a0eebc99-9c0b-4ef8-bb6d-6bb9bd380011
XML types
The xml data type can be used to store XML data. To use this data type, PostgreSQL must be built with libxml support, using the build option below
configure --with-libxml
This data type can store well-formed "documents", as defined by the XML standard, as well as "content" fragments, which are defined by the production XMLDecl? content in the XML standard.
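For example (a minimal sketch; the document content is illustrative):

SELECT XMLPARSE (DOCUMENT '<?xml version="1.0"?><book><title>Manual</title></book>');
SELECT XMLPARSE (CONTENT 'abc<foo>bar</foo><bar>foo</bar>');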

JSON types
JSON data types are for storing JSON (JavaScript Object Notation) data
There are two types
  1. json
  2. jsonb
The major difference is one of efficiency
  • json - stores an exact copy of the input text
  • jsonb -
    • Data is stored in a decomposed binary format that makes it slightly slower to input due to added conversion overhead, but faster to process since no reparsing is needed
    • It also supports indexing

JSON primitive type | PostgreSQL type | Notes
--------------------+-----------------+---------------------------------------------------------------------------------------------
string              | text            | \u0000 is disallowed, as are non-ASCII Unicode escapes if the database encoding is not UTF-8
number              | numeric         | NaN and infinity values are disallowed
boolean             | boolean         | Only lowercase true and false are accepted
null                | (none)          | SQL NULL is a different concept
For example
select '5' ::json;
select '[1,2,"foo",null]'::json;

Composite types
A composite type represents the structure of a row or record; it is essentially just a list of field names and their data types
Declaration
CREATE TYPE complex AS (
r double precision,
i double precision
);

CREATE TYPE inventory_item AS (
name text,
supplier_id integer,
price numeric
);
The syntax is quite similar to the CREATE TABLE command, but without any constraints
Use of composite types
CREATE TABLE on_hand (
item inventory_item,
count integer
);
Inserting data 
insert into on_hand values (ROW('Fuzzydice', 45, 1.99), 1000);

Accessing data
select (item).name from on_hand where (item).price>9.99;
or
select (on_hand.item).name from on_hand where (on_hand.item).price >9.99;

Range types
The following range types are available in PostgreSQL

Name      | Description
----------+--------------------------------------
int4range | Range of integer
int8range | Range of bigint
numrange  | Range of numeric
tsrange   | Range of timestamp without time zone
tstzrange | Range of timestamp with time zone
daterange | Range of date
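
For example, a range value can be constructed and tested for containment (a minimal sketch):

SELECT int4range(10, 20) @> 15;                                    -- true, 15 falls inside [10,20)
SELECT daterange('2019-04-01', '2019-05-01') @> date '2019-04-15'; -- true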

These are the data types available in PostgreSQL; this list covers version 11 as well.