Atikh's DBA blog

MySQL : mysqld_safe

 Atikh Shaikh     mysql     No comments   

mysqld_safe is a wrapper used for MySQL server startup and is the recommended way to start a mysqld server on UNIX

  • mysqld_safe provides safety features such as restarting the server when an error occurs and writing runtime information to the error log
  • By default, mysqld_safe tries to start an executable named mysqld; to override this default behavior and specify explicitly the name of the server you want to run, pass --mysqld or --mysqld-version to mysqld_safe
  • mysqld_safe reads all options from the [mysqld], [server], and [mysqld_safe] sections in option files

Options with mysqld_safe

Format                        Description
--basedir                     Path to the MySQL installation directory
--core-file-size              Size of the core file that mysqld should be able to create
--datadir                     Path to the data directory
--defaults-extra-file         Read the named option file in addition to the usual option files
--defaults-file               Read only the named option file
--help                        Display a help message and exit
--ledir                       Path to the directory where the server is located
--log-error                   Write the error log to the named file
--malloc-lib                  Alternative malloc library to use for mysqld
--mysqld                      Name of the server program to start
--mysqld-safe-log-timestamps  Timestamp format for logging
--mysqld-version              Suffix for the server program name
--nice                        Use the nice program to set the server scheduling priority
--no-defaults                 Read no option files
--open-files-limit            Number of files that mysqld should be able to open
--pid-file                    Path name of the server process ID file
--port                        Port number on which to listen
--skip-kill-mysqld            Do not try to kill stray mysqld processes
--skip-syslog                 Do not write error messages to syslog; use the error log file
--socket                      Socket file on which to listen
--syslog                      Write error messages to syslog
--syslog-tag                  Tag suffix for messages written to syslog
--timezone                    Set the TZ time zone environment variable
--user                        Run mysqld as the user having the name user_name or the numeric user ID user_id
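As a quick illustration, a typical invocation combining several of these options might look like the sketch below (all paths and the user name are hypothetical placeholders, not values from this post):

```shell
# Start mysqld through the mysqld_safe wrapper; options not recognized
# by mysqld_safe are passed on to mysqld itself.
mysqld_safe --defaults-file=/etc/my.cnf \
            --datadir=/var/lib/mysql \
            --log-error=/var/log/mysql/mysqld.err \
            --user=mysql &
```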


Creating User Account in MongoDB

 Atikh Shaikh     MongoDB     No comments   

For any database, whether RDBMS or NoSQL, user accounts are a very important aspect and are required to connect to the database.
There are different types of users depending on usage, such as admin, read-only, and read-write accounts.
Users can be created or added to the database using the db.createUser() method available in MongoDB.
The db.createUser() method accepts a document object that enables you to specify the username, roles, and password for the user to be created.

Definition of the db.createUser() method

db.createUser(user, writeConcern)

Field           Type        Description
user            document    The document with authentication and access information about the user
writeConcern    document    Optional; the level of write concern for the creation operation

The user document has the following fields

Field                       Type      Description
user                        string    The name of the new user
pwd                         string    The user's password
customData                  document  Optional; any arbitrary information about the user
roles                       array     The roles granted to the user; can be an empty array [] to create a user without any roles
authenticationRestrictions  array     Optional; the authentication restrictions the server enforces on the user, specified as a list of IP addresses and CIDR ranges from which the user is allowed to connect
mechanisms                  array     Optional; the specific SCRAM mechanism or mechanisms for creating SCRAM user credentials
passwordDigestor            string    Optional; indicates whether the server or the client digests the password
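As a sketch of the optional fields above (the user name, password, and CIDR range below are made-up examples, not values from this post), a user restricted to connecting from one subnet could be created as:

```
> db.createUser({ user : "app_user",
... pwd : "app_pwd!23",
... roles : [ "readWrite" ],
... authenticationRestrictions : [
...   { clientSource : [ "192.168.1.0/24" ] }
... ]});
```

Connection attempts from addresses outside the listed range are then rejected at authentication time.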

Roles
In the roles field, you can specify built-in roles and user-defined roles. You can specify a role directly by name or in document form,
i.e.
"readWrite" or { role : "<role>", db : "<database>" }

External Credentials
Users created on the $external database should have credentials stored externally to MongoDB
Local database
Users cannot be created on the local database
Examples
Create account with roles assigned
> db.createUser({ user : "tech_owner",
... pwd : "techo!23",
... customData : {user : "blogUser"},
... roles :[{role: "clusterAdmin",
... db : "admin" },
... "readWrite"]}
... );
Successfully added user: {
        "user" : "tech_owner",
        "customData" : {
                "user" : "blogUser"
        },
        "roles" : [
                {
                        "role" : "clusterAdmin",
                        "db" : "admin"
                },
                "readWrite"
        ]
}
> 

Create an account with roles

> db.createUser({ user : "tech_user",
... pwd : "techo!23",
... roles :["readWrite", "dbAdmin"]}
... );
Successfully added user: { "user" : "tech_user", "roles" : [ "readWrite", "dbAdmin" ] }
> 
> 

Create a user without any roles

> db.createUser({ user : "tech_wo_roles",
... pwd : "techo!23",
... roles :[]
... }
... );
Successfully added user: { "user" : "tech_wo_roles", "roles" : [ ] }
> 
> 

Listing users created

> db.getUsers();
[
        {
                "_id" : "techno_db.tech_owner",
                "user" : "tech_owner",
                "db" : "techno_db",
                "customData" : {
                        "user" : "blogUser"
                },
                "roles" : [
                        {
                                "role" : "clusterAdmin",
                                "db" : "admin"
                        },
                        {
                                "role" : "readWrite",
                                "db" : "techno_db"
                        }
                ],
                "mechanisms" : [
                        "SCRAM-SHA-1"
                ]
        },
        {
                "_id" : "techno_db.tech_user",
                "user" : "tech_user",
                "db" : "techno_db",
                "roles" : [
                        {
                                "role" : "readWrite",
                                "db" : "techno_db"
                        },
                        {
                                "role" : "dbAdmin",
                                "db" : "techno_db"
                        }
                ],
                "mechanisms" : [
                        "SCRAM-SHA-1"
                ]
        },
        {
                "_id" : "techno_db.tech_wo_roles",
                "user" : "tech_wo_roles",
                "db" : "techno_db",
                "roles" : [ ],
                "mechanisms" : [
                        "SCRAM-SHA-1"
                ]
        }
]
> 
> 

Removing/Dropping a User in MongoDB
First, list the users using the db.getUsers() method and decide which one to drop,
then use the db.dropUser() method to drop the user; this method takes the username as its argument.
We will drop the tech_wo_roles user

> db.dropUser("tech_wo_roles");
true
> 

If you want to drop all users, you can use the db.dropAllUsers() method.

More on MongoDB will follow; watch this space to explore more.

Table Creation in PostgreSQL

 Atikh Shaikh     PostgreSQL     No comments   

We created the database tech_db in a previous article; now we will create tables in the PostgreSQL database. Table creation in Postgres is very similar to creating a table in any other RDBMS database, and we can make use of the data types described here

In Oracle we use desc to see the definition of a table; in PostgreSQL, we use \d or \d+ to see the list of columns and their descriptions

We will create a table named employee with a few columns in it

CREATE TABLE employee (
emp_id int NOT NULL,
emp_name varchar(255) NOT NULL,
age int NOT NULL,
PRIMARY KEY(emp_id)
);

You can see, we have defined all columns as NOT NULL and emp_id as the primary key; the same can be defined as below as well


CREATE TABLE employee (
emp_id int NOT NULL PRIMARY KEY,
emp_name varchar(255) NOT NULL,
age int NOT NULL
);

List of tables can be fetched using \dt 
tech_db=# \dt
          List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | employee | table | postgres
(1 row)
tech_db=#

Create a table using CREATE TABLE ... AS SELECT


create table emp_2019 as select * from employee where age>26;

List tables
tech_db=# \dt
          List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | emp_2019 | table | postgres
 public | employee | table | postgres
(2 rows)

tech_db=#

Create unlogged table
Unlogged tables skip the write-ahead log; they are faster, but they are not crash-safe (their contents cannot be recovered after a crash) and they cannot be replicated. The syntax is similar to CREATE TABLE, just with the additional keyword UNLOGGED. For example


CREATE UNLOGGED TABLE student (
student_id int NOT NULL,
student_name varchar(255),
DOB date,
DOJ date
);

Describing table
tech_db=# \d employee;
            Table "public.employee"
  Column  |          Type          | Modifiers
----------+------------------------+-----------
 emp_id   | integer                | not null
 emp_name | character varying(255) | not null
 age      | integer                | not null
Indexes:
    "employee_pkey" PRIMARY KEY, btree (emp_id)

tech_db=#
tech_db=# \d+ employee;
                                Table "public.employee"
  Column  |          Type          | Modifiers | Storage  | Stats target | Description
----------+------------------------+-----------+----------+--------------+-------------
 emp_id   | integer                | not null  | plain    |              |
 emp_name | character varying(255) | not null  | extended |              |
 age      | integer                | not null  | plain    |              |
Indexes:
    "employee_pkey" PRIMARY KEY, btree (emp_id)

tech_db=#


Recommendations while creating tables
  • use lower case name for tables
  • use lower case name for columns
PostgreSQL column constraints
NOT NULL - the value cannot be null
UNIQUE - the value must be unique across rows
PRIMARY KEY - NOT NULL + UNIQUE
CHECK - enables checking a value before insert or update
REFERENCES - constrains the value to match a value in a column of another table

PostgreSQL table constraints
Similar to column constraints, except they are applied to the table as a whole rather than to a single column.
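A small sketch combining both kinds of constraints (the salary table and its columns are hypothetical, building on the employee table created above):

```sql
CREATE TABLE salary (
    emp_id int NOT NULL REFERENCES employee (emp_id),  -- column constraints: NOT NULL + FK
    amount numeric NOT NULL CHECK (amount > 0),        -- column-level CHECK
    UNIQUE (emp_id)                                    -- table-level constraint form
);
```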


Different Server Logs in MySQL

 Atikh Shaikh     mysql     1 comment   

Log files are always an important part of any database, whether it is an RDBMS or a NoSQL database.
Log files make troubleshooting easy and point to a lot of ways to analyze performance issues.
Below is the list of logs available in MySQL


Log type            Information written to log
Error log           Problems encountered during starting, running, and stopping the daemon process (mysqld)
General query log   Established connections and statements received from clients
Binary log          Statements that change data (also used for replication)
Relay log           Data changes received from a replication master server
Slow query log      Queries that took more than long_query_time seconds to execute
DDL log             Metadata operations performed by DDL statements

By default, the log file location is the data directory. We will discuss these logs in detail.

The Error Log
  • As the name suggests, the error log contains diagnostic messages such as errors, warnings, and notes generated during startup and shutdown of the daemon process (mysqld) and while the server is running
  • The file name always ends with .err
  • If mysqld_safe is used to start mysqld, mysqld_safe may write messages to the error log, e.g. when mysqld_safe notices an abnormal mysqld exit, it restarts the server and writes a "mysqld restarted" message to this error log
  • The log_error_services system variable controls which log components to enable for error logging
select @@log_error_services;
+---------------------------------------------+
| @@log_error_services                        |
+---------------------------------------------+
| log_filter_internal, log_sink_internal      |
+---------------------------------------------+
  • The output indicates that log events first go through the built-in filter component log_filter_internal and then through the built-in log writer component log_sink_internal
  • A filter modifies log events seen by components named later in the log_error_services value
  • A sink is a destination for log events
  • The location of the error log can be defined using --log-error while starting the daemon process, or can be set using log_error in the configuration file.
Error log in JSON format
  • The MySQL error log can be written in JSON format by modifying the log_error_services variable value.
  • To make use of a JSON-format error log file, we first need to install the JSON log sink component:
INSTALL COMPONENT 'file://component_log_sink_json';
  • and then set the variable to use this component:
set GLOBAL log_error_services = 'log_filter_internal, log_sink_json';
Format of log file messages
timestamp thread_id [severity] [error_code] [subsystem] message
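To make the format concrete, the sketch below parses a line of this shape with Python's re module (the sample line itself is invented for illustration, not taken from a real server log):

```python
import re

# Invented sample line following the documented format:
# timestamp thread_id [severity] [error_code] [subsystem] message
line = "2023-05-01T10:15:30.123456Z 8 [Warning] [MY-010055] [Server] IP address could not be resolved"

# One named capture group per field of the format
pattern = re.compile(
    r"^(?P<timestamp>\S+)\s+(?P<thread_id>\d+)\s+"
    r"\[(?P<severity>[^\]]+)\]\s+\[(?P<error_code>[^\]]+)\]\s+"
    r"\[(?P<subsystem>[^\]]+)\]\s+(?P<message>.*)$"
)

m = pattern.match(line)
print(m.group("severity"))    # Warning
print(m.group("error_code"))  # MY-010055
print(m.group("subsystem"))   # Server
```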

The General Query Log
  • The server writes information to this log when clients connect or disconnect, and it also logs each statement received from clients
  • If you want to track or monitor activity, or for troubleshooting purposes, you should activate the general query log by enabling the general_log system variable
  • It can be disabled or enabled at runtime using the commands below
Disable - set GLOBAL general_log=OFF;
Enable - set GLOBAL general_log=ON;
  • This does not require a restart of the MySQL service
  • The log file location can be set using the general_log_file variable in the configuration file
The Binary Log
  • The binary log contains "events" that describe database changes such as table creation operations or changes to table data
  • The binary log has the below two important purposes
    • For replication, the binary log on the master replication server provides a record of data changes to be sent to slave servers
    • Data recovery operations require use of the binary log
  • Binary logging is enabled by default, i.e. the log_bin system variable is set to ON
  • The --log-bin[=base_name] option specifies the base name for binary log files
  • mysqld appends a numeric extension to the binary log base name to generate binary log file names; the number increases each time the server creates a new log file
  • The maximum size of each file is controlled by max_binlog_size
  • To keep track of which binary log files have been used, mysqld also maintains a binary log index file that contains the names of the binary log files
  • You can delete all binary log files with RESET MASTER, or a subset of them with PURGE BINARY LOGS
  • To display the contents of a binary log file, you can use the mysqlbinlog utility as below
    mysqlbinlog log_file
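For example, a subset of binary logs can be purged either up to a named file or by date (the file name and timestamp below are placeholders):

```sql
-- Remove all binary log files listed in the index before binlog.000010
PURGE BINARY LOGS TO 'binlog.000010';

-- Or remove binary log files created before a given timestamp
PURGE BINARY LOGS BEFORE '2023-05-01 00:00:00';
```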

The Slow Query Log
  • This log lists all queries that exceed a predefined amount of time, long_query_time (measured in seconds)
  • Any query that takes more time than this parameter value is recorded in the slow query log
  • The slow query log is enabled by the below parameters
     slow_query_log=ON
     slow_query_log_file=
  • Each slow query log entry contains:
    • Query_time : duration (in seconds)
    • Lock_time : duration (in seconds)
    • Rows_sent : N
    • Rows_examined : N
    • Thread_id : ID
    • errno : error number
  • There are a lot of parameters available for the slow query log
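The statistics header of a slow-query-log entry can be picked apart with a few lines of Python; the sample line below is invented to mirror the fields listed above:

```python
import re

# Invented example of a slow-query-log statistics line
header = "# Query_time: 2.500000  Lock_time: 0.000120 Rows_sent: 10  Rows_examined: 1000"

# Capture each "Name: value" pair into a dict of strings
stats = dict(re.findall(r"(\w+):\s+([\d.]+)", header))

print(stats["Query_time"])     # 2.500000
print(stats["Rows_examined"])  # 1000
```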
The DDL Log
  • The DDL log records metadata operations generated by data definition statements such as DROP TABLE or ALTER TABLE
  • MySQL uses this log to recover from crashes occurring in the middle of metadata operations
  • A record of metadata operations such as those just described is written to the file ddl_log.log in the MySQL data directory
  • This is a binary file (not human-readable)
  • ddl_log.log can hold up to 1048573 entries, equivalent to 4GB in size
Relay Log
  • The relay log, like the binary log, consists of a set of numbered files containing events that describe database changes, plus an index file that contains the names of all used relay log files.
  • Relay log files have the same format as binary log files and can be read using the mysqlbinlog utility.

Database Creation in PostgreSQL

 Atikh Shaikh     PostgreSQL     1 comment   

Creating a database in PostgreSQL is a very simple task; it's like creating a directory structure. We will discuss the steps to create a new database.
Log in to the Postgres instance using the admin account

$ psql -d postgres -U postgres
psql.bin (9.6.2.7)
Type "help" for help.

postgres=# 

Create a database using the below command 

postgres=# create database tech_db;

The above command takes default values for all parameters; the full syntax is defined below

CREATE DATABASE database_name
    OWNER = role_name
    TEMPLATE = template
    ENCODING = encoding
    LC_COLLATE = collate
    LC_CTYPE = ctype
    TABLESPACE = tablespace_name
    CONNECTION LIMIT = max_concurrent_connections;

Each parameter is described below

owner - the role that will own the new database (defaults to the user executing the command)
template - the name of the template to create the new database from (template1 by default)
encoding - the character set encoding for the new database
lc_collate - the collation order to use in the new database
lc_ctype - the character classification to use in the new database
tablespace - the name of the tablespace associated with the new database
connection limit - how many concurrent connections can be made to this database (-1 means no limit)
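As a sketch using several of these parameters (the database name and connection limit are arbitrary examples, not values from this post):

```sql
CREATE DATABASE tech_db2
    OWNER = postgres
    TEMPLATE = template1
    ENCODING = 'UTF8'
    CONNECTION LIMIT = 50;
```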

Using the default parameters, the database can be created with the below command

postgres=# create database tech_db;
CREATE DATABASE
postgres=#

The list of databases can be checked using \l or the pg_database catalog


postgres=# \l
                                                 List of databases
   Name    |  Owner   | Encoding |          Collate           |           Ctype            |   Access privileges
-----------+----------+----------+----------------------------+----------------------------+-----------------------
 postgres  | postgres | UTF8     | English_United States.1252 | English_United States.1252 | =Tc/postgres         +
           |          |          |                            |                            | postgres=CTc/postgres
 tech_db   | postgres | UTF8     | English_United States.1252 | English_United States.1252 |
 template0 | postgres | UTF8     | English_United States.1252 | English_United States.1252 | =c/postgres          +
           |          |          |                            |                            | postgres=CTc/postgres
 template1 | postgres | UTF8     | English_United States.1252 | English_United States.1252 | =c/postgres          +
           |          |          |                            |                            | postgres=CTc/postgres
(4 rows)

postgres=# select datname from pg_database;
  datname
-----------
 template1
 template0
 postgres
 tech_db
(4 rows)

postgres=#

Your database is ready to use. There is another way to create a database as well, using the createdb command, which needs to be run from the UNIX shell; it will be discussed later on.

Copyright © Atikh's DBA blog | Powered by Blogger