Multiple MySQL versions on the same server

Running multiple MySQL versions on the same server is possible, but it requires some extra configuration.

One way to achieve this is to isolate each MySQL installation in its own container or virtual machine, for example by running each version in a separate Docker container.

Another way is to install each version of MySQL in a separate directory and use different ports for each MySQL installation. You would also need to specify a different configuration file for each installation, which would include the appropriate port number.

To install multiple versions of MySQL on the same server, follow these steps:

Install the default server: Percona 8.0

1 sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
2 percona-release setup ps80
  yum install percona-server-server percona-server-devel.x86_64
3 service mysqld restart
4 cat /var/log/mysqld.log | grep password
5 ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass'; You must reset the password with an ALTER USER statement before executing any other statement.
  To resolve ERROR 1819 (HY000): Your password does not satisfy the current policy requirements,
  first set a password that satisfies the policy, e.g. ALTER USER 'root'@'localhost' IDENTIFIED BY 'Nitwings@111181_Pass'; then fire "SET GLOBAL validate_password.policy = LOW;" or follow steps 6 & 7.
 
6 mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password.check_user_name    | ON     |
| validate_password.dictionary_file    |        |
| validate_password.length             | 8      |
| validate_password.mixed_case_count   | 1      |
| validate_password.number_count       | 1      |
| validate_password.policy             | MEDIUM |
| validate_password.special_char_count | 1      |
+--------------------------------------+--------+
7 rows in set (0.01 sec)
To resolve ERROR 1819 (HY000): Your password does not satisfy the current policy requirements,
first set a password that satisfies the policy, e.g. ALTER USER 'root'@'localhost' IDENTIFIED BY 'Nitwings@111181_Pass';
and then make the necessary changes as per step 7.
7 SET GLOBAL validate_password.check_user_name = OFF;
SET GLOBAL validate_password.dictionary_file = '';
SET GLOBAL validate_password.length = 0;
SET GLOBAL validate_password.mixed_case_count = 0;
SET GLOBAL validate_password.number_count = 0;
SET GLOBAL validate_password.policy = LOW;
SET GLOBAL validate_password.special_char_count = 0;
OR
UNINSTALL PLUGIN validate_password;
UPDATE mysql.user SET Password=PASSWORD('your-password') WHERE User='root';
Relax the password policy; alternatively, you can uninstall the password validation plugin.

The password plugin name is validate_password, but some versions of MySQL use "validate.password".

 

8 ALTER USER 'root'@'localhost' IDENTIFIED BY 'your_password';

OR

UPDATE user SET password=PASSWORD('your_password') WHERE user='root';

set your password
9 mysql_secure_installation run it to secure your MySQL server
10 mysql -uroot -p Percona 8.0 is ready to use
  /etc/my.cnf is used for Percona 8.0

 

Install the 2nd server, Percona 5.7:-

1 Percona-Server-5.7.23-23-Linux.x86_64.ssl101.tar.gz => Go to https://www.percona.com/software/mysql-database/percona-server.
=> Select Percona Server 5.7.
=> Select Product Version 'PERCONA-SERVER-5.7.23-23'.
=> Select Platform 'LINUX - GENERIC'. Note:- choose the file as per your OS.
=> ssl100 - for Debian prior to 9 and Ubuntu prior to 14.04.
=> ssl101 - for CentOS 6 and CentOS 7.
=> ssl102 - for Debian 9 and Ubuntu versions starting from 14.04.
=> ssl111 - for CentOS 8 and RedHat 8.
=> Download it using the wget command.
2 tar -zxvf Percona-Server-5.7.23-23-Linux.x86_64.ssl101.tar.gz extract the downloaded file
3 mkdir /home1/replication/

mv Percona-Server-5.7.23-23-Linux.x86_64.ssl101 /home1/replication/Mysql-204

create a base directory 'Mysql-204' for this MySQL environment.

 

4 mkdir /home1/replication/Mysql-204/etc
mkdir /home1/replication/Mysql-204/pid
touch /home1/replication/Mysql-204/etc/my.cnf
create my.cnf as per the below example.
5 chown mysql:mysql  /home1/replication/Mysql-204/ -R change ownership of the directory
6 cd /home1/replication/Mysql-204/

./bin/mysql_install_db --basedir=/home1/replication/Mysql-204/ --datadir=/home1/replication/Mysql-204/data/ --user=mysql

initialize the data directory to create the MySQL system database.
7 /home1/replication/Mysql-204/bin/mysqld_safe --defaults-file=/home1/replication/Mysql-204/etc/my.cnf &

mysql -uroot -p --socket=/home1/replication/Mysql-204/data/mysql.sock --port=3308

start the MySQL service and log in to the MySQL server.
8 cat /home1/replication/Mysql-204/mysqld.err | grep pass

or

cat /root/.mysql_secret

find the generated password in a log file.
9 ./bin/mysqld --defaults-file=/home1/replication/Mysql-204/etc/my.cnf --skip-grant-tables & if the password is not present in the log file, stop the MySQL service (using the init script from step 11) and start it with --skip-grant-tables to reset the password.
10 flush privileges;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'your-password';
UPDATE user SET password=PASSWORD('your-password') WHERE user='root';
flush privileges;
reset the MySQL root password.
11
#!/bin/sh
test -f /home1/replication/Mysql-204/bin/mysqld_safe || exit 0
case "$1" in
start)
echo -n "Starting mysql204: mysql204"
cd /home1/replication/Mysql-204
./bin/mysqld_safe --defaults-file=/home1/replication/Mysql-204/etc/my.cnf &
echo "."
;;
stop)
echo -n "Stopping mysql204: mysql204"
/home1/replication/Mysql-204/bin/mysqladmin --socket=/home1/replication/Mysql-204/data/mysql.sock --port=3308 -u root -p shutdown
kill `cat /home1/replication/Mysql-204/pid/mysqld.pid`
echo "."
;;
restart)
echo -n "Stopping mysql204: mysql204"
/home1/replication/Mysql-204/bin/mysqladmin --socket=/home1/replication/Mysql-204/data/mysql.sock --port=3308 -u root -p shutdown
kill `cat /home1/replication/Mysql-204/pid/mysqld.pid`
echo "."
echo -n "Starting mysql204: mysql204"
cd /home1/replication/Mysql-204
./bin/mysqld_safe --defaults-file=/home1/replication/Mysql-204/etc/my.cnf &
echo "."
;;
*)
echo "Usage: /etc/init.d/mysql204 start|stop|restart"
exit 1
;;
esac
init script to stop/start/restart this instance.
vi /home1/replication/Mysql-204/etc/my.cnf
[client]
port = 3308
[mysqld]
bind-address=0.0.0.0
#skip-grant-tables
#skip-networking
user=mysql
old_passwords=0
expire_logs_days = 30
max_binlog_size = 100M
port = 3308
datadir=/home1/replication/Mysql-204/data/
pid-file = /home1/replication/Mysql-204/pid/mysqld_3308.pid
#log = /home1/replication/Mysql-204/data/mysql_general.log
socket = /home1/replication/Mysql-204/data/mysql.sock
#skip-locking
skip-external-locking
group_concat_max_len=20480
#log
general_log
#######################
max_allowed_packet = 32M
sort_buffer_size = 120M
read_buffer_size = 120M
read_rnd_buffer_size = 640M
myisam_sort_buffer_size = 120M
tmp_table_size = 256M
query_cache_size = 320M
max_connections = 100
max_user_connections = 100
max_connect_errors = 99999999
interactive_timeout=280
wait_timeout=280
skip-name-resolve
slave-skip-errors = 1062,1053
########################
innodb_buffer_pool_size = 200M
innodb_log_buffer_size = 32M
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 256
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 40
innodb_open_files = 2000
innodb_file_per_table
log_bin_trust_function_creators = 1
############ Replication #####################
server-id=153
log-bin = /home1/replication/Mysql-204/master-bin.log
log-bin-index = /home1/replication/Mysql-204/master-log-bin.index
show-slave-auth-info
replicate-same-server-id=0
relay-log = /home1/replication/Mysql-204/slave-relay.log
relay-log-index = /home1/replication/Mysql-204/slave-relay-log.index
log-slave-updates
log_output=FILE
slow_query_log=1
slow_query_log_file=/home1/replication/Mysql-204/slow_queries.log
long_query_time=3
log-slow-admin-statements
########################
[mysqldump]
quick
max_allowed_packet = 32M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 20M
write_buffer = 20M
[myisamchk]
key_buffer = 1024M
sort_buffer_size = 512M
read_buffer = 64M
write_buffer = 64M
[mysqld_safe]
log-error=/home1/replication/Mysql-204/mysqld.err
pid-file= /home1/replication/Mysql-204/pid/mysqld.pid
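Before starting the second instance alongside the first, it is worth sanity-checking that each my.cnf declares a distinct port (3306 for the default server, 3308 for Mysql-204). A small sketch; the helper name mysqld_port is ours, and the demo runs against a throwaway copy of the config rather than a live server:

```shell
#!/bin/sh
# Print the [mysqld] port declared in a my.cnf -- a quick way to confirm
# that each instance stays on its own port before starting it.
mysqld_port() {
    # take the last "port = N" line inside the [mysqld] section
    awk -F= '/^\[mysqld\]/{s=1; next} /^\[/{s=0}
             s && $1 ~ /^port[ \t]*$/ {gsub(/[ \t]/, "", $2); p=$2}
             END {print p}' "$1"
}
# demo against a throwaway config, not a running server
cat > /tmp/demo-my.cnf <<'EOF'
[client]
port = 3308
[mysqld]
port = 3308
datadir = /home1/replication/Mysql-204/data/
EOF
mysqld_port /tmp/demo-my.cnf   # prints 3308
```

Run it against /etc/my.cnf and /home1/replication/Mysql-204/etc/my.cnf and verify the two values differ; the same applies to the socket paths.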

Install the 3rd server, MySQL 5.1:-

1 mysql-5.1.73-linux-x86_64-glibc23.tar.gz => Go to https://downloads.mysql.com/archives/community/?version=5.1.73

=> Product Version: '5.1.73'
=> Operating System: 'Linux - Generic'
=> download Linux - Generic (glibc 2.3) (x86, 64-bit), Compressed TAR Archive
=> Download it using "wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.1.73-linux-x86_64-glibc23.tar.gz"

2 tar -zxvf  mysql-5.1.73-linux-x86_64-glibc23.tar.gz

mkdir /home1/replication/

mv mysql-5.1.73-linux-x86_64-glibc23   /home1/replication/Mysql-187

extract downloaded files
create a base directory ‘Mysql-187’ for the MySQL environment.
3 mkdir /home1/replication/Mysql-187/etc create an etc directory for the 'my.cnf' file
4 touch /home1/replication/Mysql-187/etc/my.cnf
5 chown mysql:mysql  /home1/replication/Mysql-187/ -R change ownership of the directory with MySQL user and MySQL group.
6 cd /home1/replication/Mysql-187/

./scripts/mysql_install_db --user=mysql --defaults-file=/home1/replication/Mysql-187/etc/my.cnf
initialize the data directory to create the MySQL database.
7 /home1/replication/Mysql-187/bin/mysqld_safe --defaults-file=/home1/replication/Mysql-187/etc/my.cnf & start the MySQL server
8 /home1/replication/Mysql-187/bin/mysqladmin --socket=/home1/replication/Mysql-187/data/mysql.sock --port=3307 -u root password 'your-password' change the root password using the mysqladmin command. The default password is empty.
9 /home1/replication/Mysql-187/bin/mysql -uroot -p --socket=/home1/replication/Mysql-187/data/mysql.sock --port=3307 log in to the MySQL server
10 case "$1" in
start)
echo -n "Starting mysql187: mysql187"
cd /home1/replication/Mysql-187
./bin/mysqld_safe --defaults-file=/home1/replication/Mysql-187/etc/my.cnf &
echo "."
;;
stop)
echo -n "Stopping mysql187: mysql187"
/home1/replication/Mysql-187/bin/mysqladmin --socket=/home1/replication/Mysql-187/data/mysql.sock --port=3307 -u root -p shutdown
kill `cat /home1/replication/Mysql-187/mysqld.pid`
echo "."
;;
restart)
echo -n "Stopping mysql187: mysql187"
/home1/replication/Mysql-187/bin/mysqladmin --socket=/home1/replication/Mysql-187/data/mysql.sock --port=3307 -u root -p shutdown
kill `cat /home1/replication/Mysql-187/mysqld.pid`
echo "."
echo -n "Starting mysql187: mysql187"
cd /home1/replication/Mysql-187
./bin/mysqld_safe --defaults-file=/home1/replication/Mysql-187/etc/my.cnf &
echo "."
;;
*)
echo "Usage: /etc/init.d/mysql187 start|stop|restart"
exit 1
;;
esac
init script to stop/start the MySQL server.
vi /home1/replication/Mysql-187/etc/my.cnf
[client]
port = 3307
[mysqld]
user=mysql
old_passwords=1
expire_logs_days = 30
max_binlog_size = 100M
port = 3307
datadir=/home1/replication/Mysql-187/data/
pid-file = /home1/replication/Mysql-187/data/mysqld_3307.pid
log = /home1/replication/Mysql-187/data/mysql_general.log
socket = /home1/replication/Mysql-187/data/mysql.sock
#skip-locking
skip-external-locking
group_concat_max_len=20480
#log
general_log
#######################
key_buffer = 128M
max_allowed_packet = 8M
table_cache = 5000
sort_buffer_size = 120M
read_buffer_size = 120M
read_rnd_buffer_size = 640M
myisam_sort_buffer_size = 120M
thread_cache = 128
tmp_table_size = 256M
query_cache_size = 320M
thread_concurrency = 32
max_connections = 100
max_user_connections = 100
max_connect_errors = 99999999
interactive_timeout=280
wait_timeout=280
skip-name-resolve
slave-skip-errors = 1062,1053
########################
innodb_buffer_pool_size = 200M
innodb_additional_mem_pool_size = 200M
innodb_log_buffer_size = 32M
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 256
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 40
innodb_open_files = 2000
innodb_file_per_table
log_bin_trust_function_creators = 1
############ Replication #####################
server-id=153
master-host=11.11.18.81
master-user=replication-user
master-password=replication-password
#master-port=3306 #if master run on a different port
log-bin = /home1/replication/Mysql-187/master-bin.log
log-bin-index = /home1/replication/Mysql-187/master-log-bin.index
show-slave-auth-info
replicate-same-server-id=0
relay-log = /home1/replication/Mysql-187/slave-relay.log
relay-log-index = /home1/replication/Mysql-187/slave-relay-log.index
log-slave-updates
log_output=FILE
slow_query_log=1
slow_query_log_file=/home1/replication/Mysql-187/slow_queries.log
long_query_time=3
log-slow-admin-statements
########################
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 20M
write_buffer = 20M
[myisamchk]
key_buffer = 1024M
sort_buffer_size = 512M
read_buffer = 64M
write_buffer = 64M
[mysqld_safe]
log-error=/home1/replication/Mysql-187/mysqld.err
pid-file=/home1/replication/Mysql-187/mysqld.pid
Great CM Tool Ansible

Ansible is an open-source configuration management, deployment, and orchestration tool. It aims to provide large productivity gains for a wide variety of automation challenges.

Ansible Introduction:

✓ Ansible is a simple open-source IT engine that automates application deployment, intra-service orchestration, cloud provisioning, and many other IT tasks.

✓ Ansible is easy to deploy because it does not use any agents or custom security infrastructure.

✓ Ansible uses playbooks to describe automation jobs, and playbooks use a very simple language, YAML.

✓ Ansible is designed for multi-tier deployment.

✓ Ansible uses a hosts file where one can group hosts and control the actions of a specific group in the playbooks.

Michael DeHaan developed Ansible, and the Ansible project began in February 2012.

Red Hat acquired Ansible in 2015. It is available for RHEL, Debian, CentOS, Oracle Linux, etc.

You can use this tool whether your servers are on-premises or in the cloud. It turns your code into infrastructure, i.e., your computing environment has some of the same attributes as your application.

Why Do We Require Ansible:

✓ Ansible automates and simplifies repetitive, complex, and tedious operations.

✓ Ansible is open source, saves time as well as human efforts & is easy to implement.

✓ Ansible's architecture is simple and effective. It works by connecting to your nodes and pushing small programs to them.

✓ Ansible uses a push-based architecture and doesn't need any agents running on the client nodes.

✓ Ansible works over SSH and doesn’t require any daemons, special servers, or libraries to work. A text editor and a command line tool are usually enough to get your work done.

✓ Ansible infrastructure is described in a text file (INI) and then all the information about the desired state of these machines is organized in playbooks.

Advantages: –

Ansible is free to use by everyone.

Ansible is very consistent and lightweight, and imposes no constraints regarding the OS or underlying hardware.

Ansible is very secure due to its agentless capabilities and OpenSSH security features.

Ansible does not need any special system administrator skills to install and use it.

Push mechanism.

Disadvantages: –

The user interface is insufficient; Ansible Tower is a GUI, but it is still in the development stages.
Full automation cannot be achieved with Ansible.
It is new to the market, therefore limited support and documentation are available.
The “yum install ansible” command installs ansible and all the configuration files by default are stored in the /etc/ansible directory.

/etc/ansible/ansible.cfg is the main configuration file and /etc/ansible/hosts is the default inventory file, but regular users cannot update these files.

So users can set up their own environment to run Ansible. For example, anil is a DevOps admin whose role is to manage the web servers, but he doesn't have root access.

So anil can set up his Ansible environment as below:

 

#mkdir /home/anil/ansible

#cp /etc/ansible/ansible.cfg /home/anil/ansible

#vi /home/anil/.bashrc
And add
export ANSIBLE_CONFIG=/home/anil/ansible/ansible.cfg

#vi /home/anil/ansible/ansible.cfg
And add
inventory = /home/anil/ansible/anil-hosts
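Putting the per-user setup together, a minimal ansible.cfg for anil might look like the fragment below. The inventory path comes from the steps above; the remaining keys are common defaults added here as assumptions, not part of the original setup:

```ini
[defaults]
; anil's private inventory, copied from the steps above
inventory = /home/anil/ansible/anil-hosts
; assumed extras for this sketch; adjust to your environment
remote_user = anil
host_key_checking = False
forks = 5
```

With ANSIBLE_CONFIG exported in .bashrc, every ansible command anil runs picks up this file instead of /etc/ansible/ansible.cfg.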

 

Ad-hoc Commands: –

Ad-hoc commands are commands that can be run individually to perform quick functions.

These ad-hoc commands are not used for configuration management and deployment, because they are for one-time usage.

Ansible ad-hoc commands use the "ansible" command-line tool to automate a single task.

Ansible modules:- An Ansible module is an executable plug-in that gets the real job done; it is invoked from the command line or an Ansible playbook.

Ansible ships with a number of modules (called the module library) that can be executed directly on remote servers or through playbooks. Your library of modules can reside on any machine; there are no servers, daemons, or databases required.

Vault:-

Ansible Vault allows keeping sensitive data such as passwords or keys in encrypted files, rather than as plaintext in your playbooks.

Terms used in Ansible:-

Ansible Server The machine where Ansible is installed and from which all tasks and playbooks are run.
Module Basically, a module is a command or set of similar commands meant to be executed on the client side.
Task A task is a section that consists of a single procedure to be completed.
Role A way of organizing tasks and related files to be later called in a playbook.
Fact Information fetched from the client system, available as global variables via the gather_facts operation.
Inventory The file that contains data about the Ansible client servers.

vi /etc/ansible/hosts is used for managing clients and groups.

Example of /etc/ansible/hosts:

# Making a comment in my hosts file

rainbows.nitwinngs.com

[testServer]
52.36.167.137 ansible_user=ec2-user ansible_ssh_user=ec2-user
test1.nitwinngs.com:2222

[testServer:vars]
ansible_user=ec2-user

[webServers]
apache[01:50].nitwinngs.com
nginx[50:100].nitwinngs.com
Postfix.us[1:50].nitwinngs.com

[appServers]
app[a:f].nitwinngs.com
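The bracketed numeric ranges above are expanded by Ansible into individual hostnames. A shell sketch of what a pattern like apache[01:03].nitwinngs.com expands to (illustration only; Ansible performs this expansion internally when it parses the inventory):

```shell
#!/bin/sh
# Mimic Ansible's numeric host-range expansion for apache[01:03].
# printf's %02d keeps the zero padding implied by the "01" in the pattern.
for i in 1 2 3; do
    printf 'apache%02d.nitwinngs.com\n' "$i"
done
```

This prints apache01.nitwinngs.com through apache03.nitwinngs.com; alphabetic ranges like app[a:f] expand the same way, one host per letter.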

Play Execution of a playbook.
Handler A task that is called only if a notifier is present.
Notifier A section attributed to a task that calls a handler if the output is changed.
Playbooks Consist of code in YAML format, describing the tasks to be executed.
Host A node that is automated by Ansible.

Roles:-
We can use two techniques for reusing tasks:-

  • include
  • roles

Roles are good for organizing tasks and encapsulating the data needed to accomplish those tasks.

We can organize a playbook into a directory structure called roles.

Adding more and more functionality to a playbook makes it difficult to maintain in a single file.

Defaults Stores default variables for the role/application, e.g. if you want to run on port 80 or 8080, the variable needs to be defined in this path.
Files Contains files that need to be transferred to the remote VM (static files).
Handlers Triggers, or tasks; we can segregate all the handlers required in the playbook here.
Meta This directory contains files that establish role dependencies, e.g. author name, supported platforms, and dependencies if any.
Tasks Contains all the tasks that would normally be in the playbook, e.g. installing packages, copying files, etc.
Vars Variables for the role can be specified in this directory and used in your configuration files. Both vars and defaults store variables.

 

VARIABLE PRECEDENCE:-

1 Role defaults
2 Inventory variables
3 Inventory group_vars
4 Inventory host_vars
5 Playbook group_vars
6 Playbook host_vars
7 Host facts
8 Registered vars
9 Set_facts
10 Play vars
11 Play vars_prompt
12 Play vars_files
13 Role and include vars
14 Block variables
15 Task variables
16 Extra_vars

Basic Command:-

 

yum install epel-release.noarch Install the EPEL repo
yum install ansible Install Ansible
ansible --version
ansible all --list-hosts The "all" pattern refers to all the machines present in an inventory file.
ansible group-name --list-hosts

List all the hosts present in the specific group
ansible group-name[0] --list-hosts group-name[0]   0 = 1st host of the group
group-name[1]    1 = 2nd host of the group
group-name[-1]   -1 = last host of the group
group-name[1:5]  1:5 = 2nd to 6th host of the group
groupname1[1:5]:groupname2[2:6] hosts of 2 different groups
ansible all -a "ls"
ansible group-name -a "ls" -k  Eg:- ansible mail -a "ls"
Run a command on all hosts or a specific group. The default module is "command", where

-a is an argument
-k asks for the password

ansible web -m command -a "ls" The command module runs a single command on a specific group of hosts.
ansible web -m raw -a "ls; dir; date; ifconfig" Use the raw module to fire multiple commands.
ansible web -m copy -a 'src=/etc/apf/conf.apf_glob dest=/etc/apf/conf.apf owner=root group=root mode=660 backup=yes' Copy module
ansible web -m copy -a 'src=/tmp/apf/conf.apf_glob remote_src=yes dest=/etc/apf/conf.apf owner=root group=root mode=660 backup=yes' Copy from target to target
ansible all -m fetch -a 'src=/etc/httpd/conf/httpd.conf dest=/backup/httd' Fetch files from the remote servers to the master server.
ansible all -m file -a 'path=/etc/httpd/ssl state=directory' Create a folder using the file module.

Or

ansible all -m file -a 'path=/etc/httpd/ssl/wings.key state=touch' Create a file.

Or

ansible all -m file -a 'path=/etc/httpd/ssl/wings.key mode=777' Change file or folder permissions.

Or

ansible all -m file -a 'path=/etc/httpd/ssl/wings.key state=absent' Delete the file or folder.
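The file-module states used above map onto familiar shell operations. A sketch of the equivalents, run against a throwaway path rather than a real /etc/httpd:

```shell
#!/bin/sh
# Shell equivalents of the file module's states shown above.
# The path is a throwaway example, not a real httpd config dir.
d=/tmp/httpd-demo/ssl
mkdir -p "$d"                 # state=directory
touch "$d/wings.key"          # state=touch
chmod 777 "$d/wings.key"      # mode=777
rm -f "$d/wings.key"          # state=absent
ls "$d"
```

The difference is that the file module is idempotent and runs on every host in the pattern at once, while these commands act on one machine.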

ansible group-name -a "sudo yum install httpd -y"

or

ansible group-name -b -a "yum install httpd -y"

or

ansible group-name -b -m module-name -a "pkg=httpd state=present"

eg:-

ansible mail -b -m yum -a "pkg=postfix state=present"

Install a package using Ansible

-b = --become (sudo) and -a is an argument

(-a for argument and -m for the module)

When you use the -m (module) syntax, the arguments always map onto YAML:

-m yum -a "pkg=postfix state=present"
-m service -a "name=postfix state=started"
-m user -a "name=raj"
-m copy -a "src=main.cf dest=/etc/postfix"

ansible -i inventory all -m command -a "service iptables stop" --become --ask-become-pass ansible groupname -m command -a "linux_command"
Eg: ansible webservers -m command -a "mkdir /tmp/blackpost"
ansible all -m ping Check the connection with the nodes
ansible mail -m setup

ansible mail -m setup -a "filter=*ipv4*"

Check all the information about the nodes.

 

ansible-doc setup Use "ansible-doc module-name" to find information about a module
ansible-playbook playbook-name.yml

Eg:-

ansible-playbook info.yml

Run a playbook using the ansible-playbook command

ansible-vault create Vault.yml Create a new encrypted playbook
ansible-vault edit Vault.yml Edit the encrypted playbook
ansible-vault rekey Vault.yml Change the password
ansible-vault encrypt target.yml Encrypt an existing playbook
ansible-vault decrypt target.yml Decrypt an encrypted playbook
ansible-playbook handlers.yml --check

Dry run:- check whether the playbook is formatted correctly

 

PlayBook:-

→ Playbooks in Ansible are written in YAML format.

→ YAML is a human-readable data serialization language. It is commonly used for configuration files.

→ A playbook is a file where you write code consisting of vars, tasks, handlers, files, templates, and roles.

→ Each playbook is composed of one or more plays in a list; each play is a collection of tasks and configuration.

→ Playbooks are divided into many sections, like:

Target section:- Defines the hosts against which the playbook's tasks have to be executed.

Variable section:- Defines variables.

Task section:- Lists all the modules that we need to run, in order.

How to write YAML:-

In Ansible, nearly every YAML file starts with a list.

Each item in the list is a list of key-value pairs, commonly called a dictionary.

YAML files conventionally begin with "---" and end with "...".

All members of a list must begin at the same indentation level, starting with "- ".

for eg:-
--- # A list of Mail Servers
- Postfix
- Exim
- Qmail
- PowerMta
- MailerQ
- GreenArrow

Variables:-

  • Ansible uses previously defined variables to enable more flexibility in playbooks and roles. They can be used to loop through a set of given values, access various information like the hostname of a system, and replace certain strings in templates with specific values.
  • Put the variable section above the tasks so that it is defined first and used later.

Handlers Section:-

A handler is exactly the same as a task, but it will run only when called by another task.
or
Handlers are just like regular tasks in an Ansible playbook, but are only run if a task contains a notify directive and also indicates that it changed something.

DRYRUN:-

Check whether the playbook is formatted correctly.

ansible-playbook handlers.yml --check

1 Info.yml
--- # node information playbook
- hosts: mail
  user: ansible
  become: yes
  connection: ssh
  gather_facts: yes
2 Apache.yml
--- # install httpd on the mail group
- name: Apache playbook
  hosts: mail
  user: ansible
  become: yes
  become_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: ensure apache is running
      service:
        name: httpd
        state: started
3 Var.yml
--- # node information playbook
- hosts: mail
  user: ansible
  become: yes
  connection: ssh
  gather_facts: yes
  vars:
    pkgname: postfix
  tasks:
    - name: install packages using a variable
      action: yum name='{{ pkgname }}' state=installed
4 Handlers.yml
--- # MY PLAYBOOK FOR HANDLERS
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  tasks:
    - name: install httpd server on centos
      action: yum name=httpd state=installed
      notify: restart httpd

  handlers:
    - name: restart httpd
      action: service name=httpd state=restarted

5 Loops.yml
--- # MY LOOPS PLAYBOOK
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  tasks:
    - name: add a list of users on my nodes
      user: name='{{ item }}' state=present
      with_items:
        - Anil
        - jalela
        - Nilax
        - Arti

OR install packages:

- name: install and configure packages
  hosts: "{{ myHost }}"
  remote_user: ec2-user
  become: yes

  vars_files:
    - Maria_vars.yml

  tasks:
    - name: install packages
      yum: name={{ item }} state=installed
      with_items:
        - bind-utils
        - netcat
        - fping
        - nmap
        - httpd

 

OR loops like:

- name: testing loops
  hosts: testServer
  remote_user: ec2-user
  become: yes

  tasks:
    - name: looping over environment facts
      debug: msg={{ item.key }}={{ item.value }}
      with_dict: ansible_env

    - name: looping over files and then copying
      copy: src={{ item }} dest=/tmp/loops
      with_fileglob: "/tmp/*.conf"

    - name: do until something
      shell: echo hello
      register: output
      retries: 5
      delay: 5
      until: output.stdout.find('hello') != -1
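The do-until task above retries a command until its registered output matches. The same control flow in plain shell, as a sketch of what retries/delay/until amount to (retry_until is our own helper name; the delay is shortened for the demo):

```shell
#!/bin/sh
# Retry a command until its output contains a wanted string, up to a
# maximum number of attempts -- the shell analogue of Ansible's
# retries/delay/until on a registered result.
retry_until() {
    want=$1; max=$2; delay=$3; shift 3
    i=0
    while [ "$i" -lt "$max" ]; do
        out=$("$@")
        case $out in *"$want"*) echo "$out"; return 0;; esac
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
retry_until hello 5 1 echo hello   # succeeds on the first attempt
```

Ansible evaluates the until expression after each attempt and gives up after retries attempts, exactly like the loop counter here.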

6 Condition.yml
--- # CONDITIONAL PLAYBOOK
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  tasks:
    - name: install apache server for the debian family
      command: apt-get -y install apache2
      when: ansible_os_family == "Debian"
    - name: install apache server for the redhat family
      command: yum -y install httpd
      when: ansible_os_family == "RedHat"

7 Mariadb.yml

ansible-playbook -i hosts Mariadb.yml

- name: install and configure mariadb
  hosts: testServer
  remote_user: ec2-user
  become: yes

  vars:
    mysql_port: 3306

  tasks:
    - name: install mariadb
      yum: name=mariadb-server state=latest

    - name: create mysql configuration file
      template: src=my.cnf.j2 dest=/etc/my.cnf
      notify: restart mariadb

    - name: create mariadb log file
      file: path=/var/log/mysqld.log state=touch owner=mysql group=mysql mode=0775

    - name: start mariadb service
      service: name=mariadb state=started enabled=yes

  handlers:
    - name: restart mariadb
      service: name=mariadb state=restarted

The my.cnf.j2 template file looks like:-

#cat my.cnf.j2

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
port={{ mysql_port }}
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mariadb/mysqld.pid
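At run time the template module renders my.cnf.j2 with Jinja2 on the control node, filling {{ mysql_port }} from the play's vars. A rough shell illustration of that substitution, with sed standing in for Jinja2 (demonstration only, against throwaway files in /tmp):

```shell
#!/bin/sh
# Stand-in for "template: src=my.cnf.j2 dest=/etc/my.cnf": substitute
# the mysql_port variable into the template. Real rendering is done by
# Jinja2 on the control node; this sed is for illustration only.
mysql_port=3306
cat > /tmp/my.cnf.j2 <<'EOF'
[mysqld]
port={{ mysql_port }}
EOF
sed "s/{{ mysql_port }}/$mysql_port/" /tmp/my.cnf.j2 > /tmp/my.cnf
cat /tmp/my.cnf
```

Because the rendered file only changes when the variable or template changes, the notify on that task fires the restart handler only when /etc/my.cnf actually differs.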

8 Mariadb.yml

With log path and port variables

- name: install and configure mariadb
  hosts: testServer
  remote_user: ec2-user
  become: yes

  vars:
    mysql_port: 3306
    log_path: /var/log/

  tasks:
    - name: install mariadb
      yum: name=mariadb-server state=latest

    - name: create mysql configuration file
      template: src=my.cnf.j2 dest=/etc/my.cnf
      notify: restart mariadb

    - name: create mariadb log file
      file: path={{ log_path }}/mysqld.log state=touch owner=mysql group=mysql mode=0775

    - name: start mariadb service
      service: name=mariadb state=started enabled=yes

  handlers:
    - name: restart mariadb
      service: name=mariadb state=restarted

9 Mariadb.yml

With Maria_vars.yml

And the hostname defined at run time:

ansible-playbook -i hosts Mariadb.yml --extra-vars "myHost=testServer"

- name: install and configure mariadb
  hosts: "{{ myHost }}"
  remote_user: ec2-user
  become: yes

  vars_files:
    - Maria_vars.yml

  tasks:
    - name: install mariadb
      yum: name=mariadb-server state=latest

    - name: create mysql configuration file
      template: src=my.cnf.j2 dest=/etc/my.cnf
      notify: restart mariadb

    - name: create mariadb log file
      file: path=/var/log/mysqld.log state=touch owner=mysql group=mysql mode=0775

    - name: start mariadb service
      service: name=mariadb state=started enabled=yes

  handlers:
    - name: restart mariadb
      service: name=mariadb state=restarted

The Maria_vars.yml vars file looks like:-
#cat Maria_vars.yml

mysql_port: 3306
log_path: /var/log/

Var_cases.yml

ansible-playbook -i hosts Var_cases.yml

- name: testing variables
  hosts: testServer
  remote_user: ec2-user

  tasks:
    - name: get the date on the server
      shell: date
      register: output
    - debug: msg="the date is {{ output.stdout }}"
    - debug: var=ansible_distribution_version
    - name: group some machines together temporarily
      group_by: key=rhel_{{ ansible_distribution_version }}
      register: group_result
    - debug: var=group_result
Conditionals.yml

ansible-playbook -i hosts Conditionals.yml

- name: testing conditionals
  hosts: testServer
  remote_user: ec2-user
  become: yes

  vars:
    unicorn: true

  tasks:
    - name: don't install on debian machines
      yum: name=httpd state=latest
      when: (ansible_os_family == "RedHat" and ansible_distribution_major_version == "6")
    - name: are unicorns real or fake
      shell: echo "unicorns are fake"
      when: not unicorn

#    - fail: msg="unicorns require the rainbow variable to be set"
#      when: rainbow is undefined

    - name: test to see if selinux is running
      shell: getenforce
      register: sestatus

    - name: configure SELinux if not enforcing
      seboolean: name=mysql_connect_any state=true persistent=yes
      when: sestatus.rc != 0

    - name: checking systems
      shell: cat /var/log/messages
      register: log_output

    - name: next task
      shell: echo "systemd knows when we're doing ansible stuff"
      when: log_output.stdout.find('ansible') != 0
      register: shell_echo
    - debug: var=shell_echo

 

Management Career Level

 

Dimension/Level M1
Organizational Scope and Impact Leads a team that supports the operations of a work unit.

Executes annual goals and priorities as established with input from the next-level manager.

Delivery/Production Focus: Position has a direct impact on the productivity of the work unit.

Influencing and People Leadership Typically reports to a level M2.

Oversees the work of a team of at least three or more non-exempt employees within a single work unit.

Autonomy and Responsibility  

 

Ensures that subordinates perform work as prescribed by policies and procedures in order to achieve productivity, service, and quality standards, quotas, and goals.

 

Assigns work and resources to subordinates to achieve productivity, service, and quality standards within the parameters of the operating plan and budget.

 

Administers and executes policies & procedures typically affecting subordinates.

 

Has limited authority to make exceptions to policy and procedure; decisions are subject to frequent in-process review.

 

Responsible for input on pay, performance appraisals, work schedules, day-to-day personnel issues, discipline and hiring employees they supervise.

 

Problem Complexity Problems and opportunities arise within the operations of the immediate work group; practice, Organization, or related experience provides the solutions.

 

Resolves operational problems within provided guidelines.

Knowledge and Typical Educational Preparation Requires advanced technical/operational know-how.

 

Requires a high school diploma; may require associate/bachelor’s degree or specialized training.

 

Dimension/Level M2
Organizational Scope and Impact Leads a work unit or department.

 

Establishes annual goals and priorities, and influences the direction for new or revised services, programs, processes, standards, or operational plans, based upon the Organization’s longer-term strategies.

 

Operational/Delivery Focus: The position has a direct impact on the work unit or department and may impact the entire Organization.

Influencing and People Leadership *Typically reports to a level M3, M4.

 

Manages three or more exempt individuals within a work unit or department. Typically oversees the work of a team or teams of non-exempt employees through subordinate-level M1s.

In addition, may be responsible for coordinating and monitoring the work of external vendors, contractors, etc.

Autonomy and Responsibility Establishes, interprets, and adjusts as circumstances require Standard Operating Procedures by which subordinates operate.

Estimates staffing needs and schedules and assigns projects/work for the department. Is ultimately responsible for the success of all departmental projects.

Has authority to make exceptions to policy or procedures under guidelines that require judgment and discretion on issues of importance.

May have some responsibilities for managing financial or external risks that require occasional interaction with senior management.

Responsible for making decisions on pay, performance, discipline, and hiring for the employees they manage.

Problem Complexity Problems and opportunities arise from normal Organization department operations.

Identifies issues, gathers facts, and resolves operational problems. Recommends and collaborates with a next-level manager to resolve strategic issues.

Knowledge and Typical Educational Preparation Requires advanced knowledge of a specific professional discipline in addition to operational knowledge of related work units.

Typically requires a bachelor’s degree in a related discipline.

 

Dimension/Level M3
Organizational Scope and Impact Leads a large single department, multiple work units, or multiple departments.

 

Establishes annual or mid-term priorities, goals, and operational plans for the department or work units. Leads definition and direction for new or revised services, programs, processes, policies, standards, or operational plans, based on the Organization’s longer-term strategies. Recommends departmental strategic plans within Organization’s strategic direction to the next-level manager.

Tactical/Strategic Focus: The position has a significant impact on the specific work units or departments and impacts the entire Organization.

Influencing and People Leadership Typically reports to a level M4, Vice President, Dean, Assistant Provost, Assoc Provost, or Provost.

Oversees the work of a team or teams of exempt individual contributors through subordinate-level M2s. In addition, may be responsible for managing the work of external vendors, contractors, etc.

Autonomy and Responsibility Responsible for the organizational design of the department or work units.

Often recommends innovation and improvement to policy or procedures under guidelines that require judgment and discretion on issues of the significant dollar or stakeholder relationship impact.

Decisions affect mid to long-term operational results delivered, and typically affect the financial, employee or public relations aspects of the Organization.

Has responsibility for managing significant financial or external risks that require frequent interaction with executive leadership.

Responsible for all human resource management, integrating work throughout the area, and developing and monitoring budgets for the department or several work units.

Problem Complexity Problems and opportunities arise from broad internal or external issues and events.

 

Problems are both operational and strategic and may require the integration of knowledge from several disciplines or areas of expertise.

Knowledge and Typical Educational Preparation Requires expertise across multiple work units OR mastery of a specific professional discipline.

Typically requires a bachelor’s degree in a related discipline and may require an advanced degree.

 

Dimension/Level M4
Organizational Scope and Impact 1) Has responsibility for 30-50% of the operations of a strategically critical function in which actions can measurably increase or decrease Organization operating results.

OR

2) Has responsibility for a material portion of Organization assets or processes – operations, financial, human capital – as determined by the President, Provost, Executive Vice President committee.

Typically not more than one or two M4s per function. (Provost departments must take into account faculty administrative positions that are equivalent to this level).

Tactical/Strategic Focus: The position has a significant direct impact on a large function and a direct impact on the entire Organization. Actions can measurably increase or decrease Organization-wide operating results.

Influencing and People Leadership *Reports to a Vice President, Assoc Provost, Provost, Executive Vice President, or the President.

Manages teams of professionals (multiple M2s and M3s).

Makes regular presentations to the Organization’s Trustees.

Advises senior leadership outside his or her own function on Organization issues with high, quantifiable impact on the success of the Organization as a whole. Is, and is perceived to be, an authoritative representative of the Organization on a variety of issues.

Autonomy and Responsibility Responsible for the organizational design of the departments or functions.

Expected to recommend innovation and improvement to policy or procedures on issues of high dollar impact for the Organization;

Has the ability to significantly modify the major or most significant policies and processes in the function.

Decisions affect long-term operational results delivered, and typically affect the financial, employee, or public relations aspects of the Organization.

Has responsibility for managing significant financial or external risks that require frequent interaction with executive leadership and/or Organization Trustees.

 

Makes hiring and firing decisions for the departments or functions. Makes strategic vendor selections and purchasing decisions for the departments or functions.

Problem Complexity Problems and opportunities are strategic and often unprecedented and impact broad segments of the Organization or the entire Organization. Leading strategic planning for the function/departments is not a determining factor for this level.

Problems are resolved through abstract and conceptual analysis and require innovative thinking and problem-solving that impacts two of the three dimensions of management at the Organization level —operations, financial and human capital.

Knowledge and Typical Educational Preparation Recognized as the Organization’s expert in one of the primary areas of operations within a function—includes regularly being sought out by senior leadership outside the function (VPs and above) to provide advice on areas of significance and being seen as providing credible advice; the VP acknowledges the position as having higher level knowledge in this area of specialty than the VP.

Typically requires a bachelor’s degree in a related discipline and may require an advanced degree.

 

Dimension/Level M5
Organizational Scope and Impact Leads a large strategically critical function (with Organization-wide impact).

Has authority/accountability to define the long-term strategy for the function.

Shares direct accountability to develop and implement the three-to-five-year strategies for the Organization as a whole, and for major initiatives that shape the long-term future of the Organization.

Typically, does not perform individual contributor assignments.

Strategic Focus: Position has a predominant impact on the specific function and a significant impact on the entire Organization. Actions can measurably increase or decrease Organization-wide operating results.

Influencing and People Leadership Reports directly to the Provost, Executive Vice President, or the President.

Oversees the work of multiple teams of professionals, typically through subordinate M3s and M4s. Appointed at the discretion of the President of the Organization.

Autonomy and Responsibility Sets or changes the function’s strategic plans or goals and actions of the function.

Performance is assessed primarily on long-term strategic results achieved, rather than on individual decisions or short-term operational results.

Decisions significantly affect the success of function goals, greatly impacting the financial stability, employee relations, and public relations of the Organization.

Makes hiring and firing decisions for the function. Plans for succession and overall talent management for the function. Makes vendor selections and purchasing decisions for the function.

Problem Complexity Problems are complex and difficult and affect the Organization as a whole.

Problems are resolved through abstract and conceptual analysis.

Knowledge and Typical Educational Preparation Requires in-depth management knowledge of functions OR is recognized as a national expert or model in a relevant discipline(s).

Requires an advanced degree in the relevant course of study.

 


DD For Swap and Large File

In IT, developers and system admins need large files for multiple reasons:
==> create swap files
==> check hard disk write speed
==> check internet and LAN speed
==> check application functionality (upload/download speed and size limits)

The following commands can be used to create a large file:
(1) xfs_mkfile
(2) dd
(3) head
(4) fallocate

head -c 10G </dev/urandom  > myfile
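As a small sketch of three of these approaches (using 1 MB files and paths under /tmp so it runs quickly; scale the sizes up, e.g. to 10G, for real use):

```shell
# Create a 1 MB file three different ways (sizes and paths are examples).
dd if=/dev/zero of=/tmp/file_dd bs=1024 count=1024 2>/dev/null   # zero-filled
head -c 1M /dev/urandom > /tmp/file_head                         # random data
# fallocate preallocates blocks without writing; fall back to truncate
# (a sparse file) on filesystems where fallocate is not supported.
fallocate -l 1M /tmp/file_falloc 2>/dev/null || truncate -s 1M /tmp/file_falloc
ls -l /tmp/file_dd /tmp/file_head /tmp/file_falloc
```

dd and head actually write the data (useful for disk-speed tests), while fallocate only reserves space, which is much faster but tells you nothing about write throughput.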

Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. Note: historically Btrfs did not support swap files; Linux kernel 5.0 added support, with restrictions.

To create a swap file, use the dd command to create an empty file. To create a 1GB file, type:

dd if=/dev/zero of=/swapfile bs=1024 count=1024000

dd is a command for converting and copying a file.
if=/dev/zero reads from /dev/zero (an endless stream of zero bytes) instead of stdin. Do not use /dev/null here; reading from it produces an empty file.
of=/swapfile is the name of the swap file.
bs=1024 reads and writes 1024 bytes at a time.
count=1024000 copies 1024000 blocks of 1024 bytes, i.e. roughly 1 GB.

Prepare the swap file using mkswap just as you would a partition, but this time use the name of the swap file:

mkswap /swapfile

swapon, swapoff – enable/disable devices and files for paging and swapping.

swapon /swapfile

The /etc/fstab entry for a swap file would look like this:

/swapfile       none    swap    sw      0       0

Finally, run mount -a to mount all the unmounted filesystems, and use free -g (or swapon --show) to check whether your swap file is active.

 

5-AWS VPC Components

VPC:-

A Virtual Private Cloud (VPC) is a virtual network that closely resembles the traditional network you operate in your own data centre, with the benefits of using the scalable infrastructure of AWS.

OR

A VPC is a virtual network or data centre inside AWS for one client.
→ It is logically isolated from other virtual networks in the AWS Cloud.
→ By default (a soft limit), a maximum of 5 VPCs can be created per region, with 200 subnets per VPC.
→ We can allocate a maximum of 5 Elastic IPs (default soft limit).
→ Once we create a VPC, a DHCP option set, NACL, and Security Group are automatically created.
→ A VPC is confined to an AWS region and does not extend between regions.
→ A VPC exists in a region, not in an availability zone.
→ A subnet is created in an availability zone, not in a region.
→ The same subnet cannot be used in 2 availability zones; one subnet cannot extend across 2 availability zones.
→ A subnet is availability-zone specific, and a VPC is region specific.
→ Once the VPC is created, you cannot change its primary CIDR block range.
→ If you need a different CIDR size, create a new VPC.
→ The different subnets within a VPC cannot overlap.

→ You can, however, expand your VPC CIDR by adding new/extra IP address ranges (except in GovCloud and AWS China).

Step to create VPC: –
create VPC.
create subnet.
create an internet gateway.
create a routing table.
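The steps above can be sketched with the AWS CLI. The CIDR blocks are illustrative assumptions, and the resource IDs (vpc-…, igw-…, rtb-…, subnet-…) are hypothetical placeholders; in practice each create call returns the real ID to use in the next step:

```shell
# 1. Create a VPC with a /16 CIDR block (example range)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
# 2. Create a subnet inside it (substitute the vpc-id returned above)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
# 3. Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
# 4. Create a route table, add a default route to the IGW, and associate it
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0
```

These commands require configured AWS credentials; they are a provisioning sketch, not a tested script.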

Components of VPC: –
→ CIDR & IP address subnets.
→ Implied router & routing table.
→ Internet Gateway.
→ Security Groups.
→ Network ACL.
→ Virtual Private Gateway.
→ Peering connections.
→ Elastic IP.

 

VPC Type:-

Default VPC: –

→ Created in each AWS Region when an AWS account is created.
→ Has default CIDR, Security Group, NACL, and Route Table settings.
→ Has an Internet Gateway by default.

Custom VPC: –

→ Is a VPC that an AWS account owner creates.
→ The AWS user creating the custom VPC can decide the CIDR.
→ Has its own default Security Group, Network ACL, and Route Tables.
→ Does not have an Internet Gateway by default; one needs to be created if needed.

 

Subnet: –

Public Subnet → If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet. If you want your instance in a public subnet to communicate with the internet over IPv4, it must have a public IPv4 address or an Elastic IP address.
Private Subnet → If a subnet does not have a route to the internet gateway, the subnet is known as a private subnet.

→ When you create a VPC, you must specify an IPv4 CIDR block for the VPC. The allowed block size is between a /16 and a /28 netmask.
→ The first four and the last IP address of each subnet cannot be assigned.

for eg –
10.0.0.0 → Network address.
10.0.0.1 → Reserved by AWS for the VPC router.
10.0.0.2 → Reserved by AWS: the IP address of the DNS server.
10.0.0.3 → Reserved by AWS for future use.
10.0.0.255 → Broadcast address; AWS does not support broadcast in a VPC but reserves this address.

 

Route & Route table: –

Route & Route Table → This is the central routing function.
→ It connects the different AZs together and connects the VPC to the internet gateway.
→ You can have up to 200 route tables per VPC.
→ You can have up to 50 route entries per route table.
→ Each subnet must be associated with only one route table at any given time.
→ If you do not specify a subnet-to-route-table association, the subnet is associated with the default (main) VPC route table.
→ You can edit the main route table if needed, but you cannot delete it.
→ However, you can make a custom route table the main route table and then delete the former main table, as it is no longer the main route table.
→ You can associate multiple subnets with the same route table.

 

Internet Gateway: –

Internet Gateway → An internet gateway is a virtual router that connects a VPC to the internet.
→ The default VPC already has an internet gateway attached.
→ If you create a new VPC, you must attach an internet gateway in order to access the internet.
→ Ensure that your subnet's route table points to the internet gateway.
→ It performs NAT between your private and public IPv4 addresses.
→ It supports both IPv4 and IPv6.

NAT Gateway:-

NAT Gateway → You can use a Network Address Translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, while preventing the internet from initiating a connection with those instances.
→ You are charged for creating and using a NAT gateway in your account: NAT gateway hourly usage and data processing rates apply, and Amazon EC2 charges for data transfer also apply.
→ To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside.
→ You must also specify an Elastic IP address to associate with the NAT gateway when you create it. There is no need to assign a public IP address to your private instance.
→ A NAT gateway always resides in a public subnet, not a private one, but it serves private subnets that need to access the internet.
→ After you have created a NAT gateway, you must update the route table associated with one or more of your private subnets to point internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.
→ Deleting a NAT gateway disassociates its Elastic IP address but does not release the address from your account.
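A minimal AWS CLI sketch of this flow (all IDs are hypothetical placeholders; real IDs come back from each call):

```shell
# Allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc
# Create the NAT gateway in a PUBLIC subnet, using that allocation
aws ec2 create-nat-gateway --subnet-id subnet-0aaa1111bbb2222cc \
    --allocation-id eipalloc-0123456789abcdef0
# Point the PRIVATE subnet's route table at the NAT gateway
# for internet-bound traffic
aws ec2 create-route --route-table-id rtb-0private1234567890 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0
```

As with the VPC sketch above, these commands assume configured AWS credentials and are illustrative, not a tested script.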

Security Groups:-

Security Groups → A security group is a virtual firewall that works at the ENI level.
→ Up to 5 security groups can be applied per EC2 instance interface.
→ Can only have allow rules; cannot have deny rules.
→ Stateful: return traffic for allowed inbound traffic is allowed, even if there are no rules explicitly allowing it.
→ Security groups work at the EC2 instance level, while NACLs work at the subnet level within the VPC.
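For example, an allow rule can be added with the AWS CLI (the group ID and CIDR below are hypothetical):

```shell
# Allow inbound SSH from one network. Because security groups are
# stateful, the SSH return traffic needs no separate outbound rule.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.0/24
```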

 

Network ACL:-

Network ACL → It is a function performed on the implied router.
→ A NACL is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
→ Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
→ You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
→ Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate the subnet with a network ACL, the subnet is automatically associated with the default network ACL.
→ You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
→ A network ACL contains a numbered list of rules that are evaluated in order, starting with the lowest-numbered rule.
→ The highest number that you can use for a rule is 32766. It is recommended that you create rules with numbers in multiples of 100, so that you can insert new rules where you need them later.
→ It functions at the subnet level.
→ A NACL is stateless: return traffic for allowed inbound traffic must be explicitly allowed too.
→ You can have both allow and deny rules in a NACL.
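A sketch of adding a numbered NACL rule with the AWS CLI (the ACL ID and CIDR are hypothetical):

```shell
# Add ingress rule number 100 allowing SSH. Rule numbers in multiples
# of 100 leave room to insert rules later. Because NACLs are stateless,
# a matching egress rule for the reply traffic (ephemeral ports) must
# be added separately.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 \
    --protocol tcp --port-range From=22,To=22 \
    --cidr-block 203.0.113.0/24 --rule-action allow
```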

 

Diff SG&NACL:-

Security Group

→ Operates at the instance level.
→ Supports allow rules only.
→ Stateful: return traffic is automatically allowed.
→ Applies to an instance only.

NACL

→ Operates at the subnet level.
→ Supports both allow and deny rules.
→ Stateless: return traffic must be explicitly allowed by a rule.
→ Applies to all instances in the subnet.

VPC Peering:-

VPC Peering → A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses.

→ Instances in either VPC can communicate with each other as if they were within the same network.

→ You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different Regions.
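A sketch of setting up peering with the AWS CLI (VPC IDs, the peering connection ID, and the peer CIDR are hypothetical):

```shell
# Request a peering connection between two VPCs
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0aaaa111122223333 \
    --peer-vpc-id vpc-0bbbb444455556666
# The owner of the peer VPC must accept the request
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0123456789abcdef0
# Each VPC then needs a route to the other's CIDR via the peering connection
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-0123456789abcdef0
```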

4-AWS Ec2 Components

 

EC2 Access:-

EC2 Access → To access instances, you need a key pair (identified by its key pair name).
→ You can download the private key only once.
→ The public key is saved by AWS and matched to the key pair name; the private key is used when you try to log in to the EC2 instance.
→ Without a key pair, you cannot access instances via RDP or SSH.
→ There is a soft limit of 20 EC2 instances per region; you can submit a request to AWS to increase it.

EC2 Status Check:-

EC2 Status Check

 

→ By default, the AWS EC2 service performs automated status checks every minute.
→ This is done on every running EC2 instance to identify any hardware or software issue.
→ Status checks are built into the AWS EC2 instance.
→ They cannot be configured, deleted, or disabled.
→ The EC2 service can send its metric data to AWS CloudWatch every 5 minutes (enabled by default).
→ Enabling detailed monitoring is chargeable and sends metrics every 1 minute.
→ You are not charged for EC2 instances if they are stopped; however, attached EBS volumes incur charges.

 

When you stop an EBS-backed EC2 instance → the instance performs a shutdown.
→ State changes from Running → Stopping → Stopped.
→ EBS volumes remain attached to the instance.
→ Any data cached in RAM or on instance store volumes is gone.
→ The instance retains its private IPv4 address and any IPv6 address.
→ The instance releases its public IPv4 address back to the AWS pool.
→ The instance retains its Elastic IP address.

 

 

EC2 Termination → When you terminate a running instance, the instance state changes as follows:
Running → Shutting down → Terminated.
→ During the Shutting down and Terminated states, you do not incur charges.
→ By default, EBS root device volumes are deleted automatically when the EC2 instance is terminated.
→ Any additional (non-root) volumes attached to the instance, by default, persist after the instance is terminated.
→ You can modify both behaviors by changing the 'Delete on Termination' attribute of any EBS volume during instance launch or while running.
→ Enable EC2 termination protection to guard against accidental termination.

 

EC2 Metadata:-

EC2 Metadata

 

This is instance data that you can use to configure or manage the instance.
e.g.: IPv4 addresses, IPv6 addresses, DNS hostname, AMI ID, instance ID, instance type, local hostname, public keys, security groups.
→ Metadata can only be viewed from within the instance itself, i.e. you have to log in to the instance.
→ Metadata is not protected by encryption; anyone that has access to the instance can view this data.
→ To view instance metadata: GET http://169.254.169.254/latest/meta-data/
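From inside an instance (this will not work from your laptop, since the link-local address only exists on the instance), metadata can be fetched with curl. The sketch below uses IMDSv2, which requires requesting a session token first:

```shell
# IMDSv2: obtain a short-lived session token, then query metadata with it.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/instance-id
```

Dropping the final path component (e.g. requesting /latest/meta-data/) lists the available metadata categories.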

 

Instance User Data:-

Instance User Data → Data supplied by the user at instance launch, in the form of a script to be executed during the instance boot.
→ User data is limited to 16 KB.
→ You can change user data by stopping the EC2 instance first.
→ User data is not encrypted.
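A hypothetical user-data script (it runs once, as root, on first boot; the package and page content here are illustrative assumptions for an Amazon Linux / RHEL-family AMI):

```shell
#!/bin/bash
# Hypothetical user-data script: install and start Apache on first boot,
# then publish a placeholder page.
yum -y install httpd
systemctl enable --now httpd
echo "Hello from user data" > /var/www/html/index.html
```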

 

Elastic Block Storage (NAS)

Elastic Block Storage (NAS) → Network-attached block storage; a volume is called an EBS volume and the instance is called an EBS-backed instance.
→ Most common type; replicated within its Availability Zone.
→ EBS root volumes attached at launch are, by default, deleted when the instance terminates.
→ EBS volumes attached to a running instance are not deleted when the instance is terminated but are detached with data intact.

 

Instance Storage (DAS)

Instance Storage (DAS) → Physically attached to the host server (direct-attached storage).
→ Data is not lost when the OS is rebooted. Data is lost when:
→ the underlying drive fails;
→ the instance is stopped or terminated.
→ You can't detach an instance store volume and attach it to another instance.
Do not rely on instance storage for valuable long-term data.