Friday 21 August 2015

RHEL cluster 3.X installation & configuration.

SERVERS: 10.25.12.20 / 10.25.12.25
RHEL  6: Configuring a cman-based cluster to use a specific network interface
1) Define all nodes in /etc/hosts.
If these nodes do not have a pre-existing hostname on that network, names can be assigned to them in this file,
as long as those names don't conflict with other hosts on the network.
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.143.61 node1.google.com node1
192.168.143.62 node2.google.com node2
Note: configure the above IPs manually on both nodes for private communication.
2) RPMs need to be installed on both nodes.
Note: configure a YUM repository served over the web:
#mkdir -p /var/www/html/RHEL6/u5/Server/x86_64/
#cp -R /cdrom/* /var/www/html/RHEL6/u5/Server/x86_64/
#chmod a+rx -R /var/www/html/RHEL6/u5/Server/x86_64/
#service httpd start
#chcon -R -t httpd_sys_content_t /var/www/html/RHEL6/u5/Server/x86_64/
Make the entries in the repo configuration file.
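For example, a client-side repo file pointing at the web server above (a sketch; adjust the IP and path to match your HTTP server):
#vi /etc/yum.repos.d/rhel-web.repo
[rhel-web]
name=Server
baseurl=http://10.25.12.20/RHEL6/u5/Server/x86_64/
enabled=1
gpgcheck=0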
[or]
Local repository configuration:
Copy the DVD contents into the required location (/var/ftp/pub/RHEL6).
#vi  /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Server
baseurl=file:///var/ftp/pub/RHEL6/Server
enabled=1
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=file:///var/ftp/pub/RHEL6/HighAvailability
enabled=1
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=file:///var/ftp/pub/RHEL6/LoadBalancer
enabled=1
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///var/ftp/pub/RHEL6/ScalableFileSystem
enabled=1
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=file:///var/ftp/pub/RHEL6/ResilientStorage
enabled=1
gpgcheck=0
# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
HighAvailability                                                                                                                                 | 3.9 kB     00:00 ...
LoadBalancer                                                                                                                                     | 3.9 kB     00:00 ...
ResilientStorage                                                                                                                                 | 3.9 kB     00:00 ...
ScalableFileSystem                                                                                                                               | 3.9 kB     00:00 ...
rhel-source                                                                                                                                      | 3.9 kB     00:00 ...
repo id                                                                          repo name                                                                        status
HighAvailability                                                                 HighAvailability                                                                    56
LoadBalancer                                                                     LoadBalancer                                                                         4
ResilientStorage                                                                 ResilientStorage                                                                    62
ScalableFileSystem                                                               ScalableFileSystem                                                                   7
rhel-source                                                                      Server                                                                           3,690
repolist: 3,819
yum install gfs2-utils
yum install rgmanager
yum groupinstall "High Availability"
rpm -qa | egrep -i "ricci|luci|cluster|ccs|cman|gfs2"
yum groupinstall "Resilient Storage"
yum install -y pacemaker
The luci and ricci users (and the pacemaker hacluster user) are created while installing the cluster packages:
ricci:x:140:140:ricci daemon user:/var/lib/ricci:/sbin/nologin
luci:x:141:141:luci high availability management application:/var/lib/luci:/sbin/nologin
hacluster:x:494:489:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin
Reset the passwords for the above users:
#passwd ricci
#passwd luci
#passwd hacluster



3) Starting the cluster software
To start the cluster software on a node, type the following commands in this order:
1)service cman start
2)service clvmd start -----> if CLVM has been used to create clustered volumes
3)service gfs2 start ---------> if you are using Red Hat GFS2
4)service rgmanager start --------> if you are using high-availability (HA) services (rgmanager)
--------
To stop the cluster software on a node,
type the following commands in this order:
1)service rgmanager stop
2)service gfs2 stop
3)umount -at gfs2
4)service clvmd stop
5)service cman stop
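For convenience, a minimal wrapper sketch that runs the above start/stop sequences in order (hypothetical script name; assumes all four services are in use, drop the ones you don't need):
#!/bin/bash
# cluster-svc.sh start|stop -- start/stop RHEL 6 cluster services in order
case "$1" in
  start) for s in cman clvmd gfs2 rgmanager; do service $s start; done ;;
  stop)  for s in rgmanager gfs2; do service $s stop; done
         umount -at gfs2                        # unmount any remaining gfs2 mounts
         for s in clvmd cman; do service $s stop; done ;;
  *)     echo "Usage: $0 start|stop"; exit 1 ;;
esac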
4) Starting the cluster GUI (luci).
# yum install luci
# service luci start
Starting luci: generating https SSL certificates...  done
                                                           [  OK  ]
/etc/sysconfig/luci is the configuration file.
https://10.25.12.20:8084/

Using the GUI we need to configure:
1) cluster name
2) cluster members
3) fencing
4) quorum devices
5) resources & service groups

5) Luci then generates the /etc/cluster/cluster.conf file.
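For illustration, a minimal two-node cluster.conf skeleton (a sketch using names from this setup; files generated by luci also carry the fencing and resource details):
<?xml version="1.0"?>
<cluster config_version="1" name="pos_cluster">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.google.com" nodeid="1"/>
    <clusternode name="node2.google.com" nodeid="2"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>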
6) After completing configuration of the rest of the necessary components and starting the cman service, check cman_tool status to see what addresses the nodes are communicating over:
[root@node1 ~]# cman_tool status | grep "Node addresses"
Node addresses: 192.168.143.61
[root@node2 ~]# cman_tool status | grep "Node addresses"
Node addresses: 192.168.143.62



7) Creating the GFS2 file system is mandatory.
Configure PVs, VGs, and LVs only after installing the cluster tools; otherwise clvmd will not recognize them.
Creating the volume:
What happens if a volume is created before the cluster software is fully running? The session below shows the errors.
# service clvmd status
clvmd (pid  2121) is running...
Clustered Volume Groups: (none)
Active clustered Logical Volumes: (none)
# lvremove posdb1datavol  posdb1datavg
  Volume group "posdb1datavol" not found
  Skipping volume group posdb1datavol
Do you really want to remove active logical volume posdb1datavol? [y/n]: y
  Logical volume "posdb1datavol" successfully removed

# pvs
  PV         VG           Fmt  Attr PSize   PFree
  /dev/sdb   posdb1datavg lvm2 a--  200.00g 200.00g
[root@hydposdb1 ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  posdb1datavg   1   0   0 wz--n- 200.00g 200.00g

# pvcreate /dev/sdb
  clvmd not running on node posdb_node1.google.co.in
  Can't get lock for orphan PVs

# service clvmd stop
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
# pvcreate /dev/sdb
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  Physical volume "/dev/sdb" successfully created
# pvs
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sdb        lvm2 a--  200.00g 200.00g
# service clvmd start
Starting clvmd:
Activating VG(s):   No volume groups found
                                                           [  OK  ]
# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sdb        lvm2 a--  200.00g 200.00g

# vgcreate posdbdatavg /dev/sdb
  Clustered volume group "posdbdatavg" successfully created
[root@hydposdb1 ~]# service clvmd status
clvmd (pid  17090) is running...
Clustered Volume Groups: posdbdatavg
Active clustered Logical Volumes: (none)
# lvcreate -L 199G -n posdbdatavol posdbdatavg
  Logical volume "posdbdatavol" created
# lvs
  LV           VG          Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  posdbdatavol posdbdatavg -wi-a----- 199.00g
# service clvmd status
clvmd (pid  17090) is running...
Clustered Volume Groups: posdbdatavg
Active clustered Logical Volumes: posdbdatavol
To create the GFS2 file system:
#  mkfs.gfs2 -t pos_cluster:posvol_gfs -p lock_dlm -j 3 /dev/posdbdatavg/posdbdatavol
This will destroy any data on /dev/posdbdatavg/posdbdatavol.
It appears to contain: symbolic link to `../dm-0'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/posdbdatavg/posdbdatavol
Blocksize:                 4096
Device Size                199.00 GB (52166656 blocks)
Filesystem Size:           199.00 GB (52166654 blocks)
Journals:                  3
Resource Groups:           796
Locking Protocol:          "lock_dlm"
Lock Table:                "pos_cluster:posvol_gfs"
UUID:                      79bd19f4-478a-65b1-b563-b537cfa15f46

General syntax: mkfs -t gfs2 -p lock_dlm -j <journals> -t <cluster>:<name> /full/volume/path
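To mount the new file system (the /pos mount point is a hypothetical example); the fstab entry lets the gfs2 init script mount it at boot:
# mkdir /pos
# mount -t gfs2 /dev/posdbdatavg/posdbdatavol /pos
# echo "/dev/posdbdatavg/posdbdatavol /pos gfs2 defaults 0 0" >> /etc/fstab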

8) Testing from the command line:
# clustat
Cluster Status for poc_cluster @ Sat May 16 12:34:46 2015
Member Status: Quorate
 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1.google.co.in                                                   1 Online, rgmanager
 node2.google.co.in                                                   2 Online, Local, rgmanager
 Service Name                                                     Owner (Last)                                                     State
 ------- ----                                                     ----- ------                                                     -----
 service:pos                                                      (node1.google.co.in)                                              disabled
2) To enable a service on a node:
#clusvcadm -e pos -m hostname
3) To restart a service on the running node:
#clusvcadm -R service_name
4) To stop a service on the running node:
#clusvcadm -s service_name
5) To freeze a service on the running node:
#clusvcadm -Z service_name
6) To unfreeze a service on the running node:
#clusvcadm -U service_name
7) To relocate a service to another node:
#clusvcadm -r service_name -m target_hostname
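For example, relocating the "pos" service from the clustat output above to node2:
#clusvcadm -r pos -m node2.google.co.in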


===================
RHEL 7: Configuring a corosync-based cluster to use a specific network interface.
1. Define all nodes in /etc/hosts.
If these nodes do not have a pre-existing hostname on that network, names can be assigned to them in this file,
as long as those names don't conflict with other hosts on the network.

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.143.61 node1.example.com node1
192.168.143.62 node2.example.com node2
2. Set up the cluster with pcs, the pcsd web interface, or directly in /etc/corosync/corosync.conf, specifying names matching those added in /etc/hosts. For example:

# pcs cluster setup --name myCluster node1.example.com node2.example.com
This produces a nodelist definition in /etc/corosync/corosync.conf with those names as the "ring0_addr" for each node:
nodelist {
  node {
        ring0_addr: node1.example.com
        nodeid: 1
       }
  node {
        ring0_addr: node2.example.com
        nodeid: 2
       }
}

NOTE: RHEL 7 corosync supports the Redundant Ring Protocol, in which multiple redundant interfaces can be used for cluster communication. In such configurations, "ring1_addr" should be defined in /etc/hosts the same way the primary name was.
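For example, a two-ring nodelist might look like this (the -priv names are hypothetical hostnames on the secondary interface):
nodelist {
  node {
        ring0_addr: node1.example.com
        ring1_addr: node1-priv.example.com
        nodeid: 1
       }
  node {
        ring0_addr: node2.example.com
        ring1_addr: node2-priv.example.com
        nodeid: 2
       }
}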
3. Upon starting the cluster with pcs cluster start [--all], check the output of corosync-cfgtool on each node to see what addresses are being used for node communication:
[root@node1 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
    id  = 192.168.143.61
    status  = ring 0 active with no faults
[root@node2 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
    id  = 192.168.143.62
    status  = ring 0 active with no faults

Postfix configuration: how to flush / remove the mail queue

# postfix flush
OR
# postfix -f
To see mail queue, enter:
# mailq
To remove all mail from the queue, enter:
# postsuper -d ALL
To remove all mails in the deferred queue, enter:
# postsuper -d ALL deferred
To set a relay host:
# vi /etc/postfix/main.cf
relayhost = 172.16.7.185
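After editing main.cf, verify the setting and reload Postfix to apply it:
# postconf relayhost
# service postfix reload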

Script: how to ping multiple servers from a centralized server?

#!/bin/bash
# Reads one hostname/IP per line from /tmp/server and pings each host once.
for i in `cat /tmp/server`
do
ping -q -c 1 "$i" > /dev/null 2>&1
if [ $? -eq 0 ]
then
   echo "$i server is up & running fine"
else
   echo "$i server is down -- NEED ADMIN HELP"
fi
done
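Usage sketch (assuming the script is saved as /tmp/ping_check.sh and /tmp/server holds one host per line):
# cat /tmp/server
10.25.12.20
10.25.12.25
# sh /tmp/ping_check.sh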

How to restart ovm-consoled on an OVM server?

Log in to the OVM server that hosts the guest VM and execute the commands below:


#rm /var/run/ovm-consoled.pid

#service ovm-consoled start

Kernel upgrade in RHEL

Kernel packages:
kernel-debuginfo is required for vmcore analysis
the debug kernel is compiled with extra checks for troubleshooting issues
kernel-devel and kernel-headers are for compiling modules against the kernel
kernel-doc is the documentation
For a kernel upgrade, the two RPMs below are mandatory.
kernel-2.6.32-431.el6.x86_64
kernel-firmware-2.6.32-431.el6.noarch
#rpm -ivh kernel-2.6.32-431.el6.x86_64.rpm
#rpm -ivh kernel-firmware-2.6.32-431.el6.noarch.rpm
#reboot
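After the reboot, confirm the new kernel is running:
# uname -r
2.6.32-431.el6.x86_64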



How to mount iSCSI devices at boot time in RHEL ?

Logical Volumes residing on my iSCSI devices are not activated after booting?
Resolution
To mount iSCSI LUNs in /etc/fstab, add _netdev to the mount options (the fourth field). Properly formatted /etc/fstab lines for two different iSCSI mount points are shown below:
#device         mount point     FS      Options Backup  fsck
LABEL=data1     /mnt/data1      ext3    _netdev 0       0
LABEL=data2     /mnt/data2      ext3    _netdev 0       0
The netfs service also needs to be enabled at boot time, as it is responsible for mounting devices that use _netdev:

# chkconfig netfs on
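For logical volumes on iSCSI storage the same _netdev option applies; a sketch with hypothetical VG/LV names:
/dev/iscsivg/datalv     /mnt/data3      ext3    _netdev 0       0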

How to generate a vmcore in RHEL?

The kernel.hung_task_panic parameter can be enabled along with kdump,
so that the system panics when it sees a process in a hung state for 120 seconds
and vmcore generation is initiated:

  # echo 1 > /proc/sys/kernel/hung_task_panic
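To make the setting persistent across reboots, add it to /etc/sysctl.conf (the 120-second threshold is the kernel default, tunable via kernel.hung_task_timeout_secs):
# echo "kernel.hung_task_panic = 1" >> /etc/sysctl.conf
# sysctl -p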

How to check user limits in shell initialization scripts in RHEL?

We can see user oracle has these resource limits set in /etc/security/limits.conf:
oracle    soft  nproc   2047     -->  4096
oracle    hard  nproc   16384
oracle    soft  nofile  1024     -->  4096
oracle    hard  nofile  65536
Does this user also have limits changed in shell initialization scripts? Check with a command like this from the user's account:
$ cat /proc/$$/limits            # limits of the current shell
# cat /proc/sys/fs/file-nr       # system-wide file handle usage
# ps auxm | wc                   # rough count of processes and threads
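To see the limits a fresh login shell for oracle would get (a quick check; this runs the user's own init scripts):
# su - oracle -c 'ulimit -u -n'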

Thursday 20 August 2015

RHEL basic hardening script ?

#!/bin/sh

#Store passwords using SHA-512 hashing

authconfig --passalgo=sha512 --update

#Set password creation requirement parameters using pam_cracklib

sed -i 's/try_first_pass retry=3 type=/try_first_pass retry=3 minlen=8 dcredit=-1 ucredit=-1 ocredit=-1 lcredit=-1/g' /etc/pam.d/system-auth

grep pam_cracklib.so /etc/pam.d/system-auth

sleep 1

#Limit password Reuse

sed -i 's/pam_unix.so sha512 shadow nullok try_first_pass use_authtok/pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5/g' /etc/pam.d/system-auth

grep pam_unix.so /etc/pam.d/system-auth

sleep 1

#Set Password Expiration Days

sed -i 's/PASS_MAX_DAYS [^ ]*/PASS_MAX_DAYS 42/g' /etc/login.defs

sed -i 's/PASS_MIN_DAYS [^ ]*/PASS_MIN_DAYS 1/g' /etc/login.defs

sed -i 's/PASS_MIN_LEN [^ ]*/PASS_MIN_LEN 8/g' /etc/login.defs

sed -i 's/PASS_WARN_AGE [^ ]*/PASS_WARN_AGE 7/g' /etc/login.defs

awk '/PASS_MAX_DAYS [^ ]*/ { print $0}' /etc/login.defs

awk '/PASS_MIN_DAYS [^ ]*/ { print $0}' /etc/login.defs

awk '/PASS_MIN_LEN [^ ]*/ { print $0}' /etc/login.defs

awk '/PASS_WARN_AGE [^ ]*/ { print $0}' /etc/login.defs

sleep 5

# lock inactive user accounts

useradd -D -f 35

awk '/INACTIVE[^ ]*/ { print $0}' /etc/default/useradd

sleep 3

#Set user/group owner and permissions on crontab

chown root:root /etc/crontab

chmod og-rwx /etc/crontab

ls -ld /etc/crontab

sleep 1

#Restrict the at daemon to authorized users

rm /etc/at.deny

touch /etc/at.allow

chown root:root /etc/at.allow

chmod og-rwx /etc/at.allow

#Restrict at/cron to authorized users

/bin/rm /etc/cron.deny

touch /etc/cron.allow

/bin/rm /etc/at.deny

chmod og-rwx /etc/cron.allow

chmod og-rwx /etc/at.allow

chown root:root /etc/cron.allow

chown root:root /etc/at.allow

#Set SSH Protocol to 2

sed -i 's/Protocol [^ ]*/Protocol 2/g' /etc/ssh/sshd_config

awk '/Protocol 2/{print $0}' /etc/ssh/sshd_config

#Disable SSH root login

sed -i 's/#PermitRootLogin [^ ]*/PermitRootLogin no/g' /etc/ssh/sshd_config

awk '/PermitRootLogin[^ ]*/{print $0}' /etc/ssh/sshd_config

#Set SSH PermitEmptyPassword to No

sed -i 's/#PermitEmptyPasswords no/PermitEmptyPasswords no/g' /etc/ssh/sshd_config

awk '/PermitEmptyPasswords[^ ]*/{print $0}' /etc/ssh/sshd_config

#Do NOT Allow Users to Set Environment Options

sed -i 's/#PermitUserEnvironment no/PermitUserEnvironment no/g' /etc/ssh/sshd_config

awk '/PermitUserEnvironment[^ ]*/{print $0}' /etc/ssh/sshd_config

#Use only approved ciphers in counter (CTR) mode

sed -i '140i\Ciphers aes128-ctr,aes192-ctr,aes256-ctr' /etc/ssh/sshd_config

awk '/Ciphers aes128-ctr,aes192-ctr,aes256-ctr/{print $0}' /etc/ssh/sshd_config

sleep 5

#Set idle timeout interval for user login

sed -i 's/#ClientAliveInterval [^ ]*/ClientAliveInterval 300/g' /etc/ssh/sshd_config

sed -i 's/#ClientAliveCountMax [^ ]*/ClientAliveCountMax 0/g' /etc/ssh/sshd_config

awk '/ClientAliveInterval[^ ]*/{print $0}' /etc/ssh/sshd_config

awk '/ClientAliveCountMax 0[^ ]*/{print $0}' /etc/ssh/sshd_config

sleep 3

service sshd restart

#restrict access to critical files

chown root:root /etc/passwd /etc/shadow /etc/group

chmod 644 /etc/passwd /etc/group

chmod 400 /etc/shadow

ls -ld /etc/passwd /etc/shadow /etc/group

sleep 2

#Secure Boot loader setting

sed -i 's/default=[^ ]*/default=0/g' /etc/grub.conf

sed -i 's/timeout=[^ ]*/timeout=15/g' /etc/grub.conf

awk '/default=[^ ]*/{print $0}' /etc/grub.conf

awk '/timeout=[^ ]*/{print $0}' /etc/grub.conf

#Turn off non-essential services

chkconfig apmd off

chkconfig atd off

chkconfig autofs off

chkconfig chargen off

chkconfig chargen-udp off

chkconfig cups off

chkconfig cups-lpd off

chkconfig daytime-udp off

chkconfig echo off

chkconfig echo-udp off

chkconfig eklogin off

chkconfig gssftp off

chkconfig httpd off

chkconfig irda off

chkconfig irqbalance off

chkconfig isdn off

chkconfig klogin off

chkconfig krb-telnet off

chkconfig kshell off

chkconfig mdmonitor off

chkconfig mdmpd off

chkconfig microcode_ctl off

chkconfig named off

chkconfig netdump off

chkconfig netfs off

chkconfig nfs off

chkconfig nfslock off

chkconfig pcmcia off

chkconfig portmap off

chkconfig psacct off

chkconfig random off

chkconfig rawdevices off

chkconfig rhnsd off

chkconfig rsync off

chkconfig saslauthd off

chkconfig sendmail off

chkconfig smartd off

chkconfig smb off

chkconfig snmpd off

chkconfig snmptrapd off

chkconfig swat off

chkconfig time off

chkconfig time-udp off

chkconfig vncserver off

chkconfig winbind off

chkconfig --list | grep '3:off'

sleep 2

#Remove OS information from the login warning banner

#cat /dev/null > /etc/issue.net

#cat /dev/null > /etc/motd

#Set SELINUX Policy

sed -i 's/SELINUXTYPE=[^ ]*/SELINUXTYPE=targeted/g' /etc/selinux/config

#Remove nonessential user accounts from the system

userdel lp

userdel sync

userdel shutdown

userdel uucp

userdel ftp    # disabling FTP

userdel games

userdel nscd

userdel gopher

userdel operator

userdel nobody



#Require the root password for single user mode

sed -i '26i\~~:S:wait:/sbin/sulogin' /etc/inittab

#configure strong permissions on TFTP

chmod 754 /usr/bin/tftpboot

#configure strong permission on temporary folders

cd /

chmod 1777 tmp

chmod 1777 utmp

chmod 1777 utmpx

#configure rsyslog

yum install rsyslog*

chkconfig rsyslog on

service rsyslog start

#Configure strong permissions on log files

chmod 600 /var/log/messages

chmod 600 /var/log/secure

chmod 600 /var/log/spooler

chmod 600 /var/log/maillog

chmod 600 /var/log/cron

chmod 600 /var/log/boot.log

#Configure Audit Log Storage Size

sed -i 's/max_log_file = [^ ]*/max_log_file = 100/g' /etc/audit/auditd.conf

awk '/max_log_file = [^ ]*/{print $0}' /etc/audit/auditd.conf

#Configure a strong system umask

sed -i 's/umask [^ ]*/umask 022/g' /etc/bashrc

#Keep all auditing information

sed -i 's/max_log_file_action = [^ ]*/max_log_file_action = keep_logs/g' /etc/audit/auditd.conf

awk '/max_log_file_action = [^ ]*/{print $0}' /etc/audit/auditd.conf

#Login and logout events should be audited

echo "-w /var/log/faillog -p wa -k logins" >> /etc/audit/audit.rules

echo "-w /var/log/lastlog -p wa -k logins" >> /etc/audit/audit.rules

echo "-w /var/log/tallylog -p wa -k logins" >> /etc/audit/audit.rules

pkill -HUP -P 1 auditd

awk '/ -p wa -k logins/ {print $0}' /etc/audit/audit.rules

sleep 2

#Enable login banner in the system

#echo "Access to this system is restricted to authorized users only. If you are not an authorized user, please exit now." >> /etc/issue.net

#echo "Access to this system is restricted to authorized users only. If you are not an authorized user, please exit now." >> /etc/motd

#Permission on /etc/passwd

/bin/chmod 644 /etc/passwd

ls -ld /etc/passwd

#permission on /etc/shadow

/bin/chmod 000 /etc/shadow

ls -ld /etc/shadow

#permission on /etc/gshadow

/bin/chmod 000 /etc/gshadow

ls -ld /etc/gshadow

#permission on /etc/group

/bin/chmod 644 /etc/group

ls -ld /etc/group

#verify user/group Ownership on /etc/passwd

/bin/chown root:root /etc/passwd

ls -lrt /etc/passwd

#verify user/group Ownership on /etc/shadow

/bin/chown root:root /etc/shadow

ls -lrt /etc/shadow

#verify user/group Ownership on /etc/gshadow

/bin/chown root:root /etc/gshadow

ls -lrt /etc/gshadow

#verify user/group Ownership on /etc/group

chown root:root /etc/group

ls -lrt /etc/group

sleep 2

SSH vulnerability?



How to set the SSH protocol, disable root login, disallow user environment options, and set ciphers, client-alive intervals, etc.?


#Set SSH Protocol to 2
sed -i 's/Protocol [^ ]*/Protocol 2/g' /etc/ssh/sshd_config
awk  '/Protocol 2/{print $0}' /etc/ssh/sshd_config
#Disable SSH  root login
sed -i 's/#PermitRootLogin [^ ]*/PermitRootLogin no/g' /etc/ssh/sshd_config
awk  '/PermitRootLogin[^ ]*/{print $0}' /etc/ssh/sshd_config
#Set SSH PermitEmptyPassword to No
sed -i 's/#PermitEmptyPasswords no/PermitEmptyPasswords no/g' /etc/ssh/sshd_config
awk  '/PermitEmptyPasswords[^ ]*/{print $0}' /etc/ssh/sshd_config
#Do NOT Allow Users to Set Environment Options
sed -i 's/#PermitUserEnvironment no/PermitUserEnvironment no/g' /etc/ssh/sshd_config
awk  '/PermitUserEnvironment[^ ]*/{print $0}' /etc/ssh/sshd_config
#Use only approved ciphers in counter (CTR) mode
sed -i '140i\Ciphers aes128-ctr,aes192-ctr,aes256-ctr' /etc/ssh/sshd_config
awk  '/Ciphers aes128-ctr,aes192-ctr,aes256-ctr/{print $0}' /etc/ssh/sshd_config
sleep 5
#Set idle timeout interval for user login
sed -i 's/#ClientAliveInterval [^ ]*/ClientAliveInterval 300/g' /etc/ssh/sshd_config
sed -i 's/#ClientAliveCountMax [^ ]*/ClientAliveCountMax 0/g' /etc/ssh/sshd_config
awk  '/ClientAliveInterval[^ ]*/{print $0}' /etc/ssh/sshd_config
awk  '/ClientAliveCountMax 0[^ ]*/{print $0}' /etc/ssh/sshd_config
sleep 3
service sshd restart

How to restrict job automation (cron + at) in RHEL?

#Set user/group owner and permissions on crontab
chown root:root /etc/crontab
chmod og-rwx /etc/crontab
ls -ld /etc/crontab
sleep 1
#Restrict at/cron to authorized users
/bin/rm /etc/cron.deny
touch /etc/cron.allow
/bin/rm /etc/at.deny
chmod og-rwx /etc/cron.allow
chmod og-rwx /etc/at.allow
chown root:root /etc/cron.allow
chown root:root /etc/at.allow
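With the deny files removed, only users listed in the allow files may schedule jobs. For example, permitting a specific user (hypothetical username) to use cron:
# echo "oracle" >> /etc/cron.allow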

How to set password policy in RHEL ?


To set password expiration days:
#vi /etc/login.defs
PASS_MAX_DAYS 42
PASS_MIN_DAYS 1
PASS_MIN_LEN 8
PASS_WARN_AGE 7
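Note that login.defs values apply only to accounts created afterwards; for an existing user, apply the same policy with chage (username is a placeholder):
#chage -M 42 -m 1 -W 7 username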



To implement via script:
#vi passwdage.sh
#Set Password Expiration Days
sed -i 's/PASS_MAX_DAYS [^ ]*/PASS_MAX_DAYS 42/g' /etc/login.defs
sed -i 's/PASS_MIN_DAYS [^ ]*/PASS_MIN_DAYS 1/g' /etc/login.defs
sed -i 's/PASS_MIN_LEN [^ ]*/PASS_MIN_LEN 8/g' /etc/login.defs
sed -i 's/PASS_WARN_AGE [^ ]*/PASS_WARN_AGE 7/g' /etc/login.defs
awk '/PASS_MAX_DAYS [^ ]*/ { print $0}' /etc/login.defs
awk '/PASS_MIN_DAYS [^ ]*/ { print $0}' /etc/login.defs
awk '/PASS_MIN_LEN [^ ]*/ { print $0}' /etc/login.defs
awk '/PASS_WARN_AGE [^ ]*/ { print $0}' /etc/login.defs
sleep 5

How To clear RAM cache in Linux ?

# We can clear the RAM cache in Linux using the following simple steps
#free -m
#sync; echo 3 > /proc/sys/vm/drop_caches     # sync first to flush dirty pages
#free -m
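For reference, the value written selects what gets dropped (standard kernel semantics):
#echo 1 > /proc/sys/vm/drop_caches    # page cache only
#echo 2 > /proc/sys/vm/drop_caches    # dentries and inodes
#echo 3 > /proc/sys/vm/drop_caches    # page cache, dentries and inodes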