Posts

Showing posts from April, 2015

Active vs Passive FTP

Active and passive are the two modes in which FTP can run. FTP uses two channels between client and server, the command channel and the data channel, which are actually separate TCP connections. The command channel carries commands and responses; the data channel carries the actual file transfers. It's a nifty way of sending commands to the server without having to wait for the current data transfer to finish. In active mode, the client establishes the command channel (from client port X to server port 21), but the server establishes the data channel (from server port 20 to client port Y, where Y has been supplied by the client). In passive mode, the client establishes both channels; in that case, the server tells the client which port should be used for the data channel. Passive mode is generally used in situations where the FTP server is not able to establish the data channel, and one of the major reasons for this is network firewalls. While you may have...
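As a quick client-side illustration (a sketch assuming curl is installed; the host and credentials are placeholders), the two modes can be selected explicitly:

# Passive mode (curl's default): the client opens both the command
# and the data connection, so it works through most client-side firewalls.
curl --ftp-pasv -u user:pass ftp://ftp.example.com/file.txt -o file.txt

# Active mode: the client sends PORT and the server connects back to the
# client for the data channel ("-" means use the control connection's address).
curl --ftp-port - -u user:pass ftp://ftp.example.com/file.txt -o file.txt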

Create a simple webserver container on Docker

1. Start the container with a shell:

# docker run -i -t -p 8080:80 214a4932132a /bin/bash

This maps host port 8080 to container port 80, so that you can reach the container's web server contents through Apache. [It is similar to DNAT.]

2. Install the web server package inside the container. At first, containers have only internal IP addresses. To access the Internet, SNAT (Source Network Address Translation, also known as IP masquerading) should be configured on the hardware node:

[root@proxy ~]# iptables -t nat -A POSTROUTING -s 172.17.0.0/24 -o eth0 -j SNAT --to 31.x.x.x

172.17.0.0 - container IP range
31.x.x.x - host public IP

Now you can access the Internet from the container, so we can install the Apache server:

[root@c576532e21ab /]# yum install httpd -y

3. Create custom content in the document root:

[root@c576532e21ab /]# echo "Hello world" > /var/www/html/index.html

4. Test the web server. We can't start the httpd service through ...
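The preview cuts off here, but since SysV init scripts generally don't work inside a plain container, one common workaround (a sketch, not necessarily how the post concludes) is to run httpd in the foreground and test the mapping from the host:

# Inside the container: run Apache in the foreground instead of 'service httpd start'
/usr/sbin/httpd -DFOREGROUND

# From the host: container port 80 was published on host port 8080
curl http://localhost:8080
# Hello world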

Install and Configure Docker on CentOS

We are going to install and configure Docker on CentOS.

1. Enable the EPEL repository.

2. Install the package:

yum install docker-io

3. Start the docker daemon:

[root@proxy ~]# service docker start
Starting cgconfig service:                                 [  OK  ]
Starting docker:                                           [  OK  ]
[root@proxy ~]# chkconfig docker on

4. Download any public container images and store them in a local repository:

[root@proxy ~]# docker pull ubuntu
ubuntu:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
f3c84ac3a053: Pull complete
a1a958a24818: Pull complete
9fec74352904: ...
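To confirm the installation works end to end, a quick smoke test (assuming the ubuntu image pulled above) might look like this:

# Show daemon details and locally stored images
docker info
docker images

# Start a throwaway interactive shell from the pulled image
docker run -i -t ubuntu /bin/bash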

How to install and configure a GlusterFS server on CentOS

We are going to set up a GlusterFS storage server with four nodes.

1. On all four nodes, install the glusterfs and xfs packages:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum install glusterfs-server xfsprogs
chkconfig glusterd on
service glusterd start

2. On all of your cluster nodes, create a new 2GB LV called brick1 in the vgsrv VG and format it with an XFS filesystem with 512-byte inodes:

lvcreate -L 2G -n brick1 vgsrv
mkfs.xfs -i size=512 /dev/vgsrv/brick1
mkdir /server1_export1
echo "/dev/vgsrv/brick1 /server1_export1 xfs defaults 0 1" >> /etc/fstab
mount -a

3. From server1, add the other three nodes as trusted peers:

[root@proxy ~]# gluster peer probe server2{ip}
[root@proxy ~]# gluster peer probe server3{ip}
[root@proxy ~]# gluster peer probe server4{ip}
[root@proxy ~]# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: a381532b-81a0-41c7-9adb-cd29f9f38158
State: Peer in Cluster (C...
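The usual next step (not visible in the truncated preview) is to create and start a volume from the four bricks; a sketch, assuming each node exports the mount point created above and the volume name vol1:

# Create a 2x2 distributed-replicated volume from the four bricks
# (volume name, hostnames and brick paths are assumptions)
gluster volume create vol1 replica 2 \
    server1:/server1_export1 server2:/server1_export1 \
    server3:/server1_export1 server4:/server1_export1
gluster volume start vol1
gluster volume info vol1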

Create an XFS filesystem on CentOS

Server: test1

Step 1: Create two logical volumes, 2GB and 512MB respectively:

[root@sara2 ~]# lvcreate -n xfsdata -L 2G vgsrv        --> xfs filesystem
  Logical volume "xfsdata" created
[root@sara2 ~]# lvcreate -n xfsjournal -L 512M vgsrv   --> external journal setup
  Logical volume "xfsjournal" created
[root@sara2 ~]# yum install xfsprogs

Create a new XFS filesystem with an external journal:

[root@sara2 ~]# mkfs -t xfs -l logdev=/dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata
meta-data=/dev/vgsrv/xfsdata     isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                  ...
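One detail worth noting: a filesystem built with an external journal must also be given the log device at mount time, for example:

# The external log device has to be named on every mount
mount -o logdev=/dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata /mnt

# And the matching /etc/fstab entry (the mount point is an assumption):
# /dev/vgsrv/xfsdata  /mnt  xfs  logdev=/dev/vgsrv/xfsjournal  0 0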

Reverting LVM changes

On test1, create a 2GB logical volume called resizeme in the vgsrv volume group, create an ext4 filesystem on it, then mount the filesystem and create some test files:

[root@test1 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vgsrv   1   1   0 wz--n- 20.00g 16.00g
[root@test1 ~]# lvcreate -n resizeme -L2G vgsrv
  Logical volume "resizeme" created
[root@test1 ~]# mkfs -t ext4 /dev/vgsrv/resizeme
[root@test1 ~]# mount /dev/vgsrv/resizeme /mnt
[root@test1 ~]# touch /mnt/file{0..9}
[root@test1 ~]# umount /mnt

You want to resize the filesystem to 1GB, but you accidentally forget to shrink the filesystem first [resize2fs]:

[root@test1 ~]# lvresize -L1G /dev/vgsrv/resizeme
  WARNING: Reducing active logical volume to 1.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce resizeme? [y/n]: y
  Size of logical volume vgsrv/resizeme changed from 2.00 GiB (512 extents) to 1.00 GiB (...
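Because LVM archives the previous metadata under /etc/lvm/archive before every change, the bad lvresize can typically be rolled back with vgcfgrestore; a sketch (the archive file name below is hypothetical and will differ on your system):

# List the archived metadata versions for the volume group
vgcfgrestore --list vgsrv

# Deactivate the LV, restore the pre-resize metadata, reactivate
lvchange -an /dev/vgsrv/resizeme
vgcfgrestore -f /etc/lvm/archive/vgsrv_00001-1234567890.vg vgsrv   # hypothetical file name
lvchange -ay /dev/vgsrv/resizeme

# The filesystem should check clean again at its original 2GB size
e2fsck -f /dev/vgsrv/resizeme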

Create and add a quorum disk on a cluster

Requirement: a two-node cluster with multipathed iSCSI storage.

test1: cluster server
test22: node1
test3: node2

On test22, create a new 128MB partition on the multipathed storage:

[root@test22 ~]# fdisk -cu /dev/mapper/1IET_00010001

On all nodes, run the commands below so the kernel and the multipath daemon know about the new partition:

# partprobe ; multipath -r

On test22, create the quorum disk on that new partition:

[root@test22 ~]# mkqdisk -c /dev/mapper/1IET_00010001p3 -l qdisk

On test3, see the quorum disk:

[root@test3 ~]# mkqdisk -L
mkqdisk v3.0.12.1
/dev/block/253:3:
/dev/disk/by-id/dm-name-1IET_00010001p3:
/dev/disk/by-id/dm-uuid-part3-mpath-1IET_00010001:
/dev/dm-3:
/dev/mapper/1IET_00010001p3:
    Magic:                eb7a62c2
    Label:                qdisk
    Created:   ...
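To put the quorum disk to use, it also has to be referenced from /etc/cluster/cluster.conf (or added through Luci); a minimal sketch, where the timing values and the heuristic's ping target are placeholders:

<quorumd interval="1" tko="10" votes="1" label="qdisk">
    <heuristic program="ping -c1 -w1 10.50.68.1" score="1" interval="2" tko="3"/>
</quorumd>

After cman is restarted on the nodes, clustat should list the quorum disk alongside the cluster members.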

Bash Scripts

Need a script that finds whether the OS is .deb- or .rpm-based and installs a package:

#!/bin/bash
# Detect the distro type by its release file and install
# the SNMP daemon with the matching package manager.
if [ -f /etc/redhat-release ]
then
    yum -y update
    yum install -y net-snmp      # provides the snmpd service on RPM systems
elif [ -f /etc/debian_version ]
then
    apt-get update
    apt-get install -y snmpd
fi

# Start the service if it is not already running
service snmpd status
if [ $? -ne 0 ]
then
    echo "Would you like to start your service? Enter yes or no"
    read ans
    if [ "$ans" = "yes" ]
    then
        service snmpd start
    elif [ "$ans" = "no" ]
    then
        echo "Your service has not been started"
    fi
fi
echo "Please check your installation"

Content sync:

#!/bin/bash
# Web contents: push the vhost's document root to the destination server
SRCDIR=/var/www/vhosts/grannliv.granngarden.se/
DESTDIR=root@10.224.44.126:/var/www/vhosts/grannliv.granngarden.se/
rsync -azvorgp --stats --progress --human-readable...

How to set up innodb_file_per_table on a running MySQL server with databases

1) mysqldump all databases into a SQL text file (call it SQLData.sql).

2) Drop all databases (except the mysql schema and the phpmyadmin/mysql databases).

3) Stop mysql.

4) Add the following lines to your /etc/my.cnf:

[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Note: whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.

5) Delete ibdata1, ib_logfile0 and ib_logfile1. At this point, there should only be the mysql schema in /var/lib/mysql.

6) Restart mysql. This will recreate ibdata1 at 10MB, and ib_logfile0 and ib_logfile1 at 1G each.

7) Reload SQLData.sql into mysql to restore your data. ibdata1 will grow, but will only contain table metadata; each InnoDB table will exist outside of ibdata1. Now suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:...
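For steps 1 and 7, the dump, reload and verification might look like this (credentials are placeholders):

# Step 1: dump every database, including routines and triggers, to one file
mysqldump -u root -p --all-databases --routines --triggers > SQLData.sql

# Step 7: reload the dump
mysql -u root -p < SQLData.sql

# Confirm the setting took effect after the restart
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"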

Configure a service/resource group in a cluster [Luci]

To create a webserver service we will need the resources below:

1. A filesystem for the document root [previous blog].
2. A floating IP address where clients connect to the service [configured in Luci].
3. An httpd daemon listening for requests [configured in Luci].

Steps 2 and 3 are configured under the Resources and Service Groups tabs. The resource group [apache service] is now configured in the cluster on three nodes.
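For reference, the resource group that Luci writes to /etc/cluster/cluster.conf typically looks something like the sketch below; all names, device paths and the address are placeholders:

<rm>
    <service autostart="1" name="webserver" recovery="relocate">
        <fs device="/dev/vgsrv/web" fstype="ext4" mountpoint="/var/www/html" name="docroot"/>
        <ip address="10.50.68.100" monitor_link="on"/>
        <apache config_file="conf/httpd.conf" name="httpd" server_root="/etc/httpd"/>
    </service>
</rm>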

Configure a filesystem resource in a cluster [Luci]

Step 1: Server test1 has a 4GB target /dev/vgsrv/storage; we present this share to the other three nodes [test2, test3, test4 - the initiators]. On test1:

[root@test1 ~]# tgt-admin --show
Target 1: iqn.2008-09.com.example:first
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
          ...
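On each initiator node (test2, test3, test4) the share would then be discovered and logged in to along these lines (the portal IP is a placeholder):

# Discover the targets exported by test1 and log in
iscsiadm -m discovery -t sendtargets -p <test1-ip>
iscsiadm -m node -T iqn.2008-09.com.example:first -p <test1-ip> -l

# The new shared disk should now show up
lsblk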

Install a High Availability cluster on CentOS

Server 1: test1 - luci - cluster management server
Server 2: test2 - ricci - cluster node
Server 3: test3 - ricci - cluster node

Step 1: Install luci on test1:

yum -y install luci
chkconfig luci on
service luci start

Luci is now available on port 8084 in a web browser.

Step 2: Install ricci on test2 and test3:

yum -y install ricci
passwd ricci
chkconfig ricci on
service ricci start

Step 3: Create a cluster with Conga through the luci web interface.

1. Add the nodes to the cluster.
2. It will install all the cluster add-on packages on the node servers automatically.

test2 and test3 are now added to the cluster.
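Once the nodes have joined, membership can be checked from any node with the standard RHEL/CentOS cluster commands:

# Show cluster membership and service status
clustat

# Validate the generated /etc/cluster/cluster.conf
ccs_config_validate

# Low-level quorum and membership details
cman_tool status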

Install and Configure Multipath on CentOS

Server 1: test1 - 213.x.x.x (eth0), 10.50.68.15 (eth1)
Server 2: test2 - 31.x.x.x (eth0), 10.50.68.16 (eth1)

Step 1: Create an LV on test1 and export it as an iSCSI target.

Step 2: On test2, log in to the target you created on test1 using both IPs:

iscsiadm -m discovery -t sendtargets -p 213.x.x.x
iscsiadm -m node -T iqn.2008-09.com.example:first -p 213.x.x.x -l
iscsiadm -m discovery -t sendtargets -p 10.50.68.15
iscsiadm -m node -T iqn.2008-09.com.example:first -p 10.50.68.15 -l

[root@test ~]# lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda1 202:1    0  15G  0 disk /
xvda2 202:2    0   2G  0 disk [SWAP]
sda     8:0    0   4G  0 disk
sdb     8:16   0   4G  0 disk

The same target shows up as two disks on test2. On test1:

[root@test ~]# tgt-admin --show
Ta...
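The preview cuts off before the multipath setup itself; on CentOS the standard sequence is roughly the following (run on test2):

# Install and enable the multipath daemon with the default built-in config
yum -y install device-mapper-multipath
mpathconf --enable --with_multipathd y

# sda and sdb should now be grouped under a single multipath device
multipath -ll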