Posts from 2015

P2V Migration in VMware

Question:
---------------
I need to migrate a Linux box to ESXi with no downtime, and with the VMware tools I know of that does not seem possible: you can't install vCenter Converter on a Linux OS and do a live migration with synchronization. Does anyone know a way to do this?

Solution:
---------------
We can migrate a Linux physical machine to an ESXi host using the vSphere Converter tool. It's true that the Converter can't be installed on Linux, but we can install it on a Windows machine and then migrate the physical machine to ESXi through that Windows VM. I did this task recently; please follow the steps below.

1. Install a Windows machine on the ESXi host where you want to migrate the physical machine.
2. Install the vSphere Converter tool on that Windows machine.
3. Choose the source machine as the Linux physical machine instead of "localhost". If you select localhost then that Win...

KVM & QEMU

QEMU is a powerful emulator, which means that it can emulate a variety of processor types. Xen uses QEMU for HVM guests, more specifically for the HVM guest's device model. The Xen-specific QEMU is called qemu-dm (short for QEMU device model). QEMU uses emulation; KVM uses processor extensions (e.g. Intel VT) for virtualization. Both Xen and KVM merge their functionality into upstream QEMU, so that upstream QEMU can be used directly to accomplish Xen device-model emulation, etc. Xen is unique in that it has paravirtualized guests that don't require hardware virtualization. Both Xen and KVM have paravirtualized device drivers that can run on top of HVM guests. The QEMU hypervisor is very similar to the KVM hypervisor: both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has ...
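
A quick check, not from the original post: you can tell whether a host has the processor extensions KVM needs by looking for the vmx (Intel VT) or svm (AMD-V) CPU flags.

# Count CPU cores advertising hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V; 0 means KVM cannot be used)
egrep -c '(vmx|svm)' /proc/cpuinfo
# Check whether the KVM kernel modules are actually loaded
lsmod | grep kvm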

Packet Drop issue

Packet drops can happen at two layers: the NIC level or the network stack level.

1. NIC level: Check the 'ifconfig' output.

RX packets:297126179 errors:0 dropped:3981 overruns:0 frame:0
TX packets:233589229 errors:0 dropped:0 overruns:0 carrier:0

This means packets are dropped at the NIC level. These are most likely caused by exhaustion of the RX ring buffer. Increase the size of the Ethernet device's ring buffer. First inspect the output of "ethtool -g eth0". If the "Pre-set maximums" are higher than what's listed in the current hardware settings, it's recommended to increase this number. As an example:

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             1020
RX Mini:        0
RX Jumbo:       16320
TX:             255
Current hardware settings:
RX:             255
RX Mini:        0
RX Jumbo:       0
TX:             255

To increase the RX ring buffer to its pre-set maximum of 1020 you would run "ethtool -G eth0 rx 1020".

2. Network stack level: Need to tweak sysctl kernel...
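
The post is cut off at the sysctl part; as a hedged illustration, these are commonly tuned parameters for stack-level drops (the values are examples, not recommendations from the original):

# Allow more packets to queue between the driver and the network stack
sysctl -w net.core.netdev_max_backlog=250000
# Raise the maximum socket receive buffer size
sysctl -w net.core.rmem_max=16777216
# Per-CPU counters; the second column counts packets dropped
# because the backlog queue was full
cat /proc/net/softnet_stat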

Active vs Passive FTP

Active and passive are the two modes that FTP can run in. FTP uses two channels between client and server, the command channel and the data channel, which are actually separate TCP connections. The command channel is for commands and responses; the data channel is for actually transferring files. It's a nifty way of sending commands to the server without having to wait for the current data transfer to finish. In active mode, the client establishes the command channel (from client port X to server port 21) but the server establishes the data channel (from server port 20 to client port Y, where Y has been supplied by the client). In passive mode, the client establishes both channels. In that case, the server tells the client which port should be used for the data channel. Passive mode is generally used in situations where the FTP server is not able to establish the data channel. One of the major reasons for this is network firewalls. While you may have...
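
As an illustration that is not in the original post: curl speaks FTP in passive mode by default, and its -P/--ftp-port option switches to active mode, so the difference is easy to observe (the host name below is hypothetical):

# Passive mode (curl's default): the client opens both connections
curl -v ftp://ftp.example.com/file.txt -o file.txt
# Active mode: ask the server to connect back to us for the data channel;
# '-' means "use the same interface as the control connection"
curl -v -P - ftp://ftp.example.com/file.txt -o file.txt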

Create a simple webserver container with Docker

1. Start the container with a shell:

# docker run -i -t -p 8080:80 214a4932132a /bin/bash

This maps host port 8080 to container port 80, so that you can access the container's webserver contents through Apache. [It is similar to DNAT.]

2. Install the web server package inside the container.

At first, the containers have only internal IP addresses. To access the Internet, SNAT (Source Network Address Translation, also known as IP masquerading) should be configured on the Hardware Node.

[root@proxy ~]# iptables -t nat -A POSTROUTING -s 172.17.0.0/24 -o eth0 -j SNAT --to 31.x.x.x

172.17.0.0 - container IP range
31.x.x.x - host public IP

Now you can access the Internet from the container, so we can install the Apache server:

[root@c576532e21ab /]# yum install httpd -y

3. Create custom content in the document root:

[root@c576532e21ab /]# echo "Hello world" > /var/www/html/index.html

4. Test the web server. We can't start the httpd service through ...
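
The excerpt ends mid-sentence. For containers of this generation a common workaround, which is my assumption here rather than the post's conclusion, is to run Apache in the foreground because the container has no init system, and then test from the host:

# Inside the container: keep Apache in the foreground
# (-DFOREGROUND works with Apache 2.4; 'httpd -X' is a single-process fallback on 2.2)
[root@c576532e21ab /]# /usr/sbin/httpd -DFOREGROUND

# From the host: port 8080 was mapped to the container's port 80
[root@proxy ~]# curl http://localhost:8080
Hello world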

Install and Configure Docker on CentOS

We are going to install and configure Docker on CentOS.

1. Enable the EPEL repository.
2. yum install docker-io
3. Start the docker daemon:

[root@proxy ~]# service docker start
Starting cgconfig service:                                 [  OK  ]
Starting docker:                                           [  OK  ]
[root@proxy ~]# chkconfig docker on

4. Download any public container images and store them in a local repository:

[root@proxy ~]# docker pull ubuntu
ubuntu:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
f3c84ac3a053: Pull complete
a1a958a24818: Pull complete
9fec74352904: ...
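
Not part of the original excerpt, but a quick way to confirm the daemon and the pulled image work:

# List the locally stored images
[root@proxy ~]# docker images
# Start a throwaway container from the pulled image
[root@proxy ~]# docker run -i -t ubuntu /bin/bash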

How to install and configure a GlusterFS server on CentOS

We are going to set up a GlusterFS storage server with four nodes.

1. On all four nodes install the glusterfs and xfs packages:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum install glusterfs-server xfsprogs
chkconfig glusterd on
service glusterd start

2. On all of your cluster nodes create a new 2GB LV called brick1 in the vgsrv VG and format this LV with an XFS filesystem with 512-byte inodes:

lvcreate -L 2G -n brick1 vgsrv
mkfs.xfs -i size=512 /dev/vgsrv/brick1
mkdir /server1_export1
echo "/dev/vgsrv/brick1 /server1_export1 xfs defaults 0 1" >> /etc/fstab
mount -a

3. From server1, add the other three nodes as trusted peers:

[root@proxy ~]# gluster peer probe server2{ip}
[root@proxy ~]# gluster peer probe server3{ip}
[root@proxy ~]# gluster peer probe server4{ip}
[root@proxy ~]# gluster peer status
Number of Peers: 3

Hostname: server2
Uuid: a381532b-81a0-41c7-9adb-cd29f9f38158
State: Peer in Cluster (C...
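
The excerpt stops at the peer listing. A typical next step, sketched here with a hypothetical volume name vol0, is to create and start a volume from the bricks:

# Create a 2x2 distributed-replicated volume across the four bricks
gluster volume create vol0 replica 2 server1:/server1_export1 server2:/server1_export1 server3:/server1_export1 server4:/server1_export1
gluster volume start vol0
gluster volume info vol0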

Create an XFS filesystem on CentOS

Server: Test1

Step 1:
------------
Create two logical volumes of 2GB and 512MB respectively.

[root@sara2 ~]# lvcreate -n xfsdata -L 2G vgsrv        --> xfs filesystem
  Logical volume "xfsdata" created
[root@sara2 ~]# lvcreate -n xfsjournal -L 512M vgsrv   --> external journal setup
  Logical volume "xfsjournal" created
[root@sara2 ~]# yum install xfsprogs

Create a new xfs filesystem with an external journal:

[root@sara2 ~]# mkfs -t xfs -l logdev=/dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata
meta-data=/dev/vgsrv/xfsdata     isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                  ...
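
One point worth adding, since the excerpt is truncated: a filesystem built with an external journal must also be given the log device at mount time, otherwise the mount fails.

# Mount an XFS filesystem that uses an external log
[root@sara2 ~]# mount -o logdev=/dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata /mnt
# The same option belongs in /etc/fstab:
# /dev/vgsrv/xfsdata  /mnt  xfs  logdev=/dev/vgsrv/xfsjournal  0 0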

Reverting LVM changes

On test1 create a 2GB logical volume resizeme in the vgsrv volume group, create an ext4 file system on it, then mount the file system and create some test files.

[root@test1 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vgsrv   1   1   0 wz--n- 20.00g 16.00g
[root@test1 ~]# lvcreate -n resizeme -L2G vgsrv
  Logical volume "resizeme" created
[root@test1 ~]# mkfs -t ext4 /dev/vgsrv/resizeme
[root@test1 ~]# mount /dev/vgsrv/resizeme /mnt
[root@test1 ~]# touch /mnt/file{0..9}
[root@test1 ~]# umount /mnt

You want to resize your filesystem to 1GB, but you accidentally forget to shrink the file system first [resize2fs].

[root@test1 ~]# lvresize -L1G /dev/vgsrv/resizeme
  WARNING: Reducing active logical volume to 1.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce resizeme? [y/n]: y
  Size of logical volume vgsrv/resizeme changed from 2.00 GiB (512 extents) to 1.00 GiB (...
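
The excerpt cuts off here. Going by the title, the recovery presumably uses the metadata backups LVM keeps under /etc/lvm/archive; a sketch of that approach (the archive file name is illustrative):

# List the metadata archives LVM saved before each change
[root@test1 ~]# vgcfgrestore -l vgsrv
# Restore the archive taken just before the bad lvresize
[root@test1 ~]# vgcfgrestore -f /etc/lvm/archive/vgsrv_00005-1234567890.vg vgsrv
# Re-activate the LV so the kernel sees the restored size, then check the fs
[root@test1 ~]# lvchange -an /dev/vgsrv/resizeme && lvchange -ay /dev/vgsrv/resizeme
[root@test1 ~]# e2fsck -f /dev/vgsrv/resizeme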

Create and add a quorum disk to a cluster

Requirement: Two-node cluster with multipathed iSCSI storage.

test1: cluster server
test22: node1
test3: node2

On test22, create a new 128MB partition on the multipathed storage.

[root@test22 ~]# fdisk -cu /dev/mapper/1IET_00010001

On all nodes, run the commands below so the kernel and the multipath daemon know about the new partition:

# partprobe ; multipath -r

On test22, create the quorum disk on the new partition:

[root@test22 ~]# mkqdisk -c /dev/mapper/1IET_00010001p3 -l qdisk

On test3, see the quorum disk:

[root@test3 ~]# mkqdisk -L
mkqdisk v3.0.12.1
/dev/block/253:3:
/dev/disk/by-id/dm-name-1IET_00010001p3:
/dev/disk/by-id/dm-uuid-part3-mpath-1IET_00010001:
/dev/dm-3:
/dev/mapper/1IET_00010001p3:
    Magic:                eb7a62c2
    Label:                qdisk
    Created:   ...
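
The post is truncated; for the quorum disk to count, the cluster configuration has to reference it. A hedged sketch of the relevant /etc/cluster/cluster.conf lines (values illustrative, not from the post):

<!-- two node votes plus one quorum disk vote -->
<cman expected_votes="3"/>
<quorumd interval="1" tko="10" votes="1" label="qdisk"/>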

Bash Scripts

Need a script that finds whether the OS is .deb- or .rpm-based and installs a package:

#!/bin/bash
# Detect the distro family and install the SNMP daemon accordingly.
if [ -f /etc/redhat-release ]; then
    # RPM-based system (the daemon is shipped in the net-snmp package)
    yum -y update
    yum -y install net-snmp
elif [ -f /etc/debian_version ]; then
    # DEB-based system
    apt-get update
    apt-get -y install snmpd
fi

# Offer to start the service if it is not already running
service snmpd status
if [ $? -ne 0 ]; then
    echo "Would you like to start your service? Enter yes or no"
    read ans
    if [ "$ans" = "yes" ]; then
        service snmpd start
    elif [ "$ans" = "no" ]; then
        echo "Your service has not been started"
    fi
fi
echo "Please check your installation"

Content Sync:

#!/bin/bash
# Web contents
SRCDIR=/var/www/vhosts/grannliv.granngarden.se/
DESTDIR=root@10.224.44.126:/var/www/vhosts/grannliv.granngarden.se/
rsync -azvorgp --stats --progress --human-readable...

How to set up innodb_file_per_table on a running MySQL server with databases

1) mysqldump all databases into a SQL text file (call it SQLData.sql)
2) Drop all databases (except the mysql schema and the phpmyadmin/mysql databases)
3) Stop mysql
4) Add the following lines to your /etc/my.cnf:

[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Note: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.

5) Delete ibdata1, ib_logfile0 and ib_logfile1. At this point, there should only be the mysql schema in /var/lib/mysql.
6) Restart mysql. This will recreate ibdata1 at 10MB, and ib_logfile0 and ib_logfile1 at 1G each.
7) Reload SQLData.sql into mysql to restore your data.

ibdata1 will grow, but will only contain table metadata. Each InnoDB table will exist outside of ibdata1. Now suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:...
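
As a hedged illustration of steps 1 and 7 (the exact flags are my choice, not the post's):

# Step 1: dump everything, including stored routines and events
mysqldump -u root -p --all-databases --routines --events > SQLData.sql
# Step 7: reload the dump after the restart
mysql -u root -p < SQLData.sql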

Configure a service/resource group in a cluster [Luci]

To create a webserver service we will need the resources below:
1. A file system for the document root [previous blog].
2. A floating IP address where clients connect to the service [configured in Luci].
3. The httpd daemon listening for requests [configured in Luci].
Steps 2 and 3 are configured under the Resources and Service Groups tabs. The resource group [apache service] is now configured in the cluster on three nodes.
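
Behind the scenes Luci writes these settings to /etc/cluster/cluster.conf; the resulting service stanza looks roughly like this sketch (resource values are illustrative, not taken from the post):

<rm>
  <service autostart="1" name="webserver" recovery="relocate">
    <ip address="10.50.68.100" monitor_link="on"/>
    <fs device="/dev/vgsrv/storage" fstype="ext4" mountpoint="/var/www/html" name="docroot"/>
    <apache name="httpd" server_root="/etc/httpd" config_file="conf/httpd.conf"/>
  </service>
</rm>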

Configure a file system resource in a cluster [Luci]

Step 1: Server test1 has a 4GB target /dev/vgsrv/storage; we provide this share to the other three nodes [test2, test3, test4 - initiators]. On test1:

[root@test1 ~]# tgt-admin --show
Target 1: iqn.2008-09.com.example:first
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
          ...

Install a high availability cluster on CentOS

Server 1: test1    Type: luci - cluster management server
Server 2: test2    Type: ricci - cluster node
Server 3: test3    Type: ricci - cluster node

Step 1: Install luci on test1.

yum -y install luci
chkconfig luci on
service luci start

Luci is now available on port 8084 in a web browser.

Step 2: Install ricci on test2 and test3.

yum -y install ricci
passwd ricci
chkconfig ricci on
service ricci start

Step 3: Create a cluster with Conga through the luci web interface.
1. Add nodes to the cluster.
2. It will install all the cluster add-on packages on the node servers automatically.

Now test2 & test3 are added to the cluster.
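
Not in the original post, but once the nodes have joined, membership can be verified from any node:

# Show cluster membership and quorum state
[root@test2 ~]# cman_tool status
# Show nodes and the services managed by rgmanager
[root@test2 ~]# clustat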

Install and Configure Multipath on CentOS

Server 1: test1
213.x.x.x - eth0
10.50.68.15 - eth1

Server 2: test2
31.x.x.x - eth0
10.50.68.16 - eth1

Step 1: Create an LV on test1 and make this LV an iSCSI target.

Step 2: On test2, log in to the target you created on test1 using both IPs:

iscsiadm -m discovery -t sendtargets -p 213.x.x.x
iscsiadm -m node -T iqn.2008-09.com.example:first -p 213.x.x.x -l
iscsiadm -m discovery -t sendtargets -p 10.50.68.15
iscsiadm -m node -T iqn.2008-09.com.example:first -p 10.50.68.15 -l

[root@test ~]# lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda1 202:1    0  15G  0 disk /
xvda2 202:2    0   2G  0 disk [SWAP]
sda     8:0    0   4G  0 disk
sdb     8:16   0   4G  0 disk

The same target shows up as two disks on test2. On test1:

[root@test ~]# tgt-admin --show
Ta...
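
The excerpt is truncated before the multipath configuration itself; the usual next step on the initiator (a sketch, not from the post) is:

# Install device-mapper-multipath and enable a default configuration
yum -y install device-mapper-multipath
mpathconf --enable --with_multipathd y
# sda and sdb should now be grouped under a single multipath device
multipath -ll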

Concepts of VZ containers [with templates]

EZ templates are part and parcel of the Parallels Virtuozzo Containers philosophy because they provide a way of sharing resources among lots of Containers, thus enabling huge savings in terms of disk space and memory. For example, when you install and cache an OS template on the Hardware Node, Parallels Virtuozzo Containers creates the /vz/template/<name_of_the_OS> directory containing all the OS files that can be shared among Containers. When a Container based on this template is created, it contains only symlinks to the OS template files. These symlinks occupy very little space on the hard disk. They are situated in the so-called private area of the Container. The corresponding directory is /vz/private/<CT_ID>. The private area of a Container contains not only symlinks to the necessary template files, but also the copy-on-write area of the Container (the area for storing the information about those changes that the Container makes to the template files...
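
As a hedged illustration (not from the original text) of how a template-backed Container is created and how little private space it initially occupies:

# Create a Container from a cached OS template (ID and template name illustrative)
vzctl create 101 --ostemplate centos-6-x86_64
# The private area starts out as symlinks plus copy-on-write metadata
du -sh /vz/private/101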

Mysqli is not working properly

On a Plesk server, I have installed PHP 5.6 as a secondary PHP. When I try to use the "mysqli" dbtype in a Joomla site, I get an error such as "database connection error".

Cause:
-----------
It seems mysqli uses the mysqlnd API in the secondary PHP:

mysqli
Client API library version => mysqlnd 5.0.11-dev - 20120503 - $Id: 3c688b6bbc30d36af3ac34fdd4b7b5b787fe5555 $
mysqli.allow_local_infile => On => On
mysqli.allow_persistent => On => On
mysqli.default_host => no value => no value
mysqli.default_port => 3306 => 3306
mysqli.default_pw => no value => no value
mysqli.default_socket => no value => no value
mysqli.default_user => no value => no value
mysqli.max_links => Unlimited => Unlimited
mysqli.max_persistent => Unlimited => Unlimited
mysqli.reconnect => Off => Off
mysqli.rollback_on_cached_plink => Off => Off

By default mysqli uses the mysql API in the main PHP:

mysqli
MYSQLI_SOCKET => /var/run/mysq...
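
A quick check, my addition, to see which client library a given PHP binary's mysqli was built against (the Plesk binary path is illustrative):

# Secondary PHP shipped by Plesk
/opt/plesk/php/5.6/bin/php -i | grep 'Client API library version'
# Main PHP for comparison
php -i | grep 'Client API library version'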

LVM Snapshot Process

Reference: http://www.clevernetsystems.com/tag/cow/

This article explains how LVM snapshots work and what the advantage of read/write snapshots is. We will go through a simple example to illustrate the explanation. First, create a dummy device that we will initialize as a PV:

# dd if=/dev/zero of=dummydevice bs=8192 count=131072
# losetup /dev/loop0 dummydevice
# pvcreate /dev/loop0
# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/loop0      lvm2 a--  1.00g 1.00g

We now have a 1GB LVM2 Physical Volume.

# vgcreate vg0 /dev/loop0
# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  vg0   1   0   0 wz--n- 1020.00m 1020.00m
# lvcreate -n lv0 -l 100 vg0
# lvs
  LV  VG  Attr      LSize   Pool Origin Data% Move Log Copy% Convert
  lv0 vg0 -wi-a---- 400.00m

We now have a Volume Group vg0 and a 400MB Logical Volume lv0. Let's see what our device mapper looks like:

# dmsetup table
vg0-lv0: 0 819200 linear 7:0 2048

We have a single device vg0-lv0, as expected. Let...
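
The excerpt ends here. To connect it to the article's topic, here is a hedged sketch of the next step: creating a snapshot, which replaces the linear mapping with snapshot targets (the name snap0 is illustrative).

# lvcreate -s -n snap0 -L 100M /dev/vg0/lv0
# dmsetup table

After this, dmsetup table shows vg0-lv0 as a snapshot-origin target and vg0-snap0 as a snapshot target backed by a copy-on-write device; a write to either volume first copies the original blocks into the COW area, which is what makes LVM snapshots read/write.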