
Openstack installation

Single node [tips]:
http://www.doublecloud.org/2013/05/installing-openstack-on-centos-in-private-network/
http://www.doublecloud.org/2013/06/installing-openstack-with-multiple-nodes-tips-and-tricks/

Instance IP ranges:
https://openstack.redhat.com/Floating_IP_range

Other references:
https://www.youtube.com/watch?v=DGf-ny25OAw
http://serverfault.com/questions/579789/openstack-packstack-basic-multi-node-network-setup
http://docs.openstack.org/juno/install-guide/install/yum/content/

Non-existing pages with "?p=" redirect to a 404 page in WordPress

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

RewriteCond %{QUERY_STRING} p=.*
RewriteRule .* - [R=404]

Next Documents

For more information about the High Availability Add-On and the Resilient Storage Add-On for Red Hat Enterprise Linux 6, refer to the following resources:

High Availability Add-On Overview — Provides a high-level overview of the Red Hat High Availability Add-On.
Cluster Administration — Provides information about installing, configuring and managing the High Availability Add-On.
DM Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux.
Load Balancer Administration — Provides information on configuring high-performance systems and services with the Load Balancer Add-On, a set of integrated software components that provide Linux Virtual Servers (LVS) for balancing IP load across a set of real servers.

Configure postfix as a backup[secondary] mail server

Tried various possibilities to configure the secondary MX server to queue mail properly when the primary MX fails. Today the secondary MX setup worked with the configuration below.

In main.cf:

relay_domains = $mydestination example.com
smtpd_recipient_restrictions = permit_mynetworks check_relay_domains

Also included transport_maps in main.cf, pointing the domain at its SMTP transport address:

transport_maps = hash:/etc/postfix/transport

In /etc/postfix/transport:

example.com smtp:mail.example.com

Concept:

If the primary mail server for the domain example.com goes down, all mail is queued on the secondary/backup mail server [Postfix]; once the primary server is back up, the queued mail is delivered from the backup to the primary server.
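The transport file is a hash lookup table, so it has to be compiled and Postfix reloaded before the change takes effect (standard Postfix commands, added here as a reminder rather than part of the original post):

postmap /etc/postfix/transport
postfix reload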

Cloudstack Overview

Refer: http://docs.cloudstack.apache.org/en/master/concepts.html#what-is-apache-cloudstack

What is Apache CloudStack?
Apache CloudStack is an open source Infrastructure-as-a-Service platform that manages and orchestrates pools of storage, network, and compute resources to build a public or private IaaS compute cloud. With CloudStack you can:
Set up an on-demand elastic cloud computing service.
Allow end-users to provision resources.

Cloud Infrastructure Overview
Resources within the cloud are managed as follows:
Regions: A collection of one or more geographically proximate zones managed by one or more management servers.
Zones: Typically, a zone is equivalent to a single datacenter. A zone consists of one or more pods and secondary storage.
Pods: A pod is usually a rack, or row of racks, that includes a layer-2 switch and one or more clusters.
Clusters: A cluster consists of one or more homogeneous hosts and primary storage.
Hosts: A single compute node within a cluster.

Virtualization Hardware drivers and devices

Emulated devices
Emulated devices, sometimes referred to as virtual devices, exist entirely in software. Emulated device drivers are a translation layer between the operating system running on the host (which manages the source device) and the operating systems running on the guests. The device-level instructions directed to and from the emulated device are intercepted and translated by the hypervisor. Any device of the same type as that being emulated and recognized by the Linux kernel is able to be used as the backing source device for the emulated drivers.

Para-virtualized devices
Para-virtualized devices require the installation of device drivers on the guest operating system, providing it with an interface to communicate with the hypervisor on the host machine. This interface is used to allow traditionally intensive tasks such as disk I/O to be performed outside of the virtualized environment. Lowering the overhead inherent in virtualization in this manner is intended to allow

Vlan Concepts

A VLAN (Virtual LAN) is an attribute that can be applied to network packets. Network packets can be "tagged" into a numbered VLAN. A VLAN is a security feature used to completely isolate network traffic at the switch level. VLANs are completely separate and mutually exclusive. The Red Hat Enterprise Virtualization Manager is VLAN aware and able to tag and redirect VLAN traffic; however, VLAN implementation requires a switch that supports VLANs. At the switch level, ports are assigned a VLAN designation. A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry the same VLAN tag. A VLAN can extend across multiple switches. VLAN-tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent to a single port, to be deciphered by software on the machine that receives the traffic.

Concepts of QCOW2 & RAW format

QCOW2 Formatted Virtual Machine Storage
QCOW2 is a storage format for virtual machine disk images. QCOW stands for QEMU copy on write. The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. Each logical block is mapped to its physical offset, which enables storage overcommitment and virtual machine snapshots, where each QCOW volume only represents changes made to an underlying disk image. The initial mapping points all logical blocks to the offsets in the backing file or volume. When a virtual machine writes data to a QCOW2 volume after a snapshot, the relevant block is read from the backing volume, modified with the new information and written into a new snapshot QCOW2 volume. Then the map is updated to point to the new place.

RAW
The RAW storage format has a performance advantage over QCOW2 in that no formatting is applied to virtual machine disk images stored in the RAW format. Virtual machine d

PHP Fatal error: Class 'JFactory' not found

I faced this issue on a Joomla site and searched the forums, but none of them, including the Joomla forums, gave a solution that worked for me. They simply suggest checking the Joomla version, PHP version compatibility and PHP extensions. Finally I fixed the issue.

Issue:
---------
[26-Sep-2014 18:11:29 Europe/Berlin] PHP Fatal error: Class 'JFactory' not found in /home/test/public_html/index.php on line 31

Cause & Solution:
-----------------------
In my case, the file that provides the JFactory class was missing:
/home/test/public_html/libraries/joomla/factory.php ---> core file for Joomla.
Simply restore that file under the proper path to fix the issue.

What is Big Data

PPT:- http://www.slideshare.net/slideshow/embed_code/37907628

Synchronization tools

1. Lsyncd - Live Syncing (Mirror) Daemon [directory level]
2. DRBD [block device level]
3. GlusterFS and BindFS use a FUSE filesystem to interject kernel/userspace filesystem events.

Reference:
-----------------
https://code.google.com/p/lsyncd/
http://configure.systems/glusterfs-and-why-you-should-consider-it/

GlusterFS would actually mitigate and simplify so much more of that. There would be no need for a load balancer, no need for a special script to promote and demote the content servers, nothing, not even to replicate the data between the servers! Basically, you can create two or more servers, install GlusterFS on each of the servers, have all of the nodes probe the master node, then you would create the volume. Easy. Once that's done, on your actual web nodes, where you have Apache, PHP, and again Varnish installed, you would install GlusterFS, add the correct line to /etc/fstab, and you're set. Within that line, you can even add a failover server.
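A minimal sketch of the kind of /etc/fstab line mentioned above, assuming a Gluster volume named gv0 served by a node gluster1, with gluster2 as the failover (all names are illustrative, not from the original post):

gluster1:/gv0  /var/www  glusterfs  defaults,_netdev,backupvolfile-server=gluster2  0 0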

Running VMware under Ovirt

Reference: https://xrsa.net/2014/08/25/running-vmware-esxi-under-ovirt/

Linux Booting Process

Refer: http://www.thegeekstuff.com/2011/02/linux-boot-process/

Storage pools

Storage pools and volumes are not required for the proper operation of guest virtual machines. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine, but some administrators will prefer to manage their own storage, and guest virtual machines will operate properly without any pools or volumes defined.

NFS storage pool
Suppose a storage administrator responsible for an NFS server creates a share to store guest virtual machines' data. The system administrator defines a pool on the host physical machine with the details of the share (nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirtd is started.
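For reference, the same NFS pool can be set up from the command line; a hedged sketch using standard virsh commands, with the example host and paths from the paragraph above:

virsh pool-define-as vm_data netfs --source-host nfs.example.com --source-path /path/to/share --target /vm_data
virsh pool-start vm_data        # mounts the share, like mount nfs.example.com:/path/to/share /vm_data
virsh pool-autostart vm_data    # mount it automatically whenever libvirtd starts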

Storage Pools

A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to guest virtual machines. The storage pool can be local or it can be shared over a network. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by guest virtual machines. Storage pools are divided into storage volumes either by the storage administrator or the system administrator, and the volumes are assigned to guest virtual machines as block devices. In short, storage volumes are to partitions what storage pools are to disks. Although the storage pool is a virtual container, it is limited by two factors: the maximum size allowed to it by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows:
virtio-blk = 2^63 bytes or 8 exabytes (using raw files or disk)
Ext4 = ~16 TB (using 4 KB block size)
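A hedged example of carving a volume out of a pool with standard virsh commands (the pool and volume names are illustrative):

virsh vol-create-as vm_data guest1.qcow2 20G --format qcow2   # create a 20 GB qcow2 volume in the pool
virsh vol-list vm_data                                        # list the volumes in the pool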

Memory overcommiting process

Guest virtual machines running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each guest virtual machine functions as a Linux process where the host physical machine's Linux kernel allocates memory only when requested. In addition, the host physical machine's memory manager can move the guest virtual machine's memory between its own physical memory and swap space. This is why overcommitting requires allotting sufficient swap space on the host physical machine to accommodate all guest virtual machines as well as enough memory for the host physical machine's processes. As a basic rule, the host physical machine's operating system requires a maximum of 4GB of memory along with a minimum of 4GB of swap space. This example demonstrates how to calculate swap space for overcommitting. Although it may appear to be simple in nature, the ramifications of overcommitting should not be ignored. Refer to Important before proceeding.
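A hedged worked example of the calculation (the numbers are illustrative, not from the original post): suppose the host has 32 GB of physical RAM, reserves 4 GB for its own processes, and you want to commit 64 GB of memory to guests in total. The swap needed to cover the overcommit is at least the total committed memory minus the physical RAM:

# 64 GB (guests) + 4 GB (host) - 32 GB (physical RAM)
echo $(( 64 + 4 - 32 ))    # = 36 GB of swap, minimum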

Live migration Backend process

In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine. During migration, KVM monitors the source for any changes in pages it has already transferred, and begins to transfer these changes when all of the initial pages have been transferred. KVM also estimates transfer speed during migration, so when the remaining amount of data to transfer will take a certain configurable period of time (10ms by default), KVM suspends the original guest virtual machine, transfers the remaining data, and resumes the same guest virtual machine on the destination host physical machine.
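A hedged example of triggering such a migration with libvirt-managed KVM (the guest and destination host names are illustrative; shared storage between the hosts is assumed):

virsh migrate --live guest1 qemu+ssh://dest-host/system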

Metadata

This is what the metadata is for. Your single file is split up into a bunch of small pieces and spread out over geographic locations, servers, and hard drives. These small pieces also contain more data: they contain parity information for the other pieces of data, or maybe even outright duplication. The metadata is used to locate every piece of data for that file across different geographic locations, data centres, servers and hard drives, as well as being used to restore any destroyed pieces after a hardware failure. It does this automatically. It will even fluidly move these pieces around to get a better spread, and will recreate a piece that is gone and store it on a new, good hard drive.

filesystem making process

mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2) --> default block size 4KB.
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks --> number of inodes & blocks created under that partition.
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
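The output above is what mke2fs prints while creating a filesystem; a hedged example of the kind of command that produces it (the device name is illustrative):

mkfs.ext4 /dev/sdb1      # mkfs.ext4 is a front-end to mke2fs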

Cloud Computing

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Today, it is more or less accepted that there are three cloud computing models depending on the type of service provided: IaaS, Infrastructure as a Service; PaaS, Platform as a Service; and SaaS, Software as a Service.

IaaS – Infrastructure as a Service
Infrastructure as a Service provides infrastructure capabilities like processing, storage, networking, security, and other resources which allow consumers to deploy their applications and data. This is the lowest level provided by the cloud computing paradigm. Some examples of IaaS are: Amazon S3/EC2, Microsoft Windows Azure, and VMware vCloud.

PaaS – Platform as a Service
Platform as a Service provides application infrastructure such as pro

Linux-Filesystem

A filesystem is the methods and data structures that an operating system uses to keep track of files on a disk or partition; that is, the way the files are organized on the disk. The word is also used to refer to a partition or disk that is used to store the files, or the type of the filesystem. Thus, one might say "I have two filesystems", meaning one has two partitions on which one stores files, or that one is using "the extended filesystem", meaning the type of the filesystem. Before a partition or disk can be used as a filesystem, it needs to be initialized, and the bookkeeping data structures need to be written to the disk. This process is called making a filesystem. Most UNIX filesystem types have a similar general structure, although the exact details vary quite a bit. The central concepts are superblock, inode, data block, directory block, and indirection block. The superblock contains information about the filesystem as a whole, such as its size (the exact information here depends on the filesystem).
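A hedged example of inspecting these structures on an ext filesystem (the device name is illustrative):

dumpe2fs -h /dev/sdb1     # print the superblock summary: size, block count, inode count, etc.
stat -f /                 # block and inode usage for a mounted filesystem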

How to Mount the VM Disk

Mount a VM disk that is built on an image file:
-------------------------------------------------------------------------
losetup /dev/loop0 /root/mailserver.img    - attach the image file to a loop device.
kpartx -a -v /dev/loop0                    - map the VM partitions.

Output:
------------
add map loop0p1 (253:4): 0 1024000 linear /dev/loop0 2048
add map loop0p2 (253:5): 0 7362560 linear /dev/loop0 1026048

Then mount the VM partitions:
mount /dev/mapper/loop0p1 /mnt/boot
mount /dev/mapper/loop0p2 /mnt/root    --> if it is not LVM.

If the VM root partition is an LVM, you will receive the error below:
mount: unknown filesystem type 'LVM2_member'
which means the VM's LVM partition has not been identified yet.

pvscan
vgscan && lvscan

Output:
-------------
inactive '/dev/VolGroup/lv_root' [17.54 GiB] inherit
inactive '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
ACTIVE '/dev/vg0/ubuntu' [10.00 GiB] inherit
ACTIVE
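The post is cut off here; a hedged sketch of the usual next steps (standard LVM/loop commands, volume group name taken from the output above):

vgchange -ay VolGroup                     # activate the guest's inactive volume group
mount /dev/VolGroup/lv_root /mnt/root     # now the LV can be mounted

# When finished, undo everything in reverse order:
umount /mnt/root
vgchange -an VolGroup
kpartx -d /dev/loop0
losetup -d /dev/loop0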

Storage Topics

https://www.brainshark.com/netapp/vu?pi=zHZzXcLDoz3ncNz0

Mapping physical storage to domU disk

Protocol / Description / Example:

phy:       Block devices, such as a physical disk, in domain 0. Example: phy:/dev/sdc
file:      Raw disk images accessed by using loopback. Example: file:/path/file
nbd:       Raw disk images accessed by using NBD. Example: nbd: ip_port
tap:aio:   Raw disk images accessed by using blktap. Similar to loopback but without using loop devices. Example: tap:aio:/path/file
tap:cdrom: CD reader block devices. Example: tap:cdrom:/dev/sr0
tap:vmdk:  VMware disk images accessed by using blktap. Example: tap:vmdk:/path/file
tap:qcow:  QEMU disk images accessed by using blktap. Example: tap:qcow:
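A hedged example of how these prefixes are used in a Xen domU configuration file (device paths and names are illustrative):

disk = [ 'phy:/dev/vg_grp/vm139_img,xvda,w',
         'file:/xenimages/test01/disk1.img,xvdb,w',
         'tap:aio:/xenimages/test01/disk2.img,xvdc,w' ]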

OpenSource Cloud Projects that you could FOCUS on !

*1. Hypervisor and Container*
*Docker.io* - an open-source engine for building, packing and running any application as a lightweight container, built upon the LXC container mechanism included in the Linux kernel. It was written by dotCloud and released in 2013.
*KVM* - a lightweight hypervisor that was accepted into the Linux kernel in February 2007. It was originally developed by Qumranet, a startup that was acquired by Red Hat in 2008.
*Xen Project* - a cross-platform software hypervisor that runs on platforms such as BSD, Linux and Solaris. Xen was originally written at the University of Cambridge by a team led by Ian Pratt and is now a Linux Foundation Collaborative Project.
*CoreOS* - a new Linux distribution that uses containers to help manage massive server deployments. Its beta version was released in May 2014.
*2. Infrastructure as a Service*
*Apache CloudStack* - an open source IaaS platform with Amazon Web Services (AWS) compatibility. CloudStack was originally

Xen HVM Migration

Migrate VM [vm139]:

Source node:
-------------
--- Logical volume ---
  LV Path                /dev/vg_grp/vm139_img
  LV Name                vm139_img
  VG Name                vg_grp
  LV Status              available
  # open                 3
  LV Size                10.00 GiB
  Current LE             320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:24

Source node:
-------------
Backup:
--------
lvcreate -n vm139_backup --size 15G /dev/vg_grp
mkfs.ext3 /dev/vg_grp/vm139_backup
mkdir -p /home/vm139_backup
mount /dev/vg_grp/vm139_backup /home/vm139_backup
xm shutdown vm139
dd if=/dev/vg_grp/vm139_img of=/home/vm139_backup/vm139_backup.img

Destination node:
-----------------
lvcreate -n vm139_backup --size 15G /dev/vg_xenhvm
lvcreate -n vm139_img --size 10G /dev/vg_xenhvm
mkfs.ext3 /dev/vg_xenhvm/vm139_backup
mkdir -p /home/vm139_backup
mount /dev/vg_xenhvm/vm139_backup /home/vm139_
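The post is truncated here; a hedged sketch of the remaining steps one would expect (the destination hostname is illustrative, and the backup image is assumed to be copied across before restoring it):

rsync -avP /home/vm139_backup/vm139_backup.img root@dest-node:/home/vm139_backup/
# on the destination node, write the image into the new LV and start the guest
dd if=/home/vm139_backup/vm139_backup.img of=/dev/vg_xenhvm/vm139_img bs=1M
xm create vm139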

Port Mirroring

Port mirroring is an approach to monitoring network traffic that involves forwarding a copy of each packet from one network switch port to another. Port mirroring enables the administrator to keep close track of switch performance by placing a protocol analyzer on the port that's receiving the mirrored data. An administrator configures port mirroring by assigning a port from which to copy all packets and another port to which those packets will be sent. A packet bound for -- or heading away from -- the first port will be forwarded to the second port as well. The administrator must then place a protocol analyzer on the port that's receiving the mirrored data to monitor each segment separately. Network administrators can use port mirroring as a diagnostic or debugging tool.
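On a Linux host you can get a similar effect in software; a hedged sketch with tc (interface names are illustrative, and this mirrors only ingress traffic on eth0 to eth1):

tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress mirror dev eth1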

Xen DomU booting process on HVM[pure]

On a pure HVM domU, the booting process starts with:

        Welcome to CentOS
Starting udev: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[  OK  ]
Setting hostname localhost.localdomain:  [  OK  ]
Setting up Logical Volume Management:   No volume groups found
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/xvda2
/dev/xvda2: clean, 18459/512064 files, 218237/2048000 blocks
[/sbin/fsck.ext4 (1) -- /boot] fsck.ext4 -a /dev/xvda1
/dev/xvda1: clean, 38/51200 files, 34256/204800 blocks
[  OK  ]
Remounting root filesystem in read-write mode:  [  OK  ]
Mounting local filesystems:  [  OK  ]
Enabling /etc/fstab swaps:  [  OK  ]
Entering non-interactive startup
ip6tables: Applying firewall rules: [  OK  ]
iptables: Applying firewall rules: [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  Determining IP information for eth0... done.
[  OK  ]
Starting auditd: [  OK

Xen DomU booting process [PV on HVM].

CentOS:
-------------
- Kernel loaded with the parameter ide0=noprobe [it will prevent disk & NIC emulation, so the Xen PV drivers are used].

Linux version 2.6.32-431.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Nov 22 03:15:09 UTC 2013
Command line: ro root=UUID=07a30ea1-f06a-44e5-a85a-6e346bb9e3af rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet console=ttyS0 ide0=noprobe
E.g.
Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
Booting paravirtualized kernel on Xen
NR_CPUS:4096 nr_cpumask_bits:15 nr_cpu_ids:15 nr_node_ids:1
Xen HVM callback vector for event delivery is enabled
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
io scheduler noop registered
io sch

SSL redirect issue in cpanel

If a domain uses a shared IP and someone accesses the domain over https, the request is sent to the default document root [htdocs] instead of the actual one, so redirect rules in .htaccess do not work either. You need to put the code below in index.html under htdocs.

[root@vm5 htdocs]# cat index.html
<html><head><script>
window.location.href = (window.location.protocol != "http:") ? "http:" + window.location.href.substring(window.location.protocol.length) : "/cgi-sys/defaultwebpage.cgi";
</script></head><body></body></html>

Database Backup script

#!/bin/bash
export savepath='/var/mysqlbackups'
export usr='mysql user'
export pwd=''

if [ ! -d $savepath ]; then
    mkdir -p $savepath
fi
chmod 700 $savepath
rm -rf $savepath/*

echo 'mySQL Backup Script'
echo 'Dumping individual tables..'
for a in `echo 'show databases' | mysql -u$usr -p$pwd | grep -v Database | grep -v information_schema`; do
  echo $a
  mkdir -p $savepath/$a
  chmod 700 $savepath/$a
  echo "Dumping database: $a"
  echo
  # list table names from the schema dump, protecting embedded spaces with '|'
  for i in `mysqldump --no-data -u $usr -p$pwd $a | grep 'CREATE TABLE' | sed -e 's/CREATE TABLE //' | sed -e 's/(.*//' | sed -e 's/\ /|/g' | sed -e 's/|$//'`
  do
    echo "i = $i"
    c=`echo $i | sed -e 's/|/\ /g' | sed -e 's/\`//g'`
    echo " * Dumping table: $c"
    mysqldump --compact --allow-keywords --add-drop-table --skip-dump-date -q -a -c -u$usr -p$pwd $a "$c" > "$savepath/$a/$c.sql"
  done
done
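To restore one of the per-table dumps produced by this script, something like the following would be used (the database and table names are illustrative):

mysql -u$usr -p$pwd exampledb < /var/mysqlbackups/exampledb/exampletable.sql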

Enable text console for HVM Domu

Is there any way to change the graphical console [VNC] to the non-graphical console (Xen console)?

For an HVM guest, you need to enable the serial port in the domU config file (example here: http://pastebin.com/fb6fe631), and set up the domU to use the serial port (ttyS0 on Linux) by modifying (for a Linux domU) /boot/grub/menu.lst, /etc/inittab, and /etc/securetty. If it's a PV guest, you need to set up the domU to use the Xen console (which is xvc0 on the current Xen version, hvc0 on a pv_ops kernel). It's similar to setting up a domU for a serial console; you just need to change ttyS0 to hvc0. An example of a domU setup that can use both xvc0 and the VNC console is here: http://pastebin.com/f6a5022bf

Reference 1:
----------------
Part 2, converting HVM guest to PV guest
#=======================================================================
First we need to install kernel-xen with correct initrd
- yum install kernel-xen
- edit /boot/grub/menu.lst so it looks like this
#====================================
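A hedged sketch of the pieces referred to above (a typical layout; the exact options vary by Xen version and distro):

# in the HVM domU config file:
serial = 'pty'

# in the guest's /boot/grub/menu.lst, append to the kernel line:
#   console=tty0 console=ttyS0,115200

# in the guest's /etc/inittab, add a getty on the serial port:
#   S0:2345:respawn:/sbin/agetty ttyS0 115200

# and allow root logins on it by adding a line "ttyS0" to /etc/securetty

Then connect from dom0 with "xm console <domU>".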

How to create VM in xen virtualization

The command below will create an 8GB file that will be used as an 8GB drive. The whole file will be written to disk in one go, so it may take a short while to complete.

dd if=/dev/zero of=/xenimages/test01/disk1.img oflag=direct bs=1M count=8192

Alternatively, you can use the command below to create the same size file as a sparse file. What this does is create the file, but only take up disk space as the file is used. In this case the file will only really take about 1MB of disk initially and grow as you use it.

dd if=/dev/zero of=/xenimages/test01/disk1.img oflag=direct bs=1M seek=8191 count=1

There are pros and cons of using sparse files. On one hand they only take as much disk as is actually used; on the other hand the file can become fragmented and you could run out of real disk if you overcommit space.

Next up we'll mount the install CD and export it over NFS so that Xen can use it as a network install.

mkdir /tmp/centos52
mount /dev/hda /tmp/centos52 -o loop,ro
Jus
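The post breaks off here; a hedged sketch of the NFS export it is about to describe (the export path matches the mount above, the network range is illustrative):

echo '/tmp/centos52 192.168.0.0/24(ro,no_root_squash)' >> /etc/exports
exportfs -ra
service nfs start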

Add some specific word to some files using For Loop

We need to add the line, or change it from unlimited to 500, so that MAX_EMAIL_PER_HOUR=500 is set in all package files under the folder.

Step 1
---------
cp -rpf /var/cpanel/packages /var/cpanel/packages_org

Step 2
--------
ls /var/cpanel/packages > test.txt
# But we need to remove files whose names contain spaces from the list test.txt and add the MAX_EMAIL_PER_HOUR=500 entry to those files manually, because the for loop does not handle spaces within a single filename.
E.g. johnhe3_WebFarm Beef (a single file) is treated by the for loop as two files: johnhe3_WebFarm and Beef. (A loop that handles spaces is sketched after Step 3 below.)

Step 3
--------
cd /var/cpanel/packages
for i in `cat test.txt`; do if [ -z "`grep -w 'MAX_EMAIL_PER_HOUR' "$i" | cut -d = -f1`" ]; then echo 'MAX_EMAIL_PER_HOUR=500' >> "$i"; else sed -i 's/MAX_EMAIL_PER_HOUR=unlimited/MAX_EMAIL_PER_HOUR=500/g' "$i"; fi; done
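A hedged alternative that copes with filenames containing spaces, avoiding the manual clean-up of test.txt described in Step 2 (same logic, standard find/while idiom):

cd /var/cpanel/packages
find . -maxdepth 1 -type f -print0 | while IFS= read -r -d '' f; do
    if ! grep -qw 'MAX_EMAIL_PER_HOUR' "$f"; then
        echo 'MAX_EMAIL_PER_HOUR=500' >> "$f"
    else
        sed -i 's/MAX_EMAIL_PER_HOUR=unlimited/MAX_EMAIL_PER_HOUR=500/g' "$f"
    fi
done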

Kernel Compile

Kernel compilation:
-------------------
cd /usr/src
wget ftp://ftp.kernel.org/pub/linux/kernel/v3.x/linux-3.13.6.tar.gz
tar xvf linux-3.13.6.tar.gz
cd linux-3.13.6
make menuconfig
--------------------------------
This is for Xen virtualization support:
Go into "Processor type and features" and statically enable all XEN features.
Go back to the main menu, enter the "Device Drivers" menu, then enter the "Block devices" menu and statically enable the 2 XEN options.
Go back to the "Device Drivers" menu and go down to "XEN driver support"; statically enable all features.
Go back to "Device Drivers", go into "Network device support" and statically enable the 2 XEN options at the bottom.
Exit out and save.
-----------------------------------
If you run "make menuconfig" directly and save, it creates a new .config file. But if you copy the config-`uname -r` file to /usr/src/linux-3.13.6 first and then run "make menuconfig", the new entries [e.g. Xen support] are included in the existing configuration.
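A hedged reminder of the usual build steps that follow once the configuration is saved (standard kernel build commands, not part of the truncated post):

make -j$(nproc)
make modules_install
make install          # installs the kernel and updates the bootloader entries on most distros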

Automount using cifs

Put an entry like the one below in /etc/fstab:

//192.168.1.1/backup /backup cifs defaults,noatime,username=root,password=PASSWD 0 0
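A hedged variant that keeps the password out of the world-readable /etc/fstab (the credentials file name is illustrative):

//192.168.1.1/backup /backup cifs credentials=/root/.smbcredentials,noatime 0 0

# /root/.smbcredentials (chmod 600):
# username=root
# password=PASSWD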

Multiple Rsync[parallel] during data transfer

Sometimes data transfer is very slow due to the network connection. In that case, we use parallel rsync to transfer the data to the other server more efficiently.

Script:
-----------
export SRCDIR="/home/.";                            --> source directory
export DESTDIR="root@server.example.com:/home/.";   --> destination directory
export THREADS="8";
rsync -lptgoDvzd $SRCDIR $DESTDIR;                  --> transfer the folders & sub-folders first.
cd $SRCDIR; find . -type f | xargs -n1 -P$THREADS -I% rsync -az % $DESTDIR;   --> rsync files in multiple processes.

info [rebuildhttpdconf] Unable to determine group for user

- Unfortunately, virtual host entries for some users are missing from the Apache configuration, and you see the issue below while rebuilding the Apache configuration:

info [rebuildhttpdconf] Unable to determine group for user

It means that the user's group entry is missing from /etc/group.

Fix: First check that user's entry in /etc/passwd, e.g.

grep xxxx /etc/passwd
xxxx:x:778:779::/home/xxxx:/bin/bash

779 is the GID for that user. You need to add the entry below to /etc/group:

xxxx:x:779:

- Once again rebuild the Apache configuration; it will create the vhost entry.
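A hedged equivalent using standard tools instead of editing /etc/group by hand (the GID and user name are from the example above; /scripts/rebuildhttpdconf is cPanel's rebuild script):

groupadd -g 779 xxxx
/scripts/rebuildhttpdconf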

Cagefs enabled user[php selector] gives 500 error

Need to manually re-set the PHP selector and PHP version for this user:

# cagefsctl --setup-cl-selector
# /usr/bin/cl-selector --select=php --version=5.3 --user=xxxxx

Routing Concept1

Sometimes you have more than one router in your network and want different containers to use different routers. Other times you may have a single HN with IP addresses on different networks and want to assign containers addresses from those networks. Let's say you have a HN with an IP address in network 192.168.100.0/24 (192.168.100.10) and an IP address in 192.168.200.0/24 (192.168.200.10). Maybe those addresses are on different VLANs. Maybe one is an internal network and the other faces the wider internet. Maybe you have 10 different networks assigned to the HN. It does not matter as long as there is a gateway on each of those networks. In our example we will assume the gateways are 192.168.100.1 and 192.168.200.1. You want any container assigned an address in the 192.168.100.0/24 network to use 192.168.100.1 and any container assigned an address in the 192.168.200.0/24 network to use 192.168.200.1. By default the network traffic coming from a container will use the default gateway
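A hedged sketch of the source-based routing this leads to, using standard iproute2 commands on the HN (the table names are illustrative):

echo "200 net100" >> /etc/iproute2/rt_tables
echo "201 net200" >> /etc/iproute2/rt_tables

ip route add default via 192.168.100.1 table net100
ip route add default via 192.168.200.1 table net200

ip rule add from 192.168.100.0/24 table net100
ip rule add from 192.168.200.0/24 table net200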

Tracing a program

Suppose some program on your system refuses to work, or it works but much slower than you expected. One way to investigate is to use the strace program to follow the system calls performed by a given process.

Use of strace
Commonly, to use strace you would give the following command:

strace -o strace.out -ff touch /tmp/file

Here the -o strace.out option means that strace will write all information to the file named strace.out; -ff means to strace the forked children of the program. Each child's output is placed in a strace.out.PID file, where PID is the pid of the child. If you want all the output in a single file, use the -f argument instead (i.e. a single f, not double). touch /tmp/file is the program, with its arguments, that is to be straced.

Strace results
So this is what we have in strace.out:

execve("/usr/bin/touch", ["touch", "/tmp/file"], [/* 51 vars */]) = 0
uname({sys="Linux", node="dhcp0-138", ...}) = 0
brk(0)
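A couple of other commonly useful strace invocations (standard options; the PID is illustrative):

strace -c touch /tmp/file        # print a summary table of syscall counts and timings
strace -p 1234 -o attach.out     # attach to an already-running process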