Sunday, 27 April 2014

Routing Concept1

Sometimes you have more than one router in your network, and want different containers to use different routers. Other times you may have a single HN with IP addresses on different networks and want to assign containers addresses from those networks.
Let's say you have a HN with an IP address in network (say and an IP address in (say Maybe those addresses are on different VLANs. Maybe one is an internal network and the other faces the wider internet. Maybe you have 10 different networks assigned to the HN. It does not matter, as long as there is a gateway on each of those networks. In our example we will assume the gateways are and You want any container assigned an address in the network to use, and any container assigned an address in the network to use
By default the network traffic coming from a container will use the default gateway on the HN to reach the rest of the world. If we want our containers to use the gateways on their respective networks we need to configure source based routing. This involves creating an additional routing table to redirect the traffic.
For example:
# /sbin/ip rule add from table 10000
# /sbin/ip route add throw table 10000
# /sbin/ip route add default via table 10000
The first line adds a routing rule. This rule tells the system to use an alternate routing table when routing packets from a certain source. In this case we are telling the system that if a packet originates from a address, it should use routing table 10000. The table number simply has to be one not already in use on your system. I tend to start at 10000, but you can start your numbering wherever is convenient. To see the rules (and the tables they reference) that are currently in use:
# /sbin/ip rule list
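On a stock system with no extra rules added, the output typically shows only the three built-in rules (priority, selector, and the table looked up):

```
0:	from all lookup local
32766:	from all lookup main
32767:	from all lookup default
```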
Next we add two routes to table 10000. The first one is a throw route. A throw route merely tells the system to stop processing the current table if the destination address matches the criteria provided; the lookup then falls through to the next rule. This allows the host system and the VPSs to continue reaching other systems on our network without trying to use the default gateway we provide. The second route provides that default gateway.
Now all we need to do is repeat this for our second network:
# /sbin/ip rule add from table 10001
# /sbin/ip route add throw table 10001
# /sbin/ip route add default via table 10001
Here we have changed the networks in the rule and routes and used a different table number. Everything else stays the same. You can, of course, add as many routes to a particular table as you like. If you want to allow a container in the network to reach the network without using the gateway, you can add another throw route and let the HN's default routing table take effect:
# /sbin/ip route add throw table 10000
A previous version of this page suggested adding an additional route in order to allow the HN to contact the container. Indeed, this would be required if we did not provide the throw routes, but maintaining such a configuration requires adding new routes for every container. Using vzctl set <ctid> --ipadd <ip> adds its routes to the main routing table by default, but not to our custom routing tables. The configuration here only requires changes when the networks change, not for each container.

Saturday, 26 April 2014

Tracing a program

Suppose some program on your system refuses to work, or it works but much slower than you expected. One way to find out why is to use strace to follow the system calls performed by a given process.

 Use of strace

Commonly, to use strace you give a command such as:
strace -o strace.out -ff touch /tmp/file


  • -o strace.out means that strace will write all its output to the file named strace.out;
  • -ff means strace will also trace the forked children of the program. Each child's trace is written to a strace.out.PID file, where PID is the pid of that child. If you want all the output in a single file, use -f instead (i.e. a single f, not double).
  • touch /tmp/file is the program, with its arguments, that is to be traced.

 Strace results

So this is what we have in strace.out:
execve("/usr/bin/touch", ["touch", "/tmp/file"], [/* 51 vars */]) = 0
uname({sys="Linux", node="dhcp0-138", ...}) = 0
brk(0)                                  = 0x804f000
access("/etc/", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=47843, ...}) = 0
mmap2(NULL, 47843, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f1a000
close(3)                                = 0
open("/lib/", O_RDONLY)        = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\360V\1"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1227872, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f19000
mmap2(NULL, 1142148, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7e02000
mmap2(0xb7f13000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x110) = 0xb7f13000
mmap2(0xb7f17000, 7556, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f17000
close(3)                                = 0
mprotect(0xb7f13000, 4096, PROT_READ)   = 0
munmap(0xb7f1a000, 47843)               = 0
open("/dev/urandom", O_RDONLY)          = 3
read(3, "v\0265\313", 4)                = 4
close(3)                                = 0
brk(0)                                  = 0x804f000
brk(0x8070000)                          = 0x8070000
open("/tmp/file", O_WRONLY|O_NONBLOCK|O_CREAT|O_NOCTTY|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
utime("/tmp/file", NULL)                = -1 EACCES (Permission denied)
write(2, "touch: ", 7)                  = 7
write(2, "cannot touch `/tmp/file\'", 24) = 24
write(2, ": Permission denied", 19)     = 19
write(2, "\n", 1)                       = 1
exit_group(1)                           = ?

In this case we see that the problem is with access to /tmp/file:

open("/tmp/file", O_WRONLY|O_NONBLOCK|O_CREAT|O_NOCTTY|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
utime("/tmp/file", NULL)                = -1 EACCES (Permission denied)
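For long traces, grepping for system calls that returned -1 quickly surfaces failing lines like the two above. A sketch (the two sample trace lines are inlined here so the snippet runs anywhere; normally you would grep your real strace.out):

```shell
# Write a tiny sample trace so this example is self-contained.
cat > strace.out.sample <<'EOF'
open("/tmp/file", O_WRONLY|O_CREAT, 0666) = -1 EACCES (Permission denied)
close(3)                                = 0
EOF

# Failing syscalls return -1; strace prints the errno name after it.
grep ' = -1 ' strace.out.sample
```

Only the failed open() is printed; successful calls such as close() are filtered out.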

Wednesday, 16 April 2014

Install NIC drivers in centos server


-Driver Installation steps for Atheros Communications Inc. AR8161 Gigabit Ethernet [NIC].

*verify ethernet controller in use :

#lspci -v

02:00.0 Ethernet controller: Atheros Communications Inc. AR8161 Gigabit Ethernet (rev 10)
Subsystem: Dell Device 0562
Flags: bus master, fast devsel, latency 0, IRQ 33
Memory at d0400000 (64-bit, non-prefetchable) [size=256K]
I/O ports at 2000 [size=128]
Capabilities: [40] Power Management version 3
Capabilities: [58] Express Endpoint, MSI 00
Capabilities: [c0] MSI: Enable+ Count=1/16 Maskable+ 64bit+
Capabilities: [d8] MSI-X: Enable- Count=16 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [180] Device Serial Number ff-4c-ed-51-5c-f9-dd-ff

* download .rar file here : ... _langue=en

* yum --enablerepo=rpmforge install unrar

*unrar e *****.rar

------> replace Makefile : No

* # yum install kernel-devel


*cp -r /path-to-downloaded-rar/alx.ko /lib/modules/2.6.32-279.5.2.el6.i686/kernel/net/wired

*add "alx.ko" to /lib/modules/2.6.32-279.5.2.el6.i686/modules.networking

* #depmod -a

*vi /etc/sysconfig/modules/alx.modules


#!/bin/sh
if ! lsmod | grep -q '^alx' ; then
exec /sbin/modprobe alx >/dev/null 2>&1

* #chmod +x /etc/sysconfig/modules/alx.modules


* #modprobe alx

Thursday, 10 April 2014

Overview of OpenVZ

1. OS Virtualization - From the point of view of applications and Virtual Environment users, each VE is an independent system. This independence is provided by a virtualization layer in the kernel of the host OS. Note that only a negligible part of the CPU resources is spent on virtualization (around 1-2%).
2. Network virtualization - The OpenVZ network virtualization layer is designed to isolate VEs from each other and from the physical network.
3. Resource Management - OpenVZ resource management controls the amount of resources available to Virtual Environments. The controlled resources include parameters such as CPU power, disk space, and a set of memory-related parameters.
4. Two-Level Disk Quota - The host system (OpenVZ) owner (root) can set up per-VE disk quotas, in terms of disk blocks and i-nodes (roughly, the number of files). This is the first level of disk quota. In addition, a VE owner (root) can use the usual quota tools inside their own VE to set standard UNIX per-user and per-group disk quotas.
5. Fair CPU scheduler - The CPU scheduler in OpenVZ is a two-level implementation of a fair-share scheduling strategy.
6. User Beancounters - User beancounters are a set of per-VE counters, limits, and guarantees. There is a set of about 20 parameters, carefully chosen to cover all aspects of VE operation, so that no single VE can abuse any resource that is limited for the whole node and thus harm other VEs.
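On a running OpenVZ node, the beancounters are exposed in /proc/user_beancounters, and a non-zero failcnt column indicates a VE has hit a limit. A sketch of scanning for such hits (a simplified two-line sample is inlined here instead of the real /proc file, and the column layout is reduced to name/held/maxheld/barrier/limit/failcnt for illustration):

```shell
# Simplified sample of beancounter rows: name held maxheld barrier limit failcnt
cat > beancounters.sample <<'EOF'
kmemsize  2718940  2813952  11055923  11377049  0
numproc   16       22       240       240       3
EOF

# Print any parameter whose failcnt (6th column) is non-zero.
awk '$6 > 0 {print $1, "failcnt =", $6}' beancounters.sample
```

Here only numproc is reported, meaning the VE was denied new processes three times.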