Virtual machine stacking: using LXC on top of ESX

by lesliebuna on January 1, 2012 | One comment

What is LXC?

From Dwight Schauer:
“Linux Containers (LXC) are an operating system-level virtualization method for running multiple isolated server installs (containers) on a single control host. LXC does not provide a virtual machine, but rather provides a virtual environment that has its own process and network space. It is similar to a chroot, but offers much more isolation.”

Creating Virtual Environments within Virtual Machines using Red Hat Enterprise Linux 6 or CentOS 6

Why would anyone ever want to do this? Well, for a number of reasons:

1. You only have access to one virtual machine but want to create multiple isolated services.

2. You have a requirement for multiple Linux OS versions but don’t want to invest in a heavyweight virtualisation solution.

3. Creating virtual multilayer services that you wish to deploy as a single virtual machine: think of a virtual network of machines encapsulated in a single, easily deployed package that exposes only one IP address (e.g. reverse proxy, Tomcat Java container, database, and internal virtual networks).

4. Creation of lightweight development environments.

It also works on KVM and VirtualBox as well as VMware vSphere.

LXC Setup

LXC needs kernel namespace support, which is only fully available in kernels 2.6.26 and above; that is newer than the kernel Red Hat Enterprise Linux 5 ships, so I opted for RHEL 6 rather than rolling and maintaining my own kernel (CentOS 6 wasn’t out when I tried this, but it should just work).
For Red Hat Enterprise Linux 6 you will also need the RHN Optional channel enabled so you can get ruby-selinux.

So let’s proceed with how to get, compile and install LXC:

$ wget http://downloads.sourceforge.net/project/lxc/lxc/lxc-0.7.3/lxc-0.7.3.tar.gz
$ rpmbuild -ta lxc-0.7.3.tar.gz
$ yum --nogpgcheck install lxc-0.7.3-1.x86_64.rpm libvirt
$ mkdir /var/lib/lxc /usr/lib64/lxc/rootfs /etc/lxc /cgroup
$ cat >> /etc/fstab <<EOF
none /cgroup cgroup defaults 0 0
EOF
$ mount /cgroup
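
Before going any further it’s worth sanity-checking the kernel. LXC ships an lxc-checkconfig script that reports whether the namespace and cgroup options it needs are enabled, and you can confirm the cgroup mount directly:

$ lxc-checkconfig        # reports kernel namespace/cgroup support
$ mount | grep cgroup    # confirms /cgroup is actually mounted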

If you’re using DHCP on your network you can easily set up network bridging; a minimal DHCP bridge configuration for RHEL 6 looks like the following (adjust device names to suit):

$ cat > /etc/sysconfig/network-scripts/ifcfg-br0 <<EOF
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
EOF
$ cat >> /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
BRIDGE=br0
EOF
$ service network restart
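
Once the network comes back up, you can confirm the bridge exists and that eth0 has been enslaved to it using the standard bridge-utils and iproute tools:

$ brctl show br0     # eth0 should be listed as an interface on the bridge
$ ip addr show br0   # br0 should now hold the DHCP-assigned address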


And voila! LXC is now configured and ready for use. All that’s left is to install an LXC container.
Hint: I typically create a dedicated LVM volume group and logical volume for each LXC container and mount them under /srv/lxc/, e.g. /srv/lxc/myvirtualbox, to prevent a container spilling over into the host’s file system and impacting other services.
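
If you follow that hint, the LVM steps look roughly like this; /dev/sdb, the vg_lxc volume group name and the 10G size are illustrative, so adjust for your environment:

$ pvcreate /dev/sdb                       # prepare the spare disk for LVM
$ vgcreate vg_lxc /dev/sdb                # dedicated volume group for containers
$ lvcreate -L 10G -n myvirtualbox vg_lxc  # one logical volume per container
$ mkfs -t ext4 /dev/vg_lxc/myvirtualbox
$ mkdir -p /srv/lxc/myvirtualbox
$ mount /dev/vg_lxc/myvirtualbox /srv/lxc/myvirtualbox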
You need to already have, or create, an LXC template; there are guides available on how to create one based on CentOS 5.

Here’s some additional stuff I do. Chroot into the template and disable Bluetooth:

$ chkconfig bluetooth off

Delete the SSH host keys (the *key and *.pub files) from /etc/ssh:

$ for key in `ls /etc/ssh/* | grep -e key -e pub` ; do rm -f $key ; done

Remove the HWADDR setting from /etc/sysconfig/network-scripts/ifcfg-eth0:

$ sed -i 's/HWADDR.*$//g' /etc/sysconfig/network-scripts/ifcfg-eth0

Ensure that ttys 1-4 are not commented out in /etc/inittab in each image before packaging the template up; the relevant entries and a packaging command are shown below.
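
For reference, on a CentOS 5 style sysvinit image the entries in question are the mingetty lines for tty1 through tty4 (four, to match the lxc.tty = 4 setting used later), and the template can then be packaged with a plain tarball (paths here are illustrative):

1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4

$ tar czf /path/to/template.tar.gz -C /path/to/template/rootfs .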

Once that’s done and you have your template on your LXC host, you’re ready to deploy it.

$ cd /srv/lxc
$ mkdir myvirtualbox
$ tar zxvf /path/to/template/tar/gz/file -C myvirtualbox
$ vi /etc/lxc/myvirtualbox.conf

Paste the following into the file (adjust as necessary):

lxc.utsname = myvirtualbox
lxc.tty = 4
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.ipv4 = 0.0.0.0/24
lxc.rootfs = /srv/lxc/myvirtualbox
lxc.mount = /etc/lxc/myvirtualbox.fstab
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
# /dev/pts/*
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm

Save the file and create a new fstab file at /etc/lxc/myvirtualbox.fstab (the path referenced by lxc.mount above).

Paste in the following data and edit as necessary:

none /srv/lxc/myvirtualbox/dev/pts devpts defaults   0 0
none /srv/lxc/myvirtualbox/proc    proc   defaults   0 0
none /srv/lxc/myvirtualbox/sys     sysfs  defaults   0 0
none /srv/lxc/myvirtualbox/dev/shm tmpfs  defaults   0 0
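
One gotcha: these mount points must already exist inside the container’s root file system or the mounts will fail when the container starts. If your template didn’t include them, create them now:

$ mkdir -p /srv/lxc/myvirtualbox/dev/pts /srv/lxc/myvirtualbox/proc \
    /srv/lxc/myvirtualbox/sys /srv/lxc/myvirtualbox/dev/shm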

Now you can create the myvirtualbox LXC container by typing:

$ lxc-create -f /etc/lxc/myvirtualbox.conf -n myvirtualbox
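
If the create succeeds, you can confirm LXC knows about the container before starting it:

$ lxc-ls                    # lists the containers LXC knows about
$ lxc-info -n myvirtualbox  # should report the container as STOPPED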

Then fire it up, along with a console to go with it, using screen:

$ screen -dmS init-myvirtualbox /usr/bin/lxc-start -n myvirtualbox
$ screen -dmS console-myvirtualbox /usr/bin/lxc-console -n myvirtualbox

You can now reattach to the init boot session with:

$ screen -r init-myvirtualbox

And to see and interact with the console:

$ screen -r console-myvirtualbox

(Detach from either screen session again with Ctrl-a d.)
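
When you’re done with the container, stop it with lxc-stop; lxc-destroy then removes the container definition (the root file system under /srv/lxc should be left alone, since it lives outside /var/lib/lxc):

$ lxc-stop -n myvirtualbox     # halt the running container
$ lxc-destroy -n myvirtualbox  # remove the container definition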

One comment

So, since we know it is possible, how did it turn out performance-wise? I saw this from your ServerFault post and was curious if you use this in production?

by Al on October 13, 2012 at 9:59 pm.
