Vagrant with LXC

Posted by Kaya Kupferschmidt • Wednesday, February 4. 2015 • Category: Programming

Nowadays working with virtual machines is almost a requirement for software developers, especially when working on a web-based application. The idea is that the virtual machine provides a clean environment for the application, ideally reflecting the final production environment. But true virtual machines (like VirtualBox, VMware or KVM) come at a high cost in terms of resource usage (disk space, performance penalties, memory usage). This is especially true if you need to maintain a lot of different virtual machines, either for different projects or for a virtual cluster. If you are working on Linux (and as a developer you really should, unless you depend on .NET), the situation gets much better if you don't rely on full virtualisation but use Linux Containers instead.

Linux Containers (LXC)

Linux Containers (LXC) are a chroot-ed environment on steroids. In more detail: a Linux container provides an isolated execution environment with a separate root directory (like chroot), possibly some cgroup settings (though I am no expert in this area), a dedicated virtual network interface, etc. The big difference to full virtualisation is that a container still runs on the same kernel as the host OS. This means that there is no performance penalty due to virtualisation, and you can even use a subdirectory on your normal hard disk as the new root directory for the Linux container. The contents of the container are therefore directly accessible from the host, while the container can only access that specific subdirectory (plus any other explicitly mounted directories). Moreover, because only one kernel instance is running, all containers share the same page cache, which greatly increases caching efficiency. I don't want to dig any deeper into LXC at this point, but of course I invite you to investigate LXC using your favorite search engine.

Vagrant

Vagrant is a great tool for providing virtual environments for developers, including automatic provisioning and configuration. At first glance, Vagrant looks very similar to Docker, but for me Vagrant is much more powerful, while Docker seems to excel at deploying comparably simple applications (like MySQL). But once you try to set up a virtual Hadoop cluster (where the nodes need to access a lot of network ports on other nodes), Docker doesn't look like the right tool to me. Such situations are where Vagrant really shines.

Per default Vagrant uses VirtualBox as the provider for the environments, but luckily Vagrant offers a plugin API for implementing other providers (for example AWS, KVM, libvirt, ...). For me, LXC seems to be the natural choice for Linux, and although LXC is not supported out of the box by Vagrant, Fabio Rehm has put great effort into implementing a corresponding provider.


Vagrant Installation

First you need to download Vagrant from the Vagrant download page. Choose the correct package format (as indicated on the page), then install the corresponding package either as user root or using sudo. For example, on Ubuntu using sudo:

sudo dpkg -i vagrant_1.7.2_x86_64.deb

This will install vagrant into the directory /opt and it will also create some links in /usr/local/bin, such that you can now check if it is correctly installed:

root@dvorak:/opt/vmcluster# vagrant
Usage: vagrant [options] <command> [<args>]

    -v, --version                    Print the version and exit.
    -h, --help                       Print this help.

Common commands:
     box             manages boxes: installation, removal, etc.
     connect         connect to a remotely shared Vagrant environment
     destroy         stops and deletes all traces of the vagrant machine
     global-status   outputs status Vagrant environments for this user
     halt            stops the vagrant machine
     help            shows the help for a subcommand
     init            initializes a new Vagrant environment by creating a Vagrantfile
     login           log in to HashiCorp's Atlas
     package         packages a running vagrant environment into a box
     plugin          manages plugins: install, uninstall, update, etc.
     provision       provisions the vagrant machine
     push            deploys code in this environment to a configured destination
     rdp             connects to machine via RDP
     reload          restarts vagrant machine, loads new Vagrantfile configuration
     resume          resume a suspended vagrant machine
     share           share your Vagrant environment with anyone in the world
     ssh             connects to machine via SSH
     ssh-config      outputs OpenSSH valid configuration to connect to the machine
     status          outputs status of the vagrant machine
     suspend         suspends the machine
     up              starts and provisions the vagrant environment
     version         prints current and latest Vagrant version

For help on any individual command run `vagrant COMMAND -h`

Additional subcommands are available, but are either more advanced
or not commonly used. To see all subcommands, run the command
`vagrant list-commands`.

We will skip testing basic Vagrant functionality at this point, because this would require VirtualBox to be installed. Since we'd like to use LXC instead, we need to install the corresponding vagrant-lxc plugin written and maintained by Fabio Rehm. This can be done simply by typing:

sudo vagrant plugin install vagrant-lxc

Note that the plugin will be installed into the user's home directory, so in the case above it will be installed for the user root (since we used sudo). For now, I will use sudo for creating, starting, stopping and destroying Vagrant boxes.

Our first box

Now create a new directory, where the Vagrant box logically will reside (although the LXC container will be created physically in /var/lib/lxc).

cd
mkdir mybox
cd mybox

Inside the folder mybox create a file called Vagrantfile (for example with nano Vagrantfile), which will contain details about the container to be created. To begin, simply fill it with the following content:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.hostname = "vagrant-lxc"
  config.vm.box = "fgrehm/trusty64-lxc"
  config.vm.box_url = "https://atlas.hashicorp.com/fgrehm/boxes/trusty64-lxc/versions/1.2.0/providers/lxc.box"

  config.vm.provider :lxc do |lxc, override|
    lxc.container_name = "vagrant-lxc"
    lxc.customize 'network.type', 'veth'
    lxc.customize 'network.link', 'lxcbr0'
  end
end

Some of the settings should be self-explanatory (although the syntax might look a little bit strange at first if you aren't used to Ruby), while others need some explanation.

  • config.vm.hostname contains the hostname of the container.
  • config.vm.box is the name of the box image. HashiCorp, the creator of Vagrant, provides some basic system images which can be used with Vagrant. You can search for specific images on Atlas - but you need to make sure that the image is compatible with the provider you choose. In our case it needs to be compatible with LXC.
  • config.vm.box_url contains a URL for automatically downloading the box.
  • lxc.container_name contains the name of the Linux container which will be created by Vagrant. If you do not specify a name, Vagrant will generate an unwieldy one instead.
  • lxc.customize lets you change settings of the LXC configuration. In our case, we specify which network type we want (veth for bridged ethernet) and which bridge to use (lxcbr0, the default bridge created by the lxc package on Ubuntu).

You can read more about the many options and possibilities of the Vagrantfile in the Vagrant documentation. If you already know Vagrant, you only have to pay attention to use a different box than usual (the config.vm.box parameter). Fabio Rehm built some Vagrant boxes specifically for use with LXC.

After you have created the Vagrantfile, simply enter the command sudo vagrant up inside the directory mybox in order to bring up the virtual machine. Vagrant will take care of downloading the machine image, creating the Linux container and starting it.

kaya@ubuntu$ sudo vagrant up
Bringing machine 'default' up with 'lxc' provider...
==> default: Importing base box 'fgrehm/trusty64-lxc'...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Setting up mount entries for shared folders...
    default: /vagrant => /opt/vmcluster
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 10.110.44.252:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: 
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: 
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if its present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Setting hostname...
kaya@ubuntu$

Now a Linux container called vagrant-lxc has been created. You can either attach to the container using sudo lxc-attach -n vagrant-lxc or (the preferred method) ssh into the container using Vagrant itself:

kaya@ubuntu$ sudo vagrant ssh
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-35-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
vagrant@vagrant-lxc:~$ 

Configuring the DHCP Server

When the package lxc is installed on Ubuntu, a new network bridge lxcbr0 will be created and a DHCP server will be bound to that bridge. The DHCP server can be configured by the file /etc/default/lxc-net, where you can enter the desired DHCP range and other parameters. For example it might look as follows:

# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
# containers.  Set to "false" if you'll use virbr0 or another existing
# bridge, or mavlan to your host's NIC.
USE_LXC_BRIDGE="true"

# If you change the LXC_BRIDGE to something other than lxcbr0, then
# you will also need to update your /etc/lxc/default.conf as well as the
# configuration (/var/lib/lxc/<container>/config) for any containers
# already created using the default config to reflect the new bridge
# name.
# If you have the dnsmasq daemon installed, you'll also have to update
# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.110.44.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.110.44.0/24"
LXC_DHCP_RANGE="10.110.44.128,10.110.44.254"
LXC_DHCP_MAX="253"

# Uncomment the next line if you'd like to use a conf-file for the lxcbr0
# dnsmasq.  For instance, you can use 'dhcp-host=mail1,10.0.3.100' to have
# container 'mail1' always get ip address 10.0.3.100.
#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

# Uncomment the next line if you want lxcbr0's dnsmasq to resolve the .lxc
# domain.  You can then add "server=/lxc/10.0.3.1' (or your actual )
# to /etc/dnsmasq.conf, after which 'container1.lxc' will resolve on your
# host.
#LXC_DOMAIN="lxc"

Note that you can also use an additional configuration file if you uncomment the setting LXC_DHCP_CONFILE. I highly recommend doing so, because you can then also configure DNS name servers, the default gateway and so on in that file.
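As an illustration, such a file could look like the following (the path matches the LXC_DHCP_CONFILE setting above; the hostname and addresses are examples, not values you must use):

```
# /etc/lxc/dnsmasq.conf
# pin a container to a fixed IP address by its hostname:
dhcp-host=vagrant-lxc,10.110.44.7
# push a default gateway (DHCP option 3) and DNS servers (option 6) to all containers:
dhcp-option=3,10.110.44.1
dhcp-option=6,8.8.8.8,8.8.4.4
```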

Static IP Addresses

In some cases (especially if you want to define a whole cluster), it is desirable to assign static IP addresses to the containers defined in the Vagrantfile. Unfortunately this is not so easy right now with LXC, but with a little tinkering it can be done.

The simplest approach would be to include the necessary information directly in the Vagrantfile by customizing the LXC configuration with a new entry lxc.customize 'network.ipv4', '10.110.44.7'. Unfortunately this doesn't work very well, because the Ubuntu image has DHCP enabled for automatic network configuration. This means that the IP address will immediately be replaced by the DHCP server.

So the trick is to configure the DHCP server to hand out a static IP address to the container. In order to make this work, we need to assign a static MAC address to the virtual network adapter. This can be done by customizing the LXC property network.hwaddr, as in the following example:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.hostname = "vagrant-lxc"
  config.vm.box = "fgrehm/trusty64-lxc"
  config.vm.box_url = "https://atlas.hashicorp.com/fgrehm/boxes/trusty64-lxc/versions/1.2.0/providers/lxc.box"

  config.vm.provider :lxc do |lxc, override|
    lxc.container_name = "vagrant-lxc"
    lxc.customize 'network.type', 'veth'
    lxc.customize 'network.link', 'lxcbr0'
    lxc.customize 'network.hwaddr', '00:16:3e:33:44:46'
  end
end

With this configuration in place, we only need to add a corresponding mapping to the DHCP server bound to the ethernet bridge lxcbr0.
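Assuming LXC_DHCP_CONFILE is enabled as described above, the mapping is a single dhcp-host line tying the MAC address from the Vagrantfile to the desired IP (both values here are the examples used earlier):

```
# /etc/lxc/dnsmasq.conf
dhcp-host=00:16:3e:33:44:46,10.110.44.7
```

After changing the file, restart the lxc-net service (e.g. sudo service lxc-net restart on Ubuntu) so dnsmasq picks up the new mapping.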

Vagrant Clusters

The feature I really love about Vagrant is the ability to define multiple machines in a single Vagrantfile. This dramatically simplifies setting up a virtual cluster.
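As a sketch of what this can look like, the following Vagrantfile defines two containers in a loop, building on the single-box example above (the node names, the MAC address scheme and the loop are my own illustration, not part of any standard setup):

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "fgrehm/trusty64-lxc"

  ["node1", "node2"].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.provider :lxc do |lxc|
        lxc.container_name = name
        lxc.customize 'network.type', 'veth'
        lxc.customize 'network.link', 'lxcbr0'
        # static MAC per node, so the dnsmasq dhcp-host trick
        # from the previous section works for every cluster member
        lxc.customize 'network.hwaddr', format('00:16:3e:33:44:%02x', 0x50 + i)
      end
    end
  end
end
```

With this file, sudo vagrant up brings up both containers, and vagrant ssh node1 connects to an individual node.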

Troubleshooting

Connecting to the container

If something goes wrong during vagrant startup but the LXC container has already been created, you can still attach to the container to examine its state. This can be done using the standard LXC tools on the host:

sudo lxc-attach -n <container-name>

This will attach the current console to the container, and you will become root inside the container. It is a good idea to examine the network with ifconfig and make sure that an IP address has been assigned.

The container has no IP address

Unfortunately this is a very common problem and can have many causes. First you should make sure that the file /etc/lxc/default.conf does not contain any networking-related settings. Otherwise the container may end up with two network interfaces, neither of which works.

Another common problem is iptables rules on the host machine which prevent communication between the host and the container. You can examine the rules with sudo iptables --list.

umask related problems

I also experienced some weird problems when running Vagrant with a umask different from 022. umask is a setting which controls the default access rights when new files or directories are created. The normal setting is 022, which allows other users and groups to read your files; this setting works well with Vagrant. But when I changed it to the more restrictive 077 (which allows only the owner of a file or directory to read it), Vagrant would not correctly set up the SSH keys needed for accessing the LXC container.
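If you don't want to change your default umask permanently, one workaround (a general shell technique, not something specific to Vagrant) is to relax the umask only for the Vagrant invocation by running it in a subshell:

```shell
# the parent shell keeps its restrictive default (here 077);
# only the command inside the subshell runs with umask 022
umask 077
( umask 022 && touch /tmp/demo-file )
ls -l /tmp/demo-file    # created world-readable despite the 077 default
rm /tmp/demo-file
```

The same pattern applied to Vagrant, ( umask 022 && sudo vagrant up ), lets Vagrant create its SSH key files with readable permissions while your interactive shell keeps its stricter setting.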
