I recently found myself needing a machine on which to compile binaries for a CentOS server. I first considered actually spinning up a CentOS system on a VPS; however, that seemed a little overboard just for compiling. Then I realized this would be the perfect use for a container: I could have a system identical to the one the binaries will be deployed on, and at little cost, since it can simply be blown away when I'm done. To set up my compile machine I used LXC.

LXC, or "Linux Containers", is a set of tools for creating full-featured containers. Compared to other tools such as systemd-nspawn, LXC is much more complex, and it has been used as the basis of projects such as Docker. Docker has since moved away from LXC; however, LXC is still one of the big players in the Linux container game. The Linux Containers project also brings LXD, a daemon that can be used to manage containers. LXD makes heavier use of system images, as opposed to templates, in order to allow quick deployment of containers. Together these projects allow easy deployment and management of containers, as well as advanced features and customizability.

In the following post I will go through the process of setting up several LXC containers, first using the traditional template-based method, and then experimenting with LXD to download image-based containers.

{% include centered_caption_image.html url="/images/lxc/cowsay.png" description="" %}

* TOC will be output here
{:toc}

## LXC

### Requirements

On Arch Linux, which is what I’m using, the packages lxc and arch-install-scripts are required.

To see if the currently running kernel is properly configured for LXC, run lxc-checkconfig; it should detect if anything is missing from the current configuration.

[root]# lxc-checkconfig                                                               john@Archon
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: missing

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: missing
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/sbin/lxc-checkconfig

One of the features that it is normal to see missing is user namespaces. In order to use this feature on Arch Linux, a custom-compiled kernel is necessary; the default kernel ships with user namespaces turned off due to security concerns.
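If you want to confirm what the running kernel was built with before going down that road, the kernel config can be checked directly (this assumes the kernel exposes /proc/config.gz, as the stock Arch kernel does):

[root]# zgrep CONFIG_USER_NS /proc/config.gz
# CONFIG_USER_NS is not set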

### Templates

The traditional way of setting up an LXC container is using templates. Templates are prebuilt shell scripts that will build a system image.

View the available templates in /usr/share/lxc/templates:

total 235K
drwxr-xr-x 2 root root   21 Aug 30 06:10 .
drwxr-xr-x 6 root root    8 Aug 30 06:10 ..
-rwxr-xr-x 1 root root  13K Aug 30 06:10 lxc-alpine
-rwxr-xr-x 1 root root  14K Aug 30 06:10 lxc-altlinux
-rwxr-xr-x 1 root root  11K Aug 30 06:10 lxc-archlinux
-rwxr-xr-x 1 root root  12K Aug 30 06:10 lxc-busybox
-rwxr-xr-x 1 root root  29K Aug 30 06:10 lxc-centos
-rwxr-xr-x 1 root root  11K Aug 30 06:10 lxc-cirros
-rwxr-xr-x 1 root root  20K Aug 30 06:10 lxc-debian
-rwxr-xr-x 1 root root  18K Aug 30 06:10 lxc-download
-rwxr-xr-x 1 root root  49K Aug 30 06:10 lxc-fedora
-rwxr-xr-x 1 root root  28K Aug 30 06:10 lxc-gentoo
-rwxr-xr-x 1 root root  14K Aug 30 06:10 lxc-openmandriva
-rwxr-xr-x 1 root root  16K Aug 30 06:10 lxc-opensuse
-rwxr-xr-x 1 root root  42K Aug 30 06:10 lxc-oracle
-rwxr-xr-x 1 root root  12K Aug 30 06:10 lxc-plamo
-rwxr-xr-x 1 root root  19K Aug 30 06:10 lxc-slackware
-rwxr-xr-x 1 root root  27K Aug 30 06:10 lxc-sparclinux
-rwxr-xr-x 1 root root 6.7K Aug 30 06:10 lxc-sshd
-rwxr-xr-x 1 root root  26K Aug 30 06:10 lxc-ubuntu
-rwxr-xr-x 1 root root  12K Aug 30 06:10 lxc-ubuntu-cloud

lxc-create is the tool used to create a container from a template. To get help on container creation, use lxc-create --help.

There are many options:

[root]# lxc-create --help
Options :
  -n, --name=NAME               NAME of the container
  -f, --config=CONFIG           Initial configuration file
  -t, --template=TEMPLATE       Template to use to setup container
  -B, --bdev=BDEV               Backing store type to use
      --dir=DIR                 Place rootfs directory under DIR

  BDEV options for LVM (with -B/--bdev lvm):
      --lvname=LVNAME           Use LVM lv name LVNAME
                                (Default: container name)
      --vgname=VG               Use LVM vg called VG
                                (Default: lxc)
      --thinpool=TP             Use LVM thin pool called TP
                                (Default: lxc)

  BDEV options for Ceph RBD (with -B/--bdev rbd) :
      --rbdname=RBDNAME         Use Ceph RBD name RBDNAME
                                (Default: container name)
      --rbdpool=POOL            Use Ceph RBD pool name POOL
                                (Default: lxc)

  BDEV option for ZFS (with -B/--bdev zfs) :
      --zfsroot=PATH            Create zfs under given zfsroot
                                (Default: tank/lxc)

  BDEV options for LVM or Loop (with -B/--bdev lvm/loop) :
      --fstype=TYPE             Create fstype TYPE
                                (Default: ext3)
      --fssize=SIZE[U]          Create filesystem of
                                size SIZE * unit U (bBkKmMgGtT)
                                (Default: 1G, default unit: M)

Common options :
  -o, --logfile=FILE               Output log to FILE instead of stderr
  -l, --logpriority=LEVEL          Set log priority to LEVEL
  -q, --quiet                      Don't produce any output
  -P, --lxcpath=PATH               Use specified container path
  -?, --help                       Give this help list
      --usage                      Give a short usage message
      --version                    Print the version number

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.

See the lxc-create man page for further information.
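Each template also takes its own options on top of these. A quick way to see them (a sketch; the same mechanism is used with the download template later in this post) is to pass --help along with the template:

[root]# lxc-create -t centos --help

This should print the CentOS template's own flags, such as the -R/--release option mentioned in the creation output further down.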

### Configuration

Settings can be specified in the LXC config file (/etc/lxc/lxc.conf). I'm using ZFS as a backing store; btrfs and lvm are also supported, as well as "none", which is the default.

I added my backend, lxc path, and zfs root.

[root]# nano /etc/lxc/lxc.conf
lxc.rootfs.backend = zfs
lxc.lxcpath = /var/lib/lxc
lxc.bdev.zfs.root = vault/lxc

I also set some container defaults. I'm using a systemd-networkd bridge set up the same way I detailed in a previous post on systemd-nspawn (a rough sketch of the host side is included after the snippet below):

[root]# nano /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = br0
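For reference, the host side of that bridge looks roughly like the following systemd-networkd files. Treat this as a sketch: the file names are arbitrary, and the uplink interface name (eno1) and the use of DHCP on the bridge are specific to my setup.

[root]# cat /etc/systemd/network/br0.netdev
[NetDev]
Name=br0
Kind=bridge

[root]# cat /etc/systemd/network/uplink.network
[Match]
Name=eno1

[Network]
# enslave the physical NIC to the bridge
Bridge=br0

[root]# cat /etc/systemd/network/br0.network
[Match]
Name=br0

[Network]
# the bridge itself carries the host's address
DHCP=ipv4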

I created a new ZFS dataset for my LXC containers:

[root]# zfs create vault/var/lib/lxc -o mountpoint=/var/lib/lxc

### Container Creation

I created a CentOS container using ZFS as a backing store; on Arch, the CentOS template requires the AUR package yum:

[root]# lxc-create --name=centos-builder --bdev=zfs --template=centos
Host CPE ID from /etc/os-release:
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ...
Cache found. Updating...
Loaded plugins: fastestmirror
Setting up Update Process
Loading mirror speeds from cached hostfile
 * base: mirror.it.ubc.ca
 * extras: mirror.it.ubc.ca
 * updates: mirror.it.ubc.ca
base                                                                        | 3.7 kB     00:00
extras                                                                      | 3.4 kB     00:00
updates                                                                     | 3.4 kB     00:00
updates/primary_db                                                          | 3.1 MB     00:00
No Packages marked for Update
Loaded plugins: fastestmirror
Cleaning repos: base extras updates
0 package files removed
Update finished
Copy /var/cache/lxc/centos/x86_64/6/rootfs to /usr/lib/lxc/rootfs ...
Copying rootfs to /usr/lib/lxc/rootfs ...
Storing root password in '/var/lib/lxc/centos-builder/tmp_root_pass'
Expiring password for user root.
passwd: Success

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:

        '/var/lib/lxc/centos-builder/tmp_root_pass'

To reset the root password, you can do:

        lxc-start -n centos-builder
        lxc-attach -n centos-builder -- passwd
        lxc-stop -n centos-builder

A new dataset has been created with the name of the container.

[root]# zfs list                                                                      john@Archon
NAME                               USED  AVAIL  REFER  MOUNTPOINT
vault                              337G  93.3G    96K  none
vault/ROOT                        18.8G  93.3G    96K  none
vault/ROOT/default                18.8G  93.3G  10.2G  /
vault/home                         253G  93.3G   112G  /home
...
vault/var/lib/lxc                  224M   283G   100K  /var/lib/lxc
vault/var/lib/lxc/centos-builder   224M   283G   224M  /var/lib/lxc/centos-builder/rootfs

The config file made with the container can be found at /var/lib/lxc/centos-builder/config; it can be edited if the networking needs changing.

[root]# cat /var/lib/lxc/centos-builder/config
# Template used to create this container: /usr/share/lxc/templates/lxc-centos
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

lxc.network.type = veth
lxc.network.link = br0
lxc.network.hwaddr = fe:a1:a9:24:d8:54
lxc.rootfs = /var/lib/lxc/centos-builder/rootfs
lxc.rootfs.backend = zfs

# Include common configuration
lxc.include = /usr/share/lxc/config/centos.common.conf

lxc.arch = x86_64
lxc.utsname = centos-builder

### Start the Container

Start the container and log in to the console. The temporary root password can be found in the container directory, at /var/lib/lxc/centos-builder/tmp_root_pass.

[root]# lxc-start -n centos-builder

Now attach to the console with lxc-console; you should find yourself at the login prompt of a brand-new machine.

[root]# lxc-console -n centos-builder
CentOS release 6.8 (Final)
Kernel 4.8.4-1-ARCH on an x86_64

centos-builder login:
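To detach from lxc-console without stopping the container, press Ctrl+a followed by q, the default escape sequence.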

### Container Networking

Networking can now be set up in the container using the bridge: check the interface name and configure the network as you would on any other machine.

Check the interface:

[root@tex /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:49:f8:16:59:d9 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Here the interface shows up as eth0@if5. Use the prefix eth0 as the interface name.

I set up a static IP using systemd-networkd.

Since a static IP is being used, the file /usr/lib/systemd/network/80-container-host0.network, which would otherwise set up DHCP, needs to be masked.

[root@tex ~]# ln -sf /dev/null /etc/systemd/network/80-container-host0.network

Start and enable systemd-networkd.

[root@tex ~]# systemctl enable systemd-networkd
[root@tex ~]# systemctl start systemd-networkd

Create the configuration file.

[root@tex /]# nano /etc/systemd/network/vethernet.network
[Match]
Name=eth0

[Network]
DNS=192.168.0.1
Address=192.168.0.10/24
Gateway=192.168.0.1

Restart the service and you should have an IP.

[root@tex /]# systemctl restart systemd-networkd
[root@tex /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f2:5a:ef:12:40:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f05a:efff:fe12:40a4/64 scope link
       valid_lft forever preferred_lft forever

### Pre-Built Images

As an alternative to using templates, pre-built images can also be downloaded using the lxc-download script.

Running lxc-download walks you through an interactive prompt that lets you download an image from the listed options.
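For example (a rough sketch; the container name here is arbitrary), running the template with none of its arguments prints the image list and then prompts for a distribution, release, and architecture:

[root]# lxc-create -t download -n test-ct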

You can also specify the distribution, release, and architecture up front. The template's options can be listed with --help:

[root]# lxc-create -t download --help
LXC container image downloader

Special arguments:
[ -h | --help ]: Print this help message and exit.
[ -l | --list ]: List all available images and exit.

Required arguments:
[ -d | --dist <distribution> ]: The name of the distribution
[ -r | --release <release> ]: Release name/version
[ -a | --arch <architecture> ]: Architecture of the container

Optional arguments:
[ --variant <variant> ]: Variant of the image (default: "default")
[ --server <server> ]: Image server (default: "images.linuxcontainers.org")
[ --keyid <keyid> ]: GPG keyid (default: 0x...)
[ --keyserver <keyserver> ]: GPG keyserver to use
[ --no-validate ]: Disable GPG validation (not recommended)
[ --flush-cache ]: Flush the local copy (if present)
[ --force-cache ]: Force the use of the local copy even if expired

LXC internal arguments (do not pass manually!):
[ --name <name> ]: The container name
[ --path <path> ]: The path to the container
[ --rootfs <rootfs> ]: The path to the container's rootfs
[ --mapped-uid <map> ]: A uid map (user namespaces)
[ --mapped-gid <map> ]: A gid map (user namespaces)

Let's set up an Ubuntu 17.04 container non-interactively. When specifying LXC arguments, -- must be passed before the template-specific arguments.

[root]#  lxc-create -t download -n buntzy -B zfs -- \
                   --dist ubuntu --release zesty --arch amd64
Using image from local cache
Unpacking the rootfs

---
You just created an Ubuntu container (release=zesty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Since this container doesn't come with a root password, we need to attach directly to the container. There we can set a root password and from then on connect normally using lxc-console.

[root]# lxc-start -n buntzy
[root]# lxc-attach -n buntzy
root@buntzy:/#
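From the attached shell, setting a root password is all that's needed before logging in over lxc-console as before:

root@buntzy:/# passwd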

## LXD

LXD builds on the idea of image-based containers and functions similarly to the way lxc-download works. To use it on Arch, install the lxd (AUR) package and start the lxd service.

LXD defaults to unprivileged containers, which require user namespaces; since Arch does not have them enabled by default, you can either build your own kernel or run containers with the security.privileged=true configuration option. I take the latter approach in the following examples.
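If you'd rather not pass that flag on every launch, the same key can also be set on the default profile (shown here only as an alternative; in the examples below I pass it per container):

[root]# lxc profile set default security.privileged true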

### Setup

To configure LXD, run lxd init; an interactive prompt will walk you through the configuration.

[root]# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: vault/ARCHON/lxd
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]? no
LXD has been successfully configured.
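Since I answered no to creating a new pool, the dataset I pointed LXD at had to exist already. I had created it beforehand with something along these lines (the dataset name is specific to my pool layout):

[root]# zfs create -p vault/ARCHON/lxd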

### Networking

Available networks can be viewed with the lxc network command; listing them, I see the bridge I made previously.

[root]# lxc network list
+------+----------+---------+---------+
| NAME |   TYPE   | MANAGED | USED BY |
+------+----------+---------+---------+
| br0  | bridge   | NO      | 0       |
+------+----------+---------+---------+
| eno1 | physical | NO      | 0       |
+------+----------+---------+---------+

The default profile can be edited with lxc profile edit default.

[root]# lxc profile edit default
name: default
config: {}
description: Default LXD profile
devices: {}
usedby:

I added my network bridge.

name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
usedby:

### Remote Images

Using LXD, you can talk to remote image servers and download container images.

To list the images available from the default image server, run lxc image list images:. A long list of available images should appear, with entries in the following form.

| eda3a4fda048 | yes | Fedora 24 amd64 (20161028_01:27) | x86_64  | 227.93MB | Oct 28, 2016 at 12:00am (UTC) |
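The full list is long, so it helps to narrow it down; one simple approach (just filtering the table output) is to pipe it through grep, for example:

[root]# lxc image list images: | grep -i fedora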

To create a container from one of these images, specify the distribution, release, and architecture in the form distro/release/arch.

[root]# lxc launch images:fedora/24/amd64 hat -c security.privileged=true
Creating hat
Retrieving image: 100%
Starting hat

To view the current containers, run lxc list:

[root]# lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| hat  | RUNNING | 192.168.0.223 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

### Running Commands

Individual commands can be run inside the containers using the lxc exec command. It can be used to get a shell by running bash (or an alternate shell, as shown below), as well as to run one-off commands.

[root]# lxc exec hat -- dnf install cowsay
[root]# lxc exec hat -- sh -c 'cowsay "Containers are awesome!!!"'
 ___________________________
< Containers are awesome!!! >
 ---------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
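And to get the interactive shell mentioned above, simply run the shell itself:

[root]# lxc exec hat -- bash
[root@hat ~]#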

This is just a small look at what LXC can do. If you're interested in experimenting more with LXC and LXD, I would recommend reading the great series of posts by Stéphane Graber, the LXC and LXD project leader.