Recently I was looking to install an application on Linux with a large number of dependencies: TeXstudio and its TeX Live libraries. I wasn’t sure I wanted the packages sitting around on my computer long term, and I didn’t feel like micromanaging the dependencies with a minimal install. This seemed like the perfect chance to experiment with Linux containers.
There are multiple containerization implementations on Linux right now, ranging from the extremely customizable and more general (LXC) to the more specific application-deployment technology (Docker). Since I wasn’t going to be deploying software images, using Docker didn’t make sense. My use also didn’t call for something extremely customizable, and since I wanted a relatively quick setup, I chose an alternative: systemd-nspawn.
As you may have guessed from the name, systemd-nspawn is part of systemd. freedesktop.org describes it as being used to “run a command or OS in a light-weight namespace container. In many ways it is similar to chroot(1), but more powerful since it fully virtualizes the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name.”
systemd-nspawn is extremely easy to use. In its most basic configuration it can be used to “spawn” a container simply by invoking systemd-nspawn in a directory. Taking full advantage of its features, however, involves using more specific options, as well as the many systemd commands that integrate well with it.
Container Setup
Host Filesystem
If you are using btrfs or ZFS you might want to create an individual subvolume or dataset for your containers.
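On btrfs, the equivalent of the ZFS datasets below would be a subvolume per container. A sketch, assuming /var/lib/machines lives on a btrfs filesystem:

```shell
# Create a subvolume for all machines, and one per container
# (hypothetical layout; adjust paths to your own setup).
btrfs subvolume create /var/lib/machines
btrfs subvolume create /var/lib/machines/tex
```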
I created a new ZFS dataset for my machines:
[root]# zfs create vault/machines -o mountpoint=/var/lib/machines
Then a new dataset for the container:
[root]# zfs create vault/machines/tex -o mountpoint=legacy
And mounted them in fstab:
vault/machines     /var/lib/machines     zfs rw,relatime,xattr,noacl 0 0
vault/machines/tex /var/lib/machines/tex zfs rw,relatime,xattr,noacl 0 0
Create Container
I’m using Arch, so it made sense to install an Arch base. Depending on what you’re comfortable with, or what your host system is, you may want to set up an alternative. One of the really cool aspects of containers is that, like virtualization, you can set up whatever type of system you choose.
Because containers can share the kernel of the host, it’s not necessary to install another Linux kernel when setting up the container. On Arch, use --ignore linux to skip it, and the -i option to avoid auto-confirmation.
[root]# pacstrap -i -c -d /var/lib/machines/tex base --ignore linux
==> Creating install root at /var/lib/machines/tex
==> Installing packages to /var/lib/machines/tex
:: Synchronizing package databases...
 archzfs is up to date
 core is up to date
 extra is up to date
 community is up to date
 multilib is up to date
:: linux is in IgnorePkg/IgnoreGroup. Install anyway? [Y/n] n
Boot
Now it’s time to boot into the container. The container will share the host’s IP.
[root]# systemd-nspawn --boot --directory=/var/lib/machines/tex
Spawning container tex on /var/lib/machines/tex.
Press ^] three times within 1s to kill container.
systemd 231 running in system mode. (+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization systemd-nspawn.
Detected architecture x86-64.

Welcome to Arch Linux!
You should be sitting at a login prompt. Log in as root with no password:
Arch Linux 4.6.4-1-ARCH (console)

tex login: root
[root@tex ~]#
Networking
While not entirely necessary for my use, a separate IP can be given to the container. It can be configured to obtain an IP via DHCP, or be manually assigned a static one. Alternatively, the container can share the networking and IP of the host. Depending on the application being run, for example in server applications, having an individual IP for each container can be very useful.
I chose to take the route of setting up a network bridge on the host allowing the container to have its own IP.
Host Networking
The easiest way to configure networking is to use systemd-networkd on the host.
Configure systemd-resolved
On the host and container we can use systemd-resolved in conjunction with systemd-networkd to manage DNS.
Start and enable systemd-resolved, then delete /etc/resolv.conf and link the resolved-managed file in its place:
[root]# systemctl start systemd-resolved
[root]# systemctl enable systemd-resolved
[root]# ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
Create a Bridge
This step is not required, but will give the container its own IP through a virtualized interface.
Host Networking Setup
On the host, remove any conflicting configuration files from /etc/systemd/network.
First, create a bridge:
[root]# nano /etc/systemd/network/bridge.netdev
[NetDev]
Name=br0
Kind=bridge
Set up the bridge network:
[root]# nano /etc/systemd/network/bridge.network
[Match]
Name=br0

[Network]
DNS=192.168.0.1
Address=192.168.0.2/24
Gateway=192.168.0.1
Assign the bridge an interface:
[root]# nano /etc/systemd/network/ethernet.network
[Match]
Name=eno1

[Network]
Bridge=br0
Restart systemd-networkd after the setup:
[root]# systemctl restart systemd-networkd
Container Networking Setup
If systemd-resolved is used to manage DNS on the host, it can also be used inside the container to manage resolv.conf. Otherwise, edit /etc/resolv.conf in the container manually.
If a static IP is being used, the file /usr/lib/systemd/network/80-container-host0.network, which would have set up DHCP, needs to be masked:
[root@tex ~]# ln -sf /dev/null /etc/systemd/network/80-container-host0.network
Start and enable systemd-networkd:
[root@tex ~]# systemctl enable systemd-networkd
[root@tex ~]# systemctl start systemd-networkd
Now the container’s networking can be set up. The virtual interface we are matching is called host0. Create the configuration file and fill in your settings; mine is set up with a static IP.
[root@tex ~]# nano /etc/systemd/network/vethernet.network
[Match]
Name=host0

[Network]
DNS=192.168.0.1
Address=192.168.0.10/24
Gateway=192.168.0.1
[root@tex ~]# systemctl restart systemd-networkd
The container can now be booted with the bridge:
[root]# systemd-nspawn --boot \
    --directory=/var/lib/machines/tex \
    --network-bridge=br0
Check that the static IP specified is in use in the container:
[root@tex ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: host0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:9e:aa:9a:22:78 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.10/24 brd 192.168.0.255 scope global host0
       valid_lft forever preferred_lft forever
    inet6 fe80::89e:aaff:fe9a:2278/64 scope link
       valid_lft forever preferred_lft forever
The bridge should be visible on the host as well:
[root]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 74:d0:2b:7d:2b:eb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::76d0:2bff:fe7d:2beb/64 scope link
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:3d:45:70:0d:fd brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::343d:45ff:fe70:dfd/64 scope link
       valid_lft forever preferred_lft forever
5: vb-tex@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether ce:3b:19:51:e4:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::cc3b:19ff:fe51:e406/64 scope link
       valid_lft forever preferred_lft forever
Configure systemd-nspawn Unit Override
While it’s possible to continue starting the container with systemd-nspawn, there is a tool called machinectl that simplifies the process and gives a nice command-line interface. In order to use our new configuration with machinectl, we need to edit the default systemd unit.
Here’s the original, unedited ExecStart:
[Service]
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth -U --settings=override --machine=%i
Edit the configuration to match your network setup, and add any additional settings you require.
[root]# systemctl edit systemd-nspawn@tex
[Service]
ExecStart=
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot \
    --link-journal=try-guest \
    --network-bridge=br0 -U \
    --settings=override \
    --machine=%i
Now it is possible to start the container with machinectl.
Root Login With machinectl
Because of how pam_securetty works on Arch, it is not possible to log in as root with machinectl by default. To allow root login with machinectl, pts/0 needs to be added to /etc/securetty in the container.
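A minimal way to do this from inside the container is a one-line append. The sketch below writes to a temporary copy so it is safe to run anywhere; in the real container, the target file is /etc/securetty:

```shell
# Append pts/0 so pam_securetty accepts root on the pseudo-terminal
# that machinectl login allocates. A temp file stands in for
# /etc/securetty here, for a safe dry run.
securetty="$(mktemp)"
echo "pts/0" >> "$securetty"
grep -qx "pts/0" "$securetty" && echo "pts/0 enabled"
```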
Now you can start and log in to the container:
[root]# machinectl start tex
[root]# machinectl login tex
The system can now be configured like any other system where the command line is accessible.
Running Graphical Applications
It is possible to run graphical applications inside of the container by sharing the host’s Xorg server with the container, but it does require a little additional setup.
Setup Access to X
The xhost command gives permission to connect to a user’s X server. Run it on the host as the regular user whose session the container’s application should connect to. An alternative is using xauth and sharing access via your Xauthority file.
Note that xhost exposes your X server to everyone; only use it on trusted systems.
[john]$ xhost +local: non-network local connections being added to access control list
It can be set back to normal with xhost -.
Check the host’s $DISPLAY environment variable as the regular user; the container’s should be set to match it.
[john]$ echo $DISPLAY :0
In the container, set the variable to match, e.g. if it was :0:
[root]# export DISPLAY=:0.0
I typically set up an alias that opens my X server using xhost when I start an application, and closes it after:
alias tex-run="xhost +local:; sudo machinectl login tex; xhost -;"
Bind X Variables
Xorg has certain variables that need to be passed in; add them to the start options:
[root]# systemctl edit systemd-nspawn@tex
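The options in question are binds for the X11 socket and the user’s Xauthority file; these are the same lines that appear in the final unit file below, with my username as an example:

```
--bind=/tmp/.X11-unix \
--bind-ro=/home/john/.Xauthority:/home/john/.Xauthority
```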
Run Application
I want to run texstudio in the container, so I created a user for it in the container so that it’s not running as root:
[root]# useradd -m -s /usr/bin/zsh texstudio
Now I can set the DISPLAY environment variable in the user’s shell configuration file, ~/.zshrc.
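For example, a sketch assuming the host’s display is :0, as checked earlier:

```shell
# Persist the display setting for the texstudio user so graphical
# applications launched from its shell know which X server to use.
echo 'export DISPLAY=:0' >> "$HOME/.zshrc"
```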
In order to give the application running in the container access to the host system, I shared a directory from the host:
[root]# systemctl edit systemd-nspawn@tex
My final unit file ended up looking like this:
[Service]
ExecStart=
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot \
    --link-journal=try-guest \
    --network-bridge=br0 -U \
    --settings=override \
    --machine=%i \
    --bind=/tmp/.X11-unix \
    --bind-ro=/home/john/.Xauthority:/home/john/.Xauthority \
    --bind=/home/john/share:/home/texstudio/share
To run my containerized application, I now just execute the machinectl login command and log in:
[root]# machinectl login tex
Connected to machine tex. Press ^] three times within 1s to exit session.

Arch Linux 4.7.4-1-ARCH (pts/0)

tex login: texstudio
Password:
tex% texstudio
It surprised me how easy it was to run a graphical application in a container - when I ran texstudio I must admit that I didn’t expect to see it pop up on the first try.
Closing thoughts
I set up the container partly for fun, but also because I didn’t feel like installing the massive number of dependencies TeX Live has on my main system, so I put them in the container instead. This way, when I’m finished using it, I can just destroy the container and that will be that. I can also store the container somewhere else until I need it, e.g. on my server, and transfer it back to my computer when the time comes. Containerization is such an interesting technology, and it seems like it’s really getting its time on Linux.