Having been introduced to ZFS on my server, I started to feel as though I was using an inferior filesystem on Linux with ext4. I had become used to the copy-on-write features that make filesystems like ZFS so useful, and started to realize how useful they could be on the desktop as well. The first thing I did was attempt to set up ZFS on Linux. While it is quite easy to install and use ZFS on an existing Linux system, running root on ZFS, at least on Arch Linux at the time of writing, proves to be much more difficult. After multiple installation attempts - first with rEFInd, then GRUB, then systemd-boot - and multiple failed boots and error messages while following the Arch Linux wiki to the letter, I decided that if I had to battle this hard to get it working, ZFS just didn’t have the kind of documentation or adoption on Linux that I was looking for. Fortunately there is a very easy way to set up a copy-on-write filesystem that runs natively on Linux: use Btrfs…
While Btrfs hasn’t been battle-tested in the field for a decade like ZFS, and some people say it is unstable, the Btrfs developers have stated that the on-disk format is stable. Seeing this, I decided it was time to give it a try on my own computer and form my own opinions.
Installing Linux on Btrfs Link to heading
During the install process it is clear that Btrfs is not a second-class citizen on Linux. The initial process of getting a system up and running was quite similar to what I am used to with other filesystems. Unlike ZFS, which has a complicated install process because the code must come from an unofficial repository and kernel modules have to be dealt with, Btrfs support is already in the kernel.
Setting up Disks for Btrfs Link to heading
To use the userspace facilities, the btrfs-progs package is needed.
I did my install on a single SSD, but if I wanted to use soft RAID the process would be quite similar and would just involve multiple Btrfs partitions.
Partition the disk: make the boot partition and a single partition for Btrfs.
[root@archiso /] $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 238.5G 0 disk
├─sdb1 8:17 0 512M 0 part
└─sdb2 8:18 0 238G 0 part
Format the partitions. I’m using a single disk; however, multi-disk RAID is as easy as listing multiple devices here.
[root@archiso ~] $ mkfs.btrfs -L Butters /dev/sdb2
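For the multi-device RAID case mentioned above, the devices are simply listed after the options. A hypothetical sketch (the second device /dev/sdc2 is an assumption, not part of my setup):

```shell
# Mirror both data and metadata across two partitions (RAID1) - illustrative only
mkfs.btrfs -L Butters -d raid1 -m raid1 /dev/sdb2 /dev/sdc2
```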
I’m using a UEFI system so I need an additional partition to boot from.
[root@archiso ~] $ mkfs.vfat -F32 /dev/sdb1
Subvolumes Link to heading
Subvolumes can be compared to partitions; although they are not the same thing, they are used much the way partitions are in a regular install.
First create a mountpoint and mount the Btrfs partition:
[root@archiso ~] $ mkdir /mnt/btrfs
[root@archiso ~] $ mount -t btrfs /dev/sdb2 /mnt/btrfs
Instead of just installing my system to the root of my Btrfs pool, I wanted to use subvolumes so that snapshots could be taken. Deciding how many subvolumes to make is difficult. The argument can be made that it is useful to have separate subvolumes for var, tmp, and other directories, but for the sake of easily restoring the state of the computer from a snapshot and simplifying backups, I chose to have a single / subvolume “ROOT”, a single /home subvolume “home”, and a subvolume for storing snapshots, “snapshots”.
Create the “ROOT”, “home”, and “snapshots” subvolumes.
[root@archiso /mnt/btrfs] $ btrfs subvolume create /mnt/btrfs/ROOT
[root@archiso /mnt/btrfs] $ btrfs subvolume create /mnt/btrfs/home
[root@archiso /mnt/btrfs] $ btrfs subvolume create /mnt/btrfs/snapshots
Btrfs subvolumes have IDs and parent IDs, which let them keep track of their hierarchy. By default the root node’s ID is 5. The command btrfs subvolume list -p ${directory} shows the three subvolumes, their IDs, and their parent ID 5.
[root@archiso /mnt/btrfs] $ btrfs subvolume list -p .
ID 257 gen 8 parent 5 top level 5 path ROOT
ID 258 gen 9 parent 5 top level 5 path home
ID 259 gen 10 parent 5 top level 5 path snapshots
So my hierarchy tree looks like:
Butters (ID 5)
├── home
├── ROOT
└── snapshots
The Btrfs pool at /mnt/btrfs can now be unmounted.
Install to Subvolumes Link to heading
In order to install to “ROOT”, it needs to be mounted in place of the / directory. It’s important to turn compression on here so that files are compressed during the install. Select the subvolume to mount with subvol=${subvolume}.
[root@archiso /] $ mount -o compress=lzo,subvol=ROOT /dev/sdb2 /mnt
[root@archiso /] $ mkdir /mnt/home /mnt/boot
[root@archiso /] $ mount -o compress=lzo,subvol=home /dev/sdb2 /mnt/home
[root@archiso /] $ mount /dev/sdb1 /mnt/boot
From here on the install is normal until fstab is configured.
Configure fstab Link to heading
Again specify the subvolume, and mount the Btrfs pool somewhere under /mnt.
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
# Btrfs pool
UUID=${drive-UUID} /mnt/Butters btrfs rw,noatime,discard,ssd,space_cache 0 0
# ROOT
UUID=${drive-UUID} / btrfs rw,noatime,ssd,discard,space_cache,subvolid=257,subvol=/ROOT 0 0
# home
UUID=${drive-UUID} /home btrfs rw,noatime,ssd,discard,space_cache,subvolid=258,subvol=/home 0 0
# Boot and other partitions...
NOTE: subvolid may not be necessary; if it is kept, it will need to be updated after a rollback, since the restored subvolume gets a new ID.
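Since subvol= already identifies the subvolume by path, one way to avoid touching fstab after every rollback is to drop subvolid and rely on subvol alone. A sketch of the ROOT line under that assumption:

```
UUID=${drive-UUID} / btrfs rw,noatime,ssd,discard,space_cache,subvol=/ROOT 0 0
```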
Configure bootloader Link to heading
Setting up the bootloader is easy with systemd-boot. Similarly to the fstab setup, the “ROOT” subvolume needs to be specified.
# /boot/loader/entries/arch.conf
title Arch Linux
linux /vmlinuz-linux
# Intel microcode, only needed if an Intel CPU is in use
initrd /intel-ucode.img
initrd /initramfs-linux.img
options root=PARTUUID=68e329cd-01b9-45e7-b516-ba62c516b4e5 rw rootflags=subvol=ROOT
Snapshots Link to heading
Recurring Snapshots Link to heading
After finishing up the rest of the install and rebooting, regular snapshots can be set up with snapper, a Btrfs snapshot utility made by SUSE that takes snapshots and cleans up old ones on a schedule.
This is where the subvolume snapshots can now be made use of.
Under Arch Linux, snapper is in the official repositories. After installing the package, create a default configuration, which snapper stores under /etc/snapper/configs/:
snapper -c ${config-name} create-config ${subvolume-mountpoint}
So for the subvolumes “ROOT” and “home”, run:
snapper -c root create-config /
snapper -c home create-config /home
This creates a new subvolume “.snapshots” at the root of the specified subvolume. Snapper has a utility to help you roll back; however, it is inherently flawed and can mess up your filesystem layout due to how it performs the rollback.
To avoid this issue, delete the new subvolumes snapper just made. Instead, create a subvolume under the “snapshots” subvolume for each subvolume being snapshotted by snapper.
Delete snapper subvolumes
btrfs subvolume delete /.snapshots
btrfs subvolume delete /home/.snapshots
Create snapshots/ROOT_snaps and snapshots/home_snaps.
btrfs subvolume create /mnt/Butters/snapshots/ROOT_snaps
btrfs subvolume create /mnt/Butters/snapshots/home_snaps
Now these subvolumes can be mounted at the location snapper expects.
Make the mountpoints as the root user and mount the subvolumes.
mkdir /home/.snapshots
mkdir /.snapshots
mount -o compress=lzo,subvol=snapshots/home_snaps /dev/sdb2 /home/.snapshots
mount -o compress=lzo,subvol=snapshots/ROOT_snaps /dev/sdb2 /.snapshots
Now that the directories are set up, snapper can be set to run automatic snapshots and cleanups. The cleanup removes snapshots based on the config files in /etc/snapper/configs/, which can be adjusted.
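The timeline limits live in each of those config files; for example, /etc/snapper/configs/root contains variables like the following (the values here are illustrative, adjust to taste):

```
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
```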
systemctl start snapper-timeline.timer snapper-cleanup.timer
systemctl enable snapper-timeline.timer snapper-cleanup.timer
The snapper subvolumes should also be added to the fstab:
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
# Other drives...
UUID=${drive-UUID} /.snapshots btrfs rw,noatime,compress=lzo,ssd,discard,space_cache,subvolid=420,subvol=snapshots/ROOT_snaps 0 0
UUID=${drive-UUID} /home/.snapshots btrfs rw,noatime,compress=lzo,ssd,discard,space_cache,subvolid=421,subvol=snapshots/home_snaps 0 0
Rollback Snapshots Link to heading
To roll back to an old snapshot, boot into a restore medium (like the Arch installer) and mount the Btrfs pool.
To roll back “ROOT”, first delete or move the unwanted subvolume.
btrfs subvolume delete /mnt/Butters/ROOT
The dates of the snapshots are stored under ${snapshot-number}/info.xml if the date needs to be checked.
Checking the snapshot info
[root@Butters /mnt/Butters]$ cat snapshots/ROOT_snaps/995/info.xml
<?xml version="1.0"?>
<snapshot>
  <type>single</type>
  <num>995</num>
  <date>2016-04-03 07:00:05</date>
  <description>timeline</description>
  <cleanup>timeline</cleanup>
</snapshot>
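Rather than cat-ing info.xml files one at a time, a small loop can print the number and date of every snapshot. A sketch (a sample info.xml created under /tmp stands in for a real ROOT_snaps directory):

```shell
# Create one sample snapper info.xml to demonstrate the extraction
mkdir -p /tmp/ROOT_snaps/995
cat > /tmp/ROOT_snaps/995/info.xml <<'EOF'
<?xml version="1.0"?>
<snapshot>
  <num>995</num>
  <date>2016-04-03 07:00:05</date>
</snapshot>
EOF

# Print "number: date" for each snapshot directory
for dir in /tmp/ROOT_snaps/*/; do
  num=$(basename "$dir")
  date=$(sed -n 's:.*<date>\(.*\)</date>.*:\1:p' "$dir/info.xml")
  echo "$num: $date"
done
```

Pointing the loop at snapshots/ROOT_snaps in the mounted pool gives a quick overview of what is available to restore.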
Snapshots are read-only, so after selecting the right snapshot, take a read-write snapshot of it at the location of the old subvolume.
btrfs subvolume snapshot /mnt/Butters/snapshots/ROOT_snaps/${snapshot} /mnt/Butters/ROOT
After that the system can be rebooted.
Backups Link to heading
Since snapshots are not backups, it’s also important to have a good backup in place in addition to snapshots. The backup solution I found is Btrbk, a very configurable backup tool written in Perl. Btrbk was an easy install for me as it was in the Arch User Repository (AUR). It came with great documentation, as well as systemd timers and services. Btrbk takes snapshots and then backs up the selected snapshot to a variety of backup locations. Snapshots can then be kept or deleted based on a configured retention period.
Backup Configuration Link to heading
After installing Btrbk, an example configuration file can be found at /etc/btrbk/btrbk.conf.example. Copy the example config to /etc/btrbk/btrbk.conf.
The configuration options I changed were:
snapshot_dir - The location for the initial snapshot; if left as default, the snapshot is made in the “volume” directory.
volume - The pool that the subvolume being backed up is in.
subvolume - The subvolume that is going to be backed up.
target - The location where the backup will be placed.
snapshot_preserve_daily, snapshot_preserve_weekly, snapshot_preserve_monthly - How long to keep snapshots. I already keep snapshots with snapper, so I set all of these to zero.
target_preserve_daily, target_preserve_weekly, target_preserve_monthly - How long to keep backups.
For a basic backup everything else can be left as default.
I chose to create a new subvolume, snapshots/btrbk_snaps, to keep things organized; however, there should only ever be one snapshot here, so this is not strictly necessary.
Options apply to the most recent section encountered, so global options must be set before the volume sections.
My basic configuration ended up as:
snapshot_preserve_daily 0
snapshot_preserve_weekly 0
snapshot_preserve_monthly 0
target_preserve_daily 20
target_preserve_weekly 10
target_preserve_monthly all
snapshot_dir snapshots/btrbk_snaps
volume /mnt/Butters
subvolume ROOT
target send-receive /mnt/ButterBackup/ROOT
subvolume home
target send-receive /mnt/ButterBackup/home
Note: Indentation in the config is for readability only; it does not change the results.
This configuration is for sending backups somewhere else on the same system, in my case to another drive, but it can easily be configured to send backups over SSH to another computer or server.
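For the SSH case, only the targets change; a hypothetical fragment (the hostname, remote path, and key path are placeholders, not from my setup):

```
# Push backups to a remote machine over SSH
ssh_identity /etc/btrbk/ssh/id_ed25519

volume /mnt/Butters
  subvolume ROOT
    target send-receive ssh://backuphost/mnt/backup/ROOT
```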
Directory Structure Link to heading
So this is what my directory structure ended up looking like:
.
├── ButterBackup [POOL ONE]
│   ├── home
│   │   └── (btrbk home backups...)
│   └── ROOT
│       └── (btrbk ROOT backups...)
│
└── Butters [POOL TWO]
    ├── home
    │   └── (home directories...)
    ├── ROOT
    │   └── (root directories...)
    └── snapshots
        ├── btrbk_snaps
        │   ├── (btrbk home snapshots...)
        │   └── (btrbk ROOT snapshots...)
        ├── home_snaps
        │   └── (snapper home snapshots...)
        └── ROOT_snaps
            └── (snapper ROOT snapshots...)
Conclusion Link to heading
I’ve been using this system for several months with no problems. I have rolled back several times, and it has been as easy as booting into another Linux distro on my computer and moving snapshots around. This could probably be automated so that a subvolume to boot into is chosen at boot time. This is what snapper tries to achieve, but the way it does it is messy, as the resulting subvolume ends up in the wrong place. The problem is not simple to fix: snapshots are read-only, so somewhere along the line another, read-write snapshot has to be taken for the system to be usable. Once a solution arrives, Btrfs will have a feature similar to ZFS boot environments. I’m looking forward to it.
Thus far I have not had any file corruption that I am aware of, and I am happy with the resulting system. I have yet to use ZFS on Linux as my system root, but I’m looking forward to comparing the two once I get my problems ironed out.
Overall the future of Btrfs looks interesting, and I will be keeping an eye on it.