Backups with btrbk
The storage setup of my home server is btrfs raid1 over two dm-crypt'ed 16 TB HDDs, cached with bcache on NVMe. It works fine, but for the PostgreSQL database and my home directory I prefer full NVMe speed.
Therefore I've put those two directories on (dm-crypt'ed) btrfs subvolumes
directly on the NVMe drive. Thanks to the DUP profile there is some protection against bitrot.
But it's still a single device which may simply die. Independently of full backups,
I was doing a daily rsync onto the main drives, but there is a faster and more capable
way.
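For context, the DUP profile keeps two copies of every data and metadata block on the same device, so a corrupted copy can be repaired from the other one. A rough sketch of creating and inspecting such a filesystem (device path and mountpoint are placeholders, not my actual commands):

mkfs.btrfs --data dup --metadata dup /dev/mapper/nvme-crypt   # duplicate both data and metadata
btrfs filesystem df /mnt/nvme                                 # shows which profiles are in use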
Enter btrbk, which operates on btrfs subvolumes.
It uses btrfs' native send capability to copy subvolumes between filesystems efficiently.
Additionally, it makes it easy to keep a number of historic subvolume snapshots. They use copy-on-write, so they take up minimal extra space. This lets me recover files quickly or compare the filesystem's state over the last few days.
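Under the hood this is the same mechanism as a manual btrfs send/receive. An incremental transfer done by hand would look roughly like this (paths and snapshot names are purely illustrative; btrbk handles its own naming and parent selection):

# send works only on read-only snapshots:
btrfs subvolume snapshot -r /mnt/nvme/home /mnt/nvme/snaps/home.20240102
# incremental stream against the previous snapshot, received on the other pool:
btrfs send -p /mnt/nvme/snaps/home.20240101 /mnt/nvme/snaps/home.20240102 \
  | btrfs receive /mnt/raid1/backs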
The config is a bit tricky, which is why I'm posting this. My full backup config is below, divided into three sections with explanations.
timestamp_format        long
snapshot_preserve       14d
snapshot_preserve_min   2d     # defaults to 'all'
The source definition. This combination of preserve options keeps
daily snapshots for the last two weeks and removes older ones.
target_preserve         7d
target_preserve_min     latest   # defaults to 'all'
What to do with the subvolume copies at the target directory. The above combination of options keeps the last seven days of snapshot copies.
send_compressed_data yes
volume /run/btrbk-work
  target /home/poligon/backs/btrbk_snaps
  subvolume home_zdzichu
  subvolume var_lib_pgsql
The job definition. /run/btrbk-work is a directory where I temporarily mount
the NVMe drive's root volume, with the subvolumes beneath it. /home/poligon/backs/btrbk_snaps
is the directory on my main (raid1) pool where the subvolume copies are stored.
The last two lines are the specific subvolumes to copy.
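Before putting this into cron it's worth checking what btrbk would actually do. Assuming the config was saved as /etc/btrbk/btrbk.conf (one of the paths btrbk searches by default), a dry run and the listing commands look like this:

# simulate a run, print planned actions, change nothing:
btrbk -c /etc/btrbk/btrbk.conf -v dryrun
# show the snapshots and backups btrbk tracks for this config:
btrbk -c /etc/btrbk/btrbk.conf list snapshots
btrbk -c /etc/btrbk/btrbk.conf list backups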
That works for me. btrbk is run from cron.daily/ by a short script
which ensures everything is mounted where it should be.
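A minimal sketch of what such a wrapper might look like; the device name and mount details are assumptions, the real script may differ:

#!/bin/sh
# /etc/cron.daily/btrbk -- hypothetical wrapper around btrbk
set -eu
# mount the NVMe root volume where the btrbk config expects it
mkdir -p /run/btrbk-work
mountpoint -q /run/btrbk-work || mount /dev/mapper/nvme-crypt /run/btrbk-work
# the raid1 pool holding the target directory should already be mounted
mountpoint -q /home || exit 1
# quiet run; cron mails the output if something fails
btrbk -q run
umount /run/btrbk-work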
020/100 of #100DaysToOffload
Tomasz Torcz