VM Organization

Over the past couple of years, my Proxmox homelab has evolved into a deliberate separation of concerns across VMs and nodes. Here are a few examples:

  • Recreation - Audiobooks, recipes, fun web scrapers that notify me when items drop below a target price
  • Business - Invoicing, GitLab, client databases
  • Security Layer - WAF hub, auth, secrets management
  • Sandbox - Dev/practice environments for testing new tools

This separation isn't just organizational. The Business and Security nodes run my most bleeding-edge (for me, anyway) security measures and IaC practices, and they get the most rigorous backup strategy.

The Strategy

Everything runs on Proxmox, with PBS (Proxmox Backup Server) capturing VM-level snapshots. But for elevated concerns—client data, invoicing, anything that would actually hurt if lost—I add Backblaze B2 as an offsite layer.
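
For reference, a PBS-backed VM backup boils down to a vzdump call like the one below (a sketch: the VM ID 101 and the storage name pbs are placeholders, and in practice this runs as a scheduled job rather than a manual command):

# Snapshot-mode backup of a single VM to a PBS-backed storage
# (VM ID and storage name are placeholders)
vzdump 101 --storage pbs --mode snapshot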

The key insight: I focus exclusively on critical data. Everything else is Infrastructure as Code. If I lose a VM, I rebuild it from config. What I can't recreate are the databases—the actual business data.

Phase 1: Encrypted Database Dumps

A dedicated backup user runs nightly scripts that dump and encrypt databases. Here's the Bookstack backup as an example:

#!/bin/bash
# /home/backupuser/scripts/backup-bookstack.sh
# Runs as 'backupuser' via cron - no sudo required for docker exec

set -euo pipefail

BACKUP_DIR="/home/backupuser/dumps/bookstack"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Dump the MariaDB database by running mysqldump inside the Bookstack container.
# BOOKSTACK_DB_PASS is expected in the environment (e.g., sourced from an env
# file by the cron entry); with `set -u`, the script fails fast if it's unset.
docker exec bookstack-db mysqldump \
    --user=bookstack \
    --password="${BOOKSTACK_DB_PASS}" \
    --single-transaction \
    --routines \
    --triggers \
    bookstack | gzip > "$BACKUP_DIR/bookstack_${TIMESTAMP}.sql.gz"

# Encrypt the dump with GPG (symmetric encryption)
# The passphrase is stored in a file only readable by backupuser;
# --pinentry-mode loopback lets --passphrase-file work on GnuPG 2.1+
gpg --batch --yes --symmetric \
    --cipher-algo AES256 \
    --pinentry-mode loopback \
    --passphrase-file /home/backupuser/.backup-passphrase \
    "$BACKUP_DIR/bookstack_${TIMESTAMP}.sql.gz"

# Remove the unencrypted dump - we only keep the .gpg file
rm "$BACKUP_DIR/bookstack_${TIMESTAMP}.sql.gz"

# Cleanup: remove encrypted dumps older than retention period
find "$BACKUP_DIR" -name "*.sql.gz.gpg" -mtime +$RETENTION_DAYS -delete

echo "[$(date)] Bookstack backup complete: bookstack_${TIMESTAMP}.sql.gz.gpg"

The backup user can run docker exec against the containers it backs up and nothing else. Principle of least privilege.
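
For completeness, here's roughly how the nightly run is wired up (a sketch: the env file path, schedule, and log location are illustrative choices, not my literal crontab):

# backupuser's crontab (illustrative)
# .backup-env exports BOOKSTACK_DB_PASS and is chmod 600, owned by backupuser
15 2 * * * . /home/backupuser/.backup-env && /home/backupuser/scripts/backup-bookstack.sh >> /home/backupuser/logs/backup.log 2>&1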

Phase 2: Restic to Backblaze B2

A different user handles the offsite sync. This user has limited sudo permissions—just enough to read the encrypted dumps from Phase 1. Restic handles deduplication and uploads to Backblaze:

#!/bin/bash
# /home/resticuser/scripts/sync-to-backblaze.sh
# Runs as 'resticuser' with specific sudoers permissions

set -euo pipefail

# Restic repository in Backblaze B2
export RESTIC_REPOSITORY="b2:homelab-critical-backups"
export RESTIC_PASSWORD_FILE="/home/resticuser/.restic-password"

# B2 credentials are expected in the environment (e.g., sourced from a
# credentials file by the cron entry); re-exporting makes them visible to restic
export B2_ACCOUNT_ID="${B2_ACCOUNT_ID}"
export B2_ACCOUNT_KEY="${B2_ACCOUNT_KEY}"

# Directories containing encrypted dumps from Phase 1
BACKUP_SOURCES=(
    "/home/backupuser/dumps/bookstack"
    "/home/backupuser/dumps/invoicing"
    "/home/backupuser/dumps/pocketbase"
)

# Run restic backup with deduplication
# --verbose shows progress, --exclude-caches skips cache directories
restic backup \
    --verbose \
    --exclude-caches \
    --tag "nightly" \
    --tag "$(hostname)" \
    "${BACKUP_SOURCES[@]}"

# Prune old snapshots: keep 7 daily, 4 weekly, 6 monthly
restic forget \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    --prune

# Verify repository integrity (runs weekly via separate cron)
# restic check --read-data-subset=10%

echo "[$(date)] Backblaze sync complete"
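
The weekly integrity check mentioned in that comment lives in its own small script on a separate cron; a minimal sketch, assuming the same credential setup as the nightly sync:

#!/bin/bash
# /home/resticuser/scripts/weekly-check.sh (illustrative filename)
set -euo pipefail

export RESTIC_REPOSITORY="b2:homelab-critical-backups"
export RESTIC_PASSWORD_FILE="/home/resticuser/.restic-password"
# B2 credentials come from the environment, same as the nightly sync

# Download and verify a random 10% of the repository's data against its
# checksums - catches silent corruption without paying for a full download
restic check --read-data-subset=10%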

One honest caveat on deduplication: GPG uses a fresh random session key on every run, so each nightly dump is unique ciphertext, and restic can't dedupe across runs even when the underlying data hasn't changed. What actually keeps storage minimal is that the dumps are small to begin with and the forget policy caps how many snapshots accumulate.
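
To see what deduplication and pruning are actually buying, restic can report both the logical size and the on-disk (deduplicated) size of the repository:

# Total size of all files referenced by snapshots (logical size)
restic stats --mode restore-size

# Actual size of the deduplicated blobs stored in B2
restic stats --mode raw-data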

What It Costs

Because I'm laser-focused on critical data and everything else is IaC-reproducible, my total backup footprint is ~15GB.

That's... checks notes... about $0.03/month in Backblaze B2 storage: the first 10 GB are free, and the remaining ~5 GB bills at $0.006/GB/month. 😄

Results

This hyper-focused strategy gives me solid 3-2-1 coverage:

  • 3 copies - Live database, PBS snapshot, Backblaze
  • 2 media types - Local NVMe, cloud object storage
  • 1 offsite - Backblaze B2

And yes, the restores have been tested. Successfully. 😉 Let's just say I saw plenty of other people's pain while lurking around Stack Overflow threads on this topic, and decided to go the extra mile.
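
For reference, a restore drill for the Bookstack example goes roughly like this (a sketch: the timestamp is a placeholder, the same restic environment variables as the sync script are assumed, and you'd replay into a scratch database first):

# Pull the latest nightly snapshot back down
restic snapshots --tag nightly
restic restore latest --target /tmp/restore-test

# Decrypt, decompress, and replay the dump
gpg --batch --pinentry-mode loopback \
    --passphrase-file /home/backupuser/.backup-passphrase \
    --decrypt /tmp/restore-test/home/backupuser/dumps/bookstack/bookstack_YYYYMMDD_HHMMSS.sql.gz.gpg \
  | gunzip \
  | docker exec -i bookstack-db mysql --user=bookstack --password="${BOOKSTACK_DB_PASS}" bookstack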

It was a great learning experience. And more importantly, I have peace of mind that if my entire homelab somehow gets wiped, I can rebuild what matters with a bit of work. The VMs are just infrastructure. The data is what counts.