Linderhof

Linderhof — the smallest and most intimate of Ludwig II's Bavarian palaces, the only one he lived to see completed; built entirely to his own vision as a private retreat. (Wikipedia)

codeberg.org/opennomad/linderhof

a self-hosting stack based on ansible and docker compose that comes with email, web server, git hosting, matrix, monitoring, web analytics, calendar & contacts, backups, overlay networking, and intrusion prevention — no databases, no external services.

set enable_<service>: false in config.yml to disable any service — DNS records, Docker networks, and deployment tasks are all skipped automatically.

| service | toggle | default | powered by |
|---|---|---|---|
| web server | enable_caddy | on | caddy |
| email | enable_mail | on | docker-mailserver, rainloop |
| git hosting | enable_forgejo | on | forgejo |
| matrix homeserver | enable_tuwunel | on | tuwunel |
| monitoring | enable_monitoring | on | prometheus, grafana, loki, alloy |
| web analytics | enable_goaccess | on | goaccess |
| calendar & contacts | enable_radicale | on | radicale |
| backups | enable_restic | off | restic |
| overlay network | enable_nebula | on | nebula |
| image update alerts | enable_diun | on | diun |
| intrusion prevention | enable_fail2ban | on | fail2ban |

restic is off by default — it requires a Hetzner Storage Box for its backup target. enable it and configure restic_repository in config.yml once you have one.
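for example, a minimal set of toggles in config.yml could look like this. the toggle names come from the table above; the restic repository URL is a placeholder, not a value from this repo — substitute your own Storage Box details:

```yaml
# service toggles in config.yml (names from the table above)
enable_caddy: true
enable_mail: true
enable_goaccess: false   # example: skip web analytics

# placeholder repository URL — substitute your own Storage Box
enable_restic: true
restic_repository: "sftp:u123456@u123456.your-storagebox.de:/backups/linderhof"
```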

what you need

  • a domain name — from any registrar. you'll point its nameservers at Hetzner DNS, or manage DNS yourself
  • a Hetzner Cloud account with an API token (Read & Write) — used to provision the server and manage DNS records
  • local tools:
    • ansible and ansible-galaxy
    • direnv (optional but recommended — loads .envrc automatically)
    • ssh-keygen, openssl, envsubst (standard on most systems)

if you already have a server with SSH access and passwordless sudo, you can skip provisioning and jump straight to deploy.

setup

run the interactive setup wizard:

./setup.sh

it prompts for stack name, SSH key, admin username, server hostname, domain, and Hetzner API token, then generates all secrets. config is written to $XDG_CONFIG_HOME/linderhof/<stack>/ and won't overwrite existing files.

activate the stack and review the generated config:

direnv allow   # reads .stack file — or: export LINDERHOF_STACK=<stack>

vi $LINDERHOF_DIR/group_vars/all/config.yml
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml

install ansible collections:

ansible-galaxy collection install -r requirements.yml

deploy

full deployment order for a fresh server:

ansible-playbook playbooks/provision.yml          # create the server, write its IP to stack config
ansible-playbook playbooks/dns.yml                # create DNS zones and records
ansible-playbook playbooks/site.yml --tags bootstrap  # users, SSH hardening, packages, Docker
ansible-playbook playbooks/site.yml               # deploy all services
ansible-playbook playbooks/dkim_sync.yml          # generate DKIM keys and publish to DNS

provision creates the server on Hetzner, registers your SSH key, and writes the IP to your stack config automatically. default type is cx23 (2 vCPU, 4 GB); override with -e hcloud_server_type=cx33.

dns creates all zones and records based on your enable_* settings — disabled services get no DNS entries.

bootstrap connects as root (the only user on a fresh server), creates your admin user with passwordless sudo, hardens SSH, and installs base packages including Docker.

site.yml deploys all enabled services. subsequent runs are idempotent — safe to re-run to apply config changes.

note: on first deployment, the mail role briefly stops Caddy to acquire a Let's Encrypt certificate for the mail hostname via certbot standalone. Caddy is restarted immediately after. this only happens once.

dkim_sync generates DKIM keys for all mail domains, writes them to your stack config, and publishes the mail._domainkey DNS records. safe to re-run.

bring your own server

if you already have an Ubuntu server with SSH access:

  1. run ./setup.sh — enter the server's existing hostname and IP when prompted
  2. ensure your SSH key is authorized for the admin user and they have passwordless sudo — or run ansible-playbook playbooks/site.yml --tags bootstrap first if starting from root access
  3. skip provision.yml and dns.yml if you're managing DNS elsewhere
  4. run ansible-playbook playbooks/site.yml

multiple stacks

each stack is an independent deployment with its own inventory, vault, and secrets. to create a second stack:

./setup.sh          # enter a different stack name when prompted
echo other-stack > .stack && direnv allow

switch between stacks by changing LINDERHOF_STACK or updating .stack:

echo home > .stack && direnv allow
echo work > .stack && direnv allow

stack config lives at $XDG_CONFIG_HOME/linderhof/<stack>/:

<stack>/
  hosts.yml              # server connection info
  vault-pass             # vault encryption key (chmod 600)
  stack.env              # per-stack shell vars (DOCKER_HOST, etc.)
  group_vars/
    all/
      config.yml         # public ansible settings
      vault.yml          # encrypted secrets
      dns.yml            # DNS zone definitions
      overrides.yml      # optional: variable overrides
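a quick way to confirm which stack is active (assumes direnv or a manual export has set the variables; prints a fallback message if nothing is loaded):

```shell
# print the active stack name and list its config dir, if any
echo "stack: ${LINDERHOF_STACK:-unset}"
ls "${LINDERHOF_DIR:-/nonexistent}" 2>/dev/null || echo "no active stack config found"
```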

overriding variables

to override any variable without editing config.yml, create overrides.yml in the stack's group_vars/all/. ansible loads all files there automatically:

vi $LINDERHOF_DIR/group_vars/all/overrides.yml
# override mail hostname (e.g. during migration)
mail_hostname: mail2.example.com

# add extra static sites to caddy
caddy_sites:
  - example.com
  - example2.com

# add extra mail-hosted domains
mail_domains:
  - example.com
  - example2.com

secrets

sensitive data is stored in $LINDERHOF_DIR/group_vars/all/vault.yml, encrypted with ansible-vault. generated by setup.sh and never committed to the repo.

# view secrets
ansible-vault view $LINDERHOF_DIR/group_vars/all/vault.yml

# edit secrets (decrypts, opens editor, re-encrypts on save)
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml

after first mail deployment — DKIM

run dkim_sync.yml once after the first mail deployment — it generates DKIM keys for all mail domains, writes them to your stack config, and publishes the mail._domainkey DNS records automatically:

ansible-playbook playbooks/dkim_sync.yml

keys are stored in $LINDERHOF_DIR/group_vars/all/dkim.yml (plain file — DKIM public keys are not secret). safe to re-run; only generates keys for domains that don't have one yet.
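the shape of dkim.yml is presumably a map from mail domain to published record; the variable name and structure below are assumptions for illustration only — the actual contents are whatever dkim_sync writes:

```yaml
# hypothetical shape of dkim.yml — variable name, domain, and key value are placeholders
dkim_keys:
  example.com: "v=DKIM1; k=rsa; p=MIIBIjANBg..."
```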

common operations

services are deployed to /srv/<service>. each has a compose.yml and can be managed with docker compose.

docker compose

cd /srv/mail && docker compose logs -f
cd /srv/caddy && docker compose restart
cd /srv/forgejo && docker compose ps

reloading caddy

cd /srv/caddy && docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile

managing email users

# list accounts
docker exec mailserver setup email list

# add account
docker exec mailserver setup email add user@domain.com password

# delete account
docker exec mailserver setup email del user@domain.com

# update password
docker exec mailserver setup email update user@domain.com newpassword

to manage users via ansible, edit mail_users in the vault and run:

ansible-playbook playbooks/mail.yml --tags users
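mail_users in the vault is presumably a list of account entries; a hypothetical sketch of its shape (field names are assumptions — check the structure setup.sh generated in your vault.yml):

```yaml
# hypothetical structure — actual field names may differ in your vault.yml
mail_users:
  - address: user@example.com
    password: "{SHA512-CRYPT}$6$..."
```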

managing email aliases

docker exec mailserver setup alias list
docker exec mailserver setup alias add alias@domain.com target@domain.com
docker exec mailserver setup alias del alias@domain.com

managing forgejo

docker exec forgejo forgejo admin user list
docker exec forgejo forgejo admin user create --username myuser --password mypassword --email user@domain.com
docker exec forgejo forgejo admin user change-password --username myuser --password newpassword

monitoring

# reload prometheus config
docker exec prometheus kill -HUP 1

# restart alloy
cd /srv/monitoring && docker compose restart alloy

nebula overlay network

nebula runs directly on the host (not in Docker). certificates live in /etc/nebula/.

# sign a client certificate
cd /etc/nebula
nebula-cert sign -name "laptop" -ip "192.168.100.2/24"
# copy laptop.crt, laptop.key, ca.crt to client
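on the client, those three files are referenced from a nebula config file. a minimal sketch using standard nebula config keys — the lighthouse overlay IP (assumed to be the server at 192.168.100.1) and its public address are placeholders:

```yaml
# minimal nebula client config sketch — 203.0.113.10 and the overlay IPs are placeholders
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/laptop.crt
  key: /etc/nebula/laptop.key
static_host_map:
  "192.168.100.1": ["203.0.113.10:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
listen:
  host: 0.0.0.0
  port: 0
tun:
  dev: nebula1
firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any
```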

backups

docker exec restic restic snapshots
docker exec restic restic backup /data
docker exec restic restic restore latest --target /restore

upstream documentation