# Linderhof

> Linderhof — the smallest and most intimate of Ludwig II's Bavarian palaces, the only one he lived to see completed; built entirely to his own vision as a private retreat. (Wikipedia)

codeberg.org/opennomad/linderhof
a self-hosting stack based on ansible and docker compose that comes with:
- web server
- git server
- matrix homeserver
- monitoring
- web analytics
- calendar & contacts
- backups
- overlay network
- docker image update notifications
- intrusion prevention
other features include:

- built entirely on open-source software
- no databases and no external services required
## what you need

- a domain name — from any registrar. you'll point its nameservers at Hetzner DNS, or manage DNS yourself
- a Hetzner Cloud account with an API token (Read & Write) — used to provision the server and manage DNS records
- local tools:
  - `ansible` and `ansible-galaxy`
  - `direnv` (optional but recommended — loads `.envrc` automatically)
  - `ssh-keygen`, `openssl`, `envsubst` (standard on most systems)
if you already have a server with SSH access and passwordless sudo, you can skip provisioning and jump straight to deploy.
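in that case the inventory only needs to point at your machine. a minimal `hosts.yml` might look like the sketch below, using ansible's standard YAML inventory format; the hostname, IP, and user are illustrative, and `setup.sh` normally writes this file for you:

```yaml
# hosts.yml (illustrative — setup.sh generates the real one)
all:
  hosts:
    myserver:
      ansible_host: 203.0.113.10   # your server's public IP (example address)
      ansible_user: admin          # admin user with passwordless sudo
```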
## setup

run the interactive setup wizard:

```sh
./setup.sh
```
it walks you through: stack name, SSH key, admin username, server hostname, domain, Hetzner API token, and generates all secrets. config is written to $XDG_CONFIG_HOME/linderhof/<stack>/ and won't overwrite existing files.
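if you're unsure where that lands, the directory resolves with the usual XDG fallback. a small sketch (`home` is an illustrative stack name):

```shell
# resolve the stack config dir, falling back to ~/.config when
# XDG_CONFIG_HOME is unset
stack=home
config_root="${XDG_CONFIG_HOME:-$HOME/.config}"
printf '%s\n' "$config_root/linderhof/$stack"
```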
activate the stack and review the generated config:

```sh
direnv allow   # reads .stack file — or: export LINDERHOF_STACK=<stack>
vi $LINDERHOF_DIR/group_vars/all/config.yml
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml
```
install the required ansible collections:

```sh
ansible-galaxy collection install -r requirements.yml
```
## deploy

### provision a server (Hetzner)

```sh
ansible-playbook playbooks/provision.yml
```

creates the server, registers your SSH key, and writes the IP to your stack config automatically. the default server type is `cx23` (2 vCPU, 4 GB); override with `-e hcloud_server_type=cx33`.
### update DNS

```sh
ansible-playbook playbooks/dns.yml
```

creates all DNS zones and records for your domain. records are conditional on your `enable_*` settings — disabled services won't get DNS entries.
### bootstrap the server

first-time setup of the server (users, SSH hardening, packages, Docker):

```sh
ansible-playbook playbooks/bootstrap.yml
```

this connects as root (the only user on a fresh server), creates your admin user with passwordless sudo, sets passwords for root and the admin user, hardens SSH, and installs base packages.
### deploy services

```sh
ansible-playbook playbooks/site.yml
```

deploys all enabled services. subsequent runs are idempotent — safe to re-run to apply config changes.

note: on first deployment, the mail role briefly stops Caddy to acquire a Let's Encrypt certificate for the mail hostname via certbot standalone. Caddy is restarted immediately after. this only happens once — subsequent runs detect the existing certificate and skip it.
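since re-runs are cheap, you can also preview what a run would change before applying it with ansible's built-in check mode (note that not every role is guaranteed to fully support check mode):

```sh
# dry run: report what would change without touching the server
ansible-playbook playbooks/site.yml --check --diff
```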
## bring your own server

if you already have an Ubuntu server with SSH access:

- run `./setup.sh` — enter the server's existing hostname and IP when prompted
- ensure your SSH key is authorized for the admin user and they have passwordless sudo — or run `bootstrap.yml` first if starting from root access
- skip `provision.yml` and `dns.yml` if you're managing DNS elsewhere
- run `ansible-playbook playbooks/site.yml`
## multiple stacks

each stack is an independent deployment with its own inventory, vault, and secrets. to create a second stack:

```sh
./setup.sh                                 # enter a different stack name when prompted
echo other-stack > .stack && direnv allow
```

switch between stacks by changing `LINDERHOF_STACK` or updating `.stack`:

```sh
echo home > .stack && direnv allow
echo work > .stack && direnv allow
```
stack config lives at `$XDG_CONFIG_HOME/linderhof/<stack>/`:

```
<stack>/
  hosts.yml          # server connection info
  vault-pass         # vault encryption key (chmod 600)
  stack.env          # per-stack shell vars (DOCKER_HOST, etc.)
  group_vars/
    all/
      config.yml     # public ansible settings
      vault.yml      # encrypted secrets
      dns.yml        # DNS zone definitions
      overrides.yml  # optional: variable overrides
```
## service toggles

set `enable_<service>: false` in config.yml to disable a service. DNS records, Docker networks, and deployment tasks for that service will all be skipped automatically.

| variable | service |
|---|---|
| `enable_mail` | email (docker-mailserver + rainloop) |
| `enable_forgejo` | git hosting |
| `enable_tuwunel` | Matrix homeserver |
| `enable_monitoring` | Prometheus, Grafana, Loki, Alloy |
| `enable_goaccess` | web analytics |
| `enable_goaccess_sync` | rsync analytics reports to a remote host (off by default) |
| `enable_radicale` | CalDAV/CardDAV |
| `enable_restic` | encrypted backups (requires a Hetzner Storage Box — off by default) |
| `enable_nebula` | overlay network |
| `enable_diun` | Docker image update notifications |
| `enable_fail2ban` | intrusion prevention |
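for example, a config.yml trimmed down to a git-plus-monitoring setup might contain toggles like these (values illustrative; the actual defaults come from setup.sh):

```yaml
# config.yml — per-service toggles (illustrative values)
enable_forgejo: true
enable_monitoring: true
enable_mail: false           # no email on this stack
enable_restic: false         # off by default anyway
```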
## overriding variables

to override any variable without editing config.yml, create overrides.yml in the stack's group_vars/all/. ansible loads all files there automatically:

```sh
vi $LINDERHOF_DIR/group_vars/all/overrides.yml
```

```yaml
# override mail hostname (e.g. during migration)
mail_hostname: mail2.example.com

# add extra static sites to caddy
caddy_sites:
  - example.com
  - example2.com

# add extra mail-hosted domains
mail_domains:
  - example.com
  - example2.com
```
## secrets

sensitive data is stored in `$LINDERHOF_DIR/group_vars/all/vault.yml`, encrypted with ansible-vault. it is generated by setup.sh and never committed to the repo.

```sh
# view secrets
ansible-vault view $LINDERHOF_DIR/group_vars/all/vault.yml

# edit secrets (decrypts, opens editor, re-encrypts on save)
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml
```
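if the vault password ever leaks, ansible-vault can re-encrypt the file under a new one. the path below assumes the stack layout shown above; remember to also update the stack's `vault-pass` file to match:

```sh
# re-encrypt vault.yml under a new password (prompts for old and new)
ansible-vault rekey $LINDERHOF_DIR/group_vars/all/vault.yml
```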
## after first mail deployment — DKIM

run dkim_sync.yml once after the first mail deployment — it generates DKIM keys for all mail domains, writes them to your stack config, and publishes the `mail._domainkey` DNS records automatically:

```sh
ansible-playbook playbooks/dkim_sync.yml
```

keys are stored in `$LINDERHOF_DIR/group_vars/all/dkim.yml` (plain file — DKIM public keys are not secret). it's safe to re-run; it only generates keys for domains that don't have one yet.
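once the records are published you can sanity-check them from any machine (example.com stands in for one of your mail domains):

```sh
# the selector record should contain v=DKIM1 and a p= public key
dig +short TXT mail._domainkey.example.com
```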
## common operations

services are deployed to `/srv/<service>`. each has a compose.yml and can be managed with docker compose.

### docker compose

```sh
cd /srv/mail && docker compose logs -f
cd /srv/caddy && docker compose restart
cd /srv/forgejo && docker compose ps
```

### reloading caddy

```sh
cd /srv/caddy && docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```
### managing email users

```sh
# list accounts
docker exec mailserver setup email list

# add account
docker exec mailserver setup email add user@domain.com password

# delete account
docker exec mailserver setup email del user@domain.com

# update password
docker exec mailserver setup email update user@domain.com newpassword
```

to manage users via ansible, edit `mail_users` in the vault and run:

```sh
ansible-playbook playbooks/mail.yml --tags users
```

### managing email aliases

```sh
docker exec mailserver setup alias list
docker exec mailserver setup alias add alias@domain.com target@domain.com
docker exec mailserver setup alias del alias@domain.com
```
### managing forgejo

```sh
docker exec forgejo forgejo admin user list
docker exec forgejo forgejo admin user create --username myuser --password mypassword --email user@domain.com
docker exec forgejo forgejo admin user change-password --username myuser --password newpassword
```

### monitoring

```sh
# reload prometheus config
docker exec prometheus kill -HUP 1

# restart alloy
cd /srv/monitoring && docker compose restart alloy
```
### nebula overlay network

nebula runs directly on the host (not in Docker). certificates live in `/etc/nebula/`.

```sh
# sign a client certificate
cd /etc/nebula
nebula-cert sign -name "laptop" -ip "192.168.100.2/24"
# copy laptop.crt, laptop.key, ca.crt to the client
```
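on the client, the signed certificate is referenced from a nebula config file. a minimal sketch of the client side using nebula's standard config keys — the lighthouse address and overlay IPs are illustrative and depend on how your overlay was provisioned:

```yaml
# /etc/nebula/config.yml on the client (illustrative values)
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/laptop.crt
  key: /etc/nebula/laptop.key
static_host_map:
  # overlay IP of the lighthouse -> its public address (example values)
  "192.168.100.1": ["203.0.113.10:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any
```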
### backups

```sh
docker exec restic restic snapshots
docker exec restic restic backup /data
docker exec restic restic restore latest --target /restore
```
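restic can also enforce a retention policy. the flags below are standard restic; the numbers are just an example policy:

```sh
# keep 7 daily, 4 weekly, and 6 monthly snapshots; delete and prune the rest
docker exec restic restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```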