
Linderhof

Linderhof — the smallest and most intimate of Ludwig II's Bavarian palaces, the only one he lived to see completed; built entirely to his own vision as a private retreat. (Wikipedia)

a self-hosting stack based on ansible and docker compose that comes with caddy, docker-mailserver, forgejo, tuwunel (matrix), a prometheus/grafana/loki/alloy monitoring stack, goaccess, diun, restic backups, a nebula overlay network, and fail2ban.

other features include:

  • runs entirely on open-source software
  • no databases / no external services

setup

prerequisites

install python dependencies and ansible collections:

pip install -r requirements.txt
ansible-galaxy collection install -r requirements.yml

quickstart

./setup.sh

the setup script walks you through everything interactively: stack name, SSH key, vault password, server details, domain, and secrets. it writes all generated config outside the repo to $XDG_CONFIG_HOME/linderhof/<stack>/ and won't overwrite existing files.

after setup, activate the stack and review the generated config:

direnv allow   # reads .stack file — or: export LINDERHOF_STACK=<stack>

vi $LINDERHOF_DIR/group_vars/all/config.yml
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml
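
for reference, the .envrc does roughly the following; this is only a sketch of the mechanism, not the repo's exact file:

# .envrc sketch (the repo's real .envrc may differ)
# read the active stack name from .stack, or honor an already-exported LINDERHOF_STACK
export LINDERHOF_STACK="${LINDERHOF_STACK:-$(cat .stack 2>/dev/null)}"
# point LINDERHOF_DIR at the generated per-stack config directory
export LINDERHOF_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/linderhof/$LINDERHOF_STACK"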

then provision and deploy:

ansible-playbook playbooks/provision.yml
ansible-playbook playbooks/dns.yml
ansible-playbook playbooks/site.yml

multiple stacks

each stack is an independent deployment with its own inventory, vault, and secrets. to create a second stack:

./setup.sh          # enter a different stack name when prompted
echo other-stack > .stack && direnv allow

switch between stacks by changing LINDERHOF_STACK or updating .stack:

echo home > .stack && direnv allow
echo work > .stack && direnv allow

stack config lives at $XDG_CONFIG_HOME/linderhof/<stack>/:

<stack>/
  hosts.yml              # server connection info
  vault-pass             # vault encryption key (chmod 600)
  stack.env              # per-stack shell vars (DOCKER_HOST, etc.)
  group_vars/
    all/
      config.yml         # public ansible settings
      vault.yml          # encrypted secrets
      dns.yml            # DNS zone definitions
      overrides.yml      # optional: variable overrides
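
stack.env holds the per-stack shell variables noted above; a plausible example (the value shown and any additional variables are assumptions, check the generated file):

# stack.env example (contents depend on your setup.sh answers)
DOCKER_HOST=ssh://root@203.0.113.10   # lets local docker compose commands target the remote host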

overriding variables

to override any variable without editing config.yml, create overrides.yml in the stack's group_vars/all/ directory. ansible loads every file there automatically, and since overrides.yml sorts after config.yml, any key it defines takes precedence:

vi $LINDERHOF_DIR/group_vars/all/overrides.yml
# override mail hostname (e.g. during migration)
mail_hostname: mail2.example.com

# add extra static sites to caddy
caddy_sites:
  - example.com
  - example2.com

# add extra mail-hosted domains
mail_domains:
  - example.com
  - example2.com
  - example3.com

upstream documentation

secrets

sensitive data like passwords and DKIM keys is stored in $LINDERHOF_DIR/group_vars/all/vault.yml and encrypted with ansible-vault. see the setup section for what goes in there.

after first mail deployment, retrieve and add the DKIM public key:

docker exec mailserver cat /tmp/docker-mailserver/rspamd/dkim/<domain>/mail.pub
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml
# add: dkim_keys:
#        example.com: "v=DKIM1; k=rsa; p=..."

then uncomment the mail._domainkey record in dns.yml and re-run ansible-playbook playbooks/dns.yml.
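
once the record is live, it should look roughly like this in DNS (selector mail as in the path above; the key material is a placeholder):

mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkqh..."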

# edit secrets (decrypts in place, opens editor, re-encrypts on save)
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml

# decrypt for manual editing
ansible-vault decrypt $LINDERHOF_DIR/group_vars/all/vault.yml

# re-encrypt after editing
ansible-vault encrypt $LINDERHOF_DIR/group_vars/all/vault.yml

provisioning

provision a new cloud VM (currently supports Hetzner):

# provision with defaults (server_name and cloud_provider from config.yml)
ansible-playbook playbooks/provision.yml

# override server name or type
ansible-playbook playbooks/provision.yml -e server_name=aspen -e hcloud_server_type=cpx21

this registers your SSH key, creates the server, waits for SSH, and updates $LINDERHOF_DIR/hosts.yml with the new IP. after provisioning, update DNS and run the stack:

ansible-playbook playbooks/dns.yml
ansible-playbook playbooks/site.yml --tags bootstrap
ansible-playbook playbooks/site.yml
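
for orientation, the updated hosts.yml ends up along these lines (the group layout and user shown here are illustrative, not taken from the repo):

all:
  hosts:
    aspen:
      ansible_host: 203.0.113.10   # IP written by the provision playbook
      ansible_user: root           # assumption; use whatever setup.sh configured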

ansible playbooks

Run everything:

ansible-playbook playbooks/site.yml

Run playbooks individually for initial setup (in this order):

# 1. Bootstrap the server (users, packages, ssh, etc.)
ansible-playbook playbooks/bootstrap.yml

# 2. Install docker
ansible-playbook playbooks/docker.yml

# 3. Set up nebula overlay network
ansible-playbook playbooks/nebula.yml

# 4. Set up the web server
ansible-playbook playbooks/caddy.yml

# 5. Set up the mail server
ansible-playbook playbooks/mail.yml

# 6. Set up forgejo (git server)
ansible-playbook playbooks/forgejo.yml

# 7. Set up tuwunel (matrix homeserver)
ansible-playbook playbooks/tuwunel.yml

# 8. Set up monitoring (prometheus, grafana, loki, alloy)
ansible-playbook playbooks/monitoring.yml

# 9. Set up goaccess (web analytics)
ansible-playbook playbooks/goaccess.yml

# 10. Set up diun (docker image update notifier)
ansible-playbook playbooks/diun.yml

# 11. Set up restic backups
ansible-playbook playbooks/restic.yml

# 12. Set up fail2ban
ansible-playbook playbooks/fail2ban.yml

Run only specific tags:

ansible-playbook playbooks/site.yml --tags mail,monitoring
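
To list the tags defined across the playbooks before picking a subset:

ansible-playbook playbooks/site.yml --list-tags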

common operations

Services are deployed to /srv/<service>. Each has a compose.yml and can be managed with docker compose.

running docker compose commands

# Always cd to the service directory first
cd /srv/mail && docker compose logs -f
cd /srv/caddy && docker compose restart
cd /srv/forgejo && docker compose ps
cd /srv/tuwunel && docker compose up -d
cd /srv/monitoring && docker compose up -d

reloading caddy

# Reload caddy configuration without downtime
cd /srv/caddy && docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
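
To sanity-check the Caddyfile before reloading (same path as the reload command above):

cd /srv/caddy && docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile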

managing email users

# List all email accounts
docker exec mailserver setup email list

# Add a new email account
docker exec mailserver setup email add user@domain.com password

# Delete an email account
docker exec mailserver setup email del user@domain.com

# Update password
docker exec mailserver setup email update user@domain.com newpassword

To add users via ansible, add them to mail_users in the vault and run:

ansible-playbook --tags users playbooks/mail.yml
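
The exact vault schema comes from the mail role; something along these lines is typical (the field names here are an assumption, check the generated vault.yml for the real layout):

mail_users:
  - email: user@example.com
    password: changeme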

managing email aliases

# List aliases
docker exec mailserver setup alias list

# Add an alias
docker exec mailserver setup alias add alias@domain.com target@domain.com

# Delete an alias
docker exec mailserver setup alias del alias@domain.com

managing forgejo

# Access the forgejo CLI
docker exec -it forgejo forgejo

# List users
docker exec forgejo forgejo admin user list

# Create a new user
docker exec forgejo forgejo admin user create --username myuser --password mypassword --email user@domain.com

# Reset a user's password
docker exec forgejo forgejo admin user change-password --username myuser --password newpassword

# Delete a user
docker exec forgejo forgejo admin user delete --username myuser

managing tuwunel (matrix)

# View tuwunel logs
cd /srv/tuwunel && docker compose logs -f

# Restart tuwunel
cd /srv/tuwunel && docker compose restart

# Check federation status
curl https://chat.example.com/_matrix/federation/v1/version

# Check well-known delegation
curl https://example.com/.well-known/matrix/server
curl https://example.com/.well-known/matrix/client
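
The expected responses follow the standard matrix delegation format (chat.example.com here matches the federation check above):

# from /.well-known/matrix/server
{"m.server": "chat.example.com:443"}

# from /.well-known/matrix/client
{"m.homeserver": {"base_url": "https://chat.example.com"}}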

monitoring stack

# Reload prometheus configuration
docker exec prometheus kill -HUP 1

# Restart alloy to pick up config changes
cd /srv/monitoring && docker compose restart alloy

# Check prometheus targets
curl -s localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'

# Check alloy status
curl -s localhost:12345/-/ready

viewing logs

cd /srv/mail && docker compose logs -f mailserver
cd /srv/caddy && docker compose logs -f caddy
cd /srv/forgejo && docker compose logs -f forgejo
cd /srv/tuwunel && docker compose logs -f tuwunel
cd /srv/monitoring && docker compose logs -f grafana
cd /srv/monitoring && docker compose logs -f prometheus
cd /srv/monitoring && docker compose logs -f loki
cd /srv/monitoring && docker compose logs -f alloy

managing nebula

Nebula runs directly on the host (not in Docker). The CA key and certificates are stored in /etc/nebula/.

# Sign a client certificate
ssh server
cd /etc/nebula
nebula-cert sign -name "laptop" -ip "192.168.100.2/24"
# Copy laptop.crt, laptop.key, and ca.crt to client device

On the client, install Nebula and create a config with am_lighthouse: false and a static_host_map pointing to the server's public IP:

static_host_map:
  "192.168.100.1": ["YOUR_SERVER_PUBLIC_IP:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"

dns management

DNS records are managed via the Hetzner DNS API:

ansible-playbook playbooks/dns.yml

backups

# Check backup status
docker exec restic restic snapshots

# Run a manual backup
docker exec restic restic backup /data

# Restore from backup
docker exec restic restic restore latest --target /restore
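
To restore only part of the repository, narrow the restore with --include (the /data/forgejo path is just an example; match it to your backup layout):

docker exec restic restic restore latest --target /restore --include /data/forgejo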