# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Linderhof is an Ansible-based self-hosting infrastructure stack that deploys email, web server, git hosting, Matrix homeserver, monitoring, and backup services using Docker Compose on Ubuntu servers.
## Common Commands

```sh
# Select a stack (one-time per clone)
echo <stack-name> > .stack && direnv allow
# or: export LINDERHOF_STACK=<stack-name>

# Run all playbooks
ansible-playbook playbooks/site.yml

# Run a specific playbook
ansible-playbook playbooks/mail.yml

# Run specific tags only
ansible-playbook playbooks/site.yml --tags mail,monitoring

# Edit encrypted secrets
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml

# Encrypt/decrypt vault
ansible-vault encrypt $LINDERHOF_DIR/group_vars/all/vault.yml
ansible-vault decrypt $LINDERHOF_DIR/group_vars/all/vault.yml
```
Note: The inventory and vault password are set via `ANSIBLE_INVENTORY` and `ANSIBLE_VAULT_PASSWORD_FILE` in `.envrc`, driven by `LINDERHOF_STACK`. No extra flags are needed once the stack is selected.
## Architecture
**Deployment Pattern:** Each service is deployed to `/srv/<service>/` on the target host with a `compose.yml` and environment files.
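A role following this pattern might render its files like so (a minimal sketch — the service name `myservice`, file names, and handler name are illustrative assumptions, not taken from the repo):

```yaml
# Hypothetical tasks/main.yml fragment for a service following the /srv/<service>/ pattern
- name: Create service directory
  ansible.builtin.file:
    path: /srv/myservice
    state: directory
    mode: "0755"

- name: Render compose file
  ansible.builtin.template:
    src: compose.yml.j2
    dest: /srv/myservice/compose.yml
  notify: restart myservice

- name: Render environment file
  ansible.builtin.template:
    src: myservice.env.j2
    dest: /srv/myservice/myservice.env
    mode: "0600"
  notify: restart myservice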
**Standalone Playbooks** (not in site.yml):

- `provision.yml` - Provision a cloud VM (Hetzner)
- `dns.yml` - Manage DNS zones/records via Hetzner DNS API
- `bootstrap.yml` - First-time server setup (run once as root before site.yml)
- `dkim_sync.yml` - Fetch DKIM keys from mailserver and publish to DNS (run once after first mail deploy)
**Full deployment order (fresh server):**

1. `provision.yml` - create server, auto-writes IP to hosts.yml and config.yml
2. `dns.yml` - create DNS records
3. `bootstrap.yml` - users, SSH hardening, packages, Docker (connects as root)
4. `site.yml` - deploy all services
5. `dkim_sync.yml` - generate DKIM keys, write to stack config, publish to DNS
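After step 1, the stack's hosts.yml might look roughly like this (a hypothetical sketch — host alias, address, and user are placeholders, not values from the repo):

```yaml
# Hypothetical hosts.yml; provision.yml writes the real server IP automatically
all:
  hosts:
    linderhof:
      ansible_host: 203.0.113.10   # placeholder address
      ansible_user: deploy         # placeholder user
```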
**Playbook Execution Order** (via site.yml):

- `networks.yml` - Pre-create all Docker networks (must run before any service)
- `nebula.yml` - Overlay network (Nebula)
- `caddy.yml` - Web server / reverse proxy
- `mail.yml` - Email (docker-mailserver + rainloop)
- `forgejo.yml` - Git server
- `tuwunel.yml` - Matrix homeserver (Tuwunel)
- `radicale.yml` - CalDAV/CardDAV
- `monitoring.yml` - Prometheus, Grafana, Loki, Alloy
- `goaccess.yml` - Web analytics
- `diun.yml` - Docker image update notifications
- `restic.yml` - Encrypted backups
- `fail2ban.yml` - Intrusion prevention
**Mail TLS:** On first deployment, the mail role stops Caddy, runs certbot standalone to acquire a Let's Encrypt cert for `mail_hostname`, then restarts Caddy. Subsequent runs skip this step (the cert already exists). Caddy owns port 80, so standalone is the only viable approach without a DNS challenge plugin.
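The first-run certificate flow described above could be sketched as follows (an illustrative sketch, not the repo's actual tasks — task names, paths, and the `creates:` guard are assumptions):

```yaml
# Sketch of the stop-Caddy / certbot-standalone / start-Caddy sequence
- name: Stop Caddy to free port 80
  community.docker.docker_compose_v2:
    project_src: /srv/caddy
    state: stopped

- name: Acquire initial certificate via certbot standalone
  ansible.builtin.command: >
    certbot certonly --standalone -n --agree-tos
    -m {{ admin_email }} -d {{ mail_hostname }}
  args:
    creates: "/etc/letsencrypt/live/{{ mail_hostname }}/fullchain.pem"

- name: Start Caddy again
  community.docker.docker_compose_v2:
    project_src: /srv/caddy
    state: present
```

The `creates:` guard is what makes subsequent runs skip the certbot step once the cert file exists.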
**Role Structure:** Each role in `roles/` contains:

- `tasks/main.yml` - Core provisioning tasks
- `templates/` - Jinja2 templates (compose.yml.j2, config files)
- `handlers/main.yml` - Service restart handlers
- `files/` - Static configuration files
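A restart handler for such a role could look roughly like this (assuming the compose project lives under `/srv/<service>/` as described above; the handler and service names are illustrative):

```yaml
# handlers/main.yml sketch for a hypothetical "myservice" role
- name: restart myservice
  community.docker.docker_compose_v2:
    project_src: /srv/myservice
    state: restarted
```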
**Configuration** (lives outside the repo in `$XDG_CONFIG_HOME/linderhof/<stack>/`):

- `$LINDERHOF_DIR/hosts.yml` - Host connection info only
- `$LINDERHOF_DIR/group_vars/all/config.yml` - All public configuration
- `$LINDERHOF_DIR/group_vars/all/vault.yml` - All secrets (encrypted)
- `$LINDERHOF_DIR/group_vars/all/dns.yml` - DNS zone definitions
- `$LINDERHOF_DIR/group_vars/all/overrides.yml` - Per-stack variable overrides (optional)
- `$LINDERHOF_DIR/stack.env` - Per-stack shell vars (DOCKER_HOST, etc.)
- `$LINDERHOF_DIR/vault-pass` - Vault encryption key (chmod 600)
**Template files** (in the repo, used by setup.sh):

- `inventory/group_vars/all/config.yml.setup` - Config template
- `inventory/group_vars/all/vault.yml.setup` - Vault template
- `inventory/group_vars/all/dns.yml.setup` - DNS zones template
To override variables without editing config.yml, create `overrides.yml`:

```yaml
# Example: override mail hostname during migration
mail_hostname: mail2.example.com

# Example: add extra static sites to Caddy
caddy_sites:
  - example.com
  - example2.com
```
**Service Toggles:** Set `enable_<service>: false` in config.yml to disable: `enable_mail`, `enable_forgejo`, `enable_tuwunel`, `enable_monitoring`, `enable_goaccess`, `enable_goaccess_sync`, `enable_radicale`, `enable_restic`, `enable_fail2ban`, `enable_nebula`, `enable_diun`.
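In config.yml these toggles are plain booleans, e.g. (values here are illustrative, not defaults):

```yaml
# Example toggle block in config.yml
enable_mail: true
enable_forgejo: true
enable_monitoring: true
enable_tuwunel: false   # skip the Matrix homeserver on this stack
enable_diun: false
```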
**Docker Networks:** All networks are pre-created by the `docker_network` role before any service deploys. Services declare all networks as `external: true` in their compose.yml.j2 — no service creates its own network. Networks are created conditionally based on `enable_*` flags:
| Network | Created when |
|---|---|
| `caddy` | always |
| `mail` | `enable_mail` |
| `webmail` | `enable_mail` |
| `git` | `enable_forgejo` |
| `monitoring` | `enable_monitoring` |
| `tuwunel` | `enable_tuwunel` |
| `radicale` | `enable_radicale` |
Caddy's compose.yml.j2 also conditionally declares network references using the same enable_* flags so it never references a network that wasn't created.
**Adding a new service:** create the network in `docker_network/tasks/main.yml` with the appropriate `when:` condition, declare it `external: true` in the service compose template, and add it to caddy's compose template if caddy needs to reach it.
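The first step might look roughly like this (a sketch assuming the role uses the `community.docker.docker_network` module; the task wording is illustrative, but the `when:` pattern matches the table above):

```yaml
# In docker_network/tasks/main.yml: create the network only when enabled
- name: Create git network
  community.docker.docker_network:
    name: git
  when: enable_forgejo | bool
```

In the service's compose.yml.j2 the same network is then declared with `external: true`, so Compose attaches to the pre-created network instead of creating its own.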
## Available Tags

- `bootstrap` - Initial server setup (use `--tags bootstrap`)
- `docker` - Docker installation
- `mail` - Mail server
- `forgejo` - Git server
- `tuwunel` - Matrix homeserver
- `monitoring` - Monitoring stack
- `restic` - Backup configuration
- `fail2ban` - Intrusion prevention
- `nebula` - Overlay network
- `diun` - Docker image update notifications
- `config` - Configuration-only updates