# Linderhof

> *Linderhof* — the smallest and most intimate of Ludwig II's Bavarian palaces, the only one he lived to see completed; built entirely to his own vision as a private retreat. ([Wikipedia](https://en.wikipedia.org/wiki/Linderhof_Palace))

a self-hosting stack based on ansible and docker compose that comes with:

- email
  - [docker-mailserver](https://github.com/docker-mailserver/docker-mailserver)
  - [rainloop](https://www.rainloop.net/)
- web server
  - [caddy](https://caddyserver.com/)
- git server
  - [forgejo](https://forgejo.org/)
- matrix homeserver
  - [tuwunel](https://github.com/matrix-construct/tuwunel)
- monitoring
  - [alloy](https://github.com/grafana/alloy)
  - [grafana](https://grafana.com/)
  - [prometheus](https://prometheus.io/)
  - [loki](https://github.com/grafana/loki)
- web analytics
  - [goaccess](https://goaccess.io/)
- backups
  - [restic](https://github.com/restic/restic)
- overlay network
  - [nebula](https://github.com/slackhq/nebula)
- docker image update notifications
  - [diun](https://github.com/crazy-max/diun)
- intrusion prevention
  - [fail2ban](https://github.com/fail2ban/fail2ban)

other features include:

- runs entirely on open-source software
- no databases / no external services

## setup

### prerequisites

- [direnv](https://direnv.net/) (optional, loads `.envrc` automatically)
- a [Hetzner Cloud](https://console.hetzner.cloud/) account with an API token (Read & Write)
- a [Hetzner Storage Box](https://www.hetzner.com/storage/storage-box/) (for restic backups, optional)

install python dependencies and ansible collections:

```bash
pip install -r requirements.txt
ansible-galaxy collection install -r requirements.yml
```

### quickstart

```bash
./setup.sh
```

the setup script walks you through everything interactively: stack name, SSH key, vault password, server details, domain, and secrets. it writes all generated config outside the repo to `$XDG_CONFIG_HOME/linderhof/<stack>/` and won't overwrite existing files.

after setup, activate the stack and review the generated config:

```bash
direnv allow  # reads .stack file — or: export LINDERHOF_STACK=<stack>

vi $LINDERHOF_DIR/group_vars/all/config.yml
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml
```
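
for reference, a direnv `.envrc` matching this workflow could look like the following. this is a hypothetical sketch, not necessarily the repo's actual file; `ANSIBLE_VAULT_PASSWORD_FILE` is a standard ansible environment variable.

```shell
# hypothetical .envrc sketch; the repo's actual file may differ
# pick up the stack name from .stack unless LINDERHOF_STACK is already set
export LINDERHOF_STACK="${LINDERHOF_STACK:-$(cat .stack 2>/dev/null)}"
# resolve the per-stack config directory under XDG_CONFIG_HOME
export LINDERHOF_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/linderhof/${LINDERHOF_STACK}"
# let ansible-vault find the stack's password file automatically
export ANSIBLE_VAULT_PASSWORD_FILE="${LINDERHOF_DIR}/vault-pass"
```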

then provision and deploy:

```bash
ansible-playbook playbooks/provision.yml
ansible-playbook playbooks/dns.yml
ansible-playbook playbooks/site.yml
```

### multiple stacks

each stack is an independent deployment with its own inventory, vault, and secrets. to create a second stack:

```bash
./setup.sh  # enter a different stack name when prompted
echo other-stack > .stack && direnv allow
```

switch between stacks by changing `LINDERHOF_STACK` or updating `.stack`:

```bash
echo home > .stack && direnv allow
echo work > .stack && direnv allow
```

stack config lives at `$XDG_CONFIG_HOME/linderhof/<stack>/`:

```
<stack>/
  hosts.yml          # server connection info
  vault-pass         # vault encryption key (chmod 600)
  stack.env          # per-stack shell vars (DOCKER_HOST, etc.)
  group_vars/
    all/
      config.yml     # public ansible settings
      vault.yml      # encrypted secrets
      dns.yml        # DNS zone definitions
      overrides.yml  # optional: variable overrides
```

### overriding variables

to override any variable without editing `config.yml`, create `overrides.yml` in the stack's `group_vars/all/`. ansible loads all files there automatically, so any key here wins over `config.yml`:

```bash
vi $LINDERHOF_DIR/group_vars/all/overrides.yml
```

```yaml
# override mail hostname (e.g. during migration)
mail_hostname: mail2.example.com

# add extra static sites to caddy
caddy_sites:
  - example.com
  - example2.com

# add extra mail-hosted domains
mail_domains:
  - example.com
  - example2.com
  - example3.com
```

## upstream documentation

- [docker-mailserver](https://docker-mailserver.github.io/docker-mailserver/latest/)
- [rainloop](https://www.rainloop.net/docs/configuration/)
- [caddy](https://caddyserver.com/docs/)
- [forgejo](https://forgejo.org/docs/latest/)
- [tuwunel](https://github.com/matrix-construct/tuwunel)
- [alloy (Grafana Alloy)](https://grafana.com/docs/alloy/latest/)
- [grafana](https://grafana.com/docs/grafana/latest/)
- [prometheus](https://prometheus.io/docs/)
- [loki](https://grafana.com/docs/loki/latest/)
- [goaccess](https://goaccess.io/man)
- [restic](https://restic.readthedocs.io/)
- [nebula](https://nebula.defined.net/docs/)
- [diun](https://crazymax.dev/diun/)
- [fail2ban](https://fail2ban.readthedocs.io/)

## secrets

sensitive data like passwords and DKIM keys is stored in `$LINDERHOF_DIR/group_vars/all/vault.yml` and encrypted with ansible-vault. see the [setup](#setup) section for what goes in there.

after the first mail deployment, retrieve the DKIM public key and add it to the vault:

```bash
docker exec mailserver cat /tmp/docker-mailserver/rspamd/dkim/<domain>/mail.pub

ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml
# add: dkim_keys:
#        example.com: "v=DKIM1; k=rsa; p=..."
```

then uncomment the `mail._domainkey` record in `dns.yml` and re-run `ansible-playbook playbooks/dns.yml`.

```bash
# edit secrets (decrypts in place, opens editor, re-encrypts on save)
ansible-vault edit $LINDERHOF_DIR/group_vars/all/vault.yml

# decrypt for manual editing
ansible-vault decrypt $LINDERHOF_DIR/group_vars/all/vault.yml

# re-encrypt after editing
ansible-vault encrypt $LINDERHOF_DIR/group_vars/all/vault.yml
```
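
as an illustration, the uncommented DKIM entry in `dns.yml` is a TXT record at `mail._domainkey` whose value is the key stored in the vault. this is a hypothetical shape; match the record syntax your `dns.yml` already uses:

```yaml
# hypothetical sketch; field names depend on how dns.yml structures records
- name: mail._domainkey
  type: TXT
  value: "{{ dkim_keys['example.com'] }}"
```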

## provisioning

provision a new cloud VM (currently supports Hetzner):

```bash
# provision with defaults (server_name and cloud_provider from config.yml)
ansible-playbook playbooks/provision.yml

# override server name or type
ansible-playbook playbooks/provision.yml -e server_name=aspen -e hcloud_server_type=cpx21
```

this registers your SSH key, creates the server, waits for SSH, and updates `$LINDERHOF_DIR/hosts.yml` with the new IP. after provisioning, update DNS and run the stack:

```bash
ansible-playbook playbooks/dns.yml
ansible-playbook playbooks/site.yml --tags bootstrap
ansible-playbook playbooks/site.yml
```
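
the inventory that provisioning writes is a standard ansible `hosts.yml`. a hypothetical sketch, with `203.0.113.10` standing in for the IP that `provision.yml` fills in (the actual keys depend on the repo's templates):

```yaml
# hypothetical example; the real file is generated by setup.sh / provision.yml
all:
  hosts:
    aspen:
      ansible_host: 203.0.113.10
      ansible_user: root
```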

## ansible playbooks

Run everything:

```bash
ansible-playbook playbooks/site.yml
```

Run playbooks individually for initial setup (in this order):

```bash
# 1. Bootstrap the server (users, packages, ssh, etc.)
ansible-playbook playbooks/bootstrap.yml

# 2. Install docker
ansible-playbook playbooks/docker.yml

# 3. Set up nebula overlay network
ansible-playbook playbooks/nebula.yml

# 4. Set up the web server
ansible-playbook playbooks/caddy.yml

# 5. Set up the mail server
ansible-playbook playbooks/mail.yml

# 6. Set up forgejo (git server)
ansible-playbook playbooks/forgejo.yml

# 7. Set up tuwunel (matrix homeserver)
ansible-playbook playbooks/tuwunel.yml

# 8. Set up monitoring (prometheus, grafana, loki, alloy)
ansible-playbook playbooks/monitoring.yml

# 9. Set up goaccess (web analytics)
ansible-playbook playbooks/goaccess.yml

# 10. Set up diun (docker image update notifier)
ansible-playbook playbooks/diun.yml

# 11. Set up restic backups
ansible-playbook playbooks/restic.yml

# 12. Set up fail2ban
ansible-playbook playbooks/fail2ban.yml
```

Run only specific tags:

```bash
ansible-playbook playbooks/site.yml --tags mail,monitoring
```

## common operations

Services are deployed to `/srv/<service>`. Each has a `compose.yml` and can be managed with docker compose.
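
for orientation, each service directory holds an ordinary compose file. a hypothetical sketch of `/srv/caddy/compose.yml` follows; the deployed files are generated by ansible and will differ:

```yaml
# hypothetical sketch; the real file is templated by the corresponding role
services:
  caddy:
    image: caddy:2
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data

volumes:
  caddy_data:
```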

### running docker compose commands

```bash
# Always cd to the service directory first
cd /srv/mail && docker compose logs -f
cd /srv/caddy && docker compose restart
cd /srv/forgejo && docker compose ps
cd /srv/tuwunel && docker compose up -d
cd /srv/monitoring && docker compose up -d
```

### reloading caddy

```bash
# Reload caddy configuration without downtime
cd /srv/caddy && docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```

### managing email users

```bash
# List all email accounts
docker exec mailserver setup email list

# Add a new email account
docker exec mailserver setup email add user@domain.com password

# Delete an email account
docker exec mailserver setup email del user@domain.com

# Update password
docker exec mailserver setup email update user@domain.com newpassword
```

To add users via ansible, add them to `mail_users` in the vault and run:

```bash
ansible-playbook --tags users playbooks/mail.yml
```

### managing email aliases

```bash
# List aliases
docker exec mailserver setup alias list

# Add an alias
docker exec mailserver setup alias add alias@domain.com target@domain.com

# Delete an alias
docker exec mailserver setup alias del alias@domain.com
```

### managing forgejo

```bash
# Access the forgejo CLI
docker exec -it forgejo forgejo

# List users
docker exec forgejo forgejo admin user list

# Create a new user
docker exec forgejo forgejo admin user create --username myuser --password mypassword --email user@domain.com

# Reset a user's password
docker exec forgejo forgejo admin user change-password --username myuser --password newpassword

# Delete a user
docker exec forgejo forgejo admin user delete --username myuser
```

### managing tuwunel (matrix)

```bash
# View tuwunel logs
cd /srv/tuwunel && docker compose logs -f

# Restart tuwunel
cd /srv/tuwunel && docker compose restart

# Check federation status
curl https://chat.example.com/_matrix/federation/v1/version

# Check well-known delegation
curl https://example.com/.well-known/matrix/server
curl https://example.com/.well-known/matrix/client
```
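
per the Matrix specification, healthy delegation responses look like this (with `example.com` and `chat.example.com` replaced by your domains):

```
# https://example.com/.well-known/matrix/server
{"m.server": "chat.example.com:443"}

# https://example.com/.well-known/matrix/client
{"m.homeserver": {"base_url": "https://chat.example.com"}}
```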

### monitoring stack

```bash
# Reload prometheus configuration
docker exec prometheus kill -HUP 1

# Restart alloy to pick up config changes
cd /srv/monitoring && docker compose restart alloy

# Check prometheus targets
curl -s localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'

# Check alloy status
curl -s localhost:12345/-/ready
```

### viewing logs

```bash
cd /srv/mail && docker compose logs -f mailserver
cd /srv/caddy && docker compose logs -f caddy
cd /srv/forgejo && docker compose logs -f forgejo
cd /srv/tuwunel && docker compose logs -f tuwunel
cd /srv/monitoring && docker compose logs -f grafana
cd /srv/monitoring && docker compose logs -f prometheus
cd /srv/monitoring && docker compose logs -f loki
cd /srv/monitoring && docker compose logs -f alloy
```

### managing nebula

Nebula runs directly on the host (not in Docker). The CA key and certificates are stored in `/etc/nebula/`.

```bash
# Sign a client certificate
ssh server
cd /etc/nebula
nebula-cert sign -name "laptop" -ip "192.168.100.2/24"
# Copy laptop.crt, laptop.key, and ca.crt to client device
```

On the client, install Nebula and create a config with `am_lighthouse: false` and a `static_host_map` pointing to the server's public IP:

```yaml
static_host_map:
  "192.168.100.1": ["YOUR_SERVER_PUBLIC_IP:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
```

### dns management

DNS records are managed via the Hetzner DNS API:

```bash
ansible-playbook playbooks/dns.yml
```

### backups

```bash
# Check backup status
docker exec restic restic snapshots

# Run a manual backup
docker exec restic restic backup /data

# Restore from backup
docker exec restic restic restore latest --target /restore
```