Hostityourself/infra/docker-compose.yml
Claude 48b9ccf152
feat: M4 Hardening — encryption, resource limits, monitoring, backups
## Env var encryption at rest (AES-256-GCM)
- server/src/crypto.rs: new module — encrypt/decrypt with AES-256-GCM
  Key = SHA-256(HIY_SECRET_KEY); non-prefixed values pass through
  transparently for zero-downtime migration
- Cargo.toml: aes-gcm = "0.10"
- routes/envvars.rs: encrypt on SET; list returns masked values (••••)
- routes/databases.rs: pg_password and DATABASE_URL stored encrypted
- routes/ui.rs: decrypt pg_password when rendering DB card
- builder.rs: decrypt env vars when writing the .env file for containers
- .env.example: add HIY_SECRET_KEY entry
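Since the server derives the actual AES-256 key via SHA-256, `HIY_SECRET_KEY` only needs to be a high-entropy string. A sketch of generating one (variable name illustrative):

```shell
# Generate a random secret for HIY_SECRET_KEY. The server hashes it with
# SHA-256 to derive the AES-256 key, so any sufficiently random string works.
key="$(openssl rand -hex 32)"
echo "HIY_SECRET_KEY=${key}"
```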

## Per-app resource limits
- apps table: memory_limit (default 512m) + cpu_limit (default 0.5)
  added via idempotent ALTER TABLE in db.rs migration
- models.rs: App, CreateApp, UpdateApp gain memory_limit + cpu_limit
- routes/apps.rs: persist limits on create, update via PUT
- builder.rs: pass MEMORY_LIMIT + CPU_LIMIT to build script
- builder/build.sh: use $MEMORY_LIMIT / $CPU_LIMIT in podman run
  (replaces hardcoded --cpus="0.5"; --memory now also set)
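The build.sh change roughly amounts to the following (the function name, app name, and image are placeholders, and `echo` stands in for actually running podman):

```shell
# Sketch of the podman run invocation after this change: the limits come from
# the environment set by builder.rs, with the same defaults as the DB columns.
run_app() {
  echo podman run -d \
    --memory "${MEMORY_LIMIT:-512m}" \
    --cpus "${CPU_LIMIT:-0.5}" \
    --name "$1" "$2"
}
run_app myapp localhost/myapp:latest
```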

## Monitoring (opt-in compose profile)
- infra/docker-compose.yml: gatus + netdata under `monitoring` profile
  Enable: podman compose --profile monitoring up -d
  Gatus on :8080, Netdata on :19999
- infra/gatus.yml: Gatus config checking HIY /api/status every minute
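A Gatus config for this check is small; a sketch of what infra/gatus.yml might contain (endpoint name and conditions are assumptions, not the committed file):

```shell
# Write a minimal Gatus config that polls the HIY status endpoint every minute.
cat > /tmp/gatus-sketch.yml <<'EOF'
endpoints:
  - name: hiy-server
    url: http://server:3000/api/status
    interval: 1m
    conditions:
      - "[STATUS] == 200"
EOF
```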

## Backup cron job
- infra/backup.sh: dumps SQLite, copies env files + git repos into a
  dated .tar.gz; optional rclone upload; 30-day local retention
  Suggested cron: 0 3 * * * /path/to/infra/backup.sh
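The archive-and-retention core of the script reduces to a `tar` plus a `find`; a sketch under assumed paths (BACKUP_DIR and the data dir are placeholders; assumes GNU find):

```shell
# Create a dated archive and prune local backups older than 30 days.
BACKUP_DIR="${BACKUP_DIR:-/tmp/hiy-backups}"
mkdir -p "$BACKUP_DIR"
stamp="$(date +%Y-%m-%d)"
# Archive the data dir; path is a placeholder for the SQLite/env/repo copies.
tar -czf "$BACKUP_DIR/hiy-$stamp.tar.gz" -C "${HIY_DATA_DIR:-/data}" . 2>/dev/null || true
# 30-day local retention, as in the commit message.
find "$BACKUP_DIR" -name 'hiy-*.tar.gz' -mtime +30 -delete
```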

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 15:06:42 +00:00


# HIY — local development stack
# Run with: podman compose up --build (or: docker compose up --build)
#
# On a real Pi you would run Caddy as a systemd service; here it runs in Compose
# so you can develop without changing the host.
services:
  # ── Podman socket proxy (unix → TCP) ──────────────────────────────────────
  # start.sh exports PODMAN_SOCK before invoking compose, so the correct
  # socket is used regardless of rootful vs rootless:
  #   rootful:  /run/podman/podman.sock
  #   rootless: /run/user/<UID>/podman/podman.sock (start.sh sets this)
  podman-proxy:
    image: alpine/socat
    command: tcp-listen:2375,fork,reuseaddr unix-connect:/podman.sock
    restart: unless-stopped
    volumes:
      - ${PODMAN_SOCK}:/podman.sock
    networks:
      - hiy-net
  # ── Control plane ─────────────────────────────────────────────────────────
  server:
    build:
      context: ..
      dockerfile: infra/Dockerfile.server
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - hiy-data:/data
      # Mount the builder script so edits take effect without rebuilding.
      - ../builder:/app/builder:ro
    env_file:
      - path: ../.env
        required: false
    environment:
      HIY_DATA_DIR: /data
      HIY_ADDR: 0.0.0.0:3000
      HIY_BUILD_SCRIPT: /app/builder/build.sh
      CADDY_API_URL: http://caddy:2019
      DOCKER_HOST: tcp://podman-proxy:2375
      # CONTAINER_HOST is the Podman-native equivalent of DOCKER_HOST.
      # Setting it makes `podman` automatically operate in remote mode and
      # delegate all builds/runs to the host's Podman service via the proxy,
      # instead of trying to run Podman locally inside this container (which
      # would fail: no user-namespace support in an unprivileged container).
      CONTAINER_HOST: tcp://podman-proxy:2375
      RUST_LOG: hiy_server=debug,tower_http=info
      POSTGRES_URL: postgres://hiy_admin:${POSTGRES_PASSWORD}@postgres:5432/hiy
    depends_on:
      caddy:
        condition: service_started
      podman-proxy:
        condition: service_started
      postgres:
        condition: service_started
    networks:
      - hiy-net
      - default
  # ── Shared Postgres ───────────────────────────────────────────────────────
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: hiy
      POSTGRES_USER: hiy_admin
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - hiy-pg-data:/var/lib/postgresql/data
    networks:
      - hiy-net
  # ── Reverse proxy ─────────────────────────────────────────────────────────
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      # Port 2019 (Caddy admin API) is intentionally NOT published to the host.
      # It is only reachable within the hiy-net Docker network (http://caddy:2019).
    env_file:
      - path: ../.env
        required: false
    volumes:
      - ../proxy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
      - caddy-config:/config
    command: caddy run --config /etc/caddy/Caddyfile --adapter caddyfile --resume
    networks:
      - hiy-net
      - default
  # ── Uptime / health checks ────────────────────────────────────────────────
  # Enable with: podman compose --profile monitoring up -d
  gatus:
    profiles: [monitoring]
    image: twinproduction/gatus:latest
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - ./gatus.yml:/config/config.yaml:ro
    networks:
      - hiy-net
  # ── Host metrics (rootful Podman / Docker only) ───────────────────────────
  # On rootless Podman some host mounts may be unavailable; comment out if so.
  netdata:
    profiles: [monitoring]
    image: netdata/netdata:stable
    restart: unless-stopped
    ports:
      - "19999:19999"
    pid: host
    cap_add:
      - SYS_PTRACE
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    volumes:
      - netdata-config:/etc/netdata
      - netdata-lib:/var/lib/netdata
      - netdata-cache:/var/cache/netdata
      - /etc/os-release:/host/etc/os-release:ro
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
    networks:
      - hiy-net
networks:
  hiy-net:
    name: hiy-net
    # Named explicitly (not external) so Compose creates and owns the network
    # while deployed app containers can still join it by name.
    external: false
volumes:
  hiy-data:
  caddy-data:
  caddy-config:
  hiy-pg-data:
  netdata-config:
  netdata-lib:
  netdata-cache: