auto-update.sh fetches origin every 5 minutes. If new commits are
found, it pulls and selectively restarts only what changed:
- server/ or Cargo.* → rebuild + restart server container
- docker-compose.yml → full stack up -d
- proxy/Caddyfile → caddy reload
- anything else → no restart needed
start.sh now installs hiy-update.service + hiy-update.timer alongside
the existing hiy.service boot unit.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Adds the act-runner service alongside Forgejo. It connects to the
Podman socket proxy so CI jobs can build and run containers on the Pi.
Also enables FORGEJO__actions__ENABLED on the Forgejo service.
FORGEJO_RUNNER_TOKEN must be set in .env — obtain it from:
Forgejo → Site Administration → Actions → Runners → Create new runner
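A plausible compose entry for the runner (image tag, env var names, and the
proxy address are assumptions based on the setup described here):

```yaml
# under services: — runner sketch
act-runner:
  image: docker.io/gitea/act_runner:latest
  environment:
    GITEA_INSTANCE_URL: http://forgejo:3000
    GITEA_RUNNER_REGISTRATION_TOKEN: ${FORGEJO_RUNNER_TOKEN}
    DOCKER_HOST: tcp://podman-proxy:2375   # CI jobs build via the socket proxy
  networks:
    - hiy-net
```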
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
backup.sh now covers all data:
- SQLite via podman exec into server container (fallback to host path)
- Postgres via pg_dumpall inside postgres container
- Forgejo data volume via podman volume export
- Caddy TLS certificates via podman volume export
- .env file (plaintext secrets — store archive securely)
restore.sh reverses each step: imports volumes, restores Postgres,
restores SQLite, optionally restores .env (--force to overwrite).
Both scripts find containers dynamically via compose service labels
so they work regardless of the container name podman-compose assigns.
.env.example documents HIY_BACKUP_DIR, HIY_BACKUP_REMOTE,
HIY_BACKUP_RETAIN_DAYS.
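The label-based lookup might look like this (the label key follows the compose
convention; `RUNTIME` is a hypothetical indirection added so the sketch is
self-contained):

```shell
# Resolve the container a compose service is running as, regardless of the
# name podman-compose assigned. RUNTIME defaults to podman.
RUNTIME="${RUNTIME:-podman}"

find_ctr() {
  # first matching container name for the given compose service label
  "$RUNTIME" ps --filter "label=com.docker.compose.service=$1" --format '{{.Names}}' | head -n 1
}
```

Used as e.g. `PG_CTR="$(find_ctr postgres)"` before the pg_dumpall step.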
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Adds a pg_isready healthcheck to the postgres service and upgrades the
Forgejo depends_on to condition: service_healthy, preventing the
"connection refused" crash on startup.
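A sketch of the two compose changes (interval and retry values are
assumptions):

```yaml
postgres:
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 3s
    retries: 10

forgejo:
  depends_on:
    postgres:
      condition: service_healthy   # wait for pg_isready, not just container start
```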
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Replaced hardcoded 'CHANGE_ME' in the SQL init file with a shell script
that reads FORGEJO_DB_PASSWORD from the environment. The variable is also
passed into the postgres service in docker-compose.yml so it is available at init time.
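The substitution can be sketched as a function (names here are hypothetical);
the real init script pipes the rendered SQL into psql on first init:

```shell
# render_init_sql: expand FORGEJO_DB_PASSWORD into the statements the old
# SQL file hardcoded with 'CHANGE_ME'
render_init_sql() {
  cat <<SQL
CREATE USER forgejo WITH PASSWORD '${FORGEJO_DB_PASSWORD}';
CREATE DATABASE forgejo OWNER forgejo;
SQL
}
# in the real script: render_init_sql | psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER"
```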
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
--resume caused Caddyfile changes (e.g. new Forgejo block) to be silently
ignored on restart because Caddy preferred its saved in-memory config.
Instead, Caddy now always starts clean from the Caddyfile, and the HIY
server re-registers every app's Caddy route from the DB on startup
(restore_caddy_routes). This gives us the best of both worlds:
- Caddyfile changes (static services, TLS config) are always picked up
- App routes are restored automatically without needing a redeploy
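For illustration, a per-app route object as restore_caddy_routes might
re-POST it to Caddy's admin API at /config/apps/http/servers/srv0/routes
(host, app name, server name and port are all assumptions):

```json
{
  "match": [{ "host": ["blog.example.com"] }],
  "handle": [{
    "handler": "reverse_proxy",
    "upstreams": [{ "dial": "hiy-blog:3000" }]
  }]
}
```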
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
- docker-compose.yml: Forgejo service on hiy-net, configured via env vars
- postgres-init/01-forgejo.sql: creates forgejo user + database on first Postgres init
- .env.example: document FORGEJO_DB_PASSWORD and FORGEJO_DOMAIN
Routing: add FORGEJO_DOMAIN as an app in HIY pointing to forgejo:3000,
or add a Caddyfile block manually.
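The manual Caddyfile variant could be as small as (host label is an example):

```
git.{$DOMAIN_SUFFIX} {
    reverse_proxy forgejo:3000
}
```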
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
DOMAIN_SUFFIX=local (or any non-localhost LAN name) caused a TLS handshake
failure because Caddy attempted an ACME challenge that can never succeed for
private domains.
- Caddyfile: tls {$ACME_EMAIL:internal} — falls back to Caddy's built-in CA
when ACME_EMAIL is absent, uses Let's Encrypt when it is set.
- start.sh: ACME_EMAIL is now optional; if it is missing a warning is printed
instead of aborting, so local/LAN setups work without an email address.
To trust Caddy's locally-issued certificates in a browser run: caddy trust
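The resulting site-block pattern (site address is an example):

```
hiy.{$DOMAIN_SUFFIX} {
    # internal → Caddy's built-in CA; setting ACME_EMAIL switches to Let's Encrypt
    tls {$ACME_EMAIL:internal}
    reverse_proxy server:8080
}
```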
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
include_str!("../../templates/...") is resolved at compile time, so the
template files must be present in the Docker build context. The previous
Dockerfile only copied server/src, not server/templates.
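The fix amounts to one extra COPY in the build stage; paths here are
assumptions based on the layout described:

```dockerfile
# build-stage excerpt
COPY server/Cargo.toml server/Cargo.lock ./
COPY server/src ./src
# required: include_str!("../../templates/...") reads these at compile time
COPY server/templates ./templates
```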
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman-compose does not populate BUILDPLATFORM/TARGETARCH build args, so
the platform-detection logic always fell back to x86_64 — even on arm64.
This caused cc-rs to look for 'x86_64-linux-gnu-gcc' instead of 'gcc'.
Replace the entire cross-compile scaffolding with a plain native build:
cargo build --release (no --target)
Cargo targets the host platform automatically. If cross-compilation is
ever needed it can be reintroduced with a properly-tested setup.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
rust:slim-bookworm doesn't include gcc, and aes-gcm's build deps (via
cc-rs) need a C compiler. With --target x86_64-unknown-linux-gnu set
explicitly, cc-rs looks for the cross-compiler 'x86_64-linux-gnu-gcc'
instead of native 'gcc'.
Fix: install gcc in the build stage and add a [target.x86_64-*] linker
entry pointing to 'gcc' so cc-rs finds it on native x86_64 builds.
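The linker entry might live in .cargo/config.toml (location assumed):

```toml
[target.x86_64-unknown-linux-gnu]
linker = "gcc"
```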
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Podman without unqualified-search registries configured in
/etc/containers/registries.conf refuses to resolve short image names.
Prefix every image with docker.io/library/ (official images) or
docker.io/<org>/ (third-party) so pulls succeed unconditionally.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman-compose requires all networks referenced in service configs to be
explicitly declared in the top-level networks block. Docker Compose
creates the default network implicitly, but podman-compose errors with
'missing networks: default'.
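So every network a service references has to appear in the top-level block,
e.g.:

```yaml
networks:
  hiy-net: {}

services:
  server:
    networks:
      - hiy-net
```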
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
One Postgres 16 instance runs in the infra stack (docker-compose).
Each app can be given its own isolated schema with a dedicated,
scoped Postgres user via the new Database card on the app detail page.
What was added:
infra/
docker-compose.yml — postgres:16-alpine service + hiy-pg-data
volume; POSTGRES_URL injected into server
.env.example — POSTGRES_PASSWORD entry
server/
Cargo.toml — sqlx postgres feature
src/db.rs — databases table (SQLite) migration
src/models.rs — Database model
src/main.rs — PgPool (lazy) added to AppState;
/api/apps/:id/database routes registered
src/routes/mod.rs — databases module
src/routes/databases.rs — GET / POST / DELETE handlers:
provision — creates schema + scoped PG user, sets search_path,
injects DATABASE_URL env var
deprovision — DROP OWNED BY + DROP ROLE + DROP SCHEMA CASCADE,
removes SQLite record
src/routes/ui.rs — app_detail queries databases table, renders
db_card based on provisioning state
templates/app_detail.html — {{db_card}} placeholder +
provisionDb / deprovisionDb JS
Apps connect via:
postgres://hiy-<app>:<pw>@postgres:5432/hiy
search_path is set on the role so no URL option is needed.
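The provision step plausibly reduces to statements like these (app name,
password, and quoting are illustrative):

```sql
-- provision app "blog": scoped role + schema inside the shared hiy database
CREATE ROLE "hiy-blog" LOGIN PASSWORD 'generated-password';
CREATE SCHEMA "blog" AUTHORIZATION "hiy-blog";
-- search_path on the role, so the injected DATABASE_URL needs no options
ALTER ROLE "hiy-blog" IN DATABASE hiy SET search_path = "blog";
```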
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Two root causes:
1. Caddy was started without --resume, so every restart wiped all
dynamically-registered app routes (only the base Caddyfile survived).
Adding --resume makes Caddy reload its auto-saved config (stored in
the caddy-config volume) which includes all app routes.
2. App routes used the container IP address, which changes whenever
hiy-net is torn down and recreated by compose. Switch to the
container name as the upstream dial address; Podman's aardvark-dns
resolves it by name within hiy-net, so it stays valid across
network recreations.
Together with the existing reconnect loop in start.sh these two
changes mean deployed apps survive a platform restart without needing
a redeploy.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
compose down destroys hiy-net and evicts running hiy-* containers
from it. compose up recreates the network but leaves those containers
disconnected, making them unreachable until a redeploy.
After compose up, reconnect all running hiy-* containers to hiy-net.
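A minimal sketch of that reconnect pass (list_hiy is a hypothetical helper;
the real script may filter differently):

```shell
# list_hiy: keep only deployed hiy-* app containers from a name listing
list_hiy() { grep '^hiy-' || true; }

# after compose up, reattach each running app container to the fresh hiy-net
podman ps --format '{{.Names}}' | list_hiy | while read -r ctr; do
  podman network connect hiy-net "$ctr" 2>/dev/null || true
done
```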
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Without podman system migrate, compose down/up only touches infra
containers. Deployed hiy-* containers are never stopped during a
platform restart so they need no special handling there.
The restart loop stays in boot.sh where it is needed (system reboot
stops all containers).
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman system migrate was added to "pick up subuid/subgid mappings", but that's
not what it does — it migrates container storage after a Podman version upgrade.
Subuid/subgid changes are picked up by restarting the Podman socket,
which the script already does. The only effect of running it was stopping
all containers on every platform start.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman system migrate explicitly stops all containers, which overrides
the --restart unless-stopped policy set on deployed apps. After compose
up -d brings the infra stack back, any exited hiy-* container is now
restarted automatically.
Same logic added to boot.sh for the on-boot path.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
- Add infra/boot.sh: lightweight startup (no build) that brings up the
Podman stack — used by the systemd unit on every system boot
- start.sh now installs/refreshes hiy.service (a systemd --user unit)
and enables loginctl linger so it runs without an active login session
After the next `infra/start.sh` run the Pi will automatically restart
the stack after a reboot or power cut.
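The boot unit could be sketched as (ExecStart path is an assumption):

```ini
# ~/.config/systemd/user/hiy.service
[Unit]
Description=HIY platform stack
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=%h/hiy/infra/boot.sh

[Install]
WantedBy=default.target
```

With `loginctl enable-linger $USER`, the user manager (and this unit) starts
at boot without anyone logging in.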
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman system migrate was stopping all containers immediately (visible in
the terminal output as "stopped <id>" lines), before the build even began.
Moving it to just before compose down/up means running containers stay
alive for the entire duration of the image build.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Old behaviour: compose down → long build → compose up
New behaviour: long build (service stays live) → compose down → compose up
Downtime is now limited to the few seconds of the swap instead of the
entire duration of the Rust/image build.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
runc (used by Podman) always writes memory.swap.max when initializing the
cgroup v2 memory controller, even without explicit --memory flags. On
Raspberry Pi OS this file is absent because swap accounting is disabled
by default in the kernel, causing every container start to fail with:
openat2 …/memory.swap.max: no such file or directory
start.sh now detects this condition early, patches the kernel cmdline
(cgroup_enable=memory cgroup_memory=1 swapaccount=1) in either
/boot/firmware/cmdline.txt (Pi OS Bookworm) or /boot/cmdline.txt
(older releases), and tells the user to reboot once before continuing.
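The patch step can be sketched as a small idempotent helper (patch_cmdline is
hypothetical; the real script also picks the right file and asks for a reboot):

```shell
# patch_cmdline: append the cgroup/swap-accounting parameters to the single
# kernel command line, exactly once
patch_cmdline() {
  f="$1"
  grep -q 'swapaccount=1' "$f" && return 0   # already patched
  sed -i '1 s/$/ cgroup_enable=memory cgroup_memory=1 swapaccount=1/' "$f"
}
```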
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
build.sh calls `podman build` inside the server container.
DOCKER_HOST is a Docker CLI variable; Podman does not use it to
automatically switch to remote mode. Without CONTAINER_HOST set,
Podman runs locally inside the (unprivileged) container, has no
user-namespace support, and lchown fails for any layer file owned
by a non-zero GID (e.g. gid=42 for /etc/shadow).
Setting CONTAINER_HOST=tcp://podman-proxy:2375 makes Podman
automatically operate in remote mode and delegate all operations
to the host Podman service, which has the correct subuid/subgid
mappings and full user-namespace support.
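In compose terms this is a single env var on the server service (a sketch;
proxy name and port from above):

```yaml
server:
  environment:
    # Podman's own remote-mode switch — DOCKER_HOST alone is not enough
    CONTAINER_HOST: tcp://podman-proxy:2375
```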
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Two root causes for "invalid argument" when chowning non-root UIDs/GIDs
in image layers:
1. Missing uidmap package: without setuid newuidmap/newgidmap binaries,
Podman can only map a single UID (0 → current user) in the user
namespace. Any layer file owned by gid=42 (shadow) or similar then
has no mapping and lchown returns EINVAL. Now install uidmap if absent.
2. Stale Podman service: a service started before subuid/subgid entries
existed silently keeps the single-UID mapping for its lifetime even
after the entries are added and podman system migrate is run. Now
always kill and restart the service on each start.sh run so it always
reads the current subuid/subgid configuration.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
If entries already existed before this script first ran, _HIY_SUBID_CHANGED
stayed 0 and migrate was skipped, leaving Podman storage out of sync with
the namespace mappings and causing lchown errors on layer extraction.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Without entries in /etc/subuid and /etc/subgid, Podman cannot map the
UIDs/GIDs present in image layers (e.g. gid 42 for /etc/shadow) into
the user namespace, causing 'lchown: invalid argument' on layer extraction.
Add a 65536-ID range starting at 100000 for the current user if missing,
then run podman system migrate so existing storage is updated.
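The append-if-missing logic can be sketched as (ensure_subid is a hypothetical
helper; the real script targets /etc/subuid and /etc/subgid via sudo):

```shell
# ensure_subid: append a 65536-ID range starting at 100000 for a user
# if the given file has no entry for them yet
ensure_subid() {
  file="$1" user="$2"
  grep -q "^${user}:" "$file" 2>/dev/null || echo "${user}:100000:65536" >> "$file"
}
```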
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Rootless processes cannot bind privileged ports (<1024) by default.
Lower net.ipv4.ip_unprivileged_port_start to 80 at startup, and persist
it to /etc/sysctl.conf so the setting survives reboots.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Podman rootless unconditionally resets XDG_RUNTIME_DIR to /run/user/<uid>
if that directory exists, overriding any env var we set. Redirecting to
/tmp is therefore ineffective.
Instead, ensure /run/user/<uid> exists and is owned by the current user
(using sudo if needed), mirroring what PAM/logind does for login sessions.
All Podman runtime state (socket, events, netavark) then works correctly.
Remove the now-unnecessary storage.conf/containers.conf writes and the
inline env override on podman system service.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
make build was looking for Makefile in cwd (repo root) instead of infra/.
Use -C "$SCRIPT_DIR" so it always finds infra/Makefile regardless of where
the script is invoked from.
Add -f flag to podman compose up so it finds infra/docker-compose.yml
from any working directory.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Podman's events engine reads tmp_dir from containers.conf, not from
XDG_RUNTIME_DIR directly. Write both storage.conf and containers.conf
to /tmp/podman-<uid> so no path under /run/user/<uid> is ever used.
Also use `env XDG_RUNTIME_DIR=...` prefix on podman invocation to
override any stale value in the calling shell environment.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Stop relying on conditional checks. Always point XDG_RUNTIME_DIR and
storage.conf runroot to /tmp/podman-<uid> so Podman never touches
/run/user/<uid>, which requires PAM/logind to create.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
SSH sessions may export XDG_RUNTIME_DIR=/run/user/<uid> even when that
directory doesn't exist or isn't writable. Check writability rather than
emptiness before falling back to /tmp/podman-<uid>.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Podman uses XDG_RUNTIME_DIR for its RunRoot, events dirs, and default
socket path. Without it pointing to a writable location, podman fails
with 'mkdir /run/user/<uid>: permission denied' even before the socket
is created. Export it to /tmp/podman-<uid> when unset.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
/run/user/<uid> is created by PAM/logind and doesn't exist in non-login
shells. Fall back to /tmp/podman-<uid> when XDG_RUNTIME_DIR is unset so
mkdir always succeeds.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
systemctl --user fails in non-interactive shells (no D-Bus session bus).
podman system service starts the socket directly without systemd/D-Bus,
backgrounding the process and waiting up to 5 s for the socket to appear.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
start.sh now activates the Podman user socket via systemctl --user if it
isn't running yet, then exports DOCKER_HOST and PODMAN_SOCK so that
podman compose (which delegates to the docker-compose plugin) can connect.
docker-compose.yml mounts ${PODMAN_SOCK} into the socat proxy container
at a fixed internal path (/podman.sock), so it works for both rootful
(/run/podman/podman.sock) and rootless (/run/user/<UID>/podman/podman.sock)
without hardcoding the UID.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
- New HIY_ADMIN_USER / HIY_ADMIN_PASS env vars control access
- Login page at /login with redirect-after-login support
- Cookie-based sessions (HttpOnly, SameSite=Strict); cleared on restart
- Auth middleware applied to all routes except /webhook/:app_id (HMAC) and /login
- Auth is skipped when credentials are not configured (dev mode, warns at startup)
- Logout link in both dashboard nav bars
- Caddy admin port 2019 no longer published to the host in docker-compose
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH