Adds restore_app_containers() which runs at startup alongside
restore_caddy_routes(). For each app with a successful deploy it
inspects the container state via `podman inspect` and runs
`podman start` if the container is exited (e.g. after a host reboot).
Missing containers are logged as warnings; those apps require a manual redeploy.
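A minimal sketch of the per-app check (function names are ours; `needs_start` factors out the decision from the podman calls):

```shell
# Decide from an inspected state string whether the container needs starting.
needs_start() {
  # $1 is the value of: podman inspect --format '{{.State.Status}}' <name>
  [ "$1" = "exited" ]
}

restore_app_container() {
  name="$1"
  if ! state=$(podman inspect --format '{{.State.Status}}' "$name" 2>/dev/null); then
    # container is gone entirely: warn, require manual redeploy
    echo "WARN: container $name missing; redeploy the app manually" >&2
    return 1
  fi
  if needs_start "$state"; then
    podman start "$name"
  fi
}
```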
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
backup.sh now covers all data:
- SQLite via podman exec into server container (fallback to host path)
- Postgres via pg_dumpall inside postgres container
- Forgejo data volume via podman volume export
- Caddy TLS certificates via podman volume export
- .env file (plaintext secrets — store archive securely)
restore.sh reverses each step: imports volumes, restores Postgres,
restores SQLite, optionally restores .env (--force to overwrite).
Both scripts find containers dynamically via compose service labels
so they work regardless of the container name podman-compose assigns.
.env.example documents HIY_BACKUP_DIR, HIY_BACKUP_REMOTE,
HIY_BACKUP_RETAIN_DAYS.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Adds a pg_isready healthcheck to the postgres service and upgrades the
Forgejo depends_on to condition: service_healthy, preventing the
"connection refused" crash on startup.
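The compose change, sketched (interval/timeout values here are illustrative, not the committed ones):

```yaml
services:
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  forgejo:
    depends_on:
      postgres:
        condition: service_healthy
```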
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Replaced hardcoded 'CHANGE_ME' in the SQL init file with a shell script
that reads FORGEJO_DB_PASSWORD from the environment. Also pass the variable
into the postgres service in docker-compose.yml so it is available at init time.
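A sketch of what the init script can look like (the postgres image runs *.sh files from /docker-entrypoint-initdb.d on first init; user/database names here are assumptions):

```shell
# postgres-init/01-forgejo.sh (sketch)
forgejo_init_sql() {
  # emit the role/database bootstrap, reading the password from the env
  cat <<SQL
CREATE USER forgejo WITH PASSWORD '${FORGEJO_DB_PASSWORD}';
CREATE DATABASE forgejo OWNER forgejo;
SQL
}

# inside the container, pipe it into psql:
# forgejo_init_sql | psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER"
```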
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
--resume caused Caddyfile changes (e.g. new Forgejo block) to be silently
ignored on restart because Caddy preferred its saved in-memory config.
Instead, Caddy now always starts clean from the Caddyfile, and the HIY
server re-registers every app's Caddy route from the DB on startup
(restore_caddy_routes). This gives us the best of both worlds:
- Caddyfile changes (static services, TLS config) are always picked up
- App routes are restored automatically without needing a redeploy
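What restore_caddy_routes does per app, expressed as an equivalent curl call against Caddy's admin API (the server does this from Rust; the admin host and server name srv0 are assumptions — POST to an array path appends):

```shell
route_json() {
  # app id, domain, port -> a Caddy route with a stable @id
  cat <<JSON
{"@id":"hiy-$1","match":[{"host":["$2"]}],
 "handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"hiy-$1:$3"}]}]}
JSON
}

register_route() {
  curl -sf -X POST \
    "http://caddy:2019/config/apps/http/servers/srv0/routes" \
    -H 'Content-Type: application/json' \
    -d "$(route_json "$1" "$2" "$3")"
}
```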
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Forgejo is a docker-compose service, not a HIY-deployed container. HIY's
dynamic routing uses the hiy-<id>:<port> naming convention which doesn't
match. A static block pointing to forgejo:3000 is the correct approach.
FORGEJO_DOMAIN falls back to forgejo.localhost so Caddy starts cleanly
on installs that don't use Forgejo.
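The static block, sketched (Caddyfile `{$VAR:default}` placeholders supply the fallback):

```
{$FORGEJO_DOMAIN:forgejo.localhost} {
    reverse_proxy forgejo:3000
}
```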
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
- docker-compose.yml: Forgejo service on hiy-net, configured via env vars
- postgres-init/01-forgejo.sql: creates forgejo user + database on first Postgres init
- .env.example: document FORGEJO_DB_PASSWORD and FORGEJO_DOMAIN
Routing: add FORGEJO_DOMAIN as an app in HIY pointing to forgejo:3000,
or add a Caddyfile block manually.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Apps default to private (require login). Marking an app public bypasses
the forward_auth check so anyone can access it without logging in.
Changes:
- db.rs: is_public INTEGER NOT NULL DEFAULT 0 column (idempotent)
- models.rs: is_public: i64 on App; is_public: Option<bool> on UpdateApp
- Cargo.toml: add reqwest for Caddy admin API calls from Rust
- routes/apps.rs: PATCH is_public → save flag + immediately push updated
Caddy route (no redeploy needed); caddy_route() builds correct JSON for
both public (plain reverse_proxy) and private (forward_auth) cases
- builder.rs: pass IS_PUBLIC env var to build.sh
- build.sh: use IS_PUBLIC to select route type on deploy
- ui.rs + app_detail.html: private/public badge + toggle button in subtitle
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
x-token-auth is Bitbucket/Gitea-specific; GitHub doesn't recognise it and
returns a misleading 403 'Write access not granted'. x-access-token is the
username GitHub documents for PAT auth and is also accepted by GitLab/Gitea.
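The URL form this produces, as a small sketch (helper name is ours; the token is a placeholder):

```shell
with_token() {
  # inject the x-access-token username into an HTTPS clone URL
  url="$1"; token="$2"
  echo "$url" | sed "s#^https://#https://x-access-token:${token}@#"
}

# with_token https://github.com/org/repo.git ghp_XXXX
# -> https://x-access-token:ghp_XXXX@github.com/org/repo.git
```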
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Adds a 'Git Authentication' card to the app detail page with:
- Status badge (Token configured / No token)
- Password input to set/update the token
- Clear button (only shown when a token is stored)
Token is saved/cleared via PATCH /api/apps/:id — no new endpoints needed.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
- db.rs: add nullable git_token column (idempotent ALTER TABLE ADD COLUMN)
- models.rs: git_token on App (#[serde(skip_serializing)]), CreateApp, UpdateApp
- routes/apps.rs: encrypt token on create/update; empty string clears it
- builder.rs: decrypt token, pass as GIT_TOKEN env var to build script
- build.sh: GIT_TERMINAL_PROMPT=0 (fail fast, not hang); when GIT_TOKEN is
set, inject it into the HTTPS clone URL as x-token-auth; strip credentials
from .git/config after clone/fetch so the token is never persisted to disk
Token usage: PATCH /api/apps/:id with {"git_token": "ghp_..."}
Clear token: PATCH /api/apps/:id with {"git_token": ""}
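One way the post-clone credential stripping can be done (a sketch, not necessarily the committed build.sh code): reset the remote URL so the tokenised form never stays in .git/config.

```shell
scrub_remote() {
  # $1 = repo dir, $2 = clean URL without credentials
  git -C "$1" remote set-url origin "$2"
}
```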
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
DOMAIN_SUFFIX=local (or any non-localhost LAN name) caused a TLS handshake
failure because Caddy attempted an ACME challenge that can never succeed for
private domains.
- Caddyfile: tls {$ACME_EMAIL:internal} — falls back to Caddy's built-in CA
when ACME_EMAIL is absent, uses Let's Encrypt when it is set.
- start.sh: ACME_EMAIL is now optional; missing it prints a warning instead
of aborting, so local/LAN setups work without an email address.
To trust the self-signed cert in a browser run: caddy trust
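In a site block this looks like (domain and upstream here are illustrative):

```
example.com {
    # expands to an ACME account email, or the literal keyword "internal"
    # (Caddy's built-in CA) when ACME_EMAIL is unset
    tls {$ACME_EMAIL:internal}
    reverse_proxy server:8080
}
```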
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
include_str!("../../templates/...") is resolved at compile time, so the
template files must be present in the Docker build context. The previous
Dockerfile only copied server/src, not server/templates.
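The Dockerfile fix, sketched (destination paths are assumptions):

```dockerfile
# templates must sit next to src so include_str! resolves at compile time
COPY server/src ./server/src
COPY server/templates ./server/templates
```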
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman-compose does not populate BUILDPLATFORM/TARGETARCH build args, so
the platform-detection logic always fell back to x86_64 — even on arm64.
This caused cc-rs to look for 'x86_64-linux-gnu-gcc' instead of 'gcc'.
Replace the entire cross-compile scaffolding with a plain native build:
cargo build --release (no --target)
Cargo targets the host platform automatically. If cross-compilation is
ever needed it can be reintroduced with a properly-tested setup.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
rust:slim-bookworm doesn't include gcc, and aes-gcm's build deps (via
cc-rs) need a C compiler. With --target x86_64-unknown-linux-gnu set
explicitly, cc-rs looks for the cross-compiler 'x86_64-linux-gnu-gcc'
instead of native 'gcc'.
Fix: install gcc in the build stage and add a [target.x86_64-*] linker
entry pointing to 'gcc' so cc-rs finds it on native x86_64 builds.
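The cargo config entry, sketched:

```toml
# .cargo/config.toml — make cc-rs/rustc use the native gcc on x86_64
[target.x86_64-unknown-linux-gnu]
linker = "gcc"
```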
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Podman without unqualified-search registries configured in
/etc/containers/registries.conf refuses to resolve short image names.
Prefix every image with docker.io/library/ (official images) or
docker.io/<org>/ (third-party) so pulls succeed unconditionally.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman-compose requires all networks referenced in service configs to be
explicitly declared in the top-level networks block. Docker Compose
creates the default network implicitly, but podman-compose errors with
'missing networks: default'.
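The minimal top-level declaration that satisfies podman-compose (sketch; whether hiy-net also appears here depends on the compose file):

```yaml
networks:
  default: {}
  hiy-net: {}
```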
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
One Postgres 16 instance runs in the infra stack (docker-compose).
Each app can be given its own isolated schema with a dedicated,
scoped Postgres user via the new Database card on the app detail page.
What was added:
infra/
docker-compose.yml — postgres:16-alpine service + hiy-pg-data
volume; POSTGRES_URL injected into server
.env.example — POSTGRES_PASSWORD entry
server/
Cargo.toml — sqlx postgres feature
src/db.rs — databases table (SQLite) migration
src/models.rs — Database model
src/main.rs — PgPool (lazy) added to AppState;
/api/apps/:id/database routes registered
src/routes/mod.rs — databases module
src/routes/databases.rs — GET / POST / DELETE handlers:
provision — creates schema + scoped PG user, sets search_path,
injects DATABASE_URL env var
deprovision — DROP OWNED BY + DROP ROLE + DROP SCHEMA CASCADE,
removes SQLite record
src/routes/ui.rs — app_detail queries databases table, renders
db_card based on provisioning state
templates/app_detail.html — {{db_card}} placeholder +
provisionDb / deprovisionDb JS
Apps connect via:
postgres://hiy-<app>:<pw>@postgres:5432/hiy
search_path is set on the role so no URL option is needed.
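The provisioning SQL, roughly (the server runs this over PgPool; the exact statements are a sketch built from the naming scheme above):

```shell
provision_sql() {
  # $1 = app name, $2 = generated password
  cat <<SQL
CREATE ROLE "hiy-$1" LOGIN PASSWORD '$2';
CREATE SCHEMA "hiy-$1" AUTHORIZATION "hiy-$1";
ALTER ROLE "hiy-$1" SET search_path = "hiy-$1";
GRANT CONNECT ON DATABASE hiy TO "hiy-$1";
SQL
}
```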
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Move all inline markup out of ui.rs into server/templates/:
styles.css — shared stylesheet
index.html — dashboard page
app_detail.html — app detail page
users.html — users admin page
Templates are embedded at compile time via include_str! and rendered
with simple str::replace("{{placeholder}}", value) calls. JS/CSS
braces no longer need escaping, making the templates editable with
normal syntax highlighting.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Previously clicking Deploy redirected to the app detail page.
Now the page stays put and immediately flips the deploy badge to
"building". The existing 5-second status poller advances both the
deploy badge (building → success/failed) and the container badge
(→ running) without any manual refresh.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Two root causes:
1. Caddy was started without --resume, so every restart wiped all
dynamically-registered app routes (only the base Caddyfile survived).
Adding --resume makes Caddy reload its auto-saved config (stored in
the caddy-config volume) which includes all app routes.
2. App routes used the container IP address, which changes whenever
hiy-net is torn down and recreated by compose. Switch to the
container name as the upstream dial address; Podman's aardvark-dns
resolves it by name within hiy-net, so it stays valid across
network recreations.
Together with the existing reconnect loop in start.sh these two
changes mean deployed apps survive a platform restart without needing
a redeploy.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
compose down destroys hiy-net and evicts running hiy-* containers
from it. compose up recreates the network but leaves those containers
disconnected, making them unreachable until a redeploy.
After compose up, reconnect all running hiy-* containers to hiy-net.
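The reconnect step, sketched:

```shell
reconnect_apps() {
  for name in $(podman ps --filter name='^hiy-' --format '{{.Names}}'); do
    # idempotent: already-connected containers just error, which we ignore
    podman network connect hiy-net "$name" 2>/dev/null || true
  done
}
```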
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Without podman system migrate, compose down/up only touches infra
containers. Deployed hiy-* containers are never stopped during a
platform restart so they need no special handling there.
The restart loop stays in boot.sh where it is needed (system reboot
stops all containers).
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman system migrate was added to "pick up subuid/subgid mappings", but that's
not what it does: it migrates container storage after a Podman version upgrade.
Subuid/subgid changes are picked up by restarting the Podman socket,
which the script already does. The only effect of running it was stopping
all containers on every platform start.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman system migrate explicitly stops all containers, which overrides
the --restart unless-stopped policy set on deployed apps. After compose
up -d brings the infra stack back, any exited hiy-* container is now
restarted automatically.
Same logic added to boot.sh for the on-boot path.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
- Add infra/boot.sh: lightweight startup (no build) that brings up the
Podman stack — used by the systemd unit on every system boot
- start.sh now installs/refreshes hiy.service (a systemd --user unit)
and enables loginctl linger so it runs without an active login session
After the next `infra/start.sh` run the Pi will automatically restart
the stack after a reboot or power cut.
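A sketch of the unit start.sh installs (ExecStart path and unit options are assumptions; only the unit name and the linger step come from the commit):

```shell
hiy_unit() {
  cat <<'UNIT'
[Unit]
Description=HIY platform

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=%h/hiy/infra/boot.sh

[Install]
WantedBy=default.target
UNIT
}

# install: hiy_unit > ~/.config/systemd/user/hiy.service
#          systemctl --user daemon-reload
#          systemctl --user enable --now hiy.service
#          loginctl enable-linger "$USER"
```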
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
podman system migrate was stopping all containers immediately (visible in
the terminal output as "stopped <id>" lines), before the build even began.
Moving it to just before compose down/up means running containers stay
alive for the entire duration of the image build.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Old behaviour: compose down → long build → compose up
New behaviour: long build (service stays live) → compose down → compose up
Downtime is now limited to the few seconds of the swap instead of the
entire duration of the Rust/image build.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
The user_apps check was silently failing because sqlx::query_scalar
without an explicit type annotation would hit a runtime decoding error,
which .unwrap_or(None) swallowed — always returning None → 403.
All three DB calls in check_push_access now use match + tracing::error!
so failures are visible in logs instead of looking like a missing grant.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Useful for git-push-only deploys where no external repo URL is needed.
- CreateApp.repo_url: String → Option<String>
- DB schema default: repo_url TEXT NOT NULL DEFAULT ''
- UI validation no longer requires the field
- Label marked (optional) in the form
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Fixes potential silent failure where sqlx::query_scalar couldn't infer
the return type at runtime. Also adds step-by-step tracing so the exact
failure point (no header / bad base64 / key not found / db error) is
visible in `docker compose logs server`.
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
Replaces SSH as the primary git push path; no SSH key generation needed.
# Admin UI: Users → Generate key (shown once)
git remote add hiy http://hiy:API_KEY@myserver/git/myapp
git push hiy main
What was added:
- api_keys DB table (id, user_id, label, key_hash/SHA-256, created_at)
Keys are stored as SHA-256 hashes; the plaintext is shown once on
creation and never stored.
- routes/api_keys.rs
GET/POST /api/users/:id/api-keys — list / generate
DELETE /api/api-keys/:key_id — revoke
- HTTP Smart Protocol endpoints (public, auth via Basic + API key)
GET /git/:app/info/refs — ref advertisement
POST /git/:app/git-receive-pack — receive pack, runs post-receive hook
Authentication: HTTP Basic where the password is the API key.
git prompts once and caches via the OS credential store.
post-receive hook fires as normal and queues the build.
- Admin UI: API keys section per user with generate/revoke and a
one-time reveal box showing the ready-to-use git remote command.
SSH path (git-shell + authorized_keys) is still functional for users
who prefer it; both paths feed the same build queue.
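Since only the SHA-256 of each key is stored, the lookup hash can be reproduced from the shell (a sketch; the server computes the same digest before matching key_hash):

```shell
key_hash() {
  # digest of the plaintext API key, as stored in api_keys.key_hash
  printf '%s' "$1" | sha256sum | cut -d' ' -f1
}
```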
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH