Commit graph

118 commits

Author SHA1 Message Date
Claude
bd863cdf33
fix: hardcode pg_isready args to avoid podman-compose $$ escaping issue
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:33:57 +00:00
Claude
22a6ab103c
fix: wait for Postgres to be ready before starting Forgejo
Adds a pg_isready healthcheck to the postgres service and upgrades the
Forgejo depends_on to condition: service_healthy, preventing the
"connection refused" crash on startup.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:27:10 +00:00
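Taken together with the `$$`-escaping fix above, the compose change likely looks something like this sketch (user and database names are assumptions, not taken from the repo):

```yaml
services:
  postgres:
    healthcheck:
      # Literal, hardcoded args — podman-compose mangles $$-style
      # variable escaping, so no $VARIABLES appear in the test command.
      test: ["CMD", "pg_isready", "-U", "postgres", "-d", "forgejo"]
      interval: 5s
      timeout: 3s
      retries: 10

  forgejo:
    depends_on:
      postgres:
        condition: service_healthy   # wait for the healthcheck, not just start
```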
Claude
ea172ae336
feat: lock Forgejo install wizard via env var
Sets FORGEJO__security__INSTALL_LOCK=true so Forgejo skips the first-run
wizard and uses the env var configuration directly.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:21:09 +00:00
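Forgejo maps environment variables of the shape `FORGEJO__<section>__<key>` onto `app.ini` sections, so the change is presumably a one-line addition to the service's environment:

```yaml
services:
  forgejo:
    environment:
      # security.INSTALL_LOCK = true in app.ini: skip the first-run wizard
      # and trust the env-var configuration as-is.
      FORGEJO__security__INSTALL_LOCK: "true"
```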
Claude
36b89d7620
fix: use FORGEJO_DB_PASSWORD env var in postgres init script
Replaced hardcoded 'CHANGE_ME' in the SQL init file with a shell script
that reads FORGEJO_DB_PASSWORD from the environment. Also pass the variable
into the postgres service in docker-compose.yml so it is available at init time.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:11:53 +00:00
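The official postgres image executes `*.sh` and `*.sql` files from `/docker-entrypoint-initdb.d` on first init only, so the shell version probably resembles this sketch (role/database names are assumptions):

```shell
#!/bin/sh
# postgres-init/01-forgejo.sh — runs once, on first database initialisation.
# Reads FORGEJO_DB_PASSWORD from the environment instead of a hardcoded value.
set -eu
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE USER forgejo WITH PASSWORD '$FORGEJO_DB_PASSWORD';
    CREATE DATABASE forgejo OWNER forgejo;
EOSQL
```

For this to work, the compose file must also pass `FORGEJO_DB_PASSWORD` into the postgres service's environment, as the commit notes.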
Claude
9ba81bd809
fix: drop Caddy --resume, restore app routes from DB on startup
--resume caused Caddyfile changes (e.g. new Forgejo block) to be silently
ignored on restart because Caddy preferred its saved in-memory config.

Instead, Caddy now always starts clean from the Caddyfile, and the HIY
server re-registers every app's Caddy route from the DB on startup
(restore_caddy_routes). This gives us the best of both worlds:
- Caddyfile changes (static services, TLS config) are always picked up
- App routes are restored automatically without needing a redeploy

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:56:04 +00:00
Claude
97929c11de
fix: add static Caddyfile block for Forgejo (forgejo:3000, not hiy-forgejo)
Forgejo is a docker-compose service, not a HIY-deployed container. HIY's
dynamic routing uses the hiy-<id>:<port> naming convention which doesn't
match. A static block pointing to forgejo:3000 is the correct approach.

FORGEJO_DOMAIN falls back to forgejo.localhost so Caddy starts cleanly
on installs that don't use Forgejo.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:49:19 +00:00
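In Caddyfile syntax, `{$VAR:default}` is environment-variable substitution with a fallback, so the static block is presumably along these lines:

```
{$FORGEJO_DOMAIN:forgejo.localhost} {
    reverse_proxy forgejo:3000
}
```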
Claude
06a8cc189a
fix: remove docker.io/ prefix from Forgejo image (Codeberg registry)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:46:09 +00:00
Claude
b6e223291a
feat: add Forgejo service + Postgres database provisioning
- docker-compose.yml: Forgejo service on hiy-net, configured via env vars
- postgres-init/01-forgejo.sql: creates forgejo user + database on first Postgres init
- .env.example: document FORGEJO_DB_PASSWORD and FORGEJO_DOMAIN

Routing: add FORGEJO_DOMAIN as an app in HIY pointing to forgejo:3000,
or add a Caddyfile block manually.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:44:19 +00:00
Claude
54ceedbe5a
feat: add Settings card to app detail page (repo, branch, port, memory, cpu)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 09:45:52 +00:00
Claude
eb9a500987
feat: per-app public/private visibility toggle
Apps default to private (require login). Marking an app public bypasses
the forward_auth check so anyone can access it without logging in.

Changes:
- db.rs: is_public INTEGER NOT NULL DEFAULT 0 column (idempotent)
- models.rs: is_public: i64 on App; is_public: Option<bool> on UpdateApp
- Cargo.toml: add reqwest for Caddy admin API calls from Rust
- routes/apps.rs: PATCH is_public → save flag + immediately push updated
  Caddy route (no redeploy needed); caddy_route() builds correct JSON for
  both public (plain reverse_proxy) and private (forward_auth) cases
- builder.rs: pass IS_PUBLIC env var to build.sh
- build.sh: use IS_PUBLIC to select route type on deploy
- ui.rs + app_detail.html: private/public badge + toggle button in subtitle

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:55:58 +00:00
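The two route shapes, expressed in Caddyfile terms for readability (the commit actually builds the equivalent JSON via the Caddy admin API; the auth upstream, verify path, and app names here are assumptions):

```
# Private (default): every request must pass the HIY session check first.
app.example.com {
    forward_auth hiy-server:8000 {
        uri /api/verify
    }
    reverse_proxy hiy-42:3000
}

# Public: plain reverse_proxy, no forward_auth.
app.example.com {
    reverse_proxy hiy-42:3000
}
```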
Claude
c7ed5cfe95
fix: use x-access-token username for HTTPS git auth (GitHub compatibility)
x-token-auth is Bitbucket/Gitea-specific; GitHub doesn't recognise it and
returns a misleading 403 'Write access not granted'. x-access-token is the
username GitHub documents for PAT auth and is also accepted by GitLab/Gitea.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:41:54 +00:00
Claude
def40aa7f9
fix: register PATCH on /api/apps/:id (JS was sending PATCH, route only had PUT)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:38:22 +00:00
Claude
4fb8c6b2c7
feat: git token management in app detail UI
Adds a 'Git Authentication' card to the app detail page with:
- Status badge (Token configured / No token)
- Password input to set/update the token
- Clear button (only shown when a token is stored)

Token is saved/cleared via PATCH /api/apps/:id — no new endpoints needed.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:32:58 +00:00
Claude
0b3cbf8734
feat: private repo support via encrypted git token
- db.rs: add nullable git_token column (idempotent ALTER TABLE ADD COLUMN)
- models.rs: git_token on App (#[serde(skip_serializing)]), CreateApp, UpdateApp
- routes/apps.rs: encrypt token on create/update; empty string clears it
- builder.rs: decrypt token, pass as GIT_TOKEN env var to build script
- build.sh: GIT_TERMINAL_PROMPT=0 (fail fast, not hang); when GIT_TOKEN is
  set, inject it into the HTTPS clone URL as x-token-auth; strip credentials
  from .git/config after clone/fetch so the token is never persisted to disk

Token usage: PATCH /api/apps/:id with {"git_token": "ghp_..."}
Clear token:  PATCH /api/apps/:id with {"git_token": ""}

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:24:55 +00:00
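The build.sh flow described above might be sketched like this (variable names beyond `GIT_TOKEN` are assumptions; not runnable without a reachable repo):

```shell
export GIT_TERMINAL_PROMPT=0   # fail fast instead of hanging on a prompt
if [ -n "${GIT_TOKEN:-}" ]; then
    # Inject the token into the HTTPS clone URL for this invocation only.
    AUTH_URL=$(printf '%s' "$REPO_URL" | sed "s#^https://#https://x-token-auth:${GIT_TOKEN}@#")
else
    AUTH_URL="$REPO_URL"
fi
git clone --depth 1 "$AUTH_URL" app
# Strip the credentials from .git/config so the token never persists to disk.
git -C app remote set-url origin "$REPO_URL"
```

(A later commit, c7ed5cfe95 above, swaps the `x-token-auth` username for `x-access-token` for GitHub compatibility.)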
Claude
73ea7320fd
fix: use Caddy internal CA when ACME_EMAIL is not set
DOMAIN_SUFFIX=local (or any non-localhost LAN name) caused a TLS handshake
failure because Caddy attempted an ACME challenge that can never succeed for
private domains.

- Caddyfile: tls {$ACME_EMAIL:internal} — falls back to Caddy's built-in CA
  when ACME_EMAIL is absent, uses Let's Encrypt when it is set.
- start.sh: ACME_EMAIL is now optional; missing it prints a warning instead
  of aborting, so local/LAN setups work without an email address.

To trust the self-signed cert in a browser run: caddy trust

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-25 22:09:00 +00:00
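The trick here is that the same `{$VAR:default}` placeholder either yields an email address (which `tls` treats as ACME/Let's Encrypt) or the literal keyword `internal` (Caddy's built-in CA). A minimal sketch, with the site address assumed:

```
*.{$DOMAIN_SUFFIX} {
    tls {$ACME_EMAIL:internal}
}
```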
Claude
60f5df52f7
fix: copy server/templates into build image for include_str! macros
include_str!("../../templates/...") is resolved at compile time, so the
template files must be present in the Docker build context. The previous
Dockerfile only copied server/src, not server/templates.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:29:15 +00:00
Claude
0bd7b44b81
fix: drop cross-compilation, build natively in Dockerfile
podman-compose does not populate BUILDPLATFORM/TARGETARCH build args, so
the platform-detection logic always fell back to x86_64 — even on arm64.
This caused cc-rs to look for 'x86_64-linux-gnu-gcc' instead of 'gcc'.

Replace the entire cross-compile scaffolding with a plain native build:
  cargo build --release (no --target)

Cargo targets the host platform automatically. If cross-compilation is
ever needed it can be reintroduced with a properly-tested setup.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:25:48 +00:00
Claude
a873049e96
fix: install gcc and configure native x86_64 linker in build image
rust:slim-bookworm doesn't include gcc, and aes-gcm's build deps (via
cc-rs) need a C compiler. With --target x86_64-unknown-linux-gnu set
explicitly, cc-rs looks for the cross-compiler 'x86_64-linux-gnu-gcc'
instead of native 'gcc'.

Fix: install gcc in the build stage and add a [target.x86_64-*] linker
entry pointing to 'gcc' so cc-rs finds it on native x86_64 builds.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:23:02 +00:00
Claude
f50492f132
fix: fully-qualify all image names for Podman without search registries
Podman without unqualified-search registries configured in
/etc/containers/registries.conf refuses to resolve short image names.
Prefix every image with docker.io/library/ (official images) or
docker.io/<org>/ (third-party) so pulls succeed unconditionally.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:20:22 +00:00
Claude
b23e02f2d2
fix: declare default network for podman-compose compatibility
podman-compose requires all networks referenced in service configs to be
explicitly declared in the top-level networks block. Docker Compose
creates the default network implicitly, but podman-compose errors with
'missing networks: default'.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:18:46 +00:00
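The fix is presumably just an explicit top-level declaration, something like:

```yaml
# docker-compose.yml — podman-compose requires every network a service
# references, including 'default', to be declared here.
networks:
  default: {}
```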
Claude
48b9ccf152
feat: M4 Hardening — encryption, resource limits, monitoring, backups
## Env var encryption at rest (AES-256-GCM)
- server/src/crypto.rs: new module — encrypt/decrypt with AES-256-GCM
  Key = SHA-256(HIY_SECRET_KEY); non-prefixed values pass through
  transparently for zero-downtime migration
- Cargo.toml: aes-gcm = "0.10"
- routes/envvars.rs: encrypt on SET; list returns masked values (••••)
- routes/databases.rs: pg_password and DATABASE_URL stored encrypted
- routes/ui.rs: decrypt pg_password when rendering DB card
- builder.rs: decrypt env vars when writing the .env file for containers
- .env.example: add HIY_SECRET_KEY entry

## Per-app resource limits
- apps table: memory_limit (default 512m) + cpu_limit (default 0.5)
  added via idempotent ALTER TABLE in db.rs migration
- models.rs: App, CreateApp, UpdateApp gain memory_limit + cpu_limit
- routes/apps.rs: persist limits on create, update via PUT
- builder.rs: pass MEMORY_LIMIT + CPU_LIMIT to build script
- builder/build.sh: use $MEMORY_LIMIT / $CPU_LIMIT in podman run
  (replaces hardcoded --cpus="0.5"; --memory now also set)

## Monitoring (opt-in compose profile)
- infra/docker-compose.yml: gatus + netdata under `monitoring` profile
  Enable: podman compose --profile monitoring up -d
  Gatus on :8080, Netdata on :19999
- infra/gatus.yml: Gatus config checking HIY /api/status every minute

## Backup cron job
- infra/backup.sh: dumps SQLite, copies env files + git repos into a
  dated .tar.gz; optional rclone upload; 30-day local retention
  Suggested cron: 0 3 * * * /path/to/infra/backup.sh

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 15:06:42 +00:00
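The backup job described above could be sketched roughly as follows (paths, archive naming, and env var names are assumptions; the real `infra/backup.sh` also does an SQLite dump and an optional rclone upload):

```shell
#!/bin/sh
# Minimal backup sketch: one dated tarball + 30-day local retention.
set -eu
DATA_DIR="${HIY_DATA_DIR:-./data}"        # env files, git repos, SQLite db
BACKUP_DIR="${HIY_BACKUP_DIR:-./backups}"
STAMP="$(date +%Y-%m-%d)"
mkdir -p "$DATA_DIR" "$BACKUP_DIR"
# Bundle everything under DATA_DIR into a single dated archive.
tar -czf "$BACKUP_DIR/hiy-$STAMP.tar.gz" -C "$DATA_DIR" .
# Retention: delete local archives older than 30 days.
find "$BACKUP_DIR" -name 'hiy-*.tar.gz' -mtime +30 -delete
echo "wrote $BACKUP_DIR/hiy-$STAMP.tar.gz"
```

Scheduled via the cron line the commit suggests: `0 3 * * * /path/to/infra/backup.sh`.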
Sander Hautvast
92d37d9199
Merge pull request #2 from shautvast/main
Merge pull request #1 from shautvast/claude/heroku-clone-mvp-plan-NREhc
2026-03-24 15:56:53 +01:00
Sander Hautvast
88c5c9c790
Merge pull request #1 from shautvast/claude/heroku-clone-mvp-plan-NREhc
Add README with project features and overview
2026-03-24 15:55:59 +01:00
Shautvast
47a50689bf readme 2026-03-24 15:48:16 +01:00
Shautvast
2654a26b06 rust 1.94 2026-03-24 14:27:12 +01:00
Claude
f4aa6972e1
feat: shared Postgres with per-app schemas
One Postgres 16 instance runs in the infra stack (docker-compose).
Each app can be given its own isolated schema with a dedicated,
scoped Postgres user via the new Database card on the app detail page.

What was added:

infra/
  docker-compose.yml  — postgres:16-alpine service + hiy-pg-data
                        volume; POSTGRES_URL injected into server
  .env.example        — POSTGRES_PASSWORD entry

server/
  Cargo.toml          — sqlx postgres feature
  src/db.rs           — databases table (SQLite) migration
  src/models.rs       — Database model
  src/main.rs         — PgPool (lazy) added to AppState;
                        /api/apps/:id/database routes registered
  src/routes/mod.rs   — databases module
  src/routes/databases.rs — GET / POST / DELETE handlers:
      provision  — creates schema + scoped PG user, sets search_path,
                   injects DATABASE_URL env var
      deprovision — DROP OWNED BY + DROP ROLE + DROP SCHEMA CASCADE,
                   removes SQLite record
  src/routes/ui.rs    — app_detail queries databases table, renders
                        db_card based on provisioning state
  templates/app_detail.html — {{db_card}} placeholder +
                              provisionDb / deprovisionDb JS

Apps connect via:
  postgres://hiy-<app>:<pw>@postgres:5432/hiy
search_path is set on the role so no URL option is needed.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 13:16:39 +00:00
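The provision/deprovision handlers likely amount to SQL along these lines (role and schema names follow the `hiy-<app>` convention from the message; the password is a placeholder):

```sql
-- provision: scoped role + isolated schema inside the shared 'hiy' database
CREATE ROLE "hiy-myapp" LOGIN PASSWORD 'generated-password';
CREATE SCHEMA "hiy-myapp" AUTHORIZATION "hiy-myapp";
-- search_path set on the role, so DATABASE_URL needs no options parameter
ALTER ROLE "hiy-myapp" IN DATABASE hiy SET search_path = "hiy-myapp";

-- deprovision
DROP OWNED BY "hiy-myapp";
DROP SCHEMA IF EXISTS "hiy-myapp" CASCADE;
DROP ROLE "hiy-myapp";
```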
Claude
c113b098e1
refactor: extract HTML/CSS/JS from ui.rs into template files
Move all inline markup out of ui.rs into server/templates/:
  styles.css       — shared stylesheet
  index.html       — dashboard page
  app_detail.html  — app detail page
  users.html       — users admin page

Templates are embedded at compile time via include_str! and rendered
with simple str::replace("{{placeholder}}", value) calls. JS/CSS
braces no longer need escaping, making the templates editable with
normal syntax highlighting.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 13:03:10 +00:00
Claude
84b5b2d028
feat: stay on main screen after triggering a deploy
Previously clicking Deploy redirected to the app detail page.
Now the page stays put and immediately flips the deploy badge to
"building". The existing 5-second status poller advances both the
deploy badge (building → success/failed) and the container badge
(→ running) without any manual refresh.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:56:11 +00:00
Claude
8267b30b15
fix: restore app reachability after platform restart
Two root causes:

1. Caddy was started without --resume, so every restart wiped all
   dynamically-registered app routes (only the base Caddyfile survived).
   Adding --resume makes Caddy reload its auto-saved config (stored in
   the caddy-config volume) which includes all app routes.

2. App routes used the container IP address, which changes whenever
   hiy-net is torn down and recreated by compose. Switch to the
   container name as the upstream dial address; Podman's aardvark-dns
   resolves it by name within hiy-net, so it stays valid across
   network recreations.

Together with the existing reconnect loop in start.sh these two
changes mean deployed apps survive a platform restart without needing
a redeploy.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:47:32 +00:00
Claude
a8e73df2c3
fix: reconnect app containers to hiy-net after platform restart
compose down destroys hiy-net and evicts running hiy-* containers
from it. compose up recreates the network but leaves those containers
disconnected, making them unreachable until a redeploy.

After compose up, reconnect all running hiy-* containers to hiy-net.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:42:26 +00:00
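The reconnect step might look like this sketch (assumes podman and the `hiy-*` / `hiy-net` naming used throughout this log):

```shell
# After `podman compose up -d`: re-attach every running app container
# to the freshly recreated hiy-net.
for c in $(podman ps --filter 'name=^hiy-' --format '{{.Names}}'); do
    podman network connect hiy-net "$c" 2>/dev/null || true
done
```

The `|| true` swallows the error podman raises when a container is already connected, making the loop safe to run on every start.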
Claude
31944d128b
remove: unnecessary app restart loop from start.sh
Without podman system migrate, compose down/up only touches infra
containers. Deployed hiy-* containers are never stopped during a
platform restart so they need no special handling there.

The restart loop stays in boot.sh where it is needed (system reboot
stops all containers).

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:36:03 +00:00
Claude
9fbbdf62ee
remove: podman system migrate (wrong tool for the wrong problem)
It was added to "pick up subuid/subgid mappings" but that's not what it
does — it migrates container storage after a Podman version upgrade.
Subuid/subgid changes are picked up by restarting the Podman socket,
which the script already does. The only effect of running it was stopping
all containers on every platform start.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:34:16 +00:00
Claude
852e3f6ccb
fix: restart deployed app containers after platform start
podman system migrate explicitly stops all containers, which overrides
the --restart unless-stopped policy set on deployed apps. After compose
up-d brings the infra stack back, any exited hiy-* container is now
restarted automatically.

Same logic added to boot.sh for the on-boot path.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:32:45 +00:00
Claude
88f6e02d4e
feat: auto-restart stack on boot via systemd user service
- Add infra/boot.sh: lightweight startup (no build) that brings up the
  Podman stack — used by the systemd unit on every system boot
- start.sh now installs/refreshes hiy.service (a systemd --user unit)
  and enables loginctl linger so it runs without an active login session

After the next `infra/start.sh` run the Pi will automatically restart
the stack after a reboot or power cut.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 12:22:34 +00:00
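A `systemd --user` unit along these lines would fit the description (unit contents and paths are assumptions):

```ini
# ~/.config/systemd/user/hiy.service
[Unit]
Description=HIY platform stack
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=%h/hiy/infra/boot.sh

[Install]
WantedBy=default.target
```

`loginctl enable-linger <user>` is what lets user units run without an active login session, as the commit notes.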
Claude
031c3bdd41
fix: defer podman system migrate to after the build to eliminate early downtime
podman system migrate was stopping all containers immediately (visible in
the terminal output as "stopped <id>" lines), before the build even began.

Moving it to just before compose down/up means running containers stay
alive for the entire duration of the image build.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 10:48:45 +00:00
Claude
a16ccdcef4
fix: build images before tearing down compose to reduce downtime
Old behaviour: compose down → long build → compose up
New behaviour: long build (service stays live) → compose down → compose up

Downtime is now limited to the few seconds of the swap instead of the
entire duration of the Rust/image build.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 10:43:36 +00:00
Claude
872efc74ce
fix: non-admin users with app grants can now push via HTTP
The user_apps check was silently failing because sqlx::query_scalar
without an explicit type annotation would hit a runtime decoding error,
which .unwrap_or(None) swallowed — always returning None → 403.

All three DB calls in check_push_access now use match + tracing::error!
so failures are visible in logs instead of looking like a missing grant.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 10:41:26 +00:00
Claude
5bc1948f1a
feat: make repo URL optional when creating an app
Useful for git-push-only deploys where no external repo URL is needed.
- CreateApp.repo_url: String → Option<String>
- DB schema default: repo_url TEXT NOT NULL DEFAULT ''
- UI validation no longer requires the field
- Label marked (optional) in the form

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 10:37:17 +00:00
Claude
bb26a81fe6
fix: add tracing + explicit types to check_push_access
Makes the exact failure reason (app not found vs. no access grant)
visible in `docker compose logs server`.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 09:57:18 +00:00
Claude
4504c22af8
fix: explicit SQLx type + debug tracing for HTTP git auth
Fixes potential silent failure where sqlx::query_scalar couldn't infer
the return type at runtime. Also adds step-by-step tracing so the exact
failure point (no header / bad base64 / key not found / db error) is
visible in `docker compose logs server`.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 08:33:10 +00:00
Claude
a2627a3e2f
chore: update Cargo.lock (add base64 0.22)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-23 12:59:30 +00:00
Claude
0c995f9a0a
feat: HTTP git push with API key auth
Replaces SSH as the primary git push path — no key generation needed.

  # Admin UI: Users → Generate key (shown once)
  git remote add hiy http://hiy:API_KEY@myserver/git/myapp
  git push hiy main

What was added:

- api_keys DB table (id, user_id, label, key_hash/SHA-256, created_at)
  Keys are stored as SHA-256 hashes; the plaintext is shown once on
  creation and never stored.

- routes/api_keys.rs
  GET/POST /api/users/:id/api-keys  — list / generate
  DELETE   /api/api-keys/:key_id    — revoke

- HTTP Smart Protocol endpoints (public, auth via Basic + API key)
    GET  /git/:app/info/refs        — ref advertisement
    POST /git/:app/git-receive-pack — receive pack, runs post-receive hook

  Authentication: HTTP Basic where the password is the API key.
  git prompts once and caches via the OS credential store.
  post-receive hook fires as normal and queues the build.

- Admin UI: API keys section per user with generate/revoke and a
  one-time reveal box showing the ready-to-use git remote command.

SSH path (git-shell + authorized_keys) is still functional for users
who prefer it; both paths feed the same build queue.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-23 12:59:02 +00:00
Claude
cb0795617f
feat: git push deploy (roadmap step 2)
Full self-contained git push flow — no GitHub required:

  git remote add hiy ssh://hiy@myserver/myapp
  git push hiy main

What was added:

- Bare git repo per app (HIY_DATA_DIR/repos/<app-id>.git)
  Initialised automatically on app create; removed on app delete.
  post-receive hook is written into each repo and calls the internal
  API to queue a build using the same pipeline as webhook deploys.

- SSH key management
  New ssh_keys DB table. Admin UI (/admin/users) now shows SSH keys
  per user with add/remove. New API routes:
    GET/POST /api/users/:id/ssh-keys
    DELETE   /api/ssh-keys/:key_id
  On every change, HIY rewrites HIY_SSH_AUTHORIZED_KEYS with
  command= restricted entries pointing at hiy-git-shell.

- scripts/git-shell
  SSH command= override installed at HIY_GIT_SHELL (default
  /usr/local/bin/hiy-git-shell). Validates the push via
  GET /internal/git/auth, then exec's git-receive-pack on the
  correct bare repo.

- Internal API routes (authenticated by shared internal_token)
    GET  /internal/git/auth          -- git-shell permission check
    POST /internal/git/:app_id/push  -- post-receive build trigger

- Builder: git-push deploys use file:// path to the local bare repo
  instead of the app's remote repo_url.

- internal_token persists across restarts in HIY_DATA_DIR/internal-token.

New env vars:
  HIY_SSH_AUTHORIZED_KEYS  path to the authorized_keys file to manage
  HIY_GIT_SHELL            path to the git-shell script on the host

Both webhook and git-push deploys feed the same build queue.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-23 08:54:55 +00:00
Claude
1671aaf8e8
fix: break infinite redirect for non-admin users on admin UI
Root cause: auth_middleware redirected all non-admins (including logged-in
ones) to /login, and login_page redirected logged-in users back — a loop.

Fix:
- auth_middleware now distinguishes unauthenticated (→ /login?next=) from
  logged-in-but-not-admin (→ /denied), breaking the loop entirely
- /denied page's "sign in with a different account" link now goes to /logout
  first, so clicking it clears the session before the login form appears

The login_page auto-redirect for logged-in users is restored, which is
required for the Caddy forward_auth flow (deployed apps redirecting through
/login?next=<app-url>).

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-23 08:24:41 +00:00
Claude
812c81104a
fix: prevent infinite redirect loop for non-admin users on admin UI
When a non-admin user with a valid session cookie visited an admin-protected
route, auth_middleware redirected them to /login?next=<admin-path>, and
login_page immediately redirected them back because they were "logged in",
causing an infinite redirect loop.

Fix: only skip the login page when the logged-in user is also an admin.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-23 07:50:32 +00:00
Claude
4aea0357b6
fix: app login redirect broken after forward_auth challenge
Two bugs:
1. verify() built the login URL with `//login` (double slash) — now `/login`
2. safe_path() rejected absolute https:// next-URLs, so after login the
   user was silently dropped at `/` instead of their original app URL.

Replaced safe_path with safe_redirect(next, domain) which allows relative
paths OR absolute URLs whose host is the configured domain (or a subdomain).
safe_path is kept as a thin wrapper (domain="") for the admin-UI middleware
where next is always a relative path.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-22 18:34:07 +00:00
Claude
c261f9d133
chore: gitignore generated proxy/caddy.json
caddy.json is generated by start.sh from .env (domain + ACME email)
and is environment-specific — it should not be version-controlled.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-22 18:18:08 +00:00
Claude
e7fd2a4365
fix: auto-enable cgroup swap accounting on Pi before starting containers
runc (used by Podman) always writes memory.swap.max when initializing the
cgroup v2 memory controller, even without explicit --memory flags. On
Raspberry Pi OS this file is absent because swap accounting is disabled
by default in the kernel, causing every container start to fail with:

  openat2 …/memory.swap.max: no such file or directory

start.sh now detects this condition early, patches the kernel cmdline
(cgroup_enable=memory cgroup_memory=1 swapaccount=1) in either
/boot/firmware/cmdline.txt (Pi OS Bookworm) or /boot/cmdline.txt
(older releases), and tells the user to reboot once before continuing.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-22 18:05:11 +00:00
Claude
63a1ae6065
Remove --memory limit to avoid memory.swap.max cgroup error on Pi
Raspberry Pi OS does not enable swap cgroup accounting by default.
Even --memory-swap=-1 causes runc to write "max" to memory.swap.max,
which fails with ENOENT when the file does not exist.

Removing --memory entirely means runc skips all memory.* cgroup writes.
--cpus is unaffected (uses cpu.max, which is always present).

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-22 15:23:49 +00:00
Claude
ffe76144fb
Fix container start failure on Pi: disable cgroup swap limit
Raspberry Pi OS does not enable swap accounting in cgroups by default,
so the memory.swap.max cgroup v2 file does not exist. Setting --memory
without --memory-swap causes runc to write a swap limit to that file,
which fails with ENOENT.

Adding --memory-swap=-1 tells runc to leave swap unlimited, skipping
the memory.swap.max write entirely.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-22 11:02:58 +00:00