Compare commits


43 commits

Author SHA1 Message Date
506912ff09 Merge pull request 'claude/heroku-clone-mvp-plan-NREhc' (#1) from claude/heroku-clone-mvp-plan-NREhc into main
Reviewed-on: #1
2026-03-29 07:24:39 +00:00
Shautvast
55e3f97946 Use patched rclone for proton drive integration 2026-03-28 14:35:36 +01:00
Claude
2b4f066234
fix: source .env at startup in backup.sh
Automatically loads HIY_BACKUP_DIR, HIY_BACKUP_REMOTE, HIY_BACKUP_RETAIN_DAYS
and other vars from .env so the cron job works without extra shell setup.
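The `set -a` pattern is the usual way to make sourcing cron-safe. A minimal sketch, demonstrated against a throwaway .env (in the real backup.sh, ENV_FILE would point at the repo's .env):

```shell
# Create a throwaway .env just for this demonstration.
tmpdir=$(mktemp -d)
printf 'HIY_BACKUP_DIR=%s/backups\nHIY_BACKUP_RETAIN_DAYS=30\n' "$tmpdir" > "$tmpdir/.env"

ENV_FILE="$tmpdir/.env"
if [ -f "$ENV_FILE" ]; then
  set -a        # auto-export every variable the file assigns
  . "$ENV_FILE"
  set +a
fi
echo "retain days: $HIY_BACKUP_RETAIN_DAYS"
```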

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 17:52:29 +00:00
Claude
7b37f88fb5
fix: use internal Forgejo URL for act_runner — avoids routing through public IP
The runner is on hiy-net and can reach Forgejo directly at http://forgejo:3000
rather than going out through the public IP and Caddy.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 15:35:13 +00:00
Claude
84c36c464d
fix: use forgejo-runner register --token instead of create-runner-file --secret
The UI registration token is not a hex string — create-runner-file --secret
expects a hex secret. Use the register subcommand with --token instead,
which accepts the token from the Forgejo UI directly.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 15:28:59 +00:00
Shautvast
868bfbc365 container name removed 2026-03-27 15:58:06 +01:00
Claude
4ac5700ac5
fix: add runner-entrypoint.sh to register Forgejo runner on first start
The data.forgejo.org/forgejo/runner image doesn't auto-register from
env vars — it needs create-runner-file called explicitly before the
daemon starts. The entrypoint handles registration on first run (no
/data/.runner file) then execs the daemon on all subsequent starts.
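The first-run guard can be sketched like this, with `forgejo-runner` stubbed so the control flow runs anywhere; the flags and paths are assumptions from the commit messages, not the real entrypoint:

```shell
forgejo-runner() { echo "forgejo-runner $*"; }   # stub for illustration

DATA_DIR=$(mktemp -d)       # stands in for /data
if [ ! -f "$DATA_DIR/.runner" ]; then
  forgejo-runner create-runner-file    # real script passes instance + secret
  touch "$DATA_DIR/.runner"            # real registration writes this file
fi
forgejo-runner daemon                  # real script: exec forgejo-runner daemon
```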

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 14:41:45 +00:00
Claude
3afdc66ec2
fix: correct Forgejo runner image — data.forgejo.org/forgejo/runner:6
code.forgejo.org is the source repo, not the container registry.
The OCI registry is data.forgejo.org and the image is 'runner', not 'act_runner'.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 14:37:02 +00:00
Claude
f9eacd03be
fix: add container_name to act_runner to bypass podman-compose naming bug
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 14:33:54 +00:00
Claude
99ab28d3be
fix: rename act-runner to act_runner — podman-compose chokes on hyphens in service names
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 14:30:30 +00:00
Claude
e80c3dc9a4
feat: add install.sh for fresh Raspberry Pi setup
Covers everything that was done manually on the Pi:
- apt packages: podman, aardvark-dns, sqlite3, git, uidmap, python3-pip
- podman-compose via pip (to ~/.local/bin)
- rclone (optional, prompted)
- .env creation from template with prompted values and generated passwords
- git upstream tracking for auto-update
- hands off to start.sh at the end

Safe to re-run — all steps are idempotent.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 13:52:28 +00:00
Claude
8561ee3e74
feat: add systemd timer for automatic git pull + service restart
auto-update.sh fetches origin every 5 minutes. If new commits are
found it pulls and selectively restarts only what changed:
- server/ or Cargo.*  → rebuild + restart server container
- docker-compose.yml  → full stack up -d
- proxy/Caddyfile     → caddy reload
- anything else       → no restart needed

start.sh now installs hiy-update.service + hiy-update.timer alongside
the existing hiy.service boot unit.
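The path-to-action mapping above can be expressed as a shell case statement; the function name and action labels here are invented for illustration, not taken from auto-update.sh:

```shell
restart_action() {
  case "$1" in
    server/*|Cargo.*)    echo rebuild-server ;;
    docker-compose.yml)  echo stack-up ;;
    proxy/Caddyfile)     echo caddy-reload ;;
    *)                   echo none ;;
  esac
}
restart_action server/src/main.rs    # rebuild-server
restart_action proxy/Caddyfile       # caddy-reload
restart_action docs/guide.md         # none
```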

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 13:47:12 +00:00
Claude
4ef77bf255
feat: add Forgejo Actions runner (act_runner) to docker-compose
Adds the act-runner service alongside Forgejo. It connects to the
Podman socket proxy so CI jobs can build and run containers on the Pi.

Also enables FORGEJO__actions__ENABLED on the Forgejo service.

FORGEJO_RUNNER_TOKEN must be set in .env — obtain it from:
  Forgejo → Site Administration → Actions → Runners → Create new runner

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 13:33:00 +00:00
Claude
fa12b80638
chore: update Cargo.lock
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 10:30:09 +00:00
Claude
c7af43ab33
feat: restart stopped app containers on server startup
Adds restore_app_containers() which runs at startup alongside
restore_caddy_routes(). For each app with a successful deploy it
inspects the container state via `podman inspect` and runs
`podman start` if the container is exited (e.g. after a host reboot).
Missing containers are logged as warnings requiring a manual redeploy.
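The same inspect-then-start flow, sketched in shell (the real restore_app_containers() is Rust in the HIY server); `podman` is stubbed here so the sketch runs anywhere, and the container name is a placeholder:

```shell
podman() {
  case "$1" in
    inspect) echo exited ;;          # pretend the container exists, stopped
    start)   echo "started: $2" ;;
  esac
}

name="hiy-myapp"
if state=$(podman inspect -f '{{.State.Status}}' "$name" 2>/dev/null); then
  [ "$state" = exited ] && podman start "$name"
else
  echo "warn: $name missing, redeploy manually"
fi
```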

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 10:29:52 +00:00
Claude
0fb3a6bfe1
fix: add PATH to systemd service so podman-compose is found at boot
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-27 10:23:38 +00:00
Claude
b7430cbb65
fix: add --transfers 1 --retries 5 to rclone — workaround for Proton Drive parallel upload bug
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 15:44:46 +00:00
Claude
84ac8f3b9f
fix: copy hiy.db out of container before dumping — server image has no sqlite3
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 15:32:26 +00:00
Claude
e8d303f184
feat: extend backup script and add restore script
backup.sh now covers all data:
- SQLite via podman exec into server container (fallback to host path)
- Postgres via pg_dumpall inside postgres container
- Forgejo data volume via podman volume export
- Caddy TLS certificates via podman volume export
- .env file (plaintext secrets — store archive securely)

restore.sh reverses each step: imports volumes, restores Postgres,
restores SQLite, optionally restores .env (--force to overwrite).

Both scripts find containers dynamically via compose service labels
so they work regardless of the container name podman-compose assigns.

.env.example documents HIY_BACKUP_DIR, HIY_BACKUP_REMOTE,
HIY_BACKUP_RETAIN_DAYS.
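A label-based lookup along those lines might look like this; `podman` is stubbed so the sketch runs standalone, and the exact label key podman-compose sets is an assumption here:

```shell
podman() { echo "hiy_postgres_1"; }   # stub for the ps call below

find_service_container() {
  podman ps -a --format '{{.Names}}' \
    --filter "label=io.podman.compose.service=$1" | head -n1
}

pg=$(find_service_container postgres)
echo "postgres container: $pg"
```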

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 15:06:16 +00:00
Claude
d3ef4d2030
fix: use /bin/sh in postgres init script — Alpine has no bash
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:47:58 +00:00
Claude
de4b5c49ab
fix: drop service_healthy depends_on — podman-compose doesn't support it
Forgejo restart: unless-stopped handles the retry loop until Postgres is ready.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:41:27 +00:00
Claude
bd863cdf33
fix: hardcode pg_isready args to avoid podman-compose $$ escaping issue
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:33:57 +00:00
Claude
22a6ab103c
fix: wait for Postgres to be ready before starting Forgejo
Adds a pg_isready healthcheck to the postgres service and upgrades the
Forgejo depends_on to condition: service_healthy, preventing the
"connection refused" crash on startup.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:27:10 +00:00
Claude
ea172ae336
feat: lock Forgejo install wizard via env var
Sets FORGEJO__security__INSTALL_LOCK=true so Forgejo skips the first-run
wizard and uses the env var configuration directly.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:21:09 +00:00
Claude
36b89d7620
fix: use FORGEJO_DB_PASSWORD env var in postgres init script
Replaced hardcoded 'CHANGE_ME' in the SQL init file with a shell script
that reads FORGEJO_DB_PASSWORD from the environment. Also pass the variable
into the postgres service in docker-compose.yml so it is available at init time.
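The substitution idea can be sketched as a /bin/sh script run by the postgres entrypoint; the SQL and user names follow the commit messages, and the psql invocation is commented out so the sketch runs standalone:

```shell
FORGEJO_DB_PASSWORD="${FORGEJO_DB_PASSWORD:-example-password}"   # from compose env
sql=$(cat <<EOSQL
CREATE USER forgejo WITH PASSWORD '${FORGEJO_DB_PASSWORD}';
CREATE DATABASE forgejo OWNER forgejo;
EOSQL
)
printf '%s\n' "$sql"
# real script: printf '%s' "$sql" | psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER"
```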

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 11:11:53 +00:00
Claude
9ba81bd809
fix: drop Caddy --resume, restore app routes from DB on startup
--resume caused Caddyfile changes (e.g. new Forgejo block) to be silently
ignored on restart because Caddy preferred its saved in-memory config.

Instead, Caddy now always starts clean from the Caddyfile, and the HIY
server re-registers every app's Caddy route from the DB on startup
(restore_caddy_routes). This gives us the best of both worlds:
- Caddyfile changes (static services, TLS config) are always picked up
- App routes are restored automatically without needing a redeploy

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:56:04 +00:00
Claude
97929c11de
fix: add static Caddyfile block for Forgejo (forgejo:3000, not hiy-forgejo)
Forgejo is a docker-compose service, not a HIY-deployed container. HIY's
dynamic routing uses the hiy-<id>:<port> naming convention which doesn't
match. A static block pointing to forgejo:3000 is the correct approach.

FORGEJO_DOMAIN falls back to forgejo.localhost so Caddy starts cleanly
on installs that don't use Forgejo.
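A static block with that fallback might look like the following Caddyfile sketch (the real Caddyfile may differ; `{$VAR:default}` is Caddy's env placeholder with a default):

```caddyfile
{$FORGEJO_DOMAIN:forgejo.localhost} {
    reverse_proxy forgejo:3000
}
```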

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:49:19 +00:00
Claude
06a8cc189a
fix: remove docker.io/ prefix from Forgejo image (Codeberg registry)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:46:09 +00:00
Claude
b6e223291a
feat: add Forgejo service + Postgres database provisioning
- docker-compose.yml: Forgejo service on hiy-net, configured via env vars
- postgres-init/01-forgejo.sql: creates forgejo user + database on first Postgres init
- .env.example: document FORGEJO_DB_PASSWORD and FORGEJO_DOMAIN

Routing: add FORGEJO_DOMAIN as an app in HIY pointing to forgejo:3000,
or add a Caddyfile block manually.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 10:44:19 +00:00
Claude
54ceedbe5a
feat: add Settings card to app detail page (repo, branch, port, memory, cpu)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 09:45:52 +00:00
Claude
eb9a500987
feat: per-app public/private visibility toggle
Apps default to private (require login). Marking an app public bypasses
the forward_auth check so anyone can access it without logging in.

Changes:
- db.rs: is_public INTEGER NOT NULL DEFAULT 0 column (idempotent)
- models.rs: is_public: i64 on App; is_public: Option<bool> on UpdateApp
- Cargo.toml: add reqwest for Caddy admin API calls from Rust
- routes/apps.rs: PATCH is_public → save flag + immediately push updated
  Caddy route (no redeploy needed); caddy_route() builds correct JSON for
  both public (plain reverse_proxy) and private (forward_auth) cases
- builder.rs: pass IS_PUBLIC env var to build.sh
- build.sh: use IS_PUBLIC to select route type on deploy
- ui.rs + app_detail.html: private/public badge + toggle button in subtitle

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:55:58 +00:00
Claude
c7ed5cfe95
fix: use x-access-token username for HTTPS git auth (GitHub compatibility)
x-token-auth is Bitbucket/Gitea-specific; GitHub doesn't recognise it and
returns a misleading 403 'Write access not granted'. x-access-token is the
username GitHub documents for PAT auth and is also accepted by GitLab/Gitea.
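The resulting clone URL is a pure string rewrite; the token and repository here are placeholders:

```shell
GIT_TOKEN="ghp_EXAMPLE"
url="https://github.com/acme/app.git"
auth_url="https://x-access-token:${GIT_TOKEN}@${url#https://}"
echo "$auth_url"   # https://x-access-token:ghp_EXAMPLE@github.com/acme/app.git
```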

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:41:54 +00:00
Claude
def40aa7f9
fix: register PATCH on /api/apps/:id (JS was sending PATCH, route only had PUT)
https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:38:22 +00:00
Claude
4fb8c6b2c7
feat: git token management in app detail UI
Adds a 'Git Authentication' card to the app detail page with:
- Status badge (Token configured / No token)
- Password input to set/update the token
- Clear button (only shown when a token is stored)

Token is saved/cleared via PATCH /api/apps/:id — no new endpoints needed.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:32:58 +00:00
Claude
0b3cbf8734
feat: private repo support via encrypted git token
- db.rs: add nullable git_token column (idempotent ALTER TABLE ADD COLUMN)
- models.rs: git_token on App (#[serde(skip_serializing)]), CreateApp, UpdateApp
- routes/apps.rs: encrypt token on create/update; empty string clears it
- builder.rs: decrypt token, pass as GIT_TOKEN env var to build script
- build.sh: GIT_TERMINAL_PROMPT=0 (fail fast, not hang); when GIT_TOKEN is
  set, inject it into the HTTPS clone URL as x-token-auth; strip credentials
  from .git/config after clone/fetch so the token is never persisted to disk

Token usage: PATCH /api/apps/:id with {"git_token": "ghp_..."}
Clear token:  PATCH /api/apps/:id with {"git_token": ""}

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-26 08:24:55 +00:00
Claude
73ea7320fd
fix: use Caddy internal CA when ACME_EMAIL is not set
DOMAIN_SUFFIX=local (or any non-localhost LAN name) caused a TLS handshake
failure because Caddy attempted an ACME challenge that can never succeed for
private domains.

- Caddyfile: tls {$ACME_EMAIL:internal} — falls back to Caddy's built-in CA
  when ACME_EMAIL is absent, uses Let's Encrypt when it is set.
- start.sh: ACME_EMAIL is now optional; missing it prints a warning instead
  of aborting, so local/LAN setups work without an email address.

To trust the self-signed cert in a browser run: caddy trust

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-25 22:09:00 +00:00
Claude
60f5df52f7
fix: copy server/templates into build image for include_str! macros
include_str!("../../templates/...") is resolved at compile time, so the
template files must be present in the Docker build context. The previous
Dockerfile only copied server/src, not server/templates.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:29:15 +00:00
Claude
0bd7b44b81
fix: drop cross-compilation, build natively in Dockerfile
podman-compose does not populate BUILDPLATFORM/TARGETARCH build args, so
the platform-detection logic always fell back to x86_64 — even on arm64.
This caused cc-rs to look for 'x86_64-linux-gnu-gcc' instead of 'gcc'.

Replace the entire cross-compile scaffolding with a plain native build:
  cargo build --release (no --target)

Cargo targets the host platform automatically. If cross-compilation is
ever needed it can be reintroduced with a properly-tested setup.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:25:48 +00:00
Claude
a873049e96
fix: install gcc and configure native x86_64 linker in build image
rust:slim-bookworm doesn't include gcc, and aes-gcm's build deps (via
cc-rs) need a C compiler. With --target x86_64-unknown-linux-gnu set
explicitly, cc-rs looks for the cross-compiler 'x86_64-linux-gnu-gcc'
instead of native 'gcc'.

Fix: install gcc in the build stage and add a [target.x86_64-*] linker
entry pointing to 'gcc' so cc-rs finds it on native x86_64 builds.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:23:02 +00:00
Claude
f50492f132
fix: fully-qualify all image names for Podman without search registries
Podman without unqualified-search registries configured in
/etc/containers/registries.conf refuses to resolve short image names.
Prefix every image with docker.io/library/ (official images) or
docker.io/<org>/ (third-party) so pulls succeed unconditionally.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:20:22 +00:00
Claude
b23e02f2d2
fix: declare default network for podman-compose compatibility
podman-compose requires all networks referenced in service configs to be
explicitly declared in the top-level networks block. Docker Compose
creates the default network implicitly, but podman-compose errors with
'missing networks: default'.

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 16:18:46 +00:00
Claude
48b9ccf152
feat: M4 Hardening — encryption, resource limits, monitoring, backups
## Env var encryption at rest (AES-256-GCM)
- server/src/crypto.rs: new module — encrypt/decrypt with AES-256-GCM
  Key = SHA-256(HIY_SECRET_KEY); non-prefixed values pass through
  transparently for zero-downtime migration
- Cargo.toml: aes-gcm = "0.10"
- routes/envvars.rs: encrypt on SET; list returns masked values (••••)
- routes/databases.rs: pg_password and DATABASE_URL stored encrypted
- routes/ui.rs: decrypt pg_password when rendering DB card
- builder.rs: decrypt env vars when writing the .env file for containers
- .env.example: add HIY_SECRET_KEY entry

## Per-app resource limits
- apps table: memory_limit (default 512m) + cpu_limit (default 0.5)
  added via idempotent ALTER TABLE in db.rs migration
- models.rs: App, CreateApp, UpdateApp gain memory_limit + cpu_limit
- routes/apps.rs: persist limits on create, update via PUT
- builder.rs: pass MEMORY_LIMIT + CPU_LIMIT to build script
- builder/build.sh: use $MEMORY_LIMIT / $CPU_LIMIT in podman run
  (replaces hardcoded --cpus="0.5"; --memory now also set)

## Monitoring (opt-in compose profile)
- infra/docker-compose.yml: gatus + netdata under `monitoring` profile
  Enable: podman compose --profile monitoring up -d
  Gatus on :8080, Netdata on :19999
- infra/gatus.yml: Gatus config checking HIY /api/status every minute

## Backup cron job
- infra/backup.sh: dumps SQLite, copies env files + git repos into a
  dated .tar.gz; optional rclone upload; 30-day local retention
  Suggested cron: 0 3 * * * /path/to/infra/backup.sh

https://claude.ai/code/session_01FKCW3FDjNFj6jve4niMFXH
2026-03-24 15:06:42 +00:00
Sander Hautvast
92d37d9199
Merge pull request #2 from shautvast/main
Merge pull request #1 from shautvast/claude/heroku-clone-mvp-plan-NREhc
2026-03-24 15:56:53 +01:00
28 changed files with 1747 additions and 155 deletions

Cargo.lock (generated): 448 changes

@ -2,6 +2,41 @@
# It is not intended for manual editing.
version = 4
[[package]]
name = "aead"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d122413f284cf2d62fb1b7db97e02edb8cda96d769b16e443a4f6195e35662b0"
dependencies = [
"crypto-common",
"generic-array",
]
[[package]]
name = "aes"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b169f7a6d4742236a0a00c541b845991d0ac43e546831af1249753ab4c3aa3a0"
dependencies = [
"cfg-if",
"cipher",
"cpufeatures",
]
[[package]]
name = "aes-gcm"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "831010a0f742e1209b3bcea8fab6a8e149051ba6099432c8cb2cc117dec3ead1"
dependencies = [
"aead",
"aes",
"cipher",
"ctr",
"ghash",
"subtle",
]
[[package]]
name = "ahash"
version = "0.8.12"
@ -259,6 +294,12 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"
[[package]]
name = "cfg_aliases"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chrono"
version = "0.4.44"
@ -341,9 +382,19 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78c8292055d1c1df0cce5d180393dc8cce0abec0a7102adb6c7b1eef6016d60a"
dependencies = [
"generic-array",
"rand_core 0.6.4",
"typenum",
]
[[package]]
name = "ctr"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0369ee1ad671834580515889b80f2ea915f23b8be8d0daa4bbaf2ac5c7590835"
dependencies = [
"cipher",
]
[[package]]
name = "der"
version = "0.7.10"
@ -580,8 +631,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0"
dependencies = [
"cfg-if",
"js-sys",
"libc",
"wasi",
"wasm-bindgen",
]
[[package]]
@ -591,9 +644,11 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "899def5c37c4fd7b2664648c28120ecec138e4d395b459e5ca34f9cce2dd77fd"
dependencies = [
"cfg-if",
"js-sys",
"libc",
"r-efi 5.3.0",
"wasip2",
"wasm-bindgen",
]
[[package]]
@ -609,6 +664,16 @@ dependencies = [
"wasip3",
]
[[package]]
name = "ghash"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0d8a4362ccb29cb0b265253fb0a2728f592895ee6854fd9bc13f2ffda266ff1"
dependencies = [
"opaque-debug",
"polyval",
]
[[package]]
name = "hashbrown"
version = "0.14.5"
@ -668,6 +733,7 @@ checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
name = "hiy-server"
version = "0.1.0"
dependencies = [
"aes-gcm",
"anyhow",
"async-stream",
"axum",
@ -678,12 +744,13 @@ dependencies = [
"futures",
"hex",
"hmac",
"reqwest",
"serde",
"serde_json",
"sha2",
"sqlx",
"tokio",
"tower-http",
"tower-http 0.5.2",
"tracing",
"tracing-subscriber",
"uuid",
@ -780,6 +847,24 @@ dependencies = [
"pin-utils",
"smallvec",
"tokio",
"want",
]
[[package]]
name = "hyper-rustls"
version = "0.27.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3c93eb611681b207e1fe55d5a71ecf91572ec8a6705cdb6857f7d8d5242cf58"
dependencies = [
"http",
"hyper",
"hyper-util",
"rustls 0.23.37",
"rustls-pki-types",
"tokio",
"tokio-rustls",
"tower-service",
"webpki-roots 1.0.6",
]
[[package]]
@ -788,13 +873,21 @@ version = "0.1.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96547c2556ec9d12fb1578c4eaf448b04993e7fb79cbaad930a656880a6bdfa0"
dependencies = [
"base64 0.22.1",
"bytes",
"futures-channel",
"futures-util",
"http",
"http-body",
"hyper",
"ipnet",
"libc",
"percent-encoding",
"pin-project-lite",
"socket2",
"tokio",
"tower-service",
"tracing",
]
[[package]]
@ -950,6 +1043,22 @@ dependencies = [
"generic-array",
]
[[package]]
name = "ipnet"
version = "2.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d98f6fed1fde3f8c21bc40a1abb88dd75e67924f9cffc3ef95607bad8017f8e2"
[[package]]
name = "iri-string"
version = "0.7.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d8e7418f59cc01c88316161279a7f665217ae316b388e58a0d10e29f54f1e5eb"
dependencies = [
"memchr",
"serde",
]
[[package]]
name = "itoa"
version = "1.0.17"
@ -1043,6 +1152,12 @@ version = "0.4.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897"
[[package]]
name = "lru-slab"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154"
[[package]]
name = "matchers"
version = "0.2.0"
@ -1127,7 +1242,7 @@ dependencies = [
"num-integer",
"num-iter",
"num-traits",
"rand",
"rand 0.8.5",
"smallvec",
"zeroize",
]
@ -1168,6 +1283,12 @@ version = "1.21.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f7c3e4beb33f85d45ae3e3a1792185706c8e16d043238c593331cc7cd313b50"
[[package]]
name = "opaque-debug"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08d65885ee38876c4f86fa503fb49d7b507c2b62552df7c70b2fce627e06381"
[[package]]
name = "parking_lot"
version = "0.12.5"
@ -1257,6 +1378,18 @@ version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4596b6d070b27117e987119b4dac604f3c58cfb0b191112e24771b2faeac1a6"
[[package]]
name = "polyval"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d1fe60d06143b2430aa532c94cfe9e29783047f06c0d7fd359a9a51b729fa25"
dependencies = [
"cfg-if",
"cpufeatures",
"opaque-debug",
"universal-hash",
]
[[package]]
name = "potential_utf"
version = "0.1.4"
@ -1294,6 +1427,61 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "quinn"
version = "0.11.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9e20a958963c291dc322d98411f541009df2ced7b5a4f2bd52337638cfccf20"
dependencies = [
"bytes",
"cfg_aliases",
"pin-project-lite",
"quinn-proto",
"quinn-udp",
"rustc-hash",
"rustls 0.23.37",
"socket2",
"thiserror 2.0.18",
"tokio",
"tracing",
"web-time",
]
[[package]]
name = "quinn-proto"
version = "0.11.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "434b42fec591c96ef50e21e886936e66d3cc3f737104fdb9b737c40ffb94c098"
dependencies = [
"bytes",
"getrandom 0.3.4",
"lru-slab",
"rand 0.9.2",
"ring",
"rustc-hash",
"rustls 0.23.37",
"rustls-pki-types",
"slab",
"thiserror 2.0.18",
"tinyvec",
"tracing",
"web-time",
]
[[package]]
name = "quinn-udp"
version = "0.5.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "addec6a0dcad8a8d96a771f815f0eaf55f9d1805756410b39f5fa81332574cbd"
dependencies = [
"cfg_aliases",
"libc",
"once_cell",
"socket2",
"tracing",
"windows-sys 0.52.0",
]
[[package]]
name = "quote"
version = "1.0.45"
@ -1322,8 +1510,18 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [
"libc",
"rand_chacha",
"rand_core",
"rand_chacha 0.3.1",
"rand_core 0.6.4",
]
[[package]]
name = "rand"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
dependencies = [
"rand_chacha 0.9.0",
"rand_core 0.9.5",
]
[[package]]
@ -1333,7 +1531,17 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
dependencies = [
"ppv-lite86",
"rand_core",
"rand_core 0.6.4",
]
[[package]]
name = "rand_chacha"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core 0.9.5",
]
[[package]]
@ -1345,6 +1553,15 @@ dependencies = [
"getrandom 0.2.17",
]
[[package]]
name = "rand_core"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76afc826de14238e6e8c374ddcc1fa19e374fd8dd986b0d2af0d02377261d83c"
dependencies = [
"getrandom 0.3.4",
]
[[package]]
name = "redox_syscall"
version = "0.5.18"
@ -1380,6 +1597,44 @@ version = "0.8.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc897dd8d9e8bd1ed8cdad82b5966c3e0ecae09fb1907d58efaa013543185d0a"
[[package]]
name = "reqwest"
version = "0.12.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147"
dependencies = [
"base64 0.22.1",
"bytes",
"futures-core",
"http",
"http-body",
"http-body-util",
"hyper",
"hyper-rustls",
"hyper-util",
"js-sys",
"log",
"percent-encoding",
"pin-project-lite",
"quinn",
"rustls 0.23.37",
"rustls-pki-types",
"serde",
"serde_json",
"serde_urlencoded",
"sync_wrapper",
"tokio",
"tokio-rustls",
"tower",
"tower-http 0.6.8",
"tower-service",
"url",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
"webpki-roots 1.0.6",
]
[[package]]
name = "ring"
version = "0.17.14"
@ -1407,13 +1662,19 @@ dependencies = [
"num-traits",
"pkcs1",
"pkcs8",
"rand_core",
"rand_core 0.6.4",
"signature",
"spki",
"subtle",
"zeroize",
]
[[package]]
name = "rustc-hash"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d"
[[package]]
name = "rustix"
version = "1.1.4"
@ -1434,10 +1695,24 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f56a14d1f48b391359b22f731fd4bd7e43c97f3c50eee276f3aa09c94784d3e"
dependencies = [
"ring",
"rustls-webpki",
"rustls-webpki 0.101.7",
"sct",
]
[[package]]
name = "rustls"
version = "0.23.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "758025cb5fccfd3bc2fd74708fd4682be41d99e5dff73c377c0646c6012c73a4"
dependencies = [
"once_cell",
"ring",
"rustls-pki-types",
"rustls-webpki 0.103.10",
"subtle",
"zeroize",
]
[[package]]
name = "rustls-pemfile"
version = "1.0.4"
@ -1447,6 +1722,16 @@ dependencies = [
"base64 0.21.7",
]
[[package]]
name = "rustls-pki-types"
version = "1.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be040f8b0a225e40375822a563fa9524378b9d63112f53e19ffff34df5d33fdd"
dependencies = [
"web-time",
"zeroize",
]
[[package]]
name = "rustls-webpki"
version = "0.101.7"
@ -1457,6 +1742,17 @@ dependencies = [
"untrusted",
]
[[package]]
name = "rustls-webpki"
version = "0.103.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
dependencies = [
"ring",
"rustls-pki-types",
"untrusted",
]
[[package]]
name = "rustversion"
version = "1.0.22"
@ -1611,7 +1907,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77549399552de45a898a580c1b41d445bf730df867cc44e6c0233bbc4b8329de"
dependencies = [
"digest",
"rand_core",
"rand_core 0.6.4",
]
[[package]]
@ -1706,19 +2002,19 @@ dependencies = [
"once_cell",
"paste",
"percent-encoding",
"rustls",
"rustls 0.21.12",
"rustls-pemfile",
"serde",
"serde_json",
"sha2",
"smallvec",
"sqlformat",
"thiserror",
"thiserror 1.0.69",
"tokio",
"tokio-stream",
"tracing",
"url",
"webpki-roots",
"webpki-roots 0.25.4",
]
[[package]]
@ -1790,7 +2086,7 @@ dependencies = [
"memchr",
"once_cell",
"percent-encoding",
"rand",
"rand 0.8.5",
"rsa",
"serde",
"sha1",
@ -1798,7 +2094,7 @@ dependencies = [
"smallvec",
"sqlx-core",
"stringprep",
"thiserror",
"thiserror 1.0.69",
"tracing",
"whoami",
]
@ -1830,14 +2126,14 @@ dependencies = [
"md-5",
"memchr",
"once_cell",
"rand",
"rand 0.8.5",
"serde",
"serde_json",
"sha2",
"smallvec",
"sqlx-core",
"stringprep",
"thiserror",
"thiserror 1.0.69",
"tracing",
"whoami",
]
@ -1916,6 +2212,9 @@ name = "sync_wrapper"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263"
dependencies = [
"futures-core",
]
[[package]]
name = "synstructure"
@ -1947,7 +2246,16 @@ version = "1.0.69"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52"
dependencies = [
"thiserror-impl",
"thiserror-impl 1.0.69",
]
[[package]]
name = "thiserror"
version = "2.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4288b5bcbc7920c07a1149a35cf9590a2aa808e0bc1eafaade0b80947865fbc4"
dependencies = [
"thiserror-impl 2.0.18",
]
[[package]]
@ -1961,6 +2269,17 @@ dependencies = [
"syn 2.0.117",
]
[[package]]
name = "thiserror-impl"
version = "2.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc4ee7f67670e9b64d05fa4253e753e016c6c95ff35b89b7941d6b856dec1d5"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.117",
]
[[package]]
name = "thread_local"
version = "1.1.9"
@ -2023,6 +2342,16 @@ dependencies = [
"syn 2.0.117",
]
[[package]]
name = "tokio-rustls"
version = "0.26.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1729aa945f29d91ba541258c8df89027d5792d85a8841fb65e8bf0f4ede4ef61"
dependencies = [
"rustls 0.23.37",
"tokio",
]
[[package]]
name = "tokio-stream"
version = "0.1.18"
@ -2067,6 +2396,24 @@ dependencies = [
"tracing",
]
[[package]]
name = "tower-http"
version = "0.6.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8"
dependencies = [
"bitflags",
"bytes",
"futures-util",
"http",
"http-body",
"iri-string",
"pin-project-lite",
"tower",
"tower-layer",
"tower-service",
]
[[package]]
name = "tower-layer"
version = "0.3.3"
@ -2141,6 +2488,12 @@ dependencies = [
"tracing-log",
]
[[package]]
name = "try-lock"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
[[package]]
name = "typenum"
version = "1.19.0"
@ -2192,6 +2545,16 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39ec24b3121d976906ece63c9daad25b85969647682eee313cb5779fdd69e14e"
[[package]]
name = "universal-hash"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc1de2c688dc15305988b563c3854064043356019f97a4b46276fe734c4f07ea"
dependencies = [
"crypto-common",
"subtle",
]
[[package]]
name = "untrusted"
version = "0.9.0"
@ -2207,6 +2570,7 @@ dependencies = [
"form_urlencoded",
"idna",
"percent-encoding",
"serde",
]
[[package]]
@ -2250,6 +2614,15 @@ version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
[[package]]
name = "want"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfa7760aed19e106de2c7c0b581b509f2f25d3dacaf737cb82ac61bc6d760b0e"
dependencies = [
"try-lock",
]
[[package]]
name = "wasi"
version = "0.11.1+wasi-snapshot-preview1"
@ -2293,6 +2666,20 @@ dependencies = [
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-futures"
version = "0.4.64"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9c5522b3a28661442748e09d40924dfb9ca614b21c00d3fd135720e48b67db8"
dependencies = [
"cfg-if",
"futures-util",
"js-sys",
"once_cell",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.114"
@ -2359,12 +2746,41 @@ dependencies = [
"semver",
]
[[package]]
name = "web-sys"
version = "0.3.91"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "854ba17bb104abfb26ba36da9729addc7ce7f06f5c0f90f3c391f8461cca21f9"
dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "web-time"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a6580f308b1fad9207618087a65c04e7a10bc77e02c8e84e9b00dd4b12fa0bb"
dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "webpki-roots"
version = "0.25.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f20c57d8d7db6d3b86154206ae5d8fba62dd39573114de97c2cb0578251f8e1"
[[package]]
name = "webpki-roots"
version = "1.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22cfaf3c063993ff62e73cb4311efde4db1efb31ab78a3e5c457939ad5cc0bed"
dependencies = [
"rustls-pki-types",
]
[[package]]
name = "whoami"
version = "1.6.1"


@ -8,4 +8,5 @@
* Runs on your hardware (linux vm/host)
* Integrate with Git using GitHub webhooks or add your own git remote
* Automatic redeployment after git push
* Built-in SSL, automatically provisioned using Let's Encrypt.
* Caddy reverse proxy


@ -2,8 +2,16 @@
# HIY Build Engine
# Environment variables injected by hiy-server:
# APP_ID, APP_NAME, REPO_URL, BRANCH, PORT, ENV_FILE, SHA, BUILD_DIR
# MEMORY_LIMIT (e.g. "512m"), CPU_LIMIT (e.g. "0.5")
set -euo pipefail
# Never prompt for git credentials — fail immediately if auth is missing.
export GIT_TERMINAL_PROMPT=0
# Defaults — overridden by per-app settings stored in the control plane.
MEMORY_LIMIT="${MEMORY_LIMIT:-512m}"
CPU_LIMIT="${CPU_LIMIT:-0.5}"
log() { echo "[hiy] $*"; }
log "=== HostItYourself Build Engine ==="
@ -13,17 +21,33 @@ log "Branch: $BRANCH"
log "Build dir: $BUILD_DIR"
# ── 1. Clone or pull ───────────────────────────────────────────────────────────
# Build an authenticated URL when a git token is set (private repos).
# GIT_TOKEN is passed by hiy-server and never echoed here.
CLONE_URL="$REPO_URL"
if [ -n "${GIT_TOKEN:-}" ]; then
case "$REPO_URL" in
https://*)
CLONE_URL="https://x-access-token:${GIT_TOKEN}@${REPO_URL#https://}"
;;
esac
fi
mkdir -p "$BUILD_DIR"
cd "$BUILD_DIR"
if [ -d ".git" ]; then
log "Updating existing clone…"
git remote set-url origin "$CLONE_URL"
git fetch origin "$BRANCH" --depth=50
git checkout "$BRANCH"
git reset --hard "origin/$BRANCH"
# Strip credentials from the stored remote so they don't sit in .git/config.
git remote set-url origin "$REPO_URL"
else
log "Cloning repository…"
git clone --depth=50 --branch "$BRANCH" "$REPO_URL" .
git clone --depth=50 --branch "$BRANCH" "$CLONE_URL" .
# Strip credentials from the stored remote so they don't sit in .git/config.
git remote set-url origin "$REPO_URL"
fi
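The `CLONE_URL` rewrite above can be exercised in isolation. The repo URL and token below are placeholders, not values from the real control plane:

```shell
REPO_URL="https://git.example.com/alice/app.git"   # placeholder
GIT_TOKEN="s3cr3t"                                 # placeholder
# Same parameter expansion as the build script: strip the https:// scheme,
# then prepend the token as HTTP basic credentials.
CLONE_URL="https://x-access-token:${GIT_TOKEN}@${REPO_URL#https://}"
echo "$CLONE_URL"
# → https://x-access-token:s3cr3t@git.example.com/alice/app.git
```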
ACTUAL_SHA=$(git rev-parse HEAD)
@ -105,7 +129,8 @@ podman run --detach \
--label "hiy.app=${APP_ID}" \
--label "hiy.port=${PORT}" \
--restart unless-stopped \
--cpus="0.5" \
--memory="${MEMORY_LIMIT}" \
--cpus="${CPU_LIMIT}" \
"$IMAGE_TAG"
# ── 6. Update Caddy via its admin API ─────────────────────────────────────────
@ -124,11 +149,20 @@ if curl --silent --fail "${CADDY_API}/config/" >/dev/null 2>&1; then
else
ROUTES_URL="${CADDY_API}/config/apps/http/servers/${CADDY_SERVER}/routes"
# Route JSON uses Caddy's forward_auth pattern:
# 1. HIY server checks the session cookie and app-level permission at /auth/verify
# 2. On 2xx → Caddy proxies to the app container
# 3. On anything else (e.g. 302 redirect to /login) → Caddy passes through to the client
ROUTE_JSON=$(python3 -c "
# Route JSON: public apps use plain reverse_proxy; private apps use forward_auth.
if [ "${IS_PUBLIC:-0}" = "1" ]; then
ROUTE_JSON=$(python3 -c "
import json, sys
upstream = sys.argv[1]
app_host = sys.argv[2]
route = {
'match': [{'host': [app_host]}],
'handle': [{'handler': 'reverse_proxy', 'upstreams': [{'dial': upstream}]}]
}
print(json.dumps(route))
" "${UPSTREAM}" "${APP_ID}.${DOMAIN_SUFFIX}")
else
ROUTE_JSON=$(python3 -c "
import json, sys
upstream = sys.argv[1]
app_host = sys.argv[2]
@ -162,6 +196,7 @@ route = {
}
print(json.dumps(route))
" "${UPSTREAM}" "${APP_ID}.${DOMAIN_SUFFIX}")
fi
# Upsert the route for this app.
ROUTES=$(curl --silent --fail "${ROUTES_URL}" 2>/dev/null || echo "[]")
# Remove existing route for the same host, rebuild list, keep dashboard as catch-all.
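For reference, the `IS_PUBLIC=1` branch above produces a plain reverse_proxy route of this shape. The upstream and host values here are made up for illustration:

```shell
# Mirrors the public-app branch: match on the app host, proxy to the upstream.
ROUTE_JSON=$(python3 -c "
import json, sys
upstream, app_host = sys.argv[1], sys.argv[2]
route = {
    'match': [{'host': [app_host]}],
    'handle': [{'handler': 'reverse_proxy', 'upstreams': [{'dial': upstream}]}],
}
print(json.dumps(route))
" "myapp:3001" "myapp.example.com")
echo "$ROUTE_JSON"
```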


@ -261,11 +261,11 @@ hostityourself/
- [ ] Deploy history
### M4 — Hardening (Week 5)
- [ ] Env var encryption at rest
- [ ] Resource limits on containers
- [ ] Netdata + Gatus setup
- [ ] Backup cron job
- [ ] Dashboard auth
- [x] Env var encryption at rest (AES-256-GCM via `HIY_SECRET_KEY`; transparent plaintext passthrough for migration)
- [x] Resource limits on containers (per-app `memory_limit` + `cpu_limit`; defaults 512m / 0.5 CPU)
- [x] Netdata + Gatus setup (`monitoring` compose profile; `infra/gatus.yml`)
- [x] Backup cron job (`infra/backup.sh` — SQLite dump + env files + git repos; local + rclone remote)
- [x] Dashboard auth (multi-user sessions, bcrypt, API keys — done in earlier milestone)
### M5 — Polish (Week 6)
- [ ] Buildpack detection (Dockerfile / Node / Python / static)


@ -45,7 +45,9 @@ ssh pi@hiypi.local
```bash
sudo apt update && sudo apt full-upgrade -y
sudo apt install -y git curl ufw fail2ban unattended-upgrades
sudo apt install -y git curl ufw fail2ban unattended-upgrades podman python3 pipx aardvark-dns sqlite3
pipx install podman-compose
pipx ensurepath
```
### Static IP (optional but recommended)


@ -11,3 +11,19 @@ HIY_ADMIN_PASS=changeme
# Postgres admin password — used by the shared cluster.
# App schemas get their own scoped users; this password never leaves the server.
POSTGRES_PASSWORD=changeme
# Forgejo (optional — only needed if you add the forgejo service to docker-compose.yml).
FORGEJO_DB_PASSWORD=changeme
FORGEJO_DOMAIN=git.yourdomain.com
# Actions runner registration token — obtain from Forgejo:
# Site Administration → Actions → Runners → Create new runner
FORGEJO_RUNNER_TOKEN=
# ── Backup (infra/backup.sh) ──────────────────────────────────────────────────
# Local directory to store backup archives.
HIY_BACKUP_DIR=/mnt/usb/hiy-backups
# Optional rclone remote (e.g. "b2:mybucket/hiy", "s3:mybucket/hiy").
# Requires rclone installed and configured. Leave blank to skip remote upload.
HIY_BACKUP_REMOTE=
# How many days to keep local archives (default 30).
HIY_BACKUP_RETAIN_DAYS=30


@ -1,66 +1,32 @@
# syntax=docker/dockerfile:1
# ── Build stage ───────────────────────────────────────────────────────────────
# Run the compiler on the *build* host; cross-compile to target when needed.
FROM --platform=$BUILDPLATFORM rust:1.94-slim-bookworm AS builder
# Native build: Cargo targets the host platform automatically.
# No --target flag means no cross-compiler confusion regardless of which
# arch podman-compose runs on (x86_64, arm64, armv7…).
FROM docker.io/library/rust:1.94-slim-bookworm AS builder
ARG BUILDPLATFORM
ARG TARGETPLATFORM
ARG TARGETARCH
ARG TARGETVARIANT
# Install cross-compilation toolchains only when actually cross-compiling.
RUN apt-get update && apt-get install -y pkg-config && \
if [ "${BUILDPLATFORM}" != "${TARGETPLATFORM}" ]; then \
case "${TARGETARCH}:${TARGETVARIANT}" in \
"arm64:") apt-get install -y gcc-aarch64-linux-gnu ;; \
"arm:v7") apt-get install -y gcc-arm-linux-gnueabihf ;; \
"arm:v6") apt-get install -y gcc-arm-linux-gnueabi ;; \
esac; \
fi && \
# gcc is required by cc-rs (used by aes-gcm / ring build scripts).
# rust:slim does not include a C compiler.
RUN apt-get update && apt-get install -y gcc pkg-config && \
rm -rf /var/lib/apt/lists/*
# Map TARGETARCH + TARGETVARIANT → Rust target triple, then install it.
RUN case "${TARGETARCH}:${TARGETVARIANT}" in \
"amd64:") echo x86_64-unknown-linux-gnu ;; \
"arm64:") echo aarch64-unknown-linux-gnu ;; \
"arm:v7") echo armv7-unknown-linux-gnueabihf ;; \
"arm:v6") echo arm-unknown-linux-gnueabi ;; \
*) echo x86_64-unknown-linux-gnu ;; \
esac > /rust_target && \
rustup target add "$(cat /rust_target)"
# Tell Cargo which cross-linker to use (ignored on native builds).
RUN mkdir -p /root/.cargo && printf '\
[target.aarch64-unknown-linux-gnu]\n\
linker = "aarch64-linux-gnu-gcc"\n\
\n\
[target.armv7-unknown-linux-gnueabihf]\n\
linker = "arm-linux-gnueabihf-gcc"\n\
\n\
[target.arm-unknown-linux-gnueabi]\n\
linker = "arm-linux-gnueabi-gcc"\n' >> /root/.cargo/config.toml
WORKDIR /build
# Cache dependencies separately from source.
# Cache dependency compilation separately from application source.
COPY Cargo.toml Cargo.lock* ./
COPY server/Cargo.toml ./server/
RUN mkdir -p server/src && echo 'fn main(){}' > server/src/main.rs
RUN TARGET=$(cat /rust_target) && \
cargo build --release --target "$TARGET" -p hiy-server 2>/dev/null || true
RUN cargo build --release -p hiy-server 2>/dev/null || true
RUN rm -f server/src/main.rs
# Build actual source.
COPY server/src ./server/src
RUN TARGET=$(cat /rust_target) && \
touch server/src/main.rs && \
cargo build --release --target "$TARGET" -p hiy-server
# Normalise binary location so the runtime stage doesn't need to know the target.
RUN cp /build/target/"$(cat /rust_target)"/release/hiy-server /usr/local/bin/hiy-server
COPY server/src ./server/src
COPY server/templates ./server/templates
RUN touch server/src/main.rs && \
cargo build --release -p hiy-server
# ── Runtime stage ─────────────────────────────────────────────────────────────
FROM debian:bookworm-slim
FROM docker.io/library/debian:bookworm-slim
RUN apt-get update && apt-get install -y \
ca-certificates \
@ -71,7 +37,7 @@ RUN apt-get update && apt-get install -y \
podman \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/bin/hiy-server /usr/local/bin/hiy-server
COPY --from=builder /build/target/release/hiy-server /usr/local/bin/hiy-server
WORKDIR /app

infra/auto-update.sh Executable file

@ -0,0 +1,51 @@
#!/usr/bin/env bash
# auto-update.sh — pull latest changes and restart affected services.
# Run by the hiy-update.timer systemd user unit every 5 minutes.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
log() { echo "[hiy-update] $(date '+%H:%M:%S') $*"; }
cd "$REPO_ROOT"
# Fetch without touching the working tree.
git fetch origin 2>&1 | sed 's/^/[git] /' || { log "git fetch failed — skipping"; exit 0; }
LOCAL=$(git rev-parse HEAD)
REMOTE=$(git rev-parse "@{u}" 2>/dev/null || echo "$LOCAL")
if [ "$LOCAL" = "$REMOTE" ]; then
log "Already up to date ($LOCAL)."
exit 0
fi
log "New commits detected — pulling (${LOCAL} → ${REMOTE})…"
git pull 2>&1 | sed 's/^/[git] /'
# Determine which services need restarting based on what changed.
CHANGED=$(git diff --name-only "$LOCAL" "$REMOTE")
log "Changed files: $(echo "$CHANGED" | tr '\n' ' ')"
# Always rebuild the server if any server-side code changed.
SERVER_CHANGED=$(echo "$CHANGED" | grep -E '^server/|^Cargo' || true)
COMPOSE_CHANGED=$(echo "$CHANGED" | grep '^infra/docker-compose' || true)
CADDY_CHANGED=$(echo "$CHANGED" | grep '^proxy/Caddyfile' || true)
COMPOSE_CMD="podman compose --env-file $REPO_ROOT/.env -f $SCRIPT_DIR/docker-compose.yml"
if [ -n "$COMPOSE_CHANGED" ]; then
log "docker-compose.yml changed — restarting full stack…"
$COMPOSE_CMD up -d
elif [ -n "$SERVER_CHANGED" ]; then
log "Server code changed — rebuilding server…"
$COMPOSE_CMD up -d --build server
elif [ -n "$CADDY_CHANGED" ]; then
log "Caddyfile changed — reloading Caddy…"
$COMPOSE_CMD exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile
else
log "No service restart needed for these changes."
fi
log "Done."
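The classification greps above can be sanity-checked against a sample change list (stand-in for the `git diff --name-only` output):

```shell
# Sample change list — one server file, one infra file, the lockfile.
CHANGED="server/src/main.rs
infra/gatus.yml
Cargo.lock"
# Same patterns as auto-update.sh.
SERVER_CHANGED=$(echo "$CHANGED" | grep -E '^server/|^Cargo' || true)
COMPOSE_CHANGED=$(echo "$CHANGED" | grep '^infra/docker-compose' || true)
echo "$SERVER_CHANGED"          # matches both server/src/main.rs and Cargo.lock
echo "compose:${COMPOSE_CHANGED:-none}"
```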

infra/backup.sh Executable file

@ -0,0 +1,162 @@
#!/usr/bin/env bash
# HIY daily backup script
#
# What is backed up:
# 1. SQLite database (hiy.db) — apps, deploys, env vars, users
# 2. Env files — per-deploy decrypted env files
# 3. Git repos — bare repos for git-push deploys
# 4. Postgres — pg_dumpall (hiy + forgejo databases)
# 5. Forgejo data volume — repositories, avatars, LFS objects
# 6. Caddy TLS certificates — caddy-data volume
# 7. .env file — secrets (handle the archive with care)
#
# Destination options (mutually exclusive; set one):
# HIY_BACKUP_DIR — local path (e.g. /mnt/usb/hiy-backups, default /tmp/hiy-backups)
# HIY_BACKUP_REMOTE — rclone remote:path (e.g. "b2:mybucket/hiy")
# requires rclone installed and configured
#
# Retention: 30 days local (remote retention managed by the storage provider).
#
# Suggested cron (run as the same user that owns the containers):
# 0 3 * * * /path/to/infra/backup.sh >> /var/log/hiy-backup.log 2>&1
set -euo pipefail
# ── Load .env ──────────────────────────────────────────────────────────────────
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ENV_FILE="${SCRIPT_DIR}/../.env"
if [ -f "$ENV_FILE" ]; then
set -a; source "$ENV_FILE"; set +a
fi
# ── Config ─────────────────────────────────────────────────────────────────────
HIY_DATA_DIR="${HIY_DATA_DIR:-/data}"
BACKUP_DIR="${HIY_BACKUP_DIR:-/tmp/hiy-backups}"
BACKUP_REMOTE="${HIY_BACKUP_REMOTE:-}"
RETAIN_DAYS="${HIY_BACKUP_RETAIN_DAYS:-30}"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE_NAME="hiy-backup-${TIMESTAMP}.tar.gz"
STAGING="${BACKUP_DIR}/staging-${TIMESTAMP}"
log() { echo "[hiy-backup] $(date '+%H:%M:%S') $*"; }
log "=== HIY Backup ==="
log "Data dir : ${HIY_DATA_DIR}"
log "Staging : ${STAGING}"
mkdir -p "${STAGING}"
# ── Helper: find a running container by compose service label ──────────────────
find_container() {
local service="$1"
podman ps --filter "label=com.docker.compose.service=${service}" \
--format '{{.Names}}' | head -1
}
# ── 1. SQLite ──────────────────────────────────────────────────────────────────
log "--- SQLite ---"
SERVER_CTR=$(find_container server)
if [ -n "${SERVER_CTR}" ]; then
log "Copying hiy.db from container ${SERVER_CTR}"
podman cp "${SERVER_CTR}:${HIY_DATA_DIR}/hiy.db" "${STAGING}/hiy.db"
log "Dumping hiy.db…"
sqlite3 "${STAGING}/hiy.db" .dump > "${STAGING}/hiy.sql"
rm "${STAGING}/hiy.db"
elif [ -f "${HIY_DATA_DIR}/hiy.db" ]; then
log "Server container not running — dumping from host path…"
sqlite3 "${HIY_DATA_DIR}/hiy.db" .dump > "${STAGING}/hiy.sql"
else
log "WARNING: hiy.db not found — skipping SQLite dump"
fi
# ── 2. Env files ───────────────────────────────────────────────────────────────
log "--- Env files ---"
if [ -n "${SERVER_CTR}" ]; then
podman exec "${SERVER_CTR}" sh -c \
"[ -d ${HIY_DATA_DIR}/envs ] && tar -C ${HIY_DATA_DIR} -czf - envs" \
> "${STAGING}/envs.tar.gz" 2>/dev/null || true
elif [ -d "${HIY_DATA_DIR}/envs" ]; then
tar -czf "${STAGING}/envs.tar.gz" -C "${HIY_DATA_DIR}" envs
fi
# ── 3. Git repos ───────────────────────────────────────────────────────────────
log "--- Git repos ---"
if [ -n "${SERVER_CTR}" ]; then
podman exec "${SERVER_CTR}" sh -c \
"[ -d ${HIY_DATA_DIR}/repos ] && tar -C ${HIY_DATA_DIR} -czf - repos" \
> "${STAGING}/repos.tar.gz" 2>/dev/null || true
elif [ -d "${HIY_DATA_DIR}/repos" ]; then
tar -czf "${STAGING}/repos.tar.gz" -C "${HIY_DATA_DIR}" repos
fi
# ── 4. Postgres ────────────────────────────────────────────────────────────────
log "--- Postgres ---"
PG_CTR=$(find_container postgres)
if [ -n "${PG_CTR}" ]; then
log "Running pg_dumpall via container ${PG_CTR}"
podman exec "${PG_CTR}" pg_dumpall -U hiy_admin \
> "${STAGING}/postgres.sql"
else
log "WARNING: postgres container not running — skipping Postgres dump"
fi
# ── 5. Forgejo data volume ─────────────────────────────────────────────────────
log "--- Forgejo volume ---"
if podman volume exists forgejo-data 2>/dev/null; then
log "Exporting forgejo-data volume…"
podman volume export forgejo-data > "${STAGING}/forgejo-data.tar"
else
log "forgejo-data volume not found — skipping"
fi
# ── 6. Caddy TLS certificates ──────────────────────────────────────────────────
log "--- Caddy volume ---"
if podman volume exists caddy-data 2>/dev/null; then
log "Exporting caddy-data volume…"
podman volume export caddy-data > "${STAGING}/caddy-data.tar"
else
log "caddy-data volume not found — skipping"
fi
# ── 7. .env file ───────────────────────────────────────────────────────────────
log "--- .env ---"
if [ -f "${ENV_FILE}" ]; then
cp "${ENV_FILE}" "${STAGING}/dot-env"
log "WARNING: archive contains plaintext secrets — store it securely"
else
log ".env not found at ${ENV_FILE} — skipping"
fi
# ── Create archive ─────────────────────────────────────────────────────────────
mkdir -p "${BACKUP_DIR}"
ARCHIVE_PATH="${BACKUP_DIR}/${ARCHIVE_NAME}"
log "Creating archive: ${ARCHIVE_PATH}"
tar -czf "${ARCHIVE_PATH}" -C "${STAGING}" .
rm -rf "${STAGING}"
ARCHIVE_SIZE=$(du -sh "${ARCHIVE_PATH}" | cut -f1)
log "Archive size: ${ARCHIVE_SIZE}"
# ── Upload to remote (optional) ────────────────────────────────────────────────
if [ -n "${BACKUP_REMOTE}" ]; then
# Temporary: patched rclone build (Proton Drive backend fix) at a fixed path.
RCLONE_BIN="/home/sander/dev/rclone/rclone"
if [ -x "${RCLONE_BIN}" ]; then
log "Uploading to ${BACKUP_REMOTE}"
"${RCLONE_BIN}" copy --transfers 1 --retries 5 "${ARCHIVE_PATH}" "${BACKUP_REMOTE}/"
log "Upload complete."
else
log "WARNING: HIY_BACKUP_REMOTE is set but ${RCLONE_BIN} is not executable — skipping"
log "Install: https://rclone.org/install/"
fi
fi
# ── Rotate old local backups ───────────────────────────────────────────────────
log "Removing local backups older than ${RETAIN_DAYS} days…"
find "${BACKUP_DIR}" -maxdepth 1 -name 'hiy-backup-*.tar.gz' \
-mtime "+${RETAIN_DAYS}" -delete
REMAINING=$(find "${BACKUP_DIR}" -maxdepth 1 -name 'hiy-backup-*.tar.gz' | wc -l)
log "Local backups retained: ${REMAINING}"
log "=== Backup complete: ${ARCHIVE_NAME} ==="
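The rotation step at the end can be demonstrated with two throwaway archives. This sketch assumes GNU coreutils/findutils (`touch -d`, `find -printf`), which is what the target Raspberry Pi OS ships:

```shell
# One stale archive (40 days old) and one fresh archive.
BACKUP_DIR=$(mktemp -d)
touch -d '40 days ago' "${BACKUP_DIR}/hiy-backup-20250101-030000.tar.gz"
touch "${BACKUP_DIR}/hiy-backup-20260301-030000.tar.gz"
# Same rotation command as backup.sh with RETAIN_DAYS=30:
# -mtime +30 matches files modified more than 30 days ago.
find "${BACKUP_DIR}" -maxdepth 1 -name 'hiy-backup-*.tar.gz' -mtime +30 -delete
find "${BACKUP_DIR}" -maxdepth 1 -name 'hiy-backup-*.tar.gz' -printf '%f\n'
# → hiy-backup-20260301-030000.tar.gz
```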


@ -12,7 +12,7 @@ services:
# rootful: /run/podman/podman.sock
# rootless: /run/user/<UID>/podman/podman.sock (start.sh sets this)
podman-proxy:
image: alpine/socat
image: docker.io/alpine/socat
command: tcp-listen:2375,fork,reuseaddr unix-connect:/podman.sock
restart: unless-stopped
volumes:
@ -62,20 +62,71 @@ services:
# ── Shared Postgres ───────────────────────────────────────────────────────
postgres:
image: postgres:16-alpine
image: docker.io/library/postgres:16-alpine
restart: unless-stopped
environment:
POSTGRES_DB: hiy
POSTGRES_USER: hiy_admin
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
FORGEJO_DB_PASSWORD: ${FORGEJO_DB_PASSWORD}
volumes:
- hiy-pg-data:/var/lib/postgresql/data
# SQL files here run once on first init (ignored if data volume already exists).
- ./postgres-init:/docker-entrypoint-initdb.d:ro
networks:
- hiy-net
# ── Forgejo (self-hosted Git) ──────────────────────────────────────────────
forgejo:
image: codeberg.org/forgejo/forgejo:10
restart: unless-stopped
environment:
USER_UID: 1000
USER_GID: 1000
FORGEJO__database__DB_TYPE: postgres
FORGEJO__database__HOST: postgres:5432
FORGEJO__database__NAME: forgejo
FORGEJO__database__USER: forgejo
FORGEJO__database__PASSWD: ${FORGEJO_DB_PASSWORD}
FORGEJO__server__DOMAIN: ${FORGEJO_DOMAIN}
FORGEJO__server__ROOT_URL: https://${FORGEJO_DOMAIN}/
FORGEJO__server__SSH_DOMAIN: ${FORGEJO_DOMAIN}
# Skip the first-run wizard — everything is configured via env vars above.
FORGEJO__security__INSTALL_LOCK: "true"
# Enable Actions.
FORGEJO__actions__ENABLED: "true"
volumes:
- forgejo-data:/data
depends_on:
- postgres
networks:
- hiy-net
# ── Forgejo Actions runner ─────────────────────────────────────────────────
# Obtain FORGEJO_RUNNER_TOKEN from Forgejo:
# Site Administration → Actions → Runners → Create new runner
act_runner:
image: data.forgejo.org/forgejo/runner:6
restart: unless-stopped
command: ["/entrypoint.sh"]
environment:
FORGEJO_INSTANCE_URL: http://forgejo:3000
FORGEJO_RUNNER_TOKEN: ${FORGEJO_RUNNER_TOKEN}
FORGEJO_RUNNER_NAME: hiy-runner
# Give the runner access to Podman so CI jobs can build/run containers.
DOCKER_HOST: tcp://podman-proxy:2375
volumes:
- act_runner_data:/data
- ./runner-entrypoint.sh:/entrypoint.sh:ro
depends_on:
- forgejo
- podman-proxy
networks:
- hiy-net
# ── Reverse proxy ─────────────────────────────────────────────────────────
caddy:
image: caddy:2-alpine
image: docker.io/library/caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
@ -89,19 +140,64 @@ services:
- ../proxy/Caddyfile:/etc/caddy/Caddyfile:ro
- caddy-data:/data
- caddy-config:/config
command: caddy run --config /etc/caddy/Caddyfile --adapter caddyfile --resume
command: caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
networks:
- hiy-net
- default
# ── Uptime / health checks ────────────────────────────────────────────────
# Enable with: podman compose --profile monitoring up -d
gatus:
profiles: [monitoring]
image: docker.io/twinproduction/gatus:latest
restart: unless-stopped
ports:
- "8080:8080"
volumes:
- ./gatus.yml:/config/config.yaml:ro
networks:
- hiy-net
# ── Host metrics (rootful Podman / Docker only) ───────────────────────────
# On rootless Podman some host mounts may be unavailable; comment out if so.
netdata:
profiles: [monitoring]
image: docker.io/netdata/netdata:stable
restart: unless-stopped
ports:
- "19999:19999"
pid: host
cap_add:
- SYS_PTRACE
- SYS_ADMIN
security_opt:
- apparmor:unconfined
volumes:
- netdata-config:/etc/netdata
- netdata-lib:/var/lib/netdata
- netdata-cache:/var/cache/netdata
- /etc/os-release:/host/etc/os-release:ro
- /etc/passwd:/host/etc/passwd:ro
- /etc/group:/host/etc/group:ro
- /proc:/host/proc:ro
- /sys:/host/sys:ro
networks:
- hiy-net
networks:
hiy-net:
name: hiy-net
# Fixed name (no compose project prefix) so deployed app containers can join it.
external: false
default: {}
volumes:
hiy-data:
forgejo-data:
act_runner_data:
caddy-data:
caddy-config:
hiy-pg-data:
netdata-config:
netdata-lib:
netdata-cache:

infra/gatus.yml Normal file

@ -0,0 +1,39 @@
# Gatus uptime / health check configuration for HIY.
# Docs: https://github.com/TwiN/gatus
web:
port: 8080
# In-memory storage — no persistence needed for uptime checks.
storage:
type: memory
# Alert via email when an endpoint is down (optional — remove if not needed).
# alerting:
# email:
# from: gatus@yourdomain.com
# username: gatus@yourdomain.com
# password: ${EMAIL_PASSWORD}
# host: smtp.yourdomain.com
# port: 587
# to: you@yourdomain.com
endpoints:
- name: HIY Dashboard
url: http://server:3000/api/status
interval: 1m
conditions:
- "[STATUS] == 200"
alerts:
- type: email
description: HIY dashboard is unreachable
send-on-resolved: true
# Add an entry per deployed app:
#
# - name: my-app
# url: http://my-app:3001/health
# interval: 1m
# conditions:
# - "[STATUS] == 200"
# - "[RESPONSE_TIME] < 500"

infra/install.sh Executable file

@ -0,0 +1,142 @@
#!/usr/bin/env bash
# install.sh — one-time setup for a fresh Raspberry Pi.
#
# Run this once after cloning the repo:
# cd ~/Hostityourself && ./infra/install.sh
#
# What it does:
# 1. Installs system packages (podman, aardvark-dns, sqlite3, git, uidmap)
# 2. Installs podman-compose (via pip, into ~/.local/bin)
# 3. Installs rclone (for off-site backups — optional)
# 4. Creates .env from the template and prompts for required values
# 5. Runs infra/start.sh to build and launch the stack
#
# Safe to re-run — all steps are idempotent.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
log() { echo; echo "$*"; }
info() { echo " $*"; }
ok() { echo "$*"; }
echo "╔══════════════════════════════════════════╗"
echo "║ HostItYourself — installer ║"
echo "╚══════════════════════════════════════════╝"
# ── 1. System packages ─────────────────────────────────────────────────────────
log "Installing system packages…"
sudo apt-get update -qq
sudo apt-get install -y \
podman \
aardvark-dns \
sqlite3 \
git \
uidmap \
python3-pip \
python3-venv \
curl
ok "System packages installed."
# ── 2. podman-compose ──────────────────────────────────────────────────────────
log "Checking podman-compose…"
if command -v podman-compose &>/dev/null; then
ok "podman-compose already installed ($(podman-compose --version 2>&1 | head -1))."
else
info "Installing podman-compose via pip…"
pip3 install --user podman-compose
ok "podman-compose installed to ~/.local/bin"
fi
# Ensure ~/.local/bin is in PATH for this session and future logins.
if ! echo "$PATH" | grep -q "$HOME/.local/bin"; then
export PATH="$HOME/.local/bin:$PATH"
fi
PROFILE="$HOME/.bashrc"
if ! grep -q '\.local/bin' "$PROFILE" 2>/dev/null; then
echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$PROFILE"
info "Added ~/.local/bin to PATH in $PROFILE"
fi
# ── 3. rclone (optional) ───────────────────────────────────────────────────────
log "rclone (used for off-site backups)…"
if command -v rclone &>/dev/null; then
ok "rclone already installed ($(rclone --version 2>&1 | head -1))."
else
read -r -p " Install rclone? [y/N] " _RCLONE
if [[ "${_RCLONE,,}" == "y" ]]; then
curl -fsSL https://rclone.org/install.sh | sudo bash
ok "rclone installed."
info "Configure a remote later with: rclone config"
info "Then set HIY_BACKUP_REMOTE in .env"
else
info "Skipped. Install later with: curl https://rclone.org/install.sh | sudo bash"
fi
fi
# ── 4. .env setup ──────────────────────────────────────────────────────────────
log "Setting up .env…"
ENV_FILE="$REPO_ROOT/.env"
ENV_EXAMPLE="$SCRIPT_DIR/.env.example"
if [ -f "$ENV_FILE" ]; then
ok ".env already exists — skipping (edit manually if needed)."
else
cp "$ENV_EXAMPLE" "$ENV_FILE"
info "Created .env from template. Filling in required values…"
echo
# Helper: prompt for a value and write it into .env.
set_env() {
local key="$1" prompt="$2" default="$3" secret="${4:-}"
local current
current=$(grep "^${key}=" "$ENV_FILE" | cut -d= -f2- || echo "")
if [ -z "$current" ] || [ "$current" = "changeme" ]; then
if [ -n "$secret" ]; then
read -r -s -p " ${prompt} [${default}]: " _VAL; echo
else
read -r -p " ${prompt} [${default}]: " _VAL
fi
_VAL="${_VAL:-$default}"
# Replace the line in .env in place (GNU sed; BSD/macOS sed would need -i '').
sed -i "s|^${key}=.*|${key}=${_VAL}|" "$ENV_FILE"
fi
}
set_env "DOMAIN_SUFFIX" "Your domain (e.g. example.com)" "yourdomain.com"
set_env "ACME_EMAIL" "Email for Let's Encrypt notices" ""
set_env "HIY_ADMIN_USER" "Dashboard admin username" "admin"
set_env "HIY_ADMIN_PASS" "Dashboard admin password" "$(openssl rand -hex 12)" "secret"
set_env "POSTGRES_PASSWORD" "Postgres admin password" "$(openssl rand -hex 16)" "secret"
set_env "FORGEJO_DB_PASSWORD" "Forgejo DB password" "$(openssl rand -hex 16)" "secret"
set_env "FORGEJO_DOMAIN" "Forgejo domain (e.g. git.example.com)" "git.yourdomain.com"
echo
ok ".env written to $ENV_FILE"
info "Review it with: cat $ENV_FILE"
fi
# ── 5. Git remote check ────────────────────────────────────────────────────────
log "Checking git remote…"
cd "$REPO_ROOT"
REMOTE_URL=$(git remote get-url origin 2>/dev/null || echo "")
if [ -n "$REMOTE_URL" ]; then
ok "Git remote: $REMOTE_URL"
else
info "No git remote configured — auto-update will not work."
info "Set one with: git remote add origin <url>"
fi
# Ensure the tracking branch is set so auto-update.sh can compare commits.
BRANCH=$(git rev-parse --abbrev-ref HEAD)
if ! git rev-parse --abbrev-ref --symbolic-full-name '@{u}' &>/dev/null; then
info "Setting upstream tracking branch…"
git branch --set-upstream-to="origin/$BRANCH" "$BRANCH" 2>/dev/null || true
fi
# ── 6. Launch the stack ────────────────────────────────────────────────────────
log "Running start.sh to build and launch the stack…"
echo
exec "$SCRIPT_DIR/start.sh"
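The `set_env` helper's in-place rewrite can be checked standalone against a temp file. The key and value here are just examples, and GNU sed is assumed:

```shell
# Build a throwaway .env with a placeholder value.
ENV_FILE=$(mktemp)
printf 'HIY_ADMIN_PASS=changeme\nDOMAIN_SUFFIX=example.com\n' > "$ENV_FILE"
key="HIY_ADMIN_PASS"; _VAL="s3cret"
# Same in-place rewrite used by set_env; only the matching line changes.
sed -i "s|^${key}=.*|${key}=${_VAL}|" "$ENV_FILE"
cat "$ENV_FILE"
```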


@ -0,0 +1,10 @@
#!/bin/sh
# Create a dedicated database and user for Forgejo.
# Runs once when the Postgres container is first initialised.
# FORGEJO_DB_PASSWORD must be set in the environment (via docker-compose.yml).
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE USER forgejo WITH PASSWORD '${FORGEJO_DB_PASSWORD}';
CREATE DATABASE forgejo OWNER forgejo;
EOSQL

infra/restore.sh Executable file

@ -0,0 +1,143 @@
#!/usr/bin/env bash
# HIY restore script
#
# Restores a backup archive produced by infra/backup.sh.
#
# Usage:
# ./infra/restore.sh /path/to/hiy-backup-20260101-030000.tar.gz
#
# What is restored:
# 1. SQLite database (hiy.db)
# 2. Env files and git repos
# 3. Postgres databases (pg_dumpall dump)
# 4. Forgejo data volume
# 5. Caddy TLS certificates
# 6. .env file (optional — skipped if already present unless --force is passed)
#
# ⚠ Run this with the stack STOPPED, then bring it back up afterwards:
# podman compose -f infra/docker-compose.yml down
# ./infra/restore.sh hiy-backup-*.tar.gz
# podman compose -f infra/docker-compose.yml up -d
set -euo pipefail
ARCHIVE="${1:-}"
FORCE="${2:-}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ENV_FILE="${SCRIPT_DIR}/../.env"
HIY_DATA_DIR="${HIY_DATA_DIR:-/data}"
log() { echo "[hiy-restore] $(date '+%H:%M:%S') $*"; }
die() { log "ERROR: $*"; exit 1; }
# ── Validate ───────────────────────────────────────────────────────────────────
[ -z "${ARCHIVE}" ] && die "Usage: $0 <archive.tar.gz> [--force]"
[ -f "${ARCHIVE}" ] || die "Archive not found: ${ARCHIVE}"
WORK_DIR=$(mktemp -d)
trap 'rm -rf "${WORK_DIR}"' EXIT
log "=== HIY Restore ==="
log "Archive : ${ARCHIVE}"
log "Work dir: ${WORK_DIR}"
log "Extracting archive…"
tar -xzf "${ARCHIVE}" -C "${WORK_DIR}"
# ── Helper: find a running container by compose service label ──────────────────
find_container() {
local service="$1"
podman ps --filter "label=com.docker.compose.service=${service}" \
--format '{{.Names}}' | head -1
}
# ── 1. .env file ───────────────────────────────────────────────────────────────
log "--- .env ---"
if [ -f "${WORK_DIR}/dot-env" ]; then
if [ -f "${ENV_FILE}" ] && [ "${FORCE}" != "--force" ]; then
log "SKIP: ${ENV_FILE} already exists (pass --force to overwrite)"
else
cp "${WORK_DIR}/dot-env" "${ENV_FILE}"
log "Restored .env to ${ENV_FILE}"
fi
else
log "No .env in archive — skipping"
fi
# ── 2. SQLite ──────────────────────────────────────────────────────────────────
log "--- SQLite ---"
if [ -f "${WORK_DIR}/hiy.sql" ]; then
DB_PATH="${HIY_DATA_DIR}/hiy.db"
mkdir -p "$(dirname "${DB_PATH}")"
if [ -f "${DB_PATH}" ]; then
log "Moving existing hiy.db to hiy.db.bak…"
mv "${DB_PATH}" "${DB_PATH}.bak"
fi
log "Restoring hiy.db…"
sqlite3 "${DB_PATH}" < "${WORK_DIR}/hiy.sql"
log "SQLite restored."
else
log "No hiy.sql in archive — skipping"
fi
# ── 3. Env files & git repos ───────────────────────────────────────────────────
log "--- Env files ---"
if [ -f "${WORK_DIR}/envs.tar.gz" ]; then
log "Restoring envs/…"
tar -xzf "${WORK_DIR}/envs.tar.gz" -C "${HIY_DATA_DIR}"
fi
log "--- Git repos ---"
if [ -f "${WORK_DIR}/repos.tar.gz" ]; then
log "Restoring repos/…"
tar -xzf "${WORK_DIR}/repos.tar.gz" -C "${HIY_DATA_DIR}"
fi
# ── 4. Postgres ────────────────────────────────────────────────────────────────
log "--- Postgres ---"
if [ -f "${WORK_DIR}/postgres.sql" ]; then
PG_CTR=$(find_container postgres)
if [ -n "${PG_CTR}" ]; then
log "Restoring Postgres via container ${PG_CTR}"
# Drop existing connections then restore.
podman exec -i "${PG_CTR}" psql -U hiy_admin -d postgres \
-c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname IN ('hiy','forgejo') AND pid <> pg_backend_pid();" \
> /dev/null 2>&1 || true
podman exec -i "${PG_CTR}" psql -U hiy_admin -d postgres \
< "${WORK_DIR}/postgres.sql"
log "Postgres restored."
else
log "WARNING: postgres container not running"
# WORK_DIR is wiped by the EXIT trap, so keep a copy of the dump for a manual restore.
cp "${WORK_DIR}/postgres.sql" "${HIY_DATA_DIR}/postgres-restore.sql"
log "  Start Postgres first, then run:"
log "  podman exec -i <postgres_container> psql -U hiy_admin -d postgres < ${HIY_DATA_DIR}/postgres-restore.sql"
fi
else
log "No postgres.sql in archive — skipping"
fi
# ── 5. Forgejo data volume ─────────────────────────────────────────────────────
log "--- Forgejo volume ---"
if [ -f "${WORK_DIR}/forgejo-data.tar" ]; then
log "Importing forgejo-data volume…"
podman volume exists forgejo-data 2>/dev/null || podman volume create forgejo-data
podman volume import forgejo-data "${WORK_DIR}/forgejo-data.tar"
log "forgejo-data restored."
else
log "No forgejo-data.tar in archive — skipping"
fi
# ── 6. Caddy TLS certificates ──────────────────────────────────────────────────
log "--- Caddy volume ---"
if [ -f "${WORK_DIR}/caddy-data.tar" ]; then
log "Importing caddy-data volume…"
podman volume exists caddy-data 2>/dev/null || podman volume create caddy-data
podman volume import caddy-data "${WORK_DIR}/caddy-data.tar"
log "caddy-data restored."
else
log "No caddy-data.tar in archive — skipping"
fi
log "=== Restore complete ==="
log "Bring the stack back up with:"
log " podman compose -f ${SCRIPT_DIR}/docker-compose.yml up -d"

infra/runner-entrypoint.sh Executable file

@@ -0,0 +1,23 @@
#!/bin/sh
# runner-entrypoint.sh — register the Forgejo runner on first start, then run the daemon.
#
# On first run (no /data/.runner file) it calls `forgejo-runner register` with
# FORGEJO_RUNNER_TOKEN to register against the Forgejo instance. On subsequent
# starts it goes straight to the daemon.
set -e
CONFIG=/data/.runner
if [ ! -f "$CONFIG" ]; then
echo "[runner] No registration found — registering with Forgejo…"
forgejo-runner register \
--instance "${FORGEJO_INSTANCE_URL}" \
--token "${FORGEJO_RUNNER_TOKEN}" \
--name "${FORGEJO_RUNNER_NAME:-hiy-runner}" \
--labels "ubuntu-latest:docker://node:20-bookworm,ubuntu-22.04:docker://node:20-bookworm" \
--no-interactive
echo "[runner] Registration complete."
fi
echo "[runner] Starting daemon…"
exec forgejo-runner daemon --config "$CONFIG"
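The first-run gate above boils down to a marker-file check. A standalone sketch (the marker path is illustrative; no real forgejo-runner calls):

```shell
#!/bin/sh
# Marker-file gate: no /data/.runner means "register first", otherwise
# go straight to the daemon. A dummy temp path stands in for /data/.runner.
CONFIG="$(mktemp -d)/.runner"
decide() { [ -f "$CONFIG" ] && echo "daemon" || echo "register"; }
echo "first start:  $(decide)"   # no marker yet: register
touch "$CONFIG"                  # registration writes the marker
echo "second start: $(decide)"   # marker present: daemon
```

Because the registration file persists in the /data volume, restarts of the container skip straight to the daemon.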


@@ -20,45 +20,10 @@ if [ -z "$DOMAIN_SUFFIX" ] || [ "$DOMAIN_SUFFIX" = "localhost" ]; then
fi
if [ -z "$ACME_EMAIL" ]; then
echo "ERROR: Set ACME_EMAIL in infra/.env (required for Let's Encrypt)"
exit 1
echo "[hiy] ACME_EMAIL not set — Caddy will use its internal CA (self-signed)."
echo "[hiy] For a public domain with Let's Encrypt, set ACME_EMAIL in infra/.env"
fi
# ── Generate production caddy.json ─────────────────────────────────────────────
# Writes TLS-enabled config using Let's Encrypt (no Cloudflare required).
# Caddy will use the HTTP-01 challenge (port 80) or TLS-ALPN-01 (port 443).
cat > "$SCRIPT_DIR/../proxy/caddy.json" <<EOF
{
"admin": { "listen": "0.0.0.0:2019" },
"apps": {
"tls": {
"automation": {
"policies": [{
"subjects": ["${DOMAIN_SUFFIX}"],
"issuers": [{"module": "acme", "email": "${ACME_EMAIL}"}]
}]
}
},
"http": {
"servers": {
"hiy": {
"listen": [":80", ":443"],
"automatic_https": {},
"routes": [
{
"match": [{"host": ["${DOMAIN_SUFFIX}"]}],
"handle": [{"handler": "reverse_proxy", "upstreams": [{"dial": "server:3000"}]}]
}
]
}
}
}
}
}
EOF
echo "[hiy] Generated proxy/caddy.json for ${DOMAIN_SUFFIX}"
# ── Ensure cgroup swap accounting is enabled (required by runc/Podman) ────────
# runc always writes memory.swap.max when the memory cgroup controller is
# present. On Raspberry Pi OS swap accounting is disabled by default, so that
@@ -206,6 +171,7 @@ Wants=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/$(id -un)/.local/bin
ExecStart=${SCRIPT_DIR}/boot.sh
ExecStop=podman compose --env-file ${REPO_ROOT}/.env -f ${SCRIPT_DIR}/docker-compose.yml down
@@ -217,3 +183,37 @@ systemctl --user daemon-reload
systemctl --user enable hiy.service
loginctl enable-linger "$(id -un)" 2>/dev/null || true
echo "[hiy] Boot service installed: systemctl --user status hiy.service"
# ── Install systemd timer for auto-update ─────────────────────────────────────
UPDATE_SERVICE="$SERVICE_DIR/hiy-update.service"
UPDATE_TIMER="$SERVICE_DIR/hiy-update.timer"
cat > "$UPDATE_SERVICE" <<UNIT
[Unit]
Description=HIY auto-update (git pull + restart changed services)
After=network-online.target
[Service]
Type=oneshot
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/$(id -un)/.local/bin
ExecStart=${SCRIPT_DIR}/auto-update.sh
StandardOutput=journal
StandardError=journal
UNIT
cat > "$UPDATE_TIMER" <<UNIT
[Unit]
Description=HIY auto-update every 5 minutes
[Timer]
OnBootSec=2min
OnUnitActiveSec=5min
Unit=hiy-update.service
[Install]
WantedBy=timers.target
UNIT
systemctl --user daemon-reload
systemctl --user enable --now hiy-update.timer
echo "[hiy] Auto-update timer installed: systemctl --user status hiy-update.timer"


@@ -23,10 +23,23 @@
}
# HIY dashboard — served at your root domain.
# TLS behaviour:
# ACME_EMAIL set → Caddy requests a Let's Encrypt cert (production)
# ACME_EMAIL unset → Caddy uses its built-in internal CA (local / LAN domains)
{$DOMAIN_SUFFIX:localhost} {
tls {$ACME_EMAIL:internal}
reverse_proxy server:3000
}
# ── Static services (not managed by HIY) ──────────────────────────────────────
# Set FORGEJO_DOMAIN in .env (e.g. git.yourdomain.com). Falls back to a
# non-routable placeholder so Caddy starts cleanly even if Forgejo isn't used.
{$FORGEJO_DOMAIN:forgejo.localhost} {
tls {$ACME_EMAIL:internal}
reverse_proxy forgejo:3000
}
# Deployed apps are added here dynamically by hiy-server via the Caddy API.
# Each entry looks like:
#


@@ -24,6 +24,8 @@ tracing-subscriber = { version = "0.3", features = ["env-filter"] }
dotenvy = "0.15"
async-stream = "0.3"
bcrypt = "0.15"
aes-gcm = "0.10"
anyhow = "1"
futures = "0.3"
base64 = "0.22"
reqwest = { version = "0.12", features = ["json", "rustls-tls"], default-features = false }


@@ -81,10 +81,15 @@ async fn run_build(state: &AppState, deploy_id: &str) -> anyhow::Result<()> {
.fetch_all(&state.db)
.await?;
let env_content: String = env_vars
.iter()
.map(|e| format!("{}={}\n", e.key, e.value))
.collect();
let mut env_content = String::new();
for e in &env_vars {
let plain = crate::crypto::decrypt(&e.value)
.unwrap_or_else(|err| {
tracing::warn!("Could not decrypt env var {}: {} — using raw value", e.key, err);
e.value.clone()
});
env_content.push_str(&format!("{}={}\n", e.key, plain));
}
std::fs::write(&env_file, env_content)?;
// Mark as building.
@@ -128,16 +133,30 @@ async fn run_build(state: &AppState, deploy_id: &str) -> anyhow::Result<()> {
let domain_suffix = std::env::var("DOMAIN_SUFFIX").unwrap_or_else(|_| "localhost".into());
let caddy_api_url = std::env::var("CADDY_API_URL").unwrap_or_else(|_| "http://localhost:2019".into());
let mut child = Command::new("bash")
.arg(&build_script)
let mut cmd = Command::new("bash");
cmd.arg(&build_script)
.env("APP_ID", &app.id)
.env("APP_NAME", &app.name)
.env("REPO_URL", &repo_url)
.env("REPO_URL", &repo_url);
// Decrypt the git token (if any) and pass it separately so build.sh can
// inject it into the clone URL without it appearing in REPO_URL or logs.
if let Some(enc) = &app.git_token {
match crate::crypto::decrypt(enc) {
Ok(tok) => { cmd.env("GIT_TOKEN", tok); }
Err(e) => tracing::warn!("Could not decrypt git_token for {}: {}", app.id, e),
}
}
let mut child = cmd
.env("BRANCH", &app.branch)
.env("PORT", app.port.to_string())
.env("ENV_FILE", &env_file)
.env("SHA", deploy.sha.as_deref().unwrap_or(""))
.env("BUILD_DIR", &build_dir)
.env("MEMORY_LIMIT", &app.memory_limit)
.env("CPU_LIMIT", &app.cpu_limit)
.env("IS_PUBLIC", if app.is_public != 0 { "1" } else { "0" })
.env("DOMAIN_SUFFIX", &domain_suffix)
.env("CADDY_API_URL", &caddy_api_url)
.stdout(std::process::Stdio::piped())
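On the receiving side, build.sh presumably splices GIT_TOKEN into the HTTPS clone URL as userinfo so the token never appears in REPO_URL or in the logs. A hedged sketch of that splice (the sed rewrite and the clone flags are assumptions, not build.sh's actual code; the env var names match those set above):

```shell
#!/bin/sh
# Splice GIT_TOKEN into an HTTPS clone URL as userinfo; log only the
# token-free URL. Example values; real ones come from the environment.
REPO_URL="https://github.com/example/private-app.git"
GIT_TOKEN="ghp_example_token"
if [ -n "$GIT_TOKEN" ]; then
  CLONE_URL=$(printf '%s' "$REPO_URL" | sed "s#^https://#https://${GIT_TOKEN}@#")
else
  CLONE_URL="$REPO_URL"
fi
echo "cloning $REPO_URL"   # never echo CLONE_URL: it contains the token
# git clone --depth 1 --branch "${BRANCH:-main}" "$CLONE_URL" "$BUILD_DIR"
```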

server/src/crypto.rs Normal file

@@ -0,0 +1,60 @@
//! AES-256-GCM envelope encryption for values stored at rest.
//!
//! Encrypted blobs are prefixed with `enc:v1:` so plaintext values written
//! before encryption was enabled are transparently passed through on decrypt.
//!
//! Key derivation: SHA-256 of the `HIY_SECRET_KEY` env var. If the var is
//! absent, a hard-coded default is used and a warning is logged on each use.
use aes_gcm::{
aead::{Aead, AeadCore, KeyInit, OsRng},
Aes256Gcm, Key, Nonce,
};
use base64::{engine::general_purpose::STANDARD, Engine};
use sha2::{Digest, Sha256};
const PREFIX: &str = "enc:v1:";
fn key_bytes() -> [u8; 32] {
let secret = std::env::var("HIY_SECRET_KEY").unwrap_or_else(|_| {
tracing::warn!(
"HIY_SECRET_KEY is not set — env vars are encrypted with the default insecure key. \
Set HIY_SECRET_KEY in .env to a random 32+ char string."
);
"hiy-default-insecure-key-please-change-me".into()
});
Sha256::digest(secret.as_bytes()).into()
}
/// Encrypt a plaintext value and return `enc:v1:<b64(nonce||ciphertext)>`.
pub fn encrypt(plaintext: &str) -> anyhow::Result<String> {
let kb = key_bytes();
let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(&kb));
let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
let ct = cipher
.encrypt(&nonce, plaintext.as_bytes())
.map_err(|e| anyhow::anyhow!("encrypt: {}", e))?;
let mut blob = nonce.to_vec();
blob.extend_from_slice(&ct);
Ok(format!("{}{}", PREFIX, STANDARD.encode(&blob)))
}
/// Decrypt an `enc:v1:…` value. Non-prefixed strings are returned as-is
/// (transparent migration path for pre-encryption data).
pub fn decrypt(value: &str) -> anyhow::Result<String> {
if !value.starts_with(PREFIX) {
return Ok(value.to_string());
}
let blob = STANDARD
.decode(&value[PREFIX.len()..])
.map_err(|e| anyhow::anyhow!("base64: {}", e))?;
if blob.len() < 12 {
anyhow::bail!("ciphertext too short");
}
let (nonce_bytes, ct) = blob.split_at(12);
let kb = key_bytes();
let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(&kb));
let plain = cipher
.decrypt(Nonce::from_slice(nonce_bytes), ct)
.map_err(|e| anyhow::anyhow!("decrypt: {}", e))?;
String::from_utf8(plain).map_err(Into::into)
}
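The `enc:v1:` framing that decrypt() parses can be shown without real AES-GCM: strip the prefix, base64-decode, split at byte 12. A sketch with dummy bytes standing in for the nonce and ciphertext:

```shell
#!/bin/sh
# enc:v1 wire format: "enc:v1:" + base64(12-byte nonce || ciphertext).
# The bytes here are fake; only the framing matches the Rust code.
NONCE="NNNNNNNNNNNN"                 # exactly 12 bytes
CT="fake-ciphertext"
VALUE="enc:v1:$(printf '%s%s' "$NONCE" "$CT" | base64)"
DECODED=$(printf '%s' "${VALUE#enc:v1:}" | base64 -d)
echo "nonce: $(printf '%s' "$DECODED" | cut -c1-12)"
echo "ct:    $(printf '%s' "$DECODED" | cut -c13-)"
```

Values without the prefix fall through decrypt() unchanged, which is the migration path for rows written before encryption existed.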


@@ -101,6 +101,16 @@ pub async fn migrate(pool: &DbPool) -> anyhow::Result<()> {
.execute(pool)
.await?;
// Idempotent column additions for existing databases; duplicate-column
// errors from SQLite are intentionally swallowed with `let _ =`.
let _ = sqlx::query("ALTER TABLE apps ADD COLUMN memory_limit TEXT NOT NULL DEFAULT '512m'")
.execute(pool).await;
let _ = sqlx::query("ALTER TABLE apps ADD COLUMN cpu_limit TEXT NOT NULL DEFAULT '0.5'")
.execute(pool).await;
let _ = sqlx::query("ALTER TABLE apps ADD COLUMN git_token TEXT")
.execute(pool).await;
let _ = sqlx::query("ALTER TABLE apps ADD COLUMN is_public INTEGER NOT NULL DEFAULT 0")
.execute(pool).await;
sqlx::query(
r#"CREATE TABLE IF NOT EXISTS databases (
app_id TEXT PRIMARY KEY REFERENCES apps(id) ON DELETE CASCADE,


@@ -10,6 +10,7 @@ use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
mod auth;
mod builder;
mod crypto;
mod db;
mod models;
mod routes;
@@ -153,6 +154,20 @@ async fn main() -> anyhow::Result<()> {
builder::build_worker(worker_state).await;
});
// Re-register all app Caddy routes from the DB on startup.
// Caddy no longer uses --resume, so routes must be restored each time the
// stack restarts (ensures Caddyfile changes are always picked up).
let restore_db = state.db.clone();
tokio::spawn(async move {
routes::apps::restore_caddy_routes(&restore_db).await;
});
// Restart any app containers that are stopped (e.g. after a host reboot).
let containers_db = state.db.clone();
tokio::spawn(async move {
routes::apps::restore_app_containers(&containers_db).await;
});
// ── Protected routes (admin login required) ───────────────────────────────
let protected = Router::new()
.route("/", get(routes::ui::index))
@@ -163,6 +178,7 @@ async fn main() -> anyhow::Result<()> {
.route("/api/apps", get(routes::apps::list).post(routes::apps::create))
.route("/api/apps/:id", get(routes::apps::get_one)
.put(routes::apps::update)
.patch(routes::apps::update)
.delete(routes::apps::delete))
.route("/api/apps/:id/stop", post(routes::apps::stop))
.route("/api/apps/:id/restart", post(routes::apps::restart))


@@ -8,8 +8,14 @@ pub struct App {
pub branch: String,
pub port: i64,
pub webhook_secret: String,
pub memory_limit: String,
pub cpu_limit: String,
pub created_at: String,
pub updated_at: String,
/// Encrypted git token for cloning private repos. Never serialised to API responses.
#[serde(skip_serializing)]
pub git_token: Option<String>,
pub is_public: i64,
}
#[derive(Debug, Deserialize)]
@@ -18,6 +24,9 @@ pub struct CreateApp {
pub repo_url: Option<String>,
pub branch: Option<String>,
pub port: i64,
pub memory_limit: Option<String>,
pub cpu_limit: Option<String>,
pub git_token: Option<String>,
}
#[derive(Debug, Deserialize)]
@@ -25,6 +34,10 @@ pub struct UpdateApp {
pub repo_url: Option<String>,
pub branch: Option<String>,
pub port: Option<i64>,
pub memory_limit: Option<String>,
pub cpu_limit: Option<String>,
pub git_token: Option<String>,
pub is_public: Option<bool>,
}
#[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]


@@ -12,6 +12,187 @@ use crate::{
AppState,
};
/// Build the Caddy route JSON for an app.
/// Public apps get a plain reverse_proxy; private apps get forward_auth via HIY.
fn caddy_route(app_host: &str, upstream: &str, is_public: bool) -> serde_json::Value {
if is_public {
serde_json::json!({
"match": [{"host": [app_host]}],
"handle": [{"handler": "reverse_proxy", "upstreams": [{"dial": upstream}]}]
})
} else {
serde_json::json!({
"match": [{"host": [app_host]}],
"handle": [{
"handler": "subroute",
"routes": [{
"handle": [{
"handler": "reverse_proxy",
"rewrite": {"method": "GET", "uri": "/auth/verify"},
"headers": {"request": {"set": {
"X-Forwarded-Method": ["{http.request.method}"],
"X-Forwarded-Uri": ["{http.request.uri}"],
"X-Forwarded-Host": ["{http.request.host}"],
"X-Forwarded-Proto": ["{http.request.scheme}"]
}}},
"upstreams": [{"dial": "server:3000"}],
"handle_response": [{
// status_code 2 is Caddy's class wildcard: it matches any 2xx response.
"match": {"status_code": [2]},
"routes": [{"handle": [{"handler": "reverse_proxy", "upstreams": [{"dial": upstream}]}]}]
}]
}]
}]
}]
})
}
}
/// Re-register every app's Caddy route from the database.
/// Called at startup so that removing `--resume` from Caddy doesn't lose
/// routes when the stack restarts.
pub async fn restore_caddy_routes(db: &crate::DbPool) {
// Give Caddy a moment to finish loading the Caddyfile before we PATCH it.
let caddy_api = std::env::var("CADDY_API_URL").unwrap_or_else(|_| "http://caddy:2019".into());
let client = reqwest::Client::new();
for attempt in 1..=10u32 {
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
if client.get(format!("{}/config/", caddy_api)).send().await.is_ok() {
break;
}
tracing::info!("restore_caddy_routes: waiting for Caddy ({}/10)…", attempt);
}
let apps = match sqlx::query_as::<_, crate::models::App>("SELECT * FROM apps")
.fetch_all(db)
.await
{
Ok(a) => a,
Err(e) => { tracing::error!("restore_caddy_routes: DB error: {}", e); return; }
};
for app in &apps {
push_visibility_to_caddy(&app.id, app.port, app.is_public != 0).await;
}
tracing::info!("restore_caddy_routes: registered {} app routes", apps.len());
}
/// On startup, ensure every app that had a successful deploy is actually running.
/// If the host rebooted, containers will be in "exited" state — start them.
/// If a container is missing entirely, log a warning (we don't rebuild automatically).
pub async fn restore_app_containers(db: &crate::DbPool) {
let apps = match sqlx::query_as::<_, crate::models::App>("SELECT * FROM apps")
.fetch_all(db)
.await
{
Ok(a) => a,
Err(e) => { tracing::error!("restore_app_containers: DB error: {}", e); return; }
};
for app in &apps {
// Only care about apps that have at least one successful deploy.
let has_deploy: bool = sqlx::query_scalar(
"SELECT COUNT(*) > 0 FROM deploys WHERE app_id = ? AND status = 'success'"
)
.bind(&app.id)
.fetch_one(db)
.await
.unwrap_or(false);
if !has_deploy {
continue;
}
let container = format!("hiy-{}", app.id);
// Check container state via `podman inspect`.
let inspect = tokio::process::Command::new("podman")
.args(["inspect", "--format", "{{.State.Status}}", &container])
.output()
.await;
match inspect {
Ok(out) if out.status.success() => {
let status = String::from_utf8_lossy(&out.stdout).trim().to_string();
if status == "running" {
tracing::debug!("restore_app_containers: {} already running", container);
} else {
tracing::info!("restore_app_containers: starting {} (was {})", container, status);
let start = tokio::process::Command::new("podman")
.args(["start", &container])
.output()
.await;
match start {
Ok(o) if o.status.success() =>
tracing::info!("restore_app_containers: {} started", container),
Ok(o) =>
tracing::warn!("restore_app_containers: failed to start {}: {}",
container, String::from_utf8_lossy(&o.stderr).trim()),
Err(e) =>
tracing::warn!("restore_app_containers: error starting {}: {}", container, e),
}
}
}
_ => {
tracing::warn!(
"restore_app_containers: container {} not found — redeploy needed",
container
);
}
}
}
tracing::info!("restore_app_containers: done");
}
/// Push a visibility change to Caddy without requiring a full redeploy.
/// Best-effort: logs a warning on failure but does not surface an error to the caller.
async fn push_visibility_to_caddy(app_id: &str, port: i64, is_public: bool) {
if let Err(e) = try_push_visibility_to_caddy(app_id, port, is_public).await {
tracing::warn!("caddy visibility update for {}: {}", app_id, e);
}
}
async fn try_push_visibility_to_caddy(app_id: &str, port: i64, is_public: bool) -> anyhow::Result<()> {
let caddy_api = std::env::var("CADDY_API_URL").unwrap_or_else(|_| "http://caddy:2019".into());
let domain = std::env::var("DOMAIN_SUFFIX").unwrap_or_else(|_| "localhost".into());
let app_host = format!("{}.{}", app_id, domain);
let upstream = format!("hiy-{}:{}", app_id, port);
let client = reqwest::Client::new();
// Discover the Caddy server name (Caddyfile adapter names it "srv0").
let servers: serde_json::Value = client
.get(format!("{}/config/apps/http/servers/", caddy_api))
.send().await?
.json().await?;
let server_name = servers.as_object()
.and_then(|m| m.keys().next().cloned())
.ok_or_else(|| anyhow::anyhow!("no servers in Caddy config"))?;
let routes_url = format!("{}/config/apps/http/servers/{}/routes", caddy_api, server_name);
let routes: Vec<serde_json::Value> = client.get(&routes_url).send().await?.json().await?;
let dashboard = serde_json::json!({
"handle": [{"handler": "reverse_proxy", "upstreams": [{"dial": "server:3000"}]}]
});
let mut updated: Vec<serde_json::Value> = routes.into_iter()
.filter(|r| {
let is_this_app = r.pointer("/match/0/host")
.and_then(|h| h.as_array())
.map(|hosts| hosts.iter().any(|h| h.as_str() == Some(app_host.as_str())))
.unwrap_or(false);
let is_catchall = r.get("match").is_none();
!is_this_app && !is_catchall
})
.collect();
updated.insert(0, caddy_route(&app_host, &upstream, is_public));
updated.push(dashboard);
client.patch(&routes_url).json(&updated).send().await?;
Ok(())
}
pub async fn list(State(s): State<AppState>) -> Result<Json<Vec<App>>, StatusCode> {
let apps = sqlx::query_as::<_, App>("SELECT * FROM apps ORDER BY created_at DESC")
.fetch_all(&s.db)
@@ -29,10 +210,18 @@ pub async fn create(
let now = Utc::now().to_rfc3339();
let branch = payload.branch.unwrap_or_else(|| "main".into());
let secret = Uuid::new_v4().to_string().replace('-', "");
let memory_limit = payload.memory_limit.unwrap_or_else(|| "512m".into());
let cpu_limit = payload.cpu_limit.unwrap_or_else(|| "0.5".into());
let git_token_enc = payload.git_token
.as_deref()
.filter(|t| !t.is_empty())
.map(crate::crypto::encrypt)
.transpose()
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
sqlx::query(
"INSERT INTO apps (id, name, repo_url, branch, port, webhook_secret, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
"INSERT INTO apps (id, name, repo_url, branch, port, webhook_secret, memory_limit, cpu_limit, git_token, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
)
.bind(&id)
.bind(&payload.name)
@@ -40,6 +229,9 @@ pub async fn create(
.bind(&branch)
.bind(payload.port)
.bind(&secret)
.bind(&memory_limit)
.bind(&cpu_limit)
.bind(&git_token_enc)
.bind(&now)
.bind(&now)
.execute(&s.db)
@@ -89,6 +281,44 @@ pub async fn update(
.execute(&s.db).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
}
if let Some(v) = payload.memory_limit {
sqlx::query("UPDATE apps SET memory_limit = ?, updated_at = ? WHERE id = ?")
.bind(v).bind(&now).bind(&id)
.execute(&s.db).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
}
if let Some(v) = payload.cpu_limit {
sqlx::query("UPDATE apps SET cpu_limit = ?, updated_at = ? WHERE id = ?")
.bind(v).bind(&now).bind(&id)
.execute(&s.db).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
}
if let Some(v) = payload.is_public {
let flag: i64 = if v { 1 } else { 0 };
sqlx::query("UPDATE apps SET is_public = ?, updated_at = ? WHERE id = ?")
.bind(flag).bind(&now).bind(&id)
.execute(&s.db).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
// Immediately reconfigure the Caddy route so the change takes effect
// without a full redeploy.
let app = fetch_app(&s, &id).await?;
push_visibility_to_caddy(&id, app.port, v).await;
}
if let Some(v) = payload.git_token {
if v.is_empty() {
sqlx::query("UPDATE apps SET git_token = NULL, updated_at = ? WHERE id = ?")
.bind(&now).bind(&id)
.execute(&s.db).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
} else {
let enc = crate::crypto::encrypt(&v)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
sqlx::query("UPDATE apps SET git_token = ?, updated_at = ? WHERE id = ?")
.bind(enc).bind(&now).bind(&id)
.execute(&s.db).await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
}
}
fetch_app(&s, &id).await.map(Json)
}


@@ -5,7 +5,7 @@ use axum::{
};
use serde_json::json;
use crate::{models::Database, AppState};
use crate::{crypto, models::Database, AppState};
type ApiError = (StatusCode, String);
@@ -39,13 +39,17 @@ pub async fn get_db(
match db {
None => Err(err(StatusCode::NOT_FOUND, "No database provisioned")),
Some(d) => Ok(Json(json!({
"app_id": d.app_id,
"schema": d.app_id,
"pg_user": d.pg_user,
"conn_str": conn_str(&d.pg_user, &d.pg_password),
"created_at": d.created_at,
}))),
Some(d) => {
let pw = crypto::decrypt(&d.pg_password)
.map_err(|e| err(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(Json(json!({
"app_id": d.app_id,
"schema": d.app_id,
"pg_user": d.pg_user,
"conn_str": conn_str(&d.pg_user, &pw),
"created_at": d.created_at,
})))
}
}
}
@@ -107,27 +111,31 @@ pub async fn provision(
.execute(pg).await
.map_err(|e| err(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
// Persist credentials.
// Persist credentials (password encrypted at rest).
let enc_password = crypto::encrypt(&password)
.map_err(|e| err(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let now = chrono::Utc::now().to_rfc3339();
sqlx::query(
"INSERT INTO databases (app_id, pg_user, pg_password, created_at) VALUES (?, ?, ?, ?)",
)
.bind(&app_id)
.bind(&pg_user)
.bind(&password)
.bind(&enc_password)
.bind(&now)
.execute(&s.db)
.await
.map_err(|e| err(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
// Inject DATABASE_URL as an app env var (picked up on next deploy).
// Inject DATABASE_URL as an encrypted app env var (picked up on next deploy).
let url = conn_str(&pg_user, &password);
let enc_url = crypto::encrypt(&url)
.map_err(|e| err(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
sqlx::query(
"INSERT INTO env_vars (app_id, key, value) VALUES (?, 'DATABASE_URL', ?)
ON CONFLICT (app_id, key) DO UPDATE SET value = excluded.value",
)
.bind(&app_id)
.bind(&url)
.bind(&enc_url)
.execute(&s.db)
.await
.map_err(|e| err(StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;


@@ -5,6 +5,7 @@ use axum::{
};
use crate::{
crypto,
models::{EnvVar, SetEnvVar},
AppState,
};
@@ -20,7 +21,12 @@ pub async fn list(
.fetch_all(&s.db)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
Ok(Json(vars))
// Return keys only; values are masked in the UI and never sent in plaintext.
let masked: Vec<EnvVar> = vars
.into_iter()
.map(|e| EnvVar { value: "••••••••".into(), ..e })
.collect();
Ok(Json(masked))
}
pub async fn set(
@@ -28,13 +34,15 @@ pub async fn set(
Path(app_id): Path<String>,
Json(payload): Json<SetEnvVar>,
) -> Result<StatusCode, StatusCode> {
let encrypted = crypto::encrypt(&payload.value)
.map_err(|e| { tracing::error!("encrypt env var: {}", e); StatusCode::INTERNAL_SERVER_ERROR })?;
sqlx::query(
"INSERT INTO env_vars (app_id, key, value) VALUES (?, ?, ?)
ON CONFLICT(app_id, key) DO UPDATE SET value = excluded.value",
)
.bind(&app_id)
.bind(&payload.key)
.bind(&payload.value)
.bind(&encrypted)
.execute(&s.db)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;


@@ -278,7 +278,8 @@ pub async fn app_detail(
<button class="primary" onclick="provisionDb()">Provision Database</button></div>"#
.to_string(),
(true, Some(db)) => {
let url = format!("postgres://{}:{}@postgres:5432/hiy", db.pg_user, db.pg_password);
let pw = crate::crypto::decrypt(&db.pg_password).unwrap_or_default();
let url = format!("postgres://{}:{}@postgres:5432/hiy", db.pg_user, pw);
format!(r#"<div class="card"><h2>Database</h2>
<table style="margin-bottom:16px">
<tr><td style="width:160px">Schema</td><td><code>{schema}</code></td></tr>
@@ -296,18 +297,45 @@ pub async fn app_detail(
}
};
let is_public = app.is_public != 0;
let visibility_badge = if is_public {
r#"<span class="badge badge-success">public</span>"#
} else {
r#"<span class="badge badge-unknown">private</span>"#
};
let visibility_toggle_label = if is_public { "Make private" } else { "Make public" };
let (git_token_status, git_token_clear_btn) = if app.git_token.is_some() {
(
r#"<span class="badge badge-success">Token configured</span>"#.to_string(),
r#"<button class="danger" onclick="clearGitToken()">Clear</button>"#.to_string(),
)
} else {
(
r#"<span class="badge badge-unknown">No token — public repos only</span>"#.to_string(),
String::new(),
)
};
let body = APP_DETAIL_TMPL
.replace("{{name}}", &app.name)
.replace("{{repo}}", &app.repo_url)
.replace("{{branch}}", &app.branch)
.replace("{{port}}", &app.port.to_string())
.replace("{{host}}", &host)
.replace("{{app_id}}", &app.id)
.replace("{{secret}}", &app.webhook_secret)
.replace("{{deploy_rows}}", &deploy_rows)
.replace("{{env_rows}}", &env_rows)
.replace("{{c_badge}}", &container_badge(&container_state))
.replace("{{db_card}}", &db_card);
.replace("{{name}}", &app.name)
.replace("{{repo}}", &app.repo_url)
.replace("{{branch}}", &app.branch)
.replace("{{port}}", &app.port.to_string())
.replace("{{host}}", &host)
.replace("{{app_id}}", &app.id)
.replace("{{secret}}", &app.webhook_secret)
.replace("{{memory_limit}}", &app.memory_limit)
.replace("{{cpu_limit}}", &app.cpu_limit)
.replace("{{deploy_rows}}", &deploy_rows)
.replace("{{env_rows}}", &env_rows)
.replace("{{c_badge}}", &container_badge(&container_state))
.replace("{{db_card}}", &db_card)
.replace("{{git_token_status}}", &git_token_status)
.replace("{{git_token_clear_btn}}", &git_token_clear_btn)
.replace("{{visibility_badge}}", visibility_badge)
.replace("{{visibility_toggle_label}}", visibility_toggle_label)
.replace("{{is_public_js}}", if is_public { "true" } else { "false" });
Html(page(&app.name, &body)).into_response()
}


@@ -14,6 +14,8 @@
&nbsp;·&nbsp; branch <code>{{branch}}</code>
&nbsp;·&nbsp; port <code>{{port}}</code>
&nbsp;·&nbsp; <a href="http://{{name}}.{{host}}" target="_blank">{{name}}.{{host}}</a>
&nbsp;·&nbsp; {{visibility_badge}}
<button style="font-size:0.78rem;padding:2px 10px;margin-left:4px" onclick="toggleVisibility()">{{visibility_toggle_label}}</button>
</p>
<div class="card">
@@ -33,8 +35,42 @@
</div>
</div>
<div class="card">
<h2>Settings</h2>
<div class="row" style="margin-bottom:12px">
<div style="flex:3"><label>Repo URL</label><input id="cfg-repo" type="text" value="{{repo}}"></div>
<div style="flex:1"><label>Branch</label><input id="cfg-branch" type="text" value="{{branch}}"></div>
</div>
<div class="row" style="margin-bottom:12px">
<div style="flex:1"><label>Port</label><input id="cfg-port" type="number" value="{{port}}"></div>
<div style="flex:1"><label>Memory limit</label><input id="cfg-memory" type="text" value="{{memory_limit}}"></div>
<div style="flex:1"><label>CPU limit</label><input id="cfg-cpu" type="text" value="{{cpu_limit}}"></div>
</div>
<button class="primary" onclick="saveSettings()">Save</button>
</div>
{{db_card}}
<div class="card">
<h2>Git Authentication</h2>
<p class="muted" style="margin-bottom:12px;font-size:0.9rem">
Required for private repos. Store a Personal Access Token (GitHub: <em>repo</em> scope,
GitLab: <em>read_repository</em>) so deploys can clone without interactive prompts.
Tokens apply to HTTPS repo URLs only; SSH URLs authenticate with the server's own key pair instead.
</p>
<p style="margin-bottom:12px">{{git_token_status}}</p>
<div class="row">
<div style="flex:1">
<label>Personal Access Token</label>
<input id="git-token-input" type="password" placeholder="ghp_…">
</div>
<div style="align-self:flex-end;display:flex;gap:8px">
<button class="primary" onclick="saveGitToken()">Save</button>
{{git_token_clear_btn}}
</div>
</div>
</div>
<div class="card">
<h2>Environment Variables</h2>
<div class="row" style="margin-bottom:16px">
@@ -145,6 +181,53 @@ async function deprovisionDb() {
if (r.ok) window.location.reload();
else alert('Error: ' + await r.text());
}
const IS_PUBLIC = {{is_public_js}};
async function saveSettings() {
const body = {
repo_url: document.getElementById('cfg-repo').value,
branch: document.getElementById('cfg-branch').value,
port: parseInt(document.getElementById('cfg-port').value, 10),
memory_limit: document.getElementById('cfg-memory').value,
cpu_limit: document.getElementById('cfg-cpu').value,
};
const r = await fetch('/api/apps/' + APP_ID, {
method: 'PATCH',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify(body),
});
if (r.ok) window.location.reload();
else alert('Error saving settings: ' + await r.text());
}
async function toggleVisibility() {
const r = await fetch('/api/apps/' + APP_ID, {
method: 'PATCH',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({is_public: !IS_PUBLIC}),
});
if (r.ok) window.location.reload();
else alert('Error updating visibility: ' + await r.text());
}
async function saveGitToken() {
const tok = document.getElementById('git-token-input').value;
if (!tok) { alert('Enter a token first'); return; }
const r = await fetch('/api/apps/' + APP_ID, {
method: 'PATCH',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({git_token: tok}),
});
if (r.ok) window.location.reload();
else alert('Error saving token: ' + await r.text());
}
async function clearGitToken() {
if (!confirm('Remove the stored git token for ' + APP_ID + '?')) return;
const r = await fetch('/api/apps/' + APP_ID, {
method: 'PATCH',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({git_token: ''}),
});
if (r.ok) window.location.reload();
else alert('Error clearing token: ' + await r.text());
}
async function stopApp() {
if (!confirm('Stop ' + APP_ID + '?')) return;
const r = await fetch('/api/apps/' + APP_ID + '/stop', {method:'POST'});