Restructure main and translate workshop docs to Dutch

This commit is contained in:
Paul Harkink 2026-02-28 17:06:24 +01:00
parent ced5b64be1
commit 8d05689a7a
31 changed files with 995 additions and 1311 deletions

README.md

@ -1,116 +1,104 @@
# Kubernetes GitOps Workshop
A hands-on 2.5- to 4-hour workshop teaching real-world cluster operations with ArgoCD, MetalLB, Ingress-Nginx, and Tekton. Everything runs locally on a single-node k3s cluster in a VM.
---
## Before you start
### Requirements
You need three tools on your laptop. Install all three **before** the workshop, ideally the day before: the downloads are large.
| Tool               | Install                                                          |
|--------------------|------------------------------------------------------------------|
| **VirtualBox 7.x** | https://www.virtualbox.org/wiki/Downloads — reboot your laptop afterwards |
| **Vagrant 2.4.x**  | https://developer.hashicorp.com/vagrant/downloads                |
| **Git**            | https://git-scm.com/downloads                                    |
At least 12 GB of free RAM and ~15 GB of disk space. On Apple Silicon (M1/M2/M3/M4), download the **Apple Silicon build** of VirtualBox — see [docs/vm-setup.md](docs/vm-setup.md).
Quick check — all three must print a version:
```bash
VBoxManage --version && vagrant --version && git --version
```
If any of the three prints nothing: install it and try again.
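The one-liner above stops at the first missing tool. A slightly friendlier sketch (a hypothetical helper, not part of the repo) that names every missing tool:

```bash
# check_tools: print each command that is not on PATH (hypothetical helper).
check_tools() {
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
    fi
  done
}

check_tools VBoxManage vagrant git
```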
---
## Getting started
**Fork the repo first** to your own GitHub account: go to https://github.com/paulharkink/ops-demo and click Fork. That way you can push your own changes without needing access to the original repo.
```bash
git clone https://github.com/JOUW_USERNAME/ops-demo.git
cd ops-demo
vagrant up          # first run: ~10-15 min
vagrant ssh
cd /vagrant
./scripts/bootstrap.sh
```
Then follow the exercises in order. See [docs/vm-setup.md](docs/vm-setup.md) if anything goes wrong with the VM.
---
## Exercises
| #  | Exercise                    | Guide                                                      | Type  | Time   |
|----|-----------------------------|------------------------------------------------------------|-------|--------|
| 01 | Bootstrap ArgoCD            | [docs/01-argocd-bootstrap.md](docs/01-argocd-bootstrap.md) | Core  | 30 min |
| 02 | Deploy podinfo via GitOps   | [docs/02-deploy-podinfo.md](docs/02-deploy-podinfo.md)     | Core  | 30 min |
| 03 | MetalLB + Ingress-Nginx     | [docs/03-metallb-ingress.md](docs/03-metallb-ingress.md)   | Core  | 45 min |
| 04 | Tekton pipeline             | [docs/04-tekton-pipeline.md](docs/04-tekton-pipeline.md)   | Core  | 45 min |
| 05 | App upgrade + reflection    | [docs/05-app-upgrade.md](docs/05-app-upgrade.md)           | Core  | 15 min |
| 06 | Prometheus + Grafana        | [docs/06-monitoring.md](docs/06-monitoring.md)             | Bonus | 60 min |
Beginners: focus on 01-03 (~1h45m). Everyone else: aim for 01-05, the full core loop.
---
## Stack
| Component             | Purpose                 | Version                |
|-----------------------|-------------------------|------------------------|
| k3s                   | Kubernetes              | v1.31.4                |
| ArgoCD                | GitOps engine           | v2.13.x (chart 7.7.11) |
| MetalLB               | Bare-metal LoadBalancer | v0.14.9                |
| Ingress-Nginx         | HTTP routing            | chart 4.12.0           |
| Tekton                | CI pipeline             | v0.65.1                |
| podinfo               | Demo app                | 6.6.2 → 6.7.0          |
| kube-prometheus-stack | Observability (bonus)   | chart 68.4.4           |
---
## Stuck?
Each solution branch is cumulative — it contains the complete working state up to and including that exercise. You can open a PR from a solution branch to your own branch to see exactly what you are missing.
```bash
# View a specific file without checking out the branch
git fetch origin
git show origin/solution/03-metallb-ingress:manifests/networking/metallb/metallb-config.yaml
```
| Branch | State |
|--------------------------------|----------------------------------|
| `solution/01-argocd-bootstrap` | ArgoCD running |
| `solution/02-deploy-podinfo` | podinfo synced via ArgoCD |
| `solution/03-metallb-ingress` | LAN access via MetalLB + Ingress |
| `solution/04-tekton-pipeline` | Full GitOps CI loop |
| `solution/05-app-upgrade` | podinfo at v6.7.0 |
| `solution/06-monitoring` | Prometheus + Grafana running |
---
## Network layout
```
Your laptop
│ 192.168.56.x (VirtualBox host-only)
VM: 192.168.56.10
└── MetalLB pool: 192.168.56.200-192.168.56.220
└── 192.168.56.200 → Ingress-Nginx
├── podinfo.192.168.56.200.nip.io
├── argocd.192.168.56.200.nip.io
└── grafana.192.168.56.200.nip.io (bonus)
```


@ -1,22 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: podinfo
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "10"
spec:
project: workshop
source:
repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
path: manifests/apps/podinfo
destination:
server: https://kubernetes.default.svc
namespace: podinfo
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -1,36 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argocd
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "0"
spec:
project: workshop
source:
repoURL: https://argoproj.github.io/argo-helm
chart: argo-cd
targetRevision: "7.7.11"
helm:
valueFiles:
- $values/manifests/argocd/values.yaml
sources:
- repoURL: https://argoproj.github.io/argo-helm
chart: argo-cd
targetRevision: "7.7.11"
helm:
valueFiles:
- $values/manifests/argocd/values.yaml
- repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: argocd
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true


@ -1,22 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: workshop-pipeline
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "6"
spec:
project: workshop
source:
repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
path: manifests/ci/pipeline
destination:
server: https://kubernetes.default.svc
namespace: tekton-pipelines
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -1,24 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tekton
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "5"
spec:
project: workshop
source:
repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
path: manifests/ci/tekton
kustomize: {}
destination:
server: https://kubernetes.default.svc
namespace: tekton-pipelines
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true


@ -1,29 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: prometheus-grafana
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "10"
spec:
project: workshop
sources:
- repoURL: https://prometheus-community.github.io/helm-charts
chart: kube-prometheus-stack
targetRevision: "68.4.4"
helm:
valueFiles:
- $values/manifests/monitoring/values.yaml
- repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: monitoring
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true


@ -1,28 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: ingress-nginx
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "3"
spec:
project: workshop
sources:
- repoURL: https://kubernetes.github.io/ingress-nginx
chart: ingress-nginx
targetRevision: "4.12.0"
helm:
valueFiles:
- $values/manifests/networking/ingress-nginx/values.yaml
- repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: ingress-nginx
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -1,24 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-config
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "2"
spec:
project: workshop
source:
repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
path: manifests/networking/metallb
directory:
include: "metallb-config.yaml"
destination:
server: https://kubernetes.default.svc
namespace: metallb-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -1,29 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "1"
spec:
project: workshop
sources:
- repoURL: https://metallb.github.io/metallb
chart: metallb
targetRevision: "0.14.9"
helm:
valueFiles:
- $values/manifests/networking/metallb/values.yaml
- repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: metallb-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true


@ -1,20 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: root
namespace: argocd
spec:
project: workshop
source:
repoURL: https://github.com/paulharkink/ops-demo.git
targetRevision: HEAD
path: apps
destination:
server: https://kubernetes.default.svc
namespace: argocd
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true


@ -1,117 +1,151 @@
# Exercise 01 — Bootstrap ArgoCD
**Time**: ~30 min
**Goal**: Get ArgoCD running on your local k3s cluster and set up the App-of-Apps.
---
## What you'll learn
- How to install ArgoCD via Helm
- The App-of-Apps pattern: one ArgoCD Application that manages all the others
- How ArgoCD watches your Git repository and syncs cluster state
---
## Prerequisites
Your VM is up and you are SSHed in:
```bash
vagrant up # first time takes ~10 min (downloads images)
vagrant ssh
cd /vagrant
```
Verify k3s is healthy:
```bash
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ops-demo Ready control-plane,master Xm v1.31.x+k3s1
```
---
## Steps
### 1. Run the bootstrap script
```bash
./scripts/bootstrap.sh
```
This script:
1. Detects your fork's URL from `git remote`
2. Creates the `argocd` namespace
3. Installs ArgoCD via Helm (chart 7.7.11 → ArgoCD v2.13.x)
4. Applies `apps/project.yaml` — a permissive `AppProject` for all workshop apps
5. Generates `apps/root.yaml` with your fork URL and applies it
At the end it prints the admin password. **Copy it now.**
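The fork-URL detection can be imagined roughly like this (a sketch, assuming the script reads `git remote get-url origin`; the real script may differ). SSH-style remotes are rewritten to HTTPS form:

```bash
# Rewrite an SSH-style remote (git@github.com:user/repo.git) to HTTPS form;
# HTTPS remotes pass through unchanged. Hypothetical helper, for illustration.
normalize_remote() {
  printf '%s\n' "$1" | sed -E 's#^git@([^:]+):#https://\1/#'
}

normalize_remote "git@github.com:alice/ops-demo.git"
# -> https://github.com/alice/ops-demo.git
```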
---
### 2. Open the ArgoCD UI
In a second terminal on your laptop (not the VM), run:
```bash
vagrant ssh -- -L 8080:localhost:8080 &
# or, inside the VM:
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
Then open **https://localhost:8080** in your browser (accept the self-signed certificate).
Login: `admin` / the password from the script output.
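If you lose the password: ArgoCD normally keeps the initial one in the `argocd-initial-admin-secret` Secret, and Secret data is base64-encoded. The kubectl line below is the usual way to read it back (run inside the VM); the decode step is shown on a sample value:

```bash
# Read the initial admin password back (run inside the VM):
#   kubectl -n argocd get secret argocd-initial-admin-secret \
#     -o jsonpath='{.data.password}' | base64 -d
# Secret data is base64-encoded; decoding a sample value:
echo 'aHVudGVyMg==' | base64 -d; echo
# -> hunter2
```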
---
### 3. Commit and push root.yaml
The bootstrap script generated `apps/root.yaml` with your fork URL. This file has to be in your repo so that ArgoCD can sync it:
```bash
git add apps/root.yaml
git commit -m "feat: add root app-of-apps"
git push
```
---
### 4. Inspect the root Application
In the ArgoCD UI you should now see the **root** application appear. Click it.
- It watches the `apps/` directory in your fork
- Anything you commit there, ArgoCD picks up automatically
Check from the CLI too:
```bash
kubectl get applications -n argocd
```
---
### 5. Let ArgoCD manage itself (optional but satisfying)
Create `apps/argocd.yaml`:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argocd
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "0"
spec:
project: workshop
sources:
- repoURL: https://argoproj.github.io/argo-helm
chart: argo-cd
targetRevision: "7.7.11"
helm:
valueFiles:
- $values/manifests/argocd/values.yaml
- repoURL: JOUW_FORK_URL
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: argocd
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true
```
Replace `JOUW_FORK_URL` with your fork's URL (it is also in `apps/root.yaml`). Commit and push — ArgoCD now manages itself from Git: push a change to `manifests/argocd/values.yaml` and it applies it to its own Helm release.
---
## Expected outcome
```
NAME SYNC STATUS HEALTH STATUS
root     Synced        Healthy
argocd   Synced        Healthy
```
---
## Troubleshooting
| Symptom | Fix |
|---------|-----|
| `kubectl get nodes` shows NotReady | Wait 30-60 s; k3s is still starting |
| root Application shows "Unknown" | Not pushed yet, or ArgoCD cannot reach your repo yet — wait a moment |
| Helm install fails with a timeout | `kubectl get pods -n argocd` — images are probably still downloading |
| UI shows "Unknown" sync status | Click **Refresh** on the application |
| Port-forward drops | Re-run the `kubectl port-forward` command |
---
## What's next
In Exercise 02 you deploy your first real application via GitOps — no `kubectl apply`, just a YAML file committed to Git.


@ -1,158 +1,222 @@
# Exercise 02 — Deploy podinfo via GitOps
**Time**: ~30 min
**Goal**: Deploy a real application purely through Git — no `kubectl apply`.
---
## What you'll learn
- How committing a single ArgoCD `Application` manifest is the only deploy action needed
- How to read an application's sync status and health
- The GitOps loop in practice: commit → push → ArgoCD detects the change → cluster updated
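That loop is just reconciliation: compare the revision in Git with what is applied, and act on any difference. A toy sketch of the idea (not ArgoCD code):

```bash
# Toy reconciler: desired state comes from Git, applied state from the
# cluster; any difference triggers a sync. ArgoCD runs this loop for you.
reconcile() {
  desired="$1"   # e.g. git rev-parse HEAD in the watched repo
  applied="$2"   # revision last applied to the cluster
  if [ "$desired" = "$applied" ]; then
    echo "Synced"
  else
    echo "OutOfSync: applying $desired"
  fi
}

reconcile abc123 abc122
# -> OutOfSync: applying abc123
```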
---
## Prerequisites
Exercise 01 complete: ArgoCD is running and the root app is Synced.
---
## Background: what is podinfo?
podinfo is a tiny Go web app by Stefan Prodan (also the creator of Flux), used in many Kubernetes demos. It shows its own version number, has `/healthz` and `/readyz` endpoints, and looks good in a browser. No external dependencies, no secrets needed.
---
## Steps
### 1. Create the manifests
Create the following files:
**`manifests/apps/podinfo/namespace.yaml`**
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: podinfo
```
**`manifests/apps/podinfo/deployment.yaml`**
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: podinfo
spec:
replicas: 1
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
spec:
containers:
- name: podinfo
image: ghcr.io/stefanprodan/podinfo:6.6.2
ports:
- containerPort: 9898
name: http
env:
- name: PODINFO_UI_COLOR
value: "#6C48C5"
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 5
periodSeconds: 10
resources:
requests:
cpu: 50m
memory: 64Mi
```
**`manifests/apps/podinfo/service.yaml`**
```yaml
apiVersion: v1
kind: Service
metadata:
name: podinfo
namespace: podinfo
spec:
selector:
app: podinfo
ports:
- port: 80
targetPort: 9898
name: http
```
---
### 2. Create the ArgoCD Application
This is the only thing you need to "deploy" the app — tell ArgoCD to watch the manifests:
**`apps/apps/podinfo.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: podinfo
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "10"
spec:
project: workshop
source:
repoURL: JOUW_FORK_URL
targetRevision: HEAD
path: manifests/apps/podinfo
destination:
server: https://kubernetes.default.svc
namespace: podinfo
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
Replace `JOUW_FORK_URL` with your fork's URL.
---
### 3. Commit and push
```bash
git add apps/apps/podinfo.yaml manifests/apps/podinfo/
git commit -m "feat: deploy podinfo via GitOps"
git push
```
This is the only action needed to deploy the application. Check ArgoCD now — you should see a **podinfo** application appear.
> **The GitOps point**: you didn't run any `kubectl apply` for podinfo. You committed a file to Git, and ArgoCD does the rest.
---
### 4. Watch it sync
```bash
kubectl get application podinfo -n argocd -w
```
Wait until you see `Synced` and `Healthy`. Then:
```bash
kubectl get pods -n podinfo
# NAME                      READY   STATUS    RESTARTS   AGE
# podinfo-xxxxxxxxx-xxxxx   1/1     Running   0          30s
```
---
### 5. Verify the app is working
Port-forward to test locally (inside the VM):
```bash
kubectl port-forward svc/podinfo -n podinfo 9898:80
```
In another terminal (or with curl inside the VM):
```bash
curl http://localhost:9898
# {"hostname":"podinfo-xxx","version":"6.6.2", ...}
```
Version `6.6.2` — matching the image tag in `deployment.yaml`. Remember it: you'll upgrade it in Exercise 05.
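If `jq` is not installed in the VM, the version can be pulled out of the JSON with `sed` instead (a quick-and-dirty sketch, shown on a sample reply):

```bash
# Extract "version" from podinfo's JSON reply without jq. Fine for this
# known, flat reply; not a general JSON parser.
reply='{"hostname":"podinfo-abc","version":"6.6.2"}'
printf '%s\n' "$reply" | sed -E 's/.*"version":"([^"]+)".*/\1/'
# -> 6.6.2
```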
---
### 6. Make a GitOps change
Change the UI color to prove the loop works.
In `manifests/apps/podinfo/deployment.yaml`, change:
```yaml
value: "#6C48C5"
```
to any hex color you like, e.g.:
```yaml
value: "#2ecc71"
```
Commit and push:
```bash
git add manifests/apps/podinfo/deployment.yaml
git commit -m "chore: change podinfo UI color"
git push
```
Within ~3 minutes (ArgoCD's default poll interval) the pod restarts and the new color appears. You can also click **Refresh** in the ArgoCD UI to trigger an immediate sync.
---
## Expected outcome
```
NAME SYNC STATUS HEALTH STATUS
podinfo Synced Healthy
```
```bash
curl http://localhost:9898 | jq .version
# "6.6.2"
```
---
## Troubleshooting
| Symptom | Fix |
|---------|-----|
| Application stuck in "Progressing" | `kubectl describe pod -n podinfo` — usually a slow image pull |
| `ImagePullBackOff` | Check `kubectl get events -n podinfo` |
| ArgoCD shows OutOfSync after a push | Click **Refresh** or wait up to 3 min for the next poll |
---
## What's next
podinfo is running but only reachable via port-forward. In Exercise 03 you'll set up MetalLB and Ingress-Nginx so you can reach the app from your laptop's browser — without any port-forward.


@ -1,109 +1,240 @@
# Exercise 03 — MetalLB + Ingress-Nginx
**Time**: ~45 min
**Goal**: Expose podinfo and the ArgoCD UI on a real LAN IP — reachable from your laptop's browser without any port-forward.
---
## What you'll learn
- Why you need MetalLB on a bare-metal or local Kubernetes cluster
- How a LoadBalancer service gets a real IP via L2 ARP
- How Ingress-Nginx routes HTTP traffic by hostname
- nip.io: a public wildcard DNS service for local development
---
## Background
In cloud Kubernetes (EKS, GKE, AKS), `type: LoadBalancer` automatically provisions a cloud load balancer with a public IP. On bare metal or local VMs, nothing does that — pods stay unreachable from outside.
**MetalLB** fills that gap: it watches for `LoadBalancer` services and assigns IPs from a pool you define. In L2 mode it answers ARP — your laptop asks "who has 192.168.56.200?" and MetalLB replies on the VM's behalf.
**Ingress-Nginx** is a single LoadBalancer service to which MetalLB assigns one IP. All your apps share that IP — Nginx routes to the right service based on the `Host:` header.
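Host-based routing boils down to a lookup table from hostname to backend service. A toy sketch of what Ingress-Nginx does for you (the backend names mirror this workshop's setup):

```bash
# Toy router: pick a backend service by Host header, 404 otherwise.
route() {
  case "$1" in
    podinfo.*) echo "service podinfo:80 (namespace podinfo)" ;;
    argocd.*)  echo "service argocd-server (namespace argocd)" ;;
    *)         echo "404 default backend" ;;
  esac
}

route "podinfo.192.168.56.200.nip.io"
# -> service podinfo:80 (namespace podinfo)
```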
**nip.io** is a public wildcard DNS service: `anything.192.168.56.200.nip.io` resolves to `192.168.56.200`. No `/etc/hosts` editing needed.
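The whole trick of nip.io is that the answer is embedded in the name itself. Extracting it locally shows the idea:

```bash
# nip.io answers any <anything>.<ip>.nip.io query with <ip>; the IP can be
# recovered from the name alone, which is exactly what their DNS server does.
host="podinfo.192.168.56.200.nip.io"
printf '%s\n' "$host" | sed -E 's#.*\.([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\.nip\.io$#\1#'
# -> 192.168.56.200
```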
---
## Steps
### 1. Install MetalLB
Create the following files:
**`manifests/networking/metallb/values.yaml`**
```yaml
speaker:
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
```
**`manifests/networking/metallb/metallb-config.yaml`**
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: workshop-pool
namespace: metallb-system
spec:
addresses:
- 192.168.56.200-192.168.56.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: workshop-l2
namespace: metallb-system
spec:
ipAddressPools:
- workshop-pool
```
**`apps/networking/metallb.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "1"
spec:
project: workshop
sources:
- repoURL: https://metallb.github.io/metallb
chart: metallb
targetRevision: "0.14.9"
helm:
valueFiles:
- $values/manifests/networking/metallb/values.yaml
- repoURL: JOUW_FORK_URL
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: metallb-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true
```
**`apps/networking/metallb-config.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: metallb-config
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "2"
spec:
project: workshop
source:
repoURL: JOUW_FORK_URL
targetRevision: HEAD
path: manifests/networking/metallb
directory:
include: "metallb-config.yaml"
destination:
server: https://kubernetes.default.svc
namespace: metallb-system
syncPolicy:
automated:
prune: true
selfHeal: true
```
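Note the `sync-wave` annotations: `"1"` on the MetalLB chart and `"2"` on its config. ArgoCD applies them in ascending wave order, so MetalLB's CRDs exist before the `IPAddressPool` is applied. The ordering is nothing more than a numeric sort:

```bash
# Sync waves are applied lowest-first; CRD-providing apps get lower waves
# than the resources that depend on them.
printf '%s\n' \
  "2 metallb-config" \
  "1 metallb" \
  "3 ingress-nginx" | sort -n
# -> 1 metallb
#    2 metallb-config
#    3 ingress-nginx
```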
---
### 2. Install Ingress-Nginx
**`manifests/networking/ingress-nginx/values.yaml`**
```yaml
controller:
ingressClassResource:
name: nginx
default: true
service:
type: LoadBalancer
loadBalancerIP: "192.168.56.200"
resources:
requests:
cpu: 100m
memory: 128Mi
```
**`apps/networking/ingress-nginx.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: ingress-nginx
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "3"
spec:
project: workshop
sources:
- repoURL: https://kubernetes.github.io/ingress-nginx
chart: ingress-nginx
targetRevision: "4.12.0"
helm:
valueFiles:
- $values/manifests/networking/ingress-nginx/values.yaml
- repoURL: JOUW_FORK_URL
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: ingress-nginx
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
---
### 3. Commit and push everything
```bash
git add apps/networking/ manifests/networking/
git commit -m "feat: MetalLB + Ingress-Nginx"
git push
```
Wait until both applications are Synced, then check:
```bash
kubectl get svc -n ingress-nginx
# NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)
# ingress-nginx-controller   LoadBalancer   10.43.x.x    192.168.56.200   80:xxx,443:xxx
```
The `EXTERNAL-IP` column shows `192.168.56.200` — MetalLB assigned it. From your **laptop** (not the VM), verify:
```bash
curl http://192.168.56.200
# 404 from Nginx — correct! No Ingress rule yet, but Nginx is reachable.
```
---
### 4. Add an Ingress for podinfo
**`manifests/apps/podinfo/ingress.yaml`**
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: podinfo
namespace: podinfo
spec:
ingressClassName: nginx
rules:
- host: podinfo.192.168.56.200.nip.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: podinfo
port:
name: http
```
```bash
git add manifests/apps/podinfo/ingress.yaml
git commit -m "feat: add podinfo Ingress"
git push
```
Open from your **laptop browser**: **http://podinfo.192.168.56.200.nip.io**
You should see the podinfo UI with version 6.6.2.
---
### 5. Enable the ArgoCD ingress
Now let's expose ArgoCD itself on a nice URL. Open `manifests/argocd/values.yaml`
and find the commented-out ingress block near the `server:` section:
```yaml
# ── Exercise 03: uncomment this block after Ingress-Nginx is deployed ──────
# ingress:
# enabled: true
# ...
```
Uncomment the entire block (remove the `#` characters):
```yaml
ingress:
  enabled: true
  # ...
  annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
```
Commit and push:
```bash
git add manifests/argocd/values.yaml
git commit -m "feat: enable ArgoCD ingress"
git push
```
ArgoCD will detect the change, upgrade its own Helm release, and create the Ingress.
Within a minute or two:
```bash
kubectl get ingress -n argocd
# NAME CLASS HOSTS ADDRESS
# argocd-server nginx argocd.192.168.56.200.nip.io 192.168.56.200
```
Open from your laptop: **http://argocd.192.168.56.200.nip.io**
---
## Expected outcome
| URL | App |
|-----|-----|
| http://podinfo.192.168.56.200.nip.io | podinfo v6.6.2 |
| http://argocd.192.168.56.200.nip.io | ArgoCD UI |
Both accessible from your laptop without any port-forward.
---
## Troubleshooting
| Symptom | Fix |
|---------|-----|
| `EXTERNAL-IP` is `<pending>` on ingress-nginx svc | MetalLB not ready yet — check `kubectl get pods -n metallb-system` |
| Curl to 192.168.56.200 times out from laptop | VirtualBox host-only adapter not configured; check `VBoxManage list hostonlyifs` |
| `nip.io` doesn't resolve | Temporary DNS issue; try again or use `/etc/hosts` with `192.168.56.200 podinfo.local` |
| ArgoCD ingress gives 502 | Wait for ArgoCD to restart after values change; ArgoCD now runs in insecure (HTTP) mode |
---
## What's next
In Exercise 04 you'll build a Tekton pipeline that:
1. Validates manifests
2. Bumps the podinfo image tag from `6.6.2` to `6.7.0` in `deployment.yaml`
3. Pushes the commit — and ArgoCD picks it up automatically


# Exercise 04 — Tekton Pipeline (GitOps Loop)
**Time**: ~45 min
**Goal**: Build an automated pipeline that bumps the podinfo image tag in Git and watches ArgoCD roll out the new version — the full GitOps CI/CD loop.
---
## What you'll learn
- Tekton concepts: Task, Pipeline, PipelineRun, Workspace
- How a pipeline commits to Git to trigger a GitOps deployment (no container registry needed)
- The full loop: pipeline push → ArgoCD detects → rolling update → new version in browser
---
## The loop visualised
```
You trigger a PipelineRun
Task 1: clone repo
Task 2: validate manifests (kubectl dry-run)
Task 3: bump image tag → deployment.yaml: 6.6.2 → 6.7.0
Task 4: git commit + push
ArgoCD polls the repo (or you click Refresh)
ArgoCD syncs the podinfo Deployment
Rolling update → podinfo v6.7.0 in your browser
```
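The bump in Task 3 is essentially a one-line substitution. A standalone sketch of the idea (file contents and tags are hard-coded here; the real task takes them as params):

```shell
# simulate the checked-out repo with a one-line deployment manifest
mkdir -p /tmp/bump-demo && cd /tmp/bump-demo
cat > deployment.yaml <<'EOF'
        image: ghcr.io/stefanprodan/podinfo:6.6.2
EOF

# bump the tag the way the pipeline's bump step would
sed -i 's|\(ghcr\.io/stefanprodan/podinfo:\)6\.6\.2|\16.7.0|' deployment.yaml
grep 'image:' deployment.yaml   # the tag now reads 6.7.0
```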
---
## Prerequisites
Exercises 0103 complete. podinfo is reachable at **http://podinfo.192.168.56.200.nip.io** and shows version **6.6.2**.
You need:
- A GitHub Personal Access Token (PAT) with **repo** scope (read + write) on your fork
---
## Steps
### 1. Install Tekton via ArgoCD

Tekton itself is installed declaratively: an ArgoCD Application points at a kustomization that pulls in the upstream Tekton release manifest.
**`manifests/ci/tekton/kustomization.yaml`**
```yaml
resources:
- https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.65.1/release.yaml
```
Then add the ArgoCD Application that applies it:
**`apps/ci/tekton.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tekton
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "5"
spec:
project: workshop
source:
repoURL: JOUW_FORK_URL
targetRevision: HEAD
path: manifests/ci/tekton
kustomize: {}
destination:
server: https://kubernetes.default.svc
namespace: tekton-pipelines
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true
```
```bash
git add apps/ci/tekton.yaml manifests/ci/tekton/
git commit -m "feat: install Tekton via ArgoCD"
git push
```
Wait until Tekton is running (~3–5 minutes):
```bash
kubectl get pods -n tekton-pipelines
# NAME                              READY   STATUS
# tekton-pipelines-controller-xxx   1/1     Running
# tekton-pipelines-webhook-xxx      1/1     Running
```
---
### 2. Create the pipeline resources

The pipeline, a ServiceAccount to run it, and a parameterised PipelineRun live in `manifests/ci/pipeline/`, managed by their own ArgoCD Application.
**`manifests/ci/pipeline/serviceaccount.yaml`**
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: pipeline-runner
namespace: tekton-pipelines
```
**`manifests/ci/pipeline/pipeline.yaml`** — see the solution branch for the full contents, or copy it from `reference-solution`:
```bash
git show origin/solution/04-tekton-pipeline:manifests/ci/pipeline/pipeline.yaml
```
**`manifests/ci/pipeline/pipelinerun.yaml`**
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
name: bump-podinfo-to-670
namespace: tekton-pipelines
spec:
pipelineRef:
name: gitops-image-bump
taskRunTemplate:
serviceAccountName: pipeline-runner
params:
- name: repo-url
value: JOUW_FORK_URL
- name: new-tag
value: "6.7.0"
workspaces:
- name: source
volumeClaimTemplate:
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 1Gi
- name: git-credentials
secret:
secretName: git-credentials
```
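For orientation: a PipelineRun can only bind params and workspaces that the Pipeline declares, so `pipeline.yaml` must open roughly like this (a sketch — the solution branch is authoritative, and the task list is elided):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: gitops-image-bump
  namespace: tekton-pipelines
spec:
  params:
    - name: repo-url
      type: string
    - name: new-tag
      type: string
  workspaces:
    - name: source            # shared checkout, backed by the volumeClaimTemplate
    - name: git-credentials   # the Secret created in the next step
  tasks: []                   # clone → validate → bump → push; see the solution branch
```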
**`apps/ci/pipeline.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: workshop-pipeline
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "6"
spec:
project: workshop
source:
repoURL: JOUW_FORK_URL
targetRevision: HEAD
path: manifests/ci/pipeline
destination:
server: https://kubernetes.default.svc
namespace: tekton-pipelines
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
```bash
git add apps/ci/pipeline.yaml manifests/ci/pipeline/
git commit -m "feat: add pipeline resources"
git push
```
---
### 3. Set up Git credentials

The pipeline needs to push to your fork. Create a GitHub Personal Access Token (PAT) with **repo** scope, then run:
```bash
./scripts/set-git-credentials.sh <your-github-username> <your-pat>
```
This creates a Kubernetes Secret in the cluster — **the PAT never ends up in Git**.
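If you're curious what the script produces: Tekton's Git authentication convention is a `kubernetes.io/basic-auth` Secret annotated with the Git host it applies to. The result presumably looks like this (the script's exact output may differ):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  namespace: tekton-pipelines
  annotations:
    tekton.dev/git-0: https://github.com   # host these credentials apply to
type: kubernetes.io/basic-auth
stringData:
  username: <your-github-username>
  password: <your-pat>
```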
---
### 4. Trigger the pipeline

Apply the PipelineRun (the only `kubectl apply` you'll run in this exercise):
```bash
kubectl apply -f manifests/ci/pipeline/pipelinerun.yaml
```
Watch it run:
```bash
kubectl get pipelinerun -n tekton-pipelines -w
```
Or follow the logs with tkn (Tekton CLI, optional):
```bash
# If tkn is installed:
tkn pipelinerun logs -f -n tekton-pipelines bump-podinfo-to-670
```
Or follow individual TaskRun pods:
```bash
kubectl get pods -n tekton-pipelines -w
# Once a pod appears, you can:
kubectl logs -n tekton-pipelines <pod-name> -c step-bump --follow
```
The PipelineRun should complete in ~23 minutes.
---
### 5. Verify the commit
```bash
# Inside the VM — check the latest commit on the remote
git fetch origin
git log origin/main --oneline -3
# You should see something like:
# a1b2c3d chore(pipeline): bump podinfo to 6.7.0
# ...
```
Or check GitHub directly in your browser.
---
### 6. Watch ArgoCD sync
Click **Refresh** on the **podinfo** application in the ArgoCD UI, or wait for the automatic poll interval.
ArgoCD will detect that `manifests/apps/podinfo/deployment.yaml` changed and start a rolling update.
```bash
kubectl rollout status deployment/podinfo -n podinfo
# deployment "podinfo" successfully rolled out
```
---
### 7. Confirm in the browser
Open **http://podinfo.192.168.56.200.nip.io** — you should now see **version 6.7.0**.
```bash
curl http://podinfo.192.168.56.200.nip.io | jq .version
# "6.7.0"
```
The full loop is complete.
---
## Expected outcome
```
PipelineRun STATUS: Succeeded
deployment.yaml image tag: 6.7.0
podinfo UI version: 6.7.0
```
---
## Re-running the pipeline
The `PipelineRun` name must be unique. To run again:
```bash
# Option A: delete and re-apply with same name
kubectl delete pipelinerun bump-podinfo-to-670 -n tekton-pipelines
kubectl apply -f manifests/ci/pipeline/pipelinerun.yaml
# Option B: create a new run with a different name
kubectl create -f manifests/ci/pipeline/pipelinerun.yaml
```
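Option B works because `kubectl create` (unlike `apply`) honours `generateName`. If you switch the PipelineRun metadata accordingly, every `create` yields a fresh, uniquely suffixed run:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: bump-podinfo-   # the API server appends a random suffix
  namespace: tekton-pipelines
```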
---
## Troubleshooting
| Symptom | Fix |
|---------|-----|
| PipelineRun stuck in "Running" forever | `kubectl describe pipelinerun -n tekton-pipelines bump-podinfo-to-670` |
| `git-credentials` Secret not found | Run `./scripts/set-git-credentials.sh` first |
| Push fails: 403 Forbidden | PAT has insufficient scope — needs `repo` write access |
| Push fails: remote already has this commit | Image tag already at 6.7.0; the pipeline is idempotent (nothing to push) |
| ArgoCD not syncing after push | Click **Refresh** in the UI; default poll interval is 3 min |
| Validate task fails | Check `kubectl apply --dry-run=client -f manifests/apps/podinfo/` manually |
---
## What's next
In Exercise 05 you'll look at the full picture of what you built, optionally trigger another upgrade cycle, and experiment with drift detection.
If you have time after that, try the **Bonus Exercise 06**: deploy Prometheus + Grafana and see cluster and podinfo metrics in a live dashboard.


# Exercise 05 — App Upgrade and Reflection
**Time**: ~15 min (often done as the final step of Exercise 04)
**Goal**: Reflect on the complete GitOps loop you've built and optionally run another upgrade cycle.
---
## What you built
You now have a fully functioning GitOps platform:
```
Git repo (single source of truth)
│ ArgoCD polls every 3 min (or on Refresh)
ArgoCD (GitOps engine)
│ detects drift between Git and cluster
Kubernetes cluster
│ MetalLB assigns LAN IP to Ingress-Nginx
Ingress-Nginx (routes by hostname)
├─► podinfo.192.168.56.200.nip.io → podinfo Deployment
└─► argocd.192.168.56.200.nip.io → ArgoCD UI
```
And a CI pipeline that closes the loop:
```
Tekton PipelineRun
├─ validate manifests
├─ bump image tag in deployment.yaml
└─ git push
ArgoCD detects commit → syncs → rolling update
```
---
## Reflect: What makes this "GitOps"?
1. **Git is the source of truth** — the cluster state is always derived from this repo
2. **No manual kubectl apply** — all cluster changes go through Git commits
3. **Drift detection** — if someone manually changes something in the cluster, ArgoCD reverts it
4. **Audit trail** — every cluster change has a corresponding Git commit
5. **Rollback = git revert** — no special tooling needed
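Point 5 in a throwaway repo — a revert is just another commit, which ArgoCD would sync like any other (paths and messages here are illustrative):

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
git config user.email demo@example.com && git config user.name demo

echo 'image: podinfo:6.6.2' > deployment.yaml
git add deployment.yaml && git commit -qm "deploy 6.6.2"

echo 'image: podinfo:6.7.0' > deployment.yaml
git commit -aqm "bump to 6.7.0"

# rollback = revert the bump commit; no special tooling
git revert -n HEAD
git commit -qm "revert: back to 6.6.2"
cat deployment.yaml   # → image: podinfo:6.6.2
```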
---
## Try it: a manual downgrade

If the pipeline already bumped podinfo to `6.7.0`, try a manual downgrade to see the loop in reverse:
```bash
# Edit the image tag back to 6.6.2
vim manifests/apps/podinfo/deployment.yaml
# Change: ghcr.io/stefanprodan/podinfo:6.7.0
# To: ghcr.io/stefanprodan/podinfo:6.6.2
git add manifests/apps/podinfo/deployment.yaml
git commit -m "chore: downgrade podinfo to 6.6.2 for demo"
git push
```
Watch ArgoCD sync in the UI, then verify:
```bash
curl http://podinfo.192.168.56.200.nip.io | jq .version
# "6.6.2"
```
Now upgrade again via the pipeline:
```bash
kubectl delete pipelinerun bump-podinfo-to-670 -n tekton-pipelines
kubectl apply -f manifests/ci/pipeline/pipelinerun.yaml
```
---
## Try it: drift detection
ArgoCD runs with `selfHeal: true`, which means it automatically reverts manual cluster changes. Try bypassing GitOps:
```bash
# Change the image tag directly in the cluster (not via Git)
kubectl set image deployment/podinfo podinfo=ghcr.io/stefanprodan/podinfo:6.5.0 -n podinfo
```
Watch the ArgoCD UI — within seconds you'll see the `podinfo` app go **OutOfSync**, then ArgoCD reverts it back to whatever tag is in Git. The cluster drifted; GitOps corrected it.
---
## Summary
| Component | Purpose | How deployed |
|-----------|---------|-------------|
| k3s | Kubernetes | Vagrantfile |
| ArgoCD | GitOps engine | bootstrap.sh → self-manages |
| MetalLB | LoadBalancer IPs | ArgoCD |
| Ingress-Nginx | HTTP routing | ArgoCD |
| podinfo | Demo app | ArgoCD |
| Tekton | CI pipeline | ArgoCD |
---
## What's next
If you have time, try **Exercise 06 (Bonus)**: deploy Prometheus + Grafana and
observe your cluster and podinfo metrics in a live dashboard.
Otherwise, join the **final presentation** for a discussion on:
- Why GitOps in production
- What comes next: Vault, ApplicationSets, Argo Rollouts


# Exercise 06 (Bonus) — Monitoring: Prometheus + Grafana
**Time**: ~60 min
**Goal**: Deploy a full observability stack via ArgoCD and explore cluster + application metrics in Grafana.
---
## What you'll learn
- How to deploy a complex multi-component stack (kube-prometheus-stack) purely via GitOps
- How Prometheus scrapes metrics from Kubernetes and applications
- How to navigate Grafana dashboards for cluster and pod-level metrics
---
## Prerequisites
Exercises 0103 complete. Ingress-Nginx is running and nip.io URLs are reachable from your laptop.
> **Note**: This exercise adds ~700 MB of additional memory usage. It works on an 8 GB VM but may be slow. If the VM feels sluggish, reduce `replicas` or skip the Prometheus `storageSpec` in the values.
---
## Steps
### 1. Create the monitoring Application

Add a values file and an ArgoCD Application for the kube-prometheus-stack Helm chart:
**`manifests/monitoring/values.yaml`**
```yaml
grafana:
adminPassword: workshop123
ingress:
enabled: true
ingressClassName: nginx
hosts:
- grafana.192.168.56.200.nip.io
resources:
requests:
cpu: 100m
memory: 256Mi
prometheus:
prometheusSpec:
resources:
requests:
cpu: 200m
memory: 512Mi
podMonitorSelectorNilUsesHelmValues: false
serviceMonitorSelectorNilUsesHelmValues: false
retention: 6h
retentionSize: "1GB"
storageSpec:
volumeClaimTemplate:
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 2Gi
alertmanager:
enabled: false
kubeStateMetrics:
resources:
requests:
cpu: 50m
memory: 64Mi
nodeExporter:
resources:
requests:
cpu: 50m
memory: 64Mi
```
**`apps/monitoring/prometheus-grafana.yaml`**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: prometheus-grafana
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "10"
spec:
project: workshop
sources:
- repoURL: https://prometheus-community.github.io/helm-charts
chart: kube-prometheus-stack
targetRevision: "68.4.4"
helm:
valueFiles:
- $values/manifests/monitoring/values.yaml
- repoURL: JOUW_FORK_URL
targetRevision: HEAD
ref: values
destination:
server: https://kubernetes.default.svc
namespace: monitoring
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- ServerSideApply=true
```
```bash
git add apps/monitoring/ manifests/monitoring/
git commit -m "feat: Prometheus + Grafana via kube-prometheus-stack"
git push
```
The initial sync takes 5–8 minutes — the kube-prometheus-stack chart is large and installs many CRDs.
---
### 2. Watch the stack come up
```bash
kubectl get pods -n monitoring -w
# You'll see prometheus, grafana, kube-state-metrics, node-exporter pods appear
```
Once all pods are Running:
```bash
kubectl get ingress -n monitoring
# NAME CLASS HOSTS ADDRESS
# grafana nginx grafana.192.168.56.200.nip.io 192.168.56.200
```
---
### 3. Open Grafana
From your laptop: **http://grafana.192.168.56.200.nip.io**
Login: `admin` / `workshop123`
---
### 4. Explore dashboards
kube-prometheus-stack ships with pre-built dashboards. In the Grafana sidebar:
**Dashboards → Browse**
Useful dashboards for this workshop:
| Dashboard | What to look at |
|-----------|----------------|
| **Kubernetes / Compute Resources / Namespace (Pods)** | CPU + memory per pod in `podinfo` namespace |
| **Kubernetes / Compute Resources / Node (Pods)** | Node-level resource view |
| **Node Exporter / Full** | VM-level CPU, memory, disk, network |
---
### 5. Generate some load on podinfo
In a new terminal, run a simple load loop:
```bash
# Inside the VM
while true; do curl -s http://podinfo.192.168.56.200.nip.io > /dev/null; sleep 0.2; done
```
Switch back to Grafana → **Kubernetes / Compute Resources / Namespace (Pods)** and set the namespace to `podinfo`. You should see CPU usage climb for the podinfo pod.
---
### 6. Explore the GitOps aspect
Every configuration change to the monitoring stack goes through Git.
Try changing the Grafana admin password:
```bash
vim manifests/monitoring/values.yaml
# Change: adminPassword: workshop123
# To: adminPassword: supersecret
git add manifests/monitoring/values.yaml
git commit -m "chore(monitoring): update grafana admin password"
git push
```
Watch ArgoCD sync the Helm release, then try logging into Grafana with the new password.
---
## Expected outcome
- Grafana accessible at **http://grafana.192.168.56.200.nip.io**
- Prometheus scraping cluster metrics
- Pre-built Kubernetes dashboards visible and populated
---
## Troubleshooting
| Symptom | Fix |
|---------|-----|
| Pods in Pending state | VM may be low on memory; `kubectl describe pod` to confirm |
| Grafana 502 from Nginx | Grafana pod not ready yet; wait and retry |
| No data in dashboards | Prometheus needs ~2 min to scrape first metrics; wait and refresh |
| CRD conflict on sync | First sync installs CRDs; second sync applies resources — retry |
---
## Going further (at home)
- Add a podinfo `ServiceMonitor` so Prometheus scrapes podinfo's `/metrics` endpoint
- Create a custom Grafana dashboard for podinfo request rate and error rate
- Alert on high memory usage with Alertmanager (enable it in `values.yaml`)
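The first bullet is within easy reach: the monitoring values set `serviceMonitorSelectorNilUsesHelmValues: false`, so Prometheus picks up ServiceMonitors from any namespace. A sketch — the selector label is an assumption; check `kubectl get svc -n podinfo --show-labels`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: podinfo
  namespace: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo      # assumed Service label
  endpoints:
    - port: http        # podinfo exposes Prometheus metrics on /metrics
      path: /metrics
```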


# Final Talk — GitOps in Practice
**Duration**: ~20 min + Q&A
**Format**: Slides or whiteboard; optional live demo
---
## 1. What We Built (7 min)
### Architecture diagram
```
┌─────────────────────────────────────────────────────────┐
Your Laptop │
│ │
│ Browser ──────────────────────────────────────────► │
│ podinfo.192.168.56.200.nip.io │
│ │
│ ┌──────────────────┐ ┌───────────────────────────┐ │
│ │ Ingress-Nginx │ │ ArgoCD │ │
│ │ (LB: .200) │ │ watches this Git repo │ │
│ └──────┬───────────┘ └───────────┬───────────────┘ │
│ │ │ syncs
│ ▼ ▼ │
│ ┌──────────────────┐ ┌───────────────────────────┐ │
│ │ podinfo │ │ MetalLB │ │
│ │ (Deployment) │ │ (assigns LAN IPs) │ │
│ └──────────────────┘ └───────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────┐ │
└─────────────────────────────────────────────────────────┘
```
### The GitOps loop (narrate this)
1. Everything in the cluster is defined in **this Git repo**
2. ArgoCD watches the repo and reconciles the cluster to match
3. The Tekton pipeline is itself deployed by ArgoCD — and it pushes commits that ArgoCD then syncs
4. The only `kubectl apply` you ran today was: bootstrap ArgoCD + trigger PipelineRun
### Stack recap
| Component | Role |
|-----------|------|
| k3s | Single-binary Kubernetes |
| ArgoCD | GitOps engine (App-of-Apps) |
| MetalLB | Bare-metal LoadBalancer |
| Ingress-Nginx | HTTP routing by hostname |
| Tekton | CI pipeline (in-cluster) |
| podinfo | Demo application |
| kube-prometheus-stack | Observability (bonus) |
---
## 2. Why GitOps in Production (8 min)
### The old way: imperative deploys
```bash
# Someone runs this on a Friday afternoon
kubectl set image deployment/api api=company/api:v2.3.1-hotfix
# No review. No audit trail. No way to know who ran it at 16:47.
```
### The GitOps way
```
PR: "bump API to v2.3.1-hotfix"
→ peer review
→ merge
→ ArgoCD syncs
→ deploy happens
→ Git commit IS the audit trail
```
### Key benefits
**Audit trail**: Every cluster change has a Git commit — who, what, when, why.
**Drift detection**: If someone `kubectl apply`s directly, ArgoCD detects the drift and can auto-revert. The cluster always converges to what's in Git.
**Disaster recovery**: The cluster is destroyed? `vagrant up` + `./scripts/bootstrap.sh` + `kubectl apply -f apps/root.yaml` — and ArgoCD recreates everything. Git is the backup.
**Multi-team collaboration**: Developers open PRs to deploy. Ops reviews the manifest changes. No SSH keys to production.
**Rollback**: `git revert <commit>` + `git push`. No special tooling.
### The App-of-Apps pattern (brief)
One root Application manages all other Applications. Adding a new service = adding a single YAML file to `apps/`. The root app picks it up automatically.
```
apps/root.yaml ──manages──► apps/argocd.yaml
```
---
## 3. What's Next (5 min)
### Secrets management
Today: plain Kubernetes Secrets with GitHub PATs.
Production: **Vault + external-secrets-operator**
```
Vault (secret store)
→ external-secrets-operator pulls secrets
creates Kubernetes Secrets
→ ArgoCD syncs everything else
```
### Multi-cluster with ApplicationSets
Today: one cluster, one repo.
Production: 10 clusters, one repo.
```yaml
# ArgoCD ApplicationSet: deploy podinfo to every cluster in a list
generators:
- list:
elements:
      # ...
```
### Progressive delivery
Today: rolling update (all-or-nothing).
Production: **Argo Rollouts** with canary or blue/green strategies.
```
New version → 5% of traffic
→ metrics look good → 20% → 50% → 100%
→ metrics bad → auto-rollback
```
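In Argo Rollouts that ramp is declared as canary steps on a `Rollout` resource, which replaces the Deployment. A sketch with invented pause durations:

```yaml
# Canary strategy sketch for podinfo as an Argo Rollouts Rollout
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: podinfo
  namespace: podinfo
spec:
  replicas: 4
  strategy:
    canary:
      steps:
        - setWeight: 5           # 5% of traffic to the new version
        - pause: {duration: 2m}  # wait while metrics are evaluated
        - setWeight: 20
        - pause: {duration: 2m}
        - setWeight: 50
        - pause: {duration: 2m}  # a failed analysis here triggers auto-rollback
  # selector/template omitted — same shape as the Deployment spec
```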
---
## Optional live demo (~5 min)
Make a one-line change to `manifests/apps/podinfo/deployment.yaml` (e.g. UI color),
push to GitHub, click **Refresh** in ArgoCD, and show the pod restart and new UI.
The audience has already done this — seeing it narrated live makes the GitOps loop land better.
---
## Q&A prompts (if the room is quiet)
- "How would you handle database migrations in a GitOps flow?"
- "What happens if two people push to Git at the same time?"
- "When is GitOps NOT the right tool?" (answer: local dev, scripts, one-off jobs)
- "How do you keep secrets out of Git at scale?"
@ -1,164 +1,128 @@
# VM Setup
Everything runs inside a VirtualBox VM that Vagrant provisions for you. Follow these steps before you start the exercises.
---
## What you need
Do this the day before, not on the morning of the workshop itself; downloads can be slow on conference WiFi.
| Tool       | Version | Download |
|------------|---------|----------|
| VirtualBox | 7.x     | https://www.virtualbox.org/wiki/Downloads |
| Vagrant    | 2.4.x   | https://developer.hashicorp.com/vagrant/downloads |
| Git        | any     | https://git-scm.com/downloads |
**RAM**: The VM uses 8 GB. Your laptop should have at least 12 GB total RAM free.
**Disk**: ~15 GB free (Vagrant box ~1 GB + k3s images ~5 GB + workspace).
> **Apple Silicon (M1/M2/M3/M4)**: VirtualBox 7.1+ supports Apple Silicon — make sure
> to download the **"macOS / Apple Silicon hosts"** build from the VirtualBox download page,
> not the Intel build.

**After installing VirtualBox, reboot your laptop.** VirtualBox installs a kernel extension that only works after a restart.
Quick check: all three must print a version.
```bash
VBoxManage --version && vagrant --version && git --version
```
If `VBoxManage` is not found after installing VirtualBox, reboot your laptop —
VirtualBox installs a kernel extension that requires a restart.
---
## Step 1 — Fork and clone the repo

Fork the repo to your own GitHub account via https://github.com/paulharkink/ops-demo → **Fork**.
```bash
git clone https://github.com/YOUR_USERNAME/ops-demo.git
cd ops-demo
```
---
## Step 2 — Start the VM
```bash
vagrant up
```
The first run takes **10–15 minutes**: Vagrant downloads the Ubuntu 24.04 box, installs
k3s, Helm, and yq, and pre-pulls the workshop container images. After that the VM starts
in a few seconds.
At the end you should see:
```
════════════════════════════════════════════════════════
VM provisioned successfully!
SSH: vagrant ssh
Next step: follow docs/vm-setup.md to verify, then
run scripts/bootstrap.sh to install ArgoCD
════════════════════════════════════════════════════════
```
---
## Step 3 — SSH into the VM
```bash
vagrant ssh
cd /vagrant
```
You are now inside the VM. All workshop commands run from here unless stated otherwise.
---
## Step 4 — Verify the setup
```bash
# 1. k3s is running
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# ops-demo Ready control-plane,master Xm v1.31.x+k3s1
# 2. Helm is available
helm version
# version.BuildInfo{Version:"v3.16.x", ...}
# 3. The workshop repo is mounted at /vagrant
ls /vagrant
# apps/ docs/ manifests/ scripts/ Vagrantfile README.md
# 4. The host-only interface has the right IP
ip addr show eth1
# inet 192.168.56.10/24
```
---
## Step 5 — Verify host connectivity

From your **laptop** (not the VM), confirm you can reach the VM's host-only IP:
```bash
ping 192.168.56.10
```
If this times out, check that the VirtualBox host-only network adapter exists:
```bash
VBoxManage list hostonlyifs
# Expected: vboxnet0 with IP 192.168.56.1
```
If no host-only adapter exists, create one:
```bash
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
```
Then re-run `vagrant up`.
---
## Useful Vagrant commands

```bash
vagrant halt     # graceful shutdown (preserves state)
vagrant up       # start again
vagrant suspend  # pause (faster resume, uses disk space)
vagrant resume   # resume from suspend
vagrant destroy  # delete the VM entirely (start fresh)
```
---
## Troubleshooting
| Symptom | Fix |
|---------|-----|
| `vagrant up`: "No usable default provider" | VirtualBox is not installed or the laptop was not rebooted after install |
| `vagrant up` fails: VT-x/AMD-V not enabled | Enable virtualisation in BIOS/UEFI settings |
| `vagrant up` fails: port conflict | Another VM may be using the host-only range; stop it |
| `kubectl get nodes` shows NotReady | k3s is still starting; wait 30–60 s |
| `/vagrant` is empty inside the VM | Shared-folder issue; try `vagrant reload` |
| Very slow image pulls | Images should be pre-pulled; if not, wait 5–10 min |
@ -1,42 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: podinfo
spec:
replicas: 1
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
spec:
containers:
- name: podinfo
image: ghcr.io/stefanprodan/podinfo:6.6.2
ports:
- containerPort: 9898
name: http
env:
- name: PODINFO_UI_COLOR
value: "#6C48C5"
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 5
periodSeconds: 30
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
memory: 128Mi
@ -1,20 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: podinfo
namespace: podinfo
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: podinfo.192.168.56.200.nip.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: podinfo
port:
name: http
@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: podinfo
@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
namespace: podinfo
spec:
selector:
app: podinfo
ports:
- port: 80
targetPort: 9898
name: http
@ -1,173 +0,0 @@
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: gitops-image-bump
namespace: tekton-pipelines
spec:
description: |
Validates manifests, bumps the podinfo image tag in deployment.yaml,
and pushes the commit back to the ops-demo repo.
ArgoCD then detects the change and rolls out the new image.
params:
- name: repo-url
type: string
description: URL of the ops-demo git repository
default: https://github.com/paulharkink/ops-demo.git
- name: new-tag
type: string
description: New podinfo image tag to set (e.g. 6.7.0)
default: "6.7.0"
- name: git-user-name
type: string
description: Git author name for the bump commit
default: "Workshop Pipeline"
- name: git-user-email
type: string
description: Git author email for the bump commit
default: "pipeline@workshop.local"
workspaces:
- name: source
description: Workspace for cloning the repo
- name: git-credentials
description: Secret with GitHub username + PAT (basic-auth)
tasks:
# ── Task 1: Clone the repo ─────────────────────────────────────────────
- name: clone
taskSpec:
workspaces:
- name: source
- name: git-credentials
params:
- name: repo-url
- name: git-user-name
- name: git-user-email
steps:
- name: clone
image: alpine/git:latest
workingDir: /workspace/source
env:
- name: GIT_USERNAME
valueFrom:
secretKeyRef:
name: git-credentials
key: username
- name: GIT_PASSWORD
valueFrom:
secretKeyRef:
name: git-credentials
key: password
script: |
#!/bin/sh
set -eu
# Inject credentials into the clone URL
REPO=$(echo "$(params.repo-url)" | sed "s|https://|https://${GIT_USERNAME}:${GIT_PASSWORD}@|")
git clone "${REPO}" .
git config user.name "$(params.git-user-name)"
git config user.email "$(params.git-user-email)"
echo "Cloned $(git log --oneline -1)"
workspaces:
- name: source
workspace: source
- name: git-credentials
workspace: git-credentials
params:
- name: repo-url
value: $(params.repo-url)
- name: git-user-name
value: $(params.git-user-name)
- name: git-user-email
value: $(params.git-user-email)
# ── Task 2: Validate manifests (dry-run) ──────────────────────────────
- name: validate
runAfter: [clone]
taskSpec:
workspaces:
- name: source
steps:
- name: dry-run
image: bitnami/kubectl:latest
workingDir: /workspace/source
script: |
#!/bin/sh
set -eu
echo "Running kubectl dry-run on manifests/apps/podinfo/"
kubectl apply --dry-run=client -f manifests/apps/podinfo/
echo "Validation passed."
workspaces:
- name: source
workspace: source
# ── Task 3: Bump image tag ─────────────────────────────────────────────
- name: bump-image-tag
runAfter: [validate]
taskSpec:
workspaces:
- name: source
params:
- name: new-tag
steps:
- name: bump
image: mikefarah/yq:4.44.3
workingDir: /workspace/source
script: |
#!/bin/sh
set -eu
FILE="manifests/apps/podinfo/deployment.yaml"
CURRENT=$(yq '.spec.template.spec.containers[0].image' "${FILE}")
echo "Current image: ${CURRENT}"
yq -i '.spec.template.spec.containers[0].image = "ghcr.io/stefanprodan/podinfo:$(params.new-tag)"' "${FILE}"
UPDATED=$(yq '.spec.template.spec.containers[0].image' "${FILE}")
echo "Updated image: ${UPDATED}"
workspaces:
- name: source
workspace: source
params:
- name: new-tag
value: $(params.new-tag)
# ── Task 4: Commit and push ────────────────────────────────────────────
- name: git-commit-push
runAfter: [bump-image-tag]
taskSpec:
workspaces:
- name: source
- name: git-credentials
params:
- name: new-tag
steps:
- name: push
image: alpine/git:latest
workingDir: /workspace/source
env:
- name: GIT_USERNAME
valueFrom:
secretKeyRef:
name: git-credentials
key: username
- name: GIT_PASSWORD
valueFrom:
secretKeyRef:
name: git-credentials
key: password
script: |
#!/bin/sh
set -eu
git add manifests/apps/podinfo/deployment.yaml
git commit -m "chore(pipeline): bump podinfo to $(params.new-tag)"
# Inject credentials for push
REMOTE_URL=$(git remote get-url origin | sed "s|https://|https://${GIT_USERNAME}:${GIT_PASSWORD}@|")
git push "${REMOTE_URL}" HEAD:main
echo "Pushed commit: $(git log --oneline -1)"
workspaces:
- name: source
workspace: source
- name: git-credentials
workspace: git-credentials
params:
- name: new-tag
value: $(params.new-tag)
@ -1,32 +0,0 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
# Change the name (e.g. bump-to-670-run2) each time you trigger the pipeline,
# or delete the old PipelineRun first — names must be unique.
name: bump-podinfo-to-670
namespace: tekton-pipelines
spec:
pipelineRef:
name: gitops-image-bump
taskRunTemplate:
serviceAccountName: pipeline-runner
params:
- name: repo-url
value: https://github.com/paulharkink/ops-demo.git
- name: new-tag
value: "6.7.0"
- name: git-user-name
value: "Workshop Pipeline"
- name: git-user-email
value: "pipeline@workshop.local"
workspaces:
- name: source
volumeClaimTemplate:
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 1Gi
- name: git-credentials
secret:
secretName: git-credentials
@ -1,8 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: pipeline-runner
namespace: tekton-pipelines
# The git-credentials Secret is NOT in this repo (it contains a real GitHub token).
# Create it before running the pipeline:
# ./scripts/set-git-credentials.sh <github-user> <github-pat>
@ -1,6 +0,0 @@
# Installs Tekton Pipelines v0.65.1 via kustomize remote reference.
# ArgoCD applies this with its built-in kustomize support.
# Images are pre-pulled by the Vagrantfile, so this only needs network to
# fetch the manifest once (not the images).
resources:
- https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.65.1/release.yaml
@ -1,56 +0,0 @@
# kube-prometheus-stack Helm values (workshop — lightweight config)
# Chart: prometheus-community/kube-prometheus-stack 68.x
grafana:
adminPassword: workshop123
ingress:
enabled: true
ingressClassName: nginx
hosts:
- grafana.192.168.56.200.nip.io
# Lightweight for a workshop VM
resources:
requests:
cpu: 100m
memory: 256Mi
prometheus:
prometheusSpec:
resources:
requests:
cpu: 200m
memory: 512Mi
# Scrape everything in the cluster
podMonitorSelectorNilUsesHelmValues: false
serviceMonitorSelectorNilUsesHelmValues: false
# Short retention for a workshop
retention: 6h
retentionSize: "1GB"
storageSpec:
volumeClaimTemplate:
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 2Gi
alertmanager:
enabled: false # not needed for the workshop
# Reduce resource footprint
kubeStateMetrics:
resources:
requests:
cpu: 50m
memory: 64Mi
nodeExporter:
resources:
requests:
cpu: 50m
memory: 64Mi
@ -1,17 +0,0 @@
# Ingress-Nginx Helm values
# The controller's LoadBalancer service will get 192.168.56.200 from MetalLB.
# All workshop ingresses use IngressClass "nginx".
controller:
ingressClassResource:
name: nginx
default: true
service:
type: LoadBalancer
# Request a specific IP so docs can reference it reliably
loadBalancerIP: "192.168.56.200"
resources:
requests:
cpu: 100m
memory: 128Mi
@ -1,20 +0,0 @@
# MetalLB L2 configuration
# IP pool: 192.168.56.200–192.168.56.220 (VirtualBox host-only subnet)
# First IP (200) will be claimed by Ingress-Nginx LoadBalancer service.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: workshop-pool
namespace: metallb-system
spec:
addresses:
- 192.168.56.200-192.168.56.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: workshop-l2
namespace: metallb-system
spec:
ipAddressPools:
- workshop-pool
@ -1,8 +0,0 @@
# MetalLB Helm values
# No special configuration needed at chart level;
# IP pool is configured via metallb-config.yaml (CRDs).
speaker:
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
@ -1,38 +1,59 @@
#!/usr/bin/env bash
# bootstrap.sh — Install ArgoCD via Helm and generate the root App-of-Apps
# Run this once inside the VM after `vagrant up`.
#
# Usage:
# cd /vagrant
# ./scripts/bootstrap.sh
#
# What it does:
#   1. Detects the URL of your fork from the git remote
#   2. Creates the argocd namespace
#   3. Installs ArgoCD via Helm (manifests/argocd/values.yaml)
#   4. Waits for ArgoCD to be ready
#   5. Applies apps/project.yaml
#   6. Generates apps/root.yaml with your fork URL and applies it
#   7. Prints the admin password and a port-forward hint
set -euo pipefail
ARGOCD_NAMESPACE="argocd"
ARGOCD_CHART_VERSION="7.7.11"   # ArgoCD chart 7.x → ArgoCD v2.13.x
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
echo "══════════════════════════════════════════════"
echo " ops-demo Bootstrap"
echo "══════════════════════════════════════════════"
# ── 1. Detect fork URL ────────────────────────────────────────────────────────
REMOTE_URL=$(git -C "${REPO_ROOT}" remote get-url origin 2>/dev/null || echo "")
if [[ -z "${REMOTE_URL}" ]]; then
  echo "ERROR: no git remote 'origin' found."
  echo "       Clone the repo first: git clone https://github.com/YOUR_USERNAME/ops-demo.git"
exit 1
fi
# Convert SSH to HTTPS if needed (git@github.com:user/repo.git → https://...)
if [[ "${REMOTE_URL}" == git@* ]]; then
REPO_URL=$(echo "${REMOTE_URL}" | sed 's|git@github.com:|https://github.com/|')
else
REPO_URL="${REMOTE_URL}"
fi
# Make sure the URL ends in .git
[[ "${REPO_URL}" == *.git ]] || REPO_URL="${REPO_URL}.git"
echo "→ Detected fork URL: ${REPO_URL}"
# ── 2. Namespace ──────────────────────────────────────────────────────────────
echo "→ Creating namespace: ${ARGOCD_NAMESPACE}"
kubectl create namespace "${ARGOCD_NAMESPACE}" --dry-run=client -o yaml | kubectl apply -f -
# ── 3. Helm install ArgoCD ────────────────────────────────────────────────────
echo "→ Adding the Argo Helm repo"
helm repo add argo https://argoproj.github.io/argo-helm --force-update
helm repo update argo
echo "→ Installing ArgoCD (chart ${ARGOCD_CHART_VERSION})"
helm upgrade --install argocd argo/argo-cd \
--namespace "${ARGOCD_NAMESPACE}" \
--version "${ARGOCD_CHART_VERSION}" \
@ -40,26 +61,53 @@ helm upgrade --install argocd argo/argo-cd \
--wait \
--timeout 5m
# ── 4. Apply the AppProject ───────────────────────────────────────────────────
echo "→ Creating AppProject 'workshop'"
kubectl apply -f "${REPO_ROOT}/apps/project.yaml"
# ── 5. Generate and apply apps/root.yaml ──────────────────────────────────────
echo "→ Generating apps/root.yaml for ${REPO_URL}"
mkdir -p "${REPO_ROOT}/apps"
cat > "${REPO_ROOT}/apps/root.yaml" <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: root
namespace: argocd
spec:
project: workshop
source:
repoURL: ${REPO_URL}
targetRevision: HEAD
path: apps
destination:
server: https://kubernetes.default.svc
namespace: argocd
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
EOF
kubectl apply -f "${REPO_ROOT}/apps/root.yaml"
# ── 6. Print the admin password ───────────────────────────────────────────────
ARGOCD_PASSWORD=$(kubectl -n "${ARGOCD_NAMESPACE}" get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d)
echo ""
echo "══════════════════════════════════════════════"
echo " Bootstrap complete!"
echo ""
echo " ArgoCD admin password: ${ARGOCD_PASSWORD}"
echo ""
echo " To open the ArgoCD UI, run in a new terminal:"
echo " kubectl port-forward svc/argocd-server -n argocd 8080:443"
echo " Then open: https://localhost:8080  (login: admin / ${ARGOCD_PASSWORD})"
echo ""
echo " apps/root.yaml has been generated with your fork URL."
echo " Next step (Exercise 01):"
echo "   git add apps/root.yaml && git commit -m 'feat: add root app' && git push"
echo "══════════════════════════════════════════════"