Production-like Kubernetes environment using k3d + K3s for local development.
One script setup. Clean URLs. No port-forwarding needed.
| Service | Purpose | URL |
|---|---|---|
| Keycloak | Identity & Access Management | http://keycloak.local:30080 |
| Grafana | Monitoring Dashboards | http://grafana.local:30080 |
| Prometheus | Metrics Collection | http://prometheus.local:30080 |
| Vault | Secrets Management | http://vault.local:30080 |
| MinIO | Object Storage (S3-compatible) | http://minio.local:30080 |
| Kafka + UI | Message Streaming & Management | http://kafka-ui.local:30080 |
| Dashboard | Cluster Management | https://localhost:8443 |
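The `.local` hostnames in the table must resolve to localhost. A minimal sketch that prints the required `/etc/hosts` entries (the host list mirrors the table above; `print_hosts_entries` is a hypothetical helper — append its output yourself, e.g. with `sudo tee -a /etc/hosts`):

```shell
#!/usr/bin/env sh
# Print /etc/hosts entries for the ingress hostnames listed in the table above.
print_hosts_entries() {
  for host in keycloak grafana prometheus vault minio kafka-ui; do
    echo "127.0.0.1 ${host}.local"
  done
}

print_hosts_entries
```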
ℹ️ Dashboard Access Note
The Kubernetes Dashboard runs on `https://localhost:8443` via port-forwarding instead of Ingress. This approach provides better security and token validation for local development. Use `./dashboard.sh` to manage port-forwarding easily (start, stop, open browser, get token).
A new CI lint job is configured in `.gitlab-ci.yml` to ensure Helm charts are linted and validated before deployment. Here's how it works:

- Job definition: uses Helm's official Docker image to run the `helm lint` command against the votchain chart.
- Run locally: you can run the same tests with:

```bash
make helm-test
```

Or run individual components:

```bash
make helm-lint       # Just lint the charts
make helm-template   # Validate template generation
```
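For reference, such a job might look like the following sketch (the image tag, stage name, and chart path are assumptions — adapt them to the actual pipeline):

```yaml
# Hypothetical lint job sketch; image tag and chart path are assumptions.
helm-lint:
  stage: test
  image: alpine/helm:3.14.0
  script:
    - helm lint charts/votchain
    - helm template charts/votchain > /dev/null   # fail fast on template errors
```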
```bash
# 1. Install everything
chmod +x install.sh && ./install.sh

# 2. Setup clean URLs
chmod +x deploy-ingress.sh && ./deploy-ingress.sh
```

Use the following commands to stop & start the cluster:

```bash
k3d cluster stop dev-cluster
k3d cluster start dev-cluster
```
PostgreSQL Access:
- CLI: `kubectl exec -it postgresql-0 -n storage -- psql -U postgres`
- Connection from localhost: `kubectl port-forward -n storage svc/postgresql 5432:5432`
- From cluster: `postgresql://myapp:PASSWORD@postgresql.storage.svc.cluster.local:5432/myapp`

Redis Access:
- CLI: `kubectl exec -it redis-master-0 -n storage -- redis-cli`
- Connection URL: `redis://:$(cat redis_password.txt)@localhost:30379`
- From cluster: `redis://:PASSWORD@redis-master.storage.svc.cluster.local:6379`

Services exposed on NodePort — PostgreSQL: 30432, Redis: 30379.
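The NodePort connection URL above can be assembled in a script. A minimal sketch, assuming `redis_password.txt` holds the password saved at install time (`compose_redis_url` is a hypothetical helper, not part of the repo):

```shell
#!/usr/bin/env sh
# Hypothetical helper: build the Redis NodePort URL from the saved password file.
compose_redis_url() {
  password="$(cat "$1")"
  echo "redis://:${password}@localhost:30379"
}

# Example usage with a throwaway password file:
echo "secret123" > /tmp/redis_password.txt
compose_redis_url /tmp/redis_password.txt   # → redis://:secret123@localhost:30379
```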
```bash
# Check everything is running
kubectl get pods -A

# View all ingress routes
kubectl get ingress -A

# Dashboard management
./dashboard.sh start    # Start Dashboard port-forward
./dashboard.sh open     # Open Dashboard in browser
./dashboard.sh token    # Get access token
./dashboard.sh status   # Check Dashboard status

# Cleanup
k3d cluster delete dev-cluster
```
```bash
# Monitor resource usage
kubectl top nodes
kubectl describe nodes
```

- Start: `./install.sh` → select the services to install
- Access: `./deploy-ingress.sh` → get clean URLs
- Develop: use the supporting services (auth, monitoring, storage)
- Deploy: add your own apps to the cluster (examples to be added)
- Cleanup: `k3d cluster delete dev-cluster` or `./uninstall.sh`
Ingress not working?

```bash
kubectl get svc -n ingress-nginx
grep "local" /etc/hosts
```

Service down?

```bash
kubectl logs -n <namespace> <pod-name>
kubectl describe pod <pod-name> -n <namespace>
```

- Namespaces and Argo CD:
  - Namespaces are defined for `argocd`, `storage`, `messaging`, `security`, `monitoring`, and `vault`.
  - A default Argo CD project in `local-k3d.yaml` handles application management.
- GitLab CI configuration:
  - The GitLab CI script creates a directory for each branch slug and sets up Helm values dynamically from the branch name.
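As an illustration, a per-branch values setup might look like this sketch (`CI_COMMIT_REF_SLUG` is a built-in GitLab CI variable, but the job name, stage, and directory layout are assumptions):

```yaml
# Hypothetical job sketch: one Helm values directory per branch slug.
prepare-branch-values:
  stage: build
  script:
    - mkdir -p "helm-values/${CI_COMMIT_REF_SLUG}"
    - |
      cat > "helm-values/${CI_COMMIT_REF_SLUG}/values.yaml" <<EOF
      image:
        tag: "${CI_COMMIT_REF_SLUG}"
      EOF
  artifacts:
    paths:
      - helm-values/
```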
- Ingress and ServiceAccount management:
  - Ingress configurations for the services appear to follow the pattern discussed above.
  - The argo-cd values file also references creating service accounts and RBAC.
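For context, a service account plus RBAC binding of the kind referenced there typically looks like the following plain-manifest sketch (names and the target namespace are assumptions; in this setup they would be rendered from the argo-cd values file rather than applied by hand):

```yaml
# Hypothetical ServiceAccount + RoleBinding sketch; names are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer
  namespace: argocd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-edit
  namespace: argocd
subjects:
  - kind: ServiceAccount
    name: app-deployer
    namespace: argocd
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```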