Conversation
Introduces Kustomize patches to tailor HorizontalPodAutoscaler (HPA) and Deployment resource settings (CPU/memory requests and limits) for the development and staging environments, overriding the base defaults.

Development Overlay (`apps/contextapi/overlays/development`):
- HPA: Patched to `minReplicas: 1`, `maxReplicas: 1`.
- Deployment (`node` container):
  - Requests: `cpu: 100m`, `memory: 100Mi`
  - Limits: `cpu: 200m`, `memory: 200Mi`

Staging Overlay (`apps/contextapi/overlays/staging`):
- HPA: Patched to `minReplicas: 1`, `maxReplicas: 2`.
- Deployment (`node` container):
  - Requests: `cpu: 150m`, `memory: 150Mi`
  - Limits: `cpu: 300m`, `memory: 300Mi`

These changes allow for more appropriate resource allocation and scaling behavior in non-production environments, optimizing resource usage and cost while keeping production-like configuration in the production overlay (which continues to use the base HPA and Deployment settings unless patched separately).
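For reference, the development overlay changes described above could look roughly like the sketch below, using inline strategic-merge patches in the overlay's `kustomization.yaml`. This is illustrative only: the resource name `contextapi` and the exact patch layout are assumptions, and the PR may organise the patches into separate files instead.

```yaml
# apps/contextapi/overlays/development/kustomization.yaml (illustrative sketch;
# the resource name "contextapi" is an assumption, not taken from the PR)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  # Pin the HPA to a single replica in development.
  - target:
      kind: HorizontalPodAutoscaler
      name: contextapi
    patch: |-
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: contextapi
      spec:
        minReplicas: 1
        maxReplicas: 1
  # Shrink the "node" container's requests/limits for development.
  - target:
      kind: Deployment
      name: contextapi
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: contextapi
      spec:
        template:
          spec:
            containers:
              - name: node
                resources:
                  requests:
                    cpu: 100m
                    memory: 100Mi
                  limits:
                    cpu: 200m
                    memory: 200Mi
```

The staging overlay would be identical apart from `maxReplicas: 2` and the 150m/150Mi request and 300m/300Mi limit values.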
gzur
left a comment
Environments
We should be trying this out on dev / staging - seeing as we don't have those yet, I think we should make a point of getting this working on the staging cluster, since that is in the works anyway, and having a target like "get the contextapi running on staging" sounds like a super nice concrete milestone.
Template consolidation
The current PR duplicates a lot of YAML across dev/staging environments, with identical ConfigMaps and near-identical Ingress files that only differ in hostname/secret names.
Using Kustomize `configMapGenerator` / patches / replacements with environment-specific variables would reduce this overhead and create a single source of truth for shared configuration, lessening the chance of configuration drift between environments (for example, the 49-line `contextapi-config.yaml` could become a 5-line `configMapGenerator` entry in each overlay's `kustomization.yaml`).
Ideally, you do not want to be declaring new resources wholesale in overlays.
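For example, a staging overlay could replace its full ConfigMap manifest with a generator entry along these lines (a sketch; the literal keys and values are placeholders rather than the real contextapi configuration, and `behavior: merge` assumes the base also declares the ConfigMap through a generator):

```yaml
# apps/contextapi/overlays/staging/kustomization.yaml (sketch with placeholder values)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
configMapGenerator:
  - name: contextapi-config
    behavior: merge   # only override the environment-specific keys from the base
    literals:
      - INGRESS_HOST=contextapi.staging.example.net
      - LOG_LEVEL=debug
```

The shared keys stay in the base, so each overlay only carries its environment-specific deltas instead of a near-complete copy of the file.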
Network policy
- The ingress selector is wrong.
- Is the `contextapi` the only in-cluster caller?
I would be hesitant to roll it out to prod as-is without extensive testing.
```yaml
- from:
    - podSelector:
        matchLabels:
          app: nginx # Placeholder: Label for NGINX ingress pods
```
This selector is wrong; it should be `app.kubernetes.io/name: rke2-ingress-nginx`.
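A corrected ingress rule might look roughly like the sketch below. The `kube-system` namespace selector is an assumption (that is where RKE2 typically runs its bundled ingress-nginx); the actual labels and namespace can be confirmed with `kubectl get pods -A --show-labels`. Note that a bare `podSelector` only matches pods in the policy's own namespace, so a `namespaceSelector` is needed when the ingress controller lives elsewhere.

```yaml
# Sketch of the corrected ingress rule only; the rest of the NetworkPolicy spec
# (podSelector for the contextapi pods, policyTypes, ports) is omitted.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system   # assumption: RKE2 ingress runs here
        podSelector:
          matchLabels:
            app.kubernetes.io/name: rke2-ingress-nginx
```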