A local development environment for testing CloudNativePG on Kubernetes using Kind (Kubernetes in Docker). This sandbox provides a safe space to experiment with PostgreSQL clusters in a Kubernetes environment, perfect for learning and testing before deploying to Azure Kubernetes Service (AKS).
Before you begin, make sure you have these tools installed on your system. They're essential for running the sandbox environment:
- Podman: For container management
- kubectl: To interact with Kubernetes clusters
- helm: For package management
- kind: To create local Kubernetes clusters
- curl: For downloading files and making HTTP requests
- jq: For processing JSON data
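Before running the setup, a quick shell check like the following (a minimal sketch, not part of the repository) confirms that all of the tools above are available on your PATH:

```bash
# Check that every prerequisite is installed and visible on PATH
for tool in podman kubectl helm kind curl jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK:      $tool ($(command -v "$tool"))"
  else
    echo "MISSING: $tool"
  fi
done
```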
Get up and running quickly with these simple steps. The setup script will handle most of the configuration for you:
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/cloudnative-pg-aks-sandbox.git
  cd cloudnative-pg-aks-sandbox
  ```

- Run the setup script:

  ```bash
  ./setup.sh
  ```
The script will:
- Create a Kind cluster
- Set up storage and security components
- Deploy PostgreSQL clusters (primary and replica)
- Run validation tests
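Once the script finishes, you can sanity-check the result with kubectl. The exact resource names depend on the manifests, but the postgres-demo namespace and the CloudNativePG Cluster resources should be visible along these lines:

```bash
# Nodes created by Kind
kubectl get nodes

# PostgreSQL pods and CloudNativePG Cluster resources in the demo namespace
kubectl get pods -n postgres-demo
kubectl get clusters.postgresql.cnpg.io -n postgres-demo
```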
The project is organized to keep related configurations together while maintaining clear separation of concerns. Here's how the files are arranged:
```
.
├── manifests/
│   ├── certificates/   # Certificate configurations
│   ├── clusters/       # Primary and replica cluster configurations
│   ├── config/
│   │   ├── jobs/       # Kubernetes jobs for setup and testing
│   │   ├── namespaces/ # Namespace configurations (postgres-demo)
│   │   ├── storage/    # Storage configurations (postgres-storage)
│   │   ├── tests/      # Test configurations and scripts
│   │   └── kind-config.yaml
│   └── .DS_Store
├── setup.sh            # Main setup script
├── README.md           # Documentation
├── LICENSE             # License file
└── .gitignore          # Git ignore rules
```
Our Kind cluster is configured to mirror a production AKS environment with the following setup:
- Control Plane Node:
  - Hosts the Kubernetes control plane components
  - Manages cluster orchestration and scheduling
  - Labeled as `topology.kubernetes.io/zone=zone0`
- Worker Node 1:
  - Labeled as `topology.kubernetes.io/zone=zone1`
  - Hosts the primary PostgreSQL instance
  - Simulates an AKS node in Availability Zone 1
- Worker Node 2:
  - Labeled as `topology.kubernetes.io/zone=zone2`
  - Hosts the replica PostgreSQL instance
  - Simulates an AKS node in Availability Zone 2
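To confirm the zone labels on each node, kubectl's `-L` flag prints a label's value as an extra column:

```bash
# Show every node together with its simulated availability zone
kubectl get nodes -L topology.kubernetes.io/zone
```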
The postgres-storage storage class is configured to simulate Azure's locally redundant storage (LRS). For more details on storage configuration, see the CloudNativePG Storage documentation:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
parameters:
  fsType: ext4
  path: /var/lib/postgresql/data
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

This configuration ensures that:
- Storage is provisioned on the same node as the pod
- Volumes are created only when pods are scheduled
- Storage respects node affinity rules
- Data persistence is maintained across pod restarts
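Because of `WaitForFirstConsumer`, the PersistentVolumeClaims stay `Pending` until the PostgreSQL pods are scheduled, and each provisioned volume carries node affinity for that pod's node. A quick way to observe this (the PVC names are generated by CloudNativePG and may differ):

```bash
# Storage class settings, claim status, and the placement of each volume
kubectl get storageclass postgres-storage
kubectl get pvc -n postgres-demo
kubectl get pv -o wide
```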
We use cert-manager to handle TLS certificate generation and management for secure PostgreSQL connections. This automates the creation and renewal of certificates for both internal cluster communication and client connections. For more details, see the CloudNativePG Certificates documentation.
Our certificate setup creates a complete chain of trust:
- A self-signed root CA certificate (`ca-cert`) that serves as the foundation of our trust chain
- A CA issuer that uses this root certificate to sign other certificates
- A server certificate (`pg-server-cert`) for PostgreSQL server authentication, valid for 90 days with automatic renewal 15 days before expiry
- A client certificate (`pg-client-cert`) for replication authentication, also valid for 90 days with automatic renewal
Both the primary and replica clusters are configured to use these certificates:
```yaml
certificates:
  serverCASecret: ca-cert               # Root CA for server certificate validation
  serverTLSSecret: pg-server-cert       # Server certificate for client connections
  clientCASecret: ca-cert               # Root CA for client certificate validation
  replicationTLSSecret: pg-client-cert  # Client certificate for replication
```

This configuration enables:
- Secure client-to-server connections using TLS
- Authenticated replication between primary and replica clusters
- Automatic certificate renewal through cert-manager
- Consistent security across all cluster components
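As a rough sketch of how the server certificate's lifetime maps onto a cert-manager `Certificate` resource (90 days is `duration: 2160h`, renewal 15 days before expiry is `renewBefore: 360h`); the issuer name, cluster name, and DNS names below are assumptions, so check manifests/certificates/ for the real values:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pg-server-cert
  namespace: postgres-demo
spec:
  secretName: pg-server-cert           # referenced by serverTLSSecret above
  duration: 2160h                      # 90 days
  renewBefore: 360h                    # renew 15 days before expiry
  usages:
    - server auth
  dnsNames:
    - pg-primary-rw                    # assumed read-write service name
    - pg-primary-rw.postgres-demo.svc
  issuerRef:
    name: ca-issuer                    # assumed name of the CA issuer backed by ca-cert
    kind: Issuer
```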
For PostgreSQL replication, we implement standalone replica clusters as defined in CloudNativePG. This setup provides:
- Read-only replicas for reporting and analytics workloads
- Streaming replication using `pg_basebackup` for initial data sync
- Continuous recovery mode for real-time data replication
- Ability to promote replicas to primary if needed
The replica clusters operate independently from the source cluster, making them ideal for workload isolation while maintaining data consistency.
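A trimmed-down sketch of how such a replica cluster is expressed as a CloudNativePG `Cluster` resource; the names pg-primary and pg-replica are placeholders, and the manifests in manifests/clusters/ are the source of truth:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-replica
  namespace: postgres-demo
spec:
  instances: 1
  bootstrap:
    pg_basebackup:
      source: pg-primary        # initial data sync via pg_basebackup
  replica:
    enabled: true               # keep the cluster in continuous recovery
    source: pg-primary
  externalClusters:
    - name: pg-primary
      connectionParameters:
        host: pg-primary-rw.postgres-demo.svc
        user: streaming_replica
        sslmode: verify-full
      sslKey:
        name: pg-client-cert    # replication client certificate
        key: tls.key
      sslCert:
        name: pg-client-cert
        key: tls.crt
      sslRootCert:
        name: ca-cert
        key: ca.crt
  storage:
    storageClass: postgres-storage
    size: 1Gi
```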
This environment simulates a production-like Kubernetes cluster with multiple nodes and availability zones using Kind. Here's how it works:
- Node Simulation: We create three Kind nodes:
  - One control plane node (simulating a Kubernetes master node)
  - Two worker nodes (simulating production worker nodes)
- Zone Simulation: We simulate Azure availability zones by:
  - Labeling worker nodes with zone-specific labels (`topology.kubernetes.io/zone`)
  - Using node affinity rules to ensure primary and replica PostgreSQL instances run on different nodes
  - This mirrors how AKS distributes workloads across availability zones
- Storage Simulation:
  - Each node has its own local storage path
  - Storage classes are configured to respect node affinity
  - This simulates how AKS handles locally redundant storage (LRS)
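The node layout described above is driven by manifests/config/kind-config.yaml. Conceptually it looks something like the sketch below (the file in the repository may differ in detail), since Kind allows labels to be attached to each node at creation time:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    labels:
      topology.kubernetes.io/zone: zone0
  - role: worker
    labels:
      topology.kubernetes.io/zone: zone1   # hosts the primary PostgreSQL instance
  - role: worker
    labels:
      topology.kubernetes.io/zone: zone2   # hosts the replica PostgreSQL instance
```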
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.