
CloudNativePG AKS Sandbox

A local development environment for testing CloudNativePG using Kind (Kubernetes in Docker). This sandbox provides a safe place to experiment with PostgreSQL clusters in Kubernetes, ideal for learning and testing before deploying to Azure Kubernetes Service (AKS).

Prerequisites

Before you begin, make sure you have these tools installed on your system. They're essential for running the sandbox environment, and a quick check for them is sketched after the list:

  • Podman: For container management
  • kubectl: To interact with Kubernetes clusters
  • helm: For package management
  • kind: To create local Kubernetes clusters
  • curl: For downloading files and making HTTP requests
  • jq: For processing JSON data
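
A quick way to confirm each prerequisite is on your PATH (an illustrative sketch, not part of setup.sh):

for tool in podman kubectl helm kind curl jq; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done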

Quick Start

Get up and running quickly with these simple steps. The setup script will handle most of the configuration for you:

  1. Clone the repository:

    git clone https://github.com/KenKilty/cloudnative-pg-aks-sandbox.git
    cd cloudnative-pg-aks-sandbox
  2. Run the setup script:

    ./setup.sh

The script will do the following (a quick post-run check is sketched after the list):

  • Create a Kind cluster
  • Set up storage and security components
  • Deploy PostgreSQL clusters (primary and replica)
  • Run validation tests
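
Once setup completes, you can confirm the environment looks healthy (assuming the postgres-demo namespace used by this repository's manifests):

kubectl get nodes --show-labels
kubectl get pods -n postgres-demo
kubectl get clusters.postgresql.cnpg.io -n postgres-demo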

Project Structure

The project is organized to keep related configurations together while maintaining clear separation of concerns. Here's how the files are arranged:

.
├── manifests/
│   ├── certificates/      # Certificate configurations
│   ├── clusters/         # Primary and replica cluster configurations
│   ├── config/
│   │   ├── jobs/        # Kubernetes jobs for setup and testing
│   │   ├── namespaces/  # Namespace configurations (postgres-demo)
│   │   ├── storage/     # Storage configurations (postgres-storage)
│   │   ├── tests/       # Test configurations and scripts
│   │   └── kind-config.yaml
│   └── .DS_Store
├── setup.sh             # Main setup script
├── README.md           # Documentation
├── LICENSE             # License file
└── .gitignore         # Git ignore rules

Kind Cluster Configuration

Our Kind cluster is configured to mirror a production AKS environment with the following setup (a sketch of the corresponding kind-config.yaml follows the node list):

Node Configuration

  • Control Plane Node:

    • Hosts the Kubernetes control plane components
    • Manages cluster orchestration and scheduling
    • Labeled as topology.kubernetes.io/zone=zone0
  • Worker Node 1:

    • Labeled as topology.kubernetes.io/zone=zone1
    • Hosts the primary PostgreSQL instance
    • Simulates an AKS node in Availability Zone 1
  • Worker Node 2:

    • Labeled as topology.kubernetes.io/zone=zone2
    • Hosts the replica PostgreSQL instance
    • Simulates an AKS node in Availability Zone 2
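
This layout is driven by the Kind configuration in manifests/config/kind-config.yaml. A minimal sketch of such a file (the actual file in this repository may differ; the zone labels match those described above):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    labels:
      topology.kubernetes.io/zone: zone0
  - role: worker
    labels:
      topology.kubernetes.io/zone: zone1
  - role: worker
    labels:
      topology.kubernetes.io/zone: zone2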

Storage Configuration

The postgres-storage storage class is configured to simulate Azure's locally redundant storage (LRS). For more details on storage configuration, see the CloudNativePG Storage documentation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
parameters:
  fsType: ext4
  path: /var/lib/postgresql/data
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

This configuration ensures that:

  1. Storage is provisioned on the same node as the pod
  2. Volumes are created only when pods are scheduled
  3. Storage respects node affinity rules
  4. Data persistence is maintained across pod restarts
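
The PostgreSQL clusters consume this storage class through the storage section of their Cluster resources. A minimal sketch (the cluster name and volume size are illustrative, not the exact values in manifests/clusters/):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-primary           # illustrative name
  namespace: postgres-demo
spec:
  instances: 1
  storage:
    storageClass: postgres-storage
    size: 1Gi                # illustrative size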

Certificate Management and Replication

We use cert-manager to handle TLS certificate generation and management for secure PostgreSQL connections. This automates the creation and renewal of certificates for both internal cluster communication and client connections. For more details, see the CloudNativePG Certificates documentation.

Our certificate setup creates a complete chain of trust (a cert-manager sketch follows the list):

  1. A self-signed root CA certificate (ca-cert) that serves as the foundation of our trust chain
  2. A CA issuer that uses this root certificate to sign other certificates
  3. A server certificate (pg-server-cert) for PostgreSQL server authentication, valid for 90 days with automatic renewal 15 days before expiry
  4. A client certificate (pg-client-cert) for replication authentication, also valid for 90 days with automatic renewal
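
A sketch of how these resources might be declared with cert-manager (the secret names match those referenced below; the durations reflect the 90-day validity and 15-day renewal window, while the issuer name and DNS names are illustrative):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer            # CA issuer backed by the self-signed root
  namespace: postgres-demo
spec:
  ca:
    secretName: ca-cert
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pg-server-cert
  namespace: postgres-demo
spec:
  secretName: pg-server-cert
  duration: 2160h            # 90 days
  renewBefore: 360h          # renew 15 days before expiry
  usages:
    - server auth
  dnsNames:
    - pg-primary-rw          # illustrative service name
  issuerRef:
    name: ca-issuer
    kind: Issuer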

Both the primary and replica clusters are configured to use these certificates:

certificates:
  serverCASecret: ca-cert        # Root CA for server certificate validation
  serverTLSSecret: pg-server-cert # Server certificate for client connections
  clientCASecret: ca-cert        # Root CA for client certificate validation
  replicationTLSSecret: pg-client-cert # Client certificate for replication

This configuration enables:

  • Secure client-to-server connections using TLS
  • Authenticated replication between primary and replica clusters
  • Automatic certificate renewal through cert-manager
  • Consistent security across all cluster components

For PostgreSQL replication, we implement standalone replica clusters as defined in CloudNativePG. This setup provides:

  • Read-only replicas for reporting and analytics workloads
  • Streaming replication using pg_basebackup for initial data sync
  • Continuous recovery mode for real-time data replication
  • Ability to promote replicas to primary if needed

The replica clusters operate independently of the source cluster, making them ideal for workload isolation while maintaining data consistency.
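
A sketch of how the replica cluster might declare this relationship (resource names are illustrative; the TLS secrets are the ones described in the previous section):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-replica           # illustrative name
  namespace: postgres-demo
spec:
  instances: 1
  bootstrap:
    pg_basebackup:
      source: pg-primary     # initial data sync from the source cluster
  replica:
    enabled: true
    source: pg-primary       # continuous recovery from the source
  externalClusters:
    - name: pg-primary
      connectionParameters:
        host: pg-primary-rw  # read-write service of the source cluster
        user: streaming_replica
        sslmode: verify-ca
      sslCert:
        name: pg-client-cert
        key: tls.crt
      sslKey:
        name: pg-client-cert
        key: tls.key
      sslRootCert:
        name: ca-cert
        key: ca.crt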

Local Node and Zone Simulation

This environment simulates a production-like Kubernetes cluster with multiple nodes and availability zones using Kind. Here's how it works:

  • Node Simulation: We create three Kind nodes:

    • One control plane node (simulating a Kubernetes master node)
    • Two worker nodes (simulating production worker nodes)
  • Zone Simulation: We simulate Azure availability zones by:

    • Labeling worker nodes with zone-specific labels (topology.kubernetes.io/zone)
    • Using node affinity rules to ensure primary and replica PostgreSQL instances run on different nodes (see the sketch after this list)
    • This mirrors how AKS distributes workloads across availability zones
  • Storage Simulation:

    • Each node has its own local storage path
    • Storage classes are configured to respect node affinity
    • This simulates how AKS handles locally redundant storage (LRS)
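
In the cluster manifests, pinning an instance to a simulated zone can be expressed with a node selector on the Cluster resource. A minimal fragment (assuming the zone labels above; the replica cluster would target zone2 instead):

spec:
  affinity:
    nodeSelector:
      topology.kubernetes.io/zone: zone1   # zone2 for the replica cluster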

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
