
Project Vision: Filling the Kubernetes Gap for Self-Hosted Supabase #14

@STRRL

Context: The Self-Hosting Gap

Supabase has ~98K GitHub stars, 5M+ Docker pulls on supabase/postgres, and 4-5 million developers on its cloud platform. Yet the self-hosting story — especially on Kubernetes — remains a significant gap.

Evidence from GitHub Discussion #39820 ("Self-hosting: What's working and what's not"), opened by the Supabase team itself, and broader community signals:

What the community is asking for

| Source | Signal |
| --- | --- |
| Discussion #4907 | Multi-project support — 123 reactions, 52 participants, unresolved for 3+ years |
| Discussion #17876 | "Self-Hosted Supabase: A Missed Opportunity for True Open Source" — 29 reactions |
| Discussion #40583 | "Ensure feature parity between Cloud and Self-Hosted" |
| Discussion #39820 | 100+ comments, multiple users explicitly requesting official Kubernetes support |
| supabase-community/supabase-kubernetes | 721 stars — community Helm chart, not officially supported |

Kubernetes-specific pain points from #39820

  • AntonOfTheWoods (3 years self-hosting on K8s): Started with the now-abandoned Bitnami chart, had to evolve it independently. Criticized lack of cohesion across component projects.
  • jniclas (solo dev, 2 years): Explicitly requested "A kubernetes helm chart that uses CloudNativePG as Postgres DB" — this is their #1 ask.
  • odicis (B2B SaaS startup): Needs "up-to-date, cloud-agnostic Kubernetes templates and clear guidance for production setups" for deploying into enterprise private clouds.
  • kyle-okami (medium startup): Self-hosts edge-runtime on K8s with memory-optimized nodes (4 CPU, 32 GB), running 40+ functions in production.

Notably, the Supabase team has not responded to any Kubernetes-specific requests in that thread.

The existing landscape

| Solution | Approach | Limitation |
| --- | --- | --- |
| Official docker-compose | 11+ services, single-project only | Not Kubernetes-native, no lifecycle management |
| supabase-community/supabase-kubernetes | Helm chart (721 stars) | Static deployment, no operator reconciliation, no multi-project |
| Bitnami Helm chart | Deprecated | Abandoned |

What This Project Solves

supabase-operator addresses the specific gaps identified above:

1. Multi-project support (the #1 community request)

The official docker-compose deploys exactly one Supabase project. The community has been asking for multi-project support since January 2022 (Discussion #4907, 123 reactions). This operator supports multiple SupabaseProject custom resources per namespace, each with independent lifecycle management.
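As a sketch of what multi-project support looks like in practice, two independent projects can live side by side in one namespace. The `apiVersion` group and `spec` fields below are illustrative assumptions, not the operator's confirmed schema:

```yaml
# Illustrative only — API group and spec fields are assumptions,
# not the operator's actual CRD schema.
apiVersion: supabase.example.com/v1alpha1
kind: SupabaseProject
metadata:
  name: team-a
  namespace: tenants
spec:
  database:
    host: team-a-pg-rw.tenants.svc   # each project brings its own Postgres
---
apiVersion: supabase.example.com/v1alpha1
kind: SupabaseProject
metadata:
  name: team-b
  namespace: tenants
spec:
  database:
    host: team-b-pg-rw.tenants.svc
```

Each custom resource is reconciled independently, so upgrading or deleting `team-a` leaves `team-b` untouched.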

2. Kubernetes-native lifecycle management

Unlike Helm charts that are "deploy and forget", the operator provides:

  • Declarative CRD-based deployment — single manifest for the entire Supabase stack
  • Continuous reconciliation — drift detection and self-healing
  • Phased deployment with granular status tracking (11 phases, 15+ conditions)
  • Webhook-based admission validation before resources are created
  • Per-component health tracking and readiness reporting
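The granular status tracking above surfaces through the custom resource's `status` block, following the standard Kubernetes conditions pattern. The phase and condition names below are illustrative assumptions, not the operator's actual values:

```yaml
# Illustrative status block — phase and condition types are assumptions.
status:
  phase: Ready
  conditions:
    - type: DatabaseInitialized
      status: "True"
    - type: AuthReady          # GoTrue
      status: "True"
    - type: RestReady          # PostgREST
      status: "True"
```

This is what lets `kubectl get` and `kubectl wait` report per-component readiness rather than a single opaque pass/fail.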

3. Decoupled external dependencies

The operator deliberately does not manage PostgreSQL or object storage. Users bring their own:

  • PostgreSQL: CloudNativePG, Amazon RDS, Google Cloud SQL, Azure Database, or any Postgres instance
  • S3-compatible storage: MinIO, AWS S3, Cloudflare R2, etc.

This is a deliberate architectural decision. It aligns with the Kubernetes ecosystem where specialized operators (CloudNativePG, Zalando Postgres Operator) handle database lifecycle far better than a Supabase-specific operator ever could. It directly answers jniclas's request in #39820 for CloudNativePG integration.
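A sketch of how BYO dependencies are typically wired in: the project's Supabase components point at an externally managed Postgres service and an S3 endpoint via secret references. Field names here are assumptions for illustration; check the project's actual CRD schema:

```yaml
# Illustrative only — field names are assumptions about how external
# dependencies are referenced; consult the real CRD schema.
spec:
  database:
    host: my-cnpg-cluster-rw.db.svc   # CloudNativePG read-write service
    port: 5432
    credentialsSecretRef:
      name: supabase-db-credentials
  storage:
    s3:
      endpoint: http://minio.storage.svc:9000
      bucket: supabase-storage
      credentialsSecretRef:
        name: supabase-s3-credentials
```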

4. Security-first defaults

  • Non-root containers
  • Read-only root filesystems
  • Dropped capabilities
  • Automatic JWT secret generation
  • Secret validation via admission webhooks
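The first three defaults above map directly onto the standard container-level Kubernetes `securityContext`. A sketch of what the hardened defaults amount to (the exact values the operator emits may differ):

```yaml
# Standard Kubernetes container securityContext expressing the defaults above.
securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```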

Current Stage

Alpha (v1alpha1) — the project is functional but early.

What works today

  • 7 core components deployed and managed: Kong, GoTrue, PostgREST, Realtime, Storage API, Meta, Studio
  • Idempotent database initialization (extensions, schemas, roles)
  • Helm chart for operator installation
  • Component-level image/resource/replica overrides
  • Ingress with TLS support
  • SMTP and OAuth configuration for Auth
  • E2E tests covering the full deployment lifecycle

Honest assessment of gaps

| Area | Status |
| --- | --- |
| Test coverage | Weak — controller reconcile paths lack unit tests; E2E covers the happy path only |
| Edge Functions (Deno runtime) | Not yet managed |
| imgproxy | Not yet managed |
| Supavisor (connection pooling) | Not yet managed |
| Zero-downtime upgrades | Not implemented |
| HPA / autoscaling | Not implemented (manual replica count only) |
| Upstream version tracking | Image tags are hardcoded defaults |
| Community adoption | Minimal (5 stars) |

Future Direction

The following items represent the planned evolution of this project. No timelines are attached — priority is driven by community need and contribution.

Phase: Component Coverage

Bring managed components to parity with the official docker-compose stack:

  • Edge Functions (supabase/edge-runtime) — the Deno-based serverless runtime
  • imgproxy — image transformation service
  • Supavisor — connection pooling, important for production workloads
  • Analytics (Logflare) — low priority; community consensus is that this service is a resource hog with limited value in self-hosted setups. May be offered as opt-in only.

Phase: Production Hardening

  • Controller reconciliation test coverage (happy + failure paths)
  • Failure scenario E2E tests (missing secrets, DB init failure, component crash)
  • Rolling update strategy with health verification
  • PodDisruptionBudget support
  • NetworkPolicy generation
  • Prometheus ServiceMonitor for operator and Supabase component metrics

Phase: Operational Excellence

  • Automated version tracking — detect upstream Supabase releases and surface available upgrades
  • HPA integration — optional autoscaling per component
  • Backup/restore coordination — not managing backups directly (that's the PG operator's job), but providing CRD-level hooks for backup orchestration
  • One-click demo environment (kind/k3d script for local evaluation)

Phase: Ecosystem Integration

  • CloudNativePG end-to-end reference architecture
  • OperatorHub / ArtifactHub listing for discoverability
  • Compatibility testing matrix (Supabase versions × Kubernetes versions)

Differentiation vs. Existing Solutions

| Capability | docker-compose | supabase-community Helm | supabase-operator |
| --- | --- | --- | --- |
| Kubernetes-native | No | Yes (Helm) | Yes (CRD + operator) |
| Multi-project | No | No | Yes |
| Continuous reconciliation | No | No | Yes |
| Drift detection / self-healing | No | No | Yes |
| Granular status reporting | No | No | Yes (per-component) |
| Admission validation | No | No | Yes (webhooks) |
| Decoupled PostgreSQL | No (bundled) | Partial | Yes (BYO-DB) |
| Lifecycle management | Manual | Helm upgrade | Operator-managed |

The core value proposition: a Kubernetes operator manages state continuously, while a Helm chart is a point-in-time deployment tool. For a complex distributed system like Supabase, with 7+ interdependent services, operator-based management provides meaningful operational advantages.


Contributing

This project is MIT-licensed and contributions are welcome. High-impact areas for contribution:

  1. Adding Edge Functions / Supavisor / imgproxy component builders
  2. Controller test coverage
  3. CloudNativePG integration examples and documentation
  4. Real-world production deployment feedback

If you're self-hosting Supabase on Kubernetes and hitting pain points, this project exists because of those pain points. Issues and PRs are welcome.
