This directory contains all configurations and artifacts necessary to containerize and orchestrate the SyncDesk application, including the complete observability stack (logging, metrics, and alerting).
```
deploy/
├── Dockerfile              # Application Docker image
├── entrypoint.sh           # Startup script (PostgreSQL healthcheck)
├── alertmanager/           # Alert manager
│   └── alertmanager.yml
├── grafana/                # Visualization dashboards
│   ├── dashboards/
│   │   └── syncdesk-overview.json
│   └── provisioning/
│       ├── dashboards/
│       │   └── dashboards.yml
│       └── datasources/
│           └── datasources.yml
├── loki/                   # Logging system (log aggregation)
│   └── loki-config.yml
├── prometheus/             # Metrics collection and storage
│   ├── prometheus.yml
│   └── rules.yml
└── promtail/               # Agent that collects logs for Loki
    └── promtail-config.yml
```
## Dockerfile

- Base image: Python 3.12 (slim)
- Sets environment variables for optimization (no bytecode cache, unbuffered output)
- Installs system dependencies (build-essential, libpq-dev, curl)
- Installs Poetry as the dependency manager
- Copies `pyproject.toml` and `poetry.lock`, installing only production dependencies
- Copies the application source code
- Exposes port 8000 (FastAPI)
- Sets up the healthcheck and entrypoint

Optimized size: uses the Python slim base and removes the apt cache after installation.
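The steps above correspond roughly to a Dockerfile like the following sketch (paths, the `/health` endpoint, and the Poetry flags are assumptions based on the description, not the project's actual file):

```dockerfile
FROM python:3.12-slim

# No .pyc files, unbuffered stdout/stderr
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# System deps, with the apt cache removed to keep the image small
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential libpq-dev curl \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir poetry

WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN poetry install --only main --no-root   # production dependencies only

COPY . .

EXPOSE 8000
HEALTHCHECK CMD curl -f http://localhost:8000/health || exit 1
ENTRYPOINT ["./deploy/entrypoint.sh"]
```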
## entrypoint.sh

The `entrypoint.sh` script runs before the application starts and:

- Waits for PostgreSQL to be ready: connects in a loop until the database responds
- Runs migrations (implemented in `app/main.py` on the startup event)
- Starts the API: FastAPI with Uvicorn

This ensures the application never attempts to connect to an unavailable database.
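The wait-loop pattern can be sketched as a small POSIX-sh helper (the `wait_for` function and the `pg_isready` probe are illustrative; the real script's commands may differ):

```shell
#!/bin/sh
set -e

# Retry a probe command until it succeeds, then report readiness.
wait_for() {
  desc="$1"; shift
  until "$@" >/dev/null 2>&1; do
    echo "Waiting for $desc..."
    sleep 1
  done
  echo "$desc is ready."
}

# In the real entrypoint this would probe PostgreSQL, e.g.:
#   wait_for "PostgreSQL" pg_isready -h "$POSTGRES_HOST" -U "$POSTGRES_USER"
# and then hand off to the API server:
#   exec uvicorn app.main:app --host 0.0.0.0 --port 8000
```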
## Prometheus

- Port: 9090
- Purpose: collects and stores time-series metrics
- Configuration:
  - `prometheus.yml`: endpoints, scrape intervals, Alertmanager targets
  - `rules.yml`: alert rules (e.g., "API down for 5 minutes")
- Scraping: collects metrics from the API at `/metrics` every 5 seconds
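A minimal `prometheus.yml` matching this description might look like the sketch below (the job name is illustrative, not necessarily the project's actual file):

```yaml
global:
  scrape_interval: 5s          # the 5-second scrape described above

rule_files:
  - /etc/prometheus/rules.yml  # alert rules (e.g., "API down for 5 minutes")

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]

scrape_configs:
  - job_name: syncdesk-api     # illustrative job name
    metrics_path: /metrics
    static_configs:
      - targets: ["api:8000"]  # container name, not localhost
```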
## Grafana

- Port: 3100 on the host (mapped from container port 3000)
- Purpose: visualization of metrics and logs in dashboards
- Configuration:
  - `provisioning/datasources/datasources.yml`: defines Prometheus and Loki as data sources
  - `provisioning/dashboards/dashboards.yml`: points to the JSON dashboards
  - `dashboards/syncdesk-overview.json`: custom dashboard with panels for health, latency, errors, etc.
- Username/password: configurable via `.env` (default: admin/admin)
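A provisioning file along these lines would register both data sources automatically (a hedged sketch of `datasources.yml`, not the project's verbatim file):

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # container name, not localhost
    isDefault: true
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```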
## Loki

- Port: 3100
- Purpose: storage and querying of structured logs
- Configuration:
  - Authentication disabled
  - Filesystem storage (`/loki/chunks`, `/loki/rules`)
  - Schema v13 with TSDB from 2026-04-09
  - Retention period: 24 hours by default
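The settings above map onto a `loki-config.yml` fragment roughly like this (a sketch assuming Loki's standard config keys, not the project's exact file):

```yaml
auth_enabled: false              # authentication disabled, as noted above

schema_config:
  configs:
    - from: 2026-04-09
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  retention_period: 24h          # the default retention described above

storage_config:
  filesystem:
    directory: /loki/chunks
```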
## Promtail

- Purpose: agent that collects local logs and ships them to Loki
- Dependencies: Loki and the API (waits for both before starting)
- Volumes:
  - `./logs/`: collects the application's JSON logs
  - `/var/run/docker.sock`: detects containers via the Docker API
- Configuration: `promtail-config.yml` defines parsing pipelines and labels
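A pipeline of that shape might look like the following `promtail-config.yml` sketch (the job name, label, and JSON field are assumptions for illustration):

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push   # ships logs to Loki

scrape_configs:
  - job_name: syncdesk-logs                  # illustrative job name
    static_configs:
      - targets: [localhost]
        labels:
          job: syncdesk
          __path__: /logs/*.json             # the ./logs/ volume mount
    pipeline_stages:
      - json:
          expressions:
            level: level                     # hypothetical JSON log field
```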
## AlertManager

- Port: 9093
- Purpose: manages Prometheus alerts, deduplication, and routing
- Configuration: `alertmanager.yml` defines how alerts are grouped and routed
- Integration: pre-configured to receive alerts from `prometheus.yml` (targets at `alertmanager:9093`)
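Grouping and routing in `alertmanager.yml` typically take this shape (a sketch with placeholder intervals and receiver; the real notification targets are project-specific):

```yaml
route:
  group_by: [alertname]        # deduplicate alerts by name
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: default

receivers:
  - name: default              # illustrative receiver; real integrations
                               # (e-mail, Slack, etc.) would be configured here
```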
## Startup order

When you run `docker compose up`:

1. In parallel (no dependencies):
   - PostgreSQL (`db`)
   - MongoDB (`mongo`)
   - Prometheus
   - AlertManager
   - Loki
2. After `db` and `mongo` are healthy:
   - API (`api` waits for `db` and `mongo` with `condition: service_healthy`)
3. After the API:
   - Promtail (collects logs generated by the API)
   - Grafana (consumes data from Prometheus, Loki, and AlertManager)
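This ordering is expressed with `depends_on` in Compose; a hedged fragment under the service names used above (not necessarily the project's exact compose file):

```yaml
services:
  api:
    depends_on:
      db:
        condition: service_healthy
      mongo:
        condition: service_healthy
  promtail:
    depends_on:
      - api
      - loki
  grafana:
    depends_on:
      - prometheus
      - loki
      - alertmanager
```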
## How to run

```shell
docker build -f deploy/Dockerfile -t syncdesk-api:latest .
docker compose up -d
```

## Service URLs

| Service | URL | Description |
|---|---|---|
| API | http://localhost:8000 | FastAPI (docs at /docs) |
| Prometheus | http://localhost:9090 | Raw metrics |
| Grafana | http://localhost:3100 | Dashboards (user: admin, password: from .env) |
| AlertManager | http://localhost:9093 | Alert manager |
| Loki | http://localhost:3100 (via Grafana) | Logs (no native UI) |
## Useful commands

```shell
# API logs
docker compose logs -f api

# Prometheus logs
docker compose logs -f prometheus

# Logs from all services
docker compose logs -f
```

```shell
docker compose down      # Stop containers
docker compose down -v   # Stop + remove volumes (caution: destroys data!)
```

## Environment variables

Defined in `.env`:

```shell
# PostgreSQL
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=syncdesk

# MongoDB
MONGO_INITDB_ROOT_USERNAME=user
MONGO_INITDB_ROOT_PASSWORD=password

# Grafana
GF_SECURITY_ADMIN_PASSWORD=admin
```
## Notes

- Healthchecks:
  - The API and databases have configured healthchecks
  - Prometheus, Grafana, and Loki do not (infrastructure services)
- Persistence (named volumes):
  - `postgres_data`: PostgreSQL data
  - `mongo_data`: MongoDB data
  - `prometheus_data`: metrics history
  - `grafana_data`: customized dashboards
  - `loki_data`: aggregated logs
- Security in production:
  - Enable authentication in Loki
  - Use a secrets manager instead of `.env`
  - Configure CORS and authentication in the API
  - Enable TLS/HTTPS (nginx reverse proxy or similar)
- Custom metrics:
  - The API exposes metrics at `/metrics` (FastAPI + Prometheus client)
  - Add new metrics with decorators in `app/core/metrics/decorators.py`
- Dashboards:
  - Customize `dashboards/syncdesk-overview.json` to monitor what matters
  - Import pre-built dashboards from Grafana Hub
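The decorator-based metrics mentioned under "Custom metrics" usually follow a wrap-and-count pattern. A minimal sketch using a plain in-memory counter (in the real `decorators.py` this would presumably increment a `prometheus_client` Counter instead; `track_calls` is an illustrative name, not the project's API):

```python
from collections import Counter
from functools import wraps

# Stand-in for a prometheus_client Counter, keeping the sketch dependency-free
CALL_COUNTS: Counter = Counter()

def track_calls(func):
    """Count every invocation of the wrapped function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        CALL_COUNTS[func.__name__] += 1
        return func(*args, **kwargs)
    return wrapper

@track_calls
def ping() -> str:
    return "pong"
```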
## Troubleshooting

The API can't connect to PostgreSQL:
- Verify that `POSTGRES_HOST`, `POSTGRES_USER`, and `POSTGRES_PASSWORD` are correct in `.env`
- Check the entrypoint logs: `docker compose logs api`

Prometheus shows no metrics:
- Confirm the API is running and exposing `/metrics`
- Check `prometheus.yml`: the target should be `api:8000` (the container name)
- Visit http://localhost:9090/targets to see target status

Grafana can't reach its data sources:
- Confirm Prometheus is running: http://localhost:9090
- In Grafana, go to Configuration > Data Sources and test the connection
- Internal URLs must use container names (`http://prometheus:9090`, not `http://localhost:9090`)

No logs in Loki:
- Check that `promtail` is running: `docker compose ps promtail`
- Confirm `./logs/` contains `.json` files
- Check `promtail-config.yml`: the paths should point to the correct logs