2 changes: 1 addition & 1 deletion .github/workflows/smoke_test.yml
@@ -56,7 +56,7 @@ jobs:

       - name: Store robot results
         if: failure()
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: robot
           path: robot/results
4 changes: 2 additions & 2 deletions .github/workflows/test.yml
@@ -45,7 +45,7 @@ jobs:
       - name: Test frontend
         run: docker compose run --no-deps web yarn test:js:coverage
       - name: Upload coverage artifact
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: frontend-coverage
           path: |
@@ -74,7 +74,7 @@ jobs:
           -e SFDX_HUB_KEY="sample key"
           web yarn test:py
       - name: Upload coverage artifact
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: backend-coverage
           path: |
33 changes: 28 additions & 5 deletions Dockerfile
@@ -2,7 +2,14 @@ ARG BUILD_ENV=development
 ARG PROD_ASSETS
 ARG OMNIOUT_TOKEN
 FROM node:22 AS node_base
-FROM python:3.12
+FROM python:3.12-slim-bookworm
+
+# Re-import build args inside this stage. ARGs declared before the first
+# FROM are only in scope for FROM lines themselves; they reset to undefined
+# after each FROM and must be redeclared to be visible to RUN.
+ARG BUILD_ENV=development
+ARG PROD_ASSETS
+ARG OMNIOUT_TOKEN
 
 # Node and npm
 COPY --from=node_base /usr/local/lib/node_modules /usr/local/lib/node_modules
@@ -16,13 +23,26 @@ RUN ln -s /opt/yarn/bin/yarnpkg /usr/local/bin/yarnpkg
 RUN node --version && npm --version && yarn --version
 
 # System setup:
+# slim base lacks compilers and -dev headers needed to build wheels
+# for cryptography, lxml, psycopg2-binary, etc. Add toolchain deps.
 RUN apt-get update \
-    && apt-get install -y gettext redis-tools --no-install-recommends \
+    && apt-get install -y --no-install-recommends \
+    gettext \
+    redis-tools \
+    build-essential \
+    libxml2-dev \
+    libxslt-dev \
+    libpq-dev \
+    libffi-dev \
+    curl \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
 
 # Python context setup:
-RUN pip install --no-cache-dir --upgrade pip pip-tools
+# setuptools<81 keeps the legacy pkg_resources.declare_namespace API
+# that cumulusci's __init__ relies on. The full python:3.12 image
+# ships an older setuptools by default; slim does not, so pin it.
+RUN pip install --no-cache-dir --upgrade pip pip-tools "setuptools<81"
 
 # ================ ENVIRONMENT
 ENV PYTHONUNBUFFERED=1
@@ -39,8 +59,11 @@ ENV OMNIOUT_TOKEN=${OMNIOUT_TOKEN}
 RUN npm install --location=global sfdx-cli --ignore-scripts
 
 # Python requirements:
+# setuptools<81 repeated here because --upgrade pip-tools would otherwise
+# re-resolve setuptools to >=81 in this layer; the pin must survive both
+# pip-install invocations (see the earlier toolchain layer for the full why).
 COPY ./requirements requirements
-RUN pip install --no-cache-dir --upgrade pip pip-tools \
+RUN pip install --no-cache-dir --upgrade pip pip-tools "setuptools<81" \
     && pip install --no-cache-dir -r requirements/prod.txt
 RUN pip install --no-cache-dir -r requirements/dev.txt
 
@@ -65,4 +88,4 @@ RUN \
     SFDX_CLIENT_ID="sample id" \
     python manage.py collectstatic --noinput
 
-CMD /app/start-server.sh
+CMD ["sh", "-c", "exec daphne --bind 0.0.0.0 --port $PORT metadeploy.asgi:application"]
32 changes: 6 additions & 26 deletions app.json
@@ -3,6 +3,7 @@
   "description": "Web-based tool for installing Salesforce products",
   "repository": "https://github.com/SFDO-Tooling/MetaDeploy",
   "keywords": ["ci", "python", "django", "salesforce", "github"],
+  "stack": "container",
   "env": {
     "DJANGO_ALLOWED_HOSTS": {
       "description": "Heroku proxies web requests and Django needs to be configured to allow the forwards",
@@ -67,47 +68,26 @@
   "formation": {
     "web": {
       "quantity": 1,
-      "size": "free"
+      "size": "basic"
     },
     "devworker": {
       "quantity": 1,
-      "size": "free"
+      "size": "basic"
     },
     "worker": {
       "quantity": 0,
-      "size": "free"
+      "size": "basic"
     },
     "worker-short": {
       "quantity": 0,
-      "size": "free"
+      "size": "basic"
     }
   },
   "addons": ["heroku-postgresql", "heroku-redis"],
-  "buildpacks": [
-    {
-      "url": "heroku/nodejs"
-    },
-    {
-      "url": "heroku/python"
-    }
-  ],
   "environments": {
-    "test": {
-      "scripts": {
-        "test-setup": "pip install --upgrade -r requirements/test.txt",
-        "test": "pytest"
-      },
-      "env": {
-        "DJANGO_SETTINGS_MODULE": "config.settings.test",
-        "DATABASE_URL": "sqlite:///test.db",
-        "AWS_ACCESS_KEY_ID": "None",
-        "AWS_SECRET_ACCESS_KEY": "None",
-        "AWS_BUCKET_NAME": "None"
-      }
-    },
     "review": {
       "scripts": {
-        "postdeploy": "./manage.py populate_db"
+        "postdeploy": "./manage.py populate_data"
       }
     }
   }
79 changes: 79 additions & 0 deletions docs/heroku-container-runtime.md
@@ -0,0 +1,79 @@
# Heroku container runtime

MetaDeploy is built and deployed via the [Heroku container runtime](https://devcenter.heroku.com/articles/build-docker-images-heroku-yml) rather than the legacy buildpacks slug. The container image is built from the repository's `Dockerfile` per the spec in `heroku.yml`, and the resulting image runs the `web`, `worker`, `worker-short`, and `devworker` dynos.

This page documents two things every operator of a MetaDeploy deployment should know:

1. How the container image is built and released.
2. How to keep the base image patched against published CVEs.

## Build and release

`heroku.yml` declares the build, release, and run commands. The relevant fields are:

```yaml
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - ./.heroku/release.sh
run:
  web: daphne --bind 0.0.0.0 --port $PORT metadeploy.asgi:application
  worker: sh .heroku/start_metadeploy_worker.sh
  worker-short: honcho start -f Procfile_worker_short
  devworker: honcho start -f Procfile_devworker
```

Two paths produce a deployed container:

- **Heroku-built (preferred).** A push to a branch that is wired to a Heroku review-app pipeline, or a direct push to a tracked app, causes Heroku to clone the repo, run `docker build` against the `Dockerfile` declared in `heroku.yml`, run the `release.command` (`./.heroku/release.sh`, which does `python manage.py migrate --noinput`), then promote the new image to the dyno formation. This is the path the GitHub Actions CI exercises for review apps.
- **Locally built (fallback).** If Heroku's build queue is congested or the build environment is otherwise unavailable, you can build the image on your workstation and push it directly to the Heroku container registry:

```bash
docker buildx build --platform linux/amd64 --build-arg BUILD_ENV=production \
  -t registry.heroku.com/<app>/web --load .
heroku container:push web -a <app>
heroku container:release web -a <app>
```

**Caveat.** `heroku container:release` does **not** execute the `release.command` declared in `heroku.yml`. After a manual `container:release`, run the release script yourself before serving traffic:

```bash
heroku run -a <app> -- bash ./.heroku/release.sh
```

On Apple Silicon hosts the `--platform linux/amd64` flag is required so the resulting image runs on Heroku's amd64 dynos. QEMU emulation under Docker Desktop handles the cross-build transparently; expect the first build to take 5–10 minutes longer than a native build.

## Heroku Private Spaces note

In Heroku Private Spaces the `run` field in `heroku.yml` is **not** consulted to start the dyno. The container image's `CMD` is used instead. Because of this, the `Dockerfile`'s final `CMD` is kept aligned with the `web` `run` command above (`daphne` against `metadeploy.asgi:application`). If you change one, change the other in the same commit.
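
The keep-in-sync requirement above can be enforced mechanically. The sketch below (a hedged example, not an existing CI job in this repo) inlines sample file contents via heredocs; a real check would read the repository's actual `Dockerfile` and `heroku.yml`:

```shell
# Drift guard sketch: Dockerfile CMD vs. heroku.yml run.web.
# Sample fixtures stand in for the real files.
cat > /tmp/Dockerfile.sample <<'EOF'
CMD ["sh", "-c", "exec daphne --bind 0.0.0.0 --port $PORT metadeploy.asgi:application"]
EOF
cat > /tmp/heroku.sample.yml <<'EOF'
run:
  web: daphne --bind 0.0.0.0 --port $PORT metadeploy.asgi:application
EOF

# Extract the daphne invocation from each file and compare.
cmd_target=$(grep -o 'daphne[^"]*' /tmp/Dockerfile.sample)
run_target=$(sed -n 's/^  web: //p' /tmp/heroku.sample.yml)
if [ "$cmd_target" = "$run_target" ]; then
  echo "CMD and run.web agree: $cmd_target"
else
  echo "drift: Dockerfile CMD and heroku.yml run.web differ" >&2
  exit 1
fi
```

Wiring a check like this into CI turns the "change one, change the other" convention into a failing build instead of a silent Private Spaces outage.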

## CVE update mechanism

The base image is `python:3.12-slim-bookworm` (Debian 12 + CPython 3.12). Fixes for vulnerabilities published against CPython, Debian packages, the Node toolchain layer, or the system OpenSSL reach the deployed image only when it is rebuilt. We do **not** currently have automated rebuild-on-CVE plumbing for this repository (no Dependabot or scheduled GitHub Actions job that bumps the `FROM` tag and opens a PR). Until that is in place, follow the **manual rebuild cadence** below.

### Manual rebuild cadence

- **Trigger.** A maintainer rebuilds the image **at least monthly**, and additionally on any of: a Critical CVE against `python:3.12-slim-bookworm`, a Critical CVE against a major Debian package known to ship in the image (`openssl`, `libxml2`, `libcurl4`, `nodejs`), or an emergency advisory from Salesforce security.
- **Procedure.** Push a no-op or version-bump commit to the default branch and let the normal Heroku build pipeline rebuild from the latest base image:

```bash
git commit --allow-empty -m "chore: rebuild image to pick up base-image CVE patches"
git push origin main
```

The Heroku review-app and staging builds pull the current `python:3.12-slim-bookworm` digest at build time, picking up any Debian / CPython / Node patches published since the previous build. After the staging dyno is healthy, promote the image to production through the normal pipeline.
- **Verification.** After the rebuild, `heroku run -a <app> -- python -V` prints the current CPython point release, and `heroku run -a <app> -- dpkg -l openssl` shows the patched Debian package version. Spot-check against the upstream advisory.
- **Followup tracking.** When the cross-cutting [SFDO-Tooling apps restart roadmap](https://github.com/SFDO-Tooling) publishes an automated CVE-rebuild mechanism (cron-driven rebuild + redeploy, or a workflow that bumps the `FROM` tag and opens a PR), this repo should adopt it and this section should be replaced by a cross-reference.
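
The verification step's "spot-check against the upstream advisory" can be scripted. A minimal sketch, with illustrative placeholder version strings (not from a real CVE); `dpkg --compare-versions` is the authoritative comparator when dpkg is available, but `sort -V` is close enough for a spot check:

```shell
# Compare the deployed package version against the advisory's minimum fix.
# e.g. deployed=$(heroku run -a <app> -- dpkg-query -W -f '${Version}' openssl)
deployed="3.0.15-1~deb12u1"   # placeholder: version reported by the dyno
fixed="3.0.14-1~deb12u2"      # placeholder: minimum patched version per advisory

# The smaller of the two versions sorts first; if that is the fixed
# version, the deployed version is at or above the patched release.
lowest=$(printf '%s\n%s\n' "$deployed" "$fixed" | sort -V | head -n 1)
if [ "$lowest" = "$fixed" ]; then
  echo "patched: $deployed >= $fixed"
else
  echo "NOT patched: $deployed < $fixed" >&2
fi
```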

## Local development

`docker-compose.yml` uses the same `Dockerfile` but with `BUILD_ENV=development` and overrides `CMD` to run the Django development server with hot reload. The production container behavior is not exercised by `docker-compose up`; if you need to verify the production image locally, run:

```bash
docker run --rm -e PORT=8000 -p 8000:8000 \
-e DATABASE_URL=... -e REDIS_URL=... \
registry.heroku.com/<app>/web
```
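
For orientation, the development override described above has roughly this shape. This is a hypothetical sketch (service names, volumes, and the exact `command` are illustrative); the repository's actual `docker-compose.yml` is the source of truth:

```yaml
# Hypothetical compose override: same Dockerfile, development build args,
# and the production CMD (daphne) replaced by the autoreloading dev server.
services:
  web:
    build:
      context: .
      args:
        BUILD_ENV: development
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
```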
12 changes: 12 additions & 0 deletions heroku.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
build:
  docker:
    web: Dockerfile
run:
  web: daphne --bind 0.0.0.0 --port $PORT metadeploy.asgi:application
  devworker: honcho start -f Procfile_devworker
  worker: sh .heroku/start_metadeploy_worker.sh
  worker-short: honcho start -f Procfile_worker_short
release:
  image: web
  command:
    - ./.heroku/release.sh