This guide covers running ClawStrike + OpenClaw together in Docker. For installing ClawStrike directly on your host machine, see the direct setup guide.
ClawStrike runs as a CLI tool in the same container as OpenClaw. The custom image extends the official OpenClaw image, adding Python 3.12 and the clawstrike binary. One container, no inter-process communication complexity.
- Docker Engine 24+ and Docker Compose v2
- A Hugging Face account
- Meta license accepted for your chosen Prompt Guard model (see below)
ClawStrike uses Meta's Llama Prompt Guard 2. You must accept Meta's license on Hugging Face before the model can be downloaded.
Choose one:
| Model | Size | Languages | License page |
|---|---|---|---|
| Llama-Prompt-Guard-2-22M | ~300 MB | English only | Hugging Face |
| Llama-Prompt-Guard-2-86M | ~1.13 GB | Multilingual | Hugging Face |
After accepting, generate a read-only token at huggingface.co/settings/tokens.
This step cannot be automated. The `HF_TOKEN` alone is not sufficient: you must also accept the license on the model page.
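You can confirm both the token and the license acceptance before building anything by querying the Hugging Face Hub model API directly. A quick sketch; the repo id `meta-llama/Llama-Prompt-Guard-2-86M` is an assumption here, so substitute the variant you actually chose:

```bash
# 200 = token valid AND license accepted; 401/403 = fix the token or accept the license
MODEL="meta-llama/Llama-Prompt-Guard-2-86M"   # or Llama-Prompt-Guard-2-22M
CODE=$(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer ${HF_TOKEN}" \
  "https://huggingface.co/api/models/${MODEL}")
echo "HTTP ${CODE}"
```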
```bash
# Clone the repo
git clone https://github.com/yogur/ClawStrike && cd ClawStrike

# Create ClawStrike config (choose one)
cp clawstrike.example.yaml clawstrike.yaml   # then edit to taste
# OR (if uv is installed locally):
# uv run clawstrike init                     # generates defaults

# Create environment file and fill in HF_TOKEN and LLM credentials
cp .env.example .env
```

`clawstrike.yaml` is bind-mounted read-only into the OpenClaw workspace directory inside the container (`/home/node/.openclaw/workspace/clawstrike.yaml`). OpenClaw executes CLI commands from that directory, so the agent finds the config automatically without any extra flags.
Edit `.env` and fill in:

- `HF_TOKEN`: your Hugging Face read-only token
- LLM session credentials (`CLAUDE_AI_SESSION_KEY`, etc.) for whichever LLM provider you use
`OPENCLAW_GATEWAY_TOKEN` is generated automatically by the setup script if not set.
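A minimal `.env` might look like the following; all values shown are placeholders, not real credentials:

```bash
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx
CLAUDE_AI_SESSION_KEY=sk-ant-...   # placeholder; use your provider's credential
# Optional: pin the gateway token yourself; otherwise docker-setup.sh generates one
# OPENCLAW_GATEWAY_TOKEN=<output of: openssl rand -hex 32>
```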
```bash
bash docker-setup.sh
```

The script will:
- Create required data directories
- Generate a gateway token (or reuse one if already configured)
- Build the Docker image
- Fix bind-mount directory permissions
- Run the interactive OpenClaw onboarding
- Start the gateway
Do not press Ctrl+C during the first run. The Prompt Guard model is being downloaded to the `hf-cache` named volume; subsequent starts skip the download entirely.
In a separate terminal:
```bash
docker compose run --rm openclaw-cli
```

```bash
# No rebuild needed unless ClawStrike or OpenClaw version changes
docker compose up -d openclaw-gateway
```

Pull the latest code and rebuild:

```bash
git pull
docker compose build --no-cache
docker compose up
```

The Dockerfile pins the OpenClaw version (`FROM ghcr.io/openclaw/openclaw:2026.3.2`). To upgrade:
- Change the `FROM` tag in `Dockerfile` to the new version
- Run `docker compose build --no-cache`
- Test before using in production
Pinning is intentional — OpenClaw updates may change the skill API or directory structure.
| Volume | Purpose | Survives rebuild? |
|---|---|---|
| `hf-cache` | Hugging Face model cache | Yes (named volume) |
| `clawstrike-data` | Audit DB, contact registry (mounted at `workspace/data`) | Yes (named volume) |
| `OPENCLAW_CONFIG_DIR` | OpenClaw config, conversations | Yes (host bind-mount) |
| `./clawstrike.yaml` | Security policy config (read-only bind-mount into workspace) | N/A (host file) |
`clawstrike.yaml` is missing or not bind-mounted. Create it first:

```bash
cp clawstrike.example.yaml clawstrike.yaml
```

Then ensure `docker-compose.yml` has the correct bind-mount path (it does by default: `./clawstrike.yaml:/home/node/.openclaw/workspace/clawstrike.yaml:ro`).
Check that:

- `HF_TOKEN` is set and valid
- You have accepted the Meta license on the model's Hugging Face page
- The token has read access (not write-only)
The entrypoint prints the exact model URLs when warmup fails. OpenClaw still starts — classification requests will return errors until the model is available.
Once the gateway is running, open a shell in the container and check that ClawStrike is working:
```bash
docker compose exec openclaw-gateway clawstrike health
# {"status": "ok", "mode": "skill", "classifier": "multilingual", "mcp_enabled": false}
```

- `clawstrike.yaml` is mounted read-only (`:ro`) into the OpenClaw workspace (`/home/node/.openclaw/workspace/clawstrike.yaml`). The container cannot modify the security policy. OpenClaw executes CLI commands from this directory, so the agent finds the config automatically.
- The gateway binds to `127.0.0.1` on the host by default, so it is reachable from the host machine only, not from the LAN or internet. To expose it to the LAN or a tailnet, add `OPENCLAW_GATEWAY_HOST=0.0.0.0` to `.env`. If you do, generate a strong `OPENCLAW_GATEWAY_TOKEN` (e.g. `openssl rand -hex 32`) and apply additional network controls so port 18789 is not reachable from the internet.
- The image runs as the non-root `node` user (uid 1000), matching the upstream OpenClaw security posture.
- PyTorch is installed from the CPU-only index; both Docker and local dev installs use CPU inference. GPU is unnecessary for models of this size (22M / 86M parameters).
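The LAN-exposure step can be scripted. A sketch that appends the two variables to `.env`; the firewall step is left to your environment:

```bash
printf 'OPENCLAW_GATEWAY_HOST=0.0.0.0\n' >> .env
printf 'OPENCLAW_GATEWAY_TOKEN=%s\n' "$(openssl rand -hex 32)" >> .env
# Port 18789 is now reachable beyond localhost: restrict it at the firewall so
# only your LAN or tailnet range can connect, never the open internet.
```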