Clawdstrike

CI Status crates.io docs.rs Artifact Hub License: Apache-2.0 MSRV: 1.93

The claw strikes back.
At the boundary between intent and action,
it watches what leaves, what changes, what leaks.
Not "visibility." Not “telemetry.” Not "vibes." Logs are stories—proof is a signature.
If the tale diverges, the receipt won't sign.

Clawdstrike

Fail closed. Sign the truth.

Tool-boundary enforcement · Signed receipts · Multi-framework

Docs · TypeScript · Python · OpenClaw · Examples


Overview

Alpha software — APIs and import paths may change between releases. See GitHub Releases and the package registries (crates.io / npm / PyPI) for published versions.

Clawdstrike is a fail-closed policy + attestation runtime for AI agents and computer-use systems, designed for developers building EDR solutions and security infrastructure for autonomous agent swarms. It sits at the boundary between intent and execution, where it normalizes actions, enforces policy, and signs what happened.

  • Guards — Block sensitive paths, control network egress, detect secrets, validate patches, restrict tools, catch jailbreaks
  • Receipts — Ed25519-signed attestations proving what was decided, under which policy, with what evidence (see the verification sketch below)
  • Multi-language — Rust, TypeScript, Python, WebAssembly
  • Multi-framework — OpenClaw, Vercel AI, LangChain, Claude, OpenAI, and more
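
Because receipts are Ed25519 signatures, they can be checked outside Clawdstrike with standard tooling. The sketch below is illustrative only: the receipt field names are assumptions rather than the published schema, and it uses Node's built-in crypto instead of a Clawdstrike API.

// Verify an Ed25519-signed receipt with Node's standard library.
// Field names here are hypothetical -- adapt them to the actual receipt schema.
import { createPublicKey, verify } from "node:crypto";

interface Receipt {
  payload: string;       // canonical bytes that were signed (e.g. the serialized decision)
  signatureHex: string;  // Ed25519 signature, hex-encoded
  publicKeyHex: string;  // 32-byte Ed25519 public key, hex-encoded
}

// SPKI DER prefix that wraps a raw Ed25519 public key (RFC 8410).
const ED25519_SPKI_PREFIX = Buffer.from("302a300506032b6570032100", "hex");

function verifyReceipt(r: Receipt): boolean {
  const key = createPublicKey({
    key: Buffer.concat([ED25519_SPKI_PREFIX, Buffer.from(r.publicKeyHex, "hex")]),
    format: "der",
    type: "spki",
  });
  // Ed25519 takes no separate digest algorithm, hence `null`.
  return verify(null, Buffer.from(r.payload), key, Buffer.from(r.signatureHex, "hex"));
}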

Computer Use Gateway

Clawdstrike now includes dedicated CUA gateway coverage for real runtime paths (not just static policy checks):

  • Canonical CUA action translation across providers/runtimes.
  • Side-channel policy controls for remote desktop surfaces (clipboard, audio, drive_mapping, printing, session_share, file transfer bounds).
  • Deterministic decision metadata (reason_code, guard, severity) for machine-checkable analytics (see the sketch after this list).
  • Fixture-driven validator suites plus runtime bridge tests for regression safety.
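
As a rough sketch of how that metadata can be consumed (the severity levels, the allowed flag, and the example reason code below are assumptions based on this list, not the gateway's published schema):

// Illustrative decision-metadata shape -- field names follow the bullets above,
// but the exact schema the gateway emits may differ.
type Severity = "info" | "warn" | "critical";

interface DecisionMetadata {
  allowed: boolean;
  reason_code: string; // stable, machine-checkable identifier, e.g. "egress.blocked_host" (hypothetical value)
  guard: string;       // which guard produced the decision
  severity: Severity;
}

// Aggregate decisions by reason_code instead of parsing free-form log messages.
function countByReason(decisions: DecisionMetadata[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const d of decisions) {
    counts.set(d.reason_code, (counts.get(d.reason_code) ?? 0) + 1);
  }
  return counts;
}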

Architecture At A Glance

flowchart LR
    A[Provider Runtime<br/>OpenAI / Claude / OpenClaw] --> B[Clawdstrike Adapter]
    B --> C[Canonical Action Event]
    C --> D[Policy Engine + Guard Evaluation]
    D -->|allow| E[Gateway / Tool / Remote Action]
    D -->|deny| F[Fail-Closed Block]
    D --> G[Signed Receipt + reason_code]
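
To make the Canonical Action Event node concrete: every adapter reduces a provider-native tool call to one normalized shape before policy evaluation. The interface below is an illustrative sketch of that idea; the field names and action kinds are assumptions, not the schema the adapters actually ship.

// Rough sketch of a normalized action event; the real schema used by the adapters may differ.
type ActionKind = "file" | "network" | "tool" | "cua";

interface CanonicalActionEvent {
  kind: ActionKind;   // normalized action category
  provider: string;   // e.g. "openai", "claude", "openclaw"
  target: string;     // path, host:port, tool name, or UI element
  sessionId: string;
  raw?: unknown;      // original provider-specific payload, retained as evidence
}

// Example: an OpenAI tool call collapsed into the canonical shape.
function fromOpenAIToolCall(toolName: string, args: unknown, sessionId: string): CanonicalActionEvent {
  return { kind: "tool", provider: "openai", target: toolName, sessionId, raw: args };
}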

Quick Start

Pick one core runtime, then add the adapter for your framework.

Core Runtimes

Rust CLI

# from crates.io (recommended when published)
cargo install hush-cli

# from source checkout (development path)
cargo install --path crates/services/hush-cli

clawdstrike policy list
clawdstrike check --action-type file --ruleset strict ~/.ssh/id_rsa

Docs: Quick Start (Rust)

TypeScript SDK (@clawdstrike/sdk)

npm install @clawdstrike/sdk
import { Clawdstrike } from "@clawdstrike/sdk";

const cs = Clawdstrike.withDefaults("strict");
const decision = await cs.checkNetwork("api.openai.com:443");
console.log(decision.status);

Docs: Quick Start (TypeScript)

Python SDK (clawdstrike)

pip install clawdstrike
from clawdstrike import Policy, PolicyEngine, GuardAction, GuardContext

policy = Policy.from_yaml_file("policy.yaml")
engine = PolicyEngine(policy)
ctx = GuardContext(cwd="/app", session_id="session-123")

allowed = engine.is_allowed(GuardAction.file_access("/home/user/.ssh/id_rsa"), ctx)
print("allowed:", allowed)

Docs: Quick Start (Python)

Additional Language Bindings (Advanced / FFI)

These bindings are useful for native/runtime integrations and receipt/crypto flows. They currently rely on the hush-ffi native library (libhush_ffi).

C (via hush-ffi)

cargo build -p hush-ffi --release
  • Header: crates/libs/hush-ffi/hush.h
  • Native library output: target/release/ (libhush_ffi.*)

Go (via cgo binding)

# optional local-development pin
go mod edit -replace github.com/backbay-labs/clawdstrike/packages/sdk/hush-go=/path/to/clawdstrike/packages/sdk/hush-go
go get github.com/backbay-labs/clawdstrike/packages/sdk/hush-go
import hush "github.com/backbay-labs/clawdstrike/packages/sdk/hush-go"

v := hush.Version()
_ = v

C# (.NET binding)

dotnet add <your-project>.csproj reference /path/to/clawdstrike/packages/sdk/hush-csharp/src/Hush/Hush.csproj
using Hush;
using Hush.Crypto;

var kp = Keypair.Generate();
Console.WriteLine(kp.PublicKeyHex);

For Go/C#/C runtime setup, ensure libhush_ffi is on your dynamic library path.

Framework Adapters

OpenAI Agents SDK (@clawdstrike/openai)

npm install @clawdstrike/openai @clawdstrike/adapter-core @clawdstrike/engine-local
import { createStrikeCell } from "@clawdstrike/engine-local";
import { OpenAIToolBoundary, wrapOpenAIToolDispatcher } from "@clawdstrike/openai";

const boundary = new OpenAIToolBoundary({ engine: createStrikeCell({ policyRef: "default" }) });
const dispatchTool = wrapOpenAIToolDispatcher(boundary, async (toolName, input, runId) => {
  return { toolName, input, runId };
});

Docs: OpenAI Adapter README

Claude Code / Claude Agent SDK (@clawdstrike/claude)

npm install @clawdstrike/claude @clawdstrike/adapter-core @clawdstrike/engine-local
import { createStrikeCell } from "@clawdstrike/engine-local";
import { ClaudeToolBoundary, wrapClaudeToolDispatcher } from "@clawdstrike/claude";

const boundary = new ClaudeToolBoundary({ engine: createStrikeCell({ policyRef: "default" }) });
const dispatchTool = wrapClaudeToolDispatcher(boundary, async (toolName, input, runId) => {
  return { toolName, input, runId };
});

Docs: Claude Adapter README, Claude Recipe

Vercel AI SDK (@clawdstrike/vercel-ai)

npm install @clawdstrike/vercel-ai @clawdstrike/engine-local ai
import { createStrikeCell } from "@clawdstrike/engine-local";
import { createVercelAiInterceptor, secureTools } from "@clawdstrike/vercel-ai";

const interceptor = createVercelAiInterceptor(createStrikeCell({ policyRef: "default" }));
const tools = secureTools(
  { bash: { async execute(input: { cmd: string }) { return input.cmd; } } },
  interceptor,
);

Docs: Vercel AI Integration Guide

LangChain (@clawdstrike/langchain)

npm install @clawdstrike/langchain @clawdstrike/adapter-core @clawdstrike/engine-local
import { createStrikeCell } from "@clawdstrike/engine-local";
import { BaseToolInterceptor } from "@clawdstrike/adapter-core";
import { wrapTool } from "@clawdstrike/langchain";

const interceptor = new BaseToolInterceptor(createStrikeCell({ policyRef: "default" }));
const secureTool = wrapTool({ name: "bash", async invoke(input: { cmd: string }) { return input.cmd; } }, interceptor);

Docs: LangChain Integration Guide

OpenClaw Plugin (@clawdstrike/openclaw)

# published package workflow (recommended)
openclaw plugins install @clawdstrike/openclaw

# local development workflow
openclaw plugins install --link /path/to/clawdstrike/packages/adapters/clawdstrike-openclaw
openclaw plugins enable clawdstrike-security

Docs: OpenClaw Plugin Quick Start, OpenClaw Integration Guide

Computer Use Gateway (Production Onboarding)

Use the agent-owned OpenClaw architecture in production:

  1. Install a release build of Clawdstrike Agent/Desktop.
  2. Configure OpenClaw gateways (URL + token) in OpenClaw Fleet or via the local agent API.
  3. Validate gateway/session health through the agent health and gateway endpoints (a hypothetical probe is sketched below).
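
As a rough illustration of step 3, an automated probe might look like the following. The base URL, /health path, and response shape are hypothetical; substitute the real agent health and gateway endpoints from the operational docs.

// Hypothetical health probe -- replace the URL and response handling with the
// real agent health / gateway endpoints documented for your deployment.
const AGENT_BASE_URL = process.env.CLAWDSTRIKE_AGENT_URL ?? "http://127.0.0.1:8080";

async function checkGatewayHealth(): Promise<void> {
  const res = await fetch(`${AGENT_BASE_URL}/health`); // illustrative path
  if (!res.ok) {
    throw new Error(`agent health check failed: HTTP ${res.status}`);
  }
  const body = (await res.json()) as { status?: string };
  console.log("agent health:", body.status ?? "unknown");
}

// Fail closed in automation: treat an unreachable agent as unhealthy.
checkGatewayHealth().catch((err) => {
  console.error(err);
  process.exit(1);
});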

Operational docs:

Highlights

  • Computer Use Gateway Controls — Canonical CUA policy evaluation for click/type/scroll/key-chord and remote side-channel actions
  • Provider Translation Layer — Runtime translators for OpenAI/Claude/OpenClaw flows into a unified policy surface
  • 7 Built-in Guards — Path, egress, secrets, patches, tools, prompt injection, jailbreak
  • 4-Layer Jailbreak Detection — Heuristic + statistical + ML + optional LLM-as-judge with session aggregation
  • Deterministic Decisions — Stable reason_code + severity metadata for enforcement analytics and regression checks
  • Fail-Closed Design — Invalid policies reject at load time; evaluation errors deny access
  • Signed Receipts — Tamper-evident audit trail with Ed25519 signatures
  • Output Sanitization — Redact secrets/PII/internal data from model output with streaming support
  • Prompt Watermarking — Embed signed provenance markers for attribution and forensics

Performance

Guard checks add <0.05ms overhead per tool call. For context, typical LLM API calls take 500-2000ms.

  • Single guard check — <0.001ms (<0.0001% of an LLM call)
  • Full policy evaluation — ~0.04ms (~0.004%)
  • Jailbreak detection (heuristic + statistical) — ~0.03ms (~0.003%)

No external API calls required for core detection. Full benchmarks →

Documentation

Security

We take security seriously. If you discover a vulnerability:

  • For sensitive issues: Email connor@backbay.io with details. We aim to respond within 48 hours.
  • For non-sensitive issues: Open a GitHub issue with the security label.

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

cargo build && cargo test && cargo clippy

License

Apache License 2.0 - see LICENSE for details.