Runnable examples demonstrating how to integrate the runcycles client into real-world Python applications.
- A running Cycles server (see Deploy the Full Stack)
- Set environment variables:

  ```shell
  export CYCLES_BASE_URL="http://localhost:7878"
  export CYCLES_API_KEY="your-api-key"  # create via Admin Server — see link above
  export CYCLES_TENANT="acme"
  ```

- Install the client:

  ```shell
  pip install runcycles
  ```

| File | Description | Extra Dependencies |
|---|---|---|
| basic_usage.py | Programmatic reserve → commit lifecycle | — |
| decorator_usage.py | @cycles decorator with estimates, caps, and metrics | — |
| async_usage.py | Async client and async decorator | — |
| openai_integration.py | Guard OpenAI chat completions with budget checks | openai |
| anthropic_integration.py | Guard Anthropic messages with per-tool budget tracking | anthropic |
| streaming_usage.py | Budget-managed streaming with token accumulation | openai |
| fastapi_integration.py | FastAPI middleware, dependency injection, per-tenant budgets | fastapi, uvicorn |
| langchain_integration.py | LangChain callback handler for budget-aware agents | langchain, langchain-openai |
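
The reserve → commit lifecycle that basic_usage.py walks through can be illustrated with an in-memory stand-in. This is a sketch of the pattern only: the `Budget`, `reserve`, and `commit` names here are assumptions for illustration, not the runcycles client API.

```python
# Illustrative sketch of a reserve -> commit budget lifecycle.
# NOTE: this is an in-memory stand-in, not the runcycles client API;
# the class and method names are assumptions for illustration only.

class Budget:
    def __init__(self, cap: int):
        self.cap = cap          # total units the tenant may spend
        self.reserved = 0       # units held by open reservations
        self.committed = 0      # units actually spent

    def reserve(self, estimate: int) -> int:
        """Hold `estimate` units up front; fail fast if over the cap."""
        if self.committed + self.reserved + estimate > self.cap:
            raise RuntimeError("budget exceeded")
        self.reserved += estimate
        return estimate

    def commit(self, held: int, actual: int) -> None:
        """Release the hold and record what was actually used."""
        self.reserved -= held
        self.committed += min(actual, held)

budget = Budget(cap=1000)
held = budget.reserve(estimate=300)       # hold before doing the metered work
# ... perform the metered work here ...
budget.commit(held, actual=120)           # settle with the real usage
print(budget.committed, budget.reserved)  # -> 120 0
```

The point of the two-phase shape is that an over-budget call fails before any work happens, while the final charge reflects actual rather than estimated usage.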
```shell
# Basic examples (only need a Cycles server)
python examples/basic_usage.py
python examples/decorator_usage.py
python examples/async_usage.py

# Integration examples (need additional API keys)
export OPENAI_API_KEY="sk-..."
python examples/openai_integration.py
python examples/streaming_usage.py

export ANTHROPIC_API_KEY="sk-ant-..."
python examples/anthropic_integration.py

# FastAPI (starts a server on port 8000)
pip install fastapi uvicorn
python examples/fastapi_integration.py

# LangChain
pip install langchain langchain-openai
python examples/langchain_integration.py
```
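
The decorator pattern that decorator_usage.py demonstrates can be approximated with a plain Python decorator. This is a hedged sketch of the general shape: the real `@cycles` decorator's name, parameters, and accounting may differ, and the in-memory `spent` counter is illustrative only.

```python
# Sketch of a budget-guarding decorator in the spirit of @cycles.
# The real runcycles decorator's signature and semantics may differ;
# `estimate`, `cap`, and the in-memory counter are assumptions.
import functools

def cycles(estimate: int, cap: int):
    """Refuse to run the wrapped function once `cap` units are spent."""
    state = {"spent": 0}

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if state["spent"] + estimate > cap:
                raise RuntimeError("budget exhausted")
            result = fn(*args, **kwargs)
            state["spent"] += estimate  # settle at the estimate for simplicity
            return result
        return wrapper
    return decorator

@cycles(estimate=40, cap=100)
def summarize(text: str) -> str:
    return text[:10]  # stand-in for a metered model call

print(summarize("hello world"))  # two calls fit under the cap...
print(summarize("hello world"))  # ...a third would raise RuntimeError
```

Wrapping the call site like this keeps budget checks out of the business logic, which is the same motivation behind the decorator-based examples above.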