Want the “with vs without hallucination detector” experience? Start here:
docs/workflows/README.md — index of workflow playbooks:
- Search & Learn
- Generate Boilerplate/Content
- Inline Completions
- Greenfield Prototyping
- RCA Fix Agent
Each playbook includes a before/after worked example (uncited output vs evidence-backed + verifier).
Berry runs a local MCP server with a safe, repo‑scoped toolpack plus verification tools (detect_hallucination, audit_trace_budget).
Berry ships a single MCP surface: classic.
Classic includes:
- Verification tools (detect_hallucination, audit_trace_budget)
- Run & evidence notebook tools (start/load runs, add/list/search spans)
See docs/MCP.md and docs/workflows/README.md.
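For orientation, an MCP client config entry typically follows the shape below. This is an illustrative sketch only: the server name, `command`, and `args` values are placeholders, not Berry's actual launch command — `berry init` writes the real config files for you.

```json
{
  "mcpServers": {
    "berry": {
      "command": "berry",
      "args": ["mcp"]
    }
  }
}
```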
Berry integrates with Cursor, Codex, Claude Code, and Gemini CLI via config files committed to your repo.
Berry’s current verification method requires token logprobs (Chat Completions-style logprobs + top_logprobs).
Supported today:
- openai (default): OpenAI-compatible Chat Completions endpoints with logprobs (OpenAI, OpenRouter, local vLLM, or any compatible base_url)
- gemini: Gemini Developer API generateContent with token logprobs via logprobsResult (when enabled for the model)
- vertex: Vertex AI generateContent (Gemini models) with token logprobs via logprobsResult
- dummy: deterministic offline backend for tests/dev
Not supported yet:
- Anthropic (OpenAI-compat layer ignores logprobs)
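To make the logprobs requirement concrete, here is a minimal sketch of what a Chat Completions-style request and response look like from the verifier's point of view. The field names follow the standard OpenAI-compatible logprobs format; the model name and sample response values are placeholders, and the averaging helper is illustrative, not Berry's actual scoring method.

```python
import math

# Request body for an OpenAI-compatible Chat Completions endpoint.
# `logprobs` and `top_logprobs` are the fields verification depends on;
# the model name is a placeholder.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this repo."}],
    "logprobs": True,   # per-token log-probabilities
    "top_logprobs": 5,  # alternative tokens per position
}

# Relevant slice of a Chat Completions response
# (hand-written sample, not real model output):
response = {
    "choices": [{
        "logprobs": {
            "content": [
                {"token": "20", "logprob": -0.02},
                {"token": "24", "logprob": -1.35},
            ]
        }
    }]
}

def mean_logprob(resp: dict) -> float:
    """Average token logprob; low values flag low-confidence spans."""
    tokens = resp["choices"][0]["logprobs"]["content"]
    return sum(t["logprob"] for t in tokens) / len(tokens)

print(round(mean_logprob(response), 3))  # -> -0.685
```

Backends that strip these fields (e.g. Anthropic's OpenAI-compat layer) return no `logprobs` block, which is why they are unsupported today.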
- Install: `pipx install -e .`
- In each repo you want to use Berry: `berry init`
  This provisions a hosted API key, writes MCP configs for Cursor/Codex/Claude Code/Gemini CLI, and sets up a Claude Code skill file.
- Reload MCP servers in your client.
To use your own API key or a different backend instead of the hosted key: `berry setup`

More docs:
- docs/USAGE.md — task-oriented guides
- docs/CLI.md — command reference
- docs/CONFIGURATION.md — config files, defaults, and env vars
- docs/MCP.md — tools/prompts and transport details
- docs/PACKAGING.md — release pipeline (macOS pkg + Homebrew cask)
Run the test suite with `pytest`.