Build Rust CLIs that AI agents can discover, call, and learn from.
Five patterns that turn any Rust CLI into a tool AI agents can pick up and use without documentation, MCP servers, or skill files. The binary describes itself, returns structured output, and uses semantic exit codes. Your CLI becomes the tool, the documentation, and the API -- all in one binary.
Why This Exists | Before vs After | Install | How It Works | Features | Contributing
Agents need tools. Not connections to tools. Not descriptions of tools. Actual tools they can pick up and use.
An MCP server is a connection -- it tells the agent "there's a service over there, here's its schema, here's how to call it." A skill file is an instruction manual. Neither is the tool itself. The agent reads about capabilities without having them. It's the difference between handing someone a hammer and handing them a pamphlet about hammers.
A CLI is the tool. It sits on the machine, does one job, and explains itself when asked. An agent that has `search` on its PATH can search. An agent that has `labparse` can parse lab results. No intermediary, no server process, no protocol layer. The agent shells out, gets structured JSON back, and moves on.
Scalekit benchmarked 75 tasks: the simplest cost 1,365 tokens via CLI and 44,026 via MCP -- a 32x overhead. Each MCP tool definition burns 550-1,400 tokens just to describe itself. A typical setup dumps 55,000 tokens into the context window before any real work starts.
Speakeasy found that at 107 tools, models struggled to select the right one and started hallucinating tool names that didn't exist. GitHub Copilot cut from 40 tools to 13 and got better results.
LLMs already know how to use CLIs. They were trained on millions of shell examples from Stack Overflow, GitHub, and man pages. The grammar of `tool subcommand --flag value` is baked into their weights. Eugene Petrenko at JetBrains documented agents autonomously discovering and using the `gh` CLI -- handling auth, reading PRs, managing issues -- without being told it existed.
This repo gives you the architecture to build CLIs that work that way.
| Regular CLI | Agent-Friendly CLI |
|---|---|
| Human-readable output. The agent has to parse free text. No way to discover capabilities programmatically. Exit code 0 means... it ran? | Structured JSON when piped. Coloured table in a terminal. |

The agent-friendly side, when piped:

```sh
$ mytool search "rust cli" | jq
{
  "version": "1",
  "status": "success",
  "data": {
    "results": [
      {"title": "Clap framework", "url": "..."},
      {"title": "Structopt", "url": "..."},
      {"title": "Argh", "url": "..."}
    ],
    "count": 3
  }
}
```
Clone the repo and build the example:

```sh
git clone https://github.com/199-biotechnologies/agent-cli-framework.git
cd agent-cli-framework/example
cargo build --release
```

Run it:
```sh
# Human at a terminal -- coloured output
./target/release/greeter hello Boris --style pirate

# Agent piping -- auto-switches to JSON
./target/release/greeter hello Boris | jq

# Capability discovery
./target/release/greeter agent-info

# Error with semantic exit code
./target/release/greeter hello ""
echo $?  # 3 (bad input)

# Install skill to all agent platforms
./target/release/greeter skill install
```

```text
                       ┌─────────────────────────────────────┐
                       │            Your Rust CLI            │
                       │                                     │
                       │ ┌──────────┐   ┌──────────────────┐ │
  Agent calls          │ │   clap   │   │  Output Format   │ │
  `tool agent-info`    │ │  Parser  │   │    Detection     │ │
        │              │ └────┬─────┘   │  (TTY → table)   │ │
        ▼              │      │         │  (Pipe → JSON)   │ │
  ┌───────────┐        │      ▼         └──────────────────┘ │
  │ Capability│        │ ┌─────────┐    ┌──────────────────┐ │
  │ Manifest  │◄───────┤ │ Command │    │  JSON Envelope   │ │
  │ (JSON)    │        │ │ Router  │──▶ │ { version,       │ │
  └───────────┘        │ └─────────┘    │   status, data } │ │
                       │                └──────────────────┘ │
  Agent reads          │      ▼                              │
  exit code ───────────┤ ┌─────────┐    ┌──────────────────┐ │
    0: success         │ │ Semantic│    │      Skill       │ │
    1: retry           │ │ Exit    │    │  Self-Install    │ │
    3: fix args        │ │ Codes   │    │  (~/.claude/,    │ │
                       │ └─────────┘    │   ~/.codex/,     │ │
                       │                │   ~/.gemini/)    │ │
                       │                └──────────────────┘ │
                       └─────────────────────────────────────┘
```
The binary describes itself. One command returns a JSON manifest of everything the tool can do: commands, flags, exit codes, environment variables.
```json
{
  "name": "greeter",
  "version": "0.1.0",
  "description": "Minimal agent-friendly CLI example",
  "commands": {
    "hello <name>": "Greet someone. Styles: friendly, formal, pirate.",
    "agent-info": "This manifest.",
    "skill install": "Install skill file to agent platforms.",
    "update": "Self-update binary from GitHub Releases."
  },
  "exit_codes": {
    "0": "Success",
    "1": "Transient error (IO, network) -- retry",
    "3": "Bad input -- fix arguments"
  },
  "auto_json_when_piped": true
}
```

The agent calls it once and works from memory. This replaces documentation.
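A minimal way to serve that manifest is to embed it as a constant and print it from an `agent-info` subcommand. The sketch below uses a hand-written string and `std::env::args`; a real implementation would more likely generate the JSON with `serde_json`, and the manifest contents here are abbreviated:

```rust
use std::env;

// The manifest the agent reads once; kept next to the code it describes.
const AGENT_INFO: &str = r#"{
  "name": "greeter",
  "version": "0.1.0",
  "commands": { "hello <name>": "Greet someone.", "agent-info": "This manifest." },
  "exit_codes": { "0": "Success", "1": "Transient error - retry", "3": "Bad input - fix arguments" },
  "auto_json_when_piped": true
}"#;

fn main() {
    match env::args().nth(1).as_deref() {
        // One call returns everything the tool can do: commands, exit codes, conventions.
        Some("agent-info") => println!("{AGENT_INFO}"),
        Some(cmd) => eprintln!("unknown command: {cmd}"),
        None => eprintln!("usage: greeter <command>"),
    }
}
```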
JSON on stdout when piped, coloured table when attached to a terminal, auto-detected via `std::io::IsTerminal`. Errors include a `suggestion` field telling the agent exactly how to recover.
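The detection itself is a one-liner on stable Rust (1.70+, where `IsTerminal` landed in std). A sketch using only the standard library, with a hand-built envelope standing in for a proper serde type:

```rust
use std::io::{stdout, IsTerminal};

// True when stdout is a live terminal (human); false when piped (agent).
fn human_readable() -> bool {
    stdout().is_terminal()
}

fn main() {
    let name = "Boris";
    if human_readable() {
        // Human at a terminal: friendly, colourable output.
        println!("Hello, {name}!");
    } else {
        // Piped: emit the versioned JSON envelope instead.
        println!(
            "{{\"version\":\"1\",\"status\":\"success\",\"data\":{{\"greeting\":\"Hello, {name}!\"}}}}"
        );
    }
}
```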
```json
{
  "version": "1",
  "status": "error",
  "error": {
    "code": "invalid_input",
    "message": "Invalid input: name cannot be empty",
    "suggestion": "Check the --help output for valid arguments"
  }
}
```

| Code | Meaning | Agent Action |
|---|---|---|
| 0 | Success | Continue |
| 1 | Transient error (IO, network) | Retry |
| 2 | Config error | Fix setup |
| 3 | Bad input | Fix arguments |
| 4 | Rate limited | Wait and retry |
The agent reads the code and knows its next move without parsing the error message.
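One way to keep that vocabulary honest is a single enum that is the only path to `process::exit`, so no call site can invent a new number. A sketch; the variant names are illustrative:

```rust
use std::process;

// The full exit-code vocabulary, in one place.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Exit {
    Success = 0,     // continue
    Transient = 1,   // IO/network -- retry
    Config = 2,      // fix setup
    BadInput = 3,    // fix arguments
    RateLimited = 4, // wait and retry
}

// The only function allowed to terminate the process with an error.
fn bail(code: Exit) -> ! {
    process::exit(code as i32)
}

fn main() {
    let name = "Boris";
    if name.is_empty() {
        eprintln!("name cannot be empty");
        bail(Exit::BadInput); // agent sees 3 and fixes its arguments
    }
    println!("Hello, {name}!");
}
```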
The binary carries a minimal `SKILL.md` compiled in via `include_str!`. One command writes it to `~/.claude/skills/`, `~/.codex/skills/`, and `~/.gemini/skills/`. The skill is just a signpost -- a few lines saying "this tool exists, run `agent-info` for the rest." Binary update = skill update. No drift.
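The self-install can be a few lines of std: the skill text rides inside the binary and gets written out on demand. A sketch, where the embedded string stands in for `include_str!("SKILL.md")` and the demo writes under a temp directory rather than the real home:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// In the real binary this would be: include_str!("SKILL.md").
const SKILL: &str = "# greeter\nThis tool exists. Run `greeter agent-info` for the rest.\n";

// Write the embedded skill under each platform's skills directory.
fn install_skill(home: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut written = Vec::new();
    for platform in [".claude", ".codex", ".gemini"] {
        let dir = home.join(platform).join("skills").join("greeter");
        fs::create_dir_all(&dir)?;
        let path = dir.join("SKILL.md");
        fs::write(&path, SKILL)?;
        written.push(path);
    }
    Ok(written)
}

fn main() -> std::io::Result<()> {
    // Demo against a temp dir; the real command would resolve the home directory.
    let home = std::env::temp_dir().join("skill-demo");
    for path in install_skill(&home)? {
        println!("installed {}", path.display());
    }
    Ok(())
}
```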
Three install paths, one update mechanism.

Install (pick any):

```sh
brew tap your-org/tap && brew install your-cli   # Homebrew
cargo install your-cli                           # crates.io
curl -fsSL https://your-cli.dev/install.sh | sh  # shell script
```

Self-update (built into the binary):

```sh
your-cli update --check    # check for new version
your-cli update            # pull latest from GitHub Releases
your-cli skill install     # re-deploy updated skill
```
These lessons came from shipping CLIs built on these patterns and watching agents actually use them. Every one of the mistakes below reached production before we caught it.
**Wrong suggestions.** Our search CLI told agents to set `SEARCH_BRAVE_KEY` when the actual env var was `SEARCH_KEYS_BRAVE`. The agent followed the suggestion exactly, set the wrong variable, and reported auth still broken. Suggestions are not hints. They are instructions. Test them.
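A cheap guard is a unit test asserting that every suggestion string only names environment variables the tool actually reads. A sketch: `SEARCH_KEYS_BRAVE` is the real variable from the incident above, but the helper, the token heuristic, and the known-vars list are all illustrative:

```rust
// The env vars the binary actually reads -- the single source of truth.
const KNOWN_ENV_VARS: &[&str] = &["SEARCH_KEYS_BRAVE", "SEARCH_KEYS_GOOGLE"];

// Every token in a suggestion that looks like an env var must be a known one.
fn suggestion_is_valid(suggestion: &str) -> bool {
    suggestion
        .split(|c: char| !(c.is_ascii_uppercase() || c == '_'))
        .filter(|tok| tok.len() > 3 && tok.contains('_'))
        .all(|tok| KNOWN_ENV_VARS.contains(&tok))
}

fn main() {
    // The exact bug we shipped: a suggestion naming a variable that doesn't exist.
    assert!(suggestion_is_valid("Set SEARCH_KEYS_BRAVE and retry"));
    assert!(!suggestion_is_valid("Set SEARCH_BRAVE_KEY and retry"));
    println!("suggestion checks pass");
}
```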
**JSON only on the main command.** The primary search command returned proper JSON envelopes. But `config show`, `update --check`, and cache-miss paths printed raw text. An agent piping stdout into a JSON parser got a crash instead of data. Every subcommand, every code path, every error -- if it writes to stdout, it must respect the output format.
**Success that was failure.** All eleven providers errored out. The response: `{"status": "success", "results": []}`. The agent saw success and moved on. We added `partial_success` and `all_failed` as additional status values.
**Dead features in `agent-info`.** The manifest advertised search modes that existed in code but were never wired into the dispatch path. An agent that called `search --mode deep` got an "unknown mode" error despite `agent-info` promising it worked. If `agent-info` says the tool can do something, it must actually do it.
```text
agent-cli-framework/
  README.md           # You are here
  LICENSE             # MIT
  CONTRIBUTING.md     # How to contribute
  example/
    Cargo.toml        # Dependencies
    src/main.rs       # Complete working example (~280 lines)
```
The example is a greeter CLI that demonstrates all five patterns in one file. It's meant to be read, copied, and adapted.
| CLI | What it does | Install |
|---|---|---|
| search-cli | 11 search providers, 14 modes, one binary | cargo install agent-search |
| autoresearch | Autonomous experiment loops for any metric | cargo install autoresearch |
| xmaster | X/Twitter CLI with dual backends | cargo install xmaster |
| email-cli | Agent-friendly email via Resend API | cargo install email-cli |
- MCP vs CLI: Benchmarking AI Agent Cost & Reliability -- Scalekit
- Your MCP Server Is Eating Your Context Window -- Apideck
- CLI Is the New API and MCP -- Eugene Petrenko
- Reducing MCP Token Usage by 100x -- Speakeasy
Contributions are welcome. See CONTRIBUTING.md for guidelines.
MIT -- see LICENSE.