A minimal, no-dependency, hackable CLI coding agent.
It runs a simple chat loop, lets the model call a small set of local tools (read files, search, run shell commands, edit files), and supports “skills” as reusable instruction packs.
> [!TIP]
> See LEARN.md to learn more about AI coding agents.
> [!NOTE]
> The documentation below, LEARN.md, and this commit were 100% written by smolcode.
- Interactive terminal session (prompt/response loop) with a single agent model
- Multiple agents with a switcher: Build and Plan agents
- Subagents: delegates focused subtasks to specialized worker agents and merges the results
- Conversation compaction
- Tool calling:
  - Read files with line numbers
  - Search the workspace (glob + regex grep)
  - Run local shell commands
  - Apply safe, exact-text edits to files
- Skills system: load task-specific instruction bundles (e.g. code review, Python best practices) to steer behavior
- Lightweight UI rendering: shows tool calls and previews results while you work
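Conversation compaction keeps the context within the model's window by folding older turns into a shorter form. A minimal sketch of the idea (hypothetical; the actual implementation may ask the model itself to summarize):

```python
def compact(context, keep_last=6):
    """Naive compaction: keep the system prompt and the most recent
    messages, and collapse everything in between into a summary stub."""
    if len(context) <= keep_last + 1:
        return context
    system, rest = context[0], context[1:]
    old, recent = rest[:-keep_last], rest[-keep_last:]
    summary = {"role": "user",
               "content": f"[Summary of {len(old)} earlier messages]"}
    return [system, summary] + recent
```

In practice the summary would be produced by condensing the old turns with the model, rather than a placeholder string.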
The project is intentionally small: one CLI entrypoint, one agent wrapper around the API, a session loop, and a handful of tools.
- `app/__init__.py`: CLI entrypoint. Creates a `Session` and starts the loop.
- `app/backend/*`: Orchestrates the conversation: collects user input, calls the agent, dispatches tool calls, and appends tool outputs back into the message stream.
- `app/core/*`: Agent primitives described below.
- `app/plugins/*`: Usage of the primitives: tools, skills, subagents, the OpenAI provider, ...
- `app/ui/*`: Terminal output formatting and event sink; renders backend events.
- `config/agents/`: Markdown agent definitions.
- `config/subagents/`: Worker agents used for delegated subtasks.
```mermaid
sequenceDiagram
    autonumber
    participant U as User (Terminal)
    participant S as Session
    participant A as Agent
    participant API as OpenAI Responses API
    participant T as Local Tools
    U->>S: typed input
    S->>A: context[]
    A->>API: /v1/responses (model + instructions + tools)
    API-->>A: output blocks (message | function_call)
    alt message block
        A-->>U: print assistant text
        A-->>S: extend context
    else function_call block
        A->>T: run tool(name, args)
        T-->>A: tool output
        A-->>S: extend context
        A->>API: continue
        API-->>A: next blocks
    end
```
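The loop above can be sketched roughly as follows (names like `run_turn` and the registry shape are hypothetical; see `app/backend/*` for the real orchestration):

```python
import json

def run_turn(client, context, tools, registry):
    """One user turn: call the Responses API, execute any requested
    tools, and loop until the model emits only plain messages."""
    while True:
        response = client.responses.create(
            model="gpt-4.1", input=context, tools=tools)
        called_tool = False
        for block in response.output:
            if block.type == "function_call":
                # Dispatch to the local tool and feed the result back.
                result = registry[block.name](**json.loads(block.arguments))
                context.append(block)
                context.append({"type": "function_call_output",
                                "call_id": block.call_id,
                                "output": str(result)})
                called_tool = True
            elif block.type == "message":
                context.append(block)
        if not called_tool:
            return context
```

The key design point is that tool outputs go back into the same context, so the model can chain tool calls until it is ready to answer.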
```mermaid
flowchart LR
    User([User]) --> Session
    Session --> Context
    Session --> Agent
    Agent --> Context
    Agent --> LLM[(LLM API)]
    LLM --> Agent
    Agent --> Tools[Local Tools]
    Tools --> Agent
    Agent --> Session
    Session --> User
```
- Python >= 3.13 (see `pyproject.toml`)
- Authentication (one of the following):
  - API key: an OpenAI API key
  - OAuth: a ChatGPT Plus/Pro subscription (uses the Codex API)
Install with pipx for an isolated CLI (recommended):

```shell
pipx install -e .
```

Or install in a virtual environment:

```shell
pip install -e .
```

Or install with uv:

```shell
uv tool install --editable .
```

smolcode supports two authentication modes:
Use your OpenAI API key directly:

```shell
export OPENAI_API_KEY="sk-..."
smolcode
```

> [!CAUTION]
> OpenAI OAuth is used because it was stated as allowed for OSS projects in this tweet. Please contact me if this is not authorized.
Use your ChatGPT subscription via OAuth:

```shell
# First, login (opens browser for authentication)
smolcode login

# Then, run smolcode
export SMOLCODE_OAUTH="true"
smolcode
```

The OAuth flow uses PKCE and stores tokens in `~/.config/smolcode/auth.json`.
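For reference, PKCE pairs a random code verifier with its SHA-256 challenge; the authorization server later checks that the verifier hashes to the challenge it was given. A minimal sketch of the pair generation (smolcode's actual flow may differ in details):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge
    (base64url without padding, per RFC 7636)."""
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The challenge travels in the initial authorization request; the verifier is only sent later on the token exchange, so an intercepted authorization code is useless on its own.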
- `/agent {build|plan}`: switch the current agent session (keeps history)
- `/quit` or `/q`: quit
- `/clear` or `/c`: clear the current conversation context
- Default markdown config lives in `config/agents/`, `config/subagents/`, and `config/skills/`.
- Base agent instructions live in `config/agents/common.txt` (override in `$XDG_CONFIG_HOME/smolcode/agents/common.txt`).
- You can add or override agents/subagents/skills in `$XDG_CONFIG_HOME/smolcode` (fallback: `~/.config/smolcode`); `$XDG_CONFIG_HOME` takes priority over the repository config during loading.
- Add new tools by implementing `Tool` in `app/tool.py`, then registering them in `app/registry.py`.
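As a sketch, a new tool might look something like this (the `Tool` interface shown here is assumed; check `app/tool.py` for the actual base class and `app/registry.py` for registration):

```python
# Hypothetical tool shape; the real Tool base class lives in app/tool.py.
class WordCountTool:
    name = "word_count"
    description = "Count the words in a file."

    def run(self, path: str) -> str:
        # Tools return strings, which are appended to the message stream.
        with open(path, encoding="utf-8") as f:
            return str(len(f.read().split()))
```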
> [!NOTE]
> `app/tools/edit.py` implements an exact-text replacement strategy with fallbacks to make edits more robust.
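The core idea can be sketched as follows (a simplified illustration, not the actual code in `app/tools/edit.py`):

```python
import re

def apply_edit(text, old, new):
    """Replace `old` with `new` when it occurs exactly once; fall back to
    a whitespace-insensitive match so edits survive indentation drift."""
    if text.count(old) == 1:
        return text.replace(old, new)
    # Fallback: match the lines of `old`, ignoring surrounding whitespace.
    pattern = r"[ \t]*\n[ \t]*".join(
        re.escape(line.strip()) for line in old.splitlines())
    matches = re.findall(pattern, text)
    if len(matches) == 1:
        return text.replace(matches[0], new)
    raise ValueError("edit target not found exactly once")
```

Requiring a unique match is what makes exact-text edits safe: the tool refuses to guess when the target is ambiguous or missing.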
