A powerful, extensible AI coding agent CLI with multi-provider support, built-in tools, and a rich extension system.
- Multi-Provider LLM Support: Anthropic, OpenAI, Google Gemini, Ollama, Azure OpenAI, AWS Bedrock, OpenRouter, and more
- Built-in Core Tools: bash, read, write, edit, grep, find, ls - no MCP overhead
- MCP Integration: Connect external MCP servers for expanded capabilities
- Extension System: Write custom tools, commands, widgets, and UI modifications in Go
- Interactive TUI: Rich terminal interface powered by Bubble Tea with streaming, syntax highlighting, and custom rendering
- Session Management: Tree-based conversation history with branching support
- Non-Interactive Mode: Script-friendly positional args with JSON output
- ACP Server: Run Kit as an Agent Client Protocol agent over stdio
- Go SDK: Embed Kit in your own applications
Install with npm:

```sh
npm install -g @mark3labs/kit
```

Or with `go install`:

```sh
go install github.com/mark3labs/kit/cmd/kit@latest
```

Or build from source:

```sh
git clone https://github.com/mark3labs/kit.git
cd kit
go build -o kit ./cmd/kit
```

Basic usage:

```sh
# Start interactive session
kit

# Run a one-off prompt
kit "List files in src/"

# Attach files as context
kit @main.go @test.go "Review these files"

# Continue the most recent session
kit --continue

# Use a specific model
kit --model anthropic/claude-sonnet-4-5-20250929
```

Non-interactive usage:

```sh
# Get JSON output for scripting
kit "Explain main.go" --json

# Quiet mode (final response only)
kit "Run tests" --quiet

# Ephemeral mode (no session file)
kit "Quick question" --no-session
```

Kit can run as an ACP (Agent Client Protocol) agent server, enabling ACP-compatible clients (such as OpenCode) to drive Kit as a remote coding agent over stdio.
```sh
# Start Kit as an ACP server (communicates via JSON-RPC 2.0 on stdin/stdout)
kit acp

# With debug logging to stderr
kit acp --debug
```

The ACP server exposes Kit's full capabilities (LLM execution; tool calls such as bash, read, write, edit, and grep; and session persistence) over the standard ACP protocol. Sessions are persisted to Kit's normal JSONL session files, so they can be resumed later.
Kit looks for configuration in the following locations (in order of priority):
- CLI flags
- Environment variables (with `KIT_` prefix)
- `./.kit.yml` (project-local)
- `~/.kit.yml` (global)
Create `~/.kit.yml`:

```yaml
model: anthropic/claude-sonnet-4-5-20250929
max-tokens: 4096
temperature: 0.7
stream: true
```

Or configure via environment variables:

```sh
export ANTHROPIC_API_KEY="sk-..."
export OPENAI_API_KEY="sk-..."
export KIT_MODEL="openai/gpt-4o"
```

Add external MCP servers to `.kit.yml`:
```yaml
mcpServers:
  filesystem:
    type: local
    command: ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed"]
    environment:
      LOG_LEVEL: "info"
    allowedTools: ["read_file", "write_file"]
  search:
    type: remote
    url: "https://mcp.example.com/search"
```
```text
# Model and provider
--model, -m          Model to use (provider/model format)
--provider-api-key   API key for the provider
--provider-url       Base URL for provider API
--tls-skip-verify    Skip TLS certificate verification

# Session management
--session, -s        Open specific JSONL session file
--continue, -c       Resume most recent session for current directory
--resume, -r         Interactive session picker
--no-session         Ephemeral mode, no persistence

# Behavior (non-interactive: pass prompt as positional arg)
--quiet              Suppress all output (non-interactive only)
--json               Output response as JSON (non-interactive only)
--no-exit            Enter interactive mode after prompt completes
--max-steps          Maximum agent steps (0 for unlimited)
--stream             Enable streaming output (default: true)
--compact            Enable compact output mode
--auto-compact       Auto-compact conversation near context limit

# Extensions
--extension, -e      Load additional extension file(s) (repeatable)
--no-extensions      Disable all extensions

# Generation parameters
--max-tokens         Maximum tokens in response (default: 4096)
--temperature        Randomness 0.0-1.0 (default: 0.7)
--top-p              Nucleus sampling 0.0-1.0 (default: 0.95)
--top-k              Limit to top K tokens (default: 40)
--stop-sequences     Custom stop sequences (comma-separated)

# System
--config             Config file path (default: ~/.kit.yml)
--system-prompt      System prompt text or file path
--debug              Enable debug logging
```
```sh
# Authentication (for OAuth-enabled providers)
kit auth login       # Start OAuth flow
kit auth logout      # Remove credentials
kit auth status      # Check authentication status

# Model database
kit models           # List available models
kit models --all     # Show all providers (not just Fantasy-compatible)
kit update-models    # Update local model database from models.dev

# Extension management
kit extensions list      # List discovered extensions
kit extensions validate  # Validate extension files
kit extensions init      # Generate example extension template

# ACP server
kit acp              # Start as ACP agent (stdio JSON-RPC)
kit acp --debug      # With debug logging to stderr
```

Extensions are Go source files that run via the Yaegi interpreter. They can add custom tools, slash commands, widgets, and keyboard shortcuts, and hook into lifecycle events.
```go
//go:build ignore

package main

import "kit/ext"

func Init(api ext.API) {
	api.OnSessionStart(func(_ ext.SessionStartEvent, ctx ext.Context) {
		ctx.SetFooter(ext.HeaderFooterConfig{
			Content: ext.WidgetContent{Text: "Custom Footer"},
		})
	})
}
```

Usage:

```sh
kit -e examples/extensions/minimal.go
```

Lifecycle Events: OnSessionStart, OnSessionShutdown, OnAgentStart, OnAgentEnd, OnToolCall, OnToolResult, OnInput, OnMessageStart, OnMessageUpdate, OnMessageEnd, OnModelChange, OnContextPrepare, OnBeforeFork, OnBeforeSessionSwitch, OnBeforeCompact
Custom Components:

- Tools: Add new tools the LLM can invoke
- Commands: Register slash commands (e.g., `/mycommand`)
- Widgets: Persistent status displays above/below input
- Shortcuts: Global keyboard shortcuts
- Overlays: Modal dialogs with markdown content
- Tool Renderers: Customize how tool calls display
- Editor Interceptors: Handle key events and wrap rendering

See the examples/extensions/ directory:

- `minimal.go` - Clean UI with custom footer
- `notify.go` - Desktop notifications
- `widget-status.go` - Persistent status widgets
- `custom-editor-demo.go` - Vim-like modal editor
- `prompt-demo.go` - Interactive prompts (select/confirm/input)
- `tool-logger.go` - Log all tool calls
- `overlay-demo.go` - Modal dialogs
- `plan-mode.go` - Read-only planning mode
- `subagent-widget.go` - Multi-agent orchestration
- `auto-commit.go` - Auto-commit on shutdown
Auto-discovery (loads automatically):

- `./.kit/extensions/*.go` (project-local)
- `~/.config/kit/extensions/*.go` (global)

Explicit loading:

```sh
kit -e path/to/extension.go
kit -e ext1.go -e ext2.go   # Multiple extensions
```

Disable auto-loading:

```sh
kit --no-extensions
```

Kit uses a tree-based session model that supports branching and forking conversations.
- Default location: `~/.local/share/kit/sessions/<cwd-hash>/<uuid>.jsonl`
- Each line is a session entry (messages, tool calls, extension data)
- Supports branching from any message to explore alternate paths
```sh
# Resume most recent session for current directory
kit --continue
kit -c

# Interactive session picker
kit --resume
kit -r

# Open specific session file
kit --session path/to/session.jsonl
kit -s path/to/session.jsonl

# Ephemeral mode (no file persistence)
kit --no-session
```

Embed Kit in your Go applications:
```go
package main

import (
	"context"
	"log"

	kit "github.com/mark3labs/kit/pkg/kit"
)

func main() {
	ctx := context.Background()

	// Create Kit instance with default configuration
	host, err := kit.New(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer host.Close()

	// Send a prompt
	response, err := host.Prompt(ctx, "What is 2+2?")
	if err != nil {
		log.Fatal(err)
	}
	println(response)
}
```

Configure with options:

```go
host, err := kit.New(ctx, &kit.Options{
	Model:        "ollama/llama3",
	SystemPrompt: "You are a helpful bot",
	ConfigFile:   "/path/to/config.yml",
	MaxSteps:     10,
	Streaming:    true,
	Quiet:        true,
})
```

Observe tool calls and streaming output with callbacks:

```go
response, err := host.PromptWithCallbacks(
	ctx,
	"List files in current directory",
	func(name, args string) {
		// Tool call started
		println("Calling tool:", name)
	},
	func(name, args, result string, isError bool) {
		// Tool call completed
		if isError {
			println("Tool failed:", name)
		}
	},
	func(chunk string) {
		// Streaming text chunk
		print(chunk)
	},
)
```

Sessions persist across prompts and can be saved, restored, and cleared:

```go
host.Prompt(ctx, "My name is Alice")
response, _ := host.Prompt(ctx, "What's my name?")

host.SaveSession("./session.json")
host.LoadSession("./session.json")
host.ClearSession()
```

Spawn Kit as a subprocess for multi-agent orchestration:
```sh
kit "Analyze codebase" \
  --json \
  --no-session \
  --no-extensions \
  --quiet \
  --model anthropic/claude-haiku-3-5-20241022
```

Parse the JSON output:

```json
{
  "response": "Final assistant response text",
  "model": "anthropic/claude-haiku-3-5-20241022",
  "usage": {
    "input_tokens": 1024,
    "output_tokens": 512,
    "total_tokens": 1536
  },
  "messages": [...]
}
```
Test the TUI non-interactively:

```sh
# Start Kit in a detached tmux session
tmux new-session -d -s kittest -x 120 -y 40 \
  "kit -e ext.go --no-session 2>kit.log"

# Wait for startup
sleep 3

# Capture screen
tmux capture-pane -t kittest -p

# Send input
tmux send-keys -t kittest '/command' Enter

# Cleanup
tmux kill-session -t kittest
```

Development commands:

```sh
# Build
go build -o output/kit ./cmd/kit

# Run tests
go test -race ./...

# Run specific test
go test -race ./cmd -run TestScriptExecution

# Lint
go vet ./...

# Format
go fmt ./...
```

Project layout:

```text
cmd/kit/              - CLI entry point
cmd/                  - CLI command implementations
pkg/kit/              - Go SDK
internal/agent/       - Agent loop and tool execution
internal/ui/          - Bubble Tea TUI components
internal/extensions/  - Yaegi extension system
internal/core/        - Built-in tools
internal/tools/       - MCP tool integration
internal/config/      - Configuration management
internal/acpserver/   - ACP (Agent Client Protocol) server
internal/session/     - Session persistence
internal/models/      - Provider and model management
examples/extensions/  - Example extension files
```
- Anthropic - Claude models (native, prompt caching, OAuth)
- OpenAI - GPT models
- Google - Gemini models
- Ollama - Local models
- Azure OpenAI - Azure-hosted OpenAI
- AWS Bedrock - Bedrock models
- Google Vertex - Claude on Vertex AI
- OpenRouter - Multi-provider router
- Vercel AI - Vercel AI SDK models
- Auto-routed - Any provider from models.dev database
```text
provider/model   # Standard format

anthropic/claude-sonnet-4-5-20250929
openai/gpt-4o
ollama/llama3
google/gemini-2.0-flash-exp
```

Alias resolution:

```text
claude-opus-latest      → claude-opus-4-20250514
claude-sonnet-latest    → claude-sonnet-4-5-20250929
claude-3-5-haiku-latest → claude-3-5-haiku-20241022
```

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
