# acorn

LLM agent framework with structured I/O: build AI agents with type-safe inputs and outputs, automatic tool calling, and powerful agentic loops.
## Features

- **Structured I/O** - Pydantic models for inputs and outputs
- **Agentic Loops** - Multi-turn execution with tool calling
- **Auto Tool Schemas** - Generated from type hints and docstrings
- **Dynamic Tools** - Add or remove tools during execution
- **Parse Error Recovery** - Automatic retry on validation failures
- **Step Callbacks** - Full control over loop behavior
- **LiteLLM Integration** - Works with any LLM provider
- **Streaming Responses** - Real-time output with partial structured updates
- **Provider Caching** - Reduce latency and cost with prompt caching
- **Model Fallbacks** - Automatic provider failover for high availability
- **Branching Workflows** - Spawn sub-agents that extend parent capabilities for parallel analysis and map-reduce patterns
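Parse error recovery, listed above, is a standard pattern: when a model response fails validation, the error is fed back to the model and the call is retried. A library-agnostic sketch of the idea (the `call_model` and `validate` functions are hypothetical stand-ins, not acorn's API):

```python
import json


def call_model(prompt: str, attempt: int) -> str:
    """Hypothetical LLM call; returns malformed JSON on the first attempt."""
    if attempt == 0:
        return '{"summary": "too short"'  # truncated JSON
    return '{"summary": "A valid summary.", "word_count": 3}'


def validate(raw: str) -> dict:
    """Parse and check required fields, raising ValueError on failure."""
    data = json.loads(raw)  # json.JSONDecodeError is a subclass of ValueError
    if "summary" not in data or "word_count" not in data:
        raise ValueError("missing required field")
    return data


def call_with_recovery(prompt: str, max_retries: int = 3) -> dict:
    last_error = None
    for attempt in range(max_retries):
        raw = call_model(prompt, attempt)
        try:
            return validate(raw)
        except ValueError as e:
            # Feed the validation error back so the model can correct itself
            last_error = e
            prompt = f"{prompt}\n\nYour last reply failed to parse ({e}); try again."
    raise RuntimeError(f"gave up after {max_retries} attempts: {last_error}")
```

Here the first attempt fails to parse, the error is appended to the prompt, and the second attempt succeeds.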
## Installation

```bash
pip install acorn
```

Set your API key:

```bash
# For Anthropic Claude
export ANTHROPIC_API_KEY="your-key-here"

# Or for OpenAI
export OPENAI_API_KEY="your-key-here"

# Or any other LiteLLM-supported provider
```

## Quick Start

```python
from pydantic import BaseModel, Field

from acorn import Module


class Input(BaseModel):
    text: str = Field(description="The text to summarize")
    max_words: int = Field(default=100, description="Maximum words in summary")


class Output(BaseModel):
    summary: str = Field(description="The concise summary")
    word_count: int = Field(description="Number of words in summary")


class Summarizer(Module):
    """Summarize text concisely."""

    initial_input = Input
    final_output = Output
    model = "anthropic/claude-sonnet-4-5-20250514"


# Use it
summarizer = Summarizer()
result = summarizer(
    text="Long article text here...",
    max_words=50,
)

print(result.summary)
print(f"Words: {result.word_count}")
```

## Agentic Loops and Tools

```python
from pydantic import BaseModel, Field

from acorn import Module, tool


class Input(BaseModel):
    topic: str = Field(description="Research topic")
    depth: str = Field(default="shallow", description="Research depth")


class Output(BaseModel):
    findings: str = Field(description="Summary of findings")
    sources: list[str] = Field(description="Sources consulted")


class ResearchAgent(Module):
    """Research assistant with tools."""

    initial_input = Input
    max_steps = 5  # Enable agentic loop
    final_output = Output
    model = "anthropic/claude-sonnet-4-5-20250514"

    @tool
    def search(self, query: str) -> list:
        """Search for information."""
        # Your search implementation
        return ["result1", "result2"]

    @tool
    def analyze(self, data: str) -> str:
        """Analyze collected data."""
        # Your analysis implementation
        return f"Analysis: {data}"

    def on_step(self, step):
        """Called after each step."""
        print(f"Step {step.counter}")

        # Terminate early once enough data has been collected
        if len(step.tool_results) >= 3:
            step.finish(
                findings="Sufficient data collected",
                sources=["source1", "source2"],
            )
        return step


# Use it
agent = ResearchAgent()
result = agent(topic="Large Language Models", depth="shallow")
```

## Documentation

- Getting Started - Installation and first steps
- Module Reference - Complete Module API documentation
- Branching - Sub-agents and parallel processing
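Branching workflows follow the familiar map-reduce shape: fan sub-tasks out in parallel, then merge the partial results. A library-agnostic sketch using only the standard library (`analyze_chunk` is a hypothetical stand-in for a sub-agent call, not acorn's branching API):

```python
from concurrent.futures import ThreadPoolExecutor


def analyze_chunk(chunk: str) -> dict:
    """Hypothetical sub-agent call: analyze one chunk independently."""
    return {"chunk": chunk, "length": len(chunk)}


def map_reduce(chunks: list[str]) -> dict:
    # Map: run one "sub-agent" per chunk in parallel
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(analyze_chunk, chunks))
    # Reduce: merge the partial results into a single answer
    return {
        "chunks_analyzed": len(results),
        "total_length": sum(r["length"] for r in results),
    }
```

See the Branching page above for how acorn's own sub-agent API expresses this pattern.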
## API Reference

### Module

Base class for LLM agents. Configure with:

- `model` - LLM to use (required; no default)
- `temperature` - Sampling temperature
- `max_tokens` - Maximum tokens to generate
- `max_steps` - Max agentic loop iterations (`None` = single-turn)
- `initial_input` - Pydantic model for the input schema
- `final_output` - Pydantic model for the output schema
- `tools` - List of available tools
- `cache` - Enable provider-level prompt caching
- `model_fallbacks` - List of fallback models for automatic failover
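Putting several of these options together in one subclass (a sketch based on the attribute list above; the fallback model string and numeric values are illustrative):

```python
from pydantic import BaseModel

from acorn import Module


class Answer(BaseModel):
    text: str


class ConfiguredAgent(Module):
    """Agent with tuned sampling, caching, and automatic failover."""

    final_output = Answer
    model = "anthropic/claude-sonnet-4-5-20250514"  # required; no default
    model_fallbacks = ["openai/gpt-4o"]  # illustrative; tried in order on provider failure
    temperature = 0.2
    max_tokens = 1024
    max_steps = 3  # None would mean single-turn
    cache = True   # provider-level prompt caching
```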
### Tools

Functions the LLM can call:

```python
@tool
def search(query: str, limit: int = 10) -> list:
    """Search for information.

    Args:
        query: The search query
        limit: Maximum results to return
    """
    return search_api(query, limit)
```

The schema is generated automatically from the type hints and docstring.
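The general mechanism behind schema generation can be sketched with the standard library alone; this illustrates the technique, not acorn's actual implementation:

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema types
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean", list: "array"}


def tool_schema(fn) -> dict:
    """Build a JSON-Schema-like description from type hints and the docstring."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        # Parameters without defaults are required
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (inspect.getdoc(fn) or "").split("\n")[0],
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


def search(query: str, limit: int = 10) -> list:
    """Search for information."""
    return []
```

Applied to `search`, this yields a schema whose `required` list contains only `query`, since `limit` has a default.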
### Step Callbacks

Control agentic loop execution:

```python
def on_step(self, step):
    # Access step info
    print(f"Step {step.counter}")
    print(f"Tools called: {[tc.name for tc in step.tool_calls]}")

    # Dynamic tool management
    step.add_tool(new_tool)
    step.remove_tool("old_tool")

    # Early termination
    if condition:
        step.finish(result="Early exit")

    return step
```

## Examples

Try them live on the Gradio app or browse the source in examples/:
| Example | Category | Description |
|---|---|---|
| Simple Q&A | Basic | Single-turn question answering with structured output |
| HN Production Readiness | Agentic | Checks if a trending HN project is production-ready |
| Documentation Coverage | Agentic | Scores documentation coverage of a GitHub repo (0β100) |
| Bus Factor Calculator | Branching | Calculates the bus factor of a GitHub repository |
| License Compatibility | Agentic | Checks dependency license compatibility for conflicts |
| Dependency Bloat Scanner | Branching | Finds redundant and overlapping libraries in your deps |
## Testing

```bash
# Run all tests
pytest

# With coverage
pytest --cov=acorn

# Specific test file
pytest tests/test_agentic_loop.py -v
```

Current status: 201 tests passing, 85% coverage.
## Roadmap

- [x] Single-turn execution
- [x] Multi-turn agentic loops
- [x] Tool calling with auto-schema generation
- [x] Parse error recovery
- [x] Dynamic tool management
- [x] Step callbacks
- [x] Streaming responses with partial structured output
- [x] Forced termination strategies
- [x] Provider caching
- [x] Model fallbacks
- [x] Branching workflows
- [ ] Async support
- [ ] More docs
- [ ] Integration examples with different providers (vector DBs, observability tools, etc.)
## Contributing

Contributions welcome! Please:

- Check open issues for areas to help
- Write tests for new features (maintain >80% coverage)
- Update documentation
- Add examples for new features
## Acknowledgments

Thanks to @rosenbrockc for donating the `acorn` pip package name.