acorn
LLM agent framework with structured I/O

Build AI agents with type-safe inputs and outputs, automatic tool calling, and powerful agentic loops.



✨ Features

  • 🎯 Structured I/O - Pydantic models for inputs and outputs
  • πŸ€– Agentic Loops - Multi-turn execution with tool calling
  • πŸ› οΈ Auto Tool Schemas - Generate from type hints and docstrings
  • πŸ”„ Dynamic Tools - Add/remove tools during execution
  • βœ… Parse Error Recovery - Automatic retry on validation failures
  • πŸ“Š Step Callbacks - Full control over loop behavior
  • πŸ”Œ LiteLLM Integration - Works with any LLM provider
  • 🌊 Streaming Responses - Real-time output with partial structured updates
  • πŸ’Ύ Provider Caching - Reduce latency and cost with prompt caching
  • πŸ›‘οΈ Model Fallbacks - Automatic provider failover for high availability
  • 🌳 Branching Workflows - Spawn sub-agents that extend parent capabilities for parallel analysis and map-reduce patterns

πŸš€ Quick Start

Installation

pip install acorn

Set your API key:

# For Anthropic Claude
export ANTHROPIC_API_KEY="your-key-here"

# Or for OpenAI
export OPENAI_API_KEY="your-key-here"

# Or any other LiteLLM-supported provider

Single-Turn Example

from pydantic import BaseModel, Field
from acorn import Module

class Input(BaseModel):
    text: str = Field(description="The text to summarize")
    max_words: int = Field(default=100, description="Maximum words in summary")

class Output(BaseModel):
    summary: str = Field(description="The concise summary")
    word_count: int = Field(description="Number of words in summary")

class Summarizer(Module):
    """Summarize text concisely."""

    initial_input = Input
    final_output = Output
    model = "anthropic/claude-sonnet-4-5-20250514"

# Use it
summarizer = Summarizer()
result = summarizer(
    text="Long article text here...",
    max_words=50
)

print(result.summary)
print(f"Words: {result.word_count}")

Multi-Turn Agentic Loop

from pydantic import BaseModel, Field
from acorn import Module, tool

class Input(BaseModel):
    topic: str = Field(description="Research topic")
    depth: str = Field(default="shallow", description="Research depth")

class Output(BaseModel):
    findings: str = Field(description="Summary of findings")
    sources: list[str] = Field(description="Sources consulted")

class ResearchAgent(Module):
    """Research assistant with tools."""

    initial_input = Input
    max_steps = 5  # Enable agentic loop
    final_output = Output
    model = "anthropic/claude-sonnet-4-5-20250514"

    @tool
    def search(self, query: str) -> list:
        """Search for information."""
        # Your search implementation
        return ["result1", "result2"]

    @tool
    def analyze(self, data: str) -> str:
        """Analyze collected data."""
        # Your analysis implementation
        return f"Analysis: {data}"

    def on_step(self, step):
        """Called after each step."""
        print(f"Step {step.counter}")

        # Early termination if done
        if len(step.tool_results) >= 3:
            step.finish(
                findings="Sufficient data collected",
                sources=["source1", "source2"]
            )

        return step

# Use it
agent = ResearchAgent()
result = agent(topic="Large Language Models", depth="shallow")

πŸ“– Documentation

askmanu.github.io/acorn


πŸ“š Core Concepts

Module

Base class for LLM agents. Configure with:

  • model - LLM to use (required - no default)
  • temperature - Sampling temperature
  • max_tokens - Maximum tokens to generate
  • max_steps - Max agentic loop iterations (None = single-turn)
  • initial_input - Pydantic model for input schema
  • final_output - Pydantic model for output schema
  • tools - List of available tools
  • cache - Enable provider-level prompt caching
  • model_fallbacks - List of fallback models for automatic failover
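Taken together, a fully configured module using the attributes above might look like this. This is a hedged sketch: it reuses the `Input`/`Output` Pydantic models from the Quick Start, and the fallback model string is a placeholder you should swap for your own provider.

```python
from acorn import Module

class RobustAgent(Module):
    """Agent with caching, fallbacks, and a bounded agentic loop."""

    initial_input = Input      # Pydantic input schema (as defined in the Quick Start)
    final_output = Output      # Pydantic output schema
    model = "anthropic/claude-sonnet-4-5-20250514"
    model_fallbacks = ["openai/gpt-4o"]  # placeholder; tried if the primary provider fails
    temperature = 0.2
    max_tokens = 2048
    max_steps = 10             # None would make this single-turn
    cache = True               # provider-level prompt caching
```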

Tools

Functions the LLM can call:

@tool
def search(query: str, limit: int = 10) -> list:
    """Search for information.

    Args:
        query: The search query
        limit: Maximum results to return
    """
    return search_api(query, limit)

Schema is automatically generated from type hints and docstring.
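The generation step can be sketched with the standard library alone: inspect the function signature for parameter names, types, and defaults, and take the description from the first line of the docstring. This is a simplified illustration of the idea, not acorn's actual implementation.

```python
import inspect
from typing import get_type_hints

# Map Python types to JSON Schema type names (simplified; a real
# implementation would also handle Optional, nested models, etc.).
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean", list: "array"}

def build_schema(fn):
    """Build a JSON-Schema-style tool description from a plain function."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name), "string")}
        # Parameters without defaults are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (inspect.getdoc(fn) or "").split("\n")[0],
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def search(query: str, limit: int = 10) -> list:
    """Search for information."""
    return []

schema = build_schema(search)
# schema["parameters"]["required"] == ["query"]; limit is optional
```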

Step Callback

Control agentic loop execution:

def on_step(self, step):
    # Access step info
    print(f"Step {step.counter}")
    print(f"Tools called: {[tc.name for tc in step.tool_calls]}")

    # Dynamic tool management
    step.add_tool(new_tool)
    step.remove_tool("old_tool")

    # Early termination
    if condition:
        step.finish(result="Early exit")

    return step
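Conceptually, the loop runs a model turn, hands a step object to the callback, and stops when `finish` is called or `max_steps` is reached. The control flow can be sketched with a hypothetical stand-in `Step` class and driver (not acorn's internals):

```python
class Step:
    """Minimal stand-in for a per-iteration step object (hypothetical)."""
    def __init__(self, counter):
        self.counter = counter
        self.tool_results = []
        self.result = None
        self.done = False

    def finish(self, **fields):
        """Mark the loop as complete with the given output fields."""
        self.result = fields
        self.done = True

def run_loop(on_step, max_steps):
    """Drive the loop: run a turn, invoke the callback, check termination."""
    for counter in range(1, max_steps + 1):
        step = Step(counter)
        step.tool_results.append(f"result-{counter}")  # stand-in for real tool calls
        step = on_step(step)
        if step.done:
            return step.result
    return None  # max_steps exhausted without finish()

def on_step(step):
    if step.counter >= 2:
        step.finish(findings="done")
    return step

result = run_loop(on_step, max_steps=5)
# result == {"findings": "done"}; the callback ended the loop at step 2
```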

🎯 Examples

Try them live on the Gradio app or browse the source in examples/:

| Example | Category | Description |
| --- | --- | --- |
| Simple Q&A | Basic | Single-turn question answering with structured output |
| HN Production Readiness | Agentic | Checks if a trending HN project is production-ready |
| Documentation Coverage | Agentic | Scores documentation coverage of a GitHub repo (0–100) |
| Bus Factor Calculator | Branching | Calculates the bus factor of a GitHub repository |
| License Compatibility | Agentic | Checks dependency license compatibility for conflicts |
| Dependency Bloat Scanner | Branching | Finds redundant and overlapping libraries in your deps |

πŸ§ͺ Testing

# Run all tests
pytest

# With coverage
pytest --cov=acorn

# Specific test file
pytest tests/test_agentic_loop.py -v

Current status: 201 tests passing, 85% coverage


πŸ›£οΈ Roadmap

βœ… Completed

  • Single-turn execution
  • Multi-turn agentic loops
  • Tool calling with auto-schema generation
  • Parse error recovery
  • Dynamic tool management
  • Step callbacks
  • Streaming responses with partial structured output
  • Forced termination strategies
  • Provider caching
  • Model fallbacks
  • Branching workflows

πŸ“‹ Planned

  • Async support
  • More docs
  • Integration examples with different providers (vector DBs, observability tools, etc.)

🀝 Contributing

Contributions welcome! Please:

  1. Check open issues for areas to help
  2. Write tests for new features (maintain >80% coverage)
  3. Update documentation
  4. Add examples for new features

πŸ™ Acknowledgments

Thanks to @rosenbrockc for donating the acorn pip package name.


πŸ“„ License

MIT License
