A clean ReAct pattern implementation using LangGraph and LiteLLM with automatic LangSmith tracing.
- Universal LLM Support via LiteLLM (OpenAI, Anthropic, Gemini, etc.)
- LangSmith Tracing with custom `wrap_litellm()` wrapper (similar to `wrap_openai()`)
- ReAct Pattern for tool-augmented reasoning
- Simple example tools: weather lookup and calculator
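
The repository's actual tool definitions aren't shown here; as a minimal sketch, weather and calculator tools of that shape could be defined with `langchain_core`'s `@tool` decorator (an assumption about how the tools are wired up):

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city (stubbed for the demo)."""
    return f"It is sunny in {city} today."

@tool
def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression."""
    # eval() is fine for a toy demo but unsafe on untrusted input
    return str(eval(expression))
```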
```bash
# Install
uv sync

# Configure
cp .env.example .env
# Edit .env with your API key and model choice
```
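
The exact variable names are defined in `.env.example`; assuming an OpenAI model, a filled-in `.env` might look roughly like this (the `MODEL` key is hypothetical):

```bash
OPENAI_API_KEY=sk-...        # provider key read by LiteLLM
LANGSMITH_TRACING=true       # enable LangSmith tracing
LANGSMITH_API_KEY=lsv2_...   # LangSmith API key
MODEL=gpt-4o-mini            # hypothetical: model name passed to LiteLLM
```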
```bash
# Run
python agent.py
```

Reusable wrapper for automatic LangSmith tracing of LiteLLM calls:
```python
import asyncio

import litellm
from litellm_wrapper import wrap_litellm

wrap_litellm(name="LiteLLM")

async def main():
    # All litellm calls are now traced automatically
    response = await litellm.acompletion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```

Features:
- Traces `litellm.completion()` and `litellm.acompletion()`
- Appears in LangSmith with `run_type="llm"`
- Captures model, temperature, and metadata
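
For reference, a minimal sketch of how such a wrapper can be built on the LangSmith SDK's `traceable` decorator; the shipped `litellm_wrapper.py` likely differs in detail:

```python
import litellm
from langsmith import traceable

def wrap_litellm(name: str = "LiteLLM") -> None:
    """Patch litellm so completion calls show up as LLM runs in LangSmith."""
    original_completion = litellm.completion
    original_acompletion = litellm.acompletion

    @traceable(run_type="llm", name=name)
    def traced_completion(*args, **kwargs):
        return original_completion(*args, **kwargs)

    @traceable(run_type="llm", name=name)
    async def traced_acompletion(*args, **kwargs):
        return await original_acompletion(*args, **kwargs)

    litellm.completion = traced_completion
    litellm.acompletion = traced_acompletion
```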
- `agent.py` - Main ReAct agent
- `litellm_wrapper.py` - LangSmith wrapper for LiteLLM
- `utils.py` - Message conversion helpers
- `test_wrapper.py` - Standalone wrapper demo
- `annotate.py` - Script to add feedback to recent traces
- `sampling.py` - Demonstrates configurable sampling rates
Use `sampling.py` to see how different sampling rates work:

```bash
python sampling.py
```

This demonstrates:
- Running the agent with different sampling rates (100%, 50%, 25%, 0%)
- Using `tracing_context` to control sampling per-operation (see the sketch after this list)
- Balancing observability vs. cost in production
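
For illustration, a minimal sketch of per-request sampling with `tracing_context`; the sample rate and request handler are made up for the example:

```python
import asyncio
import random

import litellm
from langsmith.run_helpers import tracing_context

from litellm_wrapper import wrap_litellm

wrap_litellm()  # make litellm calls traceable, as in the agent

SAMPLE_RATE = 0.25  # send 25% of requests to LangSmith

async def handle_request(prompt: str) -> str:
    # Roll the dice once per request; only sampled requests are traced
    sampled = random.random() < SAMPLE_RATE
    with tracing_context(enabled=sampled):
        response = await litellm.acompletion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

if __name__ == "__main__":
    print(asyncio.run(handle_request("Hello")))
```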
Use `annotate.py` to add feedback scores to recent traces:

```bash
python annotate.py
```

This demonstrates how to use the LangSmith SDK to:
- Retrieve recent traces from a project
- Attach multiple feedback scores to each run (helpfulness & correctness)
- View and analyze feedback in the LangSmith dashboard
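
For illustration, a minimal sketch of that flow with the LangSmith `Client`; the project name here is a placeholder for your own:

```python
from langsmith import Client

client = Client()

# Fetch the most recent root runs from the project
runs = client.list_runs(
    project_name="litellm-react-agent",  # placeholder: use your project name
    is_root=True,
    limit=5,
)

for run in runs:
    # Attach two feedback scores to each run; they appear in the
    # LangSmith dashboard alongside the trace
    client.create_feedback(run.id, key="helpfulness", score=1.0)
    client.create_feedback(run.id, key="correctness", score=1.0)
```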