A tRPC server that provides API endpoints for analyzing EVM contracts with an LLM and caches results.
The server package implements the backend services for the TinyExplorer, handling contract analysis requests, LLM integration, and authentication.
The server provides a tRPC API for analyzing smart contracts on EVM blockchains. It uses an LLM to generate human-readable explanations of contract functionality and events, with support for both synchronous and streaming responses. The server includes caching, authentication, and integration with the WhatsABI library for ABI and source code retrieval.
This package is part of the TinyExplorer monorepo. To install it:

```sh
# From the repository root
pnpm install
```

Create a `.env` file in the root directory with the following variables:
```sh
# Server
EXPOSED_NODE_ENV=local
SERVER_HOST=0.0.0.0
SERVER_PORT=8888
FRONTEND_URL=http://localhost:5173
COOKIE_SECRET=your-secure-cookie-secret-min-32-chars
SESSION_TTL=86400

# LLM
OPENROUTER_API_KEY=your-openrouter-api-key
OPENROUTER_MODEL_NAME=qwen/qwq-32b

# Cache
DRAGONFLY_HOST=localhost
DRAGONFLY_PORT=6379
DEFAULT_CACHE_TIME=3600

# Blockchain
ETHEREUM_RPC_URL=your-ethereum-rpc-url
ETHEREUM_EXPLORER_API_KEY=your-etherscan-api-key
```
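As an illustration, startup validation of these variables can be sketched as follows. This is a minimal, hypothetical helper (the real server reads from `process.env`; the function and field names here are invented for the example):

```typescript
// Sketch of validating required env vars at startup. The env is passed in
// as a plain object so the example is self-contained; in the real server
// this would be process.env. All names are illustrative.

interface ServerConfig {
  host: string;
  port: number;
  sessionTtl: number;
}

function loadServerConfig(env: Record<string, string | undefined>): ServerConfig {
  // Fail fast with a clear message when a required variable is absent.
  const need = (key: string): string => {
    const v = env[key];
    if (v === undefined || v === "") {
      throw new Error(`Missing required env var: ${key}`);
    }
    return v;
  };
  return {
    host: need("SERVER_HOST"),
    port: Number(need("SERVER_PORT")),
    // SESSION_TTL falls back to one day, matching the sample .env above.
    sessionTtl: Number(env.SESSION_TTL ?? "86400"),
  };
}
```

Failing fast at startup turns a missing variable into one obvious error instead of a confusing failure deep inside a request handler.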
To run the server locally with Dragonfly cache:

```sh
# Start Dragonfly cache
pnpm start:cache

# Start the server
pnpm start:server
```

The server can be deployed using Docker:
```sh
# Build the images
docker build -f Dockerfile.server -t tiny-explorer-server .
docker build -f Dockerfile.llm -t tiny-explorer-llm . # not yet using a local LLM

# Run with Docker Compose
docker-compose up -d
```

Or use the images published to the GitHub Container Registry:
```sh
docker pull ghcr.io/0xpolarzero/tiny-explorer-server:latest
docker pull ghcr.io/0xpolarzero/tiny-explorer-llm:latest
```

The server exposes the following endpoints:
- `login`: Create a new authenticated session
- `logout`: End the current session
- `getStatus`: Check server health
- `explainContract`: Analyze a contract's source code and ABI
- `explainContractStream`: Streaming version of contract analysis
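As a sketch of the wire format, a query procedure such as `getStatus` is reachable over tRPC's HTTP convention: a GET request with the input JSON-encoded in an `input` query parameter. In practice the frontend would use `@trpc/client`; the `trpcQueryUrl` helper and the `/trpc` base path below are illustrative assumptions, not the server's actual code:

```typescript
// Build the URL for a tRPC query procedure following tRPC's HTTP convention
// (GET with a JSON-encoded `input` query parameter). Illustrative only.
function trpcQueryUrl(base: string, procedure: string, input?: unknown): string {
  const url = `${base}/${procedure}`;
  if (input === undefined) return url;
  return `${url}?input=${encodeURIComponent(JSON.stringify(input))}`;
}

// e.g. fetch(trpcQueryUrl("http://localhost:8888/trpc", "getStatus"))
```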
The server integrates with OpenRouter to provide AI-powered contract analysis:
- Uses structured prompts defined in the core package
- Parses LLM responses into typed objects
- Supports both synchronous and streaming responses
- Customizable model selection through environment variables
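OpenRouter serves OpenAI-compatible server-sent events, so the streaming path largely reduces to extracting `choices[0].delta.content` from each `data:` line of the stream. A minimal, self-contained parser in that spirit (illustrative, not the server's actual code):

```typescript
// Extract the text delta from one SSE line of an OpenAI-compatible stream,
// or return null for comments, keep-alives, the [DONE] sentinel, and
// malformed payloads. Illustrative sketch only.
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null; // end-of-stream sentinel
  try {
    const json = JSON.parse(payload);
    return json.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // ignore partial or malformed chunks
  }
}
```

A streaming endpoint would feed each decoded chunk through a parser like this and forward the deltas to the client as they arrive.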
Efficient caching is implemented to reduce API calls and improve performance:
- Redis-based caching for LLM responses
- Contract ABI and source code caching
- Session storage for authentication
- Configurable cache TTL
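The pattern behind these points is cache-aside: check the store, compute on a miss, write back with a TTL. A self-contained sketch, using an in-memory stand-in for the Redis-compatible (Dragonfly) client so it runs anywhere; all names are illustrative:

```typescript
// Minimal subset of a Redis-like client: get, and set with a TTL.
interface Store {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory stand-in for Dragonfly/Redis, with TTL-based expiry.
function memoryStore(): Store {
  const m = new Map<string, { value: string; expiresAt: number }>();
  return {
    async get(key) {
      const e = m.get(key);
      if (!e || e.expiresAt < Date.now()) return null;
      return e.value;
    },
    async set(key, value, ttlSeconds) {
      m.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
    },
  };
}

// Cache-aside: return the cached value if present, otherwise compute it,
// store the JSON-serialized result, and return it.
async function cached<T>(
  store: Store,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await store.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const value = await compute();
  await store.set(key, JSON.stringify(value), ttlSeconds);
  return value;
}
```

Keying LLM responses by, say, chain id plus contract address means a repeated `explainContract` call skips both the RPC fetch and the LLM round trip.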
The server implements session-based authentication:
- Secure, HTTP-only cookies
- Redis-backed session storage
- Middleware protection for sensitive endpoints
- Cross-origin support for the frontend
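One common way to back such cookies is to HMAC-sign the session id with `COOKIE_SECRET`, so a tampered cookie fails verification before any session lookup. A minimal sketch of that idea (an assumption for illustration, not the server's actual scheme):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Produce "<sessionId>.<hex signature>" for storage in an HTTP-only cookie.
function signSession(sessionId: string, secret: string): string {
  const sig = createHmac("sha256", secret).update(sessionId).digest("hex");
  return `${sessionId}.${sig}`;
}

// Return the session id if the signature checks out, otherwise null.
function verifySession(cookieValue: string, secret: string): string | null {
  const i = cookieValue.lastIndexOf(".");
  if (i < 0) return null;
  const sessionId = cookieValue.slice(0, i);
  const expected = signSession(sessionId, secret);
  const a = Buffer.from(cookieValue);
  const b = Buffer.from(expected);
  // Constant-time comparison avoids leaking signature bytes via timing.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return sessionId;
}
```

The verified session id would then be looked up in Redis to enforce the `SESSION_TTL` from the configuration.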
```
src/
├── app/ # Application setup
│ ├── client.ts # tRPC client configuration
│ ├── debug.ts # Debug utilities
│ └── router.ts # API router and endpoints
├── index.ts # Main entry point
└── service/ # Service implementations
├── auth.ts # Authentication service
├── cache.ts # Redis caching service
├── index.ts # Main service orchestration
├── llm.ts # LLM integration service
└── whatsabi.ts # Contract ABI/source fetching service
```
Run tests with:

```sh
pnpm test
```

To add a new endpoint:

- Define input/output types in `core/llm/types.ts`
- Add the endpoint to the router in `app/router.ts`
- Implement service logic in the appropriate service file
- Add any necessary middleware
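A hypothetical walk-through of those steps for a made-up `getContractName` endpoint (none of these names exist in the codebase, and the tRPC wiring in `app/router.ts` would wrap these pieces rather than look exactly like this):

```typescript
// Step 1: input type (would live in core/llm/types.ts).
interface GetContractNameInput {
  chainId: number;
  address: string;
}

// Step 2: the router would validate input with a schema; a hand-rolled
// validator stands in here so the sketch stays self-contained.
function parseInput(raw: unknown): GetContractNameInput {
  const o = raw as Partial<GetContractNameInput>;
  if (typeof o?.chainId !== "number" || typeof o?.address !== "string") {
    throw new Error("Invalid input");
  }
  if (!/^0x[0-9a-fA-F]{40}$/.test(o.address)) {
    throw new Error("Invalid EVM address");
  }
  return { chainId: o.chainId, address: o.address };
}

// Step 3: service logic (a stub; the real service would go through the
// whatsabi/LLM services and the cache).
async function getContractName(input: GetContractNameInput): Promise<string> {
  return `contract ${input.address} on chain ${input.chainId}`;
}
```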
If you wish to contribute to this package, please follow the contribution guidelines in the root repository README.
This project is licensed under the MIT License - see the LICENSE file for details.