Hederion AI Explorer is a next-generation block explorer for the Hedera network that enables users to consume and understand on-chain data through natural language queries.
- API endpoints
  - ✅ Chat endpoint for real-time user queries with token-by-token streaming responses over WebSockets
  - ✅ Suggested queries endpoint returning a list of pre-defined queries
  - ✅ Per-IP and global rate/cost limiting
- AI Agent
  - ✅ LLM agentic reasoning and tool use
  - ✅ Multi-turn conversation with context retention
  - ✅ Session-based conversations (anonymous and pseudonymous sessions)
  - ✅ Contextual user data (wallet account ID)
- Relational Database
  - ✅ Chat history with database persistence
- Vector Database
  - ✅ Semantic search with embeddings
- MCP tools
  - ✅ Hedera's Mirror Node REST API
  - ✅ Hgraph GraphQL API
  - ❌ Hedera's BigQuery
  - ✅ Timestamp conversion tool
  - ✅ Money value conversion tool
- Benchmarking
  - ✅ Tracing
  - ✅ Evaluations
- Unit & Integration Tests
- ✅ CI/CD
- ✅ Documentation
- Docker, Docker Compose
- uv package manager
- Python 3.13+
- PostgreSQL
- Redis
- LLM API (OpenAI, Google, etc.)
- Clone the repository:

```bash
git clone https://github.com/LimeChain/ai-explorer-backend
cd ai-explorer-backend
```

- Create a `.env` file and configure the necessary environment variables:

```bash
cp .env.example .env
```
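The authoritative list of variables lives in `.env.example`. Purely as a hypothetical illustration (none of these names are confirmed by the repository), a local setup typically needs values along these lines:

```env
# Hypothetical variable names; consult .env.example for the real keys.
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/ai_explorer
REDIS_URL=redis://localhost:6379/0
LLM_API_KEY=<your LLM provider key>
```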
- Install dependencies:

```bash
uv sync
```

- Start the database and Redis:

```bash
docker compose up postgres redis -d
```

- Run database migrations:

```bash
uv run alembic upgrade head
```

- Start the API server:

```bash
uv run uvicorn app.main:app --reload --port 8000
```

- Install the Hedera SDK as a package and start the internal tools MCP server:
```bash
uv pip install -e ./sdk
uv run python mcp_servers/main.py
```
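For a sense of what such an internal tool looks like, here is a minimal FastMCP sketch of a timestamp-conversion tool. The tool name and implementation are assumptions for illustration, not the actual code in `mcp_servers/`:

```python
# Illustrative FastMCP tool sketch (pip install mcp). The tool name and
# logic are ASSUMPTIONS, not the repository's implementation.
from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hedera-internal-tools")


@mcp.tool()
def convert_timestamp(consensus_timestamp: str) -> str:
    """Convert a Hedera consensus timestamp (seconds.nanoseconds) to ISO 8601 UTC."""
    seconds, _, _nanos = consensus_timestamp.partition(".")
    return datetime.fromtimestamp(int(seconds), tz=timezone.utc).isoformat()


if __name__ == "__main__":
    mcp.run()
```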
- Send a sample query over WebSocket:

```bash
uv run python scripts/ws_send_query.py
```
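To see roughly what such a client does, here is a minimal sketch using the `websockets` library. The endpoint path and message shape are assumptions, so treat `scripts/ws_send_query.py` as the source of truth:

```python
# Minimal streaming WebSocket client sketch (pip install websockets).
# The /ws/chat path and the {"question": ...} payload are ASSUMPTIONS
# for illustration; scripts/ws_send_query.py is the real client.
import asyncio
import json

import websockets


async def ask(question: str) -> None:
    async with websockets.connect("ws://localhost:8000/ws/chat") as ws:
        await ws.send(json.dumps({"question": question}))
        # The server streams the answer token by token; print each chunk
        # as it arrives until the connection closes.
        async for token in ws:
            print(token, end="", flush=True)
    print()


if __name__ == "__main__":
    asyncio.run(ask("What are the recent transactions for account 0.0.123?"))
```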
- Start the external MCP server that exposes the whole service as a tool for AI agents:

```bash
uv run python mcp_external/main.py --transport http --port 8002
```

Connect via the Postman MCP client.
List the available tools:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list",
  "params": {}
}
```

Call the `ask_explorer` tool:
```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "ask_explorer",
    "arguments": {
      "question": "What are the recent transactions for account 0.0.123?",
      "network": "mainnet",
      "account_id": "0.0.123"
    }
  }
}
```
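If you prefer a scripted client over Postman, the official MCP Python SDK can drive the same calls. A minimal sketch, assuming the server mounts the streamable-HTTP transport at its default `/mcp` path (an assumption, not confirmed by the repository):

```python
# Minimal MCP client sketch using the official `mcp` Python SDK
# (pip install mcp). The /mcp path is an ASSUMPTION (FastMCP's default
# mount point for the streamable HTTP transport).
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    async with streamablehttp_client("http://localhost:8002/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            result = await session.call_tool(
                "ask_explorer",
                {
                    "question": "What are the recent transactions for account 0.0.123?",
                    "network": "mainnet",
                    "account_id": "0.0.123",
                },
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```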
- Configure the `.env` file to use the correct MCP endpoint.
- Start all services with Docker:

```bash
docker compose up
```

- Send a sample query over WebSocket:
```bash
docker compose exec api uv run python scripts/dev/query_websocket_dev.py
```

Create a new migration:
```bash
uv run alembic revision --autogenerate -m "Description of changes"
```
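`--autogenerate` writes a revision file under Alembic's versions directory. As an illustration only (the revision IDs, table, and column below are made up, not real revisions from this repository), such a file looks roughly like this:

```python
"""Illustrative Alembic migration; the revision IDs, table, and column
are made-up examples, not real revisions from this repository."""
from alembic import op
import sqlalchemy as sa

revision = "abc123def456"
down_revision = "000000000000"


def upgrade() -> None:
    # Example: add a nullable column to a hypothetical chat_messages table.
    op.add_column("chat_messages", sa.Column("model_name", sa.String(), nullable=True))


def downgrade() -> None:
    op.drop_column("chat_messages", "model_name")
```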
Apply migrations:

```bash
uv run alembic upgrade head
```

Configure LangSmith tracing in the `.env` file:
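LangSmith's SDK reads its standard environment variables; the repository's `.env.example` may name them differently, but a typical configuration looks like:

```env
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=<your LangSmith API key>
# Hypothetical project name:
LANGSMITH_PROJECT=ai-explorer
```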
Then run the evaluations:

```bash
uv run python -m evals.main
```

Test rate and cost limiting by sending multiple requests to the WebSocket endpoint:
```bash
uv run python scripts/spam.py
uv run python scripts/spam.py concurrent
uv run python scripts/check_limits.py list --details
uv run python scripts/check_limits.py stats
uv run python scripts/check_limits.py clear
uv run python scripts/check_limits.py monitor
```
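The limiter state lives in Redis, so you can also inspect it directly. A minimal sketch with `redis-py`, where the key pattern is an assumption (`scripts/check_limits.py` knows the real key layout):

```python
# Inspect rate-limiter keys directly in Redis (pip install redis).
# The "rate*" pattern is an ASSUMPTION; scripts/check_limits.py uses
# the real key layout.
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)
for key in r.scan_iter("rate*"):
    print(key, "ttl:", r.ttl(key))
```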
Once running, visit:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
The prod and dev environments run in the same GCP project. The dev environment is deployed with the default Terraform workspace, while prod is deployed with the `prod` workspace, so Terraform keeps separate state files for the two environments.

Before deploying, check the current Terraform workspace:

```bash
terraform workspace list
```

If needed, change the workspace:
```bash
terraform workspace select <workspace>
```

Set the Terraform variables (for example in a `terraform.tfvars` file):

```hcl
project_id        = "<PROJECT_ID>"
llm_api_key       = "<API_KEY>"
langsmith_api_key = ""
environment       = "production"
domain_name       = "hederion.com"
app_name          = "ai-explorer-prod"
```
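With the workspace selected and variables in place, a deploy follows the standard Terraform flow. This is a sketch of the usual commands, not project-specific tooling:

```bash
terraform workspace select prod
terraform plan    # review the planned changes before applying
terraform apply
```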