A powerful, multi-database and multi-LLM SQL assistant built with Node.js, Express, and React. It uses LangGraph.js to orchestrate intelligent agents that reason about your database schema, generate optimized SQL queries, and provide analytical summaries, all through a sleek, modern UI.
Powered by Knex.js, the agent seamlessly supports multiple database engines:
- PostgreSQL - Full support with SSL/TLS for secure remote connections
- MySQL - Complete MySQL 8+ compatibility
- SQLite - Local file-based databases for development and testing
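The engine support above can be sketched as a small Knex configuration factory. This is a minimal sketch under assumptions: `knexConfigFor` and its option names are illustrative, not the project's actual code.

```javascript
// Build a Knex-style config object for each supported engine.
// knexConfigFor and its option shape are illustrative assumptions,
// not the project's actual code.
function knexConfigFor(engine, opts) {
  switch (engine) {
    case "postgres":
      return {
        client: "pg",
        connection: {
          host: opts.host,
          port: opts.port ?? 5432,
          user: opts.user,
          password: opts.password,
          database: opts.database,
          // Most cloud Postgres providers require TLS.
          ssl: opts.ssl ? { rejectUnauthorized: false } : false,
        },
      };
    case "mysql":
      return {
        client: "mysql2",
        connection: {
          host: opts.host,
          port: opts.port ?? 3306,
          user: opts.user,
          password: opts.password,
          database: opts.database,
        },
      };
    case "sqlite":
      return {
        client: "sqlite3",
        connection: { filename: opts.path },
        useNullAsDefault: true, // Knex requires this flag for SQLite
      };
    default:
      throw new Error(`Unsupported engine: ${engine}`);
  }
}
```

Because Knex abstracts the dialect, the rest of the agent can stay engine-agnostic and only this config object changes per database.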
Native integration with leading AI providers:
- OpenAI (Compatible) - Works with OpenAI and any OpenAI-compatible API (Groq, OpenRouter, Ollama, Mistral, etc.)
- Anthropic Claude - Direct integration with Claude models
- Google Gemini - Native Google Generative AI support
- Dark Theme with Glassmorphism - Vibrant, professional design
- Real-time Reasoning Visualization - Watch the agent think step-by-step
- SQL Syntax Highlighting - Prism-based code highlighting
- Query Status Tracking - Visual indicators for pending, approved, denied, and executed queries
- Responsive Design - Works seamlessly on desktop and mobile
- Configuration Persistence - Settings saved to localStorage
- Node.js v18+ (v20 recommended)
- npm v9+ or yarn
- A database (PostgreSQL, MySQL, or SQLite)
- An LLM API key (OpenAI, Anthropic, Google, or compatible provider)
```
git clone https://github.com/varunreddy/Node-SQL-agent.git
cd Node-SQL-agent
```

Install dependencies for both the backend and frontend:

```
# Install root (server) dependencies
npm install

# Install client dependencies
cd client && npm install && cd ..
```

Create a `.env` file in the root directory:

```
touch .env
```

Add your API keys and optional configurations:
```
# ===== LLM Provider Keys =====
# At least one is required
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
GOOGLE_API_KEY=your-google-api-key
GROQ_API_KEY=your-groq-api-key

# ===== Optional: Custom OpenAI-compatible endpoint =====
OPENAI_BASE_URL=https://api.openai.com/v1

# ===== Optional: Default Model Settings =====
MODEL_NAME=gpt-4o
TEMPERATURE=0

# ===== Optional: Database Defaults =====
# These can also be configured in the UI
DATABASE_URL=postgresql://user:password@localhost:5432/mydb
SQLITE_PATH=./database.sqlite
```

Start both the backend server and frontend dev server concurrently:
```
npm run dev
```

This runs:

- Backend: http://localhost:3001 (Express API server)
- Frontend: http://localhost:5173 (Vite dev server with HMR)
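The environment variables above can be read and normalized with a small loader. A minimal sketch, assuming the defaults shown in the `.env` example; the function name `loadLlmConfig` and its fallback order are illustrative, not the project's actual code.

```javascript
// Read LLM settings from the environment with sensible fallbacks.
// loadLlmConfig is an illustrative sketch, not the project's actual loader.
function loadLlmConfig(env = process.env) {
  const temperature = Number(env.TEMPERATURE ?? 0);
  if (Number.isNaN(temperature) || temperature < 0 || temperature > 1) {
    throw new Error("TEMPERATURE must be a number between 0 and 1");
  }
  return {
    // First key found wins; at least one provider key is required.
    apiKey:
      env.OPENAI_API_KEY ??
      env.ANTHROPIC_API_KEY ??
      env.GOOGLE_API_KEY ??
      env.GROQ_API_KEY,
    baseUrl: env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
    model: env.MODEL_NAME ?? "gpt-4o",
    temperature,
  };
}
```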
```
# Build both server and client
npm run build

# Start production server
npm start

# or with environment flag
NODE_ENV=production node dist/server.js
```

The production server serves both the API and the static frontend from `client/dist/`.
- Open the sidebar by clicking the toggle button
- Select the Database tab
- Choose your database engine

For PostgreSQL or MySQL, fill in:
| Field | Description |
|---|---|
| Host | Database server hostname (e.g., localhost, db.example.com) |
| Port | Connection port (PostgreSQL: 5432, MySQL: 3306) |
| Database | Database name |
| Username | Database user |
| Password | User password |
| Enable SSL/TLS | Toggle for secure remote connections (required for most cloud databases) |
For SQLite, fill in:

| Field | Description |
|---|---|
| DB Path | Local file path (e.g., ./database.sqlite, /data/mydb.db) |
💡 Tip: Click Save after configuring to persist settings.
- Select the LLM Setup tab in the sidebar
- Configure your AI provider:
| Field | Description |
|---|---|
| Provider | Select: OpenAI (Compatible), Anthropic, or Google Gemini |
| Base URL | API endpoint (for OpenAI-compatible only). Use shortcuts for Groq, OpenRouter, Moonshot, Ollama |
| API Key | Your provider's API key |
| Model Name | Model identifier (e.g., gpt-4o, claude-3-5-sonnet-20240620, gemini-1.5-pro) |
| Max Tokens | Maximum response length |
| Temperature | Creativity level (0 = focused, 1 = creative) |
| Provider | Base URL | Example Models |
|---|---|---|
| OpenAI | https://api.openai.com/v1 (default) | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| Groq | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile, mixtral-8x7b-32768 |
| OpenRouter | https://openrouter.ai/api/v1 | openai/gpt-4o, anthropic/claude-3.5-sonnet |
| Ollama | http://localhost:11434/v1 | llama3, mistral, codellama |
| Anthropic | Native (no URL needed) | claude-3-5-sonnet-20240620, claude-3-opus-20240229 |
| Google Gemini | Native (no URL needed) | gemini-1.5-pro, gemini-1.5-flash |
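The base-URL shortcuts in the table above can be resolved with a simple lookup. A sketch: `resolveBaseUrl` is an illustrative helper, not the project's actual code; the URLs are taken directly from the table.

```javascript
// Map an OpenAI-compatible provider shortcut to its base URL.
// resolveBaseUrl is an illustrative sketch; URLs come from the table above.
const BASE_URLS = {
  openai: "https://api.openai.com/v1",
  groq: "https://api.groq.com/openai/v1",
  openrouter: "https://openrouter.ai/api/v1",
  ollama: "http://localhost:11434/v1",
};

function resolveBaseUrl(provider) {
  const url = BASE_URLS[provider.toLowerCase()];
  if (!url) {
    throw new Error(`No OpenAI-compatible base URL known for "${provider}"`);
  }
  return url;
}
```

Anthropic and Google Gemini use native SDK integrations, so they are intentionally absent from the lookup.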
For secure connections to cloud databases (AWS RDS, Azure Database, Neon, Supabase, etc.):
- Enable the SSL toggle in the Database configuration panel
- The client uses `rejectUnauthorized: false` by default for compatibility with self-signed certificates
For production with certificate validation:
```
# Option 1: Use sslmode in connection string
DATABASE_URL=postgresql://user:pass@host:5432/db?sslmode=require

# Option 2: Provide CA certificate
SSL_CERT=/path/to/ca-certificate.crt
SSL_CLIENT_CERT=/path/to/client-cert.crt  # For mTLS
SSL_CLIENT_KEY=/path/to/client-key.key    # For mTLS
```

The project includes a vercel.json configuration for seamless deployment:
- Push your code to GitHub
- Import the repository in Vercel
- Add environment variables in Vercel dashboard
- Deploy!
Vercel will automatically:
- Run `npm run build` (compiles TypeScript + builds React)
- Start `npm start` (Node.js serverless function)
- Serve static files from `client/dist/`
```dockerfile
# Dockerfile example
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3001
CMD ["npm", "start"]
```

- API Keys: Never commit `.env` files. Use environment variables in production.
- SQL Injection: The agent uses parameterized queries via Knex.js.
- SSL/TLS: Always enable for remote database connections.
- CORS: Configured for local development; restrict in production.
- Rate Limiting: Consider adding rate limiting for production deployments.
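For the rate-limiting suggestion above, a minimal in-memory fixed-window limiter could look like the sketch below. This is illustrative only; the project does not ship a limiter, and a maintained library such as express-rate-limit is more robust in production.

```javascript
// Minimal fixed-window rate limiter as an Express-style middleware.
// This is an illustrative sketch, not part of the project's codebase.
function rateLimit({ windowMs = 60_000, max = 60 } = {}) {
  const hits = new Map(); // ip -> { count, windowStart }
  return (req, res, next) => {
    const now = Date.now();
    const entry = hits.get(req.ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      // New client or expired window: start a fresh window.
      hits.set(req.ip, { count: 1, windowStart: now });
      return next();
    }
    entry.count += 1;
    if (entry.count > max) {
      res.statusCode = 429;
      return res.end("Too Many Requests");
    }
    next();
  };
}
```

Mounted with `app.use(rateLimit({ windowMs: 60_000, max: 60 }))`, it caps each IP at 60 requests per minute; note the in-memory map resets on restart and does not share state across serverless instances.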
| Script | Description |
|---|---|
| `npm run dev` | Start dev servers (backend + frontend concurrently) |
| `npm run dev:server` | Start backend only (with tsx hot-reload) |
| `npm run dev:client` | Start frontend only (Vite dev server) |
| `npm run build` | Build for production |
| `npm start` | Run production server |
| `npm run lint` | Run ESLint (in client/) |
Backend:
- Express.js - Web server
- LangGraph.js - Agent orchestration
- LangChain.js - LLM integrations
- Knex.js - SQL query builder
- Zod - Schema validation
- TypeScript
Frontend:
- React 19 - UI framework
- Vite - Build tool
- Tailwind CSS 4 - Styling
- Lucide React - Icons
- React Syntax Highlighter - Code highlighting
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by Varun Reddy