A full-stack Retrieval-Augmented Generation (RAG) chatbot example with OpenAI integration. Built as an npm workspaces monorepo with TypeScript/Express backend, React/Vite frontend, PostgreSQL with pgvector, and a shared UI component library.
Want to run this locally in 5 minutes? → QUICKSTART.md
The quickstart guide walks you through getting everything running with Docker Compose in just a few commands!
- Deploy to Render
- Features
- Repository Structure
- Quick Start with Docker
- Docker Commands
- Troubleshooting
- Additional Documentation
- Contributing
- License
This project is pre-configured for one-click deployment to Render using the included render.yaml blueprint.
What you get:
- ✅ Automatic database migrations on deployment
- ✅ Automatic database seeding with 15 AI/ML docs
- ✅ PostgreSQL with pgvector extension
- ✅ Auto-scaling and health checks
- ✅ Separate backend and frontend services
- ✅ Environment variable management
Deployment steps:

1. Fork this repository to your GitHub account

2. Create a new Blueprint Instance on Render:
   - Go to Render Dashboard
   - Click "New" → "Blueprint Instance"
   - Connect your forked repository
   - Select branch (usually `main`)

3. Configure environment variables:
   - Render will auto-detect `render.yaml`
   - You'll be prompted to enter your `OPENAI_API_KEY` (Get one here)
   - All other variables are pre-configured (including `RUN_SEED=true`)

4. Deploy:
   - Click "Apply" to create all services
   - Render will:
     - Create the PostgreSQL database with pgvector
     - Build and deploy the backend with automatic migrations
     - Automatically seed the database with 15 AI/ML documentation files
     - Build and deploy the frontend
     - Link the services together

5. Access your deployed app:
   - Frontend: `https://your-app-name-frontend.onrender.com`
   - Backend API: `https://your-app-name-backend.onrender.com`
Note: Database seeding is enabled by default (RUN_SEED=true in render.yaml). This will incur a small one-time OpenAI API cost (~$0.01-0.02) for generating embeddings. To disable automatic seeding, change RUN_SEED to false in the Render dashboard after deployment.
Cost Estimate (Render Free Tier):
- 2 Web Services (frontend + backend): Free
- 1 PostgreSQL Database: Free (with limitations)
- OpenAI API (one-time seeding): ~$0.01-0.02
- Total: ~$0.01-0.02 setup + $0/month recurring (with free tier limitations)
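The one-time seeding cost is just token count times the embedding model's per-token rate. A back-of-envelope sketch of that arithmetic (the token count and per-million-token rate below are illustrative assumptions, not this repo's measured figures; check current OpenAI pricing):

```typescript
// Rough embedding-cost formula for the one-time seed step:
//   cost ≈ (total tokens / 1,000,000) × price per 1M tokens
// Inputs are illustrative placeholders, not measured values.
function seedCostUSD(totalTokens: number, pricePerMillionUSD: number): number {
  return (totalTokens / 1_000_000) * pricePerMillionUSD;
}

// e.g. ~100k tokens across 15 docs at an assumed $0.10 per 1M tokens:
console.log(seedCostUSD(100_000, 0.10)); // ≈ 0.01 USD
```

At these assumed inputs the result lands in the ~$0.01-0.02 range quoted above; larger documents or a pricier model scale the cost linearly.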
- 🤖 RAG-Powered Responses - Semantic search over document knowledge base using vector embeddings
- 💬 Conversation History - Persistent multi-turn conversations with context awareness
- 📚 Source Citations - Shows which documents informed each response
- 🎯 Real-time Token Usage - Visual context window usage tracking
- 🚀 Easy Deployment - One-click Render.com deployment with automated migrations
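The retrieval step behind the RAG features boils down to ranking stored document chunks by embedding similarity to the query. A minimal sketch of that idea (in the real backend, pgvector performs this ranking in SQL; the types and function names here are illustrative, not the project's actual service code):

```typescript
// Embedding-based retrieval sketch: score each stored chunk by cosine
// similarity to the query embedding and keep the k best matches.
type Chunk = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

The selected chunks are then passed to the LLM as context, and their source documents surface as the citations shown in the UI.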
```
.
├── backend/              # Express + TypeScript API (RAG + chat endpoints)
│   ├── src/
│   │   ├── services/     # RAG, embedding, LLM, and vector services
│   │   ├── routes/       # API endpoints (chat, health)
│   │   ├── db/           # Database config, migrations, and utilities
│   │   └── ...
│   ├── Dockerfile        # Production container with auto-migrations
│   └── start.sh          # Startup script (runs migrations then starts server)
├── frontend/             # React + Vite app (chat UI)
│   ├── src/
│   │   ├── components/   # Chatbot component with real API integration
│   │   └── config.ts     # API configuration
│   ├── Dockerfile        # Nginx-served production build
│   └── nginx.conf        # Nginx configuration
├── common-ui/            # Shared UI components and styles
├── docker-compose.yml    # Full-stack local development (Postgres + backend + frontend)
├── render.yaml           # Production deployment configuration (Render.com)
├── package.json          # Root workspaces + scripts
└── README.md
```
For detailed step-by-step instructions, see QUICKSTART.md
TL;DR:

```bash
# 1. Clone the repo
git clone <your-repo-url>
cd rag-chatbot

# 2. Create .env.docker file with your OpenAI API key
cat > .env.docker << 'EOF'
OPENAI_API_KEY=sk-your-actual-key-here
DB_PASSWORD=postgres
EOF

# 3. Start everything
docker compose up -d

# 4. Open http://localhost:3000
```

This starts:
- ✅ PostgreSQL with pgvector (port 5432)
- ✅ Backend API with auto-migrations (port 3001)
- ✅ Frontend UI (port 3000)
- ✅ Automatic database seeding with 15 AI/ML docs
Start all services:

```bash
docker compose up -d
```

Stop all services:

```bash
docker compose down
```

View logs:

```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f backend
```

Rebuild after code changes:

```bash
docker compose up -d --build
```

1. Docker port conflicts:

```bash
# Check what's using the port
lsof -i :5432  # or :3001, :3000

# Change ports in docker-compose.yml or .env files
```

2. Database connection errors:

```bash
# Ensure PostgreSQL is running
docker compose ps

# Check database logs
docker compose logs postgres
```

3. OpenAI API errors:
- Verify your API key is valid at https://platform.openai.com/api-keys
- Check your OpenAI account has credits
- Ensure the API key is properly set in `.env.docker` or environment variables
4. Frontend can't reach backend:
- For local dev: Frontend should connect to `http://localhost:3001`
- For Docker: Check `VITE_API_URL` in `docker-compose.yml` build args
- For Render: Services are auto-linked via `render.yaml`
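When debugging these connection issues, it helps to know that the frontend's API base URL is resolved in `frontend/src/config.ts`. A hedged sketch of what that resolution typically looks like (the actual file may differ; `resolveApiUrl` is an illustrative name):

```typescript
// Resolve the backend base URL: prefer the VITE_API_URL build-time
// variable (set via docker-compose.yml build args or Render), falling
// back to the local dev backend. Sketch only; the real config.ts may differ.
function resolveApiUrl(env: Record<string, string | undefined>): string {
  return env.VITE_API_URL ?? "http://localhost:3001";
}

// In the app this would typically read Vite's env object:
// const API_URL = resolveApiUrl(import.meta.env);
```

Because Vite bakes `VITE_`-prefixed variables in at build time, changing `VITE_API_URL` requires rebuilding the frontend image, not just restarting the container.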
Getting more help:
- Check service logs: `docker compose logs <service-name>`
- Verify all environment variables are set correctly
- Ensure Docker containers are healthy: `docker compose ps`
- See detailed troubleshooting in `backend/README.md`
- QUICKSTART.md - 🚀 Get started in 5 minutes with Docker Compose
- `backend/README.md` - Detailed backend documentation and API reference
- `backend/SETUP.md` - PostgreSQL and pgvector setup guide
- `backend/DOCKER.md` - Docker development and deployment details
- `frontend/README.md` - Frontend component documentation
- `common-ui/README.md` - Shared UI component library docs
Contributions are welcome! Please feel free to submit a Pull Request.
MIT