Full-stack meeting and knowledge assistant with real-time transcription, periodic and final summaries, RAG search, and voice/chat.
- Backend: FastAPI (`app/`), Postgres for auth/chat/meeting data (transcripts and summaries encrypted at rest), ChromaDB (`chroma_db/`) for vectors.
- LLM: Groq chat completions with configurable model and temperature; PII is scrubbed before prompts.
- Speech: Deepgram STT (meetings and voice chat) and TTS.
- Frontend: React 19 + Vite + Tailwind (`frontend/`).
- Integrations: SMTP email, Notion export, JWT auth with signup/login/reset, rate limiting, meeting delete, and retention/TTL.
- Live meeting WebSocket (`/meeting/ws`): PCM16 audio (binary or base64), interim/final transcripts, keep-alive plus auto reconnect; `STOP` triggers a structured final summary; runtime reconfig via `{"type":"config","sample_rate":16000}`.
- Summaries: periodic delta-based snapshots (last N minutes) plus a final Markdown summary; a fallback summary is produced if the LLM fails.
- Post-meeting chat (`POST /meeting/{id|recent|any|latest}/chat`): RAG over the meeting and cross-meeting context; accepts voice input; commands to email or Notion-export the summary.
- Document chat (`POST /docs/chat`): PDF/TXT upload (50 MB limit) with ingestion jobs, per-tab collections, optional RAG bypass, and voice input.
- General chat (`POST /chat/query`): remembers recent history and uses document context when available; optional voice input.
- Voice chat socket (`/voice-chat/ws`): real-time STT -> LLM -> TTS with conversation recall.
- Data hygiene: PII redaction before LLM calls, encrypted DB columns for transcripts/summaries, meeting retention/TTL, and hard delete.
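The live-meeting flow can be sketched from the client side. This is a minimal illustration using the third-party `websockets` package; the URL, the absence of an auth handshake, and the one-reply-per-chunk read pattern are assumptions, while the config/`STOP` message shapes come from the feature list above.

```python
import json


def config_message(sample_rate: int = 16000) -> str:
    """Build the runtime-reconfig message described above."""
    return json.dumps({"type": "config", "sample_rate": sample_rate})


async def stream_meeting(pcm_chunks, url="ws://localhost:8000/meeting/ws"):
    # `websockets` is third-party; imported lazily so config_message
    # works without the dependency installed.
    import websockets

    async with websockets.connect(url) as ws:
        await ws.send(config_message(16000))  # runtime reconfig
        for chunk in pcm_chunks:              # raw PCM16 bytes
            await ws.send(chunk)              # binary frames are accepted
            print(await ws.recv())            # interim/final transcript (assumed cadence)
        await ws.send("STOP")                 # triggers the structured final summary
        print(await ws.recv())

# Against a live backend:
# asyncio.run(stream_meeting(chunks))
```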
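The PII-redaction step can be pictured with a minimal sketch; the patterns below are illustrative stand-ins, not the project's actual redaction rules.

```python
# Illustrative scrub of PII before an LLM call. The real redaction rules
# live in the backend and are not shown in this README.
import re

PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b"),
}


def redact(text: str) -> str:
    """Replace each PII match with its placeholder."""
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text
```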
Copy `.env.example` to `.env` and set at least:

- `DATABASE_URL` (async SQLAlchemy URI; Postgres via `asyncpg` recommended)
- `GROQ_API_KEY`
- `DEEPGRAM_API_KEY`
- `JWT_SECRET` (at least 32 chars in production)
- `ENCRYPTION_KEY` (falls back to `JWT_SECRET`, but set a dedicated key for production)
- `PRELOAD_EMBEDDINGS` (set to `true` to download the embedding model at startup; defaults to `false` to speed up container health checks)
Common optional keys:

- `GROQ_MODEL`, `LLM_TEMPERATURE`, `LLM_TIMEOUT`
- `EMBEDDING_MODEL` (defaults to `all-MiniLM-L6-v2`)
- `CHROMA_DIR`
- `MEETING_RETENTION_DAYS` (default 90), `PERIODIC_SUMMARY_LOOKBACK_MINUTES` (default 10)
- `SMTP_HOST`/`PORT`/`USER`/`PASS` for email
- `NOTION_TOKEN`, `NOTION_PAGE_ID`
- `CORS_ORIGINS`, `FRONTEND_URL`, `RATE_LIMIT_REQUESTS`, `RATE_LIMIT_PERIOD`
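A fail-fast startup check for the required keys listed above might look like this; `missing_keys` is a hypothetical helper, not part of the app (which may validate settings differently, e.g. via pydantic).

```python
# Hypothetical startup check: the required names come from the list above.
import os

REQUIRED = ("DATABASE_URL", "GROQ_API_KEY", "DEEPGRAM_API_KEY", "JWT_SECRET")


def missing_keys(env=os.environ) -> list[str]:
    """Return the required keys that are unset or blank."""
    return [key for key in REQUIRED if not env.get(key)]


# Example: abort before serving requests.
# if missing_keys():
#     raise SystemExit(f"Missing env vars: {', '.join(missing_keys())}")
```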
Backend (Python 3.11):

```shell
python -m venv .venv
.\.venv\Scripts\activate  # PowerShell
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

Frontend (Node 20):

```shell
cd frontend
npm install
npm run dev
```

- API base: `http://localhost:8000`
- Frontend dev: `http://localhost:5173`
Prebuilt images are on Docker Hub:
- Backend: `lazyghost1/kontext-backend`
- Frontend: `lazyghost1/kontext-frontend`
Place a `.env` (copied from `.env.example`) in the repo root so the backend has your keys and secrets.
Save this as `docker-compose.yml` (or adapt your existing one):

```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: kontext_agent
    volumes:
      - postgres_data:/var/lib/postgresql/data
  backend:
    image: lazyghost1/kontext-backend:latest
    env_file: .env
    environment:
      APP_ENV: production
      APP_HOST: 0.0.0.0
      APP_PORT: "8000"
      DATABASE_URL: postgresql+asyncpg://postgres:postgres@postgres:5432/kontext_agent
      CHROMA_DIR: /app/chroma_db
      PRELOAD_EMBEDDINGS: "true"
      CORS_ORIGINS: http://localhost
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    volumes:
      - chroma_data:/app/chroma_db
  frontend:
    image: lazyghost1/kontext-frontend:latest
    environment:
      VITE_API_BASE_URL: http://localhost:8000
      VITE_WS_URL: ws://localhost:8000
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  postgres_data:
  chroma_data:
```

Run it:
```shell
docker compose up -d
```

If you prefer to build from source:

```shell
docker compose up --build
```

Key endpoints:

- `GET /` and `GET /health`
- `POST /auth/signup` | `/login` | `/forgot-password` | `/reset-password`
- `WS /meeting/ws`: send audio; `STOP` to finalize; `ACTION: EMAIL <addr>` or `ACTION: NOTION` to export
- `GET /meeting/history`, `/meeting/{id}/transcript`, `/meeting/{id}/summary`
- `DELETE /meeting/{id}` (hard delete of transcript, summaries, and embeddings)
- `POST /meeting/{id|recent|any|latest}/chat`
- `POST /chat/query`, `DELETE /chat/history`
- `POST /docs/upload`, `GET /docs/status/{job_id}`, `POST /docs/chat`, `DELETE /docs/clear`
- `WS /voice-chat/ws`
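As an illustration of the REST surface, here is a hedged sketch of calling the post-meeting chat endpoint with `requests`; the payload shape (`{"query": ...}`) and bearer-token header are assumptions, so check the interactive docs at `/docs` on a running instance for the real schemas.

```python
# Hedged client sketch; payload and auth details are assumptions.
BASE = "http://localhost:8000"


def meeting_chat_url(selector="latest", base=BASE):
    # selector is a meeting id or one of: recent, any, latest
    return f"{base}/meeting/{selector}/chat"


def ask_meeting(question, token, selector="latest"):
    # `requests` is third-party; imported lazily so meeting_chat_url
    # works without the dependency installed.
    import requests

    resp = requests.post(
        meeting_chat_url(selector),
        json={"query": question},                      # assumed payload shape
        headers={"Authorization": f"Bearer {token}"},  # assumed auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```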
- Transcripts and summaries: encrypted in Postgres, with retention governed by `MEETING_RETENTION_DAYS`.
- Vector store: `chroma_db/` (Chroma persistent client).
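The retention behavior can be expressed as a cutoff computation; this is a sketch of the idea, not the project's actual cleanup job, with the 90-day default taken from the configuration section above.

```python
# Sketch of the retention cutoff implied by MEETING_RETENTION_DAYS.
import os
from datetime import datetime, timedelta, timezone


def retention_cutoff(now, days=None):
    """Meetings older than the returned instant are eligible for deletion."""
    if days is None:
        days = int(os.getenv("MEETING_RETENTION_DAYS", "90"))  # default 90
    return now - timedelta(days=days)
```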
Run the test suite with `pytest`. Tests stub Chroma/RAG for speed and use SQLite by default unless `DATABASE_URL` is set.
Pull requests are welcome. Quick checklist:
- Fork the repo and create a branch (`git checkout -b feat/your-idea`).
- Backend: Python 3.11; `python -m venv .venv && .\.venv\Scripts\activate`, `pip install -r requirements.txt`, then run `pytest`.
- Frontend: Node 20; `cd frontend && npm install && npm run lint`.
- Run the apps locally (`uvicorn app.main:app --reload --port 8000` and `npm run dev`). If you change ports, set `VITE_API_BASE_URL` (and `VITE_WS_URL` for sockets) in `frontend/.env.local`.
- Keep PRs focused; include screenshots for UI changes when helpful.
- In the PR description, note the motivation, what changed, and how you tested.
- Port already in use: stop the conflicting process (`netstat -ano | findstr :8000`) or change the port and update `VITE_API_BASE_URL`.
- Slow first start: the embedding model download can take time; set `PRELOAD_EMBEDDINGS=false` to skip it at startup (the model lazy-loads later).
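As a cross-platform alternative to the `netstat` check above, a small Python probe can tell whether a local TCP port is already bound (`port_in_use` is a hypothetical helper, not part of the repo):

```python
# Probe a local TCP port by attempting to connect to it.
import socket


def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0  # 0 means something answered
```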
- Set a strong `JWT_SECRET` and explicit `CORS_ORIGINS` in production.
- Do not commit real `.env` values.