An ultra-premium, production-grade Artificial Intelligence dashboard for Multimodal Active Monitoring, Video Intelligence, and Semantic RAG operations.
Built with Next.js, FastAPI, Celery, and Ollama.

- Active WebRTC Sentinel: Hook directly into physical webcams. Monitor streams live over bi-directional WebSockets and configure natural-language "Tripwires" that automatically raise threat events to the Incident Dashboard.
- Multimodal Sandbox: Run complex multimodal LLMs locally, with no cloud API costs, streaming responses over SSE with Fetch API chunk decoders for fast DOM rendering.
- Video Intel Pipeline: Extract and chunk massive `.mp4` payloads asynchronously using Celery + Redis background workers.
- Local RAG Integration: Semantic search against PDF documents using `all-MiniLM-L6-v2` embeddings stored in ChromaDB collections.
- Google Labs UI: A polished enterprise interface styled with Framer Motion micro-animations and a custom Terracotta/Desert Sand palette.
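The SSE streaming in the Multimodal Sandbox comes down to buffering network chunks and splitting on blank lines to recover `data:` payloads. The repo's actual decoder runs in the browser via the Fetch API; this is a minimal pure-Python sketch of the same decoding step, with illustrative names:

```python
def decode_sse(chunks):
    """Decode an iterable of raw byte chunks into SSE `data:` payloads.

    Server-Sent Events are newline-delimited: each event ends with a
    blank line, and its payload lives on `data:` lines. Network chunk
    boundaries rarely align with event boundaries, so we buffer.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk.decode("utf-8")
        # A blank line ("\n\n") terminates one complete event.
        while "\n\n" in buffer:
            raw_event, buffer = buffer.split("\n\n", 1)
            data_lines = [
                line[len("data:"):].strip()
                for line in raw_event.splitlines()
                if line.startswith("data:")
            ]
            if data_lines:
                yield "\n".join(data_lines)

# Two events split awkwardly across three network chunks still decode cleanly.
events = list(decode_sse([b"data: hel", b"lo\n\ndata: wor", b"ld\n\n"]))
print(events)  # ['hello', 'world']
```

Rendering each yielded payload into the DOM as it arrives is what gives the sandbox its streaming feel.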
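Before the video pipeline fans work out to Celery workers, it needs deterministic chunk boundaries. A hedged sketch of that planning step follows; the function name and shapes are assumptions, not the repo's API, and the real version would be registered on a Celery app with `@app.task` and enqueued once per window:

```python
def plan_chunks(duration_s: float, chunk_s: float = 30.0):
    """Split a video of `duration_s` seconds into (start, end) windows.

    In the real pipeline each window would be handed to a separate
    Celery worker (e.g. `extract_chunk.delay(path, start, end)`), so the
    expensive extraction runs asynchronously and in parallel.
    """
    if duration_s <= 0 or chunk_s <= 0:
        raise ValueError("duration and chunk length must be positive")
    windows = []
    start = 0.0
    while start < duration_s:
        end = min(start + chunk_s, duration_s)
        windows.append((start, end))
        start = end
    return windows

# A 75-second clip becomes three jobs: two full 30 s chunks plus a 15 s tail.
print(plan_chunks(75.0))  # [(0.0, 30.0), (30.0, 60.0), (60.0, 75.0)]
```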
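Under the hood, the RAG query is a nearest-neighbor search over embedding vectors. ChromaDB and `all-MiniLM-L6-v2` do the heavy lifting in the real pipeline; the core ranking step reduces to cosine similarity, sketched here with hand-made toy vectors (actual MiniLM embeddings are 384-dimensional, and all names below are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product over the product of magnitudes.
    A score of 1.0 means the vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, docs, k=2):
    """Rank (doc_id, vector) pairs by similarity to the query vector,
    analogous to what a ChromaDB collection query returns."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in docs]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:k]

# Toy 3-d "embeddings" standing in for real 384-d MiniLM vectors.
docs = [
    ("intro.pdf#p1", [0.9, 0.1, 0.0]),
    ("specs.pdf#p4", [0.0, 1.0, 0.2]),
    ("faq.pdf#p2",  [0.8, 0.2, 0.1]),
]
query = [1.0, 0.0, 0.0]
print(top_k(query, docs))  # intro.pdf#p1 ranks first, faq.pdf#p2 second
```

The vector store's job is to make this ranking fast at scale; the semantics are exactly this comparison.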
Everything you need to launch the pipeline out-of-the-box.
```bash
# 1. Clone & install
git clone https://github.com/Phani3108/Sentinel-AI.git
cd Sentinel-AI
pip install -r requirements.txt
npm install --prefix web

# 2. Boot local infrastructure (requires Redis + Celery)
docker-compose -f docker/docker-compose.yml up -d redis worker

# 3. Start the FastAPI + WebSocket backend
uvicorn api.main:app --port 8080

# 4. Launch the Next.js UI (Google Labs theme)
cd web && npm run dev
# Open http://localhost:3000
```