Welcome to SentinelOps! This guide is designed to get you up and running locally, from scratch, in under 5 minutes. We use Docker to keep the setup reproducible and standardized.
Note to Hackathon Judges / Evaluators: We highly recommend following this guide closely. It spins up the heavy AI workers and databases inside Docker, ensuring you don't have to install Redis or PostgreSQL locally.
Before you begin, ensure you have the following installed and running:
- Docker Desktop: Download here (MUST BE RUNNING)
- Node.js 18+: Download here
- Python 3.11+: Download here
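Before continuing, it can save time to confirm your toolchain meets these minimums. The `check_major` helper below is a hypothetical convenience for this guide, not part of SentinelOps:

```shell
# Hypothetical helper: verify a version string meets a minimum major version.
check_major() {
  ver="${1#v}"              # strip a leading "v" (node prints v18.x.y)
  [ "${ver%%.*}" -ge "$2" ] # compare the major component
}

# Example with literal version strings:
check_major "v18.17.0" 18 && echo "Node.js OK"
check_major "3.11.4" 3 && echo "Python OK"
```

In practice you would feed it real output, e.g. `check_major "$(node --version)" 18`, and confirm Docker with `docker info`.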
- Navigate to the backend directory: `cd sentinelops-backend`
- Create your `.env` file from the provided template: `cp .env.example .env`
- Open `.env` and fill in your keys:
  - `OPENAI_API_KEY`: Required for LLM Root Cause Analysis. Get one from OpenAI.
  - `GITHUB_TOKEN`: Required to sync repositories and post commit statuses. Create a Classic PAT with `repo` scopes here.
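After copying the template, your `.env` should contain at least the two keys above. A minimal sketch with placeholder values (leave any other variables from `.env.example` at their defaults):

```bash
# .env — keep this file out of version control
OPENAI_API_KEY=sk-your-openai-key-here
GITHUB_TOKEN=ghp_your-classic-pat-here
```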
- Navigate to the frontend directory: `cd ../sentinelops-frontend`
- Create your `.env.local` file from the template: `cp .env.local.example .env.local`
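The authoritative variable names live in `.env.local.example`. Purely as an illustration, a Next.js-style frontend typically points at the backend like so (the variable name here is an assumption, not taken from the project):

```bash
# .env.local — hypothetical example; use the names from .env.local.example
NEXT_PUBLIC_API_URL=http://localhost:8000
```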
In your terminal (from the `sentinelops-backend` folder):

```bash
docker compose up -d
```

Wait ~30 seconds for PostgreSQL and Redis to initialize. This command starts the database, the in-memory cache, the Celery worker for background AI tasks, and Celery beat for scheduling.
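Before moving on, you can confirm the services actually came up (the `api` service name matches the commands used later in this guide; the rest depend on the project's `docker-compose.yml`):

```bash
docker compose ps                   # each service should report "running"/"healthy"
docker compose logs --tail=20 api   # check the API booted without tracebacks
```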
Run these one-time commands to prep the system with realistic data:

```bash
# 1. Train the Machine Learning Risk Model locally
docker compose exec api python -m app.ml.train

# 2. Seed the Demo Data (generates mock incidents, risks, and health tiles)
docker compose exec api env PYTHONPATH=. python scripts/seed_demo_data.py
```

Open a new terminal tab:

```bash
cd sentinelops-frontend
npm install
npm run dev
```

Take SentinelOps for a spin! Once everything is running, explore these URLs:
- 📊 Main AI Dashboard: http://localhost:3000/dashboard
  View aggregate system pulse, build times, and the top-level risk heatmap.
- 🛡️ PR Gatekeeper: http://localhost:3000/pull-requests
  View dynamic risk scores and logistic regression outputs for pre-seeded PRs.
- 🚨 Incident Explorer: http://localhost:3000/incidents
  Select an incident to see AI-explained root causes, vector similarity search results, and run Digital Twin latency simulations.
- ⚙️ API Swagger Docs: http://localhost:8000/docs
  Explore and test the underlying REST endpoints driving the platform.
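The same API can also be exercised from the command line. For example, assuming the standard FastAPI layout behind the Swagger page (where the machine-readable schema is served at `/openapi.json`):

```bash
# Fetch the OpenAPI schema that powers the Swagger UI
curl -s http://localhost:8000/openapi.json | python3 -m json.tool | head -n 20
```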
- Redis Connection Errors: Ensure Docker Desktop is actually running. If containers aren't spinning up, run `docker compose down -v` followed by `docker compose up -d --build`.
- LLM Explanations Failing: Ensure your `OPENAI_API_KEY` is valid and has sufficient credits.
- Port Conflicts: The backend binds to `8000` and `5432` (Postgres); the frontend binds to `3000`. Ensure these ports are free.
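One quick way to see whether another process already holds one of these ports (this uses bash's built-in `/dev/tcp`; `lsof -i :8000` works too if you have it installed):

```bash
for port in 3000 8000 5432; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```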
🛡️ Built with SentinelOps Decision Intelligence