
# Locus Vision 👁️

**Open-source, real-time video analytics engine optimized for edge devices.**
Turn any camera into a smart sensor: object detection, tracking, line crossing, and zone counting at the edge.



Locus Vision is a self-hosted video analytics platform built for edge devices. It performs real-time object detection and tracking using YOLO + ONNX Runtime, with support for INT8/FP16 quantized models that cut model size by 70% while maintaining accuracy. Everything stays local: no cloud, no subscriptions, no data leaving your network.

*(Demo video: demo.mp4)*

## Features

### Real-Time Monitoring

- 🎯 **Detection & Tracking** – YOLO models (v5–v11) via ONNX Runtime + ByteTrack multi-object tracking across concurrent streams
- 🔲 **Spatial Analytics** – Polygon zones for occupancy counting, directional lines for crossing detection, dwell time, capacity alerts
- 📹 **Live Streams** – MJPEG video with overlaid detections, real-time heatmaps, and event feeds
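The core of polygon-zone occupancy counting is a point-in-polygon test against each tracked object's centroid. A minimal standalone sketch (not the engine's actual implementation; `in_zone` and the sample coordinates are illustrative):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def in_zone(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting point-in-polygon test: count how many polygon edges
    a horizontal ray extending right from `point` crosses."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            # x coordinate where the edge crosses that y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Occupancy = number of detection centroids inside the zone
zone = [(0, 0), (4, 0), (4, 4), (0, 4)]
centroids = [(1, 1), (5, 2), (3, 3)]
occupancy = sum(in_zone(c, zone) for c in centroids)
print(occupancy)  # 2
```

An odd number of crossings means the point is inside; the same test, run per frame per zone, yields the occupancy counts and dwell-time accumulators described above.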

### Intelligence & Insights

- 🧠 **Adaptive Models** – Automatically detects hardware (Hailo, CUDA, CoreML, CPU ARM) and picks the optimal model format. Download models with one click from Settings.
- 📊 **Rich Analytics** – Peak hours analysis, hourly aggregations, CSV/JSON export, Prometheus metrics endpoint
- 🎞️ **Batch Processing** – Upload videos for offline analysis with a crash-resilient job queue
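A crash-resilient job queue can be built on a single SQLite table: jobs move `pending → running → done`, and anything left `running` after a crash is re-queued on startup. A sketch under assumed names (this is not Locus Vision's actual schema):

```python
import sqlite3

# Hypothetical jobs table; column and status names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        video_path TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending'  -- pending | running | done
    )
""")

def enqueue(path: str) -> None:
    with conn:  # commits (or rolls back) the INSERT atomically
        conn.execute("INSERT INTO jobs (video_path) VALUES (?)", (path,))

def claim_next():
    """Claim the oldest pending job and mark it running in one transaction."""
    with conn:
        row = conn.execute(
            "SELECT id, video_path FROM jobs WHERE status = 'pending' "
            "ORDER BY id LIMIT 1"
        ).fetchone()
        if row:
            conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
        return row

def recover_on_startup() -> None:
    # Jobs left 'running' by a crashed worker go back to 'pending'
    with conn:
        conn.execute("UPDATE jobs SET status = 'pending' WHERE status = 'running'")

enqueue("uploads/dock_cam.mp4")
job = claim_next()
print(job)  # (1, 'uploads/dock_cam.mp4')
```

Because every state change is a committed transaction, a worker crash mid-job loses no queue state; the recovery pass simply re-exposes unfinished work.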

### Deployment & Integration

- 📷 **Camera Flexibility** – RTSP streams, ONVIF auto-discovery, USB webcams, V4L2 hardware decoding
- 🔒 **Access Control** – JWT authentication, role-based access (admin/viewer), rate-limited login
- 📈 **System Insights** – Per-camera FPS breakdown, CPU/memory/storage monitoring, data archival with Parquet

## Use Cases

- **Retail & Foot Traffic** – Count visitors entering/exiting zones, measure dwell time across store areas
- **Warehouse & Logistics** – Monitor loading docks, track forklift movement, detect restricted zone breaches
- **Parking & Access Control** – Track occupancy in real time, log vehicle entry/exit with line crossing
- **Wildlife & Environmental** – Observe animal activity patterns, generate spatial heatmaps from trail cameras

## Tech Stack

| Layer | Technology |
| --- | --- |
| Frontend | SvelteKit 2, Svelte 5, TypeScript |
| Styling | Tailwind CSS 4, shadcn-svelte |
| Backend | FastAPI, Python 3.11+ |
| Database | SQLite (async), DuckDB (analytics) |
| AI / Vision | ONNX Runtime, YOLO (v5–v11), ByteTrack, OpenCV, Hailo-8L |
| Auth | JWT + Argon2id |

## Quick Start

### Docker (recommended)

```sh
git clone https://github.com/kongesque/locus-vision.git
cd locus-vision
docker compose up --build
```

Open http://localhost:3000 (app) or http://localhost:8000/api/docs (API docs).

### Manual

```sh
# Clone & install
git clone https://github.com/kongesque/locus-vision.git
cd locus-vision && pnpm install

# Set up backend
cd backend && python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt && cd ..

# Run
pnpm dev
```

Open http://localhost:5173 (app) or http://localhost:8000/api/docs (API docs).

## Model Management

Models are managed from Settings > Models in the UI. The system auto-detects your hardware (Hailo, CUDA, CoreML, or CPU) and shows which model formats are compatible.
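The precision-selection policy can be illustrated by mapping ONNX Runtime's available execution providers to a model format. The provider IDs below are real ONNX Runtime names, but the mapping itself is a hypothetical sketch of the idea, not the app's actual detection code (which also probes for Hailo via its own runtime):

```python
import platform

def pick_precision(providers: list) -> str:
    """Illustrative policy: CUDA -> FP16, CoreML -> FP32,
    bare ARM CPU (the Pi case) -> INT8, otherwise FP32."""
    if "CUDAExecutionProvider" in providers:
        return "fp16"
    if "CoreMLExecutionProvider" in providers:
        return "fp32"
    if platform.machine() in ("aarch64", "arm64"):
        return "int8"
    return "fp32"

# In the app this would be fed by the real provider list:
#   import onnxruntime as ort
#   precision = pick_precision(ort.get_available_providers())
print(pick_precision(["CPUExecutionProvider"]))
```

`onnxruntime.get_available_providers()` reports which backends the installed runtime can actually use, so the same code adapts across a Pi, a CUDA desktop, and a Mac.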

**Downloading models:**

- Click "Download" next to any model to fetch it from GitHub Releases
- The best precision for your hardware is automatically selected (INT8 on Pi, FP32 on Mac/desktop)
- No additional dependencies required on your machine

**Current models:** YOLO11 (n/s/m/l/x), YOLOv8 (n/s), YOLOv6n, YOLOv5 variants, YOLOX. See MODELS.md for the full list and available formats per model.

**For developers:** to add new models to the release, export them with the CLI tool (requires `ultralytics` on your dev machine only):

```sh
source backend/.venv/bin/activate
pip install ultralytics
python backend/scripts/export_model.py yolo11n --int8
# Then upload to the GitHub release and update model_catalog.json with the download URL
```

## Performance

All numbers are inference-only (ONNX Runtime, 640×640 input, 15 s timed run after 3 s warmup). End-to-end stream FPS is lower due to video decode, tracking, and analytics overhead.
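The derived columns in the tables below can be reproduced from raw per-frame latency samples; a minimal sketch (the benchmark script's actual statistics code may differ, and the synthetic samples are made up):

```python
import random
import statistics

def summarize(latencies_ms: list) -> dict:
    """Derive the table's columns: FPS = 1000 / median latency,
    p99 via a simple nearest-rank percentile over sorted samples."""
    s = sorted(latencies_ms)
    median = statistics.median(s)
    p99 = s[min(len(s) - 1, int(0.99 * len(s)))]
    return {"fps": round(1000.0 / median), "median_ms": median, "p99_ms": p99}

# Synthetic samples around ~5 ms/frame, roughly the YOLO11n-on-M5 regime
random.seed(0)
samples = [5.0 + random.uniform(-0.3, 1.5) for _ in range(3000)]
stats = summarize(samples)
print(stats)
```

Deriving FPS from the median rather than the mean keeps a few slow outlier frames from dragging the headline number down, which is why the tables also report p99 separately.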

### High-compute reference – Apple M5 (16 GB)

| Model | Size | FPS | Latency (median) | Latency (p99) |
| --- | --- | --- | --- | --- |
| YOLO11n | 10 MB | 200 | 5.0 ms | 6.6 ms |
| YOLO11s | 36 MB | 118 | 8.5 ms | 9.9 ms |
| YOLO11m | 77 MB | 62 | 16.1 ms | 19.4 ms |

*FP32, CoreML (Apple Neural Engine), ONNX Runtime 1.24.2. 15 s timed run after 3 s warmup, 2 s cooldown between models. FPS derived from median latency.*

### Edge reference – Raspberry Pi 5 (8 GB)

#### CPU-only (ARM Cortex-A76)

| Model | Precision | Size | FPS | Latency (median) | Latency (p99) |
| --- | --- | --- | --- | --- | --- |
| YOLO11n | FP32 | 10 MB | 5 | 188.9 ms | 275.5 ms |
| YOLO11n | INT8 | 3 MB | 14 | 69.9 ms | 93.7 ms |

*ONNX Runtime 1.24.4. Static INT8 quantization with QDQ nodes. 15 s timed run after 3 s warmup, 2 s cooldown between models. FPS derived from median latency.*

#### Hailo-8L AI HAT+ (PCIe, 13 TOPS)

| Model | Size | FPS | HW Latency |
| --- | --- | --- | --- |
| YOLOv6n | 14 MB | 355 | 7.0 ms |
| YOLOv5n-seg | 7 MB | 65 | 14.2 ms |
| YOLOv8s | 35 MB | 59 | 13.1 ms |
| YOLOv5s-personface | 26 MB | 64 | 13.7 ms |
| YOLOX-s | 21 MB | 61 | 14.8 ms |
| YOLOv8s-pose | 23 MB | 51 | 18.9 ms |

*HailoRT 4.23.0, HEF format, measured with `hailortcli benchmark` in streaming mode (includes host-side DMA transfer), 15 s per model. End-to-end app FPS will be lower due to host-side preprocessing (letterbox resize, normalization) and postprocessing (NMS, tracking, zone analytics). The models above are compiled from the pre-built Hailo model zoo; YOLO11 HEF models are not yet available there.*

To reproduce:

```sh
# CPU benchmark
cd backend && source .venv/bin/activate
python scripts/benchmark_inference.py

# Hailo benchmark
hailortcli benchmark /usr/share/hailo-models/<model>.hef
```

## Contributing

Contributions are welcome: bug reports, feature requests, and pull requests all help. See open issues for where to start.

## License

MIT License


Made with ❤️ by kongesque