Powerful EBPF-based Network & System Inspection Manager for Your Kubernetes
A modern, full-featured web interface for Inspektor Gadget that brings powerful eBPF-based observability tools to your Kubernetes clusters with real-time monitoring, historical analysis, and an intuitive user experience.
PENNY transforms complex eBPF-based observability into an accessible, visual experience:
- 🎯 No kubectl required - Manage gadgets through an intuitive web interface
- 📊 Real-time streaming - Live event updates via WebSocket connections
- 📚 Historical analysis - Review past sessions with session replay
- 🔄 Multiple sessions - Run and monitor multiple gadgets simultaneously
- 🌓 Modern UI - Beautiful dark/light mode with responsive design
- 💾 Data persistence - Store and query historical events with TimescaleDB
- 🚀 Production ready - Scalable architecture with Redis for distributed sessions
```
┌─────────────────────────────────────────────────────────────────────┐
│ User Browser │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ PENNY Web UI (React + TypeScript) │ │
│ │ • Gadget Catalog • Active Sessions • History & Replay │ │
│ │ • Dark Mode • Real-time Tables • Filtering │ │
│ └───────────┬──────────────────────────────┬───────────────────┘ │
└──────────────┼──────────────────────────────┼──────────────────────┘
│ REST API │ WebSocket
│ │
┌──────────────▼──────────────────────────────▼──────────────────────┐
│ Frontend Service (Nginx) │
│ • Static assets serving │
│ • API proxy to backend │
└──────────────┬──────────────────────────────────────────────────────┘
│ /api/* → backend:8080
│
┌──────────────▼──────────────────────────────────────────────────────┐
│ Backend Service (Go) │
│ ┌────────────────┬─────────────────┬──────────────────────────┐ │
│ │ HTTP Handlers │ WebSocket Hub │ Gadget Manager │ │
│ │ • REST API │ • Event Stream │ • kubectl-gadget CLI │ │
│ │ • CORS │ • Fan-out │ • Process Management │ │
│ └────────┬───────┴────────┬────────┴──────────┬───────────────┘ │
│ │ │ │ │
│ ┌────────▼────────┬───────▼────────┬──────────▼───────────────┐ │
│ │ Session Store │ Event Consumer │ Storage Layer │ │
│ │ • In-memory │ • Redis Stream │ • TimescaleDB Client │ │
│ │ • Redis sync │ • Event handler│ • Event persistence │ │
│ └────────┬────────┴───────┬────────┴──────────┬───────────────┘ │
└───────────┼────────────────┼───────────────────┼───────────────────┘
│ │ │
┌───────▼────────┐ ┌───▼──────────┐ ┌────▼──────────────┐
│ Redis │ │ Redis Streams│ │ TimescaleDB │
│ • Session data │ │ • Events pub │ │ • Event history │
│ • Distributed │ │ • Real-time │ │ • Session logs │
│ state │ │ distribution│ │ • Hypertables │
└────────────────┘ └──────────────┘ └───────────────────┘
│
┌────────────▼───────────────┐
│ Kubernetes API Server │
│ • RBAC permissions │
│ • Pod/Node access │
└────────────┬───────────────┘
│
┌────────────▼───────────────┐
│ Inspektor Gadget │
│ (DaemonSet) │
│ • eBPF programs │
│ • Kernel tracing │
│ • Network monitoring │
└────────────────────────────┘
```
- Technology: Modern React 18 with TypeScript, Vite for fast builds
- UI Framework: Tailwind CSS with custom dark mode implementation
- Icons: Lucide React for consistent iconography
- State Management: React hooks and context for theme management
- Routing: Client-side navigation between gadget catalog, active sessions, and history
- WebSocket Client: Real-time event streaming from backend
- Key Features:
- Gadget catalog with categorized view
- Live session monitoring with multiple concurrent sessions
- Historical session replay
- Dark/light mode toggle with persistence
- Responsive tables with search, filter, and sort
- Summary views with aggregated statistics
- Technology: Go 1.23 with gorilla/mux for routing
- Architecture: Multi-goroutine event processing with channels
- Components:
- Gadget Manager: Spawns and manages kubectl-gadget processes
- WebSocket Hub: Broadcasts events to connected clients
- Session Store: Manages active gadget sessions with Redis sync
- Storage Layer: Persists events to TimescaleDB
- Event Consumer: Processes Redis streams for real-time distribution
- Key Features:
- Process lifecycle management for gadgets
- Output parsing for different gadget types (trace, snapshot)
- CORS-enabled REST API
- Health check endpoints
- Graceful shutdown handling
- Distributed session support for horizontal scaling
- Purpose: Real-time event streaming and distributed session management
- Configuration:
- Streams for event distribution
- AOF persistence enabled
- LRU eviction policy (256MB max memory)
- Usage:
- Session state synchronization across backend replicas
- Real-time event pub/sub for WebSocket clients
- Temporary event buffering
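For a quick look at what the backend is writing, the stream can be inspected with redis-cli from inside the cluster. This is a sketch: the deployment name and the stream key (`gadget:events` here) are assumptions — check the backend's Event Consumer for the key it actually uses.

```bash
# Inspect the Redis event stream in-cluster.
# NOTE: "deploy/redis" and the stream key "gadget:events" are assumptions.
kubectl exec -n penny deploy/redis -- redis-cli XLEN gadget:events
kubectl exec -n penny deploy/redis -- redis-cli XRANGE gadget:events - + COUNT 5
kubectl exec -n penny deploy/redis -- redis-cli XINFO GROUPS gadget:events
```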
- Purpose: Long-term storage and querying of historical gadget events
- Schema:
  - gadget_events - Hypertable partitioned by time
    - Stores all gadget output events
    - JSONB data for flexible event structure
    - Indexed by session_id, event_type, namespace
  - gadget_sessions - Session metadata
    - Session lifecycle tracking
    - Event count aggregation
- Features:
- Automatic data retention policies
- Compression for historical data
- Efficient time-based queries
- GIN indexes on JSONB data for fast searches
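For ad-hoc inspection outside the UI, the hypertable can be queried directly with psql. A minimal sketch, assuming the credentials from the default POSTGRES_URL and a deployment named timescaledb; only the columns documented above are referenced.

```bash
# Count recent events per type and namespace straight from TimescaleDB.
# NOTE: the deployment name "timescaledb" is an assumption; user/database come
# from the default POSTGRES_URL (gadget / gadget_events).
kubectl exec -n penny deploy/timescaledb -- \
  psql -U gadget -d gadget_events -c \
  "SELECT event_type, namespace, count(*) AS events
     FROM gadget_events
     GROUP BY event_type, namespace
     ORDER BY events DESC
     LIMIT 10;"
```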
- Deployment: DaemonSet running on all cluster nodes
- Integration: Accessed via kubectl-gadget CLI embedded in backend
- Supported Gadgets:
  - trace_sni - TLS SNI monitoring
  - trace_tcp - TCP connection tracing
  - snapshot_process - Process snapshots
  - snapshot_socket - Socket listings
```
User clicks "Start Gadget"
│
▼
Frontend sends POST /api/sessions
│
▼
Backend creates session in SessionStore
│
▼
Backend spawns kubectl-gadget process
│
▼
kubectl-gadget starts eBPF program on nodes
│
▼
Session ID returned to frontend
│
▼
Frontend opens WebSocket connection
```
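The same flow can be exercised without the UI. A hedged sketch using the port-forward from the Quick Start; the JSON field names are assumptions — check the browser's network tab or backend/internal/models for the real request schema.

```bash
# Start a trace_tcp session through the REST API (field names are assumptions).
curl -s -X POST http://localhost:3000/api/sessions \
  -H 'Content-Type: application/json' \
  -d '{"gadget": "trace_tcp", "namespace": "default"}'

# List active sessions to confirm it is running.
curl -s http://localhost:3000/api/sessions
```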
```
eBPF program captures kernel event
│
▼
Inspektor Gadget formats event
│
▼
kubectl-gadget outputs JSON to stdout
│
▼
Backend reads stdout, parses JSON
│
├─────────────────┬──────────────────┐
▼ ▼ ▼
WebSocket Hub Redis Stream TimescaleDB
(immediate) (distributed) (persistent)
│ │ │
▼ ▼ ▼
Connected Other Backend Event history
Clients            Instances           stored
```
```
User opens History view
│
▼
Frontend fetches GET /api/history
│
▼
Backend queries TimescaleDB
│
▼
Returns session list with metadata
│
▼
User clicks on session to replay
│
▼
Frontend fetches GET /api/history/{sessionId}
│
▼
Backend retrieves all events for session
│
▼
Frontend displays events in read-only mode
```
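The History view is backed by the same endpoints, so past sessions can also be pulled directly (again via the Quick Start port-forward):

```bash
# List historical sessions, then fetch the full event set for one of them.
curl -s http://localhost:3000/api/history
curl -s http://localhost:3000/api/history/<session-id>
```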
- Modern Web UI: React 18 + TypeScript with Vite for lightning-fast development
- Real-time Streaming: WebSocket-based live event streaming with automatic reconnection
- Multiple Gadget Types:
- Trace SNI: Monitor TLS Server Name Indication (SNI) from HTTPS requests
- Trace TCP: Track TCP connections, accepts, and failures with summary statistics
- Snapshot Process: Capture current running processes across cluster
- Snapshot Socket: List open network sockets with protocol and state
- Session Management:
- Run multiple concurrent gadget sessions (no limits)
- Switch between active sessions seamlessly
- Session history with full replay capability
- Session metadata tracking (start time, event count, status)
- Advanced Filtering:
- Filter by namespace, pod name, container
- Client-side search across all event fields
- Sort by any column in table views
- Custom TCP filters (all, accept-only, connect-only, failure-only)
- Data Persistence:
- All events stored in TimescaleDB hypertables
- Automatic time-based partitioning
- Efficient JSONB queries for flexible event structure
- Event Streaming:
- Redis Streams for reliable event distribution
- Consumer group support for horizontal scaling
- Automatic event fan-out to multiple clients
- Active Sessions View: Monitor all running gadgets with live event counts
- Session Replay: Review historical gadget outputs with original formatting
- Responsive Design: Optimized for desktop, tablet, and mobile devices
- Dark Mode: System preference detection with manual toggle and localStorage persistence
- Kubernetes cluster (k3s, k3d, minikube, etc.)
- Inspektor Gadget installed on the cluster (Tested with v0.46.0)
- Podman or Docker for building images
- kubectl configured to access your cluster
If you haven't already installed Inspektor Gadget on your cluster:
```bash
kubectl gadget deploy
```
Verify the installation:
```bash
kubectl gadget version
```
The build script automatically detects whether you're using Podman or Docker:
```bash
./build.sh
```
For k3s, import the images:
```bash
# Using Podman
podman save penny-backend:latest | sudo k3s ctr images import -
podman save penny-frontend:latest | sudo k3s ctr images import -

# Or using the Makefile
make import-k3s
```
For k3d:
```bash
k3d image import penny-backend:latest penny-frontend:latest -c mycluster
```
For minikube:
```bash
minikube image load penny-backend:latest
minikube image load penny-frontend:latest
```
Then deploy all Kubernetes resources:
```bash
./deploy.sh
```
Option 1: Port Forward
```bash
kubectl port-forward -n penny svc/frontend 3000:80
```
Then open http://localhost:3000

Option 2: NodePort

Access via NodePort (default: 30080):
```bash
# For k3s/k3d
http://localhost:30080

# For other clusters, get the node IP
kubectl get nodes -o wide
# Then access: http://<NODE_IP>:30080
```
Option 3: Ingress (Optional)

Deploy the Ingress resource:
```bash
kubectl apply -f k8s/ingress.yaml
```
Add to /etc/hosts:
```
127.0.0.1 penny.lima.local
```
Access: http://penny.lima.local
- Select a gadget from the catalog (Trace SNI, Trace TCP, Snapshot Process, or Snapshot Socket)
- Configure filtering options (namespace, pod name, etc.)
- Click "Start" to begin the gadget session
- View real-time events streaming in the output panel
- Active Sessions View: Click "Active Sessions" in the sidebar to see all running gadgets
- Switch Sessions: Click on any session card to view its live output
- Stop Session: Use the stop button on any session
- Multiple Sessions: Run multiple gadgets simultaneously
- History View: Access historical sessions from the sidebar
- Session Replay: Click on any past session to replay its events
- Search & Filter: Find specific sessions by type, namespace, or time
Monitor TLS Traffic:
- Start a "Trace SNI" gadget
- Make HTTPS requests from your pods
- See SNI data (domains, IPs, ports) in real-time
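If no workload is generating TLS traffic yet, a throwaway curl pod is an easy way to produce SNI events (the image and target URL are just examples):

```bash
# Generate a few HTTPS requests from inside the cluster.
kubectl run sni-test --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in 1 2 3; do curl -sS -o /dev/null https://example.com; done'
```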
Debug TCP Connections:
- Start a "Trace TCP" gadget
- Filter by namespace if needed
- View connection attempts, accepts, and failures
- Switch to "Summary View" for aggregated statistics
Inspect Running Processes:
- Start a "Snapshot Process" gadget
- Get instant snapshot of all processes
- Filter and sort by PID, command, namespace
Analyze Network Sockets:
- Start a "Snapshot Socket" gadget
- See all open TCP/UDP sockets
- Filter by status (LISTEN, ESTABLISHED, etc.)
```
.
├── backend/ # Go backend service
│ ├── cmd/
│ │ └── server/
│ │ └── main.go # Application entry point
│ ├── internal/
│ │ ├── gadget/
│ │ │ └── gadget.go # kubectl-gadget process manager
│ │ ├── handler/
│ │ │ ├── handler.go # HTTP REST handlers
│ │ │ └── websocket.go # WebSocket hub and connections
│ │ ├── models/
│ │ │ └── models.go # Data structures (Session, Event)
│ │ ├── sessionstore/
│ │ │ └── sessionstore.go # Session state management + Redis sync
│ │ ├── storage/
│ │ │ └── storage.go # TimescaleDB persistence layer
│ │ └── parser/
│ │ └── parser.go # Gadget output parsing (JSON, text)
│ ├── Dockerfile # Multi-stage build (Alpine + kubectl-gadget)
│ ├── go.mod
│ └── go.sum
│
├── frontend/ # React frontend
│ ├── public/
│ │ ├── logo.svg # PENNY logo
│ │ └── logo-with-text.svg # Logo with text variant
│ ├── src/
│ │ ├── components/
│ │ │ ├── GadgetCard.tsx # Catalog item component
│ │ │ ├── Runner.tsx # Gadget execution view
│ │ │ ├── ActiveSessionsView.tsx
│ │ │ ├── HistoryView.tsx # Session history browser
│ │ │ ├── SessionReplay.tsx # Historical session viewer
│ │ │ ├── SessionPicker.tsx # Multi-session selector
│ │ │ ├── ThemeToggle.tsx # Dark mode toggle
│ │ │ ├── TraceSNITable.tsx # SNI trace visualization
│ │ │ ├── TCPSummaryTable.tsx # TCP aggregated stats
│ │ │ ├── ProcessSnapshotTable.tsx
│ │ │ └── SocketSnapshotTable.tsx
│ │ ├── contexts/
│ │ │ └── ThemeContext.tsx # Theme state management
│ │ ├── services/
│ │ │ └── api.ts # REST and WebSocket client
│ │ ├── types.ts # TypeScript interfaces
│ │ ├── App.tsx # Main application component
│ │ └── main.tsx # React entry point
│ ├── Dockerfile # Multi-stage (Node build + Nginx serve)
│ ├── nginx.conf # Nginx proxy configuration
│ ├── vite.config.ts # Vite build configuration
│ ├── tailwind.config.js # Tailwind CSS customization
│ ├── package.json
│ └── tsconfig.json
│
├── k8s/ # Kubernetes manifests
│ ├── namespace.yaml # penny namespace
│ ├── backend-rbac.yaml # ServiceAccount + ClusterRole + Binding
│ ├── backend-deployment.yaml # Backend + Service (ClusterIP)
│ ├── frontend-deployment.yaml # Frontend + Service (NodePort 30080)
│ ├── redis-deployment.yaml # Redis + PVC + ConfigMap
│ ├── timescaledb-deployment.yaml # TimescaleDB + PVC + Init Job
│ └── ingress.yaml # Traefik ingress (optional)
│
├── demo/ # Demo services for testing
│ ├── README.md
│ ├── deploy.sh
│ ├── Dockerfile
│ ├── app.py # Python Flask test services
│ └── *.yaml # Demo deployments (apples, oranges, bananas)
│
├── build.sh # Build and tag Docker images
├── push.sh # Push images to Docker Hub
├── deploy.sh # Deploy all K8s resources
├── logo.svg # PENNY logo (for README)
└── README.md                   # This file
```
- GET /api/gadgets - List available gadgets
- GET /api/sessions - List active sessions
- POST /api/sessions - Start a new gadget session
- DELETE /api/sessions/{sessionId} - Stop a session
- GET /api/history - Get historical sessions
- GET /api/history/{sessionId} - Get specific session history
- GET /health - Health check
- WS /ws/{sessionId} - Stream real-time gadget output for a session
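A quick smoke test of the API and the WebSocket stream. This sketch assumes the backend Service is named backend (matching deployment/backend used elsewhere in this README) and uses websocat, an optional third-party WebSocket client.

```bash
# Port-forward the backend directly (service name "backend" is an assumption).
kubectl port-forward -n penny svc/backend 8080:8080 &

curl -s http://localhost:8080/health
curl -s http://localhost:8080/api/gadgets

# Tail live events for a running session (requires websocat to be installed).
websocat "ws://localhost:8080/ws/<session-id>"
```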
This project supports both Podman and Docker. The build script automatically detects which one is available on your system.
Platform Support: Images are built for linux/amd64 by default, which is compatible with most Kubernetes clusters.
Podman is a daemonless container engine that can run containers without root privileges:
```bash
# Build images (auto-detects ARM64 or AMD64)
./build.sh   # Automatically uses podman if available

# Import to k3s
make import-k3s

# Or manually
podman save penny-backend:latest | sudo k3s ctr images import -
podman save penny-frontend:latest | sudo k3s ctr images import -
```
Note for Apple Silicon (M1/M2/M3): The build script automatically detects ARM64 architecture and builds native images for better performance.
If you prefer Docker, the scripts will automatically use it if Podman is not available:
```bash
# Build images (auto-detects platform)
./build.sh   # Automatically uses docker if podman not found

# Import to k3s
docker save penny-backend:latest | sudo k3s ctr images import -
docker save penny-frontend:latest | sudo k3s ctr images import -
```
To run the backend locally:
```bash
cd backend
go mod download
go run cmd/server/main.go
```
To run the frontend locally:
```bash
cd frontend
npm install
npm run dev
```
The frontend will be available at http://localhost:3000 with hot reload.
| Variable | Description | Default | Required |
|---|---|---|---|
| `PORT` | HTTP server port | `8080` | No |
| `REDIS_ADDR` | Redis server address | `redis:6379` | Yes |
| `REDIS_PASSWORD` | Redis password | (empty) | No |
| `POSTGRES_URL` | PostgreSQL connection string | `postgres://gadget:password@timescaledb:5432/gadget_events` | Yes |
Example PostgreSQL URL format:
postgres://username:password@host:port/database?sslmode=disable
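Environment variables are set on the backend Deployment; for a one-off change, kubectl set env works too. A sketch, with placeholder credentials:

```bash
# Override the database connection for the backend deployment.
kubectl set env deployment/backend -n penny \
  POSTGRES_URL='postgres://gadget:<password>@timescaledb:5432/gadget_events?sslmode=disable'

# Confirm the value took effect.
kubectl set env deployment/backend -n penny --list | grep POSTGRES_URL
```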
Create a .env file in the frontend directory for local development:
| Variable | Description | Default | Required |
|---|---|---|---|
| `VITE_API_URL` | Backend API base URL | `/api` | No |
| `VITE_WS_URL` | WebSocket URL | `window.location.host` | No |
Note: In production (Kubernetes), the frontend nginx proxies API requests to the backend, so these variables typically don't need to be changed.
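For completeness, a minimal frontend/.env for local development against a locally running backend (the values assume the backend's default port of 8080; adjust to wherever your backend is reachable):

```bash
# Create frontend/.env for local development (values are assumptions).
cat > frontend/.env <<'EOF'
VITE_API_URL=http://localhost:8080/api
VITE_WS_URL=localhost:8080
EOF
```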
Minimum recommended resources:
- Backend: 100m CPU, 128Mi RAM (request) / 500m CPU, 512Mi RAM (limit)
- Frontend: 50m CPU, 64Mi RAM (request) / 200m CPU, 256Mi RAM (limit)
- Redis: 100m CPU, 128Mi RAM (request) / 500m CPU, 512Mi RAM (limit)
- TimescaleDB: 200m CPU, 256Mi RAM (request) / 1000m CPU, 1Gi RAM (limit)
The backend supports horizontal scaling:
```bash
kubectl scale deployment backend -n penny --replicas=3
```
Requirements for multi-replica deployment:
- Redis must be accessible to all backend pods (already configured)
- Session state is synchronized via Redis
- WebSocket connections are load-balanced by the ingress/service
- Event consumers use Redis consumer groups to avoid duplicate processing
Both Redis and TimescaleDB require persistent volumes:
- Redis: 5Gi for session data and event streams
- TimescaleDB: 10Gi for historical event data (scales with retention period)
Data Retention: Configure TimescaleDB retention policies in the init job:
```sql
SELECT add_retention_policy('gadget_events', INTERVAL '30 days');
```
Security considerations:
- RBAC Permissions: Backend requires cluster-level permissions to run kubectl-gadget
- Network Policies: Consider restricting backend network access
- Secrets Management: Store TimescaleDB credentials in Kubernetes Secrets (already configured)
- TLS/HTTPS: Use ingress with TLS for production deployments
- Authentication: Currently no authentication - add auth proxy if exposing publicly
Symptom: Frontend shows 502 errors, API requests fail
Cause: Backend failed to initialize connections to Redis or TimescaleDB (usually because backend started before these services were ready)
Solution:
```bash
# Restart the backend pod to reinitialize connections
kubectl delete pods -n penny -l app=penny-backend

# Verify backend logs show successful connections
kubectl logs -n penny -l app=penny-backend
# Should see: "Connected to Redis", "Connected to PostgreSQL", "Session store initialized"
```
Symptom: Pods stuck in Pending, ContainerCreating, or CrashLoopBackOff
Diagnosis:
```bash
# Check pod status and events
kubectl get pods -n penny
kubectl describe pod -n penny <pod-name>

# Check logs for errors
kubectl logs -n penny -l app=penny-backend
kubectl logs -n penny -l app=penny-frontend
kubectl logs -n penny -l app=redis
kubectl logs -n penny -l app=timescaledb
```
Common issues:
- PVC not bound: Check if persistent volumes are available
  ```bash
  kubectl get pvc -n penny
  ```
- Image pull errors: Verify images exist and are accessible
  ```bash
  kubectl describe pod -n penny <pod-name> | grep -A 10 Events
  ```
- Resource constraints: Check if cluster has enough resources
  ```bash
  kubectl top nodes
  ```
Symptom: Gadgets fail to start, or show "kubectl-gadget command not found"
Diagnosis:
```bash
# Verify Inspektor Gadget is installed
kubectl get pods -n gadget
kubectl gadget version

# Check if kubectl-gadget is available in backend
kubectl exec -n penny deployment/backend -- kubectl-gadget version
```
Solution:
```bash
# Install Inspektor Gadget if not present
kubectl gadget deploy

# Wait for daemonset to be ready
kubectl wait --for=condition=ready pod -l k8s-app=gadget -n gadget --timeout=300s
```
Symptom: Gadgets fail with "forbidden" or "unauthorized" errors
Diagnosis:
```bash
# Check if RBAC resources exist
kubectl get clusterrole penny-backend-role
kubectl get clusterrolebinding penny-backend-binding
kubectl get serviceaccount -n penny penny-backend

# Verify backend pod is using the correct service account
kubectl get pod -n penny -l app=penny-backend -o jsonpath='{.items[0].spec.serviceAccountName}'
```
Solution:
```bash
# Reapply RBAC configuration
kubectl apply -f k8s/backend-rbac.yaml
```
Symptom: Events not streaming, "WebSocket connection error" in browser console
Diagnosis:
- Check backend is running and healthy:
  ```bash
  kubectl get pods -n penny -l app=penny-backend
  kubectl exec -n penny deployment/backend -- wget -qO- http://localhost:8080/health
  ```
- Check browser console for specific errors:
  - Failed to connect to WebSocket: Backend not reachable
  - WebSocket closed with code 1006: Network interruption
  - 404 Not Found: Incorrect WebSocket URL
Solution:
- Verify API endpoint is accessible from browser
- Check if ingress/service is properly configured
- Ensure no network policies blocking WebSocket upgrades
- For ingress, ensure WebSocket headers are preserved:
  ```yaml
  nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  ```
Symptom: timescaledb-init job shows errors or doesn't complete
Diagnosis:
```bash
kubectl logs -n penny job/timescaledb-init
```
Common issues:
- Connection timeout: TimescaleDB not ready yet
  ```bash
  # Check TimescaleDB pod status
  kubectl get pods -n penny -l app=timescaledb
  ```
- Permission denied: Check credentials in secret
  ```bash
  kubectl get secret -n penny timescaledb-secret -o yaml
  ```
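To compare the credentials the backend actually uses, the secret values can be decoded directly. The key name POSTGRES_PASSWORD is an assumption — list the keys first:

```bash
# List the keys stored in the secret, then decode one of them.
# NOTE: the key name "POSTGRES_PASSWORD" is an assumption.
kubectl get secret -n penny timescaledb-secret -o jsonpath='{.data}'
kubectl get secret -n penny timescaledb-secret \
  -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d; echo
```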
Solution:
```bash
# Delete and recreate the job (it will retry)
kubectl delete job -n penny timescaledb-init
kubectl apply -f k8s/timescaledb-deployment.yaml
```
Symptom: Browser shows empty page, no UI visible
Diagnosis:
- Check browser console for JavaScript errors
- Verify frontend pod is running:
kubectl get pods -n penny -l app=penny-frontend
- Check if static assets are served:
  ```bash
  curl -I http://localhost:30080/
  curl -I http://localhost:30080/logo.svg
  ```
Solution:
- Rebuild frontend image with correct build output
- Check nginx configuration in frontend/nginx.conf
- Verify file permissions in the image (should be readable)
Symptom: Backend logs show "failed to connect to Redis"
Diagnosis:
```bash
# Check Redis is running
kubectl get pods -n penny -l app=redis

# Test connection from backend
kubectl exec -n penny deployment/backend -- nc -zv redis 6379
```
Solution:
```bash
# Restart Redis if needed
kubectl delete pods -n penny -l app=redis

# Then restart backend to reconnect
kubectl delete pods -n penny -l app=penny-backend
```
Symptom: Pods being OOMKilled or running out of memory
Diagnosis:
```bash
# Check current resource usage
kubectl top pods -n penny

# Check resource limits
kubectl describe pod -n penny <pod-name> | grep -A 10 "Limits\|Requests"
```
Solution:
- Increase memory limits in deployment YAML
- For TimescaleDB: Add retention policy to delete old data
- For Redis: Tune maxmemory and eviction policy
- Enable compression in TimescaleDB
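Limits can be raised without editing YAML by hand; for example, for the backend (the deployment name matches the scaling example above, and the values are only a starting point):

```bash
# Bump the backend's memory request/limit in place.
kubectl set resources deployment/backend -n penny \
  --requests=memory=256Mi --limits=memory=1Gi

# Watch the rollout and re-check usage afterwards.
kubectl rollout status deployment/backend -n penny
kubectl top pods -n penny
```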
To uninstall PENNY, remove the deployment:
```bash
kubectl delete namespace penny
# or using the Makefile
make clean
```
Remove container images:
```bash
# Using Podman
podman rmi penny-backend:latest penny-frontend:latest

# Using Docker
docker rmi penny-backend:latest penny-frontend:latest
```
Planned gadget support includes:
- trace_dns - DNS query monitoring
- trace_exec - Process execution tracing
- trace_open - File open operations
- top_block_io - Block I/O statistics
- top_tcp - TCP traffic statistics
- top_file - File I/O by process
- profile_cpu - CPU profiling
- profile_block_io - I/O profiling
- Export sessions to JSON/CSV/PCAP formats
- Prometheus metrics integration
- Grafana dashboard templates
- Webhook notifications for events
- S3/Object storage backup for historical data
- Saved filter presets
- Custom dashboard layouts
- Event annotations and comments
- Session sharing via URL
- Keyboard shortcuts
- Advanced search with query language
- Event correlation across sessions
- OAuth/OIDC authentication
- Role-based access control (RBAC)
- Namespace-level permissions
- Audit logging
- Session encryption
- API key management
- Multi-cluster support with cluster selector
- Helm chart for easy deployment
- High availability configuration
- Backup and restore procedures
- Resource usage dashboard
- Cost monitoring and optimization
- Event sampling and rate limiting
- Automatic data archival to cold storage
- Query result caching
- Read replicas for TimescaleDB
- Redis cluster mode support
- REST API documentation (OpenAPI/Swagger)
- Client SDKs (Python, Go, JavaScript)
- Gadget plugin system
- Custom event parsers
- Webhook integration framework
- Built-in performance monitoring
- Resource usage alerts
- Automatic error reporting
- Health check dashboard
- SLO/SLA tracking
- Persistent storage for historical data (TimescaleDB)
- Session replay functionality
- Dark mode support
- Multiple concurrent sessions
- Real-time event streaming
- Redis-based distributed sessions
- Horizontal backend scaling
- WebSocket event distribution
- Advanced table filtering and search
- Summary views for aggregated data
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details




