
p-kowadkar/PrometheusAI


πŸ”₯ PrometheusAI: Self-Improving Multi-Agent Research System

License: MIT · Python 3.11+ · You.com API

A production-ready AI research assistant with adaptive learning, real-time streaming, and multi-agent collaboration

PrometheusAI is an advanced research system that employs specialized AI agents to discover, analyze, validate, and synthesize information. It learns from every interaction, continuously improving its strategies while providing real-time results through a beautiful web interface.

Prometheus Architecture


✨ What Makes PrometheusAI Special

  • πŸ€– Multi-Agent Collaboration - Specialized agents (Architect, Scout, Analyst, Validator, Synthesizer) work together
  • 🧠 Self-Improving - Learns from every query, optimizing strategies over time
  • 🌐 Real Web Search - Integrated with You.com API for fresh, accurate information
  • βœ… Fact Validation - Cross-references sources, detects contradictions, finds consensus
  • 🎨 Beautiful Dashboard - Royal black theme with real-time progress and streaming results
  • πŸ”Œ Flexible LLMs - Supports any OpenAI-compatible provider (OpenAI, Anthropic, local models)
  • ⚑ Parallel Processing - Concurrent execution for 3-5x faster results
  • πŸ“Š Advanced Metrics - Reliability scores, consensus tracking, and performance analytics
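The parallel-processing bullet can be pictured as fanning source fetches out across a thread pool instead of looping over them serially. A minimal sketch, not the project's actual code (`fetch_source` is a stand-in for the real You.com fetch):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_source(url: str) -> str:
    # Stand-in for a real network fetch; returns dummy content here.
    return f"content from {url}"

def fetch_all(urls: list[str], max_workers: int = 5) -> list[str]:
    # Fan the fetches out across a thread pool; result order matches input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_source, urls))

print(fetch_all(["https://a.example", "https://b.example"]))
```

With I/O-bound fetches, pooling like this is where the quoted 3-5x speedup comes from.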

πŸš€ Quick Start (30 seconds!)

1. Install Dependencies

pip install -r requirements.txt

2. Configure API Keys

# Copy the example environment file
cp .env.example .env

# Edit .env and add your keys:
# YOUCOM_API_KEY=your_youcom_key_here
# DEFAULT_LLM_API_KEY=your_openai_key_here

3. Launch the Dashboard

python prometheus_dashboard.py

4. Open Your Browser

Visit http://localhost:7860 and start researching! πŸŽ‰

Need more help? See START_HERE.md for detailed setup instructions.


🎯 Key Features

πŸ” Intelligent Research

  • Adaptive Strategies - Automatically adjusts approach based on query type (factual, comparative, temporal, technical, exploratory)
  • Quality Assessment - Evaluates source reliability and information confidence
  • Consensus Detection - Identifies claims verified by multiple sources
  • Contradiction Alerts - Flags conflicting information across sources
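Consensus detection can be approximated by counting how many distinct sources assert each normalized claim; a hedged sketch (the real ValidatorAgent logic is more involved):

```python
from collections import Counter

def find_consensus(claims_by_source: dict[str, list[str]], min_sources: int = 2):
    # Count how many distinct sources assert each normalized claim.
    counts = Counter()
    for claims in claims_by_source.values():
        for claim in {c.strip().lower() for c in claims}:
            counts[claim] += 1
    consensus = [c for c, n in counts.items() if n >= min_sources]
    disputed = [c for c, n in counts.items() if n < min_sources]
    return consensus, disputed

consensus, disputed = find_consensus({
    "source_a": ["Qubits are fragile"],
    "source_b": ["Qubits are fragile", "Room-temp superconductors exist"],
})
print(consensus)  # ['qubits are fragile']
```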

🎨 Modern Dashboard

  • Real-time Streaming - Watch reports being generated section by section
  • Progress Tracking - Live updates from each agent
  • Model Display - See which LLMs are active for each agent
  • Performance Metrics - Detailed reliability, reward scores, and execution stats
  • Learning Visualization - Charts showing strategy performance over time

🧠 Continuous Learning

  • Reward System - Multi-component scoring (Relevance 40%, Completeness 30%, Speed 15%, Cost 15%)
  • Strategy Memory - SQLite database tracks performance across sessions
  • Adaptive Resource Allocation - Increases resources for low-performing query types
  • Historical Analysis - Learns from past successes and failures
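The reward weighting above maps directly to a weighted sum. A sketch of how such a score might be computed (not the exact RewardCalculator implementation), assuming each component is normalized to [0, 1]:

```python
def reward_score(relevance: float, completeness: float,
                 speed: float, cost_efficiency: float) -> float:
    # Weights mirror the split above: Relevance 40%, Completeness 30%,
    # Speed 15%, Cost 15%.
    return (0.40 * relevance
            + 0.30 * completeness
            + 0.15 * speed
            + 0.15 * cost_efficiency)

print(reward_score(0.8, 0.7, 0.9, 0.5))  # ≈ 0.74
```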

πŸ”Œ Flexible Configuration

# Use different models for different agents
ARCHITECT_MODEL=gpt-4o           # Complex reasoning
ANALYST_MODEL=gpt-4o-mini        # Fast analysis
SYNTHESIZER_MODEL=gpt-4o-mini    # Report generation
VALIDATOR_MODEL=gpt-4o-mini      # Cross-validation

# Or use a single model for all
DEFAULT_LLM_MODEL=gpt-4o

# Works with any OpenAI-compatible API
DEFAULT_LLM_BASE_URL=https://api.openai.com/v1
# Or: https://api.anthropic.com, http://localhost:1234/v1, etc.

πŸ“š Documentation

Getting Started

Detailed Guides (in reports/ folder)

System Documentation

Development History


πŸ—οΈ Architecture

Core Components

| Agent | Role | LLM Used |
| --- | --- | --- |
| ArchitectAgent | Analyzes queries, designs research strategies | Configurable (default: gpt-4o) |
| ParallelScoutSwarm | Discovers relevant sources via You.com API | N/A (API-based) |
| EnhancedAnalystPool | Extracts claims and key findings in parallel | Configurable (default: gpt-4o-mini) |
| ValidatorAgent | Cross-references facts, detects contradictions | Configurable (default: gpt-4o-mini) |
| SynthesizerAgent | Creates comprehensive, cited reports | Configurable (default: gpt-4o-mini) |

Support Systems

| Component | Purpose |
| --- | --- |
| LLMClient | Unified interface with caching and streaming |
| FeedbackCollector | Gathers user signals |
| RewardCalculator | Scores research quality |
| StrategyMemory | Persistent learning database |
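StrategyMemory's SQLite-backed persistence can be sketched roughly like this; the table name and columns here are hypothetical, not the project's actual schema:

```python
import sqlite3

# The real system persists to memory/prometheus_memory.db; in-memory here for demo.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS strategy_runs (
                    query_type TEXT, reward REAL, elapsed REAL)""")

def record_run(query_type: str, reward: float, elapsed: float) -> None:
    conn.execute("INSERT INTO strategy_runs VALUES (?, ?, ?)",
                 (query_type, reward, elapsed))
    conn.commit()

def avg_reward(query_type: str) -> float:
    # Average historical reward for this query type; 0.0 if none recorded yet.
    row = conn.execute("SELECT AVG(reward) FROM strategy_runs WHERE query_type = ?",
                       (query_type,)).fetchone()
    return row[0] if row[0] is not None else 0.0

record_run("factual", 0.72, 10.5)
record_run("factual", 0.68, 9.8)
print(avg_reward("factual"))  # ≈ 0.70
```

Aggregates like this are what lets the engine compare strategies across sessions.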

πŸ“Š Performance

Speed Improvements

  • Sequential (V1): 18.55s baseline
  • Parallel (V2): 11.33s (1.6x faster)
  • Adaptive (V4): 10.69s (1.7x faster)
  • With Caching: <1s for repeated queries

Quality Metrics (Latest Version)

  • Relevance: 65-85% (fixed from 4%!)
  • Reliability: 65-90% (improved algorithm)
  • Completeness: 51-85%
  • Overall Reward: 0.55-0.80 (up from 0.33-0.38)
  • Success Rate: 100% with proper API keys

πŸ“¦ Project Structure

prometheus/
β”œβ”€β”€ prometheus_dashboard.py           # πŸš€ MAIN APP - Run this!
β”œβ”€β”€ prometheus_v5_learning.py         # Learning engine (current)
β”œβ”€β”€ prometheus_v4_adaptive.py         # Adaptive strategies (required by v5)
β”œβ”€β”€ prometheus_v3_validation.py       # Validation system (required by v4/v5)
β”œβ”€β”€ llm_client.py                     # Unified LLM client with caching
β”œβ”€β”€ youcom_integration.py             # You.com API integration
β”‚
β”œβ”€β”€ requirements.txt                  # Python dependencies
β”œβ”€β”€ .env.example                      # Environment template
β”œβ”€β”€ .env                              # Your API keys (create this!)
β”‚
β”œβ”€β”€ START_HERE.md                     # Quick start guide
β”œβ”€β”€ PROJECT_STRUCTURE.md              # Project organization
β”œβ”€β”€ README.md                         # This file
β”‚
β”œβ”€β”€ reports/                          # πŸ“š All documentation (22 files)
β”‚   β”œβ”€β”€ USER_GUIDE.md
β”‚   β”œβ”€β”€ QUICK_START_GUIDE.md
β”‚   β”œβ”€β”€ LLM_FLEXIBILITY_GUIDE.md
β”‚   └── ... (see reports/README.md)
β”‚
β”œβ”€β”€ archive/                          # πŸ—„οΈ Old versions (16 files, preserved)
β”‚   β”œβ”€β”€ prometheus_v1_complete.py
β”‚   β”œβ”€β”€ prometheus_v2_parallel.py
β”‚   └── ... (see archive/README.md)
β”‚
β”œβ”€β”€ cache/                            # LLM response cache (auto-managed)
β”œβ”€β”€ memory/                           # Learning database (auto-created)
β”‚   └── prometheus_memory.db
└── output/                           # Generated reports (auto-created)

🎯 Usage Examples

Basic Research

from prometheus_v5_learning import PrometheusAIV5Engine
from youcom_integration import youcom_search, youcom_fetch_content

# Initialize
engine = PrometheusAIV5Engine(youcom_search, youcom_fetch_content)

# Research any topic
result, analysis, session = engine.research_with_learning(
    "What are the latest developments in quantum computing?"
)

# Get results
print(result.final_answer)
print(f"Reliability: {result.metadata['overall_reliability']:.0%}")
print(f"Reward Score: {session.reward_score:.2f}")

Dashboard (Recommended)

# Just run the dashboard!
python prometheus_dashboard.py

# Then visit http://localhost:7860

The dashboard provides:

  • βœ… Real-time progress updates
  • βœ… Streaming report generation
  • βœ… Visual learning analytics
  • βœ… One-click shutdown button
  • βœ… Beautiful UI

πŸ”§ Configuration

Environment Variables

# Required
YOUCOM_API_KEY=your_youcom_api_key
DEFAULT_LLM_API_KEY=your_openai_api_key

# Optional - Per-agent model configuration
ARCHITECT_MODEL=gpt-4o              # Query analysis
ANALYST_MODEL=gpt-4o-mini           # Content analysis
SYNTHESIZER_MODEL=gpt-4o-mini       # Report generation
VALIDATOR_MODEL=gpt-4o-mini         # Fact validation

# Optional - Custom LLM provider
DEFAULT_LLM_BASE_URL=https://api.openai.com/v1
DEFAULT_LLM_MODEL=gpt-4o

# Optional - Caching
USE_CACHE=true                      # Enable/disable caching
CACHE_DIR=./cache                   # Cache directory
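The caching settings above suggest a simple scheme: hash the model and prompt into a deterministic key so identical requests hit the same cache file. A sketch under that assumption (LLMClient's real key format may differ):

```python
import hashlib
import json
import os

CACHE_DIR = "./cache"

def cache_key(model: str, prompt: str) -> str:
    # Deterministic: the same model + prompt always maps to the same key.
    payload = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def cache_path(model: str, prompt: str) -> str:
    return os.path.join(CACHE_DIR, cache_key(model, prompt) + ".json")
```

This is why repeated queries return in under a second: the response is read back from disk instead of calling the LLM again.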

Customizing Strategies

Edit prometheus_v4_adaptive.py to adjust strategies:

STRATEGY_TEMPLATES = {
    QueryType.FACTUAL: ResearchStrategy(
        num_sources=5,      # Number of sources to fetch
        num_analysts=3,     # Parallel analysts
        depth="shallow",    # Analysis depth
        validation_level="basic",
        estimated_time=12.0
    ),
    # ... more strategies
}
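Selecting a template at runtime then reduces to a dictionary lookup with a sensible fallback. A standalone sketch using illustrative stand-ins for the project's QueryType and ResearchStrategy types:

```python
from dataclasses import dataclass
from enum import Enum

class QueryType(Enum):          # illustrative stand-in
    FACTUAL = "factual"
    EXPLORATORY = "exploratory"

@dataclass
class ResearchStrategy:         # illustrative stand-in
    num_sources: int
    num_analysts: int
    depth: str

STRATEGY_TEMPLATES = {
    QueryType.FACTUAL: ResearchStrategy(5, 3, "shallow"),
    QueryType.EXPLORATORY: ResearchStrategy(8, 5, "deep"),
}

def strategy_for(query_type: QueryType) -> ResearchStrategy:
    # Fall back to the broad exploratory template for unmapped query types.
    return STRATEGY_TEMPLATES.get(query_type,
                                  STRATEGY_TEMPLATES[QueryType.EXPLORATORY])

print(strategy_for(QueryType.FACTUAL).num_sources)  # 5
```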

🚒 Production Deployment

Docker (Coming Soon)

docker build -t prometheus:latest .
docker run -d -p 7860:7860 \
  -e YOUCOM_API_KEY="your-key" \
  -e DEFAULT_LLM_API_KEY="your-key" \
  -v ./memory:/app/memory \
  prometheus:latest

Direct Deployment

# 1. Clone and setup
git clone <your-repo>
cd Prometheus
pip install -r requirements.txt

# 2. Configure
cp .env.example .env
# Edit .env with your keys

# 3. Run
python prometheus_dashboard.py

# 4. Access at http://your-server:7860

See reports/DEPLOYMENT_GUIDE.md for complete instructions.


πŸ§ͺ Testing

All core components are tested and working:

# Test dashboard (recommended)
python prometheus_dashboard.py

# Test individual components
python -c "from llm_client import create_llm_client_for_agent; print('βœ… LLM Client OK')"
python -c "from youcom_integration import YouComClient; print('βœ… You.com Integration OK')"
python -c "from prometheus_v5_learning import PrometheusAIV5Engine; print('βœ… Engine OK')"

Test Report: See reports/PROMETHEUS_TEST_REPORT.md for detailed results.


πŸ›£οΈ Recent Updates

October 30, 2025 - Major Enhancements

  • βœ… Fixed Scoring System - Relevance up from 4% to 65-85%!
  • βœ… Royal Black Theme - Beautiful dark UI
  • βœ… Streaming Support - Real-time report generation
  • βœ… Model Display - Shows active LLMs per agent
  • βœ… Shutdown Button - Graceful server stop
  • βœ… Project Cleanup - Organized structure (reports/ and archive/)
  • βœ… Comprehensive Docs - START_HERE.md, PROJECT_STRUCTURE.md

Current Status (v1.0)

  • βœ… Multi-agent architecture
  • βœ… Real You.com API integration
  • βœ… Parallel processing (3-5x faster)
  • βœ… Fact validation with consensus detection
  • βœ… Adaptive strategy selection
  • βœ… Continuous learning system
  • βœ… Flexible LLM support (any OpenAI-compatible provider)
  • βœ… Production-ready web dashboard
  • βœ… Response caching for cost savings

🀝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit changes (git commit -m 'Add AmazingFeature')
  4. Push to branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.


πŸ™ Acknowledgments

  • Built with You.com API for real-time web search
  • Powered by OpenAI and compatible LLM providers
  • Inspired by multi-agent AI research and reinforcement learning
  • Developed using agile sprint methodology

πŸ“ž Support

Need help?


πŸ’‘ Why PrometheusAI?

| Feature | PrometheusAI | Traditional Search |
| --- | --- | --- |
| Multi-Source Analysis | ✅ Automatically | ❌ Manual |
| Fact Validation | ✅ Cross-referenced | ❌ No validation |
| Learns Over Time | ✅ Self-improving | ❌ Static |
| Contradiction Detection | ✅ Automatic | ❌ Manual review |
| Cited Sources | ✅ Always | ⚠️ Sometimes |
| Adaptive Strategies | ✅ Query-specific | ❌ One-size-fits-all |
| Quality Metrics | ✅ Detailed scores | ❌ None |

Built with πŸ”₯ for the You.com Agentic Hackathon 2025

"A self-improving research AI that learns from every interaction"

Get Started: Run python prometheus_dashboard.py and visit http://localhost:7860 πŸš€
