Automated CVE scanner for HTTP endpoints via technology fingerprinting and NVD database matching.
- Complete CVE Database: 177K+ CVEs from 2015 to present
- Auto-Update: Automatic updates from fkie-cad/nvd-json-data-feeds
- Async Scanning: High-speed parallel scanning with aiohttp
- CSV Reports: Detailed exports with CVSS scores and detected technologies
- Smart Cache: Optimized indexing for instant searches
- Advanced Fingerprinting: Technology detection from headers, HTML, and cookies
- 403 Bypass Tools: Integrated WAF bypass modules (nuclear_bypass)
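The fingerprinting step can be pictured as a table of signature patterns applied to response headers. A minimal sketch, with hypothetical patterns (the real scanner also inspects HTML and cookies, and its signature set and matching logic are its own):

```python
import re

# Hypothetical signature table: regexes applied to the Server and
# X-Powered-By headers. Keys mimic the vendor/product CSV columns.
SIGNATURES = {
    "apache/httpd": re.compile(r"Apache/?([\d.]+)?", re.I),
    "nginx/nginx": re.compile(r"nginx/?([\d.]+)?", re.I),
    "php/php": re.compile(r"PHP/?([\d.]+)?", re.I),
}

def fingerprint(headers):
    """Return (vendor/product, version) pairs detected in response headers."""
    hits = []
    for value in (headers.get("Server", ""), headers.get("X-Powered-By", "")):
        for product, pattern in SIGNATURES.items():
            m = pattern.search(value)
            if m:
                hits.append((product, m.group(1) or "unknown"))
    return hits

print(fingerprint({"Server": "Apache/2.4.49", "X-Powered-By": "PHP/8.1.2"}))
# → [('apache/httpd', '2.4.49'), ('php/php', '8.1.2')]
```

The detected vendor/product/version triples are what get matched against CPE data in the CVE database.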
git clone https://github.com/theghostshinobi/CVE-Matcher.git
cd CVE-Matcher
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Full CVE database download (first time - ~1.1GB)
python3 update_db.py --full
# Incremental update (daily - ~15MB)
python3 update_db.py
# Create targets file
cat > targets.txt << EOF
https://example.com
https://wordpress.org
https://target.com/admin
EOF
# Run scan
python3 main.py -t targets.txt
# Verbose scan with cache rebuild
python3 main.py -t targets.txt -v --force-rebuild
url,cve_id,cvss_score,technology,vendor,product,version
https://example.com,CVE-2023-12345,7.5,Apache,apache,httpd,2.4.49
https://wordpress.org,CVE-2024-56789,9.8,WordPress,wordpress,wordpress,5.8.0
Reports are saved in output/cve_report_*.csv and output/summary_*.txt
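The report columns above can be consumed directly with Python's csv module. A small sketch that filters rows by CVSS score (column names taken from the sample header; the threshold is just an example):

```python
import csv
import io

def high_risk(report_file, threshold=7.0):
    """Yield rows from a cve_report CSV with cvss_score >= threshold."""
    for row in csv.DictReader(report_file):
        if float(row["cvss_score"]) >= threshold:
            yield row

# Example using the sample rows shown above:
sample = io.StringIO(
    "url,cve_id,cvss_score,technology,vendor,product,version\n"
    "https://example.com,CVE-2023-12345,7.5,Apache,apache,httpd,2.4.49\n"
    "https://wordpress.org,CVE-2024-56789,9.8,WordPress,wordpress,wordpress,5.8.0\n"
)
for row in high_risk(sample):
    print(row["cve_id"], row["cvss_score"])
```

To read a real report, pass an open file handle from output/ instead of the io.StringIO sample.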
# Step 1: Subdomain Enumeration
subfinder -d target.com -all -recursive -silent | \
anew subdomains.txt
# Step 2: Live Host Detection
cat subdomains.txt | httpx -silent -threads 200 -timeout 10 \
-status-code -tech-detect -title -follow-redirects \
-o httpx_results.txt
# Step 3: Extract URLs
cat httpx_results.txt | awk '{print $1}' | anew live_targets.txt
# Step 4: Web Crawling (expand attack surface)
cat live_targets.txt | katana -silent -jc -kf all -d 3 \
-timeout 20 -c 50 -o crawled_urls.txt
# Step 5: Wayback Machine URLs (historical endpoints)
cat subdomains.txt | waybackurls | \
grep -E "\.(js|php|asp|aspx|jsp|html|htm)" | \
anew wayback_urls.txt
# Step 6: Combine all URLs and deduplicate
cat live_targets.txt crawled_urls.txt wayback_urls.txt | \
sort -u | httpx -silent -mc 200,201,301,302,401,403 \
-o final_targets.txt
# Step 7: CVE Scan
python3 main.py -t final_targets.txt -v
# Step 8: Filter High-Risk CVEs
cat output/cve_report_*.csv | awk -F',' 'NR>1 && $3 >= 7.0 {print $1","$2","$3}' \
| sort -t',' -k3 -nr > high_risk_cves.csv
# Step 9: Nuclei Validation
cat high_risk_cves.csv | cut -d',' -f2 | sort -u | while read cve; do
nuclei -t "cves/$(echo $cve | tr '[:upper:]' '[:lower:]').yaml" \
-l final_targets.txt -silent
done
subfinder -d target.com -silent | httpx -silent | tee targets.txt | \
python3 main.py -t /dev/stdin && \
cat output/cve_report_*.csv | awk -F',' 'NR>1 && $3>=7.0'
# Download recent years only
python3 update_db.py --full --start-year 2020
# Download missing files
python3 update_db.py --missing
# Check database status
python3 update_db.py --check
# Force complete refresh
rm -f data/cve_index.json && python3 update_db.py --full
Edit config.ini:
[scanner]
timeout = 10
concurrency = 50
verify_ssl = false
user_agent = Mozilla/5.0 (compatible; CVEScanner/1.0)
[database]
database_path = database
cache_path = data
[report]
output_dir = output
include_no_cves = false
csv_delimiter = ,
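These values follow standard INI syntax, so Python's stdlib configparser reads them as shown below. How CVE-Matcher itself loads the file is an assumption; this only demonstrates how the string values parse into typed settings:

```python
import configparser

# Parse the [scanner] section shown above with the stdlib configparser;
# getint/getboolean convert the raw string values.
cfg = configparser.ConfigParser()
cfg.read_string("""
[scanner]
timeout = 10
concurrency = 50
verify_ssl = false
""")

print(cfg.getint("scanner", "timeout"))        # → 10
print(cfg.getboolean("scanner", "verify_ssl")) # → False
```

Note that configparser treats `#` as a comment only at the start of a line, which is why the settings above keep values and comments separate.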
# Custom output directory
python3 main.py -t targets.txt -o custom_output/
# Force index rebuild
python3 main.py -t targets.txt --force-rebuild
# Verbose mode with detailed logging
python3 main.py -t targets.txt -v
# Single URL test
python3 nuclear_bypass.py https://target.com/admin
# Batch testing
python3 mass_bypass.py forbidden_urls.txt
# Results saved in mass_bypass_results.txt
crontab -e
# Add this line (update at 3 AM daily)
0 3 * * * cd /path/to/CVE-Matcher && ./venv/bin/python3 update_db.py >> logs/update.log 2>&1
#!/bin/bash
# monitor.sh - Continuous CVE monitoring
DOMAIN="target.com"
OUTPUT_DIR="monitoring/$(date +%Y-%m-%d)"
mkdir -p "$OUTPUT_DIR"
# Recon
subfinder -d "$DOMAIN" -silent | httpx -silent > "$OUTPUT_DIR/targets.txt"
# CVE Scan
python3 main.py -t "$OUTPUT_DIR/targets.txt" -o "$OUTPUT_DIR/"
# Alert on critical CVEs
CRITICAL=$(awk -F',' 'NR>1 && $3 >= 9.0' "$OUTPUT_DIR"/cve_report_*.csv | wc -l)
if [ "$CRITICAL" -gt 0 ]; then
    notify-send "⚠️ $CRITICAL critical CVEs found!"
fi
# Centralized download
python3 update_db.py --full \
--database-dir /opt/cvematcher/database \
--cache-dir /opt/cvematcher/data
Modify config.ini for each user:
[database]
database_path = /opt/cvematcher/database
cache_path = /opt/cvematcher/data/cve_index.json
force_rebuild = false
[report]
# Personal output directory
output_dir = ~/cvematcher/output
# Export Burp sitemap to file, then:
cat burp_sitemap.txt | grep -oP 'https?://[^\s]+' | \
sort -u > burp_targets.txt
python3 main.py -t burp_targets.txt
# First run CVE-Matcher
python3 main.py -t targets.txt
# Extract CVE IDs
cat output/cve_report_*.csv | cut -d',' -f2 | sort -u > cve_list.txt
# Run Nuclei validation
nuclei -l targets.txt -t cves/ -severity critical,high
#!/usr/bin/env python3
import glob
import subprocess

# Your recon tool (user-supplied)
targets = get_targets_from_custom_source()

# Save to file
with open('targets.txt', 'w') as f:
    f.write('\n'.join(targets))

# Run CVE-Matcher
subprocess.run(['python3', 'main.py', '-t', 'targets.txt'])

# Parse the newest report (open() does not expand wildcards)
latest = max(glob.glob('output/cve_report_*.csv'))
with open(latest) as f:
    results = parse_csv(f)
send_to_slack(results)
- Python 3.8+
- requests >= 2.31.0
- aiohttp >= 3.9.0
CVE-Matcher/
├── main.py              # Main controller
├── update_db.py         # Database updater
├── scanner/             # Scanner modules
│   ├── scanner.py       # HTTP scanner & fingerprinting
│   ├── db_manager.py    # CVE database manager
│   ├── matcher.py       # CVE matching engine
│   └── reporter.py      # Report generator
├── database/            # CVE-YYYY.json files (auto-generated)
├── data/                # Cache and metadata (auto-generated)
├── output/              # CSV reports (auto-generated)
├── nuclear_bypass.py    # 403 bypass tool
├── mass_bypass.py       # Batch bypass tool
└── config.ini           # Configuration file
# Enable verbose mode to see fingerprinting details
python3 main.py -t targets.txt -v
# Test manually
curl -I https://target.com
# Check if target exposes technology headers
curl -s https://target.com | grep -i "generator\|powered"
# Rebuild index
rm -f data/cve_index.json
python3 main.py --force-rebuild -t targets.txt
# Re-download database
rm -rf database/*.json
python3 update_db.py --full
# In config.ini: increase concurrency (default: 50)
concurrency = 100
# Increase timeout for slow targets (default: 10)
timeout = 30
MIT License
This tool is intended for authorized security research and bug bounty programs only. The author is not responsible for misuse or illegal activities. Always ensure you have proper authorization before scanning targets.
Pull requests are welcome! For major changes, please open an issue first to discuss proposed changes.
- CVE data from fkie-cad/nvd-json-data-feeds
- Inspired by the security research community