# AI-Powered Clothing Detection

Intelligent Clothing Detection & Color Classification using YOLOv8

Live Demo • Documentation • API Reference
## Features

| Feature | Description |
|---|---|
| 8 Clothing Classes | T-Shirt, Dress, Jacket, Pants, Shirt, Shorts, Skirt, Sweater |
| 8 Color Classes | Beige, Black, Blue, Gray, Green, Pattern, Red, White |
| Image Upload | Drag & drop or click to upload images |
| Live Webcam | Real-time detection with adjustable confidence |
| Fast Detection | YOLOv8n optimized for speed |
| Modern Web UI | Premium dark theme with glass-morphism design |
| Responsive | Works on desktop, tablet, and mobile |
| REST API | Full-featured FastAPI backend |
Modern landing page with feature showcase and quick navigation.
Real-time clothing detection with bounding boxes, labels, and confidence scores.
Comprehensive documentation with API reference and deployment guides.
## Prerequisites

- Python 3.9+
- pip
## Installation

```bash
git clone https://github.com/RedEye1605/ClothRecognition.git
cd ClothRecognition
python -m venv .venv

# Windows
.venv\Scripts\activate
# Linux/Mac
source .venv/bin/activate

cd backend
pip install -r requirements.txt
```

Place the trained models in the `models/` folder:

- `cloth_classifier.pt` - clothing detection model (YOLOv8)
- `color_classifier.pt` - color classification model (YOLOv8-cls)

> **Tip:** Train your own models using the Jupyter notebook in `notebooks/`.
## Running the Backend

```bash
cd backend
uvicorn app.main:app --reload --port 8000
```

You should see:

```
Loading detection model from: .../models/cloth_classifier.pt
Detection model loaded: cloth_classifier.pt
Loading color model from: .../models/color_classifier.pt
Color model loaded: color_classifier.pt
FashionAI API v3.0.0 starting...
```
## Running the Frontend

Open a new terminal:

```bash
cd frontend
python -m http.server 5500
```

Navigate to: http://127.0.0.1:5500
## Project Structure

```
ClothRecognition/
├── backend/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py                # FastAPI entry point
│   │   ├── config.py              # Configuration settings
│   │   ├── schemas.py             # Pydantic models
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   └── detector.py        # YOLO detection service
│   │   └── routers/
│   │       ├── __init__.py
│   │       └── detection.py       # API endpoints
│   ├── run.py                     # Alternative entry point
│   └── requirements.txt
│
├── frontend/
│   ├── index.html                 # Landing page
│   ├── app.html                   # Detection application
│   ├── docs.html                  # Documentation page
│   ├── styles.css                 # Shared styles
│   └── Logo.png                   # Project logo
│
├── models/
│   ├── cloth_classifier.pt        # Clothing detection model
│   └── color_classifier.pt        # Color classification model
│
├── notebooks/
│   └── cloth_detection_training.ipynb   # Training notebook
│
├── deployment/
│   ├── Dockerfile                 # Docker configuration
│   ├── docker-compose.yml         # Docker Compose
│   └── huggingface/               # HuggingFace Spaces deployment
│
├── requirements.txt               # Root dependencies
├── QUICKSTART.md                  # Quick start guide
└── README.md                      # This file
```
## API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check with model status |
| `/detect` | POST | Detect clothing with color classification |
| `/detect/batch` | POST | Batch detection for multiple images |
| `/classes` | GET | List all supported classes |
### Example Request

```bash
curl -X POST "http://127.0.0.1:8000/detect" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@image.jpg" \
  -F "confidence=0.25"
```

Example response:

```json
{
  "success": true,
  "detections": [
    {
      "className": "Tshirt",
      "confidence": 0.95,
      "bbox": [100, 100, 200, 200],
      "color": "blue",
      "colorConfidence": 0.87,
      "colorHex": "#3B82F6",
      "label": "Blue Tshirt"
    }
  ],
  "processingTime": 0.045,
  "imageSize": [640, 480]
}
```
## Training Your Own Models

1. Open `notebooks/cloth_detection_training.ipynb` in Google Colab
2. Upload your dataset or use the default dataset
3. Run all cells (recommended: T4 GPU runtime)
4. Download the trained models: `cloth_classifier.pt` and `color_classifier.pt`
5. Place the models in the `models/` folder
## Deployment

### Docker

```bash
cd deployment
docker-compose up -d
```

Or build and run manually:

```bash
docker build -t fashionai .
docker run -p 8000:8000 fashionai
```

### HuggingFace Spaces

1. Create a new Space on HuggingFace
2. Select the Gradio SDK
3. Upload all files from `deployment/huggingface/`: `app.py`, `requirements.txt`, and the model files (`.pt`)
4. Your app will be live!
## Tech Stack

| Category | Technology |
|---|---|
| Machine Learning | YOLOv8, PyTorch, Ultralytics |
| Backend | FastAPI, Uvicorn, Pydantic |
| Frontend | HTML5, CSS3, JavaScript |
| Design | Glass-morphism, Dark Theme |
| Deployment | Docker, HuggingFace Spaces |
## Documentation

The project includes comprehensive documentation:

- Landing Page (`frontend/index.html`) - overview and features
- Application (`frontend/app.html`) - detection interface
- Documentation (`frontend/docs.html`) - full API docs and guides

Access the documentation by running the frontend server and navigating to the Docs page.
## Configuration

Environment variables can be set in `.env` or passed directly:

| Variable | Default | Description |
|---|---|---|
| `MODEL_PATH` | `../models` | Path to model files |
| `API_PORT` | `8000` | Backend API port |
| `CONFIDENCE_THRESHOLD` | `0.25` | Default detection confidence |
## Troubleshooting

**Models not loading**

- Verify the `models/` folder contains both `.pt` files
- Check that the file names match the expected names
**API requests failing**

- Ensure the backend is running on port 8000
- Check CORS settings if using different ports
**Slow first detection**

- The first request loads the models into memory
- Subsequent requests will be faster
**Webcam not working**

- The camera requires HTTPS or localhost
- Check browser permissions
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Ultralytics for YOLOv8
- FastAPI for the amazing framework
- PyTorch for deep learning capabilities
Made with ❤️ for AI enthusiasts