EmotiTrack is a real-time emotion detection system that uses OpenCV and deep learning to recognize emotions from facial expressions. It interprets human emotions such as happiness, sadness, anger, and surprise, and can be applied in domains such as mental health analysis, customer feedback, and human-computer interaction.
- Real-Time Emotion Detection: Captures and analyzes emotions from live camera feed or pre-recorded videos.
- Emotion Categories: Detects emotions such as Happy, Sad, Angry, Neutral, and Surprise.
- Pretrained Models: Utilizes deep learning models trained on datasets like FER2013.
- User-Friendly Interface: Easy-to-use interface for running the system and viewing results.
Ensure you have the following installed:
- Python (>=3.7)
- pip (Python package manager)
- Clone the repository:

```shell
git clone https://github.com/yourusername/EmotiTrack.git
cd EmotiTrack
```

- Create a virtual environment (optional but recommended):

```shell
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
```
- Install the required dependencies:

```shell
pip install -r requirements.txt
```
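A minimal `requirements.txt` matching the stack this README describes might look roughly like the following; the exact package set and versions are assumptions, so pin them to whatever the project actually uses:

```
opencv-python
tensorflow
numpy
matplotlib
# dlib  # optional alternative face detector
```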
1. Face Detection: Uses OpenCV's Haar cascades or Dlib to detect faces in an image or video feed.
2. Emotion Classification: A pretrained deep learning model (e.g., a CNN) analyzes facial features and classifies the detected emotion.
3. Visualization: Detected emotions are displayed in real time on the video feed with bounding boxes around the faces.
- Open your terminal and navigate to the project directory.
- Run the following command:

```shell
python emotitrack.py
```
- Follow the on-screen instructions to use your webcam or load a video file.
To run the detector on a sample video file:

```shell
python emotitrack.py --video path/to/video.mp4
```

To use the webcam for real-time emotion detection:

```shell
python emotitrack.py --live
```

- Programming Language: Python
- Computer Vision Library: OpenCV
- Deep Learning Framework: TensorFlow/Keras
- Pretrained Models: FER2013-based CNN
- Visualization: Matplotlib for plotting results
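The `--video` and `--live` flags shown in the usage examples could be wired up with `argparse` along these lines (a sketch; the actual script's argument handling may differ):

```python
import argparse

def parse_args(argv=None):
    """Parse the command-line flags shown in the usage examples."""
    parser = argparse.ArgumentParser(description="EmotiTrack emotion detector")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--video", metavar="PATH",
                       help="run detection on a pre-recorded video file")
    group.add_argument("--live", action="store_true",
                       help="run real-time detection on the default webcam")
    return parser.parse_args(argv)
```

The main script would then open `cv2.VideoCapture(args.video)` for a file, or `cv2.VideoCapture(0)` when `--live` is set.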
The model is trained on the FER2013 dataset, which contains 35,887 grayscale images of facial expressions categorized into 7 emotions:
- Happy
- Sad
- Angry
- Neutral
- Fear
- Disgust
- Surprise
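A CNN for 48x48 grayscale FER2013 images might look like the following Keras sketch. This is one plausible architecture for illustration, not necessarily the one the project ships:

```python
from tensorflow.keras import layers, models

def build_fer_cnn(num_classes=7):
    """Small illustrative CNN for 48x48 grayscale FER2013 inputs."""
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),               # regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The softmax output maps to the seven emotion categories listed above; training would use the one-hot labels from FER2013.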
- Accuracy: Achieved 78% accuracy on the validation dataset.
- Performance: Real-time emotion detection at ~30 FPS (frames per second) on a standard webcam.
- Add support for multi-face detection and emotion recognition.
- Expand emotion categories to include more complex emotions.
- Integrate with APIs for sentiment analysis and mental health insights.
- Develop a mobile-friendly version.