EmotionSpoon_AI is an intelligent system utilizing emotion recognition technology in the field of Human-Computer Interaction (HCI). The project aims to optimize interactions by recognizing users’ emotional states through various biosignals and expressions.
- Multimodal Emotion Recognition: Integrates multiple input channels, such as handwritten text, for emotion detection.
- Real-time Emotion Analysis: Detects and analyzes users’ emotional states in real time.
- Emotion-based Interaction: Generates context-appropriate playlists based on the recognized emotions.
- User-independent Models: Provides consistent emotion recognition performance regardless of individual differences.
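The overall flow behind these features can be sketched end to end. The function names and the keyword-based stand-ins below are illustrative only and are not the project's actual API; in the real system, OCR is handled by PaddleOCR and classification by Llama-3.2-1B.

```python
def extract_text(image_path: str) -> str:
    """Stub for the OCR step (PaddleOCR in the real project)."""
    return "I had a wonderful day today"

def classify_emotion(text: str) -> str:
    """Stub for the LLM-based classification step (Llama-3.2-1B in the real project)."""
    positive = {"wonderful", "great", "happy"}
    return "happiness" if positive & set(text.lower().split()) else "sadness"

def recommend_playlist(emotion: str) -> list[str]:
    """Stub mapping a recognized emotion to a playlist."""
    playlists = {
        "happiness": ["Upbeat Pop", "Feel-Good Hits"],
        "sadness": ["Calm Acoustic", "Rainy Day"],
    }
    return playlists.get(emotion, ["Neutral Mix"])

emotion = classify_emotion(extract_text("diary_page.png"))
print(emotion, recommend_playlist(emotion))
```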
- Language: Python 3.11
- Large Language Model: Llama-3.2-1B
- Sentence Embedding Model: all-MiniLM-L6-v2
- Optical Character Recognition: PaddleOCR
- Data Processing: NumPy, Pandas
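As one example of how these pieces fit together, sentence embeddings from all-MiniLM-L6-v2 can be compared by cosine similarity to find the closest emotion anchor. The toy 3-D vectors below are placeholders for illustration; real all-MiniLM-L6-v2 embeddings are 384-dimensional.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for sentence embeddings of a diary entry
# and of reference sentences for each emotion.
entry = np.array([0.9, 0.1, 0.2])
anchors = {
    "happiness": np.array([1.0, 0.0, 0.1]),
    "sadness": np.array([0.0, 1.0, 0.3]),
}

# Pick the anchor emotion whose embedding is most similar to the entry.
best = max(anchors, key=lambda e: cosine_similarity(entry, anchors[e]))
print(best)  # → happiness
```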
Clone the repository:
git clone https://github.com/2025-1-HCI-Project/EmotionSpoon_AI.git
cd EmotionSpoon_AI
Install required packages:
pip install -r requirements.txt
Basic execution:
python main.py
This project uses the following emotion models:
- Persona Prompt: Classifies basic emotion categories such as happiness, sadness, surprise, fear, anger, and disgust, as if the model were a psychotherapist.
- Zero-Shot Prompting: Categorizes emotions with high flexibility and cost efficiency.
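The two techniques combine naturally into a single prompt: a psychotherapist persona plus a zero-shot instruction listing the target categories. The wording below is a sketch of that idea, not the project's actual prompt.

```python
EMOTIONS = ["happiness", "sadness", "surprise", "fear", "anger", "disgust"]

def build_prompt(diary_text: str) -> str:
    """Assemble a persona-style zero-shot classification prompt."""
    return (
        "You are a psychotherapist analyzing a client's diary entry.\n"
        f"Classify the dominant emotion as one of: {', '.join(EMOTIONS)}.\n"
        "Answer with a single word.\n\n"
        f"Diary entry: {diary_text}"
    )

print(build_prompt("I can't believe I won the award!"))
```

Because no labeled examples are embedded in the prompt, new emotion categories can be supported by editing the `EMOTIONS` list alone, which is where the flexibility and cost efficiency come from.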
- Improve emotion recognition models to account for users' diverse backgrounds
- Optimize real-time processing performance
- Implement emotion recognition API endpoints
- Create a new issue or review existing issues
- Fork the repository and create a development branch
- Implement and test your changes
- Submit a Pull Request
For project-related inquiries, please submit an issue via GitHub.
This project was developed as part of the 2025 Human-Computer Interaction (HCI) course. The goal is to realize more empathetic and intelligent interfaces through emotion recognition technology.