44 changes: 44 additions & 0 deletions README.md
@@ -0,0 +1,44 @@
# Sentiment Analysis Tool

Small web app that predicts facial emotions from images using an ONNX model and a Next.js frontend.

Key pieces
- Frontend: `frontend/` — Next.js app (TypeScript) serving the UI and demo at `/demo`.
- Model API: `model-lab/` — FastAPI app that loads `models/onxx_models/emotion-ferplus-8.onnx` and exposes a `/predict` endpoint.

⚠️ Potential issue | 🟡 Minor

Fix typo in ONNX model path.

Line 7 contains "onxx_models" which should be "onnx_models" (ONNX format).

🔎 Proposed fix
-- Model API: `model-lab/` — FastAPI app that loads `models/onxx_models/emotion-ferplus-8.onnx` and exposes a `/predict` endpoint.
+- Model API: `model-lab/` — FastAPI app that loads `models/onnx_models/emotion-ferplus-8.onnx` and exposes a `/predict` endpoint.


Quick start (frontend)

```bash
cd frontend
npm install
npm run dev
# open http://localhost:3000
```

Quick start (model API, local Python)

```bash
cd model-lab
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# from project root (recommended):
uvicorn app.main:app --reload --host 0.0.0.0 --port 8080
# or if you're inside model-lab/app:
uvicorn main:app --reload --host 0.0.0.0 --port 8080
# API will be available at http://localhost:8080
```

Quick start (model API, with Docker)

```bash
docker build -t sentiment-model -f model-lab/Dockerfile model-lab
docker run -p 8080:8080 sentiment-model
# API will be available at http://localhost:8080
```

Usage
- Demo UI: http://localhost:3000/demo

⚠️ Potential issue | 🟡 Minor

Wrap bare URL in markdown link.

Line 41 uses a bare URL which violates markdown linting standards (MD034). Wrap it in a proper markdown link format.

🔎 Proposed fix
-- Demo UI: http://localhost:3000/demo
+- Demo UI: [http://localhost:3000/demo](http://localhost:3000/demo)
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

41-41: Bare URL used

(MD034, no-bare-urls)


- API: `POST /predict` with multipart `file` (image) and optional `model_option` form field.
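
The `POST /predict` call described above can be driven from Python with only the standard library. This is a sketch, not part of the PR: the field names (`file`, `model_option`) and the base URL follow the README, and the multipart body is assembled by hand since `urllib` has no form-data helper.

```python
import mimetypes
import os
import urllib.request
import uuid


def build_predict_request(image_path, model_option=None,
                          base_url="http://localhost:8080"):
    """Build a multipart/form-data POST for the /predict endpoint.

    Field names follow the README; send the returned Request with
    urllib.request.urlopen(...) once the model service is running.
    """
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(image_path)[0] or "application/octet-stream"
    with open(image_path, "rb") as fh:
        image_bytes = fh.read()

    # Required image part.
    parts = [
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="file"; '
         f'filename="{os.path.basename(image_path)}"\r\n'
         f'Content-Type: {ctype}\r\n\r\n').encode() + image_bytes + b"\r\n"
    ]
    # Optional model_option form field.
    if model_option is not None:
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="model_option"\r\n\r\n'
             f'{model_option}\r\n').encode()
        )
    parts.append(f"--{boundary}--\r\n".encode())

    return urllib.request.Request(
        f"{base_url}/predict",
        data=b"".join(parts),
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
```
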

More detailed developer instructions are in the `docs/` folder.
23 changes: 23 additions & 0 deletions docs/architecture.md
@@ -0,0 +1,23 @@
# Project Architecture

Overview

This repository contains two primary parts:

- Frontend (`frontend/`): Next.js application that provides the UI and a small demo at `/demo`.
- Model service (`model-lab/`): FastAPI-based Python service that loads an ONNX emotion detection model and exposes a `POST /predict` endpoint.

Key files and responsibilities

- `frontend/` — Next.js app and UI components in `frontend/components` and `frontend/app`.
- `model-lab/app/main.py` — FastAPI app; accepts multipart image uploads and returns predicted emotion and probabilities.
- `model-lab/app/model.py` — Model wrapper that loads the ONNX model and exposes a `predict()` method.
- `model-lab/models/onxx_models/emotion-ferplus-8.onnx` — The ONNX emotion recognition model file.

⚠️ Potential issue | 🟡 Minor

Fix typo in ONNX model path.

Line 15 contains "onxx_models" which should be "onnx_models" (ONNX format).

🔎 Proposed fix
-- `model-lab/models/onxx_models/emotion-ferplus-8.onnx` — The ONNX emotion recognition model file.
+- `model-lab/models/onnx_models/emotion-ferplus-8.onnx` — The ONNX emotion recognition model file.

- `model-lab/Dockerfile` — Container image for the model service (runs Uvicorn on port 8080).

Runtime flow

1. User uploads an image via the frontend (or `curl`).
2. Frontend sends a multipart `POST` to `http://<model-host>:8080/predict`.
3. FastAPI endpoint validates the image, instantiates `Model(model_path, model_option)` and calls `predict()`.
4. The API returns JSON `{ "emotion": <label>, "probabilities": <map> }`.
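
Step 4 of the flow above can be sketched as a small post-processing helper. This is illustrative only: it assumes the raw logits come back from an onnxruntime session (elided here) and that the model uses the FER+ 8-class label order from the ONNX model zoo; the actual mapping lives in `model-lab/app/model.py`.

```python
import numpy as np

# FER+ emotion classes, in the output order documented for
# emotion-ferplus-8 in the ONNX model zoo (assumed here).
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]


def softmax(logits):
    """Convert raw logits to probabilities."""
    z = np.asarray(logits, dtype=np.float64)
    z = z - z.max()  # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()


def to_response(logits):
    """Shape model output into the {"emotion", "probabilities"} JSON
    returned by POST /predict (step 4 above)."""
    probs = softmax(logits)
    probabilities = {label: float(p) for label, p in zip(EMOTIONS, probs)}
    emotion = max(probabilities, key=probabilities.get)
    return {"emotion": emotion, "probabilities": probabilities}
```
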
48 changes: 48 additions & 0 deletions docs/setup.md
@@ -0,0 +1,48 @@
# Setup and Development

This document lists commands to run the frontend and model API for development.

Prerequisites
- Node.js (18+ recommended) and `npm` (or `pnpm`/`yarn`) for the frontend.
- Python 3.10 for the model API (the Docker image uses 3.10-slim).
- Docker (optional) if you prefer a containerized model service.

Frontend (development)

```bash
cd frontend
npm install
npm run dev
# opens on http://localhost:3000
```

Frontend (build)

```bash
cd frontend
npm install
npm run build
npm start
```

Model API (local Python)

```bash
cd model-lab
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# run from repo root or inside model-lab as described in README
uvicorn app.main:app --reload --host 0.0.0.0 --port 8080
```

Model API (Docker)

```bash
docker build -t sentiment-model -f model-lab/Dockerfile model-lab
docker run -p 8080:8080 sentiment-model
```

Notes
- The FastAPI app configures CORS; by default the frontend origin is `http://localhost:3000`.
- The ONNX model used is at `model-lab/models/onxx_models/emotion-ferplus-8.onnx`.

⚠️ Potential issue | 🟡 Minor

Fix typo in ONNX model path.

Line 48 contains "onxx_models" which should be "onnx_models" (ONNX format).

🔎 Proposed fix
-- The ONNX model used is at `model-lab/models/onxx_models/emotion-ferplus-8.onnx`.
+- The ONNX model used is at `model-lab/models/onnx_models/emotion-ferplus-8.onnx`.
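
The CORS note in the setup doc's Notes list can be sketched as configuration. A hypothetical fragment, not the PR's code: the real settings live in `model-lab/app/main.py`, and the allowed methods/headers shown here are assumptions.

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the Next.js dev server origin, per the setup notes.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["POST"],   # /predict is the only mutating route
    allow_headers=["*"],
)
```
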

22 changes: 22 additions & 0 deletions docs/usage.md
@@ -0,0 +1,22 @@
# Usage Examples

Using the demo UI

1. Start the frontend: `cd frontend && npm run dev` (port 3000).
2. Start the model API (see `docs/setup.md`) on port 8080.
3. Open `http://localhost:3000/demo` and upload an image.

Direct API example (curl)

```bash
curl -X POST "http://localhost:8080/predict" \
-F "file=@/path/to/face.jpg" \
-F "model_option=1"

# Example response:
# { "emotion": "happy", "probabilities": {"happy": 0.88, "neutral": 0.07, ...} }
```

Notes
- The API expects an image file (content-type starting with `image/`).
- `model_option` is an optional form field that is forwarded to the model wrapper.
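
The two notes above translate into a small client-side helper. A sketch under assumptions: the `image/` content-type check mirrors the API's documented acceptance rule, and the field names match the curl example; `predict_form_fields` is a hypothetical name, not a function in this repo.

```python
import mimetypes


def predict_form_fields(image_path, model_option=None):
    """Form fields for POST /predict, matching the curl example above.

    Rejects files whose guessed content-type is not image/*, since the
    API only accepts image uploads.
    """
    ctype = mimetypes.guess_type(image_path)[0] or ""
    if not ctype.startswith("image/"):
        raise ValueError(f"not an image: {image_path} ({ctype or 'unknown type'})")
    fields = {"file": (image_path, ctype)}
    if model_option is not None:
        fields["model_option"] = str(model_option)  # optional, forwarded as-is
    return fields
```
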