Added documentation #28
base: main
# Sentiment Analysis Tool

Small web app that predicts facial emotions from images using an ONNX model and a Next.js frontend.

Key pieces

- Frontend: `frontend/` — Next.js app (TypeScript) serving the UI and a demo at `/demo`.
- Model API: `model-lab/` — FastAPI app that loads `models/onxx_models/emotion-ferplus-8.onnx` and exposes a `/predict` endpoint.

Quick start (frontend)

```bash
cd frontend
npm install
npm run dev
# open http://localhost:3000
```

Quick start (model API, local Python)

```bash
cd model-lab
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# from the project root (recommended):
uvicorn app.main:app --reload --host 0.0.0.0 --port 8080
# or, if you're inside model-lab/app:
uvicorn main:app --reload --host 0.0.0.0 --port 8080
# the API will be available at http://localhost:8080
```

Quick start (model API, with Docker)

```bash
docker build -t sentiment-model -f model-lab/Dockerfile model-lab
docker run -p 8080:8080 sentiment-model
# the API will be available at http://localhost:8080
```

Usage

- Demo UI: http://localhost:3000/demo
- API: `POST /predict` with a multipart `file` field (the image) and an optional `model_option` form field.

More detailed developer instructions are in the `docs/` folder.

> Review comment (markdownlint MD034): the "Demo UI" line uses a bare URL; wrap it in a markdown link, e.g. `[http://localhost:3000/demo](http://localhost:3000/demo)`.
# Project Architecture

Overview

This repository contains two primary parts:

- Frontend (`frontend/`): Next.js application that provides the UI and a small demo at `/demo`.
- Model service (`model-lab/`): FastAPI-based Python service that loads an ONNX emotion detection model and exposes a `POST /predict` endpoint.

Key files and responsibilities

- `frontend/` — Next.js app and UI components in `frontend/components` and `frontend/app`.
- `model-lab/app/main.py` — FastAPI app; accepts multipart image uploads and returns the predicted emotion and probabilities.
- `model-lab/app/model.py` — Model wrapper that loads the ONNX model and exposes a `predict()` method.
- `model-lab/models/onxx_models/emotion-ferplus-8.onnx` — the ONNX emotion recognition model file.
- `model-lab/Dockerfile` — container image for the model service (runs Uvicorn on port 8080).

> Review comment: fix the typo in the ONNX model path — `onxx_models` should be `onnx_models` (ONNX format).
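The wrapper's `predict()` has to turn the raw model output into the label/probability map the API returns. The post-processing step can be sketched as follows; this is an illustration, not the repo's actual code, and the FER+ label list and its ordering here are assumptions to verify against `model-lab/app/model.py`:

```python
import math

# Assumed FER+ label order -- verify against the actual model wrapper.
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def postprocess(logits):
    """Convert raw per-class logits into (top_label, {label: probability})."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = {label: e / total for label, e in zip(EMOTIONS, exps)}
    top = max(probs, key=probs.get)
    return top, probs

# Example: a logit vector where the "happiness" class dominates
label, probs = postprocess([0.1, 4.2, 0.3, 0.0, 0.2, -1.0, 0.1, -0.5])
# label == "happiness"; the probabilities sum to 1
```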
Runtime flow

1. User uploads an image via the frontend (or `curl`).
2. The frontend sends a multipart `POST` to `http://<model-host>:8080/predict`.
3. The FastAPI endpoint validates the image, instantiates `Model(model_path, model_option)`, and calls `predict()`.
4. The API returns JSON: `{ "emotion": <label>, "probabilities": <map> }`.
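Step 2 is an ordinary multipart upload, so it can be reproduced without the frontend. A sketch using only the Python standard library; `build_multipart` is a hypothetical helper (the field names `file` and `model_option` come from the flow above, and the host/port are the defaults used throughout these docs):

```python
import uuid

def build_multipart(file_field, filename, file_bytes, content_type, extra_fields=None):
    """Build a multipart/form-data body by hand (stdlib only)."""
    boundary = uuid.uuid4().hex
    lines = []
    # Plain form fields first (e.g. model_option).
    for name, value in (extra_fields or {}).items():
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"',
                  "", str(value)]
    # Then the file part: headers, blank line, raw bytes.
    lines += [f"--{boundary}",
              f'Content-Disposition: form-data; name="{file_field}"; filename="{filename}"',
              f"Content-Type: {content_type}", ""]
    body = "\r\n".join(lines).encode() + b"\r\n" + file_bytes \
        + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

body, ctype = build_multipart("file", "face.jpg", b"<jpeg bytes>", "image/jpeg",
                              {"model_option": 1})
# Send with: urllib.request.Request("http://localhost:8080/predict", data=body,
#            headers={"Content-Type": ctype}, method="POST")
```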
# Setup and Development

This document lists the commands needed to run the frontend and the model API for development.

Prerequisites

- Node.js (18+ recommended) and `npm` (or `pnpm`/`yarn`) for the frontend.
- Python 3.10 for the model API (the Docker image uses `3.10-slim`).
- Docker (optional), if you prefer a containerized model service.

Frontend (development)

```bash
cd frontend
npm install
npm run dev
# opens on http://localhost:3000
```

Frontend (build)

```bash
cd frontend
npm install
npm run build
npm start
```

Model API (local Python)

```bash
cd model-lab
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# run from the repo root or inside model-lab, as described in the README
uvicorn app.main:app --reload --host 0.0.0.0 --port 8080
```

Model API (Docker)

```bash
docker build -t sentiment-model -f model-lab/Dockerfile model-lab
docker run -p 8080:8080 sentiment-model
```

Notes

- The FastAPI app configures CORS; by default the allowed frontend origin is `http://localhost:3000`.
- The ONNX model used is at `model-lab/models/onxx_models/emotion-ferplus-8.onnx`.
> Review comment: fix the typo in the ONNX model path — `onxx_models` should be `onnx_models` (ONNX format).
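The CORS note above usually corresponds to a middleware registration in `main.py` along these lines. This is a sketch of the standard FastAPI pattern, not the repo's actual code; only the origin `http://localhost:3000` is taken from the notes:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the Next.js dev server origin mentioned in the notes above.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```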
# Usage Examples

Using the demo UI

1. Start the frontend: `cd frontend && npm run dev` (port 3000).
2. Start the model API (see `docs/setup.md`) on port 8080.
3. Open `http://localhost:3000/demo` and upload an image.

Direct API example (curl)

```bash
curl -X POST "http://localhost:8080/predict" \
  -F "file=@/path/to/face.jpg" \
  -F "model_option=1"

# Example response:
# { "emotion": "happy", "probabilities": {"happy": 0.88, "neutral": 0.07, ...} }
```
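A response shaped like the example above can be parsed with the standard library. The field names come from the example response; the `"sad"` entry here is a hypothetical class standing in for the doc's ellipsis:

```python
import json

response_text = '{"emotion": "happy", "probabilities": {"happy": 0.88, "neutral": 0.07, "sad": 0.05}}'
data = json.loads(response_text)

# Sanity check: the top probability should agree with the "emotion" field.
top = max(data["probabilities"], key=data["probabilities"].get)
print(data["emotion"], top)  # happy happy
```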
Notes

- The API expects an image file (content type starting with `image/`).
- `model_option` is an optional form field that is forwarded to the model wrapper.
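The first note (the API only accepts `image/*` content types) can be checked client-side before uploading. A small sketch using the standard library's `mimetypes`, which guesses the type from the filename the way typical clients set a part's content type; `upload_content_type` is a hypothetical helper:

```python
import mimetypes

def upload_content_type(path):
    """Guess the MIME type for a file and verify the API would accept it."""
    ctype, _ = mimetypes.guess_type(path)
    if ctype is None or not ctype.startswith("image/"):
        raise ValueError(f"{path}: not an image (guessed {ctype!r})")
    return ctype

print(upload_content_type("face.jpg"))  # image/jpeg
```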