Offline-first prototype for consent-first accessibility profiling and personalized journey planning.
- Mock (offline): fully local deterministic behavior, no model server, no API key.
- Ollama (local): local text + vision models (`/api/chat`) for richer responses, still no cloud API key.
Notes:
- This repo does not include a cloud LLM provider.
- Ollama can require internet once for `ollama pull`; after that, inference is local.
- Runs a short consent-first dialogue to infer functional needs only.
- Builds a validated profile JSON (`accessibility_profile.v1`) with Pydantic + JSON Schema.
- Personalizes route plans from fixture routes.
- Supports two Streamlit flows:
- Chat-only
- Stepper (`Consent -> Profile -> Trip -> Review/Export`)
- Supports optional consent-gated image hazard analysis (`stairs`, `slope`, `crowd`).
- Vision: stepwise text directions, avoiding map-only phrasing, with landmark-friendly guidance.
- Hearing: avoid audio-only instructions and prefer visible text cues.
- Sign users: supports a `sign_gloss_text` output mode.
- Mobility: step-free preference and strong stair alerts.
- Cognitive or child-focused needs: switches to simple language mode with reminders/checklists.
- UI and plan output support English, 中文, and Deutsch. `Auto` currently defaults to English.
- Short answers supported: `yes/no`, `有/没有`, `是/否`, `ja/nein`, `skip`.
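The validated profile mentioned above could be sketched with Pydantic roughly as follows. This is a hypothetical illustration only: the field names here are invented for the example, and the authoritative shape lives in `backend/app/schemas/accessibility_profile.v1.schema.json` (assumes Pydantic v2).

```python
from pydantic import BaseModel, Field

# Hypothetical sketch; real field names are defined by
# backend/app/schemas/accessibility_profile.v1.schema.json.
class AccessibilityProfileV1(BaseModel):
    schema_version: str = "accessibility_profile.v1"
    functional_needs: list[str] = Field(default_factory=list)  # e.g. ["step_free"]
    language: str = "en"       # "en", "zh", or "de"
    output_mode: str = "text"  # "text" or "sign_gloss_text"

profile = AccessibilityProfileV1(functional_needs=["step_free"], language="de")
print(profile.model_dump_json())
```

`model_dump_json()` produces the JSON payload that a JSON Schema validator would then check.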
- Explicit consent required before any image analysis.
- Source can be:
- Upload (`.png`, `.jpg`, `.jpeg`)
- Built-in sample images
- Analysis is manually triggered via a button (`Analyze image hazards`), not auto-run on upload.
- Sample fixtures use fixed demo mappings:
  - `default_stairs.png` -> stairs high
  - `default_slope.png` -> slope high
  - `default_crowd.png` -> crowd high
  - `default_none.png` -> all none
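In a mock provider, the fixed demo mapping above amounts to a simple lookup table. A minimal sketch (illustrative only; the actual logic lives in `backend/app/providers/image_provider.py`):

```python
# Demo filename -> hazard-level mapping, as described in the README.
DEMO_HAZARDS = {
    "default_stairs.png": {"stairs": "high", "slope": "none", "crowd": "none"},
    "default_slope.png":  {"stairs": "none", "slope": "high", "crowd": "none"},
    "default_crowd.png":  {"stairs": "none", "slope": "none", "crowd": "high"},
    "default_none.png":   {"stairs": "none", "slope": "none", "crowd": "none"},
}

def mock_hazards(filename: str) -> dict:
    # Unknown filenames degrade to an all-none result instead of erroring.
    return DEMO_HAZARDS.get(
        filename, {"stairs": "none", "slope": "none", "crowd": "none"}
    )
```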
- `backend/app/models.py`
- `backend/app/schemas/accessibility_profile.v1.schema.json`
- `backend/app/providers/llm_provider.py`
- `backend/app/providers/route_provider.py`
- `backend/app/providers/image_provider.py`
- `backend/app/services/profiler_agent.py`
- `backend/app/services/planner_agent.py`
- `backend/app/evaluation/harness.py`
- `frontend/app.py`
- `backend/tests/`
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pytest -q
streamlit run frontend/app.py
```
- Start Ollama:
  ```bash
  ollama serve
  ```
- Pull models:
  ```bash
  ollama pull llama3.1:8b
  ollama pull llava:7b
  ```
- In the Streamlit sidebar set:
  - `LLM backend` to `Ollama (local)`
  - `Ollama base URL` to `http://127.0.0.1:11434` or `http://localhost:11434`
  - `Text model` and `Vision model` names exactly as listed by `ollama list`
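For reference, a non-streaming request to Ollama's `/api/chat` endpoint looks roughly like this stdlib-only sketch. The base URL and model name are whatever you configured in the sidebar; the helper names are our own, not part of this repo's provider API.

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON object
    # instead of a stream of JSON lines.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ollama_chat(base_url: str, model: str, prompt: str) -> str:
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(build_chat_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The reply text sits under "message" -> "content".
        return json.loads(resp.read())["message"]["content"]
```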
Fallback behavior:
- If Ollama is unreachable or a request fails, the app falls back to mock providers for that turn/plan.
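The per-turn fallback described above can be sketched as a small wrapper (illustrative only; the actual provider interfaces live in `backend/app/providers/`):

```python
from typing import Callable

def chat_with_fallback(
    ollama_reply: Callable[[str], str],
    mock_reply: Callable[[str], str],
    prompt: str,
) -> str:
    """Try the Ollama-backed provider; on any failure, use the mock."""
    try:
        return ollama_reply(prompt)
    except Exception:
        # Network error, missing model, timeout, etc. --
        # degrade to the deterministic mock for this turn only.
        return mock_reply(prompt)
```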
- Check the server:
  ```bash
  curl http://127.0.0.1:11434/api/tags
  ```
- If `ollama serve` says `address already in use`, Ollama is already running.
- If image analysis is slow, this is often model load latency on the first vision call.
- If image analysis errors, check model name and ensure it supports image input.
```bash
python -m backend.app.evaluation.run_eval
```
- No medical diagnosis inference.
- Functional needs only, with skip allowed.
- Confirm-understanding recap in profiler turns.
- Planner claims only what route fixture metadata supports.