# AccessLens

Real-time AR-powered accessibility tool for visual, hearing, and speech impairments.

AccessLens is a web-based AR application that helps people with disabilities interact with the world in real time through:
- Live Speech-to-Text Captions - Real-time captions for hard-of-hearing users
- Scene & Person Description - Audio narration for visually impaired users
- AR Memory System - Face recognition with personalized notes
- Hand Gesture Controls - Pinch-to-click and hand menu navigation (always enabled)
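The live-caption feature above maps naturally onto the browser's Web Speech API. The sketch below is illustrative, not the project's actual code: `buildCaption` and the simplified result shape are assumptions, and the browser wiring is shown in comments because `SpeechRecognition` only exists in a browser.

```javascript
// Minimal captioning sketch. buildCaption() is a hypothetical helper that
// splits recognition results into finalized text and a still-changing
// interim tail, so captions can update live without flicker.
function buildCaption(results) {
  let finalText = "";
  let interimText = "";
  for (const r of results) {
    if (r.isFinal) finalText += r.transcript;
    else interimText += r.transcript;
  }
  return { finalText, interimText };
}

// Browser wiring (Chrome exposes the webkit-prefixed constructor):
// const SR = window.SpeechRecognition || window.webkitSpeechRecognition;
// const rec = new SR();
// rec.continuous = true;     // keep listening across utterances
// rec.interimResults = true; // stream partial hypotheses for live captions
// rec.onresult = (e) => {
//   const results = [...e.results].map(r => ({ isFinal: r.isFinal, transcript: r[0].transcript }));
//   const { finalText, interimText } = buildCaption(results);
//   captionEl.textContent = finalText + interimText;
// };
// rec.start();
```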
## Prerequisites

- Node.js (v16+)
- Modern web browser with camera access
- Firebase account (for backend features)
## Installation

1. Install dependencies: `npm install`
2. Start the dev server: `npm run dev`
3. Copy `.env.example` to `.env`: `cp .env.example .env`
4. Fill in your Firebase credentials in `.env` (see the Environment Setup Guide).

The app uses environment variables when they are set and otherwise falls back to default values.
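The env-with-fallback behavior can be sketched as below. The variable names (`VITE_FIREBASE_API_KEY`, `VITE_FIREBASE_PROJECT_ID`) and the default values are assumptions for illustration, not the project's actual keys.

```javascript
// Hypothetical env-with-fallback helper. In a Vite app `env` would be
// `import.meta.env`; a plain object stands in here so the sketch is runnable.
function envOr(env, key, fallback) {
  const value = env[key];
  return value !== undefined && value !== "" ? value : fallback;
}

// Pretend only the API key was set in .env.
const env = { VITE_FIREBASE_API_KEY: "abc123" };

const firebaseConfig = {
  apiKey: envOr(env, "VITE_FIREBASE_API_KEY", "demo-key"),           // from env
  projectId: envOr(env, "VITE_FIREBASE_PROJECT_ID", "demo-project"), // fallback
};

console.log(firebaseConfig.apiKey);    // "abc123"
console.log(firebaseConfig.projectId); // "demo-project"
```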
## Project Structure

AccessLens/
├── frontend/ # Frontend AR interface
├── ml/ # ML models and recognition logic
├── backend/ # Firebase/backend services
├── assets/ # Static assets (images, models)
├── docs/ # Documentation
└── config/ # Configuration files
## Tech Stack

- Build Tool: Vite (fast HMR, ES modules)
- AR Framework: A-Frame / AR.js
- ML: MediaPipe, TensorFlow.js, face-api.js
- Speech: Web Speech API
- Backend: Firebase Firestore
- Hosting: Vercel / GitHub Pages
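On top of MediaPipe Hands, pinch-to-click typically reduces to a distance check between the thumb tip (landmark 4) and the index fingertip (landmark 8) in normalized coordinates. The threshold below is an illustrative assumption, not a value from this project.

```javascript
// Sketch of pinch detection over MediaPipe's 21 normalized hand landmarks
// (x/y in [0..1]). Landmark 4 is the thumb tip, landmark 8 the index fingertip.
function isPinching(landmarks, threshold = 0.05) {
  const thumb = landmarks[4];
  const index = landmarks[8];
  return Math.hypot(thumb.x - index.x, thumb.y - index.y) < threshold;
}

// Toy example: an open hand vs. one with the fingertips nearly touching.
const open = Array.from({ length: 21 }, (_, i) => ({ x: i * 0.04, y: 0.5 }));
const pinched = open.map((p, i) => (i === 8 ? { x: open[4].x + 0.01, y: 0.5 } : p));

console.log(isPinching(open));    // false
console.log(isPinching(pinched)); // true -> treat as a "click"
```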
## Team Roles

- Frontend Developer - A-Frame interface, camera, overlays
- ML Developer - MediaPipe, TensorFlow.js, face-api.js integration
- Backend Developer - Firebase setup and data management
- UX/Accessibility Lead - UI/UX design and accessibility features
## Timeline

See `docs/TIMELINE.md` for detailed milestones and deadlines.
## Features

- Conversation Captioning - Live speech-to-text
- Scene Description - Audio narration of surroundings
- Face Memory - Recognizes individuals and shows personalized notes
- Hand Gesture Controls - Pinch-to-click and hand menu navigation
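Face Memory with face-api.js usually boils down to comparing face descriptors by Euclidean distance. The 0.6 threshold mirrors face-api.js's default matcher distance, but `recallFace`, the stored-note shape, and the toy 2-D descriptors (real ones are 128-dimensional) are illustrative assumptions.

```javascript
// Hypothetical face-memory lookup: match a live descriptor against stored
// ones by Euclidean distance and return the closest match under a threshold.
function euclidean(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}

function recallFace(descriptor, knownFaces, threshold = 0.6) {
  let best = null;
  for (const face of knownFaces) {
    const d = euclidean(descriptor, face.descriptor);
    if (d < threshold && (best === null || d < best.distance)) {
      best = { name: face.name, note: face.note, distance: d };
    }
  }
  return best; // null when nobody is close enough
}

// Toy 2-D descriptors stand in for face-api.js's 128-D vectors.
const knownFaces = [{ name: "Sam", note: "Met at hackathon", descriptor: [0.1, 0.2] }];

console.log(recallFace([0.12, 0.21], knownFaces).name); // "Sam"
console.log(recallFace([0.9, 0.9], knownFaces));        // null
```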
## License

MIT License. Built as a hackathon project.