Btmoy1122/AccessLens

Repository files navigation

AccessLens - AR Accessibility Assistant

Real-time AR-powered accessibility tool for visual, hearing, and speech impairments

🎯 Project Overview

AccessLens is a web-based AR application that helps people with disabilities interact with the world in real time through:

  • Live Speech-to-Text Captions - Real-time captions for hard-of-hearing users
  • Scene & Person Description - Audio narration for visually impaired users
  • AR Memory System - Face recognition with personalized notes
  • Hand Gesture Controls - Pinch-to-click and hand menu navigation (always enabled)
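The live captioning feature above maps naturally onto the browser's Web Speech API (listed in the tech stack below). A minimal sketch, assuming a Chromium-based browser where `SpeechRecognition` is available under the `webkit` prefix; the function names here are illustrative, not the project's actual API:

```javascript
// Joins SpeechRecognition result alternatives into one caption string.
// Kept as a pure helper so the formatting logic is easy to test.
function formatCaption(results) {
  return results
    .map((r) => r.transcript.trim())
    .filter(Boolean)
    .join(' ');
}

// Browser wiring: streams interim captions to a callback.
function startCaptions(onCaption) {
  const SR = window.SpeechRecognition || window.webkitSpeechRecognition;
  if (!SR) throw new Error('Web Speech API not supported in this browser');
  const recognition = new SR();
  recognition.continuous = true;     // keep listening across utterances
  recognition.interimResults = true; // show partial captions while speaking
  recognition.onresult = (event) => {
    // Take the top alternative for each result segment.
    const results = [...event.results].map((res) => res[0]);
    onCaption(formatCaption(results));
  };
  recognition.start();
  return recognition; // caller can invoke .stop() to end captioning
}
```

The caption text returned by `formatCaption` can then be rendered as an overlay in the A-Frame scene.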

🚀 Quick Start

Prerequisites

  • Node.js (v16+)
  • Modern web browser with camera access
  • Firebase account (for backend features)

Installation

npm install
npm run dev

Environment Setup

  1. Copy .env.example to .env:
    cp .env.example .env
  2. Fill in your Firebase credentials in .env (see Environment Setup Guide)
  3. The app uses environment variables when available and otherwise falls back to default values
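The fallback behavior in step 3 can be sketched as a small helper. In a Vite app, `VITE_`-prefixed variables from `.env` are exposed on `import.meta.env`; the variable and default names below are illustrative assumptions, not the project's real config:

```javascript
// Returns the env value when set and non-empty, otherwise the default
// (the fallback behavior described above).
function envOrDefault(value, fallback) {
  return value !== undefined && value !== '' ? value : fallback;
}

// Usage inside the Vite app (names are hypothetical):
// const firebaseConfig = {
//   apiKey: envOrDefault(import.meta.env.VITE_FIREBASE_API_KEY, 'demo-key'),
// };
```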

📁 Project Structure

AccessLens/
├── frontend/           # Frontend AR interface
├── ml/                 # ML models and recognition logic
├── backend/            # Firebase/backend services
├── assets/             # Static assets (images, models)
├── docs/               # Documentation
└── config/             # Configuration files

🧱 Tech Stack

  • Build Tool: Vite (Fast HMR, ES Modules)
  • AR Framework: A-Frame / AR.js
  • ML: MediaPipe, TensorFlow.js, face-api.js
  • Speech: Web Speech API
  • Backend: Firebase Firestore
  • Hosting: Vercel / GitHub Pages
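The AR Memory System pairs face-api.js descriptors (128-dimensional vectors) with stored notes. Matching an incoming face against known people can be sketched as a nearest-descriptor search; the 0.6 threshold is the distance commonly used with face-api.js descriptors, and the data shape here is an assumption:

```javascript
// Euclidean distance between two face descriptors.
function descriptorDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Finds the closest stored person; returns null when nobody is within
// the threshold, i.e. the face is unrecognized.
function matchFace(descriptor, people, threshold = 0.6) {
  let best = null;
  let bestDist = Infinity;
  for (const person of people) {
    const dist = descriptorDistance(descriptor, person.descriptor);
    if (dist < bestDist) {
      best = person;
      bestDist = dist;
    }
  }
  return bestDist <= threshold ? best : null;
}
```

A match gives back the stored record (name plus personalized notes) for display in the AR overlay.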

👥 Team Roles

  • Frontend Developer - A-Frame interface, camera, overlays
  • ML Developer - MediaPipe, TensorFlow.js, face-api.js integration
  • Backend Developer - Firebase setup and data management
  • UX/Accessibility Lead - UI/UX design and accessibility features

⏱️ Timeline

See docs/TIMELINE.md for detailed milestones and deadlines.

📚 Documentation

Additional guides live in the docs/ directory.

🎥 Demo Scenarios

  1. Conversation Captioning - Live speech to text
  2. Scene Description - Audio narration
  3. Face Memory - Recognized individuals with notes
  4. Hand Gesture Controls - Pinch-to-click and hand menu navigation
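The pinch-to-click gesture in scenario 4 can be detected from MediaPipe Hands output, which returns 21 normalized landmarks per hand (thumb tip is index 4, index-finger tip is index 8). A minimal sketch; the distance threshold is an assumed tuning value, not the project's actual setting:

```javascript
// MediaPipe Hands landmark indices for the two fingertips of a pinch.
const THUMB_TIP = 4;
const INDEX_TIP = 8;

// A pinch is registered when the thumb and index fingertips nearly touch.
// Landmark coordinates are normalized to 0..1, so the threshold is too.
function isPinching(landmarks, threshold = 0.05) {
  const thumb = landmarks[THUMB_TIP];
  const index = landmarks[INDEX_TIP];
  return Math.hypot(thumb.x - index.x, thumb.y - index.y) < threshold;
}
```

In the app, `isPinching` would run on each frame's hand-tracking results, with a pinch transition (false → true) treated as a click on the hand menu.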

📝 License

MIT License - Hackathon Project
