Advanced ASL translation system with real-time sign detection and 3D avatar animation.
- Audio → ASL: Speak naturally and see sign translations in real-time
- ASL → Audio: Show signs and hear voice output with text-to-speech
- 3D Avatar System: Interactive 3D characters performing ASL signs
- Multiple ML Models: Support for MobileNetV2, ResNet50, EfficientNetB0, InceptionV3, VGG16, DenseNet121
- Real-time Processing: Live transcription and sign detection
- Training Pipeline: Train custom models with your data
Install dependencies:

```sh
npm install
```

Start the development server:

```sh
npm run dev
```

Start the backend server:

```sh
cd backend
python main.py
```

Access the application: open http://localhost:8081 in your browser.
- `src/components/astraign/` - React components for ASL features
- `src/services/` - ML model services and API integrations
- `src/data/` - ASL signs database and training data
- `backend/` - FastAPI backend for transcription and TTS
- `public/models/` - 3D model files for avatars
- Frontend: React + TypeScript + Three.js + TensorFlow.js
- Backend: FastAPI with Python
- ML Models: TensorFlow.js implementations
- 3D Rendering: Three.js with React Three Fiber
- UI: Shadcn/ui with Tailwind CSS
Create a `.env` file in the backend directory:

```sh
OPENAI_API_KEY=your_openai_key
GEMINI_API_KEY=your_gemini_key
ELEVENLABS_API_KEY=your_elevenlabs_key
API_SECRET=your_secret_key
```

The system supports multiple computer vision models:
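One way to fail fast when a key is missing is a startup check like the sketch below. The helper names (`missingKeys`, `assertConfigured`) are illustrative, not part of the repo:

```typescript
// Hypothetical startup check: verify every required key is present
// and non-empty in a configuration record (e.g. parsed from .env).
function missingKeys(
  env: Record<string, string | undefined>,
  required: string[],
): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === "");
}

const REQUIRED = [
  "OPENAI_API_KEY",
  "GEMINI_API_KEY",
  "ELEVENLABS_API_KEY",
  "API_SECRET",
];

// Throw with a clear message listing every missing key at once.
function assertConfigured(env: Record<string, string | undefined>): void {
  const missing = missingKeys(env, REQUIRED);
  if (missing.length > 0) {
    throw new Error(`Missing required config keys: ${missing.join(", ")}`);
  }
}
```

Reporting all missing keys together, rather than one per restart, saves a few debugging round-trips.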
- MobileNetV2 (default, fast)
- ResNet50 (balanced accuracy/speed)
- EfficientNetB0 (high accuracy)
- InceptionV3 (very high accuracy)
- VGG16 (legacy support)
- DenseNet121 (alternative architecture)
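A model picker over these backbones could be wired up as a small registry like the sketch below. The metadata values are descriptive labels taken from the list above, not measured benchmarks, and the function names are illustrative:

```typescript
// Illustrative registry of the supported backbones and their trade-offs.
type ModelProfile = { inputSize: number; profile: "fast" | "balanced" | "accurate" };

const MODELS: Record<string, ModelProfile> = {
  MobileNetV2:    { inputSize: 224, profile: "fast" },     // default
  ResNet50:       { inputSize: 224, profile: "balanced" },
  EfficientNetB0: { inputSize: 224, profile: "accurate" },
  InceptionV3:    { inputSize: 299, profile: "accurate" }, // larger input
  VGG16:          { inputSize: 224, profile: "balanced" },
  DenseNet121:    { inputSize: 224, profile: "balanced" },
};

// Resolve a user-selected name, falling back to the default when unknown.
function resolveModel(name: string): string {
  return name in MODELS ? name : "MobileNetV2";
}
```

Keeping the fallback in one place means an unrecognized selection degrades to the fast default instead of crashing the detection loop.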
To translate speech into ASL:
- Click the "Audio → ASL" tab
- Select your preferred detection model
- Click the microphone button to start recording
- Speak naturally and watch the ASL translation appear
- Use the play button to see avatar animations
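The transcript-to-sign step in this workflow can be sketched as a lookup with a fingerspelling fallback for words the sign database doesn't cover. The database contents and function name below are illustrative, not the repo's actual data:

```typescript
// Hypothetical subset of the signs database; the real one lives in src/data/.
const SIGN_DATABASE = new Set(["hello", "thank", "you", "please"]);

// Map each transcribed word to a known whole-word sign, or fall back to
// letter-by-letter fingerspelling (prefixed "fs:") for unknown words.
function toSignSequence(transcript: string): string[] {
  return transcript
    .toLowerCase()
    .split(/\s+/)
    .filter(Boolean)
    .flatMap((word) =>
      SIGN_DATABASE.has(word)
        ? [word]
        : word.split("").map((c) => `fs:${c}`),
    );
}
```

The resulting sequence is what an avatar player would then animate clip by clip.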
To translate signs into speech:
- Click the "ASL → Audio" tab
- Enable camera access
- Start sign detection
- Show ASL signs to the camera
- Listen to the voice output
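Per-frame classifier output is noisy, so a pipeline like this typically debounces detections before speaking them. A minimal sketch of that smoothing step, assuming a configurable streak length (the helper name is hypothetical):

```typescript
// Accept a sign only after it has been the top prediction for
// `minFrames` consecutive camera frames; fire exactly once per streak.
function makeStabilizer(minFrames: number) {
  let current: string | null = null;
  let streak = 0;
  return (frameLabel: string): string | null => {
    streak = frameLabel === current ? streak + 1 : 1;
    current = frameLabel;
    return streak === minFrames ? frameLabel : null;
  };
}
```

Each non-null result would then be handed to the text-to-speech step; everything else is treated as flicker and ignored.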
To explore the 3D avatar system:
- Click the "Avatar" tab
- Choose your preferred avatar model and environment
- Select signs to animate or use quick actions
- Control playback speed and camera settings
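The playback-speed control can be sketched as a clamp plus a duration scale; the speed bounds below are assumptions for illustration, not the app's actual limits:

```typescript
// Illustrative playback-speed bounds (assumed, not from the repo).
const MIN_SPEED = 0.25;
const MAX_SPEED = 2.0;

// Keep the user's setting within a safe range for the animation mixer.
function clampSpeed(speed: number): number {
  return Math.min(MAX_SPEED, Math.max(MIN_SPEED, speed));
}

// How long a sign clip actually takes to play at the chosen speed.
function effectiveDurationMs(baseDurationMs: number, speed: number): number {
  return baseDurationMs / clampSpeed(speed);
}
```

Clamping before scaling keeps extreme slider values from freezing or skipping the avatar's animation.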
To contribute:
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is licensed under the MIT License.
For support and questions, please contact the AstraSign team.
The only requirement is having Node.js & npm installed (you can install them with nvm).
Follow these steps:
```sh
# Step 1: Clone the repository using the project's Git URL.
git clone <YOUR_GIT_URL>

# Step 2: Navigate to the project directory.
cd <YOUR_PROJECT_NAME>

# Step 3: Install the necessary dependencies.
npm i

# Step 4: Start the development server with auto-reloading and an instant preview.
npm run dev
```

Edit a file directly in GitHub
- Navigate to the desired file(s).
- Click the "Edit" button (pencil icon) at the top right of the file view.
- Make your changes and commit the changes.
Use GitHub Codespaces
- Navigate to the main page of your repository.
- Click on the "Code" button (green button) near the top right.
- Select the "Codespaces" tab.
- Click on "New codespace" to launch a new Codespace environment.
- Edit files directly within the Codespace and commit and push your changes once you're done.
This project is built with:
- Vite
- TypeScript
- React
- shadcn-ui
- Tailwind CSS
To deploy the project, open Lovable and click on Share -> Publish.
Yes, you can connect a custom domain: navigate to Project > Settings > Domains and click Connect Domain.
Read more here: Setting up a custom domain