AVIRI transforms hiring by converting resumes into engaging AI-driven video pitches. Recruiters interact with candidate avatars via real-time chat, making talent discovery fast, visual, and interactive.
Video Demo: Google Drive
Recruiters spend too much time on manual resume review, repetitive calls, and outdated systems. AVIRI streamlines this with a smart, visual, and interactive hiring experience that feels as easy as scrolling reels.
- Generates video avatars from resumes and photos
- Provides a chat interface powered by Hugging Face for recruiter-agent interaction
- Displays candidates in a swipe-style carousel UI
- Supports bookmarking, liking, messaging, and dark mode
- Enables inclusive hiring with accessibility and multilingual pitch support
- Resume Parsing: Converts resume into structured profile info
- Pitch Video Generation: Uses `SadTalker`, `EdgeTTS`, and `FFmpeg` for video synthesis
- Avatar Chat: Hugging Face models power real-time AI conversations
- Carousel UI: Swipe left/right to shortlist or reject candidates
- Background Removal: `rembg` and `face_recognition` for clean avatar videos
- Inclusive UI: Accessibility mode, dark mode, multilingual agents
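The resume-parsing step above is described only at a high level; as a rough stdlib-only illustration of turning raw resume text into a structured profile, something like the following could work (the field names and regexes are assumptions for this sketch, not the project's actual parser):

```python
import re

def parse_resume(text: str) -> dict:
    """Very rough sketch: pull an email, a phone number, and a skills
    line out of plain resume text. A real parser would use NLP."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s().-]{7,}\d", text)
    skills_line = re.search(r"(?im)^skills?:\s*(.+)$", text)
    skills = [s.strip() for s in skills_line.group(1).split(",")] if skills_line else []
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        "skills": skills,
    }

profile = parse_resume("Jane Doe\njane@example.com\nSkills: Python, React, FFmpeg")
```

The structured dict is what the pitch-script and avatar-generation steps would consume downstream.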
| Component | Tech Used |
|---|---|
| Frontend | React.js |
| Backend | Node.js, Express.js (Nodemon) |
| Database | MongoDB |
| Resume Parsing | Python |
| Chatbot Integration | Hugging Face Transformers (LLM models) |
| Video Generation | SadTalker, EdgeTTS, FFmpeg |
| Image Processing | rembg, face_recognition |
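The table names the video pipeline (SadTalker for the talking head, EdgeTTS for speech, FFmpeg for assembly). As a hedged sketch of the final muxing step, a helper might assemble the FFmpeg argv like this; the file paths and exact flags are illustrative assumptions, not the project's actual invocation:

```python
def build_mux_command(video_in: str, audio_in: str, out_path: str) -> list:
    """Assemble (but do not run) an ffmpeg command that copies the
    SadTalker video stream and encodes the EdgeTTS audio as AAC."""
    return [
        "ffmpeg", "-y",
        "-i", video_in,   # silent talking-head video from SadTalker
        "-i", audio_in,   # synthesized speech from EdgeTTS
        "-c:v", "copy",   # keep the video stream as-is
        "-c:a", "aac",    # re-encode audio for MP4 compatibility
        "-shortest",      # stop at the shorter of the two inputs
        out_path,
    ]

cmd = build_mux_command("avatar.mp4", "pitch.mp3", "pitch_video.mp4")
```

Passing the list to `subprocess.run` (rather than a shell string) avoids quoting issues with user-supplied file names.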
```
LinkedIn-Hack/
├── Avatar/            # Talking avatar generation (SadTalker, etc.)
├── backend/           # Node.js backend
│   ├── app.js
│   ├── routes/
│   └── controllers/
├── frontend/          # React-based frontend
└── models/
    └── Elevator pitch/   # Parsed resume text, pitch scripts
```
```bash
git clone https://github.com/itsvamz/LinkedIn-Hack.git
cd LinkedIn-Hack
```

Backend:

```bash
cd backend
npm install
npx nodemon app.js
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

Key Dependencies
react, axios, tailwindcss
express, mongoose, nodemon
huggingface, transformers, python-shell
formidable, ffmpeg-static, sadtalker
edgetts, rembg, face_recognition
Future Enhancements

- Real-time live agent interviews
- QR code to launch pitch on mobile
- Dynamic pitch updates over time
- Blockchain-based credential verification
- Copy `.env.example` to `.env` in both backend and frontend folders.
- Fill in all required values (DB URIs, API keys, service URLs, etc.).
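It helps to fail fast when a required variable is missing rather than debugging a half-configured deployment later. A minimal sketch of such a check, assuming hypothetical variable names (match them to your actual `.env.example`):

```python
# Hypothetical variable names -- replace with the ones in .env.example.
REQUIRED = ["MONGODB_URI", "HF_API_KEY", "AVATAR_SERVICE_URL"]

def check_env(env: dict) -> list:
    """Return the names of required variables that are missing or blank."""
    return [name for name in REQUIRED if not env.get(name, "").strip()]

# In the app you would pass os.environ; here a partial config for illustration.
missing = check_env({"MONGODB_URI": "mongodb://localhost:27017/aviri"})
```

Running this at startup (against `os.environ`) turns a vague runtime failure into one clear error message.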
- Install gdown (if not already): `pip install gdown`
- Run the model download script from the project root: `bash scripts/download_models.sh`
- This will download all required models to `Avatar/checkpoints/`.
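After the download script finishes, you can sanity-check that the model files actually landed under `Avatar/checkpoints/`. A small stdlib sketch (the demo uses a temp directory and made-up file names; in the repo you would pass the real checkpoint names):

```python
import tempfile
from pathlib import Path

def missing_checkpoints(root: str, expected: list) -> list:
    """List expected model files that are absent under root."""
    base = Path(root)
    return [name for name in expected if not (base / name).is_file()]

# Self-contained demo against a temp dir; in practice call
# missing_checkpoints("Avatar/checkpoints", [...real model file names...]).
demo_dir = tempfile.mkdtemp()
(Path(demo_dir) / "present.pth").touch()
missing = missing_checkpoints(demo_dir, ["present.pth", "absent.pth"])
```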
- Install dependencies: `cd backend && npm install`
- Start the backend: `npm start`
- Install dependencies:

  ```bash
  cd Avatar
  # (Optional) Create and activate a virtual environment:
  # python3 -m venv venv
  # source venv/bin/activate    # Linux/Mac
  # venv\Scripts\activate       # Windows
  pip install -r requirements.txt
  ```

- (If needed) Start the Gradio demo UI: `python app_sadtalker.py`
- For backend integration there is no separate service to run; the backend calls the Python script directly.
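As a hedged sketch of that direct call, the backend essentially shells out to the avatar script with the candidate photo and synthesized audio. The script path and flag names below follow SadTalker's usual inference CLI but are assumptions here, not confirmed from this repo:

```python
import subprocess

def avatar_command(image: str, audio: str, out_dir: str) -> list:
    """Assemble the CLI call handed to the Python avatar side.
    Script path and flag names are assumptions for illustration."""
    return [
        "python", "Avatar/inference.py",
        "--source_image", image,   # candidate photo
        "--driven_audio", audio,   # EdgeTTS pitch audio
        "--result_dir", out_dir,   # where the talking-head video lands
    ]

def run_avatar(image: str, audio: str, out_dir: str) -> int:
    # Blocks until the render finishes; check the return code for failures.
    return subprocess.run(avatar_command(image, audio, out_dir)).returncode

cmd = avatar_command("photo.png", "pitch.wav", "results/")
```

Building the argv as a list (instead of a shell string) is the safe way to pass user-uploaded file names to `subprocess.run`.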
- Install dependencies: `cd frontend && npm install`
- Start the frontend: `npm start`
- Windows: Use `venv\Scripts\activate` to activate Python virtual environments.
- Linux/Mac: Use `source venv/bin/activate`.
- Always run the model download script from the project root to ensure files go to the correct directory.
- Frontend: Vercel, Netlify, or similar.
- Backend: Render.com, Railway, Heroku, or a cloud VM (AWS, Azure, GCP).
- Avatar (Python):
- If backend and avatar are on the same server, no extra step.
- For scaling, deploy avatar as a separate service (Render, Railway, or a VM).
- Frontend:
  - Build with: `npm run build`
  - Deploy the `build/` folder to your chosen platform.
  - Set environment variables (API URLs) in the platform dashboard.
- Backend:
  - Zip and upload your backend folder (excluding `node_modules` and large files).
  - Set all environment variables in the platform dashboard.
  - Run the model download script on the server after deployment.
- Avatar:
- If needed, run the model download script and install dependencies on the server.
- For production, use a cloud storage provider (AWS S3, Cloudinary, ImageKit, etc.) for uploads.
- Update your backend to use cloud storage for avatars, resumes, and videos.
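When moving uploads to cloud storage, the first practical issues are key collisions and unsafe file names. A stdlib sketch of generating object keys (the `uploads/<user>/<uuid>-<name>` layout is an assumption, not this repo's scheme):

```python
import re
import uuid

def object_key(user_id: str, filename: str, prefix: str = "uploads") -> str:
    """Build a collision-safe storage key like
    uploads/<user>/<uuid>-<sanitized-name>."""
    safe = re.sub(r"[^A-Za-z0-9._-]", "_", filename)
    return f"{prefix}/{user_id}/{uuid.uuid4().hex}-{safe}"

key = object_key("user42", "my resume (final).pdf")
```

The same key then works unchanged across S3, Cloudinary, or ImageKit, since all of them accept slash-delimited object names.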
After deployment, test the following:
- Registration and login
- Resume parsing
- Avatar/video/photo upload and generation
- All user flows (end-to-end)
For any issues, check logs on your deployment platform and ensure all environment variables and model files are correctly set up.