🚀 AI-powered video summarization pipeline: from URL to summary, using yt-dlp, Whisper, Gemma embeddings, and OpenAI models.
Video Summarizer is an intelligent system that automatically extracts, transcribes, embeds, and summarizes content from any video URL.
It leverages Dockerized video processing, OpenAI Whisper, and vector embeddings to generate concise, meaningful summaries.
🎥 Video Extraction
- The app takes a video URL (e.g., YouTube link).
- It uses yt-dlp, running inside a Docker container, to download the video and convert it into an MP3 file.
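The extraction step can be sketched as a small command builder. This is a minimal sketch: the helper name, the Docker image name, and the mount paths are assumptions for illustration, not taken from this repo; the flags mirror a typical `yt-dlp -x --audio-format mp3` invocation.

```python
import subprocess

def yt_dlp_mp3_command(url: str, out_dir: str, image: str = "yt-dlp-image") -> list[str]:
    """Build a `docker run` command that downloads a video's audio as MP3.

    `image` is a placeholder name for the yt-dlp container image.
    """
    return [
        "docker", "run", "--rm",
        "-v", f"{out_dir}:/downloads",      # share the output folder with the container
        image,
        "-x", "--audio-format", "mp3",      # extract audio and convert it to MP3
        "-o", "/downloads/%(id)s.%(ext)s",  # name the file after the video id
        url,
    ]

def download_mp3(url: str, out_dir: str) -> None:
    """Run the dockerized yt-dlp download, raising on failure."""
    subprocess.run(yt_dlp_mp3_command(url, out_dir), check=True)
```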
🎧 Audio Transcription
- The MP3 file is sent to OpenAI Whisper, which performs speech-to-text transcription.
- Outputs a clean, time-aligned transcript of the video.
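Whisper returns the transcript as timed segments; a clean, time-aligned transcript can be assembled from them. The segment shape (`start`, `end`, `text` keys) follows openai-whisper's `transcribe()` output, but treat the exact field names as an assumption; the helper names are illustrative.

```python
def format_timestamp(seconds: float) -> str:
    """Render a second offset as HH:MM:SS."""
    s = int(seconds)
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}"

def format_transcript(segments: list[dict]) -> str:
    """Turn Whisper-style segments ({'start', 'end', 'text'}) into aligned lines."""
    lines = []
    for seg in segments:
        stamp = f"[{format_timestamp(seg['start'])} -> {format_timestamp(seg['end'])}]"
        lines.append(f"{stamp} {seg['text'].strip()}")
    return "\n".join(lines)
```

With openai-whisper this would typically be fed from `whisper.load_model("base").transcribe("audio.mp3")["segments"]`.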
🧠 Embedding & Storage
- The transcript is stored in a PostgreSQL database.
- Text is split into meaningful chunks.
- Each chunk is embedded with a Gemma embedding model (768 dimensions) and stored with its vector representation for fast retrieval.
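The chunking step can be sketched as a sliding window over words. The window size and overlap below are illustrative defaults, not the project's actual settings:

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping word-window chunks for embedding.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from both neighbouring chunks.
    """
    words = text.split()
    if not words:
        return []
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```

Each chunk would then be sent to the Gemma embedding model (e.g. via Ollama's embeddings endpoint) and the resulting 768-dimensional vector stored in Postgres, for example in a pgvector `vector(768)` column.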
✨ Summarization
- The transcript is passed to an OpenAI model, which generates a concise summary of the video’s content.
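The summarization call can be sketched as building a chat request from the transcript. The model name, prompt wording, and character cap are all assumptions for illustration:

```python
def build_summary_request(transcript: str, model: str = "gpt-4o-mini",
                          max_chars: int = 12000) -> dict:
    """Build an OpenAI chat-completions payload asking for a concise summary.

    Long transcripts are truncated to stay within the context window;
    a production version might instead summarize chunk-by-chunk and
    then summarize the partial summaries.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the following video transcript concisely."},
            {"role": "user", "content": transcript[:max_chars]},
        ],
    }
```

The payload could then be sent with the official OpenAI Python client, e.g. `client.chat.completions.create(**build_summary_request(text))`.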
🛠️ Roadmap
- 🐳 Separate Docker containers: currently everything runs locally; migrate to a host or orchestrated environment (e.g., Docker Compose or Kubernetes).
- ⚙️ Automate Ollama: it currently must be started manually; integrate it into the main app workflow.
- 🌐 REST API: let external services fetch transcripts and summaries.
- 💻 Frontend interface: let users input URLs and view results directly.
- 💾 Caching mechanism: avoid redundant downloads and transcriptions.
🎬 Watch the demo:
👉 Video Summarizer Demo on Google Drive
🧩 Pipeline Overview
[Video URL]
↓
[yt-dlp in Docker] → MP3
↓
[Whisper Model] → Transcript
↓
[Gemma Embedding] → Vector Store (Postgres)
↓
[OpenAI Model] → Summary
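The diagram above can be wired together as a single function with pluggable stages. The function and parameter names are hypothetical; each callable stands in for the corresponding component (yt-dlp, Whisper, Gemma embeddings, the OpenAI model):

```python
from typing import Callable

def run_pipeline(
    url: str,
    download: Callable[[str], str],          # video URL -> path to MP3
    transcribe: Callable[[str], str],        # MP3 path -> transcript text
    embed_and_store: Callable[[str], None],  # transcript -> vectors in Postgres
    summarize: Callable[[str], str],         # transcript -> summary
) -> str:
    """Run the URL -> MP3 -> transcript -> vectors -> summary pipeline."""
    mp3_path = download(url)
    transcript = transcribe(mp3_path)
    embed_and_store(transcript)              # side effect: chunks + embeddings to the DB
    return summarize(transcript)
```

Keeping the stages injectable makes each step swappable (e.g., a different embedding model) and easy to test with stubs.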