# API2CHAT
API2CHAT is a lightweight (under 9 KB), purely client-side Graphical User Interface designed to interact with any OpenAI-compatible LLM endpoint. It bypasses the need for bloated backends, databases, or subscriptions, allowing you to plug in your own API keys securely.
- 🛡️ 100% Zero-Knowledge Security: No data, API keys, or chat logs are ever transmitted to or stored on a centralized server. The app runs entirely in your browser's volatile memory. Refreshing or flushing the session destroys the keys locally.
- 🔌 Maximum Compatibility: Natively supports OpenAI, Google (Gemini via OpenAI Shim), DeepSeek, and OpenRouter. Features a "Custom" mode to connect to any local (e.g., LM Studio, Ollama) or remote provider using the OpenAI standard.
- 📎 Local File Context: Files are read locally by your browser and injected into the LLM prompt without requiring an upload server. Even attached files (e.g., a PDF) are never stored anywhere.
- 💻 Host Anywhere: Because no PHP, Python, or Node.js is required, you can host API2CHAT on GitHub Pages, S3 buckets, cheap shared hosting (e.g., Namecheap), or simply double-click `index.html` on your device (Windows, Linux, iOS, Android, Raspberry Pi...).
- 🎨 Hacker Aesthetic: A sleek, minimal, dark-mode UI with full Markdown rendering and code syntax highlighting.
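Because every supported provider speaks the OpenAI chat-completions format, switching providers is just a matter of swapping the base URL. A minimal sketch of how such a request can be assembled (the helper name is illustrative, not API2CHAT's actual source; the endpoint path and payload follow the OpenAI API specification):

```javascript
// Build an OpenAI-compatible chat-completions request.
// Illustrative helper, not API2CHAT's actual code.
function buildChatRequest(baseUrl, apiKey, model, messages) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`, // key lives only in memory
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage (browser): const { url, options } = buildChatRequest(...);
// fetch(url, options).then(r => r.json()).then(...)
```

The same function works unchanged for any "Custom" endpoint (e.g., LM Studio or Ollama exposing an OpenAI-compatible server), since only `baseUrl` differs.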
API2CHAT features a sleek, terminal-inspired interface designed for speed and low-friction interactions.
API2CHAT can natively read local files and inject them directly into your LLM prompt. Files are never uploaded to a server. Your browser reads the text locally and sends it straight to the API provider.
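The flow above can be sketched as follows. The prompt template and function names are assumptions for illustration, not API2CHAT's actual source; the key point is that the file is read in the browser and only its text is sent to the API provider:

```javascript
// Wrap locally-read file text into the prompt.
// Illustrative template; the real one API2CHAT uses may differ.
function injectFileContext(fileName, fileText, question) {
  return `[Attached file: ${fileName}]\n${fileText}\n\n${question}`;
}

// In the browser, the file never leaves the machine until the API call:
// document.querySelector("#file").addEventListener("change", async (e) => {
//   const file = e.target.files[0];
//   const text = await file.text();   // read locally, no upload server
//   const prompt = injectFileContext(file.name, text, "Summarize this.");
// });
```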
- Clone or download this repository or its ZIP release (under 9 KB).
- Unzip the contents to your device or any hosting provider (even low-end shared hosting such as Namecheap or HostGator will work!).
- Double-click `index.html` to open it in any browser on any OS (no PHP, Python, or Node.js required — nothing!).
- Select a provider, paste your API key, and start chatting. You can copy/paste text or attach a file to ask the LLM about it.
- (Optional) Click "Flush session" to make sure all data was erased, or whenever you wish to start a new, clean chat.
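Since the app is purely client-side, the API key only ever lives in the page's memory; flushing the session simply clears that memory. A minimal sketch of the idea (illustrative, not API2CHAT's actual source):

```javascript
// Hold the API key in a closure variable (volatile memory only);
// nothing is written to localStorage, cookies, or any server.
// Illustrative sketch, not API2CHAT's actual code.
function createSession() {
  let apiKey = null;
  return {
    setKey(k) { apiKey = k; },
    hasKey() { return apiKey !== null; },
    flush() { apiKey = null; }, // "Flush session" wipes the key locally
  };
}
```

Refreshing the page has the same effect as `flush()`: the closure, and the key inside it, are destroyed.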
Want to host your own secure, live instance for free?
- Fork or upload this repository to GitHub.
- Go to your repository Settings > Pages.
- Under Branch, select `main` and click Save.
- Your live link will be generated in minutes.
| Provider | Base URL | Default Model |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | gpt-4o-mini |
| DeepSeek | https://api.deepseek.com | deepseek-chat |
| OpenRouter | https://openrouter.ai/api/v1 | qwen/qwen3.5-flash-02-23 |
| Custom | User Defined | User Defined |
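The table above maps naturally onto a small preset lookup, with "Custom" falling back to user-supplied values. A sketch of that structure (illustrative; the field and function names are assumptions, not API2CHAT's actual source):

```javascript
// Provider presets mirroring the table above. Illustrative structure,
// not API2CHAT's actual code.
const PROVIDERS = {
  openai:     { baseUrl: "https://api.openai.com/v1",    model: "gpt-4o-mini" },
  deepseek:   { baseUrl: "https://api.deepseek.com",     model: "deepseek-chat" },
  openrouter: { baseUrl: "https://openrouter.ai/api/v1", model: "qwen/qwen3.5-flash-02-23" },
};

// "Custom" mode: both base URL and model are user-defined.
function resolveProvider(name, custom = {}) {
  return name === "custom"
    ? { baseUrl: custom.baseUrl, model: custom.model }
    : PROVIDERS[name];
}
```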
Created by Dr. Manuel Herrador (mherrador@ujaen.es) - University of Jaen (Spain)
Released under the Apache 2.0 License. You are free to modify, distribute, and use this software privately or commercially. The author accepts no liability for any damages or data loss.


