Open-source ChatGPT alternative. Run local LLMs or connect cloud models — with full control and privacy.
Getting Started · Discord · X / Twitter · Bug Reports
| Platform | Download |
|---|---|
| macOS (Universal) | Atomic.Chat_1.1.66_universal.dmg |
| Windows (x64) | Atomic.Chat_1.1.66_x64-setup.exe |
| iOS | App Store |
Download from atomic.chat or GitHub Releases.
- 🧠 Local AI Models — download and run LLMs (Llama, Gemma, Qwen, and more) from HuggingFace
- ⚡ Fast Inference Engines — TurboQuant-optimized llama.cpp on all platforms, MLX for Apple Silicon
- ☁️ Cloud Integration — connect to OpenAI, Anthropic, Mistral, Groq, MiniMax, and others
- 🤖 Custom Assistants — create specialized AI assistants for your tasks
- 🔌 OpenAI-Compatible API — local server at `localhost:1337` for other applications
- 🔗 Model Context Protocol — MCP integration for agentic capabilities
- 🔒 Privacy First — everything runs locally when you want it to
Atomic Chat ships its own optimized inference stack so models run fast on whatever hardware you have:
- atomic-llama-cpp-turboquant — our fork of `llama.cpp` with TurboQuant optimizations for faster quantized inference. Works on macOS, Windows, and Linux across CPU and GPU backends.
- MLX-VLM — Apple Silicon-native engine for vision-language models, running directly on the Neural Engine and unified memory. Faster than llama.cpp on M-series chips for supported models.
The local API server at http://localhost:1337/v1 exposes models from both engines through a single OpenAI-compatible endpoint — tools don't need to know which backend is running underneath.
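To see that unification in practice, here is a minimal sketch in TypeScript (assuming Node 18+ for the global `fetch`) that lists every model the server currently exposes; the IDs it prints depend entirely on what you have installed:

```typescript
// List all models served by Atomic Chat's local endpoint,
// regardless of which engine (llama.cpp or MLX) backs each one.
async function listLocalModels(): Promise<void> {
  const res = await fetch("http://localhost:1337/v1/models");
  if (!res.ok) throw new Error(`Server returned HTTP ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  for (const model of body.data) {
    console.log(model.id);
  }
}

listLocalModels().catch(console.error);
```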
Atomic Chat runs an OpenAI-compatible server at http://localhost:1337/v1, so any agent, CLI, IDE plugin, or app that speaks the OpenAI API can run on top of your local models — no extra glue needed. Just point its base URL at Atomic Chat and you're done.
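For example, with the official `openai` npm package, switching an app over is a single constructor argument. A sketch, where the model ID is a placeholder (substitute one reported by `/v1/models`):

```typescript
import OpenAI from "openai";

// Same client code you'd write against api.openai.com;
// only the base URL changes.
const client = new OpenAI({
  baseURL: "http://localhost:1337/v1",
  apiKey: "unused", // the SDK requires a value; local servers typically ignore it
});

async function main(): Promise<void> {
  const reply = await client.chat.completions.create({
    model: "llama-3.2-3b", // placeholder: use a model ID from /v1/models
    messages: [{ role: "user", content: "Say hello from a local model." }],
  });
  console.log(reply.choices[0]?.message.content);
}

main().catch(console.error);
```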
A few projects already ship first-class support with their own setup docs:
| Tool | What it is | Setup |
|---|---|---|
| OpenCode | Open-source TUI coding agent. Add Atomic Chat as a local provider in opencode.json. | Setup guide → |
| OpenClaude | Open-source coding-agent CLI for cloud and local models. Lists Atomic Chat as a supported provider. | Providers list → |
| Hermes Workspace | Local-first agent workspace built on Nous Research's Hermes. Uses Atomic Chat as its inference backend. | Repo → |
| nanoclaw | Containerized agent runtime that calls Atomic Chat as an MCP tool. | Skill guide → |
Built something that runs on Atomic Chat? Open a PR and we'll add it here.
- Node.js ≥ 20.0.0
- Yarn ≥ 4.5.3
- Make ≥ 3.81
- Rust (for Tauri)
- (Apple Silicon) Metal Toolchain, installable with:

```bash
xcodebuild -downloadComponent MetalToolchain
```
```bash
git clone https://github.com/AtomicBot-ai/Atomic-Chat
cd Atomic-Chat
make dev
```

This handles everything: installs dependencies, builds core components, and launches the app.
Available make targets:
- `make dev` — full development setup and launch
- `make build` — production build
- `make test` — run tests and linting
- `make clean` — delete everything and start fresh
Or run the underlying steps manually:

```bash
yarn install
yarn build:tauri:plugin:api
yarn build:core
yarn build:extensions
yarn dev
```

- macOS: 13.6+ (8GB RAM for 3B models, 16GB for 7B, 32GB for 13B)
- Windows: 10/11 x64 (same RAM recommendations as macOS)
- iOS: 17+ (download from App Store)
If something isn't working:
- Copy your error logs and system specs
- Open an issue on GitHub
- Or ask for help in our `#🆘|atomic-chat-help` channel
Contributions welcome. See CONTRIBUTING.md for details.
- Bugs: GitHub Issues
- General Discussion: Discord
- Updates: X / Twitter
Apache 2.0 — see LICENSE for details.
Built on the shoulders of giants: llama.cpp, MLX, Tauri, and the HuggingFace ecosystem.
© 2026 Atomic Chat · Built with ❤️ · atomic.chat

