This repository hosts runtime configuration consumed by the Atomic Chat desktop client. The client fetches the manifest at startup and refreshes it periodically, so changes here propagate to all installed Atomic Chat clients (macOS, Windows, Linux) without an application release.
TL;DR for non-developers: to add a new model or provider, edit `providers/registry.json` on GitHub, open a Pull Request, get it reviewed, and merge it. All running Atomic Chat clients will pick up the change within an hour.
```
providers/
  registry.json      # Single source of truth for cloud providers
  schema.json        # JSON Schema (Draft-07) used by CI validation
models/
  recommended.json   # Recommended models surfaced in Hub + onboarding
  schema.json        # JSON Schema (Draft-07) for the recommended-models manifest
.github/workflows/
  validate.yml       # Validates every manifest on every PR
README.md
```
Future configuration domains (themes, default prompts, etc.) will live in sibling directories such as `themes/` and `prompts/`. Each domain gets its own subdirectory and its own schema.
The registry only describes cloud providers that need API keys (OpenAI, Anthropic, OpenRouter, Mistral, Groq, xAI, Gemini, MiniMax, Hugging Face, NVIDIA, …).
The following are not in the registry and must not be added:
| Excluded | Why |
|---|---|
| Local providers (llama.cpp, etc.) | Discovered at runtime by the client's engine manager. |
| `azure` | Kept inside the client as a baseline fallback; it requires per-resource configuration. |
| API keys | Must always remain on the user's machine. The `api_key` field MUST be `""` in the registry. |
```
raw.githubusercontent.com/AtomicBot-ai/atomic-chat-conf/main/providers/registry.json
                │
                ▼ fetched once per hour (TTL = 1h)
┌─────────────────────────────────┐
│ Atomic Chat client (web-app)    │
│ - validates schema_version      │
│ - merges with `azure`           │
│ - caches in localStorage        │
└─────────────────────────────────┘
```
If the network is unreachable or the manifest is malformed, the client falls back to its built-in baseline (`azure` only) plus whatever was previously cached. The application never crashes due to a registry issue.
1. Open `providers/registry.json` on GitHub.
2. Click the pencil icon ("Edit").
3. Find the provider block (for example, `"provider": "openrouter"`).
4. Add a new entry to its `models` array, copying the shape of an existing entry. Example:

   ```json
   {
     "id": "openai/gpt-5.5",
     "name": "GPT-5.5",
     "version": "1.0",
     "description": "Short, factual one-liner. 1M ctx, vision + tools.",
     "capabilities": ["completion", "tools", "vision"]
   }
   ```

5. Bump `updated_at` at the top of the file to today (`YYYY-MM-DDTHH:MM:SSZ`).
6. Commit the change to a new branch and open a Pull Request.
7. Wait for CI ("Validate registry") to pass; it checks the file against `schema.json`.
8. Request a review and merge.
The same as above, but you add a whole provider object to the top-level `providers` array. Use any existing provider as a template. Required fields:

- `provider`: short, lowercase, unique id (e.g. `cohere`).
- `base_url`: OpenAI-compatible base URL.
- `settings`: at minimum, an `api-key` and `base-url` controller pair.
- `models`: may be `[]` if you want users to discover models manually.
- `api_key`: always `""`.
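Putting those required fields together, a sketch of a hypothetical `cohere` provider object follows. The base URL and the shape of the `settings` controllers here are assumptions for illustration; copy the real shapes from an existing provider in `registry.json`:

```json
{
  "provider": "cohere",
  "base_url": "https://api.cohere.com/v1",
  "api_key": "",
  "settings": [
    { "key": "api-key", "controller_type": "input" },
    { "key": "base-url", "controller_type": "input" }
  ],
  "models": []
}
```

Leaving `models` empty is valid; users can then add model ids manually in the client.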
If your provider needs extra HTTP headers (Anthropic does, for example), use the optional `custom_header` array.
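As a hedged sketch of what such a header entry could look like (the `name`/`value` key names are assumptions; mirror the existing `anthropic` provider in `registry.json` for the real shape):

```json
"custom_header": [
  { "name": "anthropic-version", "value": "2023-06-01" }
]
```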
The `capabilities` array on each model uses these values:

| Value | Meaning |
|---|---|
| `completion` | Standard chat / completion endpoint. |
| `tools` | Native function-calling / tool-use support. |
| `vision` | Accepts image inputs. |
| `embeddings` | Embedding endpoint available for this model. |
| `reasoning` | Exposes structured `reasoning_details` (e.g. DeepSeek R1). |
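For example, a model that exposes structured reasoning alongside ordinary chat could declare (the model id is illustrative, not a real registry entry):

```json
{
  "id": "deepseek/deepseek-r1",
  "name": "DeepSeek R1",
  "version": "1.0",
  "description": "Reasoning model; emits structured reasoning_details.",
  "capabilities": ["completion", "reasoning"]
}
```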
Every manifest carries a top-level `schema_version`. The Atomic Chat client embeds its highest supported version. If you ship a manifest whose `schema_version` exceeds what older clients understand, they will fall back to their cached or baseline manifest and prompt the user to update Atomic Chat.

Bump `schema_version` only when adding new required fields or changing the shape of existing fields in a backwards-incompatible way. Adding a new provider or new model never requires a bump.
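To make the versioning concrete, the top-level shape implied by these rules looks roughly like this; the `providers` array is elided, and only `schema_version` and `updated_at` are field names confirmed by this document:

```json
{
  "schema_version": 1,
  "updated_at": "2025-01-01T00:00:00Z",
  "providers": []
}
```

Appending a provider object to `providers` leaves `schema_version` untouched; only a breaking shape change bumps it.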
`models/recommended.json` drives the Recommended section in two places in the Atomic Chat client:

- The Hub screen (`/hub`).
- The first-run Setup / onboarding screen.
Each entry is a small object — only the model id and the i18n key for the
chip label live here. The full Hugging Face metadata (quants, mmproj, file
sizes) is fetched at runtime from huggingface.co. A bundled, slim
fallback in the client covers the offline first launch.
Entry shape (full schema in `models/schema.json`):

```json
{
  "model_name": "unsloth/gemma-4-E4B-it-GGUF",
  "description_key": "hub:recEverydayUse",
  "platforms": ["macos", "windows", "linux"],
  "active": true
}
```

| Field | Required | Notes |
|---|---|---|
| `model_name` | yes | Hugging Face repo id (`owner/name`). |
| `description_key` | yes | i18n key for the chip label. Must start with `hub:` and exist in the client's `hub.json`s. |
| `platforms` | no | Subset of `["macos", "windows", "linux"]`. Omit to show on every platform. |
| `active` | no | Defaults to `true`. Set to `false` to hide an entry without deleting it. |
These keys are translated in `web-app/src/locales/*/hub.json` and mapped to chip colors in `web-app/src/constants/recommendedModelChip.ts`:
| Key | Chip color | English label |
|---|---|---|
| `hub:recEverydayUse` | green | Everyday use |
| `hub:recVisionKnowledge` | purple | Vision & knowledge |
| `hub:recFinetuningChat` | blue | Fine-tuning & chat |
| `hub:recMathReasoning` | yellow | Math & reasoning |
| `hub:recForMlx` | orange | For MLX |
Adding a brand-new `description_key` requires both adding the translation in the Atomic-Chat repo and publishing a client release; until then, older clients render the entry with a neutral gray chip.
`platforms` is purely a presentation hint:

- MLX models (anything from the `mlx-community` org or marked as `library_name: "mlx"`) only run on macOS; list them with `"platforms": ["macos"]`.
- GGUF models generally run everywhere; setting `platforms` lets you promote a different default for Windows/Linux users (e.g. recommend Llama 3.1 GGUF on Windows when MLX is not an option).
The Atomic Chat client filters this list locally based on the host OS before rendering.
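For example, a macOS-only MLX recommendation next to an everywhere GGUF one might look like the pair below (the model ids are illustrative, not real entries):

```json
[
  {
    "model_name": "mlx-community/Example-Model-4bit",
    "description_key": "hub:recForMlx",
    "platforms": ["macos"]
  },
  {
    "model_name": "unsloth/Llama-3.1-8B-Instruct-GGUF",
    "description_key": "hub:recEverydayUse"
  }
]
```

The second entry omits `platforms`, so it appears on macOS, Windows, and Linux alike.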
1. Open `models/recommended.json` on GitHub.
2. Click the pencil icon ("Edit").
3. Append (or modify) an entry following the shape above.
4. Bump `updated_at` to today (`YYYY-MM-DDTHH:MM:SSZ`).
5. Commit to a new branch and open a Pull Request.
6. Wait for CI ("Validate registry") to pass; it runs the JSON Schema plus a duplicate-entry check against `models/recommended.json`.
7. Request a review and merge.
All running Atomic Chat clients pick up the change within an hour.
`.github/workflows/validate.yml` runs on every push and pull request. It performs the following checks:

- `ajv` validates `providers/registry.json` against `providers/schema.json`.
- Every `provider` id must be unique.
- The job fails if any `api_key` field is non-empty.
- `ajv` validates `models/recommended.json` against `models/schema.json`.
- Every `(model_name, description_key)` pair in the recommended-models manifest must be unique.

You cannot merge a PR until CI is green.
If you want to validate locally before pushing:

```shell
npx ajv-cli@5 validate -s providers/schema.json -d providers/registry.json --strict=false
npx ajv-cli@5 validate -s models/schema.json -d models/recommended.json --strict=false
```

- API keys must never appear in this repository.
- The registry is served via HTTPS from `raw.githubusercontent.com`.
- Atomic Chat clients ignore the `api_key` field even if a malicious commit slips through; user-supplied keys live only in the local OS keychain.
See the project's primary license in the main Atomic Chat repository.