This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit 2c44b1e

chore: readme
1 parent 0a56d50 commit 2c44b1e


1 file changed, +5 -5 lines changed


README.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -28,7 +28,7 @@ Cortex is a Local AI API Platform that is used to run and customize LLMs.
 Key Features:
 - Straightforward CLI (inspired by Ollama)
 - Full C++ implementation, packageable into Desktop and Mobile apps
-- Pull from Huggingface of Cortex Built-in Model Library
+- Pull from Huggingface, or Cortex Built-in Models
 - Models stored in universal file formats (vs blobs)
 - Swappable Engines (default: [`llamacpp`](https://github.com/janhq/cortex.llamacpp), future: [`ONNXRuntime`](https://github.com/janhq/cortex.onnx), [`TensorRT-LLM`](https://github.com/janhq/cortex.tensorrt-llm))
 - Cortex can be deployed as a standalone API server, or integrated into apps like [Jan.ai](https://jan.ai/)
```
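To ground the "Straightforward CLI" and model-pulling bullets in this hunk, a minimal usage sketch (it assumes the `cortex` binary is installed and on `PATH`; `author/Model-GGUF` is the placeholder naming convention used later in this diff, not a real model):

```bash
# Pull a GGUF model from Hugging Face using the author/Model-GGUF convention
cortex pull author/Model-GGUF

# Start an interactive chat session with the downloaded model
cortex run author/Model-GGUF
```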
```diff
@@ -88,5 +88,5 @@ Refer to our [Quickstart](https://cortex.so/docs/quickstart/) and
 ### API:
 Cortex.cpp includes a REST API accessible at `localhost:39281`.
 
-Refer to our [API documentation](https://cortex.so/api-reference) for more details
+Refer to our [API documentation](https://cortex.so/api-reference) for more details.
 
```
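As an illustration of the REST API mentioned above, a hedged request sketch; it assumes the server is running and exposes an OpenAI-compatible `/v1/chat/completions` route, which this diff does not spell out:

```bash
# Send a chat request to the local Cortex.cpp server
# (the endpoint path and request shape are assumptions, not from this diff)
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "author/Model-GGUF",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```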
```diff
@@ -93,17 +93,17 @@
-## Models & Quantizations
+## Models
 
 Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access.
 
 Currently Cortex supports pulling from:
-- Hugging Face: GGUF models eg `author/Model-GGUF`
+- [Hugging Face](https://huggingface.co): GGUF models eg `author/Model-GGUF`
 - Cortex Built-in Models
 
 Once downloaded, the model `.gguf` and `model.yml` files are stored in `~\cortexcpp\models`.
 
 > **Note**:
 > You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
 
-### Cortex Model Hub & Quantizations
+### Cortex Built-in Models & Quantizations
 
 | Model /Engine | llama.cpp | Command |
 | -------------- | --------------------- | ----------------------------- |
```
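Following the storage note in this hunk, a quick way to verify what a pull produced (a sketch: the README states only that the `.gguf` and `model.yml` files live under `~\cortexcpp\models`; the POSIX-style path below is an assumption for Linux/macOS):

```bash
# Recursively list downloaded model files; the README writes the path
# Windows-style (~\cortexcpp\models), shown here in POSIX form
ls -R ~/cortexcpp/models
```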