This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit 9b7eb44
Remove beta/nightly/onnx/trt from quickstart
1 parent: 280c94b

File tree: 1 file changed (+18 −146 lines)

docs/docs/quickstart.mdx

Lines changed: 18 additions & 146 deletions
@@ -8,76 +8,51 @@ import Tabs from "@theme/Tabs";
 import TabItem from "@theme/TabItem";
 
 
-:::warning
-🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
+:::info
+Cortex.cpp is in active development. If you have any questions, please reach out to us:
+- [GitHub](https://github.com/janhq/cortex.cpp/issues/new/choose)
+- [Discord](https://discord.com/invite/FTk2MvZwJH)
 :::
 
-## Installation
-To install Cortex, download the installer for your operating system from the following options:
-- **Stable Version**
-  - [Windows](https://github.com/janhq/cortex.cpp/releases)
-  - [Mac](https://github.com/janhq/cortex.cpp/releases)
-  - [Linux (Debian)](https://github.com/janhq/cortex.cpp/releases)
-  - [Linux (Fedora)](https://github.com/janhq/cortex.cpp/releases)
+## Local Installation
+Cortex has a Local Installer that packages all required dependencies, so no internet connection is needed during installation.
+- [Windows](https://app.cortexcpp.com/download/latest/windows-amd64-local)
+- [Mac (Universal)](https://app.cortexcpp.com/download/latest/mac-universal-local)
+- [Linux](https://app.cortexcpp.com/download/latest/linux-amd64-local)
+
 ## Start Cortex.cpp Processes and API Server
 This command starts the Cortex.cpp API server at `localhost:39281`.
 <Tabs>
 <TabItem value="MacOs/Linux" label="MacOs/Linux">
 ```sh
-# Stable
 cortex start
-
-# Beta
-cortex-beta start
-
-# Nightly
-cortex-nightly start
 ```
 </TabItem>
 <TabItem value="Windows" label="Windows">
 ```sh
-# Stable
 cortex.exe start
-
-# Beta
-cortex-beta.exe start
-
-# Nightly
-cortex-nightly.exe start
 ```
 </TabItem>
 </Tabs>
+
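Once `cortex start` returns, the API server should be reachable on `localhost:39281`. As a quick sanity check, a readiness probe can be sketched in Python (a hypothetical helper, not part of Cortex; it only verifies that the TCP port accepts connections):

```python
import socket


def api_server_up(host: str = "127.0.0.1", port: int = 39281, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener (e.g. the Cortex API server) accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("Cortex API server up:", api_server_up())
```

If the probe returns `False`, the server is still starting up or failed to launch.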
 ## Run a Model
 This command downloads the default `gguf` model from the [Cortex Hub](https://huggingface.co/cortexso), starts it, and opens a chat session with it.
 <Tabs>
 <TabItem value="MacOs/Linux" label="MacOs/Linux">
 ```sh
-# Stable
 cortex run mistral
-
-# Beta
-cortex-beta run mistral
-
-# Nightly
-cortex-nightly run mistral
 ```
 </TabItem>
 <TabItem value="Windows" label="Windows">
 ```sh
-# Stable
 cortex.exe run mistral
-
-# Beta
-cortex-beta.exe run mistral
-
-# Nightly
-cortex-nightly.exe run mistral
 ```
 </TabItem>
 </Tabs>
 :::info
-All model files are stored in the `~users/cortex/models` folder.
+All model files are stored in the `~/cortex/models` folder.
 :::
+
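Given the default models folder noted above (`~/cortex/models`), the downloaded `gguf` files can be enumerated with a short stdlib sketch (a hypothetical helper; the exact subdirectory layout inside that folder is not specified here):

```python
from pathlib import Path


def list_gguf_models(models_dir: Path = Path.home() / "cortex" / "models") -> list[str]:
    """Recursively collect .gguf files under the Cortex models folder."""
    if not models_dir.is_dir():
        return []
    return sorted(str(p.relative_to(models_dir)) for p in models_dir.rglob("*.gguf"))


if __name__ == "__main__":
    for name in list_gguf_models():
        print(name)
```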
 ## Using the Model
 ### API
 ```curl
@@ -103,153 +78,58 @@ curl http://localhost:39281/v1/chat/completions \
 "top_p": 1
 }'
 ```
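The same request can be issued from Python with only the standard library. Note that the original `curl` body is truncated in this hunk (only `"top_p": 1` survives), so the payload below is an assumed OpenAI-style body (`model`, `messages`, `top_p`), not necessarily the exact one from the docs:

```python
import json
import urllib.request

API_URL = "http://localhost:39281/v1/chat/completions"


def build_chat_payload(model: str, user_message: str, top_p: float = 1.0) -> dict:
    """Assemble an OpenAI-style chat-completions body (field set assumed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "top_p": top_p,
    }


def chat(model: str, user_message: str) -> dict:
    """POST the payload to the local Cortex server and return the parsed JSON."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(model, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(chat("mistral", "Hello!"))
```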
-### Cortex.js
-```js
-const resp = await cortex.chat.completions.create({
-  model: "mistral",
-  messages: [
-    { role: "system", content: "You are a chatbot." },
-    { role: "user", content: "What is the capital of the United States?" },
-  ],
-});
-```
-### Cortex.py
-```py
-completion = client.chat.completions.create(
-  model=mistral,
-  messages=[
-    {
-      "role": "user",
-      "content": "Say this is a test",
-    },
-  ],
-)
-```
+
 ## Stop a Model
 This command stops the running model.
 <Tabs>
 <TabItem value="MacOs/Linux" label="MacOs/Linux">
 ```sh
-# Stable
 cortex models stop mistral
-
-# Beta
-cortex-beta models stop mistral
-
-# Nightly
-cortex-nightly models stop mistral
 ```
 </TabItem>
 <TabItem value="Windows" label="Windows">
 ```sh
-# Stable
 cortex.exe models stop mistral
-
-# Beta
-cortex-beta.exe models stop mistral
-
-# Nightly
-cortex-nightly.exe models stop mistral
 ```
 </TabItem>
 </Tabs>
+
 ## Show the System State
 This command displays the running model and the hardware system status.
 <Tabs>
 <TabItem value="MacOs/Linux" label="MacOs/Linux">
 ```sh
-# Stable
 cortex ps
-
-# Beta
-cortex-beta ps
-
-# Nightly
-cortex-nightly ps
 ```
 </TabItem>
 <TabItem value="Windows" label="Windows">
 ```sh
-# Stable
 cortex.exe ps
-
-# Beta
-cortex-beta.exe ps
-
-# Nightly
-cortex-nightly.exe ps
 ```
 </TabItem>
 </Tabs>
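`cortex ps` (like the other CLI commands above) can also be driven from a script. A minimal stdlib wrapper, hedged for machines where Cortex is not installed (the helper name is illustrative):

```python
import shutil
import subprocess
from typing import Optional


def cli_output(binary: str, *args: str) -> Optional[str]:
    """Run a CLI command (e.g. `cortex ps`) and return its stdout,
    or None when the binary is not on PATH."""
    if shutil.which(binary) is None:
        return None
    result = subprocess.run([binary, *args], capture_output=True, text=True, check=True)
    return result.stdout


if __name__ == "__main__":
    out = cli_output("cortex", "ps")
    print(out if out is not None else "cortex is not installed")
```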
-## Run Different Model Variants
+
+<!-- ## Run Different Model Variants
 <Tabs>
 <TabItem value="MacOs/Linux" label="MacOs/Linux">
 ```sh
-# Stable
-## Run HuggingFace model with HuggingFace Repo
-cortex run TheBloke/Mistral-7B-Instruct-v0.2-GGUF
-
 # Run Mistral in ONNX format
 cortex run mistral:onnx
 
 # Run Mistral in TensorRT-LLM format
 cortex run mistral:tensorrt-llm
-
-# Beta
-## Run HuggingFace model with HuggingFace Repo
-cortex-beta run TheBloke/Mistral-7B-Instruct-v0.2-GGUF
-
-# Run Mistral in ONNX format
-cortex-beta run mistral:onnx
-
-# Run Mistral in TensorRT-LLM format
-cortex-beta run mistral:tensorrt-llm
-
-# Nightly
-## Run HuggingFace model with HuggingFace Repo
-cortex-nightly run TheBloke/Mistral-7B-Instruct-v0.2-GGUF
-
-# Run Mistral in ONNX format
-cortex-nightly run mistral:onnx
-
-# Run Mistral in TensorRT-LLM format
-cortex-nightly run mistral:tensorrt-llm
 ```
 </TabItem>
 <TabItem value="Windows" label="Windows">
 ```sh
-# Stable
-## Run HuggingFace model with HuggingFace Repo
-cortex.exe run TheBloke/Mistral-7B-Instruct-v0.2-GGUF
-
 # Run Mistral in ONNX format
 cortex.exe run mistral:onnx
 
 # Run Mistral in TensorRT-LLM format
 cortex.exe run mistral:tensorrt-llm
-
-# Beta
-## Run HuggingFace model with HuggingFace Repo
-cortex-beta.exe run TheBloke/Mistral-7B-Instruct-v0.2-GGUF
-
-# Run Mistral in ONNX format
-cortex-beta.exe run mistral:onnx
-
-# Run Mistral in TensorRT-LLM format
-cortex-beta.exe run mistral:tensorrt-llm
-
-# Nightly
-## Run HuggingFace model with HuggingFace Repo
-cortex-nightly.exe run TheBloke/Mistral-7B-Instruct-v0.2-GGUF
-
-# Run Mistral in ONNX format
-cortex-nightly.exe run mistral:onnx
-
-# Run Mistral in TensorRT-LLM format
-cortex-nightly.exe run mistral:tensorrt-llm
 ```
 </TabItem>
-</Tabs>
+</Tabs> -->
 
 ## What's Next?
 Now that Cortex.cpp is set up, here are the next steps to explore:
@@ -258,11 +138,3 @@ Now that Cortex.cpp is set up, here are the next steps to explore:
 2. Explore the Cortex.cpp [data folder](/docs/data-folder) to understand how it stores data.
 3. Learn about the structure of the [`model.yaml`](/docs/model-yaml) file in Cortex.cpp.
 4. Integrate Cortex.cpp [libraries](/docs/category/libraries) seamlessly into your Python or JavaScript applications.
-
-
-:::info
-Cortex.cpp is still in early development, so if you have any questions, please reach out to us:
-
-- [GitHub](https://github.com/janhq/cortex)
-- [Discord](https://discord.gg/YFKKeuVu)
-:::
