This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit a04a93a

docs: use tabs for multiple option in installation docs
1 parent 3c507fe commit a04a93a

File tree

2 files changed: +53 −32 lines


docs/docs/installation/docker.mdx

Lines changed: 41 additions & 26 deletions
@@ -30,28 +30,39 @@ This guide walks you through the setup and running of Cortex using Docker.
 ```
 
 2. **Build the Docker Image**
-   - To use the latest versions of `cortex.cpp` and `cortex.llamacpp`:
-   ```bash
-   docker build -t cortex --build-arg CORTEX_CPP_VERSION=$(git rev-parse HEAD) -f docker/Dockerfile .
-   ```
-   - To specify versions:
-   ```bash
-   docker build --build-arg CORTEX_LLAMACPP_VERSION=0.1.34 --build-arg CORTEX_CPP_VERSION=$(git rev-parse HEAD) -t cortex -f docker/Dockerfile .
-   ```
+
+   <Tabs>
+   <TabItem value="Latest cortex.llamacpp" label="Latest cortex.llamacpp">
+   ```sh
+   docker build -t cortex --build-arg CORTEX_CPP_VERSION=$(git rev-parse HEAD) -f docker/Dockerfile .
+   ```
+   </TabItem>
+   <TabItem value="Specify cortex.llamacpp version" label="Specify cortex.llamacpp version">
+   ```sh
+   docker build --build-arg CORTEX_LLAMACPP_VERSION=0.1.34 --build-arg CORTEX_CPP_VERSION=$(git rev-parse HEAD) -t cortex -f docker/Dockerfile .
+   ```
+   </TabItem>
+   </Tabs>
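The two tabs above differ only in whether `CORTEX_LLAMACPP_VERSION` is pinned. As a sketch (not part of the commit — the helper name `cortex_build_args` is made up here), the choice can be expressed as one function that emits the right `--build-arg` flags:

```shell
# Illustrative helper, not part of the commit: builds the --build-arg list
# used by the two docker build variants in the tabs above.
#   $1 = cortex.cpp ref (e.g. the output of `git rev-parse HEAD`)
#   $2 = optional cortex.llamacpp version; omit it to build with the latest
cortex_build_args() {
  args="--build-arg CORTEX_CPP_VERSION=$1"
  if [ -n "$2" ]; then
    args="--build-arg CORTEX_LLAMACPP_VERSION=$2 $args"
  fi
  echo "$args"
}
```

Usage would then mirror the docs' commands, e.g. `docker build $(cortex_build_args "$(git rev-parse HEAD)" 0.1.34) -t cortex -f docker/Dockerfile .`.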
 
 3. **Run the Docker Container**
-   - Create a Docker volume to store models and data:
+   - Create a Docker volume to store models and data:
   ```bash
   docker volume create cortex_data
   ```
-   - Run in **GPU mode** (requires `nvidia-docker`):
-   ```bash
-   docker run --gpus all -it -d --name cortex -v cortex_data:/root/cortexcpp -p 39281:39281 cortex
-   ```
-   - Run in **CPU mode**:
-   ```bash
-   docker run -it -d --name cortex -v cortex_data:/root/cortexcpp -p 39281:39281 cortex
-   ```
+
+   <Tabs>
+   <TabItem value="GPU mode" label="GPU mode">
+   ```sh
+   # requires nvidia-container-toolkit
+   docker run --gpus all -it -d --name cortex -v cortex_data:/root/cortexcpp -p 39281:39281 cortex
+   ```
+   </TabItem>
+   <TabItem value="CPU mode" label="CPU mode">
+   ```sh
+   docker run -it -d --name cortex -v cortex_data:/root/cortexcpp -p 39281:39281 cortex
+   ```
+   </TabItem>
+   </Tabs>
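Both run commands above publish the API on port 39281. A quick way to confirm the container is answering is to poll that port — a sketch only, not part of the commit; the helper names (`cortex_url`, `wait_for_cortex`) and the retry loop are illustrative:

```shell
# Illustrative helpers, not part of the commit.
# cortex_url builds the endpoint URL used by the docs' curl examples.
cortex_url() {
  # $1 = port, $2 = path
  printf 'http://localhost:%s%s' "$1" "$2"
}

# wait_for_cortex polls the engines endpoint until the server answers,
# retrying a few times while the container starts up.
wait_for_cortex() {
  url=$(cortex_url "${1:-39281}" /v1/engines)
  for _ in 1 2 3 4 5; do
    if curl --silent --fail "$url" >/dev/null 2>&1; then
      echo "up: $url"
      return 0
    fi
    sleep 2
  done
  echo "no answer from $url" >&2
  return 1
}
```

Running `wait_for_cortex` right after `docker run` would distinguish "container still starting" from "container failed", before moving on to the model-pull steps.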
 
 4. **Check Logs (Optional)**
 ```bash
@@ -106,15 +117,18 @@ curl --request GET --url http://localhost:39281/v1/engines --header "Content-Type: application/json"
 - Open a terminal and run `websocat ws://localhost:39281/events` to capture download events; follow [this instruction](https://github.com/vi/websocat?tab=readme-ov-file#installation) to install `websocat`.
 - In another terminal, pull models using the commands below.
 
-   ```bash
-   # Pull model from Cortex's Hugging Face hub
-   curl --request POST --url http://localhost:39281/v1/models/pull --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}'
-   ```
-
-   ```bash
-   # Pull model directly from a URL
-   curl --request POST --url http://localhost:39281/v1/models/pull --header 'Content-Type: application/json' --data '{"model": "https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full.q2_k.gguf"}'
-   ```
+   <Tabs>
+   <TabItem value="Pull model from cortexso's Hugging Face hub" label="Pull model from Cortex's Hugging Face hub">
+   ```sh
+   curl --request POST --url http://localhost:39281/v1/models/pull --header 'Content-Type: application/json' --data '{"model": "tinyllama:gguf"}'
+   ```
+   </TabItem>
+   <TabItem value="Pull model directly from a URL" label="Pull model directly from a URL">
+   ```sh
+   curl --request POST --url http://localhost:39281/v1/models/pull --header 'Content-Type: application/json' --data '{"model": "https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full.q2_k.gguf"}'
+   ```
+   </TabItem>
+   </Tabs>
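The two pull tabs above send the same POST to `/v1/models/pull`; only the `"model"` value changes (a tag like `tinyllama:gguf` or a direct URL). As a sketch — the helper names `pull_payload` and `pull_model` are made up for illustration, not part of the docs:

```shell
# Illustrative helpers, not part of the commit.
# pull_payload assembles the JSON body used by the /v1/models/pull calls above.
pull_payload() {
  printf '{"model": "%s"}' "$1"
}

# pull_model issues the pull request; works for a model tag or a direct URL.
pull_model() {
  # usage: pull_model tinyllama:gguf
  curl --request POST --url http://localhost:39281/v1/models/pull \
       --header 'Content-Type: application/json' \
       --data "$(pull_payload "$1")"
}
```

With this, both tabs collapse to `pull_model tinyllama:gguf` or `pull_model "$MODEL_URL"`.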
 
 - After pulling models successfully, run the command below to list models.
 ```bash

docs/docs/installation/mac.mdx

Lines changed: 12 additions & 6 deletions
@@ -83,16 +83,22 @@ The script requires sudo permission.
 ```
 2. Build Cortex.cpp:
 
-   ```bash
+   <Tabs>
+   <TabItem value="Mac Silicon" label="Mac Silicon">
+   ```sh
 cd engine
 make configure-vcpkg
-
-   # Mac silicon
-   make build CMAKE_EXTRA_FLAGS="-DCORTEX_CPP_VERSION=latest -DCMAKE_BUILD_TEST=OFF -DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake"
-
-   # Mac Intel
 make build CMAKE_EXTRA_FLAGS="-DCORTEX_CPP_VERSION=latest -DCMAKE_BUILD_TEST=OFF -DMAC_ARM64=ON -DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake"
 ```
+   </TabItem>
+   <TabItem value="Mac Intel" label="Mac Intel">
+   ```sh
+   cd engine
+   make configure-vcpkg
+   make build CMAKE_EXTRA_FLAGS="-DCORTEX_CPP_VERSION=latest -DCMAKE_BUILD_TEST=OFF -DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake"
+   ```
+   </TabItem>
+   </Tabs>
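The two tabs above differ only in the `-DMAC_ARM64=ON` flag. As a sketch (the helper name `mac_build_flags` is made up here, and the flag order differs harmlessly from the docs since CMake flags are order-independent), the split can be driven off `uname -m`:

```shell
# Illustrative helper, not part of the commit: picks the CMAKE_EXTRA_FLAGS
# for the host architecture, mirroring the Mac Silicon / Mac Intel tabs above.
mac_build_flags() {
  common="-DCORTEX_CPP_VERSION=latest -DCMAKE_BUILD_TEST=OFF -DCMAKE_TOOLCHAIN_FILE=vcpkg/scripts/buildsystems/vcpkg.cmake"
  case "$1" in
    arm64) echo "$common -DMAC_ARM64=ON" ;;  # Apple Silicon tab
    *)     echo "$common" ;;                 # Intel tab
  esac
}
```

Usage would then be a single command on either machine: `make build CMAKE_EXTRA_FLAGS="$(mac_build_flags "$(uname -m)")"`.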
 
 3. Verify that Cortex.cpp is built correctly by getting help information.
 
0 commit comments
