This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit 8245cb1: update the quickstart
1 parent: 5b432e8

File tree: 1 file changed (+7, -15 lines)

docs/docs/new/quickstart.md

Lines changed: 7 additions & 15 deletions
@@ -6,29 +6,27 @@ description: How to use Nitro
 
 ## Step 1: Install Nitro
 
+Download and install Nitro on your system.
+
 ### For Linux and MacOS
 
-Open your terminal and enter the following command. This will download and install Nitro on your system.
 ```bash
 curl -sfL https://raw.githubusercontent.com/janhq/nitro/main/install.sh | sudo /bin/bash -
 ```
 
 ### For Windows
 
-Open PowerShell and execute the following command. This will perform the same actions as for Linux and MacOS but is tailored for Windows.
 ```bash
 powershell -Command "& { Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/janhq/nitro/main/install.bat' -OutFile 'install.bat'; .\install.bat; Remove-Item -Path 'install.bat' }"
 ```
 
-> **NOTE:**Installing Nitro will add new files and configurations to your system to enable it to run.
+> Installing Nitro will add new files and configurations to your system to enable it to run.
 
 For a manual installation process, see: [Install from Source](install.md)
 
 ## Step 2: Downloading a Model
 
-Next, we need to download a model. For this example, we'll use the [Llama2 7B chat model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main).
-
-- Create a `/model` and navigate into it:
+For this example, we'll use the [Llama2 7B chat model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/tree/main).
 
 ```bash
 mkdir model && cd model
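The `wget` line in the next hunk's header is truncated, so the exact download URL is not recoverable from this diff. A hedged sketch of the download step, with a placeholder file path (the real `.gguf` file name must be taken from the Hugging Face repo):

```shell
# Sketch of Step 2: create the model directory and fetch a GGUF file.
# MODEL_URL below is a hypothetical placeholder -- the commit's wget URL
# is truncated, so substitute a real file from the Llama-2-7B-Chat-GGUF repo.
mkdir -p model
MODEL_FILE="llama-2-7b-model.gguf"
MODEL_URL="https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/EXAMPLE.gguf"
echo "wget -O model/$MODEL_FILE $MODEL_URL"
```

This only prints the command it would run; drop the `echo` to actually download.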
@@ -37,8 +35,6 @@ wget -O llama-2-7b-model.gguf https://huggingface.co/TheBloke/Llama-2-7B-Chat-GG
 
 ## Step 3: Run Nitro server
 
-To start using Nitro, you need to run its server.
-
 ```bash title="Run Nitro server"
 nitro
 ```
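The next hunk's header shows the quickstart checking server health at `http://localhost:3928/healthz`. A small sketch of that check, assuming Nitro's default port 3928 from the diff:

```shell
# Hedged sketch: poll the Nitro health endpoint until the server is up.
# Port 3928 comes from the healthz call visible in the diff below.
health_check() {
  curl -sf "http://localhost:3928/healthz" >/dev/null
}
# Not executed here (no server is running in this sketch); usage would be:
echo "until health_check; do sleep 1; done"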
@@ -51,7 +47,7 @@ curl http://localhost:3928/healthz
 
 ## Step 4: Load model
 
-To load the model to Nitro server, you need to run:
+To load the model to Nitro server, run:
 
 ```bash title="Load model"
 curl http://localhost:3928/inferences/llamacpp/loadmodel \
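The `loadmodel` request body is cut off by the hunk boundary above. A hedged sketch of what such a body might look like; the field names (`llama_model_path`, `ctx_len`, `ngl`) are assumptions drawn from Nitro's documentation, not from this diff:

```shell
# Hypothetical loadmodel payload -- field names are assumptions, since the
# diff truncates the actual curl body at the hunk boundary.
cat > /tmp/loadmodel.json <<'EOF'
{
  "llama_model_path": "model/llama-2-7b-model.gguf",
  "ctx_len": 512,
  "ngl": 100
}
EOF
grep -q '"llama_model_path"' /tmp/loadmodel.json && echo "payload written"
# To send it:
#   curl http://localhost:3928/inferences/llamacpp/loadmodel \
#     -H 'Content-Type: application/json' -d @/tmp/loadmodel.json
```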
@@ -65,9 +61,7 @@ curl http://localhost:3928/inferences/llamacpp/loadmodel \
 
 ## Step 5: Making an Inference
 
-Finally, let's make an actual inference call using Nitro.
-
-- In your terminal, execute:
+Finally, let's chat with the model using Nitro.
 
 ```bash title="Nitro Inference"
 curl http://localhost:3928/v1/chat/completions \
@@ -82,6 +76,4 @@ curl http://localhost:3928/v1/chat/completions \
 }'
 ```
 
-This command sends a request to Nitro, asking it about the 2020 World Series winner.
-
-- As you can see, A key benefit of Nitro is its alignment with [OpenAI's API structure](https://platform.openai.com/docs/guides/text-generation?lang=curl). Its inference call syntax closely mirrors that of OpenAI's API, facilitating an easier shift for those accustomed to OpenAI's framework.
+As you can see, a key benefit of Nitro is its alignment with [OpenAI's API structure](https://platform.openai.com/docs/guides/text-generation?lang=curl). Its inference call syntax closely mirrors that of OpenAI's API, facilitating an easier shift for those accustomed to OpenAI's framework.
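The OpenAI alignment the diff describes can be sketched with a single chat payload sent to either endpoint. The question is the one the removed line mentions (the 2020 World Series winner); the `model` value is illustrative:

```shell
# OpenAI-style chat payload; works against both Nitro's local endpoint and
# OpenAI's hosted one, which is the alignment the quickstart highlights.
cat > /tmp/chat.json <<'EOF'
{
  "model": "llama-2-7b-model",
  "messages": [
    {"role": "user", "content": "Who won the world series in 2020?"}
  ]
}
EOF
grep -q '"messages"' /tmp/chat.json && echo "chat payload ok"
# Nitro:  curl http://localhost:3928/v1/chat/completions \
#           -H 'Content-Type: application/json' -d @/tmp/chat.json
# OpenAI: curl https://api.openai.com/v1/chat/completions \
#           -H "Authorization: Bearer $OPENAI_API_KEY" \
#           -H 'Content-Type: application/json' -d @/tmp/chat.json
```

Only the base URL (and the auth header for OpenAI) changes between the two calls.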

0 commit comments
