This repository was archived by the owner on Jul 4, 2025. It is now read-only.
docs/docs/new/about.md (+2 −2 lines changed)
@@ -13,12 +13,12 @@ Learn more on [GitHub](https://github.com/janhq/nitro).
 - **Fast Inference:** Built on top of the cutting-edge inference library `llama.cpp`, modified to be production ready.
 - **Lightweight:** Only 3MB, ideal for resource-sensitive environments.
 - **Easily Embeddable:** Simple integration into existing applications, offering flexibility.
-- **Quick Setup:** Approximately 10-second initialization for swift deployment.
+- **Quick Setup:** Approximately 10-second initialization.
 - **Enhanced Web Framework:** Incorporates `drogon cpp` to boost web service efficiency.

 ### OpenAI-compatible API

-One of the significant advantages of using Nitro is its compatibility with OpenAI's API structure. The command format for making inference calls with Nitro is very similar to that used with OpenAI's API. This similarity ensures a transition for users who are already familiar with OpenAI's system.
+Nitro's compatibility with OpenAI's API structure is a notable advantage. Its command format for inference calls closely mirrors that of OpenAI, facilitating an easy transition for users.
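To illustrate the compatibility the revised paragraph describes, here is a minimal sketch of building an OpenAI-style chat request aimed at a local Nitro server. The endpoint path, port, and model-free payload below are illustrative assumptions, not values taken from Nitro's documentation:

```python
import json

# Hypothetical local Nitro endpoint; path and port are assumptions
# for illustration, not verified against Nitro's docs.
NITRO_URL = "http://localhost:3928/inferences/llamacpp/chat_completion"

# The request body follows OpenAI's chat-completions schema, which is
# why code written against OpenAI's API needs little more than a URL swap.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# `body` is what would be POSTed (e.g. via urllib.request or curl).
body = json.dumps(payload)
```

Because the payload shape is identical to OpenAI's, existing client tooling can typically be redirected by changing only the base URL.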