docs/GETTING_STARTED.md
# Getting Started

[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) is used to speed up model inference.

All of the code has been tested successfully in the following environments (a quick version check follows the list):
* Linux (Ubuntu 18.04, 20.04, 22.04)
* Python 3.10
* PyTorch 2.0 or higher
* CUDA 12.1 or higher
* TensorRT-LLM 0.11.0 (stable version)
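
A quick way to confirm your setup matches this list (assuming `python` and `nvcc` are on your `PATH`, and that `tensorrt_llm` exposes a `__version__` attribute):

```bash
# Print the installed toolchain versions.
python --version                                    # expect Python 3.10.x
nvcc --version | grep release                       # expect CUDA 12.1 or newer
python -c "import torch; print(torch.__version__)"  # expect 2.0 or newer
python -c "import tensorrt_llm; print(tensorrt_llm.__version__)"  # expect 0.11.0
```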

### 1. Conda or Python Environment Preparation

* Please follow steps 1 and 2 of the [official tutorial](https://nvidia.github.io/TensorRT-LLM/installation/linux.html) for TensorRT-LLM to set up the environment.

Note that we used the TensorRT-LLM **stable version `0.11.0`**.
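
If you prefer a conda environment, a minimal sketch matching the tested Python version (the environment name `trt_llm` is an assumption):

```bash
# Create and activate a fresh environment with the tested Python version.
conda create -n trt_llm python=3.10 -y
conda activate trt_llm
```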
```bash
# Installing on Linux

# Step 1. Retrieve and launch the docker container (optional).
#
# You can pre-install the environment using the NVIDIA Container Toolkit
# (https://docs.nvidia.com/datacenter/cloud-native/container-toolkit)
# to avoid manual environment configuration.
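# As a rough sketch of Step 1 (the CUDA image tag is an assumption; the
# official tutorial lists the exact one), the container can be launched with:
docker run --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
    --gpus=all nvidia/cuda:12.1.0-devel-ubuntu22.04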

# ...

# Step 2. Install TensorRT-LLM.

# Install the latest preview version (corresponding to the main branch) of TensorRT-LLM.
# If you want to install the stable version (corresponding to the release branch), please
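# A minimal sketch of pinning the stable release named above (the package index
# URL follows NVIDIA's docs; treat the exact command as an assumption):
pip3 install tensorrt_llm==0.11.0 --extra-index-url https://pypi.nvidia.com
```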