This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit 81cc859

YOLOv5 tutorial and readme fixes (#339)

* YOLOv5 tutorial and readme fixes
* fix reference to v3

1 parent 7bd1dbe

File tree

4 files changed: +22 −19 lines

integrations/ultralytics-yolov5/README.md

Lines changed: 5 additions & 3 deletions

````diff
@@ -29,11 +29,12 @@ The techniques include, but are not limited to:
 
 ## Highlights
 
-- Coming soon!
+- Example: [DeepSparse YOLOv5 Inference Example](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo)
 
 ## Tutorials
 
-- Coming soon!
+- [Sparsifying YOLOv5 Using Recipes](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/tutorials/sparsifying_yolov5_using_recipes.md)
+- [Sparse Transfer Learning With YOLOv5](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md)
 
 ## Installation
 
@@ -44,6 +45,7 @@ bash setup_integration.sh
 
 The setup_integration.sh file will clone the yolov5 repository with the SparseML integration as a subfolder.
 After the repo has successfully cloned, all dependencies from the `yolov5/requirements.txt` file will install in your current environment.
+Note, the `yolov5` repository requires Python 3.7 or greater.
 
 ## Quick Tour
 
@@ -84,7 +86,7 @@ The export process is modified such that the quantized and pruned models are cor
 
 For example, the following command can be run from within the yolov5 repository folder to export a trained/sparsified model's checkpoint:
 ```bash
-python models/export.py --weights PATH/TO/weights.pt --img-size 512 512
+python models/export.py --weights PATH/TO/weights.pt --dynamic
 ```
 
 The DeepSparse Engine accepts ONNX formats and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
````
A fourth changed file (a 624 KB binary asset) is not rendered in the diff.
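The README change above adds a note that the `yolov5` repository requires Python 3.7 or greater. That requirement can be checked programmatically before running the setup script; a minimal sketch (the function name is illustrative and not part of the integration):

```python
import sys

def meets_python_requirement(minimum=(3, 7)):
    """Return True when the running interpreter satisfies the minimum (major, minor) version."""
    return sys.version_info[:2] >= minimum

# For the yolov5 requirement noted in the README:
print(meets_python_requirement((3, 7)))
```

A check like this could sit at the top of a setup script to fail fast with a clear message instead of a later import error.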

integrations/ultralytics-yolov5/tutorials/sparsifying_yolov5_using_recipes.md

Lines changed: 9 additions & 9 deletions

````diff
@@ -16,7 +16,7 @@ limitations under the License.
 
 # Sparsifying YOLOv5 Using Recipes
 
-This tutorial shows how Neural Magic recipes simplify the sparsification process by encoding the hyperparameters and instructions needed to create highly accurate pruned and pruned-quantized YOLOv5 models, specifically for the S and L versions.
+This tutorial shows how Neural Magic recipes simplify the sparsification process by encoding the hyperparameters and instructions needed to create highly accurate pruned and pruned-quantized YOLOv5 models, specifically for the s and l versions.
 
 ## Overview
 
@@ -55,7 +55,7 @@ Otherwise, setup scripts for both [VOC](https://cs.stanford.edu/~roozbeh/pascal-
 
 1. For this tutorial, run the COCO setup script with the following command from the root of the `yolov5` repository:
 ```bash
-bash data/scripts/get_voc.sh
+bash data/scripts/get_coco.sh
 ```
 2. Download and validation of the COCO dataset will begin and takes around 10 minutes to finish.
 The script downloads the COCO dataset into a `coco` folder under the parent directory.
@@ -150,12 +150,12 @@ The table below compares these tradeoffs and shows how to run them on the COCO d
 
 | Recipe Name | Description | Train Command | COCO mAP@0.5 | Size on Disk | DeepSparse Performance** |
 |------------------|-------------|---------------|--------------|--------------|--------------------------|
-| YOLOv5s Baseline | The baseline, small YOLOv5 model used as the starting point for sparsification. | ``` python train.py --cfg ../models/yolov5s.yaml --weights "" --data coco.yaml --hyp data/hyp.scratch.yaml ``` | 0.556 | 154 MB | --- img/sec |
-| [YOLOv5s Pruned](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5s.pruned.md) | Creates a highly sparse, FP32 YOLOv5s model that recovers close to the baseline model. | ``` python train.py --cfg ../models/yolov5s.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5s.pruned.md ``` | 0.534 | 32.8 MB | --- img/sec |
-| [YOLOv5s Pruned Quantized](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5s.pruned_quantized.md) | Creates a highly sparse, INT8 YOLOv5s model that recovers reasonably close to the baseline model. | ``` python train.py --cfg ../models/yolov5s.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5s.pruned_quantized.md ``` | 0.525 | 12.7 MB | --- img/sec |
-| YOLOv5l Baseline | The baseline, large YOLOv5 model used as the starting point for sparsification. | ``` python train.py --cfg ../models/yolov5l.yaml --weights "" --data coco.yaml --hyp data/hyp.scratch.yaml ``` | 0.654 | 24.8 MB | --- img/sec |
-| [YOLOv5l Pruned](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5l.pruned.md) | Creates a highly sparse, FP32 YOLOv5l model that recovers close to the baseline model. | ``` python train.py --cfg ../models/yolov5l.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5l.pruned.md ``` | 0.643 | 8.4 MB | --- img/sec |
-| [YOLOv5l Pruned Quantized](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5l.pruned_quantized.md) | Creates a highly sparse, INT8 YOLOv5l model that recovers reasonably close to the baseline model. | ``` python train.py --cfg ../models/yolov5l.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5l.pruned_quantized.md ``` | 0.623 | 3.3 MB | --- img/sec |
+| YOLOv5s Baseline | The baseline, small YOLOv5 model used as the starting point for sparsification. | ``` python train.py --cfg ../models/yolov5s.yaml --weights "" --data coco.yaml --hyp data/hyp.scratch.yaml ``` | 0.556 | 154 MB | 78.2 img/sec |
+| [YOLOv5s Pruned](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5s.pruned.md) | Creates a highly sparse, FP32 YOLOv5s model that recovers close to the baseline model. | ``` python train.py --cfg ../models/yolov5s.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5s.pruned.md ``` | 0.534 | 32.8 MB | 100.5 img/sec |
+| [YOLOv5s Pruned Quantized](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5s.pruned_quantized.md) | Creates a highly sparse, INT8 YOLOv5s model that recovers reasonably close to the baseline model. | ``` python train.py --cfg ../models/yolov5s.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5s.pruned_quantized.md ``` | 0.525 | 12.7 MB | 198.2 img/sec |
+| YOLOv5l Baseline | The baseline, large YOLOv5 model used as the starting point for sparsification. | ``` python train.py --cfg ../models/yolov5l.yaml --weights "" --data coco.yaml --hyp data/hyp.scratch.yaml ``` | 0.654 | 24.8 MB | 22.7 img/sec |
+| [YOLOv5l Pruned](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5l.pruned.md) | Creates a highly sparse, FP32 YOLOv5l model that recovers close to the baseline model. | ``` python train.py --cfg ../models/yolov5l.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5l.pruned.md ``` | 0.643 | 8.4 MB | 40.1 img/sec |
+| [YOLOv5l Pruned Quantized](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5l.pruned_quantized.md) | Creates a highly sparse, INT8 YOLOv5l model that recovers reasonably close to the baseline model. | ``` python train.py --cfg ../models/yolov5l.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5l.pruned_quantized.md ``` | 0.623 | 3.3 MB | 98.6 img/sec |
 
 ** DeepSparse Performance measured on an AWS C5 instance with 24 cores, batch size 64, and 640x640 input with version 1.6 of the DeepSparse Engine.
 
@@ -223,7 +223,7 @@ The [`export.py` script](https://github.com/neuralmagic/yolov5/blob/master/model
 1. Enter the following command to load the PyTorch graph, convert to ONNX, and correct any misformatted pieces of the graph for the pruned and quantized models.
 
 ```bash
-python models/export.py --weights PATH_TO_SPARSIFIED_WEIGHTS
+python models/export.py --weights PATH_TO_SPARSIFIED_WEIGHTS --dynamic
 ```
 
 The result is a new file added next to the sparsified checkpoint with a `.onnx` extension:
````
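The benchmark columns filled in by this commit imply concrete throughput speedups and on-disk compression ratios relative to the baselines. A quick sanity check over the table values (the figures are copied directly from the diff; the helper is illustrative only):

```python
# (img/sec, size on disk in MB), taken from the updated benchmark table.
baseline = {"yolov5s": (78.2, 154.0), "yolov5l": (22.7, 24.8)}
pruned_quantized = {"yolov5s": (198.2, 12.7), "yolov5l": (98.6, 3.3)}

def speedup_and_compression(model):
    """Throughput speedup and size reduction of the pruned-quantized model vs. its baseline."""
    base_speed, base_size = baseline[model]
    pq_speed, pq_size = pruned_quantized[model]
    return round(pq_speed / base_speed, 2), round(base_size / pq_size, 2)

print(speedup_and_compression("yolov5s"))  # roughly 2.5x faster, 12x smaller
print(speedup_and_compression("yolov5l"))  # roughly 4.3x faster, 7.5x smaller
```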

integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md

Lines changed: 8 additions & 7 deletions

````diff
@@ -47,12 +47,13 @@ For Neural Magic Support, sign up or log in to get help with your questions in o
 
 | Sparsification Type | Description | COCO mAP@0.5 | Size on Disk | DeepSparse Performance** |
 |---------------------------|----------------------------------------------------------------------------------------------|--------------|--------------|--------------------------|
-| YOLOv5s Baseline | The baseline, small YOLOv5 model used as the starting point for sparsification. | 0.556 | 154 MB | --- img/sec |
-| YOLOv5s Pruned | A highly sparse, FP32 YOLOv5s model that recovers close to the baseline model. | 0.534 | 32.8 MB | --- img/sec |
-| YOLOv5s Pruned Quantized | A highly sparse, INT8 YOLOv5s model that recovers reasonably close to the baseline model. | 0.525 | 12.7 MB | --- img/sec |
-| YOLOv5l Baseline | The baseline, large YOLOv5 model used as the starting point for sparsification. | 0.654 | 24.8 MB | --- img/sec |
-| YOLOv5l Pruned | A highly sparse, FP32 YOLOv5l model that recovers close to the baseline model. | 0.643 | 8.4 MB | --- img/sec |
-| YOLOv5l Pruned Quantized | A highly sparse, INT8 YOLOv5l model that recovers reasonably close to the baseline model. | 0.623 | 3.3 MB | --- img/sec |
+| YOLOv5s Baseline | The baseline, small YOLOv5 model used as the starting point for sparsification. | 0.556 | 154 MB | 78.2 img/sec |
+| YOLOv5s Pruned | A highly sparse, FP32 YOLOv5s model that recovers close to the baseline model. | 0.534 | 32.8 MB | 100.5 img/sec |
+| YOLOv5s Pruned Quantized | A highly sparse, INT8 YOLOv5s model that recovers reasonably close to the baseline model. | 0.525 | 12.7 MB | 198.2 img/sec |
+| YOLOv5l Baseline | The baseline, large YOLOv5 model used as the starting point for sparsification. | 0.654 | 24.8 MB | 22.7 img/sec |
+| YOLOv5l Pruned | A highly sparse, FP32 YOLOv5l model that recovers close to the baseline model. | 0.643 | 8.4 MB | 40.1 img/sec |
+| YOLOv5l Pruned Quantized | A highly sparse, INT8 YOLOv5l model that recovers reasonably close to the baseline model. | 0.623 | 3.3 MB | 98.6 img/sec |
+
 ** DeepSparse Performance measured on an AWS C5 instance with 24 cores, batch size 64, and 640x640 input with version 1.6 of the DeepSparse Engine.
 
 2) After deciding on which model meets your performance requirements for both speed and accuracy, select the SparseZoo stub associated with that model.
@@ -200,7 +201,7 @@ The [export.py script](https://github.com/neuralmagic/yolov5/blob/master/models/
 
 1) Enter the following command to load the PyTorch graph, convert to ONNX, and correct any misformatted pieces of the graph for the pruned and quantized models.
 ```bash
-python models/export.py --weights PATH_TO_SPARSIFIED_WEIGHTS --img-size 512 512
+python models/export.py --weights PATH_TO_SPARSIFIED_WEIGHTS --dynamic
 ```
 
 The result is a new file added next to the sparsified checkpoint with a `.onnx` extension:
````
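Both tutorials state that the export step writes the ONNX graph next to the sparsified checkpoint, reusing its name with a `.onnx` extension. A minimal sketch of locating that output, assuming only the naming convention described above (the helper and the example path are illustrative, not part of the yolov5 scripts):

```python
from pathlib import Path

def expected_onnx_path(weights):
    """Path where the exported ONNX graph is described as landing:
    beside the checkpoint, same stem, .onnx extension."""
    return Path(weights).with_suffix(".onnx")

print(expected_onnx_path("runs/train/exp/weights/best.pt"))
```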
