- [Sparsifying YOLOv5 Using Recipes](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/tutorials/sparsifying_yolov5_using_recipes.md)
- [Sparse Transfer Learning With YOLOv5](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md)
## Installation
The `setup_integration.sh` file will clone the `yolov5` repository with the SparseML integration as a subfolder.
After the repo has successfully cloned, all dependencies from the `yolov5/requirements.txt` file will install in your current environment.
Note that the `yolov5` repository requires Python 3.7 or greater.
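Before installing, it can help to fail fast if the interpreter is too old. A minimal standard-library check (this helper is illustrative, not part of the integration):

```python
import sys

def check_python(min_version=(3, 7)):
    """Return True if the running interpreter meets the yolov5 requirement."""
    return sys.version_info >= min_version

# The yolov5 repository requires Python 3.7 or greater.
if not check_python():
    raise RuntimeError("the yolov5 repository requires Python 3.7 or greater")
```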
## Quick Tour
For example, the following command can be run from within the yolov5 repository folder to export a trained/sparsified model's checkpoint:
The DeepSparse Engine accepts ONNX formats and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
# Sparsifying YOLOv5 Using Recipes
This tutorial shows how Neural Magic recipes simplify the sparsification process by encoding the hyperparameters and instructions needed to create highly accurate pruned and pruned-quantized YOLOv5 models, specifically for the s and l versions.
## Overview
1. For this tutorial, run the COCO setup script with the following command from the root of the `yolov5` repository:
```bash
bash data/scripts/get_coco.sh
```
2. The COCO dataset will then download and validate, which takes around 10 minutes to finish.
The script downloads the COCO dataset into a `coco` folder under the parent directory.
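Because the download is large, a quick layout check before training can save a wasted run. A minimal sketch, assuming the standard yolov5 COCO layout with `images` and `labels` subfolders (the subfolder names are an assumption, not stated above):

```python
from pathlib import Path

def coco_ready(parent=".."):
    """Check that the `coco` folder created by get_coco.sh exists with the
    expected subfolders (names assumed from the standard yolov5 layout)."""
    root = Path(parent) / "coco"
    return root.is_dir() and all(
        (root / sub).is_dir() for sub in ("images", "labels")
    )
```

From the root of the `yolov5` repository, `coco_ready("..")` should return True once the script has finished.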
| Recipe Name | Description | Train Command | COCO mAP@0.5 | Size on Disk | DeepSparse Performance** |
|-------------|-------------|---------------|--------------|--------------|--------------------------|
| YOLOv5s Baseline | The baseline, small YOLOv5 model used as the starting point for sparsification. | ``` python train.py --cfg ../models/yolov5s.yaml --weights "" --data coco.yaml --hyp data/hyp.scratch.yaml ``` | 0.556 | 154 MB | 78.2 img/sec |
| [YOLOv5s Pruned](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5s.pruned.md) | Creates a highly sparse, FP32 YOLOv5s model that recovers close to the baseline model. | ``` python train.py --cfg ../models/yolov5s.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5s.pruned.md ``` | 0.534 | 32.8 MB | 100.5 img/sec |
| [YOLOv5s Pruned Quantized](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5s.pruned_quantized.md) | Creates a highly sparse, INT8 YOLOv5s model that recovers reasonably close to the baseline model. | ``` python train.py --cfg ../models/yolov5s.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5s.pruned_quantized.md ``` | 0.525 | 12.7 MB | 198.2 img/sec |
| YOLOv5l Baseline | The baseline, large YOLOv5 model used as the starting point for sparsification. | ``` python train.py --cfg ../models/yolov5l.yaml --weights "" --data coco.yaml --hyp data/hyp.scratch.yaml ``` | 0.654 | 24.8 MB | 22.7 img/sec |
| [YOLOv5l Pruned](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5l.pruned.md) | Creates a highly sparse, FP32 YOLOv5l model that recovers close to the baseline model. | ``` python train.py --cfg ../models/yolov5l.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5l.pruned.md ``` | 0.643 | 8.4 MB | 40.1 img/sec |
| [YOLOv5l Pruned Quantized](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/recipes/yolov5l.pruned_quantized.md) | Creates a highly sparse, INT8 YOLOv5l model that recovers reasonably close to the baseline model. | ``` python train.py --cfg ../models/yolov5l.yaml --weights PATH_TO_COCO_PRETRAINED_WEIGHTS --data coco.yaml --hyp data/hyp.scratch.yaml --recipe ../recipes/yolov5l.pruned_quantized.md ``` | 0.623 | 3.3 MB | 98.6 img/sec |
** DeepSparse Performance measured on an AWS C5 instance with 24 cores, batch size 64, and 640x640 input with version 1.6 of the DeepSparse Engine.
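To put the table in perspective, the pruned-quantized YOLOv5s is roughly twelve times smaller on disk and about two and a half times faster than its baseline; the arithmetic, using the numbers above, is:

```python
# Numbers taken from the table above (YOLOv5s baseline vs. pruned-quantized).
baseline_mb, pq_mb = 154.0, 12.7       # size on disk
baseline_ips, pq_ips = 78.2, 198.2     # DeepSparse throughput, img/sec

size_reduction = baseline_mb / pq_mb   # ~12.1x smaller on disk
speedup = pq_ips / baseline_ips        # ~2.5x faster on DeepSparse
```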
1. Enter the following command to load the PyTorch graph, convert to ONNX, and correct any misformatted pieces of the graph for the pruned and quantized models.
| Model | Description | COCO mAP@0.5 | Size on Disk | DeepSparse Performance** |
|-------|-------------|--------------|--------------|--------------------------|
| YOLOv5s Baseline | The baseline, small YOLOv5 model used as the starting point for sparsification. | 0.556 | 154 MB | 78.2 img/sec |
| YOLOv5s Pruned | A highly sparse, FP32 YOLOv5s model that recovers close to the baseline model. | 0.534 | 32.8 MB | 100.5 img/sec |
| YOLOv5s Pruned Quantized | A highly sparse, INT8 YOLOv5s model that recovers reasonably close to the baseline model. | 0.525 | 12.7 MB | 198.2 img/sec |
| YOLOv5l Baseline | The baseline, large YOLOv5 model used as the starting point for sparsification. | 0.654 | 24.8 MB | 22.7 img/sec |
| YOLOv5l Pruned | A highly sparse, FP32 YOLOv5l model that recovers close to the baseline model. | 0.643 | 8.4 MB | 40.1 img/sec |
| YOLOv5l Pruned Quantized | A highly sparse, INT8 YOLOv5l model that recovers reasonably close to the baseline model. | 0.623 | 3.3 MB | 98.6 img/sec |
** DeepSparse Performance measured on an AWS C5 instance with 24 cores, batch size 64, and 640x640 input with version 1.6 of the DeepSparse Engine.
2) After deciding on which model meets your performance requirements for both speed and accuracy, select the SparseZoo stub associated with that model.
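One way to formalize that speed/accuracy tradeoff is to filter the table programmatically. A sketch (the helper and thresholds are illustrative; the rows are copied from the summary table above):

```python
# Rows from the summary table: (name, mAP@0.5, size in MB, img/sec on DeepSparse).
MODELS = [
    ("YOLOv5s Baseline",         0.556, 154.0,  78.2),
    ("YOLOv5s Pruned",           0.534,  32.8, 100.5),
    ("YOLOv5s Pruned Quantized", 0.525,  12.7, 198.2),
    ("YOLOv5l Baseline",         0.654,  24.8,  22.7),
    ("YOLOv5l Pruned",           0.643,   8.4,  40.1),
    ("YOLOv5l Pruned Quantized", 0.623,   3.3,  98.6),
]

def pick_model(min_map, min_ips):
    """Fastest model meeting both the accuracy and throughput floors, or None."""
    ok = [m for m in MODELS if m[1] >= min_map and m[3] >= min_ips]
    return max(ok, key=lambda m: m[3])[0] if ok else None
```

For example, requiring at least 0.60 mAP@0.5 and 50 img/sec selects the pruned-quantized YOLOv5l.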
1) Enter the following command to load the PyTorch graph, convert to ONNX, and correct any misformatted pieces of the graph for the pruned and quantized models.
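The exact command depends on your checkout of the `export.py` script; one way to keep the invocation tidy is to build it programmatically. A sketch in which the flag names (`--weights`, `--img-size`, `--batch-size`) are hypothetical, so verify them against `python models/export.py --help` in your yolov5 checkout:

```python
import shlex

def export_command(weights, img_size=640, batch_size=1):
    """Build the ONNX export invocation as an argument list.
    Flag names are assumptions; check models/export.py --help for the real ones."""
    return [
        "python", "models/export.py",
        "--weights", weights,
        "--img-size", str(img_size),
        "--batch-size", str(batch_size),
    ]

# The list form can be passed straight to subprocess.run, or joined for display:
cmd = shlex.join(export_command("runs/train/exp/weights/best.pt"))
```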