This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit ead40a3 (1 parent: b5f673f)

update DeepSparse YOLO examples link (#333)

* update DeepSparse YOLO examples link
* update v5
* Update sparsifying_yolov3_using_recipes.md
* Update yolov3_sparse_transfer_learning.md

4 files changed, +7 -7 lines changed

integrations/ultralytics-yolov3/README.md

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@ The techniques include, but are not limited to:
 ## Highlights
 
 - Blog: [YOLOv3 on CPUs: Sparsifying to Achieve GPU-Level Performance](https://neuralmagic.com/blog/benchmark-yolov3-on-cpus-with-deepsparse/)
-- Example: [DeepSparse YOLOv3 Inference Example](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3)
+- Example: [DeepSparse YOLOv3 Inference Example](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo)
 - Video: [DeepSparse YOLOv3 Pruned Quantized Performance](https://youtu.be/o5qIYs47MPw)
 
 ## Tutorials
@@ -124,4 +124,4 @@ python models/export.py --weights PATH/TO/weights.pt --img-size 512 512
 ```
 
 The DeepSparse Engine accepts ONNX formats and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
-Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3).
+Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo).

integrations/ultralytics-yolov3/tutorials/sparsifying_yolov3_using_recipes.md

Lines changed: 2 additions & 2 deletions
@@ -236,12 +236,12 @@ The result is a new file added next to the sparsified checkpoint with a `.onnx`
 
 2. Now you can run the `.onnx` file through a compression algorithm to reduce its deployment size and run it in ONNX-compatible inference engines such as [DeepSparse](https://github.com/neuralmagic/deepsparse).
 
-The DeepSparse Engine is explicitly coded to support running sparsified models for significant improvements in inference performance. An example for benchmarking and deploying YOLOv3 models with DeepSparse can be found [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3).
+The DeepSparse Engine is explicitly coded to support running sparsified models for significant improvements in inference performance. An example for benchmarking and deploying YOLOv3 models with DeepSparse can be found [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo).
 
 ## Wrap-Up
 
 Neural Magic recipes simplify the sparsification process by encoding the hyperparameters and instructions needed to create highly accurate pruned and pruned-quantized YOLOv3 models. In this tutorial, you created a pre-trained model to establish a baseline, applied a Neural Magic recipe for sparsification, and exported to ONNX to run through an inference engine.
 
-Now, refer [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3) for an example for benchmarking and deploying YOLOv3 models with DeepSparse.
+Now, refer [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo) for an example for benchmarking and deploying YOLOv3 models with DeepSparse.
 
 For Neural Magic Support, sign up or log in to get help with your questions in our **Tutorials channel**: [Discourse Forum](https://discuss.neuralmagic.com/) and/or [Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).

integrations/ultralytics-yolov3/tutorials/yolov3_sparse_transfer_learning.md

Lines changed: 2 additions & 2 deletions
@@ -207,13 +207,13 @@ The [export.py script](https://github.com/neuralmagic/yolov3/blob/master/models/
 ```
 2) Now you can run the `.onnx` file through a compression algorithm to reduce its deployment size and run it in ONNX-compatible inference engines such as [DeepSparse](https://github.com/neuralmagic/deepsparse).
 The DeepSparse Engine is explicitly coded to support running sparsified models for significant improvements in inference performance.
-An example for benchmarking and deploying YOLOv3 models with DeepSparse can be found [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3).
+An example for benchmarking and deploying YOLOv3 models with DeepSparse can be found [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo).
 
 ## Wrap-Up
 
 Neural Magic sparse models and recipes simplify the sparsification process by enabling sparse transfer learning to create highly accurate pruned and pruned-quantized YOLOv3 models.
 In this tutorial, you downloaded a pre-sparsified model, applied a Neural Magic recipe for sparse transfer learning, and exported to ONNX to run through an inference engine.
 
-Now, refer [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3) for an example for benchmarking and deploying YOLOv3 models with DeepSparse.
+Now, refer [here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo) for an example for benchmarking and deploying YOLOv3 models with DeepSparse.
 
 For Neural Magic Support, sign up or log in to get help with your questions in our Tutorials channel: [Discourse Forum](https://discuss.neuralmagic.com/) and/or [Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).

integrations/ultralytics-yolov5/README.md

Lines changed: 1 addition & 1 deletion
@@ -88,4 +88,4 @@ python models/export.py --weights PATH/TO/weights.pt --img-size 512 512
 ```
 
 The DeepSparse Engine accepts ONNX formats and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
-Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolov3).
+Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse/tree/main/examples/ultralytics-yolo).
