This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit 2a42c43

YOLOv5 Sparse Transfer Learning Tutorial and Recipes (#335)
* YOLOv5 recipes and tutorials
* fix for reloading set lr modifier
* add model disk size info to yolov5 apply a recipe tutorial table
* update descriptions for recipes
* Update integrations/ultralytics-yolov5/tutorials/sparsifying_yolov5_using_recipes.md (seven review-suggestion commits)
* minor updates and remove transfer learning to break that out into a separate diff
* updates from review
* run make style
* YOLOv5 Sparse Transfer Learning Tutorial and Recipes
* update sparse tl tutorial with more notes for which command to run
* Update integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md
* Update integrations/ultralytics-yolov5/recipes/yolov5.transfer_learn_pruned_quantized.md
* Update integrations/ultralytics-yolov5/tutorials/yolov5_sparse_transfer_learning.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
1 parent 0f30713 commit 2a42c43

3 files changed: +421 additions, -0 deletions
Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
<!--
Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

---
# General Epoch/LR Hyperparams
num_epochs: &num_epochs 50
init_lr: &init_lr 0.0032
final_lr: &final_lr 0.000384
warmup_epochs: &warmup_epochs 2
weights_warmup_lr: &weights_warmup_lr 0
biases_warmup_lr: &biases_warmup_lr 0.05

# modifiers
training_modifiers:
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: *num_epochs

  - !LearningRateFunctionModifier
    start_epoch: *warmup_epochs
    end_epoch: *num_epochs
    lr_func: cosine
    init_lr: *init_lr
    final_lr: *final_lr

  - !LearningRateFunctionModifier
    start_epoch: 0
    end_epoch: *warmup_epochs
    lr_func: linear
    init_lr: *weights_warmup_lr
    final_lr: *init_lr
    param_groups: [0, 1]

  - !LearningRateFunctionModifier
    start_epoch: 0
    end_epoch: *warmup_epochs
    lr_func: linear
    init_lr: *biases_warmup_lr
    final_lr: *init_lr
    param_groups: [2]

pruning_modifiers:
  - !ConstantPruningModifier
    start_epoch: 0.0
    params: __ALL_PRUNABLE__
---
# YOLOv5 Pruned Transfer Learning

This recipe transfer learns from a sparse [YOLOv5](https://github.com/ultralytics/yolov5) model.
It has been tested with the s and l versions on the VOC dataset using the [SparseML integration with ultralytics/yolov5](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov5).

When running, adjust the hyperparameters above to your training environment and dataset.
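For reference, the front matter encodes a two-epoch linear warmup followed by cosine decay from `init_lr` to `final_lr`. Assuming `lr_func: cosine` follows the standard cosine-annealing form (a sketch of the intent, not necessarily SparseML's exact implementation), the post-warmup learning rate at epoch t is:

```latex
\eta(t) = \eta_{\text{final}} + \tfrac{1}{2}\bigl(\eta_{\text{init}} - \eta_{\text{final}}\bigr)
\left(1 + \cos\!\left(\pi \,\frac{t - t_{\text{warmup}}}{T - t_{\text{warmup}}}\right)\right),
\qquad t \in [t_{\text{warmup}},\, T]
```

with `init_lr` = 0.0032, `final_lr` = 0.000384, warmup ending at epoch 2, and `num_epochs` = 50, so the rate falls smoothly from 0.0032 at epoch 2 to 0.000384 at epoch 50.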
## Weights and Biases

The training results for this recipe are made available through Weights and Biases for easy viewing.

- [YOLOv5 VOC Transfer Learning](https://wandb.ai/neuralmagic/yolov5-voc-sparse-transfer-learning)

## Training

To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/README.md).
Using the given training script from the `yolov5` directory, the following command can be used to launch this recipe.
Adjust the script command for your GPU device setup; Ultralytics supports both DataParallel and DDP.
Finally, the sparse weights used with this recipe are stored in the SparseZoo and can be retrieved by passing a SparseZoo stub to the `--weights` argument.

*script command:*

```bash
python train.py \
  --data voc.yaml \
  --cfg ../models/yolov5s.yaml \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned-aggressive_96 \
  --hyp data/hyp.finetune.yaml \
  --recipe ../recipes/yolov5s.transfer_learn_pruned.md
```
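Under the hood, the integration's `train.py` hands the `--recipe` file to SparseML, which schedules the modifiers above over the training run. As a quick sanity check outside of training, the recipe can be parsed on its own; a minimal sketch, assuming SparseML is installed and reusing the recipe path from the command above:

```python
# Minimal sketch: parse the recipe and list its scheduled modifiers.
# The path mirrors the --recipe argument above; adjust for your checkout.
from sparseml.pytorch.optim import ScheduledModifierManager

manager = ScheduledModifierManager.from_yaml(
    "../recipes/yolov5s.transfer_learn_pruned.md"
)
for modifier in manager.modifiers:
    # Expect EpochRangeModifier, three LearningRateFunctionModifiers,
    # and ConstantPruningModifier, matching the front matter above.
    print(type(modifier).__name__, "starts at epoch", modifier.start_epoch)
```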
Lines changed: 104 additions & 0 deletions
@@ -0,0 +1,104 @@
<!--
Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

---
# General Epoch/LR Hyperparams
num_epochs: &num_epochs 52
init_lr: &init_lr 0.0032
final_lr: &final_lr 0.000384
warmup_epochs: &warmup_epochs 2
weights_warmup_lr: &weights_warmup_lr 0
biases_warmup_lr: &biases_warmup_lr 0.05
quantization_lr: &quantization_lr 0.000002

# Quantization Params
quantization_start_epoch: &quantization_start_epoch 50

# modifiers
training_modifiers:
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: *num_epochs

  - !LearningRateFunctionModifier
    start_epoch: *warmup_epochs
    end_epoch: *num_epochs
    lr_func: cosine
    init_lr: *init_lr
    final_lr: *final_lr

  - !LearningRateFunctionModifier
    start_epoch: 0
    end_epoch: *warmup_epochs
    lr_func: linear
    init_lr: *weights_warmup_lr
    final_lr: *init_lr
    param_groups: [0, 1]

  - !LearningRateFunctionModifier
    start_epoch: 0
    end_epoch: *warmup_epochs
    lr_func: linear
    init_lr: *biases_warmup_lr
    final_lr: *init_lr
    param_groups: [2]

  - !SetLearningRateModifier
    start_epoch: *quantization_start_epoch
    learning_rate: *quantization_lr

pruning_modifiers:
  - !ConstantPruningModifier
    start_epoch: 0.0
    params: __ALL_PRUNABLE__

quantization_modifiers:
  - !QuantizationModifier
    start_epoch: *quantization_start_epoch
    submodules: [ 'model.0', 'model.1', 'model.2', 'model.3', 'model.4', 'model.5', 'model.6', 'model.7', 'model.8', 'model.9', 'model.10', 'model.11', 'model.12', 'model.13', 'model.14', 'model.15', 'model.16', 'model.17', 'model.18', 'model.19', 'model.20', 'model.21', 'model.22', 'model.23' ]
---
# YOLOv5 Pruned-Quantized Transfer Learning

This recipe transfer learns from a sparse [YOLOv5](https://github.com/ultralytics/yolov5) model and then quantizes it.
It has been tested with the s and l versions on the VOC dataset using the [SparseML integration with ultralytics/yolov5](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov5).

When running, adjust the hyperparameters above to your training environment and dataset.
## Weights and Biases

The training results for this recipe are made available through Weights and Biases for easy viewing.

- [YOLOv5 VOC Transfer Learning](https://wandb.ai/neuralmagic/yolov5-voc-sparse-transfer-learning)

## Training

To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov5/README.md).
Using the given training script from the `yolov5` directory, the following command can be used to launch this recipe.
Adjust the script command for your GPU device setup; Ultralytics supports both DataParallel and DDP.
Finally, the sparse weights used with this recipe are stored in the SparseZoo and can be retrieved by passing a SparseZoo stub to the `--weights` argument.

*script command:*

```bash
python train.py \
  --data voc.yaml \
  --cfg ../models/yolov5s.yaml \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94 \
  --hyp data/hyp.finetune.yaml \
  --recipe ../recipes/yolov5s.transfer_learn_pruned_quantized.md
```
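To make the pruned-quantized schedule concrete, here is a small, self-contained sketch of the learning rate this recipe requests at each epoch: linear warmup over epochs 0 to 2, cosine decay toward epoch 52, and the pinned quantization-aware-training rate from epoch 50 on. The values come straight from the front matter above; the function name is hypothetical, and it assumes the standard linear/cosine forms and that `SetLearningRateModifier` overrides the cosine schedule once active.

```python
import math

# Values from the recipe front matter above.
NUM_EPOCHS = 52
INIT_LR = 0.0032
FINAL_LR = 0.000384
WARMUP_EPOCHS = 2
QUANT_START_EPOCH = 50
QUANT_LR = 0.000002


def weights_lr(epoch: float) -> float:
    """Sketch of the LR for the weight param groups (warmup starts from 0).

    Assumes lr_func 'linear' and 'cosine' follow the standard forms and that
    SetLearningRateModifier takes precedence once the QAT phase begins.
    """
    if epoch >= QUANT_START_EPOCH:
        # QAT phase: LR pinned low while QuantizationModifier runs.
        return QUANT_LR
    if epoch < WARMUP_EPOCHS:
        # Linear warmup from 0 up to init_lr.
        return INIT_LR * epoch / WARMUP_EPOCHS
    # Cosine decay from init_lr toward final_lr.
    progress = (epoch - WARMUP_EPOCHS) / (NUM_EPOCHS - WARMUP_EPOCHS)
    return FINAL_LR + 0.5 * (INIT_LR - FINAL_LR) * (1 + math.cos(math.pi * progress))


for e in (0, 1, 2, 26, 49, 50, 52):
    print(f"epoch {e:>2}: lr = {weights_lr(e):.6f}")
```

Note that the biases param group (`param_groups: [2]`) instead warms up from 0.05 down to `init_lr`, per the third `LearningRateFunctionModifier` in the front matter.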
