* Update annotate.py
Minor edits, thanks.
* Update annotate.py
Add matching edits at the top of the file to align with those below.
* Fix for integrations/timm checkpoint path (#198)
This PR fixes issue #197
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
* Fix steps_per_epoch calculation (#201)
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
* YOLO webcam example - add assert for webcam load (#202)
* YOLO webcam example - add assert for webcam load (a sketch of this check follows the commit list)
* update readme to note other options for annotate
* formatting
Co-authored-by: Eldar Kurtic <eldar.ciki@gmail.com>
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
Co-authored-by: Mark Kurtz <mark@neuralmagic.com>
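Commit #202 above adds an assert guarding the webcam load in the YOLO example. A minimal sketch of that kind of guard, assuming the example uses OpenCV; the variable names and messages here are illustrative, not the repository's actual code:

```python
import cv2

# Open the default webcam (device index 0).
cap = cv2.VideoCapture(0)

# Fail fast with a clear error if the camera could not be opened,
# instead of silently reading empty frames later in the pipeline.
assert cap.isOpened(), "Could not open webcam at device index 0"

ret, frame = cap.read()
assert ret, "Webcam opened but failed to return a frame"

cap.release()
```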
File changed: integrations/transformers/run_distill_qa.py (24 additions, 23 deletions)
@@ -19,8 +19,8 @@
 # limitations under the License.

 """
-Example script for integrating spaseml with the transformers library to perform model distillation.
-This script is addopted from hugging face's implementation for Question Answering on the SQUAD Dataset.
+Example script for integrating spaseml with the transformers library to perform model distillation.
+This script is addopted from hugging face's implementation for Question Answering on the SQUAD Dataset.
 Hugging Face's original implementation is regularly updated and can be found at https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py
 This script will:
 - Load transformer based models
@@ -54,12 +54,12 @@
 [--onnx_export_path] \
 [--layers_to_keep] \

-Train, prune, and evaluate a transformer base question answering model on squad.
+Train, prune, and evaluate a transformer base question answering model on squad.
 -h, --help  show this help message and exit
 --teacher_model_name_or_path  The name or path of model which will be used for distilation.
     Note, this model needs to be trained for QA task already.
 --student_model_name_or_path  The name or path of the model wich will be trained using distilation.
---temperature  Hyperparameter which controls model distilation
+--temperature  Hyperparameter which controls model distilation
 --distill_hardness  Hyperparameter which controls how much of the loss comes from teacher vs training labels
 --model_name_or_path  The path to the transformers model you wish to train
     or the name of the pretrained language model you wish
@@ -72,21 +72,21 @@
     or not. Default is false.
 --do_eval  Boolean denoting if the model should be evaluated
     or not. Default is false.
---per_device_train_batch_size  Size of each training batch based on samples per GPU.
+--per_device_train_batch_size  Size of each training batch based on samples per GPU.
     12 will fit in a 11gb GPU, 16 in a 16gb.
---per_device_eval_batch_size  Size of each training batch based on samples per GPU.
+--per_device_eval_batch_size  Size of each training batch based on samples per GPU.
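The `--temperature` and `--distill_hardness` flags documented in this diff describe a blended distillation loss. Below is a minimal sketch of the standard formulation those descriptions imply; the function name and exact weighting are assumptions for illustration, not the script's actual implementation (QA models emit separate start/end logits, which this sketch ignores for brevity):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, distill_hardness=0.5):
    """Blend teacher-distillation loss with the ordinary label loss.

    distill_hardness controls how much of the total loss comes from the
    teacher versus the training labels; temperature softens both
    distributions before comparing them. Illustrative sketch only.
    """
    # Soft targets: KL divergence between temperature-scaled distributions.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard targets: standard cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return distill_hardness * soft_loss + (1.0 - distill_hardness) * hard_loss
```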
File changed: integrations/transformers/run_qa.py (23 additions, 22 deletions)
@@ -19,8 +19,8 @@
 # limitations under the License.

 """
-Example script for integrating spaseml with the transformers library.
-This script is addopted from hugging face's implementation for Question Answering on the SQUAD Dataset.
+Example script for integrating spaseml with the transformers library.
+This script is addopted from hugging face's implementation for Question Answering on the SQUAD Dataset.
 Hugging Face's original implementation is regularly updated and can be found at https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py
 This script will:
 - Load transformer based modesl
@@ -50,7 +50,7 @@
 [--do_onnx_export]
 [--onnx_export_path]

-Train, prune, and evaluate a transformer base question answering model on squad.
+Train, prune, and evaluate a transformer base question answering model on squad.
 -h, --help  show this help message and exit
 --model_name_or_path MODEL  The path to the transformers model you wish to train
     or the name of the pretrained language model you wish
@@ -63,21 +63,21 @@
     or not. Default is false.
 --do_eval  Boolean denoting if the model should be evaluated
     or not. Default is false.
---per_device_train_batch_size  Size of each training batch based on samples per GPU.
+--per_device_train_batch_size  Size of each training batch based on samples per GPU.
     12 will fit in a 11gb GPU, 16 in a 16gb.
---per_device_eval_batch_size  Size of each training batch based on samples per GPU.
+--per_device_eval_batch_size  Size of each training batch based on samples per GPU.
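For reference, an illustrative invocation of run_qa.py assembled from the flags documented above; the model name, batch sizes, and export path are hypothetical, with the batch size of 12 following the help text's 11GB-GPU guidance:

```bash
python integrations/transformers/run_qa.py \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 12 \
    --per_device_eval_batch_size 12 \
    --do_onnx_export \
    --onnx_export_path ./qa_model.onnx
```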