This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit fc8baa0

add note about handling OOM errors in examples (#53)
1 parent c823118

File tree

4 files changed (+12 −4 lines)


examples/pytorch-torchvision/pruning.ipynb

Lines changed: 3 additions & 1 deletion
@@ -251,7 +251,9 @@
 "\n",
 "You can create SparseML recipes to perform various model pruning schedules, quantization aware training, sparse transfer learning, and more. If you are using a different model than the default, you will have to modify the recipe YAML file to target the new model's parameters.\n",
 "\n",
-"Finally, using the wrapped optimizer object, you will call the training function to prune your model."
+"Finally, using the wrapped optimizer object, you will call the training function to prune your model.\n",
+"\n",
+"If the kernel shuts down during training, this may be an out-of-memory error; to resolve it, try lowering the `batch_size` in the cell above."
 ]
 },
 {
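The note added here can be illustrated with a small, framework-free sketch of the fallback pattern it describes: retry training with a smaller batch size after each out-of-memory failure. `train_with_fallback` and `fake_training` are hypothetical stand-ins for the notebook's training function; a real PyTorch run would catch `torch.cuda.OutOfMemoryError` rather than the built-in `MemoryError` used here.

```python
# Hypothetical sketch: retry training with a smaller batch size after an
# out-of-memory failure, as the added note suggests. In a real PyTorch run
# you would catch torch.cuda.OutOfMemoryError; everything here is a stand-in.
def train_with_fallback(run_training, batch_size, min_batch_size=1):
    """Halve batch_size after each OOM until training succeeds."""
    while batch_size >= min_batch_size:
        try:
            return run_training(batch_size), batch_size
        except MemoryError:
            batch_size //= 2  # lower the batch size and try again
    raise MemoryError("training failed even at the minimum batch size")

# Simulated training function that only "fits in memory" below 64 samples.
def fake_training(batch_size):
    if batch_size >= 64:
        raise MemoryError("out of memory")
    return "trained"

result, used_batch_size = train_with_fallback(fake_training, batch_size=256)
# result == "trained", used_batch_size == 32 (256 -> 128 -> 64 all failed)
```

In an interactive notebook the simpler manual fix in the note (edit `batch_size` in the cell above and re-run) is usually enough; automating the retry mainly helps in scripted runs.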

notebooks/pytorch_classification.ipynb

Lines changed: 3 additions & 1 deletion
@@ -200,7 +200,9 @@
 "\n",
 "You can create SparseML recipes to perform various model pruning schedules, quantization aware training, sparse transfer learning, and more. If you are using a different model than the default, you will have to modify the recipe YAML file to target the new model's parameters.\n",
 "\n",
-"Finally, using the wrapped optimizer object, you will call the training function to prune your model."
+"Finally, using the wrapped optimizer object, you will call the training function to prune your model.\n",
+"\n",
+"If the kernel shuts down during training, this may be an out-of-memory error; to resolve it, try lowering the `batch_size` in the cell above."
 ]
 },
 {

notebooks/pytorch_detection.ipynb

Lines changed: 3 additions & 1 deletion
@@ -219,7 +219,9 @@
 "\n",
 "You can create SparseML recipes to perform various model pruning schedules, quantization aware training, sparse transfer learning, and more. If you are using a different model than the default, you will have to modify the recipe YAML file to target the new model's parameters.\n",
 "\n",
-"Finally, using the wrapped optimizer object, you will call the training function to prune your model."
+"Finally, using the wrapped optimizer object, you will call the training function to prune your model.\n",
+"\n",
+"If the kernel shuts down during training, this may be an out-of-memory error; to resolve it, try lowering the `batch_size` in the cell above."
 ]
 },
 {

notebooks/tensorflow_v1_classification.ipynb

Lines changed: 3 additions & 1 deletion
@@ -198,7 +198,9 @@
 "## Step 4 - Prune your model using a TensorFlow training loop\n",
 "SparseML can plug directly into your existing TensorFlow training flow by creating additional operators to run. To demonstrate this, the cell below prunes the model using a standard TensorFlow training loop while also running the operators created by the manager object. To prune your own models with SparseML, you can use your existing training flow with the additional operators added.\n",
 "\n",
-"For your convenience, the lines needed for integrating with SparseML are preceded by large comment blocks."
+"For your convenience, the lines needed for integrating with SparseML are preceded by large comment blocks.\n",
+"\n",
+"If the kernel shuts down during training, this may be an out-of-memory error; to resolve it, try lowering the `BATCH_SIZE` in Step 2 above."
 ]
 },
 {
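The Step 4 text describes a standard training loop that additionally runs the operators created by the manager object each step. That interleaving pattern can be sketched without TensorFlow; everything below (`run_training_loop`, the dummy `train_step`, the modifier callbacks) is a hypothetical stand-in for the notebook's actual loop and SparseML's manager-created ops.

```python
# Hypothetical, framework-free sketch of the Step 4 pattern: a normal
# training loop that also runs extra "modifier" operators each step, the
# way SparseML's manager-created ops run alongside TensorFlow training ops.
def run_training_loop(num_steps, train_step, modifier_ops):
    losses = []
    for step in range(num_steps):
        losses.append(train_step(step))
        ################################################################
        # SparseML integration point: run the extra operators created by
        # the manager object after each regular training step.
        ################################################################
        for op in modifier_ops:
            op(step)
    return losses

applied_steps = []
losses = run_training_loop(
    num_steps=3,
    train_step=lambda step: 1.0 / (step + 1),  # dummy decreasing loss
    modifier_ops=[lambda step: applied_steps.append(step)],
)
# The modifier ran once per step: applied_steps == [0, 1, 2]
```

The large comment block mirrors the notebook's convention of marking SparseML integration lines; in the real TensorFlow cell those lines execute the manager's operators inside the session loop rather than plain Python callbacks.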
