## Overview
SparseML is a toolkit that includes APIs, CLIs, scripts, and libraries that apply state-of-the-art optimization algorithms such as [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to any neural network. General, recipe-driven approaches built around these optimizations simplify creating faster and smaller models for the ML performance community at large.

SparseML is currently integrated for easy model optimizations within the [PyTorch](https://pytorch.org/), [Keras](https://keras.io/), and [TensorFlow V1](http://tensorflow.org/) ecosystems. Related Neural Magic projects include:

- [DeepSparse](https://github.com/neuralmagic/deepsparse): CPU inference engine that delivers unprecedented performance for sparse models
- [SparseZoo](https://github.com/neuralmagic/sparsezoo): Neural network model repository for highly sparse models and optimization recipes
- [Sparsify](https://github.com/neuralmagic/sparsify): Easy-to-use autoML interface to optimize deep neural networks for better inference performance and a smaller footprint
## Quick Tour
To enable flexibility, ease of use, and repeatability, optimizing a model is generally done using a recipe file. The files encode the instructions needed for modifying the model and/or training process as a list of modifiers. Example modifiers can be anything from setting the learning rate for the optimizer to gradual magnitude pruning. The files are written in [YAML](https://yaml.org/) and stored in YAML or [markdown](https://www.markdownguide.org/) files using [YAML front matter](https://assemble.io/docs/YAML-front-matter.html). The rest of the SparseML system is coded to parse the recipe files into a native format for the desired framework and apply the modifications to the model and training pipeline.

A sample recipe for pruning a model generally looks like the following:
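(the YAML below is a hedged sketch rather than the repository's exact sample; it assumes the common `EpochRangeModifier` and `GMPruningModifier` modifiers, whose exact fields may vary by version)

```yaml
# illustrative pruning recipe; modifier names and parameters are assumptions,
# not copied verbatim from the SparseML docs
modifiers:
    - !EpochRangeModifier
        start_epoch: 0.0
        end_epoch: 10.0

    - !GMPruningModifier
        params: __ALL__
        init_sparsity: 0.05
        final_sparsity: 0.85
        start_epoch: 1.0
        end_epoch: 8.0
        update_frequency: 0.5
```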
More information on the available recipes, formats, and arguments can be found [here](https://github.com/neuralmagic/sparseml/blob/main/docs/optimization-recipes.md). Additionally, all code implementations of the modifiers under the `optim` packages for the frameworks are documented with example YAML formats.

Pre-configured recipes and the resulting models can be explored and downloaded from the [SparseZoo](https://github.com/neuralmagic/sparsezoo). Also, [Sparsify](https://github.com/neuralmagic/sparsify) enables autoML style creation of optimization recipes for use with SparseML.

For a more in-depth read, check out [SparseML documentation](https://docs.neuralmagic.com/sparseml/).
### PyTorch Optimization
The PyTorch optimization libraries are located under the `sparseml.pytorch.optim` package. Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into PyTorch training pipelines.

The integration is done using the `ScheduledOptimizer` class. It is intended to wrap your current optimizer and its step function. The step function then calls into the `ScheduledModifierManager` class, which can be created from a recipe file. With this setup, the training process can then be modified as desired to optimize the model.

To enable all of this, the integration code you'll need to write is only a handful of lines:
```python
from sparseml.pytorch.optim import ScheduledModifierManager, ScheduledOptimizer
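# The lines below are a hedged sketch of the remaining wiring (not the full
# original snippet); `model`, `optimizer`, and `steps_per_epoch` are placeholders
# for objects already defined in your own training pipeline.
model = ...            # your torch.nn.Module
optimizer = ...        # your torch.optim.Optimizer
steps_per_epoch = ...  # number of training batches per epoch

manager = ScheduledModifierManager.from_yaml("/PATH/TO/recipe.yaml")
optimizer = ScheduledOptimizer(optimizer, model, manager, steps_per_epoch=steps_per_epoch)
# Train as usual: optimizer.step() now also applies the recipe's modifiers.
```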
### TensorFlow V1 Optimization

The TensorFlow optimization libraries for TensorFlow version 1.X are located under the `sparseml.tensorflow_v1.optim` package. Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.

The integration is done using the `ScheduledModifierManager` class, which can be created from a recipe file. This class handles modifying the TensorFlow graph for the desired optimizations. With this setup, the training process can then be modified as desired to optimize the model.
#### Estimator-based pipelines
Estimator-based pipelines are simpler to integrate with than session-based pipelines. The `ScheduledModifierManager` can override the necessary callbacks in the estimator to modify the graph using the `modify_estimator` function.
```python
from sparseml.tensorflow_v1.optim import ScheduledModifierManager
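# The lines below are a hedged sketch of the remaining wiring (not the full
# original snippet); `estimator` and `steps_per_epoch` are placeholders for your
# own pipeline objects, and the exact `modify_estimator` arguments may differ
# between SparseML versions.
estimator = ...        # your tf.estimator.Estimator
steps_per_epoch = ...  # number of training batches per epoch

manager = ScheduledModifierManager.from_yaml("/PATH/TO/recipe.yaml")
manager.modify_estimator(estimator, steps_per_epoch=steps_per_epoch)
```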
#### Session-based pipelines

Session-based pipelines need a little more work than estimator-based pipelines; however, the integration is still designed to require only a few lines of code. After graph creation, the manager's `create_ops` method must be called.
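As a rough sketch (assuming `create_ops` returns the modifier ops and their extras; the graph construction, `steps_per_epoch`, and training loop below are placeholders for your own code):

```python
from sparseml.tensorflow_v1.optim import ScheduledModifierManager
from sparseml.tensorflow_v1.utils import tf_compat  # assumed location of the TF1 compatibility helpers

with tf_compat.Graph().as_default() as graph:
    # ... build your model and training ops here ...
    steps_per_epoch = ...  # number of training batches per epoch

    manager = ScheduledModifierManager.from_yaml("/PATH/TO/recipe.yaml")
    mod_ops, mod_extras = manager.create_ops(steps_per_epoch=steps_per_epoch)

    with tf_compat.Session(graph=graph) as sess:
        sess.run(tf_compat.global_variables_initializer())
        # run `mod_ops` once per training step so the recipe's modifiers are applied
```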
### Exporting to ONNX
236
216
237
-
[ONNX](https://onnx.ai/) is a generic representation for neural network graphs that
238
-
most ML frameworks can be converted to.
239
-
Some inference engines such as [DeepSparse](https://github.com/neuralmagic/deepsparse)
240
-
natively take in ONNX for deployment pipelines,
241
-
so convenience functions for conversion and export are provided for the supported frameworks.
217
+
[ONNX](https://onnx.ai/) is a generic representation for neural network graphs that most ML frameworks can be converted to. Some inference engines such as [DeepSparse](https://github.com/neuralmagic/deepsparse) natively take in ONNX for deployment pipelines, so convenience functions for conversion and export are provided for the supported frameworks.
#### Exporting PyTorch to ONNX
ONNX is built into the PyTorch system natively.
The `ModuleExporter` class under the `sparseml.pytorch.utils` package features an `export_onnx` function built on top of this native support.
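For instance, a minimal sketch of an export call (the model, output directory, and sample-batch shape below are illustrative placeholders, not values from the original example):

```python
import torch
from sparseml.pytorch.utils import ModuleExporter

model = ...  # your trained torch.nn.Module

# output_dir and the sample batch shape are placeholders for your own values
exporter = ModuleExporter(model, output_dir="onnx-export")
exporter.export_onnx(sample_batch=torch.randn(1, 3, 224, 224))
```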
#### Exporting Keras to ONNX

ONNX is not built into the Keras system, but is supported through an ONNX official tool, [keras2onnx](https://github.com/onnx/keras-onnx). The `ModelExporter` class under the `sparseml.keras.utils` package features an `export_onnx` function built on top of keras2onnx.

Example code:
```python
import os
from sparseml.keras.utils import ModelExporter
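# The middle of this example is a hedged sketch (the original lines are not
# reproduced here); `model` and the output directory are placeholders for your own values.
model = ...  # your trained keras model
exporter = ModelExporter(model, output_dir=os.path.join(".", "onnx-export"))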
exporter.export_onnx()
```
#### Exporting TensorFlow V1 to ONNX
ONNX is not built into the TensorFlow system, but it is supported through an ONNX official tool, [tf2onnx](https://github.com/onnx/tensorflow-onnx).

## Installation

This repository is tested on Python 3.6+ and Linux/Debian systems. It is recommended to install in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep your system in order.

Install with pip using:
```bash
pip install sparseml
```
Then if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparseml/blob/main/scripts/), [notebooks](https://github.com/neuralmagic/sparseml/blob/main/notebooks/), or [examples](https://github.com/neuralmagic/sparseml/blob/main/examples/), clone the repository and install any additional dependencies as required.
#### Supported Framework Versions
The currently supported framework versions are:
- PyTorch supported versions: `>= 1.1.0, < 1.7.0`
- Keras supported versions: `2.3.0-tf` (through the TensorFlow `2.2` package; as of Feb 1st, 2021, `keras2onnx` has not been tested for TensorFlow >= `2.3`).
- TensorFlow V1 supported versions: >= `1.8.0` (TensorFlow >= `2.X` is not currently supported)
#### Optional Dependencies
Additionally, optional dependencies can be installed based on the framework you are using.
## Contributing

We appreciate contributions to the code, examples, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparseml/blob/main/CONTRIBUTING.md).
## Join the Community
For user help or questions about SparseML, use our [GitHub Discussions](https://www.github.com/neuralmagic/sparseml/discussions/). Everyone is welcome!

You can get the latest news, webinar and event invites, research papers, and other ML Performance tidbits by [subscribing](https://neuralmagic.com/subscribe/) to the Neural Magic community.

For more general questions about Neural Magic, please email us at [learnmore@neuralmagic.com](mailto:learnmore@neuralmagic.com) or fill out this [form](http://neuralmagic.com/contact/).
## License
The project is licensed under the [Apache License Version 2.0](https://github.com/neuralmagic/sparseml/blob/main/LICENSE).