This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit dddfe6d

readme updates (#47)
- Edited in absolute links
- Removed the "comingsoon" repo from certain placeholders; one remains on purpose for post-launch
- Fixed markdown errors
1 parent 3e874ad commit dddfe6d

File tree

1 file changed (+46, -70 lines)


README.md

Lines changed: 46 additions & 70 deletions
@@ -14,12 +14,12 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-# ![icon for SparseMl](docs/icon-sparseml.png) SparseML
+# ![icon for SparseMl](https://github.com/neuralmagic/sparseml/blob/main/docs/icon-sparseml.png) SparseML
 
 ### Libraries for state-of-the-art deep neural network optimization algorithms, enabling simple pipelines integration with a few lines of code
 
 <p>
-    <a href="https://github.com/neuralmagic/comingsoon/blob/master/LICENSE">
+    <a href="https://github.com/neuralmagic/sparseml/blob/main/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/neuralmagic/comingsoon.svg?color=purple&style=for-the-badge" height=25>
    </a>
    <a href="https://docs.neuralmagic.com/sparseml/">
@@ -28,7 +28,7 @@ limitations under the License.
    <a href="https://github.com/neuralmagic/sparseml/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/neuralmagic/sparseml.svg?style=for-the-badge" height=25>
    </a>
-    <a href="https://github.com/neuralmagic.com/comingsoon/blob/master/CODE_OF_CONDUCT.md">
+    <a href="https://github.com/neuralmagic.com/sparseml/blob/main/CODE_OF_CONDUCT.md">
        <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg?color=yellow&style=for-the-badge" height=25>
    </a>
    <a href="https://www.youtube.com/channel/UCo8dO_WMGYbWCRnj_Dxr4EA">
@@ -44,37 +44,28 @@ limitations under the License.
 
 ## Overview
 
-SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art
-optimization algorithms such as [pruning](https://neuralmagic.com/blog/pruning-overview/) and
-[quantization](https://arxiv.org/abs/1609.07061) to any neural network.
-General, recipe-driven approaches built around these optimizations enable the simplification
-of creating faster and smaller models for the ML performance community at large.
+SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art optimization algorithms such as [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to any neural network. General, recipe-driven approaches built around these optimizations enable the simplification of creating faster and smaller models for the ML performance community at large.
 
 SparseML is integrated for easy model optimizations within the [PyTorch](https://pytorch.org/),
 [Keras](https://keras.io/), and [TensorFlow V1](http://tensorflow.org/) ecosystems currently.
 
 ### Related Products
 
-- [DeepSparse](https://github.com/neuralmagic/deepsparse):
-  CPU inference engine that delivers unprecedented performance for sparse models
-- [SparseZoo](https://github.com/neuralmagic/sparsezoo):
-  Neural network model repository for highly sparse models and optimization recipes
-- [Sparsify](https://github.com/neuralmagic/sparsify):
-  Easy-to-use autoML interface to optimize deep neural networks for
-  better inference performance and a smaller footprint
+- [DeepSparse](https://github.com/neuralmagic/deepsparse): CPU inference engine that delivers unprecedented performance for sparse models
+- [SparseZoo](https://github.com/neuralmagic/sparsezoo): Neural network model repository for highly sparse models and optimization recipes
+- [Sparsify](https://github.com/neuralmagic/sparsify): Easy-to-use autoML interface to optimize deep neural networks for better inference performance and a smaller footprint
 
 ## Quick Tour
 
 To enable flexibility, ease of use, and repeatability, optimizing a model is generally done using a recipe file.
 The files encode the instructions needed for modifying the model and/or training process as a list of modifiers.
 Example modifiers can be anything from setting the learning rate for the optimizer to gradual magnitude pruning.
-The files are written in [YAML](https://yaml.org/) and stored in YAML or
-[markdown](https://www.markdownguide.org/) files using
-[YAML front matter](https://assemble.io/docs/YAML-front-matter.html).
+The files are written in [YAML](https://yaml.org/) and stored in YAML or [markdown](https://www.markdownguide.org/) files using [YAML front matter](https://assemble.io/docs/YAML-front-matter.html).
 The rest of the SparseML system is coded to parse the recipe files into a native format for the desired framework
 and apply the modifications to the model and training pipeline.
 
 A sample recipe for pruning a model generally looks like the following:
+
 ```yaml
 version: 0.1.0
 modifiers:
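# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the body of the modifier list. A gradual magnitude pruning entry in
# a recipe of this shape typically looks roughly like the following; all
# values here are hypothetical placeholders.
#
#     - !GMPruningModifier
#         start_epoch: 0.0
#         end_epoch: 35.0
#         init_sparsity: 0.05
#         final_sparsity: 0.85
#         update_frequency: 1.0
#         params: ['sections.0.0.conv1.weight']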
@@ -100,29 +91,21 @@ modifiers:
     params: ['sections.0.0.conv1.weight', 'sections.0.0.conv2.weight', 'sections.0.0.conv3.weight']
 ```
 
-More information on the available recipes, formats, and arguments can be found [here](docs/optimization-recipes.md).
-Additionally all code implementations of the modifiers under the `optim` packages
-for the frameworks are documented with example YAML formats.
+More information on the available recipes, formats, and arguments can be found [here](https://github.com/neuralmagic/sparseml/blob/main/docs/optimization-recipes.md). Additionally, all code implementations of the modifiers under the `optim` packages for the frameworks are documented with example YAML formats.
 
-Pre-configured recipes and the resulting models can be explored and downloaded from the
-[SparseZoo](https://github.com/neuralmagic/sparsezoo).
-Also, [Sparsify](https://github.com/neuralmagic/sparsify)
-enables autoML style creation of optimization recipes for use with SparseML.
+Pre-configured recipes and the resulting models can be explored and downloaded from the [SparseZoo](https://github.com/neuralmagic/sparsezoo). Also, [Sparsify](https://github.com/neuralmagic/sparsify) enables autoML style creation of optimization recipes for use with SparseML.
 
 For a more in-depth read, check out [SparseML documentation](https://docs.neuralmagic.com/sparseml/).
 
 ### PyTorch Optimization
 
 The PyTorch optimization libraries are located under the `sparseml.pytorch.optim` package.
-Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into
-PyTorch training pipelines.
+Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into PyTorch training pipelines.
 
-The integration is done using the `ScheduledOptimizer` class.
-It is intended to wrap your current optimizer and its step function.
-The step function then calls into the `ScheduledModifierManager` class which can be created from a recipe file.
-With this setup, the training process can then be modified as desired to optimize the model.
+The integration is done using the `ScheduledOptimizer` class. It is intended to wrap your current optimizer and its step function. The step function then calls into the `ScheduledModifierManager` class which can be created from a recipe file. With this setup, the training process can then be modified as desired to optimize the model.
 
 To enable all of this, the integration code you'll need to write is only a handful of lines:
+
 ```python
 from sparseml.pytorch.optim import ScheduledModifierManager, ScheduledOptimizer
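# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the rest of this block. A minimal integration along the lines the
# README describes could look like the following; the model, recipe path,
# batch count, and the `from_yaml` constructor are assumptions.
import torch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
num_train_batches = 1000  # batches per epoch in your training loop

manager = ScheduledModifierManager.from_yaml("/path/to/recipe.yaml")
optimizer = ScheduledOptimizer(optimizer, model, manager, steps_per_epoch=num_train_batches)
# train as usual: optimizer.step() now also drives the scheduled modifiers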
@@ -139,18 +122,17 @@ optimizer = ScheduledOptimizer(optimizer, model, manager, steps_per_epoch=num_tr
 ### Keras Optimization
 
 The Keras optimization libraries are located under the `sparseml.keras.optim` package.
-Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into
-Keras training pipelines.
+Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into Keras training pipelines.
 
 The integration is done using the `ScheduledModifierManager` class which can be created from a recipe file.
 This class handles modifying the Keras objects for the desired optimizations using the `modify` method.
 The edited model, optimizer, and any callbacks necessary to modify the training process are returned.
-The model and optimizer can be used normally and the callbacks must be passed into
-the `fit` or `fit_generator` function.
+The model and optimizer can be used normally and the callbacks must be passed into the `fit` or `fit_generator` function.
 If using `train_on_batch`, the callbacks must be invoked after each call.
 After training is completed, call into the manager's `finalize` method to clean up the graph for exporting.
 
 To enable all of this, the integration code you'll need to write is only a handful of lines:
+
 ```python
 from sparseml.keras.optim import ScheduledModifierManager
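# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the rest of this block. Per the prose above, `modify` returns the
# edited model, optimizer, and callbacks; the model, recipe path, and exact
# keyword names below are assumptions, not verbatim from this README.
from tensorflow import keras

model = keras.Sequential([keras.layers.Input((28, 28)), keras.layers.Flatten(), keras.layers.Dense(10)])
optimizer = keras.optimizers.Adam()
num_train_batches = 1000  # hypothetical batches per epoch

manager = ScheduledModifierManager.from_yaml("/path/to/recipe.yaml")
model, optimizer, callbacks = manager.modify(model, optimizer, steps_per_epoch=num_train_batches)
model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, callbacks=callbacks)
# after training: save_model = manager.finalize(model)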
@@ -175,19 +157,16 @@ save_model = manager.finalize(model)
 
 ### TensorFlow V1 Optimization
 
-The TensorFlow optimization libraries for TensorFlow version 1.X are located under the
-`sparseml.tensorflow_v1.optim` package.
-Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into
-TensorFlow V1 training pipelines.
+The TensorFlow optimization libraries for TensorFlow version 1.X are located under the `sparseml.tensorflow_v1.optim` package. Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.
 
 The integration is done using the `ScheduledModifierManager` class which can be created from a recipe file.
 This class handles modifying the TensorFlow graph for the desired optimizations.
 With this setup, the training process can then be modified as desired to optimize the model.
 
 #### Estimator-based pipelines
+
 Estimator-based pipelines are simpler to integrate with as compared to session-based pipelines.
-The `ScheduledModifierManager` can override the necessary callbacks in the estimator to modify
-the graph using the `modify_estimator` function.
+The `ScheduledModifierManager` can override the necessary callbacks in the estimator to modify the graph using the `modify_estimator` function.
 
 ```python
 from sparseml.tensorflow_v1.optim import ScheduledModifierManager
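# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the rest of this block. The `modify_estimator` call is shown in the
# next hunk's context; the estimator construction, `my_model_fn`, and the
# recipe path here are hypothetical placeholders.
import tensorflow as tf

estimator = tf.estimator.Estimator(model_fn=my_model_fn)  # your model_fn
num_train_batches = 1000  # hypothetical batches per epoch

manager = ScheduledModifierManager.from_yaml("/path/to/recipe.yaml")
manager.modify_estimator(estimator, steps_per_epoch=num_train_batches)
# estimator.train(...) now runs with the recipe's modifiers applied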
@@ -202,6 +181,7 @@ manager.modify_estimator(estimator, steps_per_epoch=num_train_batches)
 ```
 
 #### Session-based pipelines
+
 Session-based pipelines need a little bit more as compared to estimator-based pipelines; however,
 it is still designed to require only a few lines of code for integration.
 After graph creation, the manager's `create_ops` method must be called.
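Beyond what this hunk shows, the elided portion of the README wires `create_ops` into the graph and session. As a rough, illustrative sketch only — the `(mod_ops, mod_extras)` return shape, the recipe path, and the session calls are assumptions, not verbatim from this commit:

```python
from sparseml.tensorflow_v1.optim import ScheduledModifierManager
from sparseml.tensorflow_v1.utils import tf_compat

num_train_batches = 1000  # hypothetical batches per epoch

with tf_compat.Graph().as_default() as graph:
    # ... build the model, loss, and training op here ...
    manager = ScheduledModifierManager.from_yaml("/path/to/recipe.yaml")
    mod_ops, mod_extras = manager.create_ops(steps_per_epoch=num_train_batches)

    with tf_compat.Session() as sess:
        sess.run(tf_compat.global_variables_initializer())
        # in the training loop, run the modifier ops alongside each train step:
        # sess.run(mod_ops)
```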
@@ -234,18 +214,14 @@ with tf_compat.Graph().as_default() as graph:
 
 ### Exporting to ONNX
 
-[ONNX](https://onnx.ai/) is a generic representation for neural network graphs that
-most ML frameworks can be converted to.
-Some inference engines such as [DeepSparse](https://github.com/neuralmagic/deepsparse)
-natively take in ONNX for deployment pipelines,
-so convenience functions for conversion and export are provided for the supported frameworks.
+[ONNX](https://onnx.ai/) is a generic representation for neural network graphs that most ML frameworks can be converted to. Some inference engines such as [DeepSparse](https://github.com/neuralmagic/deepsparse) natively take in ONNX for deployment pipelines, so convenience functions for conversion and export are provided for the supported frameworks.
 
 #### Exporting PyTorch to ONNX
 
 ONNX is built into the PyTorch system natively.
-The `ModuleExporter` class under the `sparseml.pytorch.utils` package features an
-`export_onnx` function built on top of this native support.
+The `ModuleExporter` class under the `sparseml.pytorch.utils` package features an `export_onnx` function built on top of this native support.
 Example code:
+
 ```python
 import os
 import torch
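# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the rest of this block. The export call appears in the next hunk's
# context; the model, output directory, and constructor keywords here are
# assumptions.
from sparseml.pytorch.utils import ModuleExporter

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
exporter = ModuleExporter(model, output_dir=os.path.join(".", "onnx-export"))
exporter.export_onnx(sample_batch=torch.randn(1, 1, 28, 28))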
@@ -258,11 +234,10 @@ exporter.export_onnx(sample_batch=torch.randn(1, 1, 28, 28))
 ```
 
 #### Exporting Keras to ONNX
-ONNX is not built into the Keras system, but is supported through an ONNX official tool
-[keras2onnx](https://github.com/onnx/keras-onnx).
-The `ModelExporter` class under the `sparseml.keras.utils` package features an
-`export_onnx` function built on top of keras2onnx.
+
+ONNX is not built into the Keras system, but is supported through an ONNX official tool [keras2onnx](https://github.com/onnx/keras-onnx). The `ModelExporter` class under the `sparseml.keras.utils` package features an `export_onnx` function built on top of keras2onnx.
 Example code:
+
 ```python
 import os
 from sparseml.keras.utils import ModelExporter
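# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the rest of this block. The parameterless `export_onnx()` call
# appears in the next hunk's context; the model, output directory, and
# constructor keywords are assumptions.
from tensorflow import keras

model = keras.Sequential([keras.layers.Input((28, 28)), keras.layers.Flatten(), keras.layers.Dense(10)])
exporter = ModelExporter(model, output_dir=os.path.join(".", "onnx-export"))
exporter.export_onnx()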
@@ -273,12 +248,14 @@ exporter.export_onnx()
 ```
 
 #### Exporting TensorFlow V1 to ONNX
+
 ONNX is not built into the TensorFlow system, but it is supported through an ONNX official tool
 [tf2onnx](https://github.com/onnx/tensorflow-onnx).
 The `GraphExporter` class under the `sparseml.tensorflow_v1.utils` package features an
 `export_onnx` function built on top of tf2onnx.
 Note that the ONNX file is created from the protobuf graph representation, so `export_pb` must be called first.
 Example code:
+
 ```python
 import os
 from sparseml.tensorflow_v1.utils import tf_compat, GraphExporter
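# --- Illustrative sketch, not part of this commit's diff: the hunk boundary
# elides the rest of this block. Per the prose above, `export_pb` must run
# before `export_onnx` (whose call appears in the next hunk's context); the
# graph setup, tensor names, and constructor keywords are assumptions.
input_names = ["inputs"]
output_names = ["outputs"]

exporter = GraphExporter(output_dir=os.path.join(".", "onnx-export"))
with tf_compat.Graph().as_default():
    # ... build or load the TensorFlow V1 graph and create a session here ...
    exporter.export_pb(outputs=output_names)  # protobuf graph representation first
exporter.export_onnx(inputs=input_names, outputs=output_names)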
@@ -304,39 +281,44 @@ exporter.export_onnx(inputs=input_names, outputs=output_names)
 ### Installation
 
 This repository is tested on Python 3.6+, and Linux/Debian systems.
-It is recommended to install in a [virtual environment](https://docs.python.org/3/library/venv.html)
-to keep your system in order.
+It is recommended to install in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep your system in order.
 
 Install with pip using:
 
 ```bash
 pip install sparseml
 ```
 
-Then if you would like to explore any of the [scripts](scripts/), [notebooks](notebooks/), or [examples](examples/)
+Then if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparseml/blob/main/scripts/), [notebooks](https://github.com/neuralmagic/sparseml/blob/main/notebooks/), or [examples](https://github.com/neuralmagic/sparseml/blob/main/examples/)
 clone the repository and install any additional dependencies as required.
 
 #### Supported Framework Versions
+
 The currently supported framework versions are:
 
 - PyTorch supported versions: `>= 1.1.0, < 1.7.0`
 - Keras supported versions: `2.3.0-tf` (through the TensorFlow `2.2` package; as of Feb 1st, 2021, `keras2onnx` has
   not been tested for TensorFlow >= `2.3`).
 - TensorFlow V1 supported versions: >= `1.8.0` (TensorFlow >= `2.X` is not currently supported)
 
-
 #### Optional Dependencies
+
 Additionally, optional dependencies can be installed based on the framework you are using.
 
 PyTorch:
+
 ```bash
 pip install sparseml[torch]
 ```
+
 Keras:
+
 ```bash
 pip install sparseml[tf_keras]
 ```
+
 TensorFlow V1:
+
 ```bash
 pip install sparseml[tf_v1]
 ```
@@ -347,34 +329,28 @@ pip install sparseml[tf_v1]
 - [SparseML Documentation](https://docs.neuralmagic.com/sparseml/)
 - [Sparsify Documentation](https://docs.neuralmagic.com/sparsify/)
 - [DeepSparse Documentation](https://docs.neuralmagic.com/deepsparse/)
-- Neural Magic [Blog](https://www.neuralmagic.com/blog/),
-  [Resources](https://www.neuralmagic.com/resources/),
-  [Website](https://www.neuralmagic.com/)
+- Neural Magic [Blog](https://www.neuralmagic.com/blog/), [Resources](https://www.neuralmagic.com/resources/), [Website](https://www.neuralmagic.com/)
 
 ## Contributing
 
-We appreciate contributions to the code, examples, and documentation as well as bug reports and feature requests!
-[Learn how here](CONTRIBUTING.md).
+We appreciate contributions to the code, examples, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparseml/blob/main/CONTRIBUTING.md).
 
 ## Join the Community
 
-For user help or questions about Sparsify,
-use our [GitHub Discussions](https://www.github.com/neuralmagic/sparseml/discussions/). Everyone is welcome!
+For user help or questions about Sparsify, use our [GitHub Discussions](https://www.github.com/neuralmagic/sparseml/discussions/). Everyone is welcome!
 
-You can get the latest news, webinar and event invites, research papers,
-and other ML Performance tidbits by [subscribing](https://neuralmagic.com/subscribe/) to the Neural Magic community.
+You can get the latest news, webinar and event invites, research papers, and other ML Performance tidbits by [subscribing](https://neuralmagic.com/subscribe/) to the Neural Magic community.
 
-For more general questions about Neural Magic,
-please email us at [learnmore@neuralmagic.com](mailto:learnmore@neuralmagic.com)
-or fill out this [form](http://neuralmagic.com/contact/).
+For more general questions about Neural Magic, please email us at [learnmore@neuralmagic.com](mailto:learnmore@neuralmagic.com) or fill out this [form](http://neuralmagic.com/contact/).
 
 ## License
 
-The project is licensed under the [Apache License Version 2.0](LICENSE).
+The project is licensed under the [Apache License Version 2.0](https://github.com/neuralmagic/sparseml/blob/main/LICENSE).
 
 ## Release History
 
 Official builds are hosted on PyPi
+
 - stable: [sparseml](https://pypi.org/project/sparseml/)
 - nightly (dev): [sparseml-nightly](https://pypi.org/project/sparseml-nightly/)
 
@@ -414,4 +390,4 @@ Find this project useful in your research or other communications? Please consid
 archivePrefix={arXiv},
 primaryClass={cs.LG}
 }
-```
+```
