Commit 6d84b80

Merge branch 'main' into split_pointwise_conv_by_rf_codegen

2 parents daae96d + 5616e5a

File tree

2 files changed: +50 -0 lines changed

docs/advanced/hgq.rst

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
===================================
High Granularity Quantization (HGQ)
===================================

.. image:: https://github.com/calad0i/HGQ/actions/workflows/sphinx-build.yml/badge.svg
   :target: https://calad0i.github.io/HGQ/
.. image:: https://badge.fury.io/py/hgq.svg
   :target: https://badge.fury.io/py/hgq
.. image:: https://img.shields.io/badge/arXiv-2405.00645-b31b1b.svg
   :target: https://arxiv.org/abs/2405.00645

`High Granularity Quantization (HGQ) <https://github.com/calad0i/HGQ/>`_ is a library that performs gradient-based automatic bitwidth optimization and quantization-aware training for neural networks to be deployed on FPGAs. By leveraging gradients, it allows bitwidth optimization at arbitrary granularity, down to the per-weight and per-activation level.
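The core idea can be sketched in plain NumPy. This is a toy illustration of gradient-driven bitwidth optimization under a straight-through estimator, not HGQ's actual implementation; every name and value below is made up for the sketch:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

# Toy per-weight setup (all values hypothetical).
w = np.array([0.30, -0.72, 0.11])       # weights
frac_bits = np.array([2.0, 2.0, 2.0])   # per-weight fractional bitwidths

# Forward pass uses the quantized weights.
w_q = quantize(w, frac_bits)

# Straight-through estimator: rounding is treated as identity in the
# backward pass, so d(loss)/dw is taken to be d(loss)/dw_q.
grad_w = np.array([0.5, -0.2, 0.1])     # pretend upstream gradient

# A small per-bit penalty (cf. the `beta` arguments in the example
# below) pushes each bitwidth down, while the task loss pushes back
# wherever precision actually matters.
beta = 1e-5
grad_bits = np.full_like(frac_bits, beta)

lr = 0.1
w = w - lr * grad_w
frac_bits = frac_bits - lr * grad_bits
```

Because each weight carries its own bitwidth variable, the optimizer can spend bits only where the loss is sensitive to them, which is what "arbitrary granularity" refers to above.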
.. image:: https://calad0i.github.io/HGQ/_images/overview.svg
   :alt: Overview of HGQ
   :align: center

Conversion of models made with the HGQ library is fully supported. HGQ models are first converted to the proxy model format, which hls4ml can then parse bit-accurately. Below is an example of how to create a model with HGQ and convert it to an hls4ml model.

.. code-block:: python

   import keras
   from HGQ.layers import HDense, HDenseBatchNorm, HQuantize
   from HGQ import ResetMinMax, FreeBOPs

   model = keras.models.Sequential([
       HQuantize(beta=1.e-5),
       HDenseBatchNorm(32, beta=1.e-5, activation='relu'),
       HDenseBatchNorm(32, beta=1.e-5, activation='relu'),
       HDense(10, beta=1.e-5),
   ])

   opt = keras.optimizers.Adam(learning_rate=0.001)
   loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
   model.compile(optimizer=opt, loss=loss, metrics=['accuracy'])
   callbacks = [ResetMinMax(), FreeBOPs()]

   model.fit(..., callbacks=callbacks)

   from HGQ import trace_minmax, to_proxy_model
   from hls4ml.converters import convert_from_keras_model

   trace_minmax(model, x_train, cover_factor=1.0)
   proxy = to_proxy_model(model, aggressive=True)

   model_hls = convert_from_keras_model(proxy, backend='vivado', output_dir=..., part=...)
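Bit accuracy here means the proxy model evaluates every tensor with the same fixed-point semantics the generated HLS will use. As a rough sketch of what a fixed-point type constrains (an illustration only, assuming round-to-nearest and saturation; this is not the HGQ or hls4ml API):

```python
import numpy as np

def fixed_point(x, total_bits, int_bits):
    """Emulate a signed fixed-point type with `total_bits` bits, `int_bits`
    of them integer (illustrative; assumes round-to-nearest + saturation)."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** (int_bits - 1))             # most negative representable value
    hi = 2.0 ** (int_bits - 1) - 1.0 / scale  # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Values already on the grid pass through unchanged; others round to
# the nearest grid point or saturate at the type's range.
x = np.array([0.3125, 0.3, 100.0])
y = fixed_point(x, 8, 4)   # 8 bits total, 4 of them integer bits
```

Once every tensor is pinned to such a type, a software evaluation of the proxy model and the HLS implementation produce identical results, which is what makes the conversion verifiable.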
An interactive example of HGQ can be found in the `kaggle notebook <https://www.kaggle.com/code/calad0i/small-jet-tagger-with-hgq-1>`_. Full documentation can be found at `calad0i.github.io/HGQ <https://calad0i.github.io/HGQ/>`_.

docs/index.rst

Lines changed: 1 addition & 0 deletions

@@ -22,6 +22,7 @@
    :hidden:
    :caption: Advanced Features

+   advanced/hgq
    advanced/qonnx
    advanced/fifo_depth
    advanced/extension
