This repository was archived by the owner on Oct 23, 2023. It is now read-only.
Changes from all commits
95 commits
992ed02
Add TruthTable custom operation
jalezeta Feb 17, 2021
a64558b
Merge branch 'dev' of https://github.com/Xilinx/finn-base into featur…
jalezeta Feb 22, 2021
d00ecd7
Move truthtables CustomOp into logicnets folder
jalezeta Feb 22, 2021
62e3795
Reformat truthtable testing function
jalezeta Feb 22, 2021
8ccd651
Add attributes
jalezeta Feb 23, 2021
95ecfee
Fix shape_compatibility error
jalezeta Feb 23, 2021
f2521ad
Adding zero entries in the custom operations, attributes, and a trans…
jalezeta Feb 26, 2021
5532ac0
Merge branch 'dev' into feature/incomplete_tables
jalezeta Feb 26, 2021
a655834
There are a bunch of changes in this commit:
jalezeta Mar 2, 2021
f8eb7c1
Modify function and file names to keed consistency
jalezeta Mar 3, 2021
1154ceb
Add Copyright (c) 2021 Xilinx, Inc.
jalezeta Mar 3, 2021
b18d59c
Add custom directory entry for the verilog file
jalezeta Mar 3, 2021
a9626f1
Add PyVerilator test for the generated binary truth table verilog files
jalezeta Mar 4, 2021
d67885f
Add new execution mode: python or rtlsim
jalezeta Mar 5, 2021
b37621d
Add random LUT generator utility function.
jalezeta Mar 5, 2021
b4743bb
Merge branch 'dev' of https://github.com/Xilinx/finn-base into featur…
jalezeta Mar 8, 2021
72b088c
Merge branch 'dev' into feature/incomplete_tables
jalezeta Mar 8, 2021
a9e246b
Change verilog file path to temporary folder
jalezeta Mar 8, 2021
8c308e2
Simplify the binary array to integer conversion
jalezeta Mar 8, 2021
11a6b32
Check the shape and the size of the input vector
jalezeta Mar 9, 2021
29945d2
Add UniqueNodeNames to create different verilog module names and files
jalezeta Mar 9, 2021
5917c1b
Initial commit of simple LogicNets model (Non working code)
jalezeta Mar 10, 2021
8dc3ccf
Initial commit of simple LogicNets model (Non working code)
jalezeta Mar 10, 2021
fa9c641
BinaryTruthTable returns a vector instead of a scalar and make_Shape_…
jalezeta Mar 12, 2021
c5f9253
Merge branch 'feature/incomplete_tables' of github.com:jalezeta/finn-…
jalezeta Mar 12, 2021
4acb776
Update binary_truthtable.py
jalezeta Mar 12, 2021
dd048f0
Merge branch 'feature/incomplete_tables' of github.com:jalezeta/finn-…
jalezeta Mar 12, 2021
d082996
Fix simple LogicNets model creation
jalezeta Mar 12, 2021
0f6ecce
Remove inputs and add custom tensor name in make_shape compatible fun…
jalezeta Mar 16, 2021
1ab4427
Merge branch 'feature/incomplete_tables' of github.com:jalezeta/finn-…
jalezeta Mar 16, 2021
6a01bda
Change operation pattern of LogicNets model to <SparseSlice + BinaryT…
jalezeta Mar 26, 2021
908ffab
Fix small problem, missing input index data in the input dictionary
jalezeta Mar 26, 2021
a0bfc0f
Add Concat operation at the beginning of the model
jalezeta Mar 29, 2021
48ef946
Support for dedicated care_set in the node level Verilog generation
jalezeta Mar 29, 2021
404617a
Merge branch 'feature/incomplete_tables' into feature/logicnets-initi…
jalezeta Mar 29, 2021
c208bbb
Add support for separate care sets
jalezeta Mar 29, 2021
8b83e32
Add python based prediction and result assertion
jalezeta Mar 30, 2021
5a0a8d8
Remove input concat block. We assume the general input to the model i…
jalezeta Mar 30, 2021
7802057
1- Add Verilog generation for Custom logicnets model.\n2- Add test fo…
jalezeta Apr 1, 2021
741464f
Reformat verilog generation file
jalezeta Apr 1, 2021
32fef33
1- Add TruthTable customOp for X:Y LUT operations(python and rtlsim).…
jalezeta Apr 2, 2021
055a969
Make the algorithm cleaner by using ModelWrapper functions to find pr…
jalezeta Apr 9, 2021
b99707d
[create_generic_partitions]: minor modification, removed redundant ou…
mmrahorovic Apr 12, 2021
aea01d6
[extend_partition]: added a new transformation ExtendPartition. (#27)
mmrahorovic Apr 12, 2021
b94395b
Added support for non-equal strides along different axes (#25)
mmrahorovic Apr 13, 2021
94beb27
Changes for supporting vitis_hls (#28)
Apr 19, 2021
dd97556
Add TruthTable custom operation
jalezeta Feb 17, 2021
3d07226
Move truthtables CustomOp into logicnets folder
jalezeta Feb 22, 2021
1289d74
Reformat truthtable testing function
jalezeta Feb 22, 2021
06d4989
Add attributes
jalezeta Feb 23, 2021
06b0364
Fix shape_compatibility error
jalezeta Feb 23, 2021
5ec2582
Adding zero entries in the custom operations, attributes, and a trans…
jalezeta Feb 26, 2021
670b460
There are a bunch of changes in this commit:
jalezeta Mar 2, 2021
cc68438
Modify function and file names to keed consistency
jalezeta Mar 3, 2021
1ac978e
Add Copyright (c) 2021 Xilinx, Inc.
jalezeta Mar 3, 2021
5294bda
Add custom directory entry for the verilog file
jalezeta Mar 3, 2021
003e227
Add PyVerilator test for the generated binary truth table verilog files
jalezeta Mar 4, 2021
3d1af47
Add new execution mode: python or rtlsim
jalezeta Mar 5, 2021
2dedb60
Add random LUT generator utility function.
jalezeta Mar 5, 2021
af38ff9
Merge branch 'dev' into feature/incomplete_tables
jalezeta Mar 8, 2021
ebd9c5d
Change verilog file path to temporary folder
jalezeta Mar 8, 2021
ac4b33d
Simplify the binary array to integer conversion
jalezeta Mar 8, 2021
0613341
Check the shape and the size of the input vector
jalezeta Mar 9, 2021
79e4cfd
Add UniqueNodeNames to create different verilog module names and files
jalezeta Mar 9, 2021
c7d0033
Initial commit of simple LogicNets model (Non working code)
jalezeta Mar 10, 2021
88dd426
Initial commit of simple LogicNets model (Non working code)
jalezeta Mar 10, 2021
5c41a7d
BinaryTruthTable returns a vector instead of a scalar and make_Shape_…
jalezeta Mar 12, 2021
3179a86
Update binary_truthtable.py
jalezeta Mar 12, 2021
749ff9d
Fix simple LogicNets model creation
jalezeta Mar 12, 2021
6694423
Remove inputs and add custom tensor name in make_shape compatible fun…
jalezeta Mar 16, 2021
7fa9529
Change operation pattern of LogicNets model to <SparseSlice + BinaryT…
jalezeta Mar 26, 2021
be403f9
Fix small problem, missing input index data in the input dictionary
jalezeta Mar 26, 2021
35fc6af
Add Concat operation at the beginning of the model
jalezeta Mar 29, 2021
0282ef6
Support for dedicated care_set in the node level Verilog generation
jalezeta Mar 29, 2021
0a89250
Add support for separate care sets
jalezeta Mar 29, 2021
ab3c976
Add python based prediction and result assertion
jalezeta Mar 30, 2021
14f2a1b
Remove input concat block. We assume the general input to the model i…
jalezeta Mar 30, 2021
f6a5bad
1- Add Verilog generation for Custom logicnets model.\n2- Add test fo…
jalezeta Apr 1, 2021
84f35a8
Reformat verilog generation file
jalezeta Apr 1, 2021
2680141
Make the algorithm cleaner by using ModelWrapper functions to find pr…
jalezeta Apr 9, 2021
5b64cbd
Merge branch 'feature/pla-backend' of github.com:jalezeta/finn-base i…
jalezeta Apr 19, 2021
31dbaf1
Merge branch 'feature/truthtable-custom-op' into feature/pla-backend
jalezeta Apr 19, 2021
8d89a9d
Add PLA file backend support for BinaryTruthTable and TruthTable cust…
jalezeta Apr 20, 2021
2c08044
Changes for supporting non-equal dilation (#29)
mmrahorovic May 9, 2021
ac0b86a
Support infer_datatype for flatten layer (#30)
fpjentzsch May 17, 2021
fea6f14
Update AUTHORS.rst
Jun 4, 2021
42fa89a
Create python-publish.yml
Jun 4, 2021
c928353
Add ZCU111 board to part map (#32)
fpjentzsch Jun 4, 2021
7cb1196
Update AUTHORS.rst
Jun 4, 2021
d4b80dd
GHA for PyPI, sdist only (#35)
Jun 4, 2021
8bcf791
Delete onnx file generation
jalezeta Jul 2, 2021
64468ed
Merge branch 'dev' of github.com:Xilinx/finn-base into dev
jalezeta Jul 2, 2021
dddd90d
Remove non-used package
jalezeta Jul 2, 2021
b90a44d
Solve linting problems
jalezeta Jul 2, 2021
14051a2
Fix docstring issues
jalezeta Jul 2, 2021
31 changes: 31 additions & 0 deletions .github/workflows/python-publish.yml
@@ -0,0 +1,31 @@
+# This workflow will upload a Python Package using Twine when a release is created
+# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
+
+name: Upload Python Package
+
+on:
+  release:
+    types: [created]
+
+jobs:
+  deploy:
+
+    runs-on: ubuntu-latest
+
+    steps:
+    - uses: actions/checkout@v2
+    - name: Set up Python
+      uses: actions/setup-python@v2
+      with:
+        python-version: '3.x'
+    - name: Install dependencies
+      run: |
+        python -m pip install --upgrade pip
+        pip install setuptools wheel twine
+    - name: Build and publish
+      env:
+        TWINE_USERNAME: __token__
+        TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
+      run: |
+        python setup.py sdist
+        twine upload dist/*
6 changes: 4 additions & 2 deletions AUTHORS.rst
@@ -2,10 +2,12 @@
 Contributors
 ============

-* Yaman Umuroglu <yamanu@xilinx.com> (maintainer)
-* Sambhav Jain <sambhavj@xilinx.com> (maintainer)
+* Yaman Umuroglu (@maltanar) (maintainer)
+* Sambhav Jain (@sjain-stanford)
 * Jakoba Petri-Koenig (@auphelia)
 * Lucian Petrica (@quetric)
 * Tobias Alonso (@Tobi-Alonso)
 * Hendrik Borras (@HenniOVP)
 * Mirza Mrahorovic (@mmrahorovic)
+* Felix Paul Jentzsch (@felixpj)
+* Jon Ander Lezeta (@jalezeta)
5 changes: 3 additions & 2 deletions src/finn/core/rtlsim_exec.py
@@ -30,11 +30,12 @@

 from finn.custom_op.registry import getCustomOp
 from finn.util.data_packing import npy_to_rtlsim_input, rtlsim_output_to_npy
-from finn.util.fpgadataflow import (
+from finn.util.pyverilator import (
     pyverilate_get_liveness_threshold_cycles,
     pyverilate_stitched_ip,
+    reset_rtlsim,
+    toggle_clk,
 )
-from finn.util.pyverilator import reset_rtlsim, toggle_clk

 try:
     from pyverilator import PyVerilator
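The import consolidation above pulls `reset_rtlsim` and `toggle_clk` from the same `finn.util.pyverilator` module as the stitched-IP helpers. A rough sketch of what such helpers do, to make the diff easier to follow — `FakeSim`, the signal names, and the exact pulse sequence are illustrative assumptions here, not the real PyVerilator API:

```python
class FakeSim:
    """Toy stand-in for a PyVerilator simulation object (hypothetical)."""

    def __init__(self):
        self.io = {"ap_rst_n": 1, "ap_clk": 0}
        self.trace = []  # record every signal write for inspection

    def set(self, name, val):
        self.io[name] = val
        self.trace.append((name, val))


def reset_rtlsim(sim):
    # pulse the (assumed) active-low reset across one clock edge
    sim.set("ap_rst_n", 0)
    sim.set("ap_clk", 1)
    sim.set("ap_clk", 0)
    sim.set("ap_rst_n", 1)


def toggle_clk(sim):
    # advance the simulation by one full clock cycle
    sim.set("ap_clk", 1)
    sim.set("ap_clk", 0)


sim = FakeSim()
reset_rtlsim(sim)
toggle_clk(sim)
print(sim.io["ap_rst_n"])  # 1 (reset released)
```

The point of the change is only that both helpers now live next to the other rtlsim utilities, so callers need a single import.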
4 changes: 4 additions & 0 deletions src/finn/custom_op/general/__init__.py
@@ -33,6 +33,8 @@
 from finn.custom_op.general.multithreshold import MultiThreshold
 from finn.custom_op.general.quantavgpool2d import QuantAvgPool2d
 from finn.custom_op.general.xnorpopcount import XnorPopcountMatMul
+from finn.custom_op.logicnets.binary_truthtable import BinaryTruthTable
+from finn.custom_op.logicnets.truthtable import TruthTable

 custom_op = dict()

@@ -43,3 +45,5 @@
 custom_op["MultiThreshold"] = MultiThreshold
 custom_op["XnorPopcountMatMul"] = XnorPopcountMatMul
 custom_op["Im2Col"] = Im2Col
+custom_op["BinaryTruthTable"] = BinaryTruthTable
+custom_op["TruthTable"] = TruthTable
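The two new registry entries expose the LogicNets truth-table ops by name. As a rough illustration of the lookup-table semantics behind such an op — a toy sketch only; `eval_truthtable` and its indexing scheme are assumptions, not the FINN implementation:

```python
import numpy as np


def eval_truthtable(lut, bits):
    """Evaluate a truth table: binary input vector -> integer index -> LUT entry."""
    idx = int("".join(str(int(b)) for b in bits), 2)  # e.g. [1, 0] -> "10" -> 2
    return int(lut[idx])


xor_lut = np.array([0, 1, 1, 0])  # 2-input XOR as a 4-entry truth table
print(eval_truthtable(xor_lut, [1, 0]))  # 1
print(eval_truthtable(xor_lut, [1, 1]))  # 0
```

The commit history above adds a "care set" on top of this idea, so entries outside the care set can be treated as don't-cares during Verilog generation.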
146 changes: 83 additions & 63 deletions src/finn/custom_op/general/im2col.py
@@ -19,28 +19,40 @@ def compute_conv_output_dim(ifm_dim, k, stride, total_pad=0, dilation=1):
     """
     if ifm_dim == 1:
         # indicates dummy dimension, keep as-is
+        # Also ensure that every call to this function respects the expected
+        # kernel shape and padding
+        assert (
+            k == 1 and total_pad == 0
+        ), "Unexpected kernel shape and padding for 1D input image"
         out_dim = 1
     else:
         out_dim = int(((ifm_dim + total_pad - dilation * (k - 1) - 1) / stride) + 1)
     return out_dim


 def get_im2col_indices_nchw(
-    x_shape, field_height, field_width, padding=0, stride_y=1, stride_x=1, dilation=1
+    x_shape,
+    field_height,
+    field_width,
+    padding=0,
+    stride_h=1,
+    stride_w=1,
+    dilation_h=1,
+    dilation_w=1,
 ):
     """Returns im2col indices."""
     # First figure out what the size of the output should be
     n, c, h, w = x_shape
     pad_h = padding[0] + padding[2]
     pad_w = padding[1] + padding[3]
-    out_height = compute_conv_output_dim(h, field_height, stride_y, pad_h, dilation)
-    out_width = compute_conv_output_dim(w, field_width, stride_x, pad_w, dilation)
+    out_height = compute_conv_output_dim(h, field_height, stride_h, pad_h, dilation_h)
+    out_width = compute_conv_output_dim(w, field_width, stride_w, pad_w, dilation_w)

-    i0 = dilation * np.repeat(np.arange(field_height), field_width)
+    i0 = dilation_h * np.repeat(np.arange(field_height), field_width)
     i0 = np.tile(i0, c)
-    i1 = stride_y * np.repeat(np.arange(out_height), out_width)
-    j0 = dilation * np.tile(np.arange(field_width), field_height * c)
-    j1 = stride_x * np.tile(np.arange(out_width), out_height)
+    i1 = stride_h * np.repeat(np.arange(out_height), out_width)
+    j0 = dilation_w * np.tile(np.arange(field_width), field_height * c)
+    j1 = stride_w * np.tile(np.arange(out_width), out_height)
     i = i0.reshape(-1, 1) + i1.reshape(1, -1)
     j = j0.reshape(-1, 1) + j1.reshape(1, -1)
@@ -56,43 +68,34 @@ def im2col_indices_nchw(
     field_height,
     field_width,
     padding=[0, 0, 0, 0],
-    stride_y=1,
-    stride_x=1,
+    stride_h=1,
+    stride_w=1,
     pad_val=0,
-    dilation=1,
+    dilation_h=1,
+    dilation_w=1,
 ):
     """Performs im2col on image (2D tensor, possibly with 1-length dummy dimensions) x with
     given field height and width, as well as values for padding and stride size.
     Returns result of im2col."""
     # Zero-pad the input
     p = padding

-    if ifm_h == 1:  # Shape of input image is: (1, C, 1, W)
-        x_padded = np.pad(
-            x,
-            ((0, 0), (0, 0), (0, 0), (p[1], p[3])),
-            mode="constant",
-            constant_values=pad_val,
-        )
-    elif ifm_w == 1:  # Shape of input image is: (1, C, H, 1)
-        x_padded = np.pad(
-            x,
-            ((0, 0), (0, 0), (p[0], p[2]), (0, 0)),
-            mode="constant",
-            constant_values=pad_val,
-        )
-    elif ifm_h > 1 and ifm_w > 1:  # Shape of input image is: (1, C, H, W)
-        x_padded = np.pad(
-            x,
-            ((0, 0), (0, 0), (p[0], p[2]), (p[1], p[3])),
-            mode="constant",
-            constant_values=pad_val,
-        )
-    else:
-        raise Exception("Unknown combination of spatial sizes in im2col_indices_nchw")
+    x_padded = np.pad(
+        x,
+        ((0, 0), (0, 0), (p[0], p[2]), (p[1], p[3])),
+        mode="constant",
+        constant_values=pad_val,
+    )

     k, i, j = get_im2col_indices_nchw(
-        x.shape, field_height, field_width, padding, stride_y, stride_x, dilation
+        x.shape,
+        field_height,
+        field_width,
+        padding,
+        stride_h,
+        stride_w,
+        dilation_h,
+        dilation_w,
     )

     cols = x_padded[:, k, i, j]
@@ -122,7 +125,7 @@ class Im2Col(CustomOp):
     def get_nodeattr_types(self):
         return {
             # stride and shape of convolution kernel
-            "stride": ("i", True, 1),
+            "stride": ("ints", True, []),
             "kernel_size": ("ints", True, []),
             # input tensor shape
             "input_shape": ("s", True, ""),
@@ -134,16 +137,14 @@ def get_nodeattr_types(self):
             # depthwise: if 1, infer ConvolutionInputGenerator with depthwise == 1
             "depthwise": ("i", False, 0, {0, 1}),
             # dilation factor applied to the conv kernel
-            "dilations": ("i", False, 1),
+            "dilations": ("ints", False, [1, 1]),
         }

     def make_shape_compatible_op(self, model):
-        k = self.get_nodeattr("kernel_size")  # Assumption: Height x Width
-        k_h = k[0]
-        k_w = k[1]
-        stride = self.get_nodeattr("stride")
+        k_h, k_w = self.get_nodeattr("kernel_size")  # Assumption: Height x Width
+        stride_h, stride_w = self.get_nodeattr("stride")
         ishape = self.get_nodeattr("input_shape")
-        dilation = self.get_nodeattr("dilations")
+        dilation_h, dilation_w = self.get_nodeattr("dilations")
         pad = self.get_nodeattr(
             "pad_amount"
         )  # padding: [H_begin, W_begin, H_end, W_end]
@@ -166,16 +167,22 @@ def make_shape_compatible_op(self, model):

         # check that kernel tensor also respects any existing dummy dimensions
         if ifm_dim_h == 1:
+            kernel_1d = k_h == 1
+            pad_1d = pad_h == 0
             assert (
-                k_h == 1
-            ), "Unexpected kernel shape for input image of dimensions (N, 1, W, C)"
+                kernel_1d and pad_1d
+            ), "Unexpected kernel shape and padding for input image\
+            of dimensions (N, 1, W, C)"
         if ifm_dim_w == 1:
+            kernel_1d = k_w == 1
+            pad_1d = pad_w == 0
             assert (
-                k_w == 1
-            ), "Unexpected kernel shape for input image of dimensions (N, H, 1, C)"
+                kernel_1d and pad_1d
+            ), "Unexpected kernel shape padding for input image\
+            of dimensions (N, H, 1, C)"

-        ofm_dim_h = compute_conv_output_dim(ifm_dim_h, k_h, stride, pad_h, dilation)
-        ofm_dim_w = compute_conv_output_dim(ifm_dim_w, k_w, stride, pad_w, dilation)
+        ofm_dim_h = compute_conv_output_dim(ifm_dim_h, k_h, stride_h, pad_h, dilation_h)
+        ofm_dim_w = compute_conv_output_dim(ifm_dim_w, k_w, stride_w, pad_w, dilation_w)

         # implement tensor with correct shape
         values = np.random.randn(1, ofm_dim_h, ofm_dim_w, k_h * k_w * ifm_ch).astype(
@@ -201,46 +208,59 @@ def infer_node_datatype(self, model):

     def execute_node(self, context, graph):
         node = self.onnx_node
-        k = self.get_nodeattr("kernel_size")  # Assumption: Height x Width
-        k_h = k[0]
-        k_w = k[1]
-
-        stride = self.get_nodeattr("stride")
+        k_h, k_w = self.get_nodeattr("kernel_size")  # Assumption: Height x Width
+        stride_h, stride_w = self.get_nodeattr("stride")
         pad = self.get_nodeattr("pad_amount")
         pad_h = pad[0] + pad[2]
         pad_w = pad[1] + pad[3]
         pad_val = self.get_nodeattr("pad_value")
-        dilation = self.get_nodeattr("dilations")
+        dilation_h, dilation_w = self.get_nodeattr("dilations")

         iname = node.input[0]
         x = context[iname]
         qnt_annotations = graph.quantization_annotation
         ret = util.get_by_name(qnt_annotations, iname, "tensor_name")
         ret = util.get_by_name(ret.quant_parameter_tensor_names, "finn_datatype", "key")
         idt = DataType[ret.value]
-        for val in pad:
-            if val != 0:
-                assert idt.allowed(val), "Im2Col dtype must allow pad_val"
+        if pad != [0, 0, 0, 0]:
+            assert idt.allowed(pad_val), "Im2Col dtype must allow pad_val"
         # check that input is NHWC
         assert x.ndim == 4, "Unexpected number of input dims for Im2Col"
         n, h, w, c = x.shape

         # check that kernel tensor also respects any existing dummy dimensions
         if h == 1:
+            kernel_1d = k_h == 1
+            pad_1d = pad_h == 0
             assert (
-                k_h == 1
-            ), "Unexpected kernel shape for input image of dimensions (N, 1, W, C)"
+                kernel_1d and pad_1d
+            ), "Unexpected kernel shape and padding for input image\
+            of dimensions (N, 1, W, C)"
         if w == 1:
+            kernel_1d = k_w == 1
+            pad_1d = pad_w == 0
             assert (
-                k_w == 1
-            ), "Unexpected kernel shape for input image of dimensions (N, H, 1, C)"
+                kernel_1d and pad_1d
+            ), "Unexpected kernel shape and padding for input image\
+            of dimensions (N, H, 1, C)"

-        out_dim_h = compute_conv_output_dim(h, k_h, stride, pad_h, dilation)
-        out_dim_w = compute_conv_output_dim(w, k_w, stride, pad_w, dilation)
+        out_dim_h = compute_conv_output_dim(h, k_h, stride_h, pad_h, dilation_h)
+        out_dim_w = compute_conv_output_dim(w, k_w, stride_w, pad_w, dilation_w)
         # internally convert input to NCHW
         x = x.transpose(0, 3, 1, 2)
         # call NCHW im2col implementation
         ret = im2col_indices_nchw(
-            x, h, w, k_h, k_w, pad, stride, stride, pad_val=pad_val, dilation=dilation
+            x,
+            h,
+            w,
+            k_h,
+            k_w,
+            pad,
+            stride_h,
+            stride_w,
+            pad_val=pad_val,
+            dilation_h=dilation_h,
+            dilation_w=dilation_w,
         )
         # result shape is (k_H*k_W*N, out_dim_H*out_dim_W), convert to NCHW
         ret = ret.reshape(n, c, k_h, k_w, out_dim_h, out_dim_w)
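The per-axis split in the Im2Col diff is the crux of the change: stride and dilation may now differ between the H and W axes. A minimal standalone sketch of im2col with independent per-axis parameters — names and layout are simplified assumptions, not the finn-base implementation:

```python
import numpy as np


def im2col_nchw(x, k_h, k_w, s_h=1, s_w=1, d_h=1, d_w=1):
    """Toy NCHW im2col supporting unequal strides/dilations per axis."""
    n, c, h, w = x.shape
    # same output-size formula as compute_conv_output_dim (no padding here)
    out_h = (h - d_h * (k_h - 1) - 1) // s_h + 1
    out_w = (w - d_w * (k_w - 1) - 1) // s_w + 1
    cols = []
    for oy in range(out_h):
        for ox in range(out_w):
            # dilated window: step d_h/d_w within the receptive field
            patch = x[
                :, :,
                oy * s_h : oy * s_h + d_h * (k_h - 1) + 1 : d_h,
                ox * s_w : ox * s_w + d_w * (k_w - 1) + 1 : d_w,
            ]
            cols.append(patch.reshape(n, -1))
    return np.stack(cols, axis=1)  # (N, out_h*out_w, C*k_h*k_w)


x = np.arange(24, dtype=np.float32).reshape(1, 1, 4, 6)
cols = im2col_nchw(x, k_h=2, k_w=2, s_h=2, s_w=3)  # stride 2 along H, 3 along W
print(cols.shape)  # (1, 4, 4): 2x2 output positions, 4 values per patch
```

With a single scalar stride (the old `("i", True, 1)` attribute), the 2-along-H / 3-along-W combination above was simply not expressible; the `("ints", ...)` attributes make it a first-class case.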