This repository has been archived by the owner on Oct 23, 2023. It is now read-only.
Highlights:
- New high-level quantization ops for expressing quantized networks more flexibly in the QONNX format
- Inference cost estimation (MAC counts at a given precision, plus memory footprint)
- DataType system refactoring to allow flexible arbitrary-precision integer and fixed-point types
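The new quantization ops follow the usual integer quantize-dequantize pattern. As an illustration only (a hedged sketch of the semantics, not the library's actual implementation; the function name and signature are hypothetical), the core computation looks roughly like:

```python
def quant(x, scale, zeropoint, bitwidth, signed=True, narrow=False):
    """Illustrative quantize-dequantize in the style of a QONNX-like
    Quant op. Hypothetical sketch, not the actual library code."""
    if signed:
        # narrow range drops the most negative value (e.g. [-7, 7] for 4 bits)
        q_min = -(2 ** (bitwidth - 1)) + (1 if narrow else 0)
        q_max = 2 ** (bitwidth - 1) - 1
    else:
        q_min = 0
        q_max = 2 ** bitwidth - 1
    # quantize: rescale, shift by zero-point, round, clip to the integer range
    q = round(x / scale + zeropoint)
    q = max(q_min, min(q_max, q))
    # dequantize back to the real-valued domain
    return (q - zeropoint) * scale
```

For example, with `scale=0.25` and 4 signed bits, an input of 0.3 rounds to integer level 1 and dequantizes to 0.25, while a large input saturates at the top of the range.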
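The inference cost estimate keys MAC counts by operand precision and tracks memory separately. A minimal sketch of that bookkeeping for a square convolution layer (the function and dictionary keys here are illustrative assumptions, not the estimator's real API):

```python
def conv_inference_cost(in_ch, out_ch, kernel, out_hw, w_bits, a_bits):
    """Hypothetical per-layer cost sketch: MACs bucketed by the
    precision of the two operands, plus weight memory in bits."""
    # one MAC per (output pixel, output channel, input channel, kernel tap)
    macs = out_ch * in_ch * kernel * kernel * out_hw * out_hw
    # bucket the MACs by activation/weight precision
    key = f"op_mac_INT{a_bits}_INT{w_bits}"
    # weight memory: one w_bits-wide word per kernel weight
    mem_w_bits = out_ch * in_ch * kernel * kernel * w_bits
    return {key: macs, "mem_w_bits": mem_w_bits}
```

Keying the MAC count by precision (rather than a single total) is what lets a downstream tool cost INT4xINT8 operations differently from INT8xINT8 ones.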
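At minimum, a datatype system that supports arbitrary-precision integers and fixed-point types must be able to derive the representable range from a bitwidth and interpret a stored integer under a fractional scaling. A self-contained sketch of those two pieces (hypothetical helpers, not the refactored DataType API):

```python
def int_range(bitwidth, signed):
    """Illustrative (min, max) for an arbitrary-precision integer type
    of the given bitwidth. Hypothetical helper, not the library's API."""
    if signed:
        # two's-complement range: [-2^(n-1), 2^(n-1) - 1]
        return -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    return 0, 2 ** bitwidth - 1

def fixed_point_value(stored_int, frac_bits):
    """Interpret a stored integer as a fixed-point value by scaling
    it down by 2**frac_bits (illustrative sketch)."""
    return stored_int * 2.0 ** (-frac_bits)
```

So a signed 4-bit integer spans [-8, 7], and the same stored integer 5 read as fixed point with 2 fractional bits denotes 1.25.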
Merged PRs:
- #36 Copied platforms from finn-experimental and made pre-commit conform
- #37 Added support for cost estimation for upsampling
- #41 Add support for Bipolar and Binary FINN datatype for Quant op.
- #42 #43 Updates to inference cost computation (Upsample and binary quantization)
- #44 chore: Add message to AssertionError
- #45 Various fixes from multi-headed net testing
- #46 Pull upstream changes into qonnx_quant_op
- #47 rtlsim improvements
- #48 Support and tests for unsetting FINN data types.
- #50 DataType system refactoring and fixed-point types
- #51 Faster & smaller shape inference
- #52 QONNX ops and new general transformations