arXiv:2003.06308 [cs.LG]

Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML

Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Duc Hoang, Sergo Jindariani, Edward Kreinar, Mia Liu, Vladimir Loncar, Jennifer Ngadiuba, Kevin Pedro, Maurizio Pierini, Dylan Rankin, Sheila Sagear, Sioni Summers, Nhan Tran, Zhenbin Wu

Published 2020-03-11 (Version 1)

We present the implementation of binary and ternary neural networks in the hls4ml library, designed to automatically convert deep neural network models to digital circuits implemented as FPGA firmware. Starting from benchmark models trained with floating-point precision, we investigate different strategies to reduce the network's resource consumption by reducing the numerical precision of the network parameters to binary or ternary. We discuss the trade-off between model accuracy and resource consumption. In addition, we show how to balance latency against accuracy by retaining full precision on a selected subset of network components. As an example, we consider two multiclass classification tasks: handwritten digit recognition with the MNIST data set and jet identification with simulated proton-proton collisions at the CERN Large Hadron Collider. The binary and ternary implementations achieve performance similar to that of the higher-precision implementation while using drastically fewer FPGA resources.
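For concreteness, the short NumPy sketch below illustrates the standard binary (sign) and ternary (sign plus dead zone) quantization rules applied to a weight tensor. The ternary threshold heuristic, scale * mean(|w|), is an illustrative assumption borrowed from common ternary-weight-network practice, not necessarily the scheme used in the paper.

    import numpy as np

    def binarize(w):
        # Binary quantization: every weight collapses to {-1, +1} by its sign.
        return np.where(w >= 0.0, 1.0, -1.0)

    def ternarize(w, scale=0.7):
        # Ternary quantization: weights map to {-1, 0, +1}. Weights with
        # magnitude below the threshold are zeroed; the rest keep their sign.
        # The threshold scale * mean(|w|) is an assumed heuristic, not
        # necessarily the paper's choice.
        threshold = scale * np.mean(np.abs(w))
        return np.sign(w) * (np.abs(w) > threshold)

    # Example: quantize a random dense-layer weight matrix.
    w = np.random.randn(16, 64).astype(np.float32)
    w_bin, w_ter = binarize(w), ternarize(w)

Storing each weight in 1 or 2 bits rather than, say, 16-bit fixed point is what drives the resource savings on an FPGA: multiplications by binary or ternary weights reduce to sign flips and additions, freeing DSP blocks and logic.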
