arXiv Analytics

arXiv:1812.11337 [cs.LG]

Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks

Ghouthi Boukli Hacene, Vincent Gripon, Matthieu Arzel, Nicolas Farrugia, Yoshua Bengio

Published 2018-12-29 (Version 1)

Convolutional Neural Networks (CNNs) are state-of-the-art in numerous computer vision tasks such as object classification and detection. However, the large number of parameters they contain leads to high computational complexity and strongly limits their usability on budget-constrained hardware such as embedded devices. In this paper, we propose a combination of a new pruning technique and a quantization scheme that effectively reduces the complexity and memory usage of convolutional layers of CNNs, and replaces the costly convolution operation with a low-cost multiplexer. We perform experiments on the CIFAR10, CIFAR100 and SVHN datasets and show that the proposed method achieves almost state-of-the-art accuracy while drastically reducing the computational and memory footprints. We also propose an efficient hardware architecture to accelerate CNN operations. The proposed hardware architecture is pipelined and allows multiple layers to operate simultaneously, speeding up inference.
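
To illustrate the pruning-plus-quantization idea the abstract describes, the sketch below applies magnitude-based pruning followed by sign-based weight quantization to a single convolutional layer in PyTorch. The keep ratio, the pruning criterion, and the binarization scheme are illustrative assumptions; the paper's guided pruning rule and its multiplexer-based hardware mapping are not detailed in the abstract.

import torch
import torch.nn as nn

def prune_conv_weights(conv: nn.Conv2d, keep_ratio: float = 0.25) -> torch.Tensor:
    # Zero out the smallest-magnitude weights of the layer (generic
    # magnitude pruning; the paper's guided criterion may differ).
    w = conv.weight.data
    k = max(1, int(keep_ratio * w.numel()))                  # weights to keep
    threshold = w.abs().flatten().kthvalue(w.numel() - k).values
    mask = (w.abs() > threshold).float()
    conv.weight.data = w * mask
    return mask

def binarize_conv_weights(conv: nn.Conv2d) -> None:
    # Quantize the surviving weights to {-alpha, +alpha}, with alpha the
    # mean absolute value (pruned zeros stay zero since sign(0) == 0).
    w = conv.weight.data
    alpha = w.abs().mean()
    conv.weight.data = torch.sign(w) * alpha

# Hypothetical usage on a single layer.
layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)
mask = prune_conv_weights(layer, keep_ratio=0.25)            # drop 75% of the weights
binarize_conv_weights(layer)                                 # quantize the rest

In such a scheme, each surviving weight is described by a position (the mask) and a sign, which is what makes a multiplexer-style hardware implementation plausible; the exact encoding used by the authors would need to be taken from the full paper.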

Related articles: Most relevant | Search more
arXiv:1602.02660 [cs.LG] (Published 2016-02-08)
Exploiting Cyclic Symmetry in Convolutional Neural Networks
arXiv:1604.04428 [cs.LG] (Published 2016-04-15)
The Artificial Mind's Eye: Resisting Adversarials for Convolutional Neural Networks using Internal Projection
arXiv:1511.06067 [cs.LG] (Published 2015-11-19)
Convolutional neural networks with low-rank regularization