arXiv:1707.03684 [cs.CV]

Structured Sparse Ternary Weight Coding of Deep Neural Networks for Efficient Hardware Implementations

Yoonho Boo, Wonyong Sung

Published 2017-07-01 (Version 1)

Deep neural networks (DNNs) usually demand a large number of operations for real-time inference. In particular, fully-connected layers contain a large number of weights and therefore usually require many off-chip memory accesses during inference. We propose a weight compression method for deep neural networks that allows values of +1 or -1 only at predetermined positions of the weights, so that decoding can be conducted easily using a table. For example, the structured sparse (8,2) coding allows at most two non-zero values among eight weights. This method not only enables multiplication-free DNN implementations but also compresses the weight storage by up to 32x compared to floating-point networks. Weight distribution normalization and gradual pruning techniques are applied to mitigate the performance degradation. The experiments are conducted with fully-connected deep neural networks and convolutional neural networks.
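
The following is a minimal sketch of the group-wise ternary idea described in the abstract: within each group of 8 weights, at most 2 entries are kept and quantized to +1 or -1. The largest-magnitude selection rule, the function names, and the storage arithmetic in the comments are illustrative assumptions; the paper's weight distribution normalization and gradual pruning are not reproduced here.

```python
# Sketch of structured sparse (8,2) ternary coding (illustrative, not the
# authors' reference implementation).
import numpy as np

GROUP_SIZE = 8   # weights per group
MAX_NONZERO = 2  # non-zero entries allowed per group


def encode_group(weights):
    """Return (positions, signs) for the at-most-2 kept weights of one group."""
    assert len(weights) == GROUP_SIZE
    # Keep the MAX_NONZERO largest-magnitude weights (assumed selection rule).
    order = np.argsort(-np.abs(weights))[:MAX_NONZERO]
    positions = sorted(int(i) for i in order if weights[i] != 0)
    signs = [1 if weights[p] > 0 else -1 for p in positions]
    return positions, signs


def decode_group(positions, signs):
    """Reconstruct the ternary {-1, 0, +1} group from its compact code."""
    out = np.zeros(GROUP_SIZE, dtype=np.int8)
    for p, s in zip(positions, signs):
        out[p] = s
    return out


if __name__ == "__main__":
    w = np.array([0.02, -0.7, 0.1, 0.0, 0.45, -0.03, 0.0, 0.05])
    pos, sgn = encode_group(w)
    print(pos, sgn)                # [1, 4] [-1, 1]
    print(decode_group(pos, sgn))  # [ 0 -1  0  0  1  0  0  0]
    # Storage intuition: a group has C(8,2)*4 + C(8,1)*2 + 1 = 129 possible
    # codes, i.e. about 8 bits for 8 weights (1 bit/weight), versus 256 bits
    # in 32-bit floating point -- consistent with the reported up-to-32x
    # compression.
```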

Comments: This paper is accepted at SIPS 2017
Categories: cs.CV
Related articles:
arXiv:1611.05431 [cs.CV] (Published 2016-11-16)
Aggregated Residual Transformations for Deep Neural Networks
arXiv:1709.03820 [cs.CV] (Published 2017-09-12)
Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers
arXiv:1312.2249 [cs.CV] (Published 2013-12-08)
Scalable Object Detection using Deep Neural Networks