arXiv Analytics

arXiv:1705.08665 [stat.ML]

Bayesian Compression for Deep Learning

Christos Louizos, Karen Ullrich, Max Welling

Published 2017-05-24, Version 1

Compression and computational efficiency in deep learning have become problems of great significance. In this work, we argue that the most principled and effective way to attack this problem is by taking a Bayesian point of view, where through sparsity-inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed-point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.
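The two ideas in the abstract — pruning whole nodes via a group-level signal, and choosing a fixed-point bit width from posterior uncertainty — can be sketched in a simplified form. The sketch below is illustrative only and not the paper's exact procedure: the SNR-based node score, the `snr_threshold` parameter, and the bit-width rule tying the quantization step to the smallest posterior standard deviation are assumptions made for this example.

```python
import numpy as np

def prune_and_quantize(weight_mean, weight_std, snr_threshold=1.0):
    """Illustrative sketch (assumed, not the paper's exact method):
    prune whole rows (nodes) with low posterior signal-to-noise ratio,
    then pick a fixed-point bit width so the rounding error stays
    below the posterior uncertainty of the surviving weights."""
    # Per-node SNR: magnitude of the row's posterior mean relative
    # to the magnitude of its posterior standard deviations.
    node_snr = np.linalg.norm(weight_mean, axis=1) / (
        np.linalg.norm(weight_std, axis=1) + 1e-12)
    keep = node_snr > snr_threshold            # boolean node mask
    kept_mean = weight_mean[keep]
    kept_std = weight_std[keep]
    # Quantization step: no finer than the smallest posterior std,
    # since extra precision below the noise level carries no signal.
    step = kept_std.min()
    span = np.abs(kept_mean).max()
    # Bits needed to cover the dynamic range at that step (+1 sign bit).
    bits = max(1, int(np.ceil(np.log2(span / step + 1))) + 1)
    quantized = np.round(kept_mean / step) * step
    return keep, quantized, bits
```

For example, a node whose weights are large relative to their posterior noise survives, while a node whose posterior mass sits near zero is removed entirely, shrinking both the weight matrix and the bit budget at once.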

Related articles: Most relevant | Search more
arXiv:1609.08976 [stat.ML] (Published 2016-09-28)
Variational Autoencoder for Deep Learning of Images, Labels and Captions
arXiv:1805.05814 [stat.ML] (Published 2018-05-14)
SHADE: Information-Based Regularization for Deep Learning
arXiv:2012.06969 [stat.ML] (Published 2020-12-13, updated 2020-12-16)
Predicting Generalization in Deep Learning via Local Measures of Distortion