arXiv Analytics

arXiv:1805.01930 [stat.ML]

Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks

Brian Bartoldson, Adrian Barbu, Gordon Erlebacher

Published 2018-05-04, Version 1

Artificial neural networks (ANNs) may not be worth their computational/memory costs when used in mobile phones or embedded devices. Parameter-pruning algorithms combat these costs, with some algorithms capable of removing over 90% of an ANN's weights without harming the ANN's performance. Removing weights from an ANN is a form of regularization, but existing pruning algorithms do not significantly reduce generalization error. We show that pruning ANNs can improve generalization if pruning targets large weights instead of small weights. Applying our pruning algorithm to an ANN yields higher image classification accuracy on CIFAR-10 than applying the popular regularizer dropout. The pruning couples this higher accuracy with an 85% reduction in the ANN's parameter count.
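
Below is a minimal sketch of magnitude-based pruning, assuming NumPy and a single weight matrix; the function prune_by_magnitude, its fraction argument, and the target switch are illustrative names, not the authors' implementation. Setting target="small" mimics conventional pruning of the smallest-magnitude weights, while target="large" removes the largest-magnitude weights, the direction the abstract argues improves generalization.

import numpy as np

def prune_by_magnitude(weights, fraction, target="large"):
    """Zero out roughly `fraction` of the weights by magnitude.

    target="small": remove the smallest-magnitude weights (conventional pruning).
    target="large": remove the largest-magnitude weights instead.
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    if target == "small":
        # k-th smallest magnitude; keep only weights strictly above it
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
    else:
        # k-th largest magnitude; keep only weights strictly below it
        threshold = np.partition(flat, flat.size - k)[flat.size - k]
        mask = np.abs(weights) < threshold
    return weights * mask

# Usage example: prune 85% of a random weight matrix
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_pruned = prune_by_magnitude(W, fraction=0.85, target="large")
print(f"Remaining nonzero weights: {np.count_nonzero(W_pruned) / W.size:.2%}")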

Related articles:
arXiv:2002.11152 [stat.ML] (Published 2020-02-25)
Fundamental Issues Regarding Uncertainties in Artificial Neural Networks
arXiv:2008.03920 [stat.ML] (Published 2020-08-10)
Do ideas have shape? Plato's theory of forms as the continuous limit of artificial neural networks
arXiv:2001.00396 [stat.ML] (Published 2020-01-02)
Restricting the Flow: Information Bottlenecks for Attribution