arXiv:1901.08624 [cs.LG]

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks

Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

Published 2019-01-24 (Version 1)

ShuffleNet is a state-of-the-art lightweight convolutional neural network architecture. Its basic operations include group convolution, channel-wise convolution, and channel shuffling. However, channel shuffling is manually designed on empirical grounds. Mathematically, shuffling is multiplication by a permutation matrix. In this paper, we propose to automate channel shuffling by learning permutation matrices during network training. We introduce an exact Lipschitz continuous non-convex penalty that can be incorporated into stochastic gradient descent to approximate permutations to high precision. Exact permutations are obtained by simple rounding at the end of training and are used in inference. The resulting network, referred to as AutoShuffleNet, achieves improved classification accuracies on the CIFAR-10 and ImageNet data sets. In addition, we find experimentally that the standard convex relaxation of permutation matrices into doubly stochastic matrices leads to poor performance. We prove theoretically the exactness (error bounds) of recovering permutation matrices when the penalty function is zero (very small). We present examples of permutation optimization through graph matching and two-layer neural network models where the loss functions can be computed in closed analytical form. In these examples, convex relaxation fails to capture permutations whereas our penalty succeeds.
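The abstract does not spell out the penalty's formula. The sketch below is a minimal illustration, not the authors' released code: it assumes the penalty takes the well-known ℓ1 − ℓ2 (difference-of-norms) form, which is Lipschitz continuous, non-convex, and vanishes on a nonnegative vector with unit sum exactly when that vector is one-hot, applied row- and column-wise to a doubly stochastic relaxation. All function names are illustrative.

```python
# Minimal sketch (assumptions flagged below) of training a relaxed permutation
# matrix with an l1 - l2 penalty and rounding it to an exact permutation
# for inference. Whether the paper uses precisely this penalty form and this
# normalization scheme is an assumption here.
import torch

def permutation_penalty(M: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Sum of (||v||_1 - ||v||_2) over all rows and columns of M.

    For a nonnegative M with unit row and column sums, each term is zero
    iff that row/column is one-hot, so the total penalty is zero iff M is
    a permutation matrix. The eps keeps sqrt differentiable at zero.
    """
    row = M.abs().sum(dim=1) - torch.sqrt((M ** 2).sum(dim=1) + eps)
    col = M.abs().sum(dim=0) - torch.sqrt((M ** 2).sum(dim=0) + eps)
    return row.sum() + col.sum()

def doubly_stochastic_relaxation(W: torch.Tensor, iters: int = 5) -> torch.Tensor:
    """Map free parameters W to a (near) doubly stochastic matrix via
    clamping and alternating row/column normalization (a Sinkhorn-style
    step; an illustrative choice, not confirmed by the abstract)."""
    M = W.clamp(min=0) + 1e-9
    for _ in range(iters):
        M = M / M.sum(dim=1, keepdim=True)
        M = M / M.sum(dim=0, keepdim=True)
    return M

def round_to_permutation(M: torch.Tensor) -> torch.Tensor:
    """Simple rounding at the end of training: one-hot of each row's argmax.
    Assumes the penalty has driven M close enough to a permutation that
    rows select distinct columns."""
    P = torch.zeros_like(M)
    P[torch.arange(M.shape[0]), M.argmax(dim=1)] = 1.0
    return P

# Usage sketch: during SGD, add lam * permutation_penalty(M) to the task loss,
# where M = doubly_stochastic_relaxation(W) replaces the hand-designed shuffle.
W = torch.randn(8, 8, requires_grad=True)
M = doubly_stochastic_relaxation(W)
loss = permutation_penalty(M)        # plus the network's task loss in practice
loss.backward()
P = round_to_permutation(M.detach()) # exact permutation used at inference
```

Because the penalty is Lipschitz continuous (unlike, say, an ℓ0-style count), it can be dropped directly into stochastic gradient descent without smoothing, which is the property the abstract emphasizes.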

Related articles:
arXiv:1902.05967 [cs.LG] (Published 2019-02-15)
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
arXiv:1809.05606 [cs.LG] (Published 2018-09-14)
Non-iterative recomputation of dense layers for performance improvement of DCNN
arXiv:1809.09399 [cs.LG] (Published 2018-09-25)
Non-Iterative Knowledge Fusion in Deep Convolutional Neural Networks