arXiv:1810.09102 [cs.LG]

Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?

Nitin Bansal, Xiaohan Chen, Zhangyang Wang

Published 2018-10-22 (Version 1)

This paper seeks to answer the question: given that (near-) orthogonality of weights is known to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easier-to-use ways? We develop novel orthogonality regularizations for training deep CNNs, drawing on advanced analytical tools such as mutual coherence and the restricted isometry property. These plug-and-play regularizations can be conveniently incorporated into the training of almost any CNN without extra hassle. We benchmark their effects on state-of-the-art models (ResNet, WideResNet, and ResNeXt) across several of the most popular computer vision datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. We observe consistent performance gains from the proposed regularizations, both in the final accuracies achieved and in faster, more stable convergence. Our code and pre-trained models are publicly available at https://github.com/nbansal90/Can-we-Gain-More-from-Orthogonality.
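To make the plug-and-play idea concrete, the sketch below adds a soft-orthogonality penalty of the form λ‖WWᵀ − I‖²_F, one of the regularization forms considered in this line of work, to a standard training loss in PyTorch. This is a minimal illustrative sketch, not the authors' released implementation; the helper name soft_orthogonality_penalty and the coefficient lam are assumed names chosen for illustration.

```python
import torch
import torch.nn as nn

def soft_orthogonality_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Accumulate lam * ||W W^T - I||_F^2 over all conv and linear layers.

    Each conv kernel of shape (out, in, kH, kW) is flattened into a 2-D
    matrix of shape (out, in * kH * kW) before forming the Gram matrix,
    so the penalty pushes the flattened filters toward orthonormality.
    """
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.reshape(module.weight.shape[0], -1)  # (out, fan_in)
            gram = w @ w.t()                                       # (out, out)
            eye = torch.eye(gram.shape[0], device=w.device)
            penalty = penalty + lam * (gram - eye).pow(2).sum()
    return penalty

# Usage: simply add the penalty to the task loss of any existing model.
# loss = criterion(model(x), y) + soft_orthogonality_penalty(model)
```

Because the penalty is just an extra additive term over existing weight tensors, it requires no architectural changes, which is what makes regularizers of this kind easy to drop into almost any CNN training loop.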

Related articles:
arXiv:1705.08014 [cs.LG] (Published 2017-05-22)
Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
arXiv:1804.10574 [cs.LG] (Published 2018-04-27)
Decoupled Parallel Backpropagation with Convergence Guarantee
arXiv:1807.04511 [cs.LG] (Published 2018-07-12)
Training Neural Networks Using Features Replay