arXiv Analytics

arXiv:1909.03172 [cs.LG]

Towards Understanding the Importance of Noise in Training Neural Networks

Mo Zhou, Tianyi Liu, Yan Li, Dachao Lin, Enlu Zhou, Tuo Zhao

Published 2019-09-07 (Version 1)

Numerous empirical studies have corroborated that noise plays a crucial role in the effective and efficient training of neural networks. The theory behind this, however, is still largely unknown. This paper studies this fundamental problem by training a simple two-layer convolutional neural network model. Although training such a network requires solving a nonconvex optimization problem with a spurious local optimum and a global optimum, we prove that perturbed gradient descent and perturbed mini-batch stochastic gradient algorithms, in conjunction with noise annealing, are guaranteed to converge to a global optimum in polynomial time from arbitrary initialization. This implies that noise enables the algorithms to efficiently escape from the spurious local optimum. Numerical experiments are provided to support our theory.
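
The following is a minimal sketch (not the authors' code) of perturbed gradient descent with noise annealing as described in the abstract: Gaussian noise is injected into each gradient step, and its magnitude is decayed over time so that early, large perturbations help the iterates escape the spurious local optimum while later, small noise permits convergence. The gradient oracle, step size, and annealing schedule below are illustrative placeholders for the two-layer CNN training loss studied in the paper.

```python
import numpy as np

def perturbed_gd(grad, x0, lr=0.1, sigma0=1.0, decay=0.999,
                 n_iters=10_000, seed=0):
    """Perturbed gradient descent with annealed Gaussian noise (sketch).

    grad   : callable returning the gradient of the loss at x (placeholder)
    x0     : initial iterate (arbitrary initialization is allowed)
    sigma0 : initial noise standard deviation
    decay  : multiplicative annealing factor applied each iteration
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    sigma = sigma0
    for _ in range(n_iters):
        noise = sigma * rng.standard_normal(x.shape)  # injected perturbation
        x -= lr * (grad(x) + noise)                   # noisy gradient step
        sigma *= decay                                # anneal the noise level
    return x
```

A perturbed mini-batch stochastic gradient variant would follow the same pattern, with `grad` replaced by a stochastic gradient estimate computed on a sampled mini-batch.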

Related articles:
arXiv:1905.01422 [cs.LG] (Published 2019-05-04)
NAMSG: An Efficient Method For Training Neural Networks
Yushu Chen et al.
arXiv:1805.09214 [cs.LG] (Published 2018-05-23)
A Unified Framework for Training Neural Networks
arXiv:1807.04511 [cs.LG] (Published 2018-07-12)
Training Neural Networks Using Features Replay