arXiv Analytics

arXiv:2011.10487 [stat.ML]

Normalization effects on shallow neural networks and related asymptotic expansions

Jiahui Yu, Konstantinos Spiliopoulos

Published 2020-11-20 (Version 1)

We consider shallow (single hidden layer) neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units $N$ and gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the $1/\sqrt{N}$ and the mean-field $1/N$ normalizations. We develop an asymptotic expansion for the neural network's statistical output pointwise with respect to the scaling parameter as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that to leading order in $N$ there is no bias-variance trade-off, in that both bias and variance (both explicitly characterized) decrease as the number of hidden units increases and time grows. In addition, we show that to leading order in $N$, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization. Numerical studies on the MNIST and CIFAR-10 datasets show that test and train accuracy monotonically improve as the neural network's normalization gets closer to the mean-field normalization.
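As an illustration only (not code from the paper), the following minimal NumPy sketch shows the kind of network the abstract describes: a single hidden layer whose output is scaled by $1/N^\gamma$ with $\gamma \in [1/2, 1]$, trained by stochastic gradient descent on a toy regression task. The learning-rate scaling $N^{2\gamma-1}$ used below is an assumed choice that keeps the parameter updates of order one; it is not necessarily the schedule analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def shallow_net(x, W, c, gamma):
        # g_N^gamma(x) = N^{-gamma} * sum_i c_i * tanh(W_i . x)
        N = c.shape[0]
        return (c @ np.tanh(W @ x)) / N**gamma

    def sgd_step(x, y, W, c, gamma, lr):
        # One SGD step on the squared loss 0.5 * (g_N^gamma(x) - y)^2.
        N = c.shape[0]
        h = np.tanh(W @ x)                                       # hidden activations, shape (N,)
        err = (c @ h) / N**gamma - y
        grad_c = err * h / N**gamma                              # gradient w.r.t. outer weights
        grad_W = err * np.outer(c * (1.0 - h**2), x) / N**gamma  # gradient w.r.t. inner weights
        return W - lr * grad_W, c - lr * grad_c

    # Toy usage: fit sin(pi * x) on [-1, 1] with N hidden units and gamma = 0.75,
    # an intermediate normalization between 1/sqrt(N) (gamma = 1/2) and mean field (gamma = 1).
    N, d, gamma = 200, 1, 0.75
    W = rng.normal(size=(N, d))
    c = rng.normal(size=N)
    lr = 0.1 * N**(2 * gamma - 1)  # illustrative learning-rate scaling, an assumption

    for _ in range(5000):
        x = rng.uniform(-1.0, 1.0, size=d)
        W, c = sgd_step(x, np.sin(np.pi * x[0]), W, c, gamma, lr)

    print(shallow_net(np.array([0.5]), W, c, gamma), np.sin(np.pi * 0.5))

Varying gamma between 0.5 and 1.0 in this sketch corresponds to moving between the $1/\sqrt{N}$ and mean-field normalizations discussed in the abstract.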

Related articles:
arXiv:2106.14997 [stat.ML] (Published 2021-06-28)
Sharp Lower Bounds on the Approximation Rate of Shallow Neural Networks
arXiv:2307.15285 [stat.ML] (Published 2023-07-28)
Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks
arXiv:2212.02223 [stat.ML] (Published 2022-11-30)
Limitations on approximation by deep and shallow neural networks