arXiv:2106.14997 [stat.ML]

Sharp Lower Bounds on the Approximation Rate of Shallow Neural Networks

Jonathan W. Siegel, Jinchao Xu

Published: 2021-06-28 (Version 1)

We consider the approximation rates of shallow neural networks with respect to the variation norm. Upper bounds on these rates have been established for sigmoidal and ReLU activation functions, but it has remained an important open problem whether these rates are sharp. In this article, we provide a solution to this problem by proving sharp lower bounds on the approximation rates for shallow neural networks, which are obtained by lower bounding the $L^2$-metric entropy of the convex hull of the neural network basis functions. Our methods also give sharp lower bounds on the Kolmogorov $n$-widths of this convex hull, which show that the variation spaces corresponding to shallow neural networks cannot be efficiently approximated by linear methods. These lower bounds apply both to sigmoidal activation functions with bounded variation and to activation functions which are a power of the ReLU. Our results also quantify how much stronger the Barron spectral norm is than the variation norm and, combined with previous results, give the asymptotics of the $L^\infty$-metric entropy up to logarithmic factors in the case of the ReLU activation function.
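For orientation, the display below sketches the standard definitions of the objects named in the abstract (variation norm, $L^2$-metric entropy, Kolmogorov $n$-width). The notation is illustrative: $\mathbb{D}$ stands for the dictionary of neuron activations and $\Omega$ for the domain, and the precise normalization used in the paper may differ.

% Sketch of the standard definitions (illustrative notation; the paper's
% normalization may differ). Here D is the dictionary of single-neuron
% functions, e.g. D = { sigma(w . x + b) }, viewed as a subset of L^2(Omega).

% Variation norm: the gauge (Minkowski functional) of the closed symmetric
% convex hull of the dictionary.
\[
  \|f\|_{\mathcal{K}_1(\mathbb{D})}
    \;=\; \inf\bigl\{\, c > 0 \;:\; f \in c\,\overline{\mathrm{conv}}(\pm\mathbb{D}) \,\bigr\}.
\]

% L^2-metric entropy: the smallest radius at which the set A can be covered
% by 2^n balls in L^2(Omega).
\[
  \epsilon_n(A)_{L^2}
    \;=\; \inf\bigl\{\, \epsilon > 0 \;:\; A \text{ can be covered by } 2^n \text{ balls of radius } \epsilon \,\bigr\}.
\]

% Kolmogorov n-width: the worst-case error of the best approximation of A
% from an optimally chosen n-dimensional linear subspace.
\[
  d_n(A)_{L^2}
    \;=\; \inf_{\dim V_n = n}\; \sup_{f \in A}\; \inf_{g \in V_n} \|f - g\|_{L^2}.
\]

Lower bounds on $\epsilon_n$ and $d_n$ of $\overline{\mathrm{conv}}(\pm\mathbb{D})$ translate directly into lower bounds on how well functions of bounded variation norm can be approximated, which is the mechanism the abstract describes.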

Comments: arXiv admin note: substantial text overlap with arXiv:2101.12365
Categories: stat.ML, cs.LG, math.ST, stat.TH
Subjects: 62M45, 41A46
Related articles:
arXiv:2307.15285 [stat.ML] (Published 2023-07-28)
Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks
arXiv:1804.01592 [stat.ML] (Published 2018-04-04, updated 2019-04-10)
Robust and Resource Efficient Identification of Shallow Neural Networks by Fewest Samples
arXiv:1804.02253 [stat.ML] (Published 2018-04-06, updated 2018-09-24)
A comparison of deep networks with ReLU activation function and linear spline-type methods