arXiv Analytics

arXiv:1804.02253 [stat.ML]

A comparison of deep networks with ReLU activation function and linear spline-type methods

Konstantin Eckle, Johannes Schmidt-Hieber

Published 2018-04-06, updated 2018-09-24 (version 2)

Deep neural networks (DNNs) generate much richer function spaces than shallow networks. Although the function spaces induced by shallow networks have several approximation-theoretic drawbacks, this does not by itself explain the success of deep networks. In this article we take another route and compare the expressive power of DNNs with ReLU activation function to piecewise linear spline methods. We show that MARS (multivariate adaptive regression splines) is improperly learnable by DNNs in the sense that for any given function that can be expressed as a function in MARS with $M$ parameters there exists a multilayer neural network with $O(M \log (M/\varepsilon))$ parameters that approximates this function up to sup-norm error $\varepsilon.$ We show a similar result for expansions with respect to the Faber-Schauder system. Based on this, we derive risk comparison inequalities that bound the statistical risk of fitting a neural network by the statistical risk of spline-based methods. This shows that deep networks perform better or only slightly worse than the considered spline methods. We provide a constructive proof for the function approximations.
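To make the connection between MARS and ReLU networks concrete, the sketch below (not taken from the paper; all function names and parameter choices are our own) shows the simplest case: a degree-1 (additive) MARS fit is a sum of hinge functions $\max(0, \pm(x_j - t))$, and each hinge is exactly one ReLU unit, so such a fit is representable without error by a one-hidden-layer ReLU network. The paper's $O(M \log(M/\varepsilon))$ bound is needed for the general case, where MARS terms are products of hinges and the products must be approximated by deeper layers.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def mars_additive(x, knots, signs, coefs, intercept=0.0):
    """Degree-1 MARS fit: intercept + sum_m coef_m * max(0, sign_m * (x[:, j_m] - t_m))."""
    out = intercept
    for (j, t), s, c in zip(knots, signs, coefs):
        out = out + c * np.maximum(0.0, s * (x[:, j] - t))
    return out

def relu_network(x, knots, signs, coefs, intercept=0.0):
    """The same function written as one ReLU hidden layer plus a linear output layer."""
    d = x.shape[1]
    W = np.zeros((len(knots), d))
    b = np.zeros(len(knots))
    for m, ((j, t), s) in enumerate(zip(knots, signs)):
        W[m, j] = s       # weight selects the coordinate and the hinge direction
        b[m] = -s * t     # bias places the knot
    hidden = relu(x @ W.T + b)
    return hidden @ np.asarray(coefs) + intercept

# Hypothetical example data and basis functions, chosen only for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
knots, signs, coefs = [(0, 0.2), (2, -0.5)], [+1.0, -1.0], [1.5, -0.7]
print(np.allclose(mars_additive(x, knots, signs, coefs),
                  relu_network(x, knots, signs, coefs)))   # True: exact agreement
```

The exact match here holds only for additive MARS models; once basis functions multiply two or more hinges, a ReLU network can no longer reproduce them exactly and the approximation error $\varepsilon$ in the statement above enters.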

Related articles: Most relevant | Search more
arXiv:2208.05776 [stat.ML] (Published 2022-08-10)
Neural Networks for Scalar Input and Functional Output
arXiv:1805.09091 [stat.ML] (Published 2018-05-23)
Neural networks for post-processing ensemble weather forecasts
arXiv:1503.02531 [stat.ML] (Published 2015-03-09)
Distilling the Knowledge in a Neural Network