arXiv:1810.06397 [stat.ML]

A Priori Estimates of the Generalization Error for Two-layer Neural Networks

Weinan E, Chao Ma, Lei Wu

Published 2018-10-15 (Version 1)

New estimates of the generalization error are established for the two-layer neural network model. These estimates are a priori in nature, in the sense that the bounds depend only on norms of the target function to be fitted, not on the parameters of the trained model. In contrast, most existing results for neural networks are a posteriori in nature, in the sense that the bounds depend on norms of the learned model parameters. The error rates are comparable to those of the Monte Carlo method for integration problems. Moreover, these bounds remain effective in the over-parametrized regime, where the network size is much larger than the size of the dataset.
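To make the a priori / a posteriori distinction concrete, the sketch below computes the path norm of a two-layer ReLU network, a standard parameter norm that a posteriori bounds (and the regularizer in this line of work) are typically stated in terms of. The exact norm and its role here are an illustrative assumption based on the abstract, not a reproduction of the paper's definitions; all names are hypothetical.

```python
import numpy as np

def two_layer_relu(x, a, W, b):
    """Two-layer ReLU network f(x) = sum_k a_k * relu(<w_k, x> + b_k).

    a: (m,) outer-layer weights; W: (m, d) inner weights; b: (m,) biases.
    """
    return a @ np.maximum(W @ x + b, 0.0)

def path_norm(a, W, b):
    """Path norm ||theta||_P = sum_k |a_k| * (||w_k||_1 + |b_k|).

    A posteriori bounds scale with this quantity, which depends on the
    learned parameters; the a priori bounds in the paper instead depend
    on a norm of the target function itself.
    """
    return float(np.sum(np.abs(a) * (np.abs(W).sum(axis=1) + np.abs(b))))

# Example: m = 2 hidden units in d = 2 dimensions.
a = np.array([1.0, -2.0])
W = np.array([[1.0, 0.0], [0.0, 3.0]])
b = np.array([0.5, -1.0])
print(path_norm(a, W, b))                       # 1*(1+0.5) + 2*(3+1) = 9.5
print(two_layer_relu(np.array([1.0, 1.0]), a, W, b))  # 1*1.5 - 2*2 = -2.5
```

Scaling a_k by c and w_k, b_k by 1/c leaves the function unchanged but can change naive parameter norms; the path norm is invariant under this rescaling, which is one reason it is the natural a posteriori quantity here.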
