arXiv:2303.05262 [math.NA]
Fredholm integral equations for function approximation and the training of neural networks
Patrick Gelß, Aizhan Issagali, Ralf Kornhuber
Published 2023-03-09, updated 2023-04-17 (Version 2)
We present a novel and mathematically transparent approach to function approximation and the training of large, high-dimensional neural networks, based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization, and tensor-train methods. Practical applications to supervised learning problems of regression and classification type confirm that the resulting algorithms are competitive with state-of-the-art neural network-based methods.
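To make the pipeline named in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it discretizes a 1D Fredholm integral equation of the first kind with a Ritz-Galerkin ansatz of piecewise-constant basis functions and solves the resulting Tikhonov-regularized least-squares problem with dense NumPy linear algebra; the tensor-train compression and the application to neural-network training are omitted. The Gaussian kernel, the function name galerkin_tikhonov, and all parameter values are illustrative assumptions.

import numpy as np


def galerkin_tikhonov(kernel, f, n=200, alpha=1e-8):
    r"""Approximate u on [0, 1] solving \int_0^1 k(x, y) u(y) dy = f(x)."""
    h = 1.0 / n
    nodes = (np.arange(n) + 0.5) * h  # cell midpoints of a uniform grid
    # Galerkin matrix A[i, j] ~ \int\int phi_i(x) k(x, y) phi_j(y) dx dy,
    # with piecewise-constant phi_i and one midpoint quadrature node per cell.
    A = kernel(nodes[:, None], nodes[None, :]) * h * h
    b = f(nodes) * h  # b[i] ~ \int phi_i(x) f(x) dx
    # Tikhonov-regularized least squares: (A^T A + alpha I) u = A^T b.
    u = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    return nodes, u


if __name__ == "__main__":
    # Synthetic test (illustrative only): Gaussian smoothing kernel, data
    # generated from a known u_true on a finer quadrature grid so the
    # reconstruction error can be inspected.
    kernel = lambda x, y: np.exp(-50.0 * (x - y) ** 2)
    u_true = lambda y: np.sin(2.0 * np.pi * y)
    yq = (np.arange(1000) + 0.5) / 1000.0
    f = lambda x: kernel(np.atleast_1d(x)[:, None], yq[None, :]) @ u_true(yq) / 1000.0
    nodes, u = galerkin_tikhonov(kernel, f)
    print("max reconstruction error:", np.abs(u - u_true(nodes)).max())

In this dense sketch the regularized normal equations are solved directly; per the abstract, the tensor-train methods enter precisely at this least-squares step so that large, high-dimensional problems remain tractable.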
Related articles:
arXiv:2311.18333 [math.NA] (Published 2023-11-30)
Spherical Designs for Function Approximation and Beyond
arXiv:1503.02352 [math.NA] (Published 2015-03-09)
Infinite-dimensional $\ell^1$ minimization and function approximation from pointwise data
arXiv:1610.02852 [math.NA] (Published 2016-10-10)
Truncation Dimension for Function Approximation