arXiv Analytics

arXiv:2109.13512 [math.FA]

Neural Networks in Fréchet spaces

Fred Espen Benth, Nils Detering, Luca Galimberti

Published 2021-09-28, updated 2022-05-16 (Version 4)

We define a neural network on infinite dimensional spaces for which we can show the universal approximation property. Indeed, we derive approximation results for continuous functions from a Fréchet space $X$ into a Banach space $Y$. These results generalise the well-known universal approximation theorem for continuous functions from $\mathbb{R}^n$ to $\mathbb{R}$, where approximation is performed with (multilayer) neural networks [15, 25, 18, 29]. Our infinite dimensional networks are built from activation functions that are nonlinear operators, combined with affine transforms; several examples of such activation functions are given. We show furthermore that our neural networks on infinite dimensional spaces can be projected down to finite dimensional subspaces to any desired accuracy, yielding approximating networks that are easy to implement and allow for fast computation and fitting. The resulting neural network architecture is therefore applicable to prediction tasks based on functional data.
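The projection idea in the abstract can be illustrated with a minimal sketch: map an input function to finitely many basis coefficients (a projection onto a finite dimensional subspace), then apply an ordinary feed-forward network of affine maps and nonlinear activations to those coefficients. All names, layer sizes, and the cosine basis below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def project(f, n, grid):
    """Hypothetical finite projection: coefficients of f against a
    cosine basis on [0, 1], computed by trapezoidal quadrature."""
    return np.array([np.trapz(f(grid) * np.cos(k * np.pi * grid), grid)
                     for k in range(n)])

def mlp(x, weights, biases):
    """Plain feed-forward network: affine maps followed by tanh
    activations, with a final affine output layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)
    return weights[-1] @ x + biases[-1]

rng = np.random.default_rng(0)
n = 8                                   # dimension of the projected subspace
grid = np.linspace(0.0, 1.0, 201)
sizes = [n, 16, 16, 1]                  # layer widths (illustrative)
weights = [rng.normal(scale=0.5, size=(m, k)) for k, m in zip(sizes, sizes[1:])]
biases = [rng.normal(scale=0.1, size=m) for m in sizes[1:]]

coeffs = project(np.sin, n, grid)       # input function f(t) = sin(t)
out = mlp(coeffs, weights, biases)      # untrained forward pass, shape (1,)
```

In this sketch the weights are random, so the output is meaningless until the finite dimensional network is fitted to data; the point is only the pipeline function → coefficients → network.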

Related articles:
arXiv:2005.12605 [math.FA] (Published 2020-05-26)
Inverse Function Theorem in Fréchet Spaces
arXiv:1612.00642 [math.FA] (Published 2016-12-02)
Riemann integrability under weaker forms of continuity in infinite dimensional spaces
arXiv:0809.4647 [math.FA] (Published 2008-09-26)
Series expansions in Fréchet spaces and their duals; construction of Fréchet frames