arXiv:1912.04310 [math.NA]

Efficient approximation of high-dimensional functions with deep neural networks

Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek

Published 2019-12-09 (Version 1)

In this paper, we develop an approximation theory for deep neural networks that is based on the concept of a catalog network. Catalog networks are generalizations of standard neural networks in which the nonlinear activation functions can vary from layer to layer, provided they are chosen from a predefined catalog of continuous functions. As such, catalog networks constitute a rich family of continuous functions. We show that, under appropriate conditions on the catalog, catalog networks can be approximated efficiently by standard neural networks, and we provide precise estimates of the number of parameters needed to achieve a given approximation accuracy. We then apply this theory to demonstrate that neural networks can overcome the curse of dimensionality in a variety of high-dimensional approximation problems.

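To make the notion concrete, below is a minimal sketch of a catalog network as a feed-forward pass whose activation function may differ per layer, each drawn from a fixed catalog. The particular catalog entries, the layer widths, and the name catalog_network are illustrative assumptions for this sketch, not the paper's formal construction; a standard neural network is recovered by using the same activation in every layer.

```python
import numpy as np

# Toy catalog of continuous activation functions. The paper's catalogs
# are more general; these entries are purely illustrative.
CATALOG = {
    "relu": lambda x: np.maximum(x, 0.0),
    "tanh": np.tanh,
    "softplus": lambda x: np.log1p(np.exp(x)),
    "id": lambda x: x,  # identity, giving a plain affine layer
}

def catalog_network(x, weights, biases, activations):
    """Feed-forward pass in which the activation may change from layer
    to layer, as long as each one is drawn from the catalog."""
    for W, b, name in zip(weights, biases, activations):
        x = CATALOG[name](W @ x + b)
    return x

# Tiny example: input dimension 4, two hidden layers of width 8,
# scalar output with identity activation on the final layer.
rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(dims, dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]
y = catalog_network(rng.standard_normal(4), weights, biases,
                    ["relu", "tanh", "id"])
print(y)
```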
Related articles:
arXiv:2405.13566 [math.NA] (Published 2024-05-22)
Bounds on the approximation error for deep neural networks applied to dispersive models: Nonlinear waves
arXiv:2206.00934 [math.NA] (Published 2022-06-02)
Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems
arXiv:2105.12080 [math.NA] (Published 2021-05-25)
Operator Compression with Deep Neural Networks