arXiv:2103.00815 [math.NA]

Computation complexity of deep ReLU neural networks in high-dimensional approximation

Dinh Dũng, Van Kien Nguyen, Mai Xuan Thao

Published 2021-03-01 (Version 1)

The purpose of the present paper is to study the computation complexity of deep ReLU neural networks approximating functions in Hölder-Nikol'skii spaces of mixed smoothness $H_\infty^\alpha(\mathbb{I}^d)$ on the unit cube $\mathbb{I}^d:=[0,1]^d$. In this context, for any function $f\in H_\infty^\alpha(\mathbb{I}^d)$, we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose output approximates $f$ with a prescribed accuracy $\varepsilon$, and we prove dimension-dependent bounds, explicit in $d$ and $\varepsilon$, for the computation complexity of this approximation, characterized by the size and the depth of the network. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.
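
For a concrete picture of the objects whose complexity is being counted, the sketch below builds a generic fully connected deep ReLU network on the unit cube and reports its size (number of parameters) and depth (number of hidden layers), the two complexity measures named in the abstract. It is an illustrative stand-in, not the authors' explicit construction; the class name, layer widths, and initialization are assumptions made for the example.

```python
# Minimal sketch (not the paper's construction): a fully connected
# deep ReLU network, with "size" = total parameter count and
# "depth" = number of hidden ReLU layers, matching the complexity
# measures discussed in the abstract. Architecture is illustrative.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class DeepReLUNet:
    """Fully connected ReLU network with the given layer widths."""
    def __init__(self, widths, seed=0):
        # widths = [input_dim, hidden_1, ..., hidden_L, output_dim]
        rng = np.random.default_rng(seed)
        self.weights = [rng.standard_normal((m, n)) / np.sqrt(m)
                        for m, n in zip(widths[:-1], widths[1:])]
        self.biases = [np.zeros(n) for n in widths[1:]]

    def __call__(self, x):
        # ReLU on every hidden layer, linear output layer.
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(x @ W + b)
        return x @ self.weights[-1] + self.biases[-1]

    @property
    def size(self):
        # Size: total number of weights and biases.
        return sum(W.size + b.size for W, b in zip(self.weights, self.biases))

    @property
    def depth(self):
        # Depth: number of hidden (ReLU) layers.
        return len(self.weights) - 1

# Usage: a network on the unit cube I^d with d = 4, three hidden layers.
net = DeepReLUNet([4, 32, 32, 32, 1])
x = np.random.default_rng(1).uniform(0.0, 1.0, size=(10, 4))
print(net(x).shape, net.size, net.depth)   # (10, 1), parameter count, 3
```

In the paper's setting, the question is how small size and depth can be made while guaranteeing $\|f - \text{net}\|_\infty \le \varepsilon$ uniformly over the unit ball of $H_\infty^\alpha(\mathbb{I}^d)$, with the bounds tracked explicitly in both $d$ and $\varepsilon$; the adaptive construction tailors the architecture to the target function $f$, while the nonadaptive one fixes it in advance.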

Comments: 30 pages. arXiv admin note: text overlap with arXiv:2007.08729
Categories: math.NA, cs.NA
Related articles:
arXiv:1805.09106 [math.NA] (Published 2018-05-23)
Transformed Rank-1 Lattices for high-dimensional approximation
arXiv:1702.03361 [math.NA] (Published 2017-02-11)
Quasi-Monte Carlo for discontinuous integrands with singularities along the boundary of the unit cube
arXiv:2207.12826 [math.NA] (Published 2022-07-26)
Variable Transformations in combination with Wavelets and ANOVA for high-dimensional approximation