arXiv:2105.12080 [math.NA]

Operator Compression with Deep Neural Networks

Fabian Kröpfl, Roland Maier, Daniel Peterseim

Published 2021-05-25 (Version 1)

This paper studies the compression of partial differential operators using neural networks. We consider a family of operators parameterized by a potentially high-dimensional space of coefficients that may vary across a wide range of scales. Building on existing methods that compress such a multiscale operator to a finite-dimensional sparse surrogate model on a given target scale, we propose to directly approximate the coefficient-to-surrogate map with a neural network. We emulate the local assembly structure of the surrogates and thus require only a moderately sized network that can be trained efficiently in an offline phase. This enables large compression ratios, and the online computation of a surrogate via simple forward passes through the network is substantially faster than classical numerical upscaling approaches. We apply the abstract framework to a family of prototypical second-order elliptic heterogeneous diffusion operators as an illustrative example.
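
The abstract does not detail the network architecture, so the following is only a minimal sketch of the coefficient-to-surrogate idea, assuming a small PyTorch MLP and hypothetical sizes: a local coefficient patch of p cells is mapped to one m-by-m local surrogate stiffness block, mirroring the local assembly structure mentioned above. The class name, layer widths, and patch/block sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sizes (not from the paper): p cells per local coefficient
# patch, m coarse degrees of freedom per local surrogate block.
p, m = 64, 4

class CoefficientToSurrogate(nn.Module):
    """Illustrative map from one local coefficient patch to one local
    (m x m) surrogate stiffness block; the global sparse surrogate would
    then be assembled from such blocks as in a standard FEM element loop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(p, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, m * m),
        )

    def forward(self, coeff_patch):
        # coeff_patch: (batch, p) -> local blocks: (batch, m, m)
        return self.net(coeff_patch).view(-1, m, m)

model = CoefficientToSurrogate()
patches = torch.rand(10, p)   # 10 sampled local coefficient patches
blocks = model(patches)       # 10 local surrogate stiffness blocks
print(blocks.shape)           # torch.Size([10, 4, 4])
```

Once such a network is trained offline against surrogates computed by a classical upscaling method, the online cost of producing a new surrogate reduces to forward passes like the one above, which is the source of the speed-up claimed in the abstract.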

Related articles:
arXiv:2312.14276 [math.NA] (Published 2023-12-21)
Deep Neural Networks and Finite Elements of Any Order on Arbitrary Dimensions
arXiv:2405.13566 [math.NA] (Published 2024-05-22)
Bounds on the approximation error for deep neural networks applied to dispersive models: Nonlinear waves
arXiv:2206.00934 [math.NA] (Published 2022-06-02)
Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems