arXiv:2312.09845 [math.NA]

Learned Regularization for Inverse Problems: Insights from a Spectral Model

Martin Burger, Samira Kabri

Published 2023-12-15 (Version 1)

The aim of this paper is to provide a theoretically founded investigation of state-of-the-art learning approaches for inverse problems. We give an extended definition of regularization methods and their convergence in terms of the underlying data distributions, which paves the way for future theoretical studies. Based on a simple spectral learning model previously introduced for supervised learning, we investigate some key properties of different learning paradigms for inverse problems, which can be formulated independently of specific architectures. In particular, we investigate the regularization properties, bias, and critical dependence on training data distributions. Moreover, our framework allows us to highlight and compare the specific behavior of the different paradigms in the infinite-dimensional limit.
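To give a concrete feel for what a spectral learning model for inverse problems can look like, here is a minimal, hypothetical sketch in Python: a diagonal (filter-based) reconstruction in the singular basis of a toy ill-posed operator, where one filter factor per singular component is fitted to training pairs by empirical mean squared error. The forward operator, prior, noise level, and the MSE-based fitting rule are all illustrative assumptions and not the authors' specific construction.

```python
# Hypothetical sketch of a learned spectral (filter-based) regularizer for a
# linear inverse problem y = A x + noise. Illustrates the general idea of
# spectral learning models; not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)

# Toy forward operator with rapidly decaying singular values (ill-posed).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.9 ** np.arange(n)                     # singular values of A
A = U @ np.diag(sigma) @ V.T

# Training data drawn from an assumed prior on x, plus Gaussian noise.
num_train, noise_std = 200, 1e-2
X = rng.standard_normal((n, num_train)) * (1.0 / (1 + np.arange(n)))[:, None]
Y = A @ X + noise_std * rng.standard_normal((n, num_train))

# "Learn" one filter factor per singular component by minimizing the empirical
# MSE of the reconstruction x_hat = V diag(g) U^T y; each g_i solves a scalar
# least-squares problem.
coeff_y = U.T @ Y          # data coefficients   <u_i, y_j>
coeff_x = V.T @ X          # signal coefficients <v_i, x_j>
g = np.sum(coeff_y * coeff_x, axis=1) / np.sum(coeff_y**2, axis=1)

def reconstruct(y):
    """Apply the learned spectral filter to a new measurement y."""
    return V @ (g * (U.T @ y))

# Compare against the plain pseudoinverse (g = 1/sigma), which amplifies noise
# in the small singular directions.
x_true = X[:, 0]
y_obs = A @ x_true + noise_std * rng.standard_normal(n)
print("learned filter error:", np.linalg.norm(reconstruct(y_obs) - x_true))
print("pseudoinverse error: ", np.linalg.norm(V @ ((U.T @ y_obs) / sigma) - x_true))
```

The learned factors g_i act like data-adapted Tikhonov filters: they damp components where the training distribution carries little signal relative to noise, which is the kind of distribution dependence the abstract refers to.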

Related articles:
arXiv:1908.03006 [math.NA] (Published 2019-08-08)
Sparse $\ell^q$-regularization of inverse problems with deep learning
arXiv:2210.14764 [math.NA] (Published 2022-10-26)
Towards a machine learning pipeline in reduced order modelling for inverse problems: neural networks for boundary parametrization, dimensionality reduction and solution manifold approximation
arXiv:2012.00611 [math.NA] (Published 2020-11-30)
On inverse problems modeled by PDE's