arXiv Analytics

arXiv:1209.5732 [math.NA]

Convergence rates in $\mathbf{\ell^1}$-regularization if the sparsity assumption fails

Martin Burger, Jens Flemming, Bernd Hofmann

Published 2012-09-25, updated 2012-10-28 (Version 2)

Variational sparsity regularization based on $\ell^1$-norms and other nonlinear functionals has recently gained enormous attention, both with respect to its applications and its mathematical analysis. A focus in regularization theory has been to develop error estimates in terms of the regularization parameter and the noise strength. To this end, specific error measures such as Bregman distances and specific conditions on the solution such as source conditions or variational inequalities have been developed and used. In this paper we provide, for a certain class of ill-posed linear operator equations, a convergence analysis that works for solutions that are not completely sparse but have a fast decaying nonzero part. This case is not covered by standard source conditions, yet surprisingly it can be treated with an appropriate variational inequality. As a consequence, the paper also provides the first examples in which the variational inequality approach, often believed to be equivalent to appropriate source conditions, can indeed go farther than the latter.
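For orientation, here is a minimal sketch of the standard $\ell^1$-Tikhonov setting the abstract refers to; the notation ($A$, $y^\delta$, $\delta$, $\alpha$, $x^\dagger$) is the commonly used one and is assumed here rather than taken from the paper itself. Given an ill-posed linear operator equation $Ax = y$ and noisy data $y^\delta$ with $\|y^\delta - y\| \le \delta$, a regularized solution is chosen as

$$x_\alpha^\delta \in \operatorname*{arg\,min}_{x \in \ell^1} \left( \tfrac{1}{2}\|Ax - y^\delta\|^2 + \alpha \|x\|_{\ell^1} \right),$$

and a convergence rate is an estimate of $\|x_\alpha^\delta - x^\dagger\|_{\ell^1}$ (or of a Bregman distance to the exact solution $x^\dagger$) in terms of $\delta$ under a suitable parameter choice $\alpha = \alpha(\delta)$.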

Related articles:
arXiv:1311.1923 [math.NA] (Published 2013-11-08)
Convergence rates in $\ell^1$-regularization when the basis is not smooth enough
arXiv:1805.11854 [math.NA] (Published 2018-05-30)
Convergence rates of nonlinear inverse problems in Banach spaces via Hölder stability estimates
arXiv:2405.18034 [math.NA] (Published 2024-05-28)
Convergence rates of particle approximation of forward-backward splitting algorithm for granular medium equations