arXiv Analytics


arXiv:1311.1923 [math.NA]

Convergence rates in $\ell^1$-regularization when the basis is not smooth enough

Jens Flemming, Markus Hegland

Published 2013-11-08, Version 1

Sparsity-promoting regularization is an important technique for signal reconstruction and several other ill-posed problems. Theoretical investigations are typically based on the assumption that the unknown solution has a sparse representation with respect to a fixed basis. We drop this sparsity assumption and provide error estimates for non-sparse solutions. After discussing a result in this direction published earlier by one of the authors and coauthors, we prove a similar error estimate under weaker assumptions. Two examples illustrate that this weaker set of assumptions indeed covers additional situations that appear in applications.
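To make the setting concrete, a minimal sketch of $\ell^1$-regularized reconstruction, here solved by the standard iterative soft-thresholding algorithm (ISTA); the operator A, data y, and regularization parameter alpha below are illustrative assumptions, not taken from the paper:

```python
# Sketch: minimize 0.5*||A x - y||^2 + alpha*||x||_1 via ISTA.
# All problem data here is synthetic, chosen only for illustration.
import numpy as np

def soft_threshold(v, tau):
    """Componentwise soft-thresholding, the proximal map of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, alpha, n_iter=2000):
    """Iterative soft-thresholding for l^1-regularized least squares."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size <= 1/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * alpha)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 27, 64]] = [1.5, -2.0, 1.0]  # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, alpha=0.1)
```

The soft-thresholding step is what promotes sparsity: every component of the iterate whose magnitude falls below the threshold is set exactly to zero. The paper's point is precisely about what happens when the true solution does not admit such a sparse representation.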

Related articles:
arXiv:1805.11854 [math.NA] (Published 2018-05-30)
Convergence rates of nonlinear inverse problems in Banach spaces via Hölder stability estimates
arXiv:1209.5732 [math.NA] (Published 2012-09-25, updated 2012-10-28)
Convergence rates in $\mathbf{\ell^1}$-regularization if the sparsity assumption fails
arXiv:2405.18034 [math.NA] (Published 2024-05-28)
Convergence rates of particle approximation of forward-backward splitting algorithm for granular medium equations