arXiv:1604.03364 [math.FA]

Elastic-net regularization versus $\ell^1$-regularization for linear inverse problems with quasi-sparse solutions

De-Han Chen, Bernd Hofmann, Jun Zou

Published 2016-04-12 (Version 1)

We consider the ill-posed operator equation $Ax=y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Hilbert space $Y$, and with a unique solution $x^\dag=\{x^\dag_k\}_{k=1}^\infty$. For cases in which sparsity $x^\dag \in \ell^0$ is expected but often slightly violated in practice, we investigate elastic-net regularization, in comparison with $\ell^1$-regularization, under the assumption $x^\dag \in \ell^1$; here the penalty is a weighted superposition of the $\ell^1$-norm and the square of the $\ell^2$-norm. This approach involves two positive parameters, the weight parameter $\eta$ and the regularization parameter multiplying the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in $\ell^1$-regularization. Based on the variational inequality approach for describing the solution smoothness with respect to the forward operator $A$, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The resulting rate function combines the rate of decay $x^\dag_k \to 0$ as $k \to \infty$ with the classical smoothness properties of $x^\dag$ as an element of $\ell^2$.
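To make the setup concrete, here is a minimal numerical sketch of elastic-net Tikhonov regularization in a finite-dimensional discretization. It assumes the functional has the form $T_{\alpha,\eta}(x) = \|Ax-y\|^2 + \alpha\,(\|x\|_{\ell^1} + \eta\,\|x\|_{\ell^2}^2)$, as suggested by the abstract (one regularization parameter $\alpha$ multiplying the whole penalty, plus the weight $\eta$); the proximal-gradient (ISTA) solver and all names are illustrative choices, not taken from the paper.

```python
import numpy as np

def elastic_net_tikhonov(A, y, alpha, eta, n_iter=1000):
    """Minimize  ||A x - y||^2 + alpha * (||x||_1 + eta * ||x||_2^2)
    by proximal gradient descent (ISTA).  Illustrative sketch only."""
    # Step size: the gradient of ||Ax - y||^2 is 2 A^T (Ax - y),
    # whose Lipschitz constant is 2 * sigma_max(A)^2.
    t = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    lam = t * alpha
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - t * 2.0 * A.T @ (A @ x - y)   # gradient step on the data term
        # Prox of lam * (||.||_1 + eta * ||.||_2^2):
        # soft-threshold at lam, then shrink by 1 / (1 + 2*lam*eta).
        x = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0) / (1.0 + 2.0 * lam * eta)
    return x

# Quasi-sparse demo: the coefficients decay but are not exactly sparse,
# mimicking the "sparsity slightly violated" regime of the abstract.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.array([(-1.0) ** k / (k + 1) ** 3 for k in range(200)])
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = elastic_net_tikhonov(A, y, alpha=0.05, eta=1.0)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Note that the elastic-net proximal map factors into the usual $\ell^1$ soft-thresholding followed by a uniform shrinkage, which is why the two penalty terms combine so cheaply; taking $\eta = 0$ recovers plain $\ell^1$-regularization via ISTA.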

Related articles:
arXiv:math/0307152 [math.FA] (Published 2003-07-10, updated 2003-11-02)
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
arXiv:1409.7610 [math.FA] (Published 2014-09-26)
Generalized Convergence Rates Results for Linear Inverse Problems in Hilbert Spaces
arXiv:1511.02950 [math.FA] (Published 2015-11-10)
Optimal Convergence Rates Results for Linear Inverse Problems in Hilbert Spaces