arXiv:1501.03704 [cs.IT]

Does $\ell_p$-minimization outperform $\ell_1$-minimization?

Le Zheng, Arian Maleki, Xiaodong Wang, Teng Long

Published 2015-01-15 (Version 1)

In many application areas we face the following question: can we recover a sparse vector $x_o \in \mathbb{R}^N$ from an undersampled set of noisy observations $y \in \mathbb{R}^n$, where $y = A x_o + w$? The last decade has witnessed a surge of algorithms and theoretical results addressing this question. One of the most popular algorithms is $\ell_p$-penalized least squares (LPLS), given by the formulation \[ \hat{x}(\lambda,p) \in \arg\min_x \frac{1}{2}\|y - Ax\|_2^2+\lambda\|x\|_p^p, \] where $p \in [0,1]$. Despite the non-convexity of these problems for $p<1$, they remain appealing because of the following folklore beliefs in compressed sensing: (i) $\hat{x}(\lambda,p)$ is closer to $x_o$ than $\hat{x}(\lambda,1)$; (ii) if we employ iterative methods that aim to converge to a local minimum of LPLS, then under good initialization these algorithms converge to a solution that is closer to $x_o$ than $\hat{x}(\lambda,1)$. Despite plenty of empirical results supporting these folklore claims, theoretical progress toward establishing them has been very limited. This paper studies these claims and establishes their scope of validity. Starting with the approximate message passing (AMP) algorithm as a heuristic method for solving LPLS, we study the impact of initialization on the performance of AMP. We then employ replica analysis to show the connection between the solution of AMP and $\hat{x}(\lambda, p)$ in the asymptotic setting. This enables us to compare the accuracy of $\hat{x}(\lambda,p)$ for $p \in [0,1]$. In particular, we accurately characterize the phase transition and noise sensitivity of LPLS for every $0\leq p\leq 1$. Our results in the noiseless setting confirm that LPLS exhibits the same phase transition for every $0\leq p <1$, and that this phase transition is much higher than that of LASSO.
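LPLS problems of this kind are typically attacked with iterative thresholding schemes (AMP, used in the paper, adds an Onsager correction term on top of plain iterative thresholding). As a minimal sketch, the following implements ISTA (iterative soft-thresholding) for the convex $p=1$ case, i.e. LASSO. This is an illustrative assumption, not the authors' algorithm or code; the function names and parameter choices are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||x||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(y, A, lam, n_iter=3000):
    # ISTA for the p = 1 instance of LPLS:
    #   min_x 0.5*||y - A x||_2^2 + lam*||x||_1
    # Step size 1/L, where L = sigma_max(A)^2 is the Lipschitz
    # constant of the gradient of the quadratic term.
    n, N = A.shape
    x = np.zeros(N)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of 0.5*||y - Ax||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

For $p<1$ the proximal step is non-convex and soft-thresholding would be replaced by the corresponding $\ell_p$ thresholding function; there the initialization issues studied in the paper come into play.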
