
Neurocomputing

Volume 223, 5 February 2017, Pages 103-106

Brief papers
Mean-square analysis of the gradient projection sparse recovery algorithm based on non-uniform norm

https://doi.org/10.1016/j.neucom.2016.10.032

Abstract

Building on the previously proposed non-uniform norm, called the lN-norm, which consists of a sequence of l1-norm or l0-norm elements assigned according to relative magnitude, a novel lN-norm sparse recovery algorithm can be derived by projecting the gradient-descent solution onto the reconstruction feasible set. To gain analytical insight into the performance of this algorithm, in this letter we analyze the steady-state mean-square performance of the gradient projection lN-norm sparse recovery algorithm under different levels of sparsity as well as additive noise. Numerical simulations are provided to verify the theoretical results.

Introduction

Obtaining the sparse recovery solution under the framework of compressed sensing (CS) has attracted increasing attention in recent years. The l1-norm reconstruction algorithms, such as the interior-point method for large-scale l1-regularized least squares (l1-ls) [2] and gradient projection for sparse reconstruction [3], are usually adopted to search for a solution with minimum l1-norm. In comparison, because a direct search for the minimum l0-norm solution is in general NP-hard [4], [5], [6], [7], various approximation methods have also been investigated to circumvent the difficulties caused by the l0-norm [4], [5], [6], such as the smoothed l0-norm (SL0) method [8] and l0-norm zero-point attracting projection (l0-ZAP) [9]. Another popular family of approaches is greedy methods, such as matching pursuit (MP) and orthogonal matching pursuit (OMP) [4], [5], in which the approximation is built via an iterative process that searches for the column vectors most closely matching the residual. Based on a block least-squares, minimum mean-square-error cost function, the iteratively reweighted least squares (IRLS) minimization technique is used for iterative sparse recovery [6]. Moreover, in [10] the Frobenius norm and the l1-norm of the Euclidean norm are used to design an iterative optimization algorithm for the structured sparse coding model.

In [11], [12], a new non-uniform norm, called the lN-norm, which consists of a sequence of l0- or l1-norm elements assigned according to relative magnitude, is proposed to exploit sparseness while providing adaptability to sources of different sparsity. It is pointed out in [11] that imposing the lN-norm constraint on the Least Mean Square (LMS) iteration yields an enhanced convergence rate as well as better tolerance of different sparsity levels in system identification. The sparsity exploitation performance of the lN-norm LMS algorithm has been compared with that of other existing sparsity-aware algorithms in [14], [15]. In [16], the concept of the non-uniform norm is further combined with a variable step size to derive the p-norm variable step-size LMS algorithm, which is claimed to outperform the classic lN-norm LMS.

The concept of the lN-norm can also be introduced into sparse signal reconstruction in the presence of source signals with different sparsity. Similar to previous gradient projection type approaches [3], the iterative optimization solution of the proposed lN-norm sparse reconstruction can be derived by directly minimizing the lN-norm cost function via the steepest descent method, and then affine-projecting the solution onto the feasible set. However, analytical steady-state performance results for lN-norm sparse recovery are lacking. In this letter, the steady-state mean-square performance of the lN-norm gradient projection sparse recovery algorithm is theoretically analyzed. Finally, numerical simulation results are provided to verify the analysis.
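The gradient-projection iteration just described can be sketched as follows. This is an illustrative NumPy implementation, not the paper's exact update: it takes a generic (sub)gradient of the sparsity cost as an argument and is instantiated below with the l1 subgradient sign(x); the function names and step-size value are ours.

```python
import numpy as np

def affine_project(x, A, y):
    # Orthogonal projection of x onto the feasible set {x : Ax = y}:
    # x <- x - A^T (A A^T)^{-1} (A x - y)
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - y)

def gradient_projection(A, y, grad, mu=0.01, iters=500):
    # Steepest-descent step on a sparsity cost (via its subgradient `grad`),
    # followed by projection back onto the feasible set, repeated `iters` times.
    x = affine_project(np.zeros(A.shape[1]), A, y)
    for _ in range(iters):
        x = affine_project(x - mu * grad(x), A, y)
    return x
```

With `grad=np.sign`, each iteration shrinks the l1-norm of the iterate while the projection keeps it consistent with the measurements, which is the same descend-then-project structure the lN-norm algorithm uses with its own cost gradient.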

Section snippets

Derivation of the non-uniform norm CS Algorithm

The problem of obtaining the l1-norm or l0-norm sparse solution can be respectively expressed as:

$$\min_{x} \|x\|_1 \ \text{subject to}\ Ax = y, \qquad \min_{x} \|x\|_0 \ \text{subject to}\ Ax = y,$$

where $\|x\|_1 = \sum_{i=1}^{n} |x(i)|$ and $\|x\|_0 = \#\{i \mid x(i) \neq 0,\ i = 1, \ldots, n\}$.
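As a quick illustration of the two sparsity measures, both can be computed directly in NumPy:

```python
import numpy as np

x = np.array([0.0, 3.0, 0.0, -4.0, 0.0, 1.0])

l1 = np.sum(np.abs(x))      # ||x||_1: sum of absolute values -> 8.0
l0 = np.count_nonzero(x)    # ||x||_0: number of nonzero entries -> 3
```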

In [11], [12], the concept of a p-norm-like measure is introduced and defined as:

$$\|x\|_p^p = \sum_{i=1}^{n} |x(i)|^p, \quad 0 \le p \le 1.$$

Furthermore, a non-uniform norm, denoted as $\|x\|_N$ in this study, is defined in [12]. It is noticeable that $\|x\|_N$ utilizes a different value of $p$ for each entry of $x$, i.e.

$$\|x\|_N = \sum_{i=1}^{n} |x(i)|^{p_i}, \quad 0 \le p_i \le 1.$$

Moreover, by classifying
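A minimal sketch of such a non-uniform norm is given below. The classification rule here is an illustrative assumption (entries at or above the mean magnitude are treated as l0 terms, $p_i = 0$, and smaller entries as l1 terms, $p_i = 1$); the exact rule of [12] may differ.

```python
import numpy as np

def lN_norm(x):
    # Non-uniform norm ||x||_N = sum_i |x_i|^{p_i} with p_i in {0, 1}.
    # Assumed classification rule: large entries (>= mean magnitude)
    # contribute an l0-style term (1), small entries an l1-style term (|x_i|).
    mag = np.abs(x)
    p = np.where(mag >= mag.mean(), 0, 1)      # per-entry exponent p_i
    terms = np.where(mag > 0, mag ** p, 0.0)   # treat exactly-zero entries as 0
    return terms.sum()
```

For example, `lN_norm(np.array([5.0, 0.1, 0.0, 0.2]))` counts the large entry 5.0 as 1 (l0 term) and adds the small magnitudes 0.1 and 0.2 (l1 terms), giving 1.3.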

Steady state performance analysis of the lN norm CS algorithm

Without loss of generality, in this letter the entries of the mixing matrix $A$ are independently sampled from a normal distribution with zero mean and variance $\frac{1}{m}$, which ensures that each column vector of $A$ is normalized in expectation.
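This sampling scheme can be reproduced as follows (an illustrative NumPy sketch; the dimensions chosen are arbitrary). With entry variance $1/m$, each column's expected squared norm is $m \cdot \frac{1}{m} = 1$, so the columns are approximately unit norm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 256

# Entries i.i.d. N(0, 1/m): standard deviation 1/sqrt(m).
A = rng.normal(loc=0.0, scale=1.0 / np.sqrt(m), size=(m, n))

# Column norms concentrate around 1 as m grows.
col_norms = np.linalg.norm(A, axis=0)
```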

We define the misalignment vector as $h_j = x_j - x_O$, where $x_O$ is the optimum gradient descent solution, and the actual possible sampling result $y_1$ related to the optimum gradient descent solution can be rewritten as:

$$y_1 = A x_O + w,$$

where $w$ can be regarded as the deviation noise signal between the

Numerical simulation

In this section, numerical simulations are performed to verify the theoretical analysis of the mean-square performance. The MSD of the sparse reconstruction is defined as:

$$\mathrm{MSD} = \|x - x_O\|_2^2,$$

where $x_O$ is the ideal sparse signal of size $n \times 1$. To formulate the CS problem, the matrix and vectors are generated according to (10), and the nonzero values of $x_O$ are generated according to the following Gaussian mixture process:

$$x_i \sim \mathrm{SR} \cdot N(0, \sigma_u^2) + (1 - \mathrm{SR}) \cdot N(0, \sigma_w^2), \quad 0 \le i \le n,$$

where $\sigma_u^2$ and $\sigma_w^2$ are the variances of the
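The Bernoulli-Gaussian signal model and the MSD metric can be sketched as follows. This is an illustrative NumPy implementation under our own naming (`sr` for the sparsity ratio SR, `sigma_u`/`sigma_w` for the two standard deviations); the paper's exact simulation parameters are not reproduced here.

```python
import numpy as np

def generate_sparse_signal(n, sr, sigma_u=1.0, sigma_w=0.01, rng=None):
    # Each entry is drawn from the large-variance N(0, sigma_u^2) with
    # probability sr ("active" entries) and from the small-variance
    # N(0, sigma_w^2) otherwise, yielding an approximately sparse vector.
    rng = np.random.default_rng(rng)
    active = rng.random(n) < sr
    return np.where(active,
                    rng.normal(0.0, sigma_u, n),
                    rng.normal(0.0, sigma_w, n))

def msd(x_hat, x_o):
    # Mean-square deviation MSD = ||x_hat - x_o||_2^2.
    return float(np.sum((x_hat - x_o) ** 2))
```

With `sr = 0.1`, roughly 10% of the entries are significantly nonzero, so varying `sr` is a direct way to test the algorithm's tolerance of different sparsity levels.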

Conclusion

Extending the application of the non-uniform norm for sparsity exploitation from the framework of the LMS algorithm, a novel lN-norm sparse recovery algorithm is derived in this letter by projecting the gradient-descent lN-norm solution onto the reconstruction feasible set. Meanwhile, the steady-state MSD performance of the proposed lN-norm compressed sensing signal recovery algorithm is theoretically investigated for different sparsity levels as well as different additive noise levels. The performance

Acknowledgment

The authors are grateful for funding from the National Natural Science Foundation of China (Project nos. 11274259 and 11574258) in support of the present research. The authors would also like to thank the anonymous reviewers for helping to improve the quality of this letter.


References (16)

  • I. Takigawa et al., Performance analysis of minimum l1-norm solutions for underdetermined source separation, IEEE Trans. Signal Process. (2004)
  • S.J. Kim et al., An interior-point method for large-scale l1-regularized least squares, IEEE J. Sel. Top. Signal Process. (2007)
  • M. Figueiredo et al., Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process. (2007)
  • J.A. Tropp, Greed is good: algorithmic results for sparse approximation, IEEE Trans. Inform. Theory (2004)
  • J.A. Tropp et al., Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inform. Theory (2007)
  • I. Daubechies et al., Iteratively reweighted least squares minimization for sparse recovery, Commun. Pure Appl. Math. (2010)
  • Y. Gu et al., l0 norm constraint LMS algorithm for sparse system identification, IEEE Signal Process. Lett. (2009)
  • M. Hosein et al., A fast approach for overcomplete sparse decomposition based on smoothed l0 norm, IEEE Trans. Signal Process. (2009)
There are more references available in the full text version of this article.


Feiyun Wu received his B.S., M.S., and Ph.D. degrees from Nanchang University, Nanchang, Sun Yat-sen University, Guangzhou, and Xiamen University, Xiamen, China, in 2006, 2010, and 2016, respectively. From 2006 to 2008, he worked as a college lecturer in the Electronic Engineering Department of Nanchang University Gongqing College. During 2013-2015, he was a visiting Ph.D. student at the University of Delaware, DE, US. He is now an assistant professor at Northwestern Polytechnical University, Xi'an, China. His research interests include adaptive signal processing for underwater acoustic communications, compressed sensing, ICA, and wavelet algorithms and their applications.

Feng Tong received his Ph.D. in underwater acoustics from Xiamen University, China, in 2000. From 2000 to 2002, he worked as a post-doctoral fellow in the Department of Radio Engineering, Southeast University, China. Beginning in 2003, he spent one and a half years as a research associate at the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong. From Dec 2009 to Dec 2010 he was a visiting scholar in the Department of Computer Science and Engineering, University of California San Diego, USA. Currently he is a professor in the Department of Applied Marine Physics and Engineering, Xiamen University, China. His research interests focus on underwater acoustic communication and acoustic signal processing. He is a member of the IEEE, the ASC (Acoustical Society of China), and the CSIS (China Ship Instrument Society). He serves on the editorial board of the Journal of Marine Science and Application.
