Sparse Signal Recovery by Stepwise Subspace Pursuit in Compressed Sensing

In this paper, an algorithm named stepwise subspace pursuit (SSP) is proposed for sparse signal recovery. Unlike existing algorithms that select the support set directly from a candidate set, our approach first eliminates unreliable entries from the candidate set through threshold processing and then recovers the signal using the largest correlation coefficients. We demonstrate that SSP significantly outperforms conventional techniques in recovering sparse signals whose nonzero values either have exponentially decaying magnitudes or are drawn from N(0, 1). Experimental results on the Lena image show that SSP outperforms CoSaMP, OMP, and SP in terms of peak signal-to-noise ratio (PSNR) by 5.5 dB, 4.1 dB, and 4.2 dB, respectively.


Introduction
In many applications such as statistical regression [1], digital communications [2], image processing [3,4], multimedia sensor networks [5,6], interpolation/extrapolation [7], and signal deconvolution [8,9], recovering high-dimensional signals from relatively few measurements is a challenging task. Fortunately, many real-world signals are sparse, or can be transformed (e.g., by the DCT or the wavelet packet transform [10]) into a sparse representation, such that only a small fraction of the signal coefficients are nonzero. Compressed sensing [11,12] allows us to recover such sparse signals from very few measurements. In fact, it has been shown that one can exactly recover a K-sparse signal of length N with only O(K log(N/K)) random measurements. Let y ∈ ℝ^M be an observed signal, and let Φ ∈ ℝ^(M×N) be a dictionary of atoms; then the standard formulation of sparse recovery is

x̂ = arg min ‖x‖₀ s.t. y = Φx, (1)

where ‖x‖₀ is the ℓ₀-norm of x, which counts the number of nonzero elements of the vector x.
Finding the exact solution of (1) is known to be NP-hard [7]. Combinatorial approaches are intractable for problems of moderate-to-high dimensionality, and thus one needs to resort to heuristic procedures. However, if the dictionary Φ is nearly orthogonal, (1) can be relaxed to

ŝ = arg min ‖s‖₁ s.t. y = Θs, (2)

where ‖s‖₁ is the ℓ₁-norm of s, Θ = ΦΨ, and s is a sparse vector such that x = Ψs. The rest of the paper is organized as follows. Section 2 summarizes related work on recovery algorithms in compressed sensing. Section 3 describes the stepwise subspace pursuit algorithm. Section 4 compares our work with related algorithms through simulation. Section 5 presents our main conclusions.
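As a concrete illustration of the measurement model y = Φx in (1), the following minimal numpy sketch builds a K-sparse signal and a Gaussian random measurement matrix. The dimensions and the random seed are illustrative choices, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 64, 8          # signal length, measurements, sparsity

# K-sparse signal: K nonzero entries drawn i.i.d. from N(0, 1)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian random measurement matrix with column scaling 1/sqrt(M)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Observed low-dimensional measurement vector
y = Phi @ x
```

Recovery algorithms then attempt to reconstruct x from y and Phi alone, even though M is much smaller than N.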

Related Work
Existing recovery algorithms are roughly classified into three main families: convex relaxation algorithms, Bayesian algorithms, and pursuit algorithms. The algorithm presented in this paper belongs to the pursuit family. The convex relaxation algorithms approximate the nonsmooth and nonconvex ℓ₀-norm by functions that are easier to handle; the resulting problem can then be solved by standard optimization techniques. Well-known instances of this approach are Basis Pursuit (BP) [13] and FOCUSS [14], which approximate the ℓ₀-norm by the ℓ₁-norm and the ℓ_p-norm (p < 1), respectively.
The Bayesian algorithms cast the task as a Bayesian inference problem and apply statistical tools to solve it, that is, they assume a prior distribution for the unknown coefficients that favors sparsity and develop a maximum a posteriori estimator that incorporates the observations. Many algorithms follow this approach; for example, some identify a region of significant posterior mass [15], while others average over the most probable models [16]. One key ingredient of Bayesian algorithms is the choice of a proper prior on the sought sparse vector.
The pursuit algorithms for sparse signal recovery are greedy approaches that iteratively refine the current estimate of the coefficient vector x by modifying one or several coefficients chosen to yield a substantial improvement in approximating the signal. The family of pursuit algorithms divides according to how the support set is updated: single-update or multiple-update algorithms. Single-update algorithms gradually grow the support by sequentially adding new atoms. Their complexity is lower than that of BP, but they require more measurements for accurate reconstruction and, although often effective empirically, they do not offer strong theoretical guarantees. Single-update algorithms include matching pursuit (MP) [17] and orthogonal matching pursuit (OMP) [18]. For many applications, however, single-update algorithms do not offer adequate performance, so researchers have developed more sophisticated pursuit methods, called multiple-update algorithms, that work better in practice and yield essentially optimal theoretical guarantees. These techniques rely on several enhancements of the basic greedy framework: (1) selecting multiple columns per iteration; (2) pruning the set of active columns at each step; (3) solving the least-squares problems iteratively; (4) theoretical analysis using RIP bounds. Many algorithms incorporate some of these features, for example, stagewise orthogonal matching pursuit (StOMP) [19], hard thresholding pursuit (HTP) [20], regularized orthogonal matching pursuit (ROMP) [21], which was the first greedy technique whose analysis was supported by an RIP bound, compressive sampling matching pursuit (CoSaMP) [22], which was the first algorithm to assemble these ideas into essentially optimal performance guarantees, and subspace pursuit (SP) [23]. An accurate sparsity prediction in these algorithms leads to excellent performance.
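As a reference point for the single-update family discussed above, a minimal OMP implementation can be sketched as follows. This is a textbook rendering in numpy, not the authors' code:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: add one atom per iteration,
    then re-fit all selected atoms by least squares."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(K):
        # Select the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on all selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```

The per-iteration least-squares re-fit is what distinguishes OMP from plain MP, which only subtracts the contribution of the newly chosen atom.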
However, once the sparsity is predicted incorrectly, many signals cannot be reconstructed accurately. Pursuit algorithms have often been considered naive, in part because there are contrived examples where the approach fails spectacularly. Recent research has clarified, however, that greedy pursuits succeed empirically and theoretically in many situations where convex relaxation also works. Figure 1 depicts a schematic representation of the proposed SSP algorithm. The ℓth iteration applies matched filtering to the current residual y^(ℓ−1) and obtains the candidate vector Φ*y^(ℓ−1), which contains a small number of significant nonzero values. We then eliminate unreliable entries from the candidate set through threshold processing, keeping only the indices j with |(Φ*y^(ℓ−1))_j| > τ, and from these select the indices with the largest correlations, which are considered reliable at this iteration, via an interim estimate. We merge the indices of the newly selected coordinates with the previous support estimate, thereby updating the intermediate support estimate T̃^ℓ.

Sparse Signal Recovery by Stepwise Subspace Pursuit
We form the new approximation x̂ supported on T̃^ℓ with coefficients given by (Φ_T̃ℓ* Φ_T̃ℓ)^(−1) Φ_T̃ℓ* y, that is, the least-squares projection of y onto the columns indexed by T̃^ℓ. The updated support estimate is obtained from x̂ by keeping the entries with the largest correlations. We then project the vector y onto the columns of Φ belonging to the updated support and check the stopping condition; if it is not yet time to stop, we set ℓ = ℓ + 1 and proceed to the next iteration. The algorithm is summarized in Algorithm 1. The main contribution of the SSP reconstruction algorithm is that it generates the list of candidates sequentially and incorporates a simple method for re-evaluating the reliability of all candidates at each iteration, thereby obtaining the correlations of the candidates before the SP-style pruning step of that iteration.
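Based on the description above, the SSP iteration can be sketched as follows. This is our reading of the algorithm, not the authors' implementation; in particular, the relative threshold `tau * corr.max()` and the stopping rule are illustrative assumptions:

```python
import numpy as np

def ssp(Phi, y, K, tau=0.1, max_iter=20, tol=1e-6):
    """Stepwise subspace pursuit (sketch): threshold the matched-filter
    output, keep the strongest candidates, merge with the previous
    support, then prune by least squares as in SP."""
    M, N = Phi.shape
    support = np.array([], dtype=int)
    residual = y.copy()
    for _ in range(max_iter):
        # Matched filtering of the current residual
        corr = np.abs(Phi.T @ residual)
        # Threshold processing: discard weak candidates
        candidates = np.flatnonzero(corr > tau * corr.max())
        # Retain at most K candidates with the largest correlations
        if candidates.size > K:
            candidates = candidates[np.argsort(corr[candidates])[-K:]]
        # Merge with the previous support estimate
        merged = np.union1d(support, candidates)
        # Least-squares projection onto the merged support
        coef, *_ = np.linalg.lstsq(Phi[:, merged], y, rcond=None)
        # Prune to the K largest coefficients
        keep = np.argsort(np.abs(coef))[-K:]
        support = merged[keep]
        coef_k, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        new_residual = y - Phi[:, support] @ coef_k
        if (np.linalg.norm(new_residual) < tol
                or np.linalg.norm(new_residual - residual) < tol):
            residual = new_residual
            break
        residual = new_residual
    x_hat = np.zeros(N)
    x_hat[support] = coef_k
    return x_hat
```

The thresholding step before the merge is what distinguishes this sketch from plain SP, which merges the K largest correlations without first discarding weak candidates.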

Simulation and Results
In this section, we evaluate the performance of the proposed algorithm through simulations from two perspectives: (1) for sparse 1-dimensional signals, we compare the reconstruction probabilities of the OMP, CoSaMP, SP, and SSP algorithms; (2) for sparse 2-dimensional image signals under the DCT, we compare the effectiveness and accuracy of signal recovery of the OMP, CoSaMP, SP, and SSP algorithms under the same test conditions. In Figure 2, we compare the performance of the SSP algorithm with that of the OMP, CoSaMP, and SP algorithms. The original signal in Figure 2(a) is a K-sparse signal whose K nonzero coefficients are drawn i.i.d. from N(0, 1), with the remaining coefficients of x set to 0. The original signal in Figure 2(b) has nonzero coefficients that are a random permutation of exponentially decaying magnitudes, with the remaining coefficients of x set to 0. Figure 2(a) shows that SSP performs better than OMP, CoSaMP, and SP when the nonzero entries of the sparse signal are drawn from a zero-mean Gaussian with unit variance. We observe that the recovery probability is 1 at low sparsity levels, but recovery becomes inaccurate once the sparsity grows beyond a certain level, at which point more measurements are needed for reliable recovery. Furthermore, the results in Figure 2(a) show that CoSaMP, OMP, and SP can accurately recover signals up to sparsity levels of 15, 17, and 18, respectively, whereas SSP reaches 21. As depicted in Figure 2(b), SSP significantly outperforms the existing methods in the exponentially decaying case.
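The two test-signal families of Figure 2 can be generated as in the numpy sketch below. The decay base 0.8 and the random seed are illustrative assumptions, since the paper does not specify the decay rate:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 256, 20

# Figure 2(a)-style signal: K nonzeros drawn i.i.d. from N(0, 1)
x_gauss = np.zeros(N)
x_gauss[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# Figure 2(b)-style signal: nonzeros are a random permutation of
# exponentially decaying magnitudes (decay base 0.8 is illustrative)
mags = 0.8 ** np.arange(K)
x_exp = np.zeros(N)
x_exp[rng.choice(N, size=K, replace=False)] = rng.permutation(mags)
```

Sweeping K and counting successful reconstructions over many random trials yields recovery-probability curves of the kind plotted in Figure 2.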
Two 256 × 256 test images (Lena and Cameraman) are used to illustrate the quality of the reconstructed images. Our simulation experiments were performed in the MATLAB 2010b environment on an AMD Athlon II X2 245 processor with 2 GB of memory. A Gaussian random matrix was used as the measurement matrix for the OMP, CoSaMP, SP, and SSP algorithms. To validate the effectiveness of the four algorithms, the compression ratio of the test images was set to 0.3333. Figure 3 shows that the reconstruction quality of SSP is better than that of OMP, CoSaMP, and SP under the same experimental conditions. Table 1 reports the PSNR of the reconstructed images and the reconstruction times of the OMP, CoSaMP, SP, and SSP algorithms on the test images. As shown in Table 1, SSP achieves the highest PSNR but also the largest reconstruction time among the compared algorithms, which shows that searching for the maximum-correlation set from the candidates in SSP is a double-edged sword. This maximum-correlation search explains its relatively higher quality compared to OMP, CoSaMP, and SP in this example.
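For reference, the PSNR values in Table 1 correspond to the standard definition for 8-bit images, which can be computed as:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images,
    assuming an 8-bit peak value of 255 by default."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A difference of a few dB, as reported in Table 1, corresponds to a clearly visible change in reconstruction quality.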

Conclusion
In this paper, a stepwise subspace pursuit algorithm for signal reconstruction is proposed that exploits the largest correlations within the candidate set. It obtains accurate solutions that preserve more important coefficients and recover more data than existing algorithms. The experimental results on Lena demonstrate that SSP recovers signals from random measurements more effectively than the OMP, CoSaMP, and SP algorithms, improving the peak signal-to-noise ratio by 5.5 dB, 4.1 dB, and 4.2 dB, respectively. In future work, we will investigate how to reduce the reconstruction time while further improving the quality of the recovered signal.