A modified multiple OLS (m2OLS) algorithm for signal recovery in compressive sensing
Introduction
Signal recovery in compressive sensing (CS) requires evaluating the sparsest solution to an underdetermined set of equations y = Φx, where Φ is the m × n (m < n) measurement matrix and y is the m × 1 observation vector. It is usually presumed that the sparsest solution is K-sparse, i.e., no more than K elements of x are non-zero, and also that the sparsest solution is unique, which can be ensured by maintaining every 2K columns of Φ as linearly independent. There exists a popular class of algorithms in the literature called greedy algorithms, which obtain the sparsest x by iteratively constructing the support set of x (i.e., the set of indices of non-zero elements in x) via some greedy principle. Orthogonal Matching Pursuit (OMP) [1] is a prominent algorithm in this category. At each iteration, it enlarges a partially constructed support set by appending the column of Φ that is most strongly correlated with a residual vector, and updates the residual vector by projecting y onto the column space of the sub-matrix of Φ indexed by the updated support set and taking the projection error. Tropp and Gilbert [1] have shown that OMP can recover the original sparse vector from a few measurements with exceedingly high probability when the measurement matrix has i.i.d. Gaussian entries.

OMP was extended by Wang et al. [2] to the generalized orthogonal matching pursuit (gOMP), where, at the identification stage, multiple columns are selected based on the correlations of the columns of Φ with the residual vector, which allows gOMP to enjoy faster convergence than OMP. It has, however, been shown recently by Soussen et al. [3] that the probability of success of OMP reduces sharply as the correlation between the columns of Φ increases, and that for measurement matrices with correlated entries, another greedy algorithm, namely Orthogonal Least Squares (OLS) [4], enjoys a much higher probability of recovering the sparse signal than OMP.
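The OMP loop described above can be illustrated with a minimal NumPy sketch (the function and variable names here are ours, not from the paper):

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily build the support of a K-sparse
    x satisfying y = Phi @ x."""
    support = []
    r = y.copy()                                   # residual starts at y
    x_S = np.zeros(0)
    for _ in range(K):
        # identification: column most strongly correlated with the residual
        i = int(np.argmax(np.abs(Phi.conj().T @ r)))
        support.append(i)
        # projection: least-squares fit of y on the selected columns;
        # the new residual is the projection error
        x_S, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_S
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = x_S
    return x_hat, support
```

For a matrix with orthonormal columns the correlations equal the coefficients themselves, so the K largest-magnitude entries are selected and recovery is exact in K iterations.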
OLS is computationally similar to OMP except for a more expensive greedy selection step: at each iteration, the partial support set already evaluated is augmented by the index i that minimizes the energy (i.e., the squared l2 norm) of the resulting residual vector.
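The OLS selection rule can be sketched as follows. This is an illustrative, unoptimized rendering; practical implementations compute the candidate residual energies recursively rather than solving a fresh least-squares problem per candidate:

```python
import numpy as np

def ols_select(Phi, y, support):
    """One OLS identification step: return the index i (outside the current
    support) that minimizes the energy of the residual left after projecting
    y onto the span of the already-selected columns plus phi_i."""
    best_i, best_err = -1, np.inf
    for i in range(Phi.shape[1]):
        if i in support:
            continue
        cols = Phi[:, list(support) + [i]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        err = np.linalg.norm(y - cols @ coef) ** 2   # residual energy
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```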
An improved version of OLS, called multiple OLS (mOLS), has been proposed recently by Wang et al. [5], where, unlike OLS, a total of L (L > 1) indices are appended to the existing partial support set by suitably generalizing the greedy principle used in OLS. As L indices are chosen each time, the likelihood of selecting multiple “true” candidates in each iteration increases, and thus the probability of converging in far fewer iterations than OLS becomes significantly high.
In this paper, we present a refinement of the mOLS algorithm, named modified mOLS (m2OLS), where, at each iteration, we first pre-select a total of, say, N columns of Φ by evaluating the correlations of the columns of Φ with the current residual vector and choosing the N largest of them in magnitude. The steps of mOLS are then applied to this pre-selected set of columns. The pre-selection strategy is identical to the identification strategy of gOMP, so the chance of multiple “true” candidates entering the pre-selected set is expected to be high. Furthermore, as mOLS subsequently works on this pre-selected set of columns, and not on the entire matrix Φ, to determine a subset of L columns (L < N), the computational cost reduces drastically compared to conventional mOLS. This is also confirmed by our simulation studies. Deriving convergence conditions for the proposed algorithm is, however, tricky, as it requires ensuring the simultaneous passage of at least one true candidate from Φ to the pre-selected set and then from the pre-selected set to the mOLS-determined subset at every iteration. This paper presents convergence conditions of the proposed algorithm for both noise-free and noisy observations. It also presents the computational steps of an efficient implementation of both mOLS and m2OLS, and brings out the computational superiority of m2OLS over mOLS analytically. Detailed simulation results in support of these claims are also presented.
Section snippets
Preliminaries
The following notations have been used throughout the paper: ‘H’ in superscript indicates matrix/vector Hermitian conjugate, Ω denotes the set of indices {1, 2, ⋯, n}, T denotes the true support set of x, i.e., T = {i ∈ Ω : xi ≠ 0}, and ϕi denotes the ith column of Φ. For any two vectors u and v, ⟨u, v⟩ = uHv denotes their inner product. All the columns of Φ are assumed to have unit l2 norm, i.e., ∥ϕi∥2 = 1 ∀ i ∈ Ω, which is a common assumption in the literature [1], [5]. For any S ⊆ Ω, xS denotes a vector comprising those entries of x indexed by S.
Proposed algorithm
The proposed m2OLS algorithm is described in Table 1. At any kth iteration (k ≥ 1), assume that a residual vector r^(k−1) and a partially constructed support set T^(k−1) have already been computed (with r^0 = y and T^0 = ∅). In the preselection stage, the N columns of Φ having the largest (in magnitude) correlations with r^(k−1) are identified by picking the N largest absolute entries of Φ^H r^(k−1), and the set S^k containing the corresponding indices is selected. This is followed by the identification stage,
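One iteration of the scheme just described (pre-selection, identification, projection) can be sketched as follows. This is our own illustrative rendering, not the paper's exact pseudocode from Table 1, and it uses a naive per-candidate least-squares solve for clarity:

```python
import numpy as np

def m2ols_iteration(Phi, y, r, support, N, L):
    """One m2OLS iteration (illustrative sketch): pre-select N columns by
    correlation with the residual (as in gOMP), then apply the mOLS rule to
    keep the L of them giving the smallest residual energies."""
    # preselection: indices of the N largest |<phi_i, r>|
    corr = np.abs(Phi.conj().T @ r)
    corr[list(support)] = -np.inf             # exclude already-selected indices
    S_k = np.argsort(corr)[-N:]
    # identification: residual energy after augmenting the support with each
    # pre-selected candidate
    errs = []
    for i in S_k:
        cols = Phi[:, list(support) + [int(i)]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        errs.append(np.linalg.norm(y - cols @ coef) ** 2)
    chosen = [int(S_k[j]) for j in np.argsort(errs)[:L]]   # L smallest energies
    # projection: update the residual on the enlarged support
    new_support = list(support) + chosen
    coef, *_ = np.linalg.lstsq(Phi[:, new_support], y, rcond=None)
    r_new = y - Phi[:, new_support] @ coef
    return new_support, r_new
```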
Signal recovery using m2OLS algorithm
In this section, we obtain convergence conditions for the proposed m2OLS algorithm. In particular, we derive conditions for selection of at least one correct index at each iteration, which guarantees recovery of a K-sparse signal by the m2OLS algorithm in a maximum of K iterations.
Unlike mOLS, proving convergence is, however, trickier in the proposed m2OLS algorithm because of the presence of two selection stages at every iteration, namely, preselection and identification. In order that the
Comparative analysis of computational complexities of mOLS and m2OLS
By restricting the steps of mOLS to a pre-selected subset of columns of Φ, the proposed m2OLS algorithm achieves considerable computational simplicity over mOLS. In this section, we analyze the computational steps involved in both mOLS and m2OLS at the (k + 1)th iteration (i.e., assuming that k iterations of either algorithm have been completed), and compare their computational costs in terms of number of floating point operations (flops) required.
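The source of the saving can be seen with a back-of-the-envelope flop model. This is our own rough model, not the paper's detailed count: it uses the cost of a naive (k + 1)-column least-squares solve as a stand-in for the per-candidate cost, whereas efficient implementations use recursive updates:

```python
def identification_flops(m, n, k, candidates):
    """Rough flop count for one identification stage (illustrative model):
    computing the correlations Phi^H r costs ~2*m*n flops, and testing each
    candidate with the residual-energy criterion costs roughly one
    (k+1)-column least-squares solve, ~2*m*(k+1)**2 flops."""
    return 2 * m * n + candidates * 2 * m * (k + 1) ** 2

# mOLS tests every remaining column; m2OLS tests only the N pre-selected ones
m, n, k, N = 128, 1024, 10, 24
ratio = identification_flops(m, n, k, n - k) / identification_flops(m, n, k, N)
print(f"approximate per-iteration saving: {ratio:.1f}x")
```

Even under this crude model, restricting the energy tests from n − k candidates to N ≪ n dominates the per-iteration cost comparison.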
Simulation results
For simulation, we constructed measurement matrices with correlated entries, as used by Soussen et al. [3]. For this, a matrix A is first formed whose entries combine i.i.d. random variables {nij} with random variables {tk}, where {nij} is statistically independent of {tk}, ∀ i, j, k. The measurement matrix Φ is then constructed from A as ϕi = ai/∥ai∥2, where ai denotes the ith column of A. Note that in the construction process for Φ, the random variables nij play the role
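A sketch of such a construction is given below. The exact combination of {tk} and {nij} is truncated in the passage above, so we assume each column of A adds i.i.d. Gaussian noise to a common random vector t (with a hypothetical mixing weight sigma); only the final column normalization ϕi = ai/∥ai∥2 is taken directly from the text:

```python
import numpy as np

def correlated_measurement_matrix(m, n, sigma=0.5, rng=None):
    """Correlated measurement matrix in the spirit of [3]. ASSUMED model:
    each column of A is a shared random vector t plus i.i.d. Gaussian noise;
    Phi normalizes each column of A to unit l2 norm (as stated in the text)."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = rng.standard_normal(m)              # shared component -> column correlation
    noise = rng.standard_normal((m, n))     # {n_ij}: i.i.d., independent of t
    A = t[:, None] + sigma * noise          # assumed combination (hedged)
    return A / np.linalg.norm(A, axis=0)    # phi_i = a_i / ||a_i||_2
```

Smaller sigma makes the columns more strongly correlated, which is the regime where OLS-type algorithms outperform OMP.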
Conclusion
In this paper we have proposed a greedy algorithm for sparse signal recovery which preselects a few (N) possibly “good” indices according to correlation of the respective columns of the measurement matrix with a residual vector, and then uses an mOLS step to identify a subset of these indices (of size L) to be included in the estimated support set. We have carried out a theoretical analysis of the algorithm using RIP and have shown that for the noiseless signal model, if the sensing matrix
Declaration of Competing Interest
The authors declare that they have no financial or non-financial conflicts of interest.
Acknowledgment
This research was in part supported by a research grant from the Science and Engineering Research Board, Govt. of India (grant number: EMR/2016/005290) and a chair professorship grant from the Indian National Academy of Engineering.
References (20)
- et al., CoSaMP: iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal. (2009)
- et al., An iterative framework for sparse signal reconstruction algorithms, Signal Process. (2015)
- A short note on compressed sensing with partially known signal support, Signal Process. (2010)
- et al., Nonconvex compressed sensing with partially known signal support, Signal Process. (2013)
- et al., Recovery of signals under the high order RIP condition via prior support information, Signal Process. (2018)
- et al., Sufficient conditions for generalized orthogonal matching pursuit in noisy case, Signal Process. (2015)
- et al., Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inf. Theory (2007)
- et al., Generalized orthogonal matching pursuit, IEEE Trans. Signal Process. (2012)
- et al., Joint k-step analysis of orthogonal matching pursuit and orthogonal least squares, IEEE Trans. Inf. Theory (2013)
- et al., Orthogonal least squares methods and their application to non-linear system identification, Int. J. Control (1989)
Cited by (2)
- Required number of iterations for sparse signal recovery via orthogonal least squares, Journal of Computational Mathematics (2023)
- Analysis of the modified multiple OLS (m2OLS) algorithm for sparse signal recovery with noise, Scientia Sinica Mathematica (2021)