
Signal Processing

Volume 168, March 2020, 107337

A modified multiple OLS (m2OLS) algorithm for signal recovery in compressive sensing

https://doi.org/10.1016/j.sigpro.2019.107337

Highlights

  • Proposes m2OLS as a new modification of the mOLS algorithm using gOMP-based preselection.

  • The proposed m2OLS has lower computational complexity than mOLS, while achieving recovery performance on par with mOLS for correlated dictionaries.

  • Presents convergence analysis for both noiseless and noisy measurements.

  • Derives a recovery bound in terms of the restricted isometry constant.

Abstract

Orthogonal least squares (OLS) is an important sparse signal recovery algorithm in compressive sensing, which enjoys a higher probability of successful recovery than other well-known recovery algorithms when the columns of the measurement matrix are correlated. Multiple OLS (mOLS) is a recently proposed improvement of OLS that selects multiple candidates per iteration by generalizing the greedy selection principle used in OLS, and thereby converges faster than OLS. In this paper, we present a refined version of the mOLS algorithm in which, at each iteration, we first preselect a suitable submatrix of the measurement matrix and then apply the mOLS computations to the chosen submatrix. Since mOLS now works only on a submatrix and not on the overall matrix, the computational cost reduces drastically. Proving convergence, however, requires ensuring that true candidates pass through the two stages of preselection and mOLS-based identification in succession. This paper presents convergence conditions for both noise-free and noisy signal models. The proposed algorithm enjoys fast convergence similar to mOLS, at a much reduced computational complexity.

Introduction

Signal recovery in compressive sensing (CS) requires evaluation of the sparsest solution to an underdetermined set of equations y = Φx, where Φ ∈ ℂ^{m×n} (m ≪ n) is the so-called measurement matrix and y is the m × 1 observation vector. It is usually presumed that the sparsest solution is K-sparse, i.e., that not more than K elements of x are non-zero, and also that the sparsest solution is unique, which can be ensured by keeping every 2K columns of Φ linearly independent. There exists a popular class of algorithms in the literature called greedy algorithms, which obtain the sparsest x by iteratively constructing the support set of x (i.e., the set of indices of non-zero elements in x) via some greedy principle. Orthogonal Matching Pursuit (OMP) [1] is a prominent algorithm in this category, which, at each iteration, enlarges a partially constructed support set by appending the column of Φ that is most strongly correlated with a residual vector, and then updates the residual vector by projecting y onto the column space of the sub-matrix of Φ indexed by the updated support set and taking the projection error. Tropp and Gilbert [1] have shown that OMP can recover the original sparse vector from a few measurements with exceedingly high probability when the measurement matrix has i.i.d. Gaussian entries. OMP was extended by Wang et al. [2] to the generalized orthogonal matching pursuit (gOMP), where, at the identification stage, multiple columns are selected based on the correlation of the columns of Φ with the residual vector, which allows gOMP to converge faster than OMP. It has, however, been shown recently by Soussen et al. [3] that the probability of success of OMP reduces sharply as the correlation between the columns of Φ increases, and that for measurement matrices with correlated entries, another greedy algorithm, namely Orthogonal Least Squares (OLS) [4], enjoys a much higher probability of recovering the sparse signal than OMP. OLS is computationally similar to OMP except for a more expensive greedy selection step: at each iteration, the partial support set already evaluated is augmented by the index i which minimizes the energy (i.e., the l2 norm) of the resulting residual vector.
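For concreteness, the following is a minimal NumPy sketch of the OMP loop just described, alternating a correlation-based identification step with a least-squares residual update; the function name omp and the fixed iteration count equal to the sparsity K are illustrative choices, not taken from [1].

```python
import numpy as np

def omp(Phi, y, K):
    """Minimal OMP sketch: greedily build a support set of size K.

    Phi : (m, n) measurement matrix, ideally with unit-norm columns
    y   : (m,) observation vector
    K   : assumed sparsity level
    """
    m, n = Phi.shape
    support = []                      # partially constructed support set
    r = y.copy()                      # residual, r_0 = y
    for _ in range(K):
        # Identification: column most strongly correlated with the residual
        i = int(np.argmax(np.abs(Phi.conj().T @ r)))
        support.append(i)
        # Projection: least-squares fit of y onto the selected columns
        x_S, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Residual update: projection error
        r = y - Phi[:, support] @ x_S
    x_hat = np.zeros(n, dtype=Phi.dtype)
    x_hat[support] = x_S
    return x_hat, support
```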

An improved version of OLS called multiple OLS (mOLS) has recently been proposed by Wang et al. [5], where, unlike OLS, a total of L (L > 1) indices are appended to the existing partial support set by suitably generalizing the greedy principle used in OLS. As L indices are chosen each time, the possibility of selecting multiple “true” candidates in each iteration increases, and thus the probability of converging in far fewer iterations than OLS becomes significantly high.
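The mOLS selection rule can be illustrated with a naive sketch that scores every candidate index by the residual norm left after appending it to the current support and keeps the L smallest; the function name mols_select and the brute-force least-squares scoring are assumptions for illustration only, and are far less efficient than the implementation described in [5].

```python
import numpy as np

def mols_select(Phi, y, support, L):
    """Return the L indices that, when appended to `support` (a list of
    column indices), yield the smallest residual l2 norms (naive mOLS rule)."""
    n = Phi.shape[1]
    candidates = [i for i in range(n) if i not in support]
    scores = []
    for i in candidates:
        cols = Phi[:, support + [i]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        scores.append(np.linalg.norm(y - cols @ coef))
    order = np.argsort(scores)[:L]    # L smallest residual energies
    return [candidates[j] for j in order]
```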

In this paper, we present a refinement of the mOLS algorithm, named modified mOLS (m2OLS), where, at each iteration, we first pre-select a total of, say, N columns of Φ by evaluating the correlations between the columns of Φ and the current residual vector and choosing the N largest of them in magnitude. The steps of mOLS are then applied to this pre-selected set of columns. The preselection strategy is identical to the identification strategy of gOMP, so that the chance of multiple “true” candidates entering the pre-selected set is expected to be high. Furthermore, as mOLS subsequently works on this preselected set of columns, and not on the entire matrix Φ, to determine a subset of L columns (L < N), the computational cost reduces drastically compared to conventional mOLS. This is also confirmed by our simulation studies. Deriving conditions of convergence for the proposed algorithm is, however, tricky, as it requires ensuring that, at every iteration, at least one true candidate passes from Φ into the pre-selected set and then from the pre-selected set into the mOLS-determined subset. This paper presents convergence conditions of the proposed algorithm for both noise-free and noisy observations. It also presents the computational steps of an efficient implementation of both mOLS and m2OLS, and brings out the computational superiority of m2OLS over mOLS analytically. Detailed simulation results in support of the claims made are also presented.


Preliminaries

The following notations have been used throughout the paper: ‘H’ in superscript indicates the matrix/vector Hermitian conjugate, H denotes the set of indices {1, 2, ⋯, n}, T denotes the true support set of x, i.e., T = {i ∈ H | [x]_i ≠ 0}, and ϕ_i denotes the ith column of Φ, i ∈ H. For any two vectors u, v ∈ ℂ^n, ⟨u, v⟩ = u^H v. All the columns of Φ are assumed to have unit l2 norm, i.e., ‖ϕ_i‖_2 = 1, which is a common assumption in the literature [1], [5]. For any S ⊆ H, x_S denotes a vector comprising those entries of x whose indices belong to S.

Proposed algorithm

The proposed m2OLS algorithm is described in Table 1. At any kth iteration (k ≥ 1), assume a residual signal vector r_{k−1} and a partially constructed support set T^{k−1} have already been computed (r_0 = y and T^0 = ∅). In the preselection stage, the N columns of Φ having the largest (in magnitude) correlations with r_{k−1} are identified by picking the N largest absolute entries of Φ^H r_{k−1}, and the set S^k containing the corresponding indices is selected. This is followed by the identification stage, in which the mOLS selection rule is applied to the columns indexed by S^k to choose the L indices that are appended to T^{k−1}.
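A compact sketch of one such iteration, combining the gOMP-style preselection with a (naive, least-squares based) mOLS identification restricted to the preselected columns, is given below; the function name m2ols_iteration and the brute-force scoring are illustrative assumptions and do not reproduce the efficient implementation discussed later in the paper.

```python
import numpy as np

def m2ols_iteration(Phi, y, support, r, N, L):
    """One m2OLS iteration sketch.

    support : current (partial) support set, a Python list of column indices
    r       : current residual vector r_{k-1}
    Returns the updated support set and residual.
    """
    # Preselection (gOMP-style): N columns most correlated with the residual
    corr = np.abs(Phi.conj().T @ r)
    corr[support] = 0.0                    # already-selected columns are ignored
    S = np.argsort(corr)[::-1][:N]         # indices of the N largest entries

    # Identification (mOLS-style), restricted to the preselected set S:
    # keep the L indices whose inclusion leaves the smallest residual norm
    scores = []
    for i in S:
        cols = Phi[:, support + [int(i)]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        scores.append(np.linalg.norm(y - cols @ coef))
    chosen = [int(S[j]) for j in np.argsort(scores)[:L]]

    # Support and residual update (projection error onto the chosen columns)
    support = support + chosen
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef
    return support, r
```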

Signal recovery using m2OLS algorithm

In this section, we obtain convergence conditions for the proposed m2OLS algorithm. In particular, we derive conditions for selection of at least one correct index at each iteration, which guarantees recovery of a K-sparse signal by the m2OLS algorithm in a maximum of K iterations.

Unlike in mOLS, proving convergence is trickier for the proposed m2OLS algorithm because of the presence of two selection stages at every iteration, namely preselection and identification. In order that the overall iteration selects at least one correct index, a true candidate must survive both stages in succession.

Comparative analysis of computational complexities of mOLS and m2OLS

By restricting the steps of mOLS to a pre-selected subset of the columns of Φ, the proposed m2OLS algorithm achieves considerable computational savings over mOLS. In this section, we analyze the computational steps involved in both mOLS and m2OLS at the (k+1)th iteration (i.e., assuming that k iterations of either algorithm have been completed), and compare their computational costs in terms of the number of floating point operations (flops) required.

Simulation results

For simulation, we constructed measurement matrices with correlated entries, as used by Soussen et al. [3]. For this, a matrix A is first formed such that a_ij = [A]_ij is given by a_ij = n_ij + t_j, where n_ij ~ N(0, 1/m) i.i.d. ∀ i, j, t_j ~ U[0, τ] ∀ j, and {n_ij} is statistically independent of {t_k}, ∀ i, j, k. The measurement matrix Φ is then constructed from A as ϕ_ij = a_ij/‖a_j‖_2, where ϕ_ij = [Φ]_ij and a_i denotes the ith column of A. Note that in this construction, the random variables n_ij play the role of the statistically independent component of the entries, while the common column offsets t_j introduce correlation between the columns of Φ, with τ controlling its strength.
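This construction is straightforward to reproduce; the sketch below follows the formulas above, with the function name correlated_measurement_matrix and the example parameter values being arbitrary choices for illustration.

```python
import numpy as np

def correlated_measurement_matrix(m, n, tau, rng=None):
    """Build Phi with correlated entries as in the construction above:
    a_ij = n_ij + t_j, with n_ij ~ N(0, 1/m) i.i.d. and t_j ~ U[0, tau],
    then normalize every column of A to unit l2 norm."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # variance 1/m
    t = rng.uniform(0.0, tau, size=n)                        # one offset per column
    A = noise + t                                            # broadcast t_j over rows
    Phi = A / np.linalg.norm(A, axis=0, keepdims=True)       # phi_ij = a_ij / ||a_j||_2
    return Phi

# Example: a larger tau induces stronger correlation between the columns of Phi
Phi = correlated_measurement_matrix(m=128, n=256, tau=2.0, rng=0)
```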

Conclusion

In this paper we have proposed a greedy algorithm for sparse signal recovery which preselects a few (N) possibly “good” indices according to the correlation of the respective columns of the measurement matrix with a residual vector, and then uses an mOLS step to identify a subset of these indices (of size L) to be included in the estimated support set. We have carried out a theoretical analysis of the algorithm using the RIP and have shown that, for the noiseless signal model, if the sensing matrix

Declaration of Competing Interest

The authors declare that they do not have any financial or non-financial conflicts of interest.

Acknowledgment

This research was in part supported by a research grant from the Science and Engineering Research Board, Govt. of India (grant number: EMR/2016/005290) and a chair professorship grant from the Indian National Academy of Engineering.

References (20)
