Improved Generalized Sparsity Adaptive Matching Pursuit Algorithm Based on Compressive Sensing

The modified adaptive orthogonal matching pursuit algorithm has a low convergence speed. To overcome this problem, an improved method with faster convergence is proposed. In terms of atom selection, the proposed method computes the correlation between the measurement matrix and the residual and then selects the atoms most related to the residual to construct the candidate atomic set. The number of selected atoms is an integral multiple of the initial step size. In terms of sparsity estimation, the proposed method introduces an exponential function into the sparsity estimation. It uses a larger step size at the beginning of the iteration to accelerate convergence and a smaller step size later to improve reconstruction accuracy. Simulations show that the proposed method has better performance in terms of convergence speed and reconstruction accuracy for both one-dimensional and two-dimensional signals.


Introduction
Compressed sensing (CS) [1] has become a popular research topic in recent years. Compared with traditional compression methods, CS can sample at a rate far below the Nyquist rate and still reconstruct the signal with high probability, which reduces the amount of data to be transferred. Compressed sensing has been applied to medical imaging, radar imaging, and video transmission [2][3][4][5].
Signal reconstruction is one of the most important parts of compressed sensing. A good reconstruction algorithm improves both the accuracy and the speed of signal recovery. In the design of reconstruction algorithms, signal reconstruction based on l2-norm optimization was adopted first. However, the signal obtained by l2-norm optimization is not sparse, and the reconstruction error is large. Therefore, many researchers have turned to optimization algorithms based on the l1-norm or l0-norm to reconstruct sparse signals in compressed sensing. The sparse signal reconstruction methods based on the l1-norm include the basis pursuit (BP) method [6], the iterative thresholding (IT) method [7], and the homotopy method [8]. Some researchers have also proposed a sparse signal recovery method that minimizes the l1-norm minus the l2-norm [9]. The matching pursuit algorithm is a reconstruction algorithm based on the l0-norm. Compared with convex optimization algorithms, it has lower computational complexity and is therefore the most commonly used. The orthogonal matching pursuit (OMP) algorithm [10] is the earliest matching pursuit algorithm. On the basis of OMP, regularized orthogonal matching pursuit (ROMP) [11] uses a regularization rule to refine the selected columns of the measurement matrix. Researchers have also proposed the generalized orthogonal matching pursuit (gOMP) algorithm [12]. The difference between OMP and gOMP is that OMP selects the single atom with the highest correlation in each iteration, whereas gOMP selects the S most relevant atoms (S > 1) for reconstruction. Therefore, gOMP reduces the running time. However, if gOMP selects atoms that do not contain signal information, it cannot delete these atoms in the following iterations, which affects reconstruction performance.
Therefore, researchers proposed compressive sampling matching pursuit (CoSaMP) [13] and subspace pursuit (SP) [14]. Both use a backtracking strategy: they first select the most relevant atoms and then check the atomic correlation again to remove unrelated atoms. Thus, they can improve reconstruction accuracy. Besides, the block orthogonal matching pursuit (BOMP) algorithm has been proposed for block-sparse signals [15,16], and sharp sufficient conditions for stable recovery have been given [17]. In this paper, we mainly consider ordinary sparse signals.
However, these algorithms share a common limitation: the sparsity must be known, whereas it is often unknown in practical applications. To solve this problem, Thong proposes the sparsity adaptive matching pursuit (SAMP) algorithm [18], which can recover signals without knowing the sparsity value. It first sets a small estimated sparsity value, then uses a fixed step size to increase the estimate after each iteration so that it finally approaches the true sparsity value, and reconstructs the signal. However, the fixed step size may make the estimated sparsity value inaccurate and affect the reconstruction accuracy. To overcome this problem, the modified adaptive orthogonal matching pursuit (MAOMP) algorithm [19] was proposed. It uses a factor smaller than 1 to modify the step size, making the step size smaller as the iterations increase. The factor used in MAOMP is fixed, so if the initial step size is too small, a large number of iterations is required, which affects the convergence speed of the MAOMP algorithm.
To improve the convergence speed of the MAOMP algorithm, we use a nonlinear function to modify the factor used in MAOMP. In our proposed method, the factor is variable: it is larger at the beginning of the iteration and gradually decreases as the iteration number grows. This accelerates convergence. Besides, a generalized atom selection strategy is used to select the atoms most related to the residual for constructing the candidate atomic set, thus improving the reconstruction accuracy.

Compressed Sensing Theory
The basic assumption of compressed sensing is that the signal is sparse. If a signal s with length N has K nonzero values, K ≪ N, it is called a K-sparse signal. In many real applications, the signal s itself is not sparse. In order to make the original signal sparse, the sparse basis matrix Ψ = [φ_1, φ_2, ..., φ_N] is used. This can be expressed as

s = Ψα, (1)

where α is the sparse vector with K nonzero values (K ≪ N). Common sparse bases include the fast Fourier transform basis, the discrete Fourier transform basis, the wavelet transform basis, and redundant dictionaries. In order to compress the signal s, a measurement matrix Φ is designed to deal with the signal s. The compressed signal is

y = Φs, (2)

where Φ ∈ R^(M×N), y ∈ R^(M×1), and M ≪ N. The length of the original signal is N and the length of the compressed signal is M. Combined with (1), (2) can be expressed as

y = ΦΨα = Aα, (3)

where A = ΦΨ is an M × N matrix called the sensing matrix. The compressed signal, the measurement matrix, and the sparse basis are known; the aim of reconstruction is to recover the sparse signal and the original signal from this information. Since the number of rows of the sensing matrix is smaller than the number of columns, (3) is underdetermined and cannot be solved by traditional methods. In the compressed sensing method, the l0-minimum-norm method is used to solve (3).
That is,

min ‖α‖_0  subject to  y = Aα. (4)

Some researchers also proposed the l1-minimum-norm method to solve (3). That is,

min ‖α‖_1  subject to  y = Aα. (5)
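As a concrete illustration, the measurement model in (1)-(3) can be sketched in a few lines of Python. The Gaussian measurement matrix and the identity sparse basis (so A = Φ) are assumptions made for the sketch, not requirements of the theory.

```python
import numpy as np

# Toy sketch of the measurement model (1)-(3). The Gaussian measurement
# matrix and the identity sparse basis (Psi = I, so A = Phi) are
# assumptions for illustration only.
rng = np.random.default_rng(0)

N, M, K = 256, 128, 10            # signal length, measurements, sparsity
alpha = np.zeros(N)               # K-sparse coefficient vector, (1)
support = rng.choice(N, K, replace=False)
alpha[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement matrix, (2)
A = Phi                           # sensing matrix A = Phi @ Psi with Psi = I
y = A @ alpha                     # compressed signal of length M, (3)

print(y.shape)                    # (128,): far fewer samples than N = 256
print(np.count_nonzero(alpha))    # 10
```

Reconstruction then amounts to finding the sparsest α consistent with y, as in (4) and (5).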

MAOMP Algorithm
The SAMP algorithm uses a fixed step size to estimate the sparsity, which can be expressed as

L_1 = s, (6)
L_{k+1} = L_k + s, (7)

where k is the iteration number, L_k is the estimated sparsity, and s is the fixed step size. From (6) and (7), the estimated sparsity grows with the iteration number, but the step size is fixed. Therefore, the estimated sparsity is often insufficient or overestimated, which negatively influences the signal reconstruction accuracy. To solve this problem, the MAOMP algorithm was proposed. The MAOMP algorithm uses (8), (9), and (10) to estimate the sparsity:

‖A_Γᵀ y‖_2 < δ_s ‖y‖_2, (8)
s_k = ⌈β^k s_0⌉, (9)
L_{k+1} = L_k + s_k, (10)

where δ_s is a constant between 0 and 1, A is the sensing matrix, y is the compressed vector, Γ indexes the columns selected for the initial estimate, k is the number of iterations, L_k is the estimated sparsity, β is also a constant between 0 and 1, and ⌈a⌉ denotes the smallest integer that is not smaller than a. If (8) is satisfied, the initial sparsity is increased by 1; otherwise, MAOMP uses (9) and (10) to continue estimating the sparsity. The variable step method is expressed by (9) and (10): the step size is gradually reduced to 1 as the iteration number increases. This makes the sparsity estimate more accurate, so the algorithm improves reconstruction accuracy. The detailed steps of MAOMP are shown in Algorithm 1.
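The contrast between the fixed step of SAMP and the shrinking step of MAOMP can be sketched as follows. The exact update form s_k = ⌈β^k s_0⌉ and the starting value L = 1 are assumptions consistent with the description above, not the papers' exact pseudocode.

```python
import math

# Hedged sketch contrasting the SAMP schedule (6)-(7) with a MAOMP-style
# shrinking schedule (9)-(10); beta, s0, and L = 1 are assumed values.
def samp_schedule(s0, iters):
    """Fixed step: the estimated sparsity grows by s0 every iteration."""
    L, out = 1, []
    for _ in range(iters):
        L += s0
        out.append(L)
    return out

def maomp_schedule(s0, beta, iters):
    """Shrinking step: s_k = ceil(beta**k * s0) decays toward 1."""
    L, out = 1, []
    for k in range(1, iters + 1):
        s_k = max(1, math.ceil(beta**k * s0))
        L += s_k
        out.append(L)
    return out

print(samp_schedule(5, 5))          # [6, 11, 16, 21, 26]
print(maomp_schedule(5, 0.6, 5))    # steps 3, 2, 2, 1, 1 -> [4, 6, 8, 9, 10]
```

The shrinking schedule takes ever smaller steps, which is what makes the final sparsity estimate finer but the early progress slow when s_0 is small.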

Proposed Method
Although MAOMP uses (8) to modify the initial sparsity and thus reduce the number of iterations of sparsity estimation, this also increases the computational complexity of the initial estimate. Besides, if the initial step size is small, the step size based on (9) is rapidly reduced to 1; when the initial sparsity is far from the real sparsity, it then takes much time to converge. Moreover, in [19] the researchers show that when the real sparsity is relatively large, whatever the value of δ_s, the optimal estimated initial sparsity is only about half of the real sparsity value. There is still a large distance between the estimated initial sparsity and the real sparsity. Therefore, the algorithm may require many iterations to make the estimated sparsity approach the real sparsity.
To overcome these problems in MAOMP, we improve the method in terms of sparsity estimation and atom selection. Firstly, we directly set the initial sparsity to 1 and use a nonlinear function to adjust the step size so that it is large at the beginning of the iteration process and becomes small as the iterations increase. It is expressed as follows:

s_k = ⌈s_0 · (k/a)^(−k)⌉,  L_{k+1} = L_k + s_k, (11)

where k is the number of iterations, s_k is the step size, s_0 is the initial step size, a is a constant larger than 1, ⌈a⌉ denotes the smallest integer that is not smaller than a, and L_k is the estimated sparsity. According to (11), the step size is large at the beginning of the iteration; as the number of iterations increases, it becomes smaller and gradually decreases to 1. This means that if the distance between the real sparsity and the initial sparsity is large, the proposed method can quickly move the estimated sparsity close to the real sparsity, reducing the time consumed by sparsity estimation. As the number of iterations increases and the estimated sparsity approaches the real sparsity, a smaller step size is used to adjust the estimate and prevent it from being insufficient or overestimated. This makes the sparsity estimation more accurate and faster. The value of the nonlinear function (k/a)^(−k) is shown in Figure 1, where k is the number of iterations and a = 2. From Figure 1, we can see that the function descends quickly for small iteration numbers and slowly for large iteration numbers. This moves the estimated sparsity toward the real sparsity faster at the beginning of the iteration process and slower at the end. Therefore, the proposed method has faster convergence speed and lower reconstruction error. However, if a is too large, the step size is also very large at the beginning of the iteration process. This can lead to overestimation of the sparsity value, thus affecting the accuracy of the algorithm.
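The behavior of the step-size rule can be illustrated numerically; the placement of s_0 inside the ceiling in (11) is an assumption matching the description above.

```python
import math

# Sketch of the nonlinear step-size rule in (11): s_k = ceil(s0 * (k/a)**(-k)).
# The step is large for small k and decays quickly toward a floor of 1.
def step_size(k, a, s0):
    return max(1, math.ceil(s0 * (k / a) ** (-k)))

a, s0 = 2.5, 5
steps = [step_size(k, a, s0) for k in range(1, 7)]
print(steps)   # [13, 8, 3, 1, 1, 1] -- fast descent early, floor of 1 later
```

With a = 2.5 and s_0 = 5, the estimated sparsity jumps quickly in the first few iterations and is then refined one atom at a time, which is the fast-then-fine behavior described above.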
Secondly, we use generalized orthogonal matching pursuit to select atoms according to the correlation between the sensing matrix and the residual vector. It is expressed as

S_k = Max(|Aᵀ r_{k−1}|, num), (12)

Input:
Sensing matrix A ∈ R^(M×N)
Observation vector y ∈ R^(M×1)
Initial step size s_0
Constant parameter β
Stop threshold ε
Initialization: x = 0 (initialize signal approximation)

Journal of Electrical and Computer Engineering
where S_k is the projection set, num is the fixed number of selected atoms, A is the sensing matrix, and r_k is the residual. Compared with MAOMP, our proposed method fixes the number of selected atoms num at each iteration, which reduces the algorithm complexity. The more correlated atoms are selected to extend the candidate atomic set, the better the accuracy. However, if the number of selected atoms is too large, the set will contain atoms with low correlation and reduce the algorithm accuracy. How to choose a suitable num is discussed in the simulations. The proposed algorithm first selects atoms using the generalized orthogonal matching pursuit method, then updates the step size and estimates the sparsity using the new variable step method. The detailed steps of the proposed method are shown in Algorithm 2.
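A minimal sketch of the selection step in (12), assuming the residual-correlation rule described above; the function name and the toy residual are illustrative.

```python
import numpy as np

# Sketch of the generalized atom-selection step (12): pick the `num`
# columns of A most correlated with the current residual.
def select_atoms(A, r, num):
    """Return indices of the num columns of A with largest |A^T r|."""
    corr = np.abs(A.T @ r)
    return np.argsort(corr)[-num:][::-1]   # indices, most correlated first

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
r = A[:, 7] + 0.01 * rng.standard_normal(64)  # residual aligned with atom 7
picked = select_atoms(A, r, num=20)
print(7 in picked)    # the true atom should be among the selected set
```

Selecting num atoms per iteration (rather than one, as in OMP) is what gives the method its generalized, gOMP-like character.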

Parameters Selection.
The selection of the parameter a and the number of selected atoms num directly affects the algorithm performance. In this section, the source signal is a Gaussian signal with length N = 256, the number of measurements is M = 128, and the stop iteration parameter is ε = 10^−6. We first set num to 25 and vary a to search for the optimal a. The relationship between a and the reconstruction probability is shown in Figure 2. From Figure 2(a), we can see that the reconstruction probability decreases as the sparsity level increases. When the sparsity level is between 40 and 45, the reconstruction probability of the proposed method with a = 2 is the highest, followed by a = 3. When the sparsity level is greater than 50, the reconstruction probability with a = 3 is the highest, followed by a = 2. Because a large step size leads to overestimation, the reconstruction probability drops rapidly at K = 30 when a = 4. And when a = 3.2, the reconstruction probability is not as good as with a = 3. Based on the above analysis, the reconstruction probability for a = 3 is higher than for the other values at larger sparsity levels; however, the reconstruction probability becomes poor when a > 3. Thus, we search for a between 2 and 3, as shown in Figure 2(b). From Figure 2(b), the reconstruction probability of the proposed method with a = 2.5 is the highest. Thus, we select a = 2.5 for the experiments.
In the next experiment, we fix a and vary the number of selected atoms num. The results are shown in Figures 3 and 4. In Figure 3, the initial step size is 5, and we can see that, as the sparsity level increases, the reconstruction probability of the proposed method with num = 10 is the lowest among the tested values. When the sparsity level is between 45 and 50, the reconstruction probability with num = 30 is the highest, followed by num = 20. When the sparsity level is greater than 55, the reconstruction probability with num = 20 is the highest. Moreover, from Figure 4, where the initial step size is 10, we can see that the reconstruction probability with num = 20 is the lowest among the tested values, and the reconstruction probability with num = 40 is the highest at all sparsity levels. Comparing Figures 3 and 4, we conclude that when num is four times the initial step size, the reconstruction quality is the best.
Based on the above analysis, we set the number of selected atoms to four times the initial step size and the parameter a to 2.5 in the following experiments. The experimental conditions are as follows: the CPU is an Intel® Core™ i5-8300H at 2.30 GHz, and the size of the RAM is 8 GB.

One-Dimensional Signal Reconstruction.
In this section, we use a one-dimensional signal as the source signal to test the reconstruction performance of the different methods (the SAMP algorithm, the MAOMP algorithm, and our proposed method). The source signal is a Gaussian signal with length N = 256 and sparsity level K = 40, and the stop iteration parameter is ε = 10^−6. The initial step size of all methods is set to 5 and 10. As shown in [19], the MAOMP algorithm gives the best signal reconstruction when the parameter δ_s is 0.2 and β is 0.6; therefore, we select δ_s = 0.2 and β = 0.6 in these experiments. In our proposed algorithm, the parameter a = 2.5 and num is four times the initial step size. Figure 5 shows the reconstruction probability of the signals under different measurement values. As the measurement value increases, the reconstruction probability becomes higher. Our proposed method is significantly better than the other two algorithms. When the measurement value is 70, our proposed algorithm can already reconstruct the signal, while the reconstruction probability of the other two algorithms is still 0. Moreover, whatever the measurement value, our proposed algorithm has a higher reconstruction probability than the other two algorithms.
This means that the proposed method has a higher reconstruction probability under different measurement values.
Next, we compare the reconstruction probability of the algorithms under different sparsity levels. The measurement value is fixed at M = 128, and the other experimental conditions are the same as in the previous experiment. Figure 6 shows the reconstruction probability at different sparsity levels.
It can be seen from Figure 6 that the reconstruction probability decreases as the sparsity level increases. When 45 ≤ K ≤ 50, the SAMP algorithm with s = 5 begins to decline, while the other algorithms still maintain almost 100% reconstruction probability. When 50 ≤ K ≤ 70, the reconstruction probabilities of all algorithms begin to decline. In particular, at K = 65, the reconstruction probabilities of SAMP and MAOMP with an initial step size of 10 drop to 0, whereas the reconstruction probability of our proposed method remains higher than that of the other two algorithms. When the sparsity level is 70, the proposed method with s = 10 can still reconstruct the signal with probability 37.54%, while the reconstruction probability of the other two algorithms has dropped to 0.
This shows that the proposed method has a higher reconstruction probability under different sparsity levels.

Input:
Sensing matrix A ∈ R^(M×N)
Observation vector y ∈ R^(M×1)
Constant parameter a
Initial step size s_0
The number of atoms selected each time num
Tolerance used to exit loop ε
Initialize parameters: x = 0, r_0 = y, L_1 = 1, F = ∅, k = 1
(1) compute the correlation |Aᵀ r_{k−1}| and select the num atoms with the largest correlation to form S_k
(2) merge the candidate set C_k = F ∪ S_k
(3) get the estimated signal value and residual error by the least squares algorithm: x_{C_k}
(4) prune to obtain the current support index set F = Max(|x_{C_k}|, L_k)
(5) update the final signal estimate by the least squares algorithm and compute the residual error; if ‖r_k‖_2 < ε, stop; otherwise update the step size by (11) and return to (1)
Output: x̂ = x_k (the sparse approximation of the signal x)
ALGORITHM 2: Proposed algorithm.
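The steps of Algorithm 2 can be sketched end to end as follows. This is a hedged reimplementation under stated assumptions (the step rule s_k = ⌈s_0(k/a)^(−k)⌉ from (11), num = 4·s_0, and plain least-squares updates); variable names are illustrative rather than the paper's exact pseudocode.

```python
import math
import numpy as np

# Hedged end-to-end sketch of Algorithm 2 under the assumptions stated
# in the lead-in; not the paper's exact pseudocode.
def proposed_recover(A, y, s0=5, a=2.5, num=None, eps=1e-6, max_iter=100):
    _, N = A.shape
    num = num or 4 * s0
    F = np.array([], dtype=int)          # current support estimate
    L = 1                                # estimated sparsity, starts at 1
    r = y.copy()
    x = np.zeros(N)
    for k in range(1, max_iter + 1):
        corr = np.abs(A.T @ r)           # step (1): correlation with residual
        S = np.argsort(corr)[-num:]      # num most correlated atoms, (12)
        C = np.union1d(F, S)             # step (2): candidate set
        xC, *_ = np.linalg.lstsq(A[:, C], y, rcond=None)    # step (3)
        keep = C[np.argsort(np.abs(xC))[-min(L, len(C)):]]  # step (4): prune
        xF, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None) # step (5)
        r = y - A[:, keep] @ xF
        F = keep
        x = np.zeros(N)
        x[keep] = xF
        if np.linalg.norm(r) < eps:      # tolerance reached: exit loop
            break
        L += max(1, math.ceil(s0 * (k / a) ** (-k)))  # variable step, (11)
    return x

# Usage: recover a 12-sparse signal from 128 Gaussian measurements.
rng = np.random.default_rng(2)
N, M, K = 256, 128, 12
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true
x_hat = proposed_recover(A, y)
print(np.linalg.norm(x_hat - x_true) < 1e-4)
```

The backtracking prune in step (4) is what allows wrongly selected atoms to be discarded later, in contrast to gOMP.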
By comparing the one-dimensional signal results under different measurement values and different sparsity levels, it can be seen that our proposed method has an obvious advantage in one-dimensional signal reconstruction.

Two-Dimensional Image Reconstruction.
In this section, we use a 256 × 256 grayscale image, Lena, as the source signal to test the reconstruction performance of the different methods (the SAMP algorithm, the MAOMP algorithm, and our proposed method). The wavelet basis is used as the sparse basis to sparsify the image. Each algorithm is tested with initial step sizes of 1, 5, and 10. The stop iteration parameter is ε = 10^−6, the β of the MAOMP algorithm is 0.6, and δ_s is 0.2. In our proposed method, the parameter a = 2.5 and num is four times the initial step size. Figure 7 shows the two-dimensional signal Lena reconstructed by our proposed method, the SAMP method, and the MAOMP method with a sampling rate of 0.6. We use the peak signal-to-noise ratio (PSNR) to measure the quality of image reconstruction, which can be expressed as

MSE = (1/(MN)) Σ_{i=1}^{M} Σ_{j=1}^{N} [x(i, j) − x̂(i, j)]²,

PSNR = 10 log_10(MAX_x² / MSE) = 10 log_10((2^n − 1)² / MSE),

where M = N = 256, x̂(i, j) represents the reconstructed value at the corresponding position of the test image, and x(i, j) represents the original value at that position. MSE is the mean squared error, and MAX_x is the maximum value of the image. Each pixel takes 2^8 = 256 gray levels, so each sample is 8 bits and n = 8. The larger the PSNR value, the better the image quality. Figures 7 to 9 show the original signal and the signals reconstructed by SAMP, MAOMP, and the proposed method with initial step sizes of 1, 5, and 10, respectively. It can be seen from Figures 7 and 8 that the PSNR of SAMP and MAOMP decreases as the initial step size becomes larger. SAMP cannot reconstruct the signal when the initial step size is 5 or 10, and the PSNR of the MAOMP algorithm is also low when the initial step size is 10. This is because a large initial step size causes the sparsity value to be overestimated and affects accuracy. However, as shown in Figure 9, our proposed method uses (11) to adjust the step size so that it gradually decreases.
This prevents overestimation for larger initial step sizes and reduces the error. Therefore, based on the analysis of Figures 7 to 9, the proposed method has better performance in terms of error and stability than the SAMP and MAOMP algorithms for the two-dimensional image. As can also be seen from Figures 7 to 9, SAMP and MAOMP give better reconstruction when the initial step size is 1; therefore, we select s = 1 in the following experiments. Figure 10 shows the PSNR values at different sampling rates. In this experiment, we choose sampling rates of 0.3, 0.4, 0.5, 0.6, 0.7, and 0.8 and set the initial step size to 1. The test image is Lena with size 256 × 256, and the other experimental conditions are the same as in the previous experiment. As can be seen from Figure 10, the PSNR value becomes larger as the sampling rate increases. Compared with the other two algorithms, the PSNR value of our proposed method is the highest, which shows that the proposed method has a smaller error in reconstructing images. Figure 11 shows the recovery times of the three algorithms at different sampling rates. When the sampling rate is 0.3, the three algorithms have almost the same recovery time. As the sampling rate increases, the recovery time becomes longer. Our proposed method still consumes less time than the other methods at all sampling rates. When the sampling rate is 0.8, the proposed method runs 10.28 seconds faster than SAMP and 4.46 seconds faster than MAOMP. This proves that the proposed method has better convergence speed than the other two algorithms. Based on the above analysis, the proposed method has smaller error, better stability, and faster convergence than the SAMP and MAOMP algorithms.
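The PSNR metric defined above can be computed directly; the helper name and the one-pixel toy example are illustrative, not part of the experiments.

```python
import numpy as np

# Sketch of the PSNR metric for n-bit images (n = 8 here):
# PSNR = 10 * log10(MAX^2 / MSE) with MAX = 2**n - 1.
def psnr(original, reconstructed, n_bits=8):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images: infinite PSNR
    max_val = 2 ** n_bits - 1
    return 10 * np.log10(max_val ** 2 / mse)

img = np.full((256, 256), 128, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] += 10                    # perturb a single pixel by 10 levels
print(round(psnr(img, noisy), 2))    # about 76.3 dB
```

Higher PSNR means the reconstruction is closer to the original image, which is why it is used as the quality measure in Figures 7 to 10.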

Conclusions
In this paper, we proposed a generalized sparsity adaptive matching pursuit algorithm with a variable step size. The algorithm uses the idea of generalized atom selection to choose more atoms at the beginning of the iteration, and the signal estimate is made more accurate by backtracking. For the variable step size, a nonlinear-function step-size rule is proposed. It makes the step size large at the beginning, which speeds up the convergence of the algorithm; the step size is then reduced to 1, so the sparsity estimate is more accurate, thereby improving the reconstruction accuracy and reducing the running time of the algorithm.
Simulation results demonstrate that our proposed method has better reconstruction performance than the SAMP and MAOMP algorithms. For the one-dimensional Gaussian signal, across different measurement values and different sparsity levels, the reconstruction probability of our proposed method is the best, and the signal can be reconstructed at low measurement values or high sparsity. For the two-dimensional image, our proposed method has better reconstruction quality as measured by PSNR. Compared with MAOMP and SAMP, our proposed method has faster convergence. Moreover, as the initial step size increases, our proposed method can still reconstruct images with high quality. In a word, our proposed method is better than similar algorithms in terms of convergence speed and reconstruction accuracy.

Data Availability
All the data used for the simulations can be obtained from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.