A re-weighted smoothed L0-norm regularized sparse reconstruction algorithm for linear inverse problems

This paper addresses the problems of sparse signal and image recovery using compressive sensing (CS), especially in the presence of Gaussian noise. The main contribution is the regularized re-weighted Composite Sine function smoothed L0-norm minimization (RRCSFSL0) algorithm, whose core consists of the Composite Sine function (CSF), an iteratively re-weighted scheme, and a regularization mechanism. Compared with other state-of-the-art functions, the proposed CSF better approximates the L0-norm and improves reconstruction accuracy, while the new re-weighted scheme promotes sparsity and speeds up convergence. Moreover, the regularization mechanism makes the RRCSFSL0 algorithm more robust against noise. The performance of the proposed algorithm is verified via numerical experiments in noisy environments. Furthermore, experiments and comparisons demonstrate the superiority of the RRCSFSL0 algorithm in image restoration.


Introduction
What makes some scientific problems at once interesting and challenging is how to reconstruct a sparse signal from only a small amount of information. Mathematically, the question is how to obtain the optimal solution of a system with fewer equations than unknowns without additional information. Fortunately, as a theory for solving underdetermined linear inverse problems, CS [1][2][3] offers an efficient framework. The CS measurement model is

y = Φx,  (1)

where y ∈ ℝ^m is the compressed signal, x ∈ ℝ^n is the original signal, m ≪ n, Φ ∈ ℝ^{m×n} is the sensing matrix, and φ_i denotes the i-th column vector of Φ. Therefore, the key problem of CS is to accurately reconstruct the original signal x from the compressed signal y, and the reconstruction process is to solve the following optimization problem [4][5][6]:

x̂ = argmin_x ||x||_0  s.t.  y = Φx,  (2)
where ||·||_0 is the L0-norm, which represents the number of nonzero elements. Considering the effect of noise, equation (2) can be converted into [7]:

x̂ = argmin_x ||x||_0  s.t.  ||y − Φx||_2 ≤ ε,  (3)
where the bound ε is related to the variance of the noise. Equation (3) is a nonconvex, NP-hard problem whose computation is very complicated. In practice, three alternative approaches are usually employed to solve it: (1) Greedy algorithm.
(2) Convex relaxation algorithm. (3) Non-convex relaxation algorithm.

Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence.
Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Greedy algorithms, a representative class of compressive sensing algorithms, mainly include Matching Pursuit (MP) [8], Orthogonal Matching Pursuit (OMP) [9,10], Compressive Sampling Matching Pursuit (CoSaMP) [11], Regularized Orthogonal Matching Pursuit (ROMP) [12], Subspace Pursuit (SP) [13,14], etc. The disadvantage of greedy algorithms is that they are sensitive to noise, and their computational complexity grows with the signal sparsity k (the number of non-zero elements in x). Convex relaxation algorithms, such as Basis Pursuit (BP) [15,16], the Least Absolute Shrinkage and Selection Operator (LASSO) [17], and Basis Pursuit Denoising (BPDN) [18,19], reconstruct the sparse signal by linear programming, but their computational complexity is high. Non-convex relaxation algorithms reformulate equation (3) as [20,21]

x̂ = argmin_x (1/2)||y − Φx||_2^2 + λ||x||_0,  (4)

where λ > 0 is the regularization parameter that balances the trade-off between the deviation term ||y − Φx||_2^2 and the sparsity regularizer ||x||_0. The Focal Underdetermined System Solver (FOCUSS) [22], Iteratively Re-weighted Least Squares (IRLS) [23,24] and Bayesian CS (BCS) [25] are typical approaches to solving equation (4); they have rapid reconstruction speed but easily fall into local optima. [26] proposed a new algorithm called Lp-RLS, which replaces ||x||_0 with a smoothed function F_σ(x) that approximates the L0-norm and can enhance the sparseness of the signal. The above algorithms can suppress noise and achieve successful reconstruction of x; building on them, this paper proposes the RRCSFSL0 algorithm. In this algorithm, a CSF and a new re-weighted function are proposed to approximate ||x||_0 and promote signal sparsity, respectively. The regularization mechanism is then employed to denoise. Finally, the conjugate gradient (CG) method is applied to optimize the RRCSFSL0 objective and approximate the optimal solution. On this basis, the proposed RRCSFSL0 algorithm is applied to image processing.
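To make the greedy family concrete, the following is a minimal Orthogonal Matching Pursuit (OMP) sketch in the spirit of [9,10]. It is an illustrative simplification, not the exact implementations benchmarked later in this paper; the stopping tolerance and least-squares solver choices are assumptions.

```python
import numpy as np

def omp(Phi, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of Phi.

    Returns a sparse estimate x_hat with at most k nonzero entries.
    """
    m, n = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0          # do not reselect chosen atoms
        support.append(int(np.argmax(corr)))
        # Least-squares fit on the enlarged support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat
```

Note that the per-iteration cost is dominated by the correlation step and the growing least-squares solve, which is why the complexity of greedy methods grows with the sparsity k, as stated above.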
This paper is organized as follows: section 2 introduces the theory of the proposed RRCSFSL0 algorithm; section 3 verifies the performance of the RRCSFSL0 algorithm through simulation experiments and shows its application; section 4 concludes the paper.
2. RRCSFSL0 algorithm

2.1. New smoothed L0-norm function model

The problems of using the L0-norm (namely, the need for a combinatorial search for its minimization, and its excessive sensitivity to noise) are both due to the fact that the L0-norm of a vector is a discontinuous function of that vector. Our idea is then to approximate this discontinuous function by a suitable continuous one, and to minimize it by means of a minimization algorithm for continuous functions [30]. Consider the Composite Sine function f_σ(x_i), where σ is a smoothing factor that determines the quality of the approximation and x_i is an independent variable.
Summing f_σ over the components of x defines the smoothed accumulated function F_σ(x), which approximates ||x||_0 for small σ; the Gaussian function exp(−x_i²/(2σ²)) plays the same role in [31][32][33]. The effect of these two smoothed functions at the same σ (σ = 0.1) is shown in figure 1. It can be seen that both the CSF and the Gaussian function are close to ||x||_0 for small values of σ, but the CSF performs better than the Gaussian function at the same σ. In general, the proposed smoothed function has two obvious merits: • It closely matches the L0-norm, so it better fits the CS model and thus performs better in signal recovery.
• It is simple, so it can reduce computational complexity.
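The smoothing idea can be illustrated numerically. Since the closed form of the CSF is not reproduced in this excerpt, the sketch below uses the Gaussian surrogate from the SL0 family [31–33], under which ||x||_0 ≈ n − Σ_i exp(−x_i²/(2σ²)); the test vector is hypothetical.

```python
import numpy as np

def smoothed_l0(x, sigma):
    """Gaussian-smoothed surrogate for the L0-norm (SL0 family):
    ||x||_0 ≈ n - sum_i exp(-x_i^2 / (2*sigma^2)).
    Smaller sigma gives a tighter approximation."""
    return len(x) - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

x = np.array([0.0, 0.0, 3.0, -2.0, 0.0, 1.5])   # true ||x||_0 = 3
for sigma in (1.0, 0.1, 0.01):
    print(sigma, smoothed_l0(x, sigma))          # approaches 3 as sigma shrinks
```

This is exactly the trade-off σ controls: a large σ gives a smooth, easy-to-minimize but loose surrogate, while a small σ gives a tight but nearly discontinuous one, motivating the continuation strategy used later.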

2.2. New re-weighted function design
Signal recovery based on (4) works successfully, but converges slowly. Fortunately, we can obtain the sparsest solution of x faster through a new re-weighted function, given in equation (9). In (9), when x_i → 0, w_i → 1, and when x_i → −∞ or x_i → +∞, w_i → 0. w_i is an even function that monotonically decreases on [0, +∞), which shows that the re-weighted function attains its maximum at x_i = 0 and its minimum as |x_i| → ∞. Ordinarily, when dealing with an optimization problem, each component of x is treated equally; that is, each x_i is re-weighted with the same value w_i = 1 [34]. But for a sparse signal with only a small number of nonzero components, this ignores the differences among the signal components and easily leads to a solution that is not the sparsest. So for a signal x = (x_1, x_2, …, x_n)^T we introduce a re-weighting matrix W = diag(w_1, w_2, …, w_n). In the process of CS optimization, applying W to x converts x into x′ = Wx = (w_1x_1, w_2x_2, …, w_nx_n)^T. In x′, a large w_i, i = 1, 2, …, n, discourages a nonzero entry in the recovered signal, while a small w_i tolerates one. So, in the process of optimization, more components of x are driven closer to zero under the effect of W. With iterative optimization, more values quickly approach zero, which leads to a sparse solution faster. Pant applied a re-weighted function in [35], which can be described as

w_i = 1/(|x_i| + ζ),  (10)

where ζ defaults to 10^−8 in [35] and is a regularization factor used to avoid a zero denominator. Evidently, for a small |x_i|, the re-weighted strategy in (10) yields a large re-weighted value w_i when ζ is small enough, which tends to push |x_i| further down, thus forcing a sparse solution.
But the re-weighted function in (10) also has obvious disadvantages: 1) the re-weighted values become extremely large when the signal components are close to zero, which weakens the relative influence of the larger components; 2) the value of ζ is hard to choose so as to suit the optimization process.
Compared with (10), the re-weighted function in (9) has better characteristics. The range of w_i in (9) is [0, 1], while the range in (10) is (0, 1/ζ]. If ζ equals 1, the two re-weighted functions have the same range, but the one in equation (9) performs better and can significantly improve the effect of each signal component. And if ζ is small enough, the range of w_i in equation (10) becomes very large, which makes the contribution of the larger components of x unremarkable. In conclusion, the proposed re-weighted function has two merits: • It has a proper range that gives each signal component a proper re-weighted value, and when a signal component is close to zero, the re-weighted value does not blow up.
• It requires no parameter tuning such as ζ, and its denominator never equals zero.
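The contrast between the two schemes can be sketched numerically. Since the closed form of (9) is not reproduced in this excerpt, `w_bounded` below is a hypothetical stand-in with the properties stated for (9) (even, w(0) = 1, decaying to 0, range within [0, 1]); `w_pant` assumes the form w_i = 1/(|x_i| + ζ) for (10), consistent with the stated range (0, 1/ζ].

```python
import numpy as np

def w_bounded(x, sigma=0.1):
    """Illustrative stand-in for the re-weighting (9): even, equals 1 at
    x = 0, monotonically decreasing on [0, inf), values in (0, 1].
    (Hypothetical: the exact closed form of (9) is not given here.)"""
    return np.exp(-(x / sigma) ** 2)

def w_pant(x, zeta=1e-8):
    """Assumed Pant-style re-weighting (10): w = 1/(|x| + zeta),
    with range (0, 1/zeta]."""
    return 1.0 / (np.abs(x) + zeta)

x = np.array([0.0, 1e-6, 0.5, 2.0])
print(w_bounded(x))   # stays within [0, 1]; no blow-up near zero
print(w_pant(x))      # near-zero entries receive enormous weights
```

The demo shows the failure mode discussed above: under (10), components near zero receive weights on the order of 1/ζ, swamping the contribution of the larger components, whereas a bounded weight treats every component on a comparable scale.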

2.3. The proposed RRCSFSL0 algorithm and its steps
As explained above, the objective function can be described as in equation (11), where λ is a regularization factor that revises the original objective function, the re-weighted values w_i are given by equation (9) and capture the differences among the signal components, and the differentiable smoothed accumulated function approximates the L0-norm. The gradient of equation (11) can then be written as in equation (12), and, from the objective function, the Hessian of (11) can be readily expressed in closed form as in equations (13)–(16), with i = 1, 2, …, n. In equation (14), U is the differential of the gradient g in equation (12), so H is the differential of G, and in (16), g_i represents the column vectors of g. In fact, solving the objective function in equation (11) translates into an optimization problem. This paper applies the CG method to the RRCSFSL0 algorithm to optimize the objective function. The problem is first handled by a sequential σ-continuation strategy, as detailed in the next paragraph.
Given a small target value σ_T and a sufficiently large initial value σ_1, a monotonically decreasing sequence {σ_t : t = 1, 2, …, T} is generated as in equation (17), where T is the maximum number of iterations.
In the CG algorithm [36], the iterate x^(L) is updated along a direction d_L computed from the gradient and Hessian of (11) evaluated at x = x^(L) using equations (13) and (14), respectively. As shown in equation (21), the step size is positive if H_L is positive definite (PD). We can see from equation (14) that Φ^TΦ is PD and W is PD, so H_L is PD if U_L is PD. To obtain the positive definiteness of U_L, the modification described there is applied, with parameter values the same as in [37]. As for σ, it can be shown that the function F_σ(x) remains convex in the region where the largest magnitude of any component of x is less than σ. Based on this, a reasonable initial value of σ can be chosen to exceed the largest component magnitude of the initial estimate of x.
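To make the overall flow concrete, here is a heavily simplified sketch of the outer loop: σ-continuation over a re-weighted, smoothed-L0-regularized least-squares objective. The Gaussian surrogate stands in for the CSF (whose closed form is not reproduced here), plain gradient steps stand in for the paper's CG inner solver, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def rrcsfsl0_sketch(Phi, y, lam=0.05, T=20, inner=40, c=0.7):
    """Sketch: minimize 0.5*||y - Phi x||^2 + lam * sum_i w_i * p_sigma(x_i),
    with p_sigma(x) = 1 - exp(-x^2/(2 sigma^2)) as a Gaussian surrogate,
    re-weights w_i frozen per outer iteration, sigma shrunk geometrically."""
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)      # minimum-L2-norm start
    sigma = 2.0 * np.max(np.abs(x))                   # large initial sigma
    data_lip = np.linalg.norm(Phi, 2) ** 2            # Lipschitz const of data term
    for _ in range(T):
        w = np.exp(-(x / sigma) ** 2)                 # illustrative re-weights
        step = 1.0 / (data_lip + lam / sigma**2)      # safe step size
        for _ in range(inner):
            pen_grad = w * (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - step * (Phi.T @ (Phi @ x - y) + lam * pen_grad)
        sigma *= c                                    # sigma-continuation
    return x
```

The structure mirrors the steps above: initialize x, compute the re-weights, run inner minimization at the current σ, then shrink σ and repeat, so the surrogate tightens toward the L0-norm as the iterate approaches a sparse solution.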

3. Numerical simulation and analysis
In this section, we verify the performance of the RRCSFSL0 algorithm in the presence of noise and apply the algorithm to image restoration. The numerical simulation platform is MATLAB 2017b, installed on a 64-bit Windows 10 system with an Intel(R) Core(TM) i5-3230M CPU at 2.6 GHz.
First, we analyze the performance of the proposed RRCSFSL0 algorithm in sparse signal recovery and compare it with the SL0 [31][32][33], L2-SL0 [27][28][29] and Lp-RLS [26] algorithms. We fix n = 256 and m = 100 with sparsity degree k = 4s + 1, s = 1, 2, …, 15, or let n = [170, 220, 270, 320, 370, 420, 470, 520], m = n/2, k = n/5. For every experiment, we randomly generate a triple {Φ, x, b}: Φ is an m × n random Gaussian matrix with normalized and centralized rows; the nonzero entries of the sparse signal x ∈ ℝ^n are i.i.d. Gaussian N(0, 1); the noise b follows the Gaussian distribution N(0, ξ). Given the measurement vector y = Φx + b and the sensing matrix Φ, we try to recover the signal x. The parameters of each method are chosen to give its best performance. The RRCSFSL0 procedure runs as: Step 1, initialize x; Step 2, compute W using (9) and σ_t for t = 2, 3, …, T−1 using (17); Step 3, iterate for t = 1, 2, …, T. As shown in figure 2, the SNR of all algorithms decreases sharply with increasing ξ, which shows that noise can seriously affect the performance of an algorithm. Despite this, RRCSFSL0 attains the largest SNR, followed by the three other algorithms with similar SNR, which shows that the de-noising performance of RRCSFSL0 is the best of the four. Figure 3 shows the NMSE of all algorithms as k changes. As can be seen from the figure, the NMSE of RRCSFSL0 rises the slowest as k increases, and at the same k the NMSE of RRCSFSL0 is the smallest. This indicates that the RRCSFSL0 algorithm has the most accurate reconstruction performance.
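The experimental setup and the two quality metrics can be reproduced with a short sketch. The exact noise parameterization (variance versus standard deviation for ξ) and the SNR definition as 10·log10(1/NMSE) are assumptions consistent with the description above.

```python
import numpy as np

def make_instance(n=256, m=100, k=20, xi=0.01, seed=None):
    """One trial as described: m x n Gaussian Phi with centralized,
    normalized rows; k-sparse x with N(0,1) nonzeros; Gaussian noise
    with standard deviation xi (parameterization assumed)."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, n))
    Phi -= Phi.mean(axis=1, keepdims=True)               # centralize rows
    Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)    # normalize rows
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)                  # N(0, 1) nonzeros
    b = xi * rng.standard_normal(m)
    return Phi, x, Phi @ x + b

def nmse(x_true, x_hat):
    """Normalized MSE: ||x_hat - x_true||^2 / ||x_true||^2."""
    return np.linalg.norm(x_hat - x_true) ** 2 / np.linalg.norm(x_true) ** 2

def snr_db(x_true, x_hat):
    """Reconstruction SNR in dB, taken here as 10*log10(1/NMSE)."""
    return 10.0 * np.log10(1.0 / nmse(x_true, x_hat))
```

With these helpers, each curve in figures 2 and 3 corresponds to averaging `nmse` or `snr_db` over many `make_instance` trials at a fixed ξ or k.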
The CRT is shown in figure 4. In terms of CRT, RRCSFSL0 is superior to Lp-RLS but inferior to SL0 and L2-SL0. Therefore, improving the reconstruction speed is one of the main future directions for the RRCSFSL0 algorithm.

The convergence performance comparison of the algorithms
To illustrate the convergence of the proposed RRCSFSL0 algorithm, we present the NMSE over the iterations in figures 5 and 6. For this section, the signal is a random signal x ∈ ℝ^n with measurement vector y ∈ ℝ^m, where n = 256, m = 100, k = 20. Figure 5 shows the NMSE of the SL0, L2-SL0, Lp-RLS and RRCSFSL0 algorithms over the iterations. It can be seen that all of these algorithms eventually converge to a very small value, but RRCSFSL0 clearly has the fastest convergence rate. In addition, the NMSE of the RRCSFSL0 algorithm is the smallest at every iteration. This demonstrates that the re-weighted function proposed in this paper promotes the sparsity of the signal and thus improves the convergence speed. Figure 6 shows the NMSE of the proposed RRCSFSL0 algorithm at different k. As k increases, the convergence speed of RRCSFSL0 gradually decreases, but for k below 25 the convergence remains fast. This shows that RRCSFSL0 has good robustness.
Through the above simulation experiments, we have shown that the RRCSFSL0 algorithm can accurately reconstruct sparse signals under noisy conditions and has good convergence performance, which provides a basis for applying the RRCSFSL0 algorithm to image processing.
The quality of the recovered images is measured by PSNR and SSIM, where the SSIM of images p and q is defined as

SSIM(p, q) = [(2μ_p μ_q + c_1)(2σ_pq + c_2)] / [(μ_p² + μ_q² + c_1)(σ_p² + σ_q² + c_2)],

where μ_p is the mean of image p, μ_q is the mean of image q, σ_p² is the variance of image p, σ_q² is the variance of image q and σ_pq is the covariance between images p and q. The parameters are c_1 = (z_1 L)² and c_2 = (z_2 L)², where z_1 = 0.01, z_2 = 0.03 and L is the dynamic range of the pixel values. The range of SSIM is [−1, 1]; when the two images are identical, SSIM equals 1. Figures 7–9 show the image recovery results when CR is 0.4, 0.5 and 0.6, respectively, and table 2 gives the PSNR and SSIM of the images recovered by the selected algorithms. As shown in figures 7–9, all algorithms can clearly recover the image when CR is over 0.5, but the differences among the algorithms are not visually obvious. The quantitative comparison in table 2 shows that RRCSFSL0 achieves better PSNR and SSIM than the other three selected algorithms, which verifies the good image recovery performance of the proposed RRCSFSL0 algorithm. So, the proposed RRCSFSL0 algorithm can be used for image recovery. Figure 10 shows the anti-noise performance of the proposed RRCSFSL0 algorithm when recovering sparse images. When ξ is less than 0.2, the difference in image recovery is not obvious, but when ξ exceeds 0.2, the quality of the recovered image is significantly reduced. This shows that the proposed RRCSFSL0 has a certain ability to denoise, but under high-noise conditions the effect still needs improvement. Table 3 gives the corresponding data: as ξ gradually increases from 0 to 0.5, both PSNR and SSIM decrease. Through these experiments, we know that the proposed RRCSFSL0 algorithm obtains good results in sparse image restoration. This is mainly because the CSF approximates the L0-norm well, the re-weighted function promotes sparsity, and the regularization mechanism resists noise.
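The two image-quality metrics can be sketched as follows. Note that `ssim_global` computes the SSIM statistic over the whole image in a single window; standard SSIM implementations average it over local windows, which is omitted here for brevity, and the peak value of 255 is an assumption for 8-bit images.

```python
import numpy as np

def psnr(p, q, peak=255.0):
    """Peak signal-to-noise ratio in dB between images p and q."""
    mse = np.mean((np.asarray(p, float) - np.asarray(q, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(p, q, L=255.0, z1=0.01, z2=0.03):
    """Single-window SSIM with c1 = (z1*L)^2, c2 = (z2*L)^2.
    (A simplification: standard SSIM averages over local windows.)"""
    p = np.asarray(p, float); q = np.asarray(q, float)
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov_pq = np.mean((p - mu_p) * (q - mu_q))
    c1, c2 = (z1 * L) ** 2, (z2 * L) ** 2
    return ((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)) / (
        (mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))
```

For identical images the ratio is exactly 1, and any distortion drives SSIM below 1 and PSNR toward smaller values, which is the behavior tables 2 and 3 quantify.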

Conclusions
This paper proposed the RRCSFSL0 algorithm for reconstructing sparse signals and images. RRCSFSL0 is based on the Composite Sine function as a DA function, which better approximates the L0-norm. The re-weighted function is used to promote sparseness, and the regularization mechanism is employed to resist noise. On this basis, the RRCSFSL0 algorithm performs iterative optimization by the CG method to approximate the optimal solution. Furthermore, experiments show that: (1) the RRCSFSL0 algorithm improves accuracy and has a certain anti-noise performance; (2) the RRCSFSL0 has a faster convergence speed; (3) the RRCSFSL0 satisfies the needs of sparse signal and image recovery, and improves the prospects of applying CS to other fields. In our future research, the RRCSFSL0 algorithm will be optimized for running speed and higher anti-noise ability.