Compressed Imaging Reconstruction Based on Block Compressed Sensing with Conjugate Gradient Smoothed l0 Norm

Compressed imaging reconstruction technology applies the theory of block compressed sensing to traditional optical imaging systems to reconstruct high-resolution images from a small number of observations, and its reconstruction accuracy is determined mainly by the reconstruction algorithm. In this work, we design a reconstruction algorithm based on block compressed sensing with a conjugate gradient smoothed l0 norm, termed BCS-CGSL0. The algorithm has two parts. The first part, CGSL0, optimizes the SL0 algorithm by constructing a new inverse trigonometric fraction function to approximate the l0 norm and uses the modified conjugate gradient method to solve the optimization problem. The second part combines the BCS-SPL method under the framework of block compressed sensing to remove the block effect. Analysis shows that the algorithm can reduce the block effect while improving the accuracy and efficiency of reconstruction, and simulation results verify that the BCS-CGSL0 algorithm has significant advantages in reconstruction accuracy and efficiency.


Introduction
With the development of information technology and the continuous improvement of related requirements in the military and civilian fields, higher standards have been set for the imaging resolution of optical systems. Traditional optical imaging systems improve resolution mainly by increasing the focal length and aperture of the optical system, reducing the size of detector pixels, and increasing the number of pixels. In practical engineering, the aperture and focal length of optical systems are difficult to increase further and are constrained by observation conditions. Meanwhile, detector pixel sizes are already close to their physical limit, making further reduction difficult, and increasing the number of detector pixels raises the system's power consumption, volume, weight, and data storage and processing complexity [1,2]. Therefore, a new method of image data acquisition and processing is needed to improve the imaging system's resolution.
The emergence of compressed sensing (CS) [3,4] theory provides a new idea for improving the resolution of optical imaging systems. According to CS theory, signals with sparse characteristics can be compressed and projected into a low-dimensional space by a specific observation matrix to obtain a small number of projected signals that can reconstruct the original signal using the corresponding reconstruction algorithm. Compressed imaging super-resolution reconstruction technology applies CS theory to traditional optical systems to achieve super-resolution reconstruction and improve the resolution of imaging systems. This reconstructs higher-resolution images with a small number of observation results. This method has excellent application prospects, reducing the information for image storage, transmission, and processing in optical systems and the requirements of the detector. It transfers the difficulty of improving the system resolution from the hardware to the software side, thus decreasing the equipment cost.
The observation matrix and the reconstruction algorithm determine the reconstruction accuracy of the compressed imaging super-resolution reconstruction technique, with the reconstruction algorithm being the primary factor. At present, reconstruction algorithms can be broadly divided into three directions: minimization of the l0, l1, and lp norms. The reconstruction algorithm based on l0 norm minimization solves min ‖s‖_0, s.t. y = As, which is an intractable NP-hard problem [5] and must be transformed; standard methods include the greedy [6] and SL0 [7] algorithms. Algorithms based on l1 norm minimization use the l1 norm to approximate the l0 norm, solving min ‖s‖_1, s.t. y = As; common methods include the Basis Pursuit algorithm [8] and the Split-Bregman algorithm [9]. Algorithms based on lp norm minimization use the lp norm (0 < p < 1) to approximate the l0 norm, solving min ‖s‖_p, s.t. y = As; commonly used algorithms include iterative reweighted least squares [10] and FOCUSS [11].
Among the three types of algorithms mentioned above, the l1-minimization algorithms introduce convex programming methods and achieve excellent reconstruction precision, but their time complexity is much larger than the others, so reconstruction takes a long time. They also require more observations, because the l1 norm cannot accurately represent the signal's sparsity. The lp-minimization algorithms reduce the number of required observations, but reconstruction is still slow, which limits their practicality. The l0-minimization algorithms divide into the greedy and SL0 algorithms. Greedy algorithms require few calculations and reconstruct quickly, but their reconstruction accuracy is poor and reconstruction errors occur.
The SL0 algorithm, as a reconstruction algorithm based on the l 0 norm minimization, approximates the l 0 norm through the constructor, combines the idea of convex programming with the greedy algorithm, and greatly reduces the reconstruction time while ensuring a high reconstruction accuracy. Therefore, this paper focuses on the improvement of the SL0 algorithm.
Based on the SL0 algorithm, this paper adopts an inverse trigonometric fraction function to approximate the l0 norm and combines it with the modified conjugate gradient method to improve the SL0 algorithm; the result is called the CGSL0 (Conjugate Gradient Smoothed l0 Norm) algorithm. On this basis, to eliminate the block effect in the reconstructed image, this paper proposes a BCS reconstruction algorithm based on CGSL0 and BCS-SPL [12].

Compressed Sensing Theory
The theory of compressed sensing projects a signal with sparse characteristics onto a low-dimensional space to realize the sampling and compression of the signal at the same time [13]. It accurately reconstructs the original signal from fewer sample values, and its basic model is as follows:

y = Φx, (1)

where y ∈ R^m represents the result of compressed sampling, Φ ∈ R^{m×n} represents the observation matrix, and x ∈ R^n represents the original signal. The crux of the problem of CS is how to reconstruct the original signal x accurately from the compressed signal y.
The signal x in CS theory must be sparse [14]. In this way, the observed values after random sub-sampling by the observation matrix can break the Nyquist sampling theorem and still be reconstructed accurately. Most natural signals are non-sparse in the time domain but may be sparse in some transform domain. That is, non-sparse signals can be converted into sparse signals using a sparse transformation basis [15]:

x = Ψs, (2)

where Ψ ∈ R^{n×n} is the sparse transformation basis and s ∈ R^n is the signal after sparse transformation. At this point, the basic model of CS can be expressed as follows:

y = Φx = ΦΨs = As, (3)

where A = ΦΨ is the perception matrix, the product of the observation matrix and the sparse basis; this model meets the requirements of CS theory. The sparse signal s can be obtained using the reconstruction algorithm, and the reconstructed signal x can then be obtained via the sparse transformation x = Ψs; this is the reconstruction process of compressed sensing. Common sparse transformation bases include the discrete Fourier transform (DFT) basis [16], the discrete cosine transform (DCT) basis [17], and the discrete wavelet transform (DWT) basis [18]. The DCT reflects the correlation of image signals well, so it is mainly used for sparse representations of images.
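As a minimal sketch of the model in (3), the forward process can be written out with an orthonormal DCT basis as the sparse basis Ψ and a random Gaussian observation matrix Φ (the dimensions n = 64, m = 32 and the support of s are illustrative choices, not values from the paper):

```python
import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis matrix Psi (n x n): columns are DCT atoms.
    k = np.arange(n)
    Psi = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    Psi[:, 0] *= 1 / np.sqrt(n)
    Psi[:, 1:] *= np.sqrt(2 / n)
    return Psi

rng = np.random.default_rng(0)
n, m = 64, 32
Psi = dct_basis(n)                                # sparse transformation basis
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian observation matrix
s = np.zeros(n)
s[[3, 10, 40]] = [1.0, -2.0, 0.5]                 # sparse coefficient vector
x = Psi @ s                                       # original (non-sparse) signal
A = Phi @ Psi                                     # perception matrix A = Phi Psi
y = Phi @ x                                       # compressed measurements
print(np.allclose(y, A @ s))                      # True: y = Phi x = A s
```

Both routes y = Φx and y = As give the same measurements, which is exactly the equivalence the reconstruction algorithms rely on.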
In (3), this paper takes the random Gaussian measurement matrix as the observation matrix and the discrete cosine transform basis as the sparse basis to verify the performance of these reconstruction algorithms under the same conditions.

Compressed Imaging System and Reconstruction
Different from traditional image compression, which compresses and encodes complete image data, the compressed imaging system incorporates compressed sensing theory into the traditional optical imaging system and collects incomplete data. In this system, a coding mask is added at the imaging focal plane to encode the target scene, which is then collected by a low-resolution detector. The reconstruction algorithm then reconstructs the low-resolution images observed over many exposures into the corresponding high-resolution target scene. The optical imaging system is shown in Figure 1. The imaging principle can be summarized as optical modulation by the encoding mask followed by down-sampling at the detector. The low-resolution image finally obtained is the observation value plus detector noise, and it can be reconstructed using the theory in Section 2.1. The signal in (3) is one-dimensional, so the model must be converted when applied to the compressed imaging system in Figure 1.
For a two-dimensional image signal of size M × M and a detector of size N × N, the single sampling process is as follows:
1. Coding modulation. The coding board used in this system is a 0-1 coding board: the 0 positions block light, and the 1 positions pass light.
2. Down-sampling. Each detector pixel observes an image block of size B × B, where B = M/N, and the sum of the corresponding image-block signals is the detection value of that pixel.
Expanding each image block into a one-dimensional column vector x ∈ R^{B²×1} and the corresponding block of the encoding mask into a one-dimensional row vector φ ∈ R^{1×B²}, a single sampling of the corresponding image-block position is y = φx. Each image block uses the same encoding mask block, so the encoding process of the whole image can be expressed as follows:

y_i = φx_i, (4)

where y_i ∈ R is the observation value of the i-th detector pixel and x_i is the column vector of the i-th image block. Sampling the compressed imaging system pB² times, using a different coding mask each time, gives

Y_i = Φx_i, (5)

where Y_i ∈ R^{pB²} is the vector of observations of the i-th block and Φ ∈ R^{pB²×B²} is the observation matrix. In this case, the compressed sensing reconstruction algorithm can be used to super-resolve the low-resolution images obtained by multiple sampling into a high-resolution image, because the model in (5) meets the compressed sensing theoretical model in (3). This process is the multi-image super-resolution reconstruction technology based on compressed sensing theory.
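A single exposure of this block-based scheme can be sketched as follows (the scene size, block size, and number of masks are illustrative; the mask blocks here vary per position for brevity, whereas the text reuses one mask block for every image block):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 2               # M x M scene, N x N detector
B = M // N                # each detector pixel integrates a B x B image block
image = rng.random((M, M))
masks = (rng.random((3, M, M)) > 0.5).astype(float)  # three 0-1 coding masks

def observe(img, mask):
    # One exposure: modulate the scene with the mask, then sum each B x B block.
    coded = img * mask
    return coded.reshape(N, B, N, B).sum(axis=(1, 3))  # N x N low-res frame

frames = np.array([observe(image, m) for m in masks])
print(frames.shape)       # (3, 2, 2): three low-resolution observations
```

Stacking the per-block measurements from all exposures row-wise yields the observation matrix Φ of (5).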

BCS-SPL Algorithm
As can be seen in Section 2.2, the core of compressed imaging reconstruction lies in block compressed sensing (BCS) [19]. Blocking also brings some adverse effects, such as a block effect in the reconstructed image, which is pronounced under a DCT transformation. Therefore, [12] proposed the Smoothed Projected Landweber (SPL) algorithm, which deals with the image's block effect and noise. The core idea of the SPL algorithm is a two-step iteration, starting from the initial sparse estimate in the transform domain: a Landweber projection step

θ^[n+1] = θ^[n] + Φ^T (y − Φθ^[n]) / ζ, (6)

followed by thresholding in the transform domain, where θ^[n] is the reconstructed signal in the n-th iteration, ζ is the largest singular value of Φ^T Φ, and τ, given by (7), is the iteration threshold.
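The Landweber-plus-thresholding iteration can be sketched as follows; this is a simplified version that soft-thresholds in the identity basis (the signal is taken to be sparse directly), whereas SPL thresholds in the Ψ domain and adds Wiener smoothing, and the threshold value here is an illustrative constant rather than the paper's τ from (7):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 80
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ x_true

zeta = np.linalg.norm(Phi, 2)    # largest singular value of Phi
tau = 0.05                       # illustrative threshold, not the paper's (7)
theta = Phi.T @ y                # initial estimate
for _ in range(50):
    # Step 1: Landweber gradient step scaled by the squared largest singular value.
    theta = theta + (Phi.T @ (y - Phi @ theta)) / zeta**2
    # Step 2: soft-thresholding promotes sparsity.
    theta = np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)

# Error shrinks from the initial estimate toward the true sparse signal.
print(np.linalg.norm(Phi.T @ y - x_true), np.linalg.norm(theta - x_true))
```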
Mun, S. et al. [12] proposed the reconstruction algorithm BCS-SPL by applying the SPL algorithm to CS theory, which can remove block effects and impose smoothness. The BCS-SPL algorithm has been verified to have a good reconstruction effect, and its steps are as follows:
1. Obtain the initial solution x_i^[0];
2. Smooth the reconstructed signal of the n-th iteration as x_i = wiener(x_i), where wiener() is the Wiener filter;
3. Calculate the Landweber projection according to (6);
4. Transform the reconstructed signal to the Ψ domain to obtain ŝ_i, threshold it, and transform back to the spatial domain.

SL0 Algorithm
The solution of (3) essentially reconstructs the original signal with the fewest non-zeros consistent with the observation value and the perception matrix, i.e., the sparsest solution; the original signal in the spatial domain is then obtained by the sparse transformation. The CS reconstruction problem can be summarized as follows:

min ‖s‖_0, s.t. y = As, (8)

where ‖s‖_0 = Σ_{i=1}^n |s_i|^0 is the l0 norm of s, which counts the non-zero elements of the vector, i.e., its sparsity. The sparsest solution can be obtained by solving this minimization. However, this is an NP-hard problem and is difficult to solve when n is large.
The SL0 algorithm of Mohimani et al. [7] approximates the discontinuous l0 norm of the sparse signal with a smooth continuous function. The smoothing function adopted is the standard Gaussian function:

f_σ(s_i) = exp(−s_i² / (2σ²)). (9)

It is easy to derive that

lim_{σ→0} f_σ(s_i) = 1 if s_i = 0, and 0 otherwise. (10)

Summing over the components,

F_σ(s) = Σ_{i=1}^n f_σ(s_i), (11)

the approximation of the l0 norm can be obtained as follows:

‖s‖_0 ≈ n − F_σ(s), (12)

where σ is the smoothness factor, whose value determines the smoothness degree of F_σ(s). The larger σ is, the smoother F_σ(s) is, and the lower the degree of approximation to the l0 norm. Conversely, by (10), the l0 norm of the signal s is approximated accurately when σ approaches zero. The model in (8) can thus be transformed into a continuous optimization problem. On this basis, the steepest descent method and the gradient projection principle are used to approach the optimal solution of the continuous function through several iterations.
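The approximation in (12) can be checked numerically; the test vector below is an arbitrary example:

```python
import numpy as np

def smoothed_l0(s, sigma):
    # n - F_sigma(s), with F_sigma(s) = sum_i exp(-s_i^2 / (2 sigma^2)),
    # approximates the l0 norm ||s||_0 as sigma -> 0.
    return len(s) - np.exp(-s**2 / (2 * sigma**2)).sum()

s = np.array([0.0, 0.0, 1.5, -0.3, 0.0, 2.0])   # true l0 norm = 3
for sigma in (1.0, 0.1, 0.001):
    print(sigma, smoothed_l0(s, sigma))
# As sigma -> 0 the estimate approaches 3; a large sigma over-smooths it.
```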
The search direction toward the optimal value is the steepest descent direction of the objective in (12):

d = −∇(n − F_σ(s)) = ∇F_σ(s). (13)

However, a "sawtooth" effect appears when the steepest descent method searches for the optimal value, which prevents the global optimum from being reached and reduces the estimation accuracy of the l0 norm. For these reasons, the algorithm needs to be improved.

Construct an Approximate Estimation Function of the l 0 Norm
The accuracy of the approximation to the l0 norm is the key factor in improving the effectiveness of the SL0 algorithm: the closer the constructed continuous smoothing function is to the l0 norm, the more accurate the reconstruction. In this paper, the CGSL0 algorithm uses an inverse trigonometric fraction function f_{ρ,σ}(s_i) as the approximate estimation function of the l0 norm, where ρ and σ are parameters controlling the steepness of the smoothing function and s_i is a component of the sparse vector s. The l0 norm is then approximately expressed as follows:

‖s‖_0 ≈ n − F_{ρ,σ}(s), F_{ρ,σ}(s) = Σ_{i=1}^n f_{ρ,σ}(s_i). (14)

As shown in Figure 2, the Gaussian function, hyperbolic tangent function, and compound trigonometric function are used to approximate the l0 norm in the SL0 [7], NSL0 [20], and DNSL0 [21] algorithms, respectively. It can be observed that, for ρ = 0.05 and σ = 0.01 on the interval [−0.1, 0.1], the inverse trigonometric fraction function proposed in this paper is steeper than the others and approximates the l0 norm more closely; that is, it expresses the sparsity of signals more accurately, which improves the accuracy of the optimization algorithm.
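The exact inverse trigonometric fraction function is defined in the paper's equations (not reproduced in this excerpt); the surrogate g below is only an illustrative arctan-based stand-in with the same qualitative behavior (0 at s_i = 0, approaching 1 as |s_i| grows, steepness set by ρ and σ), used to show how such a per-component indicator approximates the l0 norm:

```python
import numpy as np

# NOTE: g is a hypothetical stand-in, NOT the paper's exact inverse
# trigonometric fraction function; rho and sigma play the same steepness-
# controlling role as in the text.
def g(s, rho=0.05, sigma=0.01):
    return (2 / np.pi) * np.arctan(s**2 / (rho * sigma))

def gauss_indicator(s, sigma=0.01):
    # Gaussian-based indicator 1 - f_sigma(s_i) used by SL0, for comparison.
    return 1.0 - np.exp(-s**2 / (2 * sigma**2))

s = np.array([0.0, 0.0, 1.5, -0.3, 2.0])         # true l0 norm = 3
print(g(s).sum(), gauss_indicator(s).sum())      # both approximate ||s||_0 = 3
```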

CGSL0 Algorithm
The SL0 algorithm exhibits the "sawtooth" effect when using the Gaussian function to approximate the l0 norm. The NSL0 algorithm instead solves the problem with a modified Newton method, which requires the first- and second-order derivatives at the iterative points. Moreover, the Hessian matrix of the objective function must be positive definite, which places high requirements on the objective function, and inverting the Hessian requires a large amount of calculation, which limits practical use.
The conjugate gradient method is an unconstrained optimization method between the steepest descent method and the Newton method. It has a superlinear convergence speed, a simple algorithm structure, and an easy programming implementation. Like the steepest descent method, it uses only the first derivative, avoiding the calculation of the second derivative and thereby reducing the amount of calculation and storage. For these reasons, we combine the smoothing function of Section 3.1 with the modified conjugate gradient method to obtain the CGSL0 algorithm.
According to (14), the reconstruction model of the CS algorithm is as follows:

min n − F_{ρ,σ}(s), s.t. y = As. (15)

The solution can be divided into two steps:
1. Calculate the iteration direction using the conjugate gradient method and search for the optimal value;
2. Project the result of the conjugate gradient method into the feasible set defined by the constraints.

Conjugate Gradient Method to Find the Optimal Solution
The idea of the conjugate gradient method is to generate directions that are conjugate with respect to the Hessian matrix of a convex quadratic function from the steepest descent direction at each iteration. Firstly, choose the least-squares solution as the initial value:

s_0 = A^T (A A^T)^{−1} y. (16)

The steepest descent direction is the negative gradient of the objective in (14):

g_k = ∇(n − F_{ρ,σ}(s))|_{s=s_k}. (17)

Assume the current iterate is s_k and the next iterate is s_{k+1} = s_k + α_k d_k, where α_k is the step size of the current iteration and d_k is the search direction toward the minimum. The conjugate gradient method can then be expressed as follows:

d_{k+1} = −g_{k+1} + β_k d_k, with d_0 = −g_0, (18)

where g_k is the gradient of the objective and β_k can be computed by different conjugate gradient formulas. The classical expressions are

β_k^{FR} = ‖g_{k+1}‖² / ‖g_k‖², β_k^{PRP} = g_{k+1}^T (g_{k+1} − g_k) / ‖g_k‖².

Different conjugate gradient formulas perform differently in different scenarios, and the FR and PRP methods are the most commonly used, so a hybrid conjugate gradient method can combine their strengths. In this paper, the FR-PRP hybrid conjugate gradient method is adopted:

β_k = max{0, min{β_k^{PRP}, β_k^{FR}}}. (19)

The FR-PRP hybrid avoids the disadvantage of producing successions of tiny steps and offers good all-around performance among conjugate gradient methods. For this reason, it is selected to compute the search direction for the minimum.
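The hybrid rule in (19) can be sketched on a small convex quadratic (the test function, fixed step size, and iteration count are illustrative choices; the paper applies the method to the smoothed-l0 objective with its own step-size schedule):

```python
import numpy as np

def cg_minimize(grad, x0, alpha=0.1, iters=200):
    # Conjugate gradient with the hybrid beta of (19):
    # beta_k = max(0, min(beta_PRP, beta_FR)).
    # A fixed small step is used for simplicity; practical
    # implementations use a line search for alpha.
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        x = x + alpha * d
        g_new = grad(x)
        gg = g @ g
        if gg < 1e-30:          # gradient vanished: converged
            break
        beta_fr = (g_new @ g_new) / gg
        beta_prp = (g_new @ (g_new - g)) / gg
        beta = max(0.0, min(beta_prp, beta_fr))
        d = -g_new + beta * d
        g = g_new
    return x

# Example: minimize f(x) = 0.5 x^T Q x - b^T x, whose gradient is Q x - b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = cg_minimize(lambda x: Q @ x - b, np.zeros(2))
print(x_min)   # approaches [0.2, 0.4], the solution of Q x = b
```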

Project the Optimal Solution into the Feasible Set
The solution s obtained above must be projected back onto the feasible set {s | y = As} defined by the constraints:

s_p = s − A^T (A A^T)^{−1} (As − y), (20)

where s_p is the projected solution in the feasible set.
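This is the standard minimum-distance projection onto the affine set {s | y = As}, s_p = s − A^T (A A^T)^{−1} (As − y); a quick numerical check (with arbitrary illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 50
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
s = rng.standard_normal(n)          # current (infeasible) estimate

# Project s onto {s | y = A s}: subtract the component violating the constraint.
s_p = s - A.T @ np.linalg.solve(A @ A.T, A @ s - y)
print(np.allclose(A @ s_p, y))      # True: the projected point satisfies y = A s_p
```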

BCS-CGSL0 Algorithm
Combining the BCS-SPL algorithm introduced in Section 2.3 and the CGSL0 algorithm introduced in Section 3.2, we propose the BCS-CGSL0 algorithm, which applies the iterative idea of SL0 under the framework of BCS-SPL. The BCS-CGSL0 algorithm not only ensures the accuracy and efficiency of reconstruction but also removes the block effect caused by block compressed sensing. The step-by-step process of the BCS-CGSL0 algorithm is given in Algorithm 1:

Algorithm 1: BCS-CGSL0 Algorithm
Input: measured signal y, measurement matrix Φ, transform-domain basis Ψ, block size B.
Initialization: obtain the initial solution x_i for each block and smooth it as x_i = wiener(x_i), where wiener() represents the smoothing filter.
Iteration: for each value of the decreasing sequence σ = σ_j, run the CGSL0 steps of Section 3.2 on the sparse coefficients s of each block, then apply the smoothing and projection steps of BCS-SPL.
Output: the reconstructed image x.

Experiments and Results
In the experiments, the size of the original images is 512 × 512. Due to the characteristics of block compressed sensing, an overly large block size greatly increases the reconstruction time, while an overly small block size reduces the reconstruction accuracy; we therefore choose the block size B = 16. The observation matrix is an orthogonalized random Gaussian matrix, and the sparsifying basis is the discrete cosine transform (DCT). All experiments were performed in MATLAB 2022a on a computer equipped with an Intel Core i9 3.7 GHz processor and 32 GB of RAM, running Windows 10. In the BCS-CGSL0 algorithm, we set the decreasing factor to ρ = 0.6, the iteration numbers to J = 200 and L = 3, and the step size to µ = 0.01. The threshold τ is calculated according to (7) with λ = 6.
Two sets of experiments are described in this paper. The first experiment compares the reconstruction effects of the BCS-CGSL0 and SL0 series algorithms at sampling rates of 0.1-0.5. The second experiment compares the reconstruction effects of the BCS-CGSL0 and non-SL0 series algorithms at sampling rates of 0.1-0.5. Figure 3 presents the original and observed images of Goldhill and Clown.
The evaluation indicators in the experiments include the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and reconstruction time. The results of the two groups of experiments are in Figure 3.
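Of the three indicators, PSNR has the simplest closed form; a minimal implementation (assuming 8-bit images, so peak = 255) is:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    # PSNR = 10 log10(peak^2 / MSE); undefined (infinite) when MSE = 0.
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10 * np.log10(peak**2 / mse)

ref = np.full((8, 8), 100.0)
rec = ref + 5.0                      # uniform error of 5 gray levels
print(round(psnr(ref, rec), 2))      # 34.15
```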

The Comparison of the BCS-CGSL0 and SL0 Series Algorithms
In the first experiment, we evaluated the effectiveness of the proposed BCS-CGSL0 against the SL0 series of algorithms, including the SL0 [7], NSL0 [20], and DNSL0 [21] algorithms and the CGSL0 algorithm proposed in this paper. Table 1 lists these algorithms' PSNR, SSIM, and reconstruction time for the Goldhill and Clown images when the sampling rate is 0.5. As can be seen, the SL0 series algorithms reconstruct quickly while maintaining high accuracy. Compared with the other four algorithms, the BCS-CGSL0 algorithm proposed in this paper reduces the reconstruction time while improving the reconstruction accuracy (PSNR and SSIM). Combining the three evaluation indicators, the BCS-CGSL0 algorithm has the best overall effect and good practicability. Table 2 lists the reconstruction effect of these algorithms for the Goldhill image at sampling rates between 0.1 and 0.5. It can be seen that the SL0 series algorithms perform very poorly at low sampling rates; for example, at a rate of 0.1, the SL0 reconstruction is so degraded that the image is unusable. The BCS-CGSL0 algorithm attains the highest reconstruction accuracy: its PSNR is more than 2 dB higher, and its SSIM exceeds that of the other four algorithms. Figure 4 compares the PSNR and SSIM of the BCS-CGSL0 and SL0 series algorithms for the Goldhill image at sampling rates between 0.1 and 0.8. The SL0 series algorithms are greatly affected by the sampling rate, performing extremely poorly when it is low, whereas the BCS-CGSL0 algorithm overcomes this shortcoming and achieves good reconstruction even at a rate of 0.1, making it adaptable to more scenarios.

The Comparison of the BCS-CGSL0 and Non-SL0 Series Algorithms
In the second experiment, we evaluated the effectiveness of the proposed BCS-CGSL0 against non-SL0 series algorithms, including the OMP [6], Split-Bregman [9], IRLS [10], FOCUSS [11], BCS-SPL [12], and BCS-TVAL3 [22] algorithms. Table 3 lists these algorithms' PSNR, SSIM, and reconstruction time for the Goldhill and Clown images when the sampling rate is 0.5. Compared with Table 1, it can be seen that these typical non-SL0 series algorithms achieve good reconstruction accuracy, but most take a long time and are unsuitable for real-time scenarios. In contrast, the BCS-CGSL0 algorithm has good practicability, completing the reconstruction in a short time with better accuracy. Table 4 lists the reconstruction effect of these algorithms for the Goldhill image at sampling rates between 0.1 and 0.5. Compared with Table 1, the non-SL0 series algorithms remain relatively stable at low sampling rates, performing better there than the SL0 series algorithms; however, their reconstruction time is too long for real-time scenarios. Although the BCS-SPL algorithm reconstructs quickly, its accuracy still needs improvement. In contrast, the advantages of the BCS-CGSL0 algorithm are more obvious: higher reconstruction accuracy and shorter reconstruction time. Figure 5 compares the PSNR and SSIM of the BCS-CGSL0 and non-SL0 series algorithms for the Goldhill image at sampling rates between 0.1 and 0.8. It intuitively demonstrates that the reconstruction effect of the non-SL0 series algorithms is more stable across sampling rates and that the BCS-CGSL0 algorithm still performs better.

Conclusions
This study proposes a compressed sensing reconstruction algorithm based on block compressed sensing with a conjugate gradient smoothed l0 norm, BCS-CGSL0. The core of our method is to combine the idea of the SL0 series algorithms with the BCS-SPL algorithm, removing the block effect of the reconstructed image while improving reconstruction accuracy and speed. The main contributions of this paper are as follows:

1. We propose a new function, the inverse trigonometric fraction function, which approximates the l0 norm better than similar functions;
2. We propose a method for optimizing the SL0 algorithm (CGSL0), using the inverse trigonometric fraction function to approximate the l0 norm and the modified conjugate gradient method to solve the optimization problem;
3. We propose a reconstruction algorithm that combines CGSL0 and BCS-SPL, which achieves high reconstruction accuracy and removes the blockiness of reconstructed images.
Simulation experiments on encoded low-resolution images verify that, compared with the SL0 series and non-SL0 series algorithms, the proposed algorithm maintains a good reconstruction speed while improving reconstruction accuracy, which gives it great practical value. In future work, we plan to further improve the SL0 series of algorithms in both accuracy and speed and enhance their applicability in practical scenarios.