JPEG Lifting Algorithm Based on Adaptive Block Compressed Sensing

*is paper proposes a JPEG lifting algorithm based on adaptive block compressed sensing (ABCS), which solves the fusion between the ABCS algorithm for 1-dimension vector data processing and the JPEG compression algorithm for 2-dimension image data processing and improves the compression rate of the same quality image in comparison with the existing JPEG-like image compression algorithms. Specifically, mean information entropy and multifeature saliency indexes are used to provide a basis for adaptive blocking and observing, respectively, joint model and curve fitting are adopted for bit rate control, and a noise analysis model is introduced to improve the antinoise capability of the current JPEG decoding algorithm. Experimental results show that the proposed method has good performance of fidelity and antinoise, especially at a medium compression ratio.


Introduction
Image processing technology has always been a research hotspot in the field of computer science. Especially, in the recent years, under the emergence of high-definition and large-scale images and the impact of massive video information, image compression processing technology has become particularly noticeable. Image compression technology can use limited storage space to save a larger proportion of image data; at the same time, it can also reduce the data size of images of the same quality, which can effectively improve the efficiency of network data transmission. e traditional image compression technology includes two independent parts, image acquisition and image compression, which limit the fusion improvement method of the two correlated compression technology parts. e emergence of compressed sensing (CS) theory breaks the above frame of image compression, and it completes the image acquisition and compression in the step of sparse observation synchronously; on the one hand, it simplifies the image processing process, and on the other hand, it also provides new research areas for image fusion compression.
ere are many types of images processed in image compression technology, and this article selects a still image as the research object. e common still image compression formats include JPEG, JPEG2000, JPEG-XR, TIFF, GIF, and PCX.
is paper focuses on the research of image compression algorithms with a JPEG similar structure and improves them with the combination of CS technology. In addition, the algorithms with a similar principle architecture to JPEG are collectively referred to as JPEG-like algorithms, including traditional JPEG, JPEG-LS, JPEG2000, and JPEG-XR. Data redundancy is essential to the compression of a still image. JPEG-like algorithms use time-frequency transform algorithms and entropy coding as main methods to eliminate data redundancy [1][2][3]. Although having achieved certain effects of still image compression, these algorithms have insufficient considerations on three types of data redundancy (coding redundancy, interpixel redundancy, and psychological visual redundancy) [4]. Firstly, the simple image blocking without guidance could not support the effective coding efficiency to eliminate redundancy in the existing JPEG-like algorithms. Secondly, the uniform timefrequency transform of the same dimension cannot reasonably use the a priori information between pixels of different subimage blocks to reduce interpixel redundancy. In the end, the former JPEG-like algorithms fail to eliminate psychological visual redundancy by considering overall and local saliency. CS technology breaks through the limitations of the Nyquist sampling theorem to provide innovative ideas for sparse reconstruction of signals [5]. In particular, the adaptive block compressed sensing (ABCS) combined with adaptive partitioning and sampling provides a feasible solution for the optimization of JPEG-like algorithms [6,7].
at is, the block compression measurement matrix could be used as the forward discrete cosine transform (FDCT) matrix in the JPEG coding, and the inverse discrete cosine transform (IDCT) process is replaced by sparse reconstruction. In addition, multiple feature saliency and noise analysis are introduced to implement adaptive control of the observation matrix and minimal error iterative reconstruction [8,9].
In this article, we proposed a JPEG lifting algorithm based on the ABCS, and named it as JPEG-ABCS. is proposed algorithm focuses on the following aspects: (1) guiding best morphological blocking by minimizing mean information entropy (MIE); (2) generating an element vector of subimage pixels using the texture feature and 2dimensional direction DCT; (3) selecting the dimension of the measurement matrix by variance and local significance factors; (4) rate control by matching the overall sampling rate and the quantization matrix; (5) realizing iterative reconstruction of a minimum error under noise condition by using noise influence model analysis. e remainder of this paper is organized as follows. In Section 2, the basic theories of JPEG-like algorithms and the ABCS algorithm are illustrated. In Section 3, we focus on the introduction of the JPEG-ABCS algorithm.
en, the implementation of the proposed JPEG-ABCS algorithm is analyzed in Section 4. In Section 5, the experiment and result analysis shows the benefit of the JPEG-ABCS. e paper concludes in Section 6.

Background of the Existing JPEG-Like Algorithms.
e existing JPEG-like algorithms are similar in structure, mainly including blocking, forward time-frequency transform, quantization, entropy coding, and the inverse operation of the above four processes. As the basic one of JPEGlike algorithms, the structure of the JPEG model is shown in Figure 1.
It can be seen from Figure 1 that in the entire JPEG model, the original image I is treated as two-dimensional data, and its key link is adopting the 2-dimensional DCT. Generally, the block size is square, such as 8 * 8, and the recommended quantization matrix (light-table) is given in equation (1) [10]. Based on the Hoffman coding, the encoding part adopts differential pulse code modulation (DPCM) for DC coefficients and run length coding (RLC) for AC coefficients: 16 11 10 16 24 40 51 61   12 12 14 19 26 58 60 55   14 13 16 24 40 57 69 56   14 17 22 29 51 84 80 62   18 22 37 56 68 109 103 77   24 35 55 64 81 104 113 92   49 64 78 87 103 121 120 101   72 92 95 98 112 100 103 Compared with the fixed bit rate of the JPEG algorithm, the JPEG-LS algorithm adds the function of rate control by using a quality factor. JPEG2000 adopts nonfixed square blocking (tile) and discrete wavelet transform (DWT) to improve the quality of the restored image. JPEG-XR introduces the lapped orthogonal transform (LOT) to reduce the blocking artifact at low bit rates.

Basic eory of CS Algorithm.
CS theory was originally proposed by Candès et al. in 2006, which proved that the original signal can be accurately reconstructed by partial Fourier transform coefficients. e advent of CS technology solves the problem that image sampling and compression cannot be performed simultaneously. In general, the main contents of the research on CS theory include sparse representation, compression observation, and optimization reconstruction [11]. Firstly, the main task of sparse representation is to find a set of bases that can make the signal sparse representation, which is the premise and foundation of the entire CS theory. Secondly, the primary task of compression observation is to design a linear measurement matrix uncorrelated with the basis vector to obtain dimensionality reduction observation data, which is the key content of CS theory. Lastly, optimization reconstruction is a difficult problem in CS theory, and its main goal is to solve the original signal through the reverse optimization problem of the sparse vector. e specific solution method of this process is the constrained optimization method.
CS mathematical model is based on the assumption of signal sparsity. Let x ∈ R N be the original signal with n dimension. Suppose that the sparse matrix Ψ ∈ R N×N makes the sparse representation coefficient of x as s � Ψ −1 x, where s ∈ R N contains only K (K ≪ N) nonzero elements. e original signal x is called the K sparse signal under sparse basis Ψ. e number of nonzero elements in the coefficient vector s can be calculated by K � ‖s‖ 0 , where ‖ * ‖ 0 denotes l 0 norm.
CS theory states that the information content in sparse signals can be effectively captured by a smaller number of observations. Let Φ ∈ R M×N be the measurement matrix, where M < N. e linear dimension-reduction acquisition vector of the original signal x is given as y � Φx, where y ∈ R M represents the CS observation signal. In addition, the CS theory points out that to accurately recover the original signal by the observation signal, its dimensions must obey the following condition: M ≥ cK log(N), where c is an adjustment constant.
Since M < N, the reconstruction of the sparse signal x from the measurement vector y is ill-posed which requires us to solve the underdetermined system of equations. ere are many solutions for such a system. It is common practice to achieve effective signal reconstruction by using signal sparsity as an additional constraint. e accurate signal reconstruction is accomplished through solving the following optimization problem: min where Ω is the sensing matrix, ‖ * ‖ p denotes the l p norm, and the value of p is usually 0, 1, and 2 according to different optimization goals. is is a NP-hard problem, and in order to ensure the stability and robustness of the reconstruction process, the measurement matrix Φ must satisfy the restricted isometric property (RIP). e above is the description of the three important problems of the CS algorithm, which solves the separation problem of traditional image acquisition and compression. However, when CS is applied to large-scale high-definition images and video processing, because the 2-dimensional image contains a lot of information, the overall projection requires a large-scale measurement matrix, which will inevitably lead to two major problems: excessive storage and reconstruction algorithm complexity. e above problems limit the application of CS in image processing. e emergence of block compressed sensing (BCS) theory solves this problem well. e solution is to cut the whole image into several small unit blocks, reconstruct after independent observation, and then perform stitching to restore and reconstruct the original image.
Traditional block compressed sensing (BCS) technology introduces the idea of blocks in CS theory to solve the dimensional disaster of data processing, and then improves the processing speed of the algorithm [12]. Its basic model is shown in the following equation: where x i ∈ R n , y i ∈ R m , and Φ B ∈ R m×n are the i-th subblocks of the original signal, observation signal, and block measurement matrix and T 1 � (N/n) � (M/m) is the number of blocks. In addition, a coefficient η � (M/N) is often defined in the BCS, which is called the mean sampling rate. In the analysis of the above BCS algorithm model, although the blocking strategy solves the problems of dimensional disaster and computational complexity, the model uses a unified measurement matrix which can neither reflect the inherent differences between each subimage, nor can it achieve differentiated blocking. In order to overcome the above shortcomings, the nonuniform blocking and observing are introduced into BCS, and combined with the idea of the adaptive algorithm, the ABCS algorithm is generated. e ABCS algorithm mentioned in this article is the introduction of the adaptive strategy into BCS, which is mainly reflected in adaptive blocking and observation [13,14]. e ABCS algorithm model is as follows: where x i ∈ R n i , y i ∈ R m i , and Φ i ∈ R m i ×n i are the i-th subblocks of the original signal, observation signal, and measurement matrix. e difference between the ABCS algorithm and the BCS algorithm is that it gives the dimensional freedom of subblock and measurement matrix, which provides conditions for the reasonable use of the correlation of the internal elements of the original signal.

Fusion of JPEG Model and ABCS Algorithm
3.1. Workflow of JPEG Lifting Algorithm. According to the above section, the JPEG image compression model mainly includes blocking, FDCT, quantization, coding, and the inverse process of the above four parts. e focus of this section is to do the research about the method on how to embed the advantages of the ABCS algorithm into the JPEG model. e basis for the fusion of the JPEG model and the ABCS algorithm is that the consistent purpose is for image compression. e former mainly compresses the image by reducing the number of bits occupied by each pixel, and the   Figure 1: Structure of the JPEG model. Mathematical Problems in Engineering latter mainly compresses the data by reducing the amount of sampled data, so the ABCS algorithm is suitable to be embedded to the data acquisition stage of the JPEG model; that is, the ABCS algorithm is fused in the blocking and FDCT processes to reduce the amount of input data in the quantization process. In addition, the distinction of the data processing method in JPEG and ABCS is noticed. e image data in the JPEG algorithm are processed in the form of two-dimensional data, which is conducive to saving the two-dimensional structural characteristics of the image, while the input signal in the ABCS algorithm is a simple one-dimensional vector form and does not have two-dimensional characteristics. e proposed algorithm JPEG-ABCS mainly includes the solution of two main problems: (a) the conversion problem between two-dimensional time-frequency transformation of JPEG and one-dimensional measurement model of ABCS; (b) the specific method of applying ABCS to the JPEG compression algorithm.
In typical JPEG image compression, after the preprocessing stage, an Each subimage is transmitted to a 2D DCT transform, and the 2D DCT can be completed using two one-dimensional DCTs according to the separability of the DCT. In addition, the blocking method designed in this paper adopts the variable shape blocking method under a unified dimension (n � r × c), so the FDCT process can be described as follows: where I f i is the subimage in the DCT domain, D r ∈ R r×r and D c ∈ R c×c are the 1D vertical and horizontal DCT orthogonal matrices, respectively, and r and c are the number of rows and columns of each subimage [15]. e block sparse representation and the flexible uniform-dimension blocking are introduced into the ABCS algorithm. Equation (4) can be rewritten as follows: where x i ∈ R n , y i ∈ R m i , and s i ∈ R n are the i-th subblocks of the original signal, observation signal, and sparse signal; and Ω i ∈ R m i ×n are the i-th subblocks of the measurement matrix, sparse matrix, and sensing matrix; η i � (m i /n) is the subsampling rate of the i-th subblock.
For retaining the two-dimensional characteristics of the image signal in the application of JPEG-ABCS, it is necessary to analyze the two-dimensional DCT transform in JPEG and the compression observation in ABCS. It is impossible that the 1-dimension vector x i generated directly from the subimage I i through column/row scanning has two-dimensional structural characteristics. e inverse solution of the reconstructed signal x i in the ABCS algorithm is generally denoted as i Ω i s i ; that is, the reconstruction of the original signal is only related to the sparse representation coefficient s i . If the sparse representation coefficient s i has two-dimensional structure information, it is equivalent to the original signal x i with two-dimensional structure information. erefore, the equivalent two-dimensional block vector generation can be achieved by taking the sparse matrix of ABCS as the corresponding matrix under the two-dimensional DCT transform: Analyzing equation (7), the function between the sparse matrix and the DCT orthogonal matrices is established as follows: where kron( * ) represents the Kronecker product function.
Original signal vector x i is obtained by scanning the pixel value of the subimage I i vertically. In addition, if the texture of the image is not in the vertical and horizontal directions, the directional DCT is used instead of the horizontal and vertical DCT orthogonal matrix.
Replacing FDCT and IDCT in the JPEG with adaptive sparse observing and sparse restoring, respectively, replacing blocking in the JPEG with adaptive blocking and vectorization, adding noise to the data storage or data transmission are done, and then the workflow of the proposed JPEG-ABCS algorithm is shown in Figure 2. Comparing the two image compression models shown in Figures 1 and 2, the key points of the JPEG-ABCS model are (1) adaptive blocking, that is, replacing a fixed block with a variable block; (2) adaptive vectorization, that is, providing a matching vector generation method based on image orientation characteristics; (3) adaptive observing, that is, replacing uniform observing with nonuniform observing; (4) adding a controllable variable in the rate control process to improve the JPEG algorithm; (5) designing a denoising method in adaptive restoring to reduce the noise impaction on restored data.

Innovation of JPEG Lifting Algorithm.
e innovations of the above JPEG lifting algorithm are as follows: (1) Adding the mean sampling rate to overcome the deficiency of the traditional JPEG-like algorithm that can only use the time-frequency transform and the quantization matrix to eliminate redundant information in image compression. (2) By analyzing the correlation between sparseness and error, the optimal OMP iterative algorithm is established to enhance the JPEG-like algorithm's noise immunity performance.

Implementation of JPEG-ABCS
is section mainly describes the implementation of the JPEG-ABCS algorithm mentioned in the previous section. e specific implementation is discussed from four aspects: adaptive blocking, adaptive vectorization, adaptive observing, and denoising by optimizing the number of iterations.

Adaptive Blocking Method Based on MIE.
e adaptive blocking method proposed in this paper is a variable partitioning in the same dimension, that is, n � r × c, where n is a fixed value, typical value is 64, and r and c are the number of rows and columns of the variable block, typical value Specifically, the optimized block n opt � r opt × c opt is based on minimizing the mean information entropy (MIE) of the block observation signal set. Since blocking process needs to be completed before observation, it is impossible to use an ungenerated observation set y i T 1 i�1 for guiding the blocking optimization. erefore, an alternative method is introduced to guide the reasonable blocking by minimizing the MIE of the original signal's block set: where MIE( * ) represents the MIE function of the pixel set, IE(x i ) represents the information entropy of the i-th subimage in the pixel domain, p i,j represents the proportion of elements with the pixel gray value j in the i-th subimage, T 2 is the number of blocking ways, and h min and h max are the minimum and maximum values of the pixel gray in the original signal, respectively. However, the effectiveness of the above method lies on consistency between the observation signal's MIE and the original signal's MIE at the same partitioning. In order to verify the above consistency problem, this paper conducted a test experiment using multiple standard images, and its experimental results are shown in Figure 3. e experimental data show that, under the constraint of minimizing MIE, the optimal block of the original signal and the observed signal is consistent, which verifies the feasibility and rationality of the proposed block optimization method. Specifically, it can be seen from Figure 3 that under the constraint of minimum MIE, the optimal block shape is only related to the test image itself, not to the sampling rate. 
In addition, it has been verified by a large number of other standard test images that the method of finding the best block has the same trend whenever applied to the observation signal and the original signal, and there must be an extreme point.

Adaptive Vectorization Based on ASM.
e basis of adaptive vectorization is how to identify the directional characteristics of the image. ere are many methods for identifying direction features in the field of array signal processing, especially in DOA estimation research, such as the Capon algorithm, MUSIC algorithm, maximum likelihood algorithm, subspace fitting algorithm, and ESPRIT algorithm [16,17]. In this article, the angular second-order moment (ASM) value under the gray-level cooccurrence matrix (GLCM) is used to characterize the saliency of the direction [18]: where gray comatrix( * ) is the GLCM function, is the adjacent pixel pairs in the image with distance d, direction θ, and gray values (i, j), and R is the ideal maximum number of pixel pairs under the selected conditions. Combined with the rectangular shape of adaptive blocking, the ASM values in four directions are defined for adaptive vectorization, namely, g 0 ASM , g 90 ASM , g 45 ASM , and g 135 ASM [19]. In addition, the maximum value of these four values is defined as g MAX � max g 0 ASM , g 90 ASM , g 45 ASM , g 135 ASM .  It should be noted that the adaptive vectorization of each subimage must be related to the design of the sparse matrix to jointly realize the one-dimensional vectorization that preserves the two-dimensional structural characteristics of the subimage data.

Adaptive Observing Based on Multifeature Saliency and Bit Rate Control.
e key point of the nonuniform measurement matrix Φ i ∈ R m i ×n is the determination of m i . Considering that different subimages contain different amounts of information and the sensitivity of the human eye's attention mechanism to different images is different, this paper proposes an adaptive measurement matrix Φ i � ���� n/m i Γ m i based on multifeature saliency J(x i ) and the orthogonal symmetric Toeplitz matrix (OSTM): where c is the adjustment factor, J 1 ( * ) stands for the overall variance function, J 2 ( * ) is the local saliency function according to Weber's theorem [20], q is the number of elements in the salient domain determined by the optimal bounding box, OSTM(m i , n) is formed by randomly taking m i rows of n × n-dimensional OSTM [21], and α � 2 and β � 1 are the recommended values. e purpose of designing the adaptive measurement matrix in this way is to rationalize the sampling process and to achieve more sampling of detail blocks and less sampling of smooth blocks. e traditional JPEG-like algorithms control the bit rate (bits per pixel, bpp) through the quantization matrix, encoding, and bit-stream organization [22]. In this paper, the mean sampling rate η has been used to improve the compression performance of the JPEG-ABCS algorithm. e bit rate control for an 8-bit 256-level grayscale image is as follows:  Mathematical Problems in Engineering where η is the mean sampling rate and also corresponds to information decay ratio caused by sparse measurement, μ (μ ≥ 1) represents information decay ratio caused by the quantization, and ε (ε ≥ 1) is the bit compression ratio of the entropy encoding and bit-stream organization. Analysis of the above three factors that affect the rate control in the JPEG-ABCS model shows that once the encoding method is determined, the only factors that can be optimized are η and μ, while ε is a fixed number. 
In order to reduce the bit rate of restored images at the same quality, the value of these two factors must be set reasonably. is article focuses on the analysis of image performance impact in terms of different η under the same bpp and matching design of the quantization matrix that can determine the value of μ. In the process of analyzing the impact of η on the performance of compressed images, the synthetic indicator composed of the peak signal-to-noise rate (PSNR) and structural similarity (SSIM) is used as evaluation criteria to find the best η under different bpp. At the same time, in order to complete the comparison experiment of different η under the same bpp, it is necessary to set different quantization matrixes. is article uses the quality factor (QF) to design different quantization matrices [23]. Because the data ( T 1 i�1 y i ) quantized in the JPEG-ABCS algorithm are a onedimensional vector and are also a normalized measurement of the frequency domain sparse coefficients of the original signal ( T 1 i�1 x i ), the quantization matrix is weakened into a quantization vector whose elements no longer characterize frequency domain property and have the same importance. erefore, the elements of the quantization matrix for the subimage designed in this paper have the same value (Q * 0 � ones(m i )). e goal of the quantization matrix matching design is only to find a fitting function to approximate the relationship between bpp and QF. Figure 4 shows the experimental data of the above test process using Lena. According to Figure 4(a), it can be seen that under the constraint of maximizing synthetic features, the optimal mean sampling rate (η) increases with the increase in bpp. Meanwhile, an η obtaining function can be summarized, as shown in equation (14), and the typical values of B 1 th and B 2 th in the equation are 0.15 and 0.3. In addition, it should be noted that equation (14) can only be directly applied to images with a similar MIE of Lena. 
For other images, the threshold determination condition in the equation should be corrected according to the MIE of the image block set. Specifically, the coefficient ξ is introduced for the correction of the above two threshold conditions (that is, B 1 th � 0.15ξ and B 2 th � 0.3ξ). e coefficient ξ can be defined as the MIE ratio of other images to the Lena image. e design of the fitting function (QF � f(bpp)) adopts the cubic curve fitting method whose data are derived from the actual measurement value of QF and bpp. Figure 4(b) shows the comparison of consistency between the actual light-table's QF and the design value obtained from equation (15). From the results, the QF obtained by equation (15) satisfies the actual requirements well: where ⌊ is the floor function and λ 2 � 38.5972, λ 1 � 6.1241, and λ 0 � −0.0938 are obtained by quadratic curve fitting.

Denoising by Optimizing the Number of Iterations.
Consider the noise observation model as follows: where w i is the additive white Gaussian noise with zeromean and standard deviation σ w and x i is the equivalent noisy original signal. Since the reconstructed original signal (x * i ) is recovered from the noisy observation signal (y i ), the reconstruction error (e x i ) is mainly caused by the noise and reconstruction algorithm, and its mathematical expression can be defined as the following equation using the l 2 norm: where x * i is restored by the pseudoinverse operation, that is, and also represents the reconstruction algorithm in CS. e reconstruction algorithm of CS is based on the sparse representation of the signal [24], that is, the reconstruction sparsity (v i ) of s * i satisfies the inequation (v i ≪ m < n), so the pseudoinverse operation to get x * i can be rewritten as follows: . . ω i,n that has the greatest correlation with y i . Because equation (17) cannot be calculated directly, we add x i to help in analysis and calculation:

Mathematical Problems in Engineering
where G x i is a projection matrix of rank n − v i and G w i is a projection matrix of rank v i . Since G x i and G w i satisfy orthogonality, the inner product of G x i x i and G w i w i is equal to zero. erefore, equation (19) can be transformed into the following form: increases with the reconstruction sparsity (v i ) [25,26]. erefore, reconstruction error and reconstruction sparsity are a bias-variance trade-off, and there must be an optimal reconstruction sparsity (v opt i ) that minimizes the reconstruction error (e x i ): (22) Figure 5 shows the relationship between reconstruction sparsity and reconstruction error under different noise conditions by using a modified Lena test image. e modified Lena image is generated by intercepting 60 sparse coefficients under a discrete cosine basis; that is, its original sparsity (K) is 60. e noise added in the test is zero-mean Gaussian white noise, and its standard deviation (σ w � noise − std) also represents the intensity of the noise. e indicator PSNR is used to characterize the size of the reconstruction error. It can be seen from Figure 5 that the optimal reconstruction sparsity decreases as the noise intensity increases and is less than the original sparsity (v opt i ≤ K).
From the verification experiment shown in Figure 5, we can see that there is indeed an optimal reconstruction sparsity in the reconstruction process under the noise background. However, equation (22) is not a feasible solution that can be directly used to optimize the reconstruction process. In the actual reconstruction process, only the observation data at the receiving end can be used for the optimization algorithm. erefore, this paper designs a solution that uses observation data to optimize the reconstruction sparsity.
According to the definition of CS, the measurement matrix (Φ i ) obeys the RIP criteria, and therefore where δ K is a coefficient related to Φ i and K, and e y i is the reconstruction error of observation data. e transformation of formula (23) can get the boundaries of the original data reconstruction error as follows:

Mathematical Problems in Engineering
It can be seen from the above two equations that the reconstruction errors of the original data and the observation data are consistent, so the reconstruction sparsity can be optimized by minimizing the errors of the observation data: It is known from the above equation that the reconstruction errors of the observation signal satisfies the chisquare distribution, so the upper and lower boundary of e y i can be derived from the chi-square distribution probability. In addition, when calculating the minimum value of e y i , the worst condition is considered, that is, by calculating the minimum value of the upper bound of e y i .
In the l 0 norm reconstruction algorithm of CS, the reconstruction sparsity is equal to the number of iterations. erefore, optimizing the number of iterations can reduce the noise impact on image quality in using orthogonal matching pursuit (OMP) as the signal recovery algorithm [27,28]. According to the Bayesian information criterion (the tuning parameters of confidence probability and effective probability are taken as � � v i √ log m and 0, respectively) [29], the optimal value of iteration number v opt i can be achieved by minimizing the noise influence: where is the noise error of observation data.

Pseudocode of JPEG-ABCS.
e JPEG lifting algorithm (JPEG-ABCS) described in this article mainly consists of the above four sections, except for the entropy codec, and its full pseudocode is shown in Algorithm 1.

Experiment and Result Analysis
In order verify the superiority of the JPEG-ABCS algorithm, experiments were conducted in two cases: noiseless and noisy. Standard JPEG and JPEG2000 algorithms were used as comparison algorithms, and multiple grayscale standard images with 256 × 256 resolution were used in the following experiments which were conducted in the simulation software environment of Matlab2016b. In order to objectively evaluate the performance of the algorithm, peak signal-tonoise ratio (PSNR) and structural similarity (SSIM) were introduced as image reconstruction evaluation indexes. e PSNR index is the most widely used objective standard for characterizing the quality of reconstructed images: where x(i) and x * (i) are the i-th element of the original image signal and the reconstructed original image signal, respectively. e SSIM is another common signal reconstruction quality evaluation index used to describe the similarity between the original image signal and the reconstructed image signal: where μ x and μ x * are the average gray value of all elements in x and x * , σ x and σ x * are the standard deviation of all elements in x and x * , σ xx * is the covariance of x and x * , c 1 � 0.01 × H 2 and c 2 � 0.03 × H 2 are the constants, and H � h max − h min is the range of pixel gray values.

Experiments and Analysis without Noise.
The experiments under the noiseless condition are divided into three parts. To verify the MIE-minimization adaptive blocking method, experiments were conducted on two standard images (Lena and Parrots), whose MIE under different block shapes is given in the previous section, and the experimental data are recorded in Table 1. The basic JPEG algorithm is used in the experiment, and the quantization matrix (luminance table) is generated using formula (15), with QF set to 50 and 25, respectively. From the data in Table 1, it can be seen that the recovered Lena and Parrots images reach the minimum bpp with block shapes of 16 × 4 and 8 × 8, respectively, which coincides exactly with the minimum-MIE blocks identified in the previous section.
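Formula (15) is not reproduced in this section; as a plausible stand-in, the sketch below uses the common QF-scaling rule for the luminance table (the one popularized by the IJG reference implementation), which maps a quality factor QF in (0, 100] onto the standard base table.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def quant_table(qf):
    """Scale the base luminance table for a quality factor QF in (0, 100]."""
    scale = 5000.0 / qf if qf < 50 else 200.0 - 2.0 * qf
    q = np.floor((Q50 * scale + 50.0) / 100.0)
    # entries are clamped to the valid 8-bit range [1, 255]
    return np.clip(q, 1, 255).astype(int)
```

Under this rule, QF = 50 reproduces the base table and QF = 25 roughly doubles every step size, i.e., coarser quantization and lower bpp.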

Verification Experiment of ASM-Based Adaptive Vectorization to Improve the 2D Image Reconstruction Performance in the BCS Algorithm.
The application of CS to image signal processing requires vectorization of two-dimensional images. Therefore, the proposed JPEG-ABCS algorithm also needs to vectorize the subimages. Different vectorization methods directly affect the reconstruction quality of the image.
In this article, three common vectorization methods (vertical scanning, 2D scanning, and zigzag scanning) are compared and analyzed. The verification experiment is carried out within the CS algorithm, using two kinds of images (standard test images and texture test images) tested under different block shapes and sampling rates. The experimental data are recorded in Tables 2 and 3, respectively. It can be seen from Table 2 that, for standard test images satisfying ASM condition 1 or 2, vectorization using 2D scanning is optimal: the PSNR and SSIM of the reconstructed image hold a relative advantage over the other two methods. Table 3 shows the results of texture test image reconstruction using the three vectorization methods; in this case, vector generation using zigzag scanning clearly performs better.
It can be seen from the above tables that, for different types of test images, a single-mode vector generation method cannot always effectively improve the quality of image reconstruction; a multimode method combined with detection of the image's texture direction features is preferable. Therefore, this paper proposes an ASM-based adaptive vectorization method to maximize the 2D image reconstruction performance of the BCS algorithm.
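For reference, zigzag vectorization of a block can be sketched as below; vertical scanning is simply a column-major flatten (`block.flatten(order='F')` in NumPy). This is a generic sketch of the standard JPEG zigzag order, not the paper's code.

```python
import numpy as np

def zigzag_vector(block):
    """Flatten a 2D block into a 1D vector in JPEG zigzag order."""
    h, w = block.shape
    # Group indices by anti-diagonal (i + j); traversal direction
    # alternates with the diagonal's parity, as in the JPEG standard.
    order = sorted(
        ((i, j) for i in range(h) for j in range(w)),
        key=lambda ij: (ij[0] + ij[1],
                        -ij[1] if (ij[0] + ij[1]) % 2 else ij[1]))
    return np.array([block[i, j] for i, j in order])
```

For a texture image whose energy concentrates along the low-frequency diagonal after transform, this ordering places the significant coefficients first in the vector, which is why zigzag scanning fares better on the texture test set.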

Performance Comparison of Various JPEG-Like Algorithms.
The image reconstruction quality of the three algorithms was compared under the noiseless condition to verify the benefit and universality of the proposed JPEG-ABCS. The experiment includes two parts: verification under different bpp and under different test images. Figure 6 shows the experimental results for the Lena test image under the three JPEG-like algorithms. Figures 6(a) and 6(b) show that, compared with the JPEG and JPEG2000 algorithms, the proposed algorithm has an advantage in the PSNR and SSIM indicators under different bpp conditions. Furthermore, the trend of the curves shows that the JPEG-ABCS algorithm performs well at medium and high bit rates, but its performance declines as the bit rate decreases; the main reason is that, as the dimension of the measurement matrix shrinks with bpp, the observation process can no longer cover all the information of the image. Figure 6(c) shows the grayscale images of Lena restored by the three algorithms at 0.25 bpp. Subjectively, the JPEG-ABCS result is visually better than those of the other two algorithms.
In addition, Table 4 records the experimental results of the three algorithms on different images under the conditions bpp = 0.25, 0.3, and 0.4. The data in the table show that the improvement achieved by the proposed algorithm is universal across images. For instance, at bpp = 0.3, the PSNR of the four standard test images under the JPEG-ABCS algorithm is improved by 8.34%, 15.09%, 4.46%, and 8.13% over the JPEG algorithm and by 6.19%, 12.98%, 3.39%, and 6.39% over the JPEG2000 algorithm, respectively; the SSIM also improves, by 0.96%, 0.88%, 1.22%, and 0.59% over JPEG and by 0.62%, 0.64%, 0.84%, and 0.30% over JPEG2000.
As can be seen from Figure 6 and Table 4, compared to the JPEG and JPEG2000 algorithms, the proposed JPEG-ABCS algorithm achieves a large improvement in PSNR and SSIM, mainly because adaptive blocking and adaptive sampling reduce the MIE of the image blocks under the same conditions, while sparse restoration guarantees the image restoration quality.

Experiments and Analysis under Gaussian Noisy Conditions.
Under the Gaussian noise condition, it is verified that the JPEG-ABCS algorithm improves the antinoise performance relative to the standard JPEG algorithm. Figure 7 shows the experimental results obtained using the monarch, peppers, and cameraman test images. It can be clearly seen from Figure 7 that the test images reconstructed by JPEG-ABCS are superior to those reconstructed by JPEG at every noise intensity (here the noise standard deviation is used as the noise intensity); the higher the noise intensity, the more obvious the superiority. Table 5 records the PSNR and SSIM of the noisy monarch test image under the two algorithms. As the data in Table 5 show, the noisy-image reconstruction performance of the JPEG-ABCS algorithm is better than that of the JPEG algorithm under all bpp conditions. For example, at bpp = 0.25, the PSNR of the JPEG-ABCS algorithm under the four noise intensities is improved by 9.68%, 4.25%, 0.74%, and 3.06% over the JPEG algorithm; the SSIM also improves, by 4.54%, 3.69%, 1.53%, and 7.71%. In other words, JPEG-ABCS adds an antinoise capability that the JPEG algorithm does not have.

Conclusions
In this paper, a JPEG lifting algorithm based on ABCS was proposed, and its structure and implementation were introduced in detail. The improvements within the algorithm were described, and their feasibility and rationality were demonstrated by experiments. Finally, through comparison experiments with similar algorithms, the contribution of this lifting algorithm to JPEG-like algorithms, namely improving the quality of image reconstruction, reducing the bit rate (bpp), and adding an antinoise function, was evaluated.

Data Availability

The simulation results used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.