Abstract

Compressive Sensing (CS) enables a low-complexity image encoding architecture, which is suitable for resource-constrained wireless sensor networks. However, due to the nonstationary statistics of images, images reconstructed by a CS-based codec suffer from many blocking artifacts and blurs. To overcome these negative effects, we propose an Adaptive Block Compressive Sensing (ABCS) system based on spatial entropy. Spatial entropy measures the amount of information in a region and is used here to allocate measuring resources across regions; we adopt it because rich information implies more edges and textures. To reduce the computational complexity of decoding, a linear model reconstructs each block with a single matrix-vector product. Experimental results show that our ABCS coding system provides better reconstruction quality from both subjective and objective points of view while keeping decoding complexity low.

1. Introduction

Compressive Sensing (CS) is a novel sampling theory that runs counter to the conventional Nyquist-Shannon theorem in data acquisition [1]. When married with image coding, CS brings a low-complexity encoding architecture, which is appealing for resource-constrained wireless sensor networks [2]. Image CS coding reconstructs a natural image from its observed measurements $y = \Phi x$, where $x \in \mathbb{R}^{N}$ is the lexicographically stacked representation of the original image and $y \in \mathbb{R}^{M}$ ($M < N$) is the vector of CS measurements observed by a random measurement matrix $\Phi \in \mathbb{R}^{M \times N}$. Once the image is a $K$-sparse signal in some transform domain $\Psi$, CS theory guarantees that the image is accurately recovered with high probability from $M = O(K \log(N/K))$ measurements [3]. The CS measurement process combines image acquisition and image compression; thus the computational burden at the encoder is greatly reduced. Each element of $y$ carries an equal amount of information about $x$, which offers robustness against noise in wireless communication. These advantages have attracted many researchers to explore applications of CS in multimedia systems [4, 5].

Many researchers have attempted to develop effective image reconstruction algorithms in order to improve the rate-distortion performance of image CS coding. Good reconstruction performance relies on a sparser representation of the image; for example, Zhang et al. [8] exploit intrinsic local sparsity and nonlocal self-similarity to design a dynamically varying space; Wu et al. [9] introduce a local autoregressive model to explore sparse components; Eslahi et al. [10] construct an adaptively learned space by using the local and nonlocal sparsity of the image; and Liu et al. [11] use Principal Component Analysis (PCA) to sparsely decompose each patch of the image. In the field of Magnetic Resonance Imaging (MRI), several works have also invested much effort in improving reconstruction performance; for example, Zhang et al. [12] proposed an energy-preserving sampling to enhance the quality of a digital phantom, Zhang et al. [13] proposed an exponential wavelet iterative shrinkage/thresholding algorithm to reduce the blurs in the reconstructed image, and Sun and Gu [14] proposed an adaptive observation matrix for sparse sampling of ultrasonic wave signals analyzed in phased-array structural health monitoring. The above-mentioned methods all involve numerical iteration, which brings high computational complexity at the decoder. Therefore, image CS coding is always characterized by light encoding and heavy decoding. However, because natural images typically exhibit nonstationary statistics, high computational complexity does not necessarily bring a satisfactory result. This poses the challenge of how to design a CS codec that overcomes the negative effects of nonstationary statistics.

The Block-based CS (BCS) hybrid coding framework [15-17] solves the problem of high decoding complexity by measuring and recovering nonoverlapping blocks independently, but the nonstationary statistics of the image can lead to blocking artifacts. Different block statistics result in different block sparsity; thus the number of measurements per block should be set accordingly. Based on the BCS framework, research on the Adaptive BCS (ABCS) framework [6, 7, 18] has been done to suppress blocking artifacts. These studies all use some image feature (e.g., DCT coefficients [18], variance [6], and saliency [7]) to measure the statistics of a block and then adaptively allocate CS measurements to each block according to the measured feature. ABCS is a successful scheme for reducing the negative effect of nonstationary statistics while guaranteeing low decoding complexity. However, some time and space complexity is inevitably introduced at the encoder due to feature extraction. The existing ABCS schemes spend many matrix-vector products computing image features; for example, two matrix-vector products and one convolution over the whole image are needed to compute the visual saliency in [7]. Such matrix-vector products are too expensive for a wireless sensor network because the processor of a mobile node has limited computing capability. Therefore, to keep the encoder light, the ABCS framework requires a simple feature that still effectively reduces blocking artifacts.

In this paper, we propose an ABCS coding system which uses the spatial entropy of each block to allocate measuring resources. Spatial entropy measures the amount of information and thus reveals the statistical characteristics of the data. The main contributions of this work can be summarized as follows:

(i) We propose using the spatial entropy of an image block as the criterion for allocating CS measurements.

(ii) We reduce the computational complexity of image reconstruction by using a linear model.

We assign a higher measurement rate to blocks with much information and a lower measurement rate to blocks with less information. With entropy-based adaptive measuring, the quality of a reconstructed block does not vary greatly with the nonstationary statistics of the image. Since computing entropy requires only a few floating-point operations, our ABCS system also has a light encoder. To realize real-time decoding, we use a linear model to recover all blocks. Combined with adaptive measuring based on spatial entropy, the linear recovery method improves the reconstruction quality effectively.

The rest of this paper is organized as follows. Section 2 summarizes the ABCS coding framework. Section 3 presents the proposed adaptive measuring and linear recovery schemes. Experimental results are given in Section 4, and the conclusion is drawn in Section 5.

2. ABCS Coding Framework

The advantage of the ABCS framework is its nonuniform allocation of CS measurements based on image features. This section describes how the framework works.

Given an $N$-pixel image $x$ from a real-world scene and supposing we want to take $M$ CS measurements, we summarize the flow of ABCS coding as shown in Figure 1. The encoding part is described as follows.

Step 1. Divide image $x$ into $n$ nonoverlapping blocks of $B \times B$ pixels and let $x_i$ ($i = 1, 2, \ldots, n$) represent the vectorized signal of the $i$th block obtained through raster scanning.

Step 2. The feature of each block is extracted. Block variance [18], edges [6], and saliency information [7] are common features.

Step 3. We set the measurement number $M_i$ of each block according to the distribution of these image features. The total number of CS measurements of all blocks is $M$; that is, $\sum_{i=1}^{n} M_i = M$.

Step 4. We use Marsaglia's ziggurat algorithm [19] to produce pseudorandom data which obey a Gaussian distribution, and these random data form a matrix $\Phi_B \in \mathbb{R}^{B^2 \times B^2}$. After that, we randomly pick $M_i$ rows from $\Phi_B$ to construct the measurement matrix $\Phi_i$ of $x_i$.

Step 5. The CS measurement vector $y_i$ of $x_i$ is observed with $\Phi_i$ as follows:

$$y_i = \Phi_i x_i. \quad (1)$$

We define the block measurement rate as $r_i = M_i / B^2$.

Through the above steps, we perform ABCS encoding of an image. Thus the measurement rate $r_i$ of each block varies with its image feature. By measuring block features, more CS measurements are allocated to blocks with strong features and fewer to blocks with weak features. A sketch of the encoding flow is given below.
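To make the encoding flow concrete, the following is a minimal NumPy sketch of Steps 1-5 (illustrative code, not the original implementation; the per-block measurement numbers are assumed to have been set in Step 3, and NumPy's Gaussian generator stands in for the ziggurat algorithm of [19]):

    import numpy as np

    def abcs_encode(image, B, rates, seed=0):
        """Sketch of ABCS encoding (Steps 1-5). rates[i] is the measurement
        number M_i of the i-th block, assumed to be set in Step 3."""
        rng = np.random.default_rng(seed)  # Gaussian source standing in for [19]
        n_v, n_h = image.shape[0] // B, image.shape[1] // B
        phi_full = rng.standard_normal((B * B, B * B))  # Step 4: full Gaussian matrix
        measurements, matrices = [], []
        for i in range(n_v * n_h):
            r, c = divmod(i, n_h)
            # Step 1: vectorize the block by raster scanning
            x_i = image[r*B:(r+1)*B, c*B:(c+1)*B].reshape(-1)
            # Step 4: randomly pick M_i rows to form Phi_i
            rows = rng.choice(B * B, size=rates[i], replace=False)
            phi_i = phi_full[rows, :]
            measurements.append(phi_i @ x_i)  # Step 5: y_i = Phi_i x_i (Eq. (1))
            matrices.append(phi_i)
        return measurements, matrices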

At the ABCS decoder, after receiving the measurement vector of each block, the ABCS framework generally uses the minimum $\ell_1$-norm model to recover each block as follows:

$$\hat{x}_i = \arg\min_{x_i} \|\Psi x_i\|_1 \quad \text{subject to} \quad \|y_i - \Phi_i x_i\|_2 \leq \varepsilon, \quad (2)$$

in which $\|\cdot\|_1$ and $\|\cdot\|_2$ are the $\ell_1$ and $\ell_2$ norms, respectively, $\Psi$ is the transformation matrix of each block, for example, the DCT or wavelet matrix, and $\varepsilon$ is the noise tolerance, which can be set based on experience. Model (2) can be solved by many numerical iterative algorithms, for example, Orthogonal Matching Pursuit (OMP) [20] and Gradient Projection for Sparse Reconstruction (GPSR) [21]. These algorithms require high computational complexity to reconstruct a whole image. No matter which recovery algorithm is chosen, more CS measurements mean better reconstruction quality. Therefore, the ABCS framework ensures good recovery quality for every block by feature-based adaptive measuring.
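For illustration, a textbook version of OMP [20] is sketched below (our code, not the authors' implementation), assuming an orthonormal sparsifying transform so that the effective dictionary is $A = \Phi_i \Psi^{T}$ and the block estimate is obtained by applying the inverse transform to the recovered coefficients:

    import numpy as np

    def omp(A, y, k):
        """Minimal Orthogonal Matching Pursuit: greedily pick the dictionary
        column most correlated with the residual, then refit by least squares."""
        support, residual = [], y.astype(float).copy()
        coeffs = np.zeros(A.shape[1])
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
            if j not in support:
                support.append(j)
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ sol          # update the residual
            if np.linalg.norm(residual) < 1e-6:
                break
        coeffs[support] = sol
        return coeffs  # block estimate: apply the inverse transform to coeffs

The loop iteration visible here is exactly what makes such decoders computationally heavy when run for every block of an image.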

3. Proposed Scheme

Figure 2 presents the framework of the proposed ABCS scheme. At the encoder, we compute the spatial entropy $H_i$ of the $i$th block and set its measurement number $M_i$ according to the distribution of spatial entropy. We construct the measurement matrix $\Phi_i$ to observe the CS measurement vector $y_i$. Spatial entropy measures the information amount of each block and directly reveals the nonstationary statistics of the image. By entropy-based adaptive measuring, each block has sufficient CS measurements to describe the block statistics. At the decoder, in order to realize real-time decoding, we transform the measurement vector $y_i$ into the reconstructed block $\hat{x}_i$ by a linear model. In the following three parts of this section, we first describe how to compute the distribution of spatial entropy, then design the adaptive measuring scheme, and finally present the linear recovery model.

3.1. Spatial Entropy

The spatial entropy of an image is the expected value of the information contained in its pixels. We compute the spatial entropy of the $i$th block as follows:

$$H_i = -\sum_{v} p_i(v) \log_2 p_i(v), \quad (3)$$

in which $v$ represents a pixel value and $p_i(v)$ is the probability of pixel value $v$ in $x_i$. The unit of $H_i$ is bits per pixel (bpp), and $H_i$ is the minimum number of bits needed to encode any pixel in the block without loss. The data processing inequality states that the information content of a signal cannot be increased via a local physical operation [20], which implies that the information contained in the sparse components is close to the spatial entropy. Therefore, the bigger the spatial entropy of a block is, the less sparse its representation coefficients are, and vice versa. According to CS theory, we should allocate more CS measurements to blocks with much information and fewer to blocks with less information. By normalizing the spatial entropy of each block,

$$w_i = \frac{H_i}{\sum_{j=1}^{n} H_j}, \quad (4)$$

we can control the measurement rate according to the entropy contrast. The probabilities can be expressed in the form of histograms; thus the spatial entropies of all blocks can be computed in $O(N)$ time.
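For concreteness, Eq. (3) reduces to a histogram computation. A minimal sketch assuming 8-bit grayscale blocks (the function name is ours):

    import numpy as np

    def block_entropy(block):
        """Spatial entropy H_i of one block (Eq. (3)), in bits per pixel."""
        hist = np.bincount(block.reshape(-1).astype(np.uint8), minlength=256)
        p = hist[hist > 0] / block.size  # probabilities p_i(v) of occurring values
        return float(-np.sum(p * np.log2(p)))

Building the histogram touches each pixel once, so evaluating all $n$ blocks costs $O(N)$ overall, as noted above.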

3.2. Measuring Allocation

Our entropy-based CS scheme aims to allocate measuring resources according to the information contained in each block. By (4), we obtain the distribution of spatial entropy. Supposing $M$ is the total number of CS measurements for the whole image, we set the number of CS measurements for each block as follows:

$$M_i = M_0 + \mathrm{round}\left(w_i \left(M - n M_0\right)\right), \quad (5)$$

in which $M_0$ is the initial measurement number of each block and $\mathrm{round}(\cdot)$ is the rounding operation. By (5), the excess CS measurements are assigned to blocks with much information. BCS allocates measuring resources equally to all blocks because it cannot tell how much information a block contains and so cannot differentiate one block from another. Our scheme takes the statistics of the image into account. By exploiting the spatial entropy of each block, it allocates more random measurements to information-rich blocks and fewer to information-poor blocks. CS theory states that a recovery algorithm offers better reconstruction quality for a block with more measurements. Therefore, when using the same number of measurements for the whole image, our entropy-based scheme recovers blocks with much information better than BCS does.
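A sketch of the allocation rule as reconstructed in Eq. (5) (illustrative code; because of rounding, the allocated total may deviate from $M$ by a few measurements, which a practical encoder would reconcile):

    import numpy as np

    def allocate_measurements(entropies, M, M0):
        """Entropy-based allocation (Eqs. (4)-(5)): each block starts from the
        initial budget M0; the remaining M - n*M0 measurements are split in
        proportion to the normalized entropies w_i."""
        H = np.asarray(entropies, dtype=float)
        n = H.size
        w = H / H.sum()                                      # Eq. (4)
        return M0 + np.round(w * (M - n * M0)).astype(int)   # Eq. (5)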

3.3. Linear Recovery

Conventional CS recovery algorithms use numerical calculation to nonlinearly reconstruct the image. The numerical calculation involves loop iteration, introducing high computational complexity. Therefore, conventional recovery algorithms are not suitable for real-time decoding. Equation (1) indicates that the measurement vector $y_i$ is a projection of $x_i$ onto a low-dimensional space; thus there is a linear relation between $y_i$ and $x_i$. By using this linear relation, we can design a projection matrix $G_i$ to back-project $y_i$ onto the neighboring region of $x_i$; that is,

$$\hat{x}_i = G_i y_i, \quad (6)$$

in which $\hat{x}_i$ is the linear estimation of $x_i$. From the above, the linear recovery consists of two steps: learning a projection matrix $G_i$ and reconstructing each block by using the matrix $G_i$. We first describe how to learn the projection matrix $G_i$. The error vector between $x_i$ and $\hat{x}_i$ can be computed as follows:

$$e_i = x_i - G_i y_i. \quad (7)$$

We should select the projection matrix that minimizes the error vector $e_i$. Based on this motivation, we design an optimization model to choose the best projection matrix as follows:

$$G_i^{\ast} = \arg\min_{G_i} E\left[\left\| x_i - G_i y_i \right\|_2^2\right], \quad (8)$$

in which $E[\cdot]$ is the expectation function. Setting the gradient of the objective in (8) (with respect to $G_i$) to 0, we can obtain the solution of model (8) as

$$G_i^{\ast} = E\left[x_i y_i^{T}\right] \left(E\left[y_i y_i^{T}\right]\right)^{-1}. \quad (9)$$

Plug (1) into (9) and we get

$$G_i^{\ast} = E\left[x_i x_i^{T} \Phi_i^{T}\right] \left(E\left[\Phi_i x_i x_i^{T} \Phi_i^{T}\right]\right)^{-1}. \quad (10)$$

Because $\Phi_i$ is a known matrix, we can move it to the outside of $E[\cdot]$; that is,

$$G_i^{\ast} = E\left[x_i x_i^{T}\right] \Phi_i^{T} \left(\Phi_i E\left[x_i x_i^{T}\right] \Phi_i^{T}\right)^{-1}. \quad (11)$$

Let

$$G_i^{\ast} = R \Phi_i^{T} \left(\Phi_i R \Phi_i^{T}\right)^{-1}, \quad (12)$$

in which we regard $x_i$ as a random vector, and $R$ is the autocorrelation function of $x_i$. That is,

$$R[j, k] = E\left[x_i[j]\, x_i[k]\right], \quad j, k = 1, 2, \ldots, B^2. \quad (13)$$

It is difficult to directly compute each element of $R$, but we can estimate it by the following statistical model:

$$R[j, k] = \rho^{D(p_j, p_k)}, \quad (14)$$

in which $p_j$ is the spatial position of pixel $x_i[j]$ and $p_k$ is the spatial position of pixel $x_i[k]$ within the block. $D(p_j, p_k)$ is the chessboard distance between $p_j$ and $p_k$. $\rho$ is a constant between 0.9 and 1, and we set $\rho$ to 0.95 by experience. Through the above operations, we obtain the best projection matrix $G_i^{\ast}$, and then each block can be recovered by

$$\hat{x}_i = R \Phi_i^{T} \left(\Phi_i R \Phi_i^{T}\right)^{-1} y_i. \quad (15)$$

The flow of linear image recovery is summarized in Algorithm 1.

Task: Linearly recover each image block $x_i$, $i = 1, 2, \ldots, n$.
Input: CS measurement vectors $y_i$, $i = 1, 2, \ldots, n$, and block measurement matrices $\Phi_i$, $i = 1, 2, \ldots, n$.
Steps:
    (a) Compute the autocorrelation matrix $R$ according to Eq. (14);
    (b) Compute the projection matrix $G_i$ according to Eq. (12);
    (c) Reconstruct each image block $\hat{x}_i$, $i = 1, 2, \ldots, n$, according to Eq. (15);
    (d) Merge all image blocks into a whole image $\hat{x}$.
Output: The recovered image $\hat{x}$.

Through this matrix-vector product for each image block, we obtain the estimation of the original block. Dividing an image into $n$ nonoverlapping blocks and applying the matrix-vector product $n$ times, we achieve the reconstruction of the whole image. The total computation is $B^2 M$ multiplications and $B^2(M - n)$ additions (since $\sum_{i=1}^{n} M_i = M$), which is far less than that of conventional CS recovery algorithms.
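Algorithm 1 amounts to a few dense linear-algebra operations. A minimal NumPy sketch (our illustrative code, following Eqs. (12), (14), and (15)):

    import numpy as np

    def autocorrelation_matrix(B, rho=0.95):
        """R from the statistical model (Eq. (14)): R[j,k] = rho**D(p_j, p_k),
        with D the chessboard distance between pixel positions in a BxB block."""
        u, v = np.meshgrid(np.arange(B), np.arange(B), indexing="ij")
        pos = np.stack([u.reshape(-1), v.reshape(-1)], axis=1)     # B^2 x 2 positions
        D = np.abs(pos[:, None, :] - pos[None, :, :]).max(axis=2)  # chessboard distance
        return rho ** D

    def linear_recover(y_i, phi_i, R):
        """Eqs. (12) and (15): x_hat = R Phi^T (Phi R Phi^T)^{-1} y."""
        G = R @ phi_i.T @ np.linalg.inv(phi_i @ R @ phi_i.T)       # projection matrix G_i
        return G @ y_i

Since $R$ depends only on the block geometry and $\rho$, it is computed once and shared by all blocks; only the inverse $(\Phi_i R \Phi_i^{T})^{-1}$ varies with each block's measurement matrix.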

4. Experimental Results

We evaluate the performance of our ABCS coding system on a number of grayscale images of 512 × 512 pixels, including Lenna, Barbara, Peppers, Goldhill, and Mandrill. The images reconstructed by our system are compared with those produced by the conventional BCS system [15], the variance-based ABCS (V-ABCS) system [6], and the saliency-based ABCS (S-ABCS) system [7] from subjective and objective points of view. The compared schemes use the OMP algorithm [20] to nonlinearly recover all blocks. In all experiments, the block size $B$ is set to 16, and we set the total measurement rate ($r = M/N$) between 0.1 and 0.5. The PSNR in dB and the Structural SIMilarity (SSIM) index [22] between the reconstructed image and the original image are used in the objective evaluation. All experiments are conducted under the following computer configuration: Intel(R) Core(TM) i7 @ 3.30 GHz CPU, 8 GB RAM, Microsoft Windows 7 64-bit, and MATLAB Version 7.6.0.324 (R2008a).
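For reference, the PSNR between two images can be computed as follows (our helper code; SSIM follows the standard index of [22], for which public implementations exist):

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """PSNR in dB between two same-sized grayscale images."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)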

4.1. Subjective Evaluation

Figures 3, 4, and 5 present the visual reconstruction results for Lenna, Barbara, and Mandrill produced by the various CS-based codecs at different measurement rates. When the measurement rate is 0.1, the CS measurements of each block are not enough to guarantee the convergence of the OMP algorithm for the BCS, V-ABCS, and S-ABCS systems; thus many reconstructed blocks lose structural details. The images reconstructed by our ABCS system have better object surfaces and edges, although many blocking artifacts remain in the whole image. As the measurement rate increases, the images reconstructed by the BCS, V-ABCS, and S-ABCS systems improve significantly, but many blocking artifacts remain, and some blurs occur in edge and texture regions. Although our system cannot fully recover fine texture details (e.g., the periodic stripes near the trouser legs in Barbara), it effectively reduces blurs in edge regions. For Mandrill, which contains a great deal of fine hair, our system also recovers the hair more finely than the other systems at every measurement rate. On the whole, our ABCS system guarantees better visual quality.

4.2. Objective Evaluation

Table 1 compares the PSNR for the test images at measurement rates of 0.1, 0.3, and 0.5. The results indicate that our ABCS system achieves the highest average PSNR for all test images at every measurement rate; for example, when the measurement rate is 0.1, our system is 5.18 dB on average higher than S-ABCS for Lenna. For Barbara, our system cannot obtain a higher PSNR than the other systems at measurement rates of 0.3 and 0.5, owing to its limited ability to recover periodic patterns. Table 2 presents the SSIM values for the test images at measurement rates of 0.1, 0.3, and 0.5. We can see that our system outperforms the other systems in most cases. For Lenna, our system is 0.2649, 0.0785, and 0.0396 on average higher than S-ABCS at measurement rates of 0.1, 0.3, and 0.5, respectively. There is still SSIM degradation for our system when reconstructing Barbara at a high measurement rate. Table 3 lists the average reconstruction time of the various systems over all test images at measurement rates from 0.1 to 0.5. Our system requires only 1.74 s on average to reconstruct a 512 × 512 image, while the other systems need about 5 s on average. The execution time of our system increases with the measurement rate, but only slightly. From the above, we can see that our ABCS system provides better objective quality while guaranteeing low computational complexity.

5. Conclusion

In this paper, we propose an ABCS system that adaptively measures each block according to its spatial entropy and reconstructs the image using a linear model. Spatial entropy reveals the variation of block sparsity and is a simple feature that reflects the statistics of the image. Based on the distribution of spatial entropy, we observe image blocks at different measurement rates. The entropy-based measuring reduces the redundancy of block measurements. To reduce the computational complexity of decoding, we adopt a linear model to reconstruct each block. Experimental results show that our ABCS system improves the quality of the reconstructed image from both subjective and objective points of view while guaranteeing low computational complexity.

As the research in this paper is exploratory, there are many intriguing questions for our future work. First, the theory of adaptive block CS needs to be developed. Second, computing the entropy directly in the measurement domain is a target of our future work. Finally, we hope to extend this work to CS of color images and videos.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China, under Grants nos. 61501393 and 61601396, in part by the Key Scientific Research Project of Colleges and Universities in Henan Province of China, under Grant no. 16A520069, and in part by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences, under Grant no. 17YJCZH123.