Face Image Compression and Reconstruction Based on Improved PCA

Face recognition technology has many uses in real-world applications and has generated extensive interest in recent years. However, the amount of data in a digital image is growing explosively, consuming substantial storage and transmission resources, and an image's data representation contains a great deal of redundancy. Image compression has therefore become a hot topic. Principal component analysis (PCA) can effectively remove the correlation within an image and condense the image information into a characteristic image with several main components. At the same time, it can restore different data images according to their principal components and meet the needs of image compression and reconstruction at diverse levels. This paper introduces an improved PCA algorithm. The covariance matrix, calculated from a batch of training samples, is only an approximation of the true covariance matrix: relative to the dimension of the covariance matrix, the number of training samples is often too small, so it is difficult to estimate the covariance matrix accurately. The improved PCA algorithm, called 2DPCA, solves this problem effectively. By comparing it with several other improved PCA algorithms, we show that 2DPCA has a better dimensionality-reduction effect. Compared with the PCA algorithm, 2DPCA has a lower root-mean-square error under the same noise condition.


Introduction
At present, the amount of digital image data is soaring, occupying a large amount of storage space and consuming increasing transmission resources [1]. Due to the high correlation of adjacent pixels, there is a great deal of redundancy in image data representation. The principal component analysis (PCA) method can remove the correlation of the image data [2] and effectively compress the image information into several main components. At the same time, it can restore different data images according to their number of principal components, thus meeting the needs of image compression and reconstruction at different levels. Moreover, PCA is often used for feature selection [3][4][5].
Among active subspace research topics, face images are the researchers' top concern; they have drawn wide attention and have been studied deeply by the academic community. Feature extraction and dimension reduction are the key steps of face compression [6]. However, the PCA algorithm has many shortcomings. The common PCA compression method cannot achieve good results under external conditions such as changes of facial expression and strong light. Another important factor to consider is the dimension of the pictures [7]. Therefore, it is necessary to study an improved PCA algorithm that can enhance compression efficiency and improve reconstruction accuracy [8]. Image compression and reconstruction can also be used in drones [9]. It should be noted that self-adaptive parameters are a promising direction for optimizing PCA [10][11][12][13].
The main work of this paper is to study and analyze the PCA algorithm for image compression and reconstruction. This paper focuses on improved PCA algorithms, including 2DPCA, Mat PCA and Module PCA. The rest of the article is structured as follows: Section 2 introduces related work. Section 3 describes PCA and the improved PCA. Section 4 introduces the design of the experiments and analyzes the experimental results. Finally, Section 5 provides the conclusions.

Related Work
PCA, also called principal component analysis, is a statistical method that converts the original multiple variables into several new composite variables [14]. These new variables are uncorrelated with each other and can effectively represent the information of the original variables. PCA can remove the correlation between image data and condense the image information into a characteristic image consisting of several main components, effectively realizing image compression. At the same time, it can recover different data images according to the number of principal components, meeting the needs of image compression and reconstruction at diverse levels. Deep learning can also be used to conduct such data analysis [15,16], and PCA can be used to preprocess multi-objective optimization algorithms [17]. The basic PCA image compression algorithm can achieve an ideal compression ratio, but it lacks a good standard for selecting the number of retained features, its signal-to-noise ratio is very low, and non-linear or non-stationary image signals are not considered. The algorithm can also be optimized by evolutionary algorithms and deep learning [18,19].

Improved PCA Algorithms
Adjacent pixels carry redundant information in a face image. Predictive coding subtracts the predicted value I_P from the actual value I to obtain the difference DI, known as the prediction error; only the prediction error is then compressed and encoded. Since the predicted value of each pixel uses only previously encoded pixels, this coding process is said to be causal. The decoding process based on causal encoding is shown in Fig. 1.
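As a minimal sketch of causal predictive coding, the following uses a simple left-neighbor predictor; the choice of predictor is an assumption for illustration, since the paper does not specify one:

```python
import numpy as np

def predictive_encode(img):
    """Causal predictive coding: predict each pixel from its left
    neighbor and keep only the prediction error DI = I - I_P."""
    img = img.astype(np.int16)          # allow negative errors
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]           # causal left-neighbor predictor
    return img - pred                   # prediction error DI

def predictive_decode(err):
    """Invert the causal predictor: a cumulative sum along each row
    rebuilds the original pixel values exactly."""
    return np.cumsum(err, axis=1).astype(np.uint8)
```

The prediction errors cluster near zero for smooth images, which is what makes them cheaper to entropy-code than the raw pixels.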
Another approach to image compression is transformation. The image is first passed through some transform (linear or nonlinear) to obtain a set of coefficients, which are then quantized to produce the compressed image. At the decoding end, the encoded coefficients are inverse-quantized, and the actual image is produced by the inverse transformation. A typical transformation-based compression system is shown in Fig. 2.
Both predictive coding and transform coding have their own advantages. The former is relatively simple to implement, and the algorithm itself adapts to the original information of the image. The latter generally achieves a higher compression ratio, but at the cost of the complexity of the transform computation, which also makes implementation more complex.

Image compression is usually evaluated in two respects: compression performance and compressed image quality. Compression performance is usually measured by the compression ratio C_R or the relative data redundancy R. The compression ratio is defined as the ratio between the total amount of original data b and the total amount of compressed data b':

C_R = b / b'.

Relative data redundancy R is defined as the percentage of the data reduction relative to the original amount of data:

R = 1 - 1 / C_R.

The quality of the compressed image can be evaluated either subjectively or objectively. Common objective quality measures include the root-mean-square error, SNR (signal-to-noise ratio) and PSNR (peak signal-to-noise ratio). Let I(x, y) and Î(x, y) denote the decompressed image and the original image respectively. In the PSNR formula

PSNR = 10 log_10((2^N - 1)^2 / MSE),

N is the number of bits per pixel, usually 8, and MSE is the mean square error. Although objective quality assessment is convenient and feasible, it cannot truly reflect people's subjective perception of the image, so subjective quality assessment is more accurate. In this paper, the root-mean-square error is used primarily to evaluate the quality of image reconstruction.
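The three quantities above can be sketched directly from their definitions; the function names here are hypothetical, chosen only for this illustration:

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    """C_R = b / b' and relative redundancy R = 1 - 1/C_R."""
    cr = original_bytes / compressed_bytes
    return cr, 1.0 - 1.0 / cr

def rmse(img, rec):
    """Root-mean-square error between two images."""
    diff = img.astype(float) - rec.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(img, rec, bits=8):
    """PSNR = 10 log10((2^N - 1)^2 / MSE), with N bits per pixel."""
    mse = np.mean((img.astype(float) - rec.astype(float)) ** 2)
    peak = (2 ** bits - 1) ** 2
    return float(10.0 * np.log10(peak / mse))
```

For example, compressing 1000 bytes to 250 gives C_R = 4 and R = 0.75, i.e. 75% of the data was redundant.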

PCA
The K-L transformation is one of the main procedures of the PCA method, and it is necessary for realizing facial image compression and reconstruction; the method is classical and easy to implement. The basic PCA method first selects some images as training images. Assuming each training image has a size of N × N, the pixels of all its columns can be joined end to end, so each image is stretched into a column vector of length N², which can be viewed as a point in N²-dimensional space. Because the training images are similar to one another, the stretched vectors are similar as well; their distribution in the high-dimensional space is neither random nor chaotic but strongly correlated, so principal component analysis can describe the images with a low-dimensional subspace. Assuming the training set contains m images, let X_i, i ∈ {1, 2, 3, …, m}, be the image vector of the i-th training sample and X = [X_1, X_2, …, X_m]; the average image vector u of all training samples is

u = (1/m) Σ_{i=1}^{m} X_i.

PCA requires the total scatter matrix of the training sample set, namely the covariance matrix

S = (1/m) Σ_{i=1}^{m} (X_i − u)(X_i − u)^T.

This is a matrix of dimension N² × N², and the principal component analysis method needs to calculate its eigenvalues and orthonormal eigenvectors. Since N² is very large in practical applications, it is very difficult to compute these directly. SVD decomposition solves this problem well.

SVD Decomposition
SVD decomposition is a common method for dealing with high-dimensional matrices. It can effectively decompose a high-dimensional matrix into a low-dimensional space, so the eigenvalues of the high-dimensional matrix can be found easily. The relevant theory is as follows.
If A is an n × r matrix of rank r, then there exist two orthogonal matrices U = [u_1, u_2, ⋯, u_r] and V = [v_1, v_2, ⋯, v_r] such that

A = U Λ^{1/2} V^T,

where λ_i (i = 1, 2, ⋯, r) are the non-zero eigenvalues of the matrices AA^T and A^T A, and u_i and v_i are the eigenvectors of AA^T and A^T A respectively. This decomposition is called the singular value decomposition (SVD) of matrix A, and the singular values are √λ_i. Therefore, we construct the matrix

R = A^T A,

whose eigenvalues λ_i and corresponding orthonormal eigenvectors v_i (i = 1, 2, ⋯, M) are easy to find. From the above inference, the orthonormal eigenvectors of the large matrix are

u_i = (1/√λ_i) A v_i, i = 1, 2, ⋯, M.

Arranging the eigenvalues from large to small, λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_M, with corresponding eigenvectors u_i, each face image can be projected into the subspace spanned by {u_1, u_2, ⋯, u_M}. Therefore, each face image corresponds to a point in the subspace and, similarly, any point in the subspace corresponds to an image. With such a subspace spanned by {u_1, u_2, ⋯, u_M}, any face image can be projected onto it to obtain a set of coordinate coefficients, which indicate the position of the image in the subspace. In other words, any face image can be represented as a linear combination of {u_1, u_2, ⋯, u_M}, and its weighting coefficients are the expansion coefficients of the K-L transformation, which can also be called the algebraic features of the image.
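A minimal sketch of this small-matrix trick, under the assumption that training images arrive as columns of a matrix (the function name is hypothetical): rather than eigendecomposing the huge N² × N² covariance, we eigendecompose the small M × M matrix R = D^T D and map its eigenvectors back with u_i = (1/√λ_i) D v_i.

```python
import numpy as np

def eigenfaces_svd(X):
    """X: (N2, M) matrix whose columns are stretched training images.
    Returns the mean image vector and orthonormal eigenfaces U."""
    u_mean = X.mean(axis=1, keepdims=True)
    D = X - u_mean                            # centered data, N2 x M
    R = D.T @ D                               # small M x M matrix
    vals, V = np.linalg.eigh(R)               # eigh returns ascending order
    vals, V = vals[::-1], V[:, ::-1]          # sort descending
    keep = vals > 1e-10                       # drop (near-)zero eigenvalues
    U = D @ V[:, keep] / np.sqrt(vals[keep])  # u_i = D v_i / sqrt(lambda_i)
    return u_mean, U
```

The columns of U come out orthonormal because v_i^T R v_j = λ_j δ_ij, so the rescaling by 1/√λ_i normalizes each mapped eigenvector.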
For any face image f to be compressed, its coefficient vector can be obtained by projecting it into the feature subspace:

y = U^T (f − u).

The resulting coefficient vector y can be regarded as the compression. Since the dimension m of the coefficient vector is usually much smaller than that of f, storage space is saved greatly. The image can be transformed back from y as

f̂ = U y + u.
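The projection and its inverse can be sketched in a few lines, assuming U has orthonormal columns and u is the mean image vector from training (function names are illustrative):

```python
import numpy as np

def pca_compress(f, u_mean, U):
    """Coefficient vector y = U^T (f - u_mean): the compressed form."""
    return U.T @ (f - u_mean)

def pca_reconstruct(y, u_mean, U):
    """Approximate reconstruction f_hat = U y + u_mean."""
    return U @ y + u_mean
```

If f lies in the span of the eigenfaces (plus the mean), the round trip is exact; otherwise f̂ is the closest point in the subspace.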

Improved Algorithm
The basic principal component analysis method has some disadvantages. When the illumination or position of a face image changes, basic PCA cannot capture these changes effectively. Studies have shown that basic PCA can hardly capture even the simplest invariances between images unless this information is included in the training images. In addition, basic PCA stretches the pixels of an image (usually by joining its columns head to tail) into a vector of very high dimension. When the image is larger, the dimension of the stretched vector becomes very large, to say nothing of the covariance matrix of the training images. Although SVD decomposition can be used to approximate the feature images, avoiding the explicit construction of the large covariance matrix, it is not accurate in many cases. Due to these deficiencies in the PCA method, an improved method named 2DPCA is proposed in this paper.
Let X denote an n-dimensional normalized column vector. The 2DPCA algorithm projects the image A (an m × n matrix) onto X according to the formula

Y = A X,

obtaining an m-dimensional projection vector Y, which is called the projection feature vector of image A. Finding a good projection direction X is a key step of the 2DPCA method, and the quality of a projection direction X can be judged by the dispersion of the training samples after they are projected onto it: the higher the dispersion of the projected samples, the better the projection direction X.
The dispersion of the projected samples can be described by the trace of the covariance matrix of the projected vectors. That is, the criterion to maximize is

J(X) = tr(S_x),

where S_x is the covariance matrix of the projected vectors of the training samples and tr(S_x) is the trace of S_x.
The physical meaning of maximizing this criterion is to find a projection direction that maximizes the dispersion among the vectors obtained by projecting all training samples onto it. The covariance matrix S_x can be expressed as

S_x = E[(Y − EY)(Y − EY)^T] = E[((A − EA)X)((A − EA)X)^T],

so that

tr(S_x) = X^T G_t X, where G_t = (1/M) Σ_{j=1}^{M} (A_j − Ā)^T (A_j − Ā).

The matrix G_t is called the image covariance matrix. By definition, G_t is an n × n nonnegative definite matrix, and it can be computed directly from the training sample images: there are M training image samples, the m × n matrix A_j (j = 1, 2, 3, …, M) represents the j-th training image, and Ā represents the average image of all training samples.
The function J(X) = X^T G_t X is the generalized total scatter criterion. The normalized vector X_opt that maximizes J(X) is called the optimal projection axis. According to the literature, X_opt is the eigenvector corresponding to the maximum eigenvalue of G_t. Generally speaking, one optimal projection direction is not enough; it is necessary to find a set of projection directions {X_1, X_2, ⋯, X_d} that maximize J(X) subject to mutual orthogonality. In fact, the set {X_1, X_2, ⋯, X_d} satisfying these conditions consists of the orthonormal eigenvectors of G_t corresponding to its first d largest eigenvalues.
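Putting the 2DPCA steps together, a minimal sketch (function names are illustrative, and reconstruction here uses the standard Â = Y X^T back-projection):

```python
import numpy as np

def train_2dpca(images, d):
    """Build the image covariance matrix G_t from M training images of
    size m x n and return the first d projection axes X_1..X_d."""
    A_bar = np.mean(images, axis=0)           # average image
    n = images.shape[2]
    Gt = np.zeros((n, n))
    for A in images:
        D = A - A_bar
        Gt += D.T @ D                         # (A_j - A_bar)^T (A_j - A_bar)
    Gt /= len(images)
    vals, vecs = np.linalg.eigh(Gt)           # ascending eigenvalues
    return vecs[:, ::-1][:, :d]               # top-d eigenvectors of G_t

def project_2dpca(A, X):
    """Feature matrix Y = A X (m x d) for image A (m x n)."""
    return A @ X

def reconstruct_2dpca(Y, X):
    """Approximate image from its 2DPCA features: A_hat = Y X^T."""
    return Y @ X.T
```

With d = n the axes form a full orthogonal basis and the reconstruction is exact; choosing d < n trades reconstruction error for compression.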

Experimental Results
This section summarizes the experimental results of the above algorithms, including the degree of image compression and the root-mean-square error of image reconstruction. This paper mainly assesses the quality of the algorithms by the size of the reconstruction error.
From Tab. 1, we can see that the PCA algorithm can achieve an image compression ratio of about 3 and performs very stably on images from the ORL database. As shown in Tab. 2, adding noise makes no noticeable difference to the compression effect.
The tables above show that, no matter what noise is added, the PCA image compression result is not significantly affected. Adding noise, however, has a large impact on image reconstruction. Tab. 3 shows the root-mean-square error of image reconstruction when noise is added.
In Tab. 4, the mean value of the Gaussian noise is 0 by default. From Tab. 4, we can see that the 2DPCA algorithm also achieves better results when processing noisy images. Compared with the PCA algorithm, 2DPCA has a lower root-mean-square error under the same noise condition. From Tab. 5, we can see that, compared with Mat PCA, 2DPCA also has a lower root-mean-square error under the same noise.

Conclusion
This article presents an image compression and reconstruction algorithm based on principal component analysis and its improved variant. PCA effectively reduces the dimension of the data while minimizing the error between the extracted components and the original data, so it can be used for data compression and feature extraction. Especially with the development of multimedia image data and information technology, abundant image media contain a great deal of information; in order to store and transmit these image data effectively, more and more attention is being paid to image compression technology. Image compression and reconstruction based on PCA and its improved algorithm is proposed in this paper. The experimental results demonstrate that the implementation method is simple, that it can realize image compression effectively and restore different data images according to the number of principal components, and that it satisfies the needs of image compression and reconstruction at different levels.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.