Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

In real-world applications, face images vary with illumination, facial expression, and pose. More training samples are able to reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and put both kinds of samples into a new training set. Face recognition experiments show that our method obtains high classification accuracy.


Introduction
The conventional minimum squared error (MSE) algorithm has been widely used for pattern recognition, and it also performs well in face classification. MSEC [1,2] takes the sample and its class label as the input and output, respectively, and tries to obtain the mapping that best transforms the inputs into the corresponding outputs. MSE has many advantages in classification. The method is simple and easy to operate. MSEC is not only suitable for two-class classification [3] but can also be applied to multiclass classification [4][5][6]. MSEC has been extended to nonlinear methods. Kernel MSE (KMSE) [7,8] is a well-known nonlinear extension of MSEC. Ruiz and Lopez-de-Teruel [9] proposed a KMSE method in which the solution is based on the generalized inverse of the kernel matrix. Differing from MSE, KMSE tries to obtain a nonlinear mapping between the input and output. A "Lasso" method based on MSE (LMSE) [10][11][12] has been proposed for pattern recognition. LMSE tries to obtain good performance by minimizing the l1 norm of the solution vector and can be viewed as an extension of conventional MSEC. Total least squares (TLS) is also a well-known improvement of MSE. TLS [13,14] assumes that both the input and output are corrupted and that each of them can be expressed as the sum of the corresponding "true data" and "measurement noise." Other methods [15][16][17][18][19] have also been proposed to improve the MSE algorithm. For example, Xu et al. [15] proposed a modified minimum squared error (MMSE) method to improve MSE. Besides pattern recognition, MSE has been applied to other fields such as clustering, data fitting, and density estimation [20][21][22][23]. Moreover, the well-known representation based methods can be viewed as generalized MSEC methods, for example, collaborative representation classification (CRC) [24], two-phase test sample sparse representation (TPTSSR) [25], and the sparse representation-based classifier (SRC) [26].
The MSE algorithm first uses the training samples and their class labels to learn a mapping and exploits the obtained mapping to predict the class label of a test sample [27,28]. Then, MSE chooses the class label that is nearest to the predicted label. Finally, MSE assigns the test sample to the class that this label belongs to.
The conventional MSEC is limited by the number of training samples. Moreover, the images in face recognition are affected by problems such as variations of illumination, facial expression, and pose. Facial expression and pose variations can be dealt with by restricting the capture conditions to reduce errors, but illumination variation is hard to control [29][30][31]. Therefore, dealing with the illumination problem is necessary for face recognition. Shang et al. proposed an illumination face recognition algorithm based on ordinal features [32], adopting the ordinal feature as the ordinary variable. The solutions for illumination-invariant face recognition [33][34][35][36] can be classified into three kinds: methods based on normal features, modeling based on changed illumination, and enforcing a standard illumination condition.
It seems that more training samples are able to reveal more possible variations of illumination, facial expression, and pose and are beneficial for correct classification of faces. However, in the real world it is hard to capture enough samples, and a system usually has a limited number of training samples. In order to obtain better face classification, previous literature has proposed synthesizing new samples from true face images. These new samples are called virtual samples [37,38]. For example, Tang et al. [39] proposed prototype faces and an optic flow and expression ratio image based method to generate virtual samples. Ryu et al. also proposed a method to generate virtual samples. In this paper, we propose MSE classification based on mirror faces [44,45] for face recognition. We first establish the equation of the MSE and solve it. Then, we exploit the obtained solution to predict the class labels of test samples. Finally, we classify each test sample and obtain the classification accuracy. We use three databases for the experiments: the PIE database, the Yale B database, and a subset of the Yale database. Our method achieves higher classification accuracy than conventional MSE. The rest of this paper is organized as follows. Section 2 introduces our proposed method, Section 3 analyzes our method, Section 4 shows the results of experiments, and Section 5 is the conclusion.

The Proposed Method
In this section we will present the main steps of our proposed method in detail. Suppose that there are c classes and each class has n training samples, so that the total number of training samples is N = c * n. Let x1, ..., xN represent all the training samples, and we assign a class label to each class.

Main Steps of the Proposed Method.

The proposed method includes five steps. The first step generates the mirror faces of the original training samples. The second step puts both the mirror faces and the original training samples into a new set and obtains the class label of each new training sample. The third step uses the new training samples to perform the MSE algorithm and obtain the mapping. The fourth step predicts the class label of each test sample for face recognition. The last step gets the ultimate classification result. We present these steps as follows.
Step 1. Generate the mirror face of each original training sample. For an original training sample x with H rows and V columns, the corresponding mirror face y is defined by y(i, j) = x(i, V - j + 1), i = 1, ..., H, j = 1, ..., V.

Step 2. Use both the original training samples and the mirror faces to construct a new training set and assign its class label matrix. The number of new training samples is M = 2N. Transforming every original training sample and every mirror face into an (H * V)-dimensional column vector, we obtain the (H * V) x 2N training sample matrix X = (x1, y1, x2, y2, ..., xN, yN). We use a c-dimensional vector li, i = 1, ..., 2N, to represent the class label of each training sample, and the class label matrix is defined as L = (l1 l2 ... l2N); for the definition of li please see Section 2.2.
Step 3. Use the new training sample set to perform the MSE algorithm for face recognition. From this step, we obtain the mapping for test samples. For the algorithm, please see Section 2.2.
Step 4. Exploit the mapping matrix obtained in Step 3 to classify the test samples: predict the class label of each test sample and assign it to the class whose label is nearest to the prediction.
Step 5. Obtain the ultimate classification result. Compare the true class label of each test sample with the class label predicted in Step 4. If they are the same, we consider that the test sample is correctly classified. Finally, we can obtain the accuracy of face recognition.
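Steps 1 and 2 above can be sketched as follows. This is a minimal NumPy sketch, not the authors' code; it assumes grayscale images given as 2-D arrays and 0-based integer class labels.

```python
import numpy as np

def mirror_face(img):
    """Horizontally flip a face image: y(i, j) = x(i, V - j + 1)."""
    return img[:, ::-1]

def build_training_set(images, labels, n_classes):
    """Stack the original samples and their mirror faces as columns of X,
    with the matching one-hot class labels as columns of L."""
    cols, label_cols = [], []
    for img, k in zip(images, labels):          # k is a 0-based class index
        for face in (img, mirror_face(img)):
            cols.append(face.reshape(-1))       # (H * V)-dimensional column
            one_hot = np.zeros(n_classes)
            one_hot[k] = 1.0
            label_cols.append(one_hot)
    X = np.stack(cols, axis=1)                  # (H * V) x 2N
    L = np.stack(label_cols, axis=1)            # c x 2N
    return X, L
```

Interleaving each original sample with its mirror face reproduces the column order X = (x1, y1, x2, y2, ...) used in Step 2.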

Minimum Squared Error (MSE) Algorithm.
In this subsection, we present the MSE algorithm for face recognition. We use a c-dimensional vector to represent the class label of each class. If a sample is from the kth class, then the kth element of its class label is one and the other elements are all zeros. For example, if a sample is from the first class, we take l = [1 0 0 ... 0]^T as its class label. We assume that a matrix W can transform each training sample into its class label, so MSE has the following equation:

W^T xi = li, i = 1, ..., 2N. (1)

We refer to W as the transform matrix, and li is the class label of the ith training sample. As (1) cannot be directly solved, we convert it into the following minimization problem:

min over W of ||W^T X - L||^2. (2)

We can obtain W using

W = (X X^T + μI)^{-1} X L^T, (3)

where μ and I denote a small positive constant and the identity matrix, respectively. MSE classification classifies a test sample z, in the form of a column vector, as follows: the class label of z is first predicted using

lz = W^T z. (4)

Then, the distances between lz and the class labels of the c classes are calculated. If the distance to the class label of the kth class is the minimum, which means lz is closest to the class label of the kth class, then z is classified into the kth class.
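The closed-form solution and the distance-based classification rule described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; it assumes X holds training samples as columns and L holds the matching one-hot labels as columns.

```python
import numpy as np

def train_mse(X, L, mu=0.01):
    """Solve W = (X X^T + mu * I)^{-1} X L^T for the transform matrix."""
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + mu * np.eye(d), X @ L.T)   # d x c

def classify(W, z, n_classes):
    """Predict l_z = W^T z, then assign z to the class whose one-hot
    label vector is nearest (in Euclidean distance) to the prediction."""
    l_z = W.T @ z
    dists = [np.linalg.norm(l_z - np.eye(n_classes)[k]) for k in range(n_classes)]
    return int(np.argmin(dists))
```

Note that minimizing the distance to a one-hot label is equivalent to picking the largest element of the predicted vector, since all one-hot labels have the same norm.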

Analysis of the Proposed Method
In this section we show the rationales of the proposed method. First, the mirror faces in the proposed method indeed reflect some possible appearances of the face that are not shown by the original training samples. Figure 1 shows some original training samples from the PIE face database and the mirror faces generated from them. Figure 2 shows some original training samples from the Yale B face database and the mirror faces generated from them. Figure 3 shows some original training samples from the Yale database and the mirror face images generated from the original samples. It seems that the mirror training samples have different illumination and features in comparison with the original training samples. Figure 4 shows some test samples from the PIE face database from the same class. Figure 5 shows some test samples from the Yale B face database from the same class. Figure 6 shows some test samples from the Yale face database from the same class. Mirror faces generated from the original samples do not yield high classification accuracy in all cases; they perform well mainly under varying illumination. Figure 7 shows some original training samples from the FERET face database under the same illumination condition and the mirror faces generated from them. We can see that these mirror face training samples do not show other obvious features. Therefore, the PIE database and the Yale B database are better suited for our experiments.
The other rationale of the proposed method is that it uses the MSE algorithm for face recognition. MSE is easy to compute and reduces the effect caused by errors. Our method uses all training samples and tries to minimize the sum of the deviations between the obtained class labels and the true class labels. MSEC is able to convert every training sample. By the mapping in (4), we obtain the predicted class label of a test sample, where μ = 0.01. Then, we choose the class whose label is at the minimum distance as its class label. Finally, we judge whether the classification is correct.

Experimental Results
In this paper, we use three databases to conduct the experiments. Every image was resized to 32 x 32. We, respectively, took the first 2, 3, and 4 face images of each subject as the original training samples and took the remaining face images as the test samples. Table 3 shows the experiments on the Yale database. According to the experimental results, our method performs better than conventional MSEC, and especially better than conventional MSEC using only mirror images as training samples.
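The experimental protocol of taking the first few images of each subject for training can be sketched as follows. These are hypothetical NumPy helpers (not the authors' code), assuming each subject's images are given as a list.

```python
import numpy as np

def split_per_subject(images_by_subject, n_train):
    """Take the first n_train images of each subject as training samples
    and the remaining images of that subject as test samples."""
    train, test = [], []
    for subject_id, imgs in enumerate(images_by_subject):
        for i, img in enumerate(imgs):
            (train if i < n_train else test).append((img, subject_id))
    return train, test

def accuracy(predicted, true):
    """Fraction of test samples whose predicted label matches the true one."""
    predicted, true = np.asarray(predicted), np.asarray(true)
    return float(np.mean(predicted == true))
```

Running the split with n_train = 2, 3, and 4 reproduces the three training-set sizes used in the experiments.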

Conclusions
We propose a very promising method to exploit limited training samples for MSEC face recognition. The new training samples generated in this paper can well exploit the mirror structure of the face. The mirror face is helpful for overcoming the drawback of limited training samples in real-world face recognition systems. MSEC is able to obtain high classification accuracy because the class label nearest to the predicted label of a test sample provides useful information for classifying it.

Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.