FACE RECOGNITION USING COMPLEX VALUED BACKPROPAGATION

Face recognition is a biometric research area that remains of interest. This study discusses the Complex-Valued Backpropagation algorithm for face recognition. Complex-Valued Backpropagation is an algorithm modified from the Real-Valued Backpropagation algorithm in which the weights and activation functions are complex. The dataset used in this study consists of 250 images classified into 5 classes. The performance of face recognition using Complex-Valued Backpropagation is also compared with the Real-Valued Backpropagation algorithm. Experimental results show that Complex-Valued Backpropagation achieves better accuracy than Real-Valued Backpropagation.


Introduction
Face recognition is a biometric research area that remains interesting and challenging. Face recognition is already used in many applications such as security systems, credit card verification, and criminal identification. Recently, many methods and algorithms have been applied to face recognition: Chittora et al. used support vector machines [1], Kathirvalavakumar et al. used wavelet packet coefficients and an RBF network [2], Cho et al. used PCA and Gabor wavelets [3], and Sharma used PCA and SVM [4]. Almost all of these authors study face recognition using real-valued machine learning.
Eiichi Goto introduced phase information for computation in 1954 with his invention of the "Parametron". He utilized the phase of a high-frequency carrier to represent binary or multivalued information. However, the computational principle employed there was "logic" of the Turing or von Neumann type, based on symbol processing, so he could not make further extensive use of the phase [5]. Currently, many researchers are extending the world of computation to pattern-processing fields based on a novel use of the structure of complex-amplitude (phase and amplitude) information. Complex Valued Neural Networks (CVNN) have recently attracted broad research interest, for example in seismics, sonar, and radar. One of the learning processes that utilizes a complex-valued neural network is Complex Valued Backpropagation (CVBP). CVBP is an algorithm developed from the Real Valued Backpropagation algorithm in which the weights and activation functions are complex. According to research conducted by Zimmerman, the average learning speed of the CVBP algorithm is better than that of Real Valued Backpropagation (RVBP) [11]. Based on that literature, this study discusses the performance comparison between the Complex Valued Backpropagation (CVBP) and Real Valued Backpropagation (RVBP) algorithms for face recognition. This study modifies the conversion from complex values in the hidden layer to real values in the output layer by using the complex modulus, which is then used as the input variable of the activation function. This process is simpler than the CVBP proposed by Zimmerman, which keeps complex values until they are mapped into the output classes. This paper is organized as follows: Section II describes the classification process using Real Valued Backpropagation (RVBP) and Complex Valued Backpropagation (CVBP). The experimental results are given in Section III. Finally, Section IV presents the conclusions of the study.

Feature Extraction
Principal component analysis (PCA) is a feature extraction technique that can reduce data dimensionality without losing the important information in the data [4]. In a PCA face recognition system, every face image is represented as a vector [12]. Let there be N face images, each a 2-dimensional array of intensity values of size m×n. An image can be converted into a vector $x_i$ of B (B = m×n) pixels, where $x_i = (x_{i1}, x_{i2}, \dots, x_{iB})^T$. Define the training set of N images by $X = (x_1, x_2, \dots, x_N) \subset \mathbb{R}^B$. The covariance matrix $C$ is defined in Eq. 1.

$$C = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^T \qquad (1)$$
where $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ is the mean image of the training set. Then, the eigenvalues and eigenvectors are calculated from the covariance matrix $C$. Let $E = (e_1, e_2, \dots, e_r) \subset \mathbb{R}^B$ (r < N) be the r eigenvectors corresponding to the r largest nonzero eigenvalues. Each of the r eigenvectors is called an eigenface. Now, each face image of the training set is projected into the eigenface space to obtain its corresponding eigenface-based feature $y_i \in \mathbb{R}^r$, which is defined in Eq. 2.

$$y_i = E^T \Phi_i \qquad (2)$$

where $\Phi_i = x_i - \bar{x}$ is the mean-subtracted image of $x_i$.
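The eigenface procedure above can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' code; it uses the usual eigenface assumption that N is much smaller than B, so the small N×N Gram matrix is decomposed instead of the B×B covariance matrix.

```python
import numpy as np

def pca_eigenfaces(X, r):
    """Eigenface-style PCA. X is an (N, B) matrix of flattened face images.

    Returns the mean image, the top-r eigenfaces, and the projected features.
    """
    N, B = X.shape
    x_bar = X.mean(axis=0)                  # mean image used in Eq. 1
    Phi = X - x_bar                         # mean-subtracted images
    # For N << B, eigendecompose the small (N, N) Gram matrix instead of
    # the (B, B) covariance matrix -- the classic eigenface trick.
    L = Phi @ Phi.T / N
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:r]      # indices of the r largest
    E = Phi.T @ vecs[:, order]              # map eigenvectors back to image space
    E /= np.linalg.norm(E, axis=0)          # unit-norm eigenfaces, shape (B, r)
    Y = Phi @ E                             # eigenface-based features (Eq. 2)
    return x_bar, E, Y
```

Decomposing the Gram matrix gives the same nonzero eigenvalues as the covariance matrix at a fraction of the cost when each image has far more pixels than there are training images.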

Real Valued Backpropagation (RVBP)
Real Valued Backpropagation is a multilayer network model in which the weights and activation function are real [13]. RVBP is a supervised learning algorithm: both input vectors and target output vectors are provided for training the network [14]. The error at the output layer is calculated from the network output and the target output. The error is then back-propagated to the intermediate layers, allowing the incoming weights of those layers to be updated [15]. Figure 1 shows the RVBP model. The input, weights, thresholds, and output signals are all real numbers. The activities of the hidden neuron p and the output neuron m are defined in Eq. 3 and Eq. 4.

$$u_p = \sum_{n} v_{pn} x_n + \theta_p \qquad (3)$$
$$s_m = \sum_{p} w_{mp} h_p + \gamma_m \qquad (4)$$

where $v_{pn}$ and $w_{mp}$ are the real-valued weights connecting input neuron n to hidden neuron p and hidden neuron p to output neuron m, $x_n$ is the input signal from neuron n, $h_p$ is the output signal of hidden neuron p, and $\theta_p$, $\gamma_m$ are the real-valued thresholds of neurons p and m. To obtain the output signal, the activation function in Eq. 5 is applied to the activity value.

$$f(u) = \frac{1}{1 + \exp(-u)} \qquad (5)$$

The function above is called the binary sigmoid function. For the sake of simplicity, the networks used both in the analysis and the experiments have three layers. We use $v_{pn}$ for the weight between input neuron n and hidden neuron p, $w_{mp}$ for the weight between hidden neuron p and output neuron m, $\theta_p$ for the threshold of hidden neuron p, and $\gamma_m$ for the threshold of output neuron m. Let $x_n$, $h_p$, and $o_m$ denote the output values of input neuron n, hidden neuron p, and output neuron m, respectively. Let $u_p$ and $s_m$ denote the internal potentials of hidden neuron p and output neuron m, respectively. These can be defined as $u_p = \sum_n v_{pn} x_n + \theta_p$, $s_m = \sum_p w_{mp} h_p + \gamma_m$, $h_p = f(u_p)$, and $o_m = f(s_m)$. Let $\delta_m = t_m - o_m$ denote the error between the actual pattern and the target pattern of output neuron m. The real-valued neural network error is defined by Eq. 6.

$$E = \frac{1}{2} \sum_{m=1}^{N} \delta_m^2 \qquad (6)$$
where N is the number of output neurons. Next, we define a learning rule for the RVBP model described above. The weights and thresholds should be updated according to Eq. 7 and Eq. 8.

$$\Delta w_{mp} = -\eta \frac{\partial E}{\partial w_{mp}}, \qquad \Delta \gamma_m = -\eta \frac{\partial E}{\partial \gamma_m} \qquad (7)$$
$$\Delta v_{pn} = -\eta \frac{\partial E}{\partial v_{pn}}, \qquad \Delta \theta_p = -\eta \frac{\partial E}{\partial \theta_p} \qquad (8)$$

where $\eta$ is the learning rate. These equations can be expressed in expanded form as Eq. 9 and Eq. 10.

$$\Delta w_{mp} = \eta\, \delta_m\, o_m (1 - o_m)\, h_p \qquad (9)$$
$$\Delta v_{pn} = \eta \left( \sum_m \delta_m\, o_m (1 - o_m)\, w_{mp} \right) h_p (1 - h_p)\, x_n \qquad (10)$$
The RVBP algorithm can be written as follows: (1) Initialization: set all weights and thresholds to random real numbers; (2) Presentation of inputs and desired outputs (targets): present the input vector $x_1, x_2, \dots$ and the corresponding desired target $t_1, t_2, \dots$ one pair at a time, where N is the total number of training patterns; (3) Calculation of actual outputs: use the formulas in Eq. (3)-(5) to calculate the output signals; (4) Update of weights and thresholds: use the formulas in Eq. (7)-(10) to calculate the adapted weights and thresholds.
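The RVBP training loop described above (forward pass, error back-propagation, weight and threshold updates) can be sketched in NumPy as follows. The names v, w, theta, and gamma mirror the text; the per-pattern update loop is a standard illustrative sketch, not the authors' implementation.

```python
import numpy as np

def sigmoid(u):
    """Binary sigmoid activation, Eq. 5."""
    return 1.0 / (1.0 + np.exp(-u))

def rvbp_train(X, T, hidden, eta=0.1, epochs=200, seed=0):
    """Train a 3-layer real-valued backpropagation network.

    X: (P, n_in) real inputs; T: (P, n_out) targets in [0, 1].
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    v = rng.normal(scale=0.1, size=(n_in, hidden))   # input -> hidden weights
    theta = np.zeros(hidden)                         # hidden thresholds
    w = rng.normal(scale=0.1, size=(hidden, n_out))  # hidden -> output weights
    gamma = np.zeros(n_out)                          # output thresholds
    for _ in range(epochs):
        for x, t in zip(X, T):                       # one pattern at a time
            h = sigmoid(x @ v + theta)               # hidden output, Eq. 3 and 5
            o = sigmoid(h @ w + gamma)               # network output, Eq. 4 and 5
            delta_o = (t - o) * o * (1 - o)          # output-layer error term
            delta_h = (delta_o @ w.T) * h * (1 - h)  # back-propagated error term
            w += eta * np.outer(h, delta_o)          # updates, Eq. 7-10
            gamma += eta * delta_o
            v += eta * np.outer(x, delta_h)
            theta += eta * delta_h
    return v, theta, w, gamma
```

A usage example: training on the logical AND function drives the mean squared error toward zero within a few thousand epochs.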

Complex Valued Backpropagation (CVBP)
Complex Valued Backpropagation is an algorithm developed from Real Valued Backpropagation in which the weights and activation function are complex [16]. The goal of CVBP is to minimize the approximation error. Several studies comparing CVBP and RVBP conclude that CVBP performs better than RVBP; this study also examines that claim. Figure 2 shows a CVBP model. The input and output signals in this study are real numbers, while the weights, thresholds, and activation function are complex numbers. The activities of the hidden neuron m and the output neuron n are defined in Eq. 11 and Eq. 12.

$$u_m = \sum_{l} v_{ml} x_l + \theta_m \qquad (11)$$
$$s_n = \sum_{m} w_{nm} h_m + \gamma_n \qquad (12)$$
where $v_{ml}$ and $w_{nm}$ are the complex-valued weights connecting input neuron l to hidden neuron m and hidden neuron m to output neuron n, $x_l$ is the input signal from neuron l, $h_m$ is the output signal of hidden neuron m, and $\theta_m$, $\gamma_n$ are the complex-valued thresholds of neurons m and n. The output is a complex number consisting of real and imaginary parts, as defined in Eq. 13.

$$z = x + iy \qquad (13)$$
where i denotes $\sqrt{-1}$. Although various output functions of each neuron can be considered, the output function used in this study is defined by Eq. 14 [17].

$$f_C(z) = f(x) + i f(y) \qquad (14)$$
where $f(u) = 1/(1 + \exp(-u))$ is called the sigmoid function and $x, y, u \in \mathbb{R}$. For the sake of simplicity, the networks used both in the analysis and the experiments have three layers. We use $v_{ml}$ for the weight between input neuron l and hidden neuron m, $w_{nm}$ for the weight between hidden neuron m and output neuron n, $\theta_m$ for the threshold of hidden neuron m, and $\gamma_n$ for the threshold of output neuron n. Let $x_l$, $h_m$, and $o_n$ denote the output values of input neuron l, hidden neuron m, and output neuron n, respectively. Let $u_m$ and $s_n$ denote the internal potentials of hidden neuron m and output neuron n, respectively. These can be defined as $u_m = \sum_l v_{ml} x_l + \theta_m$, $s_n = \sum_m w_{nm} h_m + \gamma_n$, $h_m = f_C(u_m)$, and $o_n = f_C(s_n)$. Let $\delta_n = t_n - o_n$ denote the error between the actual pattern and the target pattern of output neuron n. The complex-valued neural network error is defined by Eq. 15 [11].

$$E = \frac{1}{2} \sum_{n=1}^{N} |\delta_n|^2 \qquad (15)$$
where N is the number of output neurons. Next, we define a learning rule for the CVBP model described above. The weights and thresholds should be updated according to Eq. 16 and Eq. 17 [18].

$$\Delta w_{nm} = -\eta\, \overline{\left(\frac{\partial E}{\partial w_{nm}}\right)}, \qquad \Delta \gamma_n = -\eta\, \overline{\left(\frac{\partial E}{\partial \gamma_n}\right)} \qquad (16)$$
$$\Delta v_{ml} = -\eta\, \overline{\left(\frac{\partial E}{\partial v_{ml}}\right)}, \qquad \Delta \theta_m = -\eta\, \overline{\left(\frac{\partial E}{\partial \theta_m}\right)} \qquad (17)$$
where $\bar{z}$ denotes the complex conjugate of the complex number $z$. The CVBP algorithm can be written as follows: (1) Initialization: set all weights and thresholds to random complex numbers; (2) Presentation of inputs and desired outputs (targets): present the input vector $x_1, x_2, \dots$ and the corresponding desired target $t_1, t_2, \dots$ one pair at a time, where N is the total number of training patterns; (3) Calculation of actual outputs: use the formulas in Eq. (11)-(14) to calculate the output signals; (4) Update of weights and thresholds: use the formulas in Eq. (18) and Eq. (19) to calculate the adapted weights and thresholds.
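The feedforward pass of the modified CVBP network, with complex weights and thresholds, a split sigmoid activation in the hidden layer, and the complex modulus fed to a real sigmoid at the output layer as described in the introduction, can be sketched as follows. This is an illustrative sketch under those assumptions; the gradient-based training step is omitted.

```python
import numpy as np

def sigmoid(u):
    """Real sigmoid applied element-wise."""
    return 1.0 / (1.0 + np.exp(-u))

def cvbp_forward(x, v, theta, w, gamma):
    """Feedforward pass of the modified CVBP network.

    x is a real input vector; v, theta (input -> hidden) and w, gamma
    (hidden -> output) are complex-valued weights and thresholds.
    """
    u = x @ v + theta                            # complex hidden potential, Eq. 11
    h = sigmoid(u.real) + 1j * sigmoid(u.imag)   # split activation, Eq. 14
    s = h @ w + gamma                            # complex output potential, Eq. 12
    return sigmoid(np.abs(s))                    # modulus -> real-valued output
```

Taking the modulus before the final sigmoid is what converts the complex hidden representation into real-valued class scores, which is the simplification this study makes relative to Zimmerman's CVBP.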

Datasets
The data used in this study were taken from www-prima.inrialpes.fr [19] and consist of 250 images of 5 persons, with 50 images per person. The face images were taken from various positions, and one person wears glasses. Data selection was done by pre-processing all of the data. Pre-processing begins with grayscaling, cropping, resizing, and feature extraction using Principal Component Analysis (PCA). After pre-processing, the data were divided into two parts: a training dataset and a testing dataset. The division was based on holdout cross-validation, randomly splitting the data in a 2:1 ratio. So, 165 images were used as training data and 85 images as test data.
Sample data used in this study can be seen in Figure 3. Each face image is first converted to a grayscale image [20]. After that, each image is cropped so that the face is clearly visible, and each image is resized to 64x64 pixels. Figure 4 shows the result of pre-processing an image. The next step is feature extraction using Principal Component Analysis (PCA); Figure 5 shows the result. This study used extracted features retaining 99% of the variance.
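The pre-processing steps above (grayscaling, cropping, resizing to 64x64, flattening to a pixel vector) can be sketched as follows. The luminance weights, the crop box interface, and the block-average resize are illustrative assumptions; the study does not specify its exact cropping or resampling method.

```python
import numpy as np

def preprocess_face(rgb, box, out=64):
    """Grayscale, crop, and resize one face image to an out x out vector.

    rgb: (H, W, 3) uint8 image; box: (top, left, height, width) crop window
    whose sides are assumed to be multiples of `out`, so a simple
    block-average resize suffices.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # standard luminance grayscale
    t, l, h, w = box
    face = gray[t:t + h, l:l + w]                  # crop around the face
    bh, bw = h // out, w // out                    # block sizes for resizing
    small = face.reshape(out, bh, out, bw).mean(axis=(1, 3))  # 64x64 average
    return small.flatten()                         # vector of B = out*out pixels
```

The flattened vectors from all 250 images would then be stacked into the (N, B) matrix that PCA consumes.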

Experimental Set Up
This research was conducted in multiple steps: pre-processing the data, Real Valued Backpropagation (RVBP) modeling, Complex Valued Backpropagation (CVBP) modeling, and analysis of the performance comparison between CVBP and RVBP. The research process is shown in Figure 6.

Result
The RVBP and CVBP experiments were carried out using the pre-processed data. The RVBP and CVBP training process is conducted to obtain the optimal model to be used in the testing process. Table I shows the characteristics and specifications used for the RVBP and CVBP architectures.
The models produced in training, that is, the weights v, w and the thresholds, are used in the testing process to determine the level of accuracy on the test data. The testing process begins with the input data and the optimum weights and thresholds generated during training; the RVBP and CVBP algorithms are then applied in the feedforward direction only. After that, all network outputs are compared with the actual targets to calculate the accuracy value, which is defined in Eq. 20.

$$\text{Accuracy} = \frac{\text{number of correctly classified test samples}}{\text{total number of test samples}} \times 100 \qquad (20)$$

Tables II, III, and IV present the RVBP and CVBP testing performance as the number of hidden neurons, the number of epochs, and the learning rate are varied. Each experiment was run 10 times to obtain an average accuracy. From Tables II, III, and IV, it can be seen that the CVBP algorithm achieves its best accuracy of 92.35 ± 3.77 percent with 100 hidden nodes, 200 epochs, and a learning rate of 0.1. The RVBP algorithm achieves its best accuracy of 89.76 ± 3.42 percent with 150 hidden nodes, 300 epochs, and a learning rate of 0.1.
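Eq. 20 and the mean ± standard deviation reporting over 10 runs can be sketched as follows. Whether the study used the sample or population standard deviation is not stated; the sample form is assumed here.

```python
import numpy as np

def accuracy(pred, target):
    """Eq. 20: percentage of test samples whose predicted class is correct."""
    pred, target = np.asarray(pred), np.asarray(target)
    return 100.0 * np.mean(pred == target)

def summarize(accs):
    """Mean and sample standard deviation over repeated runs (e.g. 10 trials)."""
    accs = np.asarray(accs, float)
    return accs.mean(), accs.std(ddof=1)
```

For example, three runs scoring 90, 92, and 94 percent would be reported as 92.00 ± 2.00 percent.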
Training and testing times for RVBP and CVBP with various numbers of hidden neurons and learning rates can be seen in Figure 7. CVBP performs better than RVBP because it requires fewer hidden neurons to form its network, which shortens the training and testing time. In addition, the CVBP algorithm requires fewer epochs than RVBP to reach its best accuracy. This suggests that the CVBP algorithm learns faster than the RVBP algorithm.

Conclusion
From the conducted research, it can be concluded that the face recognition performance of CVBP is better than that of RVBP. The CVBP algorithm generalizes the learning model in more detail because it consists of real and imaginary parts. The CVBP algorithm achieved its best accuracy of 92.35 ± 3.77 percent with 100 hidden nodes, 200 epochs, and a learning rate of 0.1, with a test time of 0.04 seconds, while the RVBP algorithm achieved its best accuracy of 89.76 ± 3.42 percent with 150 hidden nodes, 300 epochs, and a learning rate of 0.1, with a test time of 0.04 seconds.