Face recognition based on fusion of SVD and Legendre moment

Abstract. A face recognition system may be defined as a computer application that identifies a human face from video scenes or a static digital image. Human face recognition has gained huge attention as a result of its significant role in various applications such as security, medical applications, forensic evidence, etc. Face recognition faces many challenges, such as illumination, rotation, noise, blur, etc. Each challenge may need special handling; hence, face identification and face verification are considered a difficult field. Several algorithms have been applied, each with weaknesses and strengths; these algorithms focus on the pre-processing stage, feature extraction, feature selection, classification, or all of these steps. This paper suggests a new approach based on the fusion of features obtained by applying a well-known moment, the Legendre moment, with a vector of features produced by applying the singular value decomposition (SVD) transform. The system has been tested on the FEI (Brazil) database and achieved recognition rates from 95% to 100%; the suggested algorithm was also applied to the ORL database and achieved a recognition rate exceeding 98%. Finally, the proposed method achieved a high recognition rate under uncontrolled environments and on noisy and blurred images.


Introduction
Face recognition is the ability to establish a human face's identity based on facial characteristics; the need for human face recognition is increasing as a result of its accuracy and uniqueness. Several approaches have been suggested, some based on transformations, statistical techniques, or information theory. S. Annadurai and A. Saradha (2004) proposed an approach to the face recognition problem using Legendre moments for representing features and a nearest neighbor classifier for classification. The obtained feature vectors are transformed using Linear Discriminant Analysis, stored in the database, and compared using the nearest neighbor classifier during testing. Similarly, Hu moments and the Discrete Cosine Transform (DCT) are also used for feature extraction, and their recognition percentages are compared with the proposed approach. A recognition percentage of 98.25% is achieved using Legendre moments, which is superior to face recognition approaches using central (Hu) moments, DCT, or other statistical approaches [1]. Daoqiang et al. (2005) presented a method utilizing SVD as well as PCA to handle the one-example face image problem; their experimental results showed that the generalized eigenface algorithms are accurate for fewer face images [2]. Jin Zhang et al. (2006) suggested an algorithm for extracting features using SVD and DCT; the algorithm was applied with a special neural network for human face identification, and the final results showed that the network produced high performance in image pattern classification [3]. Rohollah Akbari et al. (2010) produced a human face recognition system that utilized Legendre moments to produce a vector of features as a solution to the single-image problem. A dataset containing 200 images from FERET and 100 images from the AR database was used; the algorithm achieved 91% and 89.5% accuracy for AR and FERET, respectively [4]. Several researchers have suggested many variations of SVD. Qiang Zhang
and Baoxin Li (2010) suggested another approach based on discriminative K-SVD, extending the K-SVD algorithm by incorporating the classification error into the objective function; the D-KSVD algorithm finds the dictionary and solves the classifier problem jointly [5]. D. Sridhar and I. V. Murali Krishna (2012) introduced a hybrid face recognition system based on linear discriminant analysis, Legendre moments, and a probabilistic neural network. The proposed system consists of feature extraction using Legendre moments, dimensionality reduction based on Linear Discriminant Analysis, and finally a classification step based on a Probabilistic Neural Network; it gives a fast and good recognition rate [6]. Rajiv Kapoor and Pallavi Mathur (2013) suggested a promising approach for face recognition based on moments. Four feature extractors were examined: Zernike moments, Hu moments, cumulants, and Legendre moments. Zernike and Legendre moments are orthogonal polynomials and can be used to retrieve an image from its moment invariants, while cumulants are a good tool for discovering image details and are suitable for representing image features. The feature extractors utilize moments of different orders. The average accuracy of the feature extraction methods was 53.33% with Hu moments, 86.66% with Zernike moments, and 93.33% with Legendre moments [7].
Tapabrata Chakraborti et al. (2014) presented an algorithm involving stochastic optimization to handle the face identification problem, in which singular value decomposition (SVD) is used to convert a single training image into two component arrays to specify the within-class scatter matrix. The suggested system was applied to the Yale A and ORL databases and showed better results than existing schemes [8]. Asaad and Zahir (2014) proposed a new approach using image-dependent properties to determine similarity measures; they utilized edge detection and segmentation as embedded steps, in addition to statistical properties, to improve the recognition rate, and this approach was used to build a robust, invariant recognition system [9]. Asaad and Zahir (2014) also introduced a highly efficient recognition approach based on a feature extractor that gathers global as well as local features from different views (poses) of facial images. The system achieved a recognition rate of 98.75% on the ORL dataset and 96.57% on the FEI (Brazilian) database [10]. Guiying Zhang et al. (2016) proposed an approach that obtains two groups of virtual samples, from the right and left singular vectors of the singular value decomposition (SVD), for each class of training samples; the system was tested on three benchmark datasets and showed robustness and better accuracy for face recognition [11]. Face recognition has several challenges; however, the face is among the most powerful biometric features, as the human face remains largely unchanged throughout life.

Face Recognition System
Face identification is considered one of the most important types of biometrics; many applications utilize biometrics for identification and validation, and it plays a significant role in the fields of image processing and pattern recognition. Several face recognition systems have been suggested; most of them have the stages shown in figure (1).
Figure 1. Block diagram of a typical face recognition system

Pre-processing:
Several operations can be performed as primary steps in a face recognition system; these steps aim to adapt the enrolled images to the system. They may include scale resizing, data type conversion, detection of the important parts of the face, etc.

Features Extraction:
Features play a very important role in the area of image processing, as features define the behavior of an image. Several approaches are usually applied to obtain features useful for image identification, classification, and face recognition. These approaches may use statistical or geometric properties, and some of them utilize transformation methods, moments, etc.

Features Selection:
Feature selection is an important process because it reduces the dimensionality of the generated feature space. Several techniques can be used, such as genetic algorithms, PSO, ant colony optimization, etc.

Classification:
Classification algorithms assign objects to classes based on specific features; some of these algorithms are supervised while others are unsupervised. Many algorithms have been suggested, such as Neural Networks, Bayesian classifiers, K-Nearest Neighbors, etc. [12].

Methodologies
Most of the suggested algorithms focus on how to extract the most significant features, then how to use them for recognition. To satisfy these goals, several feature extraction approaches have been produced: Zernike moments, Legendre moments, Hu moments, cumulants, etc. Hu moments produce a group of seven moments; usually these moments are invariant to common challenges such as translation, scaling, and any rotation of the enrolled image. Moments such as Zernike and Legendre moments form an orthogonal basis set which can be utilized to represent an image array with minimum redundancy of information; being orthogonal polynomials, they can be used to retrieve an image from its moment invariants. Again, the most significant property of moments is their sensitivity to image details; hence, they are suitable for representing image features. Usually, moments of different orders are used to obtain several feature vectors.

Legendre Moment
The Legendre moment was suggested by Teague as an orthogonal, promising approach for analyzing and extracting features; Legendre moments can be embedded in several applications such as face recognition, image indexing, pattern recognition, etc. The Legendre moment uses Legendre polynomials as its kernel, approximated by sampling at fixed intervals, so the resulting moments have approximated values. Legendre moments are orthogonal and scale invariant; hence they are suitable for representing the features of face images [13].
The two-dimensional Legendre moments of order (p + q) are computed by [6][14]:

λ_pq = ((2p + 1)(2q + 1) / 4) ∫∫ P_p(x) P_q(y) f(x, y) dx dy,   (1)

where the integrals run over x, y ∈ [−1, 1] and the kernel functions P_p denote the Legendre polynomials of order p. The recurrent formula of the Legendre polynomials is:

P_p(x) = [(2p − 1) x P_(p−1)(x) − (p − 1) P_(p−2)(x)] / p,

where P_0(x) = 1, P_1(x) = x and p > 1. Based on Legendre polynomial theory, the domain is the interval [−1, 1], so a square array of N × N pixels with intensity function f(i, j), 0 ≤ i, j ≤ (N − 1), is scaled into the region −1 ≤ x, y ≤ 1. Eq. (1) can now be rewritten in discrete form as:

λ_pq = ((2p + 1)(2q + 1) / N²) Σ_i Σ_j P_p(x_i) P_q(y_j) f(i, j),

where x_i and y_j are the normalized pixel coordinates in the range [−1, 1], obtained by x_i = (2i − N + 1)/(N − 1) and y_j = (2j − N + 1)/(N − 1).

Singular Value Decomposition (SVD)
SVD plays a significant role in several different applications such as face identification, object detection, data compression, data hiding, texture classification, etc. SVD decomposes a set of correlated variables into a set of uncorrelated variables that expose various relations among the original data; in other words, SVD identifies and orders the dimensions along which the data points exhibit the most variation [15]. The distinguishing feature of SVD is that it can be applied to any real matrix. Singular Value Decomposition is defined as an algebraic technique for factoring a rectangular matrix into the product of three other matrices. If A ∈ R^(m×n) is a gray-level facial image and rank(A) = r, then there are two orthogonal matrices U and V such that [16]:

A = U S V^T,

where λ_i² is an eigenvalue of both A^T A and A A^T, λ_i is a singular value (SV) of the facial image A, and u_i, v_i are the column eigenvectors of A A^T and A^T A corresponding to the eigenvalue λ_i², respectively. SVD has important, verified properties: stability, transposition invariance, rotation invariance, proportion (scaling) invariance, and mirror-transform invariance. The singular values are invariant to translation, rotation, and mirror transforms, as proven in the literature; it should be mentioned that they remain stable in spite of other variations, such as noise or changing lighting.
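The discrete Legendre moment computation can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the function names are illustrative, and the normalization and coordinate mapping follow the discrete form discussed above.

```python
import numpy as np

def legendre_poly(p, x):
    # Recurrence: p * P_p(x) = (2p - 1) * x * P_{p-1}(x) - (p - 1) * P_{p-2}(x),
    # with P_0(x) = 1 and P_1(x) = x.
    if p == 0:
        return np.ones_like(x)
    if p == 1:
        return x
    prev, cur = np.ones_like(x), x
    for k in range(2, p + 1):
        prev, cur = cur, ((2 * k - 1) * x * cur - (k - 1) * prev) / k
    return cur

def legendre_moments(img, max_order):
    # Discrete Legendre moments lambda_pq for all p + q <= max_order,
    # collected into a single feature vector.
    img = np.asarray(img, dtype=float)
    N, M = img.shape
    x = (2 * np.arange(M) - M + 1) / (M - 1)  # normalized column coordinates in [-1, 1]
    y = (2 * np.arange(N) - N + 1) / (N - 1)  # normalized row coordinates in [-1, 1]
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            norm = (2 * p + 1) * (2 * q + 1) / (N * M)
            feats.append(norm * (legendre_poly(q, y) @ img @ legendre_poly(p, x)))
    return np.array(feats)
```

For a constant image the zero-order moment λ_00 evaluates to the (normalized) mean intensity, which gives a quick sanity check on the normalization.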

Properties of SVD
SVD has several properties, which can be summarized as follows [17]:
1. The singular values σ1, σ2, …, σn of S are unique, yet the matrices U and V are not unique.
2. The matrix S is a diagonal matrix holding the singular values in descending order, with non-negative values.
3. A^T A = (U S V^T)^T (U S V^T) = V S² V^T, so the matrix V can be computed from the eigenvectors of A^T A; similarly, A A^T = (U S V^T)(U S V^T)^T = U S² U^T, so the matrix U can be computed from the eigenvectors of A A^T.
4. The rank of the matrix A is equal to the number of its non-zero singular values.
Thus the expression U S V^T can be intuitively interpreted as a composition of three geometrical transformations: the matrices U and V^T act as rotations or reflections, and S as a stretching (scaling) matrix.
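The properties above are easy to verify numerically. The following sketch (illustrative only, using a random matrix in place of a face image) checks the eigenvalue relation, the rank property, and the invariance of the singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))          # stands in for a gray-level face image
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Property 3: the singular values are the square roots of the eigenvalues of A^T A.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(np.sqrt(np.clip(eigvals, 0.0, None)), s)

# Property 4: rank(A) equals the number of non-zero singular values.
assert np.linalg.matrix_rank(A) == np.count_nonzero(s > 1e-10)

# Transposition invariance: A and A^T share the same singular values.
assert np.allclose(np.linalg.svd(A.T, compute_uv=False), s)

# Rotation invariance: multiplying by an orthogonal matrix leaves them unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
assert np.allclose(np.linalg.svd(Q @ A, compute_uv=False), s)
```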

Local Ternary Pattern(LTP)
LTP is an extended form of the local binary pattern that uses the same 3×3 window size. LTP produces three-valued codes based on a specific threshold T: +1 if the neighbor is above the center by at least T, −1 if it is below the center by at least T, and 0 otherwise [18], as in the following equation:

C(N_i, N_c) = +1 if N_i ≥ N_c + T;  0 if |N_i − N_c| < T;  −1 if N_i ≤ N_c − T,

where N_c is the center pixel and N_i is a neighbor; figure (2) shows the coding.
Figure 2. The order of the binary numbers and their conversion to a decimal number.
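As a concrete sketch of the thresholding and the upper/lower split, the following Python fragment computes the three-valued codes for one 3×3 patch. It is illustrative only: the neighbor ordering (raster order, skipping the center) and the function name are assumptions, not the authors' convention.

```python
import numpy as np

def ltp_codes(patch, T=5):
    # Three-valued codes C(N_i, N_c) for a 3x3 patch, then the two binary
    # (upper/lower) patterns converted to decimal values.
    c = patch[1, 1]
    neighbors = np.delete(patch.flatten(), 4)      # the 8 pixels around the center
    codes = np.where(neighbors >= c + T, 1,
                     np.where(neighbors <= c - T, -1, 0))
    weights = 2 ** np.arange(8)                    # 2^(i-1), i = 1..8
    upper = int((codes == 1).astype(int) @ weights)   # +1 positions as a binary pattern
    lower = int((codes == -1).astype(int) @ weights)  # -1 positions as a binary pattern
    return codes, upper, lower

patch = np.array([[ 90, 100, 110],
                  [ 95, 100, 120],
                  [100,  80, 100]])
codes, upper, lower = ltp_codes(patch, T=5)
# codes -> [-1, 0, 1, -1, 1, 0, -1, 0]; upper -> 20, lower -> 73
```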
There are two ways to obtain results [19]. The first uses 3^n-valued codes [20, 21], as in equation (2):

LTP = Σ_{i=1..8} C(N_i, N_c) · 3^(i−1),   (2)

where LTP is the updated value of the center pixel. The other approach splits C(N_i, N_c) into upper and lower binary vectors, as in equations (3) and (4):

UPPER_i = 1 if C(N_i, N_c) = +1, else 0,   (3)
LOWER_i = 1 if C(N_i, N_c) = −1, else 0,   (4)

and from each vector a new value of the center pixel can be computed as in equation (5):

P = Σ_{i=1..8} F_i · 2^(i−1),   (5)

where F is the UPPER or LOWER vector.

Algorithm (1): Training for database
Input: Database of training images.
Output :Database of features.

Start:
Step1: Read all images from the dataset, one at a time.
Step2: Preprocessing:
Step2.1: Image type conversion (colour image to grayscale).
Step2.4: Apply edge detection using a Canny filter; call the resulting image X.
Step3: Features extraction:
Step3.1: Apply SVD on X to produce three matrices; choose the U matrix, then store its values in a separate vector, called V1.
Step3.2: Apply the LTP algorithm on X to produce an array of neighbours, called Y.

4. The Proposed System
The suggested algorithm utilizes two basic algorithms (training and testing) to achieve the recognition operation; they are explained briefly in the following.
Step3.2.1: Apply SVD on Y to produce three arrays; choose the U matrix, then store its values in a separate vector, called V2.
Step3.3: Apply the Legendre moment on X to produce an array; convert it to a vector called V3.
Step4: Vectors concatenation: Concatenate the three vectors V1, V2, and V3 to produce a single vector, called P1; it will occupy a single row in the final feature array.
Step5: Save results: Store the resulting array as well as the required parameters in a specific file, which will be loaded later by the testing algorithm.
Classification: To select the closest vector, which corresponds to the closest image, the suggested algorithm uses the City Block distance, based on the minimum difference between the corresponding feature vectors of the testing and training images.
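The concatenation and save steps can be sketched as follows. The feature values, the vector lengths, and the file name are purely illustrative; in practice V1, V2, and V3 come from the SVD, LTP, and Legendre stages described above, one fused row per training image.

```python
import numpy as np

# Hypothetical per-image feature vectors (illustrative values only).
V1 = np.array([0.9, 0.4])          # e.g. from the SVD of the edge image X
V2 = np.array([0.2, 0.7, 0.1])     # e.g. from the SVD of the LTP array Y
V3 = np.array([0.5])               # e.g. from the Legendre moments of X

P1 = np.concatenate([V1, V2, V3])  # one fused row of the feature database

database = np.vstack([P1])         # one row per training image in practice
np.save("features_db.npy", database)  # reloaded later by the testing algorithm
```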

5-Performances and results
Several experiments have been carried out under many circumstances and environments; all cases and their performance are reported here briefly.

Environments
The suggested algorithm has been applied to three main datasets: ORL, FEI, and a natural dataset created by the authors to check the system under real, unconstrained environmental conditions.

AT&T Dataset
The Olivetti Research Laboratory (ORL) face database is attributed to AT&T Laboratories Cambridge, which was founded in 1986 as the Olivetti Research Laboratory, better known as ORL. This database contains 400 images belonging to 40 persons. Figure (4) shows samples of the AT&T database [22].

Brazilian database
The FEI is a Brazilian face database that includes a variety of face images captured between June 2005 and March 2006 in the Artificial Intelligence Laboratory of FEI in Sao Bernardo do Campo, Sao Paulo, Brazil. It contains 200 individuals with 14 pose images each, for a total of 2800 images; figure (3) shows samples of the FEI dataset [23].

4.2 Testing algorithm: The term testing algorithm refers to the steps applied to a single image enrolled in the system and to comparing the results; the applied steps are listed in Algorithm (2).

5.1.3 Real life dataset
The real dataset was created from several persons with multiple poses each (generally 10 poses per person); these poses were captured under unconstrained environmental conditions, with different facial expressions, rotations, illumination, etc. Figure (5) shows samples of the local dataset.

Results
All results are explained in the following tables. Several cases have been examined and analyzed with the suggested algorithm, which is based on the fusion of Legendre and SVD, with Legendre alone, and with SVD alone. For more detail, tables 1 and 2 show the results of the proposed system on AT&T and FEI, and tables (3, 4, 5, 6) show several approaches with several cases.

4.1 Training algorithm: The terms enrolling algorithm and training algorithm refer to the steps applied to the whole dataset of images; in the suggested algorithm, the steps of Algorithm (1) are applied.

MAICT — Journal of Physics: Conference Series 1530 (2020) 012120, IOP Publishing, doi:10.1088/1742-6596/1530/1/012120

Algorithm (2): Testing Algorithm
Input: Image, dataset (e.g. the Brazilian dataset)
Output: Recognized person
Start:
Step1: Read the test image.
Step2: Check similarity based on a specific threshold; if the resulting coefficient is smaller than or equal to the threshold, do the following steps, else go to the End.
Step3: Do steps 2-4 of the training algorithm.
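The threshold check and City Block matching of the testing algorithm can be sketched as follows. This is a minimal illustration with made-up feature rows and labels; the function name and threshold value are assumptions, not the authors' code.

```python
import numpy as np

def recognize(test_vec, database, labels, threshold):
    # City Block (L1) nearest neighbour with a rejection threshold.
    dists = np.abs(database - test_vec).sum(axis=1)   # L1 distance to each training row
    best = int(np.argmin(dists))
    if dists[best] > threshold:
        return None                                   # no enrolled identity is close enough
    return labels[best]

db = np.array([[0.0, 1.0, 2.0],
               [5.0, 5.0, 5.0]])
names = ["person_A", "person_B"]
print(recognize(np.array([0.1, 1.2, 1.9]), db, names, threshold=1.0))  # person_A
print(recognize(np.array([9.0, 9.0, 9.0]), db, names, threshold=1.0))  # None
```

The rejection branch implements Step2 above: a probe whose best distance exceeds the threshold is not assigned to any enrolled person.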

Table 1. Results of all cases applied on the AT&T database

Table 2. Results of all cases applied on the FEI database