Intelligent sensor for image control points in eigenface-based face recognition

Sensor-based image control points for face recognition (FR) form one of the most active research areas in computer vision and pattern recognition. Practical applications include forensic identification, access control and human-computer interfaces. This research compares an input face image against a database containing a set of face samples of known identity and identifies the subject to which the input face belongs. To attain these objectives, we combined several images by image registration to improve eigenface recognition. Sensor detection by head orientation for image control points of the training sets collected in a database is also discussed. The methodological approach used to analyze eigenfaces focuses on three fundamental points: generating eigenfaces, classification, and identification, using the Image Processing Toolbox to perform the matrix calculations. The results obtained show that the performance of the proposed technique is less affected by registration errors and that image quality is successfully preserved.


INTRODUCTION
Face recognition is a weak biometric because it may fail to identify a person reliably or, worse, identify the wrong person. Face recognition uses images of people with different facial expressions and angles to the camera. An ideal, centered, portrait-type environment greatly improves face recognition; in practice, however, this environment is not always available in real applications because of tilted camera angles.
Face recognition works well for small groups of data, but as individuals are added to the database, the number of false identifications increases. If a system can reliably identify over 99% of subjects, that at first seems like excellent reliability. However, the remaining 1% represents a considerable waste of time. In a terrorism situation involving a large population, the 1% of subjects falsely identified means that significant resources must be spent manually excluding false positives. Alternatively, it also means there is a remote chance that a person may be misidentified as someone else and slip past security. Any incremental improvement designed to enhance face recognition performance is therefore worth considering.
In addition to any initial filtering, the face is first located in the scene (face detection) using software such as Intel's OpenCV image processing library. Various algorithms can then be used to perform the recognition itself. One approach uses eigenfaces, which treats face recognition not from the standpoint of features such as ears, eyes, mouth and nose but from an encoding of the variations in the face. These variations are described mathematically by eigenvectors, each one describing a variation in the face. These vectors are combined to form a ghostly image called an eigenface. A set of weights is calculated, permitting comparison and thereby identification.
Image registration aligns and combines two different images of the same scene into a single composite image. Such image pairs are typically used to represent change, for instance in overhead shots taken from an aircraft or satellite to indicate city growth or vegetation changes over time. One of the images is the base and the other is the image under evaluation. The images can be taken from different cameras and camera angles and, like a stretched rubber sheet, are warped into alignment to form one image.
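The alignment step can be sketched numerically: given matching control points in the two images, an affine transform (scale, rotation, shear, translation) can be estimated by least squares. The following is a minimal sketch assuming the control-point pairs are already available; the array values and function names are illustrative, not part of any toolbox:

```python
import numpy as np

def estimate_affine(src, dst):
    """Estimate a 2-D affine transform mapping src control points onto dst.
    src, dst: (N, 2) arrays of matching control points, N >= 3.
    Returns the 2x3 matrix A mapping homogeneous [x, y, 1] to [x', y']."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])            # (N, 3) homogeneous source points
    # Solve X @ A.T = dst in the least-squares sense
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T                          # (2, 3) affine matrix

def apply_affine(A, pts):
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ A.T

# Three matching control points related by a pure translation of (+5, -2)
src = [(0, 0), (1, 0), (0, 1)]
dst = [(5, -2), (6, -2), (5, -1)]
A = estimate_affine(src, dst)
print(apply_affine(A, [(2, 3)]))   # the point (2, 3) translated by (+5, -2)
```

With more than three point pairs the least-squares fit averages out small placement errors in the individual control points, which is why additional control points improve the registration.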
In security environments, variations in the face can be caused by shadows resulting from the camera angle and the angle to the sun, including reflections off nearby surfaces. The face is constantly moving, and images taken will have different head orientations. In this difficult environment, the eigenface for the same person would differ depending on head orientation. A camera positioned near an entryway would be expected to capture images of persons at different angles from the camera. The face area must be detected in the scene and tracked; the face will not appear centered but at varying head orientations.
Some means of averaging these differences before vectorizing the face would be useful.
Figure 1 represents the principle of image acquisition, in which acquired images are compared against standard images stored in the database. Sensor detection of the image and selection of attention is a fundamental mechanism that leads to a prioritization of sensor information analysis and actuator information synthesis. Different models for selection of attention try to explain this mechanism through the interaction between head, eyes and mouth.

II. RELATED WORK
Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. Rendered as images, eigenfaces have a ghostly appearance. They refer to an appearance-based approach to face recognition that seeks to capture the variation in a collection of face images and to use this information to encode and compare images of individual faces in a holistic manner. Specifically, the eigenfaces are the principal components of a distribution of faces or, equivalently, the eigenvectors of the covariance matrix of the set of face images, where an image with N×N pixels is considered a point (or vector) in N²-dimensional space.
Turk and Pentland [1] originally developed face recognition using eigenface techniques. Their work provides a form of computational pattern recognition for the face. The term eigenfaces is used because mathematical algorithms using eigenvectors represent the primary components of the face. Weights are used to represent the eigenface features, so a comparison of these weights permits identification of individual faces from a database. They also discuss the problems caused by an imperfectly centered face image, which results in some performance degradation, affected not so much by sideways tilt of the head as by a non-upright view. They suggest solving this by calculating the head orientation and performing a corrective rotation.
Matta and Dugelay [2] survey the state of the art on person recognition using facial video information. They classify the approaches present in the scientific literature between those that neglect the temporal information and those that exploit it, even partially, and detail the extensions to video data of eigenfaces, fisherfaces and active appearance models.
Yeasin and Kuniyoshi [3] describe an application in which the system tracks the object in real time and uses a neural network to detect the face and eye locations from the video stream at a slower rate.
The face must be identified in a scene before recognition can take place; this is called face detection. It is fully described among the active research projects in Rischholz for finding faces in arbitrary scenes. A rectangular image containing the face is created, separating it from other objects in the scene.
The Automatic Identification and Data Capture (AIDC) site at Western Carolina University [4] describes eigenface technology. The eigenface recognition system commonly takes all of the faces and forms a mean face image. It then compares the differences and similarities between the mean image and the input face subjected to recognition. The input face is assigned a fitness score, which indicates whether the face is in the database. The fitness score also determines whether the image is a face; the image can then be stored as a new face in the database if the fitness score does not match a current face. The advantage of the eigenface approach is that it is fast and does not require large amounts of storage. The disadvantage is that it has problems identifying faces that do not face the camera directly or that appear under different light levels; it also has problems dealing with facial hair and skin scarring. Another approach, called eigenfeatures, measures facial metrics such as the distances between eyes and nose. Combining eigenfaces with eigenfeatures improves face recognition considerably.
Peter N. Belhumeur et al. [5], in Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, develop a face recognition algorithm that is insensitive to large variations in lighting direction and facial expression. Taking a pattern-classification approach, they consider each pixel in an image as a coordinate in a high-dimensional space. Heseltine et al. [6] describe preprocessing techniques used to improve eigenface face recognition. Tests are performed to compile data on False Acceptance Rates (FAR) and False Rejection Rates (FRR). Factors affecting face recognition include changes in the intensity and direction of light, faces partially covered by sunglasses, hats or facial hair, and changes in facial expression. The study discusses the problems occurring when light illuminates one side of the face, resulting in a principal component that introduces errors in the identification of faces.
In Face Recognition Using a Cognitive Processing Model, Tepvorachai and Papachristou [7] note that in the conventional eigenface method, the principal component analysis (PCA) algorithm associates the eigenvectors with changes in illumination. Their objective is the improvement of facial image association for face recognition using a cognitive processing model, based on the notion of multiple-phase associative memory. The Essex face database is used to verify the model for facial image recognition and to compare its results with the conventional eigenface method.
Lanitis [8] describes an algorithm that uses the non-occluded part of the face for face recognition. The hidden part of the face is excluded from interfering with the recognition process, so that identification is improved. The study focuses on excluding non-systematic facial appearance variations caused by sunglasses and hats, rather than systematic variations caused by aging, illumination, viewpoint and expression.
Gupta and Jain [9] describe a Visual Information Retrieval (VIR) system that recalls different types of images from a repository, one capability being face retrieval using eigenfeatures. Image transformations are computed for each face. A limitation found is that as the database grows, the computations become too intense and difficult to automate, requiring human intervention along with high associated training costs.
Graham and Allinson [10] state that if the image of a person in the database differs from the test image, the system should still recognize the person; this is defined as pose-invariant recognition. When test images have different poses, the system should still be able to recognize the individual. For a computer this task is difficult because computers view images as pixels. To make the computer capable of invariant recognition, pose-dependent variation must be removed. Pose-invariant recognition is based on using different images of people for training, or on creating a 3D model that can be used to generate more images.

A. Experiments with eigenfaces and method
This approach focuses on three fundamental points: generating eigenfaces, classification, and identification of faces. The method uses the Image Processing Toolbox to perform the matrix calculations.

B. Generating eigenfaces
One of the main applications of the PCA in Computer Vision is in facial recognition.
Assume a face image I(x, y) is a two-dimensional M by N array of intensity values, or a vector of dimension M×N. The training set used for the analysis is of size 110×129, resulting in a 14,190-dimensional space. A typical image of size 256 by 256 describes a vector of dimension 65,536 or, equivalently, a point in 65,536-dimensional space. For simplicity, the face images are assumed to be of size N×N, resulting in a point in N²-dimensional space. An ensemble of images, then, maps to a collection of points in this huge space. Images of faces, being similar in overall configuration, will not be randomly distributed in this huge image space and thus can be described by a relatively low-dimensional subspace. The main idea of principal component analysis (or the Karhunen-Loeve transform) is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call "face space". Each vector is of length N², describes an N by N image, and is a linear combination of the original face images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, we refer to them as "eigenfaces".
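The mapping from an image to a point in this huge space is simply flattening the pixel array into one long vector. A short numpy sketch, using random arrays as stand-ins for the 110×129 grey-level images of the training set:

```python
import numpy as np

# A stand-in 110x129 grey-level image becomes a single point (vector)
# in a 110*129 = 14190-dimensional space simply by flattening it.
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(110, 129), dtype=np.uint8)

vector = image.astype(float).ravel()
print(vector.shape)          # one point in 14190-dimensional space

# A training set of M such images is then an M x 14190 matrix,
# i.e. M points in this space.
M = 8
training = rng.integers(0, 256, size=(M, 110 * 129)).astype(float)
print(training.shape)
```

PCA then looks for the few directions in this 14,190-dimensional space along which the M training points actually vary, which is what makes the low-dimensional face space possible.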
Let the training face images be Γ1, Γ2, …, ΓM, with average face Ψ = (1/M) Σ Γi and difference vectors Φi = Γi − Ψ. The covariance matrix is C = (1/M) Σ Φi Φiᵀ = AAᵀ, where A = [Φ1 Φ2 … ΦM]. The matrix C, however, is N²×N², and determining its N² eigenvectors and eigenvalues is an intractable task for typical image sizes. A computationally feasible method must therefore be found to calculate these eigenvectors. If the number of data points in the image space is M (M ≪ N²), there will be only M−1 meaningful eigenvectors, rather than N². These can be determined by constructing the much smaller M×M matrix L = AᵀA, solving for its eigenvectors vi, and taking the linear combinations ui = A vi, which reduces the computation from the order of the number of pixels (N²) to the order of the number of images (M). A new face image Γ is projected onto the face space by the weights ωk = ukᵀ(Γ − Ψ), and the vector Ωᵀ = [ω1, ω2, …, ωM′] describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images.
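The small-matrix trick can be verified directly in a few lines of numpy. This is a minimal sketch using tiny random arrays in place of real face images (here M = 4 "images" of 8×8 = 64 pixels, so the full covariance matrix would be 64×64 while L is only 4×4):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_pixels = 4, 64
faces = rng.random((M, n_pixels))      # stand-ins for flattened face images

mean_face = faces.mean(axis=0)
A = (faces - mean_face).T              # n_pixels x M matrix of difference images

# Solve the small M x M eigenproblem instead of the n_pixels x n_pixels one
L = A.T @ A                            # shape (M, M)
eigvals, V = np.linalg.eigh(L)         # columns of V are eigenvectors of L

# Mean subtraction leaves only M-1 meaningful (nonzero) eigenvectors
keep = eigvals > 1e-10
eigvals, V = eigvals[keep], V[:, keep]

# Map back: u_i = A v_i are eigenvectors of the full covariance A A^T
U = A @ V
U /= np.linalg.norm(U, axis=0)         # normalise the eigenfaces

# Check: columns of U are eigenvectors of A A^T with the same eigenvalues
C = A @ A.T
for lam, u in zip(eigvals, U.T):
    assert np.allclose(C @ u, lam * u)
```

The final assertions confirm that the eigenvectors recovered from the small M×M matrix are indeed eigenvectors of the full covariance matrix, and that only M−1 of them are meaningful.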
The vector Ω is used to determine which of a number of predefined face classes, if any, best describes the face. The simplest method for determining which face class provides the best description of an input face image is to find the face class k that minimizes the Euclidean distance εk = ‖Ω − Ωk‖, where Ωk is a vector describing the k-th face class. A face is classified as belonging to class k when the minimum εk is below some chosen threshold θ; otherwise the face is classified as "unknown". The distance threshold θ is half the largest distance between any two face images: θ = (1/2) max j,k ‖Ωj − Ωk‖. There are various programs available for eigenface recognition experiments.
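This nearest-class rule with a rejection threshold is straightforward to express in code. A minimal sketch, using hypothetical three-component weight vectors and names purely for illustration:

```python
import numpy as np

# Hypothetical class weight vectors (one averaged Omega per known person)
class_weights = {
    "alice": np.array([1.0, 0.0, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.0]),
}

def classify(omega, class_weights, threshold):
    """Return the nearest face class, or 'unknown' if the minimum
    Euclidean distance exceeds the threshold."""
    name, dist = min(
        ((k, np.linalg.norm(omega - w)) for k, w in class_weights.items()),
        key=lambda kv: kv[1],
    )
    return name if dist < threshold else "unknown"

# Threshold: half the largest distance between any two stored classes
ws = list(class_weights.values())
theta = 0.5 * max(np.linalg.norm(a - b) for a in ws for b in ws)

print(classify(np.array([0.9, 0.1, 0.0]), class_weights, theta))  # near alice
print(classify(np.array([5.0, 5.0, 5.0]), class_weights, theta))  # too far: unknown
```

A weight vector close to a stored class is accepted; one far from every class is rejected as "unknown", which is what allows the system to flag faces that are not in the database.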
Gonzalez [11], whose book integrates material from a leading text with many algorithms and MATLAB functions, developed an m-file program called Face Recognition System in the MathWorks MATLAB language, based on the eigenface methods described by Turk and Pentland. It uses the MathWorks Image Processing Toolbox to do the image processing and perform the matrix calculations. The program permits input of images for each person as part of the training sets. After the training sets are loaded for multiple persons, an input image can be selected for face recognition. The result indicates the nearest class (training set), the distance from the averaged eigenface of the set, and the distance from the face space, which indicates whether the image is a face. The test faces are obtained from the AT&T Laboratories Cambridge database (2005), which contains ten images of each of 40 persons, with each image taken under different lighting, head orientation and facial expression.

IV. RESULTS
Two images were read into the MATLAB command window and successfully converted to matrix representations. The two images were then selected for interactive selection of control points using the cpselect command. This tool is shown in Fig. 3a-d. The base image is on the left; a different head orientation with eye detection is shown on the right.
Unfortunately, there were problems creating matching control points for the faces. Only two matching control points could be selected, while three are needed to use the affine transformation type, which combines scaling, rotation, shearing and translation. We note that the eigenfaces technique raises a number of issues, such as:
-How to select k, the number of eigenfaces to keep.
-How to efficiently update the face space when new images are added to the data set.
-How best to represent classes and perform classification within the face space.
-How to separate intraclass and interclass variations in the initial calculation of face space.
-How to generalize from a limited set of face images and imaging conditions.
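The first of these issues, selecting k, is often approached by keeping the smallest k whose eigenvalues account for a chosen fraction of the total variance. A sketch of this heuristic using a random stand-in spectrum (the 95% cutoff is an illustrative choice, not a value from this study):

```python
import numpy as np

# Hypothetical eigenvalue spectrum from a PCA of face images
rng = np.random.default_rng(1)
M, n_pixels = 20, 256
faces = rng.random((M, n_pixels))
A = (faces - faces.mean(axis=0)).T
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]      # sort descending

# Keep the smallest k whose eigenvalues account for 95% of total variance
ratio = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(ratio, 0.95)) + 1
print(k, "eigenfaces retain at least 95% of the variance")
```

Real face images are far more correlated than random noise, so in practice the variance concentrates in far fewer components and k is correspondingly small.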

V. CONCLUSION
In this study we retain the eigenface approach to face recognition as motivated by information theory. This leads to basing face recognition on a small set of image features that best approximate the set of known face images, without requiring that they correspond to our intuitive notions of facial parts and features.
It is important to note that many applications of face recognition do not require perfect identification, although most require a low false-positive rate. In searching a large database of faces, for example, it may be preferable to find a small set of likely matches to present to the user. For applications such as security systems or human computer interaction, the system will normally be able to 'view' the subject for a few seconds or minutes and thus will have a number of chances to recognize the person.
However, the selection of control points used to align two images for image registration can easily be done for fixed backgrounds, such as the highway used previously. The selection of control points could also be performed using a test-pattern background in a portrait of faces at different head positions. But in a dynamic video-motion environment there is no standard test-pattern background against which to align control points.
It was thought that the eyes and mouth of the face could be used for alignment of the control points, or at least as predictive tie points. However, face images at different angles, even when filtered, are difficult to obtain control points from.