Indian Sign Language Recognition

Hand gestures are one of the typical methods used in sign language for non-verbal communication. Sign language is most commonly used by people with hearing or speech impairments to communicate among themselves or with others. Various sign language systems have been developed around the globe, but they are neither flexible nor cost-effective for end users. Several methods are used for feature extraction, such as HMM, SVM, neural networks, and PCA. The system is mainly implemented using image preprocessing, segmentation, feature extraction, and gesture recognition, with PCA used for feature extraction. Using PCA, the original variables can be reduced to a smaller number of orthogonal components. Principal component analysis (PCA) is a statistical technique for reducing the dimensionality of a data set containing many interrelated variables while retaining as much of the variation in the data as possible. The data set is reduced by transforming the old data into a new set of variables (principal components), ordered so that the first few contain most of the variation present in the original variables. The transformation is obtained by computing the eigenvectors and eigenvalues of the data set's covariance matrix.
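The covariance/eigenvector reduction described above can be sketched as follows. This is a minimal illustration, assuming each preprocessed gesture image has already been flattened into a row vector; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the k principal components.

    X : (n_samples, n_features) data matrix (flattened images)
    k : number of principal components to keep
    Returns (weights, components, mean).
    """
    mean = X.mean(axis=0)
    Xc = X - mean                           # center the data
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]       # sort by decreasing eigenvalue
    components = eigvecs[:, order[:k]]      # top-k orthonormal eigenvectors
    weights = Xc @ components               # coordinates in the reduced space
    return weights, components, mean
```

The returned `components` are orthogonal, so the projection preserves as much variance as any k-dimensional linear map can.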


INTRODUCTION
Hand gesture recognition has been a very active research topic in recent years, with applications such as human-computer interaction (HCI), robot control, and sign language interpretation. Sign language is useful in finding a solution to the problem of communicating with deaf and mute people. Computers have become a key element of our society, e.g., surfing the web, typing a letter, playing a video game, or storing and retrieving data. Computers will increasingly influence our everyday life because of their constantly decreasing price. Thus, HCI has become an active field of research in the past few years, and gesture recognition and gesture-based interaction have received increasing attention within it [1]. Vision-based methods are promising and low-cost, which makes them a very attractive sensing modality for developing hand gesture recognition. The goal of gesture recognition is to create a system that can identify specific human gestures and use them to convey information.
Gesture: A gesture may be defined as a physical movement of the hands, arms, face, or body to convey information or meaning. Basically, two methods are used for human computer interaction: 1. Data gloves: In this method the user wears gloves with mechanical or optical sensors attached, which translate finger movements into electrical signals for determining hand postures. This approach forces the user to carry a load of cables connected to the computer.

2. Vision based: This technique is based on the way human beings perceive information about their surroundings. Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and high-dimensional data from the real world in order to produce numerical or symbolic information, i.e., decisions [2].
Gesture recognition uses different color spaces for skin color detection, and skin color plays a very important role in image processing. The primary step is to choose a color space for skin color classification. RGB is the default color space for most available image formats. Conversion from RGB to another color space is obtained using a linear or non-linear transformation to decrease the overlap between skin and non-skin pixels. Skin colors differ more in intensity than in chrominance, so the luminance component is commonly discarded and the chrominance components are used for skin classification.
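As a concrete illustration of this step, the sketch below converts RGB pixels to YCbCr (a linear transformation, ITU-R BT.601) and classifies pixels by their chrominance. The Cb/Cr threshold ranges are an assumption chosen for illustration and would need tuning on a real dataset.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array to YCbCr (ITU-R BT.601 coefficients)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of pixels whose chrominance falls in the assumed
    skin range; the luminance channel Y is ignored entirely."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Because only Cb and Cr are thresholded, the classifier is insensitive to overall brightness changes, which is the motivation for dropping luminance.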

Sign Language
Sign language recognition is one of the growing fields of research today. Different sign languages exist, such as British, American, Japanese, and Indian sign language, and most work has been done on sign languages other than Indian Sign Language [4]. Figure 1 shows the symbols for the Indian sign language number system. Our hand gesture recognition system consists of two stages, a training stage and a testing stage, as shown in Figure 2. The same steps are followed in training and testing, but the database is first trained in training mode and then tested. Testing stage: Before taking a testing image, we load the trained database, which contains the M highest eigenvectors and the weights of the training images. The testing image is then classified by comparing its position in hand gesture space with the positions of the known hand postures from the training images.
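The two stages above can be sketched as a pair of functions: training builds the hand gesture eigenspace and stores each training image's weight vector, and testing projects a new image into the same space. This is a hedged sketch assuming segmentation has already produced flattened feature vectors; the function names are illustrative.

```python
import numpy as np

def train(train_images, m):
    """Training stage: build the eigenspace from the gesture database.

    train_images : (n, d) matrix of flattened training images
    m            : number of highest eigenvectors to keep (M in the text)
    Returns (mean, eigvecs, weights) to be stored as the trained database.
    """
    mean = train_images.mean(axis=0)
    A = train_images - mean
    cov = np.cov(A, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    eigvecs = vecs[:, np.argsort(vals)[::-1][:m]]  # M highest eigenvectors
    weights = A @ eigvecs                          # weights of training images
    return mean, eigvecs, weights

def project(image, mean, eigvecs):
    """Testing stage: project one flattened test image into gesture space."""
    return (image - mean) @ eigvecs
```

A test image's position in hand gesture space is simply its weight vector from `project`, which can then be compared against the stored training weights.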
Hand gesture recognition: As mentioned above, the file that contains the weights of the hand posture training images in the eigenspace is loaded before selecting the testing image and is used in recognition.
Finally, the minimum Euclidean distance between the detected hand gesture's weights and the training weights of each training image determines the recognized hand gesture.
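This final step is a nearest-neighbor search in the eigenspace, sketched below. The `train_weights` rows and `labels` are assumed to come from the stored training database; the names are illustrative.

```python
import numpy as np

def recognize(test_weights, train_weights, labels):
    """Return the label of the training image whose weight vector has the
    minimum Euclidean distance to the test image's weight vector.

    test_weights  : (m,) weight vector of the test image
    train_weights : (n, m) weight vectors of the n training images
    labels        : sequence of n gesture labels
    """
    dists = np.linalg.norm(train_weights - test_weights, axis=1)
    return labels[int(np.argmin(dists))]
```

Because distances are computed in the reduced m-dimensional space rather than on raw pixels, the comparison is both fast and robust to the variation discarded by PCA.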