Multi-Sign Language Glove based Hand Talking System

Many people around the world lose the ability to speak and hear, to varying degrees, because of traffic or workplace accidents or disease. Losing the ability to communicate prevents these people from carrying out the normal functions of daily life, and may also cause psychological effects. This paper introduces a technique for multiple sign language translation using a sensor-based glove and an Android smartphone, enabling speech-impaired people to communicate normally with others. The hand talking system (HTS) was designed with the minimum possible number of sensors and a capable sewable controller (LilyPad). The proposed HTS comprises flex sensors, an Arduino, an accelerometer, and a smartphone. An Android application stores multiple languages in an SQLite database and enables the user to interact with the system. The system supports speaking words formed letter by letter, or producing the most frequently used words of daily communication with a single hand gesture. The HTS achieved high accuracy of about 98.26% for American Sign Language and 99.33% for Arabic Sign Language, an average of 98.795% for the two sign languages.


Introduction
Sign Language Recognition (SLR) has gained increasing attention in recent years. SLR has received considerable interest for its social benefit of improving communication between deaf and speech-impaired persons and hearing people. Today, motion sensors of many kinds are developing rapidly, systems are shrinking, and fast processing units with artificial intelligence capability are widely available; this minimizes human effort and makes it possible to translate gestures into speech successfully. Much academic research involving several SLR techniques has been published, and numerous sophisticated devices have been proposed and developed with different data acquisition methods, including a web camera, the Microsoft Kinect, four cameras, a 3-axis accelerometer with flex sensors, and a leap motion controller (LMC) [1]. An electronic glove was developed for hand gesture detection [2]; overall device operation was simple, and the desired results were obtained by simulation before hardware implementation, but the system supported only a limited number of text commands tied to gestures. A hand motion recognition system was designed for individuals with speech impairments [3]; however, only eight words were produced by the voice module, although more could be added as voice approval ratings allow. A remote communication system was proposed that uses a ring-shaped wearable device (Magic Ring) to detect bi-manual movements [4]; experimental tests showed 77.4% accuracy for right-hand use only, and accuracy was low because only one hand was used. Two leap motion controller prototypes were introduced for sign language translation based on multi-sensor fusion [5]; the overall accuracy of the two LMC prototypes was about 97.686%, but the device was not portable and the cost was relatively high. A camera-based approach recognized both dynamic and static images successfully.
However, performance accuracy decreased with large distances between the user and the camera; furthermore, the palm had to face the camera and the arm had to be in a vertical position. An experimental demonstration produced a reliable finger-segmentation system that can adapt to various hand alignments, such as translations and rotations [21,22]. A human verification system was also designed and applied based on merging the finger textures (FTs) of the five fingers; three databases were used to test the verification approach. Another study suggested using both hands to reconstruct a considerable part of a face image with the aid of a multilayer perceptron (MLP) [23]; this architecture provides full-face image reconstruction with an equal error rate (EER) of 1.99%. An extensive review covered the research relevant to finger texture (FT) [24], summarizing the disadvantages and difficulties of FT as a biometric characteristic and suggesting ways to enhance work on FT. The present research lists a comparison of HTSs in several previous related articles based on input method, controller, algorithm, sensor type, accuracy, and limitations, as presented in Table 1. The purpose of this paper is to develop a system that can help impaired people without assistance from others, focusing on effective solutions that can easily fit the user's requirements. The developed system comprises a graphical user interface (GUI) in an Android application, which enables the user to change the language and modify some gestures, along with flex sensors to sense finger motion, an Arduino, and an accelerometer. The system was evaluated by running multiple sign languages, achieving high accuracy for American Sign Language and Arabic Sign Language. The contributions of this research can be summarized as follows:
1. An HTS is practically implemented to achieve communication for people with speech impairments through the finger and hand movements of the user.
2. The HTS supports two languages by providing multiple states of speech (English and Arabic), ensuring that many users can use it.
3. The design and implementation of the HTS for mute persons has been tested with an average accuracy of 98.795% across both American Sign Language and Arabic Sign Language.
4. The system is designed to run multiple sign languages and achieved high accuracy of 98.26% for American Sign Language and 99.33% for Arabic Sign Language.

System Design
The complete circuit diagram of the HTS is illustrated in Figure 1, and Figure 2 shows the experimental setup. The system includes five flex sensors with variable resistance, ranging from approximately 10 kΩ in the relaxed state to 60-110 kΩ depending on the bending of the sensor [25]. Each sensor is connected in a voltage divider arrangement with a 10 kΩ resistor to sense the bending of the corresponding finger according to equation (1) [26]:

Vout = Vcc × Rflex / (Rflex + Rfixed)    (1)

where Rflex is the flex-sensor resistance and Rfixed is the 10 kΩ divider resistor. A low-cost electronic-textile board suitable for sewing, the Arduino LilyPad 328P, is used as the controller of the glove [27]. The mainboard uses an ATmega328 at 16 MHz for data collection, pre-processing, and controlling all parts of the glove. An ultralow-power, small, and thin 3-axis ADXL345 digital accelerometer is used for hand posture detection; it typically consumes about 1 µA in standby mode and 23 µA in measurement mode, and its small dimensions make it suitable for a wearable device [28]. The transmitting part of the system is an HC-05 Bluetooth module that operates using the Serial Port Protocol (SPP), which makes it very easy to pair with a microcontroller [29]; the Bluetooth module sends data under the controller's instructions. The receiving part is an Android smartphone, which displays the text on the screen and converts the text into speech.
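The divider behaviour can be sketched in plain C. This is an illustrative model, not the authors' firmware; it assumes the flex sensor sits on the low side of the divider (fixed 10 kΩ resistor to Vcc), so the ADC count rises as the sensor resistance rises with bending, which matches the rest-state readings of roughly 500 reported later.

```c
#include <assert.h>

/* Model of the flex-sensor voltage divider of equation (1), assuming the
 * flex sensor is between the ADC pin and ground and the fixed 10 kOhm
 * resistor is between Vcc and the pin (an assumption, not stated in the
 * paper). The 10-bit ADC count is then 1023 * Rflex / (Rflex + Rfixed). */

#define R_FIXED_OHMS 10000L /* fixed divider resistor from the text */
#define ADC_MAX      1023L  /* full scale of the 10-bit Arduino ADC */

/* Expected ADC count for a given flex-sensor resistance in ohms. */
long flex_adc_count(long r_flex_ohms)
{
    return (ADC_MAX * r_flex_ohms) / (r_flex_ohms + R_FIXED_OHMS);
}
```

At the relaxed resistance of 10 kΩ this predicts a count near 511, close to the rest-state readings around 530 listed in the data formation section.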

System architecture and implementation
The overall operation of the system is divided into software and hardware parts. The hardware part, contained in the portable wearable glove, consists of five flex sensors, an ADXL345 3-axis accelerometer, a LilyPad 328P Arduino, a Bluetooth module, and a smartphone, whereas the software part comprises the Arduino program written in C [30] and the Android application developed in Java [31]. The system operation is explained in the flowchart in Figure 3. To begin, a hand gesture is detected from the flex-sensor bending and the accelerometer readings and mapped into binary numbers. The controller checks whether the gesture is valid by comparing it with a pre-stored list of valid gestures; only correct data are sent to the smartphone, to save energy. The user must release his/her hand to the relaxed state before beginning another gesture, to avoid accidental gestures. As soon as the Android application receives the data, the corresponding letter is fetched from the SQLite database [32]. The database stores multi-sign-language data, and the correct letter is chosen according to the language set by the user. The text, letters, and additional words are displayed on the phone screen, as shown in Figure 4 a and b. After the word is complete, the user can issue another hand-gesture command to make the application read the text using the Google text-to-speech (TTS) API [33]. In addition to the letters stored in the database, the user can save the words most used in daily life; these are saved in a specified category in the application for complete words, and the user can also edit the pre-stored letters for any gesture.
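The validity check in the flowchart can be sketched as follows. This is not the authors' code: the gesture codes below are illustrative placeholders, not entries from the paper's real gesture table.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the flowchart's validity check: the controller compares each
 * 10-bit gesture code against a pre-stored list and transmits only on a
 * match, so invalid readings never reach the Bluetooth link. The codes in
 * this table are placeholders for illustration only. */
static const unsigned valid_gestures[] = { 0x001u, 0x00Fu, 0x3E0u, 0x155u };

/* Returns 1 if the code should be sent to the smartphone, 0 to discard it. */
int gesture_is_valid(unsigned code)
{
    size_t i;
    for (i = 0; i < sizeof valid_gestures / sizeof valid_gestures[0]; i++)
        if (valid_gestures[i] == code)
            return 1;
    return 0;
}
```

Rejecting invalid codes on the glove, rather than on the phone, is what saves transmission energy: the HC-05 radio is only exercised for gestures that will actually resolve to a letter.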

Data formation
Each flex sensor is calibrated to record its minimum and maximum readings and is connected to an analogue input of the microcontroller; the built-in analogue-to-digital converter (ADC) produces readings in the range 0-1023 according to the output of the voltage divider circuit. The experimental sensor readings from the controller's built-in ADC were (528-723, 558-730, 484-643, 543-736, 531-718) for the thumb, index finger, middle finger, ring finger, and pinky, respectively. These values are mapped onto the decimal range 0-9 to eliminate small variations in the sensor readings, and the decimal value is then converted into a binary digit that decides whether a finger is bent or not. The bending condition implemented in the code is:

    if sensor reading > 5: output = 1 (bent)
    else:                  output = 0 (relaxed)

In the same manner, the hand posture is obtained by conditioning the ADXL345 accelerometer readings according to Table 2. The resulting five bits from the accelerometer are concatenated with the five bits from the flex sensors to form the final ten-bit output that is transferred via the Bluetooth module.
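The data-formation steps above can be sketched in C. The helper names are assumptions for illustration; the rescaling mirrors Arduino's map() behaviour, and the threshold and bit order follow the text (five flex bits concatenated with five accelerometer bits).

```c
#include <assert.h>

/* Sketch of the data-formation pipeline: each raw ADC reading is rescaled
 * from its calibrated per-finger (min, max) range onto 0-9, thresholded
 * (>5 means bent), and the five flex bits are concatenated with the five
 * accelerometer bits into one 10-bit code. Function names are assumed. */

/* Rescale a raw reading from [min, max] onto 0-9, clamping out-of-range values. */
int scale_to_decimal(int raw, int min, int max)
{
    if (raw <= min) return 0;
    if (raw >= max) return 9;
    return (raw - min) * 9 / (max - min);
}

/* Apply the bending condition from the text: a scaled value > 5 means bent. */
int finger_bit(int raw, int min, int max)
{
    return scale_to_decimal(raw, min, max) > 5 ? 1 : 0;
}

/* Concatenate five flex bits (thumb first) with five accelerometer bits. */
unsigned pack_gesture(const int flex[5], const int accel[5])
{
    unsigned code = 0;
    int i;
    for (i = 0; i < 5; i++) code = (code << 1) | (unsigned)flex[i];
    for (i = 0; i < 5; i++) code = (code << 1) | (unsigned)accel[i];
    return code;
}
```

For example, with the thumb's calibrated range of 528-723, a fully bent reading of 723 scales to 9 and yields a bit of 1, while a mid-range reading of 600 scales to 3 and yields 0.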

Results and Discussions
The system was examined for American Sign Language and Arabic Sign Language, as demonstrated in Table 3 and Table 4, respectively, for each letter and its corresponding bit sequence; Table 5 lists the commands that are sent to the smartphone to perform various functions. The letter 'W' is defined three times and the Arabic letter 'ث' is defined twice in the database to increase accuracy and avoid mismatches during gesture formation. Likewise, the letters 'M' and 'N' have a similar gesture and hand posture, so the tilting direction of the hand is changed for the letter 'N' to identify it. Each letter was tested 20 times, and its accuracy was obtained according to equation (2) [34].
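The duplicate-entry idea can be sketched as a lookup table in C. All codes below are illustrative placeholders, not the bit sequences of Table 3; the point is only that several codes may map to the same letter.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the duplicate-entry scheme described above: one letter may be
 * stored under several 10-bit codes (as 'W' is stored three times), so
 * near-miss variants of the same gesture still resolve to the right letter.
 * The codes here are invented for illustration, not taken from Table 3. */
struct gesture_entry { unsigned code; char letter; };

static const struct gesture_entry gesture_table[] = {
    { 0x1C0u, 'W' }, { 0x1C1u, 'W' }, { 0x1C8u, 'W' }, /* three codes for W */
    { 0x0E0u, 'M' },
    { 0x0E4u, 'N' },  /* same finger bits as M, different tilt bits */
};

char lookup_letter(unsigned code)
{
    size_t i;
    for (i = 0; i < sizeof gesture_table / sizeof gesture_table[0]; i++)
        if (gesture_table[i].code == code)
            return gesture_table[i].letter;
    return '?'; /* unknown gesture */
}
```

In the real system this lookup happens against the SQLite database on the phone, keyed additionally by the language the user has selected.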

Accuracy% = (T / N) × 100    (2)

where T is the total number of true gesture detections and N is the total number of tries. The accuracy obtained from equation (2) is about 98.26% for the English letters and about 99.33% for the Arabic letters. The accuracy of each letter is shown in Figure 5 a and b.
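Equation (2) and the averaging over the two languages can be expressed directly in C; the function names are assumptions for illustration.

```c
#include <assert.h>

/* Equation (2): accuracy% = (T / N) * 100, where T is the number of true
 * gesture detections and N the number of tries (20 per letter in the paper). */
double accuracy_percent(int true_detections, int total_tries)
{
    return 100.0 * (double)true_detections / (double)total_tries;
}

/* Mean of the two per-language accuracies, as reported in the abstract. */
double average_accuracy(double a, double b)
{
    return (a + b) / 2.0;
}
```

With the reported per-language figures of 98.26% and 99.33%, the average evaluates to the 98.795% quoted in the abstract.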

Conclusion and future works
The implemented HTS can translate any sign language into speech once the developer realizes it and downloads it into the application. The HTS uses the smallest possible number of sensors without affecting system efficiency. The small size, light weight, and sewability of the controller, together with the wireless Bluetooth module for data transmission, make the device portable and user-friendly. The proposed HTS contains an Arduino, a smartphone, flex sensors, and an accelerometer. The system is designed to run multiple sign languages and achieved high accuracy of about 98.26% for American Sign Language and 99.33% for Arabic Sign Language. In addition to translating sign language, the system enables users to save and speak the words they use most in their daily routine. Future work could include word suggestion and word auto-completion.