Continuous Arabic Sign Language Recognition in User Dependent Mode
K. Assaleh, T. Shanableh, M. Fanaswala, F. Amin, H. Bajaj
DOI: 10.4236/jilsa.2010.21003

Abstract

Arabic Sign Language recognition is an emerging field of research. Previous attempts at automatic vision-based recognition of Arabic Sign Language have focused mainly on finger spelling and recognizing isolated gestures. In this paper we report the first continuous Arabic Sign Language recognition system, building on existing research in feature extraction and pattern recognition. Developing the presented work required collecting a continuous Arabic Sign Language database, which we designed and recorded in cooperation with a sign language expert. We intend to make the collected database available to the research community. Our system, based on spatio-temporal feature extraction and hidden Markov models, achieves an average word recognition rate of 94%, notable given the use of a high-perplexity vocabulary and unrestrictive grammar. We compare the proposed work against existing sign language techniques based on accumulated image differences and motion estimation. The experimental results show that the proposed work outperforms existing solutions in terms of recognition accuracy.
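The front end described in the abstract (spatio-temporal features built from accumulated frame differences, compressed with a frequency-domain transform as in the DCT of [13], then passed to hidden Markov models) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact method: the function name `frame_difference_features` and the 8×8 low-frequency coefficient block are assumed parameters chosen for the example.

```python
import numpy as np

def frame_difference_features(frames, k=8):
    """Sketch of spatio-temporal feature extraction (illustrative only):
    accumulate absolute differences between successive frames, then keep
    a k x k block of low-frequency 2-D DCT coefficients as a compact
    descriptor. The k x k block size is a hypothetical choice."""
    frames = np.asarray(frames, dtype=float)
    # Accumulate motion: sum of absolute consecutive-frame differences.
    acc = np.abs(np.diff(frames, axis=0)).sum(axis=0)

    # Orthonormal DCT-II matrix: row u, column n -> cos(pi*u*(n+0.5)/N).
    def dct_matrix(n):
        m = np.cos(np.pi * np.outer(np.arange(n), np.arange(n) + 0.5) / n)
        m[0] /= np.sqrt(2.0)
        return m * np.sqrt(2.0 / n)

    # Separable 2-D DCT of the accumulated-difference image.
    dr = dct_matrix(acc.shape[0])
    dc = dct_matrix(acc.shape[1])
    coeffs = dr @ acc @ dc.T

    # Low-frequency coefficients carry most of the motion energy;
    # flatten them into the per-gesture feature vector.
    return coeffs[:k, :k].ravel()
```

The resulting vectors would then be used as observation sequences for gesture-level HMM training and decoding; that stage is omitted here since it depends on the toolkit used (the paper cites the Georgia Tech gesture toolkit [12]).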

Share and Cite:

K. Assaleh, T. Shanableh, M. Fanaswala, F. Amin and H. Bajaj, "Continuous Arabic Sign Language Recognition in User Dependent Mode," Journal of Intelligent Learning Systems and Applications, Vol. 2 No. 1, 2010, pp. 19-27. doi: 10.4236/jilsa.2010.21003.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] J. S. Kim, W. Jang, and Z. Bien, “A dynamic gesture recognition system for the Korean sign language (KSL),” IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 26, pp. 354–359, April 1996.
[2] H. Matsuo, S. Igi, S. Lu, Y. Nagashima, Y. Takata, and T. Teshima, “The recognition algorithm with noncontact for Japanese sign language using morphological analysis,” Proceedings of Gesture Workshop, pp. 273–284, 1997.
[3] S. S. Fels and G. E. Hinton, “Glove-talk: A neural network interface between a data-glove and a speech synthesizer,” IEEE Transactions on Neural Networks, Vol. 4, pp. 2–8, January 1993.
[4] O. Al-Jarrah and A. Halawani, “Recognition of gestures in Arabic sign language using neuro-fuzzy systems,” Artificial Intelligence, Vol. 133, No. 1–2, pp. 117–138, December 2001.
[5] K. Assaleh and M. Al-Rousan, “Recognition of Arabic Sign language alphabet using polynomial classifiers,” EURASIP Journal on Applied Signal Processing, Vol. 2005, No. 13, pp. 2136–2146, 2005.
[6] M. AL-Rousan, K. Assaleh, and A. Tala'a, “Video-based signer-independent Arabic sign language recognition using hidden Markov models,” Applied Soft Computing, Vol. 9, No. 3, June 2009.
[7] T. Shanableh, K. Assaleh, and M. Al-Rousan, “Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language,” IEEE Transactions on Systems, Man and Cybernetics Part B, Vol. 37, No. 3, pp. 641–650, 2007.
[8] T. Shanableh and K. Assaleh, “Telescopic vector composition and polar accumulated motion residuals for feature extraction in Arabic sign language recognition,” EURASIP Journal on Image and Video Processing, Vol. 2007, Article ID 87929, 10 pages, 2007. doi:10.1155/2007/87929.
[9] T. Starner, J. Weaver, and A. Pentland, “Real-time American sign language recognition using desk and wearable computer based video,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 12, pp. 1371–1375, 1998.
[10] T. Starner, J. Weaver, and A. Pentland, “A wearable computer-based American sign language recognizer,” Personal and Ubiquitous Computing, 1997.
[11] Sharjah City for Humanitarian Services (SCHS), website: http://www.sharjah-welcome.com/schs/about/.
[12] T. Westeyn, H. Brashear and T. Starner, “Georgia tech gesture toolkit: Supporting experiments in gesture recognition,” Proceedings of International Conference on Perceptive and Multimodal User Interfaces, Vancouver, B.C., November 2003.
[13] K. R. Rao and P. Yip, “Discrete cosine transform: Algorithms, advantages, applications,” Academic Press, ISBN 012580203X, August 1990.
[14] T. Dietterich, “Machine learning for sequential data: A review,” In T. Caelli (Ed.) Structural, Syntactic, and Statistical Pattern Recognition; Lecture Notes in Computer Science, Vol. 2396, pp. 15–30, Springer-Verlag, 2002.
[15] L. Rabiner and B. Juang, “Fundamentals of speech recognition,” New Jersey: Prentice-Hall Inc., 1993.
[16] F. S. Chen, C. M. Fu, and C. L. Huang, “Hand gesture recognition using a real-time tracking method and hidden Markov models,” Image and Vision Computing, Vol. 21, No. 8, pp. 745–758, 2003.
[17] M. Ghanbari, “Video coding: An introduction to standard codecs,” Second Edition, The Institution of Engineering and Technology, 2003.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.