
An algorithm on sign words extraction and recognition of continuous Persian sign language based on motion and shape features of hands

  • Theoretical Advances
  • Published in Pattern Analysis and Applications

Abstract

Sign language is the most important means of communication for deaf people. Because most hearing people are not familiar with it, a translator system that facilitates communication between deaf people and their surroundings is needed. Such a system, which translates sign language into a spoken language, must be able to identify the gestures in sign language videos. This study therefore presents a machine-vision system for recognizing signs in continuous Persian sign language video. The system consists of two main phases: sign word extraction and classification. The extraction phase comprises several stages, including hand tracking and separation of sign words; the most challenging of these is separating the sign words from the video sequence. To this end, a new algorithm is presented that detects accurate word boundaries in Persian sign language video. The algorithm decomposes the video into sign words using motion and hand-shape features, yielding more favorable results than other methods reported in the literature. In the classification phase, the separated words are classified and recognized using a hidden Markov model and a hybrid KNN-DTW algorithm, respectively. Owing to the lack of a suitable Persian sign language database, the authors prepared a database containing several sentences and words performed by three signers. Simulation of the proposed word-boundary detection and classification algorithms on this database led to promising results: an average rate of 93.73% for accurate word-boundary detection, and average word-recognition rates of 92.4% and 92.3% using hand motion and shape features, respectively.
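The two phases summarized above can be illustrated with a minimal sketch: splitting continuous signing into word intervals from a per-frame hand-speed series, then labelling an extracted word with a nearest-neighbour classifier under dynamic time warping (the "hybrid KNN-DTW" named in the abstract). This is a hedged illustration, not the authors' implementation: the speed threshold, the per-frame Euclidean distance, and all data below are assumptions for demonstration only.

```python
def segment_words(speed, thresh=0.2, min_len=3):
    """Return (start, end) frame intervals where hand speed stays above
    `thresh` for at least `min_len` frames; a crude stand-in for the
    paper's word-boundary detection."""
    words, start = [], None
    for i, s in enumerate(speed):
        if s > thresh and start is None:
            start = i
        elif s <= thresh and start is not None:
            if i - start >= min_len:
                words.append((start, i))
            start = None
    if start is not None and len(speed) - start >= min_len:
        words.append((start, len(speed)))
    return words

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of feature
    vectors, using per-frame Euclidean cost."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[-1][-1]

def knn_dtw_classify(query, templates, k=1):
    """Majority vote over the k DTW-nearest labelled training sequences.
    `templates` is a list of (feature_sequence, label) pairs."""
    ranked = sorted(templates, key=lambda t: dtw_distance(query, t[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

Because DTW aligns sequences of different lengths, a query trajectory is matched to a template performed at a different speed, which is why elastic matching is commonly preferred over frame-by-frame comparison for signs with varying tempo.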




Author information

Corresponding author: Manoochehr Nahvi.

About this article

Cite this article

Zadghorban, M., Nahvi, M. An algorithm on sign words extraction and recognition of continuous Persian sign language based on motion and shape features of hands. Pattern Anal Applic 21, 323–335 (2018). https://doi.org/10.1007/s10044-016-0579-2
