
Vision-based hand gesture recognition of alphabets, numbers, arithmetic operators and ASCII characters in order to develop a virtual text-entry interface system

Part of the collection: New Trends in data pre-processing methods for signal and image classification

Abstract

Hand gesture recognition can substitute for a text-entry interface in human–computer interaction. However, developing a virtual text-entry interface that covers a large number of gesture-based characters is a challenging task. In this paper, 18 new ASCII printable characters have been introduced along with some of the previously introduced characters [the A–Z alphabets, the 0–9 numbers and four arithmetic operators (add, minus, multiply, divide)]. In addition to some of the efficient existing features, three new features of 15 dimensions have been incorporated to enhance the performance of the system: normalized distance between direction extremes, close figure test and direction change ratio. These features are measured for single-stroke as well as multistroke gestures. An experimental analysis has been carried out to select the optimal features using statistical analysis techniques, namely the one-way analysis of variance test, the Kruskal–Wallis test and the Friedman test, in combination with an incremental feature selection technique. Furthermore, a comparative study has been carried out for classification of the 58 gestures with the new list of features using five classifiers, namely SVM, kNN, Naïve Bayes, ANN and ELM. The maximum accuracy achieved using the combination of existing and proposed features is 96.95%, compared with 94.60% achieved using the existing features alone.
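
To make the selection stage concrete, the following is a minimal sketch, not the authors' implementation, of ranking gesture features with a one-way statistical test and then applying incremental feature selection with an SVM, as described in the abstract. The use of SciPy/scikit-learn, the data layout (one row per gesture sample, one column per feature), the rank-by-p-value rule and the helper names rank_features and incremental_feature_selection are assumptions made for illustration; the synthetic data merely stands in for the 58-class gesture feature set.

# Minimal illustrative sketch, NOT the authors' code: rank gesture features with a
# one-way statistical test, then run incremental feature selection with an SVM.
# SciPy/scikit-learn, the data layout and the rank-by-p-value rule are assumptions.
import numpy as np
from scipy.stats import f_oneway, kruskal
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rank_features(X, y, test="anova"):
    """Order feature columns by the p-value of a one-way test across gesture classes."""
    stat_fn = f_oneway if test == "anova" else kruskal
    classes = np.unique(y)
    pvals = [stat_fn(*[X[y == c, j] for c in classes])[1] for j in range(X.shape[1])]
    return np.argsort(pvals)            # most discriminative columns first

def incremental_feature_selection(X, y, order, cv=5):
    """Add ranked features one at a time; keep the subset with the best CV accuracy."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    best_acc, best_k = 0.0, 1
    for k in range(1, len(order) + 1):
        acc = cross_val_score(clf, X[:, order[:k]], y, cv=cv).mean()
        if acc > best_acc:
            best_acc, best_k = acc, k
    return order[:best_k], best_acc

if __name__ == "__main__":
    # Synthetic stand-in for the 58-class gesture set: 10 samples per class,
    # 30-dimensional feature vectors; real features would come from the trajectories.
    rng = np.random.default_rng(0)
    y = np.repeat(np.arange(58), 10)
    X = rng.normal(size=(y.size, 30)) + y[:, None] * 0.05   # weak class signal
    selected, acc = incremental_feature_selection(X, y, rank_features(X, y, "kruskal"))
    print(f"kept {selected.size} of 30 features, cross-validated accuracy {acc:.3f}")

The same ranking step could be run with the one-way ANOVA test instead of Kruskal–Wallis, and the Friedman test the abstract mentions applies to repeated-measures comparisons rather than this per-column ranking. The 96.95% figure reported in the paper comes from the authors' own features, data and classifiers, not from this sketch.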

Acknowledgements

The authors acknowledge the Speech and Image Processing Laboratory under the Department of ECE at the National Institute of Technology, Silchar, India, for providing all the necessary facilities to carry out this research work.

Author information

Corresponding author

Correspondence to Songhita Misra.

Ethics declarations

Conflict of interest

This project was not funded by any organization. The authors declare that they have no conflict of interest.


About this article


Cite this article

Misra, S., Singha, J. & Laskar, R.H. Vision-based hand gesture recognition of alphabets, numbers, arithmetic operators and ASCII characters in order to develop a virtual text-entry interface system. Neural Comput & Applic 29, 117–135 (2018). https://doi.org/10.1007/s00521-017-2838-6

