
Visual multi-touch air interface for barehanded users by skeleton models of hand regions

  • Regular Paper
  • Robotics and Automation

International Journal of Control, Automation and Systems

Abstract

In this paper, we propose a visual multi-touch air interface. The proposed interface is based on image processing and requires no equipment other than a built-in camera. The implemented device provides a barehanded interface that supports multi-touch operation. Because of its low computational load, the proposed device is easy to apply to real-time systems, and because no additional equipment is required, it is cheaper than existing methods that rely on glove data or 3-dimensional data. To extract hand regions robustly under various circumstances, we propose an image processing algorithm based on the HCbCr color model, a fuzzy color filter, and labeling. In addition, to recognize hand gestures accurately, we propose a motion recognition algorithm based on geometric feature points, a skeleton model, and the Kalman filter. Finally, experiments show that the proposed device is applicable to remote controllers for video games, smart TVs, and other computer applications.
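The abstract describes the pipeline only at a high level, so the following Python/OpenCV sketch is an illustration rather than the authors' implementation: crisp CbCr thresholds (the bounds used here are common skin-tone values, assumed rather than taken from the paper) stand in for the HCbCr fuzzy color filter, connected-component labeling picks out the hand blob, and a constant-velocity Kalman filter smooths a tracked fingertip position. The skeleton-model and geometric-feature steps are omitted.

```python
import cv2
import numpy as np

# Assumed chrominance bounds for skin; the paper's fuzzy color filter is
# approximated by a crisp threshold, since its membership functions are
# not given in the abstract.
CR_RANGE = (133, 173)
CB_RANGE = (77, 127)

def extract_hand_mask(frame_bgr):
    """Segment candidate skin pixels by CbCr chrominance, then keep the
    largest connected component (a stand-in for the labeling step)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]) &
            (cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1])).astype(np.uint8) * 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:                      # no foreground component found
        return None
    # stats[0] is the background; keep the largest foreground blob.
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255

def make_fingertip_tracker():
    """Constant-velocity Kalman filter over a fingertip's (x, y) position;
    the motion model and noise levels are assumptions for illustration."""
    kf = cv2.KalmanFilter(4, 2)     # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf
```

In use, one would call `kf.predict()` once per frame and, whenever a fingertip is detected in the hand mask (e.g., as a skeleton endpoint), feed it back with `kf.correct(np.array([[x], [y]], np.float32))` so the track survives brief detection dropouts.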




Author information

Corresponding author

Correspondence to Young Hoon Joo.

Additional information

Recommended by Editorial Board member Pinhas Ben-Tzvi under the direction of Editor Myotaeg Lim.

This work was supported by the Human Resources Development program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korean government's Ministry of Knowledge Economy (No. 20124010203240).

Jin Gyu Kim received his B.S. and M.S. degrees in the School of Electronics and Information Engineering from Kunsan National University, Kunsan, Korea, in 2007 and 2009, respectively. He is currently working toward a Ph.D. degree. His research interests include human-robot interaction and intelligent surveillance systems.

Young Hoon Joo received his B.S., M.S., and Ph.D. degrees in Electrical Engineering from Yonsei University, Seoul, Korea, in 1982, 1984, and 1995, respectively. He worked with Samsung Electronics Company, Seoul, Korea, from 1986 to 1995 as a project manager, and was with the University of Houston, Houston, TX, from 1998 to 1999 as a visiting professor in the Department of Electrical and Computer Engineering. He is currently a professor in the Department of Control and Robotics Engineering, Kunsan National University, Korea. His major research interests include intelligent robots, intelligent control, human-robot interaction, and intelligent surveillance systems. He served as President of the Korea Institute of Intelligent Systems (KIIS) (2008-2009), has served as Editor of the International Journal of Control, Automation, and Systems (IJCAS) since 2008, and has served as Vice-President of the Korean Institute of Electrical Engineers (KIEE) since 2012.


About this article

Cite this article

Kim, J.G., Joo, Y.H. Visual multi-touch air interface for barehanded users by skeleton models of hand regions. Int. J. Control Autom. Syst. 11, 84–91 (2013). https://doi.org/10.1007/s12555-012-9217-y

