ABSTRACT
With the fourth industrial revolution and the growing use of cobots in industry come many opportunities for a new generation of control panels. In this article, we propose a deep learning model that recognizes, in real time, 10 different gestures that can be used to interact with a cobot. We introduce a new dataset of gestures suited to an industrial context. The videos were captured with a computer webcam and then processed to remove background noise by isolating motion in the grayscale images. We extract spatio-temporal features with a combination of 3D convolution and LSTM layers. We also propose a real-time recognition method: frames are captured continuously and fed to the model, which produces a prediction every 2.4 seconds. Our experimental results show a recognition rate above 90% for 8 out of 10 gestures. Finally, an interface was created to test the method in real time and to add new gesture classes for the model to recognize.
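The preprocessing step described above, isolating movement by comparing consecutive grayscale webcam frames so the static background is suppressed, can be sketched as follows. This is a minimal illustration in NumPy, not the authors' exact pipeline: the `isolate_motion` helper and the threshold value of 25 are assumptions for the example.

```python
import numpy as np

def isolate_motion(prev_gray: np.ndarray, curr_gray: np.ndarray,
                   threshold: int = 25) -> np.ndarray:
    """Return a binary mask of the pixels that changed between two
    grayscale frames, suppressing the static background."""
    # Work in a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255

# Example: a static background with a small region that "moved".
prev_frame = np.zeros((120, 160), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[40:60, 50:70] = 200  # the moving hand region

mask = isolate_motion(prev_frame, curr_frame)
print(mask[50, 60], mask[0, 0])  # moving pixel vs. background pixel
```

In a live setting, the same mask would be computed for each new webcam frame (e.g. via OpenCV capture), and the masked frames buffered into a short clip that is handed to the 3D-CNN/LSTM model for one prediction per 2.4-second window.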