
Light-Weight CNN-Attention Based Architecture Trained with a Hybrid Objective Function for EMG-Based Human Machine Interfaces

  • Chapter
Transactions on Computational Science XL

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13850)


Abstract

This research focuses on Hand Gesture Recognition (HGR) from Surface Electromyogram (sEMG) signals, motivated by their unique potential for decoding wearable data to interpret human intent for immersion in Mixed Reality (MR) environments. Existing solutions rely on complicated, heavyweight Deep Neural Networks (DNNs), which restricts their practical use in low-power, resource-constrained wearable systems. In this work, we propose a light-weight hybrid architecture (HDCAM) that combines a Convolutional Neural Network (CNN) with an attention mechanism to effectively extract local and global representations of the input. With 58,441 parameters, the proposed HDCAM model reaches new state-of-the-art (SOTA) performance, achieving 83.54% and 82.86% accuracy for classifying 17 hand gestures on window sizes of 300 ms and 200 ms, respectively. Training HDCAM requires 18.87x fewer parameters than its previous SOTA counterpart. Furthermore, the model is trained with a two-fold hybrid loss function: (i) a Cross Entropy (CE) loss, which focuses on identifying the features needed for the classification objective, and (ii) a Supervised Contrastive (SC) loss, which encourages more robust and generic features by minimizing the ratio of intra-class to inter-class similarity.
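The two-term objective described above can be sketched in code. The following is a minimal NumPy illustration of combining a cross-entropy loss with a supervised contrastive loss, not the authors' implementation: the weighting `lam` and the `temperature` value are hypothetical assumptions, since the abstract does not specify them.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Standard CE over a batch: negative log-softmax probability of the true class.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supervised_contrastive(features, labels, temperature=0.1):
    # Supervised contrastive loss in the style of Khosla et al. (2020):
    # pull same-class embeddings together, push different classes apart.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    # Numerically stable log-softmax over all other samples in the batch.
    sim_masked = np.where(not_self, sim, -np.inf)
    row_max = sim_masked.max(axis=1, keepdims=True)
    log_denom = row_max + np.log(np.exp(sim_masked - row_max).sum(axis=1, keepdims=True))
    log_prob = sim - log_denom
    # Positives: other samples sharing the anchor's label.
    pos = (labels[:, None] == labels[None, :]) & not_self
    has_pos = pos.sum(axis=1) > 0
    loss = -(log_prob * pos).sum(axis=1)[has_pos] / pos.sum(axis=1)[has_pos]
    return loss.mean()

def hybrid_loss(logits, features, labels, lam=0.5):
    # Weighted sum of the two objectives; lam is a hypothetical trade-off weight.
    return cross_entropy(logits, labels) + lam * supervised_contrastive(features, labels)
```

In a training loop, `logits` would come from the classification head and `features` from an intermediate embedding layer; well-separated same-class embeddings drive the SC term toward zero.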

This Project was partially supported by Department of National Defence’s Innovation for Defence Excellence & Security (IDEaS), Canada.



Author information


Corresponding author

Correspondence to Arash Mohammadi.


Ethics declarations

Conflicts of Interest/Competing Interests

The authors declare no competing interests.


Copyright information

© 2023 The Author(s), under exclusive license to Springer-Verlag GmbH, DE, part of Springer Nature

About this chapter


Cite this chapter

Zabihi, S., Rahimian, E., Asif, A., Yanushkevich, S., Mohammadi, A. (2023). Light-Weight CNN-Attention Based Architecture Trained with a Hybrid Objective Function for EMG-Based Human Machine Interfaces. In: Gavrilova, M., et al. Transactions on Computational Science XL. Lecture Notes in Computer Science, vol. 13850. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-67868-8_4


  • DOI: https://doi.org/10.1007/978-3-662-67868-8_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-67867-1

  • Online ISBN: 978-3-662-67868-8

  • eBook Packages: Computer Science, Computer Science (R0)
