Action recognition through fusion of sEMG and skeletal data in feature level

  • Original Research
  • Published in the Journal of Ambient Intelligence and Humanized Computing

Abstract

Human actions can be recognized using a single modality. However, the information obtained from a single modality is limited, because it captures only one type of physical attribute. It is therefore attractive to improve recognition accuracy by fusing two complementary modalities: surface electromyography (sEMG) and skeletal data. In this paper, we propose a general framework for fusing sEMG signals and skeletal data. First, vector of locally aggregated descriptors (VLAD) features are extracted from the sEMG sequences and the skeletal sequences, respectively. Second, the features obtained from the sEMG and skeletal data are mapped through differently weighted kernels using multiple kernel learning (MKL). Finally, the classification results are obtained from the MKL model. A dataset of 18 types of human actions was collected with a Kinect V2 camera and a Thalmic Myo armband to verify our approach. The experimental results show that the accuracy of human action recognition is improved by combining skeletal data with sEMG signals.
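To make the pipeline concrete, the sketch below shows the two stages in minimal form; it is an illustration under assumptions, not the authors' implementation. It presumes that per-sequence local descriptors for each modality are already available as NumPy arrays, encodes each sequence with VLAD over a k-means codebook, and fuses the two modalities by a weighted sum of RBF kernels fed to a precomputed-kernel SVM. The weights w_emg and w_skel are fixed by hand here, whereas the paper learns them with multiple kernel learning (e.g., via the Shogun toolbox); the helper names build_codebook, vlad_encode, and fuse_and_train are illustrative.

# Minimal sketch of the VLAD + kernel-fusion pipeline (illustrative, not the paper's code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC


def build_codebook(all_descriptors, k=16):
    # Learn a k-means codebook from the stacked local descriptors of the training set.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(all_descriptors))


def vlad_encode(descriptors, codebook):
    # Aggregate one sequence's local descriptors (n_frames x d) into a single VLAD vector.
    assignments = codebook.predict(descriptors)
    k, d = codebook.cluster_centers_.shape
    vlad = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assignments == i]
        if len(members):
            vlad[i] = (members - codebook.cluster_centers_[i]).sum(axis=0)
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))      # power normalization
    return vlad / (np.linalg.norm(vlad) + 1e-12)      # L2 normalization


def fuse_and_train(emg_feats, skel_feats, labels, w_emg=0.5, w_skel=0.5):
    # Fixed-weight kernel fusion; the paper instead learns these weights with MKL.
    k_fused = w_emg * rbf_kernel(emg_feats) + w_skel * rbf_kernel(skel_feats)
    return SVC(kernel="precomputed").fit(k_fused, labels)

At test time, the same weighted combination would be applied to the kernels computed between test and training VLAD vectors before calling predict on the trained classifier.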

Acknowledgements

This work was supported by the National Natural Science Foundation of China (62073279, 61733011), the Central Government Guided Local Science and Technology Development Fund Project (216Z2001G), and the Hebei Innovation Capability Improvement Plan Project (22567619H).

Author information


Corresponding author

Correspondence to Weili Ding.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, X., Ding, W., Bian, S. et al. Action recognition through fusion of sEMG and skeletal data in feature level. J Ambient Intell Human Comput 13, 4125–4134 (2022). https://doi.org/10.1007/s12652-022-03867-0

