Viewpoint-invariant exercise repetition counting

  • Research
  • Published in Health Information Science and Systems

Abstract

Repetition counting is common in exercise training and physical rehabilitation. Existing vision-based repetition counting methods place little emphasis on concurrent motions in the same video or on counting skeletons captured from different view angles. This work counts repetitions by analyzing the spectrogram of the cosine similarity between pose estimates. In addition to the public datasets, exercise videos were collected from 11 adults to verify that the proposed method can handle concurrent motion and different view angles. The method was validated on the University of Idaho Physical Rehabilitation Movements Data Set (UI-PRMD) and the MM-fit dataset. The overall mean absolute error (MAE) for MM-fit was 0.06, with an off-by-one accuracy (OBOA) of 0.94; for UI-PRMD, the MAE was 0.06 with an OBOA of 0.95. We also tested performance across various camera locations and concurrent motions on 57 skeleton time-series videos, with an overall MAE of 0.07 and an OBOA of 0.91. The proposed method provides view-angle- and motion-agnostic counting of concurrent motions, and can potentially be used for large-scale remote rehabilitation and exercise training with a single camera.
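
The abstract's pipeline can be illustrated in a few lines: estimate a skeleton per video frame, reduce each frame to a cosine-similarity score against a reference pose, and read the repetition count off the spectrogram of that one-dimensional signal. The sketch below is a minimal illustration of this idea under assumed details (the first frame as the reference pose, SciPy's spectrogram, averaging the per-window dominant frequency); it is not the authors' released code, and the helper names are hypothetical. It also includes the two reported metrics under their common definitions in repetition-counting work: MAE as the count error normalized by the true count (consistent with values like 0.06), and OBOA as the fraction of videos whose predicted count is within one repetition of the ground truth.

```python
# Minimal sketch of spectrogram-based repetition counting from pose
# similarity. Illustrative only: names and parameter choices (first-frame
# reference, nperseg) are assumptions, not the paper's released code.
import numpy as np
from scipy.signal import spectrogram

def count_repetitions(poses: np.ndarray, fps: float) -> float:
    """poses: (T, D) array, one flattened skeleton-keypoint vector per frame."""
    # Cosine similarity of every frame against a reference frame; a periodic
    # exercise makes this 1-D signal oscillate once per repetition.
    ref = poses[0]
    norms = np.linalg.norm(poses, axis=1) * np.linalg.norm(ref)
    sim = poses @ ref / np.maximum(norms, 1e-8)

    # Short-time Fourier analysis of the (zero-mean) similarity signal.
    freqs, _, sxx = spectrogram(sim - sim.mean(), fs=fps,
                                nperseg=min(256, len(sim)))

    # Dominant frequency per window (cycles/s), integrated over the clip.
    dominant_hz = freqs[np.argmax(sxx, axis=0)]
    duration_s = len(sim) / fps
    return float(dominant_hz.mean() * duration_s)  # cycles ~ repetitions

def evaluate(pred: np.ndarray, true: np.ndarray) -> tuple[float, float]:
    """Normalized MAE and off-by-one accuracy (assumed standard definitions)."""
    err = np.abs(pred - true)
    mae = float(np.mean(err / true))   # e.g. 0.06 ~ 6% count error
    oboa = float(np.mean(err <= 1))    # share of clips within +/-1 repetition
    return mae, oboa
```

Working from pose vectors rather than raw pixels is plausibly what makes the count view-angle agnostic: the cosine similarity between skeleton configurations varies far less with camera placement than image appearance does.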



Data availability

The data supporting this study’s findings are available on request from the corresponding author, YCH. The data are not publicly available due to the privacy of research participants.

Code availability

The code for this work is available at https://github.com/YuChengHSU/repetition-counting.

References

  1. Jack K, McLean SM, Moffett JK, Gardiner E. Barriers to treatment adherence in physiotherapy outpatient clinics: a systematic review. Manual Ther. 2010;15(3):220–8.


  2. Heath G, Howze EH, Kahn EB, Ramsey LT. Increasing physical activity: a report on recommendations of the task force on community preventive services. Atlanta: CDC; 2001.

  3. Standage M, Duda JL, Ntoumanis N. A model of contextual motivation in physical education: using constructs from self-determination and achievement goal theories to predict physical activity intentions. J Educ Psychol. 2003;95(1):97.


  4. Garcia-Garcia FE, Boccherini-Gallardo M, Rossa-Sierra A, Cortes-Chavez F. Rehab: New ways to improve physiotherapy rehabilitation experience. In: International conference on applied human factors and ergonomics. Cham: Springer; 2021. p. 1134–1143.

  5. Triandafilou KM, Tsoupikova D, Barry AJ, Thielbar KN, Stoykov N, Kamper DG. Development of a 3d, networked multi-user virtual reality environment for home therapy after stroke. J. Neuroeng. Rehabil. 2018;15(1):1–13.


  6. Ofli F, Kurillo G, Obdržálek Š, Bajcsy R, Jimison HB, Pavel M. Design and evaluation of an interactive exercise coaching system for older adults: lessons learned. IEEE J. Biomed. Health Inf. 2015;20(1):201–12.


  7. Ishii S, Yokokubo A, Luimula M, Lopez G. Exersense: physical exercise recognition and counting algorithm from wearables robust to positioning. Sensors. 2021;21(1):91.


  8. Fieraru M, Zanfir M, Pirlea SC, Olaru V, Sminchisescu C. Aifit: Automatic 3d human-interpretable feedback models for fitness training. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2021. p. 9919–28.

  9. Roosink M, Robitaille N, McFadyen BJ, Hébert LJ, Jackson PL, Bouyer LJ, Mercier C. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept. J. Neuroeng. Rehabil. 2015;12(1):1–10.


  10. Dwibedi D, Aytar Y, Tompson J, Sermanet P, Zisserman A. Counting out time: Class agnostic video repetition counting in the wild. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2020. p. 10387–96.

  11. Levy O, Wolf L. Live repetition counting. In: Proceedings of the IEEE international conference on computer vision; 2015. https://doi.org/10.1109/ICCV.2015.346.

  12. Thangali A, Sclaroff S. Periodic motion detection and estimation via space-time sampling. In: 2005 7th IEEE workshops on applications of computer vision (WACV/MOTION’05), vols. 1, 2. IEEE; 2005. p. 176–82.

  13. Ferreira B, Ferreira PM, Pinheiro G, Figueiredo N, Carvalho F, Menezes P, Batista J. Exploring workout repetition counting and validation through deep learning. In: International conference on image analysis and recognition; 2020. https://doi.org/10.1007/978-3-030-50347-5_1.

  14. Strömbäck D, Huang S, Radu V. Mm-fit: Multimodal deep learning for automatic exercise logging across sensing devices. Proc. ACM Interact. Mobile Wearable Ubiquit. Technol. 2020;4(4):1–22.


  15. Runia TFH, Snoek CGM, Smeulders AWM. Real-world repetition estimation by div, grad and curl. In: 2018 IEEE/CVF conference on computer vision and pattern recognition; 2018.

  16. Briassouli A, Ahuja N. Extraction and analysis of multiple periodic motions in video sequences. IEEE Trans. Pattern Anal. Mach. Intell. 2007;29(7):1244–61.


  17. Sun K, Xiao B, Liu D, Wang J. Deep high-resolution representation learning for human pose estimation. In: CVPR; 2019.

  18. Wang J, Sun K, Cheng T, Jiang B, Deng C, Zhao Y, Liu D, Mu Y, Tan M, Wang X, Liu W, Xiao B. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019.

  19. Yuan Y, Chen X, Wang J. Object-contextual representations for semantic segmentation. In: Proceedings of European conference on computer vision (ECCV), Glasgow, UK; 2020.

  20. Martinez J, Hossain R, Romero J, Little JJ. A simple yet effective baseline for 3d human pose estimation. In: Proceedings of the IEEE international conference on computer vision; 2017. p. 2640–9.

  21. Cao Z, Hidalgo G, Simon T, Wei SE, Sheikh Y. Openpose: realtime multi-person 2d pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 2021. https://doi.org/10.1109/TPAMI.2019.2929257.

  22. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, Devin M, Ghemawat S, Irving G, Isard M, Kudlur M, Levenberg J, Monga R, Moore S, Murray DG, Steiner B, Tucker P, Vasudevan V, Warden P, Wicke M, Yu Y, Zheng X. Tensorflow: a system for large-scale machine learning. In: Proceedings of the 12th USENIX conference on operating systems design and implementation; 2016.

  23. Junejo IN, Dexter E, Laptev I, Perez P. View-independent action recognition from temporal self-similarities. IEEE Trans. Pattern Anal. Mach. Intell. 2010;33(1):172–85.


  24. Körner M, Denzler J. Temporal self-similarity for appearance-based action recognition in multi-view setups. In: International conference on computer analysis of images and patterns. Berlin: Springer; 2013. p. 163–71.

  25. Sun C, Junejo IN, Tappen M, Foroosh H. Exploring sparseness and self-similarity for action recognition. IEEE Trans. Image Process. 2015;24(8):2488–501.


  26. Vakanski A, Jun HP, Paul D, Baker R. A data set of human body movements for physical rehabilitation exercises. Data. 2018;3(1):2. https://doi.org/10.3390/data3010002.

Download references

Funding

This work was funded by the National Key Research and Development Program of China, Ministry of Science and Technology of China (2019YFE0198600), and the Innovation and Technology Fund of the Innovation and Technology Commission of Hong Kong (MHP/081/19).

Author information

Authors and Affiliations

Authors

Contributions

YCH conducted the data collection, the analysis, and the writing of the report. YCH, TE, and KT contributed to the study design and the review of the manuscript.

Corresponding authors

Correspondence to Yu Cheng Hsu or Kwok-leung Tsui.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Ethical approval

This research was approved by the Human Subjects Ethics Sub-Committee, City University of Hong Kong (Ref. 3-2-201803_02). All participants were fully informed and consented to participate in the experiment.

Consent for publication

Written informed consent for publication was obtained from all of the participants in our experiment.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (GIF 5363 kb).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article


Cite this article

Hsu, Y.C., Efstratios, T. & Tsui, Kl. Viewpoint-invariant exercise repetition counting. Health Inf Sci Syst 12, 1 (2024). https://doi.org/10.1007/s13755-023-00258-3

