
MAG: a smart gloves system based on multimodal fusion perception

  • Regular Paper
  • Published in CCF Transactions on Pervasive Computing and Interaction

Abstract

The value and significance of intelligent experiments have gained widespread consensus, and an industry chain has formed around them in the education field. However, interaction issues severely constrain the broad application of such systems: operational feedback is weak, and traditional interactive devices suffer from obstructed views and cannot reveal subtle experimental phenomena at close range. The most prominent challenge is that existing virtual experiments cannot accurately perceive user intentions. In response to these problems in intelligent experiment interaction tools, this paper proposes a smart gloves system for middle school experiments based on multimodal fusion perception. The system combines virtual and real experiments, using the smart gloves to facilitate the experimental process. We design the hardware structure of the smart gloves, propose the YSA and ICAM algorithms to capture users' multimodal intentions, and construct the MAG algorithm to fuse and comprehend those intentions at the decision-making level. This enhances the system's ability to accurately comprehend and analyze user intentions, strengthening the intent understanding capability of the smart gloves. Experimental results demonstrate that the designed smart gloves can promptly infer users' experimental intentions and provide corresponding feedback. The experiments further indicate that the developed interaction tool has unique advantages in addressing the challenging technical issues faced by existing intelligent experiment tools and traditional data gloves, offering encouraging application prospects.
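The MAG, YSA, and ICAM algorithms themselves appear only in the full text, which this preview does not include. Purely as a hypothetical illustration of what decision-level fusion means in the abstract above, the Python sketch below merges per-modality intent probability distributions by confidence-weighted averaging; the intent labels, modality names, and weights are all invented for the example and are not the authors' implementation.

    import numpy as np

    # Hypothetical decision-level fusion: each modality (e.g., glove flex
    # sensors, speech, gaze) classifies the user's intent independently and
    # emits a probability distribution over the same candidate intents.
    INTENTS = ["grasp_beaker", "pour_liquid", "stir", "idle"]  # invented labels

    def fuse_decisions(probs, conf):
        """Confidence-weighted average of per-modality intent distributions.

        probs: dict mapping modality name -> np.ndarray over INTENTS
        conf:  dict mapping modality name -> scalar confidence weight
        """
        names = list(probs)
        weights = np.array([conf[m] for m in names])
        weights = weights / weights.sum()              # normalize the weights
        stacked = np.stack([probs[m] for m in names])  # (n_modalities, n_intents)
        fused = weights @ stacked                      # weighted mixture
        fused = fused / fused.sum()                    # renormalize
        return INTENTS[int(np.argmax(fused))], fused

    # Example: the gesture channel strongly suggests pouring, speech is ambiguous.
    probs = {
        "gesture": np.array([0.10, 0.70, 0.15, 0.05]),
        "speech":  np.array([0.30, 0.35, 0.25, 0.10]),
    }
    conf = {"gesture": 0.8, "speech": 0.5}
    intent, dist = fuse_decisions(probs, conf)
    print(intent, dist.round(3))

In a decision-level scheme like this, each modality is interpreted on its own and only the resulting distributions are merged, which is what distinguishes it from feature-level fusion, where raw sensor features are combined before classification.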


Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

We would like to thank all those who helped us in the preparation of this manuscript.

Funding

This work was supported by the Independent Innovation Team Project of Jinan City (no. 2019GXRC013) and the General Program of the Natural Science Foundation of Shandong Province (no. ZR2022MF352).

Author information


Contributions

All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by HC, ZF, JT, DK, ZX, and WL. The first draft of the manuscript was written by Hong Cui, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhiquan Feng.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Cui, H., Feng, Z., Tian, J. et al. MAG: a smart gloves system based on multimodal fusion perception. CCF Trans. Pervasive Comp. Interact. 5, 411–429 (2023). https://doi.org/10.1007/s42486-023-00138-5

