Model-Based Dynamic Human Tracking and Reconstruction During Dynamic SLAM

  • Conference paper
ROMANSY 23 - Robot Design, Dynamics and Control (ROMANSY 2020)

Part of the book series: CISM International Centre for Mechanical Sciences (CISM, volume 601)

Abstract

Most existing Simultaneous Localization and Mapping (SLAM) solutions fail in dynamic environments because dynamic objects introduce incorrect and uncertain feature associations. In this paper, we introduce a learning-based object-classification front end that recognizes and removes dynamic objects, thereby ensuring the robustness of our ego-motion estimator in highly dynamic environments. The static background is used for static-environment reconstruction, while the extracted dynamic human objects are used for human tracking and reconstruction. Experimental results show that the proposed approach provides not only accurate environment maps but also well-reconstructed moving humans.
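
The pipeline the abstract describes can be sketched in a few lines: a segmentation front end assigns a semantic class to each pixel, pixels belonging to dynamic classes (humans) are routed to the human tracking/reconstruction branch, and the remaining static-background pixels feed the ego-motion estimator and static map. The class set, function name, and toy frame below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dynamic-object masking step: pixels labeled with a
# dynamic class are excluded from SLAM feature association and handed to the
# human-reconstruction branch instead. All names here are hypothetical.

DYNAMIC_CLASSES = {"person"}  # classes treated as dynamic by the front end

def split_frame(pixel_labels):
    """Partition pixel indices into static-background and dynamic-object sets.

    pixel_labels: dict mapping pixel index -> semantic class string,
                  as produced by an instance-segmentation front end.
    Returns (static_pixels, dynamic_pixels), each a list of pixel indices.
    """
    static_px, dynamic_px = [], []
    for px, label in pixel_labels.items():
        (dynamic_px if label in DYNAMIC_CLASSES else static_px).append(px)
    return static_px, dynamic_px

# Toy 6-pixel "frame" where two pixels were segmented as a human.
labels = {0: "wall", 1: "person", 2: "floor", 3: "person", 4: "wall", 5: "floor"}
static_px, dynamic_px = split_frame(labels)
print(static_px)   # → [0, 2, 4, 5]  (kept for ego-motion / static map)
print(dynamic_px)  # → [1, 3]        (routed to human tracking / reconstruction)
```

In the actual system a learned segmenter produces per-pixel masks rather than a dict, but the routing decision is the same: dynamic pixels never reach the feature-association stage, which is what keeps the ego-motion estimate robust.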



Acknowledgement

This work is supported by the National Natural Science Foundation of China (Grant No. 61473027), Beijing Key Laboratory of Robot Bionics and Function Research (Grant No. BZ0337) and Beijing Advanced Innovation Center for Intelligent Robots and Systems.

Author information

Corresponding author

Correspondence to Tianwei Zhang.


Copyright information

© 2021 CISM International Centre for Mechanical Sciences

About this paper


Cite this paper

Zhang, H., Zhang, T., Zhang, L. (2021). Model-Based Dynamic Human Tracking and Reconstruction During Dynamic SLAM. In: Venture, G., Solis, J., Takeda, Y., Konno, A. (eds) ROMANSY 23 - Robot Design, Dynamics and Control. ROMANSY 2020. CISM International Centre for Mechanical Sciences, vol 601. Springer, Cham. https://doi.org/10.1007/978-3-030-58380-4_2
