Abstract
Most existing Simultaneous Localization and Mapping (SLAM) solutions fail in dynamic environments because moving objects lead to incorrect feature associations. In this paper, we introduce a learning-based object-classification front end to recognize and remove dynamic objects, thereby ensuring the robustness of our ego-motion estimator in highly dynamic environments. The static background is used for static environment reconstruction, while the extracted dynamic human objects are used for human tracking and reconstruction. Experimental results show that the proposed approach provides not only accurate environment maps but also well-reconstructed moving humans.
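The pipeline described above removes dynamic objects before ego-motion estimation, so that only static-background pixels feed the tracker. The following is a minimal sketch of that masking step, assuming an instance-segmentation label image (e.g. from a network such as Mask R-CNN) is already available; the function name and interface are hypothetical, not the paper's implementation:

```python
import numpy as np

def mask_dynamic_pixels(depth, instance_mask, dynamic_ids):
    """Invalidate depth pixels that belong to dynamic object instances.

    depth: (H, W) depth image in meters.
    instance_mask: (H, W) integer instance-segmentation labels.
    dynamic_ids: collection of labels classified as dynamic (e.g. people).
    Returns a depth image where dynamic pixels are set to 0 (invalid),
    leaving only the static background for ego-motion estimation.
    """
    static = ~np.isin(instance_mask, list(dynamic_ids))
    return depth * static

# Toy example: a 3x3 depth map where instance label 1 is a detected person.
depth = np.full((3, 3), 2.0)
labels = np.array([[0, 1, 1],
                   [0, 1, 0],
                   [0, 0, 0]])
static_depth = mask_dynamic_pixels(depth, labels, {1})
# Pixels labeled 1 are zeroed; background pixels keep their depth of 2.0.
```

In a full system, the dynamic pixels removed here would be routed to the separate human tracking and reconstruction module rather than discarded.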
Acknowledgement
This work is supported by the National Natural Science Foundation of China (Grant No. 61473027), Beijing Key Laboratory of Robot Bionics and Function Research (Grant No. BZ0337) and Beijing Advanced Innovation Center for Intelligent Robots and Systems.
© 2021 CISM International Centre for Mechanical Sciences
Cite this paper
Zhang, H., Zhang, T., Zhang, L. (2021). Model-Based Dynamic Human Tracking and Reconstruction During Dynamic SLAM. In: Venture, G., Solis, J., Takeda, Y., Konno, A. (eds) ROMANSY 23 - Robot Design, Dynamics and Control. ROMANSY 2020. CISM International Centre for Mechanical Sciences, vol 601. Springer, Cham. https://doi.org/10.1007/978-3-030-58380-4_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58379-8
Online ISBN: 978-3-030-58380-4
eBook Packages: Intelligent Technologies and Robotics (R0)