3D reconstruction considering calculation time reduction for linear trajectory shooting and accuracy verification with simulator

  • Original Article
  • Published in Artificial Life and Robotics

Abstract

Photogrammetry is a technique for reconstructing three-dimensional (3D) models from images. When each image captures the features of the target object well, a highly accurate 3D reconstruction can be obtained in a shorter time from a small number of images. For this reason, images that are effective for 3D reconstruction must be selected. In this paper, to study this image selection, we generate test images by constructing a virtual environment and varying the shooting conditions in simulation. In particular, we focus on linear trajectory shooting, in which the camera captures images while moving along a straight rail. We verify 3D reconstruction that reduces calculation time through effective image selection. The experimental results show that, for linear trajectory shooting, camera pose estimation can be improved by using images obtained at multiple shooting angles. An additional experiment reveals that the calculation time of the reconstruction is reduced by using images selected at regular intervals.
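To make the setup concrete, the following is a minimal sketch, not the authors' code, of how test images might be rendered from a camera moving along a straight rail in a virtual environment using Blender's Python API (bpy), and how a regular-interval subset of the rendered frames might then be selected. The object name "Camera", the rail length, the tilt angles, and the selection interval k are all hypothetical placeholders and are not taken from the paper.

```python
# Sketch: render images along a linear trajectory in Blender, then keep every
# k-th frame as input for the 3D reconstruction pipeline. All numeric values
# and the camera name are hypothetical; assumes "Camera" is the active scene camera.
import math
import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]   # assumes a camera object with this name exists

num_stops = 30                     # shooting positions along the rail
rail_length = 4.0                  # hypothetical rail length in metres

for i in range(num_stops):
    # Linear trajectory: translate the camera along the X axis only.
    cam.location = (i * rail_length / (num_stops - 1), -3.0, 1.5)
    # Vary the shooting angle at each stop (fixed pitch, yaw cycled over three values).
    cam.rotation_euler = (math.radians(80), 0.0, math.radians(15 * ((i % 3) - 1)))
    scene.render.filepath = f"//frames/frame_{i:03d}.png"
    bpy.ops.render.render(write_still=True)

# Regular-interval image selection for the calculation-time experiment:
# keep every k-th rendered frame as input to SfM/MVS.
k = 3
selected = [f"frames/frame_{i:03d}.png" for i in range(0, num_stops, k)]
print(selected)
```

Feeding only the regularly spaced subset to the reconstruction pipeline reduces the number of images processed, which is the mechanism behind the calculation-time reduction reported in the abstract.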



Acknowledgements

This work was supported by the Nuclear Energy Science & Technology and Human Resource Development Project (through concentrating wisdom) from the Japan Atomic Energy Agency/Collaborative Laboratories for Advanced Decommissioning Science.

Author information

Corresponding author

Correspondence to Keita Nakamura.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was presented in part at the joint symposium of the 27th International Symposium on Artificial Life and Robotics, the 7th International Symposium on BioComplexity, and the 5th International Symposium on Swarm Behavior and Bio-Inspired Robotics (Online, January 25–27, 2022).

About this article


Cite this article

Nakamura, K., Hanari, T., Kawabata, K. et al. 3D reconstruction considering calculation time reduction for linear trajectory shooting and accuracy verification with simulator. Artif Life Robotics 28, 352–360 (2023). https://doi.org/10.1007/s10015-022-00835-x

