
SynPhoRest - A Procedural Generation Tool of Synthetic Photorealistic Forest Datasets

  • Conference paper
  • First Online in: Robot 2023: Sixth Iberian Robotics Conference (ROBOT 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 976)


Abstract

Deep Learning algorithms are advancing at an extremely rapid pace, with autonomous navigation systems, and autonomous vehicles in particular, at the forefront. Growing environmental awareness has directed efforts toward the development of autonomous systems for the maintenance and preservation of forested areas. Unlike urban areas, available datasets for these environments are scarce and incomplete. In addition, the complex and unstructured nature of forested areas and the tedious labeling process lead to a high rate of mislabeling. Given the success that synthetic data has had in model training, this work proposes an approach that helps overcome these limitations. The SynPhoRest simulator generates photorealistic synthetic data, in the form of RGB images, semantic segmentation maps, and depth maps, from procedurally generated virtual forest environments. The system supports both manual and automatic extraction and can generate 100 sets of data in about an hour.
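Procedural terrain of the kind that underlies such generators is commonly driven by Perlin-style lattice noise. The sketch below is a rough, hypothetical illustration of the idea only, not the authors' implementation (which is built inside a game engine): it produces a 2D heightmap from lattice value noise with bilinear interpolation and Perlin's quintic fade curve. All function names here are our own.

```python
import random


def fade(t):
    # Perlin's quintic fade curve 6t^5 - 15t^4 + 10t^3, which smooths
    # interpolation so lattice-cell boundaries are not visible
    return t * t * t * (t * (t * 6 - 15) + 10)


def value_noise_2d(width, height, cell=8, seed=0):
    """Return a width x height heightmap (list of rows of floats in [0, 1])
    built from random values on a coarse lattice, bilinearly interpolated
    with a quintic fade -- a simple Perlin-style value noise."""
    rng = random.Random(seed)
    # lattice of random values, one extra row/column so every pixel
    # has four surrounding corners to interpolate between
    gw, gh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    grid = []
    for y in range(height):
        gy, fy = divmod(y, cell)          # lattice row and offset within cell
        ty = fade(fy / cell)
        row = []
        for x in range(width):
            gx, fx = divmod(x, cell)      # lattice column and offset
            tx = fade(fx / cell)
            v00 = lattice[gy][gx]
            v10 = lattice[gy][gx + 1]
            v01 = lattice[gy + 1][gx]
            v11 = lattice[gy + 1][gx + 1]
            top = v00 + tx * (v10 - v00)  # interpolate along x, top edge
            bot = v01 + tx * (v11 - v01)  # interpolate along x, bottom edge
            row.append(top + ty * (bot - top))  # then along y
        grid.append(row)
    return grid
```

In practice, generators layer several octaves of such noise at different frequencies and amplitudes to get natural-looking relief; the heightmap then drives terrain meshing and vegetation placement.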


Notes

  1. AAA (triple-A): video games of high quality and production value, developed by established and reputable game development studios.


Acknowledgement

This publication was co-financed by Programa Operacional Regional do Centro, Portugal 2020, European Union FEDER Fund, Project: CENTRO-01-0247-FEDER-045931 (SAFEFOREST - Semi-Autonomous Robotic System for Forest Cleaning and Fire Prevention).


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bidault, R., Peixoto, P. (2024). SynPhoRest - A Procedural Generation Tool of Synthetic Photorealistic Forest Datasets. In: Marques, L., Santos, C., Lima, J.L., Tardioli, D., Ferre, M. (eds) Robot 2023: Sixth Iberian Robotics Conference. ROBOT 2023. Lecture Notes in Networks and Systems, vol 976. Springer, Cham. https://doi.org/10.1007/978-3-031-58676-7_6
