Synthetic Data Generation on Dynamic Industrial Environment for Object Detection, Tracking, and Segmentation CNNs

  • Conference paper
Technological Innovation for Connected Cyber Physical Spaces (DoCEIS 2023)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIP AICT, volume 678)


Abstract

The ability of Convolutional Neural Networks (CNNs) to learn from vast amounts of data and improve their accuracy over time makes them an attractive solution for many industrial problems. In the context of future assembly systems, such as Line-Less Mobile Assembly Systems, CNNs can monitor the networked system of mobile robots, human operators, and other movable objects that assemble products in flexible environment configurations. This paper explores the use of a simulated industrial environment to autonomously generate training data for object detection, tracking, and segmentation CNNs. The goal is to adapt state-of-the-art CNN solutions to specific industrial use cases, where annotating real data can be time-consuming and expensive. The developed algorithm efficiently generates new random image data, enabling accurate object detection, tracking, and segmentation in dynamic industrial scenarios. The results demonstrate the effectiveness of this approach in improving the testing of CNNs for industrial applications.
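The abstract describes autonomously generating annotated training data from a simulated environment. As a loose illustration only (not the authors' actual pipeline, which renders scenes in a simulator and derives labels from its ground truth), the following Python sketch samples random object placements for a frame and emits COCO-style bounding-box annotations; every name, class label, and parameter here is hypothetical.

```python
import json
import random

def generate_frame_annotations(frame_id, classes, img_w=640, img_h=480,
                               max_objects=5, rng=None):
    """Randomly place objects in a frame and emit COCO-style annotations.

    Schematic only: a real synthetic-data pipeline would render the scene
    and read boxes/masks from the simulator's ground truth rather than
    sampling them directly.
    """
    rng = rng or random.Random()
    annotations = []
    for obj_id in range(rng.randint(1, max_objects)):
        # Sample a box no larger than half the image, fully inside the frame.
        w = rng.randint(20, img_w // 2)
        h = rng.randint(20, img_h // 2)
        x = rng.randint(0, img_w - w)
        y = rng.randint(0, img_h - h)
        annotations.append({
            "image_id": frame_id,
            "id": frame_id * max_objects + obj_id,  # unique per frame/object
            "category_id": rng.randrange(len(classes)),
            "bbox": [x, y, w, h],  # COCO convention: [x, y, width, height]
            "area": w * h,
            "iscrowd": 0,
        })
    return annotations

classes = ["mobile_robot", "human_operator", "part_bin"]  # hypothetical labels
frame = generate_frame_annotations(0, classes, rng=random.Random(42))
print(json.dumps(frame[0], indent=2))
```

Randomizing placements per frame is the core of the domain-randomization idea the paper builds on: the network sees enough synthetic variation that it generalizes to real scenes.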



Acknowledgments

We would like to acknowledge CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for their financial support.


Corresponding author

Correspondence to Danilo G. Schneider.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

Schneider, D.G., Stemmer, M.R. (2023). Synthetic Data Generation on Dynamic Industrial Environment for Object Detection, Tracking, and Segmentation CNNs. In: Camarinha-Matos, L.M., Ferrada, F. (eds) Technological Innovation for Connected Cyber Physical Spaces. DoCEIS 2023. IFIP Advances in Information and Communication Technology, vol 678. Springer, Cham. https://doi.org/10.1007/978-3-031-36007-7_10

  • DOI: https://doi.org/10.1007/978-3-031-36007-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36006-0

  • Online ISBN: 978-3-031-36007-7

  • eBook Packages: Computer Science, Computer Science (R0)
