Abstract
Depth cameras have gained popularity in agricultural applications in recent years. They have been used mainly for the three-dimensional (3D) reconstruction of objects in indoor and outdoor scenes. Using such cameras to build 3D models, for simulation purposes, of the complex structures commonly met in nature, such as trees and other plants, remains a considerable challenge. Agricultural environments are notably complex, so the proper setup and deployment of these technologies is particularly important for obtaining usable data. The depth information collected with these cameras varies with object structure and sensing conditions owing to the uncertainty of the outdoor environment. A dedicated methodology combining color and depth images makes it possible to extract geometric characteristics from point clouds of the targeted objects. This chapter reviews the technologies underlying depth cameras and presents indicative application scenarios for agriculture in both indoor and outdoor environments. To this end, a 3D reconstruction of trees was carried out, producing point clouds from Red-Green-Blue-Depth (RGB-D) images acquired under real field conditions. The tree point cloud samples were collected using an unmanned ground vehicle (UGV) and imported into Gazebo to visualize a simulation of the environment. Such a simulation can be used for testing and evaluating the navigation of robotic systems. By further analyzing the resulting 3D point clouds, geometric measurements of the simulated samples, such as the volume and height of tree canopies, can be computed. Possible weaknesses of the procedure are mainly attributed to the camera's limitations and to the sampling parameters. Nevertheless, the results show that a suitable simulation environment can be established and applied to several agricultural tasks employing automated unmanned robotic platforms.
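To make the described pipeline concrete, below is a minimal sketch of the point-cloud processing steps (RGB-D back-projection, outlier filtering, and canopy height/volume estimation), written with the open-source Open3D and SciPy libraries. The file names, camera intrinsics, depth scale, and filtering parameters are illustrative assumptions rather than values from the chapter; real intrinsics come from the calibration of the specific RGB-D sensor used.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

# Load a registered color/depth image pair (hypothetical file names).
color = o3d.io.read_image("tree_color.png")
depth = o3d.io.read_image("tree_depth.png")

# Fuse them into an RGBD image; depth_scale/depth_trunc depend on the sensor
# (here: depth stored in millimetres, points beyond 5 m discarded).
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=5.0,
    convert_rgb_to_intensity=False)

# Example pinhole intrinsics; use the actual calibration of the camera.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

# Back-project the RGBD image into a colored 3D point cloud.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

# Suppress sparse depth-noise outliers, common at canopy edges outdoors.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

pts = np.asarray(pcd.points)

# Canopy height: extent along the vertical axis (y in this camera frame;
# adjust to the vehicle's mounting and coordinate convention).
height = pts[:, 1].max() - pts[:, 1].min()

# Canopy volume: volume of the convex hull of the points, a simple
# upper-bound proxy for the true canopy volume.
volume = ConvexHull(pts).volume

print(f"canopy height ~ {height:.2f} m, hull volume ~ {volume:.2f} m^3")
```

For the simulation step, a cloud like this can be meshed (for instance via `pcd.compute_convex_hull()` or Poisson surface reconstruction) and saved with `o3d.io.write_triangle_mesh()` to a format such as .stl or .obj, which a Gazebo model's SDF description can then reference as its visual and collision geometry.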
Cite this chapter
Tagarakis, A. C., Kalaitzidis, D., Filippou, E., Benos, L., & Bochtis, D. (2022). 3D Scenery Construction of Agricultural Environments for Robotics Awareness. In D. D. Bochtis, C. G. Sørensen, S. Fountas, V. Moysiadis, & P. M. Pardalos (Eds.), Information and Communication Technologies for Agriculture—Theme III: Decision (Springer Optimization and Its Applications, Vol. 184). Springer, Cham. https://doi.org/10.1007/978-3-030-84152-2_6
Print ISBN: 978-3-030-84151-5
Online ISBN: 978-3-030-84152-2