
Deep Learning Embedded into Smart Traps for Fruit Insect Pests Detection

Published: 09 November 2022

Abstract

This article presents a novel approach to identifying two species of fruit insect pests as part of a network of intelligent traps designed to monitor the population of these insects in a plantation. The proposed approach uses a simple Digital Image Processing technique to detect regions in the image that are likely to contain the monitored pests and an Artificial Neural Network to classify those regions given their characteristics. The identification is performed essentially by a Convolutional Neural Network (CNN), which learns the characteristics of the insects from images of the adhesive floor inside a trap. We trained several CNN architectures, with different configurations, on a data set of images collected in the field, aiming to find the model with the highest precision and the lowest classification time. The best classification performance was achieved by ResNet18, with a precision of 93.55% and 91.28% for the two pests targeted in this study, Ceratitis capitata and Grapholita molesta, respectively, and an overall accuracy of 90.72%. Since the classifier must be embedded in a resource-constrained system inside the trap, we also explored the SqueezeNet, MobileNet, and MNASNet architectures in search of a model with lower inference time and only small losses in accuracy compared to the models we assessed. We also quantized our highest-precision model to further reduce inference time on embedded systems; the quantized model achieved a precision of 88.76% and 89.73% for C. capitata and G. molesta, respectively, at the cost of a drop of roughly 2% in overall accuracy. According to the expertise of our partner company, these results are worthwhile for a real-world application, since human laborers achieve a precision of about 85%.
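As a hedged illustration of the two-stage pipeline described above (a simple Digital Image Processing step proposes candidate regions, then a CNN classifies each one), the sketch below combines OpenCV thresholding with a torchvision ResNet18. The class list, the minimum blob area, and the untrained weights are assumptions made for the example; the authors' trained model and exact segmentation procedure are not reproduced here.

```python
# Illustrative sketch only: candidate regions from simple thresholding,
# then a CNN classifier on each crop. Names below are assumptions.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

CLASSES = ["other", "c_capitata", "g_molesta"]  # assumed label layout

# ResNet18 with a 3-way head stands in for the paper's best model;
# weights=None means this toy model is untrained.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def detect_and_classify(image_path):
    """Threshold the adhesive-floor image, extract insect-sized blobs,
    and classify each cropped blob with the CNN."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Dark insects on a light adhesive floor: inverse Otsu threshold.
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < 100:  # assumed minimum area; tune for the trap camera
            continue
        crop = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(preprocess(crop).unsqueeze(0))
        results.append(((x, y, w, h), CLASSES[int(logits.argmax(1))]))
    return results
```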
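For the quantization step, a minimal post-training static quantization sketch using PyTorch's FX graph-mode API is given below, assuming the QNNPACK backend commonly used on ARM-class embedded devices. The random calibration batches are a stand-in for real trap-image crops, and this is not necessarily the authors' exact quantization recipe.

```python
# Hedged sketch of post-training static quantization in PyTorch (FX mode).
import torch
import torchvision.models as models
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# QNNPACK targets ARM-class CPUs such as those inside an embedded trap;
# availability depends on the PyTorch build.
torch.backends.quantized.engine = "qnnpack"

model = models.resnet18(weights=None)  # untrained stand-in for the best model
model.eval()

example = torch.randn(1, 3, 224, 224)
prepared = prepare_fx(model,
                      get_default_qconfig_mapping("qnnpack"),
                      example_inputs=(example,))

# Calibration: random tensors stand in for representative trap-image crops.
calibration_batches = [torch.randn(8, 3, 224, 224) for _ in range(4)]
with torch.no_grad():
    for batch in calibration_batches:
        prepared(batch)

quantized = convert_fx(prepared)  # int8 weights/activations
with torch.no_grad():
    _ = quantized(example)        # quantized inference runs on CPU
```

Static int8 quantization of this kind typically shrinks the model by about 4x and speeds up CPU inference, which is consistent with the small accuracy trade-off reported in the abstract.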



Published in

ACM Transactions on Intelligent Systems and Technology, Volume 14, Issue 1
February 2023
487 pages
ISSN: 2157-6904
EISSN: 2157-6912
DOI: 10.1145/3570136
Editor: Huan Liu

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 9 November 2022
      • Online AM: 30 July 2022
      • Accepted: 19 July 2022
      • Revised: 8 April 2022
      • Received: 23 July 2021
Published in TIST Volume 14, Issue 1


      Qualifiers

      • research-article
      • Refereed
