MAM-IncNet: an end-to-end deep learning detector for Camellia pest recognition

Published in: Multimedia Tools and Applications

Abstract

Camellia oil is among the healthiest edible oils in the world; it helps lower blood pressure, reduce blood lipids, and soften blood vessels. However, the Camellia oleifera plant is easily infected by various pests and diseases during growth, which limits the yield of Camellia oil. An intelligent information tool that automatically detects Camellia oleifera pests is therefore of great importance. Recent deep learning (DL)-based methods have shown promising performance in plant pest detection. To date, however, DL methods have rarely been applied to Camellia pests, and the few existing studies rely on public datasets. The main obstacles to applying DL models in Camellia pest detection are: the large number of training samples required, which is difficult to collect; the complicated backgrounds of the experimental images, which make it hard to train an efficient model; and recognition accuracy that is too low for practical scenarios. This study therefore proposes a novel network architecture, MAM-IncNet, to address these challenges. Following the cascaded structure of the single shot multibox detector (SSD), we replace the early convolutional layers of SSD with optimized Inception modules (M-Inception) and use a pre-trained VGG16 as the backbone network. Further, a hybrid attention mechanism combining channel-wise and spatial attention is incorporated into the network to maximize the reuse of inter-channel relationships and spatial point characteristics. The proposed method attains a recall rate of 81.44% for detecting Camellia oleifera pests in practical field scenarios. Experimental findings demonstrate the efficacy and feasibility of the proposed method for the detection of Camellia oleifera insect pests.
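The hybrid channel-wise plus spatial attention described in the abstract follows the general CBAM pattern (channel gating via pooled descriptors and a shared MLP, then spatial gating via channel-pooled maps). Below is a minimal NumPy sketch of that pattern, not the authors' implementation: the function names, the reduction ratio `r`, and the simplified spatial gate (an elementwise sigmoid standing in for CBAM's 7x7 convolution) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). Squeeze spatial dims with avg- and max-pooling,
    # pass both descriptors through a shared two-layer MLP, sum the
    # results, and gate each channel with a sigmoid weight.
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                   + w2 @ np.maximum(w1 @ mx, 0.0))  # (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    # Collapse the channel dim with avg- and max-pooling, combine the
    # two maps, and gate every spatial position.
    avg = x.mean(axis=0)                            # (H, W)
    mx = x.max(axis=0)                              # (H, W)
    gate = sigmoid(avg + mx)                        # stand-in for a 7x7 conv
    return x * gate[None, :, :]

# Toy feature map: 4 channels, 8x8 spatial grid.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
r = 2                                               # channel reduction ratio
w1 = rng.standard_normal((4 // r, 4)) * 0.1         # MLP squeeze weights
w2 = rng.standard_normal((4, 4 // r)) * 0.1         # MLP excite weights

# Channel attention first, then spatial attention, as in CBAM.
y = spatial_attention(channel_attention(x, w1, w2))
print(y.shape)  # (4, 8, 8)
```

Because both gates lie in (0, 1), the module rescales features without changing their shape, so it can be dropped between existing convolutional stages of a detector.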


Data availability

Data will be made available on reasonable request.


Acknowledgements

This work was partially supported by the Hunan Provincial Natural Science Research Fund under the project "Automatic identification and diagnosis of plant diseases and pests on mobile phones using big data techniques". The authors are grateful to the editors and anonymous reviewers for their constructive suggestions.

Author information

Corresponding author

Correspondence to Junde Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chen, J., Chen, W., Nanehkaran, Y.A. et al. MAM-IncNet: an end-to-end deep learning detector for Camellia pest recognition. Multimed Tools Appl 83, 31379–31394 (2024). https://doi.org/10.1007/s11042-023-16680-4

