Article

Detection of Tomato Leaf Miner Using Deep Neural Network

Department of Human Intelligence Robot Engineering, Sangmyung University, Cheonan-si 31066, Republic of Korea
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(24), 9959; https://doi.org/10.3390/s22249959
Submission received: 18 October 2022 / Revised: 13 December 2022 / Accepted: 15 December 2022 / Published: 17 December 2022
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture)

Abstract

As a result of climate change and global warming, plant diseases and pests are drawing attention because they are dispersing more quickly than ever before. The tomato leaf miner destroys the growth structure of the tomato, resulting in 80 to 100 percent tomato loss. Despite extensive efforts to prevent its spread, the tomato leaf miner can be found on most continents. To protect tomatoes from the tomato leaf miner, inspections must be performed regularly throughout the tomato life cycle. To find a better deep neural network (DNN) approach for detecting the tomato leaf miner, we investigated two DNN models, one for classification and one for segmentation. Both models were trained on the same RGB images of tomato leaves captured at real-world agricultural sites, and their performance was compared using precision, recall, and F1-score. The segmentation model outperformed the classification model in diagnosing the tomato leaf miner, with higher values for all three metrics. Furthermore, the segmentation model produced no false negative predictions, indicating that it is well suited to detecting plant diseases and pests.

1. Introduction

Protecting crops from plant disease is a problem intertwined with agriculture and climate change [1]. Climate change caused by global warming alters host resistance and pathogenic rates and affects the physiological interaction between hosts and pathogens [2]. The plant disease problem has worsened as various plant diseases spread around the world faster than ever before. The possibility of plant diseases emerging in previously unaffected regions has increased, and those regions often lack the local expertise to treat the new diseases [3].
The tomato leaf miner causes crop losses of around 80 to 100% where it is present. By invading leaves, stems, flowers, and fruit, it destroys the growth structure of the tomato. Its spread is extremely difficult to prevent: despite significant containment efforts, the tomato leaf miner took only about three years to spread across southern Europe after it was first identified in Spain. It is now found in most South American countries, southern Europe, northern Africa, and western Asia [4].
Leaves, the most vulnerable part of a plant, are where disease symptoms first appear [5]. From the very beginning of their life cycle until they are ready to be harvested, crops must be inspected in a timely manner to protect them against plant diseases. Conventionally, agricultural specialists monitored fields using the time-consuming approach of naked-eye surveillance to check plant leaves for symptoms of disease [6].
In agriculture, computer vision tools have largely supplanted the naked eye for identifying plant diseases and pests. These tools have traditionally employed conventional image processing algorithms, which require handcrafted feature design along with classifiers to detect plant diseases and pests. They improve detection performance by designing imaging schemes and selecting appropriate light sources and shooting angles based on the characteristics of the targeted diseases and pests.
Although handcrafted imaging schemes help computer vision tools detect plant diseases and pests, they also increase application cost. Furthermore, plant diseases and pests are difficult to identify with handcrafted computer vision tools in a complex natural environment, because traditional tools can hardly be expected to fully exclude the influence of low contrast, large variations in scale, image noise, and disturbances under natural light [7].
The capacity to use raw data directly, without a handcrafted feature extractor, is a significant advantage of deep neural network (DNN) models [8]. DNN models, especially those based on convolutional neural networks (CNNs), have shown success in recent years across a variety of computer vision applications, such as traffic detection, medical image recognition, scene text detection, and face recognition [9,10,11,12].
Several DNN-based approaches for detecting plant diseases and pests have been studied using leaf images. The DNN-based approaches can be further separated into a classification method, a detection method, and a segmentation method according to the types of output [13]. The classification method produces the types of plant diseases and pests [14,15,16,17]. The detection method provides the location, as well as the types of plant diseases and pests [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. The segmentation method yields the types of plant diseases and pests, as well as pixel information, such as location and geometric properties [33,34,35,36].
In this study, DNN-based approaches for classification and segmentation were used to diagnose the tomato leaf miner. The two DNN models were trained using the same RGB images of tomato leaves captured at real-world agricultural sites, and their diagnosis performance was evaluated and compared. Based on this performance, one of the two DNN models was recommended for tomato leaf miner detection.

2. Materials and Methods

2.1. Dataset Description

AI Hub is operated by the National Information Society Agency of the Republic of Korea to accelerate the advancement of artificial intelligence technology and its application. Various datasets related to natural language, healthcare, autonomous driving, agriculture, livestock, education, and so forth have been released on AI Hub.
The Agricultural Knowledge Base (AKB) dataset [37], one of the agricultural datasets released on AI Hub, was organized by I IMC corporation in 2018. The AKB dataset contains a total of 40,704 RGB images of rose leaves and tomato leaves taken in the laboratory and at real-world agricultural sites. All leaf images in the AKB dataset are labeled as normal or with the type of disease. The rose and tomato leaves are labeled with 11 and 17 classes, respectively, including normal leaves.
We processed the AKB dataset in two different ways to train and evaluate two types of deep neural networks (DNNs) applicable to real-world agricultural sites. Images and labels of normal and mined tomato leaves collected at real-world agricultural sites were selected from the AKB dataset: 3115 pairs of images and labels of normal tomato leaves and 3341 pairs of mined tomato leaves. The selected images came in various sizes and were all resized to 300 by 300 pixels. The selected and resized dataset (DRN152) was used to train and evaluate a DNN model for image classification and was split into training, validation, and test sets at a ratio of 60%, 20%, and 20%, respectively.
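The resizing and three-way split can be reproduced with a short script. The sketch below is illustrative only: the folder names, file pattern, and random seed are assumptions, not part of the AKB dataset specification.

```python
import glob
import cv2
from sklearn.model_selection import train_test_split

# Hypothetical layout: one folder per class holding the selected AKB images.
paths, labels = [], []
for label, folder in enumerate(["akb/normal", "akb/leaf_miner"]):  # 0 = normal, 1 = infected
    for path in glob.glob(folder + "/*.jpg"):
        paths.append(path)
        labels.append(label)

# Resize every image to 300 by 300 pixels, as described above.
images = [cv2.resize(cv2.imread(p), (300, 300)) for p in paths]

# 60/20/20 split: hold out 40% first, then split the held-out part in half.
x_train, x_rest, y_train, y_rest = train_test_split(
    images, labels, test_size=0.4, stratify=labels, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)
```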
To generate binary mask images, the regions of tomato leaf infected by the tomato leaf miner were manually segmented as polygons from the resized images in the AKB dataset. The binary mask images are 300 by 300 pixels, the same size as the resized images. Pixels where the tomato leaf miner occurred were set to one, and all other pixels were set to zero, turning each annotated image into a binary mask image.
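A compact way to rasterize such polygon annotations is sketched below; the annotation format (a list of (x, y) vertices per infected region) is an assumption about how the polygons are stored, not taken from the AKB documentation.

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(polygon, size=(300, 300)):
    """Rasterize one human-annotated polygon into a 0/1 binary mask."""
    mask = Image.new("L", size, 0)                 # background pixels -> 0
    ImageDraw.Draw(mask).polygon(polygon, fill=1)  # infected region  -> 1
    return np.array(mask, dtype=np.uint8)

# Example with a hypothetical triangular lesion region.
mask = polygon_to_mask([(50, 60), (120, 80), (90, 150)])
assert mask.shape == (300, 300) and set(np.unique(mask)) <= {0, 1}
```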
Pairs of the resized image and the binary mask image (DMRCNN) were used to train and evaluate a DNN model for object segmentation. The DMRCNN was likewise split into training, validation, and test sets at a ratio of 60%, 20%, and 20%, respectively.

2.2. Deep Neural Network for Tomato Leaf Miner Classification

Transfer learning is a machine learning method that reuses knowledge previously learned on a different but related problem and applies it to solve a new problem.
ResNet [38] was developed using a residual learning framework with shortcut connections to address the degradation of DNN performance when depth exceeds a certain number of layers. ResNet152, one of the ResNet architectures, was pre-trained on the ImageNet dataset, which contains over 14 million images and 1000 different labels.
It was assumed that the classification task for DRN152 is relevant to that of the ImageNet dataset, so ResNet152 was used for transfer learning to classify DRN152 into a binary class: the normal tomato leaf and the tomato leaf infected by the tomato leaf miner. Figure 1 shows the DNN structure developed for processing DRN152 using transfer learning with ResNet152 (DNNRN152). In Figure 1, green, blue, grey, and orange boxes denote the pooling layer, convolutional layer, residual module, and fully connected layer, respectively, and red lines indicate shortcut connections for residual learning. DNNRN152 processes images in two stages: a feature extractor and a classifier. Layers conv 1 through conv 5 in Figure 1 form the feature extractor, which extracts feature maps from the images; the fully connected layers (FCLs) form the classifier, which makes a prediction from the feature maps.
Layers of the DNNRN152 from conv 1 to conv 5 reused the structure and weights of the ResNet152 trained on the ImageNet dataset. The structure of the FCLs was determined by finding optimal hyperparameters using Bayesian optimization (BO). BO is a strategy for finding a set of hyperparameters from a hyperparameter space that optimizes an objective function which requires a large amount of computational power and is therefore expensive to evaluate. Table 1 shows the hyperparameter space explored to determine the structure of the FCLs and the training process. The objective function of the BO was set to the F1-score on the validation dataset, which indicates classification performance (see Equation (3) for more detail). The hyperparameter space was iteratively explored by the BO method to find the set of hyperparameters that maximizes the F1-score.
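A search of this kind can be sketched with KerasTuner's Bayesian optimizer. The sketch below is a minimal illustration of the space in Table 1, not the authors' code: it fixes the batch size, tracks validation accuracy instead of the F1-score used in the paper (a custom F1 metric would need to be registered for that), and omits image preprocessing.

```python
import tensorflow as tf
import keras_tuner as kt

def build_classifier(hp):
    # Frozen ResNet152 feature extractor pre-trained on ImageNet.
    base = tf.keras.applications.ResNet152(
        include_top=False, weights="imagenet",
        input_shape=(300, 300, 3), pooling="avg")
    base.trainable = False

    # FCL head sampled from the hyperparameter space in Table 1.
    x = base.output
    for i in range(hp.Int("num_layers", 0, 4)):
        x = tf.keras.layers.Dense(
            2 ** hp.Int(f"neurons_exp_{i}", 4, 8), activation="relu")(x)
        x = tf.keras.layers.Dropout(
            hp.Float(f"dropout_{i}", 0.1, 0.5, step=0.1))(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=hp.Choice("optimizer", ["sgd", "adam", "rmsprop"]),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# x_train, y_train, x_val, y_val: NumPy arrays of the resized images and labels.
tuner = kt.BayesianOptimization(build_classifier,
                                objective="val_accuracy", max_trials=25)
tuner.search(x_train, y_train, validation_data=(x_val, y_val),
             batch_size=64, epochs=10)
```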
As a result of the BO, the number of layers, the number of neurons, and the dropout rate in each layer of the FCLs were determined as 1, 64, and 0.2, respectively. The rectified linear unit and the softmax were used as the activation functions in the hidden layer and the output layer, respectively. The output layer contained one neuron to handle the binary classification. For the learning process, the batch size, the optimizer, and the learning rate were set to 64, SGD, and 0.001, respectively.
The DNNRN152 was trained in two phases using the training dataset of the DRN152. In the first phase, the weights of the feature extractor were frozen, and only the weights of the classifier were trained. In the second phase, all the weights of the DNNRN152, in both the feature extractor and the classifier, were trained.
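With the BO-selected configuration (one hidden layer of 64 neurons, dropout 0.2, SGD, learning rate 0.001, batch size 64), the two training phases could look as follows. The epoch counts are placeholders (the paper does not report them), and a sigmoid output unit is used here as the usual single-neuron choice for binary classification.

```python
import tensorflow as tf

# Final DNN_RN152 head from the Bayesian optimization result.
base = tf.keras.applications.ResNet152(
    include_top=False, weights="imagenet",
    input_shape=(300, 300, 3), pooling="avg")
x = tf.keras.layers.Dense(64, activation="relu")(base.output)
x = tf.keras.layers.Dropout(0.2)(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(base.input, out)

# Phase 1: freeze the feature extractor, train only the classifier head.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=64, epochs=10)  # epoch count assumed

# Phase 2: unfreeze all weights and fine-tune the whole network.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=64, epochs=10)  # epoch count assumed
```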

2.3. Deep Neural Network for Tomato Leaf Miner Segmentation

A DNN model for segmentation (DNNMRCNN) was trained in a transfer learning manner using Mask R-CNN [39], a type of region-based convolutional neural network. The Mask R-CNN was pretrained on the COCO dataset and implemented using Matterport’s library [40] in the TensorFlow environment. The DNNMRCNN was developed to segment and classify regions infected by tomato leaf miner from a leaf image by training the Mask R-CNN using DMRCNN.
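With Matterport's library, the training setup can be sketched as below. The batch size of 2, the learning rate of 0.001, and the 5-epoch budget come from this section (the epoch choice anticipates the learning-curve analysis later on); the image dimensions, file paths, and dataset wrappers are assumptions.

```python
from mrcnn.config import Config
from mrcnn import model as modellib

class LeafMinerConfig(Config):
    NAME = "tomato_leaf_miner"
    NUM_CLASSES = 1 + 1        # background + leaf-miner lesion
    GPU_COUNT = 1
    IMAGES_PER_GPU = 2         # batch size of 2, as reported in Section 2.3
    LEARNING_RATE = 0.001
    IMAGE_MIN_DIM = 256        # assumed resize bounds for the 300 x 300 inputs
    IMAGE_MAX_DIM = 320        # must be divisible by 64 in this library

config = LeafMinerConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")

# Start from COCO weights; skip the heads whose shapes depend on NUM_CLASSES.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# dataset_train / dataset_val: mrcnn.utils.Dataset subclasses built from DMRCNN.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE, epochs=5, layers="all")
```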
The DNNMRCNN processed a leaf image as shown in Figure 2 and yielded class, confidence, bounding box, and binary mask features for segmented pixels.
The feature maps of the input leaf image were extracted using ResNet101, denoted by (a) in Figure 2. In Figure 2b, a region proposal network [41] generated anchor boxes in regions expected to contain plant diseases. In Figure 2c, the predicted anchor boxes and the feature maps were processed into fixed-size feature maps using the region of interest pooling method [42]. The output of the region of interest pooling was fed to two types of DNNs: an FCL and feature pyramid networks (FPN) [43]. The FCL, in Figure 2d, classified the anchor boxes as tomato leaf miner or background and predicted the positions of the bounding boxes for the tomato leaf miner. The FPN, in Figure 2e, generated a binary mask image with a value of 1 for the regions infected by the tomato leaf miner and a value of 0 for the remaining regions.
The DNNMRCNN was trained using the DMRCNN. The batch size, optimizer, and learning rate were set to 2, SGD, and 0.001, respectively, during the learning process. During the learning process of Mask R-CNN, five types of losses are computed. The MRCNN class loss is one of the five and represents how successfully the Mask R-CNN classifies the detected object in the image; the loss refers to the aggregate of all five losses. The loss and MRCNN class loss on the training dataset are shown in Figure 3a,b, respectively, and those on the validation dataset in Figure 3c,d. Even after 25 epochs, the loss and MRCNN class loss on the training dataset continued to decrease, whereas those on the validation dataset stopped improving after 5 epochs and then deteriorated. The Mask R-CNN was therefore trained for 5 epochs, at which point the loss and MRCNN class loss on the validation dataset were lowest.
When regions with a value of 1 appeared in the binary mask image predicted for the input leaf image, tomato leaf miner-infected regions were detected, and the input leaf image was classified as a tomato leaf miner infection. Otherwise, the input leaf image was classified as normal.
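This classification rule amounts to checking whether the predicted mask tensor contains any instance. A sketch, continuing the Matterport-based setup above (the checkpoint path is hypothetical; Matterport names files mask_rcnn_<name>_<epoch>.h5 inside a run directory):

```python
class InferenceConfig(LeafMinerConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1   # detect one image at a time

infer_model = modellib.MaskRCNN(mode="inference",
                                config=InferenceConfig(), model_dir="logs")
infer_model.load_weights("logs/mask_rcnn_tomato_leaf_miner_0005.h5", by_name=True)

result = infer_model.detect([image])[0]       # image: an RGB leaf image array
is_infected = result["masks"].shape[-1] > 0   # any predicted region -> infected
```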

2.4. Performance Evaluation Metrics for Developed Deep Neural Networks

The performance of the two developed DNNs for the tomato leaf miner was evaluated using the confusion matrix and three metrics: precision, recall, and F1-score. The DNN for tomato leaf miner segmentation was additionally evaluated using intersection over union (IoU).
In the classification problem, the confusion matrix compares the prediction results of the DNN with the target values and presents the comparison in matrix form. There are four types of comparison results: true positive (TP), false positive (FP), false negative (FN), and true negative (TN). When regions infected by the tomato leaf miner are actually present in the leaf image, a TP is the case where the DNN predicts that the image contains infected regions, and an FN is the case where the DNN predicts that it does not. When all the leaves in the image are normal and contain no infected region, an FP is the case where the DNN predicts that the image contains infected regions, and a TN is the case where the DNN predicts that it does not. In other words, TP and TN are the cases in which the DNN prediction and the target value match; FP and FN are the cases in which they do not.
The precision is the rate at which the leaf image actually contains infected regions when the DNN predicts a tomato leaf miner infection. The precision was calculated by Equation (1) using the numbers of TP and FP cases, denoted by n(TP) and n(FP), respectively.
$$\text{Precision} = \frac{n(\text{TP})}{n(\text{TP}) + n(\text{FP})} \tag{1}$$
The recall is the probability that the DNN prediction and the target value match when the leaf image actually contains infected regions. The recall was calculated by Equation (2) using the numbers of TP and FN cases, denoted by n(TP) and n(FN), respectively.
$$\text{Recall} = \frac{n(\text{TP})}{n(\text{TP}) + n(\text{FN})} \tag{2}$$
The precision and the recall have an inverse relationship. The F1-score is the harmonic mean of the precision and the recall, and it is used to reflect both the precision and the recall in the DNN performance evaluation for classification. The F1-score was calculated by Equation (3) using the calculation results of the precision and the recall denoted by cal(Precision) and cal(Recall), respectively.
$$\text{F1-score} = \frac{2 \cdot \text{cal}(\text{Precision}) \cdot \text{cal}(\text{Recall})}{\text{cal}(\text{Precision}) + \text{cal}(\text{Recall})} \tag{3}$$
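The three metrics follow directly from the confusion-matrix counts, as in this small helper; the example numbers are the DNNMRCNN counts reported later in Section 3.1.

```python
def precision_recall_f1(n_tp, n_fp, n_fn):
    precision = n_tp / (n_tp + n_fp)                    # Equation (1)
    recall = n_tp / (n_tp + n_fn)                       # Equation (2)
    f1 = 2 * precision * recall / (precision + recall)  # Equation (3)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(665, 29, 0)  # p ~ 0.958, r = 1.0, f1 ~ 0.979
```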
The IoU represents the percentage of matches between the human-segmented binary mask image and the DNN-predicted binary mask image. To calculate the IoU in the plant leaf images, an overlapping area and a union area between the plant disease regions segmented by the human and the plant disease regions predicted by the DNN are required. The human-segmented and DNN-predicted plant disease regions were polygonal, making it difficult to calculate their numerical area. The number of pixels in the overlapping area and the union area between the human-segmented and DNN-predicted plant disease regions were counted to substitute the area calculation.
Both the human-generated and the DNN-generated binary mask images were converted into two-dimensional matrices of integers 0 and 1 for counting the pixels in the overlapping and union areas. To count the union pixels, the two matrices were summed, and elements with a value of one or two in the summed matrix were counted as union-area pixels. To count the overlapping pixels, the Hadamard product of the two matrices was computed, and elements with a value of one in the product were counted as overlapping-area pixels. The IoU was calculated by Equation (4) using the counted numbers of pixels in the overlapping area and the union area, denoted by n(AO) and n(AU), respectively.
$$\text{IoU} = \frac{n(A_O)}{n(A_U)} \tag{4}$$
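The pixel-counting procedure described above maps one-to-one onto array operations; a sketch, assuming both masks are 300 by 300 NumPy arrays of zeros and ones:

```python
import numpy as np

def iou(mask_human, mask_dnn):
    summed = mask_human + mask_dnn              # element-wise sum of the two matrices
    n_union = np.count_nonzero(summed >= 1)     # values of one or two: union pixels
    product = mask_human * mask_dnn             # Hadamard product
    n_overlap = np.count_nonzero(product == 1)  # values of one: overlapping pixels
    return n_overlap / n_union if n_union else 0.0  # Equation (4)
```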

3. Results

The performance of the DNNRN152 and the DNNMRCNN was evaluated using the test dataset. For both models, the confusion matrix, precision, recall, and F1-score were calculated; the IoU was calculated only for the DNNMRCNN.

3.1. Confusion Matrix

The DNNRN152 and the DNNMRCNN were both evaluated using the test dataset, which included 623 normal leaf images and 665 leaf images with infected regions of the tomato leaf miner. The DNNRN152 directly classified the input tomato leaf images as normal or infected with tomato leaf miner. The DNNMRCNN, on the other hand, segmented the regions infected by the tomato leaf miner, and the segmentation results were used to classify the input tomato leaf images, as explained in Section 2.3.
Two confusion matrices in Figure 4a,b describe the classification results of the DNNRN152 and the DNNMRCNN, respectively. As shown in Figure 4a, the DNNRN152 classified the 665 leaf images with regions infected by the tomato leaf miner into 560 infected leaf images and 105 normal leaf images, and classified the 623 normal leaf images into 89 normal leaf images and 534 infected leaf images. In Figure 4b, the DNNMRCNN correctly classified all 665 infected leaf images as infected, and classified the 623 normal leaf images into 594 normal leaf images and 29 infected leaf images. With the DNNRN152, the true and DNN-predicted classes matched for 649 of the 1288 images, or about 50.4%; with the DNNMRCNN, they matched for 1259 of the 1288 images, or 97.7%.
The DNNMRCNN produced fewer FN and FP cases than the DNNRN152. It is important to note that the DNNMRCNN produced no FN cases at all, which suggests that the DNNMRCNN is more suitable than the DNNRN152 for actual application in plant disease and pest detection.

3.2. Precision, Recall, and F1-Score

The precision, recall, and F1-score were calculated from the confusion matrices. Figure 5 compares the precision, recall, and F1-score of the DNNRN152 and the DNNMRCNN; red and blue bars denote the evaluation results of the DNNRN152 and the DNNMRCNN, respectively. All three metrics were higher for the DNNMRCNN than for the DNNRN152, indicating that the DNNMRCNN outperforms the DNNRN152 in diagnosing the tomato leaf miner.

3.3. Intersection over Union

The IoU was calculated by comparing the human-segmented and DNN-predicted binary mask images; Figure 6 shows the results. In Figure 6b, the two binary mask images are superimposed: the human-segmented plant disease region is shown in yellow, and the DNN-predicted region in translucent yellow. The overlapped images are arranged by IoU value.
The bar graph in Figure 6a shows the number of test images whose IoU value falls within each range. The horizontal axis divides the IoU range, from a minimum of 0 to a maximum of 1, into 0.1 intervals; the vertical axis indicates the number of DNNMRCNN predictions whose IoU value falls in each interval. The minimum and maximum IoU values for the DNNMRCNN predictions on the test dataset were 0.05 and 1.0, respectively, and the mean IoU was 0.59 with a variance of 0.03.
When the IoU value is 0.6 or higher in Figure 6b, the human-segmented and DNN-predicted binary mask images are nearly identical. As shown in Figure 6a, approximately half of the predictions for the 665 images containing leaves infected by the tomato leaf miner have an IoU of 0.6 or higher, and all but 50 predictions have an IoU of 0.4 or higher. These results suggest that the DNNMRCNN is capable of precisely locating lesions.

4. Conclusions

In this study, we developed two DNN models to diagnose tomato leaves infected by the tomato leaf miner. DNNRN152, one of the developed models, employed the well-known convolutional neural network structure ResNet152, using the same feature extractor as ResNet152 together with a customized classifier. Taking a tomato leaf image as input, DNNRN152 directly classified the image as a normal leaf or a leaf infected by the tomato leaf miner. Mask R-CNN was used to develop the other DNN model, DNNMRCNN, which segmented the regions infected by the tomato leaf miner from the tomato leaf image; the segmentation results were then used to classify the image as a normal leaf or a leaf infected by the tomato leaf miner.
The same tomato leaf images captured from real-world agricultural sites were used to train and evaluate both DNN models. The human-segmented binary mask images were additionally provided to the DNNMRCNN for the training process.
As a preliminary study, we compared the performance of DNN models for classification (DNNRN152) and segmentation (DNNMRCNN) to determine which is better for detecting a single plant disease in a single crop from images captured at real-world agricultural sites. Precision, recall, and F1-score were used to assess the performance of the two developed models, and the DNNMRCNN outperformed the DNNRN152 on all criteria in diagnosing the tomato leaf miner. The IoU was additionally calculated to assess the segmentation performance of the DNNMRCNN; the results showed that, for the majority of the test dataset, the DNNMRCNN precisely segmented the regions infected by the tomato leaf miner from the input image.
In future work, we intend to train the DNN model using Mask R-CNN to detect multiple plant diseases and pests that occur in various crops at real-world agricultural sites.

Author Contributions

Conceptualization, S.J. (Seongkyun Jeong) and J.B.; methodology, S.J. (Seongho Jeong) and J.B.; software, S.J. (Seongho Jeong) and J.B.; validation, S.J. (Seongho Jeong) and J.B.; data curation, S.J. (Seongho Jeong); writing—original draft preparation, J.B.; writing—review and editing, S.J. (Seongkyun Jeong) and J.B.; supervision, S.J. (Seongkyun Jeong) and J.B.; project administration, J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a 2021 research grant from Sangmyung University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Loey, M.; ElSawy, A.; Afify, M. Deep learning in plant diseases detection for agricultural crops: A survey. Int. J. Serv. Sci. Manag. Eng. Technol. 2020, 11, 41–58.
  2. Shruthi, U.; Nagaveni, V.; Raghavendra, B. A review on machine learning classification techniques for plant disease detection. In Proceedings of the International Conference on Advanced Computing and Communication Systems, Coimbatore, India, 15–16 March 2019.
  3. Bera, T.; Das, A.; Sil, J.; Das, A.K. A survey on rice plant disease identification using image processing and data mining techniques. In Emerging Technologies in Data Mining and Information Security; Abraham, A., Dutta, P., Mandal, J.K., Bhattacharya, A., Dutta, S., Eds.; Springer: Singapore, 2019; Volume 814, pp. 365–376.
  4. Tuta absoluta: The South American Tomato Leafminer. Available online: https://anrcatalog.ucanr.edu/Details.aspx?itemNo=8589 (accessed on 3 October 2022).
  5. Zhou, R.; Kaneko, S.; Tanaka, F.; Kayamori, M.; Shimizu, M. Disease detection of Cercospora Leaf Spot in sugar beet by robust template matching. Comput. Electron. Agric. 2014, 108, 58–70.
  6. Barbedo, J.G.A.; Godoy, C.V. Automatic classification of soybean diseases based on digital images of leaf symptoms. In Proceedings of the Brazilian Congress of Agroinformatics (SBIAGRO), Ponta Grossa, Brazil, 21–23 October 2015.
  7. Fuentes, A.; Yoon, S.; Park, D.S. Deep learning-based techniques for plant diseases recognition in real-field scenarios. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Auckland, New Zealand, 10–14 February 2020; Springer: Cham, Switzerland, 2020; Volume 12002.
  8. Ganatra, N.; Patel, A. A survey on diseases detection and classification of agriculture products using image processing and machine learning. Int. J. Comput. Appl. 2018, 180, 7–12.
  9. Yang, D.; Li, S.; Peng, Z.; Wang, P.; Wang, J.; Yang, H. MF-CNN: Traffic flow prediction using convolutional neural network and multi-features fusion. IEICE Trans. Inf. Syst. 2019, 102, 1526–1536.
  10. Sundararajan, S.K.; Sankaragomathi, B.; Priya, D.S. Deep belief CNN feature representation-based content based image retrieval for medical images. J. Med. Syst. 2019, 43, 174.
  11. Melnyk, P.; You, Z.; Li, K. A high-performance CNN method for offline handwritten Chinese character recognition and visualization. Soft Comput. 2020, 24, 7977–7987.
  12. Kumar, S.; Singh, S.K. Occluded thermal face recognition using bag of CNN (BoCNN). IEEE Signal Process. Lett. 2020, 27, 975–979.
  13. Liu, J.; Wang, X. Plant diseases and pests detection based on deep learning: A review. Plant Methods 2021, 17, 1–18.
  14. Thenmozhi, K.; Reddy, U.S. Crop pest classification based on deep convolutional neural network and transfer learning. Comput. Electron. Agric. 2019, 164, 104906.
  15. Fang, T.; Chen, P.; Zhang, J.; Wang, B. Crop leaf disease grade identification based on an improved convolutional neural network. J. Electron. Imaging 2020, 29, 013004.
  16. Nagasubramanian, K.; Jones, S.; Singh, A.K.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods 2019, 15, 98.
  17. Picon, A.; Seitz, M.; Alvarez-Gila, A.; Mohnke, P.; Ortiz-Barredo, A.; Echazarra, J. Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Comput. Electron. Agric. 2019, 167, 105093.
  18. Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 2017, 17, 2022.
  19. Ozguven, M.M.; Adem, K. Automatic detection and classification of leaf spot disease in sugar beet using deep learning algorithms. Phys. A Stat. Mech. Its Appl. 2019, 535, 122537.
  20. Zhou, G.; Zhang, W.; Chen, A.; He, M.; Ma, X. Rapid detection of rice disease based on FCM-KM and faster R-CNN fusion. IEEE Access 2019, 7, 143190–143206.
  21. Xie, X.; Ma, Y.; Liu, B.; He, J.; Li, S.; Wang, H. A deep-learning-based real-time detector for grape leaf diseases using improved convolutional neural networks. Front. Plant Sci. 2020, 11, 751.
  22. Singh, D.; Jain, N.; Jain, P.; Kayal, P.; Kumawat, S.; Batra, N. PlantDoc: A dataset for visual plant disease detection. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, Hyderabad, India, 5–7 January 2020.
  23. Sun, J.; Yang, Y.; He, X.; Wu, X. Northern maize leaf blight detection under complex field environment based on deep learning. IEEE Access 2020, 8, 33679–33688.
  24. Bhatt, P.V.; Sarangi, S.; Pappula, S. Detection of diseases and pests on images captured in uncontrolled conditions from tea plantations. In Proceedings of the Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping IV, Baltimore, MD, USA, 15–16 April 2019.
  25. Zhang, B.; Zhang, M.; Chen, Y. Crop pest identification based on spatial pyramid pooling and deep convolution neural network. Trans. Chin. Soc. Agric. Eng. 2019, 35, 209–215.
  26. Ramcharan, A.; McCloskey, P.; Baranowski, K.; Mbilinyi, N.; Mrisho, L.; Ndalahwa, M.; Legg, J.; Hughes, D.P. A mobile-based deep learning model for cassava disease diagnosis. Front. Plant Sci. 2019, 10, 272.
  27. Selvaraj, M.G.; Vergara, A.; Ruiz, H.; Safari, N.; Elayabalan, S.; Ocimati, W.; Blomme, G. AI-powered banana diseases and pest detection. Plant Methods 2019, 15, 92.
  28. Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Detection of apple lesions in orchards based on deep learning methods of CycleGAN and YOLOv3-dense. J. Sens. 2019, 2019, 7630926.
  29. Zheng, Y.Y.; Kong, J.L.; Jin, X.B.; Wang, X.Y.; Su, T.L.; Zuo, M. CropDeep: The crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors 2019, 19, 1058.
  30. Arsenovic, M.; Karanovic, M.; Sladojevic, S.; Anderla, A.; Stefanovic, D. Solving current limitations of deep learning based approaches for plant disease detection. Symmetry 2019, 11, 939.
  31. Fuentes, A.F.; Yoon, S.; Lee, J.; Park, D.S. High-performance deep neural network-based tomato plant diseases and pests diagnosis system with refinement filter bank. Front. Plant Sci. 2018, 9, 1162.
  32. Jiang, P.; Chen, Y.; Liu, B.; He, D.; Liang, C. Real-time detection of apple leaf diseases using deep learning approach based on improved convolutional neural networks. IEEE Access 2019, 7, 59069–59080.
  33. Wang, Z.; Zhang, S. Segmentation of corn leaf disease based on fully convolution neural network. Acad. J. Comput. Inf. Sci. 2018, 1, 9–18.
  34. Wang, X.F.; Wang, Z.; Zhang, S.W. Segmenting crop disease leaf image by modified fully-convolutional networks. In International Conference on Intelligent Computing; Springer: Cham, Switzerland, 2019; Volume 11643.
  35. Lin, K.; Gong, L.; Huang, Y.; Liu, C.; Pan, J. Deep learning-based segmentation and quantification of cucumber powdery mildew using convolutional neural network. Front. Plant Sci. 2019, 10, 155.
  36. Kerkech, M.; Hafiane, A.; Canals, R. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput. Electron. Agric. 2020, 174, 105446.
  37. AI-Hub. Available online: https://aihub.or.kr/aidata/129 (accessed on 5 December 2021).
  38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
  39. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
  40. GitHub. Available online: https://github.com/matterport/Mask_RCNN.git (accessed on 12 October 2022).
  41. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  42. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Las Condes, Chile, 11–18 December 2015.
  43. Li, X.; Lai, T.; Wang, S.; Chen, Q.; Yang, C.; Chen, R.; Lin, J.; Zheng, F. Weighted feature pyramid networks for object detection. In Proceedings of the 2019 IEEE International Conference on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), Xiamen, China, 16–18 December 2019.
Figure 1. Developed DNN structure using transfer learning with ResNet152.
Figure 2. Developed DNN structure using transfer learning with Mask R-CNN. (a) ResNet101, (b) Region proposal network, (c) Region of interest pooling, (d) Fully connected layer, (e) Feature pyramid networks.
Figure 3. Learning curves for Mask R-CNN: (a) training loss, (b) training MRCNN class loss, (c) validation loss, and (d) validation MRCNN class loss.
Figure 4. Confusion matrices of two DNNs: (a) DNNRN152, (b) DNNMRCNN.
Figure 5. Comparison of performance evaluation results between two DNNs: DNNRN152 (red bars) and DNNMRCNN (blue bars).
Figure 6. Number of test images (vertical axis) in each IoU range (horizontal axis), calculated by comparing human-segmented and DNN-predicted binary mask images on the test dataset. (a) Number of test images in each IoU range; (b) comparison between target masking and predicted masking according to IoU value.
Table 1. Explored hyperparameter space.

| Types of Hyperparameter | | Explored Range |
| --- | --- | --- |
| FCL ¹ Structure for Classifier | Number of Layers | Integer in a range of [0, 4] |
| | Number of Neurons in Each Layer | 2^n, where n is an integer in a range of [4, 8] |
| | Dropout Rate in Each Layer | Real number in a range of [0.1, 0.5] with an interval of 0.1 |
| Training Process | Batch Size | 2^n, where n is 3 or 4 |
| | Optimizer | One among SGD, Adam, and RMSprop |

¹ Fully Connected Layer.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
