Article

Using Deep Convolutional Neural Network for Image-Based Diagnosis of Nutrient Deficiencies in Plants Grown in Aquaponics

1 College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
2 Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
3 Department of Soil and Water Sciences, Faculty of Environmental Agricultural Sciences, Arish University, North Sinai 45516, Egypt
4 Agricultural Engineering Department, Faculty of Agriculture, Suez Canal University, Ismailia 41522, Egypt
5 Department of Nutrition & Food Science, National Research Centre, Dokki, Giza 12622, Egypt
6 Department of Environmental Science & Technology, University of Maryland, College Park, MD 20742, USA
7 Department of Pharmacology & Toxicology, College of Pharmacy, King Saud University, Riyadh 11564, Saudi Arabia
* Author to whom correspondence should be addressed.
Chemosensors 2022, 10(2), 45; https://doi.org/10.3390/chemosensors10020045
Submission received: 17 December 2021 / Revised: 16 January 2022 / Accepted: 21 January 2022 / Published: 25 January 2022
(This article belongs to the Special Issue Practical Applications of Spectral Sensing in Food and Agriculture)

Abstract

In aquaponic systems, the plant nutrients bioavailable from fish excreta are not sufficient for optimal plant growth. Accurate and timely monitoring of the nutrient status of plants grown in aquaponics is therefore a challenge for maintaining the balance and sustainability of the system. This study aimed to integrate color imaging and deep convolutional neural networks (DCNNs) to diagnose the nutrient status of lettuce grown in aquaponics. Our approach consists of a multi-stage procedure, including plant object detection and classification of nutrient deficiency. The robustness and diagnostic capability of the proposed approaches were evaluated using a total of 3000 lettuce images classified into four nutritional classes, namely full nutrition (FN), nitrogen deficiency (N), phosphorus deficiency (P), and potassium deficiency (K). The performance of the DCNNs was compared with traditional machine learning (ML) algorithms (i.e., simple thresholding, k-means, support vector machine (SVM), k-nearest neighbor (KNN), and decision tree (DT)). The results demonstrated that the proposed deep segmentation model obtained an accuracy of 99.1%, and the proposed deep classification model achieved the highest accuracy of 96.5%. These results indicate that deep learning models combined with color imaging provide a promising approach to timely monitoring of the nutrient status of plants grown in aquaponics, which allows preventive measures to be taken and mitigates economic and production losses. These approaches can be integrated into embedded devices to control nutrient cycles in aquaponics.

1. Introduction

Aquaponics has been considered an emerging industry promoted as a sustainable agricultural practice, and it promises to be a sustainable alternative in the face of global environmental and food problems [1,2]. It is an integrated agri-aquaculture system (IAAS) that combines aquaculture (farming of fish), soilless culture, and nitrifying bacteria in a symbiotic environment [2]. Thus, it enhances aquaculture sustainability and has the potential to produce natural organic plants [3]. Fish feed is the primary source of all nutrients in aquaponics [4]. Recent studies have indicated that nutrients excreted by fish might not supply sufficient levels for optimal plant growth when used as the sole source of nutrients [5]. For instance, Pineda-Pineda et al. compared aquaponic and conventional hydroponic systems for lettuce cultivation [6]. Their results revealed that aquaponics provides only 5–79% of the required mineral elements, such as N-NO3, P, K, Ca, and Mg, which negatively affects productivity and overall quality [5]. Therefore, tracking nutrient deficiencies in plants is a crucial aquaponics management practice.
Over the last decades, the most practical method for identifying plant nutrient deficiency has still relied on visual observation based on the experience of growers [7]. Xu et al. noted that the symptoms of plant nutrient deficiency can be visualized from morphological features (e.g., leaf color and shape), and thus the stage of nutritional deficiency can be identified correctly [8]. In addition, Barbedo mentioned that visualizing the plant's symptoms can be one of the most efficient and fastest methods for observing its nutritional status [9]. However, visual detection requires considerable expertise, and there is a high probability of misdiagnosis, especially in the early growth stages. Moreover, chemical analyses are costly, time-consuming, and destructive. Thus, developing an accurate, automatic, non-contact, and non-destructive technique to detect phytonutrient deficiency is substantially important [10,11]. Different non-destructive methods have been proposed for detecting plant nutrient deficiencies, including portable chlorophyll and nitrogen meters such as the SPAD-502 (Soil Plant Analysis Development), hyperspectral imaging, and in situ measurements using spectral remote sensing [12]. Besides being complicated in terms of data processing, these methods require advanced and costly instrumentation and are only suitable for specific circumstances. Another non-destructive detection technique is computer vision, which requires only an imaging device (a camera) to acquire and transfer images to a computer, where they are processed to make image-based decisions about plant characteristics. Red-green-blue (RGB) images have been frequently employed in many plant stress detection tasks due to the low cost, portability, and availability of the respective cameras [9]. Additionally, RGB imaging is the earliest imaging technology and is extensively used for plant nutrition diagnosis due to its convenience and low cost [13]. Such conventional computer vision techniques have been validated for detecting plant nutrient deficiencies such as N, P, and K [14,15].
Nowadays, the emergence of artificial intelligence and the development of computer vision and machine learning have provided an excellent opportunity for continuous monitoring of a plant's nutritional status. Fast and early detection of phytonutrient deficiencies requires the development of sophisticated computational classifiers to be incorporated into the imaging system. Numerous studies on early and rapid detection of the nutritional status of plants have been carried out using conventional ML approaches that depend on hand-crafted features. Several studies have used texture features extracted from plant images to assess plant health status and have shown that a plant's nutrient deficiency is embodied in the image texture. For example, Liu et al. [16] used five texture features, including homogeneity, contrast, energy, correlation, and variance, derived using a gray-level co-occurrence matrix (GLCM), to estimate the nitrogen status of winter oilseed rape at three different growth stages; the results proved that texture features offer meaningful information to diagnose the N status of oilseed rape. Wiwart et al. [17] used color analysis for early diagnosis of nutrient deficiency in three legume species and observed a significant difference between the color of nutrient-deficient and non-deficient plants. Pagola et al. [18] developed a method to assess the N status of barley using eight color-based indices derived from leaflet images in RGB color space. Story et al. [19] developed an approach that combined structural, color, and textural features for identifying calcium deficiency in lettuce crops. Although multiple features were extracted, the system could only be used for monitoring plant growth status in a greenhouse during the night, and more effort is needed to implement the system during the day under highly variable illumination. Sanyal et al. [20] extracted textural and color features as the input of a multilayer perceptron neural network for classifying six types of mineral deficiency in rice. However, hand-crafted-feature-based approaches have some limitations in diagnosing the nutrient status of the plant [9]. The interpretation of results obtained with hand-crafted features (e.g., structural, textural, and color features) can be misleading because these features are profoundly influenced by cultivars, light conditions, and the response of the camera sensor to various changes [21]. Furthermore, such ML algorithms are developed for a single task; therefore, they may fail to perform well in another task. In addition, correct image processing, feature extraction, and accurate classification are impossible without proper segmentation, which is the primary and crucial step in computer vision technology [22]. In order to both spatially classify and finely segment images, several traditional ML algorithms have been proposed, such as watersheds [23], fuzzy c-means clustering [9], and k-means clustering [24]. However, most of these methods use low-level features for segmentation.
As a subclass of machine learning methods, deep learning (DL) models, which use deep neural networks with a large number of processing layers to analyze and process data, have recently achieved success in the field of plant nutrient status diagnosis. Since the early 2000s, DCNNs have been utilized for analyzing RGB images [25], for example in image segmentation of biological materials [26], recognition of plants [27], prediction of leaf water content [28], and plant disease detection [29]. Additionally, DCNNs automatically learn and extract the most descriptive features from the images during the training process, which thoroughly addresses the problems of hand-crafted features [30]. SegNet [31] is one of the most potent DCNN architectures used for color image segmentation; it has a lower computational cost and higher precision than some other DCNN architectures [32,33]. In addition, Inceptionv3 can significantly reduce the number of parameters used in model calculations [34]. This algorithm avoids overfitting or underfitting problems by increasing the number of layers to promote the non-linear expression of the network. ResNets are a particular type of CNN model with a residual learning structure that improves the learning dynamics of error propagation over many layers of non-linear transformations [35]. Although DCNNs have great potential for application, only a few research attempts using such techniques to detect plant nutrient status have been reported in the literature [36,37,38]. Tran et al. [38] employed deep learning networks, such as Inception-ResNetv2 and Autoencoder, for classifying calcium, nitrogen, and potassium deficits in tomato plants. Abdalla et al. [39] developed a deep learning model to diagnose the nutrient status of oilseed rape by classifying the nutrient statuses of the plants into nine classes.
Few studies using RGB images combined with DCNNs for nutrient status detection have been proposed, although their efficacy has been proven, and their use in monitoring sustainable aquaponic systems has received little attention, creating a scientific gap that must be filled. Therefore, this study aims to evaluate different DCNN models, including the SegNet, Inceptionv3, and ResNet18 architectures, to address nutrient status diagnosis as a multi-stage procedure that includes segmenting color images and building models for early and fast detection of NPK deficiencies in two lettuce (Lactuca sativa) cultivars grown in aquaponics across the growing season. Additionally, the performance of the DCNN models was compared against three different shallow classifiers (SVM, KNN, and DT) to confirm their validity and applicability. It is expected that this study will provide a baseline for managing aquaponic systems using computer vision technology, thus achieving greater sustainability and economic return.

2. Materials and Methods

2.1. Chemicals and Materials

To prepare the standard plant nutrient solution, the following chemicals were used: potassium nitrate (KNO3), calcium nitrate tetrahydrate (Ca(NO3)2·4H2O), monopotassium phosphate (KH2PO4), magnesium sulfate heptahydrate (MgSO4·7H2O), trace elements (manganese chloride tetrahydrate (MnCl2·4H2O), zinc sulfate heptahydrate (ZnSO4·7H2O), boric acid (H3BO3), sodium molybdate (NaMoO4), and copper sulfate pentahydrate (CuSO4·5H2O)), and iron chelate (FeEDTA). All of these chemicals were purchased from Shhushi Ltd. (Jing'an, Shanghai, China). Seedlings of two lettuce (Lactuca sativa) cultivars, Flandria (var. Flandria) and Romaine (var. longifolia), were purchased from the PengJi Industrial Zone (Liantang, Luohu District, Shenzhen, Guangzhou, China). Lettuce was used because it is the main plant cultivated in aquaponic systems [40].

2.2. Chemical Analysis

Across the growing season, lettuce and water were randomly sampled from both the aquaponic and control systems for chemical analysis of nitrogen, phosphorus, and potassium content. Fresh samples of lettuce leaves were oven-dried at 75 °C to constant weight and then ground in a Wiley mill with a 20-mesh sieve. The resulting powder was digested with sulfuric acid (98%, w/w). Nitrogen content was then analyzed with the Kjeldahl method using a continuous-flow analyzer (AutoAnalyzer 3 (AA3), SEAL Analytical Co., Wrexham, UK), following the method of Mao et al. [41]. Phosphorus content was measured by ammonium molybdate spectrophotometry [42]. Potassium content was measured by inductively coupled plasma mass spectrometry [43]. Table 1 shows the average nitrogen, phosphorus, and potassium content in lettuce for both systems across the growing season.
The nitrogen, phosphorus, and potassium content of the water was measured weekly across the growing season. Nitrogen and potassium were measured with ion-selective electrodes using a Thermo Scientific Orion portable pH/EC meter (1215000) (Pittsburgh, PA, USA) with Ionplus Nitrate (9707BNWP) and Ionplus Potassium (9719BNWP) electrodes (725 Lohwest Ln, Billings, MT 59106, USA). Phosphorus was determined by the molybdovanadate yellow method, reading absorbance at 420 nm with a Thermo Scientific Genesys 10 UV scanning spectrophotometer (5225 Verona Road, Madison, WI 53711-4495, USA) [6]. Table 2 shows the average nitrogen, phosphorus, and potassium concentrations in water for both systems across the growing season.

2.3. Experimental Setup

A greenhouse with an area of 36 m2 (3 m wide × 12 m long) was designed and constructed on the rooftop of the College of Biosystems Engineering and Food Science, Zhejiang University (30°16′N, 120°07′07″E), Hangzhou, Zhejiang Province, China. Inside this greenhouse, an aquaponic system was constructed following the standard design guidelines proposed by Somerville et al. [44]. Two cultivars of lettuce (Flandria and Romaine) were grown during the period from November 2019 to January 2020. In brief, the aquaponic system consisted of two main parts: a fish rearing tank (1 m3) made of low-density polyethylene (LDPE) and a plant growing unit designed based on the nutrient film technique (NFT). Four polyvinyl chloride (PVC) pipes with an 11 cm diameter and 4 m length were used as the cultivation unit. Each pipe contained 20 holes spaced 0.20 m center to center to allow adequate space for accommodating a total of 80 plants. Water flowed by gravity from the fish rearing tank through a mechanical filter to the water reservoir (100 L). Water was then pumped from the reservoir to a 60 L biological filter supported with bio balls as bio-stimulating media for the growth of microorganisms. Subsequently, the water flowed by gravity through the NFT system pipes, which were well exposed to sunlight. Finally, the water recirculated back to the fish rearing tank to complete the cycle. The dissolved oxygen (DO) level was maintained above 5 mg/L using an air pump (0.5 HP) to provide a sufficient oxygen concentration for the fish. The fish rearing tank was covered with a piece of black cloth providing 90% shade to protect it from direct sunlight, prevent algal growth, and prevent the fish from jumping out of the tank. Another hydroponic system (control system) with a full nutrition (FN) regime was designed and constructed based on NFT and was consistently supplied with Hoagland's nutrient solution [45]. Another 80 plants of the two lettuce cultivars were transplanted into the control system. Optimum growing conditions were provided for these lettuce plants to obtain a dataset of healthy lettuce images for comparison with the lettuce grown in aquaponics. Hoagland's solution is composed of a mixture of KNO3, Ca(NO3)2·4H2O, KH2PO4, MgSO4·7H2O, and FeEDTA with concentrations of 101.1, 236.1, 136.1, and 246.5 g per 1 liter of distilled water, respectively. Essential trace elements were added to the solution in the form of MnCl2·4H2O, ZnSO4·7H2O, H3BO3, NaMoO4, and CuSO4·5H2O in quantities of 1.8 g, 0.2 g, 2.8 g, 0.025 g, and 0.1 g, respectively.

2.4. Spectral Reflectance Measurement of Lettuce Leaves

For further investigation into tracking the nutritional status of lettuce grown in aquaponics, the leaf spectral reflectance properties were measured. Measurement of spectral reflectance properties has been demonstrated to be a very effective and reliable method for detecting and predicting the nutritional status of plants [46,47]. Pacumbaba Jr. and Beyl [48] reported that leaf spectral reflectance measurements in the visible to near-infrared (NIR) region (wavelengths of 401.67 and 780.11 nm) have been shown to be a valuable tool for identifying the nutritional status of lettuce. The leaf spectral measurements were performed using an ASD FieldSpec Pro FR spectroradiometer (Analytical Spectral Devices), which has a spectral range of 350–2500 nm and spectral resolutions of 3.0 and 10 nm.

2.5. Image Acquisition

Figure 1 shows the full protocol for detecting NPK deficiencies in lettuce plants grown in aquaponics using the proposed method. To create an image dataset, images of the lettuce canopy were acquired using an RGB camera (PowerShot SX720 HS, Canon Inc., Tokyo, Japan) during the growing season, starting from the first true leaf. The images were collected under sunny and cloudy conditions in the morning hours (9:00–13:00) and at night (20:00), local time (GMT+8), as shown in Figure 2. Every plant was imaged at a distance of 50 cm from the canopy using a 4 mm focal length, 3.43 mm maximum aperture, 6.17 mm × 4.55 mm sensor size, and ISO speed of 80, and all images were saved in RAW format with a spatial resolution of 3888 × 5186 pixels.
A total of 3000 RGB images (2400 images of aquaponic lettuce and 600 images of hydroponic lettuce) were collected. All acquired images were divided into four classes based on the plant's nutrient status: nitrogen deficiency (N), phosphorus deficiency (P), potassium deficiency (K), and full nutrition (FN). The visible symptoms of NPK deficiencies are recorded in Table 3 [49] and visualized in Figure 3. It is important to emphasize that all images captured from the control hydroponic system were classified as the full nutrition (FN) class, because the plants grown in this system did not suffer from any symptoms of nutrient deficiency due to the continuous supply of the nutrient solution during the entire growing season.

2.6. Image Segmentation

Image segmentation was performed to extract the plant from the background for further processing and computer vision applications, following Ben-Chaabane et al. [50]. Due to fluctuations in natural lighting during image acquisition, the images were captured under different lighting conditions, which poses a major challenge for in-field image segmentation. Uneven lighting would cause problems for reliable image segmentation if traditional segmentation methods were applied. Therefore, the SegNet model, one of the most powerful segmentation procedures with very accurate performance, was adopted.
SegNet requires a large dataset to obtain high segmentation performance. It also requires labeled training data, which is laborious and time-consuming to produce, particularly in the agriculture domain, where plant images are acquired with heavy occlusion, many objects, and environmental complexity. In contrast, traditional learning algorithms do not require predefined class labels to represent patterns [24]; instead, the algorithm can detect similarities by itself. Therefore, to further investigate and confirm the performance of SegNet, its outcome was compared with that of two traditional segmentation methods, namely simple thresholding-based segmentation [51] and the k-means segmentation algorithm (a minimal example of such an unsupervised approach is sketched below).
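For illustration, a minimal sketch of such an unsupervised baseline is given below. It clusters the a*b* chroma channels of a canopy image into two groups and keeps the greener cluster as the plant mask; it is written in Python (scikit-image/scikit-learn) rather than the MATLAB environment used in this study, and the file name, cluster count, and plant-cluster heuristic are illustrative assumptions.

```python
# Minimal k-means segmentation sketch (not the exact pipeline used in this study):
# cluster the a*b* chroma channels and keep the greener cluster as the plant mask.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

def kmeans_plant_mask(image_path, n_clusters=2, seed=0):
    rgb = io.imread(image_path)                        # H x W x 3 RGB image
    lab = color.rgb2lab(rgb)                           # Lab separates chroma from lightness
    ab = lab[:, :, 1:].reshape(-1, 2)                  # cluster on a* and b* only
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(ab).reshape(rgb.shape[:2])
    # Heuristic: the cluster with the most negative mean a* (greenest) is the plant.
    plant_cluster = np.argmin([lab[:, :, 1][labels == c].mean()
                               for c in range(n_clusters)])
    return labels == plant_cluster                     # boolean plant mask

# Example usage (hypothetical file name):
# mask = kmeans_plant_mask("lettuce_0001.png")
```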
A total of 600 RGB images were randomly selected from the image dataset for manual labeling via the Image Labeler toolbox in MATLAB R2019b (The MathWorks, Inc., Natick, MA, USA) to provide pixel-level labels for two semantic classes (i.e., the background and the lettuce plant). The new dataset of labeled images and their corresponding pixel-level labels was then used to train the SegNet model. For training, the 600 original images and their corresponding pixel-level labels were divided into 60% for training, 20% for validation, and 20% for testing. Moreover, a data augmentation protocol was applied to improve the accuracy of the segmentation process. Figure 4 shows the data augmentation protocols applied in this work to make the segmentation reliable regardless of the location and orientation of the lettuce plants in the acquired images during SegNet training. Stochastic gradient descent was used with a fixed learning rate η of 1 × 10−10 and a momentum coefficient of 0.9 for training the network. The flowchart of the segmentation process using SegNet is schematically illustrated in Figure 5.
The model was optimized for a maximum of 100 epochs, where each epoch passes once through the dataset. To ensure that each image was not used repeatedly within an epoch, each minibatch of four images was selected in order. Training ended when the loss on the validation set stopped diminishing over four consecutive epochs. A simplified sketch of this training regime is shown below.
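The following is a minimal sketch of the training regime described above (SGD with the stated learning rate and momentum, minibatches of four images, a 100-epoch cap, and early stopping after four epochs without validation improvement). It is written in PyTorch rather than the MATLAB toolchain used in the study; `model`, `train_loader`, and `val_loader` are assumed to exist, and the SegNet encoder-decoder itself is not reproduced here.

```python
# Sketch of the segmentation training regime described above (PyTorch stand-in).
import copy
import torch
import torch.nn as nn

def train_segmentation(model, train_loader, val_loader,
                       lr=1e-10, momentum=0.9, max_epochs=100, patience=4):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    criterion = nn.CrossEntropyLoss()        # per-pixel loss over the 2 semantic classes
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    best_loss, best_state, stale = float("inf"), None, 0

    for epoch in range(max_epochs):
        model.train()
        for images, masks in train_loader:   # minibatches of 4 images, taken in order
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)

        if val_loss < best_loss:             # keep the best weights seen so far
            best_loss, best_state, stale = val_loss, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:            # stop after 4 epochs without improvement
                break

    model.load_state_dict(best_state)
    return model
```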

2.7. Feature Extraction

After the images were divided into the four target classes (FN, N, P, and K) and segmented, hand-crafted features, including morphological, color, and textural features, were derived from the segmented images and concatenated as the input of the classic ML classifiers; the classification results were then compared with those obtained using DCNNs. Image texture analysis is a very important tool for exploring symptoms and information in the image that are difficult for humans to notice visually [52,53]. The texture of the plant arises from the presence of many veins in different directions or parallel lines of different colors in the plant leaf. The whole plant also has its own texture, which depends on the frequency, direction, and curvature of the leaves [54]. Changes in leaf surface structure and yellowing may lead to significant changes in some texture features (energy, homogeneity, contrast, correlation, and entropy) [41]. To capture the spatial dependence of gray-level values that contributes to texture perception, the gray-level co-occurrence matrix (GLCM) was used. Because plant texture depends on orientation, four different matrices were calculated based on different angles of pixel relativity (0°, 45°, 90°, and 135°) [41]. The averages of the area, perimeter, and convex hull were calculated from the segmented images, as were the averages of the red, green, and blue color features. The images were then transformed from the RGB model to the HSV (hue, saturation, and value) color space. Finally, textural features were derived from the GLCM, including energy, homogeneity, contrast, correlation, and entropy. All these features were then combined to form a high-dimensional hand-crafted feature vector, as sketched below.
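A compact sketch of assembling such a feature vector is shown below, using scikit-image for the region properties and the GLCM. The function name, the GLCM distance of one pixel, and the normalization choices are illustrative assumptions, not the exact settings used in this study.

```python
# Illustrative hand-crafted feature vector: morphology + RGB/HSV means + GLCM texture.
import numpy as np
from skimage import color
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def handcrafted_features(rgb, mask):
    """rgb: H x W x 3 uint8 segmented image; mask: boolean plant mask (single plant assumed)."""
    props = regionprops(label(mask.astype(int)))[0]
    morph = [props.area, props.perimeter, props.convex_area]          # morphological features

    rgb_means = rgb[mask].mean(axis=0) / 255.0                        # mean R, G, B of plant pixels
    hsv_means = color.rgb2hsv(rgb)[mask].mean(axis=0)                 # mean H, S, V of plant pixels

    gray = (color.rgb2gray(rgb) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)      # 4 orientations
    texture = [graycoprops(glcm, p).mean()                            # average over the 4 angles
               for p in ("energy", "homogeneity", "contrast", "correlation")]
    entropy = float(np.mean(-np.sum(glcm * np.log2(glcm + 1e-12), axis=(0, 1))))

    return np.concatenate([morph, rgb_means, hsv_means, texture, [entropy]])
```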

2.8. Traditional ML Classifiers

Despite the widespread application of deep learning in image classification and computer vision, traditional ML methods are still widely used today. As the most prevalent ML methods for computer vision applications, pattern recognition, and image classification, SVM, KNN, and DT were used as benchmarks against which to compare the performance of the DCNNs for detecting and monitoring the nutrient status of the lettuce [55]. SVM is a supervised ML technique that creates a separating hyperplane; it is a linear, non-probabilistic binary classifier. The different classes are separated by the SVM based on the hyperplane of Equation (1):
$W^{T}X + b = 0$    (1)
where X denotes the input vector, W denotes the weight vector, and b denotes the bias.
KNN is a supervised learning method characterized by its simple implementation and good performance in classifying images. Mathematically, it is defined by Equation (2):
$y = \dfrac{1}{k} \sum_{i=1}^{k} y_i$    (2)
DT also belongs to the category of classic supervised ML. The tree consists of two parts, the leaves and the nodes. The nodes are points where the data are split, while the leaves represent the final decision. The DT used in this study was grown with the classification and regression tree (CART) algorithm [56].
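For illustration, the snippet below shows how the three benchmark classifiers could be fitted to such hand-crafted feature vectors with scikit-learn; the hyperparameters and the 80/20 split shown here are assumptions rather than the values tuned in this study.

```python
# Benchmark SVM, KNN, and DT classifiers on the hand-crafted feature vectors (sketch).
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def benchmark_classifiers(X, y, seed=0):
    """X: n_samples x n_features feature matrix; y: labels in {"FN", "N", "P", "K"}."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=seed)
    models = {
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
        "DT":  DecisionTreeClassifier(random_state=seed),   # CART-style tree
    }
    scores = {}
    for name, clf in models.items():
        clf.fit(X_tr, y_tr)
        scores[name] = accuracy_score(y_te, clf.predict(X_te))
    return scores
```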

2.9. DCNNs Classifiers

To detect NPK deficiencies in lettuce plants grown in aquaponics, this study employed two pre-trained deep learning architectures, namely Inceptionv3 [34] and ResNet18 [35]. The Inceptionv3 and ResNet18 algorithms were selected because they have proved to be accurate classification routines in many tasks [55,57,58]. To train the models, a fine-tuning (transfer learning) technique was implemented. Transfer learning is one of the most commonly used deep learning techniques with convolutional neural networks [59]. In deep learning, the first layers of the network are trained to identify the descriptive features of the target task. In transfer learning, the last few layers of the pre-trained model are removed and retrained as new layers for the target task. Fine-tuned models are faster and more accurate than models trained from scratch [60]. A typical DCNN architecture consisting of four main layers, as applied in this study, is shown in Figure 6. Initially, the input image was passed in, and its neurons were computed by means of a convolution layer, which is connected to local regions of the input.
The convolution layer was used to extract descriptive features from the input image dataset. The noise in the extracted features was reduced by the pooling layer, where irrelevant features were also diminished. The problem of overfitting was addressed by max-pooling, where 2 × 2 pooling was mostly performed on the extracted matrix. High-level features are calculated by the fully connected (FC) layer, which is similar to the convolution layer; most studies activate on the FC layer to extract the deep features. The other layers, such as the ReLU and FC layers, are defined by Equations (3) and (4):
$\mathrm{Re}_i^{l} = \max\left(0,\ h_i^{l-1}\right)$    (3)
$\mathrm{Fc}_i^{l} = f\left(Z_i^{l}\right)$    (4)
with:
$Z_i^{l} = \sum_{j=1}^{m_1^{l-1}} \sum_{r=1}^{m_2^{l-1}} \sum_{s=1}^{m_3^{l-1}} w_{i,j,r,s}^{l} \left(\mathrm{Fc}_i^{l-1}\right)_{r,s}$
where $\mathrm{Re}_i^{l}$ is the output of the ReLU layer and $\mathrm{Fc}_i^{l}$ refers to the FC layer. Finally, if the target task is a multi-class classification task, the softmax function is applied to normalize the real-valued outputs of the last FC layer into probabilities of the target classes. Mathematically, the softmax function can be expressed as in Equation (5):
$\alpha\left(c_j\right) = \dfrac{e^{c_j}}{\sum_{k=1}^{K} e^{c_k}}$    (5)
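For concreteness, a minimal NumPy illustration of Equations (3)–(5) follows; the layer sizes and random inputs are arbitrary examples, not values from the trained networks.

```python
# NumPy illustration of the ReLU (Eq. 3), fully connected (Eq. 4), and softmax (Eq. 5) operations.
import numpy as np

def relu(h):                        # Eq. (3): elementwise max(0, h)
    return np.maximum(0.0, h)

def fully_connected(h, W, b):       # Eq. (4): weighted sum over the previous layer's outputs
    return W @ h + b

def softmax(c):                     # Eq. (5): normalizes scores into class probabilities
    e = np.exp(c - c.max())         # subtract the max for numerical stability
    return e / e.sum()

# Example: map a 5-dimensional activation to probabilities over (FN, N, P, K).
rng = np.random.default_rng(0)
h = relu(rng.normal(size=5))
probs = softmax(fully_connected(h, rng.normal(size=(4, 5)), np.zeros(4)))
```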
To accelerate learning, the pre-trained Inceptionv3 and ResNet18 models were fine-tuned on the image dataset to identify and classify the four categories of nutrient status in aquaponic lettuce. The last layers of these networks were replaced by a classification layer having four neurons for the four classes (FN, N, P, and K). The images were resized to match the input size of each network. Learnable parameters were optimized with stochastic gradient descent with a momentum of 0.09, a mini-batch size of 60 images, and a learning rate of 0.004. The two models were trained over 600 iterations, and training was stopped when there was no improvement in the validation loss over four consecutive iterations. For training the models, the dataset was divided into training, validation, and testing sets by randomly splitting the 3000 images into 1800 images for training, 600 images for validation, and 600 images for testing. A sketch of this fine-tuning step is given below.
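The sketch below uses torchvision's ImageNet-pretrained ResNet18 in PyTorch as a stand-in for the MATLAB implementation; the dataset directory layout, the normalization constants, the momentum of 0.9, and the single training pass shown are illustrative assumptions.

```python
# Transfer-learning sketch: replace the last FC layer of a pretrained ResNet18 with 4 outputs.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4                                           # FN, N, P, K

model = models.resnet18(weights="IMAGENET1K_V1")          # torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new classification layer

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                        # match the network input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed directory layout: dataset/train/<class_name>/*.png
train_set = datasets.ImageFolder("dataset/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=60, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.004, momentum=0.9)

model.train()
for images, labels in train_loader:                       # one pass shown; in practice loop until
    optimizer.zero_grad()                                 # the validation loss stops improving
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```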

2.10. Performance Evaluation of Segmentation and Classification Methods

The performance of the proposed segmentation method was first evaluated qualitatively (visually). Qualitative evaluation is a common way for a user to visually compare and assess image segmentation results.
Moreover, the performance of the proposed segmentation and classification methods was quantitatively evaluated through a set of measures, namely the overall accuracy (Acc), precision (Pr), recall (Re), F1_score, and Dice score, given in Equations (6)–(10):
$\mathrm{Acc} = \dfrac{TP + TN}{TP + TN + FP + FN} \times 100$    (6)
$\mathrm{Pr} = \dfrac{TP}{TP + FP} \times 100$    (7)
$\mathrm{Re} = \dfrac{TP}{TP + FN} \times 100$    (8)
$\mathrm{F1\_score} = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \times 100$    (9)
$\mathrm{Dice\ score} = \dfrac{2 \times TP}{2 \times TP + FP + FN}$    (10)
where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
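The helper below implements Equations (6)–(10) from per-class TP, TN, FP, and FN counts; it is an illustrative sketch rather than the evaluation script used in the study, and the example counts are hypothetical.

```python
# Compute Acc, Pr, Re, F1_score, and Dice score from confusion-matrix counts (Eqs. 6-10).
def evaluation_metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn) * 100            # Eq. (6), in percent
    precision = tp / (tp + fp) * 100                        # Eq. (7), in percent
    recall = tp / (tp + fn) * 100                           # Eq. (8), in percent
    f1 = 2 * precision * recall / (precision + recall)      # Eq. (9), harmonic mean
    dice = 2 * tp / (2 * tp + fp + fn)                      # Eq. (10)
    return {"Acc": acc, "Pr": precision, "Re": recall, "F1_score": f1, "Dice": dice}

# Example with hypothetical counts for one class:
# evaluation_metrics(tp=480, tn=1700, fp=60, fn=40)
```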

2.11. Statistical Analyses

To further investigate the significant differences among the four classification classes (FN, N, P, and K) in terms of the extracted features, a statistical analysis was performed using one-way analysis of variance (ANOVA) and Tukey's HSD test at a confidence level of 95% (p < 0.05). One-way ANOVA was applied to each feature separately for all classes to determine whether the four classes differed significantly in that feature. One-way ANOVA was also used to compare the performance of the ML and DCNN classifiers for all performance measures; a sketch of this procedure is given below. MATLAB R2019b (The MathWorks, Inc., Natick, MA, USA) was used to implement the frameworks and image processing. The models were run on an NVIDIA GeForce GTX1080Ti graphics processing unit equipped with 3548 CUDA cores and 16 GB of GPU memory.
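A sketch of this per-feature comparison is shown below, using SciPy and statsmodels in place of the MATLAB routines actually used; the function name and input layout are assumptions.

```python
# One-way ANOVA per feature followed by Tukey's HSD across the four classes (sketch).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_classes(feature_values, class_labels, alpha=0.05):
    """feature_values: 1-D array of one feature; class_labels: FN/N/P/K label per sample."""
    feature_values = np.asarray(feature_values)
    class_labels = np.asarray(class_labels)
    groups = [feature_values[class_labels == c] for c in np.unique(class_labels)]
    f_stat, p_value = stats.f_oneway(*groups)              # one-way ANOVA across classes
    tukey = pairwise_tukeyhsd(feature_values, class_labels, alpha=alpha)
    return f_stat, p_value, tukey.summary()
```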

3. Results and Discussion

3.1. Training DCNN Models

The pretrained DCNNs were first trained on the training dataset to learn and extract the most descriptive features for segmentation and classification. SegNet was trained to segment the image dataset before the feature extraction and classification phases. Inceptionv3 and ResNet18 were fine-tuned on the segmented image dataset to classify the images into four classes (FN, N, P, and K). The training results are presented in Table 4. SegNet, Inceptionv3, and ResNet18 yielded overall accuracies of 98.30%, 98.90%, and 97.70% on the training set and 99.29%, 98.00%, and 92.50% on the validation set, respectively. The total training times for the models were 194, 65, and 87 min, respectively, as shown in Table 4. It can be noted that the difference between the training and validation accuracies was minimal, indicating that the models generalized well to the dataset and that there were no signs of overfitting.
The training accuracy, validation accuracy, training loss, and validation loss for each epoch of the training process for SegNet, Inceptionv3, and ResNet18 are shown in Figure 7. It can be observed that the training and validation losses decreased slowly during the training process. For SegNet, the training process continued until the stopping criterion was met at 78 epochs. The maximum number of iterations for the Inceptionv3 and ResNet18 models at the end of the training process was 600. At this point, the models reached their highest accuracy and lowest loss, and no more iterations were required. Inceptionv3 outperformed ResNet18 in terms of accuracy and training time, as depicted in Figure 7a,b.

3.2. Changes in Spectral Reflectance under NPK Levels

It has been reported that NPK deficiencies increase leaf reflectance in the visible (VIS, 400–690 nm) and shortwave infrared (SWIR, 1400–2350 nm) regions of the spectrum [61,62,63]. Figure 8 shows the spectra of lettuce leaves from the aquaponic and control systems across the growing season. In the visible and middle infrared wavelength regions, the reflectance of lettuce grown in aquaponics was increased compared to lettuce grown in the control system. This increase in reflectance could be due to the decrease in nitrogen concentration in aquaponic lettuce [47]: lower nitrogen concentrations lead to decreased chlorophyll content and photosynthesis, which decreases visible light absorption and thus increases reflectance. The reflectance of aquaponic lettuce in the near-infrared region (700–1300 nm) was decreased compared to the control lettuce at wavelengths of 955 and 1763 nm. These changes could result from the effect of nitrogen levels on canopy structure and leaf area index; with the decrease in total nitrogen content, the leaf area index and water content also decrease, resulting in suppressed near-infrared reflectance [46]. Similarly, the reflectance of lettuce grown in aquaponics increased in the ranges of 400–480 nm and 1400–2500 nm, which indicates a deficiency of phosphorus [61,63]. Finally, K deficiency leads to increased reflectance in the 500–720 nm range [62], as shown in Figure 8. The chemical analyses recorded in Table 1 support these results.

3.3. Evaluation of Segmentation Methods

The accuracy of image segmentation is strongly associated with variation among classes [64]. Since the images were taken under different lighting conditions (e.g., sunny, cloudy, and at night) with very noisy backgrounds and some non-plant objects (pipes, shadows, wires, specular reflectance, etc.), multi-level thresholding [65] would usually be needed if the simple thresholding method were applied. Therefore, SegNet was employed to overcome these constraints, as it is one of the most effective segmentation procedures with very accurate performance [65].
The performance of all segmentation methods was evaluated visually and quantitatively. The visual evaluation of the segmentation process using the simple thresholding, k-means, and SegNet methods is shown in Figure 9. Whereas neither traditional method fully extracted the plant from the background, the SegNet model completely extracted the plant from the background without noise or defects in the segmented image. The results revealed that the segmented images produced by both traditional segmentation methods suffered from variations in illumination: some parts of the plant appeared as background and some parts of the background were classified as plant, as shown in Figure 9.
In addition to the visual evaluation of the segmentation results, the performance of the three segmentation methods was evaluated using the measures of Acc, Pr, Re, F1_score, Dice score, and segmentation time (ST), as presented in Table 5. The results indicated that the SegNet method exhibited the highest Acc, Pr, Re, F1_score, and Dice score (99.1%, 99.3%, 99.5%, 99.4%, and 99.5%, respectively), followed by the k-means method with lower values (83.1%, 83.5%, 83.7%, 83.5%, and 83.3%, respectively). The thresholding-based segmentation method had the smallest values of Acc, Pr, Re, F1_score, and Dice score (75.2%, 75.5%, 75.6%, 75.5%, and 75.5%, respectively). These results agreed well with the visualization results shown in Figure 9. The segmentation time, on the other hand, is the time required to segregate the different classes in a single image. The lowest segmentation time of 0.605 s was obtained when the SegNet method was applied, which meets the requirements of agricultural applications [32,66]. On the contrary, the k-means method required the longest segmentation time of 5 s. The qualitative and quantitative results revealed the great superiority of the SegNet method over the other two traditional methods. These results agree with those reported by Yilmaz et al. and Bosilj et al. [67,68], who used SegNet to segment images into weeds and crops; the highest segmentation accuracy obtained in those studies was 91%. Also, Ma et al. [32] compared three semantic segmentation models, namely FCN, UNet, and SegNet, for segmentation of rice plants in the presence of weeds; the SegNet model outperformed the other two models with an overall segmentation accuracy of 92.7%. Similarly, Manickam et al. [69] compared four models (VGG16, GoogleNet, ResNet, and SegNet) for person identification from aerial imagery, and their results revealed that SegNet achieved the highest overall accuracy of 91.04%. Furthermore, SegNet can be used for semantic segmentation of remote sensing images [70,71]. Li et al. [70] used semantic segmentation under multiple weakly-supervised constraints for remote sensing images and compared their results to SegNet; although their proposed method outperformed all other comparison approaches, SegNet performed well in cross-domain semantic segmentation. Chantharaj et al. [72] compared the performance of FCN-8s, SegNet, and the Gated Segmentation Network (GSN) on very-high-resolution International Society for Photogrammetry and Remote Sensing (ISPRS) data and medium-resolution Landsat-8 satellite imagery and concluded that SegNet achieved the highest accuracy.
The outstanding performance of the SegNet model in the present study may be due to its high ability to learn high-level features during the training process. It also has a deeper architecture than the other compared methods [32,66], is easy to train, has fewer parameters, and involves fewer computational operations. In addition, it requires much less memory than other DCNN architectures.

3.4. Analysis of Variance

The statistical analysis results are shown in Table 6. The significant differences between classes are indicated by superscript letters above the values; values followed by the same letter are not significantly different between their corresponding classes, and vice versa. The statistical results indicated significant differences in all textural features (energy, homogeneity, contrast, correlation, and entropy) among the four classes (FN, N, P, and K). On the other hand, there were no significant differences between the P- and K-deficient plants in terms of the red (0.338c, 0.340c), green (0.393c, 0.392c), and blue (0.269c, 0.268c) features. This may be due to the apparent color similarity of P and K deficiencies [73]. The results showed significant differences between all classes for the hue, saturation, and value features. For the morphological features (area, perimeter, and convex hull), there were no statistically significant differences between FN and N for the area and perimeter features, as both were followed by the same superscript letter. There were also no statistically significant differences between P and K for the convex-hull feature. The limited ability of the morphological features to fully distinguish between deficiency classes may be because the images were captured at different growth stages; the dimensions of the plants therefore differed according to maturity stage rather than class. The differences between classes in terms of textural and HSV features should not be attributed to the discriminative ability of these features; they are due mainly to the variation in illumination conditions, because such features are not invariant to illumination or image scale. In short, using these features to discriminate different nutrient classes may not provide reliable results. To make our claim more concrete, these results were compared with those obtained by the DCNN methods. Statistical analysis also showed significant differences between the DCNN models and the traditional ML classification models in terms of performance accuracy (p < 0.05). Meanwhile, the accuracy of Inceptionv3 was highly significantly different (p < 0.0001) compared to the SVM model.

3.5. Performance of Classification Methods

An assessment of the appropriateness of state-of-the-art deep convolutional neural networks for determining nutrient deficiency in lettuce grown in aquaponics based on RGB images was performed using two different algorithms (Inceptionv3 and ResNet18) compared with three conventional machine learning methods (support vector machine (SVM), k-nearest neighbor (KNN), and decision tree (DT)).
Confusion matrices were generated to demonstrate the capability of our proposed framework to differentiate the classes of nutrient deficiency, as shown in Table 7. Inceptionv3 revealed a superior ability to discriminate between the FN, N, P, and K classes with the lowest overall classification error (3.5%), followed by ResNet18, which also provided relatively good performance (classification error of 7.9%). On the contrary, the conventional ML classifiers did not perform as well as the DCNNs, as the classification error increased. SVM obtained an overall classification error of 13.9%, while DT obtained the highest classification error among all the proposed classification methods at 20.2%.
To better investigate the robustness and effectiveness of the proposed framework for diagnosing the lettuce nutritional status, a comparative experiment and quantitative analysis in terms of Acc, Pr, Re, and F1_score were performed, and the results are presented in Table 8. Acc refers to the ratio of correctly classified elements to all elements in the dataset [8]. Pr refers to the ratio of correctly predicted positive observations to the total predicted positive observations. Recall refers to the ratio of correctly predicted positive observations to all observations in the actual class. The F1_score is the harmonic mean of the Pr and Re measures and therefore considers both false positives and false negatives; the F1_score is usually more useful, especially for uneven class distributions, albeit not as intuitively understandable as accuracy [8].
It is evident from the obtained results that Inceptionv3 achieved remarkable performance in terms of all classification measures. Inceptionv3 achieved superior performance, with the highest Acc, Pr, Re, and F1_score of 96.5%, 95.7%, 96.2%, and 95.9%, respectively, compared to ResNet18, which also performed well with Acc, Pr, Re, and F1_score values of 92.1%, 91.6%, 92.1%, and 91.8%, respectively. The performance of our proposed framework was comparable to that of a reported study that employed Inceptionv3 for extracting deep features and classifying oilseed rape [39], although one should note the difference in experimental conditions and classification subjects. The superiority of Inceptionv3 (48 layers) may be attributed to the fact that it is deeper than ResNet18 (18 layers); as evident from other studies, deeper layers improve performance [55]. Inceptionv3 also has excellent performance in image recognition and classification compared to other deep learning algorithms [74]. The comparative analysis among the conventional ML classifiers shows that SVM outperformed the other ML classifiers with Acc, Pr, Re, and F1_score of 86.1%, 85.5%, 85.8%, and 85.7%, respectively, while DT achieved the worst results, with values of 79.8%, 78.9%, 79.2%, and 79.1%, respectively, as shown in Table 8. The results of the classical ML classifiers were comparable to those of the study reported by Azimi et al. [55], which employed SVM, KNN, and DT for stress classification in sorghum plant shoot images. Besides, SVM has proved superior to most ML methods in image classification and processing [75]. The classification performance was also evaluated based on the computational time (CT), which denotes the time needed to classify one image. The SVM method had the lowest classification time, followed by KNN, DT, Inceptionv3, and ResNet18, at 0.2, 0.4, 0.5, 0.6, and 0.8 s/image, respectively; this result is consistent with Amri et al. [76]. The results obtained in the experimental studies prove that the proposed DCNNs outperformed the other considered traditional ML methods. This result is consistent with many studies that have compared deep classifiers with traditional classifiers [39,55].
The reason the DCNN methods showed outstanding performance compared with the other classification methods is that they can automatically learn high-level representative features during the training process and do not rely on hand-crafted features as the traditional classification methods do. Hence, the proposed DCNNs were found to be an excellent tool for assisting researchers in the precise detection of nutrient deficiency in plants during their growth. This finding supports the concept that optimized deep learning architectures with RGB images can be valuable and effective in detecting and classifying plants based on the nutrient deficiency they may suffer [9]. The use of the proposed DCNN methods (SegNet and Inceptionv3) in conjunction with a digital camera is expected to assist aquaponics management by detecting NPK deficiencies at an early stage. In addition, it may be beneficial to create a mobile app that implements the proposed SegNet and Inceptionv3 models for automatic diagnosis of NPK deficiency symptoms in aquaponic lettuce, so that users with limited or no expertise can manage aquaponics. Moreover, the proposed model can also be adapted to perform a task from another domain, such as drought or heat stress detection, or combinations of several abiotic plant stresses, provided that the data shift between the source and target domains is not large; otherwise, fine-tuning would be necessary to adjust the parameters of the model. To perform cross-domain tasks efficiently without fine-tuning the model and to reduce the influence of the data shift, advanced objective functions, such as those proposed by Li et al. [70] and Benjdira et al. [71], should be integrated into the deep learning models.

4. Conclusions

This study evaluated the potential of DCNNs coupled with RGB images to diagnose the nutrient status of lettuce cultivars grown in aquaponics. Several models were used: SegNet for segmentation and Inceptionv3 and ResNet18 for classification. Our approaches were validated using 3000 color images of lettuce canopies, and the results demonstrated the great superiority of the DCNN segmentation model over the traditional ML methods. The highest overall classification accuracy of 96.5% was obtained by Inceptionv3, outperforming ResNet18 and all traditional ML classifiers. The present framework can be combined with a simple digital camera for real-time monitoring of the nutritional status of lettuce grown in aquaponics. Due to the flexibility of the framework, it can also be used with minor modifications for other applications, such as real-time disease detection, as well as with other plants, requiring only retraining on a new image dataset.

Author Contributions

Conceptualization, M.F.T., A.A., S.A.-R., M.G. and G.E.; methodology, M.F.T., A.A., M.G. and G.E.; software, M.F.T., A.A., G.E., L.Z. and N.L.; validation, M.F.T., N.Z., Y.H. and Z.N.; formal analysis, M.F.T., G.E. and A.A.; investigation, M.F.T., G.E., S.A.-R. and A.A.; resources, Z.Q., L.Z., N.Z., S.A.-R. and N.L.; data curation, Z.N., Z.Q. and A.H.; writing—original draft preparation, M.F.T., G.E., S.A.-R., M.G. and A.A.; writing—review and editing, M.F.T., A.A., G.E., Y.H. and A.H.; visualization, M.F.T., A.A., G.E., Y.H. and Z.Q.; supervision, Z.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Projects of International Scientific and Technological Innovation Cooperation among Governments under the National Key R&D Plan (2019YFE0103800) and the Zhejiang Province Key Research and Development Program (2021C02023).

Data Availability Statement

The data presented in this study are available within the article.

Acknowledgments

The authors appreciate the support from the Distinguished Scientist Fellowship Program (DSFP), King Saud University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Majid, M.; Khan, J.N.; Shah, Q.M.A.; Masoodi, K.Z.; Afroza, B.; Parvaze, S. Evaluation of hydroponic systems for the cultivation of Lettuce (Lactuca sativa L., var. Longifolia) and comparison with protected soil-based cultivation. Agric. Water Manag. 2021, 245, 106572.
2. Yanes, A.R.; Martinez, P.; Ahmad, R.J. Towards automated aquaponics: A review on monitoring, IoT, and smart systems. J. Clean. Prod. 2020, 263, 121571.
3. Fischer, H.; Romano, N.; Jones, J.; Howe, J.; Renukdas, N.; Sinha, A.K. Comparing water quality/bacterial composition and productivity of largemouth bass Micropterus salmoides juveniles in a recirculating aquaculture system versus aquaponics as well as plant growth/mineral composition with or without media. Aquaculture 2021, 538, 736554.
4. Gao, Y.; Zhang, H.; Peng, C.; Lin, Z.; Li, D.; Lee, C.T.; Wu, W.-M.; Li, C.J. Enhancing nutrient recovery from fish sludge using a modified biological aerated filter with sponge media with extended filtration in aquaponics. J. Clean. Prod. 2021, 320, 128804.
5. Yang, T.; Kim, H.J. Characterizing Nutrient Composition and Concentration in Tomato-, Basil-, and Lettuce-Based Aquaponic and Hydroponic Systems. Water 2020, 12, 1259.
6. Pineda-Pineda, J.; Miranda-Velázquez, I.; Rodríguez-Pérez, J.; Ramírez-Arias, J.; Pérez-Gómez, E.; García-Antonio, I.; Morales-Parada, J. Nutrimental balance in aquaponic lettuce production. In Proceedings of the International Symposium on New Technologies and Management for Greenhouses-GreenSys. Acta Hortic. 2015, 1170, 1093–1100.
7. Cook, S.; Bramley, R. Coping with variability in agricultural production-implications for soil testing and fertiliser management. Commun. Soil Sci. Plant Anal. 2000, 31, 1531–1551.
8. Xu, Z.; Guo, X.; Zhu, A.; He, X.; Zhao, X.; Han, Y.; Subedi, R. Using Deep Convolutional Neural Networks for Image-Based Diagnosis of Nutrient Deficiencies in Rice. Comput. Intell. Neurosci. 2020, 2020, 7307252.
9. Barbedo, J.G.A. Detection of nutrition deficiencies in plants using proximal images and machine learning: A review. Comput. Electron. Agric. 2019, 162, 482–492.
10. Gouda, M.; Chen, K.; Li, X.; Liu, Y.; He, Y. Detection of microalgae single-cell antioxidant and electrochemical potentials by gold microelectrode and Raman micro-spectroscopy combined with chemometrics. Sens. Actuators B Chem. 2021, 329, 129229.
11. Gouda, M.; El-Din Bekhit, A.; Tang, Y.; Huang, Y.; Huang, L.; He, Y.; Li, X. Recent innovations of ultrasound green technology in herbal phytochemistry: A review. Ultrason. Sonochem. 2021, 73, 105538.
12. Eshkabilov, S.; Lee, A.; Sun, X.; Lee, C.W.; Simsek, H. Hyperspectral imaging techniques for rapid detection of nutrient content of hydroponically grown lettuce cultivars. Comput. Electron. Agric. 2021, 181, 105968.
13. Li, D.; Li, C.; Yao, Y.; Li, M.; Liu, L. Modern imaging techniques in plant nutrition analysis: A review. Comput. Electron. Agric. 2020, 174, 105459.
14. Lisu, C.; Yuanyuan, S.; Ke, W.J. Rapid diagnosis of nitrogen nutrition status in rice based on static scanning and extraction of leaf and sheath characteristics. Int. J. Agric. Biol. Eng. 2017, 10, 158–164.
15. Zhang, K.; Zhang, A.; Li, C. Nutrient deficiency diagnosis method for rape leaves using color histogram on HSV space. Trans. Chin. Soc. Agric. Eng. 2016, 32, 179–187.
16. Liu, S.; Li, L.; Gao, W.; Zhang, Y.; Liu, Y.; Wang, S.; Lu, J. Diagnosis of nitrogen status in winter oilseed rape (Brassica napus L.) using in-situ hyperspectral data and unmanned aerial vehicle (UAV) multispectral images. Comput. Electron. Agric. 2018, 151, 185–195.
17. Wiwart, M.; Fordoński, G.; Żuk-Gołaszewska, K.; Suchowilska, E. Early diagnostics of macronutrient deficiencies in three legume species by color image analysis. Comput. Electron. Agric. 2009, 65, 125–132.
18. Pagola, M.; Ortiz, R.; Irigoyen, I.; Bustince, H.; Barrenechea, E.; Aparicio-Tejo, P.; Lamsfus, C.; Lasa, B. New method to assess barley nitrogen nutrition status based on image colour analysis: Comparison with SPAD-502. Comput. Electron. Agric. 2009, 65, 213–218.
19. Story, D.; Kacira, M.; Kubota, C.; Akoglu, A.; An, L. Lettuce calcium deficiency detection with machine vision computed plant features in controlled environments. Comput. Electron. Agric. 2010, 74, 238–243.
20. Sanyal, P.; Bhattacharya, U.; Parui, S.K.; Bandyopadhyay, S.K.; Patel, S. Color texture analysis of rice leaves diagnosing deficiency in the balance of mineral levels towards improvement of crop productivity. In Proceedings of the 10th International Conference on Information Technology (ICIT 2007), Rourkela, India, 17–20 December 2007; pp. 85–90.
21. Hafsi, C.; Falleh, H.; Saada, M.; Ksouri, R.; Abdelly, C. Potassium deficiency alters growth, photosynthetic performance, secondary metabolites content, and related antioxidant capacity in Sulla carnosa grown under moderate salinity. Plant Physiol. Biochem. 2017, 118, 609–617.
22. Wang, Z.; Wang, E.; Zhu, Y. Image segmentation evaluation: A survey of methods. Artif. Intell. Rev. 2020, 53, 5637–5674.
23. Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93.
24. Abdalla, A.; Cen, H.; El-manawy, A.; He, Y. Infield oilseed rape images segmentation via improved unsupervised learning models combined with supreme color features. Comput. Electron. Agric. 2019, 162, 1057–1068.
25. Zhou, L.; Zhang, C.; Taha, M.F.; Wei, X.; He, Y.; Qiu, Z.; Liu, Y. Wheat Kernel Variety Identification Based on a Large Near-Infrared Spectral Dataset and a Novel Deep Learning-Based Feature Selection Method. Front. Plant Sci. 2020, 11, 575810.
26. Ning, F.; Delhomme, D.; LeCun, Y.; Piano, F.; Bottou, L.; Barbano, P.E. Toward automatic phenotyping of developing embryos from videos. IEEE Trans. Image Process. 2005, 14, 1360–1371.
27. Lee, S.H.; Chan, C.S.; Wilkin, P.; Remagnino, P. Deep-plant: Plant identification with convolutional neural networks. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 452–456.
28. Zhou, L.; Zhang, C.; Taha, M.; Qiu, Z.; He, L. Determination of Leaf Water Content with a Portable NIRS System Based on Deep Learning and Information Fusion Analysis. Trans. ASABE 2021, 64, 127–135.
29. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016, 2016, 3289801.
30. Chu, H.; Zhang, C.; Wang, M.; Gouda, M.; Wei, X.; He, Y.; Liu, Y. Hyperspectral imaging with shallow convolutional neural networks (SCNN) predicts the early herbicide stress in wheat cultivars. J. Hazard. Mater. 2021, 421, 126706.
31. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
32. Ma, X.; Deng, X.; Qi, L.; Jiang, Y.; Li, H.; Wang, Y.; Xing, X. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 2019, 14, e0215676.
33. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
34. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
36. Condori, R.H.M.; Romualdo, L.M.; Bruno, O.M.; de Cerqueira Luz, P.H. Comparison between traditional texture methods and deep learning descriptors for detection of nitrogen deficiency in maize crops. In Proceedings of the 2017 Workshop of Computer Vision (WVC), Venice, Italy, 22–27 October 2017; pp. 7–12.
37. Ghosal, S.; Blystone, D.; Singh, A.K.; Ganapathysubramanian, B.; Singh, A.; Sarkar, S. An explainable deep machine vision framework for plant stress phenotyping. Proc. Natl. Acad. Sci. USA 2018, 115, 4613–4618.
38. Tran, T.-T.; Choi, J.-W.; Le, T.-T.H.; Kim, J.W. A comparative study of deep CNN in forecasting and classifying the macronutrient deficiencies on development of tomato plant. Appl. Sci. 2019, 9, 1601.
39. Abdalla, A.; Cen, H.; Wan, L.; Mehmood, K.; He, Y. Nutrient Status Diagnosis of Infield Oilseed Rape via Deep Learning-Enabled Dynamic Model. IEEE Trans. Ind. Inform. 2021, 17, 4379–4389.
40. Palm, H.W.; Knaus, U.; Appelbaum, S.; Strauch, S.M.; Kotzen, B. Coupled aquaponics systems. In Aquaponics Food Production Systems, 1st ed.; Springer: Cham, Switzerland, 2019; pp. 163–199.
41. Mao, H.; Gao, H.; Zhang, X.; Kumi, F. Nondestructive measurement of total nitrogen in lettuce by integrating spectroscopy and computer vision. Sci. Hortic. 2015, 184, 1–7.
42. Lan, W.; Wu, A.; Wang, Y.; Wang, J.; Li, J. Ionic solidification and size effect of hemihydrate phosphogypsum backfill. China Environ. Sci. 2019, 39, 210–218.
43. Tian, X.; YanYan, W.; LaiHao, L.; WanLing, L.; XianQing, Y.; Xiao, H.; ShaoLing, Y. Material characteristics and eating quality of Trachinotus ovatus muscle. Shipin Kexue Food Sci. 2019, 40, 104–112.
44. Somerville, C.; Cohen, M.; Pantanella, E.; Stankus, A.; Lovatelli, A. Small-Scale Aquaponic Food Production: Integrated Fish and Plant Farming; FAO Fisheries and Aquaculture Technical Paper; FAO: Rome, Italy, 2014; p. 1.
  45. Van Delden, S.H.; Nazarideljou, M.J.; Marcelis, L.F. Nutrient solutions for Arabidopsis thaliana: A study on nutrient solution composition in hydroponics systems. Plant Methods 2020, 16, 72. [Google Scholar] [CrossRef]
  46. Chen, Z.; Wang, X.J.C.; Systems, I.L. Model for estimation of total nitrogen content in sandalwood leaves based on nonlinear mixed effects and dummy variables using multispectral images. Chemom. Intell. Lab. Syst. 2019, 195, 103874. [Google Scholar] [CrossRef]
  47. Liu, N.; Townsend, P.A.; Naber, M.R.; Bethke, P.C.; Hills, W.B.; Wang, Y. Hyperspectral imagery to monitor crop nutrient status within and across growing seasons. Remote Sens. Environ. 2021, 255, 112303. [Google Scholar] [CrossRef]
  48. Pacumbaba, R., Jr.; Beyl, C. Changes in hyperspectral reflectance signatures of lettuce leaves in response to macronutrient deficiencies. Adv. Space Res. 2011, 48, 32–42. [Google Scholar] [CrossRef]
  49. Van Eysinga, J.R.; Smilde, K.W. Nutritional Disorders in Glasshouse Tomatoes, Cucumbers and Lettuce, 1st ed.; Centre for Agricultural Publishing and Documentation of Wageningen University: Gelderland, The Netherlands, 1981. [Google Scholar]
  50. Ben Chaabane, S.; Sayadi, M.; Fnaiech, F.; Brassart, E. Colour image segmentation using homogeneity method and data fusion techniques. EURASIP J. Adv. Signal Process. 2009, 2010, 367297. [Google Scholar] [CrossRef] [Green Version]
  51. Khan, M.W. A survey: Image segmentation techniques. Int. J. Future Comput. Commun. 2014, 3, 89. [Google Scholar] [CrossRef] [Green Version]
  52. Corrias, G.; Micheletti, G.; Barberini, L.; Suri, J.S.; Saba, L. Texture analysis imaging “what a clinical radiologist needs to know”. Eur. J. Radiol. 2022, 146, 110055. [Google Scholar] [CrossRef] [PubMed]
  53. Kociołek, M.; Strzelecki, M.; Obuchowicz, R. Does image normalization and intensity resolution impact texture classification? Comput. Med. Imaging Graph. 2020, 81, 101716. [Google Scholar] [CrossRef]
  54. Kebapci, H.; Yanikoglu, B.; Unal, G. Plant image retrieval using color, shape and texture features. Comput. J. 2011, 54, 1475–1490. [Google Scholar] [CrossRef] [Green Version]
  55. Azimi, S.; Kaur, T.; Gandhi, T.K. A deep learning approach to measure stress level in plants due to Nitrogen deficiency. Measurement 2020, 173, 108650. [Google Scholar] [CrossRef]
  56. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Routledge: Abingdon, UK, 2017. [Google Scholar]
  57. Agarwal, M.; Gupta, S.K.; Biswas, K. Development of Efficient CNN model for Tomato crop disease identification. Sustain. Comput. Inform. Syst. 2020, 28, 100407. [Google Scholar] [CrossRef]
  58. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182. [Google Scholar] [CrossRef]
  59. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  60. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [Green Version]
  61. Mani, B.; Shanmugam, J. Estimating plant macronutrients using VNIR spectroradiometry. Pol. J. Environ. Stud. 2019, 28, 1831–1837. [Google Scholar] [CrossRef]
  62. Sridevy, S.; Vijendran, A.S.; Jagadeeswaran, R.; Djanaguiraman, M. Nitrogen and potassium deficiency identification in maize by image mining, spectral and true colour response. Indian J. Plant Physiol. 2018, 23, 91–99. [Google Scholar] [CrossRef]
  63. Siedliska, A.; Baranowski, P.; Pastuszka-Woźniak, J.; Zubik, M.; Krzyszczak, J. Identification of plant leaf phosphorus content at different growth stages based on hyperspectral reflectance. BMC Plant Biol. 2021, 21, 28. [Google Scholar] [CrossRef] [PubMed]
  64. Abdalla, A.; Cen, H.; Wan, L.; Rashid, R.; Weng, H.; Zhou, W.; He, Y. Fine-tuning convolutional neural network with transfer learning for semantic segmentation of ground-level oilseed rape images in a field with high weed pressure. Comput. Electron. Agric. 2019, 167, 105091. [Google Scholar] [CrossRef]
  65. Phonsa, G.; Manu, K. A survey: Image segmentation techniques. In Harmony Search and Nature Inspired Optimization Algorithms; Springer: Cham, Switzerland; pp. 1123–1140. 2019. [Google Scholar]
  66. Gouda, M.; Ma, M.; Sheng, L.; Xiang, X. SPME-GC-MS & metal oxide E-Nose 18 sensors to validate the possible interactions between bio-active terpenes and egg yolk volatiles. Food Res. Int. 2019, 125, 108611. [Google Scholar] [PubMed]
  67. Yilmaz, A.; Demircali, A.A.; Kocaman, S.; Uvet, H. Comparison of Deep Learning and Traditional Machine Learning Techniques for Classification of Pap Smear Images. arXiv 2009, arXiv:2009.06366. [Google Scholar]
  68. Bosilj, P.; Aptoula, E.; Duckett, T.; Cielniak, G. Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture. J. Field Robot. 2020, 37, 7–19. [Google Scholar] [CrossRef]
  69. Manickam, R.; Rajan, S.K.; Subramanian, C.; Xavi, A.; Eanoch, G.J.; Yesudhas, H.R. Person identification with aerial imaginary using SegNet based semantic segmentation. Earth Sci. Inform. 2020, 13, 1293–1304. [Google Scholar] [CrossRef]
  70. Li, Y.; Shi, T.; Zhang, Y.; Chen, W.; Wang, Z.; Li, H. Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation. ISPRS J. Photogramm. Remote Sens. 2021, 175, 20–33. [Google Scholar] [CrossRef]
  71. Benjdira, B.; Bazi, Y.; Koubaa, A.; Ouni, K. Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images. Remote Sens. 2019, 11, 1369. [Google Scholar] [CrossRef] [Green Version]
  72. Chantharaj, S.; Pornratthanapong, K.; Chitsinpchayakun, P.; Panboonyuen, T.; Vateekul, P.; Lawavirojwong, S.; Srestasathiern, P.; Jitkajornwanich, K. Semantic segmentation on medium-resolution satellite images using deep convolutional networks with remote sensing derived indices. In Proceedings of the 2018 15th International Joint Conference on Computer Science and Software Engineering (JCSSE), Nakhon Pathom, Thailand, 11–13 July 2018; pp. 1–6. [Google Scholar]
  73. Chen, L.; Lin, L.; Cai, G.; Sun, Y.; Huang, T.; Wang, K.; Deng, J. Identification of nitrogen, phosphorus, and potassium deficiencies in rice based on static scanning technology and hierarchical identification method. PLoS ONE 2014, 9, e113200. [Google Scholar]
  74. Lee, J.-H.; Kim, D.-H.; Jeong, S.-N.; Choi, S.-H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef]
  75. Sajedi, H.; Mohammadipanah, F.; Pashaei, A. Automated identification of Myxobacterial genera using convolutional neural network. Sci. Rep. 2019, 9, 18238. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Amri, A.; A’Fifah, I.; Ismail, A.R.; Zarir, A.A. Comparative performance of deep learning and machine learning algorithms on imbalanced handwritten data. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 258–264. [Google Scholar]
Figure 1. Flowchart of the proposed method of nutrient deficiency diagnosis in lettuce plants grown in aquaponics.
Figure 2. Images captured under different uneven lighting conditions.
Figure 3. Visual appearance of fully nutritious (FN) lettuce plants and general symptoms of N, P, and K deficiencies in the Romaine and Flandria cultivars.
Figure 4. Example images after applying the data augmentation protocols: (a) original image, (b) vertical flip, (c) horizontal flip, (d) rotation 90° right, (e) rotation 90° left, (f) film grain noise, (g) glass noise, (h) texturizer noise, (i) scaling 0.5, (j) scaling 0.75, (k) translation down, (l) translation right, (m) translation up, (n) translation left, (o) translation right/up, (p) translation left/down.
Figure 5. Flowchart of the segmentation process using SegNet.
Figure 6. A typical architecture of the deep convolutional neural network (DCNN), consisting of four main layers.
Figure 7. Training accuracy, validation accuracy, training loss, and validation loss for each epoch/iteration of the training process using the Inceptionv3 (a) and ResNet18 (b) models.
Figure 8. Spectra of lettuce leaves for the aquaponic and control systems across the growing season: (a) 15 days after planting, (b) 30 days after planting, (c) 45 days after planting, (d) 60 days after planting.
Figure 9. Visual assessment of the segmentation methods: original images (a), thresholding (b), K-means (c), SegNet (d), and ground truth (e) of lettuce plants grown in aquaponics.
Table 1. Average concentrations of N, P, and K in lettuce leaves (g/kg DW) for the aquaponic and control (hydroponic, FN) systems throughout the growth period.

Day | Aquaponic N | Aquaponic P | Aquaponic K | Control (FN) N | Control (FN) P | Control (FN) K
15 | 3.01 | 0.21 | 3.05 | 6.05 | 0.75 | 6.01
20 | 3.5 | 0.55 | 3.4 | 6.35 | 0.74 | 6.32
25 | 3.54 | 0.34 | 3.7 | 5.57 | 0.67 | 6.52
30 | 3.2 | 0.41 | 4.01 | 5.8 | 0.53 | 6.65
35 | 2.85 | 0.35 | 4.21 | 5.7 | 0.52 | 6.04
40 | 4.05 | 0.38 | 2.5 | 6.55 | 0.65 | 6.18
45 | 3.64 | 0.47 | 3.37 | 5.44 | 0.45 | 6.27
50 | 2.95 | 0.51 | 3.7 | 5.28 | 0.63 | 6.3
55 | 4.25 | 0.21 | 3.97 | 5.31 | 0.52 | 6.57
60 | 2.94 | 0.22 | 4.04 | 5.74 | 0.61 | 6.19
Table 2. Average concentrations of N, P, and K in the water of the aquaponic and control systems throughout the growth period.

Nutrient | Aquaponic (Measured), mg/L | Control (Optimal), mg/L
Total N | 30.8 | 321
P | 10.6 | 36.9
K | 60.8 | 340
Table 3. Typical symptoms of FN, -N, -P, and -K in lettuce throughout the growing period and the number of images acquired for each class.

Class | Typical Symptoms | Images
FN | Healthy plant, leaves are green, and generally with no mottling or spots | 600
-N | Growth is restricted, foliage yellowish green, severe chlorosis of older leaves, and decay of older leaves | 850
-P | Plants are stunted, older leaves die with severe deficiency, leaf margins of older leaves exhibit chlorotic regions followed by necrotic spots, and leaves are darker than normal | 550
-K | Growth is reduced, leaves are less crinkled and darker green than normal; with severe deficiency they become more petiolate, with necrotic spots on the margins of old leaves and chlorotic spots at the tips of older leaves | 1000
Table 4. Training results of the SegNet, Inceptionv3, and ResNet18 models.

Model | Training Accuracy (%) | Validation Accuracy (%) | Training Time (min)
SegNet | 98.30 | 99.29 | 194
Inceptionv3 | 98.90 | 98.00 | 65
ResNet18 | 97.70 | 92.50 | 87
Table 5. Segmentation performance measures (Acc, Pr, Re, F1_score, Dice score, and ST).

Model | Acc (%) | Pr (%) | Re (%) | F1_score (%) | Dice Score (%) | ST (s/image)
SegNet | 99.1 | 99.3 | 99.5 | 99.4 | 99.5 | 0.605
K-means | 83.1 | 83.5 | 83.7 | 83.5 | 83.3 | 5
Thresholding | 75.2 | 75.5 | 75.6 | 75.5 | 75.5 | 0.7
Table 6. Results of statistical analysis of variance (ANOVA) and orthogonal comparisons using Tukey's HSD test *.

Feature | FN | -N | -P | -K
Contrast | 0.314 a | 0.108 b | 0.051 d | 0.074 c
Correlation | 0.966 a | 0.658 b | 0.662 c | 0.646 d
Energy | 0.962 d | 0.967 c | 0.978 a | 0.974 b
Homogeneity | 0.991 d | 0.992 c | 0.995 a | 0.994 b
Entropy | 0.928 a | 0.550 b | 0.352 c | 0.431 d
Red | 0.835 a | 0.357 b | 0.338 c | 0.340 c
Green | 0.782 a | 0.401 b | 0.393 c | 0.392 c
Blue | 0.883 a | 0.242 b | 0.269 c | 0.268 c
Hue | 0.595 a | 0.064 b | 0.039 c | 0.019 d
Saturation | 0.663 a | 0.208 b | 0.093 c | 0.035 d
Value | 0.423 a | 0.115 b | 0.038 c | 0.006 d
Area | 3762 a | 3652 a | 2035 b | 2636 c
Perimeter | 3024 a | 2870 a | 1690 b | 2170 c
Convex hull | 92,483 a | 63,213 b | 32,385 c | 47,986 c
* Values followed by the same letter in the same row are not significantly different according to Tukey's HSD test at the 5% significance level.
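A hedged sketch of the statistical comparison behind Table 6 is shown below for a single feature: a one-way ANOVA across the four nutrient classes followed by Tukey's HSD at the 5% level. The file name and column names are hypothetical placeholders; only the analysis steps mirror the table.

```python
# One-way ANOVA plus Tukey's HSD for a single image feature across the four classes.
# 'features.csv' and its column names ('class', 'contrast') are hypothetical.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("features.csv")            # one row per image: class label + feature values
groups = [g["contrast"].values for _, g in df.groupby("class")]

f_stat, p_value = f_oneway(*groups)         # does the feature differ among FN, -N, -P, -K?
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons at the 5% significance level; the grouping letters in Table 6
# summarise which class means are not significantly different.
tukey = pairwise_tukeyhsd(endog=df["contrast"], groups=df["class"], alpha=0.05)
print(tukey.summary())
```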
Table 7. Confusion matrices of all classification methods: Inceptionv3, ResNet18, SVM, KNN, and DT (rows: true class; columns: predicted class).

Inceptionv3 (Acc = 96.5%)
True class | FN | -N | -P | -K | Total
FN | 120 | 0 | 0 | 0 | 120
-N | 1 | 165 | 3 | 1 | 170
-P | 2 | 3 | 100 | 5 | 110
-K | 0 | 3 | 3 | 194 | 200
Total | 123 | 171 | 106 | 200 | 600

ResNet18 (Acc = 92.1%)
True class | FN | -N | -P | -K | Total
FN | 120 | 0 | 0 | 0 | 120
-N | 3 | 150 | 10 | 7 | 170
-P | 2 | 7 | 95 | 6 | 110
-K | 2 | 4 | 6 | 188 | 200
Total | 127 | 161 | 111 | 201 | 600

SVM (Acc = 86.1%)
True class | FN | -N | -P | -K | Total
FN | 110 | 2 | 3 | 5 | 120
-N | 8 | 140 | 12 | 10 | 170
-P | 5 | 9 | 88 | 8 | 110
-K | 8 | 6 | 7 | 179 | 200
Total | 131 | 157 | 110 | 202 | 600

KNN (Acc = 84.5%)
True class | FN | -N | -P | -K | Total
FN | 108 | 2 | 5 | 5 | 120
-N | 8 | 139 | 12 | 11 | 170
-P | 5 | 12 | 85 | 8 | 110
-K | 10 | 5 | 10 | 175 | 200
Total | 131 | 158 | 112 | 199 | 600

DT (Acc = 79.8%)
True class | FN | -N | -P | -K | Total
FN | 100 | 3 | 7 | 10 | 120
-N | 9 | 130 | 15 | 16 | 170
-P | 6 | 13 | 80 | 11 | 110
-K | 11 | 8 | 12 | 169 | 200
Total | 126 | 154 | 114 | 206 | 600
Table 8. Classification performance measures of all the applied methods.

Measure | Inceptionv3 | ResNet18 | SVM | KNN | DT
Acc (%) | 96.5 | 92.1 | 86.1 | 84.5 | 79.8
Pr (%) | 95.7 | 91.6 | 85.5 | 83.7 | 78.9
Re (%) | 96.2 | 92.1 | 85.8 | 84.1 | 79.2
F1_score (%) | 95.9 | 91.8 | 85.6 | 83.8 | 79.0
CT (s/image) | 0.6 | 0.8 | 0.2 | 0.4 | 0.5