3.1. Test and Implementation
The processing test was based on two architectures: the first is a low-cost embedded platform, the Nvidia Jetson Nano, and the second is a laptop. On the Jetson Nano, using the CUDA language reduces the processing time and allows us to achieve real-time processing. The same holds for the laptop, but its drawback is the portability of our system.
Table 1 shows the specifications of the devices used.
As a first step, a sequential implementation was proposed in order to study the back-end part of our algorithm. On both the laptop and the Jetson Nano, three functions constitute the bulk of the algorithm's total processing time. A workload analysis based on a hardware/software co-design approach therefore allows us to reduce the processing time of these three functions as much as possible on both architectures, and mainly on the Jetson Nano. Its compact size of 10 cm × 8 cm × 2.9 cm, minimal power consumption of 5 W, limited resources, adequate memory capacity, and relatively low cost allow it to meet the constraining specifications of embedded systems. After the sequential implementation, we relied on the CUDA parallel programming language to accelerate the processing.
Figure 11 shows a sequential implementation based on C/C++.
In Figure 11, we can see that on the laptop the pre-processing part occupies more than 64% of the processing time, whereas on the Jetson Nano it accounts for only 42%. After this timing analysis, we decided to accelerate the three functions to reduce the processing time and satisfy the real-time constraint.
Table 2 shows a processing time comparison on the laptop and Jetson Nano based on C/C++ and CUDA.
Using the results of the performance profiler, on the laptop we went from a pre-processing time of 387.7 ms to 3.9 ms, an acceleration of almost 100 times. The indices processing was accelerated by a factor of 10, and the counting by a factor of 56. On the heterogeneous embedded system Jetson Nano, the pre-processing time went from 833.4 ms to 6.4 ms, an acceleration of 130 times. The second function went from 432.1 ms to 12.4 ms, equivalent to an acceleration of about 34 times, and the counting was accelerated by a factor of 38.
Figure 12 represents a temporal synthesis of the different functions on our Laptop and on Jetson Nano.
After the processing time analysis and acceleration, we added a detailed study of memory consumption and transfer time. The analysis results were obtained with the profiler provided by Nvidia, which reports the percentage of total GPU activity spent in the "CUDA memcpy H_to_D" and "CUDA memcpy D_to_H" operations.
Table 3 shows a summary of the results obtained.
Table 3 shows that on the laptop the CPU→GPU transfer rate is 2.4829 GB/s and the GPU→CPU rate is 2.7274 GB/s, while the Jetson Nano embedded system reaches values three times lower, on the order of MB/s. Our system performs two movements: the first moves the box containing the embedded architecture and the cameras used for processing, and the second moves all the metal supports to a new row. With this method, we ensure that all the plants are processed.
Figure 13 shows an overview of our system.
The first step performed was testing and validating our robot in a closed space with three rows to validate the mechanism of our system; the rows contain mint, parsley, and pepper plants, respectively. Validation in a closed space does not guarantee accurate functionality in a real environment. For this reason, after this validation we opted for a real-world test in order to validate our algorithmic and systematic approach. In Figure 13, the top left shows the developed robot and the bottom shows a multispectral and an RGB camera view. The bottom right image shows the box containing the Arduino architecture that operates the movement part of our robot.
Figure 14 shows images collected by our robot.
After the prototype validation in the laboratory, we moved to field validation to evaluate the prototype's performance. The results showed that the prototype works under real conditions with the same mechanism as in the laboratory. The test was conducted in both an open field and a closed greenhouse, demonstrating our system's flexibility. In addition, our robot can be adapted to different precision agriculture applications by replacing the back-end with the appropriate algorithm, such as weed detection, fruit and plant counting, or disease detection. A decision-making system can also be added to take real-time actions in the agricultural field. This approach will help the farmer make precise and fast decisions to avoid serious problems. The developed system is characterized by its low cost and low energy consumption.
Figure 15 shows the test of our system in a real agricultural field.
In Figure 15, module 1 represents the battery that powers the motors; a power bank supplies the camera and the embedded architecture. Module 2 is a box containing the control part that drives the motors. Module 3 is the robot embedding the cameras and the processing architecture. Additionally, we added a power consumption analysis as part of the robot specifications study.
Figure 16 shows the results obtained.
In Figure 16, we measured the current consumption and the power of the electric motors over several iterations. The maximum power consumption is about 2.9 W, and the maximum current consumption is 0.59 A.
3.2. Experimental Results
The approach used in this work was based on evaluating several indices, namely NDVI, NDWI, and NDRE. These indices are collected over all 20 zones to determine the regions with low index values. Afterward, we determine the GPS coordinates of each region with its index. The first index calculated is NDVI.
Figure 17 shows the evaluation of NDVI in the 20 zones.
The NDVI processing was based on several plants to evaluate the variation of this index. The index values vary between 0.15 and 0.8. Generally, values close to 1 indicate strong vegetation (i.e., the plant has no problem at the vegetation level). Once we have calculated the variation of NDVI over several plants in the different regions, we calculate the average NDVI in each region to identify the regions with less vegetation. This method allows us to locate the regions with vegetation problems.
Figure 18 shows the average NDVI in each region.
The results in Figure 18 show that zones 6 and 10 give a low index compared to the other zones. The mean values vary between 0.15 for zone 6 and 0.67 for zone 9. Zones 6 and 10 present low values due to problems in the supply of the components the plants need, reflecting the strong relationship between vegetation, water, and the nitrogen content of the plants. After locating the regions with a low index, we tried to locate the individual plants using the vegetation index.
Figure 19 shows the vegetation index calculated for each plant.
By evaluating the different plants in the areas with a vegetation problem, we determined each plant's exact position from the collected images together with the precise GPS data of the Parrot Sequoia+ camera. This localization identifies the plants with a vegetation problem, which will help the farmer or robots make precise decisions.
Figure 20 shows images of the plants and their NDVI results. In Figure 20, we have the RGB images of tomato plants collected in the greenhouse, together with the evaluation of the binarized NDVI index using the threshold T1 = 0.5. Additionally, we varied the threshold for the red images with T2 = 0.4. This threshold variation shows the plants whose index exceeds T1 or T2, which helps us classify the final results. To determine the index with its proposed threshold, we need to take plants of each type and test them with the vegetation sensors, so that each plant is afterward assigned its own index for decision-making.
After determining the NDVI vegetation index, we need to calculate the other indices, NDRE and NDWI, because vegetation alone is not enough to indicate that a plant is in good condition. The second evaluation was based on the NDRE index shown in Figure 21. The NDRE evaluation used a 0.2 threshold: regions with an index lower than 0.2 suffer from a nitrogen deficiency in the plants.
Figure 21 summarizes the results obtained for the 20 zones evaluated. The index varies between 0.079 and 0.75, with the minimum in zone 10 and the maximum in zone 5. We also find zones 1 and 6 with NDRE values of 0.15 and 0.08; these zones reflect a lack of nitrogen in the plants. As with the vegetation index, zones 6 and 10 are in the same situation, but the NDRE evaluation adds another zone suffering from a nitrogen deficiency. From this first synthesis, we concluded that the suffering zones are 1, 6, and 10. For this reason, it is necessary to extend the evaluation to the average NDRE index in the different regions.
Figure 22 shows the different areas’ evaluation based on each zone’s average.
The application of the average processing in the different zones, shown in Figure 22, reinforced the synthesis elaborated above and confirmed that zones 1, 6, and 10 suffer from a lack of nitrogen. Our evaluation methodology is based on a thorough assessment that aims to determine the exact plants where problems occur.
In Figure 23, we show an evaluation of 20 plants in zones 1, 6, and 10. The results show that the plants in zones 6 and 10 have very low values, between 0.05 and 0.1, compared to zone 1, whose values vary between 0.11 and 0.2. After evaluating the different plants, it is necessary to determine their precise localization. The third vital index chosen for our evaluation is NDWI, which determines the amount of water in the vegetation.
Figure 24 shows the evaluation of the NDWI.
The NDWI evaluation is based on a threshold of 0.3. Areas with values below 0.3 have a water deficiency or no water, while areas above 0.3 contain water. The index evaluation showed that, as with the NDRE, zones 1, 6, and 10 have low water content.
Figure 25 and Figure 26 present the results in each zone.
The NDWI values in Figure 26 range from 0.13 for zone 10 to 0.618 for zone 18; the NDWI in this figure is calculated as the average of each zone. The global analysis of the greenhouse study showed that vegetation problems appeared in zones 6 and 10, while water and nitrogen problems appeared in zones 1, 6, and 10. This interpretation methodology will help farmers monitor agricultural fields and determine the plants and areas that suffer from various problems, which will increase farm productivity. After processing the indices of each plant, we obtain the precise localization of each plant.
Figure 27 shows an example of NDVI.
The GPS data shown in Figure 28 are provided by the Parrot Sequoia+ camera. We then generated a map of the greenhouse with the 20 zones and the index results with the GPS data, as shown in Figure 27. This will improve the decision system associated with the closed greenhouse. In the map, the green and blue colored rectangles mark zones 10 and 6, and the GPS coordinates of the plants with a vegetation problem are given. After evaluating the different vital indices, including NDVI, NDRE, and NDWI, we obtained the different information needed in the agricultural field. This information is the most relevant for an overview of plant health. As soon as the evaluation is finished, a global map is generated for the farmer, containing the different information about the greenhouse.
Figure 28 shows the overall map of the indices monitoring.
Table 4 shows a summary of the normalized index results in the closed greenhouse.