Embedded computing systems in precision agriculture: tools and applications

Precision agriculture (PA) research aims to design decision systems based on agricultural site control and management. These systems consist of observing fields and measuring metrics to optimize yields and investments while preserving resources. The corresponding applications can be found over large agricultural areas and rely on satellites, unmanned aerial vehicles (UAVs), and ground robots. All these applications are based on various algorithms that are complex in terms of processing time. While some of these algorithms can be evaluated offline on workstations or desktops, this is not the case for algorithms that need to be embedded and must operate and help make decisions in real time. We therefore need an advanced study, using a hardware-software co-design approach, to design decision systems that embed the different algorithms, including sensor data acquisition and processing units. In this work, we propose a review of information-processing tools and embedded systems for PA algorithms across different applications: weed detection, numerical counting, monitoring of plant indexes, and disease detection. This review is based on more than 100 papers, from which we extract useful information on the different techniques used and the information-processing systems. The elaborated study presents the various tools, databases, and systems in order to extract the advantages and disadvantages of each system and application.


Introduction
Smart farming or precision agriculture (PA) is an operating concept in the agriculture field that collects, processes and evaluates temporal, spatial and individual data, and integrates them with additional information to provide operational advice based on estimated variability, in order to enhance resource-use efficiency, sustainability, productivity and quality of the agricultural product. PA systems are also used to derive soil/crop requirements from real-time or offline numerical data to produce prescription maps (Berni et al. 2009). Recently, PA has seen a revolution in terms of fields and solutions. These solutions aim to resolve the different problems encountered in agriculture; some of them include different surveillance techniques for plants and crops in agricultural fields (Berni et al. 2009; Kamath et al. 2019; Shadrin et al. 2019; Sun et al. 2019; Yang et al. 2019). Previously, agricultural field control was based on sensor networks used to extract various soil indexes, such as water quantity, and vegetation indexes in plants. In this case, the problem is the cost of sensor maintenance, especially in large agricultural areas, as well as the permanent monitoring of the sensors used in the surveillance of agricultural fields. Today, the importance of research on PA is shown by implementing inexpensive technologies to resolve a variety of problems. The solutions that PA can offer us are based on ground robots, Unmanned Aerial Vehicles (UAVs) and satellite imagery (Gevaert et al. 2015; Viani et al. 2017; Ahmed et al. 2018; Khan et al. 2019). On the other side, we find various applications in PA, including weed detection, numerical counting, monitoring of plant indexes and disease detection (Tucker 1979; De Castro et al. 2018; Zhou et al. 2018; Jin et al. 2019). Nowadays, there is a need to embed these applications in real-time systems, which requires a detailed study using the Hardware-Software co-design approach (Noguera and Badia 2002). This study is deepened in terms of data-flow processing. The reflection on the hardware architecture is part of the algorithm-architecture mapping process, with the constraints imposed by challenges such as computation speed and hardware resource utilisation. The specific needs and requirements (memory, computational resources, bandwidth, etc.) are studied for the addressed application (Martelli 2019; Saussard 2017).
The management of agricultural fields helps farmers to make decisions that influence the productivity of crops and plants (Singh et al. 2020). Several applications have been proposed in the agricultural area. J. A. J. Berni et al. (2009) proposed a work for the monitoring of crop vegetation; the authors used a UAV equipped with thermal and multispectral sensors (Berni et al. 2009). Likewise, R. Kamath et al. (2019) proposed a low-cost system based on an RGB sensor and a Raspberry Pi architecture for crop monitoring in greenhouse farms. W. Yang et al. (2019) based their work on the study of damage in plants; this study used a hyperspectral sensor to see the influence of cold damage on corn plant productivity. Y. Sun et al. (2019) processed a soil index matrix to monitor vegetation, also based on a hyperspectral sensor. Integrating these applications in architectures based on ground or image sensors requires a mapping study between algorithms, architectures and sensors. For this reason, mapping algorithms on embedded hardware systems is a rather complicated task that requires a thorough study in order to propose an optimised implementation of these algorithms. Hardware/Software co-design allows the algorithmic complexity and the architecture of hardware models and specifications to be studied simultaneously in order to propose real-time implementations (Hou et al. 2019). This approach can be applied through three tasks. The first task consists of studying the algorithm to build the different functional blocks. This method gives general visibility and facilitates the optimisation of the blocks in order to implement them in either a homogeneous or a heterogeneous architecture; it also makes it possible to reduce the complexity of the algorithm used. The second step of the H/S co-design is dedicated to the study of different architectures to determine the strong and weak points of each architecture in terms of memory access, processing times and energy consumption (Hou et al. 2019). The last one is the implementation task, which comes after the algorithm-architecture mapping. At this step, we can choose the high-level synthesis (HLS) flow used to achieve the embedded implementation on a heterogeneous or homogeneous architecture.
This work presents an overview of PA algorithms and dedicated architectures. The targeted applications are based on two types of processing: software or hardware. For the hardware part, we present the different works that used embedded implementations, either on heterogeneous systems such as CPU-GPU or on homogeneous systems that use only one type of processor, such as FPGA, GPU or CPU. We also present the different tools for monitoring and collecting images for subsequent processing; in this context, we find satellite, UAV and ground-robot imagery. We also present different databases, either multispectral or RGB, dedicated to evaluating algorithms. Deep learning applications are also presented and discussed.
Hence, this work is divided into six sections. After this introduction, the second section gives a general introduction to PA in different applications and monitoring indexes. Section 'Tools and database' presents the different tools and data used in PA. Section 'Embedded system in precision agriculture' deals with applications based on embedded systems. Section 'Discussion and future work' presents a discussion and future work, and finally, section 'Conclusion' presents a general conclusion on the proposed work. Figure 1 shows the general paper structure.

Indexes used in PA
The originality of these indexes lies in the use of various image bands. The most used surveillance index in PA is the NDVI. This index is based on two bands, near-infrared and red (Tucker 1979), and aims to detect the vegetation in plants. Usually, the index is related to the biomass of the plant: the closer it is to 1, the greater the biomass development of the plant. NDVI monitoring [Equation (1)] requires the use of high-resolution cameras that can capture the red and near-infrared bands. In this context, we can find high-precision images such as satellite images, but the problem with this type of image is the noise due to clouds. The authors in Jin Chen et al. (2004) proposed a technique for filtering this type of problem. Recently, several works have been developed around this index in various applications. Among these works, we can cite M. Gholamnia et al. (2019). The water parameter extraction is done using the Fully Convolutional Networks (FCN) algorithm. We also find the NDMI, the Normalised Difference Moisture Index, which quantifies the moisture in crops [Equation (3)]. This index is based on the shortwave infrared (SWIR) and near-infrared bands. It ranges from −1 to 1, with values below 0 representing dry soil and values between 0 and 1 representing humid soil (Hardisky et al. 1983). Another index has a classification similar to the NDVI, but it is based on the green band: the Green Normalised Difference Vegetation Index (GNDVI). It is more sensitive to chlorophyll than the NDVI, and its values range between 0 and 1 [Equation (4)]. The BSI, or bare soil index, is used to detect soil variation using the NIR, B, red and SWIR bands [Equation (5)]. Typically, the BSI has been used for field drought and fire detection studies (Rikimaru et al. 2002). The Soil Adjusted Vegetation Index (SAVI) is used to adjust the NDVI values [Equation (6)]. The SAVI is based on the NDVI but uses a parameter A, which ranges from −1 to 1. This parameter has high values for low vegetation and low values for large vegetation areas (Huete 1988).
In the scientific literature, there are also other indexes that are less widely used by researchers in applications. For example, the modified normalised difference vegetation index (MNDVI) is dedicated to the detection of frost damage in agricultural areas [Equation (7)]. The MNDVI is based on the near-infrared band, with a wavelength of 0.78-3 μm, and the mid-infrared band (MIR), with a wavelength of 3-50 μm (Jurgens 1997).

R E T R A C T E D

We can also find the normalised difference red edge index (NDRE), which is based on the NIR and red-edge bands (Horler et al. 1983) [Equation (8)]. The Canopy Chlorophyll Content Index (CCCI) is an index based on the NIR, red-edge and R bands to calculate and monitor nitrogen in plants (Wang et al. 2007). This index helps to determine the nitrogen values in plants, and it is calculated using the two vegetation indexes NDVI and NDRE [Equation (9)].
We also find the work of I. Herrmann et al. (2010), which presents the Blue Normalised Difference Vegetation Index (BNDVI). This index gives exact fertiliser values for plants (Herrmann et al. 2010) [Equation (10)].
In vegetation monitoring, we can find indexes other than the NDVI. Srivastava et al. (2019) showed in their work that the EVI (Enhanced Vegetation Index), proposed by A. R. Huete et al. (1997), can be used to monitor crops. Kaufman and Tanré (1992) proposed the Atmospherically Resistant Vegetation Index (ARVI) to evaluate the atmospheric resistance of vegetation. The same authors elaborated another index for correcting the background in the calculation of vegetation indexes: the Soil and Atmospherically Resistant Vegetation Index (SARVI). All these vegetation indexes have been proposed to eliminate the calculation failures of the NDVI (Kaufman and Tanré 1992).
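The indexes discussed above are listed with their equations in the paper's Table 1. For readability, the standard formulations from the cited literature can be written as follows; the numbering used in the text may differ, and L denotes the SAVI soil-adjustment parameter:

```latex
\begin{align}
\mathrm{NDVI}  &= \frac{NIR - R}{NIR + R} &
\mathrm{NDMI}  &= \frac{NIR - SWIR}{NIR + SWIR} \\
\mathrm{GNDVI} &= \frac{NIR - G}{NIR + G} &
\mathrm{NDRE}  &= \frac{NIR - RE}{NIR + RE} \\
\mathrm{BNDVI} &= \frac{NIR - B}{NIR + B} &
\mathrm{SAVI}  &= \frac{(NIR - R)(1 + L)}{NIR + R + L}
\end{align}
```

All the normalised-difference forms share the same structure, a band difference divided by the band sum, which bounds them to the interval [−1, 1].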
Agricultural field monitoring is mandatory to achieve a high quality of agricultural products. Generally, the algorithms for index monitoring are based on three phases. The first one is data acquisition, using different tools: cameras embedded on UAVs or ground robots, or satellite images. After image acquisition, it is necessary to verify whether the image bands require separation. Phase 2 consists of index calculation according to the application. For example, to monitor vegetation, we have to focus on the near-infrared, green and red bands in order to calculate the different vegetation indexes. On the other side, we find the humidity and water indexes, which require the SWIR, NIR and G bands. The calculation of the NDMI index is based on the SWIR band, which is difficult to find in multispectral cameras; this type of sensor does not provide the SWIR band, which requires the use of a hyperspectral sensor. After the extraction of the different indexes, it is important to determine the application that will be used later; this identification will allow decisions to be made. In this context, we find the work of X. Li et al. (2019), which provides a river detection system based on the OBIA algorithm in different environments. The authors in Aghighi et al. (2018) suggest a system based on the vegetation index for predicting yields in fields containing maize. This kind of irrigation system management leads to good irrigation quality and avoids losses, especially in countries with low water resources. The work of C. Coakley et al. (2019) is based on the vegetation index to monitor agricultural fields dedicated to Khmer rice production in order to control the irrigation system. Table 1 shows the different indexes used in PA with their equations.
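Phase 2 of this pipeline, the per-pixel index calculation from separated bands, can be sketched as follows. This is a minimal NumPy illustration under the standard index definitions, not any cited author's implementation; the 2 × 2 reflectance patches are hypothetical:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def ndmi(nir, swir, eps=1e-9):
    """Normalised Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR).
    Requires a SWIR band, hence a hyperspectral sensor in practice."""
    nir, swir = nir.astype(float), swir.astype(float)
    return (nir - swir) / (nir + swir + eps)

# Hypothetical 2x2 reflectance patches standing in for separated camera bands.
nir = np.array([[0.8, 0.6], [0.4, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.2]])
print(ndvi(nir, red).round(2))  # values close to 1 indicate dense biomass
```

The small `eps` term simply avoids division by zero on dark pixels where both bands are null.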

Weed detection
Weeds can compete with crops for water and nutrients and can host a variety of pathogens, insect pests or nematodes, such as the fungi that cause black rot. This problem has a direct impact on agricultural soil quality. Weed detection is typically carried out on row crops in the early part of the growing season. This requires very high-resolution images, for which UAVs are the best means of exploration. OBIA, or object-based image analysis, is more efficient than conventional techniques. The OBIA algorithm is based on image processing for monitoring agricultural soil problems and focuses on segmentation, data collection and classification (Hossain and Chen 2019). The OBIA algorithm, based on five steps, can be used in different applications. The first step consists of acquiring images collected using the tools most commonly used in PA, namely satellites, UAVs or ground robots. After that, a segmentation of the images is performed in order to analyse the objects that will be used later. For example, if we want images that contain vegetation, it is enough to calculate the NDVI index to estimate the regions that contain vegetation. In the case of weed detection, a data classification between plants, weeds and plants that contain an acceptable number of weeds is required. The last step of the OBIA algorithm is object recognition. Figure 2 shows the OBIA algorithm used in remote sensing.
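The segmentation and classification steps of this pipeline can be illustrated with a toy NumPy sketch. The NDVI threshold of 0.3 and the row-based crop/weed rule below are illustrative assumptions, not values taken from the cited works:

```python
import numpy as np

def segment_vegetation(nir, red, threshold=0.3):
    """OBIA-style segmentation step: mark pixels whose NDVI exceeds a
    threshold as candidate vegetation objects (threshold is illustrative)."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi > threshold

def classify_rows(veg_mask, crop_ratio=0.5):
    """Toy classification step: image rows whose vegetation fraction
    exceeds crop_ratio are treated as crop rows, the rest as weed patches."""
    frac = veg_mask.mean(axis=1)
    return np.where(frac > crop_ratio, "crop", "weed")

nir = np.array([[0.8, 0.8], [0.2, 0.2]])
red = np.array([[0.1, 0.1], [0.15, 0.15]])
mask = segment_vegetation(nir, red)
print(classify_rows(mask))  # first row vegetated, second row not
```

A real OBIA workflow would operate on segmented objects rather than raw rows, but the thresholding logic is the same.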
Several works have been developed to resolve the weed problem. I. Sa et al. (2018) present a classification approach based on multispectral images collected by a micro aerial vehicle to detect weeds. They implemented neural networks and built a dataset using a field with different herbicide levels, which gives plots that contain only crops or only weeds. Subsequently, they used the NDVI vegetation index to characterise the plots. Their work is divided into three parts. The first part is the database construction, based on a 40 × 40 m² weed field with different levels (minimum, medium and maximum) of herbicides. Consequently, the authors built three plots with single crops (maximum level), crops and weeds (medium level) and weeds only (minimum level). Each image is taken in these plots at a 660 nm wavelength for the R band and 790 nm for the NIR. The second part is related to the database pre-processing. This processing aligns the images in order to calculate the NDVI vegetation index; the alignment used for all images is based on distortion correction, correlation and image cropping. The NDVI extraction is then achieved by applying a Gaussian blur to the aligned images and removing the false responses (shadows, small debris). The Dense Semantic Segmentation Framework is used in the last part, where the authors used a cascade of neural networks to predict the areas that contain weeds (Sa et al. 2018).
Another work (de Castro et al. 2018) has been proposed for weed detection based on the OBIA approach. This work consisted of four steps. The first step aims to store the necessary data from the field used. The authors separated the plots into 10 × 10 m² sizes to process each sub-patch sequentially and automatically. In this step, the eCognition software is used to perform the multi-scale segmentation of the images. They then eliminated the shadow objects, because they influence the values of the NDVI index calculated later. After the calculation of the NDVI indexes, a thresholding is applied in order to separate crops and weeds. The authors used the NIR/G ratio to detect soil vegetation and plants; for that, they used the Otsu thresholding method supported by the eCognition software (Torres-Sánchez et al. 2015). Then, they applied a method proposed by Peña et al. (2013) to detect the crop line.
Step 2 of the above work consists of using an RF classifier to extract, for each collected image, the brightness, NIR standard deviation, NIR/G values and mean Red, Green and NIR in order to distinguish crops, bare soil and weeds. This classifier is also supported by the eCognition software. Step 3 consists of making a classification correction to avoid false classifications. After the classification of the different components of the agricultural fields, the last step is to generate prescription maps with 0.5 m × 0.5 m grid frameworks, placing the crops at the higher level and the weeds at the lower level (de Castro et al. 2018) (Figure 3).
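The per-object features listed above (brightness, NIR standard deviation, NIR/G ratio and per-band means) can be computed with a few lines of NumPy. This sketch only mimics the feature-extraction stage; it is not the eCognition RF classifier itself, and the pixel values are hypothetical:

```python
import numpy as np

def object_features(red, green, nir):
    """Features analogous to those fed to the RF classifier in the text:
    brightness, NIR standard deviation, NIR/G ratio and per-band means."""
    return {
        "brightness": float(np.mean((red + green + nir) / 3.0)),
        "nir_std": float(np.std(nir)),
        "nir_g": float(np.mean(nir) / (np.mean(green) + 1e-9)),
        "mean_red": float(np.mean(red)),
        "mean_green": float(np.mean(green)),
        "mean_nir": float(np.mean(nir)),
    }

# Hypothetical per-object pixel values for one segmented region.
feats = object_features(np.array([0.1, 0.2]), np.array([0.2, 0.2]), np.array([0.6, 0.8]))
print(feats["nir_g"])
```

Each segmented object would yield one such feature vector, which a classifier then maps to crop, bare soil or weed.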

Numerical counting
The collected images are first converted into a clean colour space, and an HSV conversion is then applied. The following task combines the clusters/objects via the bwareaopen(BW, pixel) Matlab command. The authors then applied a conversion to the L*a*b model as well as a conversion of the image to double precision. The last task creates a table and stores the number of plants counted and the percentage of green pixels. Figure 4 shows the algorithm proposed in the work of Gnädinger and Schmidhalter (2017). The algorithm proposed in Chen et al. (2017) is based on deep learning to count fruits in different structures. This algorithm is divided into three parts. The first one is dedicated to labelling the different image components to prepare the labels for counting. The second part aims to estimate the number of fruits per blob. The third part estimates the total number of fruits detected in the images. Figure 5 gives an overview of the proposed algorithm.

Disease detection
The detection of diseases in agricultural fields has an influence on agricultural yield, which implies the permanent monitoring of agricultural products against different diseases. In this context, P. Jiang et al. (2019) focused on the detection of apple diseases. The most frequent diseases found in apple plants are Rust, Mosaic, Alternaria leaf spot, Grey spot and Brown spot. The proposed algorithm is divided into three steps. The first step focuses on the collection of data containing images with different diseases; these data are collected by two methods. The first is achieved using plants in the laboratory, and the second in a real environment for the validation of the results obtained at the end of the evaluation. After data collection, manual annotation is required, followed by a series of data augmentations for the extension of the ALDD (Apple Leaf Disease Dataset). The second step consists of training the convolutional neural network used in this work. The collected data are then divided into two parts: one part to train the INAR-SSD model and the second part to test and evaluate the performance. The last step contains the final results of the disease detection. Figure 6 shows the proposed algorithm (Jiang et al. 2019).
Table 2 summarises different applications in PA with the type of algorithm and the type of images used.

Tools and database
Databases in PA are very important tools to evaluate algorithms. A database that contains a large number of images with different resolutions is recommended in

agricultural fields that contain rice to make a prediction of field yields. Table 3 shows the different applications using satellites.

Unmanned aerial vehicle
A UAV is an aircraft without a human pilot, controlled over a radio channel. Different models of UAVs have been used in PA over the last two decades, in a variety of works such as weed detection, numerical counting and agricultural field monitoring. Generally, five models of UAVs exist. Figure 8 shows the different UAV models: Fixed Wing (a), Single rotor with only one motor (b), Quadcopter (c), Hexacopter (d) and Octocopter (e). Several works in smart farming are based on UAVs. The purpose of these UAVs is to carry the different sensors in order to collect the various information used in the agricultural fields; this information depends on the data type provided by the sensor. S. Candiago et al. (2015) proposed a work that deals with the evaluation of different indexes, such as the vegetation index, the green vegetation index and the adjusted vegetation index. This evaluation was based on tomato fields to monitor plant health. The system proposed in this study is based on a multirotor hexacopter drone, which weighed 5.4 kg with a 9 kg maximum take-off weight. The control of this drone was done in two ways: manual control using a radio controller, or semi-autonomous control based on waypoints. The batteries allowed 21 min of hovering flight time and 18 min for acquisition (Candiago et al. 2015).
S. Guan et al. (2019) provide a system to calculate a high-resolution vegetation index based on a quadcopter UAV and a multispectral camera. This camera is based on a TetraCam sensor that provides RGB images and a NIR band. The NDVI value given by the multispectral sensor is then evaluated to find the relationship between the vegetation index and the amount of fertiliser in rice and wheat fields.

The UAV supports a maximum weight of 15 kg and a speed of 18 m/s with a 4500 mAh battery (Horstrand et al. 2019). J. Y. Baek et al. (2019) proposed an algorithm for chlorophyll estimation based on a multispectral sensor. The tools used in this work are the MicaSense RedEdge camera and an Inspire 2 quadcopter drone. This drone has an overall weight of 4 kg and a maximum speed of 94 km/h, with a flight time of 30 min at low payload weight (Baek et al. 2019). Table 4 shows a synthesis of the different works based on UAVs and multispectral cameras.

Sensors
In PA, many applications use a multispectral or hyperspectral camera as a sensor. Such a camera is a device that captures several wavelengths in a single plane, which are separated for specific analysis and recombination applications. This allows a much more accurate analysis and the visualisation of details that are not visible to the eye. The most common cameras used in PA are the Parrot Sequoia+, TetraCam and RedEdge-MX.
Figure 9 shows different cameras used in PA, where (A) is the RedEdge camera, (B) the Parrot Sequoia camera, (C) the TetraCam and (D) the FX10 camera.
In Figure 9, camera (A) is the RedEdge camera, which gives five spectral bands: Red edge (717 nm), B (475 nm), R (668 nm), G (560 nm) and NIR (840 nm). This type of camera offers the possibility to calculate different vegetation indexes based on the different bands. In the state of the art, we find the work of P. Marcon et al. (2018), who used this camera to calculate vegetation indexes. In the same context, we also find the work of B. Lu et al. (2019), based on the RedEdge camera, to compare vegetation-index results obtained with hyperspectral and multispectral cameras. In the same figure, (B) shows the Parrot Sequoia camera. This camera offers separate images with different spectral bands. Its strong point is its low weight (72 g) compared to the RedEdge camera (250 g). It offers four separate bands plus an RGB image with a 4608 × 3456 pixel resolution, the other bands having a 1280 × 960 pixel resolution. The G band is collected at a 550 nm wavelength, the R band at 660 nm, the Red edge at 735 nm and the NIR at 790 nm. In the literature, several works were based on the Parrot Sequoia camera. S. Guan et al. (2019) used this multispectral camera to evaluate the relationship between NDVI values and fertiliser amounts on rice and wheat yields. The TetraCam (C) is a low-cost camera for agricultural applications. This instrument includes an integrated Incident Light Sensor (ILS) measurement. The advantage of this camera is its low cost compared to the others, but its weak point is the type of images provided: it gives the 3 RGB bands plus another image with the NIR band, which increases the processing in applications that require index monitoring. Indexes are based on separate bands, and the other two cameras give this possibility with a high image rate per second. In this context, S. Candiago et al. (2015) based their work on the TetraCam camera to evaluate multispectral images and vegetation indexes from a UAV. The Specim FX10 (D), in Figure 9, is a hyperspectral camera with more than 224 spectral bands varying between 400 and 1000 nm. This camera can give more spectral bands than the other cameras, at a higher frequency, but its weak point is its weight of 1.26 kg compared to the Sequoia, RedEdge-MX and TetraCam cameras, which creates battery-life problems in UAV-based applications. In the same context, the authors in Horstrand et al. (2019) present an embedded system for real-time agricultural field monitoring based on the FX10 hyperspectral camera. Table 5 shows different multispectral and hyperspectral sensors with their weights and rates (fps).
The use of hyperspectral sensors allows detailed bands such as the shortwave infrared (SWIR) to be extracted. These bands allow other indexes, such as the NDMI, to be computed, which cannot be calculated without them. Similarly, hyperspectral sensors provide a wide bandwidth, from 400 to 1730 nm. This variety gives more bands at different wavelengths, which shows more details in the different plants. For example, the Hyspex camera covers wavelengths between 950 and 1730 nm with up to 232 bands. A wide range of wavelengths gives a very high number of bands, whereas reducing this range provides a limited number of bands. As an example, the Ximea MQ022HG hyperspectral sensor covers wavelengths between 665 and 960 nm. This reduced range gives only 25 bands, compared with the Hyspex S-640i sensor, which reaches 232, or the FX10 with 224 bands. The advantage of this sensor, even though it has limited bands, is a low power consumption that does not exceed 1.6 W.
Another important parameter is the weight, especially for applications requiring low power consumption and low weight. The Ximea sensor weighs no more than 32 g, compared with the FX10 sensor, which reaches 1260 g, roughly 39 times heavier. Similarly, the Firefleye 185 sensor gives images with wavelengths between 450 and 950 nm with 125 bands, a power consumption that reaches 7 W, a weight of 490 g and a rate of 25 frames/s. In the same way, the Specim IQ offers 204 bands with wavelengths between 400 and 1000 nm. The strong point of hyperspectral sensors compared to multispectral ones is the variety of bands provided, which gives flexibility for PA applications. Hyperspectral sensors also vary among themselves on several parameters, namely the energy consumption, the weight and the wavelength range of each sensor, which influences the number of bands in the different images. The best choice of sensor depends on the desired application as well as on the specifications of the platform that will carry it. The use of sensors in PA is not limited to cameras: Erlin Tian et al. (2021) proposed an IoT-based crop monitoring approach to increase the yield of agricultural fields (Tian et al. 2021).
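The trade-off discussed above, bands versus weight and power, can be captured in a small selection helper. The figures below are those quoted in this section and should be treated as illustrative only:

```python
# Sensor figures as quoted in the text above (None = not stated there).
SENSORS = {
    "Specim FX10": {"weight_g": 1260, "bands": 224, "power_w": None},
    "Ximea MQ022HG": {"weight_g": 32, "bands": 25, "power_w": 1.6},
    "Firefleye 185": {"weight_g": 490, "bands": 125, "power_w": 7.0},
    "Specim IQ": {"weight_g": None, "bands": 204, "power_w": None},
}

def lightest_with_bands(sensors, min_bands):
    """Return the lightest sensor (with a known weight) offering at least
    min_bands bands: a toy version of the platform-driven choice above."""
    ok = {n: s for n, s in sensors.items()
          if s["bands"] >= min_bands and s["weight_g"] is not None}
    return min(ok, key=lambda n: ok[n]["weight_g"]) if ok else None

print(lightest_with_bands(SENSORS, 100))  # Firefleye 185: lighter than the FX10
```

A real selection would also weigh power draw and frame rate against the UAV's payload and battery budget.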

Embedded system in PA
Embedded systems have nowadays undergone a huge revolution in different fields, but in PA the growth is still limited. For this reason, research in this field should be reinforced. Porting algorithms onto embedded architectures makes the proposed algorithms more useful for solving problems in PA. We find some attempts in the scientific literature related to implementing precision algorithms on embedded architectures. For instance, N. Li et al. (2019) present a new method based on image processing to detect cultivated plants with a high presence of weeds. In this work, they proposed a system based on convolutional neural networks to segment plants in a database containing 788 RGB images. Results of the proposed algorithm show that the image segmentation is done with an absolute error factor of less than 0.005 and measurement scores at 97%. The system can process 169 images/s on a desktop with an Nvidia GeForce RTX 2080 GPU @1515 MHz and a 3.5 GHz CPU. An evaluation was also carried out on a Jetson TX2 embedded system with a 256-CUDA-core NVIDIA GPU @1.3 GHz and a 4-core Cortex-A57 CPU @2 GHz. The proposed work has a strong point with real-time data processing on a desktop computer at a 169 images/s rate; the same processing on the Jetson TX2 is achieved at only 5 frames/s. Indeed, the embedded system cannot follow the data flow of cameras that deliver 24 or 60 fps (Li et al. 2019).
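The real-time argument here reduces to a simple comparison between processing throughput and camera frame rate, as a sketch using the figures quoted above shows:

```python
def is_realtime(processing_fps, camera_fps):
    """A pipeline keeps up with a camera only if it processes frames at
    least as fast as the camera produces them."""
    return processing_fps >= camera_fps

# Figures quoted above: 169 images/s on the desktop, 5 fps on the Jetson TX2,
# against typical camera rates of 24 or 60 fps.
print(is_realtime(169, 60), is_realtime(5, 24))  # True False
```

In practice one would also budget headroom for acquisition, transfer and decision latency, not just raw inference throughput.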
In the case of disease detection, the work of P. Jiang et al. (2019) was based on convolutional neural networks to detect apple plant diseases in real time. The proposed approach processes the images of a database containing 26,377 images to detect the five most common types of disease in apple plants, namely Rust, Mosaic, Alternaria leaf spot, Grey spot and Brown spot.

Shadrin et al. (2019) presented a method for the dynamic detection of seed germination. This approach is based on a Convolutional Neural Network (CNN) with 97% accuracy. The work presents the evaluation of the proposed algorithm using continuous image sequences on an embedded system with a VideoCore IV GPU + Myriad 2 and 1 GB RAM. They also used a desktop with a Quad-Core i7 @2.8 GHz, 16 GB RAM and a GeForce GTX 1050Ti GPU @1.291 GHz. Results showed that the processing time on the desktop is 10 times faster than on the embedded system, but the constraint here is the high energy consumption (25 W) that limits the use of such a desktop in embedded applications (Shadrin et al. 2019).
Monitoring of honey farms is used to increase honey production and quality. The work of T. N. Ngo et al. (2019) presents an imagery-based real-time system to monitor honey bee activities. This system is based on counting incoming and outgoing bees: a camera is fixed at the beehive entrance to collect image sequences in order to count the bees. The proposed algorithm is based on a Kalman filter and the Hungarian algorithm. The first step consists of acquiring images and initialising the background model. In this step, a reference image is used to eliminate the background, and a threshold is applied to detect bees that enter or exit. The results were evaluated on a Jetson TX2, which contains a GPU @1.3 GHz programmed with CUDA. The obtained results show that the accuracy of the algorithm can reach 93.9%, compared to the manual method that presents 100% precision. Data collected by this system are sent to a server through a 4G network. The execution time obtained in this work allowed 30 frames/s to be processed in the case of a 320 × 240 QVGA resolution. The execution time increases in the case of high resolutions, e.g., UXGA or QXGA. An evaluation of different resolutions on the same architecture is recommended to show the impact of this variation on the processing time (Ngo et al. 2019).
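The first stage of this pipeline, reference-image subtraction plus thresholding, can be sketched with NumPy. The threshold of 30 grey levels and the toy frames are assumed values; a full tracker (Kalman filter + Hungarian assignment) would operate on the resulting masks:

```python
import numpy as np

def foreground_mask(frame, background, diff_thresh=30):
    """Subtract a reference background frame and threshold the absolute
    difference, giving the foreground mask used to detect moving bees."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > diff_thresh
    return mask, int(mask.sum())

background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1, 1] = 200  # a hypothetical bee-sized bright blob
mask, n = foreground_mask(frame, background)
print(n)  # one foreground pixel
```

Grouping the mask into connected components would then yield one detection per bee, to be matched across frames by the tracker.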
Numerical fruit counting gives farmers the possibility to predict the plant yield, which helps to make decisions and increase agricultural production. The work of S. W. Chen et al. (2017), based on deep learning, presents an embedded system for digital fruit counting. The proposed system is based on convolutional neural networks to estimate the total number of fruits in different regions of the farm. This study aimed to count two fruits: oranges during the day and apples at night. A first step is dedicated to the farm data collection. This step consists of collecting and storing the labels in Scalable Vector Graphics format, using circles around the fruits. The images used have a 1280 × 960 pixel resolution and contain up to 100 fruits. After that, the authors used a subdivision to separate the images into 16 windows of 320 × 240 pixels for oranges and 16 windows of 480 × 300 pixels for apples. After the labelling task, the authors stored the resulting information in an SVG format that contains the location, size and number of each label, which makes it easy to identify these individual data. They converted the green SVG vector into a matrix containing the labels, processing each matrix pixel to determine the set of pixels that correspond to a fruit. A training phase on the database is built for learning. The last step is dedicated to the final counting of the results to obtain the final number of fruits. The embedded system used in this work is based on a TITAN X GPU @1.418 GHz. The work presents a counting system embedded on a drone, but the evaluation of this system in a real environment requires a study of the processing times in order to verify the real-time constraints (Chen et al. 2017). P. Horstrand et al. (2019) elaborated a real-time surveillance system for agricultural fields. The work presents a real-time monitoring system based on the calculation of the NDVI, MSAVI and MCARI (Modified Chlorophyll Absorption in Reflectance Index) indexes. The authors used a

R E T R A C T E D
Matrice 600 drone. A CAN bus is implemented to manage the different sensor data, and a heterogeneous embedded CPU-GPU system (Jetson TK1 with an NVIDIA GK20A GPU, 326 GFLOPS, and a quad-core ARM Cortex-A15 CPU @2.32 GHz) handles index calculation and drone control. A hyperspectral camera (FX10) with 224 bands and an RGB camera are used (Horstrand et al. 2019). The resulting images are compressed and sent to a ground station. P. Bosilj et al. (2018) presented work dedicated to pixel-by-pixel classification of images after vegetation index calculation. This classification is used to distinguish crops from weeds, and shows an improvement over the other methods proposed in the state of the art. The method applies attribute profiles for multiscale pixel description and successive filtering of each image. Local binary patterns are then applied to compute texture descriptors by comparing intensities in the neighbourhood of each pixel. Finally, a histogram of oriented gradients descriptor is computed to determine the gradient intensity around each pixel. The system was implemented on an Intel Core i7 PC with a CPU @2.60 GHz. The algorithm evaluation showed that this system executes the first step in 1.7-2 s, the second step in 1.8 s and the last step in 1.9 s, reflecting the algorithm's complexity. The execution times presented in this work require improvement to allow fast classification in applications that demand high speed; implementing the algorithm on an embedded system would give the application further advantages (Bosilj et al. 2018). In the case of UAV usage in PA, the work of B. H. Y. Alsalam et al.
(2017) presents a monitoring system to detect weeds, which cause financial damage to crop yields if they are not detected at an early stage. The proposed algorithm is based on the Observation, Orientation, Decision and Action (OODA) loop. The observation part collects exteroceptive data such as RGB images and ultrasonic sensor data. The orientation phase extracts the geolocation using GPS and sends these data to the remote processing centre. The third step performs decision-making by testing the desired alteration and selecting the target. The action phase targets plants in order to take images for further processing. The system was evaluated on a drone with an onboard Odroid-U3 processing board containing a Samsung Exynos 4412 Prime Cortex-A9 quad-core processor @1.7 GHz and a Mali-400 quad-core graphics accelerator @440 MHz. This system simultaneously controls weed detection and the direction of the UAV. [Rows belonging to Table 6 spilled here: outdoor animal tracking with a CNN (YOLO) on an NVidia Tesla C2075 GPU @1.5 GHz; K. Wakamori et al. (2020), estimation of plant water stress with a multimodal neural network on a Core i7-6700K laptop with an Nvidia GTX 960 GPU; H. Genno and Kobayashi (2019), apple growth monitoring on a Raspberry Pi 2 Model B+ with a quad-core ARM Cortex-A7 CPU @900 MHz; G. Kitić et al. (2019), evaluation of different plant states.] The results showed that this system is able to modify its current target and perform the weed detection action on the selected target. The ability to perform onboard processing is an important advantage of this work, but the weak point is the lack of real-time processing to reduce flight time and increase the number of plants processed (Alsalam et al. 2017). For weed detection, the work of P. Lottes et al.
(2018) shows a weed detection method based on a fully convolutional network with an encoder-decoder structure. The experimental evaluation showed that the proposed approach generalises better than the other approaches proposed in the state of the art.
The approach proposed in this work is semantic and processes sequences of successive images. The experimental evaluation is based on different sugar beet fields located near Bonn and near Stuttgart, Germany. The database contains aligned RGB and NIR images at 512 × 384 pixels resolution. The hardware used in this work is a BoniRob soil robot and a GTX 1080 Ti GPU @1.7 GHz with 3584 CUDA cores. The work presents an innovative weed detection method based on convolutional neural networks, but the constraint here is the processing time: the algorithm needs an architecture that can run it in real time, because the robot must process each image before it can move (Lottes et al. 2018). Table 6 shows a synthesis of the embedded systems, covering the algorithms, the embedded hardware used and the application. For greenhouse farms, the work of M. Dongdong et al. (2019) presented a new greenhouse management system to increase plant yield. This system is based on imaging with a hyperspectral camera, covering an area of 2.5 cm × 5 cm, to manage temperature and irradiation. The proposed system is equipped with a conveyor, managed by the proposed algorithm, to expose each plant more evenly to heat and irradiation. In this study, a comparison between a traditional greenhouse and one equipped with the proposed system showed that maize production increased up to 50 times with the proposed installation. The algorithm was programmed in COMSOL 5.2b and evaluated on a LENOVO desktop with 16 GB RAM and an Intel Xeon E3-1270 CPU @3.70 GHz; Matlab R2016a was then used to process the collected images. To calculate the vegetation index, the authors used tools on an HP 17 G3 workstation integrating an Intel Core i7 CPU @2.70 GHz and 64 GB RAM. The study addresses a particular case of PA that presents major problems, especially in greenhouse farming, which requires precision and real-time processing, particularly in large greenhouse areas (Dongdong et al. 2019).
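Several of the systems above compute vegetation indexes such as NDVI and MSAVI on board. As a minimal illustrative sketch, using the standard index formulas rather than any particular author's implementation, the per-pixel computation over reflectance arrays (values assumed in [0, 1]) is:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index, in [-1, 1].

    NDVI = (NIR - R) / (NIR + R); `eps` avoids division by zero
    on dark pixels where both bands are ~0.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def msavi(nir, red):
    """Modified Soil-Adjusted Vegetation Index.

    MSAVI = (2*NIR + 1 - sqrt((2*NIR + 1)**2 - 8*(NIR - R))) / 2,
    the self-adjusting form that removes the manual soil factor.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
```

Both functions broadcast over whole band images, so an onboard pipeline can apply them directly to the NIR and red planes delivered by a multispectral camera.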
Discussion and future work

The work of P. Horstrand et al. (2019) has shown a system that processes the NDVI index. The strong advantage of this work is the real-time execution of the algorithm that calculates the different indexes. However, the FX10 camera weighs 1.26 kg and the Jetson TK1 board almost 1 kg, which makes the flight time very limited. By contrast, the Parrot Sequoia camera provides the bands needed to calculate vegetation indexes at a weight of 72 g instead of 1.26 kg. In the same way, the Jetson TX2 board weighs 250 g instead of the Jetson TK1's 1 kg, with a dual-cluster CPU @2 GHz and a GPU @1.3 GHz, thereby increasing flight time. The main weakness of UAVs is energy consumption, especially in applications based on embedded systems (Horstrand et al. 2019). In closed greenhouses that require special plant monitoring, soil robots are necessary to increase the productivity of this type of farm. To improve the monitoring of different agricultural fields, several works combine satellites, UAVs and soil robots. D. Murugan et al. (2017) proposed a technique that combines satellite data with data collected by drones. This technique was proposed to reduce the difficulty of satellite surveillance, while also reducing the frequent use of UAVs (Murugan et al. 2017). C. Potena et al. (2019) proposed a combination of robots and UAVs. In this study, they shared tasks between an autonomous robot

operating a SLAM (Simultaneous Localisation and Mapping) algorithm and UAVs in order to increase monitoring accuracy (Potena et al. 2019). The use of multiple soil robots and SLAM algorithms in PA provides a general cartography of the different fields; applying these algorithms can help robots monitor agricultural fields autonomously and efficiently (Khanna et al. 2015; Silano et al. 2019; Howard 2006). The application of deep learning and machine learning algorithms in PA is encouraging. In this context, the work of P. Jiang et al. (2019) presents an algorithm based on the INAR-SSD model for real-time disease detection, evaluated on a GTX 1080 Ti GPU @1.582 GHz (Jiang et al. 2019). S. W. Chen et al. (2017) present a fruit counting algorithm based on SVG labels, implemented on a TITAN X GPU @1.418 GHz (Chen et al. 2017). To detect weeds, I. Sa et al. (2018) based their study on convolutional neural networks (CNN); the hardware used consists of Jetson TX2 and Titan X @1.3 GHz boards (Sa et al. 2018). In the same context, the work of R. Jian et al. (2019) presents a method based on artificial intelligence for monitoring the production of maize fields, aiming to increase their yield. The algorithm proposed in this paper is based on two models, DeNitrification-DeComposition (DNDC) and the Decision Support System for Agro-technology Transfer (DSSAT), used to optimise the management of maize fields in northeast China (Jiang et al. 2019). W. Soo Kim et al. (2020) present an onion blight diagnosis technique. The system proposed in this work is based on an HDWC-S322MIR PTZ camera with a high-resolution scanner and a motor that controls the system's vertical movement to monitor plants (Kim et al. 2020).
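Single-shot detectors such as the SSD variant cited above typically post-process raw network outputs with confidence thresholding and non-maximum suppression (NMS). A generic sketch of that standard step (not the INAR-SSD implementation; thresholds are illustrative) is:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes.

    Candidates below `score_thr` are dropped; the rest are visited in
    descending score order, and a box is kept only if it overlaps every
    already-kept box by less than `iou_thr`.
    """
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

On embedded targets this post-processing runs on the CPU after GPU inference, so its cost also counts against the real-time budget.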
The authors used a deep neural network (DNN) implemented on an i7 CPU @3.70 GHz; for data training, they used a Titan V GPU @1.2 GHz. The methodology is based on cropping the transmitted images, selecting a central area of 3000 × 2000 pixels. The images used in this work are collected every 30 min during the day, between 7 a.m. and 7 p.m. (Kim et al. 2020). We can now say that the use of image processing techniques in agriculture has shown its strength and flexibility in solving various problems of traditional agriculture (Quiroz and Alférez 2020; Barış Özlüoymak and Bolat 2020; Li et al. 2020; Van De Vijver et al. 2020; Dong et al. 2020). The monitoring of agricultural products is not limited to the growth phase: published works also follow the later evolution of products. For example, M. Condorí et al.
(2020) propose a system that tracks the tobacco curing process in order to guarantee a good quality product (Condorí et al. 2020). Another example appears in the work of H. Kang and Chen (2020), dedicated to counting apple fruits using deep learning, with a processing time that allows real-time operation at 35 fps (Kang and Chen 2020). Despite these systems, the use of heterogeneous embedded systems in PA is still limited. This calls for in-depth studies of the different approaches in the literature, in order to design systems that implement increasingly complex algorithms while meeting the real-time analysis and control constraints of PA.
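Whether a pipeline meets real-time targets such as the 35 fps reported above can be checked with a simple throughput measurement. A minimal sketch (the warm-up count and target rate are illustrative assumptions, not values from any cited work):

```python
import time

def measure_fps(process, frames, warmup=2):
    """Average throughput of `process` over a sequence of frames.

    The first `warmup` calls are discarded so one-off costs
    (allocation, caching, lazy initialisation) do not skew the estimate.
    """
    for f in frames[:warmup]:
        process(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        process(f)
    elapsed = time.perf_counter() - start
    n = len(frames) - warmup
    return n / elapsed if elapsed > 0 else float("inf")

def meets_realtime(fps, target=30.0):
    """True when measured throughput reaches the required frame rate."""
    return fps >= target
```

The same harness can compare the same algorithm across resolutions (QVGA vs. UXGA) or across boards (Jetson TX2 vs. a workstation GPU), which is exactly the kind of evaluation several of the reviewed works leave open.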
Computer development in PA has undergone a huge evolution in data processing, which gives farmers flexibility in their decisions. Most agricultural countries have adopted computer systems to increase the yield and production of farming fields. In this context, Krone et al. (2016) proposed a statistical study on modern information and communication technologies in agriculture, focused on Tanzania and Kenya. The authors concluded that education among farmers has a major impact on the use of information technologies in agriculture. Most farmers avoid new technology, and, as the paper showed, this reluctance is due to a lack of information on the technologies used. The use of new tools based on artificial intelligence requires informative education in order to convince farmers to adopt the techniques offered by researchers in this field (Krone et al. 2016). Another work, by Naveed and Hassan (2020) in Pakistan, examined information provision among farmers. This information is related to soil monitoring and includes: weeds, irrigation management, planting techniques, soil fertility management, the improvement of citrus varieties, means to stop fruit drop, and government policies and programs. All this information is essential for a high yield, and monitoring of agricultural fields can focus on several areas, such as plants, irrigation and the environment (Naveed and Hassan 2020). Based on the different data processed, we can conclude that the different sensor and architecture tools divide into three types. The first type focuses on satellite imagery, which is recommended when large agricultural fields must be monitored without considering image resolution. In fact, applications

that require high resolution, such as fruit counting, cannot be handled using this type of tool. Therefore, the use of UAVs is strongly recommended for medium-sized agricultural fields. UAVs also give flexibility in sensor usage, which satellites do not provide. Another important advantage is the variety of PA applications that can be achieved using UAVs.
On the other hand, we have agricultural robots. These tools present a strong solution for the surveillance of agricultural fields, and some robots can act directly on the crop, which makes them powerful tools for decision-making. Their limitation is that they cannot handle large agricultural fields, owing to several constraints such as energy consumption. Robots nevertheless remain a strong and necessary tool in small fields and greenhouses.

Future work
The exchange of information in the agricultural field is very important for several reasons. The required information is not limited to the agricultural area itself: it concerns several parties, especially the farmers. It is also necessary to encourage farmers to enter the world of information exchange, a step that will allow agriculture to be treated as a full commercial sector. As future work and a recommendation, we propose communication systems based on a variety of tools, varying with the different applications that exist in traditional agriculture. It is also necessary to strengthen education among farmers in African countries such as Ghana, Egypt, Tanzania and other countries, including East Africa more broadly (Akuku et al. 2020; Osei et al. 2017; Isaya et al. 2018), and to carry out appropriate studies of the different problems in order to propose robust solutions based on embedded systems. Such systems will increase monitoring accuracy in agricultural fields, which is a great benefit for farmers. Information processing systems will further help usher in a world of autonomy based on aerial and ground robots performing precise tasks with the help of localisation systems that deliver accurate information. To achieve all this, a thorough study of the information systems of agricultural fields is needed, including the use of accurate sensors and reliable platforms. On the other hand, the full exploitation of information on agricultural fields requires service models based on multitasking systems (Zhang et al. 2019). These systems include web services accessed from phones, workstations and various other tools (Tang et al. 2020). The use of such diverse tools will help farmers communicate with the other services of agricultural fields (Cho et al.
2011). We also need tools to manage the data in order to process them in real time (de Souza et al. 2017; Saddik et al. 2021; Latif et al. 2019; Latif and Saddik 2019). The correct distribution of these data, whether from cameras or other tools, requires precise image pre-processing, allowing accurate processing in several applications to increase the yield and quality of agricultural fields (Tang et al. 2020; Ren et al. 2020; Wang et al. 2020; Kuchumov et al. 2019).

Conclusion
In this paper, we presented the state of the art of information processing algorithms and hardware systems for PA. The synthesis of these works presents the different approaches to monitoring different agricultural areas. We also presented the different tools used to collect images and information from agricultural fields, and the databases for disease detection, weed detection and numerical counting.
The introduction of embedded information processing systems in PA has seen a revolution in recent years, which implies implementing different algorithms on hardware architectures, either homogeneous (CPUs, GPUs, FPGAs) or heterogeneous (CPU-GPU or CPU-FPGA), requiring different programming languages. While several architecture designs exist for off-line processing and control, there is, as yet, no dedicated on-line approach for a real-time embedded system that controls indexes and decisions. A reliable architecture should take into account power consumption, size and weight, computing power and real-time constraints. FPGA-based acceleration has become intense competition for GPU-based acceleration in recent computing systems, thanks to its high computation capability and lower energy consumption. We will design specific architectures to define a new programmable architecture (based on FPGA devices) or a heterogeneous architecture (CPU-FPGA) that jointly optimises parameter-extraction operators and multisensor data-fusion operators. In this case, the aim is to optimise the processing time on a dedicated low-cost system. A design methodology based on hardware-software codesign is necessary. The architecture will be validated in HIL (Hardware In the Loop) using datasets and then implemented on a mobile platform operating in a "reference" environment. This approach will allow PA tasks (index computing, control and decision) to operate in real time and in the real world.

who showed a detailed study on monitoring the Iranian region based on the combination of time-series analysis techniques such as TIMESAT and BFAST. The results of this paper showed high NDVI values in the north and west of the country (Gholamnia et al. 2019). Another work, by S. Candiago et al. (2015), was based on a multispectral camera and the PixelWrench 2 processing software; the drone used in this work is a hexacopter. This study aims to monitor agricultural fields using NDVI. The authors used a Tetracam for image collection, which provides RGB and NIR images; this type of camera has a latency problem in image collection and is therefore undesirable in applications with temporal constraints. In the same context, the NDVI index can be used to detect certain diseases: the work of J. Yeom et al. (2019) uses NDVI to monitor diseases in sorghum and cotton fields. The evaluation showed that the index in areas under conventional tillage (CT) is much higher than in areas without tillage (Yeom et al. 2019). Water monitoring requires another index: the Normalised Difference Water Index (NDWI) uses the near-infrared and green bands to detect water (McFeeters 1996). Its main objective is the detection of water in plants. Several works use this index. Among them, the authors of Bangira et al. (2019) used Sentinel-2 and Sentinel-3 satellite images to classify NDWI values. This classification is based on machine learning techniques, and the method was compared with models such as Decision Tree (DT), Support Vector Machine (SVM) and k-nearest neighbour (k-NN). L. Li et al. (2019) studied the application of NDWI to the extraction of water bodies.
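The NDWI computation described above is a simple band ratio. A minimal sketch over green and NIR reflectance arrays, using the McFeeters (1996) formula cited in the text (reflectance values assumed in [0, 1]):

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalised Difference Water Index (McFeeters 1996).

    NDWI = (G - NIR) / (G + NIR); positive values indicate water,
    since water reflects green light but absorbs near-infrared.
    `eps` guards against division by zero on dark pixels.
    """
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + eps)
```

Classification approaches like those of Bangira et al. (2019) would then feed these per-pixel NDWI values, possibly with other bands, into a classifier such as an SVM or decision tree.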
Numerical plant counting is a very important application for predicting the yield of agricultural fields. Several articles have been proposed in this field, some based on deep learning and others on multi-stage algorithms (Chen et al. 2017; Gnädinger and Schmidhalter 2017). The work of Gnädinger and Schmidhalter (2017) proposed an algorithm for numerical counting in maize fields, divided into six tasks. The first task eliminates weeds to avoid false counts; this elimination is manual and based on photo-editing software. The authors then segmented the green pixels using Matlab. The second task creates a histogram of each of the R, G and B bands, as well as a grey-level histogram, with values between 0 and 255. Another colour histogram is created for colour evaluation. The authors then applied the decorrstretch filter to stretch the colour differences. Each image strip was
order to evaluate algorithm functionality and execution time. Generally, three types of databases exist. The first consists of satellite images for monitoring large agricultural fields with low accuracy. The second consists of images of medium agricultural fields taken by drones. The third consists of images of small agricultural fields and greenhouses given by soil robots equipped with multispectral cameras for plant monitoring. Figure 7 shows different images used in PA: (a) presents an image of a large field collected by a satellite, (b) presents an image collected by a drone and (c) presents an image taken by a soil robot.

Satellite imagery

Satellites are general-purpose monitoring tools in many fields, and the earliest PA research is based on satellite imagery. Today, we find works on surveillance or mapping based on satellites. For instance, K. Guan et al. (2018) used a satellite dataset to map
Figure 7. Examples of database images used in precision agriculture: (a) satellite image, (b) UAV image and (c) robot image.

Figure 9. Cameras used in precision agriculture.
1280 × 960 pixels resolution. Another database is divided into two fields (http://www.ipb.uni-bonn.de/data/uav-sugarbeets-2015-16/). The first field, called A, contains RGB images with a 4000 × 2250 resolution collected by a DJI Matrice 100 drone; the altitude used varies between 8 and 12 m, with a Zenmuse X3 camera. Field B was collected by a GoPro camera programmed to take an image every second, at altitudes varying between 10 and 18 m, carried by a DJI PHANTOM 4 UAV. For soil robots in small fields and closed greenhouses, we find the database described at http://www.ipb.uni-bonn.de/datasets_IJRR2017/annotations/Italy_180511/images/nir/, which contains images with a 1266 × 966 resolution. This database provides RGB and NIR images of different plants in agricultural fields, collected by the TetraCam multispectral camera. Another database used for training in deep learning, the MSRA10K Salient Object Database, contains RGB images of different objects, among them plants (https://mmcheng.net/codedata/).
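Datasets like those above ship aligned RGB and NIR captures of the same scene, which must be paired before computing indexes such as NDVI. A small sketch of pairing the two modalities by filename; the `rgb/` and `nir/` folder layout is an assumption for illustration, not the exact structure of any particular dataset:

```python
from pathlib import Path

def pair_rgb_nir(root):
    """Pair RGB and NIR images that share a filename stem.

    Assumes a layout like  root/rgb/xxx.png  and  root/nir/xxx.png.
    Returns a sorted list of (rgb_path, nir_path) tuples; RGB images
    with no matching NIR capture are silently skipped.
    """
    root = Path(root)
    nir = {p.stem: p for p in (root / "nir").glob("*")}
    pairs = []
    for rgb in sorted((root / "rgb").glob("*")):
        if rgb.stem in nir:
            pairs.append((rgb, nir[rgb.stem]))
    return pairs
```

Each pair can then be loaded with any image library and passed to the band-ratio index computations used throughout the reviewed works.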

Table 1. Indexes in precision agriculture.
NIR: near infrared band, SWIR: short wave infrared band, G: green band, R: red band, MIR: medium infrared band.

Table 2. Different applications in precision agriculture.
Luciani et al. (2019) used several satellites to map districts in the New Zealand region. The absence of water is a major problem in different fields and crops, and directly influences production. To manage irrigation, especially in areas with low water resources, R. Luciani et al. (2019) proposed an automatic system for agricultural field monitoring based on satellite imagery. The dataset is based on different satellite images, such as Aqua MODIS. This instrument was launched on the Aqua platform in 2002 and views the entire Earth every one to two days. Images provided by this satellite contain 36 spectral bands with wavelengths between 0.4 and 14.4 μm and resolutions of 250 m, 500 m and 1 km. VIIRS is a sensor launched in 2011 with 22 bands, giving images at high and medium resolutions: the wavelength varies between 0.412 and 12.01 μm, with five bands at high resolution and the others at medium resolution. On the other side, we find the MERIS instrument, launched in 2002.
Images provided by MERIS contain 15 bands with wavelengths ranging from 390 to 1040 nm. The Sentinel-3 OLCI instrument, launched in 2016, provides images at 300 m in full resolution and 1200 m in reduced resolution; the images contain 21 bands with wavelengths ranging from 400 to 1020 nm (https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/).

Table 3. Precision agriculture-based satellites.

system for downlink radiation that can be used to generate accurate reflectance data for the NIR, R and G bands. The central unit of this camera is a Linux-based "smart camera" controller. It has a user-friendly navigation interface and a variety of Linux interface options available through the wireless communication interface. An RS232 serial link is used for data transfer. The sensor can be configured to capture two 2-megapixel images, with NIR, R and G filtration of the Bayer filter. The camera rate is 1 fps.
Figure 8. Models of Unmanned Aerial Vehicles (Available in www.businesswire.com).

Table 4. Tools used in precision agriculture.
Other works use sensors such as temperature, humidity and CO2 sensors to monitor agriculture; the proposed approach is based on an intelligent sensor network (Liu 2021). H. Liu (2021) used an IoT-based sensor network to monitor water in agricultural fields in real time, based on pressure sensors. From this, we can conclude that soil sensors in PA applications give accurate and fast results; however, image-based sensors remain a fast, low-cost and reliable solution. Figure 10 shows different sensors by wavelength.

Soil robots

Soil robots have a major advantage in monitoring greenhouses. As far as the platform is concerned, customised mobile robots are the preferred choice. The reliability of crawler and caterpillar systems has been demonstrated by many authors (Chatzimichali et al. 2009; Aljanobi et al. 2010), but wheeled robots are a suitable solution thanks to their simple design (Kitamura and Oka 2005). All these systems can include GPS, odometers, linear guidance, trajectory planning, multispectral cameras and navigation strategies. Other robots use irrigation pipes or rails in the field to move around, especially in small farms and greenhouses. We also find the AgBot II and BoniRob autonomous robots. The AgBot II (Bawden et al. 2017) is a prototype farm robot designed and developed by QUT researchers with significant co-funding from the Queensland government. AgBot II is a new generation of weed and crop management equipment for autonomous group work in agricultural and field crop management applications. The robot's software, cameras, sensors and other electrical devices allow it to operate in a field, apply fertiliser, classify and detect weeds, and control them mechanically or chemically. These robots give farmers a tool that can help reduce operational costs and yield losses. The BoniRob (Chebrolu et al.
2017) platform is a multi-purpose robot designed by Bosch Deepfield Robotics. BoniRob is designed for precision-farming applications such as mechanical weed management, spraying of selective herbicides, and crop and soil monitoring. It makes it possible to mount various tools for different tasks. BoniRob is equipped with four wheels that can be driven independently of each other, which gives flexibility of operation and navigation over difficult agricultural fields. Several cameras are available for such platforms (www.sensefly.com/education/datasets/?dataset=5632&industries%5B%5D=2), such as the senseFly Aeria X, senseFly S.O.D.A. 3D, senseFly Duet T, senseFly S.O.D.A., MicaSense RedEdge and Parrot Sequoia+. This camera offers four separate bands with a 4608 × 3456 pixels resolution for RGB images and

Table 6. Embedded systems-based precision agriculture.
Systems based on artificial intelligence give more flexibility in the field of PA and increase productivity in agricultural fields. These fields generally fall into three types: large, medium and small. For large fields, satellite monitoring is recommended to avoid UAV and soil-robot constraints. In this case, the authors of Vargas et al. (2019) proposed a method that quantifies the lodging of irrigated spearmint crops, based on drone and satellite data. The advantage of high-resolution satellite imagery is the monitoring of large fields, but processing is performed offline, which makes it difficult to implement applications requiring real-time decisions. On the other hand, medium agricultural fields are addressed with UAVs. UAVs equipped with multispectral cameras give a global view of these fields, which can be monitored through different indexes, namely NDVI and NDWI. The work of P. Horstrand et al. (