Satellite Images-based Obstacle Recognition and Trajectory Generation for Agricultural Vehicles

In this study, a method for the generation of tracking trajectory points and the detection and positioning of obstacles in agricultural fields is presented. Our principal contribution is to produce traceable GPS trajectories for agricultural vehicles to be utilized by path planning algorithms, rather than a new path planning algorithm. The proposed system works with minimal initialization requirements, specifically a single geographical coordinate entry of an agricultural field. The automation of agricultural plantation requires many aspects to be addressed, many of which have been covered in previous studies. Depending on the type of crop, different agricultural vehicles may be used in the field. However, regardless of their application, they all follow a specified trajectory in the field. This study takes advantage of satellite images for the detection and positioning of obstacles, and the generation of GPS trajectories in the agricultural realm. A set of image processing techniques is applied in MATLAB for detection and positioning.


Introduction
Rapid developments in computational intelligence, parallel to developments in digital science and technology, have launched a new age for the automation of traditionally manual industries, including agricultural production. Higher quality and volume in agricultural production require higher efficiency in ploughing, seeding, harvesting and similar traditionally manual tasks [40]. Various types of automated agricultural vehicles may be needed and implemented, each requiring the tracking of a specific trajectory in the field. Studies on the low-level control of autonomous vehicles propose several methods to track with a sufficiently low lateral error [29,24]. A higher level of agricultural automation requires the automatic preparation of the reference paths to be followed by autonomous agricultural vehicles. Path planning algorithms based on splitting a field into trapezoidal subfields and merging them into larger blocks, so that routing can be solved by straight lines, are covered in [39]. Path planning algorithms based on level set methods are analysed in [5] to improve re-computations in the case of environmental changes. A dynamic path search algorithm to guide tractors on turns at the end of fields is considered in [43]. Various optimization criteria, such as B-patterns for optimal route planning with ant colony optimization, have been introduced in [1]. A probabilistically robust path planning algorithm using rapidly exploring random trees is introduced in [27] for the efficient identification and execution of paths in real time. A complete coverage path-planning algorithm is proposed in [37] to generate dynamic paths that avoid obstacles for a cleaning robot using neural networks. A four-layer path-planning algorithm is developed in [34] to ensure the optimal real-time planning of multiple semi-autonomous vehicles.
Cost-function minimization and a heuristic based on a greedy algorithm are used to determine the best Hamiltonian solution in field coverage path planning [10]. Depth-First Search (DFS), with the use of sub-regions to avoid an overabundance of turns in narrow areas, is used by [13]. An algorithm to determine the turning areas, whilst minimizing overlapping, is introduced in [32]. A new approach to determine short- and long-term dynamic accuracy is proposed in [40]. Some methods for using GPS to control vehicles are proposed in [33,36]. In addition, some approaches are introduced to detect objects physically [20,4] and by imaging [38,3,9].
Our principal contribution is to propose an automated method for producing trajectories and the detection of obstacles, particularly trees, for an autonomous agricultural planting system, rather than a new path planning algorithm. In the remainder of this paper, Section 2 introduces the preliminaries for the satellite images and image processing for detecting and positioning obstacles in an agricultural field. Section 3 describes the proposed method in detail, and the results of this study are presented in Section 4. Section 5 contains a discussion of the proposed method and Section 6 presents our conclusions.

Preliminaries
In this study, a set of predetermined methods, from image processing techniques to GPS positioning systems, is used to achieve the stated goals. These tools, which are adapted to meet our needs, are introduced in this section.

Source of Satellite Images
Access to live satellite providers and images is limited due to national security restrictions. Therefore, Google Maps is utilized in this study as the primary image source. Images from Google Maps are assumed to be live and accurate for evaluation purposes, which is essential to demonstrate the expandability of the proposed method to live satellite image providers. In this study, it is assumed that the mapping between actual GPS coordinates of the earth and pixels of the images taken by Google Maps was done precisely by Google. An API-V3 key, which is shown in Figure 1 and is specified by Google, is used to locate and import satellite images using the GPS coordinates of the field [12].
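The image request itself is a simple URL; the following is a minimal Python sketch (the endpoint and parameter names follow Google's Static Maps API; `api_key` stands in for a key of the kind shown in Figure 1):

```python
def static_map_url(lat, lon, zoom, api_key, size=640):
    """Build a Google Static Maps request for a satellite image
    centred on the field's GPS coordinate."""
    return ("https://maps.googleapis.com/maps/api/staticmap"
            f"?center={lat},{lon}&zoom={zoom}"
            f"&size={size}x{size}&maptype=satellite&key={api_key}")
```

The returned URL can be fetched with any HTTP client to obtain the field image used in the following steps.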

Figure 1: Sample API key used to obtain satellite images from Google Maps.

Positioning System
Google Maps uses GPS and the Mercator projection to represent the map in tiles of 256 × 256 pixels, which gives the whole earth in a single tile at zoom level zero, as shown in Figure 2 [12]. Using a unique (x, y) pair, each tile can be accessed for positioning. In this system, (0, 0) refers to the northwest corner of the map, based on the Mercator projection.
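The tile scheme described above can be sketched in a few lines. This Python version implements the standard Web Mercator projection on which Google's tile addressing is based (function names are ours):

```python
import math

TILE_SIZE = 256  # pixels per tile side, as used by Google Maps

def latlon_to_pixel(lat, lon, zoom):
    """Project a GPS coordinate to global pixel coordinates (Web Mercator).

    (0, 0) is the north-west corner of the map; zoom level zero fits
    the whole earth into a single 256 x 256 tile.
    """
    scale = TILE_SIZE * 2 ** zoom  # map width/height in pixels
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def pixel_to_tile(x, y):
    """Indices of the tile containing a global pixel coordinate."""
    return int(x // TILE_SIZE), int(y // TILE_SIZE)
```

For example, the coordinate (0, 0) at zoom 0 maps to the centre of the single world tile.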

Image Processing Techniques
The geographic satellite images of the target fields are processed using noise reduction, edge detection, colour reduction and median filtering methods to prepare the image for the determination of the boundaries and detection of obstacles.

Filtering and Noise Reduction
Median filtering is a nonlinear operation used to eliminate sharp singular marks, which are of no consequence due to their small size [8]. This filter is preferred over others due to its ability to preserve edges while suppressing noise.
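A 3-by-3 median filter of the kind described here can be sketched directly in NumPy (a minimal illustration, not the implementation used in the study):

```python
import numpy as np

def median_filter_3x3(img):
    """3-by-3 median filter; image edges handled by replicate padding.

    Nonlinear smoothing that removes isolated 'salt and pepper' pixels
    while preserving edges better than linear averaging.
    """
    padded = np.pad(img, 1, mode='edge')
    # Stack the nine shifted views and take the per-pixel median.
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)
```

A single bright outlier in an otherwise flat region is replaced by the neighbourhood median, illustrating why this filter suits "salt and pepper" noise.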

Edge Detection Techniques
Edge detection is essential to find the boundaries of the desired field and to determine the specifications of obstacles, such as dimensions and location. It also helps to find the field's attributes, such as length and position. Several edge detection techniques, including Prewitt, Sobel, Roberts and Canny, are utilized [25,11]. After noise reduction, Prewitt, Sobel and Roberts produced more acceptable results, based on the parameters used in this study and presented in Section 3.2. As shown in Figure 3, among these popular methods, Sobel (with no thinning) seems to perform best in the detection and presentation of edges in an agricultural environment.

System Description
The proposed method works mainly on top-view satellite images, which are processed to detect the boundaries of the field and the obstacles, as described in [23]. Although different kinds of obstacles, such as stones and trees, may exist in an agricultural field, this study focuses on detecting trees to demonstrate the validity of the method. The system contains four main stages, as shown in Figure 4: 1) initialization, to locate and import the image; 2) field extraction, in which the target field is extracted; 3) obstacle detection, to detect, position and store obstacles; and 4) trajectory generation, which generates tracking trajectory points.

Field Extraction and Sectioning
The system is initialized by the user entering any GPS coordinate of the agricultural field to target the desired field. This study takes advantage of thresholding methods, such as Otsu's method, to perform segmentation on the image for extraction of the target field and detection of obstacles [44]. The user determines the highest possible zoom level (Δz) that fits the field into the frame. The obtained image is converted to gray-scale for further processing by retaining the luminance while eliminating the hue and saturation information. The gray-scale conversion is obtained using the weighted sum of the R, G and B components of the imported colour image (0.2989·R + 0.5870·G + 0.1140·B) [41].
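The weighted-sum conversion can be sketched as follows (the weights are those quoted above, the same coefficients used by MATLAB's rgb2gray):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Luminance-weighted gray conversion: 0.2989 R + 0.5870 G + 0.1140 B.

    Works on a single pixel (3,) or a whole image (H, W, 3).
    """
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights
```

Note that the weights sum to 0.9999, so a pure-white pixel maps to just under 255.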
On the gray image, median filtering with a 3-by-3 neighbourhood matrix is applied to reduce typical "salt and pepper" noise [8]. The target field is extracted from the imported image using segmentation techniques and sectioning [30,6]. The proposed method also allows the user to tune the filtering parameters of the automated process interactively, which gives the user an opportunity to select the target field manually if needed. Then, a binary form of the image is constructed, as in Figure 5b. The detected regions (R_i) are numbered based on the number of connected pixels (n_i), as in Figure 5c. The centre of the image is mapped to the GPS coordinate of the field, and the largest region that overlaps it is considered to be the target field [42]. The imported image is cropped using the bounding-box properties of the field to show only the desired field, as in Figures 5d and 5e.
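The selection of the connected region covering the centre pixel can be sketched with a simple flood fill (an illustrative stand-in for the region labelling used in the study; 4-connectivity is our assumption):

```python
from collections import deque

def largest_region_containing(binary, seed):
    """Return the pixels of the connected 'one' region containing the
    seed pixel; if the seed lies on background, fall back to the
    largest region. `binary` is a list of 0/1 rows; 4-connectivity."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(r0, c0):
        region, queue = set(), deque([(r0, c0)])
        seen[r0][c0] = True
        while queue:
            r, c = queue.popleft()
            region.add((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and binary[nr][nc] and not seen[nr][nc]:
                    seen[nr][nc] = True
                    queue.append((nr, nc))
        return region

    sr, sc = seed
    if binary[sr][sc]:
        return flood(sr, sc)
    regions = [flood(r, c) for r in range(rows) for c in range(cols)
               if binary[r][c] and not seen[r][c]]
    return max(regions, key=len) if regions else set()
```

The returned pixel set corresponds to the target field; its bounding box gives the crop window of Figure 5d.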

Obstacle Detection and Positioning
In this section, a method for detecting obstacles in an agricultural field is presented [23]. The undesired regions inside the field, which are too small to be deemed obstacles, are removed by filling them using morphological image reconstruction [22,16]. Top views of trees in satellite images usually appear as circles or ellipses, and they are detected using the Circular Hough Transform (CHT) [31]. The radius of trees covers different ranges at different zoom levels. In this study, the CHT parameters are set to: object polarity = 'bright', computation method = 'PhaseCode', sensitivity factor = 0.9 and edge gradient threshold = 0.27.
Past experience shows that at Δz = 18, on which this experiment is based, trees have a radius range of 4-18 pixels; this range changes at other zoom levels. The proposed method is designed to work with other zoom levels too; however, due to limited space in this article, those results will not be presented. Each detected circle, before being accepted as a new tree entry, is checked for duplicity. This is done by mapping the centre point of the object to a pixel in the image and searching the list to find duplicates. In the case of no duplication, the tree is stored with its centre pixel and radius (in pixels) to determine its diameter.
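The core of the CHT can be illustrated with a fixed-radius accumulator. This is a deliberately simplified sketch: the study's detector sweeps the 4-18 pixel radius range and uses the phase-code method, both of which this toy version omits.

```python
import numpy as np

def hough_circle_centre(edge_points, radius, shape):
    """Minimal fixed-radius Circular Hough Transform.

    Every edge pixel votes for all candidate centres lying `radius`
    away from it; the accumulator peak is the most likely centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for (r, c) in edge_points:
        rr = np.round(r + radius * np.sin(thetas)).astype(int)
        cc = np.round(c + radius * np.cos(thetas)).astype(int)
        ok = (rr >= 0) & (rr < shape[0]) & (cc >= 0) & (cc < shape[1])
        # One vote per cell per edge pixel (deduplicate rounded hits).
        votes = np.unique(np.stack([rr[ok], cc[ok]], axis=1), axis=0)
        acc[votes[:, 0], votes[:, 1]] += 1
    peak = np.unravel_index(int(np.argmax(acc)), shape)
    return peak, int(acc[peak])
```

On a synthetic circular edge, the accumulator peak lands at (or within a pixel of) the true centre, which is the property the tree detector relies on.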

Captured satellite images are not supplied with GPS information. Therefore, we propose a positioning method to convert pixels to GPS coordinates. The system works on a geographic coordinate system based on latitude and longitude. Starting with the earth's circumference at the equator, α = 40075017 metres, spanning θ_e = 360°, the number of pixels in a tile p_t = 256 and the number of pixels across the map at zoom level Δz, p_z = p_t × 2^Δz, the length of one degree of Longitude Per Pixel (LDPP) is determined by dividing θ_e by p_z. Additionally, by dividing α by p_z, the number of metres per pixel at the equator is calculated, which is 0.5971645 metres at Δz = 18. While moving from the equator to the poles, the distance between longitudes gradually approaches zero: the ground distance of one degree has its maximum value of 111319.49167 metres at the equator and decreases towards the poles, as calculated in our previous study [23].
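The quantities above reduce to a few one-line computations; the constants are those quoted in the text:

```python
EARTH_CIRCUMFERENCE_M = 40075017.0  # alpha: circumference at the equator
TILE_PIXELS = 256                   # p_t

def pixels_at_zoom(zoom):
    """p_z = p_t * 2**zoom: width of the world map in pixels."""
    return TILE_PIXELS * 2 ** zoom

def metres_per_pixel_equator(zoom):
    """Ground metres covered by one pixel at the equator (alpha / p_z)."""
    return EARTH_CIRCUMFERENCE_M / pixels_at_zoom(zoom)

def degrees_longitude_per_pixel(zoom):
    """LDPP: 360 degrees of longitude spread over the map width."""
    return 360.0 / pixels_at_zoom(zoom)
```

At Δz = 18 these reproduce the 0.5971645 m/pixel and 111319.49167 m/degree figures used in the paper.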
In addition, while moving towards the poles, the number of GPS degrees per pixel differs too. Therefore, to obtain an accurate position, another mapping is required: multiplying the longitude at each degree by the number of GPS degrees per pixel establishes an accurate position. The centre pixel of the image is mapped to the GPS point provided by the user, as in Figure 6a. However, during field extraction, while cropping the image, the Original Centre Pixel (OCP) might be displaced, as in Figure 6b. In order to obtain the exact location, this offset needs to be calculated. From Figure 6b, the displacement of the centre pixel is obvious. To calculate the offset, the Bounding-box's Start Point (BSP) and the Bounding-box's End Point (BEP) (the dashed lines in Figure 6a) are used.
The Cropped Centre Pixel's (CCP) new position and offset are calculated by subtracting the OCP's coordinates from the CCP's. The only GPS coordinate of the field, provided by the user as a latitude (rp_Lat) and longitude (rp_Long) pair, is mapped to the centre pixel of the image, on which all further positioning procedures depend; hence, maintaining its exact location is essential. Having calculated the offset, the system can compute each detected tree's exact coordinates as latitude and longitude pairs from its centre pixel coordinates (tx_ti, ty_ti) using Eq. 1 and Eq. 2. After validation, the object is stored with its properties, such as index, position, pixel coordinates, radius, latitude and longitude, and is graphically annotated with a red circle [19,14].
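Since Eq. 1 and Eq. 2 are not reproduced here, the following is only a plausible sketch of the pixel-to-GPS mapping they describe, assuming a locally linear approximation around the centre pixel (the cos(latitude) scaling of ground distance per pixel is our assumption, consistent with the Mercator discussion above):

```python
import math

def pixel_to_gps(px, py, centre_px, centre_py, centre_lat, centre_lon, zoom):
    """Map an image pixel to (lat, lon), given the GPS coordinate bound
    to the image's centre pixel. Small-field linear approximation."""
    map_px = 256 * 2 ** zoom
    deg_per_px = 360.0 / map_px          # LDPP (constant in longitude)
    m_per_px = 40075017.0 / map_px       # metres/pixel at the equator
    m_per_deg_lat = 40075017.0 / 360.0   # ~111319.49 m per degree

    dlon = (px - centre_px) * deg_per_px
    # Ground metres per pixel shrink with latitude; y grows southwards.
    ground_m_per_px = m_per_px * math.cos(math.radians(centre_lat))
    dlat = -(py - centre_py) * ground_m_per_px / m_per_deg_lat
    return centre_lat + dlat, centre_lon + dlon
```

One pixel of eastward offset changes longitude by exactly one LDPP; one pixel southward decreases latitude.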

Trajectory Points Generation and Positioning
The system provides a sequence of GPS trajectory points for agricultural vehicles to follow [35].
Having obtained all of the required data that are related to the field and obstacles therein, the sequence of tracking trajectory points in the form of a path is ready to be generated. The aim is merely to provide trajectories rather than path planning. To test the integrity of the proposed approach, the generated trajectory points are integrated into a simple path planning technique, which finds the longest edge and provides parallel lines according to that edge.
Figure 6: Offset calculation of the centre point.
In the binary image, it is assumed that pixels with value "one" (white) refer to the main field, and the trajectory points should lie within these areas. After the field is filled with "one" values, the detected trees are re-plotted into the field as black regions with "zero" value, using their centre point coordinates and radii in pixels. This process results in a clearer, smoother and more precise image containing both the field and the trees. Applying the Standard Hough Transform (SHT) algorithm to the binary image finds the longest edge of the field. In the algorithm, the resolution of the distance between the origin and the line (ρ_h) is set to 0.5 and the angle from the origin to the closest point (θ_h) is restricted to −90 ≤ θ_h ≤ 89. During the search, each line's length is calculated using its first point coordinate (p1), its last point coordinate (p2) and its orientation angle θ_h (Eq. 3).
The line with the maximum length l_line is stored as the longest edge (ED_max), along with its θ_h. The slope of the longest line (m_EDmax) is then calculated. The ED_max is highlighted with a bold blue line, as shown in Figure 7a. The desired paths must be parallel to ED_max, with equal distances from each other. The distance between paths (d_m) in metres is converted to pixels (d_pix) using Eq. 5.
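Eq. 5 is not reproduced here; the following is a plausible sketch of the metres-to-pixels conversion, assuming the equatorial metres-per-pixel value is scaled by cos(latitude) for the field's latitude:

```python
import math

def path_spacing_pixels(d_m, zoom, lat_deg):
    """Convert the user-entered path spacing in metres into pixels.

    Assumes ground metres per pixel = (equatorial metres per pixel)
    * cos(latitude), per the Mercator scaling discussed earlier.
    """
    m_per_px = 40075017.0 / (256 * 2 ** zoom) * math.cos(math.radians(lat_deg))
    return d_m / m_per_px
```

At the equator and Δz = 18, a 2 m spacing corresponds to roughly 3.35 pixels.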
The desired paths are to be parallel to each other with a constant spacing d_pix, as entered by the user. Therefore, temporary points (red dots) are plotted as origin points from which to draw the paths, as in Figure 7b. Any deviation of the origin points would result in lines with unequal distances d_pix from each other. The slope of the perpendicular line is m_pLine = atan(m_EDmax).
Each origin point (OP_x,y) is placed one by one with a constant increment (Δ_inc) to generate points with equal distances from each other, using Eq. 7 and Eq. 8. The system generates tracking trajectories using the longest edge, the perpendicular line and the origin points. A sequence of trajectory points in a straight line is generated by replacing m with m_pLine in the linear equation y = mx + b (Eq. 9 and Eq. 10). Likewise, b (the y-intercept) is replaced by OP_y (the y-coordinate of the origin point), and x is calculated by decrementing the origin point's x-coordinate from PL_x, which increments pixel by pixel to determine the offset.
These pixels represent the points through which tracks parallel to the longest edge should pass. Any intersection between the trajectory points and detected obstacles in the field needs to be determined to avoid collisions. The system maps each generated point's pixel coordinates (x, y) to the image and compares their values; if a point overlaps a pixel with a "zero" value, the point is discarded. This is because the generated point is either placed on an obstacle or outside of the field's boundary, as both are represented by a "zero" value. After passing all of the conditions, the trajectory points are plotted sequentially on the field as red dots in straight lines, as in Figure 7a. Each trajectory point is converted from pixel to GPS coordinates to be used by a GPS-tracking-enabled agricultural vehicle [23]. As indicated, moving between the poles and the equator results in different ground distances being covered by each pixel. Therefore, the exact length of one degree of longitude per pixel (LDPP) must be calculated [23]. The pixel-to-GPS conversion results are stored in a 2D array (m × n), as shown in Table 1.
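The generate-then-discard logic above can be sketched as follows (a simplified illustration: integer pixel stepping along y = mx + b, rejecting points that land on "zero" pixels):

```python
import numpy as np

def trajectory_points(mask, slope, intercepts):
    """Generate candidate trajectory pixels along y = slope*x + b for
    each origin intercept b, keeping only points that fall on 'one'
    (field) pixels; obstacles and out-of-field pixels carry value zero
    and are discarded."""
    h, w = mask.shape
    tracks = []
    for b in intercepts:
        pts = []
        for x in range(w):
            y = int(round(slope * x + b))
            if 0 <= y < h and mask[y, x] == 1:
                pts.append((x, y))
        tracks.append(pts)
    return tracks
```

A point falling on an obstacle pixel is simply dropped from its track, which is exactly the collision-avoidance behaviour described above.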
Columns 1 to 4 contain information related to the first track, columns 5 to 8 contain information for the second track, and so on. The first two columns of each group are the pixel coordinates (TP_xi,yi) and the next columns are the GPS coordinates (TP_Lat,Long) of the trajectory point.

Results
In order to illustrate the proposed method, a number of examples are presented. The system is developed and tested on the MATLAB technical programming language platform (The MathWorks Inc., Natick, MA). A computer with a 2.8 GHz CPU (Intel Core i7) and 8 GB of RAM, running Mac OS X Yosemite (10.10.3), is used to run and test the system. All of the satellite images are obtained from the Google Maps Image API with a resolution of 640 × 640 pixels, as provided by the Static Map API. The algorithm was tested on seven different fields with various shapes, sizes and numbers of obstacles to validate the system. The zoom level in all of the tests is set to 18 for comparison purposes. In addition, the radius range for the detection of trees at Δz = 18 is set to a minimum of 4 and a maximum of 18 pixels; this is subject to change depending on the zoom level. After the extraction of each field, the original image is cropped to contain only the desired field, resulting in images with different dimensions, as in Figure 8. Furthermore, the tree detection outputs of these fields are graphically presented with red circles in the images.

Figure 8: Test fields after cropping and obstacle detection.
Details of the detection stage for these seven fields are shown in Table 2. The columns, from left to right, are the total number of trees, the total number of tree detections, correct detections, incorrect detections, the accuracy of correct detections, the percentage of incorrect detections and the complexity of the field. Complexity is defined based on the number of obstacles, their shapes and sizes, the fields' dimensions and the smoothness of the edges. The distance between the desired paths is set to d_m = 2 m in all of the fields for comparison purposes. The results of the path finding algorithm based on the SHT are shown in Figure 9.

Discussion
According to the experimental results, the system is capable of detecting obstacles with significant precision while using the free Google Maps API, which provides images of at most 640 × 640 pixels and forced us to use low zoom levels. The system's accuracy is correlated with the zoom level, such that a higher zoom level results in more precise detections. Using the Google Business API, with images of 2048 × 2048 pixels, the same experiment could be conducted at zoom level 20, resulting in an image with four times more clarity, as in Figure 10. In turn, this would increase the precision of detection. As previously presented in Table 2, the results indicate that, while the number of trees and the complexity increase, the accuracy remains significant. Overall, the accuracy is above 80%, except for fields 1 and 7. The low accuracy in field 1 is due to the fact that one of the two trees in the field is covered by Google's watermark on the image, making the system unable to detect the tree. Additionally, in field 7, many of the trees stand together and form non-circular shapes, which resulted in detection failures. Table 2 shows that the average accuracy in the detection of trees and the average error are more than 87% and less than 3%, respectively, which represents a significant result.
Figure 9: Results of generating the sequence of trajectory points to form the desired path.
Figure 10: Correlation between zoom level and image quality.
Assuming that Google provides precise data, the proposed system proves to be successful in positioning. As an example, the coordinates of an obstacle detected and positioned by the proposed method are searched in Google Maps to validate their accuracy. Figure 11 shows that the GPS coordinate points exactly to the centre of the detected tree.
The proposed method is also accurate in calculating the detected obstacles' sizes, although these may appear at different scales in agricultural fields. As an example, the diameter of a detected tree computed by the proposed method (Figure 12a) is 10.14 m (20.74 pixels in the image), the diameter from Google Maps is 10 m (Figure 12b), and its actual diameter, as recently measured, was 11.1 m (Figure 12c). The comparison between the tree sizes calculated by the proposed method and by Google Maps (a 14 cm difference) indicates a very low error rate in measuring obstacle sizes. However, the large difference between the actual size of the obstacle and the size provided by Google Maps indicates the insufficiency of Google Maps' accuracy in representing data.
The path planning strategy used in this study generates paths that are parallel to each other. A linear equation of the base line with a different y-intercept but constant increment is considered to guarantee parallelism and straightness. The tracks in both pixel and GPS coordinates are compared to validate the accuracy of the conversion. From the line equations in the charts presented in Figure 13, it can be concluded that the lines in both pixel and GPS coordinates are straight, since none of the exponents is higher than one.
Furthermore, the coefficient of determination (R²) of the points, with a value of 1, confirms the complete straightness of both lines. The coefficients of determination of the trajectory points, in both pixel and GPS coordinates, for all of the fields in this study are presented in Figure 14a. The results indicate that the points deviate less than 0.005% from the straight line, which shows that the conversion is significant.
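The straightness check can be reproduced with a first-order fit (an illustrative sketch; the study presumably computes R² in the standard way shown here):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a first-order (straight line)
    least-squares fit; R^2 == 1 means the points lie exactly on a line."""
    m, b = np.polyfit(x, y, 1)
    residuals = y - (m * x + b)
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return 1.0 - ss_res / ss_tot
```

Points generated exactly along y = mx + b yield R² = 1, while any deviation from the line pushes R² below 1.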
In addition, it is necessary for these points to be equidistant from each other, which ensures that tracking devices will not lose or stray from the path. A low standard deviation indicates the accuracy of the generation and conversion of the tracking trajectory points, as shown in Figure 14b. To investigate the computational times, the same experiments are performed with various distances (d_m = 2 m, d_m = 4 m, d_m = 6 m) and the results are shown in Figure 15; they are in the range of 2.04 s to 25.95 s for d_m = 2 m, 1.35 s to 14.47 s for d_m = 4 m, and 1.15 s to 10.69 s for d_m = 6 m. The results indicate that the computational time increases almost twofold when the distance between paths is halved. Table 3 presents the processing times for all of the fields. From the table, it can be observed that the highest processing times belong to fields 3 and 4, as they are almost twice the size of fields 1, 2, 5 and 6, which requires more trajectory points to cover the field. Although field number 7 is almost the same size as fields 3 and 4, the results indicate that its computational time and generated trajectory points are half as many. Table 2 shows that the system has detected one obstacle in field number 3 and three in field number 4, while, for field 7, there are 194 detected obstacles. Having more obstacles in the field results in less space for planting, which causes the system to eliminate those areas from the path. This results in a decrease in the generation of tracking trajectory points and the computational time.
Table_1 that the system has detected one obstacle in field number 3, and three in field number 4, for field 7, there are 194 detected obstacles. Having more obstacles in the field results in ace for planting, which causes the system to eliminate those areas from the path. This in a decrease in generation of tracking trajectory points and computational time. Although field number 7 is almost the same size as fields 3 and 4, the results from Table 2 indicate that the computational time and generated trajectory points are half as many. Table 1 shows that the system has detected one obstacle in field number 3, and three in field number 4, while, for field 7, there are 194 detected obstacles. Having more obstacles in the field results in less space for planting, which causes the system to eliminate those areas from the path. This results in a decrease in generation of tracking trajectory points and computational time.

Conclusion
In this study, a method for the generation and positioning of tracking GPS trajectory points for autonomous agricultural systems has been presented. The proposed method begins with a single GPS coordinate of an agricultural field and detects obstacles automatically. The acquired global positions and dimensions of the detected obstacles can be used by agricultural vehicles to avoid collisions. The low computational time, in the range of 1.15-25.95 seconds, along with the simplicity of use and the accuracy in positioning and generating tracking trajectory points, makes the proposed method feasible to integrate into any path planning algorithm or method for the automation of agricultural vehicles. The generated trajectory points are graphically represented in the image to be presented to the user. This method is tested with a simple path planning technique to validate the generation of trajectory points. According to the experimental results, the proposed method provides an effective means to generate trajectory points and position obstacles. In the near future, this method can be extended to cover more types of obstacles, along with the development of other subsystems of agricultural automation, such as interactive user interfacing for obstacle verification.

Acknowledgements
The authors wish to thank Anas MAHDI for his contribution to this study in the statistical analysis.