Morphology-based guidance line extraction for an autonomous weeding robot in paddy fields

https://doi.org/10.1016/j.compag.2015.02.014

Highlights

  • Guidance line extraction was proposed based on the morphological characteristics of rice.

  • Segmented lines and extended virtual lines were created from object edges.

  • Guidance line was extracted from distribution of intersection points.

  • Error of guidance line was less than 1°.

Abstract

The guidance line extracted from an image of a rice row precisely guides a robot for weed control in paddy fields. It enables a camera-equipped robot to follow the crop row without damaging the crop, which is essential for successful weed control. The accuracy of the guidance line is affected by morphological characteristics in the image, such as crop leaves, stems, orientation and density. In paddy rice fields, it is difficult to extract the guidance line precisely because the leaves are oriented in multiple directions. This paper proposes a new guidance line extraction algorithm to improve the navigation accuracy of weeding robots in paddy fields. The proposed algorithm identifies the central region of each rice plant using the morphological characteristic that the leaves normally converge toward the central stem region. The guidance line is extracted from the intersection points of virtual straight lines using a modified robust regression, where each virtual straight line is the extension of a segmented straight line created on the edges of the rice plants in the image using the Hough transform. Experiments showed that the proposed algorithm performs well, with an error of less than 1° across varying rice plant sizes.

Introduction

Recently, robots have been widely utilized in the agricultural field because of advancements in robot technology. Specifically, the automation of agricultural robots is essential for improving work efficiency with regard to crop productivity and the labor force. Previous research has mainly studied crop row detection and autonomous guidance based on vision sensors for the automation of such robots (Mousazadeh, 2013). The autonomous guidance system is divided into three parts: image segmentation of the crop rows, guidance line extraction from the crop-row image, and motion control of the robot. Image segmentation of the crop rows is an image processing step in which the crop is distinguished from the background (i.e., water, soil and stones). Guidance line extraction creates a reference allowing the robot to follow the crop row without damaging the crop. Thus, guidance line extraction is an important factor allowing the robot to accomplish its mission of precise weed control. Motion control allows the robot to determine its position relative to the guidance line.

Previous research about guidance line extraction using a vision camera can be classified by the method used: methods based on the distribution of image pixel values, Hough transform methods, regression methods and other methods. The first type is based on the distribution of the image pixel values. Olsen (1995) proposed summing the grey values of the image pixels along the y-axis, as the sum over the center of a crop row is higher than the corresponding sum over the center of an inter-row space. The guidance line was set using the local peak points of the sinusoidal curve obtained from these column sums. Pla et al. (1997) applied a vanishing point detection algorithm to a skeleton segment image to obtain the point at which all lines defining the crop rows converge. Slaughter et al. (1999, 2008) proposed the spatial median algorithm, which calculates the median of an image's histogram directly by summing the values in each position counter until a value equal to 50% of the population is reached; the values in the counters represent the cumulative cross-sectional spatial distribution of the crop row. Romeo et al. (2013) applied a method based on image perspective projection that searches for the maximum accumulation of segmented green pixels along straight alignments. Guidance line extraction with these methods becomes less accurate when the distribution of the image pixel values is irregular due to the morphological characteristics of the crop.
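To illustrate this first family of methods, the sketch below (Python with NumPy; not from any of the cited papers) sums grey values column-wise and treats local peaks of the smoothed profile as candidate row centres. The smoothing window and the minimum peak separation are assumed parameters.

```python
import numpy as np

def row_centres_by_column_sum(grey_image, min_separation=40):
    """Candidate crop-row centres from the column-wise sum of grey values.

    Columns over a crop row accumulate larger grey sums than columns over
    the inter-row space, so local peaks of the smoothed column-sum profile
    mark candidate row centres (cf. Olsen, 1995). The smoothing window and
    `min_separation` (pixels between accepted peaks) are assumed values.
    """
    profile = grey_image.sum(axis=0).astype(float)                 # sum along the y-axis
    profile = np.convolve(profile, np.ones(9) / 9.0, mode="same")  # light smoothing

    peaks = []
    for x in range(1, len(profile) - 1):
        is_local_max = profile[x] >= profile[x - 1] and profile[x] > profile[x + 1]
        if is_local_max and (not peaks or x - peaks[-1] >= min_separation):
            peaks.append(x)
    return peaks                                                   # x-coordinates of row centres
```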

The second group of methods is based on the Hough transform. Marchant and Brivot (1995) used the Hough transform to integrate information over a number of rows to detect parallel crop rows. Rovira-Más et al. (2005) applied pre-processing, including the determination of a suitable region of interest (ROI) and midpoint encoding, to achieve an effective Hough transform. Bakker et al. (2008) reduced the image processing time of the Hough transform by dividing the grayscale image into three sections and then merging these sections into one image. Ji and Qi (2011) also used a randomized Hough transform to reduce the computing time. Ericson and Åstrand (2011) proposed an approach for detecting parallel crop rows with an omnidirectional camera, combining an edge-based method and a Hough transform. A key disadvantage of these methods is that the accuracy of the guidance line is affected by the morphological characteristics of the crop to a greater extent than in other models (Ji and Qi, 2011).
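A minimal sketch of the Hough-based approach is given below, using OpenCV. The ROI fraction, Canny thresholds and accumulator threshold are assumptions for illustration, not values taken from the cited studies.

```python
import cv2
import numpy as np

def row_lines_by_hough(crop_mask, roi_fraction=0.6):
    """Candidate crop-row lines from the standard Hough transform.

    The binary crop mask (uint8, 0/255) is cropped to a lower region of
    interest, edges are extracted with Canny, and cv2.HoughLines returns
    (rho, theta) pairs for strong straight alignments. The ROI fraction,
    Canny thresholds and accumulator threshold are assumed values.
    """
    h, _ = crop_mask.shape
    roi = crop_mask[int(h * (1.0 - roi_fraction)):, :]  # keep the lower part of the image
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)   # rho step, theta step, vote threshold
    return [] if lines is None else [tuple(l[0]) for l in lines]
```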

The third set of methods is based on linear regression. Billingsley and Schoenfisch (1997) applied the regression method to a crop that takes the form of a spotty row of various-sized shapes. Søgaard and Olsen (2003) computed the center of gravity for crop row segments in the image and obtained the lateral position of the center lines of the rows by weighted linear regression. Hana et al. (2004) applied linear regression in each tracking window to obtain a guidance line that represents the center of the crop row. Montalvo et al. (2012) made assumptions regarding the underlying knowledge about the arrangement of crops in the maize field and constructed a corresponding template or mask to delimit the areas in which the pixels that belong to crop rows are expected. Finally, they applied least-squares linear regression to adjust the guidance line. In these methods, the accuracy of the guidance line is highly sensitive to outliers in the crop configuration (Billingsley and Schoenfisch, 1997).
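The regression family can be sketched as follows: each detected row segment is represented by its centre of gravity and an optional area weight, and a weighted least-squares line is fitted through these points, in the spirit of Søgaard and Olsen (2003). The centroid extraction itself is omitted, and the parameterisation is an illustrative choice.

```python
import numpy as np

def fit_row_centre_line(segment_centroids, weights=None):
    """Weighted least-squares fit of a row centre line x = a*y + b.

    Each row segment contributes its centre of gravity (x, y), optionally
    weighted by segment area. Fitting x as a function of y keeps
    near-vertical crop rows numerically well conditioned.
    """
    pts = np.asarray(segment_centroids, dtype=float)  # rows of (x, y)
    x, y = pts[:, 0], pts[:, 1]
    a, b = np.polyfit(y, x, deg=1, w=weights)         # slope and intercept
    return a, b
```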

Finally, the following methods do not belong to any of the above three categories. Kise et al. (2005) used an elevation map with a stereo camera to acquire 3-D distance information. The guidance line was extracted from the elevation map, which divided zones according to crop height. This method is restricted in its application to certain cases, including when the height difference between the ground and the crop is irregular or the crop is small. Fontaine and Crowe (2006) segmented areas of consecutive crop pixels in an image, treating the crop regions as white shapes. The guidance line was determined by calculating the center of gravity of each white shape and then connecting the calculated centers of gravity. In this method, the accuracy is lowered by errors in the calculation of the centers of gravity of the white shapes, which are not well approximated by circular forms. Guerrero et al. (2013) used the robust Theil-Sen estimator (Massart et al., 1997) to adjust the guidance line to actual crop rows. The Theil-Sen estimator was applied to estimate the slope angle and the intercept of the crop rows because of its statistical efficiency and robustness.
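For completeness, a sketch of the Theil-Sen estimator mentioned above is shown below. The point set (crop centroids or pixels) and the line parameterisation x = a*y + b are illustrative choices, not taken from Guerrero et al. (2013).

```python
import numpy as np

def theil_sen_row_line(points):
    """Robust Theil-Sen fit x = a*y + b through crop points.

    The slope is the median of all pairwise slopes and the intercept is the
    median of the per-point residuals, which makes the fitted row line
    insensitive to outliers such as stray leaves or inter-row weeds.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    slopes = [(x[j] - x[i]) / (y[j] - y[i])
              for i in range(len(pts))
              for j in range(i + 1, len(pts))
              if y[j] != y[i]]
    a = float(np.median(slopes))
    b = float(np.median(x - a * y))
    return a, b
```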

As noted in previous studies, a critical factor for the accuracy of the guidance line is the morphological characteristics of the crop segmented from an image. In other words, the accuracy of the guidance line is strongly affected by the estimation of the central region or point of each crop image. To improve accuracy, the central region or points of the crop row must be estimated from the morphological characteristics in the image, which are affected by the orientation and density of the crop leaves, the stem size, and the offset of the camera frame resulting from changes in the posture of the robot. Specifically, in the case of an irregularly shaped crop, such as rice plants, it is difficult to accurately identify the center point using previous methods because the leaves are oriented in multiple directions. Thus, it is necessary for the guidance line extraction algorithm to focus on the crop's morphological characteristics to minimize error. This paper proposes a new algorithm for guidance line extraction to improve the accuracy of weeding robots in the paddy field environment. This research implemented an algorithm to precisely extract a guidance line along the direction of a crop row using the morphological characteristics of the detected image. In addition, we propose a simple method for real-time processing. This paper consists of four sections. The first section presents the introduction, the second section describes the materials and methods, the third section presents the experimental results and the fourth section outlines the conclusions of this research.
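To make the overall idea concrete, the sketch below extends leaf-edge segments to virtual straight lines, intersects them pairwise to approximate the central stem regions toward which the leaves converge, and fits a line through the resulting intersection points. The ordinary least-squares fit at the end is only a stand-in for the modified robust regression used in the paper, whose exact form is not reproduced in this excerpt; segment extraction and all thresholds are likewise assumptions.

```python
import numpy as np

def guidance_line_from_segments(segments):
    """Illustrative pipeline: extend leaf-edge segments to virtual lines,
    intersect them pairwise, and fit a guidance line x = m*y + k through
    the resulting intersection points.

    `segments` is a list of ((x1, y1), (x2, y2)) straight segments, e.g.
    obtained with a Hough transform on the plant edges. The final
    least-squares fit is a stand-in for the modified robust regression.
    """
    # Each extended virtual line in the form a*x + b*y = c.
    lines = []
    for (x1, y1), (x2, y2) in segments:
        a, b = y2 - y1, x1 - x2
        lines.append((a, b, a * x1 + b * y1))

    # Pairwise intersections approximate the central stem regions, because
    # rice leaves converge toward the stem of the plant.
    points = []
    for i in range(len(lines)):
        a1, b1, c1 = lines[i]
        for j in range(i + 1, len(lines)):
            a2, b2, c2 = lines[j]
            det = a1 * b2 - a2 * b1
            if abs(det) > 1e-9:                      # skip near-parallel pairs
                points.append(((c1 * b2 - c2 * b1) / det,
                               (a1 * c2 - a2 * c1) / det))

    # Fit x = m*y + k through the intersection points (stand-in fit).
    pts = np.asarray(points, dtype=float)
    m, k = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return m, k
```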

Section snippets

Weed control robot (K-Weedbot) and algorithm design architecture

The proposed algorithm was applied to a weed control robot (K-Weedbot) in paddy fields. Fig. 1(a) shows the robot configuration; this robot used a special type of wheel to remove the weeds while moving between rice rows, which were located under the center of the robot. The robot was equipped with screws instead of common wheels to ensure excellent performance on water-soaked soil, and a vision camera was mounted in front of the robot for image acquisition. An inexpensive CCD camera (Logitech®

Experimental results

Experiments were performed in Pyeongtaek, Gyeonggi-do, Republic of Korea, during April and May 2014. The experimental video samples were obtained using a mounted camera connected via USB communication while the robot drove at a speed of approximately 0.2 m/s in a rice paddy field. The image samples consisted of four video samples, and each sample had 100 images. Each sample involved a different rice size, corresponding to different growth times (15–50 days) of the rice plant, to verify that the

Conclusion

Guidance line extraction allows a robot to follow a crop row without damaging crop plants and is thus an important element in successful weed control. The accuracy of the guidance line is affected by morphological characteristics of the crops in the image. In the case of rice plants, the guidance line is difficult to extract precisely because the leaves are oriented in multiple directions. Therefore, this paper proposed a new guidance line extraction algorithm to improve the accuracy of weeding

References (26)

  • Romeo, J., et al., 2013. A new expert system for greenness identification in agricultural images. Expert Syst. Appl.

  • Slaughter, D.C., et al., 2008. Autonomous robotic weed control systems: a review. Comput. Electron. Agric.

  • Søgaard, H.T., et al., 2003. Determination of crop rows by image analysis without segmentation. Comput. Electron. Agric.