Assessment of Aquatic Weed in Irrigation Channels Using UAV and Satellite Imagery

Irrigated agriculture requires high reliability from water delivery networks and high flows to satisfy demand at seasonal peak times. Aquatic vegetation in irrigation channels is a major impediment to this, constraining flow rates. This work investigates the use of remote sensing from unmanned aerial vehicles (UAVs) and satellite platforms to monitor and classify vegetation, with a view to using these data to implement targeted weed control strategies and to assess their effectiveness. The images are processed in Google Earth Engine (GEE), including co-registration, atmospheric correction, band statistic calculation, clustering and classification. A combination of unsupervised and supervised classification methods is used to allow semi-automatic training of a new classifier for each new image, improving robustness and efficiency. The accuracy of classification algorithms with various band combinations and spatial resolutions is investigated. With three classes (water, land and weed), good accuracy (typical validation kappa >0.9) was achieved with a classification and regression tree (CART) classifier; red, green, blue and near-infrared (RGBN) bands; and resolutions better than 1 m. A demonstration of using a time series of UAV images over a number of irrigation channel stretches to monitor weed areas after application of mechanical and chemical control is given. The classification method is also applied to high-resolution satellite images, demonstrating the scalability of the developed techniques to detect weed areas across very large irrigation networks.


Introduction
Much of the food and fibre needs of the world are met with irrigated agriculture. A large proportion of the farms devoted to this are supplied by networks of channels, which are seeing increasing levels of automation [1]. Excessive vegetation in these typically earthen channels impacts timely and efficient delivery of water to farms by reducing flow rates [2]. Relationships between channel geometry, aquatic weed growth and Manning's resistance coefficient n for rivers were explored in [3]. Monitoring and controlling weeds in irrigation channels is a costly exercise, and there is therefore demand to improve the effectiveness of monitoring and the methods of assessing new control strategies.
Irrigation channel networks often cover significant areas, with many kilometres of channels. For example, the Murrumbidgee Irrigation network covers a total of 670,000 ha, and delivered water to 117,900 ha of irrigated land in 2017. It includes 3500 km of supply channels and 1600 km of drainage channels [4]. The widths of these channels are typically less than 10 m, the depths less than 1.5 m, and the vast majority are earthen channels. These characteristics are similar to most irrigation areas in the world. Considering the vast distances involved, regularly surveying channels manually for weeds is impractical.
as well as targeted control for the dominant species in sections of channels. Species discrimination using field work is time consuming and only practical in small areas. The authors of [15] focus on discriminating wetland vegetation species and estimating their biophysical properties, using both multi- and hyper-spectral data. Challenges include the difficulty of identifying boundaries between aquatic vegetation species (as they often overlap), and the fact that the reflectance characteristics of different vegetation are often very similar and are mixed with the reflectance characteristics of the surrounding water. In [16], lake and river sites were surveyed at 5-cm resolution. They used visual inspection of the images to classify a large variety of weed species, and compared this with the accuracy of object-based image analysis and classification. Accuracy was better than 90% for water versus vegetation, and better than 50% for taxonomic classification. The possibility of plant species discrimination in estuaries using hyperspectral data was investigated in [17], considering variation in each species' signature with space and time. Submerged plants can be discriminated only in visible wavelengths, as longer wavelengths (near infra-red) are absorbed by the water. The paper investigated three species, eelgrass Zostera capricorni, strapweed Posidonia australis, and paddleweed Halophila ovalis, over two years. The green wavelengths were most significant in discriminating between species, followed by red. They give suggested wavelengths and bandwidths to optimally classify these species.
This work aims to apply remote sensing techniques to enable efficient detection of weed in irrigation channels. The challenges particular to this application include the narrow width of the channels (compared with previous works, which focussed on larger bodies of water such as lakes and rivers). Irrigation networks are also sparse and spread over extensive areas, rather than concentrated in a tight area. In addition, irrigation channel water is often highly turbid. The study takes multispectral imagery from UAVs and satellites, processes it in Google Earth Engine (GEE) [18], and uses unsupervised and supervised classification algorithms to automatically map areas of weed infestation. This information is of great use to irrigation water delivery companies as they seek to optimise their networks to enable timely and high capacity delivery of water to farms.

Location
This study was based in the Murrumbidgee Irrigation Area (MIA), which includes the towns of Griffith, Whitton and Leeton in NSW, Australia. A wide variety of crops are grown in the area, with major summer crops including rice, cotton, citrus and grapes. The irrigation water is delivered by Murrumbidgee Irrigation, using a gravity-fed channel network from the Murrumbidgee River. The total irrigation area covers 670,000 ha, of which approximately 18% of the area was irrigated in 2017 [4]. A Sentinel-2 mosaic from January 2018 of a portion of the MIA is shown in Figure 1, centred at 34°24′33.0″S, 146°01′53.9″E (WGS84). The figure contains annotations of the irrigation network supply channels, and of the areas mapped by UAV and satellite for this study.

Aquatic Vegetation
Four genera of aquatic vegetation grow in excess in MIA irrigation channels, impeding flow and water delivery [19]. Three of the genera are native. One is cumbungi, Typha spp., an emergent reed with flat leaves and cylindrical stalks (2 cm diameter) that grows up to 4 m tall. Both T. domingensis and T. orientalis grow in the MIA region. They have clonal growth, with upright shoots emerging each year from extensive underground rhizome beds. The other two genera of weedy vegetation are also attached to the bottom of the channels, but their leaves float on the water surface. The ribbonweed Vallisneria spp. grows strap-like leaves, 3 m long and 3 cm wide, from stolons in the sediment. In high nutrient waters, the green leaves appear brown as they are covered in algae. Both V. australis and V. nana grow in the MIA, with V. nana having a slightly narrower leaf width. The third native genus of weedy plant in the MIA is pondweed Potamogeton spp. It has both submerged and floating leaves off stems up to 4 m long, which grow out of rhizomes in the sediment. The floating leaves, 10 cm long and 7 cm wide, tend to be more robust than the translucent submerged leaves, which are 20 cm long and 1 cm wide. Both P. tricarinatus and P. ochreatus grow in the MIA.
The introduced weedy vegetation is parrot's feather Myriophyllum aquaticum, a native of South America. Stems of up to 2 m grow from rooted stolons, and pale green, whorled and feathery leaves grow off the stems. Submerged leaves rot, leaving only the emergent leaves at the end of a long, bare submerged stem.
Potamogeton spp. grow rapidly in spring (September to November), followed by summer (December to February) growth of Vallisneria spp. Typha shoots can grow at any time of year, but the majority emerge in summer and autumn (March to May). Control of the biomass of the four genera in the irrigation channels is achieved by different strategies, including both physical removal and chemical treatment. Photographs of the weed types are shown in Figure 2.

Image Capture
This study investigated the use of remotely-sensed multispectral imagery from both UAV and satellite platforms for detection of weed areas in irrigation channels.

UAV
UAV images were captured from a DJI (Shenzhen, China) Inspire 1 v2. This is a quadcopter, weighing around 3 kg. A MicaSense (Seattle, WA, USA) RedEdge camera was attached to the UAV. Four of the five camera bands were used and are specified in Table 1. The flight paths were automated using the Drone Deploy iPad application. The images were captured from an altitude of 75 m with along-track overlap of 80% and at least two passes along the channel length. The ground pixel size was 5 cm. Before each flight, images of a calibration reflectance panel were taken. The images were radiometrically calibrated and orthomosaics were generated with the Pix4D software. Images were captured over four channel stretches as part of a study of the effectiveness of different weed control strategies. To reduce the impact of shadowing, where possible, the images were taken within 2 h of solar noon. The treatments, channel dimensions and image capture dates are shown in Table 2. The untreated channel (U) had no weed control applied. The mechanical treatment channel (M) had excavator de-silting carried out on 23 July 2018. The two chemical treatment channels (C1 and C2) had Endothall herbicide applied on 11 July 2018. Throughout the study period, each of the channels also had a WiField logger [20] continuously measuring water depth and temperature. Water depth was measured using a MaxBotix (Fort Mill, SC, USA) MB7389 ultrasonic sensor, and the water and air temperatures were measured using digital Maxim (Sunnyvale, CA, USA) DS18B20 temperature probes.

Satellite
In order to assess the effectiveness of the developed weed detection methods in classifying much larger areas of irrigation channels, two images from the WorldView-3 satellite were purchased. These were selected from the archive, captured on 7 January 2017 and 13 March 2018, with both captures occurring within 15 min of local noon. The common area between the images was used for analysis, covering 38.5 km². This covers part of the MIA around the town of Whitton, and the capture area is shown in Figure 1. The images had a pan-sharpened resolution of 30 cm, and eight multispectral bands. Four of the eight instrument bands were used, and are specified in Table 1.

Image Processing
Images from the UAV and satellites were uploaded to Google Earth Engine (GEE) as private assets. All subsequent image processing was performed in GEE, with some post analysis on data exported from GEE performed using Python. GEE allows quick and powerful cloud-based spatial data analysis, and includes algorithms for image co-registration, clustering and supervised classification [18].

WorldView-3 Satellite Image Pre-Processing
The WorldView-3 satellite images are provided as 8-band digital number (DN) products. These images were pre-processed in GEE to obtain surface reflectance. First, they were converted to at-sensor radiance images using the data in [21]. Then, to obtain surface reflectance, the dark object subtraction (DOS) atmospheric correction method was applied [22].
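The two steps above can be sketched in plain Python. The calibration factor, effective bandwidth, solar irradiance and geometry below are placeholder values for illustration only; the real values come from the image metadata and the WorldView-3 calibration tables in [21].

```python
import math

def dn_to_radiance(dn, abs_cal_factor, effective_bandwidth, gain=1.0, offset=0.0):
    # At-sensor radiance (W m^-2 sr^-1 um^-1) from digital numbers, using
    # the per-band absolute calibration factor and effective bandwidth.
    return gain * dn * (abs_cal_factor / effective_bandwidth) + offset

def dos_reflectance(radiance, dark_radiance, esun, earth_sun_dist_au, solar_zenith_deg):
    # Dark object subtraction: the darkest pixels are assumed to have ~zero
    # surface reflectance, so their radiance is attributed to atmospheric
    # path radiance and subtracted before converting to reflectance.
    corrected = radiance - dark_radiance
    return (math.pi * corrected * earth_sun_dist_au**2) / (
        esun * math.cos(math.radians(solar_zenith_deg)))

# Placeholder metadata values (not the actual WorldView-3 constants).
rad = lambda dn: dn_to_radiance(dn, abs_cal_factor=0.01, effective_bandwidth=0.06)
dn_dark, dn_pixel = 120, 900
rho = dos_reflectance(rad(dn_pixel), rad(dn_dark),
                      esun=1550.0, earth_sun_dist_au=0.983, solar_zenith_deg=30.0)
```

By construction, the dark-object pixel itself maps to zero reflectance after correction.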

UAV Image Co-Registration for Time-Series Analysis
The UAV orthomosaics were not co-registered. In order to define geometries marking areas common to all the images in the collection (for example, geometries marking areas of land and channels), geo-location errors due to GPS inaccuracy need to be corrected. Thus, images were co-registered using the GEE registration algorithm. The images from each channel site were co-registered to the corresponding 12 July image using the normalized difference vegetation index (NDVI) band.

Masking
A shapefile of the Murrumbidgee Irrigation supply channel network was obtained, and uploaded as a Google Fusion Table. This was then imported to GEE as a Feature Collection. To avoid problems with registration errors between the geometries in the supply channel features and the UAV and satellite images, the supply channel geometries were buffered by 10 m. These buffered geometries were then used to mask the images, so the resulting images only included the supply channels and a narrow region of land around the channels, with a total width of 20 m. This masking results in a reduced number of pixels to be processed. It also makes land area classification more accurate, as there is less variation in the reflectance characteristics of the land areas on either side of the supply channels than in that of the total land area of the unmasked images.

Vegetation Index Computation
Image bands were processed to give vegetation indices, including the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), normalized difference aquatic vegetation index (NDAVI), and visible atmospherically resistant index (VARI), among others [9,23,24]. The equations for the indices used are given in Table 3. Statistics for each of the bands and indices were computed over areas of interest.

Table 3. Vegetation index definitions, where B, R, G and NIR are blue, red, green and near-infrared reflectance respectively, and L = 0.5.

Index | Name                                           | Equation
NDVI  | Normalized Difference Vegetation Index         | (NIR − R)/(NIR + R)
NDWI  | Normalized Difference Water Index              | (G − NIR)/(G + NIR)
NDAVI | Normalized Difference Aquatic Vegetation Index | (NIR − B)/(NIR + B)
WAVI  | Water Adjusted Vegetation Index                | (1 + L)(NIR − B)/(NIR + B + L)
VARI  | Visible Atmospherically Resistant Index        | (G − R)/(G + R − B)
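The named indices follow the standard definitions and can be computed per pixel as below; the reflectance values in the example are illustrative only.

```python
def ndvi(nir, r):
    """Normalized Difference Vegetation Index."""
    return (nir - r) / (nir + r)

def ndwi(g, nir):
    """Normalized Difference Water Index (green/NIR form)."""
    return (g - nir) / (g + nir)

def ndavi(nir, b):
    """Normalized Difference Aquatic Vegetation Index."""
    return (nir - b) / (nir + b)

def vari(g, r, b):
    """Visible Atmospherically Resistant Index."""
    return (g - r) / (g + r - b)

# Illustrative reflectances: a vegetated pixel (high NIR) and a
# water pixel (very low NIR).
veg = dict(b=0.04, g=0.08, r=0.05, nir=0.45)
wat = dict(b=0.04, g=0.05, r=0.03, nir=0.01)
veg_ndvi = ndvi(veg["nir"], veg["r"])   # strongly positive for vegetation
wat_ndwi = ndwi(wat["g"], wat["nir"])   # positive for water
```

Note the sign behaviour that drives class separation later: vegetation pushes NDVI high, while water pushes NDWI high and NDVI negative.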

Geometry Definition for Sampling
Polygon geometries were drawn on the images to identify areas of water, land, weed and channel (containing both water and weed). These areas are easily identifiable from red-green-blue images, so the polygons were drawn by visual inspection of the images. In each case, stratified sampling algorithms in GEE were used to randomly sample pixels within the geometries, with approximately 1000 pixels being taken from each geometry. Given the large number of samples, and the random distribution of the samples within the geometries, the sampled reflectance characteristics should be a good representation of the actual reflectance characteristics of each of the classes. These samples were then used for classifier training and validation. Separate geometries were used for training and validation to ensure fair assessment of classification accuracy.
The samples were also used to assess the distribution of vegetation index values within each class. Statistics of the sampled pixels were computed. To assess the separability of the water, land and weed classes based on vegetation indices, the normalized mean difference (NMD) was used [9]:

NMD(n, m) = |µ_n − µ_m| / (σ_n + σ_m),

where µ_n denotes the mean of the vegetation index within the n-th class, and σ_n the standard deviation. This provides a measure of how different the means of the indices of pairs of classes are relative to their variance.
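The NMD is straightforward to compute from the class samples; the NDWI sample values below are synthetic, chosen so that water is well separated from weed while land and weed overlap, mirroring the behaviour reported later.

```python
from statistics import mean, stdev

def nmd(samples_a, samples_b):
    # Normalized mean difference between two classes of index samples:
    # |mu_a - mu_b| / (sigma_a + sigma_b).
    return abs(mean(samples_a) - mean(samples_b)) / (stdev(samples_a) + stdev(samples_b))

# Synthetic NDWI samples for three classes.
water = [0.42, 0.45, 0.40, 0.44, 0.43]
weed = [-0.31, -0.28, -0.33, -0.30, -0.29]
land = [-0.25, -0.35, -0.20, -0.40, -0.30]
water_weed = nmd(water, weed)  # large: well separated
land_weed = nmd(land, weed)    # small: distributions overlap
```

A large NMD means the class means differ by many pooled standard deviations, so a simple threshold or clustering on that index can separate the classes reliably.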

Classification
The images were classified into areas of water, land and weed. A number of supervised classification methods provided in GEE were tested.
The obvious way to classify the images is to define geometries identifying areas of water, land and weed for each image, sample the pixels of the image within each geometry, and pass these labelled samples to the classification algorithm. However, it quickly became apparent that this is both time-consuming and prone to error once multiple images need to be processed (for example, a time series of multiple stretches of channel to detect weed area change over time). If a classifier is trained on one image, and applied to another, it is likely that water colour changes, weed variation and land vegetation variation on the channel banks will cause the classification of the new image to be inaccurate. This was observed in this study. One alternative is to train the classifier using samples from many images over multiple areas and times; however, this is time consuming as new polygons need to be manually drawn for each image. There is also no guarantee that the images chosen to train the classification on will cover all the spectral characteristics of the classes that will be encountered in future images.
Clearly, a method is needed to enable robust classification of images that does not rely on users spending significant time drawing new polygons identifying classes for each new image. To this end, a semi-automatic method to re-train the classifier for each new image was developed using a combination of unsupervised and supervised classification. The unsupervised classification automatically finds geometries of water and weed to sample within to train the supervised classifier. The steps are as follows (illustrated in Figure 3):

• For land areas, a number of polygon geometries covering land areas around the edges of the channels are drawn. Land areas do not change with time, so they can be used for all new images. These areas were then used as the sampling geometries to generate training data for the land class.
• For water areas, clustering analysis (unsupervised classification) is used to automatically find areas of water in the image:
  - The clustering is performed using the band with the best separability between the water class and the other classes. The K-Means clustering method implemented in GEE was used with three clusters.
  - The mean NDWI of each of the clusters is calculated (as NDWI was found to give the best separability between water and other classes, shown in Section 3.1.1 below). The cluster with the maximum NDWI is used as the sampling geometry to generate training data for the water class.
• To find weed areas, polygons were drawn covering only channel areas with a reasonable distribution of water and weed (excluding land). These areas thus contained only water and weed. Clustering analysis was again performed on the NDWI of these channel areas using a similar method to that described above. The cluster with the minimum NDWI was used as the sampling geometry to generate training data for the weed class.
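The weed-sampling step above can be sketched in plain Python. The one-dimensional two-cluster k-means below is a stand-in for the GEE K-Means clusterer (k = 2 here because a channel polygon contains only water and weed; the full-image clustering used three clusters), and the NDWI values are synthetic.

```python
def kmeans_1d_two_clusters(values, iters=20):
    # Minimal 1-D k-means with k = 2, initialised at the data extremes
    # so the result is deterministic.
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for j in (0, 1):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels, centers

# Synthetic NDWI samples from a channel polygon: water is high NDWI,
# weed is low NDWI.
channel_ndwi = [0.41, 0.44, 0.39, 0.43, -0.28, -0.33, -0.30, 0.42, -0.31]
labels, centers = kmeans_1d_two_clusters(channel_ndwi)

# The minimum-NDWI cluster becomes the weed training sample; the
# remaining cluster is water.
weed_cluster = centers.index(min(centers))
weed_training = [v for v, l in zip(channel_ndwi, labels) if l == weed_cluster]
water_training = [v for v, l in zip(channel_ndwi, labels) if l != weed_cluster]
```

In the actual workflow, the pixels falling in each selected cluster (with all image bands attached) are then passed as labelled training samples to the supervised classifier.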
This combination of unsupervised classification and supervised classification results in a robust and time-efficient method, where a new classifier is generated for each new image. The definition of training polygons is simple, as only polygons covering some of the land in an image, and some of the channel area in an image are needed. These same polygons can be used over multiple images of the same area in a time series, as the clustering automatically separates the water and weed areas for sampling (the user does not need to manually define weed/water areas for each new image).
To validate this classification procedure, manual validation geometries were drawn on selected images defining areas of land, water and weed. These geometries were separate from the land and channel polygons used for training the classifier, to ensure fair assessment of classifier performance. Validation is assessed for the main results using error matrices, as recommended in [25]. These matrices provide information on both omission errors (validation pixels that were not correctly classified to their true class) and commission errors (validation pixels that were incorrectly classified to another class). For some of the comparison results (comparing classifiers, resolutions and band inputs), where space would prohibit printing all the error matrices, the kappa value is used to provide a summary of the classification accuracy, as it includes information on both correctly and incorrectly assigned validation points [25]. Values of kappa over 0.75 are generally regarded as indicative of excellent agreement between actual and predicted classes.

Intensive Small Area Mapping by UAV
A time series of UAV images was used to assess the possibility of detecting weed change with time. The capture areas are shown in Figure 1. The capture dates and times are shown in Table 2.
Throughout the study, WiField loggers collected water depth and temperature data, which are shown in Figure 4. Evaporation and leakage/seepage are indicated by smoothly declining water levels, as seen in the chemical treatment channels (C1 and C2). These channels were 'locked-up' after chemical application with no water allowed back in during the treatment period. The untreated channel (U) was maintained at a constant water level by the automated channel control system, except towards the end of the study where it was drained and then refilled. The mechanical treatment channel (M) had low water levels while the de-silting was occurring, and was refilled shortly after.

The first image in the time series (2 July) from the C1 channel was used to assess class reflectances, and classifier accuracy with different bands, resolutions and classification algorithms. The selected parameters were then used on a time series of images from the U (untreated) channel to validate the classification method over numerous image dates. The validated method was then used over all the channels (U, M, C1 and C2) over the whole time series to assess changes in the weed area, to compare the different weed control methods.

Reflectances of Each of the Classes
Geometries identifying areas of water, land and weed were drawn on the 2 July C1 image. The values of the vegetation indices within each of these labelled geometries were sampled. A histogram of the values for significant vegetation indices is shown in Figure 5. This indicates the potential separability of each of the classes. Clearly, it is quite straightforward to separate water from land and/or weed using any of the indices shown. It is more difficult to separate weed and land. Land includes both unvegetated areas (bare soil, roads, structures, etc.) and vegetated areas such as trees and other plants. This is the reason for the bi-modal distribution of the vegetation indices of the land class.
These observations are quantified in Table 4, which shows the normalized mean distances (NMDs) between each of the classes. NDWI seems the best index to separate water from both land and weed, with NMDs of 7.38 and 14.46, respectively. This is why clustering can be used to robustly and automatically find water areas. The low NMDs for land-weed indicate a simple separation is not possible. The land area does not change between images in a time series, so can easily be defined by static geometries. There is good separability between weed and water classes, if land can be excluded. Thus, weed training areas can be defined by clustering defined channel geometries that include water and weed but not land. The best index for separating water and weed is again NDWI, with an NMD of 14.46 from Table 4.
These observations lead to the combined unsupervised (clustering) and supervised classification methodology described in Section 2.4.6 and diagrammed in Figure 3.

Classification Accuracy
Having devised the combined unsupervised-supervised classification method that is hypothesised to be robust, time-efficient and simple, the method is now assessed. First, different supervised classification algorithms are compared, then optimal combinations of image bands and indices input to the classifier are compared, and then the accuracy with a range of image resolutions are assessed.
For brevity, classification performance with different parameters will be compared with the kappa value, computed from the error matrices. An example is given from the 2 July image of the U channel. The selected optimum parameters found in the following sections are used (classification and regression tree (CART) classifier; 0.05 m resolution; and the red, green, blue and near-infrared bands). The error matrix is shown in Table 5. The matrix indicates that all 1000 samples from within the water validation geometry were classified correctly, 23 of the 1000 samples from within the land validation geometry were incorrectly classified as water and 24 were incorrectly classified as weed, and so on. This error matrix has an overall accuracy of 0.983 (the sum of the diagonal divided by the total observations), a consumer's accuracy of [0.978, 0.996, 0.976] and a producer's accuracy of [1, 0.953, 0.996]. The kappa value for this matrix is 0.974, indicative of excellent classifier accuracy.
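The accuracy and kappa calculations can be reproduced from the error matrix. The matrix below is reconstructed to be consistent with the accuracies quoted above; the individual off-diagonal counts in the weed row are inferred from the reported consumer's and producer's accuracies, not copied from the paper's table.

```python
def accuracy_and_kappa(matrix):
    # Rows: true class, columns: predicted class.
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    po = diag / n  # overall (observed) accuracy
    # Expected chance agreement from the row and column marginals.
    pe = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
             for i in range(len(matrix))) / n**2
    return po, (po - pe) / (1 - pe)

# Classes: water, land, weed (1000 validation samples per class).
matrix = [
    [1000, 0, 0],    # water: all correct
    [23, 953, 24],   # land: 23 -> water, 24 -> weed
    [0, 4, 996],     # weed: 4 -> land (inferred)
]
po, kappa = accuracy_and_kappa(matrix)
```

Running this gives an overall accuracy of 0.983 and kappa of 0.9745, matching the values reported for Table 5.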

Comparison of Classification Methods
GEE implements a range of classification algorithms. The accuracy of many of these was tested for this application, using the 2 July UAV image from the C1 channel. In each case, the default classification parameters were used. The kappa value results (calculated from the error matrices of actual vs. classified values) are shown in Table 6. It can be seen that the classification and regression tree (CART) classification performed the best. In the following sections, the CART classifier will be used.

Comparison of Classification Bands
Here, the optimal bands to train the classifier with are investigated, again for the 2 July image of the C1 channel. The MicaSense RedEdge camera has red, green, blue, red edge and near-infrared bands. In addition to the raw bands, the classifier was also trained with common indices which are computed from these bands. The results are in Table 7. The vegetation indices taken in isolation do not perform well, probably because of the overlap between land and weed vegetation indices as seen in Figure 5. Taking four raw bands (B, G, R, NIR) is optimal, and is used in the following investigations. The addition of the red edge band is not necessary for this application, but may be more important for higher dimensional classification (for example, to classify different weed species).

Comparison of Image Resolutions
In this section, the effect of image resolution is investigated for the 2 July image of the C1 channel. This is useful to find the minimum required image resolution to classify weed areas in irrigation channels. The UAV imagery had a native resolution of 0.05 m. The imagery was coarsened from 0.1 to 5 m using the GEE resample algorithm. The results are shown in Figure 6. It can be seen that classification accuracy is very good with resolutions better than 1 m. It drops off rapidly above this. This is intuitively expected, as the channel widths are less than 10 m, and weed areas are often small and isolated. The points in Figure 6 increasingly deviated from the line of best fit as pixel size increased. One reason for this is that the training and validation sets became small, as there were limited pixels to sample within the validation geometries at large pixel sizes.

Validation over a Time Series of Images
Having assessed the classification method and parameters on a single image, its robustness over multiple images needs to be validated, as there may be different lighting conditions, water colour and turbidity, weed reflectances, etc. The untreated (U) channel was used for this purpose, as the land, weed and water areas did not change so the same validation geometries could be used for every image in the time series.
The validation kappa for each image is shown in Table 8. The kappa is lowest for the 23 August image. This image was taken just after the channel was drained and refilled, as seen in Figure 4. A visual inspection of this image reveals that some of the weed patches had been either submerged or washed away, which, together with the fact that the validation geometries were kept fixed for all images in the time series, explains the lower kappa for this image. Nevertheless, the accuracy is excellent, with kappa remaining above 0.9 for all images over the two-month period.

One application of irrigation channel weed mapping is assessing change in weed growth. This could indicate potential hot spots of growth that need controlling as soon as possible. It could also provide a method to assess the efficacy of different weed control methods, for example different chemical or mechanical controls. The long-term trends of weed growth could be analysed as a multi-year image database is built.
The four channels (Table 2) were imaged throughout the study period. The fixed training geometries for land and channel were defined for each of the channels. After classification, the area of each of the classes (water, land and weed) was computed. This could be used to indicate decline or growth of weeds. All of this takes less than a minute in the GEE cloud computing environment. The results are shown in Figure 7. Note that these results are from the winter period, when there is considerable variation as channels are drained and refilled and weed control works are carried out. The characteristics are unlikely to show as much variance during the peak irrigation season.

The visual and classified images from 2 July, 9 August and 23 August for the C1 channel are shown in Figure 8. Figure 7a shows the corresponding change in relative areas of land, water and weed for this channel. It is surprising to see the land area decline until 9 August, then rise again. The reason becomes clear when the water depth data in Figure 4 are examined. The water level dropped due to evaporation and leakage/seepage. On 11 August, areas of dry ground were visible. The channel was then refilled to more than its starting level by the 23 August image. There was little weed visible in the channel on this date, which is also correctly shown in both the percentage area graph and the classified image. The other channels in Figure 7 show land areas that are relatively constant, as these channels did not empty to the same extent as the C1 channel. In general, the weed area declines in the chemically and mechanically controlled channels (C1, C2 and M). In the untreated channel (U), the weed area is relatively constant (see Figure 7d). However, the 23 August image shows a lower weed area. This is for the reason discussed in Section 3.1.6: the water level was dropped and refilled immediately prior to the image, leading to some of the weed patches becoming submerged on 23 August.
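Converting per-class pixel counts from a classified image into areas and a weed proportion is a simple post-processing step; the counts below are hypothetical, chosen only to show the calculation at the 0.05 m UAV resolution.

```python
def class_areas(pixel_counts, pixel_size_m):
    # Convert per-class pixel counts from a classified image into areas (m^2)
    # and the weed proportion of the wetted channel: weed / (weed + water).
    area = {c: n * pixel_size_m**2 for c, n in pixel_counts.items()}
    weed_fraction = area["weed"] / (area["weed"] + area["water"])
    return area, weed_fraction

# Hypothetical counts for one UAV image at 0.05 m ground pixel size.
counts = {"water": 1_600_000, "land": 2_000_000, "weed": 400_000}
area, weed_fraction = class_areas(counts, 0.05)
```

Tracking this weed fraction across the image dates in a time series gives the per-channel trends plotted in this section.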
This section has shown the usefulness of using the developed classification method over a time-series of images of irrigation channels to monitor weed growth.

Large Area Mapping by Satellite
The methods developed above are now applied to much larger areas, using satellite images from the WorldView-3 constellation. Two images were purchased from different dates but covering the same 38.5 km² area, as described in Section 2.3.2.
Separate geometries were drawn to mark out areas of land, water and weed in the 2017 and 2018 images. The image bands were sampled randomly within each of these polygons at a resolution of 0.3 m. The medians and 25th and 75th percentiles of the 8-band reflectances of each of the classes for both images are shown in Figure 9. The 2017 image was captured on a day with wind from the north-north-east, creating ripples aligned with the sun direction and thus reflections on the water. This explains the wider spread and different characteristics of the water reflectance in 2017 compared with 2018. The 2017 image was taken in mid-summer, whereas the 2018 image was from early autumn, with corresponding differences in the vegetation on the land and the weed in the channels. The variation in the reflectances of the classes over time is evident, again showing that a classifier generated from one image cannot be applied to accurately classify another image.

Next, the automated unsupervised-supervised classification method (Section 2.4.6) was applied. The same land and channel training geometries were used for both image dates. Separate validation geometries were used (as weed areas were different from 2017 to 2018). The resulting error matrices are shown in Table 9. Kappa remains above 0.85 for both images, indicating excellent accuracy.
The classified areas for each of the classes from the two images are shown in Table 10. The total area classified is 0.614 km² of the total satellite image size of 38.5 km², consisting of 20 m wide strips around the supply channels. The total length of channels mapped is then 30.7 km. The last column shows the proportion of the total channel area (weed + water) that is taken up by weed. The similarity in the land area between the two images verifies that this area is being classified correctly, as it does not change from image to image. The weed percentage information will be very useful for irrigation network companies to track the weed status of their channels through the season, and from season to season. The classification and result calculations on these large high-resolution images took seconds in GEE.

Discussion
This study has demonstrated the potential of remote sensing for monitoring vegetation in irrigation channel networks, a major cause of impeded flow rates. GEE was used to process, analyse and classify UAV and satellite images in seconds.
UAV and satellite imagery each have their benefits. Data from UAVs are useful for intensive, high-resolution monitoring of small areas of channels. They offer a relatively low-cost way of mapping stretches of channel, with the typical time to set up and capture a 1 km stretch being about 1 h. Applications include intensive experiments to assess the effectiveness of weed control methods (chemical or mechanical), which require a time series of images to analyse change; purchasing satellite images over small experimental areas and multiple dates would not be cost effective. UAVs also provide very high resolution, which may be needed for deeper classification than was performed in this study, such as classifying between species of weeds. UAVs will also be useful for checking critical stretches of channel at short notice. For example, if a farmer complains that the promised water delivery rates are not being met, a UAV could be used to survey the channels leading to the farm to determine whether weed is the reason for the constraint, and the water delivery provider could then quickly decide how to rectify the issue.
Satellite imagery will be important for monitoring the large areas covered by entire irrigation channel networks. The resolution of current free satellite imagery is clearly not sufficient; for example, the 10 m resolution of the Sentinel constellation is of little use for monitoring channels with widths well below 10 m. Thus, images must be purchased in order to obtain resolutions of 1 m or better, and they should contain at least the red, green, blue and near-infrared bands. Satellite images could be obtained at a few key times in the year. For example, one in the off-season could be used to plan weed control, when it is possible to dry down channels and target intensive work on problem areas. An image immediately preceding the peak irrigation season could be used to ensure the network is ready to deliver high flows to all areas, and any last-minute rectification of weed issues could be performed. An image in the midst of the peak irrigation season could be used to identify problem areas that are causing impeded flows. As a database of multi-season images is built up, trends in weed growth and problem areas could be identified, the reasons for proliferation of weeds in certain areas investigated, and resilience against weed outbreaks improved. Multi-season images will also allow tracking of the proliferation of introduced species, which may be resistant to current weed control strategies.
The satellite images used in this study were very high resolution 8-band images, which come at a high price. The minimum requirements for satellite images for weed area identification should be determined in order to reduce costs. Example satellite image costs are shown in Table 11.
To compare the cost of surveying the network by satellite with that of a manual survey by a person in a car, the length of the supply networks within a number of rectangular areas in Figure 1 was determined. The length of channels was found to be roughly proportional to the square root of the area: length ≈ 25√area. Thus, for a 500 km² area, the length of channel is about 550 km. If 1.5 m resolution satellite images are determined to be sufficient, the cost of an image will be around 500 × 6 = $3000 (Table 11). For a manual survey, assuming a 10 km/h driving speed, the cost is the hourly rate for a driver/surveyor multiplied by the survey time, plus the mileage costs for the car. Assuming an hourly rate (including tax, superannuation and insurance) of $100/h and mileage costs of $1/km, the cost to manually survey the area would be 100 × 550/10 + 550 = $6050, approximately twice that of the satellite image. This simple analysis shows that replacing manual monitoring with satellite monitoring is an attractive option. Satellite monitoring also provides additional benefits, such as quantifiable weed area calculations and regular mapping of the region. Additional value could be generated from the purchased images by using them for other purposes, such as computing NDVI to determine the total area of irrigated land in a given season.

Table 11. Satellite constellations and approximate costs for four-band tasked images [26].

Constellation    Resolution (m)    Cost ($US/km²)    Minimum Area (km²)

Future work will involve investigating the benefits that hyperspectral data may bring to the task of weed identification. Hyperspectral data have more bands and may therefore be able to separate the classes more precisely, particularly considering the different reflectance signatures of weed and land vegetation species [15]. The WorldView-3 data have eight bands, of which only four were used in this study. The ability to classify between species would be useful, as different weeds are more responsive to different methods of control. Higher resolution imagery may also be needed so that object- and texture-based classification methods can be employed [16].
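The back-of-the-envelope cost comparison above can be reproduced as a short script. All figures are the assumptions stated in the text; note that the empirical relation gives 25√500 ≈ 559 km, which the text rounds to about 550 km.

```python
import math

# Assumed figures from the worked example in the text.
area_km2 = 500.0
# Empirical relation: length ~ 25 * sqrt(area); ~559 km, rounded to 550 km.
channel_length_km = 550.0
assert abs(25 * math.sqrt(area_km2) - channel_length_km) < 10

# Satellite: $6/km^2 for 1.5 m resolution tasked imagery (Table 11).
sat_cost = area_km2 * 6.0

# Manual survey: labour for the driving time plus vehicle mileage.
hourly_rate = 100.0   # $/h, including tax, superannuation and insurance
speed_kmh = 10.0      # assumed driving speed along channels
mileage_rate = 1.0    # $/km vehicle costs
manual_cost = (hourly_rate * channel_length_km / speed_kmh
               + mileage_rate * channel_length_km)

print(f"satellite: ${sat_cost:.0f}, manual: ${manual_cost:.0f}")
# satellite: $3000, manual: $6050 -- roughly a factor of two
```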
Recent work has shown the strong impact of anisotropic reflectance in mixed vegetation and water environments [27]. The electromagnetic reflectance environment of irrigation channels is complex (with interactions between solar radiation, the vegetation canopy, the water surface, the water column and the channel bottom), and the use of radiative transfer models of the system to obtain bidirectional reflectance functions may enhance classification accuracy [28]. More work is needed in this area.
As a larger database of channel images is built up over a longer time scale, with more variance in the reflectance characteristics of each of the classes, it will be useful to investigate whether a single classifier can be trained that is robust across all images. The classifier could be updated as new images become available. It would also be useful to investigate the possible merging of classifiers for UAV- and satellite-based images, which may involve some radiometric correction, as the instrument bands differ, as seen in Table 1. Other future work could involve applying the techniques developed here to other regions, and assessing how robust the weed detection method is to the different species of weeds that are predominant in those areas.

Conclusions
Weed growth in irrigation network channels is a major source of flow constraint, resulting in slow delivery of irrigation water to farms. This paper has demonstrated the use of remotely sensed multispectral images from UAV and satellite platforms to detect and quantify areas of weed in irrigation network channels. An algorithm was developed that combines unsupervised and supervised classification to robustly identify areas of weed, water and land with very little user intervention or overhead in supplying training data for new images. Classification accuracy typically achieved kappa values greater than 0.85. The bands required to achieve these results were red, green, blue and near-infrared, and resolutions better than one metre are recommended. UAV images were shown to be useful for intensively monitoring small areas of irrigation channels over time, for example, to quantify the change in weed after chemical or mechanical controls are implemented. Satellite images were processed over very large areas, and the total area of weed in the channels throughout the images was automatically calculated. These techniques will facilitate improved maintenance of timely water delivery from irrigation channel networks.