Drone Aided Thermal Mapping for Selective Irrigation of Localized Dry Spots

We report a smart irrigation system that allows selective irrigation of localized dry spots in an agricultural field. The proposed irrigation system uses a quadcopter drone equipped with a Thermal Infrared (TIR) camera and a GPS module to generate georeferenced thermal images that indicate the area and location of the dry spots in a survey area. The drone navigates the field and acquires aerial thermal images, which are processed by an onboard edge intelligence module along with flight data (GPS coordinates, altitude, and drone direction). Smart sprinklers deployed on the field wirelessly receive the coordinates of dry spots so they can be irrigated selectively. A terrestrial edge unit uses a pre-trained machine learning (ML) model to generate an irrigation pattern for the smart sprinklers by varying the head rotation angle (θ) and the water flow control valve rotation angle (∅) of each sprinkler.


I. INTRODUCTION
The agricultural sector consumes more than 70% of freshwater [1]. Underwatering and overwatering are among the pressing issues in crop irrigation [2]. Overwatering depletes freshwater supplies and causes damage to crop roots, soil runoff, mineral runoff, and algal blooms in nearby freshwater sources [3], [4]. Precise and accurate irrigation is the smart way to avoid these problems. Although IoT-based smart sprinklers have been developed for irrigation [5], [6], [7], when used over a large area of farmland, the significant setup costs, additional recalibration time, and related issues make this strategy inefficient [7].
Unmanned aerial vehicles (UAV) have become a popular and affordable technology in recent years for capturing high-resolution remote sensing images for various precision agriculture applications. Drones have been used in agriculture for decades for pesticide spraying, seeding, plant health monitoring, crop management [8], [9], [10], [11], [12] and production cost management [13], [14]. Drone imaging is combined with artificial intelligence for better accuracy [15]. Contactless Thermal Infrared imagery is an effective way to characterize the temperature variations of agricultural fields. TIR imaging has been used to find kangaroos in Australia [16] and spider monkeys in Mexico [17]. Using TIR images, nesting bird habitats have been identified [18]. TIR images are used to evaluate forest ecosystems [19], archaeological sites [20], [21], [22] and photovoltaic installations [23]. UAV-based TIR imaging technology is also used to detect compost pile conditions [24], peat fires [25], mesocarnivores in farmland [26], search and rescue targets [27] and so on.
Overwatering and crop irrigation issues can be solved more effectively with the help of a drone equipped with a thermal infrared camera. We used UAV-based imaging to generate the thermal map of an area within minutes, which in turn was used to control smart water sprinklers. A machine learning model trained on processed TIR images, flight data, primary agricultural data, and meteorological parameters such as geographical location, soil moisture absorption, wind, and weather data can be used to predict the moisture content of each spot and choose appropriate sprinkler parameters. The nearby sprinklers can then irrigate that dry area based on the ML decision generated and transmitted by the terrestrial edge station.
Carefully managed irrigation systems result in shorter running times and lower water consumption [16]. Sprinklers with revolving nozzles enable irrigating in a circular pattern [28]. Several smart irrigation models have been developed based on thermal imaging [29]. The entire farm area can be divided into small sectors. We adopted the sector division technique of cellular communication, which divides a large segment into smaller circular segments [30], [31]. The smart sprinklers are placed at the center of each sector and can hydrate a radius of 10-20 m. However, circular sectors create considerable overlaps. Hence, we approximated each sector as a hexagon, which has an area equivalent to 82% of the circumscribing circle, resulting in less overlap of the irrigated land.
The TIR and RGB images taken by the drone are processed using onboard edge intelligence. The images processed by the onboard drone edge AI are converted into georeferenced data, which is then sent to the terrestrial edge unit using LoRa WAN communication. The terrestrial edge station generates the irrigation pattern using a trained machine learning model. This decision and irrigation pattern are transmitted to the smart sprinklers through LoRa WAN communication. Wi-Fi, ZigBee, and NRF24L01 RF communication-based smart sprinklers are commonly used in short- to mid-range IoT smart irrigation [32], [33], [34], [35], [36], [37]. These cost-effective communication methods work well over mid-range distances of around 100 meters without an external antenna [38], [39], [40], and can be upgraded with a dedicated antenna to reach a few hundred meters. LoRa WAN communication is suited for a large area because of its broad coverage of a few kilometers without secondary repeaters, reducing communication complexity and initial setup cost [40], [41], [42]. Smart sprinklers can be powered by a solar panel, completely avoiding batteries; solar pumping without batteries has shown success in good sunlight [43], [44].

Although cloud computing is highly scalable and inexpensive, it has significant latency and puts considerable strain on network bandwidth. For the proposed system to be efficient and reliable, latency should be as low as possible, so edge computation is integrated into the system [45], [46], [47]. Edge computing, as defined by the Edge Computing Consortium (ECC), is a distributed open platform that integrates network, storage, and application capabilities at the network edge, near the objects or data sources [48]. Data processing under this computing paradigm does not need to send information to the cloud or other distant centralized or distributed systems. Edge intelligence is the next evolution of edge computing and artificial intelligence (AI); it intends to empower the edge by utilizing AI technologies [49]. Typical edge computing systems feature graphics units and CPUs to efficiently handle the trained ML model inside a standalone computational device. The clustering of computing resources in the edge system enables parallel processing of the overall task, which reduces latency and enables handling of complex tasks like image processing, machine learning, and deep learning within the cluster itself [50].

A. PROPOSED DRONE-AIDED EDGE COMPUTING ARCHITECTURE
The drone-aided smart irrigation system, shown in Figure 1, is designed to accurately irrigate a farm area and lessen excessive freshwater use. The entire system consists of two subunits: aerial and terrestrial. The aerial unit has an autonomous drone whose aerial edge computing station captures and processes thermal images and transfers the decision to the terrestrial sector through LoRa WAN communication. The terrestrial unit comprises the terrestrial edge station, smart sprinklers, hydration pumps, and drone takeoff pads.
The terrestrial edge can generate an irrigation pattern based on the georeferenced data from the drone. The machine learning (ML) model at the terrestrial edge server generates the irrigation pattern and sets sprinkler parameters such as flow rate and flow direction in order to irrigate localized dry spots. By relying on a low-cost, low-resolution Thermal Infrared (TIR) camera, the captured images are preprocessed and georeferenced before being sent to the terrestrial edge station for identifying the most effective irrigation pattern, allowing network, memory, and computing resources to be saved.
We built a custom quadcopter drone for aerial remote sensing. The custom-made drone is equipped with a raspberry pi 4 for image capturing and processing. An RGB camera (pi camera) and a TIR camera (FLIR) are used to collect the RGB and TIR spectral images of the ground to identify the localized dry spots. We used the open-source software Mission Planner from Ardupilot to define the flight path. The area of surveillance can be selected using the software (Figure 1). The image capturing and processing are done using python programming on the onboard aerial edge station. This data is further transmitted to the terrestrial edge station for predicting and calculating the irrigation pattern using a trained machine learning model.
Once all the necessary inputs (GPS coordinates, altitude, hovering speed) are programmed into the drone, it is ready for remote sensing. After taking off, the drone moves from waypoint to waypoint and captures the RGB and the TIR images. These combined images (T_ig) and the flight data (GPS coordinates C_gps, altitude A_lt, and drone direction D_i) are transmitted to the onboard raspberry pi through the UART communication interface (see the sketch after this paragraph). An image processing algorithm in python takes these data and calculates each localized dry spot and its relative dryness. This data is further transmitted to the terrestrial edge station for generating the irrigation pattern through a trained machine learning model. After the mission is completed, the drone returns to land for recharging. The terrestrial unit consists of the terrestrial edge station and the sprinkler control unit. The terrestrial edge unit receives the georeferenced output through LoRa WAN communication. The terrestrial edge station uses a raspberry pi 4-8GB to calculate the sprinkler control parameters (SCP) and to generate an irrigation pattern, which is transmitted to the smart sprinklers through LoRa WAN communication. LoRa WAN technology's long range covers a large area of land without a secondary signal repeater.
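The flight-data handoff can be made concrete with a short sketch. This is a minimal example assuming the MAVLink protocol is read with the pymavlink library; the serial port, baud rate, and message type below are illustrative, not the exact deployed configuration.

```python
# Sketch: reading flight data (C_gps, A_lt, D_i) from the Pixhawk over UART
# on the onboard raspberry pi. Assumes MAVLink via pymavlink; port and baud
# rate are illustrative.
from pymavlink import mavutil

link = mavutil.mavlink_connection('/dev/serial0', baud=57600)
link.wait_heartbeat()  # block until the Pixhawk responds

msg = link.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
c_gps = (msg.lat / 1e7, msg.lon / 1e7)   # GPS coordinates (degrees)
a_lt = msg.relative_alt / 1000.0         # altitude above home (m)
d_i = msg.hdg / 100.0                    # drone heading (degrees)
print(c_gps, a_lt, d_i)
```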
The sprinklers direct the water spray to the localized dry spots by adjusting the water pressure and the servo rotation of the sprinkler head. In the following sections, we describe the capabilities of a fully functioning, deployed system.

A. UNMANNED AERIAL VEHICLE (DRONE) CONSTRUCTION
We built a custom-made quadcopter for this project (Figure 2a). The drone has an ultra-durable polyamide nylon based F450 frame, a Pixhawk 2.4.8 flight controller, and a raspberry pi 4 4GB as the flight computer. The F450 frame manufactured by ReadytoSky (ShenZhen ReadytoSky Technology Co., Ltd) is made of ultra-durable polyamide nylon parts that can withstand crash-landing damage. This frame has a 450 mm motor center distance, ideal for 9- or 10-inch propellers, and accepts BLDC motors with 16, 19, and 25 mm mounting hole centers. The frame has a height of 450 mm with landing gear attached, weighing approximately 280 grams. Motors of 800 to 1100 kV are ideal for this frame. We used 900 kV BLDC motors manufactured by ReadytoSky with 10-inch propellers (1045 props). A 30 A Electronic Speed Controller (ESC) manufactured by SimonK is used to control the throttle speed of the drone (Figure S1).
The Pixhawk 2.4.8 flight controller controls all the motor speeds along with data sensing. Pixhawk 2.4.8 has a 32-bit ARM Cortex M4 core processor with FPU running at 168 MHz, with 256 KB RAM and 2 MB flash memory. The Pixhawk flight controller comes with an inbuilt accelerometer and gyroscope (MPU6050), a MEMS barometer, an ST Micro 16-bit gyroscope, an ST Micro 14-bit accelerometer, and a compass (magnetometer). Pixhawk can communicate with peripheral devices through UART, I2C, SPI, CAN, and USB transmission protocols, and also supports 3.3 V to 6.6 V ADC inputs. The ESCs are connected to the main output channels of the Pixhawk, and the RF receiver is connected to the SBUS input. Since we constructed a quadcopter, only four main output channels (channels 1 to 4) are needed to connect the ESCs. An external switch is connected to the Pixhawk to arm and disarm the drone.
We used a FLYSKY FS-i6X RF transmitter and an FS-X6B receiver for transmission and manual control. The FS-i6X is a 2.4 GHz dual-antenna 6-channel RF transmitter based on the AFHDS2A protocol, which can be upgraded to 10 channels with some DIY tricks. The FS-i6X is compatible with PWM, PPM, i-bus, and s-bus protocols. This transmitter has a range of 500 to 1500 m in the air with a transmission power of less than 20 dBm. The 2.4 GHz dual-antenna FS-X6B receiver connects to the SBUS input of the Pixhawk. A 915 MHz radio telemetry module pair communicates between the terrestrial edge unit and the drone. This 500 mW transceiver connects to the telemetry port of the Pixhawk flight controller, while its counterpart connects to a USB interface to receive flight data and transmit control signals through AT commands. The default baud rate of the full-duplex transparent serial link is 57600 baud. This transceiver pair has a range of 300 to 500 meters in an open field.
The NEO-M8N external GPS module is connected to the Pixhawk to find the exact coordinates. It has a tracking sensitivity of −161 dBm and a capturing sensitivity of −148 dBm with up to 10 Hz concurrent GNSS. The NEO-M8N GPS module also includes an internal compass (magnetometer), which can be used to calculate the direction. There is a dedicated port on the Pixhawk flight controller for the external GPS module, and the external compass can be connected to the I2C port. A 3500 mAh 3S lithium polymer battery powers the drone and all the electronic systems. This 12.6 V battery can fly the drone for about 30 minutes at an altitude of 10-15 meters.
A raspberry pi 4 4GB captures and processes the thermal and RGB images along with the flight data from the Pixhawk flight controller (Figure 2b). The Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC, powers the raspberry pi 4 at a 1.5 GHz clock frequency. We used the 4GB RAM model for this project, which is sufficient for all the required processing. The cameras connect to the USB and CSI ports, and a 32 GB class 10 micro-SD card is used to boot the raspberry pi with Raspbian OS. A python program captures and processes the images and data to generate the basic information of each dry spot, creating the dataset for the machine learning model. A 12 V to 5 V, 3.5 A external power converter powers the raspberry pi from the LiPo battery. The Pixhawk flight controller communicates with the raspberry pi through the UART transmission protocol, which is used to extract the current drone location and altitude.
Open-source Mission Planner software developed by Ardupilot (https://ardupilot.org/planner/) is used for drone calibration and assigning flight paths. Mission Planner is installed on a Windows computer and connects to the Pixhawk using a micro-USB cable. The quadcopter firmware with the required orientation is flashed onto the Pixhawk. After connecting all the peripherals to the Pixhawk, the flight controller is ready for calibration. The RF throttle, accelerometer and gyroscope, compass, and ESCs must be calibrated using Mission Planner. The flight path, waypoint coordinates, altitude, and flight mode can be programmed into the drone using the graphical interface of Mission Planner. Once completed, the drone can communicate with Mission Planner through the 915 MHz telemetry module while in the air. The specifications of the important individual components are given in Table S1.

B. EDGE INTELLIGENCE IN UAV
Our system consists of aerial and terrestrial edge intelligence units. The UAV's edge intelligence acquires and processes the TIR images and primary data such as drone hovering altitude, GPS coordinates, hovering speed, and direction to generate the pixel data information, which is transmitted to the terrestrial edge for the subsequent machine learning step. The drone takes TIR photos of each hexagonal sector after the hovering pattern, route, and heights have been programmed into it. Due to external factors like wind and weather, the raw TIR footage is unstable. After being stabilized, the images are fed into the image processing phase. Before adaptive thresholding is used to find the localized dry areas, the original rectangular TIR images are cropped to a regular hexagonal image and divided into R, G, and B components (a cropping sketch follows this paragraph). The pixel data information is gathered once the localized dry spots are georeferenced. Pixel data are transmitted to the terrestrial edge station through LoRa WAN communications (Figure 3). Using a trained machine learning model, the terrestrial edge station utilizes these data to determine the irrigation pattern and generate the sprinkler control parameters (SCP).
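As an illustration of the cropping and channel-splitting step, the following is a minimal OpenCV sketch; the hexagon geometry and file name are assumptions, not the exact onboard code.

```python
# Sketch: cropping a rectangular TIR frame to a regular hexagon before
# channel splitting and thresholding. Geometry and file name are illustrative.
import cv2
import numpy as np

def crop_hexagon(frame):
    h, w = frame.shape[:2]
    cx, cy, r = w // 2, h // 2, min(w, h) // 2
    # vertices of a regular hexagon centred on the image
    pts = np.array([[cx + r * np.cos(np.pi / 3 * k),
                     cy + r * np.sin(np.pi / 3 * k)] for k in range(6)],
                   dtype=np.int32)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)
    return cv2.bitwise_and(frame, frame, mask=mask)

tir = cv2.imread('tir_frame.png')
hex_img = crop_hexagon(tir)
b, g, r = cv2.split(hex_img)  # the R channel carries most dry spot information
```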

C. UAV IMAGING
The custom-made drone is equipped with a Thermal Infrared (TIR) camera and an RGB camera for capturing thermal and color images while hovering.
Thermal Infrared (TIR) Camera: A Forward-Looking Infrared (FLIR) Lepton 3.5 camera was used to capture thermal images (8-14 µm wavelength). The camera has a resolution of 160 × 120 active pixels and an aperture angle of 57° with a shutter (Figure 2a). The camera operates in the 2.8-3.1 V range, with an average power consumption of 650 mW during shutter events. An STM32-based breakout board is used to mount the FLIR camera (Figure S2a). A micro-USB cable connects the FLIR camera to a computer, in our case a raspberry pi (Figure 2b).

RGB Camera: The raspberry pi camera captures RGB images (Figure S2b). The V2 camera has a Sony IMX219 CMOS 8-megapixel sensor with a sensor size of 3.674 × 2.760 mm (1/4″ format) and an active pixel count of 3280 × 2464 (total pixels 3296 × 2512). It supports 1080p30, 720p60, and VGA90 video modes along with 3280 × 2464 pixel still images. The IMX219 module has a built-in 650 nm IR-cut filter, which makes it insensitive to IR light, and draws a peak current of 300 mA while operating. A ribbon cable connects the camera module to the CSI port of the raspberry pi (Figure 2b).

D. EDGE INTELLIGENCE AT TERRESTRIAL EDGE UNIT
The terrestrial edge intelligence station receives the pixel data information of the localized dry spots through the LoRa WAN receiver (Figure 4). The pixel data comprises each pixel's x and y coordinates and the distance between each pixel and the sprinkler (l). The pixel data is then fed to a trained machine learning model to predict targets such as the irrigation pattern and the sprinkler control parameters (SCP), namely the sprinkler head rotation angle (θ) and the water flow control valve rotation angle (∅) (Figure 5). The pixel data information was split into a training dataset (80%) and a testing dataset (20%).

E. CONSTRUCTION OF SMART SPRINKLER
The custom-made smart sprinkler consists of a LoRa-based IoT receiver, a servo mechanism for controlling the sprinkler nozzle direction, a water solenoid for controlling water quantity, and a ball valve rotator for controlling the pressure. An ATSAMD21 microcontroller controls the direction of irrigation, the duration, and the force of water through the sprinkler, as in Figure 6. The smart sprinkler is powered by a 12 V, 20 W solar panel (Figure 7a). The flat base with mounting holes makes it easy to place smart sprinklers at the required locations. A 0.5-inch pipe connector supplies water to the sprinkler from the pressure pump through the pipes, and a 0.5-inch pipe connects the water solenoid output connector and the sprinkler head. This watertight rotating tube maintains the free rotation of the sprinkler head using the MG-90 servomotor with a gear mechanism, without blockage or cracking. The sprinkler head can rotate freely from 0 to 359 degrees; this back-and-forth rotation helps irrigate in specific directions. The sprinkler head is attached to a water blocker controlled by an SG-90 micro servo, which determines the irrigation distance by controlling the water pressure. A 12 V, 1 A water solenoid turns the sprinklers on and off. The detailed design and construction of the smart sprinkler are given in Figure S3. Additionally, the 12 V, 72 PSI water solenoid helps control water flow through the sprinkler. A 12 V solar panel with an electronic charge controller powers the entire system. The ATSAMD21 microcontroller controls the smart sprinkler based on the terrestrial edge computing station commands received through the LoRa WAN module. The complete electronic modules are watertight and moisture-proof. The PV array height can be adjusted based on sunlight, crop height, and shadow effects. The PCB design of the LoRa-enabled smart sprinkler is given in Figure 7b.
Calculating the relation between the SG-90 servo angle (∅) and the irrigating distance is essential. We attached a Hall effect flow meter to the custom-made water flow control valve mechanism to develop an experimental setup (Figure 8a). This experimental assembly is connected to a pressure pump using a 0.5-inch PVC pipe. The hydration distance and water flow are controlled through the valve using an SG-90 micro servo motor and an Arduino development board (ATMEGA 328P microcontroller) (Figure 8b). The smart sprinkler receives the ∅ values from the terrestrial edge unit via LoRa WAN, and the localized dry spots are irrigated selectively. We also developed a Blynk-based graphical interface to control the smart sprinkler parameters manually (θ, ∅, and solenoid switching) for calibration purposes (Figure 8c,d). CAD designs of all the parts used in the smart sprinkler are given in Figures S5-S20.

F. RF COMMUNICATIONS
LoRa WAN is used for the wireless links between the drone, the terrestrial edge station, and the smart sprinklers. It is essential to point out that, for a large region of farmland, the terrestrial edge station must be able to connect with most of the smart sprinklers without needing a secondary repeater. During system operation, less than 30 kB of data is transmitted from the drone to the terrestrial edge computing station and on to the sprinklers. Therefore, using LoRa WAN technology for data transmission is feasible because bandwidth is not a key concern in the system. For a communication system, optimal range, power, and price/performance trade-offs are possible by using LoRa technology. The spread spectrum modulation method used by LoRa is unique and was developed from chirp spread spectrum (CSS) technology [51], [52]. LoRa WAN (Long Range Wide Area Network) serves primarily as a network layer protocol that manages communication between Low Power Wide Area Network (LPWAN) gateways and edge devices [51]. According to the regulations, LPWAN technologies mainly operate in the ISM band from 902 MHz to 928 MHz in the USA and from 863 MHz to 870 MHz in Europe [53], [54].
Using a LoRa gateway, the corresponding edge station can transmit information over LoRa WAN with a bandwidth of 125 kHz. Multiple data rate options, ranging from 250 bps to 5470 bps, are supported by this technology. A transmission range of up to 10 km helps to make up for this extremely low data rate [51]. It is vital to note that a high-gain antenna and an environment free of physical obstructions are requirements for reaching such long distances. A 20% packet loss rate for a distance of around 2 km between nodes and the LoRa gateway has been demonstrated in studies employing an SX1278 LoRa chip and an ultra-low-power STM32L151 microcontroller [55].
For our purpose, we chose a RYLR896 LoRa module (an integration of the SX1276 LoRa transceiver and an STM32 microcontroller), which has a more extended range and better receiver sensitivity (down to −148 dBm). The LoRa chip uses a peak power of 300 mW in transmission mode, dropping to 60 mW in reception mode and to roughly 0.25 mW in sleep mode. The peak power is only needed for a time frame of less than 10 ms when operating in the mode with the lowest transmission rate [51]. The FSK, GFSK, MSK, GMSK, LoRa, and OOK modulations are supported by the SX1276 LoRa modem, which has a configurable bit rate of up to 300 kbps.
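As an illustration, a terrestrial-edge script could drive the RYLR896 through its AT-command serial interface roughly as follows; the port name, node address, and payload format are assumptions for illustration, not the deployed firmware.

```python
# Sketch: sending sprinkler control parameters (theta, phi) to a sprinkler
# node through the RYLR896 AT-command interface. Port, address, and payload
# format are illustrative.
import serial

lora = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)

def send_scp(address, theta, phi):
    payload = f"{theta},{phi}"                      # e.g. "120,45"
    cmd = f"AT+SEND={address},{len(payload)},{payload}\r\n"
    lora.write(cmd.encode())
    return lora.readline().decode().strip()         # expect "+OK"

print(send_scp(3, 120, 45))  # sprinkler node 3: theta=120 deg, phi=45 deg
```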

G. TERRESTRIAL EDGE FRONT END
The user control GUI (Graphical User Interface) available at the terrestrial edge is the terrestrial edge console (Figure 9). Tkinter is used to create the console. Tkinter, sometimes known as the ''Tk interface,'' is a python module that provides access to the Tk GUI toolkit. This toolkit was created in TCL (Tool Command Language) and is multiplatform, supporting Linux, macOS, and Microsoft Windows. Tkinter is the only GUI framework included in the python standard library. The terrestrial edge console collects data from the smart sprinklers (Figure S21a) and the aerial edge through LoRa WAN communication, and external REST APIs are utilized to get weather data (Figure S21b). These APIs employ the GPS coordinates to acquire the correct weather for the terrestrial edge control station. The sprinklers can also be manually controlled by the user for calibration purposes by changing the θ and ∅ values of each sprinkler (Figure S21c). The terrestrial edge GUI is also used to visualize the irrigation pattern of each smart sprinkler. The collected weather data can also be used to assess how well the sprinkler system performs under unfavorable weather conditions.
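A minimal Tkinter sketch of such a console, with manual θ and ∅ controls, could look as follows; the widgets and callback are illustrative, not the deployed front end.

```python
# Sketch: a minimal Tkinter front end with manual theta/phi sliders per
# sprinkler. In the deployed console the callback would send the values
# over LoRa WAN instead of printing them.
import tkinter as tk

def send_manual_scp():
    print(f"theta={theta.get()} phi={phi.get()}")  # stand-in for LoRa send

root = tk.Tk()
root.title('Terrestrial Edge Console')
theta = tk.Scale(root, from_=0, to=359, orient='horizontal', label='theta')
phi = tk.Scale(root, from_=0, to=90, orient='horizontal', label='phi')
theta.pack()
phi.pack()
tk.Button(root, text='Send to sprinkler', command=send_manual_scp).pack()
root.mainloop()
```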

H. METHODOLOGY
Surveying the farm area is the first step, collecting each boundary point's coordinates, the total farm area, and its shape. Consider a square-shaped farmland: the four boundary coordinates are the primary data needed for further calculations. The total surveillance area is divided into hexagonal sectors, with a smart sprinkler placed at the center of each sector. A python program developed in the Processing environment takes as input the aerial map of the farmland, the boundary coordinates, the diagonal of each hexagon, and the hexagon separation x and y values used to avoid overlapping. The farm map is overlaid with a regular hexagonal grid based on these data, and the program provides the coordinates of the center of each hexagon, giving the exact sprinkler locations. The method used for implementing the hexagonal gridding is shown in algorithm S1; a simplified sketch follows below.
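The gridding step can be illustrated with a short sketch using standard flat-top hexagon geometry; the boundary dimensions are taken from the implementation below, but the offset formulas are a generic reconstruction rather than algorithm S1 itself.

```python
# Sketch: generating the centres of a regular hexagonal grid over a
# rectangular farm boundary (candidate sprinkler locations). Standard
# flat-top hexagon spacing is assumed.
import math

def hexagon_centres(width, height, diagonal):
    r = diagonal / 2.0                 # centre-to-vertex distance
    dx = 1.5 * r                       # horizontal spacing between columns
    dy = math.sqrt(3) * r              # vertical spacing between rows
    centres = []
    col = 0
    x = 0.0
    while x <= width:
        y = (dy / 2) if col % 2 else 0.0   # stagger alternate columns
        while y <= height:
            centres.append((x, y))
            y += dy
        x += dx
        col += 1
    return centres

print(len(hexagon_centres(40, 40, 10)))  # sprinklers for a 40 m x 40 m field
```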
We implemented the project on a 40 m x 40 m open grassland. The land is virtually divided into small segments; a hexagonal sector with a diagonal dimension of 10 m is used in this project.
TIR images acquired by the drone are processed by the onboard edge intelligence module along with the flight data to estimate the location and dryness of every localized dry spot within each hexagonal sector. This data is transferred to the terrestrial edge station through LoRa WAN communication. The terrestrial edge station (raspberry pi 4-8GB) collects these data and generates a dataset containing the rotation angles (θ, ∅) of the nozzle head and the water flow control valve. To precisely calculate the irrigation pattern, the ML model incorporates environmental parameters such as the field's soil moisture absorption rate, crop type, dry spot area, and weather.

1) IMAGE STABILIZATION
The major problem we faced during the reconstruction of the thermal images was the stability of the recorded thermal videos, which is mainly affected by the steadiness of the drone under environmental conditions like wind and weather. Before determining the properties of the localized dry spots, it is essential to stabilize the thermal videos. A python program written with OpenCV takes the shaky video and converts it to a stabilized video by estimating the frame-to-frame transformation parameters using optical flow over all frames. Each frame transformation is a rigid Euclidean transform consisting of three parameters: translation (dx, dy) and rotation angle (dθ). The python algorithm accumulates these transformations to obtain the ''trajectory'' of x, y, and the angle at each frame.

new transformation = transformation + (smoothed trajectory − trajectory) (1)
A sliding average window is used to smooth the trajectory; the radius of the window defines the number of frames used for smoothing. The new transformation is computed using equation (1) and applied to the entire thermal video. The final stabilized thermal video is created by cropping the unwanted edges of all frames. The algorithm for stabilizing the drone video is detailed in section S2; a condensed sketch is given below.
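The sketch below condenses this stabilization pipeline, assuming the standard OpenCV feature-tracking approach; the file name and smoothing radius are illustrative.

```python
# Sketch: per-frame rigid transform estimation with optical flow, trajectory
# accumulation, sliding-average smoothing, and equation (1).
import cv2
import numpy as np

cap = cv2.VideoCapture('shaky_tir.mp4')
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
transforms = []                                  # (dx, dy, da) per frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=30)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    m, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
    dx, dy = m[0, 2], m[1, 2]
    da = np.arctan2(m[1, 0], m[0, 0])            # rotation angle
    transforms.append((dx, dy, da))
    prev_gray = gray

trajectory = np.cumsum(transforms, axis=0)       # x, y, angle trajectory
kernel = np.ones(31) / 31                        # sliding-average window
smoothed = np.array([np.convolve(trajectory[:, i], kernel, mode='same')
                     for i in range(3)]).T
new_transforms = transforms + (smoothed - trajectory)   # equation (1)
# new_transforms is then applied frame by frame, and edges are cropped,
# to produce the stabilized video (section S2).
```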

2) RGB SPLITTING OF THERMAL VIDEO
The color of the localized dry spots varies from a dark blue to a red hue as the dryness increases. Environmental conditions such as weather and atmospheric temperature can also affect the quality of the captured TIR images [18], [56]. Removing unwanted information from the TIR images is an essential preprocessing step in the aerial edge unit. Separating a picture into its R, G, and B component channels reveals that the R channel contains the bulk of the image's informational value, making it simple for an image processing system to identify the video's reference plane and pinpoint the location of any dry areas. A 40 × 40 cm aluminum sheet attached to a wooden frame was used as the temperature reference plane.
Algorithm S3 is used to split the TIR images into R, G, and B channels.

3) LOCALIZED DRY SPOT DETECTION FROM TIR IMAGES
The area of a single dry spot (A) is calculated from the TIR image covering a single hexagon, according to the resolution of the TIR camera and the drone's altitude. For an altitude of 4 m, a dry spot area of 2.5 cm² corresponds to 1 pixel of the TIR images captured using the Lepton 3.5 FLIR camera. Since we use the same TIR camera (Lepton 3.5 FLIR) throughout the experiment, the resolution is constant, and the calculation depends only on the drone's altitude at which the image is captured.
The OpenCV library transforms the thermal image input into a greyscale image, represented as I(x, y), with (x, y) denoting the pixel coordinates. The image height (h) and width (w) are calculated to represent the image as an (h × w) pixel matrix. The matrix is scanned horizontally over {y0x1, y0x2, …, y0x(w−1), y0xw} and vertically over {y0, y1, …, y(h−1), yh}. The threshold value T is predefined, i.e., determined from trials exhibiting maximum dry spot detection (D_MAX) and minimum error (E_MIN) over multiple image evaluations. Pixel values I(x, y) exceeding the threshold are considered dry spots and are highlighted in the image with greyscale value 0, representing C. The dry spot area is also estimated in pixels; in the bulk image process, the image name and area are recorded in an Excel worksheet. The processed images are stored in an output directory for further processing. Algorithm S4 is used to find the localized dry spot areas; a minimal sketch is given below.
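The sketch below illustrates the thresholding step; the threshold value and file names are assumptions, and the per-pixel area follows the 4 m calibration reported above.

```python
# Sketch: dry spot detection on a greyscale TIR image with a fixed,
# empirically determined threshold T. File names and T are illustrative.
import cv2
import numpy as np

T = 180                                    # threshold from trial runs
img = cv2.imread('hex_sector_tir.png', cv2.IMREAD_GRAYSCALE)
h, w = img.shape                           # (h x w) pixel matrix

dry_mask = img > T                         # pixels hotter than the threshold
out = img.copy()
out[dry_mask] = 0                          # mark dry spots with greyscale 0

area_px = int(np.count_nonzero(dry_mask))  # dry spot area in pixels
area_cm2 = area_px * 2.5                   # 1 px = 2.5 cm^2 at 4 m altitude
cv2.imwrite('output/hex_sector_dry.png', out)
print(area_px, area_cm2)
```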
The input image is a greyscale image with dry spots represented as RGB {0, 0, 0}. There are two predefined values, alpha and pixel_ratio. α represents the error or deviation of the water jet from the expected target dry spot, and the area covered by this deviation is defined as A′. The total area covered, A_MAX, will therefore be higher than the actual area (A):

A_MAX = A + A′ (2)
The α value depends on the pressure deviation, the nozzle size of the sprinkler, wind, and other external factors. The sprinkler precision is considered 100% when α = 0. The α value is empirically determined and used to correct the sprinkler rotation angles to avoid deviation from the actual area (A) that needs to be covered; the corrections can be set manually from the terrestrial edge front end. The variation of α corresponds to the pixel_ratio, a cluster of pixels around the dry spot clubbed together as a single region to be irrigated, referred to as the sprinkle_area:

sprinkle_area = (pixel_ratio)² (3)

The sprinkle_area values can range from 1 to n², depending on the pixel_ratio. The value of the sprinkle ratio is significant with respect to the sprinkler nozzle type and spread rate. The algorithm uses a decision tree classifier to obtain the rotation angles, flow rate, pressure, and distance for every dry spot coordinate (x, y). Algorithm 1 details the irrigation of the localized dry spots.

Algorithm 1 Irrigate the Localized Dry Spots
Input: alpha ← deviation from the original value, pixel_ratio ← pixel cluster size, B ← blue in RGB, G ← green in RGB
Output: live image stream of sprinkler, dry spot fill
1: Procedure sprinkle:
2:   img ← imread input image with cv2
3:   height, width ← shape of img
4:   for y in height do
5:     for x in width do
6:       if pixel_ratio > 1 then
7:         for y1 in y to (y + pixel_ratio) do
8:           for x1 in x to (x + pixel_ratio) do
9:             if y1 < height and x1 < width then
10:              b, g, r ← img[y1, x1]
11:              if b = 0 and g = 0 and r = 0 then
12:                img[y, x] ← B, 0, 0
13:      flowRate, angle ← get data for (x, y) from DecisionTreeClassifier
14:      sprinkle with angle and flowRate
15:      deviation ← round(alpha × pixel_ratio)
16:      if alpha > 0 then
17:        for i in (y − deviation) to (y + deviation + 1) do
18:          for j in (x − deviation) to (x + deviation + 1) do …
cv2.waitKey(time)
end

I. IRRIGATION PATTERN GENERATION: MACHINE LEARNING AIDED TARGET PREDICTION FOR THE SMART SPRINKLER
The irrigation pattern is generated as an array of [θn, ∅n] values, where θ and ∅ are the sprinkler head rotation angle and the pressure valve control servo angle. Various machine learning models were used to predict the target variables [θn, ∅n] using the data gathered from thermal images and sprinkler experiments (Figure 10). The ML dataset includes the pixel data information, drone parameters, and environmental data. The aerial edge computing system generates the pixel data from the processed TIR images, including the position of each pixel (x, y) and the distance between each pixel and the sprinkler (l). A training dataset comprised 80% of the dataset, and a testing dataset comprised the remaining 20%. The accuracy of the ML regression models was examined by comparing predicted and actual values. The ML studies were executed with the Orange toolbox through python scripts that accessed the Orange API; python script widgets in Orange carry out extra functionalities like feature significance, and their outputs are the updated variables from the python script. An equivalent scikit-learn sketch is given below.
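For illustration, the same regression setup can be expressed with scikit-learn (the study itself used the Orange toolbox); the CSV name and column names are placeholders.

```python
# Sketch: 80/20 split and multi-output regression of the SCP targets
# (theta, phi) from the pixel data, evaluated with MSE/RMSE/MAE/R2.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

data = pd.read_csv('pixel_dataset.csv')       # placeholder dataset file
X = data[['x', 'y', 'l', 'angle']].values     # pixel data information
y = data[['theta', 'phi']].values             # SCP targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100).fit(X_tr, y_tr)
pred = model.predict(X_te)

mse = mean_squared_error(y_te, pred)
print('MSE', mse, 'RMSE', np.sqrt(mse),
      'MAE', mean_absolute_error(y_te, pred), 'R2', r2_score(y_te, pred))
```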
Our project's GitHub repository contains the python code developed for all modules of this work (https://github.com/skmidhun09/drone_aided_smart_irrigation).

III. RESULTS AND DISCUSSION

A. HEXAGONAL GRID MAPPING
The aerial edge computing station distinguishes each localized dry spot from the TIR images. We chose an open grassland of 40 m x 40 m dimension as our survey area, and the four boundary coordinates were obtained from Google Maps. The drone waypoints were set to hover over the grassland at about 20 m altitude while capturing RGB and TIR images. The area is divided into smaller hexagonal sectors of 10 m diagonal, and the location of each sprinkler is determined using algorithm S1 (Figure S22,23). The drone hovering pattern is then generated from the sprinkler locations on the farmland: the location of each smart sprinkler is given as a waypoint in the Ardupilot Mission Planner software, which calculates an appropriate drone hovering pattern.

B. VIDEO STABILIZATION
Environmental factors such as weather and wind can arbitrarily change the drone's stability, which in turn affects the stability of the captured thermal images and videos. Although gimbal-based image stabilization methods are efficient, they are also expensive. To mitigate the negative impact of camera shake on the recorded video and enhance the overall quality of the media, we developed a video stabilization program. The frame transformation parameters of the TIR video are adjusted relative to the Euclidean distance of the frame transformation. Figure 11a represents the fluctuation of d_x and d_y for a shaky video. There was no camera rotation during recording, therefore the angle deviation (d_a) is ignored. Figure 11b,c represents the change in pixel value of the x and y trajectories for an un-stabilized and a stabilized video. The new transformation of each frame is calculated from the previous transformation by applying the smoothed trajectory parameters d_x and d_y (Figure 11d).

C. RGB SPLITTING OF TIR IMAGES
We discovered that abrupt changes in the environmental conditions impact the effectiveness of dry spot recognition when isolating the localized dry patches from the TIR videos. This is calculated by comparing the TIR images to the contrast value of the reference plane (the 40 × 40 cm aluminum sheet attached to a wooden frame used as the reference in this case). The low-resolution Lepton 3.5 FLIR camera cannot effectively distinguish the reference plane from the surrounding wet areas in the raw image. Splitting each color channel (R, G, B) of the TIR images effectively determines the relation between the reference plane and the dry patch color. The blue and green channels cannot distinguish the reference plane from the wet areas (Figure S24a,b). In contrast, a dry region is best defined in the red channel (Figure S24c). This feature can be utilized to precisely calculate the localized dry spots by comparing against the temperature of the reference plane on the ground.

D. OPTIMUM DRONE HOVERING ALTITUDE TO DETERMINE THE LOCALIZED DRY SPOTS EFFECTIVELY
To find the optimum altitude for drone hovering in smart irrigation, we conducted experiments using synthetic turf placed inside a single hexagonal sector. An artificial turf of 5 m² was placed at the center of one sector. TIR images of the turf were captured using the Lepton 3.5 FLIR camera at different altitudes from 4 to 20 m (Figure 12a) while hydrating the turf with a known quantity of water (0.5 to 2.5 L), as in Figure 12b. The area of each wet patch is mapped into pixels through the blob detection method. The TIR images are used to estimate the influence of the image resolution of the Lepton 3.5 FLIR camera at various altitudes, and the pixel representation of the wet patches for each hydration level at different altitudes is calculated. Figure 13a shows the relation between the wet patch area and the hydration level at different altitudes, and Figure 13b shows the TIR image representation of the wet patch for each altitude at different hydration levels. These data are used to map the relationship between the dry spot area, moisture level, and the TIR image capturing altitude for the Lepton 3.5 FLIR camera. We also created two adjacent wet patches of 1 ft radius separated 1 ft apart by hydrating the turf with a known quantity of water, to determine the resolution of the FLIR camera and the optimum altitude to capture the TIR images. Each wet patch was hydrated with 1.5 L and 2.5 L of water, and the TIR images were captured at different altitudes (4 m-20 m), as in Figure 14a. We find that the FLIR camera can clearly distinguish each wet patch up to an altitude of 12 m (Figure 14b,c). A larger coverage area was achieved at higher hovering altitudes, giving a shorter period to cover the farmland. We further used image processing to accomplish pixel mapping of the corresponding wet patches (Figure 14d). At a height of 12 meters, the FLIR camera could capture approximately 40 x 30 m² in a single image; therefore, the hexagonal diagonal dimension can be increased to 15 meters, reducing the initial hovering time and cost. A machine learning model is developed to determine the sprinkler flow rate corresponding to each pixel's distance from the center of each segment. At the optimum drone hovering altitude (12 m), the total area of a single pixel is about 7.5 cm² (see the sketch below). Depending on the location of the dry zone pixels, the ML model generates an accurate servo angle for nozzle head rotation (θ) and flow rate control (∅), along with the working duration of the water solenoid, to precisely irrigate each pixel in the TIR image.
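The two calibration points reported above (2.5 cm² per pixel at 4 m and 7.5 cm² per pixel at 12 m) imply a linear scaling of the pixel footprint with altitude, which can be captured in a one-line helper; this sketch encodes that implied relation only, not a full camera model.

```python
# Sketch: ground area per TIR pixel vs. altitude, from the linear scaling
# implied by the reported calibration points (2.5 cm^2 @ 4 m, 7.5 cm^2 @ 12 m).
def pixel_area_cm2(altitude_m):
    return 2.5 * altitude_m / 4.0      # 0.625 cm^2 per metre of altitude

for h in (4, 8, 12):
    print(h, 'm ->', pixel_area_cm2(h), 'cm^2 per pixel')
```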

E. LORA COMMUNICATION
The SX1276 LoRa module is used to transmit data from the aerial edge to the terrestrial edge and from the terrestrial edge to the smart sprinklers, and is configured as a transmitter or receiver depending on the purpose. The spreading factor (SF) and the bandwidth (BW) determine the efficiency of LoRa communication and the maximum transmission distance, which is governed by the RSSI and SNR parameters. We studied the influence of these parameters on the SX1276 LoRa module by changing the spreading factor from 7 to 12 for each bandwidth value of 125 kHz and 250 kHz, measuring the transmission distance for each configuration. The highest transmission distance with minimum packet loss is achieved when the LoRa is configured with an SF of 12 and a BW of 125 kHz, which is the configuration used in this system. A LoRa configuration with a bandwidth of less than 125 kHz could increase the transmission distance further, but at the expense of increased transmission delay.
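The range-versus-rate trade-off can be made concrete with the standard LoRa bit-rate relation R_b = SF · (BW / 2^SF) · CR; the sketch below tabulates approximate rates for the configurations studied, assuming a coding rate of 4/5.

```python
# Sketch: approximate LoRa bit rate for each studied SF/BW configuration,
# using the standard relation Rb = SF * (BW / 2^SF) * CR (CR = 4/5 assumed).
def lora_bitrate(sf, bw_hz, cr=4 / 5):
    return sf * (bw_hz / 2 ** sf) * cr   # bits per second

for sf in range(7, 13):
    for bw in (125_000, 250_000):
        print(f'SF{sf} BW{bw // 1000}kHz -> {lora_bitrate(sf, bw):.0f} bps')
```

The table confirms why SF 12 with 125 kHz maximizes range at the cost of the lowest data rate (roughly 300 bps), which is acceptable given the system's sub-30 kB payloads.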

F. GEOREFERENCING OF LOCALIZED DRY SPOTS
The smart sprinkler is located at the center of the hexagonal sector, and its GPS coordinates can be read out from the Pixhawk using UART communication. The drone's hovering altitude is fixed so that one side of the TIR image matches the diagonal of the hexagon's adjacent sides. The TIR images are divided into hexagonal-shaped sectors, and each localized dry spot is georeferenced based on the GPS coordinates and the flight altitude (Figure 16); a conversion sketch follows this paragraph. A dataset is generated with the pixel data information of each dry spot, and the ML model uses this dataset to calculate the irrigation pattern by generating the smart sprinkler control parameters (θ, ∅). The machine learning model is trained with a dataset that relates the water flow rate and the irrigating distance. The water flow rate is determined by rotating a ball valve using an SG-90 micro servo motor controlled by the ATSAMD21 microcontroller (Figure 17a,b). We experimentally determined the irrigation distance and flow rate for each servo angle. The ball valve and servo motor are attached so that a 0° angle closes the valve and 90° opens it completely. The flow rate vs. irrigating distance of the smart pump at 10° intervals is shown in Figure 17a,b.
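A simplified sketch of the pixel-to-GPS conversion is given below. It assumes a nadir-pointing, north-aligned image and uses the usual small-area metre-to-degree approximation; the coordinates and pixel scale are illustrative, and the deployed system would additionally rotate the offsets by the drone direction D_i.

```python
# Sketch: georeferencing a TIR pixel from the drone's GPS fix and altitude.
# Assumes a north-aligned nadir image; example values are illustrative.
import math

def pixel_to_gps(px, py, img_w, img_h, drone_lat, drone_lon, scale_m):
    # offsets from the image centre (drone nadir position), in metres
    east = (px - img_w / 2) * scale_m
    north = (img_h / 2 - py) * scale_m
    dlat = north / 111_320.0                                   # m per deg lat
    dlon = east / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

# e.g. pixel (40, 25) in a 160x120 frame at ~2.7 cm/px ground scale
print(pixel_to_gps(40, 25, 160, 120, 9.9312, 76.2673, 0.027))
```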

G. DEVIATION IN IRRIGATION PATTERN FOR LOCALIZED DRY SPOTS
Even if the image processing algorithm calculates the localized dry spots exactly, the smart sprinkler may not irrigate the targeted region precisely according to the irrigation pattern generated by the machine learning model. This may be the result of a number of external or internal factors that influence the precision of the irrigation pattern, such as wind or inconsistency of the sprinkler nozzle. From our studies, a single pixel of a TIR image captured by the Lepton 3.5 FLIR camera from an altitude of 4 m represents 2.5 cm² (Figure 18a). The smart sprinkler is designed to irrigate based on the temperature of each pixel, which means the irrigation pattern is directly related to the pixel mapping. For a smart sprinkler to precisely irrigate a dry spot of 2.5 cm², the nozzle diameter of the sprinkler would have to be so small that it is challenging to construct, because of the pressure generated inside the sprinkler when the water solenoid turns on. We have therefore introduced a sprinkler error constant (α) to represent the deviation of the irrigated area from the irrigation pattern. The α value varies from 0 to 1; an increase in α increases the hydration area (Figure 18b-d), which reduces the system efficiency, and α should be zero for a perfect smart irrigation system. Figure 18e represents the relation between the α value and the area deviation. The figure shows that the relation between α and the deviation area is linear for TIR images captured up to an altitude of 12 m using the Lepton 3.5 FLIR camera. Including the α value as a parameter in the ML dataset therefore helps to improve the system efficiency, although this is not applicable to images captured above 12 m altitude: the area calculated from pictures taken at higher altitudes (>12 m) showed erroneous results due to the resolution limit of the TIR camera. Hence, we maintained an image capturing altitude of 12 m for all the experiments in this study.

H. MACHINE LEARNING AIDED MODEL FOR GENERATING IRRIGATION PATTERN OF THE SMART SPRINKLER

1) PREDICTION EFFICIENCY OF THE DECISION-MAKING MODEL
At the terrestrial edge, a machine learning model generates the SCP based on the TIR image data and the input dataset (Figure S29). The performance of the ML models was evaluated using four key parameters: Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R-squared (R2). The MAE evaluates the average size of errors over a set of predictions without considering their direction, and measures the accuracy of continuous variables. MSE's penalty is proportional to the square of the error rather than the actual error; squaring the error creates a smooth gradient for minor errors and gives outliers more weight. An effective model should have MSE values close to zero. RMSE is the square root of MSE. The range of MAE, MSE, and RMSE is from 0 to ∞. R-squared is also estimated to show how well a regression model fits the data; the best R-squared value is 1.
Four machine learning models, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest (RF), and Neural Network, were tested for performance based on the MSE, RMSE, MAE, and R2. Among them, the RF model had the highest accuracy in predicting θ and ∅, with an MSE value of 0.063 (Figure S30). KNN showed the second-highest accuracy with an MSE value of 0.149, while SVM had the lowest accuracy with an MSE value of 86.581. The most effective model (Random Forest) was deployed in the terrestrial edge unit for the prediction of the SCP (Figure S31).

2) ESTIMATING FEATURE IMPORTANCE FOR MAXIMUM RELEVANCE AND MINIMAL REDUNDANCY
Unnecessary predictors reduce model performance and raise computing costs, so it is crucial to identify a subset of features with high relevance; some predictors have a considerably greater influence on the model response than others. The significance of each feature was examined by varying the predictors and calculating the accuracy drop. We wanted to identify the predictor combination most relevant to the response, including predictors produced from TIR image processing and meteorological data.

Using the RReliefF algorithm with the k-nearest neighbor approach, we prioritized the significance of the features. For a continuous target, RReliefF rewards features that take different values for neighbors with different response values, while penalizing features that take different values for neighbors with the same response values. To choose only the variables that maximize the information gain, we analyzed the RReliefF score for each variable. Figure 19 displays the RReliefF feature relevance ranking. The features are marked as CL, L = [1, 2, 3, 4], where C1 is the distance between each pixel and the sprinkler, C2 and C3 are the pixel x and y coordinate values, and C4 is the angle between the pixel and the sprinkler reference axis. C1 was found to be the most important characteristic among the CL predictors, with a weight of 0.595, while C2, C3, and C4 show almost equal importance with RReliefF values of 0.388, 0.390, and 0.373. When comparing features, the RReliefF score serves as a baseline; certain features can be eliminated depending on their importance values. A feature ranking sketch follows below.
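As a self-contained illustration of the feature-ranking step, the sketch below uses scikit-learn permutation importance on synthetic stand-in data; note this is a different scorer from the RReliefF used in the study, shown only to make the ranking workflow concrete.

```python
# Sketch: ranking features C1..C4 with permutation importance (a stand-in
# for RReliefF). X and y are synthetic; C1 is constructed to dominate, as
# observed in Figure 19.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((500, 4))                         # C1..C4 stand-ins
y = 2.0 * X[:, 0] + 0.5 * X[:, 1:].sum(axis=1)   # C1 carries most signal

model = RandomForestRegressor(n_estimators=100).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, w in zip(['C1', 'C2', 'C3', 'C4'], imp.importances_mean):
    print(name, round(w, 3))
```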

IV. CONCLUSION
We have developed a drone-aided thermal mapping system that controls a smart irrigation system to irrigate crops with the optimum quantity of water for precision agriculture. The aerial edge computing station captures the TIR images and determines the localized dry spots as the drone hovers over each hexagonal sector. The dry spot information is transferred to the terrestrial edge control station through LoRa WAN communication. The LoRa WAN-based smart sprinkler operates in accordance with the terrestrial edge computing station's ML-produced irrigation pattern for the detected dry spots. This smart irrigation system can precisely hydrate all the crops in the implemented region based on the reference hydration level, soil moisture absorption, and weather conditions. The system applies to any crop and geographical location, and is scalable regardless of the size and geological background of the agricultural land and the variety of crops cultivated.