Development of an Autonomous Robot for Labor Saving in Forestry: Detection of Young Plants by RGB and Depth Sensors

Traditional forest management and maintenance consumes excessive labor time. Particularly burdensome are the investigation and management of resource quantity (tree growth) in the forest and the elimination of weeds. Therefore, this paper proposes a prototype mechanism for an autonomous self-moving robot that promises to reduce the workload of forest management investigation. For the autonomous movement task, we installed several actuators and a three-dimensional (3D) depth sensor on an all-terrain vehicle. The trajectory path of the robot is determined from the obtained depth-sensor data via 3D conversions and a labeling process. The steering is then controlled by calculating the steering angle. In an experimental evaluation mimicking a real forest environment, the prototype achieved autonomous self-movement with successful passage through the trees.


Introduction
Seventy percent of Japan's land is occupied by forests, approximately 40% (10 million hectares) of which comprises artificial forests. The artificial forests are planted with Hinoki and cedars at a density of 3000 trees per hectare. Both species are cultivated for construction and other projects. The time between planting and shipment is 50 years. Within this term, forestry workers must not only investigate and manage the resource quantity (tree growth) in the forest but also eliminate weeds. Such work places a large labor burden on forestry workers. Moreover, the falling employment rate has depleted the work force, causing delay of these tasks and degradation of the forest. Our laboratory is developing an autonomous self-moving robot equipped with a weeding mechanism for forest investigation. The proposed robot reduces the labor burden in forestry work.
Forest scenarios change daily under the dynamic conditions of tree and weed growth. Moreover, forests are large-area, steep-grade natural environments that are not easily patrolled by humans. For the same reasons, map information and path planning are rendered difficult in forests. Both tasks (maintenance and path planning) could be much more easily accomplished by an appropriately designed mobile robot.
This study proposes a robot that moves autonomously in forest environments. When moving, the robot must avoid the trees (resources). As mentioned above, a forest path is difficult to plan in advance, so the robot must detect a plausible trajectory from the information of its equipped sensors. The final goal of this study is to develop a robot that eliminates weeds and investigates resource quality while traveling extensively through the forest. Fig. 1 shows an overview of the robot. The robot platform is an all-terrain vehicle (ATV; Kawasaki, Inc.), which can access the difficult forest terrain. The external situation is recognized by a depth sensor and a CCD camera equipped on the ATV. The brake lever, accelerator lever, and steering are controlled by motors. Weeding is performed by a multi-blade mechanism attached between the front wheels of the ATV.

Fig. 1. Overview of the robot
As a first step toward this goal, we are developing an autonomous self-moving robot. In previous studies, environment recognition and autonomous movement control have been achieved by laser range finders (LRFs) and charge-coupled device (CCD) cameras [1][2]. In this study, the external situation is recognized by a depth sensor and a CCD camera. The depth sensor is less precise than the LRF, but is considerably cheaper. In addition, it is easily introduced to the robot system because its information is easily interpretable from images. The robot always moves toward a traversable gap between the trees.

Autonomous Self-Moving System
The main sensor that detects external situations is the depth camera. This sensor provides the information (placement, size, and number) of the objects ahead of the robot. A flowchart of the robotic system is shown in Fig. 2. The various processing stages are detailed below.

A. Recognition of the external situation by depth sensor and CCD camera

The depth sensor utilized in this study is the Infinisoleil (Nippon Signal, Inc.). This sensor measures the depth of an object from its installation position. The horizontal and vertical components of the viewing angle are 60° and 50°, respectively. The maximum measurable depth is 15 m. The depth values are converted to grayscale pixel values in the depth image (Fig. 3a), and then to a bird's-eye view (x-z plane image) via three-dimensional conversions (Fig. 3b). The placement, size, and number of the trees are recognized by a labeling process on the x-z plane image.
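The conversion from a depth image to a labeled x-z plane image can be sketched as follows. This is not the paper's implementation; the grid resolution, field-of-view mapping, and 4-connected flood-fill labeling are illustrative assumptions based on the sensor specifications above (60° horizontal view, 15 m range).

```python
import math

def depth_to_xz_grid(depth, fov_h_deg=60.0, max_depth_m=15.0,
                     cell_m=0.1, grid_w=40, grid_d=150):
    """Project a depth image (rows of per-pixel depths in metres)
    onto an x-z occupancy grid seen from above (bird's-eye view).
    Grid size and cell resolution are assumed values."""
    rows, cols = len(depth), len(depth[0])
    grid = [[0] * grid_w for _ in range(grid_d)]
    for v in range(rows):
        for u in range(cols):
            z = depth[v][u]                       # forward distance
            if z <= 0 or z > max_depth_m:
                continue                          # no return / out of range
            # horizontal angle of this pixel within the 60-degree FOV
            ang = math.radians((u / (cols - 1) - 0.5) * fov_h_deg)
            x = z * math.tan(ang)                 # lateral offset (m)
            gx = int(x / cell_m) + grid_w // 2    # centre the x axis
            gz = int(z / cell_m)
            if 0 <= gx < grid_w and 0 <= gz < grid_d:
                grid[gz][gx] = 1                  # mark cell occupied
    return grid

def label_grid(grid):
    """4-connected component labeling; returns (labels, object count)."""
    d, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(d)]
    n = 0
    for i in range(d):
        for j in range(w):
            if grid[i][j] and not labels[i][j]:
                n += 1
                stack = [(i, j)]                  # iterative flood fill
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < d and 0 <= b < w and grid[a][b] \
                            and not labels[a][b]:
                        labels[a][b] = n
                        stack += [(a+1, b), (a-1, b), (a, b+1), (a, b-1)]
    return labels, n
```

Each connected component in the labeled grid then stands for one tree (or weed patch), giving the placement, size, and count described above.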

Fig. 2. Flowchart of the proposed robotic system

B. Decision of the target position
The passable gaps through the trees and weeds are detected in the x-z plane image after the labeling process. Fig. 4 shows the measured tree gaps. After selecting a passable gap, the robot sets the center point of the gap as the target point. If several passable gaps are detected, the robot selects one of these gaps (currently, the widest one). If no passable gap is detected, the robot measures the gaps between the trees and the line of the sensor angle, then selects the left or right gap (whichever is larger).

C. Controlling each actuator

In the specified order, the system's motors control the steering, accelerator lever, and brake lever. The control order is governed by a rotary encoder attached to the rear wheel. Once the robot has traveled the desired distance, the process returns to Step A.
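The target-selection and steering logic above can be sketched as below. The minimum passable width and the bearing-based steering command are assumptions for illustration; the paper does not state these values.

```python
import math

TRACK_WIDTH_M = 1.2   # minimum gap the ATV can pass through (assumed)

def pick_target(gaps):
    """gaps: list of (left_x, right_x) edges of free spans on the
    x-z plane, in metres. Returns the centre x of the widest
    passable gap, or None if no gap is wide enough."""
    passable = [(r - l, (l + r) / 2.0)
                for l, r in gaps if r - l >= TRACK_WIDTH_M]
    if not passable:
        return None   # caller falls back to the wider side margin
    width, center = max(passable)   # widest gap wins
    return center

def steering_angle(target_x, target_z):
    """Bearing from the sensor axis to the target point, in degrees
    (positive = turn right), as a simple steering command."""
    return math.degrees(math.atan2(target_x, target_z))
```

Once the steering angle is set, the accelerator and brake motors drive the robot forward while the rear-wheel encoder counts off the commanded distance.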

Young Plant Detection System
Fig. 5 shows the environment in which the robot actually moves. Trees and weeds grow densely in the forest, and both plant types appear in the x-z plane image. To enable weed avoidance by the robot, the external environment is recognized by combining the depth sensor data and the RGB image acquired by the CCD camera (which is installed next to the depth sensor).

Combining Depth Data and RGB Image
The depth data and the RGB image are combined using the perspective projection method. The relationship between a 2D point position in pixel coordinates, [u, v, 1]^T, and a 3D point position in world coordinates, [X, Y, Z, 1]^T, is given by (4), where K holds the intrinsic parameters of the camera and R, T are the extrinsic parameters:

s [u, v, 1]^T = K [R | T] [X, Y, Z, 1]^T    (4)
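Equation (4) can be evaluated directly once K, R, and T are known from calibration. The following sketch applies it to a single point; the matrix values used in the example are hypothetical, not the calibrated parameters of this camera.

```python
def project_point(K, R, T, Xw):
    """Map a 3-D world point Xw = [X, Y, Z] to pixel coordinates
    (u, v) via s*[u, v, 1]^T = K [R | T] [X, Y, Z, 1]^T (eq. (4))."""
    # Camera coordinates: Xc = R * Xw + T
    Xc = [sum(R[i][j] * Xw[j] for j in range(3)) + T[i] for i in range(3)]
    # Homogeneous pixel coordinates: h = K * Xc
    h = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    s = h[2]                      # scale factor (depth along the axis)
    return h[0] / s, h[1] / s
```

With this mapping, every pixel of the depth image can be paired with the RGB pixel that observes the same world point.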

Extraction of young plants
The trees are distinguished as the green domains in the RGB image. The pixels of the tree domains in the depth image are identified from the corresponding green pixels in the RGB image. The x-z plane image then captures the weed information, as shown in
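The green-domain extraction above can be sketched as below. The green-dominance test is an assumed stand-in for the paper's color classification, which is not specified in detail.

```python
def green_mask(rgb):
    """Mark pixels whose green channel dominates red and blue,
    as a simple proxy for the plant (green-domain) regions.
    rgb: rows of (r, g, b) tuples."""
    return [[1 if g > r and g > b else 0 for (r, g, b) in row]
            for row in rgb]

def mask_depth(depth, mask):
    """Keep depth values only where the aligned RGB pixel was
    classified as plant; other pixels are zeroed out."""
    return [[d if m else 0.0 for d, m in zip(drow, mrow)]
            for drow, mrow in zip(depth, mask)]
```

The masked depth image is then projected to the x-z plane as before, so the labeled objects can be separated into plant and non-plant regions.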

Conclusion
We developed an autonomous self-moving robot composed of an "autonomous self-moving system" and a "young plant detection system". The robot can move automatically while avoiding obstacles in front of it. Moreover, it can judge whether an object ahead is a young plant or a weed, and avoid only the young plants.