Improvement of Robot's Self-localization by Using Observer-View Positional Information

This study aims to improve the precision of a robot's self-localization in the RoboCup Standard Platform League, a robotic soccer competition. To improve this precision, we propose a new technique that uses a camera outside the field for assistance. Robots on the field use an unscented particle filter that estimates their position from landmarks. When a robot equipped with this filter cannot recognize landmarks accurately, the particles spread and the precision of self-localization decreases. Therefore, an overlooking camera outside the field observes each robot's position. When the particles spread, the external camera estimates the foot position of the robot, and the robot then rescatters its particles in that neighborhood. In this way, even if a robot cannot recognize landmarks accurately, the assistance of the external camera corrects the particle positions and improves the precision.


Introduction
The RoboCup (Robot Soccer World Cup) project sets the goal that a team of fully autonomous robots shall win against the most recent FIFA World Cup champions by 2050. The RoboCup Soccer Standard Platform League (SPL) is a league in which all teams compete with the same standard humanoid robot, NAO, developed by SoftBank Robotics [1]. The robot operates fully autonomously, that is, with no external control by humans or computers. In RoboCup Soccer SPL, the robot must perform all vision processing and decision making on a low-end CPU (Intel Atom, 1.6 GHz). In addition, the robot must devote considerable computation time to perceiving the white goal and a mostly white ball. Each team has five player robots and one coaching robot that can send instructions from a perspective view outside the field. An example of the positional relationship between the field and the coaching robot is shown in Fig. 1.
In RoboCup Soccer SPL, a self-localization mechanism that estimates a robot's own position and orientation is required. We use the unscented particle filter (UPF) [2], currently a mainstream method [3], for self-localization. However, when a robot cannot accurately perceive any landmarks, the particles do not converge and the estimation error of self-localization becomes large. In addition to the conventional method, we use the coaching robot as an observer to identify the area where a player is likely to be. We propose a method that promotes convergence of the particles by correcting the coordinates of the scattered particles based on information from the coaching robot. With this method, the estimation error of self-localization is expected to be suppressed even when the player cannot accurately recognize landmarks.

Unscented Particle Filter (UPF)
The UPF is a combination of an unscented Kalman filter (UKF) [4] and a particle filter (PF) [5]. It addresses a weakness of the plain particle filter: resampling fails if new measurements appear in the tail of the prior, or if the likelihood is too peaked in comparison with the prior [2]. The difference between the UPF and the PF is that the UPF uses the UKF to generate the proposal distribution in the prediction step.
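To make the resampling failure mode concrete, the sketch below shows a generic systematic (low-variance) resampling step for a plain particle filter; it is not the paper's UPF implementation, just an illustration of the step whose degeneracy the UPF proposal is meant to mitigate. When the likelihood is sharply peaked, almost all weight falls on a few particles and diversity collapses after this step.

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Systematic (low-variance) resampling for a basic particle filter.

    particles: array of particle states; weights: normalized importance
    weights summing to 1. Returns a new particle set of the same size.
    """
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n  # one draw, evenly spaced
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices]
```

With uniform weights the particle set is preserved; with a single dominant weight, every resampled particle is a copy of that one particle, which is the diversity collapse described above.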

Proposed method
When the UPF cannot accurately perceive landmarks, the particles may not converge. When such a situation occurs, the coaching robot behaves as an observer, assists the player's self-localization from outside the field, and encourages the convergence of the particles.

True perspective image
First, the coaching robot captures a perspective image as shown in Fig. 2. We then transform it into a true perspective (top-down) image using a homography transform [6] (see Fig. 3). Since the homography transform requires at least four point correspondences on an image, the coaching robot selects four or more points out of 17 candidates: the four corners of the field, the eight corners of the penalty areas, the two penalty crosses, the two intersections of the center line with the side lines, and the center mark. In Fig. 2, eight points are used, indicated by red circles.

Estimation of a player's position
We estimate a straight line on which the robot is highly likely to be located. Only the uniform (jersey-colored) region is extracted from the image after the homography transform, and the region is then denoised by an opening operation [7] (see Fig. 4). After extracting the region of our own team's jersey, the player robot is assumed to lie on the line calculated by simple linear regression over the jersey pixels (see Fig. 5).
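The regression step can be sketched as follows, assuming the jersey segmentation has already produced a binary mask (in OpenCV, opening is typically done with `cv2.morphologyEx` and `cv2.MORPH_OPEN` before this step):

```python
import numpy as np

def robot_line(jersey_mask):
    """Fit y = a*x + b through jersey pixels by simple least squares.

    jersey_mask: 2D boolean array, True where jersey color was detected
    (assumed to come from the opening-denoised segmentation).
    Returns the slope a and intercept b of the fitted line.
    """
    ys, xs = np.nonzero(jersey_mask)
    a, b = np.polyfit(xs, ys, 1)
    return a, b
```

The foot search in the next step then walks along this line rather than over the whole image.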

Estimation of a foot position
Finally, the foot position of the player robot is estimated as the bottommost point of the non-green regions on the line (Fig. 5). We transform the color space of the perspective image into L*a*b* to detect the color of the field. L* stands for lightness, while a* and b* encode hue and saturation. The color approaches red as a* increases and green as it decreases, and approaches yellow as b* increases and blue as it decreases. We binarize the a* image with Otsu's thresholding method. Applying the homography transform to this image yields the true perspective image shown in Fig. 6. The estimated foot position is illustrated in Fig. 7 as a red dot.
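Otsu's method picks the threshold that maximizes the between-class variance of the two resulting classes. A minimal NumPy version, applicable to an a* channel flattened to a 1-D array, is sketched below (in practice `cv2.threshold` with `cv2.THRESH_OTSU` does this directly):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the Otsu threshold for a 1-D array of intensity values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability per cut
    mu = np.cumsum(p * np.arange(bins))       # cumulative mean per cut
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        # Between-class variance for every candidate cut point.
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)
    return edges[k + 1]  # upper edge of the chosen histogram bin
```

For the field/robot case, a* values of the green turf cluster low and jersey/robot pixels cluster higher, so the returned threshold separates the two modes.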

Determination of resampling position
Based on the foot position, the locations where the particles are scattered are determined. Taking into account the error of the estimated foot position, the positions of the particles are drawn from the normal distribution given by eq. (1):

f(x) = (1 / sqrt(2*pi*sigma^2)) * exp(-(x - mu)^2 / (2*sigma^2))   (1)

where mu is the mean and sigma^2 is the variance. In this paper, mu is set to the estimated foot position, and sigma is set to 1/alpha; increasing the value of alpha gathers the particles more tightly around the estimated foot. The resulting particle positions are indicated by yellow circles in Fig. 8.
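The rescattering step reduces to a single Gaussian draw per particle. A sketch with the paper's settings (12 particles, alpha = 6) follows; the function name and the treatment of x and y as independent draws are our assumptions:

```python
import numpy as np

def rescatter(foot_xy, n_particles=12, alpha=6.0, rng=None):
    """Re-draw particle positions around the estimated foot position.

    mu is the estimated foot position and sigma = 1/alpha, per eq. (1);
    x and y are drawn independently (assumption for this sketch).
    """
    rng = rng or np.random.default_rng(0)
    sigma = 1.0 / alpha
    return rng.normal(loc=foot_xy, scale=sigma, size=(n_particles, 2))
```

With alpha = 6, sigma is about 0.17 field units, so virtually all particles land within a small neighborhood of the foot estimate, which is the intended convergence-promoting behavior.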

Experiment
We verify whether the player's self-localization can be assisted using the image acquired by the coaching robot. First, we evaluate how close the estimated foot position is to the true one, comparing the proposed and conventional methods. Second, after correcting the particle positions with the proposed method, we verify whether the resulting estimate is closer to the true position than with the conventional method. The experiments were conducted with one player under a uniform LED lighting environment with natural light. We use the OpenCV 3.1 library for image processing. The value of alpha in the normal distribution of Section 3.4 is empirically set to 6 to prevent the particles from spreading. The number of particles is 12.

Experiment 1 (Verification of the accuracy of the estimated foot position)
When distributing particles using the proposed method, the accuracy of the estimated foot position in Section 3.3 is critical, because the resampling coordinates of the particles are based on the foot position. Therefore, we verify the accuracy of the estimated foot position of the proposed method against the measured actual foot position. In addition, we compare the estimation error of the self-localization with that of the conventional method.
In this experiment, we verify whether the error changes depending on the distance between the coaching and player robots. We prepared two kinds of routes, as shown in Fig. 9. The difference between patterns A and B is whether the player robot approaches the coaching robot. Experiments were conducted three times each, and the errors were averaged.

Result of Experiment 1
The results of Experiment 1 are shown in Table 1. Table 1 shows that the accuracy of the proposed method is better than that of the conventional method for both patterns A and B. The accuracy of the estimated foot position improved by 78%. Therefore, resampling the particles at the estimated foot position is expected to improve the accuracy.

Experiment 2 (Verification of the accuracy of self-localization after resampling)
After resampling the particles with the proposed method and letting the player robot move again, we verify the accuracy of the self-localization. We also compare the estimation error of the self-localization with that of the conventional method. As in Section 4.1, the player robot moves along two kinds of routes, illustrated in Fig. 10. Experiments were carried out three times each, and the errors against the true position were averaged to compare the accuracy. When resampling is performed with the proposed method, the heading of each particle is also drawn from the normal distribution of eq. (1), centered on the estimated heading. In self-localization, the heading is corrected by recognizing landmarks. Therefore, the value of mu is set to the previously estimated heading, and the value of sigma is empirically set to pi/8.
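The heading resampling differs from the position case only in that angles must be kept on the circle. A sketch with the paper's sigma = pi/8 follows; the wrapping convention and function name are our assumptions:

```python
import numpy as np

def rescatter_heading(theta_prev, n_particles=12, sigma=np.pi / 8, rng=None):
    """Draw particle headings around the previous estimate (eq. (1) with
    mu = theta_prev, sigma = pi/8), wrapped back into (-pi, pi]."""
    rng = rng or np.random.default_rng(0)
    th = rng.normal(theta_prev, sigma, n_particles)
    return np.arctan2(np.sin(th), np.cos(th))  # wrap to (-pi, pi]
```

The `arctan2` wrap avoids headings drifting outside the representable range when the previous estimate sits near the +/- pi boundary.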

Result of Experiment 2
The results of Experiment 2 are shown in Table 2. Table 2 shows that the proposed method is more accurate than the conventional method for both patterns A and B. The average accuracy improved by 64%.

Conclusion
In this paper, we have proposed a method for improving the accuracy of UPF-based self-localization in RoboCup Soccer SPL. In the proposed method, the particle positions are corrected using the observer (coaching robot) to assist the subject (player robot) performing self-localization.
As future work, we will extend the assistance of self-localization estimation to multiple player robots.

Fig. 6. A binary image of the player robot.

Table 1. Average error of the estimated foot position (Experiment 1)