An image-based visual servoing control method for UAVs based on fuzzy logic

Visual servoing achieves precise positioning and motion control through visual feedback, and it has been widely applied to robotics and unmanned aerial vehicles (UAVs) in recent years. This paper presents a novel image-based visual servoing (IBVS) control method for UAVs based on fuzzy logic, which addresses the field-of-view constraint and improves control efficiency. A fuzzy logic scheme for the servo gain is designed for the control input of visual servoing, which solves the problem of feature loss in IBVS and improves efficiency. Meanwhile, a depth computation method based on known data is proposed to estimate the unknown depth in the image Jacobian matrix, which makes the control easier to converge. The effectiveness of the proposed method is verified by simulations of a quadrotor UAV equipped with a monocular camera.


Introduction
Rotor UAVs have developed rapidly in the past decade with the progress of UAV and information technology. 1 Rotor UAVs can mainly be categorized as single rotors, quadrotors, hexarotors, and octorotors. Owing to their small size, light weight, low cost, simple structure, and good flexibility, 2 quadrotors are widely used in a variety of scenarios such as UAV performances, small-scale logistics transportation, pesticide spraying, high-altitude auxiliary fire-fighting, and so on. 3 These tasks require that the UAVs achieve precise positioning, altitude hovering, and target tracking in the application scenarios. At present, most commercial UAVs use GPS for positioning. 4 However, GPS accuracy may be poor or even unavailable near high-rise buildings, in crowded areas, or in indoor environments, resulting in mission failure. 5,6 Therefore, it is of prime importance to study autonomous hovering and tracking control and to find an alternative solution for GPS-denied conditions.
Visual sensors have been applied to UAV control to build vision-based rotor UAV control systems. 7 The visual servoing of a UAV aims to obtain visual image information of the target position and to control the movement of the UAV, reaching a stable state at the desired relative position and attitude. 8 Visual servoing can be divided into position-based visual servoing (PBVS) and image-based visual servoing (IBVS). 9 The error of PBVS is defined in 3D Cartesian space, which makes it very sensitive to the camera calibration parameters and to the estimation accuracy of the target position. IBVS directly uses the image feature error in the image plane to derive the control input and does not need to estimate 3D position information; moreover, it is not affected by the camera parameters. Unlike PBVS, IBVS does not require accurate geometric models of the visual objects. 10 Therefore, IBVS is widely used in mobile robot and UAV control. 11-14 The servo gain affects the control speed and stability of an IBVS system. 15-17 When the servo gain is too large, the linear and angular velocities of the system increase, causing oscillation and undermining the stability of the system. On the contrary, when the servo gain is too small, convergence slows and the control process takes too much time. Therefore, it is crucial to set a reasonable servo gain for servo control. At present, the servo gain is usually set empirically to a fixed value. 18,19 Consequently, such a method cannot achieve accurate control under complex nonlinear conditions.
Recently, some scholars have tried to adjust the servo gain in real time. For example, an adaptive gain matrix can be adjusted according to the deviation of the training image relative to the target object, so that additional information from the image processing software is used to improve the performance of the visual servo controller. To achieve dynamic and stable visual servoing, the proportional gain of a proportional-integral-derivative (PID) controller has been used as a substitute for the servo gain function. 20 However, the parameters of the PID controller are not easy to tune and there is no general approach for different tasks. By combining reinforcement learning with the visual servoing control system, dynamically selecting the servo gain improves the performance of visual servo control. 21-23 However, reinforcement learning is generally computationally expensive and has poor real-time performance. These studies show that an appropriate servo gain is necessary for good control, but the efficiency and universality of these methods still need to be improved.
Fuzzy logic is effective at expressing qualitative knowledge and experience with unclear boundaries. 24,25 It uses the concept of membership functions to distinguish fuzzy sets, handle fuzzy relations, and simulate human reasoning through rule-based inference. By combining fuzzy logic with IBVS control, the system can adaptively select an appropriate servo gain in different states. Compared with reinforcement learning and its heavy computational burden, fuzzy logic has the advantages of a small amount of computation and good real-time performance. The fuzzy rule base is the core of a fuzzy logic algorithm. 26,27 This paper presents an image-based visual servoing (IBVS) control method for UAVs with fuzzy logic. Firstly, two independent servo gains are designed for the linear and angular velocities, and fuzzy logic is designed to adjust both gains adaptively, which mitigates feature loss in IBVS and improves efficiency. Secondly, a depth calculation method based on known data is proposed to solve the problem of the unknown depth in the Jacobian matrix, which converges more easily than a fixed depth value.
The remainder of the paper is organized as follows. Section ''Related works'' is devoted to works related to the proposed technique, such as the model of quadrotors and the classical IBVS method. Section ''IBVS based on fuzzy logic'' presents the IBVS based on fuzzy logic for adaptive adjustment of the servoing gain. Simulations are detailed in Section ''Simulations and experiments'' to illustrate the performance of the proposed method. Finally, conclusions are drawn in Section ''Conclusions.''

Quadrotor model description
The coordinate frames of a quadrotor UAV are shown in Figure 1, where $X_b$-$Y_b$-$Z_b$ denotes the body-fixed frame and $X_i$-$Y_i$-$Z_i$ denotes the world frame. For simplicity, it is assumed that the camera frame coincides with the quadrotor body frame. The transformation between the inertial frame and the body frame is realized by the rotation matrix $R$, obtained from the roll, pitch, and yaw angles $(\phi, \theta, \psi)$ as $R = R_\phi R_\theta R_\psi$. Let $p$ denote the position of the UAV's centroid in the world frame, $v$ and $\omega$ the linear and angular velocities in the body-fixed frame, and $m$ the mass of the UAV. The rigid-body dynamics are

$$\dot{p} = R v, \qquad m\dot{v} = -\mathrm{sk}(\omega)\, m v + F + G,$$
$$\dot{R} = R\, \mathrm{sk}(\omega), \qquad I\dot{\omega} = -\mathrm{sk}(\omega)\, I\omega + \tau,$$

where $\mathrm{sk}(\cdot)$ is the skew-symmetric matrix operator such that $\mathrm{sk}(a)b = a \times b$ (''$\times$'' is the cross product operator), $R \in SO(3)$ is the orthogonal rotation matrix from the body-fixed frame to the world frame, $I \in \mathbb{R}^{3 \times 3}$ is the UAV's constant inertia matrix around the centroid, and $\tau$ is the control torque. $F$ and $G$ can be calculated as

$$F = -T e_z, \qquad G = m g R^{T} e_z,$$

where $F$ is the total thrust expressed in the body-fixed frame, $g$ is the gravity acceleration, and $e_x$, $e_y$, and $e_z$ are the unit vectors.

The IBVS model of quadrotor
The camera geometry is shown in Figure 2, where $X$-$Y$-$Z$ represents the camera frame and $x$-$y$ represents the image plane frame. The intersection of the $Z$ axis of the camera frame with the image plane is called the principal point, and its coordinate in the image plane frame is $(u_0, v_0)$.
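The two operators that the model relies on, $\mathrm{sk}(\cdot)$ and the composed rotation matrix, can be sketched in Python as follows. This is a minimal numpy illustration, not the paper's implementation; the ZYX composition order is an assumption, since the paper only states $R = R_\phi R_\theta R_\psi$:

```python
import numpy as np

def sk(a):
    """Skew-symmetric operator: sk(a) @ b equals the cross product a x b."""
    return np.array([[0.0,  -a[2],  a[1]],
                     [a[2],  0.0,  -a[0]],
                     [-a[1], a[0],  0.0]])

def rotation_matrix(phi, theta, psi):
    """Body-to-world rotation from roll (phi), pitch (theta), yaw (psi),
    composed in ZYX order (an assumed convention)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cph, -sph], [0, sph, cph]])
    Ry = np.array([[cth, 0, sth], [0, 1, 0], [-sth, 0, cth]])
    Rz = np.array([[cps, -sps, 0], [sps, cps, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```

Because $R$ is orthogonal, `R @ R.T` recovers the identity, and `sk(a) @ b` matches `np.cross(a, b)` for any vectors.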
The relationship between the camera coordinate system and the image plane coordinates is

$$x = u_0 + \frac{fa\,X}{Z}, \qquad y = v_0 + \frac{fb\,Y}{Z}, \qquad (4)$$

where $f$ is the focal length, and $a$ and $b$ are the respective numbers of pixels per unit distance in the horizontal and vertical directions. The derivative of equation (4) with respect to time can be expressed as

$$\dot{x} = fa\,\frac{\dot{X}Z - X\dot{Z}}{Z^2}, \qquad \dot{y} = fb\,\frac{\dot{Y}Z - Y\dot{Z}}{Z^2}. \qquad (5)$$

The position coordinate $P = (X, Y, Z)^T$ of the target in the camera coordinate system is related to the motion of the camera by

$$\dot{P} = -v - \omega \times P, \qquad (6)$$

where $v$ and $\omega$ are the linear and angular velocities of the camera. Substituting equation (6) into equation (5) and assuming square pixels ($a = b$) yields, with $\bar{x} = x - u_0$ and $\bar{y} = y - v_0$,

$$\begin{bmatrix}\dot{\bar{x}}\\ \dot{\bar{y}}\end{bmatrix} = \begin{bmatrix} -\dfrac{fa}{Z} & 0 & \dfrac{\bar{x}}{Z} & \dfrac{\bar{x}\bar{y}}{fa} & -\left(fa + \dfrac{\bar{x}^2}{fa}\right) & \bar{y} \\ 0 & -\dfrac{fa}{Z} & \dfrac{\bar{y}}{Z} & fa + \dfrac{\bar{y}^2}{fa} & -\dfrac{\bar{x}\bar{y}}{fa} & -\bar{x} \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix}. \qquad (7)$$

For convenience, equation (7) can be rewritten as

$$\dot{s}_i = J_i V, \qquad (8)$$

where $J_i$ is the image Jacobian for the $i$th feature point. The objective of IBVS is to minimize the feature error vector

$$e(t) = s(t) - s^{*}, \qquad (9)$$

where $s \in \mathbb{R}^{2M \times 1}$ contains the current image coordinates of the $M$ features and $s^{*} \in \mathbb{R}^{2M \times 1}$ their desired positions. Since the feature positions are related to the motion of the UAV, differentiating equation (9) gives

$$\dot{e} = J V, \qquad (10)$$

where $V = [v^T, \omega^T]^T$ collects the UAV's linear and angular velocities in the body-fixed frame and $J \in \mathbb{R}^{2M \times 6}$ is the stacked image Jacobian matrix. To ensure an exponential decrease of the feature error, the velocity command of the UAV is obtained by

$$V = -\lambda J^{+} e, \qquad (11)$$

where $J^{+} = (J^T J)^{-1} J^T$ is the Moore-Penrose pseudo-inverse of $J$ and $\lambda$ is the servo gain. Substituting equation (11) into equation (10), we obtain the closed-loop system

$$\dot{e} = -\lambda J J^{+} e.$$

To prove the stability of the control system, the candidate Lyapunov function is defined as

$$E = \frac{1}{2} e^T e,$$

whose time derivative is

$$\dot{E} = e^T \dot{e} = -\lambda\, e^T J J^{+} e.$$

In this paper, $\lambda$ is limited to the range $(0, 1.5]$. Thus $\dot{E} < 0$ for any $e \neq 0$, which guarantees the global asymptotic stability of the closed-loop system.
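The control law in equation (11) can be sketched as follows. This is a minimal Python/numpy illustration, not the paper's implementation; the default focal length value is an arbitrary assumption, and square pixels are assumed as in the interaction matrix above:

```python
import numpy as np

def interaction_matrix(u, v, Z, f):
    """2x6 image Jacobian J_i for one feature at (u, v), measured in pixels
    relative to the principal point, at depth Z; f is the focal length in
    pixels (square pixels assumed)."""
    return np.array([
        [-f / Z, 0.0,    u / Z, u * v / f,     -(f + u * u / f),  v],
        [0.0,    -f / Z, v / Z, f + v * v / f, -u * v / f,       -u],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5, f=800.0):
    """Velocity command V = -lam * J^+ e for M features (equation (11))."""
    e = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    J = np.vstack([interaction_matrix(u, v, Z, f)
                   for (u, v), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(J) @ e   # 6-vector [vx vy vz wx wy wz]
```

When the current features already coincide with the desired ones, the error vector is zero and the command is the zero velocity, matching the equilibrium of the closed-loop system.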

IBVS based on fuzzy logic
This section introduces the IBVS method based on fuzzy logic. The scheme of this method is shown in Figure 3.

The fuzzy logic
According to equation (11), the control input of IBVS is mainly affected by $\lambda$ and $e$, where $e$ is determined by the environment and cannot be changed, while the servo gain $\lambda$ is conventionally fixed according to experience. At the start of the task, the position error $e$ is very large, which produces a large control input. Excessive control may drive the target out of the UAV's field of view, so a small servo gain $\lambda$ is needed. However, when the UAV is close to the target position, the error $e$ becomes very small, which leads to a small control input and a long convergence time; a large servo gain is then needed to improve efficiency. Therefore, we propose a method based on fuzzy logic that dynamically adjusts the servo gain $\lambda$ according to the absolute value of the error $e$ in the UAV image plane and its derivative. The method is based on experience gained from using the algorithm and tuning parameter values. Compared with other approaches, such as machine learning, it requires less time to develop and implement. The membership functions of the fuzzy variables are shown in Figure 5, and Table 1 lists the fuzzy rules for estimating $\lambda$ based on $|e|$ and $|\dot{e}|$.
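A minimal sketch of such a fuzzy gain scheduler is given below in Python. The triangular membership functions, the normalization of $|e|$ and $|\dot{e}|$ to $[0, 1]$, the rule table, and the crisp gain levels are all illustrative assumptions (the paper's actual membership functions and rules are in Figure 5 and Table 1); only the qualitative pattern taken from the text is kept: large error gives a small gain, small error gives a large gain, with the gain confined to $[0, 1.5]$:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def memberships(x):
    """Fuzzify a normalized input in [0, 1] into Small/Medium/Large degrees."""
    return {"S": tri(x, -0.5, 0.0, 0.5),
            "M": tri(x, 0.0, 0.5, 1.0),
            "L": tri(x, 0.5, 1.0, 1.5)}

# Rule table (assumed): antecedents are (|e| level, |de| level),
# consequent is the gain level. Large error -> Small gain, and vice versa.
RULES = {("S", "S"): "L", ("S", "M"): "M", ("S", "L"): "M",
         ("M", "S"): "M", ("M", "M"): "M", ("M", "L"): "S",
         ("L", "S"): "M", ("L", "M"): "S", ("L", "L"): "S"}

GAIN = {"S": 0.1, "M": 0.8, "L": 1.5}  # crisp gain per output level, in [0, 1.5]

def fuzzy_gain(e_norm, de_norm):
    """Mamdani-style inference (min t-norm) with weighted-average defuzzification."""
    me, mde = memberships(e_norm), memberships(de_norm)
    num = den = 0.0
    for (le, lde), lout in RULES.items():
        w = min(me[le], mde[lde])      # firing strength of this rule
        num += w * GAIN[lout]
        den += w
    return num / den if den > 0.0 else GAIN["M"]
```

With this table, a large normalized error combined with a large error rate yields the minimum gain 0.1, while a settled state near the target yields the maximum gain 1.5.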

Depth estimation method
The coordinates of the four target points in the camera frame are $P_1(X_1, Y_1, Z_1)$, $P_2(X_2, Y_2, Z_2)$, $P_3(X_3, Y_3, Z_3)$, and $P_4(X_4, Y_4, Z_4)$, respectively. The distance between $P_1$ and $P_2$ is $l_1$, and the distance between $P_3$ and $P_4$ is $l_2$:

$$l_1^2 = (X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2, \qquad (12)$$
$$l_2^2 = (X_3 - X_4)^2 + (Y_3 - Y_4)^2 + (Z_3 - Z_4)^2. \qquad (13)$$
As shown in Figure 6, the coordinates of the four target points in the image are $p_1(x_1, y_1)$, $p_2(x_2, y_2)$, $p_3(x_3, y_3)$, and $p_4(x_4, y_4)$, respectively. According to equation (4),

$$x_i = u_0 + \frac{fa\,X_i}{Z_i}, \qquad (14)$$
$$y_i = v_0 + \frac{fb\,Y_i}{Z_i}, \qquad i = 1, \ldots, 4. \qquad (15)$$

Since the depth over the target plane is consistent, set $Z_1 = Z_2 = Z'_1$ and $Z_3 = Z_4 = Z'_2$ to simplify the computation. Substituting equations (14) and (15) into equations (12) and (13), respectively, and assuming square pixels ($a = b$) yields

$$\left((x_1 - x_2)^2 + (y_1 - y_2)^2\right) Z'^2_1 = (fa)^2 l_1^2, \qquad (16)$$
$$\left((x_3 - x_4)^2 + (y_3 - y_4)^2\right) Z'^2_2 = (fa)^2 l_2^2. \qquad (17)$$

The values of the depths $Z'_1$ and $Z'_2$ then follow as

$$Z'_1 = \frac{fa\, l_1}{\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}}, \qquad Z'_2 = \frac{fa\, l_2}{\sqrt{(x_3 - x_4)^2 + (y_3 - y_4)^2}}. \qquad (18)$$

In order to reduce the error, the final depth $Z$ is the average of the two values, that is

$$Z = \frac{Z'_1 + Z'_2}{2}.$$

Substituting the value of $Z$ into equation (7), the Jacobian matrix can then be obtained.
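This depth computation can be sketched in Python as follows (a numpy illustration; the focal length and the synthetic pixel values in the example are assumptions, and square pixels are assumed so $fa$ reduces to a single focal length in pixels):

```python
import numpy as np

def estimate_depth(px, l1, l2, f):
    """Average depth of a planar target from known side lengths.
    px: 4x2 array of image coordinates of p1..p4, relative to the principal
    point; l1 = |P1 P2| and l2 = |P3 P4| are the known metric distances;
    f is the focal length in pixels (square pixels assumed)."""
    d12 = np.linalg.norm(px[0] - px[1])   # image distance between p1 and p2
    d34 = np.linalg.norm(px[2] - px[3])   # image distance between p3 and p4
    Z1 = f * l1 / d12                     # depth Z'_1 of the P1-P2 pair
    Z2 = f * l2 / d34                     # depth Z'_2 of the P3-P4 pair
    return 0.5 * (Z1 + Z2)                # final depth: average of the two
```

For example, a pair of points 1 m apart at 4 m depth seen with f = 800 px projects 200 px apart in the image, so the recovered depth is exactly 4 m.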

Simulations and experiments
In this section, several numerical examples are presented to demonstrate the effectiveness and performance of the IBVS method based on fuzzy logic.
In this paper, the fuzzy logic is implemented with the Fuzzy Logic Toolbox in MATLAB. The resulting servo gain $\lambda$ is shown in Figure 7. When the feature error and its rate of change are large, the output gain is at its minimum, which effectively prevents an excessive control input. When the feature error and its rate of change are very small, the output gain is at its maximum, which increases the control input and thus improves the efficiency of control.
To demonstrate the effectiveness and practicability of the proposed method, C-IBVS 28 under different servo gains on the simulation platform is used for comparison. Figure 7 shows the position trajectories of the quadrotor UAV under different conditions. Figure 8 shows that when the gain value is small, the motion trajectory of the UAV is relatively gentle; as the gain increases, the movement becomes more and more intense, leading to larger changes in the trajectory. Figure 9 shows the feature trajectories in the image plane. When the gain $\lambda = 1.5$ is adopted in the C-IBVS method, the feature trajectories in the image change violently and the feature points reach the image edge, which may lead to the risk of losing the target. When $\lambda = 1$, there is a safe distance between the feature trajectories and the edge. When $\lambda = 0.5$, the image trajectory is the shortest and the safety distance is the largest. The feature trajectory of the IBVS based on fuzzy logic also does not change violently, and a sufficient safety margin is reserved to ensure that the target features are not lost. Figure 10 shows the motion trajectories of the UAV under different conditions. When $\lambda = 0.5$, convergence needs 25 time steps and the trajectories are very smooth. When $\lambda = 1$, it needs 15 time steps and there is slight vibration in the X, Y, and Z directions. When $\lambda = 1.5$, it also needs 15 time steps, and the vibration is very obvious. The convergence time of our method is 12 time steps and the curve has no overshoot, which demonstrates good control performance and high efficiency.

Conclusions
In this paper, an image-based visual servoing (IBVS) control method for UAVs based on fuzzy logic is proposed. Firstly, fuzzy logic is designed to adaptively adjust the servo gain, which solves the problem of feature loss in IBVS and improves efficiency. Secondly, to solve the problem of the unknown depth in the Jacobian matrix, a depth calculation method based on known data is proposed, which is more accurate than a fixed depth value and makes the control easier to converge. The simulation results show that the proposed method is effective.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.