Visual Servo Control of a Legged Mobile Robot Based on Homography

 Abstract: Due to the intermittent motion of a legged mobile robot, an additional periodic movement is introduced that directly affects the image processing accuracy and destabilizes the visual servo control of the robot. To address this problem, this paper investigates a control scheme for the visual servoing of a legged mobile robot equipped with a fixed monocular camera. The kinematics of the legged mobile robot and homography-based visual servoing are employed to drive the robot to the desired pose. Because the approach exploits the homographic relationship between the current and desired poses, it requires no a priori knowledge of the three-dimensional geometry of the target; the feature points are extracted directly from the images to evaluate the homography matrix. To reduce the effects caused by the intermittent motion of the legged robot, an improved adaptive median filter is proposed. Furthermore, a sliding mode controller is designed, and a Lyapunov-based approach is used to analyze the stability of the control system. A simulation implemented in CoppeliaSim verifies the effectiveness of the proposed method.


Introduction
Legged mobile robots (LMRs) can adapt to unstructured work environments and are an important component of robotic applications. However, due to Brockett's condition [1], mobile robots need additional control information to address the asymptotic stability and feedback stabilization issues arising in differential geometric control theory. Cameras are therefore often used to provide vision for mobile robots: vision-based control information extracted from captured images can be introduced into the robot control loop to increase the moving stability and accuracy of the system as a whole. Consequently, notable advances have been made in what is known as visual servo control [2][3][4][5].
According to the type of visual feedback information, visual servo systems can be roughly classified into two main categories: position-based visual servoing (PBVS) and image-based visual servoing (IBVS) [6]. In PBVS, the controller is designed in three-dimensional (3D) Cartesian space: the 3D pose of the camera is estimated from the visual features of the target object. In IBVS, the controller is designed directly from visual features without 3D reconstruction. A combination of the two approaches also exists, known as hybrid or 2.5D visual servoing, in which a decoupled controller can be designed using the Euclidean homography estimated from the monocular displacement between the current and desired camera poses [7,8]. Investigations on the visual servo control of wheeled mobile robots (WMRs) have been carried out with these methods. Based on IBVS, Wang et al. [9] presented an adaptive backstepping control law to track a moving target. Ito et al. [10] proposed a control scheme utilizing vision-based trajectory planning and tracking control to precisely generate a trajectory following a moving target. To meet field-of-view constraints, Xu et al. [11] developed a three-step epipolar-based visual servoing scheme for a mobile robot. Employing a Lyapunov-based approach, Fang et al. [12] presented an adaptive estimation method to compensate for the constant, unmeasurable parameters in a homography matrix. MacKunis et al. [13] investigated the projective geometric relationships between target points in a reference image and points in a live image and proposed a hybrid visual servo control method to achieve the simultaneous tracking of two WMRs. In addition, Becerra et al. [14] proposed a sliding mode control law for mobile robots in which convergence to the target is achieved without auxiliary images.
It should be noted that the abovementioned studies mainly focus on WMRs. Compared with WMRs, the complex mechanical structures and motion characteristics of LMRs pose challenges for visual servo control. Due to the intermittent motion of an LMR, an additional periodic movement affects the accuracy of the image processing, thereby destabilizing the visual servoing. For the LMR developed in our previous work [15], this paper investigates a homography-based control scheme for visual servoing. To reduce the influence of the intermittent motion on image processing, an improved median filter algorithm that accounts for the kinematics of the LMR is proposed. The rest of the paper is organized as follows. Section 2 briefly reviews the structure and kinematic analysis of the robot. Section 3 develops an adaptive median filter for pose estimation. Section 4 addresses the design of a sliding mode controller for visual servoing. Finally, Section 5 presents a simulation verifying the validity of the proposed control scheme, and Section 6 draws conclusions.

Mathematical modeling
The LMR proposed in [15] is shown in Figure 1(a). This robot is composed of eight planar six-bar linkages in different arrangements. By using cylindrical and conical gears, the four linkages on one side are driven by a common motor, as shown in Figure 1(b), where θ̇ is the input joint rate and v_LMR is the moving speed of a group of four linkages with respect to the ground. As shown in Figure 1(b), a pair of six-bar linkages on one side of the LMR shares the same ground link and is actuated by two cranks on the same gear arranged with a phase angle of π. The steering of the robot is then realized by controlling the motor speeds to produce a difference in v_LMR between the two sides.

To determine the auxiliary inputs for the controller, the kinematics of the six-bar linkages are analyzed in this section. Coordinate frames are defined first to describe the motion of the robot, and then a homography model is derived to calculate the pose of the robot from the feature points of the images. Figure 2 shows a schematic diagram of the six-bar linkage, which consists of a crank (link-1), a rocker (link-3), a ground link (link-4), two floating links (link-2 and link-5), and an output link (link-6). Points A~F represent the centers of the revolute joints, and θ is the input angle of the crank. A frame o-xy is established to measure all vectors, as shown in Figure 2. According to the kinematic modeling method of the Assur group presented in [15], the position vectors of points C, F and G can be obtained directly.
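The closed-form Assur-group position solutions from [15] are not reproduced here. As a rough illustration of the position analysis, the following Python sketch solves the four-bar subchain A-B-C-D of the linkage (crank A-B, coupler B-C, rocker D-C, ground A-D) for the rocker angle using Freudenstein's equation; the link dimensions in the usage example are hypothetical, and the branch selection simply takes one of the two assembly configurations.

```python
import math

def rocker_angle(theta, l_AB, l_BC, l_CD, l_AD):
    """Rocker angle psi of the four-bar subchain A-B-C-D for crank angle theta.

    Freudenstein's equation K1*cos(psi) - K2*cos(theta) + K3 = cos(theta - psi)
    is solved via the half-angle substitution t = tan(psi / 2).
    """
    K1 = l_AD / l_AB
    K2 = l_AD / l_CD
    K3 = (l_AB**2 - l_BC**2 + l_CD**2 + l_AD**2) / (2.0 * l_AB * l_CD)
    A = math.cos(theta) - K1 - K2 * math.cos(theta) + K3
    B = -2.0 * math.sin(theta)
    C = K1 - (K2 + 1.0) * math.cos(theta) + K3
    disc = B * B - 4.0 * A * C  # nonnegative whenever the linkage assembles
    return 2.0 * math.atan2(-B - math.sqrt(disc), 2.0 * A)

def joint_positions(theta, l_AB, l_BC, l_CD, l_AD):
    """Positions of joints B and C in the frame o-xy (A at the origin, D on the x axis)."""
    psi = rocker_angle(theta, l_AB, l_BC, l_CD, l_AD)
    B = (l_AB * math.cos(theta), l_AB * math.sin(theta))
    C = (l_AD + l_CD * math.cos(psi), l_CD * math.sin(psi))
    return B, C
```

The positions of points F and G on the output link then follow from the second Assur group (links 5 and 6) in the same manner.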

Kinematics of the LMR
where r_B, r_D and r_E are the position vectors of points B, D and E, respectively; l_BC, l_CD and l_EF represent the lengths of link-2, link-3 and link-5, respectively; and l_BD, l_CE, l_CF, l_CG and l_FG are the distances between the two points in the subscripts. Then, substituting Eqs. (1) and (2) into the loop-closure constraint yields the position solution of the linkage, where l_AD is the distance between points A and D.
Hence, given the dimensional parameters of the links, i.e., l_AB, l_AE, l_AD, l_BC, l_CD, l_EF, l_CF, l_FG and l_CG, and the input angle θ, the unknown variables, such as l_BD, can be solved for. Accordingly, the output trajectory Ω (point G) of a pair of linkages can be divided into two segments.
Furthermore, to control the locomotion of the robot, the relationship between the moving speed v_LMR and the input joint rate θ̇ must be established. Differentiating Eq. (6) with respect to time, v_LMR is conveniently obtained as the component of Ω̇ in the direction of the x axis. Eq. (7) gives an implicit expression of the velocity mapping between v_LMR and θ̇. However, it is difficult to evaluate θ̇ from this equation for a given set of v_LMR and θ. To save computational cost, an iteration algorithm based on the kinematic model is proposed for the solution of θ̇ (see Figure 3).

In addition, to evaluate the linear and angular velocities of the LMR, a body-fixed frame is established at the center point O of the robot (see Figure 4). For simplicity, the frame of the monocular camera mounted on the robot is set to be coincident with the body-fixed frame. With this arrangement, the imaging plane of the camera is perpendicular to the X axis of the body-fixed frame. The following relation can then be obtained, where L is the nominal distance between the two groups of linkages; v and ω are the linear and angular velocities of the robot, respectively; and v_r (v_l) denotes the speed of the right (left) group of linkages with respect to the ground.
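Assuming the relation between the per-side group speeds and the body velocities takes the standard differential-drive form (a sketch consistent with the quantities defined above, not necessarily the paper's exact expression), the mapping and its inverse can be written as:

```python
def body_velocities(v_r, v_l, L):
    """Linear speed v and yaw rate w of the robot center O from the
    right/left linkage-group speeds (differential-drive model)."""
    v = 0.5 * (v_r + v_l)
    w = (v_r - v_l) / L
    return v, w

def group_speeds(v, w, L):
    """Inverse mapping: per-side speeds (v_r, v_l) that realize (v, w)."""
    return v + 0.5 * L * w, v - 0.5 * L * w
```

Each per-side speed is then converted to an input joint rate θ̇ by the iteration algorithm of Figure 3.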

Coordinate frames relationship
The purpose of this paper is to regulate the position and orientation of the LMR based on image feedback of a fixed target. In this context, the desired pose of the robot is defined by a prerecorded image of a set of coplanar feature points P_i (i = 1, 2, …, N) in the object plane (see Figure 5). To evaluate the current pose using the homography matrix, the relationship between the current and desired poses of the robot is established in this section.
In Figure 5, one frame is defined to describe the desired pose of the robot and another to describe the current pose. The current pose of the robot expressed in the desired frame can then be described by a vector composed of the translation between the two frame origins and the rotational angle between the two frames. The visual servoing then controls the robot to reach the desired pose by driving this pose vector to zero.

Visual servoing kinematics
To control the robot, the mapping between q̇ and the linear and angular velocities of the robot, i.e., v and ω, must be established. The velocity of point O* with respect to the current frame is derived in Eq. (12), and expanding Eq. (12) yields the following component expressions.
According to the definition given in Section 2.2, the position vector T measured in the current frame and the position vector T* measured in the desired frame are related through the rotation matrix between the two frames [16], i.e., Eq. (14).
Expanding Eq. (14) and combining the result with Eqs. (13) and (15), Eq. (16) can be solved. According to Eq. (16), the pose of the robot can be adjusted through the vector u to accomplish the visual servoing task. To facilitate the design and analysis of the controller, auxiliary error signals are used as the controller inputs; the mapping between the auxiliary error signals and the pose of the robot is given in Eqs. (17) and (18). Consequently, Eqs. (11) and (18) will be used to design the controller for the visual servoing of the robot.
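The sliding mode law operating on the auxiliary error signals is designed in Section 4. As a self-contained stand-in that illustrates the same v/ω pose-regulation structure on the unicycle-type kinematics, the sketch below uses the classical polar-coordinate posture regulator (not the paper's controller); the gains and initial pose are illustrative.

```python
import math

def polar_errors(x, y, phi):
    """Distance/bearing errors of the current pose w.r.t. a goal pose at the origin."""
    rho = math.hypot(x, y)
    alpha = math.atan2(-y, -x) - phi                       # bearing to the goal
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))   # wrap to (-pi, pi]
    beta = -phi - alpha
    return rho, alpha, beta

def regulate(x, y, phi, k_rho=0.5, k_alpha=1.5, k_beta=-0.3, dt=0.01, steps=3000):
    """Drive the unicycle toward the origin; returns the final distance error.

    Stable for k_rho > 0, k_beta < 0, k_alpha - k_rho > 0.
    """
    for _ in range(steps):
        rho, alpha, beta = polar_errors(x, y, phi)
        v = k_rho * rho
        w = k_alpha * alpha + k_beta * beta
        x += v * math.cos(phi) * dt    # unicycle kinematics, forward-Euler step
        y += v * math.sin(phi) * dt
        phi += w * dt
    return math.hypot(x, y)
```

Starting from a pose whose bearing error lies in (-π/2, π/2], the distance error decays toward zero, mirroring the convergence goal of the homography-based servoing.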

Homography calculation and decomposition
To calculate the auxiliary error signals in Eq. (17), the current pose of the robot must be obtained. In this paper, it is retrieved from the current and desired images by homography calculation and decomposition. For a set of coplanar feature points P_i on the object plane (see Figure 5), two images of these points taken by the same camera at different positions (O and O*) are geometrically related by a homography matrix H [17]. With the aid of the algorithms presented in [18][19][20], the feature points in the imaging plane can be recognized. Let p_i and p_i* denote the pixel coordinates in the imaging planes of the current and desired poses, respectively; the mapping between p_i and p_i* is given by the homography in Eq. (21). According to Eqs. (21) and (14), q can be evaluated by the homography decomposition, and the auxiliary error signals can then be obtained using Eqs. (14) and (17).
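The homography relating p_i and p_i* can be estimated from N ≥ 4 correspondences by the direct linear transform (DLT). The following plain-NumPy sketch illustrates the computation for noiseless points (no coordinate normalization, which a practical implementation would add):

```python
import numpy as np

def estimate_homography(p, p_star):
    """DLT estimate of H such that p_star ~ H p, from >= 4 point pairs.

    Each correspondence contributes two rows of the linear system A h = 0;
    the solution is the right singular vector of A with the smallest
    singular value, reshaped to 3x3 and normalized so that H[2, 2] = 1.
    """
    A = []
    for (x, y), (u, v) in zip(p, p_star):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

The decomposition of the estimated H into rotation, scaled translation, and plane normal then recovers the pose information used by the controller.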

Adaptive median filter for pose estimation
The calculation of the current pose of the robot assumes that the camera centers of the current and desired poses are coplanar. However, due to the intermittent motion of the robot, periodic interference affects the image processing: the displacement of the robot in the direction perpendicular to the motion plane reduces the accuracy of the homography estimation. To reduce this effect, as well as the noise from electromagnetic interference, an adaptive median filter (AMF) is applied to the images before the homography calculation, and a median-filter-based smoother is applied to the pose data obtained from the homography decomposition to enhance stability. The standard median filter (MF) is a nonlinear signal processing algorithm whose idea is to replace the center pixel by the median gray value within a window sliding over the image. Compared with the standard MF, the AMF adaptively changes the size of the sliding window: a smaller window preserves image details under low noise, while a larger window is applied to remove heavy noise.

Sliding mode controller design
The closed-loop kinematics of the auxiliary error signals can be obtained by substituting Eq. (26) into Eq. (18). Moreover, it is essential to prove that the controller guarantees the asymptotic stability of the control system. The analysis is as follows: a positive-definite Lyapunov function is selected; according to the relationship between the frames (see Figure 5) and Eq. (17), its time derivative along the closed-loop trajectories can be shown to be negative semidefinite. Hence, the stability of the control system is proven.
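For reference, the image-filtering step above can be sketched as the standard two-stage adaptive median filter: stage A grows the window until the median is not an impulse, and stage B keeps the original pixel if it is not an impulse itself. The paper's improved AMF additionally exploits the LMR kinematics to handle the periodic disturbance; that coupling is not reproduced in this minimal sketch.

```python
import numpy as np

def adaptive_median_filter(img, s_max=7):
    """Standard adaptive median filter on a 2D grayscale image."""
    pad = s_max // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            s = 3                                  # start with a 3x3 window
            while True:
                r = s // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:             # stage A: median is not an impulse
                    zxy = img[i, j]
                    # stage B: keep the pixel unless it is an impulse itself
                    out[i, j] = zxy if zmin < zxy < zmax else zmed
                    break
                s += 2                             # grow the window and retry
                if s > s_max:
                    out[i, j] = zmed
                    break
    return out
```

Salt-and-pepper impulses are replaced by the local median while uncorrupted pixels pass through unchanged, which preserves feature-point detail better than a fixed-size median filter.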
According to the designed controllers, the scheme developed to realize visual servo control is shown in Figure 8.

Simulation
In this section, a simulation is carried out to verify the validity of the proposed control scheme. A virtual model of the robot with the same dimensions as the prototype is built in the commercial software CoppeliaSim. The vision sensor provided by the software serves as the camera that captures the target image. The control scheme is written in C++ and controls the virtual model via the Remote API of the software.
As shown in Figure 9, the simulation setup includes the model of the robot with a vision sensor and a target image containing six red circles. The target image is fixed with respect to the global frame defined in the software. Table 1 lists the dimensional parameters of the robot, and Table 2 lists the size of the image and the parameters of the vision sensor and the controller. Figure 10(a) shows snapshots of the simulation, in which the robot moves from the initial pose to the final pose, and Figure 10(b) plots the corresponding results. The fluctuation of the angular velocity arises because the update frequency of the control law is not high enough to track its variation. The simulation results demonstrate that the proposed control scheme can be effectively used for the visual servoing of the robot.

Conclusion
An investigation of the visual servo control of a legged mobile robot is presented in this paper. The following conclusions are drawn.
(1) A kinematic analysis of the LMR is carried out to establish the relationship between the linear and angular velocities of the robot and its motor speeds.
(2) To reduce the effect of the interference caused by the intermittent motion of the robot on the estimation of the homography matrix, an improved median filter is designed.
(3) Based on the sliding mode control method, a controller is designed to generate the linear and angular velocities of the robot from the auxiliary error signals, and the stability of the control scheme is verified.
(4) A simulation implemented in CoppeliaSim demonstrates the validity of the proposed control scheme. The results show that the proposed method can effectively control the LMR to reach the desired pose.