Adaptive Visually Servoed Tracking Control for Wheeled Mobile Robot with Uncertain Model Parameters in Complex Environment

This paper investigates the stabilization and trajectory tracking problem of a wheeled mobile robot with a ceiling-mounted camera in a complex environment. First, an adaptive visual servoing controller is proposed based on the uncalibrated kinematic model arising from the complex operation environment. Then, an adaptive controller is derived to provide a solution for the uncertain dynamic control of a wheeled mobile robot subject to parametric uncertainties. Furthermore, the proposed controllers can be applied to a more general situation in which the parallelism requirement between the image plane and the operation plane is no longer needed. The overparameterization of the regressor matrices is avoided by exploiting the structure of the camera-robot system, and thus the computational complexity of the controller is reduced. The Lyapunov method is employed to verify the stability of the closed-loop system. Finally, simulation results are presented to demonstrate the performance of the suggested control.


Introduction
In recent decades, wheeled mobile robots (WMRs) have received increasing attention due to their promising applications in transportation, health care, security, and so on, which has promoted research on high-accuracy tracking control and stability analysis of WMRs [1][2][3][4]. In particular, the WMR is a nonholonomic mechanical system that cannot be stabilized at an equilibrium by any continuous, static state-feedback controller [5][6][7], which greatly complicates the study of WMRs. A significant direction in the motion control of WMRs is to employ various kinds of sensors in a closed-loop controller. The visual sensor, a typical noncontact sensor, has particular advantages such as abundant visual information and high efficiency; hence, visual servoing control of WMRs has become a vigorous research field worldwide.
Numerous scientific achievements have been reported on visual servoing and vision-based manipulation [8,9]. As with robot manipulators, the vision system of a mobile robot can take two kinds of configurations, namely, the eye-in-hand configuration [10,11] and the fixed-camera configuration [12,13]. In the first configuration, the camera is mounted on the end-effector. In contrast, the setup is called a static-camera or fixed-camera configuration when the camera is located on the ceiling. To date, there has been a plethora of prominent literature concerning the visual servoing of nonholonomic mobile robots. To mention a few, in [14], position-based visual servoing (PBVS) was employed for visual tracking between a WMR and a multi-DOF crane. In [15], a visual servoing scheme was presented for a nonholonomic mobile robot to combine the merits of PBVS and image-based visual servoing (IBVS). In [16], a novel strategy was proposed for visual servoing of a mobile robot, and the difficult issue of automatic extrinsic calibration was addressed. It should be noted that the above-mentioned works require the camera mounted on the end-effector to be tediously calibrated beforehand. Unfortunately, the controllers are very sensitive to camera calibration errors, which may give rise to reduced accuracy. To obviate this limitation, the uncalibrated camera system has emerged as a valid tool for practical systems. In [17], two independent uncalibrated cameras were used to accomplish person tracking for a vision-based mobile robot subject to a nonholonomic constraint. The authors in [18] addressed a visual servo regulation approach which works well without a perfectly calibrated camera. To deal with imperfect camera calibration, the visual servoing of nonholonomic mobile robots was proposed in [19], considering both unknown extrinsic parameters and unknown depth from the camera to the motion plane.
In [20], without calibrating the camera, an eye-in-hand visual trajectory tracking control strategy was constructed to ensure that the WMR is able to track the desired trajectory. The aforesaid papers mainly discuss the visual servoing of nonholonomic mobile robots with the eye-in-hand configuration. The fixed-camera configuration has a global sight and enables the camera system to keep the observed object always in the field of view. Therefore, many researchers also devote themselves to solutions for a WMR with a fixed uncalibrated camera. For instance, in [21], unified tracking and regulation visual servoing control of a WMR was studied, where the state information can be utilized to formulate the WMR kinematic model. In [22], a monocular camera with a fixed position and orientation was used to track the desired trajectory for a WMR, and the controller does not require the camera to be calibrated. Taking the limited velocity of a WMR into account, a control scheme for tracking a moving target with a WMR was presented in [23]. Despite the significant progress of visual servoing with a fixed uncalibrated camera, the adaptability of these controllers is unsatisfactory since the camera plane is always required to be parallel to the motion plane of the robots. This means that the controllers in [21][22][23] are no longer effective when the camera is fixed at a general orientation on the ceiling. To overcome this drawback, the authors in [24,25] proposed visual servoing of a mobile robot without the parallelism requirement. By employing an adaptive image-based visual servoing approach, the camera image plane and the motion plane of the WMR are freed from this pose constraint. However, all these methods suffer from overparameterization in the process of the decoupled linear transformation. In addition, these controllers are developed from a kinematics-based model, and the nonlinear dynamics are not taken into consideration in the controller design.
Dynamic model-based control methods [26][27][28][29] reflect the motion of real mobile robots with significant dynamics characterized by mass, inertia, and friction, which are not considered in kinematics-based control. The nonlinear dynamics of a mobile robot usually contain uncertain and time-varying parameters. Consequently, nonlinear dynamic controllers that deal with unmodeled robot dynamics deserve further research. Control methodologies such as adaptive control [6], sliding mode control [27], and neural network control [28] have been developed on dynamic models of mobile robots with uncertain parameters. So far, visual servoing control for mobile robots at the dynamic level can be found in [8][30][31][32]. In [32], position/orientation tracking control of WMRs via an uncalibrated camera was considered, and an adaptive controller was designed to compensate for the dynamic and camera-system uncertainties. It is noteworthy that the preceding studies are confined to visual servoing of mobile robots based on the dynamic model, and these methods are invalid in the more general situation where the uncalibrated camera is fixed at an arbitrary position. Additionally, overparameterization limits the applicability of these controllers to a great extent.
In this paper, the stabilization and trajectory tracking problems of a wheeled mobile robot in complex environments are studied. The main contributions of this paper are threefold: (1) Two visual servoing controllers are proposed to stabilize a wheeled mobile robot with a ceiling-mounted camera so that the desired trajectory tracking can be realized. First, an adaptive visual servoing controller is proposed based on the kinematic model. Then, an adaptive controller is derived to provide a solution for a wheeled mobile robot with uncertain dynamics subject to parametric uncertainties related to the camera system.
(2) An uncalibrated visual servoing control strategy is proposed to realize trajectory tracking of a WMR, whose major superiority lies in avoiding both the requirement that the camera plane be parallel to the motion plane of the robot and the overparameterization in [24,25]. Such a solution allows the controllers to be applied in a more general situation with a simpler structure and higher efficiency. (3) In comparison with the existing works on visual servoing mobile robot control in [19,33], the camera parameters, including the intrinsic and extrinsic parameters, do not need to be accurately calibrated, and tracking control can be ensured in the presence of uncertain dynamics.

Preliminaries and System Descriptions
Throughout this paper, a typical setup for the visually servoed wheeled mobile robot is considered, as shown in Figure 1, where the camera is mounted on the ceiling to observe the movement of a feature point labeled on the mobile robot. Let O_b X_b Y_b Z_b be the base coordinate frame, O_c X_c Y_c Z_c be the camera coordinate frame, and O_m X_m Y_m Z_m be the mobile robot coordinate frame, respectively. Furthermore, let O_m be the center of mass of the wheeled mobile robot, P be the feature point, and d be the distance from O_m to P along the positive direction of axis X_m. Without loss of generality, it is assumed that the robot moves in a specific plane. Note that both image-based kinematic control and dynamic control are fully considered in this paper.

Kinematics Model of Nonholonomic Mobile Robot in Task Space.

Let us first review the kinematics model of a mobile robot. Denote the task-space position of the wheeled mobile robot with respect to the base coordinate frame by (x_b, y_b), with orientation θ.

Remark 1. In this paper, parameter uncertainties of the visual servoing robot system are addressed, which means that the real parameter values ϕ_{k,a} and ϕ_{k,b} in (11)-(15) are unknown in the control design. Moreover, the image depth is not required to be constant during robot operation as in [25,36]; that is, the fixed-camera image plane need not be parallel to the operation plane, so a more realistic scenario is considered in both kinematic and dynamic control. In addition, the distance d between the feature point and the origin of the coordinate frame O_m X_m Y_m Z_m is assumed to be uncalibrated, which, together with the above parameter uncertainties, imposes great complexity and challenge on visual tracking control. Throughout this paper, the following assumptions hold.
Assumption 1. The feature point P can always be detected throughout the entire robot workspace such that the image position is continuously available. Moreover, the orientation θ of the mobile robot can be measured by the encoders or other optical sensors mounted on the actuators.
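As a minimal numerical sketch of the task-space kinematics underlying this setup, the standard unicycle model for a WMR with pose (x_b, y_b, θ) and velocity inputs v and ω can be integrated as below; the values of v, ω, and dt are illustrative, not taken from the paper.

```python
import numpy as np

def unicycle_step(state, v, omega, dt):
    """One Euler-integration step of the standard unicycle kinematics.

    state = (x_b, y_b, theta): planar position and heading in the base frame.
    v, omega: linear and angular velocity inputs.
    """
    x, y, theta = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += omega * dt
    return np.array([x, y, theta])

# Driving straight along the initial heading:
s = np.array([0.0, 0.0, 0.0])
for _ in range(100):
    s = unicycle_step(s, v=1.0, omega=0.0, dt=0.01)
# after 1 s at v = 1 m/s the robot has advanced ~1 m along x_b
```

The nonholonomic constraint is implicit here: the robot can only move along its current heading, which is why no continuous static state feedback can stabilize all three pose coordinates at once.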

Dynamics Model of Nonholonomic Mobile Robot.
The dynamic behavior of the wheeled mobile robot can be expressed by the Euler-Lagrange equation (16) [6,37], where M(θ) ∈ R^{3×3} is the inertia matrix, V(θ, θ̇) ∈ R^{3×3} is the Coriolis and centrifugal matrix, G ∈ R^{3×1} denotes the gravitational force, B(θ) ∈ R^{3×2} denotes the input transformation matrix, τ_D ∈ R^{2×1} represents the dynamic input torque, and A(θ) ∈ R^{1×3} is the so-called constraint vector with λ being the constraint force; the constraint can be further represented as in (17). It must be noted that the matrices M(θ), V(θ, θ̇), G, B(θ), and A(θ) do not depend on the actual position x_b and y_b (more details of the robotic dynamics model can be found in [37]). Based on the kinematics (1) and (2), the relation (18) holds. Differentiating both sides of (18), substituting into the robot dynamics (16), and premultiplying both sides by S^T(θ), we obtain (19), where (17) is utilized in the simplification. To facilitate the control scheme, the following dynamics properties of the WMR are employed [37].
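A minimal numerical sketch of the reduced dynamics after the nonholonomic constraint is eliminated: for planar motion the gravity term drops out and the reduced model relates the velocity vector (v, ω) to the generalized input. The mass and inertia values m and I_z below are hypothetical, not the paper's.

```python
import numpy as np

# Illustrative reduced dynamics of a differential-drive robot after the
# constraint elimination (premultiplication by S^T(theta)).  Gravity
# vanishes for planar motion; m and I_z are hypothetical values.
m, I_z = 10.0, 1.0
M_bar = np.diag([m, I_z])          # reduced inertia matrix (cf. Property 2)

def dynamics_step(vel, tau, dt):
    """vel = (v, omega); tau = generalized force/torque input."""
    acc = np.linalg.solve(M_bar, tau)   # solve M_bar @ acc = tau
    return vel + acc * dt

vel = np.array([0.0, 0.0])
for _ in range(100):
    vel = dynamics_step(vel, np.array([10.0, 0.0]), dt=0.01)
# a constant 10 N force on a 10 kg robot for 1 s gives v of about 1 m/s
```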

Property 2.
The inertia matrix M(θ) is symmetric and positive definite and satisfies μ_1‖x‖² ≤ x^T M(θ) x ≤ μ_2‖x‖² for any vector x, where μ_1 and μ_2 are positive constants and ‖·‖ denotes the standard Euclidean norm.
Property 4. The dynamic equation (19) can be linearly reparameterized in the form E_D(θ, θ̇, ρ, ρ̇)ϕ_d, where ρ ∈ R^{2×1} is a differentiable vector, ϕ_d ∈ R^{e_3×1} denotes the constant parameter vector of the dynamics and is unknown in the control design, and E_D(θ, θ̇, ρ, ρ̇) ∈ R^{2×e_3} is the regressor matrix of the dynamics.

Remark 2. From (8) and (19), it can be seen that the kinematic control and the dynamic control are related through τ_K and τ_D, respectively. If the designed kinematic input τ_K were actually achievable in the task execution without any time delay, the visual tracking control could be conveniently realized by the kinematic loop. However, in most state-of-the-art research on wheeled mobile robot control [19,32,33,38], it is stressed that the motors assembled on the left and right wheels may not respond fast enough, with the result that the actual kinematic control values τ_K may lag behind the designed values. Thus, in this paper, the dynamics control for the visual servoing WMR is also addressed, simultaneously taking the mechanical parameter uncertainties into consideration; that is, the precise parameter values (e.g., robot mass, inertia, and friction) are not required to be exactly measured.
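The linear parameterization of Property 4 can be illustrated with a scalar toy model: a dynamics m·v̇ + b·v = τ is linear in the unknown parameter vector ϕ = [m, b] with regressor E = [v̇, v]. The values of m and b below are hypothetical, chosen only to show the structure.

```python
import numpy as np

# Property 4 in miniature: the dynamics m*dv + b*v = tau is linear in the
# unknown parameters phi = [m, b], with known regressor E(dv, v) = [dv, v].
def regressor(dv, v):
    return np.array([dv, v])

phi = np.array([10.0, 0.5])      # true (unknown) mass and friction
dv, v = 0.2, 1.0
tau = regressor(dv, v) @ phi     # = 10*0.2 + 0.5*1.0 = 2.5
```

Because the unknowns enter linearly, an adaptation law only needs to estimate ϕ while the regressor is computed from measured signals.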

Problem Statement.
Based on the above system model and assumptions, control problems from two different perspectives, namely, kinematic control and dynamic control, are addressed. Given a continuous desired trajectory y_d, with ẏ_d, ÿ_d ∈ R^{2×1}, on the image plane, this paper aims to solve the following problems:

P1: assuming that the WMR responds fast enough, design an adaptive visual servoing kinematic controller (AVSKC) τ_K such that precise trajectory tracking performance can be obtained in the absence of a calibrated camera model; that is, (23) holds.

P2: when the kinematic input τ_K is not always achievable, design an adaptive visual servoing dynamic controller (AVSDC) τ_D such that (23) holds, simultaneously taking into account the uncalibrated camera-robot model.

Adaptive Visual Servoing Kinematic Control for Wheeled Mobile Robot under Uncalibrated Visual Model
In this part, we focus on the adaptive visual servoing kinematic control scheme for the wheeled mobile robot with an uncalibrated camera model, where the projection plane of the camera does not need to be parallel to the operation plane during the execution of the mission; the dynamic control will be presented in the next section. Since the parameters of the visual model are unknown, adaptation laws are presented to estimate the real parameter values, and based on the estimated parameters, the AVSKC is developed to realize asymptotic image trajectory tracking.

Controller Design.

Let N̂(y, θ), ẑ, and ẑ̇ be the estimates of N(y, θ), z, and ż obtained by replacing the unknown parameters ϕ_{k,a} and ϕ_{k,b} with the estimates ϕ̂_{k,a} and ϕ̂_{k,b}, respectively, where the estimates are provided by the adaptation laws. Define Δy = y − y_d as the image error. Then, inspired by [25], the AVSKC is designed as in (24), where α is a positive constant. In (24), the estimated visual model rather than the calibrated model is utilized, and the estimated depth and its derivative are also introduced to compensate for the model error since the image plane and the operation plane are nonparallel. We can now analyze the closed-loop kinematics with depth information as in (25), where Δϕ_{k,a} = ϕ_{k,a} − ϕ̂_{k,a} and Δϕ_{k,b} = ϕ_{k,b} − ϕ̂_{k,b}, and Property 1 is used. Substituting the AVSKC (24) into (25) gives rise to (26).

Unknown Parameter Estimation.

By observing (24), it is obvious that the estimate N̂(y, θ) is employed, which requires that the parameters ϕ̂_{k,a} and ϕ̂_{k,b} are updated online. The kinematic parameter updating laws are presented in (27) and (28), where Φ_a ∈ R^{e_1×e_1} and Φ_b ∈ R^{e_2×e_2} are positive-definite diagonal matrices. Thus, by integrating (27) and (28), N̂(y, θ), ẑ, and ẑ̇ in (24) become available.
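The certainty-equivalence structure above (control with estimated parameters, gradient-type adaptation driven by the image error) can be sketched with a scalar toy analogue. This is not the paper's controller (24) or laws (27)-(28): the plant gain a, the gains alpha and gamma, and the estimate theta_hat of 1/a are all hypothetical, chosen only to show how the adaptation law and the control law interlock.

```python
# Toy scalar analogue of the adaptive kinematic structure: a plant
# y' = a*u with unknown positive gain a; the controller uses an online
# estimate theta_hat of 1/a, updated by a gradient law derived from the
# Lyapunov function V = e**2/2 + a*(theta_hat - 1/a)**2/(2*gamma),
# for which the chosen update gives V' = -alpha*e**2 <= 0.
a = 2.0                      # true plant gain (unknown to the controller)
alpha, gamma, dt = 4.0, 5.0, 1e-3
y, y_d = 1.0, 0.0            # drive the error e = y - y_d to zero
theta_hat = 0.1              # poor initial estimate of 1/a (= 0.5)

for _ in range(20000):       # 20 s of simulated time
    e = y - y_d
    u = -alpha * e * theta_hat               # certainty-equivalence control
    theta_hat += gamma * alpha * e**2 * dt   # gradient adaptation law
    y += a * u * dt                          # Euler step of the plant
```

As in Theorem 1, the error converges while the estimate merely stays bounded; the adaptation law does not need theta_hat to converge to the true value 1/a.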

Stability Analysis.
At this point, we are ready to formulate the first theorem.

Theorem 1. Consider the visual servoing wheeled mobile robot represented by (1), (2), (4), (6), and (8), and suppose that the estimated interaction matrix N̂(y, θ) is nonsingular. In the case that the designed kinematic input τ_K is actually achievable in the task execution, the adaptive visual servoing kinematic controller (AVSKC) given by (24), together with the visual parameter adaptation laws (27) and (28), ensures the global stability of (26) and the asymptotic convergence of Δy to zero, such that lim_{t→∞} (y − y_d) = 0.

Proof. Construct the kinematic-based Lyapunov function candidate as in (29). Differentiating V_K with respect to time yields (30). Substituting the closed-loop kinematics (26) and the parameter updating laws (27) and (28) into (30), the derivative of V_K can be written as in (31). Since V_K ≥ 0 and V̇_K(t) ≤ 0, we conclude that V_K(t) is bounded; that is, Δy, Δϕ_{k,a}, and Δϕ_{k,b} are bounded, which directly implies that ϕ̂_{k,a} and ϕ̂_{k,b} are both bounded since ϕ_{k,a} and ϕ_{k,b} are constants. Thus, N̂(y, θ), ẑ, and ẑ̇ are all bounded, giving rise to the boundedness of τ_K from (24), which means that ẏ, Δẏ ∈ L_∞ from (8), and the boundedness of (26) is guaranteed. From the results of (29) and (38), we have Δy ∈ L_2 ∩ L_∞. Therefore, we obtain lim_{t→∞} (y − y_d) = 0, which completes the proof.
From the result in [34], it has been proven that the matrix N(y, θ) is always nonsingular. Thus, if the parameters ϕ̂_{k,a} and ϕ̂_{k,b} of N̂(y, θ) are updated properly, it can be ensured that N̂(y, θ) remains full rank by modifying the parameter adaptation laws. In this paper, the so-called parameter projection [39] is introduced to avoid singularity of N̂(y, θ). The adaptation laws for the visual kinematic parameters are presented in (33) and (35), where ϕ̂̇_{a,i}, ϕ̂̇_{b,i}, ℧_{a,i}, and ℧_{b,i} denote the ith elements of ϕ̂̇_a, ϕ̂̇_b, ℧_a, and ℧_b, respectively. Furthermore, the projection function proj(℧_{a,i}) is given in (36) and (37) [39,40], where an overbar and an underbar on (·) denote the upper and lower bounds of parameter (·). In this way, if the initial condition of each estimate lies between its lower and upper bounds, then the estimate remains within that region for all time.

Figure 1: Visually servoed wheeled mobile robot system and coordinate representation.
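A common switching form of the parameter projection described above can be sketched as follows; this is an illustrative implementation in the spirit of [39], not the paper's exact equations (36)-(37), and the numeric bounds are hypothetical.

```python
def proj(phi_hat, update, lower, upper):
    """Switching parameter projection.

    Passes the raw adaptation `update` through unless phi_hat sits on a
    bound and the update would push it further out, in which case the
    update is zeroed.  Keeping every estimate inside [lower, upper] is
    what lets the estimated matrix stay away from singular configurations.
    """
    if phi_hat >= upper and update > 0:
        return 0.0
    if phi_hat <= lower and update < 0:
        return 0.0
    return update

# An estimate at its upper bound is not pushed past it...
phi = 0.6
phi += proj(phi, 0.3, lower=0.1, upper=0.6) * 0.01   # blocked: phi stays 0.6
# ...but an inward update is applied unchanged.
phi += proj(phi, -0.2, lower=0.1, upper=0.6) * 0.01  # phi becomes 0.598
```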

Proposition 1. Consider the visual servoing wheeled mobile robot represented by (1), (2), (4), (6), and (8), with the parameter adaptation laws replaced by the projection-based laws (33) and (35). Then the conclusions of Theorem 1 still hold.
Proof. Choose the same Lyapunov function candidate V_K as in (29), whose time derivative can be computed analogously; thus, the proof of Proposition 1 follows that of Theorem 1.

Remark 3.
It is noted that the considerations on the visual kinematic model in this paper are similar to those in [19,24,25,32,33,36], where the visual model parameters are uncalibrated in the kinematic control design. However, the projection plane is set to be parallel to the operation plane in [32]. Also, note that the eye-in-hand configuration is addressed in [19,33] rather than the eye-to-hand setup in this paper; in addition, only partial camera intrinsic parameters are taken into consideration in [33], and only the extrinsic camera parameters are considered in [19]. Furthermore, the overparameterization problem is still unresolved in [24,25], where a 2 × 14 regressor matrix needs to be determined. Two extra feature points are introduced in [36] for the purpose of avoiding the direct use of the image Jacobian matrix in the control design. The major difference between the proposed AVSKC scheme and the uncalibrated visual tracking control schemes of [19,24,25,32,33,36] is that the parameters in the image projection matrix, including the intrinsic and extrinsic camera parameters, are estimated by adaptation laws while the overparameterization problem is avoided. This is realized by exploiting the structure of the image Jacobian matrix, inspired by [35] (see Property 1). Specifically, the salient features of the proposed AVSKC scheme lie in (I) the structurally simple implementation of the control law τ_K (24); (II) the inexpensive computation of the regressor matrices E_{K,a} and E_{K,b} in the adaptation laws (27) and (28) (or (33) and (35)); and (III) the guaranteed nonsingularity of N̂(y, θ) and the handling of the uncertain parameters by the adaptation laws (33) and (35).

Remark 4.
If the actuators of the WMR perform effectively, visual tracking on the image plane can be conveniently realized by the proposed AVSKC scheme. However, it is well recognized that the presence of dynamic uncertainties has a strongly negative impact on control performance. In the next section, we propose a dynamic control scheme for the visual servoing WMR together with the handling of dynamic uncertainties.

Adaptive Visual Servoing Dynamic Control for Wheeled Mobile Robot with Uncalibrated Visual Model and Dynamics
The focus of this section is to extend the wheeled mobile robot kinematic control to dynamic control in the presence of an uncalibrated visual model and uncertain dynamics; that is, neither the kinematic nor the dynamic parameters are required to be measured accurately. Note that, in this case, the designed controller becomes the dynamic input torque τ_D, so a deeper control loop is considered.

Controller Design. Define the reference image velocity as ẏ_r = ẏ_d − cΔy,
where c is a positive constant. Then, the reference error of the image velocity can be denoted as r_y = ẏ − ẏ_r = Δẏ + cΔy. Thus, the reference error r_y contains both the image error and the image velocity error. Furthermore, define a kinematic auxiliary variable τ_r as in (41). Differentiating (41) with respect to time, we obtain (42). The purpose of designing this auxiliary variable is to connect to the kinematic control variable τ_K, which can be specified as in (43). Note that, in (41), τ_K is the real response produced by the evolution of the robot dynamics (19) rather than a designed input. The relation between r_τ and r_y can be derived as in (44), where β_y is a positive constant. Using Property 1, we can further obtain N(y, θ)τ_K − N(y, θ)τ_r = −E_{K,a}(θ, τ_K)Δϕ_{k,a} + E_{K,b}(θ, τ_K, y)Δϕ_{k,b} and ẑẏ − zẏ = E_{K,b}(θ, x, ẏ)Δϕ_{k,b}. Thus, (44) can be rewritten as (45). Based on the kinematic control error r_τ and the reference error r_y, the adaptive visual servoing dynamic controller (AVSDC) is proposed as in (46), where β_τ is a positive constant.
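The role of the composite error r_y = Δẏ + cΔy can be sketched numerically: r_y acts as a stable first-order filter on the image error, so driving r_y to zero forces Δy to decay exponentially at rate c. The gain c and the initial error below are illustrative.

```python
import numpy as np

# If the dynamic loop enforces r_y = d(delta_y)/dt + c*delta_y = 0, the
# image error obeys d(delta_y)/dt = -c*delta_y and decays at rate c.
c, dt = 5.0, 1e-3
delta_y = np.array([1.0, -0.5])      # hypothetical initial image error (pixels)

for _ in range(5000):                # 5 s of simulated time
    delta_y += -c * delta_y * dt     # Euler step of the filtered error

# after 5 s with c = 5, the error has shrunk by roughly a factor e**(-25)
```

This is why the stability analysis only needs to show convergence of r_y: convergence of both Δy and Δẏ then follows from the filter structure.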
Remark 5. Since B(θ) is related to the actuator dynamics rather than the robot dynamics, in this paper we assume that the exact structure and parameters of B(θ) are known. In particular, B(θ) is defined as the identity matrix I_{2×2} in [6,38] when the actuators are free of operation faults.

Remark 6.
It is interesting that, in the proposed AVSDC scheme, both the image errors and the velocity errors (r_y = Δẏ + cΔy) are introduced into the control law, which strengthens the tracking performance and robustness of the dynamic closed-loop system. As will be shown in the stability analysis, the asymptotic convergence of r_y leads to the asymptotic convergence of both Δẏ and Δy. By substituting the AVSDC (46) into (19) and employing Property 4, one can obtain the closed-loop dynamics (47).

Unknown Parameter Estimation.
In the AVSDC design (46), the estimated dynamics and visual kinematics are employed, and the estimated parameters are updated online by (48), (49), and (50), where Φ_d ∈ R^{e_3×e_3} is a positive-definite diagonal matrix and the projection operation is given in (36) and (37).

Stability Analysis.
Based on the above analysis of the closed-loop dynamics, we have the following result, Theorem 2.
Proof. Similarly, consider the Lyapunov function candidate given in (52). Taking the derivative of V_D yields (53). Premultiplying both sides of (47) by r_τ^T and substituting into (53), we obtain (54), where Property 3 is used. Subsequently, substituting (45) and the parameter updating laws (48), (49), and (50) into (54), we obtain (55). As V_D ≥ 0 and V̇_D ≤ 0 hold simultaneously, V_D(t) must be bounded; that is, r_τ, M(θ), Δϕ_{k,a}, Δϕ_{k,b}, and Δϕ_d are all bounded, giving rise to the boundedness of θ, ϕ̂_{k,a}, ϕ̂̇_{k,a}, ϕ̂_{k,b}, and ϕ̂_d, since ϕ_{k,a}, ϕ_{k,b}, and ϕ_d are all constants. Moreover, N̂(y, θ) is bounded and nonsingular, and ẑ, ẏ_r, r_y ∈ L_∞ by observing (40) and (43). From (42), we have τ̇_r ∈ L_∞, leading to τ_D ∈ L_∞ from (46). According to the robot dynamics (19), we have τ̇_K ∈ L_∞, which directly implies that ṙ_τ ∈ L_∞. Thus, the closed-loop dynamics of the WMR in (47) is globally bounded.
Furthermore, from the results of (52) and (55), we get r_τ ∈ L_∞ ∩ L_2 and ẑr_y ∈ L_∞ ∩ L_2. Differentiating (8) with respect to time shows that ÿ ∈ L_∞. Additionally, from the above analysis, ÿ_r = ÿ_d − cΔẏ ∈ L_∞, which thus results in ẑ̇r_y + ẑṙ_y ∈ L_∞. Therefore, we have lim_{t→∞} ẑr_y = 0, and since ẑ ≠ 0 is guaranteed by the projection function, the image errors converge to zero such that lim_{t→∞} (y − y_d) = 0 and lim_{t→∞} (ẏ − ẏ_d) = 0.

Remark 7.
Compared with recent work on handling uncertain parameters for WMRs [19,33,34,36], where only kinematic uncertainties are addressed, this paper extends uncalibrated visual servoing control to a dynamic control loop in the presence of parameter uncertainties and varying depth. It can be seen in the AVSDC (46) that both the reference image error r_y and the kinematic control error r_τ, which contain the velocity errors Δẏ and ẏ_r, are concurrently employed, giving rise to asymptotic convergence of both Δẏ and Δy, in contrast with the AVSKC (24). Furthermore, the parameter uncertainties of the visual kinematics and the dynamics are adaptively compensated by the parameter updating laws (48), (49), and (50). In fact, the AVSDC (46) can, to some extent, be regarded as containing the kinematic control through the design of the kinematic auxiliary variable τ_r.

Numerical Simulations
In order to demonstrate the tracking performance of the AVSKC and AVSDC schemes, simulation studies are carried out. As in Figure 1, a two-wheeled mobile robot with a camera at a fixed location is considered.

Trajectory Tracking for AVSKC.
In the first simulation task, we address the AVSKC scheme under the parallel and unparallel camera cases. Assume that one feature point is marked on the WMR and that the distance d is set to 0.3 m. The simulated parameters of the perspective projection matrix in [8,25] are given in Table 1, where f denotes the focal length of the camera, u_k and v_k denote the scale factors of the two image axes, u_o and v_o are the coordinates of the principal point, and ε is the included angle between the image coordinate axes and is assumed to be known; the rotation matrix in the transformation matrix is set as Rot⁻¹(x, π) in the parallel camera case and Rot⁻¹(x, π − π/10) in the unparallel camera case, respectively. Note that these parameters are only used to construct the simulated model and are unavailable to the control design. For simplicity, the detailed expressions of the system model in (6) and (7) and Property 1 can be found in [8,35]. The control gain and adaptation gains are set as α = 10 and Φ_a = Φ_b = 10000, respectively. The bounds in the parameter projection are designed around the true values (e.g., 0.6ϕ_{a,i} for the lower bound of ϕ_{a,i}), and the initial parameters are ϕ̂_{a,i}(0) = 0.8ϕ_{a,i} and ϕ̂_{b,i}(0) = 0.8ϕ_{b,i}, respectively. The initial states of the WMR are θ_1(0) = θ_2(0) = θ_3(0) = 0, and the reference image trajectory is given in (56), with one coordinate of the form 10 cos(t) + 340 pixels.
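The perspective projection used to build the simulated model can be sketched as follows. The intrinsics (f, scale factors, principal point) and the camera height are hypothetical stand-ins for the Table 1 values, which are not reproduced here; the tilt Rot(x, π − π/10) matches the unparallel camera case.

```python
import numpy as np

# Pinhole projection of a ground-plane point through a tilted ceiling
# camera.  All numeric values (f, ku, kv, u0, v0, camera height) are
# hypothetical; only the structure mirrors the simulated model.
f, ku, kv, u0, v0 = 0.008, 50000.0, 50000.0, 320.0, 240.0
K = np.array([[f * ku, 0.0, u0],
              [0.0, f * kv, v0],
              [0.0, 0.0, 1.0]])        # intrinsic matrix (zero skew)

def rot_x(a):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])

R = rot_x(np.pi - np.pi / 10)          # unparallel camera case
t = np.array([0.0, 0.0, 3.0])          # camera 3 m above the floor (assumed)

def project(p_world):
    p_cam = R @ p_world + t            # world frame -> camera frame
    uvw = K @ p_cam                    # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]            # divide by the depth uvw[2]

y = project(np.array([0.1, 0.2, 0.0])) # a feature point on the floor
```

Note that the depth uvw[2] changes as the robot moves, which is exactly why the controllers must handle a non-constant, uncalibrated depth in the unparallel case.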

Parallel Camera Case (PCC).
In this case, the image plane is parallel to the operation plane, and the initial states of the WMR are set as given above. The graphs in Figures 2(a)-2(c) demonstrate the corresponding simulation results, from which we can observe that the real trajectory converges to the reference trajectory in about 2.5 s. Note that, in this case, the desired velocity ẏ_d is time-varying; however, a smooth real trajectory and bounded states are still achieved.

Unparallel Camera Case (UPCC).
In this case, the camera is placed at an orientation unparallel to the operation plane. The initial states of the WMR are the same as in the PCC. The simulation results are depicted in Figures 2(d)-2(f). Note that, in Figures 2(e) and 2(b), the initial points on the image plane do not coincide since the camera parameters take different values. Moreover, the image errors asymptotically converge to zero as expected, verifying the effectiveness of the AVSKC scheme.

Trajectory Tracking for AVSDC.
In this subsection, we test the tracking performance of the AVSDC scheme in a line-tracking case and a circle-tracking case. Due to space limitations, the parametric dynamics model of the WMR is omitted; its detailed expressions are given in [38], and the initial value of the dynamics estimate is set as ϕ̂_d(0) = 0.8ϕ_d. The designed gains are set as c = 5, β_τ = 5, β_y = 10, and Φ_d = 1000; apart from this, all the simulated model and system parameters are the same as in the UPCC.

Tracking Line Case (TLC).
The reference line on the image plane is given in this case such that the desired velocity is constant. Based on the theoretical analysis of Theorem 2, the real trajectory on the image plane asymptotically converges to the desired trajectory in both position and velocity, as confirmed by the simulation results in Figures 3(a)-3(c).

Tracking Circle Case (TCC).
In this case, the desired trajectory is chosen as in the UPCC, and the external disturbance f = [3 sin(t), 3 cos(t)]^T is applied to the robot dynamics. The time histories of the corresponding results are plotted in Figures 3(d)-3(f). As predicted by Theorem 2, both Δy and Δẏ asymptotically converge to zero in about 1 s even under the influence of the external interference. Furthermore, faster responses are obtained compared with the simulation results of the AVSKC scheme (see Figures 2(d) and 3(d)).

Conclusions
Two uncalibrated visual servoing control schemes for the wheeled mobile robot were developed from different perspectives, namely, kinematic control and dynamic control. By utilizing the linear parameterization of the visual kinematics and robot dynamics, image-based tracking control laws (i.e., AVSKC and AVSDC) together with parameter adaptation algorithms were proposed to realize asymptotic convergence of the image errors without knowledge of the visual model and robot parameters. Furthermore, the overparameterization problem is avoided by exploiting the structure of the depth-independent interaction matrix, yielding lower-dimensional regressor matrices and a simple configuration of the parameter adaptation laws. It was proven via Lyapunov theory that both the AVSKC and AVSDC schemes achieve global stability of the closed-loop system. Lastly, numerical simulations were carried out to confirm the performance of the AVSKC and AVSDC.
In this paper, we assumed that the image trajectory is given in advance, and the external forces on the robot system in a complex environment were not fully considered. Furthermore, the applicability of visual servoing WMR control is worth further exploration. Thus, future work encompasses deterministic learning and accurate identification of system dynamics [41], WMR control with actuator constraints [42], and obstacle avoidance [43][44][45].

Data Availability
No underlying data are included.

Conflicts of Interest
The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.