Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

This work presents a stability analysis and an experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in the camera orientation. The key feature of the analysis is a strict Lyapunov function that allows asymptotic stability to be concluded without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic Proportional Derivative control algorithm. Experiments on a laboratory prototype show that uncertainty in the camera orientation does not significantly degrade closed-loop performance.


Introduction
The high stiffness and high speed of parallel robots make them attractive for several applications, including manufacturing lines [1], the design of climbing robots [2] and haptic devices [3]. Moreover, the application of Visual Servoing techniques for parallel robot control is a viable alternative to schemes based on solving forward kinematics, which may not have an analytical solution. Visual Servoing is an active research area [4], [5], [6], [7], [8], [9] and has many potential applications, including micromanipulation [10], medical robotics [11], [12], human-robot cooperation [13] and active fluid flow control [14].
Planar parallel robots are an important class of parallel robots [28] and have also been the focus of several publications. In many cases, the control algorithms applied to this kind of robot use the forward kinematics for estimating the end-effector position [29], [30], [31], [32], [33], [34], [35], [36], [37]. Alternatively, some approaches use a vision system in the feedback loop for the same purpose [38], [39], [40]. A number of Visual Servoing methodologies developed for open-chain and parallel planar robot manipulators assume knowledge of the matrix that describes the rotation between the camera and the robot planes [41], [42], [43], [44]; however, this matrix may not be exactly known. Interestingly, [43], which uses a linear model for the closed-loop system and the first Lyapunov method, shows that the control algorithm proposed there produces a closed-loop system that is robust with respect to camera rotation uncertainties.
For open-chain robots, several methods tackle the uncertainty problem by using adaptive techniques [45], [46], [47]. Despite being effective in that setting, no publication applies these approaches to planar parallel robots; moreover, adaptive controllers are complex, difficult to tune and may generate transient responses with excessive overshoot.
An alternative to adaptive controllers explored here is the use of an algorithm proposed in [48], similar to the classic Proportional Derivative (PD) control law; that algorithm shares the main features of the PD controller (i.e., tuning is relatively straightforward, the derivative action shapes the transient response and the proportional action sets the closed-loop bandwidth). Moreover, the computational burden associated with this controller is small and it is well-suited for set-point tracking tasks. [48] describes an approach close to that presented in this work; there exist, however, several differences. First, the method described in [40] resorts to active joint position measurements for generating damping. Second, as mentioned in the preceding paragraphs, it relies on exact knowledge of the rotation matrix. Finally, its stability analysis uses the Barbashin-Krassovsky-LaSalle theorem for concluding asymptotic stability. The present work takes the algorithm described in [48] under the assumption that only an estimate of the rotation matrix is known. Moreover, instead of relying on the Barbashin-Krassovsky-LaSalle invariance theorem, the stability analysis presented here employs a strict Lyapunov function, so that the second Lyapunov method allows the asymptotic stability of the closed-loop system to be concluded.

The paper is organized as follows. Section 2 presents the model of the robotic system, constituted by the parallel robot model and the vision system. Section 3 describes the controller and presents the stability analysis. Section 4 shows the experiments conducted on a laboratory prototype. Finally, the concluding remarks review the findings. In the sequel, the notation $\|\cdot\|$ stands for the Euclidean norm of matrices and vectors.

Figure 1 depicts the schematic of a redundant planar parallel robot including a reference frame.
Three servo motors drive the joints $A_1$, $A_2$ and $A_3$; the joints $P_1$, $P_2$ and $P_3$ are passive, i.e., they are not actuated. The end-effector corresponds to the common joint, whose position is denoted by $X$. According to [36], the Lagrange-D'Alembert formulation yields a simple scheme for computing the dynamics of redundant parallel robots; this approach uses the equivalent open-chain mechanism shown in Figure 2. Thus, the dynamics of the parallel robot are equivalent to the dynamics of the open-chain system together with a set of loop constraints.
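The equivalence between the parallel robot and an open-chain mechanism plus loop constraints can be made concrete with a small numeric sketch. For a planar 3-RRR arrangement, each leg is a 2-link open chain, and the loop constraints state that all three leg tips coincide at the end-effector position. The geometry below (base positions, link lengths) is hypothetical and does not correspond to the prototype:

```python
import numpy as np

# Hypothetical 3-RRR geometry (not the prototype's dimensions)
L1, L2 = 0.2, 0.2
bases = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([0.25, 0.45])]

def leg_ik(base, X):
    """Elbow-up inverse kinematics of one 2-link leg reaching X."""
    d = X - base
    r = np.linalg.norm(d)
    cos_q2 = (r**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    q2 = np.arccos(np.clip(cos_q2, -1.0, 1.0))
    q1 = np.arctan2(d[1], d[0]) - np.arctan2(L2*np.sin(q2), L1 + L2*np.cos(q2))
    return q1, q2

def leg_fk(base, q1, q2):
    """Forward kinematics of one 2-link leg (tip position)."""
    return base + L1*np.array([np.cos(q1), np.sin(q1)]) \
                + L2*np.array([np.cos(q1 + q2), np.sin(q1 + q2)])

X = np.array([0.25, 0.18])   # a reachable end-effector position
for base in bases:
    q1, q2 = leg_ik(base, X)
    # Loop-closure constraint: every leg's tip coincides with X
    assert np.allclose(leg_fk(base, q1, q2), X, atol=1e-9)
print("loop constraints satisfied by all three legs")
```

Each leg is treated exactly as an independent open chain; the three closure equations are what turn the collection of open chains back into the parallel mechanism.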

Modelling of a redundant planar parallel robot
The well-known Euler-Lagrange formalism [49] allows the modelling of the open-chain system when the parallel robot moves in the horizontal plane:

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} = \tau \qquad (2)$$

The terms $M$ and $C$ denote the inertia matrix and the Coriolis and centripetal force matrix, respectively, $q$ is the vector of joint coordinates and $\tau$ is the vector of joint torques. Therefore, the open-chain system (2), together with the holonomic loop constraints, is equivalent to the original parallel robot.
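The horizontal-plane model (2) has no gravity term, so with zero input torque the kinetic energy $\tfrac{1}{2}\dot{q}^\top M(q)\dot{q}$ must be conserved; this is a standard sanity check for any $M$/$C$ implementation. The sketch below uses a generic 2-link arm with made-up parameters (not the prototype's):

```python
import numpy as np

# Hypothetical 2-link parameters (illustration only, not the prototype)
m1, m2 = 1.0, 0.8      # link masses [kg]
l1 = 0.3               # link-1 length [m]
lc1, lc2 = 0.15, 0.12  # centre-of-mass distances [m]
I1, I2 = 0.02, 0.015   # link inertias [kg m^2]

a1 = I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2)
a2 = m2*l1*lc2
a3 = I2 + m2*lc2**2

def M(q):
    """Inertia matrix of the 2-link horizontal-plane arm."""
    c2 = np.cos(q[1])
    return np.array([[a1 + 2*a2*c2, a3 + a2*c2],
                     [a3 + a2*c2,   a3]])

def C(q, qd):
    """Coriolis/centripetal matrix in Christoffel form."""
    s2 = np.sin(q[1])
    return np.array([[-a2*s2*qd[1], -a2*s2*(qd[0] + qd[1])],
                     [ a2*s2*qd[0],  0.0]])

def f(x, tau):
    """State derivative of M(q) qdd + C(q, qd) qd = tau."""
    q, qd = x[:2], x[2:]
    qdd = np.linalg.solve(M(q), tau - C(q, qd) @ qd)
    return np.concatenate([qd, qdd])

def rk4_step(x, tau, h):
    k1 = f(x, tau); k2 = f(x + h/2*k1, tau)
    k3 = f(x + h/2*k2, tau); k4 = f(x + h*k3, tau)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Unforced motion in the horizontal plane conserves kinetic energy
x = np.array([0.4, -0.7, 1.0, -0.5])
E0 = 0.5 * x[2:] @ M(x[:2]) @ x[2:]
for _ in range(2000):
    x = rk4_step(x, np.zeros(2), 1e-3)
E1 = 0.5 * x[2:] @ M(x[:2]) @ x[2:]
print(abs(E1 - E0))  # ~0: only numerical integration drift remains
```

Any sign or indexing error in $C$ shows up immediately as a systematic energy drift instead of round-off noise.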
It is worth mentioning that model (2) describes the parallel robot dynamics in terms of the passive and active joint coordinates. However, since the proposed control law uses visual measurements of the robot end-effector position $X$, it is more convenient to have a model in terms of these coordinates. To this end, the following proposition shows a key relationship between the vector $\tau$ and the active joint torques $\tau_a$. This equality is instrumental in obtaining a dynamic model of the parallel robot in terms of the end-effector coordinates $X$.

Proposition 1
Assume that $\tau$ is the joint torque of the equivalent open-chain system (2) for a given motion, and that $\tau_a$ is the active joint torque of the redundant closed-chain system required to generate the same motion. Then both torques are related as follows:

$$S^\top \tau_a = W^\top \tau \qquad (3)$$

where $W$ and $S$ are Jacobian matrices defined below.
Proof: See [36].

Expressions (4) and (5) define the Jacobian matrices $W$ and $S$ in terms of the robot geometry. The Jacobian matrix $W$ relates the end-effector velocity to the robot joint velocities:

$$\dot{q} = W(X)\,\dot{X} \qquad (6)$$

while the Jacobian matrix $S$ relates the active joint velocity vector to the end-effector velocity, $\dot{q}_a = S(X)\,\dot{X}$. Taking the time derivative of the Jacobian relationship (6) yields:

$$\ddot{q} = W\ddot{X} + \dot{W}\dot{X} \qquad (7)$$

Substituting the input torque of (2) into (3) leads to:

$$S^\top \tau_a = W^\top \left[ M(q)\ddot{q} + C(q,\dot{q})\dot{q} \right] \qquad (8)$$

Substituting (6) and (7) into (8) produces the following dynamic model:

$$M(X)\ddot{X} + C(X,\dot{X})\dot{X} = S^\top \tau_a \qquad (9)$$

where, with a slight abuse of notation, $M(X) = W^\top M(q) W$ and $C(X,\dot{X}) = W^\top \big( M(q)\dot{W} + C(q,\dot{q})W \big)$ denote the inertia and Coriolis matrices expressed in the end-effector coordinates. These matrices satisfy the following structural properties so long as the matrix $W$ is full rank [36]:

Property 1: The matrix $M(X)$ is symmetric and positive definite.
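The torque relationship in Proposition 1 is a virtual-work statement: assuming the velocity maps $\dot{q} = W\dot{X}$ and $\dot{q}_a = S\dot{X}$, any $\tau_a$ satisfying $S^\top\tau_a = W^\top\tau$ delivers the same mechanical power as $\tau$ along every end-effector velocity. The check below uses random full-rank placeholder matrices standing in for the geometry-dependent Jacobians (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder quantities for one robot configuration; W and S are random
# full-rank matrices, not the geometry-dependent Jacobians of the 3-RRR.
A = rng.standard_normal((6, 6))
M = A @ A.T + 6.0*np.eye(6)        # joint-space inertia, SPD by construction
W = rng.standard_normal((6, 2))    # q_dot  = W X_dot
S = rng.standard_normal((3, 2))    # qa_dot = S X_dot

# Task-space inertia W'MW is symmetric positive definite (Property 1)
M_bar = W.T @ M @ W
assert np.allclose(M_bar, M_bar.T)
assert np.all(np.linalg.eigvalsh(M_bar) > 0)

# One solution of S' tau_a = W' tau (a pseudoinverse-type choice)
tau = rng.standard_normal(6)
tau_a = S @ np.linalg.solve(S.T @ S, W.T @ tau)
assert np.allclose(S.T @ tau_a, W.T @ tau)

# Virtual-work consistency: both torque vectors deliver the same power
# along any end-effector velocity X_dot.
Xd = rng.standard_normal(2)
assert np.isclose(tau_a @ (S @ Xd), tau @ (W @ Xd))
print("torque mapping consistent with the virtual-work argument")
```

The pseudoinverse-type choice of $\tau_a$ is only one of infinitely many solutions; redundancy in actuation is precisely what makes the solution set non-unique.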

Property 2: The matrix $\dot{M}(X) - 2C(X,\dot{X})$ is skew-symmetric.

Property 3: There exists a positive constant $k_C$ such that:

$$\|C(X,\dot{X})\| \le k_C \|\dot{X}\| \qquad (10)$$
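Property 1, Property 3 and the skew-symmetry property invoked as Property 2 in the stability analysis can all be checked numerically. The sketch below does so for a generic 2-link horizontal-plane model with made-up parameters (not the 3-RRR prototype), computing $\dot{M}$ by central differences:

```python
import numpy as np

# Hypothetical 2-link model in standard lumped-parameter form; used only
# to illustrate the structural properties, not the 3-RRR dynamics.
a1, a2, a3 = 0.12, 0.03, 0.02

def M(q):
    c2 = np.cos(q[1])
    return np.array([[a1 + 2*a2*c2, a3 + a2*c2],
                     [a3 + a2*c2,   a3]])

def C(q, qd):
    s2 = np.sin(q[1])
    return np.array([[-a2*s2*qd[1], -a2*s2*(qd[0] + qd[1])],
                     [ a2*s2*qd[0],  0.0]])

rng = np.random.default_rng(1)
kC = 2.0 * a2                 # a valid bound constant for this model
for _ in range(100):
    q, qd = rng.uniform(-np.pi, np.pi, 2), rng.uniform(-5.0, 5.0, 2)
    # Property 1: M symmetric positive definite
    assert np.all(np.linalg.eigvalsh(M(q)) > 0)
    # Property 2: Mdot - 2C skew-symmetric (Mdot via central differences)
    eps = 1e-6
    Mdot = (M(q + eps*qd) - M(q - eps*qd)) / (2.0*eps)
    N = Mdot - 2.0*C(q, qd)
    assert np.allclose(N, -N.T, atol=1e-6)
    # Property 3: ||C(q, qd)|| <= kC * ||qd||
    assert np.linalg.norm(C(q, qd), 2) <= kC*np.linalg.norm(qd) + 1e-12
print("Properties 1-3 hold at 100 random states")
```

The skew-symmetry check only works when $C$ is written in Christoffel form, which is also the form needed for the Lyapunov arguments later in the paper.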

Modelling of the vision system
Consider the redundant planar parallel robot described previously. Figure 3 shows the vision system configuration: a fixed camera observes the robot plane, and the end-effector position in the image coordinate frame is denoted by $X_i$. Image-processing algorithms allow the estimation of $X_i$; this estimate feeds the control algorithm without further processing. This latter feature is common to all image-based Visual Servoing algorithms and permits the avoidance of explicit camera calibration procedures.
Using the perspective camera model [4], [43], the end-effector position $X$, given in the robot coordinate frame, can be described in terms of the image coordinate frame:

$$X_i = \alpha h\, R(\theta)\left[ X - O_c \right] + C_i \qquad (11)$$

where $R(\theta)$ denotes the rotation, by an angle $\theta$, between the camera and robot planes, the vector $O_c$ is the intersection of the optical axis with the robot plane, and $C_i$ is the image centre. The scalar $\alpha$ is a negative number called the scale factor, whose units are pixels/m, and $h$ is the magnification factor defined as:

$$h = \frac{f}{f - z} \qquad (12)$$

The parameter $f$ is the camera focal length and $z$ is the distance between the camera and the robot plane. The time derivative of (11) gives the end-effector linear velocity in terms of the image coordinate frame:

$$\dot{X}_i = \alpha h\, R(\theta)\dot{X} \qquad (13)$$

The next relationship describes the desired end-effector position in the image coordinate frame in terms of the desired end-effector position $X^{*}$, given in the robot coordinate frame:

$$X_i^{*} = \alpha h\, R(\theta)\left[ X^{*} - O_c \right] + C_i \qquad (14)$$

The visual distance between the end-effector image position and the desired end-effector image position (see Figure 5) defines the image position error $\tilde{X}_i$:

$$\tilde{X}_i = X_i - X_i^{*} \qquad (15)$$

Assuming a constant desired position and taking the time derivative of the image position error yields:

$$\dot{\tilde{X}}_i = \alpha h\, R(\theta)\dot{X} \qquad (16)$$

The matrix $S$ loses rank if the parallel robot reaches a singular configuration; in the sequel, the matrix $S$ is assumed to have full rank.
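Because the motion is planar, the perspective map (11) is an invertible affine map, which a short numeric sketch makes explicit. All camera parameters below are made-up stand-ins, not calibrated values from the experimental setup:

```python
import numpy as np

def R(theta):
    """Planar rotation between the robot and image frames."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical camera parameters (illustration only)
alpha = -72000.0          # scale factor [pixels/m], negative by convention
f, z = 0.008, 1.2         # focal length and camera-to-plane distance [m]
h = f / (f - z)           # magnification factor
theta = np.deg2rad(20.0)  # camera orientation
O_c = np.array([0.05, -0.03])   # optical-axis intersection with the plane
C_i = np.array([320.0, 240.0])  # image centre [pixels]

def to_image(X):
    """Perspective map of a robot-plane point to image coordinates."""
    return alpha * h * R(theta) @ (X - O_c) + C_i

def to_plane(Xi):
    """Inverse map (well defined because the motion is planar)."""
    return R(theta).T @ (Xi - C_i) / (alpha * h) + O_c

X = np.array([0.10, 0.07])
Xi = to_image(X)
print(np.allclose(to_plane(Xi), X))   # True: the planar map is invertible
```

Note that $\alpha < 0$ and $h < 0$ for an object beyond the focal length, so the product $\alpha h$ acting on the error dynamics is positive; this sign bookkeeping matters when choosing controller gain signs.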
Replacing $\tau_a$, given by (18), into (9) yields:

$$M(X)\ddot{X} + C(X,\dot{X})\dot{X} = u \qquad (20)$$

Therefore, the control laws discussed below are defined in terms of the control signal $u$, bearing in mind that the computation of the active torques $\tau_a$ requires the use of relationship (18).

Proposed control law
First, consider the following visual servoing PD control law [48], given by the control signal (21) and the velocity-filter dynamics (22), where $k_1$, $k_2$ and the filter parameter are positive constants. The fact that the camera orientation $\theta$ is unknown precludes the use of the control law (21) and (22). Assume instead that only an estimate $\theta_e$ of $\theta$ is known; this permits rewriting (21) and (22) as (25) and (26). Hence, (18) allows the computation of the active control torque $\tau_a$. Substituting the resulting control signal into (20) leads to the closed-loop dynamics (27). Adding and subtracting the right-hand side of (21) to the right-hand side of (27) yields (28), where the term $H$, given by (29), collects the mismatch introduced by the orientation estimation error. The first term on the right-hand side of (28) corresponds to the nominal closed-loop dynamics obtained when $\theta_e = \theta$.
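The behaviour just described can be illustrated with a simplified simulation: a velocity-free PD loop, with a first-order (dirty-derivative) filter standing in for the unavailable velocity measurement, acts through the estimated rotation $R(\theta_e)^\top$ while the camera applies the true $R(\theta)$. The plant (a unit-mass double integrator), the gains and the camera gain are all illustrative stand-ins, not the paper's laws (21)-(26) or its prototype:

```python
import numpy as np

def R(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# Illustrative values only (assumptions, not the paper's tuning)
g = 400.0                       # combined camera gain alpha*h [pixels/m]
k1, k2, lam = 4.0, 4.0, 20.0    # proportional, derivative, filter gains
theta = np.deg2rad(30.0)        # true (unknown) camera orientation
theta_e = 0.0                   # controller's estimate: 30 deg of error

Xstar = np.array([0.10, -0.05])       # desired end-effector position [m]
Xi_d = g * R(theta) @ Xstar           # desired position in the image
X, V, z = np.zeros(2), np.zeros(2), np.zeros(2)
dt = 1e-3

e0 = np.linalg.norm(g * R(theta) @ X - Xi_d)   # initial image error [px]
for _ in range(10000):                          # 10 s of simulated time
    e = g * R(theta) @ X - Xi_d                 # measured image error
    v = lam * (e - z)                           # dirty-derivative of e
    # PD action mapped through the *estimated* rotation only
    u = -(1.0 / g) * R(theta_e).T @ (k1*e + k2*v)
    z += dt * lam * (e - z)                     # filter state update
    V += dt * u                                 # unit-mass task-space plant
    X += dt * V

e1 = np.linalg.norm(g * R(theta) @ X - Xi_d)
print(e0, e1)   # the image error shrinks despite the 30 deg mismatch
```

The effective closed loop sees the residual rotation $R(\theta - \theta_e)$, so moderate orientation errors only rotate (and slightly weaken) the PD action rather than destroying stability.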

Stability analysis
Defining the state vector $(\tilde{X}_i, \dot{X}, \tilde{\xi})$ allows for an alternative description of the closed-loop system (26) and (28), namely the state equation (31). The origin is an equilibrium point of the closed-loop system (31), and at this coordinate the term $H$ in (29) vanishes. The following proposition states the main result of this section.

Proposition 2: Consider the controllers (25) and (26) in closed loop with the robot and camera models described above; then, the origin of the closed-loop system (31) is an asymptotically stable equilibrium.

Proof:
The following expression is an alternative writing of the Lyapunov function candidate (32). This function is radially unbounded and positive definite provided that the terms (33) and (34) are radially unbounded positive definite functions of $\tilde{X}_i$ and $\tilde{\xi}$, respectively. Standard bounds show that the terms (33) and (34) satisfy inequalities from which it is clear that, for a sufficiently large value of the tuning constant, both are positive definite functions.
Therefore, the Lyapunov function candidate (32) is radially unbounded and positive definite.
The following expression corresponds to the time derivative of (32) along the trajectories of the closed-loop system (31). Using Property 2 and after some simplifications, we obtain (35). Property 3 and the bound (30) allow several terms to be bounded; these bounds permit, in turn, the bounding of the time derivative (35). In order to obtain a compact writing of the resulting inequality (36), a suitable vector and matrices are defined; the inequality then shows that the time derivative is negative definite, and therefore the equilibrium of the closed-loop system (31) is asymptotically stable.
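The strict-Lyapunov construction itself depends on equations not reproduced here, but its conclusion can be illustrated numerically on a linear analogue: a rotated PD loop with a first-order velocity filter (structure and gains are assumptions, not the paper's (32)). For each tested orientation error, solving the Lyapunov equation $A^\top P + PA = -I$ produces a symmetric positive definite $P$, i.e., a strict quadratic Lyapunov function $V(x) = x^\top P x$ for that linear system:

```python
import numpy as np

def Rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# Illustrative gains (stand-ins, not the paper's controller)
k1, k2, lam = 4.0, 4.0, 20.0
I2, Z2 = np.eye(2), np.zeros((2, 2))

for err_deg in (0.0, 15.0, 30.0, 45.0):
    Rt = Rot(np.deg2rad(err_deg))   # residual rotation R(theta - theta_e)
    # States (e, e_dot, z): e_ddot = -Rt[(k1 + k2*lam) e - k2*lam z],
    # z_dot = lam (e - z)  -- the dirty-derivative closed loop.
    A = np.block([
        [Z2,                 I2, Z2],
        [-(k1 + k2*lam)*Rt,  Z2, (k2*lam)*Rt],
        [lam*I2,             Z2, -lam*I2],
    ])
    assert np.max(np.linalg.eigvals(A).real) < 0   # Hurwitz
    # Solve A'P + PA = -I via Kronecker vectorization; the unique
    # solution is SPD, so V(x) = x'Px is a strict Lyapunov function.
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -np.eye(n).reshape(-1)).reshape(n, n)
    assert np.allclose(A.T @ P + P @ A, -np.eye(n), atol=1e-8)
    assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
print("strict quadratic Lyapunov function found up to 45 deg of error")
```

Because the Lyapunov equation residual is the identity, $\dot{V} = -\|x\|^2$ is strictly negative away from the origin; no invariance argument is needed, mirroring the role of the strict Lyapunov function in the paper's analysis.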

Experimental Results
The 3-RRR redundant parallel manipulator shown in Figure 6 serves as a test bench for evaluating the proposed visual controller. In order to evaluate the performance of the proposed control laws (25) and (26), two sets of experiments were conducted: the first with an accurate estimate of the camera orientation and the second under uncertainty in the camera orientation. For the first set of experiments, Figure 9 shows the time evolution of the end-effector coordinates, Figure 10 depicts the corresponding image position errors and Figure 11 portrays the active joint torque signals.
The above results show that the closed-loop response exhibits almost no oscillations; the position errors remain around 1 pixel and the active torque signals do not exhibit a high level of high-frequency components. For the second set of experiments, the end-effector response shown in Figure 12 has essentially the same behaviour as that observed in the first set; a similar observation holds for the position errors shown in Figure 13 and the active joint torques portrayed in Figure 14. It is clear from the results of both sets of experiments that the uncertainty in the camera orientation does not have a significant impact on the closed-loop response, with only a slight increment in the position error.

Conclusions
The theoretical and experimental results discussed in the previous sections show that the visual control algorithm studied here, which essentially behaves like a classic PD algorithm, is able to withstand uncertainties in the camera orientation. The algorithm does not rely on velocity measurements and has low computational requirements. On the theoretical side, a key element in the stability analysis is a strict Lyapunov function that includes the camera rotation matrix; this function allows asymptotic stability to be established without relying on invariance-based stability theorems. On the experimental side, the results obtained with a laboratory prototype show that the closed-loop performance remains essentially the same when the camera rotation is unknown to the controller.