
ORIGINAL RESEARCH article

Front. Energy Res., 23 June 2023
Sec. Smart Grids
Volume 11 - 2023 | https://doi.org/10.3389/fenrg.2023.1203904

Visual-admittance-based model predictive control for nuclear collaborative robots

Jun Qi1,2,3, Zhao Xu1, Jiru Chu1*, Minglei Zhu3 and Yunlong Teng4
  • 1China Nuclear Power Engineering Co., Ltd., Beijing, China
  • 2School of Automation, Chengdu University of Information Technology, Chengdu, China
  • 3School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
  • 4Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China (UESTC), Shenzhen, China

This paper presents a novel visual-admittance-based model predictive control scheme to cope with the problem of vision/force control and several constraints of a nuclear collaborative robotic visual servoing system. A visual-admittance model considering the desired image feature and the force command in the image feature space is proposed. Moreover, a novel constraint scheme for the model predictive control (MPC) is proposed that cancels the overshoot of the interaction force control in most cases by taking the desired force command as a constraint of the proposed MPC. By applying the robotic dynamic image-based visual servoing (IBVS) model, other constraints, such as actuator saturation, joint angle limits, and visual limits, can be satisfied simultaneously. Simulation results for a two-degrees-of-freedom (DOF) robot manipulator with an eye-to-hand camera are presented to demonstrate the effectiveness of the proposed controller.

1 Introduction

In recent years, collaborative robots have become increasingly popular in the nuclear industry owing to the growing need for safe and secure nuclear power plants (NPPs). Interaction between the robot and the environment or a human is a fundamental requirement in the construction of NPPs, for tasks such as moving objects, deburring, grinding, polishing, and high-precision assembly (Nabat et al., 2005; Yang et al., 2008; Xu et al., 2017; Wu et al., 2017; Song et al., 2022); see Figure 1. Realizing precise position control and compliant interaction force control simultaneously in the nuclear environment is crucial for the execution of an interaction task.

FIGURE 1. Application of collaborative robots in nuclear power plants (NPPs).

To cope with this problem, numerous force control schemes have been proposed. These schemes fall into two main categories: direct and indirect force control. Direct force control adds the feedback of the force sensor directly into the closed motion control loop. For example, in the hybrid position/force control approach, the force and position controllers work separately and are connected through a diagonal selection matrix (Raibert and Craig, 1981). However, these kinds of schemes lead to a trade-off between the force controller and the position controller, which implies that neither force nor position can precisely converge to its desired target. Indirect force control includes impedance control and admittance control. Both rely on a mass–spring–damper (MSD) model to describe the interaction force between the robot and the environment (Mason, 1981; Hogan, 1985) and are the inverse of each other. When the robot is controlled with a conventional model-based controller, precise information about the interaction environment and an accurate dynamic model of the robot are essential for compliant force control.

In the nuclear environment, where radiation can affect sensors and electronics, obtaining accurate knowledge of the interaction environment and reliable robot feedback is quite hard. Motivated by reducing or avoiding the need for precise preliminary knowledge of the environment, numerous researchers have tried to equip robot systems with vision and force sensors together. This approach allows the robot system to acquire environmental information and modify the vision/force controller online. In order to combine visual and force information, several vision/force control schemes have been proposed that replace the motion control of the aforementioned force control with visual servoing (Bellakehal et al., 2011; Zhu et al., 2022b). For instance, in the hybrid vision/force control scheme (Zhu et al., 2022b), force and vision are controlled separately, and a trade-off matrix is used to combine the outputs of the two sensor controllers. Thus, these kinds of methods may result in local minima and reduce the robot control precision. In Lippiello et al. (2018), three vision-impedance controllers with camera and force sensor feedback were proposed to realize the physical interaction of a dual-arm unmanned aerial manipulator. However, such impedance control schemes can neither combine vision and force sensing simultaneously nor avoid the trade-off between the two control loops.

To handle this problem, we propose a novel visual-admittance-based model to drive the robot with the trajectory of vision and the command of contact force. In this approach, the vision and force can be coupled in the image feature space successfully. Therefore, the convergence to a local minimum can be avoided, and the discrepancy between the modalities of vision and force sensors can be overcome.

However, because force and vision are combined in the image feature space, the controller only tracks the image feature commands generated from the aforementioned vision-admittance model, without real-time perception of the contact force. This may lead to unacceptable overshoot and thus damage the interaction environment or the robot body. Therefore, we propose to consider the force command as a contact force constraint of the vision tracking controller. In practical applications, robot control systems are subject to further constraints, such as actuator saturation and the visual limits that prevent the visual features from leaving the camera view.

Model predictive control (MPC) has been proven to be an efficient optimal control scheme for addressing the disturbances and constraints of a system (Allibert et al., 2010; Deng et al., 2022). Recently, more and more researchers have concentrated on the advantages of MPC and applied this control scheme to the visual control of robots. In contrast to the aforementioned vision/force control schemes, MPC can explicitly handle system constraints and resist disturbances. In Allibert et al. (2010), an MPC-based image-based visual servoing (IBVS) method was designed based on the conventional image Jacobian matrix, taking the constraints on image features into account. In Hajiloo et al. (2016), a robust online MPC scheme based on a tensor product (TP) model of IBVS was proposed to realize the online control of MPC; by simplifying the image Jacobian matrix as a TP model, the computational speed of the MPC is considerably increased. In Fusco et al. (2020b), the image feature acceleration is used to construct the model of the MPC in order to obtain a shorter sensor trajectory in the feature space and better motion in Cartesian space. However, the aforementioned methods only consider image feature constraints and kinematic constraints, such as the joint velocity or acceleration of the robot system, without taking into account the dynamic constraints of the robot system.

In summary, in this paper, a novel visual-admittance-based MPC scheme with non-linear constraints, based on the dynamic IBVS model of the collaborative robot and the desired force constraint, is proposed to address the problem of vision/force control with constraints. The visual-admittance model is formulated to fuse the force and vision commands into image feature commands so as to avoid the trade-off between the two tracking targets (vision and force). Considering the overshoot of the visual-admittance model, force constraints are added to the predictive model of the MPC. Moreover, non-linear input bounds based on the dynamic model of the robot are added to the proposed MPC.

This paper is organized as follows: in Section 2, some preliminaries of the robot dynamic model and IBVS are presented. The vision-admittance-based trajectory generator is introduced to combine the force and visual information in Section 3. Section 4 presents the MPC controller with the non-linear constraints based on the dynamic model of the robot and force constraints. In Section 5, simulations based on a robot manipulator model with an eye-to-hand camera are performed to verify the effects of the proposed control scheme. Finally, the conclusions are given in Section 6.

2 Preliminaries

2.1 Dynamic model of the robot manipulator

The dynamic model of the manipulator robot with n actuators in the task space is often described as an Euler–Lagrange second-order equation (Roy et al., 2018). When the end effector of the robot comes into contact with the environment, the environment exerts an interactive force on the robot system. Considering the external force developing from the contact between the end-effector of the robot and the environment, the dynamic model of a robot can be written as

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G + J^{T}F_{e} = T + T_{d}, (1)

where q, q̇, q̈ ∈ ℝⁿ represent the position, velocity, and acceleration of the actuated joints, respectively. M(q(t)) ∈ ℝⁿˣⁿ is the inertia matrix of the robot. C(q, q̇) denotes the Coriolis–centripetal matrix of the robot. G ∈ ℝ⁶ˣ¹ is the gravitational vector. J is the Jacobian matrix of the robot system. F_e ∈ ℝ⁶ˣ¹ represents the interaction force vector between the robot and the environment. T, T_d ∈ ℝⁿ represent the input vector of the proposed controller and the disturbance of the robotic system, respectively.

2.2 Dynamic model of image-based visual servoing

Image-based visual servoing (IBVS) is a sensor-based control scheme. It uses the camera as the main sensor and regulates the robot directly from measurements in the image.

In this work, an eye-to-hand IBVS system is established to obtain the visual information, and we denote the visual feature estimated by the single fixed camera as s ∈ ℝᵐ. For a point p = (X, Y, Z)ᵀ in Cartesian space, the projection onto the 2D image space is given as

x = X/Z = (x_{s} - c_{x})/p_{x}, \quad y = Y/Z = (y_{s} - c_{y})/p_{y}, (2)

where s = (x, y)ᵀ is the projection of p in the image space and (x_s, y_s)ᵀ are the coordinates of the image point expressed in pixel units. c_x, c_y, p_x, and p_y are the intrinsic parameters of the camera: c_x and c_y denote the coordinates of the principal point, and p_x and p_y are the ratios between the focal length and the pixel size.

Differentiating Eq. 2, the relationship between the time variation of the visual feature vector ṡ and the relative camera–object spatial velocity of the robot ṗ can be written, as described in Mariottini et al. (2007), as follows:

\dot{s} = L_{s}\dot{p}, (3)

where Ls is the interaction matrix related to s (Chaumette and Hutchinson, 2006; Zhu et al., 2022a) given as

L_{s} = \begin{bmatrix} -1/Z & 0 & x/Z & xy & -(1+x^{2}) & y \\ 0 & -1/Z & y/Z & 1+y^{2} & -xy & -x \end{bmatrix}. (4)

The robot spatial velocity ṗ can be transformed to the actuator velocity q̇ with the Jacobian matrix J. Then, Eq. 3 can be written as

\dot{s} = L_{s}\,{}^{c}T_{e}\,J\dot{q}, (5)

where {}^{c}T_{e} denotes the transformation matrix from the end-effector kinematic screw to the camera frame. We define J_{s} = L_{s}\,{}^{c}T_{e}\,J. Equation 5 can then be rearranged as follows:

\dot{s} = J_{s}\dot{q}. (6)
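As a concrete illustration of Eqs. 4–6, the short Python sketch below builds the point-feature interaction matrix and the feature Jacobian J_s. This is not the authors' code: the function names, the identity camera transform, and the numerical values are illustrative assumptions.

```python
# A minimal sketch (not the authors' code): the point-feature interaction
# matrix of Eq. 4 and the feature Jacobian J_s of Eqs. 5-6. The function
# names, the identity camera transform, and the numerical values are
# illustrative assumptions.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix L_s for a normalized image point (x, y) at depth Z (Eq. 4)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def feature_jacobian(L_s, cTe, J):
    """J_s = L_s * cTe * J, mapping joint velocities to feature velocities (Eqs. 5-6)."""
    return L_s @ cTe @ J

# Example: feature velocity s_dot = J_s q_dot for a placeholder 6x2 robot Jacobian.
L_s = interaction_matrix(x=0.1, y=-0.05, Z=2.0)
cTe = np.eye(6)                                   # camera/end-effector twist transform (assumed identity)
J = np.random.default_rng(0).normal(size=(6, 2))  # placeholder robot Jacobian
J_s = feature_jacobian(L_s, cTe, J)
s_dot = J_s @ np.array([0.2, -0.1])               # joint velocities in rad/s
```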

In order to generate the actuator torque, the relationship between the image feature and acceleration of the actuator joint has been formulated in the IBVS dynamic model (Fusco et al., 2020a; Liang et al., 2022).

By differentiating Eq. 3, the second-order interaction model can be demonstrated as

\ddot{s} = L_{s}\ddot{p} + \dot{L}_{s}\dot{p}. (7)

When differentiating Eq. 5, we derive

\ddot{s} = \dot{L}_{s}\,{}^{c}T_{e}\,J\dot{q} + L_{s}\,{}^{c}\dot{T}_{e}\,J\dot{q} + L_{s}\,{}^{c}T_{e}\,\dot{J}\dot{q} + L_{s}\,{}^{c}T_{e}\,J\ddot{q} = J_{s}\ddot{q} + P\dot{q}, (8)

where P = \dot{L}_{s}\,{}^{c}T_{e}\,J + L_{s}\,{}^{c}\dot{T}_{e}\,J + L_{s}\,{}^{c}T_{e}\,\dot{J}.

Substituting Eqs 5, 8 into Eq. 1, the dynamic model of IBVS can be written in the following form:

T = MJ_{s}^{+}\ddot{s} + \hat{C}\dot{s} + G + J^{T}F_{e} - T_{d}, (9)

where \hat{C} = CJ_{s}^{+} - MJ_{s}^{+}PJ_{s}^{+} and (\cdot)^{+} denotes the pseudo-inverse.

Assumption 1: During the control process, the robot and the IBVS controller do not encounter the controller singularity (Zhu et al., 2020). J and Ls are full-rank matrices.
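The torque generator used later (see Figure 2) can be understood as an evaluation of Eq. 9. A minimal Python sketch is given below; it is not the authors' implementation, the argument names are illustrative, and the dynamic terms are assumed to come from the robot model and sensors.

```python
# A minimal sketch of the torque generator implied by Eq. 9, not the authors'
# implementation: given a commanded feature acceleration, it evaluates
# T = M J_s^+ s_ddot + C_hat s_dot + G + J^T F_e - T_d.
import numpy as np

def ibvs_torque(M, C_hat, G, J, J_s, s_ddot_cmd, s_dot, F_e, T_d=None):
    n = M.shape[0]
    T_d = np.zeros(n) if T_d is None else T_d
    Js_pinv = np.linalg.pinv(J_s)   # pseudo-inverse of the feature Jacobian (Assumption 1 holds)
    return M @ Js_pinv @ s_ddot_cmd + C_hat @ s_dot + G + J.T @ F_e - T_d
```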

3 Visual-admittance-based trajectory generator

In this section, a novel visual-admittance-based (VAB) trajectory generator is proposed; it generates a reference trajectory for the robot end-effector online, in the image feature space, from the feedback of the force sensor and the camera. The structure of the robot manipulator system with an eye-to-hand visual servoing system is given in Figure 2.

FIGURE 2. Structure of the visual-admittance-based model predictive control.

The contact dynamic model between the robot end-effector and environment is often considered a second-order MSD model in the Cartesian space (Wu et al., 2022). We assume the contact model as follows:

K_{M}\ddot{e}_{p} + K_{D}\dot{e}_{p} + K_{S}e_{p} = F_{e}, (10)

where e_p = p_r − p_d represents the error between the reference trajectory and the desired trajectory of the robot end-effector in Cartesian space, and K_M, K_D, and K_S denote the positive definite impedance parameters.

Due to the intrinsic technological constraints (such as the need for certain robot intrinsic parameters), the visual/force control cannot be realized directly (Oliva et al., 2021). Therefore, the vision-admittance-based trajectory generator is designed to avoid those limits.

First, after some simple manipulations with Eq. 9, the relationship between image features and external force can be rewritten as follows:

\ddot{s} + \bar{C}\dot{s} + d_{s} + f_{pumi} = f_{u}, (11)

where

\bar{C} = J_{s}M^{-1}\hat{C}, \quad d_{s} = J_{s}M^{-1}(G - T_{d}), \quad f_{pumi} = J_{s}M^{-1}J^{T}F_{e} = J_{sf}F_{e}, \quad f_{u} = J_{s}M^{-1}T, (12)

where f_pumi represents the vector of per-unit-mass/inertia (p.u.m.i) virtual forces applied in the image feature space, corresponding to the Cartesian-space forces acting on the robot, and f_u denotes the projection of the actuator torque onto the image feature space.

Multiplying both sides of Eq. 10 by Js, the relationship between fpumi and the position error ep can be rearranged as follows:

f_{pumi} = J_{s}K_{M}\ddot{e}_{p} + J_{s}K_{D}\dot{e}_{p} + J_{s}K_{S}e_{p}. (13)

Substituting Eqs. 2, 3, and 7 into Eq. 13 and denoting p = P_z s, the impedance model corresponding to the image feature can be rearranged as

f_{pumi} = \hat{K}_{M}\ddot{e}_{s} + \hat{K}_{D}\dot{e}_{s} + \hat{K}_{S}e_{s}, (14)

where

\hat{K}_{M} = J_{s}K_{M}L_{s}^{+}, \quad \hat{K}_{D} = J_{s}\left(K_{D} - K_{M}L_{s}^{+}\dot{L}_{s}\right)L_{s}^{+}, \quad \hat{K}_{S} = J_{s}K_{S}P_{z}, (15)

where e_s = s_r − s_d represents the error between the reference trajectory generated from the interaction force and the desired trajectory. When the contact force on the robot is zero, s_r = s_d; otherwise, the vision-based admittance model generates a reference trajectory that realizes the force control of the system.

When the system is at equilibrium, the error of the image feature acceleration converges to zero. To simplify the admittance model and reduce the computational complexity, we set \ddot{e}_{s} = 0, and the simplified model can be described as

\dot{e}_{s} = -\frac{\hat{K}_{S}}{\hat{K}_{D}}e_{s} + \frac{f_{pumi}}{\hat{K}_{D}}. (16)

By solving Eq. 16, we obtain

e_{s}(t) = s_{0}e^{-A(t)} + e^{-A(t)}\int_{0}^{t}\frac{f_{pumi}}{\hat{K}_{D}}e^{A(\tau)}\,d\tau, (17)

where A(t) = \int_{0}^{t}\frac{\hat{K}_{S}(\tau)}{\hat{K}_{D}(\tau)}\,d\tau and s_0 is the initial state of the image feature. Then, the reference trajectory can be described as

s_{r} = e_{s}(t) + s_{d} = s_{0}e^{-A(t)} + e^{-A(t)}\int_{0}^{t}\frac{f_{pumi}}{\hat{K}_{D}}e^{A(\tau)}\,d\tau + s_{d}. (18)

Considering the desired force command F_d ∈ ℝ⁶, we propose a novel integral force control law (IFCL) to obtain the p.u.m.i virtual force f_pumi as follows:

f_{pumi}(t) = J_{sf}\left(K_{d}\dot{e}_{f} + K_{p}e_{f} + K_{i}\int_{0}^{t}e_{f}(\tau)\,d\tau\right), (19)

where e_f = F_d − F_e, F_d is the predefined desired contact force, and K_d = diag(k_{d1}, ⋯, k_{dn}), K_p = diag(k_{p1}, ⋯, k_{pn}), and K_i = diag(k_{i1}, ⋯, k_{in}) are positive definite diagonal matrices.
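A minimal discrete-time sketch of the VAB trajectory generator is given below, combining the IFCL of Eq. 19 with a forward-Euler integration of the simplified admittance model in Eq. 16 to produce the reference feature of Eq. 18. The class name, the scalar gains, the constant J_sf, and the discretization are illustrative assumptions, not the authors' code.

```python
# A minimal discrete-time sketch (assumed forward-Euler discretization, not the
# authors' code) of the VAB trajectory generator: the IFCL of Eq. 19 produces
# the virtual force f_pumi, and integrating the simplified admittance model of
# Eq. 16 yields the reference feature s_r = e_s + s_d (Eq. 18).
import numpy as np

class VisualAdmittanceGenerator:
    def __init__(self, K_hat_S, K_hat_D, K_p, K_i, K_d, J_sf, dt, n_feat=2):
        self.K_hat_S, self.K_hat_D = K_hat_S, K_hat_D
        self.K_p, self.K_i, self.K_d = K_p, K_i, K_d
        self.J_sf, self.dt = J_sf, dt
        self.e_s = np.zeros(n_feat)      # feature-space admittance error e_s
        self.e_f_int = np.zeros(6)       # integral of the force error
        self.e_f_prev = np.zeros(6)

    def step(self, F_d, F_e, s_d):
        e_f = F_d - F_e                                  # force error
        self.e_f_int += e_f * self.dt
        e_f_dot = (e_f - self.e_f_prev) / self.dt
        self.e_f_prev = e_f
        # Eq. 19: integral force control law mapped into the feature space
        f_pumi = self.J_sf @ (self.K_d * e_f_dot + self.K_p * e_f + self.K_i * self.e_f_int)
        # Eq. 16: simplified admittance dynamics, integrated with forward Euler
        e_s_dot = (-self.K_hat_S * self.e_s + f_pumi) / self.K_hat_D
        self.e_s += e_s_dot * self.dt
        return self.e_s + s_d                            # Eq. 18: reference feature s_r
```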

4 Visual/force predictive controller

4.1 Model predictive control

MPC is an optimal control scheme proposed to solve multi-variable constrained control problems. It determines the best input signal for a system by considering the finite future evolution of the system state. More precisely, it finds an optimal control sequence U_t^* that minimizes a cost function J_m subject to a set of predefined constraints over a finite prediction horizon N_p. The obtained control sequence is described as U_t^* = {u_t^*, …, u_{t+N_c−1}^*}, where u_{t+i}^* is the optimal input signal at step i and N_c ≤ N_p is called the control horizon. The state predictions within the control horizon are calculated using the independent control inputs, while the remaining inputs are kept equal to the last element of the control sequence. The cost function J_m is the sum of the quadratic errors between the predicted states x_{t+i} and the desired states x_{t+i}^*. The optimal problem is stated as follows:

U_{t}^{*} = \arg\min_{U_{t}} J_{m}(t), (20)

with

J_{m}(t) = \sum_{i=1}^{N_{p}-1}e_{t+i}^{T}Q_{s}e_{t+i} + \sum_{i=0}^{N_{c}-1}u_{t+i}^{T}Q_{u}u_{t+i} + e_{N_{p}}^{T}Q_{p}e_{N_{p}}, (21)

which is subjected to the constraints

x_{t+i+1} = f(x_{t+i}, u_{i}), (22a)
u_{t+i} \in U \subseteq \mathbb{R}^{n_{u}}, \quad i = 0, \ldots, N_{c}-1, (22b)
x_{t+i} \in X \subseteq \mathbb{R}^{n_{x}}, \quad i = 1, \ldots, N_{p}, (22c)

where e_{t+i} = x_{t+i} − x_{t+i}^*, and step i corresponds to time iT_s, where T_s is the control sample time. Q_s, Q_u, and Q_p are positive definite matrices that weight the relative importance of the different components of the cost function (Eq. 21). The constraint in Eq. 22a is the predictive model of the MPC and is often chosen as the discrete-time model of the system. U and X are, respectively, the user-defined bounds on the input signal and the system state. Once the optimal problem is solved, only u_t^* is fed to the system, and the obtained control sequence is used to generate the initial control sequence of the next optimal problem. Then, the optimization process is repeated.
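The following Python sketch illustrates how the cost of Eq. 21 can be evaluated over the prediction horizon, with inputs beyond the control horizon held at the last element of the sequence. The function signature and the predict callable (standing for the model of Eq. 22a) are illustrative assumptions rather than the paper's implementation.

```python
# A minimal sketch of evaluating the cost of Eq. 21 over the prediction
# horizon; inputs beyond the control horizon are held at the last element of
# the sequence, and only the first optimal input is applied at each sample.
import numpy as np

def mpc_cost(U, x0, x_ref, predict, Np, Nc, Qs, Qu, Qp):
    """U: flattened sequence of Nc inputs; x_ref[i] is the desired state at step i+1."""
    nu = U.size // Nc
    U = U.reshape(Nc, nu)
    x, cost = x0, 0.0
    for i in range(Np):
        u = U[min(i, Nc - 1)]            # hold the last input beyond the control horizon
        x = predict(x, u)                # discrete predictive model (Eq. 22a)
        e = x - x_ref[i]
        Q = Qp if i == Np - 1 else Qs    # terminal weight on the last predicted error
        cost += e @ Q @ e
        if i < Nc:
            cost += u @ Qu @ u
    return cost
```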

4.2 The predictive model

In this work, we choose the joint configuration, interaction force, and image features as the state vectors of MPC. This allows the controller to explicitly take the image feature and force constraints into account and handle the redundancy during the optimization process. Then, the discrete predictive model of the IBVS robot system can be described as

\begin{bmatrix} q_{t+i+1} \\ \dot{q}_{t+i+1} \\ s_{t+i+1} \\ F_{t+i+1} \end{bmatrix} = \begin{bmatrix} q_{t+i} + \dot{q}_{t+i}\Delta t \\ \dot{q}_{t+i} \\ s_{t+i} + J_{s}\dot{q}_{t+i}\Delta t \\ F_{t+i} + K_{fm}J\dot{q}_{t+i}\Delta t \end{bmatrix} + \begin{bmatrix} \frac{1}{2}\Delta t^{2} \\ \Delta t \\ J_{s}\Delta t \\ K_{fm}J\Delta t \end{bmatrix} u_{i}, (23)

where Δt is the sample time of the MPC, u_i is the actuator acceleration at step i, and K_fm denotes the contact stiffness of the environment model. J_s and J are re-evaluated as the state vector changes (Allibert et al., 2010). q_t, q̇_t, s_t, and F_t are the initial states of the predictive model and can be obtained directly from the sensors of the robot system.
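One step of the predictive model in Eq. 23 can be coded directly, as in the sketch below. It assumes K_fm is a scalar contact stiffness and that J_s and J have been evaluated at the current predicted state; the function name is illustrative.

```python
# A minimal sketch of one step of the discrete predictive model in Eq. 23,
# propagating joint angles, joint velocities, image features, and contact force
# under a constant actuator acceleration u over one sample.
import numpy as np

def predict_step(q, q_dot, s, F, u, J_s, J, K_fm, dt):
    dq = q_dot * dt + dt * u                       # joint motion term used for s and F in Eq. 23
    q_next = q + q_dot * dt + 0.5 * dt ** 2 * u
    q_dot_next = q_dot + dt * u
    s_next = s + J_s @ dq                          # image feature propagation through J_s
    F_next = F + K_fm * (J @ dq)                   # contact force propagation through the stiffness K_fm
    return q_next, q_dot_next, s_next, F_next
```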

4.3 The joint acceleration constraints

The joint acceleration constraints take the physical limits of the actuators into account. Due to the interaction with the environment, the input torque of the system may exceed the physical torque limit of the actuators even while the joint acceleration satisfies the rated acceleration constraints of the actuators. Thus, substituting the predicted state x_{t+i} into Eq. 1, the relationship between the joint acceleration and actuator torque can be rewritten as

u_{t+i} = M^{-1}\left(T_{t+i} - C\dot{q}_{t+i} - G - J^{T}F_{t+i}\right) = M^{-1}T_{t+i} - \Pi_{t+i}. (24)

Then, the joint acceleration constraints of the MPC can be proposed as follows:

\hat{u}_{t+i}^{\min} \le u \le \hat{u}_{t+i}^{\max}, (25a)
\hat{u}_{t+i}^{\min} = M^{-1}T_{t+i}^{\min} - \Pi_{t+i}, (25b)
\hat{u}_{t+i}^{\max} = M^{-1}T_{t+i}^{\max} - \Pi_{t+i}, (25c)

where i = 1, …, N_p, and Π_{t+i} can be generated from the predicted state x_{t+i}. T_{t+i}^{max} and T_{t+i}^{min} represent the upper and lower bounds of the permissible actuator torque, respectively.
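A possible implementation of the state-dependent bounds of Eqs. 24–25 is sketched below; it simply maps the torque limits through the inverse dynamics evaluated at the predicted state. The function and argument names are illustrative.

```python
# A possible implementation (names are illustrative) of the state-dependent
# acceleration bounds of Eqs. 24-25: the torque limits T_min and T_max are
# mapped to joint-acceleration limits through the inverse dynamics.
import numpy as np

def acceleration_bounds(M, C, G, J, q_dot, F, T_min, T_max):
    Pi = np.linalg.solve(M, C @ q_dot + G + J.T @ F)   # Pi_{t+i} = M^{-1}(C q_dot + G + J^T F)
    u_min = np.linalg.solve(M, T_min) - Pi             # Eq. 25b
    u_max = np.linalg.solve(M, T_max) - Pi             # Eq. 25c
    return u_min, u_max
```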

4.4 The terminal constraint (TC)

In this case, the stability of the proposed controller is ensured by a terminal constraint (TC). The TC imposes that the last predicted visual feature be equal to the desired feature. However, a strict equality constraint is difficult to satisfy when solving the optimization problem. Thus, the constraint is converted into an inequality by defining a small enough threshold δ_tc:

\|s_{t+N_{p}} - s_{d}\| - \delta_{tc} \le 0. (26)

4.5 The image feature constraints

In the context of visual servoing, the image feature must remain visible. The following constraint prevents the image feature from leaving the camera field of view:

s_{t+i}^{\min} \le s_{t+i} \le s_{t+i}^{\max}, (27)

where i = 0, …, N_c − 1, and s_{t+i}^{min} and s_{t+i}^{max}, respectively, represent the minimum and maximum bounds of the image feature, in pixels, at sample step i.

4.6 The force constraints

Considering that a possible overshoot of the force control can damage the interaction environment, we set the desired force as a state constraint given as

F_{t+i} \le F_{t+i}^{*} + \delta_{f}, \quad i = 1, \ldots, N_{p}, (28)

where F_{t+i}^* is the desired force at step i and δ_f is the permissible error of the interaction force control.
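For completeness, the constraints of Eqs. 26–28 can be expressed as inequality functions in the g(·) ≤ 0 form, as sketched below; the helper names are illustrative.

```python
# A minimal sketch of the terminal, image feature, and force constraints
# (Eqs. 26-28), written in the g(.) <= 0 convention; names are illustrative.
import numpy as np

def terminal_constraint(s_Np, s_d, delta_tc):
    # Eq. 26: ||s_{t+Np} - s_d|| - delta_tc <= 0
    return np.linalg.norm(s_Np - s_d) - delta_tc

def feature_constraints(s, s_min, s_max):
    # Eq. 27: keep the predicted features inside the camera field of view
    return np.concatenate([s_min - s, s - s_max])

def force_constraint(F, F_des, delta_f):
    # Eq. 28: the predicted force may exceed the desired force by at most delta_f
    return F - (F_des + delta_f)
```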

5 Simulation

In this section, to demonstrate the control performance of the proposed visual-admittance-based model predictive controller, several simulations are carried out on a two-link manipulator with an eye-to-hand camera. A force sensor is mounted on the end-effector of the robot, and the robot is controlled to track the predefined force and image feature trajectories.

5.1 Simulation design

In this simulation, we choose a two-link robot manipulator as the control object of the proposed controller. The structure of the two-link robot manipulator with the eye-to-hand camera configuration is given in Figure 3. The origin of the robot system is located at P_b = (0, 0, 0) m, and the dynamic model of this robot is given as

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G + J^{T}F_{e} = T, (29)

with

M(q) = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}, \quad C(q,\dot{q}) = \begin{bmatrix} -p_{1}\dot{q}_{2} & -p_{1}(\dot{q}_{1}+\dot{q}_{2}) \\ p_{1}\dot{q}_{1} & 0 \end{bmatrix}, \quad G(q) = \begin{bmatrix} p_{2}g\cos q_{1} + p_{3}g\cos(q_{1}+q_{2}) \\ p_{3}g\cos(q_{1}+q_{2}) \end{bmatrix}, \quad J(q) = \begin{bmatrix} -l_{1}\sin q_{1} - l_{2}\sin q_{s} & -l_{2}\sin q_{s} \\ l_{1}\cos q_{1} + l_{2}\cos q_{s} & l_{2}\cos q_{s} \end{bmatrix},

where,

M_{11} = m_{1}l_{1c}^{2} + m_{2}(l_{1}^{2} + l_{2c}^{2} + 2l_{1}l_{2c}\cos q_{2}), \quad M_{12} = m_{2}(l_{2c}^{2} + l_{1}l_{2c}\cos q_{2}), \quad M_{21} = M_{12}, \quad M_{22} = m_{2}l_{2c}^{2}, \quad p_{1} = m_{2}l_{1}l_{2c}\sin q_{2}, \quad p_{2} = m_{1}l_{1c} + m_{2}l_{1}, \quad p_{3} = m_{2}l_{2c}, \quad q_{s} = q_{1} + q_{2},

FIGURE 3. Structure of the two-link robot manipulator with eye-to-hand camera configuration.

where q_1 and q_2 are the joint angles of the two actuators, m_1 and m_2 are the masses of links 1 and 2, respectively, l_1 and l_2 are the lengths of the two links, and l_{1c} and l_{2c} are the distances between the mass centers of the links and their actuator joints. q_1(0) and q_2(0) are the initial joint angles. These parameters are listed in Table 1.

TABLE 1. Parameters and initial joint angle of the robot.
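The dynamic terms defined below Eq. 29 can be assembled as in the following sketch. It is an illustrative reconstruction of the standard two-link model, not the authors' simulation code; the masses, lengths, and center-of-mass distances are those of Table 1 and are passed in as arguments.

```python
# A sketch reconstructing the two-link dynamic terms listed below Eq. 29
# (not the authors' simulation code); parameter values come from Table 1.
import numpy as np

def two_link_dynamics(q, q_dot, m1, m2, l1, l2, l1c, l2c, g=9.81):
    q1, q2 = q
    qs = q1 + q2
    p1 = m2 * l1 * l2c * np.sin(q2)
    M = np.array([
        [m1 * l1c ** 2 + m2 * (l1 ** 2 + l2c ** 2 + 2 * l1 * l2c * np.cos(q2)),
         m2 * (l2c ** 2 + l1 * l2c * np.cos(q2))],
        [m2 * (l2c ** 2 + l1 * l2c * np.cos(q2)), m2 * l2c ** 2],
    ])
    C = np.array([[-p1 * q_dot[1], -p1 * (q_dot[0] + q_dot[1])],
                  [p1 * q_dot[0], 0.0]])
    G = np.array([(m1 * l1c + m2 * l1) * g * np.cos(q1) + m2 * l2c * g * np.cos(qs),
                  m2 * l2c * g * np.cos(qs)])
    Jac = np.array([[-l1 * np.sin(q1) - l2 * np.sin(qs), -l2 * np.sin(qs)],
                    [l1 * np.cos(q1) + l2 * np.cos(qs), l2 * np.cos(qs)]])
    return M, C, G, Jac
```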

The camera is fixed at P_c = (0, 0, −2) m, and the image plane of the camera (X′O′Z′) is set parallel to the Cartesian plane (XOZ). The camera resolution is 1280 × 720 pixels, and the focal length of the camera is 10 mm. The ratio between pixels and unit length is 100 pixels/mm along both axes of the pixel plane. The camera observation frequency is set to 50 Hz.

The constraints of actuator torque, actuator acceleration, and the image feature are, respectively, given as

-20\ \mathrm{N\,m} \le T \le 20\ \mathrm{N\,m}, \quad -5\ \mathrm{rad/s^{2}} \le \ddot{q} \le 5\ \mathrm{rad/s^{2}}, \quad (0\ \mathrm{px}, 0\ \mathrm{px})^{T} \le s \le (1280\ \mathrm{px}, 720\ \mathrm{px})^{T}. (30)

In this simulation, Eq. 10 is used as the dynamic model of the force sensor. In a real interaction environment, the value of K_M is small enough, compared with K_S and K_D, to be ignored. The parameters of this equation are therefore given as K_S = 10000 and K_D = 0.57. The threshold of the TC is given as δ_tc = 100.

In the visual-admittance model, the parameters of Eq. 19 are given as Kp = 5 × 10−4, Ki = 6 × 10−4, and Kd = 2 × 10−5.

In order to validate the convergence capability and the control performance of the proposed controller, the position trajectory is predefined and consists of two phases: the approach phase and the interaction phase. The image feature trajectory used in the MPC is generated from the position trajectory by a virtual camera model. During the approach phase, the robot converges to P_1 = (1.5, 0.4, 0) m within 1.5 s, and in the interaction phase, the robot moves from P_1 to P_2 = (1.5, 0.365, 0) m in 3.5 s. The position trajectory is given as follows:

p_{d} = \begin{bmatrix} 1.5 \\ 0.4 - 0.01(t - 1.5) \\ 0 \end{bmatrix}. (31)

During the interaction phase, a 10 N force command is applied along the x-axis from 2 to 4 s.

In order to verify the effects of the proposed controller, three sets of controllers are introduced in this simulation to drive the robot to track the desired trajectory:

Controller 1: The MPC proposed in this paper. The control and prediction horizons are set to N_c = 5 and N_p = 6, respectively, and the discretization time used in the controller is Δt = 0.1 s. The parameters of the predictive model and cost function are chosen as K_fm = 10000, Q_x = diag{1000}, and Q_u = diag{0.001}.

Controller 2: The classical MPC without the torque or interaction force constraints. The parameters of this controller are similar to Controller 1.

Controller 3: The parallel visual and force control (PVFC) scheme proposed in Zhu et al. (2022b) with input constraints. The parameters of this controller are given as Kv = 7.5, Kp = 20, Ki = 0.01, and Kf = 0.01.

5.2 Results and analysis

In this section, to illustrate the performance of the proposed controller with the torque constraints, three controllers are investigated through simulation on a two-DOF robot manipulator with an eye-to-hand camera. We use MATLAB to conduct the simulation and a sequential quadratic programming algorithm (the fmincon function in the MATLAB Optimization Toolbox) to solve the optimization problem of the MPC. The frequency of the MPC is equal to that of the camera, which is 50 Hz. Similarly, the frequency of the torque generator is equal to that of the joint and force sensors, which is 1,000 Hz.
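As noted above, the paper solves the constrained optimization with MATLAB's fmincon (an SQP method). For readers without MATLAB, a rough Python analogue using SciPy's SLSQP solver is sketched below; the cost and constraint callables stand for Eqs. 21 and 24–28 and are assumptions, and SciPy expects inequality constraints in the fun(U) ≥ 0 convention (i.e., the g(·) ≤ 0 forms given earlier must be negated).

```python
# A rough Python analogue of the fmincon-based SQP solve, using SciPy's SLSQP;
# the cost and constraint callables are placeholders for Eqs. 21 and 24-28.
import numpy as np
from scipy.optimize import minimize

def solve_mpc(cost, U0_flat, ineq_constraints, u_bounds):
    """cost(U): scalar cost of Eq. 21; ineq_constraints: callables g(U) >= 0;
    u_bounds: list of (lo, hi) pairs from the acceleration bounds of Eq. 25."""
    res = minimize(cost, U0_flat, method="SLSQP",
                   bounds=u_bounds,
                   constraints=[{"type": "ineq", "fun": g} for g in ineq_constraints],
                   options={"maxiter": 50})
    return res.x   # flattened optimal sequence; only its first input is applied to the robot
```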

The image trajectories and the image errors of the three aforementioned controllers are given in Figure 4. From Figures 4A–C, we can see that the image trajectory of the IBVS system under the proposed MPC is smoother than that under the PVFC and is similar to that under the classical MPC. When the force command is applied, the overshoot of the IBVS system under the PVFC is much more severe than that of the classical MPC and the proposed MPC. As shown in Figures 4D–F, during the approach phase, the convergence time of the proposed MPC (approximately 0.8 s) is shorter than that of the PVFC (approximately 1.25 s) and very close to that of the classical MPC (approximately 0.75 s).

FIGURE 4. Image trajectories and errors of three controllers. (A) Image trajectory of the proposed MPC. (B) Image trajectory of the classical MPC. (C) Image trajectory of the PVFC. (D) Image errors of the proposed MPC. (E) Image errors of the classical MPC. (F) Image errors of the PVFC.

The position and force trajectories of the three aforementioned controllers are shown in Figure 5A and Figure 6, respectively. As shown in Figure 5A, during the approach phase, the position trajectory in Cartesian space under the proposed controller is shorter than that under the PVFC, and, in the interaction phase, the positioning accuracy of the IBVS robot system under the proposed MPC is better than that under the PVFC. From Figures 6A, B, we can see that the overshoot and chattering of the force control under the PVFC are the most severe among the three controllers, and the overshoot of the force control under the classical MPC still exists. Nevertheless, owing to the virtual interaction model and the force constraints, the overshoot and chattering of the force control in the IBVS robot system under the proposed controller are eliminated.

FIGURE 5. Position trajectories and input torques of three controllers. (A) Proposed and classical MPC. (B) PVFC.

FIGURE 6. Force trajectories of the three controllers. (A) Proposed and classical MPC. (B) PVFC.

The input torques of the three controllers are given in Figure 5B. The red zone in Figure 5B marks the predefined torque constraint. As shown in Figure 5B, the classical MPC without the non-linear torque constraints cannot make the IBVS system strictly obey the torque constraints, whereas the proposed MPC satisfies them. Compared with the PVFC, the input torque of the IBVS system under the proposed MPC is smoother in the interaction phase.

6 Conclusion

In this paper, a visual-admittance-based model predictive controller is developed to cope with the overshoot and trade-off of the classical position/force control scheme in the nuclear environment. A visual-admittance-based trajectory generator is presented to combine the desired image features and the force feedback into the reference image trajectory. Considering the integration element of the visual-admittance-based trajectory generator, we propose a novel predictive model constraints scheme. In this scheme, the desired force command is considered a constraint to avoid the overshoot of the force control, and the non-linear constraints based on the dynamic model of the robot are proposed to satisfy the actuator torque limits. A torque generator is used to generate the input signal of the robot system with the proposed MPC output and the real-time feedback of the robot joint sensors. An illustrative example of a two-DOF robot manipulator with an eye-to-hand camera is given to validate the effect of the proposed control scheme. Moreover, compared with the PVFC, the proposed MPC controller has better precision in force and position tracking. Compared with the classical MPC, the proposed controller can satisfy the image feature, torque, and force constraints and eliminate the overshoot of force control.

The proposed controller still requires information about the interaction environment, such as the elements of the virtual force model in the predictive model. Our ongoing research focuses on a new observer that identifies the elements of the interaction environment online. In addition, a simplification of the optimization process in the MPC is in progress in order to construct a real-time MPC scheme for the IBVS robot system with non-linear constraints. In the future, experiments will be conducted to validate the proposed method.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

JQ: methodology, software, validation, formal analysis, investigation, data curation, writing—original draft, and writing—review and editing. JC and ZX: validation, writing—review and editing, and data curation. MZ and YT: conceptualization, supervision, writing—review and editing, and funding acquisition. All authors contributed to the article and approved the submitted version.

Funding

This study was supported by the Natural Science Foundation of Sichuan Province (2023NSFSC0872) and the Shenzhen Science and Technology Program (CYJ20210324143004012).

Conflict of interest

Authors JQ, ZX, and JC were employed by the company China Nuclear Power Engineering Co., Ltd.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Allibert, G., Courtial, E., and Chaumette, F. (2010). Predictive control for constrained image-based visual servoing. IEEE Trans. Robot.26, 933–939. doi:10.1109/TRO.2010.2056590

Bellakehal, S., Andreff, N., Mezouar, Y., and Tadjine, M. (2011). Vision/force control of parallel robots. Mech. Mach. Theory.46, 1376–1395. doi:10.1016/j.mechmachtheory.2011.05.010

Chaumette, F., and Hutchinson, S. (2006). Visual servo control. i. basic approaches. IEEE Robot. Autom. Mag.13, 82–90. doi:10.1109/mra.2006.250573

Deng, Y., Léchappé, V., Zhang, C., Moulay, E., Du, D., Plestan, F., et al. (2022). Designing discrete predictor-based controllers for networked control systems with time-varying delays: Application to a visual servo inverted pendulum system. IEEE/CAA J. Autom. Sin.9, 1763–1777. doi:10.1109/JAS.2021.1004249

Fusco, F., Kermorgant, O., and Martinet, P. (2020a). “A comparison of visual servoing from features velocity and acceleration interaction models,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Fusco, F., Kermorgant, O., and Martinet, P. (2020b). Integrating features acceleration in visual predictive control. IEEE Robot. Autom. Lett.5, 5197–5204. doi:10.1109/lra.2020.3004793

Hajiloo, A., Keshmiri, M., Xie, W.-F., and Wang, T.-T. (2016). Robust online model predictive control for a constrained image-based visual servoing. IEEE Trans. Indust. Electron.63, 1–2250. doi:10.1109/TIE.2015.2510505

Hogan, N. (1985). Impedance control: An approach to manipulation: Part III—applications. J. Dyn. Syst. Meas. Control107, 17–24. doi:10.1115/1.3140701

Liang, X., Wang, H., Liu, Y.-H., You, B., Liu, Z., Jing, Z., et al. (2022). Fully uncalibrated image-based visual servoing of 2dofs planar manipulators with a fixed camera. IEEE Trans. Cybern.52, 10895–10908. doi:10.1109/TCYB.2021.3070598

Lippiello, V., Fontanelli, G. A., and Ruggiero, F. (2018). Image-based visual-impedance control of a dual-arm aerial manipulator. IEEE Robot. Autom. Lett.3, 1856–1863. doi:10.1109/lra.2018.2806091

Mariottini, G. L., Oriolo, G., and Prattichizzo, D. (2007). Image-based visual servoing for nonholonomic mobile robots using epipolar geometry. IEEE Trans. Robot.23, 87–100. doi:10.1109/TRO.2006.886842

Mason, M. T. (1981). Compliance and force control for computer controlled manipulators. IEEE Trans. Syst. Man. Cybern.11, 418–432. doi:10.1109/tsmc.1981.4308708

Nabat, V., de la O Rodriguez, M., Company, O., Krut, S., and Pierrot, F. (2005). “Par4: Very high speed parallel robot for pick-and-place,” in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE), 553–558.

Oliva, A. A., Giordano, P. R., and Chaumette, F. (2021). A general visual-impedance framework for effectively combining vision and force sensing in feature space. IEEE Robot. Autom. Lett.6, 4441–4448. doi:10.1109/LRA.2021.3068911

Raibert, M. H., and Craig, J. J. (1981). Hybrid position/force control of manipulators. J. Dyn. Syst. Meas. Control.102, 126–133. doi:10.1115/1.3139652

Roy, S., Roy, S. B., and Kar, I. N. (2018). Adaptive–robust control of euler–Lagrange systems with linearly parametrizable uncertainty bound. IEEE Trans. Contr. Syst. Technol.26, 1842–1850. doi:10.1109/TCST.2017.2739107

Song, S., Zhu, M., Dai, X., and Gong, D. (2022). “Model-free optimal tracking control of nonlinear input-affine discrete-time systems via an iterative deterministic q-learning algorithm,” in IEEE Transactions on Neural Networks and Learning Systems, 1–14. doi:10.1109/TNNLS.2022.3178746

Wu, J., Gao, Y., Zhang, B., and Wang, L. (2017). Workspace and dynamic performance evaluation of the parallel manipulators in a spray-painting equipment. Robot. Comput. Integr. Manuf.44, 199–207. doi:10.1016/j.rcim.2016.09.002

Wu, R., Li, M., Yao, Z., Liu, W., Si, J., and Huang, H. (2022). Reinforcement learning impedance control of a robotic prosthesis to coordinate with human intact knee motion. IEEE Robot. Autom. Lett.7, 7014–7020. doi:10.1109/LRA.2022.3179420

Xu, P., Li, B., and Chueng, C.-F. (2017). “Dynamic analysis of a linear delta robot in hybrid polishing machine based on the principle of virtual work,” in 2017 18th International Conference on Advanced Robotics (ICAR) (IEEE), 379–384.

Yang, G., Chen, I.-M., Yeo, S. H., and Lin, W. (2008). “Design and analysis of a modular hybrid parallel-serial manipulator for robotised deburring applications,” in Smart devices and machines for advanced manufacturing (Springer), 167–188.

Zhu, M., Chriette, A., and Briot, S. (2020). “Control-based design of a DELTA robot,” in ROMANSY 23 - Robot Design, Dynamics and Control, Proceedings of the 23rd CISM IFToMM Symposium.

Zhu, M., Briot, S., and Chriette, A. (2022a). Sensor-based design of a delta parallel robot. Mechatronics87, 102893. doi:10.1016/j.mechatronics.2022.102893

Zhu, M., Huang, C., Qiu, Z., Zheng, W., and Gong, D. (2022b). Parallel image-based visual servoing/force control of a collaborative delta robot. Front. Neurorobot.16, 922704. doi:10.3389/fnbot.2022.922704

Keywords: visual-admittance model, visual/force control, model predictive control, collaborative robot, dynamic visual servoing

Citation: Qi J, Xu Z, Chu J, Zhu M and Teng Y (2023) Visual-admittance-based model predictive control for nuclear collaborative robots. Front. Energy Res. 11:1203904. doi: 10.3389/fenrg.2023.1203904

Received: 11 April 2023; Accepted: 08 June 2023;
Published: 23 June 2023.

Edited by:

Zhi-Wei Liu, Huazhong University of Science and Technology, China

Reviewed by:

Sahaj Saxena, Thapar Institute of Engineering & Technology, India
Tiedong Ma, Chongqing University, China
Ruizhuo Song, University of Science and Technology Beijing, China

Copyright © 2023 Qi, Xu, Chu, Zhu and Teng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jiru Chu, chujr@cnpe.cc
