Article

Physician-Friendly Tool Center Point Calibration Method for Robot-Assisted Puncture Surgery

State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(2), 366; https://doi.org/10.3390/s21020366
Submission received: 3 December 2020 / Revised: 29 December 2020 / Accepted: 1 January 2021 / Published: 7 January 2021
(This article belongs to the Section Sensors and Robotics)

Abstract

After each replacement of the robot end tool, tool center point (TCP) calibration must be performed to achieve precise control of the end tool. This process is also essential for robot-assisted puncture surgery. The purpose of this article is to address the poor accuracy stability and strong operator dependence of traditional TCP calibration methods and to propose a TCP calibration method better suited to physicians. This paper designs a special binocular vision system and proposes a vision-based TCP calibration algorithm that simultaneously identifies the tool center point position (TCPP) and the tool center point frame (TCPF). An accuracy test experiment shows that the designed binocular system has a positioning accuracy of ±0.05 mm. Experimental research shows that the magnitude of the robot configuration set is a key factor affecting the accuracy of TCPP, whereas the accuracy of TCPF is not sensitive to the robot configuration set. Comparison experiments show that, relative to the traditional method, the proposed TCP calibration method reduces time consumption by 82%, improves the accuracy of TCPP by 65% and improves the accuracy of TCPF by 52%. The proposed method therefore offers higher accuracy, better stability, lower time consumption and less dependence on operator skill than traditional methods, which has a positive effect on the clinical application of high-precision robot-assisted puncture surgery.

1. Introduction

Robot-assisted needle insertion technology can improve the accuracy and safety of many minimally invasive percutaneous surgeries, such as biopsy and brachytherapy. A robot-assisted needle insertion system usually consists of a lesion navigation system and a robot [1]. Through a series of coordinate transformations, the robot inserts the needle into the lesion location [2]. Errors in these coordinate transformations result in needle placement that is precise (repeatable) but not accurate. An important source of such error is an inaccurate tool center point (TCP), which in robot-assisted puncture surgery is described by the tool center point position (TCPP) and tool center point frame (TCPF) with respect to the end flange frame. This inaccuracy produces a fixed offset between the true needle tip position and the target position at every penetration. General robot manufacturers provide kinematics calculations only from the base to the end flange, not to the TCP. Doctors must therefore establish the coordinate transformation from the end flange to the TCP themselves, both to adapt to puncture needles of different sizes and to ensure the safety of the operation. In traditional TCP calibration, the operator manually jogs the robot to approach a sharp tip feature from different robot poses [3]. This procedure is time consuming and highly operator-dependent, since it requires the doctor to fixate on the sharp tip to ensure that the TCP reaches the same position every time. Such a method is not conducive to clinical application and the promotion of precision puncture surgery. Thus, a quick, simple and error-controllable TCP calibration method is necessary.
Many TCP calibration solutions have been proposed by academia and industrial researchers, with three main approaches, namely, mechanical constraint, laser sensor measuring and vision processing [4].
Mechanical constraint is a common approach in traditional manual calibration and is a typical contact method. The most commonly used mechanical constraint is a sharp-tipped tool, and the process is carried out in two steps. In the first step, the operator moves the TCP to the tip in more than four different poses to calibrate the TCPP [5]. In the second step, taking the TCPP obtained in the first step as the origin, the operator selects at least one point on each axis of the TCP frame and moves it to the tip to establish the TCPF. Mizuno proposed a multipoint spherical fitting method combined with least squares to calibrate the TCP [6]. Xiong Shuo combined the coordinate transformation relationship with a least squares matrix formulation to simplify the calculation of the spherical fitting [7]. However, the TCP can never completely coincide with the tip, which limits the calibration accuracy, and the placement of the TCP depends on the operator's experience. Using this method for needles means that the operator must gaze fixedly at a needle tip smaller than one millimeter to ensure that the distance is close enough to obtain a reliable result. Other reported mechanical constraints are a sphere [8] and a plane [9]; their shortcoming is that the operator must subjectively judge whether the TCP has reached the target location. The DynaCal system, based on a pull-wire sensor, is also typical contact calibration equipment and has been widely used in industry. However, these contact methods are not suitable for flexible tools such as puncture needles, because it is difficult to keep a flexible tool from deforming while pulling the measuring wire attached to it.
Therefore, noncontact measurement methods are more suitable for needle calibration. Laser sensors are among the most widely used modern industrial noncontact measurement methods, owing to their high precision and efficiency. ABB's BullsEye is a well-known solution that has been widely used for calibrating welding torches; it uses a single laser beam as a line constraint to calculate the robot tool coordinates via a robot motion procedure. Hao Gu later proposed a dual laser line TCP calibration method [10], which reduces the operating time by more than 50% and readily improves the TCP accuracy to the submillimeter level. These noncontact methods can reach high accuracy and are robust to the environment, but they are relatively costly and require specific robot trajectories to be preplanned to ensure accuracy. Vision measurement technology also plays an important role in industry, for example digital image correlation (DIC) [11,12], and in surgery, for example Optical Coherence Tomography [13,14,15]. Liu et al. [16] proposed an automatic TCP calibration method based on a common binocular vision measurement, which reduced the dependence on operating experience while guaranteeing a certain accuracy; however, limited by the accuracy of the binocular system, its calibration accuracy can hardly meet high-precision needs. Luo et al. [17] combined vision and deep neural networks to achieve adaptive tool calibration. Other studies have applied vision to the calibration of other robot parameters [18,19,20]. Among these techniques, binocular vision is widely favored by researchers, who have summarized the factors that affect the accuracy of a binocular system, including structural parameters and calibration parameters [21,22,23]. The underlying reasons are the camera's central perspective model and the binocular measurement principle: in a typical binocular system, a longer baseline gives higher resolution along the z-axis, but at the same time the view field moves farther from the camera's image plane, which reduces the resolution along the x- and y-axes. Thus, it is difficult for a typical binocular system to meet the accuracy required for puncture surgery.
In this paper, we built a high-precision binocular system to achieve high-precision spatial positioning of the puncture needle tip and the direction of its axis. Using this binocular system, we also propose a calibration algorithm based on the least squares method. The algorithm decouples the TCPP (position of the needle tip) and TCPF (direction of the needle axis) and identifies them separately within a common process. The main advantage of our method is that it reduces the difficulty of operation while improving accuracy: the doctor only needs to move the needle fixed to the robot end flange into the measurement space to complete the calibration.
The remainder of this paper is organized as follows. Section 2 introduces the system's constitution, the design of the vision system, the algorithm for positioning the needle tip and the direction of the needle axis, and the analysis of the TCP calibration algorithm. Section 3 describes the experimental design for evaluating the vision system's accuracy, the calibration accuracy under different configurations, and the comparison between our method and traditional methods. Section 4 analyzes the results of all experiments and gives suggestions for calibration using the proposed method. Section 5 concludes that the proposed method can achieve a higher-precision puncture and has high clinical application value.

2. Materials and Methods

2.1. System Constitution

The experimental system is composed of a collaborative robot, a binocular vision system and PC software; the configuration is shown in Figure 1. We adopt a 6-DOF collaborative robot offered by UNIVERSAL ROBOTS Co., Ltd. (Shanghai, China), model UR5. A biopsy needle (provided by Bard Peripheral Vascular, Inc., MN1413) is mounted on a simple fixture attached to the end flange. The binocular vision system is formed by two industrial CCD cameras offered by DAHENG IMAGING Co., Ltd. (Beijing, China); the camera model is MER-1810-21U3C, equipped with a lens of model M1224-MPW2. The resolution of each camera is 4912 (H) × 3684 (V), and the focal length of the lens is 12 mm. The system applies diffuse bright-field backlight illumination to improve image contrast and thereby measurement accuracy. In addition, this illumination design helps the cameras obtain clear images with a small aperture. The advantage of a small aperture is a larger depth of field, so a larger measurement space can be used to capture the movement of the needle tip, avoiding defocus as the needle tip moves within the measurement space.
While the system is working, the operator manually jogs the robotic arm to place the puncture needle into the stereo vision measurement space. The robot controller sends the end flange position to a personal computer (PC) in real time over an Ethernet cable, and the images captured by the two cameras are transferred to the PC over universal serial bus (USB) cables. The position of the needle tip in the measurement space coordinate system is calculated by an image processing algorithm running on the PC. All of the algorithms are written in C++ based on two external libraries, Halcon and Eigen.

2.2. Binocular System Design and Image Processing

The core task of the binocular system is to accurately position the needle. Binocular stereo vision acquires three-dimensional information about objects based on the principle of parallax, so parallax is an important factor affecting the resolution of the binocular system. Without considering the camera's own parameters, the resolution of a traditional binocular system is mainly affected by the baseline length and the measurement distance. However, a longer baseline leads to a longer minimum measurement distance, which is not conducive to accurate measurement. According to the TCP calibration requirements of a robot with a needle as the end tool, we designed a convergent binocular system: increasing the angle between the optical axes of the two cameras improves the resolution and reduces the size of the uncertain area [24].
The essence of vision-based measurement technology is the correspondence between spatially uncertain regions and the vision system’s pixels. Figure 2 shows the correspondence between the binocular system’s pixels and the spatially uncertain area using a simplified model of a binocular vision system. This model projects the uncertain region onto the horizontal plane assuming that the two cameras are placed at the same height. According to the principle of imaging, the two boundaries of a single pixel correspond to two rays in space, as shown by the green line in the figure, thereby forming a quadrangular uncertain region (green area). Obviously, ω will significantly affect the size of the uncertainty area.
We analyzed the size of the uncertain region at different optical axis angles through numerical calculations. We fixed the length of the baseline and calculated Δx, Δz and √(Δx² + Δz²) of the uncertain regions at angles ω of 0°, 15°, 30°, 45°, 60° and 75°. The contours in Figure 3 show the results. Overall, as ω increases, Δx and √(Δx² + Δz²) first decrease and then increase, while Δz decreases monotonically as ω increases.
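To make the geometry of this simulation concrete, the following C++ sketch intersects the two boundary rays of a single pixel from each camera and measures the extent of the resulting uncertain quadrilateral at the convergence point of the optical axes. The baseline, pixel size and focal length are illustrative assumptions rather than the paper's exact settings, and ω = 0° is omitted because the boundary rays are parallel at the convergence point in this simplified model.

```cpp
// Minimal sketch of the uncertain-region model: each pixel corresponds to
// a wedge of rays, and intersecting the wedges of the two cameras yields a
// quadrilateral whose extent is the positioning uncertainty.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Pt { double x, z; };

// Intersect rays o1 + t1*d(a1) and o2 + t2*d(a2), where d(a) = (sin a, cos a)
// and angles are measured from the +z axis.
static Pt intersect(Pt o1, double a1, Pt o2, double a2) {
    double d1x = std::sin(a1), d1z = std::cos(a1);
    double d2x = std::sin(a2), d2z = std::cos(a2);
    double bx = o2.x - o1.x, bz = o2.z - o1.z;
    double t1 = (d2x * bz - d2z * bx) / (d2x * d1z - d1x * d2z); // Cramer's rule
    return {o1.x + t1 * d1x, o1.z + t1 * d1z};
}

int main() {
    const double PI = std::acos(-1.0);
    const double B = 200.0;             // baseline [mm], assumed
    const double delta = 1.25e-3 / 12;  // pixel angular size s/F [rad]
    for (double deg : {15.0, 30.0, 45.0, 60.0, 75.0}) {
        double w = deg * PI / 180.0;
        Pt cl{-B / 2, 0}, cr{B / 2, 0};
        Pt q[4];
        int k = 0;
        for (double sl : {-0.5, 0.5})       // two edge rays of the left pixel
            for (double sr : {-0.5, 0.5})   // two edge rays of the right pixel
                q[k++] = intersect(cl, w + sl * delta, cr, -(w + sr * delta));
        auto x = std::minmax({q[0].x, q[1].x, q[2].x, q[3].x});
        auto z = std::minmax({q[0].z, q[1].z, q[2].z, q[3].z});
        double dx = x.second - x.first, dz = z.second - z.first;
        std::printf("omega=%2.0f deg  dx=%.4f  dz=%.4f  |d|=%.4f mm\n",
                    deg, dx, dz, std::hypot(dx, dz));
    }
    return 0;
}
```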
Since the calibration process does not use the entire field of view, we chose the nearest circular area with a radius of 20 mm as the analysis object and analyzed the mean and variance of Δx, Δz and √(Δx² + Δz²). The results are shown in Figure 4. The means of Δx and √(Δx² + Δz²) first decrease and then increase as ω increases, reaching a minimum at ω = 45°, while the mean of Δz decreases with increasing ω. In general, the fluctuation of Δx is relatively mild, whereas Δz and √(Δx² + Δz²) drop drastically as ω goes from 0° to 30°. To facilitate a comparison of the mean and variance, we calculated the negative logarithm of all of the variances. The negative logarithm of the variance of Δx and √(Δx² + Δz²) first increases and then decreases as ω increases, reaching a maximum at ω = 45°, and that of Δz increases with increasing ω. The variances show that the accuracy of the vision system is most stable when ω is 45°. In summary, when ω is 45°, the vision system theoretically has the best positioning accuracy for points inside the circular area.
Based on the above numerical simulation results and analysis, we designed an orthogonal binocular system, as shown in Figure 5. Such a configuration is not conducive to the matching of feature points, because in the most extreme cases the images of the two cameras are completely different; however, this does not affect the algorithm introduced in Section 2.3. The left and right cameras are fixed on two cross slides, and their relative positions can be adjusted by the slides to ensure that both cameras have a clear field of view. Because a small aperture is used to increase the depth of field, the system adds four light sources to assist imaging: two ring lights fixed in front of the lenses for calibrating the internal and external parameters of the vision system, and two backlights fixed on the two cameras to enhance image contrast and accurately extract the needle tip position. The four lights are controlled by a lighting controller, and all of the devices are fixed on an optical shock absorption platform.

2.3. Positioning Needle

As is well known, the internal and external parameters of a binocular system must be calibrated before use. This process is well established, and we used HALCON's toolbox and a circular-mark calibration plate to calibrate the internal and external parameters of the system. Since our goal is positioning the needle tip rather than three-dimensional reconstruction of an object, we propose a simple algorithm for accurately acquiring the position of the needle tip and the direction of the needle axis in our vision system.
The image processing used to obtain the pixel coordinates of the needle is described as follows (a code sketch of steps (c)–(e) is given after the list, below Figure 6):
(a)
Take gray images when no object is placed in the binocular system and record them separately as I_L(x, y) and I_R(x, y);
(b)
Control the robot to move the needle tip to different positions within the measurement range of the binocular system and take pictures I_Li(x, y) and I_Ri(x, y) (i = 1, 2, 3, ..., n);
(c)
Subtract the background image of each camera (without the needle tip) from the image containing the needle tip using the formula G_{L(R)i}(x, y) = (I_{L(R)i}(x, y) − I_{L(R)}(x, y)) + 128. Gray values less than 0 are truncated to 0 and values greater than 255 are truncated to 255;
(d)
Select the pixels of G_{L(R)i}(x, y) whose gray values fulfill the condition 0 ≤ G_{L(R)i}(x, y) ≤ 100, a threshold chosen from experimental experience;
(e)
The resulting image contains the needle and some noise. We calculate the size of all connected domains in the image and keep the largest one, then correct the image distortion. Finally, using a circular structuring element with a radius of 5 pixels, we perform a morphological opening on the image to smooth the outline of the needle.
(f)
Calculate the minimum circumscribed rectangle of the needle in the image and the coordinates of all pixels where its short side intersects the boundary of the needle. Take the average of all intersection coordinates as the pixel coordinates of the needle tip.
(g)
Fit the edge of the needle with a polygon. Extract the two longest straight lines as input for calculating the needle direction.
Figure 6 shows a sample of the image processing.
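Purely to make steps (c)–(e) concrete, here is a rough C++ sketch of that part of the pipeline. The paper's implementation uses Halcon; this version substitutes OpenCV, and details such as the 11 × 11 elliptical element (standing in for the radius-5 disk) are assumptions of the sketch.

```cpp
// Rough OpenCV stand-in for steps (c)-(e) of the needle segmentation.
#include <opencv2/opencv.hpp>

cv::Mat needleMask(const cv::Mat& background, const cv::Mat& frame) {
    // (c) Signed difference shifted by 128; convertTo saturates to [0, 255].
    cv::Mat diff, g;
    cv::subtract(frame, background, diff, cv::noArray(), CV_16S);
    diff += 128;
    diff.convertTo(g, CV_8U);

    // (d) Keep pixels whose shifted gray value lies in [0, 100].
    cv::Mat mask;
    cv::inRange(g, 0, 100, mask);

    // (e) Keep the largest connected component (the needle) ...
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
    int best = 1;
    for (int i = 2; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) >
            stats.at<int>(best, cv::CC_STAT_AREA))
            best = i;
    cv::Mat needle = (labels == best);

    // ... and smooth its outline with a circular morphological opening.
    cv::Mat disk = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11, 11));
    cv::morphologyEx(needle, needle, cv::MORPH_OPEN, disk);
    return needle;
}
```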
The imaging of the needle in the two cameras is shown in Figure 7. In this article we take the frame of the left camera as the world frame. The 3 × 4 matrix P maps the needle tip X = [X Y Z 1]^T from 3D space to the 2D image point x = [x y 1]^T via perspective projection up to a scale w:
$$ w_L \mathbf{x}_L = \underbrace{\begin{bmatrix} \alpha_{xL} & 0 & x_{0L} \\ 0 & \alpha_{yL} & y_{0L} \\ 0 & 0 & 1 \end{bmatrix} \left[\, I \mid 0 \,\right]}_{P_L} \mathbf{X} \tag{1} $$

$$ w_R \mathbf{x}_R = \underbrace{\begin{bmatrix} \alpha_{xR} & 0 & x_{0R} \\ 0 & \alpha_{yR} & y_{0R} \\ 0 & 0 & 1 \end{bmatrix} \left[\, R \mid \mathbf{t} \,\right]}_{P_R} \mathbf{X} \tag{2} $$
where the subscripts L and R denote the left and right cameras, respectively. α_x = m_x f and α_y = m_y f represent the focal length f of the camera in pixel dimensions in the x and y directions, where m_x and m_y are the numbers of pixels per unit distance in image coordinates, and (x_0, y_0) is the principal point in pixel dimensions. Because the left camera is set as the world frame of the vision system, the external parameter of the left camera is [I | 0], meaning no rotation and no translation. The external parameter of the right camera is [R | t], where R and t are the rotation matrix and translation vector relative to the left camera.
The needle tip position in 3D space, X = [X Y Z 1]^T, is calculated from Formula (3) by the least squares method, assuming that the needle tip is not at infinity [25]:

$$ \begin{bmatrix} x_L \mathbf{p}_{L3} - \mathbf{p}_{L1} \\ y_L \mathbf{p}_{L3} - \mathbf{p}_{L2} \\ x_R \mathbf{p}_{R3} - \mathbf{p}_{R1} \\ y_R \mathbf{p}_{R3} - \mathbf{p}_{R2} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = 0 \tag{3} $$

where p_{Li} and p_{Ri} denote the i-th rows of P_L and P_R, respectively.
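A minimal Eigen sketch of this triangulation, assuming matched tip pixels (x_L, y_L) and (x_R, y_R) and the projection matrices of Formulas (1) and (2), might look as follows; the homogeneous solution is the right singular vector of the smallest singular value.

```cpp
// DLT triangulation of the needle tip from Formula (3).
#include <Eigen/Dense>

Eigen::Vector3d triangulateTip(const Eigen::Matrix<double, 3, 4>& PL,
                               const Eigen::Matrix<double, 3, 4>& PR,
                               const Eigen::Vector2d& xL,
                               const Eigen::Vector2d& xR) {
    Eigen::Matrix4d A;
    A.row(0) = xL.x() * PL.row(2) - PL.row(0);
    A.row(1) = xL.y() * PL.row(2) - PL.row(1);
    A.row(2) = xR.x() * PR.row(2) - PR.row(0);
    A.row(3) = xR.y() * PR.row(2) - PR.row(1);
    // Null vector of A, then dehomogenize (tip assumed not at infinity).
    Eigen::JacobiSVD<Eigen::Matrix4d> svd(A, Eigen::ComputeFullV);
    Eigen::Vector4d X = svd.matrixV().col(3);
    return X.head<3>() / X(3);
}
```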
We treat TCPF calibration as the direction calibration problem of straight homogeneous generalized cylinders (SHGC) [26,27,28,29,30] and estimate it with an analytical geometric method based on the law of SHGC projection imaging. The projection of the cylinder on each image plane always includes two straight lines, such as l_L^+ and l_L^− in Figure 7. They are formed by the intersection of the image plane with the two tangent planes of the cylinder that pass through the camera's optical center. We represent these tangent planes by their normal vectors n_L^+, n_L^−, n_R^+ and n_R^−, whose directions are given by
$$ \mathbf{n} = \frac{P^{\top} \mathbf{l}}{\left\lVert P^{\top} \mathbf{l} \right\rVert} \tag{4} $$
where l is one of l_L^+, l_L^−, l_R^+ and l_R^−, given in Hesse normal form on the image plane, and the resulting n corresponds to n_L^+, n_L^−, n_R^+ and n_R^−, computed with P_L and P_R, respectively.
The axis of the needle must lie on the symmetry planes, whose normals n_L and n_R are

$$ \mathbf{n}_{L(R)} = \mathbf{n}_{L(R)}^{+} + \left( \frac{\mathbf{n}_{L(R)}^{+} \cdot \mathbf{n}_{L(R)}^{-}}{\left\lvert \mathbf{n}_{L(R)}^{+} \cdot \mathbf{n}_{L(R)}^{-} \right\rvert} \right) \mathbf{n}_{L(R)}^{-} \tag{5} $$

where the factor (n_{L(R)}^+ · n_{L(R)}^−)/|n_{L(R)}^+ · n_{L(R)}^−| is a sign corrector ensuring that n_{L(R)} is perpendicular, rather than parallel, to the symmetry plane.
Then the direction of the needle axis is calculated by Formula (6):

$$ \mathbf{l}_{Axis} = \frac{\mathbf{n}_L \times \left( R\, \mathbf{n}_R \right)}{\left\lVert \mathbf{n}_L \times \left( R\, \mathbf{n}_R \right) \right\rVert} \tag{6} $$

where R is the rotation between the camera frames from Formula (2), used to express n_R in the world frame.
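The following Eigen sketch mirrors Formulas (4)–(6). Whether n_R must be multiplied by R or by its transpose depends on the rotation convention behind Formula (2); the code follows the paper's written form, and all names are ours.

```cpp
// Needle axis direction from the contour lines of the two views.
#include <Eigen/Dense>

// Formula (4): back-project an image line l (Hesse normal form) through P
// to the unit normal of the tangent plane it defines.
Eigen::Vector3d planeNormal(const Eigen::Matrix<double, 3, 4>& P,
                            const Eigen::Vector3d& l) {
    Eigen::Vector4d plane = P.transpose() * l;  // homogeneous plane P^T l
    return plane.head<3>().normalized();
}

// Formulas (5) and (6): symmetry-plane normals, then the axis direction.
Eigen::Vector3d needleAxis(const Eigen::Vector3d& nLp, const Eigen::Vector3d& nLm,
                           const Eigen::Vector3d& nRp, const Eigen::Vector3d& nRm,
                           const Eigen::Matrix3d& R) {
    auto bisect = [](const Eigen::Vector3d& a, const Eigen::Vector3d& b) {
        double s = a.dot(b) >= 0.0 ? 1.0 : -1.0;  // sign corrector of (5)
        return Eigen::Vector3d((a + s * b).normalized());
    };
    Eigen::Vector3d nL = bisect(nLp, nLm);
    Eigen::Vector3d nR = bisect(nRp, nRm);
    return nL.cross(R * nR).normalized();         // Formula (6)
}
```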

2.4. TCP Calibration Algorithm

TCP calibration obtains the actual position and frame of the tool center point using external measurements and fitting algorithms. In essence, this is the problem of solving AX = B, for which many researchers have already given solutions [31,32,33]. For a surgery-assisting robot, the calibration must, on the one hand, deliver a highly accurate TCP and, on the other hand, keep the doctors' operation of the robot simple. As shown in Figure 8, frame {B} is the robot's base frame, frame {E} is the end flange frame, and frame {V} is the binocular vision system frame.
For the needle, we do not care about the rotation of the needle tip frame about the needle axis; we only need to align the z-axis of the tip frame with the axis of the needle. The position and pose are therefore considered separately, as the position vector of the needle tip (P_Needle^E) and the direction vector of the needle axis (l_Axis^E) in frame {E}. According to the forward kinematics of the robot, P_Needle^E satisfies the following formula:
$$ T_V^B \begin{bmatrix} P_i^V \\ 1 \end{bmatrix} = T_{Ei}^B \begin{bmatrix} P_{Needle}^E \\ 1 \end{bmatrix}, \quad i = 0, 1, 2, \ldots, n \tag{7} $$
In this equation, T_V^B = [R_V^B, t_V^B; 0, 1] is the homogeneous transformation from {B} to {V}, an unknown constant during the calibration. [P_i^V, 1]^T = [x_i^V, y_i^V, z_i^V, 1]^T is the needle tip position in frame {V} (i.e., X = [X Y Z 1]^T of Section 2.3 with the homogeneous factor removed). T_{Ei}^B = [R_{Ei}^B, t_{Ei}^B; 0, 1] is the homogeneous transformation from {B} to {Ei}, obtained from the robot controller. [P_Needle^E, 1]^T = [x^E, y^E, z^E, 1]^T is the position vector of the needle tip in frame {E}. Expanding Formula (7) yields Formula (8); it contains two unknown vectors and one unknown matrix, so P_Needle^E cannot be obtained directly.
$$ R_V^B P_i^V + t_V^B = R_{Ei}^B P_{Needle}^E + t_{Ei}^B, \quad i = 0, 1, 2, \ldots, n \tag{8} $$
Moving the robot's end effector to n + 1 positions while keeping its rotation fixed yields a series of instances of Equation (8). Subtracting the first equation from each of the others gives the n equations of Equation (9), which contain only the unknown matrix R_V^B: because the end flange makes only translational motions and frame {V} remains fixed relative to frame {B}, both R_V^B and the term R_{Ei}^B P_Needle^E are constant, and the latter cancels in the differences.

$$ R_V^B \left( P_i^V - P_0^V \right) = t_{Ei}^B - t_{E0}^B, \quad i = 1, 2, 3, \ldots, n \tag{9} $$
Next, we use singular value decomposition (SVD) to obtain R_V^B.
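Equation (9) pairs tip displacements measured in {V} with flange displacements in {B}, so this SVD step is the classical orthogonal Procrustes (Kabsch) solution. A minimal Eigen sketch, with variable names of our choosing, might be:

```cpp
// Rotation R_V^B from paired displacement vectors (Formula (9)).
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix3d rotationFromDisplacements(const std::vector<Eigen::Vector3d>& a,  // in {V}
                                          const std::vector<Eigen::Vector3d>& b)  // in {B}
{
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (std::size_t i = 0; i < a.size(); ++i)
        H += b[i] * a[i].transpose();             // cross-covariance matrix
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(
        H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
    if (R.determinant() < 0.0) {                  // reject reflections
        Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
        D(2, 2) = -1.0;
        R = svd.matrixU() * D * svd.matrixV().transpose();
    }
    return R;                                     // maps {V} vectors to {B}
}
```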
Then we manually control the end flange to move to m + 1 (four or more) points with both translation and rotation, making the differences in posture between the points as large as possible. As above, subtracting the first equation from each of the others gives m equations, Equation (10), which contain the single unknown vector P_Needle^E; we again use SVD to solve for it.

$$ \left( R_{Ei}^B - R_{E0}^B \right) P_{Needle}^E = R_V^B \left( P_i^V - P_0^V \right) - \left( t_{Ei}^B - t_{E0}^B \right), \quad i = 1, 2, 3, \ldots, m \tag{10} $$
The direction of the needle axis l_Axis^E in the end flange frame always satisfies Formula (11), and we also use SVD to obtain l_Axis^E:

$$ R_B^V R_{Ei}^B \mathbf{l}_{Axis}^E = \mathbf{l}_i^V, \quad i = 0, 1, 2, \ldots, m \tag{11} $$

where l_Axis^E is the direction of the needle axis in the end flange frame and l_i^V is the direction of the needle axis in the vision frame (l_Axis of Section 2.3).
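To make the second step concrete, here is an Eigen sketch that stacks Formulas (10) and (11) over the m poses and solves both least squares systems by SVD; R_B^V is taken as the transpose of the R_V^B found in the first step, and all variable names are ours.

```cpp
// Simultaneous least squares solution of TCPP and TCPF (Formulas (10)-(11)).
#include <Eigen/Dense>
#include <vector>

void solveTcp(const std::vector<Eigen::Matrix3d>& Re,  // R_Ei^B, i = 0..m
              const std::vector<Eigen::Vector3d>& te,  // t_Ei^B
              const std::vector<Eigen::Vector3d>& pv,  // tip positions in {V}
              const std::vector<Eigen::Vector3d>& lv,  // axis directions in {V}
              const Eigen::Matrix3d& Rvb,              // R_V^B from step one
              Eigen::Vector3d& pNeedle,                // out: TCPP in {E}
              Eigen::Vector3d& lAxis) {                // out: TCPF z-axis in {E}
    const int m = static_cast<int>(Re.size()) - 1;
    Eigen::MatrixXd A(3 * m, 3), B(3 * m, 1), C(3 * m, 3), D(3 * m, 1);
    for (int i = 1; i <= m; ++i) {
        // Formula (10): (R_Ei - R_E0) p = Rvb (pv_i - pv_0) - (t_Ei - t_E0)
        A.block<3, 3>(3 * (i - 1), 0) = Re[i] - Re[0];
        B.block<3, 1>(3 * (i - 1), 0) = Rvb * (pv[i] - pv[0]) - (te[i] - te[0]);
        // Formula (11): Rvb^T R_Ei l = lv_i (pose 0 could be stacked as well)
        C.block<3, 3>(3 * (i - 1), 0) = Rvb.transpose() * Re[i];
        D.block<3, 1>(3 * (i - 1), 0) = lv[i];
    }
    pNeedle = A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(B);
    Eigen::Vector3d l = C.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(D);
    lAxis = l.normalized();
}
```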
In order for the robot controller to control the needle directly, the homogeneous transformation matrix between the end flange and the needle tip must be calculated. Although some studies have pointed out that the asymmetry of a side-bevel needle causes the puncture trajectory to deflect, this is mainly limited to muscle-rich sites such as head and neck surgery [34]. In more applications, including the chest, abdomen and prostate, where there are more cavities, the deflection is not obvious, and the use of diamond-tip needles further reduces it. Therefore, without considering the rotation of the needle tip frame about the needle axis, the rotation matrix simplifies to the following:
$$ \underbrace{\begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}}_{R_{Needle}^E} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \mathbf{l}_{Axis}^E \tag{12} $$
Finally, the homogeneous transformation matrix $\begin{bmatrix} R_{Needle}^E & P_{Needle}^E \\ 0 & 1 \end{bmatrix}$ from the end flange to the tip of the needle is obtained.
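Under the elementary-rotation convention written above (an assumption of this sketch, since the published matrix is ambiguous), R_Needle^E [0 0 1]^T = [sin β cos γ, −sin γ, cos β cos γ]^T, so both angles follow in closed form from the unit axis vector:

```cpp
// Closed-form angles of Formula (12), assuming R = Ry(beta) * Rx(gamma)
// and a unit axis direction (lx, ly, lz) expressed in the flange frame.
#include <cmath>

void anglesFromAxis(double lx, double ly, double lz,
                    double& beta, double& gamma) {
    gamma = -std::asin(ly);      // rotation about the x-axis
    beta  = std::atan2(lx, lz);  // rotation about the y-axis
}
```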
In the second step of the TCP calibration, the differences between the poses need to be as large as possible, because the least squares estimates of Formulas (10) and (11) depend on these differences. To simplify the following analysis, we write (10) and (11) as AX = B. The error of the estimate X̃ is bounded as follows [35]:

$$ \left\lVert \tilde{X} - X \right\rVert_2 \le \frac{\varepsilon\, \kappa_2(A)}{1 - \varepsilon\, \kappa_2(A)} \left( 2 + \left( \kappa_2(A) + 1 \right) \frac{\lVert r \rVert_2}{\lVert A \rVert_2 \lVert X \rVert_2} \right) \lVert X \rVert_2 \tag{13} $$
where ε = max(‖δA‖₂/‖A‖₂, ‖δB‖₂/‖B‖₂), κ₂(A) = σ_max(A)/σ_min(A) is the condition number of A, and r = AX − B. Obviously, the larger κ₂(A) and ε are, the larger the error ‖X̃ − X‖₂ can be, and the differences between the poses determine both κ₂(A) and ε. The binocular vision system we designed limits these differences and may therefore reduce the calibration accuracy; Section 3.2 designs an experiment to analyze the impact of this reduction on calibration accuracy.
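Since κ₂(A) falls out of the same SVD used for the solve, it can also be monitored during calibration to warn the operator that the chosen poses are too similar; a minimal sketch:

```cpp
// Condition number kappa_2(A) used in Formula (13).
#include <Eigen/Dense>

double conditionNumber(const Eigen::MatrixXd& A) {
    Eigen::VectorXd s = A.jacobiSvd().singularValues();  // sorted decreasing
    return s(0) / s(s.size() - 1);                       // sigma_max / sigma_min
}
```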

3. Experimental Design

3.1. Accuracy Evaluation of Vision System

To test the actual accuracy of the vision system, we built a test system, as shown in Figure 9. A laser tracker (Leica AT901), whose positional accuracy is ±15 μm + 6 μm/m, is used as the reference. The spherically mounted reflector of the laser tracker is fixed at the end of the robot, about 2 m from the laser tracker. Through the movement of the reflector, we obtain a precise change in the needle tip position while keeping the posture of the robot end flange fixed. To evaluate the positioning accuracy of the vision system in all directions, we manually controlled the robot to move 15 steps of 2 mm along each of the x-, y- and z-axes of the robot base frame. We did not use the transformation matrix between the vision system frame and the robot base frame to convert the robot's motion into the vision system's axes, because the inaccuracy of that matrix would introduce additional errors; this does not affect the final result, since the error of the vision system does not change with the reference frame. The coordinates of the needle tip in the vision system are calculated by the proposed method, whereas the needle tip coordinates cannot be measured directly in the laser tracker frame or the robot base frame. The puncture needle, robot end flange and spherically mounted reflector are rigidly connected, and therefore their movements satisfy the theoretical relationship of Equation (14).
$$ \left\lVert P_i^V - P_{i+1}^V \right\rVert_{needle} = \left\lVert P_i^{Laser} - P_{i+1}^{Laser} \right\rVert_{reflector} = \left\lVert P_i^B - P_{i+1}^B \right\rVert_{EndFlange} \tag{14} $$

with P_i^V = [x_i^V, y_i^V, z_i^V]^T, P_i^Laser = [x_i^Laser, y_i^Laser, z_i^Laser]^T and P_i^B = [x_i^B, y_i^B, z_i^B]^T. In the above equation, the three norms are the Euclidean distances of the needle tip in the vision frame, of the reflector in the laser tracker frame, and of the end flange in the robot base frame, respectively. The laser tracker is used as the standard for evaluating the accuracy of the vision system, and the error is defined in Formula (15):

$$ P_{error} = \left\lVert P_i^V - P_{i+1}^V \right\rVert_{needle} - \left\lVert P_i^{Laser} - P_{i+1}^{Laser} \right\rVert_{reflector} \tag{15} $$

3.2. TCP Calibration Accuracy under Different Configurations

The analysis in Section 2.4 shows that different configurations change the upper bound of the calibration error. To separate the influences of κ₂(A) and ε on the calibration accuracy, the following experiment was designed.
We consider that κ₂(A) represents the spatial uniformity of the configuration set and ε represents the magnitude of the configuration set. Based on experience, we designed five configurations for TCP calibration, as Figure 10 shows, to separate the influences of κ₂(A) and ε. The angles between the axes of needles 1 and 3 and plane I are both θ, and the angles between the axes of needles 2 and 4 and plane II are both θ. Planes I and II are perpendicular to the horizontal plane (plane III). Although all of the needle axes in Figure 10 intersect at one point to illustrate the experiment, the axes do not actually need to intersect at one point. The experiment was divided into two groups, detailed in Table 1:
  • θ was fixed and ϕ was changed; this group varies κ₂(A).
  • ϕ was fixed and θ was changed; this group varies ε.
A relatively accurate TCP, comprising P_real^E and l_real^E, was obtained through repeated calibrations, under the assumption that the error between a calibrated TCP and the real TCP obeys a zero-mean normal distribution; P_real^E and l_real^E are the averages of a series of 20 calibrations using traditional methods. The errors of TCPP (P_Needle^E) and TCPF (l_Axis^E) are evaluated by Formulas (16) and (17).
$$ Distance = \left\lVert P_{real}^E - P_{Needle}^E \right\rVert \tag{16} $$

$$ Angle = \arccos\left( \mathbf{l}_{real}^E \cdot \mathbf{l}_{Axis}^E \right) \tag{17} $$

3.3. Comparison of the TCP Calibration with Traditional Methods

The time consumption and accuracy of the calibration are the key criteria for evaluating practicability. To evaluate the practicability of the new TCP calibration method, a comparative experiment between the traditional method and the proposed method was conducted. With the proposed method, in each experiment the operator first moves the robot tool into the vision system's field of view and translates it to two random points along each of the x, y and z directions under a fixed pose to calculate R_V^B. Then, the operator arbitrarily gives the robot five different postures, bringing the needle tip into the field of view each time, to calculate l_Axis^E and P_Needle^E by the method described in Section 2; the five postures should follow the suggestions given in the conclusion. The error is calculated by Formulas (16) and (17). We repeated this experiment 15 times, performed the traditional method 15 times as well, and recorded the time consumed in each trial.

4. Results and Discussion

4.1. Accuracy Evaluation of Vision System

The calibrated internal and external parameters are listed in Table 2. The reprojection error is 0.28 pixels. The calibration results meet the experimental requirements.
The accuracy of the vision system is shown in Figure 11, where x, y and z denote the directions of the robot's base frame and the errors are calculated by Formula (15). The results show that all of the errors fluctuate within ±0.05 mm and the maximum error reaches 0.041 mm, which is lower than the absolute positioning error of a typical collaborative robot. The actual accuracy of the vision system is worse than the simulation results of Section 2.2, partly because of inaccuracy in the internal and external parameters, and partly because the needle tip extracted by the image processing algorithm in the two cameras is not the identical physical point. However, an error of ±0.05 mm is still below the spatial resolution of the human eye at the least distance of distinct vision. This means that our vision system, using the proposed method, can accurately measure the spatial movement of the robot TCP, specifically with the puncture needle as the end tool.

4.2. TCP Calibration Accuracy under Different Configurations

The errors of needle tip position and needle axis direction when θ was fixed and ϕ was changed are shown in Figure 12 and Figure 13, calculated by Formulas (16) and (17), respectively. The increase in ϕ causes the condition number to decrease sharply, especially when ϕ is relatively small. The error of needle tip position follows the same trend as the condition number, which is consistent with the theoretical analysis of Section 2. When ϕ is greater than 30°, the error stabilizes at 0.1 mm. The reason is that the difference operation in the coefficient matrix of Formula (10) makes the condition number depend on ϕ, so the system becomes more sensitive to noise and the TCPP error grows very large when ϕ is small.
Figure 13 shows that the condition number of the coefficient matrix of Formula (11) is not sensitive to ϕ: increasing ϕ causes the condition number to decrease only slowly, because the coefficient matrix of Formula (11) does not involve differences of rotation matrices. The needle axis direction error is stable at about 1°, with a maximum of 2°. Unlike Formula (10), the coefficient matrix of Formula (11) involves no difference operation between poses and consists entirely of orthonormal matrices, so its condition number does not change significantly, and the calibration of TCPF is ultimately unaffected by ϕ.
The errors of TCPP and TCPF when ϕ was fixed and θ was changed are shown in Figure 14 and Figure 15, calculated by Formulas (16) and (17), respectively. Under this experimental control the condition number remains unchanged, yet the increase of θ still causes the TCPP error to decrease. It is worth noting that, although increasing θ reduces the error, its effect is smaller than that of ϕ, which is consistent with the theoretical expectations of Section 2.4: increasing θ increases ‖A‖₂ and ‖B‖₂ in Formula (10), which decreases ε and thus improves the calibration accuracy. Figure 15 shows that neither the condition number of the coefficient matrix of Formula (11) nor the needle axis direction error is sensitive to θ, because B in Formula (11) is always a unit vector of fixed magnitude and A is always composed of orthonormal matrices whose norm does not change.

4.3. Comparison of TCP Calibration with Traditional Methods

The time consumption of each experiment is shown in Figure 16. The average calibration time is 482 s for the traditional method and 88 s for the proposed method, so the proposed TCP calibration method reduces the time consumption by 82%. The time consumption of the traditional method varies greatly, from close to 400 s up to more than 600 s, whereas that of the proposed method fluctuates around its average. On the one hand, we found that the most time-consuming part of the traditional method is steering the needle tip to reach the same position every time; because the robot's operability differs between postures, the time consumption varies greatly between experiments. Our proposed method only requires the needle tip to be within the measurement space of the vision system, which is very friendly for physicians and saves time. On the other hand, the second step of the proposed method obtains the TCP position and TCP frame simultaneously. The proposed method thus provides an operation-independent workflow, which is the key to reducing the time consumption.
The errors of TCPP and TCPF in each experiment are shown in Figure 17 and Figure 18, calculated by Formulas (16) and (17), respectively. The average TCPP error over 15 experiments using the proposed method is 0.11 mm, lower than the 0.32 mm of the traditional method; the proposed TCP calibration method thus reduces the error by 65%. It can also be clearly seen that the proposed method shows less data fluctuation, so it offers not only higher accuracy but also higher stability. Although the TCPF error is occasionally larger than that of the traditional method, the average accuracy is still improved by 52%. The occasional advantage of the traditional method arises because the needle diameter is small and the operator habitually keeps the same distance between the needle surface and the reference point every time, so the traditional method can also achieve fairly high accuracy in calibrating the needle axis direction.

5. Conclusions

This paper has presented a novel approach for robot TCP calibration based on a special binocular vision system, specifically with the puncture needle as the end tool. Compared with existing solutions, it is easier and faster for physicians to operate and more stable, and its accurate results help ensure the safety of surgery.
In this paper, we designed a binocular vision system as an external measurement device to tackle the TCP calibration issues caused by the low rigidity and small size of a puncture needle used as the robot's end tool. Through simulation of a simplified model, the vision system we designed was shown to have a higher spatial resolution than a traditional binocular system and to be more suitable for TCP calibration. In addition, we proposed a TCP calibration algorithm that works with the designed vision system and analyzed the key factors affecting its accuracy. Experiments were conducted with a system consisting of a robot arm, a laser tracker and two cameras. The accuracy evaluation experiment confirmed that the vision system and the corresponding method can detect spatial position changes of the TCP (needle tip) with high accuracy. The experiment under different configurations indicates that the uniformity of the configuration set is the main influencing factor; the binocular vision system only reduces the magnitude of the configuration set, and the results indicate that this reduction barely influences the accuracy of the TCP. The accuracy of the TCPF is not influenced by the configuration. The comparison experiment confirmed that the proposed method takes less time and achieves higher accuracy than the traditional method. As a practical recommendation, priority should be given to ϕ when the space is limited, making it as close to 90° as possible; in theory θ should also be as close to 90° as possible without collision, but in limited space θ should at least exceed 30°.
To sum up, this paper proposed a new TCP calibration method that is operation-independent and physician-friendly while ensuring accuracy. It provides a necessary solution for robot-assisted puncture surgery to enter the clinic. Moreover, the method is not only suitable for the operating room environment but can also be extended to the production of electronic products that require precise operations; it can accelerate the arrangement of production lines and improve product quality, promoting the automation of robotic processing and manufacturing. Future work will focus on TCP calibration for more general end tools.

Author Contributions

Conceptualization, L.Z. and C.L.; methodology, L.Z. and X.Z.; software, L.Z.; validation, L.Z. and Y.F.; formal analysis, L.Z.; data curation, L.Z.; writing—original draft preparation, L.Z.; writing—review and editing, C.L. and Y.F.; funding acquisition, J.Z. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program of China (Grant No. 2019YFB1311303), Natural Science Foundation of China (Grant No. U1713202) and Major scientific and technological innovation projects in Shandong Province (Grant No. 2019JZZY010430).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Podder, T.K.; Beaulieu, L.; Caldwell, B.; Cormack, R.A.; Crass, J.B.; Dicker, A.P.; Fenster, A.; Fichtinger, G.; Meltsner, M.A.; Moerland, M.A. AAPM and GEC-ESTRO guidelines for image-guided robotic brachytherapy: Report of Task Group 192. Med. Phys. 2014, 41, 101501.
  2. Liu, G.; Yu, X.; Li, C.; Li, G.; Zhang, X.; Li, L. Space calibration of the cranial and maxillofacial robotic system in surgery. Comput. Assist. Surg. 2016, 21, 54–60.
  3. Zequn, L.; Changle, L.; Xuehe, Z.; Gangfeng, L.; Jie, Z. The Robot System for Brachytherapy. In Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Hong Kong, China, 8–12 July 2019; pp. 25–29.
  4. Cai, Y.; Gu, H.; Li, C.; Liu, H. Easy industrial robot cell coordinates calibration with touch panel. Robot. Comput.-Integr. Manuf. 2018, 50, 276–285.
  5. Nof, S.Y. Handbook of Industrial Robotics; John Wiley & Sons: Hoboken, NJ, USA, 1999; ISBN 0-471-17783-0.
  6. Mizuno, T.; Hara, R.; Nishi, H. Method for Automatically Setting a Tool Tip Point. U.S. Patent 4,979,127, 18 December 1990.
  7. Shuo, X.; Bosheng, Y.; Ming, J. Study of robot tool coordinate frame calibration. Mach. Electron. 2012, 6, 60–63.
  8. Ge, J.; Gu, H.; Qi, L.; Li, Q. An automatic industrial robot cell calibration method. In Proceedings of the ISR/Robotik 2014, 41st International Symposium on Robotics, Munich, Germany, 2–3 June 2014; pp. 1–6.
  9. Zhuang, H.; Motaghedi, S.H.; Roth, Z.S. Robot calibration with planar constraints. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), Detroit, MI, USA, 10–15 May 1999; Volume 1, pp. 805–810.
  10. Gu, H.; Li, Q.; Li, J. Quick robot cell calibration for small part assembly. In Proceedings of the 14th IFToMM World Congress, Taipei, Taiwan, 25–30 October 2015.
  11. Pan, B.; Qian, K.; Xie, H.; Asundi, A. Two-dimensional digital image correlation for in-plane displacement and strain measurement: A review. Meas. Sci. Technol. 2009, 20, 062001.
  12. Pan, B.; Yu, L.; Zhang, Q. Review of single-camera stereo-digital image correlation techniques for full-field 3D shape and deformation measurement. Sci. China Technol. Sci. 2018, 61, 2–20.
  13. Draelos, M.; Tang, G.; Keller, B.; Kuo, A.; Hauser, K.; Izatt, J.A. Optical Coherence Tomography Guided Robotic Needle Insertion for Deep Anterior Lamellar Keratoplasty. IEEE Trans. Biomed. Eng. 2020, 67, 2073–2083.
  14. Tian, Y.; Draelos, M.; Tang, G.; Qian, R.; Kuo, A.; Izatt, J.; Hauser, K. Toward Autonomous Robotic Micro-Suturing using Optical Coherence Tomography Calibration and Path Planning. arXiv 2020, arXiv:2002.00530.
  15. Zhou, M.; Huang, K.; Eslami, A.; Roodaki, H.; Zapp, D.; Maier, M.; Lohmann, C.P.; Knoll, A.; Nasseri, M.A. Precision Needle Tip Localization Using Optical Coherence Tomography Images for Subretinal Injection. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 4033–4040.
  16. Changjie, L.; Rongxing, B.; Yin, G.; Shibin, Y.; Yi, W. Calibration method of TCP based on stereo vision robot. Infrared Laser Eng. 2015, 44, 1912–1917.
  17. Luo, R.C.; Wang, H. Automated Tool Coordinate Calibration System of an Industrial Robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5592–5597.
  18. Wang, Z.; Liu, Z.; Ma, Q.; Cheng, A.; Liu, Y.; Kim, S.; Deguet, A.; Reiter, A.; Kazanzides, P.; Taylor, R.H. Vision-Based Calibration of Dual RCM-Based Robot Arms in Human-Robot Collaborative Minimally Invasive Surgery. IEEE Robot. Autom. Lett. 2018, 3, 672–679.
  19. Zhang, X.; Song, Y.; Yang, Y.; Pan, H. Stereo vision based autonomous robot calibration. Robot. Auton. Syst. 2017, 93, 43–51.
  20. Huang, C.; Guu, Y.; Chen, Y.-L.; Chu, C.; Chen, C. An automatic calibration method of TCP of robot arms. In Proceedings of the 4th International Conference on Production Automation and Mechanical Engineering, Montreal, QC, Canada, 3–4 August 2018; pp. 3–4.
  21. Yang, L.; Wang, B.; Zhang, R.; Zhou, H.; Wang, R. Analysis on Location Accuracy for the Binocular Stereo Vision System. IEEE Photonics J. 2018, 10, 1–16.
  22. Zhu, C.; Yu, S.; Liu, C.; Jiang, P.; Shao, X.; He, X. Error estimation of 3D reconstruction in 3D digital image correlation. Meas. Sci. Technol. 2019, 30, 025204.
  23. Yuntong, D. External Parameters Optimization and High-Precision Pose Recognition in Multi-Camera Measurement. Ph.D. Thesis, Southeast University, Nanjing, China, 2018.
  24. Zhang, G. Vision Measurement; Science Press: Beijing, China, 2008; pp. 145–147.
  25. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003; pp. 152–191.
  26. Doignon, C.; de Mathelin, M. A Degenerate Conic-Based Method for a Direct Fitting and 3-D Pose of Cylinders with a Single Perspective View. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 4220–4225.
  27. Becke, M.; Schlegl, T. Least squares pose estimation of cylinder axes from multiple views using contour line features. In Proceedings of the IECON 2015 41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 001855–001861.
  28. Navab, N.; Appel, M. Canonical Representation and Multi-View Geometry of Cylinders. Int. J. Comput. Vis. 2006, 2, 133–149.
  29. Huang, J.-B.; Chen, Z.; Chia, T.-L. Pose determination of a cylinder using reprojection transformation. Pattern Recognit. Lett. 1996, 17, 1089–1099.
  30. Becke, M. On modeling and least squares fitting of cylinders from single and multiple views using contour line features. In Proceedings of the 9th International Conference on Intelligent Robotics and Applications (ICIRA), Portsmouth, UK, 24–27 August 2015; pp. 372–385.
  31. Ma, Q.; Goh, Z.; Ruan, S.; Chirikjian, G.S. Probabilistic approaches to the AXB = YCZ calibration problem in multi-robot systems. Auton. Robot. 2018, 42, 1497–1520.
  32. Sun, Y.; Pan, B.; Guo, Y.; Fu, Y.; Niu, G. Vision-based hand–eye calibration for robot-assisted minimally invasive surgery. Int. J. CARS 2020.
  33. Shah, M.; Eastman, R.D.; Hong, T. An overview of robot-sensor calibration methods for evaluation of perception systems. In Proceedings of the Workshop on Performance Metrics for Intelligent Systems, College Park, MD, USA, 20–22 March 2012; pp. 15–20.
  34. Li, P.; Yang, Z.; Jiang, S. Needle-tissue interactive mechanism and steering control in image-guided robot-assisted minimally invasive surgery: A review. Med. Biol. Eng. Comput. 2018, 56, 931–949.
  35. Demmel, J.W. Applied Numerical Linear Algebra; SIAM: Philadelphia, PA, USA, 1997; pp. 117–118.
Figure 1. System configuration.
Figure 2. Principles of uncertain regions. (Δx: size of the uncertain area along the X axis; Δz: size of the uncertain area along the Z axis; ω: angle between the optical axis of the camera and the Z axis; F: focal length of the lens; s: size of a single pixel in the simplified model; B: length of the baseline of the binocular system.)
Figure 3. Contour of uncertain region size. (a) Δx of the uncertain region. (b) Δz of the uncertain region. (c) √(Δx² + Δz²) of the uncertain region.
Figure 4. Mean and variance of the uncertain region size in the nearest circular area.
Figure 5. Vision system.
Figure 6. Sample of image processing.
Figure 7. Imaging model.
Figure 8. Description of the coordinate system.
Figure 9. Accuracy testing experiment.
Figure 10. Configurations of TCP calibration.
Figure 11. Measurement error of the vision system. (X, Y and Z are defined in the robot base frame).
Figure 12. Error of TCPP with ϕ.
Figure 13. Error of TCPF with ϕ.
Figure 14. Error of TCPP with θ.
Figure 15. Error of TCPF with θ.
Figure 16. Time consumption.
Figure 17. Comparison of TCPP error.
Figure 18. Comparison of TCPF error.
Table 1. The details of the experiment configurations.

Group 1 (θ fixed at 40°): ϕ = 2°, 4°, 6°, 8°, 10°, 15°, 30°, 45°, 60°, 75°, 90°.
Group 2 (ϕ fixed at 90°): θ = 5°, 10°, 15°, 20°, 25°, 30°, 35°, 40°.
Table 2. The internal and external parameters of the vision system.

Parameter | Right Camera | Left Camera
Focal length (mm) | 12.39 | 12.41
Cell width Sx (μm) | 1.25 | 1.25
Cell height Sy (μm) | 1.25 | 1.25
Center column Cx (pixel) | 2387.07 | 2433.72
Center row Cy (pixel) | 1820.48 | 1861.79
2nd order radial distortion K1 (1/pixel^2) | 8.90 × 10^-10 | 8.80 × 10^-10
4th order radial distortion K2 (1/pixel^4) | 3.42 × 10^-17 | −1.06 × 10^-16
6th order radial distortion K3 (1/pixel^6) | −3.06 × 10^-24 | 1.99 × 10^-23
2nd order tangential distortion P1 (1/pixel^2) | 1.89 × 10^-13 | 1.41 × 10^-13
2nd order tangential distortion P2 (1/pixel^2) | −1.56 × 10^-13 | 1.56 × 10^-14
Image width (pixel) | 4912 | 4912
Image height (pixel) | 3684 | 3684
Relative position (mm) | 162.39, −4.7 × 10^-5, 157.34
Relative pose (°) | 2.93, 270.23, 3.06
Reprojection error (pixel) | 0.28
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
