Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

In this study, we propose a precise 3D lug pose detection sensor for the automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug serves as a handle for carrying the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. To acquire the lug pose, four laser lines are projected onto both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with a sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor.


Introduction
Automation of welding processes has been a challenging field of research in robotics, sensor technology, control systems and artificial intelligence because of its severe environmental conditions such as intense heat and fumes [1]. In the field of robotics, industrial robot welding is by far the most popular application worldwide, since various manufacturing industries require welding operations in their assembly processes [2]. The most significant application of robot welding can be found in the automobile industry. In the case of the representative Korean automobile company, Hyundai Motor Company, most manufacturing processes, except for delicate assembly processes, are automated with automotive assembly lines, and the welding process is almost fully automated. As a result, the productivity and quality of the products have improved remarkably. On the contrary, the shipbuilding process is much less automated than the automobile manufacturing process due to its large-scale unstructured production environment: only about 60% of the welding process in shipbuilding is automated. Thus, research on robotic welding is still required in the field of shipbuilding, taking its complex and unstructured production environment into consideration.
Shipbuilding is achieved by welding numerous steel plates according to a ship blueprint. Since the steel plates are too big and heavy to carry directly, a lug is attached to each plate as a handle, as shown in Figure 1. In this study, for robotic welding of the lug to the steel plate, a 3D lug pose detection sensor is proposed based on a structured-light vision system. In fact, structured-light vision systems have been commonly used for robotic welding because of their high precision and low disturbance [3,4]. In general, a structured-light vision system for robotic welding consists of a camera and one or more laser diodes. In this case, the baseline (or distance) between the camera and a diode and the projection angle of the diode relative to the central axis of the camera determine the intrinsic system characteristics related to the performance. Kim et al. [5,6] proposed a mechanism to change the projection angle of a structured-light vision system according to the working distance. The system, however, needs additional parts such as an actuator and a controller, and also requires additional operating time to adjust the projection angle in accordance with the working distance. In this study, the proposed pose detection sensor consists of a camera and four laser line diodes. In our system, the baseline between the camera and a diode and the projection angle of a diode are the key design parameters that determine the sensor performance. Thus, we first analyzed the sensor performance relative to the design parameters, and then determined the parameter values, taking the lug shape into consideration.
In robotic welding, the acquisition of the initial welding position is one of the most important steps [7,8]. In this study, we also focus on the lug pose acquisition, including its position and orientation, through a coarse-to-fine alignment. First, the rough lug pose is obtained above the lug. According to the rough pose, the robot is controlled to move close to the side of the lug, and then the precise lug pose is obtained. Since the lug pose includes its position and orientation, the initial welding position and the welding line can be obtained from the lug pose. In this case, the structured laser lines are extracted by several image processing algorithms: the vertical threshold algorithm [9], the Zhang-Suen thinning algorithm [10], the Hough transform algorithm [11] and the separated Hough transform algorithm, which is robust to illumination change.
The organization of this paper is as follows. Section 2 describes the automatic robot welding procedure and the design and performance analysis of the sensor. Section 3 proposes the coarse-to-fine alignment to obtain the precise lug pose consisting of position and orientation. In Section 4, experimental results and discussion are provided to verify the feasibility and effectiveness of the proposed sensor. Finally, Section 5 presents concluding remarks.

Figure 1. An automatic lug welding system with an overhead type robot manipulator developed by Daewoo Shipbuilding and Marine Engineering (DSME) Co., Ltd.

Automatic Robot Welding Procedure
In this study, the automatic robot welding with the proposed 3D lug pose detection sensor proceeds in three stages: the top view alignment, the side view alignment and the automatic welding stage. First, the lug pose, consisting of both position and orientation, is precisely obtained through the top and side view alignments. Next, the robot is controlled to move along the predefined welding path for automatic lug welding. In this study, we focus on the precise robot alignment with the proposed sensor, since the success of the alignment determines the success of the automatic robot welding.
Two possible configurations of the robot for the top and side view alignments are shown in Figure 2, where the proposed sensor is attached to the robot end-effector. In the figure, {B}, {C} and {L} represent the robot base frame, the camera frame and the lug frame, respectively. In this case, the problem of aligning the end-effector with the lug is equivalent to finding {L} relative to {C}, and {L} relative to {C} can be simply transformed to {L} relative to {B} using the forward and inverse kinematics of the robot. Thus, the problem can be formulated as a 3D lug pose detection problem. In the top view alignment stage in Figure 2a, the lug frame {L} can be obtained relative to {C} with the premeasured lookup table (LUT) for the lug shape. However, the obtained frame {L} has some position and orientation errors since the camera resolution is relatively low at such a long distance; thus, the top view alignment is called the coarse alignment. According to the obtained rough frame {L}, the robot is controlled to move close to the side of the lug for the side view alignment. In the side view alignment stage in Figure 2b, the fine alignment is carried out to find the precise lug frame {L}. Finally, according to the resultant lug frame {L}, the robot automatically welds the lug. Table 1 describes the whole procedure of the automatic robot welding in detail.
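As a concrete illustration of the frame chaining described above, a lug pose detected in the camera frame {C} can be expressed in the base frame {B} by composing homogeneous transforms. The sketch below is illustrative only; the particular rotations and translations are hypothetical placeholders, not values from this study:

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# {L} relative to {B} follows by chaining the camera pose (available from the
# robot's forward kinematics) with the detected lug pose: T^B_L = T^B_C @ T^C_L
T_BC = homogeneous(np.eye(3), [0.0, 0.0, 50.0])   # hypothetical camera pose in {B}
T_CL = homogeneous(np.eye(3), [1.0, 2.0, 30.0])   # hypothetical detected lug pose in {C}
T_BL = T_BC @ T_CL                                # lug pose in the robot base frame
```

With identity rotations, the composed translation is simply the sum of the two offsets; in general the camera-frame offset is first rotated by the camera's orientation in {B}.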

3D Lug Pose Sensor Design
The front view of the proposed 3D lug pose sensor, which consists of a camera and four laser line diodes, is shown in Figure 3, where D_i for i = 1, 2, 3, 4 indicates each diode, and b is the baseline (or distance) between the camera and each diode. The origin of the camera frame {C} coincides with the center position of the camera, and the z_c axis of {C} is defined perpendicular to both the x_c axis and the y_c axis according to the right-hand rule. In this study, we employ the FCB-EX480CP developed by Sony Co. as the camera. The image size is 720 × 576 pixels and the focal length f is 849 pixels, where the focal length is empirically obtained by the MATLAB toolbox for camera calibration [12]. Also, we employ the LM-6535MS developed by Lanics, Inc. as the laser diode, where the optical power is 20 mW, the wavelength is 658 nm, and the fan angle is 90°. The camera detects the four laser lines projected on the lug placed on the steel plate to obtain the 3D lug pose. The geometry of the camera and the laser diode D_1 in the x_c-z_c plane [11] is shown in Figure 4. The 3D object point P_i(x_i, y_i, z_i) on the projected line of the diode D_1 can be obtained relative to the camera frame {C} from its image point p_i(x'_i, y'_i) as

x_i = x'_i z_i / f,  y_i = y'_i z_i / f,  z_i = b f tan α / (f + x'_i tan α),  (1)

where α is the projection angle, defined as the angle between the central axis of D_1 and the x_c axis, and f is the focal length. In this case, the baseline b and the projection angle α are the design parameters that determine the intrinsic sensor characteristics related to its performance. First, b is determined by the allowable sensor size to attach to the robot end-effector; in this study, b is set to 7 cm. Next, for the given b, α is determined according to the desirable sensor resolution and detectable range. Here, the sensor resolution is defined as the displacement in the 3D real space per one pixel in the image plane, so the pixel displacement δx'_i = x'_{i+1} − x'_i is one pixel.
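The triangulation of Equation 1 amounts to intersecting the camera ray through a pixel with the laser plane. Below is an illustrative Python sketch (the function name and defaults are ours, with b, f and α taken from this study) that assumes the sign conventions of the Figure 4 geometry:

```python
import math

def triangulate(x_img, y_img, f=849.0, b=7.0, alpha=math.radians(70)):
    """Recover the 3D point P(x, y, z) in the camera frame {C} from its
    image-plane projection p(x', y'), following Equation 1.

    f: focal length in pixels; b: baseline in cm; alpha: projection angle.
    Returns (x, y, z) in the same units as b.
    """
    t = math.tan(alpha)
    z = b * f * t / (f + x_img * t)  # depth along z_c from the laser-plane constraint
    x = x_img * z / f                # perspective back-projection of x'
    y = y_img * z / f                # perspective back-projection of y'
    return x, y, z
```

For the image center (x' = 0), the depth reduces to z = b tan α, which is a quick sanity check on the geometry.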
In this case, the displacements δx_i and δz_i for the ith pixel about the x' axis can be obtained from Equation 1 as the per-pixel differences δx_i = x_{i+1} − x_i and δz_i = z_{i+1} − z_i (Equations 2 and 3). Calculating Equations 2 and 3, the displacements δx_i and δz_i for i = −360, −359, …, 359 about the x' axis can be obtained for the three projection angles of 60°, 70° and 80°, as shown in Figure 5a. For the projection angle of 80° and the permissible resolution of 0.1 cm/pixel for the fine alignment, the permissible image ranges are represented as an example. In other words, the robot must move close to the lug to satisfy the permissible range for automatic welding. In the coarse alignment, the permissible range is not satisfied since the robot is relatively far from the lug compared with the fine alignment; in this case, the resolution degrades exponentially, as shown in Figure 5. Thus, the fine alignment is required for automatic robot welding. Similarly, the displacements δy_j and δz_j for j = −288, −287, …, 287 about the y' axis and their permissible image ranges can be obtained as shown in Figure 5b. Figure 5 shows that the permissible ranges decrease as the projection angle increases. Therefore, the projection angle α should be determined taking all four resolutions in Figure 5 into consideration. To determine the projection angle α, the detectable range of the sensor should also be considered. The geometry of the camera along with the two diodes D_1 and D_2 in the x_c-z_c plane is shown in Figure 6, where the two laser lines are symmetrically projected with the same b and α. For a given depth z, the detectable range Δx about the x_c axis is obtained from Equation 1 as a function of the width Δx' between the two projected lines in the image plane (Equation 4). The detectable range Δx increases as the depth z increases, and Δx decreases as the projection angle α increases, as shown in Figure 7.
In this case, the detectable range Δx should be bounded by the camera view limit Δx_cam, obtained as

Δx_cam = 2 x'_max z / f,  (5)

where x'_max is 360 pixels, half the image width. Thus, the projection angle of 60° is not allowable, since the detectable range Δx for α = 60° exceeds the camera detectable range Δx_cam, as shown in Figure 7. Similarly, the detectable range Δy about the y_c axis can be obtained in the y_c-z_c plane, where the baseline is the same b but the projection angle is set differently as β. According to both detectable ranges Δx and Δy, the projection angles α and β should be determined, where the detectable ranges themselves are determined by the lug size. Through the above design process, we can determine proper projection angles α and β, taking the trade-off between the sensor resolution and the detectable range into consideration.
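Under the symmetric-crossing geometry of Figure 6, the detectable range and camera view limit can be evaluated numerically. The closed forms below are our reconstruction from the surrounding description (Δx grows with depth z and shrinks with α), not necessarily the paper's exact Equations 4 and 5, so treat them as a sketch under those assumptions:

```python
import math

def detectable_range(z, b=7.0, alpha=math.radians(70)):
    """Width Delta_x between the two symmetric laser lines at depth z (cm),
    assuming the lines cross on the optical axis at z = b*tan(alpha)."""
    return 2.0 * (z / math.tan(alpha) - b)

def camera_view_limit(z, f=849.0, x_max=360.0):
    """Width Delta_x_cam the camera can image at depth z,
    for half-image-width x_max pixels and focal length f pixels."""
    return 2.0 * z * x_max / f
```

With these formulas the qualitative conclusions of Figure 7 are reproduced: Δx increases with z, decreases with α, and for α = 60° it exceeds the camera view limit at moderate depths, whereas α = 70° stays within it.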

Figure 7. Detectable range Δx about the x_c axis according to the depth z.

Rough Lug Pose Detection
The top view alignment is first carried out to detect the rough lug pose. In this case, the lug pose detection is the same problem as the lug frame acquisition. The local frame {L} of the lug, which is temporarily welded to the steel plate, is defined as shown in Figure 8. The x_l axis and y_l axis of {L} are defined in the longitudinal and lateral directions of the lug, respectively, and the z_l axis is obtained by the cross product of x_l with y_l. In this case, four laser lines are projected on the lug and the steel plate. Through the top view alignment, the rough lug frame is obtained as shown in Figure 9. In Figure 9a, the z_l axis of {L} can be obtained as the unit vector l_z along the surface normal to the steel plate, computed from the intersections P_i for i = 1, 2, 3, 4 of each pair of lines (Equation 6). In this case, each intersection P_i(x_i, y_i, z_i) can be easily obtained by using its mapped point p_i(x'_i, y'_i) on the image plane and Equation 1. To obtain p_i from the camera image, we first separate the projected laser lines from the background using the vertical threshold algorithm [9], which is robust to illumination change. Next, the Zhang-Suen thinning algorithm [10] is applied to the thresholded image. Then, the Hough transform algorithm [11] is applied to the thinned image to obtain each laser line equation in x' and y' as

x' cos θ_i + y' sin θ_i = ρ_i,  (7)

where ρ_i is the distance from the origin of the image plane to the laser line L_i, and θ_i is the angle between the normal to L_i and the x' axis. Thus, the point p_1 can be obtained by solving the linear system formed by the line equations of L_1 and L_3 (Equation 8).
Similarly, the points p_2, p_3 and p_4 can be obtained. Then, the robot is controlled to align the z_c axis of {C} with the obtained z_l axis of {L} in parallel.
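Solving the linear system of two Hough lines in the normal form of Equation 7 for their intersection reduces to a 2×2 solve; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection p(x', y') of two lines given in normal form
    x' cos(theta) + y' sin(theta) = rho, as in Equations 7 and 8."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    rho = np.array([rho1, rho2])
    return np.linalg.solve(A, rho)  # raises LinAlgError if the lines are parallel
```

For example, the vertical line x' = 5 (ρ = 5, θ = 0) and the horizontal line y' = 3 (ρ = 3, θ = π/2) intersect at (5, 3).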
After the z_c axis is aligned with the z_l axis, the camera image is obtained as shown in Figure 9b. Here, the robot is controlled to align the x_c axis with the x_l axis. In this case, the x_l axis is parallel to the vector from P_6 to P_5, where P_5 and P_6 are the points on the laser lines L_1 and L_2 projected on the central beam of the lug, respectively. Thus, the difference angle Δθ between the x_c axis and the x_l axis can be obtained from the unit vector l_x along the x_l axis. By rotating the robot by Δθ, the x_c axis is aligned with the x_l axis; as a result, the y_c axis is also aligned with the y_l axis. In Figure 9b, the points P_5 and P_6 are obtained as follows. First, the line segments on the central beam are separated from the segments on the background by the separated Hough transform algorithm proposed in this study. This algorithm divides the image into several sections at intervals of S_h, and then applies the Hough transform to each section S_i for i = 1, 2, …, N as shown in Figure 10. As a result of the separated Hough transform, the line parameters ρ_i and θ_i for the line segment in S_i are obtained from the maximum voting parameters. Next, two line segments in the consecutive sections S_i and S_{i+1} are merged into one segment if |ρ_{i+1} − ρ_i| < ε_1 and |θ_{i+1} − θ_i| < ε_2 are satisfied, where ε_1 and ε_2 are the acceptable boundaries for the same line. The line segment merging is repeated until no line segment satisfies the same-line conditions. As a result, the line segment projected on the central beam can be obtained, since its line parameters are clearly distinguished from those of the background line segments. Finally, from the two central beam line segments, the two points P_5 and P_6 can be obtained by Equation 1. After each axis of {C} is aligned with {L}, the camera image is obtained as shown in Figure 9c.
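The merging step of the separated Hough transform can be sketched as follows. This simplified version assumes one dominant line segment per section, ordered by section index, and uses the ε_1, ε_2 thresholds of Table 2 as defaults (all names are illustrative):

```python
def merge_segments(segments, eps_rho=5.0, eps_theta=5.0):
    """Group per-section line segments that satisfy the same-line conditions
    |rho_{i+1} - rho_i| < eps_rho and |theta_{i+1} - theta_i| < eps_theta.

    segments: list of (rho, theta) pairs, one per section S_i, in section order.
    Returns a list of groups of section indices, one group per physical line.
    """
    groups = [[0]]
    for i in range(1, len(segments)):
        r_prev, t_prev = segments[i - 1]
        r, t = segments[i]
        if abs(r - r_prev) < eps_rho and abs(t - t_prev) < eps_theta:
            groups[-1].append(i)   # same line: extend the current group
        else:
            groups.append([i])     # parameters jump: start a new line
    return groups
```

Sections whose parameters drift only slightly (a continuous background line) merge into one group, while the central-beam segments, whose ρ and θ differ sharply from the background, form a separate group.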
By the separated Hough transform, the points P_7 and P_8 on the central beam can be obtained similarly. In this case, the initial point P'_Init of the lug is obtained by using the lookup table (LUT) for the central beam shape of the lug, where the LUT is manually formed by measuring the height of the lug along the z_l axis at regular intervals along the x_l axis. Since the z_c axis is aligned with the z_l axis, the height of the lug at P_7 can be calculated by the camera as the difference between the depth of the steel plate and that of P_7 along the z_c axis. Then, the x_l position for the lug height at P_7 can be obtained from the LUT. The absolute value of this x_l position equals the distance d between P'_Init and P_7 along the x_l axis (or x_c axis). Using the position of P_7 relative to {C} and the distance d between P_7 and P'_Init, the point P'_Init can be obtained relative to {C}. However, the obtained lug frame {L} is not precise enough to carry out automatic robot welding, as mentioned in Section 2.2. Therefore, an additional fine alignment is needed.
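The LUT lookup of the x_l position from the measured lug height can be sketched with simple linear interpolation. The sample values below are hypothetical placeholders, not the paper's measured LUT (Table 3), and the sketch assumes the central-beam height profile is monotonic over the queried range:

```python
import numpy as np

# Hypothetical LUT: x_l positions (cm) vs. measured lug heights z_l (cm)
x_l_samples = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
z_l_samples = np.array([0.0, 3.0, 5.5, 7.0, 7.8])  # placeholder central-beam profile

def distance_from_height(height):
    """Distance d along the x_l axis from P'_Init, looked up (with linear
    interpolation) from the premeasured height LUT."""
    return float(np.interp(height, z_l_samples, x_l_samples))
```

Given the camera-measured lug height at P_7, this returns the offset d used to place P'_Init relative to {C}.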

Precise Welding Line Detection
For successful automatic robot welding, precise welding line detection is critical. Thus, the robot is controlled to move close to the side of the lug, and then the side view alignment is carried out with the two laser lines L_1 and L_2, as shown in Figure 11. In this case, L_11 and L_12 represent the segments of L_1 projected onto the side of the lug and onto the steel plate, respectively. In the same way, L_21 and L_22 represent the segments of L_2 projected onto the lug side and the plate, respectively. Here, the line equations of L_11, L_12, L_21 and L_22 can be obtained by the threshold, thinning and Hough transform algorithms, as in Equation 7 in Section 3.1. Then, the intersection p_9(x'_9, y'_9) between L_11 and L_12 and the intersection p_10(x'_10, y'_10) between L_21 and L_22 are obtained as in Equation 8. From the points p_9 and p_10 on the image plane, the real points P_9 and P_10 can be obtained by Equation 1. In this case, the parametric equation of the welding line is obtained from the two points P_9 and P_10 as

OP_w = OP_9 + t · P_9P_10,

where OP_w, OP_9 and OP_10 are the position vectors from the origin of {C} to points on the welding line, P_9P_10 = OP_10 − OP_9, and t is a parameter in (−∞, ∞). For automatic robot welding, the robot is controlled to follow the welding line from the initial point. However, the initial point P'_Init obtained by the top view alignment may not lie on the welding line because of its position error. In this case, the robot cannot continuously follow the welding line from P'_Init because of the discontinuity between the welding line and P'_Init, and the welding is unlikely to succeed. Thus, to remove the discontinuity, we newly define the initial point P_Init to lie on the welding line at a distance d from P_9, as shown in Figure 11.
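The redefined initial point can be computed directly from P_9, P_10 and d; a minimal sketch assuming P_Init lies on the welding line from P_9 toward P_10 (function name illustrative):

```python
import numpy as np

def initial_welding_point(P9, P10, d):
    """New initial point P_Init on the parametric welding line
    OP_w = OP_9 + t * (OP_10 - OP_9), at distance d from P_9."""
    P9, P10 = np.asarray(P9, float), np.asarray(P10, float)
    direction = (P10 - P9) / np.linalg.norm(P10 - P9)  # unit vector along the weld line
    return P9 + d * direction
```

Because P_Init is constructed on the welding line itself, the discontinuity between the coarse initial point and the line is removed by definition.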

Experimental Results and Discussion
Experiments for the top view alignment and the side view alignment were sequentially carried out with a prototype of the 3D lug pose detection sensor as shown in Figure 12. The design parameters of the sensor were determined, taking the detectable range and the sensor resolution into consideration, as shown in Table 2.

Table 2. Design parameters of the proposed sensor.

b = 7 cm: baseline between the camera and each laser line diode
α = 70°: projection angle of the diodes projecting L_1 and L_2
β = 96.5°: projection angle of the diodes projecting L_3 and L_4
ε_1 = 5 pixels: acceptable boundary of the line parameter ρ for the same line
ε_2 = 5°: acceptable boundary of the line parameter θ for the same line

First, we carried out the top view alignment at a distance of about 71.0 cm from the lug, as shown in Figure 13. By successively applying the vertical threshold, thinning, Hough transform and separated Hough transform algorithms to the original image in Figure 13a, the feature points P_1, P_2, P_3 and P_4 on the steel plate were obtained as shown in Figure 13b. From the obtained feature points, the surface normal n to the steel plate could be calculated, with l_z as its unit vector. As a result, the angular error between the normal vector n and the actually measured vector was just 0.03°, so the robot could be controlled to align the z_c axis with the z_l axis. Then, the feature points P_5 and P_6 on the central beam of the lug were obtained to align the x_c axis with the x_l axis. Using these feature points, the angle between the x_c axis and the x_l axis was obtained as Δθ = 0.05°, and the robot could again be controlled to align the x_c axis with the x_l axis. Finally, the rough initial point P'_Init could be obtained by the lookup table (LUT), which maps the x_l position of the lug to the z_l position relative to {L}, as shown in Table 3; for each x_l position of the lug, the z_l position was manually measured in advance. In accordance with the rough lug frame, the robot was controlled to move close to the side of the lug. Next, we carried out the side view alignment at a distance of about 25.5 cm from the lug, as shown in Figure 14. Similarly to the top view alignment, from the original image in Figure 14a, the feature points P_9 and P_10 were obtained as shown in Figure 14b.
Finally, the precise initial position P_Init on the welding line could be obtained by moving the distance d, given by the LUT, from P_9 along the welding line. Over 31 experimental runs, the errors of the initial welding position P_Init were obtained as shown in Figure 15. The proposed sensor had an average error of 0.29 cm, a standard deviation of 0.0844 cm, a maximum error of 0.48 cm and a minimum error of 0.19 cm; 68% of the 31 position errors were less than 0.3 cm, and 87% were less than 0.4 cm. This position error is small enough for practical lug welding in the field of shipbuilding.

Conclusions
A precise 3D lug pose detection sensor consisting of a camera and four laser line diodes was proposed for the automatic robot welding of a lug to a huge steel plate. The lug pose, consisting of position and orientation, was obtained by a coarse-to-fine alignment. In this process, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms, which are robust to illumination change, were used to robustly extract feature points from the camera image. As a result of the coarse-to-fine alignment with the proposed sensor, the lug pose could be obtained precisely enough to automatically weld the lug to the steel plate. In the experiments, the initial position on the welding line was obtained with an average error of 0.29 cm, a standard deviation of 0.0844 cm, a maximum error of 0.48 cm and a minimum error of 0.19 cm. These results are acceptable for practical lug welding in the field of shipbuilding. Consequently, the proposed sensor is expected to contribute to productivity and quality improvements in shipbuilding automation.