Article

A Coupled Calibration Method for Dual Cameras-Projector System with Sub-Pixel Accuracy Feature Extraction

School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(6), 1987; https://doi.org/10.3390/s24061987
Submission received: 18 February 2024 / Revised: 17 March 2024 / Accepted: 18 March 2024 / Published: 20 March 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

Binocular structured light systems are widely used in 3D measurement. In complex scenes with locally highly reflective regions, to obtain more 3D information, the binocular system is usually divided into two pairs of devices, each consisting of a Single Camera and a Projector (SCP). In this case, the binocular system can be regarded as a Dual Cameras-Projector (DCP) system. In DCP calibration, the Left-SCP and Right-SCP need to be calibrated separately, which leads to inconsistent parameters for the same projector and thus reduces the measurement accuracy. To solve this problem and improve manoeuvrability, a coupled calibration method using an orthogonal phase target is proposed. The 3D coordinates on the phase target are uniquely determined by the binocular camera in the DCP system, rather than being calculated separately in each SCP. This ensures the consistency of the projector parameters. The coordinates on the projector image plane are calculated through the unwrapped phase, while the parameters are calibrated by the plane calibration method. In order to extract feature points with sub-pixel accuracy, a method based on polynomial fitting using an orthogonal phase target is exploited. The experimental results show that the reprojection error of our method is less than 0.033 pixels, which improves the calibration accuracy.

1. Introduction

Three-dimensional shape measurement technology is of great significance in many applications, such as intelligent manufacturing, industrial inspection, virtual reality, machine vision, reverse engineering and biomedicine [1,2,3]. Among three-dimensional shape measurement methods, Fringe Projection Profilometry (FPP) has been widely studied due to its advantages of being non-contact and high-speed and having high accuracy, high spatial resolution and a large field of view [4,5,6]. In FPP systems, the Digital Light Processing (DLP) projector is commonly used for its low cost and flexible programmability [7]. Two typical configurations of the FPP system are the Single Camera-Projector (SCP) system [8,9,10,11] and the Dual Cameras-Projector (DCP) system [12,13].
In the SCP system, the projector is treated as equivalent to a camera, so this kind of FPP system can be regarded as a binocular vision system in principle and described by the binocular vision model. In the DCP system, the projector projects groups of phase-shifting fringe patterns onto the objects, and the modulated fringe patterns are captured by the binocular camera. Typically, the projector is used to provide the binocular camera with easily matched features [14]. In this case, it is not necessary to calibrate the parameters of the projector.
However, when the measured surface is complex between different views or locally highly reflective, the binocular camera is often unable to capture some feature points at the same time, which results in matching failure. Therefore, in order to further improve the measurement accuracy in difficult scenes, many researchers have regarded the projector as a camera [14,15,16,17], so that as long as one of the two cameras can capture a feature point, the measurement can still be performed. Tao et al. [18] used the constraints of a multi-view system, projecting fringes embedded with triangular waves onto objects, to retrieve the absolute phase and achieve high-speed dynamic 3D measurement of isolated objects. Liu et al. [15] proposed a stereo matching method without phase unwrapping. This method uses the three-view geometric constraints of the cameras and the projector, which can effectively reduce the number of fringes in the binocular structured light system. Gai et al. [16] used the digital projector to provide additional information for multi-view mapping, which can effectively avoid the problems of a small field of view and self-occlusion in 3D measurement. Hu et al. [17] proposed an accurate dynamic 3D shape measurement method based on DIC-assisted phase shifting and a stereo structured-light system model, which requires projecting three-step phase-shifting patterns and a speckle pattern. All of these methods require accurate calibration of the projector.
Existing projector calibration methods can be divided into two categories. In the first category, the camera captures both the calibration chessboard and the projected chessboard in the same scene [19,20]. These methods use the calibrated parameters of the camera to calculate the parameters of the projector, so the calibration error of the camera is accumulated and amplified during projector calibration.
To avoid projector calibration being affected by the calibration errors of the camera, some researchers proposed the “inverse camera” method. Fringe patterns are generated and projected onto the calibration target, and the pixel coordinates of the feature points on the image plane of the projector are determined. In the “inverse camera” method, the camera is only used to enable the projector to “capture” the calibration target. The precision of these methods depends on the accuracy of extracting and mapping the corresponding pixel coordinates of feature points on the image planes of the camera and projector. In [9], phase-shifting patterns are directly projected onto a printed chessboard, and feature points are extracted by the camera and mapped to their pixel coordinates on the image plane of the projector. Zhang et al. [21] implemented a sub-pixel mapping between the corresponding feature points on the image planes of the camera and the projector, based on the projection invariance of the cross ratio. With the popularization of machine learning and deep learning, some learning-based calibration methods have also emerged. Liu et al. [22] proposed a Bayesian network based on the Markov random field hypothesis, which transforms the intersection point matching problem between the camera and the projector into a maximum a posteriori estimation problem. Yuan et al. [23] designed an unsupervised image deblurring network to recover a sharp target image from a deteriorated one, which can learn more accurate features from a conveniently acquired multi-quality target dataset.
Nevertheless, these methods are only applicable to projector calibration in SCP systems. In the existing multi-view reconstruction methods [15,16,17,18], the Left-SCP and Right-SCP are generally calibrated separately in the system calibration stage. This causes inconsistencies in the projector parameters, mainly reflected in the focal length and principal point errors, which lead to reconstruction errors and rigid transformations and therefore reduce the measurement accuracy. It can be derived quantitatively that a principal point error not only causes reconstruction errors but also introduces a rigid body transformation of the 3D data, while a focal length error causes reconstruction errors and additionally a rotation of the 3D data around one axis.
Another key point in projector calibration is the extraction accuracy of the feature points. Classical feature points include chessboard corners and circular target centers [8,21,24,25,26]. Xing et al. [24] proposed a calibration method for measurement systems with lens distortion: by fitting the phase values projected onto the chessboard with a rational function, the phase value at each corner is accurately extracted, and then the corresponding pixel coordinates of the corner on the image plane of the projector are determined. Since chessboard corners are sensitive to lighting, this method has limited accuracy and reliability. Huang et al. [25] proposed a sub-pixel extraction method based on a circle pattern. A group of pixels on the edge of the circle is extracted and mapped to the image plane of the projector, and the center of the circle is then fitted by the least-squares method. Due to the perspective projection of the camera, the center of the fitted circle is usually not the center of the true circular target [27]. Chen et al. [26] proposed an improved camera and projector calibration method, an improved sub-pixel edge detection algorithm and a circular projection error compensation algorithm.
Most of the existing methods require high-precision targets with good diffuse reflection. Such targets can provide reliable world coordinates to ensure the accuracy of the calibration results. In addition, these methods only apply to SCP systems and therefore suffer from the problem of inconsistent projector parameters mentioned above.
Ideally, if we calibrate each SCP of the DCP system independently, the calibrated projector parameters should be consistent. However, due to extraction errors and phase errors, the projector pixel coordinates corresponding to the same feature point differ when calculated through different camera-projector pairs. Figure 1a shows the projector pixel coordinates of a certain pose calculated by the two SCPs; the standard deviations of the errors in the two directions are 0.1548 pixels and 0.1045 pixels, respectively. Figure 1b shows the reprojection errors for each target pose, where each pose is represented by a cross symbol of a unique color. It can be observed that, due to the errors mentioned above, the calibrated projector parameters of the two SCPs are different, which inevitably causes errors in the 3D measurement results.
This paper proposes a coupled projector calibration method for the DCP system. Through the binocular camera in the DCP system and the orthogonal fringe map, our method obtains the relationship between the 3D coordinates in the world coordinate system and the 2D coordinates on the image plane of the projector, which solves the inconsistency of the projector parameters. Moreover, our method obtains high-precision projector parameters without high-precision chessboard or circular targets.
The rest of the paper is organized as follows. Section 2 explains the related work about the proposed calibration method. Section 3 introduces the pipeline and details of our methods. Section 4 gives the experimental results to demonstrate the effectiveness of the method. Section 5 discusses the innovation of this study. Section 6 summarizes this paper.

2. Related Works

2.1. Camera Model and Projector Model

The camera model is a simplification of the optical imaging geometry, and the pinhole model is widely used because of its simplicity and accuracy [28]. Let the 3D point in the world coordinate system be W(x_w, y_w, z_w), with homogeneous coordinates W̃(x_w, y_w, z_w, 1). In the following, the subscript c denotes the camera model. Let the corresponding point in the camera image coordinate system be w_c(u_c, v_c), with homogeneous coordinates w̃_c(u_c, v_c, 1). The relationship between the world coordinates and the camera image coordinates can be described as
$$ s_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} f_{cu} & \gamma_c & u_{c0} \\ 0 & f_{cv} & v_{c0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_c & T_c \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = A_c \begin{bmatrix} R_c & T_c \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{1} $$
where f_cu = f_c/d_cu and f_cv = f_c/d_cv; f_c is the focal length of the camera lens, while d_cu and d_cv are the pixel sizes along the u and v axes, respectively. (u_c0, v_c0) is the coordinate of the principal point, γ_c is the skew factor, s_c is an arbitrary scale factor and A_c is the intrinsic matrix. R_c and T_c denote the 3 × 3 rotation matrix and the 3 × 1 translation vector from the world coordinate system to the camera coordinate system, respectively. The matrix composed of R_c and T_c is the extrinsic matrix.
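To make the mapping of Equation (1) concrete, the following minimal sketch (Python/NumPy; the function and variable names are ours, not from the paper) projects a world point with a given intrinsic matrix and extrinsic parameters:

```python
import numpy as np

def pinhole_project(A, R, T, Xw):
    """Project a 3D world point Xw (shape (3,)) to pixel coordinates
    via Eq. (1): s [u, v, 1]^T = A [R | T] [x_w, y_w, z_w, 1]^T."""
    x = A @ (R @ Xw + T)    # homogeneous image coordinates, scaled by s
    return x[:2] / x[2]     # divide out the scale factor s
```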
Due to the distortion of the camera lens, the actual camera model is often not an ideal pinhole model. Through the distortion model, we can obtain the correct correspondence between 3D space points and 2D pixel points. The most commonly used distortion model is the Brown–Conrady model [29], which mainly contains two kinds of distortions: radial distortion and tangential distortion. Let the ideal image point be ( x c , y c ) , and the corresponding distorted point be ( x d c , y d c ) ; the relationship between them can be described as
$$ \begin{cases} x_{dc} = x_c \left( 1 + k_{c1} r_c^2 + k_{c2} r_c^4 \right) + 2 p_{c1} x_c y_c + p_{c2} \left( r_c^2 + 2 x_c^2 \right) \\ y_{dc} = y_c \left( 1 + k_{c1} r_c^2 + k_{c2} r_c^4 \right) + p_{c1} \left( r_c^2 + 2 y_c^2 \right) + 2 p_{c2} x_c y_c \end{cases} \tag{2} $$
where (k_c1, k_c2) are the radial distortion coefficients, (p_c1, p_c2) are the tangential distortion coefficients and r_c^2 = x_c^2 + y_c^2. Higher-order coefficient terms are discarded because their distortion contributions are insignificant.
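A corresponding sketch of the distortion model in Equation (2), again with our own (hypothetical) function name, applies the radial and tangential terms to ideal normalized coordinates:

```python
import numpy as np

def apply_distortion(x, y, k1, k2, p1, p2):
    """Map ideal normalized image points (x, y) to their distorted
    positions via the radial/tangential model of Eq. (2)."""
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return x_d, y_d
```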
The intrinsic matrix A_c and the distortion coefficients (k_c1, k_c2, p_c1, p_c2) are constant parameters, while the extrinsic matrix [R_c T_c] varies with the pose of the calibration target. Through single-camera calibration, the intrinsic parameters and the distortion coefficients of the camera can be determined.
The projector can be regarded as an inverse camera, so it can also be modeled by the pinhole model with radial and tangential lens distortion. The formula for describing the projector model is the same as Equations (1) and (2), except that the subscript needs to be replaced from c to p.

2.2. Phase Target

Phase targets are widely used in camera calibration because of their robustness against defocusing and their flexible feature points [30,31,32,33]. Differing from the traditional inverse camera method based on a diffuse planar target, a method based on a phase target only depends on horizontal and vertical fringe patterns to obtain the feature points. These methods avoid extracting complex feature points (such as the corner points of a chessboard, the center of a circle or a cross line). In addition, theoretically, all points distributed on the phase target can be used as effective 2D calibration points, so the number of 2D calibration points is greatly increased and the calibration accuracy is improved. Moreover, for a given calibration accuracy, the number of required 2D calibration planes can be reduced and the calibration process can be simplified. The following introduces the phase-shifting method used to obtain the horizontal and vertical phases. A set of horizontal and vertical fringe patterns is generated by computer and projected by the projector. The vertical fringe patterns captured by the camera can be expressed as
$$ I_{Vn}(u_c, v_c) = A_V(u_c, v_c) + B_V(u_c, v_c) \cos\!\left( \varphi_V(u_c, v_c) + \frac{2\pi n}{N} \right), \quad n = 0, 1, \ldots, N-1 \tag{3} $$
The horizontal fringe patterns captured by the camera can be expressed as
$$ I_{Hn}(u_c, v_c) = A_H(u_c, v_c) + B_H(u_c, v_c) \cos\!\left( \varphi_H(u_c, v_c) + \frac{2\pi n}{N} \right), \quad n = 0, 1, \ldots, N-1 \tag{4} $$
where (u_c, v_c) is the pixel coordinate on the camera image plane, A_V(u_c, v_c) and A_H(u_c, v_c) are the vertical and horizontal background intensities, B_V(u_c, v_c) and B_H(u_c, v_c) are the vertical and horizontal modulation intensities and φ_V(u_c, v_c) and φ_H(u_c, v_c) are the vertical and horizontal phase values modulated by the height of the object. The subscript n is the index of the image within the group of fringe patterns, while N is the total number of phase-shifting steps. The phase value can be calculated by the following formula
$$ \varphi_j(u_c, v_c) = \arctan \frac{ \sum_{n=0}^{N-1} I_{jn}(u_c, v_c) \sin\!\left( \frac{2\pi n}{N} \right) }{ \sum_{n=0}^{N-1} I_{jn}(u_c, v_c) \cos\!\left( \frac{2\pi n}{N} \right) }, \quad j = V, H \tag{5} $$
The value of φ_j(u_c, v_c) is wrapped into the range (−π, π] by Equation (5). To obtain the continuous phase values φ_V(u_c, v_c) and φ_H(u_c, v_c), a phase unwrapping algorithm is needed to eliminate the 2π phase discontinuities. In this paper, the multi-frequency temporal phase unwrapping algorithm [34] is selected to obtain the corresponding unwrapped phases.
Much research shows that as N increases, the phase-shifting method has better anti-noise performance, and the precision of the obtained fringes and the quality of the phases also improve [35]. Therefore, in this paper, we chose an eight-step phase-shifting method instead of the commonly used four-step method.
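As a compact illustration of Equations (3)–(5), the sketch below (Python/NumPy; the names are ours, not the paper's code) computes the wrapped phase from a stack of N phase-shifted fringe images; the multi-frequency temporal unwrapping of [34] would then remove the 2π discontinuities:

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N phase-shifted fringe images, Eq. (5).
    `images` has shape (N, H, W); works for N = 4, 8, ..."""
    N = images.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(num, den)   # wrapped to (-pi, pi]
```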

3. Calibration Method

3.1. Overview

The DCP system is typically set up as shown in Figure 2. o_w x_w y_w z_w, o_l x_l y_l z_l, o_r x_r y_r z_r and o_p x_p y_p z_p denote the world, left camera, right camera and projector coordinate systems, respectively. The relationship between the camera, projector and world coordinates can be described as
$$ \begin{cases} W = R_l W_l + T_l \\ W = R_r W_r + T_r \\ W = R_p W_p + T_p \end{cases} \tag{6} $$
where W, W_l, W_r and W_p are the same point defined in o_w x_w y_w z_w, o_l x_l y_l z_l, o_r x_r y_r z_r and o_p x_p y_p z_p, respectively. R_l, R_r and R_p denote the rotation matrices between the world coordinate system and the two camera and projector coordinate systems; T_l, T_r and T_p denote the corresponding translation vectors. Uniting any two formulas in Equation (6) allows the three-dimensional coordinates of the target point in the world coordinate system to be solved. In Equation (6), R_l, T_l, W_l, R_r, T_r and W_r can be determined by Zhang's method [28], and R_p, T_p and W_p can be determined by the projector calibration proposed in this paper. In addition, it is necessary to find the pose relationship between the left and right cameras and the projector for the system calibration.
Set the projector coordinate system as the reference, then eliminate the world coordinate W in Equation (6) to obtain
$$ \begin{cases} W_p = R_p^{-1} R_l W_l + R_p^{-1} (T_l - T_p) = R_{lp} W_l + T_{lp} \\ W_p = R_p^{-1} R_r W_r + R_p^{-1} (T_r - T_p) = R_{rp} W_r + T_{rp} \end{cases} \tag{7} $$
where R_p^{-1} R_l and R_p^{-1}(T_l − T_p) are the rotation matrix and translation vector between the left camera coordinate system and the projector coordinate system, denoted as R_lp and T_lp. Likewise, R_p^{-1} R_r and R_p^{-1}(T_r − T_p) are the rotation matrix and translation vector between the right camera coordinate system and the projector coordinate system, denoted as R_rp and T_rp.
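For instance, once R_l, T_l, R_r, T_r (from Zhang's method) and R_p, T_p (from the projector calibration below) are known, the camera-projector extrinsics of Equation (7) follow by direct composition. A minimal sketch (our own helper, not from the paper):

```python
import numpy as np

def camera_to_projector_extrinsics(R_c, T_c, R_p, T_p):
    """Compose the extrinsics of Eq. (7): maps a point from a camera
    frame (left or right) into the projector frame."""
    R_cp = np.linalg.inv(R_p) @ R_c
    T_cp = np.linalg.inv(R_p) @ (T_c - T_p)
    return R_cp, T_cp
```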
In order to solve R_p, T_p and W_p, we propose a method to calibrate the projector in the DCP system. The complete calibration procedure with the proposed method can be summarized in the following steps:
  • Step 1: Calibrate the intrinsic and extrinsic parameters of the two cameras;
  • Step 2: Project two sets of fringe patterns, one horizontal and the other vertical, onto a white plane. Capture the images of these fringe patterns with the two cameras calibrated in step 1;
  • Step 3: Randomly change the pose of the white plane, then repeat steps 1 and 2 to obtain 17 groups of images. Each group contains 96 images, 48 from each of the left and right cameras. For each camera, 8 vertical and 8 horizontal fringe patterns at each of three frequencies are needed;
  • Step 4: As shown in Figure 3a, for each group of images, calculate the absolute phase maps from the vertical and horizontal fringe patterns obtained by the binocular camera in the DCP system;
  • Step 5: Create the orthogonal fringe map for feature extraction as in Figure 3b and extract the feature points on both the left and right images as in Figure 3c with the method given in Section 3.2. Compute the projector pixel coordinates and the world coordinates of each feature point with the methods given in Section 3.2 and Section 3.3;
  • Step 6: Estimate the intrinsic parameters and the distortion coefficients of the projector by optimizing the reprojection error with the Levenberg–Marquardt method as shown in Figure 3d.
The following subsections introduce the details of our method.

3.2. Feature Points Extraction and Mapping

Theoretically, any phase value can be used as a feature point, which is the advantage of using phase targets: a large number of accurate feature points is available. However, phase values at complete periods are more convenient for calculating the coordinates of feature points on the projector image plane. Other work using phase targets, such as [31], also uses complete-period phases as feature points. In order to facilitate extraction and analysis, we select the phase values at complete periods as the feature points. After obtaining the unwrapped phases, the vertical and horizontal sine fringe patterns can be regenerated from the known phases and superimposed into the orthogonal fringe map, as shown in Figure 4a. In this way, it is easier to capture the feature points. We select the intersections of the orthogonal bright fringes as feature points, i.e., points with phase values φ_V^tar = 2πn_i and φ_H^tar = 2πn_j, where n_i and n_j are integers. Since the accuracy of projector calibration depends on the accuracy of feature point extraction, in order to obtain higher accuracy, we propose a method based on polynomial fitting to extract the feature points with sub-pixel coordinates.
Since there is some background information, the image needs to be pre-processed as follows, taking the left view as an example (Figure 4b). Select four boundary points and calculate the ROI (Region of Interest) mask based on them. The boundary points are determined by hand-selecting intersections of bright fringes and refining them using the same polynomial fitting method described below. This ensures that all feature points in the ROI have complete phase information. Finally, delineate the ROI. In the following process, only the complete feature points inside the ROI are considered.
In the process of detecting feature points, we first obtain the pixel-level coordinates of the orthogonal intersections according to the phase values of the pixels, as in Figure 4c. Then we use a fitting method to further obtain sub-pixel coordinates. Because the white board in our method cannot be treated as an ideal plane (it may be tilted or have subtle unevenness), the phase growth changes from linear to nonlinear. A polynomial fit describes the geometric characteristics of an ordinary white board better and thus yields more accurate phase information. Therefore, we choose the polynomial fitting method rather than the usual plane fitting method, as shown in Figure 4d. We set a sliding window with a size of 20 × 20 pixels as a sub-region for each integer pixel. Based on the least squares method, we use the integer pixel coordinates and their phases in the sub-region to fit polynomial equations that describe the distribution of the horizontal and vertical unwrapped phases. The equations are as follows
$$ \begin{cases} p_0 x^2 y^2 + p_1 x y^2 + p_2 y^2 + p_3 x^2 y + p_4 x y + p_5 y + p_6 x^2 + p_7 x + p_8 = u_c^p \\ q_0 x^2 y^2 + q_1 x y^2 + q_2 y^2 + q_3 x^2 y + q_4 x y + q_5 y + q_6 x^2 + q_7 x + q_8 = v_c^p \end{cases}, \quad x = \varphi_V^c, \; y = \varphi_H^c \tag{8} $$
where u_c^p and v_c^p are the integer pixel coordinates in the sliding window, φ_V^c and φ_H^c are the corresponding unwrapped phases and p_n (n = 0, 1, …, 8) and q_n (n = 0, 1, …, 8) are the polynomial coefficients. The coefficients are then arranged in matrix form for easier calculation, as P and Q in the following formula
$$ P = \begin{bmatrix} p_0 & p_3 & p_6 \\ p_1 & p_4 & p_7 \\ p_2 & p_5 & p_8 \end{bmatrix}, \quad Q = \begin{bmatrix} q_0 & q_3 & q_6 \\ q_1 & q_4 & q_7 \\ q_2 & q_5 & q_8 \end{bmatrix} \tag{9} $$
Sub-pixel coordinates can be obtained by substituting the target phases φ_V^tar = 2πn_i and φ_H^tar = 2πn_j into Equation (8). The process of obtaining the sub-pixel coordinates can be expressed as
$$ u_c^{sp} = f\!\left( P, \varphi_V^{tar}, \varphi_H^{tar} \right), \quad v_c^{sp} = f\!\left( Q, \varphi_V^{tar}, \varphi_H^{tar} \right) \tag{10} $$
where f denotes the process of calculating the sub-pixel coordinates corresponding to the target phases φ_V^tar and φ_H^tar using the coefficient matrices P and Q obtained in Equation (9), and u_c^sp and v_c^sp are the sub-pixel coordinates. As shown in Figure 4e, the three images on the left are zoomed-in views of the sub-pixel feature points highlighted in the red rectangles in the images on the right. It is obvious that the precision of the feature detection is improved.
The projector pixel coordinates corresponding to the feature points are
$$ u_p = \frac{ \varphi_V^f\!\left( u_c^{sp}, v_c^{sp} \right) }{ 2\pi } \, T_V, \quad v_p = \frac{ \varphi_H^f\!\left( u_c^{sp}, v_c^{sp} \right) }{ 2\pi } \, T_H \tag{11} $$
where T_V and T_H are the periods of the fringe patterns along the vertical and horizontal directions, respectively, and φ_V^f and φ_H^f are the corresponding phases after being fitted with another polynomial in the same way as above.
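The sub-pixel extraction of Equations (8)–(11) amounts to a local least-squares fit of the phase-to-pixel mapping followed by an evaluation at the target phases. The sketch below (Python/NumPy; the function names and window handling are our own simplification, not the paper's code) illustrates the idea for one feature point:

```python
import numpy as np

def biquadratic_terms(x, y):
    """The nine monomials of Eq. (8): {1, x, x^2} x {1, y, y^2}."""
    return np.stack([x**2 * y**2, x * y**2, y**2,
                     x**2 * y,    x * y,    y,
                     x**2,        x,        np.ones_like(x)], axis=-1)

def subpixel_feature(u_int, v_int, phi_v, phi_h, phi_v_tar, phi_h_tar, T_V, T_H):
    """Fit Eq. (8) over a 20x20 window of integer pixels (1D arrays u_int,
    v_int with unwrapped phases phi_v, phi_h), evaluate at the target
    phases (Eq. (10)) and map to projector coordinates (Eq. (11))."""
    A = biquadratic_terms(phi_v, phi_h)              # design matrix
    p, *_ = np.linalg.lstsq(A, u_int, rcond=None)    # coefficients p_0..p_8
    q, *_ = np.linalg.lstsq(A, v_int, rcond=None)    # coefficients q_0..q_8
    a = biquadratic_terms(np.atleast_1d(phi_v_tar), np.atleast_1d(phi_h_tar))
    u_sp, v_sp = float(a @ p), float(a @ q)          # sub-pixel camera coords
    # At an exact feature point the fitted phase equals the target phase,
    # so Eq. (11) reduces to scaling the target phase by T/(2*pi).
    u_p = phi_v_tar / (2 * np.pi) * T_V
    v_p = phi_h_tar / (2 * np.pi) * T_H
    return (u_sp, v_sp), (u_p, v_p)
```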

3.3. World Coordinates Calculation

When the left and right camera pixel coordinates (u_l, v_l) and (u_r, v_r) corresponding to a feature point are obtained, the world coordinates of the feature point can be calculated. As shown in Figure 5, let the projection matrices of the left and right cameras be M_l and M_r, respectively. After camera calibration, M_l and M_r are obtained as
$$ M_l = \begin{bmatrix} 2760.8263 & 0 & 787.3906 & 0 \\ 0 & 2761.8504 & 549.9437 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad M_r = \begin{bmatrix} 1956.2650 & 37.1332 & 2107.9197 & 864851.6178 \\ 286.4883 & 2764.1705 & 527.0503 & 54260.4679 \\ 0.5222 & 0.0080 & 0.8528 & 100.2849 \end{bmatrix} \tag{12} $$
According to the camera model in Section 2.1, we have
$$ s_l \begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} = A_l \begin{bmatrix} R_l & T_l \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_l \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^l & m_{12}^l & m_{13}^l & m_{14}^l \\ m_{21}^l & m_{22}^l & m_{23}^l & m_{24}^l \\ m_{31}^l & m_{32}^l & m_{33}^l & m_{34}^l \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{13} $$
$$ s_r \begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix} = A_r \begin{bmatrix} R_r & T_r \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_r \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^r & m_{12}^r & m_{13}^r & m_{14}^r \\ m_{21}^r & m_{22}^r & m_{23}^r & m_{24}^r \\ m_{31}^r & m_{32}^r & m_{33}^r & m_{34}^r \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{14} $$
Eliminating s_l and s_r in Equations (13) and (14), we have
$$ \begin{bmatrix} u_l m_{31}^l - m_{11}^l & u_l m_{32}^l - m_{12}^l & u_l m_{33}^l - m_{13}^l \\ v_l m_{31}^l - m_{21}^l & v_l m_{32}^l - m_{22}^l & v_l m_{33}^l - m_{23}^l \\ u_r m_{31}^r - m_{11}^r & u_r m_{32}^r - m_{12}^r & u_r m_{33}^r - m_{13}^r \\ v_r m_{31}^r - m_{21}^r & v_r m_{32}^r - m_{22}^r & v_r m_{33}^r - m_{23}^r \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = \begin{bmatrix} m_{14}^l - u_l m_{34}^l \\ m_{24}^l - v_l m_{34}^l \\ m_{14}^r - u_r m_{34}^r \\ m_{24}^r - v_r m_{34}^r \end{bmatrix} \tag{15} $$
The three-dimensional world coordinates of the feature points can be obtained by solving Equation (15). In this research, we assume that the left camera coordinate system o_l x_l y_l z_l coincides with the world coordinate system o_w x_w y_w z_w (i.e., x_l = x_w, y_l = y_w, z_l = z_w). Therefore, we directly obtain the coordinates of the feature points in the left camera coordinate system.
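A minimal triangulation sketch for Equation (15) (Python/NumPy; our own helper names, not the paper's code) is:

```python
import numpy as np

def triangulate(u_l, v_l, u_r, v_r, M_l, M_r):
    """Solve the 4x3 linear system of Eq. (15) in the least-squares sense.
    M_l and M_r are the 3x4 projection matrices of the two cameras."""
    rows = np.array([u_l * M_l[2] - M_l[0],
                     v_l * M_l[2] - M_l[1],
                     u_r * M_r[2] - M_r[0],
                     v_r * M_r[2] - M_r[1]])
    # Coefficient block (first three columns) and right-hand side
    W, *_ = np.linalg.lstsq(rows[:, :3], -rows[:, 3], rcond=None)
    return W   # (x_w, y_w, z_w)
```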
At this point, we have obtained the 3D coordinates of the feature point under the left camera and the sub-pixel level coordinates on the projector image plane. Next, the projector parameters can be optimized as shown in Section 3.4.
Figure 5. The relationship between the projector and the binocular camera.

3.4. Projector Parameters Estimation

After determining the sub-pixel coordinates of the feature points on the projector image plane corresponding to the phase target as described in Section 2.2, all of these point pairs are used to estimate the parameters of the projector. First, the parameters without lens distortion are calculated through Zhang's method. Then, nonlinear optimization is used to further solve the distortion parameters of the projector. All of the final parameters are refined with the Levenberg–Marquardt optimization method by minimizing the reprojection error. The optimization objective function is given as follows
$$ F = \min \sum_{i=1}^{m} \sum_{j=1}^{n} \left\| w_{pij} - \hat{w}_p\!\left( A_p, R_p, T_p, K_p, W_{ij} \right) \right\|^2 \tag{16} $$
where m is the number of target poses, n is the number of feature points on each phase target, w_pij is the pixel coordinate of the j-th point of the i-th pose on the image plane of the projector and ŵ_p is the function representing the projection process of the projector. A_p, R_p, T_p and K_p are the intrinsic matrix, rotation matrix, translation vector and distortion coefficients of the projector, respectively. W_ij is the 3D coordinate of the feature point corresponding to w_pij, calculated by Equation (15).
Due to the assumption underlying Equation (12), the extrinsic parameters between the two cameras and the projector can be easily calculated, and from these, the extrinsic parameters of each SCP can also be obtained.
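The nonlinear refinement of Equation (16) can be sketched with a standard least-squares solver. The code below (Python with NumPy/SciPy) is a simplified single-pose illustration under our own parameterization (a Rodrigues rotation vector plus a translation), not the paper's implementation; the actual objective stacks all m poses and refines them jointly with the Levenberg–Marquardt method.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, W):
    """Pinhole + distortion projection (Section 2.1) of 3D points W (N, 3).
    `params` packs fu, fv, u0, v0, k1, k2, p1, p2, a Rodrigues rotation
    vector and a translation vector for one target pose."""
    fu, fv, u0, v0, k1, k2, p1, p2 = params[:8]
    rvec, tvec = params[8:11], params[11:14]
    theta = np.linalg.norm(rvec)
    k = rvec / theta if theta > 1e-12 else rvec
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
    Xc = W @ R.T + tvec                       # world -> projector frame
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]
    r2 = x**2 + y**2
    rad = 1 + k1 * r2 + k2 * r2**2
    xd = x * rad + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * rad + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([fu * xd + u0, fv * yd + v0], axis=-1)

def residuals(params, W, w_obs):
    """Reprojection residuals of Eq. (16) for one pose."""
    return (project(params, W) - w_obs).ravel()

# Levenberg-Marquardt refinement from an initial linear estimate params0:
# result = least_squares(residuals, params0, args=(W, w_obs), method="lm")
```

In practice, the initial estimate params0 would come from the closed-form plane calibration of Zhang's method, as described above.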

4. Experiments and Results

We set up a DCP system as shown in Figure 6 to test our algorithm. The system consists of two CMOS cameras with a resolution of 1600 × 1200 (model UI-3250CP-M-GL, IDS Imaging Development Systems GmbH, Obersulm, Germany) and a DLP projector with a projection speed of 30 fps and a resolution of 1280 × 800 (model PDC03-A, Giant Vinda, Fuzhou, China). The focal length of both camera lenses is 12 mm (Ricoh FL-CC1214-2M, Tokyo, Japan). For algorithm validation, we calibrated the system using both the classical separate calibration method and our coupled calibration method. For both methods, we used fringe periods of T_V = 80 pixels and T_H = 50 pixels and N = 8 phase-shifting steps.
After the fringe images are acquired with the experimental setup shown in Figure 6 and processed as in Figure 4a,b, the feature points can be extracted. We conducted a comparative experiment between the pixel-level feature point extraction method and our sub-pixel extraction method with polynomial fitting. The MSE (Mean Squared Error) between the pixel-level points and the sub-pixel points is 0.2034 pixels.
Figure 7 shows the comparison between the polynomial fitting method and the general method for extracting feature points. The red circles indicate the sub-pixel coordinates refined by the polynomial fitting method, corresponding to Figure 4d,e. The blue crosses indicate the original pixel-level coordinates calculated from the phase values only, without optimization by the polynomial fitting method, corresponding to Figure 4c. The green arrows indicate the error vectors between the sub-pixel and pixel-level values. The highlighted box shows a zoomed-in comparison. It can be clearly seen that the polynomial fitting method avoids the nonlinear errors caused by the overall tilt and subtle deformation of the white board.
The intrinsic parameters and distortion coefficients of the projector are calibrated with our proposed method and the classical method, respectively. It is worth mentioning that either the Left-SCP or the Right-SCP can be used for projector calibration when the classical method is used. The calibration results are listed in Table 1 and Table 2. It is obvious that the standard errors of the calibration results with our method are much lower than those obtained with the classical method.
As shown in Figure 8, to evaluate the calibrated intrinsic parameters, the reprojection errors are calculated for every plane orientation, where each color corresponds to one plane orientation. The reprojection error distributions of the separate calibration methods are shown in Figure 8a and Figure 8b, respectively, while Figure 8c shows our method. The reprojection errors of the Left-SCP and the Right-SCP calibrated by the classical method are (0.0836, 0.0675) and (0.0853, 0.0718) pixels, respectively. Our proposed method reduces these figures to (0.0186, 0.0322) pixels. This significant improvement results mainly from avoiding the accumulation of the cameras' calibration errors, as well as from reducing the extraction and phase errors.
We measured two ceramic spheres to test the accuracy of our algorithm. To show that our calibration method indeed reconstructs the absolute 3D geometry, we measured the spheres using both our coupled calibration method and the classical separate calibration method. In this experiment, as shown in Figure 9a, two ceramic spheres with diameters of 50.7991 mm and 50.7970 mm were measured ten times from different views. Figure 9b shows the 3D point clouds of the ten measurements, where the numbers indicate the measurement positions. By fitting spheres to the 3D point clouds, the diameters of the two spheres can be obtained. The measurement results for sphere A and sphere B are shown in Figure 9c and Figure 9d, respectively. The Mean Absolute Errors (MAE) of the ten measurements for each method and device are calculated and listed in Table 3. As shown in Table 3, the proposed method achieves a measurement accuracy of 0.07 mm, which is more accurate than the classical method.
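For reference, the sphere diameters can be recovered from the measured point clouds with a standard linear least-squares sphere fit; the paper does not specify its fitting procedure, so the sketch below is only an assumed implementation:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (N, 3) point cloud.
    |p - c|^2 = r^2 is rewritten as 2 p.c + (r^2 - |c|^2) = |p|^2."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, 2.0 * radius   # center and diameter
```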

5. Discussion

In this study, we proposed a new method for calibrating the projector parameters in a DCP system with high accuracy. In some difficult measurement environments, the projector in the DCP system is treated as an inverse camera, thus providing both texture and 3D information. In many previous methods [15,16,17,18], the projector parameters are calibrated in the Left-SCP and Right-SCP separately; the projector parameters are therefore inconsistent between the two systems, due to factors such as the transmission of camera calibration errors and feature point extraction errors during calibration, which leads to measurement errors. Even though some learning-based calibration methods have emerged [22,23], methods for calibrating the entire DCP system simultaneously are still lacking. Differently from other methods, in order to unify the projector parameters over the whole DCP system, we propose a coupled calibration method that uses the binocular camera to uniquely determine the 3D coordinates of the feature points. At the same time, we use a combination of a phase target and polynomial fitting to obtain the coordinates of the feature points at the sub-pixel level, which also simplifies the procedure.

6. Conclusions

We developed a novel projector calibration framework for binocular structured light systems. Through the binocular structured light system and the phase target, the inconsistency of the calibration results between the Left-SCP and the Right-SCP in the traditional structured light system is effectively eliminated. The experimental results show that the average reprojection error of the proposed method reaches (0.0186, 0.0322) pixels. Furthermore, we achieved an average accuracy of 0.07 mm by repeatedly measuring two standard spherical objects. The experimental results are significantly better than those of the traditional methods.

Author Contributions

Methodology, R.J.; Software, R.J.; Supervision, J.X.; Validation, W.L., Z.S., Z.X. and S.L.; Writing—original draft, R.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Sichuan Science and Technology Program, grant number 2023YFG0181.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank Chaowen Chen for their helpful and valuable discussions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SCP	Single Camera-Projector
DCP	Dual Cameras-Projector

References

  1. Liu, Y.; Blunt, L.; Zhang, Z.; Rahman, H.A.; Gao, F.; Jiang, X. In-situ areal inspection of powder bed for electron beam fusion system based on fringe projection profilometry. Addit. Manuf. 2020, 31, 100940. [Google Scholar] [CrossRef]
  2. Lin, C.; He, H.; Guo, H.; Chen, M.; Shi, X.; Yu, T. Fringe projection measurement system in reverse engineering. J. Shanghai Univ. 2005, 9, 153–158. [Google Scholar] [CrossRef]
  3. Kuş, A. Implementation of 3D optical scanning technology for automotive applications. Sensors 2009, 9, 1967–1979. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, W.; Wu, Z.; Li, Y.; Liu, Y.; Zhang, Q. Real-time 3D shape measurement with dual-frequency composite grating and motion-induced error reduction. Opt. Express 2020, 28, 26882–26897. [Google Scholar] [CrossRef] [PubMed]
  5. Yu, C.; Ji, F.; Xue, J.; Wang, Y. Adaptive binocular fringe dynamic projection method for high dynamic range measurement. Sensors 2019, 19, 4023. [Google Scholar] [CrossRef]
  6. Yu, C.; Ji, F.; Xue, J.; Wang, Y. Fringe phase-shifting field based fuzzy quotient space-oriented partial differential equations filtering method for gaussian noise-induced phase error. Sensors 2019, 19, 5202. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131. [Google Scholar] [CrossRef]
  8. Zhang, S.; Huang, P.S. Novel method for structured light system calibration. Opt. Eng. 2006, 45, 083601. [Google Scholar]
  9. Yin, Y.; Peng, X.; Li, A.; Liu, X.; Gao, B.Z. Calibration of fringe projection profilometry with bundle adjustment strategy. Opt. Lett. 2012, 37, 542–544. [Google Scholar] [CrossRef]
  10. Xue, J.; Zhang, Q.; Li, C.; Lang, W.; Wang, M.; Hu, Y. 3D face profilometry based on galvanometer scanner with infrared fringe projection in high speed. Appl. Sci. 2019, 9, 1458. [Google Scholar] [CrossRef]
  11. Huang, B.; Tang, Y.; Ozdemir, S.; Ling, H. A fast and flexible projector-camera calibration system. IEEE Trans. Autom. Sci. Eng. 2020, 18, 1049–1063. [Google Scholar] [CrossRef]
  12. Zhao, H.; Liang, X.; Diao, X.; Jiang, H. Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector. Opt. Lasers Eng. 2014, 54, 170–174. [Google Scholar] [CrossRef]
  13. Qian, J.; Feng, S.; Tao, T.; Hu, Y.; Liu, K.; Wu, S.; Zuo, C. High-resolution real-time 360 3d model reconstruction of a handheld object with fringe projection profilometry. Opt. Lett. 2019, 44, 5751–5754. [Google Scholar] [CrossRef]
  14. Zhao, H.; Wang, Z.; Jiang, H.; Xu, Y.; Dong, C. Calibration for stereo vision system based on phase matching and bundle adjustment algorithm. Opt. Lasers Eng. 2015, 68, 203–213. [Google Scholar] [CrossRef]
  15. Liu, X.; Yang, Y.; Tang, Q.; Cai, Z.; Peng, X.; Liu, M.; Li, Q. A method for fast 3d fringe projection measurement without phase unwrapping. In Proceedings of the Sixth International Conference on Optical and Photonic Engineering (icOPEN 2018), Shanghai, China, 8–11 May 2018; Volume 10827, pp. 237–244. [Google Scholar]
  16. Gai, S.; Da, F.; Tang, M. A flexible multi-view calibration and 3D measurement method based on digital fringe projection. Meas. Sci. Technol. 2019, 30, 025203. [Google Scholar] [CrossRef]
  17. Hu, P.; Yang, S.; Zheng, F.; Yuan, Y.; Wang, T.; Li, S.; Dear, J.P. Accurate and dynamic 3D shape measurement with digital image correlation-assisted phase shifting. Meas. Sci. Technol. 2021, 32, 075204. [Google Scholar] [CrossRef]
  18. Tao, T.; Chen, Q.; Da, J.; Feng, S.; Hu, Y.; Zuo, C. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system. Opt. Express 2016, 24, 20253–20269. [Google Scholar] [CrossRef] [PubMed]
  19. Anwar, H.; Din, I.; Park, K. Projector calibration for 3D scanning using virtual target images. Int. J. Precis. Eng. Manuf. 2012, 13, 125–131. [Google Scholar] [CrossRef]
  20. Song, Z.; Chung, R. Use of LCD panel for calibrating structured-light-based range sensing system. IEEE Trans. Instrum. Meas. 2018, 57, 2623–2630. [Google Scholar] [CrossRef]
  21. Zhang, W.; Li, W.; Yu, L.; Luo, H.; Zhao, H.; Xia, H. Sub-pixel projector calibration method for fringe projection profilometry. Opt. Express 2017, 25, 19158–19169. [Google Scholar] [CrossRef] [PubMed]
  22. Liu, J.; Yu, X.; Yang, K.; Zhu, X.; Wu, Y. Automatic calibration method for the full parameter of a camera-projector system. Opt. Eng. 2019, 58, 084105. [Google Scholar] [CrossRef]
  23. Yuan, Q.; Wu, J.; Zhang, H.; Yu, J.; Ye, Y. Unsupervised-learning-based calibration method in microscopic fringe projection profilometry. Appl. Opt. 2023, 62, 7299–7315. [Google Scholar] [CrossRef]
  24. Xing, S.; Guo, H. Iterative calibration method for measurement system having lens distortions in fringe projection profilometry. Opt. Express 2020, 28, 1177–1196. [Google Scholar] [CrossRef]
  25. Huang, Z.; Xi, J.; Yu, Y.; Guo, Q. Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images. Appl. Opt. 2015, 54, 347–356. [Google Scholar] [CrossRef]
  26. Chen, R.; Xu, J.; Chen, H.; Su, J.; Zhang, Z.; Chen, K. Accurate calibration method for camera and projector in fringe patterns measurement system. Appl. Opt. 2016, 55, 4293–4300. [Google Scholar] [CrossRef]
  27. He, D.; Liu, X.; Peng, X.; Ding, Y.; Gao, B.Z. Eccentricity error identification and compensation for high-accuracy 3D optical measurement. Meas. Sci. Technol. 2013, 24, 075402. [Google Scholar] [CrossRef]
  28. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  29. Duane, C.B. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  30. Huang, L.; Zhang, Q.; Asundi, A. Camera calibration with active phase target: Improvement on feature detection and optimization. Opt. Lett. 2013, 38, 1446–1448. [Google Scholar] [CrossRef]
  31. Wang, Y.; Liu, L.; Cai, B.; Wang, K.; Chen, X.; Wang, Y.; Tao, B. Stereo calibration with absolute phase target. Opt. Express 2019, 27, 22254–22267. [Google Scholar] [CrossRef]
  32. Wang, Y.; Wang, Y.; Liu, L.; Chen, X. Defocused camera calibration with a conventional periodic target based on Fourier transform. Opt. Lett. 2019, 44, 3254–3257. [Google Scholar] [CrossRef] [PubMed]
  33. Liu, Y.; Yu, X.; Xue, J.; Zhang, Q.; Su, X. A flexible phase error compensation method based on probability distribution functions in phase measuring profilometry. Opt. Laser Technol. 2020, 129, 106267. [Google Scholar] [CrossRef]
  34. Zuo, C.; Huang, L.; Zhang, M.; Chen, Q.; Asundi, A. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2016, 85, 84–103. [Google Scholar] [CrossRef]
  35. Chen, L.; Chen, Z.; Singh, R.K.; Pu, J. Imaging of polarimetric-phase object through scattering medium by phase shifting. Opt. Express 2020, 28, 8145–8155. [Google Scholar] [CrossRef]
Figure 1. Pixel coordinate errors. (a) Projector pixel coordinates of a certain pose. (b) Reprojection error distribution of different poses.
Figure 2. DCP system.
Figure 3. The pipeline of projector calibration. (a) Eight-step phase shifting and three-frequency heterodyne for phase unwrapping. (b) Create the orthogonal fringe map for feature extraction. (c) Extract the feature points on both the right and left images. (d) Optimize the reprojection error.
Figure 4. The process of detecting feature points.
Figure 6. Experimental setup.
Figure 7. The comparison results of the polynomial fitting and general feature point extraction methods.
Figure 8. Reprojection error distributions. (a) Separate calibration method (Left-SCP). (b) Separate calibration method (Right-SCP). (c) Coupled calibration method (ours).
Figure 9. Comparison of ceramic sphere measurements. (a) Measured ceramic spheres. (b) 3D point cloud at different locations. (c) Measurement results of sphere A. (d) Measurement results of sphere B.
Table 1. Calibrated intrinsic parameters with standard error (unit: pixel).

Method | Device | f_up | f_vp | u_p0 | v_p0
Separate calibration (Classical) | Left-SCP | 1744.3991 ± 16.9957 | 1745.2484 ± 16.9660 | 588.1513 ± 3.3219 | 375.4900 ± 3.9784
Separate calibration (Classical) | Right-SCP | 1755.7047 ± 18.4074 | 1755.2361 ± 18.3390 | 597.7887 ± 3.3325 | 366.9479 ± 4.3530
Coupled calibration (Ours) | DCP | 1756.5209 ± 0.7293 | 1756.2796 ± 0.7163 | 597.7667 ± 0.3391 | 382.3472 ± 0.3184
Table 2. Calibrated distortion coefficients with standard error.

Method | Device | k_p1 | k_p2 | p_p1 | p_p2
Separate calibration (Classical) | Left-SCP | 0.0718 ± 0.0300 | 0.3390 ± 1.2365 | 0.0002 ± 0.0004 | 0.0042 ± 0.0006
Separate calibration (Classical) | Right-SCP | 0.0762 ± 0.0287 | 0.2317 ± 1.1333 | 0.0018 ± 0.0004 | 0.0014 ± 0.0006
Coupled calibration (Ours) | DCP | 0.0610 ± 0.0024 | 0.0223 ± 0.0461 | 0.0001 ± 0.00004 | 0.0014 ± 0.00005
Table 3. Comparison of the MAE of the proposed method and the conventional methods (unit: mm).

Method | Device | MAE of the Diameter of Sphere A | MAE of the Diameter of Sphere B
Separate calibration (Classical) | Left-SCP | 0.0726 | 0.1115
Separate calibration (Classical) | Right-SCP | 0.0957 | 0.1601
Coupled calibration (Ours) | Left-SCP | 0.0481 | 0.0294
Coupled calibration (Ours) | Right-SCP | 0.0341 | 0.0687


