Camera Calibration with Phase-Shifting Wedge Grating Array

Planar targets with known features have been widely used for camera calibration in various vision systems. This paper utilizes phase-shifting wedge grating (PWG) arrays as an active calibration target. Feature points are encoded into the carrier phase, which can be accurately calculated using the phase-shifting algorithm. The 2π-phase points are roughly extracted with edge detection and then optimized by windowed bicubic fitting to sub-pixel accuracy. Two 2π-phase lines for each PWG are obtained using a linear fitting method. The PWG centers, which are used as feature points, are detected by computing the intersections of the 2π-phase lines. Experimental results indicate that the proposed method is accurate and reliable.


Introduction
Camera calibration, which attempts to determine the intrinsic and extrinsic parameters of cameras, has been extensively studied for the past few decades [1][2][3]. By capturing calibration targets with known features, one can acquire an exact set of one-to-one correspondences between the world and the image coordinates. Camera parameters are then estimated from these correspondences. Thus, feature detection directly affects the calibration accuracy. Numerous studies have focused on developing patterns with distinctive features that can be accurately localized in images [4]. Among various existing patterns, squares [5][6][7][8] and circles [9][10][11][12] have become popular due to their ease of use. To avoid the influence of target fabrication on the calibration results and to further simplify the calibration procedure, digital displays have been applied to camera calibration, providing an alternative form of planar target [13][14][15][16][17][18][19].
Compared with the conventional calibration object (a planar target marked with printed patterns), digital displays have many advantages, including adjustable brightness, guaranteed flatness, and known pixel sizes, although they may be more expensive and inconvenient to move because of power and data cables. Arbitrary features can be realized through simple programming on a computer. Most importantly, digital displays make active targets possible. Therefore, more and more researchers have combined the advantages of digital displays with phase-shifting patterns as calibration targets, which can solve the calibration imprecision caused by coordinate extraction errors of the feature points [14,17]. Schmalz et al. [14] performed camera calibration with two phase-shifting sequences, one horizontal and one vertical. Huang et al. [15] also applied horizontal/vertical phase-shifting fringe patterns for camera calibration, and improved the feature detection and optimization. Ma et al. [16] used horizontal/vertical fringe patterns as calibration targets, but extracted the two-dimensional (2D) phase-difference pulses as feature points. Xue et al. [17] applied concentric circles and wedge gratings to solve the vanishing points, and then estimated the camera parameters from them.

Camera Model
The camera is modeled as the classic pinhole camera. Let P = (X_W, Y_W, Z_W, 1)^T and p = (u, v, 1)^T be the homogeneous coordinates of a 3D world point and its 2D image point, respectively. The relationship between P and p can be mathematically described as [1]: λp = K[R t]P, where λ is an arbitrary scale factor; K, the intrinsic parameter matrix, contains the focal lengths (f_u, f_v), the principal point (u_0, v_0), and the skew parameter γ. The rotation matrix R and the translation vector t denote the extrinsic parameters. Generally, a camera lens consisting of several optical elements does not obey the ideal pinhole model and introduces nonlinear distortions that must be corrected in the captured images [3]. Radial and tangential distortions are the two most common and can be approximated as [19]: u = u′(1 + k_1 r² + k_2 r⁴) + 2p_1 u′v′ + p_2(r² + 2u′²), v = v′(1 + k_1 r² + k_2 r⁴) + p_1(r² + 2v′²) + 2p_2 u′v′, with r² = u′² + v′², where (u, v) and (u′, v′) are the distorted and undistorted image points in normalized image coordinates, k_1 and k_2 are the radial distortion coefficients, and p_1 and p_2 are the tangential distortion coefficients.
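The projection and distortion models can be sketched as follows (an illustrative Python/NumPy sketch assuming the standard Brown-Conrady form given above; the paper's own implementation is in MATLAB and not reproduced here):

```python
import numpy as np

def project(K, R, t, Pw):
    """Pinhole projection: lambda * [u, v, 1]^T = K [R | t] P."""
    p = K @ (R @ Pw + t)
    return p[:2] / p[2]                       # divide out the scale factor lambda

def distort(u_u, v_u, k1, k2, p1, p2):
    """Map an undistorted normalized point (u', v') to its distorted position (u, v)."""
    r2 = u_u**2 + v_u**2
    radial = 1 + k1 * r2 + k2 * r2**2         # radial term: 1 + k1*r^2 + k2*r^4
    u = u_u * radial + 2*p1*u_u*v_u + p2*(r2 + 2*u_u**2)
    v = v_u * radial + p1*(r2 + 2*v_u**2) + 2*p2*u_u*v_u
    return u, v
```

With all distortion coefficients set to zero, `distort` reduces to the identity, matching the ideal pinhole model.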

Calibration Pattern
Phase-shifting methods are widely used in optical metrology, as they provide very high precision and dense coding [14,20]. This paper encodes the feature points into the carrier phase of three PWG arrays with a phase shift of 2π/3 between them. The wrapped phase of the patterns can be recovered with the phase-shifting algorithm. The intensities of the PWG are, respectively, described as [17]: I_k(x, y) = A + B cos[2πfθ(x, y) + 2(k − 2)π/3], k = 1, 2, 3, where (x, y) denotes a point on the PWG; A is the average intensity and B is the intensity modulation, both constant (we usually set A = B = 0.5); 2πfθ is the phase to be determined, f is the frequency, and θ(x, y) is the angle with the x-axis, expressed in Equation (6); the radius r = √((x − x_0)² + (y − y_0)²) is the Euclidean distance between the point (x, y) and the PWG center (x_0, y_0), and r_min < r < r_max restricts the size of the PWG. Figure 1a shows a PWG array containing 3 × 3 uniform PWGs with f = 2/π and a phase shift of 2π/3. Figure 1b shows a single PWG, the basic unit of the second PWG array; the corresponding wrapped phase of the PWG is shown in Figure 1c.
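For illustration, a single PWG and its phase-shifted variants can be rendered as follows (a Python/NumPy sketch under the intensity model above, with shifts of −2π/3, 0, and +2π/3; the function name and defaults are our own):

```python
import numpy as np

def pwg_images(size, center, f=2/np.pi, r_min=20, r_max=65, A=0.5, B=0.5):
    """Render the three phase-shifted wedge-grating images, shape (size, size) each."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    x0, y0 = center
    theta = np.arctan2(y - y0, x - x0)        # angle with the x-axis
    r = np.hypot(x - x0, y - y0)              # distance to the PWG center
    annulus = (r > r_min) & (r < r_max)       # r_min < r < r_max restricts the size
    shifts = [-2*np.pi/3, 0.0, 2*np.pi/3]     # phase shift of 2*pi/3 between patterns
    return [np.where(annulus, A + B*np.cos(2*np.pi*f*theta + s), 0.0) for s in shifts]
```

With A = B = 0.5 the intensities stay within [0, 1], which maps directly to display gray levels.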

Obviously, the 2π-phase points are distributed on two straight lines that intersect at the PWG center, as shown in Figure 1d. If we can detect the two 2π-phase lines in the image, the PWG center can be located by computing their intersection. Since one PWG has only one center, several identical PWGs are arranged to generate a PWG array, and their centers are used as feature points. Figure 2a shows a PWG array consisting of 3 × 3 PWGs. Specifically, for an M × N PWG array, the PWG at the m-th row and n-th column is centered at: where m = 0, 1, ..., M − 1; n = 0, 1, ..., N − 1; d indicates the distance between adjacent PWG centers in the horizontal or vertical direction, which must be greater than 2r_max to avoid interference between adjacent PWGs. Figure 2b shows its wrapped phase image.
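The layout rule can be illustrated with a small sketch (the grid origin `x0, y0` below is a hypothetical offset, since the paper's exact center equation is not reproduced here):

```python
def pwg_array_centers(M, N, d, x0=0.0, y0=0.0):
    """Centers of an M x N PWG array: row m, column n, spacing d along each axis.

    d must be greater than 2*r_max so that adjacent PWGs do not overlap."""
    return [(x0 + n * d, y0 + m * d) for m in range(M) for n in range(N)]
```

For the 3 × 3 array of Figure 2a with d = 150 pixels, this yields nine centers 150 pixels apart.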


Feature Detection
The emphasis of feature detection is how to detect the PWG centers. The detailed procedures of our method can be summarized, as follows:

1.
Let J_1(u, v), J_2(u, v), and J_3(u, v) respectively represent the three PWG array images captured at the same viewpoint. Adding up the three images, the phase-modulated area Ω (Figure 3a) of the PWG arrays can be obtained with a suitable gray threshold T: Ω = {(u, v) | J_1(u, v) + J_2(u, v) + J_3(u, v) > T}. By labeling the connected components in Ω, the sub-mask Ω_k for each PWG can be obtained. Calculate the centroid (u_c^k, v_c^k) of each Ω_k, which can be regarded as the rough location of the PWG center.

2.
Based on the three-step phase-shifting algorithm, the wrapped phase ranging over [0, 2π) can be calculated as: φ(u, v) = arctan[√3(J_1 − J_3) / (2J_2 − J_1 − J_3)]. Clearly, φ(u, v) changes abruptly at the 2π-phase jump areas. Thus, through edge detection (e.g., Sobel, Canny), we can easily extract the 2π-phase points (û_i, v̂_i) with pixel-level accuracy. These edge points are distributed on several lines, as shown in Figure 3b.
3.
The wrapped phase φ(û_ij, v̂_ij) at the 2π-phase jumps can be unwrapped to Φ(û_ij, v̂_ij). Let R_ij be the Euclidean distance between (û_ij, v̂_ij) and its corresponding rough center (u_c, v_c). Then, we can establish the functional relations x = f_x(Φ, R) and y = f_y(Φ, R) by the windowed bicubic fitting algorithm. The optimized 2π-phase points (ũ, ṽ) can be obtained by evaluating these relations. These 2π-phase points with sub-pixel accuracy are utilized to locate the real center (u_0, v_0).

4.
When all of the 2π-phase points have been optimized, the two 2π-phase straight lines for each PWG can be obtained with a linear fitting algorithm. The intersection point of the two straight lines is treated as the PWG center, i.e., the feature point. Using the one-to-one correspondences between the world and image coordinates of the PWG centers, the camera parameters can be estimated.
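The steps above can be sketched as follows (an illustrative Python/NumPy/SciPy reimplementation, not the authors' MATLAB code; the windowed bicubic refinement of step 3 is omitted, and a plain total-least-squares fit stands in for the paper's linear fitting):

```python
import numpy as np
from scipy import ndimage

def rough_centers(J1, J2, J3, T):
    """Step 1: threshold the summed images, label components, return centroids."""
    mask = (J1 + J2 + J3) > T
    labels, n = ndimage.label(mask)
    return mask, ndimage.center_of_mass(mask, labels, range(1, n + 1))

def wrapped_phase(J1, J2, J3):
    """Step 2: three-step phase shifting (shifts -2pi/3, 0, +2pi/3), phase in [0, 2pi)."""
    phi = np.arctan2(np.sqrt(3.0) * (J1 - J3), 2.0 * J2 - J1 - J3)
    return np.mod(phi, 2.0 * np.pi)

def fit_line(pts):
    """Step 4a: total-least-squares fit; returns a point on the line and a unit direction."""
    pts = np.asarray(pts, dtype=float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)      # principal direction of the centered points
    return c, vt[0]

def intersect(c1, d1, c2, d2):
    """Step 4b: intersection of lines c1 + s*d1 and c2 + t*d2 (the PWG center)."""
    s, _ = np.linalg.solve(np.column_stack([d1, -d2]), c2 - c1)
    return c1 + s * d1
```

In a full pipeline, the 2π-phase points extracted from `wrapped_phase` would be grouped per PWG (using the rough centroids), fitted into the two lines, and intersected to obtain the sub-pixel feature point.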


Experiments
The performance of the proposed method has been evaluated with several experiments. The experimental setup mainly includes a camera to be calibrated and an LCD used to display the calibration patterns. The camera model is IOI Flare 2M360-CL (IO Industries Inc., London, ON, Canada) with a resolution of 2048 × 1088 pixels. The camera uses a zoom lens with a focal length of 12-35 mm. The LCD model is Philips 226V4L (Koninklijke Philips N.V., Amsterdam, The Netherlands), with a resolution of 1920 × 1080 pixels and a pixel pitch of 0.248 mm. To begin with, the camera, fixed on a mount, was placed at a suitable distance from the LCD with its optical axis perpendicular to the screen. Then, a suitable focal length was set to capture a sharp image of the pattern. The XY plane of the world reference frame was located on the LCD, and the Z axis was perpendicular to the planar monitor. Therefore, the feature points all had z = 0, which simplified the calculation procedure. All of the feature detection methods were implemented in MATLAB R2014a. The camera intrinsic parameters were estimated using the standard calibration method implemented in the MATLAB environment, with the assistance of the Camera Calibration Toolbox [21] and the Image Processing Toolbox [22]. The calibration accuracy can be assessed on the basis of the root-mean-square re-projection error (RMSE) of the feature points [5,14], which can be computed as: RMSE = √((1/L) Σ_l [(u_l − û_l)² + (v_l − v̂_l)²]), where L is the total number of feature points, (u_l, v_l) are the extracted feature point locations, and (û_l, v̂_l) are the re-projected point locations computed from the known world coordinates of the feature points. The PWG array used to calibrate the camera consists of 6 × 6 uniform PWGs, which produces 36 feature points per viewpoint. The minimum and maximum radii of the PWG are r_min = 20 pixels and r_max = 65 pixels. The distance between adjacent PWG centers is d = 150 pixels.
The frequency is f = 2/π.
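The RMSE above can be computed with a minimal Python/NumPy sketch (the function name is our own):

```python
import numpy as np

def reprojection_rmse(detected, reprojected):
    """Root-mean-square re-projection error over the L feature points."""
    diff = np.asarray(detected, float) - np.asarray(reprojected, float)
    # per-point squared error (u - u_hat)^2 + (v - v_hat)^2, averaged, then rooted
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))
```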

PWG Arrays VS Checkerboard and Circle Patterns
In this experiment, checkerboard and circle patterns, also designed to have 6 × 6 feature points at the same locations, were selected for comparison. The performance of both three-step and four-step PWG arrays was explored. The four patterns were displayed on the LCD separately. By adjusting the location of the camera, we collected four groups of pattern images from 10 different viewpoints. We obtained nine images per viewpoint: one for the checkerboard, one for the circle pattern, three for the three-step PWG arrays, and four for the four-step PWG arrays. Figure 4 shows the captured images of the four patterns.

After capturing the calibration target images, we extracted the feature points from the images: the corners of the checkerboard with the Camera Calibration Toolbox for Matlab [21], the centers of the circles with the OpenCV function findCirclesGrid [23], and the centers of the three-step and four-step PWG arrays with the feature detection method in Section 2.3. Figure 5a-c show the three-step PWG array images, and Figure 5d shows their wrapped phase with the detected feature points.
Figure 6 shows the re-projection errors of the feature points for the four patterns. The results clearly show that the PWG arrays yield smaller RMSEs than the checkerboard and circle patterns. Table 1 shows the intrinsic parameters estimated from the four patterns. The calibration results are very close to each other. The tangential coefficients as well as the skew parameter are extremely small, so it is sufficient to keep only the radial coefficients for the nonlinear distortion. The RMSEs of the presented method are smaller than those of the checkerboard and circle patterns, and the four-step PWG arrays are more accurate than the three-step PWG arrays. As is well known, a four-step pattern recovers the wrapped phase with higher precision than a three-step pattern. Thus, the calibration results of the PWG arrays are more accurate than those of the checkerboard and circle patterns, and the calibration accuracy increases with the number of phase steps.


PWG Arrays VS Horizontal/Vertical Phase-Shifting Fringe Patterns
In this experiment, fringe patterns generated with the same phase shift of 2π/3 and the same feature point locations were chosen for comparison with the three-step PWG arrays. Both are phase-shifting patterns; the comparison set consisted of horizontal and vertical phase-shifting fringe patterns. We performed the experimental procedures in the same way as described in Section 3.1. Nine images per viewpoint were captured: three PWG array images, three horizontal fringe pattern images, and three vertical fringe pattern images; ninety images were obtained in total. Figure 7 shows the captured images of the two phase-shifting patterns.

Then, we extracted the feature points from the images: the centers of the three-step PWG arrays with the proposed method, and the centers of the three-step phase-shifting fringe patterns with the feature detection method of paper [16], which extracts two-dimensional (2D) phase-difference pulse signals, optimized by interpolation, as feature points. The workload of capturing images for the PWG arrays was reduced by half compared with the fringe patterns. Figure 8 shows the 2D phase-difference pulses.
Figure 8. Two-dimensional (2D) phase-difference pulses of the horizontal/vertical fringe patterns.
Table 2 shows the intrinsic parameters estimated from the two phase-shifting patterns. The RMSE for the presented method is smaller than that for the method of paper [16]. The difference in RMSEs is caused by the precision of the feature detection. The method of paper [16] extracts the feature points from the sum of two 2π-phase point images; the addition operation introduces more noise, which directly degrades the extraction accuracy compared with the presented method, which uses a single 2π-phase point image. Therefore, the proposed method is accurate and reliable.
Table 2. Camera intrinsic parameters estimated from different phase-shifting patterns.

Conclusions
In this study, PWG arrays displayed on an LCD are used as calibration patterns for accurate camera calibration. The 2π-phase points are detected and optimized to obtain sub-pixel accuracy. Then, the PWG centers used as feature points are precisely extracted from the phase by a linear fitting method, rather than directly from the intensity. Compared with horizontal/vertical phase-shifting fringe patterns, the method is convenient and time-saving, with the workload of capturing images reduced by half. Experimental results indicate that the proposed method is accurate and reliable.
