Three-dimensional surface imaging by multi-frequency phase shift profilometry with angle and pattern modeling for system calibration

In this paper, we present a 3D surface imaging system based on the well-known phase shift profilometry. To yield analytical solutions, four shifted phases and three high carrier frequencies are used to compute the phase map and reduce the noise caused by inherent optical aberrations and external influences, e.g. different illumination light sources, uneven intensity distribution and automatic image processing algorithms. To reduce the system noise, we propose to model the pattern of the calibration grid in a virtual space. To obtain the modeled pattern, we use a plane to intercept the rays that are modeled by the proposed angle modeling method. In the world coordinate system, the angle and the pattern are computed based on the calibration data. A registration method is used to transform the modeled pattern in the virtual space to the ideal pattern in the world coordinate system by minimizing the least-squares error between the true points in the modeled pattern and the measured points in the practical pattern. The modeled (true) points are then used to re-calibrate the 3D imaging system. Experimental results show that the measurement accuracy increases considerably: the MSE is reduced from 0.95 mm to 0.65 mm (a 32% average error decrease) after replacing the measured points with the true points for calibration.


Introduction
3D surface imaging provides the depth information of objects, breaking the restriction of traditional cameras and sensors that can only acquire 2D images. As a result, non-contact 3D surface imaging technology has advanced rapidly over the past decades, driven by market demands in industry, medical applications and scientific research. Non-contact 3D surface imaging techniques fall into two main categories based on the reflective characteristics of the surface: (1) techniques to measure specular surfaces [1-5], and (2) techniques to measure diffuse surfaces [6-20]. Among them, phase shift profilometry is the most popular method due to its overall higher measurement accuracy and efficiency compared to other non-contact 3D surface imaging techniques.
Measurement Science and Technology

Keywords: modeling, phase shift profilometry, 3D measurement, instrumentation

In the 1980s, researchers proposed phase shift profilometry to measure the depth of objects [13,14]. Decades later, it remains a major 3D surface imaging method in many applications. The development and widespread adoption of cost-effective digital cameras and projectors has made phase shift profilometry more accurate and more efficient. However, the measurement accuracy of this technique is still significantly affected by the noise and optical aberrations inherent in the camera and projector. In the 3D imaging system of phase shift profilometry, both the camera and the projector introduce noise. To suppress the noise, the authors of [15,16] used multiple frequencies to compute multiple phases and then unwrapped them to obtain the final phase. They incorporated the radial lens distortion, under which a straight line in the scene does not remain straight in the image, into the calibration model, while rectifying the radial lens distortion caused by the projector independently. There are also many other works dealing with the reduction of noise during camera calibration [21,22]. However, all of them try to minimize the noise introduced by the calibration grid and automatic image processing rather than remove it completely [23]. In addition, they treat the radial lens distortion separately from the noise minimization, which requires additional computational effort and complexity.
In addition to radial lens distortion, other types of optical aberrations add noise or distortion to the imaging system: astigmatism, which blurs or distorts the image to some extent at all distances; spherical aberration, caused by increased refraction of light rays striking the lens near its periphery; chromatic aberration, caused by chromatic dispersion when the lens fails to focus all colors to the same convergence point; comatic aberration, in which lens imperfections cause off-axis point sources to be imaged with inconstant (non-uniform) magnification; defocus aberration, which occurs when the image is out of focus; and Petzval field curvature. In past research, these aberrations were addressed independently with different methods. For example, a cylindrical lens [24] can be introduced into the imaging system to create astigmatism in optical super-resolution microscopy [25], and some telescopes use deliberately designed astigmatic optics [26]. In an ideal situation, chromatic aberration could be removed by scaling the fringed color channels, or subtracting a scaled version of a fringed channel, so that all channels spatially overlap each other correctly in the final image [27].
In practical implementations, it would be nearly impossible to remove the effect of each optical aberration independently, one by one. It is wiser to remove the effects of all the optical aberrations as a whole. Moreover, there are also negative effects caused by the illumination light sources and uneven intensity distributions. In this paper, we define the deviation of a measured point from its true position caused by the optical aberrations and other negative influencing factors as system noise. The system noise thus comprises all the unwanted interferences, and it causes the measurement errors. Instead of merely reducing the system noise, we aim to remove it completely.
During calibration, the measured points are affected by the system noise defined above. As a result, they usually differ considerably from the true points. In past research [15-22], the authors used the measured points instead of the true points to calibrate the camera or the phase shift profilometry 3D imaging systems. Consequently, the ultimate measurement accuracy these systems can achieve is limited. In addition, it is usually difficult to obtain the true points from the measured points during calibration, which has hindered the development of such methods. In this paper, we propose an angle modeling and pattern modeling method to calculate the true points in a virtual space. The designed pattern in the virtual space and the captured pattern in the practical world coordinate system are related by an optimal interception plane and the modeled rays. Through registration, the true points and the measured points can be converted to each other by an affine transformation matrix. After replacing the measured points with the true points for the calibration of the phase shift profilometry 3D imaging system, we achieved a 32% average error decrease. This error decrease is significant for phase profilometry, yet it is far from the ultimate limit of this noise removal technique. In general, phase shift profilometry is more vulnerable to noise when measuring tiny objects, where the size or depth of the object is close to or smaller than the magnitude of the noise. For such challenging applications, this noise removal technique could play an even more important role in increasing the measurement accuracy.

3D imaging by phase shifting profilometry
Depending on the designed pattern, there are a variety of formulations [13-17] for phase shift profilometry to calculate the phase and depth. The typical phase shift profilometry [13,14] is formulated as

I_i(x, y) = I_0(x, y)\,[1 + \gamma(x, y)\cos(\phi(x, y) + \alpha_i)]   (1)

where I_0 is the intensity image, γ is the modulation of the interference fringes, and ϕ is the phase. With a minimum of three phase shifts α_1, α_2, α_3, these three unknowns can be solved by least-squares estimation [14]. In practical implementations, this closed-form solution is greatly affected by noise. To reduce the effect of noise, more phase values are computed at each point by adding high carrier frequencies f_i. Accordingly, equation (1) becomes

I_i(x, y) = I_0(x, y)\,[1 + \gamma(x, y)\cos(2\pi f_i y + \phi(x, y) + \alpha_i)]   (2)

To be more accurate, the phase shift values α_i have been chosen based on precise calculations by different researchers to compute the phase ϕ. In this paper, the phase shift values 0, π/2, π, 3π/2 are selected to yield a closed-form solution for ϕ. Substituting these four values into equation (2), and writing Φ = 2πf_i y + ϕ, the following equations are acquired:

I_1 = I_0\,[1 + \gamma\cos\Phi]   (3)
I_2 = I_0\,[1 - \gamma\sin\Phi]   (4)
I_3 = I_0\,[1 - \gamma\cos\Phi]   (5)
I_4 = I_0\,[1 + \gamma\sin\Phi]   (6)

Subtracting equation (5) from equation (3), we get

I_1 - I_3 = 2 I_0 \gamma \cos\Phi   (7)

Subtracting equation (6) from equation (4), we get

I_2 - I_4 = -2 I_0 \gamma \sin\Phi   (8)

Dividing equation (8) by equation (7), we get

\Phi = \arctan\!\left(\frac{I_4 - I_2}{I_1 - I_3}\right)   (9)

We use the high carrier frequencies of 24 Hz, 48 Hz and 96 Hz; they are selected in a ratio of 2 for computational convenience. The final phase is calculated by the following steps.
Step 1: Compute the phase ϕ_1 based on the base frequency.
Step 2: Compute the phase ϕ_2 based on the first high frequency. Unwrap ϕ_2 based on ϕ_1 and scale it into the range (−π, π] as the unwrapped phase ϕ_2u.
Step 3: Compute the phase ϕ_3 based on the second high frequency. Unwrap ϕ_3 based on ϕ_2u and scale it into the range (−π, π] as the unwrapped phase ϕ_3u.
Step 4: Compute the phase ϕ_4 based on the third high frequency. Unwrap ϕ_4 based on ϕ_3u and scale it into the range (−π, π] as the unwrapped phase ϕ_4u. The y coordinate of the projector, y_p, is then determined from ϕ_4u.
The affine matrices M_WC and M_WP between the world coordinate system and the camera and projector coordinate systems are computed from the calibration data (x_i^c, y_i^c, y_i^p) by equations (15) and (16). This data is extracted automatically from the captured images, and thus noise is inevitable. Consequently, the accuracy of the computed affine matrices M_WC and M_WP is decreased. In addition to noise, the camera lens and projector lens suffer from radial distortion, which decreases the accuracy further. The measurement accuracy of phase shift profilometry is determined directly by the accuracy of the two computed affine matrices M_WC and M_WP. Hence, removing the noise and radial lens distortion inherent in (x_i^c, y_i^c, y_i^p) and increasing the accuracy of the computed affine matrices are important for the measurement accuracy of phase shift profilometry.
In summary, the above algorithms (equations (3)-(14)) are applied to calculate the phase and y_p. With (x_i^c, y_i^c, y_i^p) known, the system is calibrated by equations (15) and (16) to compute the affine matrices M_WC and M_WP. Figure 1 shows the flowchart of the calibration process, which is also used in [15,16].
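As a concrete illustration of equations (3)-(9) and the coarse-to-fine unwrapping of steps 1-4, the following Python sketch implements the four-step arctangent phase retrieval and one temporal unwrapping step between two frequencies in a ratio of 2. The synthetic fringe images, function names and the specific unwrapping rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Closed-form wrapped phase from four frames shifted by 0, pi/2, pi, 3pi/2.

    Implements phi = arctan((I4 - I2) / (I1 - I3)), computed with arctan2
    so the result lies in (-pi, pi].
    """
    return np.arctan2(I4 - I2, I1 - I3)

def unwrap_with_reference(phi_high, phi_ref, ratio):
    """Unwrap a high-frequency wrapped phase using a lower-frequency reference.

    `ratio` is the frequency ratio between the two patterns (2 in this paper).
    The fringe order k is estimated from the scaled reference phase.
    """
    k = np.round((ratio * phi_ref - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Hypothetical usage with noise-free synthetic fringe images:
H, W = 64, 64
y = np.linspace(0, 1, H)[:, None] * np.ones((1, W))
true_phase = 2 * np.pi * y                       # base-frequency phase ramp
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]

def frames(phase):
    # I_i = I_0 [1 + gamma cos(phase + alpha_i)] with I_0 = 1, gamma = 0.5
    return [1.0 + 0.5 * np.cos(phase + a) for a in shifts]

phi1 = wrapped_phase(*frames(true_phase))        # base frequency
phi2 = wrapped_phase(*frames(2 * true_phase))    # first high frequency
phi2u = unwrap_with_reference(phi2, phi1, 2)     # step 2 of the pipeline
```

Repeating the last line with the next frequency pair reproduces steps 3 and 4; in the noise-free case the unwrapped phase phi2u coincides with the scaled reference 2*phi1.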

Noise definition and pattern modeling
In the phase shift profilometry system, both the camera and projector suffer from inherent optical aberrations, including astigmatism, spherical aberration, chromatic aberration, comatic aberration, defocus aberration, Petzval field curvature and radial distortion. In addition to these internal factors, external factors affect the measurement accuracy of the system, such as uneven illumination by different light sources, non-perfect image intensity distributions and non-perfect image processing algorithms. These internal and external factors affect the measurement accuracy of the system in the form of noise, which is defined as system noise and denoted by the random variable w_k in this study. Accordingly, the relation between the true point P_t and the measured point P_m is formulated as

P_m = P_t + w_k   (18)

It would be intractable to remove the system noise defined above separately based on its causes, due to the great number of influencing factors. On the contrary, it is much easier to remove the system noise defined in equation (18) as a whole. To remove this system noise during the calibration stage of the phase shift profilometry 3D surface imaging technique, we propose to compute the true points in a virtual space that is free of the internal optical aberrations and external factors that cause the system noise. To relate the virtual space with the practical world coordinate system, we first need to model the angle based on the calibration data of phase shift profilometry, as described in this section.
In the 3D imaging view, all the rays intersect at the projection center C(x_c, y_c, z_c) of the 3D imaging system, and the angle between any two rays can be computed as

\theta = \arccos\!\left(\frac{v_1 \cdot v_2}{\|v_1\|\,\|v_2\|}\right)   (19)

where v_1 and v_2 denote the direction vectors of the two rays. In the virtual space, the calibration grid is designed perfectly and the camera obeys the rule of central projection: a projection of one plane onto another plane such that a point on the first plane and its image on the second plane lie on a straight line through the projection center. In the perfectly designed pattern (the pattern on one plane of the calibration grid), each point represents one projected ray. The angle between the center ray and the ith ray is modeled as

\theta_i = \arctan(d_i / D)   (20)

where d_i denotes the distance between the center point and the ith point, and D denotes the distance between the projection center and the interception plane that is perpendicular to the center ray. Both d_i and D are unknowns. We assume the distance d_i equals the corresponding distance in the designed pattern. To determine the distance D, the following searching algorithm is proposed.

Step 1: From the practical calibration data, we compute the practical angles θ_i0 between the center ray and all the other rays with equation (19).
Step 2: In the virtual space, we compute the virtual angles θ_i between the center ray and all the other rays by equation (20), with an initial estimate of D, chosen as 1000 mm in this study. Note that the initial value of D is chosen close to the practical size of the used calibration grid for faster searching.
Step 3: We vary the value of D and compute the difference between the practical angles θ_i0 and the virtual angles θ_i as

\Delta\theta = \sum_i |\theta_{i0} - \theta_i|

where θ_i0 denotes the angle between the ith ray and the center ray.
Step 4: We choose the distance D that makes Δθ minimum for the angle computation by varying the value of D in the range of (−500 000, 500 000).
After the distance D is determined, the angle between the center ray and any other ray is computed by equation (20).
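The searching algorithm for D above can be sketched as a simple grid search over candidate distances, scoring each candidate by the total angle difference Δθ. The function names, candidate range and the noise-free test data below are illustrative assumptions:

```python
import numpy as np

def search_distance(theta_practical, d_design, candidates):
    """Grid-search the distance D in theta_i = arctan(d_i / D) (equation (20))
    that best matches the practically computed angles theta_i0 (equation (19)).

    Returns the D minimizing the total absolute angle difference.
    """
    best_D, best_err = None, np.inf
    for D in candidates:
        theta_virtual = np.arctan(d_design / D)
        err = np.sum(np.abs(theta_practical - theta_virtual))
        if err < best_err:
            best_D, best_err = D, err
    return best_D, best_err

# Hypothetical example: pattern distances in mm, with a true D of 1000 mm
d_design = np.array([30.0, 60.0, 90.0, 120.0])
theta_practical = np.arctan(d_design / 1000.0)   # noise-free "practical" angles
candidates = np.arange(500.0, 1500.0, 1.0)       # coarse search range
D_opt, err = search_distance(theta_practical, d_design, candidates)
```

With noisy practical angles the minimum of Δθ is no longer exactly zero, but the same search recovers the best-fitting D; a coarse-to-fine refinement of the candidate grid keeps the search over a wide range such as (−500 000, 500 000) tractable.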
In the virtual space, the rays could be modeled and determined after the angles are modeled. To relate the practical pattern (the pattern on one plane of the calibration grid) in the 3D imaging view and the ideally designed pattern in the virtual space, we use an interception plane to intercept the modeled rays and find the optimal interception plane. The optimal interception plane yields the interception pattern that matches the practical pattern (the pattern on one plane of the calibration grid) best. The optimal interception plane is found by the following steps.
Step 1: In the virtual space, we model the rays through the points of the perfectly designed pattern (the pattern on one plane of the calibration grid) and the projection center C(x_c, y_c, z_c). The unit of the modeled points is chosen as mm for convenience. The modeled rays are formulated as

R_i(t) = C + t\,(P_i^W - C)

where P_i^W is the ith point in the perfectly designed pattern.
Step 2: In the virtual space, we use the plane ax + by + cz = 1 to intercept the modeled rays, and then compute the virtual distances d_i^m = ‖P_i^m − P_0^m‖ between the intercepted center point P_0^m and the other intercepted points P_i^m on the same plane, where P_0^m is the interception of the ray through the center point P_0^W of the perfectly designed pattern.
Step 3: In the practical world coordinate system, we compute the practical distances d_i^p = ‖P_i^p − P_0^p‖ between the center point P_0^p and the same set of points as those used in Step 2 for the practical pattern in the 3D imaging view.
Step 4: We compute the total difference between the virtual distances d_i^m and the practical distances d_i^p as

\Delta d = \sum_i |d_i^m - d_i^p|
Step 5: We search for the interception plane P(a, b, c) that makes Δd minimum by varying the values of a, b and c in the range (−50, 50), and choose it as the optimal plane. The interception pattern generated by this optimal plane is taken as the final modeled pattern (figure 5(a); the scale is mm).
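Steps 1-5 amount to intersecting the modeled rays with candidate planes and scoring each plane by Δd. The Python sketch below uses a hypothetical noise-free pattern and, for brevity, a 1D search over c only (with a = b = 0) rather than the full 3D search over a, b and c:

```python
import numpy as np

def intercept(C, P, plane):
    """Intersect the ray C + t*(P - C) with the plane a*x + b*y + c*z = 1."""
    n = np.asarray(plane)
    d = P - C
    t = (1.0 - n @ C) / (n @ d)
    return C + t * d

def plane_cost(plane, C, pattern, d_practical, center_idx=0):
    """Total difference between modeled and practical center-to-point distances."""
    pts = np.array([intercept(C, P, plane) for P in pattern])
    d_model = np.linalg.norm(pts - pts[center_idx], axis=1)
    return np.sum(np.abs(d_model - d_practical))

# Hypothetical setup: rays from the projection center through 4 pattern points
C = np.zeros(3)
pattern = np.array([[0.0, 0.0, 1.0], [0.03, 0.0, 1.0],
                    [0.0, 0.03, 1.0], [0.03, 0.03, 1.0]])
true_plane = np.array([0.0, 0.0, 1e-3])          # the plane z = 1000 mm
pts = np.array([intercept(C, P, true_plane) for P in pattern])
d_practical = np.linalg.norm(pts - pts[0], axis=1)

# Coarse 1D search over c with a = b = 0 (a sketch of the full 3D search)
cands = [np.array([0.0, 0.0, c]) for c in np.linspace(5e-4, 2e-3, 151)]
best = min(cands, key=lambda p: plane_cost(p, C, pattern, d_practical))
```

The full method searches a, b and c jointly over (−50, 50); the same cost function applies, only the candidate set grows.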
After the optimal interception plane is determined, the optimal modeled pattern is obtained. However, the coordinates of the points on the optimal modeled pattern in the virtual space and the coordinates of the points on the practical pattern in the world coordinate system differ. To relate them, we register these two sets of points (X, Y, Z) with an affine transformation M that minimizes a least-squares registration error of the form

E = \omega \sum_{i=1}^{L} \| M P_i^t - P_i^m \|^2

where ω is a constant and L denotes the total number of points in the registered pattern. After registration, the practically computed (measured) points P_m(x_i^c, y_i^c, y_i^p) are replaced with the modeled (true) points P_t(x_i^c, y_i^c, y_i^p) for the calibration of the 3D imaging system. In summary, the proposed pattern modeling method is demonstrated by the flowchart in figure 2. Figure 3 shows the flowchart of the calibration process with the above pattern modeling algorithms.
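The registration step can be sketched as an ordinary least-squares fit of an affine matrix between the two point sets. The solver choice (np.linalg.lstsq on homogeneous coordinates) and the synthetic rotation-plus-translation example are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points.

    Minimizes sum_i ||M @ [src_i; 1] - dst_i||^2. src, dst: (L, 3) arrays.
    Returns M as a (3, 4) matrix [A | t].
    """
    L = src.shape[0]
    A = np.hstack([src, np.ones((L, 1))])        # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (4, 3) solution
    return M.T                                    # (3, 4) affine matrix

def apply_affine(M, pts):
    """Apply the (3, 4) affine matrix to an (L, 3) point set."""
    return pts @ M[:, :3].T + M[:, 3]

# Hypothetical example: recover a known rotation + translation exactly
rng = np.random.default_rng(0)
src = rng.uniform(-100, 100, size=(28, 3))       # 28 modeled (true) points
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
dst = src @ R.T + np.array([5.0, -3.0, 2.0])     # "measured" points, noise-free
M = fit_affine(src, dst)
```

With noisy measured points the fit is no longer exact; the residual of the least-squares solution then plays the role of the registration error E above.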
To determine the projection center C(x_c, y_c, z_c) of the 3D imaging system more accurately, we choose (x_c, y_c) as the center of the camera coordinate system and the initial value of z_c as 10 000. We then search for the optimal z_c by minimizing the total calibration error of the 28 calibration points modeled by the method described above. In fact, the initially designated projection center already yields good calibration accuracy, while the further search achieves better calibration accuracy. The selection of the center affects the calibration accuracy because it affects the accuracy of finding the interception plane in the predefined range (−50, 50). Figure 4 illustrates the 3D imaging system, in which the 3D surface of a ball is imaged by computing the distortion of the projected phase shift pattern. θ denotes the angle between the optical axis of the camera and the optical axis of the projector.

Experimental results
In our experiments, a calibration grid with 28 points is used. To remove the noise, we model the points on the two planes of the grid (the blue points on the back plane and the red points on the front plane, as shown in figure 5) independently. The angles modeled by equation (20) versus the angles computed by equation (19) are plotted in figures 6(a) and (b) for the two sets of points respectively. As can be seen, the modeled angles match the computed angles quite well, which is critical for the computation of the projection center. The modeled points versus the two sets of computed points on the two planes (figure 5(b)) are shown in figures 7(a) and (b). As can be seen, they match well, which indicates that the proposed pattern modeling method is applicable to the phase shift profilometry 3D imaging system.
To see the effect of noise removal, the coordinate differences before and after pattern modeling are plotted in figures 8(a)-(f). In figures 8(a)-(c), the differences of the camera and projector coordinates (x_i^c, y_i^c, y_i^p) vary randomly. These random variations are caused by the system noise. In figures 8(d)-(f), the noise (random variations) was removed successfully, which verifies the effectiveness of the proposed pattern modeling method.
The measurement accuracy with and without pattern modeling is 0.65 mm and 0.95 mm respectively when measuring an object with a length of about 1000 mm; the relative measurement errors are thus 0.065% and 0.095% respectively. The average error decrease achieved by the proposed pattern modeling method is 32%. In addition, the proposed pattern modeling method takes an extra 32.62 s in MATLAB and 2.38 s in C++ to obtain the affine matrices M_WC and M_WP, which is acceptable in practice since the calibration process is run only once before the measurement stage. Figure 9 shows the reconstructed grid points versus the original virtual points. As can be seen, they match well. Finally, figure 10 shows the reconstruction results of the author's face (cropped from the whole set of 3D data points for better viewing) obtained by the presented phase shift profilometry 3D imaging system. As can be seen, the 3D surface information (depth) of the face is reconstructed vividly.

Conclusion
A 3D surface imaging technique based on the well-known phase shift profilometry is presented in this paper. In contrast to previous research, four phase shifts are combined with three high carrier frequencies to compute the phase map and reduce the noise during the reconstruction stage. A pattern modeling method is proposed to remove the noise during the calibration stage. The true points of the calibration grid are obtained after removing the noise defined in this paper, and the measured points are then replaced with the true points for system re-calibration. Experimental results showed that removing the noise from the calibration data improves the measurement accuracy of the phase shift profilometry 3D imaging system.