Motion-induced error compensation for phase shifting profilometry

Abstract: This paper proposes a novel method to substantially reduce motion-induced phase error in phase-shifting profilometry. We first estimate the motion of an object from the difference between two subsequent 3D frames. After that, by leveraging the projector's pinhole model, we determine the motion-induced phase shift error from the estimated motion. A generic phase-shifting algorithm that accounts for phase shift error is then utilized to compute the phase. Experiments demonstrate that, for a standard single-projector, single-camera digital fringe projection system, the proposed algorithm effectively improves measurement quality by compensating for the phase shift error introduced by both rigid and nonrigid motion.


Introduction
Phase-shifting profilometry (PSP) is widely employed in 3D shape measurement because of its accuracy, resolution, speed, and robustness to noise. In general, PSP uses multiple fringe images with precisely known phase shifts to accurately recover the phase, and thus the measured scene should remain stationary while these phase-shifted fringe images are being captured.
For practical PSP systems, phase shifts are not always precisely known, and the unknown phase shifts introduce measurement errors. In a conventional laser interferometry system, the phase shift error could be introduced by the displacement error of the mirror driven by a piezoelectric device. The phase-shift error, in general, could also be introduced by motion of the object, i.e., the object moves during the time of capturing the required number of phase-shifted fringe patterns for phase determination. The former introduces homogeneous phase shift error that remains the same across the entire measurement surface, while the latter introduces nonhomogeneous phase shift error that could vary from point to point.
Researchers have developed methods to address the homogeneous phase-shift error problem. Wang and Han [1] proposed to recover random phase shifts by iteratively solving for the phase and the actual phase shifts using least-squares optimization. This method has relatively stable and fast convergence, yet assumes that the background intensity and modulation amplitude have no pixel-to-pixel variation. By analyzing the region with zero intensity difference between two images with different phase shifts, Guo et al. [2] developed a method to determine the actual phase shift. Gao et al. [3] proposed a method to extract the actual phase shift in interferometry with arbitrary unknown phase shifts. This method first computes a rough estimate of the phase shift based on the statistical properties of the object's phase, and then employs an iterative approach to further refine the phase-shift estimate.
Researchers have also attempted to address the more complex nonhomogeneous phase-shift error problem, especially the error introduced by object motion, which is common in practice. Lu et al. [4] proposed a method to reduce artifacts caused by planar motion parallel to the imaging plane. They placed a few markers on the object and analyzed the movement of the markers to estimate the rigid-body motion of the object. Lu et al. [5] later improved their method [4] to handle error induced by translation in the direction of the object height. In this newer method, the motion is estimated using the arbitrary phase shift extraction method developed by Wang and Han [1]. However, this method only works well for homogeneous background intensity and modulation amplitude. Feng et al. [6] proposed to apply the homogeneous phase-shift extraction method of Gao et al. [3] to the nonhomogeneous motion artifact problem by segmenting objects with different rigid shifts. However, this method still assumes that the phase shift error within a single segmented object is homogeneous. As a result, it may not work well for dynamically deformable objects, where each point can introduce a different phase shift error.
This paper proposes a novel method to substantially reduce the nonhomogeneous motion-induced phase shift error. We first estimate the motion of an object from the difference between two subsequent 3D frames. We then take advantage of the projector's pinhole model to determine the motion-induced phase shift error from the estimated motion. A generic phase-shifting algorithm considering phase shift error is then utilized to compute the phase. Since the proposed method treats each individual point independently, we experimentally demonstrated that it substantially reduces motion artifacts for dynamically deformable objects (e.g., human facial expressions).

Phase-shifting profilometry
Assume the $k$-th fringe image of a generic $M$-step phase-shifting algorithm can be described as

$$I_k(u^c, v^c) = A(u^c, v^c) + B(u^c, v^c)\cos\left[\phi(u^c, v^c) - \delta_k\right], \quad (1)$$

$$\delta_k = 2\pi k / M, \quad (2)$$

where $(u^c, v^c)$ denotes the pixel location in the camera image coordinate system, $A$ is the average intensity, $B$ is the intensity modulation, $\phi$ is the phase to be solved for, and $\delta_k$ is the phase shift. The phase can then be calculated as

$$\phi(u^c, v^c) = \tan^{-1}\left[\frac{\sum_{k=1}^{M} I_k \sin\delta_k}{\sum_{k=1}^{M} I_k \cos\delta_k}\right]. \quad (3)$$

The arctangent function in Eq. (3) gives the wrapped phase within $(-\pi, \pi]$ with $2\pi$ discontinuities. The desired continuous phase $\Phi(u^c, v^c)$ can be obtained by applying a phase unwrapping algorithm to determine $k(u^c, v^c)$, the number of $2\pi$'s to be added at each point, that is

$$\Phi(u^c, v^c) = \phi(u^c, v^c) + 2\pi \times k(u^c, v^c), \quad (4)$$

where $k(u^c, v^c)$ is an integer number that is often referred to as the fringe order.
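To make the phase retrieval of Eqs. (1)-(3) concrete, the following NumPy sketch computes the wrapped phase for a generic $M$-step algorithm; the function name and synthetic values are illustrative, not from the paper:

```python
import numpy as np

def wrapped_phase(images, deltas):
    """Wrapped phase of a generic M-step phase-shifting algorithm with
    fringe model I_k = A + B * cos(phi - delta_k), per Eq. (3)."""
    I = np.asarray(images, dtype=float)                 # (M, H, W)
    d = np.asarray(deltas, dtype=float)[:, None, None]  # (M, 1, 1)
    # phi = arctan[ sum(I_k sin d_k) / sum(I_k cos d_k) ], in (-pi, pi]
    return np.arctan2((I * np.sin(d)).sum(axis=0),
                      (I * np.cos(d)).sum(axis=0))
```

For the three-step algorithm used later in the paper, `deltas` would be $2\pi k/3$ for $k = 1, 2, 3$.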

Proposed error compensation method
Phase-shifting profilometry works well if $\phi(u^c, v^c)$ can be accurately determined using Eq. (3), which requires the phase shifts $\delta_k$ to be precisely known. However, if the object moves between different frames, the actual phase shift $\delta'_k$ differs from the ideal phase shift $\delta_k$ by an error $\epsilon_k$, i.e.,

$$\delta'_k(u^c, v^c) = \delta_k + \epsilon_k(u^c, v^c), \quad (5)$$

where $\epsilon_k(u^c, v^c)$ is motion dependent and thus can vary from point to point. Figure 1 illustrates how the motion of a deformable object introduces phase shift error. The camera image point $C_1$ corresponds to point $S_1$ on the object surface without motion, but actually corresponds to $S'_1$ if the object moves. The corresponding projected fringe pattern points are $P_1$ and $P'_1$, respectively. These two points on the projector correspond to two different phase values $\Phi_1$ and $\Phi'_1$, and we define the difference between these phase values as the phase shift error,

$$\epsilon_1 = \Phi'_1 - \Phi_1. \quad (6)$$

Similarly, a phase shift error also occurs for another camera image point $C_2$, whose corresponding object surface point is $S_2$ if the object does not move, and $S'_2$ if the object moves. The phase difference for the corresponding projector points $P_2$ and $P'_2$ is

$$\epsilon_2 = \Phi'_2 - \Phi_2. \quad (7)$$

Apparently, $\epsilon_1$ is not always the same as $\epsilon_2$ for different image points, and thus the phase shift error introduced by object motion is nonhomogeneous. In order to determine the phase shift error from object motion, we propose to utilize the pinhole model of the projector [8], which describes the relation between the 3D world coordinates $(x^w, y^w, z^w)$ and the 2D projector coordinates $(u^p, v^p)$ as

$$s^p \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix} = A^p \begin{bmatrix} R^p & t^p \end{bmatrix} \begin{bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} \\ P_{21} & P_{22} & P_{23} & P_{24} \\ P_{31} & P_{32} & P_{33} & P_{34} \end{bmatrix} \begin{bmatrix} x^w \\ y^w \\ z^w \\ 1 \end{bmatrix}, \quad (8)$$

where $s^p$ is the scaling factor, $R^p$ and $t^p$ are the rotation matrix and the translation vector between the world coordinate system and the lens coordinate system, and $A^p$ is the intrinsic parameter matrix describing the relation between the lens coordinate system and its image plane coordinate system. These parameters can be determined by structured light system calibration [8].
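The pinhole mapping of Eq. (8) is straightforward to evaluate numerically. The following sketch (function name and test matrix are illustrative, not calibration data) projects a world point through a given 3×4 matrix $P$:

```python
import numpy as np

def project_to_projector(P, xw, yw, zw):
    """Apply the pinhole model of Eq. (8): project a world point
    (x^w, y^w, z^w) to projector plane coordinates (u^p, v^p)."""
    # homogeneous projection; s is the scaling factor s^p
    su, sv, s = np.asarray(P, dtype=float) @ np.array([xw, yw, zw, 1.0])
    return su / s, sv / s
```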
From the pinhole model described in Eq. (8), we can clearly see that if a point at $(x^w, y^w, z^w)$ moves by $\Delta x^w$ in the $x$ direction, the corresponding point on the projector plane changes accordingly, from $(u^p, v^p)$ to $(u'^p, v'^p)$. Mathematically, the relationship between the movement along the $x$ direction and the corresponding change on the projector plane is governed by

$$\frac{\partial u^p}{\partial x^w} = \frac{P_{11}(P_{32} y^w + P_{33} z^w + P_{34}) - P_{31}(P_{12} y^w + P_{13} z^w + P_{14})}{(P_{31} x^w + P_{32} y^w + P_{33} z^w + P_{34})^2}. \quad (9)$$
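Equation (9) can be sanity-checked against a finite-difference approximation of the same pinhole model. In this sketch the helper names and the matrix values are arbitrary examples, not calibration results:

```python
import numpy as np

def du_dx(P, xw, yw, zw):
    """Analytic sensitivity of u^p to motion along x^w, per Eq. (9)."""
    denom = P[2, 0]*xw + P[2, 1]*yw + P[2, 2]*zw + P[2, 3]
    num = (P[0, 0] * (P[2, 1]*yw + P[2, 2]*zw + P[2, 3])
           - P[2, 0] * (P[0, 1]*yw + P[0, 2]*zw + P[0, 3]))
    return num / denom**2

def u_p(P, xw, yw, zw):
    """u^p from the pinhole model, used for the finite-difference check."""
    return ((P[0, 0]*xw + P[0, 1]*yw + P[0, 2]*zw + P[0, 3])
            / (P[2, 0]*xw + P[2, 1]*yw + P[2, 2]*zw + P[2, 3]))
```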
Similarly, motion along the $y$ and $z$ directions results in corresponding point changes on the projector plane. If the phase varies along the $u^p$ direction on the projector, remains constant along the $v^p$ direction, and the fringe period is $\lambda$ pixels on the projector, then the overall phase shift error induced by motion can be mathematically represented as

$$\epsilon_k(u^c, v^c) = \frac{2\pi}{\lambda}\left(\frac{\partial u^p}{\partial x^w}\Delta x^w_k + \frac{\partial u^p}{\partial y^w}\Delta y^w_k + \frac{\partial u^p}{\partial z^w}\Delta z^w_k\right). \quad (10)$$

This equation indicates that the motion-induced phase-shift error for a given point $(x^w, y^w, z^w)$ can be determined if the motion of the point is known. In this research, we propose to estimate the object motion as

$$\Delta x^w_k = \frac{k}{N}(x'^w - x^w), \quad \Delta y^w_k = \frac{k}{N}(y'^w - y^w), \quad \Delta z^w_k = \frac{k}{N}(z'^w - z^w), \quad (11)$$

where $(x^w, y^w, z^w)$ are the coordinates of a point in the current 3D frame, $(x'^w, y'^w, z'^w)$ are the coordinates of the same point in the subsequent 3D frame, and $N$ is the number of frames between the beginnings of two sequences of high-frequency fringe images that ultimately provide the wrapped phase for two successive 3D measurements. If we assume that the camera speed is not significantly slower than the motion, then $(u^c, v^c) \approx (u'^c, v'^c)$, and the object moves at an approximately constant speed within the time of capturing two successive 3D frames. However, this phase shift error estimate is approximate and may not be accurate. To further improve accuracy, we iteratively apply the same method to the updated 3D reconstructions from the previous step(s) until the algorithm converges; our experiments found that this process typically converges within 2 or 3 iterations. The estimated phase-shift error is used to determine the actual phase shift $\delta'_k$ in Eq. (5), which is then used to calculate the phase using Eqs. (2)-(3).
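The final step, a generic least-squares phase computation that accepts a different actual shift $\delta'_k$ at every pixel, could be sketched as follows. The vectorized normal-equation solve is an illustrative implementation choice, not the paper's code:

```python
import numpy as np

def phase_with_shift_error(images, deltas):
    """Least-squares phase when the actual shifts delta'_k = delta_k + eps_k
    vary per pixel.  Per-pixel linear model in three unknowns:
        I_k = A + (B cos phi) cos(delta'_k) + (B sin phi) sin(delta'_k).
    images: (M, H, W) fringe images; deltas: (M, H, W) actual shifts."""
    I = np.asarray(images, dtype=float)
    d = np.asarray(deltas, dtype=float)
    G = np.stack([np.ones_like(d), np.cos(d), np.sin(d)], axis=-1)  # (M,H,W,3)
    GtG = np.einsum('khwi,khwj->hwij', G, G)    # 3x3 normal matrix per pixel
    GtI = np.einsum('khwi,khw->hwi', G, I)
    x = np.linalg.solve(GtG, GtI[..., None])[..., 0]  # [A, B cos phi, B sin phi]
    return np.arctan2(x[..., 2], x[..., 1])
```

With at least three fringe images and distinct per-pixel shifts, the 3×3 normal matrix is invertible and each pixel is solved independently, which is what allows the method to handle nonhomogeneous (point-to-point) phase shift errors.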

Experiment
We built a PSP system to evaluate the performance of our proposed method. The system includes a complementary metal-oxide-semiconductor (CMOS) camera (model: PointGrey Grasshopper3 GS3-U3-23S6M) with an 8 mm lens (model: Computar M0814-MP2) and a digital light processing (DLP) projector development kit (model: LightCrafter 4500). The projector's resolution is 912 × 1140 pixels, and the camera's resolution was set to 640 × 480 pixels. The system was calibrated using the method described by Li et al. [9]. Both the projector and the camera speeds were set to 120 Hz. We employed a three-step phase-shifting algorithm for wrapped phase retrieval and a three-frequency temporal phase-unwrapping algorithm to obtain the continuous phase.
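One stage of the temporal phase-unwrapping step mentioned above can be sketched as follows, assuming the common reference-phase formulation (variable names are illustrative); a three-frequency scheme applies this stage twice, from the lowest frequency up to the highest:

```python
import numpy as np

def unwrap_stage(phi_high, Phi_ref, ratio):
    """Unwrap a wrapped high-frequency phase phi_high using an already
    unwrapped lower-frequency reference phase Phi_ref, where
    ratio = lambda_low / lambda_high.  Picks the fringe order k of
    Eq. (4) so that phi_high + 2*pi*k tracks Phi_ref * ratio."""
    k = np.round((Phi_ref * ratio - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```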
We evaluated the performance of the proposed motion error compensation method by measuring a moving sphere with a diameter of 79.2 mm. For this experiment, the sphere was moving at a speed of approximately 80 mm/s. Visualization 1 shows all 3D frames of the measurement sequence, and Figure 2 shows one representative 3D frame. Figure 2(a) shows the raw 3D result without our proposed method. The sphere surface is not smooth, as expected, because the 3D shape measurement speed is not sufficient to capture the motion of the object. After we employed our proposed phase-shift error compensation algorithm, the quality was drastically improved, as shown in Fig. 2(b). To quantitatively evaluate the improvement, we compared the measurement results with an ideal sphere. Figures 2(c) and 2(d) show the corresponding error maps, i.e., the difference between the measured data and the ideal sphere. Before error compensation, the mean error is 0.482 mm and the standard deviation is 0.209 mm. In contrast, after applying our phase-shift error compensation method, the mean error is reduced to 0.032 mm and the standard deviation to 0.037 mm.
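For reference, an error map of this kind could be computed by fitting the sphere center to the measured points by linear least squares and comparing each point's radial distance with the known radius. This is a generic sketch of such an evaluation, not the paper's code:

```python
import numpy as np

def sphere_error_map(points, diameter):
    """Fit the sphere center c by linear least squares using
    |p|^2 = 2 c . p + (r^2 - |c|^2), then return the signed radial
    error of each measured point against the known radius."""
    pts = np.asarray(points, dtype=float)        # (N, 3) measured points
    X = np.c_[2.0 * pts, np.ones(len(pts))]
    y = (pts ** 2).sum(axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    center = beta[:3]
    return np.linalg.norm(pts - center, axis=1) - diameter / 2.0
```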
To verify that our proposed method also works for more complex geometry than a smooth surface, we measured a surface with small, detailed features. Figure 3 shows the results of this experiment. Figure 3(a) shows a photograph of the object, and Fig. 3(b) shows the result without our proposed phase-shift error compensation method. The measured surface exhibits vertical stripes introduced by motion. In contrast, after applying our proposed method, the vertical stripes almost disappear, as shown in Fig. 3(c). To visually compare the performance of our proposed method against a ground truth, we measured the same object while it was stationary, and Fig. 3(d) shows the result. Visually, our proposed method produced the same result as the ground truth, demonstrating that it works well for non-smooth objects.
We also evaluated the performance of our proposed method to measure dynamically deformable objects. In this experiment, we measured human facial expressions. Figure 4 shows one representative 3D frame, and Visualization 2 shows the entire video sequence. Figure 4(a) shows the 3D reconstruction without adopting our phase-shift error compensation algorithm. This result clearly shows measurement errors (non-smooth surface geometry). We then employed our error compensation algorithm to process the same frame, and Fig. 4(b) shows the result. The motion artifacts are less obvious and the measurement quality is significantly improved. This experiment demonstrated that our proposed method can successfully alleviate motion artifacts even for dynamically deformable objects with complex surface geometry.
In addition, we noticed that our proposed method can also significantly improve texture quality. Figure 4(c) shows the texture of the frame before applying our phase-shift error compensation method, showing some vertical stripes on the image. Figure 4(d) shows the texture after applying our phase-shift error compensation method, and those stripes almost disappear. All these experimental data clearly demonstrate the effectiveness of our proposed motion error compensation method.

Conclusion
This paper has presented a novel method to reduce the nonhomogeneous phase shift error caused by object motion. The principle behind the proposed method was elucidated. Our experimental results demonstrated that the proposed method can effectively compensate for motion-induced phase-shift error and enhance measurement quality for objects with rigid motion as well as dynamically deformable objects.