Modeling and analysis of pixel quantization error of binocular vision system with unequal focal length

Current analyses of pixel quantization error in binocular vision systems have the following problems: most of the literature assumes an ideal binocular vision system with equal focal lengths, and the precision of the pixel quantization analysis is relatively low. To solve these problems, an analysis method of pixel quantization error for a binocular vision system with unequal focal lengths is proposed. Firstly, a pixel quantization error model is established and a mathematical expression is presented to characterize it. Secondly, the measurement error of an object point is discussed for the cases of equal and unequal focal lengths respectively. Finally, simulation experiments were conducted and the effect of several parameters on the measurement error was analyzed. The reasonability and efficiency of the proposed methods are verified.


Introduction
With the rapid development of stereo vision technology, the in-depth application and development of the measurement field are limited by measurement precision; therefore, ever higher measurement precision is required. Many factors affect measurement accuracy, such as the precision of camera calibration, lens parameters and the structural parameters of the system. Aguilar evaluated the precision of camera calibration and measurement methods [1]. Li analyzed the average pixel quantization error, total error and photon noise of multiple images under fixed and varying exposure times [2]. Sankowski proposed a simulation model and a mathematical formula to determine measurement error [3]. Yu analyzed the correlation of the structural parameters of a parallel binocular vision system and discussed the effects of structural parameters such as baseline distance, focal length and visual angle on measurement accuracy [4].
The research on analyzing pixel quantization error is as follows. Frane projected the 3D error region onto a 2D diamond area and analyzed the relationship between pixel quantization error and object depth, baseline distance and focal length [5]. Blostein projected the 3D pixel quantization error model into a 2D diamond region and derived the closed-form probability distribution function of the measurement error [6]. Wu modeled the 3D error region as a polyhedron, fitted the vertices of the polyhedron to an ellipsoid, and used the volume of the ellipsoid to estimate the measurement error [7]. Fooladgar modeled the field of view of a pixel as a cone and proposed three simplification methods: line-circle projection, line-cone intersection and the Lagrange method [8][9]. Behzad proposed a mathematical model to estimate the quantization error of a hexagonal structure [10]. Some literature analyzed the probability density function of the depth error caused by pixel quantization and calculated the expectation of the depth error amplitude [11][12][13].
In this paper, pixel quantization error is investigated for the cases of equal and unequal focal lengths respectively. Firstly, a geometric analysis and a mathematical expression of the pixel quantization error model are presented. Then, characterizations of measurement error are discussed, including the line-line intersection method and the midpoint-of-common-perpendicular method for lines in different planes. In addition, the measurement error caused by pixel quantization is estimated by the volume of the 3D uncertain region. Finally, simulation experiments were carried out, and the discussed method was compared with the 3D convex hull algorithm under the condition of unequal focal lengths.

Geometric analysis of pixel quantization error model
In this paper, the pinhole camera model is adopted. Firstly, an object point P in the world coordinate system is marked as $(x_w, y_w, z_w)$. Then, as shown in equation (1), object point P is converted to the camera coordinate system, where the corresponding point is marked as $(x_c, y_c, z_c)$:

$$[x_c, y_c, z_c]^T = R\,[x_w, y_w, z_w]^T + T \qquad (1)$$

When the object point is projected onto the image plane, its physical coordinate $(x, y)$ is expressed as equation (2):

$$x = f\,x_c / z_c, \qquad y = f\,y_c / z_c \qquad (2)$$
Here, $R$ is the rotation matrix from the world coordinate system to the camera coordinate system, $T = (t_x, t_y, t_z)^T$ is the translation vector from the world coordinate system to the camera coordinate system, and $f$ is the focal length. The physical coordinates of the left and right image points are denoted as $(x_1, y_1)$ and $(x_2, y_2)$ respectively. Meanwhile, the baseline distance and the focal lengths of the left and right cameras are marked as $d$, $f_1$ and $f_2$ respectively. In this paper, the world coordinate system coincides with the left camera coordinate system. Therefore, the translation vectors of the left and right cameras are $(0, 0, 0)^T$ and $(-d, 0, 0)^T$ respectively, and the rotation matrices of both cameras are identity matrices. Thus, the physical coordinates of the left and right image points are obtained from equations (3) and (4):

$$x_1 = f_1\,x_w / z_w, \qquad y_1 = f_1\,y_w / z_w \qquad (3)$$

$$x_2 = f_2\,(x_w - d) / z_w, \qquad y_2 = f_2\,y_w / z_w \qquad (4)$$

Due to pixel quantization in the image plane, the image points will not be located precisely at the coordinates $(x_1, y_1)$ and $(x_2, y_2)$. In reality, they will be located within a region of half a pixel in each direction, as shown in figure 1, where $d_u$ and $d_v$ are equal to half of the length and width of the pixel respectively. According to the perspective projection model, the perspective center of a camera and the four vertices of the pixel quantization unit form a projection cone. The intersection region of the left and right cones is the uncertain region of the object point, as shown in figure 2. In figure 2, the eight vertices of the left and right pixel units are marked as a, b, c, d and e, f, g, h respectively. The physical coordinates of these vertices are shown in equations (5) and (6):

$$a = (x_1 - d_u,\, y_1 + d_v),\quad b = (x_1 + d_u,\, y_1 + d_v),\quad c = (x_1 + d_u,\, y_1 - d_v),\quad d = (x_1 - d_u,\, y_1 - d_v) \qquad (5)$$
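As a concrete sketch of the stereo projection in equations (3) and (4), the image coordinates can be computed in a few lines. The function name and the convention that the right camera is offset by the baseline d along the x-axis are assumptions of this illustration, not notation from the paper.

```python
def project_stereo(P, d, f1, f2):
    """Project a world point P = (xw, yw, zw) onto the left and right
    image planes of a parallel stereo rig (baseline d along the x-axis).
    Returns the physical image coordinates (x1, y1) and (x2, y2),
    following equations (3) and (4)."""
    xw, yw, zw = P
    x1, y1 = f1 * xw / zw, f1 * yw / zw          # left camera at the origin
    x2, y2 = f2 * (xw - d) / zw, f2 * yw / zw    # right camera shifted by d
    return (x1, y1), (x2, y2)
```

For example, a point on the left optical axis projects to the left image center, while its right image coordinate carries the disparity induced by the baseline.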
$$e = (x_2 - d_u,\, y_2 + d_v),\quad f = (x_2 + d_u,\, y_2 + d_v),\quad g = (x_2 + d_u,\, y_2 - d_v),\quad h = (x_2 - d_u,\, y_2 - d_v) \qquad (6)$$

Then, the world coordinates of the eight vertices are calculated. Since each vertex lies on its image plane, vertices a and e have camera coordinates $(x_a, y_a, f_1)$ and $(x_e, y_e, f_2)$ respectively. The world coordinates of vertices a and e can then be obtained from equations (7) and (8).
Here, $R_1$ and $R_2$ represent the rotation matrices from the left and right camera coordinate systems to the world coordinate system respectively. As discussed above, the rotation matrices are identity matrices and the translation vectors are known. Therefore, equations (9) and (10) can be obtained:

$$P_a = (x_1 - d_u,\; y_1 + d_v,\; f_1) \qquad (9)$$

$$P_e = (x_2 - d_u + d,\; y_2 + d_v,\; f_2) \qquad (10)$$

Similarly, the world coordinates of the other six vertices are given by equations (11) and (12).
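The vertex computation described above can be sketched as follows. The corner ordering and the helper name are assumptions of this illustration; the right-camera vertices are shifted back to the world frame by the baseline d, matching the identity-rotation setup.

```python
import numpy as np

def quantization_vertices(p_left, p_right, du, dv, d, f1, f2):
    """World coordinates of the eight vertices of the left and right
    pixel quantization units: the left vertices lie on the plane z = f1,
    the right vertices on z = f2 shifted by the baseline d along x.
    The corner labelling a..d / e..h follows the paper's figure 2,
    but the exact ordering here is an assumption."""
    x1, y1 = p_left
    x2, y2 = p_right
    corners = [(-du, +dv), (+du, +dv), (+du, -dv), (-du, -dv)]
    left = [np.array([x1 + cu, y1 + cv, f1]) for cu, cv in corners]
    right = [np.array([x2 + cu + d, y2 + cv, f2]) for cu, cv in corners]
    return left, right
```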

The mathematical expression of pixel quantization error model
The pixel quantization error model shown in figure 2 is analyzed mathematically. Firstly, the eight edges of the left and right projection cones are represented by mathematical expressions. Each edge is the straight line through the perspective center of a camera and one vertex of its pixel quantization unit, and such a line can be defined as in formula (13). In equation (14), $(m, n, p)$ is the direction vector of the straight line, so the parametric equation of the line is expressed as equation (15):

$$x = x_0 + m t, \qquad y = y_0 + n t, \qquad z = z_0 + p t \qquad (15)$$

Solving the corresponding edge equations yields the intersection vertices given by equations (16)-(20). The hexahedron formed by these vertices is the uncertain region of the object point. Therefore, under the assumption that object points are uniformly distributed, the volume of the uncertain region can represent the measurement error of the object point. However, the volume of a general hexahedron cannot be calculated directly, so the uncertain region is divided into five tetrahedra, because the volume of a tetrahedron is easy to calculate. These five tetrahedra are built from the intersection vertices, e.g. Intersection_(1,3,4,7). The volume of each tetrahedron follows equation (29):

$$V = \frac{1}{6}\,\mathrm{abs}(|A|) \qquad (29)$$

where abs(·) is the absolute value of the number in parentheses and $|A|$ is the determinant of the matrix $A$ whose rows are the three edge vectors of the tetrahedron emanating from one vertex. According to equation (29), the volumes of the five tetrahedra are obtained respectively, as shown in equations (30)-(34). Notably, these volume formulas are independent of $x$ and $y$. Finally, the volume of the uncertain region is the sum of the five tetrahedron volumes.
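The determinant formula of equation (29) and the five-tetrahedron summation can be sketched as below. The function names are illustrative, and the decomposition indices are supplied by the caller, since the paper's exact split is tied to its figure 2 labelling.

```python
import numpy as np

def tetrahedron_volume(p0, p1, p2, p3):
    """Volume of a tetrahedron: V = abs(det(A)) / 6, where the rows of A
    are the three edge vectors from p0, matching equation (29)."""
    A = np.array([p1 - p0, p2 - p0, p3 - p0], dtype=float)
    return abs(np.linalg.det(A)) / 6.0

def hexahedron_volume(vertices, tetra_indices):
    """Sum the volumes of the tetrahedra a convex hexahedron is split
    into; `tetra_indices` lists four vertex indices per tetrahedron."""
    return sum(tetrahedron_volume(*(vertices[i] for i in idx))
               for idx in tetra_indices)
```

As a sanity check, a unit cube split into the standard five tetrahedra sums to a volume of 1.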

Characterization of object point measurement error with unequal focal length
In the case of unequal focal lengths, each pair of corresponding projection-cone edges in figure 2 lies in different planes, i.e. the edges are skew, so their intersections cannot be obtained. An approximate method is therefore used to characterize the measurement error in this paper. Firstly, the midpoint of the common perpendicular of each pair of skew lines is calculated. Then, these midpoints are taken as the vertices of the uncertainty region, which is again a convex hexahedron. Similarly, when object points follow a uniform distribution, equation (29) is directly applied in this section. Meanwhile, the convex hull algorithm is used for comparison with the discussed method. Below we elaborate the method of solving for the midpoint of the common perpendicular of two skew lines.
Two points on line $L_1$ are $A(x_A, y_A, z_A)$ and $B(x_B, y_B, z_B)$, and two points on line $L_2$ are $C(x_C, y_C, z_C)$ and $D(x_D, y_D, z_D)$. Firstly, the direction vectors of $L_1$ and $L_2$ are calculated, denoted as $\vec{u} = B - A$ and $\vec{v} = D - C$ respectively, as shown in equation (35). Then the common perpendicular vector of $L_1$ and $L_2$ is calculated, denoted as $\vec{w} = \vec{u} \times \vec{v}$, as shown in equation (36).
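A minimal sketch of the midpoint-of-common-perpendicular computation follows. Here the foot points are found by solving the two orthogonality conditions directly, which is equivalent to using the cross-product vector of equation (36); the function name is illustrative.

```python
import numpy as np

def skew_midpoint(A, B, C, D):
    """Midpoint of the common perpendicular of line L1 through A, B and
    line L2 through C, D. The foot points P1 = A + s*u and P2 = C + t*v
    satisfy (P1 - P2) . u = 0 and (P1 - P2) . v = 0, where u and v are
    the direction vectors of the two lines (equation (35))."""
    A, B, C, D = (np.asarray(p, dtype=float) for p in (A, B, C, D))
    u, v = B - A, D - C
    w0 = A - C
    # Orthogonality conditions as a 2x2 linear system in (s, t).
    M = np.array([[u @ u, -(u @ v)],
                  [u @ v, -(v @ v)]])
    rhs = np.array([-(w0 @ u), -(w0 @ v)])
    s, t = np.linalg.solve(M, rhs)
    P1, P2 = A + s * u, C + t * v
    return (P1 + P2) / 2.0
```

For two skew lines along the x- and y-axes separated along z, the midpoint lies halfway between the planes of the two lines, as expected.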

Experiment
In this section, simulation experiments were carried out to verify the reasonability and efficiency of the discussed methods. The effect of several parameters on pixel quantization error is analyzed; these parameters include baseline length, focal length, pixel size and depth. The simulation parameters are shown in Table 1.
Simulation results of the vertex coordinates of the uncertainty region with equal and unequal focal lengths are shown in table 2 and table 3 respectively. Furthermore, (a) and (b) in figure 3 are the simulation results of the uncertain region obtained by the method discussed in this paper, and (c) is the simulation result of the convex hull algorithm under the condition of unequal focal lengths. As can be seen from the figures, each uncertain region is a hexahedron. Under the condition of unequal focal lengths, (b) and (c) also show that the result of the discussed method and that of the convex hull algorithm are the same. Therefore, the reasonability and efficiency of the methods mentioned above are verified.

Table 1. Simulation parameters of the pixel quantization error model

Table 2. Vertex coordinates of the uncertainty region in the case of equal focal lengths

Figure 4 shows the variation of error volume with baseline length. In the case of unequal focal lengths, both the discussed method and the convex hull algorithm are simulated. As can be seen from the figure, the error volume decreases as the baseline length increases. Because the position of the object point is fixed, when the baseline length increases the left and right image points move away from the centers of the image planes, so the projection cones become smaller and their intersection region becomes narrower, thus reducing the error volume.

Figure 5 depicts the change of error volume with focal length. In the case of unequal focal lengths, the focal length of the left camera is varied. As shown in the figure, the error volume decreases as the focal length increases. According to the error model, the projection cones of the left and right pixel units become narrower when the focal length increases, resulting in a smaller spatial intersection region. Therefore, in practical projects, a camera with a large focal length should be selected where possible to improve measurement precision.
Figure 6 shows the effect of pixel size on the error volume: the quantization error volume increases with pixel size, as shown in the figure. The larger the pixel size, the larger the projection cone of the pixel unit, and hence the larger the uncertainty region. Figure 7 shows that the farther the object point is from the cameras, the greater the measurement error. Here the object point is (0, 0, z), with z variable. Obviously, when the object point is far from the cameras, its pixel projection cones intersect at a farther place; as the uncertain region grows, the measurement error becomes greater.
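The trends just described can be cross-checked against the standard first-order depth error of disparity quantization, Δz ≈ z²·Δd/(f·b). This is not the paper's volume-based estimate, only a quick sanity check that the error grows with depth and pixel size and shrinks with baseline and focal length; the function name is an assumption of this sketch.

```python
def depth_quantization_error(z, f, b, pixel):
    """First-order depth error caused by quantizing disparity to one
    pixel in a parallel stereo rig: dz ~ z^2 * pixel / (f * b).
    A standard approximation, used here only to illustrate the trends
    in figures 4-7, not the paper's volume-based estimate."""
    return z * z * pixel / (f * b)
```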
The error volume of object points on a plane parallel to the cameras is also analyzed, as shown in figure 8. It can be seen from the figure that the error volume does not change under the condition of equal focal lengths, which agrees with the analysis in the third section. In the case of unequal focal lengths, the result of the discussed algorithm likewise does not change, but this does not prove that the actual situation is the same, because an approximation method is used in this paper; nor can it demonstrate that the error volume is independent of x and y.

Conclusion
This paper discussed an analysis method of pixel quantization error. Firstly, a pixel quantization error model is established. Then, a mathematical expression is presented to characterize the pixel quantization error model. Meanwhile, in order to obtain the vertices of the uncertain region, the line-line intersection method and the midpoint-of-common-perpendicular method are illustrated. Moreover, the volume of the uncertain region is used to estimate the measurement error. Finally, the reasonability and efficiency of the discussed methods are verified by simulation experiments. This paper draws an important conclusion: the error volume of object points on a plane parallel to the cameras does not change under the condition of equal focal lengths. This conclusion differs from the following literature. Sharma concluded that the dynamic range on the horizontal axis (x-axis) was larger than that on the vertical axis (y-axis) and that the error volume surface formed a valley [7]. Fooladgar concluded that an object point far from the camera center had a low error volume [8][9]. The reason the conclusion of this paper differs from these literatures is that Sharma and Fooladgar both used approximate volumes to estimate the measurement error, while the real volume is used in this paper, which has more practical reference value.