Article

DoF-Dependent and Equal-Partition Based Lens Distortion Modeling and Calibration Method for Close-Range Photogrammetry

1 School of Mechanical and Electrical Engineering, China University of Petroleum (East China), Huangdao, Qingdao 266580, China
2 Polytechnic Institute, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Sensors 2020, 20(20), 5934; https://doi.org/10.3390/s20205934
Submission received: 24 September 2020 / Revised: 15 October 2020 / Accepted: 16 October 2020 / Published: 20 October 2020
(This article belongs to the Special Issue Visual and Camera Sensors)

Abstract
Lens distortion is closely related to the spatial position of depth of field (DoF), especially in close-range photography. The accurate characterization and precise calibration of DoF-dependent distortion are very important for improving the accuracy of close-range vision measurements. In this paper, to meet the needs of short-distance, small-focal-length photography, a DoF-dependent and equal-partition based lens distortion modeling and calibration method is proposed. Firstly, considering the direction along the optical axis, a DoF-dependent yet focusing-state-independent distortion model is proposed. With this method, manual adjustment of the focus and zoom rings is avoided, thus eliminating human errors. Secondly, considering the direction perpendicular to the optical axis, to solve the problem of insufficient distortion representation caused by using only one set of coefficients, a 2D-to-3D equal-increment partitioning method for lens distortion is proposed. Accurate characterization of DoF-dependent distortion is thus realized by fusing the distortion partitioning method and the DoF distortion model. Lastly, a calibration control field is designed. After extracting line segments within a partition, the decoupled calibration of distortion parameters and other camera model parameters is realized. Experimental results show that the maximum/average projection and angular reconstruction errors of the equal-increment partition based DoF distortion model are 0.11 pixels/0.05 pixels and 0.013°/0.011°, respectively. This demonstrates the validity of the lens distortion model and calibration method proposed in this paper.

1. Introduction

Vision measurement enables quantitative perception of scene information by combining image processing with calibrated camera parameters. Therefore, the calibration accuracy of the parameters is an important determinant of the vision measurement uncertainty. Lens distortion is closely related to the depth of field (DoF), which refers to the distance between the nearest and farthest objects that are in acceptably sharp focus in an image. For medium- or high-accuracy applications, close-range imaging parameters (e.g., a short object distance (<1 m) and a small focal length) are often adopted. In such cases, the DoF has a significant influence on lens distortion and, hence, becomes a major cause of vision measurement errors. For instance, to achieve micron-level accuracy when detecting the contouring error of a machine tool [1], the camera is placed 400 mm away from the focal plane to collect and analyze the image sequence of the interpolation trajectory running in the DoF. In this case, measurement errors ranging from dozens to hundreds of microns can be caused by large lens distortion. Therefore, to improve the vision measurement accuracy in close-range photogrammetry, accurate modeling and calibration of the DoF-dependent lens distortion are urgently needed.
The lens distortion model maps the relation between distorted and undistorted image points. The models vary with the type of optical system and include the polynomial distortion model, the logarithmic fish-eye distortion model [2], the polynomial fish-eye distortion model [2,3,4,5], the field-of-view (FoV) distortion model [6], the division distortion model [7,8], the rational function distortion model [9,10], and so on. In 1971, Brown [11,12] proposed the Gaussian polynomial function to express radial and decentering distortion, which is particularly suitable for studying the distortion of a standard lens in high-accuracy measurements [13,14]. Later, researchers noticed that the observed radial and decentering distortion varies with the focal length, the lens focusing state (i.e., focused or defocused), and the DoF position. Since then, research has focused on improving distortion calibration and modeling methods to obtain a precise representation of distortion behavior. Distortion calibration has proceeded in two directions: the coupled-calibration method and the decoupled-calibration method. The former can generally be divided into three types: the self-calibration method [15], the active calibration method, and the traditional calibration method [16]. Among the traditional ones, Zhang's calibration method [17] and its improved variants [18,19,20], used widely in industry and scientific research, are the most popular. In this coupled-calibration method, the distortion parameters are calculated by performing a full-scale optimization over all parameters. Due to the strong coupling effect, the estimation errors of the other parameters (i.e., intrinsic and extrinsic parameters) in the camera model are propagated to the distortion parameters, preventing optimal solutions from being obtained.
By contrast, the decoupled-calibration method does not involve coupling with other factors or entail any prior geometric knowledge of the calibration object; only geometric invariants of some image features, such as straight lines [6,12,21,22,23], vanishing points [24], or spheres [25], are needed to solve the parameters. Among these features, straight lines are easily found in scenes and extracted from noisy images, and thus have enormous potential.
Regarding the distortion modeling, some researchers incorporated the DoF into the distortion function. Magill [26] used the distortion of two focal planes at infinity to solve that of an arbitrary focal plane. Then, Brown [12] improved Magill’s model by establishing distortion models of any focal plane and any defocused plane (the plane perpendicular to the optical axis in the DoF) on the condition that the distortions of two focal planes are known. Soon after, Fryer [27], based on Brown’s model, realized the lens distortion calibration of an underwater camera [28]. Fraser and Shortis [29] introduced an empirical model and solved the Brown model’s problem of inaccurate description of large image distortion. Additionally, Dold [30] established a DoF distortion model that is different from Brown’s and solved the model parameters through the strategy of bundle adjustment. In 2004, Brakhage [31] characterized the DoF distortion of the telecentric lens in a fringe projection system by using Zernike Polynomials. Moreover, in 2006, the DoF distortion distribution of the grating projection system was experimentally analyzed by Bräuer-Burchardt. In 2008, Hanning [32] introduced depth (object distance) into the spline function to form a distortion model and used the model to calibrate radial distortion.
The above DoF distortion models not only depend on the focusing state but also relate to the distortion coefficients on the focal plane. For these models, on the one hand, the focusing state is usually adjusted by manually twisting the zoom and focus rings, which introduces human errors and changes the camera parameters. On the other hand, the focus distance and the distortion parameters on the focal plane cannot be determined accurately. To overcome these problems, Alvarez [33], based on Brown's and Fraser's models, deduced a radial distortion model suitable for planar scenarios. With this model, when the focal length is locked, distortion at any image position can be estimated by using two lines in a single photograph. In 2017, Dong [34] proposed a DoF distortion model, by which the researcher accurately calibrated the distortion parameters on arbitrary object planes and reduced the error from 0.055 mm to 0.028 mm in a measuring volume of 7.0 m × 3.5 m × 2.5 m at a large object distance of 6 m. Additionally, in 2019, Ricolfe-Viala [35] proposed a depth-dependent high-distortion lens calibration method, embedding the object distance in the division distortion model, with which highly distorted images can be corrected with only one distortion parameter. However, these researchers used only one set of coefficients, which is not sufficient to represent the distortion accurately. To address this problem, some scholars adopted the idea of partitioning, which uses several sets of distortion coefficients to characterize the image distortion. Such studies, however, are only applicable to partitioning a 2D object plane; they account for neither the distortion partition within the DoF nor the correlation between lens distortion and DoF. Our previous work partitioned the distortion with an equal radius [36].
Although it improved the vision measurement accuracy, the distortion correction accuracy within the partition corresponding to the image edge is still low. Besides, the distortion model we adopted depends on the focusing state of the lens and is thus less practical. In general, current distortion models and partitioning methods cannot accurately reflect the lens DoF distortion behavior in close-range photography, especially for short-distance measurements.
To solve the above problems, a lens distortion model and calibration method for short-distance measurement, which take into consideration the DoF dimension and the equal-increment partition of distortion, are proposed in this paper. The rest of this paper is organized as follows. In Section 2, a focusing-state-independent DoF distortion model, which involves only the spatial position of the observed point, is constructed. In Section 3, based on the model in the previous section, an equal-increment partitioning DoF distortion model is proposed, which enables a fine representation of the lens distortion in the photographic field. Section 4 details the calibration method for both the DoF distortion and the camera model parameters, as well as the image processing of the control field for distortion calibration. In Section 5, experimental verification of the proposed lens distortion model and calibration method is carried out. Finally, Section 6 concludes this paper.

2. Focusing-State-Independent DoF Distortion Model

The observed distortion of a point varies with its position within the DoF. Though the close-range imaging configuration increases the visible range, it enlarges the DoF image distortion, consequently affecting the measurement accuracy. To overcome the limitations of the aforementioned in-plane and DoF distortion models in vision measurement with short-distance, small-focal-length settings, a DoF-dependent yet focusing-state-independent distortion model is proposed in this paper.

2.1. Pinhole Camera Model with Distortion

As illustrated in Figure 1, the linear pinhole camera model depicts the one-to-one mapping between a 3D point in object space and its 2D projection in the image. Let p(u_li, v_li) be the undistorted coordinates mapped from a spatial point in the world coordinate system O_w-X_wY_wZ_w to the image coordinate system o-uv through the optical center O_C. Then, the camera mapping can be expressed as [17]
$$ z \begin{bmatrix} u_{li} \\ v_{li} \\ 1 \end{bmatrix} = \mathbf{K}\,\mathbf{M} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$
where z is the scaling factor; K is the intrinsic parameter matrix, which quantitatively characterizes the critical parameters of the image sensor (i.e., Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS)); the matrix M, expressing the transformation between the vision coordinate system (VCS) and the world coordinate system, consists of the rotation matrix R and the translation vector T.
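As a quick illustration, the linear mapping of Equation (1) can be sketched in a few lines of Python (a minimal sketch; the function name and the example intrinsic values are hypothetical, not taken from the paper):

```python
import numpy as np

def project_pinhole(K, R, T, Xw):
    """Project a 3D world point through the linear pinhole model.

    K: 3x3 intrinsic matrix; R: 3x3 rotation; T: 3-vector translation;
    Xw: 3-vector world point. Returns the undistorted pixel (u, v).
    """
    M = np.hstack([R, T.reshape(3, 1)])     # 3x4 extrinsic matrix [R | T]
    x = K @ M @ np.append(Xw, 1.0)          # homogeneous projection z*[u, v, 1]
    return x[:2] / x[2]                     # divide out the scaling factor z

# Example: identity pose, 800-pixel focal length, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
u, v = project_pinhole(K, np.eye(3), np.zeros(3), np.array([0.1, -0.05, 1.0]))
```

The returned (u, v) is the ideal, distortion-free projection; the lens distortion discussed next is modeled on top of it.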
However, manufacturing and assembly errors lead to radial and decentering lens distortion. Consequently, the pinhole assumption does not hold for real camera systems, and the image projection of a straight line is bent into a curve (Figure 1b,c). To characterize the lens distortion, Brown proposed a distortion model in polynomial form [11,12]:
$$ \begin{cases} u_{li} = \bar{u}_{li} + \delta_{u_{li}}, \quad v_{li} = \bar{v}_{li} + \delta_{v_{li}} \\ \delta_{u_{li}} = \bar{u}_{li}\left(K_1 r^2 + K_2 r^4 + \cdots\right) + \left[P_1\left(r^2 + 2\bar{u}_{li}^2\right) + 2P_2\,\bar{u}_{li}\bar{v}_{li}\right] \\ \delta_{v_{li}} = \bar{v}_{li}\left(K_1 r^2 + K_2 r^4 + \cdots\right) + \left[P_2\left(r^2 + 2\bar{v}_{li}^2\right) + 2P_1\,\bar{u}_{li}\bar{v}_{li}\right] \end{cases} $$
where (ū_li, v̄_li) are the distorted coordinates; δ_{u_li} and δ_{v_li} are the distortion functions of an image point in the u and v directions, respectively; (u_0, v_0) denotes the distortion center; $r = \sqrt{(\bar{u}_{li} - u_0)^2 + (\bar{v}_{li} - v_0)^2}$ is the distortion radius of the image point; K_1 and K_2 are the first- and second-order radial distortion coefficients, respectively; P_1 and P_2 are the first- and second-order decentering distortion coefficients, respectively.
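The polynomial model of Equation (2) can be sketched as follows (a minimal illustration keeping the first two radial and two decentering terms; the function name and the numeric values in the example are placeholders):

```python
def brown_distort(u_bar, v_bar, u0, v0, K1, K2, P1, P2):
    """Apply the Brown polynomial distortion model to a distorted point.

    Returns (u, v) = the point plus the radial and decentering
    corrections, with the radius r measured from the distortion
    center (u0, v0).
    """
    ub, vb = u_bar - u0, v_bar - v0           # center-relative coordinates
    r2 = ub * ub + vb * vb                    # squared distortion radius r^2
    radial = K1 * r2 + K2 * r2 * r2           # radial polynomial K1*r^2 + K2*r^4
    du = ub * radial + (P1 * (r2 + 2 * ub * ub) + 2 * P2 * ub * vb)
    dv = vb * radial + (P2 * (r2 + 2 * vb * vb) + 2 * P1 * ub * vb)
    return u_bar + du, v_bar + dv
```

With all four coefficients zero the mapping is the identity, which matches the undistorted pinhole case.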

2.2. Distortion Model in the Focal Plane

2.2.1. Radial Distortion Model

Let δr_∞ be the radial distortion for a lens focused at plus infinity and δr_{-∞} that for a lens focused at minus infinity; m_s refers to the vertical magnification in the focal plane at object distance s. According to Magill's model [26], δr_s, the lens radial distortion in the focal plane, can be expressed as

$$ \delta r_s = \delta r_{\infty} - m_s\, \delta r_{-\infty} $$
Let δr_{s_m} and δr_{s_k} be the radial distortions in the focal planes when the lens is focused at the distances s_m and s_k, respectively. Then, the distortion function δr_s for the focal plane at distance s can be written as
$$ \delta r_s = \alpha_s\, \delta r_{s_m} + \left(1 - \alpha_s\right) \delta r_{s_k} $$
where f is the focal length and $\alpha_s = \frac{(s_k - s)(s_m - f)}{(s_k - s_m)(s - f)}$. The i-th radial distortion coefficients K_i^s for the focal plane at distance s are
$$ K_i^s = \alpha_s\, K_i^{s_m} + \left(1 - \alpha_s\right) K_i^{s_k}, \quad i = 1, 2 $$
where K_i^{s_m} and K_i^{s_k} are the i-th radial distortion coefficients when the lens is focused at the distances s_m and s_k, respectively. As Equation (5) shows, if the radial distortion coefficients of two different focal planes are known, those of any focal plane can be obtained.
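Equation (5) can be sketched as below (hypothetical helper names; the weight α_s is written so that it equals 1 at s = s_m and 0 at s = s_k, which means the two calibrated focal planes are reproduced exactly):

```python
def alpha_s(s, s_m, s_k, f):
    """Interpolation weight between two calibrated focal planes.

    alpha = 1 when s = s_m and alpha = 0 when s = s_k, so the radial
    coefficients of the two known focal planes are recovered exactly.
    """
    return ((s_k - s) * (s_m - f)) / ((s_k - s_m) * (s - f))

def radial_coeff_focused(K_sm, K_sk, s, s_m, s_k, f):
    """K_i for a lens focused at distance s, interpolated from the
    coefficients K_sm and K_sk calibrated at focus distances s_m, s_k."""
    a = alpha_s(s, s_m, s_k, f)
    return a * K_sm + (1.0 - a) * K_sk
```

The distances and focal length here must share one unit (e.g., millimeters).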

2.2.2. Decentering Distortion

As for the decentering distortion, the equations are as follows [12]:
$$ \begin{cases} \delta_{ru} = \left(1 - \frac{f}{s}\right)\left[P_1\left(r^2 + 2u^2\right) + 2P_2\,uv\right] \\ \delta_{rv} = \left(1 - \frac{f}{s}\right)\left[P_2\left(r^2 + 2v^2\right) + 2P_1\,uv\right] \end{cases}, \qquad r_{s',s} = \frac{s' - f}{s - f} \cdot \frac{s}{s'} $$
where (1 − f/s)·r_{s′,s} is the compensation coefficient; δ_ru and δ_rv represent the components of the decentering distortion in the u and v directions, respectively; s and s′ denote the object distances corresponding to the two focal planes, respectively.

2.3. DoF-Dependent Distortion Model for Arbitrary Defocused Plane

2.3.1. DoF-Dependent Radial Distortion Model

Fraser and Shortis [29] proposed an empirical model for describing the distortion of any object plane (or defocused plane), which solved the Brown model's problem of inaccurately describing the severe distortion caused by short-distance, small-focal-length imaging configurations. The equation is as follows:
$$ K^{s,s_p} = K^s + g\left(K^{s_p} - K^s\right) $$
where K^{s,s_p} denotes the radial distortion coefficient in the defocused plane at depth s_p when the lens is focused at distance s; g is the empirical coefficient; K^{s_p} and K^s represent the radial distortion coefficients in the focal planes at distances s_p and s, respectively. By extending the equation, the radial distortion function δr_{s,s_n} at s_n can be expressed through δr_{s,s_m} at s_m when the lens is focused at the distance s:
$$ \begin{cases} \delta r_{s,s_m} = \delta r_s + g\left(\delta r_{s_m} - \delta r_s\right) \\ \delta r_{s,s_n} - \delta r_{s,s_m} = \left[\delta r_s + g\left(\delta r_{s_n} - \delta r_s\right)\right] - \left[\delta r_s + g\left(\delta r_{s_m} - \delta r_s\right)\right] \end{cases} $$
From the above equation, we can obtain δr_{s,s_n} = δr_{s,s_m} + α_{s,s_m}(s_n)(δr_s − δr_{s,s_m}). Then, by extending the result to the radial distortion of a point in the defocused plane at distance s_k, the relationship between δr_{s,s_n}, δr_{s,s_m}, and δr_{s,s_k} is given by
$$ \begin{cases} \delta r_{s,s_n} = \delta r_{s,s_m} + \alpha_{s,s_m}(s_n)\left(\delta r_s - \delta r_{s,s_m}\right) \\ \delta r_{s,s_n} = \delta r_{s,s_k} + \alpha_{s,s_k}(s_n)\left(\delta r_s - \delta r_{s,s_k}\right) \\ \delta r_{s,s_m} = \delta r_{s,s_k} + \alpha_{s,s_k}(s_m)\left(\delta r_s - \delta r_{s,s_k}\right) \end{cases} $$
in which $\alpha_{s,s_m}(s_n) = \frac{s_m - s_n}{s_m - s} \cdot \frac{s - f}{s_n - f}$, $\alpha_{s,s_k}(s_n) = \frac{s_k - s_n}{s_k - s} \cdot \frac{s - f}{s_n - f}$, and $\alpha_{s,s_k}(s_m) = \frac{s_k - s_m}{s_k - s} \cdot \frac{s - f}{s_m - f}$.
After eliminating the focus distance and the distortion in the focal plane, we can obtain the following equation:
$$ \begin{cases} \delta r_{s,s_m} = \delta r_{s,s_k} + \dfrac{s_k - s_m}{s_k - s} \cdot \dfrac{s - f}{s_m - f}\left(\delta r_s - \delta r_{s,s_k}\right) \\ \delta r_{s,s_n} = \delta r_{s,s_k} + \dfrac{s_k - s_n}{s_k - s} \cdot \dfrac{s - f}{s_n - f}\left(\delta r_s - \delta r_{s,s_k}\right) \end{cases} $$
Then, we have $\frac{s - f}{s_k - s}\left(\delta r_s - \delta r_{s,s_k}\right) = \frac{s_m - f}{s_k - s_m}\left(\delta r_{s,s_m} - \delta r_{s,s_k}\right)$, so δr_{s,s_n} can be expressed as
$$ \delta r_{s,s_n} = \delta r_{s,s_k} + \frac{(s_k - s_n)(s_m - f)}{(s_n - f)(s_k - s_m)}\left(\delta r_{s,s_m} - \delta r_{s,s_k}\right) $$
Obviously, when the lens is focused at distance s, once the distortions on two defocused planes at object distances s_m and s_k are known, the radial distortion coefficient on any defocused plane at depth s_n can be obtained:
$$ K_i^{s,s_n} = K_i^{s,s_k} + \frac{(s_k - s_n)(s_m - f)}{(s_n - f)(s_k - s_m)}\left(K_i^{s,s_m} - K_i^{s,s_k}\right), \quad i = 1, 2 $$
When the two object planes are set, K_i^{s,s_m}, K_i^{s,s_k}, s_m, s_k, and f are known. Thus, K_i^{s,s_n} in Equation (12) depends only on s_n; it is independent of both the distortion coefficient K_i^s on the focal plane and the focus distance s.
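A minimal sketch of Equation (12) (the function name is hypothetical; as a sanity check, the interpolation weight reduces to 1 at s_n = s_m and 0 at s_n = s_k, so the two calibrated planes are reproduced exactly):

```python
def radial_coeff_defocused(K_ssm, K_ssk, s_m, s_k, s_n, f):
    """Radial coefficient K_i^{s,s_n} on a defocused plane at depth s_n,
    given the coefficients K_ssm and K_ssk calibrated on the defocused
    planes at depths s_m and s_k. Once those planes are calibrated, the
    result depends only on s_n, not on the focus distance s."""
    w = ((s_k - s_n) * (s_m - f)) / ((s_n - f) * (s_k - s_m))
    return K_ssk + w * (K_ssm - K_ssk)
```

This is the focusing-state independence claimed above: the focus distance s does not appear among the arguments.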

2.3.2. DoF-Dependent Decentering Distortion Model

In Equation (6), since δP_{s,s_m} = r_{s,s_m}·δP, the decentering distortion on the defocused planes can be written as
$$ \begin{cases} \delta P_{s,s_m} = \left(1 - \dfrac{f}{s}\right)\dfrac{s - f}{s_m - f} \cdot \dfrac{s_m}{s}\,\delta P \\ \delta P_{s,s_k} = \left(1 - \dfrac{f}{s}\right)\dfrac{s - f}{s_k - f} \cdot \dfrac{s_k}{s}\,\delta P \\ \delta P_{s,s_n} = \left(1 - \dfrac{f}{s}\right)\dfrac{s - f}{s_n - f} \cdot \dfrac{s_n}{s}\,\delta P \end{cases} $$
where δP_{s,s_m}, δP_{s,s_k}, and δP_{s,s_n} are the decentering distortion functions on the defocused planes at object distances s_m, s_k, and s_n, respectively, when the lens is focused at distance s. From the first two lines of the above equation, we get $\frac{\delta P_{s,s_k}}{\delta P_{s,s_m}} = \frac{s - f}{s_k - f} \cdot \frac{s_m - f}{s - f} \cdot \frac{s_k}{s} \cdot \frac{s}{s_m} = \frac{s_m - f}{s_k - f} \cdot \frac{s_k}{s_m} = M_{s_k,s_m}$, and then
$$ \begin{cases} f = \dfrac{\left(1 - M_{s_k,s_m}\right) s_k s_m}{s_k - M_{s_k,s_m}\, s_m} \\ \dfrac{\delta P_{s,s_n}}{\delta P_{s,s_m}} = \dfrac{s_m - f}{s_n - f} \cdot \dfrac{s_n}{s_m} \end{cases} $$
Substituting the first line into the second one, we obtain
$$ \frac{\delta P_{s,s_n}}{\delta P_{s,s_m}} = \frac{s_m - \frac{\left(1 - M_{s_k,s_m}\right) s_k s_m}{s_k - M_{s_k,s_m} s_m}}{s_n - \frac{\left(1 - M_{s_k,s_m}\right) s_k s_m}{s_k - M_{s_k,s_m} s_m}} \cdot \frac{s_n}{s_m} = \frac{M_{s_k,s_m}\, s_n \left(s_k - s_m\right)}{s_k \left(s_n - s_m\right) + M_{s_k,s_m}\, s_m \left(s_k - s_n\right)} $$
Equation (15) can be simplified to
$$ \begin{cases} \delta P_{s,s_n} = \dfrac{M_{s_k,s_m}\, s_n \left(s_k - s_m\right)}{s_k \left(s_n - s_m\right) + M_{s_k,s_m}\, s_m \left(s_k - s_n\right)}\, \delta P_{s,s_m} \\ P_i^{s,s_n} = \dfrac{M_{s_k,s_m}\, s_n \left(s_k - s_m\right)}{s_k \left(s_n - s_m\right) + M_{s_k,s_m}\, s_m \left(s_k - s_n\right)}\, P_i^{s,s_m}, \quad i = 1, 2 \end{cases} $$
Substituting M_{s_k,s_m} = P_i^{s,s_k}/P_i^{s,s_m} (i = 1, 2) into Equation (16), the following equation is obtained:
$$ P_i^{s,s_n} = \frac{P_i^{s,s_k}\, s_n \left(s_k - s_m\right)}{P_i^{s,s_m}\, s_k \left(s_n - s_m\right) + P_i^{s,s_k}\, s_m \left(s_k - s_n\right)}\, P_i^{s,s_m}, \quad i = 1, 2 $$
Given that the parameters P_i^{s,s_m}, P_i^{s,s_k}, s_m, and s_k are known, Equation (17) shows that the decentering distortion coefficient P_i^{s,s_n} in any defocused plane depends only on the object distance s_n; it is independent of the focus distance s and the distortion P_i^s in the focal plane. Moreover, since the focal length f does not appear in Equation (17), the decentering distortion is not affected by this parameter.
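Equation (17) can be sketched as follows (hypothetical function name; as a sanity check, the expression returns P_i^{s,s_m} at s_n = s_m and P_i^{s,s_k} at s_n = s_k, and the focal length f is indeed absent):

```python
def decentering_coeff_defocused(P_ssm, P_ssk, s_m, s_k, s_n):
    """Decentering coefficient P_i^{s,s_n} on a defocused plane at depth
    s_n, given the coefficients P_ssm and P_ssk calibrated on the planes
    at depths s_m and s_k. Note that the focal length does not appear."""
    num = P_ssk * s_n * (s_k - s_m)
    den = P_ssm * s_k * (s_n - s_m) + P_ssk * s_m * (s_k - s_n)
    return (num / den) * P_ssm
```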
Thus far, a DoF-dependent yet focusing-state-independent distortion model suitable for close-range, short-distance measurement scenes has been established. It overcomes the limited practicality of calibrating DoF distortion by manually adjusting the focus and zoom rings, and it remains applicable when the current position and distortion parameters of the focal plane are not exactly known.

3. Equal-Increment Partition Based DoF Distortion Model

The distortion coefficients are solved by minimizing the straightness error of the observed points. If a single set of distortion coefficients is used to describe the distortion over the whole image, the coefficients represent an error balance over all points, and the error is not minimal for each region of the image. Hence, an equal-increment partition based DoF distortion model is proposed in this section. The distortion spreads outward from the image center along circumferential contours: it is small near the image center and large at the image edge. In this paper, we first partition the in-plane distortion in an equal-increment way; then the 2D partition strategy is extended to the 3D photographic field.

3.1. Equal-Increment Based Distortion Partitioning Method

Figure 2 presents two distortion partitioning methods. The X axis represents the distance from an image point to the distortion center (u_0, v_0), namely the distortion radius. The Y axis describes the distortion in pixels. The blue curve is the distortion curve calculated by the features in the whole image. As illustrated in Figure 2a, when the DoF distortion is partitioned by an equal radius, the distortion increment of each partition is different (Δ_1 < Δ_2 < Δ_3 < Δ_4 < Δ_5) despite the same distortion radius increment (R_1 = R_2 = R_3 = R_4 = R_5) [36]. For a polynomial-based distortion function, it is well known that the more scattered the distorted points and the larger the distortion increments, the lower the regression accuracy of the function. As a result, the estimated accuracy of the partitions' distortion parameters decreases gradually from inside to outside (ε_1 > ε_2 > ε_3 > ε_4 > ε_5).
To solve the problem, a DoF distortion model based on the equal-increment partition is proposed in this paper, and the procedures are as follows:
(1)
Estimate the distortion curve using all features in the whole image (Figure 2b). Then, determine the maximum image distortion δ_max according to the maximum distortion radius and the distortion curve. The maximum distortion radius of the image is $r_{\max} = \frac{1}{2}\sqrt{(I_l - u_0)^2 + (I_h - v_0)^2}$, where I_l and I_h are the length and height of the image, respectively.
(2)
In the central image region, the distortion is so tiny that the iterative correction cannot converge, which results in a poorer quality of the corrected image than that of the original one. Therefore, we use δ_limited, the minimum distortion value at which the algorithm converges in the central image region, as the threshold to estimate r_limited, the lower limit of the image distortion radius.
(3)
Determine the number of partitions n p .
(4)
Use the maximum distortion δ_max, the lower-limit distortion δ_limited, and n_p to determine the distortion increment of each partition: δ_equ = (δ_max − δ_limited)/(n_p − 1), with δ_equ = Δ_2 = Δ_3 = Δ_4 = Δ_5 (Figure 2b).
(5)
Calculate the radius increment of each partition using δ_equ and the distortion curve; the radius increments now differ: R_1 ≠ R_2 ≠ R_3 ≠ R_4 ≠ R_5 (Figure 2b).
(6)
Calibrate the distortion curve of each partition by the features in the corresponding partition of the image utilizing the decoupled-calibration method (see Section 4).
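The partitioning steps above can be sketched as follows (a minimal illustration with a hypothetical monotone distortion curve; a real pipeline would use the curve fitted in step (1) and the δ_limited threshold estimated in step (2)):

```python
def equal_increment_radii(distortion, r_limited, r_max, n_p):
    """Boundary radii of equal-distortion-increment partitions.

    `distortion(r)` is a monotone distortion-magnitude curve estimated
    on the whole image. Following step (4), the increment is
    d_equ = (d_max - d_limited) / (n_p - 1); each outer boundary radius
    is found by bisecting the monotone curve at the corresponding level.
    Returns n_p boundary radii, starting with r_limited.
    """
    d_limited, d_max = distortion(r_limited), distortion(r_max)
    d_equ = (d_max - d_limited) / (n_p - 1)

    def invert(level, lo, hi, iters=80):
        # bisection: find r with distortion(r) == level on [lo, hi]
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if distortion(mid) < level:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    levels = [d_limited + g * d_equ for g in range(1, n_p)]
    return [r_limited] + [invert(lv, r_limited, r_max) for lv in levels]

# Example with a made-up cubic curve: the radius steps shrink outward
# while the distortion increment per partition stays constant.
radii = equal_increment_radii(lambda r: 1e-6 * r ** 3, 50.0, 500.0, 5)
```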
Then the distortion partition of the 2D object plane is extended to the 3D DoF. As follows from Equation (1), the object-to-image mapping satisfies:
$$ \begin{cases} x = f\dfrac{X_m}{Z_m} = f\dfrac{X_k}{Z_k} \\ y = f\dfrac{Y_m}{Z_m} = f\dfrac{Y_k}{Z_k} \end{cases} $$
where P_m(X_m, Y_m, Z_m) and P_k(X_k, Y_k, Z_k) are two points in the VCS. The 2D point p(x, y) (in millimeters) is the image projection of P_m and P_k (P_m, P_k, and O_C are collinear). Let ρ be the partition radius; then x² + y² = ρ², and we get
$$ \begin{cases} \dfrac{f^2 X_m^2}{Z_m^2} + \dfrac{f^2 Y_m^2}{Z_m^2} = \rho^2 \\ \dfrac{f^2 X_k^2}{Z_k^2} + \dfrac{f^2 Y_k^2}{Z_k^2} = \rho^2 \end{cases} $$
From the above equation, we know that f·R_m = ρ·Z_m and Z_m·R_k = Z_k·R_m, where Z_m and Z_k are the depths of the m-th (Π_m) and the k-th (Π_k) object planes in the VCS, respectively, and R_m and R_k are the partition radii of the two object planes. Let s_m = Z_m and s_k = Z_k, and then extend the above distortion partitions to the 3D DoF domain. As shown in Figure 3, if the range of the g-th partition in the object plane Π_m is [(g − 1)R_m, gR_m], the partition ranges in object planes Π_k and Π_n are [(g − 1)(s_k R_m/s_m), g(s_k R_m/s_m)] and [(g − 1)(s_n R_m/s_m), g(s_n R_m/s_m)], respectively. In this way, although the partition radii differ between object planes, the distortion coefficients can be obtained with high accuracy because the image distortion is partitioned by equal distortion increments.
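The plane-to-plane scaling R_k = s_k·R_m/s_m used above can be sketched directly (hypothetical function name):

```python
def scale_partition_radius(R_m, s_m, s_target):
    """Scale a partition radius from the object plane at depth s_m to
    another object plane at depth s_target. Follows from f*R = rho*Z for
    points collinear with the optical center: R_k = s_k * R_m / s_m."""
    return s_target * R_m / s_m
```

For example, a 100 mm partition radius on a plane at 500 mm corresponds to a 200 mm radius on a plane at 1000 mm, since both bound the same cone of rays through the optical center.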

3.2. Equal-Increment Partition Based DoF Distortion Model

After partitioning the DoF distortion, we incorporate the partitions into the DoF distortion model. Procedures to solve the partition radius and distortion coefficients on any object distance s n are as follows:
(1)
Partition the distortion in the object plane Π_m using the proposed method, and calculate the i-th order radial and decentering distortion coefficients in the g-th partition. Register the two coefficients as K_i^{s,s_m,g} and P_i^{s,s_m,g}, respectively.
(2)
Based on the g-th partition in the object plane Π_m (at object distance s_m), calculate the corresponding partition radius in the object plane Π_k (at object distance s_k). In addition, the i-th order radial and decentering distortion coefficients can be computed. Register the two coefficients as K_i^{s,s_k,g} and P_i^{s,s_k,g}, respectively.
(3)
Based on the partitions in the object plane Π_m, we calculate the partitions in the object plane Π_n (at object distance s_n). Then, for the g-th partition of the object plane Π_n, the radial distortion coefficient K_i^{s,s_n,g} and the decentering distortion coefficient P_i^{s,s_n,g} can be expressed as
$$ \begin{cases} K_i^{s,s_n,g} = f\!\left(K_i^{s,s_m,g},\, K_i^{s,s_k,g},\, s_n\right) \\ P_i^{s,s_n,g} = f\!\left(P_i^{s,s_m,g},\, P_i^{s,s_k,g},\, s_n\right) \end{cases} $$
From the equation, we know

$$ \begin{cases} K_i^{s,s_n,g} = K_i^{s,s_k,g} + \dfrac{(s_k - s_n)(s_m - f)}{(s_n - f)(s_k - s_m)}\left(K_i^{s,s_m,g} - K_i^{s,s_k,g}\right) \\ P_i^{s,s_n,g} = \dfrac{P_i^{s,s_k,g}\, s_n \left(s_k - s_m\right)}{P_i^{s,s_m,g}\, s_k \left(s_n - s_m\right) + P_i^{s,s_k,g}\, s_m \left(s_k - s_n\right)}\, P_i^{s,s_m,g} \end{cases} \quad i = 1, 2;\ g = 1, 2, \ldots, n_p $$
At this point, we have established an equal-increment partition based DoF distortion model for any object plane at s n when the lens is focused at distance s .

4. Calibration Method for Camera Parameters

In close-range photography, the DoF images are seriously distorted, so the calibration accuracy of the distortion parameters is the decisive factor affecting the vision measurement accuracy. When the coupled-calibration method is used to solve the distortion parameters, the estimated errors of intrinsic and extrinsic parameters will be propagated to distortion parameters. Thus, a two-step method is proposed to calibrate the camera parameters, in which distortion parameters are estimated independently.

4.1. Independent Distortion Calibration Method Based on Linear Conformation

Figure 4 details the experimental system for DoF lens distortion, which consists of a monocular camera, a control field, a light source, an electric control platform, and a multi-axis motion controller. The X, Y, A, and C axes of the platform are in the object space, while the Z-axis is in the image space. A control field, with the features of circle, corner, and line, is used to calibrate the lens distortion, and the geometric relationship between the features is known accurately. On this basis, the pose of the control field relative to the image plane can be adjusted by the Perspective-n-Point (PnP) algorithm.
In this paper, the distortion coefficients are estimated by the plumb-line method [12] alone. Its principle, stated by Brown (1971), is that a straight line in object space is mapped by a perfect lens to a straight line in the image plane; any deviation from straightness reflects the lens distortion described by the radial and decentering distortion coefficients.
As demonstrated in Figure 5, when N edge points (u_1, v_1), …, (u_N, v_N) on the same curve are known, the regression line determined by the point group is
$$ \alpha u + \beta v - \gamma = 0 $$
Let α = sin θ, β = −cos θ, and γ = A_u sin θ − A_v cos θ, with $\tan 2\theta = \frac{2V_{uv}}{V_{uu} - V_{vv}}$, where θ is the angle between the regression line and the u axis (Figure 5), and $A_u = \frac{1}{N}\sum_{i=1}^{N} u_i$, $A_v = \frac{1}{N}\sum_{i=1}^{N} v_i$, $V_{uu} = \frac{1}{N}\sum_{i=1}^{N}(u_i - A_u)^2$, $V_{uv} = \frac{1}{N}\sum_{i=1}^{N}(u_i - A_u)(v_i - A_v)$, $V_{vv} = \frac{1}{N}\sum_{i=1}^{N}(v_i - A_v)^2$.
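The line fit described above can be sketched from the moment statistics alone (a minimal total-least-squares implementation; the function name is illustrative, and the sign convention β = −cos θ is the one that makes the line pass through the centroid):

```python
import math

def fit_line_tls(points):
    """Total-least-squares line fit used by the plumb-line method.

    Returns (alpha, beta, gamma) of the regression line
    alpha*u + beta*v - gamma = 0, with alpha = sin(theta) and
    beta = -cos(theta), computed from the moment statistics
    A_u, A_v, V_uu, V_uv, V_vv of the point group.
    """
    N = len(points)
    A_u = sum(u for u, _ in points) / N
    A_v = sum(v for _, v in points) / N
    V_uu = sum((u - A_u) ** 2 for u, _ in points) / N
    V_vv = sum((v - A_v) ** 2 for _, v in points) / N
    V_uv = sum((u - A_u) * (v - A_v) for u, v in points) / N
    theta = 0.5 * math.atan2(2 * V_uv, V_uu - V_vv)   # minimizing root of tan(2θ)
    alpha, beta = math.sin(theta), -math.cos(theta)
    gamma = alpha * A_u + beta * A_v                  # line passes the centroid
    return alpha, beta, gamma
```

For perfectly collinear input points, the residual alpha*u + beta*v − gamma vanishes for every point.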
Given L lines with N_l points on the l-th line, the average sum of squared distances from the points (u_li, v_li) to their respective lines can be written as
$$ D = \frac{1}{L} \sum_{l=1}^{L} \frac{1}{N_l} \sum_{i=1}^{N_l} \left(\alpha_l u_{li} + \beta_l v_{li} - \gamma_l\right)^2 $$
Any deviation of a line from straightness in the image plane can be corrected by a mapping involving radial and decentering distortion. Thus, substituting Equation (2) into Equation (23), we get
$$ F\left(\bar{u}_{li}, \bar{v}_{li};\ K_1, K_2, P_1, P_2\right) = 0 $$
If there are L lines in an image and N_l observation points are extracted from each line, we have L·N_l equations in L + 4 unknowns (L line coefficients and 4 distortion coefficients). If L·N_l > L + 4, the optimal solution of the distortion coefficients can be obtained.
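A sketch of the plumb-line objective: correct the observed points with trial coefficients, refit each line, and accumulate D of Equation (23) (the function name is hypothetical, and a real calibration would minimize this value with a nonlinear least-squares solver such as Levenberg–Marquardt rather than evaluate it once):

```python
import math

def straightness_cost(lines, K1, K2, P1, P2, u0, v0):
    """Plumb-line objective: correct each observed curve with candidate
    distortion coefficients, fit a least-squares line, and return the
    average squared point-to-line distance D.

    `lines` is a list of point lists [(u, v), ...] sampled along imaged
    "straight" lines. Minimizing the result over (K1, K2, P1, P2)
    estimates the distortion independently of intrinsics and extrinsics.
    """
    def correct(u, v):
        # add the model distortion evaluated at the observed point
        ub, vb = u - u0, v - v0
        r2 = ub * ub + vb * vb
        radial = K1 * r2 + K2 * r2 * r2
        du = ub * radial + P1 * (r2 + 2 * ub * ub) + 2 * P2 * ub * vb
        dv = vb * radial + P2 * (r2 + 2 * vb * vb) + 2 * P1 * ub * vb
        return u + du, v + dv

    total = 0.0
    for pts in lines:
        c = [correct(u, v) for u, v in pts]
        N = len(c)
        A_u = sum(u for u, _ in c) / N
        A_v = sum(v for _, v in c) / N
        V_uu = sum((u - A_u) ** 2 for u, _ in c) / N
        V_vv = sum((v - A_v) ** 2 for _, v in c) / N
        V_uv = sum((u - A_u) * (v - A_v) for u, v in c) / N
        th = 0.5 * math.atan2(2 * V_uv, V_uu - V_vv)
        a, b = math.sin(th), -math.cos(th)
        g = a * A_u + b * A_v
        total += sum((a * u + b * v - g) ** 2 for u, v in c) / N
    return total / len(lines)
```

Collinear points with zero coefficients give a near-zero cost, while applying a spurious radial term to an off-center line bends it and raises the cost.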
After solving the image distortion coefficients, the inverse mapping imR(u, v) = imD(u_d, v_d) between the undistorted image imR and the distorted image imD is established by cubic B-spline interpolation; in this way, the image distortion can be corrected. Besides, in this paper, three straightness indicators, namely the maximum, average, and root mean square (RMS) $d = \sqrt{D / \sum_{l=1}^{L} N_l}$ of the point-to-line distance, together with the Peak Signal-to-Noise Ratio, $PSNR = 10 \log_{10}\!\left((2^n - 1)^2 / MSE\right)$, are used to evaluate the distortion correction effects. D is defined in Equation (23), and MSE is the mean square error of the image before and after distortion correction.
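The PSNR indicator can be sketched as follows (a minimal version for n-bit gray images stored as same-size 2D lists; the function name is illustrative):

```python
import math

def psnr(im_before, im_after, bits=8):
    """Peak Signal-to-Noise Ratio between an image before and after
    distortion correction: PSNR = 10*log10((2^n - 1)^2 / MSE) for n-bit
    gray levels, where MSE is the mean squared pixel difference."""
    flat_b = [p for row in im_before for p in row]
    flat_a = [p for row in im_after for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")        # identical images
    return 10.0 * math.log10((2 ** bits - 1) ** 2 / mse)
```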

4.2. Image Processing and Camera Calibration

In this paper, the parameters of the equal-increment partition based DoF distortion model are calculated using straight lines in a particular area of the control field. To this end, a corner-control based method for extracting line segments within a partition is proposed. As shown in Figure 6, the image processing procedure includes the following:
(1)
Image acquisition. Capture the image of the control field using the monocular camera (Figure 6a).
(2)
Point detection. Corners of the checkerboard are extracted by the Harris detector (Figure 6b), and the edge points on the curve are detected by the Canny operator with subpixel accuracy.
(3)
Point connection. Use the edge points between two adjacent corners to form unit segments (Figure 5). In each segment, David Lowe’s method [37] is used to track and connect the edge points in the four-link area from one particular point to the others (Figure 6c). The minimum connection length is set to be greater than 10 pixels.
(4)
Point reselection. The distortion is not evenly distributed over the image; it is largest at the image edges, which makes noisy points difficult to remove. To solve this problem, a tolerance band of 4 pixels (Figure 5), set around each unit segment, is used as a constraint to filter out outliers. The remaining edge points are retained as the new point set (Figure 6d).
(5)
Line extraction. Any line can be obtained from the corner positions and the predefined distortion radius. Figure 6e,f shows the extraction results of the 19th horizontal line and of the lines in different areas of the control field, respectively.
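The tolerance-band reselection in step (4) can be sketched as follows; this is a simplified illustration assuming a straight chord between two adjacent corners (function and variable names are ours, and the 4-pixel band width follows the text):

```python
import numpy as np

def reselect_points(edge_pts, corner_a, corner_b, tol=4.0):
    """Keep edge points that lie inside a +/- tol pixel band around the
    chord joining two adjacent checkerboard corners (one unit segment)."""
    a = np.asarray(corner_a, float)
    b = np.asarray(corner_b, float)
    d = b - a
    # Unit normal of the chord; point-to-chord distance is the projection
    # of (p - a) onto this normal.
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    pts = np.asarray(edge_pts, float)
    dist = np.abs((pts - a) @ n)
    return pts[dist <= tol]
```

Points farther than the band from the chord are treated as noise and dropped before line fitting, which is what keeps the heavily distorted image edges from contaminating the fit.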
By combining the image processing results with the DoF distortion partition model, the distortion parameters at any position within the DoF can be determined. To avoid the coupling effect between the distortion parameters and the other parameters of the camera model, the camera's intrinsic and extrinsic parameters are first calibrated by Zhang's method. Then, with the distortion parameters fixed, the high-precision target is placed in multiple spatial positions to optimize the intrinsic and extrinsic parameters. The cost function to be optimized is
E_{depth\_dependent}^{q}(R_q) = \sum_{g=1}^{m_g} H_1\left(u_0, v_0, f_x, f_y, K_i^g, P_j^g, R_q, T_q\right), \quad i = 1, 2; \; j = 1, 2
where E_{depth_dependent}^{q}(R_q) is the cost function when the control field is in the q-th pose, R_q and T_q are the rotation and translation matrices in the q-th pose, and K_i^g and P_j^g are the i-th order radial and j-th order decentering distortion coefficients in the g-th partition of the q-th pose. By using the Levenberg–Marquardt (LM) algorithm, the optimal solution of the camera's intrinsic and extrinsic parameters can be obtained.
Through the above process, the monocular camera calibration is realized. In practice, after estimating the 3D position (X, Y, Z) of a spatial point, the partition in which it lies can be determined by comparing sqrt(X² + Y²)·f/Z with the partition radii ρ. Then, the observed distortion can be corrected with the proper distortion coefficients, thus realizing high-accuracy vision measurements.
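The second step of the two-step calibration, refining intrinsic/extrinsic parameters by LM while the distortion coefficients stay fixed, can be sketched with SciPy's Levenberg–Marquardt solver. This is a deliberately reduced illustration, not the paper's exact H function: the rotation is fixed to the identity and the (already-corrected) distortion term is omitted so only the refine-with-fixed-distortion idea remains.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, Xw):
    """Pinhole projection; params = [fx, fy, u0, v0, tx, ty, tz].
    Distortion is assumed already removed by the fixed coefficients."""
    fx, fy, u0, v0, tx, ty, tz = params
    Xc = Xw + np.array([tx, ty, tz])
    return np.stack([fx * Xc[:, 0] / Xc[:, 2] + u0,
                     fy * Xc[:, 1] / Xc[:, 2] + v0], axis=1)

def refine(params0, Xw, uv_obs):
    """Refine intrinsics/extrinsics by Levenberg-Marquardt, with distortion
    held fixed (already solved by the plumb-line method)."""
    residual = lambda p: (project(p, Xw) - uv_obs).ravel()
    return least_squares(residual, params0, method="lm").x
```

With target points at several depths (so focal length and translation along the optical axis are separable), LM recovers the parameters from a perturbed starting guess.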

5. Accuracy Verification Experiments of Both the Distortion Modeling and Calibration Method

5.1. Experimental Verification of the 2D Distortion Partitioning Method

The experimental system is shown in Figure 7. The stroke of the electric control platform along the camera's optical axis is 500 mm, and the size of the control field is 300 × 300 mm. A SIGMA zoom lens (18–35 mm) and a HIK ROBOT camera (MV-CH120-10TM) are used for imaging. The resolution and focal length are set to 2560 × 2560 pixels and 18 mm, respectively. The procedure is as follows:
(1)
Calibrate the intrinsic and extrinsic parameters of the monocular camera.
(2)
Adjust the setup so that the circle features are distributed symmetrically around the image center.
(3)
Determine and repeatedly adjust the pose of the control field so that the object plane and the image plane are parallel.
(4)
Drive the control field with the electronic control platform to several object planes along the optical axis, and collect the image of the control field in each plane for analysis by the algorithm on the graphic workstation.
First, the accuracy of the 2D distortion partitioning method is verified. The image of the control field at the focus distance is divided into five concentric rings by the equal-radius (Figure 8a–e) and equal-increment (Figure 9a–e) distortion partition models, respectively. In each partition, the corresponding lines (shown in green) are selected to solve the distortion coefficients and correct the image distortion. For each of the two partitioning methods, five corrected images are obtained (Figure 8f–j and Figure 9f–j). Take Figure 8f and Figure 9f as an example: the distortion at the image edge predicted by the coefficients of the first partition far exceeds the actual distortion there, so the correction overshoots and introduces distortion in the opposite direction.
To compare the distortion correction effect of each partition, we subtract the corrected image from the simulated undistorted image, yielding Figure 8k–o and Figure 9k–o. The smaller the gray value, the closer the corrected image is to the ground truth and the better the distortion removal. As the figures show, the correction results of each partition obtained by the equal-radius partitioning method (Figure 8k–m) were not as good as those obtained by the equal-increment partitioning method. Notably, in the fourth and fifth partitions at the edge of the image, the green concentric ring in Figure 8n–o has large gray values, while in Figure 9n–o the gray values within the green concentric ring are approximately 0. This shows that the equal-increment partitioning method performs better at eliminating distortion.
Meanwhile, all the lines were also used together to solve and correct the image distortion, and the correction effects with and without partitioning were compared using the indexes defined in Section 4.1. As shown in Table 1, the undistorted images obtained by the two distortion partition methods both reach a good PSNR of up to 37.61 dB. Compared with the non-partitioned results, both partition methods show a smaller straightness error in each partition. Comparing the two partitioning methods with each other, the maximum and average errors in the fourth and fifth partitions of the equal-radius method are at least 4 times and 2 times those of the method proposed in this paper. That is to say, with the equal-increment partitioning method, every partition achieves better distortion correction. The enlarged view of the best distortion curve for each partition is shown in Figure 10, which validates the effectiveness and accuracy of the proposed partitioning method in 2D settings.
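The equal-increment partition boundaries can be obtained numerically from the radial distortion curve: each ring boundary ρ_g is the radius at which the distortion reaches g equal steps of the maximum distortion. A small sketch (our own function name; bisection is one simple root-finding choice, assuming the distortion magnitude grows monotonically with radius):

```python
import numpy as np

def equal_increment_radii(delta, r_max, n_parts):
    """Partition radii rho_1..rho_n such that the distortion increment
    between consecutive ring boundaries is constant:
    delta(rho_g) = g * delta(r_max) / n_parts.
    Assumes delta(r) is monotonically increasing on [0, r_max]."""
    step = delta(r_max) / n_parts
    radii = []
    for g in range(1, n_parts + 1):
        lo, hi = 0.0, r_max
        for _ in range(60):  # bisection to machine precision
            mid = 0.5 * (lo + hi)
            if delta(mid) < g * step:
                lo = mid
            else:
                hi = mid
        radii.append(0.5 * (lo + hi))
    return np.array(radii)
```

Because lens distortion grows roughly cubically with radius, equal-increment rings are wide near the center and narrow at the edge, unlike the uniformly spaced equal-radius rings (g·r_max/n); this is exactly why the outer partitions in Table 1 fare better under the proposed scheme.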

5.2. Accuracy Verification Experiments of DoF Distortion Partitioning Model and Camera Calibration

In this section, the accuracy of the DoF distortion model and of the camera calibration is verified. The control field is driven to four different object planes within the DoF: two at the limit positions of the front and rear DoF, and two inside the DoF. The front object plane is divided into five areas with an equal distortion increment of 20.2 pixels. Then, based on the distortion parameters of the two object planes with known depths, the distortions in the other two object planes are calculated by the non-partition model, the proposed DoF distortion model with equal-radius partition, and the proposed DoF distortion model with equal-increment distortion partition, respectively. Thereafter, the focus ring is manually adjusted to focus the lens on the two object planes at the limit positions of the front and rear DoF, and, based on the radial and decentering distortion coefficients calculated on these two focal planes, Brown's model [12] with equal-radius partition is used to estimate the distortion parameters on the two planes inside the DoF.
Furthermore, the results are compared with the distortion solved directly from the lines (the observed value) within the corresponding partition. To compare the accuracy of the different DoF distortion models, we take the in-plane points located in the common area (the second column of Table 2) partitioned by the two models at the same object distance (e.g., 400 mm in the first column of Table 2) as an example. As shown in Table 2, for Brown's model [12] with equal-radius partition, the maximum and average absolute differences between the calculated and observed values were 7.32 μm and 2.81 μm, respectively. These errors are smaller than those of the traditional Zhang's model, which considers neither the DoF nor distortion partitioning, but much larger than those of the two proposed DoF distortion models. The maximum and average absolute differences of the equal-increment distortion partition based DoF distortion model were 1.53 μm and 0.88 μm, respectively, whereas those of the equal-radius partition based DoF distortion model were 4.64 μm and 1.94 μm, more than twice the errors of the proposed model. These results verify the accuracy of the DoF distortion partitioning model in 3D settings.
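The core idea of predicting distortion at an intermediate depth from two calibrated planes can be sketched as a coefficient interpolation. Note this is a stand-in for the paper's DoF model, not a reproduction of it: we assume here, for illustration only, a linear blend in inverse object distance (in the spirit of Magill-type depth dependence), and the function name is ours.

```python
import numpy as np

def coeffs_at_depth(s_m, K_m, s_k, K_k, s_n):
    """Estimate per-partition distortion coefficients on an intermediate
    object plane at distance s_n from two calibrated planes at s_m and s_k.

    K_m, K_k: arrays of coefficients (e.g. [K1, K2, P1, P2]) per partition.
    Linear interpolation in 1/s is an illustrative simplification of the
    paper's DoF-dependent model."""
    w = (1.0 / s_n - 1.0 / s_m) / (1.0 / s_k - 1.0 / s_m)
    return (1.0 - w) * np.asarray(K_m) + w * np.asarray(K_k)
```

At the two calibrated depths the interpolation reproduces the calibrated coefficients exactly, and between them it varies smoothly, which mirrors how the DoF model fills in the planes inside the depth of field.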
Images of circular markers with precisely known spacing on the planar artifact are then collected, and the calibration accuracy of the monocular camera is verified by the re-projection errors and the angular reconstruction errors. Specifically, the planar artifact is driven by the high-accuracy pitch axis through five positions, with 10° between adjacent positions. In each position, the pose matrix between the planar artifact (Figure 11a) and the calibrated camera is computed by the OPnP algorithm [38], using the equal-radius and the equal-increment partitioning based DoF distortion models, respectively. Thereafter, 20 markers are projected back onto the image via the estimated pose matrix, and the re-projection errors, i.e., the image distances between the projected and observed points, are calculated. As shown in Figure 11b, for the equal-radius DoF distortion partition model, the maximum and average re-projection errors over the five positions were 0.29 pixels and 0.17 pixels, respectively, while those of the proposed model were 0.11 pixels and 0.05 pixels. The angle between adjacent positions of the artifact is also reconstructed with the two models, and, as illustrated in Figure 11c, the 3D measurement accuracy of the system is assessed by comparison with the nominal angle. The maximum and average angular errors of the equal-radius based DoF distortion partition model were 0.48° and 0.30°, respectively, while those of the proposed model were 0.013° and 0.011°, showing that the angular reconstruction errors are effectively reduced. These results comprehensively verify the accuracy of the DoF distortion partitioning model and the camera calibration method proposed in this paper.
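Both evaluation metrics are straightforward to compute once the poses are estimated: re-projection error is the per-marker image distance, and the reconstructed angle between two poses is the rotation angle of their relative rotation matrix. A minimal sketch (function names are ours):

```python
import numpy as np

def reprojection_errors(uv_proj, uv_obs):
    """Per-marker image distances between back-projected and observed points."""
    return np.linalg.norm(uv_proj - uv_obs, axis=1)

def relative_angle_deg(R_a, R_b):
    """Rotation angle between two artifact poses, for comparison against
    the nominal 10-degree pitch step: angle = arccos((trace(R_rel) - 1)/2)."""
    R_rel = R_a.T @ R_b
    c = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))
```

The clip guards against trace values slightly outside [-1, 3] caused by floating-point noise in the estimated rotations.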

6. Conclusions

This paper has investigated methods for modeling and calibrating lens distortion in close-range photogrammetry (i.e., short object distance and small focal length). The main findings are as follows:
(1)
A focusing-state-independent DoF distortion model is constructed, and the distortion parameters at any object plane can be solved through the distortion on two defocus planes, which removes the human errors introduced by manual adjustment of the focus and zoom rings.
(2)
A 2D-to-3D equal-increment partitioning method for lens distortion is proposed. After fusing with the DoF distortion model to form a DoF distortion partition model, the accuracy of lens distortion characterization is further improved.
(3)
A two-step method is proposed to calibrate the camera parameters, in which the DoF distortion is calculated independently by the plumb-line method; this eliminates the coupling effect among the parameters of the camera model.
(4)
Experiments were performed to verify the accuracy of the 2D distortion partition model, the DoF-dependent distortion partition model, and the camera calibration. The results show that the maximum and average angular reconstruction errors of the proposed model were 0.013° and 0.011°, respectively, which validates the accuracy and feasibility of the equal-increment partitioning based DoF distortion method.
The main limitation of the present study is that the number of partitions has not been optimized for higher calibration accuracy. Our future work will focus on this and on extending the model to other optical systems with fisheye or catadioptric lenses.

Author Contributions

Conceptualization, W.L. and X.L.; formal analysis, X.L.; funding acquisition, W.L. and X.L.; investigation, X.L.; methodology, X.L.; project administration, W.L.; supervision, X.M. and X.Y. (Xiaokang Yin); writing—original draft, X.L.; writing—review & editing, X.Y. (Xin’an Yuan). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Special National Key Research and Development Plan (No. 2016YFC0802303), National Natural Science Foundation of China (No. 52005513), and the Fundamental Research Funds for the Central Universities (No. 27RA2003015).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

(List of abbreviations and symbols present in this article)
DoF: depth of field
FoV: field of view
p(u_li, v_li): undistorted coordinates
O_w-X_wY_wZ_w: world coordinate system
o-uv: image coordinate system
O_C: optical center
z: scaling factor
K: intrinsic parameter matrix
CCD: Charge Coupled Device
CMOS: Complementary Metal-Oxide Semiconductor
M: transformation matrix
VCS: vision coordinate system
R: rotation matrix
T: translation matrix
(ū_li, v̄_li): distorted coordinates
δ_{u_li}: distortion function of an image point in the u direction
δ_{v_li}: distortion function of an image point in the v direction
(u_0, v_0): distortion center
r: distortion radius
K_1: first-order coefficient of radial distortion
K_2: second-order coefficient of radial distortion
P_1: first-order coefficient of decentering distortion
P_2: second-order coefficient of decentering distortion
δ_r^{+∞}: radial distortion for a lens focused at plus infinity
δ_r^{−∞}: radial distortion for a lens focused at minus infinity
m_s: vertical magnification in the focal plane at object distance s
δ_r^s: lens radial distortion in the focal plane
δ_r^{s_m}: radial distortion in the focal plane when the lens is focused at distance s_m
δ_r^{s_k}: radial distortion in the focal plane when the lens is focused at distance s_k
f: focal length
K_i^s: i-th radial distortion coefficient for the focused object plane at distance s
K_i^{s_m}: i-th radial distortion coefficient when the lens is focused at distance s_m
K_i^{s_k}: i-th radial distortion coefficient when the lens is focused at distance s_k
δ_ru: component of the decentering distortion in the u direction
δ_rv: component of the decentering distortion in the v direction
K^{s,s_p}: radial distortion coefficient in the defocused plane at depth s_p when the lens is focused at distance s
g: empirical coefficient
K^{s_p}: radial distortion coefficient in the focal plane at distance s_p
K^s: radial distortion coefficient in the focal plane at distance s
δ_r^{s,s_n}: radial distortion function of the object plane at distance s_n when the lens is focused at distance s
δ_r^{s,s_m}: radial distortion function of the object plane at distance s_m when the lens is focused at distance s
δ_r^{s,s_k}: radial distortion function of the object plane at distance s_k when the lens is focused at distance s
K_i^{s,s_n}: i-th radial distortion coefficient of the object plane at distance s_n when the lens is focused at distance s
K_i^{s,s_m}: i-th radial distortion coefficient of the object plane at distance s_m when the lens is focused at distance s
K_i^{s,s_k}: i-th radial distortion coefficient of the object plane at distance s_k when the lens is focused at distance s
δ_P^{s,s_m}: decentering distortion function in the defocused plane at object distance s_m when the lens is focused at distance s
δ_P^{s,s_k}: decentering distortion function in the defocused plane at object distance s_k when the lens is focused at distance s
δ_P^{s,s_n}: decentering distortion function in the defocused plane at object distance s_n when the lens is focused at distance s
P_i^{s,s_n}: i-th decentering distortion coefficient of the object plane at distance s_n when the lens is focused at distance s
P_i^{s,s_m}: i-th decentering distortion coefficient of the object plane at distance s_m when the lens is focused at distance s
P_i^{s,s_k}: i-th decentering distortion coefficient of the object plane at distance s_k when the lens is focused at distance s
δ_max: maximum value of image distortion
r_max: maximum distortion radius of the image
I_l: length of the image
I_h: height of the image
δ_limited: minimum distortion value
r_limited: minimum value of the image distortion radius
n_p: number of partitions
δ_equ: distortion increment
ρ: partition radius
P_m: point in the VCS
P_k: point in the VCS
Π_m: m-th object plane in the VCS
Π_k: k-th object plane in the VCS
Π_n: n-th object plane in the VCS
K_i^{s,s_m,g}: i-th order radial distortion coefficient in the g-th partition of object plane Π_m
P_i^{s,s_m,g}: i-th order decentering distortion coefficient in the g-th partition of object plane Π_m
K_i^{s,s_k,g}: i-th order radial distortion coefficient in the g-th partition of object plane Π_k
P_i^{s,s_k,g}: i-th order decentering distortion coefficient in the g-th partition of object plane Π_k
K_i^{s,s_n,g}: i-th order radial distortion coefficient in the g-th partition of object plane Π_n
P_i^{s,s_n,g}: i-th order decentering distortion coefficient in the g-th partition of object plane Π_n
PnP: Perspective-n-Point
θ: angle between the regression line and the u axis
D: average sum of squared distances from the points (u_li, v_li) to all the lines
im_R: undistorted image
im_D: distorted image
RMS: root mean square
PSNR: Peak Signal-to-Noise Ratio
MSE: mean square error of the image before and after distortion correction
R_q: rotation matrix in the q-th pose
T_q: translation matrix in the q-th pose
K_i^g: i-th order radial distortion coefficient in the g-th partition of the q-th pose
P_j^g: j-th order decentering distortion coefficient in the g-th partition of the q-th pose
LM: Levenberg–Marquardt

References

1. Fraser, C.S. Automatic camera calibration in close range photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388.
2. Basu, A.; Licardie, S. Alternative models for fish-eye lenses. Pattern Recognit. Lett. 1995, 16, 433–441.
3. Lee, H.; Han, D. Rectification of bowl-shape deformation of tidal flat DEM derived from UAV imaging. Sensors 2020, 20, 1602.
4. Drap, P.; Lefèvre, J. An exact formula for calculating inverse radial lens distortions. Sensors 2016, 16, 807.
5. Liu, M.; Sun, C.; Huang, S.; Zhang, Z. An accurate projector calibration method based on polynomial distortion representation. Sensors 2015, 15, 26567–26582.
6. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24.
7. Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 125–132.
8. Alemán-Flores, M.; Alvarez, L.; Gomez, L.; Santana-Cedrés, D. Automatic lens distortion correction using one-parameter division models. Image Process. Line 2014, 4, 327–343.
9. Claus, D.; Fitzgibbon, A.W. A rational function lens distortion model for general cameras. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 213–219.
10. Huang, J.; Wang, Z.; Xue, Q.; Gao, J. Calibration of camera with rational function lens distortion model. Chin. J. Lasers 2014, 41, 0508001.
11. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. 1966, 32, 444–462.
12. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866.
13. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344.
14. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
15. Kakani, V.; Kim, H.; Kumbham, M.; Park, D.; Jin, C.B.; Nguyen, V.H. Feasible self-calibration of larger field-of-view (FOV) camera sensors for the advanced driver-assistance system (ADAS). Sensors 2019, 19, 3369.
16. Li, X.; Liu, W.; Pan, Y.; Liang, B.; Zhou, M.D.; Li, H.; Wang, F.J.; Jia, Z.Y. Monocular-vision-based contouring error detection and compensation for CNC machine tools. Precis. Eng. 2019, 55, 447–463.
17. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
18. Liu, W.; Ma, X.; Li, X.; Pan, Y.; Wang, F.J.; Jia, Z.Y. A novel vision-based pose measurement method considering the refraction of light. Sensors 2018, 18, 4348.
19. Yang, J.H.; Jia, Z.Y.; Liu, W.; Fan, C.N.; Xu, P.T.; Wang, F.J.; Liu, Y. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information. Meas. Sci. Technol. 2016, 27, 105009.
20. Zhao, Z.; Zhu, Y.; Li, Y.; Qiu, Z.; Luo, Y.; Xie, C.; Zhang, Z. Multi-camera-based universal measurement method for 6-DOF of rigid bodies in world coordinate system. Sensors 2020, 20, 5547.
21. Prescott, B.; McLean, G.F. Line-based correction of radial lens distortion. Graph. Models Image Process. 1997, 59, 39–47.
22. Ahmed, M.; Farag, A. Nonmetric calibration of camera lens distortion: Differential methods and robust estimation. IEEE Trans. Image Process. 2005, 14, 1215–1230.
23. Santana-Cedrés, D.; Gomez, L.; Alemán-Flores, M.; Salgado, A.; Esclarín, J.; Mazorra, L.; Alvarez, L. Estimation of the lens distortion model by minimizing a line reprojection error. IEEE Sens. J. 2017, 17, 2848–2855.
24. Becker, S.C.; Bove, V.M., Jr. Semiautomatic 3D-model extraction from uncalibrated 2D-camera views. Proc. SPIE 1995, 2410, 447–461.
25. Penna, M.A. Camera calibration: A quick and easy way to determine the scale factor. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 1240–1245.
26. Magill, A.A. Variation in distortion with magnification. J. Res. Natl. Bur. Stand. 1955, 45, 148–149.
27. Fryer, J.G. Lens distortion for close-range photogrammetry. Photogramm. Eng. Remote Sens. 1986, 52, 51–58.
28. Treibitz, T.; Schechner, Y.Y.; Singh, H. Flat refractive geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 51–65.
29. Fraser, C.S.; Shortis, M.R. Variation of distortion within the photographic field. Photogramm. Eng. Remote Sens. 1992, 58, 851–855.
30. Dold, J. Ein hybrides photogrammetrisches Industriemesssystem höchster Genauigkeit und seiner Überprüfung. Ph.D. Thesis, Schriftenreihe Studiengang Vermessungswesen, Heft 54, Universität der Bundeswehr, München, Germany, 1997.
31. Brakhage, P.; Notni, G.; Kowarschik, R. Image aberrations in optical three-dimensional measurement systems with fringe projection. Appl. Opt. 2004, 43, 3217–3223.
32. Hanning, T. High precision camera calibration with a depth dependent distortion mapping. In Proceedings of the 8th IASTED International Conference on Visualization, Imaging, and Image Processing, Palma de Mallorca, Spain, 1–3 September 2008; pp. 304–309.
33. Alvarez, L.; Gómez, L.; Sendra, J.R. Accurate depth dependent lens distortion models: An application to planar view scenarios. J. Math. Imaging Vis. 2011, 39, 75–85.
34. Sun, P.; Lu, N.; Dong, M. Modelling and calibration of depth-dependent distortion for large depth visual measurement cameras. Opt. Express 2017, 25, 9834–9847.
35. Ricolfe-Viala, C.; Esparza, A. Depth-dependent high distortion lens calibration. Sensors 2020, 20, 3695.
36. Li, X.; Liu, W.; Pan, Y.; Ma, J.; Wang, F. A knowledge-driven approach for 3D high temporal-spatial measurement of an arbitrary contouring error of CNC machine tools using monocular vision. Sensors 2019, 19, 744.
37. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
38. Zheng, Y.; Kuang, Y.; Sugimoto, S.; Åström, K.; Okutomi, M. Revisiting the PnP problem: A fast, general and optimal solution. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 2344–2351.
Figure 1. Schematic diagram of camera model and lens distortion: (a) camera model; (b) barrel distortion; (c) pincushion distortion.
Figure 2. Two distortion partitioning methods: (a) equal-radius partition; (b) equal-increment partition.
Figure 3. The geometric relationship between the partition radii in different object planes.
Figure 4. Experimental system for calibrating depth of field (DoF) lens distortion.
Figure 5. Schematic diagram of distortion calibration based on linear conformation configuration.
Figure 6. Image processing procedures for linear conformation: (a) image of the control field; (b) corner detection; (c) edge point connection; (d) point reselection; (e) horizontal line extraction; (f) line detection results in different areas.
Figure 7. Experimental system for DoF distortion calibration: (a) system hardware; (b) control field.
Figure 8. Distortion calibration and correction results based on the equal-radius partition method (f = 18 mm): (a) partition 1; (b) partition 2; (c) partition 3; (d) partition 4; (e) partition 5; (f) distortion correction (partition 1); (g) distortion correction (partition 2); (h) distortion correction (partition 3); (i) distortion correction (partition 4); (j) distortion correction (partition 5); (k) difference (partition 1); (l) difference (partition 2); (m) difference (partition 3); (n) difference (partition 4); (o) difference (partition 5).
Figure 9. Distortion calibration and correction results based on the proposed partition method (f = 18 mm): (a) partition 1; (b) partition 2; (c) partition 3; (d) partition 4; (e) partition 5; (f) distortion correction (partition 1); (g) distortion correction (partition 2); (h) distortion correction (partition 3); (i) distortion correction (partition 4); (j) distortion correction (partition 5); (k) difference (partition 1); (l) difference (partition 2); (m) difference (partition 3); (n) difference (partition 4); (o) difference (partition 5).
Figure 10. Distortion curves solved by the lines in each partition using the proposed partition method.
Figure 11. Camera calibration accuracy verification: (a) artifact; (b) re-projection error; (c) angular reconstruction error.
Table 1. Comparison of distortion correction of the two partition models (each cell: equal-radius partition model / the proposed model).

Indicator | Partition 1 | Partition 2 | Partition 3 | Partition 4 | Partition 5 | Non-Partitioned Model
Maximum error/pixel | 0.32/0.22 | 0.62/0.41 | 0.77/0.53 | 2.1/0.56 | 2.7/0.55 | 7.46
Average error/pixel | 0.05/0.03 | 0.07/0.06 | 0.10/0.08 | 0.17/0.08 | 0.26/0.10 | 0.52
RMS/pixel | 0.04/0.04 | 0.08/0.06 | 0.10/0.07 | 0.11/0.08 | 0.32/0.09 | 0.48
PSNR/dB | 37.61/37.61 | 37.26/37.26 | 37.10/37.30 | 37.28/37.29 | 37.34/37.33 | 37.53
Table 2. Accuracy verification for DoF distortion partition model. For each model, the first value is the calculated distortion (μm) and the second is the absolute difference |C − O| (μm) from the observed value. The in-plane position is the distance from the distorted point to the optical axis.

Object Plane (mm) | In-Plane Position | Observed (μm) | Brown's Model with Equal-Radius Partition | Equal-Radius Partition Based DoF Model | Equal-Increment Partition Based DoF Model | Zhang's Model
400 | Partition #1 (56 mm) | 86.41 | 86.33 / 0.08 | 86.38 / 0.03 | 86.4 / 0.01 | 86.27 / 0.14
400 | Partition #2 (112 mm) | 1836.23 | 1834.12 / 2.11 | 1835.82 / 0.41 | 1836.09 / 0.14 | 1832.42 / 3.81
400 | Partition #3 (168 mm) | 3586.06 | 3582.81 / 3.25 | 3583.7 / 2.36 | 3584.84 / 1.22 | 3581.81 / 4.25
400 | Partition #4 (224 mm) | 5335.88 | 5332.01 / 3.87 | 5333.06 / 2.82 | 5334.57 / 1.31 | 5329.22 / 6.66
400 | Partition #5 (280 mm) | −7085.71 | −7093.03 / 7.32 | −7090.35 / 4.64 | −7087.05 / 1.34 | −7095.03 / 9.32
500 | Partition #1 (70 mm) | 51.3 | 51.28 / 0.02 | 51.28 / 0.02 | 51.29 / 0.01 | 51.28 / 0.02
500 | Partition #2 (140 mm) | 1468.98 | 1467.93 / 1.05 | 1468.5 / 0.48 | 1468.44 / 0.54 | 1466.8 / 2.18
500 | Partition #3 (210 mm) | 2886.67 | 2884.63 / 2.04 | 2884.27 / 2.40 | 2885.46 / 1.21 | 2883.11 / 3.56
500 | Partition #4 (280 mm) | 4304.35 | 4301.18 / 3.17 | 4301.64 / 2.71 | 4302.82 / 1.53 | 4299.8 / 4.55
500 | Partition #5 (350 mm) | −5722.03 | −5727.29 / 5.26 | −5725.59 / 4.56 | −5723.55 / 1.52 | −5729.84 / 7.81
Li, X.; Li, W.; Yuan, X.; Yin, X.; Ma, X. DoF-Dependent and Equal-Partition Based Lens Distortion Modeling and Calibration Method for Close-Range Photogrammetry. Sensors 2020, 20, 5934. https://doi.org/10.3390/s20205934