Article

Least-Square-Method-Based Optimal Laser Spots Acquisition and Position in Cooperative Target Measurement

1 School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
2 Beijing Institute of Aerospace Automatic Control, Beijing 100089, China
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(14), 5110; https://doi.org/10.3390/s22145110
Submission received: 26 May 2022 / Revised: 24 June 2022 / Accepted: 5 July 2022 / Published: 7 July 2022
(This article belongs to the Section Intelligent Sensors)

Abstract

The relative positioning precision of coordinate points is an important indicator of the final accuracy of a visual measurement system for space cooperative targets. Many factors, such as the measurement method, environmental conditions, data processing principles and equipment parameters, influence the acquisition of the cooperative target and determine the precision of its position in a ground simulation experiment in which laser spots are projected on parallel screens. To overcome the insufficient precision of cooperative target measurement, two factors, the laser diode supply current and the charge-coupled device (CCD) camera exposure time, are studied in this article. Under the hypothesis of optimal experimental conditions, state equations in the image coordinate system that describe the variation of the laser spot position are established. A novel optimization method is proposed that takes the laser spot position as the state variables and the diode supply current and exposure time as the controllable variables, calculates the optimal controllable variables by intersecting the focal spot centroid line with a 3-D surface, and thereby avoids the inconvenience of solving nonlinear equations. An experiment based on the new algorithm shows that the optimal solution keeps the focal spot's variation within 5–10 pixels in the image coordinate system, equivalent to 0.6–1.2 mm positioning accuracy at a 3 m distance.

1. Introduction

The position and attitude measurement of space targets has a wide range of applications in aviation, aerospace, satellite navigation and many other fields [1,2,3,4]. In these applications, visual measurement is the main non-contact measurement method. To position the target precisely [5,6], an artificially set cooperative target is often photographed, and precise coordinates are obtained by extracting the center of the image. This measurement method finds it difficult to achieve further accuracy breakthroughs because of the long development cycle of the coordinate solution algorithm and the complexity of the data processing [7,8]. Moreover, the real-time requirements of the coordinate solution conflict with the inherent nonlinearity of the solution formula. The final position and attitude information therefore need data compensation to be accurate [9], which means that building a high-precision cooperative target measurement system requires the entire measurement system and its data processing to be optimized. Traditional measurement conditions focus on geometric quantities, including the relative position of the measuring instrument and the measured target, indicators of flatness and straightness, and the definition of measurement error, while ignoring other physical quantities that indirectly change the spot position information. In this sense, appropriate measurement conditions mostly involve the selection of the geometric quantities to be measured and the adjustment of physical parameters. Challenges remain in how to optimize the parameter configuration and put it into practice [10,11,12].
Based on the above viewpoints, a ground simulation system [13,14] composed of two or more monocular planar array CCD cameras, a cooperative target of a cross-shaped target and paralleled screens can accurately reveal the process of space coordinates’ acquisition and position in three-dimensional space. In this ground simulation system, there are still challenges with improving the projected spot’s position precision regarding the calibration of the planar array CCD [15,16], efficiency of the spot centroid extraction algorithm [17,18], and optimization of the experimental conditions. Once the camera calibration method and centroid extraction algorithm are determined, the optimization of the experimental conditions will be the key point, the content of which involves the measuring principles, circumstances demand, measurement parameters selection and optimization, and the processing of measurement results.
In the ground simulation experiment, the quality of the measurement results depends on the choice of measurement conditions. The selection of new measurement condition parameters, the determination of parameter optimization criteria [19,20,21] and the optimization algorithms [22,23,24,25,26] determine the final accuracy and practicability of the ground simulation experiment.
The contributions of this paper are as follows: (1) Diode supply current and camera exposure time are adopted as two controllable parameters. (2) The novel state equation is established based on position information as state variables and physical parameters as controllable variables. (3) An innovative algorithm is proposed for determining the optimal controllable variables by calculating the intersection of line and surface. (4) Experiments based on the above algorithm are carried out to verify the accuracy.
The structure of the article is as follows: Related studies, prior knowledge and context are briefly introduced in Section 2. The principles and schemes including state equation under the image coordinates system, the description of the state variables and the optimization process are given in Section 3. Then, the optimized controllable variables are calculated by a novel algorithm. Section 4 introduces the algorithm validation experiment, provides the error curves, and an in-depth analysis of the calculation results is conducted. The conclusions are presented in Section 5.

2. Related Works

According to the geometric relationship of a straight-line intersection, the target coordinates can be calculated from the coordinates of the four projected points shown in Figure 1.
In Figure 1, the space target coordinates are calculated from the geometric locations of the four projected spots A1, A2, B1, and B2 on two parallel screens. The analytical solution for the two laser trajectory line equations is expressed in Equation (1):
\[
\begin{cases}
x = \dfrac{x_{A1}x_{B1}y_{A2}-x_{A2}x_{B1}y_{A1}-x_{A1}x_{B2}y_{A2}+x_{A2}x_{B2}y_{A1}-x_{A1}x_{B1}y_{B2}+x_{A1}x_{B2}y_{B1}+x_{A2}x_{B1}y_{B2}-x_{A2}x_{B2}y_{B1}}{x_{A1}y_{B1}-x_{B1}y_{A1}-x_{A1}y_{B2}-x_{A2}y_{B1}+x_{B1}y_{A2}+x_{B2}y_{A1}+x_{A2}y_{B2}-x_{B2}y_{A2}}\\[6pt]
y = \dfrac{x_{A1}y_{A2}y_{B1}-x_{A2}y_{A1}y_{B1}-x_{A1}y_{A2}y_{B2}+x_{A2}y_{A1}y_{B2}-x_{B1}y_{A1}y_{B2}+x_{B2}y_{A1}y_{B1}+x_{B1}y_{A2}y_{B2}-x_{B2}y_{A2}y_{B1}}{x_{A1}y_{B1}-x_{B1}y_{A1}-x_{A1}y_{B2}-x_{A2}y_{B1}+x_{B1}y_{A2}+x_{B2}y_{A1}+x_{A2}y_{B2}-x_{B2}y_{A2}}\\[6pt]
z = \dfrac{x_{A1}z_{A2}z_{B1}-x_{A2}z_{A1}z_{B1}-x_{A1}z_{A2}z_{B2}+x_{A2}z_{A1}z_{B2}-x_{B1}z_{A1}z_{B2}+x_{B2}z_{A1}z_{B1}+x_{B1}z_{A2}z_{B2}-x_{B2}z_{A2}z_{B1}}{x_{A1}z_{B1}-x_{B1}z_{A1}-x_{A1}z_{B2}-x_{A2}z_{B1}+x_{B1}z_{A2}+x_{B2}z_{A1}+x_{A2}z_{B2}-x_{B2}z_{A2}}
\end{cases}
\tag{1}
\]
Once the projected coordinates on the parallel screens in the O-XYZ coordinate system—A1(xA1, yA1, zA1), A2(xA2, yA2, zA2), B1(xB1, yB1, zB1), and B2(xB2, yB2, zB2)—are determined, a unique space target position coordinate (x, y, z) corresponds to them. Mathematically, the accurate projected coordinates are expressed as the centroid coordinates, and the target precision depends on the precision of the projected coordinates [27,28].
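The closed-form expressions of Equation (1) follow from intersecting the two laser trajectory lines A1–B1 and A2–B2. The same target position can be recovered numerically; the sketch below is a hypothetical helper (not the paper's implementation) that finds the point closest, in the least-squares sense, to both lines, which coincides with Equation (1) for ideal, noise-free data.

```python
import numpy as np

def target_from_projections(A1, A2, B1, B2):
    """Estimate the space target position from the four projected spots
    on two parallel screens. The target is taken as the midpoint of the
    closest-approach segment between the lines A1-B1 and A2-B2."""
    A1, A2, B1, B2 = (np.asarray(p, float) for p in (A1, A2, B1, B2))
    d1 = B1 - A1                      # direction of the first trajectory line
    d2 = B2 - A2                      # direction of the second trajectory line
    # Solve for (s, t) minimising |(A1 + s*d1) - (A2 + t*d2)| in least squares
    M = np.column_stack([d1, -d2])
    s, t = np.linalg.lstsq(M, A2 - A1, rcond=None)[0]
    p1 = A1 + s * d1
    p2 = A2 + t * d2
    return (p1 + p2) / 2              # midpoint of the closest-approach segment
```

For exactly intersecting lines the midpoint reduces to the intersection point itself; with noisy centroids it degrades gracefully to the least-squares compromise.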

2.1. Position Relationship between Cameras and Target Coordinates

Assuming that each diode laser on the space target emits an ideal Gaussian beam, the projected spots on the parallel screen accordingly appear as scattered spots, and the picture shot by the planar CCD camera shows grayscale images. Based on the above analysis, a 2-D image coordinate system is built in the plane of the focal spot, in which the X and Y axes describe the spot's horizontal and vertical motion. The intensity distribution of the focal spot is described in Equation (2) [9], where I0 is the light intensity at the maximum of the spot centroid, [x0, y0] is the spot centroid, and D is the spot diameter. The gray level and light intensity distribution are shown in Figure 2 and Figure 3, respectively:
\[
I(x, y) = I_0 \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{(D/2)^2}\right)
\tag{2}
\]
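Equation (2) can be sampled on a pixel grid and its gray-weighted centroid recovered, which is the quantity the later sections optimize. A minimal sketch (grid size and spot parameters are illustrative, not from the paper):

```python
import numpy as np

def gaussian_spot(shape, I0, x0, y0, D):
    """Sample the intensity profile of Equation (2) on a pixel grid."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return I0 * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (D / 2) ** 2)

def gray_centroid(img):
    """Gray-weighted centroid (x_c, y_c) of a spot image."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s
```

For a symmetric, unsaturated Gaussian well inside the frame, the gray-weighted centroid recovers (x0, y0) to sub-pixel precision, which is why saturation (studied below via current and exposure) matters so much.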
The relationship between the three coordinate systems in the monocular camera model is shown in Figure 4. The camera coordinate system (CCS) has the optical axis center O as its origin. The plane created by the imaging origin Of, located at the upper left of the plane, represents the imaging coordinate system (ICS). The real target coordinate system is the world coordinate system (WCS), whose origin is Ow.
The definition of each quantity in Figure 4 is as follows: the coordinates of the target P in the WCS are Pw (Xw, Yw, Zw), and the coordinates of its projection point in the ICS are Pu (Xu, Yu, Zu). θ is the angle between the line connecting the origin O and Pw in the CCS and the optical axis Z, which represents the attitude information. f is the focal length of the camera. O′ is the intersection of the camera's optical axis with the imaging plane, i.e., the projection of the origin O in the ICS, with coordinates (cx, cy, 0).
If the physical size of each pixel obtained from the calibrated internal parameters is sx = 1/dx, sy = 1/dy (in units of 1/mm), the conversion formula in Equation (3) [29,30,31] performs the coordinate transformation from the ICS to the CCS:
\[
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} =
\begin{pmatrix} s_x & 0 & c_x \\ 0 & s_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\tag{3}
\]
Combined with the matrices R3×3 and T3×1, the transformation from the WCS to the ICS is shown in Equation (4). If the inner parameters [1/sx, 1/sy, cx, cy, f, θ] and the external parameters R3×3 and T3×1 are obtained or calibrated, the coordinate transformation can be carried out:
\[
\begin{pmatrix} X_u \\ Y_u \\ Z_u \\ 1 \end{pmatrix} =
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}
\begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}
\tag{4}
\]
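Equations (3) and (4) are a pinhole intrinsic mapping and a homogeneous rigid transform, respectively. A minimal sketch of both steps, assuming calibrated parameters (all numeric values below are illustrative placeholders, not calibration results from the paper):

```python
import numpy as np

def image_to_pixel(x, y, sx, sy, cx, cy):
    """Equation (3): image-plane coordinates (mm) -> pixel coordinates (u, v)."""
    K = np.array([[sx, 0.0, cx],
                  [0.0, sy, cy],
                  [0.0, 0.0, 1.0]])
    u, v, _ = K @ np.array([x, y, 1.0])
    return u, v

def world_to_camera(Pw, R, T):
    """Equation (4): homogeneous rigid transform of a world point by [R | T]."""
    M = np.eye(4)
    M[:3, :3] = np.asarray(R, float)
    M[:3, 3] = np.asarray(T, float).ravel()
    return (M @ np.append(np.asarray(Pw, float), 1.0))[:3]
```

Chaining the two functions converts a calibrated world point into pixel coordinates, which is the direction used when comparing predicted and extracted centroids.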

2.2. Laser Lighting Characteristics

As a stimulated-radiation device, a semiconductor light-emitting diode (LED) is selected as the light source to meet the demands of space cooperative target measurement. To ensure a sustained supply current and stable light intensity, an efficient DC current regulator for the LED is needed. In this case, the LED generates a controllable, high-quality light beam with negligible optical distortion and a tiny divergence angle.
In the CCS, the centroid algorithm is utilized to obtain the accurate position of the projected spot, and the change in the spot brightness has an effect on the position measurement. From a quantitative point of view, the relative luminous flux of the LED depends on the supply current. Figure 5 [13] shows this characteristic curve.
Figure 5 shows that the relative luminous flux is approximately proportional to the supply current in the region where the supply current is between 100 mA and 400 mA and the relative luminous flux is less than 100%. The slope is 0.22, calculated from a one-dimensional fit of the data. Since the relative luminous flux and the illuminance have a linear, deterministic numerical correspondence, the illuminance and the supply current can be considered to satisfy Equation (5), where I0 has the same definition and dimension as I0 in Equation (2), and k is 0.22 lx/mA:
\[
I_0 = k\,i
\tag{5}
\]

2.3. Characteristics of the Camera’s Exposure Time

Ideally, the laser spot projected image should use the most suitable exposure time without saturating any pixels; that is, the intensity of the brightest pixels is just below saturation. The approximate formula is shown in Equation (6) [18].
\[
H_i = \frac{1}{4}\,A\,\alpha\,R\,s\,T\,\tau\,\rho\,E_s \left(\frac{D}{f}\right)^{2} \frac{1}{(1+m)^{2}}\,t
\tag{6}
\]
In Equation (6): Hi—the image gray level; A—camera gain factor; α—quantization coefficient; R—responsivity of the CCD unit; s—CCD unit area; T—optical lens transmittance; τ—atmospheric transmittance; ρ—target reflection coefficient; Es—target luminance; D—light flux aperture; f—focal length; D/f—relative aperture; m—imaging system magnification; t—exposure time.
Based on the above analysis, Equation (5) shows that the luminous flux of the laser diode is proportional to the supply current within a specific interval, and Equation (6) shows that when the camera parameters are fixed, the gray level of the image is approximately proportional to the exposure time. Therefore, adjusting the supply current and exposure time can effectively adjust the gray value of the spot-projected image and thereby the pixel coordinates of the spot centroid.
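The two proportionalities can be lumped into one toy response model: gray level scales with (current × exposure) inside the linear region and clips at pixel saturation. This is only an illustrative sketch; the constant `c` below is a made-up lumping of the camera constants A, α, R, s, T, τ, ρ, (D/f)² and 1/(1+m)² from Equation (6), and only k = 0.22 comes from the paper.

```python
def predicted_gray(i_mA, t_ms, c=0.05, k=0.22, full_well=255):
    """Hypothetical lumped model of Equations (5) and (6): the image gray
    level is proportional to (supply current * exposure time) inside the
    linear region, then clips at pixel saturation (full_well).
    c lumps all camera constants of Equation (6); its value is illustrative."""
    g = c * k * i_mA * t_ms          # linear response region
    return min(g, full_well)         # saturated pixels clip at full well
```

The clipping branch is what degrades the centroid: once the brightest pixels saturate, the gray-weighted centroid no longer tracks the true Gaussian peak, which motivates treating i and t as controllable variables.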

3. Data and Method

The data processing and optimizing flowchart is displayed in Figure 6.

3.1. State Equation of the Imaging System

It can be concluded from Sections 2.1–2.3 that the accuracy of the projected spot's position depends on two factors. The first is the coordinate transformation accuracy; a calibration method such as Zhengyou Zhang's method [32] can determine accurate parameters, so this part of the position precision can be guaranteed. The second is the brightness of the spot, which depends on two parameters: the supply current i and the exposure time t. For this part, a discrete state equation can comprehensively describe the process.
The state equation is established as follows. First, the supply current i and exposure time t are taken as the controllable variables. Second, the laser-projected spot positions in the imaging plane of the ICS are used as the fundamental state variables. Third, the centroid coordinates of the laser spot are assumed to be (u, v), which are essentially discrete variables; k indicates the camera frame index and also characterizes the sample rate. The state equation is then built in Equation (7).
\[
\begin{bmatrix} u(k+1) \\ v(k+1) \end{bmatrix} =
\begin{bmatrix} u(k) \\ v(k) \end{bmatrix} +
\begin{bmatrix} \Delta u(k) \\ \Delta v(k) \end{bmatrix}
\tag{7}
\]
In Equation (7), u(k + 1) and u(k), and v(k + 1) and v(k), theoretically represent the same projected spot. Δu(k) and Δv(k) represent the position variation between two experimental conditions. This change in the state variables has a functional relationship with the discretized supply current and exposure time. The characteristic expression is given in Equation (8), in which the variables i and t are quantized with k:
\[
\begin{bmatrix} \Delta u(k) \\ \Delta v(k) \end{bmatrix} =
F \begin{bmatrix} i(k) \\ t(k) \end{bmatrix}
\tag{8}
\]
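One step of Equations (7)–(8) is a simple affine update. The sketch below assumes a locally linearised 2×2 sensitivity matrix F whose entries would be identified from calibration data; the numbers used here are illustrative only.

```python
import numpy as np

def next_centroid(uv, it, F):
    """One step of the discrete state equation, Equations (7)-(8):
    [u(k+1), v(k+1)] = [u(k), v(k)] + F @ [i(k), t(k)].
    F maps the controllable variables (current, exposure) to the
    centroid shift; its entries come from calibration, not theory."""
    return np.asarray(uv, float) + np.asarray(F, float) @ np.asarray(it, float)
```

Iterating this update over the sampled frames reproduces the centroid trajectory that the index function of Section 3.3 penalizes.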

3.2. Acquisition of Experimental Images

The details are as follows: three active light spots are used as substitutes for the projected spots to ensure the flatness of the spots and the equivalence of the light characteristics. Three lasers arranged in a triangle are fixed on a plate. In the first group, the supply current is regulated and the CCD camera records the laser spot images; in the second group, the fixed plate is rotated 180° counterclockwise, the exposure time is adjusted, and the CCD camera records the laser spot images. The purpose of the rotation is to avoid mutual interference between the two experimental conditions. The centroid coordinates of the spots are calculated as i and t vary. The two groups of images are displayed in Figure 7 and Figure 8.
In Figure 7 and Figure 8, the three projected spots are stable and simulate the stationary state of the actual space target point. The planar distribution of the projected spots’ centroid positions under two testing conditions is shown in Figure 9, in which different points and shapes represent three projected spots’ coordinates in the X–Y plane under ICS.
The scatter of the coordinates means that the actual projected spots' positions have a tiny perturbation. In fact, the absolute positions of the three cooperative lasers remain stable during the experiment; therefore, the projected spots should completely overlap, and theoretically the only changeable parameters are the supply current i and the exposure time t. The variation in the projected spot centroid positions is shown in Table 1 and Table 2, where the unit of the XY coordinates is pixels. The centroid is regarded as a reference point. The relationships between the saturated pixel area of the projected point and the corresponding parameters in the four sets of experiments are listed in Table 3 and Table 4.
On the basis of Equations (5) and (6), the macroscopic manifestation of the two factors is the position change of the centroids of the projected spots. As a result, the projected spot positions (u, v) are regarded as state variables, the supply current i and the exposure time t are considered controllable variables, and an index function that minimizes the position errors between adjacent pixel points needs to be established. This modeling approach provides guidelines for minimizing pixel point overlapping errors and avoids the non-correspondence caused by optimizing a single function. The optimized results of such an index function can be treated as the optimal controllable variables.

3.3. Optimization Process

The expression in Equation (8) indicates a decoupled relationship between the controllable variables [i, t] and the state variables [u, v]. Strictly speaking, these variables have nonlinear functional connections. The optimization process attempts to investigate the independent impact of each controllable variable. Therefore, the index function is built in Equation (9), in which the product of the state vector and its transpose has the dimensions of the second moment of the image:
\[
\begin{bmatrix} \Delta u(k) \\ \Delta v(k) \end{bmatrix}^{T}
\begin{bmatrix} \Delta u(k) \\ \Delta v(k) \end{bmatrix} =
\begin{bmatrix} i(k) \\ t(k) \end{bmatrix}^{T} F^{T} F
\begin{bmatrix} i(k) \\ t(k) \end{bmatrix}
\tag{9}
\]
When the space target point is stationary or moving slowly (v < 1 cm/s), the coordinates of the projected point in the ICS should also be stationary or moving slowly. In this sense, the only factors that affect the change in the coordinates of the projected spot centroid are i and t. Taking the partial derivatives of the index function with respect to i and t yields the optimal parameter configuration. Equation (8) is inherently nonlinear, by the simplification of Equations (5) and (6); the problem of the partial derivatives of the index function with respect to i and t can therefore be transformed into the problem of its partial derivatives with respect to u and v. The linearized algebraic form of Equation (9) is shown in Equation (10). From this point of view, minimizing the projected spot position variation is taken as the novel optimization criterion, and the result of the optimization process determines the optimal state variables. Based on the above analysis, the normal function and minimum criteria of Equation (10) are expressed in Equation (11), and the optimal controllable parameter equation, equivalent to the optimal state variables, is shown in Equation (12):
\[
Q = \sum_{k=1}^{N} \left\{ \left[ u_j^{*}(k) - u_j(k) \right]^{2} + \left[ v_j^{*}(k) - v_j(k) \right]^{2} \right\}
\tag{10}
\]
\[
\frac{\partial Q}{\partial u} = 0, \quad
\frac{\partial Q}{\partial v} = 0, \quad
\frac{\partial^{2} Q}{\partial u^{2}} > 0, \quad
\frac{\partial^{2} Q}{\partial v^{2}} > 0
\tag{11}
\]
\[
\begin{bmatrix} i_\text{optimal} \\ t_\text{optimal} \end{bmatrix} =
F^{-1} \begin{bmatrix} u_\text{optimal} \\ v_\text{optimal} \end{bmatrix}
\tag{12}
\]
In Equations (10)–(12): N—number of sampled images (here N = 4); u*j(k), v*j(k)—coordinates of the optimal centroid position; j—label of each laser spot (j = 1, 2, 3); Q—the novel optimization criterion; ioptimal—optimal supply current; toptimal—optimal exposure time; uoptimal—optimal horizontal pixel in the ICS; voptimal—optimal vertical pixel in the ICS.
The optimization principle of Equation (10) is the criterion that the sum of squared position residuals be minimized. Since u and v represent pixel values in two mutually orthogonal directions, there is no coupling between them. The optimization procedure consists of three steps. First, the normal equation of Equation (10) is established in the form of its difference equation, with the displacement of u, v taken with respect to k. Second, the normal equation is solved; the algorithm reduces to computing a pseudo-inverse matrix by Newton's method. Third, the solutions are validated and the precision of each solution is determined. The final calculated results are listed in Table 5, in which the units are pixels.
Table 1, Table 2 and Table 5 confirm that the optimal value equals the arithmetic mean of the experimental data in the same array. This indicates that the two constraint conditions have been formally decoupled.
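The observation above follows directly from Equations (10)–(11): setting ∂Q/∂u* = 0 for a sum of squared residuals yields u* = mean(u(k)), and likewise for v*. A short sketch verifying this (the sample values are illustrative, not Table 5 data):

```python
import numpy as np

def optimal_centroid(samples):
    """Minimise Q = sum_k [(u* - u(k))^2 + (v* - v(k))^2] over (u*, v*).
    Setting the gradient of Q to zero gives the arithmetic mean of the
    sampled centroids, matching the paper's Table 5 observation."""
    return np.mean(np.asarray(samples, float), axis=0)

def Q(p, samples):
    """Index function of Equation (10) for a candidate point p = (u*, v*)."""
    s = np.asarray(samples, float)
    return float(((s - np.asarray(p, float)) ** 2).sum())
```

Because Q is a strictly convex quadratic in (u*, v*), the second-order conditions of Equation (11) are automatically satisfied at the mean.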

3.4. Determination of the Optimal Controllable Variables

Equation (7) reveals that the relationship between [u, v] and [i, t] is essentially nonlinear. Equation (8) shows that in a local interval of [Δu, Δv], this relationship can be approximately linearized according to Equations (5) and (6). In this case, a 3-D envelope surface can be drawn to describe the nonlinear relationship. The surface's extension shows that [u, v] varies continuously with i and t in the specified region. Furthermore, if the surface is projected onto the X–Z plane or the Y–Z plane, the u–i or v–i curves and the u–t or v–t curves can be obtained. The six surfaces from the two groups of experiments are shown in Figure 10, in which the left column expresses the relationship between u, v and i, and the right column expresses the relationship between u, v and t.
In each subgraph of Figure 10, four different sets of data are employed to generate the three-dimensional surface. The data derive from Table 1 and Table 2, and each group of data represents a set of controllable variables (i1, i2, i3, i4; t1, t2, t3, t4). On the basis of Equations (5) and (6), i and t are proportional to I0 and Hi, which means the surface is analytic in the neighborhood of u and v. Due to this continuity, linear polynomial fitting can approach the surface near the neighborhood of u and v to a certain precision. Once the optimal state variables (u, v) are determined, the red straight line characterizing the optimal variables can be drawn, and the X–Y projected coordinates of the intersections between the straight line and the 3-D surface can be regarded as the best controllable variables i and t. The coordinates of the intersection points are calculated as shown in Table 6.
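The line/surface intersection step amounts to fitting the locally linear surface from the sampled (control, centroid) pairs and inverting it at the optimal centroid. The sketch below is a hypothetical least-squares version of this step; the sample values are illustrative, not Table 1/2 data.

```python
import numpy as np

def optimal_control(samples_ctrl, samples_uv, uv_opt):
    """Fit the locally linear surface uv = a*c + b from sampled
    (controllable variable, centroid coordinate) pairs, then intersect
    it with the horizontal line uv = uv_opt to recover the optimal
    controllable value for each image coordinate."""
    C = np.column_stack([samples_ctrl, np.ones(len(samples_ctrl))])
    # Least-squares linear fit per centroid coordinate: u = a*c + b
    coef, *_ = np.linalg.lstsq(C, np.asarray(samples_uv, float), rcond=None)
    a, b = coef[0], coef[1]
    return (np.asarray(uv_opt, float) - b) / a   # invert the fit at uv_opt
```

This reproduces the geometric picture of Figure 10: the fitted plane plays the role of the 3-D surface, and dividing out the slope is the X–Y projection of the intersection point.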

4. Experimental Validation

4.1. Experiment Setup

In the actual experiment, three active green LEDs were used as the laser sources to simulate the real reflected light and were mounted on a smooth disc as the coordinate target. The three LEDs form a right triangle, the aim of which is to ensure accurate feature extraction of the three landmark points after rotating 180°. The target disc schematic is shown in Figure 11.
Considering that the LED laser spot simulates the projected light spot of the space cooperative target, an experimental scheme was adopted in which the target disc was rotated 180° and the planar array CCD camera captured images twice from one side. This accurately simulates two images differing by 180° received by the same camera on one side, which is equivalent to two projected images of the same marker point received by two cameras on opposite sides. In the actual experiment, three CCD cameras were mounted on a horizontal stent to record the three laser spots, as shown in Figure 11. Each laser spot has its own current controller and separate exposure time switch. An on-site photo of the experimental platform is shown in Figure 12.

4.2. Experiment Process and Data Analysis

The experiment was performed in an underground vibration-isolated laboratory. The laser spots were installed on a three-axis turntable on a marble platform to simulate the cooperative space target, as shown in Figure 12. Three planar CCD cameras were installed to shoot the laser spot images, which were digitized by a high-speed data acquisition card and preprocessed on a computer. During the experiment, the laser diode current and camera exposure time could be tuned separately to the calculated parameters shown in Table 5. A total station was used to calibrate the positions of the LED spots; the differences between the vision measurement and the total station are the errors, and the error curves are shown in Figure 13.
Figure 13 reveals that the image pixel positioning errors with the optimized parameter configuration are an order of magnitude smaller than those with the unoptimized configuration. Quantitatively, the pixel errors remain within 5–10 pixels, which corresponds to a position precision of 0.6–1.2 mm at a 3 m distance and 55 mm focal length through equivalent conversion. The comparative tests show that the ordinary experiment without variable optimization only reaches 30–50 pixels, corresponding to a position precision of 3.6–6 mm at a 3 m distance. The positioning precision of the cooperative target was thus improved.
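The pixel-to-space conversion above is a similar-triangles scaling: object-space error = pixel error × pixel pitch × Z/f. A pitch of about 2.2 µm is the value implied by the paper's 5–10 px ↔ 0.6–1.2 mm figures at Z = 3 m and f = 55 mm; it is an inferred assumption, not a stated camera specification.

```python
def equivalent_space_error(pixel_err, pitch_mm=0.0022, Z_mm=3000.0, f_mm=55.0):
    """Convert an image-plane centroid error (pixels) into the equivalent
    object-space error via similar triangles: e = pixels * pitch * Z / f.
    The 2.2 um pixel pitch is inferred from the paper's quoted figures
    (5-10 px <-> 0.6-1.2 mm at a 3 m distance, 55 mm focal length)."""
    return pixel_err * pitch_mm * Z_mm / f_mm
```

The same formula explains the error growth in Figure 14: the object-space error scales linearly with Z, so enlarging the field of view to 10–20 m amplifies an unoptimized 30–50 px error into decimeter-scale position errors.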
The type and optical parameters of the cameras and lenses are listed in Table 7. For the experiment, a series of zoom lenses were chosen to ensure a certain depth of field and ensure a clear image, the internal parameters of which were calibrated in advance.
If the field-of-view distance is enlarged to 10 m and the distance between the parallel screens equals 20 m, calculation based on the above analysis reveals that the spot positioning error without the optimized controllable parameters continues to be amplified; the simulation curves are shown in Figure 14.
Figure 14 shows that when the field of view is enlarged, the magnification of the distance and the cumulative effect of errors mean that the final positioning error, converted to the space target coordinate point, will be on the order of 0.1–1 m, which seriously affects the reliability and stability of the measurement results. However, if the optimized control parameters are employed, the positioning error can be kept on the order of centimeters, and the measurement precision of a given index can still be guaranteed.

5. Conclusions

In this work, a space cooperative target position measurement experiment simulated by projected LED laser points is implemented. The supply current i and exposure time t are the two key factors that influence the position precision of the XY coordinates in the ICS. The nonlinearities and the 3-D surfaces that characterize the functional relationship between (u, v) and i or t are calculated. The novel idea is to take the repeated spot positions (u, v) as state variables and the controllable variables i and t as optimization variables, and to acquire the optimal controllable values by the least square method (LSM). The results of this new method are validated by our experiment, which is essential for satisfying the optimal measurement conditions. It can be concluded that the supply current and exposure time can be adjusted separately within the controllable range in spite of the nonlinear relationship, which allows the two variables to reach their optimal values simultaneously. The experimental results show that the new method effectively improves the positioning precision of the light spot in the image coordinate system within a certain field of view and, after conversion, in the world coordinate system.
This work still needs improvement in three aspects. First, the premise of the experiment is that the space target is stationary or moving slowly (v < 1 cm/s); if the object moves quickly, how to accurately describe the influence of the control parameters on the position accuracy is worth further discussion. Second, an LED light spot is used to simulate the real projected spot, and the area array CCD shoots the target rotated 180° on one side to simulate real dual-screen shooting; the influence of these simplifications on measurement accuracy deserves further study. Third, influencing factors other than i and t must be fully discussed for space cooperative coordinate measurement. For these three aspects, theoretical derivation and experiments may also identify other factors affecting accuracy, which may then be parameter-controlled and optimized with similar methods to achieve higher position accuracy.

Author Contributions

Conceptualization, K.L. and F.Y.; methodology, K.L., F.Y. and Y.H.; software, K.L., Y.H. and Y.D.; validation, K.L., F.Y., Y.H. and Y.D.; formal analysis, K.L. and F.Y.; investigation, K.L., F.Y., Y.H. and Y.D.; resources, W.C. and C.L.; data curation, K.L., F.Y. and Y.H.; writing—original draft preparation, K.L.; writing—review and editing, K.L. and Y.D.; visualization, K.L. and Y.D.; supervision, F.Y. and Y.H.; project administration, K.L. and F.Y.; funding acquisition, W.C. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by Basic Research Project of the General Armament Department (BRPGAD) of China under Grant 514010202-301.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Barnes, R.A.; Brown, S.W.; Lykke, K.R.; Guenther, B.; Xiong, X.; Butler, J.J. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near-Infrared. Appl. Opt. 2015, 54, 10376–10395.
2. Qi, N.; Xia, Q.; Guo, Y.; Chen, J.; Ma, Z. Pose Measurement Model of Space Cooperative Target Capture based on Zoom Vision System. Adv. Mech. Eng. 2016, 8, 1687814016655954.
3. Huo, J.; Yang, N.; Yang, M. Tracking and Recognition of Projective Spots for Cooperation Targets in Vehicle Simulation Test. Opt. Precis. Eng. 2015, 8, 2134–2142.
4. Wang, F.; Dong, H.; Chen, Y.; Zheng, N. An Accurate Non-Cooperative Method for Measuring Textureless Spherical Target Based on Calibrated Lasers. Sensors 2016, 16, 2097.
5. Yang, N.; Huo, J.; Yang, M. A Method for Attitude Measurement of A Test Vehicle based on the Tracking of Vectors. Meas. Technol. 2015, 26, 085019.
6. Feng, Z.; Huang, L.; Gong, M.; Jin, G. Beam Shaping System Design Using Double Freeform Optical Surfaces. Opt. Express 2013, 12, 14728–14735.
7. Singh, R.; Hattuniemi, J.M.; Mäkynen, A.J. Analysis of Accuracy of Laser Spot Centroid Estimation. Proc. SPIE Adv. Laser Technol. 2008, 7022, 354–359.
8. Liu, X.; Lu, Z.; Wang, X.; Ba, D.; Zhu, C. Micrometer Accuracy Method for Small-Scale Laser Focal Spot Centroid Measurement. Opt. Laser Technol. 2015, 66, 58–62.
9. Li, W.C.; Gu, J.Q.; Wang, Y.P. Measurement of Light Spot Size of Laser and of Beam Waist. J. Tianjin Univ. 2002, 3, 358–361.
10. Teschke, M.; Kedzierski, J.; Finantu-Dinu, E.; Korzec, D.; Engemann, J. High-Speed Photographs of a Dielectric Barrier Atmospheric Pressure Plasma Jet. IEEE Trans. Plasma Sci. 2005, 2, 310–311.
11. Chen, R.Q.; Cao, G.; Mao, Z.H. Computation Method of Exposure Time for Space Array CCD Imaging. Comput. Eng. 2012, 12, 1–4.
12. Tao, H.; Yang, H.; Wang, Y.; Ling, Y. Study on Interference to Imaging Process of Visible CCD Camera by Adjustable Light. Infrared Laser Eng. 2014, 5, 1605–1609.
13. Hain, R.; Kähler, C.J.; Tropea, C. Comparison of CCD, CMOS, and Intensified Cameras. Exp. Fluids 2007, 42, 403–411.
14. King, S. Luminous Intensity of an LED as a Function of Input Power. J. Phys. 2008, 2, 1–4.
15. Lee, J.U. Photovoltaic Effect in Ideal Carbon Nanotube Diodes. Appl. Phys. Lett. 2005, 7, 073101.
16. Gan, B.; Feng, H.; Jin, S. Research on Property of High-Power White LED. Opt. Instrum. 2005, 5, 33–37.
17. Mullikin, J.C.; van Vliet, L.J.; Netten, H.; van der Feltz, F.R.B.G.; Young, I.T. Methods for CCD Camera Characterization. Proc. SPIE 1994, 2173, 72–84.
18. Nakajima, H.; Fujikawa, M.; Mori, H.; Kan, H.; Ueda, S.; Kosugi, H.; Anabuki, N.; Hayashida, K.; Tsunemi, H.; Doty, J.P.; et al. Single Event Effect Characterization of the Mixed-Signal ASIC Developed for CCD Camera in Space Use. Nucl. Instrum. Methods Phys. Res. A 2013, 731, 166–171.
19. Lin, H.; Da, Z.S.; Cao, S.K.; Wang, Z.Z. Algorithm of Focal Spot Reconstruction for Laser Measurement Using the Schlieren Method. Optik 2017, 145, 61–65.
20. Hazarika, S.; Hazarika, C.; Das, A. Multiple Filamentation and Control of Properties of Self-Guided Super-Gaussian Laser Beam. Optik 2017, 141, 124–129.
21. Al Kamal, I.; Al-Alaoui, M. Online Machine Vision Inspection System for Detecting Coating Defects in Metal Lids. Proc. Int. Multi-Conf. Eng. Comput. Sci. 2008, 2, 1319–1322.
22. Duan, Z.; Wang, N.; Fu, J.; Zhao, W.; Duan, B.; Zhao, J. High Precision Edge Detection Algorithm for Mechanical Parts. Meas. Sci. Rev. 2018, 18, 65–71.
23. Shi, Z.; Song, H.; Chen, H.; Sun, Y. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes. Meas. Sci. Rev. 2018, 18, 13–19.
24. Guillory, J.; Truong, D.; Wallerand, J.P.; Alexandre, C. Absolute Multilateration-based Coordinate Measurement System Using Retroreflecting Glass Spheres. Precis. Eng. 2022, 73, 214–227.
25. Del Alamo, M.B.; Soncco, C.; Helaconde, R.; Alba, J.L.B.; Gago, A.M. Laser Spot Measurement Using Simple Devices. AIP Adv. 2021, 11, 075016.
26. Zhu, J.; Xu, Z.; Fu, D.; Hu, C. Laser Spot Center Detection and Comparison Test. Photonic Sens. 2019, 9, 49–52.
27. Bedoya, A.; González, J.; Rodríguez-Aseguinolaza, J.; Mendioroz, A.; Sommier, A.; Batsale, J.C.; Pradere, C.; Salazar, A. Measurement of In-Plane Thermal Diffusivity of Solids Moving at Constant Velocity Using Laser Spot Infrared Thermography. Measurement 2019, 134, 519–526.
28. Krawczyk-Suszek, M.; Martowska, B.; Sapuła, R. Analysis of the Stability of the Body in a Standing Position When Shooting at a Stationary Target—A Randomized Controlled Trial. Sensors 2022, 22, 368.
29. Gawlicki, M.; Jankowski, Ł. Trajectory Identification for Moving Loads by Multicriterial Optimization. Sensors 2021, 21, 304.
30. Ferdowsi, M.H.; Sabzikar, E. Optical Target Tracking by Scheduled Range Measurements. Opt. Eng. 2015, 54, 044101.
31. Papoutsidakis, M.; Kalovrektis, K.; Drosos, C.; Stamoulis, G. Intelligent Design and Algorithms to Control a Stereoscopic Camera on a Robotic Workspace. Int. J. Comput. Appl. 2017, 167, 0975–8887.
32. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1335.
Figure 1. Schematic of space cooperative target measurement based on projected lasers.
Figure 2. Grayscale image of a projected spot.
Figure 3. Light intensity distribution.
Figure 4. Diagram of a monocular camera under three coordinate systems.
Figure 5. LED current vs. luminous flux.
Figure 6. Data processing and optimization flowchart.
Figure 7. Laser spot images at different supply currents (t = 500 ms). (a) Supply current = 0.12 A; (b) Supply current = 0.18 A; (c) Supply current = 0.35 A; (d) Supply current = 0.63 A.
Figure 8. Laser spot images at different exposure times (i = 0.86 A). (a) Exposure time = 600 ms; (b) Exposure time = 1000 ms; (c) Exposure time = 1500 ms; (d) Exposure time = 2000 ms.
Figure 9. Distribution of laser spot centroid position. (a) Variation of supply current (t = 500 ms); (b) Variation of exposure time (i = 0.86 A).
Figure 10. Optimal controllable-variable selection for different lasers. (a) Laser spot 1 (supply current); (b) Laser spot 1 (exposure time); (c) Laser spot 2 (supply current); (d) Laser spot 2 (exposure time); (e) Laser spot 3 (supply current); (f) Laser spot 3 (exposure time).
Figure 11. Schematic of the target disc with three green LEDs.
Figure 12. On-site photos of the experimental platform.
Figure 13. Calibration results and position error distribution. (a) Optimized results; (b) Optimized position error distribution; (c) Unoptimized results; (d) Unoptimized position error distribution.
Figure 14. Comparative simulation results. (a) Optimized simulation curves; (b) Unoptimized simulation curves.
Table 1. Laser spot centroid position (x, y) at various supply currents.

Supply Current (A) | Laser 1 (Pixel) | Laser 2 (Pixel) | Laser 3 (Pixel)
0.12 | (811.81, 969.86) | (828.50, 737.00) | (950.00, 813.50)
0.18 | (812.29, 969.99) | (829.60, 737.25) | (951.18, 813.24)
0.35 | (812.19, 969.94) | (829.33, 737.19) | (951.36, 813.13)
0.63 | (812.13, 969.85) | (829.11, 737.10) | (950.84, 813.22)
Table 2. Laser spot centroid position (x, y) at various exposure times.

Exposure Time (ms) | Laser 1 (Pixel) | Laser 2 (Pixel) | Laser 3 (Pixel)
600 | (633.85, 728.64) | (694.08, 591.74) | (848.58, 728.96)
1000 | (634.00, 728.72) | (694.30, 591.97) | (848.51, 728.86)
1500 | (633.88, 728.92) | (694.19, 591.60) | (848.51, 728.86)
2000 | (633.69, 728.89) | (694.23, 591.71) | (848.78, 729.01)
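The centroid coordinates in Tables 1 and 2 are intensity-weighted spot centers in the image plane. As an illustrative sketch only (not the paper's exact algorithm), a gray-level-weighted centroid over a thresholded spot can be computed as follows; the function name and threshold value are assumptions:

```python
import numpy as np

def spot_centroid(img, threshold=50):
    """Intensity-weighted centroid of a laser spot in a grayscale image.

    Pixels below `threshold` are treated as background and ignored.
    Returns (x, y) in pixel coordinates (column, row), matching the
    convention of the tables above.
    """
    img = np.asarray(img, dtype=float)
    weights = np.where(img >= threshold, img, 0.0)
    total = weights.sum()
    if total == 0:
        raise ValueError("no pixels above threshold")
    rows, cols = np.indices(img.shape)
    y = (rows * weights).sum() / total
    x = (cols * weights).sum() / total
    return float(x), float(y)

# Synthetic example: a uniform bright 3x3 block centered at column 5, row 4
img = np.zeros((10, 10))
img[3:6, 4:7] = 255
print(spot_centroid(img))  # → (5.0, 4.0)
```

With real spot images, the sub-pixel values seen in Tables 1 and 2 arise naturally from the weighted average.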
Table 3. Area of saturated pixels at different supply currents.

Supply Current (A) | Laser 1 (Pixel²) | Laser 2 (Pixel²) | Laser 3 (Pixel²)
0.12 | 13 | 13 | 4.5
0.18 | 130 | 127 | 84
0.35 | 149.5 | 130 | 86
0.63 | 235.5 | 212.5 | 159.5
Table 4. Area of saturated pixels at different exposure times.

Exposure Time (ms) | Laser 1 (Pixel²) | Laser 2 (Pixel²) | Laser 3 (Pixel²)
600 | 116.5 | 131 | 134.5
1000 | 157.5 | 176 | 185.5
1500 | 225.5 | 229.5 | 241
2000 | 267 | 296.5 | 308.5
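The saturated areas in Tables 3 and 4 count pixels driven to full scale. A minimal sketch, assuming an 8-bit sensor where saturation means a gray value of 255 (the paper's exact criterion, e.g. the sub-pixel weighting that produces the half-pixel areas above, is not reproduced here):

```python
import numpy as np

def saturated_area(img, saturation_level=255):
    """Return the number of saturated pixels (area in pixel^2)."""
    return int((np.asarray(img) >= saturation_level).sum())

# Synthetic example: a 12 x 12 saturated block
img = np.zeros((100, 100), dtype=np.uint8)
img[40:52, 40:52] = 255
print(saturated_area(img))  # → 144
```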
Table 5. The optimal (x, y) values under different conditions.

Results | Laser 1 (Pixel) | Laser 2 (Pixel) | Laser 3 (Pixel)
Supply current | (812.11, 969.91) | (829.14, 737.14) | (950.85, 813.27)
Exposure time | (633.85, 728.79) | (694.20, 591.76) | (848.60, 728.92)
Table 6. Calculation results.

Characteristics | Laser 1 | Laser 2 | Laser 3
Supply current (A) | 0.47 | 0.60 | 0.58
Exposure time (ms) | 734 | 1610 | 997
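Table 6 gives the optimal supply current and exposure time selected by the paper's least-squares procedure (intersecting the focal spot centroid line with a fitted 3-D surface). As a much-simplified stand-in for that procedure, the sketch below least-squares-fits a quadratic to the Laser 1 centroid x-positions from Table 1 and takes the stationary point of the fit, clipped to the tested current range; it illustrates the fitting step only, not the paper's actual surface-intersection algorithm:

```python
import numpy as np

# Laser 1 centroid x-coordinates from Table 1 vs. supply current (A)
currents = np.array([0.12, 0.18, 0.35, 0.63])
x_pos = np.array([811.81, 812.29, 812.19, 812.13])

# Least-squares quadratic fit: x(i) = a*i^2 + b*i + c
a, b, c = np.polyfit(currents, x_pos, 2)

# Stationary point of the fitted parabola, clipped to the measured range
i_opt = float(np.clip(-b / (2.0 * a), currents.min(), currents.max()))
print(f"candidate optimal current: {i_opt:.3f} A")
```

In the paper the same least-squares machinery is applied jointly over both controllable variables, which avoids solving the nonlinear equations directly.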
Table 7. Camera and lens optical parameters.

Parameter | Value
Camera type | MER-500-7UM-L
Resolution | 2592 × 1944
Optical size | 1/2.5 inch
Pixel size | 2.2 µm × 2.2 µm
Frame frequency | 7 fps
A/D transfer precision | 12 bit
Pixel depth | 8 bit
Exposure style | ERS/GRR
Shutter time | 6 µs–1 s
Laser wavelength | 480–550 nm
Field-of-view distance | 3 m
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Li, K.; Yuan, F.; Hu, Y.; Du, Y.; Chen, W.; Lan, C. Least-Square-Method-Based Optimal Laser Spots Acquisition and Position in Cooperative Target Measurement. Sensors 2022, 22, 5110. https://doi.org/10.3390/s22145110
