Real-Time Stripe Width Computation Using Back Propagation Neural Network for Adaptive Control of Line Structured Light Sensors

A line structured light sensor (LSLS) generally consists of a laser line projector and a camera. With the advantages of simple construction, non-contact operation, and high measuring speed, it holds great promise for 3D measurement. For traditional LSLSs, the camera exposure time is usually fixed, while the surface properties can vary between measurement tasks. This can lead to under/over exposure of the stripe images or even failure of the measurement. To avoid these undesired situations, an adaptive control method is proposed to modulate the average stripe width (ASW) within a favorable range. The ASW is first computed with a back propagation neural network (BPNN), which reaches high accuracy while reducing the runtime dramatically. Then, an approximately linear relationship between the ASW and the exposure time is demonstrated via a series of experiments. On this basis, a linear iteration procedure is proposed to compute the optimal camera exposure time. When the optimized exposure time is adjusted in real time, stripe images with a favorable ASW can be obtained during the whole scanning process, improving the smoothness of the stripe center lines and the surface integrity. The small proportion of invalid stripe images further proves the effectiveness of the control method.


Introduction
A line structured light sensor (LSLS) generally consists of a laser line projector and a camera. Due to the advantages of simple construction, non-contact operation, and high measuring speed, it has been widely used for geometrical measurement [1,2], condition monitoring [3], profile evaluation [4], position identification [5], etc. In the measuring process, a laser plane from the projector intersects the object and a perturbed stripe image is captured by the camera. The geometrical information of the intersection profile can then be solved based on the laser triangulation principle [6].
Current research on LSLSs mainly focuses on sensor calibration [7][8][9], stripe center extraction [10][11][12], collaborative measurement via multiple sensors [13][14][15], and integration with motion axes [16][17][18]. The last two research scopes aim at measurement integrity via data fusion, where computing the transformation matrix between different coordinate systems is the core issue. Sensor calibration determines the camera intrinsic parameters, the lens distortion, and the equation of the laser plane [7]. For a specific LSLS, these parameters remain unchanged after calibration. The stripe images, however, as the original information for profile computation, are directly affected by the surface properties, and the measured parts may have different geometries, colors, surface roughness, etc. Even for one specific part, the light reflected into the camera can change with the local surface orientation during scanning, so a fixed exposure time cannot guarantee favorable stripe images.
The remainder of this paper is organized as follows. Section 2 introduces the measurement principle, and Section 3 presents the BPNN-based computation of the ASW. The relationship between the ASW and the exposure time is discussed in Section 4. After that, a linear iterative method is proposed for exposure control in Section 5. Finally, the experiment results are analyzed and discussed in Section 6.

Measurement Principle
The measurement principle of the laser line scanning system is shown in Figure 1. The laser line projector and the camera, which are fixed on the same frame, constitute the LSLS. The laser plane is emitted from the projector and intersects the part. A perturbed laser stripe reflected from the intersection profile can be captured by the camera. The stripe image carries the geometrical information. As the projector has a fixed relative position with the camera, the point coordinates on the intersection profile can be solved by the pre-calibrated sensor parameters [9]. When the part moves, the laser plane will intersect the surface at different positions and a series of intersection profiles can be achieved. The 3D topography can be obtained by combining these profiles with their translation distances [16]. In this research, we mainly focused on the adaptive control of the sensor.

Computation of ASW Using BPNN
As a measure of stripe quality, the ASW was selected as the controlled parameter. To obtain the ASW value, the first step was to compute the width of each cross section profile of the stripe image. As the stripe can be discontinuous or extremely under exposed, a minimum gray threshold value, θg, was used to evaluate the effectiveness of each cross section profile. The width of each cross section profile was computed column by column. For one column of the stripe image, if the maximum gray value of the pixels is smaller than θg, then it is ineffective. Only the effective cross section profiles were selected as the inputs of the BPNN.
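The column screening above can be sketched in Python (a minimal illustration; the function name and the row/column layout of the image array are assumptions made here):

```python
import numpy as np

def effective_columns(stripe_img, theta_g=70):
    """Indices of effective columns: a column is kept only if its
    maximum gray value reaches the threshold theta_g; only these
    columns are passed on to the BPNN."""
    col_max = stripe_img.max(axis=0)          # peak gray value per column
    return np.flatnonzero(col_max >= theta_g)
```

Columns whose peak falls below the threshold are simply skipped, so discontinuous or severely under exposed stripe regions contribute nothing to the ASW.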

Computation Principle Using BPNN
The computation principle for each cross section profile using the BPNN is shown in Figure 2. For the vth effective column, the central pixel is first determined by the extreme value method. Then, the same number of pixels is selected above and below it as the network input. The neuron numbers of the input and hidden layers are n and m, respectively, and the output layer has a single neuron. The network input vector is expressed by Xv = (xv,1, xv,2, …, xv,n)T, where xv,j is the normalized gray value of the jth selected pixel. The output vectors of the input and hidden layers are A = (a1, a2, …, an)T and B = (b1, b2, …, bm)T. The weight matrix from the input layer to the hidden layer is Wm×n, where wi,j is the value in the ith row and jth column. The weight vector from the hidden layer to the output layer is H1×m, where hk is the kth value. Cv is the output, and also the width of this column.

The weight value of the input neurons is given as 1 and the activation function is fa(x) = x; therefore, A = Xv. The hidden layer adopts the Sigmoid activation function, so its output can be expressed by

bk = 1/(1 + exp(−Σj=1..n wk,j·aj)), k = 1, 2, …, m. (1)

The Sigmoid function is also adopted for the output layer, but magnified by n to cover the possible width values of the cross section profile. The output for the vth effective column is

Cv = n/(1 + exp(−Σk=1..m hk·bk)). (2)

The ASW can then be achieved by

Cavr = (1/V)·Σv=1..V Cv, (3)

where V is the total number of effective cross sections.
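The forward computation described above can be sketched as follows (an illustrative Python version; the trained weights W (m×n) and H (length m) are assumed to be given, and the function names are this sketch's own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_width(x_v, W, H, n):
    """Width of one effective column: identity input layer (A = X_v),
    sigmoid hidden layer, and an output sigmoid magnified by n so the
    result can span the possible width values."""
    b = sigmoid(W @ x_v)              # hidden-layer output B
    return float(n * sigmoid(H @ b))  # column width C_v

def average_stripe_width(profiles, W, H, n):
    """C_avr: mean width over the V effective cross sections."""
    return sum(forward_width(x, W, H, n) for x in profiles) / len(profiles)
```

Only matrix-vector products and two sigmoid evaluations are needed per column, which is what makes the forward pass so much faster than curve fitting.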

Compute Reference Cross Section Width Using Gaussian Fitting
The reference width of each cross section profile needs to be computed for network training. As the intensity of the cross section profile follows a Gaussian distribution [34,35], Gaussian fitting (GF) is a robust way to assess the width. The Gaussian distribution can be expressed by

Ip = A0·exp(−(p − μ0)²/(2σ0²)), (4)

where p is the pixel index number; Ip is the gray value of this pixel; A0 is the amplitude; μ0 is the mean value; and σ0 is the standard deviation. To achieve a better fitting result, the Gauss-Newton method was adopted to solve the coefficients. Assuming a0 = A0, a1 = −1/(2σ0²), a2 = μ0/σ0², and a3 = −μ0²/(2σ0²), the fitting error for each pixel of the profile can be achieved as

εp = Ip − a0·exp(a1p² + a2p + a3). (5)

The squared fitting error F can be expressed by

F = Σp=1..P εp², (6)

where P is the number of pixels used for the fitting. To obtain the coefficients, the Jacobian matrix is computed as

J = [∂εp/∂ai] (p = 1, 2, …, P; i = 0, 1, 2, 3). (7)

Let α = (a0, a1, a2, a3)T and ε = (ε1, ε2, …, εP)T; the iterative formula can be expressed as

α(k+1) = α(k) − (JT J)−1 JT ε(k), (8)

where α(k) contains the coefficients of the kth iteration and ε(k) is the corresponding fitting error. The initial coefficients, α(0), are calculated by the least squares method. When the deviation between the current fitting error and the previous one is smaller than the given threshold value, the iteration stops.
To verify the fitting method, cross section profiles captured with different exposure times were analyzed, as shown in Figure 3. Figure 3a shows three unsaturated cross section profiles; for all of the cases, the fitted curves coincided well with the profiles. When the exposure time is further increased, the stripes become saturated. Although the fitted curves then rise slightly above the maximum gray value, their rising and falling edges still correspond well with the gray values, as shown in Figure 3b. Thus, the reference width C* of the cross section profile, which is defined as 6σ0 of the fitted curve, can be obtained.
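The fitting step can be illustrated with a simplified Gauss-Newton loop (a sketch, not the paper's implementation: it fits the three natural parameters A0, μ0, σ0 directly rather than the a0…a3 coefficients, and uses a moment-based initial guess in place of the least-squares one):

```python
import numpy as np

def fit_gaussian(p, I, iters=50, tol=1e-8):
    """Gauss-Newton fit of I ~ A*exp(-(p-mu)^2/(2*s^2)).
    Returns (A, mu, s); the reference width is then 6*s."""
    A = float(I.max())                       # moment-based initial guess
    mu = float(p[np.argmax(I)])              # (stands in for the
    s = max(float(np.sqrt(np.sum(I * (p - mu) ** 2) / np.sum(I))), 0.5)  # LSQ init)
    prev = np.inf
    for _ in range(iters):
        d = p - mu
        f = A * np.exp(-d * d / (2 * s * s))
        r = I - f                            # fitting errors (residuals)
        J = np.column_stack((f / A,          # df/dA
                             f * d / (s * s),        # df/dmu
                             f * d * d / (s ** 3)))  # df/ds
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        A, mu, s = A + step[0], mu + step[1], s + step[2]
        F = float(r @ r)                     # squared fitting error
        if abs(prev - F) < tol:              # stop when F no longer changes
            break
        prev = F
    return A, mu, s
```

The per-iteration cost of forming and solving the normal equations is what makes GF too slow for real-time use, motivating the trained network.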


Training of BPNN
Assuming the reference width of the vth effective cross section profile is C*v, the squared width error between the network result and the reference value can be defined as

E(l) = (1/2)·(Cv − C*v)², (9)

where l denotes the iteration number of the network training. The adjustment values of the weights can be computed according to the gradient descent principle as

Δhk = −η·∂E/∂hk, Δwi,j = −η·∂E/∂wi,j, (10)

where η ∈ [0,1] is the factor of learning efficiency. Based on the chain rule and the activation functions, the adjustment values in Equation (10) can be expressed by

Δhk = η(C*v − Cv)·Cv(1 − Cv/n)·bk, Δwi,j = η(C*v − Cv)·Cv(1 − Cv/n)·hi·bi(1 − bi)·xv,j. (11)

Weights after the adjustment are

hk(l+1) = hk(l) + Δhk, wi,j(l+1) = wi,j(l) + Δwi,j. (12)

When the training error is smaller than the given value, or the iteration has reached the maximum number, the network training is stopped. The weights of the network, W and H, can then be saved.
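A minimal training-step sketch under these update rules (an illustration, not the authors' code; the chain-rule factors follow from the sigmoid activations and the n-times magnified output sigmoid, for which dC/dnet = C·(1 − C/n)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, c_ref, W, H, n, eta=0.5):
    """One gradient-descent update for one cross section profile.
    W (m x n) and H (length m) are updated in place; returns the
    squared width error before the update."""
    b = sigmoid(W @ x)                          # hidden output
    c = float(n * sigmoid(H @ b))               # network width C_v
    delta_o = (c_ref - c) * c * (1.0 - c / n)   # output-layer error term
    delta_h = delta_o * H * b * (1.0 - b)       # back-propagated to hidden
    W += eta * np.outer(delta_h, x)             # adjust input-to-hidden weights
    H += eta * delta_o * b                      # adjust hidden-to-output weights
    return 0.5 * (c - c_ref) ** 2               # squared width error E
```

Repeating this step over all training profiles until the error drops below the stopping threshold yields the saved weights used in the forward computation.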

Sensors 2020, 20, x FOR PEER REVIEW 6 of 17 Weights after the adjustment are When the training error is smaller than the given value, or the iteration has reached the maximum value, the network training is stopped. The weights of the network, W and H, can be saved.

Relationship between ASW and Exposure Time
To develop a reasonable control method, the relationship between the ASW and the exposure time was explored. The experiment considered different geometries, materials, and colors. Two specific parts were selected for the analysis, as shown in Figure 4. The first one, Figure 4a, had a shiny surface and the other, Figure 4c, had a matt surface. The part was placed on the stage with its surface intersecting the laser plane at different intersection profiles. For one specific intersection profile, the exposure time was manually adjusted and Cavr was computed by Equation (3). Figure 4b,d show the corresponding curves of the intersection profiles in Figure 4a,c, respectively. It can be seen that these curves varied remarkably, because the points on different profiles have different normal vectors that determine the light reflected into the camera. However, for each specific profile, the ASW increased monotonically with the exposure time.
To further explore the relationship, parts with different materials were examined, as shown in Figure 5. For simplicity of analysis, one intersection profile was selected on each part. As expected, the ASW also increased with the exposure time, as shown in Figure 6. In this figure, the ASW of curve (d) increased rapidly in the early stage and then slowed down in the later stage. For the other curves, there was an approximately linear relationship between the ASW and the exposure time.
From the above experiments, the following conclusions can be drawn. (1) When the camera exposure time gets longer, the ASW increases monotonically; each exposure time vs. ASW curve is smooth and approximately linear. (2) For a specific exposure time, the ASW can be very different under different geometries, materials, and colors. Thus, fixed sensor parameters cannot satisfy different measurement situations.


Adaptive Control of Exposure Time
Based on the above analysis, the exposure time vs. ASW curves of different intersection profiles can vary significantly, even for the same part. Thus, a linear iterative method that does not rely on an ideal exposure time vs. ASW curve was proposed, as shown in Figure 7. The control objective is to modulate the ASW within a required range [Cavr^min, Cavr^max]. For the initial exposure time T0, the stripe image is captured and the ASW is computed as Cavr^(0). If Cavr^(0) is not within the range, the line L1 passing through the point (T0, Cavr^(0)) and the origin O can be constructed. The next estimated exposure time is computed by

Tq+1 = (Cavr^set/Cavr^(q))·Tq, q = 0, 1, …, Q, (13)

where Q is the maximum number of iterations and Cavr^set is the ideal stripe width. After the camera exposure time is set to Tq+1, the corresponding stripe image can be obtained and Cavr^(q+1) can be computed. If Cavr^(q+1) falls within the required range, the iteration stops and the current image is used for stripe center extraction. Otherwise, the iteration continues.
At the beginning of a measurement, the ASW under the pre-defined exposure time may deviate substantially from Cavr^set, and several linear iterations may be needed. In the measuring process, however, the material and the color of a specific part are usually unchanged and the 3D geometry is continuous in most cases. Thus, the exposure time for the next image can be estimated from the previous one.
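The linear iteration can be sketched as follows (a hypothetical `measure_asw` callback stands in for capturing a stripe at a given exposure time and computing its ASW; the default range and set point follow the values used in the experiments, 8–12 pixels with a set point of 10):

```python
def adjust_exposure(T0, measure_asw, c_set=10.0, c_min=8.0, c_max=12.0, Q=5):
    """Linear iteration for the exposure time: the line through the
    origin and (T_q, C_q) yields the next estimate
    T_{q+1} = T_q * C_set / C_q."""
    T = T0
    for _ in range(Q):                  # Q: maximum number of iterations
        c = measure_asw(T)
        if c_min <= c <= c_max:
            return T, c                 # ASW within the required range
        T = T * c_set / c               # next estimated exposure time
    return T, measure_asw(T)
```

Because the exposure-to-width curve is approximately linear and passes near the origin, one or two iterations usually suffice.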

Experiments and Analysis
The measurement system is shown in Figure 8. It consists of a laser projector (Shenzuan Lasers Co. Ltd., Shantou, China), a camera (MV-UB500M, Mindvision Technology Co. Ltd., Shenzhen, China), a linear stage, and the structural parts that connect them together. The laser projector has a wavelength of 650 nm and a power of 5 mW. Its minimum line width can reach 0.4 mm at a projection distance of 300 mm. The camera resolution is 1280 × 960 pixels. The focal length of the lens is 4~12 mm and can be manually adjusted. During the measurement process, the stripe images were processed by a computer with an Intel i5-3470 CPU and 4 GB RAM.


The widths of the selected stripes in Figure 9 were computed to verify the effectiveness of the trained network. These stripes are denoted by the region of interest (ROI) on the images. The reference width of each effective cross section profile, which is used for the network training, was computed by use of the GF. These profiles generally cover all of the stripe statuses from under to over exposure. The detailed training process can be found in Section 3.3. The factor of learning efficiency was determined by experiments; when η = 0.5, the network converges well.
Figure 9. Stripes for network training and verification. (a) Normal exposed wave stripe; (b,f) under exposed random stripes; (c,e) over exposed smooth stripes; (d) normal exposed arc stripe.
After the training, the weights W and H are obtained, and the width of each cross section profile can be achieved through a single forward computation. For comparison, the width of each cross section profile in Figure 9 was computed by use of the GF and the BPNN, respectively. The results are shown in Figure 10. It can be seen that the values obtained from the BPNN agree well with those of the GF. The ASW values from the two methods are listed in Table 1. For all of the stripe images in Figure 9, the maximum deviation was 0.0403 pixels, and the minimum deviation was only 0.0048 pixels, which shows the high accuracy of the stripe width computation using the BPNN. To achieve real-time exposure adjustment, the ASW must be computed in the shortest possible time. The GF method, which needs several seconds per image, is therefore unsuitable. The proposed BPNN method, in contrast, requires less than 1% of the computation time of the GF method and is more suitable.
For actual measurements, the stripes are usually continuous or piecewise continuous. To further reduce the computation time, we need not compute the width of every cross section profile, but only a certain number of uniformly sampled ones along the stripe. The sampling interval was set to 5, 10, 15, and 20 pixels. The corresponding stripe width and computation time are shown in Figure 11. From Figure 11a, it can be seen that the ASW had no significant variation as the interval increased. The computation time, however, could be further reduced to less than 5 ms, as shown in Figure 11b. From the above experiments, it can be seen that the BPNN can obtain the width of the laser stripe accurately. More importantly, it is extremely efficient, which makes real-time stripe width assessment possible. For the following experiments, the sampling interval was set to 10 pixels.
Figure 11. Influence of sampling intervals on the (a) average stripe width and the (b) run time.


Adaptive Control for a Single Intersection Profile
The favorable cross section profiles of a stripe image are the ones that reach the maximum gray value without being saturated, such as the profile with the exposure time of t = 7 ms in Figure 3a. In this situation, the cross section profile reaches a favorable reliability [35]. The favorable width depends on the component properties of the LSLS, such as the camera resolution, field of view, and width of the laser line. For the proposed sensor, the favorable width computed from such a profile is about 10 pixels. Thus, the ideal width was set to Cavr^set = 10 pixels, and the required range of stripe width from Cavr^min = 8 pixels to Cavr^max = 12 pixels. The minimum gray threshold value was set as θg = 70.
To analyze the effectiveness of the control method, two specific intersection profiles were examined. The first was from a concave surface and the second from a convex surface, as shown in Figure 12. For each one, three stripe images with different exposure times were captured, as illustrated in Figure 12a,c. In each figure, the stripe can be over exposed, under exposed, or adaptively controlled. The center line of each stripe was computed by use of the gray gravity method [12]. For each cross section profile, 31 pixels were selected to compute the center point. The center extraction results are shown on the stripe images and, for easy comparison, plotted in Figure 12b,d, respectively. It can be seen that if the stripe image is over exposed, significant noise is introduced, especially at the corner regions. On the other hand, if the stripe is under exposed, many center points are lost. When the adaptive control method is adopted, a more favorable stripe image and center extraction result can be achieved. To remove the extremely over exposed cross section profiles, a maximum width threshold was defined as θw = 25 pixels. If Cv > θw, the center point of that cross section is not computed.
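The thresholded extraction can be sketched as follows (an illustrative simplification: the gray gravity method here weights the whole column, whereas the text selects the 31 pixels around each center; function and parameter names are this sketch's own):

```python
import numpy as np

def center_points(stripe_img, theta_g=70, theta_w=25, widths=None):
    """Gray-gravity centers with the two thresholds from the text:
    columns whose peak gray value is below theta_g yield no center,
    and columns whose computed width exceeds theta_w (extremely over
    exposed) are skipped."""
    centers = {}
    rows = np.arange(stripe_img.shape[0], dtype=float)
    for v in range(stripe_img.shape[1]):
        col = stripe_img[:, v].astype(float)
        if col.max() < theta_g:
            continue                    # ineffective column
        if widths is not None and widths[v] > theta_w:
            continue                    # over-width column, no center point
        centers[v] = float((rows * col).sum() / col.sum())
    return centers
```

The two thresholds act at opposite ends of the exposure scale: θg discards under exposed columns, θw discards extremely over exposed ones.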
Sensors 2020, 20, x FOR PEER REVIEW 11 of 17

To further compare the center extraction results, each center line was fitted using the moving least squares method [12]. The fitting error can then be used to evaluate the extraction quality. The absolute average error (AVR) and the root mean squared error (RMSE) of the three center lines are listed in Table 2. The results show that the smoothness of the stripe center lines can be effectively enhanced via the adaptive control.
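The per-column center extraction with the thresholds above can be sketched as follows. This is a minimal illustration of the gray gravity (intensity centroid) method under the paper's thresholds θg = 70 and θw = 25; the function name, the centering of the 31-pixel window on the peak, and the use of NumPy are assumptions, not the authors' implementation.

```python
import numpy as np

def gray_gravity_centers(stripe, theta_g=70, theta_w=25):
    """Per-column stripe center via the gray gravity (intensity centroid) method.

    stripe: 2D uint8 image (rows x cols). Returns an array of center row
    positions, with NaN where the column is rejected (under exposed, no
    stripe, or extremely over exposed).
    """
    rows, cols = stripe.shape
    centers = np.full(cols, np.nan)
    for v in range(cols):
        col = stripe[:, v].astype(float)
        peak = int(col.argmax())
        if col[peak] < theta_g:            # under exposed or no stripe: skip
            continue
        lit = np.flatnonzero(col >= theta_g)
        width = lit.max() - lit.min() + 1
        if width > theta_w:                # extremely over exposed: skip
            continue
        # intensity centroid over a 31-pixel window around the peak
        lo, hi = max(0, peak - 15), min(rows, peak + 16)
        w = col[lo:hi]
        centers[v] = (np.arange(lo, hi) * w).sum() / w.sum()
    return centers
```

A synthetic column with a uniform 5-pixel stripe at rows 20..24, for example, yields the centroid 22.0, while an all-dark column is rejected as NaN.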

Adaptive Control for Part Scanning
To further demonstrate the advantages of our method, a stamped aluminum part was measured with the scanning direction shown in Figure 13a. When the laser plane intersected the part at IPa, the adaptive control method was adopted and the ASW was modulated within the required range; the exposure time was recorded as Ta. Similarly, when the laser plane intersected the part at IPb, another optimal exposure time was obtained as Tb. In the traditional scanning process, the exposure time is unchanged. If Ta is selected as the constant exposure time, the ASW changes with the surface geometry, as shown in Figure 13b. At the right end of the part, the surface normal gets closer to the camera optical axis, so the average stripe width increases significantly and some of the cross section profiles become over exposed. Conversely, the exposure time Tb would make most of the cross section profiles under exposed. Neither can guarantee a favorite ASW during the whole scanning process. When the proposed method is applied, the ASW can be adaptively controlled within the required range, which benefits the stripe center extraction.
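Exploiting the approximately linear relationship between exposure time and ASW, the linear iteration toward the set-point width can be sketched as below. `measure_asw` is a hypothetical stand-in for capturing a frame at a given exposure time and computing its ASW (e.g., via the BPNN); the secant-style update and all names are illustrative assumptions, not the authors' exact procedure.

```python
def optimal_exposure(measure_asw, t0, t1, c_set=10.0,
                     c_min=8.0, c_max=12.0, max_iter=6):
    """Linear iteration for the optimal exposure time.

    measure_asw(t): hypothetical callback returning the ASW (pixels) of a
    frame captured with exposure time t. Each step fits a local line through
    the last two (time, ASW) samples and jumps to where it crosses c_set.
    """
    c0, c1 = measure_asw(t0), measure_asw(t1)
    for _ in range(max_iter):
        if c_min <= c1 <= c_max:      # ASW already in the favorite range
            return t1
        if t1 == t0 or c1 == c0:      # degenerate: no slope information
            break
        slope = (c1 - c0) / (t1 - t0)
        t0, c0 = t1, c1
        t1 = t1 + (c_set - c1) / slope   # step to the set-point on the line
        c1 = measure_asw(t1)
    return t1
```

With a perfectly linear camera response (e.g., ASW = 0.002·t + 1), a single update lands exactly on the exposure time giving the set-point width of 10 pixels.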
Measurement results of the part are shown in Figure 14. Figure 14a-c show the point cloud data, and Figure 14d-f show the corresponding surfaces obtained via Delaunay triangulation. Here, we mainly focus on the portions with a large slope, denoted by a1, b1, and c1, and the portions with large curvature, denoted by a2, b2, and c2, in the corresponding figures. It can be clearly seen that more points are obtained with adaptive control of the exposure time, and the surface integrity is enhanced.
Other than the first part, three further parts with different surface properties were investigated. The first had a complex 'Mickey Mouse' surface, the second was brown with a convex surface, and the last was pink with a concave surface, as shown in Figure 15. For comparative analysis, these parts were also scanned using different constant exposure times. The exposure times Ta and Tb were obtained with the same method as for the part in Figure 13a. When a constant exposure time is applied for the whole scanning process, the stripe can be over/under exposed due to the geometrical variation of the part. This leads to missing data points and obvious holes on the reconstructed surfaces. Adaptive control of the exposure time, on the other hand, can effectively improve the measurement integrity.

Comparative Analysis of Effective Points
For each intersection profile, the number of ideal center points equals the number of columns of the stripe image, i.e., one center point per column. However, if the maximum gray value of a specific cross section profile is smaller than the gray threshold θg = 70, or the width of the cross section profile is larger than the preset value θw = 25 pixels, the center point of that cross section profile is not computed. The former case covers two situations: (1) the cross section profile is under exposed, or (2) there is no stripe due to self-occlusion of the part. Otherwise, one center point is obtained for the column. Thus, the ratio of effective points can be defined as

Reff = Neff / (Vc × Nval),

where Neff is the number of effective points for the whole measuring process, Vc is the number of columns of each stripe image, and Nval is the number of effective stripe images. The ratio of effective points is shown in Figure 16. It can be seen that the Reff value is improved significantly for all cases when the exposure time is adaptively controlled during the measurement process.
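Given per-image center lines where rejected columns are marked as NaN, Reff = Neff / (Vc × Nval) follows directly. A minimal sketch with illustrative names; the list-of-arrays representation is an assumption.

```python
import math

def ratio_effective_points(center_lines):
    """Reff = Neff / (Vc * Nval).

    center_lines: list of per-image center lists, one value per column,
    NaN where the center point was rejected (under exposed, no stripe,
    or over exposed beyond the width threshold).
    """
    n_val = len(center_lines)            # number of effective stripe images
    v_c = len(center_lines[0])           # columns per stripe image
    n_eff = sum(sum(1 for c in line if not math.isnan(c))
                for line in center_lines)
    return n_eff / (v_c * n_val)
```

For two 3-column images with one rejected column, for instance, Reff = 5/6.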


Effective Analysis of Linear Iteration
In the measurement process, if the ASW lies within the required range of 8~12 pixels, the stripe image is used to compute the center points and is called an effective image. Otherwise, the image is only used to estimate the exposure time of the next frame and is called an invalid stripe image. The invalid stripe images do not contribute to the 3D measured points. The ratio of invalid image frames is defined by

Rin = Ninval / Ntotal,

where Ninval is the number of invalid frames and Ntotal is the total number of frames in the measuring process. For the four parts above, the Rin values are shown in Figure 17. It can be seen that Rin depends on the part. The part in Figure 13a has a large slope at its right end, and the part in Figure 15a has a steep feature at its center. Such dramatic changes need more iterative steps to achieve the required stripe width, so both parts have relatively large Rin values. When the surfaces are smoother, as for the parts in Figure 15e,i, the Rin value is much smaller. For the part in Figure 15i, Rin was as low as 2.06%, which means that the exposure time of a later image can be well estimated from the former one. This demonstrates that the proposed adaptive control method can improve the surface integrity without increasing the computation cost too much.
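The ratio Rin = Ninval / Ntotal follows directly from the per-frame ASW values; a minimal sketch, with names and the list representation assumed for illustration:

```python
def ratio_invalid_frames(asw_per_frame, c_min=8.0, c_max=12.0):
    """Rin = Ninval / Ntotal.

    asw_per_frame: ASW (pixels) of every captured frame. A frame is invalid
    when its ASW falls outside the required [c_min, c_max] range and thus
    only serves to retune the exposure time for the next frame.
    """
    n_total = len(asw_per_frame)
    n_inval = sum(1 for c in asw_per_frame if not (c_min <= c <= c_max))
    return n_inval / n_total
```

For example, a scan whose frames have ASWs of 9, 13, 10.5, and 7 pixels contains two invalid frames, giving Rin = 0.5.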
Figure 17. Ratio of invalid frames for the measurement of different parts.

Conclusions
An adaptive control method was proposed to adjust the ASW via real-time tuning of the exposure time. To enhance computing efficiency, the width of each stripe cross section profile was calculated with the BPNN. The reference widths used for network training were obtained by the GF method, which is time consuming and not suitable for real-time computation. The BPNN, on the other hand, reduces the computation time to several milliseconds and makes real-time adaptive exposure control possible. To reveal the relationship between exposure time and ASW, several experiments were carried out; the results show that the exposure time vs. ASW curve is approximately linear. Therefore, a linear iteration method was introduced to adjust the camera exposure time. When the control method was applied, the ASW could be kept within the required range and the quality of the stripe images was effectively improved. The measurement results of different parts showed that the control method improves the surface integrity, and the small proportion of invalid images during the measurement process further validated the adaptive control method. It should also be noted that dramatic changes of geometry lead to varied exposure times, so the sampling interval can be non-uniform. In further research, the scanning velocity may also be included in the adaptive control system to enhance the measurement quality.