Reliable and Accurate Wheel Size Measurement under Highly Reflective Conditions

The structured-light vision sensor, as an important tool for obtaining 3D data, is widely used in online high-precision measurement. However, the captured stripe images can show the high-dynamic characteristics of low signal-to-noise ratio and uneven brightness due to the complexity of the on-site environment, which seriously affects measurement reliability and accuracy. In this study, a wheel size measurement framework based on a structured-light vision sensor is proposed that has high precision and reliability and is suitable for highly reflective conditions. Initially, a quality evaluation criterion for stripe images is established, and the entire stripe is divided into high- and low-quality segments. In addition, multi-scale Retinex theory is adopted to enhance stripe brightness, which improves the reliability of subsequent stripe center extraction. Experiments verify that this approach can remarkably improve measurement reliability and accuracy and has important practical value.


Introduction
High-speed and high-precision vision measurement in outdoor environments has become an increasingly important tool for industrial scenes, e.g., vision perception [1], defect detection [2,3], and size measurement [4,5]. Among them, the online measuring instrument represented by a structured-light vision sensor is widely used, e.g., in wheel size measurement [6,7], reconstruction of large forging parts [8], and monitoring of pantograph running status [9,10], where online wheel size measurement has remarkable practical meaning [11]. However, this measurement presents three main characteristics: (1) High speed: because the moving speed of the measured wheel is considerably fast, real-time calculation is required of the equipment. (2) High reflection: long-term friction results in smooth wheel tread surfaces and specular reflection under active illumination, thereby causing overexposure (generally, the camera is set to working modes of a large aperture and short exposure time). (3) Irregular surface: the topography of the measured wheel is irregular, and light is reflected away from the camera, generating low-brightness images. Given the restriction of the sensor observation frustum, ensuring a good imaging status over the entire surface is difficult.
The imaging characteristics in highly reflective conditions are mainly reflected in dramatic brightness changes, that is, a high dynamic range. The dynamic range (DR) of an image is the ratio of its maximum and minimum brightness; a high DR indicates a large difference between the two. In high-dynamic images, the stripe brightness and width distribution are uneven. Figure 1 presents the camera imaging light path, where the measured wheel has high speed, high reflection, and an irregular surface; the camera is set to working modes of a large aperture, short exposure, and high sensitivity; and the captured stripe images have high-dynamic characteristics.

At present, experts have been investigating wheel size measurement sensors [12][13][14][15][16][17][18], and numerous types of products have been invented and applied, such as MERMEC [19], IEM [20], and KLD [21], which are suitable for low-speed trains. A stable and low-speed environment is selected to avoid measurement instability caused by outdoor high reflection. When the trains reach a high speed, the measurement accuracy seriously decreases. MERMEC adopts a specially designed high-resolution camera, and its measurement accuracy can reach 0.2 mm when the passing speed does not exceed 20 km/h. To date, existing measurement systems are mainly installed at the fence or garage entrance, under ideal environmental conditions. However, systems [17] installed on the main line are seriously affected by outdoor lighting and other factors. No structured-light vision measurement system is robust and reliable enough to meet the requirements of online measurement in highly reflective conditions. Therefore, the quality of stripe images in highly reflective conditions must be improved to increase system reliability. To our knowledge, no solution has been proposed to improve structured-light vision measurement systems in highly reflective conditions.
The conventional methods often focus on image brightness enhancement. Recently, various image enhancement methods have been proposed to improve low-brightness images.
These methods operate in the spatial or frequency domain. Frequency-domain methods mainly address the transform coefficients in a certain frequency domain of the image, and the enhanced image is then obtained by inverse transformation. In Reference [22], a multi-resolution overlapping sub-block equalization method based on wavelets is proposed. The image is decomposed and equalized and then reconstructed through the inverse wavelet transform, but the method is unsuitable for images whose histograms have a concentrated distribution. The gray-level transformation method adjusts the DR or contrast of the image, which is an important means of enhancement. This method transforms the grayscale of the original image into a new value, which can increase the gray DR. The transformation function forms include linear, piecewise linear, and nonlinear. For the nonlinear gamma correction method [23,24], the selection of the nonlinear correction function is particularly important, its local processing capacity is poor, and its effect on structured-light image enhancement is not evident. In addition, histogram equalization [25] can increase the uniformity of the gray distribution and enhance image contrast. However, this method does not consider the frequency and details of the image, which is likely to cause overenhancement. The gradient enhancement method [26] converts the image to the logarithmic domain with gradient processing, reducing small gradient values to compress the image DR and increasing large gradients to enhance edges in the image. Nevertheless, this method is unsuitable for real-time application. The Retinex [27] enhancement method utilizes a Gaussian smoothing function to estimate the luminance component of the original image. This method is suitable for handling images with low local gray values, thereby effectively enhancing the details of the dark parts and maintaining the original image brightness to a certain extent while compressing the image contrast.
In recent years, numerous forms based on Retinex theory, e.g., single scale [28], multi-scale [29], and color image enhancement [30], have achieved the ideal enhancement effect.
The accuracy of stripe center extraction determines the measurement precision, whereas the stripe quality affects the accuracy of stripe center extraction. Therefore, this paper presents a method for wheel size measurement in highly reflective conditions. First, the stripe imaging process in highly reflective conditions is analyzed. Second, a stripe quality evaluation is proposed to locate the enhanced stripe segments accurately. The image brightness is enhanced based on multi-scale Retinex (MSR). Physical experiments prove that, in the outdoor complex environment of the railway, the proposed method, compared with existing methods, can improve the accuracy of the online wheel size measurement system and effectively reduce leakage and false-positive rates.
The remainder of this paper is organized as follows. Section 2 briefly reviews the wheel profile reconstruction and overviews the scheme of high-dynamic image processing. Section 3 describes the stripe imaging in highly reflective conditions and presents an effective enhancement approach for a high-dynamic stripe image. Section 4 discusses the experiments and evaluations. Section 5 draws the conclusion.

System Overview
The online wheel size dynamic measurement system is composed of equipment on rail, control unit, data processing, and wheel size calculation modules. The equipment on rail includes two structured-light vision sensors, which are installed beneath the measured wheel. The control unit receives the wheel arrival signal from a magnetic trigger and generates synchronous trigger signals for the cameras and lasers. Data processing involves image acquisition, image processing, stripe center extraction, and 3D reconstruction, which facilitate the wheel size calculation. Wheel cross-profile acquisition is a critical process in the measurement, and its layout is shown in Figure 2. Two structured-light vision sensors are installed inside and outside the measured wheel, and their laser planes are adjusted to be coplanar in space and pass through the wheel center.
When the wheel passes the trigger position, the inner and outer sensors each observe a portion of the wheel cross profile. Among them, the inner camera is mainly used to detect the inner rim and part of the flange, and the outer camera is utilized to detect the tread and part of the flange. Let K_i, K_o be the inner and outer camera intrinsic parameters, F_i, F_o be the laser plane functions in the camera coordinate system (CCF), and R_oi, T_oi be the transformation matrices from the outer camera to the inner camera. The sensors are calibrated offline [31,32]. Let C_i and C_o be the wheel profiles from the inner and outer sensors. After transforming C_o into the inner CCF, C_o' = R_oi · C_o + T_oi, the complete wheel profile is C_c = C_i ∪ C_o', which is used to calculate the parameters in accordance with the wheel size definitions. Specifically, we focus on solving the stripe image enhancement in a wheel size measurement system, the diagram of which is shown in Figure 3. Figure 3 illustrates that stripe skeleton extraction is implemented on the raw images captured by each camera. Based on the stripe quality evaluation criteria, the stripe segments to be enhanced are established along the stripe skeleton trajectory. Based on MSR theory, the stripe segments with low brightness are enhanced for center extraction. The reliable and accurate wheel size measurement framework can be summarized as follows:
Step 1. Stripe images are synchronously captured from the sensors installed around the measured wheel.
Step 2. Image dilation and low threshold segmentation are adopted to obtain the stripe area in which valid stripes are located and the central skeleton points are extracted.
Step 3. Stripe image quality evaluation criteria are established in accordance with the center extraction principle.
Step 4. Based on the stripe quality criteria, the stripe segments with low brightness are obtained along the stripe skeleton trajectory.
Step 5. MSR brightness enhancement is applied to the low-quality stripe segments generated in Step 4. Moreover, the valid enhanced stripe is segmented accurately based on the reflectivity.
Step 6. The center points of the enhanced stripe are extracted, and the wheel cross profile is reconstructed. Finally, the wheel size parameters are calculated.
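The profile-merging relation from Section 2, C_c = C_i ∪ (R_oi · C_o + T_oi), can be sketched as below. The function name, array shapes, and the example rotation/translation are our illustrative assumptions, not values from the system:

```python
import numpy as np

def merge_profiles(C_i, C_o, R_oi, T_oi):
    """Hypothetical sketch: C_i, C_o are (N, 3) point arrays in their own
    camera frames; R_oi is a (3, 3) rotation and T_oi a (3,) translation
    taking outer-camera points into the inner camera frame."""
    C_o_inner = C_o @ R_oi.T + T_oi          # C_o' = R_oi * C_o + T_oi
    return np.vstack([C_i, C_o_inner])       # C_c = C_i union C_o'

# Toy example: identity rotation and a unit shift along x.
C_c = merge_profiles(np.zeros((2, 3)),
                     np.ones((3, 3)),
                     np.eye(3),
                     np.array([1.0, 0.0, 0.0]))
```

With real calibration data, R_oi and T_oi would come from the offline calibration step cited above.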

Algorithm Implementation
In this section, the characteristics of wheel imaging and the cause of low brightness are analyzed. High-dynamic stripe enhancement includes stripe quality evaluation, locating enhanced segments, stripe enhancement, and enhanced stripe segmentation.

Stripe Imaging Description
The structured-light vision sensor projects a laser onto the measured object surface and obtains an image of the reflected laser line. The camera-sensitized illumination model can be expressed as I = I_e + I_d + I_s, where I_e is the ambient component, I_d denotes the diffuse component, and I_s represents the specular component. I_e is independent of the viewing angle and related to the light source and object reflectivity, and it is negligible in structured-light measurement due to an optical filter. I_d is determined by the view angle and the object surface reflectivity; when the view angle is small, the received diffuse reflection component is considerable. Meanwhile, the view angle, object reflectivity, and specular direction determine I_s. When the angle between the view vector and the specular direction decreases, the specular component increases and the stripe is prone to overexposure. Therefore, in the structured-light imaging process, we attempt to avoid the effect of the specular component while increasing the diffuse component, so as to improve the image SNR and brightness uniformity and ensure measurement accuracy. Figure 4a,b shows the stripe images captured by the outer and inner cameras, respectively. After the line laser irradiates the wheel surface, most of the light, except the part absorbed by the wheel, is reflected, composed of I_d and I_s. Most of the laser energy is reflected away due to the high reflectivity of the wheel surface, large changes in topography, and the imaging angle of the camera, resulting in partial darkness of the stripe image, as shown in segments r1 and r2 of Figure 4a. Similarly, owing to the large view angle of the inner camera and the reflectivity of the wheel inner side, the stripe brightness in segments r1 and r2 of Figure 4b is dark in the inner rim.
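The behavior of the illumination model I = I_e + I_d + I_s can be sketched with a minimal Phong-style computation. The coefficients k_d, k_s and the shininess exponent n are illustrative assumptions, not calibrated sensor values:

```python
import math

def sensed_intensity(theta_view, theta_spec, k_d=0.6, k_s=1.0, n=50, I_e=0.0):
    """theta_view: angle between the surface normal and the view direction;
    theta_spec: angle between the view direction and the specular (mirror)
    direction. All coefficients are illustrative assumptions."""
    I_d = k_d * max(math.cos(theta_view), 0.0)       # diffuse component
    I_s = k_s * max(math.cos(theta_spec), 0.0) ** n  # narrow specular lobe
    return I_e + I_d + I_s

# The specular term dominates when the view vector approaches the mirror
# direction (theta_spec -> 0), which is what drives the stripe to overexpose.
near_mirror = sensed_intensity(0.3, 0.01)
off_mirror = sensed_intensity(0.3, 0.5)
```

The sharp falloff of the cos^n lobe illustrates why a small change in surface orientation flips a stripe segment between overexposed and dark.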

Figure 5 displays the gray value distribution in the stripe and normal directions. Ideally, the gray distribution is even in the stripe direction and is inclined to a Gaussian distribution in the normal direction. However, the grayscale distribution of the image captured on site changes strongly. Figure 5b shows that the gray distribution curve in the stripe direction fluctuates severely and is accompanied by two dark curve segments marked with a red dotted frame. Figure 5c illustrates the gray distribution curves in the normal direction at approximately eight sampling points, presenting as spiked, steep, and narrow. If we increase the camera exposure time, then the bright segments become brighter, which leads to overexposure. Therefore, the best method is to enhance the brightness of the dark segments and maintain the original ones. Figure 6 displays the online stripe images captured from the outer cameras.
Three types of problem segments appear, namely, low brightness, overexposure, and stray light. Overexposure and other stray light, which are unavoidable in real measurement, are reduced by decreasing the exposure time. Generally, the low-brightness stripe segments are the main factor causing leakage, which is the key focus of this study.

Figure 6. Online stripe images captured from the outer camera. ① and ③ present the stripe images captured with low exposure. ② and ④ exhibit the stripe images captured with high exposure.

Figure 7 shows the center extraction results of stripe images. Figure 7a displays the images with low brightness in the tread segments. Figure 7b illustrates the stripe images with overexposure in the flange segments. The former cases cause useful segments to not be extracted accurately; in particular, the key segments in the tread are almost not extracted, which causes the wheel parameters to not be calculated.

Stripe Brightness Analysis

When the incident light is a laser, the wheel surface is irregularly undulating even though the light is regular. Therefore, the reflected light shows irregularity with bright, dark, or uneven distribution. Let α_1 and α_2 be the incident and scatter angles, respectively. Then, the scatter light intensity is described by Equation (1), where I_0(x, y) represents the incident light, σ_h denotes the surface roughness, and λ refers to the wavelength. In the online wheel size dynamic measurement system, the near-infrared laser wavelength is λ = 808 nm, and the wheel surface roughness is σ_h ∈ [40, 100]. In Figure 8, the abscissa is the ratio of the surface roughness to the wavelength, and the vertical axis is the normalized intensity of the scattered light. The scattered light increases as the ratio of the surface roughness to the wavelength increases and decreases as the view angle increases. Given that the relative roughness of the surface is small, it can be regarded as a fixed value. Figure 8 S1 illustrates that when the surface is smooth, the scattered light intensity is weak, and when the angle between the observation direction and the surface normal vector is large, the scattered light is also weak. Figure 8 S2 indicates that the stripe observed by the camera becomes dark after a long period of friction in the tread area, whereas other rough areas remain bright. Figure 4a exhibits that the stripe in the tread area is darker than in other rough areas due to its smoothness. By contrast, Figure 4b shows a dark area in the inner rim. In the stripe image captured by the outer camera, as shown in Figure 4a, tread segments r1 and r2, owing to the smooth surface and large view angle, allow minimal light into the digital sensor. The reflectivity of the stripe image from the inner camera is reduced due to oil pollution above the inner rim, and the stripe is dimmed in segments r1 and r2 of Figure 4b.
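Equation (1) itself is not reproduced in this excerpt. As an illustration only, one common Beckmann-style rough-surface model that reproduces both trends described above (scatter grows with σ_h/λ and shrinks as the view angle grows) is the non-specular energy fraction; the function and its exact form are our assumptions:

```python
import math

def scattered_fraction(sigma_h, wavelength, alpha):
    """Assumed Beckmann-style model: fraction of energy scattered out of the
    specular lobe. alpha is the angle from the surface normal, in radians."""
    g = (4.0 * math.pi * sigma_h * math.cos(alpha) / wavelength) ** 2
    return 1.0 - math.exp(-g)   # rough surface -> more scatter

# Trends for the 808 nm laser and the quoted roughness range (taken as nm):
rough = scattered_fraction(100e-9, 808e-9, 0.2)    # rougher surface
smooth = scattered_fraction(40e-9, 808e-9, 0.2)    # smoother surface
steep = scattered_fraction(100e-9, 808e-9, 1.2)    # larger view angle
```

Under this assumed model, the rougher surface scatters more and the steeper view angle scatters less, matching the qualitative description of Figure 8.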

Quality Evaluation of Stripe Image
A good laser stripe is a prerequisite for structured-light vision sensor measurement. Therefore, laser stripe image quality evaluation is conducive to determining whether, and which, stripe segments need to be enhanced. Xie Fei [33] conducted a statistical behavior analysis of the laser stripe center detector based on the Steger algorithm [34], determining the quantitative relationship between the center point uncertainty and its surrounding SNR. Based on the stripe extraction algorithm, the stripe satisfies a Gaussian distribution in the normal direction, and its center coordinate is the laser energy peak point. The features of an ideal stripe are uniform brightness in the stripe direction, a Gaussian distribution in the normal direction, and spatial continuity. Consequently, three aspects are considered, namely, the gray distribution in the stripe direction, the gray distribution in the normal direction, and spatial continuity.

a. Stripe direction
Let g_i be the brightness of each stripe pixel, µ_g be the mean value, and σ_g be the variance. Generally, a high µ_g and a small σ_g indicate that the stripe has uniform brightness without fluctuations; in other words, the overall quality is good. Based on experience, an image with µ_g > 0.8 and σ_g < 0.2 is a good stripe image.
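The stripe-direction criterion can be sketched as follows; the function name is ours, brightness is assumed to be 8-bit gray normalized to [0, 1], and σ_g is taken as the standard deviation:

```python
import numpy as np

def stripe_direction_ok(g, mu_min=0.8, sigma_max=0.2):
    """Assumed sketch of the stripe-direction criterion: normalize the
    brightness samples along the stripe and check mu_g and sigma_g against
    the empirical thresholds quoted above."""
    g = np.asarray(g, dtype=float) / 255.0   # assume 8-bit gray values
    return bool(g.mean() > mu_min and g.std() < sigma_max)

good = stripe_direction_ok([230, 240, 235, 238, 232])  # bright and uniform
dark = stripe_direction_ok([230, 60, 235, 50, 232])    # fluctuating, dark
```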

b. Normal direction
Based on the center extraction algorithm, the gray distribution in the normal stripe direction should meet a Gaussian distribution, which induces a small positioning error. We determine the difference between the original and Gaussian-filtered stripes to describe the normal gray distribution; a small difference proves that the stripe conforms to the Gaussian distribution. We set the difference (namely, the image noise) ρ_i = ∑ h_i − h_i ⊗ G_{µ−σ}, and calculate its mean, µ_ρ, and variance, σ_ρ, where h_i is the gray sample in the normal direction and G_{µ−σ} represents the Gaussian convolution mask. A small µ_ρ and σ_ρ manifest few burrs and uneven parts, which indicates good quality. The template size of the selected Gaussian filter should refer to the actual width of the stripe. Generally, we set µ_ρ < 0.024 and σ_ρ < 0.008 as the evaluation criteria in the normal direction.
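The normal-direction noise measure ρ_i can be sketched as below. The kernel size and sigma are illustrative (the paper ties them to the actual stripe width), and we take the absolute difference when summing, which is our assumption:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 1-D Gaussian mask (sizes here are illustrative)."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def normal_noise(profiles):
    """profiles: 1-D gray profiles sampled along the normal, in [0, 1].
    Returns (mu_rho, sigma_rho): mean and spread of the per-profile
    difference between the profile and its Gaussian-smoothed version."""
    g = gaussian_kernel()
    rho = np.asarray([np.abs(h - np.convolve(h, g, mode="same")).sum()
                      for h in profiles])
    return rho.mean(), rho.std()

# A clean Gaussian-shaped profile yields a small mu_rho; burrs raise it.
x = np.linspace(-3, 3, 31)
clean = np.exp(-x**2 / 2)
mu_clean, _ = normal_noise([clean])
noisy = clean + 0.2 * np.sign(np.sin(40 * x))   # add high-frequency burrs
mu_noisy, _ = normal_noise([noisy])
```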
c. Stripe continuity

A good laser stripe should be continuous in space. Correspondingly, when a disconnected segment is detected, image enhancement is implemented. We set D_p as the percentage of the effective stripe in the entire stripe. Let the distance between adjacent stripe points be d_i, the mean distance of d_i over the entire stripe be µ_d, and the variance be σ_d. Similarly, a small µ_d and σ_d indicate that the stripe is complete, with few disconnected parts. Usually, when the adjacent stripe points satisfy µ_d < 5 pixels, the stripe is valid for measurement. Furthermore, this parameter is determined by the real measurement requirement.
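The continuity criterion can be sketched with the adjacent-point distances d_i; the function name and example points are illustrative:

```python
import numpy as np

def continuity_ok(points, mu_max=5.0):
    """Assumed sketch of the continuity criterion: compute the distances
    between adjacent extracted stripe points and check that their mean
    mu_d stays below the 5-pixel threshold quoted above."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # adjacent distances d_i
    return bool(d.mean() < mu_max)

dense = continuity_ok([(0, 0), (1, 0), (2, 1), (3, 1)])      # continuous
broken = continuity_ok([(0, 0), (1, 0), (30, 0), (31, 0)])   # large gap
```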
In addition, µ_i and σ_i, i = g, ρ, d, are normalized in [0, 1]. Regarding the stripe quality, the highest priority is stripe continuity, which ensures measurement reliability. Then, the gray value in the stripe direction and the noise in the normal direction determine the measurement accuracy.

Enhanced Stripe Segments Location

Based on the stripe quality evaluation criteria described in Section 3.2.1, we conduct stripe quality evaluation to locate the stripe segments to be enhanced. To obtain the coordinates of the stripe image center roughly, the stripe is segmented out, and binary operation and skeleton extraction are implemented, thereby obtaining continuous stripe points along the stripe direction. Figure 9 shows the establishment process of the stripe quality regions from ① to ④. Quality evaluation is implemented at each stripe point in accordance with the evaluation criteria; thus, we can obtain the low-quality segments for brightness enhancement and extract the good-quality segments, avoiding additional calculations.

Figure 9. Location of stripe segments that are to be enhanced. (a) Images captured by the outer camera; (b) images captured by the inner camera. ① is the original image, ② shows the segmented and binarized image, ③ presents the stripe skeleton, and ④ indicates the regions established through stripe quality evaluation.

Figure 9 displays the images captured by the outer and inner cameras, where ① is the original image, whose brightness is uneven and contains good- and low-quality segments. ② presents the segmented and binarized image from the original image. ③ shows the stripe skeleton points extracted from ②. In ④, the red points indicate the stripe segments with good quality, and the green dotted rectangle regions are the segments to be enhanced, e.g., segments R1 and R2 in the tread and flange regions and R3 in the rim.
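The segmentation-and-skeleton step above can be sketched with a simplified rough-skeleton routine: after low-threshold segmentation, each row's brightness-weighted centroid gives one skeleton point of a near-vertical stripe. This is our simplification of the dilation + binarization + skeletonization pipeline, not the paper's exact implementation:

```python
import numpy as np

def rough_skeleton(img, thresh=30):
    """Low-threshold segmentation followed by a per-row brightness-weighted
    centroid; returns (row, column) skeleton points along the stripe."""
    img = np.asarray(img, dtype=float)
    img = np.where(img >= thresh, img, 0.0)   # low-threshold segmentation
    pts = []
    for r, row in enumerate(img):
        s = row.sum()
        if s > 0:                             # skip rows with no stripe
            pts.append((r, float((row * np.arange(row.size)).sum() / s)))
    return pts

# Toy 3x5 image with a bright vertical stripe at column 2:
demo = rough_skeleton([[0, 0, 200, 0, 0],
                       [0, 0, 180, 0, 0],
                       [0, 0, 0, 0, 0]])
```

Quality evaluation would then walk these skeleton points and mark low-quality runs as regions to enhance.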


Image Brightness Enhancement
Retinex theory was introduced by Edwin H. Land [32], who found that the color perceived by the human visual system is determined by the reflectance of an object and is largely independent of the illumination of the scene. Based on this analysis, the stripe image brightness depends mainly on the surface reflectivity. Hence, stripe brightness enhancement based on Retinex is appropriate for solving high-reflection problems. The Retinex model holds that an image is composed of two parts, namely, the reflected and incident components. To acquire the reflection component and thereby restore the original appearance of the object, the illuminance component is estimated from the brightness relations between pixels. The image can be expressed as follows:

I(x, y) = R(x, y) · L(x, y), (2)

where L(x, y) represents the incident light, R(x, y) denotes the reflection property of the object, and I(x, y) indicates the image to be enhanced. In fact, the incident light L(x, y) directly determines the dynamic range a pixel can reach in the image, whereas R(x, y) determines the intrinsic nature of the image. The purpose of Retinex theory is to obtain R(x, y) from I(x, y). Equation (2) is transformed into the logarithmic domain:

log(I(x, y)) = log(R(x, y)) + log(L(x, y)). (3)
Land proposed a center/surround Retinex algorithm whose basic idea is that the brightness of each center pixel is estimated by assigning different weights to the pixels around it. Jobson finally determined that the Gaussian surround function achieves good results. Equation (3) can then be expressed as:

log(R(x, y)) = log(I(x, y)) − log(F(x, y) ∗ I(x, y)), (4)

where the Gaussian surround function is F(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), and σ refers to the scale parameter, which directly affects the estimation of the incident component. When σ is small, the Gaussian template is small and the Gaussian function is relatively steep. The incident component estimated after convolution is also relatively rugged, with strong dynamic compression; detail is retained, but brightness fidelity is poor. On the contrary, when σ is large, the Gaussian template is large and the Gaussian function is relatively gentle; the convolved incident component is also relatively smooth and brightness fidelity is good, but the dynamic-compression ability is poor and the enhanced details are not evident. Thus, the Retinex algorithm cannot guarantee both detail and brightness enhancement with a single σ. The multi-scale Retinex (MSR) is a generalization of the single-scale Retinex algorithm, which ensures both detail and brightness enhancement:

r(x, y) = Σ_{j=1}^{K} W_j [log(I(x, y)) − log(F_j(x, y) ∗ I(x, y))], (5)

where r(x, y) = log(R(x, y)), K is the total number of scales σ, W_j signifies the weight of the j-th scale, and Σ_{j=1}^{K} W_j = 1. Under normal circumstances, MSR adopts high, medium, and low scales, that is, K = 3, typically with equal weights W_j = 1/3. Figure 10 shows the stripe images before and after enhancement.
Figure 10a presents the original image with uneven luminance distribution. In particular, the brightness of the middle region (which is used to calculate the tread wear) is dark, which easily affects tread-base point extraction. In Figure 10b, the tread area is recovered, thereby ensuring the accuracy of stripe center extraction.
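The MSR computation can be sketched in a few lines. The numpy-only version below (separable Gaussian convolution with reflect padding) is a simplified stand-in for the paper's implementation; the scale values and equal weights shown are the common MSR defaults, not the paper's exact settings:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution with reflect padding (numpy only)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()                                   # normalized kernel
    pad = np.pad(img, ((0, 0), (radius, radius)), mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    pad = np.pad(out, ((radius, radius), (0, 0)), mode="reflect")
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, pad)

def msr(img, sigmas=(15, 80, 250), weights=None):
    """Multi-scale Retinex: r = sum_j W_j [log I - log(F_j * I)].

    The default scales/weights are the commonly used values; swap in
    the application's own parameters as needed.
    """
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    img = img.astype(float) + 1.0                  # avoid log(0)
    r = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        r += w * (np.log(img) - np.log(gaussian_blur(img, s) + 1e-9))
    return r
```

Note that a uniform image yields r ≈ 0 everywhere, consistent with Equation (5): with no brightness variation, the estimated incident component equals the image itself.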

Stripe Segmentation
The enhanced image often contains an enhanced background and stray stripes; thus, segmenting the effective stripe is necessary. This study does not directly use image segmentation based on a gray threshold, because the image brightness varies greatly, making proper segmentation difficult. Nevertheless, the reflectivity of the stripe area is distinct from that of the background. We therefore adopt reflectivity to generate a valid stripe mask for filtering out the enhanced stripe. Figure 11a,b shows the gray and reflectivity distributions, respectively. In the gray image, the stripe is not considerably different from the background. By contrast, the reflectivity distribution is distinct, and the stripe section can be segmented accurately.
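A reflectivity-based mask of this kind can be sketched as a statistical threshold on the reflectance map (e.g., the MSR output); the mean-plus-k-sigma rule below is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def stripe_mask(reflectance, k=1.0):
    """Binary stripe mask from a reflectance (Retinex output) map.

    Pixels whose reflectance exceeds the global mean by k standard
    deviations are kept as stripe candidates; background pixels, whose
    reflectance stays near the mean, are rejected.
    """
    r = np.asarray(reflectance, dtype=float)
    return r > r.mean() + k * r.std()
```

Because the threshold is computed on reflectance rather than gray values, it stays meaningful even when the enhanced image's brightness varies strongly across the stripe.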

(a) (b) Figure 10. Stripe images before and after brightness enhancement. (a) Stripe image before enhancement; (b) stripe image after enhancement.


Experimental Setup
To verify the effectiveness of the proposed method, we applied it to the online wheel size measurement system installed in Lixian County, Hebei Province, China. Figure 12 shows that four structured-light vision sensors are equipped under the rail, and each wheel is measured by two structured-light sensors mounted inside and outside the rail. Given that this device is installed outdoors and on a main line, complex lighting with various interferences, complicated wheel appearance, and high reflectivity cause the collected images to show high-dynamic characteristics, especially in the wheel tread. Table 1 lists the sensor configuration parameters.

Figure 12. Wheel size measurement system in an outdoor complex environment (the inner and outer sensors, rail, camera, and laser are labeled in the figure).

Comparison of Stripe Image Processing Results
The processing methods include the proposed method (PM), histogram equalization (HE), contrast adjustment (CA), logarithmic transformation (LT), convolution filter (CF), median filtering (MF), and sharpening operation (SO). The abbreviations for each method are utilized to facilitate the description.

Comparison of Stripe Quality
To verify the improvement in stripe image quality, the quality was measured before and after enhancement. We select four data sets, each containing 100 images, where data sets 1, 2, 3, and 4 contain 40%, 50%, 60%, and 70% low-brightness stripes (induced by a highly reflective object surface), respectively. The images were enhanced using the abovementioned methods, and the stripe quality evaluation parameters were calculated. Table 2 shows the statistics of the image quality evaluation parameters for the different data sets and enhancement methods. The stripe quality evaluation parameters corresponding to the proposed method are the best across the different data sets, indicating that the proposed method can effectively improve stripe quality under highly reflective conditions.



We adopt multi-thread flow acceleration and a serialized output program, whose throughput is, in principle, limited only by the hardware. When the hardware overhead is considered, including thread switching and additional calculation, the actual processing speed based on 13-level acceleration increases to 30 Hz for train speeds of up to 189 km/h. Although the proposed method introduces numerous additional calculations, it can still meet the measurement needs of high-speed trains through multi-thread flow acceleration, and it satisfies the online testing requirements of normal freight and passenger trains. Furthermore, the acceleration framework is versatile and suitable for other image processing methods.
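The multi-thread flow with serialized output can be sketched with a standard thread pool: `ThreadPoolExecutor.map` runs the per-frame processing concurrently yet yields results in submission order, so downstream consumers see frames in capture order. This is a minimal stand-in for the paper's acceleration framework; the worker count and the `process` callable are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def process_stream(frames, process, n_workers=13):
    """Process frames concurrently while keeping the output serialized.

    map() preserves input order, so results arrive in capture order
    even though individual frames finish out of order on the pool.
    """
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        yield from pool.map(process, frames)
```

Because ordering is preserved by the pool itself, no explicit reordering buffer is needed in this sketch; a real pipeline would additionally bound the in-flight queue to cap memory use.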

Comparison of Various Image Enhancement Methods
To verify the effectiveness of the proposed method, we compare various enhancement methods. Figures 13 and 14 show the inner and outer stripe image enhancement results, respectively. Each row presents the processing results of one image enhancement method on the four images to be processed. Figures 13 and 14 illustrate that the proposed method yields a greater improvement than the other methods. By contrast, the HE method leads to serious stripe overexposure. The CA method adjusts the gray values of all pixels, but the brightness of the enhanced part remains relatively low. LT, MF, and SO contribute little to brightness enhancement.
Figure 15c,f presents numerous stray stripes extracted. In Figure 15d,e,g,h, the stripe is too incomplete for measurement. In Figure 15b, the entire stripe is extracted correctly, especially in the tread area. Correspondingly, the same result holds in Figure 16; that is, only the proposed method can guarantee the integrity and accuracy of stripe extraction. Given that the center-point coordinates obtained by the other methods differ greatly from the stripe required for actual measurement, further comparison is unnecessary. In this study, only the reconstructed result corresponding to the original image is compared with that of the proposed method. Figure 17 shows the wheel cross contour reconstruction results without image processing. Some parts are missing in the stripe images, including the tread and inner rim, causing the resulting wheel contour to be incomplete and unusable for calculation. On the contrary, Figure 18 illustrates that all stripe images are enhanced properly and the resulting wheel cross contour is complete; thus, the originally missed wheels can still be measured accurately.

Figure 16. Center extraction results of inner stripe images based on different methods. Columns (1)-(4) present the tread segments with different missing percentages. Row (a) shows the raw images. Row (b) exhibits the center extraction results of images processed through the proposed method, and rows (c)-(h) display the center extraction results of images processed through the HE, CA, LT, CF, MF, and SO methods, respectively.

Figure 17. Wheel cross contour reconstruction without image processing. Column (1) presents the raw images. Column (2) shows the center extraction results of (1). Column (3) exhibits the wheel cross contour results of each wheel.

Performance Evaluation of Online Measurement
To evaluate the online measuring performance of the proposed method, we repeated the experiment with a standard wheel, performed comparative experiments under highly reflective conditions, and conducted onsite dynamic measurement. Because we cannot determine the precise size of every passing wheel, we mainly report the statistical leakage and false-positive rates.

Measurement Accuracy
A standard freight wheel, whose size is known, is adopted as the measured object. We implement 100 repeated experiments under various highly reflective conditions (a highly reflective wheel moving back and forth through the measuring area) and collect statistics on three key wheel size parameters: flange thickness, tread wear, and rim thickness. Table 4 presents the measurement accuracy of the wheel size obtained using different methods. Only the results of the PM, HE, CF, and SO methods could be calculated. Furthermore, the measurement accuracy of the proposed method is considerably higher than that of the other methods. Figure 18. Wheel cross contour reconstruction with image processing. Column (1) presents the raw images. Column (2) shows the center extraction results of (1). Column (3) displays the wheel cross contour results of each wheel.


Table 4. Measurement accuracy of wheel size by using different methods (mm). Note: "WP" stands for "without processing", and "------" indicates that the result could not be calculated.

Detection Rate Statistics
To evaluate the robustness and accuracy of the proposed method, the equipment installed on the Shuohuang Railway, Hebei, China, is selected as the experimental subject. We analyze the leakage and false-positive rates from 00:00:00 to 13:00:00 on 12 November 2017. A total of 56 vehicles and 23,436 wheel sets passed through the measurement equipment.
In practical application, the inspection department is most concerned about the reliability and accuracy of the equipment during operation. The leakage rate is used to evaluate the reliability of the equipment and is a prerequisite for accurate measurement. The false-positive rate is utilized to check the accuracy of the equipment, which can effectively improve inspector efficiency. Figure 19 illustrates the leakage and false-positive rates of the different methods in an outdoor complex environment.
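For concreteness, the two statistics can be computed directly from the counts. The definitions below are the straightforward reading (missed wheels and spurious reports over total wheel sets) and may differ in detail from the inspection department's exact bookkeeping:

```python
def detection_rates(total_wheels, missed, false_alarms):
    """Leakage and false-positive rates from raw counts.

    leakage rate       : wheels that passed but were not measured / total
    false-positive rate: spurious measurement reports / total
    """
    return missed / total_wheels, false_alarms / total_wheels
```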
Figure 19a shows that only the SO, CF, and proposed methods can effectively reduce the leakage rate. As the other methods provide limited brightness enhancement of the original image, the leakage rate is not considerably improved by them. Compared with SO and CF, the proposed method reduces the leakage rate to 0.38%. Moreover, for the false-positive rate shown in Figure 19b, the SO and CF methods remain considerably above the 0.14% achieved by the proposed method. Therefore, the proposed method can effectively reduce the leakage and false-positive rates and can meet the requirements of the railway inspection department.


Conclusions
High-speed, dynamic, and precise laser vision sensors are widely used for online measurement in complex environments, for example, in online wheel size measurement systems. However, the stripe images show high-dynamic characteristics of low SNR under onsite highly reflective conditions, which seriously affects the measurement accuracy and reliability. To address this issue, a high-precision, high-reliability wheel size measurement method suitable for highly reflective conditions was proposed in this study. Initially, a stripe image quality evaluation mechanism was established and used to determine which parts of the stripe should be enhanced. MSR theory was adopted to enhance the stripe brightness, and reliable stripe segmentation was achieved through reflectivity, thereby solving the problem that low-brightness stripes cannot be extracted under reflective conditions. Physical experiments analyzed the results before and after enhancement, proving that the method can effectively improve measurement reliability and greatly reduce the leakage and false-positive rates. The method has great practical value for improving the accuracy and reliability of wheel size measurement systems under highly reflective conditions. However, some room remains for improvement: on reviewing the enhanced images, we find that the width of the stripe changes slightly. Therefore, research on a multi-scale stripe center extraction algorithm is necessary, so that the stripe center points can be extracted adequately. Furthermore, for a higher measurement frequency, we plan to utilize FPGA or GPU hardware to improve the stripe processing and center extraction speed, rendering the method suitable for a wider range of applications. For some cases, such as the stray stripe images with extremely high reflectivity and overexposure shown in Figure 7a, enhancement was impossible; decreasing the exposure and rejecting stray stripes was the best available approach. Similarly, no laser was captured in the tread segments in Figure 7b, and our method likewise cannot restore stripes that were never imaged, which is unavoidable.
Author Contributions: X.P. conceived and designed the experiments. X.P. and Z.L. performed the experiments and analyzed the data. X.P. wrote the paper under the supervision of Z.L. and G.Z.