Automatic Gauge Detection via Geometric Fitting for Safety Inspection

For safety reasons, inspection robots have recently been deployed in electrical substations to monitor important devices and instruments in high-voltage environments, reducing the exposure of skilled technicians. The captured images are transmitted to a data station and are usually analyzed manually. Toward automatic analysis, a common task is to detect gauges in the captured images. This paper proposes a gauge detection algorithm based on geometric fitting. We first use Sobel filters to extract edges, which usually contain the shapes of gauges. Then, we use line fitting within the random sample consensus (RANSAC) framework to remove straight lines that do not belong to gauges. Finally, RANSAC ellipse fitting is applied to find the best-fitting ellipse from the remaining edge points. Experimental results on a real-world dataset provided by GuoZi Robotics demonstrate that our algorithm yields more accurate gauge detection results than several existing methods.


I. INTRODUCTION
Transformer substations step down high-voltage electricity from power lines into low-voltage electricity for urban usage. Such substations involve many special devices, whose operating states are monitored by various instruments. Being exposed to complex temperature, humidity, and radiation conditions, these instruments also need manual inspection and maintenance. However, the intense radiation imposes considerable risks to human health. Thus, it is desirable to use robots to inspect instruments such as gauges. To this end, inspection robots are equipped with various sensors such as visible-light cameras, infrared cameras, and/or Lidar. A typical robot inspection routine includes the following steps: 1) stop at a pre-defined location with inspection tasks; 2) adjust the pose of the cameras and capture pictures of the targets; 3) repeat the previous two steps until the robot travels through the pre-defined path. Captured images are sent back to a monitoring center via wireless channels, where they are analyzed, e.g., to detect potential defects of the instruments and to read out the gauge values. To achieve this, the first step is to detect gauges in the captured images. The most straightforward way is to use template matching with SIFT [1] or SURF [2] features. However, this would require a pre-built dataset of the gauges to be detected, and it lacks generalization capability, as a substation may replace its gauges and different substations may use different gauges. An alternative approach is to train a neural network to detect these objects automatically. Unfortunately, training a deep neural network with promising generalization would require a considerable amount of data, which is difficult to collect. We note that the shapes of gauges in captured images are circles or ellipses, as shown in Fig. 1. As the captured images usually do not contain other objects, this observation motivates us to detect gauges by fitting geometric shapes, which is robust to appearance changes across gauges. Based on the analysis above, we propose a gauge detection method based on geometric fitting. Concretely, we first utilize a pair of Sobel filters to detect edges. The lines in the edge maps (corresponding to the pillar holding the gauge) are removed via line fitting with random sample consensus (RANSAC) [3]. Finally, RANSAC ellipse fitting is applied to detect the shapes of gauges. The proposed approach needs no template and is accurate and fast. Experimental results show that the proposed algorithm reliably detects gauges in real images captured in substations, outperforming several existing methods.

II. RELATED WORK
The work most closely related to ours is circle detection. The most common strategy for circle detection is the Circular Hough Transform (CHT) [4]. This strategy first applies an edge detector, such as the Canny edge detector, and then utilizes the edge information to predict the location of the circle. However, it requires a large amount of storage, its computational complexity is high, and its processing speed is low. In addition, its detection accuracy is poor, especially under noisy conditions [5]. It is difficult for CHT to process high-resolution images in real time. To address these problems, many variants have been proposed, for example, the probabilistic Hough transform [6], the randomized Hough transform (RHT) [7], and the fuzzy Hough transform [8]. Lu and Tan [9] proposed an Iterative Randomized Hough Transform (IRHT) and achieved promising results on noisy and complex images. Their algorithm iteratively applies RHT to the region determined by the latest estimate of the circle parameters.
Besides the Hough transform, there are optimization-based methods for circle detection. Ayala-Ramirez et al. [10] presented a circle detector based on a genetic algorithm, but it usually cannot handle imperfect circles. Dasgupta et al. [11] proposed an automatic circle detector using the Bacterial Foraging Optimization Algorithm (BFOA). Both methods must be run repeatedly in order to detect multiple circles. The work in [12] utilized the Clonal Selection Algorithm (CSA) to detect multiple circles by casting detection as a multi-modal optimization problem. Cuevas et al. [13] proposed a fast circle detection method based on Learning Automata (LA), which has lower computational complexity; it searches the probability space rather than exploring the parameter space as commonly done by other optimization techniques.
In contrast, we propose a simple but effective gauge detection method. We first extract the contours of gauges using gradient information, then utilize RANSAC line fitting to remove extraneous straight lines, and finally apply RANSAC ellipse fitting to fit the shapes of the gauges.

III. METHOD
Our proposed gauge detection method consists of three steps: edge extraction, line removal, and ellipse fitting. In the following, we describe each step in detail; typical results of these steps are shown in Fig. 2.

A. EDGE DETECTION
To detect shapes, we first extract edges from the input image with Sobel kernels. Since the captured images may be contaminated by noise, we first apply a K × K median filter to suppress the noise. The kernels used for edge detection are denoted by S_x and S_y, where the subscripts x and y denote the horizontal and vertical directions, respectively. The two kernels are defined as

S_x = [ -1 0 1; -2 0 2; -1 0 1 ],   S_y = S_x^T = [ -1 -2 -1; 0 0 0; 1 2 1 ].
Let X be the input image after median filtering. Filtering with the Sobel kernels generates two high-pass images:

X_hx = S_x * X,   X_hy = S_y * X,

where '*' denotes the convolution operator. The edge map, denoted by E_1, is obtained as the union of the binarized edge maps from X_hx and X_hy:

E_1 = B(X_hx; τ) ∪ B(X_hy; τ),

where '∪' denotes the union of two sets. The binarization operator B with threshold τ marks a pixel as an edge pixel when the magnitude of its response is at least τ:

B(v; τ) = 1 if |v| ≥ τ, and 0 otherwise,

applied element-wise. The threshold for binarization is set as τ = 255/3 in our implementation.
X_hx and X_hy capture vertical and horizontal edge information, respectively. By fusing these two complementary sets of edge information, the edge map E_1 contains the most prominent edges in the image for subsequent shape fitting. Fig. 2 (b) shows the fused edge map.
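The edge-extraction step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the median pre-filter is omitted for brevity, zero padding is assumed at the borders, and `edge_map` is a name chosen here.

```python
import numpy as np

# Sobel kernels: S_X responds to horizontal gradients (vertical edges),
# S_Y to vertical gradients (horizontal edges).
S_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
S_Y = S_X.T

def _filter3(img, kernel):
    """'Same'-size correlation of img with a 3x3 kernel (zero padding)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def edge_map(image, tau=255 / 3):
    """Binary edge map E1 = B(X_hx) ∪ B(X_hy); tau = 255/3 as in the paper.

    The paper additionally applies a K x K median filter before the Sobel
    filtering; that step is omitted in this sketch.
    """
    x = image.astype(float)
    x_hx = _filter3(x, S_X)  # horizontal-gradient response
    x_hy = _filter3(x, S_Y)  # vertical-gradient response
    return (np.abs(x_hx) >= tau) | (np.abs(x_hy) >= tau)
```

Because the two binarized maps are combined with a logical OR, a pixel is kept if either directional response exceeds the threshold, matching the set-union formulation of E_1.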

B. LINE REMOVAL
As shown in Fig. 2 (b), the edge map contains some straight lines that do not belong to the shape of the targeted gauges. These straight lines would interfere with the estimation of the gauge shape and therefore should be removed. We use the RANSAC [3] approach to fit the straight lines. The line model is defined as

ℓ: y = kx + b,

where p = (x, y) denotes an image coordinate, and k and b are the slope and intercept, respectively. Given a pair of points (p_1, p_2) with p_i = (x_i, y_i), the line is uniquely determined by

k = (y_2 − y_1) / (x_2 − x_1),   b = y_1 − k x_1.

Note that the special case x_1 = x_2 is detected and treated separately to avoid division by zero. To fit lines in the image, we first generate K_l line proposals ℓ_k, k = 1, ..., K_l, by randomly drawing K_l point pairs (p_1^k, p_2^k) from the point set P_1 of the extracted edge map E_1. As long as K_l is large enough, some proposals will hit the lines to be detected. To select the fitted ones, we count the number of inlier points for each line proposal and select the ones with the most inliers. A point p is considered an inlier of a line ℓ if p is close to ℓ; concretely, the distance from p to ℓ is

d(p, ℓ) = |kx − y + b| / √(k² + 1).

The proposals with the top 20% largest numbers of inliers are chosen as fitted lines. The inlier points associated with the fitted lines are removed from P_1 to reduce interference in the subsequent ellipse fitting. Denoting by P_2 the point set after removing line inliers, we have

P_2 = P_1 \ ∪_k { p ∈ P_1 : d(p, ℓ_k) < ε },

where the union is over the fitted lines and ε is the inlier distance threshold. The associated edge map E_2 after line removal from E_1 is used for the subsequent geometric fitting. Fig. 2 (d) shows that lines are effectively removed by the RANSAC line fitting approach.
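The line-removal step can be sketched as follows. This is an illustrative implementation under stated assumptions: the paper does not report its values of K_l or the inlier distance threshold, so `n_proposals`, `dist_thresh`, and the function name are choices made here; vertical-line pairs are simply skipped rather than handled with a separate vertical model.

```python
import numpy as np

def ransac_line_removal(points, n_proposals=500, dist_thresh=2.0,
                        keep_frac=0.2, rng=None):
    """Remove edge points lying on fitted straight lines (RANSAC step).

    points: (N, 2) array of (x, y) edge coordinates (the set P_1).
    n_proposals plays the role of K_l; the thresholds are illustrative.
    Returns the reduced point set P_2.
    """
    rng = np.random.default_rng(rng)
    x, y = points[:, 0], points[:, 1]
    proposals = []
    for _ in range(n_proposals):
        i, j = rng.choice(len(points), size=2, replace=False)
        if points[i, 0] == points[j, 0]:
            continue  # x1 == x2: vertical line, skipped in this sketch
        k = (points[j, 1] - points[i, 1]) / (points[j, 0] - points[i, 0])
        b = points[i, 1] - k * points[i, 0]
        # distance from every point to the proposal line y = kx + b
        d = np.abs(k * x - y + b) / np.sqrt(k * k + 1.0)
        inliers = d < dist_thresh
        proposals.append((inliers.sum(), inliers))
    # keep the proposals with the top 20% largest inlier counts
    proposals.sort(key=lambda s: s[0], reverse=True)
    top = proposals[: max(1, int(keep_frac * len(proposals)))]
    remove = np.zeros(len(points), dtype=bool)
    for _, inliers in top:
        remove |= inliers
    return points[~remove]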

C. ELLIPSE FITTING 1) ELLIPSE ESTIMATION
To estimate the latent ellipse from the edge map E_2, we use RANSAC to perform ellipse fitting on the associated point set P_2. We start by establishing a model for estimating the coefficients of the conic equation (CE) [14] from a set of points {p_1, p_2, ..., p_N}, N ≥ 5. The conic equation model is defined as

a x² + b xy + c y² + d x + e y + f = 0,

where a, b, c, d, e, f are the CE coefficients. For simplicity, we normalize f to 1, and Eq. (8) is rewritten as

ã x² + b̃ xy + c̃ y² + d̃ x + ẽ y + 1 = 0.

We define t = [ã, b̃, c̃, d̃, ẽ] as the vector of parameters to be estimated, and collect the terms u(p) = [x ◦ x, x ◦ y, y ◦ y, x, y] of all points row-wise into a data matrix U, where '◦' denotes element-wise multiplication. Based on the least-squares method, the CE parameters are obtained by minimizing the cost function

J(t) = ||U t + 1||²,

where 1 is the all-ones vector. The estimate t* = [a*, b*, c*, d*, e*] is obtained by setting the derivative of the cost function with respect to t to zero:

t* = −(Uᵀ U)⁻¹ Uᵀ 1.

If t* satisfies a* c* > 0 (we take a* positive), Eq. (9) describes a General Ellipse Equation (GEE). Note that an ellipse is tilted when the cross term xy has a non-zero coefficient; however, it is difficult to find the foci of a tilted ellipse directly.
For the sake of simplicity, a tilted ellipse is first transformed into a non-tilted one. The General Non-tilted Ellipse Equation (GNEE) is defined as

a_g x² + c_g y² + d_g x + e_g y + f_g = 0.

We remove the tilt with the rotation substitution

x = x′ cos φ − y′ sin φ,   y = x′ sin φ + y′ cos φ.

According to Eq. (9) and Eq. (13), the angle φ that cancels the cross term is determined as

φ = (1/2) arctan( b̃ / (ã − c̃) ).

The GNEE coefficients a_g, c_g, d_g, e_g in Eq. (12) then follow from substituting the rotation into Eq. (9):

a_g = ã cos²φ + b̃ sin φ cos φ + c̃ sin²φ,
c_g = ã sin²φ − b̃ sin φ cos φ + c̃ cos²φ,
d_g = d̃ cos φ + ẽ sin φ,
e_g = −d̃ sin φ + ẽ cos φ,
f_g = 1.

We define the Standard Non-tilted Ellipse Equation (SNEE) as

(x − x_0)² / a_s² + (y − y_0)² / b_s² = 1,

where (x_0, y_0) is the center of the ellipse and a_s, b_s are the ellipse "radiuses" (semi-axes). According to Eq. (12) and Eq. (16), the SNEE parameters are obtained from the GNEE ones by completing the squares:

x_0 = −d_g / (2 a_g),   y_0 = −e_g / (2 c_g),
a_s = √(K / a_g),   b_s = √(K / c_g),

where K is defined as

K = a_g x_0² + c_g y_0² − f_g.

The foci (F_1, F_2) of the ellipse are determined as follows: (i) if a_s ≥ b_s, F_{1,2} = (x_0 ± f_c, y_0); (ii) if a_s < b_s, F_{1,2} = (x_0, y_0 ± f_c), where

f_c = √( |a_s² − b_s²| ).

Without loss of generality, we assume a_s > b_s. Based on the defining property of an ellipse E, the distances from a point p ∈ E to F_1 and F_2 add up to a constant, which equals the length of the major axis:

||p − F_1|| + ||p − F_2|| = 2 a_s,

where ||·|| denotes the Euclidean distance between two points.
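The least-squares conic fit and the recovery of the SNEE parameters can be sketched as below. This is a simplified illustration: for brevity it handles only the non-tilted case (b ≈ 0) and so skips the rotation step; the function names and tolerances are choices made here, not the paper's.

```python
import numpy as np

def fit_conic(points):
    """Least-squares conic fit with f normalized to 1.

    points: (N, 2) array, N >= 5.  Returns t* = (a, b, c, d, e)
    minimizing ||U t + 1||^2, where rows of U are (x^2, xy, y^2, x, y).
    """
    x, y = points[:, 0], points[:, 1]
    U = np.stack([x * x, x * y, y * y, x, y], axis=1)
    t, *_ = np.linalg.lstsq(U, -np.ones(len(points)), rcond=None)
    return t

def ellipse_params(t, tol=1e-6):
    """Recover center (x0, y0) and semi-axes (a_s, b_s) from the conic
    coefficients, assuming a non-tilted ellipse (cross term b ~ 0).
    Returns None if the coefficients do not describe such an ellipse."""
    a, b, c, d, e = t
    if a * c <= 0 or abs(b) > tol:
        return None
    x0 = -d / (2 * a)                      # complete the square in x
    y0 = -e / (2 * c)                      # complete the square in y
    K = a * x0 ** 2 + c * y0 ** 2 - 1.0    # f was normalized to 1
    if K / a <= 0 or K / c <= 0:
        return None                        # not a real ellipse
    return x0, y0, np.sqrt(K / a), np.sqrt(K / c)
```

Completing the squares turns a(x − x0)² + c(y − y0)² = K into the SNEE form, which is why the semi-axes come out as √(K/a) and √(K/c).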

2) RANSAC ELLIPSE FITTING
To fit an ellipse from the edge map E_2, we first generate K_e ellipse proposals E_k, k = 1, ..., K_e, by randomly drawing K_e point sets (p_1^k, ..., p_N^k), N = 5, from the point set of E_2. Some proposals will hit the ellipse as long as K_e is large enough. To select the fitted ellipse, we count the number of inlier points for each ellipse proposal and select the ellipses with the top-5 largest numbers of inliers for refinement. A point p is considered an inlier of an ellipse E if p is close to E.
According to Eq. (22), the inlier set of ellipse E_k is defined as the points whose sum of distances to the two foci is close to 2 a_s. The ellipse proposals with the top-5 largest numbers of inliers are selected and denoted by {Ē_k}, k = 1, ..., 5; their associated inlier point sets are denoted by {P̄_k}, k = 1, ..., 5, respectively.
Then, the final ellipse is estimated from the pooled inlier points ∪_{k=1}^{5} P̄_k. Fig. 2 (e) shows that the ellipse is reliably detected through our ellipse fitting approach.
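The proposal-and-refine loop above can be sketched as follows. This is a self-contained illustration with assumptions: for brevity the inlier test uses the algebraic conic residual |ã x² + b̃ xy + c̃ y² + d̃ x + ẽ y + 1| as a simpler stand-in for the paper's focal-distance criterion of Eq. (22), and `n_proposals`, `resid_thresh`, and the function name are choices made here.

```python
import numpy as np

def ransac_ellipse(points, n_proposals=300, resid_thresh=0.05,
                   top=5, rng=None):
    """RANSAC ellipse fitting sketch.

    Draws K_e = n_proposals random 5-point subsets, fits a conic with
    f = 1 to each, counts inliers by the algebraic residual (a stand-in
    for the paper's focal-distance test), and refits on the union of
    the inliers of the top-5 proposals.  Returns the conic coefficients
    t = (a, b, c, d, e).
    """
    rng = np.random.default_rng(rng)
    x, y = points[:, 0], points[:, 1]
    U = np.stack([x * x, x * y, y * y, x, y], axis=1)

    def fit(idx):
        t, *_ = np.linalg.lstsq(U[idx], -np.ones(len(idx)), rcond=None)
        return t

    scored = []
    for _ in range(n_proposals):
        idx = rng.choice(len(points), size=5, replace=False)
        t = fit(idx)
        if t[0] * t[2] <= 0:        # a*c > 0 required for an ellipse
            continue
        inliers = np.abs(U @ t + 1.0) < resid_thresh
        scored.append((inliers.sum(), inliers))
    scored.sort(key=lambda s: s[0], reverse=True)
    union = np.zeros(len(points), dtype=bool)
    for _, inliers in scored[:top]:   # pool inliers of the top-5 proposals
        union |= inliers
    return fit(np.flatnonzero(union))  # final estimate from pooled inliers
```

Pooling the inliers of several strong proposals before the final fit makes the estimate less sensitive to any single unlucky 5-point sample.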

IV. EXPERIMENTAL RESULTS
In this section, we first present intermediate results for edge extraction, line removal, and ellipse fitting. Then, we compare the proposed method with various methods based on the Hough transform and template matching.

A. INTERMEDIATE RESULTS
To further demonstrate the effectiveness of the proposed three-step gauge detection method, we present the intermediate results generated by our method for four images, as shown in Fig. 3. The four images represent typical cases in which pressure gauges are captured. It can be observed that the generated edge map contains candidate pixels of the gauge shape. The line removal algorithm works well in removing straight lines, and the ellipse fitting algorithm recovers the most plausible ellipse from the line removal result. The detected ellipse fits the pressure gauge well, even when the gauge appears against a complex background.

B. COMPARISON WITH OTHER METHODS
Our method is evaluated on a real-world dataset provided by GuoZi Robotics, which contains 118 images of pressure gauges. In this subsection, we compare the proposed gauge detection method with several existing methods and their variants: circle detection based on the Hough transform (CHT), and detection methods based on template matching with different cost functions, i.e., sum of squared differences (SQDIFF), correlation (CCORR), correlation coefficient (CCOEFF), and their normalized versions.
The detection performance is evaluated both subjectively and objectively. For objective evaluation, we manually label the gauges in all images of our dataset to serve as ground truth. Then, for the detection results, we calculate Precision, Recall, and F-score. Table 1 presents the comparison results; our method achieves the best results among all compared methods.
For subjective evaluation, we present the comparison results for all the above methods in Figs. 4-7. It can be observed that our method obtains the most accurate detection results for gauges captured from straight-on as well as upward/downward viewpoints. Meanwhile, the CHT method works well for gauges captured from straight-on viewpoints but fails when the gauges are captured from downward viewpoints, as shown in Figs. 4 and 6. This further demonstrates the generality of the proposed method in dealing with different kinds of gauge images.

FIGURE 1. Comparison of detected shapes of gauges with different methods. From left to right: the input gauge images, the CHT method, and our method.

FIGURE 2. Steps of our gauge detection method: a) the input gauge image, b) result of edge detection, c) result of line fitting, d) result of line removal, e) result of ellipse fitting, and f) the superposition of the ellipse fitting result on the original input gauge image.

FIGURE 3. Intermediate results generated by our method. From left to right: the arc map, the line removal result, and the superposition of the detected ellipse on the original input.

TABLE 1. Comparison of gauge detection results.