Direct Least Absolute Deviation Fitting of Ellipses

Scattered data from edge detection usually contain undesired noise which seriously affects the accuracy of ellipse fitting. To alleviate this kind of degradation, a method of direct least absolute deviation ellipse fitting that minimizes the ℓ1 algebraic distance is presented. Unlike conventional ℓ2 estimators, which perform satisfactorily on ideal and Gaussian-noise data but poorly against non-Gaussian outliers, the proposed method yields very competitive results for non-Gaussian noise. In addition, an efficient numerical algorithm based on the split Bregman iteration is developed to solve the resulting ℓ1 optimization problem, by which the computational burden is significantly reduced. Furthermore, two classes of ℓ2 solutions are introduced as the initial guess, and the selection of the algorithm parameters is studied in detail; thus, the method does not suffer from the convergence issues caused by poor initialization, a common drawback of iterative approaches. Numerical experiments reveal that the proposed method is superior to its ℓ2 counterpart and outperforms some state-of-the-art algorithms for both Gaussian and non-Gaussian artifacts.


Introduction
Ellipse fitting is a fundamental tool for many computer vision tasks such as object detection, recognition, camera calibration [1], and 3D reconstruction [2]. A large number of techniques have been developed for this problem and work well under the assumption that the fitting error follows a Gaussian distribution. Among them, Fitzgibbon's direct least-squares fitting [3] has attracted much attention due to its high accuracy, solution uniqueness, speed, and ease of use, and it has been regarded as the landmark of standard ℓ2 fitting. Building on it, scale normalization [4], robust data preprocessing for noise removal [5], optimization of the Sampson distance [6,7], minimization of the algebraic distance subject to suitable quadratic constraints [8,9], modeling the noise as a sum of random amplitude-modulated complex exponentials [10], and iterative orthogonal transformations [11] have been proposed. Although performance has been improved to some extent, the inherent disadvantage of ℓ2-norm-based methods is that the squared term amplifies the influence of outliers and leads to a coarse result. To overcome this shortcoming, the ℓ1 [12] and ℓ0 norms [13] have been employed; theoretically, algorithms based on the ℓp-norm (0 < p < 2) are less sensitive to outliers, but the selection of the iterative initial value remains an open issue. Given the advantages of the ℓ1-norm and the direct algebraic-distance-minimizing strategy, a natural way is to replace the ℓ2 term with ℓ1 in Fitzgibbon's model, which yields our ℓ1 model; we explore its efficacy in the following sections.

Problem Modeling

A general conic can be represented by an implicit second-order polynomial

F(D, α) = Dα = ax² + bxy + cy² + dx + ey + f = 0, (1)

where α = [a b c d e f]^T is the conic parameter vector and D = [x² xy y² x y 1] is the data point. For every point i = 1, 2, . . . , N, where N is the number of data points, |D_i α| is called the algebraic distance between the data point (x_i, y_i) and the conic F(D, α) = 0; the symbol |x| stands for the absolute value of x. The sum of the algebraic distances at each point from 1 to N is

‖Dα‖₁ = Σ_{i=1}^{N} |D_i α|. (2)

In order to force the conic curve to be an ellipse, the parameters of the conic curve must satisfy the necessary condition b² − 4ac < 0; for convenience, we take its subset 4ac − b² = 1 instead [3], which can be rewritten in matrix form as α^T Cα = 1, with

C = [0 0 2 0 0 0; 0 −1 0 0 0 0; 2 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0]. (3)

Thus, the matrix form of the objective ℓ1 minimization model is given as

arg min_α ‖Dα‖₁, s.t. α^T Cα = 1. (4)

By introducing a Lagrange multiplier λ, the constrained problem can be turned into the unconstrained one

arg min_α ‖Dα‖₁ + λ(1 − α^T Cα). (5)
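As a concrete illustration, the data matrix D and the constraint matrix C above take only a few lines to assemble. This is a minimal sketch, assuming NumPy; the helper names (`design_matrix`, `algebraic_distances`) are ours, not from the paper:

```python
import numpy as np

def design_matrix(x, y):
    """Stack D_i = [x^2, xy, y^2, x, y, 1] for all points, as in equation (1)."""
    return np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])

# Constraint matrix C encoding 4ac - b^2 = 1 as alpha^T C alpha = 1 [3].
C = np.zeros((6, 6))
C[0, 2] = C[2, 0] = 2.0
C[1, 1] = -1.0

def algebraic_distances(alpha, x, y):
    """Per-point algebraic distances |D_i alpha|."""
    return np.abs(design_matrix(x, y) @ alpha)
```

For the unit circle x² + y² − 1 = 0, i.e., α = [1, 0, 1, 0, 0, −1]^T, every on-curve point has zero algebraic distance and α^T Cα = 4ac − b² = 4.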

Numerical Aspects.
Thus, our method seeks the optimal α that minimizes the cost function in equation (5). The difficulty in solving equation (5) is that the ℓ1 term is nondifferentiable and inseparable. To overcome this problem, we follow the method of Zhou et al. [14]. The split Bregman method was first introduced in [15] as a very efficient tool for solving general ℓ1-regularized optimization problems. The basic idea is to convert the unconstrained minimization problem in equation (5) into a constrained one by introducing an auxiliary variable s = Dα:

arg min_{α,s} ‖s‖₁ + λ(1 − α^T Cα), s.t. s = Dα. (6)

We can approximate equation (6) by adding a penalty term, as done in [16], to obtain an unconstrained problem. This yields

arg min_{α,s} ‖s‖₁ + λ(1 − α^T Cα) + (μ/2)‖s − Dα‖₂², (7)

where μ is a positive penalization parameter. Finally, we strictly enforce the constraint by applying the split Bregman iteration [17] with a Bregman variable t to obtain two subproblems:

(α^{k+1}, s^{k+1}) = arg min_{α,s} ‖s‖₁ + λ(1 − α^T Cα) + (μ/2)‖s − Dα − t^k‖₂², (8)

t^{k+1} = t^k + Dα^k − s^{k+1}. (9)

In order to further simplify the subproblems, we split equation (8) into two separate ones. We now investigate these subproblems one by one.
(1) The α-related subproblem is

α^{k+1} = arg min_α λ(1 − α^T Cα) + (μ/2)‖s^{k+1} − Dα − t^{k+1}‖₂². (10)

Note that this is a least-squares problem; setting its gradient with respect to α to zero yields the closed-form solution

α^{k+1} = (μD^T D − 2λC)^{−1} μD^T (s^{k+1} − t^{k+1}). (11)

(2) The s-related subproblem is

s^{k+1} = arg min_s ‖s‖₁ + (μ/2)‖s − Dα^k − t^k‖₂². (12)

This type of problem can be computed efficiently using the standard soft-threshold formula presented in [16]:

s^{k+1} = shrink(Dα^k + t^k, 1/μ), where shrink(v, γ) = sign(v) · max(|v| − γ, 0). (13)

The complete procedure for the minimization problem in equation (5) with the split Bregman iteration is summarized in Algorithm 1: after initialization, solve s^{k+1} using (13), t^{k+1} using (9), and α^{k+1} using (11), then set k := k + 1 and repeat; n_max is the maximum number of iterations. To avoid the procedure falling into an invalid solution, the iteration stops as soon as the algebraic distance calculated on the (k+1)-th iteration, ‖Dα^{k+1}‖₁, is greater than the k-th value ‖Dα^k‖₁, or when the iterations reach the maximum.
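The alternation of (13), (9), and (11) is compact enough to sketch directly. The following is a minimal NumPy sketch, not the authors' code: the function names, the initialization s⁰ = Dα⁰ and t⁰ = 0, and the constraint-normalization step are our assumptions.

```python
import numpy as np

def shrink(v, gamma):
    """Soft-threshold operator of equation (13): sign(v) * max(|v| - gamma, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def fit_ellipse_l1(D, C, alpha0, lam=1.0, mu=1.0, n_max=300):
    """Split Bregman sketch for min ||D alpha||_1 s.t. alpha^T C alpha = 1.

    D is the N x 6 design matrix; alpha0 is an l2 initial guess (e.g., DLS or
    HyperLS); lam and mu are the multiplier and penalty parameters.
    """
    alpha = alpha0 / np.sqrt(abs(alpha0 @ C @ alpha0))  # enforce alpha^T C alpha = 1
    s, t = D @ alpha, np.zeros(D.shape[0])
    A = mu * (D.T @ D) - 2.0 * lam * C   # system matrix of the closed form (11)
    best = np.sum(np.abs(D @ alpha))
    for _ in range(n_max):
        s = shrink(D @ alpha + t, 1.0 / mu)                # equation (13)
        t = t + D @ alpha - s                              # equation (9)
        a_new = np.linalg.solve(A, mu * D.T @ (s - t))     # equation (11)
        a_new = a_new / np.sqrt(abs(a_new @ C @ a_new))
        e = np.sum(np.abs(D @ a_new))
        if not np.isfinite(e) or e > best:  # stopping rule of Algorithm 1
            break
        alpha, best = a_new, e
    return alpha
```

Because an iterate is accepted only when it does not increase ‖Dα‖₁, the returned parameter vector is never worse, in ℓ1 energy, than the ℓ2 initializer.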
With the rapid development of variational partial differential equation (PDE) methods in image processing, ℓ1-regularized PDE-based methods have gained increasing attention and have turned out to be a promising approach for various applications such as denoising [18], destriping [19], deblurring [20,21], compressive sensing [22], and object tracking [23]. Although the ℓ1-regularized model typically achieves very competitive performance, its parameters must be selected carefully, or it may fall into a local extremum and produce an undesired outcome. Almost all PDE-based methods suffer from this defect, which makes them difficult to promote in industrial environments. Consider the classical ℓ2 ellipse fitting [3,24], which aims to minimize the following energy functional:

J(α) = ‖Dα‖₂² + λ′(1 − α^T Cα). (15)

Since this is a least-squares problem, setting ∂J/∂α = 0 gives

D^T Dα = λ′Cα, (16)

a generalized eigenvalue problem in the scatter matrix D^T D, whose entries are sums over the data points of the monomials x⁴, x³y, x²y², xy³, y⁴, x³, x²y, xy², y³, x², xy, y², x, y, and 1, where (x, y) denotes the position of a data point. Comparing the ℓ1 solution derived from equation (11) with the ℓ2 solution in equation (16) — without loss of generality, set μ = 1, λ = 1, λ′ = 1 — both involve the same scatter matrix D^T D, so the initial ℓ1 solution stays close to the ℓ2 one, and the ℓ2 estimate is a suitable initial guess. Then, given a conic parameter vector α, the ellipse centroid (X_c, Y_c), major semiaxis R_x, minor semiaxis R_y, and angle θ can be determined by the standard conic-to-geometric conversion [25]. For two ellipses E and F, their relative error can be defined as

err(E, F) = (S_{E∪F} − S_{E∩F}) / S_{E∪F},

where S_{E∩F} denotes the area covered by ellipses E and F simultaneously (the intersection of E and F) and S_{E∪F} the area covered by E and F in total (the union of E and F).
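A minimal sketch of the ℓ2 initializer in equation (16) — solving the generalized eigenproblem D^T Dα = λ′Cα and keeping the eigenvector that satisfies the ellipse constraint — might look as follows. The naming is ours, and the sketch assumes D^T D is nonsingular (i.e., the points carry some noise):

```python
import numpy as np

def dls_initial_guess(x, y):
    """Direct least-squares (l2) ellipse fit in the spirit of [3], used as alpha_0.

    Solves (D^T D) alpha = lambda' C alpha and returns the eigenvector that
    satisfies the ellipse constraint, normalized so that 4ac - b^2 = 1.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                          # scatter matrix of equation (16)
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalized eigenproblem rewritten as an ordinary one for inv(S) C.
    _, V = np.linalg.eig(np.linalg.solve(S, C))
    best = None
    for i in range(6):
        a = np.real(V[:, i])
        q = a @ C @ a                    # = 4ac - b^2
        if q > 1e-12:                    # candidate satisfying the ellipse condition
            a = a / np.sqrt(q)
            r = a @ S @ a                # residual ||D alpha||_2^2
            if best is None or r < best[0]:
                best = (r, a)
    return best[1]
```

The returned vector can be passed directly as α⁰ to the split Bregman iteration.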

Experimental Results
In this section, experiments are conducted to verify the robustness of the proposed method in suppressing the influence of noise for ellipse fitting with synthetic and natural data. Several state-of-the-art ellipse fitting algorithms are implemented for performance comparison: Fitzgibbon's ℓ2-norm-based direct least-squares ellipse fitting (DLS) [3], the hyper-least-squares fitting (HyperLS) [4], guaranteed ellipse fitting with the Sampson distance (SampsonLS) [6], ElliFit [26], and the RANSAC method (RANSAC) [2]. Unless otherwise stated, all parameters are held fixed across experiments. The scale normalization parameter f₀ = 600 is set for HyperLS. The number of samples, the maximum number of iterations, and the threshold for eliminating outliers in the RANSAC method are 5, 1,000, and 2, respectively, as the authors advise. The parameter η = 0 is set for the proposed method. We also studied the average running time of all tested algorithms, as shown in Table 1.

Ellipse Fitting with Synthetic Data.
In this section, experiments on three types of simulated ellipse data, namely, Gaussian noise, Laplace outliers, and ellipse fragments, are conducted to evaluate the performance of the proposed method.
(1) In the first experiment, 256 data points were sampled from an ellipse α = {X_c = 72, Y_c = 64, R_x = 70, R_y = 50, θ = 30°} according to a uniform sampling in angle. The noise is stochastic additive Gaussian noise with mean μ = 0 and standard deviation σ ∈ {0.5, 5, 10, 15, 20, 25}, where the noise levels σ are equivalent to {1%, 10%, 20%, 30%, 40%, 50%} of the length of the ellipse minor semiaxis R_y. For this kind of degradation, 100% of the data points are polluted. Figure 1(a) gives an example of simulated ellipse points corrupted with Gaussian noise with μ = 0, σ = 5. Over 100 trials at each noise level σ, the mean and standard deviation of the relative error and the estimated ellipse parameters are reported in Table 2. For each noise level σ, the HyperLS method obtains the highest accuracy, and the proposed method (α₀ = HyperLS) is second best: it outperforms the DLS method by about 30% while losing only about 4% relative to HyperLS. Figures 2(a) and 2(b) show the mean and standard deviation of the relative error obtained with the tested approaches, respectively. It should be pointed out that although the DLS method exhibits more stability than the proposed method, an obvious performance improvement is gained by our ℓ1 fitting model. (2) In the second experiment, the ellipse parameters are kept unchanged, but the noise is replaced by Laplace noise whose mean μ = 0 and standard deviations σ are the same as in the first experiment; in this case, 64 of the 256 points are randomly selected and polluted by Laplace noise (an example is shown in Figure 1(b) with μ = 0, σ = 25). In order to illustrate the convergence behavior of the proposed algorithm, we plot the functional energy calculated by equation (5) as it varies with the number of iterations, as shown in Figure 3.
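The sampling scheme of the first experiment is easy to reproduce. Below is a sketch (the function name is ours; NumPy assumed) that draws n points uniformly in angle from a rotated ellipse and adds Gaussian noise of standard deviation sigma to both coordinates:

```python
import numpy as np

def sample_ellipse(n, xc, yc, rx, ry, theta_deg, sigma, rng):
    """n points uniform in angle on a rotated ellipse, plus additive Gaussian noise."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    th = np.deg2rad(theta_deg)
    x = xc + rx * np.cos(t) * np.cos(th) - ry * np.sin(t) * np.sin(th)
    y = yc + rx * np.cos(t) * np.sin(th) + ry * np.sin(t) * np.cos(th)
    return x + sigma * rng.standard_normal(n), y + sigma * rng.standard_normal(n)

# First-experiment setup: 256 points, Xc = 72, Yc = 64, Rx = 70, Ry = 50,
# theta = 30 degrees, sigma in {0.5, 5, 10, 15, 20, 25}.
```

For the Laplace-outlier experiment, `rng.laplace` can replace the Gaussian draw on a randomly chosen quarter of the points.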
From the figure, we can see that the proposed method (α₀ = DLS) works well over a wide parameter range; as η increases, convergence becomes gradually slower. Empirically, a satisfactory output is obtained by the proposed method within about 300 iterations. Over 100 trials at each Laplace noise level σ, the mean and standard deviation of the relative error and the estimated ellipse parameters are reported in Table 3. The proposed method achieves the best performance at every noise level except σ = 0.5, even outperforming the robust RANSAC method. Figures 2(c) and 2(d) show the mean and standard deviation of the relative error obtained with the tested approaches, respectively, from which it can be seen that the RANSAC method is very robust to Laplace outliers across noise levels; meanwhile, the proposed and RANSAC methods are far superior to the other algorithms in this case. (3) In the third experiment with ellipse fragments, we can find from Figure 5 that, for the Gaussian ellipse fragment, the SampsonLS and the proposed method (α₀ = HyperLS) obtain the lowest relative error, while for Laplace outliers, the proposed method (α₀ = HyperLS) and the RANSAC method yield a significant improvement in accuracy, as expected. For both Gaussian and Laplace noise, the DLS and ElliFit methods have good stability.
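The overlap-based relative error used in these comparisons can be approximated numerically by rasterizing both ellipses on a grid. This is our own sketch of the metric, with hypothetical names; the paper's exact area computation is not shown, so this is only a numerical stand-in:

```python
import numpy as np

def relative_error(E, F, grid=512, span=200.0):
    """Approximate 1 - S_(E intersect F) / S_(E union F) on a [-span, span]^2 raster.

    E and F are geometric parameter tuples (xc, yc, rx, ry, theta_rad);
    span must be large enough that both ellipses fit inside the window.
    """
    u = np.linspace(-span, span, grid)
    X, Y = np.meshgrid(u, u)

    def inside(p):
        # Rotate into the ellipse frame and test the canonical inequality.
        xc, yc, rx, ry, th = p
        c, s = np.cos(th), np.sin(th)
        xr = (X - xc) * c + (Y - yc) * s
        yr = -(X - xc) * s + (Y - yc) * c
        return (xr / rx) ** 2 + (yr / ry) ** 2 <= 1.0

    a, b = inside(E), inside(F)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union
```

Identical ellipses give a relative error of 0, disjoint ones give 1, matching the 0% ground-truth error reported in the experiments.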

Ellipse Fitting with Natural Data.
Selective laser melting is one of the crucial steps in additive manufacturing (AM). The fusion-area contour plays a very important role in geometrical quality control for AM [27]. In our project, the level-set method [28] is used to extract the profile of the part. Figure 6(a) is a frame of the image captured with a CCD camera in the fusion area. To highlight the visual quality, a subimage is zoomed in Figure 6(b), and its contour extraction results are shown in Figures 6(c) and 6(d). It can easily be seen from Figure 6(c) that several bumps appear in the outline, owing to which the estimated ellipse center X_c and major semiaxis exhibit an obvious offset towards the outliers for all methods except our ℓ1 fitting model, as shown in Figure 7. It can be argued that, compared with the ℓ2-norm-based methods, the proposed ℓ1-norm-based model may lead to slight performance degradation under Gaussian outliers, but it is a more robust estimator for Laplacian and Laplacian-like abrupt degradations.

Conclusions
In conclusion, a novel direct ℓ1-norm-based ellipse fitting model is presented in this study. It works well not only with Gaussian noise but also with Laplacian degradation. Comparison experiments suggest that the proposed method can mitigate a wide variety of artifacts regardless of the noise type, which is a very challenging issue for other state-of-the-art algorithms. Benefiting from the standard ℓ2 and HyperLS estimators for initialization and the split Bregman iteration, its convergence behavior is tested to verify its efficacy.

[Figure legend, natural-data example: ground truth (relative error 0%), DLS 5.46%, HyperLS 5.88%, SampsonLS 5.66%, ElliFit 6.26%, RANSAC 2.64%, proposed (α₀ = DLS) 1.65%, proposed (α₀ = HyperLS) 3.96%.]

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.