A variational algorithm for the detection of line segments

In this paper we propose an algorithm for the detection of edges in images that is based on topological asymptotic analysis. Motivated by the Mumford--Shah functional, we consider a variational functional that penalizes oscillations outside some approximate edge set, which we represent as the union of a finite number of thin strips, the width of which is an order of magnitude smaller than their length. In order to find a near optimal placement of these strips, we compute an asymptotic expansion of the functional with respect to the strip size. This expansion is then employed for defining a (topological) gradient descent like minimization method. In contrast to a recently proposed method by some of the authors, which uses coverings with balls, the use of strips incorporates directional information into the method, which can be exploited to obtain finer edges and can also reduce computation times.


Introduction
Detection of edges, that is, of points in a digital image at which the image intensity changes sharply, is one of the most frequently performed steps in image processing. Ideally, the algorithm employed for solving this problem should provide a set of connected curves that indicate the edges of objects. In a recent paper [9], three of the authors have developed an iterative algorithm for edge detection using the concept of topological asymptotic analysis. The basic idea of this approach is to cover the expected edge set with balls of radius ε > 0 and use the number of balls, multiplied with 2ε, as an estimate for its length. It was shown that under certain conditions the proposed variational model approximates the Mumford--Shah functional [12] in the sense of Γ-limits; therefore, this algorithm may be considered a computational method for the approximate minimization of the Mumford--Shah functional.
A criterion for the optimal positioning of the balls covering the edge set is provided by the leading term of a topological asymptotic expansion of the approximating functional. The (iterative) implementation of the algorithm selects edges successively according to certain rules. In a follow-up paper [7], it was shown that this approach is useful for scale detection of edges.
In this paper, we consider again the problem of edge detection in the framework of topological asymptotic analysis. As opposed to the previous work, however, we now consider covering the edge set with line segments rather than with balls. There are several reasons for this choice: First, edges should be seen as unions of small line segments rather than as accumulations of points. Second, numerically, the resulting algorithm is expected to be faster, as each iteration step detects a whole set of edge points (a segment) and not a single point only. We admit that there is still a conceptual misfit between the continuous formulation and the discrete setting. Theoretically, by our analysis, only edge segments can be detected that keep a certain distance from the previously detected ones (this is reflected in the constant δ₀ below). We believe that this technical problem can in fact be solved, but it appears to require a much more sophisticated analysis of the topological expansion. For practical realizations, it is not a severe restriction, since the distance can, theoretically, be chosen arbitrarily small, in particular below half the pixel size, in which case the union of line segments appears closed. However, compared to [9] the effect is less pronounced, because the covering line segments are relatively larger than the balls.
The novelty of this paper is an algorithm for edge detection based on the asymptotic analysis of topological derivatives with respect to line segments. We note that the topological asymptotic expansion in [9] was derived in the framework of potential theory [13]. In the present case, however, due to the more complex geometry of the inhomogeneities and the impossibility of introducing a uniform scaling, this approach fails. To avoid these difficulties, we build in this paper on a geometry-independent approach of Capdeboscq & Vogelius [3,4]. To outline our method, we first have to introduce some notation.
Let Ω be an open and bounded subset of R². We assume that a given image f : Ω → R is a bounded function that assigns to each point x ∈ Ω some gray value f(x) ∈ R.

Definition 1.1. We denote by

(1) σ_ε(y, τ) := {x ∈ R² : x = y + ρτ, −ε ≤ ρ ≤ ε}

a line segment of length 2ε > 0 centered at y ∈ R² and with unit tangent vector τ ∈ S¹. Moreover, we define a thin strip ω_ε(y, τ) around σ_ε(y, τ). If K ⊂ Ω is a closed subset and 0 < κ < 1, we define the function v_K : Ω → R as the function that equals κ on K and 1 on Ω \ K. In particular, we will apply this notion if K is the union of strips ω_ε(y, τ). Finally, for every v ∈ L²(Ω) we define the quantity m_ε(v); here we set m_ε(v) := +∞ if v ≠ v_K for every finite subset S ⊂ R² × S¹ with K = ⋃_{(y,τ)∈S} ω_ε(y, τ).
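To make the construction concrete, the following Python sketch rasterizes strips and the associated edge indicator on a pixel grid. Since the displayed formulas for ω_ε(y, τ) and v_K are not reproduced above, the strip half-width `half_width` is a free parameter of the sketch (the text only states that the width is an order of magnitude smaller than the length), and v_K is taken to equal κ on K and 1 elsewhere.

```python
import numpy as np

def strip_mask(shape, y, tau, eps, half_width, h=1.0):
    """Boolean mask of a thin strip around sigma_eps(y, tau) on a pixel grid.

    The strip is taken as the set of points within distance `half_width` of
    the segment sigma_eps(y, tau) = {y + rho*tau : |rho| <= eps}.  The exact
    width used in the paper is not reproduced here; `half_width` is an
    assumption of this sketch.
    """
    ny, nx = shape
    xx, yy = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h)
    dx, dy = xx - y[0], yy - y[1]
    t = dx * tau[0] + dy * tau[1]           # tangential coordinate rho
    n = -dx * tau[1] + dy * tau[0]          # normal coordinate
    # distance to the segment: clamp the tangential coordinate to [-eps, eps]
    t_clamped = np.clip(t, -eps, eps)
    dist2 = (t - t_clamped) ** 2 + n ** 2
    return dist2 <= half_width ** 2

def edge_indicator(shape, strips, kappa, eps, half_width):
    """v_K: equal to kappa on the union of the strips, 1 elsewhere."""
    v = np.ones(shape)
    for (y, tau) in strips:
        v[strip_mask(shape, np.asarray(y), np.asarray(tau), eps, half_width)] = kappa
    return v
```

The same masks can later serve to update the diffusivity after each batch of detected strips.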
With this notation at hand, we introduce the functional J_ε, which is to be minimized over all functions u ∈ H¹(Ω) and v ∈ L∞(Ω); here α and β are positive parameters. For the approximate numerical minimization of J_ε we use a topological asymptotic expansion. Defining the reduced functional J accordingly, the largest decrease of J_ε with respect to a strip ω_ε(y, τ) can as well be found by optimizing J with respect to y and τ. Let now K be some subset of Ω; in particular, it can be the union of a finite number of thin strips. Now assume that we cut out a small strip ω_ε(y, τ) of Ω \ K and denote by v_K and v_{K∪ω_ε} := v_{K∪ω_ε(y,τ)} the corresponding edge indicators. Denote moreover by u_K and u_{K∪ω_ε} the minimizers of the functionals J(·, v_K) and J(·, v_{K∪ω_ε}), respectively. Our main result in Section 2 is the derivation of an expansion of the form (6), where M = (1/κ) n ⊗ n + τ ⊗ τ, the vectors n and τ are the unit normal and tangent vectors to the segment σ_ε, and the intersection of K and ω_ε(y, τ) is empty. The expansion (6) is asymptotically valid whenever a strip is removed from the complement of the potential edge set, and it can be used for finding the points of Ω where we can expect the largest decrease of J_ε by removing small strips.

Asymptotic expansion
We assume that Ω ⊂ R² is an open, bounded and smooth domain and f : Ω → R is a given function in L∞(Ω). We define the functional J(u, v) for u ∈ H¹(Ω) and v ∈ L∞(Ω), with a parameter α > 0. Now assume that K is a fixed open subset of Ω and define the function v : Ω → R accordingly, with 0 < κ < 1. Using standard results of the calculus of variations, one can show that the unique minimizer u ∈ H¹(Ω) of J(·, v) is the unique weak solution to the corresponding boundary value problem. In the remainder of this section, we will derive a variation of the functional J with respect to perturbations of the function v obtained by adding a small strip to the set K. More precisely, let us denote by L₀ a compact subset of Ω \ K with dist(L₀, ∂Ω ∪ K) ≥ δ₀. Let y ∈ int(L₀) and τ ∈ S¹. We choose ε > 0 small enough so that the thin strip ω_ε(y, τ), defined as in (2), is contained in L₀. From now on, in order to simplify the notation, we set ω_ε := ω_ε(y, τ) and σ_ε := σ_ε(y, τ). We define the function v_ε : Ω → R by replacing K with K ∪ ω_ε in the definition of v. Similarly as above, we note that the unique minimizer u_ε ∈ H¹(Ω) of J(·, v_ε) is the unique weak solution to the perturbed boundary value problem. Our goal is to establish an expansion for J(u_ε, v_ε) − J(u, v) in powers of ε as ε → 0. We will prove Theorem 2.1, in which M = (1/κ) n ⊗ n + τ ⊗ τ and n, τ are the unit normal and unit tangent vectors to the segment σ_ε.
In order to prove Theorem 2.1 we will follow the approach of Capdeboscq & Vogelius [3,4]. We will need an auxiliary set L̂₀, which is constructed in such a way that it satisfies L₀ ⊂ L̂₀ ⊂ Ω \ K and dist(L̂₀, ∂Ω ∪ K) ≥ δ₀/2, and several auxiliary lemmas.
We now derive energy estimates for u_ε − u.
Proof of Lemma 2.4. Subtracting (14) from (13), setting φ = u_ε − u, inserting it in the resulting equality and applying Schwarz' inequality, we obtain the first estimate; by Schwarz' inequality and the regularity estimates proved in Lemma 2.3 for u, we then derive the first claim of the lemma. To prove (21) we subtract (13) from (14). Let w ∈ H¹(Ω) be the solution to (23). By the Sobolev imbedding theorem, the last inequality implies that ∇w ∈ Lᵖ(L₀) for every p ∈ (1, +∞). Let us now choose q ∈ (1, 2) and p such that 1/p + 1/q = 1. Then, combining the variational formulation of problem (23) with (22) and applying Hölder's inequality, we obtain the desired bound, and since 1/q > 1/2 the claim follows.

We recall here a general, geometry-independent result due to Capdeboscq & Vogelius (cf. [3,4]). Let us denote by V_j := x_j − (1/|∂Ω|) ∫_{∂Ω} x_j dσ, j = 1, 2, the so-called corrector terms, and let V_j^ε, j = 1, 2, be the solutions to the corresponding perturbed problems. Proceeding with arguments similar to those in Lemma 2.4, one easily sees that there exists a constant C = C(κ, δ₀) such that the estimates (27) and (28) hold for j = 1, 2 and for some η > 0. Observe now that, as ε → 0, the functions |ω_ε|⁻¹ 1_{ω_ε}(·) converge in the sense of measures to a Borel measure µ concentrated on L₀, cf. (29). In fact, due to the form of the set ω_ε, it is immediate to see that µ = δ_y, where δ_y denotes the Dirac measure concentrated at y. Using (27) and the analysis in [3], it follows that, possibly up to the extraction of a subsequence, (30) holds, where M_{ij} is a Borel measure with support in L₀. Again, the fact that the set ω_ε shrinks to the point y implies that the measure M_{ij} is simply a multiple of δ_y. Hence, identifying M_{ij} with M_{ij} δ_y, we have (31) for every smooth function φ. Following [3] it is possible to show the following result:

Lemma 2.5. Let ε → 0 be such that (29) and (30) hold. Then (32) holds for any j = 1, 2.
Proof of Lemma 2.5. From the energy estimates we have that, possibly after extracting a subsequence that we do not relabel, the corresponding limit relation holds for all continuous functions φ in Ω. In order to prove (32) we will establish the relation (34) as ε → 0. Once (34) is proved, passing to the limit as ε → 0 we get (35), from which (32) follows. Hence let us prove (34). Notice that, if φ ∈ C₀¹(L₀), then φv_ε = φγ_ε and φv = φγ₀ = φ, and we have (36). Using (35) and (36) and after some algebraic manipulations, we obtain an intermediate identity. Now, by Lemma 2.4, (27), (28), Schwarz' inequality and, finally, the regularity of u and of V_j, we get (34), and the claim follows.
We are now ready to prove our main result.

Proof of Theorem 2.1. Let ω_ε be defined as in the proof of Proposition 1. Then, by Lemma 2.2, we can decompose the difference J(u_ε, v_ε) − J(u, v) into a leading term and remainder terms. Using Schwarz' inequality, the regularity estimates of Lemma 2.3, and Lemma 2.4, we bound the remainder terms. Let us choose a suitable vector function Φ ∈ C₀⁰(Ω; R²). Then, from Lemma 2.5 we obtain the leading term of the expansion. Finally, observing that the remainder term is uniformly bounded with respect to y ∈ L₀, i.e., |o(ε³)| ≤ Cε^{3+η} for some η > 0, where C and η depend only on κ, δ₀, ‖f‖_{H⁻¹(Ω)} and ‖f‖_{L∞(Ω)}, and that ∇u is continuous on the compact set L₀, the claim follows.

Numerical Implementation
We now propose two variants of an algorithm for edge detection based on the expansion derived above, which states (46) with M = (1/κ) n ⊗ n + τ ⊗ τ. For fixed y ∈ Ω, the right-hand side in (46) is minimal for τ equal to the unit vector perpendicular to ∇u(y). As a consequence, we can expect a decrease of the functional J_ε whenever |∇u(y)|² ≥ βκ/(αε²(1−κ)), and the decrease is maximal at points y where the gradient of u is maximal.
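The selection rule can be evaluated pixelwise. The sketch below is an illustration, not the authors' implementation: it computes |∇u|², the optimal tangent field perpendicular to ∇u, and the set of points where inserting a strip is expected to decrease J_ε according to the threshold βκ/(αε²(1−κ)) as reconstructed above.

```python
import numpy as np

def candidate_strips(u, alpha, beta, kappa, eps, h=1.0):
    """Evaluate the strip-insertion criterion at every pixel.

    Returns the squared gradient magnitude g2, the optimal tangent field tau
    (perpendicular to the gradient), and a boolean map of points where
    inserting a strip is expected to decrease J_eps, i.e. where
    g2 >= beta*kappa / (alpha * eps**2 * (1 - kappa)).
    """
    gy, gx = np.gradient(u, h)              # central differences
    g2 = gx ** 2 + gy ** 2
    norm = np.sqrt(g2) + 1e-12              # avoid division by zero
    # unit tangent: rotate the gradient by 90 degrees
    tau = np.stack([-gy / norm, gx / norm], axis=0)
    threshold = beta * kappa / (alpha * eps ** 2 * (1.0 - kappa))
    return g2, tau, g2 >= threshold
```

Both algorithms below repeatedly select the admissible point where g2 is largest.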
(1) Our first algorithm computes a smoothed version u_s of the input image f a priori and then finds, using only the smoothed image u_s, a sequence of edge indicators K^(k), where K^(k+1) is formed from K^(k) by the addition of a strip σ_ε(x^(k), τ^(k)) for which the expected decrease of the approximated functional J_ε from (5) is maximal. Theorem 2.1 indicates that, as long as we only add strips that are away from K^(k), this is the case if x^(k) is chosen such that |∇u(x^(k))| is maximal and τ^(k) = (∇u(x^(k)))^⊥. However, because the asymptotic expansion of Theorem 2.1 is only valid away from K^(k), we have to restrict the search for a maximum of |∇u| to some set L^(k) which is compactly contained in Ω \ K^(k). For instance, one can set L^(k) := Ω \ ((∂Ω ∪ K^(k)) + B_δ) for some δ > 0; a different construction, which we have used in the numerical examples, is described below. The iteration is stopped when the expected decrease of the gradient term in the functional is compensated by the increase of the edge term. This is the case when |∇u(x^(k))|² < βκ/(αε²(1−κ)). This method is summarized in Algorithm 1.

Algorithm 1: Implementation without updates of the smoothed function
In fact, this algorithm is an anisotropic edge detector which takes into account both edge magnitudes and local edge orientations.
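A minimal sketch of the greedy loop of Algorithm 1, under the same assumptions as before: finite differences replace the finite element gradient evaluation, and the exclusion neighborhood is simplified to a disc of radius δ around the selected point instead of the rectangular enlargement described below.

```python
import numpy as np

def detect_strips(u_s, alpha, beta, kappa, eps, delta, h=1.0):
    """Greedy strip placement on a fixed smoothed image u_s (sketch of Algorithm 1).

    Repeatedly picks the admissible pixel with the largest |grad u_s|, records
    a strip there with tangent perpendicular to the gradient, and removes a
    small disc-shaped neighborhood from further consideration (a
    simplification of the rectangular enlargement used in the paper).
    """
    gy, gx = np.gradient(u_s, h)
    g2 = gx ** 2 + gy ** 2
    admissible = np.ones_like(u_s, dtype=bool)
    threshold = beta * kappa / (alpha * eps ** 2 * (1.0 - kappa))
    strips = []
    yy, xx = np.mgrid[0:u_s.shape[0], 0:u_s.shape[1]]
    while True:
        g2_masked = np.where(admissible, g2, -np.inf)
        k = np.unravel_index(np.argmax(g2_masked), g2.shape)
        if g2_masked[k] < threshold:
            break                            # stopping rule of Algorithm 1
        norm = np.sqrt(g2[k])
        tau = (-gy[k] / norm, gx[k] / norm)  # tangent perpendicular to grad u
        strips.append((k, tau))
        # exclude a neighborhood of the new strip's center from the search
        admissible &= (yy - k[0]) ** 2 + (xx - k[1]) ** 2 > (delta / h) ** 2
    return strips
```

On a smooth ramp image this returns strips whose tangents are perpendicular to the gradient, spaced at least δ apart.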
(2) In our second algorithm, we combine updates of the edge indicator with updates of the smoothed function u: after adding a fixed number n_max of strips to the edge set K, we define the new diffusivity v by setting v = κ on K and v = 1 on Ω \ K, and then compute a corresponding smoothed function u, which is then used for selecting the next at most n_max strips of the edge set. The process of alternating between the addition of strips and updates of the smoothed function u is repeated until no more admissible points x ∈ L exist for which |∇u(x)|² ≥ βκ/(αε²(1−κ)). The rationale behind this idea is that the expansion derived above, though still valid, becomes increasingly inaccurate as the number of added strips grows. Therefore, at some point a reinitialization is necessary. Note, however, that the number n_max of strips added in each iteration mainly determines the computation time, as the computation of u is the most costly part of the algorithm. Thus n_max should not be chosen too small. In the numerical implementations, we chose n_max in such a way that approximately 10 computations of u were needed.
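The re-smoothing step can be illustrated as follows. Since the displayed functional J and its boundary value problem are not reproduced in this excerpt, the sketch performs explicit gradient descent on a quadratic energy of the assumed form ½∫ v|∇u|² + ½α∫ (u − f)²; the paper's implementation instead solves the corresponding PDE with finite elements.

```python
import numpy as np

def smooth(f, v, alpha, n_steps=200, dt=0.2):
    """Stand-in smoother for the update of u given the diffusivity v.

    Performs explicit gradient descent on the quadratic energy
        0.5 * int v * |grad u|^2  +  0.5 * alpha * int (u - f)^2,
    an assumed form of J(., v) for this sketch.  The step size dt is chosen
    small enough for stability of the explicit scheme with v <= 1.
    """
    u = f.copy()
    for _ in range(n_steps):
        gy, gx = np.gradient(u)
        # divergence of v * grad u (central differences)
        div = np.gradient(v * gy, axis=0) + np.gradient(v * gx, axis=1)
        u = u - dt * (alpha * (u - f) - div)
    return u
```

In Algorithm 2, a call of this form alternates with the addition of at most n_max strips, each addition setting v to κ on the newly rasterized strip.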
The resulting method is described in Algorithm 2.

Algorithm 2: Implementation with updates of the smoothed function
repeat
    set n = 1;
    repeat
        find x* ∈ L with |∇u(x*)| maximal;
        compute a strip σ_ε of size ε centered at x* with normal ∇u(x*);
        set K ← K ∪ σ_ε;
        compute an enlargement S of σ_ε;
        set L ← L \ S;
        set n ← n + 1;
    until n > n_max or max_{x* ∈ L} |∇u(x*)|² < βκ/(αε²(1−κ));
    update v and recompute the smoothed function u;
until no admissible points x ∈ L with |∇u(x)|² ≥ βκ/(αε²(1−κ)) exist;

Update of the Edge Indicator. For updating the edge indicator set K (and the function v), we have to find maximizers of |∇u|. We restrict the search to the midpoints of the rectangular elements E_k of the finite element discretization and evaluate the gradient on the elements analytically. Assume now that the maximum of |∇u| is attained at y*. For the update of the set L, we define the enlargement S of σ_ε as a rectangle of side-lengths 2ε and 2δ, for some 0 < δ < ε, around the center-line of the strip. That is (see Figure 1),

S := {y : dist(y, σ_ε) ≤ δ and |(y − y*) · ∇u(y*)^⊥| ≤ ε|∇u(y*)|}.

In the numerical experiments below we have chosen ε = 3h and δ = 2h, with h being the pixel distance.
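The rectangular enlargement S can be rasterized as in the following sketch; the coordinate conventions (points as (x, y), pixel spacing h) are assumptions of this illustration.

```python
import numpy as np

def enlargement_mask(shape, y_star, grad, eps, delta, h=1.0):
    """Rectangle S excluded from the search set L after placing a strip.

    S has half-length eps along the strip's tangent (perpendicular to the
    gradient at y_star) and half-width delta along the gradient direction,
    matching the 2*eps x 2*delta rectangle described in the text.
    """
    g = np.asarray(grad, dtype=float)
    n = g / np.linalg.norm(g)              # unit normal (gradient direction)
    t = np.array([-n[1], n[0]])            # unit tangent
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dx = xx * h - y_star[0]
    dy = yy * h - y_star[1]
    along = dx * t[0] + dy * t[1]          # tangential coordinate
    across = dx * n[0] + dy * n[1]         # normal coordinate
    return (np.abs(along) <= eps) & (np.abs(across) <= delta)
```

With the paper's choice ε = 3h and δ = 2h, this removes a 7-by-5 pixel block (in grid units) around each detected strip from further consideration.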
Numerical Experiments. We have tested the two algorithms proposed above using the Parrots image (see Figure 2, upper left). In addition, we provide a comparison with the results of the algorithm proposed in [9], where balls instead of strips are used for covering the edge set. One clearly sees the spurious edges in the first segmentation and the thick edge in the third segmentation, which is partly resolved in the second one.
While, generally speaking, the positions of the detected edges approximately agree for the different algorithms, the actual form of the edges may differ significantly. Thus the algorithm of [9] produces thick edges, which do not appear in the results of the strip-based methods. This difference is due to the fact that the directional information present in the strips allows the exclusion of laterally neighboring pixels from further consideration. In contrast, the balls used for covering the edges in [9] do not allow a similar exclusion of pixels.
Concerning computation times, Algorithm 1 is clearly faster than Algorithm 2, as the main computational effort of both methods lies in the solution of the PDE, which has to be computed several times in the case of Algorithm 2. We do note, however, that Algorithm 1 introduces artifacts in the form of parallel edges, which can be clearly seen in the close-up view of the parrot's head in Figure 3. A natural modification of the model is to let the parameter κ tend to zero. The asymptotic expansion derived in Theorem 2.1, however, relies on κ being bounded away from zero; the constant in the O(ε³) expansion tends to +∞ as κ → 0.
Theorem. Assume that κ(ε) = o(ε²) as ε → 0. Then the functionals J_ε Γ-converge to the limit functional F.

Proof. In order to prove the Γ-convergence result, we have to show both the lim inf-inequality and the lim sup-inequality. The proof of the lim inf-inequality is along the lines of [9]; therefore we only prove the lim sup-inequality.
(2) The set S_u is the union of a finite number of almost disjoint line segments contained in Ω, that is, their pairwise intersections are either empty or contain a single point. The class W(Ω) has been shown to be dense in SBV(Ω) in the sense that, for every u ∈ SBV(Ω), there exists a sequence (u_j)_{j∈N} ⊂ W(Ω) such that ‖u_j − u‖_{L²} → 0 and F(u_j) → F(u) (see [5,6]). Now assume that u ∈ W(Ω) and ε > 0. In the following, we will construct sequences u_ε → u and v_ε → 1 such that J_ε(u_ε, v_ε) → F(u, 1).
Because of the aforementioned density of W(Ω) and the fact that F(u, v) = +∞ for v ≠ 1, this will prove the lim sup-part.