Optimal Design of Sensors via Geometric Criteria

We consider a convex set Ω and look for the optimal convex sensor ω ⊂ Ω of a given measure that minimizes the maximal distance to the points of Ω. This problem can be written as follows:

inf{ d^H(ω, Ω) | |ω| = c and ω ⊂ Ω },

where c ∈ (0, |Ω|) and d^H is the Hausdorff distance.
We show that the parametrization via the support functions allows us to formulate the geometric optimal shape design problem as an analytic one. By proving a judicious equivalence result, the shape optimization problem is approximated by a simpler minimization problem of a quadratic function under linear constraints. We then present some numerical results and qualitative properties of the optimal sensors and exhibit an unexpected symmetry breaking phenomenon.


Introduction
The optimal shape and placement of sensors frequently arises in industrial applications such as urban planning and temperature and pressure control in gas networks. Roughly speaking, a sensor is optimally designed and placed if it ensures the maximum observation of the phenomenon under consideration. Naturally, it is often designed in a goal-oriented manner, constrained by a suitable PDE accounting for the physics of the process. For more examples and details, we refer to the following non-exhaustive list of works [12, 19-21]. Recently, with the emergence of data-driven methods, several authors have considered approaches based on machine learning in order to accelerate the computations; we refer for example to [22, 24, 26, 28].
Here, we address the problem in a purely geometric setting, without involving a specific PDE model. We consider a simple and natural geometric criterion of performance, based on distance functions. But, as we shall see, tackling it requires methods from geometric analysis.
More precisely, we address the issue of designing an optimal sensor inside a given set in such a way as to minimize the maximal distance from the sensor to the points of the surrounding domain. This type of question naturally arises in optimal resource distribution problems, where one wants to minimize the maximal distance between the resources and the species present in the considered environment. In urban planning as well, it makes sense to look for the optimal design and placement of some facility (for example a park or an artificial lake) inside a city while taking into account an equity and accessibility criterion that consists in minimizing the maximal distance from any point in the city to the facility.
These problems can then be formulated in a shape optimization framework. Indeed, given a set Ω ⊂ R² and a mass fraction c ∈ (0, |Ω|), the problem can be mathematically formulated as follows:

inf{ d^H(ω, Ω) | |ω| = c and ω ⊂ Ω },   (1)

where c ∈ (0, |Ω|). By using a homogenization strategy, which consists in uniformly distributing the mass of the sensor over Ω (see Fig. 1), we see that problem (1) does not admit a solution: the infimum vanishes and is asymptotically attained by a sequence of disconnected sets with an increasing number of connected components.

Fig. 1 The homogenization strategy

It is then necessary to impose additional constraints on ω in order to obtain the existence of optimal solutions. In the present paper, we focus on the convexity constraint and assume that both the set Ω and the sensor ω are planar convex bodies. Then, given a convex bounded domain Ω ⊂ R², we are interested in the numerical and theoretical study of the following problem:

inf{ d^H(ω, Ω) | ω is convex, |ω| = c and ω ⊂ Ω },   (2)

where c ∈ (0, |Ω|). We note that the convexity constraint is classically considered in shape optimization and sometimes appears as a natural simplifying hypothesis in physical problems. For example, one of the first problems in the calculus of variations is a least-resistance problem posed by Newton in his Principia.
The problem is to consider a convex body that moves forward in a homogeneous medium composed of point particles. The medium is extremely rare, so that one may assume there is no mutual interaction. The particles are assumed to be initially at rest. When colliding with the body, each particle is reflected elastically. As a result of the collisions, a drag force appears that acts on the body and slows down its motion. Take a coordinate system in R³ and assume that the body is travelling in the positive z-direction. Let the upper part of the convex body's surface be the graph of a concave function u : Ω −→ R, where Ω is the projection of the body on the (x, y)-plane. By elementary physics arguments, Newton obtained the resistance functional

J(u) = ∫_Ω dx / (1 + |∇u(x)|²)

and introduced the following natural problem

inf{ J(u) | u : Ω −→ R is concave },

which consists in looking for the shape of the convex body that minimizes the resistance. We refer to [5] for a detailed discussion of the model and the history of this problem.
Let us now state the main results of the present paper. A first important theorem is the following:

Theorem 1 The function

f : c ∈ [0, |Ω|] −→ inf{ d^H(ω, Ω) | ω is convex, |ω| = c and ω ⊂ Ω }

is continuous and strictly decreasing. Moreover, for every c ∈ [0, |Ω|], problem (2) admits solutions and is equivalent to the shape optimization problems (I)-(IV) stated in Proposition 6 below, in the sense that any solution of one of the problems also solves the other ones.
Let us give a few comments on this theorem:
• The results hold in higher dimensions. Nevertheless, we have made the choice to state them in the planar framework for readability's sake and for coherence with the qualitative and numerical results obtained in the planar case, see Sects. 4 and 5.
• On the other hand, in addition to its importance from a theoretical point of view (as we shall see in Sect. 4), the equivalence result above allows to drastically simplify the numerical resolution of problem (2): indeed, as explained in Sect. 5.1, the equivalent problem (III) can be reformulated via the support functions h and h_Ω of the sets ω and Ω in the following analytical form:

min{ (1/2) ∫₀^{2π} (h² − h'²) dθ | h ∈ H¹_per(0, 2π), h'' + h ≥ 0 and h_Ω − f(c) ≤ h ≤ h_Ω },

where H¹_per(0, 2π) is the set of H¹ functions that are 2π-periodic and c ∈ [0, |Ω|]. This analytical problem is then approximated by a finite-dimensional one, involving the truncated Fourier series of the support function h as in [2, 3], which yields a simple minimization problem of a quadratic function under linear constraints.
For more details on the support function parametrization, we refer to Sect. 2.1 and for the complete description of the numerical scheme used in the paper, we refer to Sect. 5.
One could expect that solutions of (2) inherit the symmetries of the set Ω. We show that this is not always the case and highlight a symmetry breaking phenomenon appearing when Ω is a square, see Fig. 2. Our result can be stated as follows:

Theorem 2 Let Ω be the unit square. There exists a threshold c₀ ∈ (0, 1) such that:
• If c ∈ [c₀, 1], then the solution of (2) is given by the square of area c with the same axes of symmetry as Ω.
• If c ∈ [0, c₀), then the solution of (2) is given by a suitable rectangle.
The paper is organized as follows: in Sect. 2, we present the notations used and recall some classical results on the support function, a classical parametrization of convex sets that allows to formulate the considered geometric problems as purely analytic ones. In Sect. 3, we present the proof of Theorem 1. Section 4 is devoted to the proof of Theorem 2 and some qualitative properties of intrinsic interest: namely, we prove that when the set Ω is a polygon, the optimal sensor is also a polygon. At last, in Sect. 5, we present a numerical framework for solving the problem and show that, thanks to the equivalence result of Theorem 1, problem (2) can be numerically addressed by a simple minimization of a quadratic function under some linear constraints.

Definition of the Support Function and Classical Results
If Ω is convex (not necessarily containing the origin), its support function is defined as follows:

h_Ω : x ∈ R² −→ sup{ ⟨x, y⟩ | y ∈ Ω }.
Since the function h_Ω satisfies the scaling property h_Ω(t x) = t h_Ω(x) for t > 0, it is characterized by its values on the unit sphere S¹ or, equivalently, on the interval [0, 2π]. We then adopt the following definition:

Definition 3
The support function of a planar bounded convex set Ω is defined on [0, 2π] as follows (Fig. 3):

h_Ω : θ ∈ [0, 2π] −→ sup{ x cos θ + y sin θ | (x, y) ∈ Ω }.

The support function has some interesting properties:
• It provides a simple criterion for the convexity of a set: h is the support function of a convex body if and only if h'' + h ≥ 0 in the sense of distributions.
• It allows to parametrize the inclusion in a simple way. Indeed, if Ω₁ and Ω₂ are two convex sets, we have

Ω₁ ⊂ Ω₂ ⟺ h_Ω₁ ≤ h_Ω₂.

• It also provides elegant formulas for some geometric quantities. For example, the perimeter and the area of a convex body Ω are respectively given by

P(Ω) = ∫₀^{2π} h_Ω(θ) dθ  and  |Ω| = (1/2) ∫₀^{2π} ( h_Ω(θ)² − h_Ω'(θ)² ) dθ,

and the Hausdorff distance between two convex bodies Ω₁ and Ω₂ is given by

d^H(Ω₁, Ω₂) = ‖h_Ω₁ − h_Ω₂‖_∞ = max_{θ∈[0,2π]} |h_Ω₁(θ) − h_Ω₂(θ)|,

see [23, Lemma 1.8.14].
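Concretely, the definition above is straightforward to evaluate numerically. The following Python sketch (our own illustration; the paper's own computations use Matlab) computes the support function of a polygon as a maximum over its vertices and checks Cauchy's perimeter formula P(Ω) = ∫₀^{2π} h_Ω(θ) dθ on the unit square.

```python
import numpy as np

def support_function(vertices, thetas):
    """h_K(theta) = max over the vertices v of <(cos theta, sin theta), v>,
    the support function of the convex hull of `vertices`."""
    dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
    return (dirs @ vertices.T).max(axis=1)

# Unit square [0,1]^2 and a regular grid of angles
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
thetas = np.linspace(0.0, 2 * np.pi, 20001)
h = support_function(square, thetas)

# Cauchy's formula: the perimeter of a convex body equals the integral of its
# support function over [0, 2*pi] (periodic rectangle rule below).
perimeter = h[:-1].mean() * 2 * np.pi
print(perimeter)  # ~ 4.0, the perimeter of the unit square
```

The same routine also makes the inclusion criterion easy to check in practice: for a smaller square contained in Ω, its support function stays below h at every angle.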

Notations
• K_c corresponds to the class of planar, closed, bounded and convex subsets of Ω of area c, where c ∈ [0, |Ω|].
• If X and Y are two subsets of R^n, the Hausdorff distance between the sets X and Y is defined as follows:

d^H(X, Y) := max{ sup_{a∈X} d(a, Y), sup_{b∈Y} d(b, X) },

where d(a, B) := inf_{b∈B} ‖a − b‖ quantifies the distance from the point a to the set B. Note that when ω ⊂ Ω, as is the case in the problems considered in the present paper, the Hausdorff distance is given by

d^H(ω, Ω) = sup_{y∈Ω} d(y, ω).

• If Ω is a convex set, then h_Ω corresponds to its support function as defined in Sect. 2.1.
• Given a convex set Ω, we denote by Ω_{−t} its inner parallel set at distance t ≥ 0, which is defined by

Ω_{−t} := { x ∈ Ω | d(x, ∂Ω) ≥ t }.

• H¹_per(0, 2π) is the set of H¹ functions that are 2π-periodic.
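As a quick illustration of the support-function formula for nested convex bodies, d^H(ω, Ω) = max_θ (h_Ω(θ) − h_ω(θ)), the following sketch (our own example) evaluates this gap on a grid of angles for Ω the unit square and ω a concentric smaller square; the maximal gap matches the distance √2·a from a corner of Ω to ω.

```python
import numpy as np

def support_function(vertices, thetas):
    """Support function of the convex hull of `vertices` at angles `thetas`."""
    dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
    return (dirs @ vertices.T).max(axis=1)

# Omega = unit square, omega = concentric square [a, 1-a]^2, so omega ⊂ Omega.
a = 0.2
Omega = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
omega = np.array([[a, a], [1 - a, a], [1 - a, 1 - a], [a, 1 - a]])

thetas = np.linspace(0.0, 2 * np.pi, 20001)
gap = support_function(Omega, thetas) - support_function(omega, thetas)

# For nested convex bodies the Hausdorff distance is the largest gap between
# the two support functions; here the farthest point of Omega from omega is a
# corner of the square, at distance sqrt(2)*a from the inner square.
print(gap.max())  # ~ sqrt(2)*0.2 ~ 0.2828
```

Note that the gap is nonnegative at every angle, which is exactly the support-function form of the inclusion ω ⊂ Ω.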

Proof of Theorem 1
For the convenience of the reader, we decompose the proof into three parts: first, we prove the existence of solutions of problem (2). Then, we prove the monotonicity and continuity of the function

f : c ∈ [0, |Ω|] −→ inf{ d^H(ω, Ω) | ω ∈ K_c }.

At last, we present the proof of the equivalence between the four shape optimization problems stated in Theorem 1.

Proposition 4 Problem (2) admits at least one solution.
Proof First, we note that the functional ω −→ d^H(ω, Ω) is 1-Lipschitz (thus continuous) with respect to the Hausdorff distance. Indeed, for all convex sets ω₁ and ω₂, we have

|d^H(ω₁, Ω) − d^H(ω₂, Ω)| ≤ d^H(ω₁, ω₂).

Let (ω_n) be a minimizing sequence for problem (2), i.e., such that ω_n ∈ K_c and d^H(ω_n, Ω) −→ f(c). Since all the convex sets ω_n are included in the bounded set Ω, we have, by Blaschke's selection theorem (see [23, Th. 1.8.7]), that there exists a convex set ω* ⊂ Ω such that (ω_n) converges, up to a subsequence (that we also denote by (ω_n)), to ω* with respect to the Hausdorff distance. By the continuity of the volume functional with respect to the Hausdorff distance, we have |ω*| = lim |ω_n| = c, which means that ω* ∈ K_c. Moreover, by the continuity of ω −→ d^H(ω, Ω) with respect to the Hausdorff distance, we have that d^H(ω*, Ω) = lim d^H(ω_n, Ω) = f(c). This shows that ω* is a solution of problem (2).

Monotonicity and Continuity
• We first show an inferior limit inequality. Let (c_n)_{n≥1} be a sequence converging to c₀ along which the limit inferior of f is attained, and let ω_{c_n} be a solution of problem (2) for the mass fraction c_n. Since all the convex sets ω_{c_n} are included in the bounded set Ω, we have, by Blaschke's selection theorem and the continuity of the functional ω −→ d^H(ω, Ω) and of the volume, the existence of a set ω* ∈ K_{c₀} as a limit of a subsequence, still denoted by (ω_{c_n}), with respect to the Hausdorff distance.
We then have

lim inf f(c_n) = lim d^H(ω_{c_n}, Ω) = d^H(ω*, Ω) ≥ f(c₀).

• It remains to prove a superior limit inequality. Let (c_n)_{n≥1} be a sequence converging to c₀ along which the limit superior of f is attained. Let us now consider a family of convex sets (Q_c), where the parameters τ_c ∈ R₊ and t_c ∈ [0, 1] are chosen in such a way that |Q_c| = c for every c. The map c ∈ [0, |Ω|] −→ Q_c is continuous with respect to the Hausdorff distance and Q_{c₀} is a solution of problem (2) for the value c₀. Using the definition of f, we have f(c_n) ≤ d^H(Q_{c_n}, Ω). Passing to the limit, we get lim sup f(c_n) ≤ d^H(Q_{c₀}, Ω) = f(c₀). As a consequence, we finally get the continuity of f at c₀.

The Equivalence Between the Problems
We then obtain the following important proposition that provides the equivalence between four different shape optimization problems.

Proposition 6
Let c ∈ [0, |Ω|]. The following shape optimization problems are equivalent, in the sense that any solution to one of the problems also solves the other ones:

(I) min{ d^H(ω, Ω) | ω is convex, ω ⊂ Ω and |ω| = c },
(II) min{ d^H(ω, Ω) | ω is convex, ω ⊂ Ω and |ω| ≤ c },
(III) min{ |ω| | ω is convex, ω ⊂ Ω and d^H(ω, Ω) ≤ f(c) },
(IV) min{ |ω| | ω is convex, ω ⊂ Ω and d^H(ω, Ω) = f(c) }.
Proof Let us prove the equivalence between the four problems.
• We first show that any solution of (I) solves (II): let ω_c be a solution to (I). Then, for every convex ω ⊂ Ω such that |ω| ≤ c, one has, by the monotonicity of f,

d^H(ω, Ω) ≥ f(|ω|) ≥ f(c) = d^H(ω_c, Ω),

so ω_c solves (II).
• Reciprocally, let now ω_c be a solution of (II): we want to show that ω_c must be of volume c. We notice that

f(c) ≥ d^H(ω_c, Ω) ≥ f(|ω_c|) ≥ f(c),

where the first inequality follows as problem (II) allows more candidates than the definition of f, and the last inequality uses again the monotonicity of f (since |ω_c| ≤ c). Therefore f(c) = f(|ω_c|), and since f is continuous and strictly decreasing, we obtain |ω_c| = c, which implies that ω_c solves (I).
We proved the equivalence between problems (I) and (II); the equivalence between problems (III) and (IV) is shown by similar arguments. It remains to prove the equivalence between (I) and (III).
• Let ω_c be a solution of (I), which means that d^H(ω_c, Ω) = f(c). Then, for every convex ω ⊂ Ω such that d^H(ω, Ω) ≤ f(c), we have f(|ω|) ≤ d^H(ω, Ω) ≤ f(c). Thus, since f is decreasing, we get c = |ω_c| ≤ |ω|, which means that ω_c solves (III).
• Let now ω_c be a solution of (III). We have f(|ω_c|) ≤ d^H(ω_c, Ω) ≤ f(c). Thus, by the monotonicity of f, we get |ω_c| ≥ c. On the other hand, since ω_c solves (III) and any solution of (I) is admissible for (III) with volume c, we have |ω_c| ≤ c, which finally gives |ω_c| = c and shows that ω_c solves (I).

Remark 7
For clarity purposes, the results of Sect. 3 have been stated and proved in the planar case. Nevertheless, it is not difficult to see that all the results hold in higher dimensions n ≥ 2. Indeed, one just has to consider support functions defined on the unit sphere S n−1 (see for example [23, Section 1.7.1]) and reproduce the exact same steps.

Proposition 8
Let ω be a solution of problem (2). Then, there exist (at least) two different pairs of points (x₁, y₁), (x₂, y₂) ∈ ∂ω × ∂Ω such that

‖x₁ − y₁‖ = ‖x₂ − y₂‖ = d^H(ω, Ω).

Proof Let us argue by contradiction. We assume that there exists only one pair (x₁, y₁) ∈ ∂ω × ∂Ω such that ‖x₁ − y₁‖ = d^H(ω, Ω). Let x ∈ ∂ω be a point different from x₁. By cutting an infinitesimal portion of the convex set ω near x (see Fig. 4), we obtain a set ω_ε such that d^H(ω, Ω) = d^H(ω_ε, Ω) (because we assumed that the Hausdorff distance is attained at only one pair of points) and |ω| > |ω_ε| for sufficiently small values of ε. Thus, ω is not a solution of the third problem of Proposition 6, which is absurd since ω is assumed to be a solution of problem (2), which is equivalent to the latter by Proposition 6.

Proposition 9 If the set Ω is a polygon with N sides, then any solution of problem (2) is also a polygon with at most N sides.
Proof Let us denote by v₁, ..., v_N, with N ≥ 3, the vertices of the polygon Ω and consider a solution ω of problem (3).
The distance function x −→ min_{y∈ω} ‖x − y‖ is convex; thus, it is well known that its maximal value on the convex polygon Ω is attained at some vertices, which we denote by (v_k)_{k∈⟦1,K⟧}, where K ≤ N. Note that since ω is a solution of problem (3), we have K ≥ 2 by Proposition 8. Moreover, for every k ∈ ⟦1, K⟧ there exists a unique u_k ∈ ∂ω such that ‖v_k − u_k‖ = d^H(ω, Ω), namely the projection of the vertex v_k onto the convex sensor ω. Let us consider two successive projection points u₁ and u₂ and assume without loss of generality that their coordinates are given by (0, 0) and (x₀, 0), with x₀ > 0, see Fig. 5.
We consider the altitude h ≥ 0 defined as follows:

h := sup{ s | ∃ x ∈ [0, x₀] such that (x, s) ∈ ω }.

If h > 0, one can cut off the part of ω lying above the segment joining u₁ and u₂ without modifying the Hausdorff distance to Ω, thereby obtaining a convex set of strictly smaller area; ω would then fail to solve the volume minimization problem that is equivalent to problem (2) by Proposition 6. This provides a contradiction, since ω is assumed to be a solution of problem (2). We then have h = 0, which means that the segment with extremities u₁ and u₂ is included in the boundary of the optimal set ω. By repeating the same argument with the successive pairs of points u_k and u_{k+1} (with the convention u_{K+1} = u₁), we prove that the boundary of the optimal set ω is exactly given by the union of the segments with extremities u_k and u_{k+1}, which means that ω is a polygon (with K sides).

Application to the Square: Symmetry Breaking
In this section, we combine the results of Propositions 6 and 9 to solve problem (3) when Ω is a square. This leads to observing the non-uniqueness of the optimal shape and a symmetry breaking phenomenon. The phenomenon might seem surprising, as one could expect the optimal sensor to inherit all the symmetries of Ω.
Before presenting the proof, we exhibit the solutions for different values of c when Ω is a square.

Remark 10 As one observes in Fig. 6, for values of c close to |Ω| = 1, the optimal sensor is a square with the same symmetries as Ω, but for small values of c, the optimal sensor is no longer a square but a certain rectangle. One should then note that the optimal sensor is not necessarily unique (as one can rotate the rectangle by an angle π/2) and that it does not necessarily inherit all the symmetries of the shape Ω (as it is not symmetric with respect to the diagonals of Ω).
Let us now present the details of the proof. By Propositions 6 and 9, problem (3) is equivalent to the following one:

min{ |ω| | ω is convex, ω ⊂ Ω and d^H(ω, Ω) ≤ δ },   (4)

with δ ∈ [0, 1/2]. In the following proposition, we completely solve problem (4).
Proposition 11 Let Ω = [0, 1]² be the unit square and δ ∈ [0, 1/2). The solution of problem (4) is given by
• the square whose four vertices lie on the arcs ∂B_k ∩ Ω at the angle θ_δ = π/4, when δ ≤ 1/(2√2), and
• one of the two rectangles whose vertices lie on these arcs at an angle θ_δ ∈ (0, π/4) determined below, when δ > 1/(2√2).

Proof We denote by A₁(0, 0), A₂(1, 0), A₃(1, 1) and A₄(0, 1) the vertices of the square and by B₁, B₂, B₃ and B₄ the balls of radius δ with centers A₁, A₂, A₃ and A₄, respectively, see Fig. 7. Let ω be a solution of problem (4) (it is then also a solution of problem (3) by Proposition 6). By the result of Proposition 9, since Ω is a square (in particular a polygon), the optimal shape ω is also a polygon with at most four vertices. Since d^H(ω, Ω) = δ, the polygon ω has four different vertices, each of them contained in a set B_k ∩ Ω, with k ∈ ⟦1, 4⟧.
In fact, since the optimal set ω minimizes the area for a given Hausdorff distance, we deduce that all its vertices are located on the arcs of circles ∂B_k ∩ Ω given by the intersection of the boundaries of the balls B_k with the square Ω. Indeed, if it were not the case, one could easily construct a convex polygon strictly included in ω (thus, with strictly smaller volume) whose Hausdorff distance to the square is equal to δ, see Fig. 7.

Now that we know that each vertex of the optimal sensor ω is located on a (different) arc of circle ∂B_k ∩ Ω, with k ∈ ⟦1, 4⟧, let us denote these vertices by M_k = M_k(θ_k), where θ₁, θ₂, θ₃, θ₄ ∈ [0, π/2], see Fig. 8. The area of the polygon ω can be expressed via the coordinates of its vertices by the shoelace formula

|ω| = (1/2) | Σ_{k=1}^{4} (x_k y_{k+1} − x_{k+1} y_k) |,

where (x_k, y_k) are the coordinates of the points M_k, with the convention (x₅, y₅) := (x₁, y₁). By straightforward computations, we obtain an expression for |ω| as a sum of terms involving cos θ_k sin θ_{k+1} + cos θ_{k+1} sin θ_k, with the convention θ₅ = θ₁. We then perform a judicious factorization and apply the inequality a + b ≥ 2√(ab), in which equality holds if and only if a = b, to bound the area from below; equality holds if and only if condition (5) is satisfied. Writing the resulting bound in a suitable form and using the inequality a + b ≥ 2√(ab) once more, we obtain the lower bound (6), with equality if and only if condition (7) holds. By combining the equality conditions (5) and (7), we show that inequality (6) is an equality if and only if θ₁ = θ₃, θ₂ = θ₄ and a remaining condition holds, which is equivalent to θ₁ = θ₂ by strict monotonicity of the function involved. We then conclude that equality in (6) holds if and only if θ₁ = θ₂ = θ₃ = θ₄, which means that the optimal sensor is a rectangle, corresponding to the value θ_δ that minimizes the function f_δ giving the area of the rectangle associated with the angle θ. Since f_δ(π/2 − θ) = f_δ(θ) for every θ ∈ [0, π/4], we deduce by symmetry that it suffices to study the function f_δ on the interval [0, π/4]; its derivative has the same sign as the function g_δ : θ −→ −1/2 + δ cos θ + δ sin θ, which is continuous and strictly increasing on [0, π/4].
The sign of g_δ on [0, π/4] (and thus the variation of f_δ, see Fig. 9) then depends on the value of δ ∈ [0, 1/2). Indeed:
• If δ ≤ 1/(2√2) (i.e., g_δ(π/4) ≤ 0), then g_δ < 0 on (0, π/4), which means that f_δ is strictly decreasing on [0, π/4] and thus attains its minimal value at θ_δ = π/4.
• If δ > 1/(2√2) (i.e., g_δ(π/4) > 0), straightforward computations show that the function f_δ is strictly decreasing on [0, θ_δ] and increasing on [θ_δ, π/4], where θ_δ ∈ (0, π/4) is the unique zero of g_δ, given by θ_δ = arcsin(1/(2√2 δ)) − π/4. Thus, f_δ attains its minimal value at θ_δ.

Numerical Simulations
In this section, we present the numerical scheme adopted to solve the problems under consideration in the present paper. In particular, we focus on the following (equivalent) problems:

inf{ d^H(ω, Ω) | ω is convex, |ω| = c and ω ⊂ Ω },   (8)

and

inf{ |ω| | ω is convex, d^H(ω, Ω) ≤ d and ω ⊂ Ω },   (9)

where c, d ≥ 0. As we shall see, even though the problems are equivalent (see Proposition 6), problem (9) is much easier to solve numerically, as it is approximated by a simple problem of minimizing a quadratic function under linear constraints.

Parametrization of the Functionals
In Sect. 2.1 we recalled that if both Ω and ω are convex, we have the following formula for the Hausdorff distance between ω and Ω:

d^H(ω, Ω) = ‖h_Ω − h_ω‖_∞ = max_{θ∈[0,2π]} |h_Ω(θ) − h_ω(θ)|,

where h_Ω and h_ω respectively correspond to the support functions of the convex sets Ω and ω.
On the other hand, the inclusion constraint ω ⊂ Ω can be expressed as h_ω ≤ h_Ω on [0, 2π], and the convexity of the sensor ω can also be expressed analytically as

h_ω'' + h_ω ≥ 0

in the sense of distributions. We refer to [23] for more details and results on convexity.
Therefore, the use of the support functions allows to transform the purely geometric problems (8) and (9) into the following analytical ones:

min{ ‖h_Ω − h‖_∞ | h ∈ H¹_per(0, 2π), h'' + h ≥ 0, h ≤ h_Ω and A(h) = c },   (10)

and

min{ A(h) | h ∈ H¹_per(0, 2π), h'' + h ≥ 0, h ≤ h_Ω and ‖h_Ω − h‖_∞ ≤ d },   (11)

where H¹_per(0, 2π) is the set of H¹ functions that are 2π-periodic and A(h) := (1/2) ∫₀^{2π} (h² − h'²) dθ is the area of the convex body with support function h. Since the constraints ‖h_Ω − h‖_∞ ≤ d and h ≤ h_Ω together amount to h_Ω − d ≤ h ≤ h_Ω, problem (11) can be reformulated as follows:

min{ A(h) | h ∈ H¹_per(0, 2π), h'' + h ≥ 0 and h_Ω − d ≤ h ≤ h_Ω }.   (12)

To perform the numerical approximation of the optimal shape, we have to retrieve a finite-dimensional setting. We then follow the same ideas as in [2, 3] and parametrize the sets via the Fourier coefficients of their support functions, truncated at a certain order N ≥ 1. Thus, we look for solutions in the set

H_N := { θ −→ a₀ + Σ_{k=1}^{N} ( a_k cos(kθ) + b_k sin(kθ) ) | a₀, ..., a_N, b₁, ..., b_N ∈ R }.

This approach is justified by the following approximation proposition:

Proposition 12 [23, Section 3.4] Let Ω be a planar convex body and ε > 0. Then there exist N_ε and a convex body Ω_ε with support function h_{Ω_ε} ∈ H_{N_ε} such that d^H(Ω, Ω_ε) < ε.
We refer to [2, 4] for other applications to different problems and for some theoretical convergence results.
Let us now consider the regular subdivision (θ_k)_{k∈⟦1,M⟧} of [0, 2π], where θ_k = 2kπ/M and M ∈ N*. The inclusion constraints h_Ω(θ) − d ≤ h(θ) ≤ h_Ω(θ) and the convexity constraint h''(θ) + h(θ) ≥ 0 are approximated by the following 3M linear constraints on the Fourier coefficients: for every k ∈ ⟦1, M⟧,

h_Ω(θ_k) − d ≤ a₀ + Σ_{j=1}^{N} ( a_j cos(jθ_k) + b_j sin(jθ_k) ) ≤ h_Ω(θ_k),

a₀ + Σ_{j=1}^{N} (1 − j²)( a_j cos(jθ_k) + b_j sin(jθ_k) ) ≥ 0.

At last, the area of the convex set corresponding to the truncated support function of ω at order N is given by the following quadratic formula:

|ω| = π a₀² + (π/2) Σ_{k=1}^{N} (1 − k²)( a_k² + b_k² ).

Thus, the infinite-dimensional problems (10) and (12) are approximated by the following finite-dimensional ones: minimize, over the coefficients (a₀, ..., a_N, b₁, ..., b_N),

max_{k∈⟦1,M⟧} ( h_Ω(θ_k) − h(θ_k) )  subject to the convexity and inclusion constraints above and to the area being equal to c,   (13)

and

π a₀² + (π/2) Σ_{k=1}^{N} (1 − k²)( a_k² + b_k² )  subject to the linear constraints above.   (14)

Remark 13 We conclude that the shape optimization problems considered in the present paper are approximated by problem (14), which simply consists in minimizing a quadratic function under linear constraints.
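To make the discretization concrete, the following sketch (an illustration with our own naming conventions, not the authors' code) assembles a truncated Fourier support function and checks the quadratic area formula against a direct numerical evaluation of A(h) = (1/2)∫(h² − h'²) dθ.

```python
import numpy as np

def h_from_coeffs(a, b, thetas):
    """Truncated Fourier support function h = a[0] + sum_k (a[k] cos(k t) + b[k-1] sin(k t))."""
    N = len(a) - 1
    k = np.arange(1, N + 1)
    return a[0] + np.cos(np.outer(thetas, k)) @ a[1:] + np.sin(np.outer(thetas, k)) @ b

def area_quadratic(a, b):
    """Quadratic area formula pi*a0^2 + (pi/2) * sum_k (1 - k^2)*(a_k^2 + b_k^2)."""
    N = len(a) - 1
    k = np.arange(1, N + 1)
    return np.pi * a[0] ** 2 + 0.5 * np.pi * np.sum((1 - k ** 2) * (a[1:] ** 2 + b ** 2))

# Example: h(t) = 2 + 0.3*cos(2t) is admissible, since h'' + h = 2 - 0.9*cos(2t) > 0.
a = np.array([2.0, 0.0, 0.3])
b = np.array([0.0, 0.0])

thetas = np.linspace(0.0, 2 * np.pi, 200001)
h = h_from_coeffs(a, b, thetas)
dh = np.gradient(h, thetas)
# Direct evaluation of A(h) = (1/2) * integral of (h^2 - h'^2) over one period
area_numeric = 0.5 * ((h ** 2 - dh ** 2)[:-1]).mean() * 2 * np.pi

print(area_quadratic(a, b), area_numeric)  # both ~ 3.865*pi ~ 12.142
```

The negative weights (1 − k²) for k ≥ 2 show why the convexity constraint h'' + h ≥ 0 matters numerically: without it, the quadratic "area" is unbounded below in the higher harmonics.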

Computation of the Gradients
A very important step in shape optimization is the computation of gradients. In our case, the convexity and inclusion constraints are linear and the area constraint is quadratic, so their gradients are obtained by direct computations. Nevertheless, the computation of the gradient of the objective function in problem (13) is not straightforward, as it is defined as a supremum. This is why we use Danskin's differentiation scheme [9] to compute the derivative.

Proposition 14 Let j denote the objective function, seen as a function of the Fourier coefficients:

j(a₀, ..., a_N, b₁, ..., b_N) := max_{θ∈[0,2π]} ( h_Ω(θ) − h(θ) ),  where h(θ) = a₀ + Σ_{k=1}^{N} ( a_k cos(kθ) + b_k sin(kθ) ).
The function j admits directional derivatives in every direction; they are obtained, in the spirit of Danskin's theorem, by differentiating θ −→ h_Ω(θ) − h(θ) with respect to the coefficients and evaluating at the maximizers: in particular, ∂j/∂a₀ = −1 and, for every k ∈ ⟦1, N⟧, the one-sided directional derivatives in the coordinates a_k and b_k are expressed through −cos(kθ*) and −sin(kθ*) at a maximizer θ*.

Proof Since the same scheme is followed for every coordinate, we limit ourselves to presenting the proof for the first coordinate a₀. In order to simplify the notation, we write, for every x ∈ R, φ(x) := j(x, a₁, ..., a_N, b₁, ..., b_N). For every t ≥ 0, we denote by θ_t ∈ [0, 2π] a point at which the maximum defining φ(a₀ + t) is attained. Comparing the values of the integrand at θ_t and θ₀ yields an upper bound for the difference quotient (φ(a₀ + t) − φ(a₀))/t, and hence for its limit superior as t decreases to 0; this is inequality (15). Let us now consider a sequence (t_n) of positive numbers decreasing to 0 along which the limit inferior of the difference quotient is attained. We have, for every n ≥ 0, a lower bound for the corresponding quotient in terms of θ_n := θ_{t_n}; passing to the limit yields a bound involving an accumulation point θ_∞ of the sequence (θ_n). It is not difficult to check that θ_∞ is a maximizer of θ −→ h_Ω(θ) − h(θ), which gives inequality (16). By the inequalities (15) and (16), we deduce the announced formula for the derivative.
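A toy illustration of the Danskin step (our own minimal setting, not the paper's parametrization): for Ω the unit disc, h_Ω ≡ 1, and a sensor with a single cosine coefficient, the maximizer of θ −→ h_Ω(θ) − h(θ) is known in closed form, and a finite-difference check confirms that the derivative of the max is the derivative of the integrand at that maximizer.

```python
import numpy as np

# Omega = unit disc, so h_Omega = 1; sensor support function h = a0 + a1*cos(theta).
thetas = np.linspace(0.0, 2 * np.pi, 100001)
h_Omega = np.ones_like(thetas)

def j(a0, a1):
    """Discretized sup-norm objective max_theta (h_Omega - h)."""
    return np.max(h_Omega - (a0 + a1 * np.cos(thetas)))

a0, a1 = 0.5, 0.2
theta_star = thetas[np.argmax(h_Omega - (a0 + a1 * np.cos(thetas)))]

# Danskin: with a unique maximizer theta*, the partial derivatives of j are those
# of the integrand at theta*: dj/da0 = -1 and dj/da1 = -cos(theta*).
danskin = (-1.0, -np.cos(theta_star))

eps = 1e-6
fd = ((j(a0 + eps, a1) - j(a0 - eps, a1)) / (2 * eps),
      (j(a0, a1 + eps) - j(a0, a1 - eps)) / (2 * eps))
print(danskin, fd)  # both ~ (-1.0, 1.0); the maximizer here is theta* = pi
```

With a1 > 0 the maximum of −a1·cos θ is attained only at θ = π, so the hypothesis of a unique maximizer holds and the two computations agree.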

Numerical Results
Now that we have parametrized the problem and computed the gradients, we are in a position to perform numerical shape optimization. We use the 'fmincon' Matlab routine. In the following figures, we present the results obtained for different shapes Ω and different mass fractions c₀ := α₀|Ω|, where α₀ ∈ {0.01, 0.1, 0.4, 0.7}.

Remark 15
The numerical simulations presented in Fig. 10 suggest that for large mass fractions (α₀ ≈ 1), the optimal sensor is exactly given by an inner parallel set of the domain Ω (see Sect. 2.2 for the definition of inner parallel sets). In the work in preparation [10], we prove that this statement holds under some regularity assumptions on the set Ω and investigate its relation with the appearance of caustics when computing the distance function to the boundary ∂Ω by solving the eikonal equation

‖∇u‖ = 1 in Ω,  u = 0 on ∂Ω.

Optimal Spherical Sensors and Relation with Chebyshev Centers
In this section, we show that the ideas developed in the previous sections can be efficiently used to numerically solve the problem of optimal placement of a spherical sensor inside the convex set Ω. We also show that this problem is related to the task of finding the Chebyshev center of the set, i.e., the center of the minimal-radius ball enclosing the entire set Ω.
We then consider the following optimal placement problem:

min{ d^H(B, Ω) | B is a ball of radius R included in Ω },   (17)

with R ∈ [0, r(Ω)], where r(Ω) is the inradius of Ω, that is, the radius of the biggest ball contained in Ω.
Since the support function of a ball B with center (x, y) and radius R is simply given by

h_B : θ −→ R + x cos θ + y sin θ,

problem (17) can be formulated in terms of support functions as follows:

min_{(x,y)} { ‖h_Ω − (R + x cos θ + y sin θ)‖_∞ | ∀θ ∈ [0, 2π], R + x cos θ + y sin θ ≤ h_Ω(θ) }.

Here also, as in Sect. 5.1, the inclusion constraint B ⊂ Ω (i.e., h_B ≤ h_Ω) can be approximated by a finite number of linear inequalities

R + x cos θ_k + y sin θ_k ≤ h_Ω(θ_k),

where θ_k := 2kπ/M, with k ∈ ⟦1, M⟧ and M chosen equal to 500.
Thus, we retrieve a problem of minimizing the nonlinear function (x, y) −→ ‖h_Ω − (R + x cos θ + y sin θ)‖_∞ (whose gradient is computed by using the result of Proposition 14) under a finite number of linear constraints.
In Fig. 11, we present some numerical results. At last, we note that solving problem (17) with R = 0 is equivalent to finding the Chebyshev center of Ω, that is, the center of the minimal-radius ball enclosing the entire set Ω, see Fig. 12. This center has been considered by several authors in different settings, especially in functional analysis; we refer for example to [1, 14, 17, 18].
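After discretizing the angles, the support-function formulation above becomes a linear program in the center (x, y) and an auxiliary epigraph variable t bounding the sup-norm. The following sketch (our own illustration, assuming SciPy is available; the paper uses Matlab's fmincon) solves it for the unit square with R = 0.2; by symmetry, the optimal center is the center of the square.

```python
import numpy as np
from scipy.optimize import linprog

# Support function of the unit square, sampled at M angles.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
M = 720
th = 2 * np.pi * np.arange(M) / M
dirs = np.column_stack([np.cos(th), np.sin(th)])
h_Omega = (dirs @ verts.T).max(axis=1)

R = 0.2  # prescribed radius of the spherical sensor
# Variables (x, y, t): minimize t subject to
#   h_Omega(th_k) - R - x*cos(th_k) - y*sin(th_k) <= t   (sup-norm epigraph)
#   R + x*cos(th_k) + y*sin(th_k) <= h_Omega(th_k)       (inclusion B in Omega)
c = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack([np.column_stack([-dirs, -np.ones(M)]),
                  np.column_stack([dirs, np.zeros(M)])])
b_ub = np.concatenate([R - h_Omega, h_Omega - R])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)

print(res.x[:2], res.fun)  # center ~ (0.5, 0.5), distance ~ sqrt(2)/2 - 0.2
```

The optimal value is the distance from a corner of the square to the ball, √2/2 − R, attained simultaneously at the four diagonal directions; note that the default nonnegative bounds of `linprog` must be lifted, since the center coordinates are a priori unconstrained.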

Conclusion and Perspectives
The present paper is devoted to the design of a convex sensor inside a given convex domain, minimizing the farthest distance between the two sets. Many natural extensions could be considered. In this section, we discuss some possible developments and present some ideas that we are planning to pursue in future works.
• Multiple sensors on domains and networks. A natural extension would be the optimal placement and design of multiple sensors inside a given region or a network. These cases are out of the scope of the present work, and we believe that different techniques should be considered for their treatment. Indeed, our approach is mainly based on parametrizing the boundaries of the set Ω and the sensor ω via their support functions h_Ω and h_ω and using them to compute the Hausdorff distance between Ω and ω via the formula

d^H(ω, Ω) = ‖h_Ω − h_ω‖_∞.

We believe that no similar formula for the Hausdorff distance between the union of two or more sensors and the set Ω can be found. Indeed, contrary to the case of one sensor, where the Hausdorff distance is always attained at points on the boundaries of the sets ω and Ω (see Fig. 4), in the case of multiple sensors the Hausdorff distance may be attained at a point inside the domain (see Fig. 13), which makes the parametrization via support functions irrelevant.

Fig. 13 The Hausdorff distance between the disconnected sensor ω₁ ∪ ω₂ and the set Ω

It is then natural to investigate the optimal design and placement of N sensors (S_k) inside a domain or a network Ω in such a way that any point in Ω is "easily" reachable from one of the sensors. This problem can be mathematically formulated as follows:

inf_{(S_k)} max_{y∈Ω} d(y, ∪_{k=1}^{N} S_k),   (19)

where the infimum is taken over admissible families of sensors and d(y, ∪_{k=1}^{N} S_k) is the minimal (geodesic if Ω is a network) distance from the point y to the union of the sensors. If we consider a family (v_ε) of functions approximating the distance function y −→ d(y, ∪_{k=1}^{N} S_k) when ε goes to 0 (such as the ones defined below in Theorems 16 and 17), we may consider approximating problem (19) by the following one:

inf_{(S_k)} max_{y∈Ω} v_ε(y).   (20)

The advantage of such approximated problems is that they involve elliptic equations, which are much easier to deal with from the theoretical and numerical points of view. Once problem (20) is solved, the next natural step would be to justify that the obtained solutions converge to those of the initial problem (19).
This is classically done by proving Γ-convergence results, see for example [16, Section 6].
• Approximation of the distance function. The problems studied in the present paper involve the distance function, which satisfies the classical eikonal equation

‖∇u‖ = 1 in Ω,  u = 0 on ∂Ω.

This equation is nonlinear and hyperbolic, which makes it quite difficult to deal with from a numerical perspective, especially in the context of numerical shape optimization.
It may then be interesting to use a suitable approximation of the distance function based on PDE results, in the spirit of Crane et al. [8], where the authors introduce a new approach to compute distances based on a heat flow result of Varadhan [27], which says that the geodesic distance φ(x, y) between any pair of points x and y on a Riemannian manifold can be recovered via a simple pointwise transformation of the heat kernel:

φ(x, y) = lim_{t→0} √( −4t log k_{t,x}(y) ),

where k_{t,x}(y) is the heat kernel, which measures the heat transferred from a source x to a destination y after time t. We refer to [8, 27] for more details and to [25] for an extension to graphs.
In the same spirit, one could use a suitable approximation of the distance function in terms of the solution of an elliptic PDE, inspired by the following classical result:

Theorem 16 [27, Th. 2.3] Let Ω be an open subset of R^n and ε > 0. We consider the problem

w_ε − ε Δw_ε = 0 in Ω,  w_ε = 1 on ∂Ω.

Then −√ε log w_ε converges to the distance function d(·, ∂Ω) as ε goes to 0.

In Fig. 14, we plot the approximation of the distance function to the boundary obtained via the result of Theorem 16.
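In one dimension, the behavior described in Theorem 16 can be checked directly. The following sketch (a minimal illustration, with our own discretization choices) solves w_ε − ε w_ε'' = 0 on (0, 1) by finite differences and compares −√ε log w_ε to the exact distance min(x, 1 − x).

```python
import numpy as np

# Solve w - eps*w'' = 0 on (0,1) with w = 1 on the boundary (Theorem 16 in 1D),
# then recover an approximate distance to the boundary as -sqrt(eps)*log(w).
eps = 1e-4
n = 2001
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Tridiagonal finite-difference system on the n-2 interior nodes.
main = 1.0 + 2.0 * eps / dx ** 2
off = -eps / dx ** 2
A = (np.diag(np.full(n - 2, main))
     + np.diag(np.full(n - 3, off), 1)
     + np.diag(np.full(n - 3, off), -1))
rhs = np.zeros(n - 2)
rhs[0] -= off   # boundary value w(0) = 1
rhs[-1] -= off  # boundary value w(1) = 1

w = np.ones(n)
w[1:-1] = np.linalg.solve(A, rhs)

d_eps = -np.sqrt(eps) * np.log(w)
d_exact = np.minimum(x, 1.0 - x)
print(np.max(np.abs(d_eps - d_exact)))  # ~ 0.007: the elliptic proxy is close to the distance
```

The largest discrepancy sits at the midpoint, where the two boundary layers overlap; it is of order √ε log 2, consistent with the convergence as ε → 0.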
We note that there are other results on approximating the distance function via PDEs; see [11] and the references therein. We recall, for example, the following result of Bernd Kawohl:

Theorem 17 [13, Th. 1] We consider the problem

−Δ_p v_p = 1 in Ω,  v_p = 0 on ∂Ω,

where Δ_p corresponds to the p-Laplace operator, defined as Δ_p v = div(|∇v|^{p−2} ∇v). Then v_p converges uniformly to the distance function d(·, ∂Ω) as p −→ ∞.
One advantage of such approximation methods is that they allow to introduce relevant PDE-based problems that may be easier to handle numerically than the initial ones involving the distance function, and that are of intrinsic interest. Let us conclude by presenting some examples of such problems:
• The average distance problem. Given a set Ω ⊂ R^n and a subset Σ ⊂ Ω, the average distance to Σ is defined as follows:

J_p(Σ) := ∫_Ω d(x, Σ)^p dx,

where p is a positive parameter.
The main focus here is to study the shapes Σ that minimize the average distance and to investigate their properties, such as symmetries and regularity. This problem was introduced in [6, 7] and has been studied by many authors in recent years. For a presentation of the problem, we refer to [15] and to the references therein for related results.
Even if these problems are easy to formulate, they are quite difficult to tackle both theoretically and numerically. It is then interesting to use the approximation results for the distance function to approximate the functional J_p by

J_{p,ε}(Σ) := ∫_Ω v_ε^p dx,

where (v_ε) is a family of functions uniformly converging to d(·, Σ) on Ω when ε goes to 0.
We are then led to consider shape optimization problems for functionals involving solutions of simple elliptic PDEs. Several results are easier to obtain for such functionals, for instance Hadamard formulas for the shape derivatives, which are of crucial importance for numerical simulations.