Optimal Shape for Elliptic Problems with Random Perturbations

In this paper we analyze the relaxed form of a shape optimization problem with state equation $$\begin{cases}-\mathrm{div}\big(a(x)Du\big)=f&\hbox{in }D\\ \hbox{boundary conditions}&\hbox{on }\partial D.\end{cases}$$ The new fact is that the term $f$ is only known up to a random perturbation $\xi(x,\omega)$. The goal is to find an optimal coefficient $a(x)$, fulfilling the usual constraints $\alpha\le a\le\beta$ and $\displaystyle\int_D a(x)\,dx\le m$, which minimizes a cost function of the form $$\int_\Omega\int_D j\big(x,\omega,u_a(x,\omega)\big)\,dx\,dP(\omega).$$ Some numerical examples are shown in the last section, to stress the difference with respect to the case with no perturbation.


Introduction
The field of shape optimization has in recent years received particular attention from the mathematical community, also in view of its many possible applications in high-tech instruments and structures, where increasing the performance or decreasing the weight, even by a small percentage, can be crucial. Several books have been written on the field, exploring the various aspects (theoretical, numerical, modelling, ...) that intervene in this very rich subject; we quote for instance [1], [3], [4], [6], [9], [11], [18], [19].
The general framework of a shape optimization problem is the following: given a bounded domain D of R^d and a right-hand side f, for every subdomain A ⊂ D a PDE $$E_A u=f\eqno{(1)}$$ is considered, with given boundary data. The PDE above produces a unique solution u_A which, inserted into an integral cost functional, provides the final cost $$J(A)=\int_D j\big(x,u_A(x)\big)\,dx.$$ The shape optimization problem, under a volume constraint on the class of admissible choices, is then $$\min\big\{J(A)\ :\ A\subset D\ \hbox{admissible},\ |A|\le m\big\}.$$ Due to a strong instability of the class of domains, very often an optimal shape does not exist, and the optimization problem is usually relaxed into a more tractable form, where the main unknown is the coefficient of an elliptic PDE on the whole set D. In the present paper we consider the simplest case, where the PDE is of linear elliptic type, with boundary conditions on ∂D of Dirichlet type: u = u_0 on ∂D.
The new fact is that the right-hand side f in (1) is only known up to a random perturbation; more precisely, if (Ω, F, P) is a probability space, we assume that f(x, ω) = f(x) + ξ(x, ω), where the random perturbation ξ is such that $\int_\Omega\xi(x,\omega)\,dP(\omega)=0$ for a.e. x ∈ D.
There are few references in the literature on this sort of random or stochastic optimal design problem; a general Γ-convergence framework was introduced in [7], while an optimal design problem in a finite-dimensional setting was considered in [2].
The homogenization method (see [1], [19]) and the classical tools of nonconvex variational problems (in particular Young measures, see [16], [17]) are the two most widely used approaches in the mathematical literature for analyzing optimal design problems. We will use homogenization theory in order to obtain the existence of a solution and some necessary conditions of optimality.
In the last section we consider some simple cases of loads f (x, ω) and we perform a numerical analysis of the optimal configurations, showing the differences between the deterministic case f (x) and the perturbed one f (x)+ ξ(x, ω).

The optimization problem
We consider a bounded open set D ⊂ R d with a Lipschitz boundary, two constants α and β such that 0 < α ≤ β, and a given value m ∈ (α|D|, β|D|).
We also consider a probability space (Ω, F, P) and a random map f : Ω → L^2(D) that we write as $$f(x,\omega)=f(x)+\xi(x,\omega),$$ where ξ has the property $\int_\Omega\xi(x,\omega)\,dP(\omega)=0$ for a.e. x ∈ D.
For every admissible coefficient a, we consider the state equation $$-\mathrm{div}\big(a(x)\nabla u\big)=f(x,\omega)\ \hbox{in }D,\qquad u=0\ \hbox{on }\partial D,\eqno{(2)}$$ which provides a unique solution u_a(x, ω). Finally, we consider a cost functional of the form $$I(a)=\int_\Omega\int_D j\big(x,\omega,u_a(x,\omega)\big)\,dx\,dP(\omega),\eqno{(3)}$$ where j(x, ω, u) is measurable, l.s.c. in u, and such that, for suitable c > 0 and Λ ∈ L^1(D × Ω), $$|j(x,\omega,u)|\le c|u|^2+\Lambda(x,\omega).$$ More general cost functionals, depending for instance also on ∇u, could be considered, but we limit ourselves to the simpler case (3), having in mind the energy $j(x,\omega,u)=-u(x)f(x,\omega)$. The optimization problem we consider is $$\min\Big\{I(a)\ :\ \alpha\le a\le\beta,\ \int_D a(x)\,dx\le m\Big\}.\eqno{(4)}$$ Note that the optimal coefficient we look for is deterministic, that is, it does not depend on the random variable ω.
Besides problem (4), we will consider, mainly for numerical purposes, the penalized one $$\min\Big\{I(a)+\lambda\int_D a(x)\,dx\ :\ \alpha\le a\le\beta\Big\},$$ where λ is the Lagrange multiplier of the mass constraint.

The state equation
The approach we follow in order to prove the existence of a solution of the optimization problem (4) consists in showing that our problem can be seen as the relaxed form of an auxiliary optimal design problem; from this relaxed character we deduce the existence of a solution. We focus on the Homogenization Method. Throughout this section, we denote by χ_n ∈ L^∞(D; {0,1}), n = 1, 2, ..., a sequence of characteristic functions and by A_n ∈ M^{d×d} the sequence of tensors of the form $$A_n(x)=\alpha\chi_n(x)I_d+\beta\big(1-\chi_n(x)\big)I_d.$$

The Homogenization Method
The homogenization method is based on the concept of H-convergence (see [1], [13], [14], [15]). We say that a sequence of tensors {A_n} H-converges to the tensor A* ∈ L^∞(D; M^{d×d}) if, for any f such that f(·, ω) ∈ H^{-1}(D) P-a.e. ω ∈ Ω, the sequence {u_n} of solutions of $$-\mathrm{div}\big(A_n\nabla u_n\big)=f\ \hbox{in }D,\qquad u_n=0\ \hbox{on }\partial D,$$ satisfies $$u_n\rightharpoonup u\ \hbox{weakly in }H^1_0(D),\qquad A_n\nabla u_n\rightharpoonup A^*\nabla u\ \hbox{weakly in }L^2(D;\mathbb{R}^d),$$ where u is the solution of the homogenized equation $$-\mathrm{div}\big(A^*\nabla u\big)=f\ \hbox{in }D,\qquad u=0\ \hbox{on }\partial D.$$ We shall write $A_n\overset{H}{\longrightarrow}A^*$ to indicate this kind of convergence.
We consider A, B ∈ M^{d×d}, a sequence of characteristic functions {χ_n} ⊂ L^∞(D; {0,1}), and the sequence of matrices $$A_n(x)=\chi_n(x)A+\big(1-\chi_n(x)\big)B.$$ We assume (which always occurs for a subsequence) that there exist θ ∈ L^∞(D; [0,1]) and A* ∈ L^∞(D; M^{d×d}) such that $$\chi_n\rightharpoonup\theta\ \hbox{weakly* in }L^\infty(D),\qquad A_n\overset{H}{\longrightarrow}A^*.$$ In this case A* is called the homogenized tensor obtained by the composition of the two phases A and B, in proportions θ and 1 − θ respectively, and with the microstructure defined by the sequence {χ_n}. In this sense the homogenized tensor A* is characterized by three components: the phases A and B and the proportion θ. Therefore an important issue is to identify all possible homogenized tensors once these three components are fixed; this is the so-called G-closure problem.
Fortunately, for the case of two isotropic phases, the G-closure in the deterministic case is well known (see [1], [10], [13], [15]). We will prove that our "random" G-closure remains equal to the deterministic one. We denote by $G_\theta$ and $\widetilde G_\theta$ the G-closures associated with the deterministic and the random equations respectively.
We start by proving that $G_\theta\subset\widetilde G_\theta$; consider A* ∈ G_θ. Then there exists a sequence of matrices {A_n} of the form $\alpha\chi_nI_d+\beta(1-\chi_n)I_d$ such that, for every deterministic right-hand side f, the solutions u_n of the corresponding equations converge weakly in H^1_0(D) to u, where u is the solution of the homogenized equation $-\mathrm{div}(A^*\nabla u)=f$ in D, u = 0 on ∂D. It is enough to observe that when f(x) is replaced by f(x, ω) the above convergence holds P-a.e. ω ∈ Ω, and therefore $A^*\in\widetilde G_\theta$.
We now prove that $\widetilde G_\theta\subset G_\theta$, and we take $A^*\in\widetilde G_\theta$. Then there exists a sequence of matrices {A_n} of the form $\alpha\chi_nI_d+\beta(1-\chi_n)I_d$ such that, P-a.e. ω ∈ Ω, $$u_n(\cdot,\omega)\rightharpoonup\tilde u(\cdot,\omega)\ \hbox{weakly in }H^1_0(D),\eqno{(6)}$$ $$A_n\nabla u_n(\cdot,\omega)\rightharpoonup A^*\nabla\tilde u(\cdot,\omega)\ \hbox{weakly in }L^2(D;\mathbb{R}^d),\eqno{(7)}$$ $$\chi_n\rightharpoonup\theta\ \hbox{weakly* in }L^\infty(D),\eqno{(8)}$$ where $\tilde u(\cdot,\omega)$ is the solution of the homogenized equation P-a.e. ω ∈ Ω. Integrating the above expressions (6), (7), (8) with respect to the random variable ω ∈ Ω and setting $\bar f(x)=\int_\Omega f(x,\omega)\,dP(\omega)$, we obtain the same convergences for the deterministic equation with right-hand side $\bar f$, where u is the solution of the corresponding homogenized equation. From the generality of $\bar f$ we can deduce that A* ∈ G_θ.

Relaxation
We now consider the classical optimal design problem $$(O_c)\qquad\min\Big\{\int_\Omega\int_D j\big(x,\omega,u_\chi(x,\omega)\big)\,dx\,dP(\omega)\ :\ \chi\in L^\infty(D;\{0,1\})\Big\},$$ where u_χ is subject to the state equation $$-\mathrm{div}\big((\alpha\chi+\beta(1-\chi))\nabla u\big)=f(x,\omega)\ \hbox{in }D,\qquad u=0\ \hbox{on }\partial D,$$ P-a.e. ω ∈ Ω, and to the volume constraint $$\int_D\big(\alpha\chi+\beta(1-\chi)\big)\,dx\le m.$$ The lack of optimal solutions for problems of the type (O_c) is well known, even in the deterministic case (see [12]).
The basic idea of the relaxation process consists in considering a larger class of admissible designs, so that the new (relaxed) problem on this larger class admits optimal solutions. Having in mind the above Theorem 1, we consider the space of generalized designs $$\big\{(\theta,A^*)\ :\ \theta\in L^\infty(D;[0,1]),\ A^*(x)\in G_{\theta(x)}\ \hbox{a.e. in }D\big\}.$$ Therefore we define the relaxed version (O_r) of the above optimal design problem as $$(O_r)\qquad\min\Big\{\int_\Omega\int_D j\big(x,\omega,u(x,\omega)\big)\,dx\,dP(\omega)\ :\ (\theta,A^*)\ \hbox{generalized design}\Big\},$$ where u is subject to the state equation $-\mathrm{div}\big(A^*\nabla u\big)=f(x,\omega)$ in D, u = 0 on ∂D, P-a.e. ω ∈ Ω, and to the volume constraint $$\int_D\big(\alpha\theta+\beta(1-\theta)\big)\,dx\le m.$$

Optimal solutions
In this section we consider the above problems (O_c) and (O_r) in the special case when the cost functional is either the compliance $j(x,\omega,u)=u(x)f(x,\omega)$ or the energy $j(x,\omega,u)=-u(x)f(x,\omega)$, and we will prove that in these situations our original design problem (4), with the constraint $\int_D a(x)\,dx\le m$, admits optimal solutions.

Theorem 3. In the case either of the compliance or of the energy, the optimization problem (4) admits a solution.
We know that problem (O_r) admits an optimal solution (since it is the relaxed problem of (O_c)); we check that these optimal solutions are solutions of our problem (O), from which we deduce the well-posed character of our problem.
We analyze the optimality condition of (O_r) for the matrix A*. We denote by u the solution of the relaxed state equation $$-\mathrm{div}\big(A^*\nabla u\big)=f(x,\omega)\ \hbox{in }D,\qquad u=0\ \hbox{on }\partial D,\eqno{(9)}$$ and by p the adjoint state, which is the unique solution in H^1_0(D), P-a.e. ω ∈ Ω, of the adjoint state equation $$-\mathrm{div}\big(A^*\nabla p\big)=\partial_u j(x,\omega,u)\ \hbox{in }D,\qquad p=0\ \hbox{on }\partial D.\eqno{(10)}$$ We remark that from (10) it follows that in the compliance case we have p = u and in the energy case we have p = −u.
It is clear that ⟨∂L/∂ψ(M, φ, ψ); ψ_1⟩ = 0 taking φ = u, the solution of (9). We then determine the solution p so that, for all φ_1 ∈ H^1_0(D), P-a.e. ω ∈ Ω, we have ⟨∂L/∂φ(M, φ, p); φ_1⟩ = 0, which leads to the formulation of the adjoint problem (10). Therefore it is easy to deduce that our cost functional is Gâteaux differentiable with respect to the matrix variable, and that its derivative in the direction M_1 is given by $$-\int_\Omega\int_D M_1\nabla u\cdot\nabla p\,dx\,dP(\omega),$$ with u and p solutions of the state and adjoint equations respectively. Hence, if M* is optimal, we obtain the optimality condition $$\int_\Omega\int_D (M-M^*)\nabla u\cdot\nabla p\,dx\,dP(\omega)\le 0\qquad\hbox{for every admissible }M.\eqno{(11)}$$ From the optimality condition (11), using the algebraic identity $$4\,\nabla u\cdot\nabla p=|\nabla u+\nabla p|^2-|\nabla u-\nabla p|^2$$ and the identification of G_θ, we obtain $$4\,M^*\nabla u\cdot\nabla p\le\lambda_\theta^+|\nabla u+\nabla p|^2-\lambda_\theta^-|\nabla u-\nabla p|^2.\eqno{(12)}$$
Having in mind that in our problem u = p for the compliance and u = −p for the energy, the necessary condition above reads $$M^*\nabla u\cdot\nabla u=\lambda_\theta|\nabla u|^2,\eqno{(13)}$$ with λ_θ = λ_θ^+ for the compliance and λ_θ = λ_θ^- for the energy. If we analyze the optimality condition (13) we obtain that λ_θ is an eigenvalue of M* and ∇u(x, ω) is an associated eigenvector P-a.e. ω ∈ Ω, i.e., $$M^*\nabla u=\lambda_\theta\nabla u,\eqno{(14)}$$ where u is the solution of the state equation.
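Here λ_θ^- and λ_θ^+ can be identified with the classical harmonic and arithmetic means of the two phases α and β in proportions θ and 1 − θ, the standard eigenvalue bounds in two-phase homogenization (see e.g. [1], [13]):

```latex
\lambda_\theta^{-}=\Big(\frac{\theta}{\alpha}+\frac{1-\theta}{\beta}\Big)^{-1},
\qquad
\lambda_\theta^{+}=\theta\alpha+(1-\theta)\beta .
```

In particular λ_θ^- ≤ λ_θ^+ and both take values in [α, β] whenever θ ∈ [0, 1].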
For the compliance case u = p and M*∇u·∇u = λ_θ^+|∇u|^2; there exist several matrices in G_θ with this property, and it is enough to take a rank-one laminate with normal direction of lamination n orthogonal to ∇u and volume fraction θ. For the energy case u = −p and M*∇u·∇u = λ_θ^-|∇u|^2; there exists a unique matrix in G_θ with this property, which corresponds to a rank-one laminate with normal direction of lamination n parallel to ∇u and volume fraction θ. We stress that in both cases the optimal matrix M* is a first-order laminate with deterministic eigenvalue λ_θ and volume fraction θ, and with a random direction of lamination, according to the random value of ∇u.
Therefore, from the analysis of the optimality condition we conclude that the optimal matrix M* ∈ G_θ verifies condition (14). We remark that this condition does not imply that the optimal matrix M* is the isotropic matrix λ_θ I_d; the important conclusion of (14) is that λ_θ is an eigenvalue of M* and ∇u is an associated eigenvector, where u is the solution of the state equation. In particular, this implies that the optimal value of (O_r) is attained on the simpler problem $$\min\Big\{\int_\Omega\int_D j\big(x,\omega,u(x,\omega)\big)\,dx\,dP(\omega)\ :\ \theta\in L^\infty(D;[0,1])\Big\},$$ where u solves the state equation with the scalar coefficient λ_θ and θ satisfies the volume constraint. Finally, the proof of Theorem 3 reduces to the fact that for θ ∈ L^∞(D; [0,1]) one has λ_θ ∈ L^∞(D; [α, β]); in particular, it is enough to take a = λ_θ to deduce the existence of an optimal solution of our original problem (O).

Numerical analysis of the optimal design problem
In this section we approach the numerical resolution of problem (O) in the compliance case, for which j(x, ω, u) = u(x)f(x, ω), with Dirichlet boundary conditions, so that the cost can be written as $$I(a)=\int_\Omega\int_D u_a(x,\omega)f(x,\omega)\,dx\,dP(\omega).$$ We first describe a minimization algorithm and then present some numerical experiments. We treat the problem $$-\mathrm{div}\big(a(x)\nabla u\big)=f+\xi\ \hbox{in }D,\qquad u=0\ \hbox{on }\partial D,\eqno{(15)}$$ where ξ = ξ(x, ω) is a random perturbation. Similar computations are also made for the energy case j(x, ω, u) = −u(x)f(x, ω).

Algorithm of minimization
We present the resolution of the optimal design problem (O) using a gradient descent method. To this end, we compute the first variation of the cost function with respect to a: for any η ∈ R^+, η ≪ 1, and any ã ∈ L^∞(D), we associate to the perturbation a_η = a + ηã of a the derivative of I with respect to a in the direction ã, $$\frac{\partial I}{\partial a}\cdot\tilde a=\lim_{\eta\to 0}\frac{I(a+\eta\tilde a)-I(a)}{\eta}.$$

Theorem 4. The first derivative of I with respect to a in any direction ã exists and takes the form $$\frac{\partial I}{\partial a}\cdot\tilde a=-\int_\Omega\int_D\tilde a\,\nabla u\cdot\nabla p\,dx\,dP(\omega),$$ where u is the solution of (15) and p is the solution in H^1_0(D) of the adjoint problem $$-\mathrm{div}\big(a(x)\nabla p\big)=f+\xi\ \hbox{in }D,\quad P\hbox{-a.e. }\omega\in\Omega,\qquad p=0\ \hbox{on }\partial D.$$
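The derivative formula of Theorem 4 is easy to check numerically. The following 1D sketch uses a finite-difference discretization of our own choosing (all names and parameters are illustrative, not the paper's implementation): for a single fixed realization of the load and the compliance case, where p = u, it compares $-\int\tilde a\,|\nabla u|^2\,dx$ with a difference quotient of the cost.

```python
import numpy as np

# Illustrative 1D check of Theorem 4 (compliance case, one fixed realization
# of the load, so p = u). Our own finite-difference discretization of
# -(a u')' = rhs on D = (0,1) with u(0) = u(1) = 0.
N = 100
h = 1.0 / N
xm = (np.arange(N) + 0.5) * h          # midpoints, where the coefficient lives

def solve(a, rhs):
    """Finite-difference solution of -(a u')' = rhs, u(0) = u(1) = 0."""
    main = (a[:-1] + a[1:]) / h**2
    A = np.diag(main) - np.diag(a[1:-1] / h**2, 1) - np.diag(a[1:-1] / h**2, -1)
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs[1:-1])
    return u

rhs = np.ones(N + 1)                    # one realization of f + xi
a = 1.5 + 0.3 * np.sin(2 * np.pi * xm)  # an arbitrary admissible coefficient
atilde = xm                             # an arbitrary direction of perturbation

def I(coef):
    """Compliance cost ∫ rhs·u dx for the coefficient coef."""
    return np.sum(rhs * solve(coef, rhs)) * h

eps = 1e-6
fd = (I(a + eps * atilde) - I(a)) / eps               # difference quotient
u = solve(a, rhs)
exact = -np.sum(atilde * (np.diff(u) / h) ** 2) * h   # Theorem 4 with p = u
print(fd, exact)
```

The two printed values coincide up to the O(ε) error of the difference quotient; note the derivative is negative, since adding material can only decrease the compliance.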
In order to take into account the volume constraint on a, we introduce a Lagrange multiplier γ ∈ R and the functional $$I_\gamma(a)=I(a)+\gamma\int_D a(x)\,dx.$$
Using Theorem 4, we then easily obtain that the first derivative of I_γ is $$\frac{\partial I_\gamma}{\partial a}\cdot\tilde a=\int_D\tilde a\,\Big(\gamma-\int_\Omega\nabla u\cdot\nabla p\,dP(\omega)\Big)dx,$$ which leads us to define the following descent direction: $$\tilde a=\int_\Omega\nabla u\cdot\nabla p\,dP(\omega)-\gamma\qquad\hbox{in }D.$$ In this way, for any function η ∈ L^∞(D; R^+) with ‖η‖_{L^∞(D)} small enough, we have I_γ(a + ηã) ≤ I_γ(a). The multiplier γ is then determined so that, for any function η ∈ L^∞(D; R^+), ‖a + ηã‖_{L^1(D)} = m, leading to $$\gamma=\frac{\displaystyle\int_D\eta\Big(\int_\Omega\nabla u\cdot\nabla p\,dP(\omega)\Big)dx-\Big(m-\int_D a\,dx\Big)}{\displaystyle\int_D\eta\,dx},$$ where the function η is chosen such that a + ηã ∈ [α, β] for all x ∈ D. A simple and efficient choice consists in taking η(x) = ε(a(x) − α)(β − a(x)) for all x ∈ D, with ε small and positive. Finally, the descent algorithm to solve numerically the optimization problem (O) consists in iterating the steps: solve the state and adjoint equations, compute the descent direction ã and the multiplier γ, and update a ← a + ηã.
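The descent loop above can be sketched in a minimal 1D setting. Everything below (the finite-difference discretization, the step parameter, the bump χ, the two-point law for ξ) is an illustrative choice of ours, not the paper's actual 2D implementation; it only shows the structure of one iteration.

```python
import numpy as np

# Minimal 1D sketch of the descent scheme: compliance case (p = u), two
# equiprobable realizations xi = ±chi, descent direction g - gamma with
# g = E[∇u·∇p], and step eta(x) = eps*(a - alpha)*(beta - a).
alpha, beta = 1.0, 2.0
m = 1.5                                  # mass constraint  ∫ a dx = m
N = 100
h = 1.0 / N

def solve(a, rhs):
    """Finite-difference solution of -(a u')' = rhs, u(0) = u(1) = 0."""
    main = (a[:-1] + a[1:]) / h**2
    A = np.diag(main) - np.diag(a[1:-1] / h**2, 1) - np.diag(a[1:-1] / h**2, -1)
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs[1:-1])
    return u

x = np.arange(N + 1) * h
f = np.ones(N + 1)                                 # deterministic load
chi = np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)   # illustrative perturbation

a = np.full(N, m)                                  # constant initialization
for it in range(300):
    # g = E[grad u . grad p]; for the compliance p = u, so average |u'|^2
    g = sum(0.5 * (np.diff(solve(a, f + s * chi)) / h) ** 2 for s in (1.0, -1.0))
    eta = 5e-2 * (a - alpha) * (beta - a)          # vanishes at the bounds
    gamma = (np.sum(eta * g) * h - (m - np.sum(a) * h)) / (np.sum(eta) * h)
    a = np.clip(a + eta * (g - gamma), alpha, beta)  # descent step

print(a.min(), a.max(), a.sum() * h)
```

At each iteration γ is recomputed from the formula above, so the update drives the mass ∫_D a dx back to m while the choice of η keeps a within [α, β].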

Numerical experiments
In this section we implement the gradient descent algorithm explained in the previous subsection. Problems of this sort have been studied by other authors ([1], [3], [5], [16]), and our numerical results are in agreement with their simulations. We solve problem (O) on the square domain D = (0,1)^2 with two phases α = 1 and β = 2, with the deterministic part of the right-hand side of the state equation f ≡ 1, and with the volume constraint m = (α+β)/2 = 1.5, i.e. the same amount of the α and β phases can be used. In order to simplify the numerical computations we choose the random variable ξ with a discrete probability distribution. We consider two different cases for ξ: in Case 1 the perturbation χ is supported in the middle of the square, while in Case 2 it is supported near the boundary; in both cases P({ξ = χ}) = P({ξ = −χ}) = 1/2. Since the algorithm could fall into a local minimum of I, we consider the constant initialization a_0 = m, without privileging any location.
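A quick consequence of the two-point law above, easy to verify numerically (again a 1D finite-difference sketch with illustrative names and discretization of our own): for a fixed coefficient a, the linearity of the state equation and E[ξ] = 0 make the cross terms cancel, so the expected compliance splits into the deterministic part plus the contribution of the perturbation alone.

```python
import numpy as np

# For fixed a, write u = u_f + u_xi by linearity. With the two-point law
# P(xi = chi) = P(xi = -chi) = 1/2, the cross terms average out, so
#   E[compliance] = compliance(f) + compliance(chi).
N = 200
h = 1.0 / N

def compliance(a, rhs):
    """Return ∫ rhs·u dx for -(a u')' = rhs, u(0) = u(1) = 0."""
    main = (a[:-1] + a[1:]) / h**2
    A = np.diag(main) - np.diag(a[1:-1] / h**2, 1) - np.diag(a[1:-1] / h**2, -1)
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs[1:-1])
    return np.sum(rhs * u) * h

a = np.full(N, 1.5)                                # any fixed admissible a
x = np.arange(N + 1) * h
f = np.ones(N + 1)
chi = np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)   # Case-1-type bump

expected = 0.5 * (compliance(a, f + chi) + compliance(a, f - chi))
decomposed = compliance(a, f) + compliance(a, chi)
print(expected, decomposed)   # the two values agree
```

The extra term is exactly the compliance generated by the perturbation alone, which is consistent with the observation below that the optimal density reacts to where χ acts.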
In the figures below we represent the corresponding optimal mass distribution a. We show numerical results for the fully deterministic case and for the two random cases described above, both for the compliance and for the energy minimization. The results are qualitatively different in each case. With respect to the compliance minimization, we observe that the limit densities follow a similar distribution, where the smaller amount of mass is placed in a cross shape. Taking as a reference the picture of Figure 1 (the deterministic case), we observe different densities for the two random cases. For Case 1, where the random perturbation acts in the middle of the square, a bigger amount of mass (the white color in the pictures) is placed there, and the thickness of the cross is bigger at the corners, in order to satisfy the volume constraint (see Figure 2). For Case 2 the effect is the reverse: the random perturbation acts near the boundary, and the optimal distribution consists of a smaller amount of mass in the middle and a bigger concentration near the boundary, with a smaller thickness of the cross (see Figure 3).
For the energy minimization the optimal distribution is quite different, and it is in accordance with previous analyses (see [1], [8]). We take again the deterministic case as the reference design; in this case the simulations show that a bigger amount of mass is placed at the corners and in a square in the middle of the design domain. For Case 1 the optimal distribution is very similar to the deterministic one, the changes corresponding to a bit more mass placed at the corners of the domain. For Case 2 the effect is the reverse: there is less mass at the corners, and almost all the mass is placed in a square in the middle of the domain.
In conclusion, the numerical experiments indicate that under random forces the optimal distribution places a bigger amount of mass where the random force acts, in order to take into account the possibility of a random variation of the load.