Approximate control of parabolic equations with on-off shape controls by Fenchel duality

We consider the internal control of linear parabolic equations through on-off shape controls, i.e., controls of the form M(t)χ_{ω(t)} with M(t) ≥ 0 and ω(t) with a prescribed maximal measure. We establish small-time approximate controllability towards all possible final states allowed by the comparison principle with nonnegative controls. We manage to build controls with constant amplitude M(t) ≡ M. In contrast, if the moving control set ω(t) is confined to evolve in some region of the whole domain, we prove that approximate controllability fails to hold for small times. The method of proof is constructive. Using Fenchel-Rockafellar duality and the bathtub principle, the on-off shape control is obtained as the bang-bang solution of an optimal control problem, which we design by relaxing the constraints. Our optimal control approach is outlined in a rather general form for linear constrained control problems, paving the way for generalisations and applications to other PDEs and constraints.

Both the amplitude and the location may be subject to constraints. This problem is a paradigmatic simplification of many practical situations where one can act on a complex system with on-off devices that can be moved in time, while their shape can also be modified.
Along the introduction, we expose our results for general operators A, while first illustrating them in the case of the controlled linear heat equation with Dirichlet boundary conditions

  y_t − ∆y = u   in Ω,
  y = 0          on ∂Ω,        (1)
  y(0) = y_0     in Ω.

In this setting, Ω is an open connected bounded subset of R^d, with C² boundary, and y_0 ∈ L²(Ω).
Constrained control. In view of applications where unilateral, bilateral, or L^∞ constraints naturally appear, constrained controllability has been an active area of research [1, 9, 36], whether in finite or infinite dimension.
In various contexts, control constraints have been shown to lead to controllability obstructions, even for unilateral constraints. Some states are out of reach, regardless of how large T > 0 may be [39, 43]. On the other hand, some states are reachable, but only for T large enough: constraints may lead to the appearance of a minimal time of controllability [27, 28, 29, 38].
In the case of unilateral constraints for linear control problems in finite dimension, these obstructions can be categorised thanks to Brunovsky's normal form as done in [28], leading to the existence of a positive minimal time. In infinite dimension, however, we are only aware of obstructions based on the comparison principle (see [27] and [39]). The present work uncovers another type of obstruction, already hinted at in [38].

Main results
As our results require different sets of hypotheses and in order to give a quick glance at the main ideas, we first present them in the simplified context of the heat equation (1).
Here, the notation U_+ emphasises that we will always deal with constraints that include the nonnegativity constraint, i.e., sets U_+ such that U_+ ⊂ {u ∈ L²(Ω) : u ≥ 0}. Now, when the control u satisfies u ≥ 0, it follows from the parabolic comparison principle satisfied by the Dirichlet Laplacian [17] that

  ∀t ≥ 0,  y(t) ≥ e^{t∆} y_0,        (2)

where (e^{t∆})_{t≥0} denotes the heat semigroup with Dirichlet boundary conditions. Hence, targets y_f which do not satisfy y_f ≥ e^{T∆} y_0 cannot be reached with nonnegative controls, let alone with on-off shape controls.
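To make the obstruction (2) concrete, here is a small numerical illustration (our own sketch, not part of the paper): an explicit finite-difference scheme for the 1-D heat equation on (0, 1) with Dirichlet boundary conditions, whose update is monotone for dt ≤ dx²/2, so the discrete analogue of the comparison bound y(T) ≥ e^{T∆} y_0 can be checked directly for a nonnegative control.

```python
import numpy as np

# 1-D heat equation y_t - y_xx = u on (0,1), Dirichlet BC,
# explicit Euler scheme; dt <= dx^2/2 makes the update monotone,
# so the discrete comparison principle holds.
N = 50
dx = 1.0 / (N + 1)
dt = 0.4 * dx**2
steps = 2000

x = np.linspace(dx, 1 - dx, N)
y0 = np.sin(np.pi * x)

def evolve(y0, source):
    """Run the explicit scheme with a fixed source term."""
    y = y0.copy()
    for _ in range(steps):
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2
        lap[0] = (y[1] - 2 * y[0]) / dx**2        # Dirichlet: ghost value 0
        lap[-1] = (y[-2] - 2 * y[-1]) / dx**2
        y = y + dt * (lap + source)
    return y

u = np.maximum(0.0, np.cos(3 * x))   # some nonnegative (static) control
y_free = evolve(y0, np.zeros(N))     # uncontrolled solution, ~ e^{T Delta} y0
y_ctrl = evolve(y0, u)

# Discrete analogue of y(T) >= e^{T Delta} y0 for u >= 0:
assert np.all(y_ctrl >= y_free - 1e-10)
```

The inequality holds exactly at the discrete level because the controlled-minus-free difference obeys the same monotone scheme with zero initial datum and nonnegative source.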
Taking into account the obstruction to controllability given by the inequality (2), we adapt the usual definition of approximate controllability to the context of nonnegative controls.
More precisely, we say that system (1) is nonnegatively approximately controllable with controls in U_+ in time T > 0 if, for all ε > 0 and all y_0, y_f ∈ L²(Ω) such that y_f ≥ e^{T∆} y_0, there exists a control u ∈ L²((0, T) × Ω) with values in U_+ such that the corresponding solution to (1) satisfies ‖y(T) − y_f‖_{L²(Ω)} ≤ ε.
On-off shape control. For our first main result, we focus on nonnegative approximate controllability with on-off shape controls: for a fixed L ∈ (0, 1), we consider the constraint set

  U_L^shape := {M χ_ω : M ≥ 0, ω ⊂ Ω measurable, |ω| ≤ L|Ω|}.

Within the above class of on-off shape controls, we establish nonnegative approximate controllability in arbitrary time (see Theorem 3.1 for the precise and general statement), whatever the value of L ∈ (0, 1).
Theorem A. For any L ∈ (0, 1) and T > 0, system (1) is nonnegatively approximately controllable with controls in U_L^shape in time T.
To establish this result, we draw on the Lions strategy of [26], which develops a constructive approach to the approximate controllability of a linear wave equation. The idea is to treat the requirement ‖y(T) − y_f‖_{L²(Ω)} ≤ ε as a constraint. With L_T u := ∫_0^T e^{(T−t)∆} u(t) dt, and since y(T) = L_T u + e^{T∆} y_0, Lions considers the constrained optimal control problem

  π := inf { (1/2)‖u‖²_{L²((0,T)×Ω)} : ‖L_T u − (y_f − e^{T∆} y_0)‖_{L²(Ω)} ≤ ε }.

The infimum satisfies π < +∞ if and only if there exists u ∈ L²((0, T) × Ω) steering y_0 to a closed ε-ball around y_f. To find minimisers, i.e., to build controls, note that

  π = inf_u  F_T(u) + G_{T,ε}(L_T u),

with F_T := (1/2)‖·‖²_{L²((0,T)×Ω)} and G_{T,ε} the indicator function of the closed ball B(y_f − e^{T∆} y_0, ε). From this optimisation problem, one computes its Fenchel dual optimisation problem, which reads

  d := sup_{p_f ∈ L²(Ω)}  −F*_T(L*_T p_f) − G*_{T,ε}(−p_f),

where F*_T (= F_T) and G*_{T,ε} are the Fenchel conjugates of F_T and G_{T,ε}, respectively, and L*_T is the adjoint of the linear bounded operator L_T : L²((0, T) × Ω) → L²(Ω). Recall that for a given p_f ∈ L²(Ω), p = L*_T p_f is the solution to the adjoint equation ending at p_f, i.e., it solves

  ∂_t p + ∆p = 0  in (0, T) × Ω,   p = 0  on (0, T) × ∂Ω,   p(T) = p_f.        (3)

Under suitable conditions, the Fenchel-Rockafellar theorem [42] ensures that π = d. As a result, one can then study the dual functional to establish that π = d < +∞, and that its minimum is attained. Typically, one proves that it is coercive, as a consequence of a unique continuation property. Furthermore, the cost function F_T is differentiable in this case, and the first-order optimality condition for the (unique) variable p⋆_f minimising the dual functional then yields the optimal control u⋆ := L*_T p⋆_f, constructed from the minimiser p⋆_f of the dual problem. Accordingly, in this paper we reframe constrained approximate controllability as an optimal control problem, replacing the cost (1/2)‖u‖²_{L²((0,T)×Ω)} of [26] with a suitable cost functional F_T. This constitutes a novel generalisation of the Lions method.
One can choose between two different sufficient conditions for the equality π = d to hold: one regards the primal problem, the other the dual problem. Importantly, they are not symmetric (although the primal and dual problems are). When imposed on the primal problem, these hypotheses are useless for proving controllability: they amount to assuming that controllability holds. We crucially use these hypotheses on the dual problem instead; see subsection 2.2 and Appendix A.4 for more details.
As the detailed statements in Theorem 3.1 and Proposition 3.8 show:
• Instead of using the L² norm as in the optimal control problem studied in [26], we will consider the cost functional (4), where δ_{u≥0}(u) = 0 if u ≥ 0 and +∞ otherwise, and the supremum over t ∈ [0, T] is the essential supremum.
The rather unusual form of the minimisation criterion (4) is finely designed so as to handle nonnegativity and the other (bound, volume) constraints we are dealing with.
• The optimal controls have constant amplitude in time, i.e., M(t) ≡ M.
• The proof is constructive: the optimal control u⋆ can be computed from a unique dual optimal variable p⋆_f solving the corresponding Fenchel dual problem. This computation generalises what is done in [26] to the broader case of costs that are not differentiable but still convex. More precisely, u⋆ is given by an explicit formula involving a function h : L²(Ω) → R that will be defined in Section 2.3, where p⋆ solves the adjoint equation (3) with p⋆(T) = p⋆_f.
Obstructions to nonnegative controllability. In the spirit of the unconstrained case, one may wonder whether nonnegative approximate controllability can be achieved with controls acting only in some prescribed time-independent subdomain ω. We emphasise that our first result does not a priori prevent the control from visiting the whole domain Ω.
Our second result proves that visiting the whole of Ω is necessary, in the following sense: if the sets ω(t), t ∈ (0, T), do not intersect some fixed open subset of Ω, nonnegative approximate controllability is lost for small times.
Theorem B. Assume that the constraint set U_+ satisfies the following property: there exists a ball B(x, r) ⊂ Ω, with x ∈ Ω and r > 0, on which all controls in U_+ vanish. Then, there exists T⋆ > 0 such that the control system (1) is not nonnegatively approximately controllable with controls in U_+ in any time T ≤ T⋆.
We refer to Theorem 4.1 for the complete statement. Let us mention that obstructions of this type have been reported for similar problems in [38].
Amplitude and time optimal control. In Section 5, we gather several further results regarding the dependence of the amplitude M = M(T, y_0, y_f, ε) on its arguments. Using duality once more, we study its dependence on the final time T.
Focusing on the case y_0 = 0, we then establish an equivalence between the optimal control problem and a related minimal time problem.

General results
Theorems A and B above have been stated for the heat equation with Dirichlet boundary conditions, in order to provide the reader with a quick overview of our main results. In fact, they all hold for more general semigroups, under suitable hypotheses presented hereafter.
The underlying general setting is that of linear control problems of the form

  y′(t) = Ay(t) + u(t),   y(0) = y_0,        (5)

where Ω is an open subset of R^d, and A : D(A) → L²(Ω) is an operator generating a C₀-semigroup (S_t)_{t≥0} on L²(Ω) [15, 37].
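Throughout, solutions of such systems are understood in the mild sense given by Duhamel's formula, which also defines the input-to-state map used below (we record it here for convenience, in the notation of the surrounding text):

```latex
\[
y(t) \;=\; S_t\,y_0 \;+\; \int_0^t S_{t-s}\,u(s)\,\mathrm{d}s,
\qquad
L_T u \;:=\; \int_0^T S_{T-t}\,u(t)\,\mathrm{d}t .
\]
```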
In this more general context, we define nonnegative approximate controllability as follows.
Definition 1.1. Given a constraint set of nonnegative controls U_+ ⊂ L²(Ω), we say that system (5) is nonnegatively approximately controllable with controls in U_+ in time T if, for all ε > 0 and all y_0, y_f ∈ L²(Ω) such that y_f ≥ S_T y_0, there exists a control u ∈ L²((0, T) × Ω) with values in U_+ such that the corresponding solution to (5) satisfies ‖y(T) − y_f‖_{L²(Ω)} ≤ ε.
General hypotheses for Theorem A. We have previously presented Theorem A for the heat equation as a paradigmatic example. Nevertheless, the underlying hypotheses on which some of our proofs rely are much more general in nature; we review them below.
• First, we consider the (unusual) unique-continuation-like property (GUC). This property is satisfied as soon as the three assumptions below hold:
– for y ∈ L²(Ω), S_t y ∈ D(A) for all t > 0 (for instance, this is true if (S_t)_{t≥0} is analytic [37]);
– the only constant function in D(A) is the zero function;¹
– S_t is injective for all t > 0.²
• Second, we will be interested in analytic-hypoellipticity, where analyticity refers to real-analyticity.
• Third, we will say that (S_t)_{t≥0} satisfies the comparison principle if

  ∀y ∈ L²(Ω),  y ≥ 0  ⟹  S_t y ≥ 0  for all t ≥ 0.        (6)

The first two properties are sufficient for the generalisation of Theorem A; see Theorem 3.1. The third will play an important role when it comes to minimal controllability times, and is in line with our definition of nonnegative approximate controllability.
Elliptic operators. As a generalisation of the Dirichlet Laplacian, let us discuss a large class of uniformly elliptic operators that do satisfy these properties and to which our obstruction result Theorem B generalises (see Theorem 4.1). When referring to operators of the form (7), we will always assume that the functions a_{ij}, b_i and c are sufficiently regular, and that the operator is uniformly elliptic, i.e., there exists θ > 0 such that

  Σ_{i,j=1}^d a_{ij}(x) ξ_i ξ_j ≥ θ|ξ|²   for a.e. x ∈ Ω and all ξ ∈ R^d.

The adjoint A* of A is given by the corresponding formal adjoint expression, and we have D(A*) = D(A). Both A and A* satisfy the parabolic comparison principle [17]; hence they satisfy the comparison principle (6). They also satisfy the three conditions sufficient for the (GUC) property to hold.³ Furthermore, both ∂_t − A and ∂_t − A* are analytic-hypoelliptic as soon as all functions a_{ij}, b_i and c are analytic [35].
¹ This is the case for the Dirichlet Laplacian with domain H²(Ω) ∩ H¹₀(Ω).
² This is the case for groups, such as the wave equation, and for parabolic equations, thanks to the parabolic maximum principle. This is also true for analytic semigroups: if S_t y = 0 for some t > 0, then S_s y = 0 for all s ≥ t, and by analyticity S_s y = 0 for all s ≥ 0, which for s = 0 yields y = 0.
³ The analyticity of the semigroup is well known for this class of elliptic operators on open domains with C² boundary. There are clearly no nonzero constant functions in H²(Ω) ∩ H¹₀(Ω). Finally, injectivity follows from the comparison principle (see the above footnote).

Proof strategy and related works
In the unconstrained case, approximate controllability of the heat equation is a consequence of the unique continuation property, thanks to a general property of linear control problems (see for example [13, Section 2.3]). In the case of heat equations, the latter property can be obtained by the Holmgren uniqueness theorem [3]. In contrast to these existence results, the variational approach developed in [26] (see Section 1.2) handles approximate controllability in a constructive manner.
Our strategy consists in extending this approach to the constrained case: the main idea is to find a suitable cost function F_T such that optimal controls must satisfy the constraint u ∈ U_L^shape. A remarkable feature of our strategy lies in how we design the cost function: we do so by building an adequate Fenchel dual function, instead of trying to find the cost function directly.
Constrained controllability. Constrained control problems in infinite dimension have been studied in papers such as [4, 5, 6, 16, 20]. In [16], sufficient conditions (in the form of unique continuation properties) for controllability results are derived when the controls and states are constrained to some prescribed subspaces, but at the expense of controlling only a finite-dimensional part of the final state. In [20], the authors deal with a form of approximate controllability of the heat equation akin to ours, focusing on minimal time problems. They derive bang-bang type necessary optimality conditions for minimal time controls, and then build such controls using an auxiliary optimisation problem.
The papers [4, 5, 6] address constrained exact controllability through modified observability inequalities, thus giving abstract necessary and/or sufficient conditions. One key difference with our work is that their constraint sets are assumed to be convex. In fact, all examples handled in [4, 5, 6] feature isotropic constraints, that is, constraints that are symmetric with respect to 0 or, more generally, are expressed using radial functions (such as norms). This precludes, for instance, any type of positivity constraint.
It is noteworthy that all the above references introduce so-called dual functionals, drawing from the variational formulation of the Hilbert Uniqueness Method. However, the formalism of Fenchel-Rockafellar duality in itself, as developed in [26], has increasingly been abandoned in the literature. Some notable exceptions are [46] in the context of stabilisation and [23] for parameterised problems, both in the unconstrained case. To some extent, the work [5] uses Fenchel duality to study (constrained) null-controllability in some specific settings.
We fully exploit the ideas hinted at in the latter paper by choosing a different type of functional, which allows us to handle anisotropic, non-convex constraints. In contrast with the aforementioned trend in the literature, we work with Fenchel duality, but in a rather unusual way, in that we focus mainly on the dual problem. The nature of the actual primal problem (the optimal control problem) being solved then follows effortlessly. To perform the necessary computations, we make extensive use of convex analysis. Doing so bypasses many technical difficulties, thanks to properties of subdifferentials and Fenchel conjugates, among others, and allows for the use of costs which are not differentiable but still smooth in the convex-analytic sense.
Bathtub principle for appropriate costs. The second main idea underlies our choice of cost function F_T, forcing optimal controls to satisfy the required on-off shape constraint. As the set of on-off shape controls is a non-convex cone, we are led to relaxation, i.e., to consider the closure of its convex hull. In order to build relevant costs, we then rely on the so-called bathtub principle (actually, a relaxed version of it) [25].
For a given function v ∈ L²(Ω), the latter principle solves the maximisation problem

  max { ∫_Ω uv : 0 ≤ u ≤ 1, ∫_Ω u ≤ L|Ω| }.

This optimisation problem comes up naturally in some control problems similar to ours [22, 30], as well as in shape optimisation problems [40].
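On a uniform grid, this relaxed problem becomes a linear program whose solution simply fills the cells where v is largest (and positive), up to the volume budget. The following sketch is our own illustration (the helper name is ours): it computes the maximiser by sorting and checks optimality against random feasible competitors.

```python
import numpy as np

def bathtub_maximiser(v, L):
    """Maximise the discrete analogue of ∫ u v over 0 <= u <= 1
    with volume constraint sum(u) <= L * n.

    The optimum fills the cells with the largest positive values
    of v, up to the budget -- a bang-bang profile."""
    n = len(v)
    budget = int(np.floor(L * n))          # number of cells we may fill
    order = np.argsort(v)[::-1]            # indices by decreasing v
    u = np.zeros(n)
    for i in order[:budget]:
        if v[i] > 0:                       # filling where v < 0 only hurts
            u[i] = 1.0
    return u

rng = np.random.default_rng(0)
v = rng.normal(size=200)
u = bathtub_maximiser(v, L=0.3)

# The maximiser is bang-bang (a characteristic function) ...
assert set(np.unique(u)) <= {0.0, 1.0}
# ... and no feasible competitor does better.
best = u @ v
for _ in range(100):
    w = rng.uniform(size=200)
    w *= min(1.0, 0.3 * 200 / w.sum())    # enforce the volume constraint
    assert w @ v <= best + 1e-9
```

The bang-bang structure of the solution is exactly what the paper exploits: the maximisers of the relaxed problem are characteristic functions as soon as the level sets of v are negligible.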
Interpreting the bathtub principle as a Fenchel conjugate leads us to design the unusual cost functional (4). This allows us to design dual problems such that optimal controls exist and are characterised as maximisers of some bathtub principle. Then, using analyticity properties of solutions of the dual problem, we prove their uniqueness and hence their extremality, thereby uncovering that they are on-off shape controls.
Bang-bang property of optimal controls. Bang-bang controls (i.e., controls that saturate their constraints) are a common feature in time optimal control problems. A growing literature on the heat equation alone [31, 33, 45, 47, 49] shows that this property extends well to some infinite-dimensional systems. In our case, we will see that the on-off shape controls we have constructed can be understood as time-optimal controls. As these controls are bang-bang, this yields another occurrence of the bang-bang property in the time-optimal control of the heat equation.
Note, however, that in the references cited above, the controls are constrained to lie in balls of specific function spaces, whereas we consider nonnegativity constraints on the controls, which are anisotropic. Moreover, the bang-bang property is usually established separately using optimality conditions, after controllability has been established at the onset. In our case, the Fenchel-Rockafellar duality approach allows us to do all of this simultaneously.

Extensions and perspectives
Operator, boundary conditions. The (GUC) property and analytic-hypoellipticity are the two key sufficient properties for nonnegative approximate controllability by on-off shape controls. We have highlighted second-order elliptic operators with analytic coefficients and Dirichlet boundary conditions as an example. Our results apply to such operators with Robin boundary conditions of the form a(x)y + b(x)∂_ν y = 0 over ∂Ω (with a, b analytic) as soon as the function a does not vanish on the whole of ∂Ω (more generally, as soon as a is nontrivial on every connected component of ∂Ω). This excludes the important case of Neumann boundary conditions, which remains open.
Our approach also accommodates subelliptic operators. This includes a large class of Hörmander operators, i.e., operators of the form A = Σ_{i=1}^m X_i² + X_0 + V Id, with vector fields X_1, …, X_m generating a Lie algebra that equals R^d on the whole of Ω. Under general regularity assumptions and boundary conditions, such an operator and its adjoint generate a strongly continuous semigroup on L²(Ω), satisfy the comparison principle [7] and all three conditions sufficient for the (GUC) property, and are analytic-hypoelliptic, for instance, if the characteristic manifold is an analytic symplectic manifold (see [34]).
Finally, going beyond the linear setting is a completely open problem, since our approach fundamentally relies on the Fenchel-Rockafellar theorem which itself requires a bounded linear operator (the role played by L T in our setting).
Control operator. Our results have been stated with the identity control operator; they extend to the nonnegative control of systems with more general control operators. An interesting perspective is to follow our proof strategy with boundary control operators, where on-off shape controls now refer to characteristic functions over the boundary ∂Ω.
Other notions of controllability. In the case of unconstrained controllability with a control acting in some fixed subset ω, any function that can be reached exactly is (at least) analytic in Ω \ ω, preventing exact controllability from holding.
On the one hand, this argument against exact controllability by on-off shape controls fails, since the control may act everywhere. On the other hand, our approach heavily relies on targeting a ball B(y_f, ε) with ε > 0. As a result, exact nonnegative controllability by on-off shape controls is an open and seemingly difficult question.
A related matter is that of the cost of approximate controllability as ε → 0.
Although our focus has been on L²-approximate controllability, we mention that one may extend the same methodology to L^p-approximate controllability for 1 < p < +∞, by working in duality within the appropriate spaces: the bounded operator underlying the Fenchel-Rockafellar duality is now L_T ∈ L(L²(0, T; L^p(Ω)), L^p(Ω)), meaning that the dual functional is defined on L^q(Ω), with q the dual exponent of p.
Controllability in large time. As evidenced by Theorem B, we provide obstructions for small times T. We do not know whether nonnegative approximate controllability holds for sufficiently large times.
Abstract constrained control. The strategy of proof developed in this article hints at generalisations, in which the method is applied to abstract linear control problems with abstract constraint sets U.
In particular, we expect it to lead to necessary and sufficient conditions for controllability when U is convex. When U is not convex, as is the case for on-off shape controls, this requires studying the convex hull of U, following the relaxation approach. This abstract setting should allow us to discern how to design a cost function F_T, analogous to (4), tailored to a given U.
Further sufficient conditions should be derived to ensure that optimal controls in the convex hull of U actually lie in the original constraint set U. In the present work, analytic-hypoellipticity and the (GUC) property play that role in the case of on-off shape controls.
This will be the subject of a forthcoming article.
Regularity of the sets ω(t). Another problem is to analyse the complexity of the sets ω(t) occupied by optimal controls over time. For instance, how smooth (BV regularity, number of connected components, etc.) are the sets ω(t) achieving approximate controllability? In view of applications, these are important issues if the controls are to be implementable in practice. For example, if the sets ω(t) are constrained to depend on a few parameters restricting their geometry, or if they are restricted to rigid motions, controllability is a totally open question.
Homogenisation approach. We acknowledge that a homogenisation approach to establishing nonnegative approximate controllability by on-off shape controls could certainly be pursued. The underlying idea would be to "atomise" the sets ω(t) (see [2]). Contrary to our technique, however, this approach would not be constructive.
Numerical approximation of optimal controls. Optimal controls are given explicitly in terms of optimisers of the dual problem: the constructive nature of our approach means that optimal controls may, at least in principle, be computed numerically.
Providing reliable and efficient methods to compute optimal controls is a difficult issue which has been studied in the case of Lions' cost functional with ε = 0 (i.e., exact controllability) [8,21].Similar results in a generalised setting with our Fenchel-Rockafellar-based approach would be valuable.
Contrary to the case of Lions' cost functional, ad hoc algorithms are required to cope with objective functions that are not necessarily differentiable, as is the case in the present paper. Recent primal-dual algorithms designed for optimisation problems with objective functions of the form F(u) + G(L_T u) are likely to be good candidates [11].
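To indicate what such a primal-dual method looks like on a problem with exactly this F(u) + G(L_T u) structure, here is a toy Chambolle-Pock iteration. This is our own finite-dimensional stand-in, not code from the paper or from [11]: K plays the role of L_T, F is the quadratic Lions cost, G is the indicator of the target ball, and all parameter choices are illustrative.

```python
import numpy as np

# Toy primal problem: min_u 0.5*||u||^2 + indicator(||K u - b|| <= eps),
# mirroring min F(u) + G(L_T u). Chambolle-Pock alternates proximal
# steps on F and on the conjugate G*.
rng = np.random.default_rng(1)
n = 30
K = np.eye(n)                       # stand-in for the (discretised) operator L_T
b = rng.normal(size=n)
eps = 0.5 * np.linalg.norm(b)

def prox_F(x, tau):
    # prox of tau * 0.5*||.||^2
    return x / (1.0 + tau)

def prox_Gstar(y, sigma):
    # Moreau identity: prox_{sigma G*}(y) = y - sigma * proj_ball(y / sigma),
    # where proj_ball is the projection onto the closed ball B(b, eps).
    z = y / sigma
    d = z - b
    nd = np.linalg.norm(d)
    proj = b + d * min(1.0, eps / nd) if nd > 0 else z
    return y - sigma * proj

tau = sigma = 0.9 / np.linalg.norm(K, 2)   # step sizes with tau*sigma*||K||^2 < 1
u = np.zeros(n); p = np.zeros(n); u_bar = u.copy()
for _ in range(20000):
    p = prox_Gstar(p + sigma * K @ u_bar, sigma)
    u_new = prox_F(u - tau * K.T @ p, tau)
    u_bar = 2 * u_new - u
    u = u_new

# With K = I the minimiser is known in closed form: the projection
# of 0 onto the ball B(b, eps).
u_star = b * max(0.0, 1.0 - eps / np.linalg.norm(b))
assert np.linalg.norm(u - u_star) < 5e-2
```

The dual variable p converges to the analogue of the optimal p⋆ in the text, while u is recovered through the proximal step on F, mirroring the relation between optimal controls and dual optimal variables.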
Outline of the paper. First, Section 2 lays out the convex-analytic framework, that of Fenchel-Rockafellar duality, and how it may be applied to constrained approximate controllability. We then introduce the bathtub principle and interpret it in terms of Fenchel conjugation in order to design an optimal control problem relevant to our purposes. Section 3 is dedicated to the proof of our nonnegative approximate controllability result, Theorem 3.1, and Section 4 to that of the obstruction result, Theorem 4.1. Finally, Section 5 gathers our results about further obstructions when the control amplitude is bounded, along with our analysis of the corresponding minimal time control problem.
2 Building the optimal control problem

Convex analytic framework
Let H be a Hilbert space. We let Γ₀(H) be the set of functions from H to ]−∞, +∞] that are convex, lower semicontinuous (abbreviated lsc) and proper (i.e., not identically +∞). For f ∈ Γ₀(H), we let ∂f(x) be its subdifferential at a point x ∈ H. Various common properties of Fenchel conjugates, support functions and subdifferentials are used throughout the article. These are all recalled in Appendix A, where a few additional lemmas are proved.
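For readers less familiar with this toolbox, the defining formula f*(p) = sup_x (⟨p, x⟩ − f(x)) is easy to probe numerically. The snippet below is our own illustration: it checks the classical facts that (1/2)x² is its own conjugate and that the conjugate of an indicator function is the corresponding support function.

```python
import numpy as np

def conjugate(f, p, grid):
    """Numerical Fenchel conjugate: f*(p) = sup_x (p*x - f(x)) over a grid."""
    return max(p * x - f(x) for x in grid)

grid = np.linspace(-10, 10, 40001)

# 1) f(x) = x^2/2 is self-conjugate: f*(p) = p^2/2.
f = lambda x: 0.5 * x**2
for p in (-3.0, 0.0, 1.5):
    assert abs(conjugate(f, p, grid) - 0.5 * p**2) < 1e-3

# 2) The conjugate of the indicator of [-1, 1] is the support function |p|.
delta = lambda x: 0.0 if abs(x) <= 1 else float("inf")
for p in (-2.0, 0.5, 3.0):
    assert abs(conjugate(delta, p, grid) - abs(p)) < 1e-3
```

The second identity is the one-dimensional shadow of the relation between indicator functions of constraint sets and their support functions, used repeatedly in the paper.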

Approximate controllability by Fenchel duality ([26])
Let us explain how the approximate controllability problem is reformulated in the context of Fenchel-Rockafellar duality [42] (see Appendix A.4 for a general presentation), following the strategy introduced by Lions in [26]. We work with the control problem (5), with the control space E := L²((0, T) × Ω) and the state space L²(Ω).
By Duhamel's formula y(T) = S_T y_0 + L_T u, the inclusion y(T) ∈ B(y_f, ε) (where the closed ball of centre y_f and radius ε is taken with respect to the L²(Ω)-norm) can equivalently be written as L_T u ∈ B(y_f − S_T y_0, ε). Given some cost functional F_T : E → [0, +∞] in Γ₀(E), consider the optimal control problem (which we will refer to as the primal problem)

  π := inf_{u ∈ E}  F_T(u) + G_{T,ε}(L_T u),   where   G_{T,ε} := δ_{B(y_f − S_T y_0, ε)}.

Now consider the Fenchel dual to the above problem, which writes

  d := sup_{p_f ∈ L²(Ω)}  −F*_T(L*_T p_f) − G*_{T,ε}(−p_f).        (8)

Thanks to the formulae for conjugates, we find that G*_{T,ε} is the support function of the ball B(y_f − S_T y_0, ε), and that p = L*_T p_f solves the adjoint equation

  p′(t) = −A* p(t),   p(T) = p_f.        (9)

Strong duality. The weak duality π ≥ d always holds. According to the Fenchel-Rockafellar duality theorem recalled in Appendix A.4, strong duality holds as soon as there exists a point of continuity of the form L*_T p_f for F*_T. In the cases covered here, we shall check that the chosen F*_T is continuous at 0. When strong duality holds, it is therefore equivalent to work with the dual problem, which is easier to handle, especially when it has full domain, i.e., when its objective function is finite everywhere.
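Writing the support function of the target ball explicitly (a standard computation, recorded here for the reader's convenience), the dual problem takes the unconstrained infimum form used throughout:

```latex
\[
J_{T,\varepsilon}(p_f) \;=\; F_T^*\!\big(L_T^* p_f\big)
\;+\; \varepsilon\,\|p_f\|_{L^2(\Omega)}
\;-\; \big\langle p_f,\; y_f - S_T y_0 \big\rangle_{L^2(\Omega)},
\qquad
d \;=\; -\inf_{p_f \in L^2(\Omega)} J_{T,\varepsilon}(p_f).
\]
```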
We note that an alternative way to establish strong duality is to find u ∈ E such that G_{T,ε} is continuous at L_T u and F_T(u) < +∞. This approach is bound to fail here, since it would require finding a control achieving the target ball, i.e., assuming that controllability holds.
Non-trivial strong duality. Furthermore, the primal value π is attained if it is finite, i.e., if the equality π = d is not the trivial +∞ = +∞ (the uncontrollable case). Thus, if d is finite, π is finite as well and attained: we may then speak of optimal controls. This requirement that d be finite is by far the subtlest one. It may be tackled by proving that the functional J_{T,ε} underlying the dual problem (written in infimum form inf_{p_f ∈ L²(Ω)} J_{T,ε}(p_f)) has a minimum. In practice, we will always find this to be the case, as the dual problem is usually unconstrained (depending on the choice of F_T), unlike the primal problem. Hence, both π and d will be attained and, from Proposition A.8, for any optimal dual variable p⋆_f, any optimal control u⋆ satisfies

  u⋆ ∈ ∂F*_T(L*_T p⋆_f).        (10)

Proposition 2.1. Assume that, for any y_0, y_f ∈ L²(Ω) such that y_f ≥ S_T y_0 and any ε > 0, strong duality holds and the dual functional J_{T,ε} attains its minimum. If, for any dual optimal variable p⋆_f, the controls characterised by (10) are in U_+, then the control system (5) is nonnegatively approximately controllable with controls in U_+ in time T.

This shows how the choice of the cost F_T impacts the existence and properties of optimal controls. More precisely, it must be pointed out that all the hypotheses of Proposition 2.1 are formulated with respect to the dual problem. Accordingly, from the next section onwards, our strategy will be to determine an adequate optimal control problem by designing its dual problem.
Finally, we emphasise that (10) is only a necessary condition for the optimality of u⋆. It becomes sufficient only when ∂F*_T(L*_T p⋆_f) is reduced to a singleton, which will occur in our case.

Convex analytic interpretation of the bathtub principle
Starting from the set of on-off shape controls of amplitude 1,

  {χ_ω : ω ⊂ Ω measurable, |ω| ≤ L|Ω|},

where |·| denotes the Lebesgue measure, we define the closure of its convex hull (which is also its weak-* closure for the L^∞(Ω)-topology)

  U_L := {u ∈ L^∞(Ω) : 0 ≤ u ≤ 1, ∫_Ω u ≤ L|Ω|}.

Given a fixed v ∈ L²(Ω), we consider the (static) maximisation problem

  max_{u ∈ U_L} ⟨u, v⟩_{L²(Ω)}.        (13)

This is a relaxed version of the so-called bathtub principle, which gives the maximum value as well as a characterisation of maximisers [25]. For the sake of readability, we introduce here the results needed for what follows, but refer to Appendix B for a more detailed statement. For a given v ∈ L²(Ω), we let Φ_v be its distribution function, Φ_v^{-1} the associated pseudo-inverse function, and we set

  h(v) := Φ_v^{-1}(L|Ω|).        (16)

Remark 2.2. The function Φ_v^{-1} is the Schwarz rearrangement of v, see [19].

Lemma 2.3 (relaxed bathtub principle). Let v ∈ L²(Ω). The maximum in (13) is attained. Furthermore, if all the level sets of the function v have measure zero, the maximum equals ∫_{{v>h(v)}} v and is uniquely attained by χ_{{v>h(v)}}.

We refer to Lemma B.2 for the comprehensive statement of the relaxed bathtub principle. We may interpret the above results as a formula for the support function of U_L in L²(Ω): the value of (13) is σ_{U_L}(v). First, using the characterisation of the subdifferential given in Appendix A, we arrive at the following characterisation of the solutions to the maximisation problem given in Lemma 2.3: the maximisers of the relaxed bathtub problem are given by the elements of ∂σ_{U_L}(v).
Proposition A.4 in Appendix A.2 shows that this implies u ∈ ∂U_L, the boundary of U_L. Lemma 2.3 and Proposition 2.4 characterise exactly which elements of the boundary ∂U_L are involved.

2.4 From the static bathtub principle to the dual problem and its corresponding cost

Following Section 2.2, and recalling Proposition 2.1 and (10), we are looking for a cost function F_T such that the corresponding optimal controls are on-off shape controls, and we have established that it suffices to find a conjugate functional F*_T satisfying two key properties. First, if there exists p_f ∈ L²(Ω) such that F*_T is continuous at L*_T p_f, and if we can provide the existence of a minimiser p⋆_f of J_{T,ε}, then π is attained and there exists at least one optimal control. Second, any optimal control u⋆ should satisfy (10), so F*_T should be chosen so that the subdifferential ∂F*_T(L*_T p⋆_f) contains only characteristic functions. Given Proposition A.4 and Section 2.3, elements of ∂σ_{U_L}(v) are bang-bang, in the sense that they are characteristic functions, under some mild conditions on v.
To go from the static optimisation problem to the adequate dual problem, we add a time dependency. Moreover, to ensure coercivity of the dual problem, we add a quadratic exponent. All in all, we choose the conjugate F*_T defined in (18). Since the approximate controllability problem corresponds to G*_{T,ε} := σ_{B(y_f − S_T y_0, ε)}, this defines a dual problem of the form (8). As pointed out in Section 2.2, we are now dealing with an unconstrained optimisation problem (i.e., the domain of the functions involved is the whole space L²(Ω)).
We can now derive the corresponding constrained optimisation problem by computing the actual cost F_T associated with the choice (18) for F*_T. We find, as announced by (4) in the introduction:

Lemma 2.6. The function F*_T defined by (18) satisfies (F*_T)* = F_T, with F_T given by (4).

Proof. Lemma A.5 in Appendix A.3 shows that F*_T ∈ Γ₀(E). We proceed by computing (F*_T)*. The definition of the support function together with Lemma A.5 in the Appendix shows that H ∈ Γ₀(E). Furthermore, we find the conjugate of (1/2)H² by using (45) in Appendix A.1, where we use that dom(H) = E. We end up with the announced formula, and the lemma is proved.
We end this subsection by establishing a crucial property satisfied by $F_T^*$. It will play a key role in proving that the dual functional is coercive, akin to that of the unique continuation property in the Lions strategy described in Section 1.2.

Lemma 2.7. For all

Proof. It is easily seen that $\sigma_{U_L} \ge 0$ and, for $v \in L^2(\Omega)$, $\sigma_{U_L}(v) > 0$ as soon as $v > 0$ on a set of positive measure, by taking the scalar product of $v$ against a well-chosen element of $U_L$. If $F_T^*(L_T^* p_f) = 0$ then, as $\sigma_{U_L} \ge 0$, for almost every $t \in (0,T)$, $\sigma_{U_L}(L_T^* p_f(t,\cdot)) = 0$. Thus $L_T^* p_f(t,\cdot) \le 0$. In particular, since $L_T^* p_f \in C([0,T]; L^2(\Omega))$, this implies that $p_f \le 0$.

Approximate controllability results
In this section, we state and prove our main result on approximate controllability. The full statement of our Theorem A is given in more detail below, for general linear operators satisfying the properties given in Section 1.3. We consider the following optimal control problem: whose dual problem is

Theorem 3.1. Assume that $A^*$ satisfies the (GUC) property and that $\partial_t - A^*$ is analytic-hypoelliptic. Then, for the cost function $F_T$ defined by (4):
• the strong duality $\pi = d$ holds,
• the dual problem (20) is attained at a unique minimiser $p_f^\star \in L^2(\Omega)$,
• there exists a unique optimal control $u^\star \in E$ for the primal problem (19).
Furthermore, the optimal control is given by where $h$ is defined by (16), and where $p^\star = L_T^* p_f^\star$ is the solution of the adjoint equation (9) satisfying $p^\star(T) = p_f^\star$.
Remark 3.2. In fact, if $\varepsilon > 0$ is such that $y_f \in B(S_T y_0, \varepsilon)$, we prove that $p_f^\star = 0$ and the formula above returns $u^\star = 0$, which obviously does steer the system to the target ball.

Throughout this section, we assume the hypotheses of Theorem 3.1, i.e., that $A^*$ satisfies the (GUC) property and that $\partial_t - A^*$ is analytic-hypoelliptic. The proof is then spread over the section as follows:
• First, we establish that strong duality holds.
• Second, we prove that the corresponding dual functional is coercive; hence, the dual functional attains its minimum (the dual problem attains its maximum).
• Finally, we investigate the uniqueness of optimal variables.

Remark 3.4. As the proofs show, the first two steps and the uniqueness of dual optimal variables are valid for any operator $A$. In particular, they do not require that $A^*$ satisfy the (GUC) property, nor that $\partial_t - A^*$ be analytic-hypoelliptic. Hence, strong duality and the existence of optimal controls do not require any specific assumption on the semigroup. This remark will be of importance in the next subsection, where we manipulate optimal controls without making these two hypotheses.
Proof. By the Cauchy–Schwarz inequality, which leads to As a result, we may bound with the Cauchy–Schwarz inequality again, hence the continuity of $F_T^*$ at $0 = L_T^* 0$.
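The Cauchy–Schwarz step can be made explicit. Assuming that elements of $U_L$ take values in $[0,1]$ and are supported on a set of measure at most $L$ (an assumption on the exact definition of $U_L$, fixed earlier in the paper), one gets, for any $v \in L^2(\Omega)$:

```latex
\sigma_{U_L}(v) \;=\; \sup_{u \in U_L} \int_\Omega v\,u
\;\le\; \int_\Omega |v|
\;\le\; |\Omega|^{1/2}\, \|v\|_{L^2(\Omega)} .
```

This is the bound $0 \le \sigma_{U_L}(p) \le |\Omega|^{1/2}\,\|p\|_{L^2}$ quoted again in Section 5; since the admissible $u$ are dominated by $\chi_\omega$ with $|\omega| \le L$, the constant can even be sharpened to $L^{1/2}$.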
The above lemma shows that the first condition of Proposition 2.1 is satisfied, i.e., strong duality holds.

Coercivity of J T,ε , nonnegative approximate controllability
Proposition 3.6. The functional $J_{T,\varepsilon}$ defined by (22) is coercive, and thus attains its minimum.
Proof. Since we know that $J_{T,\varepsilon}$ is convex, proper and strongly lsc, if $J_{T,\varepsilon}$ is coercive then its infimum is actually attained. We will in fact prove a condition stronger than coercivity, namely lim inf Our method of proof follows that of [20,32]. Take a sequence $\|p_f^n\|_{L^2} \to \infty$ and set $q_f^n := p_f^n / \|p_f^n\|_{L^2}$. By positive homogeneity of $F_T^*$ (of degree 2), we have Let us now treat the remaining case where $\liminf_{n\to\infty} F_T^*(L_T^* q_f^n) = 0$. Since $\|q_f^n\|_{L^2} = 1$, upon extraction of a subsequence, we have $q_f^n \rightharpoonup q_f$ weakly in $L^2(\Omega)$ for some $q_f \in L^2(\Omega)$. Since $L_T^* \in \mathcal{L}(L^2(\Omega), E)$, we have $L_T^* q_f^n \rightharpoonup L_T^* q_f$ weakly in $E$. Since $F_T^*$ is convex and strongly lsc on $E$, it is (sequentially) weakly lsc, and taking the limit we obtain $F_T^*(L_T^* q_f) = 0$. By Lemma 2.7, we infer that $q_f \le 0$. Then, recalling that the target satisfies $y_f - S_T y_0 \ge 0 \Leftrightarrow y_f \ge S_T y_0$, we end up with which concludes the proof.
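The dichotomy in this proof can be sketched explicitly, under the assumption (not guaranteed by this excerpt) that $J_{T,\varepsilon}$ has the standard HUM-type form $J_{T,\varepsilon}(p_f) = F_T^*(L_T^* p_f) + \varepsilon\|p_f\|_{L^2} - \langle p_f,\, y_f - S_T y_0\rangle_{L^2}$. Writing $p_f^n = \|p_f^n\|_{L^2}\, q_f^n$ and using degree-2 positive homogeneity of $F_T^*$:

```latex
\frac{J_{T,\varepsilon}(p_f^n)}{\|p_f^n\|_{L^2}}
  \;=\; \|p_f^n\|_{L^2}\, F_T^*\!\big(L_T^* q_f^n\big)
        \;+\; \varepsilon
        \;-\; \big\langle q_f^n,\; y_f - S_T y_0 \big\rangle_{L^2} .
```

If $\liminf_n F_T^*(L_T^* q_f^n) > 0$, the first term blows up; otherwise the weak limit satisfies $q_f \le 0$, so $-\langle q_f,\, y_f - S_T y_0\rangle_{L^2} \ge 0$ and the lim inf is at least $\varepsilon > 0$ in either case.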

Characterisation of the minimisers
In this subsection and the subsequent one, it will be convenient to distinguish cases depending on the assumption in which case the target is not reached by the trivial control $u = 0$. Note that, if (23) is not satisfied, the control $u = 0$ steers $y_0$ to the target, and is indeed a control in $U^L_{\rm shape}$. We first remark the following fact:

Lemma 3.7. Assumption (23) holds if and only if any minimiser $p_f^\star$ of (20) satisfies $p_f^\star \neq 0$.
Proposition 3.8. Any optimal control for (19) is of the form (21), where $p^\star$ denotes the solution of the adjoint equation (9) such that $p^\star(T) = p_f^\star$, and $p_f^\star$ is any dual optimal variable.

Proof. Let $u^\star$ be an optimal control. Thanks to Proposition 3.6, we know that $J_{T,\varepsilon}$ defined by (20) attains its minimum. Thanks to Lemma 3.5, we can apply the first identity of (51) in Proposition A.8 (see Appendix A.4) to obtain $u^\star \in \partial F_T^*(L_T^* p_f^\star)$, where $p_f^\star$ is any minimiser of $J_{T,\varepsilon}$, i.e., an optimal dual variable.
We denote by $p^\star$ the solution of the adjoint equation (9) such that $p^\star(T) = p_f^\star$. Using again the notation $H(p) :=$ we have $H(p^\star) \ge 0$ and $\mathrm{dom}(H) = L^2(\Omega)$. Then, applying the generalised chain rule (see [12, Theorem 2.3.9, point (ii)]) to the functions $x \mapsto \frac12 x^2$ and $H$, we compute the subdifferential of the convex functional Let us first assume that Assumption (23) holds. From Lemma 3.7, we have $p_f^\star \neq 0$. We now let $t \in (0,T)$ be fixed and justify that all level sets of $p^\star(t,\cdot)$ are of measure zero, i.e., Indeed, since the operator $\partial_t - A^*$ is analytic-hypoelliptic, we know that $p^\star(t,\cdot)$ is analytic on $\Omega$. Hence, its level sets are of measure zero unless $p^\star(t,\cdot) = S^*_{T-t} p_f^\star$ is constant. Using the (GUC) property, this would lead to $p_f^\star = 0$, contradicting (23). Applying Propositions 2.3 and 2.4, and recalling that $\partial\sigma_{U_L}(p^\star(t,\cdot)) = \{\chi_{\{p^\star(t,\cdot) > h(p^\star(t,\cdot))\}}\}$, we obtain the result. Now assume that Assumption (23) does not hold. Then $p_f^\star = 0$ is optimal and, using the above notations for this specific dual optimal variable, we have $p^\star = 0$ and $M = 0$, hence any optimal control satisfies $u^\star = 0$, which is of the form (21).

Remark 3.9. As evidenced by the proof, a property weaker (but less workable) than analytic-hypoellipticity is sufficient to infer that optimal controls are on-off shape controls. Indeed, it suffices to require any one of the following conditions (in decreasing order of strength):
(i) All solutions $t \mapsto p(t)$ of the adjoint equation such that $p(T) \neq 0$ have zero-measure level sets.
(iii) For all solutions $t \mapsto p(t)$ of the adjoint equation such that $p(T) \neq 0$, Note that requirement (iii) is minimal (see Lemma B.2 and Remark B.3). Finally, an even weaker requirement would be to restrict any of the above (i), (ii) or (iii) to the single solution $t \mapsto p^\star(t)$ of the adjoint equation with $p^\star(T) = p_f^\star$, where $p_f^\star$ is the unique dual optimal variable (see below for the uniqueness of optimal variables).

Uniqueness
Our first uniqueness statement below (i.e., that of the dual optimal variable) is a consequence of Fenchel–Rockafellar duality and of the fact that we work in a Hilbert space, rather than of specific properties of the evolution equation under consideration.

Remark 3.10. Still applying Proposition A.8, we get Using the Legendre–Fenchel identity (47), we get $-p_f^\star \in \partial\delta_{B(y_f - S_T y_0, \varepsilon)}(L_T u^\star)$. Thanks to Proposition A.4, this means that $L_T u^\star$ lies at the boundary of the closed ball $B(y_f - S_T y_0, \varepsilon)$.

Proposition 3.11. Under the assumptions of Theorem 3.1, the primal–dual optimal pair $(u^\star, p_f^\star)$ is unique.
Proof. Uniqueness of the dual optimal variable. First note that if Assumption (23) does not hold, then 0 is the unique optimal control, i.e., On the other hand, if Assumption (23) holds, then according to Remark 3.10, and since the set of minimisers of a convex function is convex, the set $\{L_T u^\star : u^\star \text{ optimal}\}$ is a convex subset of the sphere $S(y_f - S_T y_0, \varepsilon)$. The closed ball being strictly convex, since we are working in the Hilbert space $L^2(\Omega)$, this convex subset reduces to a single point. Thus, in any case, the set of targets reached by optimal controls is always reduced to a single point. Now, let $p_f^\star$ be a dual optimal variable and $u^\star$ an optimal control. Then, as strong duality holds, Proposition A.7 implies that the pair $(u^\star, p_f^\star)$ satisfies the two optimality conditions from (25). We then have If Assumption (23) does not hold, then (26) and (24) Otherwise, $0 \in \partial B(y_f - S_T y_0, \varepsilon)$ and (48) yield Restricting the functional $J_{T,\varepsilon}$ defining the dual problem (20) to the above half-line, using the homogeneities of each of its terms, and the fact that It is clear that 0 is the unique minimiser of $\gamma_0$. From (27) and (28), 0 is the unique dual optimal variable if (23) does not hold.
If Assumption (23) holds, then (26) and (25) imply Since $y^\star$ lies at the boundary of $B(y_f - S_T y_0, \varepsilon)$, formula (48) yields Restricting $J_{T,\varepsilon}$ to the above half-line as previously, we find where, using By coercivity, $a > 0$, and given Lemma 3.7, we have $b < 0$. Thus, $\gamma$ has a unique minimiser $\lambda$, and the dual optimal variable is unique.
Uniqueness of the optimal control. If Assumption (23) does not hold, then 0 is the unique optimal control. Now, suppose that Assumption (23) holds. We know from the proof of Proposition 3.8 that a given dual optimal variable uniquely determines one optimal control. Moreover, as we have proved that strong duality holds, we can apply Proposition A.7: for any pair of primal and dual optimal variables, the relations (49) are satisfied. That is, any optimal control $u^\star$ is uniquely determined by the unique dual optimal variable $p_f^\star$ through the identity
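The strict-convexity argument used above can be illustrated numerically: in a Hilbert space, the midpoint of two distinct points on a sphere lies strictly inside the ball (by the parallelogram law), so a convex set contained in the sphere cannot contain two distinct points. A minimal sketch in $\mathbb{R}^n$, standing in for $L^2(\Omega)$:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.5                                  # radius of the target ball

def on_sphere(x, radius, tol=1e-12):
    return abs(np.linalg.norm(x) - radius) < tol

# Two distinct points on the sphere of radius eps.
a = rng.normal(size=8); a *= eps / np.linalg.norm(a)
b = rng.normal(size=8); b *= eps / np.linalg.norm(b)
assert on_sphere(a, eps) and on_sphere(b, eps)

# Parallelogram law: ||(a+b)/2||^2 = (||a||^2 + ||b||^2)/2 - ||a-b||^2/4,
# so the midpoint is strictly inside the ball unless a == b.
mid = 0.5 * (a + b)
assert np.linalg.norm(mid) < eps
```

Hence a convex subset of the sphere containing two distinct points would also contain their midpoint, which is impossible; this is exactly why the set of targets reached by optimal controls is a singleton.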

Obstructions to controllability
Here we prove Theorem B, through the more general result below, in the case of second-order uniformly elliptic operators of the form (7). We use the notation $A \subset\subset B$ to mean that there exists a compact set $K$ such that $A \subset K \subset B$.
Theorem 4.1. Let $A$ be a second-order uniformly elliptic operator of the form (7). Let $y_0 = 0$ and let $y_f \in L^2(\Omega)$ be any target such that $y_f \ge S_T y_0 = 0$, $y_f \neq 0$ and $\mathrm{supp}(y_f) \subset B(x, r)$. Then there exist $T^\star > 0$ and $\varepsilon > 0$ such that, for any time $T \le T^\star$, no control with values in $U^+$ can steer 0 to $B(y_f, \varepsilon)$.
The proof relies on the following lemma, inspired by [38].
To conclude the proof, we fix any $T \le T^\star$ and let $p$ be the solution of (9) on $[0,T]$ with $p(T) = p_f$. Then, for all $0 \le t \le T$, $p(t) = q(T-t)$, hence $p(t,\cdot) \ge 0$ on $\Omega \setminus B(x,r)$ for all $0 \le t \le T$.
Proof of Theorem 4.1. Upon reducing $r$, we may assume without loss of generality that $B(x,r) \subset\subset \Omega$. Letting $K := \mathrm{supp}(y_f)$, we consider $p_f$ and $T^\star$ as given by Lemma 4.2.
Let $T \le T^\star$ be fixed. For any control $u \in E$, any $y_0, y_f \in L^2(\Omega)$ and any solution $p$ to the adjoint equation (9) such that $p(T) = p_f$, we have $\frac{d}{dt}\langle y(t), p(t)\rangle_{L^2} = \langle p(t), u(t)\rangle_{L^2}$. As a result, and owing to $y_0 = 0$, We now assume by contradiction that, for any $\varepsilon > 0$, there exists a nonnegative control $u_\varepsilon \in E$ satisfying $\forall t \in (0,T)$, $\mathrm{supp}(u_\varepsilon(t)) \cap B(x,r) = \emptyset$, and steering $y_0 = 0$ to the ball $B(y_f, \varepsilon)$ in time $T$. We inspect the sign of the equality (29) along the controls $u_\varepsilon$, $\varepsilon > 0$.
On the one hand, because of condition (ii) in Lemma 4.2 satisfied by $p$, and owing to $u_\varepsilon \ge 0$, the right-hand side of (29) is nonnegative, i.e., On the other hand, the left-hand side of (29) satisfies Now, $\langle y_f, p_f\rangle_{L^2} < 0$, because of (i) in Lemma 4.2. As a result, there exists $\alpha > 0$ such that $p_f \le -\alpha$ on $K$, so that because $y_f$ is nonnegative and nontrivial on $K$ by assumption. Hence, for $\varepsilon > 0$ small enough, $\langle y(T), p_f\rangle_{L^2} < 0$, which contradicts (30).
Remark 4.3. As the proof shows, the obstruction to nonnegative approximate controllability in $U^+$ does not rely on the comparison principle, but is of a dual nature. As evidenced by the proof above, the core idea is to construct $p_f$ and $y_f$ such that the equality (29) prevents $y(T)$ from being close to $y_f$. The proof of Theorem 4.1 follows directly from the existence of $p_f$ satisfying the assumptions of Lemma 4.2. Hence, this obstruction to nonnegative approximate controllability is rather general and will hold for any operator (including second-order uniformly elliptic operators of the form (7)) for which such an element $p_f$ can be built.
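The duality identity $\frac{d}{dt}\langle y(t), p(t)\rangle = \langle p(t), u(t)\rangle$ underlying (29) can be checked in a finite-dimensional surrogate, with a matrix $A$ standing in for the elliptic operator, $y' = Ay + u$ and the adjoint $p' = -A^\top p$. A minimal sketch (all sizes and data are illustrative); with the explicit Euler pairing below, the discrete identity is exact up to floating-point rounding, not just $O(\Delta t)$:

```python
import numpy as np

# Check: <y(T), p_f> = \int_0^T <p(t), u(t)> dt  for y(0) = 0, p(T) = p_f.
rng = np.random.default_rng(2)
n, T, steps = 5, 1.0, 1000
dt = T / steps

A = rng.normal(size=(n, n))       # stands in for the elliptic operator
p_f = rng.normal(size=n)
u = rng.normal(size=n)            # a constant-in-time control, for simplicity

p = np.empty((steps + 1, n))
p[steps] = p_f
for k in range(steps, 0, -1):     # adjoint, integrated backwards from p(T)
    p[k - 1] = p[k] + dt * (A.T @ p[k])

y = np.zeros(n)                   # state, integrated forwards from y(0) = 0
duality_integral = 0.0
for k in range(steps):
    duality_integral += dt * (p[k + 1] @ u)
    y = y + dt * (A @ y + u)

assert abs(y @ p_f - duality_integral) < 1e-9 * (1.0 + abs(duality_integral))
```

The obstruction argument then only uses the sign of each side: if $p \ge 0$ where the nonnegative control lives, the integral is nonnegative, while $\langle y_f, p_f\rangle < 0$ forces a contradiction for small $\varepsilon$.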
5 Further comments

Properties of the value function in the general case
For general linear operators generating a $C_0$ semigroup, fixing $\Omega$, $L$, $\varepsilon$, $y_0$ and $y_f$, we analyse the dependence on the final time $T$ of the optimal control problem (19), studied in Section 3 for system (5). By Lemma 3.5 and Proposition 3.6, the optimal control problem (19) is well posed, i.e., optimal controls exist (see also Remark 3.4), hence we may consider When $A^*$ satisfies the (GUC) property and $\partial_t - A^*$ is analytic-hypoelliptic, $M(T)$ is the amplitude of the unique optimal control of Proposition 3.8.
Recall that, by strong duality, we have where $p_T^\star$ is the unique minimiser of $J_{T,\varepsilon}$. This is exactly the identity obtained for the HUM method, where the cost functional $F_T$ is just $\frac12 \|\cdot\|_E^2$. We first establish the continuity of $T \mapsto M(T)$.
Proof. Using (32), we prove the continuity by showing that $(p_f, T) \mapsto J_{T,\varepsilon}(p_f)$ (given by (22)) satisfies the assumptions of Lemma A.9 with $H = L^2(\Omega)$ and $Z = (0, +\infty)$. Clearly, the first, second and fourth assumptions are satisfied, hence we are left with proving that $(p_f, T) \mapsto J_{T,\varepsilon}(p_f)$ is weak-strong lower semicontinuous over $L^2(\Omega) \times (0, +\infty)$. The last two terms of (22) are easily seen to be weak-strong lower semicontinuous over $L^2(\Omega) \times (0, +\infty)$, hence we investigate this property for the remaining term $F_T^*(L_T^* p_f)$. Given $p_f \in L^2(\Omega)$ and $T > 0$, let $(p_f^n)$ and $(T_n)$ be two sequences such that $p_f^n \rightharpoonup p_f$ and $T_n \to T$. We decompose By weak (sequential) lower semicontinuity of $F_T^*$ over $L^2(0,T;L^2(\Omega))$, we find that the first term satisfies To conclude, we only need to prove that the second term tends to 0 as $n \to +\infty$.
Using the notation $q^n$ for the solution to the forward adjoint problem such that $q^n(0) = p_f^n$, i.e., $q^n(t) = S_t^* p_f^n$, we have Using the bound $0 \le \sigma_{U_L}(p) \le |\Omega|^{1/2} \|p\|_{L^2}$ (see the proof of Lemma 3.5) and the estimate $\|S_t\|_{\mathcal{L}(L^2(\Omega))} \le C$, valid for all $t \in [0, T+1]$ with $C > 0$ a constant independent of $n$, we obtain a bounded quantity which tends to 0 as $n \to +\infty$.
We now study the behaviour of $M(T)$ near $T = 0$ and $T = +\infty$. We recall that $M(T)$ also depends on all the other parameters $y_0$, $y_f$, $\varepsilon$ and $L$.
We now recall (see [37]) that there exist $C_s > 0$ and $\alpha \in \mathbb{R}$ such that, for all $t \ge 0$, $\|S_t\|_{\mathcal{L}(L^2(\Omega))} \le C_s e^{\alpha t}$; the semigroup generated by $(A, D(A))$ is said to be exponentially stable if $\alpha < 0$.

Proposition 5.2. We have

Proof. Let $u_T^\star$ be an optimal control in time $T$ for the optimal control problem (31); then Now, by definition of our control problem, for all $T > 0$, , and the result follows.
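Exponential stability is easy to observe for the Dirichlet heat semigroup, which is the model case of the paper. A minimal finite-difference sketch on $(0,1)$ (the discretisation is illustrative): since the discrete Laplacian $A$ is symmetric negative definite, $\|S_t\| = \|e^{tA}\| = e^{-t\lambda_{\min}}$ with $\lambda_{\min} > 0$ the smallest eigenvalue of $-A$, i.e. $\alpha = -\lambda_{\min} < 0$, and $\lambda_{\min} \approx \pi^2$, the first Dirichlet eigenvalue on $(0,1)$.

```python
import numpy as np

# 1D Dirichlet Laplacian by centred finite differences on (0, 1).
N = 200
h = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

lam, V = np.linalg.eigh(-A)                  # eigenvalues of -A, all positive
lam_min = lam[0]

t = 0.1
S_t = V @ np.diag(np.exp(-t * lam)) @ V.T    # e^{tA} by spectral calculus
op_norm = np.linalg.norm(S_t, 2)             # largest singular value

# ||S_t|| decays like e^{-t lam_min}, with lam_min close to pi^2.
assert abs(op_norm - np.exp(-t * lam_min)) < 1e-10
assert abs(lam_min - np.pi**2) < 0.1
```

This is the quantitative content of the bound $\|S_t\| \le C_s e^{\alpha t}$ in this model case, with $C_s = 1$ and $\alpha = -\lambda_{\min}$.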

Obstructions
We further investigate the behaviour of $M$ and establish results on the corresponding minimal time problem (37). The comparison principle formulated in (6) will be a key ingredient in our study.

Obstruction to reachability and small-time controllability
Given the controllability result of Theorem 3.1, in order to study possible obstructions, we introduce a new bound on the amplitude of the control, of the form: for some $M_{\max} > 0$. Note that such a constraint imposes nonnegativity of the control. With this new constraint on the controls, we illustrate a general property that is well known for finite-dimensional systems: exponential stability prevents reachability.
In particular, the result below holds for uniformly elliptic operators of the form (7) whose zeroth-order coefficient satisfies $c \le 0$.

Proposition 5.4. Assume that $(S_t)_{t \ge 0}$ is exponentially stable. Let $(y_0, y_f)$ be such that, for all $T \ge 0$, $y_f \ge S_T y_0$ and $\|S_T y_0 - y_f\|_{L^2} \ge \delta$ for some $\delta > 0$. Then, for all $0 < \varepsilon < \delta$, there exists $\overline{M}_{\max}(y_0, y_f, \varepsilon) > 0$ such that:
• if $M_{\max} > \overline{M}_{\max}(y_0, y_f, \varepsilon)$, there exist a time $T > 0$ and a control $u \in E$ satisfying (36) and steering $y_0$ to $B(y_f, \varepsilon)$ in time $T$; if $A^*$ satisfies the (GUC) property and $\partial_t - A^*$ is analytic-hypoelliptic, the control may be chosen in $U^L_{\rm shape}$;
• if $M_{\max} < \overline{M}_{\max}(y_0, y_f, \varepsilon)$, no such control exists.
Moreover, for all M max > 0, the control system (5) is not nonnegatively approximately controllable with controls in {M (u) ≤ M max } in any time T > 0.
Proof. Given Corollary 5.3, the function $M(T)$ tends to $+\infty$ as $T \to 0$, is bounded away from 0 at infinity, and does not vanish on the interval $(0, +\infty)$. Since it is continuous, we may define and the first two claims follow. When $A^*$ satisfies the (GUC) property and $\partial_t - A^*$ is analytic-hypoelliptic, the control may be chosen in $U^L_{\rm shape}$ by Theorem 3.1. Then, let $M_{\max} > 0$. Taking It follows from the second claim that $y_0$ cannot be steered to $y_f$ in any time $T > 0$ with a control $u$ such that $M(u) \le M_{\max}$. Thus, system (5) is not nonnegatively approximately controllable with such controls in any time $T > 0$.

Characterisation of minimal time controls
Throughout this section, we let $\varepsilon > 0$ and $y_f \in L^2(\Omega)$, we assume that (23) holds, and we let $y_0 = 0$. Hence we must have $\|y_f\|_{L^2} > \varepsilon$, and condition (23) is independent of $T$. Finally, $y_f \ge S_T y_0$ here simply amounts to $y_f \ge 0$.
Given the obstruction result of Proposition 5.4, we consider the minimal time control problem: From our study of the optimal control problem (19), we know that this minimal time is well defined for $\lambda \in M((0, +\infty))$. Under appropriate assumptions, we will show that it is attained, and we will characterise the minimal time controls by establishing a form of equivalence between the optimal control problem and the corresponding minimal time problem. This is now a well-known feature for parabolic equations (see [20,41,48]).
Further study of the value function $M$. Using strong duality again, we will establish that $M$ is a non-increasing function under the assumption that $A^*$ satisfies the comparison principle (6). We start with the following general lemma:

Lemma 5.5. Given any $0 < T_1 < T_2$ and $y_0 = 0$, for a general unbounded linear operator $A$, the dual functionals defined by (22) satisfy: with equality if and only if

Proof. Since $y_0 = 0$, inequality (38) follows immediately from the comparison of the integral terms in the expressions of the $J_{T_i,\varepsilon}$, $i \in \{1,2\}$. Moreover, for that is, by definition of the operators $L_{T_i}^*$ (see (9)), which are obviously related by Using the definition of the support function $\sigma_{U_L}$ (see the proof of Lemma 2.7), this is equivalent to (39).
Remark 5.8. The proposition above implies in particular that $M$ either decreases on the whole of $(0, +\infty)$ towards its limit µ− (if $T_\ell = +\infty$), or reaches it at $T_\ell < +\infty$ and then remains constant.
By uniqueness of the dual optimal variable (Proposition 3.11), the first equality implies that From Lemma 5.5, the second equality implies that From (42) and (43), we get $p_T^\star = p_f^\star$ for all $T \in [T_1, T_2]$. Now, for $T > T_2$, the comparison principle (6) and inequality (43) By definition of the dual minimiser $p_T^\star$ of $J_{T,\varepsilon}$, we also have $J_{T,\varepsilon}(p_T^\star) \le J_{T,\varepsilon}(p_f^\star)$, and then finally $J_{T,\varepsilon}(p_T^\star) = J_{T,\varepsilon}(p_f^\star)$, i.e., $p_T^\star = p_f^\star$. This implies, thanks to (32), that $M(T) = M(T_1) = M(T_2)$, which proves the proposition.

Remark 5.9. It follows from all the above and from (43) that, when $A^*$ satisfies the comparison principle, is an optimal control on $[0, T]$ whenever $u_{T_\ell}$ is an optimal control on $[0, T_\ell]$.
We now establish the relationship between the optimal control problem (31) and the minimal time control problem.
Link with the subdifferential. We now give another characterisation of the subdifferential, which illustrates the link with convex conjugation: for $f \in \Gamma_0(H)$, Essentially, the subdifferential is the set of linear forms at which the convex conjugate is attained. Using this characterisation, we then get the Legendre–Fenchel identity, which allows us to "flip" subdifferentials: $p \in \partial f(x) \iff x \in \partial f^*(p)$, for $f \in \Gamma_0(H)$ and all $x, p \in H$.
A.2 Some properties of indicator and support functions

Indicator function of a ball in a Hilbert space. Consider the closed unit ball $B(0,1)$ of $H$. We have seen before that $\sigma_{B(0,1)}(y) = \delta_{B(0,1)}^*(y) = \|y\|_H$.
From the Cauchy–Schwarz inequality, $\langle p, x\rangle_H \le \|p\|_H \|x\|_H$, it follows that, for $x \in B(0,1)$, $\langle p, x\rangle_H = \|p\|_H$ if and only if $x = p/\|p\|_H$. This implies that $\partial\delta_{B(0,1)}$

Proof. Since $F$ is obviously convex, we only need to justify that $F$ is lsc to infer $F \in \Gamma_0(L^2(0,T;H))$. Let $u_n \to u$ in $L^2(0,T;H)$; we must show that $F(u) \le \liminf F(u_n)$. Upon extraction of a subsequence, we may assume that $F(u_n) \to \liminf F(u_n)$ and that $u_n(t) \to u(t)$ in $H$ for a.e. $t \in (0,T)$. Then, using successively the lsc of $f$ and Fatou's lemma, we find

A.5 Parametric convex optimisation
Lemma A.9. Let $H$ be a Hilbert space, $Z$ a metric space, and $f : H \times Z \to \mathbb{R} \cup \{+\infty\}$. Assume that:
• for all $x \in H$, $f(x, \cdot)$ is continuous on $Z$;
• $f$ is sequentially weak-strong lower semicontinuous on $H \times Z$, i.e., for all $x_n \rightharpoonup x$ and $\alpha_n \to \alpha$, $f(x, \alpha) \le \liminf_{n\to+\infty} f(x_n, \alpha_n)$;
• there exists a unique $x_\alpha \in H$ such that inf

Upper semicontinuity. For $x \in H$ fixed, thanks to the continuity of $f(x,\cdot)$, we pass to the limit in $f(x, \alpha_n) \ge m(\alpha_n)$ and find $m(\alpha) = \inf$

Lower semicontinuity. We denote $x_n = x_{\alpha_n}$. Let us for the moment admit that $(x_n)$ is bounded. Upon extraction, we may assume that $x_n \rightharpoonup x$ for some $x \in H$. By sequential weak-strong lower semicontinuity, Since the left-hand side is bounded from below by $m(\alpha)$, we have proved lower semicontinuity (and in fact $x = x_\alpha$).
It remains to prove the boundedness of $(x_n)$ to conclude the proof. Assume that $(x_n)$ is not bounded. Upon extraction, we may assume that $y_n := x_n/\|x_n\|_H \rightharpoonup y$ for some $y \in H$. For any fixed $\lambda > 0$, we shall prove that $x_\alpha + \lambda y$ minimises $f(\cdot, \alpha)$, which contradicts the fourth assumption, namely that there exists a single minimum point. Indeed, we notice that $x_n \rightharpoonup x_\alpha + \lambda y$.
given by the convex lsc function $f^*(y) := \sup_{x \in H} \big(\langle y, x\rangle - f(x)\big)$, for all $y \in H$.

Support and indicator functions. Given a subset $C \subset H$, the indicator function of $C$ is the function defined by and the support function of $C$ is defined by $\sigma_C(p) := \sup_{x \in C} \langle p, x\rangle = \delta_C^*(p)$, for all $p \in H$, i.e., the Fenchel conjugate of the indicator function of $C$.

Subdifferentials. For $f \in \Gamma_0(H)$, we let $\partial f(x) := \{p \in H : \forall y \in H,\ f(y) \ge f(x) + \langle p, y - x\rangle\}$,

Remark 3.3. As mentioned in the introduction, Theorem 3.1 holds for uniformly elliptic operators of the form (7) with analytic coefficients, and in particular for the classical heat equation with Dirichlet boundary conditions on a bounded, open, connected domain with $C^2$ boundary.
Indicator functions are a crucial tool to encode constraints in convex optimisation problems. Their properties are closely linked to topological properties of the sets they indicate:

Proposition A.3. We have $\delta_C, \sigma_C \in \Gamma_0(H)$ as soon as $C$ is non-empty, convex and closed.

The characterisation (46) of the subdifferential yields a useful result on indicator functions:

Proposition A.4. Let $C \subset H$ be a closed convex set with nonempty interior. Then, for $x \in H$, we have the following: $x \in \partial C \iff \partial\delta_C(x)$ is a nontrivial cone. Equivalently, by convex conjugation, $\exists p \neq 0,\ x \in \arg\max_{v \in C} \langle v, p\rangle \iff \exists p \neq 0,\ x \in \partial\sigma_C(p) \iff x \in \partial C$.