On the validity of the Euler-Lagrange system without growth assumptions

The constrained minimisers of convex integral functionals of the form $\mathscr F(v)=\int_\Omega F(\nabla^k v(x))\,\mathrm d x $ defined on Sobolev mappings $v\in \mathrm W^{k,1}_g(\Omega , \mathbb R^N )\cap K$, where $K$ is a closed convex subset of the Dirichlet class $\mathrm W^{k,1}_{g}(\Omega , \mathbb R^N ),$ are characterised as the energy solutions to the Euler-Lagrange inequality for $\mathscr F$. We assume that the essentially smooth integrand $F\colon \mathbb R^{N} \otimes \odot^{k}\mathbb R^{n} \to \mathbb R\cup\{+\infty\}$ is convex, lower semi-continuous, proper and super-linear at infinity. In the unconstrained case $K=\mathrm W^{k,1}_{g}(\Omega , \mathbb R^N )$, if the integrand $F$ is convex, real-valued, and satisfies a demi-coercivity condition, then $$ \int_{\Omega} \! F^{\prime}(\nabla^{k} u) \cdot \nabla^{k}\phi \, \mathrm d x =0 $$ holds for all $\phi \in \mathrm W^{k,1}_{0}( \Omega , \mathbb R^{N})$, where $\nabla^{k} u$ is the absolutely continuous part of the vector measure $D^{k}u$.


Introduction and results
Let Ω be an open and bounded subset of ℝⁿ, g ∈ W^{k,1}(Ω) ≡ W^{k,1}(Ω, ℝ^N), and K a closed convex subset of W^{k,1}(Ω). We consider the functional
$$ \mathscr F(v)=\int_\Omega F(\nabla^k v)\,\mathrm d x \tag{1.1} $$
defined on K ∩ W^{k,1}_g(Ω), where W^{k,1}_g(Ω) ≡ g + W^{k,1}_0(Ω) is the Dirichlet class determined by g. Denoting M ≡ ℝ^N ⊗ ⊙^k ℝⁿ, we assume that F : M → ℝ ∪ {+∞} is a convex, lower semicontinuous, extended real-valued integrand satisfying moreover the coercivity condition
$$ F(\xi)\ge\theta(|\xi|)\quad\text{for all }\xi\in M, \tag{H1} $$
where θ : [0, ∞) → [0, ∞) is an increasing convex function satisfying θ(t)/t → ∞ as t → ∞. We will also be interested in relaxing the coercivity condition (H1) to demi-coercivity, namely
$$ F(\xi)\ge c_1|\xi|+a(\xi)+c_2\quad\text{for all }\xi\in M, \tag{H2} $$
for some linear a and some constants c₁ > 0, c₂ ∈ ℝ. We remark that under (H1) the pointwise definition (1.1) agrees with relaxed definitions of $\mathscr F(v)$ in the sense of Lebesgue–Serrin–Marcellini type definitions, see e.g. [9]. Further, the existence of minimisers in this set-up follows from the direct method. Under (H2), we need to work with a Lebesgue–Serrin–Marcellini type relaxed version of the functional and consider minimisers in BV^k(Ω) ≡ BV^k(Ω, ℝ^N), consisting of integrable maps u : Ω → ℝ^N whose distributional derivatives up to and including k-th order are bounded Radon measures on Ω. To be precise, let Ω ⋐ Ω′ and g ∈ W^{k,1}(Ω′) be such that ∫_{Ω′\Ω} F(∇^k g) dx < ∞. We then define, for u ∈ BV^k(Ω),
$$ \mathscr F_g(u,\Omega)=\inf\Big\{\liminf_{j\to\infty}\int_{\Omega'}F(\nabla^k u_j)\,\mathrm d x \;:\; (u_j)\subset \mathrm W^{k,1}(\Omega'),\ \nabla^i u_j \overset{*}{\rightharpoonup} \nabla^i u \text{ in } \mathrm{BV}(\Omega) \text{ for } i=0,\dots,k-1,\ u_j\to g \text{ in } \mathrm W^{k,1}(\Omega'\setminus\Omega)\Big\}. $$
As the precise definitions of extremal and minimiser are important in this paper, we recall the relevant ones here. A mapping u ∈ W^{k,1}_g(Ω) ∩ K is a minimiser if $\mathscr F(u)\le\mathscr F(v)$ for any v ∈ W^{k,1}_g(Ω) ∩ K.
A mapping u is an energy-extremal if
$$ \int_\Omega F'(\nabla^k u)\cdot\nabla^k(v-u)\,\mathrm d x\ \ge\ 0 $$
for any v ∈ K ∩ W^{k,∞}_g(Ω). Note that this entails that ⟨F′(∇^k u), ∇^k u⟩ ∈ L¹(Ω). Here ∇^k u denotes the absolutely continuous part, with respect to Lebesgue measure, of the k-th order gradient D^k u. If u ∈ W^{k,1}(Ω), this agrees with the usual weak gradient.
In our set-up, where we only consider convex autonomous integrands, it is easy to see that energy-extremals must be minimisers. The reverse question is considerably more difficult, and the answer is delicate already in the case of convex, real-valued integrands satisfying p-growth. We do not discuss the issue of integrands with (p, q)-growth further here but refer to [23,12,18,19,20,13,14] for further discussion and references. The question for convex integrands without growth conditions has been studied in [6]. Our results here extend and strengthen the results of [6] in several directions. Whereas [6] considered convex real-valued integrands of first order with superlinear growth at infinity, we consider convex extended real-valued integrands of k-th order with superlinear growth at infinity and moreover allow the admissible maps to be constrained to a closed convex subset K of W^{k,1}(Ω). Further, in the unconstrained set-up, we are able to relax the super-linear growth to the linear bound (H2).
Considering constrained problems with integrands of super-linear growth, our main result is a direct counterpart of the main theorem in [6], generalising it simultaneously to extended real-valued integrands.
Our proof follows essentially along the same lines as [6]. We approximate F(·) from below by a sequence of integrands F_j(·). The idea is that minimisers u_j of the regularised functionals should have the desired properties, converge to the minimiser u of $\mathscr F(\cdot,\Omega)$, and that these properties are retained in the limit. Our approximations are smooth, convex and globally Lipschitz-continuous. The corresponding variational problems are degenerate convex problems of linear growth. It is known that, in general, such problems need to be solved in BV(Ω). For integrands of superlinear growth we avoid the use of BV(Ω) by utilising Ekeland's variational principle. Convergence of u_j to u in the L¹-sense is established using the theory of Young measures.
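For reference, the form of Ekeland's variational principle typically invoked in such arguments is the following standard statement (paraphrased from the convex-analysis literature, not quoted from this paper):

```latex
% Ekeland's variational principle (standard form).
% Let (X,d) be a complete metric space and G : X -> R u {+oo} lower
% semi-continuous, bounded below and proper. If u_0 satisfies
% G(u_0) <= inf_X G + eps for some eps > 0, then for every lambda > 0
% there exists u in X with:
d(u,u_0)\le\lambda, \qquad
\mathcal G(u)\le\mathcal G(u_0), \qquad
\mathcal G(v)\ge\mathcal G(u)-\frac{\varepsilon}{\lambda}\, d(v,u)
\quad\text{for all } v\in X.
```

The point in the present context is that the almost-minimisers produced by the direct method can be replaced by nearby exact minimisers of a Lipschitz-perturbed functional, bypassing compactness in BV(Ω).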
The key tool driving our arguments is convex duality theory and it is our more careful use of this theory that allows us to extend to essentially smooth integrands. Duality ideas in the context of the calculus of variations already appear in [24] and were used in the context of integrands with linear growth in [22]. In recent developments, the idea has been underused in the opinion of the authors, but has nevertheless been applied in the context of standard growth [7,8,5] and faster than exponential growth [4,3].
We remark that there is an extensive literature on functionals with nonstandard growth and refer to the surveys [19,20] for a general exposition and more references. Further information, regarding in particular functionals with linear and nearly-linear growth can be found in [10,2]. We would like to point out that the energy-extremality of minimisers plays a key role in the approach to these problems. Finally, we remark that in the case of non-convexity of the integrand the situation is considerably more complicated and the results much weaker, see e.g. [15,16,11].
In the case of linear growth (H2), while still proceeding along the same lines of argument, the proof requires more care than in the super-linear case. The key observation is a representation formula that seems to have gone unnoticed in the literature so far.

Theorem 2.
Suppose Ω is a Lipschitz domain. Assume that F : M → ℝ is C¹, convex and satisfies (H2). Then for u ∈ BV^k(Ω) the relaxed functional admits the representation
$$ \mathscr F_g(u,\Omega)=\int_{\Omega'} F(\nabla^k \bar u)\,\mathrm d x+\int_{\Omega'} F^{\infty}\Big(\frac{\mathrm d D^k_s\bar u}{\mathrm d|D^k_s\bar u|}\Big)\,\mathrm d|D^k_s\bar u|, $$
where $\bar u$ denotes the extension of u by g to Ω′ and we decompose the k-th gradient into its absolutely continuous and singular parts, $D^k\bar u = \nabla^k\bar u + D^k_s\bar u$.
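The recession function F^∞ appearing in the linear-growth theory of Sections 3 and 5 is the standard one; as a reminder (standard convex-analysis material, not quoted from this paper), for a convex F satisfying the linear bound (H2) it is given by:

```latex
% Recession function of a convex integrand F of at most linear growth.
% The limit exists by monotonicity of difference quotients of convex
% functions, and is independent of the base point xi_0 in dom(F).
F^{\infty}(\xi) \;=\; \lim_{t \to \infty} \frac{F(t\xi)}{t}
\;=\; \sup_{t>0} \frac{F(\xi_0 + t\xi) - F(\xi_0)}{t},
\qquad \xi \in M .
```

The function F^∞ is convex and positively 1-homogeneous, and under (H2) it inherits the lower bound F^∞(ξ) ≥ c₁|ξ| + a(ξ).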
Using this representation we establish the following result: Let Ω be a Lipschitz domain, suppose that F is C¹, strictly convex and satisfies (H2), and let u ∈ BV^k(Ω) be a minimiser of the relaxed functional in the unconstrained setting. Then
$$ \int_{\Omega} F'(\nabla^{k} u)\cdot\nabla^{k}\phi\,\mathrm d x = 0 $$
holds for all $\phi\in \mathrm W^{k,1}_0(\Omega,\mathbb R^N)$, where ∇^k u denotes the absolutely continuous part of D^k u.

The paper is structured as follows. In Section 2 we fix our notation and recall some facts from convex duality theory. We establish the representation formula of Theorem 2 in Section 3. The constrained superlinear case is treated in Section 4, while the linear-growth case is treated in Section 5.

Preliminaries
Throughout this paper we denote by c a general constant that may vary from line to line. We denote the standard norm on ℝⁿ by |·| and endow M ≡ ℝ^N ⊗ ⊙^k ℝⁿ with the analogous Euclidean structure, writing ⟨ξ, η⟩ = trace(ξᵀη) for the usual inner product of ξ, η ∈ M and |ξ| = ⟨ξ, ξ⟩^{1/2} for the corresponding norm. For a ∈ ℝ^N and b ∈ ℝⁿ we denote by a ⊗ b the usual tensor product, and note that (a ⊗ b)x = (b · x)a for x ∈ ℝⁿ and that |a ⊗ b| = |a||b|.
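The two tensor-product identities just stated can be verified numerically; the following is a small illustrative sketch (the dimensions N = 3, n = 4 are arbitrary toy choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)    # a in R^N with N = 3
b = rng.standard_normal(4)    # b in R^n with n = 4
x = rng.standard_normal(4)

T = np.outer(a, b)            # the tensor product a (x) b as an N x n matrix

# (a (x) b) x = (b . x) a
action_ok = bool(np.allclose(T @ x, np.dot(b, x) * a))
# |a (x) b| = |a| |b| in the Frobenius/Euclidean norms
norm_ok = bool(np.isclose(np.linalg.norm(T),
                          np.linalg.norm(a) * np.linalg.norm(b)))
print(action_ok, norm_ok)
```

Here `np.linalg.norm` of a 2-D array is the Frobenius norm, which is the norm induced by the trace inner product used in the text.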
Note that here we view F′(ξ) both as an element of M and as the corresponding linear form on M. For 1 ≤ p ≤ ∞, we denote by W^{k,p}(Ω) = W^{k,p}(Ω, ℝ^N) the usual Sobolev space, and by BV^k(Ω) = BV^k(Ω, ℝ^N) the space of maps whose distributional derivatives up to and including order k are bounded Radon measures on Ω. Given g ∈ W^{k,1}(Ω), we write W^{k,1}_g(Ω) ≡ g + W^{k,1}_0(Ω), where the latter is defined as the closure of the space of smooth compactly supported test maps C_c^∞(Ω, ℝ^N) in W^{k,1}(Ω). We remark that, due to Mazur's lemma, strong and weak closedness coincide for convex subsets of W^{k,1}(Ω), and hence we refer to such sets simply as closed.
If u ∈ BV^k(Ω), we write D^k u = ∇^k u + D^k_s u, where ∇^k u is the absolutely continuous part of D^k u with respect to Lebesgue measure and D^k_s u denotes the singular part.
Our reference regarding convex analysis is [21], but for the reader's convenience we recall some of the key facts we use here. Throughout this discussion f : M → ℝ ∪ {∞} will be a convex function. We denote the domain of f by dom(f) = {ξ ∈ M : f(ξ) < ∞}. We say that f is essentially smooth if C := int(dom(f)) is non-empty, f is differentiable on C, and |f′(ξ_i)| → ∞ whenever (ξ_i) ⊂ C converges to a boundary point of C. Note that a convex function f, finite on an open convex set C, that is differentiable on C, is continuously differentiable on C. In particular, if f is essentially smooth, then f is continuously differentiable on int(dom(f)).
We say a proper convex function f is essentially strictly convex if f is strictly convex on every convex subset of dom ∂f . Here ∂f is the subdifferential of f .
We have that

Theorem 4. A lower semi-continuous proper convex function f is essentially strictly convex if and only if its conjugate f * is essentially smooth.
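As a concrete sanity check on the conjugacy operations used throughout, one can approximate the Fenchel conjugate f*(y) = sup_x (xy − f(x)) on a grid and verify the biconjugation identity f** = f for a lower semi-continuous proper convex function. This is a toy numerical illustration, not part of the paper's argument; f(x) = x²/2 is chosen because it is self-conjugate.

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 2001)

def conjugate(vals, ygrid):
    # f*(y) = sup_x (x*y - f(x)), approximated by a maximum over the grid
    return np.array([np.max(y * grid - vals) for y in ygrid])

f = 0.5 * grid**2                  # f(x) = x^2/2 is self-conjugate: f* = f
fstar = conjugate(f, grid)
fstarstar = conjugate(fstar, grid)

err_self = np.max(np.abs(fstar - f))        # checks f* = f
err_biconj = np.max(np.abs(fstarstar - f))  # checks f** = f
print(err_self, err_biconj)
```

For non-convex or non-lsc f the same computation instead returns the convex lower semi-continuous envelope, which is exactly the content of the biconjugation theorem.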
Moreover we make the following observation.
Lemma 2.1. Let f : M → ℝ ∪ {∞} be convex, lower semi-continuous and proper. Then dom(f) has interior points if and only if f* is demi-coercive, that is, there exist a linear function L and constants c₁ > 0 and c₂ ∈ ℝ such that
$$ f^*(\xi)\ \ge\ c_1|\xi|+L(\xi)+c_2\quad\text{for all }\xi\in M. \tag{2.1} $$

Proof. Note that f is necessarily continuous on the interior of dom(f), so that suprema of f over compact subsets of the interior are well-defined and finite. Assume first that B_r(x₀) ⊂ dom(f) for some r > 0. Then, for 0 < τ < r,
$$ f^*(\xi)\ \ge\ \sup_{|x-x_0|\le\tau}\langle x,\xi\rangle-\sup_{\bar B_\tau(x_0)}f\ =\ \langle x_0,\xi\rangle+\tau|\xi|-\sup_{\bar B_\tau(x_0)}f, $$
which is an estimate of the form (2.1). Conversely, if (2.1) holds, we have for 0 < τ < r and θ ∈ M with |θ| = 1, with r and x₀ determined by c₁ and L, that f(x₀ + τθ) = f**(x₀ + τθ) is finite. In particular, we deduce that B_r(x₀) ⊂ dom(f).
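The lemma can be illustrated numerically in one dimension (a hypothetical example, not from the paper): take f to be the indicator function of [−1, 1], so dom(f) = [−1, 1] has non-empty interior; its conjugate is f*(y) = |y|, which is demi-coercive with L = 0, c₁ = 1, c₂ = 0.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)   # dom(f) = [-1, 1], where f = 0 (indicator)

def fstar(y):
    # f*(y) = sup_{|x| <= 1} (x*y - 0) = |y|
    return np.max(y * xs)

ys = np.linspace(-10.0, 10.0, 401)
vals = np.array([fstar(y) for y in ys])

# demi-coercivity: f*(y) >= c1*|y| + L(y) + c2 with c1 = 1, L = 0, c2 = 0
demi = bool(np.all(vals >= 1.0 * np.abs(ys) - 1e-9))
exact = bool(np.allclose(vals, np.abs(ys)))
print(demi, exact)
```

Shrinking dom(f) to the single point {0} destroys the interior and, correspondingly, the conjugate degenerates to f* ≡ 0, which is not demi-coercive, matching the "only if" direction of the lemma.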
Finally we record a technical lemma that allows us to pull in the boundary of a Lipschitz domain. The idea is to replicate in this setting the behaviour of the map x → sx on the unit ball.
h_z(s) = z, and set Ψ(z, t) = h_z(t). After possibly reducing the value of t₀, the maps Ψ, Ψ⁻¹ are Lipschitz-regular diffeomorphisms on a neighbourhood of ∂Ω, which we denote by V. Moreover, the Jacobians of Ψ and Ψ⁻¹ are bounded.
For s close to 1, define Ψ_s by composing Ψ with strictly monotonically increasing smooth rescalings of the t-variable converging to the identity. Using the chain rule we note that Ψ_s : Ω′ → Ω′ is a Lipschitz-regular diffeomorphism. Denote its inverse by Ψ_s⁻¹ : Ω′ → Ω′ and note that, using the Inverse Function Theorem and the chain rule, Ψ_s⁻¹ is also Lipschitz. Further, by direct calculation, Ψ_s → Id in C¹(Ω′) as s ↗ 1. In particular, also JΨ_s → 1 uniformly in Ω′ as s ↗ 1.

Approximations and a representation formula
We assume that, for all ξ ∈ M and some c > 0,
$$ F(\xi)\ \ge\ c\,|\xi|. \tag{3.1} $$
Note that (after possibly adjusting F by adding an affine function) (3.1) encapsulates both (H1) and (H2).
The main goal of this section is to prove the representation formula of Theorem 2. In order to prove Theorem 2 we need to construct appropriate approximations of F (·).
Proof. Since F is essentially smooth, int(dom(F)) is non-empty. By changing coordinates if necessary, we may assume that 0 is an interior point of dom(F), that is, there is r > 0 such that B_r(0) ⊂ int(dom(F)).
Introduce the Fenchel conjugate of F,
$$ F^*(\eta)=\sup_{\xi\in M}\big(\langle\xi,\eta\rangle-F(\xi)\big). $$
Note that, by Lemma 2.1, we have for some c ∈ ℝ,
$$ F^*(\eta)\ \ge\ r|\eta|+c \quad\text{for all }\eta\in M. $$
Finally, since F(·) is essentially smooth, F*(·) is essentially strictly convex. For each j > 0 and ξ ∈ M define
$$ G_j(\xi)=\sup_{|\eta|\le j}\big(\langle\xi,\eta\rangle-F^*(\eta)\big). $$
Note that this is a real-valued, convex, and globally j-Lipschitz function. Since F is lower semi-continuous and convex, we have that G_j ր F** = F pointwise as j ր ∞. We denote by Φ the standard, radially symmetric, and smooth convolution kernel and set, for ε > 0, Φ_ε(ξ) = ε^{−dim M} Φ(ε^{−1}ξ) for ξ ∈ M. Note that, since G_j is convex and j-Lipschitz,
$$ G_j\ \le\ \Phi_\varepsilon\star G_j\ \le\ G_j+j\varepsilon. $$
For integers j > 0 and sequences (δ_j), (µ_j) ⊂ (0, ∞), which we specify at a later point, define
$$ F_j=\Phi_{\delta_j}\star G_j-\mu_j. $$
Clearly F_j is convex and j-Lipschitz, and for a suitable choice of (δ_j), (µ_j) we obtain F_{j−1} ≤ F_j ≤ F for all j > 1. In particular, F_j(ξ) ր F(ξ) as j ր ∞ pointwise in ξ. By Dini's lemma, the convergence is locally uniform on dom(F). We note that
$$ F_j'\to F' $$
locally uniformly on dom(F). In order to see this, assume that (ξ_j) ⊂ dom(F) with ξ_j → ξ ∈ dom(F), and consider (F_j′(ξ_j)). As difference quotients of convex functions are increasing in the increment, we have for all η ∈ M and 0 < t ≤ 1,
$$ \langle F_j'(\xi_j),\eta\rangle\ \le\ \frac{F_j(\xi_j+t\eta)-F_j(\xi_j)}{t}\ \le\ \frac{F(\xi_j+t\eta)-F_j(\xi_j)}{t}. $$
Consequently, we find
$$ \limsup_{j\to\infty}\,\langle F_j'(\xi_j)-F'(\xi),\eta\rangle\ \le\ \frac{F(\xi+t\eta)-F(\xi)}{t}-\langle F'(\xi),\eta\rangle. $$
Letting t → 0, the right-hand side vanishes; applying this with η and −η proves the asserted locally uniform convergence.
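The truncated-conjugate regularisation can be visualised in one dimension. The following toy sketch (hypothetical model integrand F(x) = x², whose conjugate is F*(y) = y²/4; neither is from the paper) checks numerically that F_j(ξ) = sup_{|η|≤j}(⟨ξ, η⟩ − F*(η)) lies below F, increases in j, and is globally j-Lipschitz:

```python
import numpy as np

def F(x):                          # model integrand F(x) = x^2
    return x**2

def Fstar(y):                      # its Fenchel conjugate, F*(y) = y^2/4
    return y**2 / 4.0

def F_j(xs, j):
    # truncated-conjugate approximation F_j(x) = sup_{|eta| <= j} (x*eta - F*(eta))
    etas = np.linspace(-j, j, 4001)
    return np.array([np.max(x * etas - Fstar(etas)) for x in xs])

xs = np.linspace(-10.0, 10.0, 201)
f2, f5 = F_j(xs, 2.0), F_j(xs, 5.0)

below = bool(np.all(f5 <= F(xs) + 1e-9))          # F_j <= F everywhere
monotone = bool(np.all(f2 <= f5 + 1e-6))          # F_j increases with j
slopes = np.abs(np.diff(f5) / np.diff(xs))
lipschitz = bool(np.max(slopes) <= 5.0 + 1e-4)    # global j-Lipschitz bound
print(below, monotone, lipschitz)
```

In this model F_j agrees with F on {|x| ≤ j/2} and continues linearly with slope ±j outside, which is exactly the linear-growth degeneration that forces the regularised problems into BV in general.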
We are now ready to prove Theorem 2.
Proof of Theorem 2. Let u ∈ BV^k(Ω). There exists a sequence (u_j) ⊂ W^{k,1}(Ω′) admissible in the definition of the relaxed functional along which the infimum is attained. Due to (H2), (u_j) is bounded in W^{k,1}(Ω′) and hence we may extract a subsequence such that ∇^k u_j generates a Young measure ν. We then have, cf. Proposition 3.3 in [17], a lower bound valid for all j, so that we may pass to the limit by monotone convergence. By Jensen's inequality the Young-measure representation is then estimated from below by the integrand evaluated at the barycentre; to obtain the last line we used the convexity of F. Let {Ψ_t} denote the family of Lipschitz diffeomorphisms of Lemma 2.2, and denote by u_{ε,t} = Φ_ε ⋆ u_t the mollification of u_t with the standard mollifier Φ_ε. Consequently, the asserted representation follows by combining the above estimates.

We record the following corollary.

(3.5)
Moreover, either λ is purely singular or equality holds in (3.5).

Proof. Due to the result of Theorem 2 we must have equality in all the calculations. In particular, we deduce (3.5) with t = 1. The case t ∈ [0, 1] is now a direct consequence of the convexity of F(·). For the moreover part, we differentiate the third line of (3.5) at t = 1−; this implies the claim.

Constrained extended real-valued integrands
For the convenience of the reader we recall the relevant set-up. We consider the following problem: Let Ω be an open and bounded subset of ℝⁿ, g ∈ W^{k,1}(Ω), and let K ⊂ W^{k,1}(Ω) = W^{k,1}(Ω, ℝ^N) be a closed convex subset. We consider the functional
$$ \mathscr F(v)=\int_\Omega F(\nabla^k v)\,\mathrm d x $$
on K ∩ W^{k,1}_g(Ω). We assume that F(·) is a convex, extended real-valued integrand satisfying moreover the explicit lower bound (H1). The main theorem of this section is

Theorem 5. Suppose F : M → ℝ ∪ {∞} is convex, lower semi-continuous, proper, essentially smooth and satisfies (H1). Let g ∈ W^{k,1}(Ω) ∩ K with F(s∇^k g) ∈ L¹(Ω) for some s > 1. Then minimisers u ∈ W^{k,1}_g(Ω) ∩ K are characterised by the conditions that u is an energy-extremal and F*(F′(∇^k u)) ∈ L¹(Ω).

We point out that, in the case where the constraint corresponds to an obstacle problem, we may infer that, under the assumptions of Theorem 5, the Euler–Lagrange inequality for $\mathscr F(\cdot,\Omega)$ holds in the following strong sense: Suppose Ω is a W^{1,1}-extension domain and that K is an obstacle constraint determined by ψ ∈ W^{k,1}(Ω) with F(±∇^k ψ) ∈ L¹(Ω). Under the assumptions of Theorem 5, the Euler–Lagrange inequality then holds for a minimiser u ∈ W^{k,1}_g(Ω) ∩ K. We remark that in the unconstrained case (K = W^{k,1}(Ω)), by the same proof we have the following variant of Corollary 2.
Corollary 3. Suppose F : M → ℝ ∪ {∞} is convex, lower semi-continuous, proper, essentially smooth and satisfies (H1). Let g ∈ W^{k,1}(Ω) with F(s∇^k g) ∈ L¹(Ω) for some s > 1. Then for a minimiser u ∈ W^{k,1}_g(Ω) the Euler–Lagrange equation holds for all φ ∈ W^{k,1}(Ω) with compact support contained in Ω satisfying the integrability conditions.

Proof of Theorem 5. Using Lemma 3.1 we obtain F_s such that F_s ր F pointwise and locally uniformly on dom(F). Further, F′_s → F′ pointwise and locally uniformly on dom(F).
We begin by proving that the regularised infima f_j converge to the infimum f. Clearly (f_j) is an increasing sequence, and we may take v_j ∈ W^{k,1}_g(Ω) ∩ K almost attaining the infimum defining f_j. Considering (H1) and f_j ≤ f < ∞, (v_j) is bounded in W^{k,1}(Ω). In particular, we may extract a subsequence, not relabelled, such that ∇^i v_j converges, for i ∈ {0, …, k−1}, to ∇^i v with v ∈ BV(Ω), and ∇^k v_j generates a generalised Young measure ν = ((ν_x)_{x∈Ω}, λ, (ν^∞_x)_{x∈Ω}). Recalling that F^∞_j ր F^∞, by monotone convergence we may pass to the limit in the corresponding recession terms for any integer s > 1. Using standard results, see [1], we deduce that (∇^k v_j) is equi-integrable on Ω, so that v_j ⇀ v weakly in W^{k,1}(Ω), where v ∈ W^{k,1}_g(Ω) ∩ K by Mazur's lemma and since K is closed. By another standard result, the centre of mass of ν_x is ∇^k v(x) for almost every x. But now Jensen's inequality and the above allow us to conclude that f ≤ lim_j f_j, and thus by minimality of u we have proven our claim.
In particular, we may now write f_j ≥ f − ε_j, where ε_j ց 0. By Ekeland's variational principle we can find u_j ∈ W^{k,1}_g(Ω) ∩ K minimising a suitably perturbed functional up to the error ε_j. Note that, as F(∇^k u) ∈ L¹(Ω), it must hold that ∇^k u ∈ dom(F) almost everywhere in Ω. In particular, the second definition makes sense, since F is essentially smooth.
Note that σ_j ∈ L^∞(Ω) (as F_j is Lipschitz-continuous) and, furthermore, the extremality relation
$$ \langle\sigma_j,\nabla^k u_j\rangle=F_j^*(\sigma_j)+F_j(\nabla^k u_j) $$
holds almost everywhere in Ω. In particular, this implies that F_j^*(σ_j) ∈ L¹(Ω). We further comment that F_j^* is an extended real-valued, lower semi-continuous and convex integrand. Finally, σ_j → F′(∇^k u) almost everywhere in Ω; to reach this conclusion, we again use that ∇^k u ∈ dom(F) almost everywhere in Ω.
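The pointwise extremality relation above is an instance of the Fenchel–Young equality: for convex differentiable F one has ⟨F′(ξ), ξ⟩ = F(ξ) + F*(F′(ξ)), with equality precisely when the dual variable is a subgradient. A small numerical check, using a hypothetical model integrand F(x) = x⁴ and a grid approximation of F* (neither taken from the paper):

```python
import numpy as np

def F(x):                # hypothetical smooth, strictly convex model integrand
    return x**4

def Fprime(x):
    return 4.0 * x**3

xs = np.linspace(-3.0, 3.0, 6001)

def Fstar(y):
    # numerical Fenchel conjugate F*(y) = sup_x (x*y - F(x))
    return np.max(y * xs - F(xs))

# Fenchel-Young equality: <F'(x), x> = F(x) + F*(F'(x))
for x in [0.5, 1.0, 1.5]:
    y = Fprime(x)
    print(x, x * y - F(x) - Fstar(y))   # gaps vanish up to grid error
```

For a generic pair (ξ, η) with η ≠ F′(ξ) the same expression is strictly negative, which is Young's inequality; the proof exploits exactly this gap.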
As a consequence, we deduce that the Euler–Lagrange inequality holds for any v ∈ K ∩ W^{k,∞}_g(Ω). Thus the proof is complete.
Proof of Corollary 2. Consider a minimiser u ∈ W^{k,1}_g(Ω) ∩ K. By the main theorem, u is an energy-extremal and F*(F′(∇^k u)) ∈ L¹(Ω). Since u is an energy-extremal, the Euler–Lagrange inequality holds. Extend v and ψ to W^{k,1}(ℝⁿ, ℝ^N) maps, still denoted v and ψ respectively. Let (Ψ_ε)_{ε>0} be a family of standard smooth mollifiers and note that the resulting comparison maps are admissible for sufficiently small ε > 0. In particular, for such ε > 0, (4.5) holds. Now, using Young's inequality, the corresponding pointwise bounds hold almost everywhere in Ω. In particular, combining these inequalities and using Jensen's inequality and Fatou's lemma, we deduce the required estimates in a routine manner. In particular, using (4.6), Fatou's lemma and Vitali's convergence theorem, we may let ε ց 0 and pass to the limit in (4.5) to conclude the proof.

BV-minimisers
For the convenience of the reader we recall the set-up we work with in this section. We consider the functional
$$ \mathscr F(v)=\int_\Omega F(\nabla^k v)\,\mathrm d x. $$
We assume that F(·) is a convex, extended real-valued integrand satisfying moreover the linear bound
$$ F(\xi)\ \ge\ c|\xi|+a(\xi)\quad\text{for all }\xi\in M, $$
for some linear a(·) and c > 0. We extend F(·) to BV^k(Ω) as follows: Let Ω ⋐ Ω′ and g ∈ W^{k,1}(Ω′) be such that ∫_{Ω′\Ω} F(∇^k g) dx < ∞. We then define, for u ∈ BV^k(Ω), the relaxed functional by the Lebesgue–Serrin–Marcellini construction of the introduction. We prove the following result: Let Ω be a Lipschitz domain, suppose that F(·) is strictly convex and satisfies (H2), and let u ∈ BV^k(Ω) be a minimiser of the relaxed functional. Then
$$ \int_\Omega F'(\nabla^k u)\cdot\nabla^k\phi\,\mathrm d x=0 $$
for all φ ∈ W^{k,1}_0(Ω, ℝ^N).
By Lipschitz continuity of F_j, σ_j ∈ L^∞(Ω′), and we have the extremality relation
$$ \langle\sigma_j,\nabla^k w_j\rangle=F_j^*(\sigma_j)+F_j(\nabla^k w_j), $$
valid almost everywhere in Ω′. It is not hard to check that F_j^*(ξ) ց F*(ξ) as j → ∞ pointwise in ξ, and thus, in particular, after adding an affine function to F if necessary, F*(ξ) ≥ r|ξ| for some r > 0. We further note that the error terms ε̃_j, built from the approximation parameters, vanish as j → ∞. In particular, we may estimate the relevant integrals over Ω′ and deduce that (σ_j) is equi-integrable on Ω′; thus, by Vitali's convergence theorem, σ_j → σ in L¹(Ω′). We deduce that Div σ = 0 in the sense of distributions. By Fatou's lemma, F*(σ) ∈ L¹(Ω′), and using the duality relation ⟨σ_s, ∇^k u_s⟩ = F_s^*(σ_s) + F_s(∇^k u_s), as well as, again, Fatou's lemma, we conclude that ⟨σ, ν_x⟩ ∈ L¹(Ω′).
Note that since F is strictly convex, λ is purely singular and consequently ν_x = δ_{∇^k u(x)} for almost every x. This concludes the proof.