EXISTENCE OF ABSOLUTE MINIMIZERS FOR NONCOERCIVE HAMILTONIANS AND VISCOSITY SOLUTIONS OF THE ARONSSON EQUATION



(Communicated by Emmanuel Trélat)
Abstract. In this paper we study absolute minimizers and the Aronsson equation for a noncoercive Hamiltonian. We extend the definition of absolutely minimizing functions (in a viscosity sense) for the minimization of the L ∞ norm of a Hamiltonian, within a class of locally Lipschitz continuous functions with respect to possibly noneuclidian metrics. The metric structure is naturally associated to the Hamiltonian and it is related to the a-priori regularity of the family of subsolutions of the Hamilton-Jacobi equation. A special but relevant case contained in our framework is that of Hamiltonians with a Carnot-Carathéodory metric structure determined by a family of vector fields (CC for short in the following), in particular the eikonal Hamiltonian and the corresponding anisotropic infinity-Laplace equation. In this case, the definition of absolute minimizer can be written in an almost classical way, by the theory of Sobolev spaces in a CC setting. In general open domains and with a prescribed continuous Dirichlet boundary condition, we prove the existence of an absolute minimizer which satisfies the Aronsson equation as a viscosity solution. The proof is based on Perron's method and relies on a-priori continuity estimates for absolute minimizers.

1. Introduction.
Given an open, connected set Ω ⊂ R^n, we study the notion of absolute minimizer and the Dirichlet boundary value problem for the Aronsson equation (from now on AE for short) for a continuous, bounded from below Hamiltonian H : Ω × R^n → R of the form

H(x, p) = sup_{a∈A} { −f(x, a) · p − h(x, a) }. (1)

Coercivity of the Hamiltonian (i.e. H(x, p) → +∞ as |p| → +∞) will not be assumed; however, total controllability of the vectogram f(x, A) in every subdomain of Ω is a crucial ingredient. The AE is written in the following way

− D_x(H(x, Du(x))) · D_p H(x, Du(x)) = 0, x ∈ Ω, (2)

where, when the Hamiltonian has the Carnot-Carathéodory structure (4), only the directional derivatives of the unknown function u with respect to the family {σ_j} appear and A(x) = σ(x)σ^t(x) is positive semidefinite. In the eikonal case, by computing derivatives we obtain the infinity-Laplace equation with respect to the family of vector fields. The AE was first introduced by Aronsson, see e.g. [1,2,3], as the Euler-Lagrange equation of L^∞ variational problems, where one minimizes the functional

ess sup_{x∈Ω} H(x, Du(x)) → min (7)

among all locally Lipschitz continuous functions u : Ω → R attaining a prescribed boundary condition on ∂Ω. The best known example for this class of problems is the Lipschitz extension problem, where H = H(p) = |p|²/2, so that the AE becomes the by now usual infinity-Laplace equation.
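In the Lipschitz extension case the model solutions away from the vertex are the Euclidean cones u(x) = |x − z|, and the AE reduces to the infinity-Laplace equation −Σ_{i,j} u_{x_i} u_{x_j} u_{x_i x_j} = 0. As a quick numerical sanity check (an illustrative sketch of ours, not part of the paper's argument), central finite differences confirm that a cone annihilates the infinity-Laplacian away from its vertex:

```python
import math

def u(x, y):
    # Euclidean cone with vertex at the origin
    return math.hypot(x, y)

def infinity_laplacian(f, x, y, h=1e-3):
    """Central finite-difference approximation of
    sum_{i,j} f_i f_j f_ij at the point (x, y)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fx**2 * fxx + 2 * fx * fy * fxy + fy**2 * fyy

# Away from the vertex the cone is a classical solution, so the value is ~ 0.
print(infinity_laplacian(u, 1.0, 2.0))
```

Of course the interesting (viscosity) behavior of cones is at the vertex, where the finite-difference check does not apply.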
The main difference with respect to more standard L^p minimization problems is that uniqueness of minimizers fails in general, and minimizers do not have good stability and comparison properties. Thus one has to define the so-called absolute minimizers, i.e. functions that are local minimizers in every subdomain, in order to have (2) satisfied. In [3] Aronsson proved the equivalence between C² absolute minimizers and solutions of the infinity-Laplace equation, and the first existence result for absolute minimizers in the Lipschitz extension problem. However, absolute minimizers do not have C² regularity in general, and the corresponding results, using the notion of viscosity solution, were later proved for the infinity-Laplace equation by Jensen [28], see also Juutinen [31,32]. Quite general results on the equivalence between absolute minimizers and viscosity solutions of the AE are due to Barron-Jensen-Wang [9], as well as existence of absolute minimizers by approximation with L^p variational problems, see also [10] and Champion-De Pascale-Prinari [15] for the Γ-convergence approach by the direct method. More recent refinements concerning the derivation of the AE for absolute minimizers are due to Crandall [19] and Crandall-Wang-Yu [20], while viscosity solutions of the AE were proved to be absolute minimizers by Yu [41].
Most of the results in the literature refer, at least implicitly, to coercive Hamiltonians. When we drop this assumption, it is no longer natural to solve the variational problem (7) in the class of locally Lipschitz functions with respect to the Euclidean metric. Indeed, existence for the Lipschitz extension problem has also been studied in length spaces, see e.g. [32] and Aronsson-Crandall-Juutinen [4], where the metric is in general different from the Euclidean one. We notice in this paper that, in general, the family of viscosity subsolutions of the HJ equation H(x, Du(x)) = k ∈ R is made of locally Lipschitz continuous functions with respect to a metric naturally defined from the Hamiltonian itself, which we take as the basic tool in our analysis. In this we follow a classical idea by Lions [33] to study first order (coercive) HJ equations. When we lose the usual Lipschitz continuity, however, the problem (7) is no longer directly meaningful and we have to interpret it in the sense of the theory of viscosity solutions that we introduce later, extending the classical problem. A notable exception is that of Hamiltonians having the CC structure (4), when (7) still makes sense if interpreted in view of the theory of Sobolev spaces for a CC family of vector fields, as we explain below in more detail (Example 1); see also the paper by the author [39].
The relationship between (2) and the corresponding Hamilton-Jacobi equations

H(x, Du(x)) = k (> 0), H(x, −Du(x)) = k (8)

turns out to be crucial in the analysis, in particular allowing us to define generalized cone functions. One of the main ingredients of the theory is indeed the equivalence between absolute minimizers and the class of functions enjoying comparison with cones, a property noted by Crandall-Evans-Gariepy [18] for the Lipschitz extension problem, extended to length spaces by Champion-De Pascale [14] and extensively studied by Aronsson-Crandall-Juutinen [4], and a property that we also generalize to our setting. Our main result is the existence, by Perron's method, of absolute minimizers with a prescribed, arbitrary and continuous Dirichlet boundary condition in open sets for a continuous Hamiltonian as in (1). Moreover, when the Hamiltonian is of class C¹ we can construct an absolute minimizer that satisfies equation (2) in the viscosity sense. The use of Perron's method in the existence theory already appeared in the work of Aronsson [1,2]. We do not need to use L^p-type approximation as done elsewhere in the literature, see e.g. [28,10]. Juutinen also used Perron's method to prove the existence of absolutely minimizing Lipschitz extensions in length spaces, as did [15], later extended by Julin [30]. Since we cannot rely on a full comparison principle for the AE in our generality, besides the mentioned comparison with cones, Perron's method requires knowledge of a-priori continuity estimates of functions satisfying comparison with cones.
Our approach is strongly inspired by the methods developed for the infinity-Laplacian with the Euclidean metric in [4]. Here, however, we have to adapt those ideas to face several additional difficulties: among them, besides the lack of coercivity, the dependence of the Hamiltonian on the state variable and the fact that the Hamiltonian is not symmetric in the variable p. We also have to modify the notion of cones, a crucial tool to implement Perron's method, and they turn out not to be absolute minimizers in general, a fact already noted in [14]. The reasons why solutions of (8) (generalized cones for us, in the complement of a point) fail to be absolute minimizers are discussed in [38], showing that solutions of the Hamilton-Jacobi equation are absolute minimizers if and only if they solve (8) as bilateral viscosity solutions. This latter property is quite strong and not to be expected of viscosity solutions in general. When it holds, the cone is the unique absolute minimizer. From optimal control theory, one can find examples of cones that are absolute minimizers but are just locally Hölder continuous and not locally Lipschitz (Example 2).
We finally mention that a few other papers study the Lipschitz extension problem in Carnot groups using explicitly the group structure. Bieske-Capogna [12] derive the infinity-Laplace equation for absolute minimizers in Carnot groups, and Bieske [11] proves the equivalence between absolute minimizers and viscosity solutions of the infinity-Laplace equation in the Grushin space. More general results with a Euclidean approach are due to Wang [40] in the case of C² and homogeneous Hamiltonians with a CC structure. Wang, by completely different methods, derives the AE for absolute minimizers in CC metric spaces and proves existence and uniqueness in Carnot groups if the Hamiltonian H̃ in (4) is moreover independent of x and strictly convex.
As far as we know, uniqueness for (2) remains largely open, as most of the known uniqueness results [28,8,4] are for x-independent equations, see also [40]. More recent results on uniqueness of viscosity solutions can be found in Jensen-Wang-Yu [29], as well as examples that explain why uniqueness fails in some cases; for this see also [41,38].
The plan of the paper is as follows. In Section 2, we introduce cone functions, recall some preliminaries on solutions of (8) and discuss our main assumptions. In Section 3, we prove equivalence of absolute minimizers and functions enjoying comparison with cones in our setting. In Section 4, we use comparison with cones to prove a-priori continuity estimates for absolute minimizers and discuss the Harnack inequality for homogeneous Hamiltonians. In Section 5, we prove the existence of absolute minimizers with a prescribed boundary condition in open domains. Finally in Section 6, we show for C 1 Hamiltonians that we can construct an absolute minimizer satisfying the AE.
2. Main assumptions, metrics and cones. In this section we present and discuss our framework and introduce the main assumptions that will hold throughout the paper. Describing them carefully will take some space; however, assumptions (9), (10), (15), (16), (21) and (28) below will always be in force throughout the paper.
We consider Hamiltonian functions as in (1), where the data satisfy the standing assumptions (9) and (10).

Remark 1. Notice that assumption (10) implies that H, h are bounded from below. We advise the reader that although our standing assumptions are as in (10), for simplicity of the discussion, from now on we will proceed as if H, h ≥ 0. This is not at all restrictive since, if h, H ≥ −M, M ≥ 0, then we can substitute h with h − M and consider in the rest of the paper only distance functions J_{|k|} with slope |k| > 2M. These are introduced later in the section. All the main definitions and results will remain unaffected. If for instance we want to consider the Hamiltonian H(x, p) = |σ^t(x)p| − h(x), strictly speaking assuming H, h ≥ 0 would impose h ≡ 0. If however we would like to allow 0 ≤ h(x) ≤ M, the problem does not change if we consider instead the nonnegative Hamiltonian and, since h(x) − M ≥ −M, we restrict ourselves to J_{|k|} with slope |k| > M. We will avoid making this precise in the following.
The control system associated to (1) and its properties have crucial importance. Namely, we consider the system

ẏ(t) = f(y(t), a(t)), y(0) = x_o, (11)

for a suitable choice of control functions a(·) ∈ A ⊂ {a : [0, +∞) → A : measurable}. We call trajectories the solutions of the control system and indicate them as y_{x_o}(·; a), or simply as y(·) without the parameters x_o, a(·) if no ambiguity arises. We will make extensive use of the theory of viscosity solutions throughout the paper, especially for first order convex pdes. For this the reader can consult Crandall-Ishii-Lions [17] as well as the books by Bardi-Capuzzo Dolcetta [5], Barles [7] and Fleming-Soner [22]. We recall that an upper semicontinuous function u : Ω → R is a (viscosity) subsolution of the differential inequality H(x, Du(x)) ≤ k if, whenever ϕ ∈ C¹(Ω) and x_o ∈ argmax(u − ϕ), then H(x_o, Dϕ(x_o)) ≤ k.

Remark 2. We implicitly refer in the paper to one of the two following structures of the control functions. In the first instance (easier), the set A is compact and we choose A as the set of relaxed controls, namely A = L^∞(0, +∞; P(A)), where P(A) is the set of Radon probability measures on A. This set is metrizable and compact with respect to the weak* topology. In this case we have to extend f, h, for µ ∈ P(A), as

f(x, µ) = ∫_A f(x, a) dµ(a), h(x, µ) = ∫_A h(x, a) dµ(a).

A typical example of this case is a standard framework in optimal control. In this example, however, f(x, a) = σ(x)a is linear and A = [−1, 1] is convex, and we do not really need relaxed controls.
In the second instance, A is convex but may be unbounded, and we suppose that the vector field is affine, f(x, a) = f_1(x) + f_2(x)a, and that h(x, ·) is convex for all x ∈ Ω. In this case we rely on the weak compactness of bounded subsets of L^r(0, T; A). A typical example of this second case is f(x, a) = σ(x)a, A = R^m, h(x, a) = |a|²/2 (here r = 2), as in the case that gives rise to the infinity-Laplace equation.

Consequences that we use below are the following two properties: (i) by the direct method of the calculus of variations, we can determine lower semicontinuity with respect to the control of integral functionals of the form

∫_0^t h(y(s), a(s)) ds + g(y(t)),

when g is lower semicontinuous, and then find optimal controls for appropriate value functions; (ii) if for k > 0, u : D → R, D ⊂ Ω open, is an upper semicontinuous viscosity solution of

H(x, Du(x)) ≤ k, x ∈ D, (13)

then the suboptimality principle holds, namely for any trajectory of (11) we have

u(x_o) ≤ ∫_0^t (h(y(s), a(s)) + k) ds + u(y(t)), (14)

as long as y(s) ∈ D for s ∈ [0, t), see [37,26] for the proofs for the two structures of the control functions that we have mentioned. This is also a well understood consequence of the comparison principle for HJ equations.
On the control system (11) we always assume some further strong conditions. The first is a total controllability condition on the vector fields: for all x, z ∈ Ω the set

A_{x,z} = {a(·) ∈ A : y_x(t_{x,z}; a) = z for some t_{x,z} ∈ [0, +∞)}

is nonempty. This fact allows, for any k > 0, to define the following function

J_k(x, z) = inf_{a(·)∈A_{x,z}} ∫_0^{t_{x,z}} (h(y(t), a(t)) + k) dt < +∞.
For convenience we will denote J_1 ≡ J. Notice that min{k, 1} J ≤ J_k ≤ max{k, 1} J.
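Since h ≥ 0 by the convention of Remark 1, the sandwich inequality follows by comparing integrands in the definition of J_k; a short verification:

```latex
% By Remark 1 we may take h \ge 0. Compare the integrands pointwise:
%   if k \ge 1:      h + 1 \;\le\; h + k \;\le\; k\,(h + 1),
%   if 0 < k \le 1:  k\,(h + 1) \;\le\; h + k \;\le\; h + 1.
% Integrating along any trajectory in \mathcal{A}_{x,z} and taking the
% infimum over controls preserves these bounds, hence
\min\{k,1\}\, J(x,z) \;\le\;
J_k(x,z) = \inf_{a(\cdot)\in \mathcal{A}_{x,z}} \int_0^{t_{x,z}}
\bigl( h(y(t),a(t)) + k \bigr)\, dt
\;\le\; \max\{k,1\}\, J(x,z).
```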
Each J_k is a semi-distance, namely it may lack symmetry, but it satisfies the triangle inequality and we have J_k(x, z) = 0 iff x = z. Concerning this last statement, if for instance J(x, z) = 0, then for ε > 0 we can find a_ε ∈ A such that t_{x,z} ≤ ε and, in the case A is unbounded, ‖a_ε‖^r_{L^r(0,t_{x,z})} ≤ ((M + 1)/c) ε. On the other hand, by definition of trajectory, |x − z| ≤ C(t_{x,z} + t_{x,z}^{1−1/r} ‖a‖_{L^r(0,t_{x,z})}) e^{L t_{x,z}}. Thus x = z since ε is arbitrary, by the Gronwall inequality. With a similar argument one can easily show that for all K > 0 we can find λ_K such that the Euclidean distance |x − z| is controlled in terms of J(x, z) whenever J(x, z) ≤ K. We will always consider families of vector fields such that the semi-distance J is uniformly continuous in the following sense: for any open and bounded set D ⊂⊂ Ω there is a modulus ω_D such that

J(x, z) ≤ ω_D(|x − z|), for all x, z ∈ D. (16)

Of course this translates into a similar condition on every J_k. In this case, for z ∈ Ω, the function V(·) = J_k(·, z) is known to be a continuous viscosity solution of the HJ equation

H(x, DV(x)) = k, x ∈ Ω \ {z}, V(z) = 0. (17)
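The triangle inequality for J_k follows by concatenating near-optimal controls; a sketch of the standard argument:

```latex
% Fix \varepsilon>0 and choose a_1\in\mathcal{A}_{x,y},\ a_2\in\mathcal{A}_{y,z} with
%   \int_0^{t_1} (h+k)\,dt \le J_k(x,y)+\varepsilon,\qquad
%   \int_0^{t_2} (h+k)\,dt \le J_k(y,z)+\varepsilon.
% The concatenation a(s)=a_1(s) on [0,t_1),\ a(s)=a_2(s-t_1) on [t_1,t_1+t_2]
% steers x to z through y, hence
J_k(x,z) \;\le\; J_k(x,y) + J_k(y,z) + 2\varepsilon
\qquad \text{for every } \varepsilon>0 .
% Letting \varepsilon\downarrow 0 gives the triangle inequality. Symmetry may
% fail because time-reversed trajectories of \dot y = f(y,a) need not be
% admissible for the same system.
```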
This question is well studied. It is known, see [6,5] and the references therein, that for given z ∈ Ω, a continuous viscosity solution V : Ω → [0, +∞) of the HJ equation (17), such that V(z) = 0 and V(x) > 0 if x ≠ z, must coincide with the function J_k(·, z) in all sublevel sets {x ∈ Ω : V(x) < r} such that {x ∈ Ω : V(x) < r} ⊂ Ω ("free boundary" of the minimum time function). This is a consequence of the comparison principle for viscosity solutions of the Hamilton-Jacobi equation, or of the optimality principle.
Remark 3. Concerning the lack of symmetry of J_k, notice that a(·) ∈ A_{x,z} if and only if the time-reversed control steers z to x along the backward system ż = −f(z, a). We will see that the HJ equation for the backward system plays an important role, paired with (17).
We now define generalized cone functions.
In particular, for k > 0 a cone C(x) = J_k(x, z) + b with vertex z is a viscosity solution of H(x, DC(x)) = k in Ω \ {z}, while for k < 0 a cone C(x) = −J_{|k|}(z, x) + b is such that −C is a viscosity solution of H(x, −D(−C)(x)) = |k| in Ω \ {z}.

Remark 4. We observe that if C(x) = J_k(x, z) + b is a cone with positive slope k for the Hamiltonian H(x, p), then by Remark 3, −C(x) = −J_k(x, z) − b is a cone with negative slope −k for the Hamiltonian H(x, −p). When the semi-distance J_k is symmetric, then −C is a real cone with negative slope.
We introduced the cones in the whole reference set Ω. However, we need to extend the family of cones used in the following and define constrained generalized cone functions. These are solutions of the HJ equation in open and connected subsets. Given an open, connected and bounded set D ⊂⊂ Ω, for x, z ∈ D we define the family of controls

A^D_{x,z} = {a(·) ∈ A_{x,z} : y_x(s; a) ∈ D for all s ∈ [0, t_{x,z}]}.

We also define the function

J^D_k(x, z) = inf_{a(·)∈A^D_{x,z}} ∫_0^{t_{x,z}} (h(y(t), a(t)) + k) dt.

Observe that J^D_k is again a semi-distance in D × D. Notice that if z ∈ ∂D and A^D_{x,z} ≠ ∅, then for a(·) ∈ A^D_{x,z}, in the above t_{x,z} = τ_x, where τ_x denotes the exit time of the trajectory y_x(·; a) from D, i.e.
Moreover, in this case A^D_{y,z} ≠ ∅ for all y ∈ D. We name the accessible part of the boundary the set ∂_a D = {z ∈ ∂D : A^D_{x,z} ≠ ∅ for some x ∈ D}. We extend J^D_k at every boundary point as a lower semicontinuous function J^D_k : D × D̄ → [0, +∞]. Observe that if z ∈ D and x, y ∈ D, then J^D_k still satisfies the triangle inequality; moreover, there is r > 0 sufficiently small such that a corresponding inequality holds for J(x, y) ≤ r, where the modulus ω_r only depends on the distance of y from ∂D, as a consequence of the next Lemma 2.3.
We summarize the previous remarks with the following definition.

Definition 2.2.
Given the open, connected and bounded set D ⊂⊂ Ω and z ∈ D̄, we define a cone relative to D with slope k > 0 and vertex z as C(x) = J^D_k(x, z) + b, where J^D_k : D × D̄ → [0, +∞] is constructed above. We define cones with negative slope according to Definition 2.1.

Remark 5. Notice that in the definition of the cone
Remark 6. We note that the set ∂_a D is a dense subset of ∂D. Indeed, let z ∈ ∂D and x_n ∈ D be a sequence such that x_n → z. Let us choose a sequence of controls {a_n} so that, with the unconstrained distance, t_{x_n,z}(a_n) ≤ J(x_n, z) + 1/n, and let y_n = y_{x_n}(τ_{x_n}; a_n) ∈ ∂_a D, where τ_{x_n} is the exit time from D. Therefore, by our assumptions, τ_{x_n} ≤ t_{x_n,z}(a_n) ≤ ω_D(σ) + 1/n whenever |x_n − z| ≤ σ and σ is sufficiently small. On the other hand, by definition of trajectory and standard estimates we get |y_n − z| ≤ |x_n − y_n| + |x_n − z| ≤ C τ_{x_n} + C τ_{x_n}^{1−1/r} + |x_n − z|, and we obtain that y_n → z as n → +∞. A useful consequence is the following. Let D ⊂⊂ Ω and u, V : D ∪ ∂_a D → R be two upper and lower semicontinuous functions, respectively, that satisfy in the viscosity sense H(x, Du(x)) ≤ k and H(x, DV(x)) ≥ k in D. Then by the superoptimality principle for HJB equations, which is proved in the two cases A bounded and unbounded in [37,26], we have that V satisfies the implicit representation formula

V(x) = inf_{a(·)∈A} { ∫_0^t (h(y(s), a(s)) + k) ds + V(y(t)) : y(s) ∈ D for s ∈ [0, t] }. (24)

Thus for all ε > 0 we can find a control such that

∫_0^t (h(y(s), a(s)) + k) ds + V(y(t)) ≤ V(x) + ε.

The superoptimality principle (24) is a more delicate and technical property of supersolutions of HJ equations compared to the suboptimality (14).
The distance J_k previously introduced is natural for the control system. The reason is the following simple consequence of the suboptimality principle, showing that the class of subsolutions enjoys better regularity. The function J was already introduced in [33] for coercive Hamiltonians and used to study the solvability of the Dirichlet boundary value problem for HJ equations.
Thus it is locally max{1, k}-Lipschitz continuous with respect to Ĵ.
Proof. Let x, z ∈ D and a ∈ A^D_{x,z}. Then by (14), u(x) − u(z) ≤ ∫_0^{t_{x,z}} (h(y(t), a(t)) + k) dt, and taking the infimum over A^D_{x,z} gives u(x) − u(z) ≤ J^D_k(x, z).

We proceed with the following Lemma.
Proof. Observe that Ĵ(x, z) ≤ Ĵ(x, y) + Ĵ(y, z) ≤ 2R. Moreover, for all w ∈ ∂D we have J(w, y) ≤ J(w, z) + J(z, y) ≤ J(w, z) + R, so that inf_{w∈∂D} J(w, z) > 3R. For ε < R, by the superoptimality principle (24) applied to V(x) = J(x, z), we find a ∈ A_{x,z} such that

∫_0^{t_{x,z}} (h(y(s), a(s)) + 1) ds ≤ J(x, z) + ε,

where we may suppose that y(t) ≠ z for t < t_{x,z}. Thus y(t) ∈ D and then a ∈ A^D_{x,z}. In particular, as t → t_{x,z} we obtain the desired estimate, and the conclusion follows as ε ↓ 0.

Example 1. There is a major interesting case contained in our framework, where the Lipschitz continuity of subsolutions of HJ equations has a richer structure. This is the case of what we call Carnot-Carathéodory HJ equations. The starting point is a family of m ≤ n Lipschitz continuous vector fields σ_j : Ω → R^n, j = 1, . . . , m, which we put as columns of the matrix valued function σ : Ω → R^{n×m}. To such a family of vector fields we associate another deterministic optimal control system, namely

ẏ(t) = σ(y(t)) ã(t),

for any given control function ã(·) ∈ L^∞(0, +∞; B_1(0)) (here B_1(0) ⊂ R^m is the Euclidean unit ball). We define the family of vector fields {σ_j} as a Carnot-Carathéodory family when, for all x, z ∈ Ω, some trajectory of this system joins x to z in finite time. We can therefore define the Carnot-Carathéodory distance d_CC associated to the family {σ_j}. This happens in particular when the matrix A(x) = σ(x)σ^t(x) is positive definite at each point x (m = n, Riemannian case), or, as a consequence of Chow's Theorem, in the sub-Riemannian case, when the vector fields are smooth and satisfy the Hörmander condition, namely the vector space generated by the family of vector fields {σ_j : j = 1, . . . , m} and their Lie brackets is R^n at each point of Ω. It is also well known that if the vector fields σ_j ∈ C^∞(Ω) and their Lie algebra satisfy the Hörmander finite rank condition of order r, then for any open and connected D ⊂⊂ Ω the distance d_CC satisfies the estimate

d_CC(x, y) ≤ C_D |x − y|^{1/r}, x, y ∈ D,

and we can choose r = 1 in the Riemannian case. Thus it satisfies (16).
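The Hörmander condition can be made concrete with the classical Heisenberg example (standard in the sub-Riemannian literature, not taken from this paper):

```latex
% Heisenberg vector fields in \mathbb{R}^3 (m=2 < n=3):
X_1 = \partial_x - \tfrac{y}{2}\,\partial_z, \qquad
X_2 = \partial_y + \tfrac{x}{2}\,\partial_z, \qquad
[X_1,X_2] = \partial_z .
% The fields together with their first bracket span \mathbb{R}^3 at every
% point, so the Hörmander condition holds with step r=2 and, on bounded sets,
%   d_{CC}(x,y) \le C_D\,|x-y|^{1/2},
% while d_{CC} is not locally equivalent to the Euclidean distance.
```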
We consider now a Hamiltonian with the following structure

H(x, p) = H̃(x, σ^t(x)p), (4)

where the Hamiltonian H̃, defined on Ω × R^m, is coercive, namely in this case for some r, δ > 0,

H̃(x, q) ≥ δ|q|^r − 1/δ, for all (x, q) ∈ Ω × R^m.

In the case σ ≡ I_{n×n}, H is coercive and then d_CC(x, y) is the geodesic Euclidean distance in Ω, which is locally equivalent to the standard Euclidean distance.
Observe that, by the coercivity of H̃, the semi-distance J_k is also a viscosity subsolution of the eikonal inequality |σ^t(x) D_x J_k(x, z)| ≤ R_k for a suitable R_k > 0. Thus by the optimality principle applied to the last HJ inequality we locally obtain

J_k(x, z) − J_k(y, z) ≤ R_k d_CC(x, y),

since the right hand side represents a solution of the corresponding eikonal pde. In particular, all of the subsolutions of (13) are also locally d_CC-Lipschitz continuous. This property allows us to use the well established theory of Sobolev spaces in the context of Carnot-Carathéodory distances, as discussed by Franchi-Serapioni-Serra Cassano [24,23], Garofalo-Nhieu [27], see also Franchi-Hajlasz-Koskela [25] and the very recent book by Bonfiglioli-Lanconelli-Uguzzoni [13]. We denote the vector fields σ_j as differential operators X_j u(x) = σ_j(x) · Du(x), j = 1, . . . , m. It is well known, among many other properties, that if the distance d_CC is uniformly continuous as described in (16), then u is d_CC-Lipschitz continuous in D ⊂⊂ Ω if and only if u ∈ W^{1,∞}_X(D). In particular, the weak gradient Xu = (X_1 u, . . . , X_m u) is defined a.e. in D and is essentially bounded. Moreover, these weak derivatives can be interpreted in an appropriate pointwise sense, see e.g. Pansu [36], Monti-Serra Cassano [35] and Monti [34].
It then makes sense to interpret the HJ inequality

H̃(x, Xu(x)) ≤ k, x ∈ D, (26)

both in the viscosity sense and as

H̃(x, Xu(x)) ≤ k, a.e. x ∈ D. (27)

We proved in [39] that v satisfies (26) in the viscosity sense if and only if v is d_CC-Lipschitz continuous and satisfies (27). The main example of CC-HJ equations of interest is the Carnot-Carathéodory eikonal equation |Xu(x)| = k, which gives rise to the infinity-Laplace equation with respect to the CC family of vector fields. More comments will follow later in Remark 9.
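In the eikonal case the relation between the distances J_k and d_CC is exact; a short computation, assuming (as in Remark 2) the representation f(x, a) = σ(x)a with h ≡ 0:

```latex
% Eikonal case: f(x,a)=\sigma(x)a,\ A=\overline{B_1(0)}\subset\mathbb{R}^m,\ h\equiv 0.
% The running cost is then the constant k, so
J_k(x,z) = \inf_{a(\cdot)\in\mathcal{A}_{x,z}} \int_0^{t_{x,z}} k \, dt
         = k\, T(x,z) = k\, d_{CC}(x,z),
% where T(x,z) is the minimum time to steer x to z. Thus J_k(\cdot,z) is an
% exact multiple of the CC distance, consistent with the positive
% 1-homogeneity of q \mapsto |q|.
```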
We need one last stringent structural property for our system: we assume that for any open and connected D ⊂⊂ Ω, condition (28) below holds.

Remark 7. We have two main examples for condition (28). The first is when A is bounded; then by definition of J_k we always have the required estimate. The second is when A is unbounded, but the Hamiltonian satisfies

H(x, λp) ≤ ρ(λ) H(x, p), for all λ > 0,

where ρ : (0, +∞) → (0, +∞) is increasing and surjective. In particular, H(x, ·) may be positively s-homogeneous for all x ∈ Ω, as in the case of the eikonal Hamiltonian (5). Indeed, notice that w(x) := ρ^{−1}(k) J(x, z) satisfies H(x, Dw(x)) ≤ k H(x, D_x J(x, z)) ≤ k in the viscosity sense for x ∈ Ω. Therefore by (14), for all a ∈ A_{x,z} we get w(x) ≤ ∫_0^{t_{x,z}} (h(y(t), a(t)) + k) dt, and thus ρ^{−1}(k) J(x, z) ≤ J_k(x, z).

3. Cone functions and absolute minimizers. In this section we discuss the role of the so-called (generalized) cone functions, introduced to study the infinity-Laplace equation by Crandall, Evans and Gariepy [18]. These are viscosity solutions of the Hamilton-Jacobi equation, but they also have an important role in the theory of absolutely minimizing functions. We will show how we can characterize absolute minimizers by means of the cone functions that we earlier introduced. The next section will deal with applications of this result, such as the general derivation of a-priori continuity estimates and the Harnack inequality for absolute minimizers of homogeneous Hamiltonians. We briefly recall that, given a function u ∈ C(D), the (viscosity) superdifferential of u at a point is defined in the standard way. We introduce the definition of absolute minimizer for a noncoercive Hamiltonian H(x, p) as follows.
The definition of absolute minimizer is really a two-sided condition, as the next Proposition shows.

Proof. We start with a subsolution u ∈ C(D) of H(x, Du(x)) ≤ k and let ϕ ∈ C¹(D) be such that (−u) − ϕ attains a local maximum at x_o ∈ D and (−u)(x_o) = ϕ(x_o). For any given a_o ∈ A we consider the constant control function a(·) ≡ a_o and the trajectory of the backward system

ż(s) = −f(z(s), a_o), z(0) = x_o.

Therefore for any t > 0 we have z(t − s) = y_{z(t)}(s; a) for s ∈ (0, t) and y_{z(t)}(t; a) = x_o. By the suboptimality principle (14) we then obtain

u(z(t)) ≤ ∫_0^t (h(z(s), a_o) + k) ds + u(x_o).

Therefore, for any t sufficiently small,

ϕ(x_o) − ϕ(z(t)) ≤ (−u)(x_o) − (−u)(z(t)) ≤ ∫_0^t (h(z(s), a_o) + k) ds.

Dividing by t > 0 and letting t → 0 we finally get

Dϕ(x_o) · f(x_o, a_o) − h(x_o, a_o) ≤ k.

Hence, since a_o ∈ A is arbitrary, H(x_o, −Dϕ(x_o)) ≤ k. The opposite implication is analogous, applying the previous argument to the Hamiltonian H(x, −p) and its subsolution v = −u.
We now proceed with a few comments on the definition.

Remark 8. The two conditions in the definition require that H(x, Du(x)) ≤ k and H(x, −D(−u)(x)) ≤ k hold in the viscosity sense, respectively. These are equivalent differential inequalities by Proposition 2. It is now immediate to check that u is an absolute minimizer for the Hamiltonian H(x, p) if and only if −u is an absolute minimizer for the Hamiltonian H(x, −p). This is an important symmetry property of the family of absolutely minimizing functions that we will use in the following.
Remark 9. Our definition may look unusual compared to the one found in the standard literature for coercive Hamiltonians, so we pause a little to explain it. The classical definition of absolute minimizer aims at minimizing the quantity k(v, D) = ess sup_{x∈D} H(x, Dv(x)) among locally Lipschitz continuous functions with respect to the Euclidean metric. A standard result in viscosity solutions theory, see e.g. [5], shows that for v ∈ W^{1,∞}_{loc}(D) we have k(v, D) ≤ k if and only if v is a viscosity subsolution of

H(x, Dv(x)) ≤ k, x ∈ D. (31)

Notice however that H(x, Dv(x)) = H(x, −D(−v)(x)) for a.e. x ∈ D, and then (31) is also equivalent to

H(x, −D(−v)(x)) ≤ k, x ∈ D, in the viscosity sense. (32)

If we allow Hamiltonians to be noncoercive, subsolutions of HJ equations are only Lipschitz continuous with respect to the distance Ĵ. This fact does not always translate into nice pointwise differentiability properties of functions, or at least this is not known in full generality, as far as we know. Another very interesting case is that of Carnot-Carathéodory Hamiltonians, where our definition can again be rephrased in a more elegant and classical fashion. From Example 1, when H̃ is coercive and {σ_j} is a family of CC vector fields, all viscosity subsolutions of (26) are Lipschitz continuous with respect to the Carnot-Carathéodory distance d_CC (or d_CC-Lipschitz continuous). Since we also have H̃(x, Xu(x)) = H̃(x, −X(−u)(x)) for a.e. x ∈ D, it is then easy to check, see also [39], that the definition of absolute minimizer can be rephrased accordingly.

Definition 3. Below we refer to the following terminology: u ∈ C(Ω) is a bilateral solution of H(x, Du(x)) = k in Ω if for all ϕ ∈ C¹(Ω) and x_o ∈ Ω which is either a local maximum or a local minimum of u − ϕ, we have H(x_o, Dϕ(x_o)) = k. Notice that u ∈ C(Ω) is a bilateral solution of H(x, Du(x)) = k in Ω if and only if u is a viscosity solution of H(x, Du(x)) = k in Ω and w = −u is a viscosity solution of H(x, −Dw(x)) = k in Ω.
We now study basic properties of cones. Our generalized cones satisfy the Hamilton-Jacobi equation but they are not bilateral solutions. Therefore cones are not absolute minimizers in general as these are equivalent properties, see [38]. This is a major difference with the case of the Lipschitz extension problem and the infinity Laplace equation. The following comparison and uniqueness statement holds.
Proposition 3. Let W ⊂⊂ Ω be open and bounded, z ∉ W. (i) Let C(x) = J_k(x, z) + b be a cone with slope k > 0 and let u ∈ C(W̄) be a viscosity subsolution of H(x, Du(x)) ≤ k, x ∈ W, such that u(x) = C(x) for all x ∈ ∂W. Then C ≥ u. If C ∈ C(W̄) is a bilateral solution of the HJ equation in W, then C ≡ u; in this case C is an absolute minimizer in W and it is the unique absolute minimizer u ∈ C(W̄) of H such that u = C on ∂W.
(ii) Let C(x) = −J_{|k|}(z, x) + b be a cone with slope k < 0 and let u ∈ C(W̄) be such that −u is a viscosity subsolution of H(x, −D(−u)(x)) ≤ |k|, x ∈ W, with u(x) = C(x) for all x ∈ ∂W. Then C ≤ u. If C ∈ C(W̄) is a bilateral solution of the HJ equation in W, then C ≡ u; in this case −C is an absolute minimizer in W and it is the unique absolute minimizer u ∈ C(W̄) of H(x, −p) such that u = C on ∂W. The statements hold for cones relative to connected, open sets D ⊃ W as well, in which case we require the cone to be finite (since it may be that z ∈ ∂D ∩ ∂W).
Proof. The proof remains unchanged whether we deal with cones or cones relative to D ⊃ W. We suppose first that k > 0 and prove (i). Assume that for some x̂ ∈ W we have u(x̂) > C(x̂). Notice that, by definition of cones, we can select â ∈ A, an optimal control for the representation formula of the superoptimality principle (24) applied to C, such that

∫_0^{τ_x̂} (k + h(y(s), â(s))) ds + C(y(τ_x̂)) ≤ C(x̂),

where y = y_x̂(·; â) is a solution of the control system (11), and τ_x̂ is the exit time of the trajectory from W. Notice that τ_x̂(â) is finite if C(x̂) < +∞; therefore, if we set x* = y(τ_x̂) ∈ ∂W, then we obtain

u(x̂) > C(x̂) ≥ u(x*) + ∫_0^{τ_x̂} (k + h(y(s), â(s))) ds,

which is a contradiction with (14) applied to u.
We assume now that C is a bilateral solution of the HJ equation, and for some x̂ ∈ W we have u(x̂) < C(x̂). Since C is a bilateral viscosity solution, by Proposition 4.5 in [38], x̂ is crossed by an optimal trajectory, i.e. there are x ∈ W, t > 0 and a control a(·) ∈ A such that y(s; a) ∈ W for all s ∈ (0, t), y(0) = x, y(t) = x̂ and

C(x) = ∫_0^t (k + h(y(s), a(s))) ds + C(x̂).
Since the running cost inside the integral is strictly positive, W is bounded and C ∈ C(W̄), this trajectory can be extended backward. Namely, we may assume that y(0) = x* ∈ ∂W and that t* > 0 is the corresponding time to reach the point x̂. Then we compute

u(x̂) < C(x̂) = C(x*) − ∫_0^{t*} (k + h(y(s), a(s))) ds = u(x*) − ∫_0^{t*} (k + h(y(s), a(s))) ds,

which is again a contradiction with (14) applied to u. Indeed, by (14), u(y(s)) ≤ ∫_s^{t*} (k + h(y(r), a(r))) dr + u(x̂) for all s ∈ (0, t*); then we let s → 0+.
Concerning the last statement in (i), let u ∈ C(W̄) be an absolute minimizer for H such that u = C on ∂W. Then, from the fact that C is a bilateral solution of the HJ equation in W, it follows that H(x, Du(x)) ≤ k in W. Thus by the above C ≡ u. To show that C is indeed an absolute minimizer, let D ⊂⊂ W and v ∈ C(D̄) be such that C = v on ∂D and H(x, Dv) ≤ k̃, H(x, −D(−v)) ≤ k̃ in D in the viscosity sense. If k̃ ≥ k, then we are all set with (30) by the properties of C and the definition of bilateral solution. If instead k̃ < k, then by the first part C ≡ v and we get a contradiction.
The statement (ii) can be reduced to case (i) by referring to −C as a cone with positive slope and −u as a subsolution and using the Hamiltonian H(x, −p).
Example 2. We want to discuss a motivating example for our work, showing that our definition allows numerous absolute minimizers which are not in W^{1,∞}_{loc}. By Proposition 3 we just need to consider cones in open sets where they are bilateral solutions of the HJ equation. In the plane, we initially consider the Hamiltonian H(x, y, p_x, p_y) = −y p_x + |p_y| and the corresponding control system for controls a ∈ [−1, 1]. The two vector fields (y, 1), (y, −1) provide total controllability of the control system. One can compute explicitly that the function u, where g(y) = y²/2 for y ≤ 0 and g(y) = −y²/2 elsewhere, is a bilateral solution of the HJ equation. The function u is of class C^{0,1/2}(W) but not Lipschitz continuous, if W ⊂ R² \ {(0, 0)} intersects the graph of the function g. In order to have a Hamiltonian bounded from below, we now consider H(x, y, p_x, p_y) = max{0, −y p_x + |p_y|}.

We now turn to a characterization of absolute minimizers. We start with the following definition.
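The control-theoretic structure behind Example 2 can be spelled out; the following is a sketch, identifying H as the Bellman Hamiltonian of the system generated by the two vector fields (the running cost is taken to be zero here):

```latex
% Control system generated by the vector fields (y, 1) and (y, -1):
\dot{x}(s) = y(s), \qquad \dot{y}(s) = a(s), \qquad a(s) \in [-1, 1].
% Its Bellman Hamiltonian is the support function of the admissible velocities:
H(x, y, p_x, p_y) \;=\; \max_{a \in [-1,1]} \bigl( -\,y\,p_x - a\,p_y \bigr)
\;=\; -\,y\,p_x + |p_y| .
```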
We say that u satisfies the Comparison with Cones from Below, u ∈ CCB(U) for short, if for any open, connected and bounded set V ⊂⊂ U, k < 0, z ∉ V and cone C(x), either generalized or relative to an open set containing V, with slope k and vertex z, we have
u(x) ≥ C(x) − sup_{w ∈ ∂V} {C(w) − u(w)}, for all x ∈ V. (35)
We generically say that u satisfies the Comparison with Cones if u ∈ CCA(U) ∩ CCB(U) = CC(U).
Remark 10. Notice that the inequality (35) holds trivially if the cone C is relative to V and its vertex is not an accessible point of the boundary, so this case is not interesting. Also, the sup in the right hand side can be computed only at the points of ∂_a V. Observe that u ∈ CCA(U) for H(x, p) if and only if −u ∈ CCB(U) for H(x, −p). Therefore if u ∈ CC(U) for H then −u ∈ CC(U) for H(x, −p).
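In this notation, the two comparison properties can be restated schematically as follows (for relative cones the boundary must be replaced by the accessible boundary ∂_a V, as in Remark 10):

```latex
% CCA(U): for every cone C with slope k > 0 and vertex z \notin V,
u(x) \;\le\; C(x) + \sup_{w \in \partial V} \bigl( u(w) - C(w) \bigr),
\qquad x \in V \subset\subset U ;
% CCB(U): for every cone C with slope k < 0 and vertex z \notin V,
u(x) \;\ge\; C(x) - \sup_{w \in \partial V} \bigl( C(w) - u(w) \bigr),
\qquad x \in V \subset\subset U .
```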
We now proceed with the following characterization of absolute minimizers.

Proof. We start by assuming that u ∈ C(U) satisfies comparison with cones, and for V ⊂⊂ U we take a viscosity subsolution v ∈ C(V) of H(x, Dv(x)) ≤ k in V. For all w, z ∈ ∂V we can choose sequences w_n, z_n ∈ V such that w_n → w, z_n → z and J^V_k(w_n, z_n) → J^V_k(w, z). By (14), we know that if a control a_n ∈ A^V_{w_n, z_n} and y_n(·) = y_{w_n}(·; a_n) is the corresponding trajectory of the control system, y_n(s) ∈ V, s ∈ (0, t_n), y_n(0) = w_n, y_n(t_n) = z_n, then
v(w_n) − v(z_n) ≤ ∫_0^{t_n} (h(y_n(t), a_n(t)) + k) dt.
Hence v(w_n) − v(z_n) ≤ J^V_k(w_n, z_n), and then
v(w) − v(z) ≤ J^V_k(w, z), for all w, z ∈ ∂V.
Therefore, since u ∈ CCA(U), for x ∈ V, z ∈ ∂V we obtain u(x) − u(z) ≤ J^V_k(x, z), and then
u(x) − u(y) ≤ J^V_k(x, y), for all x, y ∈ V.
We have also deduced that for any x ∈ V and any control a(·) ∈ A such that the corresponding trajectory satisfies y_x(s; a) ∈ V for s ∈ [0, t], t > 0 sufficiently small, we have
u(x) ≤ ∫_0^t (h(y_x(s; a), a(s)) + k) ds + u(y_x(t; a)).
By the standard dynamic programming principle approach it is then straightforward to conclude that u is a viscosity subsolution of H(x, Du(x)) ≤ k in V.

For the opposite implication, let u ∈ C(U) be an absolute minimizer. We prove that u ∈ CCA(U). We take any open and connected V ⊂⊂ U and by contradiction suppose that for a given slope k > 0 and vertex z ∉ V the open set
W = {x ∈ V : u(x) > J_k(x, z) + sup_{w∈∂V} (u(w) − J_k(w, z))}
is nonempty. In the above, J_k can be a generalized cone or a cone relative to an open set containing V, but we avoid here the superscript. In particular, W can be nonempty only if J_k(x, z) < +∞ for some x ∈ V, so z ∈ ∂_a V if, say, J_k is relative to V. We now define the cone C(x) = J_k(x, z) + sup_{w∈∂V} {u(w) − J_k(w, z)} and notice that C = u on ∂W, because C ≥ u on ∂V by definition and C is lower semicontinuous in V \ W. From here and the definition of W it also follows that J_k(·, z) ∈ C(W).
Since H(x, DC(x)) ≤ k in the viscosity sense in W, by the assumption on u we must have that H(x, Du(x)) ≤ k in the viscosity sense in W. Therefore by Proposition 3 we deduce that u ≤ C in W, which contradicts the definition of W.
To prove that u ∈ CCB(U ), we can prove instead that (−u) ∈ CCA(U ) for the Hamiltonian H(x, −p). For this we proceed as above by using the fact that −u is an absolute minimizer for the Hamiltonian H(x, −p).

Remark 11.
We may wonder whether a function u ∈ CC(U) locally satisfies a differential inequality H(x, Du(x)) ≤ k in the viscosity sense, for k ∈ R large enough. This is suggested by the equivalence with the notion of absolute minimizer. A positive answer to this important and not obvious question, since u is only continuous, will be given in the next section, see Remark 12.

4. Continuity estimates and a Harnack-type inequality. In this section, given D ⊂ Ω open, we look for a-priori local continuity estimates for functions in CCA(D) or CCB(D) that allow us to show that these classes of functions are closed with respect to the sup or inf operation, respectively. Below we use the fact that Ĵ(x, z) = max{J(x, z), J(z, x)} is a distance, and we use the notation B_R(z) = {x : Ĵ(x, z) ≤ R}. For simplicity of notation, here we put d(y) = inf_{w∈∂D} J(w, y).
The following is an important consequence of the comparison with cones. For E ⊂ Ω and R > 0, we denote below Ê_R = {x : Ĵ(x, z) ≤ 3R for some z ∈ E}.

Proof. Let z ∈ E and R < (1/5) inf_{w∈E} d(w). Notice that Ê_R ⊂⊂ D and choose k > 0 sufficiently large, as allowed by (28). Then if we take y ∈ B_R(z) and x such that Ĵ(x, y) = 2R, we get Ĵ(x, z) ≤ 3R, so x ∈ Ê_R. Therefore by (28) we obtain 2‖u‖_∞ ≤ J_k(x, y), and then u(x) − u(y) ≤ 2‖u‖_∞ ≤ J_k(x, y). Thus if we define the cone C(·) = u(y) + J_k(·, y), by the assumption applied to the set {x : Ĵ(x, y) < 2R} ⊂ D, we get u(x) ≤ u(y) + J_k(x, y) for all x with Ĵ(x, y) ≤ 2R. Now take any x, y ∈ B_R(z) and observe that J(x, y) ≤ Ĵ(x, y) ≤ Ĵ(x, z) + Ĵ(z, y) ≤ 2R.
We have proved that
u(x) − u(y) ≤ J_k(x, y), for all x, y ∈ B_R(z).
The Lipschitz estimate follows by reversing the roles of x and y, since Ĵ is symmetric.
Remark 12. We notice that from the proof of Proposition 4, by the first inequality in (37), we also obtain that for x, y ∈ K = {w : Ĵ(w, z) < R} and all controls a ∈ A^K_{x,y},
u(x) ≤ J_k(x, y_x(t; a)) + u(y_x(t; a)) ≤ J^K_k(x, y_x(t; a)) + u(y_x(t; a)) ≤ ∫_0^t (h(y_x(s; a), a(s)) + k) ds + u(y_x(t; a)),
for all t ∈ (0, t_{x,y}). Given x ∈ K, this in particular holds for constant controls and short time. By the dynamic programming principle approach, it then follows that u is a viscosity subsolution of
H(x, Du(x)) ≤ k (38)
for k sufficiently large, as in the proof of the previous result. Therefore all functions u ∈ CCA(D) are locally subsolutions of (38) for an appropriate choice of k.
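The dynamic programming step invoked here is standard; as a sketch, assume the control dynamics is ẏ = f(y, a) with running cost h and that H is the associated Bellman Hamiltonian H(x, p) = sup_a (−p · f(x, a) − h(x, a)) (this form is consistent with Example 2). Then for a smooth test function Φ touching u from above at x:

```latex
% u - \Phi has a local maximum at x with u(x) = \Phi(x); insert \Phi in the
% integral inequality of Remark 12 with a constant control a and small t:
\Phi(x) \;\le\; \int_0^t \bigl( h(y_x(s;a), a) + k \bigr)\, ds
\;+\; \Phi\bigl(y_x(t;a)\bigr).
% Divide by t and let t \to 0^+, using \dot y_x(0) = f(x, a):
-\,D\Phi(x)\cdot f(x,a) \;-\; h(x,a) \;\le\; k \qquad \text{for every control value } a,
% and taking the supremum over a yields  H(x, D\Phi(x)) \le k .
```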
The following consequences, that we can derive from the previous result, follow the corresponding ones proved for the Lipschitz extension problem in [4].

Corollary 1. Suppose that F ⊂ CCA(D) is a family of continuous functions. Let
h(x) = sup_{v∈F} v(x),
and assume that h is locally bounded from above. Then h ∈ CCA(D) ∩ C(D).
Proof. We begin with the comparison with cones property. Let U ⊂⊂ D and C be a cone with vertex z ∉ U. If C is a cone relative to some set V ⊃ U, we may substitute below the boundaries with the accessible parts of the boundaries. Then the comparison with cones from above implies, for every v ∈ F and x ∈ U,
v(x) ≤ C(x) + sup_{w∈∂U} {v(w) − C(w)} ≤ C(x) + sup_{w∈∂U} {h(w) − C(w)}.
By taking the supremum over v ∈ F, we conclude that the corresponding property is satisfied by h.
Thus if v_o ∈ F is fixed and we consider the new family of functions F̃ = {max{v, v_o} : v ∈ F}, then F̃ ⊂ CCA(D), it is a family of continuous functions and h(x) = sup_{v∈F̃} v(x). Moreover, the functions in F̃ are locally equibounded. By Proposition 4 the functions in F̃ are locally equicontinuous, and the continuity of h follows by locally restricting to a sequence v_n ∈ F̃ such that h = sup v_n.
In the last part of this section, we restrict ourselves to positively homogeneous Hamiltonians of degree s > 0. We observe that locally J_{k^s} = kJ for k > 0. Indeed H(x, D_x(kJ(x, y))) = k^s H(x, D_x J(x, y)) = k^s in the viscosity sense. Therefore, by uniqueness in the eikonal equation (minimum time problem), see [6, 5], J_{k^s}(·, y) = kJ(·, y) in the open sets {x : J(x, y) < r} whose closure is contained in Ω, and similarly J_{k^s}(x, ·) = kJ(x, ·) in the open sets {y : J(x, y) < r} whose closure is contained in Ω.
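The scaling identity follows in one line from s-homogeneity in p, sketched as a display:

```latex
% If H(x, \lambda p) = \lambda^s H(x, p) for all \lambda > 0, then for k > 0
H\bigl(x, D_x(k\,J(x,y))\bigr) \;=\; k^s\, H\bigl(x, D_x J(x,y)\bigr) \;=\; k^s ,
% so k\,J(\cdot, y) solves the same eikonal equation as J_{k^s}(\cdot, y),
% and local uniqueness gives  J_{k^s}(\cdot, y) = k\, J(\cdot, y).
```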
We start with the following useful variation of the comparison with cones statement.
Proof. We observe that u ≤ C on ∂V, apply the comparison with cones from above, and then let λ → 0.
A simple consequence of the previous estimate is a form of the strong maximum principle.

Proof. We show that u is locally constant. We use (39) for x = x̂ and r sufficiently small: for J(x, y) < r < min_{w∈∂D} J(y, w), from (39) we obtain u(x) ≤ u(y).
The proof of the following result, which is a consequence of (39), is a modification of the corresponding one in [4], obtained for the infinity-Laplace equation, so we skip it. It contains a Harnack inequality for functions enjoying comparison with cones from above or from below for homogeneous Hamiltonians, and a different uniform continuity estimate with the distance function Ĵ.

5. Perron's method and existence of absolute minimizers. The standard method in the literature for proving existence of absolute minimizers for coercive Hamiltonians consists in taking limits of solutions of approximating minimization problems in L^p, see Jensen [28] and Barron-Jensen-Wang [9, 10]. In this section we prove the existence of absolute minimizers by Perron's method, a pde rather than a variational approach. Usually the method requires the use of a comparison principle, see [17]. Here we use instead the a-priori continuity estimate given in the previous section for functions subject to comparison with cones. We follow the lines of the approach described in Aronsson-Crandall-Juutinen [4] for the Lipschitz extension problem and the infinity Laplacian, which we adapt to our general setting. For convenience, we will put ourselves in the case Ω = R^n. The main result of this section is the following.
Theorem 5.1. Let D ⊂ R^n be an open set and g ∈ C(∂D) be such that for some b−, b+ ∈ R, k− < 0, k+ > 0, z ∈ ∂D the following estimate holds:
−J_{|k−|}(z, x) + b− ≤ g(x) ≤ J_{k+}(x, z) + b+, for all x ∈ ∂D. (42)
Then there exists an absolute minimizer u : D → R such that u = g on ∂D and
−J_{|k−|}(z, x) + b− ≤ u(x) ≤ J_{k+}(x, z) + b+, for x ∈ D.

We begin with a preliminary result that we use several times in the proofs below. We noticed that cones need not be absolute minimizers. However, for their special structure, the following property holds.
Proof. We will suppose for simplicity that C(x) = −J(y, x). Then recall that by definition of J we have H(x, D_x J(y, x)) ≤ 1 in the viscosity sense. Moreover, if J(y, ·) − Φ has a minimum at x_o, then in fact H(x_o, DΦ(x_o)) = 1 (a value function of an optimal control problem is a bilateral supersolution). Therefore H(x_o, −DΦ(x_o)) = 1 in the viscosity sense whenever −J(y, ·) − Φ has a maximum at x_o, since this implies that J(y, ·) − (−Φ) has a minimum at x_o. Suppose now that −J(y, ·) ∉ CCA(R^n \ {y}). Then we can find an open and bounded set D ⊂⊂ Ω, y ∉ D, a vertex z ∉ D and k > 0 such that, for
Ĉ(x) = J_k(x, z) + max_{w∈∂D} {−J(y, w) − J_k(w, z)},
the set W = {x ∈ D : −J(y, x) > Ĉ(x)} is open and nonempty. Indeed −J(y, w) ≤ Ĉ(w) for w ∈ ∂D, and Ĉ is lower semicontinuous on W̄ and continuous in W. The cone Ĉ can be relative to an open set containing D, and then above and in the rest of the argument J_r(x, z) has to be replaced with J^D_r(x, z). In this case we also have Ĉ ∈ C(W̄).
Notice that Ĉ satisfies H(x, DĈ(x)) ≤ k in the viscosity sense, and therefore −Ĉ satisfies the corresponding inequality for H(x, −p) in the viscosity sense, as seen above. Now if 1 ≥ k then we apply Proposition 3 to (43) and (46) and obtain that −Ĉ(x) ≤ J(y, x) for x ∈ W. If instead 1 ≤ k then we apply Proposition 3 to (44) and (45) and get −J(y, x) ≤ Ĉ(x) for x ∈ W. In either case W must be empty, which is a contradiction.
The construction of the absolute minimizer starts with the study of the two following envelopes of cones. We define h̲, h̄ : R^n → R by
h̲(x) = sup { −J_{|k|}(w, x) + b : k < 0, w ∈ ∂D, b ∈ R, −J_{|k|}(w, ·) + b ≤ g on ∂D },
h̄(x) = inf { J_k(x, w) + b : k > 0, w ∈ ∂D, b ∈ R, J_k(·, w) + b ≥ g on ∂D }.
The first step shows that the previous functions are well defined and enjoy suitable properties.
Proof. Notice first that the cones C+(x) = J_{k+}(x, z) + b+ and C−(x) = −J_{|k−|}(z, x) + b− can be used to define h̄ and h̲, respectively, where z, b±, k± are as in the assumption (42). Therefore the two families of cones are nonempty. We proceed to show that for all w ∈ ∂D, ε > 0 there is k > k+ > 0 such that the cone C(x) = J_k(x, w) + g(w) + ε satisfies C ≥ g on ∂D.
In particular, since the corresponding property holds for h̲ as well, h̲, h̄ are well defined and satisfy C− ≤ h̲, h̄ ≤ C+, and h̲ = h̄ = g on ∂D.
Indeed, we select δ > 0 such that g(x) ≤ g(w) + ε for all x ∈ ∂D with J_k(x, w) ≤ δ, for any k > k+. Then we fix k sufficiently large so that the two inequalities in (49) hold, where the second inequality is required if z = w; here we use (28). The claim is that C ≥ g in {x ∈ ∂D : J_{k+}(x, w) > δ}; thus C ≥ g on ∂D. Suppose on the contrary that the set W = {x ∈ R^n : J_{k+}(x, w) > δ, C(x) < C+(x)} is nonempty, and notice that it is open and bounded since k > k+. Notice that by the first inequality in (49), if x ∈ ∂W then J_{k+}(x, w) > δ, so that in particular C = C+ on ∂W. Now notice that by the second inequality in (49), z, w ∉ W, so that H(x, DC+(x)) ≤ k+ < k is satisfied in W in the viscosity sense. Then by Proposition 3 we have that C(x) ≥ C+(x) for x ∈ W, which is a contradiction.
Next we show that h̲ ≤ h̄. This will show (48), the fact that h̲, h̄ are locally bounded, and, from Corollary 1 and Lemma 5.2, h̄ ∈ C(D) ∩ CCB(D), h̲ ∈ C(D) ∩ CCA(D). To this end we take any cones C(x) = J_k(x, w) + b and C̄(x) = −J_{|k̄|}(w̄, x) + b̄ with the properties in the definition of h̄, h̲ respectively, and show that C̄ ≤ C in R^n. Suppose not; then the set W = {x : C̄(x) > C(x)} is nonempty, open and bounded. Moreover C̄ = C on ∂W and, since C ≥ g ≥ C̄ on ∂D, W cannot contain the two vertices. The contradiction now follows from Proposition 3.
We are left to show the continuity of the two functions on ∂D. This follows since h̄ is upper semicontinuous, h̲ is lower semicontinuous and they coincide with g on ∂D. Then for x ∈ ∂D we have
g(x) = h̲(x) ≤ liminf_{y→x} h̲(y) ≤ liminf_{y→x} h̄(y) ≤ limsup_{y→x} h̄(y) ≤ h̄(x) = g(x),
and then the continuity of h̄ at x. Similarly for h̲.
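In the classical euclidean eikonal case, H(p) = |p| and J(x, y) = |x − y|, the two envelopes above reduce to the McShane-Whitney extensions, which makes the construction easy to test numerically. A minimal sketch (the function names and the one-dimensional domain are ours, for illustration only):

```python
import numpy as np

# Classical special case of Section 5: H(p) = |p|, so J(x, y) = |x - y| and the
# cones are C(x) = k|x - z| + b.  On a bounded set D with boundary data g, the
# envelope of positive-slope cones lying above g on the boundary is the
# McShane-Whitney upper extension; the negative-slope envelope is the lower one.

def upper_envelope(x, bdry_pts, g, k):
    # h_bar(x) = min over boundary points w of g(w) + k*|x - w|
    return min(g[w] + k * abs(x - w) for w in bdry_pts)

def lower_envelope(x, bdry_pts, g, k):
    # h_low(x) = max over boundary points w of g(w) - k*|x - w|
    return max(g[w] - k * abs(x - w) for w in bdry_pts)

# D = (0, 1) with boundary {0, 1}; g(0) = 0, g(1) = 1; any k >= Lip(g) = 1 works.
bdry = [0.0, 1.0]
g = {0.0: 0.0, 1.0: 1.0}
xs = np.linspace(0.0, 1.0, 101)
h_bar = np.array([upper_envelope(x, bdry, g, 1.0) for x in xs])
h_low = np.array([lower_envelope(x, bdry, g, 1.0) for x in xs])

# The envelopes agree with g at the boundary and bracket every candidate; here,
# with slope equal to the Lipschitz constant of g, they collapse to the same
# function, the unique absolute minimizer u(x) = x.
assert np.all(h_low <= h_bar + 1e-12)
assert abs(h_bar[0] - 0.0) < 1e-12 and abs(h_bar[-1] - 1.0) < 1e-12
```

For boundary data with smaller Lipschitz constant than the chosen slope, the two envelopes separate, and the absolute minimizer lies between them, as in the general construction.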
The core of Perron's method is in the following step.
Lemma 5.4. Let u ∈ CCA(D) ∩ C(D) be such that u ∉ CCB(D). Then we can find a nonempty open set W ⊂⊂ D and a cone C(x) = −J_{|k|}(x, z) + b, with slope k < 0 and vertex z ∉ W, such that u = C on ∂W and u < C in W. If we define
û = C in W, û = u in D \ W, (50)
then û ∈ CCA(D) ∩ C(D).
If moreover H(x, Du(x)) ≤ k̃ in D in the viscosity sense (k̃ > 0), then H(x, Dû(x)) ≤ k̃ in the viscosity sense.
The cone C may be relative to some open, connected set W̃, with W ⊂ W̃ ⊂⊂ D.
Proof. Most of the statement is just a direct consequence of violating the definition of a function satisfying the comparison with cones from below. Indeed, in that case there are W̃ ⊂⊂ D, k < 0 and a cone C such that the set W = {x ∈ W̃ : u(x) < C(x)} is open and nonempty. We only concentrate on boundary points x ∈ ∂W in the case C is a cone relative to W̃ with z ∈ ∂W̃, the rest of the proof being easier. Since C is continuous in W̃ and upper semicontinuous on its closure, we can still conclude that u = C on ∂W and C ∈ C(W̄). The easier case is x ∈ W̃ ∩ ∂W: then u(x) = C(x) and C is continuous at x. If otherwise x ∈ ∂W̃ ∩ ∂W, then by definition of C we have u(x) ≥ C(x); however, by upper semicontinuity of C,
u(x) = lim_{W∋y→x} u(y) ≤ limsup_{W∋y→x} C(y) ≤ C(x),
so that u(x) = C(x) in this case as well. In particular, the function û defined in (50) is continuous. We now need to prove the statements concerning û. The proof that û ∈ CCA(D) will be postponed to the more general statement Lemma 6.5. The last statement is instead a consequence of Theorem 3.3. We choose an open set E ⊂⊂ D such that W ⊂⊂ E; since u = û on ∂E and û ∈ CCA(D), we obtain that H(x, Dû(x)) ≤ k̃ in the viscosity sense for x ∈ E, and hence in D.
Proof of Theorem 5.1. We now put things together and obtain the existence of an absolute minimizer. We define
u(x) = sup { v(x) : v ∈ CCA(D) ∩ C(D), v = g on ∂D, v ≤ h̄ in D }.
Notice that u is well defined and locally bounded, since h̲ ∈ CCA(D) belongs to the family and h̄ bounds it from above. Then by Corollary 1 and the properties of h̲, h̄ we have that u ∈ C(D) ∩ CCA(D) and u = g on ∂D. Thus we only need to show that u ∈ CCB(D) to conclude. Suppose by contradiction that this is false. Then we use Lemma 5.4 and construct û as in (50), with the properties above and strictly larger than u. We will reach a contradiction if we can show that û ≤ h̄ in D. Suppose not; then the set
Ŵ = {x ∈ D : û(x) > h̄(x)}
is nonempty, open and contained in W, where û = C and C is a cone with negative slope (possibly relative to an open set containing W). It follows that h̄ = C on ∂Ŵ, but then, since h̄ satisfies comparison with cones from below, we must have h̄ ≥ C in Ŵ. This is a contradiction.

Remark 14.
Perron's method proves the existence of absolute minimizers but says nothing about their uniqueness. However, from the argument above we can say the following: in the assumptions of Theorem 5.1, any absolute minimizer v ∈ C(D) with v = g on ∂D satisfies v ≤ u in D. Indeed, set D_ε = {x ∈ D : dist(x, ∂D) < ε}. By definition, for any cone with positive slope C appearing in the definition of h̄, we can compare v with C on D \ D_ε; we conclude by taking the limit as ε ↓ 0 that v ≤ h̄, and hence v ≤ u. In Theorem 5.1 we constructed the maximal absolute minimizer that coincides with g on ∂D.
We already proved in Proposition 3 that cones are unique as absolute minimizers in the regions where they are bilateral solutions of their HJ equation.
6. Perron's method and existence of viscosity solutions of the Aronsson equation. One of the interesting points in the theory of absolute minimizers is that they satisfy a degenerate, second order pde that plays the role of the Euler-Lagrange equation for the variational problem. In this section we prove the existence of an absolute minimizer for the Hamiltonian H that satisfies the AE as a viscosity solution with prescribed Dirichlet boundary condition u = g ∈ C(∂D), for any open D ⊂ R^n with nonempty boundary. The candidate solution is an absolute minimizer constructed as in the previous section. We recall that the definition of viscosity solution for (54) follows the standard one in [17], by looking at the AE as the degenerate elliptic pde in (3).
Remark 15. Notice that by the special structure of the equation, it easily follows from the definition that if u is a viscosity super/subsolution of (54), then −u is a viscosity sub/supersolution of the equation corresponding to the Hamiltonian H(x, −p). Therefore the symmetry in the definition of absolute minimizer is maintained.
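For orientation, the AE associated with H has the standard form below (a sketch consistent with the usual definition of the operator A; the paper's equation (3) is its degenerate elliptic formulation):

```latex
% Aronsson equation (AE) associated with H(x, p):
A\bigl(x, Du, D^2u\bigr) \;=\;
\Bigl\langle D^2u\, D_pH(x, Du) + D_xH(x, Du),\; D_pH(x, Du) \Bigr\rangle \;=\; 0 .
% For H(x, p) = |p|^2 this reduces, up to a positive factor, to the
% infinity-Laplace equation  \Delta_\infty u = \langle D^2u\, Du,\, Du \rangle = 0 .
```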
The main result of this section is the following existence theorem for (54).

Theorem 6.1. Let D ⊂ R^n be an open set, H ∈ C(D × R^n) and g ∈ C(∂D) be such that for some b−, b+ ∈ R, k− < 0, k+ > 0, z ∈ ∂D the following estimate holds:
−J_{|k−|}(z, x) + b− ≤ g(x) ≤ J_{k+}(x, z) + b+, for all x ∈ ∂D.
Then there exists an absolute minimizer u ∈ C(D) for the Hamiltonian H which is a viscosity solution of the AE (54), such that u = g on ∂D and
−J_{|k−|}(z, x) + b− ≤ u(x) ≤ J_{k+}(x, z) + b+, for x ∈ D.

The result is again the consequence of several lemmas.
The corresponding result holds for the opposite inequality as well.
Now suppose that Φ ∉ CCB({z : |z − x_o| < r}). Then, as we noticed several times before, there is a cone C with negative slope such that the set W = {x : Φ(x) < C(x)} is open and nonempty. We also know that the cone may be relative to an open and connected set Ŵ, with {x : |x − x_o| < r} ⊃ Ŵ ⊃ W; it has vertex z ∉ Ŵ, but C ∈ C(W̄) and C = Φ on ∂W. As we proved in [38], −C is a supersolution of the AE for the Hamiltonian H(x, −p), so that C is a subsolution of the AE for H. Let x̂ ∈ W be such that 0 < C(x̂) − Φ(x̂) = max_{x∈W̄} (C(x) − Φ(x)). Then by definition of viscosity subsolution, A(x̂, DΦ(x̂), D²Φ(x̂)) ≤ 0, which is a contradiction with (56).
The next result follows standard lines and we sketch it for the reader's convenience. Lemma 6.3. Let H ∈ C 1 (D×R n ) and F ⊂ C(D) be a nonempty family of viscosity subsolutions of (54). Define u(x) = sup v∈F v(x) ∈ R and assume that u ∈ C(D). Then u is a viscosity subsolution of (54).
Proof. Let Φ ∈ C²(D) be such that u(x_o) = Φ(x_o), u(x) < Φ(x) for 0 < |x − x_o| ≤ r, and {x : |x − x_o| ≤ r} ⊂ D. We have to prove that A(x_o, DΦ(x_o), D²Φ(x_o)) ≤ 0. We consider a sequence of functions v_n ∈ F such that v_n(x_o) → u(x_o), and define x_n, |x_n − x_o| < r, such that v_n(x) − Φ(x) ≤ v_n(x_n) − Φ(x_n) for |x − x_o| ≤ r. By passing to a subsequence, we may suppose that x_n → y ∈ D. In particular we have
v_n(x_n) − Φ(x_n) ≥ v_n(x_o) − Φ(x_o) → 0, while v_n(x_n) − Φ(x_n) ≤ u(x_n) − Φ(x_n).
Thus y = x_o by taking n → +∞. Hence x_n is a local maximum for v_n − Φ for n large enough, and by the assumption we get A(x_n, DΦ(x_n), D²Φ(x_n)) ≤ 0.
The conclusion is obtained as n → +∞.
The next result shows the existence of an absolute minimizer solving (54), once we have suitable global barriers.

Lemma 6.4. Let H ∈ C¹(D × R^n) and h̲ ∈ C(D) ∩ CCA(D), h̄ ∈ C(D) ∩ CCB(D) be respectively a subsolution and a supersolution of (54) such that h̲ ≤ h̄ and h̲ = h̄ = g on ∂D. Then there exists an absolute minimizer u ∈ C(D) for H which is a viscosity solution of (54) and such that h̲ ≤ u ≤ h̄ in D.

Proof. Let F = {v ∈ CCA(D) ∩ C(D) : v is a viscosity subsolution of (54) and h̲ ≤ v ≤ h̄ in D}, and define u(x) = sup_{v∈F} v(x). By Corollary 1 and Lemma 6.3, u ∈ CCA(D) ∩ C(D) and it is a viscosity subsolution of (54). We now want to prove that it is also a supersolution and that u ∈ CCB(D). We start with the supersolution property. Suppose by contradiction that for some Φ ∈ C² the function u − Φ attains a strict local minimum at x_o ∈ D, u(x_o) = Φ(x_o) and
A(x_o, DΦ(x_o), D²Φ(x_o)) > 0. (57)
Thus by Lemma 6.2 we can find r > 0 such that Φ is a (strict) subsolution of (54) in {z : |z − x_o| < r} and Φ ∈ CCA({z : |z − x_o| < r}). Observe that, since h̄ is a supersolution of (54), we must have u(x_o) < h̄(x_o); otherwise h̄ − Φ has a local minimum at x_o and we reach a contradiction with (57). Thus for possibly smaller r and ε > 0 we may suppose that Φ(z) + ε ≤ h̄(z) for |z − x_o| < r and Φ(z) + ε ≤ u(z) for |z − x_o| = r. In particular the set W = {z : |z − x_o| < r, u(z) < Φ(z) + ε} is nonempty, open, and u = Φ + ε on ∂W.
We then define û = max{u, Φ + ε} in W and û = u in D \ W. Thus û ∈ C(D), h̲ ≤ û ≤ h̄ and û(x_o) > u(x_o). If we prove that û ∈ F, then we reach a contradiction with the definition of u. We must prove that û ∈ CCA(D) and that it is a viscosity subsolution of (54). Notice that Φ + ε is still a subsolution of (54) in W; the rest is a consequence of the next Lemma 6.5.
We are now left to show that u ∈ CCB(D). Suppose not; then we proceed as in Lemma 5.4 and construct a function û ∈ C(D), û ≤ h̄, as in (50). Therefore we prove again by Lemma 6.5 that û ∈ F and get a contradiction with the definition of u.
We need the following lemma to complete the proof.

Lemma 6.5. Let W ⊂⊂ V ⊂ R^n be open, u ∈ C(V) and w ∈ C(W̄) be such that w > u in W and w = u on ∂W. Define
û(x) = w(x) for x ∈ W, û(x) = u(x) for x ∈ V \ W.
(i) If u ∈ CCA(V) and w ∈ CCA(W), then û ∈ CCA(V).
(ii) If moreover u and w are viscosity subsolutions of (54) in V and W respectively, then û is a viscosity subsolution of (54) in V.
Proof. Observe that the function û ∈ C(V). We proceed by contradiction in (i) and suppose that û ∉ CCA(V). Then we can find U ⊂⊂ V, k̃ > 0, z ∉ U such that
û(x) ≤ J_k̃(x, z) + max_{w∈∂U} {û(w) − J_k̃(w, z)}
is false. In particular, if we define the cone C̃(x) = J_k̃(x, z) + max_{w∈∂U} {û(w) − J_k̃(w, z)}, then the set Ŵ = {x ∈ U : û(x) > C̃(x)} is nonempty, open and bounded. As we did several times now, since û ≤ C̃ on ∂U, we have û = C̃ on ∂Ŵ and C̃ ∈ C(Ŵ), even if C̃ is a cone relative to U. In particular, since u ∈ CCA(V) and u ≤ û = C̃ on ∂Ŵ, we must have u ≤ C̃ in Ŵ. Hence Ŵ ⊂ W and w = C̃ on ∂Ŵ. Since w ∈ CCA(W), we then conclude that û = w ≤ C̃ in Ŵ, which is a contradiction. Now we show (ii). Suppose that Φ ∈ C² is such that û − Φ has a local maximum at x̂ ∈ V and û(x̂) = Φ(x̂). Then for x̂ ∈ W the subsolution property is satisfied because û = w is a subsolution in W. On the other hand, for x̂ ∈ V \ W we have that û(x̂) = u(x̂) = Φ(x̂) and Φ ≥ û ≥ u around x̂, so that u − Φ has a local maximum at x̂ and the subsolution property is satisfied again.
Proof of Theorem 6.1. We start by considering the functions h̲, h̄ as in the construction of the absolute minimizer in Section 5. We can apply Lemma 6.3 to them since, as we proved in [38], the cones with positive slope are supersolutions of (54) while the cones with negative slope are subsolutions. Therefore h̲ is a subsolution of (54) while h̄ is a supersolution. We now use h̲, h̄ as the barriers of Lemma 6.4 to build a viscosity solution of (54).