THE DIRICHLET PROBLEM FOR KOLMOGOROV-FOKKER-PLANCK TYPE EQUATIONS WITH ROUGH COEFFICIENTS

We establish the existence and uniqueness, in bounded as well as unbounded Lipschitz type cylinders of the forms UX ×VY,t and Ω×R m ×R, of weak solutions to CauchyDirichlet problems for the strongly degenerate parabolic operator L := ∇X · (A(X,Y, t)∇X) +X · ∇Y − ∂t, assuming that A = A(X,Y, t) = {ai,j(X,Y, t)} is a real m × m-matrix valued, measurable function such that A(X,Y, t) is symmetric, bounded and uniformly elliptic. Subsequently we solve the continuous Dirichlet problem and establish the representation of the solution using associated parabolic measures. The paper is motivated, through our recent studies [11, 12, 13], by a growing need and interest to gain a deeper understanding of the Dirichlet problem for the operator L in Lipschitz type domains. The key idea underlying our results is to prove, along the lines of Brezis and Ekeland [3, 4], and inspired by the recent work of S. Armstrong and J-C. Mourrat [2] concerning variational methods for the kinetic Fokker-Planck equation, that the solution can be obtained as the minimizer of a uniformly convex functional. 2000 Mathematics Subject Classification. 35K65, 35K70, 35H20, 35R03.


Introduction
This paper is devoted to a study of existence and uniqueness of weak solutions to Cauchy-Dirichlet problems for the strongly degenerate parabolic operator where (X, Y, t) := (x 1 , ..., x m , y 1 , ..., y m , t) ∈ R m × R m × R =: R N +1 , N := 2m, m ≥ 1. A = A(X, Y, t) = {a i,j (X, Y, t)} is assumed to be a real m × m-matrix valued, measurable function such that A(X, Y, t) is symmetric and for some 1 ≤ κ < ∞ and for all ξ ∈ R m , (X, Y, t) ∈ R N +1 . We recall that the prototype for the operators in (1.1), i.e. the operator K. N was partially supported by grant 2017-03805 from the Swedish research council (VR).
was originally introduced and studied by Kolmogorov in a famous note published in 1934 in Annals of Mathematics, see [10]. Kolmogorov noted that K is an example of a degenerate parabolic operator having strong regularity properties and he proved that Using the terminology introduced by Hörmander, see [9], we say that K is hypoelliptic. Naturally, for operators as in (1.1), assuming only measurable coefficients and (1.2), the methods of Kolmogorov and Hörmander can not be directly applied in a study of weak solutions and estimates thereof. Regularity theory for weak solutions, assuming only measurable coefficients and (1.2), was developed in e.g. [15] and [8], where local Hölder continuity of weak solutions was proved. The Dirichlet problem for degenerate parabolic equations as in (1.1) have been studied under stronger assumptions on the coefficients, in particular see [14] where the coefficients are assumed to be Hölder continuous.
Let U X ⊂ R m be a bounded Lipschitz domain and let V Y,t ⊂ R m × R be a bounded domain with boundary which is C 1,1 -smooth, i.e. C 1,1 with respect to Y as well as t. Let N Y,t denote the outer unit normal to V Y,t . We introduce ∂ K (U X ×V Y,t ) will be referred to as the Kolmogorov boundary of U X ×V Y,t , and the Kolmogorov boundary serves, in the context of the operator L, as the natural substitute for the parabolic boundary used in the context of the Cauchy-Dirichlet problem for uniformly elliptic parabolic equations.
In this paper we first establish the existence and uniqueness of appropriate weak solutions to the Cauchy-Dirichlet problem (1.4) Lu = g * in U X × V Y,t , u = g on ∂ K (U X × V Y,t ).
We refer to Section 2 for the precise definition of the function spaces, i.e. W (U X × V Y,t ), L 2 Y,t (V Y,t , H −1 X (U X )), used, as well as the notions of weak solutions to (1.4), see Definition 1. We prove the following theorem.
where U X ⊂ R m is a bounded Lipschitz domain and V Y,t ⊂ R m × R is a bounded domain with boundary which is C 1,1 -smooth. Let g * ∈ L 2 Y,t (V Y,t , H −1 X (U X )), g ∈ W (U X × V Y,t ). Then there exists a unique weak solution u ∈ W (U X × V Y,t ) to the problem in (1.4) in the sense of Definition 1. Furthermore, there exists a constant c, independent of u and g, but depending on m, κ, and U X × V Y,t , such that Taking the analysis on U X × V Y,t , and in particular Theorem 1.1, as the starting point we then consider weak solutions in the unbounded setting Ω × R m × R and in this case we assume that Ω ⊂ R m is an (unbounded) Lipschitz domain (1.5) where ψ : R m−1 → R is a Lipschitz function. I.e. we consider the problem In this case we refer to Section 4 for the precise definition of the function spaces, i.e. W (R N +1 ), W loc (Ω × R m × R), L 2 Y,t (R m × R, H −1 X (R m )), used as well as the notions of weak solutions, see Definition 2. In this setting we prove the following theorems: an existence theorem and a uniqueness theorem.
Theorem 1.2. Assume that A satisfies (1.2) with constant κ. Let Ω ⊂ R m , m ≥ 1, be a (unbounded) Lipschitz domain as defined in (1.5) and let g ∈ W (R N +1 ), g * ∈ L 2 Y,t (R m × R, H −1 X (R m )). Then there exists a weak solution u ∈ W loc (Ω × R m × R) to the problem in (1.6) in the sense of Definition 2. Let Ω ⊂ R m , m ≥ 1, be a (unbounded) Lipschitz domain as defined in (1.5) and let g ∈ Finally, we consider the continuous Dirichlet problem and the representation of the solution using associated parabolic measures. Theorem 1.4. Assume that A satisfies (1.2) with constant κ as well as (1.7). Let Ω ⊂ R m , m ≥ 1, be a (unbounded) Lipschitz domain as defined in (1.5) and let ϕ ∈ W (R N +1 )∩C 0 (R N +1 ). Let u = u ϕ be the unique bounded weak solution given by Theorem 1.2 and Theorem 1.3 with g = ϕ, g * ≡ 0. Then u ∈ C(Ω × R m × R). Furthermore, there exists, for every (X, Y, t) ∈ Ω × R m × R, a unique probability measure This paper is motivated, through our recent studies [11,12,13], by a growing need and interest to gain a deeper understanding of the Dirichlet problem for the operator L in Lipschitz type domains. In [11,12,13] we have developed a number of results and estimates concerning solutions to Lu = 0 in Lipschitz type domains adapted to the dilation structure and the (non-Euclidean) Lie group underlying the operator L. Indeed, in [11] our results include scale and translation invariant boundary comparison principles, boundary Harnack inequalities and doubling properties of associated parabolic measures. In [12] we establish results concerning associated parabolic measures and their absolute continuity with respect to a surface measure, and in particular we study when the associated Radon-Nikodym derivative defines an A ∞weight with respect to the surface measure. Finally, in [13] we establish a structural theorem concerning the absolute continuity of elliptic and parabolic measures which allowed us to reprove several results previously established in the literature as well as deduce new results in, for example, the context of homogenization for operators of Kolmogorov type. Our proof of the structural theorem is based on some of the results in [11] concerning boundary Harnack inequalities for operators of Kolmogorov type in divergence form with bounded, measurable and uniformly elliptic coefficients. An impetus for developing [11,12,13] has been the recent results concerning the local regularity of weak solutions to the equation in (1.4) established in [8]. In [8] the authors extended the De Giorgi-Nash-Moser (DGNM) theory, which in its original form only considers elliptic or parabolic equations in divergence form, to hypoelliptic equations with rough coefficients including the operator L. Their result is the correct scaleand translation-invariant estimates for local Hölder continuity and the Harnack inequality for weak solutions.
The point is that in [11,12,13], as well as [8], questions concerning the existence and uniqueness of weak solutions to Dirichlet and Cauchy-Dirichlet problems for the operator L are constantly present and we have not been able to find, beyond what we have learned from [2], coherent treatments in the literature of the problems in (1.4), (1.6). This has been the main motivation for completing this work.
We prove Theorem 1.1 by proving that the solution to (1.4) can be obtained as the minimizer of a uniformly convex functional. The fact that a parabolic equation can be cast as the first variation of a uniformly convex integral functional was first discovered in [3,4] and for a modern treatment of this approach, covering uniformly elliptic parabolic equations of second order in the more general context of uniformly monotone operators, we refer to [1] which in turn is closely related to [7], see also [6]. More precisely, this paper is inspired by the interesting paper of S. Armstrong and J-C. Mourrat [2] concerning variational methods for the kinetic Fokker-Planck equation. In [2], S. Armstrong and J-C. Mourrat consider Kramers equation and the time-dependent version of this equation which, in our coordinates, reads This equation is often referred to as the kinetic Fokker-Planck equation. In [2], f (v, x, t) is a function of velocity, position and time, in our notation (v, x, t) = (X, Y, t). In [2], S. Armstrong and J-C. Mourrat develop a functional analytic approach to the study of the Kramers and kinetic Fokker-Planck equations which parallels the classical H 1 theory of uniformly elliptic equations. In particular, they identify a function space analogous to H 1 and develop a well-posedness theory for weak solutions of the Dirichlet problem in this space. They prove new functional inequalities of Poincaré and Hörmander type and use these to obtain the C ∞ regularity of weak solutions. They also use the Poincaré-type inequality established to give an elementary proof of the exponential convergence to equilibrium for solutions of the kinetic Fokker-Planck equation which mirrors the classic dissipative estimate for the heat equation. The proof of Theorem 1.1 follows the proof developed in [2] for the Kramers and kinetic Fokker-Planck equations. To compare our setting to that of [2], we first note that the kinetic Fokker-Planck equation is different from the operator L and in [2] the authors consider domains of the form R m × V Y,t , where V Y,t is as above, or domains of the form R m × U Y × J where U Y ⊂ R m is a bounded domain with C 1 -boundary and J ⊂ R is a bounded interval. In this sense they consider unbounded domains and their natural function space in the variable X is the Sobolev space which uses exp(−|X| 2 /2) dX as the underlying measure. The unbounded setting, and the Gaussian Poincaré inequality, force the authors in [2] to establish new Poincaré inequalities as a novel part of their argument. In this paper we initially work on truly bounded domains U X × V Y,t assuming that U X ⊂ R m is a bounded Lipschitz domain and our result can be readily extended to domains of the form U X × U Y × J. Taking the analysis on U X × V Y,t as the starting point we then, as discussed, consider weak solutions in the unbounded setting Ω × R m × R assuming that Ω ⊂ R m is an (unbounded) Lipschitz domain in the sense of (1.5). In particular, compared to [2] we in the unbounded case have a geometric restriction in the X-variable and none in the (Y, t)-variables. The latter geometric setting is, in particular, in line with the set up in [13]. In Theorem 1.3 and Theorem 1.4 the assumption in (1.7) is only used in a qualitative fashion.
As a general comment, one reason to establish the existence and uniqueness of weak solutions in Sobolev spaces encoding integrability up to the boundary of weak derivatives, in this case for ∇ X u, is that one can then often relax the assumption on the coefficients to being only measurable, bounded and elliptic when proving boundary type estimates of the type considered in [11,12,13]. As another general comment, we believe that it is a good project to develop versions of the estimates in [11,12,13] in the geometric settings of U X × V Y,t and U X × U Y × J and in particular near the subsets of where now N Y denotes the outer unit normal to U Y .
The rest of the paper is organized as follows. In Section 2 we, in the geometric setting of U X × V Y,t , introduce function spaces, define weak solutions and prove Poincaré inequalities. While strictly speaking not necessary in our setting, we here prove a version on bounded domains, Lemma 2.3 below, of the novel Poincaré inequality established in [2]. In Section 3 we prove Theorem 1.1 following [2]. In Section 4 we prove Theorems 1.2-1.4.

Preliminaries
Throughout the section we let U X ⊂ R m and V Y,t ⊂ R m × R be bounded domains. In addition we assume that U X ⊂ R m is a Lipschitz domain and that and V Y,t is a domain with boundary which is C 1,1 -smooth.
2.1. Function spaces. We denote by H 1 X (U X ) the Sobolev space of functions g ∈ L 2 (U X ) whose distribution gradient in U X lies in (L 2 (U X )) m , i.e.
In particular, equivalently we could define H 1 X (U X ) as the closure of C ∞ (U X ) in the norm ||·|| H 1 X (U X ) . Note that as H 1 X,0 (U X ) is a Hilbert space it is reflexive, so that (H 1 X,0 (U X )) * = H −1 X (U X ) and (H −1 X (U X )) * = H 1 X,0 (U X ), where () * denotes the dual. Based on this we let H −1 X (U X ) denote the dual to H 1 X,0 (U X ) acting on functions in H 1 X,0 (U X ) through the duality pairing ·, · := ·, · H −1 and, as mentioned above, , respectively. We end the subsection with the following elementary lemma.
Lemma 2.1. Let U X and V Y,t be as above. Then But then This completes the proof.
. We then say that u is a weak solution to

Poincaré inequalities.
Lemma 2.2. There exists a constant c, 1 ≤ c < ∞, depending only on m, and the diameter of U X , such that Proof.
). Then, using the standard Euclidean Poincaré inequality, we see that Simply squaring and integrating with respect to (Y, t) ∈ V Y,t gives the result. Proof.
In the following we let f U X and f U X ×V Y,t denote the averages of f = f (X, Y, t) over U X and U X × V Y,t with respect to dX and dX dY dt, respectively. To start the proof of the lemma we first note, using the standard Euclidean Poincaré inequality, that Next we are going to prove that there exists c, We start by proving the estimate for the time derivative of f U X . To do this we select a smooth function ξ 0 ∈ C ∞ 0 (B(0, 2R)) such that Using these properties of ξ 0 , we can write where By integration by parts, . Using (2.7) and the fact that ξ 0 has compact support, This completes the proof of the estimate in (2.8) involving the time derivative. To estimate the terms involving ∂ y i , we fix i ∈ {1, . . . , m} and we choose a smooth function ξ i ∈ C ∞ 0 (B(0, 2R)) satisfying where δ ij is the Kronecker delta. Using this we see that Proceeding as above we deduce that This completes the proof of (2.8) and we can conclude that (2.10) Using (2.10), and Lemma 3.1 in [2], we see that can be estimated from above by . In particular, we have proved that (2.11) Note that the above deductions hold for all .
where σ Y,t is the surface measure on ∂V Y,t . We now choose a non-trivial f 1 so that and so that, after a normalization, Note that, by construction, To proceed, using f 1 and (2.13) we split the mean of f as and we estimate the two terms on the right side separately. For the first term we have This completes the estimate for the first term in (2.15). For the second term in (2.15), we use (2.14) to derive . The last term is then estimated using (2.11). This finishes the proof for , and hence a standard density argument finishes the proof for f ∈ W Y,t,0 (U X × V Y,t ).

Proof of Theorem 1.1
The purpose of this section is to prove Theorem 1.1. We therefore fix U X × V Y,t ⊂ R N +1 , g * and g as in the statement of Theorem 1.1. To ease the notation we will at instances use the notation Given an arbitrary pair (f, f * ) such that where the infimum is taken with respect to the set , should be interpreted as stating that for all φ ∈ L 2 (V Y,t , H 1 X,0 (U X )). We intend to prove, given (g, is uniformly convex, that its minimum is zero, and that the associated minimizer is the unique f ∈ (g + W 0 ) that solves the equation . Note that by construction this means that f ∈ W (U X × V Y,t ) is a solution to the problem in (1.4) in the sense of Definition 1. In particular, f − g ∈ W 0 (U X × V Y,t ) holds as desired.
) be fixed and let A(g, g * ) be the set Then A(g, g * ) is non-empty.
Proof. Let f = g and consider the equation for dY dt-a.e (Y, t) ∈ V Y,t . By the Lax-Milgram theorem this equation has a (unique) solution v(·) = v(·, Y, t) ∈ H 1 X,0 (U X ) and as f ∈ W . In particular, and hence A(g, g * ) is non-empty.
Then the functional J is uniformly convex on A(g, g * ).
Proof. By Lemma 3.1 we know that A(g, g * ) = ∅. A straightforward calculation shows that whenever (f ′ , j ′ ) ∈ A(g, g * ) and (f, j) ∈ A(0, 0). Hence, to prove the lemma it suffices to prove that there exists 1 ≤ c < ∞, depending on m, κ, and U X × V Y,t , such that for every (f, j) ∈ A(0, 0) we have . Considering (f, j) ∈ A(0, 0), by expanding the integrand in the definition of J [f, j], using symmetry of A, and using that and consequently (3.9) However, using that for all (f, j) ∈ A(0, 0). To prove (3.8), and hence that J is uniformly convex on A(g, g * ), it therefore suffices to prove that , for all f ∈ W 0 . Put together this yields (3.8) and the proof of the lemma is complete.
As the functional J is uniformly convex over A(g, g * ) there exist a unique minimizing pair (f 1 , j 1 ) ∈ A(g, g * ) such that Note that, by construction and by the ellipticity of A, we have There is a one-to-one correspondence between weak solutions to Lu = g * in U X × V Y,t , such that (u − g) ∈ W 0 , and null minimizers of J[·, g * ].
Proof. To prove the lemma we need to prove that for every f ∈ g + W 0 , we have Indeed, the implication " =⇒ " is clear since if f solves Lu = g * in the weak sense, then Conversely, if J[f, g * ] = 0, then f = f 1 and J [f 1 , j 1 ] = 0. This implies that and since ∇ X · (Aj 1 ) = g * − (X · ∇ Y − ∂ t )f 1 , we recover that f = f 1 is indeed a weak solution of Lu = g * . In particular, the fact that there is at most one solution to Lu = g * is clear.
To complete the proof of Theorem 1.1 it remains to prove that In order to do so, we introduce the perturbed convex minimization problem defined, for every As we see that the inequality in (3.11) that we intend to prove can be equivalently stated as G(0) ≤ 0. We first intend to verify that the function G is convex and then reduce the problem of showing (3.11) to that of showing that the convex dual of G is non-negative.
Lemma 3.4. G is a convex, locally bounded from above and lower semi-continuous functional on L 2 Y,t (V Y,t , H −1 X (U X )).
Proof. For every pair (f, j) satisfying (f + g, j) ∈ A(g, f * + g * ), we have and thus In particular, subtracting V Y,t f * (·, Y, t), f (·, Y, t) dY dt from the expression above we see that Taking the infimum over all (f, j) satisfying the affine constraint (f + g, j) ∈ A(g, f * + g * ) we obtain the quantity G(f * ), i.e., G(f * ) can be expressed as we see that we can argue as in (3.9) and (3.10) to conclude that as f ∈ W 0 . In particular, G(f * ) can be expressed as the infimum of with respect to (f, j) such that (f + g, j) ∈ A(g, f * + g * ). The expression in the last display is convex as a function of (f, f * , j) and this proves that G is convex. Furthermore, using (3.5), (3.6), (3.7), and (3.12) we can conclude that the infimum of the expression in the last display is finite, hence G(f * ) < ∞. In particular, the function G is locally bounded from above. These two properties imply that G is lower semi-continuous, see [ We denote by G * the convex dual of G, defined for every Let G * * be the bidual of G. Since G is lower semi-continuous, we have that G * * = G (see [5,Proposition I.4.1]), and in particular, In order to prove that G(0) ≤ 0, it therefore suffices to show that To continue we note that we can rewrite G * (h) as where the supremum is taken with respect to Proof. To prove the lemma we need to prove that (− ). Using that we take a supremum in the definition of G * we can develop lower bounds on G * by restricting the set with respect to which we take the supremum. Here, for f ∈ W 0 , we choose to restrict the supremum to (f, j, f * ) where j = j 0 is a solution of ∇ X · (Aj 0 ) = g * − (X · ∇ Y − ∂ t )g and f * := (X · ∇ Y − ∂ t )f . Recall from (3.7) that such a j 0 ∈ (L 2 Y,t (V Y,t , L 2 X (U X ))) m exists. With these choices for j and f * , the constraint (3.15) is satisfied, and we obtain that Then, again arguing as in (3.9), (3.10), Hence we have the lower bound where the supremum now is with respect to Note that by replacing f by −f in the above argument we also obtain a lower bound. In particular, where the supremum is taken over ) and this observation proves (3.16). Lemma 3.5 gives at hand that in place of (3.13), we have reduced the matter to proving that As we are to establish a lower bound on G * , we may restrict to taking the supremum over f * such that . By combining this with (3.18) and (3.19) we see that Hence the proof that G * (h) ≥ 0, and hence the final piece in the proof of existence in Theorem 1.1 for general g ∈ W (U X × V Y,t ), is to prove the following lemma.
Proof. To start the proof of the lemma we first note that we have, as f ∈ W 0 , that , and hence we can replace f * by f * + (X · ∇ Y − ∂ t )f in the variational formula (3.14) for G * to get where the supremum now is taken with respect to (3.21) subject to the constraint as well as the constraint (3.19). Next using that Note that the last integral is not necessarily zero as f ∈ W 0 only implies that f = 0 on ∂ K (U X × V Y,t ) and not necessarily on {(X, Y, t) ∈ U X × ∂V Y,t | (X, −1) · N Y,t ≤ 0}. In any case, using the identity in the last display we see that where the supremum still is with respect to (f, j, f * ) as in (3.21) subject to (3.19) and (3.22).

Consider an arbitrary pair
We think of ψ δ as a function on U X × V Y,t that does not depend on X. Given (b,f ) as above we introduce As as δ → 0. In particular, to prove the lemma it suffices to prove thatG * (h) ≥ 0.
To start the proof of thatG * (h) ≥ 0, recall that h ∈ W ∩ C ∞ X,0 (U X × V Y,t ). Furthermore, in the definition ofG * (h) we have the supremum with respect tof ∈ W ∩ C ∞ X,0 (U X × V Y,t ). Hence, we can produce a lower bound onG * (h) by choosingf = −h when considering the supremum as long as, say, Γ ≥ 2||h|| L 2 where the supremum is over every (j, f * , b) as above. However, as and thereforẽ where still the supremum is over (j, f * , b) as above. Using Lemma 3.7 below we have that where the supremum is taken with respect to all Using this we can conclude that with the supremum ranging over all j ∈ (L 2 Y,t (V Y,t , L 2 X (U X ))) m and f * ∈ L 2 Y,t (V Y,t , H −1 X (U X )) satisfying the constraint (3.22). We now simply select j = ∇ X (h + g) ∈ (L 2 Y,t (V Y,t , L 2 X (U X ))) m and then where we use the notation (s) − := max{0, −s} for s ∈ R. As h is smooth, and U X and V Y,t are bounded domains, we have Let, for any r ≥ 0, We claim that To prove this it is enough to prove that where c = c(m, ψ) < ∞. To see this we first note that (3.32) |hψ r | ≤ 2|h|, |∇ X hψ r | ≤ |∇ X h| + cr|h|. Hence, together with the estimates By construction, b r vanishes on ∂ K (U X × V Y,t ). Together with (3.30), this yields that b r ∈ W ∩ C ∞ K,0 (U X × V Y,t ). Furthermore, Noting that the right hand side above has non-negative limit, we can complete the proof of the lemma.
In particular, Again arguing as in (3.9) and (3.10), see (3.12), we can deduce that Hence, using (3.36), (3.37), (1.2), Cauchy-Schwarz and Young's inequality we see that , where ǫ ∈ (0, 1) is a degree of freedom and c = c(m, κ, ǫ) is a constant. Using Lemma 2.2 we can therefore conclude that In this section we prove Theorem 1.2, Theorem 1.3 and Theorem 1.4. Throughout the section we let U X ⊂ R m , U Y ⊂ R m , and J ⊂ R be bounded domains. In addition we assume that U X ⊂ R m is a bounded Lipschitz domain. We let the space The definition of weak solution to (1.4) in the sense of Definition 1, but with V Y,t replaced by U Y × J, is defined analogously. Let Ω ⊂ R m , m ≥ 1, be a (unbounded) Lipschitz domain as defined in (1.5), with boundary defined by a Lipschitz function ψ with Lipschitz constant M . We Given Ω ⊂ R m as above we let W X,0 (Ω × R m × R) be the closure of all functions in C ∞ 0 (Ω × R m × R), which are zero on ∂Ω × R m × R, in the norm of W (R N +1 ) = W (R m × R m × R).
). We say that u is a weak solution to if u ∈ W loc (Ω × R m × R) and the following holds. First, 4.1. Proof of Theorem 1.2. Recall that the natural family of dilations for the operator L, (δ r ) r>0 , on R N +1 , is (4.5) δ r (X, Y, t) = (rX, r 3 Y, r 2 t), for (X, Y, t) ∈ R N +1 , r > 0. In the following we will, without loss of generality, assume that 0 ∈ ∂Ω. We let, for R > 0, (4.6) Furthermore, we let V Y,t ⊂ R m ×R be a bounded domain with a boundary which is C 1,1 -smooth and contains (0, 0) ∈ R m × R. For R > 0 we introduce FixR > 0 and consider R ≥R. Then the inequality in the last display implies that In particular, if R ≥R then {u R } is uniformly bounded in the norm of W (UR X × VR Y,t ). Using this we see that there exists a subsequence of {u R }, which we denote by {u jR }, such that andũR satisfiesũR ∈ W (UR X ×VR Y,t ) and is such that (ũR −g) ∈ W ∆R X ,0 (UR X ×VR Y,t ). Furthermore, uR satisfies (4.10) with U R X × V R Y,t replaced by UR X × VR Y,t and for all φ ∈ L 2 Y,t (VR Y,t , H 1 X,0 (UR X )). In particular,ũR is a weak solution in UR X × VR Y,t with the correct boundary data, i.e. g, on ∆R X . Next, consider a sequence {R i },R i → ∞ as i → ∞. Then by the argument outlined, andũR i is a solution in UR i X ×VR i Y,t in the sense discussed. Hence, taking a diagonal subsequence, denoted {uRl l }, we see that uRl l →ũ as l → ∞, andũ is such that, for every R > 0 ). In particular,ũ is a weak solution to the problem to the problem in (4.2) in the sense of Definition 2. This completes the proof of the existence ofũ and hence the proof of the theorem.

4.2.
Proof of Theorem 1.3. Assume that u ∈ W loc (Ω × R m × R) ∩ L ∞ (Ω × R m × R) is a weak solution to the problem in (4.2), with g ≡ 0 ≡ g * , in the sense of Definition 2. We want to prove that u ≡ 0 on Ω × R m × R. We can without loss of generality assume that 0 ∈ ∂Ω. Let V Y,t ⊂ R m × R be a bounded domain with a boundary which is C 1,1 -smooth and contains (0, 0) ∈ R m × R. Given R > 0 we let U R X and V R Y,t be defined as in (4.6) and (4.7), respectively. We first note that the weak maximum principle holds in bounded domains U R X ×V R Y,t . Indeed, by Theorem 1.1 we have uniqueness to the problem in (1.4) Therefore it suffices to prove the weak maximum principle for the regularized operators (4.14) Here ǫ > 0 is small and A ǫ is a regularization of A, constructed by convolution of A with respect to an approximation of the identity with parameter ǫ, such that A ǫ → A a.e. on compact subsets of R N +1 as ǫ → 0. The weak maximum principle for L ǫ in U R X × V R Y,t can be proved as in Lemma 6.1 in [11].
Let R > 0 be large enough to ensure that Let η > 0 be given and assume that u ≥ 2η at some point in Ω×R m ×R. Note that the maximum principle that we are to prove is known to hold for K. Therefore, using Hölder continuity estimates for K, as well as estimates for the fundamental solution of K, see for example Theorem 3.2 and Lemma 4.17 in [11], we can conclude that there exists 0 < δ = δ(η, M * ) ≪ 1 such that u < η in (Ω × R m × R) \ (U  (1.7). Let ϕ ∈ W (R N +1 ) ∩ C 0 (R N +1 ). Consider the regularized operator L ǫ introduced in (4.14). Applying Theorem 1.2, Theorem 1.3 as well as Theorem 3.1 in [11], in Ω × R m × R and for L ǫ , we can conclude that there exists a unique bounded weak solution u ǫ ∈ W loc (Ω × R m × R) ∩ C(Ω × R m × R), in the sense of Definition 2, to the problem (4.15) L ǫ u ǫ = 0 in Ω × R m × R, u ǫ = ϕ on ∂Ω × R m × R.
It is readily seen that u ǫ → u in W loc (Ω × R m × R), that ω ǫ K → ω K weakly on ∂Ω × R m × R in the sense of measures, that u ∈ W loc (Ω × R m × R) ∩ L ∞ (Ω × R m × R) is the unique bounded weak solution, in the sense of Definition 2, to the problem in (4.15) and that u(X, Y, t) = ∂Ω×R m ×R ϕ(X,Ỹ ,t) dω K (X, Y, t,X,Ỹ ,t).
Therefore, to prove the theorem it only remains to prove that u ∈ C(Ω × R m × R), i.e. to prove that u is continuous up to the boundary. To prove this we first introduce some notation.