Fundamental solutions for Kolmogorov-Fokker-Planck operators with time-depending measurable coefficients

We consider a Kolmogorov-Fokker-Planck operator of the kind studied by Lanconelli-Polidoro in [Rend. Sem. Mat. Univ. Politec. Torino 52 (1994)],

$$Lu = \sum_{i,j=1}^{q} a_{ij}(t)\,\partial^2_{x_i x_j} u + \sum_{i,j=1}^{N} b_{ij}\, x_j\, \partial_{x_i} u - \partial_t u,$$

where $\{a_{ij}(t)\}_{i,j=1}^{q}$ is a symmetric, uniformly positive matrix on $\mathbb{R}^q$, $q \leq N$, of bounded measurable coefficients defined for $t \in \mathbb{R}$, and the matrix $B = \{b_{ij}\}_{i,j=1}^{N}$ satisfies the assumptions made by Lanconelli-Polidoro in [13], which make the corresponding operator with constant $a_{ij}$ hypoelliptic. We construct an explicit fundamental solution $\Gamma$ for $L$, study its properties, prove a comparison result between $\Gamma$ and the fundamental solutions of model operators with constant $a_{ij}$, and establish the unique solvability of the Cauchy problem for $L$ under various assumptions on the initial datum.

Introduction
We consider a Kolmogorov-Fokker-Planck (from now on KFP) operator of the kind

$$Lu = \sum_{i,j=1}^{q} a_{ij}(t)\,\partial^2_{x_i x_j} u + \sum_{i,j=1}^{N} b_{ij}\, x_j\, \partial_{x_i} u - \partial_t u, \qquad (1.1)$$

where:

(H1) $A_0(t) = \{a_{ij}(t)\}_{i,j=1}^{q}$ is a symmetric, uniformly positive matrix on $\mathbb{R}^q$, $q \leq N$, of bounded measurable coefficients defined for $t \in \mathbb{R}$, so that for some constant $\nu > 0$,

$$\nu\,|\xi|^2 \leq \sum_{i,j=1}^{q} a_{ij}(t)\,\xi_i \xi_j \leq \nu^{-1}\,|\xi|^2 \quad \text{for every } \xi \in \mathbb{R}^q, \text{ a.e. } t \in \mathbb{R}. \qquad (1.2)$$
Lanconelli-Polidoro in [13] studied the operators (1.1) with constant $a_{ij}$, proving that they are hypoelliptic if and only if the matrix $B = \{b_{ij}\}_{i,j=1}^{N}$ satisfies the following condition:

(H2) there exists a basis of $\mathbb{R}^N$ with respect to which $B$ assumes the form

$$B = \begin{pmatrix} * & * & \cdots & * & * \\ B_1 & * & \cdots & * & * \\ 0 & B_2 & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & B_\kappa & * \end{pmatrix}, \qquad (1.4)$$

where every block $B_j$ is an $m_j \times m_{j-1}$ matrix of rank $m_j$, $j = 1, 2, \ldots, \kappa$, while the entries of the blocks denoted by $*$ are arbitrary. It is also proved in [13] that the operator $L$ (corresponding to constant $a_{ij}$) is left invariant with respect to a suitable (noncommutative) Lie group of translations on $\mathbb{R}^N$. If, in addition, all the blocks $*$ in (1.4) vanish, then $L$ is also 2-homogeneous with respect to a family of dilations. In this very special case, the operator $L$ fits into the rich theory of left invariant, 2-homogeneous, Hörmander operators on homogeneous groups.
Coming back to the family of hypoelliptic and left invariant operators with constant a ij (and possibly nonzero blocks * in (1.4)), an explicit fundamental solution is known, after [11] and [13].
A first result of this paper consists in showing that if, under the same structural assumptions considered in [13], the coefficients $a_{ij}$ are allowed to depend on $t$, even just in an $L^\infty$ way, then an explicit fundamental solution $\Gamma$ can still be constructed. It is worth noting that, under our assumptions (H1)-(H2), $L$ is hypoelliptic if and only if the coefficients $a_{ij}$ are $C^\infty$ functions, which also means that $\Gamma$ is smooth outside the pole. In our more general context, $\Gamma$ will be smooth in $x$ and only locally Lipschitz continuous in $t$, outside the pole. Our fundamental solution also allows us to solve a Cauchy problem for $L$ under various assumptions on the initial datum, and to prove its uniqueness. Moreover, we show that the fundamental solution of $L$ satisfies two-sided bounds in terms of the fundamental solutions of model operators of the kind

$$L_\alpha u = \alpha \sum_{i=1}^{q} \partial^2_{x_i} u + \sum_{i,j=1}^{N} b_{ij}\, x_j\, \partial_{x_i} u - \partial_t u, \qquad \alpha > 0, \qquad (1.5)$$

whose explicit expression is more easily handled. This fact has further interesting consequences when combined with the results of [13], which allow us to compare the fundamental solution of (1.5) with that of the corresponding "principal part operator", obtained from (1.5) by annihilating all the blocks $*$ in (1.4). The fundamental solution of the latter operator has an even simpler explicit form, since it possesses both translation invariance and homogeneity.
To put our results into context, let us now make some historical remarks. Already in 1934, Kolmogorov in [10] exhibited an explicit fundamental solution, smooth outside the pole, for the ultraparabolic operator

$$\partial^2_{x_1} + x_1\,\partial_{x_2} - \partial_t \qquad \text{in } \mathbb{R}^2 \times \mathbb{R}.$$

For more general classes of ultraparabolic KFP operators, Weber [17] (1951), Il'in [9] (1964), and Sonin [16] (1967) proved the existence of a fundamental solution smooth outside the pole, by the Levi method, starting from an approximate fundamental solution inspired by the one found by Kolmogorov. Hörmander, in the introduction of [8] (1967), sketches a procedure to compute explicitly (by Fourier transform and the method of characteristics) a fundamental solution for a class of KFP operators of type (1.1) (with constant $a_{ij}$). In all the aforementioned papers the focus is on proving that the operator, despite its degenerate character, is hypoelliptic. This is accomplished by showing the existence of a fundamental solution smooth outside the pole, without explicitly computing it.
Kupcov in [11] (1972) computes the fundamental solution for a class of KFP operators of the kind (1.1) (with constant $a_{ij}$). This procedure is generalized by the same author in [12] (1982) to a class of operators (1.1) with time-dependent coefficients $a_{ij}$, which however are assumed to be of class $C^\kappa$ for some positive integer $\kappa$ related to the structure of the matrix $B$. Our procedure for computing the fundamental solution follows the technique of Hörmander (different from that of Kupcov) and works also for nonsmooth $a_{ij}(t)$.
Based on the explicit expression of the fundamental solution, existence, uniqueness and regularity issues for the Cauchy problem have been studied in the semigroup setting. We refer here to the article by Lunardi [14], and to Farkas and Lorenzi [7]. The parametrix method introduced in [17, 9, 16] was used by Polidoro in [15] and by Di Francesco and Pascucci in [5] for more general families of Kolmogorov equations with Hölder continuous coefficients. We also refer to the article [4] by Delarue and Menozzi, where a Lipschitz continuous drift term is considered in the framework of the stochastic theory. For a recent survey on the theory of KFP operators we refer to the paper [1] by Anceschi-Polidoro, while a discussion of several motivations for studying this class of operators can be found, for instance, in the survey book [2, §2.1].
The interest in studying KFP operators with a possibly rough time-dependence of the coefficients comes from the theory of stochastic processes. Indeed, let $\sigma = \sigma(t)$ be an $N \times q$ matrix of rank $q$, let $B$ be as in (1.4), and let $(W_t)_{t \geq t_0}$ be a $q$-dimensional Wiener process. Denote by $(X_t)_{t \geq t_0}$ the solution to the $N$-dimensional stochastic differential equation

$$dX_t = B X_t\, dt + \sigma(t)\, dW_t, \qquad X_{t_0} = x_0.$$

Then the forward Kolmogorov operator $K_f$ of $(X_t)_{t \geq t_0}$ agrees with $L$ up to a constant zero order term; moreover, the backward Kolmogorov operator $K_b$ of $(X_t)_{t \geq t_0}$ acts on the variables of the pole. Note that $K_f$ is the transposed operator of $K_b$. In general, given a differential operator $K$, its transposed operator $K^*$ is the one which satisfies the relation

$$\int K u \cdot v \; dx\, dt = \int u \cdot K^* v \; dx\, dt \qquad \text{for every } u, v \in C_0^\infty.$$

Notation 1.1 Throughout the paper we will regard vectors $x \in \mathbb{R}^N$ as columns, and we will write $x^T$, $M^T$ to denote the transpose of a vector $x$ or a matrix $M$. We also define the (symmetric, nonnegative) $N \times N$ matrix

$$A(t) = \begin{pmatrix} A_0(t) & 0 \\ 0 & 0 \end{pmatrix}. \qquad (1.8)$$

Before stating our results, let us fix precise definitions of solution to the equation $Lu = 0$ and to a Cauchy problem for $L$.

Definition 1.2 Let $I \subseteq \mathbb{R}$ be an open interval. We say that $u$ is a solution to $Lu = 0$ in $\mathbb{R}^N \times I$ if:
- $u$ is jointly continuous in $\mathbb{R}^N \times I$;
- for every $t \in I$, $u(\cdot, t) \in C^2(\mathbb{R}^N)$;
- for every $x \in \mathbb{R}^N$, $u(x, \cdot)$ is absolutely continuous on $I$, and $\frac{\partial u}{\partial t}$ (defined for a.e. $t$) is essentially bounded for $t$ ranging in every compact subinterval of $I$;
- for a.e. $t \in I$ and every $x \in \mathbb{R}^N$, $Lu(x, t) = 0$.

Definition 1.3 We say that $u(x, t)$ is a solution to the Cauchy problem

$$\begin{cases} Lu = 0 & \text{in } \mathbb{R}^N \times (t_0, T) \\ u(\cdot, t_0) = f \end{cases} \qquad (1.9)$$

if:
(a) $u$ is a solution to the equation $Lu = 0$ in $\mathbb{R}^N \times (t_0, T)$ (in the sense of the above definition);
(b) $u(\cdot, t) \to f$ as $t \to t_0^+$, in a sense made precise in each of the following statements.

In the following we will also need the transposed operator of $L$, defined by

$$L^* v = \sum_{i,j=1}^{q} a_{ij}(s)\,\partial^2_{y_i y_j} v - \sum_{i,j=1}^{N} b_{ij}\,\partial_{y_i}(y_j v) + \partial_s v. \qquad (1.10)$$

The definition of solution to the equation $L^* u = 0$ is perfectly analogous to Definition 1.2.
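For a linear SDE of the above type, the covariance of $X_t$ is $\int_{t_0}^{t} e^{(t-s)B}\,\sigma(s)\sigma(s)^T\, e^{(t-s)B^T}\, ds$, which has the same integral structure as the matrix $C(t, t_0)$ used throughout the paper (with $e^{sB}$ in place of $E(s) = e^{-sB}$; the sign convention depends on how the drift enters the operator). The following sketch, with an illustrative constant $\sigma$ of our choosing, checks this against a Monte Carlo simulation.

```python
import numpy as np

# Kolmogorov-type example: N = 2, q = 1,
#   dX1 = sqrt(2) dW,  dX2 = X1 dt,  i.e. B = [[0,0],[1,0]], sigma = (sqrt(2), 0)^T.
# Exact covariance of X_t (with X_0 = 0):
#   Cov(X_t) = int_0^t e^{(t-s)B} sigma sigma^T e^{(t-s)B^T} ds
#            = 2 * [[t, t^2/2], [t^2/2, t^3/3]].
t, n_paths, n_steps = 1.0, 50_000, 500
dt = t / n_steps
rng = np.random.default_rng(0)

x1 = np.zeros(n_paths)
x2 = np.zeros(n_paths)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x2 += x1 * dt             # dX2 = X1 dt (left-point Euler step)
    x1 += np.sqrt(2.0) * dw   # dX1 = sqrt(2) dW

cov = np.cov(np.vstack([x1, x2]))
exact = 2.0 * np.array([[t, t**2 / 2], [t**2 / 2, t**3 / 3]])
```

The Monte Carlo covariance agrees with the closed-form integral up to statistical and Euler-discretization error.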
We can now state precisely the main results of the paper.
Theorem 1.4 Under the assumptions (H1)-(H2) above, denote by $E(s)$ and $C(t, t_0)$ the following $N \times N$ matrices, for $s, t, t_0 \in \mathbb{R}$ and $t > t_0$:

$$E(s) = e^{-sB}, \qquad C(t, t_0) = \int_{t_0}^{t} E(t - s)\, A(s)\, E(t - s)^T\, ds. \qquad (1.11)$$

Then the matrix $C(t, t_0)$ is symmetric and positive for every $t > t_0$. Let

$$\Gamma(x, t; x_0, t_0) = \frac{e^{-(t - t_0)\operatorname{Tr} B}}{(4\pi)^{N/2} \sqrt{\det C(t, t_0)}}\, \exp\left(-\frac{1}{4}\,(x - E(t - t_0)\, x_0)^T\, C(t, t_0)^{-1}\,(x - E(t - t_0)\, x_0)\right) \qquad (1.12)$$

for $t > t_0$, and $\Gamma = 0$ for $t \leq t_0$. Then $\Gamma$ has the following properties (so that $\Gamma$ is a fundamental solution for $L$ with pole $(x_0, t_0)$).
(i) In the region $\mathbb{R}^{2N+2}_* = \{(x, t; x_0, t_0) : (x, t) \neq (x_0, t_0)\}$ (see (1.13)), $\Gamma$ and its derivatives $\frac{\partial^{\alpha+\beta}\Gamma}{\partial x^\alpha \partial x_0^\beta}$ are jointly continuous, and $\Gamma$ is $C^\infty$ with respect to $x$ and $x_0$. Moreover, $\Gamma$ and $\frac{\partial^{\alpha+\beta}\Gamma}{\partial x^\alpha \partial x_0^\beta}$ are Lipschitz continuous with respect to $t$ and with respect to $t_0$ in any region $H \leq t_0 + \delta \leq t \leq K$ for fixed $H, K \in \mathbb{R}$ and $\delta > 0$. Also, $\lim_{|x| \to +\infty} \Gamma(x, t; x_0, t_0) = 0$ for every $t > t_0$ and every $x_0 \in \mathbb{R}^N$, and $\lim_{|x_0| \to +\infty} \Gamma(x, t; x_0, t_0) = 0$ for every $t > t_0$ and every $x \in \mathbb{R}^N$.

(ii) For every fixed $(x_0, t_0) \in \mathbb{R}^{N+1}$, the function $\Gamma(\cdot, \cdot; x_0, t_0)$ is a solution to $Lu = 0$ in $\mathbb{R}^N \times (t_0, +\infty)$ (in the sense of Definition 1.2).

(iii) For every fixed $(x, t) \in \mathbb{R}^{N+1}$, the function $\Gamma(x, t; \cdot, \cdot)$ is a solution to $L^* u = 0$ in $\mathbb{R}^N \times (-\infty, t)$.

(iv) Let $f \in C^0_b(\mathbb{R}^N)$ or $f \in L^p(\mathbb{R}^N)$ for some $p \in [1, \infty)$. Then there exists one and only one solution to the Cauchy problem (1.9) (in the sense of Definition 1.3, with $T = \infty$) such that $u \in C^0_b(\mathbb{R}^N \times [t_0, \infty))$ or $u(\cdot, t) \in L^p(\mathbb{R}^N)$ for every $t > t_0$, respectively. The solution is given by

$$u(x, t) = \int_{\mathbb{R}^N} \Gamma(x, t; y, t_0)\, f(y)\, dy \qquad (1.14)$$

and is $C^\infty(\mathbb{R}^N)$ with respect to $x$ for every fixed $t > t_0$. If moreover $f$ is continuous and vanishes at infinity, then $u(\cdot, t) \to f$ uniformly in $\mathbb{R}^N$ as $t \to t_0^+$.
(v) Let $f$ be a (possibly unbounded) continuous function on $\mathbb{R}^N$ satisfying the condition

$$|f(x)| \leq C\, e^{\alpha |x|^2} \quad \text{for every } x \in \mathbb{R}^N \qquad (1.15)$$

for some $C, \alpha > 0$. Then there exists $T > 0$ such that there exists one and only one solution $u$ to the Cauchy problem (1.9) satisfying the condition (1.16) for some $C > 0$. The solution $u(x, t)$ is given by (1.14) for $t \in (t_0, T)$, and it is $C^\infty(\mathbb{R}^N)$ with respect to $x$ for every fixed $t \in (t_0, T)$.
(vi) $\Gamma$ satisfies, for every $x_0 \in \mathbb{R}^N$ and $t_0 < t$, the integral identities

$$\int_{\mathbb{R}^N} \Gamma(x, t; y, t_0)\, dy = 1, \qquad \Gamma(x, t; y, s) = \int_{\mathbb{R}^N} \Gamma(x, t; z, \tau)\, \Gamma(z, \tau; y, s)\, dz$$

for every $x, y \in \mathbb{R}^N$ and $s < \tau < t$.
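Since the coefficients $a_{ij}(t)$ are only measurable, all the objects in the theorem are computable by quadrature. The following sketch builds $E(s)$, $C(t, t_0)$ and $\Gamma$ for a toy Kolmogorov-type operator with a rough-looking coefficient (our illustrative choice, not from the paper), and checks numerically that $C(t, t_0)$ is symmetric positive definite and that $\int \Gamma(x, t; y, t_0)\, dy = 1$, as in point (vi).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Toy KFP operator: N = 2, q = 1, B = [[0,0],[1,0]] (so Tr B = 0),
# A(t) = diag(a(t), 0) with the rough-looking (but measurable) coefficient
# a(t) = 1 + 0.5*sin(20 t)  -- an illustrative choice, not from the paper.
B = np.array([[0.0, 0.0], [1.0, 0.0]])
a = lambda s: 1.0 + 0.5 * np.sin(20.0 * s)

def E(s):
    """E(s) = exp(-s B)."""
    return expm(-s * B)

def C(t, t0, n=4000):
    """C(t, t0) = int_{t0}^t E(t-s) A(s) E(t-s)^T ds, by the trapezoidal rule."""
    grid = np.linspace(t0, t, n)
    vals = np.empty((n, 2, 2))
    for k, s in enumerate(grid):
        Es = E(t - s)
        A = np.array([[a(s), 0.0], [0.0, 0.0]])
        vals[k] = Es @ A @ Es.T
    return trapezoid(vals, grid, axis=0)

t0, t = 0.0, 1.0
Ct = C(t, t0)
Cinv = np.linalg.inv(Ct)
Et = E(t - t0)
# Tr B = 0 in this example, so the prefactor exp(-(t - t0) Tr B) equals 1.
norm = 1.0 / ((4.0 * np.pi) * np.sqrt(np.linalg.det(Ct)))  # (4 pi)^{N/2}, N = 2

def Gamma(x, y):
    """Gamma(x, t; y, t0) as in (1.12), with t, t0 fixed above."""
    w = x - Et @ y
    return norm * np.exp(-0.25 * w @ Cinv @ w)
```

With $C(t, t_0)$ and its inverse precomputed, evaluating $\Gamma$ on a grid is cheap, and the total mass in the pole variable comes out equal to 1 up to quadrature error.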
Remark 1.5 Our uniqueness results only require the condition (1.16). Indeed, as we will prove in Proposition 4.14, all the solutions to the Cauchy problem (1.9), in the sense of Definition 1.3 with $f$ satisfying (1.15), do satisfy the condition (1.16).

Remark 1.6 All the statements in the above theorem still hold if the coefficients $a_{ij}(t)$ are defined only for $t$ belonging to some interval $I$. In this case the above formulas need to be considered only for $t, t_0 \in I$. In order to simplify notation, throughout the paper we will only consider the case $I = \mathbb{R}$.
The above theorem will be proved in Section 4. The second main result of this paper is a comparison between $\Gamma$ and the fundamental solutions $\Gamma_\alpha$ of the model operators (1.5) corresponding to $\alpha = \nu$, $\alpha = \nu^{-1}$ (with $\nu$ as in (1.2)). Specializing (1.12) to the operators (1.5), the corresponding matrix is $\alpha\, C_0(t - t_0)$, so that

$$\Gamma_\alpha(x, t; x_0, t_0) = \frac{e^{-(t - t_0)\operatorname{Tr} B}}{(4\pi\alpha)^{N/2} \sqrt{\det C_0(t - t_0)}}\, \exp\left(-\frac{1}{4\alpha}\,(x - E(t - t_0)\, x_0)^T\, C_0(t - t_0)^{-1}\,(x - E(t - t_0)\, x_0)\right), \qquad (1.17)$$

where, here and in the following, $C_0(t) = C(t, 0)$ computed with $A_0(t) = I_q$ (the $q \times q$ identity matrix). Explicitly,

$$C_0(t) = \int_0^t E(t - s)\, I_{q,N}\, E(t - s)^T\, ds, \qquad (1.18)$$

where $I_{q,N}$ is the $N \times N$ matrix given by

$$I_{q,N} = \begin{pmatrix} I_q & 0 \\ 0 & 0 \end{pmatrix}.$$

Then:

Theorem 1.7 For every $x, x_0 \in \mathbb{R}^N$ and $t > t_0$ we have

$$\nu^N\, \Gamma_\nu(x, t; x_0, t_0) \leq \Gamma(x, t; x_0, t_0) \leq \nu^{-N}\, \Gamma_{\nu^{-1}}(x, t; x_0, t_0). \qquad (1.19)$$

The above theorem will be proved in Section 3. The following example illustrates the reason why our comparison result is useful.

Example 1.8 Let us consider the operator

$$Lu = a(t)\,\partial^2_{x} u + x\,\partial_{y} u - \partial_t u, \qquad (x, y) \in \mathbb{R}^2,$$

with $a(t)$ measurable and satisfying $\nu \leq a(t) \leq \nu^{-1}$ for a.e. $t$. Here $N = 2$, $q = 1$,

$$B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad E(s) = e^{-sB} = \begin{pmatrix} 1 & 0 \\ -s & 1 \end{pmatrix}, \qquad A(s) = \begin{pmatrix} a(s) & 0 \\ 0 & 0 \end{pmatrix}.$$

Let us compute $\Gamma(x, y, t; 0, 0, 0)$ in this case. We have

$$E(t - s)\, A(s)\, E(t - s)^T = a(s)\begin{pmatrix} 1 & -(t - s) \\ -(t - s) & (t - s)^2 \end{pmatrix}.$$

Therefore we find, for $t > 0$,

$$C(t, 0) = \begin{pmatrix} \int_0^t a(s)\, ds & -\int_0^t a(s)(t - s)\, ds \\[2pt] -\int_0^t a(s)(t - s)\, ds & \int_0^t a(s)(t - s)^2\, ds \end{pmatrix},$$

so that, explicitly, we have (since $\operatorname{Tr} B = 0$)

$$\Gamma(x, y, t; 0, 0, 0) = \frac{1}{4\pi\sqrt{\det C(t, 0)}}\, \exp\left(-\frac{1}{4}\,(x, y)\, C(t, 0)^{-1} \begin{pmatrix} x \\ y \end{pmatrix}\right).$$

On the other hand, when considering the model operator $L_\alpha u = \alpha\,\partial^2_x u + x\,\partial_y u - \partial_t u$, the above integrals can be computed exactly, giving

$$C_\alpha(t, 0) = \alpha \begin{pmatrix} t & -t^2/2 \\ -t^2/2 & t^3/3 \end{pmatrix}, \qquad \det C_\alpha(t, 0) = \frac{\alpha^2 t^4}{12}.$$

The comparison result of Theorem 1.7 then reads as follows:

$$\nu^2\, \Gamma_\nu(x, y, t; 0, 0, 0) \leq \Gamma(x, y, t; 0, 0, 0) \leq \nu^{-2}\, \Gamma_{\nu^{-1}}(x, y, t; 0, 0, 0).$$

Plan of the paper. In §2 we compute the explicit expression of the fundamental solution $\Gamma$ of $L$ by using the Fourier transform and the method of characteristics, showing how one arrives at the explicit formula (1.12). This procedure is somewhat formal since, due to the nonsmoothness of the coefficients $a_{ij}(t)$, we cannot plainly assume that the functional setting where the construction is carried out is the usual distributional one. Since all the properties of $\Gamma$ which qualify it as a fundamental solution will be proved in the subsequent sections, on a purely logical basis one could say that §2 is superfluous. Nevertheless, we prefer to present this complete computation to show how the formula has been built. A further reason is the following: the only article where the analogous computation in the constant coefficient case is written in detail seems to be [11], which is written in Russian.
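For the operator of Example 1.8 everything can be checked numerically. The sketch below (the coefficient $a(t)$ and the evaluation point are our illustrative choices) computes $C(t, 0)$ by quadrature, the model matrices in closed form, and verifies the two-sided comparison of Theorem 1.7 at a sample point.

```python
import numpy as np
from scipy.integrate import trapezoid

nu = 0.5
a = lambda s: 1.0 + 0.5 * np.sin(20.0 * s)   # takes values in [nu, 1/nu]

def C_var(t, n=4000):
    """C(t, 0) of Example 1.8 for variable a(t), by the trapezoidal rule."""
    s = np.linspace(0.0, t, n)
    m11 = trapezoid(a(s), s)
    m12 = -trapezoid(a(s) * (t - s), s)
    m22 = trapezoid(a(s) * (t - s) ** 2, s)
    return np.array([[m11, m12], [m12, m22]])

def C_alpha(t, alpha):
    """Closed-form C_alpha(t, 0) for the model operator with a(t) = alpha."""
    return alpha * np.array([[t, -t**2 / 2], [-t**2 / 2, t**3 / 3]])

def gauss(w, Cmat):
    """Gamma with Tr B = 0: (4 pi)^{-1} (det C)^{-1/2} exp(-w^T C^{-1} w / 4)."""
    return (np.exp(-0.25 * w @ np.linalg.solve(Cmat, w))
            / (4.0 * np.pi * np.sqrt(np.linalg.det(Cmat))))

t = 1.0
w = np.array([0.3, -0.2])          # sample point (x, y); pole at the origin
G    = gauss(w, C_var(t))          # variable-coefficient Gamma
G_lo = gauss(w, C_alpha(t, nu))    # Gamma_nu
G_hi = gauss(w, C_alpha(t, 1/nu))  # Gamma_{1/nu}
```

At the sample point one indeed finds $\nu^2 \Gamma_\nu \leq \Gamma \leq \nu^{-2} \Gamma_{\nu^{-1}}$ (here $N = 2$).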
In §3 we prove Theorem 1.7, comparing $\Gamma$ with the fundamental solutions of two model operators, which are easier to write explicitly and to study. In §4 we prove Theorem 1.4.

Computation of the fundamental solution Γ
As explained at the end of the introduction, this section contains a formal computation of the fundamental solution $\Gamma$. To this aim, we choose any $(x_0, t_0) \in \mathbb{R}^{N+1}$, and we look for a solution to the Cauchy problem

$$\begin{cases} Lu = 0 & \text{in } \mathbb{R}^N \times (t_0, +\infty) \\ u(\cdot, t_0) = \delta_{x_0} \end{cases} \qquad (2.1)$$

by applying the Fourier transform with respect to $x$, using the notation

$$\widehat{u}(\xi, t) = \int_{\mathbb{R}^N} u(x, t)\, e^{-i x \cdot \xi}\, dx.$$

By the standard properties of the Fourier transform, the problem (2.1) is equivalent to the following Cauchy problem, which we write in compact form (recalling the definition of $A(t)$ given in (1.8)):

$$\begin{cases} \partial_t \widehat{u} + (B^T \xi) \cdot \nabla_\xi \widehat{u} = -\left(\xi^T A(t)\, \xi + \operatorname{Tr} B\right) \widehat{u} & \text{for } t > t_0 \\ \widehat{u}(\xi, t_0) = e^{-i x_0 \cdot \xi}. \end{cases} \qquad (2.2)$$

Now we solve the problem (2.2) by the method of characteristics. Fix any initial condition $\eta \in \mathbb{R}^N$, and consider the system of ODEs

$$t'(s) = 1, \quad \xi'(s) = B^T \xi(s), \qquad t(0) = t_0, \quad \xi(0) = \eta.$$

We plainly find $t(s) = t_0 + s$ and $\xi(s) = \exp(s B^T)\, \eta$, so that along the characteristics $\widehat{u}$ solves a linear first order ODE. Hence, substituting $s = t - t_0$, $\eta = \exp((t_0 - t) B^T)\, \xi$, and recalling the notation introduced in (1.11), we find

$$\widehat{u}(\xi, t) = e^{-(t - t_0)\operatorname{Tr} B}\; e^{-i\, (E(t - t_0)\, x_0) \cdot \xi}\; G_0(\xi, t, t_0), \qquad \text{where } G_0(\xi, t, t_0) = e^{-\xi^T C(t, t_0)\, \xi},$$

hence it is enough to compute the antitransform of $G_0(\xi, t, t_0)$. In order to do that, the following will be useful:

Proposition 2.1 For every symmetric positive definite $N \times N$ matrix $C$,

$$\int_{\mathbb{R}^N} e^{-\frac{1}{4} x^T C^{-1} x}\; e^{-i x \cdot \xi}\, dx = (4\pi)^{N/2} \sqrt{\det C}\; e^{-\xi^T C\, \xi}.$$

The above formula is a standard result in probability theory, being (up to normalization) the characteristic function of a multivariate normal distribution. To apply the previous proposition, and antitransform the function $G_0(\xi, t, t_0)$, we still need to know that the matrix $C(t, t_0)$ is strictly positive. By [13] we know that the matrix $C_0(t)$ (see (1.18)) is positive, under the structure conditions on $B$ expressed in (1.4). Exploiting this fact, let us show that the same is true for our $C(t, t_0)$:

Proposition 2.2 For every $t > t_0$ and every $\xi \in \mathbb{R}^N$,

$$\nu\; \xi^T C_0(t - t_0)\, \xi \;\leq\; \xi^T C(t, t_0)\, \xi \;\leq\; \nu^{-1}\; \xi^T C_0(t - t_0)\, \xi.$$

In particular, the matrix $C(t, t_0)$ is positive for $t > t_0$. Proof.
By (H1), for a.e. $s$ we have $\nu\, I_{q,N} \leq A(s) \leq \nu^{-1} I_{q,N}$ as quadratic forms, hence

$$\nu\; \xi^T E(t - s)\, I_{q,N}\, E(t - s)^T \xi \;\leq\; \xi^T E(t - s)\, A(s)\, E(t - s)^T \xi \;\leq\; \nu^{-1}\; \xi^T E(t - s)\, I_{q,N}\, E(t - s)^T \xi.$$

Integrating for $s \in (t_0, t)$ the previous inequality we get the first bound; analogously we get the other one.

By the previous proposition, the matrix $C(t, t_0)$ is positive definite for every $t > t_0$, since, under our assumptions, this is true for $C_0(t - t_0)$. Therefore we can invert $C(t, t_0)$ and antitransform the function $G_0(\xi, t, t_0)$. Namely, applying Proposition 2.1 we get

$$\mathcal{F}^{-1}\left(e^{-\xi^T C(t, t_0)\, \xi}\right)(x) = \frac{1}{(4\pi)^{N/2} \sqrt{\det C(t, t_0)}}\; e^{-\frac{1}{4}\, x^T C(t, t_0)^{-1} x}.$$

Hence we have computed the antitransform of $G_0(\xi, t, t_0)$, and this also implies that the (so far, "formal") fundamental solution of $L$ is

$$\Gamma(x, t; x_0, t_0) = \frac{e^{-(t - t_0)\operatorname{Tr} B}}{(4\pi)^{N/2} \sqrt{\det C(t, t_0)}}\, \exp\left(-\frac{1}{4}\,(x - E(t - t_0)\, x_0)^T\, C(t, t_0)^{-1}\,(x - E(t - t_0)\, x_0)\right),$$

which is the expression given in Theorem 1.4.
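Proposition 2.2 can be sanity-checked numerically: with a coefficient $a(t) \in [\nu, \nu^{-1}]$ (our illustrative choice), both $C(t, t_0) - \nu\, C_0(t - t_0)$ and $\nu^{-1} C_0(t - t_0) - C(t, t_0)$ should be positive semidefinite. A minimal sketch for a Kolmogorov-type matrix $B$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

B = np.array([[0.0, 0.0], [1.0, 0.0]])
nu = 0.5
a = lambda s: 1.0 + 0.5 * np.sin(20.0 * s)   # values in [nu, 1/nu]

def C_general(t, t0, coeff, n=4000):
    """int_{t0}^t E(t-s) A(s) E(t-s)^T ds with E(s) = expm(-s B)."""
    grid = np.linspace(t0, t, n)
    vals = np.empty((n, 2, 2))
    for k, s in enumerate(grid):
        Es = expm(-(t - s) * B)
        A = np.array([[coeff(s), 0.0], [0.0, 0.0]])
        vals[k] = Es @ A @ Es.T
    return trapezoid(vals, grid, axis=0)

t0, t = 0.2, 1.2
Cm = C_general(t, t0, a)               # variable-coefficient C(t, t0)
C0 = C_general(t, t0, lambda s: 1.0)   # C_0(t - t0), i.e. A_0 = I_q
```

Both matrix differences have nonnegative spectrum, up to floating-point roundoff.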

Comparison between Γ and fundamental solutions of model operators
In this section we will prove Theorem 1.7. The first step is to derive from Proposition 2.2 an analogous control between the quadratic forms associated to $C(t, t_0)^{-1}$ and $C_0(t - t_0)^{-1}$. The following algebraic fact will help:

Proposition 3.1 Let $C_1, C_2$ be symmetric positive definite $N \times N$ matrices such that

$$\xi^T C_1\, \xi \leq \xi^T C_2\, \xi \quad \text{for every } \xi \in \mathbb{R}^N. \qquad (3.1)$$

Then

$$\xi^T C_2^{-1}\, \xi \leq \xi^T C_1^{-1}\, \xi \quad \text{for every } \xi \in \mathbb{R}^N, \qquad \text{and} \qquad \det C_1 \leq \det C_2.$$

The first implication is already proved in [15, Remark 2.1]. For the convenience of the reader, we write a proof of both.
Proof. Let us fix some shorthand notation: whenever (3.1) holds for two symmetric positive matrices, we will write $C_1 \leq C_2$. Note that for every pair of symmetric positive matrices and every invertible matrix $G$,

$$C_1 \leq C_2 \implies G^T C_1\, G \leq G^T C_2\, G. \qquad (3.2)$$

Any symmetric positive matrix $C$ can be rewritten as $C = M^T \Delta M$ with $M$ orthogonal and $\Delta = \operatorname{diag}(\lambda_1, \ldots, \lambda_N)$. Letting $C^{1/2} = M^T \Delta^{1/2} M$, one can check that $C^{1/2}$ is still symmetric positive, and $C^{1/2} C^{1/2} = C$. Then, applying (3.2) with $G = C_1^{-1/2}$ we get $I \leq C_1^{-1/2} C_2\, C_1^{-1/2}$; passing to the inverse of this symmetric positive matrix, $C_1^{1/2} C_2^{-1} C_1^{1/2} \leq I$; finally, applying (3.2) to the last inequality with $G = C_1^{-1/2}$ we get $C_2^{-1} \leq C_1^{-1}$, so the first statement is proved. To show the inequality on determinants, from $I \leq C_1^{-1/2} C_2\, C_1^{-1/2}$ we read that all the eigenvalues of $C_1^{-1/2} C_2\, C_1^{-1/2}$ are $\geq 1$, so that $\det(C_1^{-1/2} C_2\, C_1^{-1/2}) = \det C_2 / \det C_1 \geq 1$, and we are done.

Applying Propositions 3.1 and 2.2 we immediately get the following:

Proposition 3.2 For every $\xi \in \mathbb{R}^N$ and every $t > t_0$ we have

$$\nu\; \xi^T C_0(t - t_0)^{-1}\, \xi \;\leq\; \xi^T C(t, t_0)^{-1}\, \xi \;\leq\; \nu^{-1}\; \xi^T C_0(t - t_0)^{-1}\, \xi, \qquad (3.3)$$

$$\nu^N \det C_0(t - t_0) \;\leq\; \det C(t, t_0) \;\leq\; \nu^{-N} \det C_0(t - t_0). \qquad (3.4)$$
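Proposition 3.1 is easy to test on random matrices; the following sketch (our own toy check) builds a random pair $C_1 \leq C_2$ and verifies both conclusions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5

# C1 symmetric positive definite; C2 = C1 + (something PSD), so C1 <= C2.
M = rng.normal(size=(N, N))
C1 = M @ M.T + N * np.eye(N)
P = rng.normal(size=(N, N))
C2 = C1 + P @ P.T

# Conclusion 1: C2^{-1} <= C1^{-1} as quadratic forms, i.e. C1^{-1} - C2^{-1} is PSD.
D = np.linalg.inv(C1) - np.linalg.inv(C2)
```

The difference of inverses is positive semidefinite and the determinants are ordered the other way round, exactly as the proposition states.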
We are now in a position to give the

Proof of Theorem 1.7. Recall that $C_0(t)$ is defined in (1.18). From the definition of the matrix $C(t, t_0)$ one immediately reads that, letting $C_\nu(t, t_0)$ be the matrix corresponding to the operator $L_\nu$, one has

$$C_\nu(t, t_0) = \nu\, C_0(t - t_0). \qquad (3.5)$$

From the explicit form of $\Gamma$ given in (1.12) one reads how $\Gamma$ changes whenever the matrix $C(t, t_0)$ is multiplied by a constant $(3.6)$; in particular this relation holds for $\Gamma_\nu$. Then (1.12), (3.5), (3.6) imply (1.17). Therefore (3.3) and (3.4) give

$$\Gamma(x, t; x_0, t_0) \leq \nu^{-N}\, \Gamma_{\nu^{-1}}(x, t; x_0, t_0),$$

and, analogously,

$$\Gamma(x, t; x_0, t_0) \geq \nu^{N}\, \Gamma_{\nu}(x, t; x_0, t_0).$$

As anticipated in the introduction, the above comparison result has further useful consequences when combined with some results of [13], where $\Gamma_\alpha$ is compared with the fundamental solution of the "principal part operator" having the same matrix $A = \alpha I_{q,N}$ and a simpler matrix $B$, namely the matrix obtained from (1.4) by annihilating all the $*$ blocks. This operator is also 2-homogeneous with respect to dilations and its matrix $C_0(t)$ (which in the next statement is called $C_0^*(t)$) has a simpler form, which gives a useful asymptotic estimate for the matrix of $L_\alpha$. Namely, the following holds:

Proposition 3.3 (see [13]) There exist constants $c, \delta \in (0, 1)$ such that for every $0 < t \leq \delta$ and every $\xi \in \mathbb{R}^N$ the quadratic forms of $C_0(t)$ and $C_0^*(t)$ are equivalent, with constants $c, c^{-1}$ independent of $t$.

The above result allows us to prove the following more explicit upper bound on $\Gamma$ for short times:

Proposition 3.4 There exist constants $c, \delta \in (0, 1)$ such that for $0 < t - t_0 \leq \delta$ and every $x, x_0 \in \mathbb{R}^N$ the function $\Gamma(x, t; x_0, t_0)$ satisfies an explicit Gaussian upper bound built on $C_0^*(t - t_0)$.

Proof. By (1.19) and the properties of the fundamental solution when the matrix $A(t)$ is constant, we can bound $\Gamma$ by $\nu^{-N}\Gamma_{\nu^{-1}}$; on the other hand, by Proposition 3.3 there exist $c, \delta \in (0, 1)$ such that the claimed bound holds for $0 < t - t_0 \leq \delta$.

Regularity of Γ

The entries of the matrix $E(t - \sigma)\, A(\sigma)\, E(t - \sigma)^T$ are measurable and uniformly essentially bounded for $(t, \sigma, t_0)$ varying in any region $H \leq t_0 \leq \sigma \leq t \leq K$ for fixed $H, K \in \mathbb{R}$. This implies that the matrix $C(t, t_0)$ is Lipschitz continuous with respect to $t$ and with respect to $t_0$ in any region $H \leq t_0 \leq t \leq K$ for fixed $H, K \in \mathbb{R}$. Moreover, $C(t, t_0)$ and $\det C(t, t_0)$ are jointly continuous in $(t, t_0)$.
Recalling that, by Proposition 2.2, the matrix C (t, t 0 ) is positive definite for any t > t 0 , we also have that C (t, t 0 ) −1 is Lipschitz continuous with respect to t and with respect to t 0 in any region H ≤ t 0 + δ ≤ t ≤ K for fixed H, K ∈ R and δ > 0, and is jointly continuous in (t, t 0 ) for t > t 0 .
From the explicit form of Γ and the previous remarks we conclude that Γ (x, t; x 0 , t 0 ) is jointly continuous in (x, t; x 0 , t 0 ) for t > t 0 , smooth w.r.t. x and x 0 for t > t 0 and Lipschitz continuous with respect to t and with respect to t 0 in any region H ≤ t 0 + δ ≤ t ≤ K for fixed H, K ∈ R and δ > 0.
Moreover, every derivative $\frac{\partial^{\alpha+\beta}\Gamma}{\partial x^\alpha \partial x_0^\beta}$ is given by $\Gamma$ times a polynomial in $(x, x_0)$ with coefficients Lipschitz continuous with respect to $t$ and with respect to $t_0$ in any region $H \leq t_0 + \varepsilon \leq t \leq K$ for fixed $H, K \in \mathbb{R}$ and $\varepsilon > 0$, and jointly continuous in $(t, t_0)$ for $t > t_0$.
In order to show that $\Gamma$ and $\frac{\partial^{\alpha+\beta}\Gamma}{\partial x^\alpha \partial x_0^\beta}$ are jointly continuous in the region $\mathbb{R}^{2N+2}_*$ (see (1.13)), we also need to show that these functions tend to zero as $(x, t) \to (y, t_0^+)$ with $y \neq x_0$. For $\Gamma$, this assertion follows from Proposition 3.4: for $y \neq x_0$ and $(x, t) \to (y, t_0^+)$ the Gaussian upper bound tends to zero, and the same is then true for $\Gamma(x, t; x_0, t_0)$. To prove the analogous assertion for $\frac{\partial^{\alpha+\beta}\Gamma}{\partial x^\alpha \partial x_0^\beta}$ we first need to establish some upper bounds for these derivatives, which will be useful several times in the following.
With the previous bounds in hand we can now prove the following: for $t, s$ ranging in a compact subset of $\{(t, s) : t \geq s + \varepsilon\}$, every derivative $\partial_x^\alpha \partial_y^\beta \Gamma(x, t; y, s)$ satisfies the bound (4.6) for every $x, y \in \mathbb{R}^N$, for constants $C, C'$ depending on $n$, $m$ and the compact set. In particular, for fixed $t > s$ we have

$$\lim_{|x| \to +\infty} \partial_x^\alpha \partial_y^\beta \Gamma(x, t; y, s) = 0 \;\; \text{for every } y \in \mathbb{R}^N, \qquad \lim_{|y| \to +\infty} \partial_x^\alpha \partial_y^\beta \Gamma(x, t; y, s) = 0 \;\; \text{for every } x \in \mathbb{R}^N,$$

for every multi-indices $\alpha, \beta$.
Proof. (i) The matrix $C(t, s)$ is jointly continuous in $(t, s)$ and, by Proposition 2.2, is positive definite for any $t > s$. Hence for $t, s$ ranging in a compact subset of $\{(t, s) : t \geq s + \varepsilon\}$ we have two-sided bounds on the eigenvalues of $C(t, s)$, for some $c, c_1 > 0$ only depending on $n$, $m$ and the compact set. Hence by (4.5) and (1.12) we get (4.6). Let now $t, s$ be fixed. If $y$ is fixed and $|x| \to \infty$ then (4.6) gives the first limit; if $x$ is fixed and $|y| \to \infty$, the second limit follows analogously. (ii) Applying (4.5) together with Proposition 3.4 we get that for some $\delta \in (0, 1)$, whenever $0 < t - s < \delta$, a short-time Gaussian bound holds. Next, we recall that by Proposition 3.2 we have (3.3), and an analogous bound holds for $C'(t, s)$ for small $t - s$. Hence we get (4.7).
If now $x \neq y$ are fixed, from (4.7) we deduce that these derivatives tend to zero as $t \to s^+$. With the above theorem, the proof of point (i) in Theorem 1.4 is complete.

Remark 4.3 (Long time behavior of Γ)
We have shown that the fundamental solution $\Gamma(x, t; y, s)$ and its spatial derivatives of every order tend to zero for $x$ or $y$ going to infinity, and tend to zero for $t \to s^+$ when $x \neq y$. It is natural to ask what happens for $t \to +\infty$. However, nothing can be said in general about this limit, even when the coefficients $a_{ij}$ are constant, and even in nondegenerate cases. Consider, for $N = 1$, the heat operator $\partial^2_x - \partial_t$, for which

$$\Gamma(x, t; y, 0) = \frac{1}{\sqrt{4\pi t}}\, e^{-\frac{(x - y)^2}{4t}} \to 0 \quad \text{as } t \to +\infty,$$

while for other model operators the long-time behavior can be different.

Γ is a solution
In this section we will prove points (ii), (iii), (vi) of Theorem 1.4. We want to check that our "candidate fundamental solution" with pole at $(x_0, t_0)$, given by (1.12), actually solves the equation outside the pole, with respect to $(x, t)$:

Theorem 4.4 For every fixed $(x_0, t_0) \in \mathbb{R}^{N+1}$, the function $\Gamma(\cdot, \cdot; x_0, t_0)$ is a solution to $Lu = 0$ in $\mathbb{R}^N \times (t_0, +\infty)$, in the sense of Definition 1.2.

Note that, by the results in §4.1, we already know that $\Gamma$ is infinitely differentiable w.r.t. $x, x_0$, and a.e. differentiable w.r.t. $t, t_0$. Before proving the theorem, let us establish the following easy fact, which will be useful in the subsequent computation and is also interesting in its own right:

Proposition 4.5 For every $x_0 \in \mathbb{R}^N$ and $t > t_0$,

$$\int_{\mathbb{R}^N} \Gamma(x_0, t; y, t_0)\, dy = 1. \qquad (4.8)$$
Proof. Let us compute, for $t > t_0$,

$$\int_{\mathbb{R}^N} \Gamma(x_0, t; y, t_0)\, dy = \frac{e^{-(t - t_0)\operatorname{Tr} B}}{(4\pi)^{N/2} \sqrt{\det C(t, t_0)}} \int_{\mathbb{R}^N} \exp\left(-\tfrac{1}{4}\,(x_0 - E(t - t_0)\, y)^T C(t, t_0)^{-1}\, (x_0 - E(t - t_0)\, y)\right) dy.$$

Next, substituting $z = E(t - t_0)\, y$, so that $dy = e^{(t - t_0)\operatorname{Tr} B}\, dz$, where we used the relation $\det(\exp B) = e^{\operatorname{Tr} B}$, holding for every square matrix $B$, the Gaussian integral equals $(4\pi)^{N/2} \sqrt{\det C(t, t_0)}\; e^{(t - t_0)\operatorname{Tr} B}$, hence the total integral is 1.

Proof of Theorem 4.4. Keeping the notation of Proposition 4.1, and exploiting (4.1)-(4.2), we obtain an expression for $L\Gamma$. To shorten notation, from now on, throughout this proof, we will write $C$ for $C(t, t_0)$ and $E$ for $E(t - t_0)$. To compute the $t$-derivative appearing in (4.11) we start by writing out $\partial_t (\det C)$ and $\partial_t (C^{-1})$. First, we note that $\partial_t E(t - t_0) = -B\, E(t - t_0)$; also, note that $B$ commutes with $E(t)$ and $B^T$ commutes with $E(t)^T$. Second, differentiating the identity $C^{-1} C = I$ we get, at least for a.e. $t$,

$$\partial_t (C^{-1}) = -C^{-1}\, (\partial_t C)\, C^{-1}. \qquad (4.14)$$

By (4.14), inserting (4.13) and (4.15) in (4.12) and then in (4.11), and exploiting (4.10), (4.9) and (4.16), we can now compute $L\Gamma$. To conclude our proof we are left to check that, in the resulting expression, the quantity in braces identically vanishes for $t > t_0$. This, however, is not a straightforward computation, since the term $\partial_t(\det C)$ is not easily computed explicitly. Let us state this fact as a separate ancillary result.
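The identity $\det(\exp B) = e^{\operatorname{Tr} B}$ used in the proof holds for every square matrix; a quick numerical check on a random matrix (our choice):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))  # arbitrary (non-symmetric) square matrix

lhs = np.linalg.det(expm(B))   # det(exp B)
rhs = np.exp(np.trace(B))      # e^{Tr B}
```

The two values agree to machine precision, as they must, since both equal the product of $e^{\lambda_i}$ over the eigenvalues of $B$.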
To prove this proposition we also need the following:

Lemma 4.7 For every $N \times N$ matrix $A$ and every $x_0 \in \mathbb{R}^N$ we have

$$\int_{\mathbb{R}^N} y^T A\, y\; e^{-|y|^2}\, dy = \frac{\pi^{N/2}}{2}\operatorname{Tr} A, \qquad \int_{\mathbb{R}^N} x_0^T A\, y\; e^{-|y|^2}\, dy = 0.$$

Proof of Lemma 4.7. The second identity is obvious for symmetry reasons. As to the first one,

$$\int_{\mathbb{R}^N} y^T A\, y\; e^{-|y|^2}\, dy = \sum_{i,j=1}^N a_{ij} \int_{\mathbb{R}^N} y_i\, y_j\; e^{-|y|^2}\, dy = \sum_{i=1}^N a_{ii} \int_{\mathbb{R}^N} y_i^2\; e^{-|y|^2}\, dy = \frac{\pi^{N/2}}{2}\operatorname{Tr} A,$$

where the integrals corresponding to the terms with $i \neq j$ vanish for symmetry reasons.
Proof of Proposition 4.6. Taking $\frac{\partial}{\partial t}$ in the identity (4.8), by (4.16), for almost every $t > t_0$, and letting again $x = E x_0 + 2 C^{1/2} y$ inside the integral, we obtain the desired identity, where we used Lemma 4.7. Finally, since similar matrices have the same trace, we are done. The proof of Proposition 4.6 also completes the proof of Theorem 4.4.
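One natural reading of Lemma 4.7 (consistent with the symmetry argument in its proof) is the Gaussian moment identity $\int_{\mathbb{R}^N} y^T A\, y\; e^{-|y|^2} dy = \frac{\pi^{N/2}}{2}\operatorname{Tr} A$. This can be verified directly in dimension $N = 2$, where $\pi^{N/2}/2 = \pi/2$; the matrix below is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.7], [0.3, 1.0]])  # arbitrary, not even symmetric

def integrand(y2, y1):
    y = np.array([y1, y2])
    return (y @ A @ y) * np.exp(-(y1**2 + y2**2))

# Integrate over [-8, 8]^2, which captures the Gaussian weight to machine precision.
val, _ = dblquad(integrand, -8.0, 8.0, -8.0, 8.0)
```

The off-diagonal contributions integrate to zero by odd symmetry, so only the trace survives.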

Remark 4.8 Since, by Theorem 4.4, we can write $\partial_t \Gamma = \sum_{i,j=1}^q a_{ij}(t)\,\partial^2_{x_i x_j}\Gamma + \sum_{i,j=1}^N b_{ij}\, x_j\,\partial_{x_i}\Gamma$,
the function $\partial_t \Gamma$ satisfies upper bounds analogous to those proved in Theorem 4.2 for $\partial^2_{x_i x_j}\Gamma$. Let us now show that $\Gamma$ satisfies, with respect to the other variables, the transposed equation:

Theorem 4.9 For every fixed $(x, t) \in \mathbb{R}^{N+1}$ we have

$$L^*\left(\Gamma(x, t; \cdot, \cdot)\right)(y, s) = 0 \quad \text{for a.e. } s < t \text{ and every } y \in \mathbb{R}^N.$$
Proof. We keep the notation used in the proof of Proposition 4.1. Exploiting (4.3) and (4.4) we obtain, by a tedious computation analogous to that in the proof of Theorem 4.4, an expression for $L^*\Gamma$, so we are done provided that:

Proposition 4.10 For a.e. $s < t$ the analogue of Proposition 4.6 holds with respect to the variables $(y, s)$.

Proof. Taking $\frac{\partial}{\partial s}$ in the identity (4.8), by (4.16), for almost every $s < t$, letting again $x = E(t - s)\, x_0 + 2\, C^{1/2}(t, s)\, y$ inside the integral, and applying Lemma 4.7 and (4.17), with some computation we get the desired identity. Since the matrices involved are similar, they have the same trace, and the proof is concluded.

The Cauchy problem
In this section we will prove points (iv), (v), (vii) of Theorem 1.4.
We are going to show that the Cauchy problem can be solved by means of our fundamental solution $\Gamma$. Just to simplify notation, let us now take $t_0 = 0$ and let $C(t) = C(t, 0)$. We have the following:

Theorem 4.11 Let

$$u(x, t) = \int_{\mathbb{R}^N} \Gamma(x, t; y, 0)\, f(y)\, dy. \qquad (4.18)$$

Then: (a) if $f \in L^p(\mathbb{R}^N)$ for some $p \in [1, \infty]$ or $f \in C^0_b(\mathbb{R}^N)$ (bounded continuous), then $u$ solves the equation $Lu = 0$ in $\mathbb{R}^N \times (0, \infty)$ and $u(\cdot, t) \in C^\infty(\mathbb{R}^N)$ for every fixed $t > 0$.
(b) if $f \in C^0(\mathbb{R}^N)$ and there exists $C > 0$ such that (1.15) holds, then there exists $T > 0$ such that $u$ solves the equation $Lu = 0$ in $\mathbb{R}^N \times (0, T)$ and $u(\cdot, t) \in C^\infty(\mathbb{R}^N)$ for every fixed $t \in (0, T)$.
The initial condition $f$ is attained in the following senses:

(i) if $f \in L^p(\mathbb{R}^N)$, $p \in [1, \infty)$, then $u(\cdot, t) \to f$ in $L^p(\mathbb{R}^N)$ as $t \to 0^+$;
(ii) if $f \in L^\infty(\mathbb{R}^N)$ and $f$ is continuous at some point $x_0$, then $u(x, t) \to f(x_0)$ as $(x, t) \to (x_0, 0^+)$;
(iii) if $f \in C^0_*(\mathbb{R}^N)$ (i.e., vanishing at infinity), then $u(\cdot, t) \to f$ uniformly as $t \to 0^+$;
(iv) if $f \in C^0(\mathbb{R}^N)$ satisfies (1.15), then $u(x, t) \to f(x_0)$ as $(x, t) \to (x_0, 0^+)$.

Proof. From Theorem 4.2, (i), we read that for $(x, t)$ ranging in a compact subset of $\mathbb{R}^N \times (0, +\infty)$ and every $y \in \mathbb{R}^N$, $\Gamma$ and its $x$-derivatives satisfy Gaussian bounds in $y$, for suitable constants $c, c_1 > 0$. Moreover, by Remark 4.8, $|\partial_t \Gamma|$ also satisfies this bound (with $n = 2$). This implies that for every $f \in L^p(\mathbb{R}^N)$, $p \in [1, \infty]$ (in particular for $f \in C^0_b(\mathbb{R}^N)$), the integral defining $u$ converges and $Lu$ can be computed taking the derivatives inside the integral. Moreover, all the derivatives $u_{x_i}$, $u_{x_i x_j}$ are continuous, while $u_t$ is defined only almost everywhere, and is locally essentially bounded. Then by Theorem 4.4 we have $Lu(x, t) = 0$ for a.e. $t > 0$ and every $x \in \mathbb{R}^N$. Also, the $x$-derivatives of every order can actually be taken under the integral sign, so that $u(\cdot, t) \in C^\infty(\mathbb{R}^N)$. This proves (a).

Postponing for a moment the proof of (b), to show that $u$ attains the initial condition (points (i)-(iii)), let us perform, inside the integral in (4.18), the change of variables

$$y = E(-t)\left(x - 2\, C(t)^{1/2} z\right),$$

which gives

$$u(x, t) = \pi^{-N/2} \int_{\mathbb{R}^N} e^{-|z|^2}\, f\!\left(E(-t)\, x - 2\, E(-t)\, C(t)^{1/2} z\right) dz.$$

Let us now proceed separately in the three cases.

(i) By Minkowski's inequality for integrals we have

$$\|u(\cdot, t) - f\|_p \leq \pi^{-N/2} \int_{\mathbb{R}^N} e^{-|z|^2}\, \left\| f\!\left(E(-t)\,\cdot\, - 2\, E(-t)\, C(t)^{1/2} z\right) - f \right\|_p dz.$$

Let us show that for every fixed $z \in \mathbb{R}^N$ the inner norm tends to $0$ as $t \to 0^+$; this will imply the desired result by Lebesgue's theorem.

Now, for $z$ fixed and $t \to 0^+$, $\left\| f\!\left(E(-t)\,\cdot\, - 2\, E(-t)\, C(t)^{1/2} z\right) - f \right\|_p \to 0$, because $2\, E(-t)\, C(t)^{1/2} z \to 0$, $E(-t) \to I$, and the translation operator is continuous on $L^p(\mathbb{R}^N)$.
It remains to show the required uniform domination, which is not straightforward. For every fixed $\varepsilon > 0$, let $\phi$ be a compactly supported continuous function such that $\|f - \phi\|_p < \varepsilon$; splitting the norm accordingly, the claim follows, and we are done.
(ii) Let $f \in L^\infty(\mathbb{R}^N)$ be continuous at some point $x_0$, and, as in point (i), rewrite $u(x, t)$ as an integral against the weight $e^{-|z|^2}$. Let us show that for every fixed $z$ the integrand converges pointwise; by Lebesgue's theorem we will then conclude the desired assertion.
For every $\varepsilon > 0$ we can choose a compactly supported continuous function $\phi$ approximating $f$ as needed. Since $\phi$ is compactly supported, there exists $R > 0$ such that the relevant bound holds for every $t \in (0, 1)$; since $\phi$ is uniformly continuous, for every $\varepsilon > 0$ there exists $\delta > 0$ such that for $0 < t < \delta$ we have $|\phi(E(-t)\, x) - \phi(x)| < \varepsilon$ whenever $|x| < R$. So we are done.

Let us now prove (b). To show that $u$ is well defined, smooth in $x$, and satisfies the equation, for $|x| \leq R$ let us write $u = I + II$, splitting the integral over $\{|y| < 2R\}$ and $\{|y| \geq 2R\}$. Since $f$ is bounded for $|y| < 2R$, reasoning as in the proof of point (a) we see that $L I(x, t)$ can be computed taking the derivatives under the integral sign, so that $L I(x, t) = 0$; moreover, the function $I(\cdot, t)$ is smooth. To prove the analogous properties for $II(x, t)$ we have to apply Theorem 4.2, (ii): there exist $\delta \in (0, 1)$, $C, c > 0$ such that for $0 < t < \delta$ and every $x, y \in \mathbb{R}^N$ we have, for $n = 0, 1, 2, \ldots$,
Recall that $|x| < R$ and $|y| > 2R$. For $\delta$ small enough and $t \in (\frac{\delta}{2}, \delta)$ we have suitable Gaussian bounds, with constants depending on $\delta, n$. Therefore, if $\alpha$ is the constant appearing in the assumption (1.15), the integral defining $II$ converges together with its derivatives, which shows that for $\delta$ small enough $L\, II(x, t)$ can be computed taking the derivatives under the integral sign, so that $L\, II(x, t) = 0$; moreover, the function $II(\cdot, t)$ is smooth. Applying point (ii) to $f(y)\, \chi_{B_{2R}(0)}$ we have $I \to f(x_0)$ as $(x, t) \to (x_0, 0)$. Let us show that $II \to 0$. By (3.7) we have a Gaussian bound for $\Gamma$; for $y$ fixed with $|y| > 2R$, hence $|x_0 - y| \neq 0$, the integrand tends to $0$. Since $|y| > 2R$ and $|x_0| < R$, for $x \to x_0$ we can assume $|x| < \frac{3}{2} R$, and for $t$ small enough we have $|x - E(t)\, y| \geq c\, |y|$ for some $c > 0$; hence for $t$ small enough the integrand of $II$ is dominated by an integrable function. Hence by Lebesgue's theorem $II \to 0$ as $(x, t) \to (x_0, 0)$, and we are done.
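The way the initial datum is attained in Theorem 4.11 can be seen concretely in the simplest KFP instance, the one-dimensional heat operator ($N = q = 1$, $B = 0$), where $\Gamma(x, t; y, 0) = (4\pi t)^{-1/2} e^{-(x - y)^2/4t}$ and $f = \cos$ gives the exact solution $u(x, t) = e^{-t}\cos x$. This is an illustration, not the paper's general setting.

```python
import numpy as np
from scipy.integrate import trapezoid

def u(x, t, f, L=40.0, n=8001):
    """u(x, t) = int Gamma(x, t; y, 0) f(y) dy for the 1-D heat kernel."""
    y = np.linspace(-L, L, n)
    ker = np.exp(-(x - y) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    return trapezoid(ker * f(y), y)
```

As $t \to 0^+$ the values $u(x, t) = e^{-t}\cos x$ converge to $f(x) = \cos x$, in line with point (ii) of the theorem.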
We next prove a uniqueness result for the Cauchy problem (1.9). In the following we consider solutions defined on some possibly bounded time interval $[0, T)$.
Theorem 4.13 If $u_1$ and $u_2$ are two solutions to the same Cauchy problem satisfying (1.16) for some $C > 0$, then $u_1 \equiv u_2$ in $\mathbb{R}^N \times (0, T)$.
Proof. Because of the linearity of $L$, it is enough to prove that if the function $u := u_1 - u_2$ satisfies (4.19) with $f = 0$ and (1.16), then $u(x, t) = 0$ for every $(x, t) \in \mathbb{R}^N \times (0, T)$. We will prove that $u = 0$ in a suitably thin strip $\mathbb{R}^N \times (0, t_1)$, where $t_1$ only depends on $L$ and $C$; the assertion then follows by iterating this argument. Let $t_1 \in (0, T]$ be a fixed number that will be specified later. For every positive $R$ we consider a function $h_R \in C^\infty(\mathbb{R}^N)$ such that $h_R(\xi) = 1$ whenever $|\xi| \leq R$, $h_R(\xi) = 0$ for every $|\xi| \geq R + 1/2$, and $0 \leq h_R(\xi) \leq 1$. We also assume that all the first and second order derivatives of $h_R$ are bounded by a constant that does not depend on $R$. We fix a point $(y, s) \in \mathbb{R}^N \times (0, t_1)$, and we let $v$ denote the function

$$v(\xi, \tau) := h_R(\xi)\, \Gamma(y, s; \xi, \tau).$$
By (1.1) and (1.10) we can compute the following Green identity, with u and v as above.
We now integrate the above identity on $Q_{R,\varepsilon}$ and apply the divergence theorem, noting that $v, \partial_{x_1} v, \ldots, \partial_{x_N} v$ vanish on the lateral part of the boundary of $Q_{R,\varepsilon}$, by the properties of $h_R$; this yields (4.20). Concerning the last integral, since the function $y \mapsto h_R(y)\, u(y, s)$ is continuous and compactly supported, by Theorem 4.11, (iii), it converges as $\varepsilon \to 0^+$, since $\Gamma$ is a bounded function whenever $(\xi, \varepsilon) \in \mathbb{R}^N \times (0, s/2)$, and $u(\cdot, \varepsilon)\, h_R \to 0$ either uniformly, if the initial datum is attained by continuity, or in the $L^p$ norm. Using the fact that $Lu = 0$ and $u(\cdot, 0) = 0$, we conclude that, as $|y| < R$, (4.20) gives (4.21). Since $L^* \Gamma(y, s; \xi, \tau) = 0$ whenever $\tau < s$, the identity (4.21) yields (4.22), since $\partial_{\xi_i} h_R = 0$ for $|\xi| \leq R$. We claim that (4.22) implies (4.23), for some positive constant $C_1$ only depending on the operator $L$ and on the uniform bound on the derivatives of $h_R$, provided that $t_1$ is sufficiently small. Our assertion then follows by letting $R \to +\infty$. So we are left to prove (4.23). By Proposition 3.4 we know that, for suitable constants $\delta \in (0, 1)$, $c_1, c_2 > 0$, for $0 < s - \tau \leq \delta$ and every $y, \xi \in \mathbb{R}^N$, the bound (4.24) holds. Moreover, from the computation in the proof of Theorem 4.9 we read that
We now fix $\beta = 4\alpha$ and then fix $T$ small enough that $\frac{1}{2} - c_2\, T \beta \geq \frac{1}{4}$, so that the required bound holds for $t \in (0, T)$, and we are done.

The previous uniqueness property for the Cauchy problem also implies the following replication property for the heat kernel: for every $y \in \mathbb{R}^N$ and $s < \tau < t$,

$$\Gamma(x, t; y, s) = \int_{\mathbb{R}^N} \Gamma(x, t; z, \tau)\, \Gamma(z, \tau; y, s)\, dz.$$

Indeed, the function $u(x, t) = \int_{\mathbb{R}^N} \Gamma(x, t; z, \tau)\, \Gamma(z, \tau; y, s)\, dz$ solves the Cauchy problem

$$\begin{cases} Lu(x, t) = 0 & \text{for } t > \tau \\ u(x, \tau) = \Gamma(x, \tau; y, s), \end{cases}$$

where the initial datum is attained continuously, uniformly as $t \to \tau$. Since $v(x, t) = \Gamma(x, t; y, s)$ solves the same Cauchy problem, by Theorem 4.13 the assertion follows.
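The replication (Chapman-Kolmogorov) identity can also be checked numerically, again in the simplest instance of the one-dimensional heat kernel (an illustration of ours, with grid quadrature):

```python
import numpy as np
from scipy.integrate import trapezoid

def G(x, t, y, s):
    """1-D heat kernel Gamma(x, t; y, s), defined for t > s."""
    return np.exp(-(x - y) ** 2 / (4.0 * (t - s))) / np.sqrt(4.0 * np.pi * (t - s))

x, y = 0.7, -0.4
s, tau, t = 0.0, 1.0, 2.5

z = np.linspace(-30.0, 30.0, 4001)
lhs = G(x, t, y, s)
rhs = trapezoid(G(x, t, z, tau) * G(z, tau, y, s), z)
```

The intermediate-time integral reproduces the kernel itself, up to quadrature error, because convolving two Gaussians of variances $2(t - \tau)$ and $2(\tau - s)$ gives a Gaussian of variance $2(t - s)$.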