Some $L_{p}$-estimates for elliptic and parabolic operators with measurable coefficients

We consider linear elliptic and parabolic equations with measurable coefficients and prove two types of $L_{p}$-estimates for their solutions, which were recently used in the theory of fully nonlinear elliptic and parabolic second order equations in \cite{DKL}. The first type is an estimate of the $\gamma$th norm of the second-order derivatives, where $\gamma\in(0,1)$, and the second type deals with estimates of the resolvent operators in $L_{p}$ when the first-order coefficients are summable to an appropriate power.

Let $d \ge 1$ be an integer and let $\mathbb{R}^d$ be a Euclidean space of points $x = (x^1, ..., x^d)$. Consider operators $L$ of the form
$$Lu = \partial_t u + a^{ij}(t,x) D_{ij} u + b^i(t,x) D_i u - c(t,x) u, \qquad (0.1)$$
where here and below in the article the summation convention is enforced, $a(t,x) = (a^{ij}(t,x))$ is a uniformly nondegenerate and bounded matrix-valued function, $b(t,x) = (b^i(t,x))$ is an $\mathbb{R}^d$-valued function, and $c(t,x)$ is a real-valued function, all measurable and defined on $\mathbb{R}^{d+1} = \{(t,x) : t \in \mathbb{R},\ x \in \mathbb{R}^d\}$.
In this article we discuss two types of estimates for operators like $L$, which were recently used in the theory of fully nonlinear elliptic and parabolic second-order equations in [1].
The first type (see Theorems 1.8 and 1.9) concerns the possibility of estimating the integrals of $|D^2u|^\gamma$ with some $\gamma \in (0,1)$ through the $L_p$-norm of $Lu$ and the sup norm of $u$, where $D^2u$ is the Hessian of $u$. This seemingly very weak estimate, discovered for elliptic equations by F.H. Lin, recently played a crucial role in the theory of fully nonlinear elliptic and parabolic equations with VMO "coefficients" (see [1]). In [1] we use a result stated in [8] without proof. Even though the proof is not difficult, it is still worth presenting in full detail, especially because along the way we obtain some new nontrivial information, such as Lemma 1.6 or its probabilistic counterpart Theorem 3.1. One more point worth mentioning is that, unlike F.H. Lin, who used a rather delicate reversed Hölder inequality proved on the basis of Gehring's lemma, we use a basic result of Krylov-Safonov, which provided the foundation of the theory of fully nonlinear elliptic and parabolic second-order equations. In fact, we need its version obtained in [8] just by analyzing the corresponding arguments in [6]. To obtain the above-mentioned estimate we assume that $b$ and $c$ are bounded. We give similar estimates for $|Du|$, where $Du$ is the gradient of $u$.
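To orient the reader, the estimate in question has, in the notation introduced below, the following indicative shape (this display is only our paraphrase of Theorem 1.1; the precise domains and norms are as stated there):
$$\int_{C_1} |D^2 u|^{\gamma}\,dx\,dt \le N\Big(\|Lu\|_{L_{d+1}(C_{2,1})}^{\gamma} + \sup_{C_{2,1}}|u|^{\gamma}\Big),$$
with $\gamma \in (0,1)$ and $N$ depending only on $d$, $\delta$, and $K$, so that nothing beyond measurability of the coefficients is required.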
The second type of results deals with estimates of the $L_p$-norm of $\mu u$ through the $L_p$-norm of $\mu u - Lu$ for $\mu > 0$, with a constant independent of $\mu$ if $\mu$ is large (see Theorems 4.4 and 5.3). As we have noted, these theorems are also used in [1] in particular cases when the drift coefficients are bounded. However, even in this case we could not find a direct reference to the result we needed and, therefore, our explanation in [1] contains phrases such as "by analyzing the proof...". Here we prove the corresponding result in full detail and also give its generalization to the case in which $b$ belongs to $L_q$ with an appropriate $q \le p$.
For constants $K \ge 0$ denote by $\mathbb{L}_{\delta,K}$ the set of operators $L$ of type (0.1) for which $a(t,x) = (a^{ij}(t,x))$ is $S_\delta$-valued and $b$ and $c$ are such that $|b| + c \le K$, $c \ge 0$.
Let $\mathbb{L}^0_{\delta,K}$ be the subset of $\mathbb{L}_{\delta,K}$ consisting of operators with infinitely differentiable coefficients.
For $\rho, r > 0$ introduce the balls $B_r$ and the parabolic cylinders $C_{\rho,r}$ used throughout. Our first goal in this section is to prove the following parabolic version of the main result of [9] by F.H. Lin.

Theorem 1.1. There are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that for any $L \in \mathbb{L}_{\delta,K}$ and $u \in W^{1,2}_{d+1,\mathrm{loc}}(C_{2,1}) \cap C(\bar C_{2,1})$ we have (1.1).

This theorem is stated as Corollary 4.2 in [8], but no proof is given there. We fill this gap in this article.
The following theorem is proved in [8].
For elliptic operators we have the following version of Theorem 1.2.
and assume that $u \ge 0$ on $\partial B_1$ and that there exists an operator $L \in \mathbb{L}_{\delta,K}$ with coefficients independent of $t$ such that $Lu \le 0$ in $B_1$. Then there exist constants $\gamma = \gamma(\delta,d,K) \in (0,1]$ and $N = N(\delta,d,K)$ such that (1.4) holds for any $\lambda > 0$.

Proof. First assume that $u \in C^2(\bar B_1)$ and define a function $v = v(t,x)$ by $v(t,x) = u(x)$. By the maximum principle $u \ge 0$ in $B_1$. Therefore $v$ satisfies the assumptions of Theorem 1.2, and (1.4) in this particular case follows from (1.2).
In the general case, introduce $f = -Lu$ and find a sequence of operators $L_n \in \mathbb{L}_{\delta,K}$, $n = 1, 2, ...$, with smooth coefficients converging (a.e.) to the corresponding coefficients of $L$. Also let $f_n \in C^1(\bar B_1)$, $n = 1, 2, ...$, be a sequence of nonnegative functions such that $f_n \to f$ in $L_d(B_1)$. Define $u_n \in C^2(\bar B_1)$ as the unique solutions of the equations $L_n u_n = -f_n$ with zero boundary condition, so that $u_n \le u$ on $\partial B_1$ and, owing to the Alexandrov estimate, in $\bar B_1$ as well. Now we recall that convergence almost everywhere implies convergence in distribution and conclude that (1.4) holds at each $\lambda > 0$ at which its left-hand side is continuous. Since the right-hand side of (1.4) is continuous in $\lambda$ and the left-hand side is right continuous, we have (1.4) for all $\lambda > 0$, and the theorem is proved.
As in the case of Corollary 1.3 we have the following.
Corollary 1.5. Under the conditions of Theorem 1.4, for any $\gamma' \in (0, \gamma)$ we have the corresponding estimate.

Here is a useful generalization of Corollary 1.3.
Obviously, $u_\varepsilon \in C(\bar C_1) \cap W^{1,2}_{d+1}(C_{2,1})$. We also modify the coefficients of $L$ in such a way that the new $g$ and $f$ are just shifted and dilated versions of the original $g$ and $f$, respectively, times $\varepsilon^2$. If (1.5) holds for $u_\varepsilon$, then we obtain it as is by letting $\varepsilon \uparrow 1$, by the monotone convergence theorem, owing to the continuity of $u$ in $\bar C_{2,1}$. By the way, we do not assume that the integral on the right-hand side of (1.5) is finite. Thus we may indeed concentrate on $u \in W^{1,2}_{d+1}(C_{2,1})$. Then observe that if the integral on the right-hand side of (1.5) is infinite, we have nothing to prove. Therefore, we may assume that it is finite. Then $g \in L_{d+1}(C_{2,1})$, since $g_+ \ge g \ge Lu$. It follows that $f \in L_{d+1}(C_{2,1})$ as well.
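The dilation alluded to above can plausibly be written as follows (the text does not display it, so the normalization here is our assumption): for $\varepsilon \in (0,1)$ set
$$u_\varepsilon(t,x) = u(\varepsilon^2 t, \varepsilon x), \quad a_\varepsilon(t,x) = a(\varepsilon^2 t, \varepsilon x), \quad b_\varepsilon(t,x) = \varepsilon\, b(\varepsilon^2 t, \varepsilon x), \quad c_\varepsilon(t,x) = \varepsilon^2 c(\varepsilon^2 t, \varepsilon x).$$
Then $\partial_t u_\varepsilon + a^{ij}_\varepsilon D_{ij} u_\varepsilon + b^i_\varepsilon D_i u_\varepsilon - c_\varepsilon u_\varepsilon = \varepsilon^2 (Lu)(\varepsilon^2 t, \varepsilon x)$, which explains why the new $f$ and $g$ are the shifted and dilated original ones times $\varepsilon^2$; note also that $|b_\varepsilon| + c_\varepsilon \le \varepsilon(|b| + c) \le K$ whenever $|b| + c \le K$ and $\varepsilon \le 1$.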
Now take an operator $L' \in \mathbb{L}^0_\delta$ and introduce the corresponding $f'$ and $g'$. If (1.5) were true with $f', g'$ in place of $f, g$ for any such $L'$, then by approximating $L$ by operators $L'$ we would obtain (1.5) in its original form.
Therefore, in the rest of the proof we may assume without loss of generality that $L \in \mathbb{L}^0_\delta$, and we introduce functions $v$ and $w$ as the $W^{1,2}_{d+1}(C_{2,1})$ solutions of $Lv = -f$ and $Lw = -g$, with zero condition on $\partial' C_{2,1}$ for $v$ and with condition $w = -u$ on $\partial' C_{2,1}$ for $w$. The existence and uniqueness of $v$ and $w$ is a classical result (see, for instance, [11]).
Clearly, $v = u + w$, and by the maximum principle $v \ge 0$. By Corollary 1.3, for an appropriate $\gamma$, the left-hand side of (1.5) is less than a constant times $v^\gamma(0,0) \le u_+^\gamma(0,0) + w_+^\gamma(0,0)$. After that it only remains to use the parabolic Alexandrov estimate. The lemma is proved.
For elliptic operators Lemma 1.6 becomes the following.
Lemma 1.7. There are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that the corresponding estimate holds whenever $L \in \mathbb{L}_{\delta,K}$ has coefficients independent of $t$ and $u \in W^2_{d,\mathrm{loc}}(B_1) \cap C(\bar B_1)$.

The proof is based on Corollary 1.5 and consists of repeating the proof of Lemma 1.6 with obvious changes. Of course, at the last step one applies the original Alexandrov estimate rather than its parabolic version.
Proof of Theorem 1.1. Introduce $h = Lu$, take an operator $L' \in \mathbb{L}_{\delta/2,K}$, and observe that $L'u = h + (a'^{ij} - a^{ij})D_{ij}u$ whenever the lower-order coefficients of $L'$ and $L$ coincide. According to (1.5) and the parabolic Alexandrov estimate we obtain (1.6). We now use the arbitrariness of $L'$. Obviously, there exist an $\varepsilon = \varepsilon(\delta,d) > 0$ and an operator $L' \in \mathbb{L}_{\delta/2}$ with lower-order coefficients coinciding with those of $L$ and such that $(a'^{ij} - a^{ij})D_{ij}u \ge \varepsilon |D^2u|$. With such an operator (1.6) becomes (1.1). The theorem is proved.
The reader will see that the following result is obtained by mimicking the proof of Theorem 1.1 and using Lemma 1.7 instead of Lemma 1.6.

Theorem 1.8. There are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that for any $L \in \mathbb{L}_{\delta,K}$ with coefficients independent of $t$ we have (1.7).

The next result is stronger than Theorem 1.1 and looks like the right parabolic counterpart of Theorem 1.8. It is proved in [1] on the basis of Theorem 1.1. We give it with a proof just for completeness.

Theorem 1.9. Let $u \in C(\bar C_1) \cap W^{1,2}_{d+1,\mathrm{loc}}(C_1)$. Then there are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $d$, and $K$, such that for any $L \in \mathbb{L}_{\delta,K}$ the analogous estimate holds on $C_1$.

Proof. First, as in the proof of Lemma 1.6, one reduces the general situation to the one in which $u \in W^{1,2}_{d+1}(C_1)$. Then we may also assume that the coefficients of $L$ are infinitely differentiable, and we introduce $v$ as the solution of the corresponding equation in $C_1$ with terminal and lateral conditions being $u$. The existence and uniqueness of such a solution is a classical result (see, for instance, Theorem 7.17 of [11]). By uniqueness $v = u$ in $C_1$, so that the desired estimate follows owing to Theorem 1.1. The theorem is proved.

Estimating $|Du|$
Lemma 2.1. There are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that the corresponding estimate holds whenever $L \in \mathbb{L}_{\delta,K}$ and $u \in W^{1,2}_{d+1,\mathrm{loc}}(C_{2,1}) \cap C(\bar C_{2,1})$.

Proof. It certainly suffices to concentrate on smooth $u$. In that case observe (2.2). By Lemma 1.6, with an appropriate $\gamma$, the required bound follows. After that it only remains to use Jensen's inequality and again the parabolic Alexandrov estimate. The lemma is proved.

We also have (2.2) for elliptic operators. Therefore, as above, Lemma 1.7 yields the following.

Theorem 2.2. There are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that the corresponding estimate holds whenever $L \in \mathbb{L}_{\delta,K}$ has coefficients independent of $t$ and $u \in W^2_{d,\mathrm{loc}}(B_1) \cap C(\bar B_1)$.

Here is our estimate of $Du$ in the parabolic case.
Theorem 2.3. Let $u \in C(\bar C_1) \cap W^{1,2}_{d+1,\mathrm{loc}}(C_1)$. Then there are constants $\gamma \in (0,1]$ and $N$, depending only on $\delta$, $K$, and $d$, such that for any $L \in \mathbb{L}_{\delta,K}$ the corresponding estimate holds.

This theorem is derived from Lemma 2.1 in the same way as Theorem 1.9 is derived from Theorem 1.1.

Probabilistic versions
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space endowed with an increasing filtration of $\sigma$-fields $\mathcal{F}_t \subset \mathcal{F}$, $t \ge 0$, each of which is complete with respect to $\mathcal{F}, P$. By $\mathcal{P}$ we denote the predictable $\sigma$-field. Below $K$ and $\delta$ are fixed constants.
Proof. First assume that $f$ is infinitely differentiable in $(t,x)$. Consider the corresponding Bellman equation (3.2) in $C_{2,1}$ with zero boundary data on $\partial' C_{2,1}$. By Theorem 6.4.1 of [6] this problem has a unique solution $u$, bounded and continuous in $\bar C_{2,1}$ and having bounded and continuous in $C_{2,1}$ derivatives $\partial_t u$, $Du$, and $D^2u$. Actually, to apply Theorem 6.4.1 of [6] directly we need to have the term $-u$ on the left-hand side of (3.2). However, this is easily achieved by introducing a new function $v$ such that $u = e^{-t}v$. By the maximum principle $u \ge 0$, and obviously (3.3) holds for $t < \tau$. Furthermore, it is easy to see that there exists an operator $L \in \mathbb{L}_{\delta,K}$ such that $Lu = -f$, so that by Lemma 1.6 we have (3.4), and Itô's formula yields a representation of $u(t \wedge \tau, x_{t \wedge \tau})$ in which $m_t$ is a martingale. Upon plugging in $t = 4$, observing that $4 \wedge \tau = \tau$ and $u(\tau, x_\tau) = 0$, and taking the expectations of the extreme terms, we obtain the intermediate estimate. After that, to prove (3.1) for infinitely differentiable $f$, it only remains to use (3.4). The parabolic Alexandrov estimate in probabilistic terms (see, for instance, Theorem 2 of [4] or Theorem 2.2.4 of [5]) implies the corresponding bound for any Borel $g \ge 0$, where $N = N(d, \delta, K)$. This easily allows us to extend (3.1) to the set of bounded Borel $f$ (vanishing for $t \le 1$). Finally, applying the monotone convergence theorem we get the desired result. The theorem is proved.
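The martingale representation invoked in the proof is the standard Itô-formula computation. A sketch, under the assumptions that $L$ has the form (0.1), that $x_t$ is the associated diffusion, and that $Lu = -f$ (the precise discounting is our reconstruction, not a quotation):
$$d\big(e^{-\varphi_t} u(t, x_t)\big) = e^{-\varphi_t}(Lu)(t, x_t)\,dt + dm_t = -e^{-\varphi_t} f(t, x_t)\,dt + dm_t, \qquad \varphi_t := \int_0^t c(s, x_s)\,ds,$$
so that $e^{-\varphi_{t \wedge \tau}} u(t \wedge \tau, x_{t \wedge \tau}) + \int_0^{t \wedge \tau} e^{-\varphi_s} f(s, x_s)\,ds$ is a martingale; taking expectations at $t = 4$ and using $u(\tau, x_\tau) = 0$ identifies $E\int_0^\tau e^{-\varphi_s} f(s, x_s)\,ds$ with $u(0, x_0)$.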
We have derived Theorem 3.1 from Lemma 1.6, but actually Theorem 3.1 is equivalent to Lemma 1.6. In probabilistic terms Lemma 1.7 means the following.
Theorem 3.2. There exist constants $\gamma \in (0,1)$ and $N \in (0,\infty)$, depending only on $\delta$, $K$, and $d$, such that if $f(x)$ is a nonnegative function on $B_1$ and $\tau = \inf\{t \ge 0 : |x_t| = 1\}$, then (3.5) holds.

We leave it to the reader to follow closely the proof of Theorem 3.1, using the corresponding results for elliptic equations from [6].
The probabilistic versions of Lemmas 1.6 and 1.7 allow one to give different proofs of Theorems 1.8 and 1.9. We demonstrate this only in the case of Theorem 1.8.
Proof of Theorem 1.8. First, as in the proof of Lemma 1.6, we may assume that $u \in W^2_d(B_1)$. Then we find an $\varepsilon = \varepsilon(\delta,d) > 0$ and an operator $L' \in \mathbb{L}_{\delta/2,K}$ with coefficients independent of $t$ and with the required property. One knows (see, for instance, [2] or [5]) that there always exist $(\Omega, \mathcal{F}, P)$, $\mathcal{F}_t$, and $w_t$ as at the beginning of the section, and there exists an $\mathcal{F}_t$-adapted continuous $\mathbb{R}^d$-valued process $x_t$, $t \ge 0$, on $\Omega$ solving the corresponding stochastic equation with probability one for all $t \ge 0$. From [4] (see the comments after Theorem 3 there and see Theorem 4 of [2]) or [5] we know that Itô's formula is applicable, and therefore we obtain (3.6). By using the probabilistic version of the Alexandrov estimate and the fact that $0 \le c \le K$ we conclude the corresponding bound. Now observe that the above argument is applicable with $\varepsilon = 0$ and $L' = L$, in which case we get the analogous identity, and to obtain (1.7) from (3.6) it only remains to use the probabilistic version of the Alexandrov estimate once more. The theorem is proved.
Remark 3.3. One of the consequences of Theorem 3.2 is obtained when one takes $f$ to be the indicator function of a Borel set $G \subset B_1$. Then estimate (3.5) says that $|G|^{1/\gamma}$ is less than a constant $N$ times the average time spent by $x_t$ in $G$ before exiting from $B_1$, where $|G|$ is the Lebesgue measure of $G$.
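In symbols, with $f = I_G$ the remark reads as follows (this display is our paraphrase of the specialization of (3.5), whose general form is not reproduced above):
$$|G|^{1/\gamma} \le N\, E \int_0^{\tau} I_G(x_t)\,dt.$$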
It turns out that, even in such estimates of the average time spent by $x_t$ in $G$ before exiting from $B_1$, the constant $\gamma$ can be very small when $\delta$ is small, so that there is no hope of getting (3.5) with a large $\gamma$ for arbitrary $f$.
For instance, take $d = 2$ and $b = c = 0$, let $e_1$ be the first basis vector, choose $a$ accordingly, and let $G = B_r + e_1/2$, where $r \in (0, 1/2)$. Next solve equation (3.7) in $B_{3/2} + e_1/2$ with zero boundary condition. Then the value at zero of this solution is the average time spent by $x_t$ in $G$ before exiting from $B_{3/2} + e_1/2$, and since the latter contains $B_1 \supset G$, $u(0)$ is greater than the average time spent by $x_t$ in $G$ before leaving $B_1$. It turns out that (3.8) holds, with a right-hand side equal to a constant times $|G|^{1/\gamma}$, where $\gamma$ can be made as small as we wish by taking $\delta$ small enough or $\varepsilon$ close to $1$.

One solves (3.7) in polar coordinates with pole at $e_1/2$. Then, if $\rho$ is the polar radius, our function $u(x)$ is written as $v(\rho)$, and $v$ satisfies an ordinary differential equation with boundary conditions $v'(0) = 0$ and $v(3/2) = 0$. The latter equation is easily solvable by using an appropriate integrating factor, yields a function $v$ with bounded second-order derivative, and, after noting that $u(0) = v(1/2)$, leads to (3.8).

Estimates in $L_p$ of resolvent operators. Parabolic case
For a domain $Q \subset \mathbb{R}^{d+1}$ denote by $\partial' Q$ the parabolic boundary of $Q$, that is, the set of all points $(t_0, x_0) \in \partial Q$ for each of which there exist a $\kappa > 0$ and a continuous $\mathbb{R}^d$-valued function $x(t)$ defined on $[t_0 - \kappa, t_0]$ such that $x(t_0) = x_0$ and $(t, x(t)) \in Q$ for $t \in [t_0 - \kappa, t_0)$. In case $Q = \mathbb{R}^{d+1}$ we have $\partial Q = \partial' Q = \emptyset$.
Take $p \in [d+1, \infty)$ and introduce $\hat W^{1,2}_p(Q)$ as the collection of all $u$ belonging to $W^{1,2}_p(G)$ for every bounded open subset $G$ of $Q$. Set $W^{1,2}_p = W^{1,2}_p(\mathbb{R}^{d+1})$, $L_p = L_p(\mathbb{R}^{d+1})$, and denote by $C(\bar Q)$ the set of bounded continuous functions on $\bar Q$. Next, let an $\mathbb{R}^d$-valued function $b(t,x) = (b^1(t,x), ..., b^d(t,x))$ and a real-valued function $c(t,x)$ be defined on $\mathbb{R}^{d+1}$. Fix a $\delta > 0$ and a $K \in [0,\infty)$ and impose the following.
Assumption 4.1. The function $c$ is nonnegative and bounded.
Our main goal in this section is to establish estimates of the $L_p(Q)$-norm of $u$ through that of $\mu u - Lu$ for $u \in \hat W^{1,2}_p(Q)$ vanishing on $\partial' Q$, with a constant $N$ and a function $\lambda(\mu) > 0$, $\mu > 0$, which behaves like $\mu$ as $\mu \to \infty$. The linear behavior of $\lambda(\mu)$ for large $\mu$ is, of course, the best one could expect.
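The estimate being sought presumably has the following shape (our paraphrase, based on the description above and on the abstract; the original display is not reproduced in this excerpt):
$$\lambda(\mu)\, \|u\|_{L_p(Q)} \le N\, \|\mu u - Lu\|_{L_p(Q)}.$$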
In the particular case of bounded $b$, as we will see from Corollary 4.7, one can take $\lambda(\mu) = \mu^2$ for $\mu$ close to $0$.
Remark 4.2. On the other hand, if for some $\theta, \lambda \in (0, \infty)$ we can find an appropriate constant $\mu$, then Assumption 4.1 (ii) is satisfied and one can find $\mu_\theta(\lambda)$ satisfying (4.2) for all $\theta, \lambda \in (0, \infty)$. Indeed, then $(|b| - \mu)_+ \in L_{d+1}$ for some $\mu \in [0, \infty)$. The latter means that $b = b_1 + b_2$, where $|b_1| \le \mu$ and $|b_2| = (|b| - \mu)_+ \in L_{d+1}$. To satisfy our requirement that $\mu_\theta(\lambda)$ be increasing and continuous one can, as is easy to see, modify the choice accordingly.

Remark 4.3. In particular, for any $\lambda, \nu \in (0, \infty)$ and $M := \sup |b_1|$, one can take $\nu_\theta \lambda^{1/2} + M$ as $\mu_\theta(\lambda)$ in Remark 4.2 if one chooses $\nu_\theta$ appropriately.

In the following main result of the section the case $Q = \mathbb{R}^{d+1}$ is allowed. Of course, in that case no conditions on the values of $u$ on $\partial' Q$ are necessary.
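One explicit choice of the splitting $b = b_1 + b_2$ used above (our reconstruction; the text's own display is not shown) is
$$b_1 = \frac{\min(|b|, \mu)}{|b|}\, b, \qquad b_2 = \frac{(|b| - \mu)_+}{|b|}\, b \qquad (b_1 := b,\ b_2 := 0 \text{ where } b = 0),$$
for which $|b_1| \le \mu$ everywhere and $|b_2| = (|b| - \mu)_+ \in L_{d+1}$, as required.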
Before we prove this theorem we extract a few corollaries; in all of them $N = N(d, p, \delta)$.
Indeed, take $\mu_\theta(\lambda)$ from Remark 4.3. Then for any $\mu > 0$ one can find $\lambda(\mu) > 0$ for which (4.5) holds with $\lambda = \lambda(\mu)$. After that it only remains to prove the stated bound. This is easily done after observing that $x := \mu/M^2$ and $y := \lambda(\mu)/M$ satisfy $x = (K + \nu_\theta) y^2 + y$; the rest is left to the reader.
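Since the remaining computation is left to the reader, we record the elementary step (only the quadratic formula is used; how the resulting asymptotics enter the constants of the corollaries is as described in the surrounding text): solving $x = (K + \nu_\theta) y^2 + y$ for $y \ge 0$ gives
$$y = \frac{\sqrt{1 + 4(K + \nu_\theta)x} - 1}{2(K + \nu_\theta)},$$
so that $y \sim x$ as $x \to 0$ and $y \sim \sqrt{x/(K + \nu_\theta)}$ as $x \to \infty$; in terms of the original quantities, $\lambda(\mu) \approx \mu/M$ for $\mu \ll M^2$ and $\lambda(\mu) \approx \sqrt{\mu/(K + \nu_\theta)}$ for $\mu \gg M^2$.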
Here is a particular case of Corollary 4.5 when p = d + 1.
Corollary 4.6. If $b_2 \in L_{d+2}$, then for any $\mu > 0$ and $u \in \hat W^{1,2}_{d+1}(Q) \cap C(\bar Q)$ such that $u \le 0$ on $\partial' Q$ and condition (4.3) is satisfied, we have the corresponding estimates, one for $\mu \ge (K + \nu_\theta + 1)M^2$ and one for the remaining $\mu > 0$.

In the case of bounded $b$ we have a version of Theorem 4.4 which is easier to memorize. The first part of the following result was used, for instance, in [1].
Corollary 4.7. Assume that $b$ is bounded and set $M = \sup |b|$. Then for any $\mu > 0$ and $u \in \hat W^{1,2}_{d+1}(Q) \cap C(\bar Q)$ such that $u \le 0$ on $\partial' Q$ and condition (4.3) is satisfied, the corresponding estimates hold. Indeed, in this case $\nu_\theta = 0$ and $(K+1)^{(d+1)/p} \le K + 1$, whereas the remaining factor is estimated directly.

Remark 4.8. The fact that in Corollary 4.7 we have the factors $\mu$ and $\mu^2$ in different ranges of $\mu$ may look suspicious. In Example 5.6 we give an argument partially explaining this effect.
Indeed, define $\lambda > 0$ from (4.7). This is possible since $\mu_\theta(\lambda)$ is an increasing and continuous function of $\lambda$. Then (4.5) holds. Since $\mu \le \mu'$, we have $\lambda \le \lambda'$ as well.

Remark 4.10. Unfortunately, in general, there is no control on how fast $\mu_\theta(\lambda)$ may grow to infinity as $\lambda \to \infty$. Accordingly, we do not know how slowly the solution of (4.7) may go to infinity as $\mu \to \infty$, so that we were able to prove the natural rate $\mu^{-1}$ of decay of the resolvent operator $R_\mu$ of $L$ only if $b_2 \in L_{d+2}$. Actually, the author conjectures that for some $b$ with $b_2 \in L_q$, $q \in [d+1, d+2)$, the $L_p \to L_p$ norm of $R_\mu$ may have as slow a power decay as we wish as $\mu \to \infty$. Still, from our results we have that $\lambda \to \infty$ as $\mu \to \infty$, so that the norm of $R_\mu$ as an operator in $L_p$ does go to zero as $\mu \to \infty$. We will see later that in the elliptic case this issue does not arise.
Proof of Theorem 4.4. Take an $\varepsilon > 0$ and define the corresponding domain $Q_\varepsilon$. Obviously, $Q_\varepsilon$ is a bounded domain, $u - \varepsilon = 0$ on $\partial' Q_\varepsilon$, and $u - \varepsilon \in W^{1,2}_{d+1}(Q_\varepsilon)$. If the assertions of the theorem are true when $Q$ is bounded, $u \in W^{1,2}_{d+1}(Q) \cap C(\bar Q)$, and $u \le 0$ on $\partial' Q$, then, applying them to $Q_\varepsilon$ and $u - \varepsilon$ and passing to the limit as $\varepsilon \downarrow 0$ on the basis of the monotone convergence theorem, we obtain the assertions in full generality. Therefore, we may assume that $Q$ is bounded, $u \in W^{1,2}_{d+1}(Q) \cap C(\bar Q)$, and $u \le 0$ on $\partial' Q$.
Next let $b_n = b I_{|b| \le n}$ and observe that the original $\mu_\theta(\lambda)$ satisfies (4.2) with $b_n$ in place of $b$. Hence, if the theorem is true under the additional assumption that $b$ is bounded, then we have the corresponding estimates (4.8) with $b_n$ in place of $b$. If $(b^i D_i u)_- \notin L_{d+1}(Q)$, then the right-hand side of (4.4) is infinite and we have nothing to prove. In case $(b^i D_i u)_- \in L_{d+1}(Q)$, we can pass to the limit in (4.8) and obtain (4.4). A similar situation occurs in the case of assertion (ii) of the theorem. Therefore, in the rest of the proof of the theorem we may assume that $b$ is bounded.
Using approximations we convince ourselves that we may also assume that $a^{ij}, c \in C^\infty_b$. In that case introduce $I$ as the set of $\mu > 0$ for each of which the operator $\mu - L$, as an operator from $W^{1,2}_{d+1}$ to $L_{d+1}$, is onto and invertible, and the inverse $R_\mu := (\mu - L)^{-1}$ is bounded as an operator from $L_{d+1}$ onto $W^{1,2}_{d+1}$. Obviously $I$ is an open subset of $(0, \infty)$. It is well known (see, for instance, [7]) that all large $\mu$ are in $I$. Therefore, it makes sense to introduce $\mu'$ as the smallest number such that $(\mu', \infty) \subset I$.
Also notice that if $u \in \hat W^{1,2}_{d+1}(Q) \cap C(\bar Q)$, $u \le 0$ on $\partial' Q$, and $\mu > \mu'$, then the maximum principle (see, for instance, Theorem 3.4.2 of [6]) applies in $Q$. It follows that the corresponding bound holds for any $p$, and this reduces the proof of the theorem to proving that $\mu' < K\lambda + \mu_\theta(\lambda)\lambda^{1/2}$ for any $\lambda > 0$ and that (4.9) holds as long as $f \ge 0$ and $\mu \ge K\lambda + \mu_\theta(\lambda)\lambda^{1/2}$. First we deal with $p = d+1$ and then we use the Marcinkiewicz interpolation theorem. For $\mu > \mu'$ denote by $N_\mu$ the norm of $R_\mu$ as an operator acting from $L_{d+1}$ into $L_{d+1}$, that is, the least constant $N$ such that
$$\|R_\mu g\|_{L_{d+1}} \le N \|g\|_{L_{d+1}} \qquad (4.10)$$
for all $g \in L_{d+1}$.
Observe that, owing to [4], for any nonnegative $f \in C^\infty_0$ there exists a nonnegative function $\psi_\lambda$, which is $\lambda$-convex in $x$ and decreasing in $t$, satisfying the corresponding inequality for any $\varepsilon > 0$ (see equation (29) in [4]), where the notation $v^\varepsilon$ stands for a standard mollification of $v$ with a kernel of support diameter $\varepsilon$. Furthermore, two estimates on $\psi_\lambda$ are available, of which the first is a combination of estimates (12) and (13) of [4] and the second one is obtained before Theorem 3 of [4]. The above-cited results of [4] are obtained there by using the theory of controlled diffusion processes. The more PDE-oriented reader may prefer to use Theorem 3.2.8 of [6]. We emphasize that the above constants $N$ depend only on $d$ and $\delta$.
To finish considering the case $p = d+1$ it suffices to show that $\mu' < K\lambda + \mu_\theta(\lambda)\lambda^{1/2}$. To this end suppose that $\mu' \ge K\lambda + \mu_\theta(\lambda)\lambda^{1/2}$. Then, by the above, for any $u \in W^{1,2}_{d+1}$ the corresponding inequality holds if $\mu > \mu'$ and, by continuity, if $\mu = \mu'$ as well. Furthermore, as is well known, under our additional assumptions on the coefficients of $L$ there are constants $M_i < \infty$ such that the standard a priori estimates hold for any $u \in W^{1,2}_{d+1}$. Owing to (4.19) (recall that $\lambda > 0$ is fixed), we obtain an estimate with a constant $M_3$ independent of $u$ and of $\mu \ge \mu'$. By the method of continuity applied with respect to $\mu$, this estimate implies that $\mu' \in I$, which yields the desired contradiction with the definition of $\mu'$. This proves that (4.9) holds for $p = d+1$ and any $f \in L_p$ as long as $\mu \ge K\lambda + \mu_\theta(\lambda)\lambda^{1/2}$. Next observe that the maximum principle implies that $R_\mu$ is well defined as an operator in $L_\infty$ and its norm is less than or equal to $\mu^{-1}$ for any $\mu > 0$. Then we obtain that (4.9) holds for $p \ge d+1$ and all $f \in L_p$ by the Marcinkiewicz interpolation theorem. The positivity of the operator $R_\mu$ and the monotone convergence theorem allow us to conclude that (4.9) holds for $p \ge d+1$ and all $f \ge 0$ as long as $\mu \ge K\lambda + \mu_\theta(\lambda)\lambda^{1/2}$. The theorem is proved.
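The interpolation step at the end of the proof can be sketched as follows (a standard computation, not quoted from the text): combining the bound $\|R_\mu\|_{L_{d+1} \to L_{d+1}} \le N_\mu$ with the bound $\|R_\mu\|_{L_\infty \to L_\infty} \le \mu^{-1}$, the Marcinkiewicz interpolation theorem yields, for $p \in (d+1, \infty)$,
$$\|R_\mu f\|_{L_p} \le N(d, p)\, N_\mu^{(d+1)/p}\, \mu^{-(1 - (d+1)/p)}\, \|f\|_{L_p},$$
which is consistent with the exponent $(d+1)/p$ appearing in the constants of Corollaries 4.5-4.7.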

Estimates in $L_p$ of the resolvent operators. Elliptic case
Take $p \in [d, \infty)$ and a domain $Q \subset \mathbb{R}^d$, introduce $W^2_p(Q)$ as the usual Sobolev space, and let $\hat W^2_p(Q)$ be the collection of all $u$ which belong to $W^2_p(G)$ for any bounded subdomain $G$ of $Q$. Also, let an $\mathbb{R}^d$-valued function $b(x) = (b^1(x), ..., b^d(x))$ and a real-valued function $c(x)$ be defined on $\mathbb{R}^d$. Fix a $\delta > 0$ and a $K \in [0, \infty)$ and impose the following.
Assumption 5.1. The function $c$ is nonnegative and bounded.
As in Remark 4.2, Assumption 5.1 (ii) is satisfied if and only if there exists a $\theta \in (0, \infty)$ for which one can find an appropriate $\mu_\theta$, and in this case one can find an appropriate $\mu_\theta$ for any $\theta \in (0, \infty)$.
In the following theorem the case $Q = \mathbb{R}^d$ is allowed. Of course, in that case no conditions on the values of $u$ on $\partial Q$ are necessary. In its statement $N = N(d, \delta, p)$.
Observe that if $\lambda = 1$ then, owing to [3], for any nonnegative $f \in C^\infty_0$ there exists a nonnegative function $\psi_\lambda$, which is $\lambda$-convex in $x$ and satisfies
$$L_0 \psi^\varepsilon_\lambda - \lambda\, (\mathrm{tr}\, a)\, \psi^\varepsilon_\lambda \le -f^\varepsilon, \qquad (5.4)$$
where the notation $v^\varepsilon$ stands for a standard mollification of $v$ with a kernel of support diameter $\varepsilon$ (see the proof of Lemma 1 of [3]). Furthermore (see equation (22) in [3] and the end of the proof of Theorem 2 of [3]),
$$\sup \psi_\lambda \le N \lambda^{-1/2} \|f\|_{L_d}, \qquad (5.5)$$
where $N = N(d, \delta)$. Actually, dilations show that one can take any $\lambda > 0$. These results are obtained in [3] by probabilistic methods. In terms of PDEs, the existence of $\psi_\lambda$ with the properties described above can be found in Theorem 3.2.3 of [6].
We also know that $|D\psi^\varepsilon_\lambda| \le \psi^\varepsilon_\lambda \sqrt{\lambda}$. Therefore (5.4) implies (5.6). By the maximum principle $\psi_\lambda \ge R_\mu f^\varepsilon - R_\mu\big((|b|\sqrt{\lambda} + \lambda\, \mathrm{tr}\, a - \mu)\psi^\varepsilon_\lambda\big)$, and (5.5) and (5.6) yield (5.8). By using the Alexandrov estimate and Fatou's lemma we can pass to the limit in (5.8) as $\varepsilon \downarrow 0$, and then we obtain (5.9). We have proved (5.9) for nonnegative $f \in C^\infty_0$. The Alexandrov estimate and Fatou's lemma allow us to carry this estimate over to arbitrary nonnegative $f \in L_d$. As in the proof of Theorem 4.4, the constant $N_\mu$ is also the smallest constant for which (5.3) holds for all nonnegative $g$. Therefore, (5.9) now implies the corresponding estimate of $N_\mu$. For $\mu \ge K\lambda + \mu_\theta \lambda^{1/2}$ the factor of $N_\mu$ in (4.18) is dominated by