Exit Laws of Isotropic Diffusions in Random Environment from Large Domains

This paper studies, in dimensions greater than two, stationary diffusion processes in random environment which are small, isotropic perturbations of Brownian motion satisfying a finite-range dependence. Such processes were first considered in the continuous setting by Sznitman and Zeitouni [21]. Building upon their work, it is shown by analyzing the associated elliptic boundary-value problem that, almost surely, the smoothed (in the sense that the boundary data is continuous) exit law of the diffusion from large domains converges, as the domain's scale approaches infinity, to that of a Brownian motion. Furthermore, a rate for the convergence is established in terms of the modulus of the boundary condition.


Introduction
The purpose of this paper is to characterize, in dimension d ≥ 3, the smoothed exit distributions from large domains associated to the diffusion in random environment determined by the generator (1.3). The continuity of the boundary data f in this setting corresponds to a necessary smoothing of the exit distribution since, as described below, the presence of traps (which, loosely speaking, are portions of space where the drift has a strong effect) along the boundary precludes, in the case of discontinuous boundary data, an almost sure characterization of the exit measures defining the solutions v in the limit.
where ε²τ^ε is by definition the exit time from U of the rescaled process εX_{·/ε²}. The behavior of this rescaled process on R^d was characterized in [21], where it was proven that, on a subset of full probability and for a deterministic α > 0,

εX_{·/ε²} converges, as ε → 0, in law on R^d to a Brownian motion with variance α. (1.6)

See Section 2 for the precise statement and details. The primary aim of this paper is to establish the analogous result for the exit distribution, and this is achieved by characterizing, on a subset of full probability, the asymptotic behavior as ε → 0 of the solution to (1.5). See Theorem 7.4 for the precise statement.

Theorem 1.1. Assume d ≥ 3. There exists a subset of full probability such that, for every bounded domain U ⊂ R^d satisfying an exterior ball condition, the solution of (1.5) converges uniformly on U, as ε → 0, to the solution of

∆u = 0 in U, u = f on ∂U. (1.7)

The proof relies strongly upon the results obtained in [21], and in particular upon a comparison obtained there, with scaling analogous to (1.2), between solutions of the parabolic equation (1.8) and the parabolic analogue of (1.7), which holds with high probability and on large scales in space and time. The details of this argument are presented in Section 3.
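As a purely numerical illustration of the limiting boundary-value problem (1.7) (this sketch is not part of the paper's argument; the grid size and iteration count are arbitrary choices), the harmonic extension of continuous boundary data can be computed on a square by Jacobi iteration. A boundary datum which is itself harmonic, such as f(x, y) = x − y, must be reproduced throughout the domain.

```python
import numpy as np

def harmonic_extension(f, n=41, iters=20000):
    """Solve Delta u = 0 on the unit square with u = f on the boundary,
    via Jacobi iteration for the five-point finite-difference Laplacian."""
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    u = f(X, Y)
    u[1:-1, 1:-1] = 0.0  # keep only the boundary values; zero the interior
    for _ in range(iters):
        # replace each interior value by the average of its four neighbors
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

f = lambda X, Y: X - Y          # a harmonic boundary datum
u = harmonic_extension(f)
xs = np.linspace(0.0, 1.0, 41)
X, Y = np.meshgrid(xs, xs, indexing="ij")
err = float(np.abs(u - (X - Y)).max())
```

Since linear functions are discretely harmonic for the five-point stencil, the iteration recovers f throughout the square up to the solver tolerance; the exit-law results above say that, almost surely, the random-environment solutions of (1.5) converge to precisely this kind of deterministic harmonic extension.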
The comparison is used, as it was in [21], to construct a coupling between the process defined by the generator (1.1) and a Brownian motion with variance approximately α along a discrete sequence of time steps. See Proposition 5.1 in Section 5. This coupling allows for the introduction of a discrete version of (1.2): the process is evaluated along the aforementioned discrete sequence of time steps and stopped as soon as it hits a neighborhood of the complement of the domain. However, the approximation suggested here is typically insufficient to characterize the limiting behavior of solutions to (1.2) and its rescaling (1.5), since the time steps are not sufficiently fine to preclude the emergence of traps created by the drift which, in this setting, are twofold. Considering the process associated to the generator (1.4), the presence of the drift, which is singular in ε^{-1}, can first act to confine the particle and create an exit time which grows exponentially in ε^{-1} in expectation. The probability that the exit time is large is first controlled, though sub-optimally, by Proposition 4.1 in Section 4.
Second, the drift can repel the process from the boundary, and thereby make impossible the existence, in general, of barriers which are effective at scales greater than ε. This difficulty is overcome by combining, in Section 7, the coupling obtained in Proposition 5.1 with estimates concerning the exit time of Brownian motion from the slightly inflated domains U^{δ_ε}, where δ_ε → 0 as ε → 0. These estimates are proven in Propositions 6.2 and 6.3 of Section 6. Then, at points near the boundary ∂U, the exit of the Brownian motion from a somewhat larger domain of the type U^{δ_ε} is shown to compel, with high probability, the exit of the diffusion in random environment from U. See Proposition 7.2 of Section 7. It is this fact that establishes the efficacy of the discrete approximation and ultimately the proof of Theorem 1.1.
Finally, in Section 8, the convergence established in Theorem 1.1 is made quantitative, assuming first that the boundary data f is the restriction of a bounded, uniformly continuous function on R^d. Namely, assume

f ∈ BUC(R^d) with modulus denoted σ_f. (1.9)

The result is the following, and its precise statement appears as Theorem 8.1.

EJP 22 (2017), paper 63.
Theorem 1.2. Assume d ≥ 3 and (1.9). There exist constants 0 < c_0, c_1 < 1 and C > 0 such that, on a subset of full probability, for all ε > 0 sufficiently small depending on ω, the solutions of (1.5) and (1.7) satisfy

‖u^ε − u‖_{L^∞(U)} ≤ Cε^{c_0} + Cσ_f(ε^{c_1}).

A standard extension argument then allows Theorem 1.2 to be extended to arbitrary continuous functions on the boundary, provided the domain is smooth. In this case, assume that

the domain U is smooth, (1.10)

and assume

f ∈ C(∂U) with modulus σ_f. (1.11)

Observe that, since U is bounded, a continuous function on the boundary is necessarily uniformly continuous. The result for smooth domains is the following, and its precise statement appears as Corollary 8.2.

Theorem 1.3. Assume d ≥ 3, (1.10) and (1.11). There exist constants 0 < c_0, c_1 < 1, C_1 = C_1(U) > 0 depending upon the domain and C > 0 such that, on a subset of full probability, for all ε > 0 sufficiently small depending on ω, the solutions of (1.5) and (1.7) satisfy

‖u^ε − u‖_{L^∞(U)} ≤ Cε^{c_0} + Cσ_f(C_1 ε^{c_1}).
Diffusion processes on R d in the stationary ergodic setting were first considered in the case b = 0 by Papanicolaou and Varadhan [18]. Furthermore, in the case that (1.2) can be rewritten in divergence form, these diffusions and associated boundary value problems were studied in Papanicolaou and Varadhan [17], and further results have been obtained by De Masi, Ferrari, Goldstein and Wick [6], Kozlov [12], Olla [15] and Osada [16]. However, for general drifts b which are neither divergence free nor a gradient of a stationary field, considerably less is known.
Indeed, the results of [21], which apply to the isotropic, perturbative regime described above, and which were later extended by the author [8, 9], are the only such results available. To this point, the characterization of the asymptotic behavior of boundary value problems like (1.5) in the continuous setting has remained open. However, some results do exist for the analogous discrete framework. Bolthausen and Zeitouni [4] characterized the exit distributions from large balls (so, taking U = B_1) of random walks in random environment which are small, isotropic perturbations of a simple random walk, and their work was later refined by Baur and Bolthausen [3] under a somewhat less stringent isotropy assumption. Finally, Baur [2] has recently obtained results concerning the exit time from large balls of processes satisfying a quenched symmetry assumption along a single coordinate direction.
The methods of this paper differ significantly from those of [2,3,4], which develop an induction scheme to propagate estimates concerning the convergence of the exit law of the diffusion in random environment to the Brownian measure on the boundary of the ball, by instead adapting the results and philosophy of [21] from the parabolic setting. Furthermore, the methods apply to arbitrary bounded domains satisfying an exterior ball condition.
The paper is organized so that, in Section 2, the notation and assumptions are presented and, in Section 3, the most relevant aspects of [21] are reviewed and the primary probabilistic statement concerning the random environment is presented. In Section 4, the exit time of the process in random environment is controlled in probability, and the global coupling between the process in random environment and Brownian motion is constructed in Section 5. The exit time of Brownian motion at points near the boundary of the inflated domains U^δ is controlled in Section 6, and the efficacy of the discrete approximation, as defined through the coupling, and ultimately the proof of Theorem 1.1 are presented in Section 7. Finally, the rates of convergence in Theorems 1.2 and 1.3 appear in Section 8.

Notation
Elements of R^d are denoted by x and y, elements of [0, ∞) are denoted by t, and (x, y) denotes the standard inner product on R^d. The gradient in space and derivative in time of a scalar function v are written Dv and v_t, while D²v stands for the Hessian of v. The spaces of k × l matrices and k × k symmetric matrices with real entries are respectively written M^{k×l} and S(k). If M ∈ M^{k×l}, then M^t is its transpose and |M| is its norm |M| := tr(MM^t)^{1/2}. If M is a square matrix, the trace of M is written tr(M). The Euclidean distance between subsets A, B ⊂ R^d is written d(A, B) and, for a subset A ⊂ R^d, an index set 𝒜 and a family of measurable functions {f_α}_{α ∈ 𝒜} on R^d × Ω, the sigma-algebra generated by the random variables f_α(x, ·), for x ∈ A and α ∈ 𝒜, is written σ(f_α(x, ·) | x ∈ A, α ∈ 𝒜). The space C^∞_c(R^d) is the collection of smooth, compactly supported functions on R^d. The closure and boundary of U ⊂ R^d are written Ū and ∂U. For f : R^d → R, the support of f is denoted Supp(f). Furthermore, B_R and B_R(x) are respectively the open balls of radius R centered at zero and at x ∈ R^d. For a real number r ∈ R, the notation [r] denotes the largest integer less than or equal to r. Finally, throughout the paper, C represents a constant which may change from line to line but is independent of ω ∈ Ω unless otherwise indicated.

The random environment
The random environment is indexed by a probability space (Ω, F, P) equipped with a group {τ_y}_{y ∈ R^d} of measure-preserving transformations. Every element ω ∈ Ω corresponds to a realization of the environment described by the coefficients A(·, ω) and b(·, ω) on R^d in dimension at least three:

assume d ≥ 3.

The coefficients are stationary: A : R^d × Ω → S(d) and b : R^d × Ω → R^d are bi-measurable functions satisfying, for each x, y ∈ R^d and ω ∈ Ω,

A(x + y, ω) = A(x, τ_y ω) and b(x + y, ω) = b(x, τ_y ω). (2.3)

The diffusion matrix and drift are bounded and Lipschitz uniformly for ω ∈ Ω: there exists C > 0 such that, for all x ∈ R^d and ω ∈ Ω,

|b(x, ω)| ≤ C and |A(x, ω)| ≤ C, (2.4)

and, for all x, y ∈ R^d and ω ∈ Ω,

|A(x, ω) − A(y, ω)| + |b(x, ω) − b(y, ω)| ≤ C|x − y|. (2.5)

In addition, the diffusion matrix is uniformly elliptic uniformly in Ω: there exists ν > 1 such that, for all x ∈ R^d and ω ∈ Ω,

(1/ν)|ξ|² ≤ (A(x, ω)ξ, ξ) ≤ ν|ξ|² for every ξ ∈ R^d. (2.6)

The coefficients satisfy a finite-range dependence: there exists R > 0 such that, whenever A, B ⊂ R^d satisfy d(A, B) ≥ R, the sigma-algebras σ(A(x, ·), b(x, ·) | x ∈ A) and σ(A(x, ·), b(x, ·) | x ∈ B) are independent. (2.7)

The diffusion matrix and drift satisfy a restricted isotropy condition: for every orthogonal transformation r : R^d → R^d which preserves the coordinate axes, for every x ∈ R^d,

(A(rx, ·), b(rx, ·)) and (rA(x, ·)r^t, rb(x, ·)) have the same law. (2.8)

Additionally, it will later be necessary to assume that the diffusion is a small perturbation of Brownian motion in the sense that, for a small η_0 > 0, |A(x, ω) − I| < η_0 and |b(x, ω)| < η_0. See assumption (3.1) below and the detailed discussion in Section 3.
The final assumptions concern the domain. The domain U ⊂ R d is open and bounded.
(2.9) Furthermore, U satisfies an exterior ball condition: there exists r_0 > 0 so that, for each x ∈ ∂U, there exists x* ∈ R^d such that

B_{r_0}(x*) ⊂ R^d \ U and x ∈ ∂B_{r_0}(x*). (2.10)

For each ω ∈ Ω and x ∈ R^d, assumptions (2.4), (2.5) and (2.6) guarantee the existence of a unique solution to the martingale problem associated to the generator (1.1) and beginning from x; see Stroock and Varadhan [20, Chapters 6, 7]. The law of the solution to this martingale problem and the expectation on the space of continuous paths
C([0, ∞); R^d) will be written P_{x,ω} and E_{x,ω}. Almost surely with respect to P_{x,ω}, a path X_· ∈ C([0, ∞); R^d) satisfies the stochastic differential equation

dX_t = b(X_t, ω) dt + σ(X_t, ω) dB_t, X_0 = x,

for A(x, ω) = σ(x, ω)σ(x, ω)^t and for B_· a standard Brownian motion under P_{x,ω} with respect to the canonical right-continuous filtration on the space C([0, ∞); R^d).
The translational and rotational invariance in law implied by (2.3) and (2.8) do not imply any invariance properties, in general, for the quenched laws P_{x,ω}. However, the law of the process with respect to the annealed measure P_x := P ⋉ P_{x,ω} is translationally and rotationally invariant. In particular, with respect to the annealed expectation E_x, for every x, y ∈ R^d and every bounded continuous function F on C([0, ∞); R^d),

E_x(F(X_·)) = E_{x+y}(F(X_· − y)), (2.13)

and, for all orthogonal transformations r preserving the coordinate axes and for every x ∈ R^d,

E_x(F(X_·)) = E_{rx}(F(r^{-1}X_·)). (2.14)

This fact plays an important role in [21] to preclude, with probability one, the emergence of ballistic behavior of the rescaled process in the asymptotic limit.
Similarly, for each n ≥ 0 and x ∈ R^d, let W^n_x denote the Wiener measure on C([0, ∞); R^d) corresponding to Brownian motion with variance α_n beginning from x. The corresponding expectation will be denoted E^{W^n}_x. Almost surely with respect to W^n_x, a path X_· ∈ C([0, ∞); R^d) satisfies the stochastic differential equation

dX_t = α_n^{1/2} dB_t, X_0 = x,

for B_· a standard Brownian motion.

A remark on existence and uniqueness
The boundedness (2.4), Lipschitz continuity (2.5) and ellipticity (2.6) of the coefficients, together with the boundedness (2.9) and regularity (2.10) of the domain, guarantee the well-posedness, for every ω ∈ Ω, of equations like

(1/2) tr(A(x, ω)D²u) + b(x, ω) · Du = g in U, u = f on ∂U,

for f ∈ C(∂U) and g ∈ C(Ū), in the class of bounded continuous functions. See, for instance, Friedman [10, Chapter 3]. Furthermore, if τ denotes the exit time from U, then, for continuous data f satisfying, for instance, and to the extent that it will be applied in this paper, |f(x)| ≤ C(1 + |x|), the solution admits the probabilistic representation

u(x) = E_{x,ω}( f(X_τ) − ∫_0^τ g(X_s) ds );

see [14, Exercise 9.12]. Analogous formulas hold for the constant-coefficient elliptic and parabolic equations associated, for each n ≥ 0, to the measures W^n_x. Since these facts are well-known, and since the solution to every equation encountered in this paper admits an explicit probabilistic description, the presentation will not further reiterate these points.
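To make the probabilistic representation concrete, here is a minimal Monte Carlo sketch under strong simplifying assumptions (A = I, b = 0 and g = 0, so the quenched process is a standard Brownian motion, with U = B_1 ⊂ R^3 and boundary data f(x) = x_1; the step size, sample count and starting point are arbitrary illustrative choices). Since f is the restriction of a harmonic function, the exact solution is u(x) = x_1, which the estimate E_x(f(X_τ)) should recover.

```python
import numpy as np

rng = np.random.default_rng(0)
d, dt, n_paths = 3, 1e-3, 4000
x0 = np.array([0.5, 0.0, 0.0])

x = np.tile(x0, (n_paths, 1))            # all paths start from x0
alive = np.ones(n_paths, dtype=bool)
exit_pts = np.zeros((n_paths, d))
for _ in range(20000):                   # time horizon far beyond typical exit times
    if not alive.any():
        break
    # Euler step of the driftless SDE dX_t = dB_t
    x[alive] += np.sqrt(dt) * rng.standard_normal((int(alive.sum()), d))
    r = np.linalg.norm(x, axis=1)
    hit = alive & (r >= 1.0)
    exit_pts[hit] = x[hit] / r[hit, None]  # project the small overshoot to the sphere
    alive &= ~hit

estimate = float(exit_pts[:, 0].mean())    # Monte Carlo value of E_x0( f(X_tau) )
```

Up to the discretization and sampling error, the estimate agrees with the harmonic extension u(x0) = 0.5, illustrating the representation formula in the simplest constant-coefficient case.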

The inductive framework and probabilistic statement
In this section, the aspects of [21] most relevant to this work will be introduced. The interested reader will find a full description of the inductive framework in [21], which was later reviewed in the introductions of [8,9]. Forgive, therefore, the terse explanation offered here.
First, it is necessary to assume that the diffusion represents a small perturbation of Brownian motion. That is, for η_0 > 0 to be fixed small in (3.18) below, assume that, for each x ∈ R^d and ω ∈ Ω,

|A(x, ω) − I| < η_0 and |b(x, ω)| < η_0, (3.1)

where I denotes the d × d identity matrix. This assumption guarantees that, up to a finite time, the process is almost-surely well-approximated by a Brownian motion in the sense of Controls 3.1 and 3.2 below. Fix a Hölder exponent β ∈ (0, 1/2) and a constant a ∈ (0, β/(1000d)), and define inductively the scales

ℓ_n := 5[L_n^a / 5] and L_{n+1} := ℓ_n L_n, (3.3)

so that, for L_0 sufficiently large, it follows that (1/2)L_n^{1+a} ≤ L_{n+1} ≤ 2L_n^{1+a}. For each n ≥ 0, for c_0 > 0, let

κ_n := exp(c_0 (log log L_n)²) and κ̄_n := exp(2c_0 (log log L_n)²), (3.4)

where, as n tends to infinity, notice that κ_n is eventually dominated by every positive power of L_n. Furthermore, define, for each n ≥ 0,

D_n := L_n κ_n and D̄_n := L_n κ̄_n, (3.5)

where the preceding remark indicates that the scales D_n and D̄_n are larger than, but grow comparably with, the previously defined scales L_n.
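The multiplicative structure of these scales can be checked numerically. The following sketch is an illustration only: the values L_0 = 10^4 and a = 0.2 are hypothetical (a is taken far larger than the smallness condition a < β/(1000d) permits, purely to keep the numbers readable), and the integer multiplier ℓ_n = 5[L_n^a/5] is the assumed form of the recursion. The sandwich bound (1/2)L_n^{1+a} ≤ L_{n+1} ≤ 2L_n^{1+a} is then verified along the sequence.

```python
import math

# Hypothetical parameters: the paper requires a in (0, beta/(1000 d)), which is
# far smaller; a = 0.2 and L0 = 10**4 are used only to make the recursion visible.
a = 0.2
L = 10**4
scales = [L]
for _ in range(5):
    ell = 5 * math.floor(L**a / 5)   # assumed form of the integer multiplier ell_n
    L_next = ell * L
    # each scale is comparable to the (1 + a) power of the previous one
    assert 0.5 * L**(1 + a) <= L_next <= 2 * L**(1 + a)
    L = L_next
    scales.append(L)
```

The point of the rounded multiplier is that L_{n+1} is an exact integer multiple of L_n (and of 5L_n), so the discrete grids at consecutive scales nest, while the growth remains comparable to L_n^{1+a}.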
The following constants enter into the probabilistic statements below. Fix m_0 ≥ 2 and constants δ > 0 and M_0 > 0 subject to the conditions (3.2) and (3.7). In the arguments to follow, it will be essential that these assumptions guarantee that δ and M_0 are sufficiently large with respect to a.
In order to apply the finite-range dependence, it will frequently be necessary to introduce a stopped version of the process. Define, for every element X_· ∈ C([0, ∞); R^d), the maximal excursion

X*_t := sup_{0 ≤ s ≤ t} |X_s − X_0|, (3.8)

and, for each n ≥ 0, the stopping time at which the path first travels a distance D̄_n from its starting point. The effective diffusivity of the ensemble at scale L_n is defined in terms of the correspondingly stopped process, where the localization is applied in order to exploit the diffusion's mixing properties.
The convergence of the α_n to a limiting diffusivity α is proven in [21, Proposition 5.7]. The results of [21] obtain an effective comparison on the parabolic scale (L_n, L²_n) in space and time, with improving probability as n → ∞, between the solutions of (3.9) and solutions to the approximate limiting equation (3.10). In order to simplify the notation, define, for each n ≥ 0, the operators appearing in (3.11) and the difference operator S_n of (3.12). Since solutions of (3.9) will not, in general, be effectively comparable with solutions of (3.10) globally in space, it is necessary to introduce a localization. For each v > 0, define

χ(y) := 1 ∧ (2 − |y|)_+ and χ_v(y) := χ(y/v) for y ∈ R^d, (3.13)

and define, for each x ∈ R^d and n ≥ 0, the localized cutoff χ_{n,x} of (3.14). Furthermore, in order to account for the scaling of the initial data which appears in (1.2), the comparison of the solutions is necessarily obtained with respect to the rescaled global Hölder norms |·|_n, defined for each n ≥ 0 in (3.15). See, for instance, the introductions of [9, 21] for a more complete discussion concerning the necessity of these norms as opposed, say, to attempting an (in general, false) L^∞-comparison.
The following control is the statement propagated by the arguments of [21], and expresses the desired comparison between solutions of (3.9) and (3.10), as written using the operator S_n from (3.12) and localized by χ_{n,x} from (3.14), in terms of the |·|_n-norm from (3.15) of the initial data.
Note carefully that this statement is not true, in general, for all triples x ∈ R d , ω ∈ Ω and n ≥ 0. However, as described below, it is shown in [21, Proposition 5.1] that such controls are available for large n, with high probability, on a large portion of space.
It will also be necessary to obtain tail-estimates for the diffusion in random environment. The type of control propagated in [21] involves exponential estimates for the probability under P x,ω that the maximal excursion X * L 2 n defined in (3.8) is large with respect to the time elapsed.
As with Control 3.1, it is simply not true in general that this type of estimate is satisfied for all triples (x, ω, n). However, it is shown in [21, Proposition 2.2] that such controls are available for large n, with high probability, on a large portion of space.
It is necessary to obtain a lower bound in probability for the event, defined for each n ≥ 0 and x ∈ R^d,

B_n(x) = { ω ∈ Ω | Controls 3.1 and 3.2 hold for the triple (x, ω, n) }. (3.16)

Notice that, in view of (2.3), for all x ∈ R^d and n ≥ 0,

P(B_n(x)) = P(B_n(0)), (3.17)

and observe that B_n(0) does not include the control of traps described in [21, Proposition 3.3], which plays an important role in propagating Control 3.1 but which the arguments of this paper have no further need of.
The following theorem proves that the probability of the complement of B_n(0) approaches zero as n tends to infinity; see [21, Theorem 1.1].
Henceforth, in addition to assumption (2.11), the constant η_0 > 0 quantifying the perturbation (3.1) and the constants L_0 and c_0 defining the induction scheme will be fixed to satisfy the hypotheses of Theorems 3.1 and 3.2 appearing above.
Fix constants L_0, c_0 and η_0 satisfying the hypotheses of Theorems 3.1 and 3.2. (3.18)

The events which, following an application of the Borel-Cantelli lemma, come to define the event on which Theorem 1.1 is obtained are chosen to ensure that Controls 3.1 and 3.2 are satisfied for a sufficiently small scale as compared with ε^{-1}. Fix the smallest integer m > 0 satisfying the inequality (3.19); that is, m is the smallest integer for which, for all n ≥ 0 sufficiently large, it follows that L_{n+1}L_{n−m} < L²_{n−1}. The idea will be to use Theorem 3.2 in order to obtain Controls 3.1 and 3.2 at scale L_{n−m} on the entirety of the rescaled domain U/ε whenever L_n ≤ ε^{-1} < L_{n+1}. Since, for all n ≥ 0 sufficiently large, it follows from the boundedness of U and (3.3) that, whenever L_n ≤ ε^{-1} < L_{n+1}, the rescaled domain U/ε is contained in [−L²_{n+2}, L²_{n+2}]^d, this motivates the definition (3.20) of the events A_n. The following proposition proves that, as n → ∞, the probability of the events A_n rapidly approaches one, since the exponent 2d(1 + a)² − M_0/2 is negative owing to (3.2) and (3.7).

Proposition 3.3. Assume (2.11), (3.1) and (3.18). For each n ≥ m, for C > 0 independent of n,

P(Ω \ A_n) ≤ CL_n^{2d(1+a)² − M_0/2}.

Proof. Fix n ≥ m. Theorem 3.2 and (3.17) imply, using (3.3) and a union bound over the relevant scales and spatial locations, that, for C > 0 independent of n,

P(Ω \ A_n) ≤ CL_n^{2d(1+a)²}L_{n−m}^{−M_0}.

Therefore, since the definition of m implies that, for all such n, L_{n−m} ≥ L_n^{1/2}, it follows that P(Ω \ A_n) ≤ CL_n^{2d(1+a)² − M_0/2}, which completes the argument.

A quenched upper bound for the exit time in probability
The purpose of this section is to obtain an upper bound in probability for the exit time from the rescaled domain U/ε of the process associated to the generator (1.1). The reason for obtaining such an estimate will be seen in Section 5, where the process in random environment is coupled with high probability to a deterministic Brownian motion. Since this coupling cannot be expected to hold globally in time, it is necessary to ensure with high probability that the exit from U/ε occurs before the estimates deteriorate.
It will be shown that, as a consequence of the Hölder estimate stated in Control 3.1, whenever the environment and scale satisfy ω ∈ A_n and L_n ≤ ε^{-1} < L_{n+1} then, as n → ∞, the exit from the rescaled domain U/ε occurs before time L²_{n+2} with overwhelming probability. Define, for each ε > 0, the exit time

τ^ε := inf{ t ≥ 0 | X_t ∉ U/ε }, so that ε²τ^ε = inf{ t ≥ 0 | εX_{t/ε²} ∉ U },

where the final equality is particularly prescient in view of (1.6) and the scaling associated to the generator

(1/2) tr(A(x/ε, ω)D²) + (1/ε) b(x/ε, ω) · D.

In terms of this rescaled generator, the following proposition proves that, for environments ω ∈ A_n and scales L_n ≤ ε^{-1} < L_{n+1}, as n → ∞, paths εX_{·/ε²} exit U with overwhelming probability prior to time ε²L²_{n+2}.
Proposition 4.1. Assume (2.11), (3.1) and (3.18). For all n sufficiently large, for every ω ∈ A_n and for all ε > 0 satisfying L_n ≤ ε^{-1} < L_{n+1}, there exists C > 0 independent of n such that, for every x ∈ U/ε, the probability P_{x,ω}(τ^ε > L²_{n+2}) is bounded above by C times a fixed negative power of L_n.

Proof. Using the boundedness of the domain in (2.9), choose R ≥ 1 satisfying U ⊂ B_R, and choose n_1 ≥ 0 such that, for every n ≥ n_1, the condition (4.3) is satisfied. Henceforth, fix n ≥ n_1, ω ∈ A_n and ε > 0 satisfying L_n ≤ ε^{-1} < L_{n+1}. Define the smooth cutoff function χ_{B_R} satisfying 0 ≤ χ_{B_R} ≤ 1 with the properties stated in (4.4). Then, consider the solution v of (4.5). The function v will be compared via Control 3.1 with the solution of the corresponding equation with constant coefficients. The conditions ω ∈ A_n and (4.3) guarantee that, for every x ∈ U/ε, the conclusion of Control 3.1 is satisfied and, therefore, using (3.2), (3.3) and (4.4), for C > 0 independent of n, the two solutions differ on U/ε by at most C times a negative power of L_n. To conclude, the size of the comparison solution at (x, L²_{n+2}), which measures the likelihood that a Brownian motion with variance α_{n+2} beginning from x resides outside B_{R+1} at time L²_{n+2}, is estimated using Theorem 3.1 and the Green's function.

The global coupling
The comparison implied by Control 3.1 on scale (L_n, L²_n), between the vector-valued solutions of the parabolic equation (5.1) and the approximate homogenized equation (5.2), asserts that, after using the localization estimate implied by Control 3.2 and the choice of constants (3.3), (3.4) and (3.5) to localize and bound the initial data with respect to the |·|_n-norm, the corresponding expectations are close, for C > 0 independent of n, where W^n_x is the Wiener measure on C([0, ∞); R^d) corresponding to Brownian motion with variance α_n beginning from x.
Formally, then, provided (what will be discrete) copies of the diffusion in random environment X̃_· and Brownian motion B̃_· are chosen with the help of the Kantorovich-Rubinstein theorem and are defined with respect to the same measure on an auxiliary probability space (Ω̃, F̃, P̃), a Chebyshev inequality should yield the coupling estimate (5.4). The purpose of this section will be to formalize and iterate this intuition along a discrete sequence of time steps.
Solutions of (5.1) with initial data f admit a representation in terms of the Green's function p_ω(x, y, t), which is the density of the diffusion beginning from x in environment ω at time t. See [10, Chapter 1] for a detailed discussion of the existence and regularity of these densities, which follow from assumptions (2.4), (2.5) and (2.6). The formula for the solution is then

u(x, t) = ∫_{R^d} p_ω(x, y, t) f(y) dy.

Similarly, solutions of (5.2) with initial data f admit the analogous representation in terms of the heat kernel of variance α_n. To simplify the notation in what follows, for each n ≥ 0, define the kernel

p_{n,ω}(x, y) := p_ω(x, y, L²_n),

and the analogous heat kernel p_n(x, y), the density at time L²_n of a Brownian motion with variance α_n beginning from x. The following proposition constructs a Markov process such that the transition probabilities of the first coordinate X_· are determined by p_{n,ω} and those of the second coordinate X̃_· by p_n. Furthermore, the difference |X_· − X̃_·| satisfies a version of (5.4) with respect to the underlying measure. The construction follows closely the proof of [21, Proposition 3.1], and is included for the convenience of the reader and due to the mildly different formulation adapted to the arguments in this paper. The proof relies upon the Kantorovich-Rubinstein Theorem, see Dudley [7, Theorem 11.8.2], applied to the metrics d_n and to measures ν and ν̃ on R^d assigning finite mass to the metric d_n, in the sense that the integrals appearing in (5.5) are finite. The function D_n(·, ·) is sometimes referred to as the Kantorovich-Rubinstein or Wasserstein metric.
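The mechanism by which the Kantorovich-Rubinstein distance controls a coupling can be illustrated in one dimension (a self-contained numerical sketch, not the construction of Proposition 5.1; the two Gaussian laws and the threshold γ = 1 are arbitrary choices). For probability measures on R, the monotone (quantile) coupling driven by a single random variable attains the W_1 distance, and Markov's inequality then bounds the probability that the coupled pair separates by more than γ, which is exactly the intuition behind (5.4). Here ν = N(0, 1) and ν̃ = N(0, 1.5²), so the monotone coupling is (Z, 1.5Z) and W_1 = 0.5·E|Z| = 0.5·sqrt(2/π).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
z = rng.standard_normal(n)

# Monotone (quantile) coupling of nu = N(0,1) and nu~ = N(0, 1.5^2):
# both coordinates are driven by the same Gaussian variable.
x = z              # X  ~ N(0, 1)
x_t = 1.5 * z      # X~ ~ N(0, 1.5^2), comonotone with X

diff = np.abs(x - x_t)                 # |X - X~| = 0.5 |Z|
w1 = float(diff.mean())                # estimates W_1 = 0.5 E|Z| = 0.5 sqrt(2/pi)
gamma = 1.0
sep_prob = float((diff > gamma).mean())
markov_bound = w1 / gamma              # P(|X - X~| > gamma) <= W_1 / gamma
```

The separation probability is far below the Markov bound here, but the inequality is what survives in general: once the Kantorovich-Rubinstein theorem supplies a coupling realizing the W_1 distance, closeness of the kernels in W_1 converts directly into a high-probability closeness of the coupled trajectories.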
The choice of constants in the following proposition will be applied to spatial scales of order L²_{n+2}.

Proposition 5.1. Assume (2.11), (3.1) and (3.18). For every n ≥ m, ω ∈ Ω and x ∈ R^d, there exists a probability measure Q_{n,x} = (Q_{n,x,ω}) on the canonical sigma-algebra of the space (R^d × R^d)^N such that, under Q_{n,x}, the coordinate processes X_· and X̃_· respectively have the law of a Markov chain on R^d, starting from x, with transition kernels p_{n−m,ω}(·, ·) and p_{n−m}(·, ·). Furthermore, Q_{n,x} is such that, whenever ω ∈ A_n and x ∈ [−L²_{n+2}, L²_{n+2}]^d, the coupling estimate (5.7) is satisfied.

Proof. Fix n ≥ m and ω ∈ Ω. Let M_1(R^d × R^d) denote the set of probability measures on R^d × R^d with the topology of weak convergence. Exponential estimates imply, for each x ∈ R^d, that the integrals in (5.5) corresponding to the kernels ν_x = p_{n−m,ω}(x, ·) and ν̃_x = p_{n−m}(x, ·) are finite; see [10, Chapter 1, Theorem 12]. The Kantorovich-Rubinstein theorem, see (5.6), therefore implies that, for each x ∈ R^d, the subset K_x ⊂ M_1(R^d × R^d) of couplings of ν_x and ν̃_x achieving the infimum in (5.6) is non-empty and compact. The compactness follows because, for each x ∈ R^d, the collection K_x is tight owing to the exponential decay of the Green's functions.
where the first equality follows from the weak convergence, the second equality follows from (5.8) and the final equality follows from the triangle inequality applied to the metric D_{n−m}(·, ·). The transition distribution p̃ of the Markov chain beginning at (x, y) is then defined using (5.8). For each x ∈ R^d, the measure Q_{n,x} is defined as the law of the Markov chain (X_·, X̃_·) with transition kernel p̃_{·,·} and initial point (x, x).
Notice that, if A ⊂ R^d is a Borel subset and k ≥ 0, then, using (5.8), (5.9) and (5.11), for each x ∈ R^d and (y, ỹ) ∈ R^d × R^d, the marginal transition probabilities of the two coordinates coincide with p_{n−m,ω} and p_{n−m} respectively, where the final computation uses the translation invariance and symmetry of the heat kernel. This completes the proof of existence. It remains to show (5.7).
Let (y, ỹ) ∈ R^d × R^d be arbitrary. The triangle inequality and the definition of d_{n−m} imply, writing E_{Q_{n,x}} for the expectation with respect to Q_{n,x}, a bound on the separation of the coordinates in which, using (5.8), (5.10), (5.11) and the strong Markov property, the constituent expectations can be estimated. Therefore, (5.14) follows. The second term is bounded using Control 3.2, since ω ∈ A_n and y ∈ [−L²_{n+2}, L²_{n+2}]^d.
it follows from the definition of L_n in (3.3) that the claimed bound holds for C > 0 independent of n. The following corollary then follows immediately by taking γ = L_{n−m} in Proposition 5.1. Observe that (3.2) and (3.7) imply that the exponent 16a − δ is negative.

Estimates for the exit time of Brownian motion
In this section, estimates are obtained, in expectation and near the boundary of the domain, for the exit time of a Brownian motion. The role of the exterior ball condition enters in the proof of these estimates. Namely, there exists a (now fixed) r_0 > 0 such that, for every x ∈ ∂U, there exists x* ∈ R^d satisfying

B_{r_0}(x*) ⊂ R^d \ U and x ∈ ∂B_{r_0}(x*). (6.1)

Furthermore, define, for each δ > 0, the inflated domain

U^δ := { x ∈ R^d | dist(x, U) < δ }, (6.2)

and notice, as a consequence of (6.1), that, for every 0 < δ < r_0, the domain U^δ satisfies the exterior ball condition with radius (r_0 − δ). (6.3)

Essentially, it will be necessary to understand, in expectation, the exit time of Brownian motion from the sets U^δ and U, as δ → 0, at points within distance δ of the boundary.
The first step is to consider the exit time of Brownian motion from the annular domains centered at the origin and defined, for each pair of radii 0 < r 1 < r 2 < ∞, by A r1,r2 := B r2 \ B r1 .
For each pair (r_1, r_2), let τ_{r_1,r_2} denote the exit time from A_{r_1,r_2}, and recall that, in expectation and with respect to the Wiener measure W^n_x defining Brownian motion with variance α_n beginning from x, the function u^n_{r_1,r_2}(x) := E^{W^n}_x(τ_{r_1,r_2}), for x ∈ A_{r_1,r_2}, satisfies the equation

1 + (α_n/2)∆u^n_{r_1,r_2} = 0 in A_{r_1,r_2}, with u^n_{r_1,r_2} = 0 on ∂A_{r_1,r_2}.

The following proposition obtains an upper bound for these solutions which is most effective in a neighborhood of ∂B_{r_1}. The estimate necessarily depends upon the pair (r_1, r_2), which in the application to follow will be fixed independently of n ≥ 0.
Finally, the uniform control of the {α_n}_{n=0}^∞ provided by Theorem 3.1 implies that the constant in this bound can be chosen independently of n, which completes the argument.
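For d ≥ 3 the annular problem above admits an explicit radial solution, which clarifies why the bound is most effective near ∂B_{r_1}: the general radial solution of 1 + (α/2)Δu = 0 is u(r) = −r²/(dα) + A + Br^{2−d}, since r^{2−d} is harmonic, and the constants A, B are fixed by the zero boundary conditions at r_1 and r_2. The following symbolic sketch (an illustration in d = 3; the computation is identical in any d ≥ 3) solves for A and B and verifies the radial ODE u'' + ((d−1)/r)u' = −2/α together with the boundary conditions.

```python
import sympy as sp

r, r1, r2, alpha = sp.symbols("r r1 r2 alpha", positive=True)
d = 3  # dimension three for concreteness

# General radial solution of (alpha/2) Laplacian(u) = -1:
# u(r) = -r^2/(d alpha) + A + B r^(2-d), since r^(2-d) is harmonic.
A, B = sp.symbols("A B")
u = -r**2 / (d * alpha) + A + B * r**(2 - d)

# Impose the zero boundary conditions u(r1) = u(r2) = 0 and solve for A, B.
sol = sp.solve([u.subs(r, r1), u.subs(r, r2)], [A, B], dict=True)[0]
u_exit = sp.simplify(u.subs(sol))

# Verify the radial form of the equation, u'' + (d-1)/r u' + 2/alpha = 0,
# and the boundary conditions.
ode_residual = sp.simplify(sp.diff(u_exit, r, 2)
                           + (d - 1) / r * sp.diff(u_exit, r) + 2 / alpha)
bc1 = sp.simplify(u_exit.subs(r, r1))
bc2 = sp.simplify(u_exit.subs(r, r2))
```

Near r = r_1 the solution vanishes linearly in (r − r_1), which is the behavior exploited when the estimate is applied at points close to the inner boundary of the annulus.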
Passing from the annular regions A_{r_1,r_2} to the domain U and its inflations U^δ, for δ > 0 small, is now straightforward. Define, for each x ∈ R^d and pair of radii (r_1, r_2), the translated annulus A_{r_1,r_2}(x) := B_{r_2}(x) \ B_{r_1}(x) and, for each δ > 0, the exit times τ and τ^δ from U and U^δ respectively. The following corollary controls the expectation of τ and τ^δ in an approximately δ-neighborhood of the respective boundaries. Recall the radius r_0 in (6.1) quantifying the exterior ball condition.
Corollary 6.2. Assume (2.11), (3.1) and (3.18). For every 0 < δ < r_0/2 and every n ≥ 0, for C > 0 independent of n and δ, the expectations of τ and τ^δ are controlled at points within distance δ of the respective boundaries.

Proof. For each 0 < δ < r_0/2, it follows from (6.3) that U^δ satisfies the exterior ball condition with radius r_0 − δ. Fix r_2 > r_0/2 such that, whenever x ∈ ∂U^δ and x* ∈ R^d satisfy the exterior ball condition (6.1), the domain U^δ is contained in B_{r_2}(x*). The existence of an r_2 chosen uniformly for 0 < δ < r_0/2 is guaranteed by the boundedness of U assumed in (2.9). Since the stopping time τ^δ ≥ τ almost-surely with respect to W^n_x, the first statement is subsumed by the second, which will be shown henceforth.

The discrete approximation and proof of Theorem 1.1
The purpose of this section is to complete the almost-sure characterization, as ε → 0, of the solution of (7.1). The strategy will be, for scales satisfying L_n ≤ ε^{-1} < L_{n+1}, to approximate the continuous process X_· by the discrete process constructed in Proposition 5.1, corresponding to time steps of order L²_{n−m}. The choice of m in (3.19) guarantees, in view of the definitions of L_n in (3.3) and D̄_n in (3.5), that there exists ζ > 0 such that, for C > 0 independent of n,

L_{n+1}D̄_{n−m} ≤ CL_{n−1}^{2−ζ}. (7.4)
Therefore, moving forward, fix ζ > 0 satisfying (7.4). The constant ζ will appear in the rate of homogenization, as shown in Theorem 8.1 and Corollary 8.2. It is therefore worthwhile to observe that the choice of ζ in (7.4) is not optimal, because the choice of m was made so as to optimize the probabilistic statement appearing in Proposition 3.3. Since the probabilistic estimates necessarily deteriorate as m → ∞, in order to optimize the choice of ζ one would choose the maximal m for which the set (7.16) defined below has full measure. However, since the arguments of this paper and those appearing in [21] are at many points sub-optimal for the rate, this additional computation is omitted.
Introduce, for each ε > 0 and n ≥ m, the discrete stopping times

τ^{ε,n}_1 := inf{ kL²_{n−m} | k ≥ 0 and dist(X_{kL²_{n−m}}, (U/ε)^c) ≤ D̄_{n−m} },

which quantify the first time in the discrete sequence {kL²_{n−m}}_{k≥0} at which the path X_· resides in the D̄_{n−m}-neighborhood of the complement of U/ε. It is not true that τ^{ε,n}_1 ≤ τ^ε for every path X_·; however, for scales L_n ≤ ε^{-1} < L_{n+1}, the failure of this inequality will be controlled in probability by the exponential estimate appearing in Control 3.2.
This estimate, in conjunction with the exponential controls established by Control 3.2, effectively provides a barrier for equation (7.1) near the boundary ∂U of a quality that, for general such equations, is impossible to obtain, and it therefore shows that the discretely stopped process X_{τ^{ε,n}_1} is an effective approximation of the stopped process X_{τ^ε}.

Proof. Fix n_1 ≥ m such that, for every n ≥ n_1, for r_0 the constant quantifying the exterior ball condition in (6.1),

2D̄_{n−m} ≤ r_0 L_n / 2. (7.7)

Henceforth, fix n ≥ n_1, ε > 0 satisfying L_n ≤ ε^{-1} < L_{n+1} and x ∈ R^d satisfying dist(x, (U/ε)^c) ≤ 2D̄_{n−m}. Recall that τ^{ε,δ} denotes the exit time from the δ-neighborhood of U/ε and, after choosing δ = 2D̄_{n−m}, Corollary 6.3 and (7.7) imply that, for C > 0 independent of n, the expectation of τ^{ε,δ} is at most CD̄_{n−m}L_{n+1}. Therefore, for ζ > 0 defined in (7.4), for C > 0 independent of n, this expectation is at most CL^{2−ζ}_{n−1}. Then, by Chebyshev's inequality, for C > 0 independent of n, the probability that τ^{ε,δ} exceeds L²_{n−1} is at most CL^{−ζ}_{n−1}. In order to conclude, the exit of the Brownian motion from the inflated domain is compared with the exit of the discrete process along the sequence {kL²_{n−m}}_{k≥0}, using the translational invariance of the heat kernel and the Markov property, and owing to exponential tail estimates for Brownian motion on scale κ_{n−m}. And, since the choice of constants (3.3), (3.4) and (3.5) guarantees the existence of C > 0 independent of n satisfying exp(−cκ²_{n−m}) ≤ CL^{−ζ}_{n−1}, and since, for n ≥ 0 sufficiently large, L²_{n−m} < (1/2)L²_{n−1}, in combination (7.8) and (7.9) assert the desired bound which, since x satisfying dist(x, (U/ε)^c) ≤ 2D̄_{n−m}, ε satisfying L_n ≤ ε^{-1} < L_{n+1} and n ≥ n_1 were arbitrary, completes the argument.
The following proposition relies upon the random subsets {A_n}_{n=m}^∞ defined in (3.20).
Recall that, for each n ≥ m, the set A_n guarantees that, for every ω ∈ A_n, every x ∈ [−L^2_{n+2}, L^2_{n+2}]^d and every scale between L_{n−m} and L_{n+2}, the Hölder estimates from Control 3.1 and the localization estimates from Control 3.2 are satisfied. The remaining arguments will require no further use of Control 3.1, since the coupling obtained in Corollary 5.2 already encodes its purpose, but the localization estimate from Control 3.2 will be used.
The following proposition establishes, on the event A_n, the desired comparison between the continuous exit time τ^ε and the discrete stopping time τ^{ε,n}_1 with respect to P_{x,ω}, for large n and on scales satisfying L_n ≤ 1/ε < L_{n+1}. Proposition 7.2. Assume (2.11), (3.1) and (3.18). For each n ≥ m sufficiently large, for every ε > 0 satisfying L_n ≤ 1/ε < L_{n+1} and for every ω ∈ A_n, for C > 0 independent of n, Proof. Fix n_1 ≥ 0 as in Proposition 7.1 such that, for each n ≥ n_1, Furthermore, fix n_2 ≥ 0 such that, whenever n ≥ n_2, which guarantees, whenever n ≥ n_2 and L_n ≤ 1/ε < L_{n+2}, the containment U/ε ⊂ [−(1/2)L^2_{n+2}, (1/2)L^2_{n+2}]^d and therefore, for every x ∈ U/ε, the conclusion of Corollary 5.2. Henceforth, fix n ≥ max(n_1, n_2, m), ε > 0 satisfying L_n ≤ 1/ε < L_{n+1}, ω ∈ A_n and x ∈ U/ε. Recall the measure Q_{n,x} defining the Markov chain (X_·, X̄_·) on (R^d × R^d)^ℕ, which effectively acts in its respective coordinates as a discrete version of the process in random environment and of Brownian motion with variance α_{n−m} along the sequence {kL^2_{n−m}}_{k≥0} in time. Let C_n denote the event and recall, owing to Corollary 5.2, for C > 0 independent of n, Q_{n,x}(C_n) ≤ C κ_{n−m} L^{16a−δ}_{n−m}. It follows by definition that τ^ε ≤ τ̃^ε. Furthermore, the definition of Q_{n,x} and the Markov property imply that (7.12) where the first term on the right-hand side is bounded by (7.10) and, in the final term, the stopping times are defined by which is merely the analogue of τ^{ε,n}_1 defined for the first coordinate of (X_·, X̄_·), and which is the analogue of τ̃^ε defined for the first coordinate of (X_·, X̄_·).
The event on the right-hand side of (7.12) is decomposed one step further as (7.14) It remains to bound the second term of (7.13). Define the discrete stopping time T^{ε,n} which is the analogue of τ^{ε,n}_2 defined for the second coordinate of the process (X_·, X̄_·). On the event C^c_n, for every k ≥ 0, whenever dist(X̄_k, (U/ε)^c) ≥ D̃_{n−m}, it follows that dist(X_k, (U/ε)^c) ≥ D̃_{n−m} − L_{n−m} > 0.
Therefore, on the event (C^c_n, T^{ε,n}), the Markov property, the definition of Q_{n,x} and Proposition 7.1 imply, for C > 0 independent of n, Therefore, owing to the choice of ζ > 0 in (7.4) and since τ^ε ≤ τ̃^ε by definition (7.11), the string of inequalities (7.12), (7.13), (7.14) and (7.15) implies, for C > 0 independent of n, which, since x ∈ U/ε, n sufficiently large, ε satisfying L_n ≤ 1/ε < L_{n+1} and ω ∈ A_n were arbitrary, completes the argument.
The subsets A_n now define the event on which the conclusion of Theorem 1.1 is obtained. Recall Proposition 3.3, which states that, for each n ≥ m, for C > 0 independent of n, and notice that the definition of L_n in (3.3) and the negative exponent 2d(1 + a)^2 guarantee that these probabilities are summable in n. Therefore, using the Borel–Cantelli lemma, let Ω_0 ⊂ Ω denote the subset of full probability Ω_0 = { ω ∈ Ω | there exists n̄ = n̄(ω) such that ω ∈ A_n for all n ≥ n̄ }. (7.16) Observe here that the set Ω_0 is independent of U and the boundary data.
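A minimal sketch of the Borel–Cantelli step, assuming the summability furnished by Proposition 3.3 and the definition of L_n in (3.3):

```latex
\sum_{n \ge m} \mathbb{P}\big( \Omega \setminus A_n \big) < \infty
\quad \Longrightarrow \quad
\mathbb{P}\Big( \limsup_{n \to \infty} \big( \Omega \setminus A_n \big) \Big) = 0,
```

so that almost every ω lies in A_n for all but finitely many n, which is precisely membership in the set Ω_0 of (7.16).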
Before proceeding with the proof, it is convenient to recall some notation. For each n ≥ 0, let u_n denote the solution of (α_n/2)Δu_n = 0 in U, u_n = f on ∂U, (7.17) and let u denote the solution of Δu = 0 in U, u = f on ∂U.
The following fact is immediate by uniqueness and Theorem 3.1, and states simply that the exit distribution from U of a Brownian motion is independent of its (non-vanishing) variance, which corresponds to a time-change.
For each n ≥ 0, u_n = u on U. (7.19) Similarly, for each ε > 0 and ω ∈ Ω, let u^ε denote the solution of L^ε_ω u^ε = 0 in U, u^ε = f on ∂U. The following theorem proves that, on the event Ω_0, as ε → 0 the solutions u^ε converge uniformly to u on U whenever the boundary data is the restriction of a smooth function defined on the whole space. Namely, assume f ∈ C^∞_c(R^d). (7.21) This restriction will be removed by a standard approximation argument in Theorem 7.4. Proof. Fix ω ∈ Ω_0 and n_1 ≥ m such that ω ∈ A_n for every n ≥ n_1 and such that, whenever n ≥ n_1, and the conditions of Propositions 7.1 and 7.2 are satisfied. Furthermore, choose ε_0 > 0 such that, whenever 0 < ε < ε_0, it follows that L_n ≤ 1/ε < L_{n+1} implies n ≥ n_1.
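The identity (7.19) can be sketched via the time-change it refers to: if W is a standard Brownian motion, then t ↦ W_{α_n t} is a Brownian motion of variance α_n tracing the same paths up to reparametrization, so both exit U at the same point; at the level of the equations,

```latex
\frac{\alpha_n}{2}\,\Delta u_n = 0 \;\;\text{in } U
\quad \Longleftrightarrow \quad
\Delta u_n = 0 \;\;\text{in } U \qquad (\alpha_n > 0),
```

and uniqueness for the Dirichlet problem then forces u_n = u on U.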
The proof will rely upon the previously encountered continuous and discrete stopping times, defined for each ε > 0 and n ≥ 0 by and will use the representation for the expectation E_{x,ω} associated to the diffusion beginning from x in environment ω. The discrete approximation. Fix 0 < ε < ε_0, the n ≥ 0 satisfying L_n ≤ 1/ε < L_{n+1} and x ∈ U. First, decompose the representation in terms of the discrete approximation by (7.22) It will be shown that the first term of (7.22) is negligible.
Decompose the expectation of the difference as The event {τ^{ε,n}_1 > τ^ε + L^2_{n−m}} implies by definition that the process beginning from X_{τ^ε} travels further than D̃_{n−m} in time L^2_{n−m}. Therefore, the Markov property, ω ∈ A_n, the choice of n_1 and the exponential estimates provided by Control 3.2 imply that In view of Proposition 7.2 and the choice of 0 < ε < ε_0, the second term of (7.25) is bounded, for C > 0 independent of n, by The first term of (7.25) is separated into the events The Markov property and, since ω ∈ A_n and 1/L_{n+1} < ε ≤ 1/L_n, it follows from Control 3.2 that, for C > 0 independent of n, And the identical argument at scale L_{n−m} implies that the second term of (7.27) satisfies, for C > 0 independent of n, Therefore, inequalities (7.28) and (7.29) bound (7.27) and show, for C > 0 independent of n, Then, combining this inequality with (7.26) to bound (7.25), for C > 0 independent of n, since there exists C > 0 such that exp(−κ_{n−m}) ≤ C L^{−ζ}_{n−1} for every n ≥ m, And, using this inequality with (7.24), the expectation of the difference (7.23) can be estimated in the form, for C > 0 independent of n, again using the fact that there exists C > 0 independent of n such that exp(−κ_{n−m}) ≤ C L^{−ζ}_{n−1} for all n ≥ m, Therefore, in view of the decomposition (7.22) and the estimate (7.31), for C > 0 independent of n, This estimate proves the efficacy of the discrete approximation defined by the stopping time τ^{ε,n}_1. It will now be shown that the discretely stopped process is a good approximation of Brownian motion via the coupling estimate obtained in Corollary 5.2.
The coupling. Recall the measure Q_{n,x} defining the discrete Markov chain (X_·, X̄_·) on (R^d × R^d)^ℕ, which effectively acts in the respective coordinates as discrete versions of the process in random environment and of Brownian motion with variance α_{n−m} along the sequence {kL^2_{n−m}}_{k≥0} in time. Let C_n denote the event which, owing to Corollary 5.2 and ω ∈ A_n with n ≥ n_1, satisfies, for C > 0 independent of n, Q_{n,x}(C_n) ≤ C κ_{n−m} L^{16a−δ}_{n−m}.
(7.34) Furthermore, define as before the discrete stopping time which is the analogue of τ^{ε,n}_1 in the first coordinate.
The definition of Q_{n,x} and the Markov property imply that, writing E^{Q_{n,x}} for the expectation with respect to Q_{n,x}, (7.35) As before, it will be shown that the expectation of the difference is negligible. Decompose it in terms of the event C_n to obtain The first term of (7.36) is bounded using (7.34), which implies, for C > 0 independent of n, The second term of (7.36) is further decomposed in the form observing here that T^{ε,n}_1 = k corresponds to the process at time kL^2_{n−m}. The first term of (7.38) is bounded using the definition of the event (C^c_n, T^{ε,n}_1 < (L_{n+2}/L_{n−m})^2) and ε ≤ 1/L_n, which imply, for C > 0 independent of n, The second term of (7.38) is bounded using the control for the exit time obtained in Proposition 4.1, and in particular line (4.7), which applies equally to the discrete sequence since L^2_{n−m} divides L^2_{n+2}, to yield, for C > 0 independent of n and ζ > 0 defined in (7.4), Therefore, inequalities (7.39) and (7.40) bound (7.38), for C > 0 independent of n, by and, together with the choice of ζ > 0 in (7.4) and (7.37), the expectation of the difference in (7.36) can be estimated in the form, for C > 0 independent of n, And therefore, using (7.35), for C > 0 independent of n, the resulting bound is of order L_{n−m}/L_n.
(7.41) It remains to recover the exit distribution of Brownian motion from the second term in the difference. Recovering the exit distribution of Brownian motion. The arguments here are essentially the unwinding, in terms of Brownian motion, of what led from (7.22) to (7.32). Define the discrete stopping time T^{ε,n} which is the analogue of τ^{ε,n}_2 defined in (7.6) for the second coordinate. After performing decompositions analogous to (7.36) and (7.38), it follows by an identical argument that, for C > 0 independent of n, As before, the expectation of the difference is shown to be negligible. The first term of (7.44) is written Then, since on the event (C^c_n, T^{ε,n}) it follows from (7.43), the Markov property, the definition of Q_{n,x} and Proposition 7.1 that the first term of (7.45) is bounded, for C > 0 independent of n, by The second term of (7.45) is then further decomposed according to the event and its complement. The first term is then bounded using exponential estimates for Brownian motion on scale D̃_{n−1}, and the second term is bounded using the differentiability of f and the fact that ε ≤ 1/L_n. Together, these yield the estimate, for C, c > 0 independent of n, Therefore, combining (7.45), (7.46) and (7.47), and using the fact that there exists C > 0 independent of n ≥ 1 such that exp(−cκ^2_{n−1}) ≤ C L^{−ζ}_{n−1}, equation (7.44) yields the estimate, for C > 0 independent of n, And, after repeating exactly the argument leading to (7.42), the analogous estimate holds, for C > 0 independent of n, for the expectation E^{W_{n−m}}; and, in exact analogy with the bound obtained in (7.47), the first term of (7.52) is bounded by The second term of (7.52) is handled similarly to (7.27) but in the reverse order. Here, since C L_{n−m}/L_n + C ‖f‖_{L^∞(R^d)} L^{−ζ}_{n−1} → 0 as n → ∞, and because x ∈ U, ω ∈ Ω_0 and 0 < ε < ε_0 were arbitrary, this completes the proof.
The final theorem of this section extends Theorem 7.3 to boundary data f ∈ C(∂U ).

The quantitative estimate
In this final section, a rate for the convergence appearing in Theorem 7.4 is first established for boundary data which is the restriction of a bounded, uniformly continuous function on R^d. That is, assume f ∈ BUC(R^d) with modulus σ_f. (8.1) The rate of homogenization now follows. Proof. Fix ω ∈ Ω_0. The only observation is that, in every step of the proof of Theorem 7.3 involving the continuity of f, the Lipschitz estimates can be replaced by estimates using the modulus σ_f. Therefore, since ω ∈ Ω_0 and in view of the final estimate (7.57), whenever ε > 0 is sufficiently small and n ≥ 0 satisfies L_n ≤ 1/ε < L_{n+1}, for C > 0 independent of n and ω, Then, it follows from the definition of the constants (3.3), (3.4) and (3.5) that, for all n ≥ 0 sufficiently large and whenever L_n ≤ 1/ε < L_{n+1}, which, since ω ∈ Ω_0 was arbitrary, completes the argument.
The following final corollary extends Theorem 8.1 to general continuous boundary data provided the domain U is smooth. Notice that, in the case U = B_r, which allows for an explicit radial extension, or whenever the domain U is smooth, see the Product Neighborhood Theorem in Milnor [13, Page 46], every continuous function f ∈ C(∂U), which is necessarily uniformly continuous by compactness, admits a continuous extension f̃ ∈ BUC(R^d). Therefore, for sufficiently smooth domains, assumption (8.1) is always satisfied.
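One way to realize the extension described in the preceding remark, sketched here under the stated smoothness of U: with π the nearest-point projection onto ∂U, well defined on the tubular neighborhood {dist(x, ∂U) < r} provided by the Product Neighborhood Theorem, and η a cutoff function (π, η and r are introduced only for this illustration),

```latex
\tilde{f}(x) \;=\; \eta\big( \operatorname{dist}(x, \partial U) \big)\, f\big( \pi(x) \big),
\qquad \eta \in C\big([0,\infty); [0,1]\big), \;\; \eta(0) = 1, \;\; \eta \equiv 0 \text{ on } [r, \infty),
```

which defines a bounded, uniformly continuous function on R^d restricting to f on ∂U.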
Therefore, assume f ∈ C(∂U ) and that the domain U is smooth. The proof of Corollary 8.2 is an immediate consequence of Theorem 8.1 and the preceding remark.