IDLA on the Supercritical Percolation Cluster

We consider the internal diffusion limited aggregation (IDLA) process on the infinite cluster of supercritical Bernoulli bond percolation on Euclidean lattices. We show that the process on the cluster behaves as it does on the Euclidean lattice, in that the aggregate covers all the vertices in a Euclidean ball around the origin, where the ratio of the number of vertices in this ball to the total number of particles sent out approaches one almost surely.

1. Introduction

1.1. Background and discussion. Given a graph, IDLA defines a random aggregation process, starting from a single vertex and growing by one vertex in each time step. To begin the process, we designate a vertex v to be the initial aggregate on the graph. In each time step, we send out one random walk from v. Once this walk exits the aggregate, it stops, and the vertex where it stopped is added to the aggregate. Let I(n) be the aggregate at step n; thus I(1) = {v} and |I(n)| = n.
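As an illustration only (not part of the paper), the process just described is easy to simulate; the following is a minimal sketch on the full lattice Z^d, with all names our own:

```python
import random

def idla(n, d=2, seed=0):
    """Grow an IDLA aggregate of n sites on Z^d, starting from the origin.

    Each new particle performs a simple random walk from the origin and
    settles at the first site it visits outside the current aggregate."""
    rng = random.Random(seed)
    origin = (0,) * d
    # the 2d unit steps of the lattice
    steps = []
    for i in range(d):
        for s in (1, -1):
            e = [0] * d
            e[i] = s
            steps.append(tuple(e))
    aggregate = {origin}          # I(1) = {origin}
    for _ in range(n - 1):        # n - 1 further particles give |I(n)| = n
        x = origin
        while x in aggregate:     # walk until the particle exits the aggregate
            dx = rng.choice(steps)
            x = tuple(a + b for a, b in zip(x, dx))
        aggregate.add(x)
    return aggregate
```

For moderate n the aggregate is already visibly round, in line with the [LBG92] limiting shape on the full lattice.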
This process is a special case of a model introduced by Diaconis and Fulton in [DF91]. In the setting where the graph is the d-dimensional lattice, Lawler, Bramson and Griffeath [LBG92] used Green function estimates to prove that the process has a Euclidean ball as its limiting shape. Let B_r be the set of vertices in Z^d of Euclidean distance less than r from the origin. The main theorem in [LBG92] implies that for any ε > 0, I(|B_R|) ⊃ B_(1−ε)R for all sufficiently large R with probability one. Seeking to generalize this result to other graphs, we note that convergence of a random walk on the lattice to isotropic Brownian motion plays an important role in convergence of IDLA on the lattice to an isotropic limiting shape. However, this property by itself is in general not enough for IDLA to have such an inner bound, as the following example shows. Consider the three-dimensional Euclidean lattice, and choose a vertex v at distance R from the origin. Let r = R^0.9 and let B_r(v) be the set of vertices at distance less than r from v. Remove all edges on the boundary of B_r(v) but one, and denote by v′ the only vertex in B_r(v) with a neighbor outside of B_r(v). Let us look at I(|B_2R|). A rough calculation gives that the average number of visits to v′ is of order R^−1 |B_2R|. Since at least |B_r(v)| ≈ R^2.7 visits to v′ are needed to fill B_r(v), we do not expect the ball to be full after an order of R^3 particles have been sent out. Repeating this edge removal procedure for balls of radius R_n^0.9 at distance R_n from the origin, where R_n = 2^n R, ensures that there is never a Euclidean inner bound. However, a random walk from the origin on this graph still converges to Brownian motion, because our disruptions are sublinear. We will not give full proofs of these facts, but hope they convince the reader that some kind of local regularity property is needed to get an inner bound.
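The rough calculation can be spelled out as follows (our own sketch, using that the Green function of the walk on Z^3 decays like the inverse of the distance, so each of the |B_2R| particles visits v′ on average about R^−1 times):

```latex
\mathbb{E}\left[\#\,\text{visits to } v'\right]
  \;\approx\; |B_{2R}|\cdot G(0,v')
  \;\asymp\; R^{3}\cdot R^{-1}
  \;=\; R^{2}
  \;\ll\; R^{2.7}
  \;\asymp\; |B_{r}(v')|,
```

so after |B_2R| ≍ R^3 particles, far too few have entered B_r(v) through its single boundary vertex v′ to fill it.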
The main theorem of this paper states that IDLA on the supercritical cluster in the lattice has a Euclidean ball as an inner bound. The two main tools used to show this are a quenched invariance principle [BB07], which gives us convergence in distribution to Brownian motion from a fixed point, and a Harnack inequality from [Bar04], which gives us oscillation bounds on harmonic functions in all small balls close to the origin. The latter allows us to establish the local regularity missing from the above example.
1.2. Assumptions and statement. Consider supercritical bond percolation in Z^d with the origin conditioned to be in the infinite cluster. Let the graph Γ(V, E) be the natural embedding in R^d of the infinite cluster, i.e. V ⊂ R^d. Fixing this embedding for Γ, we get two separate (but comparable, see [AP96]) distances. We denote by |x − y| or d(x, y) the Euclidean distance between points x, y, and by d_Γ(x, y) the graph distance between them. If one of the points is not in V, d_Γ(x, y) = ∞. Let B_r(x) = {v ∈ V : d(x, v) < r} be the vertices contained in a Euclidean ball of radius r and center x. We abbreviate B_r(0) as B_r. To differentiate between such a set of vertices and a full ball in R^d, we denote by bold lowercase b_r(x) = {y ∈ R^d : d(x, y) < r}. Let d_r(x) be a box of side r with center x, and, as above, let D_r(x) be the set of vertices in this box.
By existing work, which we will reference later, we know that with probability one, the graph of the supercritical cluster, Γ, in its natural R^d embedding, satisfies the following assumptions:
(1) A random walk on Γ converges weakly in distribution to a Brownian motion, as defined in 1.4.
(2) Convergence to a vertex density a_G > 0, as defined in 1.5.
(3) A uniform upper bound on the exit time from a ball, as defined in 1.6.
(4) A Harnack inequality, as defined in 1.7.
We denote by X_t a discrete "blind" random walk on Γ, defined as follows. For x ∈ Γ and t ∈ N ∪ {0}, P(X_{t+1} = y | X_t = x) = 1/2d if y is a neighbor of x in Γ, and P(X_{t+1} = x | X_t = x) = 1 − deg(x)/2d. We prefer this walk since its Green functions are symmetric, a fact which will be useful later on. For non-integer t we set X_t = X_⌊t⌋.
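Concretely, the one-step distribution of the blind walk can be sketched as follows (an illustration of ours, not code from the paper):

```python
def blind_step_distribution(x, neighbors, d):
    """One-step transition probabilities of the 'blind' walk at vertex x.

    The walk moves to each of the deg(x) neighbors with probability
    1/(2d) and stays at x with the leftover mass 1 - deg(x)/(2d)."""
    p = {y: 1.0 / (2 * d) for y in neighbors}
    p[x] = p.get(x, 0.0) + 1.0 - len(neighbors) / (2.0 * d)
    return p
```

Because p(x, y) = p(y, x) = 1/(2d) for every edge, the kernel is symmetric even though degrees vary, which is what makes the Green functions symmetric.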
Note that assumption 4 does not imply 3, see [Del02] for a counterexample.
Using only the above assumptions and the fact that V ⊂ Z^d (which serves mostly to simplify notation and could be replaced by weaker conditions), we show that the IDLA process starting at 0 has a Euclidean ball inner bound, as stated in the following theorem. Let I(n) denote the random IDLA aggregate of n particles starting at 0.
Theorem 1.1. Almost surely, for any ε > 0, we have that for all large enough R,
B_(1−ε)R ⊂ I(|B_R|).
We fix an ε > 0, with which we prove the above, for the rest of the paper.
1.3. Outline. In the remainder of this section we state our assumptions precisely, and explain why these assumptions are valid almost surely for the infinite cluster in supercritical percolation. Then, to prove the IDLA inner bound, we need to show that for each vertex v ∈ V, the Green function from 0 and the expected exit time from a ball around 0 of a random walk starting at v behave similarly to the corresponding functions of a Brownian motion. The exact statement needed appears in Lemma 3.1. The invariance principle gives us integral convergence of these functions to the right values, but to improve this to pointwise statements, we must show that they are locally regular. We use the Harnack inequality from [Bar04] to prove they are Hölder continuous. This is a scheme similar to improving a CLT to an LCLT, as was done in [BH09] using a different method than ours.
In section 2, we start by proving a lemma comparing expected exit time from a set to the Green function of a point in the set. Then, we show how our Harnack assumption leads to an oscillation inequality which we use to show regularity of the Green function and expected exit time of the walk in a ball when we are far from the boundary and the center. Next, in section 3, we use our assumption of an invariance principle to show that in small balls the integral of these functions approaches a tractable limit that can be calculated by knowledge of Brownian motion. Finally, in section 4, we utilize these estimates to prove theorem 1.1.
Remark 1.2. Since the paper makes use of results based on ergodic theory, there is no rate estimate for the convergence in Theorem 1.1. Also, on the Euclidean lattice there is an almost sure outer bound with sublinear fluctuations from the sphere. Another interesting question not treated here is whether a similar outer bound holds in our setting.
1.4. Weak convergence to Brownian motion. In this and the next three subsections, we give a precise formulation of each of our assumptions from above, and argue that they hold a.s. on Γ, the supercritical cluster. Let C^d_T = C([0, T], R^d), i.e. the continuous functions from the closed interval [0, T] to R^d. For R > 0 let
w_R(t) = R^−1 X_{tR²}.
Thus w_R(t) is a scaled linear interpolation of X_t (defined in 1.2), with its restriction to [0, T] an element of C^d_T. We say that assumption 1 holds if for any T > 0, the law of w_R(t) on C^d_T converges weakly in the supremum topology to the law of a (not necessarily standard) Brownian motion (B_t : 0 ≤ t ≤ T).
Lemma 1.3. Assumption 1 holds for Γ with probability one.
Proof. This is Theorem 1.1 in [BB07]. Another paper with a similar invariance principle is [MP05].
Let B(t) denote the Brownian motion weak limit of X t .
1.5. Convergence to positive density. Assumption 2 holds if there exists a positive a_G such that for any δ, γ > 0 and all sufficiently large R, we have that for any x ∈ b_R and any δR < r < R,
|B_r(x)| / vol(b_r) ∈ (a_G − γ, a_G + γ)  and  |D_r(x)| / r^d ∈ (a_G − γ, a_G + γ).
That is, balls and boxes of size δR to R have vertex density in (a_G − γ, a_G + γ) for all large enough R.
Lemma 1.4. Assumption 2 holds for Γ with probability one.
Proof. Let θ(p) be the probability that 0 is in the infinite cluster; θ(p) is positive in the supercritical regime. From Theorem 3 in [DS88] for d = 2, and from (2) on p. 15 of [Gan89] for d > 2, we know that for any ρ > 0 there is a positive c = c(ρ) such that the probability that the density of infinite-cluster vertices in D_n(x) falls below θ(p) − ρ decays exponentially in n^{d−1}, where x ∈ R^d and n ∈ N. Recall that D_n(x) is the set of vertices in a box of side length n and center x. Theorem 2 in [DS88] proves the easier density upper bound for d = 2, which under trivial modifications extends to all d. The above, together with Borel–Cantelli, gives that if we choose γ > 0 and x ∈ b_1, then for all large enough R the density in the box D_ρR(Rx) is within γ of θ(p). So we have the result for small boxes of size ρR. To expand it to larger boxes, let D be any box of diameter between δR and R. We partition the box D_R into boxes of size ρR and choose ρ = ρ(δ) small enough that the number of boxes intersecting both D and D_R \ D is negligible compared to the number that intersect D. The proof for balls is similar.
1.6. Uniform bound on exit time from a ball. Denote by τ_r(x) the first time the walk leaves B_r(x), and write τ_r for τ_r(0). Assumption 3 holds if there is a c_E such that for any δ > 0 and all large enough R, if x ∈ B_R and r > δR, then
E_x[τ_r(x)] < c_E r².
The assumption holds for Γ with probability one as a direct consequence of the following lemma, proved below.
Lemma 1.5. With probability one, there is an L such that for all δ > 0 and all large enough R, if x ∈ B_R and r > δR, then E_x[τ_r(x)] < L r².
Proof. Since we prove this for all R outside a bounded interval, it suffices to prove the above for r = δR with fixed δ > 0. To prove this holds a.s. for supercritical percolation clusters, we use a heat kernel upper bound given in Theorem 1 of Barlow's paper [Bar04]. The proof in [Bar04] is for continuous-time walks with mean time one between jumps; however, it can be transferred to our discrete walk using Theorem 2.1 of [BB07]. We state an implication of what was proved. With probability one there exists a function from V to N, {T_x < ∞}_{x∈V}, where T_x is sublinear in the sense that T_x/x → 0, and there exist positive constants c_1, c_2 such that for any x ∈ V the following holds: for all y ∈ B_t(x) and t ≥ T_x,
P_x(X_t = y) ≤ c_1 t^{−d/2} e^{−c_2 |x−y|²/t}.    (1.3)
Barlow's result actually states that almost surely T_x grows slower than a logarithmic function of x, and gives stronger Gaussian bounds on the heat kernel from below and above. Using the heat kernel upper bound (1.3), the convergence to zero of T_x/x, and the upper bound on vertex density resulting from Γ ⊂ Z^d, we have that for any K and all large enough R, if x ∈ B_R and r = δR, then P_x(X_{Kr²} ∈ B_r(x)) ≤ c |B_r(x)| (Kr²)^{−d/2} ≤ c K^{−d/2}. Thus for some K′, for all large enough R and all x ∈ B_R, P_x(X_{K′r²} ∈ B_r(x)) < 1/2. Next, for any positive L, we use the Markov property to upper bound P_x(τ_r(x) > L K′ r²) by 2^{−L}, which proves the claim.
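Schematically, the last step reads as follows (our own rendering; K′ is the constant found above, and we suppress the distinction between the balls around intermediate points):

```latex
P_x\bigl(\tau_r(x) > L K' r^2\bigr)
  \;\le\; \Bigl(\,\sup_{y} P_y\bigl(X_{K'r^2} \in B_r(x)\bigr)\Bigr)^{\!L}
  \;<\; 2^{-L},
\qquad
E_x\bigl[\tau_r(x)\bigr]
  \;\le\; K'r^2 \sum_{L \ge 0} P_x\bigl(\tau_r(x) > L K' r^2\bigr)
  \;\le\; 2K'r^2 .
```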
We will later use that assumption 3 implies L¹ convergence of τ_R ∧ TR² to τ_R, uniformly in R: for any β > 0 there is a T such that for all large enough R,
R^{−2} E_x[τ_R − τ_R ∧ TR²] < β.    (1.4)
To see this, we rewrite the left hand side of (1.4) and use the Markov property with the exit time assumption.
1.7. Harnack inequality. Assumption 4 holds if there is a 1 < c_H < ∞ such that for all δ > 0 we have that for all large enough R, if x ∈ B_R, r > δR, and h is a function that is non-negative and harmonic on B_2r(x), then
max_{B_r(x)} h ≤ c_H min_{B_r(x)} h.    (1.5)
In order to simplify the paper, the inequality above is formulated for Euclidean distance rather than graph distance. We show that since the two distances are almost surely comparable on Γ, this makes no difference. We start by proving the assumption for graph distance, using Lemma 2.19 and Theorem 5.11 of [Bar04]. The lemma, using appropriate parameters and Borel–Cantelli, tells us that for all large enough R, the balls B^Γ_{R log R} are very good. Specifically, all balls contained in B^Γ_{R log R} of graph radius larger than R^{1/4(d+2)} have positive volume density and satisfy a weak Poincaré inequality, as explained in (1.15) and (1.16) of the same paper. Theorem 5.11 then tells us that for a very good ball of graph radius R log R, all x ∈ B^Γ_{(R log R)/3} satisfy that for any function h non-negative and harmonic on B^Γ_{2r}(x),
max_{B^Γ_r(x)} h ≤ ĉ_H min_{B^Γ_r(x)} h,    (1.6)
where ĉ_H(d, p) > 0 is a constant depending only on the dimension and the percolation probability. Since all but a finite number of balls of graph radius R log R are very good, and R is o(R log R), we have assumption 4 for graph distance.
Next, we transfer this to the Euclidean-ball formulation of (1.5). By Theorem 1.1 of [AP96], we have that for some k > 0, ρ(d, p) > 0 and M < ∞, the probability that two cluster points x, y satisfy d_Γ(x, y) > ρ|x − y| is at most M e^{−k|x−y|}. Let A(x, y) be the event {d_Γ(x, y) > ρ|x − y|}. Union bounding the probability of A(x, y) over every pair of points in B_R of Euclidean distance greater than c log R, for some large c(k), we upper bound the probability of any such event occurring by a quantity summable in R for d > 1. Hence by Borel–Cantelli, almost surely for all large enough R, we have that for any x, y ∈ B_R with |x − y| > c log R, d_Γ(x, y) < ρ|x − y|. Since log R is o(R), this implies that for any δ > 0, all large enough R, any x ∈ B_R, and r > δR,
B^Γ_r(x) ⊂ B_r(x) ⊂ B^Γ_{ρr}(x).    (1.7)
Hence, a function that is non-negative and harmonic on B_2r(x) is also such on B^Γ_{2r}(x). For large enough R, we use (1.6) and (1.7) to get (1.5) for B_{ρ^{−1}r}(x) with constant ĉ_H. A routine chaining argument (see e.g. (3.5) in [Del02]) transfers this to B_r(x), as required, with a new constant c_H.

2. Pointwise bounds on Green function and expected exit time

In this section, we show that the assumptions of a uniform bound on the exit time from a ball, along with the Harnack inequality, give us pointwise bounds on the Green function and the expected exit time of a random walk.
As a convention, we use plain c and C to denote positive constants that do not retain their values from one expression to another, as opposed to subscripted constants c i , that do. In general, these constants are graph dependent, and in the context of percolation, can be seen as random functions of the percolation configuration. However, we view the graph as being fixed and satisfying the assumptions stated in subsection 1.2.
We start with a general lemma on the relation between expected exit time from a set and the expected number of visits to a fixed point in the set.
Let Z ⊂ Γ and let τ_Z be the first hitting time of Z for X_t. For x ∈ Γ we set
G_Z(x) = E_x[ Σ_{t=0}^{τ_Z} 1_{X_t = x} ],
the expected number of visits to x, before τ_Z, of a walk starting at x.
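As an aside (ours, not from the paper), such Green functions can be computed exactly on a finite example as G = (I − Q)^{−1}, where Q is the transition kernel of the blind walk killed on Z; this also exhibits the symmetry noted in subsection 1.2. All names below are our own:

```python
from fractions import Fraction

def blind_kernel(adj, killed, d):
    """Sub-stochastic kernel Q of the blind walk, killed on `killed`.

    Rows/columns are indexed by the surviving vertices, in sorted order."""
    alive = [v for v in sorted(adj) if v not in killed]
    idx = {v: i for i, v in enumerate(alive)}
    n = len(alive)
    Q = [[Fraction(0)] * n for _ in range(n)]
    for v in alive:
        for w in adj[v]:
            if w not in killed:
                Q[idx[v]][idx[w]] = Fraction(1, 2 * d)
        # staying probability 1 - deg(v)/(2d)
        Q[idx[v]][idx[v]] += 1 - Fraction(len(adj[v]), 2 * d)
    return alive, Q

def green_matrix(Q):
    """G = (I - Q)^{-1} by Gauss-Jordan over exact rationals;
    G[i][j] = expected visits to j, before killing, of a walk from i."""
    n = len(Q)
    A = [[Fraction(int(i == j)) - Q[i][j] for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [a / p for a in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]
```

On a 4-vertex path with Z = {3} and d = 2, the matrix G comes out symmetric, as the symmetric kernel p(x, y) = 1/(2d) promises, even though the vertex degrees differ.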
Lemma 2.1. There is a k(d) > 0 such that for any Z ⊂ Γ and x ∈ Γ, writing G = G_Z(x),
E_x[τ_Z] ≥ k G² / log G.    (2.1)
Proof. Recall from 1.2 that X_t has a positive staying probability at certain vertices. Since we apply the electrical network interpretation to estimate hitting probabilities, we prove the above for Y_t, the usual discrete simple random walk, with zero probability of staying at a vertex. This implies (2.1) for X_t as well, since the expected exit time cannot decrease, and the Green function for X_t can only grow by a factor of 2d, since the escape probabilities for X_t and Y_t are at most a factor of 2d apart. Fix x ∈ Γ, set T_0 = 0, and define for each i ∈ N the r.v.'s T_i, the time of the i-th return of Y_t to x, and ρ_i = T_i − T_{i−1}, the length of the i-th excursion. We show there are positive constants k_1, k_2, dependent only on d, such that for all r > 1,
P(ρ_i > k_1 r²/log r) ≥ k_2 (2dr)^{−1}.    (2.2)
By the electrical network interpretation (see e.g. [DS84]), the probability for a walk beginning at x to hit ∂B^Γ_r(x) before returning to x is (2d)^{−1} C_eff(r), where C_eff(r) is the effective conductance between x and ∂B^Γ_r(x). Since Γ is infinite and connected, for any r there is a connected path of r edges from x to ∂B^Γ_r(x). By the monotonicity principle, C_eff(r) is at least the conductance of this path, which is r^{−1}. Thus the probability to hit some y ∈ ∂B^Γ_r(x) before returning to x is at least (2dr)^{−1}. Next, let y ∈ ∂B^Γ_r(x). By the Carne–Varopoulos upper bound (see [Var85]), there are k_1(d), k_2(d) > 0 such that for all r > 1, the probability that a walk starting at y ∈ ∂B^Γ_r(x) does not hit x in the next k_1 r²/log r steps is, by a union bound, greater than k_2. Together with our lower bound on the probability of arriving at such a y ∈ ∂B^Γ_r(x), we get (2.2). Next, let g = inf{i : T_i > τ_Z} be the number of visits of Y_t to x before hitting Z, including t = 0. g is a geometric random variable with mean G = G_Z(x). Let α = (1/2) ln(4/3), and note that since there is a constant in (2.1) and τ_Z ≥ 1, we can assume G > 2. We further assume G > 2/α ∨ 16/k_2, so that αG − 1 > αG/2 and G k_2/16 > 1. Let A be the event that there is an 1 ≤ i ≤ (αG − 1) ∧ i* such that ρ_i > k_1 (k_2 G/16)² / log(k_2 G/16). Note that i* ≤ αG − 1 implies A.
Thus, by (2.2) and the independence of consecutive excursions from x, we can bound the probability that A fails, and this bound is smaller than 1/4.
This implies the lemma, since τ_Z > Σ_{i=1}^{g−1} ρ_i, and for k = k_1 k_2²/256 the event A gives τ_Z ≥ k G²/log G.
Next, we state the fact that a Harnack inequality implies an oscillation inequality. For a set of vertices U and a function u, let osc_U(u) = max_U(u) − min_U(u).
Proposition 2.2. Let x ∈ Γ and assume that for some r > 0, any function h that is non-negative and harmonic on B_2r(x) satisfies (1.5) on B_r(x). Then for any h that is harmonic on B_2r(x), we have
osc_{B_r(x)}(h) ≤ (1 − 2/(c_H + 1)) osc_{B_2r(x)}(h).    (2.3)
Proof. We quote a proof from chapter 9 of [Tel06]. Set u = h − min_{B_2r(x)} h and v = max_{B_2r(x)} h − h.
Since v is non-negative and harmonic on B_2r(x), the Harnack inequality (1.5) is satisfied by it, so we have
max_{B_r(x)} v ≤ c_H min_{B_r(x)} v.
Similarly, for u we have
max_{B_r(x)} u ≤ c_H min_{B_r(x)} u.
Summing up these two inequalities, we obtain
osc_{B_r(x)}(h) + osc_{B_2r(x)}(h) ≤ c_H (osc_{B_2r(x)}(h) − osc_{B_r(x)}(h)),
whence (2.3) follows.
Iterating this with the Harnack assumption 4, we get:
Corollary 2.3. For any α > 0 there is an M(α) such that for any η > 0 and for all R > R_η, if x ∈ B_R, r ≥ ηR, and h is a harmonic function on B_{M(α)r}(x), then
osc_{B_r(x)}(h) ≤ α max_{B_{M(α)r}(x)} |h|.
We use this to show regularity of the Green function and of the expected exit time.
Lemma 2.4. There is a c_G(ε) such that for all large enough R and any x ∈ B_{(1−ε)R} \ B_{εR},
G_{τ_R}(0, x) < c_G R^{2−d}.    (2.4)
Proof. Any two vertices u, v ∈ B_{(1−ε)R} \ B_{εR} can be joined by a chain of n < C(d)/ε overlapping balls B_{εR/4}(x_1), ..., B_{εR/4}(x_n), all subsets of B_{(1−ε/2)R} \ B_{(ε/2)R}, with x_1 = u and x_n = v. Since G_{τ_R}(0, ·) is positive and harmonic in B_{(1−ε/2)R} \ B_{(ε/2)R}, Harnack assumption 4 tells us that G_{τ_R}(0, u) ≤ c_H^n G_{τ_R}(0, v). Summing G_{τ_R}(0, ·) over the annulus, which by the exit time bound in assumption 3 is at most E_0[τ_R] ≤ c_E R², and dividing by the order-R^d number of vertices, we get (2.4).
Lemma 2.5. For any β > 0, there is a δ > 0 such that for all large enough R, any x, y ∈ B_{(1−ε)R} \ B_{εR} satisfying |x − y| < δR also satisfy
|G_{τ_R}(0, x) − G_{τ_R}(0, y)| < β R^{2−d}.    (2.5)
Proof. G_{τ_R}(0, ·) is positive and harmonic in B_{(1−ε/2)R} \ B_{(ε/2)R} and bounded by c_G R^{2−d}. We use Corollary 2.3 with α = β c_G^{−1} and η = ε/(2M(α)). This gives us the lemma with δ = η.
Lemma 2.6. For any β > 0, there is a δ > 0 such that for all large enough R, any x, y ∈ B_{(1−ε)R} satisfying |x − y| < δR also satisfy
|E_x[τ_R] − E_y[τ_R]| < β R².    (2.6)
Proof. Fix x ∈ B_{(1−ε)R} and, for a δ > 0 determined below, fix some y ∈ B_{δR}(x). Let ε′ > 0 be small enough that c_E ε′² < β/4. Define the r.v. τ_1 to be the random time it takes a walk starting somewhere inside B_{ε′R}(x) to exit B_{ε′R}(x), and let τ_2 = τ_R − τ_1, i.e. the additional time it takes the walk to exit B_R. Note that, as a function of its starting point, E[τ_2] is harmonic in B_{ε′R}(x), and from exit time assumption 3, for all large R it is bounded by c_E R². We use Corollary 2.3 with α = β c_E^{−1}/2 and η = ε′/M(α) to get that, for δ ≤ η, the oscillation of E[τ_2] over B_{δR}(x) is at most βR²/2. Since E_x[τ_1] and E_y[τ_1] are each at most c_E ε′² R² < βR²/4, the triangle inequality gives (2.6).

3. Domination of Green function
Let Ω = Γ^{N∪{0}} and let P_B(·) denote the probability measure on paths in Ω starting at 0 under which X_t is a "blind" simple random walk, as defined in subsection 1.2. P_B is pushed forward to a measure on C^d_T by w_R(t), as defined in subsection 1.4. To contrast, we call P(·) the Wiener measure on curves corresponding to the Brownian motion B(t) which is the weak limit of w_R(t). Thus, for fixed T, C^d_T is the probability space on which the push-forward of P_B by w_R(t) converges to P in distribution. Write E_B[·] and E[·] for the corresponding expectations.
3.1. Integral Convergence of expected exit time. Since we assume control of convergence to Brownian Motion only from 0, we must describe the expected exit time from an arbitrary point in the unit ball as a function of Brownian motion that starts at 0. We do this by conditioning the Brownian motion to hit a small box containing that point and measuring the additional time needed to exit the unit ball.
We denote the first hitting time of a set Z by τ_Z. These hitting times may refer to the Brownian motion B(t), to the scaled linearly interpolated walks w_R(t), or to the discrete random walk X_t. Another implicit part of this notation is the starting point of the walk or curve. The correct interpretation should be evident from context, and will be stated explicitly otherwise. Some notation used in this section was introduced in subsections 1.2 and 1.4.
For T > 0, θ > 0 and u ∈ b_1, let
A = A(T) = {w ∈ C^d_{2T} : τ_{d_θ(u)}(w) ≤ T};
A ⊂ C^d_{2T} is the event that the curve hits a small box around u before time T. Henceforth, to avoid the complication where a vertex after scaling lies on the boundary of a box d_θ(u), we always take u to have rational coordinates, and the side length θ to be irrational. This will suffice as our scale parameter R is a natural number. Secondly, we take T to be an integer, so that a curve w_R(t) hits d_θ(u) by time T if and only if it hits a vertex in D_Rθ(Ru) by time TR².
Since we are interested in estimating the behavior of X_t and not just its interpolation, we define
A* = A*(R, T) = {ω ∈ Ω : τ_{D_Rθ(Ru)}(X_t(ω)) ≤ TR²};
A* ⊂ Ω is the event that a vertex in D_Rθ(Ru) is visited by time TR². Thus for all R ∈ N and all ω ∈ Ω, X_t(ω) ∈ A* ⇐⇒ w_R(ω, t) ∈ A.
Let τ+ be the first exit time of the unit ball b_1 after hitting d_θ(u). Let k : C^d_{2T} → R be defined as
k(w) = 1_A(w) · ((τ+ − τ_{d_θ(u)}) ∧ T),
the time the curve takes to get from d_θ(u) to the complement of b_1, truncated at T, and set to zero if d_θ(u) is not hit by time T. Analogously, let τ+_R be the first exit time of B_R after hitting D_Rθ(Ru), and let
k*(X_t) = R^{−2} 1_{A*} · ((τ+_R − τ_{D_Rθ(Ru)}) ∧ TR²).
k(B(t)) is bounded by T and is discontinuous only on ∂A, a set of Wiener measure zero. Therefore, by the Portmanteau theorem and our assumption of weak convergence, E_B[k(w_R(t))] → E[k(B(t))]. By the strong Markov property for Brownian motion, we may average over the hitting point:
E[k(B(t))] = P(A) ∫_{∂d_θ(u)} E_x(τ ∧ T) dµ(x),
where τ = τ_{b^c_1} is the first exit time from the unit ball (we start measuring time at B(τ_{d_θ(u)})) and µ is the distribution of the hitting point conditioned on A. Note that R² k(w_R(ω, t)) measures the time that the unscaled interpolated walk w_1(ω, t) takes to get from the boundary of d_Rθ(Ru) to b^c_R, but what interests us is the span between the first time that X_t(ω) takes a value in D_Rθ(Ru) and the first time it takes a value in the complement of B_R; R² k*(X_t(ω)) measures this time. By the strong Markov property for random walks,
E_B[k*(X_t)] = P_B(A*) Σ_{x ∈ ∂D_Rθ(Ru)} λ^(R)_x R^{−2} E^B_x(τ_R ∧ TR²),
where the λ^(R)_x are non-negative and sum to one. If the unscaled interpolated curve w_1(ω, t) crosses the boundary of d_Rθ(Ru), it will hit a vertex in D_Rθ(Ru) in less than one time unit. The same is true for exiting b_R. Thus for all ω in our probability space,
|k(w_R(ω, t)) − k*(X_t(ω))| ≤ 2R^{−2}.
By weak convergence, for any fixed T, P_B(w_R(ω, t) ∈ A(T)) → P(A(T)), and since A* ⇐⇒ A, P_B(A*) → P(A). In summary, we have some average over the boundary vertices of D_Rθ(Ru) of the function R^{−2} E^B_x(τ_R ∧ TR²) that is as close as we like to an average over ∂d_θ(u) of a Brownian motion's expected time to exit the unit ball.    (3.1)
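As a quick sanity check of the τ_R ≍ R² scaling behind these normalizations (ours, on the full lattice Z² rather than a percolation cluster), one can use that |X_t|² − t is a martingale for the simple random walk, so E_0[τ_R] lies between R² and (R+1)²:

```python
import random

def exit_time(R, rng):
    """Steps a simple random walk on Z^2 takes to leave the Euclidean
    ball of radius R around the origin."""
    x = y = t = 0
    while x * x + y * y < R * R:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y, t = x + dx, y + dy, t + 1
    return t

def mean_exit_time(R, trials, seed=0):
    """Monte Carlo estimate of E_0[tau_R]."""
    rng = random.Random(seed)
    return sum(exit_time(R, rng) for _ in range(trials)) / trials
```

With a few thousand trials the estimate settles into the predicted window [R², (R+1)²].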
3.2. Integral convergence of Green function. For fixed T > 0, θ > 0 and u ∈ b_1, let h : C^d_T → R be defined for w(t) ∈ C^d_T as the curve's occupation time of d_θ(u) before leaving b_1 and before time T:
h(w) = ∫_0^{τ_{b^c_1} ∧ T} 1_{d_θ(u)}(w(t)) dt.
h(B(ω, t)) is bounded by T and is discontinuous only on curves whose occupation time of ∂d_θ(u) before exiting b_1 is positive — a set of Wiener measure zero. Thus the Portmanteau theorem gives
E_B[h(w_R(t))] → E[h(B(t))] = ∫_{d_θ(u)} g_{τ∧T}(0, x) dx,
where g_{τ∧T} is the Green function of the Brownian motion killed on leaving the unit ball or when time T is reached. Again, we are measuring the time that a linearly interpolated curve spends in a set, while we would like control over the time the random walk itself spends in the set. However, E[h(B(t))] is a continuous function of θ, and for any δ > 0 the time the discrete walk spends in D_Rθ(Ru) is eventually sandwiched between the times the unscaled interpolated walk w_1(ω, t) spends in d_R(θ+δ)(Ru) and in d_R(θ−δ)(Ru). Thus:
R^{−2} Σ_{x ∈ D_Rθ(Ru)} G^B_{τ_R ∧ TR²}(0, x) → ∫_{d_θ(u)} g_{τ∧T}(0, x) dx,    (3.2)
where G^B_{τ_R ∧ TR²} is the Green function of the walk from 0, killed on leaving B_R or when time TR² is reached.

3.3. Pointwise domination of Green function.
Lemma 3.1. For any ε > 0 there exist R̄ and η > 0 such that for all R > R̄ and any z ∈ B_{(1−ε)R} \ B_{εR},
|B_R| G_{τ_R}(0, z) > E_z[τ_R] + η R².
This is the main result needed for the IDLA lower bound, and it will be proved in this section.
First, it is known (see, e.g., [LBG92] p. 2121) that for a Brownian motion starting at zero and killed on exiting the unit ball, the Green function g_τ(0, x) and the expected exit time E_x(τ) are continuous functions with the property that |b_1| g_τ(0, x) − E_x(τ) descends strictly monotonically to zero as |x| goes from 0 to 1. This is true for any Brownian motion; in particular for B(t), the weak limit of X_t on the graph starting at 0. Thus for any ε > 0, the minimum of the difference between the two for |x| ∈ [ε, 1 − ε] is bounded away from zero. The Lebesgue monotone convergence theorem implies that g_{τ∧T}(0, x) ↑ g_τ(0, x) and E_x(τ ∧ T) ↑ E_x(τ) as T → ∞. Since all functions involved are continuous and converge monotonically on a compact set, by Dini's theorem the convergence is uniform. Thus we have the following uniform bounding of the difference away from zero: there exist γ(ε) > 0 and T such that for all u with |u| ∈ [ε/2, 1 − ε/2],
E_u(τ ∧ T) + γ < |b_1| g_{τ∧T}(0, u).
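For orientation only (this is classical and not needed for the proof; it applies to a standard Brownian motion, while the limit B(t) here need not be standard), both functions have closed forms in the unit ball, with c_d a dimensional constant:

```latex
E_x(\tau) \;=\; \frac{1-|x|^2}{d},
\qquad
g_\tau(0,x) \;=\; c_d\left(|x|^{2-d}-1\right) \quad (d \ge 3),
```

from which the strict monotone decrease of |b_1| g_τ(0, x) − E_x(τ) in |x| can be checked directly in the standard case.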
Since E_u(τ ∧ T) and g_{τ∧T}(0, u) converge uniformly as T → ∞, they are uniformly equicontinuous in the variable u on the compact annulus |u| ∈ [ε/2, 1 − ε/2]. We may then choose a θ > 0 such that for all large enough T, any average of g_{τ∧T}(0, ·) over a box of side θ with center u is close to g_{τ∧T}(0, u), for any u with |u| ∈ [ε, 1 − ε]. We have the analogous claim for E_u(τ ∧ T). Thus:
Lemma 3.2. For any positive ε, there is a γ(ε) > 0 such that for all large enough T and all small enough θ,
∫_{∂d_θ(u)} E_x(τ ∧ T) dµ(x) + γ < |b_1| θ^{−d} ∫_{d_θ(u)} g_{τ∧T}(0, x) dx.
In the above, dµ is an arbitrary probability measure (total mass one) on the boundary of d_θ(u), while the integral on the right is with respect to d-dimensional Lebesgue measure.
We apply Lemmas 2.5 and 2.6 to get a δ > 0 for which (2.6) and (2.5) hold with β = γ̃ for all large enough R, where γ̃ > 0 is some multiple of γ from Lemma 3.2 that we determine later. Set θ small enough that d_θ is covered by a ball of radius δ. Increase T further if necessary so that (1.4) holds with β = γ/4. Fix T and θ for the remainder of the proof.
3.3.1. Pointwise Green function estimate. We cover b_{1−ε} \ b_ε by a finite number of open θ-boxes, and so to prove Lemma 3.1, it suffices to prove the implication in the restricted setting of x ∈ D_Rθ(Ru), where u is the center of an arbitrary box in our θ-net. Now, we show the Green function at every point in this box is close to the continuous quantity θ^{−d} ∫_{d_θ(u)} g_{τ∧T}(0, x) dx. By (3.2), we have for large enough R that Σ_{x ∈ D_Rθ(Ru)} R^{−2} G_{τ_R∧TR²}(0, x) is as close as we like to ∫_{d_θ(u)} g_{τ∧T}(0, x) dx. Let α_R = |B_R| / |D_Rθ(Ru)|; by density assumption (2), for all R large enough, |α_R − θ^{−d}|b_1|| < γ̃. Using the triangle inequality on these two estimates, the average of |B_R| R^{−2} G_{τ_R∧TR²}(0, ·) over D_Rθ(Ru) is within c γ̃ of |b_1| θ^{−d} ∫_{d_θ(u)} g_{τ∧T}(0, x) dx, where for the error term we used (2.4) to bound G_{τ_R∧TR²}(0, x) and (Rθ)^d as a bound on the number of vertices.
We set θ such that (2.5) bounds the difference between the maximum and minimum of G_{τ_R}(0, ·) on D_Rθ(Ru) by γ̃ R^{2−d} for all large enough R. Thus, since G_{τ_R}(0, x) ≥ G_{τ_R∧TR²}(0, x), we have for any x ∈ D_Rθ(Ru),
|B_R| R^{−2} G_{τ_R}(0, x) > |b_1| θ^{−d} ∫_{d_θ(u)} g_{τ∧T}(0, y) dy − c γ̃.    (3.3)
Recall γ from Lemma 3.2. We now determine γ̃ so that c γ̃ < γ/4, making (3.3) hold with error at most γ/4 for all large enough R and any x ∈ D_Rθ(Ru). Next, we show that the expected hitting time from any point is close to the continuous one.
3.3.2. Pointwise expected hitting time estimate. Recall that we chose T such that (1.4) holds with β = γ/4, i.e. for all large enough R, R^{−2} E_x[τ_R − τ_R ∧ TR²] < γ/4. Combining this with (3.1), we have for large enough R,
Σ_{x ∈ ∂D_Rθ(Ru)} λ^(R)_x R^{−2} E_x[τ_R] < ∫_{∂d_θ(u)} E_x(τ ∧ T) dµ(x) + γ/2,
where for each R, the λ^(R)_x are non-negative and sum to one, and dµ is some probability measure on ∂d_θ(u).
We set θ such that (2.6) holds with β = γ̃ < γ/2, so for large enough R, for any x ∈ D_Rθ(Ru), R^{−2} E_x[τ_R] is within γ̃ of the weighted average above. Putting the above together with (3.3) and Lemma 3.2, and taking γ̃ small enough relative to γ, for all large enough R we get that for any x ∈ D_Rθ(Ru),
|B_R| G_{τ_R}(0, x) > E_x[τ_R] + η R²
for some η(γ) > 0. The above, along with our upper bound on E_x(τ_R) from assumption 3, implies Lemma 3.1.

4. Lower Bound
We begin by formally defining the IDLA process. Let (X^n_t), n ∈ N, t ≥ 0, be a sequence of independent random walks starting at 0. The aggregate begins empty, i.e. I(0) = ∅, and I(n) = I(n−1) ∪ {X^n_t*}, where t* = min{t : X^n_t ∉ I(n−1)}. Thus we have an aggregate growing by one vertex in each time step.
As in [LBG92], we fix z ∈ B_{(1−ε)R} \ B_{εR} and look at the first |B_R| walks. Let A = A(z, R) be the event z ∈ I(|B_R|). We show the probability that this does not happen decreases exponentially with R.
Let M = M(z, R) be the number of walks among the first |B_R| that hit z before exiting B_R. Let L = L(z, R) be the number of walks among the first |B_R| that hit z before exiting B_R, but only after leaving the aggregate. Then for any a,
P(A^c) ≤ P(M = L) ≤ P(M ≤ a) + P(L ≥ a).
We choose a later to balance the two terms. In order to bound the above expression, we compute the mean of M: E[M] = |B_R| P_0(τ_z < τ_R).
E[L] is hard to determine, but each walk that contributes to L can be tied to the unique point at which it exits the aggregate. Thus, by the Markov property, if we start a random walk from each vertex in B_R and let L̃ be the number of those walks that hit z before exiting B_R, then P(L > n) ≤ P(L̃ > n), and so it suffices to bound P(L̃ > a). L̃ is a sum of independent indicators, with
E[L̃] = Σ_{y ∈ B_R} P_y(τ_z < τ_R).
Since both M and L̃ are sums of independent variables, we expect them to be close to their means. Our aim now becomes showing that for some δ > 0 and all large enough R,
E[M] − E[L̃] > δ R² G_{τ_R}(z, z)^{−1}.    (4.1)
By standard Markov chain theory,
P_y(τ_z < τ_R) = G_{τ_R}(y, z) / G_{τ_R}(z, z).
Using this and the symmetry of the Green function, E[M] = |B_R| G_{τ_R}(0, z) G_{τ_R}(z, z)^{−1} and E[L̃] = E_z[τ_R] G_{τ_R}(z, z)^{−1}, so (4.1) follows from Lemma 3.1, and it is enough to show that P(L̃ ≥ a) and P(M ≤ a) are small for all large enough R, with a midway between the two means. For the first probability we use that L̃ is a sum of independent indicators, so its variance is smaller than its mean, and then apply Chernoff's inequality. Similarly, using (4.1), we bound P(M ≤ a). To lower bound E[L̃] = E_z[τ_R] G_{τ_R}(z, z)^{−1}, we use (2.1) to write that for some M′,
G_{τ_R}(z, z) ≤ M′ (E_z[τ_R] log E_z[τ_R])^{1/2},
and since E_z[τ_R] ≥ εR, we have E[L̃(z, R)] > c (R / log R)^{1/2}.
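The variance fact used above is elementary and easy to check exactly (a small illustration of ours):

```python
from fractions import Fraction

def indicator_sum_moments(ps):
    """Mean and variance of a sum of independent Bernoulli(p_i) indicators.

    Independence gives Var = sum p_i (1 - p_i) <= sum p_i = mean,
    which is the bound fed into Chernoff's inequality above."""
    mean = sum(ps)
    var = sum(p * (1 - p) for p in ps)
    return mean, var
```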
The Weizmann Institute of Science, 76100 Rehovot, Israel E-mail address: shellef@gmail.com