The growth exponent for planar loop-erased random walk

We give a new proof of a result of Kenyon that the growth exponent for loop-erased random walks in two dimensions is 5/4. The proof uses the convergence of LERW to Schramm-Loewner evolution with parameter 2, and is valid for irreducible bounded symmetric random walks on any two-dimensional discrete lattice.


Overview
Let S be a random walk on a discrete lattice Λ ⊂ R^d, started at the origin. The loop-erased random walk (LERW) Ŝ^n is obtained by running S up to the first exit time of the ball of radius n and then chronologically erasing its loops.
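For concreteness, the definition can be sketched in code for simple random walk on Z^2 (an illustration only; `loop_erase` and `lerw_to_radius` are hypothetical helper names, not notation from this paper):

```python
import random

def loop_erase(path):
    """Chronological loop erasure: scan the path once, and whenever a
    point is revisited, delete the loop that was just closed."""
    erased, index = [], {}
    for p in path:
        if p in index:              # p closes a loop: erase it
            k = index[p]
            for q in erased[k + 1:]:
                del index[q]
            del erased[k + 1:]
        else:
            index[p] = len(erased)
            erased.append(p)
    return erased

def lerw_to_radius(n, seed=0):
    """Loop-erasure of a simple random walk on Z^2 run from the origin
    until it first exits the ball of radius n."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x = y = 0
    path = [(0, 0)]
    while x * x + y * y < n * n:
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return loop_erase(path)
```

The result is by construction a self-avoiding path from the origin to the complement of the ball.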
The LERW was introduced by Lawler [9] in order to study the self-avoiding walk, but it was soon found that the two processes are in different universality classes. Nevertheless, LERW is extensively studied in statistical physics for two reasons. First of all, LERW is a model that exhibits many similarities to other interesting models: there is a critical dimension above which its behavior is trivial, it satisfies a domain Markov property, and it has a conformally invariant scaling limit. Furthermore, LERWs are often easier to analyze than these other models because properties of LERWs can often be deduced from facts about random walks. The other reason why LERWs are studied is that they are closely related to certain models in statistical physics like the uniform spanning tree (through Wilson's algorithm which allows one to generate uniform spanning trees from LERWs [30]), the abelian sandpile model [6] and the b-Laplacian random walk [10] (LERW is the case b = 1).
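The connection with uniform spanning trees through Wilson's algorithm [30] can be sketched on a small grid graph (a minimal illustration under our own naming; from each vertex not yet in the tree, one runs a random walk until it hits the tree and adds its loop-erasure):

```python
import random

def wilson_ust(vertices, neighbors, root, seed=0):
    """Uniform spanning tree via Wilson's algorithm: repeatedly run a
    random walk to the current tree, loop-erase it on the fly, and add
    the resulting self-avoiding path to the tree."""
    rng = random.Random(seed)
    in_tree = {root}
    parent = {}
    for v in vertices:
        if v in in_tree:
            continue
        path, index, u = [v], {v: 0}, v
        while u not in in_tree:
            u = rng.choice(neighbors[u])
            if u in index:              # erase the loop just closed
                k = index[u]
                for w in path[k + 1:]:
                    del index[w]
                del path[k + 1:]
            else:
                index[u] = len(path)
                path.append(u)
        for a, b in zip(path, path[1:]):
            parent[a] = b               # parent pointers toward the root
        in_tree.update(path)
    return parent

# a 3x3 grid graph
V = [(i, j) for i in range(3) for j in range(3)]
VS = set(V)
nbrs = {v: [w for w in [(v[0] + 1, v[1]), (v[0] - 1, v[1]),
                        (v[0], v[1] + 1), (v[0], v[1] - 1)] if w in VS]
        for v in V}
tree = wilson_ust(V, nbrs, root=(0, 0))
```

Each branch of the resulting tree is a loop-erased walk, which is the sense in which LERW and the uniform spanning tree are related.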
Let Gr(n) be the expected number of steps of a d-dimensional LERW Ŝ^n. Then the d-dimensional growth exponent α_d is defined to be such that Gr(n) ≈ n^{α_d}. For d ≥ 4, it was shown by Lawler [10,11] that α_d = 2 (roughly speaking, in these dimensions, random walks do not produce many loops and LERWs have the same growth exponent as random walks). For d = 3, numerical simulations suggest that α_3 is approximately 1.62 [1], but neither the existence of α_3 nor its exact value has been determined rigorously (it is not expected to be a rational number). In the two-dimensional case, it was shown by Kenyon [7] that α_2 exists for simple random walk on the integer lattice Z^2 and is equal to 5/4. His proof uses domino tilings to compute asymptotics for the number of uniform spanning trees of rectilinear regions of R^2 and then uses the relation between uniform spanning trees and LERW to conclude that α_2 = 5/4. In this paper, we give a substantially different proof that α_2 = 5/4. Namely, we prove

Theorem 1.1. Let S be an irreducible bounded symmetric random walk on a two-dimensional discrete lattice started at the origin and let σ_n be the first exit time of the ball of radius n. Let Ŝ^n be the loop-erasure of S[0, σ_n] and Gr(n) be the expected number of steps of Ŝ^n. Then Gr(n) ≈ n^{5/4}.
The proof of Theorem 1.1 uses the fact that LERW has a conformally invariant scaling limit called radial SLE_2. Radial Schramm-Loewner evolution with parameter κ ≥ 0 is a continuous random process from the unit circle to the origin in D. It was introduced by Schramm [23] as a candidate for the scaling limit of various discrete models from statistical physics. Indeed, he showed that if LERW has a conformally invariant scaling limit, then that limit must be SLE_2. In a later paper, Lawler, Schramm and Werner [20] proved the convergence of LERW to SLE_2. Other models known to scale to SLE include the uniform spanning tree Peano curve (κ = 8, Lawler, Schramm and Werner [20]), the interface of the Ising model at criticality (κ = 16/3, Smirnov [26]), the harmonic explorer (κ = 4, Schramm and Sheffield [24]), the interface of the discrete Gaussian free field (κ = 4, Schramm and Sheffield [25]), and the interface of critical percolation on the triangular lattice (κ = 6, Smirnov [27] and Camia and Newman [4,5]). There is also strong evidence to suggest that the self-avoiding walk converges to SLE_{8/3}, but so far, attempts to prove this have been unsuccessful [21].
One of the reasons to show convergence of discrete models to SLE is that properties and exponents for SLE are usually easier to derive than those for the corresponding discrete model. It is also widely believed that the discrete model will share the exponents of its corresponding SLE scaling limit. However, the equivalence of exponents between the discrete models and their scaling limits is not immediate. For instance, Lawler and Puckette [17] showed that the exponent associated to the non-intersection of two random walks is the same as that for the non-intersection of two Brownian motions. In the case of discrete models converging to SLE, different techniques must be used, since the convergence is weaker than the convergence of random walks to Brownian motion. To the author's knowledge, the derivation of arm exponents for critical percolation from disconnection exponents for SLE 6 by Lawler, Schramm and Werner [19] and Smirnov and Werner [28] is the only other example of exponents for a discrete model being derived from those for its SLE scaling limit.
There are three main reasons for giving a new proof that α 2 = 5/4. The first is to give another example where an exponent for a discrete model is derived from its corresponding SLE scaling limit. The second reason is that the convergence of LERW to SLE 2 holds for a general class of random walks on a broad set of lattices. This allows us to establish the exponent 5/4 for irreducible bounded symmetric random walks on discrete lattices of R 2 , and thereby generalize Kenyon's result which holds only for simple random walks on Z 2 . Finally, in the course of the proof we establish some facts about LERWs that are of interest on their own. Indeed, in a forthcoming paper with Martin Barlow [2], we use a number of the intermediary results in this paper to obtain second moment estimates for the growth exponent.
There are two properties of SLE_2 that suggest that α_2 = 5/4. The first is that the Hausdorff dimension of the SLE curves was established by Beffara [3], and is equal to 5/4 for SLE_2. However, we have not found a proof that uses this fact directly. Instead, we use the fact that the probability that a complex Brownian motion from the origin to the unit circle does not intersect an independent SLE_2 curve from the unit circle to the circle of radius 0 < r < 1 is comparable to r^{3/4}. This and other exponents for SLE were established by Lawler, Schramm and Werner [18]. We use this fact to show that the probability that a random walk and an independent LERW started at the origin and stopped at the first exit time of the ball of radius n do not intersect is logarithmically asymptotic to n^{-3/4}. We then relate this intersection exponent 3/4 to the growth exponent α_2 and show that α_2 = 5/4.

Outline of the proof of Theorem 1.1
While many of the details are quite technical, the main steps in the proof are fairly straightforward. Let Es(n) be the probability that a LERW and an independent random walk started at the origin do not intersect each other up to leaving B_n, the ball of radius n. As we mentioned in the previous section, the fact that Gr(n) ≈ n^{5/4} follows from the fact that Es(n) ≈ n^{-3/4}. Intuitively, this is not difficult to see. Let z be a point in B_n that is not too close to the origin or the boundary. In order for z to be on the LERW path, it must first be on the random walk path; the expected number of times the random walk path goes through z is of order 1. Then, in order for z to be on the LERW path, it cannot be part of a loop that gets erased; this occurs if and only if the random walk path from z to ∂B_n does not intersect the loop-erasure of the random walk path from 0 to z. The probability of this is comparable to Es(n). Therefore, since there are on the order of n^2 points in B_n, Gr(n) is comparable to n^2 Es(n), and so it suffices to show that Es(n) ≈ n^{-3/4}. The above heuristic does not work for points close to the origin or to the circle of radius n, and so the actual details are a bit more complicated.
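The heuristic can be probed numerically. The following self-contained sketch (simple random walk on Z^2, small radii and sample sizes of our choosing) estimates Gr(n) by simulation and extracts a crude exponent from two radii:

```python
import math, random

def lerw_length(n, rng):
    """Number of points on the loop-erasure of a simple random walk on
    Z^2 run from the origin to the first exit of the ball of radius n,
    with loops erased on the fly."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    pos = (0, 0)
    path, index = [pos], {pos: 0}
    while pos[0] ** 2 + pos[1] ** 2 < n * n:
        dx, dy = rng.choice(steps)
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in index:            # erase the loop just closed
            k = index[pos]
            for q in path[k + 1:]:
                del index[q]
            del path[k + 1:]
        else:
            index[pos] = len(path)
            path.append(pos)
    return len(path)

rng = random.Random(1)
gr = {n: sum(lerw_length(n, rng) for _ in range(200)) / 200 for n in (16, 32)}
slope = math.log(gr[32] / gr[16]) / math.log(2)   # crude estimate of alpha_2
```

At such small radii one only sees a slope roughly consistent with 5/4; the simulation illustrates the statement of Theorem 1.1, it does not prove it.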
Given l ≤ m ≤ n, decompose the LERW path Ŝ^n as Ŝ^n = η_1 ⊕ η^* ⊕ η_2, where η_1 is the portion of Ŝ^n from the origin to the circle of radius l, η^* the portion from there to the circle of radius m, and η_2 the remainder, from the circle of radius m to the circle of radius n (see Figure 1).

[Figure 1: Decomposition of a LERW path into η_1, η_2 and η^*.]

Define Es(m, n) to be the probability that a random walk started at the origin leaves the ball B_n before intersecting η_2. Notice that Es(m, n) is the discrete analog of the probability that a Brownian motion from the origin to the unit circle does not intersect an independent SLE_2 curve from the unit circle to the circle of radius m/n. As mentioned in the previous section, the latter probability is comparable to (m/n)^{3/4} [18]. Therefore, using the convergence of LERW to SLE_2 and the strong approximation of Brownian motion by random walks, one can show that there exists C < ∞ such that the following holds (Theorem 5.6). For all 0 < r < 1, there exists N such that for all n > N,

C^{-1} r^{3/4} ≤ Es(rn, n) ≤ C r^{3/4}.    (1)

Unfortunately, N in the previous statement depends on r, so one cannot simply take r → 0 to recover Es(n). Therefore, one has to relate Es(n) to Es(m, n). This is not as easy as it sounds because the probability that a random walk avoids a LERW is highly dependent on the behavior of the LERW near the origin. Nevertheless, we show (Propositions 5.2 and 5.3) that there exists C < ∞ such that

C^{-1} Es(m) Es(m, n) ≤ Es(n) ≤ C Es(m) Es(m, n).    (2)

It is then straightforward to combine (1) and (2) to deduce that Es(n) ≈ n^{-3/4} (Theorem 5.7).
To prove (2), we let l = m/4 in the decomposition given in Figure 1. Then in order for a random walk S and a LERW Ŝ^n not to intersect up to leaving B_n, they must first reach the circle of radius l without intersecting; the probability of this is Es(l). Next, we show that with probability bounded below by a constant, η^* is contained in a fixed half-wedge (Corollary 3.8). We then use a separation lemma (Theorem 4.7) which states that on the event Es(l), S and Ŝ^n are at least a distance cl apart at the circle of radius l. This allows us to conclude that, conditioned on the event Es(l), with a probability bounded below by a constant, S will not intersect η^*. Finally, we use the fact that η_1 and η_2 are "independent up to constants" (Proposition 4.6) to deduce that

Es(n) ≥ c Es(l) Es(m, n).

Formula (2) then follows because m = 4l and thus Es(l) is comparable to Es(m).

Structure of the paper
In Chapter 2, we give precise definitions of random walks, LERWs and SLE and state some of the basic facts and properties that we require. In Chapter 3, we prove some technical lemmas about random walks. Section 3.1 establishes some estimates for Green's functions and for the probability that a random walk hits a set K_1 before another set K_2. Section 3.2 examines the behavior of random walks conditioned to avoid certain sets. Finally, in Section 3.3 we prove Proposition 3.12 which states the following. For a fixed continuous curve α in the unit disc D, the probability that a random walk on the lattice δΛ exits D before hitting α tends, as δ → 0, to the probability that a Brownian motion exits D before hitting α. Furthermore, if one fixes r, then the convergence is uniform over all curves whose diameter is larger than r.
Chapter 4 is devoted to proving two results for LERW that are central to the main proof of the paper. The first is Proposition 4.6 which states that if 4l ≤ m ≤ n then η_1 and η_2 are independent up to a multiplicative constant (see Figure 1). The second result is a separation lemma for LERW. This key lemma states the following intuitive fact about LERW: there exist positive constants c_1 and c_2 so that, conditioned on the event that a random walk and a LERW do not intersect up to leaving the ball B_n, the probability that the random walk and the LERW are at least distance c_1 n apart when they exit the ball B_n is bounded below by c_2. Separation lemmas like this one are often quite useful in establishing exponents; a separation lemma was used in [12] to establish the existence of the intersection exponent for two Brownian motions and in [28] to derive arm exponents for critical percolation.
In Chapter 5, we prove that the growth exponent α_2 = 5/4. To do this, we first relate the non-intersection of a random walk and a LERW to the non-intersection of a Brownian motion and an SLE_2 curve. Using the fact that the exponent for the latter is 3/4, we deduce the same result for the former (Theorem 5.7). Finally, we show how this implies that the growth exponent α_2 for LERW is 5/4 (Theorem 1.1).

Acknowledgements
I would like to thank Wendelin Werner for suggesting this problem to me. This work was done while I was a graduate student at the University of Chicago and I am very grateful to my advisors Steve Lalley and Greg Lawler for all their patient help and guidance.
2 Definitions and background

2.1 Irreducible bounded symmetric random walks

Throughout this paper, Λ will be a two-dimensional discrete lattice of R^2. In other words, Λ is an additive subgroup of R^2 not generated by a single element such that there exists an open neighborhood of the origin whose intersection with Λ is just the origin. It can be shown (see for example [16, Proposition 1.3.1]) that Λ is isomorphic as a group to Z^2. Now suppose that V ⊂ Λ \ {0} is a finite generating set for Λ with the property that the first nonzero component of every x ∈ V is positive. Suppose that κ : V → (0, ∞) is a weight function, and define a probability distribution p on Λ by letting p(x) = p(−x) be proportional to κ(x) for x ∈ V, and p(x) = 0 for x ∉ V ∪ (−V). Let S_j = X_1 + · · · + X_j, where the random variables X_k are independent with distribution p. Then S is a symmetric, irreducible random walk with bounded increments. It is a Markov chain with transition probabilities p(x, y) = p(y − x).
If X = (X_1, X_2) has distribution p, then Γ = E[X X^T] is the covariance matrix associated to S. There exists a unique symmetric positive definite matrix A such that Γ = A^2. Therefore, if S′_j = A^{-1} S_j, then S′ is a random walk on the discrete lattice A^{-1}Λ with covariance matrix the identity. Since a linear transformation of a circle is an ellipse, it is clear that if we can show that the growth exponent α_2 is 5/4 for random walks whose covariance matrix is the identity, then α_2 will be 5/4 for random walks with arbitrary covariance matrix. Therefore, to simplify notation and proofs, throughout the paper S will denote a symmetric, irreducible random walk on a discrete lattice Λ with bounded increments and covariance matrix equal to the identity.
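The reduction to identity covariance is elementary linear algebra; a sketch for an illustrative symmetric step distribution of our own choosing (not one appearing in the paper), using the closed form for the square root of a 2x2 positive definite matrix:

```python
import math

# an illustrative symmetric step distribution on Z^2: p(x) = p(-x)
p = {(1, 0): 1/8, (-1, 0): 1/8, (0, 1): 1/8, (0, -1): 1/8,
     (1, 1): 1/4, (-1, -1): 1/4}

# covariance matrix Gamma_ij = sum_x p(x) x_i x_j (the mean is 0 by symmetry)
g11 = sum(q * x * x for (x, y), q in p.items())
g12 = sum(q * x * y for (x, y), q in p.items())
g22 = sum(q * y * y for (x, y), q in p.items())

# unique SPD square root of a 2x2 SPD matrix G:
#   sqrt(G) = (G + sqrt(det G) I) / sqrt(tr G + 2 sqrt(det G))
d = math.sqrt(g11 * g22 - g12 * g12)
s = math.sqrt(g11 + g22 + 2 * d)
a11, a12, a22 = (g11 + d) / s, g12 / s, (g22 + d) / s

# A^{-1}, which maps the walk S to A^{-1}S with identity covariance
det_a = a11 * a22 - a12 * a12
i11, i12, i22 = a22 / det_a, -a12 / det_a, a11 / det_a

# covariance of the transformed steps A^{-1}X
c11 = sum(q * (i11 * x + i12 * y) ** 2 for (x, y), q in p.items())
c12 = sum(q * (i11 * x + i12 * y) * (i12 * x + i22 * y) for (x, y), q in p.items())
c22 = sum(q * (i12 * x + i22 * y) ** 2 for (x, y), q in p.items())
```

The transformed covariance is the identity, since A^{-1} Γ A^{-1} = A^{-1} A^2 A^{-1} = I.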

A note about constants
For the entirety of the paper, we will use the letters c and C to denote constants that may change from line to line but will only depend on the random walk S (which will be fixed throughout).
Given two functions f(n) and g(n), we write f(n) ≈ g(n) if

lim_{n→∞} log f(n) / log g(n) = 1,

and f(n) ≍ g(n) if there exists 0 < C < ∞ such that for all n,

C^{-1} g(n) ≤ f(n) ≤ C g(n).

If f(n) → ∞ and g(n) → ∞, then f(n) ≍ g(n) implies that f(n) ≈ g(n), but the converse does not hold.
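A standard example separating the two relations is f(n) = n log n and g(n) = n: the log-ratio tends to 1 while the plain ratio is unbounded, so f ≈ g but not f ≍ g. A quick numerical check:

```python
import math

f = lambda n: n * math.log(n)
g = lambda n: float(n)

# (log f / log g, f / g) for increasing n: the first tends to 1,
# the second to infinity
ratios = [(math.log(f(n)) / math.log(g(n)), f(n) / g(n))
          for n in (10 ** 3, 10 ** 6, 10 ** 9)]
```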

Subsets of C and Λ
Recall that our discrete lattice Λ and our random walk S with distribution p are fixed throughout.
Given z ∈ C, let D_r(z) = {w ∈ C : |w − z| < r} be the open disk of radius r centered at z in C, and B_n(z) = {x ∈ Λ : |x − z| < n} be the ball of radius n centered at z in Λ. We write D_r for D_r(0), B_n for B_n(0) and let D = D_1 be the unit disk in C.
We use the symbol ∂ to denote both the usual boundary of subsets of C and the outer boundary of subsets of Λ, where the outer boundary of a set K ⊂ Λ (with respect to the distribution p) is

∂K = {x ∈ Λ \ K : there exists y ∈ K such that p(x, y) > 0}.

The context will make it clear whether we are considering a given set as a subset of C or of Λ. We will also sometimes consider the inner boundary

∂_i K = {x ∈ K : there exists y ∈ Λ \ K such that p(x, y) > 0}.
We let K̄ = K ∪ ∂K and K° = K \ ∂_i K.
A path with respect to the distribution p is a sequence of points ω = [ω_0, ω_1, . . . , ω_k] ⊂ Λ such that p(ω_j, ω_{j+1}) > 0 for j = 0, . . . , k − 1. We say that a set K ⊂ Λ is connected (with respect to the distribution p) if for any pair of points x, y ∈ K, there exists a path ω ⊂ K connecting x and y. Given l ≤ m ≤ n, let Ω_l be the set of paths ω = [0, ω_1, . . . , ω_k] ⊂ Λ such that ω_j ∈ B_l, j = 1, . . . , k − 1, and ω_k ∈ ∂B_l. Let Ω_{m,n} be the set of paths λ = [λ_0, λ_1, . . . , λ_{k′}] such that λ_0 ∈ ∂B_m, λ_j ∈ A_{m,n}, j = 0, 1, . . . , k′ − 1, and λ_{k′} ∈ ∂B_n, where A_{m,n} denotes the annulus B_n \ B_m.

Basic facts about Brownian motion and random walks
Throughout this paper, W_t, t ≥ 0, will denote a standard complex Brownian motion. Given a set K ⊂ Λ, let

σ_K = min{j ≥ 0 : S_j ∉ K} and σ̄_K = min{j ≥ 1 : S_j ∉ K}

be first exit times of the set K. We also let

ξ_K = min{j ≥ 0 : S_j ∈ K} and ξ̄_K = min{j ≥ 1 : S_j ∈ K}

be the first hitting times of the set K. We let σ_n = σ_{B_n} and use a similar convention for σ̄_n, ξ_n and ξ̄_n. We also define the following stopping times for Brownian motion: for a set D ⊂ C,

τ_D = inf{t ≥ 0 : W_t ∈ ∂D},

and we write τ_r for τ_{D_r}. Depending on whether the Brownian motion is started inside or outside D, τ_D will be either an exit time or a hitting time.
Suppose that X is a Markov chain on Λ and that K ⊂ Λ. Let σ^X_K be the first exit time of K by X. For x, y ∈ K, we let

G^X_K(x, y) = E^x [ #{0 ≤ j < σ^X_K : X_j = y} ],

the expected number of visits to y made by X before leaving K, denote the Green's function for X in K. We will sometimes write G^X(x, y; K) for G^X_K(x, y) and also abbreviate G^X_K(x) for G^X_K(x, x). When X = S is a random walk, we will omit the superscript S.
Recall that a function f defined on K̄ ⊂ Λ is discrete harmonic on K (with respect to the distribution p) if for all z ∈ K,

f(z) = Σ_{y ∈ Λ} p(z, y) f(y).

For any two disjoint subsets K_1 and K_2 of Λ, it is easy to verify that the function

h(z) = P^z ( ξ_{K_1} < ξ_{K_2} )

is discrete harmonic on Λ \ (K_1 ∪ K_2). The following important theorem concerning discrete harmonic functions will be used repeatedly in the sequel [16, Theorem 6.3.9].

Theorem 2.1 (Discrete Harnack inequality). Let U be a domain in C and A a compact subset of U. There exist C < ∞ and N such that for all n ≥ N, if f is a positive discrete harmonic function on nU ∩ Λ, then

f(x) ≤ C f(y)

for all x, y ∈ nA ∩ Λ.
Suppose that X is a Markov chain with hitting times ξ^X_K = min{j ≥ 0 : X_j ∈ K}.
Given two disjoint subsets K_1 and K_2 of Λ, let Y be X conditioned to hit K_1 before K_2 (as long as this event has positive probability). Then if we let h(z) = P^z ( ξ^X_{K_1} < ξ^X_{K_2} ), the chain Y is a Markov chain with transition probabilities

p_Y(x, y) = p_X(x, y) h(y)/h(x).

Using this fact, the following lemma follows readily.

Lemma 2.2. Suppose that X is a Markov chain and let Y be X conditioned to hit K_1 before K_2, and let K = Λ \ (K_1 ∪ K_2). Then for any x, y ∈ K,

G^Y_K(x, y) = (h(y)/h(x)) G^X_K(x, y).

In particular, G^Y_K(x) = G^X_K(x).
Finally, we recall an important theorem concerning the intersections of random walks and Brownian motions with continuous curves.

Theorem 2.3 (Beurling estimates).

1. There exists a constant C < ∞ such that the following holds. Suppose that α : [0, t_α] → C is a continuous curve such that α(0) = 0 and α(t_α) ∈ ∂D_r. Then if z ∈ D_r,

P^z ( W[0, τ_r] ∩ α[0, t_α] = ∅ ) ≤ C (|z|/r)^{1/2}.

2. There exists a constant C < ∞ such that the following holds. Suppose that ω is a path from the origin to ∂B_n. Then if z ∈ B_n,

P^z ( S[0, σ_n] ∩ ω = ∅ ) ≤ C (|z|/n)^{1/2}.

Proof. The statement about Brownian motion can be found, for example, in [14, Theorem 3.76]. The statement about random walks was originally proved in [8]; a formulation that is closer to the one given above can be found in [15].
Loop-erased random walk

Given a path λ = [λ_0, λ_1, . . . , λ_m], its chronological loop-erasure L(λ) is defined as follows. Let s_0 = max{j : λ_j = λ_0} and, for i ≥ 0, let s_{i+1} = max{j : λ_j = λ_{s_i + 1}}. Let n = inf{i : s_i = m}. Then L(λ) = [λ_{s_0}, λ_{s_1}, . . . , λ_{s_n}]. Note that one may obtain a different result if one performs the loop-erasing procedure backwards instead of forwards. In other words, if we let λ^R = [λ_m, . . . , λ_0], then in general, L(λ^R) ≠ L(λ)^R. However, if λ has the distribution of a random walk, then L(λ^R) has the same distribution as L(λ)^R [10, Lemma 7.2.1]. Now suppose that S is a random walk on Λ and K is a proper subset of Λ. We define the LERW Ŝ^K to be the process

Ŝ^K = L(S[0, σ_K]).

In other words, we run S up to the first exit time of K and then erase loops. We write Ŝ^n for Ŝ^{B_n}. We also define the following stopping times: given A ⊂ K, we let σ^K_A denote the first exit time of A by the LERW Ŝ^K. If either A or K is a ball B_n, we replace A or K by n in the subscript or superscript. Different sets K will produce different LERWs Ŝ^K, but one can define an "infinite LERW" as follows. For ω ∈ Ω_l and n > l, let

µ_{l,n}(ω) = P ( Ŝ^n[0, σ^n_l] = ω ).
One can show that for each ω the limit µ_l(ω) = lim_{n→∞} µ_{l,n}(ω) exists. The µ_l are consistent and therefore there exists a measure µ on infinite self-avoiding paths. We call the associated process the infinite LERW and denote it by Ŝ. In this paper, we will consider both the infinite LERW Ŝ, and LERWs Ŝ^K obtained by stopping a random walk at the first exit time of K and then erasing loops.
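The asymmetry of loop erasure under time reversal noted above can be seen on a short nearest-neighbour path (an illustration of our own construction):

```python
def loop_erase(path):
    """Chronological loop-erasure L(path)."""
    out, index = [], {}
    for p in path:
        if p in index:              # p closes a loop: erase it
            k = index[p]
            for q in out[k + 1:]:
                del index[q]
            del out[k + 1:]
        else:
            index[p] = len(out)
            out.append(p)
    return out

# a nearest-neighbour path in Z^2 that traces a square and steps back up
lam = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (0, 1)]
forward = loop_erase(lam)           # L(lambda)
backward = loop_erase(lam[::-1])    # L(lambda^R)
```

Here L(λ) = [(0,0), (0,1)] while L(λ^R) traverses the other three sides of the square, so L(λ^R) ≠ L(λ)^R for this individual path, even though the two have the same distribution when λ is a random walk.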
Suppose that X is a Markov chain and ω = [ω_0, . . . , ω_k] is a path in Λ with respect to p_X. One can write down an exact formula for the probability that the first k steps of the loop-erased process X̂^K are equal to ω. Letting A_j = {ω_0, . . . , ω_j}, j = 0, . . . , k, A_{−1} = ∅, and G^X(·;·) be the Green's function for X, we define

Λ_X(ω; K) = Π_{j=0}^{k} G^X(ω_j; K \ A_{j−1}).

Then [13],

P ( X̂^K[0, k] = ω ) = Λ_X(ω; K) [ Π_{j=0}^{k−1} p_X(ω_j, ω_{j+1}) ] P^{ω_k} ( σ^X_K < ξ̄^X_{A_k} ).    (5)

We can use the previous formula to show that while LERW is certainly not a Markov chain, it does satisfy the following "domain Markov property": for any Markov chain X, if we condition the initial part of X̂^K to be equal to ω, the rest of X̂^K can be obtained by running X conditioned to avoid ω and then loop-erasing.
Lemma 2.4 (Domain Markov Property). Let X be a Markov chain, K ⊂ Λ and ω = [ω_0, ω_1, . . . , ω_k] be a path in K (with respect to p_X). Define a new Markov chain Y to be X started at ω_k conditioned on the event that X exits K before returning to A_k. Then, conditioned on the event {X̂^K[0, k] = ω}, the law of the remainder [X̂^K_k, X̂^K_{k+1}, . . .] of the loop-erased process is that of the loop-erasure Ŷ^K of Y.

Proof. Let G^X(·;·) and G^Y(·;·) be the Green's functions for X and Y respectively. By Lemma 2.2, the diagonal Green's functions of X and Y coincide, and the result then follows from formula (5).
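Identities of this type, expressing loop-erasure probabilities through transition probabilities, diagonal Green's functions, and an escape probability, can be checked by hand on tiny chains. A toy computation for simple random walk on Z with K = {0, 1} (all quantities below are elementary gambler's-ruin and Green's function values, computed for illustration):

```python
from fractions import Fraction as F

half = F(1, 2)

# Simple random walk on Z, K = {0, 1}. Internal transition matrix
# P = [[0, 1/2], [1/2, 0]]; Green's function G = (I - P)^{-1}.
det = 1 - half * half
G = [[1 / det, half / det], [half / det, 1 / det]]   # G(0; K) = 4/3

# Probability that the first two points of the loop-erasure are [0, 1]:
# this happens iff the walk exits {0, 1} to the right, i.e. hits +2
# before -1 starting from 0, a gambler's-ruin probability of 1/3.
lhs = F(1, 3)

# Product form: step probability p(0, 1) = 1/2, diagonal Green's
# functions G(0; K) = 4/3 and G(1; K \ {0}) = 1, and the escape
# probability P^1(exit K before returning to {0, 1}) = 1/2.
rhs = G[0][0] * F(1) * half * half
```

Both sides equal 1/3, consistent with a last-exit decomposition of the walk at its final visits to 0 and 1.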

Schramm-Loewner evolution
In this subsection, we give a brief description of Schramm-Loewner evolution. For a much more thorough introduction to SLE, see for instance [14] or [29]. Suppose that γ : [0, ∞] → D̄ is a simple continuous curve such that γ(0) ∈ ∂D, γ(0, ∞] ⊂ D and γ(∞) = 0. Then by the Riemann mapping theorem, for each t ≥ 0, there exists a unique conformal map g_t : D \ γ(0, t] → D such that g_t(0) = 0 and g′_t(0) > 0. The quantity log g′_t(0) is called the capacity of D \ γ(0, t] from 0. By the Schwarz Lemma, g′_t(0) is increasing in t and therefore one can reparametrize γ so that g′_t(0) = e^t; this is the capacity parametrization of γ. For each t ≥ 0, one can verify that

U_t = lim_{z → γ(t)} g_t(z)

exists and is continuous as a function of t. Also, g_t and U_t satisfy Loewner's equation

∂_t g_t(z) = g_t(z) (U_t + g_t(z)) / (U_t − g_t(z)),   g_0(z) = z.    (6)

Therefore, given a simple curve γ as above, one produces a curve U_t on the unit circle satisfying (6). One calls U_t the driving function of γ.
The idea behind the Schramm-Loewner evolution is to start with a driving function U t and use that to generate the curve γ. Indeed, given a continuous curve U : [0, ∞] → ∂D and z ∈ D, one can solve the ODE (6) up to the first time T z that g t (z) = U t . If we let K t = {z ∈ D : T z ≤ t} then one can show that g t is a conformal map from D \ K t onto D such that g t (0) = 0 and g ′ t (0) = e t . We note that there does not necessarily exist a curve γ such that K t = γ[0, t] as was the case above.
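The passage from driving function to hull can be sketched numerically with a simple Euler scheme (step sizes, tolerances and names are our own choices; this is an illustration, not a rigorous solver):

```python
import cmath

def radial_loewner(z0, driving, t_max, dt=1e-4):
    """Euler scheme for the radial Loewner ODE
        dg/dt = g (U_t + g) / (U_t - g),   g_0(z) = z,
    stopped early if g approaches the driving point, i.e. if z is
    swallowed by the hull K_t before time t_max."""
    g, t = complex(z0), 0.0
    while t < t_max:
        u = driving(t)
        if abs(g - u) < 1e-3:       # z has been absorbed into K_t
            return t, None
        g += dt * g * (u + g) / (u - g)
        t += dt
    return t, g

# Constant driving U_t = 1: the hull is a slit growing from 1 toward 0.
# The capacity parametrization forces g_t'(0) = e^t, so for a small
# starting point z0 we expect g_t(z0) to be close to z0 * e^t.
t_end, g = radial_loewner(1e-3, lambda t: 1.0, t_max=0.5)
ratio = g / (1e-3 * cmath.exp(0.5))   # close to 1
```

Replacing the constant driving function by U_t = exp(i B_{κt}) for a Brownian motion B gives a numerical approximation to radial SLE_κ.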
The radial Schramm-Loewner evolution arises from a special choice of driving function: let U_t = e^{i B_{κt}}, where B_t is a standard one-dimensional Brownian motion. The resulting random maps g_t and sets K_t are called radial SLE_κ. It is possible to show that with probability 1, there exists a curve γ such that for each t ≥ 0, D \ K_t is the connected component of D \ γ[0, t] containing 0 (see [22] for the case κ ≠ 8 and [20] for κ = 8). In [22] it was shown that if κ ≤ 4 then γ is a.s. a simple curve and if κ > 4, γ is a.s. not a simple curve. One refers to γ as the radial SLE_κ curve. One defines radial SLE_κ in other simply connected domains to be such that SLE_κ is conformally invariant. Given a simply connected domain D′ ≠ C, z ∈ D′ and w ∈ ∂D′, there exists a unique conformal map f : D → D′ such that f(0) = z and f(1) = w. Then SLE_κ in D′ from w to z is defined to be the image under f of radial SLE_κ in D from 1 to 0.
We will focus on the case κ = 2, and throughout, γ : [0, ∞] → D̄ will denote the radial SLE_2 curve in D from 1 to 0. We conclude this section with precise statements of the two facts about SLE_2 that were mentioned in the introduction: the intersection exponent for SLE_2 and the weak convergence of LERW to SLE_2.
Theorem 2.5 (Lawler, Schramm, Werner [18]). Let γ be radial SLE_κ from 1 to 0 in D and for 0 < r < 1, let τ_r be the first time γ enters the disk of radius r. Let W be an independent complex Brownian motion started at 0. Then there exists ν = ν(κ) such that, as r → 0,

P ( W[0, τ_D] ∩ γ[0, τ_r] = ∅ ) ≈ r^ν.

In particular, ν = 3/4 for SLE_2.
In order to state the convergence of LERW to SLE 2 we require some notation. Let Γ denote the set of continuous curves α : [0, t α ] → D (we allow t α to be ∞) such that α(0) ∈ ∂D, α(0, t α ] ⊂ D and α(t α ) = 0. We can make Γ into a metric space as follows.
For α, β ∈ Γ, let

d(α, β) = inf_θ sup_{0 ≤ t ≤ t_α} |α(t) − β(θ(t))|,

where the infimum is taken over all continuous, increasing bijections θ : [0, t_α] → [0, t_β]. Note that d is a pseudo-metric on Γ, and is a metric if we consider two curves to be equivalent if they are the same up to reparametrization. Extend Ŝ^n to a continuous curve by linear interpolation, so that the time reversal (n^{−1}Ŝ^n)^R of n^{−1}Ŝ^n is in Γ.

Theorem 2.6 (Lawler, Schramm, Werner [20]). Let γ be radial SLE_2. Then for every bounded continuous function f on Γ,

lim_{n→∞} E [ f((n^{−1}Ŝ^n)^R) ] = E [ f(γ) ].
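For polygonal curves, the infimum over reparametrizations can be approximated by the discrete Fréchet distance, computed by a standard dynamic program (an illustration of the metric's definition under our own naming, not an algorithm used in the paper):

```python
import math

def discrete_frechet(a, b):
    """Discrete Frechet distance between polygonal curves given as
    point lists: a dynamic-programming analogue of
        d(alpha, beta) = inf_theta sup_t |alpha(t) - beta(theta(t))|."""
    n, m = len(a), len(b)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(a[i], b[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1],
                                   ca[i - 1][j - 1]), d)
    return ca[n - 1][m - 1]

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]
d_ab = discrete_frechet(a, b)   # parallel segments at distance 1
```

Note that, as in the definition of d, reparametrizations are monotone, so a curve and its time reversal can be far apart even when they trace the same set of points.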

Some results for random walks
In this section we establish some technical lemmas concerning random walks that will be used repeatedly in the sequel.

Hitting probabilities and Green's function estimates
Recall that ξ_K is the first hitting time of the set K and G(·; Λ \ K) is the Green's function in the set Λ \ K.
Proof. We begin by showing that for any K ⊂ Λ, z ∈ Λ \ K and y ∈ ∂_i K,

To prove this, we proceed as in the proof of [10, Lemma 2.1.1]. Let

Note that τ is not a stopping time. However, since τ < ξ_K,

Applying the previous equality to K = K_1 ∪ K_2, we get that

By reversing paths, one sees that

Thus,

However, by reversing paths yet again,

which completes the proof of the lemma.

Lemma 3.2.

1. There exists c > 0 and N such that for all l ≥ N the following holds. Suppose that K ⊂ Λ contains a path connecting 0 to ∂B_l. Then for any x ∈ B_l,

2. There exists c > 0 and N such that for all N ≤ 2l < n, the following holds. Suppose that K ⊂ Λ contains a path connecting ∂B_{2l} to ∂B_n. Then for any

Proof. Proof of (1): We assume that N is sufficiently large so that for all l ≥ N, each of the steps below works. First of all, we may assume that z ∈ B_{l/4} since if z ∈ B_l,

If p is the distribution of the random walk S, let m = max{|x| : p(x) > 0}.
Since K connects 0 to ∂B_l, there exists a subset K′ of K such that for each i = 1, . . . , ⌊l/m⌋, there is exactly one point

It is clear that if the lemma holds for K′ then it will hold for K. Therefore, we assume that K has this property. By [16, Proposition 6.3.5], there exists a constant C such that if z ∈ B_l,

Therefore, if y, z ∈ B_l with |z − y| < l/2, and l is large enough,

Let V be the number of visits to K before leaving B_{2l}. Then for any z ∈ B_{l/4}, since there are at least l/(4m) points within distance l/2 from z,

Also, since there are at most 2j/m points in K within distance j from z ∈ B_l,

Therefore, for any x ∈ B_l,

Proof of (2): We again let N be large enough so that if l ≥ N the following steps work. For x ∈ ∂B_{2l}, there exists c > 0 such that for all l large enough,

Therefore, we may assume that n > 4l. We will show that if K ⊂ Λ contains a path connecting ∂B_{2l} to ∂B_{4l}, then

It suffices to show that for all z, y ∈ B_{4l} \ B_{2l},

For if we can show (7), then we can proceed as in the proof of (1).
To prove the left inequality, we note that for z ∈ ∂B_{l/4}(y),

and by approximation by Brownian motion, one can bound the latter from below by a uniform constant. We now prove the right inequality in (7). By the monotone convergence theorem,

However, since B_m \ B_l is a finite set, we can apply [16, Proposition 4.6.2] which states that

where a denotes the potential kernel. By [16, Theorem 4.4.3],

Therefore,

However, because |z| < 4l, a standard estimate [16, Proposition 6.4.1] shows that

Therefore,

Lemma 3.3. There exists C < ∞ and N such that for all N ≤ 2l ≤ n, the following holds. Suppose that K ⊂ Λ contains a path connecting ∂B_{2l} to ∂B_n. Then for any

Proof. Without loss of generality, we may assume that K ⊂ Λ \ B_{2l}. In that case, σ_{2l} < ξ_K ∧ σ_n for all walks started in B_l and therefore,

However, by Lemma 3.2, for any w ∈ ∂B_{2l},

Lemma 3.4. There exists c > 0 and N such that for N ≤ 2l ≤ n the following holds. Suppose K ⊂ Λ \ B_{2l} contains a path connecting ∂B_{2l} to ∂B_n. Then for z ∈ B_l,

Proof. To begin with, we claim that it suffices to show that for z ∈ ∂B_l such that

To see this, note that

Therefore it suffices to show that for all z ∈ B_l,

However, for z ∈ B_l,

Furthermore, by the discrete Harnack inequality, for any y, y′ ∈ ∂B_l,

Therefore, the lemma will follow once we prove (8).
Let z ∈ ∂B_l be such that

Then,

By Lemma 3.2, for any w ∈ ∂B_{2l},

Thus,

which completes the proof.

Random walks conditioned to avoid certain sets
Proposition 3.5. There exist constants N and c > 0 such that for all n ≥ N the following holds. Suppose that K ⊂ Λ \ B_n(n, 0) where B_n(n, 0) denotes the ball of radius n centered at (n, 0) (see Figure 2). Then,

Proof. For z ∈ D, let h(z) = P^z ( W(τ_D) ∈ {e^{it} : |t| ≤ π/4} ), where W denotes standard two-dimensional Brownian motion. Then h is the solution to the Dirichlet problem with boundary value 1_{[−π/4,π/4]}. Therefore, we can express h as

h(z) = (1/2π) ∫_{−π/4}^{π/4} P(z, e^{it}) dt,

where

P(z, w) = (1 − |z|^2) / |z − w|^2

is the Poisson kernel for the unit disk. One can compute h (it is easier to consider the problem on H and then map back via a conformal transformation):

We now establish three basic facts about h that we will use below.
1. Let D 1 (1) be the disk of radius 1 centered at the point 1. We claim that for all (1)). Thus, to prove the claim, it suffices to show that takes its maximal value at t = π for 2π/3 ≤ t ≤ 4π/3. Since one has an explicit formula for h, this is left as an exercise for the reader or the reader's Calculus students.
These results can also be obtained from the explicit formula for h.
Assume that n is large enough so that B_{rn} ⊂ nD where r is as in the previous paragraph. We let h_n(z) = h(z/n), which is harmonic in nD. Then for z ∈ B_{rn}, define

h̃_n(z) = E^z [ h_n(S(σ_{B_{rn}})) ].

Then h̃_n is discrete harmonic in B_{rn} and agrees with h_n on ∂B_{rn}.
A natural question to ask is how closely the discrete harmonic function h̃_n approximates the continuous harmonic function h_n. By [16, Corollary 6.2.4], for all z ∈ B_{rn},

By Taylor's theorem, for any C^4 function f and z ∈ Λ,

| Σ_y p(z, y) f(y) − f(z) − Lf(z) | ≤ C R^4 M_4(f),

where L = (1/2) Σ_{i,j} Γ_{ij} ∂_i ∂_j, R is the range of the walk S and M_4(f) is the L^∞ norm of the sum of the fourth derivatives of f in the disk D_R(z).
Since the random walk S has covariance matrix the identity (we have been assuming that S has this property but this is the first place we use it), one can show that L is actually a multiple of the continuous Laplacian. Thus, Lh_n = 0. Furthermore, since the fourth derivatives of h are bounded on rD, M_4(h_n) is bounded by Cn^{-4} in B_{rn}. Therefore, combining all the previous remarks (and letting CR^4 = C since R depends only on the random walk S which we've fixed), we obtain that for z ∈ B_{rn},

We now have all the pieces we need to prove the proposition. Let z be any point in B_{rn} \ B_n(n, 0), and fix x ∈ Λ such that Re(x) > 0. Then by Taylor's theorem and our previous observations about h, if n is large enough so that x is in B_{rn},

where M_2(h) is the L^∞ norm of the sum of the second order derivatives of h in rD.
Since ∂h/∂x(0) > 0, it is clear that for n sufficiently large,

Thus,

This implies that for n sufficiently large,

since h(0) = 1/4. Recall that r was defined so that for all z such that r < |z| < 1 and |arg(z)| > π/3, h(z) < 1/8. Therefore,

Since x is independent of K and n,

and hence,

Finally,

Lemma 3.6. For 0 < θ < π, there exist c(θ), N(θ) and α(θ) such that the following holds. For n > N, and z ∈ Λ with N < |z| < n, let W be the wedge

W = {w ∈ C : |arg(w) − arg(z)| < θ/2}.

Then,

P^z ( S[0, σ_n] ⊂ W ) ≥ c (|z|/n)^α.

Remark. By comparison with Brownian motion, one expects that α(θ) = π/θ would be the optimal constant. However, in this paper we will only need the existence of α and not its exact value.
Proof. It is clear that we can make α(θ) non-increasing in θ, therefore, without loss of generality, take θ < π/2. Also, without loss of generality, assume arg(z) = 0. Let W be the cone We define a random sequence of points {z k } ⊂ W as follows. We let z 0 = z. Then, given z k , we let B k be the largest ball centered at z k such that B k ⊂ W , r k be the radius of B k and let z k+1 = S(σ B k ) where S is a random walk starting at z k .
We note that z j = z k for all j ≥ k if and only if z k ∈ ∂ i W . We make N (θ) large enough to ensure that if |z| > N then z / ∈ ∂ i W . In this case, there exists c ′ (θ) > 0 such that r 0 ≥ c ′ (θ) |z|.
Let E k denote the event that z k+1 = z k and that On the event E k , r k+1 ≥ (1 + 2 sin(θ/4))r k = c(θ)r k , and |z k+1 | 2 ≥ |z k | 2 + r 2 k (we use the fact that θ < π/2 for the second assertion). Therefore, if E 0 , . . . , E j all hold, then r k ≥ c k r 0 ≥ c k c ′ |z| for k = 1, . . . , j. Therefore, Since c > 1, it follows that if we let j be the smallest integer such that Finally, by the invariance principle, there exists a constant c ′′ (θ) and N such that for n ≥ N , P (E k ) ≥ c ′′ for all k. Therefore, Corollary 3.7. Fix θ 1 , θ 2 ∈ (0, π/2). There exist N , α and c > 0 depending only on θ 1 + θ 2 such that the following holds. Let N ≤ l < m < n, and z ∈ ∂B m . Let W be the half-wedge 1. Let r = min{m sin θ 1 , m sin θ 2 , m − l}. Then for any K ⊂ B m , 2. Let r ′ = min{m sin θ 1 , m sin θ 2 , n − m}. There exists β = β(θ 1 + θ 2 , l/m) such that for any K ⊂ Λ \ B m , Notice that in both cases, the right hand side depends only on θ 1 , θ 2 , and the ratios l/n and m/n.
Proof. Both parts of the corollary are proved similarly. We prove 1 in detail, and indicate the modifications needed to prove 2. Without loss of generality, assume that arg(z) = 0. The quantity r defined in the statement of the corollary is the radius of the largest ball with center z whose closure is contained in the half-infinite wedge We can apply Proposition 3.5 to the ball B = B(z, r) to obtain that there exists a constant c > 0 such that Let y be any point on ∂B such that |arg(y − z)| ≤ π/4, and let B̃ = B(y, r/2). Note that B̃ ⊂ W \ B_m. There exists a point w ∈ ∂B̃ such that y is on the bisector of the angle formed from the lines joining w to the two outermost "corners" of W; let W̃ be the wedge with vertex w, radius s and such that x_1 and x_2 are on ∂W̃.
The wedge W̃ will have aperture θ̃ ≥ (θ_1 + θ_2)/2 and y will be on the axis of symmetry of W̃. Therefore, by Lemma 3.6,
To finish the proof of 1, let c_* = c(r/n)^α. Then, The proof of 2 is similar. In this case, the angle θ̃ of the wedge W̃ will be such that This is why β will also depend on l/m. Apart from this observation, the proof of 2 is identical to the proof of 1.
The following corollary is similar to the previous one, except that here we are conditioning to avoid sets that are on either side of the half-wedge.
Suppose that K_1 ⊂ B_n contains a path connecting ∂B_{an} to ∂_i B_n, and K_2 ⊂ Λ \ B_{4n} contains a path connecting ∂B_{4n} to ∂B_{4bn}. Let K = K_1 ∪ K_2. Then for any z ∈ ∂B_n, y ∈ ∂B_{4n} with |arg(z)| < θ/2 and |arg(y)| < θ/2, there exists a constant c = c(a, θ) such that Now suppose that w ∈ ∂B_{2n} ∩ W^*. Then by Lemma 3.1, However, for w ∈ ∂B_{2n} ∩ W^*, G(w; W \ ({y} ∪ K)) ≥ G(w; B_{c(θ)n}(w)), and therefore by Lemma 3.3, Thus, By the strong Markov property, However, by Lemma 3.4, there exists c(θ, b) such that for x ∈ ∂_i B_{3n} ∩ W^*, Furthermore, by the discrete Harnack inequality, there exists c > 0 such that for all Similarly, Therefore, using part 2 of Corollary 3.7,

Random walk approximations to hitting probabilities of curves by Brownian motion
Given a random walk S on a discrete lattice Λ, we can make S into a continuous curve S_t by linear interpolation and define S^{(n)}_t = n^{−1}S_{n²t}. Now fix a continuous curve α : [0, t_α] → D. In this section, we will compare the probability that a Brownian motion W_t started at the origin leaves the unit disk before hitting α to the probability that S^{(n)}_t started at the origin leaves the unit disk before hitting α. By the invariance principle, one can show that as n tends to infinity, the latter probability approaches the former. What is more difficult is to show that this convergence is uniform in α as long as the diameter of α is sufficiently large. This is Proposition 3.12, the main result of the section.
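The diffusive rescaling S^{(n)}_t = n^{−1}S_{n²t} can be sketched in code. This is an illustration of the scaling only; the function names are ours, and the walk is one-dimensional for simplicity.

```python
def interpolate(walk, s):
    """Evaluate the linearly interpolated walk S_s at a real time s >= 0."""
    i = int(s)
    if i >= len(walk) - 1:
        return float(walk[-1])
    frac = s - i
    return walk[i] + frac * (walk[i + 1] - walk[i])

def scaled_walk(walk, n, t):
    """The diffusively rescaled curve S^(n)_t = n^{-1} S_{n^2 t}."""
    return interpolate(walk, n * n * t) / n
```

Under this scaling, time is sped up by n² while space is shrunk by n, which is exactly the regime in which the invariance principle compares S^{(n)} to Brownian motion.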
For 0 < δ < 1, let A_δ denote the annulus Given a curve α : Recall that D_δ(z) is the disk of radius δ centered at z.
We construct a continuous curve ω as follows. Given any z = e^{iθ} ∈ ∂D, let We let ω(0) = α(t_1), then follow the curve α from α(t_1) to α(t_2) (we might be following the curve backwards), then the ray r_δ(z_2) from α(t_2) to z_2, then ∂D clockwise from z_2 to z_1, and finally the ray r_δ(z_1) back to α(t_1). See Figure 6. From the definition of the t_k and z_k, and the fact that α is simple, ω is a closed simple curve. Therefore, by the Jordan curve theorem, ω separates the plane into two disjoint connected components. Furthermore, because θ_2 − θ_1 < 2π, the winding number of ω with respect to 0 is 0. Therefore, 0 is in the unbounded component defined by ω.
Now suppose that θ_2 − θ_1 ≥ 2π. Let z_1 = e^{iθ_1} and z_2 = e^{iθ_2}. In order to prove the lemma for this case, we claim that it suffices to show that either r_δ(z_1) ∩ α[0, t_α] or r_δ(z_2) ∩ α[0, t_α] contains two points whose arguments differ by a nonzero multiple of 2π. For suppose that w_1 = |w_1|e^{iθ_1} = α(s_1) and w_2 = |w_2|e^{iθ_1} = α(s_2) are such that Arg(w_2) − Arg(w_1) = 2kπ, k ≠ 0, and w_1 and w_2 are chosen so that arg(α(t)) = θ_1 for t between s_1 and s_2. Also, without loss of generality, |w_1| < |w_2|. Then we can consider the curve ω that starts at w_1, follows α from w_1 to w_2, and then returns to w_1 along the ray r_δ(z_1) (see Figure 7). By construction, ω is a closed simple curve whose winding number about 0 is nonzero. Therefore, the bounded component determined by ω contains 0, and since ω ⊂ D, ω disconnects 0 from ∂D. This shows that r_δ(z_1) ∪ α[0, t_α] ∪ r_δ(z_2) disconnects 0 from ∂D, from which the lemma follows.
Figure 5: The set C_δ and the points z_1, z_2 and α(t_1), α(t_2).
Figure 6: The curve ω in the case θ_2 − θ_1 < 2π.
In order to show that either r_δ(z_1) ∩ α[0, t_α] or r_δ(z_2) ∩ α[0, t_α] contains two points whose arguments differ by a nonzero multiple of 2π, we let r_k and t_k be such that r_k = sup{|α(t)| : Arg(α(t)) = θ_k} and α(t_k) = r_k e^{iθ_k}, k = 1, 2. We assume to the contrary that both {re^{iθ_1} : r_1 < r ≤ 1} ∩ α[0, t_α] = ∅ and {re^{iθ_2} : r_2 < r ≤ 1} ∩ α[0, t_α] = ∅. Then we can define two curves ω_1 and ω_2 as follows. ω_1 starts at r_1 e^{iθ_1}, travels along α to r_2 e^{iθ_2}, follows the ray r_δ(z_2) to z_2, then travels along ∂D clockwise to z_1, and finally returns to r_1 e^{iθ_1} along r_δ(z_1). We define ω_2 in the same way except that we travel along ∂D counterclockwise. Then by our assumptions, ω_1 and ω_2 are both closed simple curves with nonzero winding number about 0, and hence both enclose the origin, a contradiction.
Proof. Let T be large enough so that P_0{τ_D > T} < ǫ. By the strong approximation of Brownian motion by random walk [16, Theorem 3.5.1], there exists a sequence S^n of random walks defined on the same probability space as W so that if S^{(n)}_t = n^{−1}S^n_{n²t} is defined as above, then almost surely, Therefore, we can let N be such that for n ≥ N, where C is the larger of the constants in the Beurling estimates (Theorem 2.3). Now fix n ≥ N and let τ_* = τ_α ∧ τ_D and σ_* = ξ_α ∧ σ_D. Suppose first that τ_* < σ_*, and let W_{τ_*} = w, S^{(n)}_{τ_*} = z. Suppose further that τ_D < T and that Then on this event, |z − w| ≤ C^{−2}ǫ²r. Since both α and ∂D are continuous curves, by the Beurling estimates, letting D̃ = D_r(w), The case where σ_* < τ_* is proved in the same way, using the Beurling estimates for Brownian motion.
2. For all ǫ > 0, there exist δ > 0 and N such that for all n ≥ N and α : [0, t_α] → D, Proof. By Lemma 3.9, there exist z_1, z_2 ∈ ∂D such that However, by rotational symmetry of Brownian motion, is the same for all z ∈ ∂D. Since D_δ(z) shrinks to a single point as δ tends to 0, the right-hand side above can be made less than ǫ by making δ small enough. This proves 1. The proof of 2 is the same as that of 1, except that we cannot use any sort of rotational symmetry. Therefore, we must show that there exist δ > 0 and N such that for all n ≥ N and z ∈ ∂D, Let δ > 0 be small enough so that for all z ∈ ∂D, where τ_2 is the hitting time of the circle of radius 2 by the Brownian motion W. We now apply Lemma 3.10 to obtain that there exists N such that for all n ≥ N, there exists a simple random walk S, defined on the same probability space as W, such that This implies that
Proof. By Lemmas 3.10 and 3.11, there exist δ > 0 and N such that for all n ≥ N the following holds. There exists a Brownian motion W and a random walk S defined on the same probability space such that for all continuous curves α : where τ_* = τ_α ∧ τ_D and σ_* = ξ_α ∧ σ_D.
We will show that the proposition holds with this choice of N. Note that We will show that P_0(E) < ǫ. The proof that P_0(F) < ǫ is entirely similar. Recall that D_{1−δ} denotes the ball of radius 1 − δ and that A_δ denotes the annulus D \ D_{1−δ}. Then, However, by (9), P_0(E_1) < ǫ, and by (11), P_0(E_2) < ǫ and P_0(E_3) < ǫ.

Some results for loop-erased random walks

Up to constant independence of the initial and terminal parts of a LERW path
For this section only, we no longer restrict our random walks to be two-dimensional. When it is necessary to specify what dimension we are in, we will denote the dimension by d.
Although we have avoided using it up to now, it will be convenient to use "big-O" notation in this section. Recall that f(n) = O(a(n)) if there exists C < ∞ such that f(n) ≤ Ca(n). Here, C can depend on the dimension but on no other quantity. We will also write
Recall that for a natural number l, Ω_l denotes the set of paths ω = [0, ω_1, . . . , ω_k] such that ω_j ∈ B_l for j = 0, 1, . . . , k − 1 and ω_k ∈ ∂B_l.
Given a set K such that B_l ⊂ K, and such that we define µ_{l,K} on Ω_l to be the measure obtained by running a random walk up to the first exit time σ_K of K, loop-erasing, and restricting to B_l. More precisely, for ω ∈ Ω_l, If B_l ⊂ K_1 and B_l ⊂ K_2 are such that for either i = 1 or i = 2, we define a measure µ_{l,K_1,K_2} on Ω_l as follows. Let X denote a random walk conditioned to leave K_1 before K_2 (as long as this has positive probability; if not, µ_{l,K_1,K_2} is not defined). Then for ω ∈ Ω_l, we let This is the measure on Ω_l obtained by running X up to σ^X_{K_1}, loop-erasing, and restricting to B_l. Note that µ_{l,K} is equal to µ_{l,K,Λ}.
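The chronological loop-erasure underlying all of these measures can be sketched as follows. This is the standard construction; the implementation and its helper names are ours, not from the paper.

```python
def loop_erase(path):
    """Chronological loop erasure: scan the walk in order and, whenever a
    site is revisited, erase the loop created since its first visit."""
    lerw = []
    index = {}  # site -> position of its (current) first visit in lerw
    for site in path:
        if site in index:
            # erase the loop: truncate back to the first visit of `site`
            cut = index[site]
            for removed in lerw[cut + 1:]:
                del index[removed]
            lerw = lerw[:cut + 1]
        else:
            index[site] = len(lerw)
            lerw.append(site)
    return lerw
```

For example, on Z² the walk (0,0) → (1,0) → (1,1) → (1,0) → (2,0) loop-erases to (0,0) → (1,0) → (2,0): the loop created at the second visit to (1,0) is removed, and the result is always a self-avoiding path with the same endpoints.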
In this section, we establish some relations between the measures defined above. In fact, we will show that for n ≥ 4 and any K_1 and K_2 such that B_{nl} ⊂ K_1 and B_{nl} ⊂ K_2 (Proposition 4.4),
This implies that if B_{4l} ⊂ K_1 and B_{4l} ⊂ K_2 then (recall that the symbol ≍ means that each side is bounded by a constant multiple of the other side, the constant depending on the random walk S and on nothing else). We use these facts to prove that for a LERW Ŝ^n, η^1_l(Ŝ^n) and η^2_{4l,n}(Ŝ^n) (see the definitions in Section 2.3) are independent up to constants (Proposition 4.6).
For ω ∈ Ω_l and y ∈ ∂B_l, Proof. Let y_0 be such that We will show that P^{y_0}{σ_{nl} < ξ_ω} ≤ C/log n, which will clearly imply the result for all y ∈ ∂B_l.
Proof. Let K = K_1 ∩ K_2. Let X be a random walk conditioned to exit K_1 before K_2. Then by formula (5), The function h is harmonic in B_{nl} and ω_k ∈ B_l. Therefore, by the difference estimates for harmonic functions [16, Theorem 6.3.8], and thus, Hence, it suffices to show that Let y_0 ∈ ω be such that Then Therefore, and hence A similar argument shows that Now let y be any point on the path ω. Then since B_{nl} is a subset of both K_1 and K_2, Let y_1 be such that and y_2 be such that Then by (12) and (13), if d = 2, The lower bound and the case d ≥ 3 follow in the same way.
We now define a measure on unrooted loops in Λ. See [16, Chapter 9] for more details.
A rooted loop η = [η_0, η_1, . . . , η_k] is a path in Λ such that η_0 = η_k; η_0 is called the root of the loop. We say that two rooted loops η and η′ are equivalent if η′ = [η_j, η_{j+1}, . . . , η_{k−1}, η_0, . . . , η_j] for some j. We call the equivalence classes under this relation unrooted loops. We will denote by η̄ the unrooted loop corresponding to the rooted loop η. Recall the notation Notice that this does not depend on the root of η, and therefore p(η̄) is well defined for unrooted loops η̄.
We define a measure m on the set of unrooted loops as follows. Given an unrooted loop η̄, let α(η̄) be the number of distinct rooted representatives of η̄. Then we define where |η̄| denotes the number of steps of a representative of η̄. Any two representatives of η̄ have the same number of steps, so that m is well defined.
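As an illustration, the count α(η̄) of distinct rooted representatives can be computed for small loops by enumerating rotations of one rooted representative. The helper names are ours; the full measure m also requires the step weights p(η̄), which we do not model here.

```python
def rooted_representatives(loop):
    """All rooted loops equivalent to the given one.

    `loop` is a list [eta_0, ..., eta_k] with eta_0 == eta_k; rotating the
    root to position j gives the representative [eta_j, ..., eta_{k-1},
    eta_0, ..., eta_j]."""
    k = len(loop) - 1
    reps = set()
    for j in range(k):
        reps.add(tuple(loop[j:k] + loop[0:j + 1]))
    return reps

def alpha(loop):
    """alpha(unrooted loop) = number of distinct rooted representatives."""
    return len(rooted_representatives(loop))
```

For a triangle a → b → c → a all three rotations are distinct, so α = 3, while for the back-and-forth loop a → b → a → b → a only two of the four rotations are distinct, so α = 2; in general α(η̄) divides |η̄|.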
Proof. By formula (5), for any ω ∈ Ω_l, Let e(n) = (log n)^{−1} if d = 2 and e(n) = n^{2−d} if d ≥ 3. Let ω′ = [ω′_0, . . . , ω′_{k′}] be any other path in Ω_l. We will show that and For this will imply that One then gets the other bound by reversing the roles of K_1 and K_2. We first show (15). Since B_{nl} ⊂ K_1, If d ≥ 3, then by [16, Proposition 6.4.2], for z ∈ ∂B_{nl}, One gets a similar formula with K_2 replacing K_1 and ω′ replacing ω, from which (15) follows in the case d ≥ 3.
To prove (15) in the case d = 2, we first note that [10, Lemma 2.1.2] Furthermore, for z ∈ ∂B_{nl}, By applying Lemma 4.1 and [10, Lemma 2.1.2] again, we get that for y ∈ ∂_i B_l, Thus, Therefore, and hence, with a similar lower bound. We get similar bounds with ω′ replacing ω and K_2 replacing K_1, from which (15) follows.
Let η̄_* be such that |η̄_*| = ⟨η̄⟩ and such that Suppose first that d = 2. Then for j ≤ l/2 and z ∈ ∂B_j, where the exponent 1/2 comes from the Beurling estimates (Theorem 2.3). If l/2 < j ≤ l, then Therefore, The case d ≥ 3 is easier. In this case, Thus,
Corollary 4.5. Recall that Ŝ denotes an infinite LERW. Suppose that n ≥ 4, K is such that B_{nl} ⊂ K, and ω ∈ Ω_l. Then, In particular, Proof. This follows immediately from Proposition 4.4 and the definition of the infinite LERW Ŝ.
We conclude this section with the proof that η^1 and η^2 are independent (up to constants) for the LERW Ŝ^n: (1 + O(l/m)) P{η^1_l(Ŝ^n) = ω} P{η^2_{m,n}(Ŝ^n) = λ}, d ≥ 3.
Proof. We fix l, m and n throughout and let η^1 = η^1_l, η^2 = η^2_{m,n}. Let X be a random walk started at 0 conditioned to leave B_n before returning to 0. Then the loop-erasure of X and Ŝ^n have the same distribution. Let Y be a random walk started on ∂B_n according to harmonic measure from 0 and conditioned to hit 0 before returning to ∂B_n. By reversing paths, for all z ∈ ∂B_n, Therefore, X and Y^R (the time-reversal of Y) have the same distribution. Recall that one obtains the same distribution on LERW by erasing loops from random walks forwards or backwards. Therefore, if ω and λ are as above, Now let Z be a random walk starting at λ_0, conditioned to hit 0 before leaving B_n \ λ. Then by the domain Markov property for LERW (Lemma 2.4), However, by again reversing paths as above, and noting that the loop-erasure of a random walk starting at 0 and conditioned to avoid 0 after the first step has the same distribution as the loop-erasure of an unconditioned random walk,

The separation lemma
Throughout this section, S will be a random walk and Ŝ will be an independent infinite LERW. Let F_k denote the σ-algebra generated by For positive integers j and k, let A_k be the event and T^k_j be the integer-valued random variable The goal of this section is to prove the following separation lemma, which states that, conditioned on the event A_k that the random walk S and the infinite LERW Ŝ do not intersect up to the circle of radius k, the probability that they are further than some fixed distance apart from each other at the circle of radius k (D_k ≥ c_1) is bounded from below by a constant c_2 > 0.
Theorem 4.7 (Separation Lemma). There exist constants c_1, c_2 > 0 such that for all k, The proof of Theorem 4.7 depends on two lemmas. Lemma 4.8 roughly states that the probability that S and Ŝ stay close together without intersecting each other is very small. More precisely, the probability that T_{j−1} ≥ (1 + cj²2^{−j})T_j and that the paths do not intersect is less than 2^{−βj²}. Lemma 4.9 states that if S and Ŝ are separated, then there is a substantial probability that they stay separated and do not intersect. To wit, if {T_j > k} and A_{T_j} hold, then the probability that A_{2k} and {D_{2k} ≥ 2^{−j}} hold is greater than 2^{−αj}. The proof of the separation lemma then combines the two lemmas to show that, conditioned on A_{2k}, there is a probability bounded below that S and Ŝ separate to some fixed distance before leaving the ball of radius 2k, no matter how close the two paths were upon leaving the ball of radius k.
Proof. We let j_0 be such that for all j ≥ j_0, cj²2^{−j} < 1/2. Since k is fixed, we will write T_j for T^k_j from now on. We suppose that S[0, σ(T_j)] and Ŝ[0, σ̂(T_j)] are any paths such that T_j ≤ 3k/2 holds. We also assume that D_{T_j} < 2^{−j+1}, or else there is nothing to prove. Now consider K := Ŝ[0, σ̂((1 + cj²2^{−j})T_j)] and let ρ = inf{n ≥ σ(T_j) : dist(S_n, K) ≤ 2^{−j+1}|S_n|}.
Notice that even though we assume that D_{T_j} < 2^{−j+1}, ρ is not necessarily equal to σ(T_j).
If ρ > σ((1 + 4·2^{−j})T_j), then this means that T_{j−1} < (1 + 4·2^{−j})T_j. However, if ρ ≤ σ((1 + 4·2^{−j})T_j), then by the Beurling estimates for random walk (Theorem 2.3), there exists c′ < 1 such that The same estimate holds starting at T_j + 8k·2^{−j}, k = 0, 1, . . . , ⌊cj²/8⌋. Therefore,
Lemma 4.9. There exist α < ∞ and c > 0 such that for all j and k, Proof. Since k is fixed, we will omit the superscript k from now on. Let z_1 = S(σ_{T_j}) and z_2 = Ŝ(σ̂_{T_j}). Without loss of generality, we may assume that T_j < 2k (or else there is nothing to prove) and also that arg(z_2) < arg(z_1). Note that |z_1| = |z_2| = T_j and k ≤ T_j ≤ 2k. Suppose that A_{T_j} holds. By definition of T_j, there exist c > 0 and half-wedges Using Lemma 2.4 and Proposition 3.5, it is easy to verify that there exists a global constant c′ such that Now consider the half-wedges Applying Lemma 2.4 and Corollary 3.7 to W′_1 and W′_2, one obtains that for any z′_1 ∈ ∂W_1 such that |z′ The result then follows since W′_1 and W′_2 are distance c2^{−j}T_j apart and S and Ŝ are independent.
Proof of Theorem 4.7. We again fix k and let where c is chosen so that s ≤ 3/2. We also let j_0 be such that for j ≥ j_0, 2^{−βj²+αj} < 1, where α and β = β(c) are as in Lemmas 4.8 and 4.9.
To prove the theorem, it suffices to show that for all m, By Lemma 4.9, it is enough to find a constant c′_2 such that P{T_{j_0} ≤ 3k/4 | A_k; 2^{−m} ≤ D_{k/2} < 2^{−m+1}} ≥ c′_2. In fact, we will show that Then, However, Therefore, by Lemmas 4.8 and 4.9, Using the same techniques, one can prove a "reverse" separation lemma. Let S be a random walk started uniformly on the circle ∂B_n and conditioned to hit 0 before leaving B_n. Let X be the time-reversal of Ŝ^n (so that X is also a process from ∂B_n to 0). As before, for k ≤ n, let Then,
Theorem 4.10 (Reverse Separation Lemma). There exist c_1, c_2 > 0 such that

The growth exponent

Introduction
Recall that W_t denotes standard complex Brownian motion and γ denotes radial SLE_2 in D started uniformly on ∂D.
In this chapter, we will consider random walks and independent LERWs. We will view them as being defined on different probability spaces, so that P̂{·} and Ê[·] denote probabilities and expectations with respect to the LERW, while P{·} and E[·] denote probabilities and expectations with respect to the random walk. For m ≤ n, we define Es(m, n), Es(n) and Ês(n) as follows.
Es(m, n) is the probability that a random walk from the origin to ∂B_n and the terminal part of an independent LERW from m to n do not intersect. Es(n) is the probability that a random walk from the origin to ∂B_n and the loop-erasure of an independent random walk from the origin to ∂B_n do not intersect. Ês(n) is the probability that a random walk from the origin to ∂B_n and an infinite LERW from the origin to ∂B_n do not intersect. In Section 5.2, we prove that for m < n, Es(n) can be decomposed as Es(n) ≍ Es(m) Es(m, n).
In Section 5.3, we use the convergence of LERW to SLE_2 (Theorem 2.6) and the intersection exponent 3/4 for SLE_2 (Theorem 2.5) to show that We then combine these two results to show that Es(n) ≈ n^{−3/4}. Finally, in Section 5.4, we show how the fact that Es(n) ≈ n^{−3/4} implies that Gr(n) ≈ n^{5/4}. Before proceeding, we prove the following lemma, which shows that Es(n) and Es(4n) are of the same order of magnitude.
Proof. By Corollary 4.5, it suffices to show that It is clear that the left-hand side is greater than or equal to the right-hand side. To prove the other direction, we will use the separation lemma (Theorem 4.7). Given a point z ∈ ∂B_n, let W(z) be the half-wedge where c_1 is as in the statement of the separation lemma. We also let By the strong Markov property for random walk, By Lemma 2.4 and Corollary 3.7, Finally, by the separation lemma, and therefore,
Proof. Let l = ⌊m/4⌋ and fix η^1 = η^1_l and η^2 = η^2_{m,n}. For any path η in Ω_n,

Intersection exponents for SLE_2 and LERW
In this section, we use the convergence of LERW to SLE_2 to show that for 0 < r < 1, Es(rn, n) ≍ r^{3/4}. We combine this result with the decomposition Es(n) ≍ Es(rn) Es(rn, n) from the previous section to obtain that Es(n) ≈ n^{−3/4}. We recall the notation introduced in Section 2.6. Let Γ denote the set of continuous curves α : [0, t_α] → D (we allow t_α to be ∞) such that α(0) ∈ ∂D, α(0, t_α] ⊂ D and α(t_α) = 0. We can make Γ into a metric space as follows. If α, β ∈ Γ, we let where the infimum is taken over all continuous, increasing bijections θ : [0, t_α] → [0, t_β]. Note that d is a pseudo-metric on Γ, and is a metric if we consider two curves to be equivalent whenever they agree up to reparametrization.
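For polygonal curves, a discretized version of this infimum over reparametrizations is the discrete Fréchet distance, computable by the standard Eiter–Mannila dynamic program. The following sketch is our own illustration of the metric's structure, not part of the paper's argument.

```python
def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (lists of 2D
    points): a dynamic-programming analogue of
    inf over monotone theta of sup_t |P(t) - Q(theta(t))|."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    m, n = len(P), len(Q)
    D = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                D[i][j] = d
            elif i == 0:
                D[i][j] = max(D[i][j - 1], d)
            elif j == 0:
                D[i][j] = max(D[i - 1][j], d)
            else:
                # advance along P, along Q, or along both, whichever is best
                D[i][j] = max(min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]), d)
    return D[m - 1][n - 1]
```

The monotone choice at each step plays the role of the increasing bijection θ: the distance between two parallel unit-separated segments is 1, even though a naive pointwise comparison without reparametrization could be larger.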
Recall (Theorem 2.6) that LERW converges weakly to SLE_2 on the space (Γ, d). We want to apply this result to the functions f_r defined as follows. Given 0 < r < 1 and α ∈ Γ, we let where ρ_r = inf{t : |α(t)| = r}.
We also define f_r to be identically 1 for r ≥ 1 (think of ρ_r = 0 in that case, so that the above probability is 1). Recall that Theorem 2.5 states that if γ is SLE_2 then Unfortunately, the f_r are not continuous on the space (Γ, d). However, the following lemma shows that they can be approximated by continuous functions.
Lemma 5.4. For all 0 < r < 1, there exists a function f̃_r that is uniformly continuous on the space (Γ, d) such that for all α ∈ Γ, f_{r/2}(α) ≤ f̃_r(α) ≤ f_{2r}(α).
The latter can be made arbitrarily small by choosing δ small enough. By reversing the roles of α and β, one gets a similar lower bound, proving that f̃_r is uniformly continuous.
Lemma 5.5. There exists C < ∞ such that the following holds. Given a random walk S and an independent LERW Ŝ^n, we extend them to continuous curves S_t and Ŝ^n_t by linear interpolation. Then for all 0 < r < 1, there exists N = N(r) such that for n ≥ N, C^{−1} r^{3/4} ≤ Ê[P{S[0, σ_n] ∩ η^2_{rn,n}(Ŝ^n) = ∅}] ≤ C r^{3/4}.
By Lemma 5.4, f_r(n^{−1}Ŝ^n) ≤ f̃_{2r}(n^{−1}Ŝ^n), and f̃_{2r} is continuous in the metric d. Therefore, by the weak convergence of LERW to SLE_2 described at the beginning of this section, there exists N_2 such that for n ≥ N_2, Ê[f̃_{2r}(n^{−1}Ŝ^n)] ≤ E[f̃_{2r}(γ)] + r^{3/4}, where γ denotes SLE_2. Furthermore, applying first Lemma 5.4 and then Theorem 2.5, Therefore, the upper bound holds for N = max{N_1, N_2}. The lower bound is proved in the same fashion.
We now prove the analogue of the previous lemma for the case where S and Ŝ^n are discrete processes. The reason the discrete case does not follow immediately from the continuous case is that we allow random walks that "jump", and therefore it is possible for two realizations of S and Ŝ^n to avoid each other on the lattice Λ but to intersect after they are made into continuous curves by linear interpolation.
Theorem 5.6. There exists a constant C such that the following holds. For all 0 < r < 1, there exists N = N(r) such that for n ≥ N, C^{−1} r^{3/4} ≤ Es(rn, n) ≤ C r^{3/4}.
Proof. Fix 0 < r < 1. The lower bound follows immediately from Lemma 5.5 and the fact that if the discrete processes intersect each other, then so do the continuous curves.
To prove the upper bound, we introduce some notation that will be used only in this proof. Let S[0, . . . , σ_n] denote the discrete set of points in Λ visited by S between S_0 and S(σ_n). We will write S[0, σ_n] to denote the continuous set of points in C visited by the continuous curve S_t from S_0 to S(σ_n). We use similar notation for Ŝ^n. In addition, we let η² = η²_{rn,n}(Ŝ^n[0, . . . , σ_n]) be the terminal part of the discrete LERW curve and η̃² = η²_{rn,n}(Ŝ^n[0, σ_n]) be the terminal part of the continuous LERW curve.
As in the proof of Lemma 3.11, one can choose δ > 0 small enough so that for all n sufficiently large and for all z ∈ ∂B_n, P_0{S[0, σ_n] ∩ B_{δn}(z) ≠ ∅} < r^{3/4}.
Furthermore, given such a δ, we can choose ǫ > 0 and N such that for all n ≥ N and all z ∈ ∂B_n, the following holds. Let y ∈ Λ be the closest point to (1 − ǫ)z. Then,
Lemma 5.8. Fix z ∈ B_n. Let S be a random walk and let X be an independent random walk started at z conditioned to hit 0 before leaving B_n. Then P{z ∈ Ŝ^n[0, σ_n]} = G_n(0, z) P{L(X[0, ξ^X_0]) ∩ S[1, σ_n] = ∅}.
We finally have all the tools needed to prove our main theorem.
Hence, As before, the last quantity is comparable to Es(r). Therefore, for all z such that n/4 ≤ |z| ≤ 3n/4, P{z ∈ Ŝ^n[0, σ_n]} ≥ c G_n(0, z) Es(r). This proves the lower bound, since ǫ was arbitrary.
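The relation Gr(n) ≈ n^{5/4} can also be probed numerically. The following Monte Carlo sketch is our own illustration, assuming simple random walk on Z² and modest radii; the function names are hypothetical and no claim from the proof depends on it.

```python
import random

def lerw_length(n, rng):
    """Run a simple random walk on Z^2 from the origin until it exits the
    ball of radius n, loop-erase it chronologically, and return the number
    of points on the walk and on its loop-erasure."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    z = (0, 0)
    path = [z]
    while z[0] * z[0] + z[1] * z[1] < n * n:
        dx, dy = rng.choice(steps)
        z = (z[0] + dx, z[1] + dy)
        path.append(z)
    # chronological loop erasure
    lerw = []
    index = {}
    for site in path:
        if site in index:
            cut = index[site]
            for removed in lerw[cut + 1:]:
                del index[removed]
            lerw = lerw[:cut + 1]
        else:
            index[site] = len(lerw)
            lerw.append(site)
    return len(path), len(lerw)

def estimate_gr(n, trials, seed=0):
    """Monte Carlo estimate of Gr(n), the expected number of LERW points."""
    rng = random.Random(seed)
    return sum(lerw_length(n, rng)[1] for _ in range(trials)) / trials
```

For small n the estimate is noisy, but plotting the logarithm of the estimate against log n over a range of radii produces a slope near 5/4, compared with the slope 2 governing the length of the walk itself.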