One-point function estimates for loop-erased random walk in three dimensions

In this work, we consider loop-erased random walk (LERW) in three dimensions and give asymptotic estimates, at dyadic scales, of the one-point function for LERW and of the non-intersection probability of LERW and simple random walk. These estimates are crucial for characterizing the convergence of LERW to its scaling limit in the natural parametrization. As a step in the proof, we also obtain a coupling of two pairs of LERW and SRW with different starting points, conditioned to avoid each other.


Introduction and main results
Let S be the simple random walk on Z^3 started at the origin and write T for the first time that S exits from the ball of radius 2^n centered at 0. Let x_n be one of the points of Z^3 nearest to 2^n x. Finally, we set

a_{n,x} = P( x_n ∈ LE(S[0, T]) )   (1.3)

for the probability that the LERW hits x_n, where LE(λ) stands for the loop-erasure of a path λ (see Section 2.2 for its precise definition). Now we can state the main theorem of this paper.
In order to prove Theorem 1.1, it turns out that we need to estimate the following non-intersection probability of simple random walk and LERW. Let S^1 and S^2 be independent simple random walks on Z^3 started at the origin. We write T^i_n for the first time that S^i exits from the ball of radius n. We are interested in Es(n): the probability that the LERW LE(S^1[0, T^1_n]) and the simple random walk S^2[1, T^2_n] do not intersect (we denote this non-intersection event by A_n). In this paper, we will show the following theorem.

Theorem 1.2. There exist c > 0 and δ > 0 such that for all n ∈ Z_+,

Es(2^n) = c 2^{-αn} (1 + O(2^{-δn})),   (1.8)

where α is the exponent in (1.2).
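Although nothing in the proof relies on simulation, the quantity Es(n) is easy to estimate empirically, which may help fix ideas. The following is a minimal Monte Carlo sketch (our own illustration, not the authors' code, and far too small in scale to exhibit the exponent α of Theorem 1.2): it samples two independent walks, loop-erases the first, and counts how often the two avoid each other.

```python
# Illustrative Monte Carlo estimate of Es(R): the probability that the loop
# erasure of one SRW on Z^3 and an independent SRW, both from the origin,
# do not intersect before exiting the ball of radius R.  (Sketch only.)
import random

STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def srw_until_exit(radius):
    """SRW on Z^3 started at 0, stopped on first exit of B(radius)."""
    path = [(0, 0, 0)]
    while sum(c * c for c in path[-1]) < radius * radius:
        x, y, z = path[-1]
        dx, dy, dz = random.choice(STEPS)
        path.append((x + dx, y + dy, z + dz))
    return path

def loop_erase(path):
    """Chronological loop erasure LE(path)."""
    erased, pos = [], {}
    for v in path:
        if v in pos:                       # a loop closes at v: erase it
            for w in erased[pos[v] + 1:]:
                del pos[w]
            del erased[pos[v] + 1:]
        else:
            pos[v] = len(erased)
            erased.append(v)
    return erased

def estimate_es(radius, trials=2000):
    good = 0
    for _ in range(trials):
        gamma = set(loop_erase(srw_until_exit(radius)))   # LE(S^1[0, T^1])
        lam = srw_until_exit(radius)[1:]                   # S^2[1, T^2]
        good += gamma.isdisjoint(lam)
    return good / trials

if __name__ == "__main__":
    for n in range(2, 6):
        print(2 ** n, estimate_es(2 ** n))
```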
This theorem immediately implies a number of up-to-constants estimates for LERW. We summarize them in the following corollary but postpone its proof until the end of Section 3.2. Write

M_n = len( LE(S^1[0, T^1_n]) )   (1.9)

for the number of lattice steps of the LERW, and let Es(·, ·) be another escape probability defined in (2.8). Before finishing this subsection, it may be worth mentioning one of the motivations of this work. It turns out that our results help to characterize how 3D LERW converges to its scaling limit. In particular, they are a key ingredient for giving a natural time-parametrization of the scaling limit of 3D LERW. Some progress in this direction will be made in [15].

Some words about the proof
Let us explain a sketch of the proofs of the main results. We recall that S is the simple random walk on Z^3 started at the origin and that T stands for the first time that S exits from B(n), the ball of radius n. We write γ = LE(S[0, T]) for the loop-erasure of the simple random walk path. Take a point x ∈ B(n). In order for γ to hit the point x, the following two conditions are required: (i) S hits x before time T; (ii) S[σ_x + 1, T] does not intersect LE(S[0, σ_x]), where σ_x stands for the last time (up to T) that S hits x. Considering the time reversal of LE(S[0, σ_x]) and translating the path, we can relate the probability of the second condition (ii) to the non-intersection probability Es(n) defined in (1.7). In fact, it is known that the probability that γ hits x is comparable to n^{-1} Es(n) if x is not too close to the origin and to the boundary of B(n) (see [21] for this). Thus, loosely speaking, the proof of Theorem 1.1 boils down to that of Theorem 1.2.
We will now explain how to prove Theorem 1.2. Since the existence of the scaling limit of LERW (we denote the scaling limit by K) was already proved by Gady Kozma in [4], in order to estimate Es(2^n) it is natural to compare it with the non-intersection probability of K and a Brownian motion, both started at the origin. However, this approach unfortunately does not work without modification, because there is still a non-negligible "gap" between the simple random walk and Brownian motion, as well as between LERW and K. The idea for dealing with this issue is to separate the starting points of the LERW and the simple random walk far enough that the gap becomes negligible.
Let us now be more precise about how to make the idea of separating the starting points rigorous. Note that, for clarity of presentation, we may not use the same notation as in Sections 3 through 5.
• For two sequences {f_n} and {g_n}, we write f_n ≈ g_n if f_n = g_n (1 + O(2^{-δn})) for some constant δ > 0.
• Take q ∈ (0, 1). For a path λ and an integer k ≥ 0, we denote the first time that λ exits from B(2^{(1-kq)n}) by t_{k,q}. We set t = t_{0,q}.
• We write γ^n = LE(S^1[0, T^1_{2^n}]) and λ^n = S^2[1, T^2_{2^n}], where T^i_m stands for the first time that S^i exits from B(m). Notice that λ^n(0) = 0. Since it is proved in [16] that the distribution of γ^n[0, t_{1,q}] is sufficiently close to that of γ^{n-1}[0, t_{1,q}], we have P(F) ≈ P(F′) and (note that A_n ⊂ F and A_{n-1} ⊂ F′)

b_n ≈ P(A_n | F) / P(A_{n-1} | F′),   (1.12)

where A_n := A_{2^n} is the event considered in Es(2^n) (see (1.7)), F is the event that γ^n[0, t_{1,q}] does not intersect λ^n[0, t_{1,q}], and F′ is the event that γ^{n-1}[0, t_{1,q}] does not intersect λ^{n-1}[0, t_{1,q}]. See Lemma 3.1 for more details.
• To write P(A_n | F) explicitly, we introduce the following function g(γ, λ). Suppose that we have a pair of paths (γ, λ) lying in B(2^{(1-q)n}) (denote the set of such pairs by Γ). Let X be a random walk started at the endpoint of γ and conditioned so that X[1, t] does not intersect γ. Also let Y be a simple random walk started at the endpoint of λ. Then the function g is defined by (1.13) for (γ, λ) ∈ Γ. Note that the starting point of γ does not necessarily coincide with that of λ for (γ, λ) ∈ Γ.
• Why does (1.15) hold? To see this, assume that the non-intersection event considered in (1.13) occurs. This event forces X and Y not to return to the inner ball B(2^{(1-2q)n}) with high probability. Therefore, the initial part of (γ, λ) is not important for computing g(γ, λ).
• This observation allows us to write

P(A_n | F) ≈ E[ g( γ^n[t_{2,q}, t_{1,q}], λ^n[t_{2,q}, t_{1,q}] ) | F ].   (1.16)

Namely, in order to deal with P(A_n | F), we only have to control the (conditional) distribution of the end parts (γ^n[t_{2,q}, t_{1,q}], λ^n[t_{2,q}, t_{1,q}]) conditioned on the event F. With this in mind, for (γ, λ) ∈ Γ, let

μ(γ, λ) = P( (γ^n[t_{2,q}, t_{1,q}], λ^n[t_{2,q}, t_{1,q}]) = (γ, λ) | F ).   (1.17)

• The events Â_n, Â_{n-1}, F̂ and F̂′ are defined by replacing γ^n, γ^{n-1}, λ^n and λ^{n-1} by γ̂^n, γ̂^{n-1}, λ̂^n and λ̂^{n-1}, respectively, in the definitions of A_n, A_{n-1}, F and F′ given in the fourth item. For example, Â_n is the event that γ̂^n does not intersect λ̂^n, and F̂′ is the event that γ̂^{n-1}[0, t_{1,q}] does not intersect λ̂^{n-1}[0, t_{1,q}], etc. Then, for the same reason as for equation (1.12), we have P(F̂) ≈ P(F̂′) and P(Â_n | F̂) / P(Â_{n-1} | F̂′) ≈ P(Â_n) / P(Â_{n-1})   (1.18), since the distribution of γ̂^n[0, t_{1,q}] is close to that of γ̂^{n-1}[0, t_{1,q}] by [16].
• The same ideas used to show (1.16) give that

P(Â_n | F̂) ≈ E[ g( γ̂^n[t_{2,q}, t_{1,q}], λ̂^n[t_{2,q}, t_{1,q}] ) | F̂ ].   (1.19)

As in equation (1.17), we define the probability measure ν by

ν(γ, λ) = P( (γ̂^n[t_{2,q}, t_{1,q}], λ̂^n[t_{2,q}, t_{1,q}]) = (γ, λ) | F̂ ).   (1.20)

• Here is the second key observation. It follows from a coupling technique that the total variation distance between μ and ν is small enough that P(A_n | F) ≈ P(Â_n | F̂). See the bullets later in this subsection for a brief explanation of why this coupling works; the rigorous arguments are wrapped up in Section 4 with the help of the recent work [12]. This observation also gives that P(A_{n-1} | F′) ≈ P(Â_{n-1} | F̂′). Therefore, combining the above, b_n ≈ P(Â_n | F̂) / P(Â_{n-1} | F̂′). Namely, the original starting points (= the origin) are replaced by the two different poles of B(2^{(1-3q)n}).
• This replacement of the starting points can also be carried out for b_{n-1}, which enables us to compare b_n and b_{n-1} via the multiscale analysis established in [4]. In fact, taking q > 0 sufficiently small, we see that b_n = b_{n-1}(1 + O(2^{-δn})) for some δ > 0. This gives (1.11).
Let us also give a brief explanation of the coupling in the second key observation above.
• It is well known that LERW is not Markovian per se, but it can still be regarded as a Markov process if one records all of its history. In fact, consider an infinite LERW (abbreviated ILERW later on) η and write η_n for η[0, T_{2^n}], the part of the path stopped at first exiting B(2^n); then one can construct a formal Markov process (η_0, η_1, η_2, . . .) with a corresponding transition probability at each "step". Let η_{m,n} = η[T_{2^m}, T_{2^n}] for m ≤ n (and η_{m,∞} for the ILERW). It is also well known that LERW enjoys a weak asymptotic independence: the correlation of η_k and η_{m,n} (or η_{m,∞}) decays like O(2^{k-m}). Hence, it is possible to couple two ILERW's η^x and η^y started from x, y ∈ B(2^k) such that η^x_{n+k,∞} = η^y_{n+k,∞} with probability 1 - O(2^{-βn}). The key observations that lead to this coupling are: i) 3D LERW rarely "backtracks" very far. In fact, the actual configuration in B(2^k) barely matters for the distribution of the path after reaching ∂B(2^{k+m}), if m is large. Hence, it is possible to find an m such that if two LERW's are coupled for m steps, then the probability of ever getting decoupled is bounded by, say, 1/2.
ii) At each step, there is a uniform positive probability for η^x and η^y to be coupled for m steps starting from the next step. More precisely, for any realizations of η^x_n and η^y_n, there exists c > 0 such that, with probability greater than c, η^x_{n+1,n+m+1} = η^y_{n+1,n+m+1}. iii) The exponential convergence rate follows from a combination of i) and ii), by bundling every (m + 1) steps into one giant step.
• In the same spirit, it is possible to couple a pair of ILERW and SRW started from a pair of (not necessarily distinct) points inside B(2^k) and conditioned not to intersect until first exiting B(2^{2n+k}), with another such pair, so that their paths agree from their first exit of B(2^{n+k}) onwards with probability 1 - O(2^{-βn}). In this case, observation i) above is still easily verifiable, and observation ii) follows thanks to an auxiliary result generally known as the "separation lemma". For the form adapted to our setup, see Theorem 6.1.5 of [21] or Claim 3.4 of [19].
• Since a prototype of such a coupling, although in a slightly different setup, has already been established by Greg Lawler in [12], we will not reinvent the wheel here; instead, we are going to show that it is possible to obtain the coupling described above by "tilting" the coupling in [12], as the two are in fact intimately related. In [12], Lawler considered the law of a pair of ILERW's (η^1, η^2), both started from the origin, and tilted its law by

1{η^1_n ∩ η^2_n = {0}} exp(-L_n(η^1_n, η^2_n)),   (1.22)

where L_n(η^1_n, η^2_n) stands for the loop term of loops in B(2^n) that touch both η^1_n and η^2_n. It is then shown that it is possible to couple (η^1, η^2) with another pair of ILERW's (η̂^1, η̂^2), started from different initial configurations inside B(2^k) and tilted similarly, such that η^i_{(n+k)/2,n} = η̂^i_{(n+k)/2,n}, i = 1, 2, with probability greater than 1 - O(2^{-β(n-k)}) for some β > 0.
• In fact, if one decomposes the SRW in our setup into a LERW and loops from an independent loop soup, then the conditioning that the ILERW and the SRW do not intersect can be interpreted as the event that an ILERW and a LERW do not intersect and, in addition, that no loop touching both paths appears. This is very similar in spirit to the tilting in (1.22), up to a few stitches in the definition, for it is not trivial to deal with the replacement of the ILERW by a LERW. In Section 4, we deal with this issue and then add back loops to obtain the coupling we need.

Figure 2: A schematic sketch of the coupling. Dashed and continuous curves represent LERW's and SRW's respectively. Although it is impossible to couple the beginning parts of the walks, we can find a coupling such that both the LERW's and the SRW's agree after first exiting B(2^{(k+n)/2}) with high probability.
Finally, let us explain the structure of this paper. In Section 2, we introduce notation and discuss some basic properties of LERW. We prove Theorem 1.2 and Corollary 1.3 in Section 3, assuming coupling results from the subsequent section. Section 4 is dedicated to the discussion and proof of various couplings of two pairs of LERW and SRW conditioned to avoid each other, which are crucial to the proofs of both main theorems. Finally, we give a proof of Theorem 1.1 in Section 5. As it closely resembles the proof of Theorem 1.2, we will be less detailed in the presentation.
We write | · | for the Euclidean distance in R^3. For n ≥ 0 and z ∈ Z^3, we write B(z, n) := {x ∈ Z^3 : |x - z| < n}.
If z = 0, we write B(n) for short, and we also set B_n = B(2^n). We write D = {x ∈ R^3 : |x| < 1} for the open unit ball and D̄ for its closure.
For any path η, we write T_{x,r}(η) for the first time that η hits ∂B(x, r), the outer boundary of B(x, r), and T_r(η) for the case x = 0. We also let T̂_r(η) = T_{2^r}(η). We will drop the dependence on η in the notation whenever there is no confusion.
For a subset A ⊂ Z^d, we let ∂A = {x ∉ A : there exists y ∈ A such that |x - y| = 1}. We write Ā := A ∪ ∂A. Given a subset A ⊂ Z^d and r > 0, we write rA := {ry : y ∈ A}.
Throughout the paper, we will use various letters, e.g. S, S 1 , S 2 , R 1 , R 2 , etc., to represent simple random walks on Z 3 and will use ad-hoc notations for its probability law. Unless otherwise indicated, we use E • with the same sub-and superscripts for the corresponding expectation of a probability measure P • . For the probability law and the expectation of S started at z, we use P z and E z respectively.
For a subset A ⊂ Z^3 and x, y ∈ A, we write G_A(x, y) := E^x[ Σ_{t=0}^{τ-1} 1{S(t) = y} ], where τ = inf{t : S(t) ∈ ∂A}, for the Green's function in A.
We use c, C , · · · to denote arbitrary positive constants which may change from line to line and use c with subscripts, i.e., c 1 , c 2 , . . . to denote constants that stay fixed. If a constant is to depend on some other quantity, this will be made explicit. For example, if C depends on δ, we write C δ .

Loop-erased random walk, reversibility and domain Markov property
In this subsection, we will give the definition of the loop-erased random walk (LERW) and review some known facts about it, especially the time reversibility and the domain Markov property. As we are working in the case of d = 3, we will only state things for Z 3 .
In general, we will use the term loop-erased random walk, or LERW, loosely, to refer to the (random) SAP obtained by loop-erasing some finite SRW path. However, to make things precise, we have to specify the stopping time of this SRW. A very common scenario is the following: let D be a finite subset of Z^3 and let S^x be the SRW started from x and stopped at the first exit of D, in which case we call LE(S^x) the LERW from x stopped at exiting D. In contrast, if S^x is an infinite SRW started from x ∈ Z^3, then LE(S^x) will be referred to as an infinite LERW, or ILERW, started from x.
"LERW stopped at exiting B(n)" and ILERW are different stochastic objects. Moreover, the law of the former and that of the latter truncated at first exiting B(n) differ greatly, especially at the ending parts. However, if we only look at beginning parts, they still look pretty similar. The following quantative lemma is excerpted from [16].
Lemma 2.2 (Corollary 4.5 of [16]). Given 0 ∈ D ⊂ Z^3, let λ^• be an ILERW started at the origin, and let λ be a LERW started from 0 and stopped at exiting D. Let P^• and P be their respective laws. Moreover, suppose n ≥ 2 and l ≥ 0 satisfy B(nl) ⊂ D. Truncate λ^• and λ at the first exit of B(l) and denote the truncated paths by λ^•_l and λ_l respectively. Then, for all ω ∈ Γ_l,

For a path λ[0, m] ⊂ Z^d, we define its time reversal λ^R by λ^R := [λ(m), λ(m-1), · · · , λ(0)]. Note that in general LE(λ) ≠ (LE(λ^R))^R. However, as the next lemma shows, the time reversal of a LERW has the same distribution as the original LERW. Let Λ_m be the set of paths of length m started at the origin.

Lemma 2.3 (Lemma 7.2.1 of [10]). For each m ≥ 0, there exists a bijection T_m : Λ_m → Λ_m such that for each λ ∈ Λ_m, we have

Moreover, we know that λ and T_m λ visit the same edges, in the same directions, with the same multiplicities.
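The chronological loop erasure, and the fact that it does not commute with time reversal pathwise, are easy to see on a small example. The following is a minimal sketch (our own illustration, not from the paper); it exhibits a concrete path λ in Z^2 for which LE(λ) differs from the reversal of LE(λ^R), even though, as recorded above, time reversal does not change the law of the loop erasure of a random walk path.

```python
# Minimal sketch of chronological loop erasure, with a concrete path lam
# in Z^2 for which LE(lam) differs from reversing LE(lam^R).
def loop_erase(path):
    """Erase loops from `path` in chronological order."""
    erased, pos = [], {}           # pos maps a vertex to its index in `erased`
    for v in path:
        if v in pos:               # a loop closes at v: erase it
            for w in erased[pos[v] + 1:]:
                del pos[w]
            del erased[pos[v] + 1:]
        else:
            pos[v] = len(erased)
            erased.append(v)
    return erased

lam = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (0, 1), (0, 2)]
print(loop_erase(lam))                       # [(0, 0), (0, 1), (0, 2)]
rev = list(reversed(lam))
print(list(reversed(loop_erase(rev))))       # [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2)]
```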
Note that LERW is not a Markov process. However it satisfies the domain Markov property in the following sense.

Random walk loop soup and LERW
In this subsection, we give another description of LERW through the random walk loop soup measure. We refer readers to Sections 4 and 5 of [11] for detailed discussions in this direction. Let λ be a loop-erased random walk from x ∈ D ⊆ Z^3 stopped at exiting D. For generality of notation we do not require D to be a finite set; for instance, if D = Z^3, then λ is actually an ILERW. Let η be a SAP in D of length n such that η[0, n-1] ⊂ D, and write τ = η(n). Then

Esc_{η,D}(τ) := P^τ( S[1, 2, . . .] hits ∂D before hitting η )

denotes the escape probability for the SRW; note that if τ ∈ ∂D, then Esc_{η,D}(τ) = 1. Let m denote the (unrooted) random walk loop measure defined in Section 5 of [11]. Then the law of λ can be expressed in terms of Esc_{·,D} and the loop measure m. This description may seem mysterious to readers unfamiliar with the subject, but what it actually does is nothing more than weight each SAP by the total weight of all SRW paths whose chronological loop-erasure gives this SAP. Conversely, starting from a LERW path, we are also able to "add back" loops from a loop soup and obtain a SRW. More precisely, let λ be the LERW as above and let L be an independent Poissonian loop soup with intensity m. Then we can add back loops from L to λ through the following procedure.
-For each such loop, choose a representative rooted at λ(j) (if there are several representatives then choose uniformly among all possibilities).
-Concatenate all these loops in the order they appear in the soup and call the concatenated loop l j .
• Insert l j 's into λ in the following order (note that l j starts and stops at λ(j)): and call the new path γ.
Then, γ has the law of the SRW started at x stopped at the first exit of D.
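Since the bullet defining which loops are attached at λ(j) is abridged above, the following sketch makes one common reading explicit. It assumes (our assumption, not a statement taken from the paper) that the loops attached at λ(j) are exactly those soup loops, contained in D, that visit λ(j) and avoid λ(0), . . . , λ(j-1); under that assumption, the insertion step can be written as follows.

```python
# Schematic "add back loops" step (our own sketch, not the authors' code).
# lam:  the LERW, as a list of lattice points lam[0], ..., lam[k].
# soup: a list of unrooted loops, each given as a list of vertices describing
#       one traversal of the cycle (without repeating the starting vertex);
#       the soup is assumed to be already restricted to loops contained in D.
import random

def add_back_loops(lam, soup):
    out = []
    for j, v in enumerate(lam):
        out.append(v)
        forbidden = set(lam[:j])
        attach = [loop for loop in soup
                  if v in loop and not forbidden.intersection(loop)]
        for loop in attach:                       # keep the order of the soup
            k = random.choice([i for i, w in enumerate(loop) if w == v])
            rooted = loop[k:] + loop[:k]          # a representative rooted at v
            out.extend(rooted[1:] + [v])          # traverse the loop, return to v
    return out
```

Note that a loop attaching at λ(j) cannot attach again at a later index, since it visits λ(j), so each soup loop is inserted at most once, at the smallest index of λ it touches.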

Escape probability and scaling limit
As we discussed in Section 1.2, the probability that a LERW and an independent simple random walk do not intersect up to exiting a large ball, which is referred to as escape probability, is a key object in the paper.
Definition 2.6. Let 0 < m < n. Let S^1 and S^2 be independent SRW's on Z^3 started at the origin, and write P for their joint distribution. Set γ := LE(S^1[0, T^1_n]) and let σ_m be the last time that γ visits B(m). We define the escape probabilities Es(n) and Es(m, n) as follows: let

Es(n) := P( γ ∩ S^2[1, T^2_n] = ∅ )   (2.7)

and let

Es(m, n) := P( γ[σ_m, len(γ)] ∩ S^2[1, T^2_n] = ∅ ).   (2.8)

More precisely, we first consider the loop erasure of a random walk up to exiting B(n); then we only look at the loop erasure after its last visit to B(m). Es(m, n) is the probability that this part of the loop erasure does not intersect an independent simple random walk up to the first exit of B(n).
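Concretely, the truncation step used in Es(m, n) can be sketched as below (an illustrative helper of ours, not the paper's notation; whether the last point inside B(m) is itself kept is a convention that does not affect the asymptotics).

```python
# Keep only the part of a path on Z^3 after its last visit to B(m).
def after_last_visit(path, m):
    """path: list of lattice points; returns path[k:] for the last k with |path[k]| < m."""
    last = None
    for i, (x, y, z) in enumerate(path):
        if x * x + y * y + z * z < m * m:
            last = i
    return path if last is None else path[last:]
```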
As the most accurate asymptotics of Es(n) and Es(m, n) are given in Corollary 1.3, we will not discuss the existing weaker estimates of the form (1.2), but only state a fact which will be used later. It is shown in Lemma 7.2.2 of [21] that

lim_{n→∞} log Es(2^{(1-q)n}, 2^n) / log 2^{-αqn} = 1,   (2.9)

where α is the same exponent as in (1.2).
Finally, we review some known facts about the scaling limit of LERW in three dimensions, whose existence was first proved in [4]. We refer to [19] for properties of this limit. Let S be a simple random walk started at the origin on Z^3. Recall the definition of D from Section 2.1. Write LEW_n for the loop erasure LE(S[0, T_n]) rescaled by n^{-1}, viewed as a compact subset of D̄. (2.10) We write H(D̄) for the metric space of compact subsets of D̄ equipped with the Hausdorff distance d_H.
Thinking of LEW_n as random elements of H(D̄), let P^{(n)} be the probability measure on H(D̄) induced by LEW_n. Then [4] shows that (P^{(2^j)})_j is a Cauchy sequence with respect to the topology of weak convergence, and therefore P^{(2^j)} converges weakly to some limiting probability measure ν. We write K for the random compact subset associated with ν and call K the scaling limit of LERW in three dimensions. It is also shown in [4] that K is invariant under rotations and dilations.
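For readers less familiar with the Hausdorff metric underlying this convergence, the distance d_H between two (finite approximations of) compact sets can be computed as in the following sketch (our own illustration; finite point sets stand in for compact subsets of D̄).

```python
# Hausdorff distance between two finite subsets of R^3 (stand-ins for the
# compact sets on which d_H is defined).
def hausdorff(A, B):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    def one_sided(X, Y):
        return max(min(dist(x, y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))

# Two toy "paths" in the unit ball, viewed simply as point sets.
A = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.5, 0.5, 0.0)]
B = [(0.0, 0.1, 0.0), (0.5, 0.0, 0.1), (0.4, 0.5, 0.0)]
print(hausdorff(A, B))
```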

Non-intersection probability
This section is dedicated to the proof of Theorem 1.2 and is organized in a hierarchical structure. We lay out the structure of the whole proof in Section 3.1.

Notations and the proof of Theorem 1.2
We start by introducing notation for the various walks and paths we are going to discuss in this section. Then we state, without proof, a few key propositions that compare the non-intersection probabilities under different setups. After that, we give a proof of Theorem 1.2 assuming these intermediate results. At last, we give the proof of Corollary 1.3. Let n ∈ Z_+ and let S^1 and S^2 be independent SRW's on Z^3. Write γ^n_x := LE(S^1[0, T_n]) for the loop-erasure of S^1 up to T_n, assuming that S^1(0) = x. Using the notation above, we will also consider the piece of γ^n_x between t and u, where t (resp. u) denotes the first time that γ^n_x hits the boundary of B_k (resp. B_l). Write γ^n for γ^n_0. Let λ_x be S^2 assuming that S^2(0) = x, and write λ for λ_0. As introduced in Section 1, we are interested in the event A_n that γ^n and λ[1, T_n] do not intersect, and in the quantity a_n := Es(2^n) = P(A_n), with α as in (1.2). Write b_n = a_n / a_{n-1}.
We fix some q ∈ (0, 1/10) whose explicit value will be specified in Prop. 3.4. We also assume throughout that n is large enough that n ≥ 30/q. Now, let A_{n,q} (resp. A^-_{n,q}) denote the corresponding non-intersection event for the paths at scale n (resp. n-1) truncated at their first exit of B_{(1-q)n}. Then we have

a_n = P(A_n | A_{n,q}) P(A_{n,q})  and  a_{n-1} = P(A_{n-1} | A^-_{n,q}) P(A^-_{n,q}).   (3.6)

The following lemma shows that the probability of A_{n,q} is very close to that of A^-_{n,q}, allowing us to relate b_n to the ratio of the two conditional probabilities.

Lemma 3.1. It follows that for all n and q ∈ (0, 1)

We postpone its proof to Section 3.2.
As explained in Section 1, we will relate quantities such as A_n and A_{n,q} to non-intersection probabilities of SRW and LERW started at a mesoscopic distance from each other. To this end, we introduce the following notation. Let

be a LERW started from x_1 stopped at exiting B_n and a SRW started from y_1 stopped at exiting B_n. Set

which are the analogues of A_n and A_{n,q} defined in (3.3) and (3.5). Write

We also write γ^1 := LE(S^1[0, T_{n-1}]) assuming that S^1(0) = x_1, and define

We claim that the conditional probabilities that appear in (3.7) can be replaced, with a small error, by the corresponding conditional probabilities for γ^1 and λ^1.

We will postpone the proof to Section 3.7 and dedicate Sections 3.3-3.6 to preparatory work. Also, we note that this proposition relies on the coupling result from Section 4. As a corollary, we have:

We now introduce the quantities corresponding to the scale 2^{n-1}. Let

assuming that S^1(0) = x_2 and S^2(0) = y_2. We then define

and, similarly, let γ^2 := LE(S^1[0, T_{n-2}]) assuming that S^1(0) = x_2 and

Similarly to (3.13) (note that the only difference between C_• and B_• is the scale), we also have (3.14). The following proposition states that the probabilities of B_n and B^-_n are actually close to those of C_n and C^-_n. We postpone its proof to Section 3.8.
Proposition 3.4. There exist universal constants c 1 > 0 and q 1 > 0 such that for all n ≥ 1 and q ∈ (0, q 1 ), We are now ready to prove Theorem 1.2.
Proof of Thm. 1.2 assuming Lemma 3.1, Propositions 3.2 and 3.4. To prove (1.8), it suffices to show that there exist universal constants c_1, N > 0 such that for all n ≥ N,

Recall (3.13) and (3.14). By Proposition 4.4 of [16], as in the proof of Lemma 3.1, we have

Hence, it follows that there exists a universal constant δ > 0 such that for all n and q ∈ (0, 1)

The claim (3.17) hence follows from Proposition 3.4 with an appropriately chosen q and N = 30/q (see above (3.5)). This finishes the proof of Theorem 1.2.
Proof of Corollary 1.3. The first statement of ( 1.10) follows from Proposition 6.
which gives the third statement. Finally, exponential tail bounds on M n as in Theorem 8.1.6 and Theorem 8.2.6 of [21] ensure the tightness of M n /n 2−α .
Before ending this subsection, we introduce some path spaces which will be used in the following sections. We write Γ for the set of paths satisfying We also write Λ for the set of paths satisfying (ii) above only.

Proof of Lemma 3.1
Proof of Lemma 3.1. Since for the probability that S 2 up to T (1−q)n and η do not intersect. Using the function f , we see that However, by Corollary 4.5 of [16], for any η ∈ Γ, we have This finishes the proof of this lemma.

Decomposition of paths and weak independence
In this subsection, we decompose the paths at their first exit of B (1−q)n and state without proof some preliminary results for Proposition 3.2.
Recall the definitions of Λ and Γ from the end of Section 3.1. Define

Take (η^1, η^2) ∈ C. Let w_i be the endpoint of η^i lying on ∂B_{(1-q)n}. Write R^1, R^2 for two independent simple random walks started at w_1, w_2, and let

be the probability that the loop-erasure of X and R^2 do not intersect. We are now able to rewrite P(A_n) in terms of g:

Hence

where μ_{n,q} stands for the conditional distribution given A_{n,q}. The next proposition measures the magnitude of g(η^1, η^2) in terms of a function h of (η^1, η^2), which we define below, and of Es(·, ·) from Definition 2.6. We postpone its proof to Section 3.6.
Remark 3.6. We now explain the significance of Prop. 3.5. The function h measures the closeness of η^1 and η^2 in the following sense. Let

where w_i stands for the endpoint of η^i. It turns out that

However, we note that if D(η^1, η^2) ≤ 1

3.4 Asymptotic independence of g(η^1, η^2) from the initial parts

The goal of this subsection is to show that, roughly speaking, g does not depend on the "initial part" of (η^1, η^2). In other words, if two pairs (η^1, η^2) and (η^3, η^4) are such that η^i and η^{i+2} have the same end parts (in the sense of the truncation π defined below) for i = 1, 2, then g(η^1, η^2) is very close to g(η^3, η^4).
We recall that C, the set of pairs of paths, was defined in (3.25). Take (η^1, η^2) ∈ C. We denote by w_i the endpoint of η^i, which lies on ∂B_{(1-q)n}. We also define a truncating operation π on paths by (3.33). We want to consider an analogue of g(η^1, η^2) for π(η^1, η^2). With this in mind, we write (note the difference between the X defined here and the X in (3.26)) R^1, R^2 for two independent simple random walks started at w_1, w_2, and

Note that this new function is a function of π(η^1, η^2) and does not depend on the initial part of (η^1, η^2).
We next define an analog of h(η 1 , η 2 ) for π(η 1 , η 2 ) (see ( 3.31) for the definition of h). To do it, let We define Again we remark that h(η 1 , η 2 ) is a function of π(η 1 , η 2 ). An easy modification of the proof of Proposition 3.5 gives that Proposition 3.7. One has g(η 1 , η 2 ) h(η 1 , η 2 )Es 2 (1−q)n , 2 n . (3.37) The following proposition shows that g(η 1 , η 2 ) is close enough to g(η 1 , η 2 ) for "typical" (η 1 , η 2 ) in the sense that h(η 1 , η 2 ) is not too small. More precisely, we have Proposition 3.8. There exists C < ∞ such that for all n, q ∈ (0, 1) and (η 1 , η 2 ) ∈ C satisfying Proof. We follow the notations introduced at the beginning of Section 3.3. Take (η 1 , η 2 ) ∈ C satisfying ( 3.38) and set Then by definition, we have We first show that P H 2 is close to P H 2 . It is clear that P H 2 ≤ P H 2 since π(η 1 ) ⊂ η 1 . On the other hand, we have In oder to bound the RHS of the inequality above, set We also let By the strong Markov property and Proposition 1.5.10 of [10], Then by Proposition 6.1.1 of [21] and Proposition 1.5.10 of [10] again, we see that Moreover, by the strong Markov property as above, it follows that By ( 3.37) and ( 3.38), we see that Combining this with ( 3.41), we have Therefore, by ( 2.9) and the fact that α < 1, we have Thus, using ( 3.45) and ( 3.43), we conclude that Finally, using ( 3.41), ( 3.42) and ( 3.46), we have which completes the proof.

Comparison of conditional probabilities
The goal of this subsection is ( 3.58) and ( 3.60), in which P (A n | A n,q ) and P (B n | B n,q ) are both rewritten (with a small error term) into weighted sums of g(·, ·) which allows an easy comparison using results from Section 4.
We recall that µ n,q was defined as in ( 3.29) which is a probability measure on C obtained by the conditional distribution on A n,q . We also recall the decomposition of P (A n |A n,q ) in ( 3.28). The next proposition shows that we can replace g(η 1 , η 2 ) in the RHS of ( 3.28) with g(η 1 , η 2 ) with small enough error terms. Proposition 3.9. One has that and let C 2 = C \ C 1 . By the separation lemma (see Theorem 6.1.5 of [21] or Claim 3.4 of [19] for the separation lemma), we see that there exists a universal constant c, c > 0 such that for all n and q ∈ (0, 1) (3.48) If η 1 and η 2 are c-well-separated, then it is easy to see that there exists c > 0 Therefore, we have Combining this with Proposition 3.8, we have which gives the proposition.
The following corollary is a by-product of the proof above (see (3.49)).
Corollary 3.11. It follows that We now turn to P (B n |B n,q ). We let µ n,q be the probability measure on C which is induced by As the next proposition can be proved very similarly, we will omit its proof.
Proposition 3.12. It follows that

3.6 Proof of Propositions 3.5 and 3.7

As the two propositions are extremely similar, we will only prove Proposition 3.5. We treat the two directions of (3.30) in Lemmas 3.13 and 3.15 separately. We recall the definitions of Es(n) and Es(m, n) from Definition 2.6.
Lemma 3.13. There exists c < ∞ such that for all n, q and (η 1 , η 2 ) ∈ C, Proof. Take (η 1 , η 2 ) ∈ C. Recall the definition of F = F n,q in ( 3.56). Then we have . Define Then it follows that By Proposition 4.6 of [16], since γ n [t 1 , T n ] and γ n [0, T (1−q)n+1 ] are "independent up to constant", we see that where P i stands for the probability law of S i assuming S i (0) = 0 and Y i are defined by Therefore by Harnack principle, it follows that Y 1 and Y 2 1 λ[0, T (1−q)n ] = η 2 are also "independent up to constant". Thus, we have Again by Harnack principle, we see that By Proposition 6.2.1, 6.2.2 and 6.2.4 of [21], we see that On the other hand, Therefore, by domain Markov property of LERW (see Lemma 2.4), we have which completes the proof of ( 3.61).
The following claim can be proved in a similar way.
Corollary 3.14. For all (η 1 , η 2 ) ∈ C, The next lemma shows the opposite direction. Proof. We will follow the proof of Proposition 5.3 of [16] and Proposition 6.2.4 of [21]. We recall that t 1 is the last time that γ n lies in ∂B 2 (1−q)n+3 (see ( 3.62) for t 1 ). Let (these notations pertain only in this proof) and set A = W ∪ B (1−q)n+1 . Let K 1 , K 2 be sets of paths defined by Then we have Note that X is a function of γ 1 while Y is a function of γ 2 . By domain Markov property of LERW and Lemma 6.2.3 of [21], it follows that there exists c > 0 such that for all η ∈ K 1 and η ∈ K 2 P 1 γ ⊂ W γ 1 = η, γ 2 = η ≥ c.

This gives
However, by Proposition 4.6 of [16], we see that It follows from (6.43) of [21] that Therefore, it suffices to show that To prove ( 3.65), by the separation lemma (see (6.13) of [21] for the version of the separation lemma that we need here), we see that there exists some universal constant c > 0 such that (see 3.48 for definition of being well-separated): which finishes the proof of ( 3.64).

Proof of Proposition 3.2
Proof of Proposition 3.2. We only prove ( 3.11) as ( 3.12) follows in a similar manner. To show ( 3.11), it suffices to show P A n A n,q − P B n B n,q ≤ 1 + O(2 −δqn ) P A n A n,q . (3.66) We observe that applying Prop. 4.2 with (k, N ) there equal to (1 − 3q)n, (1 − q)n , we have where || · || TV stands for the total variation distance. Hence, Thus, ( 3.66) follows by rewriting the leftmost and rightmost expression above back to conditional probabilities, thanks to ( 3.58) and ( 3.60). This finishes the proof of ( 3.11).

Proof of Proposition 3.4
We recall that λ^1 and λ^2 stand for the SRW's on Z^3 started at y_1 and y_2, respectively. We also recall that

In order to keep the notation of this subsection coherent, we will use the notation on the right-hand side above in the proposition below.
Proposition 3.16. There exist universal constants c_3 > 0 and q_1 > 0 such that for all n ≥ 1 and q ∈ (0, q_1),

Proof. We will closely follow the proof of Proposition 7.1.1 of [21]. We seek to replace the SRW in both probabilities in (3.68) by Wiener sausages (see (3.78) and (3.79)) and to establish an inequality between them (which is (3.75)). Lemma 3.2 of [7] proves that it is possible to couple λ^2 and W, a Brownian motion in R^3 started at y_2, on the same probability space P^2 such that

where

for some universal constants a, b > 0. Throughout this proof, we will assume that (λ^2, W) is defined on the same probability space as above. We also write E^2 for the corresponding expectation. Define the event J^2 by

Then, by Theorem 3.17 of [17], it follows that

We now consider the Wiener sausage. For a discrete or continuous path η and L ∈ R, write

(η)^{+L} := { x ∈ R^3 : there exists y ∈ η[0, len(η)] such that |x - y| ≤ 2^L }   (3.73)

for its sausage of radius 2^L. We also write

We let R = (R(j))_{j≥0} be the simple random walk on 2Z^3 started at x_1 and let γ^2 = LE(R[0, T_n]) be its loop-erasure up to T_n.
By Theorem 5 of [4], there exist deterministic universal constants q 0 ∈ (0, 1), c 0 ∈ (0, 1 4 ) and c 1 < ∞ such that for all q ∈ (0, q 0 ) and W satisfies J 2 then it follows that where P R stands for the probability law of R while P 1 stands for the law of S 1 (or equivalently law of γ 1 ). Thus, the two probabilities in ( 3.75) are functions of W 2 . (Note that we can take q 0 = 1 15 × min{ 1 6 , 8 , δ2 8 } where and δ 2 are universal constants as in the proof of Theorem 5 of [4] for the case that G 1 = Z 3 and G 2 = 2Z 3 . Taking q 0 like this form and conditioned W on J 2 , we can take universal deterministic constants c 0 and c 1 such that they do not depend on the starting point, see (132) of [4].) Next, we will replace each probability of ( 3.75) by the corresponding non-intersection probability of LERW and SRW as in the proof of Proposition 7.1.1 of [21]. We start with the left one. Note that Therefore, taking expectation with respect to W , we have for all q ∈ (0, q 0 ). But the scaling property of the Brownian motion ensures that the law of W 2 with W 2 started from y 2 coincides with the law of the Brownian motion B 1 := B[0, T n ] started from y 1 up to T n . Thus, we have for all q ∈ (0, q 0 ). Now we compare the probability of RHS of ( 3.77) and P (B n ). We again assume that λ 1 and B are coupled such that the Hausdorff distance between them is ≤ 2 2n 3 with probability at least 1 − ae −2 bn for some universal constants a, b > 0 (this is possible by Lemma 3.2 of [7]). Applying Lemma 4.8 of [4] (see Theorem 3.1 of [19] for a stronger version of it), it follows that there exists universal constants c 2 , ρ > 0 such that for all q ∈ (0, q 0 ), Combining this with our coupling of λ 1 and B, we see that Similarly, we see that Set q 1 = min{ c0ρ 10 , q0 10 }. Note that q 1 is a universal constant. We have showed that there exists a universal constant c 3 such that for all q ∈ (0, q 1 ), An inequality in the opposite direction also follows similarly. This gives ( 3.68).

Coupling
In this section, we establish various couplings of pairs of loop-erased walk and simple random walk conditioned to avoid each other up to some point, under different setups and different initial configurations. As a corollary we obtain (3.67), which is a key ingredient in Section 3. As the prototype of such couplings already appears in [12], in this work we will not give a direct proof, but rather argue through fine-tuning the coupling result from [12]. For more discussion, see the beginning of Section 4.2.

Setup and statement
We start by giving a brief introduction to our coupling. Pick k, n > 0 (not necessarily integers) and N ≥ 2n + k. Let γ be an ILERW and λ a SRW, both on Z^3 with γ(0) = λ(0) = 0 and independent of each other. We write η = (γ, λ) for the pair of walks and P = P_{0,0} for its law. We write γ_N = γ[0, T_N], λ_N = λ[0, T_N] and η_N = (γ_N, λ_N) for the walks truncated at the first exit of B_N. Let x, y ∈ B_k. Similarly, let η′ = (γ′, λ′), where γ′ is an ILERW with γ′(0) = x and λ′ is a SRW with λ′(0) = y, again independent of each other. We write P_{x,y} for their joint law. For N ≥ k, we define γ′_N, λ′_N = λ′[0, T_N] and η′_N similarly.
We write U_N for the event that γ_N and λ_N have no intersection (apart from the common starting point) and define the event U′_N similarly for γ′_N and λ′_N. Note that in the second case there is no need to exclude λ′_N(0). We write

P^N[η ∈ ·] = P[η ∈ · | U_N]  and  P^N_{x,y}[η′ ∈ ·] = P_{x,y}[η′ ∈ · | U′_N],   (4.4)

for the laws of η and η′ conditioned on U_N and U′_N, respectively. For N ≥ n + k, we write η_N =_n η′_N if the paths agree from their first exit of B_{N-n} onwards, i.e.,

γ_{N-n,N} = γ′_{N-n,N}  and  λ_{N-n,N} = λ′_{N-n,N},

where γ_{N-n,N} = γ[T_{N-n}, T_N], with the other notations defined similarly.
We are now ready to state our coupling.
Proposition 4.1. There exist β 1 > 0 and c 1 < ∞ such that for all n, k > 0, N ≥ 2n + k, x, y ∈ B k , there is a coupling Q of η under P N and η under P N x,y , such that uniformly for any self avoiding path γ • from x to ∂B N . This is possible thanks to Lemma 2.2. Note that conditioning on γ M N = γ N , we have This observation, along with the coupling P between γ and γ N , allows us to modify the coupling in

Variations on a coupling by Lawler
In this subsection, we are going to restate the coupling result from [12] under a setup that suits our needs.
In the course of proving the existence of the infinite two-sided loop-erased random walk (ITLERW), Greg Lawler considered a pair of ILERW's started from the origin, conditioned not to intersect each other up to some level, and then tilted by a loop term. He then constructed a coupling between such a pair of loop-erased walks and another pair conditioned on some prefixed initial configurations. As we are going to see, this coupling is intimately related to the non-intersection probability of a LERW and a SRW. For instance, if we consider η under P^n (see the previous subsection for the precise definition), and let ι be the loop erasure of λ_n, then the law of (γ_n, ι) can also be described through a tilting by loop terms. Hence, it is possible to modify the coupling from [12] to obtain Prop. 4.1. However, as the setup and the tilting terms in [12] differ slightly from ours, some care must be taken.
We start by restating the coupling in [12]. Let γ^1 and γ^2 be two independent ILERW's starting from 0 and denote their joint law by M. For 0 < k < N, write M_N for the law of M tilted by

G_N(γ^1_N, γ^2_N) := 1{γ^1_N ∩ γ^2_N = {0}} exp(-L_N(γ^1_N, γ^2_N)),   (4.7)

where L_N(γ^1_N, γ^2_N) is the loop measure of the loops in B_N that touch both γ^1_N and γ^2_N. Let g^1, g^2 be two SAP's started from 0 and stopped at first exiting B_k, such that (4.8). Let M^g be the law of γ^1 and γ^2 conditioned on (γ^1_k, γ^2_k) = (g^1, g^2) (in this case we write γ^{1,g} and γ^{2,g} for γ^1 and γ^2), and let M^g_N be M^g tilted by G_N(γ^{1,g}_N, γ^{2,g}_N).
Remark 4.4. Note that in [12], the definition of L_N(γ^1_N, γ^2_N) for d = 3 is the measure of loops in B_N \ {0} that touch both γ^1_N and γ^2_N. Our choice in (4.7) does not change the tilted probability law, but it gives us some notational convenience below, when we no longer start both walks from the same point.
We are now ready to state the original version of this coupling. Note that although the original version used e as the ratio between exponential scales, this is not a problem for us, since it was stated explicitly in [12] that the exponents do not have to be integers.

Proposition 4.5 ([12]). There exist β_3 > 0 and c_3 < ∞ such that for all k, n > 0, N ≥ 2n + k and any g = (g^1, g^2) satisfying (4.8), we can find a coupling Q^∞ of M_N and M^g_N such that

Remark 4.6. Although it is tempting to claim that we can obtain the coupling in Proposition 4.1 by appropriately "adding back" loops from an independent loop soup to γ^2 and γ^{2,g} simultaneously, this is in fact imprecise, due to the fact that the distributions of the "tip" of a LERW and of an ILERW differ greatly (also, there are a few stitches in the choice of loop terms). We will not discuss this in detail here, but we mention that, in order to generate objects with the right distribution, one has to be very careful both in the sampling of the γ's and in the choice of loop terms (e.g. L_N(γ^1_N, γ^2_N) in (4.7)) in the tilting procedures.
In fact, the "initial configuration" in the definition above does not have to be a nearest-neighbor SAP. As we now explain, this coupling also works under more general setups, especially when walks start from points other than the origin. Pick x, y ∈ B k and let (γ 1,x , γ 2,y ) be two independent ILERW starting from x and y respectively and record their joint law by M x,y . Define (γ 1,x N , γ 2,y N ) accordingly. Again let M N x,y be the law of M x,y tilted by G N (γ 1,x N , γ 2,y N ). We now state a variant of Prop. 4.5. Note that we still denote the coupling by Q ∞ .
We now explain briefly why this variant holds.
• The constant in the separation lemmas (Lemmas 2.28 and 2.29 in [12]) stays unchanged and hence is uniform if one replaces paths from the origin by a pair of paths with different starting points, for it is inherited from Lemma 2.11, ibid., where the constant does not depend on (in the notation of that lemma in [12]) the choice of A as long as it is a subset of C n ; • Throughout the proof in [12] the probability of the coupling getting destroyed is always bounded by the probability that a (conditioned) random walk returns to the ball B k , see e.g. Lemma 2.32, ibid, hence the argument is still valid for the setup of Prop. 4.7.
• Also, we note that the change from n to 3n/2 is merely for the convenience of the coherence of notations in this paper.
Remark 4.8. Although it is not needed in this work, we would like to mention that the coupling in [12] actually works for even more general initial configurations which can just be two subsets of B k with a terminal point (in other words, starting points for the walks). For more discussion, see [15].

Proof of Proposition 4.1
We now give a proof of Prop. 4.1 through fine-tuning the coupling in Prop. 4.7. First, we claim that it suffices to prove the following coupling, which serves as a link between ILERW-SRW couplings of this work and the ILERW-ILERW couplings of [12]. For more comments, see the beginning of Section 4.2.
Pick N > k > 0. Let γ be an ILERW and γ̄ a LERW stopped at exiting B_N, with γ(0) = γ̄(0) = 0 and independent of each other. We write N for their joint law and N̂ for the joint law tilted by G_N(γ_N, γ̄) (see (4.7) for the definition of G_N). Similarly, let γ^x be an ILERW and γ̄^y a LERW stopped at exiting B_N, with γ^x(0) = x and γ̄^y(0) = y, independent of each other. We write N_{x,y} for their joint law and N̂_{x,y} for the joint law tilted by G_N(γ^x_N, γ̄^y).
Proposition 4.9. There exist β 4 > 0 and c 4 < ∞ such that for all n, k > 0, N ≥ 2n + k + 1 and any x, y ∈ B k , there is a coupling Q of (γ, γ) under N and (γ x , γ y ) under N x,y , such that Independently from Q, sample a loop soup L with intensity measure m (see above ( 2.6) for definition of m) and denote the product measure by Q. We then add loops from L that stay inside B N and do not touch γ N to γ, and add loops inside B N that do not touch γ x N to γ y , according to the procedure described in Prop. 2.5. Also, to both γ and γ y we attach an independent SRW that starts from the terminal point respectively and denote the new, concatenated paths by λ and λ y respectively.
We claim that (γ, λ) and (γ^x, λ^y) have the laws P^N and P^N_{x,y} respectively, as required in Prop. 4.1. To verify this claim, it suffices to check the distribution of (γ_N, λ_N) and of (γ^x_N, λ^y_N). For brevity we only check the first one. Let

be the "energy" function for (γ_N, γ̄_N) under N̂. Similarly, let

be the "energy" function for (γ_N, λ_N) under P^N. Given ι^† and a loop soup L^†, let λ^†(ι^†, L^†) stand for the path formed by adding back the loops in L^† to ι^†. Then, to verify the claim above, it suffices to check

(4.13)

where the summation is over all possible realizations of a loop soup. Here we let P stand for the law of L. Let μ^∞ be the law of an ILERW starting from 0 and μ_N the law of a LERW from 0 stopped at T_N. Thus, we can rewrite (4.11) as

Let p^0 be the law of the simple random walk from 0 stopped at first exiting B_N; then

(4.15)

where L^†(γ^†, ι^†) stands for the set of loops that touch both γ^† and ι^†. Comparing (4.14) and (4.15), it suffices to show that, given γ^† and ι^† such that γ^† ∩ ι^† = {0},

where the summation is again over all possible realizations of a loop soup. But this follows from the definition of L_N and the restriction property of Poissonian loop soups. Hence, we have verified (4.13). Now it suffices to show that, with the construction above,

We observe that if λ_N ≠_{n+1} λ^y_N, then at least one of

must happen. We can bound the probability of the former by c 2^{-n/2} through a classical estimate on loop measures (see for instance Lemma 2.6 in [12]), and that of the latter by c 2^{-βn} through (4.10). This finishes the proof of (4.17) as well as of Prop. 4.1.
We now turn to Prop. 4.9. To construct the coupling in Prop. 4.9 for N + 1 from Prop. 4.7 for N , we tilt the law of (γ N , γ) and (γ N , γ ) from Q ∞ to Q by "extra loop terms" (we will explain what this means immediately below). To show that under the new law Q paths are also coupled in the sense of ( 4.10) with high probability, it suffices to show that 1) the "Radon-Nikodym" derivative is uniformly bounded; 2) if paths have been coupled for many steps, then the "Radon-Nikodym" derivative should not differ too much and the laws of the "tips" we need to add from step N to N + 1 do not differ too much either.
In order to describe the tilting procedure we need to introduce some notations. As in the proof above, let µ ∞ i be the law of an ILERW γ i started from the origin, i = 1, 2, and µ N be the law of LERW γ 2 N from 0 stopped at T N . Let Γ N be the set of paths from the origin stopped at first exiting B N . Let υ i ∈ Γ N and ζ i = υ i ⊕ ι i ∈ Γ N +1 , i = 1, 2. We use bold fonts to denote a pair of paths, i.e., • = (· 1 , · 2 ) as a shorthand. Thus the decomposition above is written as γ N +1 = γ N ⊕ ι.
Note that the law N can be written as follows: , and the law M N can be written as follows. .
For g ∈ Γ N +1 , let z be the terminal point of g N and decompose g as g N ⊕ ι and define a new probability law of γ N +1 by where p z is the probability law of W , a simple random walk started from z. In other words, the law of µ ∞ can be described as: take an ILERW, truncate it at first exit of B N , then regard it as if it were part of γ 2 N +1 under µ N +1 , and "attach the tail" through the conditional law under µ N +1 . Hence, for all then (γ N +1 ) N under M has the same marginal of γ N under M N .
We define γ x,y N +1 ∼ M x,y similarly. For υ ∈ Γ N +1 × Γ N +1 , we write the Radon-Nikodym derivative we need to investigate by .
We define Z x,y (υ x,y ) similarly. As in Section 3 of [12], we have the following properties of Z and Z x,y . We will only sketch its proof as it is very similar to Prop. 3.1 in [12].
To check ( 4.19), it suffices to express both Z and Z x,y in loop terms and see that if υ = n υ x,y , then the ratio Z(υ)/Z x,y (υ x,y ) can be bounded by the exponential of loop terms of loops connecting B c N and B N −n , which gives the right-hand side of the inequality in ( 4.19).
Proof of Prop. 4.9. We start with Q ∞ from Prop. 4.7. First, sample (γ 1 N , γ 2 N ) and (γ 1,x N , γ 2,y N ) according to Q ∞ . Attach to γ 1 N an SRW conditioned to avoid γ 1 N , erase loops, and stop at exiting B N +1 and to γ 2 N an SRW stopped at exiting B N +1 conditioned to avoid γ 2 N , both independent from (γ 1 N , γ 2 N ) and of each other. We denote the pair of attached paths by (ι 1 , ι 2 ) and write Similarly, we attach to (γ 1,x N , γ 2,y N ) a pair of (ι 1,x , ι 2,y ) and write Then it is easy to see that γ N +1 and γ x,y N +1 has the law of M and M x,y . We now claim that it is still possible to couple γ N +1 and γ x,y N +1 (with little abuse of notation we still call it Q ∞ ) such that for some β > 0, To prove this, it suffices to show that on the event (γ 1 N , γ 2 N ) = 3n/2 (γ 1,x N , γ 2,y N ) , the conditional law of (ι 1 , ι 2 ) and (ι 1,x , ι 2,y ) under M and M x,y respectively has a total variation distance uniformly bounded by c2 −3n/2 . Here "uniformly" means regardless of actual configuration of (γ 1 N , γ 2 N ) and (γ 1,x N , γ 2,y N ). This follows from Lemma 2.2.
We finish by constructing a new measure Q through "tilting" γ_{N+1} and γ^{x,y}_{N+1} in Q^∞ by Z and Z^{x,y} respectively. Then, (4.21) combined with (4.18) for the "bad" case η_N ≠_{3n/2} η^g_N and with (4.19) for the good case η_N =_{3n/2} η^g_N guarantees that for some c_4 < ∞ and β_4 > 0,

This finishes the proof of (4.10) (note that the N in the setup of (4.10) is N + 1 here).
Remark 4.11. The crucial observation leading to the proof above is that the ratio between μ^∞[γ_N ∈ ·] and μ_{N+1}[γ_N ∈ ·] is uniformly bounded from above and below. This is not true for μ^∞[γ_N ∈ ·] and μ_N[γ_N ∈ ·]. See also Remark 4.6.
At the end of this subsection, we state another coupling which is related to but not a direct consequence of Prop. 4.1. Although we do not need it in this work, we still state it here as it is a strengthened version of Prop. 4.2 which should have a place in the family portrait of couplings that appear in this section. We will not provide its proof here but remark that it follows from a modification of the tilting arguments in the proof above. In this case, we will need to tilt both γ 1 N and γ 2 N and derive bounds similar to ( 4.18) and ( 4.19).
In the notation of Proposition 4.2, we consider η M under P  x,y , such that (4.23)

One-point function estimates for LERW
The goal of this section is to establish the main result of this work, namely Theorem 1.1. We lay out the main structure of the proof in Section 5.1, and then give the proofs of two key propositions in Sections 5.2 and 5.3 respectively.

Outline of the proof
We start with a recap of the setup. Let D be the open unit ball in R^3 and let D̄ be its closure. Fix x ∈ D \ {0}. We write x_n for the lattice point of Z^3 nearest to 2^n x. As introduced in Section 1, we are interested in a_{n,x} := P( x_n ∈ LE(S[0, T_n]) ), where S is the SRW started from the origin and T_n = T_{2^n}(S). We first claim that, in order to establish (1.4), it suffices to estimate a Green's function and a non-intersection probability under a setup slightly different from that of Section 3. Let X = X^n be a simple random walk started at x_n conditioned on τ_0 < T_n, where τ_0 stands for the first time that it hits the origin. When no confusion arises, we write X^• for LE(X[0, τ_0]) as a shorthand and keep the dependence on n implicit. As a convention, we will (and will only) omit the X in the notation when it comes to hitting times for X^n. Let Y be an independent simple random walk started at x_n and write Y^• for Y[1, T_n(Y)].
Lemma 5.1. With the notation above,

Proof. Let Z be a random walk started at the origin conditioned to hit x_n before hitting ∂B(2^n), independent of Y. Write

for the last time that Z hits x_n up to T_n(Z). Then, by Proposition 8.1.1 of [21], we have

Remark 5.2. The one-point function a_{n,x} and the expected length M_{2^n} are intimately related quantities. Loosely speaking, for a 'typical' point x, in (5.2) we have G_{B_n}(0, x_n) ≍ 2^{-n}, while the non-intersection probability in the RHS is comparable to Es(2^n). Thus, summing over x ∈ Z^3, we see that E(M_{2^n}) is comparable to 2^{2n} Es(2^n), which gives an intuitive explanation for (1.10).
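Written out, the heuristic of Remark 5.2 is the following back-of-the-envelope computation (not a proof: it pretends that G_{B_n}(0, x_n) ≍ 2^{-n} holds for all of the roughly 2^{3n} lattice points of B(2^n) and ignores the points near the origin and near the boundary):

```latex
E\bigl[M_{2^n}\bigr]
  \;\asymp\; \sum_{x_n \in B(2^n)} a_{n,x}
  \;\asymp\; \#B(2^n)\cdot 2^{-n}\,\mathrm{Es}(2^n)
  \;\asymp\; 2^{3n}\cdot 2^{-n}\,\mathrm{Es}(2^n)
  \;=\; 2^{2n}\,\mathrm{Es}(2^n)
  \;\asymp\; 2^{(2-\alpha)n},
```

which matches the normalization M_n / n^{2-α} appearing in the tightness statement at the end of the proof of Corollary 1.3.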
It is known (see Proposition 1.5.9 of [10]) that there exists a universal constant a > 0 such that Thus, it suffices to estimate By Proposition 1.5.10 of [10], it follows that there exists a universal constant b > 0 such that (Compare this with ( 5.5).) Therefore, what we really need to estimate is the numerator of the fraction in ( 5.6).
We will first deal with part (A). The next lemma shows that, for the non-intersection probability, we may consider X^q := LE(X^n[0, T_{(1-q)n}]) instead of LE(X^n[0, τ_0]) (i.e. X^•). As its proof is long and technical, we postpone it to Section 5.2.
Lemma 5.3. There exists a universal constant δ > 0 such that for all n, q ∈ (0, 1) and x ∈ D \ {0},

We now discuss part (B). By the strong Markov property and Proposition 1.5.10 of [10], we have

where P^y is the law of the SRW W that starts from y and c > 0 in the last line is a universal constant. Therefore, it suffices to estimate

i.e., we do not need to worry about X[T_{(1-q)n}, τ_0]. Let

Note that X^q and Y^• implicitly depend on n. We will show that there exist a universal constant ρ > 0 and a constant c_x > 0 depending only on x such that for all n

by proving that there exist universal constants r > 0 (in fact, r = 2^{-(1+α)}) and ρ > 0 such that

This is in turn proved through the following proposition.
Proposition 5.4. Let d x = min{|x|, 1 − |x|}. There exist universal constants c > 0, δ > 0 and q 0 > 0 such that for all n and x ∈ D \ {0}, if we let q = q 0 , Now we are ready to prove the main theorem of this paper.
and that for all n ≥ N x and x ∈ D \ {0} with |x| ∈ [ 1 2 , 1), This shows that there exist universal constants b 1 , b 2 > 0 and N x ∈ N depending only on x ∈ D \ {0} such that for all n ≥ N x and x ∈ D \ {0}.
This ensures that r x = 2 −(1+α) for all x ∈ D \ {0} and that for some c x > 0 depending only on x.
We recall (see ( 5.2), ( 5.6) and ( 5.8)) that It follows from ( 5.5) and ( 5.7) that Therefore, by ( 5.21), we have Here c x = a b · c x . It follows from ( 5.15) and ( 5.16) that c x satisfies where a 1 , a 2 > 0 are universal constants. Thus, we finish the proof of the theorem.

Proof of Lemma 5.3
We define k 0 , k 1 ∈ N as follows (note that d x > 0).
• k 0 is a unique integer satisfying • k 1 is the smallest integer satisfying d x 2 n−k1 3 < 1.
Case 1: X ∼ ⊂ D k0 . In this case, we define In other words, this is the case where X ∼ ⊂ D k0 .
Case 3: X ∼ ∩ D 0 = ∅. In this case, we define (5.25) We will first deal with P (H 1 ). Note that Suppose that H 1 ∩ {k 2 = k} occurs. Then we see that i.e., the loop-erased walk X q up to the first time that it hits ∂B (1−q)n+k coincides with that for X • since X ∼ does not "destroy" the initial part of X q . Therefore, We remark that Thus, However, it follows that

Thus, we have
Similarly, we have Thus, it follows that On the other hand, it is not difficult to see the above inequality in the other direction as well. This completes the proof.

Proof of Proposition 5.4
As the proof is very similar to that in Section 3, we will state the results in a parallel way. As for the proofs, we will be brief and less detailed in the presentation of the argument. Notation is introduced right before the proposition in which it first appears. The proof of Proposition 5.4 is at the end of this subsection. Recall that q ∈ (0, 1).
• Let H i : The following proposition is similar to Lemma 3.1.
Proposition 5.5. One has Proof. Note that by Lemma 5.3 and ( 5.9), we have where c > 0, δ > 0 are universal constants. Also we recall that f n,x is defined as in ( 5.11). Using the same constants c, δ as above, similarly we have Therefore, we have This gives which finishes the proof of ( 5.26).
We now decompose the conditional probabilities in ( 5.26), just as in Section 3.3. Before stating the parallel result, let us first introduce a few path spaces and probability measures associated with them.
We are now ready to state the decomposition result similar to Prop. 5.6 for the walks introduced above. Again we omit the proof for brevity. Compare this with the ( 3.60) (versus ( 3.58)).

(5.40)
Now we can change the starting points. The following proposition is parallel to ( 3.13) and ( 3.14).
where in the last equality we used the following fact which again follows from Propositions 4.2 and 4.4 of [16]. Combining this with ( 5.28), it follows that The following proposition is similar to Prop. 3.4.
Proposition 5.9. There exist universal constants c 1 > 0, c 2 > 0 and q 2 > 0 such that for all n ≥ 1 and q ∈ (0, q 2 ) Similarly, for all n ≥ 1 and q ∈ (0, q 2 ) P LE(S 2 1 ) ∩ S 2 2 = ∅, G 2 = P LE(S 4 1 ) ∩ S 4 2 = ∅, G 4 1 + O d −c1 x 2 −c2qn ) . Proof. We will show that the numerator of the first fraction of ( 5.47) is well approximated by that of the second fraction by using the same idea as in the proof of Proposition 3.4. We first couple S 3 2 with the Brownian motion B 3 (t) started at y 3 q so that the Hausdorff distance between S 3 2 [0, T n−1 ] and B 3 [0, T n−1 ] is less than 2 2n/3 with probability at least 1 − c exp{−2 cn } for some universal constants c, c > 0 (this is possible by Lemma 3.1 of [7]). We write B 3 = B 3 [0, T n−1 ] for the trace of the Brownian motion.
Take , δ and δ 2 are the constants as in the proof of Theorem 5 of [4] for that case of G 1 = Z 3 and G 2 = 2Z 3 in the statement of the theorem. (For some technical reason, we assume < c 4 where c 4 is a universal constant coming from Lemma 3.3 of [4].) Note that these three constants are universal. Taking these three universal constants, let ρ = 1 10 · min{ , δ, δ 2 } and write B 3,1 = x ∈ R 3 there exists y ∈ 2B 3 such that |x − y| ≤ 2 3n 4 + 20 · 2 (1−ρ)n B 3,2 = x ∈ R 3 there exists y ∈ 2B 3 such that |x − y| ≤ 2 3n 4 for sets of points within a distance 2 3n 4 + 20 · 2 (1−ρ)n and 2 3n 4 of 2B 3 (i.e., Wiener sausages). Let WriteS 1 1 for the simple random walk on 2Z 3 started at x 1 q . We also writeS 1 As in the proof of Proposition 3.4, having conditioned B 3 on on A, we will compare for sufficiently small q > 0 via Theorem 5 of [4]. For this purpose, take q 1 := ρ/100. We assume q ∈ (0, q 1 ). We now apply Theorem 5 of [4] with the parameters in the following table: .
We note that we can take C 0 and c 0 as universal constants because if q ≤ q 1 • The LHS of (132) of [4] is bounded above by C2 −4 n/5 for some universal constant C. We have the same upper bound for |p 7 − p 8 | in line -8 page 133 of [4].
• For the constants K and k in (137) of [4], we can take K as a universal constant and can take k = −1, because the LHS of (137) of [4] can be approximated by the probability that the coupled Brownian motion as in Section 3.4 of [4] avoids the boundary of D \ D q,n even though its starting point is close to the boundary. Namely, since q < q 1 and < c 4 (see Lemma 3.3 of [4] for c 4 ), the LHS of (137) is bounded above by C2 − n/2 for some universal constant C.
• By the same reason as above, we can take C and c of (138) of [4] as universal constants.
• The other constants appeared in the comparison between p i and p i+1 in the proof of Theorem 5 of [4] can be taken as universal constants.
We are now ready to prove the main result of this subsection.