Cut Times for Simple Random Walk

Let $S(n)$ be a simple random walk taking values in $\mathbb{Z}^d$. A time $n$ is called a cut time if \[ S[0,n] \cap S[n+1,\infty) = \emptyset . \] We show that in three dimensions the number of cut times less than $n$ grows like $n^{1 - \zeta}$, where $\zeta = \zeta_d$ is the intersection exponent. As part of the proof we show that in two or three dimensions \[ P(S[0,n] \cap S[n+1,2n] = \emptyset ) \sim n^{-\zeta}, \] where $\sim$ denotes that each side is bounded by a constant times the other side.


Introduction
Let $S(j)$ be a simple random walk taking values in $\mathbb{Z}^d$. An integer $n$ is called a cut time for $S$ if \[ S[0,n] \cap S[n+1,\infty) = \emptyset, \] where $S[0,n] = \{S(j) : 0 \le j \le n\}$. If $d \le 2$, then with probability one the path has no cut times. However, if $d \ge 3$, the path has cut times with positive probability; in fact, with probability one the path has infinitely many cut times. This can be proved by considering the random time $\xi_n = \inf\{j : |S(j)| \ge n\}$ and showing that with probability one $\xi_n$ is a cut time for infinitely many values of $n$ (see [8] for details). In this paper we show that the number of cut times along a path grows at a uniform rate, at least up to logarithmic corrections. The emphasis will be on $d = 3$ because this is the most difficult case, but we start with a quick discussion of higher dimensions. Let $J_n$ be the indicator function of the event $\{S[0,n] \cap S[n+1,\infty) = \emptyset\}$ and let $R_n = R(n) = \sum_{j=0}^{n} J_j$. If $d \ge 5$ (see [9]), then \[ \lim_{n\to\infty} P\{S[0,n] \cap S[n+1,\infty) = \emptyset\} = p = p(d) > 0. \]
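The objects just defined are easy to simulate. The following sketch (illustrative only; the function names are our own, and the finite horizon stands in for $S[n+1,\infty)$, so it can only overcount the true cut times) counts the cut times of a simulated walk in $\mathbb{Z}^3$:

```python
import random

def srw_path(n, dim=3, rng=random):
    """Simple random walk path of length n in Z^dim, started at the origin."""
    pos = (0,) * dim
    path = [pos]
    for _ in range(n):
        axis = rng.randrange(dim)          # coordinate to move in
        sign = rng.choice((-1, 1))         # direction of the step
        pos = tuple(p + (sign if i == axis else 0) for i, p in enumerate(pos))
        path.append(pos)
    return path

def cut_times(path, n):
    """Indices j <= n with path[0..j] disjoint from path[j+1..end].

    The finite tail of the simulated path replaces S[n+1, infinity),
    so this can only overcount the true cut times."""
    return [j for j in range(n + 1)
            if set(path[:j + 1]).isdisjoint(path[j + 1:])]
```

For a path that moves in a straight line, every time is trivially a cut time, which gives a quick sanity check on the implementation.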
One can show that with probability one \[ \lim_{n\to\infty} \frac{1}{n} R_n = p. \tag{1} \] Perhaps the easiest way to see this is to introduce a second simple random walk $\tilde S$, independent of $S$, and to let $\hat S$ be the "two-sided" walk \[ \hat S(j) = \begin{cases} S(j), & j \ge 0, \\ \tilde S(-j), & j \le 0. \end{cases} \] If $\hat J_n$ is the indicator function of the event $\{\hat S(-\infty, n] \cap \hat S[n+1, \infty) = \emptyset\}$, then $\hat J_n$ is a stationary process. The ergodic theorem [2, Theorem 6.21] states that \[ \frac{1}{n} \sum_{j=0}^{n} \hat J_j \to p \] with probability one, and from this it is not difficult to conclude (1). (Research supported by the National Science Foundation and NSERC.)
(In this paper, we use $c, c_1, c_2, \ldots$ to denote arbitrary positive constants that depend only on the dimension $d$. The values of $c, c_1, c_2$ may change from place to place, but the values of $c_3, c_4, \ldots$ will not change.) For $d = 4$ the nonintersection probability decays like $(\ln n)^{-1/2}$ [9], and therefore \[ E(R_n) \sim c_3\, n (\ln n)^{-1/2}. \]
By the methods in [9, Chapter 7], it can be proved that $(c_3 n)^{-1} (\ln n)^{1/2} R_n$ converges in probability to the constant 1. However, the convergence is not with probability one [9]. (There are a number of ways to see this; one is to use an argument similar to the one in the final section of this paper for $d = 3$.) For the remainder of this paper we will consider $d \le 3$. As $n \to \infty$ [3], \[ P\{S[0,n] \cap S[n+1,2n] = \emptyset\} \approx n^{-\zeta}, \tag{2} \] where $\approx$ denotes that the logarithms of both sides are asymptotic and $\zeta = \zeta_d$ is the intersection exponent. The intersection exponent is defined by taking independent Brownian motions $B^1(t), B^2(t)$ starting distance one apart and defining $\zeta$ by \[ P\{B^1[0,t] \cap B^2[0,t] = \emptyset\} \approx t^{-\zeta}, \qquad t \to \infty. \] It is not too difficult to show that such a $\zeta$ exists for Brownian motion, although it takes more work to show that (2) holds. Cranston and Mountford [6] have shown that (2) holds for all mean zero, finite variance, truly $d$-dimensional random walks. It is a standard estimate that $\zeta_1 = 1$. The values of $\zeta_2$ and $\zeta_3$ are unknown; the best rigorous estimates are given in [4]. Duplantier and Kwon [7] have conjectured from a nonrigorous conformal field theory argument that $\zeta_2 = 5/8$. This value agrees with simulations [5,13], and simulations suggest that $\zeta_3$ is between $.28$ and $.29$.
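The quantity appearing in (2) can be explored numerically. The sketch below (our own illustration; the sample sizes are arbitrary) estimates $P\{S[0,n] \cap S[n+1,2n] = \emptyset\}$ for a planar walk by direct Monte Carlo:

```python
import random

def nonintersection_prob(n, trials, dim=2, seed=0):
    """Monte Carlo estimate of P{ S[0,n] does not meet S[n+1,2n] }.

    Runs one path of length 2n per trial and checks whether the range of
    the first half (which includes S(n)) is disjoint from the second half."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = (0,) * dim
        first, second = {pos}, []
        for t in range(1, 2 * n + 1):
            axis = rng.randrange(dim)
            sign = rng.choice((-1, 1))
            pos = tuple(p + (sign if i == axis else 0)
                        for i, p in enumerate(pos))
            if t <= n:
                first.add(pos)
            else:
                second.append(pos)
        if first.isdisjoint(second):
            hits += 1
    return hits / trials
```

For $n = 1$ the probability can be computed by hand: the only way $S(2)$ can hit $\{S(0), S(1)\}$ is by reversing the first step, so the true value in $d = 2$ is $3/4$, which gives a check on the estimator.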
One of the goals of this paper is to improve the convergence in (2). We show that for $d = 2, 3$ there are constants $c_4, c_5$ such that for all $n$, \[ c_4 n^{-\zeta} \le P\{S[0,n] \cap S[n+1,2n] = \emptyset\} \le c_5 n^{-\zeta}, \tag{3} \] and, for $d = 3$, the analogous bounds with $S[n+1,2n]$ replaced by $S[n+1,\infty)$. The relation (3) also holds for $d = 1$, but this is a well known result for random walks, related to the "gambler's ruin" estimate. Let $K_{j,n}$ be the indicator function of a localized nonintersection event, defined precisely in the final section. We will prove two theorems: Theorem 1.1 states that $n^{\zeta - 1} R_n$ is bounded away from zero and infinity in probability, and Theorem 1.2 states that for $d = 3$, with probability one, $\ln R_n / \ln n \to 1 - \zeta$. We expect that $n^{\zeta-1} R_n$ converges in distribution to a nondegenerate random variable, but we have no proof of this. The main technical tool in the proofs of Theorems 1.1 and 1.2 is the estimate (3). Let $B^1, B^2$ be independent Brownian motions in $\mathbb{R}^d$ ($d = 2,3$) and let $T^i_n = \inf\{t : |B^i(t)| = n\}$. In [11] it was shown that there exist constants $c_1, c_2$ such that \[ c_1 n^{-2\zeta} \le P^{x,y}\{B^1[0,T^1_n] \cap B^2[0,T^2_n] = \emptyset\} \le c_2 n^{-2\zeta}, \tag{4} \] where $P^{x,y}$ indicates probabilities assuming $B^1(0) = x$, $B^2(0) = y$ with $|x| = |y| = 1$. The fact that the probability of no intersection is logarithmically asymptotic to $n^{-2\zeta}$ follows easily from subadditivity and scaling; the importance of the above result is that the probability equals $n^{-2\zeta}$ up to a multiplicative constant. In this paper we prove the analogue of this for random walk.
Since (5) is the key estimate in this paper, let us describe briefly the idea used in the proof. We use the standard Skorokhod construction to define simple random walks $S^1, S^2$ and Brownian motions $B^1, B^2$ on the same probability space so that, with high probability, the paths of $S^i$ are very close to those of $B^i$. We have a good estimate, (4), for the probability that the Brownian motions do not intersect. The lower bound in (5) is the easier estimate. We first show that Brownian motions conditioned not to intersect have a good chance of being reasonably far apart, and conclude that the corresponding simple walks are also far apart (and hence do not intersect).
The upper bound is somewhat trickier. We first need to prove some estimates that say, intuitively, "random walks that get close are very likely to intersect." If we were only interested in $d = 2$, we could skip these estimates and rely on the discrete Beurling estimate (see [9, Theorem 2.5.2]); however, we need to do the work for $d = 3$. Let $b_n$ denote, roughly, the probability that the random walks do not intersect before reaching the sphere of radius $2^n$, normalized by $2^{2\zeta n}$. (We actually use a slightly different definition of $b_n$ in the proof, but this definition will do for the heuristic description.) We give an inequality for $b_n$ in terms of $b_j$, $j < n$. We do this by considering the Brownian motions $B^1, B^2$ associated with the random walks. Either the Brownian motions do not intersect (we can estimate the probability of this using (4)), or there is a smallest $j$ such that the Brownian motions do not have any intersection after reaching the sphere of radius $2^j$. The probability that the random walks do not intersect and that a given $j$ is the smallest such index is bounded essentially by the product of: the probability that the random walks do not intersect up to the sphere of radius $2^{j-1}$; the probability that between $\xi^i_{2^{j-1}}$ and $\xi^i_{2^j}$ the Brownian motions intersect but the random walks do not; and the probability that the Brownian motions do not intersect after hitting the sphere of radius $2^j$. The last probability can be estimated easily using (4) and Brownian scaling.
With little more than Theorem 1.3, we are able to give first and second moment estimates for the number of cut times. If $X$ is any nonnegative random variable with $\mu = E(X)$, then \[ P\{X \ge \mu/2\} \ge \frac{\mu^2}{4\,E(X^2)}. \]
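The second-moment step just stated is a Paley–Zygmund type bound, and it can be checked numerically for any finite distribution; the sketch below (an illustration, not part of the proof) computes both sides:

```python
def second_moment_bound(values, probs):
    """For a nonnegative random variable X given by (values, probs), return
    (P{X >= mu/2}, mu^2 / (4 E[X^2])); the first should dominate the second."""
    mu = sum(v * p for v, p in zip(values, probs))        # E(X)
    m2 = sum(v * v * p for v, p in zip(values, probs))    # E(X^2)
    lhs = sum(p for v, p in zip(values, probs) if v >= mu / 2)
    rhs = mu * mu / (4 * m2)
    return lhs, rhs
```

For example, for $X$ uniform on $\{0, \ldots, 9\}$ one has $\mu = 4.5$, $E(X^2) = 28.5$, so the bound asserts $P\{X \ge 2.25\} = 0.7 \ge 20.25/114 \approx 0.178$.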
Hence the moment estimates immediately give Theorem 1.1. The paper is organized as follows. Section 2 gives some preliminary lemmas about Brownian motion and simple random walk. In particular, it is shown that Brownian motions conditioned not to intersect are likely to stay a good distance apart. In Section 3 we review the strong approximation of Brownian motion by simple random walk derived from the Skorokhod embedding; this is a well known construction, but it is useful to describe it here. The proof of Theorem 1.3 is given in Section 4. The idea of the proof is similar to that in [3,6,12]; however, things must be done somewhat more carefully to make sure that the estimates hold up to multiplicative constants. The last section contains the proofs of Theorems 1.1 and 1.2. I would like to thank the referee and Chad Fargason for corrections to an earlier version of this paper. This paper was written while the author was visiting the University of British Columbia.

Preliminary Results
In this section we prove some lemmas about Brownian motion and simple random walk. Let $d = 2$ or $3$, and let $B^1, B^2$ be independent Brownian motions in $\mathbb{R}^d$ starting at $x, y$ respectively with $|x| = |y| = 1$. We start by stating the main estimate from [11]. Let $T^i_n = \inf\{t : |B^i(t)| = n\}$ and let $A_n$ be the event $\{B^1[0,T^1_n] \cap B^2[0,T^2_n] = \emptyset\}$.

Lemma 2.1 [11] There exist a $c_9 < \infty$ and an increasing function $f : (0,2] \to (0,\infty)$ such that if $|x| = |y| = 1$, then for all $n \ge 1$, \[ f(|x-y|)\, n^{-2\zeta} \le P^{x,y}(A_n) \le c_9\, n^{-2\zeta}. \]

It was shown in [11] (see Corollary 3.11, Corollary 3.12) that Brownian paths conditioned not to intersect have a reasonable probability of being not too close together at the endpoints; i.e., there is an $\varepsilon > 0$ such that the conditional probability, given $A_n$, that the endpoints are well separated is at least $\varepsilon$. If we take Brownian paths until they reach distance $n/4$ and condition them to have no intersection up to that time, then with probability at least $\varepsilon$ the distance between $B^i(T^i_{n/4})$ and $B^{3-i}[0, T^{3-i}_{n/4}]$ will be at least $\varepsilon n/4$. We can now continue the paths up through distance $n$, and we can be sure that there is a positive probability (independent of $n$) that the paths will separate. In fact we can condition the paths from $T^i_{n/4}$ to $T^i_n$ to do almost anything that has a positive probability (independent of $n$) of occurring. This idea can be used to prove the next lemma.

Lemma 2.2 [11, Corollary 3.11, Corollary 3.12] For any $\rho > 0$, let $F_n = F_n(\rho)$ and $G_n = G_n(\rho)$ be the events, defined precisely in [11], that the paths have no intersection up to time $T^i_n$ and, in addition, terminate well separated. Then for every $\rho > 0$ there is a $u > 0$ such that for all $n \ge 8$ and all $|x| = |y| = 1$ with $|x - y| \ge 2\rho$, \[ P^{x,y}(G_n) \ge u\, n^{-2\zeta}. \]

The next lemmas are needed to formalize the statement "if two Brownian motions or two random walks get close to each other then they are likely to intersect." If we were only interested in $d = 2$, we would not need these lemmas, but could instead use the Beurling projection theorem, either continuous or discrete (see [1] for the continuous version and [9] for the discrete version). However, there is no useful analogue of this theorem for $d = 3$. Since the proofs below work equally well in two or three dimensions, we will just use these lemmas and not bother with the Beurling estimates. Let $B$ be a third Brownian motion, independent of $B^1, B^2$, with $P^z$ denoting probabilities assuming $B(0) = z$. This notation is a little ambiguous; since we will use similar notation below, let us clarify: we choose the conditional expectation notation to emphasize that the $P^z$ refers to $B$ and that $Y^i_n$ is a function of the path $B^i[0, T^i_n]$. The first lemma was proved in [11].
Then for every $M < \infty$, $\varepsilon > 0$, $b < \infty$, there exist $\delta > 0$ and $a < \infty$ such that the stated bound holds for $|x| \le n$.

Proof. We will assume $i = 1$ and write $Z_n$ for $Z^1_n$. Without loss of generality we will assume that $b \ge 1$ and $\varepsilon < 1/2$. Cover the ball of radius $n$ by $K = K_n \le c n^3$ balls $V_1, \ldots, V_K$ of radius 1, and for $j = 1, \ldots, K$ let $\tau_j$ denote the first time the path enters $V_j$. By Lemma 2.3, the strong Markov property, and Brownian scaling, we can find a $\delta$ and an $a$ such that the desired estimate holds for each $j$, where the supremum is over all $j$ with $\tau_j < T^1_{2n}$. Since $K_n \le c n^3$, summing over $j$ gives the bound; and every relevant $z$ with $|z| \le n$ lies in some $V_j$. □

We will need the corresponding results for simple random walk. Let $S^1, S^2$ denote independent simple random walks in $\mathbb{Z}^d$ and let $\xi^i_n = \inf\{j : |S^i(j)| \ge n\}$. Let $S$ be another simple random walk, independent of $S^1$ and $S^2$, and let $\xi_n$ denote the corresponding stopping time for $S$. For any $m < n$, let $X^i(m,n)$ be the random-walk analogue of $Y^i_n$; here $P^z$ denotes probabilities assuming $S(0) = z$, and $X^i(m, n)$ is considered as a function of $S^i[0, \xi^i_n]$.

Proof. We will assume $i = 1$. Assume $k \ge m$, and define the corresponding quantity, where as before $P^z$ denotes probabilities assuming $S(0) = z$. We claim that for every $\varepsilon > 0$ there is a $\delta > 0$ such that (7) holds for all $k$. Once we have (7), the proof proceeds identically to the proof of Lemma 2.3, so we will only prove (7). By the discrete Harnack inequality [9], we obtain (8).
Also, (8) implies the corresponding bound for all $|z| \le 5n/4$. For any positive integer $j$, define the stopping times $\xi^1_s(x)$ analogously, and let $Y_1, \ldots, Y_j$ be the associated indicator variables. Then $Y_1, \ldots, Y_j$ are independent and identically distributed, and independent of $\{S^1(i) : 0 \le i \le \xi_k\}$. By a standard estimate (using, e.g., the invariance principle), $\lambda_j > 0$, and (7) follows. The following can be concluded from Lemma 2.5 in the same way that Lemma 2.4 was concluded from Lemma 2.3.

Lemma 2.6 For every $M < \infty$, $\varepsilon > 0$, $b < \infty$, there exist $\delta > 0$ and $a < \infty$ such that if $|x| \le n$, the corresponding bound holds, where the supremum is over all $z$ with $|z| \le n$.

In the next two lemmas we prove that two Brownian motions, conditioned to avoid each other, actually stay a reasonable distance apart. For a positive integer $n$ we let $A_n$ be the event that $B^1[0,T^1_n] \cap B^2[0,T^2_n] = \emptyset$. Then for every $\varepsilon > 0$, $b < \infty$, there exist $\delta > 0$ and $a < \infty$ such that if $|x|, |y| \le 2^m$ and $m < j \le n$, the stated estimate holds.

Proof. We will assume $i = 1$ and let $D_j = D^1_j$. Assume $|x|, |y| \le 2^m$ and let $\tau$ be the relevant stopping time. By Lemma 2.1 and the strong Markov property, the first and last terms can be bounded. For the middle term, we choose $\delta$ so that the required bound holds for $\tilde G = \tilde G(j, b, \varepsilon)$, defined as in the proof of Lemma 2.4 but for $B^2$ rather than $B^1$. Then, by the strong Markov property applied to the stopping time $\tau$, and again by the strong Markov property, the desired bound follows. □

For $m \le n$ and $\rho > 0$, let $F_n = F^2_n$ and $G_n = G^2_n$ be as defined in Lemma 2.2. For every $b$, $\varepsilon$, $\rho$ there exist $M < \infty$ and $a > 0$ such that if $M \le m < n < \infty$, $|x| = |y| = 2^m$, and $|x - y| \ge 2^{m+1}\rho$, the stated bound holds.

Proof. Suppose $b$, $\varepsilon$, $\rho$ are given. By Lemma 2.2, there is a $u_1 = u_1(b, \rho, \varepsilon) > 0$ such that the separation event has probability at least $u_1$ for all $|x| = |y| = 2^m$ with $|x - y| \ge 2^{m+1}\rho$. Note that for $m$ sufficiently large and $n \ge m$, the corresponding estimates apply. By Lemma 2.7, there exist bounds whose sum over $j$ is small; by summing over $j$, we can find an $M$ such that the bound holds for $m \ge M$. Therefore, by (9)–(11), the result follows for $M$ sufficiently large. □

Skorokhod Embedding
Let $X(t)$ be a one-dimensional Brownian motion starting at the origin. Let $\tau_0 = 0$ and, for $n > 0$, \[ \tau_n = \inf\{t \ge \tau_{n-1} : |X(t) - X(\tau_{n-1})| = 1\}, \qquad Y(n) = X(\tau_n). \] This is the well known Skorokhod embedding of a simple random walk $Y(n)$ in a Brownian motion. It is easy to check that $E(\tau_1) = 1$ and $E(e^{t\tau_1}) < \infty$ for some $t > 0$. Standard exponential estimates give that for every $\varepsilon > 0$ there are a $\delta > 0$ and an $a < \infty$ such that the embedding times concentrate around their means; similar exponential estimates hold for the maximal fluctuation of the Brownian motion, for perhaps different values of $\delta$ and $a$ (we will allow the values of $\delta$ and $a$ to vary in this section).
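The embedding can be simulated directly. The sketch below is an approximation (the Brownian path is discretized with step `dt`, and the small overshoot of the discrete path past the exit level is ignored when snapping $Y(n)$ to the lattice):

```python
import random

def skorokhod_embed(total_time, dt, rng):
    """Extract an embedded +/-1 walk from a discretized 1-d Brownian path.

    tau_n is (approximately) the first time X moves a distance 1 from
    X(tau_{n-1}); the embedded values Y(n) = X(tau_n) are snapped to the
    integer lattice, ignoring the overshoot of the discretized path."""
    x, level = 0.0, 0.0          # current position; value at last tau_n
    walk, times = [0], [0.0]
    t = 0.0
    while t < total_time:
        x += rng.gauss(0.0, dt ** 0.5)   # Brownian increment over dt
        t += dt
        if x >= level + 1.0:             # exit upward: Y goes up by 1
            level += 1.0
            walk.append(walk[-1] + 1)
            times.append(t)
        elif x <= level - 1.0:           # exit downward: Y goes down by 1
            level -= 1.0
            walk.append(walk[-1] - 1)
            times.append(t)
    return walk, times
```

By construction the embedded process takes unit steps, and since $E(\tau_1) = 1$, the average spacing of the embedding times should be close to 1 (slightly larger here because of the discretization overshoot).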
Now let X 1 , . . . , X d be d independent one-dimensional Brownian motions. Let Y j be the simple random walks derived from X j by the Skorokhod embedding and let τ j (n) = τ j n be the corresponding stopping times so that Y j (n) = X j (τ j (n)).
Let $Z_n = (Z^1_n, \ldots, Z^d_n)$ be a multinomial process independent of $X^1, \ldots, X^d$ with $Z_0 = (0, \ldots, 0)$, with $\{Z_n - Z_{n-1} : n = 1, 2, \ldots\}$ independent and \[ P\{Z_n - Z_{n-1} = e_j\} = \frac{1}{d}, \qquad j = 1, \ldots, d, \] where $e_j$ denotes the unit vector whose $j$th component equals 1. Let $S(n) = (Y^1(Z^1_n), \ldots, Y^d(Z^d_n))$ and let $B(t)$ be the associated $d$-dimensional process. Then $B(t)$ is a $d$-dimensional Brownian motion and $S$ is a $d$-dimensional simple random walk. More exponential estimates give, for each $j = 1, \ldots, d$, the corresponding concentration bounds, and hence we get the following. More exponential estimates give \[ P\{T_n \ge n^{2+\varepsilon}\} \le a e^{-n^\delta}. \] Hence:

Lemma 3.2 Let $B$ and $S$ be defined as above. Then for every $\varepsilon > 0$ there exist $\delta > 0$ and $a < \infty$ such that, except on an event of probability at most $a e^{-n^\delta}$, the paths of $B$ and $S$ stay within distance $n^{1/2+\varepsilon}$ of each other up to time $T_{8n}$.

In the next sections we will consider Brownian motions $B$ and simple random walks $S$ defined as above. We will be using the strong Markov property at the times $T_{2n}$. One slight complication that arises is the fact that the stopping times for $B$ and for $S$ do not coincide; another exponential estimate handles this, and we can therefore derive the following lemma.

Lemma 3.3 There exist $\delta > 0$ and $a < \infty$ such that the following holds. For each $n$, there is an event $\Gamma_n$, measurable with respect to the $\sigma$-algebra generated by the paths up to the relevant stopping times, with the property that on the event $\Gamma_n$ the required closeness holds.
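The $d$-dimensional construction can likewise be sketched in code: each step of the $d$-dimensional walk is assigned by the multinomial process to one of $d$ one-dimensional $\pm 1$ walks. (This is illustrative only; here one random source plays the role of both the $X^j$ and $Z_n$.)

```python
import random

def multi_dim_walk(n, d, rng):
    """Assemble a d-dimensional simple random walk from d one-dimensional
    +/-1 walks: Z_n counts how many of the first n steps have been
    assigned to each coordinate, and S(n) = (Y^1(Z^1_n), ..., Y^d(Z^d_n))."""
    one_dim = [[0] for _ in range(d)]   # the d embedded 1-d walks Y^j
    z = [0] * d                         # the multinomial counts Z_n
    s = [(0,) * d]
    for _ in range(n):
        j = rng.randrange(d)            # assign this step to coordinate j
        one_dim[j].append(one_dim[j][-1] + rng.choice((-1, 1)))
        z[j] += 1
        s.append(tuple(one_dim[i][z[i]] for i in range(d)))
    return s
```

Each increment of the assembled path changes exactly one coordinate by $\pm 1$, so the result is indeed a $d$-dimensional simple random walk.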

Bounds for Random Walk
In this section we will prove Theorem 1.3, starting with the lower bound in (5). Throughout this section we let $(B^1, S^1)$ and $(B^2, S^2)$ be two independent Brownian motion–random walk pairs, coupled as in the previous section. Let $\rho = .1$, $\varepsilon = .25$, $b = 1$ in Lemma 2.8, and let $M, a$ be as in the conclusion of that lemma. Assume $B^1(0) = S^1(0) = 2^m e_1$ and $B^2(0) = S^2(0) = -2^m e_1$, where $m \ge M$ and $e_1$ is the unit vector whose first component is 1. As before, we let $Q^i_j$ denote the exceptional coupling events. It follows from Lemma 3.1 that (assuming $j \ge m$) their probabilities are exponentially small, for perhaps different values of $a$ and $\delta$. By summing over all values of $j$ and $i = 1, 2$, we can therefore conclude the following lemma.
Lemma 4.1 There exist $c_{10} < \infty$ and $\delta_1 > 0$ such that the following holds. Let $Q^i_j$ be defined as above and let $\overline{Q}$ denote the union of the $Q^i_j$. Then if $B^1(0) = S^1(0) = x$ and $B^2(0) = S^2(0) = y$ with $|x| = |y| = 2^m$, \[ P(\overline{Q}) \le c_{10}\, e^{-\delta_1 m}. \] From Lemmas 2.8 and 4.1 we immediately get the following; the events in question are defined as in Lemma 2.8. We also let $C_n$ be the discrete ball of radius $n$, with boundary $\partial C_n = \{z \in \mathbb{Z}^d \setminus C_n : |z - y| = 1 \text{ for some } y \in C_n\}$.

Corollary 4.2
For every $\rho > 0$, there exist $M < \infty$ and $u > 0$ such that if $M \le m \le n$, $x, y \in \partial C_{2^m}$, and $|x - y| \ge \rho 2^{m+2}$, then \[ P^{x,y}\{S^1[0,\xi^1_{2^n}] \cap S^2[0,\xi^2_{2^n}] = \emptyset\} \ge u\, 2^{-2\zeta(n-m)}. \] Once we have this corollary, we can start two simple random walks at the origin: if we force $S^1$ to go directly to $2^m e_1$ along a straight line and similarly force $S^2$ to go directly to $-2^m e_1$, we can conclude the following.

Corollary 4.3
There is a constant $c_7 > 0$ such that if $S^1, S^2$ are simple random walks starting at the origin, then \[ P\{S^1[0,\xi^1_n] \cap S^2[0,\xi^2_n] = \emptyset\} \ge c_7\, n^{-2\zeta}. \] We will now prove the upper bound for the nonintersection probability for random walks. Define $b_n$ to be $2^{2\zeta n}$ times the nonintersection probability up to radius $2^n$, maximized over the allowed starting points. To prove the upper bound it suffices to show that $b_n$ is a bounded sequence. Let $(B^1, S^1)$, $(B^2, S^2)$ be independent Brownian motion–random walk pairs starting at $|x|, |y| \le 2^m$, and for each $n$ let $\Delta_n$ denote the event that the random walks have no intersection up to radius $2^n$.

Lemma 4.4 There exist $c_{12} < \infty$ and $\delta_2 > 0$ such that if $|x|, |y| \le 2^m$ and $m < j \le n$, the stated bound holds.

Proof. Assume $|x|, |y| \le 2^m$ and $m < j \le n$. In this proof we will write $P$ for $P^{x,y}$. We will prove the estimate for $L^1 \cap \Delta_{n+1}$; a similar argument holds for $L^2 \cap \Delta_{n+1}$. Let $\Gamma^1_{2^{j+1}}$ and $\Gamma^2_{2^{j+1}}$ be the events given in Lemma 3.3 for $(B^1, S^1)$ and $(B^2, S^2)$, respectively, and let $\Gamma = \Gamma^1_{2^{j+1}} \cup \Gamma^2_{2^{j+1}}$. By Lemmas 3.3 and 2.1, the contribution from $\Gamma$ is negligible; hence it suffices to prove the bound for some appropriately chosen $c, \delta$. By Lemma 3.3, we need only consider $\Delta_{j+1} \cap \{\tau < T^1_{2^j}\}$. Let $Z = Z^1_j$ be defined as in Lemma 2.6 with $\varepsilon = .1$, $b = 2$. By that lemma we can find a $\delta$ so that the conditional probability is small; by the strong Markov property, the second term on the right is bounded accordingly, while the first term is bounded by $c (2^j)^{-4\zeta}$. (We have assumed without loss of generality that $\zeta > \delta$.) This completes the proof. □
If $m \le n$ and $|x|, |y| \le 2^m$, we may decompose the nonintersection event according to the first scale at which the Brownian motions cease to intersect, where $J_j = J_{j,n}$ is as above. Hence, by Lemma 4.4, if $|x|, |y| \le 2^m$, we obtain a recursive bound for $b_{n+1}$ in terms of $b_j$, $j \le n$, with $u = 2^{-\delta_2} < 1$. To finish the proof of (5) we need only prove the following simple lemma about sequences of positive numbers.

Proof. Without loss of generality we will assume $b_0 = 1$ and that the defining inequality holds with equality. Then $b_n \le a n r_{n-1}$, and hence \[ r_n \le \max\{a n u^n r_{n-1},\; r_{n-1}\}. \]
If we choose $m$ sufficiently large so that $amu^m < 1$ and let $k = r_m$, then we see that $r_n \le k$ for all $n$. Therefore \[ b_n = a \sum_{j=0}^{n-1} b_j u^j \le akn \] for all $n$. Iterating again, we see that this implies that $b_n$ is bounded. □

Corollary 4.6 There exists a $c_8 < \infty$ such that the nonintersection probability for the random walks is at most $c_8 n^{-2\zeta}$; moreover, the corresponding bound holds for all $m \le n$. The proof of (6) from (5) is essentially the same as the proofs of Proposition 3.14 and Proposition 3.15 in [11]. Since the proofs are nearly identical, we will not give them but will just state the results.
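The boundedness mechanism in the sequence lemma above can be illustrated numerically. The sketch below uses the concrete recursion $b_n = a u^n \sum_{j<n} b_j$ with $b_0 = 1$ (our own choice of test case, not necessarily the exact recursion of the lemma) and checks that such a sequence stays bounded when $u < 1$:

```python
def bounded_sequence(a, u, n):
    """Generate b_0, ..., b_n with b_0 = 1 and b_k = a * u**k * (b_0 + ... + b_{k-1}).

    Since sum(b_j) grows like the convergent product prod(1 + a*u**k),
    the factor u**k eventually wins and b_k stays bounded (in fact -> 0)."""
    b = [1.0]
    running_sum = 1.0            # b_0 + ... + b_{k-1}
    for k in range(1, n + 1):
        nxt = a * (u ** k) * running_sum
        b.append(nxt)
        running_sum += nxt
    return b
```

The sequence first grows (while $a k u^k$ exceeds a constant) and then decays geometrically, so its maximum is finite and its tail tends to zero.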
We will write $P$ for $P^{0,0}$.
This completes the proof of Theorem 1.3. In the next section we will need some slight generalizations of the lemmas proved above. Then there exist $c_1, c_2$ such that \[ P\{V_n \cap \Delta_n;\ \max(\xi^1_n, \xi^2_n) \le c_2 n^2\} \ge c_1 n^{-2\zeta}. \]
Proof. It suffices to prove the result for $n$ sufficiently large. For $n$ sufficiently large, Corollary 4.2 gives the bound with some $u > 0$, and from (13) we see that there is a $c_2 < \infty$ such that the time bound holds. Then if $x \in \partial C_n$, $11n/8 \le |y| \le 13n/8$, and $n^2 \le j \le 2n^2$, the stated estimate holds.

Proof. We will just sketch the proof. By Lemma 4.10, we can find an $\varepsilon \in (0, 1/50)$ such that if $S^1$ and $S^2$ start at the origin, the relevant event has positive probability. Fix such an $\varepsilon$. Now, by extending the paths, it is not difficult to see that if $U_1 = U_1(n, \varepsilon)$ is the corresponding event, then there is a $c > 0$ such that the bound holds for all $11n/8 \le |y| \le 13n/8$. Finally, it is easy, using the local central limit theorem, to show that there is a constant $c > 0$ such that if $|x|, |z| \le n + 1$ and $n^2/2 \le j \le 2n^2$, then \[ P^z\{S(j) = x \text{ or } S(j+1) = x;\ j \le \xi_{3n/2}\} \ge c\, n^{-d/2}. \]

Proofs of Theorems
Assume $d = 2, 3$, and let $J_{j,n}$ be the indicator function of the event $\{S[0,j] \cap S[j+1,n] = \emptyset\}$, and let $Y_n = \sum_{j=0}^{n} J_{j,n}$. It follows from Proposition 4.8 that \[ E(Y_n) \ge c\, n^{1-\zeta}. \]
Lemma 5.1 There exists a $c_{21} < \infty$ such that \[ E(Y_n^2) \le c_{21}\, n^{2(1-\zeta)}. \]

Proof. We will show that there exists a $c < \infty$ such that if $0 \le i \le j \le n$, then (14) holds. The lemma then follows easily by expanding the square (recall that $0 < \zeta < 1$ for $d = 2, 3$). To prove (14), we may assume without loss of generality that $i \le n - j$. Let $U$ and $V$ be the appropriate nonintersection events; then \[ P\{J_{i,n} = J_{j,n} = 1\} \le P(U \cap V) = P(U)\,P(V \mid U). \]
By independence and Proposition 4.8 we can bound $P(U)$, and by Proposition 4.9 we can bound $P(V \mid U)$. Combining these estimates gives (14), and hence the lemma. □ Now let $E_{j,n}, F_{j,n}, G_{j,n}, H_{j,n}$ be as defined in Corollary 4.12. Let $X_{j,n}$ be the indicator function of $E_{j,n} \cap F_{j,n} \cap G_{j,n} \cap H_{j,n}$ and let \[ Y_n = \sum_{n^2 \le j \le 2n^2} X_{j,n}. \]
From Lemma 5.1 we know that $E(Y_n^2) \le c_2\, n^{4(1-\zeta)}$. Therefore, by the argument sketched at the end of Section 1, we can conclude the following. Note that Theorem 1.1 follows immediately from this corollary. It remains to prove Theorem 1.2. For the remainder of this section we assume that $d = 3$. Let $R_n$ be as defined in the first section. One direction is easy: note that $E(R_n) \le c\, n^{1-\zeta}$.
Let $\varepsilon > 0$. By Markov's inequality, \[ P\{R_n \ge n^{1-\zeta+\varepsilon}\} \le c\, n^{-\varepsilon}. \] Hence, by the Borel–Cantelli lemma applied along the subsequence $n = 2^k$, with probability one $R_{2^k} < 2^{k(1-\zeta+\varepsilon)}$ for all $k$ sufficiently large, and hence (since $R_n$ is increasing in $n$) \[ \limsup_{n\to\infty} \frac{\ln R_n}{\ln n} \le 1 - \zeta + \varepsilon. \]
Since $\varepsilon$ is arbitrary, with probability one \[ \limsup_{n\to\infty} \frac{\ln R_n}{\ln n} \le 1 - \zeta. \]
There exists a $c_{22} > 0$ (see, e.g., [9, Proposition 5.10]) such that the relevant escape estimate holds. Note that on the event $L_n \cap V_n$ the walk has many cut times by time $\xi_{2^n}$, where $R_j$ is as defined in Section 1. We will show that there exist $\alpha < \infty$ and $c < \infty$ such that if \[ \Lambda_n = \Lambda_n(\alpha) = \bigcap_{n \le j \le n + \alpha \ln n} (L_j \cap V_j), \] then (16) holds. Note that on the event $\Lambda_n$, \[ R(\xi_{2^{n+\alpha \ln n}}) \ge c_{21}\, (2^n)^{2(1-\zeta)}. \]
It follows from (16) and the Borel–Cantelli lemma that with probability one $\Lambda_n$ holds for all $n$ sufficiently large. It is easy to check that if $\Lambda_n$ holds for all sufficiently large $n$, then with probability one \[ \liminf_{n\to\infty} \frac{\ln R_n}{\ln n} \ge 1 - \zeta. \]
Hence it suffices to prove (16). Note that if $\hat s < \infty$, then $L_{\hat s} \cap V_{\hat s}$ holds. Hence it suffices to prove that there is an $\alpha < \infty$ such that for all $n$ sufficiently large, \[ P\{\hat s \ge n + \alpha \ln n\} \le \frac{2}{n^2}. \]
Note that there is a $c_{23} > 0$ such that \[ P\{s_{i+1} = \infty \mid s_0, \ldots, s_i\} \ge c_{23} \] (this follows from Corollaries 4.12 and 5.2 and (15)). It is standard (see [9, Proposition 5.10]) that there is a $u < 1$ such that if $m, k$ are positive integers and $S$ is a simple random walk in $\mathbb{Z}^3$ starting at $|x| \ge 2^{m+k}$, then \[ P^x\{|S(j)| \le 2^m \text{ for some } j \ge 0\} \le u^k. \]
Hence there is a $u < 1$ such that for all $k$ the increments $s_{i+1} - s_i$ have geometric tails. Choose $M$ accordingly, let $N_1, N_2, \ldots$ be independent random variables from this distribution, and let $\hat N = N_1 + \cdots + N_{l-1}$, where $l$ is the first index with $N_l = \infty$. Then $\hat N$ stochastically dominates $\hat s - n$; i.e., for all $r > 0$, \[ P\{\hat s - n \ge r\} \le P\{\hat N \ge r\}. \]