A BOUND FOR THE DISTRIBUTION OF THE HITTING TIME OF ARBITRARY SETS BY RANDOM WALK

We consider a random walk S_n = Σ_{i=1}^n X_i with i.i.d. X_i. We assume that the X_i take values in Z^d, have bounded support and zero mean. For A ⊂ Z^d, A ≠ ∅, we define τ_A = inf{n ≥ 0 : S_n ∈ A}. We prove that there exists a constant C, depending on the common distribution of the X_i and d only, such that sup_{∅≠A⊂Z^d} P{τ_A = n} ≤ C/n, n ≥ 1.

In their study of the Abelian sandpile [AJ], Athreya and Járai encountered the following problem. Let X, X_1, X_2, . . . be i.i.d. random variables which take values in Z^d and have zero mean. Let S_0 = 0, S_n = Σ_{i=1}^n X_i, and for B ⊂ Z^d define the exit time

σ(B) = inf{n ≥ 0 : S_n ∉ B}; (1.1)

σ(B) = ∞ if the set on the right-hand side here is empty. Let {S'_n} and {S''_n} be two independent copies of {S_n}, with corresponding copies σ'(B), σ''(B) of σ(B). Is it true that for every ε > 0 there exists a δ > 0, independent of B, such that

P{σ'(B) ≠ σ''(B), (1 − δ)σ''(B) ≤ σ'(B) ≤ (1 + δ)σ''(B)} ≤ ε? (1.2)

It is easy to see that (1.2) would follow if one could prove that there exists a constant C, independent of B, such that

P{σ(B) = n} ≤ C/n, n ≥ 1. (1.3)

Indeed, given σ''(B) = m, (1.3) bounds the probability that σ'(B) lies in [(1 − δ)m, (1 + δ)m] by

Σ_{(1−δ)m ≤ n ≤ (1+δ)m} C/n ≤ C(2mδ + 1)/(m(1 − δ)). (1.4)

Our main result here is the following theorem, which states that (1.3) holds if X has bounded support.
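Before turning to the proof, the conjectured decay (1.3) can be observed in a toy case. The following sketch computes the exact exit-time distribution of the simple random walk on Z from the interval {−3, . . . , 3} by dynamic programming; the walk, the set B and the horizon are our own illustrative choices, not part of the argument.

```python
from fractions import Fraction

def exit_time_dist(B, n_max):
    """Exact P{sigma(B) = n}, n = 1..n_max, for the simple random walk on Z
    started at 0, where sigma(B) = inf{n >= 0 : S_n not in B}."""
    half = Fraction(1, 2)
    alive = {0: Fraction(1)}          # mass still inside B at the current time
    dist = {}
    for n in range(1, n_max + 1):
        step = {}
        for x, w in alive.items():
            for y in (x - 1, x + 1):  # +-1 steps with probability 1/2 each
                step[y] = step.get(y, Fraction(0)) + w * half
        # mass that left B at exactly this step
        dist[n] = sum((w for y, w in step.items() if y not in B), Fraction(0))
        alive = {y: w for y, w in step.items() if y in B}
    return dist

B = set(range(-3, 4))                 # the interval {-3, ..., 3}
dist = exit_time_dist(B, 200)
sup_np = max(n * p for n, p in dist.items())
print(float(sup_np))
```

With these choices n · P{σ(B) = n} remains bounded by a constant of order 1, in line with (1.3).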
Theorem. Assume that

P{‖X‖_∞ > R} = 0 (1.5)

for some R < ∞,

P{X = 0} < 1, (1.6)

and

EX = 0. (1.7)

Then there exist constants 0 < C_0, C_1 < ∞ which depend on the distribution of X and d only such that

C_0/n ≤ sup_{B⊂Z^d} P{σ(B) = n} ≤ C_1/n, n ≥ 1. (1.8)

The idea of the proof is simple. The left-hand side of (1.8) can be verified by taking B a halfspace. Also, the right-hand side of (1.8) follows from explicit computations if B is a cube. The right-hand side for general B is proven by observing that if n is large, then σ(B) = n can occur only if S_k stays inside a suitable box D for all k ≤ n, or

σ(B) = n = σ(D) + [σ(B) − σ(D)].
The first term can be made small by choosing D to be a suitable cube. For the second term we rewrite the last equality as σ(D) = n − [σ(B) − σ(D)]. This allows us to reduce the second term to a bound on the exit time from the cube D, which we know how to control. This leads to the simple recursion relation of Lemma 4 below, which easily implies (1.3). One obtains the form of the estimate given in the abstract by taking B = A^c := Z^d \ A.
In [AJ], the estimate (1.2) arose in proving (in the case d > 4) that the stationary distribution of the Abelian sandpile model in a finite set B has a weak limit as B ↑ Z^d. The existence of the limit was established by restricting B to a sequence of cubes, for which (1.2) is easy to see. Allowing B to be arbitrary in (1.2) yields the existence of the limit along arbitrary sets. This stronger form of convergence is convenient in studying the infinite volume limit of the sandpile model, and it is used throughout [JR]. [AJ] took {S_n} to be a simple random walk, but it takes little extra effort to deal with general bounded X. In fact, we expect that the finite range condition (1.5) can be relaxed further.
Throughout, C_i will be used to denote constants which depend on the distribution of X and d only. We shall further use the following notation: 0 denotes the origin, and for x ∈ Z^d we denote its i-th coordinate by x(i). ‖x‖ denotes the L∞-norm of x, that is,

‖x‖ = max_{1≤i≤d} |x(i)|. (1.9)

We further define

D_k := {x ∈ Z^d : ‖x‖ ≤ k} (1.10)

(a cube with center at the origin) and

Δ_k := {x ∈ Z^d : k < ‖x‖ ≤ k + R}. (1.11)

This is a kind of boundary layer (of thickness R) around D_k. We write Σ_k := S_{σ(D_k)} for the position of the walk at the time when it exits D_k. Assumption (1.5) implies that

Σ_k ∈ Δ_k on {σ(D_k) < ∞}.

We also set

π(n) := sup_{B⊂Z^d} P{σ(B) = n}.

2. Proof.
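The notation can be illustrated with a short simulation: a walk whose steps are bounded by R in the L∞-norm, started at the origin, always lands in the boundary layer around D_k at the moment it exits D_k. The dimension, radius and step law below are our own arbitrary choices.

```python
import random

R, k, d = 2, 5, 2          # step bound, cube radius, dimension (illustrative)

def norm_inf(x):
    return max(abs(c) for c in x)

def in_D(x, k):            # the cube {x : ||x||_inf <= k}
    return norm_inf(x) <= k

def in_layer(x, k, R):     # the boundary layer {x : k < ||x||_inf <= k + R}
    return k < norm_inf(x) <= k + R

random.seed(0)
for _ in range(1000):
    x = [0] * d
    while in_D(x, k):
        # i.i.d. mean-zero steps with ||X||_inf <= R
        x = [c + random.randint(-R, R) for c in x]
    assert in_layer(x, k, R)   # exit position always lies in the layer, by (1.5)
print("all exits landed in the boundary layer")
```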
Before we start the proof proper we point out that we only have to prove our theorem in the case that the random walk {S_n} is aperiodic (in the terminology of [S]). This means that ℛ, the smallest subgroup of Z^d containing the support of X, is all of Z^d. This is an immediate consequence of Proposition 7.1 in [S]. In fact, that reference shows that ℛ is isomorphic to Z^k for some unique 1 ≤ k ≤ d (k = 0 is excluded by (1.6)), and we merely have to apply our theorem to the isomorphic image of {S_n} on Z^k to obtain (1.8) in general. For the remainder of this note we assume that {S_n} is aperiodic, so that ℛ = Z^d. We break the proof down into several lemmas. The first lemma collects some facts which are basically known.
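In dimension one this reduction is easy to make concrete: the smallest subgroup of Z containing a bounded support is gZ with g the gcd of the support values, and dividing the steps by g yields an aperiodic walk. A minimal sketch (the support below is an arbitrary example, and `lattice_span` is our own helper name):

```python
from math import gcd
from functools import reduce

def lattice_span(support):
    """Smallest subgroup of Z containing the support (d = 1): it is g*Z,
    with g the gcd of the support values (g = 0 iff support = {0})."""
    return reduce(gcd, (abs(x) for x in support), 0)

support = [-4, 2, 6]            # bounded support contained in 2Z
g = lattice_span(support)
reduced = [x // g for x in support]
print(g, reduced)               # the rescaled walk lives on all of Z
```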
Lemma 1. Assume that (1.5)-(1.7) hold. Then there exist constants 0 < C_i < ∞ such that for any a ≥ 1, n ≥ 1 and x ∈ Z^d

P{σ(D_a) > n} ≤ C_2 exp(−C_3 n/a²) (2.1)

and

P{S_n = x} ≤ C_4 n^{−d/2} exp(−C_5 ‖x‖²/n). (2.2)

Proof. By the central limit theorem there exists some m < ∞ such that for each integer a ≥ 1

P{‖S_{ma²}‖ > 2a} ≥ 1/2.

Consequently, no matter where the walk sits at time jma², it leaves D_a during the next ma² steps with probability at least 1/2 (any two points of D_a differ by at most 2a in the L∞-norm), so that by the Markov property

P{σ(D_a) > jma²} ≤ 2^{−j}, j = 0, 1, 2, . . . .

This implies

P{σ(D_a) > n} ≤ 2 exp(−n(log 2)/(ma²)).

The inequality (2.1) follows immediately from this. For (2.2) we use that for any m ≤ n

P{S_n = x} ≤ P{‖S_m‖ ≥ ‖x‖/2, S_n = x} + P{‖S_n − S_m‖ ≥ ‖x‖/2, S_n = x}. (2.3)

We take m = ⌊n/2⌋ (this m is of course unrelated to the m in the proof of (2.1)). Both terms on the right-hand side of (2.3) can be estimated in the same way. We therefore restrict ourselves to the first term. This term is at most

Σ_{y: ‖y‖ ≥ ‖x‖/2} P{S_m = y} P{S_{n−m} = x − y}. (2.4)

Now it is well known (see [S], Proposition 7.6) that there exists some constant C_6 such that

P{S_j = z} ≤ C_6 j^{−d/2} for all j ≥ 1 and z ∈ Z^d.

Moreover, by a version of Bernstein's inequality or large deviation estimates for random walks (see for instance relation (2.39) in [KS]; note that we only have to apply this in the case Rm ≥ ‖x‖/2, since the first term in the right-hand side of (2.3) vanishes otherwise) there exist some constants C_7, C_8 such that

P{‖S_m‖ ≥ ‖x‖/2} ≤ C_7 exp(−C_8 ‖x‖²/m).

Thus, the sum in (2.4) is at most

C_6 C_7 (n − m)^{−d/2} exp(−C_8 ‖x‖²/m).

Together with a similar estimate for the second term on the right-hand side of (2.3), and our choice m = ⌊n/2⌋, this proves (2.2). The next lemma holds for any random walk on Z^d, even without the assumptions (1.5)-(1.7).
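For the simple random walk on Z the probability in (2.1) can be computed exactly, making the exponential decay in n visible already for small cubes. This check, including the cube size a = 4 and the horizons, is our own illustration.

```python
from fractions import Fraction

def survival(a, n):
    """Exact P{sigma(D_a) > n} for the simple random walk on Z started at 0."""
    half = Fraction(1, 2)
    p = {0: Fraction(1)}
    for _ in range(n):
        q = {}
        for x, w in p.items():
            for y in (x - 1, x + 1):
                if abs(y) <= a:      # keep only mass still inside the cube D_a
                    q[y] = q.get(y, Fraction(0)) + w * half
        p = q
    return sum(p.values(), Fraction(0))

a = 4
s = [survival(a, n) for n in (10, 20, 40)]
print([float(x) for x in s])         # roughly geometric decay in n
```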
Lemma 2. For any k ≥ 1, n ≥ 1 and z ∈ Z^d,

P{σ(D_k) = n, Σ_k = z} ≤ (‖z‖/n) P{S_n = z}. (2.7)

Proof. This lemma follows from purely combinatorial considerations, which are taken from [D]. For the convenience of the reader we give the details. By definition of σ(D_k), ‖Σ_k‖ > k, so that we can restrict ourselves to ‖z‖ > k. For the sake of argument assume that z(1) = ‖z‖ > k; all other cases can be treated in the same way. Now fix n and let

X*_i = X_ℓ if i ≡ ℓ mod n with 1 ≤ ℓ ≤ n

be the periodic extension of (X_1, . . . , X_n). Further, for any integer µ ≥ 0, let S^µ_0 = 0 and

S^µ_i = Σ_{j=µ+1}^{µ+i} X*_j, i ≥ 1. (2.8)

We call an integer t > 0 a strict ladder epoch of {S^0_i} if

S^0_t(1) > S^0_s(1) for all 0 ≤ s < t, (2.9)

and we write U(t) for the number of strict ladder epochs of {S^0_i} in (t, t + n]. Also let σ^µ(B) denote the exit time from B for the walk {S^µ_i} (defined by (1.1) with S replaced by S^µ) and let σ^µ_k = σ^µ(D_k), Σ^µ_k = S^µ_{σ^µ_k}. Since (X*_{µ+1}, . . . , X*_{µ+n}) has the same distribution as (X_1, . . . , X_n) for each 1 ≤ µ ≤ n, we have

P{σ(D_k) = n, Σ_k = z} = (1/n) E Σ_{µ=1}^n I[σ^µ_k = n, Σ^µ_k = z]. (2.10)

Note that on the event {σ^µ_k = n} it holds that Σ^µ_k = S^µ_n = S_n, so that the right-hand side of (2.10) also equals

(1/n) E Σ_{µ=1}^n I[σ^µ_k = n] I[S_n = z]. (2.11)

Now, as observed in [D], if t + n is a strict ladder epoch for {S^0_i} for some t > 0, then t itself is also such a strict ladder epoch. This is immediate from (2.9) and the fact that, by the periodicity of the X*_i,

S^0_{s+n} = S_n + S^0_s for all s ≥ 0. (2.12)

Similarly, on the event {S_n = z}, if t ≥ n is a strict ladder epoch for {S^0_i}, then so is t + n. Indeed, by (2.12) we will have

S^0_{t+n}(1) = z(1) + S^0_t(1) > S^0_s(1) for all 0 ≤ s < t + n, (2.13)

because z(1) > 0, t is a strict ladder epoch, and S^0_s(1) = z(1) + S^0_{s−n}(1) for s ≥ n. It follows from these observations, still on the event {S_n = z}, that for t ≥ n, t and t + n are either both strict ladder epochs or both not strict ladder epochs for {S^0_i}. Consequently, U(t + 1) = U(t) for all t ≥ n, so that U(t) has some constant value U for all t ≥ n. We claim that on the event {S_n = z}, the number U which we just introduced can be at most equal to z(1). To see this, first note that there exist arbitrarily large strict ladder epochs for {S^0_i}, because S^0_{ℓn}(1) = ℓz(1) is unbounded in ℓ. Moreover, if t_1 < t_2 are two successive strict ladder epochs, then S^0_{t_2}(1) − S^0_{t_1}(1) has to be a strictly positive integer, hence at least 1. Thus, if we take t_0 ≥ n a strict ladder epoch, then t_0 + n is also a strict ladder epoch and

U ≤ S^0_{t_0+n}(1) − S^0_{t_0}(1) = S_n(1) = z(1), (2.14)

as claimed.
Finally we show that for µ ≥ 1,

σ^µ_k = n and Σ^µ_k = z (2.15)

imply that µ + n is a strict ladder epoch for {S^0_i} and that S_n = z. To see this note that S_n = S^µ_n = Σ^µ_k = z on the event (2.15), and that (2.15) implies

S^µ_i ∈ D_k for 0 ≤ i < n and S^µ_n = z. (2.16)

In turn, since S^µ_i = S^0_{µ+i} − S^0_µ, this implies

S^0_{µ+n}(1) − S^0_{µ+i}(1) = S^µ_n(1) − S^µ_i(1) ≥ z(1) − k > 0 for 0 ≤ i < n. (2.17)

Finally, (2.17) even gives, by means of (2.12) and z(1) > 0,

S^0_{µ+n}(1) > S^0_s(1) for all 0 ≤ s < µ + n. (2.18)

Thus, µ + n is a strict ladder epoch for {S^0_i}, as required. The lemma follows easily now. By the last argument, the summand in (2.11) can be nonzero only if S_n = z and µ + n ∈ (n, 2n] is a strict ladder epoch for {S^0_i}, so that the sum in (2.11) is at most

U(n) I[S_n = z] = U I[S_n = z] ≤ z(1) I[S_n = z] ≤ ‖z‖ I[S_n = z].

Substitution of this bound into (2.10) shows that

P{σ(D_k) = n, Σ_k = z} ≤ (‖z‖/n) P{S_n = z}.

The preceding lemma will be used to prove the right-hand inequality in (1.8). For the left-hand inequality we shall consider σ(H_k), where

H_k := {x ∈ Z^d : x(1) ≤ k}

(a halfspace). We prove a lower bound for P{σ(H_k) = n, S_n = z} which is of the same nature as the upper bound in (2.7). Now we must assume (1.5), though.
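The cube bound of Lemma 2, P{σ(D_k) = n, Σ_k = z} ≤ (‖z‖/n) P{S_n = z}, lends itself to exhaustive verification for the simple random walk on Z (so R = 1 and ‖z‖ = |z|) by enumerating all step sequences. This brute-force check is our own addition and is independent of the proof.

```python
from itertools import product

def verify_cycle_bound(k, n):
    """Exhaustively check P{sigma(D_k)=n, Sigma_k=z} <= (|z|/n) P{S_n=z}
    for the simple random walk on Z; returns the number of exit sites seen."""
    lhs = {}    # path counts with exit time exactly n, keyed by exit site z
    ends = {}   # path counts keyed by endpoint S_n
    for steps in product((-1, 1), repeat=n):
        s, exit_time, exit_site = 0, None, None
        for i, dx in enumerate(steps, 1):
            s += dx
            if exit_time is None and abs(s) > k:
                exit_time, exit_site = i, s
        ends[s] = ends.get(s, 0) + 1
        if exit_time == n:
            lhs[exit_site] = lhs.get(exit_site, 0) + 1
    for z, c in lhs.items():
        # counts stand in for probabilities: the factor 2^-n cancels
        assert c * n <= abs(z) * ends.get(z, 0), (k, n, z)
    return len(lhs)

for k in (1, 2, 3):
    for n in range(1, 13):
        verify_cycle_bound(k, n)
print("cycle-lemma bound verified exhaustively")
```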
Lemma 3. Assume that (1.5) holds. Then for k ≥ 1, n ≥ 1 and z ∈ Z^d with z(1) = k + 1,

P{σ(H_k) = n, S_n = z} ≥ (k/(Rn)) P{S_n = z}. (2.19)

Proof. As in (2.10), (2.11) we have, with S^µ and σ^µ as in the preceding lemma,

P{σ(H_k) = n, S_n = z} = (1/n) E Σ_{µ=1}^n I[σ^µ(H_k) = n] I[S_n = z]. (2.20)

As in the proof of the preceding lemma, U(t) (the number of strict ladder epochs of {S^0_i} in (t, t + n]) has a constant value U for all t ≥ n. Now, on the event {S_n = z}, it must be the case that

U ≥ z(1)/R ≥ k/R, (2.21)

because if t_1 < t_2 are two consecutive strict ladder epochs for {S^0_i}, then S^0_{t_2}(1) − S^0_{t_1}(1) ≤ R (by (1.5)), while S^0_{t_0+n}(1) − S^0_{t_0}(1) = S_n(1) = z(1) for any strict ladder epoch t_0 > n. Moreover, if µ + n is a strict ladder epoch for {S^0_i}, then for all 0 ≤ t < n, on the event {S_n = z},

S^µ_t(1) = S^0_{µ+t}(1) − S^0_µ(1) < S^0_{µ+n}(1) − S^0_µ(1) = S^µ_n(1) = z(1) = k + 1.

In particular, for 0 ≤ t < n,

S^µ_t(1) ≤ k, that is, S^µ_t ∈ H_k.

Thus, if µ + n is a strict ladder epoch for {S^0_i} and S_n = z, then

σ^µ(H_k) = n.

This implication, together with (2.21), shows that

Σ_{µ=1}^n I[σ^µ(H_k) = n] I[S_n = z] ≥ U I[S_n = z] ≥ (k/R) I[S_n = z].

The lemma now follows from (2.20).
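Lemma 3's lower bound, in the form P{σ(H_k) = n, S_n = z} ≥ (k/(Rn)) P{S_n = z} for z(1) = k + 1, can likewise be checked exhaustively for the simple random walk on Z (R = 1). The check and the small ranges below are our own choices.

```python
from itertools import product

def verify_halfspace_bound(k, n):
    """Check P{sigma(H_k)=n, S_n=k+1} >= (k/n) P{S_n=k+1} for the simple
    random walk on Z (R = 1, H_k = {x <= k}); returns the exit-path count."""
    z = k + 1
    exit_at_n = 0   # paths leaving H_k for the first time at step n
    end_at_z = 0    # all paths with S_n = z
    for steps in product((-1, 1), repeat=n):
        s, first_exit = 0, None
        for i, dx in enumerate(steps, 1):
            s += dx
            if first_exit is None and s > k:
                first_exit = i
        if s == z:
            end_at_z += 1
            if first_exit == n:
                exit_at_n += 1
    # counts stand in for probabilities: the common factor 2^-n cancels
    assert exit_at_n * n >= k * end_at_z, (k, n)
    return exit_at_n

for k in (1, 2, 3):
    for n in range(k + 1, 14):
        verify_halfspace_bound(k, n)
print("halfspace lower bound verified exhaustively")
```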
Proof. Fix B ⊂ Z^d for the time being. Then (see (1.10) and (1.11))

P{σ(B) = n} = P{σ(B) = n, σ(D_k) > n − 2^p} + P{σ(B) = n, σ(D_k) ≤ n − 2^p}. (2.23)

The first term on the right-hand side here can be decomposed with respect to the value of S_{n−2^p}. This shows that this term equals

Σ_y P{σ(B) > n − 2^p, σ(D_k) > n − 2^p, S_{n−2^p} = y} P^y{σ(B) = 2^p}, (2.24)

where P^y denotes the probability measure under which the random walk starts at y rather than at 0. Thus the last probability in (2.24) equals

P{exit time of B for y + S_· equals 2^p} = P{σ(B − y) = 2^p} ≤ π(2^p).
Consequently, the first term on the right-hand side of (2.23) is bounded by

C_2 exp(−C_3 2^p/k²) π(2^p) (2.25)

(by n ≥ 2^{p+1} and (2.1) with a and n replaced by k and 2^p, respectively).
Next we turn to the second term on the right-hand side of (2.23). We decompose this term with respect to the value of σ(D_k). Since we take k ≥ 1, we must also have σ(D_k) ≥ 1. Therefore this second term is at most the sum in (2.26). Let ℓ be such that 2^ℓ ≤ k² < 2^{ℓ+1}. Then the expression k^{d+2}/2^{j(1+d/2)} on the right-hand side of (2.26) is at most (2^{ℓ+1−j})^{1+d/2}. Therefore, the right-hand side of (2.26) is at most the same sum with k^{d+2}/2^{j(1+d/2)} replaced by (2^{ℓ+1−j})^{1+d/2}. The lemma follows by combining this estimate with (2.25) and taking the supremum over B.
Proof of the Theorem. We start with the right-hand inequality in (1.8), which is the most interesting one. Assume 2^{p+1} ≤ n < 2^{p+2} and take k = [α2^p]^{1/2} for some small α such that
This proves the right-hand inequality in (1.8) for all n ≥ 2^{p_0+2}. After an adjustment of C_1, it then holds for all n ≥ 1.
For the left-hand inequality in (1.8) we can assume without loss of generality that P{X(1) = 0} < 1 (by virtue of (1.6)). By the local central limit theorem, or even the global central limit theorem, there exist integers a_n ∈ [√n, 2√n] and a constant C_10 > 0 such that P{S_n(1) = a_n} ≥ C_10/√n. We now take B = H_{a_n−1}. By summing (2.19) over all z ∈ Z^d with z(1) = a_n we obtain

P{σ(H_{a_n−1}) = n, S_n(1) = a_n} ≥ ((a_n − 1)/(Rn)) P{S_n(1) = a_n}.
By the choice of a_n, the right-hand side here is at least ((√n − 1)/(Rn)) · (C_10/√n) ≥ C_0/n for a suitable C_0 > 0 and all n ≥ 2, so that, after decreasing C_0 if necessary to cover n = 1, the left-hand inequality in (1.8) is proven.
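This last halfspace computation can be reproduced exactly for the simple random walk on Z: taking the level of the halfspace proportional to √n, the quantity n · P{σ(H_k) = n} stays of order one, matching the left-hand side of (1.8). The walk and the specific levels below are our illustrative choices.

```python
from fractions import Fraction
from math import isqrt

def first_passage_dist(k, n_max):
    """Exact P{sigma(H_k) = n} for the simple random walk on Z, where
    H_k = {x <= k}: the first-passage distribution to level k + 1."""
    half = Fraction(1, 2)
    p = {0: Fraction(1)}
    dist = {}
    for n in range(1, n_max + 1):
        q = {}
        for x, w in p.items():
            for y in (x - 1, x + 1):
                q[y] = q.get(y, Fraction(0)) + w * half
        dist[n] = q.pop(k + 1, Fraction(0))   # mass exiting H_k at time n
        p = q
    return dist

results = {}
for n in (16, 36, 64, 100):
    k = isqrt(n) - 1                 # halfspace level ~ sqrt(n), as in the proof
    results[n] = first_passage_dist(k, n)[n]
print({n: float(n * p) for n, p in results.items()})
```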