The Beurling Estimate for a Class of Random Walks

Abstract: An estimate of Beurling states that if K is a curve from 0 to the unit circle in the complex plane, then the probability that a Brownian motion starting at −ε reaches the unit circle without hitting the curve is bounded above by c ε^{1/2}. This estimate is very useful in the analysis of boundary behavior of conformal maps, especially for connected but rough boundaries. The corresponding estimate for simple random walk was first proved by Kesten. In this note we extend this estimate to random walks with zero mean and finite (3 + δ)-moment.


Introduction
The Beurling projection theorem (see, e.g., [1, Theorem V.4.1]) states that if K is a closed subset of the closed unit disk in C, then the probability that a Brownian motion starting at −ε avoids K before reaching the unit circle is less than or equal to the same probability for the angular projection K̂ = {|z| : z ∈ K}.
If K̂ = [0, 1], a simple conformal mapping argument shows that the latter probability is comparable to ε^{1/2} as ε → 0+. In particular, if K is a connected set of diameter one at distance ε from the origin, the probability that a Brownian motion from the origin to the unit circle avoids K is bounded above by c ε^{1/2}. This estimate, which we will call the Beurling estimate, is very useful in the analysis of boundary behavior of conformal maps, especially for connected but rough boundaries. A similar estimate for random walks is useful, especially when considering convergence of random walk to Brownian motion near (possibly nonsmooth) boundaries. For simple random walk such an estimate was first established in [5] to derive a discrete harmonic measure estimate for application to diffusion limited aggregation. It has been used since in a number of places, e.g., in deriving "Makarov's Theorem" for random walk [7] and in establishing facts about intersections of random walks (see, e.g., [8]). Recently it has been used by the first author and collaborators to analyze the rate of convergence of random walk to Brownian motion in domains with very rough boundaries. Because of its utility, we wish to extend this estimate to walks other than just simple random walk. In this note we extend it to a larger class of random walks.
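To make the n^{−1/2} decay concrete, here is a small Monte Carlo sketch (not from the paper; the slit, radii, and trial counts are illustrative choices) estimating the probability that simple random walk started next to a lattice half-line escapes to radius n without hitting it. The products p·√n should stay roughly constant:

```python
import random

def escapes_slit(n, rng):
    # Simple random walk from (1, 0); the "slit" is the lattice
    # half-line A = {(x, 0) : x <= 0}.  Returns True if the walk
    # reaches radius n before its first visit to A.
    x, y = 1, 0
    while x * x + y * y < n * n:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        if y == 0 and x <= 0:
            return False      # hit the slit first
    return True               # escaped to radius n

rng = random.Random(0)
trials = 3000
ps = {}
for n in (8, 16, 32):
    ps[n] = sum(escapes_slit(n, rng) for _ in range(trials)) / trials
    print(n, round(ps[n], 3), round(ps[n] * n ** 0.5, 2))
```

Doubling n should roughly divide the escape probability by √2, matching the Beurling exponent 1/2.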
We state the precise result in the next section, but we will summarize it briefly here. As in [5], we start with the estimate for a half-line. We follow the argument in [6]; see [2,3] for extensions. The argument in [6] strongly uses the time reversibility of simple random walk. In fact, as was noted in [3], the argument really only needs symmetry in the x component. We give a proof of this estimate because we need the result not just for Z_+ but also for κZ_+ where κ is a positive integer. The reason is that we establish the Beurling estimate here for "(1/κ)-dense" sets. One example of such a set that is not connected is the path of a non-nearest-neighbor random walk whose increments have finite range; a possible application of our result would be to extend the results of [8] to finite range walks. While our argument is essentially complete for random walks that are symmetric in the x component, for the nonsymmetric case we use a result of Fukai [3] that gives the estimate for κ = 1. Since κZ_+ ⊂ Z_+, this gives a lower bound for our case, and our bound for the full line then gives the upper bound.
The final section derives the general result from that for a half-line; this argument closely follows that in [5]. We assume a (3 + δ)-moment for the increments of the random walk in order to ensure that the asymptotics for the potential kernel are sufficiently sharp (see (5)). (We also use the bound for some "overshoot" estimates, but in these cases weaker bounds would suffice.) If (5) held only with a weaker bound c/|z|^b for some b > 1/2, an analogue of (33) would still hold, and this would suffice to carry out the argument in Section 5. So the method presented here should require only a (2.5 + δ)-moment.

Preliminaries
Denote by Z, R, C the integers, the real numbers and the complex numbers, respectively. We consider Z and R as subsets of C. Let Z_+ = {k ∈ Z : k > 0}, N = {k ∈ Z : k ≥ 0}, Z_− = Z \ N. Let L denote a discrete two-dimensional lattice (additive subgroup) of C. Let X_1, X_2, ... be i.i.d. random variables taking values in L and let S_n be the corresponding random walk. We say that X_1, X_2, ... generates L if for each z ∈ L there is an n with P(X_1 + ... + X_n = z) > 0. For B ⊂ L, let T_B = min{j ≥ 1 : S_j ∈ B} and T^0_B = min{j ≥ 0 : S_j ∈ B} be the first entrance time of B after time 0 and the first entrance time of B including time 0, respectively. We abbreviate T_{{b}}, T^0_{{b}} by T_b, T^0_b respectively. Denote by C_n = {z ∈ L : |z| < n} the discrete open disk of radius n, and let τ_n := T^0_{C_n^c} be the first time the random walk is not in C_n.
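As a minimal illustration of these definitions (hypothetical helper names, finite paths only), the entrance times T_B, T^0_B and the exit time τ_n can be written as:

```python
def first_entrance(path, B, include_start=False):
    # T_B^0 (include_start=True): first j >= 0 with path[j] in B;
    # T_B  (include_start=False): first j >= 1 with path[j] in B.
    for j in range(0 if include_start else 1, len(path)):
        if path[j] in B:
            return j
    return None  # B is not entered along this finite path

def tau(path, n):
    # tau_n = T^0_{C_n^c}: first j >= 0 with |path[j]| >= n.
    for j, (x, y) in enumerate(path):
        if x * x + y * y >= n * n:
            return j
    return None

path = [(0, 0), (1, 0), (0, 0), (2, 0)]
print(first_entrance(path, {(0, 0)}))                      # 2: first return to 0
print(first_entrance(path, {(0, 0)}, include_start=True))  # 0
print(tau(path, 2))                                        # 3: |(2,0)| >= 2
```

The distinction between T_B and T^0_B matters exactly when the walk starts inside B, as in the return-time computations below.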
The purpose of this paper is to prove the following result.
Theorem 1 Suppose L is a discrete two-dimensional lattice in C and X_1, X_2, ... are i.i.d. random variables that generate L such that E[X_1] = 0 and, for some δ > 0, E[|X_1|^{3+δ}] < ∞. Then for each positive integer κ, there exists a c < ∞ (depending on κ and the distribution of X_1) such that for every (1/κ)-dense set A and every 0 < k < n < ∞,

We start by making some reductions. Since B ⊂ A clearly implies P(τ_m < T^0_A) ≤ P(τ_m < T^0_B), it suffices to prove the theorem for minimal (1/κ)-dense sets A = {w_j : j ∈ κN}, and, without loss of generality, we assume that A is of this form. By taking a linear transformation of the lattice if necessary, we may assume that L is of the form L = Z + z*Z where z* ∈ C \ R, and that the covariance matrix of X_1 is a multiple of the identity. (When dealing with mean zero, finite variance lattice random walks, one can always choose the lattice to be the integer lattice, in which case one may have a non-diagonal covariance matrix, or one can choose a more general lattice but require the covariance matrix to be a multiple of the identity. We are choosing the latter.) Let p be the (discrete) probability mass function of X_1. Then our assumptions are that {z : p(z) > 0} generates L and that for some δ, σ² > 0,

Let p*(z) = p(−z) be the step probability mass function of the time-reversed walk, and note that p* also satisfies (1)-(3). We denote by P*(A) the probability of A under steps according to p*. We call a function f p-harmonic at w if f(w) = Σ_{z∈L} p(z) f(w + z). Let X_1, X_2, ... be independent L-valued random variables with probability mass function p, and let S_n = S_0 + Σ_{i=1}^n X_i, n ≥ 0, be the corresponding random walk. Denote by P^x (resp., E^x) the law (resp., expectation) of (S_n, n ≥ 0) when S_0 = x, and write P, E for P^0, E^0.
Let a(z) denote the potential kernel for p, and let a*(z) denote the potential kernel for p*. Note that a is p*-harmonic and a* is p-harmonic for z ≠ 0, and ∆_{p*} a(0) = ∆_p a*(0) = 1. In [4] it is shown that under the assumptions (1)-(3) there exist constants k̄, c (these constants, like all constants in this paper, may depend on p) such that for all z ≠ 0,

|a(z) − (1/(πσ²)) log |z| − k̄| ≤ c/|z|.   (5)

Since a*(z) = a(−z), this also holds for a*. For any proper subset B of L, let G_B(w, z) denote the Green's function of B, the expected number of visits to z strictly before T = T^0_{B^c} by a random walk started at w. This equals zero unless w, z ∈ B. We will write G_n for G_{C_n}. If w, z ∈ B, then G_B(w, z) = P^w(T^0_z < T) G_B(z, z); this is easily verified by noting that, for fixed z ∈ B, each side, as a function of w, solves the same discrete Dirichlet problem. The following "last-exit decomposition" relates the Green's function and escape probabilities: for B ⊂ C_n and z ∈ C_n,

P^z(T^0_B ≤ τ_n) = Σ_{w∈B} G_n(z, w) P^w(τ_n < T_B).   (8)

It is easily derived by decomposing according to the last visit to B strictly before τ_n. For the remainder of this paper we fix p, κ and allow constants to depend on p, κ. We assume k ≤ n/2, for otherwise the inequality is immediate. The values of universal constants may change from line to line without further notice. In the next two sections we will prove that

P(τ_n ≤ T_{κN}) ≍ n^{−1/2}.

(Here, and throughout this paper, we use ≍ to mean that both sides are bounded by constants times the other side, where the constants may depend on p, κ.) In the final section we establish the uniform upper bound for all minimal (1/κ)-dense sets.
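For simple random walk (a symmetric special case, so p* = p) the relation between the Green's function and hitting probabilities, G_n(z, 0) = P^z(T^0_0 ≤ τ_n) G_n(0, 0), can be checked numerically on a small disk; the radius and solver below are illustrative choices, not part of the paper:

```python
import numpy as np

n = 5
sites = [(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)
         if x * x + y * y < n * n]
idx = {z: i for i, z in enumerate(sites)}
m = len(sites)

# Substochastic transition matrix of simple random walk killed on leaving C_n.
P = np.zeros((m, m))
for (x, y), i in idx.items():
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        w = (x + dx, y + dy)
        if w in idx:
            P[i, idx[w]] = 0.25

# G_n(z, w) = expected number of visits to w before tau_n, starting from z.
G = np.linalg.inv(np.eye(m) - P)

# h(z) = P^z(T^0_0 <= tau_n): h = 1 at the origin, discrete-harmonic
# elsewhere in C_n, and h = 0 outside (killed transitions carry no mass).
A = np.eye(m) - P
o = idx[(0, 0)]
A[o] = 0.0
A[o, o] = 1.0
b = np.zeros(m)
b[o] = 1.0
h = np.linalg.solve(A, b)

# Check the identity h(z) = G_n(z, 0) / G_n(0, 0).
err = np.max(np.abs(h - G[:, o] / G[o, o]))
print("max error:", err)
```

Both sides are the unique function equal to 1 at the origin, discrete-harmonic on the rest of C_n, and 0 outside, which is exactly the uniqueness argument used in the text.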

Green's function estimates
We start with an "overshoot" estimate.
Lemma 2 There is a c such that for all n and all z with |z| < n,

Proof. From the central limit theorem, P^z(τ_n > r + n² | τ_n > r) < α < 1 for some α independent of n, r, z. Therefore, τ_n/n² is stochastically bounded by a geometric random variable with success probability 1 − α, and the first inequality follows. The second inequality follows immediately from applying log(1 + x) ≤ x to x = (|S_{τ_n}| − n)/n. 2

Remark. With a finer argument, we could show, in fact, that E^z[|S_{τ_n}|] ≤ n + c. By doing the more refined estimate we could improve some of the propositions below; e.g., the O(n^{−1/3}) error term in the next proposition is actually O(n^{−1}). However, since the error terms we have proved here suffice for this paper, we will not prove the sharper estimates.
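The stochastic-boundedness step can be seen in a quick simulation for simple random walk (an illustrative special case; sample sizes are arbitrary): E[τ_n] grows like n², and the tail beyond a few multiples of n² is already negligible:

```python
import random

def exit_time(n, rng):
    # Number of steps a simple random walk from 0 takes to leave C_n.
    x = y = t = 0
    while x * x + y * y < n * n:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y, t = x + dx, y + dy, t + 1
    return t

rng = random.Random(1)
stats = {}
for n in (10, 20):
    ts = [exit_time(n, rng) for _ in range(500)]
    stats[n] = (sum(ts) / len(ts) / n ** 2,                 # E[tau_n] / n^2
                sum(t > 4 * n * n for t in ts) / len(ts))   # tail beyond 4 n^2
    print(n, round(stats[n][0], 2), stats[n][1])
```

The mean ratio near 1 reflects the fact that |S_j|² − j is a martingale for this walk, so E[τ_n] ≈ E[|S_{τ_n}|²] ≈ n².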
Lemma 3 If |z| < n,

Also, for every b < 1, there exist c > 0 and N such that for all n ≥ N,

Proof. The first expression follows from (5), (7) and Lemma 2, since a(0) = 0. The next two expressions again use (7), Lemma 2, and (5). For the final expression, first note that it holds for b = 1/4, since for 0 ≤ |z|, |w| < n/4, G_n(z, w) ≥ G_{3n/4}(0, w − z). For b < 1, the invariance principle implies that there is a q = q_b > 0 such that for all n sufficiently large, with probability at least q the random walk (and the reversed random walk) starting at |z| < bn reaches C_{n/4} before leaving C_n. Hence, by the strong Markov property, if |z| < bn and |w| < bn, then G_n(z, w) ≥ q inf_{|z'| < n/4} G_n(z', w). Similarly, using the reversed random walk, if |w| < bn and |z'| < n/4, then G_n(z', w) ≥ q inf_{|w'| < n/4} G_n(z', w'). 2
Lemma 5 There is a c < ∞ such that for every z ∈ C_n and every minimal (1/κ)-dense set A, Σ_{w∈A} G_n(z, w) ≤ c n.
Proof. If A is a minimal (1/κ)-dense set, then #{w ∈ A : |z − w| ≤ r} ≤ c r for some c independent of z. Hence, combining this with the bound on G_n(z, w) from Lemma 3 and summing over dyadic annuli in |z − w| yields the lemma. 2

The main purpose of this section is to obtain the estimates in Proposition 12 and Lemmas 13 and 14, which will be used in the proof of Theorem 1 in Section 5.
Proof. Let q(n) = P(τ_n ≤ T_{[−n,n]_κ}) and note that if k ∈ [−n/2, n/2]_κ, then

The last-exit decomposition (8) tells us

But (11) and (12) imply that

which gives q(4n) = O(1/n). The lower bound can be obtained by noting that P(ρ_n < T_Z) ≤ P(τ_n < T_{[−n,n]_κ}), which reduces the estimate to a one-dimensional "gambler's ruin" estimate for the y-component. This can be established in a number of ways, e.g., using a martingale argument. 2

Lemma 7 There exist c > 0 and N < ∞ such that if n ≥ N and z ∈ C_{3n/4},

Proof. Let V = Σ_{j=0}^{τ_n − 1} 1{S_j ∈ A[n/4, n/2]} be the number of visits to A[n/4, n/2] before leaving C_n. Then (11) and (12) show that there exist c_1, c_2 such that for n sufficiently large,

In particular, if z ∈ C_{3n/4},

Lemma 8 There exist 0 < c_1 < c_2 < ∞ and N < ∞ such that if n ≥ N,

c_1 / log n ≤ P^z(T_0 < τ_n) ≤ c_2 / log n,   z ∈ C_{9n/10} \ C_{n/10}.
Proof. This follows immediately from Lemma 3 and G_n(z, 0) = P^z(T_0 < τ_n) G_n(0, 0). 2

Recall that P* stands for the probability under the step distribution p*.
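The one-dimensional "gambler's ruin" estimate invoked above can be verified exactly in the simplest case: for 1D simple random walk, the probability of reaching n before 0 from k is k/n. A short check (numpy used purely for convenience; this is a standard fact, not an estimate from the paper) solves the discrete Dirichlet problem directly:

```python
import numpy as np

def hit_n_before_0(n):
    # h(k) = P(1D simple random walk from k reaches n before 0).
    # h is discrete-harmonic on {1, ..., n-1} with h(0) = 0, h(n) = 1,
    # which forces h(k) = k / n.
    A = np.eye(n + 1)
    b = np.zeros(n + 1)
    b[n] = 1.0
    for k in range(1, n):
        A[k, k - 1] = A[k, k + 1] = -0.5
    return np.linalg.solve(A, b)

h = hit_n_before_0(10)
print(h)   # approximately [0, 0.1, 0.2, ..., 1.0]
```

Heuristically, this is why an order-1/n probability appears: the y-component of the walk must climb to height comparable to n before returning to the real axis.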
By reversing the path we can see that

Also note that

P*^{κm}(S_j = 0, j − 1 < ρ_n ∧ T_{κ{...,−1,0,1,...,m−1}}) =

by translation invariance. Now,

This together with (14) implies the lemma. 2

Remark. The above result implies the following remarkable claim: if the step distribution of the walk is symmetric with respect to the y-axis then, under P, the events E^+_n and E^−_n are independent.
Remark. Versions of this lemma have appeared in a number of places. See [6,2,3].
Proof. In the case κ = 1, this was essentially proved by Fukai [3]. Theorem 1.1 in [3] states that P(n² < T_N) ≍ n^{−1/2} for any zero-mean aperiodic random walk on the lattice Z² with finite (2 + δ)-moment. Note that we can linearly map L onto Z², and this causes only a multiplicative change of constants (depending on L) in the conditions (1)-(3), which imply the assumptions needed for (19) to hold. The conversion from n² to ρ_n is not difficult, and his argument can be extended to give this. Note that this gives a lower bound for other κ, where c depends on L and the transition probability p only. Hence, the two terms in the product in Lemma 9 are bounded below by c/√n, but the product is bounded above by c_1/n. Hence, each of the terms is also bounded above by c̄/√n, and this proves the statement. 2

Proof. We prove (a), and note that (b) can be done similarly. It is equivalent to show that

P(T_{κN+n} < T_0) ≥ c / log n.

Note that since τ_n ≤ T_{κN+n}, Lemma 3 yields an upper bound of the same order on the above probability. For the lower bound, note that the invariance principle implies

by Lemma 3. Use the Markov property and Lemma 7, applied to the disk centered at n = (n, 0) of radius 9n/10, to get

P(T_{κN+n} < T_0 | τ_n < T_0, Re(S_{τ_n}) ≥ 4n/5, |S_{τ_n}| − n ≤ n/5) ≥ c,

uniformly in n. An easy overshoot argument yields

P(τ_n < T_0, Re(S_{τ_n}) ≥ 4n/5, |S_{τ_n}| − n ≤ n/5) ≍ P(τ_n < T_0, Re(S_{τ_n}) ≥ 4n/5),

which implies the lemma. 2

Proposition 12 If j, n ∈ Z_+,

Proof. (a) A simple Markov argument gives

P(τ_n ≤ T_{κN}) ≤ P(τ_n ≤ T_{κZ_+}) ≤ P(S_{T_{κN}} = 0)^{−1} P(τ_n ≤ T_{κN}),

and hence the first two quantities are comparable. Since τ_n ≤ ρ_n, (18) gives P(τ_n ≤ T_{κN}) ≥ c/√n. For the upper bound, let A^− = A^−_n be the event that Re(S_{τ_n}) ≤ 0. By the invariance principle, P(A^−) ≥ 1/4. However, we claim that P(A^− | τ_n ≤ T_{κN}) ≥ P(A^−). Indeed, by translation invariance, we can see that for every j > 0, P^{jκ}(A^−) ≤ P(A^−), and hence, by the strong Markov property, P(A^− | τ_n > T_{κN}) ≤ P(A^−).
Therefore,

The invariance principle can now be used to see that, for some c,

and hence P(ρ_n ≤ T_{κN}) ≥ (c/4) P(τ_n ≤ T_{κN}).
(b) Let T = T_{−n} ∧ T_{κN}. Since P^{−n}(S_T = −n) ≥ c/log n by Lemma 11(a), it suffices by the strong Markov property to show that

By considering reversed paths, we see that P^{−n}(S_T = 0) = P*(S_T = −n).
(e) This is done similarly to (d), using (c) instead of (b). 2

Lemma 13 There exist 0 < c_1 < c_2 < ∞ and N < ∞ such that if n ≥ N, m = n^4, and

Remark. When w ∈ C_{4n} \ C_{3n}, (i) implies a lower bound of the same order in (ii).
Proof. (i) Let T = τ_m ∧ η_n. We will show that

Consider the martingale M_j = πσ²[a*(S_{j∧T}) − k̄] − log n, and note that M_j = log |S_{j∧T}| − log n + O(|S_{j∧T}|^{−1}). Therefore,

The optional sampling theorem implies that

(the estimate (10) can be used to show that the optional sampling theorem is valid). Note that

The last inequality uses Lemma 2. Therefore,

and hence it suffices to show that

Clearly,

Also,

and hence E^w[M_T ; |S_T| < n] = O(log² n / n). Combining these estimates with (26) and (27) gives (28) and therefore (25).
(ii) Let q = q(n, A) be the maximum of P^w(τ_m < T_{A[n/2,n]}), where the maximum is over all w ∈ C_{4n}, and let w = w_n be a point attaining this maximum. Let η̄_n be the first time that the random walk enters C_n and let η*_n be the first time after this that the walk leaves C_{2n}. Then by a Markovian argument and an easy overshoot argument we get

where α = 1 − c < 1 for c the constant from Lemma 7. The O(n^{−1}) error term comes from considering the probability that |S_{η*_n}| ≥ 4n. By letting z = w we get

P^w(τ_m < T_{A[n/2,n]}) ≍ P^w(τ_m < T_{A[n/2,n]}, τ_m < η̄_n) = P^w(τ_m < η̄_n).

We now show that (i) implies (29). Namely, by the same argument as in (i), applied to n/2 instead of n and still with m = n^4, one gets

P^z(τ_m < η̄_n) ≍ 1/log n,   for z ∈ C_{2n} \ C_{3n/2}.
The uniform upper bound can easily be extended to all z ∈ C_{2n} using the strong Markov property and the overshoot estimate (10). Now for z ∈ C_{4n} \ C_{3n} we have

so that the upper bound in (i) together with the strong Markov property implies (29) for z ∈ C_{4n} \ C_{3n}. The remaining case z ∈ C_{3n} \ C_{2n} follows again from the strong Markov property and an overshoot estimate. 2

Recall that we may assume k ≤ n/2.
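The approximation M_j ≈ log|S_{j∧T}| − log n used in the proof above rests on log|z| being nearly discrete-harmonic away from the origin. For simple random walk (an illustrative symmetric case, not the general walk of the paper) this is easy to check numerically:

```python
import math

def laplacian_log(x, y):
    # Discrete Laplacian of f(z) = log|z| for simple random walk:
    # mean of f over the four neighbors of (x, y), minus f(x, y).
    f = lambda u, v: 0.5 * math.log(u * u + v * v)
    nbrs = ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    return sum(f(u, v) for u, v in nbrs) / 4.0 - f(x, y)

for r in (5, 10, 20, 40):
    print(r, laplacian_log(r, 0))   # magnitude decays rapidly as r grows
```

Since log|z| is harmonic in the continuum, the discrete Laplacian picks up only higher-order Taylor terms, which is why log|S_j| behaves like a martingale up to a small error away from 0.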
(Here and below we use the easy estimate: