Alternative constructions of a harmonic function for a random walk in a cone

For a random walk killed at leaving a cone we suggest two new constructions of a positive harmonic function. These constructions allow us to remove the quite strong extendability assumption imposed in our previous paper (Denisov and Wachtel, 2015, Random walks in cones). As a consequence, all the limit results from that paper remain true for cones which are either convex or star-like and $C^2$.


Introduction and the main result
Consider a random walk $\{S(n), n \ge 1\}$ on $\mathbb{R}^d$, $d \ge 1$, where $S(n) = X(1) + \cdots + X(n)$ and $\{X(n), n \ge 1\}$ is a family of independent copies of a random vector $X = (X_1, X_2, \ldots, X_d)$. We will assume that the coordinates have zero mean, unit variance and are uncorrelated, that is, $E[X_i] = 0$, $\mathrm{var}(X_i) = 1$ for $1 \le i \le d$ and $\mathrm{cov}(X_i, X_j) = 0$ for $1 \le i < j \le d$.
Denote by $S^{d-1}$ the unit sphere of $\mathbb{R}^d$ and let $\Sigma$ be an open and connected subset of $S^{d-1}$. Let $K$ be the cone generated by the rays emanating from the origin and passing through $\Sigma$, i.e. $\Sigma = K \cap S^{d-1}$. Let $\tau_x$ be the exit time from $K$ of the random walk with starting point $x \in K$, that is,
$$\tau_x = \inf\{n \ge 1 : x + S(n) \notin K\}.$$
In the present paper we are concerned with the existence of a positive harmonic function $V$ for a random walk killed at the exit from $K$, that is, a function $V$ which solves the equation
$$V(x) = E[V(x + X); \tau_x > 1], \quad x \in K.$$
The harmonic function $V(x)$ plays a central role in our approach to the study of Markov processes confined to unbounded domains. This approach was initiated in [8], where we studied random walks in a Weyl chamber, which is an example of a cone. These studies were extended in [9], where we considered random walks in general cones. In particular, in [9] we showed that
$$P(\tau_x > n) \sim C V(x) n^{-p/2}, \quad n \to \infty,$$
and proved global and local limit theorems for random walks conditioned on $\{\tau_x > n\}$. The approach suggested in [9] was further extended to one-dimensional random walks above curved boundaries [11], [6], [7], integrated random walks [10], [5], products of random matrices [14], and Markov walks [13].
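For intuition, the exit time $\tau_x$ is easy to simulate. The following sketch is our own illustration, not part of the paper: it assumes the particular cone $K = (0,\infty)^2$ and steps with independent $\pm 1$ coordinates, which satisfy the moment conditions above, and checks empirically that almost every path eventually leaves the cone.

```python
import numpy as np

def exit_time(x, cap, rng):
    """First n >= 1 with x + S(n) outside K = (0, inf)^2, censored at cap."""
    steps = rng.integers(0, 2, size=(cap, 2)) * 2 - 1            # independent +/-1 coordinates
    path = np.asarray(x, dtype=float) + np.cumsum(steps, axis=0)  # x + S(n), n = 1..cap
    left = (path <= 0).any(axis=1)                                # outside the open cone
    first = int(np.argmax(left))
    return first + 1 if left[first] else cap + 1                  # cap + 1 encodes tau > cap

rng = np.random.default_rng(0)
taus = np.array([exit_time((2.0, 2.0), 10_000, rng) for _ in range(2_000)])
print((taus <= 10_000).mean())  # almost every sampled path leaves the cone
```

Since $P(\tau_x > n)$ decays like $n^{-1}$ in this example, the fraction of censored paths is tiny for a cap of $10^4$.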
This approach is based on universality ideas and relies heavily on corresponding results for Brownian motion or, more generally, diffusion processes. Thus, an important role is played by the harmonic function of Brownian motion killed at the boundary of $K$, which can be described as the minimal (up to a constant factor) solution, strictly positive on $K$, of the following boundary problem:
$$\Delta u(x) = 0, \quad x \in K, \qquad u\big|_{\partial K} = 0.$$
The function $u(x)$ and the constant $p$ can be found as follows. If $d = 1$ then we have only one non-trivial cone, $K = (0, \infty)$; in this case $u(x) = x$ and $p = 1$. Assume now that $d \ge 2$. Let $L_{S^{d-1}}$ be the Laplace-Beltrami operator on $S^{d-1}$ and assume that $\Sigma$ is regular with respect to $L_{S^{d-1}}$. With this assumption, there exists a complete set of orthonormal eigenfunctions $m_j$ and corresponding eigenvalues $0 < \lambda_1 < \lambda_2 \le \lambda_3 \le \ldots$ satisfying
$$L_{S^{d-1}} m_j(x) = -\lambda_j m_j(x), \quad x \in \Sigma, \qquad m_j(x) = 0, \quad x \in \partial\Sigma. \qquad (1)$$
Then
$$p = \sqrt{\lambda_1 + (d/2 - 1)^2} - (d/2 - 1) > 0,$$
and the harmonic function $u(x)$ of Brownian motion is given by
$$u(x) = |x|^p m_1\left(\frac{x}{|x|}\right).$$
We refer to [2] for further details on exit times of Brownian motion. For symmetric stable Lévy processes, asymptotics for exit times and related questions have been considered in [1], [3] and [15]; see also the references therein.
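As a concrete illustration of these formulas (an example we add here, not taken from the text above), consider the planar quarter-plane, where everything is explicit:

```latex
% Quarter-plane example: K = \{x \in \mathbb{R}^2 : x_1 > 0,\ x_2 > 0\}, d = 2.
% On \Sigma = \{(\cos\varphi, \sin\varphi) : 0 < \varphi < \pi/2\} the operator
% L_{S^1} = d^2/d\varphi^2 has Dirichlet eigenfunctions and eigenvalues
\[
  m_j(\varphi) \propto \sin(2j\varphi), \qquad \lambda_j = 4j^2, \qquad j \ge 1 .
\]
% Hence \lambda_1 = 4 and, since d/2 - 1 = 0,
\[
  p = \sqrt{\lambda_1 + (d/2 - 1)^2} - (d/2 - 1) = 2,
  \qquad
  u(x) = |x|^2 \sin(2\varphi) = 2\, x_1 x_2 ,
\]
% which is indeed harmonic and vanishes on both bounding half-axes.
```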
In [9] we showed that one can construct a harmonic function for the random walk killed at $\tau_x$ as follows:
$$V(x) = \lim_{n \to \infty} E\big[u(x + S(n)); \tau_x > n\big].$$
The existence and positivity of $V$ were shown under certain assumptions. The geometric assumptions in [9] can be summarised as follows.

(i) $K$ is either starlike with $\Sigma$ in $C^2$, or convex. We say that $K$ is starlike if there exists $x_0 \in \Sigma$ such that $x_0 + K \subset K$ and $\mathrm{dist}(x_0 + K, \partial K) > 0$. Clearly, every convex cone is also starlike; for the proof see Remark 15 in [9].

(ii) There exists an open and connected set $\widetilde\Sigma \subset S^{d-1}$ with $\mathrm{dist}(\partial\Sigma, \partial\widetilde\Sigma) > 0$ such that $\Sigma \subset \widetilde\Sigma$ and the function $m_1$ can be extended to $\widetilde\Sigma$ as a solution to (1).

Assumption (ii) is quite restrictive. For this assumption to hold it is necessary that the boundary of the cone be piecewise infinitely differentiable, but even this condition is not sufficient. Restriction (ii) excludes many cones which are of interest in various mathematical problems. For example, it is not clear whether (ii) holds for linear transformations of the orthant $\mathbb{R}^d_+$, $d \ge 2$, which often appear in path enumeration problems in combinatorics. (It is worth mentioning that (ii) holds for any simply connected open cone in $\mathbb{R}^2$. This follows from the observation that $m_1(x) = \sin(C_1 + C_2 x)$ in this two-dimensional situation.)

We showed in [9] that condition (ii) can be dropped when the random walk $\{S(n)\}$ has bounded jumps, and Raschel and Tarrago [17] have recently shown that (ii) can be removed under moment restrictions on the vector $X$ stronger than those in [9]. The main aim of this paper is to show that this assumption can be removed without imposing any further conditions. Namely, we prove that (i) is sufficient and the following result holds.

Theorem 1. Assume that either the cone $K$ is convex, or $\Sigma$ is $C^2$ and $K$ is starlike. If $E|X|^\alpha$ is finite for $\alpha = p$ if $p > 2$, or for some $\alpha > 2$ if $p \le 2$, then the function
$$V(x) = \lim_{n \to \infty} E\big[u(x + S(n)); \tau_x > n\big]$$
is finite and harmonic for $\{S(n)\}$ killed at leaving $K$, i.e.,
$$V(x) = E[V(x + X); \tau_x > 1], \quad x \in K.$$
Furthermore, $V(x)$ is strictly positive on the set

We will present two very different proofs of this theorem.
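The limit in Theorem 1 can be tested numerically in the quarter-plane example. For $K = (0,\infty)^2$, $u(y) = 2y_1 y_2$, and steps with independent $\pm 1$ coordinates started at an integer point, one can check that $u$ is exactly harmonic for the free walk and vanishes at the lattice exit positions, so $E[u(x + S(n)); \tau_x > n] = u(x)$ for every $n$ and $V(x) = u(x)$ explicitly. The Monte Carlo sketch below is our own illustration; the cone, step law and sample sizes are assumptions, not the paper's setup.

```python
import numpy as np

def killed_expectation(x, n, samples, rng):
    """Monte Carlo estimate of E[u(x + S(n)); tau_x > n] for the quadrant
    K = (0, inf)^2 with u(y) = 2*y1*y2 and +/-1 steps in each coordinate."""
    steps = rng.choice([-1.0, 1.0], size=(samples, n, 2))
    paths = np.asarray(x, dtype=float) + np.cumsum(steps, axis=1)  # x + S(k), k = 1..n
    alive = (paths > 0).all(axis=(1, 2))                           # event {tau_x > n}
    u_end = 2.0 * paths[:, -1, 0] * paths[:, -1, 1]                # u(x + S(n))
    return float(np.mean(np.where(alive, u_end, 0.0)))

rng = np.random.default_rng(1)
est_10 = killed_expectation((2.0, 2.0), 10, 100_000, rng)
est_30 = killed_expectation((2.0, 2.0), 30, 100_000, rng)
print(est_10, est_30)  # both stay near u(x) = 2*2*2 = 8
```

The mass lost at the boundary is exactly compensated because $u$ vanishes there, which is the martingale mechanism behind the construction.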
The first proof uses preliminary bounds for the moments of the exit time $\tau_x$ due to [16]; see Lemma 9 below. The proof is similar to that in [9], but we use an additional idea of time-dependent shifts inside the cone. Thus the approach is reminiscent of the treatment of one-dimensional random walks conditioned to stay above curved boundaries [6].
The second proof combines time-dependent shifts with an iterative procedure similar to that in [8] and [10]. The main advantage of this approach is that in principle no preliminary information on moments of exit times is needed. However, we use [16] to obtain optimal moment conditions. If we assume two additional moments then this approach becomes self-contained, see Remark 19 below.
A further advantage of the new constructions is that we do not use estimates for the concentration function of the random walk $\{S(n)\}$, which were important for the method used in [9].
Since the geometric assumption (ii) was used in [9] only in the construction of $V(x)$, Theorem 1 allows us to state the limit theorems for random walks in cones proven in [9] and in [12] for all cones satisfying (i).

Corollary 2.
Under the conditions of Theorem 1, as $n \to \infty$,
$$P\left(\frac{x + S(n)}{\sqrt{n}} \in \cdot \,\Big|\, \tau_x > n\right) \Rightarrow \mu(\cdot),$$
where $\mu$ is a probability measure on $K$ with the density $H_0 u(y) e^{-|y|^2/2}$.
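In the quarter-plane example $p = 2$, so the tail asymptotics $P(\tau_x > n) \sim C V(x) n^{-p/2}$ predict decay of order $n^{-1}$: quadrupling $n$ should roughly quarter the survival probability. The following rough Monte Carlo check is our own illustration, with an assumed cone and step law; the estimated ratio is noisy.

```python
import numpy as np

def survival_probs(x, ns, samples, rng):
    """Estimate P(tau_x > n) for each n in ns, for K = (0, inf)^2 and +/-1 steps."""
    cap = max(ns)
    alive_counts = {n: 0 for n in ns}
    for _ in range(samples):
        steps = rng.integers(0, 2, size=(cap, 2)) * 2 - 1
        path = np.asarray(x, dtype=float) + np.cumsum(steps, axis=0)
        left = (path <= 0).any(axis=1)
        tau = int(np.argmax(left)) + 1 if left.any() else cap + 1
        for n in ns:
            if tau > n:
                alive_counts[n] += 1
    return {n: alive_counts[n] / samples for n in ns}

rng = np.random.default_rng(4)
probs = survival_probs((2.0, 2.0), (100, 400), 20_000, rng)
print(probs, probs[100] / probs[400])  # ratio should be roughly 4 = (400/100)^{p/2}
```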
The constant $C_0$ is a product of the volume of the unit cell in $\mathbb{R}^d$ and a factor which depends on the periodicity of the distribution of $X$.
In the proof of Theorem 5 in [9] we required strong aperiodicity of $X$. This was done in order to use the simplest version of the local limit theorem for unrestricted random walks from Spitzer's book [18]. But this standard result can be replaced by Stone's local limit theorem, which is valid for all lattice walks; see [19].

Preliminary estimates
We first collect some useful facts about the classical harmonic function $u(x)$.

Lemma 4. There exists a constant $C$ such that, for all $x \in K$,
$$|u_{x_i}(x)| \le C\,\frac{u(x)}{\mathrm{dist}(x, \partial K)}, \qquad |u_{x_i x_j}(x)| \le C\,\frac{u(x)}{\mathrm{dist}^2(x, \partial K)},$$
the analogous bound with $\mathrm{dist}^3(x, \partial K)$ holds for the third derivatives, and
$$|\nabla u(x)| \le C\,\frac{u(x)}{\mathrm{dist}(x, \partial K)}.$$

Proof. Recalling that every partial derivative $u_{x_i}$ is harmonic and using the mean value theorem for harmonic functions, we obtain
$$u_{x_i}(x) = \frac{1}{|B(x,r)|} \int_{B(x,r)} u_{x_i}(z)\,dz,$$
where $B(x, r)$ is the ball of radius $r$ around $x$ and $r < \mathrm{dist}(x, \partial K)$. By the Gauss-Green theorem,
$$u_{x_i}(x) = \frac{1}{|B(x,r)|} \int_{\partial B(x,r)} u(z)\,\nu_i(z)\,dz,$$
where $\nu(z)$ is the outer normal at $z$. Choosing $r = \mathrm{dist}(x, \partial K)/2$ and applying the Harnack inequality in the ball $B(x, \mathrm{dist}(x, \partial K))$, we conclude that
$$|u_{x_i}(x)| \le \frac{d}{r} \max_{z \in \partial B(x,r)} u(z) \le C\,\frac{u(x)}{\mathrm{dist}(x, \partial K)}.$$
This implies the desired estimate for $u_{x_i}(x)$. Since $u_{x_j}$ is harmonic as well, we can write the same representation for $u_{x_i x_j}$ and bound $u_{x_j}$ on $\partial B(x, r)$ by the first estimate together with the Harnack inequality. The inequality for the third derivatives can be proved analogously. The inequality for the gradient follows immediately from the inequality for the first derivatives.
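The gradient bound of Lemma 4 can be sanity-checked in the explicit quarter-plane example $u(x) = 2x_1x_2$ (a numerical sketch of ours, not part of the argument): there $\mathrm{dist}(x, \partial K) = \min(x_1, x_2)$ and $|\nabla u(x)| = 2\sqrt{x_1^2 + x_2^2}$, so the ratio $|\nabla u(x)|\,\mathrm{dist}(x, \partial K)/u(x)$ never exceeds $\sqrt{2}$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.1, 10.0, size=(100_000, 2))   # random points in the open quadrant

u = 2.0 * x[:, 0] * x[:, 1]                     # u(x) = 2*x1*x2
grad_norm = 2.0 * np.hypot(x[:, 0], x[:, 1])    # |grad u| = 2*sqrt(x1^2 + x2^2)
dist = x.min(axis=1)                            # dist(x, dK) = min(x1, x2)

ratio = grad_norm * dist / u                    # equals sqrt(x1^2 + x2^2)/max(x1, x2)
print(ratio.max())                              # never exceeds sqrt(2)
```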
For every cone $K$ one has the bound $\mathrm{dist}(x, \partial K) \le |x|$, $x \in K$.
Furthermore, it follows from (2) that
$$|u_{x_i}(x)| \le C\,\frac{|x|^p}{\mathrm{dist}(x, \partial K)}, \quad x \in K.$$
In the next lemma we derive more accurate estimates for $u(x)$.
Lemma 5. Assume that either the cone $K$ is convex or $\Sigma$ is $C^2$ and $K$ is starlike.
We will extend the function $u$ by putting $u(x) = 0$ for $x \notin K$.
Lemma 6. Assume that either the cone $K$ is convex or $\Sigma$ is $C^2$ and $K$ is starlike. Let $x \in K$. Then
$$|u(x+y) - u(x)| \le C|y|\left(|x|^{p-1} + |y|^{p-1}\right) \qquad (6)$$
and, for $|y| \le |x|/2$,
$$|u(x+y) - u(x)| \le C|y|\,|x|^{p-1}. \qquad (7)$$

Proof. Consider first the case $p \ge 1$. To prove (6), consider first the case when the interval $[x, x+y]$ lies in $K$. Then
$$u(x+y) - u(x) = \int_0^1 y \cdot \nabla u(x + ty)\,dt.$$
Hence, by (5),
$$|u(x+y) - u(x)| \le C|y| \sup_{0 \le t \le 1} |x + ty|^{p-1} \le C|y|\left(|x|^{p-1} + |y|^{p-1}\right),$$
as required. Now, if $[x, x+y]$ does not lie in $K$ then we have two cases: $x + y \in K$ or $x + y \notin K$. If $x + y \in K$ then there exist $t_1, t_2$ with $0 < t_1 < t_2 < 1$ such that $[x, x + t_1 y) \subset K$, $(x + t_2 y, x + y] \subset K$ and $x + t_1 y, x + t_2 y \in \partial K$. If $x + y \notin K$ then there exists $t_1 > 0$ such that $[x, x + t_1 y) \subset K$, and we put $t_2 = 1$. Since in both cases $x + t_1 y, x + t_2 y \notin K$ and $u = 0$ outside of $K$, we obtain
$$|u(x+y) - u(x)| \le |u(x + t_1 y) - u(x)| + |u(x + y) - u(x + t_2 y)| \le C|y|\left(|x|^{p-1} + |y|^{p-1}\right),$$
as required. If $p \ge 1$ then (7) is immediate from (6).

For $p < 1$ we will prove the stronger statement
$$|u(x+y) - u(x)| \le C|y|^p, \qquad (8)$$
which clearly implies (6). Consider first again the case when the interval $[x, x+y]$ lies in $K$. If $|x| \ge 2|y|$ then
$$|u(x+y) - u(x)| \le C|y| \sup_{0 \le t \le 1} |x + ty|^{p-1} \le C|y|\,|x|^{p-1} \le C|y|^p.$$
Therefore, we have (7) and (8) for $|y| \le |x|/2$. Furthermore, for $|x| < 2|y|$ one has
$$|u(x+y)| + |u(x)| \le C\left(|x+y|^p + |x|^p\right) \le C|y|^p,$$
which completes the proof of (8) in the case when $[x, x+y] \subset K$. The case when $[x, x+y]$ does not lie in $K$ can be treated in the same way as for $p \ge 1$.
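The estimates of Lemma 6 can again be checked numerically in the quarter-plane example with $p = 2$ and $u$ extended by zero (a sketch of ours; the constant $C = 4$ is specific to this example and can be verified by hand from $|u(x+y) - u(x)| = 2|x_1 y_2 + x_2 y_1 + y_1 y_2|$):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.1, 5.0, size=(50_000, 2))      # points in the open quadrant
y = rng.uniform(-3.0, 3.0, size=(50_000, 2))     # arbitrary increments

def u(z):
    """u(z) = 2*z1*z2 on K = (0, inf)^2, extended by zero outside K."""
    inside = (z > 0).all(axis=1)
    return np.where(inside, 2.0 * z[:, 0] * z[:, 1], 0.0)

lhs = np.abs(u(x + y) - u(x))
ny, nx = np.linalg.norm(y, axis=1), np.linalg.norm(x, axis=1)
rhs = 4.0 * ny * (nx + ny)                       # C|y|(|x|^{p-1} + |y|^{p-1}), p = 2, C = 4
print(bool((lhs <= rhs + 1e-9).all()))           # prints True
```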
Next we require a bound on the function
$$f(x) := E[u(x + X)] - u(x), \quad x \in K. \qquad (9)$$
Lemma 7. Let the assumptions of Theorem 1 hold and let $f$ be defined by (9). Then, for some $\delta > 0$,

Proof. Let $x \in K$ be such that $|x| \ge 1$. Put $g(x) = \mathrm{dist}(x, \partial K)$, and let $\eta \in (0, 1)$. Then, for any $y \in B(0, \eta g(x))$, the interval $[x, x + y]$ lies in $K$. By the Taylor theorem we expand $u(x + y)$ around $x$; the remainder $R_3(x)$ can be estimated by Lemma 4. Splitting the expectation defining $f$ according to whether $|X| \le \eta g(x)$ or not, we also use the bounds
$$|u(x + y) - u(x)| \le C(|x + y|^p + |x|^p) \le C(|x|^p + |y|^p),$$
valid for all $x$ and $y$. After rearranging the terms, the first term vanishes due to $E[X_i] = 0$, $\mathrm{cov}(X_i, X_j) = \delta_{ij}$ and $\Delta u = 0$. The partial derivatives of $u$ in the second term can be estimated via Lemma 4, and the resulting tail terms are controlled by the Markov inequality. Now recall the moment assumption that $E|X|^{2+\delta} < \infty$ for some $\delta > 0$. The first term is then estimated via the Chebyshev inequality, and the second term can be estimated similarly. In order to bound the last term in (11) we have to distinguish between $p \le 2$ and $p > 2$. If $p \le 2$, the Chebyshev inequality suffices; in the case $p > 2$ we use the moment condition $E|X|^p < \infty$ directly. The second statement follows easily from the fact that $u(x)$ is bounded on $\{|x| \le 1\}$ together with an elementary inequality.

We next derive an estimate for the maximum of the random walk which will be used several times in the proofs of our main results.
Proof. For every fixed $a > 0$ one has (12). Using first the standard union bound and then the Fuk-Nagaev-type inequality from Corollary 23 in [9], one gets (13); a further estimate gives (14). Combining (12)-(14), choosing $a = 2\varepsilon\sqrt{d}\,((1+\varepsilon)t + 5)$ and integrating the latter bound, one easily gets the claimed estimate. Thus, the proof is complete.
Finally, we will require the following results from [16].
Lemma 9. For every $\beta < p$ we have (16) and (17). This is the statement of Theorem 3.1 of [16]. One only has to notice that $e(\Gamma, R)$ in that theorem is denoted by $p$ in our paper.

First proof of Theorem 1
where $\gamma \in (0, \min(1/2, p))$. First we will show that it suffices to prove convergence of $E[u(x + g_k + S(k)); \tau_x > k]$ as $k \to \infty$.
We are left to consider the case $p < 1$. By (8), we immediately arrive at the required estimate. Now we prove the existence of the limit of the sequence $E[u(x + g_k + S(k)); \tau_x > k]$.
Proposition 11. There exists a finite function $V(x)$ such that
$$\lim_{k \to \infty} E[u(x + g_k + S(k)); \tau_x > k] = V(x).$$
We shall split the proof of this proposition into several steps. To this end we shall use the decomposition (24). The proposition will follow if we show that the expectations of all three random variables in (24) converge, as $k \to \infty$, to finite limits.
Lemma 12. The sequence $W_k^{(1)}(x)$ converges almost surely and in $L^1$ towards $u(x + g_{\tau_x} + S(\tau_x))$. Furthermore, (25) holds.

Proof. The almost sure convergence is immediate from the fact that the sequence $W_k^{(1)}$ is increasing. Thus, it remains to show that (25) holds. Since $x + S(\tau_x) \notin K$,
$$\mathrm{dist}(x + g_{\tau_x} + S(\tau_x), \partial K) \le |g_{\tau_x}|$$
in the case when $x + g_{\tau_x} + S(\tau_x) \in K$.
Assume first that $p < 1$. Combining (8) and (16), we obtain the desired bound. Consider now the case $p \ge 1$. Then we use the upper bound (4) and recall the definition of the sequence $g_k$. Using (16), we conclude that the first two summands are bounded from above by $C(1 + |x|^{p - 2\gamma})$. Applying the Hölder inequality with some $p' \in (p, p + p\gamma)$ to the third summand, we get a bound in terms of $E[M^\beta(\tau_x)]$. By (17),
$$E\big[M^\beta(\tau_x)\big] \le C_\beta (1 + |x|^\beta), \qquad \beta < p.$$
From this inequality and from (16), we infer that the third summand is bounded as well. As a result, (25) holds also for $p \ge 1$.
This implies the corresponding bound. From these observations and from (7) we obtain

Lemma 14. For every $x \in K$, (33) and (34) hold.

Proof. Recalling the definition of the function $f$, we have an equality which shows that (34) is a simple consequence of (33). Applying Lemma 7 and using the elementary bound $\mathrm{dist}(y + g_l, \partial K) \ge l^{1/2 - \gamma}$ for $y \in K$, which follows from the choice of $x_0$ and $R_0$ in the definition of $g_l$, we obtain a first estimate; choosing $\gamma$ sufficiently small, this estimate is summable. We next show that (36) holds. Assume first that $p > 2$. Applying Lemma 8 and combining the resulting estimate with $|g_l| \ge c l^{1/2 - \gamma}$, we obtain, for $\gamma < \delta/(5 + 2\delta)$ and $\varepsilon < \delta - 5\gamma - 2\gamma\delta$, a summable bound. Taking into account Lemma 9, we get (36) for $p > 2$.
If $p \le 2$ then the moment of order $2 + \delta$ is finite and, consequently, we can argue similarly, where in the last step we use Lemma 8 once again. Summing over $l$, we infer that (36) holds also for $p \le 2$, provided that $\gamma$ and $\varepsilon$ are sufficiently small. Using the bound $\mathrm{dist}(y + g_l, \partial K) \ge l^{1/2 - \gamma}$ and choosing $\gamma$ sufficiently small, we conclude that the remaining sum converges as well. Furthermore, if $|S(l)| \le g(x)/2$ then $\mathrm{dist}(x + g_l + S(l), \partial K) > g(x)/2$. Thus, by the Chebyshev inequality, the corresponding terms are summable. Combining this with (35) and (36), we arrive at (33).
The claim of Proposition 11 is immediate from Lemmas 12, 13 and 14. Furthermore, we have, for sufficiently small $\gamma$, the estimate (37).

Lemma 15. The function $V$ possesses the following properties.
(a) For any $\gamma > 0$,
The function $V$ is harmonic for the killed random walk, that is,
$$V(x) = E[V(x + X); \tau_x > 1], \quad x \in K.$$
The proof is identical with that of Lemma 13 in [9]; for the proof of (c) one has to notice that (37) implies the corresponding estimate.

Proof of Corollary 2
As we have mentioned in the introduction, the proofs of the claims in the corollaries are quite close to those in [9]. We demonstrate the needed changes by deriving the tail asymptotics for $\tau_x$. Proofs of other results can be adapted to the present setting in a similar way.
It is immediate from the definition of $f(x)$ that the sequence $f(x + g_l + S(l))\,I\{\tau_x > l\}$ is a martingale. The upper bound (33) together with the dominated convergence theorem implies the convergence of the corresponding expectations. Furthermore, by the optional stopping theorem, we obtain an identity involving $E\big[f(x + g_l + S(l))\,I\{\tau_x > l\};\ \nu_n > n^{1-\varepsilon}\big]$.
Using (33), the dominated convergence theorem and the fact that $P(\nu_n > n^{1-\varepsilon}) \to 0$ once again, we infer the corresponding convergence. Assume first that $p < 1$. Recalling the definition of $W_k^{(3)}$ and using (8), we obtain a bound which we now estimate for every fixed $N \ge 1$. The first summand on the right-hand side converges to zero due to the fact that $P(\nu_n > n^{1-\varepsilon}) \to 0$. Furthermore, the second summand is controlled by Lemma 14 in [9]. Letting here $N \to \infty$, we conclude that the expectation over $\{\nu_n > n^{1-\varepsilon}\}$ tends to zero. It remains to prove this relation for $p \ge 1$. In this case we use (6); the summands with $|X(l)|^p$ have already been considered. For the remaining summands, the first one converges again to zero since $P(\nu_n > n^{1-\varepsilon}) \to 0$, while for the second one, applying the Cauchy-Schwarz inequality and then using (42), we obtain the required bound. Therefore, letting $N \to \infty$, we complete the proof.

Second proof of Theorem 1
The next statement is the most important step in this proof of Theorem 1.
Proposition 16. Assume that the conditions of Theorem 1 are valid. Then, for every sufficiently small $\varepsilon > 0$ there exists $q > 0$ such that

Lemma 17. For sufficiently small $\varepsilon$ there exists $q > 0$ such that

Proof. For every $x \in K$ we define an auxiliary quantity. If $p \ge 1$ then we use (6); in the case $p < 1$ we use (8). Combining these two cases, and then using (6) and (8) once again, we obtain the first estimate. If $p \ge 1$ then, using (4) and the fact that $u(x) = 0$ for $x \notin K$, we bound the first term; to bound the second term we use the Burkholder inequality. If $p < 1$ then, applying (8), we obtain
$$E[u(x + g_k + S(\tau_x)); \tau_x \le k] \le C k^{p/2 - p\gamma}.$$
In other words, (45) holds also for $p < 1$.
Clearly, we can pick sufficiently small ε > 0 in such a way that all exponents on the right hand side of the previous inequality are negative. This completes the proof of the lemma.

Taking into account the above estimates and combining (49)-(51) completes the proof of the proposition.
Using (53), we obtain a bound for the second expectation on the right-hand side of (52). To estimate $E_1$ we apply the upper bound from (4) and use the fact that, on the event $\{\nu_n > m\}$,
$$\mathrm{dist}(x + S(m), \partial K) \le \frac{1}{2}\, n^{1/2 - \varepsilon} + \frac{|x + S(m)|}{n^{2\varepsilon}}.$$
Taking into account (61) and the fact that $\delta$ can be made arbitrarily small, we conclude that the limit
$$V(x) := \lim_{n \to \infty} E[u(x + S(n)); \tau_x > n]$$
exists for every $x \in K$.
For positivity of $V$ note that, by (6), a similar lower bound holds when $p \ge 1$, while, by (8),
$$E[u(tx + S(n_0)); \tau_{tx} > n_0] \ge u(tx)\,P(\tau_{tx} > n_0) - E[|S(n_0)|^p]$$
when $p < 1$. Also, $C(tx) \le t^{p-\gamma}|x|^{p-\gamma}$. Hence, it follows from (63) that there exists $R$ such that $V(x)$ is positive for $x \in D_{R,\gamma}$. The rest of the proof follows the corresponding part of Lemma 13 of [9]. The proof is complete.