Asymptotic direction for random walks in mixing random environments

We prove that every random walk in a uniformly elliptic random environment satisfying the cone mixing condition and a non-effective polynomial ballisticity condition with high enough degree has an asymptotic direction.


Introduction
Random walk in random environment is a simple but powerful model for a variety of phenomena, including homogenization in disordered materials [M94], DNA chain replication [Ch62], crystal growth [T69] and turbulent behavior in fluids [Si82]. Nevertheless, challenging and fundamental questions about it remain open (see [Z04] for a general overview). In the multidimensional setting, a widely open question is to establish relations between the environment at a local level and the long-time behavior of the random walk. During the last ten years, interesting progress has been achieved, especially in the case in which the movement takes place on the hypercubic lattice Z^d and the environment is i.i.d., establishing relations between directional transience, ballisticity, the existence of an asymptotic direction, and the law of the environment in finite regions. To a great extent, these arguments are no longer valid when the i.i.d. assumption is dropped.
In this article we focus on the problem of finding local conditions on the environment which ensure the existence of a deterministic asymptotic direction for the random walk in contexts where the environment is not necessarily i.i.d. As will be shown in Section 2, there exist ergodic environments for which no deterministic asymptotic direction exists. Therefore, some kind of mixing or ballisticity condition has to be imposed on the environment. Here we establish the existence of an asymptotic direction for random walks in random environments which are uniformly elliptic, cone mixing [CZ01], and satisfy a non-effective version of the polynomial ballisticity condition introduced in [BDR14] with a high enough degree of decay. It will also be shown (see Section 2) that there exist environments almost satisfying the above assumptions which are directionally transient and possess, at least in a weak sense, an asymptotic direction, but have a vanishing velocity. The term almost is used here because in these examples the non-effective polynomial ballisticity condition is satisfied only with a low degree. This shows that, while the mixing and non-effective polynomial conditions we will impose do imply the existence of an asymptotic direction, they do not necessarily imply the existence of a non-vanishing velocity.
For x ∈ R^d, we denote by |x|_1, |x|_2 and |x|_∞ its l^1, l^2 and l^∞ norms respectively. For each integer d ≥ 1, we consider the 2d-dimensional simplex P_d := {z ∈ (R_+)^{2d} : Σ_{i=1}^{2d} z_i = 1} and the set of unit vectors E := {e ∈ Z^d : |e|_1 = 1}. We define the environmental space Ω := P_d^{Z^d} and endow it with its product σ-algebra. Now, for a fixed ω = {ω(y) : y ∈ Z^d} ∈ Ω, with ω(y) = {ω(y, e) : e ∈ E} ∈ P_d, and a fixed x ∈ Z^d, we consider the Markov chain {X_n : n ≥ 0} with state space Z^d starting from x, defined by the transition probabilities

P_{x,ω}[X_{n+1} = X_n + e | X_n] = ω(X_n, e) for e ∈ E.   (1)

We denote by P_{x,ω} the law of this Markov chain and call it a random walk in the environment ω. Consider a law P defined on Ω. We call P_{x,ω} the quenched law of the random walk starting from x. Furthermore, we define the semi-direct product probability measure P_x on Ω × (Z^d)^N by

P_x[A × B] := ∫_A P_{x,ω}[B] dP(ω)

for each Borel-measurable set A in Ω and B in (Z^d)^N, and call it the annealed or averaged law of the random walk in random environment. The law P of the environment is said to be i.i.d. if the random variables {ω(x) : x ∈ Z^d} are i.i.d. under P, elliptic if for every x ∈ Z^d and e ∈ E one has that P[ω(x, e) > 0] = 1, and uniformly elliptic if there exists a κ > 0 such that P[ω(x, e) ≥ κ] = 1 for every x ∈ Z^d and e ∈ E.
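As a concrete illustration of the quenched law (1), the following sketch simulates the chain in dimension d = 2. The particular environment law used here (i.i.d. sites, with an illustrative ellipticity constant KAPPA) is an assumption made only for the example; it is not the construction of the paper.

```python
import random

# Unit vectors of Z^2, the set E in the text (|e|_1 = 1).
E2 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
KAPPA = 0.05  # illustrative uniform ellipticity constant: every omega(x, e) >= 2*KAPPA

def random_site_probs(rng):
    """Sample a transition vector omega(x, .) in P_2 with every entry >= 2*KAPPA."""
    raw = [rng.random() for _ in E2]
    s = sum(raw)
    slack = 1 - len(E2) * 2 * KAPPA   # probability mass left after the floor 2*KAPPA
    return {e: 2 * KAPPA + slack * r / s for e, r in zip(E2, raw)}

def walk(n, rng):
    """Run n steps of the quenched chain, sampling omega(x, .) lazily (i.i.d. sites)."""
    omega, x, path = {}, (0, 0), [(0, 0)]
    for _ in range(n):
        if x not in omega:
            omega[x] = random_site_probs(rng)
        probs = omega[x]
        # One step of (1): pick e with probability omega(x, e).
        e = rng.choices(E2, weights=[probs[d] for d in E2])[0]
        x = (x[0] + e[0], x[1] + e[1])
        path.append(x)
    return path

rng = random.Random(0)
p = walk(200, rng)
```

Each sampled transition vector sums to 1 and respects the uniform ellipticity floor, and every step of the simulated path is a nearest-neighbor jump.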
Let l ∈ S^{d−1}. We say that the random walk is transient in direction l (or simply directionally transient) if P_0-a.s. one has that lim_{n→∞} X_n · l = ∞.
Furthermore, we say that it is ballistic in direction l if lim inf_{n→∞} (X_n · l)/n > 0.
In the case in which the environment is elliptic and i.i.d., it is known that whenever a random walk is ballistic, a law of large numbers is necessarily satisfied, and in fact lim_{n→∞} X_n/n = v ≠ 0 is deterministic [DR14]. Furthermore, in the uniformly elliptic i.i.d. case, it is still an open question whether in dimensions d ≥ 2 every directionally transient random walk is ballistic (see [BDR14]).
On the other hand, we say that v̂ ∈ S^{d−1} is an asymptotic direction if P_0-a.s. one has that lim_{n→∞} X_n/|X_n|_2 = v̂.
For elliptic i.i.d. environments, Simenhaus established in [Si07] the existence of an asymptotic direction whenever the random walk is directionally transient in an open set of directions of S^{d−1}. As will be shown in Section 2, this statement is no longer true when the environment is assumed to be ergodic instead of i.i.d., even if it is uniformly elliptic.
Let us now define the three main assumptions used throughout this article: uniform ellipticity, cone mixing, and the non-effective polynomial ballisticity condition. Let κ > 0. We say that P is uniformly elliptic with respect to l, denoted by (UE)|l, if the jump probabilities of the random walk are positive, and larger than 2κ in those directions onto which l has a positive projection. In other words, if P[ω(0, e) > 0] = 1 for every e ∈ E and

P[ min_{1≤i≤d} ω(0, sgn(l · e_i) e_i) ≥ 2κ ] = 1,

where by convention sgn(0) = 0.
We will now introduce a mixing assumption on the environment P. Let α > 0 and let R be a rotation of R^d such that

R(e_1) = l.   (3)

To define the cone, it will be useful to consider, for each i ∈ {2, . . . , d}, the vectors

l_{+i} := (l + αR(e_i))/|l + αR(e_i)|_2 and l_{−i} := (l − αR(e_i))/|l − αR(e_i)|_2.
For x ∈ Z^d, let H_{x,l} := {y ∈ Z^d : (y − x) · l < 0} and define the σ-algebra ℋ_{x,l} := σ(ω(y) : y ∈ H_{x,l}). Now, for M ≥ 1, we say that the non-effective polynomial condition (PC)_{M,c}|l is satisfied if there exists some c > 0 so that for every y ∈ H_{0,l} one has that

lim_{L→∞} L^M sup P_0[ X_{T_{B_{L,cL,l}(0)}} ∉ ∂_+B_{L,cL,l}(0), T_{B_{L,cL,l}(0)} < T_{H_{y,l}} | ℋ_{y,l} ] = 0,   (5)

where the supremum is taken over all values of the coordinates {ω(x) : x · l ≤ y · l}. It is possible to show that for i.i.d. environments this condition is implied by Sznitman's (T) condition [Sz03], and that it is equivalent to the effective polynomial condition introduced in [BDR14].
Let I be the set of vectors in R^d with integer coordinates and different from 0. Define S^{d−1}_q := { l/|l|_2 : l ∈ I }. We can now state our main result.
Theorem 1.1. Let l ∈ S^{d−1}_q, M > 6d, c > 0 and 0 < α ≤ min{1/9, 1/(2c+1)}. Consider a random walk in a random environment with stationary law satisfying the uniform ellipticity condition (UE)|l, the cone mixing condition (CM)_{α,φ}|l and the non-effective polynomial condition (PC)_{M,c}|l. Then there exists a deterministic v̂ ∈ S^{d−1} such that P_0-a.s. one has that lim_{n→∞} X_n/|X_n|_2 = v̂.
As will be explained in Section 2, Simenhaus's theorem, which states that an asymptotic direction exists whenever the random walk is directionally transient in an open set of directions and the environment is i.i.d., is no longer true if the i.i.d. assumption is dropped. Theorem 1.1 shows that if the i.i.d. assumption is weakened to cone mixing, while directional transience is strengthened to the non-effective polynomial condition, we can still guarantee the existence of an asymptotic direction.
In [CZ01], the existence of a strong law of large numbers is established for random walks in cone-mixing environments which also satisfy a version of Kalikow's condition, but under an additional assumption on the existence of certain moments of the approximate regeneration times. This assumption is unsatisfactory in the sense that it is in general difficult to verify for a given random environment. On the other hand, as will be shown in Section 2, there exist examples of random walks in a random environment satisfying the cone-mixing assumption for which the law of large numbers is not satisfied, while an asymptotic direction exists. From this point of view, Theorem 1.1 is also a first step towards obtaining scaling limit theorems for random walks in cone-mixing environments through ballisticity conditions weaker than Kalikow's condition, and without any assumption on the moments of the approximate regeneration times or of the position of the random walk at these times. On the other hand, in [RA03], a strong law of large numbers is proved for random walks which satisfy Kalikow's condition and Dobrushin-Shlosman's strong mixing assumption. The Dobrushin-Shlosman strong mixing assumption is stronger than cone mixing, both because it implies cone mixing in every direction and because it corresponds to an exponential decay of correlations.
A key step to prove Theorem 1.1 will be to establish that the probability that the random walk never exits a cone is positive through the use of renormalization type ideas, and only assuming the non-effective polynomial condition and uniform ellipticity. Using this fact, we will define approximate regeneration times as in [CZ01], showing that they have finite moments of order larger than one when we also assume cone-mixing. This part of the proof will require careful and tedious computations. Once this is done, the law of large numbers can be deduced using for example the coupling approach of [CZ01].
In Section 2, we will present two examples of random walks in random environments which exhibit behavior not observed in the i.i.d. case, giving an idea of the limitations of the framework of Theorem 1.1. In Section 3, the meaning of the non-effective polynomial condition and its relation to other ballisticity conditions will be discussed. In Section 4, we will show that the non-effective polynomial condition implies that the probability that the random walk never exits a cone is positive. This will be used in Section 5 to prove that the approximate regeneration times have finite moments of order larger than one. Finally, in Section 6, Theorem 1.1 will be proved using a coupling with i.i.d. random variables.

Examples of directionally transient random walks with vanishing velocity or without an asymptotic direction
We will present two examples of random walks in random environment which illustrate the limits of the hypotheses of Theorem 1.1. The first example indicates that the hypotheses of Theorem 1.1 do not necessarily imply a strong law of large numbers with a non-vanishing velocity. The second example shows that we cannot expect to prove the existence of an asymptotic direction without either some kind of mixing hypothesis on the environment or some ballisticity condition.
Throughout, p will be a random variable taking values in (0, 1) such that there exists a unique κ ∈ (1/2, 1) with the property that

E[((1 − p)/p)^κ] = 1.   (6)

Let {p_i : i ∈ Z} be i.i.d. copies of p, and for each i ∈ Z define ω_i ∈ P_2 by ω_i(e_1) := p_i/2, ω_i(−e_1) := (1 − p_i)/2 and ω_i(e_2) = ω_i(−e_2) := 1/4. Now consider the random environment ω = {ω((i, j)) : (i, j) ∈ Z^2} defined by ω((i, j)) := ω_i for all i, j ∈ Z, so that the transition probabilities depend only on the first coordinate. We will call P_1 the law of the above environment and Q_1 the annealed law of the corresponding random walk starting from 0.
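The column environment of this example is easy to simulate. In the sketch below, the law of p is only a placeholder (the text requires only the moment condition on κ), and the site probabilities follow the column construction: a fair coin decides horizontal versus vertical motion, horizontal steps are biased by p_i, and vertical steps are symmetric.

```python
import random

def sample_p(rng):
    # Placeholder law for p on (0, 1); this particular choice is an assumption
    # made for illustration only.
    return rng.uniform(0.2, 0.9)

def column_environment(i, p_cache, rng):
    """omega((i, j)) depends on the column i only:
    omega_i = (p_i/2, (1-p_i)/2, 1/4, 1/4)."""
    if i not in p_cache:
        p_cache[i] = sample_p(rng)
    p = p_cache[i]
    return {(1, 0): p / 2, (-1, 0): (1 - p) / 2, (0, 1): 0.25, (0, -1): 0.25}

def walk(n, rng):
    """Run n steps of the walk in a lazily sampled column environment."""
    p_cache, x = {}, (0, 0)
    for _ in range(n):
        probs = column_environment(x[0], p_cache, rng)
        moves, weights = zip(*probs.items())
        e = rng.choices(moves, weights=weights)[0]
        x = (x[0] + e[0], x[1] + e[1])
    return x
```

By construction the environment is constant along each vertical column, which is exactly the non-i.i.d. feature exploited in Theorem 2.1.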
Theorem 2.1. Consider a random walk in a random environment with law P_1. Then the following are satisfied: (i) Q_1-a.s., lim_{n→∞} X_n · e_1 = ∞. (ii) Q_1-a.s., lim_{n→∞} X_n/n = 0. (iii) Q_1-a.s., lim_{n→∞} X_n/|X_n|_2 = e_1. (iv) The law Q_1 satisfies the polynomial condition (PC)_{M,c} with M = κ − 1/2 − ε and c = 1, where ε is an arbitrary number in the interval (0, κ − 1/2).

Proof. Part (i). We will describe a one-dimensional procedure which will be used throughout the proofs of items (i) and (ii). Define {Y_n : n ≥ 0} := {X_n · e_1 : n ≥ 0}. Observed along its jump times, Y is a one-dimensional random walk in the random environment determined by the p_i, and E_1 denotes the corresponding expectation in this random environment. Now, from the transience criterion in [Z04], Theorem 2.1.2, one has that Q_1-a.s. lim_{n→∞} X_n · e_1 = ∞.
Part (ii). Note that X_n · e_1 coincides with the walk {Y_n : n ≥ 0} of part (i) observed along the times of horizontal jumps. Using the strong law of large numbers for this projection ([Z04], Theorem 2.1.9), which gives a vanishing speed Q_1-a.s., and the fact that (X_n · e_2) is a random walk which moves with the same probability in both vertical directions, we conclude that Q_1-a.s. lim_{n→∞} X_n/n = 0.
Part (iii). We define the random variables N_1 and N_2 as the numbers of horizontal and vertical steps performed by the walk up to time n, respectively. By the very definition of this example, both of them are distributed under the quenched law as a binomial with parameters n and 1/2. For each ε > 0, we have to estimate the probability in (7). Clearly, X_n · e_2 under the annealed law has the same law as a one-dimensional simple symmetric random walk {Z_m : m ≥ 0} at time m = N_2, whose law we denote by P. Note that P-a.s. N_2/n → 1/2 as n → ∞. Therefore, since κ > 1/2, we see that Q_1-a.s. lim_{n→∞} (X_n · e_2)²/n^{2κ} = 0.
On the other hand, using the convergence theorem of Kesten, Kozlov and Spitzer [KKS75], we see that

lim_{n→∞} (X_n · e_1)/|X_n|_2 = 1

in distribution, and hence also in Q_1-probability. It follows that for each ε > 0 the left-hand side of (7) tends to 0 as n → ∞.
Part (iv). For j ∈ {1, 2} and a a positive real number, we define the stopping times

T^{e_j}_a := inf{n ≥ 0 : X_n · e_j ≥ a}   (8)

and

T̃^{e_j}_a := inf{n ≥ 0 : X_n · e_j ≤ a}.   (9)

Notice that for c = 1 and large L one has the estimate (10). The first probability on the right-most side of (10) has an exponential bound in L. The second probability on the right-most side of (10) is bounded above as follows. Keeping the notation introduced in item (iii), one sees that for large L there exists a positive constant K_1 such that (11) holds. On the other hand, using the sharp estimate of Theorem 1.3 in [FGP10] and denoting by P̃ the law of the underlying one-dimensional random walk corresponding to the annealed law of (X_n · e_1)_{n≥0}, we see that for large L there exists a positive constant K_2 such that (12) holds. Therefore, in view of inequality (10) and the estimates (11) and (12), the proof is complete.
2.2. Directionally transient random walk without an asymptotic direction. Let {p_i : i ∈ Z} and {p̃_j : j ∈ Z} be two independent families of i.i.d. copies of p. Following a procedure similar to that of the previous example, we consider in the lattice Z^2 the canonical vectors e_1 and e_2, and define the random environment ω = {ω((i, j)) : (i, j) ∈ Z^2} so that the horizontal transition probabilities at (i, j) are determined by p_i and the vertical ones by p̃_j. We call P_2 the law of the above environment and Q_2 the annealed law of the corresponding random walk starting from 0.
Theorem 2.2. Consider a random walk in a random environment with law P 2 . Then, the following are satisfied.
Both assertions follow from an argument similar to the one used in part (i) of Theorem 2.1, Theorem 2.1.2 in [Z04] and (6). Part (ii). The proof is similar to that of case (ii) of Theorem 2.1. Part (iii). For j ∈ {1, 2} we define T_{0,j} := 0, we let T_{1,j} be the time of the first jump of the walk in direction ±e_j, and for i ≥ 2 we let T_{i,j} be the time of the i-th such jump. Setting Y_{n,j} := X_{T_{n,j}} · e_j, we see that for j ∈ {1, 2} the one-dimensional random walks (Y_{n,j})_{n≥0}, which make no transitions to the current site, are independent, and their transitions at each site i ∈ Z are determined by p_i and p̃_i respectively. Furthermore, for j ∈ {1, 2}, the strong law of large numbers implies (14) Q_2-a.s.
We now apply the result of Kesten, Kozlov and Spitzer [KKS75] to see that there exist constants C_1 and C_2 such that

( Y_{n,1}/n^κ , Y_{n,2}/n^κ ) → ( C_1 (1/S^1_{κ,ca})^κ , C_2 (1/S^2_{κ,ca})^κ )

in distribution, where for j ∈ {1, 2}, S^j_{κ,ca} stand for two independent completely asymmetric stable laws of index κ, which are positive. Using (14) and properties of convergence in distribution, we can identify the limit in distribution of (X_n · e_1)/n^κ e_1 + (X_n · e_2)/n^κ e_2, and therefore we have proved that the limit v̂ is random. Part (iv). A first step will be to prove the following decay: lim sup_{L→∞} (1/L) log Q_2[ T̃^{e_j}_{−c̃L} < T^{e_j}_{cL} ] < 0 for arbitrary positive constants c and c̃ (see (8) and (9) for the notation). We will prove this only in the case j = 1, since the case j = 2 is similar. Following the notation introduced in item (i) of Theorem 2.1 and denoting the greatest integer function by [·], we see that it suffices to prove that for large L there exists a positive constant C such that (15) holds. To this end, for a fixed random environment ω, we define V_L(i) as the quenched probability of the relevant backtracking event for the one-dimensional walk started at i, with boundary conditions V_L = 1 at the left endpoint and V_L([cL] + 1) = 0. This system can be solved by the method developed by Chung in [Ch67], Chapter 1, Section 12. Applying it, we see that V_L can be expressed in terms of the sums S(z_1, z_2) := Σ_{z_1 < m ≤ z_2} log ρ(m), where ρ(m) := (1 − p_m)/p_m. A slight variation of the argument in [Sz02], page 744, completes the proof of claim (15). On the other hand, the probability under consideration is clearly bounded from above by a sum of backtracking probabilities of the type controlled in (15) (see Figure 1). By virtue of claim (15) the last expression has an exponential bound, and this finishes the proof.

Preliminary discussion
In this section we will derive some important properties satisfied by the non-effective polynomial and cone mixing conditions. In subsection 3.1 we will show that the non-effective polynomial condition is weaker than the conditional form of Kalikow's condition introduced in [CZ02]. In subsection 3.2 we will show that the cone mixing condition implies ergodicity. Finally, in subsection 3.3, we will prove that the non-effective polynomial condition in a given direction implies the non-effective polynomial condition, with a lower degree, in a neighborhood of that direction.
3.1. The non-effective polynomial condition and its relation to other directional transience conditions. Here we will discuss the relationship between the non-effective polynomial condition and other transience conditions. Furthermore, we will show that the conditional non-effective polynomial condition is weaker than the conditional version of Kalikow's condition introduced by Comets and Zeitouni in [CZ01] and [CZ02].
For reasons that will become clear in the next section, the following definition, which is actually weaker than the conditional non-effective polynomial condition, will be useful. Let l ∈ S^{d−1}, M ≥ 1 and c > 0. We say that condition (P)_{M,c}|l, the non-effective polynomial condition, is satisfied if

lim_{L→∞} L^M P_0[ X_{T_{B_{L,cL,l}(0)}} ∉ ∂_+B_{L,cL,l}(0) ] = 0.

It is straightforward to see that (PC)_{M,c}|l implies (P)_{M,c}|l.
It should be pointed out that, for a fixed γ ∈ (0, 1), if both in the conditional and in the non-conditional non-effective polynomial condition the polynomial decay is replaced by the stronger stretched exponential decay e^{−L^γ}, one obtains a condition defined on rectangles which is equivalent to the condition (T)_γ introduced by Sznitman in [Sz03], together with a conditional version of it. On the other hand, as we will now see, the conditional non-effective polynomial condition is implied by Kalikow's condition as defined in [CZ01] for environments which are not necessarily i.i.d. Let us recall this definition. For V a finite, connected subset of Z^d with 0 ∈ V, we let T_V := inf{n ≥ 0 : X_n ∉ V}. Kalikow's random walk {X̂_n : n ≥ 0}, with state space V ∪ ∂V and starting from y ∈ V ∪ ∂V, is defined by the transition probabilities

ω̂_V(x, e) := E[ E_{0,ω}( Σ_{n=0}^{T_V} 1{X_n = x} ) ω(x, e) ] / E[ E_{0,ω}( Σ_{n=0}^{T_V} 1{X_n = x} ) ] for x ∈ V and e ∈ E,

and ω̂_V(x, 0) := 1 for x ∈ ∂V, so that the walk is absorbed at ∂V.
We denote by P̂_{y,V} the law of this random walk and by Ê_{y,V} the corresponding expectation. The importance of Kalikow's random walk stems from the fact that its exit distribution from V coincides with the annealed exit distribution of the original walk (see (16)). We now define Kalikow's condition with respect to the direction l as the following requirement: there exists a positive constant δ such that

inf_{V, x ∈ V} d̂_V(x) · l ≥ δ,

where d̂_V(x) := Σ_{e ∈ E} ω̂_V(x, e) e denotes the drift of Kalikow's random walk at x, and the infimum runs over all finite connected subsets V of Z^d with 0 ∈ V. The following result shows that Kalikow's condition is indeed stronger than the conditional non-effective polynomial condition.
Proposition 3.1. Let l ∈ S^{d−1}. Assume Kalikow's condition with respect to l. Then there exists an r > 0 such that for all y ∈ H_{0,l} the conditional exit probability in (5), with c replaced by r, decays exponentially in L, where the supremum is taken in the same sense as in (5). In particular, Kalikow's condition with respect to direction l implies (PC)_{M,r}|l for every M > 0.
Proof. Suppose that Kalikow's condition is satisfied with constant δ > 0. We first assume that y · l ∈ (−L, 0). Let c > 1. For y ∈ H_{0,l} and L ≥ 1, consider the box B_{L,cL,l}(0). Using (16) we find (17). Notice that on the relevant event, by means of the auxiliary martingale {M^V_n : n ≥ 0}, which has increments bounded by 2, we can see that (18) holds P̂_{0,V}-a.s. It will be convenient at this point to recall Azuma's inequality (see for example [Sz01]) for martingales with increments bounded by 2. Using this inequality and (18), we obtain the bound (19) for a suitable positive constant c_1. Finally, coming back to (17), we conclude that the required lim sup is negative. Let us now assume that y · l ≤ −L. By Lemma 1.1 in [Sz01] we know that there exists a positive constant ψ, depending on δ, such that for all finite connected subsets V of Z^d with 0 ∈ V, e^{−ψX_n·l} is a supermartingale with respect to the canonical filtration of the walk under Kalikow's law P̂_{0,V}. Thus, applying the optional stopping theorem at time T_V, and arguing as in the case y · l ∈ (−L, 0), we can finish the estimate in the case y · l ≤ −L.
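For the reader's convenience, the version of Azuma's inequality invoked above, for martingales with increments bounded by 2, reads:

```latex
% Azuma-Hoeffding inequality for a martingale (M_n)_{n \ge 0}
% with bounded increments |M_{n+1} - M_n| \le 2:
\[
  P\bigl[\, |M_n - M_0| \ge t \,\bigr]
  \;\le\; 2\,\exp\!\Bigl( -\frac{t^2}{8n} \Bigr),
  \qquad t > 0, \; n \ge 1 .
\]
```

This is the standard statement with increment bound c = 2, for which the general exponent t²/(2nc²) becomes t²/(8n).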

3.2. Cone mixing and ergodicity. The main objective of this section is to establish that any stationary probability measure P, defined on the canonical σ-algebra F, which satisfies property (CM)_{φ,α}|l is ergodic with respect to space shifts. We do not claim any originality for this implication, but since we were not able to find an adequate reference, we include its proof here for completeness.
Let us recall that a set E ∈ F is an invariant set if θ_x E = E for every x ∈ Z^d.

Theorem 3.2. Assume that the probability space (Ω, F, P) has the property (CM)_{φ,α}|l and is stationary. Then the probability measure P is ergodic, i.e. for any invariant set E ∈ F we have P[E] ∈ {0, 1}.

Proof. Let E ∈ F be an invariant set. Note that for each ε > 0 there exists a cylinder measurable set A ∈ F such that

P[E Δ A] < ε.   (20)

Since A is a cylinder measurable set, it can be represented in terms of the coordinates at finitely many sites and Borel sets in B(P_d), where B(P_d) stands for the Borel σ-algebra on the compact subset P_d of R^{2d}. Choose now L such that φ(L) < ε.
Plainly, for such L we can find an x ∈ Z^d such that θ_x A and A are L-separated on cones with respect to direction l. We can suppose that P[E] > 0, since otherwise there is nothing to prove. In order to complete the proof, we have to show that P[E] = 1. Taking ε small enough, we can suppose that P[A] > 0. Thus, using the cone mixing property, we get (21). On the other hand, since E is an invariant set, stationarity gives P[E Δ θ_x A] = P[E Δ A] < ε, which implies (22). In turn, from inequality (22) it is clear that P[A ∩ (θ_x A)^c] < 2ε. Now, using inequality (20), one has (23). As a result, we see that P[E] and P[E]² differ by an arbitrarily small amount. Hence, since ε > 0 is arbitrary, we conclude that P[E] = 1.

3.3. Polynomial decay implies polynomial decay in a neighborhood. In this subsection we prove that whenever (PC)_{M,c}|l holds for prescribed positive constants M and c, we can choose 2(d − 1) directions in which polynomial decay still holds, although with a lower degree. More precisely, we can prove the following.
Proof of Proposition 3.3. We will give the proof only for the direction l_{−2}, the other cases being analogous. Throughout the proof we pick α ∈ (0, 1) and define the corresponding angle. Consider the rotation R̂ on R^d whose representation matrix is taken with respect to the basis {R(e_1), R(e_2), . . . , R(e_d)}. It will be useful to define a new rotation, together with the rotated box B̂_L(0). Notice that with these definitions, (24) holds P_0-almost surely. Figure 2 shows the boxes involved in (24).
As a result we obtain the corresponding bound for the rotated box.

Figure 2. The choice of boxes.
Furthermore, a straightforward computation shows that the scale factor λ_3(α) is less than 4/3 whenever α ≤ 1/9. Therefore, taking α ≤ 1/9, one has the required comparison. For technical reasons, we need to introduce an auxiliary box. Specifically, we first set the height parameter h and observe that 4/5 < h < 1. We then introduce the new box, and from this definition we obtain (26). In order to complete the proof, we claim that for large enough U the probability that the walk fails to exit the box B_{l_{−2},U}(0) through ∂_+B_{l_{−2},U}(0) decays polynomially as a function of U. The general strategy will be to stack smaller boxes inside B_{l_{−2},U}(0) and then, using the Markov property along with good environment events, to ensure that the walk exits B_{l_{−2},U}(0) through ∂_+B_{l_{−2},U}(0) with probability larger than 1 − P(U), where P is a polynomial function. We now introduce a sequence of stopping times (T_i)_{i≥1}; for simplicity we write T_1 instead of T_{B_{l_{−2},U}(0)}. In view of (25) and (26), it is clear that, in order to ensure that the random walk exits at time T_1 through ∂_+B̂_{l_{−2},U}(0), it is enough that it exits through the corresponding positive boundaries at four successive times, so that (27) holds. In order to use (27), let i be a positive integer and consider the sequence of lattice sets (F_i)_{i≥1}, defined inductively. We now define, for i ≥ 1, the environment events G_i. By the Markov property applied at time T_3 and the very meaning of G_3, we get that the last expression equals (28). Repeating the above argument, one has the upper bound (29) for the right-most expression of (28). At this point, we would like to obtain, for i ∈ {1, 2, 3}, an upper bound on the probabilities P[(G_i)^c]. To this end, we first observe that Chebyshev's inequality and our hypothesis imply (30). Clearly, we have the estimate |F_1| ≤ (8/3) L^{d−1} (recall (25)).
As a result, we see that (31) holds, and by a similar procedure we obtain the analogous bounds for i = 2, 3. Combining the estimates in (27) and (31) with the assumption M ≥ 6(d − 1), the claimed polynomial decay follows. This ends the proof, choosing the required α as any number in the open interval (0, 1/9).
4. Backtracking of the random walk out of a cone

Here we will provide a uniform control on the probability that a random walk starting from the vertex of a cone stays inside the cone forever. It will be useful to this end to define the cone C(x, l, α) used below, where as before l ∈ S^{d−1}.
In what follows we prove this proposition. In order to ease the reading, we introduce some notation. Let l ∈ S^{d−1} and choose a rotation R on R^d with the property R(e_1) = l. For each x ∈ Z^d, real numbers m > 0 and c > 0, and integer i ≥ 0, we define a box along with its "positive boundary". We also need slabs perpendicular to the direction l, whose positive boundary part is defined analogously. Furthermore, we define recursively a sequence of stopping times (T_i)_{i≥0}. Finally, we need the first time of entrance of the random walk into the half-space R((−∞, 0) × R^{d−1}),

D^l := inf{n ≥ 0 : X_n · l < 0}.
With these notations we can prove the following.

Lemma 4.2. Assume that (P)_{N,2}|l holds. Then, for all m ∈ N and x ∈ {z ∈ Z^d : z · l ≥ 2^m}, the probability that the walk starting from x never enters the half-space {z ∈ Z^d : z · l < 0} is at least y(m), where y(m) does not depend on l and satisfies lim_{m→∞} y(m) = 1.
Proof. From the fact that (P)_{N,2}|l holds, we can (and do) assume that there exists an m > 0 large enough such that for any positive integer i the bound (34) holds. By stationarity, we have the analogous bound for every x ∈ Z^d. Throughout this proof, fix x ∈ {z ∈ Z^d : z · l ≥ 2^m}. For reasons that will become clear during the proof, we need to estimate, for i ≥ 1, the probability (35); with this aim, in view of (34), we have (36). Now, as a preliminary computation for the recursion, we begin by estimating I_1. Using the strong Markov property at time T_0, we then see that (38) holds. Notice that by (34) and Chebyshev's inequality, (39) holds. Plugging (39) into (38), we obtain (40). We can now carry out the general recursive procedure. To this end, we define for i ≥ 1 the quantities J_i; it is straightforward that I_i ≥ J_i. Furthermore, by induction on i ≥ 1, we will establish the claim (42). To prove this, we first define the extended boundary of the pile of boxes at a given step: F_1, and for i ≥ 2, F_{i−1}. Using these notations, we can apply the strong Markov property to (41) at time T_{i−1}. Following the same strategy used to deduce (40), it will be convenient to introduce, for each i ≥ 2, the event G_{i−1} := { . . . , for all y ∈ F_{i−1} }.
Inserting the indicator function of the event G_{i−1} into (41), we get the corresponding bound. By the same kind of estimation as in (38), we obtain (43). We now need an estimate for P[(G_{i−1})^c], which we obtain by repeating the argument given in (39). Let us first remark that (44) holds; indeed, the case l = e_1 gives the maximum value of |F_{i−1}|. Keeping (44) in mind, we get (45). Therefore, combining (45) and (43), we prove claim (42). Iterating (42) backward from a given integer i, we obtain (46), where we have used a shorthand notation for the products involved. The same argument used to derive (40) can be repeated to conclude that (47) holds. Replacing the right-hand side of (47) into (46), together with the fact that I_i ≥ J_i, we arrive at (48).
Now we can finish the proof. First, observe that the limit I_∞ := lim_{i→∞} I_i exists, because it is the limit of a decreasing sequence of real numbers bounded from below. By the condition N > 2(d − 1), writing N = 2(d − 1) + ϑ with ϑ > 0, we get for each m ≥ 1 the required summability for all j ≥ 1. Thus all the products and series in (48) converge, and we obtain the stated lower bound for all m ≥ 1.
Clearly for each m ≥ 1, y(m) does not depend on the direction l and lim m→∞ y(m) = 1, which completes the proof.
With the previous lemma, we now have enough tools to prove Proposition 4.1. Before doing so, we need a definition of a geometric nature.
We will say that a sequence (x_0, . . . , x_n) of lattice points is a path if for every 1 ≤ i ≤ n one has that x_i and x_{i−1} are nearest neighbors. Furthermore, we say that this path is admissible if for every 1 ≤ i ≤ n one has that (x_i − x_{i−1}) · l ≥ 0. Proof of Proposition 4.1. Assume (P)_{M,c}|l, where M > 6(d − 1) + 3, which is the hypothesis of Proposition 4.1. We appeal to Proposition 3.3 and the assumption (P)_{M,c}|l to choose an α > 0 such that for all i ∈ [2, d] the condition (P)_{N,2c}|l_{±i} is satisfied with the corresponding N. From now on, let m be any natural number satisfying (50), where y(m) is the function given in Lemma 4.2. Note that there exists a constant c_3(d) such that for every x ∈ Z^d contained in C(α, l, R(2^m e_1)) with |R(2^m e_1) − x|_1 ≤ 1, there exists an admissible path with at most c_3 2^m lattice points joining 0 and x. We denote this path by (0, y_1, . . . , y_n = x), noting that n ≤ c_3 2^m.
The general idea for finishing the proof is to push the walk up to the site x with the help of uniform ellipticity in direction l, and then to use Lemma 4.2 to ensure that the walk remains inside the cone.

Now notice that (53) holds.
On the other hand, by the definition of the annealed law together with the strong Markov property, we have (54). Using the uniform ellipticity assumption (UE)|l, along with (51) and (52), we can see that (54) is bounded from below as required. By virtue of our choice of m in (50), there exists a constant c_2 depending only on the dimension (recall that m is fixed at this point of the proof) such that the corresponding bound holds. Finally, in view of the inequalities (53) and (54), the conclusion follows.

Polynomial control of regeneration positions
In this section, we define approximate regeneration times as done in [CZ01], which depend on a distance parameter L > 0. We will then show that, assuming (PC)_{M,c}|l for M large enough together with cone mixing, these times define approximate regeneration positions with a finite second moment.

Preliminaries.
We recall the definition of the approximate renewal time given in [CZ01]. Let W := E ∪ {0} (cf. (2)) and endow the space W^N with the canonical σ-algebra W generated by the cylinder sets. For fixed ω ∈ Ω and ε = (ε_0, ε_1, . . .) ∈ W^N, we denote by P_{ω,ε} the law of the Markov chain {X_n} on (Z^d)^N such that X_0 = 0, with transition probabilities defined for z ∈ Z^d and e ∈ E by

P_{ω,ε}[X_{n+1} = z + e | X_n = z] = 1{ε_{n+1} = e} + 1{ε_{n+1} = 0} (ω(z, e) − κ)/(1 − 2dκ),

and we call E_{ω,ε} the corresponding expectation. Define also the product measure Q on W^N which assigns to each coordinate the probabilities Q(ε_1 = e) := κ for e ∈ E, while Q(ε_1 = 0) := 1 − κ|E| = 1 − 2dκ, and denote by E_Q the corresponding expectation. Now let G be the σ-algebra on (Z^d)^N generated by the cylinder sets, while F is the σ-algebra on Ω generated by the cylinder sets. Then we can define, for fixed ω, the measure P̄_{0,ω} := Q ⊗ P_{ω,ε} on the space (W^N × (Z^d)^N, W ⊗ G), and also P̄_0 := P ⊗ Q ⊗ P_{ω,ε} on (Ω × W^N × (Z^d)^N, F ⊗ W ⊗ G), denoting by Ē_{0,ω} and Ē_0 the corresponding expectations. A straightforward computation shows that the law of {X_n} under P̄_{0,ω} coincides with its law under P_{0,ω}, and that its law under P̄_0 coincides with its law under P_0.
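The key algebraic fact behind this coupling is that forcing a step by ε with probability κ per direction, and otherwise sampling from the residual kernel (ω(x, e) − κ)/(1 − 2dκ), reproduces the original kernel ω(x, ·). The sketch below verifies this in d = 2; the specific numbers (KAPPA and the test vector omega_x) are illustrative, not taken from the paper.

```python
import random

E2 = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # the set E for d = 2, so 2d = 4
KAPPA = 0.05                               # illustrative; requires min omega >= 2*KAPPA

def residual(omega_x):
    """Residual kernel (omega(x,e) - kappa) / (1 - 2d*kappa); a probability
    vector whenever omega(x, e) >= kappa for every e."""
    denom = 1 - len(E2) * KAPPA
    return {e: (omega_x[e] - KAPPA) / denom for e in E2}

def coupled_step(omega_x, rng):
    """One step of P_{omega,eps}: draw eps (each e with prob. kappa, 0 with the
    remaining mass); follow eps if nonzero, else sample the residual kernel."""
    u = rng.random()
    for k, e in enumerate(E2):
        if u < (k + 1) * KAPPA:
            return e                        # eps = e forces the step e
    res = residual(omega_x)
    return rng.choices(E2, weights=[res[e] for e in E2])[0]

def marginal(omega_x):
    """Exact marginal of coupled_step: kappa + (1 - 2d*kappa) * residual."""
    res = residual(omega_x)
    return {e: KAPPA + (1 - len(E2) * KAPPA) * res[e] for e in E2}
```

By construction marginal(omega_x) equals omega_x exactly, which is the statement that the law of {X_n} under P̄_{0,ω} coincides with its law under P_{0,ω}.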
We see from (58) that the random variable τ^(L) is the first time n at which the walk has reached a record in direction l at time n − L, then takes L steps in the direction l by means of the action of ε^(L), and finally, after this time n, never exits the cone C(X_n, l, α).
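As a concrete (and necessarily approximate) illustration of this description, the following sketch — our own, with hypothetical names, not the paper's definition (58) — scans a finite path for the first index n such that (i) X_{n−L}·l is a record, (ii) the next L increments all equal l, and (iii) the walk stays inside the cone C(X_n, l, α) for the rest of the path. On a finite path the clause "never exits the cone" can only be checked up to the horizon, so this is only an approximation of τ^(L).

```python
# Hedged sketch of the verbal description of tau^(L); the cone condition is
# only checked up to the end of the finite path. Dimension d = 2, l = (1, 0).
L_PARAM = 3
ALPHA = 0.5
ELL = (1, 0)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def in_cone(y, x, l, alpha):
    """y in C(x, l, alpha): (y - x) . l >= alpha |y - x|_2."""
    d = (y[0] - x[0], y[1] - x[1])
    return dot(d, l) >= alpha * (d[0] ** 2 + d[1] ** 2) ** 0.5

def approx_tau(path, L=L_PARAM, l=ELL, alpha=ALPHA):
    for n in range(L, len(path)):
        xr = path[n - L]
        # (i) record in direction l at time n - L
        if any(dot(path[m], l) >= dot(xr, l) for m in range(n - L)):
            continue
        # (ii) L consecutive steps equal to l (the action of eps^(L))
        steps = [(path[m + 1][0] - path[m][0], path[m + 1][1] - path[m][1])
                 for m in range(n - L, n)]
        if any(s != l for s in steps):
            continue
        # (iii) never exits C(X_n, l, alpha) -- checked only up to the horizon
        if all(in_cone(path[m], path[n], l, alpha) for m in range(n, len(path))):
            return n
    return None
```

For the straight path (0,0), (1,0), (2,0), ... all three clauses hold at the earliest admissible index, so `approx_tau` returns L itself.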
The following lemma is required to show that the approximate renewal times are P̄_0-a.s. finite. It can be proved using a slight variation of the argument given on page 517 of Sznitman [Sz03].
Lemma 5.1. Consider a random walk in a random environment. Let l ∈ S^{d−1}_*, M ≥ d + 1 and c > 0, and assume that (PC)_{M,c}|l is satisfied. Then the random walk is transient in direction l.
Proof. The proof can be obtained by following, for example, the argument presented on page 517 of [Sz03], through the use of the Borel-Cantelli lemma and the fact that for any M > 0 we have P_0[lim sup_{n→∞} X_n · l = ∞] = 1.
We can now prove the following stronger version of Lemma 2.2 of [CZ01].

Proof. Following the arguments in the proof of Lemma 2.2 of [CZ01] (using u instead of l), one has that

From the assumption (CM)_{α,φ}|l, we have φ(L) → 0 as L → ∞. On the other hand, by Lemma 4.1,

Therefore, we can find an L_0 with the property that for all L ≥ L_0 with L ∈ N|l|_1,

Then, via the Borel-Cantelli lemma, one has that P̄_0-almost surely

Now, observe that P̄_0-almost surely

In turn, using (57), which is satisfied in view of Lemma 5.1, it turns out that K := inf{n ≥ 1 : S_n < ∞, R_n = ∞} < ∞ P̄_0-almost surely.
Finally, we can state the following proposition, which gives a control on the second moment of the position of the random walk at the first regeneration time.

Proposition 5.3. Define for x ∈ Z^d and L > 0 the σ-algebra

Assume that 0 < α < min{1/9, 1/(2c+1)} and that (CM)_{α,φ}|l, (UE)|l and (PC)_{M,c}|l hold. Then, there exists a constant c_5 such that

5.2. Preparatory results. We are now in a position to prove the main proposition of this section. Before doing so, we prove a couple of lemmas.
Lemma 5.4. Assume that (CM)_{α,φ}|l holds. Then, for each x ∈ Z^d one has that

Proof. Clearly (63) defines a measure on (Ω, F_{x,L}). We will show that (64) does as well. Indeed, take A ∈ F_{x,L} and note that P_{x,ω}[D = ∞] is σ{ω(y, ·), y ∈ C(x, l, α)}-measurable. Therefore, by the assumption (CM)_{α,φ}|l, one has that

Consequently, (64) defines a measure µ on (Ω, F_{x,L}). Consider the increasing sequence {A_n : n ≥ 1} of F_{x,L}-measurable sets defined by

Observe that for each n ≥ 1 we have that

Therefore, one has that for each n ≥ 1, P[A_n] = 0 and consequently P[A] = 0. Observing that

one can prove the corresponding claim following the same argument used to show (65), but changing the event

The second lemma that will be needed to prove Proposition 5.3 is the following. To state it, define M := sup_{0≤n≤D} (X_n − X_0) · u, D^(0) := inf{n ≥ 0 : X_n ∉ C(0, l, α)}, and for a ∈ R, T^l_a := inf{n ≥ 0 : X_n · l ≥ a} and T̃^l_a := inf{n ≥ 0 : X_n · l > a}.
Lemma 5.5. Let M > 4d + 1 and assume that (PC)_{M,c}|l is satisfied. Then, there exists c_6 = c_6(d) > 0 such that P-almost surely one has that
Proof. To simplify the proof, we will show that the second moment of

is finite. Therefore, it is enough to obtain an appropriate upper bound on the probability

when m is large. Note that

(69)

Using (PC)_{M,c}|l, we get the following upper bound for the first term of the rightmost expression in (69):

As for the second term of the rightmost expression in (69), it will be useful to introduce the set

Now, by the strong Markov property we have the bound

In order to estimate this last conditional probability, we obtain a lower bound for its complement as follows. To simplify the computations which follow, for each x ∈ Z^d we introduce the notation

Now, note that under the assumption (67) we have that

which implies that the boxes B_y and B_z, for all y ∈ F_m and z ∈ ∂_+ B_y, are inside the cone C(0, l, α) (see Figure 3). Therefore, fixing y ∈ F_m, it follows that

To estimate the right-hand side of the above inequality, it will be convenient to introduce the set

and the event

, for all z ∈ F̃_m}.
Using the strong Markov property, we can now bound the right-hand side of inequality (72) from below by

In turn, by means of the polynomial condition and the fact that the boxes B_y and B_z are inside the cone C(0, l, α), we see that (73) is greater than or equal to

where in the first inequality we have used Chebyshev's inequality, in the second the assumption that (PC)_{M,c}|l is satisfied, and in the third the bound |F_{2m}| ≤ (4c)^{d−1} 2^{m(d−1)}. Consequently, inserting the estimate (75) into (74) and combining this with inequality (72), we conclude that

Using the bound (76) in (71), together with the estimate |F_m| ≤ (2c)^{d−1} 2^{m(d−1)}, we see that

(77)

Combining the estimates (77), (70) and (69) with (68), we conclude that

Define E_{P⊗Q} := E E_Q. Furthermore, it will be necessary to define, for each j ≥ 0 and n ≥ L + j, the events D_{j,n} := {ε ∈ W^N : (ε_m, ..., ε_{m+L−1}) ≠ ε^(L) for all j ≤ m ≤ j + n − L + 1}.
The following lemma, whose proof is presented in Appendix A, will be useful in the proof of Proposition 5.3.
Lemma 5.6. There exists a constant c_7 such that for all n ≥ L² one has that

We now present the proof of Proposition 5.3, divided into several steps. For the sake of simplicity, we will write τ instead of τ^(L).
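Although the proof of Lemma 5.6 is deferred to Appendix A, the mechanism behind a bound of this kind can be illustrated numerically. Splitting the first n coordinates of ε into ⌊n/L⌋ disjoint blocks of length L, each block matches a fixed word ε^(L) ∈ E^L with probability κ^L under the product measure Q, so the probability that no window of length L matches is at most (1 − κ^L)^⌊n/L⌋ ≈ exp(−nκ^L/L). The following Monte Carlo sketch is our own illustration with hypothetical names and toy parameters; it compares the empirical no-match probability with the disjoint-block bound.

```python
import random

# Toy parameters: |E| = 4 directions, each drawn with probability KAPPA under Q,
# and eps = 0 with probability 1 - KAPPA |E|.
KAPPA = 0.2
E = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def sample_eps_seq(n, rng):
    """n coordinates of eps under Q: each e in E w.p. KAPPA, 0 otherwise."""
    seq = []
    for _ in range(n):
        u = rng.random()
        seq.append(E[int(u / KAPPA)] if u < KAPPA * len(E) else 0)
    return seq

def no_window_matches(seq, word):
    """True on the event D_{0,n}: no length-L window of seq equals word."""
    L = len(word)
    return all(tuple(seq[m:m + L]) != word for m in range(len(seq) - L + 1))

def mc_no_match_prob(n, word, trials, seed=0):
    """Monte Carlo estimate of Q[D_{0,n}] for the given word."""
    rng = random.Random(seed)
    hits = sum(no_window_matches(sample_eps_seq(n, rng), word)
               for _ in range(trials))
    return hits / trials
```

With word ε^(L) = (e, e) for a single direction e, the estimate stays below the disjoint-block bound (1 − κ^L)^⌊n/L⌋, consistent with the exponential decay asserted in Lemma 5.6.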
Step 0. We first note that

Throughout the subsequent steps of the proof we will estimate the right-hand side of (78).
Step 1. Here we will prove the following estimate, valid for all k ≥ 1 and 0 ≤ k' < k:
Then, for each 0 ≤ k' < k, one has that

where for each x ∈ Z^d, ϑ_x denotes the canonical space shift in Ω, so that ϑ_x ω(y) = ω(x + y), while for each n ≥ 0, θ_n denotes the canonical time shift in the space W^N, so that (θ_n ε)_m = ε_{n+m}. In the first equality we have used the fact that X_{S_{k'}} · u ≥ X_{S_1} · u, in the second equality the Markov property, and in the last equality the independence of the coordinates of ε and the fact that the law of the random walk is the same under P̄_{x,ω} and under E_Q P_{ϑ_x ω, θ_n ε}.
Moreover, by the fact that the first factor inside the expectation of the rightmost expression of (80) is F_{x,L}-measurable, the rightmost expression in (80) is equal to

Applying next Lemma 5.4 to (81), we see that

Next, observe that for k' < k one has that

By Lemma 5.4, we have that

Using this inequality to estimate the last term in (83), we see that

By induction on k we get that

(84)

Combining (84) with (82) we obtain (79).
Step 2. For k ≥ 1 we define

Define also, for k and n ≥ 0, the sets

and

In this step we will show that for all k ≥ 0, inequality (88) holds, whose right-hand side involves the events {t^{(n)}_k < ∞} ∩ B_{n,k} ∩ A_{n,k} conditioned on F_{0,L}. To prove (88), we have to introduce some further notation. Now, note that on the event A_{n,k} ∩ B_{n,k} one has that

Thus, as a consequence of the definition of S_{k+1}, one has that P̄_0-a.s.
Step 3. Here we will derive an upper bound for the two sums appearing on the right-hand side of (88). In fact, we will prove that there is a constant c_8 such that for all k ≥ 1 one has that

and

Note that for all n ≥ 0 one has that

and hence, by induction on n, we get that

Therefore, if we set

where c_9 is a constant depending on l and d, we can see that P̄_0-a.s. on the event {t^{(n)}_k < ∞, A_{n,k}} one has that

Therefore, for all 0 ≤ n ≤ L² − 1 one has that

where in the equality we have applied the Markov property, and in the second inequality the fact that Q is a product measure together with the fact that

Similarly, for all n ≥ L² one has that

where in the second inequality we have used the Markov property, in the third the fact that R_k ≤ t^{(0)}_k, and in the last one Lemma 5.6. Now, by displays (94) and (95), to finish the proof of inequalities (90) and (91) it is enough to prove that there is a constant c_10 such that

using the fact that n ≤ L² − 1 on the left-hand side of inequality (90). To prove (96), the following identity will be useful:

We will now insert this decomposition into the left-hand side of (96) and bound the corresponding expectation of each term. Let us begin with the expectation of the last term. Note that by an argument similar to the one developed in Step 1 we have that

for some constant c_11. Similarly, the expectation of the first term on the right-hand side of display (97) can be bounded using Lemma 5.5, so that

Again, for the expectation of the second term on the right-hand side of display (97), we have that

for some suitable positive constant c_12. For the expectation of the fourth term on the right-hand side of (97), we see by Lemma 5.5 that

Finally, for the expectation of the third term on the right-hand side of (97), we have that

Using the bounds (102), (101), (100), (99) and (98) we obtain inequality (96).
Step 4. Here we will derive, for all k ≥ 1, the inequality

(103)

Note that

By an argument similar to the one used in Step 1 we see that for k' < k one has that

Now, we can use the inclusion (89) in order to obtain (106), where the events A_{n,k} and B_{n,k} are defined in (86) and (87). Using the fact that on the event {t^{(n)}_k < ∞, B_{n,k}, A_{n,k}} one has, P̄_0-a.s.,
we see that the right-hand side of (106) is bounded by the right-hand side of (103), which is what we wanted to prove.
Step 5. Here we will obtain an upper bound for the terms in the first summation in (106). Indeed, note that on the event {R_k ≤ t^{(n)}_k}, by an argument similar to the one used to derive inequality (94), we have that for all 0 ≤ n ≤ L² and 0 ≤ k' ≤ k − 1

Step 6. Here we will obtain an upper bound for the terms in the second summation in (106), showing that for all n ≥ L² and 0 ≤ k' ≤ k − 1,

(107)

Using Lemma 5.6 to estimate Q[D_{0,n}], we conclude the proof of inequality (107).
Step 7. Here we will show that there exist constants c_13 and c_14 such that

(109)

Let us first note that, by an argument similar to the one used to derive the bound in Step 1 (through Lemmas 5.4 and 5.5), we have that

where c_15 := √c_6. Let us now prove (108). Indeed, note that by Step 5 and (110) we have that

for some suitable constant c_13. Let us now prove (109). First note that

for some constant c_16, where in the first inequality we have used Step 6 and in the second we have used inequality (110). Finally, notice that using the fact that n ≤ 2L²⌊n/L²⌋ for n ≥ L², we get that

Using this estimate in (112) we obtain (109).
Step 8. Here we finish the proof of Proposition 5.3 by combining the previous steps. Combining inequality (103) proved in Step 4 with inequalities (108) and (109) proved in Step 7, we see that there is a constant c_17 such that

(113)

Thus, by inequality (90) proved in Step 3, we have that

(114)

for a certain positive constant c_18. On the other hand, combining inequality (91) proved in Step 3 with (113), we see that there exists a constant c_19 such that

Now, note that for some constant c_20 one has that

Substituting (116) and (117) into (115), we see that

for some suitable positive constant c_21. Substituting (115) and (118) into inequality (88) of Step 2, we then conclude that there is a constant c_22 such that

Substituting (119) into (79) of Step 1, we get that

From the fact that Σ_{k=1}^∞ Σ_{k'=0}^{k−1} b_{k'}^{k+1} < ∞, together with (120) and (78) of Step 0, we conclude that Ē_0[(X_τ · u)² | F_{0,L}] ≤ c_23 κ^{−2L} for some constant c_23 > 0, which proves the proposition.

6. Proof of Theorem 1.1

In this section we will prove Theorem 1.1 using Proposition 5.3 of Section 5. First, in Subsection 6.1, we define an approximate regeneration time sequence. In Subsection 6.2, we show through this approximate regeneration time sequence that there exists an approximate asymptotic direction. In Subsection 6.3, we use the approximate asymptotic direction to prove Theorem 1.1.
6.1. Approximate regeneration time sequence. As in [CZ01], we define the approximate regeneration times recursively by

We will drop the dependence on L of τ^(L)_1 when it is convenient, using the notation τ_i instead of τ^(L)_i. Similarly define for k ≥ 2

Let us now recall Lemma 2.3 of [CZ01], stated here under our conditions.

Lemma 6.1. Let l ∈ S^{d−1}_*, α > 0 and φ be such that lim_{r→∞} φ(r) = 0. Consider a random walk in a random environment satisfying the cone mixing assumption with respect to α, l and φ, and uniformly elliptic with respect to l. Assume that L is such that

Then, P̄-a.s. one has that
Let A be a measurable set of the path space; for short we will write 1_A := 1_{{(X_n − X_0)_{n≥0} ∈ A}}. By the strong Markov property, and using that τ_1 < ∞ on an event of full P̄_0-probability, we get

Now, notice that for given t ∈ N, m ∈ N and x ∈ Z^d, we can find a random variable h_{1,t,m,x}, measurable with respect to σ({ω(y, ·) : y · u < x · u − L|u|/|u|_1}, {X_i}_{i<m}), which coincides with h_1 on the event {τ_1 = S_t = m, X_{S_t} = x}; therefore (122) equals

We now work out the following expression:

Observe that, as in the case of h_1, for fixed x and m we consider the probability measure P̄_{θ_x ω, θ_m ε}. Then we can find a function h_{2,j,n,z}, measurable with respect to σ({ω(y, ·) : y · u ≤ z · u − L|u|/|u|_1, y ∈ C(x, l, α)}, {X_i}_{i<j}), which coincides with h_2 on the event {τ_1 = S_n = j, X_{S_n} = z, D = ∞}. Furthermore, note that {D = ∞} depends only on the first j − 1 coordinates of ε (recall that {D = ∞} ∈ H_1); hence we can apply the Markov property to get that the last expression in (123) equals

From now on, we can follow the same sort of argument as in [CZ01] in order to conclude that

Therefore the second induction step is complete.
6.2. Approximate asymptotic direction. We will show that a random walk satisfying the cone mixing assumption, the uniform ellipticity assumption and the non-effective polynomial condition with high enough degree has an approximate asymptotic direction. The exact statement is given below. It will also be shown that the right order of growth of the random variable X_{τ_1} as a function of L is κ^{−L}.
Proposition 6.2. Let l ∈ S^{d−1}_*, let φ be such that lim_{r→∞} φ(r) = 0, and let c > 0, M > 6d and 0 < α < min{1/9, 1/(2c+1)}. Consider a random walk in a random environment satisfying the cone mixing condition with respect to α, l and φ and the uniform ellipticity condition with respect to l. Assume that (PC)_{M,c}|l is satisfied. Then, there exists a sequence η_L with lim_{L→∞} η_L = 0 such that P̄_0-a.s.
We first prove inequality (125) of Proposition 6.2. We will follow the argument presented in the proof of Lemma 3.3 of [CZ01]. For each integer i ≥ 1 define X̄_i := κ^L (X_{τ_i} − X_{τ_{i−1}}), with the convention τ_0 = 0. Using Lemma 6.1 and Lemma 3.2 of [CZ01], we can enlarge the probability space supporting the sequence {X̄_i : i ≥ 1} so that the following properties hold: there exists a sequence {Z_i : i ≥ 2} of random variables such that for all i ≥ 2 one has that

Furthermore, for each i ≥ 2, ∆_i is independent of Z_i and of G_i := σ{X̄_j : j ≤ i − 1}.
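The coupling just described can be mimicked numerically. In the sketch below — an illustration under our own toy distributions, not the paper's construction — the coupled variable equals an i.i.d. variable Z_i except on the rare event {∆_i = 1}, which has probability φ(L); the empirical mean then deviates from E[Z] by at most a statistical error plus a term of order φ(L), which is the mechanism exploited in the proof.

```python
import random

PHI_L = 0.01   # plays the role of phi(L); small when L is large
N = 200_000

def sample_coupled(n, seed=0):
    """Toy stand-in for the decomposition in (128): the coupled variable equals
    the i.i.d. Z_i off the rare event {Delta_i = 1}."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        z = rng.uniform(0.5, 1.5)       # i.i.d. Z_i with mean 1
        delta = rng.random() < PHI_L    # Bernoulli(phi(L)), independent of Z_i
        y = rng.uniform(-2.0, 2.0)      # bounded contamination on {Delta_i = 1}
        xs.append(y if delta else z)
    return xs

def empirical_mean(n, seed=0):
    xs = sample_coupled(n, seed)
    return sum(xs) / len(xs)
```

With these toy laws the long-run average lands within a deviation of order φ(L) of E[Z] = 1, illustrating why the law of large numbers for {X̄_i} only identifies the limit up to an error η_L that vanishes as L → ∞.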
We will call P the common probability distribution of the sequences {X̄_i : i ≥ 2}, {X̃_i : i ≥ 2}, {Z_i : i ≥ 2} and {∆_i : i ≥ 2}, and E the corresponding expectation. From (128) note that

Let us now examine the behavior as n → ∞ of each of the four terms on the left-hand side of (129). Clearly, the first term tends to 0 as n → ∞. For the second term, note that on the event {D = ∞} one has |X̄_1|²_2 ≤ c_24 (X̄_1 · l)² for some constant c_24. Therefore, by Proposition 5.3 and the fact that X̄_2 has the same distribution as X̄_1 under P̄_0[· | D = ∞], we see that

E[|X̄_2|²_2] = Ē_0[|X̄_1|²_2 | D = ∞] ≤ c_24 Ē_0[(X̄_1 · l)² | D = ∞] < c_25, (130)

for a suitable constant c_25. Hence, by the strong law of large numbers, we actually have that P-a.s.
where we have used Proposition 5.3 and Lemma 5.4 in the second inequality. Hence, by (134), we see that the martingale {M^j_n : n ≥ 1} converges P-a.s. to a random variable, for each j ∈ {1, 2, ..., d}. Thus, by Kronecker's lemma applied to each component j ∈ {1, 2, ..., d}, we conclude that P-a.s.
Now, note from (135) that there is a constant c_29 such that

Therefore, P-a.s. we have that

Let us now prove inequality (127). By an argument similar to the one presented in [CZ01] to show that the random variable τ_1 has a lower bound of order κ^{−L}, we can show that X_{τ_1} · l is bounded from below by the sum S := Σ_{i=1}^N U_i, where {U_i : i ≥ 1} are i.i.d. random variables taking values in {1, 2, ..., L} with law P[U_i = n] = κ^n for 1 ≤ n ≤ L, and N := min{i ≥ 1 : U_i = L}. It is then clear that

for some constant c_31.

6.3. Proof of Theorem 1.1. It will be enough to prove that there is a constant c_32 such that for all L ≥ 1 one has that

lim sup_{n→∞} | X_n/|X_n|_2 − λ_L/|λ_L|_2 |_2 < c_32 η_L/λ_L. (139)
Indeed, by compactness, we can choose a sequence {L_m : m ≥ 1} such that

v̂ := lim_{m→∞} λ_{L_m}/|λ_{L_m}|_2

exists. On the other hand, by inequality (127) of Proposition 6.2, we know that lim_{m→∞} η_{L_m}/λ_{L_m} = 0. Now note that, by the triangle inequality and (139), for every m ≥ 1 one has that

Taking the limit m → ∞ in (141) and using (140), we prove Theorem 1.1. Let us hence prove inequality (139). Choose a nondecreasing sequence {k_n : n ≥ 1}, P-a.s. tending to +∞, such that for all n ≥ 1 one has τ_{k_n} ≤ n < τ_{k_n+1}.
Notice that

On the other hand, we assume for the time being that, for large enough L, we have proved that

lim sup_{n→∞} |X_n − X_{τ_{k_n}}|_2 / k_n = 0. (143)
Combining (144) and (146) with (142), we get (139). Thus, it is enough to prove the claim (143). To this end, note that

|X_n − X_{τ_{k_n}}|_2 / k_n ≤ sup_{j≥0} |X_{(τ_{k_n}+j)∧τ_{k_n+1}} − X_{τ_{k_n}}|_2 / k_n. (147)

We now consider the sequence X̃_k := κ^L sup_{j≥0} |X_{(τ_k+j)∧τ_{k+1}} − X_{τ_k}|, k ≥ 1. A coupling decomposition as in the proof of Proposition 6.2 yields, in an enlarged probability space P if necessary, the existence of two i.i.d. sequences (X̂_k)_{k≥1}, (∆_k)_{k≥1} and a sequence (Y_k)_{k≥1} such that P supports the following:

• For k ≥ 1, the common law of X̂_k is the same as that of X̃_1 under P̄[· | D = ∞], and ∆_k is a Bernoulli random variable with values in {0, 1}, independent of G_k, with P[∆_k = 1] = φ̃(L).

• P-almost surely, for k ≥ 1 we have the decomposition:

Furthermore, arguments quite similar to the ones given in the proof of Proposition 6.2 allow us to conclude that

where Ȳ_j := E[Y_j | G_j]. Therefore, using the inequality

we conclude that

This finishes the proof.
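The order κ^{−L} in the lower bound on X_{τ_1} · l used in the proof of (127) above can be checked numerically. The sketch below is our own toy version: we take U_i = min(G_i, L) with G_i geometric of parameter 1 − κ (one way of normalizing the law quoted above), and N the first index with U_i = L. Wald's identity then gives E[S] = E[U]/P[U = L] = (1 − κ^L)/((1 − κ)κ^{L−1}), which is of order κ^{−L}.

```python
import random

KAPPA = 0.5
L_PARAM = 10

def sample_S(rng):
    """S = U_1 + ... + U_N with U_i = min(G_i, L_PARAM), G_i geometric(1 - KAPPA),
    and N the first i with U_i = L_PARAM (a toy stand-in for the bound above)."""
    total = 0
    while True:
        u = 1
        while u < L_PARAM and rng.random() < KAPPA:
            u += 1
        total += u
        if u == L_PARAM:
            return total

def exact_mean():
    """E[S] = E[U] / P[U = L] via Wald's identity, with
    E[U] = (1 - KAPPA^L)/(1 - KAPPA) and P[U = L] = KAPPA^(L-1)."""
    return (1 - KAPPA ** L_PARAM) / ((1 - KAPPA) * KAPPA ** (L_PARAM - 1))
```

For κ = 1/2 and L = 10 the exact mean is 2^10 − 1 = 1023, and a Monte Carlo average of `sample_S` reproduces it, illustrating the growth of order κ^{−L} claimed for X_{τ_1}.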