Random walk driven by the simple exclusion process

We prove a strong law of large numbers and an annealed invariance principle for a random walk in a one-dimensional dynamic random environment evolving as the simple exclusion process with jump parameter $\gamma$. First, we establish that if the asymptotic velocity of the walker is non-zero in the limiting case "$\gamma = \infty$", where the environment gets fully refreshed between each step of the walker, then, for $\gamma$ large enough, the walker still has a non-zero asymptotic velocity in the same direction. Second, we establish that if the walker is transient in the limiting case $\gamma = 0$, then, for $\gamma$ small enough but positive, the walker has a non-zero asymptotic velocity in the direction of the transience. These two limiting velocities can sometimes be of opposite sign. In all cases, we show that the fluctuations are normal.


Introduction
The question of the evolution of a random walk in a disordered environment has attracted a lot of attention in both the mathematical and physical communities over the past few decades. The first studies were concerned with static random environments. In this set-up, anomalous slowdowns are expected with respect to the homogeneous case, as the environment may create traps where the walker can be blocked for long times. This effect is most obvious in dimension 1, where it is by now well understood (see [19] for background). In dynamical random environments, by contrast, the transition probabilities of the walker evolve with time too. If the environment has good space-time mixing properties, the trapping phenomenon is expected to disappear, and one may hope that the evolution of the walker will more closely resemble that in a homogeneous medium. This case has recently been the object of intense research; see for example [12,16] and references therein, as well as [1] for an overview and further references.
However, examples of dynamical environments with slow relaxation times occur naturally. Indeed, in the presence of a macroscopically conserved quantity, the environment may evolve diffusively. Time correlations then only decay as t^{−d/2} in dimension d. Such environments constitute an intermediate class of models that is still far from being well understood at the present time [1,4].
In this paper, we are interested in the asymptotic behavior as t → ∞ of X_t/t, where X_t denotes the position at time t of a walker driven by the one-dimensional simple exclusion process at equilibrium with density 0 < ρ < 1. The walker evolves in discrete time: if he sits on a particle at the moment of jumping, he moves to the right with probability α and to the left with probability 1 − α for some 0 < α < 1, while if he sits on a vacant site, these probabilities become respectively β and 1 − β for some 0 < β < 1 (see below for a more rigorous description). Of particular interest is the case α < 1/2 < β or vice versa, as the possibility of trapping mechanisms then reappears.
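As an illustration of this dynamics, here is a minimal toy simulation (our own sketch, with hypothetical parameter values; the swap-based update is a discrete-time caricature of the rate-γ exclusion dynamics, and the ring geometry replaces Z):

```python
import random

def simulate(rho=0.5, alpha=0.3, beta=0.8, gamma=2.0, steps=200, N=512, seed=0):
    """Walker driven by a (caricatured) simple exclusion process on a ring."""
    rng = random.Random(seed)
    # Equilibrium-like initial condition: i.i.d. Bernoulli(rho) occupations
    eta = [1 if rng.random() < rho else 0 for _ in range(N)]
    n0 = sum(eta)
    x = 0
    for _ in range(steps):
        # One unit of time of the environment: ~ gamma * N attempted
        # nearest-neighbour swaps (a swap is a no-op unless exactly one
        # of the two sites is occupied, which is the exclusion rule)
        for _ in range(int(gamma * N)):
            i = rng.randrange(N)
            j = (i + 1) % N
            eta[i], eta[j] = eta[j], eta[i]
        # One step of the walker: drift alpha on a particle, beta on a vacant site
        p_right = alpha if eta[x % N] == 1 else beta
        x += 1 if rng.random() < p_right else -1
    return x, n0, sum(eta)
```

With these (hypothetical) values, the homogenized probability is ρα + (1 − ρ)β = 0.55, so for large γ the empirical velocity of long runs should be positive; note that the number of particles is exactly conserved by the environment dynamics.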
Let γ be the jump rate of the particles of the exclusion dynamics. Two regimes, depending on the value of γ, are considered in this article. We first deal with the fluid regime γ ≫ 1. In the formal limit "γ = ∞", the walker evolves as in a homogeneous medium: he jumps to the right with the homogenized probability p̂ = ρα + (1 − ρ)β and to the left with probability 1 − p̂. We assume p̂ ≠ 1/2, and say p̂ > 1/2 to fix ideas. The walker is thus drifted to the right in the limiting case "γ = ∞" and the fluctuations are normal. Let us now take 1 ≪ γ < ∞. The exclusion process mixes the particles in boxes of size γ^{1/2} in between each step of the walker. Therefore, most of the time, the walk will behave in the same way as the limiting homogeneous walk. However, the walker will eventually visit some regions where the density of particles is anomalously high or low, and will then behave differently. But regions of size l with an anomalous density of particles are typically at distance e^{cl} from the origin for some c > 0 under the equilibrium measure, while only a time of order l^2 is needed for the dynamics to disaggregate them.
Therefore, these regions do not act as efficient barriers to the evolution of the walker, which thus exhibits a positive asymptotic velocity. This is made precise in Theorem 1, where it is also shown that the fluctuations are normal.
We next deal with the quasi-static regime γ ≪ 1. The dynamics is then no longer dominated by the homogenized probability p̂. Instead of the condition p̂ ≠ 1/2, we now assume that, in the limiting static case γ = 0, the walk is transient, say to the right (see (1.9) below). Under this hypothesis, at γ = 0, it is known that X_t ∼ t^δ as t → ∞ for some 0 < δ ≤ 1, and that the sub-ballistic behavior observed in the cases where δ < 1 is due to the fluctuations of the environment. Let us now take γ > 0. We take it however small enough so that, over a time t = τγ^{−1} for some small τ > 0, during which the environment has not evolved significantly, the walk moves to the right by an amount of order t^δ, if started in an environment that looks locally typical with respect to the Gibbs measure. As long as the walk evolves in such configurations of the environment, it is thus drifted to the right. Nevertheless, when coming into a region where the density of particles is anomalous, its progression may be much slowed down. Since however γ is positive, we may just reuse the argument developed for large γ and show that traps are irrelevant: large traps (of size l) disappear on much shorter time scales (γ^{−1} l^2) than the time needed for the walker to encounter the next trap of that size (a time at least of order l^K for some K ≫ 1, see Section 4). We conclude that the walker has a positive velocity to the right, as stated in Theorem 2. Again, we also show that the fluctuations are normal.
From these two results, we conclude that the value of the limiting velocity v(γ) may change drastically between the large and small γ regimes. The following case seems to be of particular interest: for some values of ρ, α and β, the walk is transient to the left in a static environment while the homogenized drift 2p̂ − 1 is positive (the asymptotic velocity in the static environment is then necessarily zero). It therefore holds that v(0) = 0, then v(γ) < 0 for small enough γ > 0, and finally v(γ) > 0 for large enough γ > 0.
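A concrete parameter choice exhibiting this sign change can be checked numerically (the values below are our own illustration, not taken from the paper): transience to the left at γ = 0 corresponds to E[ln((1 − ω)/ω)] > 0 under the equilibrium measure, while the fluid regime is governed by the sign of 2p̂ − 1.

```python
import math

# Illustrative (not from the paper): drift probabilities on occupied / vacant sites
alpha, beta, rho = 0.01, 0.6, 0.12

# Homogenized probability governing the fluid regime (gamma large)
p_hat = rho * alpha + (1 - rho) * beta

# Classical criterion for the static case (gamma = 0): the walk is
# transient to the LEFT when E[ln((1 - omega)/omega)] > 0
log_criterion = (rho * math.log((1 - alpha) / alpha)
                 + (1 - rho) * math.log((1 - beta) / beta))

assert p_hat > 0.5          # drifted to the right in the fluid regime
assert log_criterion > 0.0  # transient to the left in the static regime
```

Here p̂ ≈ 0.529 > 1/2 while the log-criterion ≈ 0.19 > 0, so this (hypothetical) triple (ρ, α, β) falls in the regime where the two limiting behaviors point in opposite directions.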
We did not show that v(γ) tends to v(0) as γ goes to 0, nor to 2p̂ − 1 as γ goes to ∞, but we expect both limits to hold. While the latter could be shown using the techniques of the present article, at the cost of slightly more involved estimates (controlling both excess and shortage of particles), more care would be needed to check the continuity at 0. Theorems 1 and 2 are shown in essentially the same way. In a first step, by means of a multi-scale analysis, we show recursively that the walker can pass larger and larger traps, for environments that are allowed to become locally more and more atypical. The introduction of renormalization techniques to study random walks in random environments seems to go back to [7] and [9], from which our strategy draws its inspiration. A similar method is used in [14] (see also [11] and [13] for a slightly different approach). This first step allows us to conclude that the walker is drifted almost surely to the left or to the right.
The law of large numbers and the invariance principle are deduced in a second step. As the environment evolves only on diffusive time scales, a ballistic walker discovers fresh randomness most of the time. From this observation, it is possible to build up a renewal structure, allowing us to cut the full trajectory into pieces that are mutually independent. This idea was first introduced in [18] for the case of a static i.i.d. environment, and further adapted in [10] to deal with the case of static environments with good mixing properties. It was exploited in [2] to obtain a law of large numbers for dynamic environments with good mixing rates, and then in [8,14]. As this method is rather delicate and model dependent, we had to perform specific constructions and estimates.
Let us now discuss some existing works that are directly related to our results. A rather comprehensive study of the model considered here was initiated in [4], where several conjectures, based on numerical computations and some heuristics, were presented. Partial mathematical results have been obtained in [13] and [3]. Our Theorem 2 answers negatively one of the "key open questions" asked in (3.8) of [4]: is γ_1(ρ) > 0? Moreover, two-dimensional analogs of our model have been studied in the physics literature.
In [5], the differential mobility of a tagged particle driven by an external field is shown, by means of numerical computations, to undergo a transition from a quasi-static to a fluid regime as γ is increased.
These two regimes are somehow analogous to the ones described by our Theorem 2 and Theorem 1 respectively. In addition, the same model was studied in [6] as a way to probe the glassy transition in liquids, and the possibility of anomalous fluctuations was alluded to. Our results suggest that this is not the case.
Finally, in a recent work [14], a law of large numbers and an invariance principle for the fluctuations of a walker were obtained in a set-up close to ours. The authors consider indeed a random walk driven by a set of non-interacting particles at equilibrium with density 0 < ρ < +∞. The transition probability of the walker differs according to whether he sits on a vacant site or on a site occupied by at least one particle. Their results then hold for all ρ large enough, assuming that the limiting velocity is non-zero in the limiting case "ρ = ∞". This is thus a situation analogous to that described by our Theorem 1, whose proof is moreover based on an architecture similar to theirs. We stress nevertheless that, even when restricting attention to Theorem 1, we developed our strategy independently, and that a closer look at the details shows that many steps cannot be simply adapted.

Model
An environment is a function (ω(t, x))_{t≥0, x∈Z} with values in the interval (0, 1), where the time index t is continuous and the space index x ranges over Z. Given such a function ω, we define for any space-time point (n, x) the Markovian (discrete-time) law P^ω_{n,x} by P^ω_{n,x}(X_0 = x) = 1 and, for all k ≥ 0, z ∈ Z, (1.1). Given a discrete-time process (X_n)_{n≥0}, we also define, with a slight abuse of notation, the corresponding continuous-time process. Consider the simple exclusion process on Z, where γ > 0 is the jump rate of the particles (the intensity of the process), and where, for any x, y ∈ Z, σ^{x,y}η denotes the configuration obtained from η by exchanging the states of sites x and y. If η_x(t) = 1, site x is said to be occupied by a particle at time t, while it is said to be vacant if η_x(t) = 0.
Let 0 ≤ α, β ≤ 1. Given a realization of η, we define an environment ω as a function of η by (1.5). We define P, a law on the space of environments, as the push-forward of P_ρ through this function. To fix ideas, we will from now on assume that the drift corresponding to empty sites is larger than the drift corresponding to occupied sites, that is, α ≤ β. We define for any space-time point (n, x) the annealed law P_{n,x} by averaging P^ω_{n,x} over the environment law P. Note that the law of (X_k − x)_{k≥0} under P_{n,x} is the same as that of (X_k)_{k≥0} under P_{0,0}, which we will often denote simply by P_0.
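In formulas, the transition law (1.1) and the environment (1.5) can be written as follows (our reconstruction from the prose above, to be checked against the original displays):

```latex
% (1.1): at each step, the walker reads the environment at its current
% space-time position
P^{\omega}_{n,x}\bigl(X_{k+1} = z+1 \mid X_k = z\bigr) = \omega(n+k, z),
\qquad
P^{\omega}_{n,x}\bigl(X_{k+1} = z-1 \mid X_k = z\bigr) = 1 - \omega(n+k, z).

% (1.5): drift alpha on an occupied site, beta on a vacant one
\omega(t,x) \;=\; \alpha\,\eta_x(t) + \beta\,\bigl(1-\eta_x(t)\bigr).
```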

Results
We assume that the environment is uniformly elliptic, see (1.7). For our first result, we assume that the walker would be drifted to the right if, at each step, the environment were entirely refreshed according to the equilibrium measure (limiting case "γ = ∞").
We thus assume that ρ, α and β are such that (1.8) holds, where E_0 is the expectation with respect to P_0 and E the expectation with respect to P. We show that the drift to the right will still be observed if, instead of refreshing the environment at each step of the walker, it evolves according to the dynamics generated by (1.4) with initial condition distributed as P_ρ, provided that γ is taken large enough once the parameters ρ, α and β have been fixed. The following theorem may therefore be seen as a perturbative result around the trivial case "γ = ∞":

Theorem 1. Assume that the uniform ellipticity condition (1.7) and the drift condition (1.8) hold. There exists v* > 0 so that, for γ large enough,

the annealed central limit theorem holds, that is, under P_0,
where (B_t)_{t≥0} is a non-degenerate Brownian motion and the convergence in law holds in the Skorokhod topology.
For our second result, we assume that the walker is transient to the right in a static environment (γ = 0), that is, the parameters ρ, α and β are such that (1.9) holds. For background about random walks in static random environments, see e.g. [19]. We show that, for strictly positive and small enough γ, the walker has a positive velocity:

Theorem 2. Assume that condition (1.7) and the condition (1.9) for transience to the right in a static environment hold. Then, for γ small enough,
2. the annealed central limit theorem holds, that is, under P_0,
where (B_t)_{t≥0} is a non-degenerate Brownian motion and the convergence in law holds in the Skorokhod topology.
We stress that in the case where (1.8) holds while the left hand side of (1.9) is strictly positive, the velocity can be either positive or negative according to the value of γ. We can thus exclude that either of our two theorems could be valid for all strictly positive γ.

Outline of the paper
The rest of the article is organized in four sections. In Section 2, we control the time of dissipation of zones with a high density of particles. In Section 3, we use these results on the environment together with a renormalization procedure to derive that, if assumption (1.8) holds and if γ is large enough, the walk is ballistic to the right. In Section 4, the same ideas are used to prove that the walk is ballistic to the right if assumption (1.9) holds and if γ > 0 is small enough. Finally, in Section 5, we build a renewal structure to sharpen the results of Sections 3 and 4, i.e. to derive the law of large numbers and the annealed invariance principle stated in Theorems 1 and 2.

Dissipation of traps
The walker can be slowed down in those places where the concentration of particles is too high with respect to the expected density ρ. These locally anomalous configurations of the environment are called traps.
Here we make precise the idea that traps disappear on diffusive time scales. Concretely, we establish that, if in a box of size L around a point x the density of particles is very close to the density ρ for a given initial profile η ∈ {0, 1}^Z, then, after waiting a time bigger than L^2, the density in smaller boxes around x becomes close to ρ as well, with high probability with respect to the evolution of the process.
The section is divided into three parts: we first state our results, then prove some technical lemmas, and finally give the proofs of our propositions. The technical estimates in Section 2.2 are very close to some results obtained in [13] (see Lemma 5.3 there).
Before starting, let us introduce some extra notation, to be used mainly in Sections 2–4. We define for η ∈ {0, 1}^Z the law P^η of the simple exclusion process defined by (1.4) with deterministic initial condition η. We then define the corresponding law of the walker, where ω is the environment built from (η(t, x))_{t≥0, x∈Z} (see (1.5)).

Statement of the results
Given x ∈ Z, L ∈ N and η ∈ N^Z, let η̄_{x,L} := (2L + 1)^{−1} Σ_{y ∈ B(x,L)} η_y denote the empirical density of η in the box B(x, L). Let (ǫ_L)_{L≥0} be some decreasing sequence of numbers in ]0, 1]. The numbers ǫ_L will serve to control the difference between the density ρ and the empirical density in a box of size L. Given η ∈ {0, 1}^Z and L ∈ N, we define the set of good sites G(η, L) ⊂ Z as follows: we say that x ∈ G(η, L) if (2.3) holds. The main result of this section is contained in the following proposition, where we use the assumption γt ≥ L^3 instead of the more natural assumption γt ≥ CL^2 for some large constant C, in order to avoid introducing too many constants.

Proposition 1.
There exist constants C < +∞ and c > 0 such that, given an initial profile η ∈ {0, 1}^Z, and given x ∈ Z, t ≥ 0 and J, L ∈ N, the conditions
imply that
This proposition can only be applied if, given a profile η, one waits a time γt ≥ L^3. The next proposition furnishes a control that holds for short times too.

Proposition 2. There exists a constant c > 0 such that, given an initial profile η ∈ {0, 1}^Z, and given ǫ > 0, t ≥ 0, x ∈ Z and L ∈ N, the following holds: if y ∈ G(η, L) for all y ∈ B(x, L) and ǫ ≥ ǫ_L, then
Note that this last proposition is independent of γ.

Some lemmas : heat equation properties and concentration
We let η̄(t) be the mean value of the field after a time t, starting from the initial field η, where we use E^η for the expectation with respect to P^η. The mean evolution η̄(t) solves the discrete heat equation ∂_t η̄ = γ∆η̄ with initial condition η̄(0) = η. The operator ∆ appearing here is the discrete Laplacian, defined by
We find it convenient to introduce three closely related kernels. Let first p : Z × R_+ → R be the heat kernel associated with the Laplacian γ∆: p solves the initial value problem
So, for x ∈ Z and t ≥ 0, p(x, t) represents the probability that a free particle jumping with rate γ and starting at the origin sits on site x at time t. Given L ∈ N, let then p_L and p̄_L be given by
for (x, t) ∈ Z × R_+. The quantity p_L(x, t) represents the probability that a free particle starting from the origin lies in the box of size L centered at x at time t.
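The diffusive mechanism used throughout this section can be checked on a small numerical sketch (our own illustration, with ∆f(x) = f(x+1) + f(x−1) − 2f(x) and an explicit Euler discretization): mass is conserved and the variance of the kernel grows like 2γt, i.e. the kernel spreads over about (γt)^{1/2} sites.

```python
def heat_step(p, gamma, dt):
    """One explicit-Euler step of dp/dt = gamma * Delta p on a finite window
    (Delta f(x) = f(x+1) + f(x-1) - 2 f(x); zero mass outside the window)."""
    n = len(p)
    q = p[:]
    for x in range(n):
        left = p[x - 1] if x > 0 else 0.0
        right = p[x + 1] if x < n - 1 else 0.0
        q[x] = p[x] + gamma * dt * (left + right - 2.0 * p[x])
    return q

def evolve(gamma=1.0, t=4.0, dt=0.01, half_width=60):
    """Evolve a delta mass at the origin; return (total mass, variance)."""
    n = 2 * half_width + 1
    p = [0.0] * n
    p[half_width] = 1.0  # delta mass at the origin
    for _ in range(round(t / dt)):
        p = heat_step(p, gamma, dt)
    mass = sum(p)
    var = sum((x - half_width) ** 2 * px for x, px in enumerate(p))
    return mass, var
```

With γ = 1 and t = 4, the variance comes out very close to 2γt = 8, while the mass stays equal to 1 up to boundary leakage (negligible here, the window being much wider than (γt)^{1/2}).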
Our first lemma furnishes a concentration bound: with high probability, the empirical density of η(t) in a box of size L does not deviate too much from the empirical density of the mean evolution η(t) in the same box. This result does not depend on γ.
Lemma 1. There exists a constant c > 0 such that, given η ∈ {0, 1}^Z, L ∈ N, x ∈ Z, t ≥ 0 and a ≥ 0,
Proof. Given δ ≥ 0, it follows from Markov's inequality that
We compute
as is seen from the definition (2.6) of p_L.
Let us then work out the third factor. We use Liggett's inequality to get rid of the exclusion constraint (see [13]). Let us define the process θ that represents the collective motion of independent particles evolving on Z. So let θ = (θ(t))_{t≥0} be the process on N^Z defined by the generator
with θ^{x,y} = (. . . , θ_x − 1, . . . , θ_y + 1, . . . ) if θ = (. . . , θ_x, . . . , θ_y, . . . ). We assume that θ(0) = η. It is convenient to adopt the following interpretation: we say that there are n ∈ N particles at x ∈ Z at time t ≥ 0 if and only if θ_x(t) = n. Let us label all the particles, in an arbitrary way, by k ∈ N^*. Let X_k(t) be their positions at time t. At any time t ≥ 0, the variables (X_k(t))_{k≥1} are independent. By Liggett's inequality, and recalling the definition (2.6) of p_L, we get
where the last expression follows from (2.8).

This inequality is optimized for
as η̄(t)_{x,L} ≤ 1 for the simple exclusion process. If instead δ = 2L + 1, we obtain
Because in this case 2L + 1 ≤ a(2L + 1)/(2C η̄(t)_{x,L}), this implies
The lemma is obvious for a > 1 and, for a ≤ 1, (2.10) is always larger than (2.11) as soon as C ≥ 1/2.
This gives the claim.
The next two lemmas furnish a control on the solution of the heat equation. The first of these makes precise the fact that, after a time t, the solution at x is well approximated by the empirical density of the initial profile in a box of size (γt)^{1/2} around x.
Lemma 2. There exists a constant C < +∞ such that, given η ∈ {0, 1}^Z, M, L ∈ N, x ∈ Z and t ≥ 0,
Proof. Let us start by quoting a property of the heat equation. Let f : Z → R be such that f(x) = f(−x) for all x ∈ Z, and such that f(x) − f(x + 1) ≥ 0 for all x ≥ 0. For x ∈ Z and t ≥ 0, let also f(x, t) be the solution of the initial value problem f(x, 0) = f(x) and ∂_t f(x, t) = γ∆f(x, t). It is checked that, for all t > 0, the function f(·, t) still enjoys these two properties. This has the following consequence. It is seen from the definition (2.6) of p_M that p_M(x, 0) satisfies these hypotheses. Let us now write for simplicity
As shown in (2.8), it holds that η̄(t)_{x,M} = Σ_{z∈Z} p_M(z − x, t) η_z, so that, using (2.12), we find
We see that we already obtain the result if L = 0, so that we can further assume L ≥ 1. Inserting these two estimates in (2.13), we find
A direct computation in Fourier space shows that there exists a constant C < +∞ such that, for any
Therefore
Inserting this estimate in (2.14) furnishes the claim.
Our last lemma gives a bound that does not require waiting a time γt ≥ L^3 to hold:
Proof. For L = 0, the lemma follows from Lemma 2. We assume L ≥ 1. To simplify notation, let us write R := sup{ η̄_{y,r} : r ≥ L, y ∈ B(x, L) }.
We decompose
and since we assume y ∈ B(x, L), our hypotheses imply η̄_{y,k} ≤ R, so that
If z ∈ B(0, L − 1), we have x + z ∈ B(x, L), and so, by hypothesis,
We thus conclude that
which is the claim.

Proof of Propositions 1 and 2
Proof of Proposition 1. We have
By Lemma 2, and since by hypothesis γt ≥ L^3 and x ∈ G(η, L), we find for all J′ ∈ N that
Therefore, thanks to the hypothesis that L(ǫ_J − ǫ_L) is large enough, we get
This last inequality allows us to use the concentration estimate stated in Lemma 1:
This is the claim.
Proof of Proposition 2. We write
It follows from the hypotheses and from Lemma 3 that

Drift for large γ
We here prove the following theorem.

Theorem 3. Assume that the drift to the right condition (1.8) holds. Then there exists v* > 0 so that,
We observe that the uniform ellipticity condition (1.7) is not required. The work is divided into three parts. We first prove that, given an arbitrarily large time T, we can find γ large enough so that the walker is drifted to the right, for a set of initial conditions on the environment that has large probability with respect to the Gibbs state (see Lemma 4 below). Next, thanks to a renormalization procedure, we show that this preliminary result can be extended to arbitrarily long times, and to larger and larger sets of initial conditions (see Proposition 3 below). Our scheme is ultimately inspired by a method developed in [7] (see also [9] for a dynamical uniformly mixing environment). In the case at hand, however, we get away with a much simpler iteration than what was needed there. Theorem 3 is finally established thanks to a Borel–Cantelli argument.
For the whole section we assume that α, β and ρ are fixed such that the drift condition (1.8) holds.
Let us still fix some parameters. First, we assume from now on that the sequence (ǫ_L)_{L≥0}, introduced in Section 2 to control the excess of density in boxes of size L, is explicitly given by (3.1). This choice is motivated by the following facts. In view of the definition (2.3) of good sites, we see that ǫ_L ≫ 1/L^{1/2} (that is, much larger than the standard deviation of the empirical density under the equilibrium measure) is needed to ensure that a given site is typically good for large L. Moreover, we will need to apply Proposition 1 with J and L going to infinity in a certain ratio (L ∼ J^2 actually) and we want to keep ǫ_J − ǫ_L as large as possible. The sequence (ǫ_L)_{L≥0} defined by (3.1) converges very slowly to zero, thus satisfying these two requirements.
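These two requirements can be illustrated with a hypothetical slowly vanishing choice such as ǫ_L = 1/ln L (our own stand-in; the paper's actual (3.1) may differ):

```python
import math

def eps(L):
    # Hypothetical slowly vanishing sequence; the actual (3.1) may differ.
    return 1.0 / math.log(L)

J = 10_000
L = J * J  # the ratio L ~ J^2 used when applying Proposition 1

# (a) eps_L dominates the L^{-1/2} equilibrium fluctuations of the density
assert eps(L) * math.sqrt(L) > 100

# (b) eps_J - eps_L remains comparable to eps_J itself: for this choice,
# ln(J^2) = 2 ln(J) gives eps_J - eps_{J^2} = eps_J / 2 exactly
gap = eps(J) - eps(L)
assert abs(gap - eps(J) / 2) < 1e-12
```

So along the renormalization L ∼ J^2, a sequence of this type loses only half of its value at each scale change, which is what "as large as possible" asks for.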
Second, we also define a sequence (φ_L)_{L≥0} that will serve to quantify the size of the traps in a box of size L. Intuitively, typical regions of anomalous density in a box of size L are of size ln L. For our estimates, we nevertheless found it convenient to overestimate their size. We set (3.2).
In the sequel, we will tacitly use the bound |X_t − X_s| ≤ t − s, valid for all 0 ≤ s ≤ t.

Initial step
Lemma 4. Let T ∈ N be large enough, and let γ ≥ φ_T^3. There exists a constant v > 0 (that depends only on α, β and ρ) such that, given η ∈ {0, 1}^Z and x ∈ Z such that

It holds
Proof. Letting v > 0 be a constant that we will determine later, we first write (3.3), as the hypotheses do not allow us to determine whether the site x is initially occupied or not.
Let us define a few objects. Since we assume α ≤ β, there exists δ > 0 small enough so that (1.8) is still satisfied with any ρ′ ∈ [0, ρ + δ] instead of ρ. Let δ > 0 be such that this holds. Up to an enlargement of the probability space, we define a sequence of i.i.d. random variables (Y_k)_{k≥1}, independent of both the exclusion process and the walker, with distribution (3.4). Thanks to (1.8) and our choice of δ, it holds that P^η(Y_k = 1) > 1/2. For m ≥ 2, we define the events
and
We aim to show (3.6). Before deriving this expression, let us see that it implies the lemma for v small enough, depending on the values of α, β and ρ. Indeed, for such a v, by (3.4), the first term in the right hand side of (3.6) is seen to be bounded as
for some constant c > 0. The second term in the right hand side of (3.6) is then bounded by means of Proposition 1. Since T is assumed to be large enough, and since γ ≥ φ_T^3, we deduce that
where, to get the last inequality, we have used the explicit expressions (3.1) and (3.2) and (3.5), as well as the fact that T is large enough. We obtain the lemma by inserting the bounds (3.7) and (3.8) in (3.6), and then (3.6) in (3.3).
We are thus left with the proof of (3.6). For this, we show that, for any m ≥ 1 and any a ∈ R, (3.9) holds, from which (3.6) follows by iteration using Fubini's theorem.
Let us first deal with the case m = 1. We need to show that
Since, by (3.4), we have P^η(Y_1 = −1) = 1 − β + (β − α)(ρ + δ), we will conclude by showing that P^η(η_z(1) = 1) ≤ ρ + δ for any z ∈ B(x, 1). Let z ∈ B(x, 1). It holds that
where, as in Section 2, η̄(1) denotes the solution to the heat equation, with Laplacian γ∆, after a time 1, starting from η. We can use Lemma 2, with M = 0, L = φ_T, t = 1 and x = z, to estimate the right hand side of (3.10). Since γ ≥ φ_T^3 and z ∈ G(η, φ_T) by hypothesis, we conclude that, if T is large enough,
Let us next consider the case m > 1 in (3.9):
For each term of the sum, we proceed with the first factor exactly as for m = 1. Replacing L = φ_T by
Altogether, we obtain (3.9), as desired.

Renormalization procedure
We let v be the constant appearing in Lemma 4.

Proposition 3. Let T be large enough and let
it holds that
Proof of Proposition 3. We consider T large enough so that the conclusions of Lemma 4 hold. Let (t_n)_{n≥0} be defined by (3.13), and let t be such that, for some N ≥ 0, t_N = t (for example, define it recursively from t_N = t with some suitable N). We prove by induction on n ≥ 0 that, given η ∈ {0, 1}^Z, x ∈ Z and n ≥ n_0, if, for any y ∈ B(x, t_n), it holds that y ∈ G(η, φ_{t_n}), then (3.15) holds. This will imply the claim.
By Lemma 4 and the hypotheses, (3.15) holds true for n = 0, since γ ≥ φ_T^3. We now assume that (3.15) holds for some n ≥ 0, and we show that it then holds for n + 1. We follow the same steps as in the proof of Lemma 4. To simplify notation, let us write t_n = t and t_{n+1} = t′, as well as c_n = c and c_{n+1} = c′. As in Lemma 4, we first need to wait an initial time, as the information on the initial environment does not allow us to use our inductive hypothesis. It follows from the definition (3.13) of the sequence (t_n)_{n≥0} that
If n_0 is large enough, waiting this time r will suffice to dissipate possible traps with high probability.
As in the proof of Lemma 4, we define on the same probability space (enlarged if necessary) a sequence (Y_k)_{k≥1} of i.i.d. random variables, independent of both the exclusion process and the walker, with distribution
Using our inductive hypothesis, we aim to show (3.17). To see this, we show that, for any m ≥ 1 and any a ∈ R, we have (3.18). Since t′ = r + (t − 1)t, (3.17) follows from (3.18) by iteration.
We now prove (3.18). For any m ≥ 1,
Using the inductive hypothesis, each term of the sum can be controlled by
So (3.18), and hence (3.17), are shown.
We now proceed to bound each term in the right hand side of (3.17) separately. First, it follows from the definition (3.13) of (t_n)_{n≥0} and the definition (3.14) of (c_n)_{n≥0} that, if T is large enough,
where we have used the decomposition t′ = r + (t − 1)t, as well as the bounds r ≤ 4t and c′ ≤ 1, to get the first inequality, the estimate (3.19) to get the second one, the definitions (3.13) of (t_n)_{n≥0} and (3.14) of (c_n)_{n≥0} to get the third one, and finally a classical concentration bound for sums of independent bounded variables (|Y_k| ≤ t a.s.) to obtain the last one.
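The classical concentration bound invoked in the last step is, for instance, Hoeffding's inequality for i.i.d. variables with |Y_k| ≤ t: P(S_m − E[S_m] ≤ −a) ≤ exp(−a²/(2mt²)). A quick sanity check on our own toy example (centred ±t variables, nothing from the paper):

```python
import math, random

def hoeffding_bound(m, t, a):
    # One-sided Hoeffding bound for i.i.d. Y_k with |Y_k| <= t
    return math.exp(-a * a / (2.0 * m * t * t))

def empirical_tail(m, t, a, trials=20_000, seed=1):
    """Monte Carlo estimate of P(S_m <= -a) for centred +-t variables."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-t, t)) for _ in range(m))
        if s <= -a:
            hits += 1
    return hits / trials

m, t, a = 100, 1.0, 30.0
assert empirical_tail(m, t, a) <= hoeffding_bound(m, t, a)
```

With m = 100, t = 1 and a = 30, the bound exp(−4.5) ≈ 0.011 comfortably dominates the simulated tail probability.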
We then use Proposition 1 to get a bound on the second term in the right hand side of (3.17). By assumption, y ∈ G(η, φ_{t′}) as soon as y ∈ B(x, t′), while, by the definition (3.2) of the sequence (φ_L), we

Proof of Theorem 3
Proof of Theorem 3. Given η ∈ {0, 1}^Z and t ∈ N, we define a subset A(η, t) ⊂ Z of admissible points as follows: x ∈ A(η, t) if and only if, for all y ∈ B(x, t), it holds that y ∈ G(η, φ_t). Let us first show that there exist a constant C < +∞ and α > 0 such that
Indeed, because the equilibrium measure is a Bernoulli product measure, we find
Relation (

Drift for small γ
We here prove the following theorem.

Theorem 4. Assume that the condition (1.9) for transience to the right in a static environment holds.
Then, for γ small enough, there exists v(γ) > 0 such that
We observe that the uniform ellipticity condition (1.7) is not required. The cases where (1.9) holds while (1.7) is violated correspond to the cases (α = 1, β > 0) or (α > 0, β = 1). Since (1.9) then still holds if the values of α or β are lowered by a sufficiently small amount, we conclude by a coupling argument that the hypothesis (1.7) can be added without loss of generality. We will thus assume that (1.7) holds.
Several constants and parameters will appear in this section. Since it matters to select them in a given order, we introduce all of them (at least informally in some cases) already now: 1. We let, as before, the sequence (ǫ_L)_L be given by (3.1).
3. We fix a number q ≥ 2 that will serve to bound the probability that the walker deviates too much to the left, in Lemma 6 and Proposition 4.
4. We will take τ > 0 small enough so that the environment can be approximated with high enough accuracy by a static environment on time intervals of length γ^{−1}τ, in the proofs of Lemmas 6 and 7.
5. We will take K > 0 large enough so that environments with no trap of size larger than or equal to K ln L in a given region of size L are typical with respect to the equilibrium measure, in Lemmas 6 and 7 and in Proposition 4. Incidentally, taking K large enough also allows us to successfully apply Propositions 1 and 2.
6. We will take δ > 0 small enough so that, in a static environment, the walker drifts a distance t^δ to the right in a time t with high probability, in Lemmas 6 and 7 and in Proposition 4.

Static environment
In this section we consider random walks in a static random environment. We refer to [19] for background on this topic. Given an environment ω ∈ [0, 1]^Z, we define S^ω_x to be the law of the walker starting at time 0 at x ∈ Z and evolving in the static environment ω. We write S^ω for S^ω_0. Given η ∈ {0, 1}^Z, we can define via (1.5) an environment ω(η). To simplify notation, we may use S^η instead of S^{ω(η)}. Given a static environment ω, we define the associated potential V by
Lemma 5. Let K and δ be positive numbers. Let ρ̄ ∈ [ρ, 1[ be small enough so that
where E_ρ̄ denotes the expectation with respect to P_ρ̄.
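For the reader's convenience, we recall the classical formulas behind the potential V and the exit probabilities (standard facts from [19], written here with one common sign convention, which should be matched against the paper's own definition):

```latex
% Potential of a static environment omega (one common normalization)
V(x) = \sum_{y=1}^{x} \ln\frac{1-\omega_y}{\omega_y} \ (x \ge 1), \qquad
V(0) = 0, \qquad
V(x) = -\sum_{y=x+1}^{0} \ln\frac{1-\omega_y}{\omega_y} \ (x \le -1).

% Exit probabilities from an interval, for the walk started at 0:
S^{\omega}\bigl(T_{-a} < T_{b}\bigr)
  = \frac{\sum_{x=0}^{b-1} e^{V(x)}}{\sum_{x=-a}^{b-1} e^{V(x)}},
  \qquad a, b \ge 1.
% The walk thus climbs potential barriers at exponential cost, which is the
% mechanism behind the excursion-length bounds of order K*ln(t) used below.
```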
1. There exists C > 0 so that, for t large enough (depending on δ), if the environment satisfies
2. For t large enough (depending on δ and K), if the environment satisfies
Note that the assumption of point 2 implies the assumption of point 1.
Proof. To simplify and avoid integer parts, we assume that t^δ is an integer.
We start with point 1. Under the assumption η̄_{−t^δ/2, t^δ/2} ≤ ρ̄,
We recall that (see [19] for example)
Finally, using the Markov property at successive return times to 0, we obtain
and that concludes the proof of (4.1).
We turn to point 2, and first control the probability that the walker has not exited the interval
Considering an obvious coupling of S^η and S^ω̄,
where T_x denotes the hitting time of x ∈ Z. Using a classical recurrence relation (see [19], p. 59, for example), it holds that
The assumption of point 2 implies that any excursion above a minimum of V has length at most K ln t^δ, and thus [V]_{−2t^δ, 2t^δ} ≤ K ln(t^δ) ln((1 − α)/α). Finally, using Markov's inequality,
We turn to the probability that the walker is to the left of t^δ at time t.
We have already controlled the first two terms. For the last one, we apply the Markov property at time T_{t^δ} and then use (4.1) to derive:
The largest of the three terms is thus the first one, at least for t large enough. That concludes the proof of (4.2).

Initial step
The assumption in Proposition 1 will read $t \ge K^3 \ln^3(\gamma^{-\delta} t)$, once applied at a time $\gamma^{-1} t$ and for a length $K \ln(\gamma^{-\delta} t)$ in the renormalization procedure of Section 4.4 below. Therefore, this procedure can only be initiated for times satisfying this bound, hence the choice of the window $\ln^4(\gamma^{-1}) \le T \le \ln^9(\gamma^{-1})$ in the next lemma.

Lemma 6. Let $q \ge 2$. Let $K > 0$ be large enough, then $\delta > 0$ small enough, and then $\gamma > 0$ small enough.

Rough lower bound for intermediate times
Lemma 6 derived above does not yet allow us to initiate the renormalization procedure described in the proof of Proposition 4 below. Indeed, while in a time $\gamma^{-1} t$ the walker moves a distance $\gamma^{-\delta} t$ to the right with probability $1 - 1/t^q$ for good environments, it could a priori move a distance $\gamma^{-1} t$ to the left with probability $1/t^q$. Therefore, as long as roughly $t < \gamma^{-1/q}$, we cannot exclude that the drift is actually to the left. The conclusions of Lemma 6 are, however, valid only for $t \le \ln^9(\gamma^{-1})$, and we cannot expect, using the same proof, to extend this result to times $t$ close to $\gamma^{-1}$; see (4.6). For intermediate times $t$ such that $\ln^4(\gamma^{-1}) \le t \le \ln^9(\gamma^{-1})$, the next lemma furnishes a better lower bound than the deterministic bound $X_s \ge -s$ ($s \ge 0$), so as to ensure that the walker is drifted to the right.
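The threshold $t < \gamma^{-1/q}$ quoted above comes from a back-of-the-envelope comparison of the two scenarios (this is a heuristic computation, not an estimate from the proof):

```latex
\mathbb E\big[X_{\gamma^{-1} t}\big]
\;\ge\;
\gamma^{-\delta} t\,\big(1 - t^{-q}\big) \;-\; \gamma^{-1} t \cdot t^{-q},
```

which is positive only once $t^{q} \gtrsim \gamma^{\delta - 1}$, that is, $t \gtrsim \gamma^{-(1-\delta)/q}$; for smaller $t$, the rare leftward excursion of length $\gamma^{-1} t$ may dominate the mean.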
For the present proof, we assume that (4.11) holds instead of (3.1), and we directly use hypothesis (4.10) instead of (4.9).
Fix $1 \le s \le t$ and define an event $E$ relative to the exclusion process alone. We set

• $(\eta(t))_{t \ge 0} \in B_j$ if and only if the number of jumps of the exclusion process that move particles in

We then decompose

Let us estimate the first term in the right-hand side of (4.12). Let $\omega$ be an environment such that the corresponding $(\eta(t))_{t \ge 0}$ satisfies $(\eta(t))_{t \ge 0} \in E$. A coupling argument shows that

It holds that $\eta^j_{x_j - L, L} \le (1 + \epsilon_{L^{1/2}} + \tau^{1/2})\rho =: \bar\rho$.
For $\tau$ small enough and $\gamma$ small enough, it holds that $E_{\bar\rho}\big((1-\omega)/\omega\big) < \frac12 E_\rho\big((1-\omega)/\omega\big)$. By a coupling argument and the first part of Lemma 5, we obtain, for some $c > 0$,

Therefore, inserting this estimate in (4.13), it holds that $P^\omega_{0,0}\big(X_{\gamma^{-1} s} \le -\gamma^{-\delta} s\big) \le (s/\tau)\, e^{-cL}$, so that

(4.14)

For the second term in the right-hand side of (4.12), we have

too, since, applying Proposition 2, we find $P^\eta(A^c_j) \le e^{-cL}$ if $K$ is large enough, while, if $\tau$ is chosen small enough, a classical concentration bound furnishes $P^\eta(B^c_j \mid A_j) \le e^{-cL}$. Inserting (4.14) and (4.15) into (4.12), we obtain, for $\gamma$ small enough,
By Lemma 6 and by the hypotheses, (4.16) holds true for $n = 0$. We now assume that (4.16) holds for some $n \ge 0$, and we show that it implies it for $n+1$. To simplify notation, let us write $t_n = t$ and $t_{n+1} = t'$, as well as $c_n = c$ and $c_{n+1} = c'$. It holds that $t' = r + (t-1)t$ for some $t \le r < 4t$. The cases $t' \le \gamma^{-1}$ and $t' > \gamma^{-1}$ are treated differently. We first write a bound analogous to (3.16) in the proof of Proposition 3:

where (4.17) is derived from Lemma 7, while the bound $X_{\gamma^{-1} r} \ge -\gamma^{-1} r$ in (4.18) is deterministic.
We need to evaluate the right-hand side of either of these bounds. For this, we define, on the same probability space (enlarged if necessary), a sequence $(Y_k)_{k \ge 1}$ of i.i.d. random variables, independent of both the exclusion process and the walker, with distribution:

For any integer $m \ge 0$, let us also define the events

Using our inductive hypothesis, we aim to show that, for $t' \le \gamma^{-1}$,

with $(Y_k)_{k \ge 1}$ given by (4.19), and, for $t' > \gamma^{-1}$,

with $(Y_k)_{k \ge 1}$ given by (4.20). One sees that the first of these bounds, valid for $t' \le \gamma^{-1}$, involves two extra terms in comparison with the second one, valid for $t' > \gamma^{-1}$. This comes from the fact that, in the first case, it is not always true that, after a time $\gamma^{-1}(r + (m-1)t)$ (for some $1 \le m \le t-1$), the walker is on a site where we have a control on the environment, while this is always the case in the second situation (the initial environment is controlled in a box of size $(\gamma^{-\delta} t')^2$).
The bounds (4.21) and (4.22) are shown in an analogous way; as the latter is easier, we focus on the derivation of (4.21). Let us thus assume $t' \le \gamma^{-1}$. Let us establish that, for any $m \ge 1$ and any $a \in \mathbb R$, we have

Since $t' = r + (t-1)t$, (4.21) follows from (4.23) by iteration and Fubini's theorem. For $m \ge 1$:

The first term in the right-hand side is expressed as

Our inductive hypothesis at scale $n$ and the Markov property imply that

We now proceed to bound each term in the right-hand side of (4.21) and (4.22) separately. Let us start with (4.21). To deal with the first term in the right-hand side of (4.21), we define, for any integer $1 \le p \le t-1$, the event

We have

We take $p = 3(q+1)$ and we show that

provided that $\gamma$ was taken small enough. Indeed, the first term in the right-hand side of (4.27) vanishes since, on $A^c_p$, it holds that

as indeed the last inequality reads $(c - c')t > C(p)$ (which holds true since $c - c' = 2^{-(n+2)}$ while $t \ge e^{c 2^n}$ for some $c > 0$). To deal with the second term in the right-hand side of (4.21), we apply Lemma 7:

(4.29)

Finally, to bound $P^\eta(E^c_m)$ for any $m \ge 0$, we apply Proposition 1. The hypothesis here reads $\gamma\big(\gamma^{-1}(r + mt)\big) \ge (K \ln(\gamma^{-\delta} t))^3$, and is satisfied since, for $\gamma$ small enough, $r + mt \ge t \ge \ln^4 \gamma^{-1}$. Therefore, taking $K$ large enough,
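For orientation, the recursion $t_{n+1} = r + (t_n - 1)t_n$ with $t_n \le r < 4 t_n$ forces doubly exponential growth of the scales, which is the source of bounds of the form $t \ge e^{c 2^n}$ used above (a one-line computation):

```latex
t_n^{2} \;\le\; t_{n+1} \;=\; r + (t_n - 1)t_n \;<\; t_n^{2} + 3 t_n,
\qquad\text{hence}\qquad
t_n \;\ge\; t_0^{\,2^{n}} \;=\; e^{(\ln t_0)\, 2^{n}} \quad \text{for } t_0 \ge 2.
```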

Conclusion of the proof
Proof of Theorem 4. If $K$ is large enough, we have, writing simply $\epsilon$ for $\epsilon_{K \ln \gamma^{-\delta} t}$,

once $t$ is large enough. This last term can be bounded by $1/t^2$, which is summable, for $K$ large enough.
Therefore, since

it follows from the Borel-Cantelli lemma, from Proposition 4 and from (4.31) that, for $K$ large enough, $\delta > 0$ small enough and $\gamma > 0$ small enough,

Proof of Theorem 1 and Theorem 2
Theorem 1 is deduced from Proposition 3 and Theorem 3 exactly in the same way as Theorem 2 is deduced from Proposition 4 and Theorem 4. We only show Theorem 1, and so we fix γ large enough so that the conclusions of Proposition 3 and Theorem 3 are in force.
The key point in proving Theorem 1 is the construction of a renewal structure, that is, cutting the random path of the walker into pieces that are independent under $P_0$. As mentioned in the introduction, a similar construction has already been used several times to study various models involving dynamical environments, and again recently in [8,14]. However, the construction is very model-dependent, leading us to develop our own version in order to overcome the specific difficulties we had to face. This is in particular the case for Proposition 5, where we show that there exists, $P_0$-a.s., a positive asymptotic density of times at which the walker sits in front of all particles visited until that time. This result constitutes the main technical part of our proof.
In Section 1.1 the law $\mathbb P$ of the environment was built from the exclusion process. In this section we build it from the interchange process on $\mathbb Z$, together with an independent collection of particles of two different types, so that each site is occupied by exactly one particle. We use the following definitions:

1. For $i \in \mathbb Z$ and $t \ge 0$, $\xi(t,i)$ denotes the position at time $t$ of the $i$-th particle, that is, the particle that was at $i$ at time $0$. Let $P_1$ be the law of this process.
2. For $t \ge 0$ and $x \in \mathbb Z$, $\mu(t,x)$ is the unique $i$ such that $\xi(t,i) = x$, that is, the label of the particle that is at $x$ at time $t$. Note that $\mu$ is a function of $\xi$.
3. The process $\xi$ is built from a collection of independent Poisson clocks $(U(t,x))_{t \in \mathbb R, x \in \mathbb Z}$ with parameter $\gamma$, called updates: if the clock at $x$ rings at time $t$, then $\xi$ exchanges the particle that is at $x-1$ at time $t^-$ with the one that is at $x$. Remark that time is indexed by $\mathbb R$: this is convenient for technical reasons, although not necessary to define the model; see Lemma 8 for example.

4. We also consider a family of particle types $(\nu(i))_{i \in \mathbb Z}$, which is just a product of independent Bernoulli variables with parameter $\rho$, and we let $P_2$ be its law.
The environment $\omega$ is viewed as a function of $\xi$ and $\nu$:

We let the reader check that the law $\mathbb P$ of the environment defined in Section 1.1 is the push-forward of $P_1 \otimes P_2$ through this function. We denote by $\Xi$ the space in which $\xi$ lives, and by $\mathcal P$ the set of paths supporting the process $X$.
We will use the uniform ellipticity of the environment to control the probability that the walker performs certain displacements, independently of any information we could have about the environment.
Define this minimal probability by

By assumption (1.7), we have $\kappa > 0$. For any time-space point $(t,x) \in \mathbb R \times \mathbb Z$, we define

where $v$ is the constant appearing in Proposition 3. The position of the rightmost visited particle at time $n \ge 0$ is $M(n) := \sup\{\xi(n,\mu),\ \mu \in V_n\}$, with $V_n = \{\mu(i, X_i),\ i \le n-1\}$ the set of labels of visited particles at time $n$. Our goal is to define a sequence of random times, called renewal times, that satisfy

$\forall n \le \tau,\ (n, X_n) \in T^+_{\tau, X_\tau}$

The meaning of (5.4) and (5.6) is clear. Condition (5.5) means that, at time $\tau$, all particles visited by the walker before time $\tau$ are behind the walker. Finally, (5.7) means that all the particles to the left of $X_\tau$ at time $\tau$ will never enter the cone $T^-_{\tau, X_\tau}$. These four conditions together imply in particular that, after time $\tau$, the walker only visits particles that have not yet been visited at time $\tau$. Our goal is to prove that, $P_0$-a.s., there are infinitely many renewal times, and to obtain a bound on the first and second moments of $\tau$.

5.1 Existence of a density of points $(n, X_n)$ satisfying (5.4)-(5.5)

A time-space point $(n, X_n)$ is said to be a candidate if it satisfies (5.4) and (5.5): $\forall m \le n,\ (m, X_m) \in T^+_{n, X_n}$, and $M(n) < X_n$.
We prove here that, under $P_0$, there is eventually a density of candidates (Proposition 5): there exists some $c > 0$ such that, for $n$ large enough,

In particular, by the Borel-Cantelli lemma,

To describe the idea of the proof, let us first introduce a notion slightly weaker than that of being a candidate. Given $l \ge 1$ and $n \in \mathbb N$, a time-space point $(n, X_n)$ is said to be good (for $l$) if $\forall m \le n,\ (m, X_m) \in T^+_{n, X_n}$

We will prove Proposition 5 in two steps: we first establish the result with "candidate" replaced by "good", and then show that Proposition 5 follows from this intermediate statement.
Let us take some large integer $n$. On the one hand, by Proposition 3, it holds that $X_n \ge vn/2$ with high probability, for some $v > 0$. Therefore, it follows from basic geometric considerations that, with the same probability, there is a density of points $(k, X_k)$ satisfying (5.10). On the other hand, given a deterministic path $(j, Y_j)_{j \ge 0}$ with $Y_{j+1} - Y_j = \pm 1$, any given point $(k, Y_k)$ satisfying (5.10) also satisfies (5.11) with a probability bounded from below by $1 - e^{-cl}$, where $c$ is some constant independent of the path and the point (see Lemma 8 below). One may therefore try to establish that, with very high probability, any given path $(j, Y_j)_{j \ge 0}$ independent of the environment and having a density of points satisfying (5.10) also has a density of good points. More precisely, since the number of such paths that differ from one another before time $n$ is roughly bounded by $2^n$, we would need to establish that the probability that a given such path has fewer than $cn$ good points decays to $0$ faster than $2^{-n}$, provided that $c$ has been chosen small enough. If the events that different points on the path are good were independent, we would indeed conclude by the above that this probability decays like $e^{-I(l)n}$, where $I(l)$ grows to infinity as $l$ grows to infinity, so that the result would follow by taking $l$ large enough.
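The counting step sketched in this paragraph is a plain union bound; spelled out, with $I(l)$ as above:

```latex
P\big(\text{some path has fewer than } cn \text{ good points}\big)
\;\le\; 2^{n}\, e^{-I(l)\, n}
\;=\; e^{-(I(l) - \ln 2)\, n}
\;\xrightarrow[n \to \infty]{}\; 0
\qquad \text{as soon as } I(l) > \ln 2,
```

so it would suffice to take $l$ large enough that $I(l) > \ln 2$; the block construction below is needed precisely because the independence underlying the rate $I(l)n$ is not available.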
The lack of independence forces us to adapt this strategy. Let us assume that a point $(k, Y_k)$ satisfies (5.10) but not (5.11), and let us denote by $d$ the minimum over the maximal distances to the right of $Y_k + l$ travelled by the particles that make (5.11) fail (see (5.13) for a precise definition). The main observation is that, when considering a later point $(k', Y_{k'})$ satisfying (5.10), with $k' > k$, then, if $(k, Y_k + d) \in T^+_{k', Y_{k'}}$, we can estimate the probability that (5.11) is satisfied for $(k', Y_{k'})$ independently of our knowledge about $(k, Y_k)$. This observation is made useful thanks to the following two ingredients.
First, in order to reduce the number of possible paths, we consider paths of blocks of size $l$ instead of paths of points. While the number of such paths is now bounded by $2^{n/l}$, the probability that a given block is not bad (see the precise definition just below) still behaves like $1 - l^2 e^{-cl} \sim 1 - e^{-c'l}$. Second, each time a bad block is seen, we estimate how bad it is, i.e. we estimate $E$, as defined in (5.17). The probability that $E$ exceeds a given amount $k \ge 0$ decays exponentially in $k$ (see Lemma 8 below).
Before starting the proof of Proposition 5, let us state an elementary result concerning the updates of the environment alone (recall that the updates $U(t,x)$ are defined for all $t \in \mathbb R$). We say that a time-space point $(t,x)$ is bad if there exist a particle $i \in \mathbb Z$ and a time $s \le t$ such that

and $\xi(t,i) \ge l + x$.
Given a bad point $(t,x)$, $A_{t,x}$ denotes the set of particles satisfying (5.12). Given a particle $i \in A_{t,x}$, we define

Finally, we define the variable $d_{t,x}$ by

if $(t,x)$ is bad, and $d_{t,x} = 0$ if $(t,x)$ is not bad. We stress that the variables $d_{t,x}$ are deterministically equal to $0$ or larger than $l$, measurable with respect to $\xi$, and identically distributed (but of course not independent).

Lemma 8. There exists $c > 0$ so that, for any $k \ge l$,

(5.14)

Proof. We first establish the exponential decay for large $k$. For $k \ge l$, using a union bound,

For each $i \ge l$, $(\xi(s,i))_{s \le 0}$ is a simple continuous-time random walk, so that there exists some constant

The sum (5.15) is decomposed into two parts. First, there exists $c_2 > 0$ such that, for $k$ large enough,

Second, let us consider an index $l \le i \le k-1$. Using the Markov property at the hitting time of the complement of $T^+_{0,k-1}$, together with the fact that the interchange process is considered under its invariant measure, we obtain

We thus obtain (5.14) for $k$ large enough.
As $P_1(d_{0,0} \ge k)$ is non-increasing in $k \ge l$, it remains to prove that $P_1(d_{0,0} \ge l) < 1$ to complete the proof, or equivalently that $P_1(d_{0,0} = 0) > 0$. We already know that, for some $K$ large enough, $\sum_{i \ge K} P_1\big(\exists\, t \le 0 \text{ s.t. } \xi(t,i) \in T^+_{0,0}\big) < 1$. We consider a time $s < 0$ so that $-\frac v4 s > K$. Using the Markov property at time $s$, together with the fact that the interchange process is considered under its invariant measure, we obtain that

where $U(s,l) = U(0,l)$ means that the clock between the sites $l-1$ and $l$ has not rung during the time interval $[s,0]$. That concludes the proof.
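The per-particle estimate in the proof above rests on a standard large-deviation bound for a rate-$\gamma$ continuous-time simple random walk $(S_s)_{s \ge 0}$, which we record here as a generic fact (the constants are ours, not the paper's): for $b > 0$ and $\lambda > 0$ small enough that $\lambda b > \gamma(\cosh\lambda - 1)$,

```latex
M_s := \exp\big(\lambda S_s - \gamma(\cosh\lambda - 1)\, s\big)
\ \text{is a martingale, and}
\qquad
P\big(\exists\, s \ge 0 :\ S_s \ge a + b s\big) \;\le\; e^{-\lambda a}.
```

Applied with slope $b = v/4$ and initial distance $a$ proportional to the label $i$ of the particle, this yields the exponential decay in $i$ that makes the sum in (5.15) convergent.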
Proof of Proposition 5. The proof is made of two steps.
In a first step, we fix $l \ge 1$ large enough, and we prove that there exists some constant $c_0 > 0$ such that, for all $n$ large enough,

where "good (for $l$)" is defined by (5.10)-(5.11). We observe that

By Proposition 3, the second term is bounded by $P_0(X_n \le \frac v2 n) \le e^{-\phi^{1/4} n} \le 1/n^{q+2}$, where the second inequality is valid for $n$ large enough. We thus need a bound on the first term.
Let us assume that $X_n \ge \frac v2 n$. We first describe how to create a trajectory of boxes of size $l$ starting from the trajectory of points $(j, X_j)_{0 \le j \le n}$. For $j \ge 0$, we define

and we remark that, on the event $\{X_n \ge \frac v2 n\}$, $t_i \le n$ for all $i \le \frac v4 n$. We cut the time-space parallelogram defined by its opposite sides

We denote by $I \subset \{1, \dots, T\} \times \{1, \dots, K\}$ the subset of indices $(i,j)$ such that $(i,j) \in I$ if and only if the box $C(i,j)$ contains at least one point $(t_k, X_{t_k})$ for some $k \le \frac v4 n$. We observe the following:

1. For any $1 \le j \le K$, there exists at least one $i \in \{1, \dots, T\}$ such that $(i,j) \in I$.
2. For any $1 \le i \le T$, there are at most two $j \in \{1, \dots, K\}$ such that $(i,j) \in I$, and in that case the two indices $j$ are consecutive.
It thus makes sense to consider the subset $J \subset I$ such that $(i,j) \in J$ if $j$ is odd and $i$ is the smallest number so that $(i,j) \in I$. The cardinality of $J$ is $K/2$ (assuming $K$ even, the other case being analogous) and the set $J$ can be described as

In the sequel, we will say that a subset $\tilde J \subset I$ is admissible if it can be constructed as above starting from . Recall now the definition (5.12) of bad points. A block $C(i,j)$ is said to be bad if at least one point in $C(i,j)$ is bad. For any block $C(i,j)$, we define the variable $E(i,j)$ by $E(i,j) = 0$ if the block is not bad, and by

if the block is bad, where $\tilde T_{il}$ . We note that $E(i,j) \ge 1$ if and only if the block $C(i,j)$ is bad. The crucial observation is that, for any

Moreover, using a union bound together with (5.14), we get that there exists $c_1 > 0$ such that, for any

Let us now assume that $0 < c_0 \le \frac{v}{16 l}$. We then have

For some $c_2 > 0$ we can bound, for every $n$, the number of admissible subsets $J$ by $e^{c_2 \frac nl}$, so that

Moreover, since for any admissible $J$ there are also at most $e^{c_3 \frac nl}$ ways to extract half of the blocks of $J$,

(we assume that $K/4$ is an integer for ease of notation, the other cases being analogous).
To get a bound on the maximum appearing in this last expression, we need some extra notation. Let $j_1 < \dots < j_{K/4}$ be the set of points such that $(i, j_k) \in J'$ for some unique $i$. We denote by $\Phi$ a partition of $\{j_1, \dots, j_{K/4}\}$ into non-empty intervals, by which we mean non-empty sets of the type $[a,b] \cap \{j_1, \dots, j_{K/4}\}$, for $[a,b]$ an interval of $\mathbb R$. We denote by $|\Phi|$ the number of sets in $\Phi$, by $\phi_k$ the sets of $\Phi$, by $|\phi_k| \ge 1$ the cardinality of each set, by $l_k$ the smallest integer in the set $\phi_k$, and by $r_k$ the largest integer in the set $\phi_k$. Moreover, to lighten the notation, let us simply write $C(j_k)$ (resp. $E(j_k)$) for $C(i(j_k), j_k)$ (resp. $E(i(j_k), j_k)$), where $i(j_k)$ is the number such that $(i(j_k), j_k) \in J'$ for a given $j_k \in \{j_1, \dots, j_{K/4}\}$. Then

where the equality in the third line follows from (5.18), the inequality in the fourth line follows from (5.19) together with the rough estimate $|r_k - l_k + 1| \ge |\phi_k|$ for any $0 \le k \le |\Phi|$, and the last estimate follows from the fact that the number of partitions $\Phi$ into intervals is bounded by $2^{K/4}$. Since $K \ge c_5 n/l$, we finally obtain that, for some constant $c_6 < +\infty$,

By taking $l$ large enough, this is bounded by $1/n^{q+2}$ for $n$ large enough, from which (5.16) follows.
We now turn to the second step of the proof, and derive the result from (5.16). Let $g_1$ be the first good time, i.e. the first time such that $(g_1, X_{g_1})$ is good, and then define, by iteration on $i \ge 1$, $g_{i+1}$ as the first good time after $g_i + l$. By (5.16), the sequence $(g_i)_{i \ge 1}$ is almost surely infinite. Remark that the $(g_i)_{i \ge 1}$ are stopping times with respect to the filtration

There exists a constant $\epsilon > 0$ such that, for any $i \ge 1$,

where we have used uniform ellipticity to bound the conditional probability that the walker makes $l+1$ steps to the right. Let us denote by $(Z_i)_{i \ge 1}$ a sequence of variables with values in $\{0,1\}$ such that $Z_i = 1$ if $g_i + l$ is a candidate, and $Z_i = 0$ otherwise. The above implies that $P_0(Z_k = 1 \mid Z_1, \dots, Z_{k-1}) \ge \epsilon$.
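The conditional bound $P_0(Z_k = 1 \mid Z_1, \dots, Z_{k-1}) \ge \epsilon$ implies, by a standard stochastic-domination argument (a generic fact, not taken from the paper), that $\sum_{k \le m} Z_k$ stochastically dominates a Binomial$(m, \epsilon)$ variable, whence the Chernoff bound

```latex
P_0\Big(\sum_{k=1}^{m} Z_k \;\le\; \tfrac{\epsilon m}{2}\Big)
\;\le\;
P\Big(\mathrm{Bin}(m, \epsilon) \;\le\; \tfrac{\epsilon m}{2}\Big)
\;\le\;
e^{-\epsilon m / 8}.
```

In words: a positive density of good times produces a positive density of candidates, up to an error that is exponentially small in $m$.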
The second term is bounded by $1/n^{q+1}$ thanks to (5.16). For the first one, we observe that, on the event $\{\#\{i \le n : (i, X_i) \text{ good}\} \ge c_0 n\}$, there exists $c_9 > 0$ such that $g_{c_9 n} \le n$. Therefore, if $c > 0$ is taken so that $c \le c_7 c_9$, we obtain

thanks to (5.21).

5.2 Proof that $(0, X_0)$ satisfies (5.6)-(5.7) with positive probability

We prove here that, with positive probability, the walker lives in $T^-$ and does not visit any of the particles that were behind it at the initial time. We introduce

Proof of (5.24). In this proof, as $X_0 = 0$ a.s., we consider $F$ as a function of $\xi$ only, that is, $F = F(\xi, 0)$.
As, $P_0$-a.s., $\liminf X_n/n > v$, we deduce by monotonicity that

Using the same type of computation as for (5.14), we obtain that

and we choose $L$ large enough so that (recall (5.25))

Finally, for $L'$ large enough so that $L' - \frac v4 L' \ge L$,

$P_0(H = +\infty) \ge \mathbb P \times P^\omega_{0,0}\big(F = +\infty,$

where we have used uniform ellipticity to get the last line, and $\kappa$ is defined in (5.1). As the law of the environment is invariant by translation, $\mathbb P \times P^\omega_{L',L'}\big(X_n \ge \frac v4 (n-L),\ \forall n \ge 0\big)$ is equal to $P_0\big(X_n \ge \frac v4 (n-L),\ \forall n \ge 0\big)$, so that finally, using (5.26),

Building the first renewal point
We define, for $n \ge 0$, the shift $\theta_n$ on the space $\Xi \times \mathcal P$ by $\theta_n(\xi,$

and consider the increasing sequence of stopping times with respect to the filtration $(\mathcal F_k)_{k \ge 1}$ (see (5.20)) defined by $S_0 = 1$ and, for $k \ge 1$,

We claim that, $P_0$-a.s.,

$K < +\infty \qquad (5.28)$

so that $\tau := R_K$ is well defined; moreover, $\tau$ is a renewal time in the sense that it satisfies (5.4)-(5.7).
Proof of (5.28). We first deduce from Lemma 5 that

so that
$$P_0(R_{k+1} < +\infty) \;=\; P_0(R_k < +\infty) - P_0(R_k < +\infty,\ S_k = +\infty). \qquad (5.29)$$
To compute the last term of (5.29), we first use that $P^\omega$ is Markovian:

where $E_1$ (resp. $E_2$) denotes the expectation with respect to $P_1$ (resp. $P_2$). For any fixed $\xi$, let $V_{i,x}$ be the set of labels of particles that are strictly behind $x$ at time $i$, that is,

Note that, given $\xi$, $P^\omega_{0,0}(R_k = i,\ X_{R_k} = x)$ is measurable with respect to $\sigma(\nu(\mu),\ \mu \in V_{i,x})$, while $P^\omega_{i,x}(H(\theta_i \xi, X) = +\infty)$ is measurable with respect to $\sigma(\nu(\mu),\ \mu \notin V_{i,x})$. These two variables are thus independent under $P_2$.
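Granting the factorization provided by this independence (we write the standard conclusion under the assumption that it yields the factor $\eta := P_0(H = +\infty)$, which is positive by (5.24)), equation (5.29) leads to the usual geometric bound:

```latex
P_0(R_{k+1} < +\infty) \;\le\; (1 - \eta)\, P_0(R_k < +\infty)
\;\le\; (1 - \eta)^{k+1},
```

so that $P_0(K > k) \le (1 - \eta)^k$ and $K < +\infty$ $P_0$-a.s., which is (5.28).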

Defining a sequence of renewal points by iteration
Remark that $\tau_1$ has to be thought of as a function of $X$ and $\xi$, so that, in order to iterate the construction, we study the law of these two processes after $\tau_1$. That is the purpose of the next proposition.
Proof of Proposition 6. The proof is standard for this kind of construction (see for example [18] for the static case). We adapt it explicitly to our setting in order to be exhaustive. We define

We have to prove that, for any bounded $\varphi_1$, $\varphi_2$,

$(t, x))_{x \in \mathbb Z,\, t \ge 0}) \mid H = +\infty). \qquad (5.32)$

Consider the variables $\psi_1\big((X_{n \wedge \tau_1})_{n \ge 0}\big)$ and $\psi_2\big((\xi(s,x))_{s \le \tau_1,\, x \in \mathbb Z}\big)$, where $\psi_1$ and $\psi_2$ are bounded functions.
If $Z$ is some process and $t$ a time (possibly random), the process stopped at time $t$ is denoted $Z^t$, that is, $Z^t_s = Z_{s \wedge t}$ for any $s \ge 0$. Using the same arguments as in the proof of (5.30)-(5.31), we deduce that (we specify the arguments of the functions only when they change from line to line)

where the last line is obtained from the previous one by taking $\varphi_1 = \varphi_2 = 1$ in the same computation.
This concludes the proof of (5.32), and thus that of Proposition 6.
The proof follows by induction from Proposition 6.

5.5 Control on the moments of $\tau_1$ and conclusion of the proof

Proposition 8. For $n \ge 0$ large enough, $P_0(\tau_1 > n) \le \frac{1}{n^3}$.
That concludes the proof.
We are now ready to conclude the proof of Theorem 1. The proofs that a finite second moment for $\tau_2 - \tau_1$ implies a law of large numbers and an annealed central limit theorem are classical in the context of random walks in static random environments; see [18] and [17]. We recall here, for the sake of completeness, the steps of these proofs.
We start with the proof of the law of large numbers, that is, point 1 of Theorem 1. For $n \ge 0$, $k(n)$ denotes the label of the "renewal slab" containing $n$, that is, the unique integer such that $\tau_{k(n)} \le n < \tau_{k(n)+1}$. We can thus control the walker via
$$\frac{X_{\tau_{k(n)}}}{\tau_{k(n)+1}} \;\le\; \frac{X_n}{n} \;\le\; \frac{X_{\tau_{k(n)+1}}}{\tau_{k(n)}}. \qquad (5.36)$$
Rewrite the right-hand term as
$$\frac{X_{\tau_{k(n)+1}}}{\tau_{k(n)}} \;=\; \frac{X_{\tau_1} + \sum_{i=2}^{k(n)+1} \Delta_i}{k(n)} \cdot \frac{k(n)}{\tau_{k(n)}}, \qquad (5.37)$$
so that, using Proposition 7 and the law of large numbers, it converges $P_0$-a.s. to
$$v(\gamma) \;=\; \frac{E_0(X_{\tau_1} \mid H = +\infty)}{E_0(\tau_1 \mid H = +\infty)}, \qquad (5.38)$$
which is always well defined and positive because of (5.24) and Proposition 8. Using the same decomposition as in (5.37) to study the left-hand term in (5.36), we obtain the law of large numbers stated in point 1.
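The renewal-reward identity (5.38) can be illustrated numerically on a toy model. In the sketch below, hypothetical i.i.d. block increments stand in for the pieces $(\tau_{i+1} - \tau_i,\ X_{\tau_{i+1}} - X_{\tau_i})$ produced by the renewal structure; the empirical speed then approaches $E[\Delta X]/E[\Delta\tau]$, mirroring (5.38). This only illustrates the limit mechanism, not the actual walk.

```python
import random

def renewal_speed(n_blocks: int = 200_000, seed: int = 0) -> float:
    """Empirical speed X_{tau_k} / tau_k after n_blocks renewal blocks.

    The block distributions below are hypothetical stand-ins for the
    i.i.d. pieces (tau_{i+1} - tau_i, X_{tau_{i+1}} - X_{tau_i}); only
    the renewal-reward mechanism of (5.38) is being illustrated.
    """
    rng = random.Random(seed)
    total_time = 0
    total_disp = 0
    for _ in range(n_blocks):
        total_time += rng.choice([2, 3, 4])  # block duration, mean 3
        total_disp += rng.choice([1, 1, 2])  # block displacement, mean 4/3
    return total_disp / total_time

# By the law of large numbers the ratio approaches E[dX]/E[dtau] = (4/3)/3 = 4/9.
speed = renewal_speed()
```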
We turn to the proof of point 2 of Theorem 1, mainly following [17]. Define, for $j \ge 1$,

converges in law to a Brownian motion with variance $E_0(Z_1^2)$, which is positive as $Z_1$ is not $P_0$-a.s. constant. As a consequence of Proposition 7, the law of large numbers and Dini's theorem, $P_0$-a.s., for all $T > 0$,
$$\sup_{0 \le t \le T} \Big| \frac{k(\lfloor tn \rfloor)}{n} - \frac{t}{E_0(\tau_2 - \tau_1)} \Big| \;\longrightarrow\; 0,$$
so that we deduce from (5.39) that

converges in law to a Brownian motion with variance $E_0(Z_1^2)/E_0(\tau_2 - \tau_1)$. Finally, observe that, $P_0$-a.s., for all $T > 0$,

with the convention $\tau_0 = 0$. We prove that the right-hand term converges in probability using Propositions 7 and 8. Indeed, for any $\epsilon > 0$,

and this last term converges to $0$ as $n$ goes to $+\infty$. We have thus proven that the Skorohod distance between $\big(\frac{X_{nt} - ntv(\gamma)}{\sqrt n}\big)_{t \ge 0}$ and $\big(\frac{\sum_{j=1}^{k(\lfloor nt \rfloor)} Z_j}{\sqrt n}\big)_{t \ge 0}$ goes to $0$ in $P_0$-probability. We deduce from the convergence in law of the latter that

converges in law to a Brownian motion with variance $\sigma^2 := E_0\big((X_{\tau_2} - X_{\tau_1} - v(\gamma)(\tau_2 - \tau_1))^2\big)/E_0(\tau_2 - \tau_1)$.