Random Walk on Random Walks

In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density $\rho \in (0,\infty)$. At each step the random walk performs a nearest-neighbour jump, moving to the right with probability $p_{\circ}$ when it is on a vacant site and probability $p_{\bullet}$ when it is on an occupied site. Assuming that $p_\circ \in (0,1)$ and $p_\bullet \neq \tfrac12$, we show that the position of the random walk satisfies a strong law of large numbers, a functional central limit theorem and a large deviation bound, provided $\rho$ is large enough. The proof is based on the construction of a renewal structure together with a multiscale renormalisation argument.


Introduction and main results
Background. Random motion in a random medium is a topic of major interest in mathematics, physics and (bio-)chemistry. It has been studied at microscopic, mesoscopic and macroscopic levels through a range of different methods and techniques coming from numerical, theoretical and rigorous analysis.
Since the pioneering work of Harris [14], there has been much interest in studies of random walk in random environment within probability theory (see [17] for an overview), both for static and dynamic random environments, and a number of deep results have been proven for various types of models.
In the case of dynamic random environments, analytic, probabilistic and ergodic techniques were invoked (see e.g. [1], [4], [7]-[11], [12], [13], [19], [27], [28]), but good mixing assumptions on the environment remained a pivotal requirement. By good mixing we mean that the decay of space-time correlations is sufficiently fast (polynomial with a sufficiently large degree) and uniform in the initial configuration. More recently, examples of dynamic random environments with non-uniform mixing have been considered (see e.g. [15], [24], [6], [2]). However, in all of these examples either the mixing is fast enough (despite being non-uniform), or the mixing is slow but strong extra conditions on the random walk are required.
In this context, random environments consisting of a field of random walks moving independently gained significance, not only due to an abundance of models defined in this setup, but also due to the substantial mathematical challenges that arise from their study. Among various conceptual and technical difficulties, slow mixing (in other words, slow convergence of the environment to its equilibrium as seen from the walk) makes the analysis of these systems extremely difficult. In particular, in physical terms, when ballistic behaviour occurs the motion of the walk is of "pulled type" (see [31]).
In this paper we consider a dynamic random environment given by a system of independent random walks. More precisely, we consider a walk particle that performs a discrete-time motion on Z under the influence of a field of environment particles which themselves perform independent discrete-time simple random walks. As initial state for the environment particles we take an i.i.d. Poisson random field with mean ρ ∈ (0, ∞). This makes the dynamic random environment invariant under translations in space and time. The jumps of the walk particle are drawn from two different random walk transition kernels on Z, depending on whether the space-time position of the walk particle is occupied by an environment particle or not. For reasons of exposition we restrict to nearest-neighbour kernels, but our analysis easily extends to the case where the kernels have finite range. Subsequently, let all the environment particles evolve independently as "lazy simple random walks" on Z, i.e., at each unit of time the probability to step −1, 0, 1 equals ½(1 − q), q, ½(1 − q), respectively, for some q ∈ (0, 1). The assumption of laziness is not crucial for our arguments, as explained in Comment 5 below.
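To fix ideas, the dynamics just described can be sketched in a few lines of code (all parameter values are hypothetical, and the restriction of the Poisson field to a finite window is an approximation made only for illustration):

```python
import random

def simulate(rho=2.0, p_occ=0.7, p_vac=0.3, q=0.5, steps=200, window=400, seed=0):
    """Sketch: a walker on Z driven by a field of lazy simple random walks
    whose time-zero positions come from i.i.d. Poisson(rho) counts on
    [-window, window] (a finite-window approximation of the model)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method, adequate for small lam
        L, k, p = 2.718281828459045 ** (-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    # positions of all environment particles at time 0
    particles = [z for z in range(-window, window + 1) for _ in range(poisson(rho))]

    X, path = 0, [0]
    for _ in range(steps):
        occupied = X in set(particles)      # is the walker's site occupied?
        p_right = p_occ if occupied else p_vac
        X += 1 if rng.random() < p_right else -1
        path.append(X)
        # each environment particle performs one lazy simple random walk step
        particles = [z + (0 if rng.random() < q else rng.choice((-1, 1)))
                     for z in particles]
    return path
```

The walker's trajectory is nearest-neighbour by construction, mirroring the kernels described above.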
Let T be the set of space-time points covered by the trajectory of at least one environment particle. The law of T is denoted by P_ρ (see Section 2.1 for a detailed construction of the dynamic environment and the precise definition of T). Note that T does not have good mixing properties. Indeed,
Cov_ρ(1_{(0,0)∈T}, 1_{(0,n)∈T}) ∼ c(ρ) n^{−1/2}, (1.1)
where Cov_ρ denotes covariance with respect to P_ρ (see (2.9)-(2.11) in Section 2.1). Given T, let X = (X_n)_{n∈Z_+} be the nearest-neighbour random walk on Z starting at the origin and with transition probabilities
P_T(X_{n+1} = X_n + 1 | X_0, …, X_n) = p_• if (X_n, n) ∈ T and p_∘ if (X_n, n) ∉ T, with X_{n+1} = X_n − 1 otherwise, (1.2)
where p_∘, p_• ∈ [0, 1] are fixed parameters and P_T stands for the law of X conditional on T, called the quenched law. The annealed law is given by P_ρ(·) = ∫ P_T(·) P_ρ(dT). We write v_∘ = 2p_∘ − 1 and v_• = 2p_• − 1 for the drifts at vacant and occupied sites, respectively. Following the terminology established in the literature on random walks in static random environments (see [32], [33]), we classify our model as follows.
Definition 1.1. The model is said to be non-nestling when v_∘ v_• > 0. Otherwise it is said to be nestling.
We are now in the position to state our main results.
Theorem 1.2. In the non-nestling case, as well as in the nestling case with ρ large enough, there exist v ∈ R and σ ∈ (0, ∞) such that the following hold.
(a) P_ρ-a.s.,
lim_{n→∞} X_n/n = v. (1.4)
(b) Under P_ρ, the sequence of random processes
((X_{nt} − ntv)/(n^{1/2} σ))_{t≥0}, n ∈ N, (1.5)
converges in distribution (in the Skorohod topology) to the standard Brownian motion.
(c) For every ε > 0 there exist c > 0 and γ > 1 such that
P_ρ(|X_n − nv| ≥ εn) ≤ c^{−1} e^{−c log^γ n} ∀ n ∈ N. (1.6)
The difference between the nestling and the non-nestling case can be seen in the statement of Theorem 1.2: in the non-nestling case we can prove (a)-(c) for any ρ ≥ 0, in the nestling case only for ρ ≥ ρ⋆, where ρ⋆ needs to be large enough.
Theorem 1.2 will be obtained as a consequence of Theorems 1.4-1.5 and Remark 1.6 below. Before stating them, we give the following definition, which is central to our analysis.
Definition 1.3. For fixed v_∘, v_•, ρ and a given v′ ∈ [−1, 1], we say that the v′-ballisticity condition holds when there exist c = c(v_∘, v_•, v′, ρ) > 0 and γ = γ(v_∘, v_•, v′, ρ) > 1 such that
P_ρ(∃ n ∈ N : X_n < nv′ − L) ≤ c^{−1} e^{−c log^γ L} ∀ L ∈ N. (1.7)
Condition (1.7) is reminiscent of ballisticity conditions in the literature on random walks in static random environments, such as Sznitman's (T)-condition (see [32]). The next theorem shows that, if the model satisfies (1.7) with v′ > 0 as well as an ellipticity condition, then the asymptotic results stated in Theorem 1.2 hold.
The proofs of Theorems 1.4 and 1.5 are given in Sections 4 and 3, and rely, respectively, on the construction and control of a renewal structure for the random walk trajectory and on a multiscale renormalisation scheme. The latter is used to show that the random walk stays to the right of a point that moves at a strictly positive speed. The former is used to show that, as a consequence of this ballistic behaviour, the random walk has a tendency to outrun the environment particles, which only move diffusively, and to enter "fresh territory" containing particles it has never encountered before. The random walk trajectory is therefore a concatenation of "large independent random pieces", and this forms the basis on which the limit laws in Theorem 1.2 can be deduced (after appropriate tail estimates). None of these techniques is new in the field, but in the context of slowly mixing dynamic random environments they are novel and open up gates to future advances.

Comments.
1. It follows from Theorem 1.5 (and reflection symmetry) that, for ρ large enough, the speed v in (1.4) has the same sign as v_•. This can also be deduced from an asymptotic weak law of large numbers derived in [16]. In fact, [16] considers the version of our model in Z^d, d ≥ 1, in continuous time and with more general transition kernels.

2.
It can be shown that the asymptotic speed v and the variance σ² in Theorem 1.2 can be expressed in terms of the regeneration times constructed in Section 4 (cf. the proof of Theorem 1.4).

3.
We expect Theorem 1.2 to hold when v_∘ ≠ 0, v_• = −sign(v_∘) and ρ is small. In the non-nestling case, this already follows (for any ρ ≥ 0) from Theorem 1.4, but in the nestling case we would have to prove the analogue of Theorem 1.5 for v_• < v_∘ and ρ small.

4.
Our techniques can potentially be extended to higher dimensions. The restriction to the one-dimensional setting simplifies the notation and allows us to avoid certain technicalities.

5.
Our dynamic random environment is composed of lazy random walks evolving in discrete time. This assumption was made for convenience in order to simplify some technical steps. However, as discussed in Remark C.4, our analysis can be extended to symmetric random walks with bounded steps that are aperiodic or bipartite (in the sense of [20]), or that evolve in continuous time.

6.
It is a challenge to extend Theorem 1.2 to other environments, in particular to environments where the particles are allowed to interact with each other. The renormalisation scheme is robust enough to show that the ballisticity condition (1.7) holds as long as the environment satisfies a mild decoupling inequality (see Section 3.5 for specific examples).
On the other hand, the regeneration structure is more delicate, and uses model-specific features in an important way. A recent development can be found in [18], where the authors consider as dynamic environment the simple symmetric exclusion process in the regimes of fast or slow jumps. While their proofs differ significantly from ours, their methods are close in spirit. It remains an interesting question to determine how far both methods can be pushed (see also Section 4.4).

7.
After the preprint version of the present article appeared, it was brought to our attention that a regeneration argument similar to the one used to prove Theorem 1.4 appears in [3], where the authors consider a different model in the same random environment (in continuous time). We believe, however, that our approach is simpler and is better suited to the ballisticity condition (1.7) (which is mildly stronger than the one used in [3]).
Organization of the paper. In Section 2 we give a graphical construction of our random walk in dynamic random environment. This construction will be convenient during the proofs of our main results. In Section 3 we set up a renormalisation scheme, and use this to show that, for large densities of the particles, the random walk moves with a positive lower speed to the right. This lower speed of the random walk plays the role of a ballisticity condition and is crucial in Section 4, where we introduce a random sequence of regeneration times at which the random walk "refreshes its outlook on the random environment", and show that these regeneration times have a good tail. In Section 4.3 the regeneration times are used to prove Theorem 1.4. Appendices A-E collect a few technical facts that are needed along the way.

Preliminaries
In this section we give a particular construction of our model, supporting a Poisson point process on the space of two-sided trajectories of environment particles (Section 2.1) and an i.i.d. sequence of Uniform([0, 1]) random variables that are used to define our random walk (Section 2.2). This formulation is equivalent to that given in Section 1, but has the advantage of providing independence and monotonicity properties that are useful throughout the paper (see Definitions 2.1-2.2 and Remark 2.3 below).
Throughout the sequel, c denotes a positive constant that may depend on v_∘, v_• and may change each time it appears. Further dependence will be made explicit: for example, c(η) is a constant that depends on η and possibly on v_∘, v_•. Numbered constants c_0, c_1, … refer to their first appearance in the text and also depend only on v_∘ and v_• unless otherwise indicated.

Dynamic random environment
Let S = ((S^{z,i}_n)_{n∈Z})_{z∈Z, i∈N} be a doubly-indexed collection of independent lazy simple random walks such that S^{z,i}_0 = z for all i ∈ N. By this we mean that the past (S^{z,i}_n)_{n∈Z_−} and the future (S^{z,i}_n)_{n∈Z_+} are independent and distributed as symmetric lazy simple random walks as described in Section 1.
Let (N(z, 0))_{z∈Z} be a sequence of i.i.d. random variables independent of S. Then the process N(·, n) defined by
N(x, n) = Σ_{z∈Z} Σ_{i=1}^{N(z,0)} 1{S^{z,i}_n = x} (2.2)
(with 0 assigned to empty sums) is a translation-invariant Markov process representing the number of environment particles at site x and time n. For any density ρ > 0, the process N is in equilibrium when we choose the distribution of N(·, 0) to be product Poisson(ρ). Denote by P_ρ the joint law of N(·, 0) and S in this case. It will be useful to view N as a subprocess of a Poisson point process on a space of trajectories, as follows. Let W be the space of two-sided trajectories w : Z → Z with steps in {−1, 0, 1}, let W_x = {w ∈ W : w(0) = x}, and let P_x be the law on W, with support on W_x, under which Z(·) = (Z_n(·))_{n∈Z} is distributed as a lazy simple random walk on Z. Moreover, set μ = Σ_{x∈Z} P_x. Now,
μ(w(0) = 0 or w(n) = 0) = μ(w(0) = 0) + μ(w(n) = 0) − μ(w(0) = 0, w(n) = 0) = 1 + 1 − P_0(Z_n = 0) = 2 − P_0(Z_n = 0), (2.10)
where we used the symmetry of Z. Hence, (2.11) follows.
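For completeness, here is a sketch of how the computation in (2.10) yields the covariance decay announced in (1.1), using that vacancy events are void events of a Poisson point process with intensity ρμ (the constants are indicative only):

```latex
\begin{align*}
\mathbb{P}_\rho\big((0,0)\notin\mathcal{T}\big)
  &= e^{-\rho\,\mu(w(0)=0)} = e^{-\rho},\\
\mathbb{P}_\rho\big((0,0)\notin\mathcal{T},\,(0,n)\notin\mathcal{T}\big)
  &= e^{-\rho\,\mu(w(0)=0\ \text{or}\ w(n)=0)}
   = e^{-\rho\,(2-P_0(Z_n=0))},\\
\operatorname{Cov}_\rho\big(\mathbf{1}_{(0,0)\in\mathcal{T}},
  \mathbf{1}_{(0,n)\in\mathcal{T}}\big)
  &= e^{-2\rho}\big(e^{\rho P_0(Z_n=0)}-1\big)
   \;\sim\; \rho\, e^{-2\rho}\, P_0(Z_n=0),
\end{align*}
```

and since P_0(Z_n = 0) ∼ c n^{−1/2} for the lazy simple random walk, this gives the n^{−1/2} decay in (1.1).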

Random walk in dynamic random environment
In order to define the random walk on our dynamic random environment, we first enlarge the probability space. To that end, consider a collection of i.i.d. random variables U = (U_y)_{y∈Z²}, independent of the previous objects, with each U_y uniformly distributed on [0, 1]. Set Ω = Ω̄ × [0, 1]^{Z²}, and redefine P_ρ to be the probability measure giving the joint law of N(·, 0), S and U. Given a realisation of ω and U and y ∈ Z², define the random variables Y^y_n, n ∈ Z_+, by
Y^y_0 = y, Y^y_{n+1} = Y^y_n + (2·1{U_{Y^y_n} ≤ p(Y^y_n)} − 1, 1), where p(z) = p_• if z ∈ T(ω) and p(z) = p_∘ otherwise. (2.12)
In words, Y^y = (Y^y_n)_{n∈Z_+} is the space-time process on Z² that starts at y, always moves upwards, and is such that its horizontal projection X^y = (X^y_n)_{n∈Z_+} is a random walk with drift v_• = 2p_• − 1 when Y^y steps on T(ω) and drift v_∘ = 2p_∘ − 1 otherwise. Note that Y^y depends on T(ω), but this will be suppressed from the notation. Also note that, for any y ∈ Z², the law of X^y under P_ρ coincides with the annealed law described in Section 1. So from now on X = X^0 will be the random walk in dynamic random environment that we will consider. We may also write Y_n to denote Y^0_n.
Definition 2.1. For ω, ω′ ∈ Ω̄, we say that ω ≤ ω′ when T(ω) ⊂ T(ω′). We say that a random variable f : Ω → R is non-increasing when f(ω′, ξ) ≤ f(ω, ξ) for all ω ≤ ω′ and all ξ ∈ [0, 1]^{Z²}. We extend this definition to events A in Ω by considering f = 1_A. Standard coupling arguments imply that E_{ρ′}(f) ≤ E_ρ(f) for all non-increasing random variables f and all ρ ≤ ρ′.
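The monotonicity in Definition 2.1 can be checked on a toy implementation of the construction: if p_• ≥ p_∘, enlarging T can only push the walker to the right, because both walkers read the same uniforms U and their positions always share the same parity. A minimal sketch (all parameters hypothetical):

```python
import random

def walk(occupied, U, p_occ=0.8, p_vac=0.4, steps=50):
    """X steps right iff U at the current space-time point is <= p, where p
    depends on whether (X_n, n) lies in the occupied set T (cf. (2.12))."""
    X = 0
    for n in range(steps):
        p = p_occ if (X, n) in occupied else p_vac
        X += 1 if U[(X, n)] <= p else -1
    return X

rng = random.Random(1)
U = {(x, n): rng.random() for x in range(-60, 61) for n in range(50)}
T_small = {(x, n) for (x, n) in U if rng.random() < 0.2}
T_big = T_small | {(x, n) for (x, n) in U if rng.random() < 0.3}  # T_small ⊂ T_big
assert walk(T_small, U) <= walk(T_big, U)  # monotonicity in the environment
```

The assertion holds by the inductive argument sketched above: the two positions keep equal parity, so they can only cross by first coinciding, and at a common point the walk in the larger environment steps right whenever the other one does.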

Definition 2.2.
We say that a random variable f : Remark 2.3. The above construction provides two forms of monotonicity: (2.14)

Proof of Theorem 1.5: Renormalisation
In this section we prove Theorem 1.5, which establishes the validity of the ballisticity condition (1.7) when v_∘ < v_• and ρ is large enough. This will be crucial for the proof of Theorem 1.4 later. In Section 3.2 we introduce the required notation. In Section 3.3 we devise a renormalisation scheme (Lemmas 3.2-3.3) to show that under a "finite-size criterion" the random walk moves ballistically, and we prove that for large enough ρ this criterion holds (Lemma 3.4). In Section 3.4 we show that the renormalisation scheme yields the large deviation bound in Theorem 1.5 (Lemma 3.5). This bound will be needed in Section 4, where we show that, as the random walk explores fresh parts of the dynamic random environment, it builds up a regeneration structure that serves as a "skeleton" for the proof of Theorem 1.4. In Section 3.5 we comment on possible extensions.

Space-time decoupling
In order to implement our renormalisation scheme, we need to control the dependence of events having support in two boxes that are well separated in space-time. This is the content of the following corollary of Theorem C.1, the proof of which is deferred to Appendix C.
where per(B_1) stands for the perimeter of B_1.
The decoupling in Corollary 3.1, together with the monotonicity stated in Definition 2.1, are the only assumptions on our dynamic random environment that are used in the proof of Theorem 1.5. Hence, the results in this section can in principle be extended to different dynamic random environments. (See Section 3.5 for more details.)

Scale notation
Define recursively a sequence of scales (L_k)_{k∈Z_+} by putting L_0 = 100 and L_{k+1} = ⌊L_k^{1/2}⌋ L_k. (The choice L_0 = 100 has no special importance: any integer ≥ 4 will do, as long as it stays fixed.) Note that this sequence grows super-exponentially fast. Let M_k (see (3.5)) denote the set of all indices m whose corresponding shift B_{L_k}(m) of the rectangle B_{L_k} still intersects the larger rectangle B_{L_{k+1}} = B_{L_{k+1}}(0, 0). Since Σ_{k∈N} 1/k² = π²/6, it follows that k ↦ v_k decreases strictly to v. The reason why we introduce a speed for each scale k is to allow for small errors as we change scales.
(The need for this "perturbation" will become clear in (3.15) below.) We are interested in bounding the probability of bad events A_k(m) on which the random walk does not move to the right with speed at least v_k. Note that A_k(m) is defined in terms of the dynamic random environment and the random variables U; for each k and m, the random variable 1_{A_k(m)} is non-increasing in the sense of Definition 2.1.
Define recursively a sequence (ρ_k)_{k∈Z_+} of densities. Again, we introduce a density for each scale k in order to allow for small errors. (The need for this "sprinkling" will become clear in (3.18) below.) Observe that ρ_k increases strictly to the density ρ defined in (3.10). Finally, define
p_k = P_{ρ_k}(A_k(0)) = P_{ρ_k}(A_k(m)), (3.11)
where the last equality holds for all m ∈ Z² because of translation invariance.
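As a quick numerical illustration of the super-exponential growth of the scales (the recursion L_{k+1} = ⌊L_k^{1/2}⌋ L_k used below is an assumption, consistent with the L_k^{1/2} layers of height L_k appearing in the proof of the recursion inequality):

```python
from math import isqrt

L = [100]                           # L_0 = 100
for _ in range(5):
    L.append(isqrt(L[-1]) * L[-1])  # assumed recursion L_{k+1} = ⌊√L_k⌋ L_k
print(L)                            # grows roughly like L_k ~ L_0^{(3/2)^k}
```

Already L_4 exceeds 10^10, which is why only a handful of scales ever matter in practice.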

Estimates on p k
Lemmas 3.2-3.4 below show that p k decays very rapidly with k when ρ 0 is chosen large enough.
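The reason such a recursion forces rapid decay can be seen schematically: an inequality of the form p_{k+1} ≤ C p_k² (the actual inequality (3.12) has scale-dependent constants; the constant-C form below is an illustrative assumption) turns a small p_0 into doubly exponential decay, which is what choosing ρ_0 large buys us.

```python
def iterate(p0, C=10.0, kmax=6):
    """Schematic: p_{k+1} = C * p_k**2. If C*p0 < 1, then
    C*p_k = (C*p0)**(2**k), i.e. doubly exponential decay."""
    p, out = p0, [p0]
    for _ in range(kmax):
        p = C * p * p
        out.append(p)
    return out

ps = iterate(1e-3)
assert all(b < a for a, b in zip(ps, ps[1:]))  # strictly decreasing
```

Starting from p_0 = 10^{−3} with C = 10, the iterates drop below 10^{−9} after two steps already.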
The first step is to prove a recursion inequality that relates p_{k+1} to p_k.
Proof. Let k_0 = k_0(δ) be a non-negative integer such that the scale inequality below holds for all k ≥ k_0(δ). The existence of k_0 follows from the fact that L_k increases faster than exponentially in k.
We begin by claiming (3.14). The proof is by contradiction. Suppose that the claim is false. Then there are at most two elements m = (r, s), m′ = (r′, s′) in M_k, with s ≠ s′, such that B_{L_k}(m) and B_{L_k}(m′) are slow boxes. It then follows that, for any (x, n) ∈ I_{L_{k+1}}, the displacement of the walk can be bounded from below by a sum whose terms correspond to the displacements over the L_k^{1/2} time layers of height L_k in the box B_{L_{k+1}}. The term −2L_k appears in the right-hand side of the first inequality because there are at most two layers (associated with the two slow boxes mentioned above) in which the total displacement of the random walk is at least −L_k, since the minimum speed is −1. The second inequality uses that v_k ≤ 1.
By the definition of k_0(δ) we get a bound which, substituted into (3.15), shows that A_{k+1}(0) cannot occur. This proves the claim (3.14). Thus, on the event A_{k+1}(0), we may assume that there exist m_1 = (r_1, s_1) and m_3 = (r_3, s_3) in M_k such that s_3 ≥ s_1 + 2, meaning that the vertical distance between B_{L_k}(m_3) and B_{L_k}(m_1) is at least L_k. It then follows from Corollary 3.1, and from the fact that the events A_k(m) are non-increasing, that p_{k+1} satisfies the desired bound, where per(B_{L_k}) denotes the perimeter of B_{L_k}; the last inequality uses the definition of k_0, which completes the proof of (3.12).
Next, we prove a recursive estimate on p k .
which is possible because lim_{k→∞} L_k = ∞. Dividing (3.21) by e^{−log^{3/2} L_{k+1}}, recalling from (3.2) that L_{k+1} ≤ L_k^{3/2} and using (3.22), we get the claim.
Proof. Recall (3.3), (3.8) and (3.11). Recall also from (2.2) that N(x, n) denotes the number of particles in our dynamic random environment that cross (x, n) (i.e., N(x, n) = ω({w ∈ W : w(n) = x})), and let C_k be the event that all space-time points in B_{L_k} are occupied by a particle. Estimate by splitting according to C_k as in (3.25). The first term in the right-hand side of (3.25) can be estimated from above using (3.7). On the event C_k, all the space-time points of B_{L_k} in the dynamic random environment are occupied, and so the law of (X^0_n)_{0≤n≤L_k} coincides with that of a nearest-neighbour random walk with drift v_• starting at 0. Therefore, by an elementary large deviation estimate, we have (3.27) independently of the choice of ρ_0. We can therefore choose k_2 = k_2(δ) large enough so that (3.28) holds. Having fixed k_2, we next turn our attention to the second term in the right-hand side of (3.25). Recalling that, under P_{ρ_{k_2}}, the random variables (N(x, n))_{x∈Z} are i.i.d. Poisson(ρ_{k_2}), we can bound the probability that some space-time point of B_{L_{k_2}} is vacant. Since this tends to zero as ρ_0 → ∞ (recall (3.9)), we can take ρ_0 large enough so that (3.29) holds. Combine (3.25), (3.28) and (3.29) to get the claim.
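The elementary large deviation estimate invoked in (3.27) is a binomial tail bound. Here is a quick self-contained check, with Hoeffding's inequality standing in for whatever precise bound is used in the paper (all parameters illustrative):

```python
from math import comb, exp

def left_tail(n, p, a):
    """P(X_n <= a*n) for X_n = 2*Bin(n, p) - n, i.e. a nearest-neighbour
    random walk whose steps are +1 with probability p (drift v = 2p - 1)."""
    kmax = int((a * n + n) // 2)  # X_n <= a*n  <=>  Bin(n, p) <= (a + 1)*n/2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(kmax + 1))

n, p, delta = 200, 0.7, 0.2            # drift v = 0.4; look below v - delta
exact = left_tail(n, p, 2 * p - 1 - delta)
hoeffding = exp(-n * delta**2 / 2)     # Hoeffding bound for the same event
assert exact <= hoeffding
```

The exact tail is already an order of magnitude below the Hoeffding bound at n = 200, consistent with exponential decay in n.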

Large deviation bounds
Together with Lemmas 3.3-3.4, the following lemma will allow us to prove Theorem 1.5.
We next define the set of indices J_ǩ (see Fig. 2) and consider the event B_ǩ. This event has high probability. Indeed, according to our hypothesis on the decay of p_k, the probability of its complement can be bounded by a sum over scales, where in the fourth inequality we use Lemma D.1, while in the last inequality we use (3.2) and (3.32) and that 2L_ǩ < L. It is therefore enough to show that the event in (3.31) is contained in B_ǩ^c.
We claim that, on the event B_ǩ, (3.37) holds. To see why this is true, fix some k ≥ ǩ as in the definition of J_ǩ. It is clear that the inequality holds for l = 0. Suppose by induction that X_{lL_k} ≥ v l L_k for some l ≤ L_{k+2}/L_k.

Observe that Y_{lL_k} belongs to some box B_k(m) with m ∈ M_k ⊂ M_ǩ. It even belongs to the corresponding interval I_{L_k}(m) as defined in (3.5). Since we are on the event A_k(m)^c, this implies that the bound in (3.37) holds for l + 1. Since this can be done for any k ≥ ǩ, we have proven (3.37) by induction.
We now interpolate the statement in (3.37) to all times n ≥ 2L_{ǩ+2} (> L_{ǩ+2} + L_ǩ). More precisely, we will show that, on the event B_ǩ, (3.39) holds. Indeed, given such an n ≥ L_{ǩ+2} + L_ǩ, we fix k̄ to be the smallest k satisfying (3.40), and we write l̄ for the corresponding unique value of l. Putting the above pieces together, we can estimate as in (3.42), where the first inequality uses (3.37), k̄ ≥ ǩ and the definition of l̄, the second inequality uses that l̄ > 2/ε, the third inequality uses that v − ε ≤ 1 and, for the fourth inequality, we use (3.40), considering separately the cases v − ε ≥ 0 and v − ε < 0. This proves (3.39).
To complete the proof, we observe that, since X is Lipschitz, having (as in (3.39)) X_n ≥ (v − ε)n for all n ≥ 2L_{ǩ+2}, we get X_n ≥ (v − ε)n − 2L_{ǩ+2} ≥ (v − ε)n − L for all n ∈ Z_+. Thus, we have proved that the event appearing in the right-hand side of (3.31) is contained in B_ǩ^c, so that its probability is bounded as in (3.35).
Proof of Theorem 1.5. Apply the construction above with the speed v replaced by v + ε, let ρ_0 be large enough to satisfy Lemma 3.4, and take ρ as in (3.10). Recalling that X_n is the horizontal projection of Y_n and using monotonicity, we see that Lemmas 3.3-3.5 prove the large deviation bound in Theorem 1.5.
Remark 3.6. Note that the speed in Lemma 3.5 cannot be chosen arbitrarily close to the speed given by the law of large numbers in (1.4). What we have obtained is that for any speed strictly below v_• there exists a density ρ_0 such that (3.31) holds for all ρ ≥ ρ_0.

Extensions
The ballisticity statement in Theorem 1.5 holds under mild conditions on the underlying dynamic random environment. Indeed, the only assumptions we have made on the law of T are:
(i) The monotonicity stated in Definition 2.1 (see (3.18)).
(ii) The space-time decoupling stated in Corollary 3.1.
(iii) The perturbative condition lim_{ρ→∞} P_ρ[0 ∈ T] = 1 (used to trigger (3.29)).
Let us elaborate a bit more on the space-time decoupling condition given by Corollary 3.1. This condition was designed with our particular dynamic random environment in mind, which lacks good relaxation properties. However, several dynamic random environments satisfy the simpler and stronger condition (3.44), in which the constants are not allowed to depend on ρ, since the triggering of (3.29) is done after the induction inequality of Lemma 3.3. The condition in (3.44) holds, for instance, when the dynamic random environment has a spectral gap that is bounded from below for ρ large enough.
Such a property can be obtained for a variety of reversible dynamics with the help of techniques from Liggett [21].
The contact process. It can be shown that (3.44) holds for the supercritical contact process for non-increasing f_1, f_2, uniformly in infection parameters that are bounded away from the critical threshold. A proof can be developed using the graphical representation (see e.g. Remark 3.7 in [15]) and the strategy of Theorem C.1. Note, however, that the results in [15] already imply stronger results for the large deviations of the random walk in the regime of large infection parameter, namely, (1.6) with exponential decay.
Renewal chains. For each site x ∈ Z, we produce an independent copy (N(x, n))_{n∈Z_+} of the above Markov chain. Denote by P_ν the law of one chain started from the probability distribution ν. We define as a dynamic random environment the field given by these chains when starting from the stationary distribution q. We fix ρ ≥ 0 and set T = {(x, n) : N(x, n) < ρ}, so that we can define the random walk (Y_n)_{n∈Z_+} as in (2.12).
In order to prove Corollary 3.1 for this dynamic random environment, we would like to couple two renewal chains (N(0, n))_{n∈Z_+} and (Ñ(0, n))_{n∈Z_+}, started, respectively, from δ_0 and q, in such a way that they coalesce at a random time T. Using Proposition 3 of [22], we obtain such a coupling with E_{δ_0,q}[exp[T^{1/8}]] < ∞ (note that p is aperiodic, i.e., gcd(supp(p)) = 1).
We then estimate (3.47), where in the last inequality we use the definition of q and the Markov inequality for exp[T^{1/8}]. Repeating this for every chain (N(x, n))_{n∈Z_+} with x ∈ [a, b], we prove (3.44) for T with κ = 1/8. It is clear that lim_{ρ→∞} P[0 ∉ T] = 0. Thus, the conclusion of Theorem 1.5 holds for the dynamic random environment T.
In fact, also Theorem 1.4 holds in this case, as a simple regeneration strategy can be found; see Section 4.4. As a consequence, the statements of Theorem 1.2 are true for this example.
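The coalescing-coupling step above can be mimicked on a generic finite-state chain (the birth-death chain and the coupling below are hypothetical stand-ins; the actual renewal chains and the bound E[exp[T^{1/8}]] < ∞ come from Proposition 3 of [22]):

```python
import random

def step(state, rng):
    """A hypothetical aperiodic lazy birth-death chain on {0, ..., 9}."""
    if state == 0:
        return rng.choice((0, 1))
    if state == 9:
        return rng.choice((8, 9))
    return rng.choice((state - 1, state, state + 1))

def coalescence_time(rng, burn_in=1000):
    a = 0                            # copy started from delta_0
    b = 0
    for _ in range(burn_in):         # copy started ~ stationary (approximate)
        b = step(b, rng)
    t = 0
    while a != b:                    # independent moves until the copies meet
        a, b = step(a, rng), step(b, rng)
        t += 1
    return t                         # afterwards the two copies are glued

rng = random.Random(7)
times = [coalescence_time(rng) for _ in range(200)]
assert max(times) < 10_000           # coalescence happens quickly here
```

After the meeting time the two copies can be run together, which is the mechanism that lets decoupling estimates such as (3.44) be proved site by site.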

Remark 3.7.
Observe that T is not uniformly mixing. Indeed, given any n ∈ Z_+, we can start our Markov chain from events of positive probability (say, N(0, 0) = 2n) such that the information at time zero is not forgotten until time n.

Proof of Theorem 1.4: Regeneration
In this section, we state and prove two results about regeneration times (Theorems 4.1-4.2) that are then used to prove Theorem 1.4 in Section 4.3. A discussion about extensions is given in Section 4.4.
In Section 4.1 we introduce some additional notation in order to define our regeneration time. This definition is made in a non-algorithmic way and does not immediately imply that the regeneration time is finite with probability 1. Nonetheless, on the event where it is finite, we are able to show in Theorem 4.1 that a renewal property holds for the law of the random walk path. The next step is to prove Theorem 4.2, which shows that the regeneration time not only is a.s. finite but also has a very good tail. This is accomplished by finding a suitable upper bound, which consists of two main steps. First, we define what we call good record times and show that these appear very frequently (Proposition 4.6). This is done in an algorithmic fashion, but only by exploring the system locally at each step. Second, we show that, outside a global event of small probability, if we can find a good record time then we can also find nearby an upper bound for the regeneration time.
For y ∈ Z², define the sigma-algebras and note that these are jointly independent under P. Also define the corresponding sigma-algebras and the notation needed below. Next, define the record times R_k, i.e., R_k is the time when the walk first enters the cone ∠_k = ∠((1 − v̄)k, 0). Note that, for any k ∈ N, y ∈ ∠_k if and only if y + (1, 1) ∈ ∠_{k+1}. Thus R_{k+1} ≥ R_k + 1, and X_{R_k+1} − X_{R_k} = 1 if and only if R_{k+1} = R_k + 1. Define a filtration F = (F_k)_{k∈N} by setting F_∞ = σ(ω(A) : A ∈ W) ∨ σ(U_y : y ∈ Z²) and taking F_k as in (4.9). Finally, define the event A^y in which the walker Y^y remains inside the cone ∠(y), and the probability measure P_∠(·) = P(· | {ω(W_0) = 0} ∩ A_0).

Regeneration theorems
The following two theorems are our key results for the regeneration times.

Theorem 4.2.
There exists a constant c_4 > 0 such that
E[e^{c_4 log^γ τ}] < ∞ (4.14)
and the same holds under P_∠.

Proof of Theorem 4.1
Proof. First we observe that, for all k ∈ N and all bounded measurable functions f, (4.16) holds, since the joint distribution of f((Y^y_i)_{i≥0}), A^y and ω(W^y) under P does not depend on y.
By summing (4.16) over y ∈ Z², we get (4.15). Next, let F_τ be the sigma-algebra of events before time τ, i.e., the set of all events B ∈ F_∞ such that, for each k ∈ N, there exists a B_k ∈ F_k with B ∩ {I = k} = B_k ∩ {I = k}. Note that τ and (Y_i)_{0≤i≤τ} are measurable with respect to F_τ. Let Γ_k = {ω(W^{Y_{R_k}}) = 0} ∩ A^{Y_{R_k}}, and note that for each 0 ≤ k ≤ n ∈ N there exists a D_{k,n} ∈ F_n such that Γ_k ∩ Γ_n = D_{k,n} ∩ Γ_n. In particular, there exists a C_n ∈ F_n with the analogous property. Thus, for B ∈ F_τ and f bounded measurable, we may write (4.20), which proves the statement under P(·). To extend the result to P_∠ = P(· | Γ_0), note that Γ_0 ∈ F_τ because Γ_0 ∩ Γ_n = D_{0,n} ∩ Γ_n with D_{0,n} ∈ F_n, and so we may apply (4.20) to B ∩ Γ_0.

Proof of Theorem 4.2
In what follows the constants may depend on v_∘, v_•, v and ρ. We begin with a few preliminary lemmas.
Define the influence field at a point y ∈ Z² as h(y) = inf{l ∈ Z_+ : ω(W^y ∩ W^{y+(l,l)}) = 0}. Proof. By translation invariance, it is enough to consider the case y = 0, and we use the definition of h(0). Recall from Section 2.1 that P_x stands for the law on W_x under which the family (Z_n)_{n∈Z} given by Z_n(w) = w(n) is distributed as a two-sided simple random walk starting at x. We write y = (x, n) and y′ = (x′, n′), use translation invariance of μ, and use Azuma's inequality. Combining (4.24)-(4.25) and noting that n′ − n ≤ (x′ − x)/v̄, we get the desired bound. For fixed x = −k, there are at most k/v̄ space-time points (x, n) ∈ ∠(0, 0). Analogously, for fixed x′ = k + l, there are at most (k + 1)/v̄ space-time points (x′, n′) ∈ ∠(l, l). Define the local influence field h_T(y) at y = (x, n) by truncating h at distance T, where c_5, c_6 are the same constants as in Lemma 4.3.
Proof. The result follows from Lemma 4.3 by noting that h_T(y) is independent of F_{y−((1−v̄)T,0)} and smaller than h(y).
We say that R_k is a good record time (g.r.t.) when (4.33) holds. Note that, when (4.33) occurs, Y_{R_k} + (T, T) = Y_{R_{k+T}} (see Fig. 4).
The idea is that, when R_k is a good record time, R_{k+T} is likely to be an upper bound for the regeneration time. In Proposition 4.6 below we will show that, when many records are made, with high probability good record times occur. First, we need an additional lemma. For y ∈ Z², let κ(y) denote the index of the last cone containing y; note that κ(Y_{R_k}) = k. Then define, for t ∈ N, the space-time parallelogram
P_t(y) = (∠(y) \ ∠_{κ(y)+t}) ∩ (y + {(x, n) ∈ Z² : n ≤ t/v̄}) (4.37)
and its right boundary ∂_+P_t(y). We say that "Y^y exits P_t(y) through the right" when the first time i at which Y^y_i ∉ P_t(y) satisfies Y^y_i ∈ ∂_+P_t(y). Note that, when y = Y_{R_k}, this implies Y^y_i = Y_{R_{k+t}}.
Lemma 4.5. There exists a constant c_7 > 0 such that, for all t ∈ N large enough,
P(Y^y exits P_t(y) through the right | F_y) ≥ c_7 P-a.s.
∀ y ∈ Z². (4.39)
Proof. If v_∘ ≥ v_•, then the claim follows from simple random walk estimates, since 0 < v̄ < v_•. Therefore we may assume that v_∘ < v_•. First note that, for fixed y and t large enough (e.g. t > 3), we have a lower bound in terms of H_{v̄,L}, where H_{v̄,L} is as in (3.30). Reasoning as for (4.16), we see that the latter probability equals P(Y_n ∉ H_{v̄,1} ∀ n ∈ Z_+ | ω(W_0) = 0) P-a.s. on the event {ω(W^y) = 0}. By monotonicity, if ω(W^y) > 0, then Y^y can only be further to the right. Hence
P(Y^y exits P_t(y) through the right | F_y) ≥ P(Y_n ∉ H_{v̄,1} ∀ n ∈ Z_+ | ω(W_0) = 0) P-a.s., (4.42)
so we only need to show that this last probability is strictly positive. To that end, fix L > 1 large enough, which is possible by (1.7). If t is large enough (e.g. t > 2), then the claim follows.
Proof. First we claim that there exists a c > 0 such that, for any k ≥ T,
P(R_k is a g.r.t. | F_{k−T}) ≥ c T^{δ log(p_∘ ∧ p_•)} a.s.

(4.46)
To prove (4.46), we will find c > 0 such that (4.47) holds, where the second equality uses the independence of G_∠^y and F_y, and the last step uses the monotonicity and translation invariance of ω. Then
P(R_k is not a g.r.t. for any k ≤ T̄) ≤ P(R_{(2k+1)T} is not a g.r.t. for any k ≤ T̄/3T) (4.54)
by our choice of T and δ.
We are now ready to finish the proof of Theorem 4.2.
Proof of Theorem 4.2. Since $\mathbb{P}^\angle(\cdot) = \mathbb{P}(\cdot \mid \Gamma_0)$ with $\mathbb{P}(\Gamma_0) > 0$, it is enough to prove the statement under $\mathbb{P}$. To that end, let $E_1$ and $E_2$ be the relevant events. On $E_2^c$, (4.58) follows from (4.59) and (4.35). To verify (4.57), first note that, by (4.32) and (4.34), we only need to check that (4.60) holds. Furthermore, since the paths in $W$ take nearest-neighbour steps, the set in question is contained in one that is empty on $E_1^c \cap E_2^c$. Thus, (4.60) holds.
In conclusion, for $T$ large enough we have the desired bound, completing the proof.

Proof of Theorem 1.4
We begin by making the following observation.

Theorem 4.7.
On an enlarged probability space there exists a sequence $(\tau_n)_{n\in\mathbb{N}}$ of random times with $\tau_1 = \tau$ such that, under $\mathbb{P}$ and with $S_n = \sum_{i=1}^n \tau_i$, the sequence $\big(\tau_{k+1}, (X_{S_k+s} - X_{S_k})_{0 \le s \le \tau_{k+1}}\big)_{k\in\mathbb{N}}$ is i.i.d., independent of $(\tau, (X_s)_{0\le s\le\tau})$, with each of its terms distributed as $(\tau, (X_s)_{0 \le s \le \tau})$ under $\mathbb{P}^\angle$.
Proof. The claim follows from Theorem 4.1 and the fact that $\tau < \infty$ a.s., exactly as in Avena, dos Santos and Völlering [2, Proof of Theorem 3.8].
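To illustrate how such an i.i.d. regeneration structure produces a law of large numbers (a toy simulation, not the paper's walk: the block durations and displacements below are arbitrary illustrative distributions), one can sum independent blocks $(\tau_i, \Delta_i)$ and check that the empirical speed approaches $\mathbb{E}[\Delta]/\mathbb{E}[\tau]$:

```python
import random

def simulate_speed(n_blocks, seed=0):
    """Sum n_blocks i.i.d. regeneration blocks (tau_i, delta_i) and
    return the empirical speed X_{S_n} / S_n."""
    rng = random.Random(seed)
    total_time, total_disp = 0, 0
    for _ in range(n_blocks):
        tau = 1 + rng.randrange(5)      # block duration, uniform on {1,...,5}, mean 3
        delta = rng.choice([-1, 1, 1])  # block displacement, mean 1/3
        total_time += tau
        total_disp += delta
    return total_disp / total_time

# Empirical speed; close to E[delta]/E[tau] = (1/3)/3 = 1/9 for many blocks.
print(simulate_speed(100_000))
```

The same decomposition into i.i.d. blocks is what feeds the central limit theorem in the proof of Theorem 1.4 below.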
We are now ready to prove Theorem 1.4.
Proof of Theorem 1.4. We start with (c). We first record an estimate that holds for some $c > 0$. Next define, for $t \ge 0$, $k_t$ as the random integer such that
$$S_{k_t} \le t < S_{k_t+1}. \tag{4.67}$$
Since $S_n > t$ if and only if $k_t < n$, for any $\varepsilon > 0$ we obtain the corresponding deviation bound for some $\delta', \varepsilon' > 0$ and large enough $n$. On the other hand, since $X$ is Lipschitz, we have a matching estimate, and therefore the claim in (c) follows for any $\varepsilon > 0$. To prove (b), let $\bar{\sigma}^2$ be the variance of $X_\tau - \tau v$ under $\mathbb{P}^\angle$, which is finite because of (4.14) and strictly positive because $X_\tau - \tau v$ is not a.s. constant. For the process $(Y_k)_{k\in\mathbb{N}}$ defined by $Y_k = X_{S_k} - S_k v$, a functional central limit theorem with variance $\bar{\sigma}^2$ holds because, by Theorems 4.2 and 4.7, the assumptions of the Donsker-Prohorov invariance principle are satisfied. Now consider the random time change $\varphi_n(t) = k_{nt}/n$. We claim that the corresponding supremum over $[0, L]$ vanishes $\mathbb{P}$-a.s. for all $L > 0$. To extend this to $X$, note that, for any $T > 0$, the discrepancy between $X$ and its value at the last regeneration time can be controlled.
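The random integer $k_t$ of (4.67) is simply the number of completed regeneration blocks by time $t$; a minimal sketch (with arbitrary illustrative block lengths, not the paper's $\tau$) computes it by binary search in the partial sums and checks that $k_t/t$ approaches $1/\mathbb{E}[\tau]$:

```python
import bisect
import itertools
import random

def k_t(partial_sums, t):
    """The integer k_t with S_{k_t} <= t < S_{k_t + 1}, i.e. the number of
    completed blocks by time t (partial_sums[i] = S_{i+1}, sorted)."""
    return bisect.bisect_right(partial_sums, t)

rng = random.Random(1)
taus = [1 + rng.randrange(5) for _ in range(100_000)]  # E[tau] = 3
S = list(itertools.accumulate(taus))

# Empirical renewal rate; close to 1/E[tau] = 1/3 for large t.
print(k_t(S, 200_000) / 200_000)
```

This makes the equivalence "$S_n > t$ if and only if $k_t < n$" used in the proof above a one-line consequence of the definition.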

Extensions
As mentioned in Section 1, finding a regeneration structure is usually a delicate matter, as often one needs to rely on precise features of the model at hand. Approximate renewal schemes are more general, but do not usually give as much information as full regeneration.
Let us mention other examples of dynamic random environments where a renewal strategy can be found. For the simple symmetric exclusion process, such a renewal structure was developed in [2]. There, the tail of the regeneration time is controlled by imposing a non-nestling condition on the random walk drifts. Using the techniques of this section, it would be possible to extend these results (i.e., obtain Theorem 1.2) to the nestling situation, provided one manages to prove the analogue of Theorem 1.5 for the exclusion process.
Another example where a regeneration strategy is useful is the independent renewal chain discussed in Section 3.5. Indeed, a regeneration time can be obtained as follows.
Recall that, for large enough $\rho$, we obtain the ballisticity condition (1.7) for some $v > 0$. Retaining the notation of Section 4.1 for $\hat{v}$, $\angle(y)$, $R_k$ and $A_y$, we define $I = \inf\{k \in \mathbb{N} : A_{Y_{R_k}} \text{ occurs}\}$ and $\tau := R_I$. We may then verify that $\tau$ satisfies properties analogous to those stated in Theorems 4.1 and 4.2. Hence, by the exact same arguments as in Section 4.3, Theorem 1.2 holds also in this case.
The remainder of this paper consists of five appendices. All we have used so far is Theorem C.1 in Appendix C (recall Section 3.1), which is a decoupling inequality, Lemma D.1 in Appendix D (recall Section 3.4), which is a tail estimate, and Lemma E.1 in Appendix E (recall Section 4.3), which is an estimate for independent random variables satisfying a certain tail assumption. Appendices A-B are preparations for Appendix C.

A Simulation with Poisson processes
In this section we recall some results from Popov and Teixeira [26] about how to simulate random processes with the help of Poisson processes. Corollary A.3 will be used in Section B to prove a mixing-type result for a collection of independent random walks (Lemma B.3 below).
Let $(\Sigma, \mathcal{B}, \mu)$ be a measure space, with $\Sigma$ a locally compact Polish metric space, $\mathcal{B}$ the Borel $\sigma$-algebra on $\Sigma$, and $\mu$ a Radon measure, i.e., every compact subset of $\Sigma$ has finite $\mu$-measure. This is the standard set-up for the construction of a Poisson point process on $\Sigma$.
To that end, consider the space $M$ of Radon point measures on $\Sigma \times \mathbb{R}_+$. We can canonically construct a Poisson point process $m$ on the measure space $(M, \mathcal{M}, Q)$ with intensity $\mu \otimes dv$, where $dv$ is the Lebesgue measure on $\mathbb{R}_+$. (For more details on this construction, see e.g. Resnick [29, Proposition 3.6].) Proposition A.1 below provides us with a way to simulate a random element of $\Sigma$ by using the Poisson point process $m$. Although the result is intuitive, we include its proof for the sake of completeness (see Fig. 5). Then, under the law $Q$ of the Poisson point process $m$: (1) a.s. there exists a single $\hat{\imath} \in \mathbb{N}$ such that $\xi g(z_{\hat{\imath}}) = v_{\hat{\imath}}$.
Proof. For measurable $A \subset \Sigma$, define the corresponding random variable. Now, given $\xi$, the resulting point process is a mapping of $m$ (in the sense of Resnick [29, Proposition 3.7]).
In Proposition A.2 below we use Proposition A.1 to simulate a collection $(Z_j)_{j\in\mathbb{N}}$ of independent random elements of $\Sigma$ using the single Poisson point process $m$ defined above. Formally, suppose that in some probability space $(M, \mathcal{M}, \mathbb{P})$ we are given a collection $(Z_j)_{j\in\mathbb{N}}$ of independent (not necessarily identically distributed) random elements of $\Sigma$ such that the distribution of $Z_j$ is given by $g_j(z)\,\mu(dz)$, $j \in \mathbb{N}$.
(A.6) In the same spirit as the definition of $\xi$ in Proposition A.1, we define what we call the soft local time $G = (G_j)_{j=1}^k$ associated with a sequence $(g_j)_{j=1}^k$ of measurable functions (see Fig. 5 for an illustration of this recursive procedure).

Proof of Proposition A.2. Apply Proposition A.1 repeatedly, using induction on $J$.
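The mechanism of Proposition A.1 can be sketched in code (a minimal illustration for $\Sigma$ a finite set with counting measure; the function name below is ours): for each site $z$, only the lowest Poisson point above $z$ matters, its height is Exp(1)-distributed, and the site minimising height over density is distributed according to $g$ — the classical exponential-race argument:

```python
import random

def sample_from_density(g, rng):
    """Simulate one sample with density g (w.r.t. counting measure) via the
    Poisson-point construction: for each site z with g(z) > 0, the lowest
    Poisson point above z has height E_z ~ Exp(1); the site minimising
    E_z / g(z) is selected, and is distributed according to g."""
    xi, chosen = float("inf"), None
    for z, gz in enumerate(g):
        if gz > 0:
            t = rng.expovariate(1.0) / gz
            if t < xi:
                xi, chosen = t, z
    return chosen

rng = random.Random(2)
g = [0.5, 0.3, 0.2]
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample_from_density(g, rng)] += 1
print([c / 100_000 for c in counts])  # approximately [0.5, 0.3, 0.2]
```

Repeating this with updated densities, as in the recursive definition of the soft local time, is exactly how Proposition A.2 simulates the whole collection $(Z_j)_{j\in\mathbb{N}}$ from one Poisson point process.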
We close this section by exploiting the above construction to couple two collections of independent random elements of Σ using the same Poisson point process as basis.
The following corollary will be needed in Appendix B.
EJP 20 (2015), paper 95.
Corollary A.3. Let $(g_j(\cdot))_{j=1}^J$ be a family of densities with corresponding $\xi_j$, $G_j$, $\hat{\imath}_j$, $j = 1, \ldots, J$, as in (A.7)-(A.8). Then, for any $\rho > 0$, the bound (A.12) holds. Note that the right-hand side of (A.12) only depends on the soft local time, which may e.g. be estimated through large deviation bounds.

B Simulation and domination of particles

B.1 Simple random walk
In this section we collect a few facts about the heat kernel of random walks on $\mathbb{Z}$. Let $p_n(x, x') = P_x(Z_n = x')$, $x, x' \in \mathbb{Z}$, where $P_x$ stands for the law of a lazy simple random walk $Z_n$ on $\mathbb{Z}$ as defined in Section 1, i.e., $p_1(0, x) > 0$ if and only if $x \in \{-1, 0, 1\}$ and $p_1(0, 1) = p_1(0, -1)$. Then there exist constants $C, c > 0$ such that (B.1)-(B.3) hold for all $n \in \mathbb{N}$. These observations will be used in the proof of Lemma B.2 below, which deals with the integration of the heat kernel over an evenly distributed cloud of sample points and is crucial in the proof of Theorem C.1 in Section C. In order to state this lemma, we need the following definitions.
Definition B.1. (a) For $H \subset \mathbb{Z}$ and $L \in \mathbb{N}$, we say that a collection of intervals $\{C_i\}_{i\in I}$ indexed by a subset $I \subset \mathbb{Z}$ is an $L$-paving of $H$ when $H \subset \cup_{i\in I} C_i$ and there is an $x \in \mathbb{Z}$ such that the $C_i$ are consecutive translates of a common interval of length $L$ anchored at $x$. (b) We say that a collection of points $(x_j)_{j\in J} \subset \mathbb{Z}$ is $\rho$-dense with respect to the $L$-paving $\{C_i\}_{i\in I}$ when $\#\{j : x_j \in C_i\} \ge \rho L$ for all $i \in I$.
We know that $\sum_{x\in\mathbb{Z}} p_n(0, x) = 1$. The next lemma approximates this normalization when the sum runs over a dense collection $(x_j)_{j\in J}$.
Lemma B.2. Let $\{C_i\}_{i\in I}$ be an $L$-paving of $H \subset \mathbb{Z}$ and $(x_j)_{j\in J}$ be a $\rho$-dense collection with respect to $\{C_i\}_{i\in I}$. Then, for all $n \in \mathbb{N}$,
$$\sum_{j\in J} p_n(0, x_j) \ge \rho\, P_0(Z_n \in H) - \frac{cL \log n}{\sqrt{n}}.$$
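A quick numerical sanity check of the normalisation used above (a sketch assuming the lazy step distribution $p_1(0,0) = \tfrac12$, $p_1(0,\pm 1) = \tfrac14$, which is one choice consistent with the constraints of this section): the kernel $p_n$ can be computed by iterated convolution, and is normalised and symmetric.

```python
def lazy_kernel(n):
    """Return {x: p_n(0, x)} for a lazy walk with p_1(0,0) = 1/2 and
    p_1(0, 1) = p_1(0, -1) = 1/4 (an illustrative lazy choice)."""
    step = {-1: 0.25, 0: 0.5, 1: 0.25}
    p = {0: 1.0}
    for _ in range(n):
        q = {}
        for x, px in p.items():
            for s, ps in step.items():
                q[x + s] = q.get(x + s, 0.0) + px * ps
        p = q
    return p

p10 = lazy_kernel(10)
print(abs(sum(p10.values()) - 1.0) < 1e-12)  # normalisation: True
print(abs(p10[3] - p10[-3]) < 1e-12)         # symmetry: True
```

The laziness removes the parity constraint of the plain simple random walk, which is why $p_n(0, x)$ is positive for every $|x| \le n$ here.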

B.2 Coupling of trajectories
Given a sequence of points $(x_j)_{j\in J}$ in $\mathbb{Z}$, let $(Z^j_n)_{n\in\mathbb{Z}_+}$, $j \in J$, be a sequence of independent simple random walks on $\mathbb{Z}$ starting at $x_j$, and let $\otimes_{j\in J} P_{x_j}$ denote their joint law. The next lemma, which will be needed in Appendix C, provides us with a way to couple the positions of these random walks at time $n$ with a Poisson point process on $\mathbb{Z}$. This lemma is similar in flavor to [25, Proposition 4.1].
Lemma B.3. Then, for any $\rho' \le \rho$, there exists a coupling $\mathbb{Q}$ of $\otimes_{j\in J} P_{x_j}$ and the law of a Poisson point process $\sum_{j'\in J'} \delta_{Y_{j'}}$ on $\mathbb{Z}$ with intensity $\rho'$ under which, with high probability, the positions of the walks at time $n$ dominate the Poisson points on $H'$, for all $H' \subset \mathbb{Z}$ such that $\{z \in \mathbb{Z} : \mathrm{dist}(z, H') \le n\} \subset H$ and all $n \ge c_{10} L^2$.
Proof. By Corollary A.3, there exists a coupling $\mathbb{Q}$ satisfying (B.11), where $G_J(z) = \sum_{j\in J} \xi_j\, p_n(x_j, z)$ with $(\xi_j)_j$ i.i.d. EXP(1) random variables. We will estimate the right-hand side of (B.11) using concentration inequalities. First, noting that $P_z(Z_n \in H) = 1$ for any $z \in H'$, we use Lemma B.2 to estimate $G_J(z)$ from below, where the third inequality uses (B.12) and (B.15). Inserting this estimate into (B.13), we get the claim.

C Decoupling of space-time boxes
In this section we prove a decoupling inequality for two disjoint boxes in the space-time plane $\mathbb{Z}_+ \times \mathbb{Z}$.

C.1 Correlation
Intuitively, if two events depend on what happens at far away times, then they must be close to independent due to the mixing of the dynamics. This is made precise in the following theorem.
where $\mathrm{per}(B)$ stands for the perimeter of $B$.
Note that, by the FKG inequality, we have $\mathbb{E}[f_1 f_2] \ge \mathbb{E}[f_1]\,\mathbb{E}[f_2]$. Thus, the bound in (C.1) shows that $f_1$ and $f_2$ are almost uncorrelated.
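The FKG lower bound can be verified exhaustively in a toy setting (a sketch for increasing functions of independent Bernoulli bits, i.e. the Harris inequality, unrelated to the particle system of the paper):

```python
from itertools import product

def expectation(f, n, p=0.5):
    """E[f] under n i.i.d. Bernoulli(p) bits, by exhaustive enumeration."""
    total = 0.0
    for bits in product([0, 1], repeat=n):
        weight = 1.0
        for b in bits:
            weight *= p if b == 1 else 1 - p
        total += weight * f(bits)
    return total

# Two increasing events on 3 bits:
f1 = lambda w: 1.0 if w[0] + w[1] >= 1 else 0.0  # at least one of the first two
f2 = lambda w: 1.0 if sum(w) >= 2 else 0.0       # a majority of ones

lhs = expectation(lambda w: f1(w) * f2(w), 3)
rhs = expectation(f1, 3) * expectation(f2, 3)
print(lhs >= rhs)  # True: increasing events are positively correlated
```

The content of Theorem C.1 is the matching *upper* bound: correlations in the particle system exceed the product of expectations by at most a small error.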
To prove Theorem C.1, we need the following definition. For a box $B = ([a, b] \times [n, n']) \cap \mathbb{Z}^2$ in the space-time upper half-plane $\mathbb{Z} \times \mathbb{Z}_+$, let $C(B)$ be the cone associated with $B$ (see Fig. 6). This cone can be interpreted as the set of points that can reach $B$ while traveling at speed at most one, and encompasses every space-time point that can influence the state of $B$.
Given a box $B$ and a half-plane $D$ as in Theorem C.1, we denote by $H$ and $H'$ the separating segments (see Fig. 6). The next lemma states a Markov-type property.
where $N(y)$ is the number of trajectories crossing $y$ (recall (2.2)).
Proof. Since $y \in T$ if and only if $N(y) \ge 1$, $f$ is a function of $(U_y, N(y))_{y\in B}$. Noting that $(N(y))_{y\in B}$ is a function of $(N(y))_{y\in H}$ and $(S^{y,i}_n)_{y\in H, i\in\mathbb{N}, n\in\mathbb{Z}_+}$ only, we get the claim.

C.2 Proof of Theorem C.1
In the following we will abuse notation by writing $H, H'$ to denote also the projections of these sets on $\mathbb{Z}$. We start by choosing an $L$-paving $\{I_j\}_{j\in J}$ of $H$, composed of segments of length $L = n^{1/4}$ (the choice of exponent $1/4$ is arbitrary: any choice in $(0, \tfrac{1}{2})$ will do).

Remark C.3.
It is important that the constants in Theorem C.1 do not depend on ρ, in accordance with our convention. This is crucial for our proof in Section 3 to work.
Moreover, by translation invariance we can apply the result for general boxes $B$ and half-spaces $D$ as long as their vertical distance is at least $c_{11}$.
(1) In the statement of Lemmas B.2-B.3, we suppose that the collection $(x_j)_{j\in J}$ is dense in both of the sets $C_i \cap 2\mathbb{Z}$ and $C_i \cap (\mathbb{Z} \setminus 2\mathbb{Z})$. Since (B.2) still holds when $x, x'$ have the same parity, and (B.1) and (B.3) are still valid, the proofs of the lemmas go through with this modification.
(2) In the proof of Theorem C.1, we modify E to be the event where enough trajectories cross both of the sets H ∩ I i ∩ 2Z and H ∩ I i ∩ Z \ 2Z, which allows us to use Lemmas B.2-B.3 with the new statements.
In the case of continuous-time symmetric random walks with bounded jumps, (B.1)-(B.3) and Lemma B.2 remain true. However, the random walk is no longer almost surely Lipschitz, and this property is used in Lemmas B.3 and C.2, Theorem C.1 and in several other places throughout the paper. Nonetheless, the random walk is still Lipschitz with high probability, and this is enough to adapt all arguments to this case.
In particular, $f(a) \ge 0$ for all $a \ge \alpha_0$, and hence $\sum_{l > a} l^{\beta} e^{-\log^{3/2} l}$ admits the desired bound.

E Rate of convergence
In this section we state and prove a basic fact about independent random variables that is used in the proof of Theorem 1.4(c) in Section 4.3.
Lemma E.1. Let $X_i$, $i \in \mathbb{N}$, be independent random variables with joint law $P$ satisfying a tail bound with constants $K > 0$ and $\gamma > 1$, where $\log_+ x = \max(\log x, 0)$. Then, for all $\varepsilon > 0$, there exists a $c > 0$ such that the stated estimate holds. We claim that we have $X_i - X$