Random walks in dynamic random environments and ancestry under local population regulation

We consider random walks in dynamic random environments, with an environment generated by the time-reversal of a Markov process from the oriented percolation universality class. If the influence of the random medium on the walk is small in space-time regions where the medium is typical, we obtain a law of large numbers and an averaged central limit theorem for the walk via a regeneration construction under suitable coarse-graining. Such random walks occur naturally as spatial embeddings of ancestral lineages in spatial population models with local regulation. We verify that our assumptions hold for logistic branching random walks when the population density is sufficiently high.


Introduction
Let η_n(x) be a random number of particles located at position x ∈ Z^d at time n, where η := (η_n)_{n∈Z} := (η_n(x) : x ∈ Z^d)_{n∈Z} is a stationary (discrete-time) Markovian particle system whose evolution can be described by 'local rules'. We assume that η is in its unique non-trivial ergodic equilibrium. Prototypical examples are the super-critical discrete-time contact process, see (2.3) below, or systems of logistic branching random walks, see (4.4) in Section 4.1. We consider a random walk X = (X_k)_{k=0,1,...} that moves 'backwards' through the medium generated by η, i.e. given η, X is a Markov chain, and given X_k = x, the law of the next increment is a function of η in a finite window around the space-time point (x, −k).
Our main result, see Theorem 3.1 in Section 3, provides a law of large numbers (LLN) and an averaged central limit theorem (CLT) for X. Very broadly speaking we require that the law of an increment of X is close to a fixed symmetric finite-range random walk kernel whenever the walk is in a 'good region' and that such good regions are sufficiently frequent in a typical realisation of η. In particular we assume that on suitably coarse-grained space-time scales, the occurrence of good regions can be compared to super-critical oriented percolation. The explicit assumptions are rather technical and we refer to Sections 3.1-3.2 for details.
The reversal of the natural time directions of X and η results from and is consistent with the interpretation of X_k as the position of the k-th ancestor of a particle picked from the population at time 0. A difficulty here is that although η itself evolves according to local rules, its time reversal cannot be explicitly constructed using local rules. A similar problem was faced in [5] in the study of a directed random walk on the backbone of an oriented percolation cluster. There, the particular structure of oriented percolation allowed to jointly construct the medium and the walk under the annealed law using suitable space-time local operations (cf. [5, Sect. 2.1]) and therefrom deduce the regeneration structure. This construction was extended in [24] to random walks on weighted oriented percolation clusters with weights satisfying certain mixing conditions.
Here, we must use a different approach. Again very broadly speaking, regeneration occurs after T steps if the medium η_{−T} in a large window around X_T is 'good' and also the 'local driving randomness' of η in a (large) neighbourhood of the space-time path {(X_m, −m) : 0 ≤ m ≤ T} has 'good' properties. This essentially enforces that the information about η that the random walk path has explored so far is a function of that local driving randomness. Such a time allows us to decouple the past and the future of X conditional on the position X_T and on η_{−T} in a finite window around that position. A difficulty arises from the fact that if a regeneration attempt fails at a given time k, then we have potentially gained a lot of undesirable information about the behaviour of η_n at times n < −k, which might render successful regeneration at a later time > k much less likely. We address this problem by covering the path and the medium around it by a carefully chosen sequence of eventually nested cones, see Figure 6. We finally express X as an additive functional of a Markov chain which keeps track of the increments between regeneration times and of the local configurations of η at the regeneration points.
Note that random walks in dynamic random environments generated by various interacting particle systems, in particular also by the contact process in continuous time, have received considerable attention recently; see for example [1,2,4,8,25,27]. A fundamental difference to the present set-up lies in the time directions. Traditionally, both the walker and the dynamic environment have the same 'natural' forwards time direction whereas here, forwards time for the walk is backwards time for the medium. We also refer to the more detailed discussion and references in [5,Remark 1.7].
The rest of this manuscript is organised as follows. We first introduce and study in Section 2 a class of random walks which travel through the time-reversal of the discrete time contact process, i.e., η is literally a super-critical contact process. We note that unlike the set-up in [5], here the walk is also allowed to step on zeros of η. We use this simple model to develop and explain our regeneration construction and obtain a LLN and an annealed CLT in the 'p close to 1' regime, see Theorem 2.6. In Section 3 we develop abstract conditions for spatial models and random walks in dynamic random environments governed by the time-reversals of the spatial models. Under these conditions on a coarse-grained space-time grid we implement a regeneration construction similar to the one from Section 2 and then obtain a LLN and an annealed CLT in Theorem 3.1. In Section 4 we introduce logistic branching random walks, the class of stochastic spatial population models mentioned above. An ancestral lineage in such a model is a particular random walk in a dynamic random environment, see (4.10). We show that this class provides a family of examples where the abstract conditions from Section 3 can be verified. We believe that there are several further classes of (population) models that satisfy the abstract conditions from Section 3 in suitable parameter regions. In Section 5 we list and discuss such models.
Finally, we note that a natural next step will be to extend our regeneration construction to two random walks on the same realisation of η and to then also deduce a quenched CLT, analogous to [5]. We defer this to future work.

Acknowledgements:
We would like to thank Nina Gantert for many interesting discussions on this topic and for her constant interest and encouragement during the preparation of this work. We also thank Stein Bethuelsen for carefully reading a preprint version of the manuscript and for his helpful comments. Finally, we are grateful to an anonymous referee for her or his suggestions that made the presentation more complete.

An auxiliary model
In this section we prove a law of large numbers and an annealed (averaged) central limit theorem for a particular type of random walk in a dynamic random environment. The model is the simplest and the most transparent among the models that we consider in this paper. The proofs here already contain the main ideas and difficulties that we will face in the following sections. It will also become clear later how the dynamics of ancestral lineages in spatial stochastic population models are related to this particular random walk.

Definition of the model and results
We first define the model that generates the dynamic random environment of the random walk. Let ω := {ω(x, n) : (x, n) ∈ Z^d × Z} be a family of i.i.d. Bernoulli random variables with parameter p > 0. We call a site (x, n) open if ω(x, n) = 1 and closed if ω(x, n) = 0. Throughout the paper ‖·‖ denotes the sup-norm. For m ≤ n, we say that there is an open path from (x, m) to (y, n) if there is a sequence x_m, . . . , x_n ∈ Z^d such that x_m = x, x_n = y, ‖x_k − x_{k−1}‖ ≤ 1 for k = m + 1, . . . , n and ω(x_k, k) = 1 for all k = m, . . . , n. In this case we write (x, m) →_ω (y, n), and in the complementary case (x, m) ̸→_ω (y, n). For sets A, B ⊆ Z^d and m ≤ n we write A × {m} →_ω B × {n} if there exist x ∈ A and y ∈ B so that (x, m) →_ω (y, n). Here, slightly abusing the notation, we use the convention that ω(x, m) = 1_A(x) while for k > m the ω(x, k) are i.i.d. Bernoulli random variables as above. With this convention, for A ⊂ Z^d and m ∈ Z, we define the discrete-time contact process η^A := (η^A_n)_{n=m,m+1,...} driven by ω as

η^A_m = 1_A and η^A_n(x) := 1_{{A×{m} →_ω (x,n)}}, n > m. (2.1)

Alternatively, η^A = (η^A_n)_{n=m,m+1,...} can be viewed as a Markov chain with η^A_m = 1_A and the following local dynamics: η^A_{n+1}(x) = 1 if ω(x, n + 1) = 1 and η^A_n(y) = 1 for some y ∈ Z^d with ‖x − y‖ ≤ 1, and η^A_{n+1}(x) = 0 otherwise.
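A minimal simulation sketch (our own illustration, not from the paper) may help to see the local dynamics at work. Here the process runs on a one-dimensional torus of L sites, and `contact_step` implements the update rule above: a site is occupied at time n+1 iff its Bernoulli(p) variable is 1 and some site within distance 1 was occupied at time n. All function names are our assumptions.

```python
import numpy as np

def contact_step(eta, p, rng):
    """One update of the discrete-time contact process on a 1-d torus:
    eta_{n+1}(x) = 1 iff omega(x, n+1) = 1 and eta_n(y) = 1 for some y
    with |x - y| <= 1 (periodic boundary conditions)."""
    nbhd = eta | np.roll(eta, 1) | np.roll(eta, -1)  # occupied neighbourhood
    omega = rng.random(len(eta)) < p                 # fresh Bernoulli(p) layer
    return nbhd & omega

def run_contact(A, L, p, steps, seed=0):
    """Run the process started from eta_m = 1_A for the given number of steps."""
    rng = np.random.default_rng(seed)
    eta = np.zeros(L, dtype=bool)
    eta[list(A)] = True
    for _ in range(steps):
        eta = contact_step(eta, p, rng)
    return eta
```

For p = 1 the occupied set started from a single site grows deterministically by one site per step on each side, mirroring the open-path description in (2.1).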
For a distribution µ on {0, 1}^{Z^d} we write η^µ = (η^µ_n)_{n=m,m+1,...} for the discrete-time contact process with initial (random) configuration η^µ_m distributed according to µ. The contact process η^A = (η^A_n)_{n=m,m+1,...} is closely related to oriented percolation. In this context A is the set of 'wet' sites at time m and {x ∈ Z^d : η^A_n(x) = 1} × {n} is the n-th time-slice of the cluster of wet sites. Obviously, for any p < 1 the Dirac measure on the configuration 0 ∈ {0, 1}^{Z^d} is a trivial invariant distribution of the discrete-time contact process. It is well known that there is a critical percolation probability p_c ∈ (0, 1) such that for p > p_c and any non-empty A ⊂ Z^d the process η^A survives with positive probability. Furthermore, in this case there is a unique non-trivial extremal invariant measure ν, referred to as the upper invariant measure, such that, starting at any time m ∈ Z, the distribution of η^{Z^d}_n converges to ν as n → ∞. We assume p > p_c throughout this section. Given a configuration ω ∈ {0, 1}^{Z^d×Z}, we define the stationary discrete-time contact process driven by ω as

η_n(x) := 1_{{Z^d × {−∞} →_ω (x, n)}}, n ∈ Z, x ∈ Z^d. (2.3)

The event on the right-hand side should be understood as ∩_{m≤n} {Z^d × {m} →_ω (x, n)}. In the above notation we have η = η^{Z^d} = (η^{Z^d}_n)_{n∈Z}. Furthermore, since η is a stationary Markov process, by abstract arguments its time reversal is also a stationary Markov process with the same invariant distribution; cf. Remark 2.7. Unless stated otherwise, throughout the paper η denotes the stationary discrete-time contact process. Many other versions of contact processes that will be needed in proofs will be labelled by superscripts, similarly to the definition in (2.1).
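Since ν arises as the limit of η^{Z^d}_n, its density of 1's can be approximated by a crude Monte-Carlo sketch (our own illustration in d = 1 on a finite torus; all names and parameter choices are assumptions, not from the paper):

```python
import numpy as np

def density_upper_invariant(p, L=200, burn_in=300, samples=200, seed=0):
    """Crude estimate of the density of 1's under the upper invariant
    measure nu: start from the all-ones configuration (the analogue of
    eta^{Z^d}) on a torus of L sites and average the density after burn-in."""
    rng = np.random.default_rng(seed)
    eta = np.ones(L, dtype=bool)
    dens = []
    for n in range(burn_in + samples):
        nbhd = eta | np.roll(eta, 1) | np.roll(eta, -1)
        eta = nbhd & (rng.random(L) < p)   # one contact-process update
        if n >= burn_in:
            dens.append(eta.mean())
    return float(np.mean(dens))
```

For p below p_c the finite-volume process dies out and the estimate is 0, while for p close to 1 the estimated density is close to 1, consistent with the convergence to ν described above.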
To define a random walk in the random environment generated by η, or more precisely by its time-reversal, let κ := (κ_n(x, ·))_{n∈Z, x∈Z^d} be a family of random transition kernels defined on the same probability space as η; in particular, κ_n(x, y) ≥ 0 and Σ_{y∈Z^d} κ_n(x, y) = 1 hold for all n ∈ Z and x ∈ Z^d. Given κ, we consider a Z^d-valued random walk X := (X_n)_{n=0,1,...} with X_0 = 0 and transition probabilities given by

P(X_{n+1} = y | X_n = x, κ) = κ_n(x, y), (2.5)

that is, the random walk at time n takes a step according to the kernel κ_n(x, ·) if x is its position at time n. We make the following four assumptions on the distribution of κ; see Remark 2.5 for an interpretation.
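In the same illustrative vein, once a realisation of the kernels is fixed, (2.5) simply says that X is an inhomogeneous Markov chain. A sketch (the kernel interface `kappa(n, x)` returning a finitely supported distribution is our own assumption):

```python
import random

def sample_walk(kappa, n_steps, rng, x0=0):
    """Sample X_0, ..., X_n on Z given realised kernels: kappa(n, x)
    returns a dict {y: probability}, playing the role of kappa_n(x, .)
    in (2.5)."""
    X = [x0]
    for n in range(n_steps):
        probs = kappa(n, X[-1])
        ys = list(probs)
        X.append(rng.choices(ys, weights=[probs[y] for y in ys])[0])
    return X

# a toy, environment-independent reference kernel: symmetric nearest-neighbour
kappa_ref = lambda n, x: {x - 1: 0.5, x + 1: 0.5}
```

In the actual model the dict returned by `kappa(n, x)` would be computed from (ω, η_{−n}) in a window around x, as required by Assumption 2.1.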
Assumption 2.1 (Locality). The transition kernels in the family κ depend locally on the time-reversal of η, that is, for some fixed R_loc ∈ N, κ_n(x, ·) depends only on {(ω(y, −n), η_{−n}(y)) : ‖x − y‖ ≤ R_loc}.
Here ‖·‖_TV denotes the total variation norm.
Remark 2.5 (Interpretation of the assumptions). Assumptions 2.1-2.4 are natural as we want to interpret the random walk as the spatial embedding of an ancestral lineage in a spatial population model in which, roughly speaking, children (if any are present at a site) choose their parents at random from a finite neighbourhood in the previous generation. See also Section 4 and in particular the discussion around (4.10).
By (2.5) and (2.6), we can, and often shall, think of creating the walk from η and ω in a local window around the current position, together with additional auxiliary randomness.
The main result of this section is the following theorem; its proof is given in Section 2.4. Theorem 2.6 (LLN and annealed CLT). One can choose ε_ref > 0 sufficiently small and p sufficiently close to 1 so that if κ satisfies Assumptions 2.1-2.4, then X satisfies the strong law of large numbers with speed 0 and an annealed (i.e. averaging over both ω and the walk) central limit theorem with non-trivial covariance matrix. A corresponding functional central limit theorem holds as well.
Remark 2.7 (Time-reversal of η, oriented percolation interpretation). In [5], the stationary process η was equivalently parametrised via its time reversal ξ = (ξ_n)_{n∈Z} with ξ_n(x) := η_{−n}(x). Then ξ is the indicator of the backbone of the oriented percolation cluster, and it was notationally and conceptually convenient to use in [5], not least because then the medium ξ and the walk X had the same positive time direction.
Here, we keep η as our basic datum because we wish to emphasise, and in fact later use in Section 3, the interplay between the medium η, interpreted as describing the dynamics of a population, and the walk X, cf. (2.5) above, describing the embedding of an ancestral lineage. Furthermore, in more general population models, such as the one studied in Section 4, there will be no natural parametrisation of the time-reversal of η.
Note that the assertions of Theorem 2.6 are in a sense conceptual rather than practical, because the proofs of the preliminary results in Section 2.3 require 1 − p to be very small. Situations with p > p_c but 1 − p appreciably large require an additional coarse-graining step so that the arguments from Section 3 can be applied.
In order to prove Theorem 2.6 we will construct suitable regeneration times and show that the increments of these regeneration times, as well as the corresponding spatial increments of the walk, have finite moments of order b for some b > 2. This construction is rather intricate. The main source of complications stems from the fact that in order to construct the random walk X one must know ω and η in the vicinity of its trajectory; cf. Remark 2.5 above. While it is easy to deal with the knowledge of the ω's, because they are i.i.d., the knowledge of η leads to problems. Due to definition (2.3) of η and Assumption 2.1 on κ, this knowledge provides non-trivial information about the past behaviour of η and therefore also about the future behaviour of X. Neither is desirable at regeneration times.
More precisely, we need to deal with two types of information on η. The first type, the negative information, that is, knowing that η_n(x) = 0 for some n and x, is dealt with similarly as in [5]. The key observation is that such information is essentially local: to discover that η_n(x) = 0, one should check that Z^d × {−∞} ̸→_ω (x, n), which requires observing the ω's in a layer Z^d × {n − T, . . . , n}, where T is a random variable with exponentially decaying tails. The second type of information, the positive one, that is, knowing that η_n(x) = 1, is removed by making use of strong coupling properties of the forwards-in-time dynamics of η. When at time −t we have η_{−t} ≥ 1_{x+{−L,...,L}^d} pointwise, then there is a substantial chance that every infection, i.e., every '1' of η, inside a growing forwards-in-time cone with base point (x, −t) can be traced back to (x + {−L, . . . , L}^d) × {−t}. Furthermore, whether this event has occurred can be checked by observing the restriction of η_{−t} to x + {−L, . . . , L}^d and the ω's inside a suitably fattened shell of the cone in question, in particular without looking at any η_m(y) for m < −t; see (2.27) and Lemma 2.13 below. We will construct suitable random times T at which this event occurs for η at the current space-time position (X_T, −T) of the walker and, in addition, the space-time path of the walk up to T, {(X_k, −k) : 0 ≤ k ≤ T}, is completely covered by a suitable cone. Such a time T allows the walk to regenerate.
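The observation that negative information is essentially local can be made concrete in a small sketch (d = 1, our own naming, not from the paper): to certify η_n(x) = 0 one explores the backwards cluster of open sites attached to (x, n); if it dies out within finitely many layers, the certificate only involves the ω's in those layers.

```python
def backwards_cluster_depth(omega, x, n, max_depth):
    """Explore the backwards cluster of open sites attached to (x, n).
    omega is a dict {(y, k): 0 or 1}; absent keys count as closed.
    Returns the number of layers after which the cluster dies (so that
    eta_n(x) = 0 is certified by omega in those layers), or None if the
    cluster is still alive after max_depth layers."""
    layer = {x} if omega.get((x, n), 0) == 1 else set()
    depth = 0
    while layer:
        depth += 1
        if depth > max_depth:
            return None          # possibly connected to the far past
        k = n - depth
        layer = {y for z in layer for y in (z - 1, z, z + 1)
                 if omega.get((y, k), 0) == 1}
    return depth
```

The exponential tail of the depth T mentioned above corresponds to this exploration terminating quickly with high probability when p is supercritical.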
For the proof of Theorem 2.6 we first collect some results on the high density discrete time contact process in Section 2.2. We then rigorously implement the regeneration construction sketched above in Section 2.3. Finally we prove Theorem 2.6 in Section 2.4.

Some results about the contact process
This section contains several estimates for the discrete-time contact process η that will be crucial for the regeneration construction. The main results of this section are the estimates given in Lemma 2.11 and Lemma 2.13. We start by recalling two well known results.
Proof. Due to self-duality of the contact process, this is a reformulation of the fact that

P(η^{{0}}_n ≢ 0 and η^{{0}} eventually dies out) ≤ C(p)e^{−c(p)n}, n ∈ N. (2.12)

For a proof of the latter assertion we refer to e.g. [10, 18] or Lemma A.1 in [5].

Proof. For the contact process in continuous time, this is proved in [12], see in particular (33) there; the argument shows that for any p > p_c and a > 0, there is a C < ∞ such that the probability on the left-hand side of (2.13) is bounded below by 1 − Cn^{−a}.
More recently, in [17] large deviations for the continuous time contact process in a random environment were studied. Among other results it is shown that exponential decay as in (2.13) holds in the supercritical case; see Theorem 1, Eq. (2) there.
The first main result of this section is the following lemma on controlling the probabilities of certain negative events; cf. Lemma 7 in Section 3 of [11] for a related result.
Remark 2.12. In our proof of (2.14) it is essential that all t_i's are distinct. For a general set V ⊂ Z^d × Z, space-time boundary effects can play a role, so that the decay will only be stretched exponential in |V|. Note however that in Corollary 4.1 of [23] it is shown that the upper invariant measure of the continuous-time contact process on Z^d stochastically dominates a product measure on {0, 1}^{Z^d}. Therefore a bound analogous to (2.14) does hold in the situation when all t_i's are equal. Lemma 2.11 can be seen as a space-time extension of that result in the discrete-time case and in the 'p large enough' regime.
Proof of Lemma 2.11. An immediate consequence of Lemma 2.8 is that for every p > p_c,

P(Z^d × {−n} →_ω (0, 0) and Z^d × {−∞} ̸→_ω (0, 0)) ≤ e^{−c_1(p)(n+1)}, n = 0, 1, 2, . . . , (2.15)

with some c_1 = c_1(p) > 0 satisfying lim_{p↑1} c_1(p) = ∞. To prove (2.14), we extend the finite sequence {t_1, . . . , t_k} to an infinite sequence via t_{k+j} := t_k − j, j = 1, 2, . . . , and define the random variables D_i accordingly in (2.16). The D_i are upper bounds on the heights of the backwards clusters of open sites attached to (x_i, t_i); see Figure 1. We bound the left-hand side of (2.14) by a sum over the possible values of the D_i. The events in the intersection on the right-hand side depend on ω restricted to disjoint sets and are thus independent. Furthermore, we observe that the event enforces that (x_{u(j)}, t_{u(j)}) is the starting point of a finite (backwards) cluster of height at least t_{u(j)+d_j−1} − t_{u(j)} ≥ d_j − 1 (when d_j = 1 this means that ω(x_{u(j)}, t_{u(j)}) is closed, which also gives a factor 1 − p < 1). Hence, using (2.15), we obtain an exponential bound. By definition of I(m, k), for s ≥ k + 1 the right-hand side of (2.22) can be bounded accordingly, yielding the claim of the lemma.
The second main result of this section, Lemma 2.13 below, is the crucial tool in the construction of a certain coupling which will be used to forget the positive information about η in the regeneration construction. To state this lemma we need to introduce more notation. For A ⊂ Z^d let η^A = (η^A_n)_{n=0,1,...} be the discrete-time contact process as defined in (2.1). For positive b, s, h we write (denoting by Z_+ the set of non-negative integers and by ‖·‖_2 the ℓ^2-norm)

cone(b, s, h) := {(x, n) ∈ Z^d × Z_+ : n ≤ h, ‖x‖_2 ≤ b + ns}

for a (truncated, upside-down) cone with base radius b, slope s, height h and base point (0, 0). Furthermore, for

b_inn ≤ b_out and s_inn < s_out, (2.26)

we define the conical shell cs(b_inn, b_out, s_inn, s_out, h) with inner base radius b_inn, inner slope s_inn, outer base radius b_out, outer slope s_out, and height h ∈ N ∪ {∞}. The conical shell can be thought of as the difference of the outer cone cone(b_out, s_out, h) and the inner cone cone(b_inn, s_inn, h), with all boundaries except the bottom boundary of that difference included; see Figure 2.
We think of η^cs as a version of the contact process where all ω's outside the conical shell cs(b_inn, b_out, s_inn, s_out, ∞) have been set to 0. We say that η^cs survives (in all parts of the conical shell) if for all n ∈ Z_+ there is x ∈ Z^d with η^cs_n(x) = 1. For a space-time path γ = ((x_m, m), . . . , (x_n, n)) with ‖x_i − x_{i−1}‖ ≤ 1, we say that γ crosses the conical shell cs(b_inn, b_out, s_inn, s_out, ∞) from the outside to the inside if the following three conditions are fulfilled: (i) the starting point lies outside the outer cone, i.e., ‖x_m‖_2 > b_out + m s_out; (ii) the terminal point lies inside the inner cone, i.e., ‖x_n‖_2 < b_inn + n s_inn; (iii) all remaining points lie inside the shell, i.e., (x_i, i) ∈ cs(b_inn, b_out, s_inn, s_out, ∞) for i = m + 1, . . . , n − 1.
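For concreteness, the geometry of conditions (i)-(iii) can be encoded directly (a sketch with simplified boundary conventions compared to the text; all names are ours):

```python
import math

def in_cone(x, n, b, s):
    """(x, n), n >= 0, lies in the upside-down cone with base radius b, slope s."""
    return n >= 0 and math.hypot(*x) <= b + n * s

def in_shell(x, n, b_inn, b_out, s_inn, s_out):
    """Inside the outer cone but not inside the inner cone
    (boundary conventions simplified)."""
    return in_cone(x, n, b_out, s_out) and not in_cone(x, n, b_inn, s_inn)

def crosses(path, b_inn, b_out, s_inn, s_out):
    """Conditions (i)-(iii) for path = [(x_m, m), ..., (x_n, n)]:
    start outside the outer cone, end strictly inside the inner cone,
    all remaining points inside the shell."""
    (x0, m), (x1, n) = path[0], path[-1]
    return (math.hypot(*x0) > b_out + m * s_out
            and math.hypot(*x1) < b_inn + n * s_inn
            and all(in_shell(x, k, b_inn, b_out, s_inn, s_out)
                    for x, k in path[1:-1]))
```

Because s_inn < s_out, the shell widens with n, which is what leaves room for η^cs to survive inside it.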
Note that if, in the case d = 1, the process η^cs survives in a ray of the conical shell and γ is a path that is open in this ray, then by geometric properties of directed paths and clusters γ necessarily intersects η^cs. In the case d > 1, however, even if η^cs survives, it is in principle possible that an open path crosses the conical shell cs(b_inn, b_out, s_inn, s_out, ∞) without intersecting η^cs. The next lemma states that the probability of this event can be made arbitrarily small.
Lemma 2.13. For any ε > 0 and 0 ≤ s_inn < s_out < 1 one can choose p sufficiently close to 1 and b_inn < b_out sufficiently large so that P(G_1 ∩ G_2) ≥ 1 − ε.

Remark 2.14 (Observations concerning G_1 and G_2). The meaning of the event G_1 is clear. Let us just note that it is essential that the relations in (2.26) hold. In particular, in the case s_inn = s_out survival of η^cs is only possible in the trivial case p = 1.
To understand the importance of the event G_2, observe that if a path γ as in (2.28) crosses and is open in cs(b_inn, b_out, s_inn, s_out, ∞) and also intersects η^cs, then necessarily η^cs_n(x_n) = ω(x_n, n) for the terminal point (x_n, n) of the path. Thus, on G_1 ∩ G_2 the values of the contact process inside the inner cone, that is, for (x, n) with ‖x‖_2 < b_inn + n s_inn, are independent of what happens outside of the shell; cf. (2.59) and Lemma 2.22.
Proof of Lemma 2.13. The proof consists of two steps. In the first step we prove the assertion for the case d = 1. The second step then uses the assertion for d = 1 to give a proof for d ≥ 2.
Throughout the proof of this lemma, for r > 0 and x ∈ Z^d we denote by B_r(x) the closed ℓ^2-ball of radius r around x, i.e., B_r(x) = {y ∈ Z^d : ‖x − y‖_2 ≤ r}.
Step 1. Consider the case d = 1. We first check that the discrete-time contact process survives with high probability in any oblique cone when p is large enough. To this end, for 0 < s_1 < s_2 < 1 and b ∈ N, we define a region C_{b,s_1,s_2}. Furthermore, we let η̄ := (η̄_n)_{n=0,1,...} be the discrete-time contact process in C_{b,s_1,s_2}.
(ii) Moreover, there exist c, C ∈ (0, ∞) so that on the event {η̄ survives}, with probability at least 1 − Ce^{−cn}, η̄_n restricted to the ball B_{rn}(x_n) can be coupled with the unrestricted process η^ν = (η^ν_n)_{n=0,1,...} started from the upper invariant measure ν. The claim (i) follows using the same arguments as in the proof of Theorems 1 and 2 of [7], where an analogous statement is proved for the continuous-time contact process in a wedge. Moreover, [7] uses for the proofs a coarse-graining construction and a comparison with oriented percolation. That links the problem in the wedge with a suitable shift of the contact process η^{{0}} used in Lemma 2.9. Combining that lemma with coarse-graining then yields claim (ii).
As noted in the paragraph above Lemma 2.13, we have G_1 ⊂ G_2 in the case d = 1.
Thus, by using the above argument for the oblique cone twice we see that the assertion of the lemma holds in the case d = 1.
Step 2. Consider now d > 1. Since the probability of G_1 is increasing in the dimension, it can be bounded from below by the same reasoning as in d = 1. It remains to show that the probability of G_2^c can be made small by choosing b_inn, b_out and p appropriately. To this end, for n ≥ 0 we set d(n) = (b_inn + b_out)/2 + n(s_inn + s_out)/2, and we define the set M_n of relevant space-time points at height n, the event B_n(x) for x ∈ M_n, and the backward cluster bC_M(x, n) of (x, n). Assume that B_n(x) occurs. Then, by elementary geometric arguments, there is a small constant ρ, depending on the parameters of the shell, so that for M = (1 − ρ)n the set bC_M(x, n) is not empty. Moreover, self-duality of the contact process and Lemma 2.9 imply that there exist s_cpl > 0 (here we think of s_cpl ≈ s_coupl ρ for s_coupl from Lemma 2.9) and c > 0 such that B_{s_cpl n}(x) × {M} ⊂ cs(b_inn, b_out, s_inn, s_out, ∞) and, with probability bounded below by 1 − e^{−cn}, the indicator function of the set bC_M(x, n) can be coupled inside B_{s_cpl n}(x) with the set of 1's under the upper invariant measure ν of the (full) contact process. (2.29) Fix p large enough so that the density of 1's under ν is strictly larger than 1/2; this is not a restriction in the parameter region that we consider. On the one hand, this requirement means heuristically that the set bC_M(x, n) ∩ (B_{s_cpl n}(x) × {M}) is large with high probability, and on B_n(x) we must have η^cs_M(z) = 0 for all (z, M) in this set. On the other hand, using d = 1 arguments, we will show that this is not possible.
To this end, depending on the previous parameters, we fix δ > 0 small and unit vectors v_i ∈ R^d, 1 ≤ i ≤ N, with N sufficiently large, so that for every x ∈ M_n there is an i ≤ N such that the intersection of the half-line {tv_i : t ≥ 0} with the (real) ball {y ∈ R^d : ‖x − y‖_2 ≤ s_coupl n} has length at least δn. (2.30) Observe that N and the v_i's can be chosen independently of n.
We now use the result of Step 1 and the last claim to bound the probability of B_n(x). To this end we define the contact process η^(i) as the contact process restricted to the set W_i. Observe that W_i contains an isomorphic image of C_{b,s_1,s_2} for some b, s_1, s_2. Thus, the contact process η^(i) dominates a corresponding image of a contact process in C_{b,s_1,s_2}.
a stationary contact process. Moreover, the parameters can be chosen so that for every x ∈ M_n there is a suitable starting point x^(i). Hence, using again large deviation arguments for the density of 1's, we obtain (2.33). Comparing (2.31) with (2.33), we see that (2.34) holds for every x ∈ M_n. Assume that G_2^c occurs. Since there are at most polynomially many x ∈ M_n, there must be (x, n) ∈ ∪_{ℓ≥1} (M_ℓ × {ℓ}) so that B_n(x) occurs. It follows that for p sufficiently large this probability is small, and therefore P(G_1 ∩ G_2) ≥ 1 − ε, as required.

Regeneration construction
In Theorem 2.6 we claim that the speed of the random walk X is 0. As an intermediate result we will show that the speed is bounded by a small constant; this will be needed for the regeneration construction. Proof. With the percolation interpretation in mind, we consider space-time paths γ = ((x_0, 0), . . . , (x_n, n)) with ‖x_i − x_{i−1}‖ ≤ R_κ for i = 1, . . . , n, where R_κ is the range of the kernels κ_n from Assumption 2.4. For γ ∈ Γ_n and 0 ≤ i we obtain a bound with c_1, c_2 ∈ (0, ∞), valid when δ > 0 is sufficiently small and p ≥ p_0 = p_0(δ, ε_ref).
Writing X_n = (X_{n,1}, . . . , X_{n,d}), we can couple the first coordinate (X_{n,1})_{n=0,1,...} of the random walk X with a one-dimensional random walk X̃ = (X̃_n)_{n=0,1,...} (with probability 1 − ε_ref, X̃ takes a step according to the projection of κ_ref on the first coordinate, and with probability ε_ref simply a step of size R_κ to the right) such that for all n ∈ N, X_{n,1} ≤ X̃_{n−H_n} + R_κ H_n.
The estimates (2.38), (2.39) and standard large deviation bounds for X̃ then show that P(X_{n,1} > sn) decays exponentially. By symmetry, we have an analogous bound for P(X_{n,1} < −sn), and the same reasoning applies to the coordinates X_{n,2}, . . . , X_{n,d}. (2.41) In particular, we have lim sup_{n→∞} ‖X_n‖/n ≤ s̄ a.s. by the Borel-Cantelli lemma.
Denote by tube_n the R_loc-tube around the first n steps of the path. The idea is that if η_n(x) = 0, i.e. (x, n) is not connected to Z^d × {−∞}, then this information can be deduced by inspecting the ω's in D(x, n): by definition of D(x, n), in this case there must be a closed contour contained in D(x, n) which separates (x, n) from Z^d × {−∞}; see Figure 3. When constructing the walk X for n steps we must inspect ω and η in tube_n (cf. Remark 2.5). By the nature of η, this in principle yields information on the configurations η_{−k}, k > n, that the walk will find in its future. Positive information of the form η_m(y) = 1 for certain m and y is at this stage harmless: η has positive correlations, and in view of Assumption 2.2 this suggests a well-behaved path in the future. On the other hand, negative information of the form η_m(y) = 0 for certain m and y is problematic, because it increases the chances of finding more 0's of η in the walk's future; in this case Assumption 2.2 is useless. In order to 'localise' this negative information we 'decorate' the tube around the path with the determining triangles for all sites in tube_n (obviously, only the zeros of η matter):

dtube_n = ∪_{(y,k)∈tube_n} D(y, k).
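The tube and its decoration can be illustrated in d = 1 (our own sketch with hypothetical inputs: `height(y, k)` stands in for the random depth needed to certify η_k(y) = 0, and the triangles below are simplified stand-ins for the determining triangles D(y, k)):

```python
def tube(path, R_loc):
    """R_loc-tube around the walk path (X_0, ..., X_n): all space-time
    points within distance R_loc of (X_k, -k), 0 <= k <= n."""
    return {(y, -k)
            for k, x in enumerate(path)
            for y in range(x - R_loc, x + R_loc + 1)}

def dtube(path, R_loc, height):
    """Decorated tube: union over (y, k) in the tube of downward triangles
    playing the role of the determining triangles D(y, k); `height` is an
    assumed, user-supplied function."""
    out = set()
    for (y, k) in tube(path, R_loc):
        for j in range(height(y, k) + 1):
            out.update((z, k - j) for z in range(y - j, y + j + 1))
    return out
```

If all heights are 0 (no zeros of η along the path need explaining), the decorated tube coincides with the tube itself.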
Note that D_n is precisely the time (for the walk) at which the reasons for η_{−n}(y) = 0 for all y from the R_loc-neighbourhood of X_n are explored by inspecting all the determining triangles with base points in B_{R_loc}(X_n) × {−n}. The information η_{−n}(y) = 0 does not affect the law of the random walk after time D_n. Note that the 'height' of a non-empty triangle D(y, −n) exceeds the corresponding connection depth of (y, −n) by 1; this is why a '+2' appears in definition (2.45). Now, between time n and D_n the random walk might have explored more negative information, which in general will be decided after time D_n and will affect the law of the random walk thereafter. To deal with this cumulative negative future information we define recursively a sequence of stopping times σ_0 ≤ σ_1 ≤ · · ·. In words, σ_i is the first time m after σ_{i−1} when the reasons for all the negative information that the random walk explores in the time interval σ_{i−1}, . . . , m are decided 'locally', and thus the law of the random walk after time σ_i does not depend on that negative information. The σ_i are stopping times with respect to the filtration F = (F_n)_{n=0,1,2,...}, where

F_n := σ(X_j : 0 ≤ j ≤ n) ∨ σ(η_j(y), ω(y, j) : (y, j) ∈ tube_n) ∨ σ(ω(y, j) : (y, j) ∈ dtube_n). (2.47)

Note that by construction we have η_{−σ_i}(y) = 1 for all y ∈ B_{R_loc}(X_{σ_i}). Lemma 2.17. When p is sufficiently close to 1 there exist finite positive constants c and C so that

P(σ_{i+1} − σ_i > n | F_{σ_i}) ≤ Ce^{−cn} for all n = 1, 2, . . . , i = 0, 1, . . . a.s., (2.48)

in particular, all σ_i are a.s. finite. Furthermore, we have the stochastic domination bound (2.49) for all i = 0, 1, . . . a.s., where '≼' denotes stochastic domination.
Proof. Throughout the proof we write R̄_κ := (2R_κ + 1)^d and R̄_loc := (2R_loc + 1)^d for the number of elements in B_{R_κ}(0), respectively in B_{R_loc}(0). Consider first the case i = 0 in (2.48). The event {σ_1 > n} enforces that in the R_loc-vicinity of the path there are space-time points (y_j, −j) with η_{−j}(y_j) = 0 for j = 0, 1, . . . , n. For a fixed choice of the y_j's, by Lemma 2.11 the probability of this event is bounded by ε(p)^n. We use a relatively crude estimate to bound the number of relevant vectors (y_0, y_1, . . . , y_n) ∈ (Z^d)^{n+1}, as follows. There are R̄_κ^n possible n-step paths for the walk. Assume there are exactly k time points along the path, say 0 ≤ m_1 < · · · < m_k ≤ n, at which the corresponding determining triangle is not empty (when n > 1, we necessarily have m_1 = 0 or m_1 = 1).
For consistency of notation we write m_{k+1} = n. Then the height of D(y_{m_i}, −m_i) is bounded below by m_{i+1} − m_i. For a fixed n-step path of X and fixed m_1 < · · · < m_k, there are at most R̄_loc^k choices for the y_{m_i}, i = 1, . . . , k, and inside D(y_{m_i}, −m_i) we have at most R̄_κ^{m_{i+1}−m_i−1} choices to pick y_{m_i+1}, y_{m_i+2}, . . . , y_{m_{i+1}−1} (start with y_{m_i}, then follow a longest open path which is not connected to Z^d × {−∞}; these sites are necessarily zeros of η). Thus, there are at most exponentially many possible choices of (y_0, y_1, . . . , y_n), and hence the resulting bound decays exponentially when p is close to 1, so that ε(p) is small enough. For general i > 0, (2.48) follows by induction, employing (2.49) and the argument for i = 0. In order to verify (2.49), note that the stopping times σ_i are special in the following sense: on the one hand, at a time σ_i the 'negative information' in F_{σ_i}, that is, the knowledge of some zeros of η in the R_loc-neighbourhood of the path, has been 'erased', because the reasons for those zeros are decided by local information contained in F_{σ_i}. On the other hand, the 'positive information', that is, the knowledge of ones of η, enforces the existence of certain open paths for the ω's, and this information is possibly retained. Thus, (2.49) follows from the FKG inequality for the ω's.
Proof. The assertion is an easy consequence of (2.49) and Lemma 2.11.
For t ∈ N we define R t := inf{i ∈ Z + : σ i ≥ t} and for m = 1, 2, . . . we bound the tail of σ R t − t as in (2.52), using Lemma 2.17. Similarly, for m ≥ 2 (we assume implicitly that σ i ≤ t − n − m − 1, for otherwise the conditional probability appearing on the right-hand side of (2.53) equals 0) we obtain the first inequality from Lemma 2.17 and then argue analogously to the proof of (2.52) for the second inequality. For m = 2, the chain of inequalities above yields the bound claimed in (2.53). Thus, (2.53) holds (with a suitable adaptation of the value of the prefactor).
As a result of (2.49) and Assumption 2.2, the walk is well-behaved at least along the sequence of stopping times σ i ; we formalise this in the following result.
Lemma 2.20. When p is sufficiently close to 1 there exist finite positive constants c and C so that (2.54) holds for all finite F-stopping times T with T ∈ {σ i : i ∈ N} a.s. and all k ∈ N, and (2.55) holds for j < k, with s max as in Lemma 2.16.
Proof. Note that by Lemma A.1, we may assume that T = σ ℓ for some ℓ ∈ Z + ; the general case follows by writing 1 = Σ ℓ 1{T = σ ℓ }. Using (2.55) from Lemma 2.20 we can make the second sum small by choosing m sufficiently large and s > s max . Then we can make the first sum small (or even vanish) by picking b − M sufficiently large.
Recall that R κ is the range of the random walk X. Inequality (2.52) from Lemma 2.19 implies that P( T − k ≥ (M − R loc )/R κ | F T ) can be made arbitrarily small by choosing M sufficiently large. On B T,k ∩ {T − k < (M − R loc )/R κ }, which has high probability under P( · | F T ), we have by construction that the path together with its R loc -tube is covered by a suitably shifted cone with base point (X T , −T ).
It remains to verify that under P( · | F T ), with high probability the decorations are also covered. Note that dtube T is contained in a union of space-time rectangles whose heights are given by the corresponding τ 's. A geometric argument shows that on the event above the conditional probability of failure is at most C m^2 e^{−c(M 1 + ℓ 1 m)} a.s. on {T < k}, which can be made arbitrarily small when M 1 and ℓ 1 are suitably tuned. This completes the proof of (2.58).
Note that the σ i defined in (2.46) are themselves not regeneration times, since (2.49) is in general not an equality of laws. We use another layer in the construction, with suitably nested cones, to forget the remaining positive information.
Recall the definition of cones and cone shells from (2.25), (2.27) and Figure 2. The following sets of 'good' ω-configurations in conical shells will play a key role in the regeneration construction. Let G(b inn , b out , s inn , s out , h) ⊂ {0, 1}^{cs(b inn ,b out ,s inn ,s out ,h)} be the set of all ω-configurations with the property stated in (2.59). Lemma 2.22. For p sufficiently close to 1, the probability that ω| cs(b inn ,b out ,s inn ,s out ,h) ∈ G(b inn , b out , s inn , s out , h) is close to 1, uniformly in h ∈ N. Proof. The assertion follows from Lemma 2.13 because if the event G 1 ∩ G 2 defined there occurs, then ω| cs(b inn ,b out ,s inn ,s out ,h) ∈ G(b inn , b out , s inn , s out , h) holds (recall Remark 2.14).
Let us denote the space-time shifts on Z d × Z by Θ (x,n) , i.e., Θ (x,n) (y, m) = (x + y, n + m); see (2.61). An elementary geometric consideration reveals that one can choose a deterministic sequence t ℓ ↑ ∞ with the property that (2.62) holds for ℓ ∈ N and |x| ≤ s max t ℓ+1 . Note that this essentially enforces t ℓ ≈ ρ^ℓ for a suitable ρ > 1. Indeed, a worst case picture (see Figure 5) shows what we need. Figure 5: Growth condition for the sequence (t ℓ ): The small inner cone is cone(t ℓ s max + b out , s out , t ℓ ) shifted to the base point (0, −t ℓ ). The big outer cone is cone(b inn , s inn , t ℓ+1 ) shifted to the base point (−t ℓ+1 s max , −t ℓ+1 ). The slope of the dashed line is s max . The sequence (t ℓ ) must satisfy a 1 < a 2 for a 1 = s out t ℓ + b out + s max t ℓ and a 2 = s inn t ℓ+1 + b inn − s max t ℓ+1 .
Thus, we can use (for ℓ sufficiently large) t ℓ = ρ^ℓ for any ρ > (s out + s max )/(s inn − s max ).
(2.63) Furthermore note that using Lemma 2.16 we obtain P( ∃ n ≤ t ℓ : |X n | > s max t ℓ ) ≤ Σ_{t ℓ s max /R κ ≤ n ≤ t ℓ} P( |X n | > s max n ) ≤ C e^{−c t ℓ} . (2.64) Since t ℓ grows exponentially in ℓ, the right-hand side is summable in ℓ. Thus, from some random ℓ 0 on, we have sup n≤t ℓ |X n | ≤ s max t ℓ for all ℓ ≥ ℓ 0 , and ℓ 0 has very short tails.
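The growth condition from Figure 5 can be checked numerically. In the sketch below, the slopes and base radii are hypothetical illustrative values; only the threshold ρ > (s out + s max )/(s inn − s max ) is taken from the text.

```python
# Numerical sanity check: with t_l = rho**l and rho above the threshold
# (s_out + s_max)/(s_inn - s_max), the nesting condition a1 < a2 from
# Figure 5 holds for all sufficiently large l.
s_out, s_inn, s_max = 2.0, 4.0, 1.0   # hypothetical slopes with s_inn > s_max
b_out, b_inn = 5.0, 3.0               # hypothetical base radii
rho = 1.1 * (s_out + s_max) / (s_inn - s_max)  # any rho above the threshold

def nesting_holds(l):
    t_l, t_next = rho ** l, rho ** (l + 1)
    a1 = s_out * t_l + b_out + s_max * t_l
    a2 = s_inn * t_next + b_inn - s_max * t_next
    return a1 < a2

print(all(nesting_holds(l) for l in range(25, 60)))  # True
```

The constants b out , b inn only shift the comparison by a fixed amount, which is why the condition involves the slopes alone and holds once t ℓ is large.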

Proof of Theorem 2.6
The proof of Theorem 2.6 relies on a regeneration construction and moment estimates for the increments between regeneration times (recall the discussion after Remark 2.7). We have now prepared all the ingredients to carry out the argument, which we prefer to give in a more verbal, descriptive style. While all the concepts and properties discussed below can easily be expressed in mathematical formulas, we believe that the resulting increase in length and in notational heaviness would burden the text unnecessarily without improving either readability or understandability. That said, the regeneration construction goes as follows (see also Figure 6): 1. Go to the first σ i after t 1 and check if η in the b out -neighbourhood of (X σ i , −σ i ) is ≡ 1. 2. If the event fails, we must try again. We successively check at times t 2 , t 3 , etc.: If not previously successful, at the ℓ-th step let σ̂ ℓ be the first σ i after t ℓ , and check if σ̂ ℓ is a cone point for the decorated path beyond t ℓ−1 with |X σ̂ ℓ | ≤ s max σ̂ ℓ , the η's in the b out -neighbourhood of (X σ̂ ℓ , −σ̂ ℓ ) are ≡ 1, the ω's in the corresponding conical shell are in the good set as defined in (2.59) and the path (with tube and decorations) up to time t ℓ−1 is contained in the box of diameter s out t ℓ−1 + b out and height t ℓ−1 . If this all holds, we have found the first regeneration time T 1 .
(We may assume that σ̂ ℓ−1 is suitably close to t ℓ−1 ; this has very high probability by Lemma 2.19.) 3. The path containment property holds from some finite ℓ 0 on. Given the construction and all the information obtained from it up to the (ℓ−1)-th step, the probability that the other requirements occur is uniformly high (for the cone time property use Lemma 2.21 with k = t ℓ ; use (2.49) to verify that the probability to see η ≡ 1 in a box around (X σ̂ ℓ , −σ̂ ℓ ) is high; use Lemma 2.22 to check that conditional on the construction so far, the probability that the ω's in the corresponding conical shell are in the good set is high; note that these ω's have not yet been looked at by the construction so far).
4. We will thus require at most a geometric number of t ℓ 's to construct the regeneration time T 1 . Then we shift the space-time origin to (X T 1 , −T 1 ) and start afresh, noting that by construction, the law of (η −k−T 1 (x + X T 1 )) x∈Z d ,k∈Z given all the information obtained in the construction so far equals the law of (η −k (x)) x∈Z d ,k∈Z conditioned on seeing the configuration η 0 ≡ 1 in the b out -box around 0.
The sequence t ℓ grows exponentially in ℓ with rate ρ (see (2.63)) and we need to go to at most a random ℓ with geometric distribution whose success parameter 1 − δ is very close to 1. We can thus enforce a finite, very high moment of the regeneration time: P(regeneration after time n) ≤ P(more than log n/ log ρ steps needed) ≤ δ^{log n/ log ρ} = n^{−a} , where a = log(1/δ)/ log ρ can be made large by choosing δ small and ρ close to 1. Both are achieved by choosing p close to 1.
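The last chain of inequalities rests on the identity δ^{log n / log ρ} = n^{−a} with a = log(1/δ)/log ρ; a quick numeric check (with illustrative values of δ and ρ, not from the text):

```python
import math

# Check of the tail-bound identity: delta**(log n / log rho) = n**(-a)
# with a = log(1/delta) / log(rho).  delta and rho are illustrative.
delta, rho = 0.1, 2.0
a = math.log(1 / delta) / math.log(rho)
for n in (10, 100, 10 ** 6):
    lhs = delta ** (math.log(n) / math.log(rho))
    assert math.isclose(lhs, n ** (-a))
print(round(a, 3))  # 3.322
```

Making δ smaller or ρ closer to 1 increases a, matching the statement that arbitrarily high moments can be enforced.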
We obtain a sequence of random times T 1 < T 2 < · · · such that the increments (X T i − X T i−1 , T i − T i−1 ) i≥2 are i.i.d. The existence of such regeneration times implies Theorem 2.6 by standard arguments; see e.g. the proof of Corollary 1 in [20] (it is easy to see from the construction that X T i − X T i−1 is not a deterministic multiple of T i − T i−1 ) and the proof of Theorem 4.1 in [28] for the functional CLT. Note that the speed must be 0 by the assumed symmetry; see Assumption 2.3.

Remark 2.23.
In the general case, without Assumption 2.3, the above argument yields that there must be a limiting speed; its value is given only implicitly. If in Assumption 2.3 we additionally required symmetries with respect to coordinate permutations and with respect to reflections along coordinate hyperplanes, then the limiting law Φ would be a (non-trivial) centred isotropic d-dimensional normal law; cf. the proof of Theorem 1.1 in [5].

A more abstract set-up
The goal of this section is to present an abstract set-up in which a renewal construction similar to the one of the previous section can be implemented. Our main motivation for this set-up is to study the dynamics of ancestral lineages in spatial populations, but it can likely be applied to other types of directed random walks in random environments.
In Sections 3.1 and 3.2, we present certain abstract assumptions on the random environment and the associated random walk. These assumptions allow us to control the behaviour of the random walk using a regeneration construction that is very similar to the one from Section 2. In particular, they allow us to link the model with oriented percolation, using a coarse-graining technique.
We would like to stress that coarse-graining does not convert the presented model into the one of the previous section. In particular, the nature of the regenerations is somewhat different. We will see that the sequence of regeneration times and associated displacements, (T i+1 − T i , X T i+1 − X T i ) i≥2 , is not i.i.d. but can be generated as a certain function of an irreducible, finite-state Markov chain and additional randomness. By ergodic properties of such chains, this leads to the same results as previously.
Theorem 3.1. Let the random environment η and the random walk X satisfy the assumptions of Sections 3.1 and 3.2 below with sufficiently small parameter ε U . Then the random walk X satisfies the strong law of large numbers with speed 0 and the annealed central limit theorem with non-trivial covariance matrix. A corresponding functional central limit theorem holds as well.
A concrete example satisfying the abstract assumptions of Sections 3.1 and 3.2 will be given in Section 4. They can also be verified for the oriented random walk on the backbone of the oriented percolation cluster which was treated in [5] using simpler, but related, methods.

Assumptions for the environment
We now formulate two assumptions on the random environment. The first assumption requires that the environment is Markovian (in the positive time direction), and that there is a 'flow construction' for this Markov process, coupling the processes with different starting conditions. The second assumption then allows us to use coarse-graining techniques and the link with oriented percolation.
Formally, let U := (U (x, n) : x ∈ Z d , n ∈ Z) be an i.i.d. random field, with U (0, 0) taking values in some Polish space E U (E U could be {−1, +1}, [0, 1], a path space, etc.). Furthermore, for R η ∈ N let B Rη = B Rη (0) ⊂ Z d be the ball of radius R η around 0 with respect to the sup-norm. Let φ : Z +^{B Rη} × E U^{B Rη} → Z + be a measurable function.
Assumption 3.2 (Markovian, local dynamics, flow construction). We assume that η := (η n ) n∈Z is a Markov chain with values in Z +^{Z d} whose evolution is local in the sense that η n+1 (x) depends only on η n (y) for y in a finite ball around x. In particular, we assume that η can be realised using the 'driving noise' U as η n+1 (x) = φ( θ x η n | B Rη , θ x U ( · , n + 1)| B Rη ). (3.1) Here θ x denotes the spatial shift by x, i.e., θ x η n ( · ) = η n ( · + x) and θ x U ( · , n + 1) = U ( · + x, n + 1). Furthermore, θ x η n | B Rη and θ x U ( · , n + 1)| B Rη are the corresponding restrictions to the ball B Rη .
Note that (3.1) defines a flow, in the sense that given a realisation of U we can construct η simultaneously for all starting configurations. In most situations we have in mind, the constant zero configuration 0 ∈ Z +^{Z d} is an equilibrium for η and there is another non-trivial equilibrium. It will be a consequence of our assumptions that the latter is in fact the unique non-trivial ergodic equilibrium.
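The flow property and the induced coupling can be illustrated with a toy monotone local rule on a ring; the update rule phi below is a hypothetical stand-in for the abstract measurable function in (3.1), and the parameters are illustrative only.

```python
import random

# Toy illustration of the flow property of (3.1): one realisation of the
# driving noise U defines the dynamics for all starting configurations
# simultaneously.  Because the local rule phi is monotone in its window,
# running two ordered initial configurations through the same U preserves
# the ordering for all times.
random.seed(3)
N, steps = 30, 60
U = [[random.random() for _ in range(N)] for _ in range(steps)]  # driving noise

def phi(window, u):
    # window = (eta(x-1), eta(x), eta(x+1)); a site is occupied in the next
    # generation if some neighbour is occupied and the local noise is 'good'
    return 1 if sum(window) >= 1 and u < 0.9 else 0

def evolve(eta0):
    eta = list(eta0)
    for n in range(steps):
        eta = [phi((eta[(x - 1) % N], eta[x], eta[(x + 1) % N]), U[n][x])
               for x in range(N)]
    return eta

a = evolve([1] * N)                                    # start from all ones
b = evolve([1 if random.random() < 0.5 else 0 for _ in range(N)])
# same noise, dominated start: the domination persists
print(all(bx <= ax for ax, bx in zip(a, b)))  # True
```

The all-ones start dominates any other start, so with the same noise the coupled copies can only merge, mirroring the contractivity discussed below.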
The second assumption, inspired by [6], allows for comparison of η with a supercritical oriented percolation on a suitable space-time grid. Loosely speaking, this assumption states that if we have a good configuration on the bottom of a (suitably big) block and the driving noise inside the blocks is good, too, then the configuration on the top of the block is also good and the good region grows with high probability. Furthermore if we input two good configurations at the bottom of the block then good noise inside the block produces a coupled region at the top of the block.
Formally, let L t , L s ∈ N. We use space-time boxes whose 'bottom parts' are centred at points in the coarse-grained grid L s Z d × L t Z. They will be partly overlapping in the spatial direction but not in the temporal direction, and we typically think of L s , L t as large scales.

and block( x, n) := block 1 ( x, n); see Figure 7. For a set A ⊂ Z d × Z, slightly abusing the notation, we denote by U | A the restriction of the random field U to A. In particular, U | block 4 ( x, n) is the restriction of U to block 4 ( x, n) and can be viewed as an element of E U^{block 4 ( x, n)} . We assume that there are 'good' sets of noise configurations on blocks, occurring with probability close to 1 (quantified by the parameter ε U ), with the following properties: Figure 7: Locality of the construction of (η n ) on the block level for d = 1. If U is known in the grey region and η nLt is known on the bottom of the dashed trapezium, then the configurations η k are completely determined inside block( x, n), drawn in solid lines.
• There is a fixed (e.g., L s -periodic or even constant in space) reference configuration η ref . Note that if the event in (3.4)-(3.5) occurs, then a coupling of η and η ′ on B 2L s (L s x) × { nL t } has propagated to B 2L s (L s ( x + e)) × {( n+1)L t } for ‖ e‖ ≤ 1, and also the fact that the local configuration is 'good' has propagated. The event in (3.4) enforces propagation of goodness and can also be viewed as a contractivity property of the local dynamics. In other words, the flow tends to merge local configurations once they are in the 'good set'. From the local construction of η given in (3.1) it follows easily (see Figure 7) that for fixed ( x, n) ∈ Z d × Z the values η n (x) for (x, n) ∈ block( x, n) are completely determined by η nL t restricted to B K η L s ( xL s ) and U restricted to ∪ ‖ y‖≤K η block( x + y, n).
Using the above assumptions, it is fairly standard to couple η to an oriented percolation cluster. Recall the notation in Section 2.1 and in particular the definition of the stationary discrete time contact process in (2.3).

Lemma 3.5 (Coupling with oriented percolation). Put
If ε U is sufficiently small, we can couple U ( x, n) to an i.i.d. Bernoulli random field ω( x, n) with P( ω( x, n) = 1) ≥ 1 − ε ω such that U ≥ ω, and ε ω can be chosen small (how small depends on ε U , of course).
Moreover, the process η then has a unique non-trivial ergodic equilibrium, and one can couple a stationary process η = (η n ) n∈Z with η 0 distributed according to that equilibrium with ω so that (3.8) and (3.9) below hold. Proof. The first part is standard: Note that the U ( x, n)'s are i.i.d. in the n-coordinate, with finite range dependence in the x-coordinate; the domination by an i.i.d. Bernoulli field then follows from standard comparison results for finitely dependent random fields. For the second part consider for each k ∈ N the process η (k) = (η (k) n ) n≥−kL t which starts from η (k) −kL t = η ref and evolves according to (3.1) for n ≥ −kL t , using given U 's which are coupled to ω's as above so that U ≥ ω holds. We see from the coupling properties guaranteed by Assumption 3.3 and Lemma 3.12 below that the law of η (k) restricted to any finite space-time window converges. By a diagonal argument we can take a subsequence k m ↑ ∞ such that η n (x) := lim m→∞ η (k m ) n (x) exists a.s. for all (x, n) ∈ Z d × Z; then (3.8) and (3.9) hold by construction.
The fact that the law of the limit is the unique non-trivial ergodic equilibrium can be proved analogously to [6, Cor. 4]. Remark 3.6 (Clarification about the relation between ξ and η). The contact process ξ is defined here with respect to ω analogously to the definition of the discrete time contact process η with respect to ω in (2.3). The rationale behind this change of notation is that throughout the paper η is a stationary population process (contact process in Section 2 and logistic BRW in Section 4) and the random walk X is interpreted as an ancestral lineage of an individual from that population. The coarse-grained contact process ξ plays a different role. In particular, the knowledge of ξ alone does not determine the dynamics of X; cf. the definition of X in (3.11).
Finally, we need the following technical assumption, which is sufficiently strong for our purposes but can presumably be relaxed.

Assumptions for random walk
We now state the assumptions for the random walk X = (X k ) k=0,1,... in the random environment generated by η. To this end let U := ( U (x, k) : x ∈ Z d , k ∈ Z + ) be an independent space-time i.i.d. field of random variables uniformly distributed on (0, 1).

Furthermore let
a measurable function, where R X ∈ N is an upper bound on the jump size as well as on the dependence range. Given η, let X 0 = 0 and define the transition probabilities of X via (3.11). Note that, as usual, the forwards time direction for X is the backwards time direction for η. (b) The simple Assumption 3.9 allows us to obtain a rough a priori bound on the speed of the walk and suffices for our purposes here; a more elaborate version would require successful couplings of the coordinates of X with true random walks with a small drift while on the box, similar to the proof of Lemma 2.16.
Assumption 3.11 (Symmetry of ϕ X w.r.t. point reflection). Consider the (spatial) point reflection operator acting on η, under which η k (x) becomes η k (−x) for any k ∈ Z and x ∈ Z d . We require the invariance property (3.13). Note that (3.13) guarantees that the averaged speed of X will be 0.

The determining cluster of a block
We now explain how Theorem 3.1 can be proved using similar ideas as in Section 2.
In order to avoid repetitions and to keep the length of the paper acceptable, we only explain the major differences to the proof of Theorem 2.6.
The main change to be dealt with is the fact that the construction of the random walk X requires not only the knowledge of the coarse-grained oriented percolation ξ, but also of the underlying random environment η. This additional dependence on η must be controlled at regeneration times. To tackle this problem, Assumption 3.3 and Lemma 3.5 play the key role. By this lemma, the value of η n (x) can be reconstructed by looking only at the driving noise U in a certain finite set 'below' (x, n). Formally, for ( x, n) ∈ Z d × Z we define its determining cluster DC( x, n) by the following recursive algorithm: 1. Initially, put k := n, DC( x, n) := {( x, n)}. Going down in time, the algorithm then successively adds those blocks whose driving noise is needed to determine the configuration in the blocks already collected. The resulting set DC( x, n) is finite a.s. with exponential tail bounds.
Proof. This can be shown as in the proof of Lemma 2.11; see alternatively Lemma 7 in [11] or the proof of Lemma 14 in [6]. To see this, consider the system η ′ := (η ′ n : ( n−1)L t ≤ n ≤ ( n+1)L t ) which starts from η ′ ( n−1)L t = η ref and uses the fixed boundary condition η ′ n (y) = η ref (y) for ‖y − L s x‖ > 5L s and ( n − 1)L t < n ≤ ( n + 1)L t . For (y, n) ∈ block 5 ( x, n) ∪ block 5 ( x, n − 1) the values η ′ n (y) are computed using (3.1) with the same realisations of U as the true system η.
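The recursive structure of the determining cluster can be illustrated with a toy simulation. The branching rule and the probability eps of a bad block below are hypothetical and do not reproduce the paper's exact algorithm; the point is that a subcritical cascade of bad blocks yields small clusters.

```python
import random

# Toy determining-cluster recursion: a block whose driving noise is 'good'
# is determined locally, while a 'bad' block needs the blocks one time
# level below it.
random.seed(1)
eps = 0.05            # probability that a block's noise is bad (hypothetical)
good_cache = {}

def is_good(site):
    if site not in good_cache:
        good_cache[site] = random.random() > eps
    return good_cache[site]

def determining_cluster(x, n, floor=-10 ** 6):
    dc, stack = {(x, n)}, [(x, n)]
    while stack:
        y, k = stack.pop()
        if is_good((y, k)) or k <= floor:
            continue                      # reconstructed from local noise
        for dy in (-1, 0, 1):             # bad block: descend one level
            site = (y + dy, k - 1)
            if site not in dc:
                dc.add(site)
                stack.append(site)
    return dc

sizes = [len(determining_cluster(x, 0)) for x in range(200)]
print(max(sizes) < 100)   # for small eps, clusters stay small
```

When eps is small the expected number of newly added blocks per bad block is below 1, so the recursion terminates quickly, in line with the a.s. finiteness and exponential tails claimed in the text.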

A regeneration structure
In this section we construct regeneration times similar to those constructed in Section 2.3. First we need to introduce the analogue of the 'tube around the path' and its 'decoration with determining triangles'; cf. equations (2.42), (2.43) and (2.44). We define the coarse-graining function π : Z d → Z d by (3.19) and denote by ρ(x) the relative position of x inside the block centred at π(x)L s ; see (3.20). We define the coarse-grained random walk X = ( X n ) n=0,1,... and the relative positions Y = ( Y n ) n=0,1,... by X n := π(X nL t ) and Y n := ρ(X nL t ). (3.21) We need to keep track of the relative positions to preserve the Markovian structure. Note that between the original random walk and the coarse-grained components just defined we have the relation X nL t = L s X n + Y n . We define the filtration F := ( F n ) n=0,1,... accordingly. To mimic the proofs of Section 2 for the model considered here we need the following ingredients: 1. As in Lemma 2.16, there exist s max (that is close to 1/4 under our assumptions) and positive constants C, c such that P( | X n | > s max n ) ≤ Ce^{−c n} . Lemma 3.14. When 1 − ε ω is sufficiently close to 1 there exist finite positive constants c and C so that P( σ i+1 − σ i > n | F σ i ) ≤ Ce^{−c n} for all n = 1, 2, . . . , i = 0, 1, . . . a.s., (3.26) in particular, all σ i are a.s. finite. Furthermore, (3.27) holds for every i = 0, 1, . . . a.s., where the symbol in (3.27) denotes stochastic domination.
Proof. Analogous to the proof of Lemma 2.17 (see also Lemma 3.12).
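The coarse-graining maps π and ρ and the relation between X and its coarse-grained components can be illustrated in one dimension. The rounding convention used for π below is an assumption; the paper's definition (3.19) may differ in details.

```python
# A one-dimensional sketch of the coarse-graining maps: pi maps a site to
# the index of its L_s-block, rho gives the displacement from the block
# centre, and together they decompose a position exactly.
L_s = 10

def pi(x):
    return (x + L_s // 2) // L_s        # coarse-grained block index

def rho(x):
    return x - L_s * pi(x)              # displacement from the block centre

for x in range(-50, 51):
    assert x == L_s * pi(x) + rho(x)    # exact decomposition
    assert abs(rho(x)) <= L_s // 2      # rho stays inside one block
print("ok")
```

This is why tracking the pair ( X n , Y n ) loses no information about X nL t , while X alone would.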
Similarly to the definition in (2.56), we say that n is a (b, s)-cone time point for the decorated path beyond m (with m < n) if the corresponding containment condition holds. In words, as in Section 2.3, n is a cone time point for the decorated path beyond m if the space-time path ( X ℓ , − ℓ), ℓ = m, . . . , n, together with its 'tilde'-decorations, is contained in the cone with base radius b, slope s and base point ( X n , − n). We now define 'good configurations' of ω's (analogous to (2.59)). Recall the definition of a cone shell in (2.27). Let G(b inn , b out , s inn , s out , h) ⊂ {0, 1}^{cs(b inn ,b out ,s inn ,s out ,h)} be the set of possible ω-configurations in cs(b inn , b out , s inn , s out , h) with the property stated in (3.30). Note that if ξ( x, 0) = 1 for x in the ball B b out (0) and ω| cs(b inn ,b out ,s inn ,s out ,h) ∈ G(b inn , b out , s inn , s out , h), then {η n (x) : (x, n) ∈ block( x, n), ( x, n) ∈ cone(b inn , s inn , h)} is a function of η 0 (y), ‖y‖ ≤ b out L s , and U | block 4 ( x, n) , ( x, n) ∈ cone(b inn , s inn , h). In particular, if we start with different η ′ 0 and U ′ which agree with η 0 and U on these sets, the same configuration is produced inside the cone. Proof sketch for Theorem 3.1. We now have all the ingredients for the regeneration construction, to imitate the proof of Theorem 2.6. We again choose to keep the arguments more verbal and descriptive, hoping to strike a sensible balance between notational precision and readability.
First we choose a sequence t 0 , t 1 , . . . with t ℓ ↑ ∞ such that (2.62) is satisfied with the coarse-grained s max replacing s max and the parameters b out , s out , b inn and s inn adapted from Lemma 3.15. Recall from Remark 3.13 that on the event { ξ( x, n) = 1}, η| block( x, n) is determined by U | block 5 ( x, n)∪block 5 ( x, n−1) .
1. Go to the first σ i after t 1 and check whether in the b out -neighbourhood of ( X σ i , − σ i ) we have ξ ≡ 1, the path (together with its tube and decorations) has stayed inside the interior of the corresponding conical shell based at the current space-time position, and the ω's in that conical shell are in the good set as defined in (3.30).
This has positive (in fact, very high) probability (cf. Lemma 3.15) and if it occurs, we have found the 'regeneration time'.
2. If the event fails, we must try again. We successively check at times t 2 , t 3 , etc.: If not previously successful, at the ℓ-th step let σ J(ℓ) be the first σ i after t ℓ , and check if σ J(ℓ) is a cone point for the decorated path beyond t ℓ−1 with | X σ J(ℓ) | ≤ s max σ J(ℓ) , the ξ's in the b out -neighbourhood of ( X σ J(ℓ) , − σ J(ℓ) ) are ≡ 1, the ω's in the corresponding conical shell are in the good set as defined in (3.30) and the path (with tube and decorations) up to time t ℓ−1 is contained in the box of diameter s out t ℓ−1 + b out and height t ℓ−1 . If this all holds, we have found the regeneration time.
(We may assume that σ J(ℓ−1) is suitably close to t ℓ−1 ; this has very high probability by an adaptation of Lemma 2.19.) 3. The path containment property holds from some finite ℓ 0 on. Given the construction and all the information obtained from it up to the (ℓ−1)-th step, the probability that the other requirements occur is uniformly high: For the cone time property use Lemma 3.15 with k = t ℓ ; use (3.27) to verify that the probability to see ξ ≡ 1 in a box around ( X σ J(ℓ) , − σ J(ℓ) ) is high; use (a notational adaptation of) Lemma 2.22 to check that conditional on the construction so far, the probability that the ω's in the corresponding conical shell are in the good set G(b inn , b out , s inn , s out , t ℓ ) is high. Note that these ω's have not yet been looked at.
4. We thus construct a random time R 1 with the following properties: (i) ξ( X R1 + y, R 1 ) = 1 for all y ≤ b out ; (ii) the decorated path up to time R 1 is in cone(b inn , s inn , R 1 ) centred at ( X R1 , R 1 ); (iii) after centring the cone at base point ( X R1 , R 1 ), ω| cs(binn,bout,sinn,sout, R1) lies in the good set G(b inn , b out , s inn , s out , R 1 ).
We will thus require at most a geometric number of t ℓ 's to construct R 1 . As in step 4 in the proof of Theorem 2.6 we obtain P( R 1 ≥ n) ≤ P(more than log n/ log ρ steps needed) ≤ δ^{log n/ log ρ} = n^{−a} , where again a can be chosen large when p is close to 1.

Set Y 1 := ρ(X L t R 1 ), the displacement of X L t R 1 relative to the centre of the L s -box in which it is contained.
Now we shift the space-time origin to ( X R 1 , − R 1 ) (on the coarse-grained level). Then we start afresh, conditioned on seeing (i) the configuration ξ ≡ 1 in the b out -box around 0 (on the coarse-grained level); (ii) η 1 on the b out L s -box (on the 'fine' level); (iii) the displacement of the walker on the fine level relative to the centre of the corresponding coarse-graining box given by Y 1 .
6. We iterate the above construction to obtain a sequence of random times R i , positions X R i , relative displacements Y i and local configurations η i . By construction, along the random times L t R n , X is an additive functional of a well-behaved Markov chain (with exponential mixing properties), whose increments have high moments by Step 4. From this representation the (functional) central limit theorem can be deduced; see e.g. Chapter 1 in [19] or Theorem 2 in [26].
Note that the speed of the random walk must be 0 by the symmetry assumption; see (3.13).

Example: an ancestral lineage of logistic branching random walks
In this section we consider a concrete stochastic model for a locally regulated, spatially distributed population that was introduced and studied in [6] and we refer the reader to that paper for a more detailed description, interpretation, context and properties.
We call this model logistic branching random walk because the function f in (4.1), which describes the dynamics of the local mean offspring numbers, is a 'spatial relative' of the classical logistic function x → x(1−x), which appears in many (deterministic) models for population growth under limited resources. See also Remark 5 below for a discussion of related models and possible extensions. After defining the model we recall, and slightly improve, some relevant results from [6]. Then in Proposition 4.7 we show that in a high-density regime (see Assumption 4.2) the assumptions from Section 3 are fulfilled by the logistic branching random walk and the corresponding ancestral random walk.

Ancestral lineages in a locally regulated model
Let p = (p xy ) x,y∈Z d = (p y−x ) x,y∈Z d be a symmetric aperiodic stochastic kernel with finite range R p ≥ 1. Furthermore let λ = (λ xy ) x,y∈Z d be a non-negative symmetric kernel satisfying 0 ≤ λ xy = λ 0,y−x and having finite range R λ . We set λ 0 := λ 00 , and for a configuration ζ ∈ R +^{Z d} and x ∈ Z d we define f (x; ζ) as in (4.1). We consider a population process η := (η n ) n∈Z with values in Z +^{Z d} , where as in the previous sections η n (x) is the number of individuals at time n ∈ Z at site x ∈ Z d . Before giving a formal definition of η let us describe the dynamics informally: Given the configuration η n in generation n, each individual at x (if any are present at all) has a Poisson distributed number of offspring with mean f (x; η n )/η n (x), independent of everything else. Offspring then take an independent random walk step according to the kernel p from the location of their mother. The offspring of all individuals together then form the next generation's configuration η n+1 . For obvious reasons, p and λ are referred to as migration and competition kernels, respectively. Note that in the case λ ≡ 0 the process η is literally a branching random walk. We now give a formal construction of η. The state space, which is a Polish space as a closed subset of the (usual) Skorokhod space D, and the driving Poisson processes are set up so that η n+1 is obtained from η n via (4.4). Note that for each x, the right-hand side of (4.4) is a finite sum of (conditionally) Poisson random variables with finite means bounded by ‖f‖ ∞ . Thus, (4.4) is well defined for any initial condition - in this discrete time scenario, no growth condition at infinity, etc. is necessary. Furthermore, we note that by well known properties of Poisson processes, η n+1 given η n is a family of conditionally independent random variables. Set G m,n := σ( U (x,y) k : m ≤ k < n, x, y ∈ Z d ). (4.6) By iterating (4.4), we can define a random family of G m,n -measurable mappings Φ m,n . Using these mappings we can define the dynamics of (η n ) n=m,m+1,... simultaneously for all initial conditions η m ∈ Z +^{Z d} for any m ∈ Z.
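The informal description above can be sketched as a one-step simulation on Z (d = 1). The regulation function f used below, f(x; η) = η(x) · max(m − Σ_y λ_{xy} η(y), 0), is an illustrative stand-in for (4.1), and all kernels and parameters are hypothetical.

```python
import math
import random

# One step of a (toy) logistic branching random walk: each individual at x
# has a Poisson number of offspring with mean f(x; eta)/eta(x), and each
# child takes one step of the migration kernel p from its mother's site.
random.seed(7)
m, lam = 2.0, 0.003                  # mean offspring and competition strength
p_step = ((-1, 0.25), (0, 0.5), (1, 0.25))   # symmetric migration kernel p

def f(x, eta):
    comp = sum(lam * eta.get(y, 0) for y in (x - 1, x, x + 1))
    return eta.get(x, 0) * max(m - comp, 0.0)

def poisson(mean):
    # inverse-transform sampling of a Poisson random variable
    u, k, p = random.random(), 0, math.exp(-mean)
    s = p
    while u > s:
        k += 1
        p *= mean / k
        s += p
    return k

def one_step(eta):
    nxt = {}
    for x, n_ind in eta.items():
        mean_per_ind = f(x, eta) / n_ind if n_ind else 0.0
        for _ in range(n_ind):                    # each mother reproduces ...
            for _ in range(poisson(mean_per_ind)):
                u, acc, y = random.random(), 0.0, x
                for dy, pr in p_step:             # ... each child migrates
                    acc += pr
                    if u <= acc:
                        y = x + dy
                        break
                nxt[y] = nxt.get(y, 0) + 1
    return nxt

eta = {x: 100 for x in range(-5, 6)}   # start from a high-density block
eta = one_step(eta)
print(sum(eta.values()) > 0)
```

With λ ≡ 0 the competition term vanishes and the sketch reduces to an ordinary branching random walk, matching the remark in the text.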
Let us for a moment consider the process η = (η n ) n=0,1,... . Obviously, the configuration 0 ∈ Z +^{Z d} is an absorbing state for η, so the Dirac measure on this configuration is a trivial invariant distribution of η. In [6] it is shown that for certain parameter regions, in particular m ∈ (1, 4) and suitable λ, the population survives with positive probability. For m ∈ (1, 3) (and again suitable λ) the existence and uniqueness of a non-trivial invariant distribution are proven. We recall the relevant results for m ∈ (1, 3). Proposition 4.1 (Survival and complete convergence, [6]). Assume m ∈ (1, 3) and let p and λ be as above.
(ii) Conditioned on non-extinction, η n converges in distribution in the vague topology to ν.
Since we are only interested in the regime where the corresponding deterministic system, cf. (4.14) below, is well controlled and where, in particular, Proposition 4.1 guarantees that a non-trivial invariant extremal distribution ν exists, we make the following general assumption. Under it, the ancestral walk satisfies a law of large numbers with speed 0, and for any g ∈ C b (R d ), E[ g(X n /√n) ] → Φ(g) as n → ∞, (4.13) where Φ is a non-trivial d-dimensional normal law and Φ(g) := ∫ g(x) Φ(dx).
Proof. The assertions of the theorem follow from a combination of Proposition 4.7 and Theorem 3.1.

Coupling reloaded
Remark 4.6 (Initial/boundary conditions on certain space-time regions). Note that for any n ∈ N, Φ 0,n as defined in (4.8) can be viewed as a function of (U (x,y) m : 0 ≤ m < n, x, y ∈ Z d ).
Let L ∈ N and let R p be the range of p; put cone(L, R p ) := {(x, n) ∈ Z d × Z + : ‖x‖ ≤ L + R p n} (4.20) (recalling (2.25), we have cone(L, R p ) = ∪ h>0 cone(L, R p , h)). For given values of η k (x) outside cone(L, R p ) (we can view the relevant set as a 'space-time boundary' of cone(L, R p )), we can define η n consistently inside cone(L, R p ) through (4.4).
In fact, we can think of constructing the space-time field η in a two-step procedure: First, generate the values outside cone(L, R p ) (in any way consistent with the model), then, conditionally on their outcome, use (4.4) inside.
Proof. The crucial idea is that, using the flow version (4.4), we can augment the coupling argument of Lemma 13 in [6] to work with a whole set of (good) initial conditions, with α, β from (4.17) and the (uncountable) index set I being defined implicitly here.
The proof consists of 6 steps. For parameters K t , K s , K t ′ to be suitably tuned below, we set L s = K s log(1/γ) and choose the time scales analogously. In the first step, we use the propagation properties of the deterministic system as described in Lemma 4.4, together with the fact that for small γ the relative fluctuations of the driving Poisson processes are typically small, to ensure that after time L t the 'good region' has increased sufficiently.
In the second step we use the flow version (4.4) and its contraction properties to ensure that in a subregion, after L t steps, coupling has occurred with high probability.
Several copies of such subregions are then glued together in Steps 3 and 4. In Step 5 we use the fact that in a good region, the relative fluctuations of η are small so that p η (k; x, y) is close to the deterministic kernel p xy ; this ensures (3.12). Finally, in the last step we collect the requirements on the various constants that occurred before and verify that they can be fulfilled consistently.
Step 1. Let X 1 be the event that the relative fluctuations of the driving Poisson processes are small (quantified via some fixed constant c > 0); its probability can be made arbitrarily close to 1 by choosing γ small.
By iterating (4.17) in combination with (4.18) we see that the good region propagates if the ratio L t /L s is chosen sufficiently large. To verify this, note that we can consider η as a perturbation of the deterministic system ζ from (4.14), and on X 1 the relative size of the perturbation is small when γ is small (cf. [6, Eq. (13) and the proof of Lemma 7]).
Then (4.10) implies (α/β) p_xy ≤ p^η(k; x, y) ≤ (β/α) p_xy for x, y ∈ B_{2L_s}(0) and k = 1, ..., L_t; hence the total variation distance between p^η(k; x, ·) and p_{x,·} is at most (1 − α/β) ∨ (β/α − 1), uniformly inside this space-time block. We use Lemma 4.4, (iii) to make this so small that coupling arguments as in the proof of Lemma 2.16 (with a comparison random walk that has a deterministic drift d_max = L_s/(L_t + L_t')) show (3.12).
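The total variation bound used here follows from the ratio bounds alone: if q(y)/p(y) ∈ [r_min, r_max] for all y, then d_TV(p, q) ≤ (1 − r_min) ∨ (r_max − 1), which with r_min = α/β, r_max = β/α gives the stated estimate. The following sketch (not from the paper; vector length and sample count are arbitrary choices) checks this numerically for random pairs of probability vectors:

```python
# Numerical sanity check: for probability vectors p, q with ratios
# r(y) = q(y)/p(y), one has d_TV(p, q) <= (1 - min r) v (max r - 1).
import random

random.seed(0)

def tv(p, q):
    # total variation distance between two probability vectors
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

for _ in range(1000):
    w = [random.random() + 0.01 for _ in range(6)]
    p = [x / sum(w) for x in w]
    v = [random.random() + 0.01 for _ in range(6)]
    q = [x / sum(v) for x in v]
    ratios = [qi / pi for qi, pi in zip(q, p)]
    bound = max(1 - min(ratios), max(ratios) - 1)
    assert tv(p, q) <= bound + 1e-12
print("bound verified on 1000 random pairs")
```

The inequality is elementary: the mass lost on {q < p} is at most (1 − min r) and the mass gained on {q > p} is at most (max r − 1), and d_TV equals both.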
Step 6. Finally, we verify that the constants K_s, K_t, K_t' can be chosen consistently so that all intermediate requirements are fulfilled.
1. The right-hand side of (4.22) can be chosen arbitrarily close to 1 for any choice of K_s, K_t, K_t' by making γ small.
4. The right-hand side of (4.33) can be made close to 1 if K_t > (−log c(ε))^{−1} (and γ is small). This also implies that the right-hand side of (4.36) can be chosen arbitrarily close to 1.

For (4.38) note that the relevant quantity is a fixed ratio when γ is small, and 1 − P(X_2) can be made small by choosing γ small. We see that for γ ≤ γ* for some γ* > 0, all requirements can be fulfilled, e.g. by choosing K_t := 2/(−log c(ε)), K_s := C(R_p + R_λ)K_t with some large C, and K_t' := 6 s_0 K_s.

Discussion of further classes of population models
While in this article we analysed only one explicit spatial population model, namely logistic branching random walks (LBRW) defined in (4.4), with the dynamics of the space-time embedding of an ancestral lineage given by (4.11), we do believe that the same program can be carried out for many related population models and that LBRW is in this sense prototypical; see also [6, Remark 5]. We list and discuss some of these in the following five paragraphs. Note that implementing the details to verify the conditions of Theorem 3.1 for these models will still require substantial technical work, and we defer this to future research.

More general 'regulation functions'
The logistic function x → x(m − λx) (with λ small enough), whose 'spatial version' is used in the definition (4.1), can be replaced by some function φ : R_+ → [0, a], a ∈ (0, ∞) ∪ {∞}, with φ'(0) > 1 and lim_{x→a} φ(x) = 0 that possesses a unique attracting fixed point x* = φ(x*) > 0. This will ensure that a result analogous to Lemma 4.4 holds, and an analogue of Proposition 4.7 can be obtained when a suitable small parameter is introduced. For example, in the ecology literature, in addition to the logistic model, the Ricker model, corresponding to φ(x) = x exp(r − λx), and the Hassell model, corresponding to φ(x) = mx/(1 + λx)^b, are also used to describe population dynamics under limited resources (r > 0, resp. m > 1 and b > 0, are parameters). Note that in all these cases, 1/λ is related to a carrying capacity, so assuming λ small means weak competition.
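The fixed-point behaviour of these three regulation functions can be illustrated by simple iteration. The sketch below is not from the paper; the parameter values (m = 1.5, r = 0.4, λ = 0.05, b = 1) are hypothetical and chosen only so that each map has φ'(0) > 1 and a unique attracting fixed point x* > 0:

```python
# Fixed-point iteration for the non-spatial versions of three regulation
# functions phi; all parameter choices are illustrative.
import math

def logistic(x, m=1.5, lam=0.05):
    return x * (m - lam * x)               # fixed point (m - 1)/lam = 10

def ricker(x, r=0.4, lam=0.05):
    return x * math.exp(r - lam * x)       # fixed point r/lam = 8

def hassell(x, m=1.5, lam=0.05, b=1.0):
    return m * x / (1.0 + lam * x) ** b    # for b = 1: fixed point (m - 1)/lam = 10

def iterate(phi, x0=1.0, n=1000):
    # plain fixed-point iteration; converges here since |phi'(x*)| < 1
    x = x0
    for _ in range(n):
        x = phi(x)
    return x

for phi, x_star in ((logistic, 10.0), (ricker, 8.0), (hassell, 10.0)):
    assert abs(iterate(phi) - x_star) < 1e-9
print("all three maps converge to their positive attracting fixed point")
```

With these parameters |φ'(x*)| ∈ (0, 1) for all three maps, so the iteration contracts geometrically towards x*, mirroring the attracting-fixed-point condition stated above.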
More general families of offspring distributions As described in the informal discussion above (4.2), (4.5) can be interpreted as stipulating that each individual at y in generation n has a Poisson number of offspring (with mean f(y; η_n)/η_n(y)), which then independently take a random walk step. One could replace the Poisson distribution by some other family of distributions L(X(ν)) on N_0 that is parametrised by the mean E[X(ν)] = ν ∈ [0, ν̄], where ν̄ ≥ sup_{y, η ≢ 0} f(y; η)/η(y), and then define the model accordingly. If the family of offspring laws satisfies a suitably quantitative version of the law of large numbers (cf. Step 1 of the proof of Proposition 4.7), one can derive an analogue of Proposition 4.7.
For example, one could take an N_0-valued random variable X with mean E[X] = ν̄ and E[e^{aX}] < ∞ for some a > 0 and then define X(ν) via independent thinning, i.e. X(ν) is obtained from X by retaining each of its X 'units' independently with probability ν/ν̄.

'Moderately' small competition parameters As it stands, Theorem 4.3 requires sufficiently (in fact, very) small competition parameters (cf. Assumption 4.2). This is owed to the fact that our abstract 'work-horse' Theorem 3.1 requires (very) small ε_U in Assumption 3.3 and ε in Assumption 3.9 (we in fact did not spell out explicit bounds).
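The thinning construction can be sketched as a short Monte Carlo check. The base law below (Binomial(8, 1/2), so ν̄ = 4) is a stand-in chosen only because it has exponential moments; it is not a distribution used in the paper:

```python
# Independent thinning: X has mean NU_BAR, and X(nu) retains each of the
# X 'units' independently with probability nu / NU_BAR, so E[X(nu)] = nu.
import random

random.seed(1)
NU_BAR = 4.0

def sample_X(rng=random):
    # stand-in offspring law with mean NU_BAR = 4 and E[exp(aX)] < infinity:
    # Binomial(8, 1/2)
    return sum(rng.random() < 0.5 for _ in range(8))

def sample_X_nu(nu, rng=random):
    # thin X with retention probability nu / NU_BAR
    p = nu / NU_BAR
    return sum(rng.random() < p for _ in range(sample_X(rng)))

n, nu = 200_000, 1.5
est = sum(sample_X_nu(nu) for _ in range(n)) / n
print(est)  # empirical mean, close to nu = 1.5 by the law of large numbers
```

Since each of the E[X] = ν̄ units survives with probability ν/ν̄, the thinned variable indeed has mean ν for every ν ∈ [0, ν̄], as the construction requires.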
In simulations of LBRW, one observes apparent stabilisation to a non-trivial 'equilibrium', as required by Assumption 3.3, also for moderately small competition parameters λ_xy.
We note that the assumptions of Theorem 3.1 are 'effective' in the sense that they only require controlling the system (η n ) and the walk in certain finite space-time boxes.
Thus, a suitably quantified version of Theorem 3.1 would allow one, at least in principle, to ascertain via simulations that for a given choice of parameters m, (p_xy) and (λ_xy), the system (η_n) has a unique non-trivial ergodic equilibrium and that the conclusions of Theorem 4.3 hold.
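Such a simulation can be sketched in a few lines. The following is a heavily simplified, hypothetical variant of LBRW on a one-dimensional torus: competition is purely local (λ_{yy} = λ, λ_{yz} = 0 otherwise), migration is nearest-neighbour uniform, and the per-individual Poisson offspring mean (m − λη(y)) is truncated at 0; all parameter values are illustrative, not taken from the paper:

```python
# Toy LBRW on a 1-d torus: each individual at site y has a Poisson number of
# offspring with mean max(0, M - LAM * eta(y)); offspring independently move
# to a uniform site in {y-1, y, y+1}.
import math
import random

random.seed(42)
M, LAM = 1.5, 0.05        # reproduction and (local) competition parameters
SITES, GENS = 100, 200    # torus size and number of generations

def poisson(lam, rng=random):
    # Knuth's method; adequate for the small means occurring here
    if lam <= 0.0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def step(eta, rng=random):
    new = [0] * SITES
    for y, n_y in enumerate(eta):
        # total offspring at y: sum of n_y i.i.d. Poisson variables
        mean_offspring = n_y * max(0.0, M - LAM * n_y)
        for _ in range(poisson(mean_offspring, rng)):
            new[(y + rng.choice((-1, 0, 1))) % SITES] += 1
    return new

eta = [10] * SITES        # start at the deterministic fixed point (M-1)/LAM
densities = []
for g in range(GENS):
    eta = step(eta)
    if g >= GENS // 2:    # record the spatial mean density after burn-in
        densities.append(sum(eta) / SITES)

avg = sum(densities) / len(densities)
print(avg)
```

In runs of this toy version, the spatial mean density stabilises at a non-trivial level of the order of the carrying capacity (M − 1)/λ = 10 (stochastic fluctuations depress it somewhat below the deterministic value), consistent with the stabilisation described above.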
Continuous-time and continuous-mass models An infinite system of interacting diffusions that can be obtained as a time- and mass-rescaling of LBRW is considered in [14], see Definition 1.3 there; one can in principle define an 'ancestral lineage' in such a model, which will be a certain continuous-time random walk in (the time-reversal of) a random environment. It is conceivable that a coarse-graining construction similar to the one discussed here can be implemented and that in fact an analogue of Theorem 4.3 can be proved, at least for suitably small interaction parameters.
Reversible Markov systems Obviously, we tailored Theorem 3.1 and its assumptions to a random walk that moves in the time-reversal of a non-reversible Markov system η which possesses two distinct ergodic equilibria, our prime example being the (discrete time) contact process.
If we instead assume that η is a reversible Markov system with local dynamics and 'good' mixing properties possessing a unique equilibrium (for example, the stochastic Ising model at high temperature), the assumptions from Section 3.1 will be fulfilled as well; this case is in fact easier, since 'good blocks' in η_{(n+1)L_t} as required in (3.4) of Assumption 3.3 will have uniformly high probability anyway, irrespective of η_{nL_t}.
Assume in addition that we can verify Assumption 3.9 for the walk. For example, this can be done by requiring that the walk is a (sufficiently small) perturbation of a fixed symmetric random walk, or by assuming a (small) a priori bound on the drift. Then we do not need to require the symmetry assumption (3.13) (in fact, the resulting walk can have non-zero speed). This re-reading of Theorem 3.1 and its proof allows us to recover a special case of [27, Thm. 3.6], where, using entirely different methods, a CLT is obtained for random walks in dynamic environments that satisfy sufficiently strong coupling and mixing properties.

A An auxiliary result
The following result should be standard; we give here a brief argument for completeness' sake and for lack of a precise reference. Thus, we obtain A ∩ {T = T'} ∈ F_{T'}, and a similar argument for the other case shows the assertion. By approximation arguments we find that it is also F_T-measurable, and the claimed identity holds for A ∈ F_T. This concludes the proof of the lemma.