Random walks generated by equilibrium contact processes

We consider dynamic random walks whose nearest-neighbour jump rates are determined by an underlying supercritical contact process in equilibrium. Such walks were previously studied by den Hollander and dos Santos, and by den Hollander, dos Santos and Sidoravicius. We prove a central limit theorem for such a random walk, valid for all supercritical infection rates of the contact process environment.


Introduction
In this note we consider a random motion (X_t)_{t≥0} in Z generated by a supercritical one-dimensional contact process (ξ_t)_{t≥0} in upper equilibrium ν̄. We suppose that the motion (X_t)_{t≥0} performs nearest-neighbour jumps with rates depending on the local values of ξ_t: there exist r_0 < ∞ and functions g_1 and g_{−1}, depending only on the spins within distance r_0 of the origin, so that for all x, t and i = ±1,

P(X_{t+h} = X_t + i | X_s, ξ_s, s ≤ t) = h g_i(θ_{X_t} ∘ ξ_t) + o(h) as h → 0,

where (θ_y ∘ ξ)(x) = ξ(y + x) for all x, y. By contrast, the evolution of the process X does not affect that of the contact process ξ.
Remark: For simplicity we take X to be a nearest-neighbour random walk and ξ to be a nearest-neighbour symmetric contact process. The approach and result given here extend without difficulty to random walks whose jumps are finite range (with all jump rates being appropriate shifts of cylinder functions of ξ). Equally, with a bit more care, the arguments can be adapted to deal with finite range contact processes. Our result is the following.

Theorem 1.1. For all λ > λ_c and any non-trivial (i.e. not identically zero) g_i as above, there exist µ ∈ R and α > 0 so that (X_t − µt)/(α√t) converges in distribution to a standard normal random variable as t → ∞.

This result has already been shown for λ large in the case r_0 = 0, see [dHS] and [dHSS], by a nice regeneration argument. We exploit in this article the strong regeneration properties of (ξ_t)_{t≥0} but in a different way, though we also embed an i.i.d. sequence of random variables in our process. To our knowledge the first central limit theorem for the contact process is due to [GP], who considered the position of the rightmost occupied site for a one-sided supercritical contact process. A beautiful alternative proof was produced by [K] (who wrote his approach explicitly for oriented percolation). The central limit proof of [dHS] is in this tradition.

We suppose, as is usual, that the process ξ is generated by a Harris system; details will be supplied in the next section. The process X is generated by a Poisson process N^X of rate M′ > ‖g_1‖_∞ + ‖g_{−1}‖_∞ and associated i.i.d. uniform [0,1] random variables {U_i}_{i≥1}: if t ∈ N^X is the i'th Poisson point, then a jump from X_{t−} to X_{t−} − 1 occurs only if U_i ∈ [0, g_{−1}(θ_{X_{t−}} ∘ ξ_{t−})/M′], and a jump to X_{t−} + 1 occurs only if U_i ∈ [1 − g_1(θ_{X_{t−}} ∘ ξ_{t−})/M′, 1]. Thus, irrespective of the behaviour of (ξ_t)_{t≥0} over a time interval I, if N^X ∩ I = ∅ then X makes no jumps over I. We now fix throughout the paper M > M′.
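In pseudocode, the thinning rule above can be sketched as follows. This is a minimal illustration of ours, not code from the paper; the function name and arguments are made up. At each ring of the rate-M′ Poisson clock, the attached uniform mark decides the jump.

```python
def walk_jump(u, g_minus, g_plus, m_prime):
    """Decide the jump of X at one ring of the rate m_prime Poisson clock.

    u        -- the uniform [0,1] mark U_i attached to the Poisson point
    g_minus  -- g_{-1} evaluated on the environment seen from the walker
    g_plus   -- g_{+1} evaluated on the environment seen from the walker
    Requires g_minus + g_plus < m_prime, so the two acceptance intervals
    [0, g_minus/m_prime] and [1 - g_plus/m_prime, 1] are disjoint.
    """
    if u <= g_minus / m_prime:
        return -1          # jump to the left
    if u >= 1 - g_plus / m_prime:
        return +1          # jump to the right
    return 0               # no jump at this ring
```

The condition M′ > ‖g_1‖_∞ + ‖g_{−1}‖_∞ is exactly what keeps the two acceptance intervals disjoint, so each clock ring produces at most one jump.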
In the following we will use the expression dynamic random walk to denote a pair (ξ, X) evolving according to the rules stipulated above. We will say a pair (ξ, X) is a piecewise dynamic random walk if it evolves according to these rules on intervals but may have a discrete set of global jumps in the pair (ξ, X).

A reminder on the contact process
The contact process with parameter λ > 0 on a connected graph G = (V, E) is a continuous-time Markov process (ξ_t)_{t≥0} with state space {0, 1}^V and generator

Lf(ξ) = Σ_{x∈V} ξ(x) [f(ξ^{x,0}) − f(ξ)] + λ Σ_{{y,z}∈E} ( ξ(y)(1 − ξ(z)) [f(ξ^{z,1}) − f(ξ)] + ξ(z)(1 − ξ(y)) [f(ξ^{y,1}) − f(ξ)] ),   (2.1)

where f is any local function on {0, 1}^V and, given x ∈ V and i ∈ {0, 1}, ξ^{x,i} denotes the configuration equal to ξ everywhere except at x, where it takes the value i. Given A ⊆ V, we write (ξ^A_t)_{t≥0} to denote the contact process started from the initial configuration that is equal to 1 at vertices of A and 0 at other vertices. When we write (ξ_t), with no superscript, the initial configuration will either be clear from the context or unimportant. We often abuse notation and identify configurations ξ ∈ {0, 1}^V with the corresponding sets {x ∈ V : ξ(x) = 1}.
The contact process is a model for the spread of an infection in a population. Vertices of the graph (sometimes referred to as sites) represent individuals. In a configuration ξ ∈ {0, 1} V , individuals in state 1 are said to be infected, and individuals in state 0 are healthy. Pairs of individuals that are connected by edges in the graph are in proximity to each other in the population. The generator (2.1) gives two types of transition for the dynamics. First, infected individuals heal with rate 1. Second, given two individuals in proximity so that one is infected and the other is not, with rate λ there occurs a transmission, as a consequence of which both individuals end up infected.
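As an illustration only (not part of the paper), the two transition types can be simulated by a standard Gillespie scheme on a finite segment; the function below is a hypothetical sketch of ours, with free boundary conditions.

```python
import random

def contact_step(xi, lam, rng):
    """One continuous-time step of the contact process on sites 0..L-1:
    returns (waiting_time, new_configuration).  Infected sites (value 1)
    heal at rate 1; healthy sites are infected at rate
    lam * (number of infected nearest neighbours)."""
    L = len(xi)
    rates = []
    for x in range(L):
        if xi[x]:
            rates.append((1.0, x))                       # healing
        else:
            k = sum(xi[y] for y in (x - 1, x + 1) if 0 <= y < L)
            if k:
                rates.append((lam * k, x))               # transmission
    total = sum(r for r, _ in rates)
    if total == 0.0:                                     # all-healthy trap
        return float('inf'), xi
    t = rng.expovariate(total)                           # waiting time
    u, acc = rng.random() * total, 0.0
    for r, x in rates:                                   # pick a transition
        acc += r
        if u <= acc:
            xi = xi[:x] + (1 - xi[x],) + xi[x + 1:]
            break
    return t, xi
```

From the all-zero configuration the function reports an infinite waiting time, reflecting that 0 is a trap for the dynamics.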
The configuration 0 ∈ {0, 1} V that is equal to zero at all vertices is a trap for (ξ t ). For certain choices of the underlying graph G and the parameter λ, it may be the case that the probability of the event {0 is never reached} is positive even if the process starts from finitely many infected sites. In fact, whether or not this probability is positive does not depend on the set of initially infected sites, as long as this set is nonempty and finite. We say that the process survives if this probability is positive; otherwise we say that the process dies out. Survival or not depends on the value of the parameter λ. As is intuitive, there is a value λ c (depending on G) so that there is survival above λ c and nonsurvival below.
We now recall the graphical construction of the contact process and its self-duality property. Fix a graph G = (V, E) and λ > 0. We take the following family of independent Poisson point processes on (−∞, ∞): for each x ∈ V a process D^x of rate 1, and for each e ∈ E a process N^e of rate λ. Let H denote a realization of all these processes. Given x, y ∈ V and s ≤ t, we say that x and y are connected by an infection path in H (and write (x, s) ↔ (y, t) in H) if there exist times t_0 = s < t_1 < · · · < t_k = t and vertices x_0 = x, x_1, . . . , x_{k−1} = y such that t_i ∈ N^{{x_{i−1}, x_i}} for i = 1, . . . , k − 1 and D^{x_i} ∩ [t_i, t_{i+1}] = ∅ for i = 0, . . . , k − 1. Such a collection will be called a path from (x, s) to (y, t) (here and elsewhere, we drop the dependence on H if a Harris system is given). Points of the processes (D^x) are called death marks and points of (N^e) are links; infection paths are thus paths that traverse links and do not touch death marks. H is called a Harris system; we often omit the dependence on H.
Given A ⊆ V, we then set

ξ^A_t(y) = 1{A × {0} ↔ (y, t) in H}   (2.2)

(here and in the rest of the paper, 1 denotes the indicator function). It is well known that the process (ξ^A_t)_{t≥0} = (ξ^A_t(H))_{t≥0} thus obtained has the same distribution as that defined by the infinitesimal generator (2.1). The advantage of (2.2) is that it allows us to construct on the same probability space versions of the contact process with all possible initial distributions. From this joint construction, we also obtain the attractiveness property of the contact process: if A ⊆ B then ξ^A_t ⊆ ξ^B_t for all t. From now on, we always assume that the contact process is constructed from a Harris system. In discussing dynamic random walks, it will be understood that the Poisson process N^X and the associated uniform random variables are also part of the Harris system. Now fix A ⊆ V, t ∈ R and a Harris system H. We define the dual process (ξ̂^{A,t}_s)_{0≤s<∞} by ξ̂^{A,t}_s(y) = 1{(y, t−s) ↔ A × {t} in H}.
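The attractiveness property can be seen concretely by running two initial sets through one and the same (simulated) Harris system; the following is an illustrative sketch of ours, not code from the paper.

```python
import random

def harris_events(n_sites, lam, horizon, rng):
    """Sample a Harris system on sites 0..n_sites-1 up to time `horizon`:
    rate-1 death marks at each site and rate-lam links on each
    nearest-neighbour edge, merged into one time-ordered event list."""
    events = []
    for x in range(n_sites):
        t = rng.expovariate(1.0)
        while t < horizon:
            events.append((t, 'death', x))
            t += rng.expovariate(1.0)
    for x in range(n_sites - 1):          # edge {x, x+1}
        t = rng.expovariate(lam)
        while t < horizon:
            events.append((t, 'link', x))
            t += rng.expovariate(lam)
    return sorted(events)

def run_contact(events, infected):
    """Run the contact process through the Harris events from the given
    initial infected set: death marks heal, links spread infection."""
    xi = set(infected)
    for _, kind, x in events:
        if kind == 'death':
            xi.discard(x)
        elif x in xi or x + 1 in xi:      # link between x and x+1
            xi.update((x, x + 1))
    return xi
```

Since ξ^A and ξ^B are read off the same event list, A ⊆ B forces ξ^A_t ⊆ ξ^B_t at every step, whatever the realization.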
If A = {x}, we write (ξ̂^{x,t}_s). This process satisfies two important properties. First, the distribution of (ξ̂^{x,t}_s, s ≥ 0) is the same as that of a contact process started from the same initial configuration. Second, it satisfies the duality equation

ξ^A_t(x) = 1 if and only if ξ̂^{x,t}_t ∩ A ≠ ∅.

In particular, P(ξ^1_t(x) = 1) = P(ξ̂^{x,t}_t ≠ ∅), where (ξ^1_t) is the process started from full occupancy. Also (for λ > λ_c), if we put ξ_0(x) = 1 if and only if ξ̂^{x,0}_· never dies out, then the configuration ξ_0 has the upper equilibrium distribution ν̄.
We will talk of a contact process {ξ_t}_{t≥0} restricted to R ⊆ V × R to mean the contact process generated by Harris system paths that are entirely contained in R. This is interpreted to signify that ξ_t(x) = 0 for each (x, t) ∉ R. We remark that if R_1 and R_2 are disjoint, then, conditional upon initial configurations, two contact processes restricted respectively to R_1 and R_2 are independent. When necessary we use the notation ξ^R or ξ^{A,R} to denote contact processes restricted to space-time regions R, with A standing for the initial configuration.
We use the suffix t to denote contact processes run from time t. From now on we will consider the supercritical contact process (λ > λ c ) on the integer lattice, V = Z with E the set of nearest neighbour edges.
We now recall classical results about the contact process on the line. The proposition below can be found in [DG] or Theorem 3.23, chapter VI of [Li1].
Proposition 2.1. There exists a constant c_1 ∈ (0, ∞) so that, for τ the stopping time equal to the first hitting time of the configuration 0 by the process, we have P(t < τ < ∞) ≤ e^{−c_1 t} for any initial configuration ξ_0.
One important consequence of the above, which will indeed be exploited, is the fact that if, instead of considering as occupied (at a given time) the sites whose dual survives forever, we consider the sites whose dual survives to a large time t, then the resulting configuration has a distribution very close to equilibrium.
A contact process (ξ^x_t)_{t≥0} has exponentially small (in t) probability of surviving until time t but subsequently dying out. Furthermore, by considering large deviations of the rightmost descendant r^x_t and leftmost descendant l^x_t (see [DSch] and Theorem 3.23 of [Li1]), we have the following, where | · | refers to cardinality.

Lemma 2.2. There exist constants h_1, c > 0 so that for the event H(t) = {ξ^x_t ≠ ∅, |ξ^x_t| < h_1 t}, one has P(H(t)) ≤ e^{−ct}.
The classical renormalization argument comparing the contact process with supercritical oriented percolation (see for instance the proof of [Li1, Corollary VI.3.22]) and classical contour arguments for oriented percolation (see e.g. [Du1]) give the following.

Lemma 2.3. Given γ and θ both strictly greater than β, there exists a constant c_2 so that for (ξ_t : t ≥ 0) the contact process restricted to the rectangle [0, N] × [0, T], one has: if (i) ξ_0 has no gaps of size N^β and (ii) the dual ξ̂^{x,T}_{T−N^γ} has cardinality at least N^θ, then the conditional probability that ξ_T(x) = 1 is at least 1 − e^{−c_2 N^β} for N sufficiently large.
Similarly we arrive at Lemma 2.4. In particular we have the following.
Corollary 2.5. There exists a constant c_4 ∈ (0, +∞) so that if x ∈ (−n^2 2^n, n^2 2^n) and ξ_t is a configuration with no vacant intervals of length n^{3/2} in [−2n^2 2^n, 2n^2 2^n], then outside a set of probability at most n^9 e^{−c_4 log^{3/2}(n)}, the configuration ξ_{t+n^4} has no gap of size log^{3/2}(n) within n^{10} of x, for all n.
Simple large deviations estimates for the rightmost particle for a one-sided initial configuration give the following.

Lemma 2.6. There exists a constant c_5 ∈ (0, ∞) so that for a given Harris system, the chance that there is a path from (−∞, 0) × (0, M) to (RM, ∞) × (0, M) is less than e^{−c_5 R} for all R > 1/c_5, M > 1.
Writing p_x for the probability that ξ′(x) ≠ ξ(x), the following is obvious: if Σ_x p_x ≤ 1/2, then the law of ξ′ is absolutely continuous with respect to the equilibrium measure ν̄ and has Radon–Nikodym derivative bounded by 2.
Remark: We do not actually need this absolute continuity, but rather that ξ′ has the equilibrium distribution conditioned on an event of reasonable probability.
We now define R_x and C_x adapted to the scale 2^n. We first fix h_1, the positive constant of Lemma 2.2.
A) For |x| ≤ n^9: where t(n) = log^4(n)/2 and C_x is the event that at time t(n), ξ̂^x ∩ [−n^9, n^9] has size at least h_1 t(n).
To verify the condition for n large, we first note that the summands for |x| > n^9 are zero. Secondly, by Proposition 2.1, taking c_1 the positive constant in that statement, we obtain a bound which converges to zero as n tends to infinity. For the term Σ_x P(ξ′(x) = 0, ξ(x) = 1) summed over |x| ≤ n^9, we apply Lemma 2.2 to get the required bounds.
We now alter this definition for |x| > n^9. The objective is to define a configuration which is essentially the same as above but which is independent of certain rectangles of the Harris system. The "cost" of losing absolute continuity and changing the values far away is small compared to the independence gained. We replace condition B) with

B′) for |x| > n^9, we set ξ′(x) = 1.

It is to be noted that with this amended definition the configuration ξ′ is independent of the Harris system on Z × (−∞, t(n)). We let ν(= ν(n)) be the distribution of the configuration ξ′ with the rules given in A) and B′) above, conditioned on the event that for all x ∈ [−n^9, n^9], ξ′(x) = 1 whenever ξ̂^x survives till time t(n).

A regeneration time
The purpose of this section is to describe a regeneration time σ = σ(n, T) associated to a space and time scale 2^n and a stopping time T (also called an n order regeneration time). The regeneration time will be a stopping time (if appropriate auxiliary uniform random variables are added to our Harris system) occurring after the stopping time T. The construction will be such that at time σ a random configuration ξ′_σ will be produced so that (i) shifted by X_σ, ξ′_σ has distribution ν on [−n^9, n^9], and (ii) with high probability ξ′_σ(x) = ξ_σ(x) for all x within n^9 of X_σ. The idea is that in the subsequent evolution of a dynamic random walk (ξ′, X′) with X′_σ = X_σ, we have X′_s = X_s with very high probability; see Lemma 4.3.

We will suppose n is fixed and drop it from the notation for our regeneration time σ = σ(n). We also translate our Harris system temporally by T and spatially by X_T, so in the following we take (T, X_T) = (0, 0). The time σ is obtained via a series of runs. Each run will probably fail, but if a run succeeds then, as far as evolution on a scale of 2^n is concerned, the process will start from a given distribution (which will depend weakly on n). In the following, t could be any time in (0, 2^{2n}), but see the discussion at the end of the section on extending it to times in [0, 2^{Kn}] for K large but fixed. We begin a run at time t (for the first run, t = 0) by considering the joint (ξ, X) process on the time interval (t, t + n^4 + log^4(n)). If the run is successful, then the distribution of ξ_{t+n^4+log^4(n)} shifted by X_{t+n^4+log^4(n)} will be ν at least on the interval (−n^9, n^9). If it is unsuccessful, then we try a subsequent run at time t + n^4 + log^4(n), and so on until a successful run is obtained.
A run consists of at most five stages. The run will be a success (in the sense that σ = t + n^4 + log^4(n)) either if the first stage is a failure or if all five stages succeed. The latter case will be good from the point of view of (ii) above, while the first case will be bad (but mercifully of small probability). The first stage should succeed with very large probability. The second stage will succeed with probability of order e^{−M log^4(n)}. The next two will succeed with high probability (for n large), and the last with a reasonable probability, given the success of the previous stages.
The first stage consists simply of checking whether there is a vacant gap of size n^{3/2} for ξ_t on (X_t − 2n^2 2^n, X_t + 2n^2 2^n). If so, we stop the run and designate σ = t + n^4 + log^4(n). We put X′_σ equal to X_σ and take ξ′_σ to be a random configuration, independent of the natural Harris system for (ξ, X), so that, shifted by X_σ, ξ′_σ has distribution ν. Technically this makes the run a success (since it has established σ), but of course in this case any link between ξ and ξ′ has been severed, and it will be treated as a "disaster". It is easy to see, however, that the chance that this occurs for some t in [0, 2^{Kn}] is bounded by e^{−cn^{3/2}} for some universal c (see Lemma 5.1 below). Thus the contribution to the various integrals considered in later sections will be negligible. Confusingly, if there are no n^{3/2} gaps we describe the first stage of the run as a success.
The second stage is a success if we have 1) N^X(t + n^4) − N^X(t) < M n^4 and 2) N^X(t + n^4 + log^4(n)) − N^X(t + n^4) = 0.
We remark that the first condition is satisfied with probability tending to one as n becomes large, while the second condition has probability exactly e^{−M log^4(n)}. Together the two conditions imply that, whatever the contact process might be, X moves less than M n^4 on the time interval (t, t + n^4) and is constant on the time interval (t + n^4, t + n^4 + log^4(n)). As with subsequent stages, if this stage is a failure we let the process run up until time t + n^4 + log^4(n) in order to regain the Markov property.
Given that the second stage is a success, we pass to the next, and require that on the interval (X(t + n^4) − n^9, X(t + n^4) + n^9) there be no gaps of size log^{3/2}(n) for ξ_{t+n^4}.
We note here that, as ξ_t has no n^{3/2} gaps, the chance of this event is close to one by Corollary 2.5.
For |x| > n^9 we set ξ′_{t+n^4+log^4(n)}(X(t + n^4 + log^4(n)) + x) = 1. The fourth stage is successful if ξ′_{t+n^4+log^4(n)}(x) ≥ ξ̃_{t+n^4+log^4(n)}(x), where ξ̃_{t+n^4+log^4(n)}(x) = 1 if and only if the dual ξ̂^{x+X(t_1), t_1}, with t_1 = t + n^4 + log^4(n), survives for time t(n) = log^4(n)/2 (equivalently, ξ̃_· is the contact process defined on the Harris system from time t + n^4 + log^4(n)/2, starting with full occupancy). By Proposition 2.1 and Lemma 2.2, the probability of success at the fourth stage tends to one as n tends to infinity (the calculation is essentially given in Section 3). It is to be noted that this condition relies on a Harris system disjoint from (and independent of) the Harris systems observed in stage 2. It should also be noted that we are not claiming (and do not require) that the conditional probability of a successful fourth stage be close to one.
Finally, for the fifth stage, we note that, provided the requisite stages have been successfully passed, the conditional chance that for every x with |X_{t+n^4} − x| ≤ n^9 and ξ′_{t+n^4+log^4(n)}(x) = 1 one has ξ′_{t+n^4+log^4(n)}(x) = ξ_{t+n^4+log^4(n)}(x) is at least 3/4 for n large, as an easy calculation shows. This conditional probability will depend on the random configuration ξ′_{t+n^4+log^4(n)} as well as the configuration ξ_t; let us denote it by p(ξ′_{t+n^4+log^4(n)}, ξ_t). Having introduced an auxiliary independent uniform random variable U associated to the run (enlarging the probability space if necessary), we then say that the run is (globally) a success if ξ′_{t+n^4+log^4(n)}(x) = ξ_{t+n^4+log^4(n)}(x) and U ≤ (3/4)/p(ξ′_{t+n^4+log^4(n)}, ξ_t). Using this randomization procedure we see that, conditionally on success, the distribution of ξ_{t+n^4+log^4(n)} shifted by X_{t+n^4+log^4(n)} and restricted to the interval [−n^9, n^9] coincides with ν restricted to [−n^9, n^9].
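The role of the auxiliary variable U is to flatten the conditional success probability to exactly 3/4, whatever the value p ≥ 3/4 happens to be. The following toy sketch (our own illustration, with made-up names) shows this rejection step in isolation.

```python
import random

def fifth_stage_success(p, rng):
    """One trial of the randomized acceptance step.

    p -- the conditional agreement probability p(xi', xi_t), assumed >= 3/4.
    The run succeeds iff the agreement event occurs AND the auxiliary
    uniform U falls below (3/4)/p, so P(success) = p * (3/4)/p = 3/4,
    independently of p."""
    agrees = rng.random() < p     # stand-in for the agreement event
    U = rng.random()              # auxiliary uniform attached to the run
    return agrees and U <= 0.75 / p
```

Because the success probability no longer depends on p, conditioning on success does not bias towards configurations for which agreement was more likely, which is what makes the conditional law of ξ exactly ν on [−n^9, n^9].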
Notation: For a stopping time T, we let σ_T = σ_T(n) denote the end time of the first successful run after beginning the runs at time T:

σ_T = inf{T + k(n^4 + log^4(n)) : a successful run is initiated at time T + (k − 1)(n^4 + log^4(n))}.

We say σ_T is the T regeneration. We say that a disaster occurs at σ_T if ξ′_{σ_T}(x) ≠ ξ_{σ_T}(x) for some x within n^9 of X_{σ_T}.
We easily obtain the following.
Proposition 4.1. There exists a constant c_6 ∈ (0, ∞) so that for n large and any stopping time T for the filtration (F_t) determined by the (ξ, X) process, σ_T, the first time of a successful run starting at T, satisfies P(σ_T > T + n^8 e^{M log^4(n)} | F_T) < e^{−c_6 n^3}.
We also have the following.

Proposition 4.2. Let K ≥ 2. Let T be any stopping time for the filtration (F_t) determined by the (ξ, X) process (augmented by suitable independent uniform random variables) and let

τ = τ(T) = inf{t > T : |X(t) − X(T)| ≥ n^8 or ξ_t has a vacant n^{3/2} interval in (X(T) − 2^{Kn}, X(T) + 2^{Kn})},

and, with σ_T the T regeneration time, let (ξ^{σ_T}_t, X^{σ_T}_t)_{t≥σ_T} be the dynamic contact process run with the given Harris system but starting with X^{σ_T}_{σ_T} = X_{σ_T} and ξ^{σ_T}_{σ_T}(x) = ξ_{σ_T}(x) for |x − X_{σ_T}| ≤ n^9, = 1 elsewhere. There exists a constant c_7 ∈ (0, ∞) so that for n large, outside of probability e^{−c_7 n^3}, either τ(T) < T + n^5, or ∀t ∈ (σ_T, σ_T + 2^{2n}), X_t = X^{σ_T}_t and, ∀t ∈ (σ_T + n^5, σ_T + 2^{2n}), the configurations ξ_t and ξ^{σ_T}_t agree on an interval around X_t.

A consequence is that, in considering n order runs and n order regenerations on the time interval [0, 2^{Kn}], we can apply the above and use the fact that the chance (under equilibrium) that τ(T) < σ_T + n^5 is small, as follows from Lemma 5.1 below and simple tail bounds on the Poisson process N^X.
The above result gives the following, which is crucial for our regeneration arguments: it implies that replacing our configuration ξ_σ at regeneration times by a regenerated ξ′_σ does not change the evolution of X, thus facilitating an i.i.d. structure for X.

Lemma 4.3. Consider two dynamic random walks (ξ, X) and (ξ′, X′) run with the same Harris system and so that X_0 = X′_0 and ξ_0(x) = ξ′_0(x) for all x with |x − X_0| ≤ n^9. Then, for a suitable positive constant c_8, outside of probability e^{−c_8 n^2}, we have either a) ξ_0 has an n^{3/2} gap within 2^{4n} of X_0, or b) X_{n^4} = X′_{n^4} and ξ_{n^4}(x) = ξ′_{n^4}(x) for |x − X_{n^4}| ≤ 2^{3n}.

Existence of normalizing constants
In this section we wish to use our coupling time to establish the existence of µ and α so that, as n → ∞,

E[X_{2^n}]/2^n → µ and 2^{−n} E[(X_{2^n} − 2^n µ)^2] → α^2.
We first state the following general result, which is shown through basic techniques.

Lemma 5.1. There exists a constant c_9 > 0 so that for all large n, if the contact process ξ is in equilibrium, then

P(∃ t ≤ 2^{3n}, |x| ≤ 2^{4n} so that ξ_t ≡ 0 on (x, x + n^{3/2})) ≤ e^{−c_9 n^{3/2}}.
Considering (ξ′, X′) associated to a regeneration time σ = σ(n, T) for T = 0 as in the previous section, and ν = ν(n) the measure defined in Section 3, we will need the following.

Lemma 5.2. There exists a constant c_10 < ∞ so that for all n,

|E[X_{2^n}]/2^n − E_{ν(n)}[X_{2^n}]/2^n| < c_10 n^9 e^{M log^4(n)}/2^n.
Proof. Let σ = σ(n, 0) be the n order regeneration time for time 0 and let D be the event that either (i) σ > n^8 e^{M log^4(n)} or (ii) X_{2^n} ≠ X′_{2^n} (where X′ is the dynamic random walk resulting from the regeneration time σ). By Lemmas 4.3 and 5.1 and Proposition 4.1 we have

P(D) < e^{−c_9 n^{3/2}} + e^{−c_6 n^3} + e^{−c_8 n^2}.

But on the complement of D we have X_{2^n} = X′_{2^n} (for n large), and so the result follows from easy bounds and the Cauchy–Schwarz inequality.
Lemma 5.3. There exists a constant c_11 < ∞ so that for all n,

|E[X_{2^{n+1}}] − 2E[X_{2^n}]| ≤ c_11 n^9 e^{M log^4(n)}.

Proof. We begin by taking θ to be the regeneration time after 2^n. By arguments similar to those used for Lemma 5.2, E[X_{2^{n+1}}] = 2E[X_{2^n}] + O(n^9 e^{M log^4(n)}). From this the lemma follows.
Remark: It is not difficult to see that E(X_t)/t converges to µ; see for instance the proof of Lemma 5.9.
We now look for a bound for E[(X_{2^{2n}} − 2^{2n}µ)^2]. As we have seen, µ = lim_{n→∞} E(X_{2^n})/2^n exists; furthermore, by Lemma 5.2, for all n, |E(X_{2^n}) − 2^n µ| ≤ c n^9 e^{M log^4(n)} for some universal c.

Given a dynamic random walk (ξ, X) and a scale n, we define a sequence of renewal points β_i, i ≥ 1, as follows: let β_1 = σ(0), the regeneration time for the stopping time 0. Subsequently, for i ≥ 1, we define β_{i+1} so that β_{i+1} − β_i is the regeneration time for the stopping time 2^n for the dynamic random walk (ξ^i, X^i), where (i) (ξ^i, X^i) is generated by the Harris system temporally shifted by β_i; (ii) for |x − X^i_{β_{i+1}−β_i}| ≤ n^9, ξ^{i+1}_0(x) = (ξ^i)′_{β_{i+1}−β_i}(x), and elsewhere ξ^{i+1}_0(x) = 1; (iii) X^{i+1}_0 = X^i_{β_{i+1}−β_i}. (We take (ξ^0, X^0) to be the original dynamic random walk (ξ, X) and β_0 = 0.)

We wish to deal only with "good" i: we say that i is good if all 0 ≤ j < i are good and if (a) β_{i+1} − β_i ≤ 2^n + n^8 e^{M log^4(n)}. We define the random variables Z_i, i ≥ 1, by (with S = inf{i : i is not good}): for j < S, Z_j = X^j_{β_{j+1}−β_j} − X^j_0; for j ≥ S, the Z_j are taken from an independent i.i.d. sequence of random variables with distribution that of Z_1 conditioned on 1 being good. We note that unless a disaster occurs at some stage β_j (i.e. ξ^j_0(x) ≠ ξ^{j−1}_{β_j−β_{j−1}}(x) for some x within n^9 of X^{j−1}_{β_j−β_{j−1}}), we have ξ^i_0 ≥ ξ_{β_i} for each i. Thus we have, via Proposition 4.1 and Lemmas 4.3 and 5.1, that outside probability 2^n(e^{−c_8 n^2} + e^{−c_9 n^{3/2}} + 2^n e^{−c_6 n^3}), all i ≤ 2^n are good, and for such i, X_{β_{i+1}} − X_{β_i} = Z_i.

Take R to be the integer part of 2^{2n}/(2^n + n^8 e^{M log^4(n)}) − 2, and let us define Y_{2^{2n}} as the corresponding sum of renewal increments. As noted, outside probability 2^n(e^{−c_8 n^2} + e^{−c_9 n^{3/2}} + 2^n e^{−c_6 n^3}), Y_{2^{2n}} = X_{2^{2n}}. Now, by techniques already employed for Lemma 5.2, it is easy to see that for universal c_12 we have |E(Z_1) − 2^n µ| ≤ c_12 n^8 e^{M log^4(n)}, and so we may write (with Z′_j equal to Z_j minus its expectation)

Y_{2^{2n}} − 2^{2n}µ = Σ_{j≤R} Z′_j + F′_n, where E((F′_n)^2) ≤ c n^{16} e^{M log^4(n)} 2^{2n} for universal c.
From this and the obvious bound E(Z_j^2) ≤ c̃ 2^{2n}, we obtain for some universal C that E((Σ_{j≤R} Z′_j)^2) ≤ C 2^{3n}. We finally use the elementary identity and Cauchy–Schwarz to conclude the following.

Lemma 5.5. There exists a universal constant c_13 so that for all positive integers n, E((X_{2^{2n}} − 2^{2n}µ)^2) ≤ c_13 2^{3n}.
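The bookkeeping behind the good/bad indices can be sketched as follows. This is an illustration of ours with simplified stand-ins for the regeneration increments: after the first bad index S, the true increments are replaced by independent draws from (a stand-in for) the conditional law of a good block.

```python
import random

def surrogate_blocks(increments, good, rng):
    """Build the sequence (Z_j) from observed renewal-block increments.

    increments[j] -- walk increment over renewal block j
    good[j]       -- whether block j satisfies the 'good' conditions
    Up to the first bad index S the true increments are kept; from S on,
    Z_j is drawn independently from the empirical law of the good blocks
    (standing in for the law of Z_1 conditioned on '1 is good')."""
    S = next((i for i, g in enumerate(good) if not g), len(good))
    pool = [z for z, g in zip(increments, good) if g] or [0]
    return [increments[j] if j < S else rng.choice(pool)
            for j in range(len(good))], S
```

On the high-probability event that every index is good, S equals the number of blocks and the surrogate sum coincides with the true displacement, which is exactly why Y_{2^{2n}} = X_{2^{2n}} outside the small exceptional probability.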
We can now prove the following.

Proposition 5.6. For X as defined above, there exists α ∈ (0, ∞) so that, as n → ∞, 2^{−n} E((X_{2^n} − 2^n µ)^2) → α^2.

Proof. The proof that the limit is strictly positive is given below in Proposition 5.8. For the existence, we write as before

X_{2^{n+1}} − 2^{n+1}µ = (X_{2^n} − 2^n µ) + Y_1 + Z_1 + Y_2,

where Y_1 is the increment of X over the time interval [2^n, σ], with σ the time of the regeneration after time 2^n, and Y_2 is defined via the above equality. By our choice of Z_1 we have 2E((X_{2^n} − 2^n µ)Z_1) = 0, while by Cauchy–Schwarz and Lemma 5.3 the remaining cross term is bounded, for n large (and some finite K not depending on n), by K 2^{3n/4} n^8 e^{M log^4(n)}.

It simply remains to check that (increasing K if necessary) these error terms are summable over n, from which we obtain the existence of the limit of 2^{−n} E((X_{2^n} − 2^n µ)^2).
We now wish to prove that α is strictly positive.
Proposition 5.8. The constant α defined above is strictly positive.
Proof. In the proof of Proposition 5.6 we show that the sequence 2^{−n} E((X_{2^n} − 2^n µ)^2) converges. Given this, we see at once that it suffices to show that there exists β < 1/4 so that for each n_0 there exists n_1 ≥ n_0 with E((X_{2^{n_1}} − 2^{n_1}µ)^2) ≥ 2^{n_1(1−β)}. To do this we introduce a new regeneration time σ′, similar to the regeneration time of order n but with two additional stages added into the "runs". We first choose j ∈ {−1, 1} so that ‖g_j‖_∞ ≠ 0; without loss of generality this is j = 1. We define a run beginning at a Markov time t. If the first five stages are successful, then at time t + n^4 + log^4(n) the process ξ (relative to X) is in approximate equilibrium, ν(n), at least close to X. So we have, with probability c_2 (> 0), that g_1 > c_2 on the configuration ξ shifted by X.
The sixth and seventh stages are motivated by the desire to create a "regeneration time" σ′ so that the distribution of ξ relative to the position X is (essentially) the same irrespective of whether X has advanced by zero or by one during a certain time interval. This adds uncertainty to the system, thus increasing the "variance".
The primary sixth stage event is that on the time interval [t + n^4 + log^4(n), t + n^4 + log^4(n) + 1], N^X either is constant or increases by one, and (in the latter case) the uniform random variable associated to the single Poisson point lies in [1 − c_2/M′, 1].
Thus on this event, during the time interval [t + n^4 + log^4(n), t + n^4 + log^4(n) + 1], X either advances by one or stays fixed. Our task is to show that the process will forget which.
We require that for no x in the above interval do we have that ξ̂^{x, t(t,n)} survives for time log^4(n)/2 but γ′_{t(t,n)}(x) = 0. Finally, for the seventh stage, we simply introduce (just as in the fifth stage for σ) an auxiliary uniform random variable U. We can show via simple arguments that γ′_{t(t,n)} = ξ_{t(t,n)} on [X(t(t,n)) − n^9, X(t(t,n)) + n^9] with probability q = q(γ′_{t(t,n)}, ξ_{t(t,n)}), which will be at least 3/4. The last stage (and hence the "run") is a success if this occurs and if U ≤ (3/4)/q.
As before, most runs fail, but with high probability we produce a success before time 2^{n/9}. We then let the process restart the series of runs and continue. It is then easy to see that for n large, E((X_{2^n} − 2^n µ)^2) ≥ 2^{7n/8}. By the first paragraph this concludes the proof.
Proof. Let T be the first time in [0, 2^n] at which |X_{σ+s} − X_σ − sµ| ≥ 2^{n(1+γ)/2}. This is a stopping time for the Harris system filtration. If 2^n − T < n^9 e^{M log^4(n)}, then there is hardly anything to prove, and so we suppose otherwise. At time T we begin runs concluding in an n order regeneration σ. We put Z = X′_{2^n} − X′_σ − (2^n − σ)µ and define the random variable W by W = (X_σ − X_T) − (σ − T)µ, so that |X_{2^n} − 2^n µ| ≥ |X_T − Tµ| − |Z| − |W|. By elementary bounds on regeneration times, Poisson process tail probabilities and Proposition 4.1, we have that outside probability 2e^{−c_6 n^3}, for n large, |W| = |(X_σ − X_T) − (σ − T)µ| ≤ M n^9 e^{M log^4(n)} ≪ 2^{n(1+γ)/2}. Secondly, given the information up to T, the term Z is equal in distribution to X_s − sµ for s = 2^n − σ, where the X process begins in distribution ν (at least for the sites within n^9 of X(0)). Since the distribution ν restricted to (X(σ) − n^9, X(σ) + n^9) has bounded Radon–Nikodym derivative relative to the equilibrium measure, Lemma 5.9 enables us to conclude (for universal K) that P(|Z| ≥ 2^{n(1+γ)/2}/4) ≤ K 2^{−nγ}.
Thus we obtain (at least for large n), using the usual bounds on the probability that the random variable W is zero, P(T ≤ 2^n) ≤ 2P(|X_{2^n} − 2^n µ| ≥ 2^{n(1+γ)/2}/4), and we are done.
6. Proof of Theorem 1.1

Given this we can establish our invariance principle. We consider 2^n ≤ t < 2^{n+1} and choose the scale 2^{n(1+β)/2} = 2^{n_1} for 0 < β < 1. We can apply Proposition 4.1 to show that if we define n_1 order regeneration times σ_k recursively, as in the proof of Lemma 5.5, so that σ_k is the time of the first regeneration for the process (ξ^{k−1}, X^{k−1}) after starting runs at time σ_{k−1} + 2^{n_1}, and (ξ^k_{σ_k}, X^k_{σ_k}) = (ξ^{k−1}, X^{k−1})′_{σ_k}, then we have with high probability that for all σ_k < t, X_{σ_k + 2^{n_1}} − X_{σ_k} = X^k_{σ_k + 2^{n_1}} − X^k_{σ_k}.