Front evolution of the Fredrickson-Andersen one spin facilitated model

The Fredrickson-Andersen one spin facilitated model (FA-1f) on Z belongs to the class of kinetically constrained spin models (KCM). Each site refreshes its occupation variable with rate one, to empty (respectively occupied) with probability q (respectively $p = 1 - q$), provided at least one nearest neighbor is empty. Here, we study the non-equilibrium dynamics of FA-1f started from a configuration entirely occupied on the left half-line and focus on the evolution of the front, namely the position of the leftmost zero. We prove, for q larger than a threshold $\bar{q}<1$, a law of large numbers and a central limit theorem for the front, as well as the convergence of the law of the process seen from the front to an invariant measure.


Introduction
The Fredrickson-Andersen one spin facilitated model (FA-1f) belongs to the class of kinetically constrained spin models (KCM). KCM are interacting particle systems on Z^d which were introduced in the physics literature in the '80s (see [RS03] for a review) to model the liquid-glass transition, a major open problem in condensed matter physics. A configuration of a KCM is given by assigning to each vertex x ∈ Z^d an occupation variable σ(x) ∈ {0, 1}, which corresponds to an empty or occupied site, respectively. The evolution is given by a Markovian stochastic dynamics of Glauber type. With rate one, each vertex updates its occupation variable to occupied or to empty with probability p ∈ [0, 1] and q = 1 − p, respectively, provided the configuration satisfies a certain local constraint. For the FA-1f model the constraint requires at least one empty nearest neighbor. Since the constraint to update the configuration at x does not depend on σ(x), the dynamics satisfies detailed balance w.r.t. the Bernoulli product measure at density p. A key issue is to analyze the large time evolution when we start from a distribution different from the equilibrium Bernoulli measure. Note that, due to the presence of the constraints, the FA-1f dynamics is not attractive, so powerful tools like monotone coupling and censoring inequalities cannot be applied. Furthermore, convergence to equilibrium is not uniform in the initial condition, since completely occupied configurations are blocked under the dynamics. Therefore the study of the large time dynamics is particularly challenging. In [BCM+13] convergence to equilibrium was proven when the starting distribution is such that the mean distance between nearest empty sites is uniformly bounded and the equilibrium vacancy density q is larger than the threshold 1/2. In [MV18] the result was extended to initial configurations with at least one zero, provided q is sufficiently close to one.
Here, we consider FA-1f on Z starting from a configuration which has a zero at the origin and is completely occupied on the left half-line, and we study the evolution of the front, namely the position of the leftmost zero. A law of large numbers and a central limit theorem (Theorem 2.2) for the front, as well as the convergence to an invariant measure of the law of the process seen from the front (Theorem 2.1), are proven. Our results hold for q sufficiently large, more precisely for $q > \bar q$, where $\bar q$ is related to the critical parameter of the threshold contact process. We stress that, even though we are in one dimension, proving the ballistic motion of the front is nontrivial. Indeed, due to the non-attractiveness of the dynamics, the classic tool of sub-additivity [Dur80] cannot be used. Obviously, all the following results also hold for the position of the rightmost zero in the FA-1f process starting from a configuration which has a zero at the origin and is completely occupied on the right half-line.
The motion of the front has been analyzed in [Blo13,GLM15] for another one dimensional KCM, the East model, for which the constraint requires the site to the right of x to be empty: ergodicity of the measure seen from the front, a law of large numbers, a central limit theorem and cutoff results have been established. A key tool for the East model, introduced in [AD02] and used in [Blo13,GLM15] for the study of the front, is the construction of a distinguished zero, a sort of moving boundary which induces local relaxation to equilibrium. This construction relies heavily on the oriented nature of the East constraint and cannot be extended to FA-1f or to generic KCM. Another consequence of the orientation is that the cutoff result in [GLM15] follows immediately from the central limit theorem for the front, while for FA-1f it would involve a more complex argument.
A sketch of the main steps of our proof follows. Our first key result is relaxation to equilibrium far behind the front (Theorem 5.1). In order to establish this result, we couple the FA-1f dynamics with a sequence of threshold contact processes (Lemma 4.1), in which zeros flip to ones at rate p without any constraint, and ones flip to zeros at rate q if and only if there is at least one nearest neighbor zero. The first contact process starts with a zero at the position of the front (namely at the origin); if the contact process dies, we restart a new one from the last killed zero. The threshold contact process is attractive, and it is well known that for q above a threshold $\bar q < 1$ the front of the process conditioned on survival moves ballistically. Since there is no constraint in the contact process for the move 0 → 1, we can couple FA-1f and contact trajectories in such a way that FA-1f configurations contain more zeros than contact process configurations. Then, we can use the well-known behavior of the contact process (ballistic motion of the front, shape theorem for the coupled zone) to guarantee enough zeros behind the front of FA-1f. This is the work of Section 4. The construction is illustrated by the first simulation in Figure 1; we can see on the second one that the ballistic behavior of the FA-1f front seems to remain valid for some $q ≤ \bar q$, but in this regime the contact process does not give us any useful information. Afterward, we apply the techniques of [BCM+13] (Corollary 3.3) to prove relaxation to equilibrium using these zeros in Section 5. Once this result is established, we construct a coupling inspired by [Blo13,GLM15] to prove ergodicity of the process behind the front, namely convergence to the unique invariant measure for the process seen from the front (Section 6). This convergence result allows us to analyze the increments of the front (Section 7.1) and to deduce a law of large numbers.
Finally, to study the fluctuations of the front, we generalize the strategy of [GLM15], which is in turn based on a result of Bolthausen [Bol82] that allows one to establish a central limit theorem for random variables which are not stationary but satisfy a suitable mixing condition (Theorem A.1).
Figure 1: Simulation of the FA-1f dynamics (gray points) coupled with restarted threshold contact processes (white points). The first one is for $q > \bar q$ and the second one for $q < \bar q$.
Given σ ∈ Ω and x ∈ Z, we denote by σ^x the configuration obtained from σ when the occupation variable is flipped at site x. We denote by θ_y the space shift operator by vector y: θ_y σ(x) = σ(x + y). We denote by δ_y the configuration such that δ_y(x) = 1 for all x ≠ y and δ_y(y) = 0.
The FA-1f dynamics on Ω is described by a Markov process with the following generator: for any local function f and any σ ∈ Ω,

L f(σ) = ∑_{x∈Z} c(x, σ) [qσ(x) + p(1 − σ(x))] [f(σ^x) − f(σ)],   (1)

where the rate of an update at x is the product of a constraint c(x, σ) and a flip rate qσ(x) + p(1 − σ(x)). The constraint c(x, σ) = 1 − σ(x − 1)σ(x + 1) requires at least one empty neighbor to allow the flip. The parameter p is the rate of updating to 1 and the parameter q = 1 − p is the rate of updating to 0. Let σ_t be the configuration at time t starting from σ. When there is no ambiguity, X(t) denotes the front of the configuration σ_t. In the following, all constants may depend on q.
It is easy to verify that the FA-1f process is reversible w.r.t. µ := Ber(p)^{⊗Z}.
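For intuition only, a finite-volume analogue of the dynamics defined by (1), with empty boundary conditions so that boundary sites are always unconstrained, can be simulated with a standard continuous-time scheme. The sketch below is ours (the name `simulate_fa1f` and the finite-volume truncation are not part of the paper):

```python
import random

def simulate_fa1f(sigma, q, t_max, seed=None):
    """Simulate FA-1f on a finite interval with empty (0) boundary conditions.

    sigma : list of 0/1 occupation variables; q : rate of updating to 0.
    Each site carries a rate-1 Poisson clock; at a ring, the site is
    refreshed to 0 with probability q and to 1 with probability p = 1 - q,
    provided the constraint (at least one empty nearest neighbor) holds.
    """
    rng = random.Random(seed)
    sigma = list(sigma)
    n, t = len(sigma), 0.0
    while True:
        t += rng.expovariate(n)               # next ring among n rate-1 clocks
        if t > t_max:
            return sigma
        x = rng.randrange(n)                  # the ringing site is uniform
        left = sigma[x - 1] if x > 0 else 0   # empty boundary condition
        right = sigma[x + 1] if x < n - 1 else 0
        if left == 0 or right == 0:           # constraint c(x, sigma) = 1
            sigma[x] = 0 if rng.random() < q else 1
```

Note that with q = 0 a fully occupied interval stays fully occupied: updates only ever write 1, which mirrors the fact that constrained flips cannot create zeros at p = 1.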

An auxiliary process: the threshold contact process
We introduce a threshold contact process in which the 0's are the infected sites and the 1's are the healthy ones. Its dynamics on Ω is given by a Markov process with the following generator: for any local function f and any η ∈ Ω,

L̃ f(η) = ∑_{x∈Z} [qη(x)(1 − η(x − 1)η(x + 1)) + p(1 − η(x))] [f(η^x) − f(η)].

In this model, the constraint only applies to a flip 1 → 0: a 1 can flip to 0 only if one of its neighbors is 0; we interpret the state 0 as an infection that propagates by contact. The flip from 0 to 1 is a spontaneous recovery. Let η_t be the configuration at time t starting from η. We define the extinction time of the threshold contact process by

τ(η_·) = inf{t ≥ 0 : η_t(x) = 1 for all x ∈ Z},   (2)

that is, the first time when the threshold contact process has no more zeros (i.e. infected sites). This state is absorbing for the process. If η_0 has a finite number of infected sites, τ(η_·) can be finite. For the FA-1f process, the corresponding extinction time is always infinite, because a single zero cannot disappear. If the event {τ(η_·) = +∞} occurs, we say that the contact process survives.
To ensure that the threshold contact process survives with positive probability, we need to assume that q/p > λ_c^{TCP}(Z), where the critical parameter λ_c^{TCP}(Z) of the threshold contact process has approximate value 1.74 (cf. [BD88]). In the following we will assume the stronger condition q/p > 2λ_c(Z), where λ_c(Z) is the critical parameter of the classical contact process, with approximate value 1.65 (cf. [BFM78]). This hypothesis allows us to use all the classical contact process estimates (cf. Appendix B) instead of having to re-establish them for the threshold contact process in the intermediate regime λ ∈ (λ_c^{TCP}(Z), 2λ_c(Z)]. So, in this paper, we will consider $q > \bar q$, where

$\bar q$ := 2λ_c(Z) / (1 + 2λ_c(Z)).

This corresponds approximately to taking q > 0.76 rather than q > 0.63.
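As a sanity check on these numerical values (a small computation of ours, not from the paper), converting a rate condition q/p > λ into a condition on the vacancy density gives q > λ/(1 + λ):

```python
def density_threshold(lam):
    """Smallest vacancy density q with q/(1 - q) > lam, i.e. q > lam/(1 + lam)."""
    return lam / (1.0 + lam)

lam_c = 1.65    # classical contact process, approximate value from [BFM78]
lam_tcp = 1.74  # threshold contact process, approximate value from [BD88]

q_bar = density_threshold(2 * lam_c)  # condition q/p > 2*lambda_c(Z)
q_tcp = density_threshold(lam_tcp)    # condition q/p > lambda_c^TCP(Z)
print(round(q_bar, 3), round(q_tcp, 3))  # 0.767 0.635
```

This reproduces the two approximate densities quoted in the text.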

Main results
Now we have the tools to state precisely the theorems that we will prove in this paper. The first one is the ergodicity of the process seen from the front. It will be proved in Section 6 by means of a coupling argument.
Theorem 2.1. Let $q > \bar q$. The process seen from the front has a unique invariant measure ν and, starting from every σ ∈ LO_0, it converges in distribution to ν in the following sense: there exist d* > 0 and c > 0 (independent of σ) such that for t large enough

‖μ̄^σ_t − ν‖_{[0, d*t]} ≤ e^{−e^{c(log t)^{1/4}}},

where μ̄^σ_t is the distribution of the configuration seen from the front at time t starting from σ, i.e. θ_{X(σ_t)} σ_t, and ‖π − π′‖_Λ denotes the total variation distance between the marginals of π and π′ on Λ.
Remark. The speed of convergence e^{−e^{c(log t)^{1/4}}} is worse than e^{−t^α} (the speed obtained in the East case by [GLM15] for some α < 1/2), but for every α > 0 it is better than e^{−(log t)^{1/α}} (and in particular better than any polynomial speed).
The second one is a law of large numbers and a central limit theorem for the front.

Theorem 2.2. Let $q > \bar q$ and σ ∈ LO_0. There exist v < 0 and s ≥ 0 such that

X(σ_t)/t → v a.s. and in L^1, and (X(σ_t) − vt)/√t → N(0, s²) in law,

where v = E_ν[ξ_1] is the mean increment of the front under the invariant measure ν of Theorem 2.1.

Graphical construction and basic coupling
We briefly recall here the graphical construction for the FA-1f and contact processes, which allows us to construct the two processes on the same probability space and to compare them pointwise. Throughout the text, N denotes the set of positive integers. Let C = (B_{x,n}, E_{x,n})_{x∈Z,n∈N} be a collection of independent random variables, where for all (x, n) ∈ Z × N, B_{x,n} ∼ Ber(p) and E_{x,n} ∼ Exp(1). These variables are interpreted as follows: with each site x ∈ Z we associate a sequence of clock rings at times ∑_{k=1}^n E_{x,k}, n ∈ N, and with the clock ring at time ∑_{k=1}^n E_{x,k} we associate the Bernoulli variable B_{x,n}. Starting from configurations σ, η ∈ Ω, we construct a FA-1f process (σ_t)_{t≥0} and a (threshold) contact process (η_t)_{t≥0} using the same collection C. When a clock rings at x, we update each process at this site to the associated Bernoulli variable, provided the constraint of the process is satisfied (for instance, if the Bernoulli variable is 1, the contact process automatically updates, while FA-1f needs at least one empty neighbor). We call (σ_t, η_t)_{t≥0} the basic coupling started from (σ, η) using C. We denote by P, E the associated probability and expectation. This probability space allows us to construct the processes simultaneously from any initial configuration. We will denote by P_{σ,η} (respectively P_σ, P_η) the probability associated with the initial configuration (σ, η) (respectively the projected probabilities).
The following property will be our main tool to guarantee a minimal quantity of zeros in the FA-1f process.

Lemma 2.3. If σ ≤ η (pointwise), then with the above construction we have a.s. σ_t ≤ η_t for all t ≥ 0.
Proof. It is not difficult to check that every possible transition preserves the order. Indeed, a shared Bernoulli mark equal to 1 can only raise σ while η becomes (or stays) 1 at the updated site; a mark equal to 0 empties η at x only if η, and hence σ, has an empty neighbor there, in which case both sites become 0.
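Lemma 2.3 and the graphical construction can be illustrated on a finite ring (a toy geometry of ours; the paper works on Z): both processes are driven by the same clock rings and Bernoulli(p) marks, and the pointwise order σ_t ≤ η_t is preserved.

```python
import random

def basic_coupling(sigma, eta, q, t_max, seed=0):
    """Basic coupling of an FA-1f process (sigma) and a threshold contact
    process (eta) on a ring, driven by the same rings and Bernoulli marks."""
    rng = random.Random(seed)
    sigma, eta = list(sigma), list(eta)
    n, p, t = len(sigma), 1.0 - q, 0.0
    while True:
        t += rng.expovariate(n)
        if t > t_max:
            return sigma, eta
        x = rng.randrange(n)
        b = 1 if rng.random() < p else 0       # shared Bernoulli mark
        l, r = (x - 1) % n, (x + 1) % n        # ring geometry
        if sigma[l] == 0 or sigma[r] == 0:     # FA-1f: constrained both ways
            sigma[x] = b
        if b == 1:
            eta[x] = 1                         # contact: 0 -> 1 unconstrained
        elif eta[l] == 0 or eta[r] == 0:       # contact: 1 -> 0 needs a 0 neighbor
            eta[x] = 0
```

Starting from equal configurations (so that σ_0 ≤ η_0), the order is maintained at every ring, exactly as in the proof above.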
It will also be useful to define a (space) shift operation on the collection C by θ_y C = (B_{x+y,n}, E_{x+y,n})_{x∈Z,n∈N}. To quantify the amount of zeros we will introduce the following event. A 0-gap is an interval containing no zero. For L ∈ Z, M ∈ N and l > 0, let H(L, M, l) be the set of configurations with no 0-gap of length at least l inside the box [L, L + M]. If l ≤ M + 1, it is also equivalent to say that there is at least one zero in the box and that the maximal distance between two consecutive zeros in the box [L, L + M] is less than l.

Finite speed propagation
Classically, this type of graphical construction implies finite speed of propagation in the following sense. For x, y ∈ Z and t > 0, we denote by F(x, y, t) = {before time t there is a sequence of successive rings linking x to y}, and by F̃(x, y, t) the analogous event in which the sequence of rings links x to some site y′ with |y′ − x| ≥ |y − x|. Above, a sequence of successive rings linking x and y means that (if e.g. x < y) there is a clock ring at site x, then at site x + 1, and so on until y is reached. Standard results on Poisson point processes imply the following.

Lemma 2.5. There exists a constant v > 1 such that, for every t and x, y ∈ Z with |x − y| ≥ vt,

P(F(x, y, t)) ≤ P(F̃(x, y, t)) ≤ e^{−|x−y|}.
This has immediate implications for the maximal velocity of the front in the FA-1f process and for the propagation of the contact process.
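The bound in Lemma 2.5 rests on the elementary identity P(E_1 + · · · + E_d ≤ t) = P(Poisson(t) ≥ d) for i.i.d. Exp(1) waiting times between successive rings. A quick Monte Carlo check (ours, purely illustrative):

```python
import math
import random

def p_chain(d, t, n_samples, rng):
    """Monte Carlo estimate of P(E_1 + ... + E_d <= t): the probability that
    d successive rate-1 clock rings all occur within time t."""
    hits = sum(sum(rng.expovariate(1.0) for _ in range(d)) <= t
               for _ in range(n_samples))
    return hits / n_samples

rng = random.Random(0)
# exact value via the Poisson identity: P(Poisson(2) >= 3) = 1 - 5 e^{-2}
exact = 1.0 - sum(math.exp(-2.0) * 2.0 ** k / math.factorial(k) for k in range(3))
est = p_chain(3, 2.0, 20000, rng)
```

In the regime d ≫ t of the lemma, the estimated probability is already indistinguishable from zero at modest sample sizes, consistent with the exponential bound.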

Relaxation in FA-1f
In this section, we collect results about the relaxation to equilibrium (i.e. to µ = Ber(p) ⊗Z ) of the FA-1f process. These can be deduced from the proofs in [BCM+13] as explained below.
Proof of Proposition 3.1. The proof of this result is essentially contained in [BCM+13]. We reproduce here the arguments that we need. First, let (σ^Λ_t)_{t≥0} denote the FA-1f process restricted to Λ = [−K − vt, K + vt] ∩ Z with empty boundary condition and started from σ_{|Λ}. By finite speed of propagation, we have, for f bounded with support in [−K, K], the finite volume comparison (7). Also, Proposition 5.1 in [BCM+13] ensures a relaxation estimate, for some c = c(q) < ∞, on the event that there are at least two zeros in each Λ_i.

Proof of Proposition 3.2.
The only change with respect to the above proof is that we do not need the finite volume propagation step (7).
The assumption K ≤ e^{t^α} with α < 1/2 is in fact the largest support we can consider for f such that the estimate in Proposition 3.1 is useful. Indeed, it does not give a vanishing estimate for K = e^{√t}.

Coupling between FA-1f and contact process
We wish to exploit the comparison between the FA-1f and contact processes (see Lemma 2.3) to guarantee a sufficient number of zeros for the FA-1f dynamics. To this purpose, since the contact process can die, we first need a restart argument.

Restart argument
Lemma 4.1. Let $q > \bar q$. For any σ ∈ LO_0, there exist a process (σ_t, η_t)_{t≥0} taking values in Ω² and two random variables T and Y, taking their values in R_+ and Z respectively, such that
1. (σ_t)_{t≥0} is an FA-1f process started from σ;
2. σ_t ≤ η_t for all t ≥ 0;
3. (η_{T+t}(Y + ·))_{t>0} is a surviving threshold contact process starting from δ_0.
Moreover, T and |Y | have exponentially decaying tail probabilities. Proof. The idea is to couple a FA-1f process with a contact process and to restart the second one each time that it vanishes. Eventually, the contact process will survive (because q >q) and the space-time point (Y, T ) corresponding to the origin of this surviving contact process is not very far from the origin. The procedure is illustrated by Figure 2. Let (C (i) ) i∈N be a sequence of independent copies of the collection described in Section 2.4 and P their distribution. For i ∈ N, let η (i) · ) be the extinction time of η (i) · as defined in (2). The random variables (U i ) i∈N are independent and identically distributed and, by our choice of q, we have P(U 1 = ∞) > 0. Let The random variable L has geometric distribution. Moreover, conditionally on {L = l}, Then T has exponentially decaying tail probabilities. Indeed, from Estimate (47) (cf. Appendix B), we have for t > 0 So, we can choose β 1 and β 2 such that E[e β 1 (L−1) ] < ∞ and E e β 2 U 1 | U 1 < ∞ < e β 1 . We have that We construct recursively a sequence of processes (σ t ) t≥0 and random variables X i ∈ Z for i ∈ N.
Let (σ^{(1)}_t, η^{(1)}_t)_{t≥0} be the basic coupling started from (σ, δ_0) using C^{(1)}. At each extinction time, we restart a new basic coupling from the current FA-1f configuration and from δ_{X_i}, where X_i is the position of the last killed zero. Since L has geometric distribution, the algorithm fixates almost surely in finite time, and we set Y = X_{L−1}. Moreover, since the U_i are stopping times, (σ_t)_{t≥0} is an FA-1f process started from σ. We also have immediately that (η_{T+t}(Y + ·))_{t>0} is a surviving threshold contact process starting from δ_0. Finally, Lemma 2.3 implies that σ_t ≤ η_t for all t ≥ 0; indeed, by definition, X_i is a zero of the FA-1f configuration at the corresponding restart time. It remains to show that Y has exponentially decaying tail probability. To that end, note that the increments X_i − X_{i−1}, i ∈ {1, . . . , l−1}, are independent and have the same distribution: they represent the expansion of a non-surviving contact process. For t > 0, we use the 'at most linear' growth (6) of the contact process with a < 1/(2v). We conclude using the exponentially decaying tails of L and T.
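The combinatorial skeleton of this restart procedure, a geometric number of attempts L and a total waiting time T summing the failed lifetimes, can be mimicked in a toy computation. This is a sketch of ours: the true extinction law, which by (47) has an exponential tail, is replaced by a stand-in Exp(1) lifetime, and `p_survive` plays the role of P(U_1 = ∞).

```python
import random

def restart_times(p_survive, rng):
    """Toy version of the restart argument: repeatedly launch contact
    processes; each survives forever with probability p_survive, otherwise
    dies after an Exp(1) lifetime (a stand-in for the true extinction law).
    Returns (L, T): the index of the first surviving attempt and the total
    time T = U_1 + ... + U_{L-1} spent on failed attempts."""
    L, T = 1, 0.0
    while rng.random() >= p_survive:   # attempt number L fails
        T += rng.expovariate(1.0)      # its finite extinction time
        L += 1
    return L, T
```

Since L is geometric and each failed lifetime has an exponential tail, T inherits an exponentially decaying tail, which is the content of the first part of the proof.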

Consequences of the coupling
The first consequence of the previous coupling is the 'at least linear growth' of the front of the FA-1f process.
Proof. Denote by v_cp the velocity of the contact process (see Theorem B.1). Let (σ_t, η_t)_{t≥0}, T and Y be the objects defined in Lemma 4.1. We denote by X(η_t) the position of the leftmost zero at time t in a contact process started from δ_0. For every t > 0 and c ∈ (0, 1), we used that σ_t ≤ η_t (and therefore X(σ_t) ≤ X(η_t)) in the first line. We choose c > 0 such that (v + c)/(1 − c) < v_cp, apply (48) of Theorem B.1 to the surviving contact process η̃_u(·) = η_{T+u}(Y + ·), and bound the last probability. The fact that T and |Y| have exponentially decaying tails allows us to conclude.
The second consequence of the coupling is to guarantee a minimal quantity of zeros around the origin in the FA-1f process.
Proof. Let us do the proof for x = 0. We use again the coupling from Lemma 4.1 and apply Corollary B.2 to the surviving contact process η̃. The fact that T and |Y| have exponentially decaying tails allows us to conclude. If x ≠ 0, we can do the same proof, but we have to start the coupling from the point x and restart from the closest zero (instead of the front) when the contact process dies.

Zeros lemma
In the following lemma we use repeatedly Corollary 4.3 to control the probability that, at time s, at distance L from the front and over a distance M, we have no 0-gap larger than l.

2. If L + M ≥ 2vs and σ ∈ H(0, L + M, 2vs), then there exists c > 0 such that the analogous estimate holds.

Proof. The strategy is the following: we consider a number of zeros which we know are present in the dynamics, either because they are present in the initial configuration or because they are well-chosen intermediate positions occupied by the front. For each of these zeros, we use Corollary 4.3 to guarantee that, at time s, a given interval around it contains no gap larger than l/2: for i ∈ {0, . . . , n}, by Corollary 4.3 and the Markov property applied at time s_i, for some constants C, c > 0, the interval around the corresponding zero contains no gap larger than l/2 with high probability. We then check that w.h.p. the different intervals thus obtained cover the region of interest.
In case 2, we also need to use the zeros of the initial configuration. Let 0 =: x_0 < x_1 < . . . < x_m be the ordered set of zeros located between 0 and L + M in the initial configuration σ. Then by Corollary 4.3, for i ∈ {0, . . . , m}, the interval [x_i − vs, x_i + vs] contains no gap larger than l/2 with high probability. The next step is to control the respective positions of the intervals we introduced, to check that with high probability they cover [X(s) + L, X(s) + L + M]. Consider, for all i ∈ {0, . . . , n − 1}, the events (11) and (12). Fix k ∈ {0, . . . , n}. With our choice of ∆ and n, if (11) and (12) hold for i ∈ {n − k, . . . , n − 1}, we obtain (14) for i ∈ {n − k, . . . , n − 1}. To derive (14) we used that v∆′ ≤ v(2∆ + ∆′). These inequalities in turn imply that the intervals above overlap. Then 2v(∆ + k∆′) ≥ L + M, and the above arguments together with the bounds we have on the speed of the front yield the claim, where we used Corollary 4.2 and Lemma 2.5.
Case 2: L + M ≥ 2vs. Our arguments imply the claimed estimate.

Relaxation far from the front
Now, we use the bounds on the speed of the front ((5) and Corollary 4.2) and the relaxation results (Corollary 3.3) to prove relaxation far from the front. To do so, we need to decorrelate the front trajectory from the interval in which we want to relax.
Theorem 5.1. Let $q > \bar q$ and σ ∈ LO_0. Let α < 1/2 and δ > 0. There exists c > 0 such that for any M ≤ e^{δt^α} and any f with support in an interval of length M at distance at least 3vt behind the front, the relaxation estimate below holds.

Remark. The proof strategy is the same as for the equivalent theorem in [Blo13].

Proof. By finite speed of propagation, for y ≤ vt, the probability that there exists a sequence of successive clock rings between 3vt − y and max_{u∈[0,t]} |X(σ_u)| is O(e^{−t}). On the event where there is no such sequence, 1_{X(σ_t)=−y} and f(θ_{3vt−y} σ_t) are independent. In order to apply our relaxation result, Corollary 3.3, on this interval, we need to check a condition which is clearly satisfied if y ∈ [vt, vt]. Therefore, our assumption on σ and Corollary 3.3 imply a bound which in turn yields the desired result.

Invariant measure: proof of Theorem 2.1
Proof of Theorem 2.1. We start with two initial configurations σ and σ′ in LO_0, and we prove that there exist d* > 0, c > 0 (independent of (σ, σ′)) such that for t large enough

‖μ̄^σ_t − μ̄^{σ′}_t‖_{[0, d*t]} ≤ e^{−e^{c(log t)^{1/4}}},   (17)

where μ̄^σ_t is the distribution of the configuration seen from the front at time t, that is θ_{X(σ_t)} σ_t.
Thanks to Theorem 5.1 we know that, far from the front, the configurations starting respectively from σ and σ′ will be close to a configuration sampled from the equilibrium measure, so they will be close to one another (in total variation distance). Following the strategy of [Blo13,GLM15], we construct a coupling that uses this property and waits until the configurations also coincide near the front. Given ε > 0 and t > 0, we fix quantities t_0, ∆_1, ∆_2, whose values are chosen at the end of the proof. The 'perfect procedure' would be:

Step 0. Both configurations have a lot of zeros at time t_0.

Step 1. Thanks to these zeros, after a time lag ∆_1 they both closely match equilibrium far from the front; hence they also closely match each other there.
Step 2. Then, during a time-lag ∆ 2 , a very favorable event happens and the configurations coincide also near to the front.
Roughly speaking, steps 0 and 1 are very likely and step 2 has very small probability. So we will repeat step 2 many times in order to make it succeed. In practice we also need to repeat step 1, because multiple tries of step 2 could destroy the assets of step 1. To do that, the time t − t_0 will be split into N = (t − t_0)/∆ repetitions of steps 1 and 2, of respective durations ∆_1 and ∆_2, where ∆ = ∆_1 + ∆_2. For n ∈ {0, . . . , N}, let t_n = t_0 + n∆ (resp. s_{n+1} = t_n + ∆_1) be the instant at which the corresponding repetition of step 1 (resp. step 2) begins. The repetition of the N steps is illustrated by Figure 4 and steps 1 and 2 are illustrated by Figure 5.
During the description of the precise procedure, we will use the basic coupling, which consists in making the two configurations evolve according to the graphical representation using the same Poisson clocks and coin tosses (as we did in Section 2.4 between the FA-1f and contact processes). Note that whenever we use the basic coupling in our construction, we mean the basic coupling between two FA-1f processes with generator given by (1) (and not between processes "seen from the front"). We will also use a subtler coupling: the Λ-maximal coupling, denoted by MC_Λ(µ, µ′), where (µ, µ′) are two probability measures on Ω and Λ is a finite box of Z. It is defined as follows: 1. we sample (σ, σ′)_{|Λ×Λ} according to the maximal coupling (which achieves the total variation distance, see e.g. [LPW09]) of the marginals of µ and µ′ on Ω_Λ; 2. we sample σ_{|Z\Λ} and σ′_{|Z\Λ} independently according to their respective conditional distributions µ(·|σ_{|Λ}) and µ′(·|σ′_{|Λ}).
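Point 1 relies on the maximal coupling of two distributions. For distributions on a finite set it can be sampled explicitly (a generic sketch of ours; the paper applies it to the marginals on Ω_Λ):

```python
import random

def maximal_coupling(mu, nu, rng):
    """Sample (X, Y) with X ~ mu, Y ~ nu and P(X != Y) equal to the total
    variation distance between mu and nu (mu, nu: dicts value -> prob)."""
    support = set(mu) | set(nu)
    overlap = {v: min(mu.get(v, 0.0), nu.get(v, 0.0)) for v in support}
    w = sum(overlap.values())            # = 1 - d_TV(mu, nu)
    if rng.random() < w:                 # draw from the common part: X = Y
        x = _draw({v: p / w for v, p in overlap.items()}, rng)
        return x, x
    # otherwise draw X and Y from the normalized residuals: X != Y a.s.
    resid_mu = {v: mu.get(v, 0.0) - overlap[v] for v in support}
    resid_nu = {v: nu.get(v, 0.0) - overlap[v] for v in support}
    z = 1.0 - w
    return (_draw({v: p / z for v, p in resid_mu.items()}, rng),
            _draw({v: p / z for v, p in resid_nu.items()}, rng))

def _draw(dist, rng):
    """Inverse-CDF sampling from a dict value -> probability."""
    u, acc = rng.random(), 0.0
    for v, p in dist.items():
        acc += p
        if u < acc:
            return v
    return v                             # guard against rounding error
```

The residual supports are disjoint, so on the second branch the two samples differ almost surely, which is exactly what makes the mismatch probability equal to the total variation distance.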
• (Step 0) P^{(0)} is the trivial product coupling: to sample (σ_{t_0}, σ′_{t_0}) we run the FA-1f dynamics starting from (σ, σ′) according to the product coupling and we take the configurations seen from their fronts at time t_0.
• P^{(n)} → P^{(n+1)}: we sample (σ_{t_n}, σ′_{t_n}) according to P^{(n)}. 1. If the configurations coincide on the interval I_n = [1, d_n], where d_n = 2vt_n − (v + v)∆n, then we let them evolve according to the basic coupling during a time lag ∆: we obtain (σ_{t_{n+1}}, σ′_{t_{n+1}}) by running the basic coupling started from (σ_{t_n}, σ′_{t_n}) for a time ∆, and then taking the configurations obtained as seen from the front. Using the basic coupling ensures that the equality of the configurations seen from the front is preserved w.h.p. on the interval I_{n+1} = [0, d_{n+1}]. By contrast, beyond distance d_n − (v − v)∆ from the front, the information from beyond I_n could have propagated, so we cannot ensure equality. See Figure 6.

Figure 4: Coupling of the evolutions from distinct initial configurations: N repetitions of the special coupling during a time lag ∆. The labels T correspond to trials (steps 1-2, detailed in Figure 5), where the coupling attempts to match the two configurations. After the first success, the standard coupling maintains the matching up to time t; labels S refer to the first item in the coupling construction (see Figure 6).
2. If they do not coincide, we proceed in two steps. (Step 1) We first choose (σ_{s_{n+1}}, σ′_{s_{n+1}}) using the Λ_n-maximal coupling between the laws of the configurations seen from the front after a time ∆_1, starting from σ_{t_n} and σ′_{t_n} respectively. (a) If σ_{s_{n+1}} and σ′_{s_{n+1}} are not equal on the interval Λ_n, then we let them evolve for a time ∆_2 via the basic coupling. (b) (Step 2) If instead they agree on Λ_n, then we search for the leftmost common zero of σ_{s_{n+1}} and σ′_{s_{n+1}} in Λ_n and call x* its position. If there is no such zero, we define x* as the right boundary of Λ_n. Then we sample an independent Bernoulli random variable β with P(β = 1) = e^{−2∆_2}. The event {β = 1} corresponds to the fact that the two Poisson clocks associated with x* and with the origin in the graphical construction do not ring during step 2. Using (σ_{s_{n+1}}, σ′_{s_{n+1}}) as initial configurations, we sample two new configurations as follows.
i. If β = 1, then we fix the configurations at x* to σ_{s_{n+1}}(x*), σ′_{s_{n+1}}(x*). On [1, x* − 1], we sample the two configurations using the maximal coupling for the FA-1f process run during time ∆_2, with boundary conditions at x* being σ_{s_{n+1}}(x*) and σ′_{s_{n+1}}(x*) respectively, and 0 at the origin. Finally, we sample the configurations to the right of x* and to the left of the origin using the basic coupling during time ∆_2 with the same boundary conditions. Let ξ_{n+1} be the (common) increment of the front during that time. See Figure 5. ii. If β = 0, then we let (σ_{s_{n+1}}, σ′_{s_{n+1}}) evolve for a time ∆_2 via the basic coupling conditioned to have at least one ring either at x* or at the origin (or both). Once procedure (a) or (b) is completed, we define (σ_{t_{n+1}}, σ′_{t_{n+1}}) as the versions seen from their fronts of the configurations we obtained.
The final coupling P^t_{σ,σ′} is obtained by sampling (σ_{t_N}, σ′_{t_N}) according to P^{(N)}, then applying the basic coupling during a time t − t_N, and finally defining (σ_t, σ′_t) as the configurations seen from the fronts at that time.
We can now estimate the probabilities of the events that we expect to see at step n of the previous procedure.
• At time t_n, we hope for enough zeros, that is, for the occurrence of the event Z_n = {no 0-gap greater than √∆_1 on [v∆_1, 2vt_n] behind the front at time t_n} := {σ_{t_n}, σ′_{t_n} ∈ H(v∆_1, 2vt_n, √∆_1)}.

The distance √∆_1 is the maximal space that we allow between two zeros in order to relax to equilibrium at the next step; the distance 2vt_n is the maximal distance at which we can expect zeros without making any hypothesis on the initial configuration. By the first case of Lemma 4.4 we have (18).

• At time s_{n+1}, conditioning on the event Z_n, we can first relax, in terms of total variation, on the interval Λ_n seen from the front (with left boundary 3v∆_1) by Theorem 5.1. Since the distributions of both configurations are close to µ on Λ_n, they are also close to each other; denoting Q_n = {σ_{s_{n+1}} = σ′_{s_{n+1}} on Λ_n} and sampling the configurations according to the Λ_n-maximal coupling, we have (19). Let B_n be the event that x* is at distance less than √∆_2/2 from the left boundary of Λ_n. On Q_n, B_n is implied by the analogous event for a single configuration, so by Theorem 5.1 we get (20). Then, we also use the zeros at time t_n to generate zeros between the front and Λ_n at time s_{n+1}. Actually the event Z_n implies that σ_{t_n}, σ′_{t_n} ∈ H(0, 3v∆_1, 2v∆_1), because v∆_1 + √∆_1 ≤ 2v∆_1. We consider the event Z′_n = {no 0-gap greater than √∆_2/2 on [√∆_2/2, 3v∆_1] behind the front at time s_{n+1}}. Applying the second case of Lemma 4.4 we get (21).

• At time t_{n+1}, conditioning on the events Z_n, Q_n, B_n and on the fixation of the zeros at X(s_{n+1}) and x* (i.e. β = 1), we can relax to equilibrium on [X(s_{n+1}) + 1, X(s_{n+1}) + x* − 1], that is, on the interval [1 − ξ_{n+1}, x* − 1 − ξ_{n+1}] seen from the front. So we have, for all y ∈ Λ_n at distance less than √∆_2/2 from the left boundary of Λ_n, the estimate (22), applying (10). Furthermore, if the configurations coincide on Λ_n at time s_{n+1}, then we can control the probability that they coincide on Λ_n − ξ_{n+1}, minus a subinterval of size v∆_2, at time t_{n+1} by the probability of the event F(d_n, d_n − v∆_2, ∆_2) corresponding to the propagation of the information located beyond Λ_n.
Indeed, by Lemma 2.5 and Corollary 4.2, we have (24) (recall Figure 6), where the first inequality comes from the fact that, if ξ_{n+1} ≤ −v∆_2, then d_n − v∆_2 − ξ_{n+1} ≥ d_{n+1}. To conclude, notice that by construction, combining (22), (23) and (24), we get (25). Let M_n be the event of matching on I_n = [1, d_n] at time t_n, i.e. M_n^c = {σ_{t_n} ≠ σ′_{t_n} on I_n}, and denote by p_n the probability that the coupling P^{(n)} is not successful, that is, p_n = P(M_n^c). With the previous estimates in mind, we prove the recursive relation (26). Indeed, using the previously introduced event Z_n, we obtain a decomposition whose last two terms come from a reasoning similar to what we did in (23). By Lemma 2.5 and Corollary 4.2, their contribution is O(e^{−c∆}). The quantity P(Z_n^c) has been controlled in (18), so we now have to study P(M_{n+1}^c | M_n^c ∩ Z_n). Now we condition on the favorable events at time s_{n+1}. The last terms have been controlled by (19), (20) and (21).
Finally, using (25), we get (27) for t large enough. Combining with (27) we obtain the desired recursion (26). Iterating, we conclude that p_N = O(e^{−ce^{(log t)^{1/4}}}) after replacing ∆_1, ∆_2, N by their chosen values. To conclude the proof of (17), we now compute the distance over which the matching occurs w.h.p. At time t_N the coupling was successful on I_N with probability 1 − p_N. Note that d_N ≥ (2v − (v + v))t − 2v∆. Let ε = v/(2(v + v)) and d* = εv. Then d*t ≤ d_N, and if we let the configurations evolve according to the basic coupling between times t_N and t, the probability that a discrepancy reaches [0, d*t] is at most e^{−ct}, and we obtain the announced convergence.
The compactness of the set of probability measures on Ω gives the existence of invariant measures. The uniqueness, and the convergence starting from any initial configuration in LO_0, follow from (17), which concludes the proof of Theorem 2.1.

Moments and covariances
Thanks to Theorem 2.1 we can study the increments of the front. We consider σ ∈ LO_0. For n ∈ N, we introduce the increments ξ_n = X(σ_n) − X(σ_{n−1}), so that, for t > 0, X(σ_t) = ∑_{n=1}^{⌊t⌋} ξ_n + (X(σ_t) − X(σ_{⌊t⌋})). In order to prove a law of large numbers and a central limit theorem, we want to control the moments (Lemma 7.1) and the covariances (Lemma 7.3) of these front increments. Lemma 7.2 proves the convergence of E_σ[ξ_n] to E_ν[ξ_1], which leads to the fact that the covariance between ξ_j and ξ_n decays fast enough.
Remark. Consequently, we have the same result for $E_\sigma[f(\xi_n)^2]$, for the expectation under $\nu$, and for the covariances between increments.
Proof. The result follows from the finite speed of propagation. Indeed, by Lemma 2.5, we have that, for $\sigma \in LO_0$ and $|x| \ge vt$, Then, for every $\sigma \in LO_0$, we have The right-hand side does not depend on $\sigma$, so we obtain the announced result.
Lemma 7.2. There exists $\gamma > 0$ such that for $f$ : Proof. We use the Markov property at time $n - 1$ to write where $\mu^\sigma_t$ is the distribution of the configuration at time $t$ starting from $\sigma$. Let $\Phi_t(\sigma')$ be the configuration equal to $\theta_{X(\sigma')}\sigma'$ on $[0, d^* t]$ and equal to 1 elsewhere (we recall that $d^*$ is the quantity defined in Theorem 2.1). Under the basic coupling (denoted by $E$ in the computations below), the front of the configuration after a time $\delta$ starting from $\sigma'$ differs from the one starting from $\Phi_t(\sigma')$ only if the event $\tilde F(0, d^* t, \delta)$ occurs. So, denoting by $X$ and $\tilde X$ the first increments starting respectively from the configurations $\sigma'$ and $\Phi_{n-1}(\sigma')$, we obtain a bound in terms of $P(\tilde F(0, d^*(n - 1), \delta))$.
Similarly (taking the expectation w.r.t. $\nu$), we have that Besides, we can write that Finally, putting everything together and applying Theorem 2.1, we obtain the desired control.
1. The idea is the following: if the difference between $j$ and $n$ is large enough, then $\xi_j$ and $\xi_n$ are almost independent. Let $(\mathcal{F}_j)_{j \in \mathbb{N}}$ be the filtration associated with the random variables $(\xi_j)_{j \in \mathbb{N}}$. Using the Markov property and the Cauchy–Schwarz inequality, we have that Then we apply Lemma 7.2 and obtain the first point: which is relevant when $n - j$ is large. Combining this with Lemma 7.2 yields the result for the covariance under $\nu$.
2. If $n - j$ is not large but $j$ (and hence $n$) is, we are looking at the process after a large time, so the configurations seen from the front are close to configurations sampled from $\nu$. We can use the same trick as in the proof of the previous lemma, replacing the expectations by covariances, to obtain the second point. Indeed, we have Using the Markov property at time $t_{j-1}$, the finite speed of propagation under the hypothesis $d^*(j - 1) \ge v(n - j + 1)$, and Theorem 2.1, we can show that We conclude using Lemma 7.2, which says that The second point of the previous lemma can be generalized in the following way.
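The Cauchy–Schwarz step invoked in the first point can be sketched as follows (a generic reconstruction, not the paper's display; $\mathcal F_j$ is the filtration introduced above):

```latex
\bigl|\operatorname{Cov}_\sigma(\xi_j,\xi_n)\bigr|
  =\Bigl|\mathbb{E}_\sigma\Bigl[\xi_j\bigl(\mathbb{E}_\sigma[\xi_n\mid\mathcal F_j]
        -\mathbb{E}_\sigma[\xi_n]\bigr)\Bigr]\Bigr|
  \le \mathbb{E}_\sigma[\xi_j^2]^{1/2}\,
      \mathbb{E}_\sigma\Bigl[\bigl(\mathbb{E}_\sigma[\xi_n\mid\mathcal F_j]
        -\mathbb{E}_\sigma[\xi_n]\bigr)^2\Bigr]^{1/2},
```

and the second factor is small when $n - j$ is large, by the Markov property and Lemma 7.2.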

Law of large numbers and central limit theorem
Proof of Theorem 2.2. We first prove (3). We take inspiration from the proof of the law of large numbers in [GK11].
We have $\sup_i E_\sigma[\bar\xi_i^2] \le c$ by Lemma 7.1. First we compute the variance of $S_n/n$:
$$\operatorname{Var}_\sigma\Bigl(\frac{S_n}{n}\Bigr) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}_\sigma(\bar\xi_i) + \frac{2}{n^2}\sum_{j<k} \operatorname{Cov}_\sigma(\bar\xi_j, \bar\xi_k).$$
The first sum is dominated by cn. We control the second one thanks to Lemma 7.3.

So $\operatorname{Var}_\sigma(S_n/n) \le C/n$. Applying Chebyshev's inequality and the Borel–Cantelli lemma, we deduce that $S_{n^2}/n^2$ converges a.s. To prove the convergence of the whole sequence, we need to control $\operatorname{Var}_\sigma(S_p/p - S_n/n)$ with $p = p(n) = \lfloor\sqrt{n}\rfloor^2$. It is easy to check that $n - p \le 2\sqrt{n} \le p$ for $n$ large enough. We introduce the quantity $D_{n,p} = \bar\xi_{p+1} + \dots + \bar\xi_n$.
We have that The first two terms are dominated by $C(n-p)/n^2 \le 2C/n^{3/2}$. Using the same computations as above, we have that This yields and, by the same arguments as previously, $S_{p(n)}/p(n) - S_n/n$ converges almost surely to 0, as does $S_{p(n)}/p(n)$. We deduce that, almost surely, $S_n/n$ converges to 0. Moreover, by Lemma 7.2, $\frac{1}{n}\sum_{i=1}^n E_\sigma[\xi_i]$ is the Cesàro mean of a convergent sequence. It is also clear (e.g. by finite speed of propagation) that the remaining term converges a.s. to 0. To identify the limit in (3) and conclude, we observe that which converges to the announced velocity.
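Schematically, the subsequence step rests on the summability of the variance bound along the squares (a standard sketch, with $C$ a generic constant and the increments centred):

```latex
\operatorname{Var}_\sigma\Bigl(\frac{S_n}{n}\Bigr)\le\frac{C}{n}
\quad\Longrightarrow\quad
\sum_{n\ge 1}\mathbb{P}_\sigma\Bigl(\Bigl|\frac{S_{n^2}}{n^2}\Bigr|>\varepsilon\Bigr)
\le\sum_{n\ge 1}\frac{C}{\varepsilon^2 n^2}<\infty,
```

so Borel–Cantelli gives almost sure convergence along the subsequence $(n^2)$, and the control of $\operatorname{Var}_\sigma(S_p/p - S_n/n)$ bridges the gaps between consecutive squares.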
(c) for every $k, n \ge 1$ and $f : \mathbb{R}^n \to \mathbb{R}$ measurable such that $\sup_\sigma E_\sigma[f(X_1, \dots, X_n)] < \infty$, for all initial $\sigma$, we have the Markov property 2. There exist a decreasing function $\Phi$, constants $C, c^* \ge 1$ and $v \in \mathbb{R}$, and a measure $\nu$ such that (d) for every $k, n$ such that $k \ge c^* n$, (e) for every $k, n$ such that $k > c^* n$ and any bounded function $F$ : Then there exists $s \ge 0$ such that Lemma A.2. The hypotheses of Theorem A.1 imply that there exists $C > 0$ such that for every $i \le j$, Proof. On the one hand, by Hypothesis (1b) and the Markov property, we have where the last equality comes from (30) applied twice to $f(x) = x$ and Hypothesis (1a). The same strategy also controls the covariance. On the other hand, if $i \ge c^*(j - i + 1)$, we can apply (31) and again (30) to get Proof of Theorem A.1. We begin by showing that We have $s^2 < \infty$ by (34). Moreover, by (30) applied to $f(x) = x^2$, Similarly, by (35), which concludes the proof of (36). The rest of the proof depends on the value of $s^2$: if $s^2 = 0$, Chebyshev's inequality shows that $\frac{1}{\sqrt n}\bigl(\sum_{i=1}^n X_i - vn\bigr) \to 0$ in probability. If $s^2 > 0$, the proof appeals to the variation of Stein's method found in [Bol82]. For The theorem's hypotheses and the boundedness of the $Y_i$'s imply that there exists $C$ such that: 1. for every $i \le j < k \le l$, we have that 2. for every $i \le j \le k < l$, we have that We give details only for the first item. We have that and we conclude using hypothesis (31). The other cases are obtained by applying the Markov property at time $k$ and hypothesis (30). Thanks to (33), we have that $\alpha_n = s^2 n(1 + o(1))$, and it is enough to prove that $S_n/\sqrt{\alpha_n}$ is asymptotically normal. The main idea is to use a lemma from [Bol82], itself inspired by Stein's method, which is the following.
Then the sequence (ν n ) converges to the standard normal law.
The proof can be found in [Bol82]. We want to apply this lemma to the distributions of the random variables $S_n/\sqrt{\alpha_n}$. The verification of item (a) follows.
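For the reader's convenience, the lemma in question has, up to notation, the following form (as recalled from [Bol82]; the precise hypotheses should be checked against the original):

```latex
\textbf{Lemma.} Let $(\nu_n)$ be probability measures on $\mathbb{R}$ such that
$\sup_n\int x^2\,d\nu_n(x)<\infty$ and, for every $\lambda\in\mathbb{R}$,
\[
  \int(i\lambda-x)\,e^{i\lambda x}\,d\nu_n(x)\xrightarrow[n\to\infty]{}0 .
\]
Then $(\nu_n)$ converges weakly to the standard normal law.
```

The second condition is precisely what items (a) and (b) below verify for the distributions of $S_n/\sqrt{\alpha_n}$.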
We now need to check item (b). Let $\lambda \in \mathbb{R}$ and $A_n = \bigl(i\lambda - \frac{S_n}{\sqrt{\alpha_n}}\bigr) e^{i\lambda S_n/\sqrt{\alpha_n}}$. To prove that $E_\sigma[A_n]$ goes to 0, we split $A_n$ into three terms: The decay of the first two terms follows easily from the estimates on the covariances.
If $|i - j| \ge (c^* + 2)\ell_n$, then w.l.o.g. $i \le k < j \le l$ and $j - k \ge c^*\ell_n \ge c^*(l - j)$; so, applying (37), we have that If we are not in the previous case, then w.l.o.g. $i \le k \le j < l$ and, applying (38), Consequently, Using the Taylor expansion of the exponential and the fact that $E_\sigma[S_{j,n}^2] = O(\ell_n)$ thanks to (33), we easily control the term $A_n^{(2)}$: To study $E[A_n^{(3)}]$, Bolthausen uses the stationarity of the field, but we do not have this property here; we need to analyse this term carefully, as is done in [GLM15]. First, using the boundedness of the variables $(Y_j)$ and the fact that $\ell_n \ll \sqrt{\alpha_n}$, we just need to prove that The usefulness of starting the sum from $\ell_n$ instead of 0 will appear later. We fix a number $M$ (which will eventually depend on $n$) and split the exponential into two parts (its partial sum and its remainder): Lemma A.4. If $m \le \frac{\log n}{7\log(c^*+1)}$, then for any $j \in \{\ell_n, \dots, n\}$ and any $(i_1, \dots, i_m) \in \tau$ Proof. Let $m \le \frac{\log n}{7\log(c^*+1)}$, $j \in \{\ell_n, \dots, n\}$ and $(i_1, \dots, i_m)$ such that $|i_k - j| \ge \ell_n$ for all $k$. We distinguish several cases. 1. If $i_m \le j - \ell_n$, applying the Markov property at time $i_m$ and hypothesis (30), we have that 2. (a) Using the Markov property at time $j$ and hypothesis (32), we have that where the last equality comes from the first case, in which all the indices are smaller than $j - \ell_n$.
(b) Else, we denote $k^* = \max\{k \ge b + 1 : \dots\}$. If all the indices between $b + 1$ and $k^*$ satisfy (2a), then we are reduced to the previous case. If not, we iterate until we are reduced to the previous case or to the first case (all indices smaller than $j - \ell_n$).
3. If $i_1 \ge j + \ell_n$, then we perform the same computation as in the previous case with $b = 0$ and conclude using that $E_\sigma[Y_j] = O(\Phi(j)) = O(\Phi(\ell_n))$, because $j \ge \ell_n$.
We now have to understand the contribution of $R^M_{j,n}(\lambda)$ to (40). We fix a number $L$ (which will eventually depend on $n$). Using classical formulas for the remainder of the exponential series (and the Cauchy–Schwarz inequality for the second inequality), we obtain that The right-hand side of (41) will be taken care of by an appropriate choice of $L$. The right-hand side of (42) calls for a better understanding of $S_n$ and $S_{j,n}$.
Lemma A.5. There exists $c > 0$ such that for any $n$ large enough and any $\beta = O(\sqrt{n}\,\ell_n^{-1})$, Proof. For $A \subset \{1, \dots, n\}$, let $Z_A = \sum_{k \in A} Y_k$. It is immediate that $|Z_A| \le C_Y |A|$. We partition the interval $\{1, \dots, n\}$ into blocks $B$ of $\ell_n$ successive integers. Let $B$ be a block whose smallest index $s_B$ is larger than $c^*\ell_n$, and let $t \in \{0, \dots, s_B - c^*\ell_n\}$. We can apply hypothesis (32) with the function $F : (x_1, \dots, x_{\ell_n}) \mapsto \exp\bigl(\beta\bigl(\sum x_i \wedge (\ell_n C_Y)\bigr)/\sqrt{n}\bigr)$ (and the parameter $k = s_B - t > c^*\ell_n$): where $Z_{B-t}$ denotes the sum of $\ell_n$ variables starting from $s_B - t$. Furthermore, we have that Now we consider a set $\mathcal{B}$ of blocks (and $S_{\mathcal{B}}$ the associated sum) such that the distance between two blocks $B$ and $B'$ of $\mathcal{B}$ is larger than $c^*\ell_n$; in other words, if $t_B$ is the maximal index in $B$ and $s_{B'}$ the minimal one in $B'$, then $s_{B'} - t_B > c^*\ell_n$. Let $B$ and $B'$ be two such blocks of $\mathcal{B}$. The Markov property and Equation (43) give that By definition of $\mathcal{B}$ we clearly have $|\mathcal{B}| \le n/\ell_n$, so after iteration we conclude that $E_\sigma\bigl[\exp\bigl(\beta S_{\mathcal{B}}/\sqrt{n}\bigr)\bigr] \le \bigl(1 + c\beta^2\ell_n/n\bigr)^{|\mathcal{B}|} \le \exp\bigl(c\beta^2 \tfrac{\ell_n}{n}\cdot\tfrac{n}{\ell_n}\bigr) \le \exp(c\beta^2)$.
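The block iteration in the proof above can be summarized schematically as follows (with $c$ a generic constant and $\ell_n$ denoting the block length):

```latex
\mathbb{E}_\sigma\Bigl[\exp\Bigl(\beta\,\frac{S_{\mathcal B}}{\sqrt n}\Bigr)\Bigr]
\le\Bigl(1+\frac{c\beta^2\ell_n}{n}\Bigr)^{|\mathcal B|}
\le\exp\Bigl(c\beta^2\,\frac{\ell_n}{n}\cdot\frac{n}{\ell_n}\Bigr)
= e^{c\beta^2},
```

using $|\mathcal B| \le n/\ell_n$ and the elementary bound $1 + x \le e^x$.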
Let us go back to the whole sum $S_n$; we can write $S_n = Z_0 + S_{\mathcal{B}_1} + \dots + S_{\mathcal{B}_{c^*}}$, where the $\mathcal{B}_i$ are disjoint sets of blocks with the same property as $\mathcal{B}$ and $Z_0$ is the sum of the first $c^*\ell_n$ terms. By Hölder's inequality, we have that where we use that $E_\sigma\bigl[\exp\bigl(\beta Z_0/\sqrt{n}\bigr)\bigr] \le 1 + \beta C_Y \ell_n/\sqrt{n}$. We can proceed similarly to control $E_\sigma\bigl[\exp\bigl(-\beta S_n/\sqrt{n}\bigr)\bigr]$. So, if $L = O(\sqrt{n}\,\ell_n^{-1})$, then we can apply Lemma A.5 with $\beta = \epsilon L$ (with $\epsilon$ small enough) and use the exponential Chebyshev inequality to obtain that
$$P_\sigma\Bigl(\frac{S_n - S_{j,n}}{\sqrt{\alpha_n}} > L\Bigr) \le e^{-\epsilon L^2}\, E_\sigma\Bigl[\exp\Bigl(\epsilon L\,\frac{S_n - S_{j,n}}{\sqrt{\alpha_n}}\Bigr)\Bigr] \le e^{-cL^2}.$$
We recall that we have previously chosen $M = \frac{\log n}{7\log(c^*+1)}$. Now we choose $L = \frac{\log n}{7\log(c^*+1)} \cdot (10 \wedge (|\lambda| + 2))$ and we use Lemma A.5 and Equation (44). Putting everything together, we conclude the proof for bounded random variables $Y_i$. We use a classical truncation argument to prove the result for unbounded random variables: let $T_N(x)$ be the truncation operator and $R_N(x)$ the remainder: Thanks to (33), the corresponding error term converges to 0 as $N \to \infty$, uniformly in $n$. Using moreover that $\operatorname{Var}_\sigma(R_N(Y_i)) \to 0$ as $N \to \infty$, uniformly in $i$, we conclude that the central limit theorem remains valid for the unbounded variables.
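The displayed definition of the truncation is elided in the text above; a standard choice (an assumption, not necessarily the authors' exact definition) is:

```latex
T_N(x) = (x\wedge N)\vee(-N), \qquad R_N(x) = x - T_N(x),
\qquad Y_i = T_N(Y_i) + R_N(Y_i).
```

The CLT proved above applies to the bounded variables $T_N(Y_i)$, and the uniform smallness of $\operatorname{Var}_\sigma(R_N(Y_i))$ lets one pass to the limit $N \to \infty$.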