Stationary product measures for conservative particle systems and ergodicity criteria

We study conservative particle systems on W^S, where S is countable and W = {0, ..., N} or W = N. The rate at which a particle moves from site x to site y is p(x,y) b(η_x, η_y), where η_z denotes the number of particles at site z. Under assumptions on b and the assumption that p is finite range, which allow for the exclusion, zero range and misanthrope processes, we determine exactly which product measures are stationary. Furthermore we show that a stationary measure µ is ergodic if and only if the tail sigma algebra of the partial sums is trivial under µ. This is a consequence of a more general result on interacting particle systems: a stationary measure is ergodic if and only if the sigma algebra of sets invariant under the transformations of the process is trivial. We apply this result, combined with a coupling argument on the stationary product measures, to determine which product measures are ergodic. When W is finite this gives a complete characterisation. When W is the set of natural numbers we show that for nearly all functions b a stationary product measure is ergodic if and only if it is supported by configurations with an infinite number of particles. This picture is not complete, however: we give an example of a system in which b is such that there is a stationary product measure which is not ergodic, even though it concentrates on configurations with an infinite number of particles.


Introduction
For the exclusion, inclusion, zero range and misanthrope processes [3,5,7,12] there is a long history of research into the stationary and ergodic measures. For the exclusion process it has long been known that the model has invariant product measures indexed by the particle density per site. It was shown that the model has stationary measures of constant density, as well as measures indexed by a parameter (λ_x)_{x∈S} that is reversible with respect to the random walk kernel p, i.e. λ_x p(x,y) = λ_y p(y,x), see e.g. [12]. This picture was shown to be true for other models as well [5,7]. For the zero range process, however, this picture was not complete, as was shown in Andjel [3]. The underlying parameters λ for a product measure in the case of the zero range process were only required to satisfy Σ_x λ_x p(x,y) = λ_y. In 2005 Bramson and Liggett [4] extended the picture for the exclusion process by showing that product measures for which (λ_x)_{x∈S} satisfies Σ_x λ_x p(x,y) = λ_y, and for which λ_x p(x,y) ≠ λ_y p(y,x) implies λ_x = λ_y, are stationary as well.
The problem of finding all ergodic measures for such systems is still open. Progress has been made in classifying which stationary product measures are ergodic. For the exclusion process this problem was solved in Jung [10]; for the zero range process it is solved in Sethuraman [17] under additional conditions on the interaction function g. For the misanthrope process and the inclusion process ergodicity problems regarding product measures have also been solved [2,7]. Different methods are used for different models; Sethuraman [17], however, uses an approach that works for a range of models.
In this paper we show that these questions can be dealt with regardless of the specific model, i.e. we will work with systems with a generator of the form Lf(η) = Σ_{x,y} p(x,y) b(η_x, η_y)(f(η − δ_x + δ_y) − f(η)), where p is finite range and where the function b depends on the model at hand. We will take b bounded for convenience, but the methods are not restricted to this case.
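As a concrete illustration (a sketch with our own toy choices, not code from the paper), the generator above can be checked numerically on a small ring: for the symmetric exclusion process the Bernoulli product measure with constant density ρ is stationary, so the µ-integral of Lf vanishes for every local f.

```python
import itertools

# A minimal sketch of the product-type generator
#   Lf(eta) = sum_{x,y} p(x,y) b(eta_x, eta_y) (f(eta - delta_x + delta_y) - f(eta)),
# specialised to the exclusion process b(n, k) = n(1 - k) on a ring of 5 sites.

SITES = 5
rho = 0.3

def p(x, y):
    # symmetric nearest-neighbour kernel on the ring
    return 0.5 if (x - y) % SITES in (1, SITES - 1) else 0.0

def b(n, k):
    # exclusion process rate function
    return n * (1 - k)

def L(f, eta):
    total = 0.0
    for x in range(SITES):
        for y in range(SITES):
            rate = p(x, y) * b(eta[x], eta[y]) if x != y else 0.0
            if rate > 0.0:
                zeta = list(eta)
                zeta[x] -= 1          # eta - delta_x + delta_y
                zeta[y] += 1
                total += rate * (f(tuple(zeta)) - f(eta))
    return total

def mu(eta):
    # Bernoulli(rho) product weight of a configuration
    w = 1.0
    for n in eta:
        w *= rho if n == 1 else 1.0 - rho
    return w

f = lambda eta: eta[0] * eta[1]       # an arbitrary local function
integral = sum(mu(eta) * L(f, eta)
               for eta in itertools.product((0, 1), repeat=SITES))
assert abs(integral) < 1e-12          # stationarity: the integral of Lf vanishes
```

The same loop with an asymmetric p and constant λ, or with a reversible λ-profile, also gives zero, in line with the stationarity results below.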
We start in section 2 by proving that a stationary measure µ is ergodic if and only if the tail sigma algebra of the partial sums is trivial under µ. In fact this is a consequence of a result that is valid for more general interacting particle systems (IPS). We will show that a stationary measure µ for a general IPS is ergodic if and only if the sigma algebra of sets that are invariant under the possible transformations of the system is trivial under µ. This result also shows that stationary measures for Glauber dynamics are ergodic if and only if they are tail trivial.
In section 3 we address the question of stationarity of product measures. We show that the idea of Bramson and Liggett [4] extends to other models and that the structure of the set of stationary product measures depends crucially on the structure of the function b. The more restrictions one puts on b, the fewer parameter sets λ yield a stationary product measure. We also show that these are exactly the stationary product measures of this type: no others can be found.
After that we apply these results in section 4 to show which product measures are ergodic. We use a coupling proof to extend the results of Jung [10] to the case W = {0, . . . , N}, hence completely resolving the question when W is finite. We use the same techniques to show similar results for the case W = N. In this case, however, we find some interesting behaviour. For most functions b a product measure is ergodic if it has zero mass on configurations with a finite number of particles; this is consistent with the behaviour found for the zero range process [17]. For certain functions, however, this behaviour breaks down, as we illustrate in section 5 with an example of a system that has a stationary, but non-ergodic, product measure which concentrates on configurations with an infinite number of particles that follow a certain increasing deterministic profile.

Main Results
Let E = W^S be the set of configurations (η_x)_{x∈S} for a countable set W and a countable set S. Let B be the product σ-algebra. For example, the exclusion process is defined on {0,1}^S, the zero range process on N^S and the stochastic Ising model on {−1,1}^S. By η(t) = (η_i(t))_{i∈S} we denote the configuration of the process at time t. We order S by a bijection φ : S → N, so i < j if φ(i) < φ(j). Using this ordering define S_n = {x ∈ S : φ(x) ≤ n} and B_n = σ{η_i : i ∈ S_n}. Define ∆_f(i) = sup{|f(η) − f(ζ)| : η_j = ζ_j for all j ≠ i}, the variation of f at i ∈ S. Define the space of test functions by Note that for W = {0,1} and b(n,k) = n(1−k) we obtain the exclusion process, and that for W = N and b(n,k) = g(n) we obtain the zero range process. We will refer to b as the rate function and to this class of processes as product type processes. We will assume that We also make the following assumption. In the case that W is a finite set we know by theorem I.3.9 in Liggett [12] that there exist a process η(t) and a semigroup S_t : C(E) → C(E) corresponding to L_{b,p}. With the same techniques it is not hard to show that there are a process η(t) and a semigroup S_t : D → D in the case that W = N and b is bounded. In both cases D is a core for L_{b,p}. Note that in the case W = N it is not true that D = C(E). It seems that D, which is the uniform closure of the bounded local functions, is the natural space to work with. The zero range process has also been constructed for unbounded b in Andjel [3]; since the results that we obtain in this article do not improve upon the results of Sethuraman [17] with respect to the zero range process, we will not deal with this construction. The methods developed here apply to the zero range process, as they are valid regardless of the structure of b.
For a more general interacting particle system we follow the notation of Liggett [12]. For T a finite subset of S and ζ ∈ W T let c T (η, ζ) be the rate at which the system makes a transformation from configuration η to configuration θ T,ζ (η) which is defined by If we assume W to be finite theorem I.3.9 in Liggett [12] gives that L generates a Markov process η(t) and semigroup S t : C(E) → C(E) for which D is a core. One of the two assumptions for this theorem to hold is we state this assumption as we need it for calculations later on. Note that in the particular case of product type systems assumption 1.1 implies assumption 1.3.
Furthermore we denote the set of stationary measures for the process generated by L by I(L); proposition 4.9.2 in Ethier and Kurtz [6] shows that We begin by stating the result on ergodicity.

Ergodic measures for general IPS
In this section we work with a generator L that is given by equation (1.2). For the results that follow we need the following assumption.
there is an n ∈ N, there are finite sets T_1, . . . , T_n ⊂ S and ζ_1 ∈ W^{T_1}, . . . , ζ_n ∈ W^{T_n} such that for all i ≤ n: This assumption states that if the Markov process allows the transformation from η to θ_{T,ζ}(η), then there is a sequence of possible transformations that returns the configuration to η. Under this assumption we can define the following σ-algebra.
Definition 1.5. For a generator L define the σ-algebra G_L of sets that are invariant under transformations of the process generated by L. That means that if G ∈ G_L and η ∈ G, T ⊂ S finite, ζ ∈ W^T are such that c_T(η, ζ) > 0, then θ_{T,ζ}(η) ∈ G.
Note that by assumption 1.4 G L is a σ-algebra. We now state the main theorem of this section.
Theorem 1.6. If L generates a Markov process and µ ∈ I(L), then µ is ergodic if and only if G L is trivial under µ.
We give two corollaries to this theorem regarding two classes of examples, see corollary 1.11 below. The first class consists of spin flip systems, with generators that read for some rate function r, where W = {−1, 1} and η^x_y = η_y if y ≠ x and η^x_x = −η_x. Important examples are stochastic Ising models. The second class consists of conservative systems, of which the product type systems are a special case. Kawasaki dynamics also belongs to this class.
(a) The tail σ-algebra: The tail σ-algebra of the partial sums: We use this information combined with the following irreducibility assumptions to obtain corollary 1.11. Assumption 1.9. In the case that we are working with a conservative particle system, we assume that c is irreducible. This means that if we have two configurations η and η̂ such that there is a finite box B ⊂ S such that η agrees with η̂ outside B and Σ_{x∈B} η_x = Σ_{x∈B} η̂_x, then there exists a sequence of configurations η = η_0, . . . , η_n = η̂, so that we have a sequence of sites in S: Consider for example a product type conservative particle system; there this assumption is satisfied as a consequence of assumption 1.2. It is easy to see that this assumption implies that G_L = A = H. Assumption 1.10. In the case that we are working with a spin flip system we assume that inf Under this assumption we see that G_L = T. Note that it is possible to work with a more general assumption than 1.10, but we do not need that here. The use of theorem 1.6 is not restricted to these cases, however. For example, it can also be applied to the tagged particle process [13,15,16]. These models are just like the product type IPS, but now one is interested in the properties of a single particle, the tagged particle. One starts the dynamics from a translation invariant stationary product measure. Important information can be obtained by looking at the environment as seen from this tagged particle: the environment process. It is proven that the environment process also has a stationary product measure, see e.g. [13], proposition III.4.3, or [16], proposition 7. One would like to prove that this measure is ergodic, see [13], proposition III.

Results on product measures for product type conservative particle systems
We return to product type systems, where the generator reads For the existence of stationary product measures we make the following two assumptions This property ensures that we obtain a set of invariant product measures and can be traced back to Cocozza-Thivent [5]. The second assumption is needed for the case W = N and will be explained below.
Under these assumptions the process generated by L b,p has a natural class of invariant product measures. These are defined in the following way.
λ* is the radius of convergence of the formal sum Z_λ, so for λ < λ* we have Z_λ < ∞. Note that if we are working with W = {0, . . . , N}, then we only define a_k for k ≤ N, hence λ* will be infinite. Because of assumption 1.13 we know that λ* > 0.
Extend λ to have one value for each point of S, so λ ∈ [0, λ*)^S. Then: Definition 1.14. The one site marginal: let x ∈ S, then µ_{λ_x}(n) = Z_{λ_x}^{-1} a_n λ_x^n. The measure µ_λ will be the product measure on W^S: The set of measures of this type is denoted by We see that given a function b we obtain the set of measures P^⊗(b). Note however that different b's can lead to the same set of probability measures. We identify the stationary product measures of this type. (a) If for all k it holds that b(n,k) = b(n,0), i.e. the zero range process, then µ_λ ∈ I(L_{b,p}).
and λ is such that whenever λ_x p(x,y) ≠ λ_y p(y,x) we have λ_x = λ_y, then it holds that µ_λ ∈ I(L_{b,p}).
(c) If λ_x p(x,y) = λ_y p(y,x) for all x and y, then µ_λ ∈ I(L_{b,p}).
Furthermore, an invariant measure µ_λ in the set P^⊗(b) must be of one of these three types, i.e. λ is a solution of Σ_x λ_x p(x,y) = λ_y Σ_x p(y,x) and the pair (λ, b) satisfies (a), (b) or (c).
where s is symmetric. Choose for example r(n) = b(n, 0).

2. Furthermore, it is an interesting question whether these results can be extended to infinite range p.
Now that we know what the class of invariant product measures is, we can apply corollary 1.11. A coupling argument is used to prove the following theorem. For a fixed generator (c) Furthermore, if W = N and Σ_{i:λ_i<1} λ_i + Σ_{i:λ_i≥1} 1/λ_i = ∞, then µ_λ is ergodic. Note that case (a) was also proved by Jung [10] for W = {0,1}. His condition Σ_x λ_x/(1+λ_x)² = ∞ looks different but is equivalent to the one given here. Remarks 1. In the case W = N one might think that it is possible to prove that Σ_i λ_i = ∞ implies that µ_λ is ergodic without any further conditions like (∗∗). We show in section 5 that this is not possible: we give an example of a system where b and p have a specific structure such that there exists a product measure of the given type with Σ_i λ_i = ∞, while µ_λ is not ergodic. This raises the question under which additional assumptions Σ_i λ_i = ∞ implies ergodicity. The proof of (b) shows some analogy with the proof of theorem 1.8 in Aldous and Pitman [1], and the open question we see here is similar to the open question in [1], see theorem 1.8 and example 7.5 in that article.
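The claimed equivalence of Jung's condition with ours rests on the elementary bounds min(λ, 1/λ)/4 ≤ λ/(1+λ)² ≤ min(λ, 1/λ), so the two series diverge together. A quick numerical sanity check of this comparability (our own illustration, not from the paper):

```python
# Check that Jung's summand lambda/(1+lambda)^2 is comparable to
# min(lambda, 1/lambda), the summand of the condition in (c) above.

def jung(lam):
    return lam / (1.0 + lam) ** 2

def split(lam):
    # summand of: sum_{lambda_i < 1} lambda_i + sum_{lambda_i >= 1} 1/lambda_i
    return lam if lam < 1.0 else 1.0 / lam

for k in range(-60, 61):
    lam = 2.0 ** (k / 4.0)           # grid spanning many orders of magnitude
    ratio = jung(lam) / split(lam)
    assert 0.25 <= ratio <= 1.0, (lam, ratio)
```

The ratio is exactly 1/4 at λ = 1 and tends to 1 as λ → 0 or λ → ∞, so neither series can diverge without the other.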
2. We give an explanation for the symmetric nature of theorem 1.16 (a). We will see that the condition for ergodicity means that the measure concentrates on configurations which have an infinite number of particles, i.e. Σ_i η_i = ∞, but also such that Σ_i (N − η_i) = ∞, i.e. infinitely many anti-particles. We give a more intuitive view of this by the following approach. Instead of saying that a particle moves from site x to site y with rate p(x,y) b(η_x, η_y), one could say that an empty spot, or anti-particle, moves from site y to site x with rate b̃, defined in terms of b. For more details on this rewriting see section 6 below.
Proof of theorem 1.6 and lemma 1.8

We start with the proof of lemma 1.8, which states that A = H. We refer to the point φ^{-1}(0) as the origin.
Proof of lemma 1.8. Let A ∈ A and fix n; we show that A ∈ H_n. By the defining property of A we see that, given its configuration on S_n^c, A does not depend on the exact configuration of η in S_n but only on the sum of the values in S_n. We elaborate on this argument a little for the case W = N; if one understands the argument for this case, then it is clear for the finite case too. Suppose that we have a configuration η ∈ A. We see that the configuration So any configuration that is equal to η outside S_n and has Σ_{i∈S_n} η_i particles in S_n is in A. This means that given the configuration outside S_n, 1_A only depends on Σ_{i∈S_n} η_i. Hence A ∈ H_n, but n was arbitrary, so A ∈ H.
Let A ∈ H. Pick η ∈ A; we show for x and y with η_x > 0 that η^{x,y} ∈ A. Pick n so that n > φ(x), φ(y). We know that A ∈ H_n, so A does not depend on the exact values in S_n but only on the sum Σ_{i∈S_n} η_i, which is not changed by moving a particle from x to y; therefore η^{x,y} ∈ A. This yields A ∈ A.
We now turn to the proof of theorem 1.6, for which we need some machinery. Fix a measure µ ∈ I(L).
We denote the norm on L²(µ) by ‖·‖_µ. The proof is rather standard, but we give it for the sake of completeness in our general setting.
Proof. By invariance of µ we obtain that Hence we see that S_t, viewed as an operator on the subset D ⊂ L²(µ), is a contraction. We now prove that D is dense in L²(µ). Clearly D contains all bounded local functions, hence its closure in L²(µ) contains all local functions in L²(µ). We prove that the bounded local functions are dense in L²(µ).
Recall the definition of B_n. Pick a bounded f ∈ L²(µ) and define the local functions f_n = E_µ[f | B_n]. As taking a conditional expectation is a projection in an L² space, we see that ‖f_n‖_µ ≤ ‖f‖_µ; furthermore the sequence f_n is a martingale with respect to the filtration (B_n)_{n≥0}. By martingale convergence f_n converges to f in L²(µ). By a truncation argument we see that the bounded functions are dense in L²(µ), so indeed D is dense in L²(µ).
So S_t, being a contraction with respect to the Hilbert space norm on D ⊂ L²(µ), defines by continuous extension a linear operator S^µ_t on L²(µ). This also defines a generator L^µ with domain As we clearly have ‖·‖_µ ≤ ‖·‖_∞, it holds that L^µ is the closure of L and D ⊂ D(L^µ). As D is a core for L, we obtain that D is a core for L^µ as well. This last statement is obtained by using proposition 3.1 from Ethier and Kurtz [6]. This proposition shows that R(λ − L) is dense in D for some λ > 0. We know that D is dense in L²(µ), hence R(λ − L^µ) is dense in L²(µ). The same proposition then yields that D is a core for L^µ.
We now give a technical result which helps us to analyse the structure of the set I. Define, in the spirit of lemma IV.4.3 of Liggett [12] and Sethuraman [17], the following two quadratic forms, for f for which they are finite: Liggett defines bilinear forms instead of quadratic ones; we will not do that here, because the following result is only true for quadratic forms. Below we will show that equality for bilinear forms is possible only in the case that the underlying measure µ is reversible with respect to the dynamics. Remark. This proposition is an improvement over lemma 2.4 of Sethuraman [17], since the latter only holds for product measures.
Proof. The proof is analogous to that of lemma IV.4.3 in Liggett [12]. We will not repeat the proof here; the key step that is different is to note that for f ∈ D it holds that After that, simply work out the right-hand side and plug in the arguments from [12].
The same techniques can be used to prove that for f, g ∈ D(L µ ) This shows that we have equality for bilinear forms only when µ is reversible with respect to the dynamics.
For the proof of theorem 1.6 we introduce approximating Markov processes.
Recall the definition of S_n. Define for f ∈ D Because S_n is a finite set, L^{(n)} is a bounded operator, which therefore generates a Markov jump process with semigroup S_t(n). This semigroup also extends to S^µ_t(n) on L²(µ).
Proof of theorem 1.6. Suppose that µ is ergodic. Pick a set A ∈ G_L; we need to show that µ(A) ∈ {0, 1}, or equivalently that the function 1_A is constant µ-almost surely. Intuitively one would like to say that L^µ 1_A = 0, because clearly for every η, finite T ⊂ S and ζ ∈ W^T it holds that ∇_{T,ζ} 1_A(η) = 0; hence S^µ_t 1_A = 1_A for all t, hence by ergodicity 1_A is constant µ-almost surely.
This reasoning is not rigorous as we do not know whether 1 A ∈ D(L µ ). However by corollary I.3.14 in Liggett [12] we obtain that S µ t (n)f → S µ t f for all f ∈ L 2 (µ), uniformly for t in compact intervals.
The set A ∈ G_L is invariant under finitely many transformations of the form η to θ_{T,ζ}(η) for T and ζ such that c_T(η, ζ) > 0. Denote the Markov process generated by L^{(n)} by η_n(t). Under the law of this Markov process the set on which there are only a finite number of allowed transitions by time t has probability 1. This means that for any starting configuration η, t ≥ 0 and n ∈ N it holds that S^µ_t(n) 1_A = 1_A, and letting n tend to infinity the same holds for S^µ_t. Now we can use the ergodicity of µ with respect to the Markov process to obtain that 1_A is constant µ-almost surely, which implies that µ(A) ∈ {0, 1}. Since A ∈ G_L was arbitrary we see that G_L is trivial under µ.
For the second implication assume that G_L is trivial under µ. Fix A ∈ B and assume that S^µ_t 1_A = 1_A µ-a.s. for all t ≥ 0. We will show that there is a set A_∞ ∈ G_L such that µ(A) = µ(A_∞). First note that 1_A ∈ D(L^µ) and L^µ 1_A = 0, hence by proposition 2.2 we see that R(f) = 0. This in turn implies that the set B_0 defined by has µ measure zero. Let A_0 = A. Define A_1 = A_0 \ B_0 and note that 1_{A_0} = 1_{A_1} µ-a.s. because µ(B_0) = 0. This means that This yields that the set B_1 given by has measure 0. Define A_2 = A_1 \ B_1. We can repeat this step and construct We show that A_∞ ∈ G_L. Suppose η ∈ A_∞, T ⊂ S finite and ζ ∈ W^T such that c_T(η, ζ) > 0; we must prove that θ_{T,ζ}(η) ∈ A_∞. This is not too difficult: suppose that θ_{T,ζ}(η) ∉ A_∞, then there is an N > 0 so that θ_{T,ζ}(η) ∉ A_n for all n ≥ N, but then η ∉ A_n for all n > N, so that it follows that η ∉ A, which is a contradiction.

Proof of proposition 1.15
First we give a consequence of the definition of the product measures in P^⊗(b). Let A be the set defined by On the set A define µ^{y,x}_λ to be the measure obtained from µ_λ by the transformation η → η^{y,x}, i.e. 1_A µ^{y,x}_λ(dη) = 1_A µ_λ(dη^{y,x}).
Lemma 3.1. For µ_λ the Radon-Nikodym derivative corresponding to the change of variables η^{y,x} to η is: Proof. The transformation η → η^{y,x} only affects two coordinates, hence In the last line we use assumption 1.12.
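For a product measure with marginals µ_{λ_x}(n) = Z^{-1}_{λ_x} a_n λ_x^n, the computation behind lemma 3.1 can be sketched as follows, using only the form of the marginals and the convention that η^{y,x} moves one particle from y to x; assumption 1.12 is then used to simplify the ratio of the a's:

```latex
\frac{\mathrm{d}\mu_\lambda^{\,y,x}}{\mathrm{d}\mu_\lambda}(\eta)
  = \frac{\mu_{\lambda_y}(\eta_y - 1)\,\mu_{\lambda_x}(\eta_x + 1)}
         {\mu_{\lambda_y}(\eta_y)\,\mu_{\lambda_x}(\eta_x)}
  = \frac{\lambda_x}{\lambda_y}\cdot
    \frac{a_{\eta_y - 1}\,a_{\eta_x + 1}}{a_{\eta_y}\,a_{\eta_x}} .
```

The normalising constants Z_{λ_x} and Z_{λ_y} cancel, which is why only the ratio of λ's and a's survives.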
We start the proof of proposition 1.15 with: Proof. This is a short calculation. Let f ∈ D. We are allowed to rearrange the terms in the next calculation, because f is a local function and p is finite range.
We arrive at the fourth line by using lemma 3.1 on the first term. We obtain the last expression by changing the roles of x and y in the first term.
Clearly, if λ_x p(x,y) = λ_y p(y,x) for all x, y, the integral is 0. This is case (c). Now suppose that this is not the case and we have a pair i, j such that λ_i p(i,j) ≠ λ_j p(j,i). Then the above argument does not work. Note that if we can prove that for all η_x, η_y it holds that Σ_{x,y} b(η_x, η_y)[p(y,x) λ_y/λ_x − p(x,y)] = 0, then we are done. We start with 1.15(b); suppose that if x ≁ y then λ_x = λ_y.
In line five we use that ∼ is a symmetric relation. In line seven we use the second item in the assumptions, and in line eight we switch back the way we switched forward in lines two to five. In the last line we use lemma 3.2.
For the proof of item (a) note that the method above does not work in this case, as we cannot use reversibility or the relation ∼. However, b(n,k) reduces to b(n,0). We check again that The last equality is due to the primary assumption on λ: Σ_x λ_x p(x,y) = λ_y Σ_x p(y,x).
We now prove that an invariant measure in the set P^⊗(b) must be of one of the three given types. Pick a point z ∈ S and a finite set B(z) containing z such that if x ∉ B(z), then p(z,x) = p(x,z) = 0, i.e. B(z) contains all points that can be reached by p from z. Let F(z) = {η : η_x = 0 if x ∈ B(z) \ {z}}, and furthermore let F*(z) = {η : η_x = 0 if x ∈ B(z)}. Note that 1_{F(z)} and 1_{F*(z)} are bounded local functions, hence in D. Now fix some generator L = L_{b,p} for which we know p, but do not know the specific form of b.
Suppose that we have a product measure µ_λ ∈ P^⊗(b) for a nonzero λ, and suppose that µ_λ is invariant for the process generated by L_{b,p}. This yields ∫ L1_{F(z)} dµ_λ = 0 and ∫ L1_{F*(z)} dµ_λ = 0 by equation (1.3). We now look at these integrals. First note that by the definition of our indicator function we only have to look at pairs where x or y is in B(z). The second equality is due to a calculation similar to the one in (3.1).
When using the same methods on the function 1_{F*(z)} µ_λ(η_z = 0)^{-1} we obtain We know that µ_λ is a product measure, and in the last line the only term involving the integral over η_z is the function 1_{F*(z)}, but clearly 1_{F*(z)} = 1_{F(z)} 1_{{η_z = 0}}. Hence we can first integrate over η_z, such that the normalising term disappears, and then add the integral over η_z back, because it integrates to 1:
For the subdivision into items (a), (b) and (c) we adapt the above argument by looking at two sites. Pick two distinct sites z and w such that p(z,w) or p(w,z) is nonzero. Fix a finite set B(z,w) ⊂ S containing z and w such that if y ∉ B(z,w), then p(z,y) = p(y,z) = p(w,y) = p(y,w) = 0. Let F(z,w) = {η : η_x = 0 if x ∈ B(z,w) \ {z,w}, η_z = n, η_w = k} and let F*(z,w) = {η : η_x = 0 if x ∈ B(z,w)}. We do a similar calculation.
We clarify the last expression. We have split the sum over x and y into a number of parts: in the first two lines x = z, in the second two lines x = w, and in the last line we sum over x ∉ B(z,w). The sum over x ∈ B(z,w) \ {z,w} does not play a role, because on the set F(z,w) we integrate over b(0, ·) = 0. The term in the last line is 0 because it is equal to just like in the argument where we singled out only one point in S. We obtain that 0 = Σ_{y≠w} b(n,0)[p(y,z) λ_y/λ_z − p(z,y)] Now we add the terms missing from the first two sums and subtract them from the last two sums: Note that the first two lines are zero because λ satisfies Σ_x λ_x p(x,y) = λ_y Σ_x p(y,x). Therefore: From equation (3.5) we can now derive necessary conditions for λ to yield an invariant measure for a given b.
Suppose that b does not depend on the second variable, b(n,k) = b(n,0); then we see that this yields 0 in equation (3.5). This is option (a) in the proposition. Now suppose that we have a k so that b(n,k) ≠ b(n,0). We can rewrite equation (3.5) to the following equality: This gives, by the assumption that p(z,w) + p(w,z) > 0, either Now we can simply take λ to be reversible, which is exactly equation (3.7). This is option (c) in the proposition. Suppose that we have two sites z and w so that λ is not reversible: p(w,z) λ_z/λ_w − p(z,w) ≠ 0; then equation (3.6) must hold. Now suppose that λ_z/λ_w = c ≠ 1; then b(n,k) − b(n,0) = c(b(k,n) − b(k,0)). Note that because b(n,k) ≠ b(n,0), clearly b(k,n) ≠ b(k,0), so we could have made the same argument with n and k the other way around to obtain that b(n,k) − b(n,0) = c^{-1}(b(k,n) − b(k,0)), a contradiction. Hence the assumption that p(w,z) λ_w/λ_z − p(z,w) ≠ 0 leads to the fact that λ_z = λ_w. And this in turn leads by equation (3.6) to b(n,k) − b(n,0) = b(k,n) − b(k,0). This is option (b) in the proposition.

Proof of theorem 1.16 and a counterexample
We prove the following two theorems which imply theorem 1.16 by using corollary 1.11. Let µ λ ∈ I(L b,p ) ∩ P ⊗ (b), which we will denote by P in this section.
then we have the following equivalence.

3. Σ_i λ_i = ∞.

We leave the proof of the equivalence of (2) and (3) to the reader, as this is straightforward from the definition in (1.4). That (1) implies (2) is a consequence of Borel-Cantelli. Note that we do not need to assume (∗∗) for these arguments.
The proof of (3) to (1) in both theorems uses a coupling argument. From lemma 1.8 we know that A = H, so we look at H instead. As S is a countable set, we assume for the moment that it is equal to N to simplify the notation. Now define the partial sums Z_n = Σ_{i=0}^n η_i; then H is the tail σ-algebra of the Z_n.
Define transition matrices p_n for every n ∈ N by p_n(x,y) = µ_{λ_n}(y − x). We see that the chain (Z_n)_{n≥0} is a time inhomogeneous Markov chain with transition matrices (p_n)_{n≥0}. Let p((t_0, k_0), (t, k)) denote the probability that the chain started at time t_0 in k_0 is in k at time t, in other words p((t_0, k_0), (t, k)) = (∏_{i=t_0+1}^{t} p_i)(k_0, k).
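The composition of the transition matrices p_n is simply the convolution of the step distributions µ_{λ_n}. The following sketch is our own illustration, not from the paper: it uses Poisson marginals as a stand-in for µ_λ, a truncation cutoff, and the hypothetical helper `compose`; for Poisson steps the two-step increment is again Poisson with the summed parameters.

```python
import math

# Transition matrices p_n(x, y) = mu_{lambda_n}(y - x) for the partial-sum
# chain Z_n, with Poisson one-site marginals for illustration.

def poisson(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

def compose(lams, k, cutoff=80):
    # probability that the chain started at 0 sits at k after len(lams) steps
    dist = {0: 1.0}
    for lam in lams:
        new = {}
        for pos, pr in dist.items():
            for step in range(cutoff):
                new[pos + step] = new.get(pos + step, 0.0) + pr * poisson(lam, step)
        dist = new
    return dist.get(k, 0.0)

lam1, lam2 = 0.8, 1.3
for k in range(10):
    # convolution of Poisson(lam1) and Poisson(lam2) is Poisson(lam1 + lam2)
    assert abs(compose([lam1, lam2], k) - poisson(lam1 + lam2, k)) < 1e-10
```

This is the inhomogeneous analogue of the homogeneous n-step transition matrix: the product over i from t_0 + 1 to t replaces the n-th matrix power.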
The next theorem is theorem 4 of Iosifescu [8], but can also be derived from theorem 4.1 in Thorisson [19] or from theorem 20.10 in Kallenberg [11]. By the coupling event inequality, which can be found as equation (2.10) in Lindvall [14], we know that By letting n go to infinity we obtain the result.
We need to construct a successful coupling of two chains starting at time n_0 at positions s_0 and s. We do this by the so-called Mineka coupling, which is described in Lindvall [14]. Let α^i_k = ½ (µ_{λ_i}(k) ∧ µ_{λ_i}(k+1)). Let R_i and R̂_i be the step sizes of the coupled random walks; their probabilities are given by: Now let Y_n = s_0 + Σ_{i=n_0+1}^n R_i and Ŷ_n = s + Σ_{i=n_0+1}^n R̂_i for n ≥ n_0, as long as Y_n ≠ Ŷ_n. From the first moment that Y_n = Ŷ_n we let Y and Ŷ make the same steps, of which the distribution at time i is given by the measure µ_{λ_i}. It is easy to check that the distributions of Y and Ŷ are correct. If we look at the difference of these two chains, V_n = Y_n − Ŷ_n, we see that V_n starts in Y_{n_0} − Ŷ_{n_0} = s_0 − s and that (V_n)_n makes steps of size one and of size zero only. Furthermore the expectation of the steps is zero. As long as V_n has not hit zero, it performs a random walk given by the transition probabilities determined by R and R̂. It is a well known fact that if such a random walk makes an infinite number of non-zero steps, then it is recurrent, see e.g. [18]. So if we can prove that the chain induced by R and R̂ makes an infinite number of non-zero steps, then V will hit zero eventually and thus the coupling is successful.
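The Mineka step distributions can be written down explicitly as a joint law of (R_i, R̂_i). The following sketch is our own illustration with an arbitrary step law µ (not from the paper): it checks that the construction with α_k = ½(µ(k) ∧ µ(k+1)) has the two defining properties, namely that both marginals equal µ and that the difference of the steps only takes values −1, 0, 1.

```python
# Mineka coupling table: with alpha_k = (mu(k) ∧ mu(k+1))/2 the coupled steps
# (R, R_hat) get mass
#   mu(k) - alpha_k - alpha_{k-1}   on (k, k),
#   alpha_k                         on (k, k+1) and on (k+1, k).

mu = {0: 0.25, 1: 0.35, 2: 0.25, 3: 0.15}     # illustrative step law

alpha = {k: 0.5 * min(mu.get(k, 0.0), mu.get(k + 1, 0.0))
         for k in range(-1, 4)}

joint = {}
for k in mu:
    joint[(k, k)] = mu[k] - alpha[k] - alpha[k - 1]   # nonnegative: alpha_k, alpha_{k-1} <= mu(k)/2
    if alpha[k] > 0:
        joint[(k, k + 1)] = joint.get((k, k + 1), 0.0) + alpha[k]
        joint[(k + 1, k)] = joint.get((k + 1, k), 0.0) + alpha[k]

# both marginals reproduce mu ...
for k in mu:
    r_marg = sum(p for (a, b), p in joint.items() if a == k)
    s_marg = sum(p for (a, b), p in joint.items() if b == k)
    assert abs(r_marg - mu[k]) < 1e-12 and abs(s_marg - mu[k]) < 1e-12
# ... and the difference of the walks moves by at most one
assert all(abs(a - b) <= 1 for (a, b) in joint)
```

The difference walk V_n steps by ±1 with probability Σ_k α_k each and by 0 otherwise, which is the lazy symmetric walk used in the recurrence argument above.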
Proof. The random walk given by R and R̂ makes an infinite number of non-zero steps if Σ_i Σ_k 2α^i_k = ∞, by Borel-Cantelli. But this is exactly the claim. We give a small summary in the form of the next corollary.
The result can be generalised; for this we need Bezout's identity. The proof of this lemma is elementary, see e.g. [9,20]. Lemma 4.7 (Bezout's identity). For a finite set D = {d_1, . . . , d_p} ⊂ Z there are x_1, . . . , x_p ∈ Z so that Σ_{i=1}^p x_i d_i = gcd(D). Furthermore gcd(D) is the smallest positive integer for which this is possible.
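The Bezout coefficients in lemma 4.7 can be computed constructively by the extended Euclidean algorithm, folded over the set D; a standard sketch (our own, not from the paper):

```python
from math import gcd
from functools import reduce

def ext_gcd(a, b):
    # returns (g, u, v) with u*a + v*b = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def bezout(D):
    # fold ext_gcd over D, rescaling earlier coefficients at each step,
    # so that sum_i coeffs[i] * D[i] = gcd(D)
    coeffs = [1]
    g = D[0]
    for d in D[1:]:
        g, u, v = ext_gcd(g, d)
        coeffs = [u * c for c in coeffs] + [v]
    return g, coeffs

D = [6, 10, 15]
g, xs = bezout(D)
assert g == reduce(gcd, D) == 1
assert sum(x * d for x, d in zip(xs, D)) == 1
```

For the coupling below only the existence of such coefficients matters; the algorithm merely confirms that gcd(D) = 1 can always be written as an integer combination.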
We use it to prove the following theorem. This result is analogous to theorem 1.8 in Aldous and Pitman [1] and is proven by similar methods.
Proof. We need to couple two walks Y and Ŷ that start at time n_0 in s_0 and s. Let again V = Y − Ŷ and denote by V_{n_0} = s_0 − s the difference. By Bezout's identity, lemma 4.7, and the fact that gcd(D) = 1, we can write 1 = Σ_{i=1}^p x_i d_i. The idea is that we use p different versions of the Mineka coupling, one for each integer in D. First we use a coupling so that the difference of the walks makes steps of sizes 0, −d_1, d_1. At a suitable time we switch to a coupling so that the difference makes steps of sizes 0, −d_2, d_2, and so on. Define for d ∈ D We start the coupling just as in the case where 1 ∈ D.
T_1 is almost surely finite because d_1 ∈ D and by the argument preceding lemma 4.5. Now let Y_n = s_0 + Σ_{i=n_0+1}^n R_i and Ŷ_n = s + Σ_{i=n_0+1}^n R̂_i until T_1. From time T_1 onwards we let the walks evolve with steps of size d_2, so: Conditional on its position at time T_1 we define Because T_1 is almost surely finite and d_2 ∈ D, it holds that T_2 is also almost surely finite.
We repeat this step for d_3 up to d_p and obtain that T_p = inf{n ≥ T_{p−1} : V_n = 0} is almost surely finite.
This machinery is enough to prove implication (3) to (1) in theorems 4.1 and 4.2. We start with theorem 4.1.
then there are infinitely many i such that 1 ≤ λ_i < a_{N−1}/a_N. For a given k the set of probabilities The other case is that the sum over the elements i with λ_i > B is infinite: For λ_i > B a small calculation shows that P[η_i = N] is bounded from below by This proves theorem 4.1 (3) to (1). Now we prove the second theorem.

Proof of (3) to (1) of theorem 4.2.
For the proof of (1) given (3) we need (∗∗), so either λ* < ∞, or λ* = ∞ and there exists a finite set D = {d_1, . . . , d_p} such that gcd(D) = 1 and We take two approaches to show that this implies that A is trivial. The first approach resembles the proof of (3) to (1) of theorem 4.1 above. Under the assumption that λ* < ∞ this method will show that Σ_i λ_i = ∞ implies that A is trivial under P = µ_λ. The second and more difficult approach will be used for the case that λ* is infinite.
We start the first approach by analysing the minimum of the two probabilities $P[\eta_i = k] \wedge P[\eta_i = k+1]$. Recall the definition of $I$ from assumption 1.13. We know from assumption (2) that $\sum_i P[\eta_i > 0] = \infty$. Suppose first that $\sum_{i:\lambda_i < I} P[\eta_i > 0] = \infty$; then we are done. On the other hand it could be that $\sum_{i:\lambda_i \ge I} P[\eta_i > 0] = \infty$. In that case there are infinitely many indices with $\lambda_i \ge I$, and a lower bound on $P[\eta_i = k] \wedge P[\eta_i = k+1]$ for $\lambda_i \ge I$ yields the desired divergence, provided $\sum_{i:\lambda_i \ge I} \lambda_i^{-1} = \infty$. Note that if $\lambda^* = \infty$ this condition is tricky to check, but if $\lambda^* < \infty$ it is satisfied automatically; this is the first assumption of $(**)$. The argument also proves the following: if $\sum_{i:\lambda_i \ge I} \lambda_i^{-1} = \infty$ then A is trivial. We state this as a corollary after the proof of theorem 4.2.
We now start with the second approach, for the case where $\lambda^* = \infty$, $\liminf_i \lambda_i = \infty$ and $\sum_{i:\lambda_i \ge I} \lambda_i^{-1} < \infty$. We will use the second part of $(**)$ for this case.
Start by assuming the special case that $D = \{1\}$, i.e. $\sup_k \frac{a_k^2}{a_{k-1} a_{k+1}} < \infty$; the proof in full generality is only slightly more difficult and uses similar arguments.
Assume $\lambda^* = \infty$. Recall the definition of $a_k$, note that $a_0 = a_1 = 1$, and define the following two sets, $M^*(x)$ and $M_*(x)$.
The intuition behind these sets is the following. Fix $x$. If $k \in M_*(x)$, then in the sum $\sum_k \mu_x(k) \wedge \mu_x(k+1)$ the probability $\mu_x(k)$ is not present; on the other hand, if $k \in M^*(x)$ the probability occurs twice. We work as before: we have assumed that $\sum_i \lambda_i = \infty$. Suppose that $\sum_{i:\lambda_i<1} \lambda_i = \infty$; then we obtain that A is trivial under $\mu_\lambda$. This leaves the case that $\sum_{i:\lambda_i\ge1} \lambda_i = \infty$, which tells us nothing beyond the fact that there are infinitely many sites $i$ with $\lambda_i \ge 1$. We use this to prove divergence of $\sum_i \sum_k \mu_{\lambda_i}(k) \wedge \mu_{\lambda_i}(k+1)$. From this point on, whenever we sum over $i$ we implicitly restrict to $\lambda_i \ge 1$ to ease the notation. The sets $M^*$ and $M_*$ allow us to rewrite this sum.
So if we can bound $\sum_{k\in M_*(\lambda_i)} \mu_{\lambda_i}(k)$ away from 1, uniformly in $i$, then this sum will be infinite.
For $k \in M^*(x)$ we derive a bound, which we then use to control the following conditional probability.
Note that bounding $\sum_{k\in M_*(\lambda_i)} \mu_{\lambda_i}(k)$ away from 1, uniformly in $i$, is possible by bounding the fractions of the type found in equation (4.7) away from 1, and this is equivalent to bounding $C(k)$ away from $\infty$. Hence we obtain this as a sufficient condition. If we fix some $x$ and some $k \notin M^*(x)$, then this fraction is bounded by 1, so the bound holds uniformly. If we plug in theorem 4.8 instead of corollary 4.6, then we can improve on the last calculation to obtain (3) to (1) of theorem 4.2.
In the proof of theorem 4.2 we saw that the condition in equation (4.4) is also enough to obtain triviality of A.

A Product measure following a deterministic strictly increasing profile
In the case that $W = \mathbb{N}$ one might think that $\sum_i \lambda_i = \infty$ is enough to prove that A is trivial. This is not the case, as the next example shows. We construct a particle system and a non-ergodic stationary product measure that concentrates on configurations with infinitely many particles. Construct a conservative product-type particle system on $\mathbb{N}^{\mathbb{N}}$ with the following properties. First we define the nearest-neighbour transition kernel $p$, which we require to satisfy $p_{i,i+1} + p_{i,i-1} = 1$; let $p_{0,1} = 1$ and let $p_{1,2} \in (0,1)$. First let $\lambda_i = (2i^2+1)!$, then define the other $p_{i,j}$ by $p_{i+1,i+2} = 1 - \frac{\lambda_i}{\lambda_{i+1}}\, p_{i,i+1}$. By induction, using the fact that $\lambda_{i+1} > \lambda_i$, we see that indeed $p_{i,i+1} \in (0,1)$ for all $i$, so $p$ is irreducible. Furthermore from equation (5.1) we see that $p$ is reversible with respect to $\lambda$, so this $\lambda$ is a candidate to generate an invariant measure for conservative particle systems. Now define the function $b$ by $b(n,k) = \frac{1}{(2(k+1))!}$ and note that this function is bounded, so that there is a corresponding Markov process. Furthermore assumptions 1.12 and 1.13 are satisfied.
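The construction of $p$ can be sketched numerically. The recursion $p_{i+1,i+2} = 1 - (\lambda_i/\lambda_{i+1})\, p_{i,i+1}$ is our reading of equation (5.1): together with $p_{i+1,i} = 1 - p_{i+1,i+2}$ it is equivalent to detailed balance $\lambda_i p_{i,i+1} = \lambda_{i+1} p_{i+1,i}$. Exact rational arithmetic makes the check clean:

```python
from fractions import Fraction
from math import factorial

def lam(i):
    """Site weights lambda_i = (2 i^2 + 1)! from the example."""
    return factorial(2 * i * i + 1)

def hop_probs(p12, n):
    """p[i] = p_{i,i+1} for i = 1..n, via p_{i+1,i+2} = 1 - (lambda_i/lambda_{i+1}) p_{i,i+1}."""
    p = {1: Fraction(p12)}
    for i in range(1, n):
        p[i + 1] = 1 - Fraction(lam(i), lam(i + 1)) * p[i]
    return p

p = hop_probs(Fraction(1, 2), 6)
```

Since $\lambda_i/\lambda_{i+1}$ shrinks super-exponentially, the $p_{i,i+1}$ stay in $(0,1)$, matching the induction in the text.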
Define the invariant product measure $P = \mu_\lambda$ on $\mathbb{N}^{\mathbb{N}}$ associated to $\lambda$ by giving the coefficients $a_k$.
The one-site marginal is, as before, $P[\eta_i = k] = a_k \lambda_i^k / Z_{\lambda_i}$. A small calculation shows that the sequence $a_{k+1}/a_k$ is decreasing. A moment's thought shows that in this case corollary 4.6 is in fact the strongest case of theorem 4.8. Hence, by equation (4.6), Borel-Cantelli and corollary 4.6, we see that $P[\eta_k = k^2 \text{ infinitely often}] = 1$ implies that A is trivial under $\mu_\lambda$. We now show that we can reverse the implication as well. Suppose that $P[\eta_k = k^2 \text{ infinitely often}] = 0$. Define $A_n = \{\eta : \sum_k (\eta_k - k^2) = n\}$ and note that $A_n \in$ A; furthermore a calculation shows that these sets have positive $\mu_\lambda$ measure, so indeed A is not trivial. So in this case we obtain a strengthening of corollary 4.6: A is trivial under $\mu_\lambda$ $\Leftrightarrow$ $P[\eta_k = k^2 \text{ infinitely often}] = 1$. (5.2) We check which of the two possibilities is the case here.
So we see that A is not trivial, which in turn implies that $\mu_\lambda$ is not extremal and can be disintegrated into measures $\mu^{(n)}$ supported on the sets $A_n = \{\eta : \sum_k (\eta_k - k^2) = n\}$, where $\mu_\lambda(A_n) > 0$ for all $n$.

Anti-particle perspective
In this section we elaborate on the symmetric nature of theorem 1.16 (a). We work with a system with generator $L_{b,p} f(\eta) = \sum_{x,y} p(x,y)\, b(\eta_x, \eta_y)\, \nabla_{x,y} f(\eta)$ in the case that $W = \{0, \ldots, N\}$.
In the process particles jump from $x$ to $y$ with rate $p(x,y) b(\eta_x, \eta_y)$. Instead of saying that a particle jumps from $x$ to $y$ one could say that an anti-particle jumps from $y$ to $x$. The next step is to forget about the motion of the particles and only look at the motion of the anti-particles. It is clear that this motion contains exactly the same information as the motion of the particles. This way of looking at the system has some consequences. Define the transformation $\theta$ by $(\theta\eta)_x = N - \eta_x$ and the adapted versions $\hat p(x,y) = p(y,x)$ and $\hat b(n,k) = b(N-k, N-n)$. The motion of the anti-particles can be described by these adapted versions.
Clearly if there are $n$ particles at some site, then there are $N - n$ anti-particles; this explains the definition of $\theta$. A transition of a particle from $x$ to $y$ with rate $p(x,y) b(\eta_x, \eta_y)$ corresponds to a transition of an anti-particle from $y$ to $x$ with the same rate. We would like to rewrite this rate in a familiar form. First of all, the anti-particle moves from $y$ to $x$, so we turn $p$ around: $\hat p$. Also we must write $\eta_x$ and $\eta_y$ in anti-particle form, in such a way that the first coordinate describes the number of anti-particles at the site of departure. This leads to the rate of moving an anti-particle from $y$ to $x$: $p(x,y) b(\eta_x, \eta_y) = \hat p(y,x)\, \hat b(N - \eta_y, N - \eta_x) = \hat p(y,x)\, \hat b((\theta\eta)_y, (\theta\eta)_x)$. Thus this model can be described by the generator $L_{\hat b, \hat p}\, g(\theta\eta) = \sum_{x,y} \hat p(x,y)\, \hat b((\theta\eta)_x, (\theta\eta)_y)\, \nabla_{x,y}\, g(\theta\eta)$, (6.1) which is exactly the form we are used to. Note that the assumptions that hold for the model associated with $L_{b,p}$ also hold for $L_{\hat b,\hat p}$. Hence $L_{\hat b,\hat p}$ generates a semigroup $\hat S_t$. Without too much work we see that equation (6.1) leads to $L_{b,p}(f \circ \theta)(\eta) = L_{\hat b,\hat p} f(\theta\eta)$.
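The rewriting of the rate can be checked mechanically. A minimal sketch, assuming our reading of the adapted quantities $(\theta\eta)_x = N - \eta_x$, $\hat p(x,y) = p(y,x)$, $\hat b(n,k) = b(N-k, N-n)$, and using a toy kernel and rate function of our own choosing:

```python
N = 3  # maximal occupation per site; illustrative choice

def p(x, y):
    """Toy nearest-neighbour kernel on a ring of 4 sites (illustrative)."""
    return 0.5 if abs(x - y) % 4 in (1, 3) else 0.0

def b(n, k):
    """Toy rate: vanishes when the departure site is empty (n == 0)
    or the target site is full (k == N), as the model requires."""
    return n * (N - k)

def p_hat(x, y):
    return p(y, x)

def b_hat(n, k):
    return b(N - k, N - n)

def theta(n):
    return N - n

def rates_match(eta_x, eta_y, x=0, y=1):
    """Particle rate x -> y equals anti-particle rate y -> x."""
    lhs = p(x, y) * b(eta_x, eta_y)
    rhs = p_hat(y, x) * b_hat(theta(eta_y), theta(eta_x))
    return lhs == rhs
```

The identity holds for every pair of occupation numbers, because $\hat b((\theta\eta)_y, (\theta\eta)_x)$ unwinds to $b(\eta_x, \eta_y)$ by construction.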
We restate the results in theorem 6.1.
Theorem 6.1. For $f \in D$ it holds that $L_{b,p}(f \circ \theta)(\eta) = L_{\hat b,\hat p} f(\theta\eta)$. For $f \in C(E)$ it holds that $S_t(f \circ \theta)(\eta) = \hat S_t f(\theta\eta)$. As a consequence we obtain that if $\{\eta(t) : t \ge 0\}$ is the Markov process generated by $L_{b,p}$, then $\{(\theta\eta)(t) : t \ge 0\}$ is the process with generator $L_{\hat b,\hat p}$.

Anti-particle perspective on product measures
We look at how to interpret proposition 1.15 from this perspective. We start with the case that $b$ is independent of the second variable, the case corresponding to proposition 1.15 (a), where $b(n,k) = b(n,0)$ for all $n$ and $k$. Such $b$ are not suitable for the case that $W$ is a finite set, because there we need $b(\cdot, N) = 0$.
For the cases (b) and (c) in proposition 1.15 the λ that generate invariant measures satisfy one of the following two conditions.
• For all $x$ and $y$ with $x \sim y$ it holds that $\lambda_x p(x,y) = \lambda_y p(y,x)$.
In this case write $\pi_x = c\,\lambda_x^{-1}$ for some $c \ge 0$. Suppose that $\lambda$ solves $\sum_x \lambda_x p(x,y) = \lambda_y \sum_x p(y,x)$.
Then it is an easy calculation that $\pi$ solves $\sum_x \pi_x \hat p(x,y) = \pi_y \sum_x \hat p(y,x)$.
Furthermore, if $\lambda$ is reversible with respect to $p$ then $\pi$ is reversible with respect to $\hat p$, and the property "if $x$ and $y$ are such that $\lambda_x p(x,y) = \lambda_y p(y,x)$, then $\lambda_x = \lambda_y$" turns into "if $x$ and $y$ are such that $\pi_x \hat p(x,y) = \pi_y \hat p(y,x)$, then $\pi_x = \pi_y$".
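The transfer of reversibility from $(\lambda, p)$ to $(\pi, \hat p)$ can be verified numerically on a toy chain; the weights and kernel below are illustrative, not from the paper, and we take $\pi_x = \lambda_x^{-1}$ (i.e. $c = 1$) and $\hat p(x,y) = p(y,x)$:

```python
sites = [0, 1, 2]

# toy reversible pair (lambda, p): a birth-death chain on three sites
lam = {0: 1.0, 1: 2.0, 2: 4.0}
p = {0: {0: 0.5, 1: 0.5, 2: 0.0},
     1: {0: 0.25, 1: 0.25, 2: 0.5},
     2: {0: 0.0, 1: 0.25, 2: 0.75}}

def is_reversible(w, q):
    """Detailed balance w_x q(x,y) == w_y q(y,x) for all pairs of sites."""
    return all(abs(w[x] * q[x][y] - w[y] * q[y][x]) < 1e-12
               for x in sites for y in sites)

pi = {x: 1.0 / lam[x] for x in sites}                    # pi_x = c / lambda_x, c = 1
p_hat = {x: {y: p[y][x] for y in sites} for x in sites}  # reversed kernel
```

The check works because $\pi_x \hat p(x,y) = \pi_y \hat p(y,x)$ rearranges, after multiplying through by $\lambda_x \lambda_y / c$, exactly into $\lambda_y p(y,x) = \lambda_x p(x,y)$.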
We look at how the second item in proposition 1.15 should be interpreted for the anti-particle model. One would think that the anti-particle perspective gives invariant measures for functions $b$ such that $b(n,k) - b(k,n) = b(N,k) - b(N,n)$, but notice that a small calculation using assumption 1.12 yields that $b(N,k) - b(N,n) = b(n,0) - b(k,0)$. The associated product measures must change too. Define the values $\tilde a_n$ analogously to the values $a_n$: $\tilde a_n = \prod_{i=0}^{n-1} \frac{\hat b(1,i)}{\hat b(i+1,0)}$, set $Z_\pi = \sum_k \pi^k \tilde a_k$, and define the product measure $\nu_\pi$ by its marginals $\nu_\pi(n) = \tilde a_n \pi_x^n Z_{\pi_x}^{-1}$. One would hope that if an invariant product measure for $L_{b,p}$ is given by $\mu_\lambda$, then the invariant measure for $L_{\hat b,\hat p}$ is given by $\nu_\pi$ for some suitable $\pi$. This is indeed the case. Proof. We only need to prove the statement for a single marginal, so we assume that $\lambda$ and $\pi$ have a single index and prove that $\mu_\lambda(N-n) = \nu_\pi(n)$. Recall assumption 1.12, which we use to obtain the third line in the following computation.
It follows that $\tilde a_n \pi^n$ is proportional to $a_{N-n} \lambda^{N-n}$, with a proportionality constant independent of $n$. So if we use this information to calculate the probabilities we obtain $\nu_\pi(n) = \frac{\tilde a_n \pi^n}{Z_\pi} = \frac{a_{N-n} \lambda^{N-n}}{Z_\lambda} = \mu_\lambda(N-n)$, which is what we wanted to prove.
The sum over the $\lambda_i$ with $\lambda_i < 1$ turns into the sum over the $\pi_i^{-1}$ with $\pi_i \ge 1$; the sum over the $\lambda_i^{-1}$ with $\lambda_i \ge 1$ turns into the sum over the $\pi_i$ with $\pi_i < 1$.