Decay of correlations for non-Hölderian dynamics. A coupling approach

We present an upper bound on the mixing rate of the equilibrium state of a dynamical system defined by the one-sided shift and a non-Hölder potential of summable variations. The bound follows from an estimate of the relaxation speed of chains with complete connections with summable decay, which is obtained via an explicit coupling between pairs of chains with different histories.


Introduction
Let µ_φ be the equilibrium state associated to the continuous function φ. In this paper we obtain upper bounds for the speed of convergence in the limit
∫ (f ∘ T^n) g dµ_φ −→ ∫ f dµ_φ ∫ g dµ_φ (1.1)
for φ with summable variations and T the one-sided shift. We show that this speed is (at least) summable, polynomial or exponential according to the decay rate of the variations of φ. The bounds apply for f ∈ L^1(µ_φ) and g with variations decreasing proportionally to those of φ.
Previous approaches to the study of the mixing properties of the one-sided shift rely on the use of the transfer operator L_φ, defined by the duality
∫ (f ∘ T) g dµ_φ = ∫ f L_φ g dµ_φ. (1.2)
If φ is Hölder, this operator, acting on the subspace of Hölder observables, has a spectral gap and the limit (1.1) is attained at exponential speed (Bowen, 1975). When φ is not Hölder, the spectral gap of the transfer operator may vanish and the spectral study becomes rather complicated. To estimate the mixing rate, Kondah, Maume and Schmitt (1996) first proved that the operator is contracting in the Birkhoff projective metric, while Pollicott (1997), following Liverani (1995), considered the transfer operator composed with conditional expectations. In contrast, our approach is based on a probabilistic interpretation of the duality (1.2) in terms of expectations, conditioned with respect to the past, of a chain with complete connections. The convergence (1.1) is therefore related to the relaxation properties of this chain. In this paper, such relaxation is studied via a coupling method.
Coupling ideas were first introduced by Doeblin in his 1938 work on the convergence to equilibrium of Markov chains. He let two independent trajectories evolve simultaneously, one starting from the stationary measure and the other from an arbitrary distribution. The convergence follows from the fact that both realizations meet at a finite time. Instead of letting the trajectories evolve independently, one can couple them from the beginning, reducing the "meeting time" and, hence, obtaining a better rate of convergence (leading to the so-called Dobrushin ergodic coefficient). Doeblin published his results in a little-known paper in the Revue Mathématique de l'Union Interbalkanique. (For a description of Doeblin's contributions to probability theory we refer the reader to Lindvall, 1991.) His ideas were taken up and exploited only much later, in papers by Athreya, Ney, Harris, Spitzer and Toom, among others. The sharpness of the convergence rates provided by different types of Markovian couplings has recently been discussed by Burdzy and Kendall (1998).
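Doeblin's scheme is straightforward to simulate. The sketch below runs two independent copies of a toy three-state Markov chain until they meet; the transition matrix and all names are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical 3-state Markov chain; the rows below are transition
# probabilities from each state (an illustrative choice).
P = {0: [0.5, 0.3, 0.2], 1: [0.2, 0.6, 0.2], 2: [0.3, 0.3, 0.4]}

def step(state, rng):
    return rng.choices([0, 1, 2], weights=P[state])[0]

def meeting_time(x0, y0, seed=0, max_steps=10_000):
    """Doeblin's scheme: run two independent copies until they meet."""
    rng = random.Random(seed)
    x, y, n = x0, y0, 0
    while x != y and n < max_steps:
        x, y, n = step(x, rng), step(y, rng), n + 1
    return n

times = [meeting_time(0, 2, seed=s) for s in range(1000)]
print(sum(times) / len(times))  # average meeting time over 1000 runs
```

Since the chain is irreducible and aperiodic, the two copies collide at a finite random time, and the total variation distance to equilibrium is bounded by the tail of this meeting time.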
In the context of dynamical systems, the recent papers by Coelho and Collet (1995) and Young (1997) consider the time two independent systems take to become close. This is reminiscent of the original coupling by Doeblin. In Bressaud, Fernández, Galves (1997), the coupling approach was generalized to treat chains with complete connections. These processes, introduced by Doeblin and Fortet (1937) (see also Lalley, 1986), appear in a natural way in the context of dynamical systems. They are characterized by having transition probabilities that depend on the whole past, albeit in a continuous manner. Due to this fact, the coupling cannot ensure that two different trajectories will remain equal after their first meeting time. But the coupling used in the present paper, and in our preceding one, has the property that if the trajectories meet they have a large probability of remaining equal, and this probability increases with the number of consecutive agreements. In the summable case, the coupling is such that with probability one the trajectories disagree only a finite number of times. In fact, our approach can also be applied under an assumption weaker than summability [(4.7) below], leading to trajectories that differ infinitely often but with a probability of disagreement that goes to zero. The method leads, in particular, to a criterion for uniqueness of g-measures proved by Berbee (1987). The mean time between successive disagreements provides a bound on the speed of relaxation of the chain and hence, through our probabilistic interpretation of (1.2), of the mixing rate.
The paper is organized as follows. The main results and definitions relevant to dynamical systems are stated in Section 2. The relation between chains with complete connections and the transfer operator is spelled out in Section 3. In Section 4, we state and prove the central result on relaxation speeds of chains with complete connections. Theorem 1, on mixing rates for normalized functions, is proven in Section 5, while Theorem 2, on rates for the general case, is proven in Section 6. The upper bounds on the decay of correlations depend crucially on estimates of the probability of return to the origin of an auxiliary Markov chain, which are presented in Appendix A.

Definitions and statement of the results
Let A be a finite set, henceforth called the alphabet. Let us denote by A^{−N} the set of sequences of elements of the alphabet indexed by the strictly negative integers. (When no confusion arises we write simply A for this set.) Each such sequence x will be called a history. Given two histories x and y, the notation x m = y indicates that x_j = y_j for all −m ≤ j ≤ −1.
As usual, we endow the set A with the product topology and the σ-algebra generated by the cylinder sets. We denote by C 0 (A, R) the space of real-valued continuous functions on A.
We consider the one-sided shift T on A, defined by (T x)_j = x_{j−1} for j ≤ −1. Given an element a in A and an element x in A, we shall denote by xa the element z in A such that z_{−1} = a and T(z) = x.
Given a function φ on A, φ : A → R, we define its sequence of variations (var_m(φ))_{m∈N} by
var_m(φ) := sup { |φ(x) − φ(y)| : x m = y }.
We shall say that φ has summable variations if Σ_{m≥1} var_m(φ) < +∞, and that it is normalized if it satisfies
Σ_{a∈A} e^{φ(xa)} = 1 for all x ∈ A.
We say that a shift-invariant measure µ on A is compatible with the normalized function φ if and only if
E_µ( 1{x_{−1} = a} | x_{−∞}^{−2} )(x) = e^{φ((Tx)a)} (2.5)
for µ-almost-all x in A, where the left-hand side is the usual conditional expectation of the indicator function of the event {x_{−1} = a} with respect to the σ-algebra of the past up to time −2.
An equivalent way of expressing this is by saying that µ_φ is a g-measure for g = e^φ. If φ has summable variations, and even under a slightly weaker condition, such a measure is unique and will be denoted µ_φ. The measure µ_φ can also be characterized via a variational principle, in which context it is called the equilibrium state for φ. For details see Ledrappier (1974), Walters (1975), Quas (1996) and Berbee (1987).
For a non-constant φ, we consider the seminorm ∥g∥_φ := sup_{m≥1} var_m(g)/var_m(φ) and the subspace of C^0(A) defined by V_φ := { g ∈ C^0(A) : ∥g∥_φ < +∞ }. Given a real-valued sequence (γ_n)_{n∈N}, let (S^{(γ)}_n)_{n≥0} be the Markov chain on N starting from 0 with transition probabilities
p_{i,i+1} = 1 − γ_i , p_{i,0} = γ_i (2.9)
for all i ∈ N. For any n ≥ 1 we define
γ*_n := P(S^{(γ)}_n = 0). (2.10)
We now state our first result.
Theorem 1 Let φ : A → R be a normalized function with summable variations and set γ_m := e^{Σ_{k≥m+1} var_k(φ)} − 1. Then, for all f ∈ L^1(µ_φ) and g ∈ V_φ,
|∫ (f ∘ T^n) g dµ_φ − ∫ f dµ_φ ∫ g dµ_φ| ≤ C ∥f∥_{L^1(µ_φ)} ∥g∥_φ γ*_n (2.12)
for a computable constant C.
This theorem is proven in Section 5, using the results obtained in Section 4 on the relaxation speed of chains with complete connections.
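The quantity γ*_n of (2.10) can be computed numerically by propagating the distribution of the chain S^{(γ)} with transitions p_{i,i+1} = 1 − γ_i and p_{i,0} = γ_i. A minimal sketch; the polynomially decaying sequence γ is an illustrative choice.

```python
def gamma_star(gamma, n):
    """P(S_n = 0) for the chain with S_0 = 0, p(i, i+1) = 1 - gamma[i]
    and p(i, 0) = gamma[i], computed by propagating the distribution."""
    dist = [1.0]                              # distribution of S_0
    for _ in range(n):
        new = [0.0] * (len(dist) + 1)
        for i, mass in enumerate(dist):
            new[0] += mass * gamma[i]         # fall back to the origin
            new[i + 1] += mass * (1 - gamma[i])  # climb one step
        dist = new
    return dist[0]

# Example: polynomially decaying gamma_m (illustrative numbers).
g = [1.0 / (m + 2) ** 2 for m in range(200)]
print([gamma_star(g, n) for n in (1, 5, 20)])
```

Since the state at time n is at most n, the computation is O(n^2) and suffices to read off the decay rate of γ*_n in concrete cases.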
For each non-normalized function φ with summable variations there exists a unique positive function ρ such that the function
ψ := φ + log ρ − log ρ ∘ T (2.14)
is normalized (Walters, 1975). We call ψ the normalization of φ. The construction of compatible measures given in (2.5) loses its meaning for non-normalized φ. It is necessary to resort to an alternative characterization in terms of a variational principle (see e.g. Bowen 1975) leading to equilibrium states. In Walters (1975) it is proven that: (a) φ with summable variations admits a unique equilibrium state, which we also denote µ_φ; (b) the corresponding normalized ψ, given by (2.14), admits a unique compatible measure µ_ψ (even when the variations of ψ may not be summable), and µ_φ = µ_ψ. Our second theorem generalizes Theorem 1 to non-normalized functions.
Theorem 2 Let φ : A → R be a function with summable variations and let ψ be its normalization. Let (n_m)_{m∈N} be an increasing subadditive sequence such that the sequence of the tails, (Σ_{k≥n_m} var_k(φ))_{m≥0}, is summable. Then a bound of the form (2.12), with γ* replaced by γ̄*, holds for all f ∈ L^1(µ_φ) and g ∈ V_φ, for a computable constant C. Here γ̄* is defined as in (2.10) but using the sequence (γ̄_n)_{n∈N}.
Estimating the large-n behavior of the sequence (γ*_n)_{n∈N} from the behavior of the original (γ_n)_{n∈N} requires only elementary computations. For the convenience of the reader we summarize some results in Appendix A.
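For a potential depending on only finitely many coordinates, the normalization of (2.14) can be carried out numerically as a Perron-Frobenius computation. A hedged sketch, with the eigenvalue λ written explicitly; the two-symbol potential W and all names are illustrative assumptions.

```python
import math

# Hypothetical potential depending on the last symbol b of the history and
# the appended symbol a: phi(xa) = W[b][a] (illustrative numbers).
W = [[0.2, -0.1], [-0.3, 0.4]]
M = [[math.exp(W[b][a]) for a in (0, 1)] for b in (0, 1)]

# Power iteration for the leading (Perron-Frobenius) eigenpair of M.
h = [1.0, 1.0]
for _ in range(200):
    h = [M[b][0] * h[0] + M[b][1] * h[1] for b in (0, 1)]
    lam = max(h)
    h = [v / lam for v in h]

def psi(b, a):
    """Normalized potential: psi = phi + log h(a) - log h(b) - log lambda."""
    return W[b][a] + math.log(h[a]) - math.log(h[b]) - math.log(lam)

# Normalization check: sum_a e^{psi(xa)} = 1 for every history (here: b).
print([sum(math.exp(psi(b, a)) for a in (0, 1)) for b in (0, 1)])
```

The eigenfunction h plays the role of ρ; since the cocycle correction telescopes, the sums above equal (Mh)[b] / (λ h[b]) = 1 once the iteration has converged.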

Transfer operators and chains.
Let P be a family of transition probabilities on A × A, i.e. P(a | x) ≥ 0 with Σ_{a∈A} P(a | x) = 1 for all x. (3.1) Given a history x, a chain with past x and transitions P is the process (Z^x_n)_{n∈Z} whose conditional probabilities satisfy
P(Z^x_n = a | Z^x_{n+j} = z_j , j ≤ −1) = P(a | z) (3.2)
for all a ∈ A and all histories z with z_{j−n} = x_j, j ≤ −1, and such that
Z^x_j = x_j for all j ≤ −1. (3.3)
This chain can be interpreted as a conditioned version of the process defined by the transition probabilities (3.1), given a past x (for more details, see Quas 1996).
Let φ : A → R be a continuous normalized function. The transfer operator associated to φ is the operator L_φ acting on C^0(A, R) defined by
(L_φ g)(x) = Σ_{a∈A} e^{φ(xa)} g(xa). (3.4)
This operator is related to the conditional probability (2.5) through (L_φ 1_{[a]})(x) = e^{φ(xa)}, where [a] denotes the cylinder {z : z_{−1} = a}. This relation shows the equivalence of (1.2) and (3.4) as definitions of the operator. In addition, if φ is normalized we can construct, for each history x ∈ A, the chain Z^x_φ = (Z^x_n)_{n∈Z} with past x and transition probabilities
P(a | x) = e^{φ(xa)}. (3.6)
Iterates of the transfer operator, L^n_φ g(x), on functions g ∈ C^0(A) can be interpreted as expectations with respect to the law of this chain. From this expression and the classical duality (1.2) between composition with the shift and the transfer operator L_φ in L^2(µ_φ), we obtain a bound, (3.7), on the decay of correlations. This inequality shows how the speed of decay of correlations can be bounded by the speed with which the chain loses its memory. We deal with the latter problem in the next section.
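For a potential with finite memory, the transfer operator and the memory loss of its iterates can be computed directly. In the sketch below the two-symbol matrix p, and hence φ, are illustrative assumptions; the code only exemplifies definition (3.4).

```python
import math

# A hypothetical normalized potential depending on the last two symbols
# of the history: phi(x) = log p(x[-1] | x[-2]) for an illustrative
# transition matrix p (all numbers are assumptions).
A = (0, 1)
p = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}  # p[(b, a)] = p(a | b)

def phi(x):
    return math.log(p[(x[-2], x[-1])])

def transfer(g, x):
    """(L_phi g)(x) = sum_a e^{phi(xa)} g(xa), where xa appends a to x."""
    return sum(math.exp(phi(x + (a,))) * g(x + (a,)) for a in A)

def iterate_transfer(g, n):
    """The n-fold iterate L^n_phi g (branches over |A|^n appended words)."""
    for _ in range(n):
        g = (lambda h: (lambda x: transfer(h, x)))(g)  # bind current g
    return g

# Normalization: L_phi applied to the constant function 1 gives back 1.
print(transfer(lambda x: 1.0, (0, 1)))
# Memory loss: iterates of L_phi forget the history x, as in (1.1).
g10 = iterate_transfer(lambda x: 1.0 if x[-1] == 0 else 0.0, 10)
print(g10((0, 1)), g10((1, 0)))  # nearly equal for the two different pasts
```

The two printed iterate values agree up to the mixing rate of the underlying chain, illustrating why bounding the memory loss of Z^x bounds the decay of correlations.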

Relaxation speed for chains with complete connections

Definitions and main result
We consider chains whose transition probabilities satisfy
sup { 1 − P(a | x)/P(a | y) : a ∈ A, x m = y } ≤ γ_m (4.1)
for some real-valued sequence (γ_m)_{m∈N} tending to 0 as m tends to +∞. Without loss of generality, this decrease can be assumed to be monotonic. To avoid trivialities we assume γ_0 < 1.
In the literature, a stationary process satisfying (4.1) is called a chain with complete connections.
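As a concrete toy example of such a process, the sketch below simulates a binary chain whose transition probabilities depend on the entire past, with the influence of the symbol m steps back decaying like 2^{−m}, so that a condition of the form (4.1) holds with a summable (in fact exponential) sequence; all numerical choices are illustrative.

```python
import random

def prob_one(past):
    """P(next symbol = 1 | past): depends on the whole past, with the
    symbol m steps back weighted 2^{-m-2} (weights sum to < 1/4)."""
    s = sum(past[-m] * 2.0 ** (-m - 2) for m in range(1, len(past) + 1))
    return 0.375 + s          # always stays strictly inside (0, 1)

def sample_chain(n, past, seed=0):
    """Sample n symbols of the chain given a (finite approximation of a)
    past, feeding each new symbol back into the history."""
    rng = random.Random(seed)
    z = list(past)
    for _ in range(n):
        z.append(1 if rng.random() < prob_one(z) else 0)
    return z[len(past):]

# Two different pasts: when the pasts agree on the last m symbols, the
# transition probabilities differ by at most 2^{-m-2}.
print(sample_chain(10, [0] * 20), sample_chain(10, [1] * 20))
```

Because the influence of remote symbols is summable, the disagreement between the two sampled trajectories can be controlled by exactly the coupling argument developed in this section.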
For a set of transition probabilities satisfying (4.1), we consider, for each x ∈ A, the chain (Z^x_n)_{n∈Z} with past x and transitions P [see (3.2)-(3.3)]. The following proposition plays a central role in the proof of our results.
Proposition 1 For all histories x, y ∈ A, there is a coupling (U^{x,y}_n, V^{x,y}_n)_{n∈Z} of (Z^x_n)_{n∈Z} and (Z^y_n)_{n∈Z} such that the integer-valued process (T^{x,y}_n)_{n∈Z} defined by (4.16) (T^{x,y}_n counts the number of consecutive agreements of the coupling immediately preceding time n) satisfies
P(T^{x,y}_n = 0) ≤ γ*_n
for n ≥ 0, where γ*_n was defined in (2.10). The proof of this proposition is given in Section 4.4.
An immediate consequence of this proposition is the following bound on the relaxation rate of the processes Z x .
Corollary 1 For all histories x and y, for all a ∈ A,
|P(Z^x_n = a) − P(Z^y_n = a)| ≤ γ*_n (4.4)
and, for k ≥ 1, the analogous bound (4.5) holds for cylinders of length k + 1. This corollary is proved in Section 4.5.
Remark 1 Whenever γ*_n → 0, the chains lose the memory of their pasts.
Remark 2 If X = (X_n)_{n∈Z} is a stationary process with transitions P satisfying (4.1), then Corollary 1 implies |P(Z^x_n = a) − P(X_0 = a)| ≤ γ*_n, uniformly in the history x.

Coupling of chains with different pasts
Given a double history (x, y), we consider the transition probabilities on A^2 defined by the maximal coupling of P(· | x) and P(· | y) (4.12). By (4.11) this implies a lower bound on the probability that the two coordinates agree. Now, we fix a double history (x, y) and we define (U^{x,y}_n, V^{x,y}_n)_{n∈Z} to be the chain taking values in A^2, with past (x, y) and transition probabilities given by (4.12). If x m = y, (4.13) yields (4.14). We denote ∆_{m,n} := {U_j = V_j , m ≤ j ≤ n}. Notice that ∆_{−m,−1} is the union, over all the pairs of histories x, y with x m = y, of the events {(U_j, V_j) = (x_j, y_j), j ≤ −1}. Using the stationarity of the conditional probabilities, we obtain the analogous identity for all n ≥ 0.
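A maximal coupling of two distributions on a finite alphabet, the building block behind (4.12), can be sampled as follows. This is a generic sketch with hypothetical function names, not the paper's construction verbatim: draw from the overlap min(p, q) when possible, otherwise from the two normalized remainders.

```python
import random

def pick(dist, rng):
    """Sample from a finite distribution given as {value: probability}."""
    r, acc = rng.random(), 0.0
    for a, w in dist.items():
        acc += w
        if r < acc:
            return a
    return a  # guard against floating-point rounding

def maximal_coupling(p, q, rng):
    """Sample (u, v) with u ~ p, v ~ q and P(u = v) = 1 - d_TV(p, q).
    Assumes p != q (otherwise the remainder branch is never needed)."""
    overlap = {a: min(p[a], q[a]) for a in p}
    w = sum(overlap.values())            # w = 1 - total variation distance
    if rng.random() < w:
        a = pick({b: m / w for b, m in overlap.items()}, rng)
        return a, a                      # diagonal: the two chains agree
    u = pick({a: (p[a] - overlap[a]) / (1 - w) for a in p}, rng)
    v = pick({a: (q[a] - overlap[a]) / (1 - w) for a in q}, rng)
    return u, v

rng = random.Random(1)
p, q = {0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}    # d_TV(p, q) = 0.3
draws = [maximal_coupling(p, q, rng) for _ in range(20000)]
print(sum(u == v for u, v in draws) / len(draws))  # close to 1 - d_TV = 0.7
```

Applied step by step to P(· | x) and P(· | y), this gives the two trajectories the largest possible probability of agreeing at each time.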

Proof of Proposition 1
From this subsection on, we will be working with bounds which are uniform in x, y; hence we will omit, with a few exceptions, the superscript x, y in the processes T^{x,y}_n (defined below), U^{x,y}_n and V^{x,y}_n. Let us consider the integer-valued process (T_n)_{n∈Z} defined by (4.16): for each time n, the random variable T_n counts the number of steps backwards needed to find a difference in the coupling. First, notice that (4.16) implies that, from the value k, the process can only move to k + 1 (one more agreement) or to 0 (a disagreement), all the other transition probabilities being zero. This process (T_n)_{n∈Z} is not a Markov chain.
We now consider the integer-valued Markov chain (S^{(γ)}_n)_{n≥0} starting from state 0 and with transition probabilities given by (2.9), that is p_{i,i+1} = 1 − γ_i and p_{i,0} = γ_i. Proposition 1 follows from the following lemma, setting k = 1.
Lemma 1 For each k ∈ N, the following inequality holds:
P(T_n < k) ≤ P(S^{(γ)}_n < k). (4.20)
Proof We shall proceed by induction on n. Since P(S^{(γ)}_0 = 0) = 1, inequality (4.20) holds for n = 0. Assume now that (4.20) holds for some integer n. There is nothing to prove for k = 0. For k ≥ 1, we have (4.21). By the same computation, we see that the corresponding identity holds for S^{(γ)}. Hence, using the induction hypothesis and the fact that (γ_n)_{n≥0} is decreasing, we conclude that (4.20) holds at n + 1 for all k ≥ 1.

Proof of Corollary 1
To prove (4.4), first notice that by construction the process (U_n)_{n∈Z} has the same law as (Z^x_n)_{n∈Z} and (V_n)_{n∈Z} has the same law as (Z^y_n)_{n∈Z}. Thus,
|P(Z^x_n = a) − P(Z^y_n = a)| = |P(U_n = a) − P(V_n = a)| ≤ P(U_n ≠ V_n). (4.23)
Hence, by definition of the process T_n and Lemma 1, this probability is bounded by γ*_n. The proof of (4.5) starts similarly:
|P((Z^x_n, . . . , Z^x_{n+k}) = (a_0, . . . , a_k)) − P((Z^y_n, . . . , Z^y_{n+k}) = (a_0, . . . , a_k))| ≤ P(U_j ≠ V_j for some n ≤ j ≤ n + k).
To conclude, we notice that the latter probability is bounded by the sum of the disagreement probabilities at the times n, . . . , n + k.

Proof of Theorem 1
The proof of Theorem 1 is based on the inequality (5.1), which follows from (3.7) and the fact that (U^{x,y}_n, V^{x,y}_n)_{n∈Z} is a coupling between the chains with pasts x and y, respectively. An upper bound on the right-hand side is provided by Proposition 1. We see that the transition probabilities (3.6) satisfy condition (4.1), since 1 − P(a | x)/P(a | y) ≤ e^{var_{m+1}(φ)} − 1 whenever x, y ∈ A are such that x m = y for some m ∈ N. We can therefore apply Proposition 1 with
γ_m = e^{Σ_{k≥m+1} var_k(φ)} − 1,
which tends monotonically to zero if Σ_{m≥1} var_m(φ) < +∞.
To prove (2.12) we use the process (T^{x,y}_n)_{n∈Z} to obtain the upper bound (5.4). To conclude, we must prove that the constant C is finite. By direct computation, P(τ = 1) = γ_0 and
P(τ = n) = γ_{n−1} Π_{m=0}^{n−2} (1 − γ_m) for n ≥ 2, (5.11)
while P(τ = +∞) = Π_{m≥0} (1 − γ_m).

Remark 3
The previous computations lead to stronger results for more regular functions g. For example, when g satisfies
var_k(g) ≤ ∥g∥_θ θ^k (5.13)
for some θ < 1 and some ∥g∥_θ < ∞ (the Hölder norm of g), a chain of inequalities almost identical to those ending in (5.4) leads to a correspondingly stronger bound. On the other hand, if g is a function that depends only on the first coordinate, the bound simplifies further.

Proof of Theorem 2
We now consider the general case where the function φ is not necessarily normalized. In this case we resort to the normalization ψ defined in (2.14) and we consider chains with transition probabilities
P(a | x) = e^{φ(xa)} ρ(xa)/ρ(x) =: e^{ψ(xa)}. (6.1)
However, the summability of the variations of φ does not imply the analogous condition for ψ, because there are additional "oscillations" due to the cocycle log ρ − log ρ ∘ T. Instead,
var_m(ψ) ≤ Σ_{k≥m} var_k(φ) for all m ≥ 0 (6.2)
(see Walters 1978). Hence, we can apply Theorem 1 only under the condition
Σ_m Σ_{k≥m} var_k(φ) < +∞. (6.3)
If this is the case, the correlations for functions f ∈ L^1(µ) and g ∈ V_ψ decay faster than γ*_m, where γ_m = e^{Σ_{k≥m} var_k(φ)} − 1.
To prove the general result without assuming (6.3) we must work with block transition probabilities, which are less sensitive to the oscillations of the cocycle. More precisely, given a family of transition probabilities P on A × A, let P_n denote the corresponding transition probabilities on A^n × A:
P_{n+1}(a_{0,n} | x) = P(a_n | a_{n−1} · · · a_0 x) · · · P(a_1 | a_0 x) P(a_0 | x), (6.4)
where a_{0,n} := (a_0, . . . , a_n) ∈ A^{n+1}. (6.5)
If the transition probabilities P are defined by a normalized function φ as in (3.6), then we see from (6.4) that the transition probabilities P_n obey a similar relation
P_n(a_{0,n−1} | x) = e^{φ_n(xa_{0,n−1})}, (6.6)
with
φ_n(xa_{0,n−1}) := Σ_{k=0}^{n−1} φ(xa_0 · · · a_k). (6.7)
In particular, for the transitions (6.1) the formula (6.4) yields
ψ_n(xa_{0,n−1}) = φ_n(xa_{0,n−1}) + log ρ(xa_{0,n−1}) − log ρ(x), (6.8)
since the cocycle telescopes along the block. A comparison of (6.8) with (6.2) shows that it is largely advantageous to bound directly the oscillations of ψ_n. This is what we do in this section by adapting the arguments of Section 5.
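The block transitions (6.4) are simply products of one-step transitions along the block, each emitted symbol being fed back into the history. A minimal sketch with an illustrative one-step kernel, checking that P_3(· | x) is indeed a probability distribution on A^3:

```python
from itertools import product

def block_transition(P, block, x):
    """P_n(a_0 ... a_{n-1} | x): compose one-step transitions along the
    block, appending each emitted symbol to the history (cf. (6.4))."""
    prob, hist = 1.0, tuple(x)
    for a in block:
        prob *= P(a, hist)
        hist += (a,)
    return prob

# Toy one-step transitions depending only on the last symbol (illustrative).
def P(a, hist):
    p1 = 0.4 if hist[-1] == 0 else 0.6   # probability of emitting 1
    return p1 if a == 1 else 1.0 - p1

total = sum(block_transition(P, b, (0,)) for b in product((0, 1), repeat=3))
print(total)  # the block probabilities sum to 1
```

Working with such products is what makes the cocycle contribution in (6.8) telescope: only the endpoints of the block enter the oscillation bound.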

Coupling of the transition probabilities for blocks
For every integer n, we define a family of transition probabilities P̄_n on (A^n)^2 × A^2 by letting P̄_n(a_{0,n−1}, b_{0,n−1} | x, y) be the maximal coupling [cf. (4.12)] of P_n(· | x) and P_n(· | y), evaluated at (a_{0,n−1}; b_{0,n−1}). (6.9) Let (n_m)_{m∈N} be an increasing sequence. For each double history x, y, we consider the coupling (Ū^{x,y}_m, V̄^{x,y}_m)_{m∈Z} of the chains for n_m-blocks with pasts x and y.

The dominating Markov process
Let us choose the length of the blocks in such a way that the sequence (n_m)_{m∈N} is subadditive.
We now define the "dominating" Markov chain (S̄_m)_{m≥0}.

Decay of correlations
We can now mimic the proof of Theorem 1 in terms of barred objects.
As (var_m(φ))_{m∈N} is summable, there exists a subadditive sequence (n_m)_{m∈N} such that the sequence (α_m) of the tails
α_m = Σ_{k≥n_m} var_k(φ) (6.24)
is summable. The transitions for blocks of size n satisfy
P_n(a_{0,n−1} | x) / P_n(a_{0,n−1} | y) ≥ e^{−var_k(ψ_n)}
whenever x k = y. To prove the theorem, we now proceed as in (5.1) and (5.4)-(5.10), replacing tildes by bars and putting bars over the processes (T_n) and (S^{(γ)}_n). We just point out that, due to the subadditivity of n_m, var_{n_{m+k}−n_m}(φ) ≤ var_{n_k}(φ) uniformly in m.

A Returns to the origin of the dominating Markov chain
In this appendix we collect a few results concerning the probability of return to the origin of the Markov chain (S^{(γ)}_n)_{n∈N} defined via (2.9). (In the sequel we omit the superscript "(γ)" for simplicity.)
Proposition 2 Let (γ_n)_{n∈N} be a real-valued sequence decreasing to 0 as n → +∞.
(i) If Σ_n Π_{k=0}^{n−1} (1 − γ_k) = +∞, then P(S_n = 0) → 0.
(ii) If Σ_k γ_k < +∞, then Σ_n P(S_n = 0) < +∞.
(iii) If (γ_m) decreases exponentially, then so does P(S_n = 0).
(iv) If (γ_m) decreases polynomially, then P(S_n = 0) = O(γ_n).
To prove the proposition we introduce the generating functions F(s) := Σ_{n≥1} P(τ = n) s^n and G(s) := Σ_{n≥0} P(S_n = 0) s^n, where the random variable τ is the time of first return to zero, defined in (5.8). The probabilities P(τ = n) were computed in (5.11) above. The relation (5.7) implies that these series are related in the form (A.3).
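The identity between the generating functions of (P(τ = n)) and (P(S_n = 0)) reflects the renewal recursion u_n = Σ_{k=1}^{n} P(τ = k) u_{n−k}. The sketch below computes P(S_n = 0) both from this recursion and from the chain dynamics and checks that they agree; the sequence γ is an illustrative choice.

```python
def tau_dist(gamma, N):
    """P(tau = n) = gamma_{n-1} * prod_{m <= n-2} (1 - gamma_m)."""
    f, surv = [0.0] * (N + 1), 1.0
    for n in range(1, N + 1):
        f[n] = gamma[n - 1] * surv
        surv *= 1 - gamma[n - 1]
    return f

def u_renewal(gamma, N):
    """u_n = P(S_n = 0) via the renewal recursion behind G = 1/(1 - F)."""
    f = tau_dist(gamma, N)
    u = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        u[n] = sum(f[k] * u[n - k] for k in range(1, n + 1))
    return u

def u_direct(gamma, N):
    """Independent check: propagate the state distribution of the chain."""
    dist, out = [1.0], [1.0]
    for _ in range(N):
        new = [0.0] * (len(dist) + 1)
        for i, mass in enumerate(dist):
            new[0] += mass * gamma[i]
            new[i + 1] += mass * (1 - gamma[i])
        dist = new
        out.append(dist[0])
    return out

g = [0.5 / (m + 1) for m in range(60)]   # an illustrative decaying sequence
ur, ud = u_renewal(g, 50), u_direct(g, 50)
print(max(abs(a - b) for a, b in zip(ur, ud)))  # agreement up to rounding
```

Multiplying the recursion by s^n and summing gives G(s) − 1 = F(s) G(s), i.e. the generating-function identity used in the proofs below.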

Sketch of the proof
The identity (A.3), G(s) = 1/(1 − F(s)), holds for all s ≥ 0 such that F(s) < 1.
It is clear that the radius of convergence of F is at least 1. In fact, F has the same radius of convergence as the power series Σ_n γ_{n−1} s^n (A.5). This is a consequence of the fact that P(τ = n)/γ_{n−1} → P(τ = +∞) > 0, as concluded from (5.11).
Statement (ii) of the proposition is a consequence of the fact that the series G converges at s = 1 if Σ_{k≥1} γ_k < +∞. This follows from the relation (A.3) and the fact that the right-hand side of (A.4) is strictly less than one when the chain (S^{(γ)}_n) is transient.
To prove statement (iii) let us assume that γ_m ≤ C γ^m for some constants C < +∞ and 0 < γ < 1. By (A.5), the radius of convergence of F is γ^{−1} > 1 while, by (A.4), F(1) < 1. By continuity it follows that there exists s_0 > 1 such that F(s_0) = 1 and, hence, by (A.3), G(s) < +∞ for all s < s_0. By definition of G, this implies that P(S_n = 0) decreases faster than ζ^{−n} for any ζ ∈ (1, s_0). Statement (iv) is a consequence of the following lemma.
Proof We start with the following observation. If i_1 + · · · + i_k = n, then max_{1≤m≤k} i_m ≥ n/k and thus, for an increasing function g, g(n) ≤ g(k i_max), where i_max = max_{1≤m≤k} i_m. If we apply this to g(n) = 1/P(τ = n), which is increasing by (5.11), we obtain
P(τ = k i_max) ≤ P(τ = n). (A.7)
We now invoke the following explicit relation between the coefficients of F and G.