There exists an interval exchange with a non-ergodic generic measure

We show that there exists an interval exchange and a point so that the orbit of the point equidistributes for a measure that is not ergodic.


Introduction
In the initial study of interval exchange transformations in the Soviet Union, the United States and western Europe, a major motivation was basic ergodic theory. One aspect was to find bounds on the maximal number of ergodic measures. Katok [2] proved that n-IETs have at most ⌊n/2⌋ ergodic measures. A little later Keane [3] proved that n-IETs have a finite number of ergodic measures, and Veech [11], using different methods than Katok, also proved that an n-IET has at most ⌊n/2⌋ ergodic measures. Various examples of non-ergodic interval exchanges were constructed by Katok [2], Sataev [8] and Keane [4]. In a slightly different context examples were also constructed by Veech [10]. Masur [7] and Veech [12] later proved that almost every interval exchange with an irreducible permutation is uniquely ergodic.
The present paper is devoted to the study of invariant measures of minimal and not uniquely ergodic interval exchange transformations. Now let (X, d) be a σ-compact metric space, B the Borel σ-algebra, µ a Borel measure and T : X → X a µ-measure-preserving map.

Definition 1.
A point x ∈ X is said to be generic for µ if for every continuous compactly supported f : X → R we have
lim_{N→∞} (1/N) Σ_{k=0}^{N−1} f(T^k(x)) = ∫_X f dµ.
The Birkhoff ergodic theorem, together with the fact that the continuous compactly supported functions with the supremum norm contain a countable dense set, implies that if a measure µ is ergodic then µ-almost every point is generic.
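To make Definition 1 concrete, here is a small numerical sketch (ours, not from the paper): for an irrational circle rotation, which is uniquely ergodic, Birkhoff averages of a continuous function along any orbit approach the integral against Lebesgue measure. The rotation angle and test function below are illustrative choices.

```python
import math

def birkhoff_average(f, T, x, N):
    """Average of f along the first N points of the T-orbit of x."""
    total = 0.0
    for _ in range(N):
        total += f(x)
        x = T(x)
    return total / N

# Rotation by the golden mean (mod 1): a uniquely ergodic system,
# so every point is generic for Lebesgue measure.
alpha = (math.sqrt(5) - 1) / 2
T = lambda x: (x + alpha) % 1.0
f = lambda x: math.cos(2 * math.pi * x)   # integral against Lebesgue is 0

avg = birkhoff_average(f, T, x=0.1, N=100000)
print(abs(avg))  # small: the Birkhoff average approaches the integral, 0
```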

Definition 2.
A measure is called generic if it has a generic point.
The question arises whether a non-ergodic measure can be generic. In settings like the left shift on {0, 1}^Z it is straightforward to build generic points for non-ergodic measures. Interval exchanges have fewer, much more constrained orbits, and so these techniques do not work. However, in this paper we show that there exists an IET with a generic non-ergodic measure.
Theorem 1. There exists a minimal, non-uniquely ergodic IET on 6 intervals with 2 ergodic measures which has a generic measure that is not ergodic.
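For the shift, one standard sketch of such a construction (our illustration, not the paper's): concatenate alternating blocks 0^k 1^k for k = 1, 2, 3, …. Since each new block is negligible compared to the total length written so far, the empirical measures converge to (δ_{000…} + δ_{111…})/2, a non-ergodic shift-invariant measure, so the resulting sequence is a generic point for it.

```python
def block_sequence(kmax):
    """0^1 1^1 0^2 1^2 ... 0^kmax 1^kmax as a string."""
    return "".join("0" * k + "1" * k for k in range(1, kmax + 1))

def cylinder_freq(seq, word):
    """Empirical frequency of `word` among all shifts of seq."""
    n = len(seq) - len(word) + 1
    return sum(seq[i:i + len(word)] == word for i in range(n)) / n

s = block_sequence(200)
# The frequencies of "00" and "11" each approach 1/2, while "01"
# (a transition between blocks) becomes rare: the empirical measure
# converges to the non-ergodic average of two fixed-point measures.
print(cylinder_freq(s, "00"), cylinder_freq(s, "01"))
```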
The same proof given in [2] gives the following.
Theorem 2. An n-IET has at most ⌊n/2⌋ generic measures.
Thus if the number of ergodic measures is maximal, then no other measure can be generic. Acknowledgments: We thank M. Boshernitzan for asking this question and Oberwolfach, where this project began. J. Chaika thanks the University of Chicago for its hospitality.

Rauzy Induction
We follow the description of interval exchange transformations introduced in [6] and also explicated in [1]. We have an alphabet A on d ≥ 2 letters. Break an interval I = [0, |λ|) into intervals {I_α}_{α∈A} and rearrange them in a new order by translations. Thus the interval exchange transformation is entirely defined by the following data: (1) the lengths of the intervals, (2) their orders before and after rearranging. The first are called length data, and are given by a vector λ ∈ R^d. The second are called combinatorial data, and are given by a pair of bijections π = (π_t, π_b) from A to {1, . . . , d}. The bijections can be viewed as a pair of rows, the top corresponding to π_t and the bottom corresponding to π_b.
Given an interval exchange T defined by (λ, π), let α, β ∈ A be the last elements in the top and bottom rows. The operation of Rauzy induction is applied when λ_α ≠ λ_β to give a new IET T′ defined by (λ′, π′), where λ′, π′ are as follows. If λ_α > λ_β then π′ keeps the top row unchanged, and it changes the bottom row by moving β to the position immediately to the right of the position occupied by α. We say α wins and β loses. For all γ ≠ α define λ′_γ = λ_γ, and define λ′_α = λ_α − λ_β. If λ_β > λ_α then to define π′ we keep the bottom row the same and the top row is changed by moving α to the position immediately to the right of the position occupied by β. Then λ′_γ = λ_γ for all γ ≠ β and λ′_β = λ_β − λ_α. We say β wins and α loses.
In either case one has a new interval exchange T′ determined by (λ′, π′) and defined on an interval I′ = [0, |λ′|), where |λ′| = |λ| − min(λ_α, λ_β). The map T′ : I′ → I′ is the first return map of T to the subinterval of I obtained by cutting from I a subinterval with the same right endpoint and of length λ_ζ, where ζ is the loser of the process described above.
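The induction step just described can be sketched in code (a schematic implementation under the conventions above; names are ours): the loser's length is subtracted from the winner's, and the loser is reinserted immediately to the right of the winner's position in the loser's own row.

```python
def rauzy_step(lengths, top, bottom):
    """One step of Rauzy induction on combinatorial data (top, bottom)
    and length data `lengths`; returns the new data plus winner/loser."""
    a, b = top[-1], bottom[-1]           # last letters of the two rows
    lengths = dict(lengths)
    if lengths[a] > lengths[b]:          # a wins, b loses
        lengths[a] -= lengths[b]
        new_bottom = bottom[:-1]
        new_bottom.insert(new_bottom.index(a) + 1, b)   # b goes right of a
        return lengths, list(top), new_bottom, a, b
    if lengths[b] > lengths[a]:          # b wins, a loses
        lengths[b] -= lengths[a]
        new_top = top[:-1]
        new_top.insert(new_top.index(b) + 1, a)         # a goes right of b
        return lengths, new_top, list(bottom), b, a
    raise ValueError("Rauzy induction is undefined when the last lengths agree")

# Example: a 3-IET with top row ABC over bottom row CBA.
lam, top, bot, w, l = rauzy_step({"A": 0.5, "B": 0.3, "C": 0.2},
                                 ["A", "B", "C"], ["C", "B", "A"])
print(w, l, top, bot)  # A wins over C; C moves right of A in the top row
```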
Let ∆ be the standard simplex in R^d and let P be the set of permutations on d letters. We can normalize so that all IETs are defined on the unit interval. Let R : ∆ × P → ∆ × P then denote Rauzy induction.
There is a corresponding visitation matrix M = M(T). Let {e_γ}_{γ∈A} be the standard basis. If α is the winner and β the loser then M(e_γ) = e_γ for γ ≠ β and M(e_β) = e_β + e_α. We can view M as simply arising from the identity matrix by adding the α column to the β column. We can projectivize the matrix M and consider it as M : ∆ → ∆.
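A quick sanity check of this bookkeeping (our own illustration, not the paper's): with M the identity plus the winner's column added to the loser's column, the old length vector is recovered from the new one as λ = Mλ′.

```python
def elementary_matrix(letters, winner, loser):
    """Identity with an extra 1 in (row winner, column loser), i.e. the
    identity's winner column added to its loser column."""
    n = len(letters)
    idx = {a: i for i, a in enumerate(letters)}
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    M[idx[winner]][idx[loser]] = 1.0
    return M

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

letters = ["A", "B", "C"]
# Suppose A wins and C loses: the new lengths are A' = A - C, others unchanged.
old = [0.5, 0.3, 0.2]
new = [0.5 - 0.2, 0.3, 0.2]
M = elementary_matrix(letters, "A", "C")
print(matvec(M, new))  # recovers the old lengths, up to rounding
# The loser's column of M is e_C + e_A, as in the text:
print([row[2] for row in M])
```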
When the interval exchange T is understood and we perform Rauzy induction n times, define M(1) = M(T) and inductively M(n) = M(n − 1)M(R^{n−1}T). That is, the matrix M(n) comes from multiplying M(n − 1) on the right by the matrix of Rauzy induction applied to the IET after we have done Rauzy induction n − 1 times. Let C_σ(n) denote the column of M(n) corresponding to the letter σ ∈ A.
Definition 3. We say an IET satisfies the Keane condition if the orbit of every point of discontinuity misses every other such point. It is a fact that such an IET is minimal.
If an interval exchange has all powers of Rauzy induction defined on it then it satisfies the Keane condition (see [9, Corollary 5.4] for a proof in a survey). We now specialize to IETs on 6 intervals. We will take paths that return infinitely often to a fixed arrangement. Our paths will be of two types, which we call the left and right paths. Our paths will depend on a sequence of parameters a_k. Inductively we assume that for the k-th time, having just traveled on the left hand side, we have taken a total of n_k steps of Rauzy induction and returned to the above arrangement. Our matrix is M(n_k).
Remark 1 (A remark on notation). In what follows we suppress the matrix that the columns are being taken from. So C_A(M(n_k + r_k)) is denoted C_A(n_k + r_k). Given a vector v, let |v| denote the sum of the absolute values of its entries (that is, its ℓ^1 norm).
Inductively assume the columns satisfy the corresponding bounds at stage k. Now we go along the loop on the right side. In the first step of Rauzy induction F beats A. In the second step F beats B. In the third step C beats F, and then C beats E. Now for a total of r_k consecutive steps C loses to D. The number r_k is chosen to be the smallest odd number satisfying the required growth inequality. Now C will beat D. Now F beats C and then F beats D. Next E beats F consecutively s_k times, for a total of s_k + r_k + 7 steps, until the first time that |C_F(n_k + r_k + s_k + 7)| > a_{k+1} · max{|C_A(n_k + 7 + r_k + s_k)|, |C_B(n_k + 7 + r_k + s_k)|}.
Then F beats E and we return to the arrangement for the (k + 1)-st time. The number of steps we have taken is r_k + s_k + 8, so n_{k+1} = n_k + r_k + s_k + 8. Notice that when we return, for a_k sufficiently large compared to k, the corresponding column estimates hold. [Figure 1: the path we take on the right hand side of the Rauzy diagram.] On the left hand side A beats F and then beats E; then D beats A and then B; then C beats D repeatedly, r_{k+1} times, until the first time that the required growth inequality holds. Then D beats C. Then A beats D and then C, and then B beats A repeatedly, s_{k+1} times, until the first time that C_A(n_{k+1} + r_{k+1} + 7 + s_{k+1}) > a_{k+2} min{C_E(n_{k+1} + 7 + s_{k+1} + m_{k+1}), C_F(n_{k+1} + 7 + r_{k+1} + s_{k+1})}.
Finally A beats B and we return.
Our main Theorem is then the following.
Theorem 4. For suitable choices of a_k there is a minimal 6-interval exchange transformation T with the property that the resulting simplices ∆_k = M(k)(∆) converge to a line segment as k → ∞. The endpoints are ergodic measures for T. One endpoint is the common projective limit of the column vectors C_A, C_B; the other endpoint is the common projective limit of the column vectors C_E, C_F. Along the sequence n_k the column vectors corresponding to C_C, C_D converge projectively to a common interior point of the segment, and this limit point is a generic measure.

2.1.
Outline of proof. The proof is in two main steps. In the next subsection we show that ∩_{n=1}^∞ M(n)∆ converges to a line segment. This is given by Proposition 2, with preliminary Proposition 1. This shows that there are two ergodic measures. We also show that the sequence of vectors given by the C-th and D-th columns of the matrices M(n) converges to a single point, away from the endpoints, in ∩_{n=1}^∞ M(n)∆. This gives the invariant measure ν we are interested in. In the section after that we show that ν is generic by finding a generic point x.

2.2.
Convergence of column vectors. The first Lemma is simple Euclidean geometry. Our angles will be bounded away from π/2 and so sin x will be comparable with x.
Lemma 1. Let v, w ∈ R^n, and let θ(v, w) denote the angle between v and w. If θ(v + w, w) denotes the angle between v + w and w, we have sin θ(v + w, w) = (‖v‖/‖v + w‖) sin θ(v, w), where ‖·‖ is the Euclidean norm; this is the law of sines in the triangle with vertices 0, w and v + w.
We will now simplify notation for the column vectors, using v for a vector and suppressing some of the times of Rauzy induction.
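Whatever the precise form of the estimate used here, the exact Euclidean relation sin θ(v + w, w) = (‖v‖/‖v + w‖) sin θ(v, w) follows from the law of sines, and can be checked numerically (our own sketch):

```python
import math

def angle(u, v):
    """Angle between vectors u and v in R^n (dot product clamped for safety)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

norm = lambda u: math.sqrt(sum(a * a for a in u))

v = [3.0, 0.0, 1.0]
w = [0.0, 4.0, 2.0]
s = [a + b for a, b in zip(v, w)]   # v + w

lhs = math.sin(angle(s, w))
rhs = (norm(v) / norm(s)) * math.sin(angle(v, w))
print(lhs, rhs)  # equal up to rounding
```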
We assume that we enter the right side after an even number of returns. We define v_F(2k + 1), v_E(2k + 1) to be the column vectors C_F(n_{2k} + 3), C_E(n_{2k} + 4) after F, E lose to C on the right side. We choose v_F(2k + 2) to be the column vector C_F(n_{2k} + r_{2k} + m_{2k} + 7) after E is added to F on the right side, and v_E(2k + 2) the column vector C_E(n_{2k} + r_k + 8) = C_E(n_{2k+1}) after C_F is added to C_E.
Similarly let v_A(2k) be the column vector C_A(n_{2k−1} + 3) on the left side just after A has lost to D, and v_A(2k + 1) is the column vector C_A(n_{2k}) = C_A(n_{2k−1} + r_{2k−1} + m_{2k−1} + 7) just after B has finished beating A. Similarly the column vectors v_B(2k) and v_B(2k + 1) are defined in terms of C_B.
We define v^R_C(2k + 1) to be the column vector C_C(n_{2k} + r_{2k} + 4) when C has finished losing to D on the right side, and v^R_C(2k + 2) the column vector for C after C loses to F. This ensures that v^R_C(2k + 2) lies on the line joining v^R_C(2k + 1) and v_F(2k + 1), which will be needed later. Similarly, v^L_C(2k) is the column vector for C on the left side after D has beaten C, and v^L_C(2k + 1) is the column vector after A has beaten C. We have similar definitions for v^L_D(2k), v^L_D(2k + 1) in terms of the column vectors C_D.
The following Proposition gives the properties these vectors satisfy. They will be used to prove the desired convergence results.
Proposition 1. With a suitable choice of a_k there exist constants δ > 0, α < 1 such that for i, j ∈ {1, 2} and all k:
(1) for σ, σ′ ∈ {E, F}, θ(v_σ(k), v_{σ′}(k + 1)) < α^k, and similarly for A, B. The same holds on the left, and similarly for A, B.
The vector v^R_C(2k + 2) lies on the line connecting v^R_C(2k + 1) and v_F(2k + 1), and v^L_C(2k + 1) lies on the line connecting v^L_C(2k) and v_A(2k), with the same for v_D.
Proof. The vector v_E(2k + 1) (resp. v_F(2k + 1)) arises from v_E(2k) (resp. v_F(2k)) by E (resp. F) losing first to A on the left side and then to C back on the right side. Recall this means that the A column and then the C column are added to the E (resp. F) column. Then the angle estimate follows by (A) and (B) and Lemma 1, and similarly for v_F. Similarly, on the right side the C_E column is added to the C_F column many times and then C_F is added to C_E. By (A) and (D) and Lemma 1, this gives the analogous estimate, and the same argument yields similar inequalities for v_A, v_B. If we let a_k be exponentially small, putting these together we see that (1) is satisfied. We also note that on the right side F beats A, B only when smaller by a factor of a_k, and similarly on the left side A beats E, F only when smaller by a factor of a_{k+1}. Thus, since initially the angle between the columns for E and A is positive, we can choose a_k so that (2) is satisfied. We prove (3), (4). When C or D loses to F, C_C moves towards C_F, so the angle decreases accordingly. The same holds for v^L_C(2k), v^L_C(2k + 1) and also for the corresponding v_D. These imply (3), (4).
When we have finished on the right we have v^R_C(2k), v^R_D(2k). When we move to the left we change from v^R_C(2k), v^R_D(2k) to v^L_C(2k), v^L_D(2k). In doing so we add the C column to the D column r_{2k} times and then the D column is added back to the C column. Since r_k grows exponentially this gives (5) for j even and left. Moving from left to right similarly gives (5) when j is odd and we are on the right. In addition, increasing the index on both sides means adding the same column (either C_A or C_F) to each, so they remain exponentially close. This proves (5). Similarly, the fact that C beats D repeatedly on the left means v^L_D(2k) moves close to v^R_C(2k), and then the fact that D is added to C means that v^L_C(2k) is close to v^R_C(2k). This is (6). We have the corresponding computation on the right, giving (7).
The following is the basic Proposition that will allow us to find limiting invariant measures.
Proposition 2. Consider the sequence of vectors v_σ(k) ∈ R^6 that satisfy the conclusions of Proposition 1, projectivized to lie in the simplex ∆.
Proof. We can replace angles with Euclidean distance. The fact that the limits v_0, v_1 of v_{E,F}(k), v_{A,B}(k) exist follows immediately from (1). In fact, they converge exponentially fast. The fact that the limits are distinct comes from (2).
Let ū be the direction of the line connecting v_0 and v_1. Let y denote a coordinate perpendicular to ū. We claim that the y coordinates of v^{R,L}_C(k), v^{R,L}_D(k) converge to 0. The fact that v^R_C(2k + 1) and v^R_C(2k + 2) lie on a line through v_F(2k + 1), whose y coordinate goes to 0 exponentially fast, together with (3), implies that the y coordinate of v^R_C(2k + 2) must decrease by a factor of approximately 1 − 1/k + O(1/k^2) from the y coordinate of v^R_C(2k + 1). The same is true for the change from even to odd indices of v^L_C, also by (3). But then (6) says that d(v^R_C(2k), v^L_C(2k)) → 0 exponentially fast, and together with the divergence of Σ 1/k this implies the y coordinates of v^{R,L}_C(k) must converge to 0. The same is true of v^{R,L}_D(k) by (3) and (7).
It follows from the above conditions that the v^{R,L}_C(k), v^{R,L}_D(k) have a single limit v on the axis. Indeed, since the ū coordinate of v^R_C(2k + 2) decreases by 1/(2k + 1), with small error, from that of v^R_C(2k + 1), and then increases by 1/(2k + 2), with small error, from even to odd index, the distance in the ū coordinate between v^R_C(2k) and v^R_C(2k + 2) is O(1/k^2), so these coordinates form a convergent Cauchy sequence. The fact that the distance in the ū coordinate between v^R_C(2k) and v^R_C(2k + 1) goes to 0 means the entire sequence converges. The fact that (5) holds implies that the v^{R,L}_D(k) have the same limit. We claim that v ∉ {v_0, v_1}. Suppose by contradiction that v = v_0 (the argument for v_1 is similar). Assume without loss of generality |v_1 − v_0| = 1. For k even, since v^L_D(k + 1) is assumed near v_0, we move towards v_1 an amount of about 1/k. When we move towards v_0 on the next move we move not as far; then again on the next move we move further away. On odd moves we will move further away from v_0 than we move closer to v_0 on even moves as soon as, say, the ū coordinate is at most some positive ε. This says we do not converge to v_0. By a symmetric argument we see that no subsequence converges to v_1.
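The two summability facts driving this argument can be illustrated numerically (our own sketch): repeated contraction by factors 1 − 1/k kills the y coordinate because Σ 1/k diverges (the partial products telescope to 1/N), while ū-coordinate increments of size O(1/k²) sum to a finite limit, giving a Cauchy sequence.

```python
# Partial products of (1 - 1/k) telescope: prod_{k=2}^{N} (k-1)/k = 1/N,
# so a coordinate contracted by these factors tends to 0.
N = 10000
prod = 1.0
for k in range(2, N + 1):
    prod *= 1.0 - 1.0 / k

# Increments of size 1/k^2 form a convergent series, hence a Cauchy sequence.
partial = sum(1.0 / k**2 for k in range(2, N + 1))

print(prod, partial)  # prod is about 1/N; partial stays bounded
```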

Getting a generic point
We conclude that ν(I_j) = µ(I_j) for each of the original intervals I_j. However, the measures of initial intervals determine the invariant measure. This is because if T is minimal then {T^i I_σ}_{i∈Z, σ∈{A,...,F}} generates the Borel σ-algebra.
For x ∈ I and j ∈ N let v_j(x) be the vector of visits of x, T(x), . . . , T^{j−1}(x) to the original intervals. If T^i is continuous on an interval U for 0 ≤ i < j then v_j is constant on U, and in an abuse of notation we let v_j(U) be this vector.
Recall n_k is the number so that after n_k steps of Rauzy induction we have hit the fixed arrangement. Let J_k be the interval so that after n_k steps of Rauzy induction the induced map, denoted S_k, is defined on J_k. Let A_k, . . . , F_k be the six subintervals of the IET S_k. For σ_k one of these intervals and x ∈ σ_k, the return time m_k(x) = min{i > 0 : T^i x ∈ J_k} is constant on σ_k and equals |C_{σ_k}|.
Let m_k(σ_k) be this number. In other words, in this notation v_{m_k(σ_k)}(σ_k) = C_{σ_k}.
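The bookkeeping v_{m_k(σ_k)}(σ_k) = C_{σ_k} can be illustrated on a toy example (ours, not the paper's 6-IET): count visits of an orbit segment to the original intervals of a 2-IET, i.e. a circle rotation.

```python
def make_iet(lengths, top, bottom):
    """Return the IET map T and the letter (interval) containing x,
    for combinatorial data (top, bottom) and length data `lengths`."""
    start_top, start_bot = {}, {}
    s = 0.0
    for a in top:
        start_top[a] = s; s += lengths[a]
    s = 0.0
    for a in bottom:
        start_bot[a] = s; s += lengths[a]
    def letter(x):
        for a in top:
            if start_top[a] <= x < start_top[a] + lengths[a]:
                return a
        raise ValueError("x outside the interval")
    def T(x):
        a = letter(x)
        return x - start_top[a] + start_bot[a]
    return T, letter

def visits(T, letter, x, j):
    """v_j(x): vector of visits of x, T(x), ..., T^{j-1}(x) to the intervals."""
    v = {}
    for _ in range(j):
        a = letter(x)
        v[a] = v.get(a, 0) + 1
        x = T(x)
    return v

# 2-IET exchanging A = [0, 0.25) and B = [0.25, 1): rotation by 0.75.
T, letter = make_iet({"A": 0.25, "B": 0.75}, ["A", "B"], ["B", "A"])
print(visits(T, letter, 0.1, 4))  # {'A': 1, 'B': 3}
```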

Also let O_k(σ_k) = ⋃_{i=0}^{m_k(σ_k)−1} T^i(σ_k). This is the orbit of σ_k under T before σ_k returns to J_k. Now suppose we have the induced map S_k on J_k and we are about to go to the left side of Rauzy induction. Recall r_{k+1} is the number of times on the left side that C beats D, and so the corresponding inclusion of orbits holds for 0 ≤ i ≤ r_{k+1}. If on the other hand we are about to go to the right side, then the symmetric statement holds for 0 ≤ i ≤ r_{k+1}. For simplicity let us assume that we are about to go to the left hand side; the arguments are symmetric if we are about to go to the right hand side. Recall r_{k+1} is odd. Let u_k := ⌊r_{k+1}/2⌋.
Lemma 3. Î_k consists of two intervals of O_{k+1}(C_{k+1}) and one interval of O_{k+1}(D_{k+1}).
Proof. On the left side D_k beats A_k and B_k. Then C_k, as the bottom letter, beats D_k, the top letter, r_{k+1} times before losing to it. For each i, after C_k has beaten D_k i times, we have the corresponding disjoint union, moving right to left along C_k. The last term in the union, which is at the left end of C_k, is an image of C_{k+1}. Moreover D_k is a disjoint union of an image of C_{k+1} and an image of D_{k+1}. In addition S_k acts by translation by the length of D_k along C_k. Then for i < r_{k+1}, S^i_k(C_k) ∩ C_k is C_k with the i copies of C_{k+1} ∪ D_{k+1} furthest on the left of C_k deleted. Similarly, S^{−i}_k(C_k) ∩ C_k deletes the i copies of C_{k+1} ∪ D_{k+1} furthest on the right of C_k. It follows then from the definition of Î_k that it is a single union of an image of C_{k+1} ∪ D_{k+1} and the last term in the union, which is an image of C_{k+1}. Now let I_k = Î_k ∩ O_{k+1}(D_{k+1}). We conclude from the previous lemma that v_{m_k}(I_k) consists of u_k iterates of the entire orbit O_k(C_k), followed by the entire orbit of O(D_k), and then followed by the entire orbit of O(A_k). Let I_{k−1} be an interval that has been inductively defined so that it contains a unique interval of O_k(C_k). Let I_k be defined as O_k(I_k) ∩ I_{k−1}. Observe that it contains a unique interval of O_{k+1}(D_{k+1}), and therefore the number j_k is well-defined and T^{j_k} is continuous on I_k.
In fact these angles approach 0 exponentially fast.
Proof. This is similar to the proof of Proposition 1, because v_{j_k}(I_k) is made up of blocks of C_k or D_k and a smaller column of O(A_k).
We conclude the proof of the Main Theorem. We let x be the point in the intersection of the nested intervals I_k. We will show that x is generic for ν. To do that we show that the angle its visitation vector makes with the column C_k goes to 0 as k → ∞. This is shown in the next Proposition.
Proof. By Lemma 5, the angle v_{j_{k−1}}(I_k) = v_{j_{k−1}}(I_{k−1}) makes with v_{m_{k−1}(D_{k−1})}(D_{k−1}) goes to 0 with k, and by Proposition 1 the angle of the latter with v_{m_k(C_k)}(C_k) also goes to 0. Thus the conclusion of the Proposition holds for j = j_{k−1}. We now wish to increase the set of times in the orbit of I_k. Again recall that we are assuming that we have just finished the right hand side. We first consider m_k(C_k). Now v_{m_k(C_k)}(C_k) consists of r_k copies of the vector of visits of the entire orbit O_{k−1}(D_{k−1}) to the original intervals, followed by a single copy of the vector of visits of O_{k−1}(C_{k−1}) to the original intervals, followed by a single copy of O_{k−1}(F_{k−1}). Given that m_{k−1}(D_{k−1}) = o(j_{k−1}), Proposition 3 holds for any i in the initial interval [j_{k−1}, j_{k−1} + m_{k−1}(D_{k−1})]. Then, by Proposition 1 and the fact that Proposition 3 holds for j_{k−1}, we have the following.

This implies that for any i ∈ [j_{k−1}, j_{k−1} + r_k m_{k−1}(D_{k−1})], if one considers the orbit of I_k at time i, then lim_{k→∞} θ(v_i(I_k), v_{m_k(C_k)}(C_k)) = 0.
Finally, since m_k(A_k) = o(u_k m_k(C_k)), this finishes the proof of the Proposition, and the proof of the Main Theorem is complete.
We finish the paper with the proof of Theorem 2. This is the same proof given in [2], where it was shown that there are at most ⌊n/2⌋ ergodic measures. We include it for completeness.
Proof of Theorem 2. We suspend the interval exchange to a translation surface (X, ω) of genus g = ⌊n/2⌋. The interval I becomes horizontal. Let x be a generic point for an invariant measure µ. Let n_k → ∞ be a sequence such that for each n_k and 0 < i < n_k there is no point T^i(x) in the interval between x and T^{n_k}(x). This means that if we take the vertical line from x to T^{n_k}(x), we can join x to T^{n_k}(x) by a segment J_{n_k} ⊂ I, and the resulting closed curve γ_{n_k} is simple. Since x is generic, the homology classes γ_{n_k}/|γ_{n_k}| converge in H_1(X, R) to what is called an asymptotic cycle, denoted [γ(µ)].
It is easy to see that if µ and ν are different invariant measures they assign different measures to intervals, and so the corresponding asymptotic homology classes satisfy [γ(µ)] ≠ [γ(ν)]. Moreover, since the measures are nonatomic, for intervals J_{n_k} with |J_{n_k}| → 0 we have µ(J_{n_k}) → 0, which implies that the asymptotic cycles have zero intersection number; that is, ⟨[γ(µ)], [γ(ν)]⟩ = 0. This says they span a Lagrangian subspace of H_1(X, R). Since the dimension of homology is 2g, where g is the genus of X, a Lagrangian subspace has dimension at most g. Hence there are at most g = ⌊n/2⌋ generic measures.