Perron-Frobenius theory and frequency convergence for reducible substitutions

We prove a general version of the classical Perron-Frobenius convergence property for reducible matrices. We then apply this result to reducible substitutions and use it to produce limit frequencies for factors and hence invariant measures on the associated subshift. The analogous results are well known for primitive substitutions and have found many applications, but for reducible substitutions the tools provided here were so far missing from the theory.


Introduction
Among the most investigated dynamical systems, with important applications in many areas, are the subshifts generated by substitutions. If the substitution is primitive, then a number of well known and powerful tools are available, most notably the Perron-Frobenius theorem for primitive matrices, which ensures that the subshift in question is uniquely ergodic.
On the other hand, substitutions with reducible incidence matrices have only recently received serious attention (see Remark 3.18 and Remark 7.3). One reason for this neglect is that the standard methods, employed in the primitive case for analyzing the dynamics of such substitutions and their incidence matrices, use tools that so far had no analogues in the reducible case. It is the purpose of this paper to provide these tools, and thus to extend the basic theory from the primitive to the reducible case.
We concentrate on substitutions ζ which are expanding, i.e. ζ is non-erasing and does not act periodically on any subset of the given alphabet (for our notation and terminology on substitutions see §3.1).
Every non-negative irreducible square matrix has a power which is a block diagonal matrix, where every diagonal block is primitive. The classical Perron-Frobenius theorem asserts that, for any primitive matrix M and for any non-negative column vector v ≠ 0, the sequence of vectors M^t v, after normalization, converges to a positive eigenvector of M, and that the latter is unique up to rescaling.
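This classical convergence can be observed numerically by power iteration; the following is a small sketch (our own illustration, not taken from the text), using the primitive Fibonacci matrix, whose Perron-Frobenius eigenvalue is the golden ratio.

```python
import numpy as np

# Illustration of the classical Perron-Frobenius convergence (a sketch,
# not the paper's proof): for a primitive matrix M and any non-negative
# vector v != 0, the normalized iterates M^t v converge to the unique
# positive eigenvector of M (up to rescaling).

M = np.array([[1, 1],
              [1, 0]])  # primitive: M^2 is entrywise positive

v = np.array([1.0, 0.0])  # non-negative, non-zero start vector
for _ in range(60):
    v = M @ v
    v = v / v.sum()  # 1-norm normalization (v stays non-negative)

# The limit is the PF eigenvector; for this M the PF eigenvalue is the
# golden ratio phi = (1 + sqrt(5)) / 2.
phi = (1 + 5 ** 0.5) / 2
residual = np.linalg.norm(M @ v - phi * v, 1)
print(v, residual)
```

The residual tends to 0 geometrically, at rate governed by the ratio of the second eigenvalue to the Perron-Frobenius eigenvalue.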
In analogy with the above facts, in section 2 we introduce the PB-Frobenius form for matrices, which is set up so that, up to conjugation with a permutation matrix, every non-negative integer square matrix has a positive power which is in PB-Frobenius form. We prove the following convergence result for matrices in PB-Frobenius form; its proof spans sections 4-7 and can be read independently from the rest of the paper.
Theorem 1.1. Let M be a non-negative integer (n × n)-matrix which is in PB-Frobenius form. Assume that none of the coordinate vectors is mapped by a positive power of M to itself or to 0.
Then for any non-negative column vector v ≠ 0 there exists a "limit vector" v^∞ = lim_{t→∞} M^t v / ||M^t v||, and v^∞ is an eigenvector of M.
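The convergence asserted in Theorem 1.1 can be watched on a small reducible example; the matrix below is our own illustration, not one from the text. It is lower triangular with two primitive (1 × 1) diagonal blocks, hence in PB-Frobenius form, and it is expanding in the sense of the theorem's hypothesis.

```python
import numpy as np

# A numerical sketch of Theorem 1.1 on a reducible example (our own
# illustration).  M is lower triangular with two primitive 1x1 diagonal
# blocks (2) and (3), hence in PB-Frobenius form, and it is expanding:
# no coordinate vector is mapped by a power of M to itself or to 0.
M = np.array([[2, 0],
              [1, 3]])

v = np.array([1.0, 0.0])  # e_1, supported only in the top block
for _ in range(80):
    v = M @ v
    v = v / v.sum()

# The normalized iterates converge to an eigenvector of M; here the
# limit is e_2, for the eigenvalue 3 (the larger of the two block
# eigenvalues that e_1 "feeds into").
residual = np.linalg.norm(M @ v - 3 * v, 1)
print(v, residual)
```

Note that the limit is not an eigenvector of the diagonal block in which the start vector is supported: the iterates drift into the block with the larger eigenvalue.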
In symbolic dynamics the classical Perron-Frobenius theorem plays a key role when applied to the incidence matrix M_ζ of a primitive substitution ζ: Any finite word w in the language L_ζ associated to ζ : A → A* has the property that for any letter a_i of the alphabet A, the number |ζ^t(a_i)|_w of occurrences of w as a factor in ζ^t(a_i), normalized by the word length |ζ^t(a_i)|, converges to a well defined limit frequency. The latter can be used to define the unique (up to scaling) invariant measure on the subshift Σ_ζ defined by the primitive substitution ζ.
The purpose of this paper is to establish the analogous results for expanding reducible substitutions ζ. The key observation (Proposition 3.5) here is that for any n ≥ 2 the classical level n blow-up substitution ζ n (based on a derived alphabet A n which contains all factors w i ∈ L ζ of length |w i | = n as "blow-up letters") has incidence matrix M ζn in PB-Frobenius form, assuming that the incidence matrix M ζ is in PB-Frobenius form.
Combining Proposition 3.5 with Theorem 1.1 gives the following (see Lemma 3.2 and Proposition 3.11): Theorem 1.2. Let ξ be an expanding substitution on a finite alphabet A. Then there exists a positive power ζ = ξ^s such that for any non-empty word w ∈ A* and any letter a_i ∈ A the limit frequency lim_{t→∞} |ζ^t(a_i)|_w / |ζ^t(a_i)| exists.
As a consequence of Theorem 1.2 we obtain - precisely as in the primitive case - for any a_i ∈ A an invariant measure on the subshift Σ_ζ defined by the substitution ζ. However, contrary to the primitive case, in general this invariant measure will heavily depend on the chosen letter a_i, see Question 3.15. We prove (see Remark 3.14): Corollary 1.3. For any expanding substitution ζ : A → A* and any letter a_i ∈ A there is a well defined invariant measure µ_{a_i} on the substitution subshift Σ_ζ. For any non-empty w ∈ A* and the associated cylinder Cyl_w ⊂ Σ_ζ (see subsection 3.6) the value of µ_{a_i} is given, after possibly raising ζ to a suitable power according to Theorem 1.2, by the limit frequency µ_{a_i}(Cyl_w) = lim_{t→∞} |ζ^t(a_i)|_w / |ζ^t(a_i)|.
Although there are various generalizations of the classical Perron-Frobenius theorem for primitive matrices in the literature, we could not find one with the convergence statement as in Theorem 1.1, which is needed for our applications. Perron-Frobenius theory and its generalizations are relevant in many more branches of mathematics than just symbolic dynamics, including applied linear algebra, and some areas of analysis and probability theory (see for instance [AGN11], [BSS12] and [Lem06]). We expect that Theorem 1.1 will find useful applications in other contexts.
Our proof of Theorem 1.1 uses only standard methods from linear algebra and is hence accessible to mathematicians from all branches. The reader interested only in Theorem 1.1 may go straight to section 4 and start reading from there. The sections 4 to 7 are organized as follows: After setting up some definitions and terminology in section 4, we state Theorem 5.1, a slight strengthening of Theorem 1.1. To stay within the realm of this paper we phrase Theorem 5.1 for integer matrices, but this assumption is not used in its proof.
The proof of Theorem 5.1 is done by induction over the number of primitive diagonal blocks in a suitable power of the given matrix M, and the induction step itself (Proposition 5.4) reveals a crucial amount of information about the dynamics on the non-negative cone R^n_{≥0} induced by iterating the map defined by the matrix M. The proof of Proposition 5.4, which involves a careful (and hence a bit lengthy) 3-case analysis, is assembled in section 6. In section 7.1 some results about the eigenvectors of such a matrix M are shown to be direct consequences of Proposition 5.4.

Non-negative matrices in PB-Frobenius form
A non-negative integer (n × n)-matrix M is called irreducible if for any 1 ≤ i, j ≤ n there exists an exponent k = k(i, j) such that the (i, j)-th entry of M k is positive. The matrix M is called primitive if the exponent k can be chosen independent of i and j. The matrix M is called reducible if M is not irreducible. Since in some places in the literature the (1 × 1)-matrix with entry 0 is also accepted as "primitive" we will be explicit whenever this issue comes up.
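Both definitions can be tested mechanically; the helpers below are our own sketch (not from the text). The primitivity test uses Wielandt's classical bound: a non-negative (n × n)-matrix M is primitive if and only if M^(n²−2n+2) is entrywise positive, while irreducibility is equivalent to (I + M)^(n−1) being entrywise positive.

```python
import numpy as np

# Hedged helper functions illustrating the definitions above (our own
# sketch).  Boolean arithmetic on the support pattern avoids integer
# overflow for the large powers involved.

def _bool_power(A, k):
    """k-th power of the 0/1 support pattern of A."""
    B = A > 0
    R = np.eye(len(A), dtype=bool)
    while k:
        if k & 1:
            R = (R.astype(int) @ B.astype(int)) > 0
        B = (B.astype(int) @ B.astype(int)) > 0
        k >>= 1
    return R

def is_irreducible(M):
    n = len(M)
    return bool(np.all(_bool_power(np.eye(n) + M, n - 1)))

def is_primitive(M):
    n = len(M)
    return bool(np.all(_bool_power(M, n * n - 2 * n + 2)))

fib = np.array([[1, 1], [1, 0]])    # primitive
perm = np.array([[0, 1], [1, 0]])   # irreducible but not primitive
lower = np.array([[1, 0], [1, 1]])  # reducible
print(is_primitive(fib), is_primitive(perm), is_irreducible(perm), is_irreducible(lower))
```

The permutation matrix illustrates the gap between the two notions: every exponent k works for some pair (i, j), but no single k works for all pairs.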
It is a well known fact for non-negative matrices that every irreducible matrix has a power which is, up to conjugation with a permutation matrix, a block diagonal matrix where every diagonal block is a primitive square matrix.
For the purposes of our results on reducible substitutions presented in the next section the following terminology turns out to be crucial: Let M be a non-negative integer square matrix as considered above, and assume that M is partitioned into matrix blocks which along the diagonal are square matrices.
Definition 2.2. (a) The matrix M is in PB-Frobenius form if M is a lower triangular block matrix where every diagonal block is either primitive or power bounded (PB). (b) If M is in PB-Frobenius form, then the special case of a diagonal block which is a (1 × 1)-matrix with entry 1 or 0 will be counted as a PB block and not as a primitive block, although technically speaking such a block could also be considered as "primitive".
Lemma 2.3. Every non-negative square matrix M has a positive power M^t which is in PB-Frobenius form (with respect to some block decomposition of M).
Proof. This is an immediate consequence of the well known normal form for non-negative matrices, which says that, up to conjugation with a permutation matrix, M is a lower triangular block matrix in which all diagonal blocks are either zero or irreducible. It suffices now to raise M to a power such that every irreducible diagonal block is itself a block diagonal matrix with primitive diagonal blocks, and to refine the block structure of M accordingly.
As is often done when working with non-negative matrices, we will use in this paper the ℓ1-norm as norm on R^n, i.e. ||a_1 e_1 + . . . + a_n e_n|| = |a_1| + . . . + |a_n| for all a_1, . . . , a_n ∈ R.
In section 7 we prove the convergence result for matrices in PB-Frobenius form stated in Theorem 1.1, which is crucial for our extension of the classical theory for primitive substitutions to the much more general class of expanding substitutions in the next section. It turns out (see Proposition 3.5) that the class of PB-Frobenius matrices is precisely the class of matrices for which the blow-up technique known from primitive matrices can be extended naturally.
For practical purposes we formalize the condition that is used as assumption in Theorem 1.1: Definition-Remark 2.4. (1) An integer square matrix M is called expanding if none of the coordinate vectors e i is mapped by a positive power of M to itself or to 0.
(2) It is easy to see that this is equivalent to the condition that for any non-negative column vector v ≠ 0 the lengths of the iterates satisfy ||M^t v|| → ∞ for t → ∞.
(3) Let M be in PB-Frobenius form. The statement that "M is expanding" is equivalent to the requirement that no minimal diagonal matrix block M_{i,i} of M is PB. Here minimal refers to the partial order on blocks as defined in section 4. Thus "M_{i,i} is minimal" means that M v has non-zero coefficients only in the coordinates corresponding to M_{i,i} whenever the same assertion is true for v.

Dynamics of expanding substitutions
3.1. Basics of substitutions. A substitution ζ on a finite set A = {a_1, a_2, . . . , a_n} (called the alphabet) of letters a_i is given by associating to every a_i ∈ A a finite word ζ(a_i) in the alphabet A. This defines a map from A to A*, where A* denotes the free monoid over the alphabet A. The map ζ extends to a well defined monoid endomorphism ζ : A* → A*, which is usually denoted by the same symbol as the substitution. The combinatorial length of ζ(a_i), denoted by |ζ(a_i)|, is the number of letters in the word ζ(a_i). We call a substitution ζ expanding if there exists k ≥ 1 such that for every a_i ∈ A one has |ζ^k(a_i)| ≥ 2.
It follows directly that this is equivalent to stating that ζ is non-erasing, i.e. none of the ζ(a i ) is equal to the empty word, and that ζ doesn't act periodically on any subset of the generators.
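A small checker for this condition can be written directly from the definition; the helper below is our own sketch. Since a non-erasing substitution has non-decreasing iterate lengths, it suffices to test the exponent k = |A|: a letter whose image stays a single letter for |A| steps has entered a cycle, so ζ acts periodically on it.

```python
# Our own sketch of an "expanding" checker: zeta is expanding iff it is
# non-erasing and |zeta^k(a_i)| >= 2 for some k and every letter a_i.
# For non-erasing zeta, lengths never decrease, so testing k = |A|
# suffices (a letter stuck at length 1 for |A| steps lies on a cycle
# of single letters, i.e. zeta acts periodically on it).

def is_expanding(zeta):
    """zeta: dict mapping each letter to its image word (a string)."""
    if any(len(w) == 0 for w in zeta.values()):
        return False  # erasing
    for a in zeta:
        w = a
        for _ in range(len(zeta)):
            w = "".join(zeta[x] for x in w)
        if len(w) < 2:
            return False  # a lies on a cycle of single letters
    return True

print(is_expanding({"a": "ab", "b": "bb"}))  # reducible but expanding
print(is_expanding({"a": "ab", "b": "b"}))   # b is fixed: not expanding
print(is_expanding({"a": "b", "b": "a"}))    # acts periodically: not expanding
```

The substitution a → ab, b → bb used here is reducible (the letter a never occurs in any ζ^k(b)) but expanding, and will serve as a convenient running example.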
Let A^Z be the set of all biinfinite words . . . x_{−1} x_0 x_1 x_2 . . . in A, endowed with the product topology. It is equipped with the shift operator, which shifts the indices of any biinfinite word by −1, and is continuous.
Any substitution ζ defines a language L ζ ⊂ A * which consists of all words w ∈ A * that appear as a factor of ζ k (a i ) for some a i ∈ A and some k ≥ 0. Here factor means any finite subword of a word in A * or A Z , referring to the multiplication in the free monoid A * .
Furthermore, ζ defines a substitution subshift, i.e. a subshift Σ ζ ⊂ A Z which is the space of all biinfinite words in A which have the property that any finite factor belongs to L ζ .
A substitution ζ on A is called irreducible if for all 1 ≤ i, j ≤ n there exists k = k(i, j) ≥ 1 such that ζ^k(a_j) contains the letter a_i. It is called primitive if k can be chosen independently of i and j. A substitution is called reducible if it is not irreducible. Note that any irreducible substitution ζ (and hence any primitive ζ) is expanding, except if A = {a_1} and ζ(a_1) = a_1.
Given a substitution ζ : A → A*, there is an associated incidence matrix M_ζ defined as follows: The (i, j)-th entry of M_ζ is the number of occurrences of the letter a_i in the word ζ(a_j). Note that the matrix M_ζ is a non-negative integer square matrix. It is easy to verify that an expanding substitution ζ is irreducible (primitive) if and only if the matrix M_ζ is irreducible (primitive), as defined in section 2.
It also follows directly that M_{ζ^t} = (M_ζ)^t for any exponent t ∈ N. Furthermore, the incidence matrix M_ζ is expanding (see Definition-Remark 2.4) if and only if the substitution ζ is expanding. In particular, we obtain directly from Lemma 2.3: Lemma 3.1. Every expanding substitution ζ has a positive power ζ^t such that the incidence matrix M_{ζ^t} is in PB-Frobenius form and expanding.
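The definition of the incidence matrix, and the identity M_{ζ^t} = (M_ζ)^t, can be checked on a concrete example; the substitution below is our own illustration.

```python
import numpy as np

# Computing the incidence matrix M_zeta from the definition above: the
# (i, j)-entry counts occurrences of a_i in zeta(a_j).  We also verify
# the identity M_{zeta^t} = (M_zeta)^t on an example (our own sketch).

def incidence_matrix(zeta, letters):
    return np.array([[zeta[b].count(a) for b in letters] for a in letters])

def iterate(zeta, word, t):
    for _ in range(t):
        word = "".join(zeta[x] for x in word)
    return word

letters = ["a", "b"]
zeta = {"a": "ab", "b": "bb"}          # expanding, reducible
zeta3 = {x: iterate(zeta, x, 3) for x in letters}

M = incidence_matrix(zeta, letters)
M3 = incidence_matrix(zeta3, letters)
print(M, np.array_equal(M3, np.linalg.matrix_power(M, 3)))
```

Since ζ here is reducible, so is its incidence matrix: the (a, b)-entry of every power of M stays 0.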

3.2. Frequencies of letters.
For any letter a i ∈ A and any word w ∈ A * we denote the number of occurrences of the letter a i in the word w by |w| a i .
We observe directly from the definitions that the resulting occurrence vector v(w) := (|w|_{a_i})_{a_i ∈ A} satisfies:
(3.1) v(ζ(w)) = M_ζ · v(w)
The statement of the following lemma, for the special case of primitive substitutions, is a well known classical tool in symbolic dynamics (see [Que10, Proposition 5.8]). Lemma 3.2. Let ζ : A* → A* be an expanding substitution. Then, up to replacing ζ by a positive power, for any a ∈ A and any a_i ∈ A the limit frequency f_{a_i}(a) := lim_{t→∞} |ζ^t(a)|_{a_i} / |ζ^t(a)| exists. The resulting limit frequency vector v^∞(a) := (f_{a_i}(a))_{a_i ∈ A} is an eigenvector of the matrix M_ζ.
Proof. By Lemma 3.1 we can assume that, up to replacing ζ by a positive power, the incidence matrix M_ζ is in PB-Frobenius form and expanding. Thus, Theorem 1.1 applied to the occurrence vector v(a) gives the required result, where we note that ||M_ζ^t v(a)|| = ||v(ζ^t(a))|| = |ζ^t(a)| is a direct consequence of equality (3.1) and the definition of the norm in section 2.
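The convergence of letter frequencies for a reducible expanding substitution can be watched numerically; the example a → ab, b → bb below is our own illustration.

```python
import numpy as np

# Numerical sketch of the letter-frequency convergence for the
# reducible, expanding substitution a -> ab, b -> bb (our own example).
M = np.array([[1, 0],
              [1, 2]])   # incidence matrix of a -> ab, b -> bb

v = np.array([1.0, 0.0])  # occurrence vector of the word "a"
for _ in range(60):
    v = M @ v
    v = v / v.sum()       # frequencies sum to 1

# zeta^t(a) has length 2^t with a single "a", so the letter frequencies
# converge to (0, 1), an eigenvector of M for the eigenvalue 2.
residual = np.linalg.norm(M @ v - 2 * v, 1)
print(v, residual)
```

Starting instead from the occurrence vector of "b" gives the same limit here, but for other reducible substitutions the limit genuinely depends on the starting letter.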
Notice that, as for primitive substitutions, it follows that the sum of the coefficients of the limit frequency vector v^∞(a) is equal to 1. However, contrary to the primitive case, for a reducible substitution ζ the limit frequency vector v^∞(a) will in general depend on the choice of a ∈ A.
3.3. Frequencies of factors via the level n blow-up substitution. Recall from section 3.1 that for any substitution ζ we denote by L_ζ the subset of A* which consists of all factors of any iterate ζ^k(a_i), for any letter a_i ∈ A. We say that w is used by a_i if w appears as a factor in some ζ^k(a_i). We see from Lemma 3.2 that the frequencies of letters are encoded in the incidence matrix M_ζ; however, this matrix doesn't give us any information about the frequencies of factors. In order to understand the asymptotic behavior of frequencies of factors one has to appeal to a classical "blow-up" technique for the substitution (see for instance [Que10]). We now give a quick introduction to this blow-up technique, which will be crucially used below.
Let n ≥ 2, and denote by A_n = A_n(ζ) the set of all words in L_ζ of length n. We consider A_n as the new alphabet, and define a substitution ζ_n on A_n as follows: For w = a_1 a_2 . . . a_n ∈ A_n, consider the word ζ(a_1 a_2 . . . a_n) = x_1 x_2 . . . x_{|ζ(a_1)|} x_{|ζ(a_1)|+1} . . . x_{|ζ(w)|}, and set ζ_n(w) := (x_1 . . . x_n)(x_2 . . . x_{n+1}) . . . (x_{|ζ(a_1)|} . . . x_{|ζ(a_1)|+n−1}).
That is, ζ_n(w) is the ordered list of the first |ζ(a_1)| factors of length n of the word ζ(w). As before, ζ_n extends to A_n* and A_n^Z by concatenation. Here a word w' ∈ A_n* of length k is an ordered list of k words of length n in A*, namely w' = w_1 w_2 . . . w_k with |w_i| = n for all i = 1, . . . , k. We call ζ_n the level n blow-up substitution for ζ. From this definition it follows directly that (ζ_n)^t = (ζ^t)_n, hence we will omit the parentheses. Observe that for w = a_1 a_2 . . . a_n ∈ A_n we have |ζ_n(w)| = |ζ(a_1)|, from which it follows that for an expanding substitution ζ the blow-up substitution ζ_n is expanding, for any n ≥ 2.
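The level n blow-up can be sketched in a few lines of code, directly from the definition above (words of A_n are modelled as strings, words of A_n* as lists of such strings; the substitution is our running example, not one from the text).

```python
# A sketch of the level-n blow-up: for w = a_1 ... a_n in A_n, the
# image zeta_n(w) is the ordered list of the first |zeta(a_1)|
# length-n factors of zeta(w).

def blow_up(zeta, n):
    def zeta_n(w):                       # w: a length-n string in A_n
        image = "".join(zeta[x] for x in w)
        k = len(zeta[w[0]])              # keep |zeta(a_1)| factors
        return [image[i:i + n] for i in range(k)]
    return zeta_n

zeta = {"a": "ab", "b": "bb"}
zeta2 = blow_up(zeta, 2)
print(zeta2("ab"))   # zeta(ab) = "abbb"; its first |zeta(a)| = 2 factors
print(zeta2("bb"))
```

Note that |ζ_n(w)| = |ζ(a_1)| holds by construction: the list returned has exactly |ζ(a_1)| entries.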
One of the classical tools used to understand primitive substitutions and their invariant measures is the following: If the substitution ζ is primitive, then for any n ≥ 1 the incidence matrix M_{ζ_n} of the level n blow-up substitution ζ_n is again primitive.
We show that the analogue is true for expanding substitutions with possibly reducible incidence matrices: Proposition 3.5. Let ζ be an expanding substitution whose incidence matrix M_ζ is in PB-Frobenius form. Then for any n ≥ 1 the incidence matrix M_{ζ_n} of the level n blow-up substitution ζ_n is again in PB-Frobenius form.
The proof of this proposition, which is one of the main results of this paper, requires several lemmas; we assemble all of them in the next subsection.
3.4. The proof of Proposition 3.5. Let ζ : A* → A* be a substitution as before, and let A' ⊂ A be a ζ-invariant subalphabet, i.e. we assume that ζ(a') ∈ A'* for any a' ∈ A', where we identify the free monoid A'* with the submonoid of A* that is generated by the letters from A'.
For most applications one may choose A' to be a maximal proper ζ-invariant subalphabet of A, although formally we don't need this assumption. The terminology below comes from thinking of A \ A' as representing the "top stratum" for the reducible substitution ζ.
For any n ≥ 2 and for the level n blow-up substitution ζ_n : A_n → A_n* we consider the subalphabet A'_n ⊂ A_n which is given by all words w = x_1 . . . x_n with x_i ∈ A' that are used by some a_i ∈ A'. From the ζ-invariance of A' it follows directly that A'_n is ζ_n-invariant.
We now partition the letters w of A_n \ A'_n, i.e. w = x_1 . . . x_n is a word of length n which is used by some a_i ∈ A \ A' but not by any a_i ∈ A', into two classes: w is called top-used if its first letter x_1 lies in A \ A', and top-transition if x_1 ∈ A'.
Remark 3.6. From the definition of the map ζ_n and from the ζ-invariance of A' it follows directly that the top-transition words together with A'_n constitute a ζ_n-invariant subalphabet of A_n. Indeed, recall that for any w = x_1 . . . x_n ∈ A_n the image ζ_n^t(w) is a word w_1 w_2 . . . w_r in A_n, with r = r(t) = |ζ^t(x_1)|, such that w_k is the prefix of length n of the word obtained from ζ^t(w) by deleting the first k − 1 letters. Thus it follows that the first r − (n − 1) of the words w_k are factors of ζ^t(x_1), and that the last n − 1 of the words w_k have at least their first letter in ζ^t(x_1). Hence, if x_1 ∈ A', then the first r − (n − 1) of the words w_k belong to A'_n, and the last n − 1 words w_k either belong to A'_n or are top-transition.
We now consider the incidence matrices M_ζ and M_{ζ_n}: From the ζ-invariance of A' it follows that after properly reordering the letters of A the matrix M_ζ is a 2 × 2 lower triangular block matrix, with the incidence matrix of ζ|_{A'} as lower diagonal block. Similarly, M_{ζ_n} is a 3 × 3 lower triangular block matrix, with the incidence matrix of ζ_n|_{A'_n} as bottom diagonal block. The top-used letters form the top diagonal block, and the top-transition letters form the middle diagonal block.
The arguments given below also work in the special case where A' is empty; in this case the bottom diagonal block of M_ζ and the two bottom diagonal blocks of M_{ζ_n} have size 0 × 0, so that both M_ζ and M_{ζ_n} de facto consist of a single matrix block.
Lemma 3.7. The middle diagonal block of M_{ζ_n} as defined above is power bounded.
Proof. Using the same terminology as in Remark 3.6, we recall that for w = x_1 . . . x_n ∈ A_n with x_1 ∈ A' and ζ_n^t(w) = w_1 w_2 . . . w_{|ζ^t(x_1)|}, only the last n − 1 words w_k may possibly lie in A_n \ A'_n, but their first letter always belongs to A'. This shows that any coefficient in the middle diagonal block of M_{ζ_n^t} is bounded above by n, independently of t ≥ 1.
Lemma 3.8. If the top diagonal block of M_ζ is power bounded, then so is the top diagonal block of M_{ζ_n}.
Proof. From the hypothesis that the top diagonal block of M_ζ is power bounded we obtain a constant K ∈ N such that for any letter a_i ∈ A \ A' and any t ≥ 0 the number of letters x_i of the word ζ^t(a_i) that do not belong to A' is bounded above by K. But then it follows directly that there can be at most K top-used letters y_1 . . . y_n from A_n \ A'_n in any of the ζ_n^t(w) with w = x_1 . . . x_n ∈ A_n \ A'_n top-used, since any such y_1 . . . y_n must have its initial letter y_1 in ζ^t(x_1), and y_1 must belong to A \ A'.
Remark 3.9. From the definition of "top-used" and from the finiteness of A_n it follows that there is an exponent T ≥ 0 such that for any word u ∈ A_n \ A'_n (and hence in particular for any top-used u) there is a letter a_i ∈ A \ A' such that u is a factor of the word ζ^t(a_i) for some positive integer t ≤ T.
Lemma 3.10. If the top diagonal block of M_ζ is primitive, then so is the top diagonal block of M_{ζ_n}.
Proof. It suffices to show that there is an integer t_0 ≥ 0 such that for any two top-used words w = x_1 . . . x_n and w' of A_n the word w' is a factor of the prefix of length |ζ^{t_0}(x_1)| of ζ^{t_0}(w). From the assumption that the top diagonal block of M_ζ is primitive we know that there is an exponent t_1 ≥ 0 such that for any two letters a and a' of A \ A' the word ζ^t(a') contains the letter a as a factor, for any integer t ≥ t_1. From the observation stated in Remark 3.9 we deduce that there is an exponent t_2 ≥ 0 such that w' is a factor of ζ^s(a') for some letter a' of A \ A' and some positive integer s ≤ t_2. Since t_1 + t_2 − s ≥ t_1, the word ζ^{t_1 + t_2 − s}(x_1) contains the letter a' as a factor, so that w' is a factor of ζ^{t_1 + t_2}(x_1). This shows the claim, for t_0 = t_1 + t_2.
We now obtain, as a direct consequence of the above lemmas:
Proof of Proposition 3.5. The claim that the incidence matrix M_{ζ_n} is in PB-Frobenius form follows from an easy inductive argument over the number of blocks in the PB-Frobenius form of M_ζ: At each induction step the top left diagonal block of M_ζ is either primitive or power bounded, and all other blocks are assembled together in an invariant subalphabet A' of the given alphabet A. Then M_{ζ_n} is considered, as above, as a 3 × 3 lower triangular block matrix. For the two upper diagonal blocks the claim follows directly from the above lemmas. The bottom diagonal block is equal to the incidence matrix of ζ_n|_{A'_n}, which is equal to the incidence matrix of (ζ|_{A'})_n. But for ζ|_{A'} the claim can be assumed to be true via the induction hypothesis.
3.5. Level n limit frequencies. We can now state the analogue of Lemma 3.2 for words w of length n ≥ 2 instead of letters a_i ∈ A. As done there for n = 1, we can use all words w from the alphabet A_n = A_n(ζ) as "coordinates" and consider, for any word w' ∈ A_n*, the level n occurrence vector v_n(w') := (|w'|_w)_{w ∈ A_n}. Again we obtain:
Proposition 3.11. Let ζ : A → A* be an expanding substitution. Then, up to replacing ζ by a power, the frequencies of factors converge: For any word w ∈ A* of length |w| ≥ 2 and any letter a ∈ A the limit frequency f_w(a) := lim_{t→∞} |ζ^t(a)|_w / |ζ^t(a)| exists.
Proof. Set n = |w|. If w does not belong to A_n, then |ζ^t(a)|_w = 0 for all t ∈ N, so that we can assume w ∈ A_n. By Lemma 3.1 we can assume that, up to replacing ζ by a positive power, the incidence matrix M_ζ is in PB-Frobenius form. Thus we can apply Proposition 3.5 to obtain that the blow-up incidence matrix M_{ζ_n} is also in PB-Frobenius form. Furthermore, if ζ is expanding, then so is ζ_n, and hence M_{ζ_n}.
From the definition of ζ_n we obtain the following estimate: For any w ∈ A_n and any w' ∈ A_n whose first letter is a, the occurrence numbers satisfy | |ζ_n^t(w')|_w − |ζ^t(a)|_w | ≤ n − 1, while |ζ_n^t(w')| = |ζ^t(a)|. Now, let w' ∈ A_n be a word of length n that starts with the letter a. As in the proof of Lemma 3.2 we can thus use Theorem 1.1, which applied to the level n occurrence vector v_n(w') gives that lim_{t→∞} |ζ_n^t(w')|_w / |ζ_n^t(w')| exists; together with the above estimate this limit equals f_w(a) = lim_{t→∞} |ζ^t(a)|_w / |ζ^t(a)|.
Similar to the case n = 1 in Lemma 3.2 it follows that the sum of the coefficients of the limit frequency vector v_n^∞(a) is equal to 1. Again, for an expanding reducible substitution ζ the limit frequency vector v_n^∞(a) will in general depend on the choice of a ∈ A.
3.6. Invariant measures for expanding substitutions. Recall from section 3.1 that the subshift Σ_ζ associated to a substitution ζ is the space of all biinfinite words which have the property that any finite factor belongs to L_ζ. Any word w ∈ A* defines a cylinder Cyl_w ⊂ A^Z, consisting of all biinfinite words x = . . . x_{−1} x_0 x_1 . . . with x_1 . . . x_{|w|} = w. In the classical case where ζ is primitive, it is well known that the subshift Σ_ζ defined by ζ is uniquely ergodic. In this case the limit frequency f_w(a) obtained in Proposition 3.11 is typically used to describe the value that the invariant probability measure µ_ζ takes on the cylinder Cyl_w defined by any w ∈ L_ζ ⊂ A* (see section 5.4.2 of [Que10]).
In the situation treated in this paper, where ζ is only assumed to be expanding (so that M ζ may well be reducible), there is no such hope for a similar unique ergodicity result. However, the definition of invariant measures on Σ ζ , through limit frequencies as known from the primitive case, extends naturally via the results of this paper to any expanding reducible substitution ζ. We will use the remainder of this subsection to elaborate this, and to comment on some related developments.
Every shift-invariant measure µ on Σ_ζ defines a function ω_µ : A* → R_{≥0} by setting ω_µ(w) := µ(Cyl_w) if w belongs to L_ζ, and ω_µ(w) := 0 otherwise. Conversely, it is well known (see for instance [FM10]) that any function ω : A* → R_{≥0} is defined by an invariant measure µ on the full shift A^Z if and only if ω is a weight function, i.e. ω satisfies the Kirchhoff conditions spelled out in Definition 3.12 below. In this case ω determines µ, i.e. there is a unique invariant measure µ on A^Z that satisfies ω = ω_µ. Furthermore, the support of µ is contained in Σ_ζ ⊂ A^Z if and only if ω(w) = 0 for all w ∈ A* \ L_ζ.
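The Kirchhoff conditions can be observed numerically on the counting functions of a long iterate; the check below is our own sketch, using the reducible expanding substitution a → ab, b → bb as example. On a finite word the counts satisfy the conditions only up to boundary terms of size 1, which vanish after normalization in the limit.

```python
# A numerical sketch of the Kirchhoff conditions, for every word w:
#     sum_a omega(aw) = omega(w) = sum_a omega(wa).
# On a finite iterate zeta^t(a) the occurrence counts satisfy these
# equalities up to +-1 (prefix/suffix occurrences), so the normalized
# frequencies satisfy them exactly in the limit.

def count(w, text):
    return sum(text[i:i + len(w)] == w for i in range(len(text) - len(w) + 1))

zeta = {"a": "ab", "b": "bb"}
text = "a"
for _ in range(12):
    text = "".join(zeta[x] for x in text)   # |text| = 2^12

letters = "ab"
for w in ["a", "b", "ab", "bb"]:
    c = count(w, text)
    left = sum(count(x + w, text) for x in letters)
    right = sum(count(w + x, text) for x in letters)
    assert abs(left - c) <= 1 and abs(right - c) <= 1
print("Kirchhoff conditions hold up to boundary terms")
```

This is exactly the mechanism used in the proof of Proposition 3.13 below: each equality holds up to an additive constant ±1, which disappears after dividing by |ζ^t(a)| → ∞.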
Definition 3.12. A function ω : A* → R_{≥0} is called a weight function if it satisfies the following Kirchhoff conditions for every w ∈ A*: ω(w) = Σ_{a_i ∈ A} ω(a_i w) and ω(w) = Σ_{a_i ∈ A} ω(w a_i).
Proposition 3.13. Let ζ : A → A* be an expanding substitution, raised to a suitable power according to Proposition 3.11. Then for any letter a ∈ A the function ω_a : A* → R_{≥0} given by the limit frequencies f_w(a) from Proposition 3.11 satisfies the Kirchhoff conditions.
Proof. We consider ζ^t(a) as in Proposition 3.11 and observe that any occurrence of a word w as factor in ζ^t(a), unless it is a prefix, together with its preceding letter a_i in ζ^t(a) gives an occurrence of the factor a_i w, and conversely. The analogous statement holds for factors w a_i. Hence for every w ∈ A* each of the two equalities in Definition 3.12, for ω(w) := |ζ^t(a)|_w, either holds directly, or else it holds up to an additive constant ±1. Since ζ is expanding we have |ζ^t(a)| → ∞, so the Kirchhoff conditions must hold for the limit quotient function ω_a(w) = f_w(a) = lim_{t→∞} |ζ^t(a)|_w / |ζ^t(a)|.
Remark 3.14. Since for any a ∈ A and any w ∉ L_ζ the limit frequencies satisfy f_w(a) = 0, we obtain directly from Proposition 3.13 that the weight function ω_a defines an invariant measure µ_a on Σ_ζ. This proves Corollary 1.3 from the Introduction.
From the definition via limit frequencies it follows immediately that each µ_a is a probability measure, i.e. µ_a(Σ_ζ) = 1. Contrary to the primitive case, for an expanding substitution ζ distinct letters a_i of A may well define distinct measures µ_{a_i} on Σ_ζ. However, as in the primitive case, distinct a_i ∈ A may also define the same measure µ_{a_i}. This raises several natural questions: Question 3.15. Let ζ be an expanding substitution as before.
(1) What is the precise condition on letters a, a' ∈ A such that they define the same measure µ_a = µ_{a'} on Σ_ζ? (2) Are there invariant measures on Σ_ζ that are not contained in the convex cone C_ζ, by which we denote the set of all non-negative linear combinations of the µ_a? (3) Which of the measures in C_ζ have the property that, in addition to being invariant under the shift operator, they are also projectively invariant under application of the substitution ζ? By this we mean that there exists some scalar λ > 0 such that the image measure ζ_*(µ) on Σ_ζ satisfies ζ_*(µ)(X) = λ µ(X) for any measurable subset X ⊂ Σ_ζ.
Attempting seriously to find answers to these questions with the methods laid out here goes beyond the scope of this paper. We limit ourselves to the following: Remark 3.16. Our analysis of the eigenvectors of non-negative matrices in PB-Frobenius form in §7, when combined with the technique presented in §3.4 above to understand simultaneous eigenvectors for all blow-up level incidence matrices, seems to have the potential to show that the convex cone C_ζ is spanned by invariant measures that are determined by the principal eigenvectors (see §7) of the "level 1" incidence matrix M_ζ. In particular - regarding Question 3.15 (1) - it seems feasible that µ_a = µ_{a'} if and only if a and a' define coordinate vectors e_a and e_{a'} which converge (up to normalization) to the same eigenvector of M_ζ.
Remark 3.17. In the special case where the substitution ζ, reinterpreted as a "positive" endomorphism of the free group F(A) with basis A, is invertible with no periodic non-trivial conjugacy classes in F(A), a negative answer to Question 3.15 (2) follows from the main result of our paper [LU15], which was our original motivation for the work presented here.
Reducible substitutions as a whole, and Question 3.15 (2) in particular, have already been treated in much more generality in the literature, in the work of Bezuglyi-Kwiatkowski-Medynets-Solomyak, see [BKMS10] and the papers cited there. A more restricted class of substitutions had been treated previously by Hama-Yuasa, see [HY11]. In particular, the following should be noted: Remark 3.18. It is shown in [BKMS10] for expanding substitutions ζ with a mild extra restriction that the ergodic invariant probability measures on the subshift Σ_ζ are in 1-1 correspondence with the normalized (extremal) distinguished eigenvectors (see Remark 7.3) of the incidence matrix M_ζ (or perhaps rather, of the incidence matrix of a conjugate substitution defined there).
However, a direct translation of the results of [BKMS10], which is based on Bratteli diagrams and Vershik maps, to the framework of the work presented here seems to be non-evident.
Also in this context, in particular with respect to Question 3.15 (3) above, we note: Remark 3.19. In the recent preprint [BHL15] a conceptually new machinery (called "train track towers" and "weight towers") for subshifts in general has been developed, and applied as special case to reducible substitutions ζ as considered here. As a main result a bijection has been established there between the non-negative eigenvectors of M ζ and the "invariant" measures on Σ ζ . Although limit frequencies are not treated in [BHL15], it can be seen via weight functions that this bijection is the same as the one indicated in Remark 3.16 above.
However, a crucial difference to the work presented here is that in [BHL15] "invariant" means not just shift-invariance but also projective invariance with respect to the map on measures induced by the substitution ζ.

Growth types and normalization functions.
For two functions f, g : N → R_{>0} we say that f and g have the same growth type if the quotients converge to a positive constant, i.e. lim_{t→∞} f(t)/g(t) = C for some C > 0. We say that the growth type of g is strictly bigger than that of f if lim_{t→∞} f(t)/g(t) = 0. Given an infinite family of vectors U = (u_t)_{t∈N} in R^n, a function h : N → R_{>0} is called a normalization function for U if the family of values u_t / h(t) converges to some non-zero vector v_U ∈ R^n.
It follows directly that any two normalization functions h and h' for U must be of the same growth type, and that, conversely, any other function h' : N → R which is of the same growth type can be used as a normalization function for U: the family of values u_t / h'(t) converges to some non-zero vector in R^n, and the latter must be a positive scalar multiple of the above limit vector v_U.
The following is a direct consequence of the definitions: Lemma 4.2. Let U = (u_t)_{t∈N} and U' = (u'_t)_{t∈N} be two infinite families of vectors in R^n, and define U + U' = (u_t + u'_t)_{t∈N}. Let h : N → R and h' : N → R be normalization functions for U and U' respectively.
(1) If the growth type of h is strictly bigger than that of h', then h is also a normalization function for U + U'. Similarly, if the growth type of h' is strictly bigger than that of h, then h' is a normalization function for U + U'.
(2) If h and h' have the same growth type, then both h and h' are normalization functions for U + U'.

Lower triangular block matrices.
Let M be a non-negative integer square matrix. Assume that the rows (and correspondingly the columns) of M are partitioned into blocks B_i so that M is a lower triangular block matrix with square diagonal matrix blocks. We now define a relation on the set of blocks as follows: We write B_i ≻ B_j if and only if B_i ≠ B_j and there exists a non-negative vector v which has non-zero coefficients only in the block B_i, such that for some t ≥ 1 the vector M^t v has a non-zero coefficient in the block B_j. This is equivalent to stating that for some t ≥ 1, in the matrix M^t the off-diagonal matrix block in the i-th block column and the j-th block row has at least one positive entry.
For any block B_i we define the dependency block union C(B_i) to be the union of all blocks B_j with B_i ≺ B_j.
Observe that, if every diagonal block of M is either irreducible or a (1 × 1)-matrix, this relation ≺ defines a partial order on the blocks.

Let us denote by C_n the non-negative cone in R^n with respect to the fixed "standard basis" e_1, …, e_n. For any block B_i we define the associated cone, again denoted B_i, as the set of all non-negative column vectors in C_n that have non-zero entries only in the block B_i, i.e. all non-negative linear combinations of those e_i that "belong" to B_i.
A block cone C is a subcone of C n which has the property that each cone B i is either "contained or disjoint", i.e. one has either B i ⊂ C or B i ∩ C = { 0}. Unless otherwise stated, we are only interested in block cones C that are invariant under the action of M , i.e. M v ∈ C for any v ∈ C. This is equivalent to stating that for any block B with B ⊂ C the block cone C(B) (called the dependency block cone) associated to the dependency block union C(B) is contained in C.
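Computationally, the relation ≺ and the dependency block unions C(B_i) can be read off from a reachability search in the directed graph underlying M. The following Python sketch (matrix and block partition are made up for illustration) implements the definition above: B_i ≺ B_j holds iff some coordinate of B_j is reachable, by a path of length ≥ 1, from some coordinate of B_i along edges q → p with M[p][q] > 0.

```python
# Sketch (not from the paper) of computing the relation B_i -< B_j and the
# dependency block unions C(B_i) via graph reachability.

def block_relation(M, blocks):
    n = len(M)
    # succ[q] = coordinates p that M maps q onto with positive weight
    succ = [[p for p in range(n) if M[p][q] > 0] for q in range(n)]

    def reachable(start):
        seen, stack = set(), list(start)
        while stack:                       # depth-first search, paths of length >= 1
            q = stack.pop()
            for p in succ[q]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    rel = {}
    for i, Bi in enumerate(blocks):
        hit = reachable(Bi)                # coordinates hit by some M^t, t >= 1
        rel[i] = {j for j, Bj in enumerate(blocks)
                  if j != i and hit & set(Bj)}   # C(B_i) as a set of block indices
    return rel

# Hypothetical 3-block lower triangular example:
M = [[1, 0, 0],
     [1, 2, 0],
     [0, 1, 1]]
blocks = [[0], [1], [2]]
print(block_relation(M, blocks))   # {0: {1, 2}, 1: {2}, 2: set()}
```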

Primitive Frobenius Form.
Let M be a non-negative integer square matrix as considered above, and assume that M is partitioned into matrix blocks so that M is a lower triangular block matrix, and along the diagonal all matrix blocks are squares.

Definition 4.3. (1) The matrix M is said to be in primitive Frobenius form if every diagonal matrix block is either primitive or a (1 × 1)-zero matrix.
(2) For every block B_i we refer to the Perron-Frobenius eigenvalue λ_i of the corresponding diagonal block of M as the PF-eigenvalue of the block B_i. This includes (for the special case of a (1 × 1)-zero block B_i) the possible value λ_i = 0.
(3) For every block B_i with primitive diagonal block we denote by v_i^PF ∈ B_i the extended PF-eigenvector of B_i, i.e. the PF-eigenvector of the corresponding diagonal block of M, extended by zeros in all coordinates outside of B_i.

For any matrix M in primitive Frobenius form we define the growth type associated to any of its blocks B_i as follows: Among the blocks B_j with B_j = B_i or B_i ≺ B_j, we consider the maximal PF-eigenvalue λ_max(B_i) := max{λ_j | B_j = B_i or B_i ≺ B_j}, and the longest (or rather: "a longest") chain of blocks B_{j_0} ≺ B_{j_1} ≺ … ≺ B_{j_d} among them with λ_{j_0} = … = λ_{j_d} = λ_max(B_i); we then define h_i : t ↦ λ_max(B_i)^t · t^d as the growth type function of the block B_i. Similarly, we define the growth type function h_C : N → R of any union of blocks C (or of the associated block cone C) as the maximal growth type function h_j of any B_j which belongs to C.
Definition 4.5 (Dominant Interior). Let C be the block cone associated to any union C of blocks. Define the dominant interior of C as follows: Pick some longest chain of blocks B_{i_k} ≺ B_{i_{k−1}} ≺ … ≺ B_{i_1} as above, i.e. all B_{i_j} have PF-eigenvalue λ_{i_k} = λ_{i_{k−1}} = … = λ_{i_1} = λ_max(C) (in other words: each block B_{i_j} is part of a "realization" of the growth type function h_C).
Let v ∈ C be a vector whose coordinates, for all vectors e_i of the standard basis that belong to one of the blocks B_{i_j}, are non-zero. The dominant interior of C consists of all such vectors v, for any longest chain of blocks as above, which may of course vary with the choice of v.

If B_i ≺ B_j, then by definition of ≺ there is an integer k = k(i, j) such that the power M^k has in its off-diagonal block M^k_{j,i} some positive coefficient a_{p,q}. If both M_{i,i} and M_{j,j} are primitive non-zero, choose s so that M^s_{i,i} and M^s_{j,j} are positive; it follows that for M^{k+2s} the same off-diagonal block is positive, and this is also true for any exponent t ≥ k + 2s.
If B_i ≺ B_j and M_{i,i} is primitive non-zero but M_{j,j} is zero, we deduce from the above positive coefficient a_{p,q} of M^k_{j,i} that for M^{k+s} (with s large enough that M^s_{i,i} is positive) all coefficients in the p-th row of the block M^{k+s}_{j,i} must be positive. We now use the fact that the diagonal zero matrix M_{j,j} must be a (1 × 1)-matrix, so that M^{k+s}_{j,i} consists of a single row, which is thus positive throughout. The same argument holds for any t = k + s′ with s′ ≥ s.
If B_i ≺ B_j and M_{j,j} is primitive non-zero but M_{i,i} is zero, we deduce from the above positive coefficient a_{p,q} of M^k_{j,i} that for M^{k+s} (with s large enough that M^s_{j,j} is positive) all coefficients in the q-th column of the block M^{k+s}_{j,i} must be positive. We now use the fact that the diagonal zero matrix M_{i,i} must be a (1 × 1)-matrix, so that M^{k+s}_{j,i} consists of a single column, which is thus positive throughout. The same argument holds for any t = k + s′ with s′ ≥ s.
If B_i ≺ B_j and both M_{i,i} and M_{j,j} are zero matrices, then M^2 has the zero matrix as its (j, i)-th block, and the same is true for all powers M^t with t ≥ 2.
Finally, if B_i ≺ B_j does not hold, then by definition of ≺ the (j, i)-th block of any positive power of M is the zero matrix.
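The first case of this analysis can be observed on a small made-up example: for a lower triangular block matrix with two primitive 2 × 2 diagonal blocks and B_1 ≺ B_2, the off-diagonal block of M^t becomes strictly positive already for moderate t.

```python
# Sketch illustrating the case analysis above (the example matrix is made up):
# if B_1 -< B_2 and both diagonal blocks are primitive, the (2,1) block of
# M^t is strictly positive for all sufficiently large t.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def power(M, t):
    P = M
    for _ in range(t - 1):
        P = matmul(P, M)
    return P

# Two primitive 2x2 diagonal blocks, one connecting entry (lower triangular):
M = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 1]]
P = power(M, 6)
lower_block = [row[0:2] for row in P[2:4]]   # the (2,1) block of M^6
print(all(x > 0 for row in lower_block for x in row))  # True
```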
Lemma 4.7. Let M be as above, in primitive Frobenius form and without zero columns, let B_i be a block with dependency block union C_i := C(B_i), and for a non-zero vector v ∈ B_i write M^t v = v*_t + u*_t with v*_t ∈ B_i and u*_t ∈ C_i. Then there is a bound t_0 ∈ N depending only on M such that for every t ≥ t_0 the vector u*_t is contained in the dominant interior of C_i.
Proof. Let t_0 be as in Lemma 4.6. Then v*_t + u*_t = M^t v has positive coordinates in all blocks B_j of C_i for which M has a primitive non-zero diagonal block M_{j,j}. Since M has no zero columns, the maximal eigenvalue for the blocks in C_i must be strictly bigger than 0. Thus the dominant interior of C_i is defined through chains of blocks which are primitive non-zero. Hence u*_t is contained in the dominant interior.

4.4. An example. Before proceeding with the proof of the main theorem, we discuss an example explaining the above concepts. Let M be the following matrix: We have the following relations: Hence, with the above definitions, B_1 has growth type t·(2 + √2)^t, B_2 has t·((√5+3)/2)^t, B_3 has (2 + √2)^t, and B_4 has ((√5+3)/2)^t. The dependency block unions are given by C(B_1) = B_2 ∪ B_3 ∪ B_4, C(B_2) = C(B_3) = B_4, and C(B_4) = ∅.
The dominant interiors are given accordingly (where X̊ denotes the interior of a space X). There is one more M-invariant block cone, given by C = B_2 + B_3 + B_4. Its dominant interior consists of the vectors in C whose B_3-coordinates are all non-zero, since among the blocks of C the PF-eigenvalue λ_3 = 2 + √2 is maximal and is realized by the chain consisting of B_3 alone.
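Since the example matrix itself is not reproduced above, the following Python sketch uses a hypothetical 8 × 8 matrix with the same block data: four 2 × 2 diagonal blocks whose PF-eigenvalues are 2 + √2 (blocks B_1, B_3) and (√5+3)/2 (blocks B_2, B_4), with B_1 ≺ B_3 realizing the chain of maximal eigenvalue. It illustrates the growth type t·(2 + √2)^t of the block B_1.

```python
from math import sqrt

# Hypothetical matrix in primitive Frobenius form, built so that its diagonal
# blocks have the PF-eigenvalues of the example (the paper's own example
# matrix is not reproduced here).
M = [
    [2, 1, 0, 0, 0, 0, 0, 0],   # B_1, PF-eigenvalue 2 + sqrt(2)
    [2, 2, 0, 0, 0, 0, 0, 0],
    [1, 0, 2, 1, 0, 0, 0, 0],   # B_2, PF-eigenvalue (3 + sqrt(5))/2, fed by B_1
    [0, 0, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 2, 1, 0, 0],   # B_3, PF-eigenvalue 2 + sqrt(2), fed by B_1
    [0, 0, 0, 0, 2, 2, 0, 0],
    [0, 0, 1, 0, 1, 0, 2, 1],   # B_4, PF-eigenvalue (3 + sqrt(5))/2, fed by B_2, B_3
    [0, 0, 0, 0, 0, 0, 1, 1],
]

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

v = [1.0] + [0.0] * 7           # vector supported in the block B_1
for t in range(1, 201):
    w = mat_vec(M, v)
    norm = sum(w)               # l1-norm on the non-negative cone
    v = [x / norm for x in w]

# the one-step growth factor approaches (1 + 1/t) * (2 + sqrt(2)),
# reflecting the growth type t * (2 + sqrt(2))^t of the block B_1
print(norm, 2 + sqrt(2))
```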

Convergence for primitive Frobenius matrices
The goal of this and the following section is to give a complete proof of the following result. For related statements the reader is directed to the work of H. Schneider [Sch86] and the references given there.
Theorem 5.1. Let M be a non-negative integer square matrix which is in primitive Frobenius form as given in Definition 4.3. Assume that M has no zero columns. Then for any non-negative vector v ≠ 0 there exists a normalization function h_v such that the sequence (1/h_v(t)) M^t v converges to a non-negative eigenvector of M.

This result is proved by induction, and the induction step has some interesting features in itself, so that we pose it here as an independent statement. But first we state a property which will be used repeatedly below:

Definition 5.2. Let M be as in Theorem 5.1, and let C be a union of matrix blocks such that the associated block cone C ⊂ C_n is M-invariant, with growth type function h_C(t) = λ_*^t · t^{d_*} for some value λ_* ≥ 1. We say that C satisfies the convergence condition CC(C) if for every vector u ∈ C the sequence (1/h_C(t)) M^t u converges to a vector u_∞ which is either an eigenvector u_∞ ∈ C of M, or else satisfies u_∞ = 0. We require furthermore that u_∞ ≠ 0 if u is contained in the dominant interior of C (as defined above in Definition 4.5).
Remark 5.3. (a) For u_∞ as in Definition 5.2 the condition u_∞ ≠ 0 implies directly that u_∞ is an eigenvector of M. (b) Its eigenvalue is always equal to λ_*: since lim_{t→∞} h_C(t+1)/h_C(t) = λ_*, one has M u_∞ = lim_{t→∞} (h_C(t+1)/h_C(t)) · (1/h_C(t+1)) M^{t+1} u = λ_* u_∞.

Proposition 5.4. Let M be a non-negative integer square matrix which is in primitive Frobenius form, with no zero columns. Let B be any block of the associated block decomposition, and let C := C(B) be the corresponding dependency block union (see §4.1). Let B and C be the block cones associated to B and C respectively. Let λ ≥ 0 and λ_u ≥ 1 be the maximal PF-eigenvalues of B and C respectively, and let h : t ↦ λ_*^t · t^d (for λ_* = max{λ, λ_u}) and h_u : t ↦ λ_u^t · t^{d_u} be the growth type functions for B and C respectively (see §4.3).
Assume that C satisfies the above convergence condition CC(C). Then for every vector 0 ≠ v_0 ∈ B the sequence (1/h(t)) M^t v_0 converges to an eigenvector w_∞ of M which satisfies:
(1) If λ > λ_u, then w_∞ = λ(v_0) · (v_∞ + w_0), where v_∞ is the extended PF-eigenvector (see Definition 4.3 (3)) of the primitive diagonal block of M corresponding to B, the vector w_0 ∈ C is entirely determined by v_∞, and λ(v_0) ∈ R_{>0} depends on v_0.
(2) If λ = λ_u, then w_∞ = λ(v_0) u_∞, where u_∞ ≠ 0 is an eigenvector of M contained in C that depends only on the above extended PF-eigenvector v_∞, and λ(v_0) ∈ R_{>0} depends on v_0.
(3) If λ < λ_u, then w_∞ ≠ 0 is an eigenvector of M contained in C that may well depend on the choice of v_0.
Before proving Proposition 5.4 in section 6, we first show how to derive Theorem 5.1 from Proposition 5.4. We first show that Proposition 5.4 also implies the following:

Lemma 5.5. Assume that B and C as well as B and C are as in Proposition 5.4. Then we have:
(1) The cone B + C associated to the block union B ∪ C satisfies the convergence condition CC(B + C).
(2) Assume that C is contained in a larger block cone C′ with growth type function h′, and assume that C′ satisfies the convergence condition CC(C′). Then the cone B + C′ also satisfies the convergence condition CC(B + C′).

Proof.
(1) If B belongs to the blocks of B ∪ C that determine the dominant interior of B + C, then the eigenvalue of the PF-eigenvector of B satisfies λ ≥ λ_u ≥ 1, and is maximal among all PF-eigenvalues for blocks in B ∪ C. We note that case (3) of Proposition 5.4 is excluded by the inequality λ ≥ λ_u, and that in cases (1) and (2) of Proposition 5.4 the limit vector w_∞ is non-zero, which gives our claim.

If B does not belong to the blocks of B ∪ C that determine the dominant interior, then we have λ_u > λ, so that we are in case (3) of Proposition 5.4. In this case, however, any vector in the dominant interior of B + C must also belong to the dominant interior of C. The growth type function for B ∪ C is given by h = h_u, and hence the claim follows from our assumption CC(C).
(2) Similar to the situation considered above in the proof of (1), if B does not belong to the blocks that determine the dominant interior of B + C′, then any vector in the dominant interior of B + C′ must also belong to the dominant interior of C′, and the growth type function for B + C′ is equal to that for C′, so that the claim follows from the assumption CC(C′).
If on the other hand B belongs to the blocks that determine the dominant interior of B + C′, then the growth type function for B + C′ is equal to that of B, so that part (1) shows that the limit vector is non-zero for any v ≠ 0 in the dominant interior of B + C. Any vector w in the dominant interior of B + C′ can be written as a sum w = v + u + w_0, where w_0 belongs to B + C′ but not to its dominant interior, while v lies in the dominant interior of B + C and u in the dominant interior of C′, and at least one of v and u is non-zero. Thus the claim follows directly from Lemma 4.2, applied to v and u.
We will now prove Theorem 5.1, assuming the results of Proposition 5.4. The proof of Proposition 5.4 is deferred to section 6.
Proof of Theorem 5.1. Consider the block decomposition of M according to its primitive Frobenius form, and denote by B the top matrix block. Let C = C(B) be the corresponding dependency block union.
If C is empty, then B is minimal with respect to the partial order on blocks (as defined in subsection 4.2). In this case, from the assumption that M has no zero columns, it follows that B is not a zero matrix. Hence the claim of Theorem 5.1 for any non-zero vector v ∈ B follows directly from classical Perron-Frobenius theory.
If C is non-empty, it follows from the previously considered case that the maximal eigenvalue for C satisfies λ_u ≥ 1. Thus, via induction over the number of blocks contained in C, we can invoke Lemma 5.5 (2) to obtain that the convergence condition CC(C) holds.
We can hence apply Proposition 5.4 to directly obtain the claim of Theorem 5.1 for any non-negative vector v ∈ B.
We can then assume by induction that the claim of Theorem 5.1 is true for any vector u ≠ 0 that has zero coefficients in the B-coordinates. Now, an arbitrary vector w ≠ 0 in the non-negative cone C_n can be written as a sum w = v + u, with v and u as before, and at least one of them different from 0. Hence the claim of Theorem 5.1 follows from Lemma 4.2.
Remark 5.6. The last proof also shows the following slight improvement of Theorem 5.1: For every primitive block B_i of the Frobenius form of M, and for any vector v ≠ 0 in the associated non-negative cone B_i, the normalization function h_v from Theorem 5.1 for the family (M^t v)_{t∈N} is of the same growth type as the function h_i defined in section 4.3.
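A minimal numerical sketch of the convergence statement of Theorem 5.1 (the 2 × 2 matrix below is made up for illustration): for a vector v supported in the top block, the powers M^t v normalized by h_v(t) = 3^t converge to a non-negative eigenvector of M.

```python
# Sketch of Theorem 5.1 on a made-up reducible lower triangular matrix:
# the top block has PF-eigenvalue 3, the dependency block has eigenvalue 2,
# so we are in case (1) of Proposition 5.4 with h_v(t) = 3^t.

M = [[3, 0],
     [1, 2]]

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

v = [1, 0]                             # supported in the top block
for t in range(1, 41):
    v = mat_vec(M, v)
w = [x / 3**40 for x in v]             # normalize by h_v(40) = 3^40
print(w)                               # close to the eigenvector [1, 1]
Mw = mat_vec(M, w)
print([a / b for a, b in zip(Mw, w)])  # both ratios close to the eigenvalue 3
```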
Recall from section 2 that ‖Σ_i a_i e_i‖ = Σ_i |a_i|.
The following elementary observation is repeatedly used in the next section.
Lemma 5.7. Let M be a non-negative integer (n × n)-matrix. Assume that there exists a function h : N → R_{>0} such that for any vector u in the non-negative cone C_n = (R_{≥0})^n the sequence (1/h(t)) M^t u converges to a limit vector u_∞ ∈ C_n which is either equal to 0 or else an eigenvector u_∞ ∈ C_n of M. Then there is a "universal constant" K = K(C_n) > 0 which satisfies (1/h(t)) ‖M^t v‖ ≤ K · ‖v‖ for any t ∈ N and for any (not necessarily non-negative) v ∈ R^n.
Proof. We first consider the finitely many coordinate vectors e_i from the canonical basis of R^n and observe that the hypothesis (convergent sequences being bounded) gives a constant K_0 > 0 with (1/h(t)) ‖M^t e_i‖ ≤ K_0 for any t ∈ N and any i = 1, …, n.
An arbitrary vector v = Σ_i a_i e_i ∈ R^n satisfies ‖v‖ = Σ_i |a_i| ≥ |a_i| = |a_i| · ‖e_i‖ for each i, which gives (1/h(t)) ‖M^t v‖ ≤ Σ_i |a_i| · (1/h(t)) ‖M^t e_i‖ ≤ n · ‖v‖ · K_0, thus proving the claim for K(C_n) := nK_0.
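The statement of Lemma 5.7 can be checked numerically on a toy example (the matrix is made up; the norm is the ℓ¹-norm recalled above). Note that the uniform bound holds also for vectors with negative entries:

```python
# Numerical sanity check (made-up example) of Lemma 5.7: with h(t) = 3^t
# the normalized powers of M are uniformly bounded operators on R^2,
# including on vectors with negative entries.

M = [[3, 0], [1, 2]]

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def l1(v):
    return sum(abs(x) for x in v)

ratios = []
for v0 in ([1, 0], [0, 1], [1, -1], [-2, 3]):
    v = v0
    for t in range(1, 31):
        v = mat_vec(M, v)
        ratios.append(l1(v) / (3**t * l1(v0)))
print(max(ratios))   # stays bounded (here by the constant 2)
```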
6. Proof of Proposition 5.4

Let us consider an arbitrary vector 0 ≠ v_0 ∈ B, and define iteratively, for any integer t ≥ 1, vectors v_t ∈ B and u_t ∈ C through M v_{t−1} = λ v_t + u_t. Therefore, for any t ≥ 1, we compute M^t v_0 = λ^t v_t + Σ_{s=1}^{t} λ^{s−1} M^{t−s} u_s.

Case 1: Assume that λ_u < λ.
In this case the diagonal block M_{i,i} of M corresponding to B is primitive. Let v ∈ B be the extended PF-eigenvector of M as given in section 4.3.
Let u ∈ C be the non-negative vector determined by the equation M v = λ v + u. Then we compute, for any t ≥ 1: M^t v = λ^t v + Σ_{m=0}^{t−1} λ^{t−1−m} M^m u. Recall that, since u ∈ C, by assumption there is a vector u_∞ ∈ C with lim_{m→∞} (1/(λ_u^m · m^{d_u})) M^m u = u_∞. Hence we deduce that for some constant K ≥ 0 one has ‖(1/λ^m) M^m u‖ ≤ K (λ_u/λ)^m m^{d_u} for all m ≥ 1, so that the series w := Σ_{m=0}^∞ (1/λ^m) M^m u converges in C. We now observe: M(v + (1/λ) w) = λ v + u + (1/λ) M w = λ v + u + (w − u) = λ (v + (1/λ) w). In other words, v + (1/λ) w is an eigenvector of M with eigenvalue λ which is contained in the non-negative cone B + C spanned by B and C.
We now consider an arbitrary vector v_0 ∈ B, as well as the vectors v_t ∈ B and u_t ∈ C as defined iteratively at the beginning of this section. For any integer s with 1 ≤ s ≤ t − 1, we have

(1/λ^t) M^t v_0 = v_t + Σ_{m=0}^{s} (1/λ^{m+1}) M^m u_{t−m} + Σ_{m=s+1}^{t−1} (1/λ^{m+1}) M^m u_{t−m} .   (6)

We now consider the limit of this sum for t → ∞: By the classical Perron-Frobenius theorem for primitive non-negative matrices we have lim_{t→∞} v_t = λ′ v for some λ′ > 0. From our definition of the v_t and u_t it follows that their lengths ‖v_t‖ and ‖u_t‖ are uniformly bounded. We can hence apply Lemma 5.7 to the subspace of R^n generated by C in order to deduce that there is a uniform bound on the lengths of the vectors (1/(λ_u^m · m^{d_u})) M^m u_{t−m}. Hence for any s ≥ 0 the third term in (6) is bounded in norm by a constant multiple of the tail Σ_{m≥s+1} (λ_u/λ)^m m^{d_u} of a convergent series. As a consequence, for any ε > 0 there is a value s = s(ε) ≥ 0 such that for any t ≥ s + 2 the third term of the above sum (6) has norm at most ε.

On the other hand, for large values of t the vectors v_{t−m−1} will be close to λ′ v, and hence u_{t−m} will be close to λ′ u, for u as defined above by means of the eigenvector v. That is, for any ε > 0 there is a bound t_0 = t_0(ε) ≥ 0 such that for any t ≥ t_0 there is a (not necessarily non-negative!) vector w_t of length ‖w_t‖ ≤ ε with u_t = λ′ u + w_t. This gives, for any s ≤ t − t_0, the estimate ‖(1/λ^{m+1}) M^m w_{t−m}‖ ≤ (ε K(ε)/λ) (λ_u/λ)^m m^{d_u} for all 0 ≤ m ≤ s, where K(ε) is the constant from Lemma 5.7 (again applied to the subspace generated by C). As a consequence, for any t ≥ s + t_0(ε) and some constant K′ which only depends on C, the second term in the above sum (6) will be εK′-close to Σ_{m=0}^{s} (λ′/λ^{m+1}) M^m u, which converges (according to the above definition of w) to (λ′/λ) w as s tends to infinity.
Given ε > 0, use the first part of our considerations to find s = s(ε) which ensures that the third term in the above sum (6) is smaller than ε. We then find t_0 = t_0(ε/K′), and consider any value t ≥ t_0 + s. The above derived estimates give (1/λ^t) M^t v_0 = v_t + Σ_{m=0}^{s} (λ′/λ^{m+1}) M^m u + w*_t, where w*_t is a (not necessarily non-negative) error term that satisfies ‖w*_t‖ ≤ ε. Therefore we obtain lim_{t→∞} (1/λ^t) M^t v_0 = λ′ v + (λ′/λ) w = λ′ (v + (1/λ) w), which proves the claim for w_0 = (1/λ) w.
Case 2: Assume that λ_u = λ. Similar to the previous case we first consider the extended PF-eigenvector v ∈ B corresponding to the block B. Recall that u ∈ C is the vector given by the equation M v = λ v + u. We compute:

(1/(λ^t · t^{d_u+1})) M^t v = (1/t^{d_u+1}) v + Σ_{m=0}^{t−1} (1/(λ^{m+1} · t^{d_u+1})) M^m u .   (††)

The first term in this sum tends to 0 when t goes to infinity. In order to understand the limit of the second term in the above sum (††) we recall from the inductive hypothesis in Proposition 5.4 that the vectors (1/(λ^s · s^{d_u})) M^s u converge for s → ∞ to some vector u_∞ in C.
Since we need it later, we observe here that it follows from Lemma 4.7 that some iterate M^t u belongs to the dominant interior of C. Thus the inductive hypothesis in Proposition 5.4 states that u_∞ ≠ 0 is an eigenvector of M.
In both cases, we derive that for any ε > 0 there exists a bound s(ε) ≥ 0 such that for all s ≥ s(ε) we have ‖(1/(λ^s · s^{d_u})) M^s u − u_∞‖ ≤ ε, from which we deduce that (1/λ^s) M^s u = s^{d_u} u_∞ + s^{d_u} w′_s with ‖w′_s‖ ≤ ε. Thus we can split the second term in the above sum (††) as follows:

Σ_{m=s(ε)}^{t−1} (1/(λ^{m+1} · t^{d_u+1})) M^m u + Σ_{m=0}^{s(ε)−1} (1/(λ^{m+1} · t^{d_u+1})) M^m u .

For fixed ε > 0 and hence fixed s(ε) the second term in the last sum converges to 0 as t tends to ∞, since it is a fixed finite sum of vectors divided by t^{d_u+1}. In order to compute the first term in the last sum we observe that it is ε-close (up to the factor 1/λ) to (1/(λ · t^{d_u+1})) (Σ_{m=s(ε)}^{t−1} m^{d_u}) u_∞. We note here that (1/t^{d_u+1}) Σ_{k=0}^{t−1} k^{d_u} ≤ 1 for all t ≥ 1. On the other hand, for sufficiently large t this normalized sum is arbitrarily close to 1/(d_u + 1), so that, using the above observation that u_∞ ≠ 0, we conclude that the limit vector λ_0 u_∞, with λ_0 = 1/(λ (d_u + 1)), is an eigenvector of M in C. This proves the claim for the extended PF-eigenvector v.
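The elementary Cesàro estimate used here is easy to confirm numerically:

```python
# Quick check of the elementary estimate used above:
# (1/t^(d+1)) * sum_{k=0}^{t-1} k^d is at most 1 and tends to 1/(d+1).

def cesaro(t, d):
    return sum(k**d for k in range(t)) / t**(d + 1)

for d in (1, 2, 3):
    print(d, cesaro(10, d), cesaro(10000, d))
# for each d the second value is close to 1/(d+1)
```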
We now consider an arbitrary vector v_0 ∈ B, as well as the vectors v_t ∈ B and u_t ∈ C defined iteratively as before. We obtain:

(1/(λ^t · t^{d_u+1})) M^t v_0 = (1/t^{d_u+1}) v_t + Σ_{m=0}^{t−1} (1/(λ^{m+1} · t^{d_u+1})) M^m u_{t−m} .

The first term in this sum tends to 0 when t goes to infinity. In order to understand the limit of the second term we observe that the primitivity of the diagonal matrix block of M corresponding to B implies that the v_t converge to λ′ v for some scalar λ′ > 0. We write (as in Case 1) u_{t+1} = λ′ u + w_{t+1} and note that for any ε > 0 there exists an integer t_0 = t_0(ε) such that ‖w_{t+1}‖ ≤ ε for any t ≥ t_0. As in Case 1 we have (1/(λ_u^t · t^{d_u})) ‖M^t w_t‖ ≤ K(ε) for all t ≥ t_0, where K(ε) is the constant given by Lemma 5.7.
As before, let s(ε) be an integer which ensures that ‖(1/(λ^s · s^{d_u})) M^s u − u_∞‖ ≤ ε for all s ≥ s(ε); from this we deduce, exactly as for the extended PF-eigenvector above, the convergence of (1/(λ^t · t^{d_u+1})) M^t v_0 to a positive multiple of u_∞.

(1) The vector v(B_i) admits a decomposition v(B_i) = v_i^PF + w_i, where v_i^PF is the extended PF-eigenvector (see Definition 4.3 (3)) of the primitive diagonal block of M corresponding to B_i, and w_i ∈ C(B_i).
(2) The vector v(B_i) is, up to rescaling, the only eigenvector in B_i + C(B_i) which admits such a decomposition: Any other eigenvector in B_i + C(B_i) is either contained in C(B_i), or else it is a scalar multiple of v(B_i). Hence v(B_i) will be called the "principal eigenvector" of B_i (or of B_i + C(B_i)).
Proof. Any non-zero vector v ∈ B_i + C(B_i) can be written as v = v_0 + u, with v_0 ∈ B_i and u ∈ C(B_i). From the hypothesis that B_i is principal it follows that the growth type of C(B_i), and thus that of u, is strictly smaller than that of B_i, which is given by the function h(t) = λ_i^t. Case (1) of Proposition 5.4 thus shows that, if v_0 ≠ 0, then (1/h(t)) M^t v_0 converges to a scalar multiple of the eigenvector v_i^PF + w_i, where w_i ∈ C(B_i) is uniquely determined by the extended eigenvector v_i^PF. It follows directly that either v_0 = 0 and thus v ∈ C(B_i), or else lim_{t→∞} (1/h(t)) M^t v = λ′ (v_i^PF + w_i) for some λ′ > 0. In particular, we observe that any eigenvector in B_i + C(B_i) which is not contained in C(B_i) must (up to rescaling) agree with v_i^PF + w_i. The latter is indeed an eigenvector with eigenvalue λ_i, by Remark 5.3 and Lemma 5.5 (1).
We will denote by C(λ) ⊂ C_n the non-negative cone spanned by all principal eigenvectors of M with eigenvalue λ. As before, we write here C_n to denote the standard non-negative cone in R^n. We also recall that for matrices in primitive Frobenius form there is a natural partial order on the blocks (see subsection 4.2), to which we refer below when a block is called "minimal" or "maximal".

Proposition 7.2. A non-negative vector v ≠ 0 is an eigenvector of M with eigenvalue λ if and only if v is contained in C(λ).

Proof. Clearly any v ∈ C(λ) ∖ {0} is an eigenvector with eigenvalue λ. For the converse implication we consider a maximal block B of M, and assume by induction over the number of blocks in M that the claim is true for the restriction of M to the invariant block cone C spanned by all coordinate vectors not contained in B. If B is not principal, it follows directly from the cases (2) and (3) of Proposition 5.4 that any eigenvector of M must have zero entries in the coordinates that belong to B, so that the claim follows from the induction hypothesis.
Similarly, if B is principal but the eigenvalue λ of v is different from the PF-eigenvalue λ_0 of B, it follows from case (1) of Proposition 5.4 that v belongs to C, so that the claim follows again from the induction hypothesis.
Finally, if B is principal with PF-eigenvalue equal to λ, and with principal eigenvector v^PF + w, then by the M-invariance of C we can apply Lemma 7.1 to obtain a decomposition v = λ′ (v^PF + w) + u for some vector u ∈ C and some scalar λ′ ≥ 0. Since both v and v^PF + w are eigenvectors with eigenvalue λ, the same is true for u. Hence the claim follows again from our induction hypothesis.

Remark 7.3.
(1) Eigenvectors of non-negative matrices have been investigated previously by several authors, see for instance [ESS14] and [Rot75] and the references given there. Indeed, the statements of Lemma 7.1 and Proposition 7.2 are very close to results obtained there.
Recall that, for some fixed k ≥ 1, one has lim_{n→∞} (1/h_v(kn)) M^{kn} v = v_∞ for some normalization function h_v for the vector v. The same statement (up to replacing v_∞ by a scalar multiple) stays valid if we replace h_v by any other normalization function for v. Thus in particular, for the normalization function (see Remark 4.1) h_v(t) = ‖M^t v‖, we want to consider the accumulation points of the values (1/‖M^t v‖) M^t v.
As is true for all sequences of the type f^n(x) for which, for some fixed k, the subsequence f^{kn}(x) converges, the sequence of vectors (1/h_v(t)) M^t v must accumulate (up to rescaling) onto the finite M-orbit of lim_{n→∞} (1/h_v(kn)) M^{kn} v.