Dynamically defined subsets of generic self-affine sets

In dynamical systems, shrinking target sets and pointwise recurrence sets are two important classes of dynamically defined subsets. In this article we introduce a mild condition on the linear parts of the affine mappings that allows us to bound the Hausdorff dimension of cylindrical shrinking target and recurrence sets. For generic self-affine sets in the sense of Falconer, that is, by randomising the translation parts of the affine maps, we prove that these bounds are sharp. Because our assumptions are mild, our results significantly extend and complement the existing literature on recurrence for self-affine sets.


Introduction
The shrinking target problem in dynamical systems investigates the "size" of the set of points that hit a collection of (shrinking) targets infinitely often. Letting (X, T, µ) be a dynamical system with invariant measure µ and (B_k)_{k∈N}, B_k ⊆ X, a collection of measurable subsets, one investigates

R((B_k)_k) = {x ∈ X : T^k(x) ∈ B_k for infinitely many k ∈ N}.
Often these sets are dense in the original space X, as well as G_δ, and so dimension theory is used to classify their sizes. The Hausdorff dimension is the most appropriate notion here, as dense G_δ sets have full dimension with respect to, e.g., the packing, Minkowski, and Assouad-type dimensions.
The shrinking target problem was first investigated by Hill and Velani for Julia sets: they analysed the Hausdorff dimension of these sets [12] and found a zero-one law for their Hausdorff measure [13]. The shrinking target problem has intricate links to number theory when the shrinking targets are sets naturally arising in Diophantine approximation. This has received a lot of attention over recent years; see for instance [1,5,18,23,24] for shrinking target sets and [2,6,8,9,16,19,20,21] for related research.
The literature on recurrence sets has so far focussed mostly on zero-one laws for conformal and one-dimensional dynamics, such as β-transformations; see Tan and Wang [26], and Zheng and Wu [28]. For self-similar and self-conformal dynamics these questions were explored by Seuret and Wang [27], who also gave a pressure formula for the Hausdorff dimension, as well as by Baker and Farmer [3], who established a zero-one law depending on a convergence condition on the sizes of the neighbourhoods. Finally, and most recently, Kirsebom, Kunde, and Persson [17] studied linear maps on the d-dimensional torus.
The above works mostly concern dynamical systems in R^1 or conformal dynamics, and transitioning to higher-dimensional non-conformal dynamics presents severe challenges. To circumvent the extreme challenges that affinities pose, a common strategy is to "randomise" the affine maps by considering typical translation parameters. This approach was first considered by Falconer in his seminal article [7]; his conditions were significantly relaxed by Solomyak [25] and generalised by Jordan, Pollicott and Simon [14].
This typicality with respect to the translation parameter allows one to say more about the regularity of the attractors and is a commonly employed strategy, see for example [14]. Using such randomisation, Koivusalo and Ramírez [18] gave an expression for the Hausdorff dimension of a self-affine shrinking target problem. They show that for a fixed symbolic target with exponentially shrinking diameter and well-behaved affine maps, the Hausdorff dimension is typically given by the zero of an appropriate pressure function. Strong assumptions are made on the affine system as well as on the fixed target, and in this article we significantly improve upon their results.
We will show that for a large family of self-affine systems and dynamical targets with non-fixed centres, the Hausdorff dimension is given by the intersection of two pressures: one is the standard self-affine pressure function, the other an inverse lower pressure related to the target. Crucially, we require neither that the target be fixed nor that the inverse pressure exist as a limit.
Our condition also allows us to investigate the dimensions of pointwise recurrence sets, a quantitative version of recurrence, for self-affine dynamics. As far as we are aware, this is the first time this has been attempted for non-conformal dynamics in higher dimensions.

Self-affine sets and symbolic space
Let A = {A_1, A_2, …, A_N} be a collection of non-singular, contracting d × d matrices, and let t = (t_1, t_2, …, t_N) be a collection of N vectors in R^d.
Let {1, …, N} be a finite alphabet and write Σ_n, Σ*, Σ for the set of words of length n, the set of all finite words, and the set of all infinite words, respectively. For words i ∈ Σ_n and j ∈ Σ we write i = i_1 i_2 ⋯ i_n and j = j_1 j_2 ⋯ to denote the individual letters of i and j. For a word i ∈ Σ*, let |i| denote the length of i. For any two words i, j ∈ Σ, let us denote their longest common prefix by i ∧ j. Let

Φ_t = {f_i(x) = A_i x + t_i : i = 1, …, N}

be the iterated function system formed by the corresponding affine maps on R^d. For a finite word i = i_1 ⋯ i_n ∈ Σ*, let f_i = f_{i_1} ∘ ⋯ ∘ f_{i_n} and A_i = A_{i_1} ⋯ A_{i_n}. It is a classical result that there exists a unique non-empty compact set Λ ⊂ R^d such that

Λ = ⋃_{i=1}^{N} f_i(Λ).

To avoid singleton sets we assume that N ≥ 2 throughout. Let us denote by π = π_t the natural projection from Σ to the attractor of Φ_t, that is,

π_t(i) = lim_{n→∞} f_{i_1} ∘ ⋯ ∘ f_{i_n}(0).

Clearly, π_t(i) = f_{i_1}(π_t(σi)), where σ denotes the left shift on Σ.

For a non-singular d × d matrix A, let α_1(A) ≥ ⋯ ≥ α_d(A) > 0 denote its singular values, and for 0 ≤ s ≤ d define the singular value function

φ^s(A) = α_1(A) ⋯ α_{⌊s⌋}(A) α_{⌈s⌉}(A)^{s−⌊s⌋},

with the convention φ^s(A) = (α_1(A) ⋯ α_d(A))^{s/d} for s > d. For any ball B, clearly, A(B) is an ellipsoid and, as was shown in [7, Proof of Proposition 5.1], it can be covered by at most (4|B|)^d α_1(A) ⋯ α_{⌊s⌋}(A) α_{⌈s⌉}(A)^{−⌊s⌋}-many cubes with side length α_{⌈s⌉}(A). The pressure of the self-affine system is defined as

P(s) = lim_{n→∞} (1/n) log Σ_{i∈Σ_n} φ^s(A_i),

where we note that this limit exists because of the submultiplicativity of φ^s. Further, the pressure is continuous in s, strictly decreasing, and satisfies P(0) = log N and P(s) → −∞ as s → ∞. Throughout the paper we will use the following extra condition.

Condition 2.1. Assume that A is such that for every s > 0 there exist C > 0 and K ∈ N such that for every i, j ∈ Σ* there exists k ∈ Σ_K with

φ^s(A_{ikj}) ≥ C φ^s(A_i) φ^s(A_j).

Similar conditions have been introduced earlier by Feng [10] and Käenmäki and Morris [15]. Feng [10, Proposition 2.8] showed that under a mild irreducibility condition there exist C > 0 and k ∈ N such that an analogous inequality holds for matrix norms with a buffer word of length at most k. Later, this inequality was generalised by Käenmäki and Morris [15, Lemma 3.5] to the singular value function under more restrictive but natural irreducibility conditions. Unfortunately, the uncertainty in the length of the "buffer" word k in the previous conditions does not allow us to study shrinking target and recurrence sets effectively.
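As a quick numerical illustration of the definitions above, the sketch below computes the singular value function φ^s and a finite-level approximation of the pressure P(s) for a toy pair of contracting matrices. The matrices and the truncation level are illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

# A toy pair of contracting, non-singular matrices (an illustrative
# assumption, not from the paper).
A = [np.array([[0.5, 0.1], [0.0, 0.3]]),
     np.array([[0.4, 0.0], [0.2, 0.45]])]

def phi(s, M):
    """Singular value function phi^s(M) = a_1 ... a_k * a_{k+1}^(s-k) with
    k = floor(s); for s >= d it is (a_1 ... a_d)^(s/d)."""
    a = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    k = int(np.floor(s))
    if k >= len(a):
        return float(np.prod(a)) ** (s / len(a))
    return float(np.prod(a[:k])) * a[k] ** (s - k)

def pressure(s, n):
    """Finite-level approximation (1/n) log sum_{|i|=n} phi^s(A_i) of P(s)."""
    total = 0.0
    for w in itertools.product(range(len(A)), repeat=n):
        M = np.eye(2)
        for i in w:
            M = M @ A[i]
        total += phi(s, M)
    return np.log(total) / n

print(pressure(0.0, 6))                      # equals log N = log 2 exactly
print(pressure(1.0, 6) < pressure(0.0, 6))   # True: the pressure decreases in s
```

At s = 0 every summand is 1, so the finite-level value is exactly log N, matching P(0) = log N; for larger s the sum shrinks, reflecting that P is strictly decreasing.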
We will show in Section 2.4 and Section 5 that under some irreducibility and proximality assumptions, Condition 2.1 holds.

Shrinking targets
Let (λ_k)_{k∈N} ∈ (Σ*)^N be a sequence of target cylinders. We are interested in the shrinking target set

R_t((λ_k)_{k∈N}) = π_t({i ∈ Σ : σ^k i ∈ [λ_k] for infinitely many k ∈ N}).
For our sequence of target cylinders, we define the following inverse lower pressure:

α(s) = lim inf_{n→∞} −(1/n) log φ^s(A_{λ_n}).   (2.1)

If lim inf_{k→∞} |λ_k|/k < ∞ then there exists a unique solution s_0 to the equation P(s_0) = α(s_0) ≥ 0, see Lemma 3.4. Otherwise s_0 = 0. We prove that this value gives the Hausdorff dimension of the shrinking target set under some assumptions on the matrices A. A similar result has been obtained by Koivusalo and Ramírez [18] for shrinking targets on self-affine sets. Firstly, they assume that there exists a constant C > 0 such that φ^s(A_i A_j) ≥ C φ^s(A_i) φ^s(A_j) for every i, j ∈ Σ*; secondly, they assume that α(s) is taken as a limit. The first condition holds only for a restrictive family of matrices, see Remark 2.6. By using a more detailed analysis of the pressure function, we were able to relax the condition on the limit as well.
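To make the defining equation P(s_0) = α(s_0) concrete, the following sketch locates the root numerically for a toy system, using a toy target sequence λ_n consisting of ⌊n/2⌋ repetitions of the first symbol. The matrices, the target sequence, and the finite truncation level are all illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

# Toy system: two contracting 2 x 2 matrices (an illustrative assumption).
A = [np.array([[0.5, 0.1], [0.0, 0.3]]),
     np.array([[0.4, 0.0], [0.2, 0.45]])]

def phi(s, M):
    """Singular value function phi^s(M), with the determinant convention
    for s >= d = 2."""
    a = np.linalg.svd(M, compute_uv=False)
    k = int(np.floor(s))
    if k >= 2:
        return float(np.prod(a)) ** (s / 2)
    return float(np.prod(a[:k])) * a[k] ** (s - k)

def pressure(s, n):
    """Finite-level proxy (1/n) log sum_{|i|=n} phi^s(A_i) for P(s)."""
    total = 0.0
    for w in itertools.product(range(2), repeat=n):
        M = np.eye(2)
        for i in w:
            M = M @ A[i]
        total += phi(s, M)
    return np.log(total) / n

def alpha(s, n):
    """Proxy -(1/n) log phi^s(A_{lambda_n}) for the assumed toy targets
    lambda_n = 0^(n // 2)."""
    M = np.linalg.matrix_power(A[0], n // 2)
    return -np.log(phi(s, M)) / n

def s0(n=8, iters=60):
    """Bisect for the root of s -> P(s) - alpha(s): the pressure decreases
    and alpha increases in s, so the difference crosses zero once."""
    lo, hi = 0.0, 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pressure(mid, n) - alpha(mid, n) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

root = s0()
print(root)  # the approximate solution of P(s) = alpha(s)
```

Bisection applies because P − α is continuous and strictly decreasing, exactly the monotonicity used in Lemma 3.4 for uniqueness of s_0.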

Recurrence sets
Now, we turn our attention to the recurrence sets. Let ψ : N → N, and let β = lim inf_{n→∞} ψ(n)/n. Consider the set

S_t(ψ) := π_t({i ∈ Σ : σ^k i ∈ [i|_{ψ(k)}] for infinitely many k ∈ N}).
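The defining condition σ^k i ∈ [i|_{ψ(k)}] can be checked symbolically. The toy sketch below does so for the periodic coding i = (1 2)^∞ with the illustrative choice ψ(k) = max(1, ⌊k/2⌋); both choices are demonstration assumptions, not from the paper. Recurrence occurs exactly at the even times, since shifting by an even k fixes i.

```python
# Symbolic check of the recurrence condition sigma^k(i) in [i|psi(k)] for the
# periodic coding i = (1 2)^infty and psi(k) = max(1, k // 2) (toy choices).

def letter(n):
    """n-th letter (1-indexed) of the periodic coding i = 1 2 1 2 ..."""
    return 1 if n % 2 == 1 else 2

def recurs(k, psi_k):
    """Does the shifted coding sigma^k(i) start with the prefix i|psi(k)?"""
    return all(letter(k + m) == letter(m) for m in range(1, psi_k + 1))

hits = [k for k in range(1, 50) if recurs(k, max(1, k // 2))]
print(hits)  # the even times 2, 4, ..., 48
```

Since the recurrence times have positive density, this coding lies in the symbolic part of S_t(ψ) for this ψ.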
Let us define the square-pressure function P_2 analogously to P. Note that the limit in its definition again exists by subadditivity. Further, this pressure is continuous in s, strictly increasing, and satisfies P_2(0) = −log N and P_2(s) → ∞ as s → ∞. Our main result on recurrence, Theorem 2.4, shows that for Lebesgue-almost every t the Hausdorff dimension of S_t(ψ) equals min{d, r_0}, where r_0 is the unique solution of the equation

(1 − β)P(r_0) = βP_2(r_0).   (2.3)

Moreover, L^d(S_t(ψ)) > 0 for Lebesgue-almost every t if r_0 > d.
The equation (2.3) applies only to the case β ≤ 1; for other values of β it needs to be modified accordingly. The condition β < 1 is purely technical and stems from the fact that the buffer word in Condition 2.1 depends on both the word before and the word after it. Hence, for recurrence rates greater than 1 it might cause "self-dependence" in the buffer word, which then may not exist. We note that under the stronger assumption on the matrices by Koivusalo and Ramírez [18], Theorem 2.4 can be generalised to any value β ∈ [0, ∞] with a straightforward modification of (2.3) and of the proof of Theorem 2.4.

Irreducibility of matrices
Let us denote by ∧^k R^d the k-th exterior product of R^d. For A ∈ GL_d(R), we can define an invertible linear map A^{∧k} : ∧^k R^d → ∧^k R^d by setting A^{∧k}(v_1 ∧ ⋯ ∧ v_k) = (Av_1) ∧ ⋯ ∧ (Av_k) and extending by linearity. Let us consider the following tensor product of the exterior algebras:

W̄ = ∧^1 R^d ⊗ ∧^2 R^d ⊗ ⋯ ⊗ ∧^{d−1} R^d.

Again, for A ∈ GL_d(R), we can define an invertible linear map Ā : W̄ → W̄ by setting Ā(u) = A^{∧1}u_1 ⊗ ⋯ ⊗ A^{∧(d−1)}u_{d−1} for u = u_1 ⊗ ⋯ ⊗ u_{d−1}. We define a linear subspace W of W̄, which is generated by the flags of R^d as follows:

W = span{v_1 ⊗ (v_1 ∧ v_2) ⊗ ⋯ ⊗ (v_1 ∧ ⋯ ∧ v_{d−1}) : v_1, …, v_{d−1} ∈ R^d}.

We call W the flag vector space. Note that the flag space W is invariant with respect to the linear map Ā for A ∈ GL_d(R).
We say that A ∈ GL_d(R) is fully proximal if its d eigenvalues have pairwise distinct absolute values. Note that A is fully proximal if and only if A^{∧k} is 1-proximal for every k, if and only if Ā is 1-proximal on W. We say that the tuple A is fully proximal if there exists a finite product A_{i_1} ⋯ A_{i_k} formed by the elements of A which is fully proximal.
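Full proximality of a single matrix is easy to test numerically: compute the eigenvalues and check that their absolute values are pairwise distinct. The matrices below are illustrative assumptions, not taken from the paper; a rotation matrix is included as a non-example, since both of its eigenvalues have modulus 1.

```python
import numpy as np

# Illustrative matrices (assumptions for demonstration, not from the paper).
A1 = np.array([[2.0, 1.0], [0.0, 0.5]])
A2 = np.array([[1.0, 0.0], [1.0, 3.0]])
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees

def fully_proximal(M, tol=1e-9):
    """Check that all eigenvalue moduli of M are pairwise distinct."""
    mods = sorted(abs(np.linalg.eigvals(M)))
    return all(b - a > tol for a, b in zip(mods, mods[1:]))

print(fully_proximal(A1 @ A2))  # True: the moduli are distinct
print(fully_proximal(R))        # False: both eigenvalues have modulus 1
```

For a tuple A, the definition only requires some finite product of its elements to pass this test, which is why the check above is applied to the product A1 @ A2.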
We say that the tuple A is fully strongly irreducible, or strongly irreducible over W, if there is no finite collection V_1, …, V_n of proper subspaces of W such that ⋃_{A∈A} ⋃_{k=1}^{n} Ā V_k = ⋃_{k=1}^{n} V_k.

Proposition 2.5. Let A be a tuple of matrices in GL_d(R) such that A is fully proximal and fully strongly irreducible. Then for every 0 < s < d there exist C > 0 and K ∈ N such that for every i, j ∈ Σ* there exists k ∈ Σ_K with φ^s(A_{ikj}) ≥ C φ^s(A_i) φ^s(A_j); that is, Condition 2.1 holds.

Remark 2.6. Koivusalo and Ramírez [18] assumed that there exists a constant D > 0 such that φ^s(A_i A_j) ≥ D φ^s(A_i) φ^s(A_j) for every i, j ∈ Σ*. Bárány, Käenmäki and Morris [4, Corollary 2.5] showed that for planar matrix tuples A this condition is equivalent to the following: A can be decomposed into two sets A_e and A_h such that A_e is strongly conformal (i.e. can be transformed into orthogonal matrices by a common change of basis) and, if A_h ≠ ∅, then A_h has a strongly invariant multicone C (i.e. a finite union of closed projective intervals that every element of A_h maps into the interior of C).
Assuming full strong irreducibility and full proximality is clearly a less restrictive requirement. For instance, in the case of planar matrices, full strong irreducibility and full proximality are equivalent to strong irreducibility and proximality.
Using Proposition 2.5 we obtain the following immediate corollaries.
Corollary 2.7. Let A be a collection of d × d matrices. Suppose that A is fully strongly irreducible and fully proximal. Then, for Lebesgue-almost every t, dim_H R_t((λ_k)_k) = min{d, s_0}, where s_0 is the unique solution of P(s_0) = α(s_0) if lim inf_{k→∞} |λ_k|/k < ∞, and s_0 = 0 otherwise.

Corollary 2.8. Let A be a collection of d × d matrices. Suppose that A is fully strongly irreducible and fully proximal and that β < 1. Then, for Lebesgue-almost every t, dim_H S_t(ψ) = min{d, r_0}, where r_0 is the unique solution of the equation (2.3).

Structure. We prove Theorem 2.2 in Section 3 and Theorem 2.4 in Section 4 using Condition 2.1. First, we derive elementary results on the inverse lower pressure α defined in (2.1) in Section 3.1. We also recall results about the pressure P and prove the uniqueness of the solution of P(s_0) = α(s_0). We proceed in Section 3.2 by proving the upper bound of Theorem 2.2 and finish the proof of the lower bound in Section 3.3 with an energy estimate. Similarly, Section 4.1 is devoted to the upper bound and Section 4.2 to the lower bound of Theorem 2.4. Section 5 contains the proof of Proposition 2.5, which shows that the assumptions in Corollaries 2.7 and 2.8 are sufficient.

Basic properties and the inverse lower pressure function
Let (λ_k)_{k∈N} ∈ (Σ*)^N be a sequence and let α be the corresponding inverse lower pressure defined in (2.1).
Observe that by definition φ^s(A_{λ_n}) ≤ γ^{s|λ_n|}, where γ = max_{i∈Σ_1} α_1(A_i) < 1, and so α(s) ≥ 0. Assume that α(s) = 0. Since −(1/n) log φ^s(A_{λ_n}) ≥ 0, this implies that there is a subsequence n_k such that (1/n_k) log φ^s(A_{λ_{n_k}}) ↗ 0. But then (1/n_k) log γ^{s|λ_{n_k}|} = (s|λ_{n_k}|/n_k) log γ ↗ 0 and so |λ_{n_k}|/n_k ↘ 0, as required.
For the other direction assume |λ_{n_k}|/n_k → 0 for some subsequence n_k. Then, since φ^t(A_{λ_n}) ≥ δ^{t|λ_n|} with δ = min_{i∈Σ_1} α_d(A_i) > 0, for any t ≥ 0,

α(t) ≤ lim_{k→∞} −(1/n_k) log φ^t(A_{λ_{n_k}}) ≤ lim_{k→∞} (t|λ_{n_k}|/n_k) log(1/δ) = 0.

Combining this with the trivial inequality α(t) ≥ 0 we get the desired conclusion that α(t) = 0 for all t ≥ 0.
Similarly, if the inverse lower pressure is extremal in the other direction, it must be extremal everywhere.
The proof is analogous to that of Lemma 3.1 and is left to the reader.
Proof. Note that by Lemma 3.1 and Lemma 3.2 the inverse lower pressure satisfies 0 < α(t) < ∞ for all t > 0. Letting t = 0, we have and so This shows that α(t) is continuous at t = 0. For any t > 0 and ε > 0 sufficiently small we have where in the last inequality we applied (3.2) with s = t − ε. Hence, which shows that α(t) is strictly monotone increasing on (0, ∞).
For an s > 0, let n k (s) be a sequence for which the lower limit in α(s) is achieved. Then by Hence for every s > 0, This implies that Thus, which together with (3.3) implies continuity.
Proof. If lim inf_{k→∞} |λ_k|/k > 0 then the first statement follows by Lemma 3.3, since P(0) − α(0) = log N, P(t) − α(t) → −∞ as t → ∞, and P(t) − α(t) is strictly monotone decreasing. If lim inf_{k→∞} |λ_k|/k = 0 then, by Lemma 3.1, α(t) ≡ 0 for t ≥ 0, and the uniqueness of the solution follows from the uniqueness of the root of P. The second conclusion follows from the observation that α(t) ≥ 0 for all t ≥ 0.
The following lemma is standard, but we include it for completeness.

Upper bound to Theorem 2.2
Note that R t ((λ k ) k ) is a lim sup set that can be written as Temporarily fix t ≥ 0. By definition, for every δ > 0 there exists k 0 large enough such that for all k ≥ k 0 . This can be rearranged to give Similarly, for every δ > 0, we obtain for large enough k. For the lower bounds, we note that for all δ > 0 there exists a subsequence k n such that and for large enough k, by submultiplicativity and existence of the limit. Assume that lim inf k |λ k |/k < ∞. Let s > s 0 and note that P (s) − α(s) < 0. We set δ > 0 small enough such that η := P (s) − α(s) + 2δ < 0. Let B be a ball with sufficiently large radius such that f i (B) ⊂ B for all i = 1, . . . , N . Hence, by using the cover given in [7, Proof of Proposition 5.1] we obtain Since s > s 0 was arbitrary, we conclude that dim H R t ((λ k ) k ) ≤ s 0 for all t.
Finally, consider the case when lim inf_k |λ_k|/k = ∞. Let s > 0 be arbitrary and again write γ = max_{i∈Σ_1} α_1(A_i). Recall that #Σ_1 = N and observe that there exists M such that |λ_k| ≥ 2k log N/(s log γ^{−1}) for k ≥ M. Therefore γ^{s|λ_k|} ≤ N^{−2k} for large enough k, and the Hausdorff measure bound above tends to zero. As s > 0 was arbitrary, this shows that dim_H R_t((λ_k)_k) = 0 for all t.

Lower bound to Theorem 2.2
To simplify the exposition we will abuse notation slightly and write ϕ s (i) instead of ϕ s (A i ) for i ∈ Σ * .
For every sufficiently large p ∈ N and s < min{s_0, d}, we construct a measure ν^s_p on the symbolic space Σ and investigate its projection under the self-affine iterated function system. Let m_k be a sequence along which the lower limit in α(s) is achieved, and take a very sparse subsequence such that Σ_{k=1}^{n} m_k ≤ (1 + 2^{−n}) m_n and m_n ≥ 2^n K, where K is the length of the buffer word defined in Condition 2.1. We may further assume, without loss of generality, that m_1 ≫ p and that m_k ≥ 2^k. By the pigeonhole principle there exists 1 ≤ p_0 ≤ p + K such that m_k = p_0 + (K + p)q for infinitely many q. Again, by taking subsequences, we may assume that m_k is always of the form p_0 + (K + p)q for some q. If p_0 > K then we redefine p_0 := p_0 − K, otherwise let p_0 := p_0 + p.
We will obtain ν^s_p as the weak limit of descending measures ν^s_{p,k} : Σ → [0, 1]. The construction is fairly intricate and involves splitting the measure into blocks of length p with "buffers" of length K in between that are given by Condition 2.1. However, at each position m_ℓ, we want to append λ_{m_ℓ}. To ensure consistency of lengths, we need to slightly modify λ_{m_ℓ} by extending the words to be of length p + q(K + p) for some q ≥ 0. To this end we define λ′_{m_ℓ} = λ_{m_ℓ}11…1, where the number of 1 symbols is (p − |λ_{m_ℓ}|) mod (K + p). For every i_1, i_2 ∈ Σ* denote the word in Condition 2.1 by k(i_1, i_2) ∈ Σ_K. We define a collection of words K_n by induction. Let K_0 := Σ_{p_0}, and supposing that K_n is defined for some n ≥ 0, define K_{n+1} accordingly. To ease notation, let ℓ_k denote the length of words in K_k. Observe that by construction, every i ∈ K_n can be written in the form i = i_1 k_1 i_2 k_2 ⋯ k_n i_{n+1}, where for every k ∈ {2, …, n + 1}, i_k ∈ Ω(ℓ_{k−1} + K) and k_k = k(i_1 k_1 … k_{k−1} i_k, i_{k+1}). While the words in K_n consist of the same number of blocks (n + 1) and buffers (n), their lengths are not necessarily p_0 + n(p + K) due to the different lengths of the λ′_{m_i}. Their lengths are, however, by construction always of the form p_0 + q(p + K) for some integer q ≥ n. This ensures that we can construct a lim sup set of codings K. Let η(n) denote the number of λ′_{m_ℓ} blocks in K_n. We start by defining ν^s_{p,0} on cylinders of length no less than p_0, for i ∈ Σ_{p_0} = K_0 and h ∈ Σ*, by an appropriately normalised φ^s-weight. This uniquely defines a probability measure on Σ, i.e. ν^s_{p,0}(Σ) = 1. We then define ν^s_{p,n} on cylinders with prefix in K_n. Observe that for any cylinder set O ⊆ Σ, the measures ν^s_{p,k}(O) are eventually monotone decreasing, and hence ν^s_p(O) ≤ ν^s_{p,k}(O) for all sufficiently large k ∈ N, where ν^s_p is the weak limit of (ν^s_{p,k})_{k∈N}.
Lemma 3.6. Let k ∈ N_0. Then the estimate (3.9) holds, where C is the constant appearing in Condition 2.1.
Remark 3.7. Observe that the summations in (3.9) are all over the same set. We have changed the subscript to emphasise these two points of view of K k versus its constituent parts.
Proof. The last inequality follows from the submultiplicativity of ϕ s and that ϕ s (k j ) < 1. The first inequality follows inductively from repeated application of Condition 2.1 as follows: The base case k = 0 follows trivially, since K 0 = Σ p 0 . For the induction step assume that (3.9) holds for k ≥ 0. Applying Condition 2.1 to words in K k+1 gives and the induction hypothesis immediately gives which completes the proof.
The proof of Theorem 2.2 reduces mainly to the following technical lemma.
Lemma 3.8. Let s_0 > 0 be such that P(s_0) = α(s_0). Then for all 0 < t < s < s_0 and sufficiently large p, the t-energy integral (3.10) is finite.

Proof. Let p ∈ N be large enough such that γ^{(s−t)p} < C, where 0 < γ < 1 and 1 > C > 0 are the constants appearing in Lemma 3.5 and Condition 2.1, respectively. Since s < s_0, we have P(s) > α(s) and we can pick δ > 0 such that P(s) − α(s) > 4δ and choose p (which so far depends only on C and γ) large enough such that we may apply (3.5) and (3.6) with δ; moreover, we require that pδ > KP(s) − 2Kδ.
Recall that ν^s_p is supported on K and note that for all distinct i, j ∈ K, their longest common prefix i ∧ j must be a word of the form i_1 k_1 … i_n i′ for some i′ ∈ Σ_{≤(p+K)} = ⋃_{k=0}^{p+K} Σ_k and n maximal. To see this, assume |i′| > p + K. Then i′ must have a prefix of the form k_n λ′_{m_j}. But since all words h ∈ K satisfy (σ^{m_j} h)|_{|λ′_{m_j}|} = λ′_{m_j}, so must j, and we obtain i ∧ j = i_1 k_1 … i_n k_n λ′_{m_j} i″ for some finite word i″. This, however, contradicts the maximality of n and our claim follows.
Note further that, by the boundedness of the length of i′ by p + K as well as the non-singularity of the matrices A_i, there exists a universal constant D for the IFS such that (3.11) holds. The double integral (3.10), together with (3.11), simplifies to a sum, which we bound using the definition of K_n and Lemma 3.5 for some 0 < γ < 1. Again, let η(n) denote the number of λ′_{m_ℓ} blocks in K_n. Using Lemma 3.6, followed by (3.5) and (3.6), we can bound this sum, for some c > 0, by terms of size e^{m_i(α(s)+δ)}.

Applying (3.8) and pδ > KP(s) − 2Kδ, together with the bound ℓ_n ≥ m_{η(n)} + |λ′_{m_{η(n)}}|, we can apply (3.7) to obtain a bound on the sum. Coupling this with the observation that C^{−1}γ^{(s−t)p} < 1 and (1 + 2^{−n})α(s) − (1 − 2^{−n})P(s) + 3δ < 0 for sufficiently large n, the expression above is bounded by a geometric series with ratio less than one and hence is bounded. It immediately follows that (3.10) is bounded and the t-energy of ν^s_p is finite, as required.
Proof of Theorem 2.2. To show that dim H R t ((λ k ) k ) ≥ s 0 for Lebesgue-almost every t, it is enough to show that for every t < s 0 we have dim H R t ((λ k ) k ) ≥ t for Lebesgue-almost every t.
Let t < s < s 0 and p be as in Lemma 3.8. By Frostman's lemma (see for example [22,Chapter 8]), it is enough to show that where the right-hand side is finite by Lemma 3.8. Now, let us turn to the proof that L d (R t ((λ k ) k )) > 0 for Lebesgue-almost every t if s 0 > d. Let s be such that s 0 > s > d. It is enough to show that (π t ) * ν s p ≪ L d , and by [22,Theorem 2.12], to do so it is enough to prove that where the right-hand side is finite again by Lemma 3.8.
Remark 4.1. We note that if β > 1 then the argument above is not optimal.

Lower bound for Theorem 2.4
The proof is analogous to the lower bound of Theorem 2.2, with some necessary modifications. Let p ∈ N be an integer which will be specified later. Let m_k be a sequence along which the lower limit β = lim inf_{n→∞} ψ(n)/n is achieved, and take a sparse subsequence satisfying (4.1). Let us choose p_0 as in Section 3.3, so that m_k = p_0 + (p + K)q for every k ≥ 1 for some q ∈ N. To ensure consistency of lengths again, we need to slightly modify ψ(m_ℓ) by extending the words to be of length p + q(K + p) for some q ≥ 0; to this end we define ψ′(ℓ) accordingly. We construct a measure ν^s_p similarly to Section 3.3, except that the elements of Ω(k) depend on the previous elements. For every i_1, i_2 ∈ Σ* denote the word in Condition 2.1 by k(i_1, i_2) ∈ Σ_K. We define a collection of words K′_n by induction. Let K′_0 := Σ_{p_0}, and supposing that K′_n is defined for some n ≥ 0, define K′_{n+1} analogously to Section 3.3. Denote by ℓ′_k the length of words in K′_k. Observe that by construction, again every i ∈ K′_n can be written in the form i = i_1 k_1 ⋯ k_n i_{n+1}, where for every k ∈ {2, …, n+1}, i_k ∈ Ω(i_1 k_1 … i_{k−1}, ℓ_{k−1} + K) and k_k = k(i_1 k_1 … k_{k−1} i_k, i_{k+1}). Let η′(n) denote the number of recurrences in K′_n. We start by defining ν^s_{p,0} on cylinders of length no less than p_0, for i ∈ Σ_{p_0} = K′_0 and h ∈ Σ*, as in Section 3.3. This uniquely defines a probability measure on Σ, i.e. ν^s_{p,0}(Σ) = 1. We then define ν^s_{p,n} on cylinders with prefix in K′_n.

Lemma 4.2. Let r_0 > 0 be such that (1 − β)P(r_0) = βP_2(r_0). Then for all 0 < t < s < r_0 and sufficiently large p, the t-energy of ν^s_p is finite.

Proof. By a similar argument to the beginning of Lemma 3.8, it is enough to bound the corresponding sum. Denote by η(n) the number of returns in K′_n. By definition, m_{η(n)} is the position of the last return, and it returns to [j|_{ψ(m_{η(n)})}]. Unfortunately, j|_{ψ(m_{η(n)})} is not necessarily an element of K′_k for all k > 0.
Let k_n be the smallest integer such that ψ(m_{η(n)}) ≤ ℓ_{k_n}, where we recall that ℓ_n is the length of the elements of K′_n. Clearly, for every j = j_1 k_1 … k_{k_n} j_{k_n+1} ∈ K′_{k_n},

φ^s(j_1) φ^s(j_2) ⋯ φ^s(j_{k_n+1}) ≥ φ^s(j),

and for j ∈ K′_n,

φ^s(j|_{ψ(m_{η(n)})}) ≥ φ^s(j′),

where j′ is the unique element in K′_{k_n} such that j ≺ j′. Moreover, for every j ∈ K′_n there are n − η(n) − (k_n − η(k_n))-many Σ_p components in the sequence σ^{ℓ_{k_n}} j. Hence, using (4.2) and the defining properties (4.1) of the sequence m_n, we obtain the bound

c Σ_{n=1}^{∞} C^{−n} γ^{(s−t)ℓ_n} exp(−(P(s) − δ)(1 − 2^{−n}) m_{η(n)} + (P(s) + P_2(s))(β + δ) m_{η(n)} − 3 log γ · s 2^{−n} m_{η(n)}).

Coupling this with the observation that C^{−1}γ^{(s−t)p} < 1 and −(P(s) − δ)(1 − 2^{−n}) + (P(s) + P_2(s))(β + δ) − 2 log γ · s 2^{−n} < 0 for sufficiently large n, the left-hand side is finite and the proof is complete. Now, the proof of Theorem 2.4 is identical to the proof of Theorem 2.2 with Lemma 3.8 replaced by Lemma 4.2, so we omit it.

Justification of Condition 2.1
In this section, we give a sufficient condition under which Condition 2.1 holds. The proof is both a modification and an application of Käenmäki and Morris [15, Proposition 4.1]. First, let us recall some definitions and notation from algebraic geometry, following Goldsheid and Guivarc'h [11] and Käenmäki and Morris [15].
Let us denote by ∧^k R^d the k-th exterior product of R^d. That is, let {e_1, …, e_d} be the standard orthonormal basis of R^d and define

∧^k R^d = span{e_{i_1} ∧ ⋯ ∧ e_{i_k} : 1 ≤ i_1 < ⋯ < i_k ≤ d}

for all k = 1, …, d, and let ∧^0 R^d = R by convention. The wedge product ∧ : ∧^k R^d × ∧^ℓ R^d → ∧^{k+ℓ} R^d is defined in the natural way. If v ∈ ∧^k R^d can be expressed as a wedge product of k vectors of R^d, then v is said to be decomposable. Let us define the Hodge star operator * : ∧^k R^d → ∧^{d−k} R^d to be the bijective linear map satisfying

*(e_{i_1} ∧ ⋯ ∧ e_{i_k}) = sgn(i_1, …, i_d) e_{i_{k+1}} ∧ ⋯ ∧ e_{i_d}

for all permutations (i_1, …, i_d) of (1, …, d). The inner product on ∧^k R^d is given by ⟨v, w⟩_∧ = det(⟨v_i, w_j⟩)_{i,j=1}^{k}, where v = v_1 ∧ ⋯ ∧ v_k and w = w_1 ∧ ⋯ ∧ w_k. For A ∈ GL_d(R), we can define an invertible linear map A^{∧k} : ∧^k R^d → ∧^k R^d by setting A^{∧k}(e_{i_1} ∧ ⋯ ∧ e_{i_k}) = (Ae_{i_1}) ∧ ⋯ ∧ (Ae_{i_k}) and extending by linearity.
For every matrix A ∈ GL_d(R), there exists an orthonormal basis {u_1, …, u_d} such that ∥Au_i∥ = α_i(A) and {α_1(A)^{−1}Au_1, …, α_d(A)^{−1}Au_d} is orthonormal. Hence, the operator norm of A^{∧k} is

∥A^{∧k}∥ = α_1(A) ⋯ α_k(A).

Thus, for every 0 < s ≤ d, the singular value function can be written as

φ^s(A) = ∥A^{∧⌊s⌋}∥^{1−s+⌊s⌋} ∥A^{∧⌈s⌉}∥^{s−⌊s⌋}.

Similarly, we say that A is strongly k-irreducible if there is no finite collection of proper subspaces V_1, …, V_n of ∧^k R^d such that ⋃_{ℓ=1}^{n} ⋃_{A∈A} A^{∧k} V_ℓ = ⋃_{ℓ=1}^{n} V_ℓ. Denote by S(A) the semigroup induced by A. The following lemma is due to Käenmäki and Morris [15, Proposition 4.1].
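The norm identity ∥A^{∧k}∥ = α_1(A)⋯α_k(A) can be checked numerically: in the standard basis of ∧^k R^d, the map A^{∧k} is represented by the k-th compound matrix, whose entries are the k × k minors of A. The test matrix below is an arbitrary illustrative choice, not taken from the paper.

```python
import itertools
import numpy as np

def compound(A, k):
    """k-th compound matrix of A: the matrix of A^{wedge k} in the standard
    basis of the k-th exterior power; its entries are k x k minors of A."""
    n = A.shape[0]
    idx = list(itertools.combinations(range(n), k))
    C = np.empty((len(idx), len(idx)))
    for a, rows in enumerate(idx):
        for b, cols in enumerate(idx):
            C[a, b] = np.linalg.det(A[np.ix_(rows, cols)])
    return C

# An arbitrary illustrative 3 x 3 matrix (an assumption for demonstration).
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.3],
              [0.0, 0.1, 0.6]])
sv = np.linalg.svd(A, compute_uv=False)      # alpha_1 >= alpha_2 >= alpha_3

results = []
for k in (1, 2, 3):
    lhs = np.linalg.norm(compound(A, k), 2)  # operator norm of A^{wedge k}
    rhs = float(np.prod(sv[:k]))             # alpha_1(A) ... alpha_k(A)
    results.append((lhs, rhs))
print(results)  # the two entries of each pair agree up to rounding
```

The agreement follows because the singular values of the k-th compound are the k-fold products of singular values of A, the largest being α_1 ⋯ α_k.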
for every A ∈ S(A) then A is neither strongly k-irreducible nor strongly (k + 1)-irreducible.
For two vector spaces V and W, let us define the tensor product V ⊗ W as the span of the elementary tensors v ⊗ w, where for any v_1, v_2 ∈ V, w_1, w_2 ∈ W and α ∈ R,

(v_1 + v_2) ⊗ w_1 = v_1 ⊗ w_1 + v_2 ⊗ w_1, v_1 ⊗ (w_1 + w_2) = v_1 ⊗ w_1 + v_1 ⊗ w_2, (αv_1) ⊗ w_1 = v_1 ⊗ (αw_1) = α(v_1 ⊗ w_1).

Let us consider the following tensor product of the exterior algebras:

W̄ = ∧^1 R^d ⊗ ∧^2 R^d ⊗ ⋯ ⊗ ∧^{d−1} R^d.

We define the inner product of W̄ for u = u_1 ⊗ ⋯ ⊗ u_{d−1} and v = v_1 ⊗ ⋯ ⊗ v_{d−1} by ⟨u, v⟩ = ∏_{k=1}^{d−1} ⟨u_k, v_k⟩_∧, and extend it in a bilinear, symmetric way. We define a linear subspace W of W̄, which is generated by the flags of R^d as follows:

W = span{v_1 ⊗ (v_1 ∧ v_2) ⊗ ⋯ ⊗ (v_1 ∧ ⋯ ∧ v_{d−1}) : v_1, …, v_{d−1} ∈ R^d}.

We call W the flag vector space. Again, for A ∈ GL_d(R), we can define an invertible linear map Ā : W̄ → W̄ by setting Ā(u) = A^{∧1}u_1 ⊗ ⋯ ⊗ A^{∧(d−1)}u_{d−1} for u = u_1 ⊗ ⋯ ⊗ u_{d−1} and extending by linearity. It is easy to see that Ā(W) = W for A ∈ GL_d(R). Let us denote the restriction of the inner product ⟨·, ·⟩_∧ and norm ∥·∥_∧ to W by ⟨·, ·⟩_W and ∥·∥_W.
We say that A ∈ GL_d(R) is fully proximal if its d eigenvalues have pairwise distinct absolute values. Note that A is fully proximal if and only if A^{∧k} is 1-proximal for every k, if and only if Ā is 1-proximal on W. We say that the tuple A is fully proximal if there exists an A ∈ S(A) which is fully proximal.
We say that the tuple A is fully strongly irreducible, or strongly irreducible over W, if there is no finite collection V_1, …, V_n of proper subspaces of W such that ⋃_{A∈A} ⋃_{k=1}^{n} Ā V_k = ⋃_{k=1}^{n} V_k. Before we prove Proposition 2.5, we need to recall two important tools.
Lemma 5.2. Suppose that A is fully proximal and fully strongly irreducible. Then A^⊤ = {A_1^⊤, …, A_N^⊤} and A^m = {A_1 ⋯ A_m : A_1, …, A_m ∈ A} are also fully proximal and fully strongly irreducible for every m ≥ 1.
Proof. Let A ∈ GL_d(R) be a fully proximal matrix, and let λ_1, …, λ_d and v_1, …, v_d be the corresponding eigenvalues and eigenvectors. Since A^⊤ has the same eigenvalues, it is easy to see that A^⊤ is fully proximal as well. Now, let us suppose that A is not fully strongly irreducible, and we show that then A^⊤ is not fully strongly irreducible either. Let V_1, …, V_n be proper subspaces of W such that ⋃_{A∈A} ⋃_{i=1}^{n} Ā V_i = ⋃_{i=1}^{n} V_i; considering the orthogonal complements of the V_i, it follows that A^⊤ is not fully strongly irreducible. Similarly, full proximality of A clearly implies full proximality of A^m. Moreover, if A^m is not fully strongly irreducible, then there exists a finite family of proper subspaces V_1, …, V_n of W such that ⋃_{A_1,…,A_m∈A} ⋃_{i=1}^{n} Ā_1 ⋯ Ā_m V_i = ⋃_{i=1}^{n} V_i. Thus, the tuple A is not fully strongly irreducible, as witnessed by the finite family ⋃_{i=1}^{n} ⋃_{k=0}^{m−1} {Ā_1 ⋯ Ā_k V_i : A_1, …, A_k ∈ A}.

For a fully proximal matrix A, the limit G_k(A) = lim_{n→∞} (A^{∧k})^n / ∥(A^{∧k})^n∥ exists and has rank 1. Moreover, Im(G_k(A)) = span{v_1 ∧ … ∧ v_k}, where v_i is an eigenvector corresponding to the i-th largest eigenvalue in absolute value. Moreover, for a fully proximal matrix A ∈ S_0(A),

Im(Ḡ(A)) = Im(G_1(A)) ⊗ ⋯ ⊗ Im(G_{d−1}(A)). (5.1)

The following lemma is a corollary of Goldsheid and Guivarc'h [11, Theorem 2.14].

Proof. Let us argue by contradiction. First, suppose that A is not strongly k-irreducible for some k ∈ {1, …, d − 1}. Let V_1, …, V_n be a finite collection of proper subspaces of ∧^k R^d such that ⋃_{ℓ=1}^{n} ⋃_{A∈A} A^{∧k} V_ℓ = ⋃_{ℓ=1}^{n} V_ℓ, and let V̄_ℓ denote the corresponding subspaces of W. It is easy to see that V̄_ℓ is a proper subspace of W for all ℓ = 1, …, n and ⋃_{ℓ=1}^{n} ⋃_{A∈A} Ā V̄_ℓ = ⋃_{ℓ=1}^{n} V̄_ℓ, which is a contradiction. Now, suppose that there exists a finite collection V_1, …, V_n of proper subspaces of W such that L(A) ⊆ ⋃_{i=1}^{n} P(V_i). Without loss of generality, we may assume that V_1, …, V_n is minimal in the sense that L(A) ∩ P(V_i) is not contained in a finite union of proper subspaces of V_i. Indeed, if L(A) ∩ P(V_i) ⊆ ⋃_{i=1}^{n′} P(V′_i) for a finite collection of proper subspaces V′_1, …, V′_{n′} of V_i, then one can replace V_i with V′_1, …, V′_{n′}. Clearly, the procedure terminates in finitely many steps. We will show that for every A ∈ A and every j ∈ {1, …, n} there exists i ∈ {1, …, n} such that Ā V_j = V_i. Clearly, Ā(L(A) ∩ P(V_j)) ⊆ P(Ā V_j).
Since Ā is invertible on W we get the corresponding inclusion for preimages. But by the minimality assumption on V_1, …, V_n, the subspace Ā^{−1}V_i ∩ V_j must be equal to V_j for some i ∈ {1, …, n}.
Thus, ⋃_{ℓ=1}^{n} ⋃_{A∈A} Ā V_ℓ = ⋃_{ℓ=1}^{n} V_ℓ, which is again a contradiction.
Proof of Proposition 2.5. Let us argue by contradiction. Namely, there exists s > 0 such that for every C > 0 and K ∈ N there exist i_{C,K}, j_{C,K} ∈ Σ* such that for all k ∈ Σ_K,

φ^s(A_{i_{C,K} k j_{C,K}}) < C φ^s(A_{i_{C,K}}) φ^s(A_{j_{C,K}}).

We may first assume that s ∉ N; the proof in the integer case is similar and even simpler. For short, let ⌊s⌋ = k and ⌈s⌉ = k + 1. By the singular value decomposition of A_{i_{C,K}} and A_{j_{C,K}}, one obtains the corresponding decomposable vectors of the form v^{(C,K)}_1 ∧ ⋯ ∧ v^{(C,K)}_j, so that for every C > 0 and K ∈ N and for all k ∈ Σ_K the analogous inequality holds for A^{∧k}.
By compactness and possibly taking a subsequence, we may assume that