Dimension of Gibbs measures with infinite entropy

We study the Hausdorff dimension of Gibbs measures with infinite entropy with respect to maps of the interval with countably many branches. We show that, under simple conditions, such measures are symbolic-exact dimensional, and we provide an almost sure value for the symbolic dimension. We also show that the lower local dimension is almost surely equal to zero, while the upper local dimension is almost surely equal to the symbolic dimension. In particular, we prove that a large class of Gibbs measures with infinite entropy for the Gauss map have Hausdorff dimension zero and packing dimension equal to 1/2, and so such measures are not exact dimensional.


Introduction
In this paper we study the dimension of measures invariant under a certain class of maps of the unit interval [0, 1]: expanding Markov Rényi (EMR) maps. These maps T : [0, 1] → [0, 1] admit representations by means of symbolic dynamics, and satisfy smoothness properties that allow us to use ergodic theoretic methods to study their geometric properties. Given an ergodic T-invariant probability measure µ, we are interested in the pointwise behaviour of the local dimension d_µ(x) = lim_{r→0} log µ(B(x, r))/log r,
where B(x, r) denotes the open ball of centre x and radius r. In general this limit may not exist, in which case we study the corresponding limit superior and limit inferior. When the limit exists almost everywhere, we say that the measure is exact dimensional; in that case, by ergodicity of µ, the value of the local dimension is constant almost everywhere. Knowledge of the almost sure behaviour of the local dimension yields information about the Hausdorff and the packing dimension of the measure. Two dynamical quantities are particularly relevant when studying the local dimension of such measures: the metric entropy h_µ (or simply the entropy) and the Lyapunov exponent λ_µ of (T, µ) (see section 2 for the definitions of h_µ, λ_µ and dimension). Formulae relating the dynamical invariants h_µ, λ_µ and the local dimension have been extensively studied over the last few decades in the case h_µ < ∞. For Bernoulli measures invariant under the Gauss map, Kinney and Pitcher proved in [KP66] that if the measure µ is defined by a probability vector p = {p_i}, the Hausdorff dimension of µ can be computed with the formula dim_H µ = (−Σ_{n=1}^∞ p_n log p_n) / (2 ∫_0^1 |log x| dµ(x)), provided that Σ_{n=1}^∞ p_n log n < ∞. For more general maps of the interval, it was proved in [LM85] that for a C¹ map T : [0, 1] → [0, 1] such that T and T′ are piecewise monotonic and the Lyapunov exponent λ_µ is positive, if µ is an invariant ergodic probability measure, then such a measure is exact dimensional [LM85, corollary in the appendix] and lim_{r→0} log µ(B(x, r))/log r = h_µ/λ_µ.
In particular, dim_H µ = h_µ/λ_µ. Other versions of the formula were proved by Young in [You82] and by Hofbauer and Raith in [HR92], among others. In all of these examples it is assumed that 0 < λ_µ < ∞. In the context of countable Markov systems, Mauldin and Urbański proved the following theorem ([MU03, theorem 4.4.2]): Volume lemma. Let (X, T) be a countable Markov system coded by the shift on countably many symbols (Σ, σ). Suppose that µ is a Borel shift-invariant ergodic probability measure on Σ such that at least one of the numbers H(µ, α) or λ_µ is finite, where H(µ, α) is the entropy of µ with respect to the natural partition α into cylinders of Σ. Then µ ∘ π^{−1} is exact dimensional and dim_H(µ ∘ π^{−1}) = h_µ/λ_µ, where π : Σ → X is the coding map. The case λ_µ = 0 was studied by Ledrappier and Misiurewicz in [LM85], wherein they constructed a C^r map of the interval and a non-atomic ergodic invariant measure which has zero Lyapunov exponent and for which the local dimension does not exist almost everywhere. More precisely, they show that the lower local dimension and the upper local dimension differ almost everywhere ([LM85, theorem 1]). For this construction, the authors consider a class of unimodal maps (Feigenbaum maps). The focus of this article is twofold. In the first place, we investigate the Hausdorff dimension of invariant ergodic measures for piecewise expanding maps of the interval with countably many branches; in particular, we focus on maps exhibiting properties similar to those of the Gauss map (EMR maps and Gauss-like maps, see definitions 2.1 and 2.2 respectively) and on measures with infinite entropy and infinite Lyapunov exponent. In the second place, we show that the measures considered are not exact dimensional, by showing that the upper local dimension is positive while the lower local dimension is zero almost everywhere. Theorem 1. Let T : [0, 1] → [0, 1] be a Gauss-like map and µ an infinite entropy Gibbs measure of controlled decay such that the decay ratio s exists. Then dim_H µ = 0 and dim_P µ = s.
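The finite entropy formula dim_H µ = h_µ/λ_µ can be checked numerically in the simplest possible setting. The sketch below is illustrative and not from the paper: it assumes the doubling map T(x) = 2x mod 1 with a Bernoulli measure, and compares the predicted value with a Monte Carlo estimate of the symbolic local dimension at a µ-typical point.

```python
import math, random

# Illustrative sketch (not the paper's setting): for the doubling map
# T(x) = 2x mod 1 and the Bernoulli measure giving digit 0 probability p,
# the volume lemma predicts dim_H mu = h_mu / lambda_mu.
p = 0.3
h = -(p * math.log(p) + (1 - p) * math.log(1 - p))  # entropy h_mu
lam = math.log(2)                                   # Lyapunov exponent
predicted = h / lam

# Estimate log mu(I_n(x)) / log |I_n(x)| at a mu-typical point x by sampling
# its binary digits i.i.d. with P(digit = 0) = p; here |I_n(x)| = 2^(-n).
random.seed(1)
n = 100_000
log_mu = 0.0
for _ in range(n):
    digit = 0 if random.random() < p else 1
    log_mu += math.log(p if digit == 0 else 1 - p)
local_dim = log_mu / (n * math.log(0.5))

print(predicted, local_dim)  # the two values should be close
```

Both the entropy and the Lyapunov exponent are finite here, which is exactly the situation the present paper moves away from.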
This shows that there is a dimension gap for this class of maps and measures. For the Gauss map, s = 1/2. The Gibbs assumption on the measure implies that a certain sequence of observables can be seen as a non-integrable stationary ergodic process, and allows us to use tools of infinite ergodic theory developed by Aaronson and Nakada (see [Aar77, AN03]). In particular, the pointwise behaviour of the Birkhoff sums with their largest term removed (trimmed sums) plays a fundamental role in our arguments. We remark that the methods used in the finite entropy context fail, as they rely on the fact that the measure and the diameter of the iterates of the natural Markov partition decrease at exponential rates given by h_µ and λ_µ respectively, enabling the use of coverings by balls at different scales. To tackle this problem, we make use of more refined coverings of balls, which are capable of detecting the asymptotic interaction between the Gibbs measure and the Lebesgue measure.
The paper is structured as follows. In section 2 we introduce the notation used throughout the paper as well as the main objects of study, and we state the results of the paper. In section 3 we compute the symbolic dimension and characterize it in terms of the Markov partition. In section 4 we study the consequences of h_µ = λ_µ = ∞ at the level of the asymptotic rate of contraction of the cylinders. In sections 5 and 6 we prove the results for the Hausdorff and the packing dimension respectively. We finish the article by stating some questions of interest that could not be answered with the methods used in this paper.

The class of maps
We start by introducing the EMR (expanding-Markov-Rényi) maps of the interval.
Definition 2.1. We say that a map T : I → I of the interval I = [0, 1] is an EMR map if there is a countable collection of closed intervals {I(n)}, with disjoint interiors int I(n), such that: (a) the map is C² on ∪_n int I(n); (b) some power of T is uniformly expanding, i.e., there are a positive integer r and a constant α > 1 such that |(T^r)′(x)| ≥ α for all x ∈ ∪_n int I(n); (c) the map is Markov and can be coded by a full shift: int T(I(n)) = (0, 1) for all n; (d) the map satisfies Rényi's condition: there is a constant E > 0 such that sup_n sup_{x,y,z ∈ int I(n)} |T″(x)|/(|T′(y)||T′(z)|) ≤ E < ∞. This class of maps was first introduced in [PW99] in the context of the multifractal analysis of the Lyapunov exponent for the Gauss map. Rényi's condition provides good estimates for the Lebesgue measure of the cylinders associated to the Markov structure of the map (see the next subsection). For simplicity, we will assume that the maps are orientation preserving (the orientation reversing case differs from the orientation preserving case only in the relative position of the cylinders). The set of branches must accumulate at at least one point, and we assume that it accumulates at exactly one point, namely the left endpoint of I (the case when the branches accumulate at the right endpoint of I is analogous). Re-indexing if necessary, we can assume that I(n + 1) < I(n) for all n. Let r_n = |I(n)|.
Definition 2.2. We say that an EMR map T is a Gauss-like map if it satisfies the following conditions: (a) r_n > 0 for every n ∈ N; (b) r_{n+1} ≤ r_n; (c) Σ_n r_n = 1; (d) 0 < K ≤ r_{n+1}/r_n ≤ K′ < ∞ for some constants K, K′; (e) {r_n} decays polynomially as n goes to infinity (see definition 3.7).
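As a concrete instance, for the Gauss map the branch domains are I(n) = [1/(n+1), 1/n], so r_n = 1/(n(n+1)). The following sketch checks conditions (a)-(e) numerically on a finite range (an illustration, not a proof):

```python
# Quick numeric check of the Gauss-like conditions (a)-(e) for the Gauss map
# lengths r_n = 1/(n(n+1)) on a finite range (a sketch, not a proof).
N = 10_000
r = [1.0 / (n * (n + 1)) for n in range(1, N + 1)]

assert all(x > 0 for x in r)                        # (a) positive lengths
assert all(r[i + 1] <= r[i] for i in range(N - 1))  # (b) non-increasing
total = sum(r)                                      # (c) telescopes to 1 - 1/(N+1)
ratios = [r[i + 1] / r[i] for i in range(N - 1)]    # (d) r_{n+1}/r_n = n/(n+2)
K, Kp = min(ratios), max(ratios)
decay = (N ** 2) * r[-1]                            # (e) n^2 r_n -> 1, so alpha = 2
print(total, K, Kp, decay)
```

The ratio bounds are K = 1/3 (attained at n = 1) and K′ < 1, and the polynomial decay rate in the sense of definition 3.7 is α = 2.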
We want to keep in mind piecewise linear maps as the main example, since for this class of maps calculations are simplified. We will also keep in mind the example of the Gauss map. In figure 1 we see an orientation preserving version of the Gauss map.

Markov structure and symbolic coding
We now describe the Markov structure of the maps considered. Given a finite sequence of natural numbers (a_1, …, a_n) ∈ N^n, the nth level cylinder associated to (a_1, …, a_n) is the set I(a_1, …, a_n) = I(a_1) ∩ T^{−1}(I(a_2)) ∩ ⋯ ∩ T^{−(n−1)}(I(a_n)). Let O = ∪_n ∪_k T^{−n}(∂I(k)); then given x ∈ [0, 1]∖O and n ∈ N, there exists a unique sequence (a_1(x), a_2(x), …) ∈ N^N such that x ∈ I(a_1(x), …, a_n(x)) for every n. We denote this sequence by (a_1, a_2, …) when x is clear from the context. We also write I_n(x) = I(a_1, …, a_n), and we say that x is coded by the sequence (a_n). From now on, whenever we write x ∈ I, we mean x ∈ I∖O.
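For the Gauss map (an assumed concrete example) the coding sequence is the continued fraction expansion, and cylinders can be computed explicitly: the level-n cylinder I(a_1, …, a_n) has endpoints given by the finite continued fractions [a_1, …, a_n] and [a_1, …, a_n + 1].

```python
from fractions import Fraction

# Sketch for the Gauss map: compute exact cylinder endpoints from finite
# continued fractions.
def cf_value(digits):
    value = Fraction(0)
    for a in reversed(digits):
        value = Fraction(1, a + value)
    return value

def cylinder(*digits):
    ends = (cf_value(list(digits)),
            cf_value(list(digits[:-1]) + [digits[-1] + 1]))
    return sorted(ends)  # [left endpoint, right endpoint]

print(cylinder(1))     # I(1) = [1/2, 1]
print(cylinder(2))     # I(2) = [1/3, 1/2]
print(cylinder(1, 1))  # I(1, 1) = [1/2, 2/3], nested inside I(1)
```

The nesting I(a_1, …, a_n, a_{n+1}) ⊂ I(a_1, …, a_n) visible here is exactly the Markov structure used throughout.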
We endow the space Σ = N^N of coding sequences with the topology generated by the cylinders defined above. Then the map π : Σ → I∖O given by π((x_n)) = ∩_n I(x_1, …, x_n) is a continuous bijection.
Given x ∈ I∖O with coding sequence (a_n) and n ≥ 1, denote by I^l_n(x) = I(a_1, …, a_{n−1}, a_n + 1) (resp. I^r_n(x) = I(a_1, …, a_{n−1}, a_n − 1) if a_n ≥ 2) the level n cylinder to the left (resp. right) of I_n(x). Also, denote Î_n(x) = I_n(x) ∪ I^r_n(x) ∪ I^l_n(x). If there is no risk of confusion, we omit the dependence on x.
Rényi's condition introduced in the previous subsection implies that the length of each cylinder is comparable to the reciprocal of the derivative of the iterates of the map at any point of the cylinder. More precisely, there is a constant D such that 0 < D^{−1} ≤ |(T^n)′(x)| · |I(a_1, …, a_n)| ≤ D < ∞ for every finite sequence (a_1, …, a_n) ∈ N^n and every x ∈ I(a_1, …, a_n).

The class of measures
We start by defining Gibbs measures. Definition 2.3. Let µ be a T-invariant probability measure. We say that µ is a Gibbs measure associated to the potential log ϕ : Σ → R if there exists a constant C > 0 such that C^{−1} ≤ µ(C(a_1, …, a_n)) / exp(S_n log ϕ(x) − nP(log ϕ)) ≤ C for every n, where C(a_1, …, a_n) denotes the symbolic cylinder determined by (a_1, …, a_n), x is any point in C(a_1, …, a_n), (a_1, …, a_n, …) is any sequence in Σ, S_n f(x) is the Birkhoff sum of f at the point x, and P(log ϕ) is a constant (depending on the potential) called the topological pressure of log ϕ.
Throughout this work we will assume that P(log ϕ) = 0; otherwise we can replace the potential by the zero pressure potential log ϕ − P(log ϕ). It is not immediate that this normalization does not affect our computations, and we will show later how to overcome this difficulty. The sequence p_n = µ(I(n)) will be of particular relevance for our computations.
We can project this measure to I by setting μ̄ = µ ∘ π^{−1}. We assume these measures are invariant and ergodic with respect to T. We will denote by µ both the measure on the symbolic space and the projected measure.
We define the nth variation of the potential log ϕ by var_n(log ϕ) = sup{|log ϕ(x) − log ϕ(y)| : x, y ∈ I(a_1, …, a_n), (a_1, …, a_n) ∈ N^n}. For each n, fix a point x_n ∈ I(n) and set q_n = ϕ(x_n). Definition 2.4. The decay ratio of µ is s = lim_{n→∞} log p_n/log r_n and the tail decay ratio of µ is ŝ = lim_{n→∞} log(Σ_{m≥n} p_m)/log(Σ_{m≥n} r_m), whenever these limits exist; equivalent definitions are obtained replacing p_n by q_n. Both definitions for s and ŝ agree since µ is a Gibbs measure. Note also that the definitions above are independent of the choice of the point x_n representing each cylinder if var_1(log ϕ) < ∞. By the Cesàro–Stolz theorem (see [Fur13, appendix B]) we can write the decay ratio as s = lim_{n→∞} (Σ_{k=1}^n p_k log p_k)/(Σ_{k=1}^n p_k log r_k). Definition 2.5. Assume that var_1(log ϕ) < ∞. Suppose that the sequence q = {q_n}_{n∈N} = {ϕ(x_n)} satisfies K ≤ q_{n+1}/q_n ≤ K′ for every n ∈ N, for some constants K, K′ > 0. Then we say that µ has controlled decay.
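The Cesàro–Stolz form of the decay ratio can be explored numerically. In the sketch below, the probability vector p_n ∝ 1/(n log² n) is a hypothetical infinite entropy choice, paired with the Gauss-like lengths r_n = 1/(n(n+1)); here log p_n/log r_n → s = 1/2, but the partial-sum quotient converges only at log log speed, so the finite-stage values remain well above the limit.

```python
import math

# Numeric sketch of the Cesaro-Stolz quotient for the hypothetical weights
# p_n ~ c/(n log^2 n) (n >= 2) and the Gauss-like lengths r_n = 1/(n(n+1)).
def quotient(N):
    weights = [1.0 / (n * math.log(n) ** 2) for n in range(2, N + 1)]
    c = 1.0 / sum(weights)  # normalize to a probability vector
    num = den = 0.0
    for n, w in zip(range(2, N + 1), weights):
        p_n, r_n = c * w, 1.0 / (n * (n + 1))
        num += p_n * math.log(p_n)
        den += p_n * math.log(r_n)
    return num / den

print(quotient(1_000), quotient(100_000))  # both lie strictly between s = 1/2 and 1
```

The extremely slow convergence is a feature of the infinite entropy regime: the entropy-like sums Σ p_k log p_k grow like log log n.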
This condition prevents large jumps of the potential along sufficiently sparse subsequences of {x_n}. By the Gibbs property, the same properties hold if we replace q_n by p_n.

Entropy and Lyapunov exponent
Our main assumption on the class of measures is that they have infinite entropy. This can be expressed by saying that the potential −log ϕ is not integrable with respect to µ.
Our definition of entropy differs from the conventional one (see [Wal82, chapter 4]), as we deal with partitions with infinite entropy. For this, recall the Shannon–McMillan–Breiman theorem adapted to our system, which in the case of Gibbs measures is equivalent to the ergodic theorem: Theorem 2.6 (Shannon–McMillan–Breiman). For any Gibbs measure µ associated to a potential with finite first variation, the limit lim_{n→∞} −(1/n) log µ(I_n(x)) (2.1) exists µ-almost everywhere and is constant. If −Σ_n p_n log p_n < ∞, then this constant is finite; otherwise, it is equal to infinity.
The proof for the case when the series is finite can be found in [VO16, section 9.3]. The proof for the infinite case follows from lemmas 2.7 and 3.2, using that the measures have the Gibbs property. We then define the entropy h_µ as the almost sure value of the limit (2.1).

Lemma 2.7. For a Gibbs measure with finite first variation, the entropy h_µ is finite if and only if either of the series −Σ_{n=1}^∞ p_n log p_n, −Σ_{n=1}^∞ q_n log q_n converges.
Proof. The partition of [0, 1] into the cylinders {I(n)} is a generating partition, and hence Sinai's generator theorem (see [VO16, corollary 9.2.5]) allows us to compute the entropy of µ using the entropy of this partition, which is given by −Σ_{n=1}^∞ p_n log p_n. The convergence of this series is equivalent to the convergence of −Σ_{n=1}^∞ q_n log q_n, since exp(−var_1(log ϕ)) < q_n/ϕ(x) < exp(var_1(log ϕ)) for any x ∈ I(n). We define the Lyapunov exponent as λ_µ = ∫_0^1 log |T′(x)| dµ(x). By the bounded distortion property, this integral converges if and only if the series −Σ_{n=1}^∞ q_n log r_n converges.
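The series criterion of lemma 2.7 can be illustrated numerically with two hypothetical weight sequences, one geometric (the entropy series converges) and one with logarithmic decay (the entropy series diverges, at log log speed):

```python
import math

# Numeric sketch of the series criterion with two hypothetical weight sequences.
def entropy_partial(p, lo, hi):
    return -sum(p(n) * math.log(p(n)) for n in range(lo, hi))

# Geometric weights p_n = 2^{-n}: the full series equals 2 log 2.
geometric = entropy_partial(lambda n: 2.0 ** (-n), 1, 61)

# Weights p_n = c/(n log^2 n), n >= 2: the partial sums keep growing, so a
# Gibbs measure with these cylinder weights has h_mu = infinity.
c = 1.0 / sum(1.0 / (n * math.log(n) ** 2) for n in range(2, 10 ** 6))
slow = lambda n: c / (n * math.log(n) ** 2)
partial = [entropy_partial(slow, 2, N) for N in (10 ** 3, 10 ** 6)]

print(geometric, partial)  # geometric ~ 2 log 2; partial sums keep increasing
```

The second sequence is the prototype of the infinite entropy measures studied in this paper.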

Hausdorff and packing dimension
In this section we introduce the elements of dimension theory that we will use throughout this work.
Recall that the diameter of a set U ⊂ R is given by diam U = sup{|x − y| : x, y ∈ U}, and that for a cover {U_i} of a set X ⊂ R, its diameter is given by sup_i diam U_i. Definition 2.8. Given X ⊂ R and α ∈ R, the α-dimensional Hausdorff measure of X is given by m(X, α) = lim_{δ→0} inf Σ_i (diam U_i)^α, where the infimum is taken over finite or countable covers {U_i} of X with diameter at most δ.
Since m(X, α) is non-increasing in α for a fixed set X, it is possible to prove that there exists a number s ∈ [0, ∞] such that m(X, α) = ∞ for α < s and m(X, α) = 0 for α > s.
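This critical-exponent behaviour can be seen in the standard middle-third Cantor set example (an illustration outside the paper's setting), where the natural generation-n covers consist of 2^n intervals of diameter 3^{-n}:

```python
import math

# Cover sums for the middle-third Cantor set at exponent alpha:
# sum over a generation-n cover = 2^n * 3^{-n*alpha}.
def cover_sum(alpha, n):
    return (2 ** n) * (3.0 ** (-n * alpha))

s = math.log(2) / math.log(3)  # critical exponent = dim_H of the Cantor set

below = cover_sum(s - 0.1, 100)  # blows up with n: m(X, alpha) = infinity
at = cover_sum(s, 100)           # equals 1 for every n
above = cover_sum(s + 0.1, 100)  # tends to 0: m(X, alpha) = 0
print(below, at, above)
```

The jump from infinite to zero cover sums as α crosses s = log 2/log 3 is exactly the dichotomy described above.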
Definition 2.9. The unique number s given above is called the Hausdorff dimension of X, and we denote it by dim_H X. We extend the notion of Hausdorff dimension to finite Borel measures on R: Definition 2.10. Let µ be a finite Borel measure on R. The Hausdorff dimension of µ is defined by dim_H µ = inf{dim_H Z : Z ⊂ R Borel with µ(R∖Z) = 0}. We now define the analogous notion of packing dimension. Definition 2.11. We say that a collection of balls {U_n} ⊂ R is a δ-packing of the set E ⊂ R if the diameters of the balls are less than or equal to δ, they are pairwise disjoint and their centres belong to E. For α ∈ R, the α-dimensional pre-packing measure of E is given by P*(E, α) = lim_{δ→0} sup Σ_n (diam U_n)^α, where the supremum is taken over all δ-packings of E. The α-dimensional packing measure of E is defined by P(E, α) = inf Σ_i P*(E_i, α), where the infimum is taken over all countable covers {E_i} of E. The packing dimension dim_P E is then the critical value of α, defined as for the Hausdorff dimension. We extend the notion of packing dimension to finite Borel measures on R.
Definition 2.12. Let µ be a finite Borel measure on R. The packing dimension of µ is defined by dim_P µ = inf{dim_P Z : Z ⊂ R Borel with µ(R∖Z) = 0}. Bounding the Hausdorff dimension from above or the packing dimension from below usually involves exhibiting a single suitable cover or packing of the space, while for the opposite bounds we have to deal with every cover of the space. There are several tools to help with this problem, and we will make use of the so-called (local) mass distribution principles. For this, we introduce the notion of local dimension.
Definition 2.13. The lower and upper pointwise dimensions of the measure µ at a point x ∈ X are given by d_µ(x) = liminf_{r→0} log µ(B(x, r))/log r and d̄_µ(x) = limsup_{r→0} log µ(B(x, r))/log r. When both limits coincide, we call the common value the pointwise dimension of µ at x, denote it by d_µ(x), and say that µ is exact dimensional if d_µ(x) exists and is constant µ-almost everywhere. We now state the local version of the mass distribution principle.
Proposition 2.14. Let X ⊂ R, let µ be a finite Borel measure and let α ∈ (0, ∞]. If d_µ(x) ≥ α for every x ∈ X and µ(X) > 0, then dim_H X ≥ α; if d_µ(x) ≤ α for every x ∈ X, then dim_H X ≤ α. The analogous statements hold for the packing dimension with the upper pointwise dimension d̄_µ in place of d_µ. Proof. This follows from proposition 2.3 of [Fal97]. In particular, if d_µ(·) is constant almost everywhere, then dim_H µ is equal to that constant value. Analogously, if d̄_µ(·) is constant almost everywhere, then dim_P µ is equal to that constant value.
A notion of dimension which is better adapted to the underlying structure of our dynamical system is the symbolic dimension, which we proceed to define.
Definition 2.15. Given x ∈ I, we define the lower symbolic dimension of µ at x by δ(x) = liminf_{n→∞} log µ(I_n(x))/log |I_n(x)| and the upper symbolic dimension of µ at x by δ̄(x) = limsup_{n→∞} log µ(I_n(x))/log |I_n(x)|. If δ(x) = δ̄(x), then we define the symbolic dimension of µ at x as the common value, denote it by δ(x), and say that µ is symbolic exact dimensional at x. This notion is fundamental as it provides a connection between the symbolic dynamics and the geometric properties of the measures we are interested in.

Computation of the symbolic dimension
We now prove that, under the above assumptions, the Gibbs measure µ is symbolic exact dimensional and that the symbolic dimension coincides with the decay ratio. This result does not depend on the decay rate of the lengths of the partition elements.
In general, the Lyapunov exponent majorizes the entropy; in a more general setting, this result is known as Ruelle's inequality (see [Rue78]). In particular, if h_µ = ∞ then λ_µ = ∞.
Proof. This is an immediate consequence of the volume lemma stated in the introduction: if h_µ = ∞ and λ_µ < ∞, then dim_H µ = h_µ/λ_µ = ∞, which is impossible.
We now prove a well known fact about non-integrable observables. Lemma 3.2. Let f ≥ 0 be a measurable function with ∫ f dµ = ∞. Then lim_{n→∞} S_n f(x)/n = ∞ for µ-almost every x.
Proof. The proof is a standard application of the monotone convergence theorem. Assume f is non-negative (otherwise, decompose f into its positive and negative parts) and let M > 0. Then liminf_{n→∞} S_n f(x)/n ≥ lim_{n→∞} S_n min{f, M}(x)/n = ∫ min{f, M} dµ almost everywhere, by Birkhoff's ergodic theorem applied to min{f, M}. By the monotone convergence theorem, ∫ min{f, M} dµ → ∫ f dµ = ∞ as M → ∞. This result implies in particular that we can assume that the pressure of our potential is zero, as S_n(log ϕ) dominates −nP(log ϕ) when log ϕ is not integrable.
We formulate a lemma regarding the metric and measure theoretic properties of the cylinders associated to the map. This will allow us to write geometric quantities in ergodic theoretic terms. Its proof is a standard application of the bounded distortion and Gibbs properties. Lemma 3.3. For every finite sequence (a_1, …, a_n) ∈ N^n and j, m ∈ N with m ≤ j, we have that: (a) |log |I(a_1, …, a_n)| − Σ_{k=1}^n log r_{a_k}| ≤ nD_1 + D_2; (b) |log |∪_{k=m}^{j} I(a_1, …, a_{n−1}, a_n + k)| − Σ_{k=1}^{n−1} log r_{a_k} − log(Σ_{k=m}^{j} r_{a_n+k})| ≤ nD_1 + D_2; (c) |log |∪_{k=m}^{∞} I(a_1, …, a_{n−1}, a_n + k)| − Σ_{k=1}^{n−1} log r_{a_k} − log(Σ_{k=m}^{∞} r_{a_n+k})| ≤ nD_1 + D_2; (d) |log µ(I(a_1, …, a_n)) − Σ_{k=1}^n log p_{a_k}| ≤ nG_1 + G_2; (e) |log µ(∪_{k=m}^{j} I(a_1, …, a_{n−1}, a_n + k)) − Σ_{k=1}^{n−1} log p_{a_k} − log(Σ_{k=m}^{j} p_{a_n+k})| ≤ nG_1 + G_2; (f) |log µ(∪_{k=m}^{∞} I(a_1, …, a_{n−1}, a_n + k)) − Σ_{k=1}^{n−1} log p_{a_k} − log(Σ_{k=m}^{∞} p_{a_n+k})| ≤ nG_1 + G_2; where D_1, D_2 are distortion constants and G_1, G_2 are constants arising from the Gibbs property.
We proceed to compute the symbolic dimension of our system. This result holds regardless of the decay rate of the sequence {r_n}.
Theorem 3.4. Let T be an EMR map and µ a Gibbs measure with controlled decay and infinite entropy. Then, if the decay ratio s exists, the measure µ is symbolic exact dimensional and δ(x) = s for µ-almost every x ∈ I. Proof. By lemma 3.2 applied to the observables −log ϕ and −log r_{a_1}, and by lemma 3.3, we have δ(x) = liminf_{n→∞} log µ(I_n(x))/log |I_n(x)| = liminf_{n→∞} log(q_{a_1} ⋯ q_{a_n})/log(r_{a_1} ⋯ r_{a_n}) for almost every x ∈ I, and analogously for the upper symbolic dimension, δ̄(x) = limsup_{n→∞} log(q_{a_1} ⋯ q_{a_n})/log(r_{a_1} ⋯ r_{a_n}), where (a_1, a_2, …) is the sequence coding x. With a similar argument we can also show that the same holds true if we replace q_n by p_n: δ(x) = liminf_{n→∞} log(p_{a_1} ⋯ p_{a_n})/log(r_{a_1} ⋯ r_{a_n}), and analogously for the upper symbolic dimension. For x ∈ I and n, k ≥ 1, define f_{n,k}(x) = #{1 ≤ j ≤ n : a_j(x) = k}, that is, the number of times the orbit of x visits the interval I(k) in the first n steps. Recall from the Birkhoff theorem that, for every k, lim_{n→∞} f_{n,k}/n = p_k for µ-almost every x ∈ I. In particular, the orbit of almost every x ∈ I visits every cylinder I(n) infinitely many times. Fix x in the set where the convergence holds, and define m(n) = max{a_1(x), …, a_n(x)}. The previous remark shows that m is unbounded, and it is clearly non-decreasing. Thus, we can write log(p_{a_1} ⋯ p_{a_n})/log(r_{a_1} ⋯ r_{a_n}) = (Σ_{k=1}^{m(n)} f_{n,k}(−log p_k))/(Σ_{k=1}^{m(n)} f_{n,k}(−log r_k)). Given ǫ > 0, there exists n_1 such that |log p_k/log r_k − s| < ǫ for every k ≥ n_1, that is, (−log p_k) < (ǫ + s)(−log r_k) for k ≥ n_1. For n large enough so that m(n) > n_1, we write log(p_{a_1} ⋯ p_{a_n})/log(r_{a_1} ⋯ r_{a_n}) = (Σ_{k=1}^{n_1} f_{n,k}(−log p_k) + Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log p_k)) / (Σ_{k=1}^{n_1} f_{n,k}(−log r_k) + Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k)).
We split this quotient into two parts, A(n) = (Σ_{k=1}^{n_1} f_{n,k}(−log p_k))/(Σ_{k=1}^{m(n)} f_{n,k}(−log r_k)) and B(n) = (Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log p_k))/(Σ_{k=1}^{m(n)} f_{n,k}(−log r_k)).
For k = 1, …, n_1, taking ǫ_k = p_k/2 there exists n_2 ≥ n_1 such that np_k/2 ≤ f_{n,k} ≤ 3np_k/2 for every n ≥ n_2. Thus, the terms Σ_{k=1}^{n_1} f_{n,k}(−log p_k) and Σ_{k=1}^{n_1} f_{n,k}(−log r_k) grow linearly in n for n large enough. We will show that Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k) grows faster than linearly as a function of n.
Given M > 0, since the Lyapunov exponent is infinite, there exists n_3 such that Σ_{k=1}^{m(n)} f_{n,k}(−log r_k) ≥ Mn for every n ≥ n_3. Now, for k = n_1 + 1, …, m(n_3), take ǫ_k = p_k/2, so there exists n_4 ≥ n_3 such that np_k/2 ≤ f_{n,k} ≤ 3np_k/2 for every n ≥ n_4. This shows that A(n) → 0 as n → ∞. To estimate B(n), we note that B(n) ≤ (ǫ + s)(Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k)) / (Σ_{k=1}^{n_1} f_{n,k}(−log r_k) + Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k)). Using the same argument as above, we can show that Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k) grows faster than linearly, so limsup_{n→∞} B(n) ≤ s + ǫ. This shows that δ̄(x) ≤ s.
The proof of the opposite inequality is analogous.
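The visit-count convergence f_{n,k}/n → p_k used in the proof can be simulated directly. The sketch below uses i.i.d. geometric digits as a hypothetical stand-in for a Gibbs digit process:

```python
import random
from collections import Counter

# Simulation sketch of the visit counts f_{n,k}: i.i.d. geometric digits with
# p_k = 2^{-k} are a hypothetical stand-in for a Gibbs digit process.
random.seed(0)
n = 200_000
digits = []
for _ in range(n):
    k = 1
    while random.random() >= 0.5:  # geometric law: P(k) = 2^{-k}
        k += 1
    digits.append(k)

f = Counter(digits)  # f[k] = f_{n,k}, visits to I(k) in the first n steps
freq1, freq3 = f[1] / n, f[3] / n  # should approach p_1 = 1/2, p_3 = 1/8
print(freq1, freq3, max(digits))   # large digits occur, so m(n) is unbounded
```

The last printed value illustrates that the running maximum digit m(n) in the proof is unbounded along a typical orbit.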

The decay ratio
Now we proceed to study the properties of the decay ratio. In fact, we show that for infinite entropy measures it is completely determined by the properties of the partition {I(n) : n ∈ N}. Definition 3.5. The convergence exponent of the partition {r_n} of I is defined by s_∞ = inf{t ≥ 0 : Σ_{n=1}^∞ r_n^t < ∞}. Proposition 3.6. In general, we have that s_∞ ≤ s. Under the assumption that h_µ = ∞, we also have s ≤ s_∞.
Proof. Given ǫ > 0, there exists n_1 such that (s + ǫ) log r_n < log p_n < (s − ǫ) log r_n for every n ≥ n_1, and thus r_n^{s+ǫ} < p_n for every n ≥ n_1. Summing over n we get Σ_{n≥n_1} r_n^{s+ǫ} < Σ_{n≥n_1} p_n ≤ 1 < ∞. Hence s_∞ ≤ s + ǫ for every ǫ > 0, and so s_∞ ≤ s. Now, suppose that s ≤ s_∞ fails, that is, s_∞ < s; then there is α > 0 such that s_∞ < s_∞ + α < s and Σ_n r_n^{s_∞+α} < ∞. Recall the limit comparison criterion for convergence of series: let a_n, b_n > 0 be sequences such that limsup_{n→∞} a_n/b_n = c ∈ [0, ∞) and Σ b_n < ∞; then Σ a_n < ∞. Take ǫ > 0 small enough that s − ǫ > s_∞ + α, and let f : [0, ∞) → R be the function defined by f(0) = 0 and f(x) = x^{s−ǫ−s_∞−α}(−log x) for x ∈ (0, 1]. It is easy to see that f is continuous. Taking a_n = r_n^{s−ǫ}(−log r_n) and b_n = r_n^{s_∞+α} and using the continuity of f, we get that limsup_{n→∞} a_n/b_n = lim_{n→∞} f(r_n) = 0.
Since p_n < r_n^{s−ǫ} and −log p_n < (s + ǫ)(−log r_n) for large n, we conclude that −Σ_n p_n log p_n ≤ (s + ǫ) Σ_n r_n^{s−ǫ}(−log r_n) < ∞, contradicting the fact that the entropy is infinite.
We now give a definition for the asymptotic decay of the sequence {r_n}.
Definition 3.7. The asymptotic rate of the sequence {r_n} is defined as α = sup{t ≥ 0 : lim_{n→∞} n^t r_n < ∞}.
We say that {r n } decays polynomially if α > 1, and we say that {r n } decays superpolynomially if α = ∞.
Note that if {r_n} has polynomial decay with asymptotic rate α, then s_∞ = 1/α. If we know the asymptotic rate of {r_n}, we can compute the asymptotic rate of the tails of the series: Lemma 3.8. If the asymptotic rate of {r_n} is α > 1, then the asymptotic rate of {R_n = Σ_{m≥n} r_m} is α − 1.
Proof. It suffices to show that the sets A = {t ≥ 1 : lim_{n→∞} n^t r_n < ∞} and A′ = {t ≥ 1 : lim_{n→∞} n^{t−1} R_n < ∞} coincide. Let t ∈ A; then lim_{n→∞} n^t r_n = d < ∞, and so given ǫ > 0 there is n_0 ∈ N such that r_n ≤ (d + ǫ)n^{−t} for n ≥ n_0. Hence, for n ≥ n_0, R_n = Σ_{m≥n} r_m ≤ (d + ǫ) Σ_{m≥n} m^{−t} ≤ C n^{1−t} for some constant C, from which it follows that t ∈ A′. Now, if t ∈ A′, we have that lim_{n→∞} n^{t−1} R_n = d′ < ∞, and thus, given ǫ > 0, there is n_1 ∈ N such that R_n ≤ (d′ + ǫ)n^{1−t} for n ≥ n_1. Since {r_n} is decreasing, this implies that n r_{2n} ≤ Σ_{m=n}^{2n} r_m ≤ R_n ≤ (d′ + ǫ)n^{1−t}, so (2n)^t r_{2n} is bounded, from which it follows that t ∈ A, proving the assertion.
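Lemma 3.8 is easy to test numerically; the sketch below takes r_n = 1/n^α with α = 2, for which the tails R_n should decay like n^{-(α-1)} = 1/n:

```python
# Numeric check of the tail asymptotics for r_n = 1/n^alpha with alpha = 2:
# the tails R_n = sum_{m >= n} r_m should decay like n^{-(alpha - 1)} = 1/n.
alpha, M = 2, 500_000  # M = truncation point; the neglected tail is ~ 1/M
r = [0.0] + [1.0 / m ** alpha for m in range(1, M + 1)]

R = [0.0] * (M + 2)  # suffix sums: R[n] = sum_{m=n}^{M} r_m
for m in range(M, 0, -1):
    R[m] = R[m + 1] + r[m]
tail_correction = 1.0 / M  # integral estimate of the discarded tail

scaled = [n ** (alpha - 1) * (R[n] + tail_correction)
          for n in (100, 1_000, 10_000)]
print(scaled)  # each value should be close to 1/(alpha - 1) = 1
```

The scaled tails stabilize near 1/(α − 1), matching the asymptotic rate α − 1 claimed by the lemma.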

Infinite ergodic theory
In this section we explore the consequences of the non-integrability of the functions −log r_{a_1} and −log p_{a_1} (or equivalently, of h_µ = λ_µ = ∞). Using tools of infinite ergodic theory, we prove that along a subsequence the diameter of the cylinders decreases faster than exponentially from a given level to the next.

Finite Lyapunov exponent argument
We now present one of the usual arguments used to compute Hausdorff dimensions, and remark how it fails in our case.

Lemma 4.1. Let T be an EMR map and µ a Gibbs measure. Then for almost every x ∈ I and every r > 0 there exists n such that
log µ(B(x, r))/log r ≤ log µ(I_n(x))/log |I_{n−1}(x)|. (4.1) Proof. This is a well known argument and can be found for instance in [Pes08]. Given r > 0, there exists a unique integer n = n(r) such that |I_n(x)| < r ≤ |I_{n−1}(x)|, so that I_n(x) ⊆ B(x, r). Then log µ(I_n(x)) ≤ log µ(B(x, r)), and since log r ≤ log |I_{n−1}(x)|, we obtain log µ(B(x, r))/log r ≤ log µ(I_n(x))/log |I_{n−1}(x)|, as we wanted.
In a similar way, it is possible to show a reverse inequality (4.2) of the same type, involving constants C_1, C_2 arising from the controlled decay property and Rényi's property respectively. Note that if λ_µ < ∞, then inequalities (4.1) and (4.2) together with the ergodic theorem would immediately imply that s = dim_H µ = dim_P µ. However, since in our case λ_µ = ∞, the previous argument does not work. In fact, here lies the main difficulty of the infinite entropy and Lyapunov exponent case. The following theorem shows that the situation is as bad as it can get: for almost every point, the diameter of the cylinders decreases arbitrarily fast from one level to the next along a subsequence.

Theorem 4.2. Let T be a Gauss-like map and µ an infinite entropy Gibbs measure with controlled decay. Then for µ-almost every x ∈ I, liminf_{n→∞} log |I_n(x)|/log |I_{n−1}(x)| = 1 and limsup_{n→∞} log |I_n(x)|/log |I_{n−1}(x)| = ∞.
The proof of the first equality is an immediate consequence of recurrence. We postpone the proof of the second equality until we have set up the appropriate tools.

Corollary 4.3. For almost every x ∈ I, we have that d(x) ≤ s, and hence dim_H µ ≤ s.
The main tools that we will use to prove theorem 4.2 are results about the pointwise behaviour of trimmed sums.

Trimmed convergence
In this section we introduce some notions and results from infinite ergodic theory. Define g_n = (−log r_{a_1}) ∘ T^{n−1}. The cumulative distribution function of g_1 is F(t) = µ(g_1 ≤ t), and it can be seen that µ(g_1) := ∫_0^1 g_1 dµ = λ_µ. By invariance of the measure, the distribution of g_n is the same as that of g_1. As we saw in lemma 3.2, the ergodic theorem fails to provide nontrivial information about the sums of these observables. This observation was vastly generalized for i.i.d. random variables by Chow and Robbins in [CR61], and in the ergodic stationary case by Aaronson in [Aar77], who proved the following theorem: for every sequence of positive constants {b_n}, either limsup_{n→∞} S_n(g_1)/b_n = ∞ almost everywhere, or liminf_{n→∞} S_n(g_1)/b_n = 0 almost everywhere. It is possible to prove that the lack of convergence in the previous theorem is due to a finite number of terms which are not comparable in size to the rest of the terms of the sum. This was proved in the i.i.d. case by Mori in [Mor76, Mor77] and in the stationary ergodic case by Aaronson and Nakada in [AN03]. We formulate the result of Aaronson and Nakada in a setting appropriate for our purposes.
We denote the ergodic sum of a function f by S_n(f)(x), and define the trimmed sum S′_n(f)(x) = S_n(f)(x) − max_{1≤k≤n} f(T^{k−1}x). When the dependence of S_n(f) and S′_n(f) on f is clear, we drop it from the notation and write S_n and S′_n. We say that a process {X_n = f ∘ T^{n−1}} has trimmed convergence if there is a sequence of constants {b_n} such that S′_n/b_n → 1 almost everywhere. Theorem 4.4 (Aaronson–Nakada). Suppose that the process is continued fraction mixing with exponential rate (see [AN03]), and that Σ_{n=1}^∞ ε²(n)/n < ∞.
Then {X_n} has trimmed convergence.
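The trimming phenomenon can be simulated in the classical i.i.d. St. Petersburg example (a hypothetical stand-in for the paper's CF-mixing setting, in the spirit of Mori's i.i.d. version of the result): removing the single largest term stabilizes the normalized sums.

```python
import math, random

# Simulation sketch with i.i.d. St. Petersburg variables P(X = 2^k) = 2^{-k},
# a classical non-integrable example: with b_n = n log2(n), the full sums
# S_n/b_n fluctuate, but removing the single largest term stabilizes the
# ratio near 1.
random.seed(7)
n = 2 ** 17
sample = []
for _ in range(n):
    k = 1
    while random.random() >= 0.5:
        k += 1
    sample.append(2.0 ** k)

b_n = n * math.log2(n)
full = sum(sample) / b_n                     # S_n / b_n
trimmed = (sum(sample) - max(sample)) / b_n  # S'_n / b_n
print(full, trimmed)
```

Repeating the experiment with different seeds shows the trimmed ratio clustering while the full ratio is occasionally dragged far away by a single huge term.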
As remarked in [AN03], any Gibbs–Markov map is CF-mixing with exponential rate. For our particular sequence, the condition of the previous theorem can be expressed explicitly in terms of the sequences {p_n} and {r_n}: if Σ_{n=1}^∞ (log r_n)²(p_n² + 2p_n p_{n+1}) < ∞, then the sequence {g_n = (−log r_{a_1}) ∘ T^{n−1}} has trimmed convergence.
Proof. We show that if Σ_{n=1}^∞ (log r_n)²(p_n² + 2p_n p_{n+1}) < ∞, then the condition of the previous theorem is satisfied. Let F̄(t) = µ(g_1 > t); we compare the sum appearing in that condition to the corresponding integral. We can see that if t ∈ [0, −log r_1) then F̄(t) = 1, while if t ∈ [−log r_n, −log r_{n+1}) for n ≥ 1 then F̄(t) = Σ_{m=n+1}^∞ p_m. Now write a_n = (log r_n)² and b_n = Σ_{i,j≥n} p_i p_j. Then the expression in question has the form Σ_{n≥1}(a_{n+1} − a_n)b_n, which can be rewritten by summation by parts. With this, the integral becomes Σ_{n=1}^∞ (log r_n)²(p_n² + 2p_n p_{n+1}), as we wanted to prove.
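The rearrangement invoked here is summation by parts; the identity can be checked numerically for truncated sequences (the probability vector p_n ∝ n^{-3/2} below is a hypothetical choice):

```python
import math

# Numeric check of the summation by parts identity
#   sum_{n=1}^{N-1} (a_{n+1} - a_n) b_n
#     = a_N b_{N-1} - a_1 b_1 + sum_{n=2}^{N-1} a_n (b_{n-1} - b_n)
# with a_n = (log r_n)^2, b_n = R_n^2 and R_n = sum_{m >= n} p_m, for the
# Gauss-like lengths r_n = 1/(n(n+1)) and a hypothetical probability vector
# p_n proportional to n^{-3/2} (truncated at N).
N = 500
p = [0.0] + [1.0 / m ** 1.5 for m in range(1, N + 1)]
total = sum(p)
p = [x / total for x in p]
R = [0.0] * (N + 2)
for m in range(N, 0, -1):
    R[m] = R[m + 1] + p[m]

a = lambda n: math.log(1.0 / (n * (n + 1))) ** 2
b = lambda n: R[n] ** 2

lhs = sum((a(n + 1) - a(n)) * b(n) for n in range(1, N))
rhs = (a(N) * b(N - 1) - a(1) * b(1)
       + sum(a(n) * (b(n - 1) - b(n)) for n in range(2, N)))
print(lhs, rhs)  # the two sides agree
```

Note also that b_{n} − b_{n+1} = R_n² − R_{n+1}² = p_n² + 2p_n R_{n+1}, which is the source of the p_n² + 2p_n p_{n+1} pattern in the final series.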
We now show that the trimmed convergence condition is satisfied by systems for which {r_n} decays polynomially or slower: under the standing assumptions on T and µ, the sequence {g_n = (−log r_{a_1}) ∘ T^{n−1}} has trimmed convergence.
Proof. Since p_n and p_{n+1} are comparable, it suffices to prove that Σ_{n=1}^∞ (log r_n)² p_n² < ∞.
Note that {p_n} ∈ ℓ², and since the sequence {p_n} is decreasing, we have p_n ≤ C/n for some constant C > 0. Since {r_n} decays polynomially, (log r_n)² grows like (log n)², and comparing in the limit the series Σ_n (log n)²/n² with the series Σ_n p_n²(log r_n)², we get that the latter series converges. This proves the trimmed convergence. We can now finish the proof of the second equality in theorem 4.2. Let {b_n} be the normalizing sequence given by trimmed convergence, so that S′_n/b_n → 1 almost everywhere, and recall Aaronson's dichotomy: either limsup_n S_n/b_n = ∞ almost everywhere or liminf_n S_n/b_n = 0 almost everywhere. Since S_n ≥ S′_n and S′_n is asymptotic to b_n, the first condition must hold in a set of full measure. Let (a_n) be the coding sequence of x. With an argument analogous to the one used in the proof of theorem 3.4, the limit in question is equivalent to limsup_{n→∞} (Σ_{k=1}^n log r_{a_k})/(Σ_{k=1}^{n−1} log r_{a_k}) = 1 + limsup_{n→∞} (log r_{a_n})/(log(r_{a_1} ⋯ r_{a_{n−1}})). Given 1 > ε > 0, there exists n_0 such that S′_n(x) ≤ (1 + ε)b_n for every n ≥ n_0. Since limsup_n S_n/b_n = ∞, given an integer M > 0 there exists n_1 ≥ n_0 such that S_{n_1}(x)/b_{n_1} > 2M + 1. Combining these two inequalities, we obtain max_{1≤k≤n_1} g_k(x) = S_{n_1}(x) − S′_{n_1}(x) ≥ (2M + 1)b_{n_1} − (1 + ε)b_{n_1} ≥ 2M b_{n_1} ≥ M S′_{n_1}(x). Now, there exists an index j ∈ {1, …, n_1} such that g_j = max{g_1, …, g_{n_1}} at x, and so S′_j(x) = S_{j−1}(x). Since the g_i are positive, we have that S_{j−1}(x) = S′_j(x) ≤ S′_{n_1}(x), and hence g_j(x) ≥ M S′_{n_1}(x) ≥ M S_{j−1}(x). This implies that limsup_{n→∞} g_n(x)/S_{n−1}(x) = ∞, and so limsup_{n→∞} log |I_n|/log |I_{n−1}| = ∞, as we wanted to prove.

Computing the Hausdorff dimension
With the tools developed in the previous sections, we proceed with the dimension computations. We first prove an upper bound for dim_H µ related to the tail decay ratio ŝ. We prove two necessary lemmas to give the desired bound: the first lemma shows that {p_n} decays no faster than n^{−(1+δ)} for any δ > 0, while the second shows the existence of ŝ and that ŝ = 0 for Gauss-like maps.
Lemma 5.1. Suppose that the decay ratio exists and is equal to s, the sequence {r_n} decays polynomially and the measure µ has infinite entropy. Then for all δ > 0 there exist constants C > 0 and n_0 such that p_n ≥ C/n^{1+δ} for all n ≥ n_0.
Proof. Let α > 0 be the polynomial decay rate of {r_n}. Since by proposition 3.6 we have s = s_∞ = 1/α, we can take ǫ > 0 small enough that ǫα + ǫs + ǫ² < δ, so that (α + ǫ)(s + ǫ) < 1 + δ. Then there exist C > 0 and n_0 ∈ N such that r_n ≥ C/n^{α+ǫ} and log p_n ≥ (s + ǫ) log r_n for all n ≥ n_0. This implies that p_n ≥ r_n^{s+ǫ} ≥ C^{s+ǫ}/n^{(α+ǫ)(s+ǫ)} ≥ C^{s+ǫ}/n^{1+δ} for all n ≥ n_0, as we wanted.
Lemma 5.2. Under the assumptions of the previous lemma, the tail decay ratio ŝ (definition 2.4) exists and is equal to zero.
Proof. By the lemma above, for δ > 0 there are constants C, n_0 such that p_n ≥ C/n^{1+δ} for all n ≥ n_0. This implies that Σ_{m≥n} p_m ≥ C/(δn^δ) for n ≥ n_0. On the other hand, if we take ǫ < α − 1, there exists n_1 such that r_n ≤ C′/n^{α−ǫ} for n ≥ n_1, and consequently Σ_{m≥n} r_m ≤ C′/((α − ǫ − 1)n^{α−ǫ−1}) for n ≥ n_1. Hence limsup_{n→∞} log(Σ_{m≥n} p_m)/log(Σ_{m≥n} r_m) ≤ δ/(α − ǫ − 1).
Letting δ → 0 we conclude that ŝ = 0.
Now we can compute the lower local dimension and, consequently, obtain the Hausdorff dimension of the measure. In what follows, given x with coding sequence (a_n), we write I^{m·ℓ}_n(x) = I(a_1(x), …, a_{n−1}(x), a_n(x) + m).

Proposition 5.3. Suppose T is a Gauss-like map and µ is an infinite entropy Gibbs measure with controlled decay such that the decay ratio s exists. Then d(x) = 0 for µ-almost every x ∈ I. Proof. For n ≥ 1 set r_n = |∪_{m=0}^∞ I^{m·ℓ}_n(x)|. Since x ∈ I_n(x), the union ∪_{m=0}^∞ I^{m·ℓ}_n(x) is contained in the closed ball of centre x and radius r_n, and hence log µ(B(x, r_n))/log r_n ≤ (log µ(∪_{m=0}^∞ I^{m·ℓ}_n(x)))/(log |∪_{m=0}^∞ I^{m·ℓ}_n(x)|).
Note now that the above inequality can be expressed in terms of the sequences {p_n}, {r_n} using lemma 3.3: log µ(B(x, r_n))/log r_n ≤ (Σ_{k=1}^{n−1} log p_{a_k} + log Σ_{m=0}^∞ p_{a_n+m} − nG_1 − G_2) / (Σ_{k=1}^{n−1} log r_{a_k} + log Σ_{m=0}^∞ r_{a_n+m} + nD_1 + D_2), where G_1, G_2 are constants arising from the Gibbs property and the finite first variation of the potential, and D_1, D_2 are constants arising from the bounded distortion property.
For n large enough, we have that Σ_{k=1}^{n−1} log p_{a_k} ≥ (s + ǫ) Σ_{k=1}^{n−1} log r_{a_k}, and since ŝ = 0, if a_n is large enough we also have log Σ_{m=0}^∞ p_{a_n+m} ≥ ǫ log Σ_{m=0}^∞ r_{a_n+m}. Thus, if a_n is large enough, log µ(B(x, r_n))/log r_n ≤ ((s + ǫ) Σ_{k=1}^{n−1} log r_{a_k} + ǫ log Σ_{m=0}^∞ r_{a_n+m} − nG_1 − G_2) / (Σ_{k=1}^{n−1} log r_{a_k} + log Σ_{m=0}^∞ r_{a_n+m} + nD_1 + D_2).
If α > 1 is the polynomial decay rate of {r_n}, then by lemma 3.8 the tail Σ_{m=0}^∞ r_{a_n+m} decays with asymptotic rate α − 1. We can then rewrite the above inequality as log µ(B(x, r_n))/log r_n ≤ ((s + ǫ) Σ_{k=1}^{n−1} log r_{a_k} + ǫK(α − 1) log r_{a_n} − nG_1 − G_2) / (Σ_{k=1}^{n−1} log r_{a_k} + K(α − 1) log r_{a_n} + nD_1 + D_2),
where K is the constant implied in the tail asymptotic for {r_n}. By theorems 4.4 and 4.2, we can take an increasing subsequence $(a_{n_k})$ so that
$$ \lim_{k\to\infty} \frac{-\log r_{a_{n_k}}}{-\sum_{j=1}^{n_k-1} \log r_{a_j}} = \infty, \qquad \lim_{k\to\infty} -\frac{1}{n_k} \log r_{a_{n_k}} = \infty. $$
We then get
$$ \lim_{k\to\infty} \frac{\log \mu(B(x,r_{n_k}))}{\log r_{n_k}} \leqslant \varepsilon. $$
Letting ε → 0, we conclude that $\underline{d}(x) = 0$, as we wanted.
From the above result, we can conclude that for such measures, $\dim_H \mu = 0$.
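The tail behaviour of {r_n} used in the proof above (lemma 3.8, decay degree α − 1) can be seen exactly in the model case of the Gauss map, where $r_n = |I(n)| = 1/(n(n+1))$ and α = 2. The following sketch verifies the telescoping identity $\sum_{m\geqslant n} r_m = 1/n$ in exact arithmetic; the truncation point N is an arbitrary illustrative choice.

```python
from fractions import Fraction

# Gauss-map cylinder lengths r_n = |I(n)| = 1/(n(n+1)): polynomial decay of
# degree alpha = 2. The tail sum telescopes, sum_{m>=n} r_m = 1/n, so the
# tails decay with degree alpha - 1 = 1, as lemma 3.8 predicts in general.
def r(n):
    return Fraction(1, n * (n + 1))

N = 10**4  # truncation point; the exact remainder beyond it is 1/N
tails = {n: sum(r(m) for m in range(n, N)) + Fraction(1, N) for n in (5, 50, 500)}
print(tails)  # each tail equals 1/n exactly
```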

Packing dimension
In the previous section we completely determined the Hausdorff dimension of the measures of our interest. Now we proceed to compute the packing dimension. First we give a lower bound for the upper local dimension. The proof uses ideas similar to those in the proof of proposition 5.3: we choose a particular cover of the ball and use the fact that the Birkhoff sums of the potentials $-\log p_{a_1}$, $-\log r_{a_1}$ grow faster than linearly.

Proposition 6.1. Suppose T is a Gauss-like map and µ is an infinite entropy Gibbs measure with controlled decay. Then $\overline{d}(x) \geqslant s$ for µ-almost every x.

Proof. The convergence of the functions $f_{n,k}$, as defined in the proof of theorem 3.4, holds almost everywhere, and lemma 3.2 and theorem 3.4 hold on a set of full measure as well. We pick a point x where the three results hold. Since $p_1 < 1$, we can pick a subsequence $k_n \nearrow \infty$ such that $a_{k_n} > 1$ for every n. Then, for all n, take $r_n = \min\{|I_{k_n}|, |I^r_{k_n}|, |I^\ell_{k_n}|\} = |I^\ell_{k_n}|$. Here we denote $I^\ell_n = I(a_1, \ldots, a_{n-1}, a_n + 1)$ and $I^r_n = I(a_1, \ldots, a_{n-1}, a_n - 1)$ whenever $a_n > 1$. This choice of $r_n$ implies that $B(x, r_n) \subseteq I^\ell_{k_n} \cup I_{k_n} \cup I^r_{k_n}$. From the Gibbs property and the fact that $\varphi(x_n)$, $\varphi(x_{n+1})$ and $r_n$, $r_{n+1}$ are comparable, it follows that there are constants $C_1, C_2 > 0$ such that $\mu(I^\ell_n \cup I_n \cup I^r_n) \leqslant C_1 \mu(I_n)$ and $|I^\ell_n| \geqslant C_2 |I_n|$ for every n. Using this and lemma 3.3 we have that
$$ \frac{\log \mu(B(x,r_n))}{\log r_n} \geqslant \frac{\log\big(C_1\,\mu(I_{k_n})\big)}{\log\big(C_2\,|I_{k_n}|\big)}. $$
By lemma 3.2 and theorem 3.4, the last expression converges to s, as desired.
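The geometric facts used in this proof, namely that the ball is covered by a cylinder together with its two neighbours and that the three have comparable lengths, can be checked concretely for the Gauss map. The digit string (2, 1, 3) below is an arbitrary illustrative choice.

```python
from fractions import Fraction

def gauss_cylinder(digits):
    # Endpoints of the Gauss-map cylinder I(a_1, ..., a_n), computed by
    # pushing [0, 1] through the inverse branches x -> 1/(a + x).
    lo, hi = Fraction(0), Fraction(1)
    for a in reversed(digits):
        lo, hi = 1 / (a + hi), 1 / (a + lo)
    return lo, hi

def width(J):
    return J[1] - J[0]

I_n   = gauss_cylinder([2, 1, 3])  # I_n = I(a_1, ..., a_n)
I_ell = gauss_cylinder([2, 1, 4])  # last digit a_n + 1; here it lies to the left
I_r   = gauss_cylinder([2, 1, 2])  # last digit a_n - 1; here it lies to the right
print(I_n, width(I_ell) / width(I_n), width(I_r) / width(I_n))
```

The three cylinders are pairwise adjacent and their lengths agree up to a bounded factor, which is the comparability used to get the constants $C_1, C_2$ (whether the $(a_n+1)$-cylinder sits geometrically left or right depends on the parity of n, which is immaterial for the estimate).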
Giving an upper bound for the upper local dimension requires a more involved analysis of the geometric structure of the partition and its relation to the geometry of the balls. We will need the following lemma.

Lemma 6.2. Suppose that {r_n} decays polynomially with degree α > 1. Then, for every $0 < \delta < \min\{1/3, (\alpha-1)/(\alpha+1)\}$ and $0 < \eta < 1/2$, there exists $k_0 \in \mathbb{N}$ such that
$$ \frac{\log \sum_{m=k}^{n+k} p_m}{\log \sum_{m=k-1}^{n+k+1} r_m} \leqslant \frac{1+\delta}{\alpha-\delta} + \eta $$
for all $k \geqslant k_0$ and all $n \in \mathbb{N}$.

Proof. Recall that for such a sequence {r_n} we have $s = 1/\alpha$. Fix $0 < \delta < \min\{1/3,\ s(\alpha-1)/(\alpha+1)\}$ and $0 < \eta < 1/2$. Note that this implies that we can find $k_0 \in \mathbb{N}$ such that the corresponding tail estimates for {p_n} and {r_n} hold for all $k \geqslant k_0$. It can be proved using calculus that for $\delta < (\alpha-1)/2$ the required inequality holds for all sufficiently large k, so we can take $k_0$ large enough that this holds. Finally, we can take $k_0$ large enough so that the last estimate also holds for all $k \geqslant k_0$. Let $n \in \mathbb{N}$. We divide the proof into two cases.

Case 1: $n \leqslant k$. Then, combining the estimates above,
$$ \frac{\log \sum_{m=k}^{n+k} p_m}{\log \sum_{m=k-1}^{n+k+1} r_m} \leqslant \frac{1+\delta}{\alpha-\delta} + \eta. $$
Case 2: $n > k$. We use the following elementary lemma: for $a, b, c > 0$,
$$ \frac{a+c}{b+c} \leqslant \frac{a}{b} \quad\text{if and only if}\quad b \leqslant a. $$
We can use this with $a = (1+\delta)\log(2k)$, $b = (\alpha-\delta)\log(k-1) - \log 3$ and $c = \log(n+1)$. This implies that
$$ \frac{\log \sum_{m=k}^{n+k} p_m}{\log \sum_{m=k-1}^{n+k+1} r_m} \leqslant \frac{1+\delta}{\alpha-\delta} + \eta $$
for all $k \geqslant k_0$, as we wanted to prove.
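The elementary lemma invoked above can be verified exhaustively on a small grid of rationals; this is only a sanity check of the equivalence for positive a, b, c, not part of the proof.

```python
from fractions import Fraction
from itertools import product

# Check: for a, b, c > 0, (a + c)/(b + c) <= a/b holds exactly when b <= a.
# (Clearing denominators: b(a + c) <= a(b + c) iff bc <= ac iff b <= a.)
vals = [Fraction(k, 4) for k in range(1, 13)]
ok = all(((a + c) / (b + c) <= a / b) == (b <= a)
         for a, b, c in product(vals, repeat=3))
print(ok)  # True
```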
With the previous lemma, we can now prove the upper bound for the upper local dimension. The proof is based on carefully choosing the covers of the balls; such covers must be fine enough that they are not affected by theorem 4.2: we want to cover the ball with cylinders of the same scale, since otherwise the cover would yield trivial bounds.

Proposition 6.3. For µ-almost every x we have $\overline{d}(x) \leqslant s$.

Proof. Let x be a point where theorem 3.4 and lemma 3.2 applied to $f = -\log r_{a_1}$ hold. Given r > 0, there exists a unique natural number n = n(r) such that $|I_n(x)| < r \leqslant |I_{n-1}(x)|$.
Note that $n \to \infty$ as $r \to 0$. Let δ > 0 and η be as in lemma 6.2. Then there exists $k_0 \in \mathbb{N}$ such that
$$ \frac{\log \sum_{m=k}^{n+k} p_m}{\log \sum_{m=k-1}^{n+k+1} r_m} \leqslant \frac{1+\delta}{\alpha-\delta} + \eta $$
for all $k \geqslant k_0$. Recall that by $I^{m\cdot r}_n(x)$ we denote the cylinder $I(a_1, \ldots, a_{n-1}, a_n - m)$, where $(a_n)$ is the sequence coding x and $m < a_n$. We separate the proof into two cases.

Case 1: $I(a_1, \ldots, a_{n-1}, k_0) \subset B(x, r)$.
This implies that there exists $k_1 \in \mathbb{N}$ such that
$$ \log r \leqslant \sum_{k=1}^{n-1} \log r_{a_k} + \log \sum_{k=0}^{k_1} r_{a_n-k} + nD_1 + D_2. $$
Corollary 6.5. For a Gibbs measure µ with infinite entropy and controlled decay, associated to a Gauss-like map, we have that $0 = \underline{d}(x) < s = \overline{d}(x)$ for almost every point, and hence µ is not exact dimensional.
With this we have found the almost sure behaviour of the local dimensions, and hence, we have obtained values for both the packing and the Hausdorff dimension.
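To summarise, and as a reading aid rather than a new result, the almost sure values of the local dimensions translate into dimensions of the measure via the standard facts that an almost sure constant value of the lower (respectively upper) local dimension equals the Hausdorff (respectively packing) dimension of the measure:

```latex
\underline{d}(x) = 0 \ \text{for a.e. } x \;\Longrightarrow\; \dim_H \mu = 0,
\qquad
\overline{d}(x) = s \ \text{for a.e. } x \;\Longrightarrow\; \dim_P \mu = s = \frac{1}{\alpha}.
```

For the Gauss map, $r_n = |I(n)| = 1/(n(n+1))$ decays with degree $\alpha = 2$, recovering the values $\dim_H \mu = 0$ and $\dim_P \mu = 1/2$ stated in the abstract.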

Final remarks
Theorem 1 implies that for maps such that {r_n} decays polynomially, the Hausdorff dimension of ergodic invariant measures with infinite entropy is equal to zero, under mild independence and regularity assumptions on the measure.

Question 1. Is there an ergodic invariant measure µ for a Gauss-like map with $h_\mu = \lambda_\mu = \infty$ and $\dim_H \mu > 0$?
We believe that the infinite entropy condition and the polynomial decay of the sizes of the partition elements force the Hausdorff dimension to drop to zero. We also formulate two questions for a more general case.

Question 2. What can be said about the almost sure value of the symbolic dimension when µ is only assumed to be ergodic?

Question 3. What can be said about $\dim_H \mu$ when µ is only assumed to be ergodic?
The main difficulty with questions 2 and 3 is that our methods rely on the asymptotic independence of the digits in the symbolic space. This independence allows us to write the measure and the diameter of cylinders in the form of Birkhoff sums, which in turn lets us use ergodic-theoretic methods to study the almost sure behaviour of such sums.
For measures which do not satisfy any kind of independence assumption, we are not able to use such techniques.