Dimension of Gibbs measures with infinite entropy

We study the Hausdorff dimension of Gibbs measures with infinite entropy with respect to maps of the interval with countably many branches. We show that under simple conditions, such measures are symbolic-exact dimensional, and provide an almost sure value for the symbolic dimension. We also show that the lower local dimension is almost surely equal to zero, while the upper local dimension is almost surely equal to the symbolic dimension. In particular, we prove that a large class of Gibbs measures with infinite entropy for the Gauss map have Hausdorff dimension zero and packing dimension equal to $1/2$, and so such measures are not exact dimensional.


Introduction
In this paper we study the dimension of measures invariant under a certain class of maps of the unit interval [0, 1]: Expanding Markov Renyi (EMR) maps. These maps T : [0, 1] → [0, 1] admit representations by means of symbolic dynamics, and satisfy smoothness properties that allow us to use ergodic theoretic methods to study their geometric properties. Given an ergodic T-invariant probability measure µ, we are interested in the pointwise behavior of the local dimension d(x) = lim_{r→0} log µ(B(x, r))/log r.
Knowledge of the almost sure behavior of the local dimension yields information about the Hausdorff and the packing dimension of the measure. There are two dynamical quantities which are particularly relevant when studying the local dimension of such measures: the metric entropy h_µ (or simply the entropy) and the Lyapunov exponent λ_µ of (T, µ). The connection between the entropy, the Lyapunov exponent and the local dimension is well understood when the entropy is finite. Our goal is to investigate the case when both of these quantities are infinite.
Formulae relating the dynamical invariants h_µ, λ_µ and the local dimension have been extensively studied over the last few decades in the case h_µ < ∞. In [LM85] the authors proved that for a C^1 map T : [0, 1] → [0, 1] such that T and T' are piecewise monotonic, and for an invariant ergodic probability measure µ whose Lyapunov exponent λ_µ is positive, we have lim_{r→0} log µ(B(x, r))/log r = h_µ/λ_µ.
In particular, dim_H µ = h_µ/λ_µ. Other versions of the formula were proved by Young [You82] and by Hofbauer and Raith [HR92], among others. In all of these examples, it is assumed that 0 < λ_µ < ∞. In the context of countable Markov systems, Mauldin and Urbanski proved the following theorem: Theorem 1.1 (Volume Lemma, [MU03]). Let (X, T) be a countable Markov shift coded by the shift on countably many symbols (Σ, σ). Suppose that µ is a Borel shift-invariant ergodic probability measure on Σ such that at least one of the numbers H(µ, α) or λ_µ is finite, where H(µ, α) is the entropy of µ with respect to the natural partition α into cylinders of Σ. Then dim_H(µ ∘ π^{-1}) = h_µ/λ_µ, where π : Σ → X is the coding map.
The coding map can be interpreted as a means to go from the symbolic representation of the dynamics to the geometric space. When the local dimension exists and is constant almost everywhere, we say that the measure µ is exact dimensional.
The case when λ_µ = 0 was studied by Ledrappier and Misiurewicz in [LM85], wherein they constructed a C^r map of the interval and a non-atomic ergodic invariant measure which has zero Lyapunov exponent and is such that the local dimension does not exist almost everywhere. More precisely, they show that the lower local dimension and upper local dimension are not equal: d̲_µ(x) = lim inf_{r→0} log µ(B(x, r))/log r < lim sup_{r→0} log µ(B(x, r))/log r = d̄_µ(x) almost everywhere. For this construction, the authors consider a class of unimodal maps (Feigenbaum maps).
We investigate the Hausdorff dimension of invariant ergodic measures for piecewise expanding maps of the interval with countably many branches. In particular, we focus on maps exhibiting similar properties to the Gauss map and on measures with infinite entropy and infinite Lyapunov exponent. Our main result is (see the next section for the definitions): Theorem. Let T : [0, 1] → [0, 1] be a Gauss-like map and µ be an infinite entropy Gibbs measure satisfying assumption 1 and such that the decay ratio s exists. Then d̲_µ(x) = 0 and d̄_µ(x) = s almost everywhere.
We can also compute the almost sure value of the symbolic dimension. The Gibbs assumption on the measure implies that a certain sequence of observables can be seen as a non-integrable stationary ergodic process and allows us to use some tools of infinite ergodic theory developed by Aaronson. In particular, the pointwise behavior of trimmed sums plays a fundamental role in our arguments. We also prove that the packing dimension of such measures is equal to the decay ratio, and conclude that such systems are not exact dimensional. We remark that the methods used in the context of finite entropy fail, as they rely on the fact that the measure and diameter of the iterates of the natural Markov partition decrease at an exponential rate given by h µ and λ µ respectively, enabling the use of coverings by balls of different scales.
To tackle this problem, we make use of more refined coverings of balls, which are capable of detecting the asymptotic interaction between the Gibbs measure and the Lebesgue measure.
The study of the Hausdorff dimension of sets for which their points have infinite Lyapunov exponent has already been considered: see for instance [FLM10] where the authors compute the Hausdorff dimension of sets with prescribed digits on their continued fraction expansion, or [FSU14] where the authors construct a measure invariant under the Gauss map which gives full measure to the Liouville numbers.
Since the Liouville numbers form a zero dimensional set, such a measure is also zero dimensional. Our result shows that this is the case for a large class of measures.
The dimension of Bernoulli measures for the Gauss map G was studied by Kifer, Peres and Weiss in [KPW01], where they show that there is a universal constant ε_0 > 10^{-7} so that dim_H(µ ∘ π^{-1}) ≤ 1 − ε_0 for every Bernoulli measure µ on the symbolic space coding the Gauss map, where π is the coding map. This inequality holds even in the case where the entropy of the measure is infinite. They also show that for an infinite entropy Bernoulli measure µ, the Hausdorff dimension satisfies dim_H µ ≤ 1/2. Their method relies on showing that the dimension of the set of points for which the frequency of a sequence of digits in their continued fraction expansion differs from the expected value by a certain threshold is uniformly (with respect to the sequence of digits) bounded away from 1, together with a bound on the dimension of the set of points that lie in unusually short cylinders. This situation has been recently studied by Jurga and Baker (see [Jur18] and [BJ18]) using different methods. Concretely, in [Jur18] the author uses ideas from Hilbert-Birkhoff cone theory and extracts information about the dynamics through the transfer operator. On the other hand, in [BJ18] the authors construct a Bernoulli measure µ_q such that sup_p dim_H µ_p = dim_H µ_q, where the supremum is taken over all Bernoulli measures. This, in conjunction with the Variational Principle (see [Wal82]), yields their result.
The paper is structured as follows. In section 2 we introduce the notation used throughout the paper as well as the main objects of study. We also state the results of the paper. In section 3 we compute the symbolic dimension and characterize it in terms of the Markov partition. In section 4 we study the consequences of λ_µ = ∞ at the level of the asymptotic rate of contraction of the cylinders. In sections 5 and 6 we prove the results for the Hausdorff and the packing dimension respectively. We finish the article by stating some questions of interest that could not be answered with the methods used in this paper.

Notation and statement of main results
2.1. The class of maps. We start by introducing the EMR (Expanding-Markov-Renyi) maps of the interval.
Definition 2.1. We say that a map T : I → I of the interval I = [0, 1] is an EMR map if there is a countable collection of closed intervals {I(n)} (with disjoint interiors int I(n)) such that: 1. The map is C^2 on ⋃_n int I(n), 2. Some power of T is uniformly expanding, i.e., there is a positive integer r and a constant α > 1 such that |(T^r)'(x)| ≥ α for all x ∈ ⋃_n int I(n), 3. The map is Markov and can be coded by a full shift (see the next subsection), 4. The map satisfies Renyi's condition: there is a constant E > 0 such that sup_n sup_{x,y,z ∈ I(n)} |T''(x)|/(|T'(y)||T'(z)|) ≤ E. This class of maps was first introduced in [PW99] in the context of the multifractal analysis of the Lyapunov exponent for the Gauss map. Renyi's condition provides good estimates for the Lebesgue measure of the cylinders associated to the Markov structure of the map (see the next subsection). For simplicity, we will assume that the maps are orientation preserving (the orientation reversing case only differs in the relative position of the cylinders). The set of branches must accumulate at at least one point, and we assume that it accumulates at exactly one point: we also assume that the branches accumulate on the left endpoint of I (the case when the branches accumulate on the right endpoint of I is analogous). Re-indexing if necessary, we can assume that I(n + 1) < I(n) for all n. Let r_n = |I(n)|.
Definition 2.2. We say that an EMR map T is a Gauss-like map if it satisfies the following conditions: 1. r_n > 0 for every n ∈ N, 2. r_{n+1} ≤ r_n, 3. Σ_n r_n = 1, 4. 0 < K ≤ r_{n+1}/r_n ≤ K' < ∞ for some constants K, K', 5. {r_n} decays polynomially as n goes to infinity (see Definition 3.7).
We want to keep in mind piecewise linear functions as the main example, as for this class of maps, calculations are simplified. We will also keep in mind the example of the Gauss map.
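For the Gauss map itself these conditions can be checked directly: the branch domains are I(n) = (1/(n+1), 1/n], so r_n = 1/(n(n+1)). A minimal numerical sketch in exact arithmetic (an illustration, not part of the proofs):

```python
from fractions import Fraction

# Branch lengths of the Gauss map: I(n) = (1/(n+1), 1/n], so r_n = 1/(n(n+1)).
N = 5000
r = [Fraction(1, n * (n + 1)) for n in range(1, N + 1)]

assert all(r[k + 1] < r[k] for k in range(N - 1))   # condition 2: decreasing
assert 1 - sum(r) == Fraction(1, N + 1)             # condition 3: telescopes to 1

ratios = [r[k + 1] / r[k] for k in range(N - 1)]    # r_{n+1}/r_n = n/(n+2) in [1/3, 1)
assert min(ratios) == Fraction(1, 3) and max(ratios) < 1  # condition 4: bounded ratios

# Condition 5: n^2 r_n = n/(n+1) stays bounded, i.e. polynomial decay with alpha = 2.
assert all(n * n * r[n - 1] == Fraction(n, n + 1) for n in (1, 10, N))
```

In particular the decay asymptotic is α = 2, so the convergence exponent of the partition is 1/2, which is where the value 1/2 for the Gauss map comes from.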
2.2. Markov structure and symbolic coding. We describe now the Markov structure of the maps considered. Given a finite sequence of natural numbers (a_1, . . . , a_n) ∈ N^n, the n-th level cylinder associated to (a_1, . . . , a_n) is the set I(a_1, . . . , a_n) = I(a_1) ∩ T^{-1}(I(a_2)) ∩ · · · ∩ T^{-(n-1)}(I(a_n)). Let O = ⋃_n ⋃_k T^{-n}(∂I(k)); then given x ∈ [0, 1] \ O and n ∈ N, there exists a unique sequence (a_1(x), a_2(x), . . .) ∈ N^N such that x ∈ I(a_1(x), . . . , a_n(x)) for every n. We denote this sequence by (a_1, a_2, . . .) when x is clear from the context. We also denote I_n(x) = I(a_1, . . . , a_n), and we say that x is coded by the sequence (a_n). Let Σ = N^N and σ : Σ → Σ be the full shift over N. Then the cylinders in the symbolic space are defined by C(a_1, a_2, . . . , a_n) = {(x_n) ∈ Σ | x_j = a_j for j = 1, . . . , n}.
We endow the space Σ with the topology generated by the cylinders defined above. Then the map π : Σ → I \ O given by π((x_n)) = ⋂_n I(x_1, . . . , x_n) is a continuous bijection.
Given x ∈ I \ O with coding sequence (a_n) and n ≥ 1, denote by I^l_n(x) = I(a_1, . . . , a_{n-1}, a_n + 1) (resp. I^r_n(x) = I(a_1, . . . , a_{n-1}, a_n − 1) if a_n ≥ 2) the level n cylinder on the left (resp. right) of I_n(x). Also, denote Î_n(x) = I_n(x) ∪ I^r_n(x) ∪ I^l_n(x). If there is no risk of confusion, we omit the dependence on x.
Renyi's condition introduced in the previous subsection implies that the length of each cylinder is comparable to the derivative of the iterates of the map at any point of the cylinder. More precisely, 0 < D^{-1} ≤ |(T^n)'(x)| · |I(a_1, . . . , a_n)| ≤ D for every finite sequence (a_1, . . . , a_n) ∈ N^n and every x ∈ I(a_1, . . . , a_n).
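This comparability is easy to check by hand for the Gauss map at the golden-mean point x = (√5 − 1)/2 = [1; 1, 1, . . .], using the standard continued fraction facts that |(T^n)'(x)| = x^{-2n} at this fixed point and that the all-ones cylinder has length 1/(q_n(q_n + q_{n-1})), with the convergent denominators q_n being Fibonacci numbers. A sketch:

```python
import math

# Golden-mean point x = [1; 1, 1, ...]: fixed point of the first branch of the
# Gauss map, so |(T^n)'(x)| = x^(-2n). The cylinder I(1,...,1) of level n has
# length 1/(q_n (q_n + q_{n-1})) with Fibonacci denominators q_n.
x = (math.sqrt(5) - 1) / 2
q = [1, 1]                                  # q_0, q_1 for the all-ones expansion
for _ in range(30):
    q.append(q[-1] + q[-2])

products = []
for n in range(1, 25):
    deriv = x ** (-2 * n)                   # |(T^n)'(x)|
    length = 1 / (q[n] * (q[n] + q[n - 1])) # |I_n(x)|
    products.append(deriv * length)

# Renyi's condition predicts D^-1 <= |(T^n)'(x)| |I_n(x)| <= D uniformly in n.
assert max(products) / min(products) < 2
```

The products settle quickly near a constant (about 1.18), so the distortion constant D here is small.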
2.3. The class of measures. We start by giving the usual definition of Gibbs measures: Definition 2.3. Let µ be an invariant measure with respect to T. We say that µ is a Gibbs measure associated to the potential log ϕ : Σ → R if there exist constants A, B > 0 so that A ≤ µ(C(a_1, . . . , a_n)) / exp(S_n log ϕ(x) − nP(log ϕ)) ≤ B, where x is any point in C(a_1, . . . , a_n), (a_1, . . . , a_n, . . .) is any sequence in Σ, S_n f(x) is the Birkhoff sum of f at the point x, and P(log ϕ) is a constant (depending on the potential) called the topological pressure of log ϕ.
Throughout this work we will assume that P(log ϕ) = 0; otherwise we can take the zero pressure potential log ϕ − P(log ϕ). It is not immediate that this normalization does not affect our computations, and we will show later how to overcome this difficulty. The sequence p_n = µ(I(n)) will be of particular relevance for our computations.
We can project this measure to I by setting µ̄ = µ ∘ π^{-1}. We assume these measures are invariant and ergodic with respect to T. We will denote by µ both the measure on the symbolic space and the projected measure.
Our main assumption on the class of measures is that they have infinite entropy. This can be expressed by saying that the potential −log ϕ is not integrable with respect to µ. In fact, by the Gibbs property, the Shannon-McMillan-Breiman entropy can be written as h_µ = lim_{n→∞} −(1/n) log µ(I_n(x)) = lim_{n→∞} −(1/n) S_n(log ϕ)(x) = ∞ almost everywhere. The last equality is a consequence of Lemma 3.2.
Definition 2.4. Let x_n be the unique fixed point of T in I(n). We then define the decay ratio by s = lim_{n→∞} log ϕ(x_n)/log r_n. The tail decay ratio is defined by ŝ = lim_{n→∞} log(Σ_{m≥n} p_m)/log(Σ_{m≥n} r_m). Both definitions are unchanged if ϕ(x_n) is replaced by p_n, since µ is a Gibbs measure. Note also that the definitions above are independent of the choice of the point x_n representing each cylinder if var_1(ϕ) < ∞. By the Cesàro-Stolz theorem we can write the decay ratio as s = lim_{n→∞} (Σ_{k=1}^n p_k log p_k)/(Σ_{k=1}^n p_k log r_k). Assumption 1. Assume that var_1(log ϕ) < ∞. For the sequence q = {q_n}_{n∈N} = {ϕ(x_n)} we assume two regularity conditions, holding for every n ∈ N. The second condition prevents the existence of large jumps of the potential along sufficiently sparse subsequences of {x_n}. By the Gibbs property, the conditions still hold if we replace q_n by p_n.
2.4. Entropy and Lyapunov exponent. Since our measures are Gibbs and the potential has finite first variation, we can write the entropy of the system simply as h_µ = −∫ log ϕ dµ. We define the Lyapunov exponent as λ_µ = ∫ log |T'| dµ. By the bounded distortion property, we can write the Lyapunov exponent as −Σ_n p_n log r_n − L ≤ λ_µ ≤ −Σ_n p_n log r_n + L, where L is a distortion constant (independent of µ). Thus, λ_µ is infinite if and only if the series −Σ_n p_n log r_n is divergent. Throughout this work, we assume that both numbers h_µ and λ_µ are infinite, and hence we can think of λ_µ as defined by the series above.
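As a concrete illustration of this regime (with hypothetical weights chosen for the example, not a measure constructed in this paper): pairing p_n proportional to 1/(n log² n) with the Gauss-map lengths r_n = 1/(n(n+1)) makes both the entropy and Lyapunov series diverge, while log p_n/log r_n creeps down toward s = 1/2:

```python
import math

# Hypothetical infinite-entropy weights (illustration only): p_n ∝ 1/(n log^2 n),
# paired with the Gauss-map branch lengths r_n = 1/(n(n+1)).
N = 10 ** 6
CHECKS = {10 ** 2, 10 ** 4, 10 ** 6 - 1}
Z = sum(1.0 / (n * math.log(n) ** 2) for n in range(2, N))  # normalization

def log_p(n):
    # log of the normalized weight p_n = (1/Z) * 1/(n log^2 n)
    return -math.log(Z) - math.log(n) - 2.0 * math.log(math.log(n))

def log_r(n):
    return -math.log(n * (n + 1))

# Partial sums of -p_n log p_n keep growing: the entropy series diverges.
h, checkpoints = 0.0, {}
for n in range(2, N):
    lp = log_p(n)
    h -= math.exp(lp) * lp
    if n in CHECKS:
        checkpoints[n] = h
assert checkpoints[10 ** 2] < checkpoints[10 ** 4] < checkpoints[10 ** 6 - 1]

# The decay ratio log p_n / log r_n decreases (very slowly) toward s = 1/2.
ratios = [log_p(n) / log_r(n) for n in (10 ** 2, 10 ** 4, 10 ** 6 - 1)]
assert ratios[0] > ratios[1] > ratios[2] > 0.5
```

The slow convergence (the ratio is still above 0.7 at n = 10^6) reflects the log log n corrections, but the limiting value 1/2 matches the convergence exponent of the Gauss partition.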
2.5. Hausdorff and packing dimension. In this section we introduce the elements of dimension theory we will study throughout this work. Recall that the diameter of a set U ⊂ R is given by diam U = sup{|x − y| : x, y ∈ U}. For a cover 𝒰 of a set X ⊂ R, its diameter is given by diam 𝒰 = sup{diam U : U ∈ 𝒰}. Definition 2.5. Given X ⊂ R and α ∈ R, the α-dimensional Hausdorff measure of X is given by m(X, α) = lim_{δ→0} inf { Σ_{U∈𝒰} (diam U)^α }, where the infimum is taken over finite or countable covers 𝒰 of X with diam 𝒰 ≤ δ.
Since m(X, α) is decreasing in α for a fixed set X, it is possible to prove that there exists a number s ∈ [0, ∞] such that m(X, α) = ∞ for α < s and m(X, α) = 0 for α > s.
Definition 2.6. The unique number dim_H X = inf{α | m(X, α) = 0} is called the Hausdorff dimension of X.
We extend the notion of Hausdorff dimension to finite Borel measures on R: Definition 2.7. Let µ be a finite Borel measure on R. The Hausdorff dimension of µ is defined by dim_H µ = inf{dim_H Z | µ(R \ Z) = 0}. We define now the analogous notion of packing dimension. Definition 2.8. We say that a collection of balls {U_n}_n ⊂ R is a δ-packing of the set E ⊂ R if the diameter of the balls is less than or equal to δ, they are pairwise disjoint and their centres belong to E. For α ∈ R, the α-dimensional pre-packing measure of E is given by P(E, α) = lim_{δ→0} sup { Σ_n (diam U_n)^α }, where the supremum is taken over all δ-packings of E. The α-dimensional packing measure of E is defined by p(E, α) = inf { Σ_i P(E_i, α) }, where the infimum is taken over all covers {E_i} of E. Finally, we define the packing dimension of E by dim_P E = inf{α | p(E, α) = 0}. We extend the notion of packing dimension to finite Borel measures on R.
Definition 2.9. Let µ be a finite Borel measure on R. The packing dimension of µ is defined by dim_P µ = inf{dim_P Z | µ(R \ Z) = 0}. It is important to remark that these definitions of dimension for measures are not standard. For instance, a different definition often used is inf{dim Z | µ(Z) > 0}. We refer to these as the lower Hausdorff (packing) dimensions.
Bounding the Hausdorff dimension from above or the packing dimension from below usually involves the use of a single suitable cover of the space, while for bounds in the opposite directions we have to deal with every cover of the space. There are several tools to help with this problem, and we will make use of the so-called (local) Mass Distribution Principles. For this, we introduce the notion of local dimension.
Definition 2.10. The lower and upper pointwise dimensions of the measure µ at a point x ∈ X are given by d̲_µ(x) = lim inf_{r→0} log µ(B(x, r))/log r and d̄_µ(x) = lim sup_{r→0} log µ(B(x, r))/log r. When both limits coincide, we call the common value the pointwise dimension of µ at x, denote it by d_µ(x), and say that µ is exact dimensional if d̲ = d̄ almost everywhere.
If d_µ(x) = d, then µ(B(x, r)) ∼ r^d for small values of r. We state now the local version of the Mass Distribution Principle: if d̲_µ(x) ≥ d for µ-almost every x then dim_H µ ≥ d, and if d̲_µ(x) ≤ d for µ-almost every x then dim_H µ ≤ d; the analogous statements relate the upper pointwise dimension to the packing dimension.
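As a toy illustration of the local dimension (for a classical exact dimensional measure, not one of the measures studied in this paper), a crude Monte Carlo estimate of log µ(B(x, r))/log r for the middle-thirds Cantor measure already hovers near log 2/log 3:

```python
import math
import random

random.seed(7)

def cantor_point(depth=30):
    # A mu-random point of the middle-thirds Cantor set: ternary digits in {0, 2}.
    return sum(random.choice((0, 2)) * 3.0 ** -(k + 1) for k in range(depth))

sample = [cantor_point() for _ in range(100_000)]
x = cantor_point()

estimates = []
for n in (4, 6, 8):
    r = 3.0 ** -n
    mass = sum(abs(p - x) < r for p in sample) / len(sample)  # empirical mu(B(x, r))
    estimates.append(math.log(mass) / math.log(r))

# The estimates sit just below log 2 / log 3 ≈ 0.6309 (the ball can clip a
# neighbouring cylinder, a finite-size bias of order 1/n), consistent with the
# Cantor measure being exact dimensional.
assert all(0.4 < e < 0.75 for e in estimates)
```

For the measures of this paper no such stabilization occurs: the two local limits separate to 0 and s.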
A notion of dimension which is more adapted to the underlying structure of our dynamical system is the symbolic dimension, which we proceed to define.
Definition 2.12. Given x ∈ I, we define the lower symbolic dimension of µ at x by δ̲(x) = lim inf_{n→∞} log µ(I_n(x))/log |I_n(x)| and the upper symbolic dimension of µ at x by δ̄(x) = lim sup_{n→∞} log µ(I_n(x))/log |I_n(x)|. If δ̲(x) = δ̄(x), then we define the symbolic dimension of µ at x as the common value, denote it by δ(x), and we say that µ is symbolic-exact dimensional if δ̲ = δ̄ almost everywhere.
2.6. Main results. The estimates we prove depend on asymptotic relations between the measure and the length of the cylinders defining the system. The main results are then: Theorem 2.13. Let T be an EMR map, and µ be an infinite entropy Gibbs measure satisfying assumption 1. If the decay ratio exists and is equal to s, then µ is symbolic-exact dimensional and δ(x) = s almost everywhere. If we assume that the decay of {r_n} is polynomial and the measure satisfies the regularity conditions given by assumption 1, we can compute the local dimensions: Theorem 2.14. Let T be a Gauss-like map, and µ be an infinite entropy Gibbs measure satisfying assumption 1. If the decay ratio exists and is equal to s, then d̲_µ(x) = 0 and d̄_µ(x) = s almost everywhere. Consequently, 0 = dim_H µ < s = dim_P µ.

Symbolic dimension
3.1. Computation of the symbolic dimension. We prove now that under the above assumptions, the Gibbs measure µ is symbolic-exact dimensional, and that this dimension coincides with the decay ratio. This result does not depend on the decay rate of the lengths of the partition elements.
In general the Lyapunov exponent majorizes the entropy: h_µ ≤ λ_µ. In a more general setting, this result is known as Ruelle's inequality (see [Rue78]).
Proof. This is an immediate consequence of the Volume Lemma (Theorem 1.1).
We now prove a well-known fact about non-integrable observables: if f ≥ 0 is measurable with ∫ f dµ = ∞ and µ is ergodic, then lim_{n→∞} (1/n) S_n f(x) = ∞ for µ-almost every x (Lemma 3.2). This result implies in particular that we can assume that the pressure of our potential is zero, as S_n(log ϕ) dominates −nP(log ϕ) when log ϕ is not integrable.
We formulate a lemma regarding the metric and measure theoretic properties of the cylinders associated to the map (Lemma 3.3). This will allow us to write geometric quantities in ergodic theoretic terms. Its proof is a standard application of the bounded distortion and Gibbs properties.
We proceed to compute the symbolic dimension of our system. This result holds regardless of the decay rate of the sequence {r n }.
Theorem 3.4. Let T be an EMR map and µ a Gibbs measure with infinite entropy satisfying Assumption 1. Then if the decay ratio exists, µ is symbolic-exact dimensional and for µ-almost every x ∈ I, δ(x) = s.
Proof. By Lemma 3.2 applied to the observables log ϕ and log r_{a_1}, and by Lemma 3.3, we can write the symbolic dimension at x in terms of the digit frequencies f_{n,k}.
We analyse the two parts of the sum separately.
For k = 1, . . . , n_1, taking ε_k = p_k/2, there exists n_2 ≥ n_1 such that f_{n,k} ≥ np_k/2 for every n ≥ n_2. Thus, the terms Σ_{k=1}^{n_1} f_{n,k}(−log p_k) and Σ_{k=1}^{n_1} f_{n,k}(−log r_k) grow linearly in n for n large enough. We will show that Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k) grows faster than linearly.
Given M > 0, since the Lyapunov exponent is infinite, there exists n_3 such that Σ_{k=n_1+1}^{m(n_3)} p_k(−log r_k) ≥ M for every n ≥ n_3. Now, for k = n_1 + 1, . . . , m(n_3), take ε_k = p_k/2, so there exists n_4 ≥ n_3 such that f_{n,k} ≥ np_k/2 for every n ≥ n_4 and k = n_1 + 1, . . . , m(n_3). Thus Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k) ≥ nM/2 for every n ≥ n_4. This shows that S_1(n) → 0 as n → ∞. To estimate S_2(n), we use the same argument: Σ_{k=n_1+1}^{m(n)} f_{n,k}(−log r_k) grows faster than linearly, so lim sup_n S_2(n) ≤ s + ε. This shows that δ̄(x) ≤ s almost everywhere. The proof of the opposite inequality is analogous.
3.2. The decay ratio. Now we proceed to study the properties of the decay ratio. In fact, we show that for infinite entropy measures, it is completely determined by the properties of the partition {I(n) | n ∈ N}: Definition 3.5. The convergence exponent of the partition {r_n} of I is defined by s_∞ = inf{t ≥ 0 | Σ_n r_n^t < ∞}. Proposition 3.6. In general, we have that s_∞ ≤ s. Under the assumption that h_µ = ∞, we also have s ≤ s_∞.
Proof. Given ε > 0, there exists n_1 such that (s + ε) log r_n < log p_n < (s − ε) log r_n for every n ≥ n_1, and thus r_n^{s+ε} < p_n for every n ≥ n_1. Summing over n we get Σ_n r_n^{s+ε} < ∞. Hence, s_∞ ≤ s + ε for every ε > 0, and so s_∞ ≤ s. Now, suppose that s_∞ < s; then there is α > 0 such that s_∞ < s_∞ + α < s and Σ_n r_n^{s_∞+α} < ∞. Recall the one-sided limit criterion for convergence of series: let a_n, b_n > 0 be sequences such that lim sup_{n→∞} a_n/b_n = c ∈ [0, ∞) and Σ b_n < ∞. Then Σ a_n < ∞.
Let f : [0, ∞) → R be the function defined by f(0) = 0 and f(x) = x^{s−ε−s_∞−α}(−log x) for x > 0, where ε > 0 is small enough that s − ε > s_∞ + α. It is easy to see that f is continuous. Taking a_n = r_n^{s−ε}(−log r_n) and b_n = r_n^{s_∞+α}, and using the continuity of f, we get that lim sup_{n→∞} a_n/b_n = lim_{n→∞} f(r_n) = 0.
We conclude that Σ_n r_n^{s−ε}(−log r_n) < ∞, and hence, since p_n ≤ r_n^{s−ε} and −log p_n ≤ (s + ε)(−log r_n) for large n, that −Σ_n p_n log p_n < ∞, contradicting the fact that the entropy is infinite.
We give now a definition for the asymptotic decay of the sequence {r n }.
Definition 3.7. The asymptotic of the partition {r_n} is defined as α = sup{t ≥ 0 | lim_{n→∞} n^t r_n < ∞}.
We say that {r n } decays polynomially if α > 1, and we say that {r n } decays superpolynomially if α = ∞.
Note that if {r_n} has polynomial decay with asymptotic α, then s_∞ = 1/α. If we know the asymptotic of {r_n}, we can compute the asymptotic of the tail of the series of {r_n}: Lemma 3.8. If the asymptotic of {r_n} is α > 1, then the asymptotic of R_n = Σ_{m≥n} r_m is α − 1.
Proof. It suffices to show that the sets A = {t ≥ 1 | lim_{n→∞} n^t r_n < ∞} and A' = {t ≥ 1 | lim_{n→∞} n^{t−1} R_n < ∞} are the same. Let t ∈ A, say lim_{n→∞} n^t r_n = d; then given ε > 0, there is n_0 ∈ N such that (d − ε)/n^t < r_n < (d + ε)/n^t for n ≥ n_0. Hence, for n ≥ n_0, (d − ε) Σ_{m≥n} m^{−t} < R_n < (d + ε) Σ_{m≥n} m^{−t}, and since Σ_{m≥n} m^{−t} ≍ n^{1−t}/(t − 1), it follows that t ∈ A'. Now, if t ∈ A', we have that lim_{n→∞} n^{t−1} R_n = d < ∞, and thus, given ε > 0, there is n_1 ∈ N such that R_n < (d + ε)/n^{t−1} for n ≥ n_1. Since {r_n} is decreasing, n r_{2n} ≤ R_n < (d + ε)/n^{t−1}, so r_{2n} ≤ (d + ε)/n^t, from which it follows that t ∈ A, proving the assertion.
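A quick numerical check of the lemma, with the model sequence r_n = n^{-α} (an assumption made only for this illustration):

```python
# Numerical check of Lemma 3.8 with the model sequence r_n = n^(-alpha):
# the tails R_n = sum_{m >= n} r_m should behave like n^(1 - alpha)/(alpha - 1),
# i.e. they have asymptotic alpha - 1.
alpha = 2.5
N = 10 ** 6
r = [m ** -alpha for m in range(1, N + 1)]

R = [0.0] * (N + 1)          # R[k] = sum of r over indices k..N-1 (suffix sums)
for k in range(N - 1, -1, -1):
    R[k] = R[k + 1] + r[k]

for n in (100, 1000, 10 ** 4):
    predicted = n ** (1 - alpha) / (alpha - 1)
    assert abs(R[n - 1] / predicted - 1) < 0.02   # tails track n^(1-alpha)
```

The relative error is of order 1/n, matching the ε-window in the proof above.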

Infinite ergodic theory
In this section we explore the consequences of the non-integrability of the function − log r a1 (or equivalently, λ µ = ∞). Using tools of infinite ergodic theory we can prove that the diameter of the cylinders decreases faster than exponentially from a given level to the next.
4.1. Finite Lyapunov exponent argument. We proceed to show now one of the usual arguments used to compute Hausdorff dimensions and remark how it fails in our case.
Lemma 4.1. Let T be an EMR map and µ a Gibbs measure. Then for almost every x ∈ I and every r > 0 there exists n such that log µ(B(x, r))/log r ≤ log µ(I_n(x))/log |I_{n−1}(x)|. (4.1) Proof. This is a well-known argument and can be found for instance in [Pes08]. Given r > 0, there exists a unique integer n = n(r) such that |I_n(x)| < r ≤ |I_{n−1}(x)|, so that I_n(x) ⊆ B(x, r). Then log µ(I_n(x)) ≤ log µ(B(x, r)), and since log r ≤ log |I_{n−1}(x)|, we obtain log µ(B(x, r))/log r ≤ log µ(I_n(x))/log |I_{n−1}(x)| as we wanted.
In a similar way, it is possible to prove a corresponding lower bound (4.2), where C_1, C_2 are constants arising from assumption 1 and Renyi's property respectively. Note that if λ_µ < ∞, then inequalities (4.1) and (4.2), together with the Ergodic Theorem, would immediately imply that s = dim_H µ = dim_P µ. However, since in our case λ_µ = ∞, the previous argument does not work. In fact, here lies the main difficulty of the infinite entropy and Lyapunov exponent case. The following result (Proposition 4.2) shows that the situation is as bad as it can get: for almost every point, the diameter of the cylinders decreases arbitrarily fast from one level to the next. The proof of the first equality is an immediate consequence of recurrence. We postpone the proof of the second equality until we have set up the appropriate tools, namely results about the pointwise behavior of trimmed sums.

4.2.
Trimmed convergence. Note that the sequence {X_n = −log r_{a_1} ∘ T^{n−1}} can be seen as a positive ergodic stationary process on [0, 1] with respect to µ, an infinite entropy Gibbs measure satisfying assumption 1. The distribution function of X_1 is F(t) = µ(X_1 ≥ t), and it can be seen that E(X_1) = λ_µ. As we saw in Lemma 3.2, the Ergodic Theorem fails to provide non-trivial information. This was vastly generalized by Chow and Robbins for i.i.d. random variables in [CR61], and in the ergodic stationary case by Aaronson in [Aar77], who proved that for any sequence of positive constants {b_n}, either lim sup_n S_n/b_n = ∞ almost surely or lim inf_n S_n/b_n = 0 almost surely. It is possible to prove that the lack of convergence in the previous theorem is due to a finite number of terms which are not comparable in size to the rest of the terms of the sum. This was proved in the i.i.d. case by Mori in [Mor76], [Mor77] and in the stationary ergodic case by Aaronson and Nakada in [AN03]. We formulate the result of Aaronson and Nakada in a setting appropriate for our purposes.
We denote the ergodic sum of a function f by S_n(f) and define S*_n(f) = S_n(f) − max{f, f ∘ T, . . . , f ∘ T^{n−1}}. We refer to S*_n as the trimmed ergodic sum of f.
Then there exists a sequence {b_n} such that lim_{n→∞} S*_n/b_n = 1 almost surely, and we say that {X_n} has trimmed convergence.
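The effect of trimming can be seen in a simulation with i.i.d. Pareto(1) variables (tail µ(X > t) = 1/t, infinite mean), used here as an i.i.d. stand-in for the stationary ergodic processes of the theorem; for these variables b_n is of order n log n:

```python
import math
import random

random.seed(0)

# i.i.d. Pareto(1) stand-in: X = 1/U with U uniform on (0, 1], so P(X > t) = 1/t
# and E(X) = infinity. Here b_n is of order n log n; S_n/b_n is wrecked by the
# single largest term, while the trimmed ratios concentrate.
n, runs = 20_000, 40
b_n = n * math.log(n)
full, trimmed = [], []
for _ in range(runs):
    xs = [1.0 / (1.0 - random.random()) for _ in range(n)]
    s = sum(xs)
    full.append(s / b_n)
    trimmed.append((s - max(xs)) / b_n)   # remove only the maximal term

def spread(v):
    m = sum(v) / len(v)
    return (sum((y - m) ** 2 for y in v) / len(v)) ** 0.5

assert spread(trimmed) < spread(full)     # trimming tames the fluctuations
assert 0.5 < sum(trimmed) / runs < 2.0    # trimmed ratios sit near a constant
```

Removing the single maximal summand is exactly the trimming in the definition of S*_n above; in the heavy-tailed regime that one term carries a non-negligible fraction of the whole sum.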
As remarked in [AN03], any Gibbs-Markov map is CF-mixing with exponential rate. For our particular random variables, the series in the previous theorem can be explicitly expressed in terms of the sequences {p_n} and {r_n}: if Σ_{n=1}^∞ (log r_n)^2 (p_n^2 + 2 p_n p_{n+1}) < ∞, then the sequence {X_n = −log r_{a_1} ∘ T^{n−1}} has trimmed convergence.
Proof. We show that the series in the previous theorem converges if Σ_{n=1}^∞ (log r_n)^2 (p_n^2 + 2 p_n p_{n+1}) < ∞. Let F(t) = µ(X ≥ t) and compare the series to the corresponding integral. We can then see that if x ∈ [0, −log r_1) then F(x) = 1, while if x ∈ [−log r_n, −log r_{n+1}) for n ≥ 1 then F(x) is the corresponding tail sum of the p_m. Writing a_n = (log r_n)^2 and b_n for the associated tail terms, the expression takes the form Σ_{n=1}^∞ (a_{n+1} − a_n) b_n, which can be rewritten by summation by parts. With this, the integral becomes comparable to Σ_n (log r_n)^2 (p_n^2 + 2 p_n p_{n+1}), as we wanted to prove.
We show now that the trimmed convergence condition is satisfied by systems for which {r n } decays polynomially or slower.
Lemma 4.7. Suppose that {r_n} decays polynomially and that µ satisfies assumption 1. Then the sequence {X_n = −log r_{a_1} ∘ T^{n−1}} has trimmed convergence.
Proof. Since p_n and p_{n+1} are comparable, it suffices to prove that Σ_{n=1}^∞ (log r_n)^2 p_n^2 < ∞.
Note that {p_n} ∈ ℓ^2 and that the sequence {p_n} is decreasing; comparing in the limit the series on the left-hand side with the series Σ_n p_n^2 (log r_n)^2, we get that this series converges. Since the trimmed sum is o(b_n), the first condition must hold on a set of full measure Z_2. Let Z = Z_1 ∩ Z_2 and x ∈ Z. Given 1 > ε > 0, there exists n_0 such that |S*_n/b_n − 1| < ε for every n ≥ n_0 at x. Since lim sup_n S_n/b_n = ∞, given an integer M > 0 there exists n_1 ≥ n_0 such that S_{n_1}/b_{n_1} > 2M + 1 at x. Combining these two inequalities, we obtain S_{n_1} − S*_{n_1} > ((2M + 1) − (1 + ε)) b_{n_1} > M b_{n_1}. Now, there exists an index j ∈ {1, . . . , n_1} such that X_j = max{X_1, . . . , X_{n_1}} at x, and so S*_j = S_{j−1}. Since the X_i are positive, we have that S_{j−1} = S*_j ≤ S*_{n_1} < (1 + ε) b_{n_1} and X_j = S_{n_1} − S*_{n_1} > M b_{n_1}, and hence X_j/S_{j−1} > M/(1 + ε) > M/2. This implies that lim sup_{n→∞} X_n/S_{n−1} = ∞ and so lim sup_{n→∞} log |I_n|/log |I_{n−1}| = ∞, as we wanted to prove.

Computing the Hausdorff dimension
With the tools developed in the previous sections, we proceed with the dimension computations.
Now we prove an upper bound for dim_H µ. This bound is related to the tail decay ratio ŝ. We prove two lemmas needed to give the desired bound. The first lemma shows that {p_n} decays more slowly than n^{−(1+δ)} for every δ > 0, while the second lemma shows the existence of ŝ and that ŝ = 0 for Gauss-like maps.
Lemma 5.1. Suppose that the decay ratio exists and is equal to s, the sequence {r_n} decays polynomially and the measure µ has infinite entropy. Then for all δ > 0, there exist constants C, n_0 such that p_n ≥ C/n^{1+δ} for all n ≥ n_0.
Proof. Let α > 0 be the polynomial decay of {r_n}. By Proposition 3.6, s = s_∞ = 1/α, so we can take ε > 0 small enough that (α + ε)(s + ε) < 1 + δ. Then there exist C > 0 and n_0 ∈ N such that C/n^{α+ε} ≤ r_n and (s + ε) log r_n ≤ log p_n for all n ≥ n_0. This implies that C^{s+ε}/n^{1+δ} ≤ C^{s+ε}/n^{(α+ε)(s+ε)} ≤ p_n for all n ≥ n_0, as we wanted.
Lemma 5.2. Under the same assumptions as the previous lemma, the tail decay ratio ŝ exists and is equal to zero.
Proof. By the lemma above, for δ > 0 there are constants C, n_0 such that p_n ≥ C/n^{1+δ} for all n ≥ n_0. This implies that Σ_{m≥n} p_m ≥ C/(δ n^δ) for n ≥ n_0. On the other hand, if we take ε < α − 1, there exists n_1 such that r_n ≤ C'/n^{α−ε} for n ≥ n_1, and consequently Σ_{m≥n} r_m ≤ C''/n^{α−1−ε}. Therefore lim sup_n log(Σ_{m≥n} p_m)/log(Σ_{m≥n} r_m) ≤ δ/(α − 1 − ε).
Letting δ → 0 we conclude the result.
Now we can compute the lower local dimension and, consequently, obtain the Hausdorff dimension of the measure. In what follows, I^{m·}_n(x) = I(a_1(x), . . . , a_{n−1}(x), a_n(x) + m) denotes the level n cylinder obtained by increasing the last digit by m.
Note now that the above inequality can be expressed in terms of the sequences {p_n}, {r_n} using Lemma 3.3, where G_1, G_2 are constants arising from the Gibbs property and the finite first variation of the potential, and D_1, D_2 are constants arising from the bounded distortion property. Thus, for n large enough and a_n large enough, we have log µ(B(x, r_n))/log r_n ≤ (s + ε) times the corresponding quotient of Birkhoff sums and tail sums. If α > 1 is the polynomial decay of {r_n}, then by Lemma 3.8 we get the tail decay asymptotic Σ_{m=0}^∞ r_{n+m} ≍ 1/n^{α−1}.
We can then rewrite the above inequality in terms of K, the constant implied in the tail asymptotic for {r_n}. By Lemma 4.4 and Proposition 4.2, we can take an increasing subsequence {a_{n_k}} so that the initial terms of the sums above become negligible. We then get lim_{k→∞} log µ(B(x, r_{n_k}))/log r_{n_k} ≤ ŝ + ε. Letting ε → 0 we conclude that d̲(x) ≤ ŝ, as we wanted.
From the above result, and since ŝ = 0 by Lemma 5.2, we conclude that for such measures dim_H µ = 0.
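Continuing the hypothetical weights p_n ∝ 1/(n log² n) from section 2: their tails behave like 1/log n, while the Gauss-map tails Σ_{m≥n} 1/(m(m+1)) equal 1/n exactly, so the tail ratio behaves like log log n/log n, consistent with ŝ = 0:

```python
import math

# For the hypothetical weights p_n ∝ 1/(n log^2 n): integral comparison gives
# sum_{m >= n} p_m ≈ 1/log n (up to normalization), while the Gauss-map tails
# are exact: sum_{m >= n} 1/(m(m+1)) = 1/n. The tail ratio is then
#     hat_s(n) ≈ log(1/log n) / log(1/n) = log log n / log n -> 0.
def hat_s(n):
    return math.log(1.0 / math.log(n)) / math.log(1.0 / n)

vals = [hat_s(10 ** k) for k in (2, 4, 8, 12, 20)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing
assert vals[-1] < 0.1                              # consistent with hat_s = 0
```

This also explains why the convergence is so slow: the tail ratio only decays like 1/log n, mirroring the slow approach of log p_n/log r_n to s.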

Packing dimension
In the previous section we completely determined the Hausdorff dimension of the measures of our interest. Now we proceed to compute the packing dimension. First we give a lower bound for the upper local dimension. The proof uses similar ideas to the proof of Lemma 5.3: we choose a particular cover of the ball and use the fact that the Birkhoff sums of the potentials −log p_{a_1}, −log r_{a_1} grow faster than linearly on a set Z_1 of full measure, where f_{n,k} is as defined in the proof of Theorem 3.4. Intersect Z_1 with the subset Z_2 ⊂ I of full measure given by Lemma 3.2 and pick a point x ∈ Z_1 ∩ Z_2. Since p_1 < 1, we can pick a subsequence k_n → ∞ such that a_{k_n} = 1 for every n. Then, for all n, take r_n = min{|I_{k_n}|, |I^l_{k_n}|, |I^r_{k_n}|}. Here we denote I^l_n = I(a_1, . . . , a_{n−1}, a_n + 1) and I^r_n = I(a_1, . . . , a_{n−1}, a_n − 1) whenever a_n > 1. This choice of r_n implies that B(x, r_n) ⊆ I_{k_n} ∪ I^l_{k_n} ∪ I^r_{k_n}. From the Gibbs property and the fact that ϕ(x_n), ϕ(x_{n+1}) and r_n, r_{n+1} are comparable, it follows that there are constants C_1, C_2 > 0 such that µ(I_n ∪ I^l_n ∪ I^r_n) ≤ C_1 µ(I_n) and |I^l_n| ≥ C_2 |I_n| for every n. Using this and Lemma 3.3, we have that log µ(B(x, r_n))/log r_n ≥ log(C_1 µ(I_{k_n}))/log(C_2 |I_{k_n}|). By Lemma 3.2 and Theorem 3.4, the last expression converges to s, as desired.
Giving an upper bound for the upper local dimension requires a more involved analysis of the geometric structure of the partition and its relation to the geometry of the balls. We will need the following lemma: Lemma 6.2. Suppose that {r_n} decays polynomially with degree α > 1. Then, for every 0 < δ < min{1/3, (α − 1)/(α + 1)} and 0 < η < 1/2, there exists k_0 ∈ N such that log(Σ_{m=k}^{n+k} p_m)/log(Σ_{m=k−1}^{n+k+1} r_m) ≤ (1 + δ)/(α − δ) + η for all k ≥ k_0 and n ∈ N.
With the previous lemma, we can now prove the upper bound for the upper local dimension. The proof is based on carefully choosing the covers of the balls; such covers must be fine enough that they are not affected by Proposition 4.2. This means that we want to cover the ball with cylinders of the same scale; otherwise, the cover would yield trivial bounds.
Proposition 6.4. Suppose T is a Gauss-like map and µ is an infinite entropy Gibbs measure satisfying assumption 1. Then lim sup_{r→0} log µ(B(x, r))/log r ≤ s for µ-almost every x ∈ I.
Proof. Let x be a point where Theorem 3.4 and Lemma 3.2 applied to f = −log r_{a_1} hold. Given r > 0, there exists a unique natural number n = n(r) such that |I_n(x)| < r ≤ |I_{n−1}(x)|.
Note that n → ∞ as r → 0. Let δ > 0 and η be as in Lemma 6.2. Then there exists k_0 ∈ N such that log(Σ_{m=k}^{n+k} p_m)/log(Σ_{m=k−1}^{n+k+1} r_m) ≤ (1 + δ)/(α − δ) + η for all k ≥ k_0. Recall that by I^{m·r}_n(x) we denote the cylinder I(a_1, . . . , a_{n−1}, a_n − m), where (a_n) is the sequence coding x and m < a_n. We separate the proof into two cases: Case 1: I(a_1, . . . , a_{n−1}, k_0) ⊂ B(x, r).
This implies that there exists k 1 ∈ N such that r an−k + nD 1 + D 2 .
Corollary 6.5. For an infinite entropy Gibbs measure µ satisfying (2.3), associated to a Gauss-like map, we have that 0 = d̲(x) < s = d̄(x) for almost every point, and hence µ is not exact dimensional.
With this we have found the almost sure behavior of the local dimensions, and hence, we have obtained values for both the packing and the Hausdorff dimension.

Final remarks
Theorem 2.14 implies that for maps such that {r_n} decays polynomially, the Hausdorff dimension of ergodic invariant measures with infinite entropy is equal to zero under mild independence and regularity assumptions on the measure.
Question 1. Is there an ergodic invariant measure µ for a Gauss-like map with h_µ = λ_µ = ∞, r_n ≠ p_n and dim_H µ > 0?
The condition r_n ≠ p_n rules out the Lebesgue measure, which clearly has dimension equal to 1. We can construct such a measure if we assume that {r_n} decays more slowly than polynomially.
We also formulate two questions for a more general case: Question 2. What can be said about the almost sure value of the symbolic dimension when µ is only assumed to be ergodic?
Question 3. What can be said about dim H µ when µ is only assumed to be ergodic?
The main difficulty with question 2 is that our methods rely on the asymptotic independence of the digits in the symbolic space. This independence lets us write the measure and the diameter of cylinders in the form of Birkhoff sums, allowing us to use ergodic theoretic methods to study the almost sure behavior of such sums.
On the other hand, the main difficulty with question 3 is that one of the main ergodic theoretic tools we use (Theorem 4.5) assumes that the process {X_n} is CF-mixing. For measures which do not satisfy any kind of independence assumption, we are not able to use such techniques.