The Hausdorff dimension of measures for iterated function systems which contract on average

In this note we consider measures supported on limit sets of systems that contract on average. 
In particular, we present an upper bound on their Hausdorff dimension.

1. Introduction and statement of results. In this note we consider measures supported on limit sets of systems that contract on average. Many articles have addressed upper bounds on the Hausdorff dimension of measures for attractors (and stationary measures for strictly contracting iterated function systems (IFS)), or the closely related case of repellers and invariant measures for expanding maps. For conformal maps there are quite comprehensive results and, even in the non-conformal case, there are a number of strong results. For example, [4] and [6] deal with the linear case and [3] and [12] with the nonlinear case. In [1] iterates of random functions which contract on average are considered, and this idea can be put into the framework of IFS which contract on average. In [9], with the addition of random errors, the exact value of the dimension is computed almost surely. However, when we turn to the problem of estimating the Hausdorff dimension of measures for IFS which contract on average, there are fewer results, and most previous authors have concentrated on the case where the maps are conformal. Our aim is to find upper bounds in the general (non-conformal and contracting on average) case. Although several papers, including [8] and [5], provide upper bounds for the Hausdorff dimension of the measures defined by such systems, our results are new even in the uniformly contracting case.
In this paper we shall consider an iterated function system in R^d which contracts on average. Our aim is to provide a sharp upper bound for the Hausdorff dimension of natural measures defined using such systems. Let f_1, ..., f_m : R^d → R^d be diffeomorphisms and let 0 < γ_1^(i) ≤ γ_2^(i) be constants such that

γ_1^(i) ||v|| ≤ ||D_x f_i (v)|| ≤ γ_2^(i) ||v||  for all x, v ∈ R^d and 1 ≤ i ≤ m.

We denote γ_1 = min_{1≤i≤m} γ_1^(i) and γ_2 = max_{1≤i≤m} γ_2^(i). Let Σ_m be the full shift on m symbols. We shall consider ergodic probability measures μ on Σ_m satisfying

η := ∫ log γ_2^(i_1) dμ(i) < 0.    (1)

If this is the case we say that the iterated function system contracts on average. The sequence f_{i_1} ∘ ··· ∘ f_{i_n}(0) converges for μ-almost all i (see [1]) and we will denote

Π(i) := lim_{n→∞} f_{i_1} ∘ ··· ∘ f_{i_n}(0).

This is well defined for μ-almost all i and so we can let ν = μ ∘ Π^{-1}. If the system is uniformly contracting and the open set condition is satisfied then [12] gives an upper bound for the dimension. If there are expansions in the IFS then the limit set will include ∞ and in many cases is equal to the whole of R^d (see [8] for examples). For the linear case where the contractions are less than 1/2, typical values for the Hausdorff dimension of the attractor and stationary measures can be computed [4], [6]. If the linear maps have norm less than 1 then adding random perturbations gives an almost sure equality [6].
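To make the contraction-on-average condition concrete, the following sketch (a toy one-dimensional system of our own, not one of the paper's examples) evaluates the backward iterates f_{i_1} ∘ ··· ∘ f_{i_n}(0) for f_1(x) = 2x + 1 and f_2(x) = x/16 under the (1/2, 1/2)-Bernoulli measure; here ∫ log γ_2^(i_1) dμ = (log 2 - log 16)/2 < 0 even though f_1 alone is expanding.

```python
import random

# Toy 1-D IFS (our own illustration, not from the paper): f1 expands by 2,
# f2 contracts by 1/16.  Under the (1/2, 1/2)-Bernoulli measure the average
# log-contraction is (log 2 - log 16)/2 < 0, so the system contracts on
# average although f1 alone is expanding.
f = [lambda x: 2.0 * x + 1.0, lambda x: x / 16.0]

def backward_iterate(seq):
    """Evaluate f_{i_1} o ... o f_{i_n}(0): apply the maps inside-out."""
    x = 0.0
    for i in reversed(seq):
        x = f[i](x)
    return x

random.seed(1)
seq = [random.randint(0, 1) for _ in range(200)]

# Pi(i) is the limit of the backward iterates; for a mu-typical sequence
# successive truncations agree to high accuracy.
vals = [backward_iterate(seq[:n]) for n in (25, 50, 100, 150, 200)]
print(vals)
```

For a typical sequence the last truncations agree to many decimal places; the forward iterates f_{i_n} ∘ ··· ∘ f_{i_1}(0), by contrast, need not converge at all.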
For A ∈ Lin(R^d, R^d) we define the singular values α_1(A) ≥ α_2(A) ≥ ··· ≥ α_d(A) to be the eigenvalues of (A*A)^{1/2}. For 0 < s ≤ d we define the singular value function by φ^s(A) = α_1(A) ··· α_{k-1}(A) α_k(A)^{s-k+1}, where k is the integer with k - 1 ≤ s ≤ k, and we set φ^0(A) = 1. For a differentiable map f we write φ^s(f, x) := φ^s(D_x f). Our first result describes the subadditive behaviour of φ^s(f_i, x).

Lemma 1. For any 0 ≤ s ≤ d the function φ^s satisfies the following subadditive property: for any x ∈ R^d and n, k ≥ 1,

φ^s(f_{i_1} ∘ ··· ∘ f_{i_{n+k}}, x) ≤ φ^s(f_{i_1} ∘ ··· ∘ f_{i_n}, f_{i_{n+1}} ∘ ··· ∘ f_{i_{n+k}}(x)) · φ^s(f_{i_{n+1}} ∘ ··· ∘ f_{i_{n+k}}, x).

Proof. This can be proved using Lemma 2.1 in [4], which states that for T, U ∈ Lin(R^d, R^d) we have φ^s(TU) ≤ φ^s(T) φ^s(U), where φ^s(T), etc., have the obvious interpretation. The result then follows by the chain rule D_x(g ∘ h) = (D_{h(x)} g)(D_x h).

Let g^s : Σ_m → R be defined by g^s(i) = log φ^s(f_{i_1}, Π(σ i)). It follows by the Birkhoff Ergodic Theorem that

lim_{n→∞} (1/n) Σ_{j=0}^{n-1} g^s(σ^j i) = ∫ g^s(i) dμ(i) =: g^s(μ)

for μ-almost all i ∈ Σ_m.
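The submultiplicativity underlying Lemma 1 can be checked numerically. The sketch below (our own illustration) computes φ^s from the singular value decomposition and tests φ^s(TU) ≤ φ^s(T)φ^s(U) on random matrices:

```python
import numpy as np

def phi_s(A, s):
    """Singular value function: phi^s(A) = a_1...a_{k-1} * a_k^(s-k+1)
    for k-1 <= s <= k, where a_1 >= ... >= a_d are the singular values."""
    if s == 0:
        return 1.0
    a = np.linalg.svd(A, compute_uv=False)  # returned in descending order
    k = int(np.ceil(s))
    return float(np.prod(a[:k - 1]) * a[k - 1] ** (s - k + 1))

rng = np.random.default_rng(0)
d = 3
ok = True
for _ in range(1000):
    T = rng.normal(size=(d, d))
    U = rng.normal(size=(d, d))
    s = rng.uniform(0.01, d)
    # Lemma 2.1 of [4]: phi^s(TU) <= phi^s(T) * phi^s(U); the tiny factor
    # only absorbs floating-point rounding.
    ok = ok and phi_s(T @ U, s) <= phi_s(T, s) * phi_s(U, s) * (1 + 1e-9)
print(ok)
```

Taking logarithms turns this inequality into the subadditivity of log φ^s along compositions, which is what the Birkhoff and subadditive ergodic theorems are applied to.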
Given a Borel set X we define H^s_ρ(X) to be the infimum of the sums Σ_j r_j^s over finite covers ∪_j B(z_j, r_j) ⊇ X by balls B(z_j, r_j) with radii satisfying r_j ≤ ρ. Letting H^s(X) = lim_{ρ→0} H^s_ρ(X), the Hausdorff dimension of X is then given by

dim_H(X) = inf{s : H^s(X) = 0} = sup{s : H^s(X) = ∞}.

We recall that the Hausdorff dimension of a measure is the infimum of the dimensions of Borel sets of full measure. We now have our first upper bound for the Hausdorff dimension of ν.

Lemma 2. Let h(μ) denote the entropy of μ and let s satisfy g^s(μ) = -h(μ). Then dim_H(ν) ≤ s.

However, it is possible to improve on this bound. For i ∈ Σ_m we consider the values (1/n) log φ^s(f_{i_1} ∘ ··· ∘ f_{i_n}, Π(σ^n i)).
By the sub-additive ergodic theorem [7] this converges almost surely to

f^s(μ) := lim_{n→∞} (1/n) ∫ log φ^s(f_{i_1} ∘ ··· ∘ f_{i_n}, Π(σ^n i)) dμ(i).

Now we consider the iterated function system formed by taking the iterates f_{i_1} ∘ ··· ∘ f_{i_n} and the same measure μ. In this case we can define

g^s_n(μ) = ∫ log φ^s(f_{i_1} ∘ ··· ∘ f_{i_n}, Π(σ^n i)) dμ(i),

and note that considering the system of nth level iterates will cause the entropy to be multiplied by n. Thus applying Lemma 2 gives that dim_H(ν) ≤ s_n, where s_n satisfies (1/n) g^{s_n}_n(μ) = -h(μ). Moreover, by the subadditive ergodic theorem we have that

f^s(μ) = lim_{n→∞} (1/n) g^s_n(μ) = inf_n (1/n) g^s_n(μ).

Our next step is to show how f^s(μ) can be written in terms of Lyapunov exponents.

Lemma 3. There exist constants λ_1 ≥ λ_2 ≥ ··· ≥ λ_d such that for k - 1 ≤ s ≤ k,

f^s(μ) = λ_1 + ··· + λ_{k-1} + (s - k + 1) λ_k,

where for μ-almost all i

λ_j = lim_{n→∞} (1/n) log α_j(f_{i_1} ∘ ··· ∘ f_{i_n}, Π(σ^n i)), 1 ≤ j ≤ d.

Proof. This follows from Oseledec's Multiplicative Ergodic Theorem [7].
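Oseledec's theorem also suggests a standard way to approximate the λ_j numerically: push an orthonormal frame through the derivative cocycle and QR-renormalise at each step. The sketch below (our own illustration, using two hypothetical linear parts rather than any example from the paper) estimates λ_1 ≥ λ_2 for an i.i.d. random matrix product and checks the exact identity λ_1 + ··· + λ_d = ∫ log|det| dμ:

```python
import numpy as np

# Hypothetical linear parts of an affine IFS (illustration only).
B = [np.array([[0.4, 0.1], [0.0, 0.3]]),
     np.array([[1.1, 0.0], [0.2, 0.8]])]

def lyapunov(mats, probs, n=20000, seed=0):
    """Estimate the Lyapunov exponents of an i.i.d. random matrix product
    by QR-renormalising an evolving frame (the Benettin scheme)."""
    rng = np.random.default_rng(seed)
    d = mats[0].shape[0]
    Q = np.eye(d)
    sums = np.zeros(d)
    for _ in range(n):
        M = mats[rng.choice(len(mats), p=probs)]
        Q, R = np.linalg.qr(M @ Q)
        sums += np.log(np.abs(np.diag(R)))  # log growth rates of the frame
    return sums / n  # approximates lambda_1 >= ... >= lambda_d

lam = lyapunov(B, [0.5, 0.5])
print(lam)

# Sanity check: the sum of the exponents equals the average log |det|.
expected_sum = 0.5 * (np.log(abs(np.linalg.det(B[0])))
                      + np.log(abs(np.linalg.det(B[1]))))
print(lam.sum(), expected_sum)
```

The diagonal of R records how much the frame is stretched in each direction at one step; averaging its logarithms recovers the exponents in decreasing order.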
We now come to our main result.

Theorem 1. Let ν = μ ∘ Π^{-1} be the stationary measure for an iterated function system as defined above. We have that

dim_H(ν) ≤ s_0 := min{s : h(μ) + f^s(μ) ≤ 0}.    (4)

Proof of Theorem 1 (assuming Lemma 2). The proof of Lemma 2 will be given in the remainder of the paper. Fix 0 ≤ s ≤ d. It follows from the definitions of g^s_n(μ) and f^s(μ) and from Lemma 3 that f^s(μ) = inf_n (1/n) g^s_n(μ) ≤ (1/n) g^s_n(μ), and thus dim_H(ν) ≤ s_n for every n. We now need to show that this is the minimum attained by (4). To see this note that if h(μ) + f^s(μ) > 0 then h(μ) + (1/n) g^s_n(μ) > 0 for every n, so that s_n > s for every n, and so s_0 is the minimum attained by (4).
In the case of conformal maps we always have an equality, although there exist examples of affine contractions for which there is a strict inequality. However, with random perturbations to affine contractions we can recover the equality in the context of random attractors [6].
2. Calculating Hausdorff measure and Hausdorff dimension. The key to our proof of Lemma 2 is estimating how one iteration of each map affects the Hausdorff measure. For this purpose we need a simple result regarding the derivative of a diffeomorphism.

Lemma 4. Let f be a diffeomorphism whose singular values are bounded away from 0. For any ε > 0 we can find ρ > 0 such that for any z,

f(B(z, r)) is contained in an ellipsoid with principal axes (1 + ε) r α_1(f, z), ..., (1 + ε) r α_d(f, z), for all r ≤ ρ,    (6)

and

α_j(f, y) ≤ e^ε α_j(f, z) for all ||y - z|| ≤ ρ and 1 ≤ j ≤ d.    (7)

Proof. Let ε > 0. We can find ρ_1 such that condition (6) is satisfied, by Fréchet differentiability of f. The Fréchet derivative Df : R^d → R^d is a linear map. Since the singular values of Df are bounded away from 0, we can choose δ > 0 such that α_j(f, z) + δ ≤ e^ε α_j(f, z) for all z and 1 ≤ j ≤ d. Since Df is uniformly continuous, we can find ρ_2 such that for ||y - z|| ≤ ρ_2 the image D_y f(B(0, 1)) is contained in (D_z f(B(0, 1)))_δ, where (D_z f(B(0, 1)))_δ denotes a δ-neighbourhood of the image D_z f(B(0, 1)).
Since the singular values of D_z f are given by the principal axes of the ellipsoid D_z f(B(0, 1)), it follows that D_y f(B(0, 1)) will be contained inside the ellipsoid with principal axes α_1(f, z) + δ, ..., α_d(f, z) + δ. Similarly, D_z f(B(0, 1)) will be contained inside the ellipsoid with principal axes α_1(f, y) + δ, ..., α_d(f, y) + δ. Thus for each 1 ≤ j ≤ d,

α_j(f, y) ≤ α_j(f, z) + δ ≤ e^ε α_j(f, z),

and (7) follows on taking ρ = min{ρ_1, ρ_2}.

We can now prove a result estimating the effect on Hausdorff measure of an iteration of f. This is very similar in nature to Lemma 3 and Corollary 1 in [12].

Lemma 5. Let ε > 0 satisfy (1 + ε)e^ε < 2 and choose ρ as in Lemma 4. Then for any set A of diameter less than ρ and any x ∈ A,

H^s(f(A)) ≤ 2^s d^{s/2} φ^s(f, x) H^s_ρ(A).
Proof. We choose ε > 0 to be sufficiently small that (1 + ε)e^ε < 2. By Lemma 4 we can find ρ such that both (6) and (7) are satisfied. Let H^s_ρ(A) = h. It follows that for δ > 0 we can find a finite set of balls B(z_i, r_i) with ∪_i B(z_i, r_i) ⊇ A, r_i ≤ ρ for all i and Σ_i r_i^s < h + δ. By definition, the sets f(B(z_i, r_i)) cover f(A). Furthermore, due to our choice of ρ and Lemma 4, these will be contained in ellipsoids with principal axes (1 + ε) r_i α_j(f, x) e^ε, j = 1, ..., d: more precisely, by (6) f(B(z_i, r_i)) is contained in an ellipsoid with principal axes (1 + ε) r_i α_j(f, z_i), and we can then apply (7). Choose k such that k - 1 ≤ s ≤ k. We can then cover each such ellipsoid by balls of radius comparable to its kth principal axis, and summing the sth powers of the radii over the resulting cover of f(A) gives a bound of the form 2^s d^{s/2} φ^s(f, x) Σ_i r_i^s ≤ 2^s d^{s/2} φ^s(f, x)(h + δ). Since δ was arbitrary the proof is complete.
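The covering step at the heart of this argument, replacing an ellipsoid by balls at the scale of its kth principal axis, can be checked numerically. The sketch below (our own illustration; the grid construction and the constant 2^d d^{s/2} differ slightly from the lemma's 2^s d^{s/2}) verifies that the resulting sum of radius^s is controlled by the singular value function of the axes:

```python
from math import ceil, sqrt, prod
import random

def cover_cost(axes, s):
    """Cover an ellipsoid with semi-axes a_1 >= ... >= a_d by the balls
    circumscribing an axis-aligned grid of boxes of side 2*a_k (k = ceil(s)),
    and return the sum of radius^s over the cover."""
    d = len(axes)
    k = ceil(s)
    r = axes[k - 1]                            # grid scale: kth semi-axis
    n_boxes = prod(ceil(a / r) for a in axes)  # boxes needed along each axis
    return n_boxes * (sqrt(d) * r) ** s        # each box fits in a ball of radius sqrt(d)*r

random.seed(0)
ok = True
for _ in range(1000):
    d = random.randint(1, 4)
    axes = sorted((random.uniform(0.1, 5.0) for _ in range(d)), reverse=True)
    s = random.uniform(0.01, d)
    k = ceil(s)
    phi = prod(axes[:k - 1]) * axes[k - 1] ** (s - k + 1)  # phi^s of the axes
    # The cover's contribution to H^s is at most 2^d * d^(s/2) * phi^s.
    ok = ok and cover_cost(axes, s) <= 2 ** d * d ** (s / 2) * phi * (1 + 1e-9)
print(ok)
```

The point is that only the k - 1 long axes force extra balls, which is exactly why the bound is governed by φ^s rather than by the diameter alone.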

THOMAS JORDAN AND MARK POLLICOTT
The next lemma provides a simple method for giving an upper bound on the Hausdorff dimension of a measure.

Lemma 6. Let μ be a probability measure on R^d. Suppose we can find a sequence of (bounded) sets A_n and a sequence ρ_n → 0 such that μ(A_n) → 1 and H^s_{ρ_n}(A_n) → 0 as n → ∞. Then dim_H(μ) ≤ s.

Proof. Passing to a subsequence we may assume that μ(A_n) ≥ 1 - 2^{-n}; denote these sets by B_n. Fix t ∈ N and let Y_t = ∩_{n≥t} B_n. Observe that μ(Y_t) ≥ 1 - Σ_{n≥t} 2^{-n} = 1 - 2^{1-t}. For any n ≥ t a cover of B_n is also a cover of Y_t, and so H^s(Y_t) = 0, thus implying dim_H Y_t ≤ s. Furthermore μ(∪_{t∈N} Y_t) = 1 and dim_H(∪_{t∈N} Y_t) ≤ s, which is sufficient to complete the proof.
3. Proof of Lemma 2. The method of proof of Lemma 2 involves applying Lemma 6. To begin we need to define a suitable sequence of sets. This will be done by defining suitable subsets of the shift space Σ_m and then projecting to R^d. Recall the definition of η given in (1). Fix ε > 0 such that η + ε < 0. We then choose t such that g^t(μ) + h(μ) = -3ε; it is clear that as ε → 0 we have t → s from above, where s is as in Lemma 2. Let C_0 > 2^t d^{t/2} and choose N such that (1/N) log C_0 < ε. We would next like to choose sets X_n ⊂ Σ_m such that any i ∈ X_n satisfies:

1. e^{-nN(h(μ)+ε)} ≤ μ([i_1, ..., i_{nN}]) ≤ e^{-nN(h(μ)-ε)};
2. Σ_{j=0}^{nN-1} g^t(σ^j i) ≤ nN(g^t(μ) + ε);
3. γ_2^(i_1) γ_2^(i_2) ··· γ_2^(i_{nN}) ≤ e^{nN(η+ε)};
4. if we denote r_n = n^2, then Π(σ^{nN}(i)) ∈ B(0, r_n).
We then write Λ_n = Π X_n. It remains to show that we can choose sets X_n in such a way that Λ_n satisfies the conditions of Lemma 6. We begin by establishing the first condition.

Lemma 7. We can choose the sets X_n such that ν(Λ_n) → 1 as n → ∞.
Proof. By the definition of ν and Λ_n, to get that ν(Λ_n) → 1 it suffices to show that we can find sets X_n with μ(X_n) → 1. Thus it suffices to show that, as n → ∞, the μ-measure of the set of sequences satisfying the limiting conditions corresponding to the four conditions above converges to 1; by Egoroff's theorem we can then deduce that sets satisfying (1)-(4) exist. For example, the fact that sequences satisfying conditions (1) and (2) have measure tending to 1 follows from the Shannon-McMillan-Breiman Theorem and the Birkhoff Ergodic Theorem [7], respectively. For condition (3), note that by the Birkhoff Ergodic Theorem applied to log γ_2^(i_1) we have, for μ-almost all i,

lim_{n→∞} (1/n) Σ_{j=1}^{n} log γ_2^(i_j) = η,

and condition (3) is implied by (1/nN) Σ_{j=1}^{nN} log γ_2^(i_j) ≤ η + ε. For condition (4) we first note that μ{i : Π(i) ∈ B(0, r_n)} → 1 as n → ∞. Since μ is shift invariant it then follows that

μ{i : Π(σ^{nN} i) ∈ B(0, r_n)} = μ{i : Π(i) ∈ B(0, r_n)} → 1,

which is sufficient to complete the proof.
We now need to consider the second condition from Lemma 6. We define a sequence {β_n} by β_n = 2√d e^{nN(η+ε)}. Fix ρ as in Lemma 5 and let k = log n. For a sequence i ∈ X_n we consider the set

B_{nN}(i, ρ) = {l ∈ Σ_m : l_j = i_j for 1 ≤ j ≤ nN and Π(σ^{nN} l) ∈ B(Π(σ^{nN} i), ρ γ_2^{Nk})}.

An important property of these sets is that for l ∈ B_{nN}(i, ρ) and 0 ≤ j ≤ nN we have Π(σ^j l) ∈ B(Π(σ^j i), ρ).

Lemma 8. We can find a finite set Y_n ⊂ Σ_m with at most e^{nN(h(μ)+ε)} (2√d n^2/(ρ γ_2^{Nk}))^d elements such that X_n ⊆ ∪_{i∈Y_n} B_{nN}(i, ρ).

Proof. By property (1) of X_n each i ∈ X_n satisfies μ([i_1, ..., i_{nN}]) ≥ e^{-nN(h(μ)+ε)}, and hence, since μ is a probability measure, it follows that there are at most e^{nN(h(μ)+ε)} choices for the first nN elements of a sequence in X_n. Fix one of these choices and note that we can cover B(0, r_n) by a centred covering with at most (2√d n^2/(ρ γ_2^{Nk}))^d balls of radius less than ρ γ_2^{Nk}.
We now use Lemma 5 to estimate the Hausdorff measure of one of these sets.

Lemma 9. Fix ρ as in Lemma 5 and let i ∈ X_n. Then

H^t_{β_n}(Π(B_{nN}(i, ρ))) ≤ b_n, where b_n = C_0^n e^{nN(g^t(μ)+ε)}.

Proof. For 1 ≤ k ≤ n consider the sets Π(σ^{(n-k)N} B_{nN}(i, ρ)) and note that they all have diameter less than ρ. Thus we can apply Lemma 5 iteratively n times, at the kth stage with the map f_{i_{(k-1)N+1}} ∘ ··· ∘ f_{i_{kN}}, to get H^t_{β_n}(Π(B_{nN}(i, ρ))) ≤ b_n, where b_n is as in the statement of the lemma. Indeed, by the subadditivity of φ^t, the definition of g^t and property (2) of X_n we have that

Π_{k=1}^{n} 2^t d^{t/2} φ^t(f_{i_{(k-1)N+1}} ∘ ··· ∘ f_{i_{kN}}, Π(σ^{kN} i)) ≤ C_0^n e^{nN(g^t(μ)+ε)},

and the proof is complete.
By applying Lemmas 8 and 9 we get a result which shows that the sets Λ n satisfy the conditions to apply Lemma 6.
Lemma 10. We have that H^t_{β_n}(Λ_n) ≤ c_n, where c_n = |Y_n| b_n and c_n → 0 as n → ∞.
In this case the limit set can be viewed as the graph of a measurable function over the interval [0, 1].

Remark. In Example 1, if we change μ to the (p, 1 - p)-Bernoulli measure, then as p → 0 the upper bound becomes larger than 2, and thus gives no useful information. On the other hand, if p → 1 then the upper bound converges to 0. In the case of Example 2, the system only contracts on average if p < log 2/log 3. As p → 0 the upper bound converges to 0.
Example 3. Let f_1, f_2 : R^2 → R^2 be defined by

f_1(x, y) = (0.3x + 0.2y, 0.2x + 0.3y),
f_2(x, y) = (1.2x + 0.2y + 1, 0.1x + 1.2y + 1).

Let μ on the shift space be the (1/2, 1/2)-Bernoulli measure and let ν be the natural projection. In many cases calculating the Lyapunov exponents can be an extremely difficult problem, but the upper bound in Lemma 2 remains easier to calculate. By taking iterates of the function it is possible to improve this estimate, and eventually the values will converge to that given in Theorem 1. For this example, we give below the upper bounds s_n given by the argument following Lemma 2 for different values of n.

5. Final remarks. The inequality in Theorem 1 is reminiscent of the famous Kaplan-Yorke Conjecture, relating the Lyapunov dimension and the Hausdorff dimension of attractors for invertible maps. Let λ_1 ≥ λ_2 ≥ ··· ≥ λ_k be the Lyapunov exponents for a C^2 map and an ergodic measure ν. There is an inequality

dim_H(ν) ≤ j + (λ_1 + ··· + λ_j)/|λ_{j+1}|,

where j is the largest integer such that λ_1 + ··· + λ_j ≥ 0.
The conjecture of Kaplan-Yorke is that there is equality for generic maps. In the case of surfaces with λ_1 > 0 > λ_2 it was shown by Young [10] that

dim_H(ν) = h(ν)(1/λ_1 - 1/λ_2).

In particular, when ν is an SRB-measure we have h(ν) = λ_1 and the Kaplan-Yorke conjecture holds. On the other hand, there are other examples where the equality fails. In principle, one could expect to use the above results to derive corresponding results for iterated function schemes satisfying the open set condition. However, for iterated function schemes which only contract on average, the open set condition typically fails and these results cannot be applied.
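As a closing illustration, since the maps in Example 3 are affine, their differentials are the constant matrices of their linear parts, and the quantities in Theorem 1 can be estimated numerically. The sketch below (our own computation, not the paper's table of values) estimates λ_1, λ_2 for Example 3 by QR-renormalised random products and then locates the bound by solving h(μ) + f^s(μ) = 0 by bisection, using Lemma 3's formula for f^s(μ):

```python
import numpy as np

# Linear parts of f_1 and f_2 from Example 3.
A = [np.array([[0.3, 0.2], [0.2, 0.3]]),
     np.array([[1.2, 0.2], [0.1, 1.2]])]

def lyapunov(mats, n=20000, seed=0):
    """Estimate the Lyapunov exponents of the (1/2, 1/2) random product."""
    rng = np.random.default_rng(seed)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n):
        Q, R = np.linalg.qr(mats[rng.integers(2)] @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / n

def f_s(lam, s):
    """Lemma 3: f^s(mu) = lam_1 + ... + lam_{k-1} + (s-k+1)*lam_k."""
    k = int(np.ceil(s))
    return float(np.sum(lam[:k - 1]) + (s - k + 1) * lam[k - 1])

lam = lyapunov(A)
h = np.log(2.0)  # entropy of the (1/2, 1/2)-Bernoulli measure

# s -> f^s(mu) is strictly decreasing here (both exponents are negative),
# so the bound of Theorem 1 is found by bisection on h(mu) + f^s(mu) = 0.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h + f_s(lam, mid) > 0:
        lo = mid
    else:
        hi = mid
print(lam, hi)  # estimated exponents and the resulting dimension bound
```

The printed bound is a Monte Carlo estimate of the Theorem 1 value to which the sequence s_n of Example 3 converges; it is not taken from the paper.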