Quantitative recurrence and the shrinking target problem for overlapping iterated function systems

In this paper we study quantitative recurrence and the shrinking target problem for dynamical systems coming from overlapping iterated function systems. Such iterated function systems have the important property that a point often has several distinct choices of forward orbit. As is demonstrated in this paper, this non-uniqueness leads to different behaviour to that observed in the traditional setting where every point has a unique forward orbit. We prove several almost sure results on the Lebesgue measure of the set of points satisfying a given recurrence rate, and on the Lebesgue measure of the set of points returning to a shrinking target infinitely often. In certain cases, when the Lebesgue measure is zero, we also obtain Hausdorff dimension bounds. One interesting aspect of our approach is that it allows us to handle targets that are not simply balls, but may have a more exotic geometry.


Dynamical systems with choice
For many dynamical systems one may be interested in, from both a pure and applied perspective, one may encounter situations where it is natural for an element of the state space to have a choice for its forward trajectory. This phenomenon can be observed in the setting of random walks, and similarly in the setting of applied models for real world events that have some in-built randomness. This paper is motivated by the following general question: For such dynamical systems, what extreme behaviours do we observe if instead of considering a single choice of forward trajectory, we consider all forward trajectories?
In this paper we study this question in the context of dynamical systems arising from iterated function systems. The extreme behaviours we are interested come from the shrinking target problem and quantitative recurrence. Before detailing our results we provide the relevant background from these areas and from Fractal Geometry.

Fractal Geometry
Given a finite set of invertible d × d matrices {A i } i∈I satisfying A i < 1 for all i ∈ I, and a finite set of vectors {t i } i∈I each belonging to R d , we can associate the set of contracting maps We call {S i } i∈I an affine iterated function system or IFS for short. Importantly, one can associate to any IFS a unique, non-empty, compact set X satisfying X = i∈I S i (X).
The set X is called the self-affine set or invariant set associated to {S i } i∈I . These X often exhibit fractal like behaviour, and are a well studied family of fractal sets (see [13]). Given an IFS {S i } i∈I , for each i ∈ I we let T i : R d → R d be the the map given by T i (x) = S −1 i (x). It follows from the definition of a self-affine set, that for any x ∈ X there exists i ∈ I such that T i (x) ∈ X. When S i (X) ∩ S j (X) = ∅ for all i = j this choice of T i is unique for all x. This means that under this assumption we have a well defined map T : X → X given by T (x) = T i (x) for x ∈ S i (X). When there exists distinct i and j satisfying S i (X) ∩ S j (X) = ∅, we say that the iterated function system is overlapping. In this case there exists x for which we have a choice of T i satisfying T i (x) ∈ X. In other words, there exists x ∈ X for which we have a choice of forward trajectory. It is these overlapping iterated function systems that will be the main focus of this paper, and for which we will consider our motivating question.
We finish this discussion on iterated function systems, and in particular on overlapping iterated functions systems, by emphasising that the study of these objects is currently a very active area of research. We refer the reader to the articles [21,22,34,35,39,40] for recent advances in the study of overlapping iterated function systems and their associated self-affine sets.

Shrinking targets
The general framework for shrinking target problems in R d is the following: Let T : X → X be a continuous map defined on some Borel set X ⊂ R d . Given a sequence of points x = (x n ) ∞ n=1 ∈ X N and (E n ) ∞ n=1 a sequence of Borel subsets of R d , we associate the set W (x, (E n )) := {x ∈ X : T n (x) ∈ x n + E n for i.m. n ∈ N}.
Here and throughout we use i.m. as a shorthand for infinitely many. Often (E n ) is taken to be a nested sequence of sets containing the origin, and x is a constant sequence. As such the study of the sets W (x, (E n )) is commonly known as the shrinking target problem. Typically one is interested in establishing measure-theoretic and dimension results for the sets W (x, (E n )). If one were to equip X with a Borel probability measure µ, one can try to determine µ(W (x, (E n ))). If ∞ n=1 µ({x : T n (x) ∈ x n + E n }) < ∞, then it is a simple consequence of the Borel Cantelli lemma that µ(W (x, (E n ))) = 0. If ∞ n=1 µ({x : T n (x) ∈ x n + E n }) = ∞, then determining µ(W (x, (E n ))) is a much more challenging problem. Generally speaking, if the dynamical system T : X → X is mixing sufficiently quickly with respect to µ, then ∞ n=1 µ({x : T n (x) ∈ x n + E n }) = ∞ implies µ(W (x, (E n ))) = 1. Numerous results exist verifying this principle, see for instance [12,33]. For more background on shrinking target problems we refer the reader to [8,12,19,20,27,28,29] and the references therein.
Our framework for studying shrinking targets for overlapping iterated function systems is the following: Given an IFS S = {S i } i∈I with self-affine set X, a sequence x ∈ X N , and a sequence of Borel sets (E n ), we let W (S, x, (E n )) := x ∈ X : (T i N • · · · • T i 1 )(x) ∈ x N + E N for i.m. (i 1 , . . . , i N ) ∈ ∞ n=1 I n .
Recall that to define an IFS we need a finite set of contracting matrices A = {A i } i∈I and a finite set of translation vectors {t i } i∈I . Each S i then satisfies S i (x) = A i x + t i . As such given an IFS S = {S i } i∈I with corresponding set of matrices A = {A i } i∈I , we can define (1.1) We will often suppress the dependence of λ(A) on S from our notation. For the family of IFSs we will be interested in, the parameter λ(A) will determine whether X typically has positive Lebesgue measure. In the context of shrinking targets, λ(A) will determine the fastest rate 1 at which the Lebesgue measure of E n can converge to zero, yet we could still hope for W (S, x, (E n )) to have positive Lebesgue measure (see Theorem 2.1, Theorem 2.3, and Theorem 4.3). We will introduce some additional notation in the special case when (E n ) is a sequence of balls centred at the the origin: Given an IFS S, x ∈ X N , and h : N → [0, ∞), we associate the set We denote this set by W (S, x, h). In the special case when x = (y) ∞ n=1 is a constant sequence, we adopt the simpler notation W (S, y, (E n )) for W (S, x, (E n )), and W (S, y, h) for W (S, x, h).

Recurrence
The notion of recurrence is fundamental in dynamical systems. If T : X → X is a measure preserving transformation of a probability space (X, B, µ), the famous Poincaré recurrence theorem states that for any set A ∈ B satisfying µ(A) > 0, we have that µ-almost every x ∈ A satisfies T n (x) ∈ A for infinitely many n ∈ N (see [42] for a proof of this statement). Under relatively weak assumptions this measure-theoretic statement can be seen to imply a metric one. If X is equipped with a metric d so that (X, d) is separable and B is the Borel σ-algebra, then Poincaré's recurrence theorem implies that for µ-almost every x ∈ X we have lim inf n→∞ d(T n (x), x) = 0.
It is natural to wonder whether this metric statement can be further improved upon and replaced with something more quantitative. With this goal in mind the following framework is natural: Let X and T be as above. Given a function ψ : N → [0, ∞), we define the set of points that return at the rate ψ : R(ψ) := {x ∈ X : d(T n (x), x) ≤ ψ(n) for i.m. n ∈ N}.
Just as for shrinking target sets, we are typically interested in establishing measure-theoretic and dimension results for the sets R(ψ). The first result in this direction is due to Boshernitzan [10]. He proved that if (X, d) is a separable metric space, µ is a Borel probability measure, and T : X → X is a measure preserving transformation, then if the α-dimensional Hausdorff measure is σ-finite on (X, d), for µ-almost every x ∈ X we have lim inf n→∞ n 1/α d(T n (x), x) < ∞.
Moreover, if we also assume that H α (X) = 0, then Boshernitzan proved that for µ-almost every x ∈ X we have lim inf n→∞ n 1/α d(T n (x), x) = 0.
The µ-almost everywhere rate of recurrence Boshernitzan's result provides only depends upon the metric properties of the set X. However, it is natural to expect that the measure µ would also influence the recurrence behaviour of a µ-typical point. This issue was addressed in a paper of Barreira and Saussol [6]. They proved that if T : X → X is a Borel measurable map defined on X ⊂ R d , and µ is a T -invariant probability measure, then for µ-almost every x ∈ X we have lim inf n→∞ n 1/α d(T n (x), x) = 0 for any α > lim inf r→0 log µ(B(x, r)) log r .
More recently, a number of papers have appeared that bring the quantitative recurrence theory more closely in line with the shrinking target theory. In particular, it has been shown that µ(R(ψ)) is related to the convergence/divergence properties of ∞ n=1 ψ(n) δ for some appropriate δ > 0 (see [5,8,11,23,24,25]). A common assumption appearing in each of these papers was a uniformity assumption on the local behaviour of the measure µ. It turns out that this assumption is essential if one wants µ(R(ψ)) to be related to the convergence/divergence properties of ∞ n=1 ψ(n) δ . In a recent paper together with Allen and Bárány [1], the first author proved that if T : X → X is the left shift on a topologically mixing subshift of finite type and µ is the Gibbs measure for some Hölder continuous potential satisfying a weak non-degeneracy condition, then there exists ψ for which µ(R(ψ)) is not determined by the convergence/divergence properties of ∞ n=1 ψ(n) δ for any δ.
For iterated function systems we study the following recurrence sets. Given an IFS S and a sequence of Borel sets (E n ), we let We also introduce the following more specific framework when our recurrence neighbourhoods are balls. Given an IFS S and h : N → [0, ∞), we let Just as for the shrinking target problem, the parameter λ(A) plays an important role in the study of quantitative recurrence. On the exponential scale it describes the best rate of recurrence we could hope to observe for a Lebesgue typical point (see Theorem 2.1 and Theorem 2.3).

Statements of results
We begin this section by introducing some more notation relating to iterated function systems. Given a finite set of invertible d × d matrices A = {A i } i∈I satisfying A i < 1 for all i ∈ I, one can associate a parameterised family of iterated function systems in the following way. To each T = (t 1 , . . . , t I ) ∈ R #I·d we associate the IFS S T = {S i,T (x) = A i x + t i } i∈I . We also let T i,T = S −1 i,T for each i ∈ I and T . We denote the corresponding self-affine set by X T . To each T ∈ R #I·d we associate a surjective projection map π T : I N → X T given by When we equip I N with the product topology then π T is also continuous.
This parameterised family of IFSs was originally studied by Falconer [14]. Under the assumption that A i < 1/3 for all i ∈ I, he showed that for Lebesgue almost every T the Hausdorff dimension of X T is equal to the affinity dimension of A. See [14] for the definition of the affinity dimension. Similarly, he proved that when the parameter λ(A) from (1.1) is strictly greater than 1, then X T has positive Lebesgue measure for Lebesgue almost every T . Falconer's results were extended by Solomyak in [36] to the case where the set of matrices satisfies the weaker assumption that A i < 1/2 for all i ∈ I. For a set of matrices with λ(A) > 1, so that X T typically has positive Lebesgue measure, it is natural to then ask whether X T also has interior points. Not much is known about this, but very recently, sufficient conditions for X T to almost surely have non-empty interior were given in [15].
We are interested in proving results on the Lebesgue measure of shrinking target sets and quantitative recurrence sets for S T that hold for Lebesgue almost every T . Proving statements that only hold for Lebesgue almost every T might at first appear to be unnatural. Particularly when compared to the traditional shrinking target framework where every point in the domain has a unique forward orbit. However, Theorem 4.1 and the discussion after its proof demonstrate that such a restriction is entirely necessary. Often there exists a dense set of T for which the corresponding shrinking target sets and quantitative recurrence sets exhibit different behaviour to that observed for a Lebesgue typical T .
To prove positive measure results for W (S T , x, h) and R(S T , h) it is necessary for us to impose the following additional assumptions on the function h. Given B ⊂ N we define the upper density of B to be Given ǫ > 0 we let We then define With all of this terminology established we can now state our results.
Theorem 2.1. Let A = {A i } i∈I be a finite set of invertible d×d matrices satisfying the following properties: • There exists a positive diagonal matrix A such that A i = A for all i ∈ I.
Then the following statements are true: 1. For Lebesgue almost every T ∈ R #I·d , for any x ∈ X N T and h ∈ H we have that W (S T , x, h) has positive Lebesgue measure.
2. For Lebesgue almost every T ∈ R #I·d , for any x ∈ X T and h ∈ H that is decaying regularly and decreasing, we have that Lebesgue almost every y ∈ X T is contained in W (S T , x, h).

Let
U be an open subset of R #I·d and i ∈ I N . Assume that π T (i) ∈ int(X T ) for Lebesgue almost every T ∈ U . Then for Lebesgue almost every T ∈ U , for any h ∈ H that is decaying regularly and decreasing, we have that Lebesgue almost every x ∈ X T is contained in 4. For Lebesgue almost every T ∈ R #I·d , for any h ∈ H we that R(S T , h) has positive Lebesgue measure.
We emphasise that in our second and third statements we are only considering targets centred at a single point. Moreover, the third statement only applies when int(X T ) = ∅ for Lebesgue almost every T belonging to the open subset U . The significance of our third statement is that it gives sufficient conditions so that infinitely many targets are hit if we only consider the orbit arising from a single sequence j ∈ I N .
For a general set of matrices A satisfying A i < 1/2 for all i ∈ I, and for a general sequence of Borel sets (E n ), we are not able to prove the appropriate analogue of Theorem 2.1. However, if we weaken our expectations and restrict to sets (E n ) with Lebesgue measure satisfying L d (E n ) = λ(A) −n , then we are able to prove a positive result. Importantly, this result also holds for a more general set of matrices. This is the content of Theorem 2.3 below. In the statement of this theorem we will make use of the following notation. Given C > 0 we let W (S, x, C) and R(S, C) denote the shrinking target set and recurrent set corresponding to the constant function h(n) = C. Before stating Theorem 2.3 we define a useful notion that will be used in the proofs of Theorem 2.1 and Theorem 2.3. This notion will allow us to upgrade certain positive measure statements to full measure statements.
We call a family of open sets V for which the following properties hold a density basis for a Borel set X: • For all x ∈ X there are arbitrarily small V ∈ V containing x.
• For any Borel set A ⊂ X we have By V → x we mean that diam(V ∪ {x}) → 0. Let X be the self-affine set of an iterated function system. Assume that L d (X) > 0. We say that the self-affine set X is differentiation regular if there exists a density basis V for X and a constant η > 0, such that for every x ∈ X there exists a sequence {V j (x)} in V for which the following properties are satisfied: In [35] Shmerkin proved the following statement.
Lemma 2.2. Let {S i } i∈I be an iterated function system with self-affine set X. Assume that L d (X) > 0 and that one of the following properties are satisfied: 1. d = 2 and all the matrices {A i } are equal.
2. All the matrices A i are simultaneously diagonalisable.
3. There is a finite set W ⊂ R d of at least d linearly independent elements such that Then X is differentiation regular.
Equipped with the notion of differentiation regular we can now state Theorem 2.3.
Theorem 2.3. Let A = {A i } i∈I be a finite set of invertible d×d matrices satisfying the following properties: Then the following statements are true: 1. Let (i n,m ) (n,m)∈N×N ∈ I N×N be the sequence of centres for the shrinking targets, and (E n ) be a sequence of Borel sets satisfying the following properties: • There exists Q > 0 such that E n ⊂ [−Q, Q] d for all n ∈ N.
• L d (E n ) = λ(A) −n for all n.
• For all s ∈ (0, 1) and n ∈ N we have s · E n ⊂ E n .
• There exists C > 0 such that for each n ∈ N there exists r n > 0 satisfying Then for Lebesgue almost every T ∈ R #I·d , we have Moreover, for Lebesgue almost every T ∈ R #I·d , the set W (S T , (π T ((i n,m ) ∞ m=1 )) ∞ n=1 , (E n )) has positive Lebesgue measure.
2. Let i ∈ I N and assume that X T is differentiation regular for Lebesgue almost every T ∈ R #I·d . Then for Lebesgue almost every T ∈ R #I·d , Lebesgue almost every x ∈ X T is contained in the set C>0 W (S T , π T (i), C).
3. Let (E n ) be a sequence of Borel sets satisfying the following properties: • There exists Q > 0 such that E n ⊂ [−Q, Q] d for all n ∈ N.
• L d (E n ) = λ(A) −n for all n.
• For all s ∈ (0, 1) and n ∈ N we have s · E n ⊂ E n .
• There exists C > 0 such that for each n ∈ N there exists r n > 0 satisfying Then for Lebesgue almost every T ∈ R #I·d , we have Moreover, for Lebesgue almost every T ∈ R #I·d , the set R(S T , (E n )) has positive Lebesgue measure.
Lemma 2.2 lists a number of conditions on the iterated function system under which X T is differentiation regular, and the statement 2. of the above Theorem holds.
As previously mentioned, λ(A) −n is the fast rate at which the Lebesgue measure of E n can converge to zero, yet we could hope for W (S, x, (E n )) to have positive Lebesgue measure. If one considers a faster rate, it is natural to wonder whether one can obtain Hausdorff dimension results instead. With this goal in mind we introduce the following framework. Given an IFS S, x ∈ X N and s > 1, we let We denote this set by W s (S, x). When x is a constant sequence, i.e. x = (y) ∞ n=1 , then we use W s (S, y) to denote W s (S, x). As a corollary of the proof techniques of Theorem 2.3, and a Mass Transference Principle of Wang and Wu [41], we will prove the following result on the almost sure dimension of W s (S, x).. Theorem 2.4. Let A = {A i } i∈I be a finite set of invertible d×d matrices satisfying the following properties: • There exists a positive diagonal matrix A such that A i = A for all i ∈ I, with diagonal entries λ 1 , . . . , λ d .
• There exists an open set U ⊂ R #I·d such that X T has non-empty interior for Lebesgue almost every T ∈ U .
Then for any j ∈ I N , for Lebesgue almost every T ∈ U, for any ball B centred at X T we have It follows from a result of Feng and Feng [15] that if we replace our assumption λ(A) > 1 with the stronger assumption #I · |Det(A)| 2 > 1, then our fourth assumption is immediately satisfied and we can in fact take U = R #I·d .
In Theorems 2.1 and 2.3 our statements for recurrence sets only establish positive Lebesgue measure. It is natural to expect that these recurrence sets in fact have full Lebesgue measure. Motivated by this shortcoming, we introduce a family of IFSs for which we can establish this stronger full measure statement.
Given λ ∈ (1/2, 1) we associate the IFS S λ = {S 0 (x) = λx, S 1 (x) = λx + 1}. For each S λ the corresponding invariant set equals [0, 1 1−λ ]. For each λ ∈ (1/2, 1) we let µ λ denote the law of the random variable ∞ i=0 ǫ i λ i where each ǫ i takes values 0 and 1 with equal probability. The probability measure µ λ is known as the Bernoulli convolution corresponding to λ. Determining the dimension of µ λ , and determining those λ for which the corresponding Bernoulli convolution is absolutely continuous are two important problems that have attracted much attention. We refer the reader to [21,22,34,37,39,40] for a more detailed survey of Bernoulli convolutions and for an overview of some recent results.
We call a number β ∈ (1, 2) a Garsia number if it is an algebraic integer with norm ±2, whose Galois conjugates are all of modulus strictly greater than 1. This family of algebraic integers was first introduced in [16], where it was shown that if λ is the reciprocal of a Garsia number then µ λ is absolutely continuous with bounded density. Examples of Garsia numbers include 2 1/n for n ≥ 1, and 1.08162 . . . the appropriate root of x 6 + x 5 − x − 2 = 0. For more on Garsia numbers we refer the reader to the survey [17] by Hare and Panju. Our main result for this family of IFSs is the following. The corresponding shrinking target analogue of Theorem 2.5 was obtained in [2]. We emphasise that our method of proof for Theorem 2.5 is different to that given in [2].
Using the mass transference principle of Beresnevich and Velani [9], we will use Theorem 2.5 to prove the following result which applies when R(S λ , h) has zero Lebesgue measure. Theorem 2.6. Let λ ∈ (1/2, 1) be such that λ −1 is a Garsia number and h : N → [0, ∞) be bounded. Then for any ball B contained in [0, Remark 2.7. It is interesting to compare Theorems 2.1, 2.3, and 2.5 with existing results on the shrinking target problem and quantitative recurrence. The most significant difference between our work and the existing body of work is that we establish positive measure results when our target sets/recurrence neighbourhoods shrink to zero exponentially fast. This is perhaps not surprising as we consider multiple forward orbits. Nevertheless, it is a significant change to the case of a single orbit where for a shrinking target set or a recurrence set to have positive Lebesgue measure, the target sets/recurrence neighbourhoods must typically shrink to zero at a polynomial rate.
Remark 2.8. We finish this introductory section by drawing a comparison between this paper and [4]. In [4] the first author studied the following family of limsup sets: Given S an IFS, z ∈ X, and (B n ) a sequence of balls, we let In [4] the first author studied the sets Q(S, z, (B n )) for the parameterised family of affine iterated function systems appearing in Theorems 2.1 and 2.3. In the special case where each matrix defining the family is a similarity, i.e. satisfies Ax − Ay = r x − y for all x, y ∈ R d for some r ∈ (0, 1), then the methods used in [4] can be used to prove weak versions of Theorems 2.1 and 2.3. On a technical level, what makes the analysis of this paper particularly challenging when compared to [4], is that instead of working with limsup sets defined by a sequence of balls, the limsup sets we eventually work with are defined using ellipses or more exotic shapes. Because of this potentially more complicated geometry, the arguments from [4] do not apply and new ideas are required.
The rest of the paper is structured as follows. In Section 3 we establish some notation and collect some technical results that we will use in the proofs of our theorems. In Section 4 we prove a number of straightforward theorems that demonstrate how a recurrence set or a shrinking target set can have zero Lebesgue measure. These theorems highlight some of the technical obstacles that need to be overcome to prove our results. In Section 5 we prove Theorem 2.1 and in Section 6 we prove Theorem 2.3. In Section 7 we prove Theorem 2.4 and in Section 8 we prove Theorems 2.5 and 2.6.

Preliminaries
In this section we introduce some notation and collect some technical results that we will use in the proofs of our main theorems.

Notation
Suppose that an IFS {S i } i∈I is given. We let I * = ∞ n=1 I n . For a word i = (i 1 , . . . , i n ) ∈ I * we let S i = S i 1 • · · · • S in and T i = T i 1 • · · · • T in . Similarly, given a finite set of matrices {A i } i∈I and i = (i 1 , . . . , i n ) ∈ I * , we let A i = A i 1 • · · · • A in . We denote the length of a word i ∈ I * by |i|. Given two distinct words i, j ∈ I n we let |i ∧ j| = min{k : i k = j k }, and let ij denote their concatenation. We also let i ∞ denote the element of I N corresponding to the infinite concatenation of i with itself. For i = (i 1 , . . . , i n ) ∈ I * we let i = (i n , . . . , i 1 ). We emphasise that for all x ∈ X. We will also on occasion write f = O(g) which will have the same meaning as f ≪ g. We write f ≍ g if f ≪ g and g ≪ f . When we want to emphasise a dependence of the underlying constant on some other parameter, We let L d denote the Lebesgue measure on d-dimensional Euclidean space.

Technical results
The following standard lemma is due to Kochen and Stone [26]. For a proof of this lemma see either [18,Lemma 2.3] or [38, Lemma 5].
Lemma 3.1. Let (X, B, µ) be a finite measure space and E n ∈ B be a sequence of sets such that ∞ n=1 µ(E n ) = ∞. Then .

Given a Borel set
The following result, known as the Lebesgue density theorem, will play an important role in our analysis. For a proof see [30].
In particular, Lebesgue almost every x ∈ E is a density point for E.
The following lemma is one of Bonferroni's inequalities. We include a proof for the sake of completeness.
Proof. We will show that for any x ∈ X we have Using this inequality and integrating with respect to µ yields our result. If x / ∈ ∪ n k=1 E k then (3.1) holds trivially. Now suppose that x ∈ ∪ n k=1 E k . Let j ≥ 1 be such that n k=1 χ E k (x) = j. After relabelling our sets we may assume that x ∈ E 1 , x ∈ E 2 , . . . , x ∈ E j , and x / ∈ ∪ n k=j+1 E k . Then we have Using this inequality, we observe Therefore (3.1) holds and our proof is complete.
The following technical framework is adapted from [4]. Let Ω denote a metric space equipped with a Borel measure η, and let X denote some compact subset of R d . For each n ∈ N, we assume that we are given a Borel set E n and a finite set of continuous functions {f l,n : Ω → X} Rn l=1 . We are interested in the distribution of the elements of {f l,n (ω) + s · E n } Rn l=1 for a η-typical ω and for Given ω ∈ Ω, s > 0, and n ∈ N we let We also let The following lemma follows from Lemmas 3.2 and 3.5 from [4]. In these lemmas the sets E n were always taken to be balls, but the proofs still apply with only minor notational changes in the more general setting where the sets E n are only assumed to be Borel sets.
Then we have the following information about the upper density Finally we need the following estimate, a transversality condition in the self-affine context.
Let i, j ∈ I N be such that i = j and let R > 0 be arbitrary. Then for any Borel set E ⊂ R d we have Here and throughout, if |i ∧ j| = 1 then A i 1 ,...i |i∧j|−1 is simply the identity matrix.
Proof. This statement essentially follows from an argument due to Solomyak [36], which is in turn based upon an argument due to Falconer [14]. We include the details for the sake of completeness. Let i = j ∈ I N . We start by observing that Which by the definition of |i ∧ j| can be rewritten as Which in turn we can express as Solomyak in [36] proved that under the assumptions of this lemma, there exists C > 0 independent of i and j such that either ( The other case is handled similarly. It follows from the above that π T (i) − π T (j) ∈ E is equivalent to which in turn is equivalent to Let us now fix a set of vectors {t i } i∈I\{i |i∧j| } and consider t i |i∧j| as a variable. Using the fact that we see that (3.2) implies that Now applying Fubini's theorem, it follows that

Basic results
Before moving on to the proofs of our main theorems, we explore the ways in which the conclusions of these theorems can fail. The proofs of the following statements serve as a warm up for what is to follow.
Theorem 4.1. Let S be an iterated function system. Suppose that there exists j ′ , j ∈ I * such that j ′ = j and S j ′ = S j . Then for any x ∈ X N and sequence of Borel sets satisfying L d (E n ) ≪ λ(A) −n , the set W (S, x, (E n )) has zero Lebesgue measure. Similarly, the set R(S, (E n )) has zero Lebesgue measure.
Proof. Let j ′ , j ∈ I * be such that j ′ = j and S j ′ = S j . By considering S j ′ j and S jj ′ if necessary, we may assume that j ′ and j have the same length. Let us suppose |j ′ | = |j| = k for some k ∈ N.
Then we have by the definition (1.1) of λ(A) that Now let x = (x n ) ∈ X N be arbitrary, and let (E n ) be an arbitrary sequence of Borel sets satisfying We start by proving that W (S, x, (E n )) has zero Lebesgue measure. Observe that for any n > k we have The second sum in this product is equal to 1 by the definition (1.1) of λ(A). Continuing from here, using the definition of γ we have We know that γ ∈ (0, 1), hence it follows from the estimate above that Applying the Borel Cantelli lemma our result follows. Now we bring our attention to proving the recurrence result. Again we assume that (E n ) is a sequence of Borel sets satisfying for any i ∈ I * . Using this identity, we observe the following for any i ∈ I n : for any Borel set E. Using (4.1), together with our assumption L d (E n ) ≪ λ(A) −n , it follows that for any i ∈ I n we have We now observe that for any n > k we have Using this equality and (4.2), the rest of the proof follows from an analogous argument to that used to prove that W (S, x, (E n )) has zero Lebesgue measure.
Theorem 4.1 is straightforward but it exhibits one of the main difficulties in proving our theorems. For many choices of {A i } i∈I , there is a dense set of T ∈ R #I·d such that the corresponding IFS S T admits two distinct words i, j ∈ I * satisfying S i,T = S j,T . This means that there is a dense set of exceptions for which the conclusions of Theorems 2.1 and 2.3 do not hold. This set of exceptions is what makes our analysis challenging. The following statement shows that the presence of two distinct words satisfying S i = S j is not the only mechanism preventing positive measure from occurring. Theorem 4.2. For each 1 ≤ l ≤ d, let S l = {S i } i∈I l be an IFS acting on R satisfying the following properties: • There exists λ l ∈ (0, 1) such that for each i ∈ I l we have S i (x) = λ l x + t i for some t i ∈ R.
Let S be the product IFS acting on R d given by Proof. We begin our proof by remarking that for the product IFS S the self-affine set is [0, 1] d . We also remark that for this IFS we have Without loss of generality, we can assume that #I 1 ·λ 1 = min 1≤l≤d {#I l ·λ l } and that #I 1 ·λ 1 < #I 2 · λ 2 . These statements imply that Here π 1 : R d → R is the projection map to the first coordinate. It follows from the above that Using this inequality, together with (4.3), we obtain The fact L d (W (S, (x n ), 1)) = 0 now follows from the Borel-Cantelli lemma. The proof that L d (R(S, 1))) = 0 follows by a similar application of the Borel-Cantelli lemma. We omit the proof of this statement.
The following statement shows that in the statement of Theorem 2.1, it is absolutely necessary to include a divergence assumption on the function h. Theorem 4.3. Let S be an iterated function system and assume that h : N → [0, ∞) satisfies ∞ n=1 h(n) < ∞. Then W (S, x, h) has zero Lebesgue measure for any x ∈ X N , and R(S, h) has zero Lebesgue measure.
We will only prove that W (S, x, h) has zero Lebesgue measure for any x ∈ X N . The proof that R(S, h) has zero Lebesgue measure follows using the same arguments used in the proof of Theorem 4.1. Let Therefore, by our assumption on h we have The result now follows by the Borel Cantelli lemma.

Proof of Theorem 2.1
The first step in the proof of Theorem 2.1 is to obtain information about the distribution of the ellipses for small values of s and for a typical T . The following lemma provides that information. It is convenient at this point to introduce some additional notation. Suppose a set of matrices A = {A i } and a vector T ∈ R #I·d are given, then for each i ∈ I N , s > 0 and n ∈ N we let The following lemma is based upon an argument due to Benjamini and Solomyak [7], which is in turn based upon an argument due to Peres and Solomyak [32]. Then for any R > 0, n ∈ N, s > 0 and i ∈ I N , we have Proof. Let A be such that A i = A for all i ∈ I. Such a matrix exists by our underlying assumptions. We start our proof by remarking that for any n ∈ N and s > 0, there exists an ellipse such that for any i ∈ I N and j ∈ I n , we have We also remark that It follows from (5.1) that for two words j, k ∈ I n , we have Because E n is convex and symmetric around the origin, we have E n − E n = 2E n . Therefore We now rewrite our integral as follows In the final line we used the fact that S j, We remark that when A i = A for all i ∈ I, then our formula for λ(A) can be simplified to In the penultimate line we used our assumption λ(A) > 1 to conclude that ∞ k=1 λ(A) −k = O(1). In other words, our desired conclusion holds for the specific choice of sequence (π T ((i ′ n,m ) m )) n . To complete our proof, we now need to show that for any T ∈ [−R, R] #I·d for which our desired conclusion holds for the specific sequence (π T ((i ′ n,m ) m )) n , the same conclusion simultaneously holds for any sequence (x n ) ∈ X N T for the same choice of parameters. This fact follows from the simple observation that for any n ∈ N and i ∈ I n , we have

Proof
. Crucially the right hand side of the above does not depend upon i. This observation implies that for each n ∈ N, T ∈ R #I·d , and x n ∈ X T , the sets {S i,T (x n )} i∈I n and {S i,T (π T ((i ′ n,m ) ∞ m=1 ))} i∈I n are translates of each other. Therefore for any n ∈ N and s > 0. Therefore our desired conclusion holding for the specific sequence (π T ((i ′ n,m ) m )) n immediately implies the same conclusion for all (x n ) ∈ X N T for the same choice of parameters. This completes the proof.
With Lemma 5.2 we can now prove Theorem 2.1.1.
Proof of Theorem 2.1.1. To prove our statement, it suffices to show that the desired conclusion holds for Lebesgue almost every T ∈ [−R, R] #I·d where R > 0 is arbitrary. In what follows R will be fixed. Now let T belong to the full measure set of parameters for which the conclusion of Lemma 5.2 is satisfied. Let x = (x n ) ∈ X N T and h ∈ H be arbitrary. We now set out to prove that W (S T , x, h) has positive Lebesgue measure.
It follows from the definition of H (see (2.1)) and Lemma 5.2, that there exists c, s > 0 such that if we let We now fix such a c and s. For each n ∈ N we let W n ⊂ I n be a set of words satisfying #W n = T (n) and Instead of working directly with Euclidean balls it is more convenient to work with balls defined using the supremum norm. As such, replacing s with a potentially smaller constant if necessary, we may assume that for each n ∈ N we have that It follows immediately from the definition of h ′ that for n ∈ N and i ∈ W n , we have Moreover, by the definition of W n , we also know that for distinct i and j in W n we have It also follows from the definition that n∈N h ′ (n) = ∞. Again motivated by a desire to work with balls defined with respect to the supremum norm rather than the Euclidean norm, we defineh : N → [0, ∞) according to the ruleh(n) = h ′ (n)/d. Let us denote We observe that for all n ∈ N. Crucially, since n∈N h ′ (n) = ∞ we have For each n ∈ N we let It follows from (5.6) and (5.7) that the union defining Z n is disjoint. To prove our result, we will study the set It follows from (5.5) and (5.7) that Therefore to prove our result, it suffices to show that Z ∞ has positive Lebesgue measure. To do this we will use Lemma 3.1. We start by proving that the divergence assumption of Lemma 3.1 is satisfied. The union defining Z n is disjoint and hence for each n ∈ N we have The first ≍ relation follows as #W n ≍ #I n , and the second uses that λ(A) = #I · |Det(A)| when A i = A for all i ∈ I. It follows now from (5.8) and (5.9) that n∈N L d (Z n ) = ∞. Hence we satisfy the first criterion of Lemma 3.1. It remains now to obtain meaningful bounds for L d (Z n ∩ Z n ′ ) for distinct n, n ′ ∈ N . This we do below.
We begin by remarking that since A is a positive diagonal matrix, there exists λ 1 , λ 2 , . . . , λ d ∈ (0, 1) such that For each n ∈ N and i ∈ W n , we have We also let Importantly we have E n ⊂ E ′ n for all n ∈ N. It also follows from (5.4) that for n ∈ N and distinct i, j ∈ W n , we have We now set out to bound L d (Z n ∩ Z n ′ ) for n ′ > n. We split our analysis into two cases. Without loss of generality we may assume that λ 1 = max 1≤i≤d {λ i }.
Put more informally, (5.10) means that the rectangle E ′ n ′ is wider in the first coordinate than E n .
If n < n ′ ≤ n + ⌊ 1/d logh(n)−log s log λ 1 /λ(A) 1/d ⌋ so (5.10) holds, then for each i ∈ W n if j ∈ W n ′ is such that Now using the fact that the elements of the set {S j,T (x n ′ ) + E ′ n ′ } j∈W n ′ are disjoint, it follows from a volume argument that Using (5.11), we now see that To continue, note that #W n ≍ (#I) n . Using this fact together with the identity λ(A) = #I · |Det(A)|, we see that the above satisfies  In the penultimate inequality we used that λ(A) (n ′ −n)(#J−d)/d ≤ λ(A) −(n ′ −n)/d for any J ⊂ {2, . . . , d} such that J = ∅. In the final inequality we used that J⊂{2,...,d},J =∅ 1 = O(1). Summarising the above, we have shown that if n < n ′ ≤ n + ⌊ 1/d log h(n)−log s log λ 1 /λ(A) 1/d ⌋, then for all 1 ≤ i ≤ d. Now by a similar argument to that given in Case 1, we have that if i ∈ W n and j ∈ W n ′ are such that Therefore by a volume argument, we have that for any i ∈ W n # j ∈ W n ′ : Recalling the definition of H(n), we can continue from this estimate and obtain Using this bound, it follows that for n ′ > n + ⌊ 1/d log h(n)−log s log λ 1 /λ(A) 1/d ⌋ we have We are now ready to apply the estimates from Cases 1 and 2. Combining (5.12) with (5.14), we see that for any n ′ > n we have Using this bound and Lemma 3.1, we obtain Therefore L d (Z ∞ ) > 0. This completes our proof.

Proofs of Theorem 2.1.2 and Theorem 2.1.3
We now bring our attention to the proofs of statements 2. and 3. from Theorem 2.1. We begin by proving a number of technical statements. Our first step is the following variant of the well known 3r covering lemma.
is decreasing for each 1 ≤ i ≤ d. Then for any sequence of points (y n ) ∞ n=1 ⊂ R d we have that there exists M ⊂ N satisfying: Proof. We define M inductively. Let M 1 = {1}. Then for any n ≥ 1 such that (y n + E n ) ∩ (y 1 + E 1 ) = ∅ we have (y n +E n ) ⊆ (y 1 +3E 1 ). This follows from the assumption that (δ i,n ) is decreasing for each i. Now let k ∈ N. Suppose that M k = {m i } k i=1 has been constructed and satisfies the following: for any n ∈ N such that (y If this is not the case we let We then define M k+1 := M k ∪ {m k+1 }. By definition, y m k+1 + E m k+1 is disjoint from all the y m i + E m i with i ≤ k. Now suppose n ≥ 1 is such that (y n + E n ) ∩ k+1 i=1 (y m i + E m i ) = ∅. If n satisfies (y n + E n ) ∩ k i=1 (y m i + E m i ) = ∅ then (y n + E n ) ⊆ k i=1 (y m i + 3E m i ) by our inductive hypothesis. If n satisfies (y n + E n ) ∩ k i=1 (y m i + E m i ) = ∅ and (y n + E n ) ∩ (y m k+1 + E m k+1 ) = ∅, then n ≥ m k+1 by the definition of m k+1 . Now using the fact that each (δ i,n ) is decreasing we have that (y n + E n ) ⊂ (y m k+1 + 3E m k+1 ). It is clear therefore that if n is such that (y . Repeating these steps we see that either this process eventually terminates and we can define M = M k for some k ∈ N, or this process continues indefinitely and we can define M = ∪ ∞ k=1 M k . In either case it is clear that M satisfies both 1. and 2.
The following lemma demonstrates that under appropriate conditions, the Lebesgue measure of a shrinking target set defined by balls does not change if we multiply the radii of these balls by an arbitrarily small positive constant.
Proof. We may assume that L d (W (S, (x n ) ∞ n=1 , h)) > 0. Otherwise our result follows trivially. For the purposes of obtaining a contradiction, suppose our desired equality does not hold. In that case there exists 0 < c ≤ 1 such that This in turn implies that there exists N ∈ N such that Our goal now is to contradict this inequality by showing that has no density points. It will then follow from Theorem 3.2 that this set has zero Lebesgue measure, and as such we will have the desired contradiction. Suppose that z is a density point for this set. Note that z is then automatically a density point for W (S, (x n ) ∞ n=1 , h). As such there exists r ′ > 0 such that for all 0 < r < r ′ . Now, for each i ∈ I * , denote Let r < r ′ be arbitrary and let N ′ > N be sufficiently large that Let By the definition of Ω and N ′ we have Now it follows from (5.20) and the first inclusion in (5.21) that At this point we want to apply Lemma 5.3. However, before we can apply this lemma we need to check that we satisfy the relevant hypothesis. Because each of the contractions S i share the same matrix part, and this matrix is a positive diagonal matrix, there exists λ 1 , . . . , λ d such that for each i ∈ I * we have . Now using the fact that h is decreasing, we see that the set of rectangles can be enumerated so that the hypothesis of Lemma 5.3 is satisfied; that is, the side lengths of the rectangles are all simultaneously decreasing. As such we can assert that there exists Ω ′ ⊂ Ω, such that for i, i ′ ∈ Ω satisfying i = i ′ we have For any i ∈ Ω ′ we have Now using (5.23), (5.25) and the above we have Now using (5.21) and (5.26), and remembering that N ′ > N , we have Since r < r ′ was arbitrary, this means that z is not a density point for Hence we have obtained the desired contradiction. This completes our proof.
The following lemma shows that under appropriate conditions, if a subset of a self-affine set has positive measure, then almost every element of the self-affine set is contained in some image of this set.
Lemma 5.5. Let S = {S i } i∈I be an IFS with self-affine set X. Assume that L d (X) > 0 and that X is differentiation regular. Then for any Borel set W ⊂ X satisfying L d (W ) > 0 we have Proof. Let V and η be as in the definition of differentiation regular in Section 2. Now let x ∈ X be arbitrary. Let {V j (x)} be a sequence in V whose existence is guaranteed by the definition of differentiation regular. Let us fix a V j (x) and let i ′ be the corresponding word in the definition of differentiation regular satisfying Then we have the following: It follows therefore that for any x ∈ X we have Therefore, by the definition of a density basis, it follows that Lebesgue almost every x ∈ X is contained in i∈I * S i (W ) . This completes our proof.
We are now in a position to prove Theorem 2.1.2.
Proof of Theorem 2.1.2. Let T belong to the full measure set for which the conclusion of Theorem 2.1.1 is true. We will now show that this T also satisfies the conclusion of Theorem 2.1.2, and in doing so complete our proof. Let us now fix x ∈ X T and h ∈ H that is decaying regularly and decreasing. Because of our assumptions on T , we know that L d (W (S T , x, h)) > 0. Since A is a diagonal matrix we know that X is differentiation regular by Lemma 2.2. Therefore, by Lemma 5.4, and Lemma 5.5, we have that Lebesgue almost every element of X T belongs to Therefore to prove the result it suffices to show that if y ∈ B then y ∈ W (S T , x, h). With this goal in mind, we fix y ∈ B arbitrarily. By definition, there exists j ∈ I * and z ∈ 0<c≤1 W (S T , x, c·h) such that y = S j,T (z). Now let γ > 0 be such that for all n ∈ N. Such a γ exists because of our assumption that h is decaying regularly. We let Suppose k ∈ I * is such that Then by the definition of c, we have Since z ∈ 0<c≤1 W (S T , x, c · h) there are infinitely many k such that (5.27) is satisfied. It follows that there are infinitely many k such that (5.28) is satisfied. Therefore y ∈ W (S T , x, h). This completes the proof.
The proof of Theorem 2.1.3 is similar to the proof of Theorem 2.1.2. We include the details for completion.
Proof of Theorem 2.1.3. Let U ⊂ R #I·d and i ∈ I N be as in the statement of Theorem 2.1.3. Duplicating the arguments given in the proof of Theorem 2.1.2, we can show that for Lebesgue almost every T ∈ U , for any h ∈ H that is decaying regularly and decreasing, we have that Lebesgue almost every x ∈ X T is contained in Now let us fix a T ∈ U belonging to the full measure set for which this conclusion is true. By our underlying assumptions, we may also assume that this T satisfies π T (i) ∈ int(X T ). We now show that this T also satisfies the conclusions of Theorem 2.1.3.
Let us now fix h ∈ H that is decaying regularly and decreasing. Since each element of S T maps sets of Lebesgue measure zero to sets of Lebesgue measure zero, our fixed parameter T also satisfies Now using the fact π T (i) ∈ int(X T ), we see that we can replace h with a sufficiently small bounded function if necessary, so that without loss of generality for all n ∈ N we have We will now show that any element of is contained in x : ∃j ∈ I N such that (T jn,T • · · · • T j 1 ,T )(x) ∈ B π T (i), h(n) λ(A) n 1/d for i.m. n ∈ N .
Which by (5.29) will complete our proof.
Let us fix x ∈ C. If x ∈ C then x ∈ B, therefore we can use the argument given in the proof of Theorem 2.1.2 to show that there exists i 1 ∈ I * such that By (5.30) we know that T i 1 ,T (x) ∈ int(X T ). Combining this with the fact x ∈ C and therefore not in T −1 i 1 ,T (X T \ B), it follows that T i 1 ,T (x) ∈ B. Therefore there exists j 1 and y such that Now let γ > 0 be such that h(n + 1) h(n) ≥ γ for all n ∈ N; just as in the proof of Theorem 2.1.2. If we let then it follows from the argument given in the proof of Theorem 2.1.2 that if We let i 2 = kj 1 . Using (5.30) and the fact x ∈ C, we may conclude that T i 2 i 1 ,T (x) ∈ B. As such we can repeat the above argument to assert that there exists a word i 3 ∈ I * such that It is clear that this process can be continued indefinitely, and as such we can define a sequence of words (i p ) N p=1 such that for all p ∈ N we have Our results now follows by taking our desired sequence to be the infinite concatenation of the words {i p } ∞ p=1 , i.e. j = i 1 i 2 i 3 . . .. .

Proof of Theorem 2.1.4
The proof of Theorem 2.1.4 is similar to the proof of Theorem 2.1.1. For this reason we only include an outline. The key technical result which allows us to recast our recurrence set statements into a framework that resembles that used to study shrinking targets sets is the following.
Lemma 5.6. Let S = {S i } i∈I be an IFS and i ∈ I * . Then for any Borel set E ⊂ R d we have Moreover, if we assume that A i < 1/2 for all i ∈ I, then there exists c, C > 0 depending only on max i∈I A i such that and for all i ∈ I * and r > 0.
Proof. Let i = (i 1 , . . . , i n ). We begin by observing the following equivalences: This completes the proof of the first claim in the statement. We now focus on the second part of our lemma. We assume that A i < 1/2 for all i ∈ I. Let λ := max i∈I A i . By definition λ ∈ (0, 1/2). It is useful at this point to think of the linear Let c = 1 − λ 1−λ and c ′ = 1 + λ 1−λ . Here c > 0 since λ ∈ (0, 1/2). Equation (5.33) implies that for all r > 0. This in turn implies (5.32).
To see why (5.31) is true, notice that (5.33) and (5.34) imply that for all x ∈ C d . Therefore the absolute value of every eigenvalue of ∞ k=1 A k−1 i is bounded above by c ′ and below by c. Now using the fact that the determinant of ∞ k=1 A k−1 i is the product of its eigenvalues, we assert that there exists C > 0 depending only upon max i∈I A i such that Using the fact that the determinant is multiplicative, we can now multiply through by |Det(A i )| in the above and conclude (5.31). This completes our proof. Now given an IFS S satisfying A i < 1/2 for all i ∈ I, it follows from Lemma 5.6 that there exists c > 0 such that for any function h : So to prove Theorem 2.1.4, it is sufficient to prove a positive measure result for the sets on the left hand side of the above inclusion. The sets are amenable to the same methods we used to prove Theorem 2.1.1. In particular, suppose we are given {A i } a finite set of matrices satisfying the assumptions of Theorem 2.1.1, then for each T ∈ R #I·d , i ∈ I * and s > 0 we let Lemma 3.5 can be used in a similar way to prove the following analogue of Lemma 5.1, the proof of which we omit.
Lemma 5.7. Let {A i } i∈I be a collection of matrices satisfying the assumptions of Theorem 2.1. Then for any R > 0, n ∈ N, s > 0 we have Once equipped with Lemma 5.7, the proof of Theorem 2.1.4 follows the same argument as Theorem 2.1.1.

Proof of Theorem 2.3
Suppose one were given a sequence (i n,m ) ∈ I N×N and a sequence of Borel sets (E n ) satisfying the hypothesis of Theorem 2.3.1, the key to proving Theorem 2.3.1 is to understand for small s > 0 and for arbitrary n ∈ N the Lebesgue measure of the set i∈I n S i,T (π T ((i n,m ) ∞ m=1 ) + s · E n ) for a typical T . To obtain meaningful bounds we need to understand the measure of the intersection of two sets in this union for a typical T . This is the content of Lemma 6.1. Using this lemma we can then obtain for arbitrary R > 0 and n ∈ N a useful expression for This is the content of Proposition 6.2. This latter expression is what allows us to prove our result.
Lemma 6.1. Let {A i } i∈I be a finite set of matrices satisfying A i < 1/2 for all i ∈ I. Let (i n,m ) n,m ∈ I N×N and (E n ) be a sequence of Borel sets satisfying the assumptions of Theorem 2.3.1. Then for any n ∈ N and s > 0, if i, j ∈ I n are distinct words such that |i ∧ j| = k, we have Proof. Let us begin our proof by fixing (i n,m ) n,m and (E n ) satisfying the assumptions of Theorem 2.1.1. Fix n ∈ N and s > 0. For any j ∈ I n we have Let r n > 0 and C > 0 be as in the statement of Theorem 2.3. So in particular we have We let r * n > 0 be a sufficiently small real number 3 satisfying Let us now fix i, j ∈ I n such that |i ∧ j| = k. For p = (p 1 , . . . , p d ) ∈ Z d , denote and then let It follows from the properties of r * n listed above, the fact L d (E n ) = λ(A) −n , and (6.1) that Using the above inclusions and (6.2) we have Now using the definition of V j we have Now notice that if T is such that for some p ∈ V j we have Using this observation, Lemma 3.5 and (6.1), we have the following for each p ∈ V j Using this bound together with (6.3) and (6.4), we then have This completes the proof.
Proof. The second statement follows from the following computation: We now move on to the first statement. We start with the following inequality which is an immediate application of Lemma 3.3 It follows from the argument given above in the derivation of statement 2 that i∈I n L d (S i,T (π T ((i n,m ) ∞ m=1 ) + s · E n )) = s d for all T ∈ R #I·d . Therefore we have It remains to bound the second term. Applying Lemma 6.1 we observe the following: In the penultimate line we used that λ(A) > 1 so n This completes our proof.
Equipped with Proposition 6.2 we can now prove Theorem 2.3.1.
Applying statement 2 from Proposition 6.2 we have Now applying statement 1 from Proposition 6.2 we have Cancelling terms from either side, this inequality yields It follows from this equation that we can choose s ∈ (0, 1) sufficiently small in a manner that depends upon ǫ and R such that In what follows we will assume that we have chosen such an s and we will denote it by s * . Let By the continuity of the Lebesgue measure from above, we have Using the above inequality, we see that to prove our result it suffices to show that the desired conclusions hold for any T ∈ A ∞ (s * ). This we do below. Now using the assumption s · E n ⊂ E n for all s ∈ (0, 1), we have the following for any As such, the first conclusion of Theorem 2.3.1 follows once we observe that i∈I n S i,T (π T ((i n,m ) ∞ m=1 ) + E n ) coincides with {x : ∃(i 1 , . . . , i n ) ∈ I n such that (T in,T • · · · • T i 1 ,T )(x) ∈ π T ((i n,m ) ∞ m=1 ) + E n } for any T ∈ R #I·d . We will now prove that the second conclusion of Theorem 2.3.1 holds for any T ∈ A ∞ (s * ). We want to use continuity of the Lebesgue measure again, however to do this we must know that for some m ∈ N. At this point we use our assumption that there exists Q > 0 such that belongs to some bounded domain in R d and so (6.5) holds for all m 4 . Now freely using the continuity of the Lebesgue measure from above, we see that the following holds for any T ∈ A ∞ (s * ) In the final line we used for all T ∈ A ∞ (s * ). This completes our proof.
Remark 6.3. In the above we showed that for any ǫ, there is a set A ∞ (s * ) with a complement of measure at most ǫ, such that for all T ∈ A ∞ (s * ), the measure of the set W (S T , (π T ((i n,m ) ∞ m=1 ) ∞ n=1 , (E n )) is bounded below by a positive constant. It should be noted, however, that this constant depends on A ∞ (s * ) and in particular, the larger we insist the measure of A ∞ (s * ), the smaller the constant. In particular, this result is far away from a 0-full measure type result.
together with our assumption that X T is differentiation regular for Lebesgue almost every T ∈ R #I·d , we know that for Lebesgue almost every T ∈ R #I·d the set W (S T , π T (i), L d (B(0, 1)) −1/d ) has positive Lebesgue measure and X T is differentiation regular. In what follows we fix a T satisfying these two properties. Applying Lemma 5.5, we know that Lebesgue almost every x ∈ X T belongs to To complete our proof, we will now show that if Let y ∈ W (S T , π T (i), L d (B(0, 1)) −1/d ) and j ∈ I * be such that S j,T (y) = x. If k ∈ I * is such that T k,T (y) ∈ B π T (i), 1 (L d (B(0, 1))λ(A) |k| ) 1/d (6.6) then (T k,T • T j,T )(x) ∈ B π T (i), 1 (L d (B(0, 1))λ(A) |k| ) 1/d .
Taking C = λ(A) |j|/d · L d (B(0, 1)) −1/d , we see that the above implies Because by definition there exists infinitely many k ∈ I * such that (6.6) holds, it follows that x ∈ W (S T , π T (i), C) and our result follows.

Proof of Theorem 2.3.3
The proof of Theorem 2.3.3 is similar to the proof of Theorem 2.3.1. As such we only include an outline. The first step is to use Lemma 5.6. This lemma allows us to assert that for each for any sequence of Borel sets (E n ) ∞ n=1 . Now using Lemma 3.5 and Lemma 5.6 we can prove the following analogue of Lemma 6.1.
Lemma 6.4. Let {A i } i∈I be a finite set of matrices satisfying A i < 1/2 for all i ∈ I. Let (E n ) be a sequence of Borel sets satisfying the assumptions of Theorem 2.3.3. Then for any n ∈ N and s > 0, if i, j ∈ I n are distinct words such that |i ∧ j| = k, we have Combining Lemma 3.3 and Lemma 6.4 allows us to prove the following statement.
Proposition 6.5. Let {A i } i∈I be a finite set of matrices satisfying A i < 1/2 for all i ∈ I.
Let (E n ) be a sequence of Borel sets satisfying the assumptions of Theorem 2.3.3. Then for any n ∈ N and R > 0 we have Once we are equipped with Proposition 6.5, the proof of Theorem 2.3.3 follows the same argument as Theorem 2.3.1. The choice of s * in the proof is a priori dependant on the term i∈I n Det( ∞ k=1 A k i ) λ(A) n appearing in Lemma 6.5. However, by (5.31) proved above, this number is essentially a constant with respect to n, and so does not introduce extra difficulty when compared to the proof of Theorem 2.3.1.

Proof of Theorem 2.4
To prove Theorem 2.4, we will apply the Mass Transference Principle of Wang and Wu [41,Theorem 3.1]. Rather than just consider iterated function systems, they find lower bounds for the Hausdorff dimension of limsup sets defined by a general system of rectangles of side lengths ρ a i +t i N . Loosely speaking, Wang and Wu show that when one has appropriate measure-theoretic knowledge about the limsup set for rectangles with side lengths ρ a i N , then this can be used to obtain a lower bound for the Hausdorff dimension of the shrunk limsup set defined using the side lengths ρ a i +t i N . Here ρ N is a sequence shrinking to 0 with N , and a 1 , . . . , a d , t 1 , . . . , t d ≥ 0 determine the shape of the rectangles.
Our first step towards proving Theorem 2.4 is to establish that for a Lebesgue typical T a suitable local ubiquity property is satisfied. See [41,Definition 3] for the definition of local ubiquity.
Proposition 7.1. Let {A i } i∈I be a finite set of matrices satisfying the assumptions of Theorem 2.4 and j ∈ I N . Let U ⊂ R #I·d be as in the statement of Theorem 2.4. Then for Lebesgue almost every T ∈ U , there exists a constant c > 0 such that for any ǫ > 0 and ball B contained in X T we have Proof. We begin our proof by fixing a T ∈ U belonging to the full measure set for which the conclusion of Theorem 2.3.1 holds with (E n ) a sequence of balls with Lebesgue measure λ(A) −n and the targets centred at π T (j). By our assumptions, we may also assume that T is such that X T has non-empty interior.
We now fix a ball B contained in X T . We will show that there exists c > 0, that does not depend upon our choice of B, such that lim sup Since for any ǫ > 0 we have for N sufficiently large, we see that (7.1) implies our proposition. We now focus on proving (7.1).
Let C be a large cube containing X T and L ∈ N be sufficiently large that Then we have Applying Lemma 5.3 to the rectangles {S i (C) : i ∈ Ω B ′ }, we see that there exists Ω B ′ ⊂ Ω B ′ satisfying the following properties: By our assumptions on T , we know that there exists c > 0 such that for infinitely many N, where E N is the ball centred at the origin satisfying L d (E N ) = λ(A) −N . Using properties 1. and 2. above, we see that if N is such that (7.2) is satisfied then we have: Summarising, we have shown that Equation (7.1) now follows once we observe that for N sufficiently large, if for some i ∈ I N +L , then x ∈ S i B π T (j), log(N + L) λ(A) N +L Equipped with Proposition 7.1 we can now prove Theorem 2.4.
Proof of Theorem 2.4. Fix j ∈ I N . Let T ∈ U belong to the full measure set for which the conclusion of Proposition 7.1 holds and for which X T has non-empty interior. Let B be an arbitrary ball with centre in X T . Replacing B with a ball contained within B if necessary, we can assume without loss of generality that B ⊂ X T . This follows from our assumption on T that ensures X T has non-empty interior. We fix s > 1. We now set out to obtain a lower bound for the Hausdorff dimension of W s (S T , π T (j)) ∩ B. Let ǫ > 0 be arbitrary. Instead of directly bounding the Hausdorff dimension of W s (S T , π T (j)) ∩ B from below, we will bound the Hausdorff dimension of for any ǫ. Therefore a lower bound for the Hausdorff dimension ofW (ǫ, B) is also a lower bound for the Hausdorff dimension of W s (S T , π T (j)) ∩ B. We emphasise thatW (ǫ, B) is a limsup set formed of rectangles with centres contained in {S i,T (π T (j))} i∈∪ ∞ n=1 I n and side lengths for each 1 ≤ i ≤ d. Equipped with this notation, we can now directly apply Theorem 3.1 from [41] to obtain Using the language of [41], here we are taking Jn = I n for each n ∈ N.
Lemma 8.1 is well known for differences of the form π λ (a0 ∞ ) − π λ (a ′ 0 ∞ ) (see [16]). Our proof of Lemma 8.1 is a minor adaptation but we include the details for completion.
Proof. Let us fix λ ∈ (1/2, 1) the reciprocal of a Garsia number, and a, a ′ ∈ {0, 1} n two distinct sequences. We observe that π λ (a ∞ ) − π λ (a ′∞ ) = Therefore to prove our lemma, we need to show that there exists C > 0 which does not depend upon a or a ′ , such that 6 Let β = λ −1 and let γ 1 , . . . , γ k denote the Galois conjugates of β. Since β has norm ±2 it is not a zero of any non-trivial polynomials with coefficients in {−1, 0, 1}. Therefore n i=1 a i β n−i+1 − n i=1 a ′ i β n−i+1 = 0. Moreover we also have n i=1 a i γ n−i+1 j − n i=1 a ′ i γ n−i+1 j = 0 for each Galois conjugate of β. Since β is an algebraic integer we have The last line follows from the fact β has norm ±2 so |β · k j=1 γ j | = 2. We have shown that Our result now follow upon dividing both sides by 2 n .
The following lemma immediately follows from Lemma 5.6. It is essentially the first part of this lemma rewritten for our current purposes. Lemma 8.2. Let λ ∈ (1/2, 1) and a ∈ {0, 1} n . Then x ∈ B π λ (a ∞ ), λ |a| r 1−λ |a| if and only if T a (x) ∈ B(x, r).
We are almost ready to proceed with our proof of Theorem 2.5. However, before we do, we need to recall two results. Theorem 8.3 (Garsia [16]). If λ is the reciprocal of a Garsia number then the Bernoulli convolution µ λ is absolutely continuous. [31]). If µ λ is absolutely continuous then µ λ is equivalent to L 1 | [0, 1 1−λ ] . Proof of Theorem 2.5. Since λ is the reciprocal of a Garsia number, we know by Theorem 8.3 that µ λ is absolutely continuous with respect to the Lebesgue measure. Therefore it admits a Radon-Nikodym derivative d λ . By definition we have that d λ (x) > 0 for µ λ almost every x. Now using Theorem 8.4, we in fact have that d λ (x) > 0 for Lebesgue almost every x ∈ [0, 1 1−λ ]. By Lebesgue's differentiation theorem [30] we know that Given any function h : N → [0, ∞) satisfying ∞ n=1 h(n) = ∞, we will show that there exists C > 0 such that for any r sufficiently small we have

Theorem 8.4 (Mauldin and Simon
Since almost every x ∈ [0, 1 1−λ ] satisfies (8.3), it will follow from (8.4) that the set of density points for R(S λ , h) c ∩ [0, 1 1−λ ] has zero Lebesgue measure. By Theorem 3.2 it will then follow that Lebesgue almost every x ∈ [0, 1 1−λ ] is contained in R(S λ , h). As such to prove our result it is sufficient to prove that (8.4) holds.
Using (8.5) together with the definition of µ λ,n , we see that there exists N ∈ N such that if we let Ω r,n := {a ∈ {0, 1} n : π λ (a ∞ ) ∈ B(x ′ , r)} then 2 n d λ (x ′ )r 2 ≤ #Ω r,n ≤ 8 · 2 n d λ (x ′ )r Replacing h with a smaller function is necessary, it follows from Lemma 8.1 that without loss of generality we can assume that the union defining E n is always disjoint. We then let We may also assume without loss of generality that h is bounded. As such, it follows from Lemma 8.2 that E ∞ ⊂ R(S λ , h) ∩ [x ′ − r, x ′ + r]. Therefore to prove that (8.4) holds, we need to show that there exists C > 0 such that To prove this inequality we use Lemma 3.1. Since the balls in the union defining E n are disjoint, it follows from (8.6) that h(n)rd λ (x ′ ) 4 ≤ L 1 (E n ) ≤ 4h(n)rd λ (x ′ ). (8.8) Which by our assumption on h implies ∞ n=N L 1 (E n ) = ∞. So we satisfy the assumptions of Lemma 3.1. It remains to obtains good bounds for L 1 (E n ∩ E m ). For a fixed a ∈ Ω r,n , it follows from Lemma 8.1 and a volume argument that for m > n ≥ N we have # b ∈ Ω r,m : B π λ (a ∞ ), h(n) 2 n+1 ∩ B π λ (b ∞ ), h(m) 2 m+1 = ∅ ≪ h(n) 2 n+1 · 2 m + 1.
Using this bound and (8.6) we have In summary, we have shown that L 1 (E ∞ ) ≫ rd λ (x ′ ). This is exactly the content of (8.7) and so our proof is complete.
We now move on to the proof of Theorem 2.6. The application of the mass transference principle from [9] is standard so we only give brief details.
Proof. The first part of Theorem 2.6 follows from Lemma 8.2 and a covering argument. For the second part, notice that Lemma 8.2 implies that for any function h : N → [0, ∞) we have This equality allows us to reinterpret the set R(S λ , h) in terms of a limsup set coming from a sequence of balls. With this reinterpretation, we naturally fall into the framework of [9]. In particular, combining Theorem 3 from [9] with Theorem 2.5 we obtain our result.