Recurrence of bipartite planar maps

This paper concerns random bipartite planar maps which are defined by assigning weights to their faces. The paper presents a threefold contribution to the theory. Firstly, we prove the existence of the local limit for all choices of weights and describe it in terms of an infinite mobile. Secondly, we show that the local limit is in all cases almost surely recurrent. Thirdly, we show that for certain choices of weights the local limit has exactly one face of infinite degree and in that case has spectral dimension $4/3$ (the latter requires a mild moment condition).


Introduction and main results
A planar map is a finite connected graph embedded in the 2-sphere. Motivated by questions of universality, Marckert and Miermont [30] introduced a class of probability distributions on bipartite planar maps, where a weight q_{d/2} is given to each face of degree d. (A bipartite map is one in which all faces have even degree, hence d/2 is an integer.) Due to the discovery of certain bijections between planar maps and labeled trees [8,34], progress on this model of random planar maps has been tremendous, see e.g. [25] for a recent review. Much of the focus has been on the scaling limit, where the map is rescaled by some power of its size and one studies the limit of the corresponding metric space in the Gromov-Hausdorff sense, see e.g. [17,26,27,28,32]. Other papers focus on the local limit, where one does not rescale the graph and the limiting object, when it exists, is an infinite graph, see e.g. [3,9,10,23,31] for results on special cases and related models. So far the local limit has been shown to exist only for certain choices of the weights q_i.

This paper presents three main contributions to the theory of local limits of planar maps (precise definitions and statements appear in the following subsections). Our first main result is the existence, and a description, of the local limit for arbitrary choices of the weights q_i, see Theorem 1.1. We show this by using a connection to simply generated trees, and a recent general limit theorem due to Janson for the latter objects [16]. The approach is similar to that of Chassaing and Durhuus [9] and later Curien, Ménard and Miermont [10] in the case of quadrangulations. Our second main result is that the limit map is almost surely recurrent for all choices of the weights q_i, see Theorem 1.2. This is proved using a recent general result of Gurel-Gurevich and Nachmias [13], and relies on establishing exponential tails of the degrees in the local limit (Theorem 4.2).
Our third main result focuses on finer properties of random walk in the local limit, for parameters in a certain 'condensation phase'. In this phase, the local limit almost surely has a face of infinite degree. Under an additional moment condition, we show that the spectral dimension of the map in this case is almost surely 4/3 (see Theorem 1.3). Roughly speaking this follows from the fact that, from the point of view of a simple random walk, the map is tree-like (although it is in general not a tree). This result relies on recent general methods for expressing the spectral dimension in terms of volume and resistance growth due to Kumagai and Misumi [24]. We now define the model more precisely.

Planar maps.
A planar map is a finite, connected graph embedded in the 2-sphere and viewed up to orientation preserving homeomorphisms of the sphere. A connected component of the complement of the edges of the graph is called a face. The degree of a face f, denoted by deg(f), is the number of edges in its boundary; the edges are counted with multiplicity, meaning that the same edge is counted twice if both its sides are incident to the face. We sometimes consider rooted and pointed planar maps: the root is then a distinguished oriented edge e = (e−, e+), and the point is a fixed marked vertex, which will be denoted by ρ. All maps we consider are bipartite; this is equivalent to each face having an even degree. We denote the set of finite bipartite, rooted and pointed planar maps by M*_f, and we denote the subset of maps with n edges by M*_n. For a planar map m we denote the set of vertices, edges and faces by V(m), E(m) and F(m) respectively. For a map m ∈ M*_f and an integer r ≥ 0, let B_r(m) denote the planar subgraph of m spanned by the set of vertices at a graph distance ≤ r from the origin e− of the root edge. Note that B_r(m) is a planar map; for r ≥ 1 it is rooted, and it is pointed if the vertex ρ is at a graph distance ≤ r from e−. Define a metric on M*_f by

d_M(m, m′) = (1 + sup{r ≥ 0 : B_r(m) = B_r(m′)})^{−1}.   (1.1)

This metric on rooted graphs was introduced in [6]. Denote by M* the completion of M*_f with respect to d_M. Thus M* is a metric space, which we further make into a measure space by equipping it with the Borel σ-algebra. The elements of M* which are not finite are called infinite planar maps and the set of infinite planar maps is denoted by M*_∞. An infinite planar map m can be represented by an equivalence class of sequences (m_i)_{i≥0} of finite planar maps having the property that for each r ≥ 0, B_r(m_i) is eventually the same constant for every representative. We then call m the local limit of the sequence (m_i)_{i≥0}. The equivalence class defines a unique infinite rooted graph, which may or may not be pointed.
We will consider probability measures on M* which are defined via a sequence (q_i)_{i≥1} of non-negative numbers, as follows. Define a sequence of probability measures (µ_n)_{n≥1} on M* by first assigning to each finite map m ∈ M*_f the weight

W(m) = ∏_{f∈F(m)} q_{deg(f)/2},   (1.2)

and then, for each n for which the normalization is finite and positive, setting

µ_n(m) = W(m) / ∑_{m′∈M*_n} W(m′),   m ∈ M*_n.   (1.3)

This definition was first introduced by Marckert and Miermont [30].

1.2.
Main results. Our first main result establishes a weak limit of (µ_n)_{n≥1} in the topology generated by d_M. In order to exclude the trivial case when all faces have degree two we demand that q_i > 0 for some i ≥ 2. Certain qualitative properties of the limit map can be determined by the value of a quantity κ which we will now define. For convenience, we will define a new sequence (w_i)_{i≥0}, expressed in terms of the parameters (q_i)_{i≥1} as

w_i = \binom{2i−1}{i−1} q_i,  i ≥ 1,   (1.4)

and we let w_0 = 1. The reason for this definition will become clear when we explain the connection between the maps and simply generated trees in Section 3.3. Denote the generating function of (w_i)_{i≥0} by

g(t) = ∑_{i≥0} w_i t^i,   (1.5)

and denote its radius of convergence by R. If R > 0 define

γ = lim_{t↑R} t g′(t)/g(t)   (1.6)

and if R = 0 let γ = 0. Note that the ratio in (1.6) is continuous and increasing in t by [16, Lemma 3.1]. Next define the number τ ≥ 0 to be (1) the (unique) solution t ∈ (0, R] of t g′(t)/g(t) = 1, if γ ≥ 1; or (2) τ = R, if γ < 1. Then define the probability weight sequence (π_i)_{i≥0} by

π_i = w_i τ^i / g(τ),  i ≥ 0,   (1.7)

and let ξ be a random variable distributed by (π_i)_{i≥0}. We will view ξ as an offspring distribution of a Galton-Watson process and we denote its expected value by κ. One easily finds that

κ = min{γ, 1} ≤ 1.   (1.8)

Theorem 1.1. For all choices of weights (q_i)_{i≥1} the measures µ_n converge weakly to some probability measure µ (in the topology generated by d_M). The infinite map with distribution µ is almost surely one-ended and locally finite. If κ = 1 all faces are of finite degree. If κ < 1 the map contains exactly one face of infinite degree. If κ = 0 the limit is the UIPTree.
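For concreteness, the quantities above can be evaluated numerically. The following sketch (our own illustration, not part of the paper; the function name and the restriction to a single non-zero weight q_2 are ours) computes τ, (π_i) and κ for bipartite quadrangulations, where w_0 = 1, w_2 = \binom{3}{1} q_2 = 3q_2 and g(t) = 1 + 3 q_2 t^2:

```python
from math import comb

def kappa_for_quadrangulations(q2):
    # Only q_2 > 0: all faces are quadrangles. Then w_0 = 1 and, by (1.4),
    # w_2 = comb(3, 1) * q2 = 3 q2, so g(t) = 1 + 3 q2 t^2 and R = infinity.
    w2 = comb(3, 1) * q2
    g = lambda t: 1.0 + w2 * t * t
    ratio = lambda t: 2.0 * w2 * t * t / g(t)   # t g'(t) / g(t), increasing in t
    # Here gamma = lim ratio = 2 >= 1, so tau solves t g'(t)/g(t) = 1 (bisection):
    lo, hi = 0.0, 1.0
    while ratio(hi) < 1.0:
        hi *= 2.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if ratio(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    tau = (lo + hi) / 2.0
    pi = {0: 1.0 / g(tau), 2: w2 * tau**2 / g(tau)}   # the law (pi_i) of xi
    kappa = 2 * pi[2]                                  # kappa = E(xi)
    return tau, pi, kappa
```

For q_2 = 1/12 this gives τ = 2, π_0 = π_2 = 1/2 and κ = 1, so quadrangulations fall in the κ = 1 phase for this choice of weight.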
Special cases of this result have been established previously, see [9,10,23] for the case of uniform quadrangulations. Other related results in the case of non-bipartite graphs have been established for uniform triangulations [3] and uniform maps [31].
The proof of Theorem 1.1 appears at the end of Section 2 and relies on two bijections: first a bijection due to Bouttier, Di Francesco and Guitter (BDG) [8] from M * f to a class of labelled trees called mobiles, and then a bijection which maps random mobiles of the form we consider to simply generated trees. In [16], Janson established a general convergence result for simply generated trees, which allows us to deduce the corresponding convergence result for planar maps. This correspondence was previously used by Janson and Stefánsson to study scaling limits of planar maps in [17]. We note here that one may deduce more details about the structure of the limiting map than those stated in Theorem 1.1 from the upcoming Theorem 3.1. The latter result concerns the local limit of the mobiles along with the procedure of constructing the maps out of the mobiles. The local limit of the mobiles has an explicit description in terms of a multi-type Galton-Watson process as will be explained in Section 3.1. We expect that, similarly to the case of quadrangulations [10], one may recover the infinite mobile from the infinite map. We remark here that the bipartite case, on which we focus, is easier since the BDG bijection has a particularly simple form in this case, which directly gives the correspondence with simply generated trees. In [8] bijections between trees and more general types of maps are described, see also [33].
Throughout the paper, we will let M denote the infinite random map distributed by µ from Theorem 1.1. Recall that simple random walk on a locally finite graph G is a Markov chain starting at some specified vertex, which at each integer time step moves to a uniformly chosen neighbour. Recall also that G is recurrent if simple random walk returns to its starting point with probability 1. Our second main result is the following, and is proved in Section 4.

Theorem 1.2. For all choices of the weights q_i, the map M is almost surely recurrent.
The proof relies on a recent result on recurrence of local limits [13]. To be able to apply this result we show that the degree of a typical vertex in M has an exponential tail, see Theorem 4.2. We note here that the degree of a typical face need not have exponential tails (even in the case κ = 1).
Our third main result concerns the case κ < 1, when M has a unique face of infinite degree. This phase has been referred to as the condensation phase in the corresponding models of simply generated trees [16,18]. Our results concern the asymptotic behaviour of the return probability of a simple random walk after a large number of steps. Let p_G(n) be the probability that simple random walk on G is back at its starting point after n steps. The spectral dimension of G is defined as

d_s(G) = −2 lim_{n→∞} log p_G(2n) / log n,   (1.9)

provided the limit exists. It is simple to check that the limit is independent of the initial location of the walk if G is connected. Let ξ be the random variable defined below (1.7) and recall that κ = E(ξ).
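To make (1.9) concrete, consider simple random walk on Z, for which p(2n) = \binom{2n}{n} 4^{−n} and the spectral dimension is 1. The sketch below (our own illustration; the function names are ours) evaluates the finite-n ratio in (1.9) exactly via log-gamma, and shows how slowly it approaches its limit:

```python
from math import lgamma, log

def log_return_prob(n):
    # Simple random walk on Z: p(2n) = comb(2n, n) / 4^n, computed in log form
    # to avoid huge integers.
    return lgamma(2 * n + 1) - 2 * lgamma(n + 1) - 2 * n * log(2.0)

def ds_estimate(n):
    # The finite-n ratio -2 log p(2n) / log n from (1.9); its limit is d_s = 1.
    return -2.0 * log_return_prob(n) / log(n)
```

Since p(2n) ∼ (πn)^{−1/2}, the estimate equals 1 + log π / log n up to a vanishing correction, e.g. about 1.17 at n = 10^3 and 1.08 at n = 10^6; this logarithmic correction is why (1.9) is stated as a limit rather than evaluated at a fixed n.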
Recall that M is the UIPTree when κ = 0. It was shown in [12] (see also [4,11,20]) that the spectral dimension of the UIPTree (and even of the more general class of critical Galton-Watson trees with finite variance conditioned to survive) is almost surely 4/3. Our results in the case κ = 0 are therefore in agreement with those results. When 0 < κ < 1 the map M is no longer a tree, but we show that from the point of view of the random walk it is still tree-like (see also [7] for the phenomenon that maps with a unique large face are tree-like). This is perhaps not surprising in view of recent results in [17], where it is shown, for regular enough weights, that as n → ∞ the scaling limit of the maps, with the graph metric rescaled by n^{−1/2}, is a multiple of Aldous' Brownian tree. In this sense, the maps are globally tree-like although they contain a number of small loops, see Fig. 1 for an example. The condition β > 5/2 appearing in Theorem 1.3 is probably not optimal but is the best that can be obtained with our methods. We suspect that β > 1 suffices.

Figure 1. A simulation of a planar map with κ = 0.66 and w_i ∼ i^{−3}. The map has 794 vertices and 579 faces. The drawing is non-isometric and non-proper.
It is worth noting here that the value 4/3 for the spectral dimension is also encountered in critical percolation clusters of Z d for d large enough. It was conjectured by Alexander and Orbach [2] that the spectral dimension of the incipient infinite cluster for percolation on Z d (d ≥ 2) should be 4/3. This conjecture is generally believed to be true for d > 6 (but false for d ≤ 6) and has been proven for d ≥ 19 and for d > 6 when the lattice is sufficiently spread out [22].
Finally, Theorem 1.3 may be seen as a refinement of Theorem 1.2 (for κ < 1) in the sense that if a graph G is recurrent, and the spectral dimension d s (G) exists, then d s (G) ≤ 2. We do not prove the existence of d s (M ) other than in the case covered by Theorem 1.3.

1.3.
Outline. The paper is organized as follows. In Section 2 we introduce rooted plane trees and mobiles and explain how they may be related to planar maps via the BDG bijection. In Section 3 we prove Theorem 1.1 on the existence and characterization of the local limit, and in Section 4 we prove Theorem 1.2 on recurrence. Section 5 is devoted to the spectral dimension in the condensation phase when κ < 1 (Theorem 1.3). In order to improve the readability of the main text we collect proofs of some lemmas in the Appendix.

Trees and mobiles
The study of planar maps is intimately tied up with the study of trees, as will be explained in the following sections. In this section we introduce our main definitions and tools for studying trees. As a 'reference' we will use a certain infinite tree T_∞, whose vertex set is V(T_∞) = ⋃_{n≥0} Z^n, i.e. the set of all finite sequences of integers. The tree T_∞ is closely related to the standard Ulam-Harris tree (which has vertex set ⋃_{n≥0} N^n), and is defined as follows. Firstly, the concatenation of two elements u, v ∈ V(T_∞) is denoted by uv. The unique vertex in Z^0 (the empty sequence) is called the root (not to be confused with the root edge of a map) and is denoted by ∅. The edges in T_∞ are defined by connecting every vertex vi, i ∈ Z, v ∈ V(T_∞), to the corresponding vertex v. In this case v is said to be the parent of vi and vi is said to be a child of v. More generally, v is said to be an ancestor of v′ if v′ = vu for some u ∈ V(T_∞), and in that case v′ is said to be a descendant of v. We denote the genealogical relation by ≺, i.e. v ≺ v′ if and only if v is an ancestor of v′. The generation of v is defined as the number of elements in the sequence v, or equivalently as the graph distance of v from the root, and is denoted by |v|.
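Vertices of T_∞ are conveniently represented as tuples of integers; the conventions above then translate directly into code (a minimal sketch of our own, with hypothetical helper names):

```python
def parent(v):
    # v is a finite sequence of integers; the root is the empty tuple ().
    return v[:-1]

def generation(v):
    # |v| = length of the sequence = graph distance from the root
    return len(v)

def is_ancestor(v, w):
    # v ≺ w iff w = v u for some finite sequence u, i.e. v is a prefix of w
    # (taken here to be non-strict: every vertex is a prefix of itself).
    return len(v) <= len(w) and w[:len(v)] == v
```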
2.1. Rooted plane trees. A rooted plane tree T, with vertex set V(T), is defined as a subtree of T_∞ containing the root ∅ and having the following property: for every vertex v ∈ V(T) there is a number out(v) ∈ {0, 1, . . .} ∪ {∞}, called the outdegree of v, such that vi ∈ V(T) if and only if ⌊−out(v)/2⌋ < i ≤ ⌊out(v)/2⌋, see Fig. 2. The degree of a vertex v is denoted by deg(v) and defined as deg(v) = out(v) + 1 if v ≠ ∅, and deg(∅) = out(∅). For each vertex v we order its children from left to right by declaring that vi is to the left of vj if
• i = 0 (v0 is the leftmost child); or
• ij > 0 and i < j; or
• i > 0 and j < 0.
Our definition of a rooted plane tree is equivalent to the conventional definition (see e.g. [16]) if the tree is locally finite, i.e. out(v) < ∞ for all v ∈ V(T). However, it differs slightly if the tree has a vertex v of infinite degree, since in our case v has both a leftmost and a rightmost child whereas conventionally it would only have a leftmost child. It will be important to have this property when describing planar maps in the so-called condensation phase. All trees we consider in this paper will be plane trees and we will from now on simply refer to them as trees.

Figure 2. An example of a plane tree T. The subtree T^[3] is indicated by dashed edges and gray vertices.
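The index set of the children and their left-to-right order can be sketched as follows (our own illustration for finite outdegrees; the function names are ours):

```python
def child_indices(out):
    # Allowed child labels for finite outdegree: floor(-out/2) < i <= floor(out/2).
    # Python's // is floor division, so this matches the condition exactly.
    return list(range((-out) // 2 + 1, out // 2 + 1))

def left_to_right(indices):
    # v0 is leftmost, then positive labels increasing, then negative labels
    # increasing (so -1 is always the rightmost child).
    key = lambda i: (0, 0) if i == 0 else ((1, i) if i > 0 else (2, i))
    return sorted(indices, key=key)
```

For example, out(v) = 5 gives the index set {−2, −1, 0, 1, 2}, visited in the order 0, 1, 2, −2, −1; a vertex of infinite outdegree has all of Z as index set, with both a leftmost child (0) and a rightmost child (−1).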
Denote the set of trees with n edges by Γ_n and the set of all finite trees by Γ_f = ⋃_n Γ_n. In this paper we will only consider infinite trees T which have one of the following two properties: (1) T is locally finite and there is exactly one infinite self-avoiding path starting at the root, called an infinite spine; or (2) exactly one vertex in T has infinite degree and T contains no infinite spine. The unique self-avoiding path from the root to the vertex of infinite degree is in this case called a finite spine. Denote the set of such infinite trees by Γ_∞ and let Γ = Γ_f ∪ Γ_∞. A tree satisfying (1) or (2) can be embedded in the plane in such a way that all vertices are isolated points, no edges cross and the ordering of its vertices is preserved. When we refer to embeddings of trees later on we will always assume that these properties hold.
When an infinite tree has a spine (finite or infinite, as above) we will denote the sequence of vertices on the spine, ordered by increasing distance from the root, by ∅ = S_0, S_1, . . .. When there is a vertex of infinite degree we will denote it by s and we will denote its children by s_i, i ∈ Z, ordered from left to right in the same way as before.
One may define a metric on Γ in much the same way as we did for M*, as follows. For every R ≥ 0 and every T ∈ Γ, let T^[R] denote the finite subtree of T with vertex set {v = (v_1, . . . , v_k) ∈ V(T) : k ≤ R and |v_i| ≤ R for all i}, i.e. T is truncated at generation R and only children with index at most R in absolute value are kept, cf. Fig. 2. Define the metric

d_Γ(T, T′) = (1 + sup{R ≥ 0 : T^[R] = T′^[R]})^{−1}.

The set Γ is equipped with the Borel σ-algebra generated by d_Γ.

2.2.
Mobiles. It will be convenient to emphasise the distinction between vertices in a tree that belong to odd and even generations, respectively. For each tree T ∈ Γ we therefore colour the root and the vertices in every even generation white, and we colour the vertices in every odd generation black. The set of black (resp. white) vertices in the tree T will be denoted by V_•(T) (resp. V_◦(T)). Let Γ⊙_∞ be the subset of Γ_∞ where only black vertices can have infinite degree and define Γ⊙ = Γ_f ∪ Γ⊙_∞. For a finite tree T ∈ Γ_n, define the left contour sequence (c^(L)_i)_{i≥0} by c^(L)_0 = ∅ and, recursively, by letting c^(L)_{i+1} be the leftmost child of c^(L)_i which has not already appeared in the sequence, if such a child exists, and otherwise the parent of c^(L)_i. The sequence is extended to i > 2n by 2n-periodicity. Similarly define the right contour sequence (c^(R)_i)_{i≥0} by replacing leftmost with rightmost in the above definition. Next, define the contour sequence (c_i)_{i∈Z} by c_i = c^(L)_i for i ≥ 0 and c_i = c^(R)_{−i} for i ≤ 0. We will refer to each occurrence of a vertex v in the contour sequence as a corner of v. Note that v has deg(v) corners. We extend the above definitions to elements in Γ_∞ in the obvious way (there is only one infinite period); this is possible due to how the infinite trees are constructed and how the children of the vertex of infinite degree are ordered. Note that for a tree T in Γ, the contour sequence visits all vertices. We will sometimes use the term clockwise (respectively, counterclockwise) contour sequence, which refers to progressing through the contour sequence c_i by increasing (respectively, decreasing) the index i. Define the white contour sequence (c^◦_i)_{i∈Z} by c^◦_i = c_{2i} for all i ∈ Z. Note that every white vertex appears in this sequence. Similarly, for a tree with a (finite or infinite) spine let (S^◦_i)_{i≥0} be the sequence of white vertices on the spine defined by S^◦_i = S_{2i}. For trees T ∈ Γ⊙, we will consider integer labels (ℓ(v))_{v∈V_◦(T)} assigned to the white vertices of T, and which obey the following rules.
(1) For all i ∈ Z, ℓ(c^◦_{i+1}) ≥ ℓ(c^◦_i) − 1 (for every black vertex u, the labels of the white vertices adjacent to u can decrease by at most one in the clockwise order around u).
(2) If T has an infinite spine then inf_{i≥0} ℓ(S^◦_i) = −∞.
(3) If T has a vertex of infinite degree then inf_{i≥0} ℓ(s_i) = inf_{i<0} ℓ(s_i) = −∞.
A tree T along with labels ℓ which obey the above rules is called a mobile and will typically be denoted by θ = (T, ℓ). If the root has label k ∈ Z, the set of such mobiles with n edges will be denoted by Θ^(k)_n; we write Θ^(k)_f for the set of finite mobiles, Θ^(k)_∞ for the set of infinite mobiles, and Θ^(k) = Θ^(k)_f ∪ Θ^(k)_∞. As explained in the next subsection, mobiles are an essential tool in the study of planar maps.
For a finite tree T there is a useful alternative way of describing rule (1) for the labels on T, see e.g. [27]. For this purpose we introduce, for each r ≥ 1, the set

E_r = {(x_1, . . . , x_r) ∈ Z^r : x_j ≥ −1 for all j and x_1 + · · · + x_r = 0}.   (2.4)

Let u be a black vertex in T of degree r, and denote its white parent by u^(0) and its white children by u^(1), u^(2), . . . , u^(r−1), ordered from left to right. Assign to u an element (x_1(u), . . . , x_r(u)) from E_r. Having done this for all black vertices u, label the white vertices of T recursively as follows. First label the root by some fixed k. If for a black vertex u we have that ℓ(u^(0)) = y_0, then let

ℓ(u^(j)) = y_0 + x_1(u) + · · · + x_j(u),  1 ≤ j ≤ r − 1.   (2.5)

The elements from E_r thus provide the increments of the labels of the white vertices clockwise around each black vertex. Note that the minimum allowed increment is −1, in accordance with rule (1). The finite sequence (ℓ(u^(j)))_{0≤j≤r} is called a discrete bridge of length r. From this description it is easy to count the number λ(T) of different allowed labellings of the finite tree T. By a standard 'balls-and-boxes' argument, the number of elements in E_r is \binom{2r−1}{r−1}. Therefore, the number of ways of labeling T, given that its root has a fixed label, is

λ(T) = ∏_{u∈V_•(T)} \binom{2 deg(u) − 1}{deg(u) − 1}.   (2.6)

We conclude this subsection by defining a metric also on the set Θ^(0). For a mobile θ = (T, ℓ), let θ^[R] be the labeled tree consisting of T^[R] and the labels ℓ restricted to the white vertices in T^[R]. Note that θ^[R] is in general not a mobile since the labels do not necessarily satisfy the rules listed above. We define a metric d_Θ on Θ^(0) by

d_Θ(θ, θ′) = (1 + sup{R ≥ 0 : θ^[R] = θ′^[R]})^{−1},   (2.7)

and we equip Θ^(0) with the Borel σ-algebra.
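The 'balls-and-boxes' count of E_r and the resulting product formula for λ(T) can be checked by brute force (a sketch of our own; in `num_labelings`, the tree is summarized by the list of its black vertex degrees):

```python
from math import comb
from itertools import product

def bridges(r):
    # Enumerate E_r: tuples (x_1, ..., x_r) with each x_j >= -1 summing to 0.
    # Since the other entries are >= -1, each x_j <= r - 1, so the box is finite.
    return [x for x in product(range(-1, r), repeat=r) if sum(x) == 0]

def num_labelings(black_degrees):
    # lambda(T): product over black vertices u of comb(2 deg(u) - 1, deg(u) - 1)
    out = 1
    for d in black_degrees:
        out *= comb(2 * d - 1, d - 1)
    return out
```

For small r one checks |E_r| = \binom{2r−1}{r−1}, i.e. 1, 3, 10, 35 for r = 1, 2, 3, 4.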
2.3. The Bouttier-Di Francesco-Guitter bijection. We will recall the rooted and pointed version of the Bouttier-Di Francesco-Guitter (BDG) bijection between mobiles and planar maps [8]. Consider a finite mobile θ = (T, ℓ) ∈ Θ^(0)_n and embed T in the plane. Let (c^◦_i)_{i∈Z} be its white contour sequence and for each i define the successor of i as

σ(i) = inf{j > i : ℓ(c^◦_j) = ℓ(c^◦_i) − 1},

with the convention that inf{∅} = ∞. Add a point ρ to the complement of T in the plane and define c^◦_∞ = ρ. Define the successor of a white corner c^◦_i as σ(c^◦_i) = c^◦_{σ(i)}. Note that every white vertex in the mobile has a unique successor. A planar map m ∈ M*_n is constructed from θ, along with a variable ǫ ∈ {−1, 1}, as follows: Draw an arc from each corner of a white vertex in θ to its successor (in such a way that no arcs cross). Then delete all black vertices and edges belonging to θ. The white vertices of θ along with the external point ρ are the vertices of m and ρ takes the role of the marked vertex. The arcs between white vertices take the role of the edges of m. The root edge of m is defined as the arc from c^◦_0 to σ(c^◦_0) and its direction is determined by the value of ǫ. If ǫ = 1 (ǫ = −1) it is directed towards (away from) c^◦_0. The faces in m correspond to the black vertices in θ, the degree of a face being twice the degree of the corresponding black vertex. Furthermore, the labels of the vertices in m inherited from the labels in θ carry information on distances to the marked point ρ. Namely, if d_gr is the graph distance on m and v ≠ ρ is a vertex in m then

d_gr(v, ρ) = ℓ(v) − min_{u∈V_◦(T)} ℓ(u) + 1.   (2.10)

The above construction defines a mapping Φ : Θ^(0)_f × {−1, 1} → M*_f which is a bijection. For the inverse construction of Φ, see [8]. The mapping Φ can be extended to infinite elements in Θ^(0) by a similar description, as follows. If θ = (T, ℓ) is an infinite mobile we embed T in the plane such that its vertices are isolated points, as described in Section 2.1.
Recall that if T has a spine then inf_{i≥0} ℓ(S^◦_i) = −∞ and if it has a vertex of infinite degree then inf_{i≥0} ℓ(s_i) = −∞. Therefore, every white vertex in the mobile still has a unique successor which is also a white vertex in the mobile. (The other condition, inf_{i<0} ℓ(s_i) = −∞, ensures that the resulting embedded graph is locally finite.) The construction of the arcs and the root edge is the same as before and, since every successor is contained in the mobile, no external vertex ρ is needed. The resulting embedded graph, which we call Φ(θ, ǫ), is thus rooted but not pointed.
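The successor map can be illustrated on a finite mobile by working with one period of the white contour sequence (our own sketch; `successors` and the cyclic convention are ours, with `None` standing for σ(i) = ∞, i.e. an arc to the marked vertex ρ):

```python
def successors(labels):
    # labels[i] = l(c°_i) along one period of the white contour sequence,
    # read cyclically. sigma(i) = inf{ j > i : l(c°_j) = l(c°_i) - 1 };
    # None encodes inf over the empty set (l(c°_i) is a minimal label).
    n = len(labels)
    succ = []
    for i in range(n):
        s = next((j % n for j in range(i + 1, i + n + 1)
                  if labels[j % n] == labels[i] - 1), None)
        succ.append(s)
    return succ
```

For instance, the label sequence 0, 1, 0, −1 sends the two corners labelled 0 to the corner labelled −1, the corner labelled 1 to the next corner labelled 0, and the minimal corner to ρ.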
In the following proposition we give Θ^(0) the topology of (2.7), and {−1, 1} the discrete topology. The set Θ^(0) × {−1, 1} is given the product topology.

Proposition 2.1. (1) The mapping Φ : Θ^(0) × {−1, 1} → M* is continuous. Moreover, for every θ ∈ Θ^(0)_∞ and ǫ ∈ {−1, 1}, the map Φ(θ, ǫ) is (2) locally finite and (3) one-ended.

Proof. In the case when T has a unique infinite spine the proof is nearly identical to that of [10, Proposition 2], which deals with quadrangulations and labelled trees. The difference in the case when T has a unique vertex of infinite degree is first of all that the left and right contour sequences are independent. Here we need to use the condition that inf_{i≥0} ℓ(s_i) = inf_{i<0} ℓ(s_i) = −∞, cf. Section 2.2. Using this, the proof of (1) and (3) proceeds in the same way as in [10]. Secondly, when T has a unique vertex of infinite degree it is not one-ended in the usual sense. However, for each R ≥ 0 the complement of the truncated tree T^[R] has exactly one infinite connected component, and this property, along with how the edges in the corresponding map are constructed from θ, guarantees that Φ(θ, ǫ) is one-ended. We leave the details to the reader.
2.4. Random mobiles. In this subsection we define a sequence (μ̃_n)_{n≥1} of probability measures on Θ^(0) × {−1, 1} which corresponds, via Φ, to the sequence (µ_n)_{n≥1} on M*. We start by defining a sequence of probability measures (ν̃_n)_{n≥1} on the set of trees Γ⊙ which we then relate to (μ̃_n)_{n≥1}.
Let (w_i)_{i≥0} be as in (1.4) and assign to each finite tree T ∈ Γ_f the weight

W(T) = ∏_{v∈V_•(T)} w_{deg(v)},   (2.11)

and define

ν̃_n(T) = W(T) / ∑_{T′∈Γ_n∩Γ⊙} W(T′),   T ∈ Γ_n ∩ Γ⊙.   (2.12)

Recall that λ(T), defined in (2.6), denotes the number of ways one can assign labels to the white vertices of a finite tree T. For each ((T, ℓ), ǫ) ∈ Θ^(0) × {−1, 1} and each n ≥ 1, define

μ̃_n((T, ℓ), ǫ) = (2λ(T))^{−1} ν̃_n(T).   (2.13)

The following result is then well-known [30].
Lemma 2.2. For each n ≥ 1, the measure µ_n is the image of μ̃_n under the mapping Φ.
Note that sampling ((T, ℓ), ǫ) from μ̃_n amounts to:
(1) Selecting the tree T according to ν̃_n.
(2) Given the tree T, labeling its root by 0 and (a) assigning a labeling ℓ to the white vertices of T uniformly from the set of allowed labelings; or equivalently (b) for every r ≥ 1 assigning independent uniform elements from E_r to each black vertex of degree r and defining ℓ recursively as described in and above (2.5).
(3) Selecting independently an element ǫ uniformly from {−1, 1}.
Note that the only 'part' of the measure (μ̃_n)_{n≥1} which depends on the parameters (w_i)_{i≥0} of the model is the 'tree part' (ν̃_n)_{n≥1}. We will therefore first focus our attention on the latter. Also note, for future reference, that step (2b) above has the following alternative description. Let X_1, X_2, . . . be independent, all with the same distribution given by

P(X_1 = k) = 2^{−(k+2)},  k = −1, 0, 1, . . . .   (2.14)

A uniformly chosen element of E_r has the same distribution as the sequence (X_1, X_2, . . . , X_r) conditioned on X_1 + · · · + X_r = 0.
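The alternative description in step (2b) lends itself to a simple rejection sampler (our own sketch; function names are ours): draw i.i.d. variables with the law (2.14) and accept when they sum to zero. This produces a uniform element of E_r because every sequence in E_r has the same probability 2^{−2r} under the i.i.d. law.

```python
import random

def sample_X(rng):
    # P(X = k) = 2^{-(k+2)} for k = -1, 0, 1, ...: start at -1 and
    # increment while a fair coin keeps coming up heads.
    k = -1
    while rng.random() < 0.5:
        k += 1
    return k

def uniform_bridge(r, rng):
    # Rejection: (X_1, ..., X_r) conditioned on sum zero is uniform on E_r.
    while True:
        x = tuple(sample_X(rng) for _ in range(r))
        if sum(x) == 0:
            return x
```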

The local limit
This section is devoted to the proof of Theorem 1.1. We start by describing the weak limit of the unlabelled mobiles, that is, of the sequence (ν̃_n)_{n≥1}, see Theorem 3.1. We then describe in Section 3.2 how to 'put the labels back on', and this gives us a proof of Theorem 1.1. The proof of Theorem 3.1 in turn relies on the theory of simply generated trees, which is described in Section 3.3. The proof of Theorem 3.1 is given in Section 3.5.
3.1. Weak convergence of unlabelled mobiles. In this subsection we state a convergence theorem for the measures ν̃_n which we prove in Section 3.5. Recall that vertices in odd generations are coloured black and vertices in even generations white. Recall also the definition of (π_i)_{i≥0} and ξ from (1.7).
Let ξ◦ and ξ• be random variables in {0, 1, 2, . . .} with distributions given by

P(ξ◦ = k) = g(τ)^{−1}(1 − g(τ)^{−1})^k,  k ≥ 0,   (3.1)

and

P(ξ• = k) = w_{k+1} τ^{k+1} / (g(τ) − 1),  k ≥ 0.   (3.2)

(These appear in [30, Proposition 7], where the law of ξ◦ is denoted µ_0 and the law of ξ• is denoted µ_1.) Also, let ξ̂◦ and ξ̂• be random variables in {1, 2, . . .} ∪ {∞} having distributions given by

P(ξ̂◦ = k) = k P(ξ◦ = k) / E(ξ◦),  k ≥ 1,   (3.3)

and

P(ξ̂• = k) = E(ξ◦) k P(ξ• = k),  k ≥ 1,   P(ξ̂• = ∞) = 1 − E(ξ◦)E(ξ•).   (3.4)

Thus ξ̂◦ is the size-biased version of ξ◦, and similarly for ξ̂• in the case κ = 1. We now define a probability measure ν̂ on infinite trees, by describing a random tree T̂ with law ν̂. We let T̂ be a modified multi-type Galton-Watson tree having four types of vertices: normal black and white vertices and special black and white vertices. The root is white and is declared to be a special white vertex. Vertices have offspring independently according to the following description. Special white vertices give birth to black vertices, their number having the law of ξ̂◦; one of the black children is chosen uniformly at random to be special and the rest are declared normal. Special black vertices give birth to white vertices, their number having the law of ξ̂•. If the number of white children is finite, one of them is chosen uniformly to be declared special and the rest normal. If the number of white children is infinite, all of them are declared to be normal. Normal white vertices give birth to normal black vertices, their number having the law of ξ◦, and normal black vertices give birth to normal white vertices, their number having the law of ξ•. We will now describe how a typical tree T̂ looks depending on the parameters (w_i)_{i≥0}. Define

κ̂ = E(ξ◦)E(ξ•)   (3.5)

and note that κ̂ ≤ 1, and that κ̂ < 1 if and only if κ < 1. First of all, the special vertices define a spine which is infinite if and only if κ̂ = 1. If κ̂ < 1 the spine ends with a black vertex of infinite degree, which has only normal white children. In that case its length L̂ (number of edges) has a geometric distribution:

P(L̂ = 2k + 1) = (1 − κ̂) κ̂^k,  k ≥ 0.   (3.6)

The normal children of the vertices on the spine are root vertices of independent two-type Galton-Watson processes where white (resp.
black) vertices have offspring distributed as ξ◦ (resp. ξ•). We will call these Galton-Watson processes outgrowths from the spine. If π_0 < 1, the mean number of offspring in two consecutive generations in an outgrowth is given by

E(ξ◦)E(ξ•) = κ̂.   (3.7)

Thus the outgrowths are critical if κ̂ = 1 and sub-critical otherwise. In both cases they are almost surely finite and therefore the tree T̂ is at most one-ended.
To summarize, we have the following two qualitatively different cases. In the case κ̂ = 1 the tree T̂ has an infinite spine consisting of special white and black vertices. The outgrowths from the spine are independent critical two-type Galton-Watson processes as described above. In the case κ̂ < 1 the tree T̂ has a finite spine with geometrically distributed length (3.6). The spine consists of special black and white vertices and has outgrowths, which are independent, sub-critical two-type Galton-Watson processes. In the extreme case κ̂ = 0 we have π_0 = 1 and thus κ = 0. In this case T̂ is deterministic and consists of a white root having a single black child of infinite degree, with all outgrowths empty.
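The geometric description of the finite spine when κ̂ < 1 can be simulated directly (a sketch of our own, assuming the spine description above; `spine_length` is our name): each special black vertex either has infinite degree, ending the spine, or the spine continues through one more black-white pair of edges.

```python
import random

def spine_length(kappa_hat, rng):
    # Number of edges from the root to the vertex of infinite degree:
    # always odd, geometric in the sense P(L = 2k + 1) = (1 - kappa_hat) * kappa_hat**k.
    L = 1
    while rng.random() < kappa_hat:
        L += 2
    return L
```

The mean length is 1 + 2κ̂/(1 − κ̂), which diverges as κ̂ ↑ 1, consistent with the spine becoming infinite at κ̂ = 1.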
We have the following.
Theorem 3.1. The sequence of measures (ν̃_n)_{n≥1} on Γ⊙ converges weakly to ν̂ (the law of T̂) as n → ∞ in the topology generated by d_Γ.
The proof uses the theory of simply generated trees and is therefore deferred until Section 3.5.

3.2.
Weak convergence of labelled mobiles. We will now use Theorem 3.1 to construct an infinite random mobile ϑ in Θ^(0) and show that it appears as the limit of the sequence (θ_n)_{n≥1} distributed by (μ̃_n)_{n≥1}. Recall that μ̃_n is obtained from ν̃_n by 'putting on the labels' and also sampling the direction ǫ of the root edge.
To construct ϑ, start with the random tree T̂ with law ν̂. Given T̂, assign independently to each of its black vertices v of finite degree r an element B(v) selected uniformly from E_r. If T̂ has a black vertex s of infinite degree, assign to that vertex a sequence of independent random variables (X_i)_{i∈Z} which are independent of the B(v) and have the law (2.14). Define the labels ℓ(v) by first labelling the root ℓ(∅) = 0, and then letting the B(v) determine the increments around v as described above (2.5), and in addition letting the increments around s be given by the sequence (X_i)_{i∈Z}.
Let ǫ ∈ {−1, +1} be uniformly chosen and independent of the random variables in the paragraph above. Finally, let ϑ = (T̂, ℓ) be the corresponding infinite mobile.

Lemma 3.2. The measures μ̃_n converge weakly to the law of (ϑ, ǫ) as n → ∞ (in the topology generated by d_Θ).

Proof. Let θ_n = ((T_n, ℓ_n), ǫ_n) have law μ̃_n. Since T_n ⇒ T̂ it suffices to show that ℓ_n ⇒ ℓ, where ℓ is the labeling of ϑ above. In both θ_n and ϑ, the label increments around different black vertices are independent. The increments around a black vertex of finite degree are in both cases uniformly chosen from the set E_r in (2.4), and we are thus done if we show that the increments around a vertex of degree ω(n) → ∞ converge to the corresponding sequence (X_i)_{i∈Z}. This follows from the following claim, which is easily verified by explicit 'balls-in-boxes' enumeration and Stirling's approximation.
Claim: Let X_1, X_2, . . . be independent and with the law given in (2.14). Then for each fixed R ≥ 1 and all a_1, . . . , a_R ∈ {−1, 0, 1, . . .} we have that

P(X_1 = a_1, . . . , X_R = a_R | X_1 + · · · + X_m = 0) → P(X_1 = a_1) · · · P(X_R = a_R)  as m → ∞.

Now we can prove the weak convergence of the probability measures µ_n on planar maps.

Proof of Theorem 1.1. By Lemma 2.2 we have that µ_n = Φ(μ̃_n), and by Proposition 2.1 that Φ is continuous. The weak convergence of µ_n towards µ follows from Lemma 3.2. The limit is one-ended by Proposition 2.1 and the presence of a face of infinite degree when κ < 1 follows from the existence of a black vertex of infinite degree in T̂. When κ = 0 the tree T̂ is deterministic and consists of a single black vertex of infinite degree with white neighbours of degree 1, and can be seen as the local limit as r → ∞ of a single black vertex of degree r with white neighbours of degree 1. The labels are determined by a uniformly chosen element of E_r, and it follows that the corresponding map is a uniformly chosen plane tree with r + 1 vertices.
3.3. Simply generated trees. In this section we describe the model of simply generated trees and state a general convergence theorem of Janson [16]. In the following section we then describe a bijection Ψ : Γ f → Γ f which relates the probability measures (ν̄ n ) n≥1 (defined in (2.12)) to the simply generated trees. We then extend Ψ to a mapping Ψ : Γ̄ → Γ ⊙ and show that it is continuous. This will allow us to use the convergence results for simply generated trees to prove Theorem 3.1.
Simply generated trees are random trees defined by a sequence of probability measures (ν n ) n≥1 on Γ as follows. Let (w i ) i≥0 be a sequence of nonnegative numbers, assign to each finite tree T the weight
w(T) = ∏ v∈V(T) w out(v) ,
where out(v) denotes the outdegree of v, and define
ν n (T) = w(T) / ∑ T′∈Γn w(T′) for T ∈ Γ n .
We assume that the weight sequence (w i ) i≥0 is defined as in and above (1.4). Janson obtained a general convergence theorem for simply generated trees in the local topology which applies to every choice of weight sequence [16]. Before stating the theorem we need a few definitions. Let π i be defined as in (1.7) and as before let ξ be a random variable distributed by (π i ) i≥0 with mean κ ∈ [0, 1]. In the extreme case κ = 0 one has simply π i = δ i,0 . Define a random variable ξ̂ on {0, 1, . . .} ∪ {∞} by
P(ξ̂ = i) = iπ i for 0 ≤ i < ∞, and P(ξ̂ = ∞) = 1 − κ. (3.10)
We will now construct a modified Galton-Watson tree T which arises as the local limit of the simply generated trees. We will denote the law of T by ν.
In T there will be two types of vertices, called normal vertices and special vertices. The root is declared to be special. Normal vertices have offspring independently according to the distribution of ξ, whereas special vertices have offspring independently according to the distribution of ξ̂. All children of normal vertices are normal. If a special vertex has an infinite number of children they are all normal. (In our case we assume that the infinite number of children is ordered from left to right as explained in the beginning of Section 2, whereas conventionally they are indexed by N. This small difference clearly does not affect the main result.) Otherwise, all children of a special vertex are normal except for one, chosen uniformly, which is special.
The tree T has different characteristics depending on whether ξ is critical (κ = 1) or subcritical (κ < 1). In the critical case T has a unique infinite spine composed of the special vertices, and the outgrowths from the normal children of the vertices on the spine are independent critical Galton-Watson trees with offspring distribution ξ. In the subcritical case the linear graph composed of the special vertices is almost surely finite, ending with a special vertex having an infinite number of normal children. It is thus a finite spine, and its length L is distributed by P(L = i) = (1 − κ)κ i for i ≥ 0. The outgrowths from the normal children of the vertices on the spine are then subcritical Galton-Watson trees with offspring distribution ξ. In the extreme case κ = 0, T has a spine of length 0 and the root has an infinite number of normal children, which have no children themselves. In this case the tree is therefore deterministic.
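A quick numerical sketch of this construction, assuming (consistently with Janson's theorem and with the identity P(ξ̂ = ∞)/π 0 = (1 − κ)/π 0 used in Section 3.5) that ξ̂ is the size-biased law with an atom at infinity, P(ξ̂ = k) = kπ k for finite k:

```python
import random

def xi_hat_law(pi, tol=1e-12):
    """Assumed law of xi_hat: P(xi_hat = k) = k * pi[k] for finite k >= 1,
    P(xi_hat = inf) = 1 - kappa, where kappa = E[xi]."""
    kappa = sum(k * p for k, p in enumerate(pi))
    law = {k: k * p for k, p in enumerate(pi) if k >= 1}
    law['inf'] = 1.0 - kappa
    assert abs(sum(law.values()) - 1.0) < tol  # it is a probability law
    return law, kappa

def spine_length(kappa, rng):
    """Length L of the finite spine in the subcritical case: each special
    vertex continues the spine with probability kappa (i.e. xi_hat is finite),
    so P(L = i) = (1 - kappa) * kappa**i."""
    L = 0
    while rng.random() < kappa:
        L += 1
    return L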
Since the UIPTree appears repeatedly in this paper it is useful to note that T ∼ UIPTree when w i = 1 for all i, in which case κ = 1. In this case π i = 2 −(i+1) , i.e. ξ is geometric with parameter 1/2.
Theorem 3.3 (Janson [16]). For any sequence (w i ) i≥0 such that w 0 > 0 and w k > 0 for some k ≥ 2, the sequence of measures (ν n ) n≥1 on Γ converges weakly towards ν (the distribution of T ) with respect to the topology generated by d Γ .
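For small n the measures ν n in Theorem 3.3 can be enumerated exactly. A minimal sketch, assuming the standard convention that the weight of a tree is the product of w out(v) over its vertices; with w i ≡ 1 every tree has the same weight, so ν n is uniform on the Catalan-many plane trees with n edges:

```python
def plane_trees(n_edges):
    """All plane trees with n_edges edges; a tree is a tuple of child subtrees."""
    if n_edges == 0:
        return [()]
    out = []
    for k in range(n_edges):                 # edges inside the root's first subtree
        for first in plane_trees(k):
            for rest in plane_trees(n_edges - 1 - k):
                out.append((first,) + rest)  # rest = remaining children of the root
    return out

def weight(tree, w):
    """Product of w[outdegree(v)] over all vertices v (standard convention)."""
    p = w[len(tree)]
    for child in tree:
        p *= weight(child, w)
    return p

def nu_n(n, w):
    """The simply generated measure: weights normalised over trees with n edges."""
    trees = plane_trees(n)
    Z = sum(weight(t, w) for t in trees)
    return {t: weight(t, w) / Z for t in trees}
```

For instance, len(plane_trees(4)) is 14, the fourth Catalan number, and nu_n(3, [1, 1, 1, 1]) assigns mass 1/5 to each of the five plane trees with three edges.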
3.4. The bijection Ψ. The mapping Ψ : Γ f → Γ f which we now describe maps the model of simply generated trees onto the unlabelled mobiles. In order to describe it we will temporarily violate our colouring convention of Section 2. Instead of colouring even generations white and odd generations black, we will now colour vertices of degree one (that is, leaves) white and all other vertices black. The mapping Ψ will then precisely map the white vertices to even generations and the black vertices to odd generations.
Start with a finite tree T ∈ Γ n having n ≥ 1 edges and colour the vertices as described above. Let v be a white vertex (leaf) and note that v appears exactly once in the contour sequence (c i ) i∈Z (up to periodicity).
and define the sequence (b i (v)) 1≤i≤η(v) . Now, construct a new tree T ′ from T by drawing an arc from each white vertex v to the corresponding black vertices in (b i (v)) 1≤i≤η(v) . Then discard the edges of T and let the arcs just drawn become the edges of T ′ . The root of T ′ is defined as the first white vertex in the right contour sequence. The left to right ordering of the children of a white vertex in T ′ is inherited from the ordering of (b i (v)) 1≤i≤η(v) . See Fig. 3 for an example.
We let the vertices of T ′ inherit the colours of the corresponding vertices in T and note that T ′ then has a white root and that every even generation is white and every odd generation is black, as claimed previously. The black vertices in T ′ have degree equal to their original outdegree, i.e. their degree is reduced by one. (This is true also for the black root if one attaches a half-edge to it, as represented in Fig. 3.) This construction defines a bijection Ψ from Γ f to itself. For the inverse construction see [17]. Define Γ̄ to be the set of trees in Γ whose right contour sequence visits infinitely many vertices of degree 1 (that is, leaves); we consider this condition to be satisfied by a finite tree, by periodicity of the contour sequence. It is straightforward to see that the measures (ν n ) n≥1 and ν are all supported on Γ̄. The function Ψ can be extended to a function Ψ : Γ̄ → Γ ⊙ by exactly the same construction as in the finite case.
Lemma 3.4. The mapping Ψ : Γ̄ → Γ ⊙ is continuous.
Proof. Let T, T 1 , T 2 , . . . ∈ Γ̄ with T n → T in the local topology, i.e. for each R ≥ 0 there is an n 0 such that T n [R] = T [R] for all n ≥ n 0 . If T is finite it follows immediately that Ψ(T n ) → Ψ(T ), hence we assume that T is infinite. First look at the case when T has an infinite spine S = (S k ) k≥0 . Denote by (S k i ) i≥0 the subsequence of vertices on the spine which have the property that their rightmost child is not on the spine (the outgrowth to the right of it is nonempty). Furthermore, for each S k i , let v i be the first white vertex following it in the right contour sequence (it is necessarily in the nonempty outgrowth to the right of S k i ).
To prove the continuity of Ψ we need to show that for any fixed R ≥ 0 the sequence Ψ(T n ) [R] is eventually constant. We choose an R ′ large enough such that T [R ′ ] contains v ⌈R/2⌉ and the vertices S 0 , S 1 , . . . , S k ⌈R/2⌉ on the spine along with their (finite) outgrowths. Then any vertex of T not in T [R ′ ] maps outside Ψ(T ) [R] , and thus Ψ(T n ) [R] = Ψ(T ) [R] for all n large enough that T n [R ′ ] = T [R ′ ] . When T has a vertex of infinite degree the proof goes along the same lines and is left to the reader.
The next result is originally from [17] and its proof follows directly from the construction of the bijection Ψ on the set of finite trees.
Lemma 3.5 ([17]). Let (w i ) i≥0 be defined as in and above (1.4) and let (ν̄ n ) n≥1 be the sequence of measures defined in (2.12). Let (ν n ) n≥1 be as in (3.9). Then for each n ≥ 1, ν̄ n is the image of ν n under the mapping Ψ.
3.5. Proof of Theorem 3.1. Let (T n ) n≥1 be a sequence of trees distributed by (ν n ) n≥1 . By Theorem 3.3, the continuity of Ψ and Lemma 3.5, the trees Ψ(T n ), which are distributed by (ν̄ n ) n≥1 , converge to Ψ(T ) in distribution. The only thing left is to show that Ψ(T ) = T̄ in distribution. In this proof we follow the colouring convention of the previous subsection, that is, vertices of degree one in T are white and the rest black.
Firstly, it is straightforward to see that Ψ(T ) has a (unique) infinite spine if and only if T has an infinite spine, and that Ψ(T ) has a (unique) vertex of infinite degree if and only if T has a vertex of infinite degree. Indeed, if T has a vertex of infinite degree then (the image of) this vertex has infinite degree also in Ψ(T ). If T has an infinite spine S 0 , S 1 , . . . then the infinite spine of Ψ(T ), call it S ′ , may be found as follows. The black vertices in S ′ are the black vertices in S whose rightmost children are not special, and their order in S ′ is inherited from their order in S. The white vertex in S ′ preceding a given black vertex v ′ of S ′ is the first white vertex in the right contour sequence of T which appears after the first occurrence of the black vertex in T corresponding to v ′ .
We start by checking that the black vertices have the correct outdegree distribution in Ψ(T ), then that Ψ(T ) has the independence structure of a (modified) Galton-Watson tree, and finally that the white vertices have the right outdegree distribution. We will divide the black vertices in T into three categories: (1) Normal black vertices.
(2) Special black vertices which have a special child as the rightmost child. (3) Special black vertices which do not have a special child as the rightmost child. The vertices belonging to (1) and (2) correspond exactly to the normal black vertices in Ψ(T ), and the vertices in (3) correspond to the special black vertices. Indeed, a vertex of type (1) has outdegree in T taking value i ≥ 1 with probability π i /(1 − π 0 ), and the probability that a vertex of type (2) has outdegree i ≥ 1 in T equals the conditional probability that a special vertex in T has i children given that the rightmost child is special, which is again π i /(1 − π 0 ). Since the mapping Ψ reduces the degree of black vertices by 1 we see that the outdegree of the black vertices in Ψ(T ) corresponding to (1) and (2) takes value i ≥ 0 with probability π i+1 /(1 − π 0 ), in agreement with the distribution of ξ • . The probability that a vertex of type (3) has outdegree 1 ≤ i < ∞ in T equals the conditional probability that a special vertex in T has i children given that the rightmost child is not special, which is (i − 1)π i /π 0 . Similarly, vertices of type (3) have infinite degree with probability P(ξ̂ = ∞)/π 0 = (1 − κ)/π 0 . Again, by shifting by one we find that this agrees with the distribution of ξ̂ • . We now consider the white vertices. For a white vertex v in T we recall the definitions of η(v) and (b i (v)) 1≤i≤η(v) from Section 3.4; we will suppress the argument v in the following for easier notation. The white vertex in Ψ(T ) corresponding to v in T will be denoted by v ′ . If v ′ = ∅ then its offspring (in Ψ(T )) correspond exactly to the black vertices b 1 , . . . , b η in T , whereas if v ′ ≠ ∅ then its offspring correspond to b 2 , . . . , b η , with b 1 corresponding to the parent of v ′ . Conditioning on the number of offspring of a white vertex v ′ ∈ Ψ(T ) thus corresponds to conditioning on the length of a 'rightmost' ancestry path in T .
From this it is easy to see that the children of v ′ have independent numbers of offspring in Ψ(T ), and furthermore that the same holds for the white vertices forming the following generation in Ψ(T ). This implies that Ψ(T ) has the correct independence structure.
It remains to check that the white vertices have the correct offspring distribution. Starting with the root ∅, its offspring in Ψ(T ), ordered from left to right, consist of: firstly, some number i ≥ 0 of black vertices of type (2) above; next, one vertex of type (3); and finally some number j ≥ 0 of vertices of type (1). The number of offspring of ∅ is then k = i + j + 1, and this occurs with probability (1 − π 0 ) i π 0 · (1 − π 0 ) j π 0 = π 0 2 (1 − π 0 ) k−1 . Here the first factors (1 − π 0 ) i π 0 are due to the occurrence, in the sequence (b i ) 1≤i≤η , of i special vertices each of whose rightmost child is not special, followed by one special vertex whose rightmost child is special. The remaining factors (1 − π 0 ) j π 0 are due to the occurrence of j normal vertices with at least one offspring each, followed by one with no offspring. Summing over the possible values of i gives the probability kπ 0 2 (1 − π 0 ) k−1 of ∅ having k offspring, in agreement with the distribution of ξ̂ ◦ . Note that, given the outdegree (number of offspring) of ∅, the black child of type (3) is uniformly distributed.
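The summation over i in the last step is elementary and can be checked numerically; the sketch below takes as its input the per-configuration probability (1 − π 0 ) i π 0 (1 − π 0 ) j π 0 with j = k − 1 − i, as described in the text:

```python
def root_offspring_prob(k, pi0):
    """P(root has k offspring): sum over i = 0, ..., k-1 of
    (1 - pi0)^i * pi0 * (1 - pi0)^(k - 1 - i) * pi0."""
    return sum((1 - pi0) ** i * pi0 * (1 - pi0) ** (k - 1 - i) * pi0
               for i in range(k))
```

The k summands are all equal, so the sum is k π 0 2 (1 − π 0 ) k−1 , and these probabilities add up to one over k ≥ 1 (a size-biased geometric law).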
Having dealt with the root of Ψ(T ), the remaining white vertices v in T are divided into two categories: (1) either η = 1, or η > 1 and b 2 is normal; (2) η > 1 and b 2 is special. White vertices v in category (1) correspond exactly to the normal white vertices in Ψ(T ). In this case, each black vertex in (b i ) 2≤i≤η is normal and has at least one child in T , whereas v has no child in T . Thus, by (3.13) the outdegree of the white vertex v ′ in Ψ(T ) satisfies P(outdegree of v ′ = k) = (1 − π 0 ) k π 0 for k ≥ 0, agreeing with the distribution of ξ ◦ . Case (2) is handled in the same way as the case v ′ = ∅, showing that the outdegree in Ψ(T ) is distributed as ξ̂ ◦ . Thus we have shown that Ψ(T ) = T̄ in distribution.
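The distributional bookkeeping for the type (3) vertices in the proof can likewise be sanity-checked; this sketch assumes the conditional law of a type (3) vertex is (i − 1)π i /π 0 for finite i ≥ 1 with an atom (1 − κ)/π 0 at infinity (consistent with the quantity P(ξ̂ = ∞)/π 0 appearing above), and verifies that the total mass is one, using Σ i≥1 (i − 1)π i = κ − (1 − π 0 ):

```python
def type3_law(pi):
    """Assumed conditional outdegree law of a special black vertex given that
    its rightmost child is not special: (i - 1) * pi[i] / pi[0] for finite i,
    and (1 - kappa) / pi[0] at infinity."""
    pi0 = pi[0]
    kappa = sum(k * p for k, p in enumerate(pi))
    law = {i: (i - 1) * pi[i] / pi0 for i in range(1, len(pi))}
    law['inf'] = (1.0 - kappa) / pi0
    return law

pi = [0.5, 0.25, 0.15, 0.10]   # a subcritical example with kappa = 0.85
law = type3_law(pi)
```

Here the finite part contributes 0.7 and the atom at infinity 0.3, so the law indeed normalises.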

4. Recurrence
In this section we prove Theorem 1.2. As mentioned previously, we will rely on a general result established in [13], which we begin by describing. Suppose (G n ) n≥1 is a sequence of finite graphs, and that in each graph G n a root vertex o n is singled out. One may define a local limit of such a sequence of rooted graphs (G n , o n ) in much the same way as in Section 1.1: (G n , o n ) converges locally to (G, o) if for each r, the graph ball of (G n , o n ) centered at o n with radius r eventually equals the corresponding graph ball of (G, o). Now suppose that each (G n , o n ) is a random, planar graph, viewed up to isomorphism of rooted graphs. We say that the root o n has the stationary distribution if, given G n , the probability that o n is some fixed vertex v of G n is proportional to the degree of v. Building on results of Benjamini and Schramm [6], who considered the case when the maximum degree in G n is uniformly bounded, Gurel-Gurevich and Nachmias proved the following.
Theorem 4.1 ([13]). Let (G n , o n ) be a sequence of finite, random planar graphs such that o n has the stationary distribution for each n, and such that (G n , o n ) converges weakly to (G, o) in the local topology. If the degree distribution of o in G has an exponential tail, then G is almost surely recurrent.
In applying this result to our situation, we take the root vertex o n to be the origin e − of the root edge e. There are two main steps to applying Theorem 4.1: firstly, proving that e − has the stationary distribution under each µ n ; and secondly, proving that the degree of e − has an exponential tail under µ.
The claim that e − has the stationary distribution is equivalent to the statement that the directed edge e is chosen uniformly among all directed edges of m. By a simple calculation, this follows from the fact that the probability assigned by µ n to a rooted map does not depend on the choice of root edge. To prove Theorem 1.2 it therefore suffices to show that the degree of e − has an exponential tail under µ.
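The simple calculation amounts to the observation that each vertex v is the origin of exactly deg(v) directed edges; a toy numerical check, on an arbitrary small graph chosen for illustration:

```python
from fractions import Fraction
from collections import Counter

edges = [(0, 1), (1, 2), (2, 3), (3, 1), (3, 4)]  # arbitrary small planar graph

# every undirected edge contributes two directed edges
directed = [e for u, v in edges for e in ((u, v), (v, u))]
origin_counts = Counter(u for u, _ in directed)

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# picking a uniform directed edge and keeping its origin is degree-biased
for v in deg:
    assert Fraction(origin_counts[v], len(directed)) == Fraction(deg[v], 2 * len(edges))
```

So a uniformly chosen directed root edge gives the root vertex the stationary (degree-biased) distribution, exactly as Theorem 4.1 requires.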

4.1. Bound on the degrees in M . Recall from Sections 3.1-3.2 the infinite mobile ϑ = ((T̄, ℓ), ε) which (via the BDG bijection) defines the map M . The tree T̄ consists of a spine of special black and white vertices, with finite normal trees attached on the left and right sides. In this section we describe a slightly different way of constructing ϑ which will let us deduce bounds on the degrees of the vertices in M .
Recall the random variables ξ • and ξ ◦ of Section 3.1; they have the distribution of the outdegree of normal black and white vertices in T̄, respectively. We will assume in this section that π 0 < 1, or equivalently that κ > 0. When π 0 = 1, M is the UIPTree and its degree distribution (and the fact that it is recurrent) is well known. It is essential for our argument that ξ ◦ is geometrically distributed: P(ξ ◦ = i) = π 0 (1 − π 0 ) i for i ≥ 0. Also recall that for each normal black vertex v of T̄ with outdegree r ≥ 1, the clockwise label increments around v form a discrete bridge (X (r) 1 , . . . , X (r) r+1 ) with law described just below (2.14). We shall be particularly interested in the event that X (r) r+1 ≥ 1; that is to say, the last increment in the clockwise order is 1 or more, or equivalently the first increment in the anticlockwise order is −1 or less. For reasons that will become clear soon, we define
p = ∑ r≥1 P(ξ • = r) P(X (r) r+1 ≥ 1).
The sum equals the probability that a normal black vertex has outdegree at least 1 and that the last clockwise increment is at least 1. Let ζ, ζ 1 , ζ 2 , . . . be independent random variables, having the geometric distribution P(ζ = k) = p(1 − p) k−1 for k ≥ 1. Also let p ′ be the probability of the corresponding event for the exploration in the clockwise direction, and let ζ ′ be independent of the ζ i with geometric distribution P(ζ ′ = k) = p ′ (1 − p ′ ) k−1 for k ≥ 1. Since π 0 < 1 and since we always assume that q i > 0 for some i ≥ 2, we have p > 0 and p ′ > 0.
Finally recall the concept of stochastic domination: a random variable X is stochastically dominated by a random variable Y if there is a coupling P of X and Y such that P(X ≤ Y ) = 1. Let ξ ◦ 1 , ξ ◦ 2 be independent copies of ξ ◦ , independent also of the ζ i and ζ ′ . This section is devoted to the following result.
Theorem 4.2. The degree of e − in M is stochastically dominated by
ζ + ζ ′ + 1 + ∑ 1≤j≤ξ◦1+ξ◦2+1 (1 + ζ j ). (4.1)
Theorem 4.2 immediately gives that the degree of the root in M has exponential tails, and as explained above, Theorem 1.2 therefore follows once we prove Theorem 4.2.
Proof. Recall that the root e − of M is either the same vertex as the root ∅ of ϑ (if ε = −1) or it is the successor of ∅ (if ε = +1). We will show that in the first case the degree is bounded by the smaller number
∑ 1≤j≤ξ◦1+ξ◦2+2 (1 + ζ j ), (4.2)
and in the second case by (4.1).
We begin with the case ǫ = −1 when e − = ∅. Our argument (and bound on the degree) in this case actually applies slightly more generally, to any vertex in M which corresponds to a vertex 'in a fixed position in ϑ'.
We will describe what we mean by this below. The argument relies on a construction of ϑ which proceeds progressively through the counterclockwise contour sequence (see Section 2.2). We start by defining a 'template' ϑ(0): the mobile obtained by sampling the spine and all white vertices adjacent to the spine, together with the labels of all these white vertices (subject to ∅ having label 0, say). For convenience we slightly modify ϑ(0) by placing a 'half-edge' at ∅, which allows us to distinguish between the 'left' and 'right' sides of ∅. We denote the white contour sequence of ϑ(0) by (c • i (0)) i∈Z . This is defined as in Section 2.2, except that (due to the half-edge) ∅ is repeated one extra time.
Here is a summary of the main idea; details will follow. Let ε 1 , ε 2 , . . . be independent Bernoulli variables taking value 1 with probability 1 − π 0 , and note that ξ ◦ + 1 has the law of the smallest k such that ε k = 0. The procedure starts at some white vertex w, and each time a white vertex, say v, is visited the next value ε i is examined to determine whether v has 'another' black child. If so, the number of white children of this new black vertex is sampled (along with their labels) and we proceed to the next white vertex in the counterclockwise contour order. This procedure will create the part of ϑ which lies after the initial vertex w in the counterclockwise contour sequence, and will therefore let us examine the number of white vertices in ϑ which have w as their successor. Here is a more detailed description; see also Fig. 4 for an illustration. Let ξ • 1 , ξ • 2 , . . . be independent, distributed as ξ • . We take all the ζ ′ , ζ i , ε i and ξ • i to be independent of each other. We now describe the construction starting at a white vertex w of ϑ(0). We start our construction at the last visit of the (clockwise) contour sequence to w; that is, the largest i ∈ Z such that c • i (0) = w. By shifting the indices, we may (and will) assume that this index is i = 0. We now examine ε 1 . If ε 1 = 1 we do the following. First attach a black vertex to c • 0 (0). Then examine the value of ξ • 1 ; if ξ • 1 = r ≥ 1 we attach to this black vertex r further white vertices. Finally we sample the labels of the new white vertices by sampling an independent copy of the discrete bridge (X (r) 1 , . . . , X (r) r+1 ). We denote the mobile thus obtained by ϑ(1). If, on the other hand, ε 1 = 0 then we just let ϑ(1) = ϑ(0). We let (c • i (1)) i∈Z denote the white contour sequence of ϑ(1), indexed so that c • i (1) = c • i (0) for all i ≥ 0. Thus the new white vertices are placed immediately counterclockwise (in the contour sequence) from our starting point c • 0 (0).
Proceeding in the same way at the next white vertex c • −1 (1) in the counterclockwise contour order, examining ε 2 and, if ε 2 = 1, attaching a black vertex with ξ • 2 white children and sampling their labels, we obtain a new mobile ϑ(2) and a new white contour sequence (c • i (2)) i∈Z , which we index so that c • i (2) = c • i (1) for all i ≥ −1. The procedure is then carried out inductively. In this way we obtain (in the limit) a mobile whose counterclockwise contour sequence, started at w, agrees in distribution with that of ϑ. We now explain how this construction, started at the white vertex w, allows us to bound the degree of w in M . By the BDG bijection, the neighbours of w in M are either successors of w, or have w as a successor. The successors of w are easy to count in the procedure above: each new visit to w corresponds to exactly one successor. For a normal white vertex w the number of visits to w has the law of 1 + ξ ◦ , while for a special white vertex the number of visits is the sum of two independent copies of 1 + ξ ◦ (the first corresponding to the visits to w on the 'left side' of the spine, and the second to the visits on the 'right side'). Thus, the successors of w account for the '1+' in the summand of (4.2).
It remains to count the number of times w appears as the successor of other white vertices. If v has w as a successor we will call v a predecessor of w, and we count the predecessors of w with multiplicity. By shifting the labels we may assume that w has label 0, and hence the predecessors all have label 1. We group the predecessors v of w by how many times we have visited w before we visit v. (This is well-defined as we will never visit w between visits to v.) The numbers of predecessors in the various groups are independent, and we claim that the number of predecessors in each group is stochastically bounded by ζ. This will establish (4.2).
In order to establish the claim we define a certain 'stopping event' A. Suppose at stage i ≥ 2 in the construction we visit a white vertex v with label 1. Thus v is a potential predecessor of w. Let A i be the event that (i) ε i = 1, (ii) ξ • i = r ≥ 1, and (iii) X (r) r+1 ≥ 1. If this event occurs, then the next vertex visited in the construction is a recently added vertex with label 0 or less. Thus, until we visit w again, any white vertex we visit which has label 1 cannot have w as successor: there must be a vertex with label 0 occurring before v in the (clockwise) contour sequence. The number of 'attempts' before an event A i occurs has the distribution of ζ (although in general we may get fewer predecessors in a group since we may return to w before there is a successful 'attempt'). This proves that the bound (4.2) applies to any white vertex w in the 'template' ϑ(0), in particular to ∅.
Before proceeding to the case ǫ = +1 we note that, although we have assumed that w is a white vertex on or adjacent to the spine, it is clear that each time a white vertex is visited for the first time in the construction above, the exact same procedure starts afresh at that vertex. Thus we may speak of starting the construction at an arbitrary white vertex w of ϑ. Such a vertex is what was referred to above as a vertex in a 'fixed position in ϑ', and the bound (4.2) applies to the degree of any such vertex. We will not describe formally what we mean by 'fixed position', but essentially the construction above may be translated into a deterministic coordinate system in which each white vertex w has a fixed coordinate. If ǫ = +1 the root has a random position in this coordinate system and we require an additional argument to bound its degree, which we now describe.
We now show that the degree of e − is bounded by (4.1) when ǫ = +1, that is when e − is the successor of ∅. Our description for this case will be slightly less detailed. Again we start with the template ϑ(0). Denote the half-edge attached to ∅ by h, and note that the predecessors of e − fall into the following three categories depending on their location in ϑ: (1) those between h and the first occurrence of e − in the clockwise contour sequence, (2) those between h and the first occurrence of a vertex labelled −1 in the counterclockwise contour sequence, and (3) those which are descendants of e − in ϑ. (The vertex labelled −1 in case (2) may be e − itself if it is a special vertex of ϑ.) To bound the number of predecessors in category (2) we may apply a similar scheme as above, constructing the part of ϑ counterclockwise from h until the stopping event A occurs. That is, each time we encounter a white vertex v labelled 0 we add this to the list of possible predecessors of e − , stopping if v has a black child u of outdegree r ≥ 1 and the first child of u in the counterclockwise order has label −1 or less. Thus category (2) has size dominated by ζ.
To bound category (1) we apply a similar scheme, but proceed in the clockwise order starting at h. Each time we visit a white vertex v it has 'another' black child u with probability 1 − π 0 , in which case we sample the outdegree r of u and then the label increments (X (r) 1 , . . . , X (r) r+1 ) around u, stopping if X (r) 1 = −1. Again, the number of white vertices labelled 0 that we encounter before stopping is geometrically distributed, independent of our bound in case (2), but this time with parameter p ′ . Note that we do not stop before encountering e − in the clockwise contour sequence, hence the number in category (1) is dominated by ζ ′ .
Having thus, in the course of case (1), located e − , we note that some of the predecessors in category (3) may have already been counted in our bounds for cases (1) and (2). However, we obtain an upper bound on category (3) if we assume this not to be the case. We may apply a similar argument as when e − = ∅ to deduce that the number in category (3) is dominated by ∑ 1≤j≤ξ◦1+ξ◦2+1 ζ j . (Here the number of groups to be considered is dominated by ξ ◦ 1 + ξ ◦ 2 + 1 since one group was already covered by cases (1)-(2).) Finally, taking into account that e − has ξ ◦ 1 + ξ ◦ 2 + 2 successors, we arrive at the bound (4.1).
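The exponential tail delivered by Theorem 4.2 can be illustrated with an exact computation. A sketch under assumptions: take the ζ j iid geometric(p) on {1, 2, . . .} and the number of summands N = 2 + G 1 + G 2 with G i iid geometric(π 0 ) on {0, 1, . . .} (a stand-in for the geometric laws appearing in the bound); the law of S = Σ j≤N (1 + ζ j ) then follows from the negative binomial formula, and its probabilities decay geometrically:

```python
from math import comb

def law_of_S(k_max, p, pi0):
    """Exact law of S = sum_{j=1}^N (1 + zeta_j) on {0, ..., k_max}, where the
    zeta_j are iid geometric(p) on {1, 2, ...} and N = 2 + G1 + G2 with the
    G_i iid geometric(pi0) on {0, 1, ...} (an illustrative stand-in for the
    dominating variable of Theorem 4.2)."""
    def P_N(m):
        g = m - 2  # P(G1 + G2 = g) = (g + 1) * pi0^2 * (1 - pi0)^g
        return (g + 1) * pi0**2 * (1 - pi0)**g if g >= 0 else 0.0
    law = [0.0] * (k_max + 1)
    for k in range(4, k_max + 1):      # each summand is >= 2 and N >= 2, so S >= 4
        for m in range(2, k // 2 + 1):
            s = k - m                  # value of zeta_1 + ... + zeta_m
            # negative binomial: P(sum of m geometrics = s) = C(s-1, m-1) p^m (1-p)^(s-m)
            law[k] += P_N(m) * comb(s - 1, m - 1) * p**m * (1 - p)**(s - m)
    return law
```

For p = π 0 = 1/2 the probabilities are eventually decreasing at a fixed geometric rate, which is exactly the exponential tail needed for Theorem 4.1.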

5. The spectral dimension when κ < 1
In this section we focus on the case κ < 1, i.e. when M has a face of infinite degree. Let ϑ = (T̄, ℓ) be the mobile with distribution μ̄, and M = Φ(ϑ, ε) the corresponding map. (See Sections 3.1-3.2 for the definition of ϑ and Section 2.3 for Φ.) By Theorem 1.1, the law of M is µ. Denote the graph metric of M by d. The black vertex of infinite degree in ϑ will be denoted by s as before. We will use recent results of Kumagai and Misumi [24] which allow us to calculate the spectral dimension of M and thereby prove Theorem 1.3. Their methods involve establishing suitable bounds on resistance and volume growth. Shortly we will give the necessary definitions and state the results we need from [24], but here is a rough outline of the argument.
Intuitively, our arguments rely on showing that the resistance and volume growth in M are governed by s, in the sense that if we truncate ϑ by removing everything except s and its white nearest neighbours, then the volume and resistance growth are largely unaffected. The map associated via the BDG bijection with the truncated mobile is the UIPTree (cf. Theorem 1.1) so the spectral dimension of M should be that of the UIPTree, which is 4/3. For our argument to work, apart from the main assumption E(ξ) = κ < 1 we also need the technical assumption that there exists a β > 5/2 such that E(ξ β ) < ∞. We will assume these properties to hold in the remainder of this section unless otherwise stated.
We begin by discussing some results that allow us to make precise the idea that the vertex of infinite degree 'dominates' the structure of the mobile ϑ.

5.1. Decorations. Recall that the neighbours of s in ϑ are denoted (s i ) i∈Z , and ordered as in Section 2.1. Thus s 0 is the parent of s, and hence also the last vertex on the finite spine of ϑ. For i ≠ 0, the vertex s i is the root of a subcritical (modified) Galton-Watson tree consisting of the descendants of s i ; we call these trees decorations and denote them by D i . Also s 0 may be viewed as the root of a tree, consisting of all vertices which are not descendants of s. This tree consists of a spine of geometrically distributed length with subcritical Galton-Watson trees attached to it; see Section 3.1. We denote this tree by D 0 and call it the bad decoration, since it is considerably larger than the other decorations.
Denote the number of vertices in decoration i by |D i | = |V (D i )|. The first lemma explains how the moment condition on ξ provides corresponding moment conditions on |D i |. A proof is given in the Appendix.
The next lemma is from [17, Lemma 2.1] and relates the maximum displacement of labels in a decoration to the number of vertices it contains.
Lemma 5.2 ([17, Lemma 2.1]). For any r > 0 there is a constant C(r) such that
For easier notation we will sometimes write ∆ℓ i = ∆ℓ(D i ).

5.2. Resistance and volume. As mentioned above, we will prove Theorem 1.3 by proving bounds on volume and resistance growth in M and then appealing to the results of [24]. We first recall the basic definitions of electrical networks; see e.g. [29] for more details. Let G = (V, E) be a locally finite graph, with vertex set V and edge set E. Eventually we will consider G = M . A resistance is a function r : E → (0, ∞), and the associated conductances are given by c e = 1/r e for e ∈ E. In the case when all c e = 1 we also write R eff G for the effective resistance. We write R eff G ({x}, {y}) = R eff G (x, y) etc. Let ϑ ⋆ be the truncated mobile obtained from ϑ by throwing away all vertices except s and (s i ) i∈Z , keeping the labels of these vertices. Let M ⋆ = Φ(ϑ ⋆ , ε) and note that M ⋆ is the UIPTree. The directed root edge in M ⋆ obtained by the BDG construction will be denoted by (r, r 1 ), and we will use the convention that r is the root of M ⋆ and that r 1 is the leftmost child of r. The infinite spine in M ⋆ will be denoted by S ⋆ and the vertex in S ⋆ at distance i from r will be denoted by S ⋆ i . Denote the graph metric on M ⋆ by d ⋆ . Note that in the case κ = 0 we have M = M ⋆ almost surely.
The results of [24] are formulated in terms of an arbitrary metric on the vertex set of the graph under consideration (not necessarily the graph metric). Recall that we identify V (M ) with V • (T ). We will be using the metric d # on V • (T ) defined by where δ a,b is the Kronecker delta. Denote by M # the graph whose vertex set is V • (T ) and with edges between vertices at d # -distance one. It is clear that M # is a tree and that it contains M ⋆ as a subtree. However, M ⋆ is not in general a subgraph of M . In the following, we will take r to be the reference vertex in M , M ⋆ and M # when referring to graph balls and resistance (r corresponds to the vertex 0 defined in [24, above Eq. (1.3)]).
For v ∈ V • (T ), let ω(v) be the number of edges adjacent to v in M and extend ω to a measure on V • (T ). Define B(R; d # ) to be the ball of radius R around r in the metric d # and write ω(R) for ω(B(R; d # )). We will be using the following result from [24]. For each λ > 1 define the random set J(λ) by the four conditions below; by [24, Theorem 1.5], if there are λ 0 > 1 and c, q > 0 such that the estimate (5.8) holds, then the conclusions of that theorem apply. Here is an outline of the rest of this section. To prove (5.8) we will treat each of the four inequalities defining J(λ) separately, in a sequence of lemmas. Bounds on the volume ω(R) will be treated in Section 5.3 (Lemmas 5.5 and 5.6). In Section 5.4 we establish an upper bound on the probability that ∃y ∈ B(R; d # ) : R eff M (r, y) > λd # (r, y) (Lemma 5.8). Finally, in Section 5.5 we deal with the hardest part of the argument, which is to establish an upper bound on the probability that R eff M (r, B(R; d # ) c ) < λ −1 R (Lemma 5.9). For this result we will use a technique of 'projecting long bonds', inspired by methods previously applied to long-range percolation models in [24, Proposition 2.1] and [5, Lemma 3.8].

5.3. Bounds on the volume. We begin with the upper bound on the volume. We will need two preliminary lemmas. First we state the following result on the labels in ϑ ⋆ , which will also be used in the bounds on resistance growth. A proof is given in the Appendix.
Lemma 5.3. There is a constant C > 0 such that for all R, λ ≥ 1 we have
The second lemma (Lemma 5.4 below) bounds the minimum label in B(R; d # ).
and therefore, for any $\gamma > 0$, we may proceed as follows. In the first term in the last expression we treat separately the contribution of the bad decoration and the sum of the others: for any $0 < \delta < 1$ the first term may be bounded by applying Markov's inequality to both terms, along with $R^{2-2r} \le 1$. By Lemmas 5.1 and 5.2, both expected values in the last expression are finite, since $E(\xi^r) < \infty$. Applying Lemma 5.3 to the second term on the right-hand side of (5.15), we finally obtain a bound in which $c_1, c_2, c_3 > 0$ are constants. The choice $\gamma = \lambda^{4r/3}$ optimizes the inequality.
Finally, we arrive at the upper bound on the volume.
Proof. We argue on the event $\epsilon = -1$, in which case $\ell(r) = 0$. The other case is treated in a similar way, the difference being that $\ell(r) = -1$, so labels have to be shifted accordingly in the arguments. Define the set $H(R)$ as follows. We claim that a vertex in the set $B(R; d_\#)$ is not connected by an edge in $M$ to any vertex outside the set $H(R)$. To see this, first consider the successor of a vertex in $B(R; d_\#)$ as defined in (2.8). Since $i_+$ and $i_-$ are increasing functions, for any $\eta > 0$ the first term may be estimated from above by an expression with constants $c_1, c_2 > 0$ and $\alpha = \min\{2r-2, 2r/3\}$. In the last step, the estimate of the first term was obtained using Lemma 5.3, and the second estimate follows from Lemma 5.4. The second term on the right-hand side of (5.22) is estimated by separating the bad decoration from the rest, as in the proof of Lemma 5.4: for any $0 < \delta < 1$ we obtain a bound using Markov's inequality and then Minkowski's inequality. By Lemma 5.1 both expected values in the last expression are finite since $E(\xi^r) < \infty$. Thus we finally have a bound with constants $c_3, c_4 > 0$. Choosing $\eta$ as a suitable positive power of $\gamma$, and $\gamma$ as a suitable positive power of $\lambda$, yields the optimal exponent $\alpha'$.
The following result gives the lower bound on the volume.
Lemma 5.6. There exist a $\lambda_0 > 1$ and a constant $c > 0$ such that the stated inequality holds.

We are now equipped to establish an upper bound on the resistance, provided in the following lemma.
Proof. First observe that $d(r, v) \ge R_{\mathrm{eff}}(r, v)$ for all $v \in V(M)$. We may rule out the case that the vertex $v$ in (5.29) equals $r$. Using these facts together with Lemma 5.7, one can estimate the probability in (5.29) from above. Since $B(R; d_\#)$ is finite, the outermost maximum is attained at some vertex, say $\hat v(R)$ (which may clearly be chosen to be in $M^\star$). Then, for any $\gamma > 0$, we may estimate (5.30) from above. The expression in the first line of (5.31) may be estimated from above by first bounding the maximum in the numerator by the sum of all the terms and then separating the bad decoration (the first term in the sum) from the rest, exactly as was done in the estimate of (5.15) in the proof of Lemma 5.4. Conditioning on $d^\star(r, \hat v(R))$ and using Lemma 5.2, along with arguments similar to those of Lemma 5.4, yields an upper bound on the first term with constants $c_1, c_2 > 0$. We may use Lemma 5.3 to estimate the expression in the second line of (5.31). In the case $\epsilon = -1$, writing $d^\star(r, \langle r, \hat v(R)\rangle) = D$, one has (5.33) and $\max\{d^\star(r, \hat v(R)), 1\} \ge D$. This yields the upper bound (5.34), where $c_3 > 0$ is a constant. The case $\epsilon = 1$ is treated in the same way, taking into account that labels are shifted. Combining the estimates (5.32) and (5.34) and choosing $\gamma = \lambda^{4r/3}$ gives an optimal upper bound on (5.31) and yields the exponent $\alpha$.

Lemma 5.9. Let $\beta > 5/2$ and assume $E(\xi^\beta) < \infty$. For $0 < q < \min(1, 2\beta - 5)$ there exist a constant $c(q) > 0$ and a $\lambda_0 > 1$ such that the stated bound holds for every $R \ge 1$ and $\lambda \ge \lambda_0$.
To prove this lemma we compare resistances in $M$ with resistances in the tree $M_\#$ equipped with certain non-constant conductances. In this section we write $e = uv \in M$ if $e$ is an edge of $M$ with endpoints $u$ and $v$. Define the conductances $c$ as in (5.36). To verify (5.37), note that the network $(M_\#, c)$ can be obtained by modifying $M$ according to the following procedure. Let $e = uv$ be an arbitrary edge of $M$ and, for convenience, assume that $e$ is directed from $u$ to $v$. If $u^\star = v^\star$ we 'short' $e$ by identifying $u$ and $v$. Otherwise, subdivide $e$ into $|e|^\star$ series resistors, each of resistance $1/|e|^\star$, so that the total resistance is still 1.
Then 'short' this network by identifying the origin of the first resistor with $u^\star$ (necessary only if $u \neq u^\star$) and by identifying the endpoint of the $j$th resistor with the endpoint of the $j$th step on the geodesic from $u^\star$ to $v^\star$ in $M^\star$ (the last identification is only necessary if $v \neq v^\star$); see Fig. 5. By the series and parallel laws this gives the conductance (5.36); by the 'shorting law' we obtain (5.37).
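As a sanity check on the series, parallel and shorting laws used above, effective resistances can be computed numerically from the weighted graph Laplacian. The following sketch (using an arbitrary small test network, not one arising from the construction in the paper) verifies that subdividing an edge into $k$ series resistors of resistance $1/k$ leaves the effective resistance unchanged, and that shorting two vertices can only decrease it.

```python
import numpy as np

def effective_resistance(n, edges, u, v):
    """Effective resistance between u and v; edges = [(a, b, conductance)]."""
    L = np.zeros((n, n))
    for a, b, c in edges:
        L[a, a] += c; L[b, b] += c
        L[a, b] -= c; L[b, a] -= c
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

# A 6-cycle of unit resistors: two parallel paths of length 3 between
# opposite vertices, so R_eff(0, 3) = (3 * 3) / (3 + 3) = 1.5.
cycle = [(i, (i + 1) % 6, 1.0) for i in range(6)]
r_cycle = effective_resistance(6, cycle, 0, 3)

# Subdividing the edge (0, 1) into 4 series resistors of resistance 1/4
# (conductance 4) keeps that edge's total resistance equal to 1, hence
# R_eff(0, 3) is unchanged.  New internal vertices are 6, 7, 8.
subdivided = [e for e in cycle if e[:2] != (0, 1)]
path = [0, 6, 7, 8, 1]
subdivided += [(path[i], path[i + 1], 4.0) for i in range(4)]
r_subdivided = effective_resistance(9, subdivided, 0, 3)

# 'Shorting' vertices 1 and 2 (gluing them into one vertex) can only
# decrease the effective resistance, by the shorting law: the paths
# 0-1-3 and 0-5-4-3 in parallel give R_eff(0, 3) = 2*3/(2+3) = 1.2.
shorted = [(a if a != 2 else 1, b if b != 2 else 1, c) for a, b, c in cycle]
r_shorted = effective_resistance(6, shorted, 0, 3)
```

The self-loop created by the shorting contributes zero to the Laplacian, so it may be left in the edge list.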
By (5.37) we have (5.38); the last inequality holds because the boundary of $B(R-1; d^\star)$ separates $r$ from the boundary of $B(R; d_\#)$. This estimate is not useful when $R = 1$, but that case is easily treated separately by noting that $B(1; d_\#)$ consists of $S^\star_1$ and the vertices in the decoration $D(r)$. In what follows we consider resistances of the form $R_{\mathrm{eff}}^{(M^\star, c)}(r, B(R; d^\star)^c)$. Our objective is to show that for $0 < q < \min(1, 2\beta - 5)$ there is a constant $c > 0$ such that (5.39) holds. This will prove Lemma 5.9, since (5.40) then follows by Markov's inequality. We start by finding a vertex $z_R$ in $M^\star$ which separates $r$ from the boundary of $B(R; d^\star)$. Recall that $S^\star$ is the spine in $M^\star$. Denote by $W^\star_i$ the subgraph of $M^\star$ consisting of $S^\star_i$ and the collection of finite outgrowths from the normal children of $S^\star_i$. Thus $W^\star_i$ is a tree, and we denote the collection of vertices in generation $n$ of $W^\star_i$ by $W^\star_i(n)$ (where $W^\star_i(0) = \{S^\star_i\}$). Let $z_R = S^\star_\ell$, where $0 \le \ell \le R/2$ is chosen maximal such that for all $j < \ell$ we have $W^\star_j(R - j) = \emptyset$ (i.e., the tree $W^\star_j$ does not reach level $R$ in $M^\star$). Note that every path from $r$ to level $R$ in $M^\star$ goes through $z_R$.
Define $L_R = d^\star(r, z_R)$ (thus $L_R = \ell$ in the above). The distribution of $L_R$ is easy to compute. Recall the construction of $T$ in Section 3.3, and that $T \sim M^\star$ when $w_i = 1$ for all $i$. Let $(Y_n)_{n \ge 0}$ denote a random sequence with the same distribution as $(|W^\star_i(n)|)_{n \ge 0}$. Thus $Y_n$ is the size of the $n$th generation in the modified Galton–Watson process defined as follows: $Y_0 = 1$, the zeroth-generation offspring distribution is as displayed, and for all later generations the offspring distribution is $(\pi_i)_{i \ge 0}$. We collect a few facts about $(Y_n)_{n \ge 0}$ in the Appendix. Using (6.17), we obtain an expression for $P(L_R = k)$ for $k < \lfloor R/2 \rfloor$. Turning now to (5.39), we begin by considering the case $L_R > 0$, for which the series law yields a lower bound on the resistance. Let $q \in (0,1)$; using Hölder's inequality, then subadditivity, and then conditioning on $L_R$, we obtain (5.45). On the event $L_R > 0$ we have the displayed identity, in which the relevant term denotes the number of edges $e = uv \in A_k$ such that $|e|^\star = d^\star(u^\star, v^\star) = n$. We aim to show that (still on the event $L_R > 0$) the sum in (5.45) is bounded by a constant. It is easy to check, using (5.41) and (5.42), that $E[L_R^{-q} 1_{\{L_R > 0\}}]$ is of order $R^{-q}$ for $q \in (0,1)$. Thus finiteness of (5.45) will give (5.39) in the case $L_R > 0$. Now we turn to the case $L_R = 0$. Writing $x \sim y$ if $x, y$ are adjacent vertices in $M^\star$, we can use the simple fact displayed, in which the relevant term denotes the number of edges $e = uv \in A_{rx}$ such that $|e|^\star = d^\star(u^\star, v^\star) = n$. Since $P(L_R = 0)$ is of order $R^{-1}$, this will establish (5.39) in the case $L_R = 0$, provided we show that the sum is bounded by a constant. We thus need bounds on $A_{rx}$. The strategy is to bound, firstly, the number of edges in $M$ from the decoration $D_i$ to the decoration $D_j$ and, secondly, the number of pairs $i$ and $j$ such that a given edge lies on the path from $s_i$ to $s_j$ in $M^\star$. This is the content of the following two lemmas.
For the first lemma we recall that, for $x, y \in M^\star$, $\langle x, y \rangle$ denotes the first vertex of $M^\star$ on the successor geodesics of both $x$ and $y$. Recall the definition of a corner, just below (2.3). Denote the set of corners around white vertices in a decoration $D_i$ by $C_i$, and note that $|C_i| = |D_i|$. For two corners $u$ and $v$ we write $uv \in M$ if there is an edge in $M$ between the vertices corresponding to $u$ and $v$.
Lemma 5.10. Let $s_i, s_j \in M$ be such that $i < j$ and $n := d^\star(s_i, s_j) \ge 2$, and write $m := d^\star(s_i, \langle s_i, s_j \rangle)$. For any $r, r' > 0$ there is a constant $C > 0$ such that the stated bound holds.

To simplify the counting of geodesics in $M^\star$, we use the convention that the geodesic between $s_i$ and $s_j$ is directed from $s_i$ to $s_j$ if and only if $i < j$. Writing $n = d^\star(s_i, s_j)$ and $m = d^\star(s_i, \langle s_i, s_j \rangle)$, note that $0 < m \le n$ and that the labels first decrease for the first $m$ steps from $s_i$ to $s_j$ and then increase for the remaining $n - m$ steps. Let $\Gamma^{(k)}_{n,m}$ be the number of (directed) geodesics which (i) contain the edge $(S^\star_{k-1}, S^\star_k)$, (ii) are of length $n$, and (iii) have decreasing labels on exactly the first $m$ steps.
Note that in the definition of $\Gamma^{(k)}_{n,m}$ the endpoints $s_i$ and $s_j$ are not fixed. We will consider separately the case when one of the endpoints is fixed to be $s_0$, and thus is the root of the bad decoration. The reason is that the optimal exponent $r$ or $r'$ in Lemma 5.10 is worse when the corresponding decoration is the bad decoration $D_0$, but this is countered by the fact that the number of geodesics is smaller when one of the endpoints is fixed. We define $\hat\Gamma^{(k,\alpha)}_{n,m}$, $\alpha \in \{1, 2\}$, to be the number of directed geodesics satisfying (i)–(iii) and, in addition, $s_i = s_0$ if $\alpha = 1$ (resp. $s_j = s_0$ if $\alpha = 2$).
Lemma 5.11. There is a constant $c > 0$ such that for any $\ell \ge 1$ the stated bounds hold.

The same bounds apply to $E\big(\sum_{x \sim r} \Gamma^{(rx)}_{n,m} \mid L_R = 0\big)$, but we leave the details to the reader. One small difference in the proof is that one must use the fact that the number of neighbours of $r$ in $M^\star$ has finite second moment; apart from this it is very similar to the proof of Lemma 5.11.
Before proving Lemmas 5.10 and 5.11 we show how they give (5.39) and hence Lemma 5.9.
Proof of (5.39). We need to show that the two sums (5.45) and (5.47) are bounded by constants. We start with (5.45), assuming until further notice that $L_R > 0$. The primed sum in (5.51) is over all integers $i < j$ such that
• the edge $(S^\star_{k-1}, S^\star_k)$ lies on the geodesic from $s_i$ to $s_j$ in $M^\star$, and
• the labels on the first $m$ steps of this geodesic are decreasing.
Note that Lemma 5.11 bounds the number of terms in this sum, whereas Lemma 5.10 bounds the summand.
First note that we may assume that n ≥ 2: for n = 1 the sum (5.51) consists of a single term which we may bound by a constant. For n ≥ 2 we split the primed sum in (5.51) into three parts: firstly, the sum over j > 0 with i = 0 fixed, secondly the sum over i < 0 with j = 0 fixed, and thirdly the sum over i and j both not equal to 0. This will allow us to apply the corresponding bounds of Lemma 5.11.
We apply Lemma 5.10 with the following choices of $r, r'$: $r = 2\beta - 2$ if $i \neq 0$ and $r = 2\beta - 4$ if $i = 0$; $r' = 2\beta$ if $j \neq 0$ and $r' = 2\beta - 2$ if $j = 0$. By Lemma 5.1, and since $E(\xi^\beta) < \infty$, the expectations in Lemma 5.10 are finite for these choices and may be bounded by the same constant. On applying Lemma 5.11 we therefore find that for $n \ge 2$ there is a $c > 0$ such that the displayed bound holds. We deduce that, for some constant $c' > 0$, the sum is bounded as displayed, which is finite for $0 < q < 2\beta - 5$, as required. The argument for (5.47), when $L_R = 0$, is the same, using the remark immediately below Lemma 5.11.
Proof of Lemma 5.10. Writing $z = \langle s_i, s_j \rangle$, note that $\ell(z) = \ell(s_i) - m$, and thus $\ell(s_j) = \ell(s_i) - 2m + n$ (5.55). Note that the $(m-1)$st successor $\sigma^{m-1}(s_i)$ of $s_i$ in $M^\star$ appears before $s_j$ in the contour sequence of the mobile $\vartheta$. For there to be an edge from $u \in C_i$ to $v \in C_j$, it cannot be the case that the label of $\sigma^{m-1}(s_i)$ is strictly smaller than that of $u$; consequently $\Delta\ell_j \ge \ell(s_j) - \ell(v) \ge n - m$. Using that $D_i$ and $D_j$ are independent of $M^\star$ and of each other, the result now follows from Lemma 5.2 and Markov's inequality.
Proof of Lemma 5.11. Start by considering (5.48). We treat two cases separately: Case (1), when the part of the geodesic intersecting the spine is directed away from the root, and Case (2), when it is directed towards the root; we write $\Gamma^{(k)}_{n,m}$ accordingly as a sum of two terms. In Case (1), the geodesic starts in an outgrowth from a vertex, say $r_1$, on the spine; denote the collection of outgrowths from $r_1$ by $T_1$. It then proceeds $m$ steps in the direction of decreasing labels. Since it crosses the edge $(S^\star_{k-1}, S^\star_k)$ on the spine, after $m$ steps it must enter an outgrowth from a vertex on the spine, say $r_2$, which is different from $r_1$; denote the collection of outgrowths from $r_2$ by $T_2$. By the definition of the direction of the geodesic, it has to enter $T_2$ on the right-hand side of the spine. We cut the geodesic into four parts. The first part is from the starting point until it hits $r_1$; call the length of that part $n_1$. The next part is from $r_1$ to the vertex $S^\star_k$; call its length $n_2 \ge 1$. The third part is from $S^\star_k$ to $r_2$; call its length $n_3$. The final part is from $r_2$ to the end; call its length $n_4$. Then the constraints (5.58) and (5.59) hold. Conditional on $L_R$, the distributions of $T_1$ and $T_2$ are as displayed. The inequality sign is only due to the fact that we replace the number of elements at level $n_4$ in the part of $T_2$ on the right-hand side of the spine by the total number of elements at level $n_4$ in $T_2$. Using Lemma 6.1, and recalling that $n_4 = n - m$, we get an upper bound in which $k_1$ and $k_2$ are positive constants.

Figure 6. The vertical path represents the spine in $M^\star$, and only the outgrowths $T_1$ and $T_2$ are drawn. In Case (1) a geodesic starts at level $n_1$ in $T_1$ and ends at level $n_4$ in the right-hand part of $T_2$. In Case (2) a geodesic starts in the left-hand part of $T_2$ and ends at level $n_1$ in $T_1$. The $+$ and $-$ signs indicate whether the labels on the geodesic increase ($+$) or decrease ($-$) as one follows the direction of the geodesic.
In Case (2) we have a similar picture. The geodesic starts in the part of $T_2$ which lies to the left of the spine and ends in $T_1$. In this case (5.58) and (5.59) are replaced by the conditions $n_1 + n_2 + n_3 = n - m$ (5.64) and $n_4 = m$ (5.65), but everything else is the same. We thus get a bound in which $k_3$ and $k_4$ are positive constants. Now turn to (5.49). In this case the origin of the geodesic is $s_0$, which is the root of the bad decoration. Recall that $\epsilon$ denotes the direction of the root edge in the BDG bijection. When $\epsilon = -1$ the root edge is directed away from $s_0$, and thus by the definition of $r$ we have $r = s_0$. We may then go through the same argument as for (5.48), Case (1), except that now $n_1 = 0$ and $n_2 = k$, and one does not have to take the distribution of $T_1$ into account. Similarly, when $\epsilon = +1$, $s_0$ is the leftmost child of $r$, and in this case $n_1 = 1$ and $n_2 = k$. Using these values, (5.62) may be estimated by the expression stated in (5.49). Finally, (5.50) follows in the same way but corresponds to Case (2), and thus $m$ is replaced by $n - m$.

Appendix
In this section we prove Lemmas 5.1, 5.3 and 5.7, and conclude by collecting a few results on the modified Galton–Watson process $(Y_n)_{n \ge 0}$ defined above (5.41).
Proof of Lemma 5.1. Let $Z$ be a Galton–Watson tree with offspring distribution $\xi$, and denote the total number of vertices in $Z$ by $N$. First consider the case $i \neq 0$. Then the bijection between simply generated trees and mobiles shows that $|D_i| \overset{d}{=} N$. We can interpret $N$ as a first-passage time of a random walk with drift, via the following standard 'depth-first-search' construction of $Z$. First sample the number $\xi_1$ of children of the root. If $\xi_1 = 0$ we are done (and the tree has size 1). Otherwise pick the 'leftmost' of the children of the root and, independently of $\xi_1$, sample its number $\xi_2$ of children. If $\xi_2 > 0$ we repeat this for the new leftmost child; otherwise we repeat the procedure for the leftmost of the remaining $\xi_1 - 1$ children of the root that have not yet been 'investigated'. The same procedure is repeated until the entire tree has been constructed. By considering the number of vertices left to investigate after $k$ steps in this construction, we see that $N$ is the smallest value of $k$ at which the associated random walk first hits zero, as required.
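The first-passage description of $N$ is easy to check by simulation. The sketch below uses, purely as an illustrative assumption, a subcritical geometric offspring law with mean $1/2$, for which the total progeny satisfies $E(N) = 1/(1 - 1/2) = 2$.

```python
import random

def total_progeny(rng, p_cont=1/3, cap=10**6):
    """Total size N of a Galton-Watson tree, read off as the first-passage
    time of the depth-first-search walk: N = min{k : 1 + sum_{i<=k}(xi_i - 1) = 0}.
    Offspring law (an assumption for illustration): P(xi = j) = (1 - p_cont) * p_cont**j,
    which has mean p_cont / (1 - p_cont) = 1/2 here."""
    unexplored, k = 1, 0            # vertices left to investigate
    while unexplored > 0 and k < cap:
        xi = 0
        while rng.random() < p_cont:  # sample a geometric offspring number
            xi += 1
        unexplored += xi - 1        # xi new children; one vertex investigated
        k += 1
    return k

rng = random.Random(1)
n_samples = 50000
est = sum(total_progeny(rng) for _ in range(n_samples)) / n_samples
# For a subcritical offspring mean m, E(N) = 1/(1 - m), i.e. 2 here
```

The same walk construction applies verbatim to any offspring law with mean below 1; the cap only guards against the (here impossible in the subcritical mean-1/2 case, up to negligible probability) event of an extremely large tree.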
The argument for $D_0$ is similar, but the image of $D_0$ under the bijection with simply generated trees is more complicated. The vertices of $D_0$ consist both of the outgrowth from the infinite-degree vertex in $T$ which is immediately to the right of the spine, and of the vertices of finite degree on the spine of $T$, together with their outgrowths. The length $L$ of the spine satisfies $P(L = i) = \kappa^i(1-\kappa)$. The number of outgrowths of a vertex of finite degree on the spine is distributed as $\hat\xi$, where $P(\hat\xi = k) = P(\xi - 1 = k \mid \xi < \infty) = (k+1)\pi_{k+1}/\kappa$ (6.3) (see (3.10)). Thus for all $s \ge 0$ we have $E(\hat\xi^s) \le \kappa^{-1} E(\xi^{s+1})$ (6.4). Let $(\hat\xi_i)_{i \ge 1}$ be independent copies of $\hat\xi$, and let $N_1$ and $(N_{i,j})_{i,j \ge 1}$ be independent copies of the variable $N$ above, all independent of each other and of $L$. It follows from the description above that $|D_0|$ satisfies the distributional identity (6.5). Let $s = r - 1$. If $s \ge 1$, it follows from (6.5) (conditioning on $L$ and the $\hat\xi_i$, and using Minkowski's inequality) that the norm bound (6.6) holds in terms of $\|N_1\|_s$, $\|L\|_s$ and $\|\hat\xi\|_s$, which is finite by (6.2) and (6.4). Similarly, if $0 < s < 1$, then using subadditivity and Jensen's inequality we get a corresponding bound, which is again finite by (6.2) and (6.4).
Proof of Lemma 5.3. Note that $i_+(R)$ and $i_-(R)$ can be written in terms of first-passage times of the random walk $S_n = \sum_{i=1}^n X_i$ with the i.i.d. jump distribution $X_i$ described at (2.14): $i_+(R) = \inf\{n \ge 0 : S_n = -R\} = \inf\{n \ge 0 : S_n \le -R\}$ and $i_-(R) = \inf\{n \ge 0 : S_n \ge R\}$.
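The identity $\inf\{n \ge 0 : S_n = -R\} = \inf\{n \ge 0 : S_n \le -R\}$ reflects the fact that the walk is skip-free downwards: its steps are bounded below by $-1$. As an illustration, the sketch below assumes (since (2.14) is not reproduced here) the step law $X = \xi - 1$ with $P(\xi = k) = 2^{-k-1}$, which has mean 0 and variance 2 as stated later in this section, and checks that the walk's first entry into $(-\infty, -R]$ always lands exactly on $-R$.

```python
import random

def first_passage_down(R, rng, cap=10**6):
    """Run S_n, a sum of steps (xi_i - 1), until S_n <= -R; return (n, S_n)."""
    s, n = 0, 0
    while s > -R and n < cap:
        xi = 0
        while rng.random() < 0.5:   # P(xi = k) = 2^{-k-1}: mean 1, variance 2
            xi += 1
        s += xi - 1                 # steps are >= -1: skip-free downwards
        n += 1
    return n, s

rng = random.Random(3)
landings = [first_passage_down(3, rng) for _ in range(200)]
# Keep the runs that actually reached level <= -3 before the cap.
finished = [s for n, s in landings if s <= -3]
# Every such run ends exactly at -3, so the two infima defining i_+(R) agree.
```

The cap merely truncates the (rare) very long excursions of this recurrent walk; the skip-free property guarantees the landing value for every run that does terminate.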
For this random walk the distributions of such first passage times can be computed explicitly (using an exponential martingale argument). However, we use a more general argument based on approximating the random walk with Brownian motion, and well-known results for first passage times of the latter.
Let $S(t)$ denote the process obtained from $S_n$ by linear interpolation between integer times. The $X_i$ of (2.14) have mean 0 and variance 2, so the Komlós–Major–Tusnády theorem [21] tells us that we may couple $(S(t) : t \ge 0)$ with a standard Brownian motion $(W(t) : t \ge 0)$ in such a way that, for some constants $c, \varepsilon > 0$ and all $T, x > 0$, the coupling inequality (6.8) holds. Let $K(\lambda) = K' \log \lambda$, with the constant $K'$ chosen large enough that $K(\lambda) R > 2c \log R + (c + 1/\varepsilon) \log \lambda$ for all $R, \lambda \ge 1$.
We have the decomposition displayed. The first term is at most
$$\frac{2}{\sqrt{2\pi}}\,\big(K(\lambda) + 1/\sqrt{2}\big)\, R\, (\lambda R^2)^{-1/2} \le K'' \lambda^{-1/2} \log \lambda$$
by standard results for Brownian motion (by the reflection principle, the probability that $W$ has not exceeded $a$ by time $t$ equals $1 - 2P(W(t) > a) = P(|W(t)| \le a)$). By (6.8) the second term is at most the bound displayed. This proves the bound for $i_-(R)$; the bound for $i_+(R)$ is similar.
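The reflection-principle identity $P(\max_{s \le t} W(s) > a) = 2P(W(t) > a)$ underlying the estimate above can be checked by a crude Monte Carlo discretization (the discrete grid slightly undershoots the true crossing probability, since excursions above the level between grid points are missed):

```python
import math
import random

def hit_prob(a, t, steps, paths, rng):
    """Monte Carlo estimate of P(max_{s<=t} W(s) > a) on a discrete time grid."""
    dt = t / steps
    sd = math.sqrt(dt)
    hits = 0
    for _ in range(paths):
        w = 0.0
        for _ in range(steps):
            w += rng.gauss(0.0, sd)   # Brownian increment over one grid step
            if w > a:
                hits += 1
                break
    return hits / paths

rng = random.Random(2)
est = hit_prob(1.0, 1.0, 200, 5000, rng)

# Reflection principle: P(max_{s<=1} W(s) > 1) = 2 P(W(1) > 1)
phi = 0.5 * (1 + math.erf(1 / math.sqrt(2)))   # standard normal CDF at 1
exact = 2 * (1 - phi)                          # about 0.317
```

Refining the grid (larger `steps`) shrinks the discretization bias at the usual $O(\text{steps}^{-1/2})$ rate for barrier-crossing problems.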
6.1. The process $(Y_n)_{n \ge 0}$. We collect here a few results which we need on the process $(Y_n)_{n \ge 0}$ defined above (5.41). Recall that $\pi_i = 2^{-i-1}$.
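The modified process $(Y_n)_{n \ge 0}$ can also be simulated directly. The sketch below uses $\pi_i = 2^{-i-1}$ (critical, mean 1, variance 2) for all generations after the zeroth and, as an assumption for illustration, takes the zeroth-generation law to be the size-biased distribution $(k+1)2^{-k-2}$, realized as a sum of two independent Geometric(1/2) variables; this choice is consistent with $E(Y_i) = 2$ stated in the proof below.

```python
import random

def geom_half(rng):
    """Sample from pi_k = 2^{-k-1}: failures before the first success, p = 1/2."""
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

def simulate_Y(i, rng):
    """Size of generation i >= 1 of the modified Galton-Watson process."""
    # Zeroth generation: assumed size-biased law (k+1) 2^{-k-2}, i.e. the
    # sum of two independent Geometric(1/2) variables (mean 2).
    y = geom_half(rng) + geom_half(rng)
    # Later generations: critical offspring law pi_k = 2^{-k-1}.
    for _ in range(i - 1):
        y = sum(geom_half(rng) for _ in range(y))
    return y

rng = random.Random(0)
n_samples = 20000
est = sum(simulate_Y(3, rng) for _ in range(n_samples)) / n_samples
# E(Y_i) = 2 for every i >= 1, since all later generations are critical
```

Because the post-zeroth generations are critical, the expected generation size stays at its initial mean, while the process still dies out almost surely.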
Proof. We use the fact that, for each fixed $i \ge 0$, both $E(Y_i \mid Y_j = 0)$ and $E(Y_i \mid Y_j > 0)$ are non-decreasing in $j$; we thus obtain an upper bound by letting $j \to \infty$. Since the process $(Y_i)_{i \ge 0}$ dies out with probability 1, we have $\lim_{j \to \infty} E(Y_i \mid Y_j = 0) = E(Y_i) = g_i'(1) = 2$ by (6.16). We also have that $E(Y_i \mid Y_j > 0) \to 4i + 4$ (6.18) as $j \to \infty$, where the convergence follows from (6.16) and (6.17). The monotonicity in $j$ used above may be proved as follows. Consider first $E(Y_i \mid Y_j > 0)$, and start with the case $j \le i$. Then the displayed identity holds, and clearly $P(Y_j > 0) \le P(Y_{j-1} > 0)$, so the statement holds for all $j$ up to $i$. Now consider the case $j \ge i$. Write $p_j(k) = P(Y_i = k \mid Y_j > 0)$. We may then rewrite the relevant expression in terms of a function $h_j(k)$, where $c_j = P(Y_j > 0)/P(Y_{j+1} > 0)$ and $q_r$ is the probability that an individual present at time $i$ has no offspring at time $i + r$. Since $q_{r+1} \ge q_r$, the function $h_j(k)$ is non-decreasing in $k$, and the claimed monotonicity follows from Harris' inequality. The argument for $E(Y_i \mid Y_j = 0)$ is similar, using the increasing function $(q_{j+1-i}/q_{j-i})^k$ in place of $h_j(k)$.
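Harris' inequality for a single real random variable, $E[f(X)g(X)] \ge E[f(X)]\,E[g(X)]$ for non-decreasing $f$ and $g$, can be verified exactly on a small discrete distribution (chosen arbitrarily here for illustration):

```python
# Harris' inequality: non-decreasing functions of one random variable
# are positively correlated.  Exact check on a finite distribution.
vals = [0, 1, 2, 3, 4]
probs = [16/31, 8/31, 4/31, 2/31, 1/31]   # geometric-like weights, sum to 1

f = lambda k: k          # non-decreasing
g = lambda k: k * k      # non-decreasing on the non-negative values above

Efg = sum(p * f(v) * g(v) for v, p in zip(vals, probs))
Ef = sum(p * f(v) for v, p in zip(vals, probs))
Eg = sum(p * g(v) for v, p in zip(vals, probs))

assert Efg >= Ef * Eg    # here Efg = 158/31 while Ef * Eg = 1508/961
```

In the proof above the role of $g$ is played by the non-decreasing function $h_j(k)$, with the law of $Y_i$ conditioned on $\{Y_j > 0\}$ in place of the toy distribution.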