Scaling Limits for Random Quadrangulations of Positive Genus

We discuss scaling limits of large bipartite quadrangulations of positive genus. For a given $g$, we consider, for every $n \ge 1$, a random quadrangulation $q_n$ uniformly distributed over the set of all rooted bipartite quadrangulations of genus $g$ with $n$ faces. We view it as a metric space by endowing its set of vertices with the graph distance. We show that, as $n$ tends to infinity, this metric space, with distances rescaled by the factor $n^{-1/4}$, converges in distribution, at least along some subsequence, toward a limiting random metric space. This convergence holds in the sense of the Gromov-Hausdorff topology on compact metric spaces. We show that, regardless of the choice of the subsequence, the Hausdorff dimension of the limiting space is almost surely equal to 4. Our main tool is a bijection introduced by Chapuy, Marcus, and Schaeffer between the quadrangulations we consider and objects they call well-labeled $g$-trees. An important part of our study consists in determining the scaling limits of the latter.


Motivation
The aim of the present work is to investigate scaling limits for random maps of arbitrary genus. Recall that a map is a cellular embedding of a finite graph (possibly with multiple edges and loops) into a compact connected orientable surface without boundary, considered up to orientation-preserving homeomorphisms. By cellular, we mean that the faces of the map (the connected components of the complement of the edges) are all homeomorphic to discs. The genus of the map is defined as the genus of the surface into which it is embedded. For technical reasons, it will be convenient to deal with rooted maps, meaning that one of the half-edges (or oriented edges) is distinguished.
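For later use, recall that the genus can be read from the combinatorics of the map through the Euler characteristic formula; specializing it to quadrangulations recovers the vertex count used below.

```latex
% Euler characteristic formula for a map m of genus g:
|V(m)| - |E(m)| + |F(m)| = 2 - 2g .
% For a quadrangulation with n faces, summing the face degrees gives
4n = 2\,|E(m)| , \quad\text{i.e.}\quad |E(m)| = 2n ,
% hence
|V(m)| = 2 - 2g + |E(m)| - |F(m)| = n + 2 - 2g .
```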
We will particularly focus on bipartite quadrangulations: a map is a quadrangulation if all its faces have degree 4; it is bipartite if each vertex can be colored in black or white, in such a way that no edge links two vertices that have the same color. Although in genus g = 0, all quadrangulations are bipartite, this is no longer true in positive genus g ≥ 1.
A natural way to generate a large random bipartite quadrangulation of genus g is to choose it uniformly at random from the set $Q_n$ of all rooted bipartite quadrangulations of genus g with n faces, and then consider the limit as n goes to infinity. From this point of view, the planar case (that is, g = 0) has largely been studied over the last decade. Using bijective approaches developed by Cori and Vauquelin [8] between planar quadrangulations and so-called well-labeled trees, Chassaing and Schaeffer [7] exhibited a scaling limit for some functionals of a uniform random planar quadrangulation. They studied in particular the so-called profile of the map, which records the number of vertices located at every possible distance from the root, as well as its radius, defined as the maximal distance from the root to a vertex. They showed that the distances in the map are of order $n^{1/4}$ and that these two objects, once the distances are rescaled by the factor $n^{-1/4}$, admit a limit in distribution.
Marckert and Mokkadem [21] addressed the problem of convergence of quadrangulations as a whole, considering them as metric spaces endowed with their graph distance. They constructed a limiting space and showed that the discrete spaces converge toward it in a certain sense. The natural question of convergence in the sense of the Gromov-Hausdorff topology [14] remained, however, open. It is believed that the scaling limit of a uniform random planar quadrangulation exists in that sense. An important step toward this result was made by Le Gall [17], who showed the tightness of the laws of these metric spaces, and that every possible limiting space (commonly called Brownian map, in reference to Marckert and Mokkadem's terminology) is in fact almost surely of Hausdorff dimension 4. He also proved, together with Paulin [19], that every Brownian map is almost surely homeomorphic to the two-dimensional sphere. Miermont [22] later gave an alternative proof of this fact.
In positive genus, Chapuy, Marcus, and Schaeffer [6] extended the bijective approaches known for the planar case, leading Chapuy [5] to establish the convergence of the rescaled profile of a uniform random bipartite quadrangulation of fixed genus. A different approach consists in using Boltzmann measures. The number of faces is then random: a quadrangulation is chosen with a probability proportional to a certain fixed weight raised to the power of its number of faces. Conditionally given the number of faces, a quadrangulation chosen according to this probability is then uniform. Miermont [23] showed the relative compactness of a family of these measures, adapted in the right scaling, as well as the uniqueness of typical geodesics in the limiting spaces.
The present work generalizes a part of the above results to any positive genus: we will show the tightness of the laws of rescaled uniform random bipartite quadrangulations of genus g with n faces in the sense of the Gromov-Hausdorff topology. These results may be seen as a conditioned version of some of Miermont's results appearing in [23]. We will also prove that the Hausdorff dimension of every possible limiting space is almost surely 4.

Main results
We will work in a fixed genus g. Since the case g = 0 has already been studied, we suppose g ≥ 1; to lighten the notation, we will generally not let g figure in it.
We use the classic formalism for maps, which we briefly recall here. For any map m, we denote by V(m) and E(m) respectively its sets of vertices and edges, and by $\vec E(m)$ its set of half-edges. By convention, we denote by $e_* \in \vec E(m)$ the root of m. For any half-edge e, we write $\bar e$ for its reverse, so that $E(m) = \{\{e, \bar e\} : e \in \vec E(m)\}$, and $e^-$ and $e^+$ for its origin and end. Finally, we say that $\check E(m) \subset \vec E(m)$ is an orientation of the half-edges if, for every edge $\{e, \bar e\} \in E(m)$, exactly one of e and $\bar e$ belongs to $\check E(m)$.
Recall that the Gromov-Hausdorff distance between two compact metric spaces $(S, \delta)$ and $(S', \delta')$ is defined by
$$d_{GH}\big((S,\delta), (S',\delta')\big) := \inf \delta''_H\big(\varphi(S), \varphi'(S')\big),$$
where the infimum is taken over all isometric embeddings $\varphi : S \to S''$ and $\varphi' : S' \to S''$ of S and S' into a common metric space $(S'', \delta'')$, and $\delta''_H$ denotes the Hausdorff distance between compact subsets of S''.

Theorem 1 Let $q_n$ be uniformly distributed over the set $Q_n$ of all rooted bipartite quadrangulations of genus g with n faces. Then, from any increasing sequence of integers, we may extract a subsequence $(n_k)_{k \ge 0}$ such that there exists a random metric space $(q_\infty, d_\infty)$ satisfying
$$\left( V(q_{n_k}),\ \frac{1}{\gamma\, n_k^{1/4}}\, d_{q_{n_k}} \right) \xrightarrow[k \to \infty]{(d)} (q_\infty, d_\infty)$$
in the sense of the Gromov-Hausdorff topology, where $\gamma := \left(\frac{8}{9}\right)^{1/4}$ and $d_{q_{n_k}}$ denotes the graph distance. Moreover, the Hausdorff dimension of the limit space $(q_\infty, d_\infty)$ is almost surely equal to 4, regardless of the choice of the sequence of integers.
The limiting spaces $(q_\infty, d_\infty)$ appearing in Theorem 1 are expected to have properties similar to those of the case g = 0. For instance, they are expected to have the same topology as the torus with g holes, and to possess the property of uniqueness of their geodesic paths. In an upcoming work, we will show that the topology is indeed that of the g-torus.
We call g-tree a map of genus g with only one face. This generalizes the notion of tree: note that a 0-tree is merely a (plane) tree. In order to show Theorem 1, we will code quadrangulations by g-trees via a bijection introduced by Chapuy, Marcus, and Schaeffer [6], which we present in Section 2. We then study the scaling limits of g-trees: we first decompose them in Section 3 and study their convergence in Sections 4 and 5. Finally, Section 6 is dedicated to the proof of Theorem 1.
Along the way, we will recover an asymptotic expression, already known from [6], for the cardinality of the set $Q_n$ of all rooted bipartite quadrangulations of genus g with n faces. Following [6], we call dominant scheme a g-tree whose vertices all have degree exactly 3. We write $S^*$ for the (finite) set of all dominant schemes of genus g. It is a well-known fact that there exists a constant $t_g$ (depending only on g) such that $|Q_n| \sim t_g\, n^{\frac{5}{2}(g-1)}\, 12^n$ (see for example [1,6,23]). This constant plays an important part in the enumeration of many classes of maps [1,13].
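Although only g ≥ 1 is treated here, the exponent in this asymptotic expression is consistent with the planar case, where an exact formula of Tutte is available; as a quick check:

```latex
% Rooted planar quadrangulations with n faces (Tutte's formula):
|Q_n^{(g=0)}| = \frac{2 \cdot 3^n}{(n+1)(n+2)} \binom{2n}{n}
             \sim \frac{2}{\sqrt{\pi}}\; n^{-5/2}\, 12^n ,
% since \binom{2n}{n} \sim 4^n / \sqrt{\pi n}.  This matches
% t_g\, n^{\frac{5}{2}(g-1)}\, 12^n at g = 0, with t_0 = 2/\sqrt{\pi}.
```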
Theorem 2 ([6]) The following expression holds, where the second sum is taken over all $(4g-2)!$ orderings $\lambda$ of the vertices of a dominant scheme $s \in S^*$, i.e., bijections from $[\![0, 4g-3]\!]$ onto $V(s)$.

As the proof of this expression is more technical, we postpone it to the last section. By convention, we will suppose that all the random variables we consider are defined on a common probability space $(\Omega, \mathcal F, P)$.

The Chapuy-Marcus-Schaeffer bijection
The first main tool we use consists in the Chapuy-Marcus-Schaeffer bijection [6, Corollary 2 to Theorem 1], which allows us to code (rooted) quadrangulations by so-called well-labeled (rooted) g-trees.
It may be convenient to represent a g-tree t with n edges by a 2n-gon whose edges are pairwise identified (see Figure 1). We denote by $e_1 := e_*, e_2, \ldots, e_{2n}$ the half-edges of t, sorted according to the clockwise order around this 2n-gon. The half-edges are then said to be sorted according to the facial order of t. Informally, for $2 \le i \le 2n$, $e_i$ is the "first half-edge to the left after $e_{i-1}$." We call facial sequence of t the sequence $t(0), t(1), \ldots, t(2n)$ defined by $t(0) = t(2n) = e_1^- = e_{2n}^+$ and, for $1 \le i \le 2n - 1$, $t(i) = e_i^+ = e_{i+1}^-$. Imagine a fly flying along the boundary of the unique face of t. Let it start at time 0 by following the root $e_*$ and let it take one unit of time to follow each half-edge; then t(i) is the vertex where the fly is at time i.
Let t be a g-tree. The two vertices u, v ∈ V (t) are said to be neighbors, and we write u ∼ v, if there is an edge linking them.
Definition 1 A well-labeled g-tree is a pair (t, l) where t is a g-tree and $l : V(t) \to \mathbb Z$ is a function (thereafter called labeling function) satisfying:
i. $l(e_*^-) = 0$;
ii. $|l(u) - l(v)| \le 1$ whenever $u \sim v$.
We call $T_n$ the set of all well-labeled g-trees with n edges.

Figure 1: On the left, the facial order and facial sequence of a g-tree. On the right, its representation as a polygon whose edges are pairwise identified.
A pointed quadrangulation is a pair $(q, v^\bullet)$ consisting of a quadrangulation q together with a distinguished vertex $v^\bullet \in V(q)$. We call $Q_n^\bullet$ the set of all pointed bipartite quadrangulations of genus g with n faces.
The Chapuy-Marcus-Schaeffer bijection is a bijection between the sets $T_n \times \{-1, +1\}$ and $Q_n^\bullet$. As a result, because every quadrangulation $q \in Q_n$ has exactly $n + 2 - 2g$ vertices, we obtain the relation $(n + 2 - 2g)\,|Q_n| = |Q_n^\bullet| = 2\,|T_n|$.

Let us now briefly describe the mapping from $T_n \times \{-1, +1\}$ onto $Q_n^\bullet$. We refer the reader to [6] for a more precise description. Let $(t, l) \in T_n$ be a well-labeled g-tree with n edges and $\varepsilon \in \{-1, +1\}$. As above, we write $t(0), t(1), \ldots, t(2n)$ for its facial sequence. The pointed quadrangulation $(q, v^\bullet)$ corresponding to $((t, l), \varepsilon)$ is then constructed as follows. First, shift all the labels in such a way that the minimal label is 1; let us call $\bar l := l - \min l + 1$ this shifted labeling function. Then, add an extra vertex $v^\bullet$ carrying the label $\bar l(v^\bullet) := 0$ inside the only face of t. Finally, following the facial sequence, for every $0 \le i \le 2n - 1$, draw an arc (without crossing any edge of t or any arc already drawn) between t(i) and its successor, defined as follows:
⋄ if $\bar l(t(i)) = 1$, then its successor is the extra vertex $v^\bullet$;
⋄ if $\bar l(t(i)) \ge 2$, then its successor is the first vertex following t(i) in the facial sequence whose shifted label is $\bar l(t(i)) - 1$.
The quadrangulation q is then defined as the map whose set of vertices is $V(t) \cup \{v^\bullet\}$, whose edges are the arcs we drew, and whose root is the first arc drawn, oriented from t(0) if $\varepsilon = -1$ or toward t(0) if $\varepsilon = +1$ (see Figure 2). Because of the way we drew the arcs of q, we see that, for any vertex $v \in V(q)$, $\bar l(v) = d_q(v^\bullet, v)$. When seen as a vertex in V(q), we write q(i) instead of t(i).

We end this section by giving an upper bound for the distance between two vertices q(i) and q(j) in terms of the labeling function l:
$$d_q\big(q(i), q(j)\big) \le l(t(i)) + l(t(j)) - 2 \min_{k \in [\![i, j]\!]} l(t(k)) + 2,$$
where we write, for $i \le j$, $[\![i, j]\!] := \{i, i+1, \ldots, j\}$. We refer the reader to [23, Lemma 4] for a detailed proof of this bound. The idea is the following: we consider the paths starting from t(i) and t(j) and made of the successive arcs going from vertices to their successors, without crossing the g-tree.
They are bound to meet at a vertex with label $m - 1$, where $m := \min_{k \in [\![i,j]\!]} l(t(k))$. On Figure 3, we see that the (red) plain path has length $l(t(i)) - m + 1$ and that the (purple) dashed one has length $l(t(j)) - m + 1$.

Decomposition of a g-tree

We investigate here more closely the structure of a g-tree t. We call scheme a g-tree with no vertices of degree 1 or 2. Roughly speaking, a g-tree is a scheme in which every half-edge is replaced by a forest.

Formal definitions
We adapt the standard formalism for plane trees, as found in [24] for instance, to forests. Let $\mathcal U := \bigcup_{n \ge 0} \mathbb N^n$, where $\mathbb N := \{1, 2, \ldots\}$. If $u \in \mathbb N^n$, we write $|u| := n$. For $u = (u_1, \ldots, u_n)$, $v = (v_1, \ldots, v_p) \in \mathcal U$, we let $uv := (u_1, \ldots, u_n, v_1, \ldots, v_p)$ be the concatenation of u and v. If w = uv for some $u, v \in \mathcal U$, we say that u is an ancestor of w and that w is a descendant of u. In the case where $v \in \mathbb N$, we may also use the terms parent and child instead.
ii. if $u \in f$ and $|u| \ge 2$, then its parent belongs to f.
The integer t(f) encountered in i. and iv. is called the number of trees of f.
We will see in a moment why we require t(f) + 1 to lie in f. For $u = (u_1, \ldots, u_p) \in f$, we call $a(u) := u_1$ its oldest ancestor. A tree of f is a level set for a: for $1 \le j \le t(f)$, the j-th tree of f is the set $\{u \in f : a(u) = j\}$. The integer a(u) hence records which tree u belongs to. We call $f \cap \mathbb N = \{a(u) : u \in f\}$ the floor of the forest f.
It is convenient, when representing a forest, to draw edges between parents and their children, as well as between i and i + 1 for $i = 1, 2, \ldots, t(f)$, that is, between vertices u and v such that $u \sim v$ (see Figure 4). We say that an edge drawn between a parent and its child is a tree edge, whereas an edge drawn between an i and an i + 1 is called a floor edge.
We call $\mathcal F_\sigma^m$ the set of all forests with σ trees and m tree edges.

Definition 3 A well-labeled forest is a pair (f, l) where f is a forest and $l : f \to \mathbb Z$ is a function satisfying:
i. l(u) = 0 for every u in the floor of f;
ii. $|l(u) - l(v)| \le 1$ whenever u is the parent of v.
We let $\overline{\mathcal F}_\sigma^m := \{(f, l) : f \in \mathcal F_\sigma^m\}$ be the set of well-labeled forests with σ trees and m tree edges.

Remark. For every forest in $\mathcal F_\sigma^m$, there are exactly $3^m$ admissible ways to label it: for every tree edge, one may choose any increment in $\{-1, 0, 1\}$. As a result, $|\overline{\mathcal F}_\sigma^m| = 3^m\, |\mathcal F_\sigma^m|$.

Encoding by contour and spatial contour functions
There is a very convenient way to code forests and well-labeled forests. Let $f \in \mathcal F_\sigma^m$ be a forest. Let us begin by defining its facial sequence $f(0), f(1), \ldots, f(2m + \sigma)$ as follows (see Figure 5): f(0) := 1 and, for $0 \le i \le 2m + \sigma - 1$,
⋄ if f(i) has children that do not appear in the sequence $f(0), f(1), \ldots, f(i)$, then f(i + 1) is the first of these children, that is, $f(i + 1) := f(i)\, j_0$ where $j_0 := \min\{j \ge 1 : f(i)\, j \notin \{f(0), \ldots, f(i)\}\}$;
⋄ otherwise, if f(i) has a parent (that is, $|f(i)| \ge 2$), then f(i + 1) is this parent;
⋄ otherwise, f(i) lies in the floor of f and its whole tree has been visited, and we set f(i + 1) := f(i) + 1.

Figure 5: The facial sequence associated with the well-labeled forest from Figure 4.
It is easy to see that each tree edge is visited exactly twice-once going from the parent to the child, once going the other way around-whereas each floor edge is visited only once-from some i to i + 1. As a result, f(2m + σ) = t(f) + 1.
The contour function of f is the function defined, for $0 \le i \le 2m + \sigma$, by
$$C_f(i) := |f(i)| + t(f) - a(f(i)),$$
and linearly interpolated between integer values (see Figure 6). We can easily check that the function $C_f$ entirely determines the forest f. We see that $C_f$ ranges in the set of paths of a simple random walk starting from t(f) and conditioned to hit 0 for the first time at $2m + \sigma$. This allows us to compute the cardinality of $\mathcal F_\sigma^m$:

Lemma 3 Let $\sigma \ge 1$ and $m \ge 0$ be two integers. The number of forests with σ trees and m tree edges is
$$|\mathcal F_\sigma^m| = \frac{\sigma}{2m + \sigma}\; 2^{2m + \sigma}\; P\big(S_{2m+\sigma} = \sigma\big) = \frac{\sigma}{2m + \sigma} \binom{2m + \sigma}{m},$$
where $(S_i)_{i \ge 0}$ is a simple random walk on $\mathbb Z$.
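To make the encoding concrete, here is a small sketch in Python, with a hypothetical toy forest and helper names of our own, computing the facial sequence and the contour function under the convention that the contour starts at t(f) and first hits 0 at time 2m + σ; it also illustrates how the oldest ancestor can be read off the running infimum of the contour.

```python
# A toy forest in the Neveu-style formalism: vertices are tuples of
# positive integers, the floor is {(1,), ..., (t+1,)}.  Here t(f) = 2
# trees, m = 4 tree edges, plus the extra floor vertex (3,).
f = {(1,), (1, 1), (1, 2), (2,), (2, 1), (2, 1, 1), (3,)}
t = max(u[0] for u in f) - 1          # number of trees t(f)
m = sum(1 for u in f if len(u) >= 2)  # number of tree edges

def facial_sequence(forest):
    """Visit the forest as in the definition above."""
    last = (max(u[0] for u in forest),)
    seq, visited = [(1,)], {(1,)}
    while seq[-1] != last:
        u = seq[-1]
        children = [u + (j,) for j in range(1, len(forest) + 1)
                    if u + (j,) in forest]
        fresh = [c for c in children if c not in visited]
        if fresh:
            nxt = fresh[0]            # first yet-unvisited child
        elif len(u) >= 2:
            nxt = u[:-1]              # back up to the parent
        else:
            nxt = (u[0] + 1,)         # move to the next tree root
        seq.append(nxt)
        visited.add(nxt)
    return seq

seq = facial_sequence(f)
# Contour convention: C_f(i) = |f(i)| + t(f) - a(f(i)),
# which starts at t(f) and ends at 0.
C = [len(u) + t - u[0] for u in seq]
```

The running infimum of C then recovers the oldest ancestor: a(f(i)) equals t(f) + 1 minus the infimum of the contour up to time i.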
Proof. Shifting the contour functions, we see that $|\mathcal F_\sigma^m|$ is the number of different paths of a simple random walk starting from 0 and conditioned to hit $-\sigma$ for the first time at $2m + \sigma$. We have
$$|\mathcal F_\sigma^m| = 2^{2m+\sigma}\; P\big(S_i > -\sigma \text{ for } 0 \le i < 2m + \sigma,\ S_{2m+\sigma} = -\sigma\big) = \frac{\sigma}{2m + \sigma}\; 2^{2m+\sigma}\; P\big(S_{2m+\sigma} = -\sigma\big),$$
where the second equality is an application of the so-called cycle lemma (see for example [2, Lemma 2]). The second equality of the lemma is obtained by seeing that $S_{2m+\sigma} = \sigma$ if and only if the walk goes exactly $m + \sigma$ times up and m times down.

Now, if we have a well-labeled forest (f, l), the contour function $C_f$ enables us to recover f. To record the labels, we use the spatial contour function $L_{f,l}$ defined, for $0 \le i \le 2m + \sigma$, by
$$L_{f,l}(i) := l(f(i)),$$
and linearly interpolated between integer values (see Figure 6). The contour pair $(C_f, L_{f,l})$ then entirely determines (f, l).
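As a sanity check, the counting formula of Lemma 3 can be verified by brute-force enumeration of walk paths on small cases; a sketch (the helper name is ours):

```python
# Brute-force check: paths of a simple walk started at 0 that first
# hit -sigma at time 2m + sigma are counted by
# (sigma / (2m + sigma)) * binom(2m + sigma, m), as in Lemma 3.
from itertools import product
from math import comb

def count_first_passage_paths(m, sigma):
    n_steps = 2 * m + sigma
    count = 0
    for steps in product((-1, 1), repeat=n_steps):
        walk = [0]
        for s in steps:
            walk.append(walk[-1] + s)
        # strictly above -sigma before the end, equal to -sigma at the end
        if walk[-1] == -sigma and min(walk[:-1]) > -sigma:
            count += 1
    return count

checks = {(m, s): (count_first_passage_paths(m, s),
                   s * comb(2 * m + s, m) // (2 * m + s))
          for m, s in [(0, 1), (1, 1), (2, 2), (3, 1), (2, 3)]}
```

For instance, (m, σ) = (3, 1) gives 5 paths, the third Catalan number, as expected for a first passage to -1 in 7 steps.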

Extraction of the scheme out of a g-tree
Definition 4 We call scheme of genus g a g-tree with no vertices of degree one or two. A scheme is said to be dominant when it only has vertices of degree exactly three.
Remark. The Euler characteristic formula easily shows that the number of schemes of genus g is finite. We call S the set of all schemes of genus g and S * the set of all dominant schemes of genus g.
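The computation behind this finiteness is short and also identifies the extremal case of dominant schemes:

```latex
% For a scheme s of genus g (one face), Euler's formula reads
|V(s)| - |E(s)| + 1 = 2 - 2g .
% Every vertex has degree at least 3, so
2|E(s)| = \sum_{v \in V(s)} \deg(v) \ge 3|V(s)| = 3\big(|E(s)| + 1 - 2g\big),
% hence
|E(s)| \le 6g - 3 \quad\text{and}\quad |V(s)| \le 4g - 2 ,
% with equality exactly when all degrees equal 3, i.e. for dominant schemes.
```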
It was explained in [6] how to extract the scheme out of a g-tree t. Let us now recall this operation. By iteratively deleting all its vertices of degree 1, we are left with a (non-necessarily rooted) g-tree. If the root has been removed, we root this new g-tree at the first remaining half-edge following the original root in the facial order of t.
The vertices of degree 2 in the new g-tree are organized into maximal chains connected together at vertices of degree at least 3. We replace each of these maximal chains by a single new edge. The edge replacing the chain containing the root is chosen to be the final root (with the same orientation).
By construction, the map s we obtain is a scheme of genus g, which we call the scheme of the g-tree t. The vertices of t that remain vertices in the scheme s are called the nodes of t. See Figure 7.

Decomposition of a g-tree
When iteratively removing vertices of degree 1, we actually remove whole trees. Let $c_1, c_2, \ldots, c_k$ be one of the maximal chains of half-edges linking two nodes. The trees that we remove, appearing on the left side of this chain and connected to one of the $c_i^-$'s, form a forest (with k trees) as defined in Section 3.1. Beware that the tree connected to $c_k^+$ is not a part of this forest; it will be the first tree of some other forest. Remember that the forests we consider always end with a single vertex not considered to be a tree. This chain being later replaced by a single half-edge of the scheme, we see that a g-tree t can be decomposed into its scheme s and a collection of forests $(f_e)_{e \in \vec E(s)}$. Recall that $\vec E(s)$ is the set of all half-edges of s.

Figure 7: Extraction of the scheme s out of the g-tree t.
For $e \in \vec E(s)$, let us define the integers $m_e \ge 0$ and $\sigma_e \ge 1$ by
$$f_e \in \mathcal F_{\sigma_e}^{m_e}, \tag{5}$$
so that $m_e$ records the "size" of the forest attached to the half-edge e and $\sigma_e$ its "length." In order to recover t from s and these forests, we need to record the position of its root. It may be seen as a half-edge of the forest $f_{e_*}$ corresponding to the root $e_*$ of s. We code it by the integer
$$0 \le u < 2m_{e_*} + \sigma_{e_*} \tag{6}$$
for which this half-edge links $f_{e_*}(u)$ to $f_{e_*}(u + 1)$. For every half-edge $e \in \vec E(s)$, if we call $\bar e$ its reverse, we readily obtain the relation
$$\sigma_e = \sigma_{\bar e}. \tag{7}$$
This decomposition may be inverted. Let us suppose that we have a scheme s and a collection of forests $(f_e)_{e \in \vec E(s)}$. Let us define the integers $m_e$'s and $\sigma_e$'s by (5) and suppose they satisfy (7). Let again $0 \le u < 2m_{e_*} + \sigma_{e_*}$ be an integer. Then we may construct a g-tree as follows. First, we replace every edge $\{e, \bar e\}$ by a chain of $\sigma_e = \sigma_{\bar e}$ edges. Then, for every half-edge $e \in \vec E(s)$, we replace the chain of half-edges corresponding to it by the forest $f_e$, in such a way that its floor matches with the chain. Finally, we find the root inside $f_{e_*}$ thanks to the integer u.
This discussion is summed up by the following proposition. The factor 1/2 in its last statement comes from the fact that the floor of $f_e$ and that of $f_{\bar e}$ overlap in the g-tree, so that their edges should be counted only once.

Proposition 4
The above construction provides us with a bijection between the set of all g-trees and the set of all triples $\big(s, (f_e)_{e \in \vec E(s)}, u\big)$, where $s \in S$ is a scheme (of genus g), the forests $f_e \in \mathcal F_{\sigma_e}^{m_e}$ satisfy (7), and u satisfies (6).
Moreover, g-trees with n edges correspond to triples satisfying $\sum_{e \in \vec E(s)} \big(m_e + \tfrac12 \sigma_e\big) = n$.

Decomposition of a well-labeled g-tree
We now deal with a well-labeled g-tree. We will need the following definition: we say that $(M_n)_{n \ge 0}$ is a simple Motzkin walk if it is the random walk whose i.i.d. steps follow the law $\frac13(\delta_{-1} + \delta_0 + \delta_1)$.
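The step law of the simple Motzkin walk has variance 2/3, which is where the value $\eta = \sqrt{2/3}$ used in the local limit estimates below comes from. A small dynamic-programming sketch (function names ours) computes the endpoint law exactly:

```python
# Endpoint law of a simple Motzkin walk (i.i.d. steps uniform on
# {-1, 0, 1}) by dynamic programming, in exact rational arithmetic.
from fractions import Fraction

def motzkin_endpoint_law(sigma):
    """Return {l: P(M_sigma = l)} for a walk started at 0."""
    law = {0: Fraction(1)}
    for _ in range(sigma):
        new = {}
        for pos, p in law.items():
            for step in (-1, 0, 1):
                new[pos + step] = new.get(pos + step, Fraction(0)) + p / 3
        law = new
    return law

law3 = motzkin_endpoint_law(3)
bridges_0_to_0 = law3[0] * 3**3   # number of length-3 walks from 0 to 0
variance_12 = sum(l * l * p for l, p in motzkin_endpoint_law(12).items())
```

There are 7 Motzkin walks of length 3 from 0 back to 0 (the constant walk plus the six orderings of the steps +1, -1, 0), and the variance after 12 steps is exactly 12 × 2/3 = 8.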
Remark. We then write $M^{l_1 \to l_2}$ for a simple Motzkin walk conditioned to go from $l_1$ to $l_2$, i.e., a Motzkin bridge.

When decomposing a well-labeled g-tree (t, l) into a triple $(s, (f_e), u)$ according to Proposition 4, every forest $f_e$ naturally inherits a labeling function, denoted $\hat l_e$, from l. In general, the forest $(f_e, \hat l_e)$ is not well-labeled, because the labels of its floor have no reason to be equal to 0. We will transform it into a Motzkin bridge $M_e$ starting from 0 and a well-labeled forest $(f_e, l_e)$. The Motzkin bridge records the floor labels, shifted in order to start from 0: for $0 \le i \le t(f_e)$,
$$M_e(i) := \hat l_e(i + 1) - \hat l_e(1),$$
where, on the right-hand side, we used the notation $\{1, 2, \ldots, t(f_e) + 1\}$ for the floor of $f_e$. The well-labeled forest is obtained by shifting all the labels tree by tree, in such a way that the root label of any tree is 0: for all $w \in f_e$,
$$l_e(w) := \hat l_e(w) - \hat l_e(a(w)).$$
We thus decompose the well-labeled g-tree (t, l) into its scheme s, a collection (M e ) e∈ E(s) of Motzkin bridges started at 0, a collection (f e , l e ) e∈ E(s) of well-labeled forests and an integer u, as shown on Figure 8.
For $e \in \vec E(s)$, we define the integer $l_e \in \mathbb Z$ by
$$l_e := M_e(\sigma_e). \tag{8}$$
It records the spatial displacement made along the half-edge e. Because the floor of $f_e$ overlaps the floor of $f_{\bar e}$ in the g-tree, $M_e$ and $M_{\bar e}$ read the same labels in opposite directions:
$$M_{\bar e}(i) = M_e(\sigma_e - i) - M_e(\sigma_e), \qquad 0 \le i \le \sigma_e. \tag{9}$$
In particular, $l_{\bar e} = -l_e$. But these are not the only constraints on the family $(l_e)_{e \in \vec E(s)}$. They will be easier to understand by looking at vertices instead of edges. For every vertex $v \in V(s)$, we let $l_v$ be the label of the corresponding node, shifted in such a way that $l_{e_*^-} = 0$. We have the following relation between $(l_e)_{e \in \vec E(s)}$ and $(l_v)_{v \in V(s)}$: for all $e \in \vec E(s)$,
$$l_e = l_{e^+} - l_{e^-}, \tag{10}$$
so that the family $(l_v)_{v \in V(s)}$ entirely determines $(l_e)_{e \in \vec E(s)}$. Because of the choice we made, $l_{e_*^-} = 0$, it is easy to see that $(l_e)_{e \in \vec E(s)}$ determines $(l_v)_{v \in V(s)}$ as well.
It now becomes clear that the only constraint on $(l_e)_{e \in \vec E(s)}$ is to be obtained from a family $(l_v)_{v \in V(s)}$ through the relations (10).
Let s be a scheme, $(M_e)_{e \in \vec E(s)}$ be a family of Motzkin bridges started from 0, $(f_e, l_e)_{e \in \vec E(s)}$ be a family of well-labeled forests, and u be an integer. Let the integers $m_e$'s, $\sigma_e$'s, and $l_e$'s be defined by (5) and (8). We will say that the quadruple $\big(s, (M_e)_{e \in \vec E(s)}, (f_e, l_e)_{e \in \vec E(s)}, u\big)$ is compatible if the integers $\sigma_e$'s satisfy the constraints (7), the Motzkin bridges $M_e$'s satisfy (9), the integers $l_e$'s can be obtained from a family $(l_v)_{v \in V(s)}$ through the relations (10), and u satisfies (6).
Let us suppose now that we have a compatible quadruple $\big(s, (M_e)_{e \in \vec E(s)}, (f_e, l_e)_{e \in \vec E(s)}, u\big)$. We may reconstruct a well-labeled g-tree as follows. We begin by suitably relabeling the forests. For every half-edge e, we first shift the labels of $M_e$ by $l_{e^-}$, so that it becomes a bridge from $l_{e^-}$ to $l_{e^+}$. Then, we shift all the labels of $(f_e, l_e)$ tree by tree according to the Motzkin bridge: precisely, we change $l_e$ into
$$w \in f_e \longmapsto l_{e^-} + M_e\big(a(w) - 1\big) + l_e(w).$$
Then, we replace the half-edge e by this forest, as in the previous section. As before, we find the root thanks to u. Finally, we shift all the labels so that the root label is equal to 0. This discussion is summed up by the following proposition.

Proposition 5
The above construction provides us with a bijection between the set of all well-labeled g-trees and the set of all compatible quadruples $\big(s, (M_e)_{e \in \vec E(s)}, (f_e, l_e)_{e \in \vec E(s)}, u\big)$. Moreover, g-trees with n edges correspond to quadruples satisfying $\sum_{e \in \vec E(s)} \big(m_e + \tfrac12 \sigma_e\big) = n$.
If we call $(C_e, L_e)$ the contour pair of $(f_e, l_e)$, then we may retrieve the oldest ancestor of $f_e(i)$ thanks to $C_e$ by the relation
$$a\big(f_e(i)\big) = t(f_e) + 1 - \underline{C_e}(i),$$
where we use the notation $\underline X(s) := \inf_{0 \le r \le s} X(r)$ for any process $(X_s)_{s \ge 0}$. The function
$$i \longmapsto M_e\big(t(f_e) - \underline{C_e}(i)\big) + L_e(i)$$
then records the labels of the forest $f_e$, once shifted tree by tree according to the Motzkin bridge $M_e$. This function will play an important part in Section 6.
Through the Chapuy-Marcus-Schaeffer bijection, a uniform random quadrangulation corresponds to a uniform random well-labeled g-tree. In order to investigate the scaling limit of the latter, we will proceed in two steps. First, we consider the scaling limit of its structure, consisting of its scheme along with the integers $m_e$'s, $\sigma_e$'s, $l_v$'s, and u previously defined. Then, we deal with its Motzkin bridges and forests conditionally given its structure.
Convergence of the structure of a uniform well-labeled g-tree

Preliminaries
We investigate here the convergence of the integers previously defined, suitably rescaled, in the case of a uniform random well-labeled g-tree with n edges. Let $(t_n, l_n)$ be uniformly distributed over the set $T_n$ of well-labeled g-trees with n edges. We call $s_n$ its scheme and we define $(m_e^n)$, $(\sigma_e^n)$, $(l_v^n)$, and $u_n$ as in the previous section. We know that the right scalings are n for the sizes, $\sqrt{2n}$ for the lengths, and $\gamma n^{1/4}$ for the labels; we accordingly define $m_e^{(n)} := m_e^n / n$, $\sigma_e^{(n)} := \sigma_e^n / \sqrt{2n}$, $l_v^{(n)} := l_v^n / (\gamma n^{1/4})$, and $u^{(n)} := u_n / (2n)$.
Remark. Throughout this paper, the notations with a parenthesized n will always refer to suitably rescaled objects-as in the definitions above.
As sensed in the previous section, it will be more convenient to work with the $l_v$'s instead of the $l_e$'s. We use the notation $\mathbb Z_+ := \{0, 1, \ldots\}$ for the set of non-negative integers. For any scheme $s \in S$, we define the set $C_n(s)$ of quadruples $(m, \sigma, l, u)$ satisfying the constraints discussed in the previous section for a well-labeled g-tree with n edges. For $(m, \sigma, l, u) \in C_n(s)$, we will compute the probability that $s_n = s$ and $(m_n, \sigma_n, l_n, u_n) = (m, \sigma, l, u)$. A g-tree has such features if and only if its scheme is s and, for every $e \in \vec E(s)$, the path $M_e$ is a Motzkin bridge from 0 to $l_e = l_{e^+} - l_{e^-}$ on $[0, \sigma_e]$, and the well-labeled forest $(f_e, l_e)$ lies in $\overline{\mathcal F}_{\sigma_e}^{m_e}$. Moreover, because of the relation (9), the Motzkin bridges $(M_e)_{e \in \vec E(s)}$ are entirely determined by $(M_e)_{e \in \check E(s)}$, where $\check E(s)$ is any orientation of $\vec E(s)$. Using Lemma 3, we then obtain an explicit expression (12) for $P\big(s_n = s,\ (m_n, \sigma_n, l_n, u_n) = (m, \sigma, l, u)\big)$, in which $(S_i)_{i \ge 0}$ is a simple random walk on $\mathbb Z$ and $(M_i)_{i \ge 0}$ is a simple Motzkin walk.
We will need the following local limit theorem (see [25, Theorems VII.1.6 and VII.3.16]) to estimate the probabilities above. We call p the density of a standard Gaussian random variable:
$$p(x) := \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$

Proposition 6 Let $(X_i)_{i \ge 0}$ be a sequence of i.i.d. integer-valued centered random variables with a moment of order $r_0$ for some $r_0 \ge 3$. Let $\eta^2 := \mathrm{Var}(X_1)$, let h be the maximal span of $X_1$, let the integer a be such that, a.s., $X_1 \in a + h\mathbb Z$, and write $\Sigma_k := X_1 + \cdots + X_k$.
1. Uniformly for $i \in ka + h\mathbb Z$,
$$\sup_i\ \left| \frac{\eta \sqrt k}{h}\, P(\Sigma_k = i) - p\!\left(\frac{i}{\eta \sqrt k}\right) \right| \xrightarrow[k \to \infty]{} 0.$$
2. For all $2 \le r \le r_0$, there exists a constant C such that, for all $i \in \mathbb Z$ and $k \ge 1$,
$$P(\Sigma_k = i) \le \frac{C}{\sqrt k} \left(1 + \frac{|i|}{\sqrt k}\right)^{-r}.$$

Proof. The first part of this proposition is merely [25, Theorem VII.1.6]; the second part follows from [25, Theorem VII.3.16].

In what follows, we will always use the notation S for simple random walks, M for simple Motzkin walks, and Σ for any other random walks. We will use this proposition with S and M: we find $(\eta, h) = (1, 2)$ for S and $(\eta, h) = (\sqrt{2/3}, 1)$ for M. In both cases, we may take r as large as we want.
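A quick numerical illustration of the first part of the local limit theorem for the simple random walk (variance 1, span 2): the rescaled point probabilities approach the Gaussian density (function names ours).

```python
# Local limit theorem for the simple random walk: for the walk S,
# eta = 1 and the span is h = 2, so (sqrt(k)/2) P(S_k = i) should be
# close to p(i / sqrt(k)) uniformly in i of the right parity.
from math import comb, exp, pi, sqrt

def p(x):
    """Standard Gaussian density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def prob_S(k, i):
    """P(S_k = i), computed exactly from the binomial distribution."""
    if (k + i) % 2 or abs(i) > k:
        return 0.0
    return comb(k, (k + i) // 2) / 2.0**k

k = 400
errors = [abs(sqrt(k) * prob_S(k, i) / 2 - p(i / sqrt(k)))
          for i in range(-k, k + 1, 2)]
```

At k = 400 the worst error over all reachable points is already well below 1e-2, consistent with the uniform convergence in Proposition 6.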

Result
Recall that $S^*$ is the set of all dominant schemes of genus g, that is, schemes whose vertices all have degree 3. We call $p_a$ the density of a centered Gaussian variable with variance a, and $p'_a$ its derivative:
$$p_a(x) := \frac{1}{\sqrt{2\pi a}}\, e^{-x^2/(2a)}, \qquad p'_a(x) = -\frac{x}{a}\, p_a(x).$$
For any $s \in S$, we identify an element $(m, \sigma, l, u)$ with the corresponding family indexed by the half-edges and vertices of s. We write
$$m_{e_*} := 1 - \sum_{e \in \vec E(s) \setminus \{e_*\}} m_e,$$
and note that the vector m lies in the simplex $\Delta_s := \{m \in \mathbb R_+^{\vec E(s)} : \sum_e m_e = 1\}$ as long as $m_{e_*} \ge 0$. We define the probability µ by, for all non-negative measurable functions ϕ on $\bigcup_{s \in S} \{s\} \times \Delta_s \times \mathbb R_+^{\check E(s)} \times \mathbb R^{V(s)} \times [0, 1]$,
$$\mu(\varphi) := \frac1\Upsilon \sum_{s \in S^*} \int \mathbb 1_{\{m_{e_*} \ge 0,\ u < m_{e_*}\}}\, \varphi(s, m, \sigma, l, u) \prod_{e \in \vec E(s)} \big({-p'_{m_e}(\sigma_e)}\big) \prod_{e \in \check E(s)} p_{\sigma_e}(l_e)\; \mathrm dm\, \mathrm d\sigma\, \mathrm dl\, \mathrm du,$$
where Υ is a normalization constant. We may now state the main result of this section.

Proposition 7
The law of the random variable $\big(s_n, (m_e^{(n)}), (\sigma_e^{(n)}), (l_v^{(n)}), u^{(n)}\big)$ converges weakly toward the probability µ.
Proof. Let ϕ be a bounded continuous function on the set above. We need to look at the convergence of $E\big[\varphi\big(s_n, m^{(n)}, \sigma^{(n)}, l^{(n)}, u^{(n)}\big)\big]$. Instead of (13.1), we use its discrete counterpart (13.1'); this yields an element of $C_n(s)$ provided that $m_{e_*}(n) \ge 0$ and $0 \le u < 2m_{e_*}(n) + \sigma_{e_*}$. Beware that here the definition of $m_{e_*}(n)$ actually depends on n. It also depends on σ, but we chose not to let it figure in the notation for space reasons.
For any vector $m \in \mathbb R_+^{\vec E(s) \setminus \{e_*\}}$, note that $\lfloor m \rfloor_{e_*}(n)$ is well defined through (13.1'). Until further notice, we will write $\lfloor m \rfloor_{e_*}$ for $\lfloor m \rfloor_{e_*}(n)$, with n implicit. So when we write $\lfloor m \rfloor$, we mean the vector such that $\lfloor m \rfloor_e = \lfloor m_e \rfloor$ for $e \ne e_*$ and $\lfloor m \rfloor_{e_*} = \lfloor m \rfloor_{e_*}(n)$. Using (12), then writing the sum over $C_n(s)$ in the form of an integral, and finally performing the changes of variables $m \to nm$, $\sigma \to \sqrt{2n}\, \sigma$, $l \to \gamma n^{1/4} l$, and $u \to 2nu$, we arrive at an expression (15) for the expectation, whose integrands factor into terms $A_n^s$, $B_n^{s,e}$, and $C_n^{s,e}$.

2) We are now going to see that every integral term of the sum appearing in equation (15) converges, by dominated convergence. We no longer use (13.1') but (13.1) in the identification (13). We see that
$$A_n^s \longrightarrow \mathbb 1_{\{m_{e_*} \ge 0,\ u < m_{e_*}\}}\, \varphi(s, m, \sigma, l, u).$$
Thanks to Proposition 6, we then obtain $B_n^{s,e} \to -p'_{m_e}(\sigma_e)$ and $C_n^{s,e} \to p_{\sigma_e}(l_e)$.
It remains to prove that the convergence is dominated. To that end, we use the second part of Proposition 6. In the remainder of the proof, C will denote a constant in (0, ∞) whose value may differ from line to line. Applying Proposition 6 with r = 3, we obtain a bound on $B_n^{s,e}$ valid for $n \ge 2$, where we use the fact that, for $x \ge 1$, $\lfloor x \rfloor^{-1} \le 2/x$. The case $\lfloor nm \rfloor = 0$ is to be treated separately and is left to the reader. Applying now Proposition 6 with r = 2, we obtain a bound on $C_n^{s,e}$ for $n \ge 2$. Every integrand in equation (15) is then bounded by an expression (16), and we have to see that this expression is integrable. First, note that we integrate with respect to u on a compact set, and that we have the same bound if we integrate with respect to $l_{e^+}$ instead of $l_{e^-}$. It is possible to injectively associate with every vertex $v \in V(s) \setminus \{e_*^-\}$ a half-edge $e_v \in \check E(s)$ such that v is an extremity of $e_v$. Let us call $E_V$ the range of such an injection. The integral of the expression (16) with respect to u and l is then bounded accordingly. Finally, it is easy to see that this expression, once integrated with respect to σ, is bounded by $C \prod_{e \in \vec E(s)} (m_e)^{-7/8}$, which is integrable with respect to m.

3) We just saw that the integral expression in (15) converges toward
$$\int \mathbb 1_{\{m_{e_*} \ge 0,\ u < m_{e_*}\}}\, \varphi(s, m, \sigma, l, u) \prod_{e \in \vec E(s)} \big({-p'_{m_e}(\sigma_e)}\big) \prod_{e \in \check E(s)} p_{\sigma_e}(l_e)\; \mathrm dm\, \mathrm d\sigma\, \mathrm dl\, \mathrm du.$$
The dominant terms in equation (15) are the ones for which |E(s)| is the largest. The corresponding schemes are exactly the dominant ones: for a scheme, $2|E(s)| = \sum_{v \in V(s)} \deg(v) \ge 3|V(s)|$, and the Euler characteristic formula gives $|E(s)| \le 6g - 3$, the equality being reached when $2|E(s)| = 3|V(s)|$, that is, when s is dominant. Note that this situation is exactly the same as the one encountered in [5,6,23].
Hence, if ϕ is momentarily chosen to be constantly equal to 1, we obtain the convergence of the normalizing constants toward Υ, where Υ is defined by (14). Finally, combining the previous steps yields the convergence of the law toward µ, which is the result we sought.

The convergence of a uniform well-labeled tree with n edges is well known; see [7], for example. We will need a conditioned version of this result: roughly speaking, instead of looking at one large tree with n edges uniformly labeled in such a way that the root label is 0, we look at a forest with n edges and a number of trees growing like $\sqrt n$, uniformly labeled provided that the root label of every tree is 0. For that matter, we will adapt the arguments provided in [15, Chapter 6].

Convergence of the Motzkin bridges and the forests
Let us define the space $\mathcal K$ of continuous real-valued functions on $\mathbb R_+$ killed at some time:
$$\mathcal K := \bigcup_{x \ge 0} C([0, x], \mathbb R).$$
For an element $f \in \mathcal K$, we define its lifetime σ(f) as the only x such that $f \in C([0, x], \mathbb R)$. We endow this space with the following metric:
$$d_{\mathcal K}(f, g) := |\sigma(f) - \sigma(g)| + \sup_{s \ge 0} \big| f(s \wedge \sigma(f)) - g(s \wedge \sigma(g)) \big|.$$
Recall that we use the notation $\underline X(s)$ for the infimum up to time s of any process $X \in \mathcal K$. Throughout this section, m and σ will denote positive real numbers, and l will be any real number.

Brownian bridge and first-passage Brownian bridge
The results we show in this section are part of the probabilistic folklore. Because references are scarce, we give complete proofs for the sake of self-containedness.
We want to define two processes that are, informally, a standard Brownian motion β on [0, m] conditioned respectively on the event {β_m = l} and on hitting −σ for the first time at time m. Of course, both these events occur with probability 0, so we need to define these objects properly. There are several equivalent ways to do so (see for example [3,27,2]).
Remember that we call $p_a$ the Gaussian density with variance a and mean 0, and $p'_a$ its derivative. Let $(\beta_t)_{0\le t\le m}$ be a standard Brownian motion. As explained in [12, Proposition 1], the law of the Brownian bridge is characterized by $B^{0\to l}_{[0,m]}(m) = l$ and the formula for every bounded measurable function f on K and every 0 ≤ m′ < m.
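For later computations (notably the manipulations of Section 7), it may help to record these densities explicitly, with this normalization:

```latex
p_a(x) = \frac{1}{\sqrt{2\pi a}}\, e^{-x^2/(2a)},
\qquad
p_a'(x) = -\frac{x}{a}\, p_a(x) = -\frac{x}{\sqrt{2\pi a^3}}\, e^{-x^2/(2a)}.
```

In particular, for x > 0, the quantity $-p_a'(x)$, viewed as a function of a, is exactly the density of the first hitting time of level x by a standard Brownian motion.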
We define the law of the first-passage Brownian bridge in a similar way, by letting for every bounded measurable function f on K and every 0 ≤ m′ < m, and These formulae determine the finite-dimensional laws of the first-passage Brownian bridge. It remains to see that it admits a continuous version. Because its law is absolutely continuous with respect to the Wiener measure on every [0, m′], m′ < m, the only problem arises at time m. We will, however, use the Kolmogorov lemma [27, Theorem 1.8] to obtain the continuity of the whole trajectory. We will see during the proof of Lemma 14 that, as for Brownian motion, the trajectories of the first-passage bridge are α-Hölder for every α < 1/2.
The motivation for these definitions may be found in the following lemma: Lemma 8 Let $(\beta_t)_{0\le t\le m}$ be a standard Brownian motion. Let $(B^\varepsilon_t)_{0\le t\le m}$ and $(F^\varepsilon_t)_{0\le t\le m}$ have the law of β conditioned respectively on the events Then, as ε goes to 0, The proof of this lemma uses methods similar to those we will use for Lemma 10, so we leave the details to the reader. In what follows, we will use the following lemma, which is a consequence of the Rosenthal inequality [26, Theorems 2.9 and 2.10]: Lemma 9 Let $X_1, X_2, \dots, X_k$ be independent centered random variables and q ≥ 2. Then, there exists a constant c(q) depending only on q such that In particular, if $X_1, X_2, \dots, X_k$ are i.i.d.,
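As an illustration of Lemma 9 (not part of the proof), the i.i.d. case can be checked numerically for Rademacher variables, for which $E\,S_k^4 = 3k^2 - 2k$ can be computed exactly by enumeration; with q = 4 the bound holds, for instance, with constant 3:

```python
from itertools import product

def exact_abs_moment(k, q):
    """Exact E|S_k|^q for S_k = X_1 + ... + X_k with i.i.d. Rademacher (+-1)
    steps, computed by enumerating all 2^k sign sequences."""
    total = sum(abs(sum(signs)) ** q for signs in product((-1, 1), repeat=k))
    return total / 2 ** k

k, q = 10, 4
moment = exact_abs_moment(k, q)   # exactly 3k^2 - 2k = 280 for q = 4
# Rosenthal-type bound, i.i.d. case: E|S_k|^q <= c(q)(k E|X|^q + (k E X^2)^{q/2});
# for Rademacher steps E|X|^q = E X^2 = 1, and c(4) = 3 suffices here.
bound = 3 * (k + k ** 2)
print(moment, bound)
```

This is purely a sanity check of the shape of the inequality; the constant c(q) of the lemma is not computed here.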

Discrete bridges
In this paragraph, we will see two lemmas showing that these two objects are the limits of their discrete analogs. These lemmas, in themselves, motivate our definitions of bridges and first-passage bridges. Let us begin with bridges. We consider a sequence $(X_k)_{k\ge 0}$ of i.i.d. centered integer-valued random variables admitting a moment of order $q_0$ for some $q_0 \ge 3$. We write $\eta^2 := \mathrm{Var}(X_1)$ for its variance and h for its maximal span. We define $\Sigma_i := \sum_{k=0}^i X_k$ and still write Σ for its linearly interpolated version. Let $(m_n) \in \mathbb Z_+^{\mathbb N}$ and $(l_n) \in \mathbb Z^{\mathbb N}$ be two sequences of integers such that Let $(B_n(i))_{0\le i\le m_n}$ be the process whose law is that of $(\Sigma_i)_{0\le i\le m_n}$ conditioned on the event which we suppose occurs with positive probability. We let be its rescaled version.
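To make the conditioning concrete, here is a minimal sketch (the helper name is ours, not from the text) that samples such a conditioned walk by naive rejection, with uniform steps on {−1, 0, 1} playing the role of the X_k's and the path started at 0 for simplicity:

```python
import random

def sample_discrete_bridge(m, l, steps=(-1, 0, 1), seed=0):
    """Naive rejection sampler for a walk with i.i.d. uniform steps from `steps`
    (centered, integer-valued), conditioned on ending at l after m steps.
    Illustrative only: rejection is impractically slow for large m."""
    rng = random.Random(seed)
    while True:
        incs = [rng.choice(steps) for _ in range(m)]
        if sum(incs) == l:
            path = [0]
            for x in incs:
                path.append(path[-1] + x)
            return path

bridge = sample_discrete_bridge(20, 2)
print(bridge[0], bridge[-1], len(bridge))
```

Rejection sampling is of course only viable for small m; it is used here solely to exhibit the conditioned object, not as an efficient algorithm.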
Lemma 10 As n goes to infinity, the process B (n) converges in law toward the process B 0→l [0,m] , in the space (K, d K ).
Proof. We denote by $\mathcal F_i := \sigma(\Sigma_k, 0 \le k \le i)$ the natural filtration associated with Σ. Applying the Skorokhod theorem, we may and will assume that converges a.s. toward a standard Brownian motion $(\beta_s)_{0\le s\le m}$ for the uniform topology.
1) Let m′ < m. We begin by looking at $B^{(n)}$ on [0, m′]. For n large enough, ⌈nm′⌉ < m_n. Let f be a bounded continuous function from K to R. We have Recall the notation $Q^{\Sigma_k}(i) = P(\Sigma_k = i)$. Using the Markov property, we obtain where the second line comes from Proposition 6. Note that the denominator of the fractional term in (20) is the same as the numerator when m′ is chosen to be 0. So the fractional term in (20) converges a.s. toward the convergence being dominated, by Proposition 6. Finally, 2) We will use the following lemmas, whose proofs we postpone until right after the end of this proof.

Lemma 11
There exists an integer $n_0 \in \mathbb N$ such that, for every $2 \le q \le q_0$, there exists a constant $C_q$ satisfying, for all $n \ge n_0$ and $0 \le s \le t \le m^{(n)}$, Lemma 12 We set $B := B^{0\to l}_{[0,m]}$. For any q ≥ 2, there exists a constant $C_q$ such that, for all By the Portmanteau theorem [3, Theorem 2.1], we can restrict ourselves to bounded uniformly continuous functions from K to R. Let f be such a function. Let ε > 0, and δ > 0 be such that Let 0 < α < 1/2 − 1/q_0. Thanks to Lemmas 11 and 12, Kolmogorov's criterion [30, Theorem 3.3.16] provides us with some constant C such that We take m′ satisfying so that, for n sufficiently large, For any function $X = (X(s))_{0\le s\le x} \in K$, we define $X_{|y} := (X(s))_{0\le s\le y} \in K$. Hence Thanks to point 1), for n large enough, the second term of the right-hand side of (22) is less than ε.
The first and third terms are treated in the same way (for the third term, just remove the (n)'s): on the set $\{B^{(n)} \in K\}$, All in all, for n large enough, and $B^{(n)}$ converges weakly toward B.
It remains to prove Lemmas 11 and 12.
Proof of Lemma 11. If |t − s| < 1/n, the fact that $B_n$ is linear on every interval [i, i + 1] implies that which gives the desired result. By the triangle inequality, we can restrict ourselves to the cases where ns and nt are integers, and either $t \le m^{(n)}/2$ or $m^{(n)}/2 \le s$. First, let us suppose that $0 \le s \le t \le m^{(n)}/2$. Applying (20) with m′ = t and an appropriate function f, we obtain The asymptotic formula (21) and the fact that $m^{(n)} \to m$ yield the existence of a positive constant c and an integer $n_0$ such that, for $n \ge n_0$, $\sqrt n\, Q^{\Sigma_{m_n}}(l_n) \ge c$ and $m^{(n)} > m/2$.
Then Proposition 6 ensures that, for $n \ge n_0$, Thus, the fractional term in equation (23) is uniformly bounded as soon as $n \ge n_0$, and by means of the Rosenthal inequality (Lemma 9). Now, if $m^{(n)}/2 \le s \le t \le m^{(n)}$, we use the following time reversal invariance: We have and we are back in the case we just treated. Note that it is important that $m^{(n)}$ be a deterministic time.
Proof of Lemma 12. We show the inequality for 2 ≤ q ≤ q 0 . As B appears as the limit of B (n) (in a certain sense), we may choose the X k 's to have arbitrarily large moments, and we see that it actually holds for any value of q ≥ 2. For 0 ≤ s ≤ t < m, point 1) in the proof of Lemma 10 shows that where C q is the constant of Lemma 11. It only remains to see that B (n) (m ∧ m (n) ) → B(m) in probability in order to obtain the same inequality for t = m. The time reversal invariance (24) implies that and, thanks to 1),

Discrete first-passage bridges
We now state a lemma similar to Lemma 10 for first-passage bridges, in which we only consider simple random walks. Let $(m_n) \in \mathbb Z_+^{\mathbb N}$ and $(\sigma_n) \in \mathbb N^{\mathbb N}$ be two sequences of integers such that We consider a sequence $(X_k)_{k\ge 1}$ of i.i.d. random variables with law $(\delta_{-1} + \delta_1)/2$ and define $S_i := \sum_{k=1}^i X_k$ (and, by convention, $S_0 = 0$). We still write S for its linearly interpolated version. We call $(B_n(i))_{0\le i\le m_n}$ and $(F_n(i))_{0\le i\le m_n}$ the two processes whose laws are those of $(S_i)_{0\le i\le m_n}$ conditioned respectively on the events There is actually a very convenient way to construct $F_n$ from $B_n$. For $0 \le k \le m_n$, the shifted path of $B_n$ is defined by For $0 \le k \le \sigma_n - 1$, the first time at which $B_n$ reaches its minimum plus k is denoted The following proposition [2, Theorem 1] gives a construction of $F_n$ from $B_n$.
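The shift operation and the resulting construction can be sketched as follows (function names are ours; the path below is a toy ±1 bridge from 0 to −σ with σ = 2). Shifting at $r_\nu$, the first time the bridge reaches its minimum plus ν, turns the bridge into a path that first hits −σ at its final time:

```python
def theta_shift(path, k):
    """Cyclic shift at time k of a lattice path (B(0), ..., B(m)):
    Theta_k(B)(i) = B(k+i) - B(k) while k+i <= m, then wraps around,
    adding the total increment B(m) - B(k)."""
    m = len(path) - 1
    return [path[k + i] - path[k] if k + i <= m
            else path[k + i - m] + path[m] - path[k]
            for i in range(m + 1)]

def bridge_to_first_passage(path, nu):
    """Shift a +-1 bridge from 0 to -sigma at r_nu, the first time it reaches
    min(path) + nu; for nu uniform in {0, ..., sigma - 1} this is the
    construction of [2, Theorem 1]."""
    r = next(i for i, v in enumerate(path) if v == min(path) + nu)
    return theta_shift(path, r)

bridge = [0, 1, 0, -1, -2]        # a +-1 bridge from 0 to -sigma, sigma = 2
for nu in (0, 1):
    fp = bridge_to_first_passage(bridge, nu)
    # the shifted path first hits -2 only at its final time
    print(fp, fp[-1] == -2, min(fp[:-1]) > -2)
```

In the proposition, ν is drawn uniformly; here we simply run through both admissible values to exhibit the first-passage property of the output.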
Using this construction, we may show that the first-passage Brownian bridge is the limit of its discrete analog:

Lemma 14
As n goes to infinity, the process F (n) converges in law toward the process F 0→−σ [0,m] , in the space (K, d K ).
Proof. We begin as in the proof of Lemma 10. We denote by $\mathcal F_i := \sigma(S_k, 0 \le k \le i)$ the natural filtration associated with S, and by the Skorokhod theorem, we may and will assume that converges a.s. toward a standard Brownian motion $(\beta_s)_{0\le s\le m}$ for the uniform topology.
1) Let m′ < m. For n large enough, ⌈nm′⌉ < m_n. Let f be a bounded continuous function from K to R. We have Recall the notation $Q^{S_k}(i) = P(S_k = i)$. We have to deal with terms of the form where the first equality is an application of the so-called cycle lemma (see e.g. [2, Lemma 2]). Using the Markov property, we obtain Here again, the denominator of the fractional term in (25) is the same as the numerator when m′ is chosen to be 0. The fractional term in (25) converges a.s. toward and Proposition 6 ensures that this convergence is dominated. So, 2) For any α > 0 and $X = (X(s))_{0\le s\le x} \in K$, we write for its α-Hölder norm. Proposition 13 gives a stochastic domination of the α-Hölder norm of $F^{(n)}$ by that of $B^{(n)}$: we may assume that $F_n = \Theta_{r_{\nu_n}(B_n)}(B_n)$. If $0 \le s < t \le m^{(n)} - r_{\nu_n}(B_n)$, We obtain the same inequality when $m^{(n)} - r_{\nu_n}(B_n) \le s < t \le m^{(n)}$, and by the triangle inequality, we find 3) We now suppose that 0 < α < 1/2. Let ε > 0. Thanks to Lemma 11, for which we now have $q_0$ arbitrarily large, and Kolmogorov's criterion, we can find some constant C such that Ascoli's theorem [29, Chapter XX] shows that K is a compact set, so that the laws of the $F^{(n)}$'s are tight. It only remains to deal with the point m. Let δ > 0. For n large enough, on $\{F^{(n)} \in K\}$, We have shown that $F^{(n)}(m \wedge m^{(n)})$ converges in law toward the deterministic value −σ, so Slutsky's lemma allows us to conclude that the finite-dimensional marginals of $F^{(n)}$ converge toward those of F. This, together with the tightness of the laws of the $F^{(n)}$'s, yields the result thanks to Prokhorov's theorem.
For any real numbers m 1 , m 2 , l 1 , l 2 , we define the bridge on [m 1 , m 2 ] from l 1 to l 2 by , and for σ 1 > σ 2 , we define the first-passage bridge on [m 1 , m 2 ] from σ 1 to σ 2 by

The Brownian snake
We need a version of the Brownian snake's head driven by a first-passage Brownian bridge. There are several ways to define such an object. We may define it as the head of a Brownian snake whose lifetime process is a first-passage Brownian bridge $F^{\sigma\to 0}_{[0,m]}$, starting from the path $0_\sigma : t \in [0, \sigma] \mapsto 0$ (see [16, Chapter IV] or [9, Chapter 4] for a proper definition).
Let $(F_s)_{0\le s\le m}$ be a first-passage Brownian bridge from σ to 0. The Brownian snake driven by F and started at $0_\sigma$ is the path-valued process $(F_s, (W(s, t), 0 \le t \le F_s))_{0\le s\le m}$ whose law is defined by: ⋄ the conditional law of W(s, ·) given F is the law of an inhomogeneous Markov process whose transition kernel is described as follows: for 0 ≤ s ≤ s′ ≤ m, The head of this process is then defined by This description has the advantage of being very visual: W(0, ·) is the function $0_\sigma$. Then, every time F decreases, we erase the tip of the previous path, and when F increases, we glue a piece of an independent Brownian motion (see Figure 9). In what follows, we will only need the head and not the whole process. The following description gives a direct construction of this head. Conditionally given $F = F^{\sigma\to 0}_{[0,m]}$, we define a Gaussian process $(\Gamma_s)_{0\le s\le m}$ with covariance function The process (F, Γ) then has the same law as the process $(F^{\sigma\to 0}_{[0,m]}, Z_{[0,m]})$ defined above.
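To make the Gaussian description concrete, here is a sketch that builds the covariance matrix of (Γ_s) on a discrete grid. The covariance formula used, Cov(Γ_s, Γ_{s′}) = inf_{[s∧s′, s∨s′]} F − inf_{[0, s∨s′]} F, is our reading of the construction (two tips share the Brownian label increments of their common ancestral branch, and the initial path 0_σ carries label 0, hence contributes no variance); it is an assumption, as the displayed formula is omitted above.

```python
import numpy as np

def snake_head_cov(F):
    """Covariance matrix of the head labels (Gamma_{s_i}) for a lifetime path F
    sampled on a grid, under the assumed covariance
    Cov(Gamma_s, Gamma_s') = inf_{[s, s']} F - inf_{[0, s v s']} F."""
    n = len(F)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = min(F[i:j + 1]) - min(F[:j + 1])
    return K

F = [3.0, 2.0, 2.5, 1.0, 0.5]   # toy lifetime path going from sigma = 3 downward
K = snake_head_cov(F)
# sampling one head trajectory on the grid
Gamma = np.random.default_rng(0).multivariate_normal(np.zeros(len(F)), K)
print(K[2, 2], np.allclose(K, K.T))
```

Note, for instance, that a monotone decrease of F produces zero variance (the tip stays on the initial zero path), which the toy values above illustrate.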
We easily see that we can derive the law of the head from the law of the snake, and it is actually also possible to recover the whole snake from its head (see [20,Section 2]): starting from the process The process F (s), (W (s, t), 0 ≤ t ≤ F (s)) 0≤s≤m then has the law of the Brownian snake defined above. In particular, for s ∈ [0, m] fixed, the process has the law of a real Brownian motion started from 0. Using time reversal invariance, we see that the process has the same law. This fact will be used in Section 6.

The discrete snake
We will describe here an analog of the Brownian snake in the discrete setting. Let us first consider three sequences of integers $(\sigma_n)$, $(m_n)$ and $(l_n)$ such that
$$\sigma^{(n)} := \frac{\sigma_n}{\sqrt{2n}} \to \sigma, \qquad m^{(n)} := \frac{2m_n + \sigma_n}{2n} \to m \qquad\text{and}\qquad l^{(n)} := \frac{l_n}{\gamma\, n^{1/4}} \to l.$$
We call $(C_n, L_n)$ the contour pair of a random forest uniformly distributed over the set $\mathcal F^{\sigma_n}_{m_n}$ of well-labeled forests with $\sigma_n$ trees and $m_n$ tree edges. We define
$$C^{(n)} := \left(\frac{C_n(2nt)}{\sqrt{2n}}\right)_{0\le t\le m^{(n)}} \qquad\text{and}\qquad L^{(n)} := \left(\frac{L_n(2nt)}{\gamma\, n^{1/4}}\right)_{0\le t\le m^{(n)}}$$
their rescaled versions. We define the discrete snake $(W_n(i, j), 0 \le j \le C_n(i))_{0\le i\le 2m_n+\sigma_n}$ by (see Figure 10)
[Figure 10: the discrete snake.]
Let (f, l) be the well-labeled forest coded by $(C_n, L_n)$. Then for $0 \le i \le 2m_n + \sigma_n$, $(W_n(i, j))_{0\le j\le C_n(i)}$ records the labels of the unique path going from t(f) + 1 to f(i). As a result, We then extend $W_n$ to $\{(s, t) : s \in [0, 2m_n + \sigma_n], t \in [0, C_n(s)]\}$ by linear interpolation, and we let, for $0 \le s \le m^{(n)}$, $0 \le t \le C^{(n)}(s)$, is a path lying in so that we can see $W^{(n)}$ as an element of For $X \in \mathcal W_0$, we call ξ(X) the real number such that $X \in C([0, \xi(X)], K_0)$, and we endow $\mathcal W_0$ with the metric
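For a single tree with integer contour times, the snake can be reconstructed from the contour pair by the standard identity "W_n(i, j) is the label at the last time before i at which the contour sat at height j" (our formulation, on a toy example of our own):

```python
def discrete_snake(C, L):
    """Rebuild W_n from the contour pair (C_n, L_n) of a labeled tree:
    W_n(i, j) is L_n evaluated at the last time <= i the contour sat at
    height j, i.e. the label of the height-j ancestor of the vertex
    visited at contour time i."""
    W = []
    for i in range(len(C)):
        row = [L[max(t for t in range(i + 1) if C[t] == j)]
               for j in range(C[i] + 1)]
        W.append(row)
    return W

# toy tree: root (label 0) - a (label 1) - b (label 3); contour visits r, a, b, a, r
C = [0, 1, 2, 1, 0]
L = [0, 1, 3, 1, 0]
W = discrete_snake(C, L)
print(W[2])   # ancestral labels of b: [0, 1, 3]
```

Each row W[i] is thus the discrete analog of the path W(s, ·) of the continuous snake.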

Convergence of a uniform well-labeled forest
We will prove the following result.

Proposition 15
The pair $(C^{(n)}, W^{(n)})$ converges weakly toward the pair $(F^{\sigma\to 0}_{[0,m]}, W)$, in the space We readily obtain the following corollary:

Corollary 16
The pair $(C^{(n)}, L^{(n)})$ converges weakly toward the pair $(F^{\sigma\to 0}_{[0,m]}, Z_{[0,m]})$. Proposition 15 may appear stronger than Corollary 16, but it actually is not, because of the strong link between the whole snake and its head [20]. We begin with a lemma.
Lemma 17 For all 0 < δ < 1/4 and all ε > 0, there exist a constant C and an integer $n_0$ such that, as soon as $n \ge n_0$, $P(W^{(n)} \notin A) < \varepsilon$, where Proof. The proof is based on (26) and on a similar inequality for Motzkin paths (which is merely the Rosenthal inequality). The fact that the steps of the random walks we consider are bounded allows us to take the q of Lemma 9 arbitrarily large.
We need to distinguish two cases: is merely a rescaled Motzkin path.
⋄ if b n > a n , then W (n) (s, t) = 0 for a n ≤ t ≤ b n and is a rescaled Motzkin path.
In both cases, is also a rescaled Motzkin path-independent from W (n) (s, t) − W (n) (s, a n ) an≤t≤C (n) (s) .
Treating both cases separately, we obtain that there exists a constant M , independent of s, such that for n large enough, by Lemma 9. The same inequality holds with s ′ instead of s. We have For C ≥ 1, Let 0 < δ < 1 4 . Then, let 0 < α < 1/2 be such that δ < α/2, and ε > 0. Thanks to (26), we may find a constant C such that, for n sufficiently large, For this C, the inequality (27) allows us to apply Kolmogorov's criterion [30,Theorem 3.3.16]: we find a constant C ′ such that, for n large enough, Finally, which is what we needed.
Proof of Proposition 15. We begin by showing the convergence of a finite number of trajectories, together with the whole contour process, and then conclude by a tightness argument using Lemma 17.
Because m (n) → m, for n sufficiently large, s p ≤ m (n) and the vector we consider is well-defined.
By means of the Skorokhod representation theorem (see e.g. [10, Theorem 3.1.8]), we may and will assume that this convergence holds almost surely. We also suppose that (28) holds for p − 1. Then, a.s., To see this, observe that by continuity of W (s p−1 , ·). A similar inequality holds for M .
Finally, the law of is that of W (s p , ·), conditionally given which is precisely what we wanted.
Tightness. Let 0 < δ < 1/4 and ε > 0. Lemma 17 provides us with a constant C and an integer $n_0$ such that for all $n \ge n_0$, $P(W^{(n)} \notin A) < \varepsilon$, where Let $(s_k)_{k\ge 1}$ be a countable dense subset of [0, m). As, for every k ≥ 1, the sequence $(W^{(n)}(s_k, \cdot))_n$ is tight, we can find compact sets $K_k \subseteq \mathcal W_0$ such that for all k ≥ 1 and all $n \ge n_0$,

The set
$$\mathcal K := A \cap \{X \in \mathcal W_0 : \forall k \ge 1,\ X(s_k, \cdot) \in K_k\}$$
is a compact subset of $\mathcal W_0$ by Ascoli's theorem [29, Chapter XX], and for $n \ge n_0$, $P(W^{(n)} \notin \mathcal K) < 2\varepsilon$, hence the tightness of the sequence of the laws of the $W^{(n)}$'s.

Proof of Theorem 1
We adapt the proof given in [17] for the case g = 0 to our case g ≥ 1.

Setting
Let q n be uniformly distributed over the set Q n of bipartite quadrangulations of genus g with n faces. Conditionally given q n , we take v n uniformly over V (q n ) so that (q n , v n ) is uniform over the set Q • n of pointed bipartite quadrangulations of genus g with n faces. Recall that every element of Q n has the same number of vertices: n + 2 − 2g. Through the Chapuy-Marcus-Schaeffer bijection, (q n , v n ) corresponds to a uniform well-labeled g-tree with n edges (t n , l n ). The parameter ε ∈ {−1, 1} appearing in the bijection will be irrelevant to what follows.
Recall the notations $t_n(0), t_n(1), \dots, t_n(2n)$ and $q_n(0), q_n(1), \dots, q_n(2n)$ from Section 2. For technical reasons, it will be more convenient, when traveling along the g-tree, not to begin at its root but rather at the first edge of the first forest. Precisely, we define the shifted sequence $\dot q_n$, where $u_n$ is the integer recording the position of the root in the first forest of $t_n$. We endow ⟦0, 2n⟧ with the pseudo-metric $d_n$ defined by $d_n(i, j) := d_{q_n}(\dot q_n(i), \dot q_n(j))$.
We define the equivalence relation $\sim_n$ on ⟦0, 2n⟧ by declaring that $i \sim_n j$ if $\dot q_n(i) = \dot q_n(j)$, that is, if $d_n(i, j) = 0$. We call $\pi_n$ the canonical projection from ⟦0, 2n⟧ to ⟦0, 2n⟧/∼_n, and we slightly abuse notation by seeing $d_n$ as a metric on ⟦0, 2n⟧/∼_n defined by $d_n(\pi_n(i), \pi_n(j)) := d_n(i, j)$. In what follows, we will always make the same abuse with every pseudo-metric. The metric space (⟦0, 2n⟧/∼_n, $d_n$) is then isometric to $(V(q_n)\setminus\{v_n\}, d_{q_n})$, which is at $d_{GH}$-distance at most 1 from the space $(V(q_n), d_{q_n})$.
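The passage from a pseudo-metric to a metric on the quotient can be illustrated on a finite matrix (a toy sketch with our own data, not the paper's objects):

```python
def quotient_metric(d):
    """Classes of the relation i ~ j iff d[i][j] == 0, together with the
    induced metric on classes (well defined because d is a pseudo-metric)."""
    classes = []
    for i in range(len(d)):
        for cl in classes:
            if d[i][cl[0]] == 0:
                cl.append(i)
                break
        else:
            classes.append([i])
    induced = [[d[a[0]][b[0]] for b in classes] for a in classes]
    return classes, induced

# toy pseudo-metric: points 0 and 2 are identified, as under the projection pi_n
d = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
classes, m = quotient_metric(d)
print(classes, m)   # [[0, 2], [1]] [[0, 1], [1, 0]]
```

The induced matrix does not depend on the chosen class representatives, precisely because d satisfies the triangle inequality.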
As usual, we define the rescaled version: for s, t ∈ [0, 1], we let so that

Tightness of the distance processes
The first step is to show the tightness of the laws of the processes $d^{(n)}$. To that end, we use the bound (4). We define we extend it to [0, 2n] as we did for $d_n$ by (29), and we define its rescaled version $d^{\bullet(n)}$ as we did for $d^{(n)}$ by (30). We readily obtain the following bound,

Expression of $d^{\bullet(n)}$ in terms of the spatial contour function of the g-tree

Although it is not straightforward to define a contour function for the whole g-tree, we may define its spatial contour function $L_n : [0, 2n] \to \mathbb R$ by and we easily see that

Convergence results
As in Section 3, we call s n the scheme of t n , (f e n , l e n ) e∈ E(sn) its well-labeled forests, (m e n ) e∈ E(sn) and (σ e n ) e∈ E(sn) respectively their sizes and lengths, (l v n ) v∈V (sn) the shifted labels of its nodes, (M e n ) e∈ E(sn) its Motzkin bridges, and u n the integer recording the position of the root in the first forest f e * n . We call (C e n , L e n ) the contour pair of the well-labeled forest (f e n , l e n ) and we extend the definition of M e n to [0, σ e n ] by linear interpolation. As usual, we define the rescaled versions of these objects .
Combining the results of Proposition 7, Lemma 10 and Corollary 16, we find that the vector $\bigl(s_n, (m^{e(n)})_{e\in \vec E(s_n)}, (\sigma^{e(n)})_{e\in \vec E(s_n)}, (l^{v(n)})_{v\in V(s_n)}, u^{(n)}, (C^{e(n)}, L^{e(n)})_{e\in \vec E(s_n)}, (M^{e(n)})_{e\in \vec E(s_n)}\bigr)$ converges in law toward the random vector whose law is defined as follows: ⋄ the law of the vector is the probability µ defined before Proposition 7, ⋄ conditionally given $I_\infty$, the processes $(C^e_\infty, L^e_\infty)$, $e \in \vec E(s_\infty)$, and $(M^e_\infty)$, $e \in \check E(s_\infty)$, are independent, - the process $(C^e_\infty, L^e_\infty)$ has the law of a Brownian snake's head on $[0, m^e_\infty]$ going from $\sigma^e_\infty$ to 0: - the process $(M^e_\infty)$ has the law of a Brownian bridge on $[0, \sigma^e_\infty]$ from 0 to $l^e_\infty := l^{e_+}_\infty - l^{e_-}_\infty$: - the Motzkin bridges are linked through the relation Applying the Skorokhod theorem, we may and will assume that this convergence holds almost surely. As a result, note that for n large enough, $s_n = s_\infty$.

Decomposition of L (n) along the forests
In order to study the convergence of $L^{(n)}$, we will express it in terms of the $L^e$'s, as well as its limit in the space $(K, d_K)$. We then need to concatenate these processes. For $f, g \in K_0$ two functions started at 0, we call $f \bullet g \in K_0$ their concatenation, defined by $\sigma(f \bullet g) := \sigma(f) + \sigma(g)$ and, for 0 We sort the half-edges of $s_n$ according to its facial order, beginning with the root: $e_1 = e_*, \dots, e_{2(6g-3)}$, and we see that
We also sort the half-edges of s ∞ in the same way and define

Lemma 19
The sequence of the laws of the processes is tight in the space of probability measures on $C([0, 1]^2, \mathbb R)$.
Proof. First observe that, for every s, s ′ , t, t ′ ∈ [0, 1], By Fatou's lemma, we have for every k ∈ N and δ > 0, Since d • ∞ is continuous and null on the diagonal, for ε > 0, we may find δ k > 0 such that, for n sufficiently large, By taking δ k even smaller if necessary, we may assume that the inequality (33) holds for all n ≥ 1. Summing over k ∈ N, we find that for every n ≥ 1, is a compact set.

The genus g Brownian map
Proof of the first assertion of Theorem 1

Thanks to Lemma 19, there exist a subsequence $(n_k)_{k\ge 0}$ and a function $d_\infty \in C([0, 1]^2, \mathbb R)$ such that By the Skorokhod theorem, we will assume that this convergence holds almost surely. Like the functions $d^{(n)}$, the function $d_\infty$ satisfies the triangle inequality. And because $d^{(n)}(s, s) = O(n^{-1/4})$ for all s ∈ [0, 1], the function $d_\infty$ is actually a pseudo-metric. We define the equivalence relation associated with it by saying that $s \sim_\infty t$ if $d_\infty(s, t) = 0$, and we set $q_\infty := [0, 1]/\sim_\infty$. We will show the convergence claimed in Theorem 1 along the same subsequence $(n_k)_{k\ge 0}$. Thanks to (31), we only need to see that To that end, we will use the characterization of the Gromov-Hausdorff distance via correspondences. Recall that a correspondence between two metric spaces (S, δ) and (S′, δ′) is a subset $\mathcal R \subseteq S \times S'$ such that for all x ∈ S, there is at least one x′ ∈ S′ for which $(x, x') \in \mathcal R$, and vice versa. The distortion of the correspondence $\mathcal R$ is defined by Then we have [4, Theorem 7.3.25] where the infimum is taken over all correspondences between S and S′.
We define the correspondence $\mathcal R_n$ between $((2n)^{-1}⟦0, 2n⟧/\sim_n, d^{(n)})$ and $(q_\infty, d_\infty)$ as the set where $\pi_n : ⟦0, 2n⟧ \to ⟦0, 2n⟧/\sim_n$ and $\pi_\infty : [0, 1] \to q_\infty$ are the canonical projections. Its distortion is and, thanks to (34),

A bound on $d_\infty$

If we take the limit of the inequality (32) along the subsequence $(n_k)_{k\ge 0}$, we find $d_\infty(s, t) \le d^\bullet_\infty(s, t)$. Because $d^\bullet_\infty$ does not satisfy the triangle inequality, we may improve this bound by considering the largest metric on $q_\infty$ that is smaller than $d^\bullet_\infty$: for all a and $b \in q_\infty$, we have where the infimum is taken over all integers k ≥ 0 and all sequences $s_0, t_0, s_1, t_1, \dots, s_k, t_k$ satisfying $a = \pi_\infty(s_0)$, $t_i \sim_\infty s_{i+1}$ for all $0 \le i \le k - 1$, and $b = \pi_\infty(t_k)$.
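The correspondence characterization is easy to experiment with on finite spaces; a minimal sketch of the distortion and the resulting bound $d_{GH} \le \mathrm{dis}(\mathcal R)/2$, on toy data of our own:

```python
def distortion(R, d1, d2):
    """Distortion of a correspondence R between two finite metric spaces:
    sup over pairs (x, x'), (y, y') in R of |d1(x, y) - d2(x', y')|.
    One then has d_GH <= dis(R) / 2 (with equality at the optimal R)."""
    return max(abs(d1[x][y] - d2[xp][yp]) for (x, xp) in R for (y, yp) in R)

# toy spaces: two points at distance 1 vs. two points at distance 3
d1 = [[0, 1], [1, 0]]
d2 = [[0, 3], [3, 0]]
R = [(0, 0), (1, 1)]              # the obvious correspondence
dis = distortion(R, d1, d2)
print(dis, dis / 2)               # 2 1.0  (and indeed d_GH = 1 here)
```

This is the finite analog of the correspondence $\mathcal R_n$ used in the proof, where the distortion is controlled by the uniform convergence of $d^{(n)}$ toward $d_\infty$.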

Hausdorff dimension of the genus g Brownian map
We now prove the second assertion of Theorem 1. We follow the method provided by Le Gall and Miermont [18]. As usual, we proceed in two steps.

Lower bound
We start with a lemma giving a lower bound on $d_\infty(s, t)$. Let us first define a contour function $C_n : [0, 2n] \to \mathbb R_+$ for the g-tree $t_n$ by where the half-edges $e_1 = e_*, \dots, e_{2(6g-3)}$ are sorted according to the facial order of $s_n$. This function is actually the contour function of the "large" forest consisting of the concatenation of $f^{e_1}_n, f^{e_2}_n, \dots, f^{e_{2(6g-3)}}_n$. As usual, we define its rescaled version $C^{(n)}$, as well as its limit where, this time, the half-edges are sorted according to the facial order of $s_\infty$.
For 0 ≤ s, t ≤ 1, we define the set It will become clearer in a moment what this set represents, when we look at its discrete analog.

Lemma 20
The following bound holds Proof. This inequality follows easily by approximation, once we have shown its discrete analog: where the set represents the ancestral lineage of $\dot t_n(i)$ between i and j. An integer k belongs to $L_n(i, j)$ if and only if k lies between i and j (first constraint), $\dot t_n(k)$ lies in the same subtree as $\dot t_n(i)$ (second constraint), and $\dot t_n(k)$ is an ancestor of $\dot t_n(i)$ (third constraint). Beware that $L_n(j, i)$ is in general a totally different set. We may suppose $i \ne j$. In order to show (35), we consider a geodesic path $\gamma_0, \gamma_1, \dots, \gamma_{d_n(i,j)}$ from $\dot t_n(i)$ to $\dot t_n(j)$ and call $k \in L_n(i, j)$ an integer for which $L_n(k) = \min_{L_n(i,j)} L_n$. Let us call p the degree of the vertex $\dot t_n(k)$. Then removing the edges incident to $\dot t_n(k)$ breaks $t_n$ into p + 1 connected components: $\{\dot t_n(k)\}$, p − 1 trees, and a (p + 1)-th component (which is a g-tree, unless $\dot t_n(k)$ belongs to the floor of a forest). One of these components contains $\dot t_n(i)$ and another one contains $\dot t_n(j)$. Let $\gamma_r$, with $r < d_n(i, j)$, be the last vertex of the geodesic path lying in the same component as $\dot t_n(i)$. Then $\gamma_r$ is linked by an edge of $q_n$ to $\gamma_{r+1}$, which lies in another component. Moreover, the facial sequence of $t_n$ must visit $\dot t_n(k)$ between any time it visits $\gamma_r$ and any time it visits $\gamma_{r+1}$ (in that order or the other). The way we construct edges in the Chapuy-Marcus-Schaeffer bijection thus imposes $l_n(\dot t_n(k)) \ge l_n(\gamma_r) \vee l_n(\gamma_{r+1})$. Finally, $d_n(i, j) \ge d_{q_n}(\dot q_n(i), \gamma_r) \ge d_{q_n}(\dot q_n(i), v_n) - d_{q_n}(v_n, \gamma_r) = l_n(\dot t_n(i)) - l_n(\gamma_r)$, and the same holds with r + 1 instead of r, yielding $d_n(i, j) \ge l_n(\dot t_n(i)) - l_n(\dot t_n(k)) = L_n(i) - \min_{L_n(i,j)} L_n$. Let us define the measure λ on $q_\infty$ as the image of the Lebesgue measure on [0, 1] by the canonical projection $\pi_\infty : [0, 1] \to q_\infty$. From now on, we work conditionally given the parameters vector $I_\infty$. Let 0 ≤ s ≤ 1 be a point that is not of the form $\sum_{i=1}^k m^{e_i}_\infty$ for some $k = 0, \dots, 2(6g - 3)$.
This means that it is not 0, 1, or a point at which two functions are being concatenated. Such points will thereafter be called junction points.
Suppose that for some δ > 0, we can find two positive numbers r − and r + such that For a ∈ q ∞ and r > 0, we call B ∞ (a, r) the open ball centered at a with radius r for the metric d ∞ .
For all $0 \le x \le C_\infty(s) - \underline C_\infty(s)$, we define and we see that $L_\infty(s, \tau_x) = \{\tau_y, 0 \le y \le x\}$. The discussion preceding Section 5.3 shows that the process has the law of a real Brownian motion started from 0. Let η > 0. Almost surely, provided that $C_\infty(s) - \underline C_\infty(s) > 0$, the law of the iterated logarithm ensures that for x small enough, We choose $\delta = x^{1/2+\eta}$ and $r_+ = \tau_x - s$, so that the second part of (36) holds. Moreover, because s is not a junction point, on one of its neighborhoods, the function $C_\infty$ is a first-passage Brownian bridge, and is thus absolutely continuous with respect to the Wiener measure on this neighborhood. It therefore obeys the law of the iterated logarithm as well. So, a.s., for r small enough, $\inf_{0\le t\le r}(C_\infty(s + t) - C_\infty(s)) < -r^{1/2+\eta}$. It follows that $r_+ \le x^{(1/2+\eta)^{-1}} = \delta^{(1/2+\eta)^{-2}} = \delta^{4-\eta'}$ for some η′ > 0. In a similar way, we can find an $r_- < \delta^{4-\eta'}$ satisfying the first part of (36). This yields, for all δ > 0 small enough, Once again, because $C_\infty$ is absolutely continuous with respect to the Wiener measure on a neighborhood of s, a.s. $C_\infty(s) - \underline C_\infty(s) > 0$. For the record, note that if s were a junction point, we would always have $C_\infty(s) = \underline C_\infty(s)$ by definition of a first-passage bridge. We obtain that for every s that is not a junction point, (37) holds almost surely. Finally, as there are only 2(6g − 3) + 1 junction points, the Fubini-Tonelli theorem shows that a.s., for λ-almost every a, We then conclude that the Hausdorff dimension of $(q_\infty, d_\infty)$ is almost surely at least 4.
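For reference, the form of the law of the iterated logarithm at the origin invoked twice above is the standard one: for a standard Brownian motion B,

```latex
\limsup_{t \downarrow 0} \frac{B_t}{\sqrt{2t \log\log(1/t)}} = 1,
\qquad
\liminf_{t \downarrow 0} \frac{B_t}{\sqrt{2t \log\log(1/t)}} = -1
\quad \text{a.s.}
```

In particular, for every η > 0, fluctuations of order $t^{1/2-\eta}$ almost surely dominate |B| near 0, while at arbitrarily small times t the process drops below $-t^{1/2+\eta}$; these are the two estimates behind the choice $\delta = x^{1/2+\eta}$ and the bound on $r_+$.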

An expression of the constant t g
This section is dedicated to the proof of Theorem 2. Recall that the constant $t_g$ is defined by $|Q_n| \sim t_g\, n^{\frac 52 (g-1)}\, 12^n$. The relation (3) gives that $|T_n| \sim \frac 12\, t_g\, n^{\frac{5g-3}{2}}\, 12^n$, so that, thanks to (17), where Υ was defined by (14). For a given $s \in S^*$, we will concentrate on First, notice that, by integrating with respect to u, only a factor $m^{e_*}$ appears.
7.1 Integrating with respect to $(m^e)_{e\in \vec E(s)\setminus\{e_*\}}$

For $e \ne e_*$, $m^e$ is only present in the factor
$$m^{e_*}\,\bigl(-p'_{m^{e_*}}(\sigma^{e_*})\bigr)\,\bigl(-p'_{m^e}(\sigma^e)\bigr) = \sigma^{e_*}\, p_{m^{e_*}}(\sigma^{e_*})\,\bigl(-p'_{m^e}(\sigma^e)\bigr),$$
so we have to compute an integral of the form given in the following lemma: Lemma 21 Let a, b, and t be three positive numbers. Then so that there exists a function $g_t$ satisfying $f_t(a, b) = e^{-\frac{(a+b)^2}{2t}}\, g_t(b)$.
Because of (40), the function g t is actually constant and Putting all this together, we obtain the result.
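Although the displayed integral is omitted above, the constancy of $g_t$ is consistent with the following classical first-passage convolution identity, which we record as a plausible reading: for a, b, t > 0, $-p'_s(b)$ is the density in s of the hitting time of level b by a standard Brownian motion, and the strong Markov property at that hitting time gives

```latex
\int_0^t p_{t-s}(a)\,\bigl(-p_s'(b)\bigr)\,\mathrm{d}s
= p_t(a+b)
= \frac{1}{\sqrt{2\pi t}}\, e^{-(a+b)^2/(2t)},
```

since every Brownian path located at level a + b > b at time t must have hit b before. This matches $f_t(a, b) = e^{-(a+b)^2/(2t)}\, g_t(b)$ with $g_t \equiv (2\pi t)^{-1/2}$.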
The first time we integrate with respect to an $m^e$, for some $e \ne e_*$, we apply Lemma 21 with $a = \sigma^{e_*}$, $b = \sigma^e$ and $t = m^{e_*} + m^e$ (t does not depend on $m^e$), and the factor (39) is changed into $\sigma^{e_*}\, p_{m^{e_*}+m^e}(\sigma^{e_*} + \sigma^e)$.
We may then apply Lemma 21 again, with $a = \sigma^{e_*} + \sigma^e$, $b = \sigma^{e'}$ and $t = m^{e_*} + m^e + m^{e'}$, when integrating with respect to $m^{e'}$, and so on. In the end, after integrating with respect to u and $(m^e)_{e\ne e_*}$, the
If we differentiate this function ϕ with respect to every variable $x^e$, we recognize the same integral we treated while integrating with respect to $(m^e)$, The integral (38) is now equal to some constant times We follow here the ideas of [6]. The term $\sum_{e\in \check E(s)} |l^e|$ is a linear combination of the $l^v$'s. We will break the integral (42) into parts on which these coefficients are constant. This happens when the vertex labels are sorted according to a given ordering: we call $O_s$ the set of bijections from ⟦0, 4g − 3⟧ onto V(s). Let $\lambda \in O_s$ be an ordering and $v \in V(s)$. Because s is dominant, v is connected to exactly three other vertices (not necessarily distinct), which we call v′, v′′, and v′′′. When the labels are sorted according to λ, that is, when $l^{\lambda_0} < l^{\lambda_1} < \dots < l^{\lambda_{4g-3}}$, the coefficient of $l^v$ in the sum $\sum_{e\in \check E(s)} |l^e|$ is Let $e \in \check E(s)$ be a half-edge and let i (resp. j) be the smaller (resp. larger) of $\lambda^{-1}(e_-)$ and $\lambda^{-1}(e_+)$. Then $|l^e| = l^{\lambda_j} - l^{\lambda_i}$, and e contributes to the sum by a factor +1 for $l^{\lambda_j}$ and −1 for $l^{\lambda_i}$. So e contributes to d(λ, k) by a factor +1 for k ≤ j plus a factor −1 for k ≤ i. Thus the definition we just gave for d(λ, k) is consistent with (2). This, by the way, also proves that d(λ, k) > 0 for k ≠ 0.
We have Let us set $k = \lambda^{-1}(e^-_*)$. We will write $\mathbb 1_\lambda := \mathbb 1_{\{l^{\lambda_0} < l^{\lambda_1} < \dots < l^{\lambda_{4g-3}}\}}$ for short. We integrate $d(\lambda, i)\,(l^{\lambda_i} - l^{\lambda_{i-1}})$ with respect to $l^{\lambda_{4g-3}}$, then $l^{\lambda_{4g-4}}$, and so on up to $l^{\lambda_{k+1}}$. We then integrate with respect to $l^{\lambda_0}, l^{\lambda_1}, \dots, l^{\lambda_{k-1}}$. By doing so, factors $(d(\lambda, 4g-3))^{-1}, (d(\lambda, 4g-4))^{-1}, \dots, (d(\lambda, k+1))^{-1}$, then $(d(\lambda, 1))^{-1}, (d(\lambda, 2))^{-1}, \dots, (d(\lambda, k))^{-1}$, successively appear, and every time we integrate, $p^{[n]}$ is changed into $p^{[n+1]}$. All in all, The second part of (42) is a little bit trickier because it distinguishes the root from the other vertices. In order to circumvent this, we will consider the sum over all schemes with the same "unrooted" structure (we do not consider an ordering λ at this time). For any scheme $s \in S$, we note $\bar s$ the non-rooted scheme corresponding to s, and for any non-rooted scheme u, we note $u_e$ the scheme u rooted at the half-edge e. We chose the convention of fixing $l^{e^-_*}$ to be 0 because we needed one of the $l^v$'s to be 0, and $e^-_*$ was already distinguished as the root. This choice was totally arbitrary, and we could have taken any other vertex $v_0$. This translates into the fact that, for any function χ, does not actually depend on $e^-_*$. In order to see this properly, we do the following change of variables: for every v We now consider an ordering $\lambda \in O_s$. A computation very similar to the one we conducted above (just change $p^{[6g-1]}$ into $x \mapsto x\, p^{[6g-2]}(x)$) gives which becomes, after 4g − 3 successive integrations, The sum over all dominant schemes of (42) then becomes

Conclusion
We still have to compute $p^{[10g-4]}(0)$. To that end, we may use the Fubini-Tonelli theorem and rewrite (41), for n ≥ 4, as where the second line is obtained from an integration by parts (we differentiate $y \mapsto y^{n-3}$ and integrate $y \mapsto y\, p_1(y)$). As $p^{[2]}(0) = \frac 12$, we find that Taking into account everything we have done so far, we find The expression we claimed in (1) is then obtained by using the identity