Convergence of mixing times for sequences of random walks on finite graphs

We establish conditions on sequences of graphs which ensure that the mixing times of the random walks on the graphs in the sequence converge. The main assumption is that the graphs, associated measures and heat kernels converge in a suitable Gromov-Hausdorff sense. With this result we are able to establish the convergence of the mixing times on the largest component of the Erdős-Rényi random graph in the critical window, sharpening previous results for this random graph model. Our results also enable us to establish convergence in a number of other examples, such as finitely ramified fractal graphs, Galton-Watson trees and the range of a high-dimensional random walk.


Introduction
The geometric and analytic properties of random graphs have been the subject of much recent research. One strand of this development has been to examine sequences of random subgraphs of vertex transitive graphs that are, in some sense, at or near criticality. A key example is the percolation model and, for bond percolation above the upper critical dimension, we expect to see mean-field behavior in the sequence of finite graphs in the critical window. That is, the natural scaling exponents for the volume and diameter of the graph and for the mixing time are of the same order as those for the Erdős-Rényi random graph in the critical window, as given in [35].
This mean-field behavior is seen in other natural models of sequences of critical random graphs. For example [6] obtained general conditions for the geometric properties of percolation clusters on sequences of finite graphs and discussed examples such as the high dimensional torus and the n-cube, while the random walk on critical percolation clusters on the high-dimensional torus is treated in [23]. Motivated by these results we will focus on the asymptotic behavior of mixing times for random walks on sequences of finite graphs. We consider general sequences of graphs but under some strong conditions which will enable us to establish the convergence of the mixing time.
In order to demonstrate our main result we consider the Erdős-Rényi random graph. Let G(N, p) be the random subgraph of the complete graph on N labeled vertices {1, . . . , N} in which each edge is present with probability p, independently of the other edges. It is a classical result that if we set p = c/N, then as N → ∞, if c > 1 there is a giant component containing a positive fraction of the vertices, while for c < 1 the largest component is of order log N. However, if p = N^{-1} + λN^{-4/3} for some λ ∈ R, we are in the so-called critical window, and it is known that the largest connected component, C^N, has size of order N^{2/3}. The recent work of [1] has shown that the scaling limit of the graph, M, exists and can be constructed from the continuum random tree.
For the Erdős-Rényi random graph above criticality, [18] and [4] established mixing time bounds for the simple random walk on the giant component. The simple random walk on this graph is the discrete time Markov chain with transition probabilities determined by p(x, y) = 1/deg(x) for all y such that {x, y} is an edge in C^N. For the random graph in the critical window, the following result on the mixing time t^1_mix(C^N) (a precise definition will be given later in (1.8), see also Remark 1.3) of the lazy random walk (a version of the simple random walk which remains at its current vertex with probability 1/2, and otherwise moves as the simple random walk) was obtained by Nachmias and Peres ([35, Theorem 1.1]).

Theorem 1.1. Let C^N be the largest connected component of G(N, (1 + λN^{-1/3})/N) for some λ ∈ R. Then, for any ε > 0, there exists A = A(ε, λ) < ∞ such that, for all large N,

P( t^1_mix(C^N) ∉ [A^{-1}N, AN] ) < ε.

It is natural to ask for more refined results on the behavior of the family of mixing times. The purpose of this paper is to give a general criterion for the convergence of mixing times for a sequence of simple random walks on finite graphs in the setting where the graphs can be embedded nicely in a compact metric space. Due to the recent work of [1] and [9], we can apply our main result to the case of the Erdős-Rényi random graph, to obtain the following result.
Theorem 1.2. If t^p_mix(C^N) is the L^p-mixing time of the simple random walk on C^N started from its root ρ_N, then N^{-1} t^p_mix(C^N) converges in distribution, where the limiting random variable t^p_mix(ρ) ∈ (0, ∞) is the L^p-mixing time of the Brownian motion on M started from ρ.
We will later illustrate our main result with a number of other examples of random walks on sequences of finite graphs. In order to state it, though, we start by describing the general framework in which we work.

Firstly, let (F, d_F) be a compact metric space and let π be a non-atomic Borel probability measure on F with full support. We will assume that balls B_F(x, r) := {y ∈ F : d_F(x, y) < r} are π-continuity sets (i.e. π(∂B_F(x, r)) = 0 for every x ∈ F, r > 0). Secondly, take X^F = (X^F_t)_{t≥0} to be a π-symmetric Hunt process on F (for the definition and properties, see [19]), which will typically be the Brownian motion on the limit of the sequence of graphs. We suppose the following:

• X^F is conservative, i.e. its semigroup (P_t)_{t≥0} satisfies P_t 1 = 1, π-a.e., for all t > 0, (1.1)
• there exists a jointly continuous transition density (q_t(x, y))_{x,y∈F,t>0} of X^F, (1.2)
• for every x, y ∈ F and t > 0, q_t(x, y) > 0, (1.3)
• for every x ∈ F and t > 0, q_t(x, ·) is not identically equal to 1, (1.4)

where conditions (1.3) and (1.4) are assumed to exclude various trivial cases, and by transition density we mean the kernel q_t(x, y) such that

E_x[f(X^F_t)] = ∫_F q_t(x, y) f(y) π(dy)

for all bounded continuous functions f on F. Furthermore, we will say that the transition density (q_t(x, y))_{x,y∈F,t>0} converges to stationarity in an L^p sense for some p ∈ [1, ∞] if it holds that

lim_{t→∞} D_p(x, t) = 0, (1.5)

for every x ∈ F, where D_p(x, t) := ‖q_t(x, ·) − 1‖_{L^p(π)}. If this condition is satisfied, then it is possible to check that the L^p-mixing time of F,

t^p_mix(F) := inf{t > 0 : sup_{x∈F} D_p(x, t) ≤ 1/4}, (1.6)

is a finite quantity (see Section 3). Finally, note that t^p_mix(F) ≤ t^{p′}_mix(F) for p ≤ p′, which can easily be shown using the Hölder inequality.
We continue by introducing some general notation for graphs and their associated random walks. First, fix G = (V(G), E(G)) to be a finite connected graph with at least two vertices, where V(G) denotes the vertex set and E(G) the edge set of G, and suppose d_G is a metric on V(G). In some examples, d_G will be a rescaled version of the usual shortest path graph distance, by which we mean that d_G(x, y) is some multiple of the number of edges in the shortest path from x to y in G, but this is not always the most convenient choice. Define a symmetric weight function µ^G : V(G)^2 → R_+ that satisfies µ^G_{xy} > 0 if and only if {x, y} ∈ E(G). The discrete time random walk on the weighted graph G is then the Markov chain ((X^G_m)_{m≥0}, P^G_x, x ∈ V(G)) with transition probabilities (P^G(x, y))_{x,y∈V(G)} defined by P^G(x, y) := µ^G_{xy}/µ^G_x, where µ^G_x := Σ_{y∈V(G)} µ^G_{xy}. If we define a measure π^G on V(G) by setting, for A ⊆ V(G), π^G(A) := Σ_{x∈A} µ^G_x / Σ_{x∈V(G)} µ^G_x, then π^G is the invariant probability measure for X^G. The transition density of X^G, with respect to π^G, is given by (p^G_m(x, y))_{x,y∈V(G),m≥0}, where

p^G_m(x, y) := P^G_x(X^G_m = y)/π^G({y}).

Due to parity concerns for bipartite graphs, we will consider a smoothed version of this function, (q^G_m(x, y))_{x,y∈V(G),m≥0}, obtained by setting

q^G_m(x, y) := (p^G_m(x, y) + p^G_{m+1}(x, y))/2, (1.7)

and we write D^G_p(x, m) := ‖q^G_m(x, ·) − 1‖_{L^p(π^G)}. The L^p-mixing time of G is then defined by

t^p_mix(G) := inf{m > 0 : sup_{x∈V(G)} D^G_p(x, m) ≤ 1/4}. (1.8)

Finally, in the case that we are considering a sequence of graphs (G_N)_{N≥1}, we will usually abbreviate π^{G_N} to π_N and q^{G_N} to q^N, etc.

Remark 1.3. In [35], the mixing time of C^N is defined in terms of the total variation distance, that is,

T_mix(C^N) = min{t : ‖P_t(x, ·) − π(·)‖_TV ≤ 1/8, ∀x ∈ V(C^N)}, (1.9)

where P_t(x, B) = Σ_{y∈B} p^N_t(x, y)π(y) for B ⊆ V(C^N), p^N_t(x, y) is the transition density for the random walk, and ‖µ − ν‖_TV = max_{B⊆V(C^N)} |µ(B) − ν(B)| for probability measures µ, ν on V(C^N). (To be precise, the 1/8 in (1.9) is 1/4 in [35], but this only affects the constants in the results.)
However, noting that ‖µ − ν‖_TV = (1/2) Σ_{x∈V(C^N)} |µ({x}) − ν({x})| (see, for example, [34, Proposition 4.2]), one sees that T_mix(C^N) = t^1_mix(C^N). Also note that [35] considers the lazy walk on the graph to avoid parity issues, but the same techniques will apply to the mixing time defined in terms of the smoothed heat kernel introduced at (1.7).
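The discrete objects just introduced are straightforward to experiment with numerically. The following sketch is illustrative only: the smoothed kernel q_m = (p_m + p_{m+1})/2 and the threshold 1/4 are taken as working assumptions matching the conventions of this section, and all function names are ours rather than from the paper.

```python
import numpy as np

def walk_quantities(mu):
    """Given a symmetric weight matrix mu, return the transition matrix P
    (P(x,y) = mu_xy / mu_x) and the invariant probability measure pi."""
    mu = np.asarray(mu, dtype=float)
    deg = mu.sum(axis=1)                # mu_x = sum_y mu_xy
    P = mu / deg[:, None]
    pi = deg / deg.sum()
    return P, pi

def smoothed_density(P, pi, m):
    """Smoothed transition density q_m(x,y) = (p_m + p_{m+1})/2,
    where p_m(x,y) = P^m(x,y) / pi(y) is the density w.r.t. pi."""
    Pm = np.linalg.matrix_power(P, m)
    Pm1 = Pm @ P
    return (Pm / pi[None, :] + Pm1 / pi[None, :]) / 2

def D_p(P, pi, m, p=1.0):
    """L^p(pi) distance to stationarity of the smoothed density, per vertex."""
    q = smoothed_density(P, pi, m)
    if np.isinf(p):
        return np.abs(q - 1).max(axis=1)
    return ((np.abs(q - 1) ** p * pi[None, :]).sum(axis=1)) ** (1.0 / p)

def t_mix(P, pi, p=1.0, m_max=10_000):
    """Smallest m with sup_x D_p(x, m) <= 1/4 (threshold as in the text)."""
    for m in range(1, m_max):
        if D_p(P, pi, m, p).max() <= 0.25:
            return m
    raise RuntimeError("no mixing within m_max steps")
```

On the 4-cycle, whose simple random walk is periodic, the unsmoothed density p_m oscillates forever, while the smoothed kernel is already uniform at m = 1; this is precisely the parity issue the smoothing is designed to remove.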
We are now ready to state the assumption under which we are able to prove the convergence of mixing times for the random walks on a sequence of graphs. This captures the idea that, when suitably rescaled, the discrete state spaces, invariant measures and transition densities of a sequence of graphs converge to (F, d F ), π and (q t (x, y)) x,y∈F,t>0 , respectively. Its formulation involves a spectral Gromov-Hausdorff topology, the definition of which is postponed until Section 2, and a useful sufficient condition for it will be given in Proposition 2.4 below. Note that we extend the definition of the discrete transition densities on graphs to all positive times by linear interpolation of (q G m (x, y)) m≥0 for each pair of vertices x, y ∈ V (G). Note also that the extended transition densities are different from those of continuous time Markov chains.
Assumption 1. Suppose that (G_N)_{N≥1} is a sequence of finite connected graphs with at least two vertices for which there exists a sequence (γ(N))_{N≥1} such that, for any compact interval I ⊂ (0, ∞),

(V(G_N), π_N, (q^N_{γ(N)t}(x, y))_{x,y∈V(G_N),t∈I}) → (F, π, (q_t(x, y))_{x,y∈F,t∈I})

in a spectral Gromov-Hausdorff sense.
In the case where we have random graphs, we will typically assume that the above convergence holds in distribution. Our main conclusion is then the following.

Theorem 1.4. Suppose that Assumption 1 is satisfied. If p ∈ [1, ∞] is such that the transition density (q_t(x, y))_{x,y∈F,t>0} converges to stationarity in an L^p sense, then t^p_mix(F) ∈ (0, ∞) and

γ(N)^{-1} t^p_mix(G_N) → t^p_mix(F). (1.10)

In Section 3.2, we will explain how to derive a variation of Theorem 1.4 that concerns the convergence of mixing times of processes started at a distinguished point in the state space.
We emphasize that a key part of our paper is to verify Assumption 1 and apply Theorem 1.4 in various interesting examples (including the Erdős-Rényi random graphs in the critical window as mentioned above). Therefore, we devote considerable space to applying our results to such examples.
The organization of the paper is as follows. In Section 2, we give a precise definition of spectral Gromov-Hausdorff convergence and establish some of its basic properties. In Section 3, we prove Theorem 1.4 and derive a variation of the theorem for distinguished starting points. Some sufficient conditions for (1.1)-(1.5) are given in Section 4. A selection of examples where the assumptions of Theorem 1.4 can be verified, and hence we have convergence of the mixing time sequence, is given in Section 5. In Section 6 we introduce some geometric conditions on graphs for upper and lower bounds on the mixing times of the corresponding symmetric Markov chains. We use these ideas to derive tail estimates of mixing times on random graphs in the case of the continuum random tree and the Erdős-Rényi random graph. The proofs of these results can be found in the Appendix.

Spectral Gromov-Hausdorff convergence
The aim of this section is to define a spectral Gromov-Hausdorff distance on triples consisting of a metric space, a measure and a heat kernel-type function that will allow us to make Assumption 1 precise. We will also derive an equivalent characterization of this assumption that will be applied in the subsequent section when proving our mixing time convergence result, and present a sufficient condition for Assumption 1 that will be useful when it comes to checking it in examples. Note that we do not need to assume (1.3), (1.4) in this section, and only use (1.1) to deduce Proposition 2.4 from a result of [14].
First, for a compact interval I ⊂ (0, ∞), let M̃_I be the collection of triples of the form (F, π, q), where F = (F, d_F) is a non-empty compact metric space, π is a Borel probability measure on F and q = (q_t(x, y))_{x,y∈F,t∈I} is a jointly continuous real-valued function of (t, x, y). We say two elements, (F, π, q) and (F′, π′, q′), of M̃_I are equivalent if there exists an isometry f : F → F′ such that π ◦ f^{-1} = π′ and q′_t ◦ f = q_t for every t ∈ I, by which we mean q′_t(f(x), f(y)) = q_t(x, y) for every x, y ∈ F, t ∈ I. Define M_I to be the set of equivalence classes of M̃_I under this relation. We will often abuse notation and identify an equivalence class in M_I with a particular element of it. Now, set

∆_I((F, π, q), (F′, π′, q′)) := inf_{Z,φ,φ′,C} { d^Z_H(φ(F), φ′(F′)) + d^Z_P(π ◦ φ^{-1}, π′ ◦ φ′^{-1}) + sup_{(x,x′),(y,y′)∈C} ( d_Z(φ(x), φ′(x′)) + sup_{t∈I} |q_t(x, y) − q′_t(x′, y′)| ) },

where the infimum is taken over all metric spaces Z = (Z, d_Z), isometric embeddings φ : F → Z, φ′ : F′ → Z, and correspondences C between F and F′; here d^Z_H is the Hausdorff distance between compact subsets of Z, and d^Z_P is the Prohorov distance between Borel probability measures on Z. Note that, by a correspondence C between F and F′, we mean a subset of F × F′ such that for every x ∈ F there exists at least one x′ ∈ F′ such that (x, x′) ∈ C and, conversely, for every x′ ∈ F′ there exists at least one x ∈ F such that (x, x′) ∈ C.
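To make the ingredients of ∆_I concrete, the following toy sketch evaluates the analogous terms for two finite triples that are already embedded in a common finite metric space Z. It does not attempt the infimum over embeddings and correspondences, the Prohorov distance is found by brute force over subsets (feasible only for very small Z), and all names are illustrative rather than taken from the paper.

```python
import itertools
import numpy as np

def hausdorff(dZ, A, B):
    """Hausdorff distance between point sets A, B (index lists) in Z."""
    sub = dZ[np.ix_(A, B)]
    return max(sub.min(axis=1).max(), sub.min(axis=0).max())

def prohorov(dZ, A, muA, B, muB):
    """Prohorov distance between measures muA on A and muB on B, by brute
    force: the smallest candidate eps with mu(S) <= nu(S^eps) + eps for all S
    (and vice versa), searching eps over the finite set of distances in Z."""
    n = len(dZ)
    pA = np.zeros(n); pA[A] = muA
    pB = np.zeros(n); pB[B] = muB
    for eps in sorted(set(np.round(dZ, 10).flatten())):
        ok = True
        for r in range(1, n + 1):
            for S in itertools.combinations(range(n), r):
                S = list(S)
                Seps = dZ[S].min(axis=0) <= eps + 1e-12   # eps-enlargement of S
                if pA[S].sum() > pB[Seps].sum() + eps + 1e-12 or \
                   pB[S].sum() > pA[Seps].sum() + eps + 1e-12:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            return eps
    return float(dZ.max())

def kernel_distortion(q, qp, corr):
    """sup over paired points of |q_t(x,y) - q'_t(x',y')| at one fixed time t;
    q, qp are kernel matrices in local indices, corr a list of index pairs."""
    return max(abs(q[x, y] - qp[xp, yp])
               for (x, xp) in corr for (y, yp) in corr)
```

The brute-force Prohorov search is exponential in |Z| and is only meant to make the definition tangible; the actual metric additionally optimizes these quantities over all embeddings and correspondences.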
In the following lemma, we check that the above definition gives us a metric and that the corresponding space is separable. (The latter fact will be useful when it comes to making convergence in distribution statements regarding the mixing times of sequences of random graphs, as is done in Sections 5.2 and 5.3, for example.) Before this, however, let us make a few remarks about the inspiration for the distance in question. In the infimum characterizing ∆_I, the first term is simply that used in the standard Gromov-Hausdorff distance (see [7, Definition 7.3.10], for example). In fact, as far as the topology is concerned, this term could have been omitted, since it is absorbed by the other terms in the expression, but we find it technically convenient and somewhat instructive to retain it. The second term is that considered by the authors of [22] in defining their 'Gromov-Prohorov' distance between metric measure spaces. The final term is closely related to one used in [16, Section 6] when defining a distance between spatial trees, that is, real trees equipped with a continuous function. Indeed, the notion of a correspondence is quite standard in the Gromov-Hausdorff setting as a way to relate two compact metric spaces. One can, for example, alternatively define the Gromov-Hausdorff distance between compact metric spaces as half the infimum of the distortion of the correspondences between them (see [7, Theorem 7.3.25]).

Lemma 2.1. (M_I, ∆_I) is a separable metric space.

Proof. Fix a compact interval I ⊂ (0, ∞). That ∆_I is a non-negative, symmetric function is obvious. To prove that it is also the case that ∆_I((F, π, q), (F′, π′, q′)) < ∞ for any choice of (F, π, q), (F′, π′, q′) ∈ M_I, simply consider Z to be the disjoint union of F and F′, with the distance between the two parts set to a suitably large constant.

We next show that ∆_I is positive definite. Suppose (F, π, q), (F′, π′, q′) ∈ M_I are such that ∆_I((F, π, q), (F′, π′, q′)) = 0.
For every ε > 0, we can thus choose Z, φ, φ′, C such that the sum of the quantities in the defining infimum of ∆_I is bounded above by ε. Moreover, there exists a δ ∈ (0, ε] such that

sup_{x,x′,y,y′∈F: d_F(x,x′),d_F(y,y′)≤δ, t∈I} |q_t(x, y) − q_t(x′, y′)| ≤ ε. (2.1)

Let (x_i)_{i≥1} be a dense sequence of distinct elements of F (in the case F is finite, we suppose that the sequence terminates after having listed all of the elements of F). By the compactness of F, there exists an integer N_ε such that the balls B_F(x_i, δ), i = 1, . . . , N_ε, cover F. Setting A_i := B_F(x_i, δ) \ ∪_{j<i} B_F(x_j, δ), we obtain that (A_i)_{i=1}^{N_ε} is a disjoint cover of F, and we then consider a function f_ε : F → F′ obtained by setting f_ε(x) := x′_i for x ∈ A_i, where x′_i is an element of F′ with (x_i, x′_i) ∈ C. Clearly, by definition, f_ε is a measurable function. It is further the case that it satisfies, for any x ∈ F,

d_Z(φ(x), φ′(f_ε(x))) ≤ d_F(x, x_i) + d_Z(φ(x_i), φ′(x′_i)) ≤ 2ε,

where, in the above, we assume that i ∈ {1, . . . , N_ε} is such that x ∈ A_i. From this, it readily follows that

sup_{x,y∈F} |d_{F′}(f_ε(x), f_ε(y)) − d_F(x, y)| ≤ 4ε, (2.2)

and d^{F′}_P(π ◦ f_ε^{-1}, π′) ≤ 3ε, (2.3)

where d^{F′}_P is the Prohorov distance on F′. By applying (2.1), we also have that

sup_{x,y∈F,t∈I} |q′_t(f_ε(x), f_ε(y)) − q_t(x, y)| ≤ 2ε. (2.4)

To continue, we use a diagonalization argument to deduce the existence of a sequence (ε_n)_{n≥1} decreasing to 0 such that f_{ε_n}(x_i) converges to some limit f(x_i) ∈ F′ for every i ≥ 1. From (2.2), we obtain that d_{F′}(f(x_i), f(x_j)) = d_F(x_i, x_j) for every i, j ≥ 1, and so we can extend the map f continuously to the whole of F ([7, Proposition 1.5.9]). This construction immediately implies that f is distance preserving. Moreover, reversing the roles of F and F′, we are able to find a distance preserving map from F′ to F. Hence f must be an isometry. To check that (F, π, q) and (F′, π′, q′) are equivalent, it therefore remains to check that π ◦ f^{-1} = π′ and q′_t ◦ f = q_t for every t ∈ I. Fix ε > 0 and recall the definition of (A_i)_{i=1}^{N_ε} and f_ε. For any x ∈ F, along the sequence (ε_n)_{n≥1} we have

d_{F′}(f_{ε_n}(x), f(x)) ≤ 6ε + d_{F′}(f_{ε_n}(x_i), f(x_i)), (2.5)

where we are again assuming that i ∈ {1, . . . , N_ε} is such that x ∈ A_i, and have applied (2.2) and the distance-preserving property of f. In particular, this implies that π ◦ f_{ε_n}^{-1} converges weakly to π ◦ f^{-1}, and we can combine this with (2.3) to deduce that d^{F′}_P(π ◦ f^{-1}, π′) can be made arbitrarily small. Since ε > 0 was arbitrary, this yields that π ◦ f^{-1} = π′. Finally, (2.4) and (2.5) imply that sup_{x,y∈F,t∈I} |q′_t(f(x), f(y)) − q_t(x, y)| can be made arbitrarily small, and so q′_t ◦ f = q_t for every t ∈ I follows from the continuity properties of q′.
This completes the proof of the fact that if ∆_I((F, π, q), (F′, π′, q′)) = 0, then the triples (F, π, q) and (F′, π′, q′) are equivalent in the sense described at the start of the section. Consequently, ∆_I is indeed positive definite on the set of equivalence classes M_I.
To complete the proof, we only need to show separability. This is straightforward, however, as for any element of M_I one can construct an approximating sequence that incorporates only: metric spaces with a finite number of points and rational distances between them, probability measures on these with a rational mass at each point, and functions that are defined (at each coordinate pair) to be equal to rational values at a finite collection of rational time points and are linear between these. To be more explicit, let (F, π, q) be an element of M_I, and then define a sequence (F_N, π_N, q_N)_{N≥1} as follows. First, let F_N be a finite N^{-1}-net of F, which exists because F is compact. By perturbing d_F, it is possible to define a metric d_{F_N} on F_N such that |d_{F_N}(x, y) − d_F(x, y)| ≤ N^{-1} and moreover d_{F_N}(x, y) ∈ Q for all x, y ∈ F_N. Now, since F_N is an N^{-1}-net of F, it is possible to choose a partition (A_x)_{x∈F_N} of F such that x ∈ A_x and the diameter of A_x (with respect to d_F) is no greater than 2N^{-1}. Moreover, it is possible to choose the partition in such a way that A_x is measurable for each x ∈ F_N. We construct a probability measure on F_N by choosing π_N({x}) ∈ Q such that |π_N({x}) − π(A_x)| ≤ N^{-1} (subject to the constraint that Σ_{x∈F_N} π_N({x}) = 1). Finally, define ε_N by setting

ε_N := sup{ |q_s(x, y) − q_t(x′, y′)| : s, t ∈ I, |s − t| ≤ N^{-1}, d_F(x, x′), d_F(y, y′) ≤ 2N^{-1} },

so that, by the joint continuity of q, ε_N → 0 as N → ∞. Let inf I ≤ t_0 ≤ t_1 ≤ · · · ≤ t_K ≤ sup I be a set of rational times such that |t_0 − inf I|, |sup I − t_K| and |t_{k+1} − t_k|, k = 0, . . . , K − 1, are all bounded above by N^{-1}. Choose q_N(x, y, t_k) ∈ Q with |q_N(x, y, t_k) − q_{t_k}(x, y)| ≤ N^{-1} for each x, y ∈ F_N and each k, and then extend q_N to have domain F_N × F_N × I by linear interpolation in t at each pair of vertices. This construction readily yields that ∆_I((F, π, q), (F_N, π_N, q_N)) ≤ 6N^{-1} + 3ε_N → 0. Since the class of triples from which the approximating sequence is chosen is clearly countable, this completes the proof of separability.
We will say that a sequence in M_I converges in a spectral Gromov-Hausdorff sense if it converges to a limit in this space with respect to the metric ∆_I. We note that in the framework of compact Riemannian manifolds, different but related notions of spectral distances were introduced by Bérard, Besson and Gallot ([5]) and by Kasue and Kumura ([24]). Moreover, by applying our characterization of spectral Gromov-Hausdorff convergence, we are able to deduce that if Assumption 1 holds, then we can isometrically embed all the rescaled graphs, measures and transition densities upon them into a common metric space (E, d_E) so that they converge to the relevant limit objects in a more standard way, as the following lemma makes precise. Note that in the proof of the result, and henceforth, we define balls in the space (E, d_E) by setting B_E(x, r) := {y ∈ E : d_E(x, y) < r}.

Lemma 2.2. Suppose that Assumption 1 holds. Then there exist isometric embeddings of (V(G_N), d_{G_N}), N ≥ 1, and (F, d_F) into a common metric space (E, d_E) such that

d^E_H(V(G_N), F) → 0, (2.6)

d^E_P(π_N, π) → 0, (2.7)

and also, for every x, y ∈ F,

lim_{N→∞} sup_{t∈I} |q^N_{γ(N)t}(g_N(x), g_N(y)) − q_t(x, y)| = 0, (2.8)

where, for brevity, we have identified the spaces (V(G_N), d_{G_N}), N ≥ 1, and (F, d_F), and the measures upon them, with their isometric embeddings in (E, d_E). For each x ∈ F, we define g_N(x) to be a vertex of V(G_N) whose d_E-distance from x is at most d^E_H(V(G_N), F).

Proof. Fix a compact interval I ⊂ (0, ∞). By Assumption 1, for each N ≥ 1 it is possible to find metric spaces (E_N, d_N), isometric embeddings φ_N : V(G_N) → E_N, φ′_N : F → E_N, and correspondences C_N between V(G_N) and F such that, identifying the original objects and their embeddings, the sum of the quantities in the defining infimum of ∆_I is bounded above by ε_N, (2.9) where ε_N → 0. Now, proceeding similarly to the proof of the triangle inequality in Lemma 2.1, set E to be the disjoint union of E_N, N ≥ 1, and define a distance on it by chaining together the distances on the individual spaces via the embedded copies of F. Quotienting out points that are separated by distance 0 results in a metric space (E, d_E) (again, this is a slight abuse of notation), into which we have natural isometric embeddings of the metric spaces (V(G_N), d_{G_N}), N ≥ 1, and (F, d_F). Moreover, in the metric space (E, d_E), it readily follows from (2.9) that the relevant isometrically embedded objects satisfy (2.6) and (2.7).
To prove (2.8), first note that the equicontinuity estimate (2.10) holds for the rescaled transition densities. Now, for every x ∈ F and N ≥ 1, there exists an x_N ∈ V(G_N) such that (x_N, x) ∈ C_N, and comparing q^N_{γ(N)t}(g_N(x), g_N(y)) with q_t(x, y) via such pairs yields a bound in which the second inequality is an application of (2.10). Letting N → ∞ and applying the joint continuity of (q_t(x, y))_{x,y∈F,t>0}, we obtain the desired result.
For our later convenience, let us note a useful tightness condition for the rescaled transition densities that was essentially established in the proof of the previous result.

Lemma 2.3. Suppose that Assumption 1 holds. Then, for any compact interval I ⊂ (0, ∞), the equicontinuity condition (2.11) holds for the rescaled transition densities (q^N_{γ(N)t}(x, y))_{x,y∈V(G_N),t∈I}.
Proof. Recalling the continuity property of q and taking the limit as N → ∞ in (2.10) yields the bound required for (2.11). Again appealing to the continuity of q, the right-hand side of this bound converges to 0 as δ → 0, which completes the proof.
It is straightforward to reverse the conclusions of the previous two lemmas to check that if (2.6), (2.7), (2.8) and (2.11) hold, then so does Assumption 1. Indeed, under these assumptions, we have isometric embeddings of (V(G_N), d_{G_N}), N ≥ 1, and (F, d_F) into a common metric space (E, d_E) for which: (2.6) gives the Hausdorff convergence of sets; (2.7) gives the Prohorov convergence of measures; and moreover, it is elementary to check from (2.8) and (2.11) that, with respect to the correspondences that pair each x ∈ F with a nearest point of V(G_N) in (E, d_E) (and vice versa), the relevant transition densities converge uniformly, as described in the definition of the metric ∆_I. Thus, in examples, it will suffice to check these equivalent conditions when seeking to verify Assumption 1. In fact, it is further possible to weaken these assumptions slightly by appealing to a local limit theorem from [14]. To be precise, when the transition densities of the graphs satisfy the tightness condition (2.11), we can apply [14, Theorem 15] to replace the local convergence statement of (2.8) with a central limit-type convergence statement that need only hold on a dense set of points. Note that, although in [14] it was assumed that the metric on G_N was a shortest path graph distance, exactly the same argument yields the corresponding conclusion in our setting, and so we simply state the result.
Proposition 2.4. Suppose that the spaces (V(G_N), d_{G_N}), N ≥ 1, and (F, d_F) can be isometrically embedded into a common metric space (E, d_E) in such a way that (2.6) and (2.7) are both satisfied. Moreover, assume that there exists a dense subset F* of F such that, for any compact interval I ⊂ (0, ∞) and every x, y ∈ F*,

q^N_{γ(N)t}(g_N(x), g_N(y)) → q_t(x, y)

uniformly for t ∈ I, and also (2.11) holds. Then Assumption 1 holds.
To complete this section, let us observe that [14] also provides two ways to check (2.11): one involving a resistance estimate on the graphs in the sequence ([14, Proposition 17]), and one involving the parabolic Harnack inequality ([14, Proposition 16]). Since the first of these two methods will be applied in several of our examples later, let us recall the result here. To allow us to state it, we define R_{G_N}(x, y) to be the resistance between x and y in V(G_N) (see (6.1)), when we suppose that G_N is an electrical network with the conductance of each edge given by the weight function µ^{G_N}. This defines a metric on V(G_N), for which the following result is proved as [14, Proposition 17]. As above, note that although it was a shortest path graph distance that was considered in [14], the same proof applies for a general distance on the graph in question. Moreover, the statement of the lemma is slightly different from that of the corresponding result in [14], because there the scaling α(N) was absorbed into the definition of the metric.
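For intuition, the effective resistance just introduced can be computed for a finite weighted graph from the Moore-Penrose pseudoinverse of the weighted Laplacian, via the classical identity R(x, y) = L^+_{xx} + L^+_{yy} - 2 L^+_{xy}. The following is a generic sketch of that standard formula (with edge conductances given by the weights µ_xy), not code from the paper.

```python
import numpy as np

def effective_resistance(mu):
    """Effective resistance matrix of a connected weighted graph, where
    mu[x, y] is the conductance of the edge {x, y} (0 if absent)."""
    mu = np.asarray(mu, dtype=float)
    L = np.diag(mu.sum(axis=1)) - mu          # weighted graph Laplacian
    Lp = np.linalg.pinv(L)                    # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    # R(x, y) = L+_xx + L+_yy - 2 L+_xy (L+ is symmetric here)
    return d[:, None] + d[None, :] - Lp - Lp.T
```

For a unit-conductance path on three vertices this gives R(0, 2) = 2 (two unit resistors in series), and on the unit triangle R(x, y) = 2/3 for each pair (one edge in parallel with a two-edge path); effective resistance is indeed a metric on the vertex set, as stated above.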
Convergence of L^p-mixing times

Proof of Theorem 1.4

In this subsection we prove the mixing time convergence result of Theorem 1.4. Throughout, we will suppose that Assumption 1 holds and that the graphs G_N and limiting metric space F have been embedded into a common metric space (E, d_E) in the way described by Lemma 2.2.
Recall from the introduction the definition of D_p(x, t) = ‖q_t(x, ·) − 1‖_{L^p(π)}, the L^p-distance from stationarity of the process X^F started from x, at time t. By applying the continuity of (q_t(x, y))_{x,y∈F,t>0}, the compactness of F and the finiteness of π, it is easy to check that this quantity is finite for every x ∈ F and t > 0. The next lemma collects together a number of other basic properties of D_p(x, t) that we will apply later (the first part is a minor extension of [8, Proposition 3.1] in our setting).
Lemma 3.1. For every x ∈ F, the function t → D_p(x, t) is continuous and strictly decreasing. Furthermore, we have

lim_{t→0} D_1(x, t) = 2, (3.1)

and hence lim inf_{t→0} D_p(x, t) ≥ 2 for every p ∈ [1, ∞].

Proof. Continuity is immediate from the continuity of the transition density, and so we turn to checking that t → D_p(x, t) is strictly decreasing. First, a standard argument involving an application of Jensen's inequality and the invariance of π allows one to deduce that D_p(x, s + t) ≤ D_p(x, t) for every s, t > 0. To check strictness, fix s, t > 0, suppose that D_p(x, s + t) = D_p(x, t), and set f := q_t(x, ·) − 1, f_1 := |P_s f|^p and f_2 := P_s(|f|^p). By the assumption on f and the fact that X^F is conservative and π-symmetric, we have that ∫_F f_1 dπ = D_p(x, s + t)^p = D_p(x, t)^p = ∫_F f_2 dπ. Furthermore, Jensen's inequality implies f_1(y) ≤ f_2(y). Thus, it must be the case that f_1(y) = f_2(y), π-a.e. In particular, because π is a probability measure, there exists a y ∈ F such that f_1(y) = f_2(y). In the case p > 1, the preceding conclusion readily implies that f is constant q_s(y, z)π(dz)-a.e. Recalling the assumption that the transition density is everywhere strictly positive, namely (1.3), it must therefore hold that f is constant π-a.e. Observing that for s, t > 0 we can write D_p(x, s + t) = ‖P_s(q_t(x, ·) − 1)‖_{L^p(π)}, and that ∫_F (q_t(x, y) − 1)π(dy) = 0, the constant must be zero, so that q_t(x, ·) = 1, π-a.e. However, condition (1.4) and the assumption that the transition density is continuous imply that there exists a non-empty open set on which q_t(x, ·) ≠ 1. Thus, because π has full support, it is not the case that q_t(x, ·) = 1, π-a.e., and we must have D_p(x, s + t) < D_p(x, t), as desired.
For p = 1, the identity f_1(y) = f_2(y) implies that f is either non-negative or non-positive, π-a.e. Consequently, if we suppose that D_p(x, s + t) = ‖P_s(q_t(x, ·) − 1)‖_{L^p(π)} = D_p(x, t) for some s > 0, then it must be the case that q_t(x, ·) − 1 is either non-negative or non-positive, π-a.e. However, since ∫_F (q_t(x, y) − 1)π(dy) = 0 (due to (1.1)) and (1.4) holds, we arrive at a contradiction. In particular, it must be the case that D_p(x, s + t) < D_p(x, t), and this completes the proof of strict monotonicity.
To establish the limit in (3.1), it will suffice to prove the result in the case p = 1 (the corresponding lower bound for other values of p then follows from Jensen's inequality). Let x ∈ F and r > 0; then, since q_t(x, ·)π is a probability measure,

D_1(x, t) ≥ 2( ∫_{B_F(x,r)} q_t(x, y)π(dy) − π(B_F(x, r)) ) = 2P_x(X^F_t ∈ B_F(x, r)) − 2π(B_F(x, r)),

where (1.1) is used in the last equality. Since X^F is a Hunt process, the first term here converges to 2 as t → 0. Furthermore, because π is non-atomic, the second term can be made arbitrarily small by a suitable choice of r. The result follows.
We continue by defining the L^p-mixing time at x ∈ F by setting

t^p_mix(x) := inf{t > 0 : D_p(x, t) ≤ 1/4}.

In fact, the previous lemma yields that D_p(x, t^p_mix(x)) = 1/4 whenever t^p_mix(x) < ∞. That the discrete mixing times at a point converge, when suitably rescaled, to the continuous mixing time there is the conclusion of the following proposition.
Proposition 3.2. Suppose that Assumption 1 holds and that p ∈ [1, ∞] is such that (1.5) holds for x ∈ F. Then

γ(N)^{-1} t^p_mix(g_N(x)) → t^p_mix(x), (3.4)

where, as in the statement of Lemma 2.2, g_N(x) is a vertex of V(G_N) lying close to x in (E, d_E), and t^p_mix(g_N(x)) denotes the L^p-mixing time of the random walk on G_N started from g_N(x).
Proof. Suppose p ∈ [1, ∞] is such that (1.5) holds for x ∈ F, set t_0 := t^p_mix(x) ∈ (0, ∞), and fix ε > 0. By (1.2) and the tightness of Lemma 2.3, there exists a δ > 0 such that the estimates (3.2) and (3.3) hold, where I := [t_0/2, 2t_0]. Moreover, by the compactness of F, there exists a finite collection of balls of radius at most δ covering F, from which we construct a disjoint cover A_1, . . . , A_k of F. Now, suppose t ∈ I, and decompose the difference between D^N_p(g_N(x), γ(N)t) and D_p(x, t) into the terms T_1, . . . , T_4 by comparing the relevant integrals over each of the sets A_1, . . . , A_k. From (3.2), we immediately deduce that T_1 ≤ ε. For T_2, we first observe that the fact that balls are π-continuity sets implies that A_1, . . . , A_k are also π-continuity sets. Hence π_N(A_i) → π(A_i) for each i, so that T_2 ≤ ε for large N. We can also appeal to (3.3) to deduce that it is likewise the case that T_4 ≤ ε for large N. In fact, each of these bounds can be assumed to hold uniformly over t ∈ I, thereby demonstrating that

sup_{t∈I} |D^N_p(g_N(x), γ(N)t) − D_p(x, t)| → 0.

Since the function t → D_p(x, t) is strictly decreasing, the proposition follows.

Remark 3.3. In the case p = 2, the proof of the previous result greatly simplifies. In particular, we note that

D_2(x, t)^2 = q_{2t}(x, x) − 1,

and a similar identity holds for the discrete transition densities. Hence the limit at (3.4) is an immediate consequence of the local limit result of (2.8), and we do not have to concern ourselves with estimating the relevant integrals directly.
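The identity invoked in Remark 3.3 is worth recording explicitly; the following short derivation is a sketch using only the Chapman-Kolmogorov equation, the π-symmetry of the kernel, and the conservativeness condition (1.1):

```latex
\begin{aligned}
D_2(x,t)^2 &= \int_F \bigl(q_t(x,y)-1\bigr)^2\,\pi(dy)\\
&= \int_F q_t(x,y)^2\,\pi(dy) - 2\int_F q_t(x,y)\,\pi(dy) + 1\\
&= q_{2t}(x,x) - 2 + 1 \;=\; q_{2t}(x,x) - 1,
\end{aligned}
```

since ∫_F q_t(x, y)^2 π(dy) = ∫_F q_t(x, y)q_t(y, x) π(dy) = q_{2t}(x, x) by symmetry and the Chapman-Kolmogorov equation, while ∫_F q_t(x, y) π(dy) = 1 by (1.1). An analogous computation applies to the discrete kernels.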
To extend the above proposition to the corresponding result for the mixing times of the entire spaces, we will appeal to the following lemma, which establishes a continuity property for the L^p-mixing times from fixed starting points in the limiting space, and a related tightness property for the discrete approximations.

Lemma 3.4. Suppose that p ∈ [1, ∞] is such that (1.5) holds for x ∈ F. Then:
(a) the function y → t^p_mix(y) is continuous at x;
(b) for every ε > 0, there exists a δ > 0 such that

lim sup_{N→∞} sup_{y∈B_E(x,δ)∩V(G_N)} |γ(N)^{-1} t^p_mix(y) − t^p_mix(x)| ≤ ε.
Proof. Consider p ∈ [1, ∞] such that (1.5) holds for x ∈ F, so that t_0 := t^p_mix(x) is finite, and let ε ∈ (0, t_0/2). Since the function t → D_p(x, t) is strictly decreasing (by Lemma 3.1), there exists an η > 0 such that D_p(x, t_0 − ε) > D_p(x, t_0) + η = 1/4 + η and also D_p(x, t_0 + ε) < 1/4 − η. By the continuity of (q_t(x, y))_{x,y∈F,t>0}, there also exists a δ > 0 such that |D_p(y, t) − D_p(x, t)| < η for every y ∈ B_F(x, δ) and t ∈ [t_0 − ε, t_0 + ε]. This implies that t^p_mix(y) ∈ [t_0 − ε, t_0 + ε], and (a) follows.

The proof of part (b) is similar. In particular, choose η as above and note that (3.4) holds. Furthermore, by the transition density tightness of Lemma 2.3, there exists a δ > 0 such that, for large N, the rescaled discrete distances to stationarity from vertices y ∈ B_E(x, δ) ∩ V(G_N) differ from those from g_N(x) by at most η, uniformly over the relevant time interval. Since it is trivially true that, once N is large enough, this result can be applied with y = g_N(x), the result follows.
We are now ready to give the proof of our main result.
Proof of Theorem 1.4. Observe that, under the assumptions of the theorem, Lemma 3.4(a) implies that the function (t^p_mix(x))_{x∈F} is continuous. Since F is compact, the supremum of (t^p_mix(x))_{x∈F} is therefore finite. Now, it is an elementary exercise to check that we can write the L^p-mixing time of F, as defined at (1.6), in the following way:

t^p_mix(F) = sup_{x∈F} t^p_mix(x). (3.6)

Consequently t^p_mix(F) ∈ (0, ∞), as desired. To complete the proof, we are required to demonstrate the convergence statement of (1.10). Fix ε > 0. For every x ∈ F, Proposition 3.2 and Lemma 3.4(b) allow us to choose δ(x) > 0 and N(x) ∈ N such that |γ(N)^{-1} t^p_mix(y) − t^p_mix(x)| ≤ ε for every y ∈ B_E(x, δ(x)) ∩ V(G_N) and N ≥ N(x). Since (B_E(x, δ(x)))_{x∈F} is an open cover for F, by compactness it admits a finite subcover B_E(x_1, δ(x_1)), . . . , B_E(x_k, δ(x_k)); moreover, by (2.6), these balls also cover V(G_N) once N is large enough. Consequently, for large N,

|γ(N)^{-1} t^p_mix(G_N) − max_{i=1,...,k} t^p_mix(x_i)| ≤ ε,

where we note that, similarly to (3.6), the L^p-mixing time of the graph G_N can be written as t^p_mix(G_N) = max_{y∈V(G_N)} t^p_mix(y), and where we have again made use of Proposition 3.2. Since every x ∈ F lies in one of the balls of the subcover, the maximum max_{i=1,...,k} t^p_mix(x_i) is itself within ε of t^p_mix(F), and, as ε > 0 was arbitrary, we are done.

Distinguished starting points
In certain situations, convergence of transition densities might only be known with respect to a single distinguished starting point. This is the case, for instance, in two of the most important examples we present in Section 5: critical Galton-Watson trees and the critical Erdős-Rényi random graph. In such settings, it is only possible to prove a convergence result for the mixing time from the distinguished point. It is the purpose of this subsection to present a precise conclusion of this kind.
Consider, for a compact interval I ⊂ (0, ∞), the space of triples of the form (F, π, q), where F = (F, d_F, ρ) is a non-empty compact metric space with distinguished vertex ρ, π is a Borel probability measure on F and q = (q_t(x, y))_{x,y∈F,t∈I} is a jointly continuous real-valued function of (t, x, y); this is the same as the collection M̃_I defined in Section 2, though we have added the supposition that the metric spaces are pointed. We say two such elements, (F, π, q) and (F′, π′, q′), are equivalent if there exists an isometry f : F → F′ such that f(ρ) = ρ′, π ◦ f^{-1} = π′ and q′_t ◦ f = q_t for every t ∈ I. By following the proof of Lemma 2.1, one can check that it is possible to define a metric on the equivalence classes of this relation by simply including in the definition of ∆_I the condition that the correspondence C must contain (ρ, ρ′). We define convergence in a spectral pointed Gromov-Hausdorff sense to be with respect to this metric. The distinguished starting point version of Assumption 1 is then as follows.
Assumption 2. Let $(G_N)_{N\ge1}$ be a sequence of finite connected graphs with at least two vertices and one distinguished vertex, $\rho_N$ say, for which there exists a sequence $(\gamma(N))_{N\ge1}$ such that, for any compact interval $I\subset(0,\infty)$, the associated rescaled triples converge in the spectral pointed Gromov-Hausdorff sense.

The following result can then be proved in an almost identical fashion to Proposition 3.2, simply replacing $g_N(x)$ by $\rho_N$ and $x$ by $\rho$. In doing this, it is useful to note that if Assumption 1 is replaced by Assumption 2, then we are able to include in the conclusions of Lemma 2.2 that $\rho_N$ converges to $\rho$ in $E$.

Convergence to stationarity of the transition density
Before continuing to present example applications of the mixing time convergence results proved so far, we describe how to check the $L^p$ convergence to stationarity of the transition density of $X^F$ in the case when we have a spectral decomposition for it and a spectral gap. In the same setting, we will also explain how to check the non-triviality conditions on the transition density that were made in the introduction. Write the generator of the conservative Hunt process $X^F$ as $-\Delta$, and suppose that $\Delta$ has a compact resolvent. Then there exists a complete orthonormal basis of $L^2(F,\pi)$, $(\varphi_k)_{k\ge0}$ say, such that $\Delta\varphi_k=\lambda_k\varphi_k$ for all $k\ge0$, where $0\le\lambda_0\le\lambda_1\le\dots$ and $\lim_{k\to\infty}\lambda_k=\infty$. By expanding as a Fourier series, we can consequently write the transition density of $X^F$ as
$$q_t(x,y)=\sum_{k\ge0}\varphi_k(x)(P^F_t\varphi_k)(y)=\sum_{k\ge0}e^{-\lambda_kt}\varphi_k(x)\varphi_k(y),$$
where $(P^F_t)_{t\ge0}$ is the associated semigroup, and the final equality holds as a simple consequence of the fact that $\frac{d}{dt}(P^F_t\varphi_k)=-P^F_t\Delta\varphi_k=-\lambda_kP^F_t\varphi_k$. Now, by (1.1), it holds that $1=P^F_t1$ is in the domain of $\Delta$. A standard argument thus yields $\Delta1=\Delta P^F_t1=-\frac{d}{dt}(P^F_t1)=0$, and so there is no loss of generality in supposing that $\lambda_0=0$ and $\varphi_0\equiv1$ in this setting. The only additional assumption we make on the transition density $(q_t(x,y))_{x,y\in F,t>0}$ is that it is jointly continuous in $(t,x,y)$ (i.e. (1.2) holds).
Lemma 4.1. Suppose that the operator $\Delta$ has a compact resolvent, so that the above spectral decomposition holds. If there is a spectral gap, i.e. $\lambda_1>0$, then $(q_t(x,y))_{x,y\in F,t>0}$ converges to stationarity in an $L^p$ sense (namely (1.5) holds) for any $p\in[1,\infty]$.
Proof. Recall from (3.5) that $D_2(x,t)^2=q_{2t}(x,x)-1$. Under the assumptions of the lemma, it follows that
$$D_2(x,t)^2=\sum_{i\ge1}\varphi_i(x)^2e^{-2\lambda_it}\to0\qquad(4.1)$$
as $t\to\infty$, which completes the proof of the result for $p=2$. To extend this to any $p$, we first use Cauchy-Schwarz to deduce that $|q_t(x,y)-1|\le D_2(x,s)D_2(y,t-s)$ for any $s\in(0,t)$. Consequently, we have that for any $t\ge1$,
$$D_\infty(x,t)\le D_2(x,t/2)\sup_{y\in F}D_2(y,t/2)\le D_2(x,t/2)\sup_{y\in F}D_\infty(y,1/2),$$
where the second inequality involves an application of the monotonicity property proved as part of Lemma 3.1. Now, by (1.2), the term $\sup_{y\in F}D_\infty(y,1/2)$ is a finite constant, and so combining the above bound with (4.1) implies that $D_\infty(x,t)\le CD_2(x,t/2)\to0$ as $t\to\infty$.

Lemma 4.2. Under the assumptions of Lemma 4.1, $q_t(x,y)>0$ for every $x,y\in F$, $t>0$. Moreover, $q_t(x,\cdot)\not\equiv1$ for any $x\in F$, $t>0$.

Proof. Firstly, assume that $q_t(x,y)=0$ for some $x,y\in F$, $t>0$. If $s\in(0,t)$, then the Chapman-Kolmogorov equations yield $0=q_t(x,y)=\int_Fq_s(x,z)q_{t-s}(z,y)\pi(dz)$. Since $\pi$ has full support, using (1.2), it follows that $q_s(x,z)q_{t-s}(z,y)=0$ for every $z\in F$. In particular, $q_s(x,y)q_{t-s}(y,y)=0$. Noting that $q_{t-s}(y,y)=D_2(y,(t-s)/2)^2+1\ge1$, we deduce that $q_s(x,y)=0$. Now, define a function $f:(0,\infty)\to\mathbb R_+$ by setting $f(s):=q_s(x,y)$. Letting $(\lambda'_i)_{i\ge0}$ represent the distinct eigenvalues of $\Delta$, we can write $f(s)=\sum_{i\ge0}a_ie^{-\lambda'_is}$ for suitable coefficients $(a_i)_{i\ge0}$. Since $\sum_{i\ge0}|a_i|e^{-\lambda'_is}\le(q_s(x,x)q_s(y,y))^{1/2}<\infty$, this series converges absolutely whenever $s\in(0,\infty)$. Thus $f(z):=\sum_{i\ge0}a_ie^{-\lambda'_iz}$ defines an analytic function on the whole half-plane $\Re(z)>0$. By our previous observation regarding $q_s(x,y)$, this analytic function is equal to 0 on the set $(0,t]$, and therefore it must be 0 everywhere on $\Re(z)>0$. However, this contradicts the fact that $f(t)=q_t(x,y)\to1$ as $t\to\infty$, which was proved in Lemma 4.1. Hence, $q_t(x,y)>0$ for every $x,y\in F$, $t>0$.
Secondly, suppose that $q_t(x,\cdot)\equiv1$ for some $x\in F$ and $t>0$. Then $1=q_t(x,x)=1+\sum_{i\ge1}\varphi_i(x)^2e^{-\lambda_it}$, and so $\varphi_i(x)=0$ for every $i\ge1$. This implies that $q_t(x,x)=1$ for every $t>0$. However, by following the proof of (3.1), one can deduce that $q_t(x,x)>1$ for all sufficiently small $t$, and so the previous conclusion can not hold. Consequently, we have shown that $q_t(x,\cdot)\not\equiv1$ for any $x\in F$, $t>0$, as desired.
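The spectral picture above is easy to explore numerically. The following Python sketch (the weighted graph and all numerical values are illustrative choices, not taken from the text) diagonalizes the generator $\Delta=I-P$ of a random walk on a small weighted graph, builds the transition density $q_t$ with respect to the stationary measure from the eigenpairs, and checks both the identity $D_2(x,t)^2=q_{2t}(x,x)-1$ of (3.5) and the convergence $q_t\to1$ guaranteed by the spectral gap.

```python
import numpy as np

# A small weighted graph (illustrative choice): triangle 0-1-2 plus pendant 3.
n = 4
mu = np.zeros((n, n))
for x, y, w in [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.5), (2, 3, 0.5)]:
    mu[x, y] = mu[y, x] = w

mu_x = mu.sum(axis=1)          # vertex weights mu_x = sum_y mu_xy
pi = mu_x / mu_x.sum()         # stationary probability measure
P = mu / mu_x[:, None]         # random walk transition matrix

# Symmetrization D^{1/2} P D^{-1/2} (D = diag(pi)) shares eigenvalues with P.
d = np.sqrt(pi)
lam, U = np.linalg.eigh((d[:, None] * P) / d[None, :])

# Eigenpairs of the generator Delta = I - P: lambda_k = 1 - lam[k], with
# eigenfunctions phi_k(x) = U[x, k] / sqrt(pi_x), orthonormal in L^2(pi).
lam_delta = 1.0 - lam
phi = U / d[:, None]

def q(t):
    """Transition density q_t(x, y) w.r.t. pi, via the spectral expansion."""
    return (phi * np.exp(-lam_delta * t)) @ phi.T

gap = np.sort(lam_delta)[1]    # spectral gap: smallest non-zero eigenvalue
assert gap > 0

# D_2(x, t)^2 = q_{2t}(x, x) - 1, and uniform convergence of q_t to 1.
t = 1.3
D2_sq = np.array([np.sum((q(t)[x] - 1.0) ** 2 * pi) for x in range(n)])
assert np.allclose(D2_sq, np.diag(q(2 * t)) - 1.0)
assert np.abs(q(100.0) - 1.0).max() < 1e-6
```

The final line is the discrete counterpart of the $L^\infty$ convergence to stationarity in Lemma 4.1: the uniform decay rate is governed by the spectral gap.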
To summarize, the above results demonstrate that to verify all the conditions on the transition density that are required to apply our mixing time convergence results, it will suffice to check that the conservative Hunt process $X^F$ has a jointly continuous transition density and the corresponding non-negative self-adjoint operator, $\Delta$, has a compact resolvent and exhibits a spectral gap. As the following corollary explains, this is a particularly useful observation in the case that the Dirichlet form $(\mathcal E,\mathcal F)$ associated with $X^F$ is a resistance form. A precise definition of such an object appears in [26, Definition 3.1], for example, but the key property is the finiteness of the corresponding resistance, i.e. that
$$R(x,y):=\left(\inf\left\{\mathcal E(f,f):f\in\mathcal F,\,f(x)=1,\,f(y)=0\right\}\right)^{-1}$$
is finite for any $x,y\in F$.

Examples
More interestingly, however, as we will now demonstrate, it is possible to apply our main results in a number of examples where the graphs, and sometimes limiting spaces, are random: self-similar fractal graphs with random weights, critical Galton-Watson trees, the critical Erdős-Rényi random graph, and the range of the random walk in high dimensions. For the second and third of these, we will in the next section go on to describe how the convergence in distribution of mixing times we establish can be applied to relate tail asymptotics for mixing time distributions of the discrete and continuous models.

Self-similar fractal graphs with random weights
Although the results we have proved apply more generally to self-similar fractal graphs (see below for some further comments on this point), to keep the presentation concise we restrict our attention here to graphs based on the classical Sierpinski gasket, the definition of which we now recall. Suppose $p_1,p_2,p_3$ are the vertices of an equilateral triangle in $\mathbb R^2$. Define the similitudes
$$\psi_i(z):=p_i+\frac{z-p_i}{2},\qquad i=1,2,3.$$
Since $(\psi_i)_{i=1}^3$ is a family of contraction maps, there exists a unique non-empty compact set $F$ such that $F=\cup_{i=1}^3\psi_i(F)$; this is the Sierpinski gasket. We will suppose $d_F$ is the intrinsic shortest path metric on $F$ defined in [27], and note that this induces the same topology as the Euclidean metric. Moreover, we suppose $\pi$ is the $(\ln3)/(\ln2)$-Hausdorff measure on $F$ with respect to the Euclidean metric, normalized to be a probability measure. This measure is non-atomic, has full support and satisfies $\pi(\partial B(x,r))=0$ for every $x\in F$, $r>0$ (see [14, Lemma 25]).
We now define a sequence of graphs $(G_N)_{N\ge0}$ by taking $V(G_N)$ to be the image of $V_0:=\{p_1,p_2,p_3\}$ under the $N$-fold compositions of the maps $(\psi_i)_{i=1}^3$, with edges between the images of distinct points of $V_0$ under the same composition. We set $d_{G_N}:=d_F|_{V(G_N)\times V(G_N)}$, so that $(V(G_N),d_{G_N})$ converges to $(F,d_F)$ with respect to the Hausdorff distance between compact subsets of $F$. Weights $(\mu^N_e)_{e\in E(G_N)}$, $N\ge0$, will be selected independently at random from a common distribution, which we assume is supported on an interval $[c_1,c_2]$, where $0<c_1\le c_2<\infty$. By the procedure described in the introduction, we define from these weights a sequence of random measures $(\pi^N)_{N\ge0}$ on the vertex sets of the graphs in the sequence $(G_N)_{N\ge0}$. That $\pi^N$ converges weakly to $\pi$ as Borel probability measures on $F$, almost-surely, can be checked by applying [14, Lemma 26].
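The construction of the approximating vertex sets is concrete enough to compute. The following sketch (the particular triangle $p_1,p_2,p_3$ is an arbitrary concrete choice, and $V_0$ denotes the set of its corners) generates $V(G_N)$ by applying all $N$-fold compositions of the similitudes $\psi_i$ to $V_0$, and confirms the standard vertex count $\#V(G_N)=3(3^N+1)/2$.

```python
import numpy as np
from itertools import product

# Corners of an equilateral triangle (arbitrary concrete choice of p_1, p_2, p_3).
p = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])]

def psi(i, z):
    """Similitude psi_i(z) = p_i + (z - p_i)/2, contracting towards corner p_i."""
    return p[i] + (z - p[i]) / 2.0

def gasket_vertices(N):
    """Vertices of the level-N gasket graph: images of V_0 = {p_1, p_2, p_3}
    under all N-fold compositions psi_{i_1} o ... o psi_{i_N}."""
    pts = set()
    for word in product(range(3), repeat=N):
        for v in p:
            z = v
            for i in reversed(word):
                z = psi(i, z)
            pts.add((round(float(z[0]), 9), round(float(z[1]), 9)))
    return pts

# The level-N graph has 3(3^N + 1)/2 vertices.
for N in range(4):
    assert len(gasket_vertices(N)) == 3 * (3 ** N + 1) // 2
```

Edges of $G_N$ would join the three images of $V_0$ within each cell $\psi_{i_1\dots i_N}(F)$; only the vertex count is checked here.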
To describe the scaling limit of the random walks associated with the random weights $\mu^N$, we appeal to the homogenization result of [30]. To describe this, we first introduce the Dirichlet form associated with the walk on the level $N$ graph by setting, for $f\in\mathbb R^{V(G_N)}$,
$$E_N(f,f):=\frac12\sum_{x,y\in V(G_N):\,x\sim y}\mu^N_{xy}\left(f(x)-f(y)\right)^2,$$
and we let $(\bar\mu_{xy})_{x,y\in V_0,x\ne y}$ be the collection of weights such that the associated random walk on $G_0$ is the trace of $X^{G_N}$ onto $V_0$. It then follows from [30, Theorem 3.4] that there exists a deterministic constant $C\in(0,\infty)$ such that the appropriately rescaled forms $E_N$ converge to a limiting form $\mathcal E(f,f)$ for $f\in\mathcal F$, where $\mathcal F$ is the subset of $C(F,\mathbb R)$ for which the limit exists and is finite. It is known that $(\mathcal E,\mathcal F)$ is a local, regular Dirichlet form on $L^2(F,\pi)$, which is also a resistance form (see [29], for example). Thus, by Corollary 4.3, the associated $\pi$-symmetric diffusion $X^F$, which (modulo the scaling constant $C$) is known as Brownian motion on the Sierpinski gasket, satisfies (1.1)-(1.5).
For the case of unbounded fractal graphs, a probabilistic version of (2.12) was proved as [14, Proposition 30(i)] by applying the homogenization result for processes of [31] (cf. [30]). Since the Sierpinski gasket is a finitely ramified fractal, it is a relatively straightforward technical exercise to adapt this result to the compact case by considering a decomposition of the sample paths of the relevant processes into segments started at one of the outer corners of the gasket and stopped upon hitting another.
To expand on this, we will explain how to prove a version of [31, Theorem 3.6] in our setting. (Note that our $X^{G_N}$ is a discrete time Markov chain with $\pi^N$ as the invariant measure, whereas in [31] it was the continuous-time Markov chains with normalized counting measure as the invariant measure that were studied. However, since both measures are comparable and they converge to $\pi$ almost-surely, this difference can be easily resolved.) Recall that $p_1$ and $p_2$ are two distinct elements of $V_0$. Let $\sigma^{(0)}_{p_1}(X^{G_N})$ be the first hitting time of $p_1$ by $X^{G_N}$, and for each $i\in\mathbb N$, define inductively the subsequent alternating hitting times $\sigma^{(i)}_{p_2}$ and $\sigma^{(i)}_{p_1}$ of $p_2$ and $p_1$. Then, for continuous $f:F\to\mathbb R$, we can decompose the expectation $E^{G_N}_{x_N}[f(X^{G_N}_{5^Nt})]$ according to these hitting times, where $x_N\in V(G_N)$ converges to $x\in F$, say. The first summand in the right-hand side of (5.2) can be written in terms of the process $X^{G_N}$ killed at $p_1$, and so by tracing the proof of [14, Proposition 30(i)] line by line, we can check that it converges to the corresponding expectation involving $X^F$ killed on hitting $p_1$. Similarly, the second summand in (5.2) can be written in terms of the shift map $\theta$. Given $\sigma^{(0)}_{p_1}=s$, the strong Markov property allows us to rewrite this in terms of the process started at $p_1$ and killed on hitting $p_2$, independently of the distribution of $\sigma^{(0)}_{p_1}$. Thus the second term in the right-hand side of (5.2) converges as well, and we can prove convergence of the rest of the terms similarly. Moreover, by applying the estimate for the exit time of the random walks from balls stated as part of [14, Lemma 27], for example, it is straightforward to check that there exists a $t_0>0$ such that $P^{G_N}_{p_1}(\sigma^{(1)}_{p_2}\le5^Nt_0)$ and $P^{G_N}_{p_2}(\sigma^{(0)}_{p_1}\le5^Nt_0)$ are both bounded above by $1/2$, uniformly in $N$. As a consequence of this, one can show that the terms in the sums at (5.3) and (5.4) decay exponentially, uniformly in $N$, and hence that the right-hand side of (5.3) converges to $E^F_x[f(X^F_t)]$ as $N\to\infty$.
Convergence of the finite dimensional distributions can be shown similarly and we obtain the desired version of [31, Theorem 3.6].
Finally, a probabilistic version of the tightness condition of (2.11) is easily checked by applying (a probabilistic version of) Lemma 2.5, using known resistance estimates for nested fractals (cf. [14, Proposition 30(ii)]), and so Assumption 1 holds in probability due to Proposition 2.4. Thus we are able to apply Theorem 1.4 to deduce the following.
Theorem 5.1. If $t_{\mathrm{mix}}(G_N)$ is the mixing time of the random walk on the level $N$ approximation to the Sierpinski gasket equipped with uniformly bounded, independently and identically distributed random weights, then
$$5^{-N}t_{\mathrm{mix}}(G_N)\to t_{\mathrm{mix}}(F)$$
in probability, where $t_{\mathrm{mix}}(F)$ is the mixing time of the diffusion $X^F$.
Let us remark that the same argument will yield at least two generalizations of this theorem. Firstly, it is not necessary for the weights to be independent and identically distributed, but rather it will be sufficient for them only to be 'cell independent', i.e. each collection $(\mu^N_{\psi_{i_1\dots i_N}(x)\psi_{i_1\dots i_N}(y)})_{x,y\in V_0,x\ne y}$ is independent and identically distributed as $(\mu_{xy})_{x,y\in V_0,x\ne y}$. (We note that without a symmetry condition, though, the limiting diffusion will no longer be guaranteed to be the Brownian motion on the Sierpinski gasket.) Secondly, the Sierpinski gasket is just one example of a nested fractal. Identical arguments could be applied to obtain corresponding mixing time results for sequences of graphs based on any of the highly-symmetric fractals that come from this class (since the key references [14], [30] and [31] all incorporate nested fractals already).
Finally, variations on the above mixing time convergence result can also be established for examples along the lines of those appearing in [14,Sections 7.4 and 7.5]. These include: an almost-sure statement for Vicsek set-type graphs (which complements the mixing time bounds for deterministic versions of these graphs proved in [21]); a convergence of mixing times for deterministic Sierpinski carpet graphs; and a subsequential limit for Sierpinski carpets with random weights. Since many of the ideas needed for these applications are similar to those discussed above, we omit the details.

Critical Galton-Watson trees
The connection between critical Galton-Watson processes and $\alpha$-stable trees is now well-known, and so we will be brief in introducing it. Let $\xi$ be a mean 1 random variable whose distribution is aperiodic (not supported on a sub-lattice of $\mathbb Z$). Furthermore, suppose that $\xi$ is in the domain of attraction of a stable law with index $\alpha\in(1,2]$, by which we mean that there exists a sequence $a_N\to\infty$ such that
$$\frac{\xi[N]-N}{a_N}\to\Xi,\qquad(5.5)$$
in distribution, where $\xi[N]$ is the sum of $N$ independent copies of $\xi$ and the limit random variable satisfies $E(e^{-\lambda\Xi})=e^{\lambda^\alpha}$ for $\lambda>0$. If $T_N$ is a Galton-Watson tree with offspring distribution $\xi$ conditioned to have total progeny $N$, then it is the case that
$$N^{-1}a_N\,T_N\to T^{(\alpha)}$$
in distribution with respect to the Gromov-Hausdorff distance between compact metric spaces, where $T^{(\alpha)}$ is an $\alpha$-stable tree normalized to have total mass equal to 1 (see [33, Theorem 4.3], which is a corollary of a result originally proved in [15]). Note that the left-hand side here is shorthand for the metric space $(V(T_N),N^{-1}a_Nd_{T_N})$, where $V(T_N)$ is the vertex set of $T_N$ and $d_{T_N}$ is the shortest path graph distance on this set. The $\alpha$-stable tree $T^{(\alpha)}$ is almost-surely a compact metric space. Moreover, there is a natural non-atomic probability measure upon it, $\pi^{(\alpha)}$ say, which has full support, and appears as the limit of the uniform measure on the approximating graph trees. Usefully, we can decompose this measure in terms of a collection of measures on level sets of the tree. More specifically, in the construction of the $\alpha$-stable tree from an excursion we can naturally choose a root $\rho\in T^{(\alpha)}$. We define $T^{(\alpha)}(r):=\{x\in T^{(\alpha)}:d_{T^{(\alpha)}}(\rho,x)=r\}$ to be the collection of vertices at height $r$ above this vertex. For almost-every realization of $T^{(\alpha)}$, there then exists a càdlàg sequence of finite measures on $T^{(\alpha)}$, $(\ell_r)_{r>0}$, such that $\ell_r$ is supported on $T^{(\alpha)}(r)$ for each $r$ and $\pi^{(\alpha)}=\int_0^\infty\ell_r\,dr$ (see [16, Section 4.2]).
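Conditioned Galton-Watson trees are straightforward to sample by rejection, which gives some intuition for the scaling above. In the sketch below, the offspring law, tree size and sample count are all arbitrary illustrative choices; the geometric(1/2) law has mean 1 and finite variance, so it falls within the $\alpha=2$ regime, where heights of trees conditioned to have total progeny $N$ are of order $\sqrt N$, in line with $a_N=O(N^{1/2})$.

```python
import random

random.seed(1)

def offspring():
    """Geometric(1/2) offspring law on {0, 1, 2, ...}: mean 1, aperiodic."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def gw_tree(cap):
    """Grow a Galton-Watson tree generation by generation; return (total progeny,
    height), or None as soon as the progeny exceeds cap."""
    progeny, height, current = 1, 0, 1
    while current > 0:
        nxt = sum(offspring() for _ in range(current))
        if nxt > 0:
            height += 1
        progeny += nxt
        if progeny > cap:
            return None
        current = nxt
    return progeny, height

def conditioned_tree(N):
    """Rejection sampling: condition on total progeny exactly N."""
    while True:
        out = gw_tree(N)
        if out is not None and out[0] == N:
            return out

N = 20
heights = [conditioned_tree(N)[1] for _ in range(50)]
assert all(1 <= h <= N - 1 for h in heights)
```

Rejection sampling is only viable for small $N$, since the probability of total progeny exactly $N$ decays like $N^{-3/2}$ in the finite-variance case.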
Clearly this implies that $\pi^{(\alpha)}(\partial B_{T^{(\alpha)}}(\rho,r))=0$ for every $r>0$, for almost-every realization of $T^{(\alpha)}$. Since $\alpha$-stable trees satisfy a root-invariance property (see [16, Theorem 4.8]), one can easily extend this result to hold for $\pi^{(\alpha)}$-a.e. $x\in T^{(\alpha)}$. Although this is not quite the assumption of the introduction that $\pi^{(\alpha)}(\partial B_{T^{(\alpha)}}(x,r))=0$ for every $x\in T^{(\alpha)}$, $r>0$, by a minor tweak of the proof of Proposition 3.2, we are still able to apply our mixing time convergence results in the same way.
Upon almost-every realization of the metric measure space $(T^{(\alpha)},\pi^{(\alpha)})$, it is possible to define a corresponding Brownian motion $X^{(\alpha)}$ (to do this, apply [28, Theorem 5.4] in the way described in [10, Section 2.2]). This is a conservative $\pi^{(\alpha)}$-symmetric Hunt process, and the associated Dirichlet form $(\mathcal E^{(\alpha)},\mathcal F^{(\alpha)})$ is actually a resistance form. Thus we can again apply Corollary 4.3 to confirm that (1.1)-(1.5) hold for some corresponding transition density, $q^{(\alpha)}$ say. Now, in [13], it was demonstrated that if $P^{T_N}_{\rho_N}$ is the law of the random walk on $T_N$ started from its root (original ancestor) $\rho_N$ and $\pi^N$ is its stationary probability measure, then, after embedding all the objects into an underlying Banach space in a suitably nice way, the conclusion of (5.6) can be extended to the distributional convergence of the rescaled laws $P^{T_N}_{\rho_N}$ to $P^{(\alpha)}_\rho$, where $P^{(\alpha)}_\rho$ is the law of $X^{(\alpha)}$ started from $\rho$. By applying the fixed starting point version of the local limit result of Proposition 2.4 (cf. [14, Theorem 1]), similarly to the argument of [14, Section 7.2] for the Brownian continuum random tree, which corresponds to the case $\alpha=2$, one can obtain from this a distributional version of Assumption 2. (The tightness condition of (2.11) is easily checked by applying Lemma 2.5.)

Lemma 5.2. For any compact interval $I\subset(0,\infty)$, the triples associated with the rescaled trees $(T_N,\rho_N)$, their stationary measures and transition densities converge in distribution, in the spectral pointed Gromov-Hausdorff sense, to the triple associated with $(T^{(\alpha)},\rho)$, $\pi^{(\alpha)}$ and $q^{(\alpha)}$.
Consequently, since the space in which the above convergence in distribution occurs is separable, we can use a Skorohod coupling argument to deduce from this and Theorem 3.5 the following mixing time convergence result. We remark that the $\sqrt2$ that appears in the finite variance result is simply an artefact of the particular scaling we have described here, and could alternatively have been absorbed into the scaling of metrics.
Theorem 5.3. If $t^p_{\mathrm{mix}}(\rho_N)$ is the $L^p$-mixing time of the random walk on $T_N$ started from its root $\rho_N$, then $N^{-2}a_N\,t^p_{\mathrm{mix}}(\rho_N)\to t^p_{\mathrm{mix}}(\rho)$, in distribution, where $t^p_{\mathrm{mix}}(\rho)\in(0,\infty)$ is the $L^p$-mixing time of the Brownian motion on $T^{(\alpha)}$ started from $\rho$. In particular, in the case when the offspring distribution has finite variance $\sigma^2$, it is the case that $\frac{\sigma}{\sqrt2}N^{-3/2}t^p_{\mathrm{mix}}(\rho_N)\to t^p_{\mathrm{mix}}(\rho)$ in distribution.

Remark 5.4. We note that it was only for convenience that the convergence of the random walks on the trees $T_N$, $N\ge1$, to the Brownian motion on $T^{(\alpha)}$ was proved from a single starting point in [13]. We do not anticipate any significant problems in extending this result to hold for arbitrary starting points. Indeed, the first step would be to make the obvious adaptations to the proof of [13, Lemma 4.2] to extend that result, which demonstrates the convergence of simple random walks (and related additive functionals) on subtrees of $T_N$ consisting of a finite number of branch segments to the corresponding continuous objects, from the case when all the random walks start from the root to an arbitrary starting point version. An argument identical to the remainder of [13, Section 4] could then be used to obtain the convergence of simple random walks on the whole trees, at least in the case when the starting point of the diffusion is in one of the finite subtrees considered. Since the union of the finite subtrees is dense in the limiting space, we could subsequently use the heat kernel continuity properties to obtain the non-pointed spectral Gromov-Hausdorff version of Lemma 5.2. However, we do not pursue this approach here, as it would require a substantial amount of space and new notation that is not relevant to the main ideas of this article. Were it to be checked, Theorem 1.4 would imply, for any $p\in[1,\infty]$, the distributional convergence of $t^p_{\mathrm{mix}}(T_N)$, the $L^p$-mixing time of the random walk on $T_N$, when rescaled appropriately, to $t^p_{\mathrm{mix}}(T^{(\alpha)})\in(0,\infty)$, the $L^p$-mixing time of the Brownian motion on $T^{(\alpha)}$.

Critical Erdős-Rényi random graph
Closely related to the random trees of the previous section is the Erdős-Rényi random graph at criticality. In particular, let $G(N,p)$ be the random graph in which every edge of the complete graph on $N$ labeled vertices $\{1,\dots,N\}$ is present with probability $p$, independently of the other edges. Supposing $p=N^{-1}+\lambda N^{-4/3}$ for some $\lambda\in\mathbb R$, so that we are in the so-called critical window, it is known that the largest connected component $C_N$, equipped with its shortest path graph metric $d_{C_N}$, satisfies
$$(C_N,N^{-1/3}d_{C_N})\to(M,d_M)$$
in distribution, again with respect to the Gromov-Hausdorff distance between compact metric spaces, where $(M,d_M)$ is a random compact metric space [1]. (In fact, this and all the results given in this subsection hold for the family of $i$-th largest connected components for each $i\in\mathbb N$. For notational simplicity, we only discuss the largest connected component $C_N$.) Moreover, in [9], it was shown that the associated random walks started from a root vertex $\rho_N$ satisfy a distributional convergence result in which the limit $X^M$ is a diffusion on the space $M$ started from a distinguished vertex $\rho\in M$. Although the invariant probability measures of the random walks, $\pi^N$ say, were not considered in [9], it is not difficult to extend this result to include them, since the hard work regarding their convergence has already been completed (see [9, Lemma 6.3], in particular). Hence, by again applying the fixed starting point version of the local limit result of Proposition 2.4 (using Lemma 2.5 again to deduce the relevant tightness condition), we are able to obtain the analogue of Lemma 5.2 in this setting.
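Incidentally, the $N^{2/3}$ volume scaling of the component underlying these statements is simple to observe by simulation. The following sketch (sample sizes, the seed and the tolerance interval are arbitrary illustrative choices) builds $G(N,p)$ at the critical point $p=N^{-1}$ (i.e. $\lambda=0$) with a union-find structure and checks that the largest component size, rescaled by $N^{2/3}$, is of order one.

```python
import random
from collections import Counter

def largest_component_size(N, p, rng):
    """Size of the largest connected component of G(N, p), via union-find."""
    parent = list(range(N))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:           # edge {i, j} present with probability p
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return max(Counter(find(i) for i in range(N)).values())

# In the critical window p = N^{-1} + lambda N^{-4/3} (here lambda = 0), the
# largest component has size of order N^{2/3}.
rng = random.Random(0)
N = 500
sizes = [largest_component_size(N, 1.0 / N, rng) for _ in range(10)]
avg_scaled = sum(sizes) / (10 * N ** (2 / 3))
assert 0.1 < avg_scaled < 10.0
```

The rescaled size is a non-degenerate random variable in the limit, so only its order of magnitude, not a limit value, is checked here.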
Lemma 5.5. For any compact interval $I\subset(0,\infty)$, the triple associated with the rescaled component $(C_N,\rho_N)$, its stationary measure and transition density converges in distribution to $((M,d_M,\rho),\pi^M,(q^M_t(\rho,x))_{x\in M,t\in I})$, where $\pi^M$ is the invariant probability measure of $X^M$ and $(q^M_t(x,y))_{x,y\in M,t>0}$ is its transition density with respect to this measure, in a spectral pointed Gromov-Hausdorff sense.
In order to proceed as above, we must of course check that π M and q M satisfy a number of technical conditions. To do this, first observe that a typical realization of M looks like a (rescaled) typical realization of the Brownian continuum random tree T (2) glued together at a finite number of pairs of points [1]. Since π M can be considered as the image of the canonical measure π (2) on T (2) under this gluing map, it is elementary to obtain from the statements of the previous section regarding π (2) that π M is almost-surely non-atomic, has full support and satisfies π M (∂B M (x, r)) = 0 for π M -a.e. x ∈ M and every r > 0, as desired. For q M , we simply observe that because the Dirichlet form corresponding to X M is a resistance form ([9, Proposition 2.1]), we can once again apply Corollary 4.3 to establish conditions (1.1)-(1.5).
Given these results, pointwise mixing time convergence follows from Theorem 3.5.
Theorem 5.6. If $t^p_{\mathrm{mix}}(\rho_N)$ is the $L^p$-mixing time of the random walk on $C_N$ started from its root $\rho_N$, then $N^{-1}t^p_{\mathrm{mix}}(\rho_N)\to t^p_{\mathrm{mix}}(\rho)$, in distribution, where $t^p_{\mathrm{mix}}(\rho)\in(0,\infty)$ is the $L^p$-mixing time of the Brownian motion on $M$ started from $\rho$.

Remark 5.7. As discussed in Remark 5.4, we do not expect any major barriers in extending the above result to arbitrary starting points. The first task in doing this would be to adapt the convergence result proved in [9] for simple random walks on subgraphs of $C_N$ formed of a finite number of line segments ([9, Lemma 6.4]) to arbitrary starting points. One could then extend this to obtain the desired convergence result for simple random walks on the entire space using ideas from [9, Section 7] and heat kernel continuity. It would also be necessary to introduce a new Gromov-Hausdorff-type topology to state the result, as the one used in [9] is only suitable for the pointed case. Again, we suspect that taking these steps would simply be a lengthy technical exercise, and we choose not to follow them through here. We do, though, reasonably expect that $t^p_{\mathrm{mix}}(C_N)$, the $L^p$-mixing time of the random walk on $C_N$, when rescaled appropriately, converges in distribution to $t^p_{\mathrm{mix}}(M)\in(0,\infty)$, the $L^p$-mixing time of the Brownian motion on $M$, for any $p\in[1,\infty]$.

Random walk on range of random walk in high dimensions
Let $S=(S_n)_{n\ge0}$ be the simple random walk on $\mathbb Z^d$ started from 0, built on an underlying probability space with probability measure $P$, and define the range of $S$ up to time $N$ to be the graph $G_N$ with vertex set
$$V(G_N):=\{S_n:0\le n\le N\},\qquad(5.7)$$
and edge set
$$E(G_N):=\{\{S_n,S_{n+1}\}:0\le n<N\}.\qquad(5.8)$$
In this section, we will explain how to prove that if $d\ge5$, which is an assumption henceforth, then the mixing time of the sequence of graphs $(G_N)_{N\ge1}$ grows asymptotically as $cN^2$, $P$-a.s., where $c$ is a deterministic constant. Since doing this primarily depends on making relatively simple adaptations of the high-dimensional scaling limit result of [12] for the random walk on the entire range of $S$ (i.e. the $N=\infty$ case) to the finite length setting, we will be brief with the details. First, suppose that $S=(S_n)_{n\in\mathbb Z}$ is a two-sided extension of $(S_n)_{n\ge0}$ such that $(S_{-n})_{n\ge0}$ is an independent copy of $(S_n)_{n\ge0}$. The set of cut-times for this process,
$$\mathcal T:=\{n\in\mathbb Z:S_{(-\infty,n]}\cap S_{[n+1,\infty)}=\emptyset\},$$
is known to be infinite $P$-a.s. ([17]). Thus we can write $\mathcal T=\{T_n:n\in\mathbb Z\}$, where $\dots<T_{-1}<T_0\le0<T_1<T_2<\dots$. The corresponding set of cut-points is given by $\mathcal C:=\{C_n:n\in\mathbb Z\}$, where $C_n:=S_{T_n}$. For these objects, an ergodicity argument can be applied to obtain that, $P$-a.s., $T_n/n\to\tau(d)$ and $d_G(0,C_n)/|n|$ converges to a deterministic constant as $|n|\to\infty$, where $d_G$ is the shortest path graph distance on the range $G$ of the entire two-sided walk $S$, which is defined analogously to (5.7) and (5.8). In particular, see [12, Lemma 2.2] for a proof of the same convergence statements under the measure $P(\cdot|0\in\mathcal T)$, and note that the conditioning can be removed by using the relationship between $P$ and $P(\cdot|0\in\mathcal T)$ described in [12, Lemma 2.1]. Given these results, it is an elementary exercise to check that the rescaled metric space $(V(G_N),N^{-1}d_{G_N})$, suitably normalized, converges $P$-a.s. with respect to the Gromov-Hausdorff distance to the interval $[0,1]$ equipped with the Euclidean metric. Moreover, the same ideas readily yield an extension of this result to a spectral Gromov-Hausdorff one, including that $\pi^N$, the invariant measure of the associated simple random walk, converges to Lebesgue measure on $[0,1]$.
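The abundance of cut-times in high dimensions is easy to see in simulation. The sketch below (walk length, seed and the one-sided, finite-horizon variant of the definition are illustrative simplifications of the two-sided set $\mathcal T$) runs a simple random walk on $\mathbb Z^5$ and extracts every time $n$ at which the past range $\{S_0,\dots,S_n\}$ and the future range $\{S_{n+1},\dots,S_{\mathrm{steps}}\}$ are disjoint.

```python
import random

def cut_times(steps, d=5, seed=3):
    """Simulate a simple random walk on Z^d for `steps` steps and return the
    times n at which the past and future ranges are disjoint."""
    rng = random.Random(seed)
    pos = tuple([0] * d)
    path = [pos]
    for _ in range(steps):
        i = rng.randrange(d)
        pos = pos[:i] + (pos[i] + (1 if rng.random() < 0.5 else -1),) + pos[i + 1:]
        path.append(pos)
    # A site obstructs n being a cut-time iff first_visit <= n < last_visit.
    first, last = {}, {}
    for n, s in enumerate(path):
        first.setdefault(s, n)
        last[s] = n
    cover = [0] * (steps + 2)
    for s in first:
        if first[s] < last[s]:
            cover[first[s]] += 1
            cover[last[s]] -= 1
    cuts, running = [], 0
    for n in range(steps + 1):
        running += cover[n]
        if running == 0 and n < steps:
            cuts.append(n)
    return cuts

# In d >= 5 a positive density of times are cut-times (the tau(d)^{-1} of the
# text), so a walk of moderate length should have many of them.
cuts = cut_times(4000)
assert len(cuts) > 100
```

The interval-coverage trick makes the computation linear in the number of steps, rather than comparing past and future ranges at every time.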
Now, for a fixed realization of $G$, let $X=(X_n)_{n\ge0}$ be the simple random walk on $G$ started from 0. Define the hitting times by $X$ of the set of cut-points $\mathcal C$ by $H_0:=\min\{m\ge0:X_m\in\mathcal C\}$ and, for $n\ge1$, $H_n:=\min\{m>H_{n-1}:X_m\in\mathcal C\}$. We use these times to define a useful indexing process $Z=(Z_n)_{n\ge0}$ taking values in $\mathbb Z$. In particular, if $n<H_0$, define $Z_n$ to be the unique $k\in\mathbb Z$ such that $X_{H_0}=C_k$. Similarly, if $n\in[H_{m-1},H_m)$ for some $m\ge1$, then define $Z_n$ to be the unique $k\in\mathbb Z$ such that $X_{H_m}=C_k$. Noting that this definition precisely coincides with the definition of $Z$ in [12], from Lemma 3.5 of that article we have that, for $P$-a.e. realization of $G$, the diffusively rescaled process $Z$ converges in distribution to a Brownian motion with diffusivity $\kappa^2(d)$, where $(B_t)_{t\ge0}$ is a standard Brownian motion on $\mathbb R$ started from 0, and $\kappa^2(d)\in(0,\infty)$ is the deterministic constant defined in [12]. To deduce from (5.10) the following scaling limit for $X^N$, the simple random walk on $G_N$, we proceed via a time-change argument that is essentially a reworking of parts of [12, Section 3].
Lemma 5.8. For $P$-a.e. realization of $S$, if $X^N$ is started from 0, then the rescaled process $X^N$ converges in distribution to $B^{[0,1]}$, where $B^{[0,1]}$ is Brownian motion on $[0,1]$ started at 0 and reflected at the boundary.
Proof. The following proof can be applied to any typical realization of $S$. To begin with, define a process $(A^{Z,N}_n)_{n\ge0}$ as an additive functional of $Z$, where $T^{-1}_N:=\max\{n:T_n\le N\}$. From (5.9), we have that $T^{-1}_N\sim\tau(d)^{-1}N$. Combining this observation with (5.10), one can check that the rescaled processes $A^{Z,N}$ converge simultaneously with (5.10). We now apply the above result to establish a scaling limit for the process $X$ observed on the vertex set $\tilde V(G_N):=\{S_n:T_1\le n\le T_{T^{-1}_N}\}$. Similarly to the proof of [12, Lemma 3.6], one can check that the relevant time-changes are close, and it is therefore a simple consequence of (5.10) that the difference between them converges to 0 in probability as $N\to\infty$, for any $T\in(0,\infty)$. Since we know from equation (16) of [12] that a corresponding error term also converges to 0 in probability, we readily obtain the convergence (5.11) in distribution, where $\tilde X^N=(\tilde X^N_n)_{n\ge0}$ is the random walk $X$ observed on $\tilde V(G_N)$; this is defined precisely by setting $\tilde X^N_n:=X_{\alpha_N(n)}$, where $\alpha_N(n):=\max\{m:A^N_m\le n\}$. We remark that the particular limit process $B^{[0,1]}$ arises as a consequence of the fact that $(B_{\alpha_B(t)})_{t\ge0}$, where $\alpha_B$ is the right-continuous inverse of $A^B$, has exactly the distribution of $B^{[0,1]}$.
Finally, since the process $\tilde X^N$ is identical in law to the simple random walk $X^N$ observed on $\tilde V(G_N)$, to replace $\tilde X^N$ by $X^N$ in (5.11) it will suffice to check that $X^N$ spends only an asymptotically negligible amount of time in $V(G_N)\backslash\tilde V(G_N)$. Since doing this requires only a simple adaptation of the proof of [12, Lemma 3.8], we omit the details. To complete the proof, one then needs to replace $d_G$ by $d_{G_N}$, but this is straightforward since the two distances are comparable on the relevant sets. Although the previous lemma only contains a convergence statement for the random walks started from the particular vertex 0, there is no difficulty in extending this to the case when the limiting Brownian motion $B^{[0,1]}$ is started from an arbitrary $x_0\in[0,1]$, with the walks started from points converging to $x_0$. Applying the local limit result of Proposition 2.4 (to establish (2.11), we once again appeal to Lemma 2.5), we are able to deduce from this that Assumption 1 holds for $P$-a.e. realization of the original random walk.

Lemma 5.9. For $P$-a.e. realization of $S$, if $I\subset(0,\infty)$ is a compact interval, then the rescaled triples associated with the graphs $(G_N)_{N\ge1}$ converge in the spectral Gromov-Hausdorff sense to the triple associated with reflected Brownian motion on $[0,1]$.

Since it is clear that (1.1)-(1.5) hold in this case, we can therefore apply Theorem 1.4 to obtain the desired convergence of mixing times.

Mixing time tail estimates
In this section, we give some sufficient conditions for deriving upper and lower estimates for mixing times of random walks on finite graphs, primarily using techniques adapted from [35]. We will also discuss how to apply these general estimates to concrete random graphs (see Section 6.3). In order to crystallize the results and applications, most of the proofs shall be postponed to the appendix.
As will be illustrated by our examples, the results in this section are robust and convenient for obtaining mixing time tail estimates. Moreover, when the convergence of mixing times (as in Theorem 1.4) is available for a sequence of graphs, we highlight how, by first deriving estimates for the relevant continuous mixing time distribution (where similar techniques are sometimes applicable, see Remark 6.3), it can be possible to deduce results regarding the asymptotic tail behavior of random graph mixing times that are difficult to obtain directly (see the proof of Proposition 6.6 or Remark 6.9, for example).
We start by fixing our notation. Let $G=(V(G),E(G))$ be a finite connected graph and $\mu^G$ be a weight function, as in the introduction. Suppose here that $d_G$ is the shortest path metric on the graph $G$, and denote, for a distinguished vertex $\rho\in V(G)$, $B(R):=B_{d_G}(\rho,R)$ and $V(R):=\mu^G(B(R))$. We define a quadratic form $E$ by
$$E(f,g):=\frac12\sum_{x,y\in V(G):\,x\sim y}\mu^G_{xy}(f(x)-f(y))(g(x)-g(y)).$$
For disjoint subsets $A$, $B$ of $G$, the effective resistance between them is then given by:
$$R_{\mathrm{eff}}(A,B)^{-1}:=\inf\{E(f,f):f|_A=1,\,f|_B=0\}.$$
If we further define $R_{\mathrm{eff}}(x,y):=R_{\mathrm{eff}}(\{x\},\{y\})$ and $R_{\mathrm{eff}}(x,x):=0$, then one can check that $R_{\mathrm{eff}}(\cdot,\cdot)$ is a metric on $V(G)$ (see [29, Section 2.3]). We will call this the resistance metric. The resistance metric enjoys the following important (but easy to deduce) estimate:
$$|f(x)-f(y)|^2\le R_{\mathrm{eff}}(x,y)E(f,f)\qquad\text{for every }f\in\mathbb R^{V(G)}.$$
Moreover, it is easy to verify that if $c_1^{-1}:=\inf_{x,y\in G:x\sim y}\mu^G_{xy}>0$, then
$$R_{\mathrm{eff}}(x,y)\le c_1d_G(x,y).$$
Let $v,r:\{0,1,\dots,\operatorname{diam}_{d_G}(G)+1\}\to[0,\infty)$ be strictly increasing functions with $v(0)=r(0)=0$, $v(1)=r(1)=1$, which satisfy
$$C_1\left(\frac{R'}{R}\right)^{d_1}\le\frac{v(R')}{v(R)}\le C_2\left(\frac{R'}{R}\right)^{d_2},\qquad C_1\left(\frac{R'}{R}\right)^{\alpha_1}\le\frac{r(R')}{r(R)}\le C_2\left(\frac{R'}{R}\right)^{\alpha_2}.\qquad(6.3)$$
In what follows, $v(\cdot)$ will give the volume growth order and $r(\cdot)$ the resistance growth order. For convenience, we extend them to functions on $[0,\operatorname{diam}_{d_G}(G)+1]$ by linear interpolation. For the rest of the paper, $C_1$, $C_2$, $d_1$, $d_2$ and $\alpha_1$, $\alpha_2$ stand for the constants given in (6.3).
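These definitions translate directly into linear algebra: on a finite weighted graph, $R_{\mathrm{eff}}(x,y)$ can be computed from the Moore-Penrose pseudoinverse of the weighted Laplacian. The sketch below (the graph and its weights are an arbitrary illustrative choice) verifies numerically that $R_{\mathrm{eff}}$ satisfies the triangle inequality, and checks the standard estimate $|f(x)-f(y)|^2\le R_{\mathrm{eff}}(x,y)\,E(f,f)$ on a random function $f$.

```python
import numpy as np

# Weighted graph Laplacian L = D - M, with M the (illustrative) weight matrix.
n = 5
M = np.zeros((n, n))
for x, y, w in [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 4, 1.0),
                (0, 4, 0.5), (1, 3, 0.25)]:
    M[x, y] = M[y, x] = w
L = np.diag(M.sum(axis=1)) - M
Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse

def R_eff(x, y):
    """Effective resistance R_eff(x, y) = (e_x - e_y)^T L^+ (e_x - e_y)."""
    e = np.zeros(n); e[x] = 1.0; e[y] -= 1.0
    return float(e @ Lp @ e)

def E(f):
    """Dirichlet energy E(f, f) = (1/2) sum_{x,y} mu_xy (f(x) - f(y))^2."""
    return 0.5 * float(sum(M[x, y] * (f[x] - f[y]) ** 2
                           for x in range(n) for y in range(n)))

# R_eff is a metric on V(G): check the triangle inequality on all triples.
for x in range(n):
    for y in range(n):
        for z in range(n):
            assert R_eff(x, z) <= R_eff(x, y) + R_eff(y, z) + 1e-12

# Key estimate: |f(x) - f(y)|^2 <= R_eff(x, y) E(f, f) for every f.
f = np.random.default_rng(0).standard_normal(n)
for x in range(n):
    for y in range(n):
        if x != y:
            assert (f[x] - f[y]) ** 2 <= R_eff(x, y) * E(f) + 1e-9
```

Note the `e[y] -= 1.0` form so that $x=y$ gives the zero vector and hence $R_{\mathrm{eff}}(x,x)=0$ automatically.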

General upper and lower bounds
In this subsection, we give general upper and lower bounds for mixing times. Note that, since $t^p_{\mathrm{mix}}(\rho)\le t^p_{\mathrm{mix}}(G)$ and $t^p_{\mathrm{mix}}(G)\le t^{p'}_{\mathrm{mix}}(G)$ for $p\le p'$, it will be enough to estimate $t^\infty_{\mathrm{mix}}(G)$ for the upper bound and $t^1_{\mathrm{mix}}(\rho)$ for the lower bound.
Upper bound. We first give an upper bound on the mixing times that is a reworking of [35, Corollary 4.2] in our setting.
Lemma 6.1. For any weighted graph $(G,\mu^G)$, the mixing time $t^\infty_{\mathrm{mix}}(G)$ is bounded above by a universal constant multiple of $\operatorname{diam}_R(G)\,\mu^G(G)$, where $\operatorname{diam}_R(G)$ is the diameter of $G$ with respect to the resistance metric $R_{\mathrm{eff}}$.
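As a sanity check on the shape of this bound, the following sketch computes the $L^\infty$-mixing time of the walk on a unit-weight path exactly and compares it with $\operatorname{diam}_R(G)\,\mu^G(G)$; the graph choice, the two-step smoothing used in place of (1.7), and the threshold $1/4$ are all illustrative assumptions, and no attempt is made to reproduce the constant of Lemma 6.1. On the path, both quantities are of order $n^2$.

```python
import numpy as np

# Random walk on a unit-weight path of n vertices.
n = 20
mu = np.zeros((n, n))
for i in range(n - 1):
    mu[i, i + 1] = mu[i + 1, i] = 1.0
deg = mu.sum(axis=1)
P = mu / deg[:, None]
pi = deg / deg.sum()

def D_inf(m):
    """sup_{x,y} |q_m(x,y) - 1| for the smoothed density
    q_m(x,y) = (p_m(x,y) + p_{m+1}(x,y)) / (2 pi(y)), which avoids parity
    issues on this bipartite graph."""
    Pm = np.linalg.matrix_power(P, m)
    q = (Pm + Pm @ P) / (2.0 * pi[None, :])
    return np.abs(q - 1.0).max()

# L^infty mixing time with (illustrative) threshold 1/4.
t_mix = 1
while D_inf(t_mix) > 0.25:
    t_mix += 1

# On the unit-weight path, diam_R(G) = n - 1 and mu^G(G) = 2(n - 1), so the
# diameter-resistance product is of order n^2, the same order as t_mix.
bound = (n - 1) * 2 * (n - 1)
assert (n - 1) < t_mix <= 4 * bound
```

The lower comparison in the final assertion reflects that the mixing time is genuinely super-linear in the diameter here, so the resistance factor in Lemma 6.1 cannot be dropped.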
Lower bound. We next give the mixing time lower bound. Let $\lambda\ge1$ and $H_0,\dots,H_3>0$, and define the associated quantities in terms of these parameters, where $C_2$ is the constant in (6.3). We give the following two conditions concerning the volume and resistance growth.

Random graph case
We now consider a probability space $(\Omega,\mathcal F,P)$ carrying a family of random weighted graphs $G_N(\omega)=(V(G_N(\omega)),E(G_N(\omega)),\mu^N(\omega))$, $\omega\in\Omega$. We assume that, for each $N\in\mathbb N$ and $\omega\in\Omega$, $G_N(\omega)$ is a finite, connected graph containing a marked vertex $\rho_N$, and $\#V(G_N(\omega))\le M_N$ for some non-random constant $M_N<\infty$. (Here, for a set $A$, $\#A$ is the number of elements of $A$.) Let $d_{G_N(\omega)}(\cdot,\cdot)$ be the graph distance, $B(R):=B_\omega(\rho_N,R)$, and $V(R):=V_\omega(\rho_N,R)$. We write $X=(X_n,n\ge0,P^x_\omega,x\in G_N(\omega))$ for the random walk on $G_N(\omega)$, and denote by $p^\omega_n(x,y)$ its transition density with respect to $\pi^\omega$. Furthermore, we introduce a strictly increasing function $h:\mathbb N\cup\{0\}\to[0,\infty)$ with $h(0)=0$, which will roughly describe the diameter of $G_N$ with respect to the graph distance. We then set $\gamma(\cdot)=v(h(\cdot))\cdot r(h(\cdot))$. Finally, for $i=1,2$, we suppose $p_i:[1,\infty)\to[0,1]$ are functions such that $\lim_{\lambda\to\infty}p_i(\lambda)=0$. We then have the following. (Note that $C_2$, $d_2$ in the statement are the constants in (6.3).)

Proposition 6.4. (1) Suppose that the volume and resistance conditions (6.4) and (6.5) hold with high probability; then the corresponding upper bound on the mixing time holds with high probability. (2) Suppose there exist $c_1\le1$ and $J\ge(1+H_1)/d_2$ such that
$$P((6.4)\wedge(6.5)\text{ for }R=c_1\lambda^{-J}h(N)\text{ and for }\varepsilon_0(\lambda)R)\ge1-p_1(\lambda),$$
where $\varepsilon_0(\lambda)$ is as in Proposition 6.2(ii); then there exist $c_2,p_0>0$ such that the corresponding lower bound on the mixing time holds with probability at least $1-p_0$.

To illustrate this result, we consider the case when the random graphs $G_N(\omega)$ are obtained as components of percolation processes on finite graphs, thereby recovering [35, Theorem 1.2(c)]. (In [35], it was actually the lazy random walk that was considered, to avoid parity concerns, but the same techniques apply when we consider $q^G_m(\cdot,\cdot)$ as in (1.7) instead.)

Proposition 6.5. Let $\hat G_N$ be a graph with $N$ vertices and with maximum degree $d\in[3,N-1]$. Let $C_N$ be the largest component of the percolation subgraph of $\hat G_N$ for $0<p<1$.
Let for some fixed λ ∈ R, and assume that there exist c 1 , θ 1 ∈ (0, ∞) ; then there exist c 2 , θ 2 ∈ (0, ∞) and K 2 ∈ N such that, for all p ∈ [1, ∞],

Finally, below is a list of exponents for each example in Section 5. Here the Euclidean distance is used instead of the intrinsic shortest path metric for the examples in Section 5.1. Note that when α = 2 in Section 5.2 (the finite variance case), the growth of v(R) and r(R) is of the same order as in Section 5.3. The difference in the scaling exponents of the mixing times (namely γ(N )) is due to the difference in the scaling exponents for the graph distances (namely h(N )). We also observe that the convergence to a stable law at (5.5) forces the scaling constants to be of the form a N = N 1/α L(N ) for some slowly varying function L (see [20, Section 35]).
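To make the mixing-time quantities of this section concrete, the following Python sketch computes a total-variation mixing time of the lazy random walk (the walk used in [35] to sidestep parity concerns) on a small deterministic graph. The cycle example and the threshold 1/4 are our own illustrative choices, not taken from the text.

```python
import numpy as np

def lazy_walk(adj):
    """Lazy simple random walk: stay put with probability 1/2,
    otherwise step to a uniformly chosen neighbour."""
    deg = adj.sum(axis=1)
    return 0.5 * np.eye(len(adj)) + 0.5 * adj / deg[:, None]

def tv_mixing_time(P, pi, eps=0.25):
    """Smallest m with max_x || P^m(x, .) - pi ||_TV <= eps."""
    Pm = np.eye(len(P))
    m = 0
    while 0.5 * np.abs(Pm - pi[None, :]).sum(axis=1).max() > eps:
        Pm = Pm @ P
        m += 1
    return m

# Cycle on 8 vertices (vertex-transitive, so every start is worst-case).
n = 8
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = 1
P = lazy_walk(adj)
pi = adj.sum(axis=1) / adj.sum()   # stationary measure: deg(x) / 2|E|
print(tv_mixing_time(P, pi))
```

The L p mixing times of the paper are defined via p-norms of the transition density rather than total variation, but the total-variation (p = 1 type) version above displays the same qualitative behaviour and suffices for experimentation.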

Examples
Critical Galton-Watson trees of Section 5.2

By combining the results in this section with our mixing time convergence result, we can establish asymptotic bounds on the distributions of the mixing times of the graphs in the sequence (T N ) N ≥1 in the case when we have a finite variance offspring distribution.
Proposition 6.6. In the case when the offspring distribution has finite variance, there exist constants c 1 , c 2 , c 3 , c 4 ∈ (0, ∞) such that and also lim sup

Proof. To prove (6.11), we apply the general mixing time upper bound of Lemma 6.1, in which the relevant geometric quantity is the diameter of T N with respect to d T N , and we note that #E(T N ) is equal to 2(N − 1). By (5.6), the right-hand side here converges to P(8 diam d T (2) (T (2) ) ≥ λ).
By construction, the diameter of the continuum random tree T (2) is bounded above by twice the supremum of the Brownian excursion of length 1. We can thus use the known distribution of the latter random variable (see [25], for example) to deduce the relevant bound. For (6.12), we first apply the convergence in distribution of Theorem 5.3 to deduce that Now, for the continuum random tree, define where R T (2) is the resistance on the continuum random tree (see [11, (20)]). Then

Remark 6.7. The above proof already gives an estimate for the lower tail of t 1 mix (ρ). That the bound corresponding to (6.11) holds for the limiting tree, i.e.
can be proved similarly to the discrete case (see Remark 6.3).
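The proof of Proposition 6.6 uses the known distribution of the supremum of the normalized Brownian excursion. The series below is the classical expansion for that supremum, quoted from memory rather than from [25], so treat this Python sketch as a hedged numerical aid rather than the paper's statement.

```python
import math

def excursion_sup_tail(x, terms=50):
    """P(sup of a length-1 Brownian excursion > x), via the classical
    series 2 * sum_{k>=1} (4 k^2 x^2 - 1) exp(-2 k^2 x^2).
    (Series quoted from memory; illustrative sketch only.)"""
    return 2.0 * sum(
        (4 * k * k * x * x - 1) * math.exp(-2 * k * k * x * x)
        for k in range(1, terms + 1)
    )
```

Since the diameter of T (2) is at most twice the excursion supremum, a tail bound of the form P(diam ≥ λ) ≤ excursion_sup_tail(λ/2) follows, which is the shape of estimate fed into (6.11).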
Critical Erdős-Rényi random graph of Section 5.3

Let C N be the largest component of the Erdős-Rényi random graph in the critical window. Then the following holds.
Remark 6.9. (1) The tail estimates for t 1 mix (C N ) are given in [35, Theorem 1.1] without quantitative bounds. (In fact, a careful reading of that paper shows that bounds similar to those of Proposition 6.8 can be extracted from it.) (2) It does not seem possible to apply current estimates for the graphs (C N ) N ≥1 and techniques for bounding mixing times to replace t 1 mix (C N ) by t 1 mix (ρ N ) in the latter estimate (see Remark A.4), or even to prove that the sequence (N/t 1 mix (ρ N )) N ≥1 is tight, i.e. that lim λ→∞ lim sup That this final statement is nonetheless true is a simple consequence of Theorem 5.6.
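For readers wishing to experiment with C N directly, the largest component in the critical window is easy to sample. The sketch below is our own illustration (the choice λ = 0, the seed, and the BFS component search are arbitrary); in the critical window p = n⁻¹ + λ n⁻⁴ᐟ³ the largest component has size of order n²ᐟ³.

```python
import random
from collections import deque

def largest_component(n, lam=0.0, seed=0):
    """Sample G(n, p) with p = 1/n + lam * n**(-4/3) and return the
    vertex list of its largest connected component (found by BFS)."""
    rng = random.Random(seed)
    p = 1.0 / n + lam * n ** (-4.0 / 3.0)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        comp, queue = [], deque([s])
        seen[s] = True
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        if len(comp) > len(best):
            best = comp
    return best
```

The quadratic edge-sampling loop is fine for the small n one would use in such experiments; for large n one would sample only the Binomial(n(n−1)/2, p) present edges.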
A Appendix: Proof of the statements in Section 6

In this appendix, we prove various results given in Section 6. We adopt the convention that if we cite elsewhere the constant c 1 of Proposition A.3 (for example), then we denote it by c A.3.1 .
A.1 Proof of Lemma 6.1 and Proposition 6.2

Proof of Lemma 6.1. First, note that by [2, Proposition 3 in Chapter 2], we have that for any stopping time S with X G S = x. Taking S to be the first hitting time of x after time 2m − 1, and writing Π(x, 2m) for the law of X G 2m when X G is started from x, we obtain that where σ x is the first hitting time of x, and the inequality holds because q G 2l (x, x) is decreasing in l (see the proof of [14, Lemma 9], for example). Since, by Cauchy-Schwarz, By applying the commute time identity for random walks on graphs, the result follows.
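The proof above hinges on the commute time identity. The following self-contained Python check verifies it numerically on a toy path graph with unit conductances; note that this is the standard graph normalization E x (τ y ) + E y (τ x ) = 2#E · R eff (x, y), which differs from the resistance-form version of Remark A.1 where π is a probability measure. The example graph is our own choice.

```python
import numpy as np

def hitting_times(P, target):
    """h[x] = E_x[first hitting time of `target`] for transition matrix P,
    obtained by solving (I - Q) h = 1 over the non-target states."""
    n = len(P)
    idx = [i for i in range(n) if i != target]
    Q = P[np.ix_(idx, idx)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[idx] = h
    return out

def effective_resistance(adj, x, y):
    """R_eff(x, y) for unit conductances, via the Laplacian pseudoinverse."""
    L = np.diag(adj.sum(axis=1)) - adj
    e = np.zeros(len(adj))
    e[x], e[y] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Path graph 0-1-2-3 with unit edge weights: R_eff(0, 3) = 3 and #E = 3.
adj = np.zeros((4, 4))
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1
P = adj / adj.sum(axis=1)[:, None]
# Commute time identity: E_0 tau_3 + E_3 tau_0 = 2 #E R_eff(0, 3) = 18.
commute = hitting_times(P, 3)[0] + hitting_times(P, 0)[3]
```

On this path both one-way hitting times equal 9, matching the classical (n − 1)² endpoint-to-endpoint formula.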
Remark A.1. As mentioned in Remark 6.3, we can apply essentially the same argument to deduce the corresponding mixing time upper bound in the continuous setting when we suppose that we have a process whose Dirichlet form is a resistance form. In particular, suppose that this is the case for X F , as defined in the introduction. Let S be the first hitting time of x ∈ F after time t; then, for any f ∈ L 1 (F, π), which can be obtained by applying an ergodicity argument similar to that used to prove (A.1). Writing Π(x, t) for the law of X F t when X F is started from x, the expectation on the right-hand side here satisfies E x (S) = t + E Π(x,t) (τ x ) ≤ t + sup y∈F R eff (x, y), where, to deduce the upper bound, we have used that the commute time identity E x (τ y ) + E y (τ x ) = R eff (x, y) also holds for resistance forms (since we are assuming π to be a probability measure, it does not appear explicitly in this version of the identity). Moreover, if f is positive, the left-hand side is bounded below as follows: E x ( ∫ 0 S f (X s ) ds ) ≥ ∫ 0 t ∫ F q s (x, y)f (y) π(dy) ds. Combining these bounds, we have proved that, for positive f ∈ L 1 (F, π) such that ‖f ‖ L 1 (π) ≠ 0, By choosing a sequence of suitable functions whose support converges to {x}, the joint continuity of (q t (x, y)) x,y∈F,t>0 allows us to deduce from this that where the first inequality holds because q t (x, x) is decreasing in t. The remainder of the proof is identical to the graph case.
The proof of Proposition 6.2 requires some preparations. Our argument depends on some estimates for hitting times that are modifications of results in [3,32].
To begin with, let B = B(R) and define g B (x, y) = µ y −1 ∑ ∞ k=0 P G x (X k = y, k < τ B ).
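The occupation density g B above is, up to the normalization by µ y , the fundamental matrix of the walk killed on exiting B. The following Python sketch computes it on a small example of our own choosing (a path with B its interior) and checks the symmetry g B (x, y) = g B (y, x) that reversibility guarantees.

```python
import numpy as np

def green_kernel(P, mu, B):
    """g_B(x, y) = mu_y^{-1} E_x[# visits to y before exiting B], computed
    from the fundamental matrix (I - Q)^{-1} of the chain killed outside B.
    (Illustrative sketch; B is a list of state indices.)"""
    Q = P[np.ix_(B, B)]
    N = np.linalg.inv(np.eye(len(B)) - Q)   # N[i, j] = expected visits
    g = np.zeros((P.shape[0], P.shape[0]))
    for i, x in enumerate(B):
        for j, y in enumerate(B):
            g[x, y] = N[i, j] / mu[y]
    return g

# Simple random walk on the path 0-1-2-3-4, killed on exiting B = {1, 2, 3}.
n = 5
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
mu = adj.sum(axis=1)            # vertex measure mu_y = deg(y)
P = adj / mu[:, None]
g = green_kernel(P, mu, [1, 2, 3])
```

For this example the expected number of visits to the middle vertex 2, started from 2, is 2 (a geometric count with escape probability 1/2), so g(2, 2) = 2/deg(2) = 1.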
By the second inequality of (6.5), we have, and therefore we obtain, the following; rearranging this gives (A.5).
The proofs of this proposition and of Proposition 6.6 highlight why it is useful to have a general theory in which the exponents H 0 , . . . , H 3 can vary.
Remark A.4. As mentioned in Remark 6.9 (2), it does not seem possible to apply current estimates for the graphs (C N ) N ≥1 and techniques for bounding mixing times to replace A −1 N ≤ t p mix (C N ) by A −1 N ≤ t p mix (ρ N ) in (6.10). The major difficulty is to verify the first inequality of (6.8) for ε 0 (λ)R. Indeed, even if we choose H 0 , . . . , H 3 large (which increases the chance that (6.4) and (6.5) hold for R), ε 0 (λ) becomes correspondingly small, so that the probability P((6.4) ∧ (6.5) for ε 0 (λ)R) does not increase.