Scale-free percolation mixing time

Assign to each vertex of the one-dimensional torus i.i.d. weights with a heavy tail of index $\tau-1>0$. Then connect each pair of vertices with a probability roughly proportional to the product of their weights and decaying polynomially with exponent $\alpha>0$ in their distance. The resulting graph is called scale-free percolation. The goal of this work is to study the mixing time of the simple random walk on this structure. We depict a rich phase diagram in $\alpha$ and $\tau$. In particular we prove that the presence of hubs can speed up the mixing of the chain. We use different techniques for each phase, the most interesting of which is a bootstrap procedure to reduce the model from a phase where the degrees have bounded averages to a setting with unbounded averages.


INTRODUCTION
In this article we study the mixing time of the simple random walk on a scale-free percolation random graph defined on the one-dimensional torus.
1.1. Spatial random graphs and scale-free percolation. We say that a graph is spatial if its vertices occupy a position in a given metric space. It is reasonable to believe that spatial random graphs are good candidates when one tries to model real-world networks where agents have a geographical or physical position (commercial or social networks, telecommunications, brain cells...). This intuition has been confirmed by the fact that some of these models exhibit properties that are often observed in data: for example, it is often the case that the nodes of the network have a degree distribution with polynomial tails (the network is said to be scale-free), the nodes are separated by a relatively low number of edges (the network is said to be small-world) and they are highly clustered. While physicists have been studying such networks for some time (see e.g. the review Barthélemy (2011)), mathematicians have begun to prove rigorous results on these models only in recent years.
The scale-free percolation random graph (from now on SFP) falls into the category of inhomogeneous spatial models: not only does the probability of linking two vertices of the graph depend on their position in space, but also on a random importance, or weight, that is assigned to each of them. SFP can be considered a combination of long-range percolation (a random graph where the probability of linking two nodes decays, roughly, polynomially in their distance, Schulman (1983)) and inhomogeneous random graphs, such as the Norros-Reittu model (where nodes with a high weight are likelier to be linked, Norros and Reittu (2006)). Since its introduction in Deijfen et al. (2013a), SFP has been the object of intense research. In particular, the scale-free property has been proved for the discrete-space model, where the nodes lie on Z^d, in Deijfen et al. (2013a). In its continuum counterpart, where the position of the nodes is given by a Poisson point process, the graph has been shown to be scale-free in an annealed sense in Deprez and Wüthrich (2019) and in a quenched sense in Dalmau and Salvi (2021). The convergence in distribution of the maximum of the degrees on a growing observation window has been studied in Bhattacharjee and Schulte (2019). The problem of graph distances in SFP has been addressed in the original paper for discrete space and in Deprez and Wüthrich (2019) for the continuum. Depending on the parameters of the model, SFP can exhibit the small-world property (the graph distance between vertices behaves asymptotically as the log of their Euclidean distance), the ultra-small-world property (the graph distance is log log of the Euclidean distance) or can be comparable to Euclidean distances. A good deal of effort is being put in finding the precise order of these distances, see Deprez et al. (2015), Hao and Heydenreich (2021), van der Hofstad and Komjáthy (2017). Finally, in Dalmau and Salvi (2021) the authors show the positivity of the clustering coefficient for continuous SFP.
Among the other few spatial models for which these properties have been proved to various extents, we mention the ultra-small scale-free geometric network (Yukich, 2006), the hyperbolic random graph (Gugelmann et al., 2012), the spatial preferential attachment model (Jacob and Mörters, 2015), the age-dependent random connection model (Gracar et al., 2019a) and the geometric inhomogeneous random graph (Bringmann et al., 2019). As noted in Gracar et al. (2019b), most of these models can be thought of as particular cases of the more general weight-dependent random connection model. To further confirm our original motivation, we point out that some of these random graphs have been proposed to model real-world networks such as the Internet (Papadopoulos et al., 2010), banking systems (Deprez et al., 2015) and livestock trades (Dalmau and Salvi, 2021).
1.2. Stochastic processes on spatial inhomogeneous random graphs. Motivated by applications (the spread of fake news on social media, the outbreak of an epidemic, the diffusion of a computer virus...), we take a step further and look at stochastic processes that evolve over inhomogeneous spatial networks. The interest of the mathematical community in this topic is quite recent and only a few references are available. Among them we find Candellero and Fountoulakis (2016) and Koch and Lengler (2016), where the authors study bootstrap percolation on the hyperbolic random graph and on the geometric inhomogeneous random graph respectively; Komjáthy and Lodewijks (2020), dealing with first passage percolation on different graph models; Janssen and Mehrabian (2017), about a push&pull protocol on the spatial preferential attachment model.
One of the most basic and studied processes on a graph is the simple random walk, where at each time-step a particle moves from its current location to any neighboring vertex with equal probability. As far as we could check, the only available results for the simple random walk on spatial inhomogeneous random graphs are the analysis of transience and recurrence for weight-dependent random connection models in Gracar et al. (2019b) and for SFP in Heydenreich et al. (2017). The mixing time of the simple random walk is, roughly put, the time needed for the distribution of the chain to approach its invariant measure. Quantifying the mixing time of a Markov chain is of primary importance, for example, for its connection with the spectral gap of the chain (see Remark 2.6) and in computer science for sampling through Monte Carlo procedures. We refer to Levin and Peres (2017a) for a complete account on the subject. We could find very few works where the random walk mixing time is studied for models that go beyond lattices or graphs without an underlying geometry. Benjamini et al. (2008) and Crawford and Sly (2012) deal with long-range percolation in dimension d = 1 and d ≥ 2 respectively, while Dyer et al. (2020) analyzes another closely related model.

1.3. Our contribution.
In the present paper we address the problem of finding the order of the mixing time for the simple random walk on an SFP constructed on the one-dimensional torus of size N. To our knowledge, this is the first time the mixing time is analyzed for a walk on an inhomogeneous spatial random graph. The graph, called G_N, is built as follows: to each node in T_N := {1, . . . , N} we assign independent weights (W_x)_{x∈T_N} following a Pareto distribution of parameter τ − 1, with τ > 1. Once we have fixed the weights, we add an edge between node x and node y with probability

(W_x W_y / ‖x − y‖^α) ∧ 1,

where ‖·‖ is the torus distance and where α > 0 is the parameter which tunes the influence of the distance between the nodes on the linking probability. So α and τ are the two parameters of the model, and we call γ := α(τ − 1). It is possible to show that the degrees of the nodes have a heavy tail of parameter γ, see Deijfen et al. (2013a). We then consider a lazy simple random walk on G_N and study its mixing time t_mix(G_N), see Section 2.2 for a precise definition.
We are inspired by Benjamini et al. (2008), who studied the same problem for the long-range percolation random graph on T_N. Long-range percolation is equivalent to a version of SFP where all the weights are set equal to 1, which morally corresponds to the case τ = ∞. The authors of Benjamini et al. (2008) prove that the simple random walk on long-range percolation undergoes a phase transition in the parameter α: when 1 < α < 2, the mixing time is of order N^{α−1}, whereas for α > 2 it is of order N^2 (all up to polylogarithmic factors). Note in particular the remarkable discontinuity of the exponent at α = 2.
For SFP we depict an almost complete phase diagram in α and τ with a rich variety of phases. Up to correcting factors, we show that t_mix(G_N) behaves as follows, cfr. Figure 1:
(i) for γ < 1, t_mix(G_N) is upper bounded by a power of log N (Theorem 2.2);
(ii) for 1 < γ < 2 and τ < 2, t_mix(G_N) is of order N^{γ−1} (Theorem 2.3);
(iii) for α ∈ (1, 2) and τ > 2, t_mix(G_N) is at least of order N^{α−1} (Theorem 2.4);
(iv) for α > 2 and γ > 2, t_mix(G_N) is of order N^2 (Theorem 2.5).
Let us further comment on these results, also comparing them with the phase diagram borrowed from Heydenreich et al. (2017), see Figure 2. In this diagram, we report the asymptotic behaviour of the graph distances between vertices with respect to their Euclidean distance for SFP on Z^d (so we are only interested in the case d = 1).
(i): in this case, the mean degree of the nodes of SFP on T_N goes to infinity as N grows. Nevertheless, the resulting graph is far from being the complete graph (even if the graph distances on the infinite lattice for γ < 1 are bounded by 2, see Figure 2), and the bound on t_mix(G_N) in this regime presents several challenges, see Section 1.4. Our result here holds almost surely, and we point out that this fact becomes fundamental for the proof of (ii). When α < 1 and γ > 1 the mean degree is also unbounded and t_mix(G_N) should again be polylogarithmic. We do not study this regime since it is irrelevant for the investigation of the polynomial mixing of the other cases.
(ii): we consider this to be the most interesting regime, both from a mathematical viewpoint and for applications (see Stegehuis et al. (2019)). The degrees have bounded first moment in N, but unbounded variance, while the weights have infinite mean. The statement on t_mix(G_N) holds in probability, see Theorem 2.3, and its proof consists in a bootstrap procedure that brings us back to the model with γ < 1, see Section 1.4 for more details. The exponent γ − 1 = α(τ − 1) − 1 of the mixing time confirms the intuitive fact that the presence of nodes with a very high degree (due to small values of τ), also called hubs, speeds up the mixing (note that, conversely, there are dynamics that are slowed down by hubs, see e.g. Janssen and Mehrabian (2017)). This is a fundamental novelty with respect to the long-range percolation of Benjamini et al. (2008), where this phase clearly does not appear.

[Figure 2: phase diagram borrowed from Heydenreich et al. (2017). D(x, y) indicates the graph distance between points x and y (conditioned on being in the infinite cluster of the graph, see the paper for more details), while |·| is the Euclidean distance. See Hao and Heydenreich (2021) for the precise meaning of the symbol ≈.]
Another interesting point is that the graph with 1 < γ < 2 and τ < 2 has a very small diameter (cfr. Proposition 6.1), but a polynomial mixing time. In this regime the graph distances on the infinite lattice behave even as the log log of the Euclidean distances, see Figure 2: the graph exhibits the ultra-small-world property.
(iii): in this case we are only able to show a lower bound of order N^{α−1}. We conjecture that an upper bound of the same order applies:

Conjecture 1.1. Let 1 < α < 2 and τ > 2. There exists c > 0 such that, with probability tending to 1 as N → ∞,

t_mix(G_N) ≤ N^{α−1} (log N)^c.

There are several reasons to believe this statement to be true. First of all, when τ > 2 the weights have a finite mean, so it is reasonable to think that the model would behave like long-range percolation and the order of t_mix(G_N) would match the N^{α−1} of Benjamini et al. (2008). Secondly, we can prove the conjecture in certain sub-regions of this phase. Thirdly, the probability of linking two distant nodes undergoes a phase transition at τ = 2, cfr. Lemma 3.2. One might object, by looking at Figure 2, that the ultra-small-world regime extends to the "triangle" τ > 2, α > 1, γ < 2 (the area between the pink, the purple and the red dotted lines in Figure 1), so one might think that the order N^{γ−1} could extend at least to that part of the diagram. Indeed, our proof for the upper bound of (ii) works also in the triangle, yielding an upper bound of N^{γ−1}. Nevertheless, we believe this bound to be suboptimal (notice that for τ > 2 one has γ − 1 > α − 1). This is because we believe t_mix(G_N) to be an increasing function of τ (that is, higher weights lead to faster mixing), and if N^{γ−1} were the right order in the triangle and N^{α−1} outside of it, we would have a sudden decrease of the order of t_mix(G_N) when increasing τ across γ = 2 (the red dotted line in Figure 1).
(iv): the last regime has the slowest possible mixing, N^2. This could be expected by noticing in Figure 2 that the graph distances between points behave linearly in their Euclidean distance.
1.4. Techniques and outline of the paper. We introduce our model precisely and state our main theorems in Section 2. After giving some preliminary results in Section 3, we carry out the proofs. We summarize here the key ideas and techniques we use.
• Upper bound of (i), Section 4: the main idea is to study first a simplified model where the torus geometry is ignored: we introduce a new random graph G̃_N where two nodes are linked if and only if the product of their weights is larger than N^α (log N)^2. We show that, if the weights follow independent Pareto distributions of parameter τ − 1, then the mixing time of the simple random walk on G̃_N is polylogarithmic, see Proposition 4.1. In order to do so, we study the Cheeger constant of G̃_N. While the Cheeger constant is usually used to find lower bounds on the mixing thanks to a test set, here we analyze the bottleneck ratio of all the possible sets of vertices. This requires a thorough slicing of the set of vertices according to their weights and then concentration inequalities to control the number of nodes and their degree in each slice (Proposition 4.6). Once the mixing of the toy model t_mix(G̃_N) has been established, we prove that it is substantially equivalent to t_mix(G_N), see Proposition 4.2.
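The simplified graph is easy to simulate. The sketch below is ours, not the paper's code (the function name `simplified_graph` is hypothetical): it builds the threshold graph in which two vertices are linked exactly when the product of their Pareto weights exceeds N^α (log N)^2.

```python
import math
import random

def simplified_graph(weights, alpha):
    """Edge set of the toy graph: x ~ y iff W_x * W_y > N^alpha * (log N)^2."""
    n = len(weights)
    threshold = n ** alpha * math.log(n) ** 2
    return {(x, y) for x in range(n) for y in range(x + 1, n)
            if weights[x] * weights[y] > threshold}

rng = random.Random(1)
tau, alpha = 1.5, 1.2           # gamma = alpha * (tau - 1) = 0.6 < 1
# Pareto(tau - 1) weights via inversion of the tail: W = U^{-1/(tau-1)}
W = [rng.random() ** (-1.0 / (tau - 1)) for _ in range(200)]
toy_edges = simplified_graph(W, alpha)
```

Note that the geometry is entirely absent: whether an edge is present depends only on the two weights, which is what makes the Cheeger-constant analysis of this toy model tractable.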
• Upper bound of case (ii), Sections 5 and 6: this bound requires the most elaborate ideas. The initial approach is inspired by Benjamini et al. (2008): we divide the torus T_N into K chunks S_1, . . . , S_K of length L = N^{γ−1+ε} for some small ε > 0. We collapse all the points of each S_j into a unique point and obtain a new graph Γ on the torus T_K. While in Benjamini et al. (2008) the graph resulting from a similar operation stochastically dominates an Erdős–Rényi random graph with link probability log K/K, we end up with something quite different. By a rescaling of the weights of order L^{−1/(τ−1)} and a coupling procedure, we can show that Γ stochastically dominates a random graph Γ̃ which is again an SFP. The fundamental point is that this time Γ̃ has parameters α̃ < α and τ̃ = τ such that γ̃ = α̃(τ̃ − 1) < 1. Furthermore, by using multicommodity flows as a tool, it is possible to bound t_mix(G_N) by some quantities related to G_N and Γ̃, see Lemma 5.2. This is a refinement of the approach of Benjamini et al. (2008), since for long-range percolation much cruder bounds are sufficient.
The second part of the proof, in Section 6, is devoted to the estimate of these quantities. The first one is t_mix(Γ̃), which we already know to be at most polylogarithmic in K by case (i). The second one is the largest diameter ∆_{G_N} of the graphs induced by G_N on each S_j; we bound ∆_{G_N} by adapting an argument of Deijfen et al. (2013a) for graph distances on Z^d. The third is the ratio of the total number of edges in G_N and in Γ̃, which we prove to be very stable thanks to a Bernstein-type concentration inequality in Proposition 6.3. The last two quantities, called Π_{G_N} and R_{G_N,Γ̃} in Lemma 5.2, involve the equilibrium measure on G_N and its relation with the equilibrium measure on Γ̃. Their study, in Propositions 6.4 and 6.5, is quite involved and technical. One is forced not only to bound the maximum of the degrees on each S_j, but also its product with the sum of all the other degrees in S_j. We achieve an optimal bound by applying the Fuk–Nagaev inequality several times, see Theorem A.2.
• Upper bound of case (iv), Section 7: we use a second moment method to show the concentration of the total degree of G_N around its mean, see Lemma 7.3. In turn, this allows us to easily bound t_mix(G_N) via the hitting times of the chain in Proposition 7.1.
• Lower bounds of all regimes, Section 8: the lower bounds of cases (ii) and (iii) are obtained at once in Proposition 8.1 using a test set in the Cheeger constant. The lower bound of case (iv) uses the parallel between random walks on graphs and electrical networks: similarly to Benjamini and Berger (2001), we show that there exists a positive fraction of nodes that are cut-points for the graph (in a particular sense, see Lemma 8.2) and infer in Proposition 8.4 that the mixing time must be at least of order the square of the number of vertices.

MODEL AND RESULTS
2.1. Scale-free percolation on the torus. We now describe in detail the distribution of the SFP random graph G_N. The set of vertices of G_N is {1, 2, ..., N}, which we identify with T_N := Z/NZ, the torus of size N. The edge set E(G_N) with law P is constructed in two steps:
• for τ > 1, we associate to each x ∈ T_N a random weight W_x such that the weights under P are independent and follow a Pareto distribution with parameter τ − 1, that is,

(2.1) P(W_x > t) = t^{−(τ−1)} for all t ≥ 1;

• once we have fixed the weights of the nodes of the graph, we connect independently any pair of nodes x, y ∈ T_N with probability

(W_x W_y / ‖x − y‖^α) ∧ 1,

where α > 0 is another parameter of the graph and ‖x − y‖ denotes the distance of x and y on the torus, that is, ‖x − y‖ := |x − y| ∧ (N − |x − y|), where ∧ indicates the minimum between the two. To make sure that G_N is connected, we will also impose that x and x + 1 are linked for all x ∈ T_N (we identify N + 1 with 1).
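For concreteness, one realization of the model can be sampled as follows. This is our own illustrative sketch (the function names are ours): the Pareto weights are drawn by inverting the tail P(W > t) = t^{−(τ−1)}, and consecutive vertices are always linked, as imposed above.

```python
import random

def torus_distance(x, y, n):
    """Distance on the one-dimensional torus Z/nZ."""
    d = abs(x - y)
    return min(d, n - d)

def sample_sfp(n, alpha, tau, rng=None):
    """One realization of SFP on {0, ..., n-1}; returns (weights, edge set)."""
    rng = rng or random.Random(0)
    # Pareto(tau - 1) weights: W = U^{-1/(tau-1)} with U uniform on (0, 1).
    weights = [rng.random() ** (-1.0 / (tau - 1)) for _ in range(n)]
    edges = set()
    for x in range(n):
        for y in range(x + 1, n):
            d = torus_distance(x, y, n)
            if d == 1:  # consecutive vertices are always linked (connectivity)
                edges.add((x, y))
            elif rng.random() < min(1.0, weights[x] * weights[y] / d ** alpha):
                edges.add((x, y))
    return weights, edges

weights, edges = sample_sfp(50, alpha=1.5, tau=1.8, rng=random.Random(0))
```

The quadratic loop over pairs is fine for illustration; for large N one would only test pairs whose weight product makes the connection probability non-negligible.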
Remark 2.1. The Pareto tail for the weights in (2.1) has been chosen for convenience, rather than some more general distribution as in Deijfen et al. (2013a). We preferred to sacrifice generality for cleaner and more readable calculations in the proofs. We believe that our results would remain substantially true for weights whose distribution has a regularly varying tail of index τ − 1.
For a given graph G = (V, E) we write {x ↔_G y}, or simply {x ↔ y} when there is no risk of confusion, for the event that x and y are connected by an edge. D_x = D_x(G) := Σ_{y≠x} 1_{{y↔x}} indicates the degree of node x ∈ V. For a set A ⊆ V, we write D_A = D_A(G) := Σ_{x∈A} D_x, so that D_G := D_V = 2|E| is twice the total number of edges. For two sets A, B ⊆ V, we also let D_{A,B} := Σ_{x∈A, y∈B} 1_{{x↔y}} be the number of edges going from A to B. The diameter of G is defined as diam(G) := max_{x,y∈V} D(x, y), where D(x, y) denotes the graph distance between points x and y, that is, the minimal number of edges of the graph one has to cross to go from x to y.

2.2. The simple random walk on G_N and its mixing time. For a given realization of the graph G_N, we now define the simple random walk (X_n)_{n∈N_0}. This is the Markov chain with transition matrix given by

P(x, y) = 1/2 if y = x, P(x, y) = 1/(2 D_x) if y ↔ x, P(x, y) = 0 otherwise.

We consider this lazy version of the walk in order to avoid periodicity issues. The invariant (in fact, reversible) measure π for the walk (X_n)_{n∈N_0} is defined as

π(x) := D_x / D_{G_N}, x ∈ T_N.

Laziness ensures that, no matter the starting point of the walk, the distribution of X_n will approach π by the ergodic theorem. Our goal is to quantify how long we will have to wait before these two measures are in some sense close to each other. To this end, we define

d(n) := max_{x∈T_N} ‖P^n(x, ·) − π‖_TV

as the distance between the distribution of the walk on G_N at time n and π when starting from the worst possible vertex. Recall that the total variation distance for two measures µ and ν on G_N is given by

‖µ − ν‖_TV := max_{A⊆T_N} |µ(A) − ν(A)| = (1/2) Σ_{x∈T_N} |µ(x) − ν(x)|.

The time for (X_n)_{n∈N_0} to get close to π is the so-called mixing time of the chain:

t_mix(G_N) := min{n ≥ 0 : d(n) ≤ 1/4}.

For other graphs G we will write t_mix(G) for the analogous quantity. Notice that the quantity 1/4 in the definition is arbitrary, see Levin and Peres (2017b, Section 4.5).
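On a small graph these definitions can be checked by brute force. The following sketch is ours and purely illustrative: it builds the lazy transition matrix, iterates the chain from every starting vertex, and returns the first time the worst-case total variation distance to π drops below 1/4.

```python
def lazy_transition_matrix(adj):
    """P(x, x) = 1/2 and P(x, y) = 1/(2 D_x) for each neighbour y of x."""
    n = len(adj)
    P = [[0.0] * n for _ in range(n)]
    for x, nbrs in enumerate(adj):
        P[x][x] = 0.5
        for y in nbrs:
            P[x][y] = 0.5 / len(nbrs)
    return P

def tv_distance(mu, nu):
    """Total variation distance: half the L1 distance between the vectors."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

def mixing_time(adj, eps=0.25):
    """Smallest n with max_x ||P^n(x, .) - pi||_TV <= eps."""
    n = len(adj)
    P = lazy_transition_matrix(adj)
    total_degree = sum(len(a) for a in adj)
    pi = [len(a) / total_degree for a in adj]  # reversible measure D_x / D_G
    rows = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    t = 0
    while max(tv_distance(row, pi) for row in rows) > eps:
        # one step of the chain from every starting point
        rows = [[sum(row[k] * P[k][j] for k in range(n)) for j in range(n)]
                for row in rows]
        t += 1
    return t

cycle6 = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]  # 6-cycle
t6 = mixing_time(cycle6)
```

Since the chain is lazy and the graph connected, the loop is guaranteed to terminate; the cost is O(n^3) per step, so this is only feasible for toy examples.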

2.3. Main results. We call γ := α(τ − 1).
The main results of our work are the following.
Theorem 2.2. Let γ < 1. There exists c > 0 such that, P-a.s., for N large enough,

t_mix(G_N) ≤ (log N)^c.

Theorem 2.3. Let 1 < γ < 2 and 1 < τ < 2. There exist a constant c > 0 and a slowly varying function ℓ : N → (0, ∞) such that, with probability tending to 1 as N → ∞,

N^{γ−1} (log N)^{−c} ≤ t_mix(G_N) ≤ ℓ(N) N^{γ−1}.

Theorem 2.4. Let 1 < α < 2 and τ > 2. There exists c > 0 such that, P-a.s., for N large enough,

t_mix(G_N) ≥ N^{α−1} (log N)^{−c}.

Theorem 2.5. Let α > 2 and γ > 2. There exist c_1, c_2 > 0 such that, P-a.s., for N large enough,

N^2 (log N)^{−c_1} ≤ t_mix(G_N) ≤ N^2 (log N)^{c_2}.

The results of these theorems are summarized in the phase diagram of Figure 1, where we use the symbols ≲ and ≳ to omit corrections with slowly varying functions, see Section 2.4.
Remark 2.6. We point out that there is a close relation between t_mix(G_N) and the spectral gap of the chain. Recall that, letting 1 = λ_1(N) ≥ λ_2(N) ≥ · · · ≥ λ_N(N) ≥ 0 be the eigenvalues of the matrix P = (P(x, y))_{x,y∈T_N}, the spectral gap is defined as 1 − λ_2(N). Then (see for example Levin and Peres (2017a, Theorems 12.3 and 12.4)) it holds

(λ_2(N) / (1 − λ_2(N))) log 2 ≤ t_mix(G_N) ≤ (1 / (1 − λ_2(N))) log(4 / π_min).

In particular, all our results can be read in terms of the spectral gap rather than the mixing time of the chain.
2.4. Notation. We will use the notation a ∧ b := min{a, b} for a, b ∈ R as well as a ∨ b := max{a, b}. The symbols c, c_1, c_2, . . . refer to positive constants whose value may change from line to line. Their value might depend on the model parameters α and τ, but will not depend on other variables (for example, N) unless specified otherwise.
The symbols ≲ and ≳ indicate respectively ≤ and ≥ eventually, up to a slowly varying function in N. Actually, except for the upper bound in Theorem 2.3, all the ≲ and ≳ refer to polylogarithmic corrections.

PRELIMINARY RESULTS
Let us recall some fundamental tools which we will use frequently.
3.1. The Cheeger constant. We now give a quick overview of the Cheeger constant and its relation to the mixing time. We recall that for the lazy simple random walk on a graph G = (V, E), the bottleneck ratio of a subset S ⊂ V of its state space is defined as

(3.2) Φ(S) := Q(S, S^c) / π(S), where Q(S, S^c) := Σ_{x∈S, y∈S^c} π(x) P(x, y)

(Levin and Peres, 2017b, Remark 7.2), and the Cheeger constant of the chain is defined as

(3.3) Φ_* := min{Φ(S) : S ⊂ V, π(S) ≤ 1/2},

where π is the invariant measure of the walk. The following result links the mixing time of the chain and its Cheeger constant (Sinclair, 2012, pg. 58):

(3.4) 1/(4 Φ_*) ≤ t_mix(G) ≤ (2 / Φ_*^2) log(4 / π_min),

where π_min := min_{x∈V} π(x).
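For a tiny graph one can evaluate the bottleneck ratio of every admissible set S and hence the Cheeger constant directly. The following brute-force sketch is ours (exponential in the number of vertices, so illustration only) and uses the fact that for the lazy walk every oriented edge carries the same flow π(x)P(x, y) = 1/(2 D_G).

```python
from itertools import combinations

def cheeger_constant(adj):
    """Brute-force Phi_* for the lazy walk on a small graph (adjacency lists)."""
    n = len(adj)
    total_degree = sum(len(a) for a in adj)
    pi = [len(a) / total_degree for a in adj]
    best = float("inf")
    for size in range(1, n):
        for subset in combinations(range(n), size):
            S = set(subset)
            piS = sum(pi[x] for x in S)
            if piS > 0.5:  # only sets with pi(S) <= 1/2 are admissible
                continue
            # Q(S, S^c) = (number of boundary edges) / (2 * total_degree),
            # since pi(x) P(x, y) = 1 / (2 * total_degree) per oriented edge.
            boundary = sum(1 for x in S for y in adj[x] if y not in S)
            best = min(best, boundary / (2 * total_degree) / piS)
    return best

k4 = [[j for j in range(4) if j != i] for i in range(4)]  # complete graph K4
phi = cheeger_constant(k4)
```

On K4 the minimizing sets are the two-vertex halves, giving Φ_* = 1/3; single vertices give the larger ratio 1/2.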

3.2. Multicommodity flows.
Consider a reversible Markov chain on the vertices V of a graph G with its set of unoriented edges E, transition matrix P and reversible measure π. Let E⃗(G) = {(x, y) ∈ V × V : {x, y} ∈ E} be the set of oriented edges obtained by doubling the unoriented edges. An E-path from x ∈ V to y ∈ V is a sequence p = e_1 e_2 . . . e_m of edges in E⃗(G) such that e_1 = (x, x_1), e_2 = (x_1, x_2), . . . , e_m = (x_{m−1}, y) for some vertices x_i ∈ V. The length of a path p is indicated as |p|. The set of all paths is called P and the set of all simple paths from x to y is called P(x, y). We will also use the notation P(G) and P(x, y, G) when we need to specify in which graph the path is taken. A flow is a function f : P → [0, ∞) such that

Σ_{p∈P(x,y)} f(p) = π(x) π(y) for all x ≠ y.

Extending the definition of f also to oriented edges, we let the edge load of an edge e ∈ E⃗(G) be

f(e) := Σ_{p∈P: e∈p} f(p).

The congestion of a flow f is

(3.6) ρ(f) := max_{e=(x,y)∈E⃗(G)} (1 / (π(x) P(x, y))) Σ_{p∈P: e∈p} f(p) |p|.

Sinclair (1992) establishes the relation between congestion and mixing time of the chain (notice that in Sinclair (1992) the congestion of a flow f as defined in (3.6) is called ρ̄(f), while the letter ρ is used for a related quantity). The author shows (see Theorem 5' and Proposition 1 therein) that for any flow f

(3.7) t_mix(G) ≤ c ρ(f) log(1 / π_min)

for some universal constant c > 0, and that there exists a flow f* (see Theorem 8 and Remark (a) therein) such that

(3.8) ρ(f*) ≤ c t_mix(G).
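As an illustration of these definitions (our own sketch, not the flows used in the paper), one can build a concrete flow by routing the whole mass π(x)π(y) along a single shortest path for each ordered pair (x, y), and then evaluate its congestion as in (3.6).

```python
from collections import deque

def bfs_path(adj, src, dst):
    """One shortest path from src to dst, as a list of oriented edges."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, u = [], dst
    while prev[u] is not None:
        path.append((prev[u], u))
        u = prev[u]
    return path[::-1]

def congestion(adj):
    """rho(f) for the flow sending pi(x)pi(y) along one shortest path."""
    n = len(adj)
    total_degree = sum(len(a) for a in adj)
    pi = [len(a) / total_degree for a in adj]
    load = {}  # oriented edge e -> sum over paths p through e of f(p) * |p|
    for x in range(n):
        for y in range(n):
            if x == y:
                continue
            p = bfs_path(adj, x, y)
            for e in p:
                load[e] = load.get(e, 0.0) + pi[x] * pi[y] * len(p)
    # divide each edge load by pi(u) P(u, v) = pi(u) / (2 D_u), lazy walk
    return max(l / (pi[u] * 0.5 / len(adj[u])) for (u, v), l in load.items())

cycle5 = [[(i - 1) % 5, (i + 1) % 5] for i in range(5)]  # 5-cycle
rho = congestion(cycle5)
```

Routing everything along one path is the crudest admissible choice; good flows, as in Section 5, spread the mass over many paths to keep the congestion low.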

3.3. A simple lemma.
The following lemma will be used in the upper bound of Theorem 2.3 to pin down the right polynomial order of the mixing time in that regime. It should be a well-known fact about slowly varying functions, but we could not find a reference.
Lemma 3.1. Let (X_N)_{N∈N} be a sequence of random variables such that, for each ε > 0,

lim_{N→∞} P(X_N > N^ε) = 0.

Then there exists a slowly varying function ℓ : N → (0, ∞) such that

lim_{N→∞} P(X_N > ℓ(N)) = 0.

Proof. Take a sequence (ε_i)_{i∈N} such that ε_i ↓ 0. We know that for all i ∈ N there exists an N_i, which we may take increasing in i, such that P(X_N > N^{ε_i}) ≤ 1/i for all N ≥ N_i; set ε(N) := ε_i for N_i ≤ N < N_{i+1}. Since lim_{N→∞} ε(N) = 0, we are done if we find a slowly varying function ℓ such that ℓ(N) ≥ N^{ε(N)} for all N large enough. By Karamata's representation theorem (see e.g. Bingham et al. (1989, Theorem 1.3.1)), the function ℓ(N) = exp{Σ_{k=1}^N θ(k)/k} is a slowly varying function as long as lim_{N→∞} θ(N) = 0. So it is sufficient to find a function θ with lim_{N→∞} θ(N) = 0 for which

Σ_{k=1}^N θ(k)/k ≥ ε(N) log N for all N large enough.

This is clearly possible by choosing a θ which decays to 0 slowly enough.
3.4. Preliminary results on SFP. We will use in different places the following bound on the linking probabilities of SFP.
Lemma 3.2. For x, y ∈ T_N with ‖x − y‖ > 1, it holds

P(x ↔ y) ≤ c (‖x − y‖^{−α} + ‖x − y‖^{−γ} log ‖x − y‖),

where c > 0 is a constant that only depends on α and τ.
UPPER BOUND FOR γ < 1

In this section we prove the upper bound of Theorem 2.2. First of all we will show that the result holds for a simplified model where the geometry of the torus plays no role. Then we will prove that we can dominate the mixing time of the original model by the square of the mixing time of the simplified model, up to a polylogarithmic factor.

4.1. Proof of Theorem 2.2. Under the measure P, we construct another random graph called G̃_N with vertices on the torus T_N. We use the same random weights {W_x}_{x∈T_N} that we use to construct G_N, so the two random graphs are coupled. Then, we put an edge between two vertices in G̃_N if and only if the product of their weights exceeds N^α (log N)^2, that is, for all x ≠ y ∈ T_N,

x ↔ y in G̃_N ⟺ W_x W_y > N^α (log N)^2.

We refer to G̃_N as the simplified model. We indicate with D̃_x the degree of a node in G̃_N, with D̃_A the sum of the degrees of the vertices in A ⊆ T_N, and with π̃ the invariant measure of the lazy simple random walk on G̃_N, that is, π̃(x) = D̃_x / D̃_{G̃_N}. Proposition 4.1 shows that the lazy simple random walk on G̃_N mixes fast, while Proposition 4.2 tells us that the mixing time on G_N is bounded by the square of that of G̃_N up to a correction. We will prove the two propositions in the next sections, and we point out that their combination yields Theorem 2.2.

Proposition 4.1. Let γ := α(τ − 1) < 1. There exists c > 0 such that, P-a.s., for N large enough,

t_mix(G̃_N) ≤ (log N)^c.

Proposition 4.2. There exists c > 0 such that, P-a.s., for all N large enough, it holds

t_mix(G_N) ≤ t_mix(G̃_N)^2 (log N)^c.

4.2. Preliminary results on the simplified model. Before proving Propositions 4.1 and 4.2, we collect some facts about G_N and G̃_N.
Proposition 4.3. P-a.s., for N large enough, the following properties hold:

Proof. The proof is postponed to Section B.1 in the appendix.
4.3. Proof of Proposition 4.1. In view of the upper bound in (3.4) and the fact that, for N large enough, π̃_min ≥ 1/N^2 (using item (v) in Proposition 4.3), it will be enough to show that there exists a c > 0 such that, P-a.s., for N large enough, Φ_*(G̃_N) ≥ (log N)^{−c}, where Φ_*(G̃_N) is the Cheeger constant (cfr. (3.3)) associated to the lazy simple random walk on G̃_N. Therefore, we will be done if we can prove the following: there exists c > 0 such that, P-a.s., for N large enough, for each S ⊆ T_N with π̃(S) < 1/2 we have

(4.7) Φ(S) ≥ (log N)^{−c},

where in this section Φ(S) indicates the bottleneck ratio of the set S associated to the lazy simple random walk on G̃_N.
Observation 4.4. If a set S is such that D̃_{S,S^c} ≳ N^{2−γ}, we automatically have (4.7). In fact, P-a.s., for all N large enough, we have that D̃_S ≤ D̃_{G̃_N} ≲ N^{2−γ} by Proposition 4.3 item (v).
We partition the set of vertices into "weight slices". Let and define The V_j's partition the vertices of the graph with weight larger than N^{α/2} log N, whereas the V_{j^c}'s partition the vertices with weight smaller than , while all the other sets V_j and V_{j^c} are mutually disjoint). V_j^+ and V_{j^c}^+ are subsets, respectively, of V_j and V_{j^c}, containing the vertices that have a high weight (within their weight slice).
Observation 4.5. In the simplified model, for each j, all vertices in V_{j^c} are connected to all vertices in V_ℓ with ℓ ≥ j, since the product of the weight of a vertex in V_{j^c} and the weight of a vertex in V_ℓ is always larger than N^α (log N)^2.

Proposition 4.6. Call Q := (log N)^{τ−1}. P-a.s., for all N large enough, the following holds: (i) for j = 1, . . .,

Proof. The proof is postponed to Appendix B.2.
Take now any S ⊆ T_N and call Q := (log N)^{τ−1} as in Proposition 4.6. We have three possibilities: We analyze these three cases separately and show that (4.7) always holds when π̃(S) < 1/2. This will conclude the proof.
Case A) Since all points in V_1 are connected to all the other points in V_1, in this case we have D̃_{S,S^c} ≳ N^{2−γ}, so that (4.7) holds thanks to Observation 4.4.

Case B) Define
We distinguish two subcases: either j ∈ {2, . . . , j_max} or j = ∞ (that is, the set on the right-hand side above is empty).
Case B1) Suppose that j ∈ {2, . . . , j_max}. Recall (cfr. Observation 4.5) that, by construction, all the points in V_{j−1} and all the points in V_j are connected to all the points in V_{(j−1)^c}. There are now two possibilities: if Using (4.8) and (4.9), we see that in both cases (4.7) holds thanks to Observation 4.4.

Case B2) Suppose now that j = ∞. If there exists i ∈ {1, . . . , and we obtain again (4.7) with (4.8) and (4.9). If such an i does not exist, it means that |V_j ∩ S^c| ≤ Q^{−2}|V_j| for all j ∈ {1, . . . , j_max}. We now want to show that this kind of set S is not to be taken into account in (3.3), since π̃(S) > 1/2. This should intuitively be true because we are considering a set S that contains the large majority of the points of each weight slice. More precisely, we decompose (recall that

Consequently, if we can prove that

D̃_{V_j ∩ S} > D̃_{V_j}/2 for all j = 1, . . . , j_max, (4.12)

it will follow that π̃(S) > 1/2. We will show (4.12) just for the D̃_{V_j}'s, the proof for the D̃_{V_{j^c}}'s being completely similar. For j = j_max, we notice that most of the points of V_{j_max} are inside the set S. Since all of the points in V_{j_max} have the same degree (which is N − 1), we clearly have D̃_{V_{j_max} ∩ S} > D̃_{V_{j_max}}/2. Take now j ∈ {1, . . . , j_max − 1}. By (4.8) and (4.10) we know that

Since by assumption |V_j ∩ S^c| ≤ Q^{−2}|V_j|, and since the vertices in V_j^+ are the vertices of V_j with the largest weight and therefore with the largest degree, we deduce that D̃_{V_j ∩ S} ≥ D̃

Case C)
We mirror the proof of Case B. Define As in Case B, the index j can be either finite or not.
Case C2) Finally, suppose that j = ∞. Similarly to Case B2, if there exists i ∈ {1, . . . , and we obtain once more (4.7). If such an i does not exist, it means that |U ∩ Then, setting V_{(j_max+1)^c} = ∅ to ease the notation, on the one hand

while on the other hand

Since by (4.9) one has |V_{(j+1)^c}| ≤ 4Q|V_{j^c}|, one sees from (4.14) and (4.15) that

for some constant c > 0. This concludes the proof of Proposition 4.1.
4.4. Proof of Proposition 4.2. For a realization of the simplified model G̃_N, let f* be a flow on G̃_N for which (3.8) holds. We now define a flow on G_N as follows: for each x, y ∈ T_N and each path p from x to y we set

Recall by Proposition 4.3 (ii) that E(G̃_N) ⊆ E(G_N) almost surely for N large enough. Since G_N is connected for N large enough, it is easy to check that f is indeed a flow for G_N.
We notice that, by the definition of f, for any edge e ∈ E(G̃_N)

while for any edge e ∈ E(G_N) \ E(G̃_N) one has f(e) = 0. Using also that D̃_{G̃_N} ≤ D_{G_N}, we can bound the congestion of f by

By (3.7) and the fact that D_{G_N} ≲ N^{2−γ} (as can be seen by (4.5) and (4.6)), we are done if we prove that, for some c > 0,

almost surely for N large enough. But this is true, since each node z has the same weight W_z in both models, so (4.1), (4.2), (4.3) and (4.4) do the job.
Remark 5.1. For simplicity of exposition we will consider from now on ℓ = 0, so that all the chunks have the same size. Our results still hold, up to minor corrections, for other values of ℓ. When the needed modifications are not minor, we will explicitly indicate what has to be changed.
We call Γ = Γ(N) the random graph on T K obtained in the following way: for a realization of SFP G N , we put a single undirected edge between nodes i and j in Γ if the point x max (i) with the largest weight in S i is connected to the point x max (j) with the largest weight in S j . That is, In any case, we also include all edges between neighbouring i's.
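The coarse-graining step above can be sketched in code. The snippet below is a minimal toy illustration, not the paper's construction verbatim: the sampler `sample_sfp`, the constant `lam` and all function names are our own, and the connection probability min(1, λ·W_x·W_y/d^α) is only a stand-in for (2.2).

```python
import random
from itertools import combinations

def sample_sfp(N, tau, alpha, lam=1.0, seed=0):
    """Toy SFP on the torus {0,...,N-1}: i.i.d. Pareto(tau-1) weights,
    edge probability min(1, lam*W_x*W_y/d^alpha) at torus distance d,
    nearest neighbours always connected."""
    rng = random.Random(seed)
    W = [rng.random() ** (-1.0 / (tau - 1)) for _ in range(N)]
    edges = set()
    for x, y in combinations(range(N), 2):
        d = min(y - x, N - (y - x))                     # torus distance
        p = 1.0 if d == 1 else min(1.0, lam * W[x] * W[y] / d ** alpha)
        if rng.random() < p:
            edges.add((x, y))
    return W, edges

def coarse_graph(N, L, W, edges):
    """Build Gamma on T_K (K = N // L): chunk S_i = {i*L,...,(i+1)*L-1};
    connect i and j if their max-weight representatives are connected
    in G_N, and always connect neighbouring chunks."""
    K = N // L
    x_max = [max(range(i * L, (i + 1) * L), key=lambda x: W[x]) for i in range(K)]
    gamma = set()
    for i, j in combinations(range(K), 2):
        d = min(j - i, K - (j - i))
        rep_edge = (min(x_max[i], x_max[j]), max(x_max[i], x_max[j]))
        if d == 1 or rep_edge in edges:
            gamma.add((i, j))
    return gamma

W, E = sample_sfp(N=64, tau=1.8, alpha=1.5)
G = coarse_graph(64, 8, W, E)
```

By construction every pair of neighbouring chunks is joined in Γ, mirroring the convention that all edges between neighbouring i's are included.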
Lemma 5.2. Let Γ = Γ(N) be a SFP random graph on T K with parameters It is possible to couple G N and Γ so that where with G j N indicating the restriction of G N to S j . Notice that Γ in the lemma is an SFP where γ = α(τ − 1) < 1. Therefore we already know that t mix (Γ) is polynomial in K (and hence in N) by Theorem 2.2. We point out that the presence of ε in the definition of L is needed for the stochastic domination (5.4).
We proceed now as follows. We will spend the rest of this section on the proof of Lemma 5.2, dividing it in two parts: in Subsection 5.1 we describe how to couple the two random graphs and obtain (5.4), while in Subsection 5.2 we derive (5.5). Afterwards, in Section 6, we will show how the upper bound in Theorem 2.3 follows from Lemma 5.2.

5.1. Stochastic domination, proof of (5.4). We show now that we can build Γ on the same probability space as G N (and therefore as Γ) so that condition (5.4) is satisfied. The idea is to couple the weights W j in Γ with the weights in Γ so that, roughly, W j ∼ L −1/(τ−1) W max (S j ), where W max (S j ) := max x∈S j W x = W x max (j) . Since we will use this fact in the next section, we rephrase the statement in a more precise proposition.
Proposition 5.3. There exists a coupling between G N and Γ such that, P-a.s. for all N large enough, and (5.7)

Proof. First of all we check that it is possible to couple the weights of the nodes so that (5.6) holds. To this end, it is enough to show that where we shortened U L := (log L) 2/(τ−1) . For (5.8) we notice that, for all t ≥ 1, the right-hand side is where for the inequality we used the fact that (1 − a) L ≥ 1 − aL for all a ≥ 0 and L ≥ 0. For (5.9) we calculate This quantity is larger than P(W j > t) = t −(τ−1) for, say, all t ≥ 2, as can be checked straightforwardly. But we claim that L −1/(τ−1) U L W max (S j ) is always larger than 2, for L large enough.
In fact, τ−1 which is summable in L (recall that K can be written as a polynomial in L), so the first Borel–Cantelli lemma gives the claim and (5.9) is verified for t < 2, too. Now we consider G N and Γ built on the same probability space with weights satisfying (5.6). We recall that, given the weights, the presence of each edge is independent from the others. Hence, showing that for all i ≠ j ∈ T (5.10) will ensure that there exists a coupling such that (5.7) holds. Without loss of generality we can take i = 1 and show (5.10) for all j = 3, . . ., ⌈K/2⌉ (the case j = 2 is trivial, since by the definition of the model all nearest neighbours are connected with probability 1, and we stop at ⌈K/2⌉ since we are dealing with the torus distance). Recalling (5.2), the left-hand side of (5.10) can be bounded as where for the first inequality we have used the fact that for all x ∈ S 1 and y ∈ S j one has x − y ≤ jL. On the other hand, the right-hand side of (5.10) is so (5.10) is verified if we prove that (5.11) Since α > α, it is enough to show (5.11) for j = ⌈K/2⌉. Recalling that L = ⌊N γ−1+ε ⌋ and K = N/L, we see that l.h.s. of (5.11) (5.12) while, recalling (5.3), r.h.s. of (5.11) (5.13) Comparing (5.12) and (5.13), we obtain that (5.11) holds for all N large enough, which in turn gives (5.10) and hence (5.7).
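The scaling behind the weight coupling can be checked numerically: the maximum of L i.i.d. Pareto(τ−1) weights, rescaled by L^{−1/(τ−1)}, has exact tail 1 − (1 − t^{−(τ−1)}/L)^L, which lies strictly below the Pareto tail t^{−(τ−1)} — consistent with the need for the logarithmic correction U_L in the stochastic domination. The parameter values below are illustrative only.

```python
import random

def pareto(rng, tau_m1):
    # P(W > t) = t^{-(tau-1)} for t >= 1
    return rng.random() ** (-1.0 / tau_m1)

rng = random.Random(1)
tau_m1, L, trials, t = 0.8, 200, 20000, 2.0

# empirical tail of the rescaled block maximum L^{-1/(tau-1)} * max
hits = 0
for _ in range(trials):
    m = max(pareto(rng, tau_m1) for _ in range(L))
    if L ** (-1.0 / tau_m1) * m > t:
        hits += 1
emp = hits / trials

# exact tail: P(L^{-1/(tau-1)} max > t) = 1 - (1 - t^{-(tau-1)}/L)^L
exact = 1.0 - (1.0 - t ** (-tau_m1) / L) ** L
```

Since 1 − (1 − u)^L ≤ Lu, the exact tail is dominated by the Pareto tail t^{−(τ−1)}, so a fresh Pareto weight cannot be dominated by the rescaled maximum without an extra slowly varying factor.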

5.2. Comparison of the mixing times, proof of (5.5). We show now how to obtain (5.5), closing the proof of Lemma 5.2. The idea comes from Benjamini et al. (2008) (cf. Proposition 2.1 therein), and we will borrow part of its notation; we will also drop the N from G N and G i N for simplicity. Recall Section 3.2 for some notation about paths. For x, y both in some S i we denote by p(x, y) the graph geodesic in G i between x and y (if there is more than one, we just choose any). For an edge (i, j) ∈ E(Γ), let e(i, j) ∈ E(G) be the edge (x max (i), x max (j)). If q = e 1 e 2 · · · e |q| ∈ P (i, j, Γ) and x ∈ S i , y ∈ S j , we denote by p(q, x, y) the path in G from x to y that uses p(x, x max (i)), then the edges induced by q and then p(x max (j), y), that is p(q, x, y) := p(x, e where for each oriented edge e k we indicate with e + k (respectively e − k ) its starting (ending) point. We also observe that for any q, x, y as above one has (5.14) Let f * denote a flow on Γ for which (3.8) holds. From it, we will construct a flow f on G as follows:
- for x, y ∈ S i , set f (p) := π G (x)π G (y) if p = p(x, y), and 0 otherwise;
- for x ∈ S i , y ∈ S j with i ≠ j, set for any q ∈ P (i, j, Γ) and 0 otherwise.
It is straightforward to verify that this defines a flow on G. Let us now compute the congestion rate associated with f . Let (x, y) ∈ E(G) with x ∈ S i and y ∈ S j .
• If i ≠ j, then denoting by q + (respectively q − ) the starting (ending) vertex of a path q we obtain ∑ p∈P (G) p∋(x, y) f (p)|p| (5.14) for some constant c > 0.
• If i = j, any path p that contains the edge (x, y) and such that f (p) > 0 must be of the form p = p(z, w) for some z, w ∈ S i . Therefore (5.16) Now note that for a flow f one has that (5.17) The result (5.5) follows by applying (3.7) and (5.17) to t mix (G N ) and then using (5.15) and (5.16).
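The congestion computations of the two cases above can be illustrated on a toy instance. The helper below is our own sketch: it computes the quantity max_e (1/c(e)) Σ_{p ∋ e} f(p)|p| for a flow given as (path, value) pairs, with an abstract `edge_capacity` standing in for the paper's degree-based edge weights.

```python
from collections import defaultdict

def congestion(paths_with_flow, edge_capacity):
    """Congestion of a flow: R(f) = max_e (1/c(e)) * sum_{p: e in p} f(p)*|p|.
    Paths are vertex tuples; undirected edges are stored as frozensets."""
    load = defaultdict(float)
    for path, f in paths_with_flow:
        length = len(path) - 1                 # |p| = number of edges
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += f * length
    return max(load[e] / edge_capacity(e) for e in load)

# toy example: unit capacities, two length-2 paths sharing the edge {1,2}
paths = [((0, 1, 2), 0.5), ((1, 2, 3), 0.25)]
R = congestion(paths, lambda e: 1.0)
# edge {1,2} carries 0.5*2 + 0.25*2 = 1.5, the maximum over all edges
```

Bounds of the type (3.7) then control the mixing time in terms of R(f) for any admissible flow, which is exactly how (5.15) and (5.16) enter the proof.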
In the previous section we showed inequality (5.5). In order to conclude the proof of the upper bound of Theorem 2.3, we therefore have to bound the quantities appearing on the right-hand side of (5.5). As mentioned before, for ε small enough, Γ is a SFP random graph with γ = α(τ − 1) < 1: by the result of Theorem 2.2 its mixing time is at most logarithmic in the number of nodes, which is K. We are left to control the quantities This is taken care of in the next four propositions. Also in this section we will drop the N from G N to ease the notation.
). There exists c > 0 such that, P-a.s. for all N large enough, (6.2)

Observation 6.2. Mind that, for completeness, Proposition 6.1 is stated for a set of parameters that is more general than the one considered in this section; that is, we are not imposing 1 < τ < 2.
Proposition 6.3. There exists c > 0 such that, P-a.s. for all N large enough, Proposition 6.4. There exists a constant c > 0 such that, P-a.s. for all N large enough, R G, Γ N ε (6.4)

Proposition 6.5. Recall that Π G := max j∈T K ∑ z≠w∈S j π G (z)π G (w). It holds Before giving the proofs of these four propositions in the next subsections, we conclude the argument for the upper bound of Theorem 2.3. Using (6.1), (6.2), (6.3), (6.4) and (6.5) in combination with (5.5) yields that there exist positive constants c 1 and c 2 not depending on N such that Since ε can be taken arbitrarily small, we finally obtain the upper bound of Theorem 2.3 thanks to Lemma 3.1.

6.1. Proof of Proposition 6.1. The proof takes inspiration from the argument of Deijfen et al. (2013b, Theorem 5.1), which deals with graph distances in SFP. The approach is an alternative to the renormalization approach of Benjamini and Berger (2001, Theorem 3.1). First of all we consider G 1 , the graph induced by G on {1, . . ., L}. Take a constant M > 0, to be chosen large enough later on. We denote by D(•, •) the graph distance between points of G 1 . We start by showing that, for some ξ > 0 and c > 0, (6.6) Fix some 0 < δ < (2 − γ)/2 and define We want to show that in each A i (respectively in each B i ) there is a point a an upper bound on the distance between 1 and the rightmost point of A i max (resp. between the leftmost point of B i max and L), since neighbouring points are always connected. We focus on the A i 's, since for the B i 's the same calculation holds.
Let a i be the point in A i with the largest weight: and let F be the event where |A i | indicates as usual the cardinality of the set A i . We bound By upper bounding the last expression with i max times the largest summand (which corresponds to i = i max ) and noticing that |A i max | ≤ (log L) M we find for some c > 0 not depending on L.
On the other hand, conditioning on F, it is unlikely that for some i one has a i ↮ a i+1 : where for the last passage we have used that unit and that δ < (2 − γ)/2; one can check that the exponent in the last display is bounded by where c > 0 is a constant not depending on L, possibly different from the one appearing in (6.7). With this bound at hand, we conclude the estimate (6.8), obtaining }. The bounds (6.7) and (6.9) yield P i=1,...,i max −1 and this implies (6.6), absorbing the factor 2 into the constant c.
We notice that the bound (6.6) also works if we replace D(1, L) by any D(x, y) with x, y ∈ G 1 , since we could repeat the whole argument above with x and y replacing 1 and L and obtain an even better bound. Hence, by a union bound, The last quantity can be made summable in N by choosing M large enough so that, by the first Borel–Cantelli lemma, we are done. Notice that we ignored the fact that the graph induced on the K-th chunk S K = {(K − 1)L, . . ., N} might have size larger than L; since this size cannot be larger than 2L − 1, though, the proof is easily adapted.

6.2. Proof of Proposition 6.3. First of all we claim that, P-a.s. for all N large enough, (6.10) The lower bound is obvious. We begin by bounding, for any x ∈ T N , and considering N odd for simplicity, By an elementary calculation, one can see that the j-th term of the sum equals 1 if j α < W x , while it is smaller than where the last inequality can be checked by using the approximation of sums by definite integrals. Notice that D x cannot be larger than N, so (6.11) implies Furthermore, for all t > 1, it holds The inequality can be checked via Bernstein's inequality, see Lemma A.1, with the X i 's given by ½ {x↔x+i} for i = 1, . . ., N − 1, taking M = 1 and noticing that This ensures that which is summable in N: we can conclude, thanks to the first Borel–Cantelli lemma and using (6.12), that almost surely for N large enough (6.13) We are now ready to bound the total number of edges in G. Thanks to (6.13) we get We use once more Bernstein's inequality: we take in Lemma A.1 the independent variables Bernstein's inequality then yields The last quantity is summable in N, so the Borel–Cantelli lemma finally gives the upper bound in (6.10).
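The two ingredients of the degree estimates (6.11)–(6.13) — the j-th term of the conditional mean degree is min(1, ·) of a polynomially decaying ratio, and the conditional degree is a sum of independent Bernoullis, hence sharply concentrated — can be checked on a toy example. The snippet is our own simplification, with unit weights at the other vertices and an illustrative constant `lam`.

```python
import random, math

def cond_degree_mean(Wx, N, alpha, lam=1.0):
    """E[D_x | W_x] in a toy model: the term at distance j is
    min(1, lam * Wx / j^alpha), counted on both sides of x."""
    return sum(2 * min(1.0, lam * Wx / j ** alpha) for j in range(1, N // 2))

def sample_cond_degree(Wx, N, alpha, rng, lam=1.0):
    """One sample of D_x given W_x: independent Bernoulli edges."""
    d = 0
    for j in range(1, N // 2):
        for _ in range(2):  # both directions on the torus
            if rng.random() < min(1.0, lam * Wx / j ** alpha):
                d += 1
    return d

rng = random.Random(7)
N, alpha, Wx = 2000, 1.5, 50.0
mu = cond_degree_mean(Wx, N, alpha)
samples = [sample_cond_degree(Wx, N, alpha, rng) for _ in range(200)]
avg = sum(samples) / len(samples)
```

Since each edge indicator has variance at most its mean, a Bernstein-type bound keeps the empirical degree within a few standard deviations of the conditional mean, which is the mechanism behind (6.13).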
We turn our attention to |E(Γ)|. This is the number of edges in a SFP model with K vertices and γ < 1. By item (ii) in Proposition 4.3 on the one hand, and by (4.16) on the other, we know that, P-a.s. for all N large enough, (6.14) where Γ indicates the simplified model described in Section 4.1. At the same time, item (v) in Proposition 4.3 tells us that and recalling that K = N/L ≥ N 2−γ−ε , (6.14) and (6.15) yield, for some c 4 , c 5 > 0 that can be chosen independently of ε, . This, together with (6.10), implies (6.3).

6.3. Proof of Proposition 6.4. For simplicity, in this proof we abbreviate D S j for D S j (G) and D j for D j (Γ). We use the lower bound in (6.3) to see that Therefore we want to show that there exists c > 0 such that, P-a.s. for N large enough, for all j ∈ T K D S j D j N γ−1+ε (6.17) which together with (6.16) implies (6.4). We claim that, P-a.s. for all N large enough, for all Before proving (6.18) and (6.19) we show how to conclude by analyzing all possible cases. Fix j and abbreviate W := max x∈S j W x . Recall that L = ⌊N γ−1+ε ⌋. We point out that in principle some of the cases listed below might be empty, depending on the values of α and τ.
Case 1. W > L α . In this case we obtain Here we distinguish two further sub-cases.
Case 2.b. W ≤ L 1/(τ−1) . In this case again D j N 2−γ−ε and (6.17) holds. We move to the proof of (6.18) and (6.19). For equation (6.19) we recall the simplified model described at the beginning of Section 4 and write D j for the degree of node j in the simplified model related to Γ. We have, using item (ii) of Proposition 4.3, For equation (6.18), we first of all notice that, by (6.13), We claim that there exists a constant Q > 0 such that Thanks to the Borel–Cantelli lemma, this implies (6.18), also noticing that D S j N by (6.10). Let us show (6.20).
We focus on j = 1. Call Y := Y 1 and M := M 1 and let µ := E[W 1/α x ] < ∞. We distinguish between the cases where M is smaller or larger than L. In the first case we can use directly the Fuk–Nagaev inequality (A.2) with y = L and x = (Q − µ)L to get that there exists a constant c > 0 such that where the last inequality holds for Q large enough. When instead the maximum exceeds L we proceed as follows. First of all we divide the possible values of M into intervals and bound At the cost of a union bound we can suppose that W L is the largest W x in S 1 , so that, for each ℓ, the ℓ-th summand in the last display can be dominated by where x and M ′ := max x=1,...,L−1 W 1/α x . On the one hand, and on the other, using again the Fuk–Nagaev inequality (A.2), for Q large enough, where Q ′ is a constant that can be made arbitrarily large by taking Q large. Summing up, using (6.23), (6.24) and (6.25) in (6.22) we obtain This last expression together with (6.21) shows that where Q ′′ is a constant that can be made arbitrarily large by taking Q large. Since L = ⌊N γ−1+ε ⌋ and K = N/L, by taking Q large enough we can make (6.20) true, which in turn implies (6.18) as mentioned before. This concludes the proof of Proposition 6.4.

6.4. Proof of Proposition 6.5. Fix any j ∈ T K and call z * the point in S j realizing the maximum of π G (z) in S j . We notice that By using that |E(G)| ≥ N, the probability on the left-hand side of (6.5) can therefore be upper bounded, using also a union bound, by l.h.s. of (6.5) Equation (6.18) in the proof of Proposition 6.4 and equation (6.13) imply that there exists a c > 0 such that, for N large enough, P-a.s., . Therefore, in order to show that the left-hand side of (6.5) tends to zero, it will be enough to prove lim We bound this probability depending on the value of M.
Case M < L(log N) c : first of all we use that M ∨ L < L(log N) c to see that x ] < ∞ and since N γ L −1 (log N) −2c ≥ 2µL by (5.1), one obtains Using the Fuk–Nagaev inequality (A.2) we bound the right-hand side of the last display with for some c 1 , c 2 > 0. Since there exists δ > 0 such that L 3−γ N −γ < N −2δ and N γ L −2 > N 2δ , for N large enough we can bound Case M ≥ N 1−2ε : when the maximum is large, one can simply bound we are ignoring the fact that ℓ min and ℓ max might not be integers, in which case we could just take their integer parts). We partition where I ℓ := [L2 ℓ−1 , L2 ℓ ). We now bound the probability in the sum. At the cost of a union bound we can suppose that W L is the largest By (6.20) and the first Borel–Cantelli lemma, we know that with probability 1 there exists a Q > 0 such that, for all N large enough, Plugging this into (6.29) and using (6.24) yields for all ℓ min ≤ ℓ ≤ ℓ max . Since the number of ℓ's is logarithmic, we conclude that We gather the results in (6.27), (6.28) and (6.30) to see that which proves (6.26) for ε small enough.
7. CASE α > 2, γ > 2: UPPER BOUND OF THEOREM 2.5

For the reader's convenience we restate the upper bound of Theorem 2.5 as a proposition.

Proposition 7.1. Let α > 2 and γ > 2. There exists c > 0 such that, P-a.s. for all N large enough, we have Before proving the proposition, we need two lemmas.
Proof. In the regime α > 2 and γ > 2 we can use Deijfen et al. (2013b, Theorem 2.2) to infer that the node degrees have a variance which is bounded in N and so, in particular, they have a bounded mean. In fact, it is easy to see that both E[D G N ] and Var(D G N ) are dominated by their infinite counterparts on Z, since the random variables {½ {x↔y} : x, y ∈ T N } are positively correlated. The result (7.1) on the expectation of D G N immediately follows.
As for the variance, the first step is the simple observation that for some constant c > 0. Note that here we use the fact that the degrees have bounded variance. Let us look, for x ≠ y, at (7.3) The first double sum in the last expression is upper bounded by E[D x ]E[D y ] = ∑ z≠x P(x ↔ z) ∑ w≠y P(y ↔ w), and therefore with R x,y given by (7.5) We will now provide a bound for R x,y . Using the inequality P(A ∩ B) ≤ P(A) ∧ P(B) for any two events A and B and the fact that the weights have finite mean, we can see that for some c > 0. Analogously, using that for some c > 0. We can bound similarly the third term of (7.5). Again (3.9) entails that the last term of (7.5) is upper bounded by x − y −2−ε . This estimate, (7.6) and (7.7) plugged back into (7.5) yield R x,y ≤ c x − y −1−ε for some constant c > 0. This fact and (7.4) lead to for some c > 0, where we have used the fact that ∑ ∞ n=1 n −1−ε < ∞. We can conclude with (7.2) and (7.8) that, for some c 1 > 0, For our purposes, we need not only the mean and variance of the total degree, but also a concentration result.
Lemma 7.3. For α > 2, γ > 2 we have that, P-a.s. for all N large enough, Proof. By Chebyshev's inequality The statement is a consequence of Lemma 7.2 and the first Borel–Cantelli lemma.
Proof of Proposition 7.1. First of all, we recall the upper bound (Levin and Peres, 2017b, Remark 10.17), valid for any irreducible chain on a graph G, where t hit (G) = max x,y∈G E x [τ y ] is the mean hitting time of y for the chain starting at x. Using for example Levin and Peres (2017b, Proposition 10.7), one shows that the maximum hitting time for a graph with N vertices and M edges is at most of order MN. By Lemmas 7.2 and 7.3 we know that P-a.s. for all N large enough we have D G N ≤ 2N log N. Since D G N is proportional to the number of edges, we obtain that there exists c > 0 such that, P-a.s. for all N large enough, This, together with (7.10), concludes the argument.
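The hitting-time ingredient of the proof — the mean hitting time solves a linear system, and it is generically at most of order MN — can be verified exactly on a small example. The sketch below is our own: it solves h(x) = 1 + (1/deg x) Σ_{w∼x} h(w), h(target) = 0, by Gaussian elimination, and checks it on a cycle, where E_k[τ_0] = k(N − k) is classical.

```python
def hitting_times(adj, target):
    """E_x[tau_target] for SRW on an undirected graph (adjacency lists):
    solve (I - P) h = 1 restricted to the non-target nodes."""
    nodes = [v for v in adj if v != target]
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for v in nodes:
        i = idx[v]
        A[i][i] = 1.0
        for w in adj[v]:
            if w != target:
                A[i][idx[w]] -= 1.0 / len(adj[v])
    # Gaussian elimination with partial pivoting
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= m * A[c][k]
            b[r] -= m * b[c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][k] * h[k] for k in range(r + 1, n))) / A[r][r]
    return {v: h[idx[v]] for v in nodes}

# cycle of length 8: E_k[tau_0] = k*(N-k)
N = 8
adj = {v: [(v - 1) % N, (v + 1) % N] for v in range(N)}
h = hitting_times(adj, target=0)
M = sum(len(a) for a in adj.values()) // 2
```

Here the maximum hitting time is (N/2)² = 16, comfortably below a bound of order MN, which is the mechanism exploited together with (7.10).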
8.1. Lower bound on t mix (G N ) for the case α ∈ (1, 2), τ > 2 and the case τ ∈ (1, 2), γ ∈ (1, 2). We recall the desired result in the following proposition.

Proposition 8.1. Consider either the case α ∈ (1, 2) and τ > 2 or the case τ ∈ (1, 2) and γ ∈ (1, 2). Then it holds Proof. Consider the set S := {1, . . ., ⌊N/2⌋}. Using for example the approximation of sums by definite integrals, one can check that for some constant c > 0 depending on α and τ. Recall the definitions of the bottleneck ratio (3.2) and of the Cheeger constant Φ * in (3.3) and its relation with the mixing time in (3.4). Using (8.1) and Markov's inequality we obtain, with c > 0 a constant depending on α and τ that might change from line to line, For a moment, let us consider S N = {1, 2, . . ., N} as a segment rather than a torus, so that in particular the distance between x, y ∈ S N is just |x − y| rather than the torus distance x − y . For a graph G on S N , we say that a point x ∈ S N is a cut-point for G if there is no edge between a point in {1, 2, . . ., x − 1} and a point in {x + 1, . . ., N}. We say that x ∈ S N is a good cut-point if x − 1, x and x + 1 are all cut-points. Consider now a SFP random graph G N (S N ) on S N , that is, the model described in Section 2.1 with link probability given by equation (2.2) with |x − y| replacing x − y . In the following lemma we show that there is a positive density of good cut-points for N large.

Lemma 8.2. Consider a SFP random graph G N (S N ) on the segment S N . There exists c > 0 such that lim N→∞ P |{good cut-points in S N }| ≥ cN = 1 .
Proof. We want to make use of the ergodic theorem, so we start by investigating the infinite SFP random graph on Z. The infinite graph G N (Z) can be constructed under the measure µ := ⊗ x∈Z Law(W x ) ⊗ ⊗ x,y∈Z Law(U x,y ) and connecting all nearest neighbours as usual. The definitions of cut-point and of good cut-point on the infinite graph are the same as those given for the segment before the lemma. We compute, using Jensen's inequality, by the dominated convergence theorem.
We observe now that an instance of scale-free percolation on S N can be obtained as follows: construct G N (Z) and then restrict it to the vertices in S N . With this construction, if a point is a good cut-point in the infinite graph, then it is also one in the restriction to S N . The lemma then follows automatically from (8.4).
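The positive density of good cut-points in this regime can be observed in a quick simulation. The snippet below is a hedged toy version: the sampler, the constant `lam` and the parameter values (α = 3, τ = 2.5, so γ = α(τ − 1) > 2) are illustrative, and the segment convention matches the definition above (only edges of length at least 2 can span a point).

```python
import random

def good_cut_points(N, alpha, tau, lam=0.5, seed=3):
    """Toy SFP on the segment {0,...,N-1}: count good cut-points,
    i.e. points x such that x-1, x and x+1 are all cut-points
    (no edge of the graph jumps strictly over them)."""
    rng = random.Random(seed)
    W = [rng.random() ** (-1.0 / (tau - 1)) for _ in range(N)]
    spanned = [False] * N              # spanned[x]: some edge (u,v), u < x < v
    for u in range(N):
        for v in range(u + 2, N):      # nearest-neighbour edges span nothing
            if rng.random() < min(1.0, lam * W[u] * W[v] / (v - u) ** alpha):
                for x in range(u + 1, v):
                    spanned[x] = True
    cut = [not s for s in spanned]
    return sum(1 for x in range(2, N - 2) if cut[x - 1] and cut[x] and cut[x + 1])

# alpha > 2 and gamma = alpha*(tau-1) > 2: good cut-points appear with
# positive density, in line with Lemma 8.2
g = good_cut_points(N=300, alpha=3.0, tau=2.5)
```

In the complementary regimes (α < 2, or heavy enough weight tails) long edges proliferate and the same count collapses, which is consistent with the phase diagram.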
Remark 8.3.From Lemma 8.2 it is possible to deduce that, in the regime where α > 2 and γ > 2, there exists c > 0 such that lim that is, the diameter of G N is linear with probability tending to 1.
We are now ready to prove the lower bound for the regime under consideration.
Proposition 8.4. Let α > 2 and γ > 2. There exists c > 0 such that lim Sketch of the proof. The proof follows the argument in Benjamini et al. (2008, Proposition 4.1), and for completeness we sketch here the main points. For simplicity we assume N to be divisible by 8. We partition T N into three sets: . ., 8. By Lemma 8.2 we can assume that there exists a constant c > 0 for which, with probability arbitrarily close to 1, G N restricted to the segment K i contains at least cN good cut-points. We can also assume without loss of generality (rotating the cycle if necessary) that π(A) ≥ π(B ∪ C) and π(B) ≥ π(C). This entails that π(A ∪ B) ≥ 3/4. For x ∉ A ∪ B, let T be the hitting time of A ∪ B for the simple random walk (X n ) n∈N 0 on G N starting at x. We denote its law (given the realization of the graph G N ) by P x and the corresponding expectation by E x . For n ≥ t mix (G N ) one obtains 3 4 Hence, for any x ∉ A ∪ B and n ≥ t mix (G N ), one has P x (T ≤ n) ≥ P x (X n ∈ A ∪ B) ≥ 1/4. It follows that for any real s ≥ 0 one has P x (T > s) ≤ (3/4) s/t mix (G N )−1 and thus there exists c > 0 independent of N such that (8.5) Call u := 7N/8. Using the language of electrical networks (Levin and Peres, 2017b, Chapter 9), we ground the set A ∪ B and set a potential at u so that there is a unit of current flowing from A ∪ B to u. Let x 1 , . . ., x cN be the good cut-points on the side of u carrying at least half of the current. In particular, each of them will be crossed by at least half the current. In turn, using the relation between voltage and resistance and the fact that the number of cut edges (resistors of resistance 1 connected in series) is at least the number of good cut-points, the voltage v(x i ) at x i is at least i/2. Hence for some c 1 > 0.
By (8.5) we derive that there exists a c > 0 such that t mix ≥ cN 2 , thus giving the desired result.

which immediately gives the desired formula. For (4.1) we first bound (recall that nearest neighbours are always connected) where we have split the integral in two and have upper bounded the integrand of the first part with (3.1) and the integrand of the second part by 1. A simple calculation shows that, for a fixed y, both summands in the last brackets are bounded by a constant times 1 − y −γ W τ−1 1 , and summing over all y's gives (4.1).
For (iv), formulas (4.3) and (4.4) can be proved in a very similar way, so we just show the first one. Abbreviate U N (x) := E[D x | W x ] 1/2 log N and use a union bound to get We observe that D 1 is the sum over x = 2, . . ., N of the variables Z N,x := ½ {1↔x} , which under P( • | W 1 ) are just N − 1 independent Bernoullis. Since by (iii) for some 0 < ε < 1 − γ, we can apply (A.4) in Proposition A.3 with A N = 1 to bound (B.1). We obtain which is summable in N and allows us to use the first Borel–Cantelli lemma to conclude.
For the second part of (v) we just integrate (4.2): and (4.6) follows. We go back to the first equation of (v). Using (iv) yields, P-almost surely, for N large enough, We would like to invoke Proposition A.3 with Z N,x = E[D x | W x ], which are mutually independent under P, and A N = N. Condition (A.3) is satisfied since, using item (ii), for some 0 which is larger than A 2 N (log N) 2 = N 2 (log N) 2 . Hence we get that, for N large enough, With (4.2) at hand, we can calculate As before we bound Putting this last result back into (B.4) and combining it together with (B.3) into (B.2) gives an upper bound of the desired order for D G N , for N large enough. A lower bound can be obtained in a completely similar way, yielding (4.5).
B.2. Proof of Proposition 4.6. Recall the notation Q = (log N) τ−1 . For item (i) we first compute, for j = 1, . . ., j max − 1, and analogously . Now we claim that, for all j = 1, . . ., j max − 1 and for all A ∈ {V j , V j c , V + j , V + for all choices of j and A, as can be easily checked. It follows that P(For some j, ∃A ∈ {V j , V j c , V + j , V + which is summable in N, and by the first Borel–Cantelli lemma (B.5) follows. To conclude, we put together (B.5) and the formulas for the expectations of V j , V j c , V + j , V + j c . The case j = j max can be treated in the very same way.
For item (ii), we just prove that D V j > 2D V + j for any j = 1, . . ., j max − 1, since the proof for D V j c , D V + j c is very similar. From (4.4) we know that almost surely for N sufficiently large. Note that we can neglect the error E[D x |W x ] 1/2 appearing in (4.4) thanks to (4.2). For the first expression we use Proposition A.3 with Z N,x = E[D x |W x ]½ x∈V j . Using (4.2) one can check that, for N large enough, Z N,x ≤ A N := N 1−γ/2 Q j . Condition (A.3) is verified since which is larger than A 2 N log N for all j < j max . Since = cN 1−γ (N − 1)Q −2 log log N, (B.8) which is much larger than ∑ N x=1 E[Z 2 N,x ], (A.5) then implies that there exists some constant c 1 > 0 such that, P-a.s. for all N large enough, for some c 2 > 0, P-a.s. for N large enough. This together with (B.9) concludes the argument.

E83C18000100006. MS acknowledges the hospitality of TU Delft where part of this work was carried out.
FIGURE 1. Phase diagram of the mixing time of the simple random walk on the SFP random graph on the one-dimensional torus of size N.The symbols and ≃ indicate an (in)equality up to a slowly varying function in N.
(6.28) Case M ∈ [L(log N) c , N 1−2ε ): in this case M ∨ L = M and we divide the possible values of M into intervals. Let ℓ min := c log 2 log N and ℓ max := (2 − γ − 3ε) log 2 N (here and below

j c }, P-a.s. for N large enough, |A| − E[|A|] ≤ E[|A|] 1/2 log N. (B.5) Writing |A| := ∑ N x=1 ½ x∈A , we apply Proposition A.3 with Z N,x := ½ x∈A and A N = 1] = E[|A|] ≥ (log N) 2

j c } : |A − E[|A|]| ≥ E[|A|] 1/2 log N) ≤ 4j max e −c(log N) 2 log N. (B.9) Analogously, for the second expression in (B.7) we use Proposition A.3 by posing Z N,x = E[D x |W x ]½ x∈V + j . Condition (A.3) is again verified since ∑ E[Z 2 N,x ] and A 2 N log N have the same orders as before. So, by (A.5) and calculating E[D V + j ] as in (B.8),

x,y ), where U x,y are i.i.d. uniform random variables on [0, 1]. To do so, first sample the weight W x of each point in Z, then connect each couple of points x and y with |x − y| > 1 with an edge if U x,y ≤ P(x ↔ y | W x , W y ) (cf. (2.2) with | • | replacing • ).

(B.3) We are left to deal with the last summand in (B.2). We apply once more Proposition A.3, this time with Z N,x = E[D x | W x ] 1/2 and A N = N 1/2 . Condition (A.3) is satisfied since ∑ x∈T N E[Z 2 N,x ] = E[D G N ], which is larger than A 2 N (log N) 2 = N(log N) 2 by (4.6). Therefore