Distances in scale-free networks at criticality

Scale-free networks with moderate edge dependence experience a phase transition between ultrasmall and small world behaviour when the power law exponent passes the critical value of three. Moreover, there are laws of large numbers for the graph distance of two randomly chosen vertices in the giant component. When the degree distribution follows a pure power law these show the same asymptotic distances of $\frac{\log N}{\log\log N}$ at the critical value three, but in the ultrasmall regime reveal a difference of a factor two between the most-studied rank-one and preferential attachment model classes. In this paper we identify the critical window where this factor emerges. We look at models from both classes when the asymptotic proportion of vertices with degree at least~$k$ scales like $k^{-2} (\log k)^{2\alpha + o(1)}$ and show that for preferential attachment networks the typical distance is $\big(\frac{1}{1+\alpha}+o(1)\big)\frac{\log N}{\log\log N}$ in probability as the number~$N$ of vertices goes to infinity. By contrast the typical distance in a rank one model with the same asymptotic degree sequence is $\big(\frac{1}{1+2\alpha}+o(1)\big)\frac{\log N}{\log\log N}.$ As $\alpha\to\infty$ we see the emergence of a factor two between the length of shortest paths as we approach the ultrasmall regime.


Background and Motivation
Scale-free networks are characterised by the fact that, as the network size goes to infinity, the asymptotic proportion of nodes with degree at least $k$ behaves like $k^{-\tau+o(1)}$ for some power law exponent $\tau$. There are a number of mathematical models for scale-free networks. In the class of rank-one models the probability that two vertices are directly connected is asymptotically equivalent to the product of suitably defined weights $w_v$ associated to the vertices $v$ in a network $G_N$ with vertex set $[N]:=\{1,\dots,N\}$. Examples of rank-one models are the Chung-Lu model, in which two vertices $v,w$ are connected independently with probability $\min(w_vw_w/L_N,1)$ for $L_N$ the total weight; the Norros-Reittu model, in which the number of edges between $v$ and $w$ is Poisson distributed with parameter $w_vw_w/L_N$, where $(w_i)_{i=1}^N$ is a deterministic or random sequence of weights; and the configuration model, in which each vertex is assigned a degree chosen randomly from a given degree distribution and the weights are the degrees themselves.
A popular alternative to rank-one models are the preferential attachment models introduced by Barabási and Albert. The original Barabási-Albert model (see Bollobás et al. [BRST01] for a rigorous definition) is a dynamical network model in which new vertices connect to a fixed number of existing vertices with a probability proportional to their degree. In this model the power law exponent is always $\tau=3$. Recent variants introduced by van der Hofstad et al. [DHH10] and Dereich and Mörters [DM09] allow the connection probability to be proportional to a function of the degree and can therefore generate networks with variable power law exponent $\tau>2$. Physicists have predicted that all these models of scale-free networks with the same power law exponent share essentially the same global topology, see for example [AB02]. Indeed, all models listed above have been shown to experience a phase transition at power law exponent three. If $\tau>3$, randomly chosen vertices in the largest connected component have a distance of asymptotic order logarithmic in the network size, whereas for $2<\tau<3$ the distance behaves like an iterated logarithm of the network size; this phase is called the ultrasmall regime.
At the critical value $\tau=3$ a fine analysis has been performed by Bollobás and Riordan in their seminal paper [BR04]. They show that two randomly chosen vertices in the original Barabási-Albert model have graph distance $(1+o(1))\log N/\log\log N$. The same result holds for a variety of other models of scale-free networks when the asymptotic proportion of vertices with degree at least $k$ scales precisely like $k^{-2}$. Examples include the rank-one models of Chung and Lu, of Norros and Reittu, inhomogeneous random graphs with a suitable choice of kernel, and the configuration model.
It was therefore believed that distances in preferential attachment models behave similarly to distances in the configuration model with the same tail of the asymptotic degree distribution, see for example [HHZ07]. It thus came as a surprise when a finer analysis in [DMM12] showed that in the ultrasmall regime, i.e. when the power law exponent is in the range $2<\tau<3$, distances in preferential attachment models are twice as long as in the rank-one models above when they have the same tail of the degree distribution. This is due to the fact that two vertices of high degree in the preferential attachment model are much more likely to be connected by a path of length two than by a single edge as in the rank-one models.
It is the aim of the present paper to study the emergence of this factor two at the critical value $\tau=3$. Does the factor occur at a sharp threshold, and if so, where is this threshold? Or is there a smooth transition between the factors one and two in a suitably chosen critical window? To answer these questions we need to consider models whose asymptotic degree distributions carry logarithmic corrections in the tails, which requires us to look at preferential attachment models with nonlinear attachment rules, an area essentially unexplored in the rigorous literature. We look at preferential attachment models in the framework of [DM09, DM13]. This allows the attachment probabilities to be chosen as concave functions of the vertex degree, giving enough flexibility to generate varying asymptotic degree distributions. The critical window for our study emerges when the asymptotic proportion of nodes with degree at least $k$ scales like $k^{-2}(\log k)^{2\alpha+o(1)}$, for some $\alpha>0$. We compare our results on preferential attachment networks with those on the Norros-Reittu model with i.i.d. weights whose degree sequence has the same tail behaviour. Our main result shows that typical distances in the preferential attachment networks are bigger by an asymptotic factor of $(1+2\alpha)/(1+\alpha)$, which converges to two as $\alpha\uparrow\infty$.

Statement of the main results
Our main result concerns the variant of the preferential attachment model introduced in [DM09], which has the advantage over other variants of remaining tractable even when the connection probability is a nonlinear function of the degree of the older vertex. To define the model precisely, fix a concave function $f\colon\mathbb{N}_0\to(0,\infty)$, which is called the attachment rule, and define a sequence of random graphs $(G_N)_{N\in\mathbb{N}}$ in the following way:
(1) The initial graph $G_1$ is a single vertex labelled 1.
(2) Given $G_N$, the graph $G_{N+1}$ is obtained by
• adding a new vertex labelled $N+1$;
• independently for any vertex with label $m\le N$ inserting an edge between this vertex and the new vertex with probability $f(Z[m,N])/(N+1)$, where $Z[m,N]:=\sum_{i=m+1}^N \mathbb{1}\{m\leftrightarrow i\}$ is the number of younger vertices connected to $m$ in $G_N$.
If we orient all edges from the younger to the older vertex, we can interpret $Z[m,N]$ as the indegree of the vertex labelled $m$ in the oriented graph derived from $G_N$. Note however that throughout this paper we consider the graphs $G_N$ as unoriented, and the notions of connectivity and graph distance $d_N$ taken in $G_N$ are with reference to unoriented edges. For any potential edge $(v,w)\in[N]^2$ with $v<w$ we write $v\leftrightarrow w$ if we wish to indicate that $(v,w)$ is contained in $G_N$. When it is convenient to stress the original orientation we write $v\leftarrow w$ or $w\rightarrow v$.
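For concreteness, the growth dynamics above can be sketched in code. This is an illustrative simulation only: it assumes the [DM09]-type attachment probability $f(Z[m,N])/(N+1)$, and the rule `f` chosen below is a hypothetical affine example, not the critical-window rule (2.1).

```python
import random

def grow_pa_graph(N, f, seed=0):
    """Sketch of the sublinear preferential attachment dynamics.

    Assumption: the new vertex n+1 connects to each old vertex m
    independently with probability f(Z[m, n]) / (n + 1), where Z[m, n]
    is the current indegree of m. Edges are stored oriented from the
    younger to the older vertex.
    """
    rng = random.Random(seed)
    Z = [0] * (N + 1)            # Z[m] = current indegree of vertex m (1-based)
    edges = []
    for n in range(1, N):        # grow G_n into G_{n+1}
        new = n + 1
        for m in range(1, n + 1):
            if rng.random() < f(Z[m]) / (n + 1):
                edges.append((new, m))   # younger -> older
                Z[m] += 1
    return edges, Z

# example: a hypothetical affine rule f(k) = k/2 + 1/2 (slope gamma = 1/2)
f = lambda k: 0.5 * k + 0.5
edges, Z = grow_pa_graph(200, f, seed=42)
```

Since $f(Z[m,n])/(n+1)\le(0.5n+0.5)/(n+1)\le1/2$ here, the attachment probabilities are always valid.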
The following theorem identifies the class of attachment rules which produces typical distances of order $\log N/\log\log N$. It is the main result of this paper.
Theorem 1. Let $(G_N)_{N\in\mathbb{N}}$ be the sublinear preferential attachment model obtained from a concave attachment rule $f$ satisfying (2.1) for some $\alpha>0$. Consider two vertices $U,V$ chosen independently and uniformly at random from the largest connected component of $G_N$. Then
$$d_N(U,V)=\Big(\frac{1}{1+\alpha}+o(1)\Big)\frac{\log N}{\log\log N}$$
with high probability as $N\to\infty$.

The lower bound in Theorem 1 uses a standard path counting argument and first moment bounds. The upper bound is much more difficult to obtain: we use a rather involved second moment argument for the size of the neighbourhood of a typical vertex and combine it with a result concerning a dense subgraph among the oldest vertices, using sprinkling-type arguments.
It is shown in [DM09] that the asymptotic degree distribution in the preferential attachment graph $G_N$ with the attachment rule given in (2.1) satisfies the tail behaviour (2.2). This can be seen as follows. According to [DM09, Theorem 1.1], the asymptotic indegree distribution $(\mu_j)_{j\ge0}$ in $G_N$ is explicitly given by (2.3), whereas the outdegree is asymptotically Poisson distributed. Choosing an affine attachment rule $f(k)=\gamma k+\beta$, one obtains from (2.3), by use of Stirling's formula, a power law with exponent $\tau=1+1/\gamma$. This illustrates that the network is a small world for $\gamma<1/2$ and ultrasmall for $\gamma>1/2$, since for affine $f$ the power law tails of the indegree distribution dominate the exponential tails of the outdegree distribution. Fixing $\gamma=1/2$ and adding a logarithmically decaying perturbation to the linear factor, as in (2.1),
yields, using the Taylor expansion of $\log(\cdot)$, an expansion of $\log\mu_j$ for large $j\in\mathbb{N}$. Note that the latter two terms of this expansion are summable in $j$ whereas the first two are not. Hence (2.3) implies the claimed tail behaviour of $(\mu_j)_{j\ge0}$. Noting that the left hand side of (2.2) converges to $\sum_{j=k}^\infty\mu_j$, one obtains the asserted scaling. The same derivation, together with a somewhat tedious but straightforward analysis of the lower order terms appearing, yields (2.2) for the more general shapes of $f$ given in (2.1).
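Since the displayed formulas are referenced only by number above, the following sketch records the affine-case computation, under the assumption that (2.3) has the explicit product form of [DM09, Theorem 1.1]:

```latex
% Assumed explicit form of (2.3), cf. [DM09, Theorem 1.1]:
\mu_j \;=\; \frac{1}{1+f(j)}\prod_{l=0}^{j-1}\frac{f(l)}{1+f(l)}
      \;=\; \frac{1}{1+f(j)}\prod_{l=0}^{j-1}\Bigl(1+\frac{1}{f(l)}\Bigr)^{-1}.
% For an affine rule f(k)=\gamma k+\beta one has
% \log(1+1/f(l)) = \frac{1}{\gamma l}+O(l^{-2}), so
\log\mu_j \;=\; -\log\bigl(1+f(j)\bigr)
              -\sum_{l=1}^{j-1}\Bigl(\frac{1}{\gamma l}+O(l^{-2})\Bigr)
          \;=\; -\Bigl(1+\frac{1}{\gamma}\Bigr)\log j + O(1),
% hence \mu_j = j^{-(1+1/\gamma)+o(1)} and
\sum_{i\ge k}\mu_i \;=\; k^{-1/\gamma+o(1)},
% i.e. the power-law exponent is \tau = 1+1/\gamma, which is critical
% (\tau = 3, tail k^{-2}) exactly at \gamma = 1/2.
```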
The calculation of the last paragraph also explains our particular choice of attachment rule. At the critical point $\tau=3$ (or $\gamma=1/2$), the scale of the typical distances is rather sensitive to the parameters of the network model under consideration. We limit ourselves in Theorem 1 to those $f$ which change precisely the factor in front of the $\log N/\log\log N$ term obtained in [BR04], to illustrate the emergence of the characteristic factor two that separates distances in preferential attachment models from distances in rank-one models in the ultrasmall regime. Note that in [BR04] the authors rely on the equivalence of certain instances of the Barabási-Albert model to another combinatorial model, making it very challenging to adapt their arguments to the regime we are interested in.
In principle, it is possible to obtain distances on a variety of scales between $\log N$ and $\log\log N$ other than $\log N/\log\log N$ at $\tau=3$. One may be able to reverse engineer the correct attachment function and then give a rigorous proof along the same lines as ours. We have refrained from doing so, since many of our calculations use explicit estimates and are not straightforwardly generalisable. A formula relating the typical distance explicitly to $f$ or to the degree sequence $(\mu_k)_{k\ge0}$, as can be given for rank-one models, see e.g. [CL03], seems presently out of reach for nonlinear preferential attachment models.
We contrast the result of Theorem 1 on typical distances in the preferential attachment model with a result on typical distances in the Norros-Reittu model with an i.i.d. weight sequence parametrised to obtain the same tail behaviour of the empirical degree distribution.
We choose this model for definiteness but the result extends easily to other rank-one models, such as the Chung-Lu model, and to deterministic weight sequences with similar asymptotics.
To define the model, given a distribution on the positive reals, we generate a sequence $W=(W_i)_{i=1}^\infty$ of i.i.d. random variables with this distribution. Let $L_N=\sum_{n=1}^N W_n$ denote the total weight of the vertices in $[N]$. For fixed $N$ and given the weights $W_1,\dots,W_N$ we construct the random graph $H_N=H_N(W)$ with vertex set $[N]$ as follows:
• Between any two distinct vertices $v,w\in[N]$ the number of edges is Poisson distributed with parameter $W_vW_w/L_N$, independently of all other edges.
• Parallel edges are merged to obtain a simple graph.
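The two construction steps can be sketched as follows. Merging the Poisson multi-edges is equivalent to drawing each edge independently with probability $1-e^{-W_vW_w/L_N}$, which is what the sketch samples directly. The weight distribution used in the example is a hypothetical pure power law with tail $\mathbb{P}(W\ge w)=w^{-2}$; the logarithmic correction of (2.4) is omitted for simplicity.

```python
import math
import random

def norros_reittu(N, weights, seed=0):
    """Sketch of the Norros-Reittu graph on [N] = {0, ..., N-1}.

    The multigraph has Poisson(W_v * W_w / L_N) edges between v and w;
    after merging parallel edges, the simple edge {v, w} is present
    independently with probability 1 - exp(-W_v * W_w / L_N).
    """
    rng = random.Random(seed)
    L = sum(weights)
    edges = set()
    for v in range(N):
        for w in range(v + 1, N):
            p = 1.0 - math.exp(-weights[v] * weights[w] / L)
            if rng.random() < p:
                edges.add((v, w))
    return edges

# i.i.d. weights with tail P(W >= w) = w^{-2}, w >= 1, via inverse transform
rng = random.Random(1)
N = 300
weights = [(1.0 - rng.random()) ** -0.5 for _ in range(N)]
edges = norros_reittu(N, weights, seed=2)
```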
Theorem 2. Let $(H_N)_{N\in\mathbb{N}}$ denote the Norros-Reittu model with weight distribution satisfying (2.4) for some $\alpha>0$. Consider two vertices $U,V$ chosen independently and uniformly at random from the largest connected component of $H_N$. Then
$$d_N(U,V)=\Big(\frac{1}{1+2\alpha}+o(1)\Big)\frac{\log N}{\log\log N}$$
with high probability as $N\to\infty$.
We observe that the characteristic difference in the typical distances between preferential attachment models and rank-one models in the ultrasmall regime does not occur suddenly at the phase transition, but arises gradually in a critical window. For networks with empirical degree distributions decaying as in (2.2) there is a factor of $(1+2\alpha)/(1+\alpha)$ between the typical distances in the two types of networks. This factor converges to two as we approach the ultrasmall regime by letting $\alpha\uparrow\infty$, and converges to one as we approach the linear case by letting $\alpha\downarrow0$. A heuristic explanation for this transition is that in the preferential attachment model in the critical window the probabilities that two vertices of high indegree are connected directly or via a young connector vertex are on the same scale. Hence the asymptotic proportion of the transitions between vertices on a typical short path that use a connector is a constant strictly between zero and one. This constant turns out to be $\alpha/(1+\alpha)$, and this yields a factor $1+\alpha/(1+\alpha)$ by which the length of shortest paths in the preferential attachment model exceeds that in the rank-one models.
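The arithmetic behind this heuristic is elementary. Comparing the constants of Theorems 1 and 2,

```latex
\frac{\text{PA distance constant}}{\text{rank-one distance constant}}
  \;=\; \frac{1/(1+\alpha)}{1/(1+2\alpha)}
  \;=\; \frac{1+2\alpha}{1+\alpha}
  \;=\; 1+\frac{\alpha}{1+\alpha},
\qquad
\lim_{\alpha\to\infty}\frac{1+2\alpha}{1+\alpha}=2,
\qquad
\lim_{\alpha\downarrow 0}\frac{1+2\alpha}{1+\alpha}=1.
```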
Qualitatively different behaviour for the preferential attachment and rank-one model class can also be observed when studying robustness of the giant component under targeted attack, see Eckhoff and Mörters [EM13], or in the behaviour of the size of the giant component near criticality, see forthcoming work of Eckhoff, Mörters and Ortgiese [EMO16].

Proof of lower bounds - preferential attachment
Lower bounds for average distances are proved using a first moment method. To set it up, Section 3.1 provides bounds for expected degrees in the preferential attachment model, which are used in Section 3.2 to prove the lower bound in Theorem 1.
Remark on notation. In all subsequent sections a subscript number on a constant refers to the place where it is defined; e.g. $C_{1.23}$ is the constant introduced in Lemma 1.23, $C_{(1.24)}$ the same constant as in equation (1.24), etc.
3.1. Degree asymptotics for preferential attachment. It follows immediately from the definition of the preferential attachment graph that the network is entirely represented by the collection $(Z[1,n])_{n\ge1},(Z[2,n])_{n\ge2},\dots$ of independent Markov chains, which we refer to as degree evolutions. In this section we derive lower and upper bounds for $\mathbb{E}f(Z[m,n])$. For conciseness in the formulation of later results, we allow $(Z[m,n])_{n\ge m}$ to start in any integer $k\in\mathbb{N}$ and denote the resulting distribution by $\mathbb{P}_k$ and its expectation by $\mathbb{E}_k$. Lemma 3.1 below introduces the score $\xi(m,n)$ together with two normalised processes $X$ and $Y$ derived from the degree evolution.
Then $X=(X(n))_{n\ge m}$ and $Y=(Y(n))_{n\ge m}$ are submartingales. If $f$ is affine, then they are martingales.

Proof.
The martingale property of $X$ for an affine attachment rule $f(x)=\frac12x+\beta$ follows immediately from (3.1). The corresponding calculation for $Y$ is performed in complete analogy to (3.1); division by $(1+n^{-1})n/m=(n+1)/m$ now yields the martingale property. For strictly concave $f$ we have $\Delta f(i)\ge\frac12$ for all $i\in\mathbb{N}$, and the equalities in the above calculations turn into inequalities, yielding the submartingale property.
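The one-step drift computation behind (3.1) can be sketched as follows, assuming, as in [DM09]-type models, that the evolution jumps by one with probability $f(Z[m,n])/(n+1)$; the exact normalisation of $X$ and $Y$ is not reproduced here.

```latex
% A jump increases f(Z) by \Delta f(Z) and occurs with probability
% f(Z[m,n])/(n+1), so
\mathbb{E}\bigl[f(Z[m,n+1])\,\big|\,Z[m,n]\bigr]
  \;=\; f(Z[m,n]) + \Delta f(Z[m,n])\,\frac{f(Z[m,n])}{n+1}.
% For the affine rule f(x)=\tfrac12 x+\beta we have \Delta f\equiv\tfrac12, hence
\mathbb{E}\bigl[f(Z[m,n+1])\,\big|\,Z[m,n]\bigr]
  \;=\; f(Z[m,n])\Bigl(1+\frac{1}{2(n+1)}\Bigr),
% so f(Z[m,n]) divided by \prod_{i=m}^{n-1}\bigl(1+\tfrac{1}{2(i+1)}\bigr)
% is a martingale; for \Delta f\ge\tfrac12 the same ratio is a submartingale.
```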
where $\delta(m)$ can be chosen such that $\lim_{m\to\infty}\delta(m)=0$. In the affine case $\xi(m,n)$ can be computed explicitly; in particular, the score $\xi(m,N)$ of a vertex $m$ is asymptotically proportional to its expected degree at time $N$. For the deviation from the affine case we introduce the notation $\psi_k$ in (3.3). Determining the magnitude of $\psi_k$ is the first step towards the proof of Theorem 1. As we will see later, it suffices to study the special case (3.4) with $\alpha\ge0$ and $\beta=f(0)>0$.
Proposition 3.2 (First and second moment upper bound). Let $f$ be an attachment rule of the form (3.4). Then, for any $k\in\mathbb{N}$, there exist constants $C=C(k)$, $C'=C'(k)$, dependent only on $\alpha$ and $\beta$, such that the stated bounds hold for all pairs $m,n\in\mathbb{N}$ with $n\ge m$.

Proposition 3.3 (First moment lower bound). Let $f$ be as in (3.4). Then there exists a constant $c>0$, dependent only on $\alpha$ and $\beta$, such that the stated bound holds for all pairs $m,n\in\mathbb{N}$ with $n\ge m$ and any $k\in\mathbb{N}\cup\{0\}$.

We note that the two propositions together imply that there are constants $0<c\le C$, depending only on $\alpha$ and $k$, such that (3.5) holds. To prove Proposition 3.2 and Proposition 3.3 we need three auxiliary statements concerning the properties of the attachment rule and the behaviour of the degree evolutions $(Z[m,n])_{n\ge m}$. In particular, in [DM09] a scaling function $\Phi$ is introduced to linearise the degree evolutions with respect to logarithmic time. As a byproduct of [DM09, Lemma 2.1], one obtains useful bounds for the degree evolutions.
Lemma 3.4. Let $f$ be a concave attachment rule and let $g$ be given as above. Then $g$ is concave for all sufficiently large $x$.

Proof. By interpolation we can assume that $f$ is twice differentiable on $(0,\infty)$ with existing right derivative in $0$. Let $e$ denote the inverse of $f$, which is a well defined convex function, since $f$ is increasing and concave. The second derivative of $g$ is given by (3.6). To see that $g''(x)\le0$ for large $x$, we note that $e''(x)\ge0$ and $e'(0)\le e'(x)(\lim_{k\to\infty}\Delta f(k))^{-1}$. As $e(x)$ is bounded below by $x-1$, the numerator in (3.6) is nonpositive for sufficiently large $x$.
Lemma 3.5. Let $f$ satisfy condition (3.4) with $\alpha>0$ and define the scaling function $\Phi$ accordingly. Then, for fixed $m\in\mathbb{N}$, the process defined in [DM09, Lemma 2.1] in terms of $\Phi(Z[m,\cdot])$ is a martingale.

Proof. This is the first statement of [DM09, Lemma 2.1].

Lemma 3.6. Let $f$ and $\Phi$ be as in Lemma 3.5. Then,
(i) the linear interpolation of the inverse of $\Phi$ exists and is strictly monotone; in particular, the stated monotone bound holds for $x\ge1/f(0)$ and $k\in\mathbb{N}$;
(ii) there are constants $c,C>0$, depending only on $f$, such that the stated two-sided bound holds for all $x\in\mathbb{N}$, where $\log_+y=\log(y\vee1)$ and $\log\log_+y=\log\log(y\vee e)$, $y\in\mathbb{R}$.
Proof. For (i) note that the attachment rule $f$ is positive and strictly increasing, which implies that $\Delta\Phi=1/f>0$ is strictly decreasing. Thus $\Phi$ is concave and strictly increasing, hence its inverse is well defined, convex and strictly increasing. The claimed monotonicity is inherited by the linear interpolation. To show (ii), we note that $\Phi(x)\ge1/f(0)$ is true for any $x\in\mathbb{N}$, and the statement follows by summation of the increments $\Delta\Phi=1/f$.
Proof of Proposition 3.2. We begin with the first moment and note that, for $n\ge m$, conditioning on $Z[m,n]$ yields a one-step recursion.
Taking expectations we obtain the recursion (3.7). Note that for sufficiently large $i$, $\log i\ge\log f(i)$, and hence (3.8) holds. We may thus fix $i_0$ such that $f(i)>e^2$ and (3.8) hold for all $i\ge i_0$. For $k\ge i_0$ the recursion splits into two terms; the function $x\mapsto x/\log x$ is concave on $(e^2,\infty)$, and we apply Jensen's inequality to the second term in this sum.
Applying this bound to the right hand side of (3.7) and dividing, we can use the lower bound in (3.2) to bound the denominator of the last term from below by $1\vee\log(n/m)$, and arrive at (3.10). Iterating both sides of (3.10) in $n$, and using the inequality $1+x\le e^x$, implies (3.11). The first expression in the exponent of (3.11) is bounded by some constant $D$; to handle the second expression in the exponent we observe that it is bounded by an absolute constant $C'$. Applying these estimates to (3.11), we arrive at the first moment bound. It remains to deduce the bound for the second moment. We argue as for the first moment: conditioning as in the derivation of (3.7) yields a similar recursion for the function $f(\cdot)^2$ in terms of $f(\cdot)^2$ itself and the differences $\Delta f(\cdot)^2:=\Delta(f(\cdot)^2)$. Since $f$ is nondecreasing, we find that $\Delta f(k)^2\le2f(k+1)\Delta f(k)$. The resulting function $E(m,n)$ can be bounded in the same fashion as the first moment, and the second moment bound follows.
Proof of Proposition 3.3. By monotonicity, we only need to prove the lower bound for $k=0$. We begin with the observation that the concavity of $f$ implies the bound (3.12). To obtain a lower bound on $Z[m,n]$, we represent it via the scaling function $\Phi$; using the concavity of $\Phi$, Jensen's inequality, and the upper bound on $\Phi$ from Lemma 3.6, we obtain a lower bound involving some small constant $d>0$. Combining this inequality with (3.12), the expectation on the right can be bounded below by the expectation in the affine case, for which a lower bound is implicit in (3.2). For all sufficiently large $n>m$ we obtain the claimed bound with some $c>0$, and a further adjustment of the constant, which only depends on the value $f(0)$, yields the statement of the proposition.
We close this section with two very intuitive stochastic domination results from [DM13] which are instrumental in the proof of Theorem 1.
for all $l\ge m$. That is, the unconditioned process started in $k+i$ stochastically dominates the process started at $k$ and conditioned to have jumps at times $n_1,\dots,n_i$.

Proof. The case $i=1$ is the original statement [DM13, Lemma 2.10] and is proved there. The generalisation to $i=2,3,\dots$ is obtained by a straightforward induction argument.

3.2. Lower bounds for distances. The first moment estimates of the previous section now yield lower bounds on the typical distances in a straightforward manner, under the assumption of bounded correlation for edges along any self-avoiding path.
Lemma 3.9 (First order lower bound on distances). Let $G_N$ be a random graph with vertex set $[N]$ and assume that there are $\kappa_N\ge0$ and $\Psi_N\ge0$ such that, for any self-avoiding path $P=(v_0,\dots,v_l)$, the correlation bound (3.13) and the edge probability bound (3.14) hold, and the $\liminf$ condition (3.15) is satisfied. Then, for uniformly chosen vertices $U,V\in G_N$, the stated lower bound on $d_N(U,V)$ holds with high probability.

Proof. We first observe that for any positive sequence $(a_i)_{i=0}^\infty$ satisfying $a_{i+1}/a_i\ge1+\delta$, for all $i\ge0$ and some fixed $\delta>0$, we can find a constant $C>0$ with $\sum_{i=0}^l a_i\le Ca_l$, see (3.16). Let $P=(v_0,\dots,v_l)$ be self-avoiding. Assumptions (3.13) and (3.14) imply a product bound on $\mathbb{P}(P\subset G_N)$. For $v,w\in[N]$ and $\mathcal{P}_l(v,w)$ denoting the set of self-avoiding paths of length $l$ from $v$ to $w$, we sum this bound over $\mathcal{P}_l(v,w)$. By (3.15), the terms in the last sum grow at least exponentially in $l$ for all sufficiently large $N$, so using (3.16) we infer the existence of a constant $C>0$, independent of $N$, such that (3.17) holds. For any $\varepsilon\in(0,1)$, the probability that one of the vertices $U,V$ is smaller than $\varepsilon N/3$ is bounded by $2\varepsilon/3$, and using (3.17) on the complement of this event completes the proof.
The lower bounds on the distances in Theorem 1 can now be obtained by verifying the assumptions of Lemma 3.9.

Proposition 3.10 (Lower bounds for PA). The preferential attachment model $G_N$ with attachment rule $f$ of the form (2.1) satisfies
$$\lim_{N\to\infty}\mathbb{P}\Big(d_N(U,V)\le(1-\delta)\tfrac{1}{1+\alpha}\tfrac{\log N}{\log\log N}\Big)=0$$
for every $\delta>0$ and independently and uniformly chosen vertices $U,V\in G_N$.
Proof. Let $P=(v_0,\dots,v_n)$ be a self-avoiding path along vertices in $[N]$. By definition of the preferential attachment mechanism, $\mathbb{P}(P\subset G_N)$ can be decomposed in the following way: each edge $(u,v)$ in $P$ corresponds to a jump in the degree evolution of the vertex $u\wedge v$, and since $P$ is self-avoiding, any given degree evolution can feature at most twice in the formation of $P$. Moreover, if a degree evolution is used twice, then it is used to obtain two consecutive edges of $P$. By independence of the degree evolutions, $\mathbb{P}(P\subset G_N)$ therefore factorises into terms of the form $\mathbb{P}(u\to v\leftarrow w)$ and $\mathbb{P}(u\to v)$, corresponding to two jumps and one jump of the respective degree evolution. To bound the two-jump terms, note that the process started with value one and evolving according to the law of an unconditioned degree evolution stochastically dominates the process $(Z[v,n])_{n\ge v}$ conditioned on $Z[v,u]=1$. In combination with Proposition 3.2 this shows that the edge correlation bound (3.13) is satisfied with $\kappa_N=C_{3.2}(1)/C_{3.2}(0)$. According to Proposition 3.2, the bound (3.14) also holds with $\Psi_N=C_{3.2}(0)(\log N)^\alpha$, in the case where the attachment rule $f$ is of the form (3.4). For such $f$ the distance bound therefore follows immediately from Lemma 3.9, for any choice of $\delta\in(0,\frac{1}{1+\alpha})$.
For $f$ of the more general form (2.1), we note that $\bar f\ge f$ implies that the respective networks satisfy $\bar G_N\ge G_N$ stochastically for all $N\in\mathbb{N}$, where $\ge$ is the partial order given by inclusion on the edge sets of graphs with the same vertex set, so that distances in $G_N$ dominate those in $\bar G_N$. By (2.1), for every $\varepsilon>0$ there is $k_0\in\mathbb{N}$ such that a comparison with an attachment rule of the form (3.4) holds for all $k\in\mathbb{N}_0$. Choosing $\varepsilon$ suitably in dependence on $\delta$ thus allows us to deduce the bound for general $f$ from the special case treated in the previous paragraph.
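The way the exponent $\frac{1}{1+\alpha}$ emerges from the first moment method can be seen in a back-of-the-envelope computation. Assume, purely for illustration, that single edges satisfy a bound of the type $\mathbb{P}(v\leftrightarrow w)\le\Psi_N/\sqrt{vw}$ with $\Psi_N=C(\log N)^\alpha$, as suggested by the choice of $\Psi_N$ above; the precise form of (3.14) is not reproduced here.

```latex
% Summing over all paths of length l from v to w, the intermediate
% vertices decouple into harmonic sums:
\sum_{P\in\mathcal{P}_l(v,w)}\prod_{i=1}^{l}\frac{\Psi_N}{\sqrt{v_{i-1}v_i}}
  \;\le\; \frac{\Psi_N^{\,l}}{\sqrt{vw}}\Bigl(\sum_{u=1}^{N}\frac1u\Bigr)^{l-1}
  \;\approx\; \frac{\bigl(\Psi_N\log N\bigr)^{l}}{\sqrt{vw}}
  \;=\; \frac{(\log N)^{(1+\alpha)l+o(l)}}{\sqrt{vw}},
% which stays o(1) for typical vertices v,w \asymp N as long as
l \;\le\; (1-\delta)\,\frac{\log N}{(1+\alpha)\log\log N}.
```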

Proof of upper bounds - preferential attachment
To prove the upper bound of Theorem 1 we need to find short paths connecting two uniformly chosen vertices, say $U$ and $W$. We use the concept of an inner core: we will show that with high probability $U$ and $W$ have at most distance $(1+o(1))\frac{1}{2+2\alpha}\frac{\log N}{\log\log N}$ to a small set of vertices that has uniformly bounded diameter; see for instance [CL03, DHH10] for similar ideas.
Starting from a uniform vertex $U\in G_N$ we perform essentially a breadth-first search; a precise definition of the exploration algorithm is given below. Roughly speaking, at each exploration stage $k$ the set of vertices at distance $k$ from $U$ is assessed using the score $\xi$ introduced in Lemma 3.1, i.e. for a set $V\subset[N]$ and $p\in\mathbb{N}$ we call $\xi_p(V,N):=\sum_{v\in V}\xi(v,N)^p$ the total $p$-score of the set $V$. The proof is based on the following three auxiliary results.
• By a local approximation argument we first show that with high probability either the local exploration around $U$ will quickly lead to a configuration with a high score or the vertex is in a small component, see Proposition 4.1.
• Using moment estimates we show that, starting in a configuration with sufficiently high score, the score will quickly grow from generation to generation with high probability, see Proposition 4.12, until we find a configuration with score exceeding $\sqrt N/(\log N)^{2\alpha+2}$.
• Finally, we show that a subset with score exceeding $\sqrt N/(\log N)^{2\alpha+2}$ is with high probability connected to a dense subgraph among the oldest vertices. This subgraph is of bounded diameter, see Proposition 4.14.
Recall our notations $\xi(\cdot,\cdot)$ and $\psi_k(\cdot,\cdot)$, introduced in Lemma 3.1 and (3.3), respectively, which are repeatedly used throughout the following sections. If the graph size $N$ is fixed, we also write $\xi(\cdot)$, $\psi_k(\cdot)$ for $\xi(\cdot,N)$, $\psi_k(\cdot,N)$ for ease of notation. Note that, for $m\le n\le N$, (3.2) allows us to approximate the ratios $\xi(m,n)/n$; we use this approximate factorisation frequently in subsequent proofs. Here and throughout the article, '$f_1(\cdot)\approx f_2(\cdot)$' means that the ratio of the functions $f_1,f_2$ is bounded away from $0$ and $\infty$ uniformly in all arguments.

4.1. Local approximation results - initial phase.
A configuration $e$ associates with every vertex a state in the set {veiled, active, dead}, and with every potential edge a state in the set {0, 1, unknown}, the state 'unknown' capturing the absence of the information whether an edge is contained in $G_N$ or not. The graph associated with a configuration consists of the vertex set $[N]$ and all edges in state 1. The score of a configuration is the cumulative score of all active vertices in the configuration.
We now describe the exploration process that we follow in the initial phase as well as the main phase. Its definition uses a non-increasing sequence $(\ell_k)_{k\in\mathbb{N}}$ of truncation levels, which are set to $\ell_k=1$, for all $k\in\mathbb{N}$, in the initial phase. The exploration is an inhomogeneous Markov chain $(E_k)_{k\in\mathbb{N}}$ on the space of configurations, which we define on the probability space associated with the random graph $G_N$. We assume that we start with an initial configuration $E_0$, and that the graph associated with this configuration is a tree.
In the $k$th exploration step we go through all active vertices in $E_{k-1}$, starting with the vertex of smallest label and proceeding in increasing order of labels until all active vertices are treated. For each such vertex $v$ we
(1) inspect all potential edges connecting $v$ to veiled vertices in $\{\ell_k,\dots,N\}$;
(2) if the edge does not exist in $G_N$, its state becomes 0 and the veiled vertex remains so;
(3) if it does exist in $G_N$, its state becomes 1 and the veiled vertex is declared pre-active.
Once all active vertices are explored, they are declared dead, the pre-active vertices are declared active and the exploration step ends. Note that, if we start with a configuration associated with a tree, the configuration at the end of an exploration step is again associated with a tree. We call such configurations proper. The sets of active, veiled and dead vertices of $e$ are denoted by $\mathrm{active}(e)$, $\mathrm{veiled}(e)$ and $\mathrm{dead}(e)$, respectively.
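The exploration step above can be sketched in code. This is an illustrative implementation on a fixed edge set (in the paper the edges are revealed dynamically from $G_N$, and skipped edges stay in state 'unknown' rather than being discarded); the entries of `levels` play the role of the truncation levels $\ell_k$.

```python
def explore(N, edges, start, levels):
    """Truncated breadth-first exploration (sketch).

    edges  -- set of unordered vertex pairs on [N] = {1, ..., N}
    levels -- levels[k] is the truncation level l_{k+1}: in that step only
              potential edges to veiled vertices in {l, ..., N} are inspected.
    Returns the list of active sets, one per exploration step.
    """
    adj = {v: set() for v in range(1, N + 1)}
    for v, w in edges:
        adj[v].add(w)
        adj[w].add(v)
    veiled = set(range(1, N + 1)) - {start}
    active = {start}
    history = [set(active)]
    for lk in levels:
        pre_active = set()
        for v in sorted(active):                  # smallest label first
            for w in sorted(adj[v] & veiled):
                if w >= lk:                       # truncation: skip old veiled vertices
                    pre_active.add(w)
                    veiled.discard(w)
        active = pre_active                       # previous active vertices become dead
        history.append(set(active))
        if not active:
            break
    return history

# example: path 1-2-3-4 with an extra edge 1-5, no truncation
layers = explore(5, {(1, 2), (2, 3), (3, 4), (1, 5)}, start=1, levels=[1, 1, 1])
# layers == [{1}, {2, 5}, {3}, {4}]
```

With a nontrivial first truncation level, e.g. `levels=[3, 1, 1]`, the neighbour 2 of the start vertex is skipped in the first step and the exploration dies out after reaching vertex 5.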
The following proposition (and nothing else in this paper) relies on a coupling of local neighbourhoods in $G_N$ with the 'idealised neighbourhood tree' introduced in [DM13, Section 1.3]. The probability that this tree is infinite is denoted by $p(f)$. It coincides with the asymptotic proportion of vertices in the connected component of a uniformly chosen vertex, and hence with the probability that such a vertex is in the giant component.
Proposition 4.1. Suppose $U\in G_N$ is uniformly chosen, determining an initial configuration in which $U$ is active, all other vertices are veiled and all edges are in state unknown. Denote by $\xi(V):=\sum_{v\in V}\xi(v)$ the score associated with a set $V\subset[N]$ of vertices. Given $\varepsilon>0$ and $s_0>0$ there exists $k_0=k_0(s_0,\varepsilon)\in\mathbb{N}$ such that, for sufficiently large $N$, we have
$$\mathbb{P}\big(\text{there exist some }k\le k_0\text{ and a set }A\subset\mathrm{active}(E_k)\text{ with }\xi(A)\ge s_0\,\xi(\min A)\big)\ge p(f)-\varepsilon.$$

As the proof of Proposition 4.1 is obtained by application of the results of [DM13] and is therefore not self-contained, we defer it to Appendix A.

4.2. Score growth - main phase. Our next goal is to fix a sequence $(\ell_k)_{k\ge1}$ which guarantees that the score of encountered configurations during an exploration of the giant component grows with high probability at a certain deterministic rate. We rely on a careful analysis of the exploration process and the following concentration inequality.

Lemma 4.2 (Lower tail bound for independent sums, [CL06, Theorem 2.7]). Let $I$ be a finite set and $(X_i)_{i\in I}$ be independent, nonnegative random variables. Then, for any $\lambda>0$,
$$\mathbb{P}\Big(\sum_{i\in I}X_i\le\mathbb{E}\sum_{i\in I}X_i-\lambda\Big)\le\exp\Big(-\frac{\lambda^2}{2\sum_{i\in I}\mathbb{E}X_i^2}\Big).$$

We start the main phase in a proper configuration $E_0$ with the property that the score of the set $A$ of active vertices in the configuration satisfies $\xi(A)\ge s_0\,\xi(\min A)$ for some $s_0$ to be specified later. From this initial configuration we restart the exploration process $(E_k\colon k\in\mathbb{N})$ using a new truncation sequence $(\ell_k)_{k\in\mathbb{N}}$. As before, each $E_k$ is a proper configuration. While obtaining gradually more information about $G_N$, we need to control the correlation between discovered edges. This is done in the following two lemmas, which provide upper and lower bounds on conditional jump probabilities of a degree evolution $Z[m,\cdot]$ given disjoint sets $I_1,I_0$ of times at which $Z[m,\cdot]$ is known to jump or to stay constant, respectively.

Lemma 4.3 (Lower bound for conditional jump probabilities). For every $k\in\mathbb{N}$ there exist $n_0\in\mathbb{N}$ and a constant $C(k)>0$ such that the stated bound holds for every $n_0\le m\le N$ and all disjoint sets $I_1,I_0$ defining the conditioning events in (4.1) and (4.2).

Proof. The last sum can be rewritten as (4.3). The conditioning event in the last sum involves at most $k$ jumps. We may apply Lemma 3.8 to move them to the start of $Z[m,\cdot]$, and then the estimates (3.5) and (3.2), to obtain a constant $C(k)$ such that (4.4) holds for all $l\in I_0$. Inserting (4.4) into (4.3), in combination with (4.2), and using (4.1), yields the statement.
To choose $(\ell_k)_{k\ge1}$ suitably, we need to understand how the choice of cutoff points influences the growth of the score. To this end let $E$ denote a configuration obtained after some stage of the exploration process, let $V\subset\mathrm{veiled}(E)$, and consider the random variable introduced below. The inclusion-exclusion principle yields the lower bound (4.8). To derive bounds on the probability that the term after the minus sign is positive, we define suitable collision events. Recalling that $\xi_2(A)=\sum_{v\in A}\xi(v)^2$ for $A\subset[N]$, we obtain the following bounds.

Proposition 4.5 (Collision probability). Let $e$ be a proper configuration and $V\subset\mathrm{veiled}(e)$ such that, for some fixed $k\in\mathbb{N}$ and $n_0=n_0(k)$ as in Lemma 4.4, condition (4.9) holds. Then there is a constant $C>0$, depending only on $f$ and $k$, such that the collision probability is bounded by a constant multiple of $(\log N)^{2\alpha+1}\big(\xi(\mathrm{active}(e))^2-\xi_2(\mathrm{active}(e))\big)/N$, together with analogous terms.

Proof. Repeated use of the union bound reduces the claim to bounds on individual collision events. To drop the conditioning, we first use Lemma 4.4 to remove all dependencies on non-existing connections given by $e$, and then Lemma 3.8 to move the jump of $Z[v,\cdot]$ to the start. Note that we are allowed to do this, as condition (4.9) and the monotonicity of $\xi$ ensure that (4.1) is satisfied, since certainly $\mathrm{active}(e)\cup\mathrm{dead}(e)$ contains the set of continuity points $I_0$ appearing in the conditioning of $Z[v,\cdot]$.
, where the last inequality follows by using (3.2) and combining all occurring constants into $B > 0$. Hence, with $v_0 = \min V$, we get ... For $A_2(V, E)$ we need to take into account that, for $a \in \mathrm{active}(e)$, $Z[a, \cdot]$ may only be conditioned to have at most one jump. This holds since $e$ is proper, i.e. the active and dead vertices of $e$ together with the explored edges form a tree, implying that exactly one edge incident to $a$ has been explored. Using this fact to derive an upper bound on the number of jumps appearing in the conditioning of $Z[a, \cdot]$, a similar calculation as above yields ... for some $B' > 0$ and $a_0 = \min(\mathrm{active}(e))$. Analogously, we obtain ... for some $B'' > 0$. Setting $B = \max(B, B', B'')$, these three estimates together with the union bound yield ..., which implies the claimed upper bound.
Remark 4.6. Note that we only use the fact that $e$ is proper to ensure that an active vertex has at most one explored adjacent edge. Our proofs still work if we drop the requirement that the explored subgraph is a tree and replace it with the requirement that its indegree is bounded in $N$.
Proposition 4.5 allows us to ignore the second sum of (4.8) outside a set of small probability. Decomposing the first sum of (4.8) according to the orientation of the occurring edges yields ... Setting ..., we note that, due to the independence of the indegree evolutions, $S^<(V) = \sum_{v \in V} X_v$ and $S^>(V) = \sum_{a \in \mathrm{active}(E)} Y_a$ are independent and both are sums of elements of the collection $\{X_v, Y_a : v \in V, a \in \mathrm{active}(E)\}$ of mutually independent random variables. In order to apply Lemma 4.2 we determine moment bounds for $X_v$, $v \in V$, and $Y_a$, $a \in \mathrm{active}(E)$.
Proposition 4.7 (First and second moments of vertex scores). Let $e$ be a proper configuration and $V \subset \mathrm{veiled}(e)$ such that (4.9) is satisfied for some $k \in \mathbb{N}$.

(i) There are constants $0 < c, C < \infty$, depending only on $f$ and $k$, such that for all $v \in V$ ... $\sum_{a \in \mathrm{active}(e):\, a > v} \xi(a)\big(\log \tfrac{a}{v} \vee 1\big)^{\alpha}$ (4.10) and ... (4.11)

(ii) There are constants $0 < c, C < \infty$, depending only on $f$ and $k$, such that for all $a \in \mathrm{active}(e)$ ...

Proof. As $X_v$ is a constant multiple of a sum of indicators, its first conditional moment is ... where we have used Lemmas 3.7 and 4.3, Proposition 3.3, (3.2) and chosen some appropriate constant $c > 0$. A similar calculation for the second moment relies on Lemmas 3.7, 3.8 and 4.4 and Proposition 3.2 and reads ... This establishes (i). Turning to (ii), we obtain firstly, for some appropriately chosen $c > 0$, ... where we have used Lemmas 3.7 and 4.3 for the first inequality and Proposition 3.3 and (3.2) for the second. Secondly, analogously to the second moment calculation for (i), we get ..., and the claim follows.
The lower bounds (4.10) and (4.12) now imply that, for $e, V$ chosen as before, ... for some small $c > 0$. The factor $\sum_{v \in V} v^{-1}\big(\log((a \vee v)/(a \wedge v)) \vee 1\big)^{\alpha}$ in the last sum is large as long as the set $V$ is sufficiently dense in $[N]$. In fact, the following instance of the pigeonhole principle applies, which is proved as Lemma A.2 in Appendix A.
We summarise our observations in the following lemma.
Lemma 4.9 (Concentration of score). Let $e$ be a proper configuration with $a_0 = \min(\mathrm{active}(e))$ and let $v_0 < (a_0/e^2) \wedge \eta_{4.8} N$ be such that $V = \{v_0, \dots, N\} \cap \mathrm{veiled}(e)$ satisfies both (4.9) for $k = 2$ and (4.15) for $A = \mathrm{active}(e)$. Then there exists a constant $c > 0$ such that, for all $\beta \in (0, 1)$, ...

Proof. On the complement of the event ... set $d = c_{4.14}\, c_{4.8}/2$, and note that by (4.14), ... Applying Lemma 4.2, we obtain ... We now calculate bounds for all the terms appearing on the right-hand side of (4.18). Observe that, for some appropriately chosen constant $D_1 > 0$, by (3.2), ... Combining the two estimates just obtained, repeated use of (3.2) yields ... for some $D > 0$, using ... for the second sum, and finally $x + y \le 2(x \vee y)$. Next, we obtain in a similar way, for some $D_3 > 0$, ... Consequently, mirroring the derivation of (4.19), we obtain ... for some $D > 0$. Applying (4.19) and (4.20) in (4.18) yields a bound on the denominator in (4.17), from which the exponential term in the conclusion of the lemma is obtained. To conclude the proof it remains to note that the second term in the conclusion of the lemma is a bound on the probability of the occurrence of ...

As a consequence of Lemma 4.9 we are able to bound the growth of the score from below as long as the total score of the explored vertices is not too large. To this end define, for given $s_0 > 0$ and $\delta_0 \in (0, \tfrac{1}{2})$, ... If the maximum in the above definition is taken over the empty set, we let $\ell_k = 1$.
Remark 4.10. Note that $\ell_k$ is defined in such a way that, up to a factor of order $\log \ell_k$, $(\xi(\ell_k))_{k \ge 1}$ mimics the superexponential growth of $(S_k)_{k \ge 1}$ described in Lemma 4.9. It is precisely this property of $(\ell_k)_{k \ge 1}$ which makes them the correct truncation points for our exploration.
Denoting by $K^* := K^*(N)$ the first index $k$ for which $\ell_{k+1} = \ell_k$, we check that $(\ell_k)_{k \ge 1}$ satisfies the following decay condition.
A verification of Lemma 4.11 is provided in Appendix A, see Lemma A.3. We conclude this section with the central result on the growth of the score in the truncated exploration. While Lemma 4.9 states that with high probability the total score of the active vertices grows by a factor close to $(\log N)^{\alpha+1}$ in every exploration step, the next proposition states that with high probability we may iterate the estimate of Lemma 4.9 and indeed reach a large score after $O(\log N / \log \log N)$ stages.

Proposition 4.12 (Score growth). Let $\varepsilon, \eta > 0$ and set ... Then there are $s_0(\varepsilon) > 0$, $\delta_0(\varepsilon) \in (0, \tfrac{1}{2})$ and $N_0(\varepsilon, \eta)$ such that ... where $(E_k)_{k \ge 0}$ is the exploration in $G_N$ with truncation $(\ell_k)_{k \ge 1}$ as in (4.21) that is started in a proper configuration $E_0$ satisfying $\xi(\mathrm{active}(E_0))/\xi(\min(\mathrm{active}(E_0))) \ge s_0$.
Proof. We first note that, for fixed $\eta > 0$, $K^* < K$ for all sufficiently large $N$ by the first statement in Lemma 4.11. We wish to apply Lemma 4.9 iteratively until $k \le K^*(N)$ is so large that the second conclusion of Lemma 4.11 allows us to establish the lower bound for $\xi(\mathrm{active}(E_K) \cup \mathrm{dead}(E_K))$. To this end let, for $k \ge 0$, ... and furthermore ... To accomplish this, we need to bound the total probability of error which arises by repeatedly applying Lemma 4.9. The proof is complete once we have verified the following three claims:

(i) For given $\varepsilon, \delta_0$ we may choose $s_0 > 0$ such that, for all sufficiently large $N$, the configuration $E_0$ and $\ell_1$ satisfy the conditions of Lemma 4.9 unless $K_0 = 0$. Additionally, with probability exceeding $1 - \gamma_1$, for $\gamma_1 := 6\varepsilon/(2\pi^2)$, we have ...

(ii) For all sufficiently large $N$, the configuration $E_k$ and $\ell_{k+1}$ satisfy the conditions of Lemma 4.9. Consequently, we can find $\gamma_k > 0$ such that with conditional probability exceeding $1 - \gamma_k$ we have ... and thus ...

(iii) ...

Note that $(\gamma_k)_{k \ge 1}$ serves as a proxy for the probability that in exploration step $k+1$ the exploration process violates the conditions of Lemma 4.9.
Proof of (i): If $K_0 = 0$, then there is nothing to show. Let $K_0 > 0$. Given $\delta_0$, the condition $\ell_1 < a_0/e^2$ is satisfied by choosing $s_0$ sufficiently large. The conditions $\ell_1 < \eta_{4.8} N$, (4.9) and (4.15) are now implicit in the assumption $K_0 > 0$, for all sufficiently large $N$. An application of Lemma 4.9 with $\beta = \tfrac{1}{2}$ yields ... as $N \to \infty$; thus, after possibly increasing $s_0$ again, (i) holds.
Proof of (ii): Assume that $K^* > K_0 > k$ and note that this implies $\xi(\ell_{k+1}) \le \sqrt{N}$. By definition of the exploration we have $a_k \ge \ell_k$. By Lemma 4.11 and the definition of $(\ell_k)_{k \ge 1}$, the network size $N$ can be chosen so large that $(\ell_k)_{k \ge 1}$ decays faster than $(e^{-2k})_{k \ge 1}$ for all $k < K^*(N)$. In particular, $\ell_{k+1} < a_k/e^2$ holds. As in the proof of (i), $K_0 > k$ implies that (4.15) is satisfied and also, using $\xi(\ell_{k+1}) \le \sqrt{N}$, (4.9) must hold. Hence we may again apply Lemma 4.9 with $\beta = \tfrac{1}{2}$ to obtain that, conditionally on $E_k$, ... The conclusion ... follows by the choice of the defining recursion (4.21) for the truncation $(\ell_k)_{k \ge 1}$.
Proof of (iii): It remains to bound the random terms $\Delta_k + \Gamma_k$, $k \le K_0$, appearing in (4.22) by some deterministic sequence $\gamma_k$ with the desired summability property. We start with $\Gamma_k$. Since $S_k < H_{K_0}$, we get ... To bound $\Delta_k$, we analyse the three terms under the minimisation separately. Since $x \mapsto x(\log(N/x))^2$ is strictly increasing on $[1, N/e^2]$, the deterministic rightmost term satisfies ... By the definition of $(\ell_k)_{k \ge 1}$ and (3.2) there is some constant $c$, independent of $k$, $N$ and $\varepsilon$, such that $\xi(\ell_{k+1}) \le c(\log(N/\ell_k))^{\alpha+1}\,\xi(\ell_k)$, and thus we also have ...
On the conditioning event of (ii), we have ... and thus ... for some constant $c > 0$. Combining all estimates we obtain, for some $c > 0$, ... This implies that by choosing $\delta_0$ small enough we may obtain $\Delta_k \le 6\varepsilon/(2\pi^2 k^2) \vee N^{-c \log N}$, and the last claim is proved.
4.3. Connectivity of high-degree vertices. We now provide a connectivity result for those vertices in $G_N$ which have a very high degree. This sprinkling-type argument is close in spirit to the proof of a diameter result for the 'inner core' of a different preferential attachment model in [DHH10].
Fix a sequence $(M_N)_{N \in \mathbb{N}}$ of positive integers satisfying $\log M_N = o(\log N)$. We will now define a random subset $C_N \subset [N]$ of size at most $M_N$ which has small diameter in $G_N$. To this end, fix $\varepsilon \in (0, 1)$ and associate with $N \in \mathbb{N}$ the value $N_\varepsilon = (1 + \varepsilon)^{-1} N$. Assuming that $N$ is sufficiently large that $M_N \le N_\varepsilon$, we call the elements of the random set ... core vertices of $G_N$. We show below that the diameter of $C_N$ in the random graph $G_N$ is bounded with high probability, but first we provide an estimate on the number of vertices in $C_N$.
Lemma 4.13 (Size of the core). There exists a constant $c = c(\varepsilon) > 0$ such that ... where $C_N$ is as in (4.24).
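The proof below invokes the Paley–Zygmund inequality. For the reader's convenience we record the standard form that suffices here (our statement of a classical fact, not reproduced from the source):

```latex
% Paley--Zygmund inequality (standard form, stated for reference):
% for a nonnegative random variable $X$ with $0 < \mathbb{E}[X^2] < \infty$
% and any $\theta \in (0,1)$,
\[
  \mathbb{P}\big(X \ge \theta\,\mathbb{E}X\big)
  \;\ge\; (1-\theta)^2\,\frac{(\mathbb{E}X)^2}{\mathbb{E}[X^2]}.
\]
```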
Proof. Note that by the Paley–Zygmund inequality one has, for $v \in$ ... By Propositions 3.2 and 3.3 there exists $p^* > 0$ such that, for large $N \in \mathbb{N}$, $p(v, N) > p^*$ for all $v \le M_N$. Since further the degree evolutions $(Z \dots)$ ..., with high probability.

Proposition 4.14 (Diameter of the core). Let $C_N$ be as in (4.24) and $M_N = (\log N)^R$ for some $R > 0$. Then, with high probability as $N \to \infty$, we have ...

For the proof of Proposition 4.14 we use multinomial random graphs. This random graph model depends on three parameters: a finite vertex set $\mathcal{V}$, an iteration number $t$ and a success probability $r \ge 0$ with ... The corresponding multinomial random graph is an undirected multigraph constructed as follows. Denote by $A(v, w)$ the random number of edges that connect two distinct vertices $v$ and $w$ of $\mathcal{V}$. An $\mathcal{M}(\mathcal{V}, t, r)$-graph $(\mathcal{V}, A)$ is obtained by choosing $(A(v, w))$ multinomially distributed with $t$ draws and identical success probabilities $r$. Note that we do not assume that $r \frac{\#\mathcal{V}(\#\mathcal{V}-1)}{2} = 1$, which means that formally the random vector has to be extended by a dummy variable which receives the remaining mass.
Recall that the sum of two independent multinomial random variables with identical success probabilities is again multinomially distributed. Hence the sum of two independent multinomial random graphs with identical vertex sets and success probabilities is a multinomial random graph with the same success probability, the number of draws being the sum of the two draw parameters. We will make use of this fact in the proof of Proposition 4.14 below.
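As a concrete illustration of the $\mathcal{M}(\mathcal{V}, t, r)$ construction and of the additivity just noted, the following sketch draws the edge multiplicities $A(v, w)$ via $t$ independent draws, each landing on one unordered pair with probability $r$ or on the dummy cell with the remaining mass (all names are ours, not from the source):

```python
import random
from itertools import combinations

def multinomial_graph(vertices, t, r, rng):
    """Sample an M(V, t, r) multigraph: the vector of edge multiplicities
    (A(v, w)) over unordered pairs is multinomial with t draws, each pair
    having success probability r; a dummy cell absorbs the leftover mass."""
    pairs = list(combinations(vertices, 2))
    assert r * len(pairs) <= 1.0, "success probabilities must sum to at most 1"
    A = {p: 0 for p in pairs}
    for _ in range(t):                      # t independent draws
        u = rng.random()
        idx = int(u // r) if r > 0 else len(pairs)
        if idx < len(pairs):                # draw falls on pair idx
            A[pairs[idx]] += 1
        # otherwise the draw falls into the dummy cell: no edge added
    return A

rng = random.Random(1)
A1 = multinomial_graph(range(5), t=30, r=0.05, rng=rng)
A2 = multinomial_graph(range(5), t=20, r=0.05, rng=rng)
# Additivity: superposing two independent M(V, t_i, r) graphs yields an
# M(V, t_1 + t_2, r) graph; here we simply form the superposition.
A = {p: A1[p] + A2[p] for p in A1}
print(sum(A.values()), "edges placed out of", 50, "draws")
```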
Lemma 4.15. ...

Proof. The detailed argument is given in Lemma A.2 and Proposition 3.2 of [DHH10]; we give a brief outline here. First one shows that the diameter of $(\mathcal{V}, A)$ is bounded by the diameter of the uniform random graph $G_u(\#\mathcal{V}, m)$ with $\#\mathcal{V}$ vertices and $m = m(t)$ edges, where ... The graph $G_u(\#\mathcal{V}, m)$ is in turn asymptotically equivalent to the classical Erdős–Rényi graph $G(\#\mathcal{V}, p)$ on $\#\mathcal{V}$ vertices with edge probability $p = p(t)$ given by ... Finally, by assumption, for $n = \#\mathcal{V}$, we have $rt \ge n^{\rho - 1}$ and therefore, by (4.25), ... for some constant $c$. We may assume $\rho \le 1/2$, since decreasing $\rho$ clearly only increases the diameter. Setting $d = 1/\rho + 1$, it is obvious that (4.26) holds, and furthermore ..., implying that (4.27) and (4.28) are satisfied as well.
Proof of Proposition 4.14. We use a coupling of $C_N \subset G_N$ and a multinomial random graph to show that the diameter of $C_N$ is small. Recall that the preferential attachment model is uniquely specified by the degree evolutions, which can be constructed as follows. Take a family of independent Uniform$[0, 1]$ random variables $(U(v, n) : v, n \in \mathbb{N} \text{ with } v < n)$ and define iteratively ... thus the success probability equals ... Next we show that for a sequence of sets $c_N \subset [N_\varepsilon]$ satisfying $\delta M_N \le \#c_N \le M_N$, for some $\delta > 0$, the random graphs $(c_N, \bar{A}_N)$ satisfy the assumptions of Lemma 4.15. After ... steps and for any choice of $\eta > 0$, the conclusion of Proposition 4.12 is applicable. We call such an exploration successful. In the step when the score bound in Proposition 4.12 is reached we have ... where ... Also note for later reference that, by the definition (4.21) of $(\ell_k)_{k \ge 1}$ and the recursion for $(S_k)$, cf. (4.23), ... for some small $d > 0$.
We may assume without loss of generality that $K_0^{(1)} < K_0^{(2)}$. After stage $K_0^{(1)}$, we cannot apply exactly the same reasoning to the second exploration as in Proposition 4.12, since the total score of both configurations combined is too high. However, the lower bound given in Lemma 4.3 can still be applied in each exploration step, since the set $I_0$ of non-jump times featured in this lemma consists only of odd vertices and is therefore disjoint from the sets of non-jump times used in the other exploration, which may have exceeded the score bounds. The restriction on the set of jump times $I_1$ clearly plays no role: if we encounter an additional jump due to a connection to the first exploration, then the procedure can be stopped and a shortest path connecting $U$ and $V$ has been found.
As a consequence, we deduce that with high probability $U$ and $V$ are either found to be connected before stage $K_0^{(2)}$ or their respective explorations have reached a score of at least $\sqrt{N_\varepsilon}(\log N_\varepsilon)^{-(\alpha+1)}$. Note that for a successful exploration, by definition of $K_0$, ... and furthermore (4.21) and (4.23) imply that ... Combining these estimates it follows that $\ell_{K_0} > (\log N_\varepsilon)^{2\alpha+2}$, and the exploration has thus collected no information about the degree evolutions of vertices in $[M_N]$ during its main phase, where $M_N = c_\varepsilon (\log N)^{2\alpha+2}$ and $c_\varepsilon > 0$ is a suitably chosen constant. Therefore we can apply Lemma 4.13 and Proposition 4.14 to deduce that, for sufficiently large $N$, with probability exceeding $1 - \varepsilon/4$, the subgraph induced by $C_N \subset G_N$ has bounded diameter $D$ and contains at least $r M_N$ vertices, for some $r = r(\varepsilon) > 0$.
Denoting the sets of active and dead vertices of $E^{(i)}$ by $V(i)$, and using the shorthand ..., it remains to show that ... We have already established that, with high probability, $C_N$ contains at least $r M_N$ vertices.
It is now straightforward to deduce, via an appropriate coupling to Bernoulli random variables, that with probability at least $1 - \varepsilon/12$, ..., where $q = q(\varepsilon) > 0$ is some small constant. Each $j \in L$ has an independent probability of at least $f(Z[v, N_\varepsilon])/N$ of connecting to $v \in V(i)$; thus the probability that it does not connect to any $v \in V(i)$ is bounded above by $\exp\big(-\sum_{v \in V(i)} f(Z[v, N_\varepsilon])/N\big)$. Since this holds independently for all $j \in L$, we obtain by (4.33) and (4.35), recalling that ..., ... for all sufficiently large $N$ and some small $\nu \in (0, 1)$ to be fixed below. Note that the term in the last exponential is bounded below by a constant (depending only on $\varepsilon$) multiple of $(\log(N/M_N))^{\alpha}$.
It remains to fix $\nu > 0$ and bound ... The proof of Proposition 4.12 shows that $(S_k)_{k=1}^{K_0}$ grows superexponentially; thus for every $\mu > 0$ there is $\nu > 0$ such that ..., i.e. $H_{K_0}$ can differ from $S_{K_0}$ by at most a constant factor. Therefore it is sufficient to find a lower bound on $S_{K_0}$. Note that, for $v \in \mathrm{active}(E_{K_0})$, replacing the attachment rule $f$ by the linearised attachment rule $\hat{f}(k) = f(0) + k/2$ does not change the values $\xi(v, N_\varepsilon)$ and only diminishes the sum on the left. For the rest of the argument we may therefore assume that $f = \hat{f}$ in the evolutions $\{Z[v, \cdot] : v \in \mathrm{active}(E_{K_0})\}$. During the final exploration stage $K_0$, the evolution $(Z[v, i])_{i=1}^{N_\varepsilon}$ of an active vertex $v$ is only conditioned on a set $I_0$ of non-jumps which still fulfils the conditions of Lemma 4.3. This implies that, for some small $s > 0$, we have ... The random variables under the summation on the left are independent. Choosing $\mu = \mu(s)$ small enough we thus find, by Lemma 4.2, ..., for some $\delta = \delta(\mu) > 0$. Taking into account the linearisation of $f$ and Proposition 3.2, we obtain ... and that the maximum is attained at $K_0$ due to the restriction of the exploration. Therefore, by (4.34) and the fact that $H_{K_0}$ is a bounded multiple of $S_{K_0}$, ... Taking expectations and using the already established lower bound on the probability of a successful exploration yields the desired bound on $\mathbb{P}(\dots)$. Combining the distance bounds from all exploration phases and summing up all error probabilities, we have thus shown that for any $\varepsilon \in (0, 1/3)$, with probability exceeding $1 - 3\varepsilon$, ... for all sufficiently large $N$. This concludes the proof, as $\eta > 0$ was arbitrary.

5. Proof of Theorem 2
In this section we use a similar method to that of the previous sections to describe typical distances in the Norros–Reittu model with i.i.d. random weights, and thus prove Theorem 2. The technical details are considerably easier in this case, and some parts of the proof which proceed in direct analogy to the preferential attachment case will only be sketched.
We first state some well-known facts about heavy-tailed i.i.d. weight sequences.
Proposition 5.1 (Asymptotics of weights). Let $(W_i)_{i \ge 1}$ be an i.i.d. sequence satisfying ... and denote by $F_n$ the distribution function of the $n$-th power $W_1^n$ of the weights. For every $\varepsilon \in (0, 1)$ there is a subset $\Omega_\varepsilon$ of the space of all infinite weight sequences with $\mathbb{P}(\Omega_\varepsilon) > 1 - \varepsilon$ and positive constants $C_1, C_2, C_3$ and $c_2$ such that on $\Omega_\varepsilon$ the following conditions are satisfied: ... (5.4) where the generalised inverse of a monotone function is chosen to be left-continuous and ...

Proof. Inequality (5.2) is a direct consequence of the weak convergence of the rescaled maximum weight to the Fréchet distribution (see e.g. [Res87, Chapter I]). The relations (5.3) and (5.4) follow from weak convergence of rescaled partial sums to stable random variables with positive support (see e.g. [Res07, Corollary 7.1] for a stronger functional version).

... An application of Lemma 3.9 now concludes the proof of Proposition 5.2, as $\log \Psi_N = (2\alpha + o(1)) \log\log N$ and $\kappa_N = 1$.

5.2. Proof of the upper bound. We now prove the upper bound in Theorem 2.
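The maximum asymptotics behind Proposition 5.1 can be illustrated numerically. Assume for simplicity a pure Pareto tail $P(W > x) = x^{-2}$ for $x \ge 1$ (our simplification of the weight condition (2.4)); then the rescaled maximum $\max_{i \le N} W_i/\sqrt{N}$ converges in distribution to a Fréchet random variable with shape parameter $2$:

```python
import random

# Illustration of the (5.2)-type maximum asymptotics under a pure Pareto tail:
# if P(W > x) = x^(-2) for x >= 1, then max_{i<=N} W_i / sqrt(N) converges in
# distribution to a Frechet(2) limit. The pure power law is our simplification
# of the paper's weight condition (2.4).
random.seed(2)
N = 1_000_000
# W = (1 - U)^(-1/2) with U uniform on [0,1) is Pareto(2): P(W > x) = x^(-2).
max_w = max((1.0 - random.random()) ** -0.5 for _ in range(N))
ratio = max_w / N ** 0.5
print(f"max weight / sqrt(N) = {ratio:.3f}")  # typically of order 1
```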
Proposition 5.3 (Upper bound on distances in NR). Let $H_N$ be a Norros–Reittu network with weight distribution satisfying (2.4). Consider vertices $U, V$ chosen independently and uniformly at random from the largest component $C_N \subset H_N$. Then, for any $\delta > 0$, ...

This result can be obtained by a straightforward adaptation of the proof of [Hof16, Theorem 3.22], which uses the second moment method in combination with path-counting techniques. For the closely related Chung–Lu model with deterministic weights, a related result is [CL06, Theorem 7.9], the proof of which also works in our setting. We provide a proof sketch relying on arguments similar to those given in Section 4 for the preferential attachment network.
For $H \subset [N]$ we denote by $W(H) = \sum_{v \in H} W_v$ the total weight of $H$. Just as in the preferential attachment model, the neighbourhood of a uniformly chosen vertex $V \in H_N$ converges in distribution to a random tree $S$. This tree can be obtained from a mixed Poisson branching process, see ...

Lemma 5.4. ...

Proof. This follows from local weak convergence to $S$ and the fact that the offspring distribution of the branching process generating $S$ has infinite mean in every generation $k \ge 2$, hence is supercritical.
Lemma 5.5. Fix $M = (\log N)^R$ for some fixed $R > 0$ and let $C_N$ denote the set of the $M$ vertices with the largest weights. Then the diameter of the subgraph induced by $C_N \subset H_N$ is bounded with high probability as $N \to \infty$.
Proof. Given $N$, we relabel the vertices of $H_N$ in decreasing order of weight and denote by ... the order statistics of the first $N$ weights. Fix $\varepsilon > 0$ and $\delta \in (0, \alpha)$. Then on a subset $\Omega_\varepsilon$ with probability exceeding $1 - \varepsilon$, by a standard extreme value calculation, ...

Lemma 5.6. ...

Proof. By conditional independence, $\mathbb{P}(V_1 \not\leftrightarrow V_2) = e^{-W(V_1)W(V_2)/L_N}$, from which the result follows since $W(V_1)W(V_2)/L_N$ diverges to infinity in probability.
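In the Norros–Reittu model, conditionally on the weights, each pair of distinct vertices $v, w$ is joined independently with probability $1 - e^{-W_v W_w/L_N}$, so the non-connection probability $e^{-W_v W_w/L_N}$ is exactly the quantity used in the proof above. A minimal sketch with a fixed toy weight vector (our choice, purely illustrative):

```python
import math
import random

def norros_reittu(weights, rng):
    """Norros-Reittu graph: given weights W_v with total L_N, each pair
    {v, w} is connected independently with probability 1 - exp(-W_v W_w / L_N)."""
    L = sum(weights)
    n = len(weights)
    adj = [[False] * n for _ in range(n)]
    for v in range(n):
        for w in range(v + 1, n):
            p = 1.0 - math.exp(-weights[v] * weights[w] / L)
            if rng.random() < p:
                adj[v][w] = adj[w][v] = True
    return adj

rng = random.Random(3)
W = [10.0, 10.0, 1.0, 1.0, 1.0]   # toy weight vector, our choice
# Two heavy vertices are connected with probability close to one:
p_top = 1.0 - math.exp(-W[0] * W[1] / sum(W))
print(f"connection probability of the two heaviest vertices: {p_top:.3f}")
adj = norros_reittu(W, rng)
```

This mirrors the mechanism in Lemma 5.6: as the product of the two largest weights outgrows $L_N$, the connection probability tends to one.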
Proof of Proposition 5.3. In view of Lemmas 5.4 to 5.6 it is sufficient to show that a truncated exploration in $H_N$, started in a configuration $E_0$ of large initial weight $S_0$, with high probability as $N \to \infty$ reaches a configuration $E_k$ satisfying ... in fewer than $K$ stages, where $R, \delta > 0$ are fixed and ... We truncate the exploration in the following way: at stage $k$, we only investigate connections between active vertices and vertices of weight at most $w_{k+1}$, where $(w_k)_{k \ge 1}$ is a superexponentially growing sequence specified below. Since we would like to condition on the weights, we start by demonstrating that almost all weight sequences have certain properties. Let $(A_k)_{k=0}^{K}$ denote a partition of the set $[1, \sqrt{N}(\log N)^{2\alpha})$ into $K$ nonoverlapping intervals $A_k = [a_k, a_{k+1})$ of equal length. Applying Lemma 4.2 and a brief calculation, we may assume that $W_1, \dots, W_N$ satisfy ... $\sum_i W_i^2 \,\mathbb{1}\{W_i \le w_k\}$ ..., for $1 \le k \le K$, (5.8) as well as ... (5.9) Fix $\varepsilon > 0$. Let $E$ be a configuration obtained from an exploration of $H_N$, let $S = W(\mathrm{active}(E))$, $H = \mathrm{active}(E) \cup \mathrm{dead}(E)$, $w > 0$ and $V = V(w) = \{v \in \mathrm{veiled}(E) : W_v \le w\}$. It is easy to see, using an appropriate coupling to a sum of independent weighted Bernoulli random variables and Lemma 4.2
Hence choosing $w_0$ sufficiently large, setting $w_k = c(\delta, \varepsilon)(\log N)^{1 + 2\alpha - \eta(\delta)}\, w_{k-1}$, $1 \le k \le K$, for some appropriately chosen small values of $c(\delta, \varepsilon)$ and $\eta(\delta)$, and letting $V_k = V(w_k)$ in (5.10), we obtain that the weight $S_k$ of the active vertices increases in each stage $k$ of the exploration by a factor of at least $\nu(w_k, N) \ge c(\log w_k)^{2\alpha + 1 - \eta(\delta)}$, for some constant $c$ which depends on $\delta$ and $\varepsilon$ but not on $N$. A straightforward calculation now shows that the exploration satisfies ... after at most $K$ stages. Summing the error terms in (5.11) for the different stages using (5.8) and (5.9), we obtain, for some constants $c_1, c_2$ which are independent of $N$, ...

..., we choose $K = 2^{\alpha+1}$ and the desired bound (A.5) follows.

Proof. We first show the upper bound by induction on $k$. For $k = k_0$ the assertion is trivially true as soon as $N$ is large enough. Now assume that $\ell_k \le N e^{-(2\alpha + 2 - \delta)(k - k_0)\log k}$ for some $k < K^*(N) - 1$; then we have, by definition of $(\ell_k)_{k \ge 1}$, that $\log \ell_{k+1} \le \log \ell_k - (2\alpha + 2)\log(\log N - \log \ell_k) + 1$, and applying the induction hypothesis yields ...

(4.29)

Let $N \in \mathbb{N}$ and $c_N \subset [N_\varepsilon]$ be such that $\#c_N \to \infty$ and $\log(\#c_N) = o(\log N)$ (4.30), and construct for each $n \in [N] \setminus [N_\varepsilon]$ a multinomial random graph $(c_N, A_n)$ with iteration number one by the rule that, for distinct vertices $v, v' \in c_N$, the edge $(v, v')$ is present if and only if ... (4.32) Clearly, $r(N) \in (0, 1)$ if $N$ is sufficiently large, by (4.30). Note that the random graphs $(c_N, A_{[N_\varepsilon]+1}), \dots, (c_N, A_{[N]})$ are independent and that the sum of these graphs, say $(c_N, \bar{A}_N)$, is a multinomial random graph with iteration number $N - N_\varepsilon$ and success probability $r(N)$. Furthermore, by (4.31) and (4.29), for any $v, w \in c_N$ with $f(Z[v, N_\varepsilon]) \ge \mathbb{E} f(Z[M_N, N_\varepsilon])$ and $f(Z[w, N_\varepsilon]) \ge \mathbb{E} f(Z[M_N, N_\varepsilon])$, the existence of the edge $(v, w)$ in the multinomial graph $(c_N, A_n)$, $n = [N_\varepsilon]+1, \dots, [N]$, implies the existence of the edges $(v, n)$ and $(w, n)$ in the graph $G_N$. Thus the diameter of $c_N$ in $G_N$ is less than twice the diameter of the multinomial random graph $(c_N, \bar{A}_N)$.
... that, as long as $wS = o(L_N)$,
$$W(\{v \in V : v \leftrightarrow \mathrm{active}(E)\}) \;\ge\; \frac{\sum_{v \in V} W_v^2}{4 L_N}\, S \;=:\; \nu(w, N)\, S, \qquad (5.10)$$
conditionally on $E$ and the weight sequence, with probability at least ... Note that, by (5.3) and our choice of weight distribution,
$$\sum_{v \in V} W_v^2 \;\ge\; \sum_{v \in [N]} W_v^2\, \mathbb{1}\{W_v \le w\} \;-\; \max_{a \in H} W_a\, W(H).$$