On the Largest Component of a Hyperbolic Model of Complex Networks

We consider a model for complex networks that was introduced by Krioukov et al. In this model, N points are chosen randomly inside a disk on the hyperbolic plane and any two of them are joined by an edge if they are within a certain hyperbolic distance. The N points are distributed according to a quasi-uniform distribution, which is a distorted version of the uniform distribution. The model turns out to behave similarly to the well-known Chung-Lu model, but without the independence between the edges. Namely, it exhibits a power-law degree sequence and small distances but, unlike the Chung-Lu model and many other well-known models for complex networks, it also exhibits clustering. The model is controlled by two parameters α and ν where, roughly speaking, α controls the exponent of the power-law and ν controls the average degree. The present paper focuses on the evolution of the component structure of the random graph. We show that (a) for α > 1 and ν arbitrary, with high probability, as the number of vertices grows, the largest component of the random graph has sublinear order; (b) for α < 1 and ν arbitrary, with high probability there is a "giant" component of linear order, and (c) when α = 1 there is a non-trivial phase transition for the existence of a linear-sized component in terms of ν.


Introduction
The term "complex networks" describes a class of large networks which exhibit the following fundamental properties: 1. they are sparse, that is, the number of their edges is proportional to the number of nodes; 2. they exhibit the small world phenomenon: almost all pairs of vertices that are in the same component are within a short distance from each other; 3. clustering is present: two nodes of the network that have a common neighbour are somewhat more likely to be connected with each other; 4. their degree distribution is scale free, that is, its tail follows a power law.
There has been extensive experimental evidence (see for example [2]) which suggests that many networks that emerge in applications have a degree distribution whose tail follows a power law with exponent between 2 and 3.
The books of Chung and Lu [10] and of Dorogovtsev [7] are excellent references for a detailed discussion of these properties.
During the last 15 years a number of models have been developed in a series of attempts to capture these features. Among the first such models is the preferential attachment model. This is a class of models of randomly growing graphs whose aim is to capture a basic feature of such networks: nodes which are already popular tend to become more popular as the network grows. It was introduced by Barabási and Albert [2] and subsequently defined and studied rigorously by Bollobás, Riordan and co-authors (see for example [6], [5]).
Another extensively studied model was defined by Chung and Lu [8], [9]. Here every vertex has a weight which effectively corresponds to its expected degree, and every two vertices are joined, independently of every other pair, with probability that is proportional to the product of their weights. If these weights follow a power-law distribution, then it turns out that the resulting random graph has a power-law degree distribution. This model is a special case of an inhomogeneous random graph.
All these models have their shortcomings. Namely, none of them succeeds in incorporating all the above features. For example, the Chung-Lu model exhibits a power law degree distribution (provided the weights of the vertices are suitably chosen) and average distance of order O(log log N) (when the exponent of the power law is between 2 and 3, see [8]), but it does not exhibit clustering. This is also the situation in the Barabási-Albert model.
In the case of the Chung-Lu model the absence of clustering is essentially due to the fact that pairs of vertices form edges independently. By contrast, the presence of clustering requires that the edges do not appear independently: if two edges share a common endvertex, then the probability that their other two endvertices are joined must be higher compared to the case where we assume nothing about these edges. This property is naturally present in random graphs that are created over metric spaces, such as random geometric graphs. In this context, the vertices are a random set of points in a given metric space and any two of them are adjacent if their distance is smaller than a certain threshold.
Recently Krioukov et al. [19] introduced a model which naturally exhibits these typical features. In this model, a random network is created on the hyperbolic plane (we will see the detailed definition shortly). In particular, Krioukov et al. [19] determined the degree distribution, showing that it is scale free and its tail follows a power law, whose exponent is determined by one of the parameters of the model. The exponent can take any value that is at least 2. Furthermore, they consider the clustering properties of the resulting random network. A numerical approach in [19] suggests that the (local) clustering coefficient is positive and is determined by one of the parameters of the model. These characteristics have been verified rigorously by Gugelmann, Panagiotou and Peter [12].
The basic hypothesis of Krioukov et al. [19] was that hyperbolic geometry underlies complex networks. In particular, the heterogeneity of the nodes, whose expression is the power law degree distribution, is in fact the expression of an underlying hyperbolic geometry. Complex networks do exhibit some sort of hierarchy, in the sense that their nodes/members form groups which are further organised into subgroups and so on. Thus, there is a hidden tree-like structure, and hyperbolic geometry is the natural space which can accommodate such a structure.
The aim of the present work is to study the component structure of such a random graph and, more specifically, the number of vertices that are contained in a largest component of the graph. One of our main findings is that for the range of the parameters of the random graph model where the exponent of the power law is larger than 3, the random graph typically consists of many relatively small components, no matter how large the average degree of the graph is. This is in sharp contrast with the classical Erdős-Rényi model (see [3]) as well as with the situation for random geometric graphs on Euclidean spaces (see [20]), where the dependence on the average degree is crucial. However, we show that the structure of the random graph is significantly different when the exponent of the power law is smaller than 3. In fact, we show that in this case a giant component exists with high probability, that is, a component containing a positive fraction of the vertices of the random graph.

The model
We start by recalling some facts about the hyperbolic plane H. The hyperbolic plane is an unbounded surface of constant negative curvature −1. There are several ways to represent it in two dimensions, including the half-plane model, the Beltrami-Klein disk model and the Poincaré disk model. In the Poincaré disk model, we equip the unit disk D := {(x, y) ∈ R² : x² + y² < 1} with the metric determined by the differential form ds² = 4(dx² + dy²)/(1 − x² − y²)². The book of Stillwell [22] covers the basic theory of hyperbolic geometry. In this paper we find it helpful to draw pictures in the native model of H. This is obtained from the Poincaré disk model by multiplying each point (x, y) ∈ D by a scalar that equals the ratio of its distance to the origin in the hyperbolic metric over its distance to the origin in the euclidean metric (in the case of the origin itself we define this ratio to equal one). This produces a model of H that fills all of R². It lacks many of the properties that make the classical models of H elegant to work with, but it does allow us to see more detail in visualizations of the graph model we are about to introduce. To the best of our knowledge the native model was first introduced by Krioukov et al. [19].
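The map from the Poincaré disk model to the native model described above can be sketched as follows (the function name is ours):

```python
import math

def poincare_to_native(x, y):
    """Map a point of the Poincare disk model to the native model.

    The point (x, y) is scaled by the ratio of its hyperbolic distance
    to the origin over its euclidean distance to the origin; a point at
    euclidean radius r < 1 lies at hyperbolic distance
    2 * artanh(r) = log((1 + r) / (1 - r)) from the origin.
    """
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)  # at the origin the ratio is defined to equal 1
    scale = math.log((1 + r) / (1 - r)) / r
    return (x * scale, y * scale)
```

For instance, the point (0.5, 0) of the Poincaré disk is mapped to (log 3, 0), since its hyperbolic distance to the origin is log 3.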
Basic facts about H that we will rely on heavily in the paper are that in H a disk of radius r (i.e. the set of points at hyperbolic distance at most r from a given point) has area equal to 2π(cosh(r) − 1) and circumference length equal to 2π sinh(r). Another important fact that we will rely on in the paper is the hyperbolic cosine rule. It states that if A, B, C are distinct points on the hyperbolic plane, and we denote by a the distance between B and C, by b the distance between A and C, by c the distance between A and B, and by γ the angle (at C) between the shortest AC- and BC-paths, then cosh(c) = cosh(a) cosh(b) − cos(γ) sinh(a) sinh(b).
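In polar coordinates (hyperbolic radius, angle) the cosine rule gives a direct way to compute distances, and the area formula can be checked alongside it; a small sketch with function names of our own choosing:

```python
import math

def relative_angle(theta1, theta2):
    """Relative angle between two directions, reduced to [0, pi]."""
    return math.pi - abs(math.pi - abs(theta1 - theta2) % (2 * math.pi))

def hyp_dist(r1, theta1, r2, theta2):
    """Distance via the hyperbolic law of cosines:
    cosh(c) = cosh(a) cosh(b) - cos(gamma) sinh(a) sinh(b)."""
    g = relative_angle(theta1, theta2)
    ch = math.cosh(r1) * math.cosh(r2) - math.cos(g) * math.sinh(r1) * math.sinh(r2)
    return math.acosh(max(ch, 1.0))  # clamp guards against rounding below 1

def disk_area(r):
    """Area of a hyperbolic disk of radius r: 2*pi*(cosh(r) - 1)."""
    return 2 * math.pi * (math.cosh(r) - 1)
```

As a sanity check, two points at radius r whose angles differ by π lie at distance exactly 2r (the geodesic between them passes through the origin), and two points on the same ray lie at distance equal to the difference of their radii.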
We are now ready to introduce the model we will be studying in this paper. We will name it the Krioukov-Papadopoulos-Kitsak-Vahdat-Boguñá model, after its inventors. For convenience we will abbreviate this to KPKVB model throughout the rest of the paper. The model has three parameters: the number of vertices N, which we think of as large, and α, ν > 0, which we think of as fixed. Given N, ν, α, we compute R := 2 log(N/ν). We now select N points independently at random from the disk of radius R centred at the origin O, which we denote by D_R, according to the following probability distribution. If the random point u has polar coordinates (r, θ), then θ, r are independent, θ is uniformly distributed in (0, 2π] and the probability distribution of r has density function given by

f(r) = α sinh(αr) / (cosh(αR) − 1), for 0 ≤ r ≤ R.   (1.1)

Note that when α = 1, this is simply the uniform distribution on D_R. An alternative way to view this distribution is as follows. If we multiply the differential form in the Poincaré disk model by a factor 1/α², then we obtain (a model of) the hyperbolic plane H_α of curvature −α². It can be seen that the above probability distribution corresponds precisely to a point taken uniformly at random from the disk of radius R around the origin in H_α. (We however treat these points as points of the ordinary hyperbolic plane.) The set of N points we have thus obtained will be the vertex set of our random graph and we denote it by V_N. The KPKVB random graph, denoted G(N; α, ν), is formed by joining each pair of vertices if and only if they are within (hyperbolic) distance R. Figure 1 shows an example of such a random graph on N = 1000 vertices.
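For concreteness, the sampling of G(N; α, ν) can be sketched in code by inverting the cdf F(r) = (cosh(αr) − 1)/(cosh(αR) − 1) of the above density (a minimal sketch under the definitions above; the function name is ours, and the quadratic edge test is only meant for small N):

```python
import math, random

def sample_kpkvb(N, alpha, nu, seed=0):
    """Sample the vertex and edge sets of G(N; alpha, nu).

    Radii are drawn by inverting the cdf of the density
    f(r) = alpha * sinh(alpha r) / (cosh(alpha R) - 1) on [0, R];
    angles are uniform.  Two vertices are joined when their
    hyperbolic distance (law of cosines) is at most R.
    """
    rng = random.Random(seed)
    R = 2 * math.log(N / nu)
    pts = []
    for _ in range(N):
        u = rng.random()
        r = math.acosh(1 + u * (math.cosh(alpha * R) - 1)) / alpha
        pts.append((r, rng.uniform(0, 2 * math.pi)))
    edges = []
    cosh_R = math.cosh(R)
    for i in range(N):
        r1, t1 = pts[i]
        for j in range(i + 1, N):
            r2, t2 = pts[j]
            g = math.pi - abs(math.pi - abs(t1 - t2))
            ch = math.cosh(r1) * math.cosh(r2) - math.cos(g) * math.sinh(r1) * math.sinh(r2)
            if ch <= cosh_R:
                edges.append((i, j))
    return R, pts, edges
```

All sampled radii lie in [0, R] by construction, and the edge list contains only pairs i < j.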
We should mention that Krioukov et al. in fact had an additional parameter ζ in their definition of the model. In their definition, the points were taken inside a disk of radius R_ζ := (2/ζ) log(N/ν) on the hyperbolic plane H_ζ of curvature −ζ², and the points were generated according to (1.1) with R_ζ in place of R. In this case the random graph is denoted by G(N; ζ, α, ν). However, it turns out that there is no need for the extra parameter ζ. The following lemma, which we prove in Appendix A, shows that we can take ζ = 1 without any loss of generality. We remind the reader that a coupling of two random graphs consists of a common probability space on which both of them are defined. Thus, the lemma states that one can define G(N; ζ, α, ν) and G(N; ζ′, α′, ν) on a common probability space in such a way that the two graphs are isomorphic (with probability one). Lemma 1.1. Let ζ, ζ′, α, α′, ν > 0 be such that α/ζ = α′/ζ′. For every N ∈ N, there exists a coupling such that G(N; ζ, α, ν) and G(N; ζ′, α′, ν) are isomorphic with probability one. Let us also remark that the edge set of G(N; α, ν) is decreasing in α and increasing in ν in the following precise sense. Lemma 1.2. Let α, α′, ν, ν′ > 0 be such that α ≥ α′ and ν ≤ ν′. For every N ∈ N, there exists a coupling such that G(N; α, ν) is a subgraph of G(N; α′, ν′).
The proof of Lemma 1.2 is given in Appendix B.
Krioukov et al. [19] focus on the degree distribution of G(N; α, ν), showing that when α > 1/2 it follows a power law with exponent 2α + 1. They also discuss clustering on a smooth version of the above model. Their results have been verified rigorously by Gugelmann et al. [12]. Note that when α = 1, that is, when the N vertices are uniformly distributed in D_R, the exponent of the power law is equal to 3. When 1/2 < α < 1, the exponent is between 2 and 3, as is the case in a number of networks that emerge in applications such as computer networks, social networks and biological networks (see for example [2]). They have also shown that the average degree of the random graph can be "tuned" through the parameter ν.
the electronic journal of combinatorics 22(3) (2015), #P3.24
Throughout the paper, we will be using the notion of the type of a vertex. For a vertex u ∈ V_N, its type t_u is defined to be equal to R − r_u, where r_u is the radius of u in D_R. Similarly, a point p ∈ D_R of radius r_p has type t_p = R − r_p. We note the following lemma, which makes it easier to work with the distribution of the types.

Lemma 1.3. Uniformly for 0 ≤ t < 0.99R we have

Pr(t_u ≥ t) = (1 + o(1)) e^{−αt}.   (1.2)

Proof. Using the density in (1.1), we get

Pr(t_u ≥ t) = Pr(r_u ≤ R − t) = (cosh(α(R − t)) − 1)/(cosh(αR) − 1) = (1 + o(1)) e^{−αt},

uniformly for 0 ≤ t < 0.99R.

The notion of inhomogeneous random graphs was introduced by Söderberg [21], but was defined more generally and studied in great detail by Bollobás, Janson and Riordan in [4]. In its most general setting, there is an underlying compact metric space S equipped with a measure µ on its Borel σ-algebra. This is the space of types of the vertices. A kernel κ is a bounded, real-valued, non-negative function on S × S, which is symmetric and measurable. It is assumed that the vertices of the random graph are points in S. If x, y ∈ S, then the corresponding vertices are joined with probability κ(x, y)/N ∧ 1, where N is the total number of vertices, independently of every other pair. The points that are the vertices of the graph are approximately distributed according to µ. More specifically, the empirical measure induced by the N points converges weakly to µ as N → ∞.
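Lemma 1.3 can be checked against the exact tail of the distribution (1.1): the following small sketch (the function name is ours) computes Pr(t_u ≥ t) = (cosh(α(R − t)) − 1)/(cosh(αR) − 1), which is already very close to e^{−αt} for moderate N:

```python
import math

def exact_type_tail(alpha, R, t):
    """Exact P(type >= t) = P(r <= R - t) under the density (1.1):
    the cdf of the radius r is (cosh(alpha r) - 1)/(cosh(alpha R) - 1)."""
    return (math.cosh(alpha * (R - t)) - 1) / (math.cosh(alpha * R) - 1)
```

For example, with N = 10^6 and ν = 1 (so R = 2 log N ≈ 27.6), α = 0.8 and t = 2, the exact tail agrees with e^{−1.6} ≈ 0.2019 to many decimal places.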
Of particular interest is the case where the kernel function can be factorised and written as κ(x, y) = t(x)t(y); this is called a kernel of rank 1. Here, the function t(x) represents the weight of a vertex of type x and, in fact, it is approximately its expected degree. The special case where t(x) follows a distribution that has a power law tail was considered by Chung and Lu in a series of papers [8], [9] (see also [14]).
In the random graph G(N; α, ν) the probability that two vertices are adjacent has this form. The proof of this fact relies on Lemma 2.1, which we will state and prove later. This provides an approximate characterization of what it means for two points u, v to have hyperbolic distance at most R in terms of their relative angle, which we denote by θ_{u,v}.
As we shall see later in Lemma 2.1, two vertices u and v of types t_u and t_v are within distance R (essentially) if and only if θ_{u,v} < 2ν e^{t_u/2} e^{t_v/2}/N. Hence, conditional on their types, the probability that u and v are adjacent is proportional to e^{t_u/2} e^{t_v/2}/N. But Lemma 1.3 shows that the type of a vertex is approximately exponentially distributed. Thus, if we set t(u) = e^{t_u/2}, then Pr(t(u) ≥ x) = Pr(t_u ≥ 2 ln x) ≈ e^{−2α ln x} = 1/x^{2α}. In other words, the distribution of t(u) has a power-law tail with parameter 2α + 1. Thus, the random graph G(N; α, ν) can be seen as a dependent version of the Chung-Lu model that emerges naturally from the hyperbolic geometry of the underlying space. The fact that this is a random geometric graph gives rise to the existence of local clustering, which is missing in the Chung-Lu model. There, most vertices have tree-like neighbourhoods.
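Under this approximate characterization the conditional edge probability indeed has the rank-1 form: the relative angle of two vertices is uniform, so given the types, the probability of adjacency is roughly the angular threshold divided by π. A heuristic sketch (the function names are ours; this is the approximation, not the exact connection probability):

```python
import math

def weight(t):
    """Weight t(u) = e^(t_u / 2) of a vertex of type t_u."""
    return math.exp(t / 2)

def approx_edge_prob(t_u, t_v, N, nu):
    """Approximate P(u ~ v | types): the relative angle is uniform on
    [0, pi], and adjacency essentially requires it to be below
    2 * nu * e^((t_u + t_v)/2) / N, so the probability is roughly
    that threshold divided by pi (capped at 1)."""
    thr = 2 * nu * weight(t_u) * weight(t_v) / N
    return min(thr / math.pi, 1.0)
```

Note that the threshold factorises as a product of the two weights divided by N, which is exactly the rank-1 (Chung-Lu) form discussed above.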
In fact, it can be shown that the degree of a vertex u in G(N; α, ν) that has type t_u is approximately distributed as a Poisson random variable with parameter proportional to e^{t_u/2}.

Component structure of G(N ; α, ν)
This paper focuses on the component structure of G(N; α, ν) and, in particular, the size of its largest component. We denote by |L_1| the size of a largest connected component of G(N; α, ν).
Among the key findings of Erdős and Rényi on the theory of random graphs is the emergence of a component of linear order in a uniformly chosen random graph on N vertices and M edges, usually denoted by G(N, M). Erdős and Rényi [13] showed that N/2 is the critical number of edges for the appearance of a component whose number of vertices is proportional to N. That is, if M ≥ (1 + ε)N/2 for some fixed ε > 0, then with probability tending to 1 there exist constants c_1, c_2 depending on ε such that the largest component of G(N, M) has at least c_1 N vertices, whereas every other component contains at most c_2 ln N vertices. In the latter case, the largest component is also known as the giant component. See [3] or [16] for a detailed exposition and analysis of the emergence of the giant component.
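This threshold behaviour of G(N, M) is easy to observe in simulation; the following minimal union-find sketch (the function name is ours) returns the order of the largest component:

```python
import random

def largest_component_GNM(N, M, seed=0):
    """Order of the largest component of a uniform G(N, M), via union-find."""
    rng = random.Random(seed)
    parent = list(range(N))
    size = [1] * N

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = set()
    while len(edges) < M:  # rejection-sample M distinct edges
        i, j = rng.randrange(N), rng.randrange(N)
        if i != j:
            edges.add((min(i, j), max(i, j)))
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            if size[ri] < size[rj]:
                ri, rj = rj, ri
            parent[rj] = ri  # union by size
            size[ri] += size[rj]
    return max(size[find(v)] for v in range(N))
```

With N = 20000, taking M = 0.3N (below the threshold) typically yields a largest component of logarithmic order, while M = N (above the threshold) yields a component containing a substantial fraction of all vertices.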
For (euclidean) random geometric graphs it is known that there exists a critical value c for the average degree such that the size of the largest component is sublinear if the expected average degree is less than c and linear if the expected average degree is greater than c. The exact value of c remains unknown. For more information on giant components in random geometric graphs see [20].
In this contribution, we show that when α crosses 1 a "phase transition" occurs. More specifically, if α > 1, then asymptotically almost surely (a.a.s.), that is, with probability tending to 1 as N → ∞, the largest component has sublinear order, whereas if α < 1 it has linear order.

Theorem 1.4. Let α, ν be positive real numbers. The following hold:
(a) if α > 1, then |L_1| = o(N) a.a.s.;
(b) if α < 1, then there exists a constant c = c(α, ν) > 0 such that |L_1| ≥ cN a.a.s.

Recently, Kiwi and Mitsche [18] showed that the second largest component is in fact at most poly-logarithmic in N, and at least logarithmic in N. The previous theorem shows there is a phase-change at α = 1. The next result shows that inside this phase-change, when α = 1, the existence or not of a giant component depends on the value of ν.
Theorem 1.5. Assume that α = 1. There exist constants π/8 ≤ ν_0 ≤ ν_1 ≤ 20π such that the following hold: if ν < ν_0, then |L_1| = o(N) a.a.s., whereas if ν > ν_1, then there exists a constant c = c(ν) > 0 such that |L_1| ≥ cN a.a.s. We now proceed with some auxiliary results and the proofs of the above theorems.
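The contrast between the regimes α < 1 and α > 1 can already be observed in small simulations. The following sketch combines the sampler for the model with a union-find computation of |L_1|/N; the function name is ours, and the values of N reachable this way are far too small to exhibit the asymptotics, so we only compare the two regimes qualitatively:

```python
import math, random

def largest_component_fraction(N, alpha, nu, seed=0):
    """Sample G(N; alpha, nu) and return |L1| / N, using union-find."""
    rng = random.Random(seed)
    R = 2 * math.log(N / nu)
    pts = []
    for _ in range(N):
        u = rng.random()
        # invert the cdf (cosh(alpha r) - 1)/(cosh(alpha R) - 1)
        r = math.acosh(1 + u * (math.cosh(alpha * R) - 1)) / alpha
        pts.append((r, rng.uniform(0, 2 * math.pi)))
    parent = list(range(N))
    size = [1] * N

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cosh_R = math.cosh(R)
    for i in range(N):
        r1, t1 = pts[i]
        for j in range(i + 1, N):
            r2, t2 = pts[j]
            g = math.pi - abs(math.pi - abs(t1 - t2))
            # hyperbolic law of cosines, compared at the cosh level
            if math.cosh(r1) * math.cosh(r2) - math.cos(g) * math.sinh(r1) * math.sinh(r2) <= cosh_R:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if size[ri] < size[rj]:
                        ri, rj = rj, ri
                    parent[rj] = ri
                    size[ri] += size[rj]
    return max(size[find(v)] for v in range(N)) / N
```

Even at N = 800 one typically sees a much larger largest-component fraction for α = 0.6 (power-law exponent 2.2) than for α = 1.5 (exponent 4).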

Auxiliary results
We start by deriving some tools that help to approximate the probability that two points are adjacent in the graph. Let us first remark that the shape of the set of all points of D_R within distance R of a given point varies greatly depending on the type of the point. See Figure 2 for a depiction.
In particular, when the type of the point is small, the set of points of D_R at distance at most R from it resembles a "long and very thin balloon" in the native model. The following lemma is crucial to many computations in the paper.

Lemma 2.1. For any ε > 0 there exist an N_0 > 0 and a c_0 > 0 such that for any N > N_0 and u, v ∈ D_R with t_u + t_v < R − c_0 the following hold:
(i) if θ_{u,v} ≤ 2(1 − ε)ν e^{(t_u + t_v)/2}/N, then d(u, v) < R;
(ii) if θ_{u,v} ≥ 2(1 + ε)ν e^{(t_u + t_v)/2}/N, then d(u, v) > R.

Proof. We begin with the hyperbolic law of cosines:

cosh(d(u, v)) = cosh(r_u) cosh(r_v) − cos(θ_{u,v}) sinh(r_u) sinh(r_v) = cosh(r_u − r_v) + (1 − cos(θ_{u,v})) sinh(r_u) sinh(r_v).

Since sinh(r) = (1 + O(e^{−2r})) e^r/2, the right-hand side of the above becomes

cosh(r_u − r_v) + (1 − cos(θ_{u,v})) (1 + O(e^{−(2R−(t_u+t_v))})) e^{2R−(t_u+t_v)}/4.   (2.1)

Note that cosh(r_u − r_v) = cosh(t_u − t_v) ≤ e^{R−c_0}/2, since |t_u − t_v| ≤ t_u + t_v < R − c_0. Using that 1 − cos(θ) = 2 sin²(θ/2) together with the Taylor expansion of the sine function around 0, and recalling that ν/N = e^{−R/2}, the upper bound on θ_{u,v} in part (i) yields

cosh(d(u, v)) ≤ e^{−c_0} e^R/2 + (1 − ε)² (1 + o(1)) e^R/2,   (2.2)

for N sufficiently large and c_0 such that e^{−c_0} < ε². As the right-hand side of (2.2) is smaller than cosh(R) = (1 + o(1)) e^R/2, it follows that d(u, v) < R. To deduce the second part of the lemma, we consider a lower bound on (2.1) using the lower bound on θ_{u,v}: using again that 1 − cos(θ) = 2 sin²(θ/2), we deduce that for N and c_0 large enough,

cosh(d(u, v)) ≥ (1 + ε)² (1 − o(1)) e^R/2.

Thus, if d(u, v) ≤ R, the left-hand side would be smaller than the right-hand side, which would lead to a contradiction.
Motivated by Lemma 2.1, we will define regions approximating the disk of radius R around a given point u. We call these bounding regions the inner and outer tube of the point u.
Definition 2.2. For a given point u ∈ D_R and for ε and N_0 as in Lemma 2.1, we call the sets

T_in(u) := {p ∈ D_R : t_u + t_p < R − c_0 and θ_{u,p} ≤ 2(1 − ε)ν e^{(t_u + t_p)/2}/N},
T_out(u) := {p ∈ D_R : t_u + t_p ≥ R − c_0 or θ_{u,p} ≤ 2(1 + ε)ν e^{(t_u + t_p)/2}/N}

an inner tube and an outer tube of u, respectively.

Although by our definition there is no unique inner and outer tube, we will talk of the inner and the outer tube. These are always understood to be taken for suitably chosen ε and N_0 in the given context. Lemma 2.1 shows that, for sufficiently large graphs, all points in the inner tube of a typical vertex u (that is, a vertex of low type) are at distance at most R from u, and all vertices at distance at most R from u are within the outer tube of u. We will use outer and inner tubes to derive stochastic bounds on the size of a component.
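The tube conditions can be compared directly against the exact distance computed from the cosine rule; in the following sketch the function names and the specific parameter values are our own choices:

```python
import math

def tube_angle(t_u, t_p, N, nu, eps, inner=True):
    """Angular threshold of the inner ((1 - eps)) or outer ((1 + eps)) tube."""
    f = (1 - eps) if inner else (1 + eps)
    return 2 * nu * f * math.exp((t_u + t_p) / 2) / N

def within_R(r1, r2, g, R):
    """Is the hyperbolic distance at most R?  Compared at the cosh level
    via the law of cosines, for points at radii r1, r2 and relative angle g."""
    ch = math.cosh(r1) * math.cosh(r2) - math.cos(g) * math.sinh(r1) * math.sinh(r2)
    return ch <= math.cosh(R)
```

For example, with N = 10^6, ν = 1, ε = 0.1 and two points of type 5, the inner-tube angle indeed puts the points at distance below R, while the outer-tube angle puts them beyond R.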
During our proofs we will also use the following lemma, which states that every vertex in the neighbourhood of a given vertex u will still be connected to u when we increase t_u.

Lemma 2.3. Let u, v, w ∈ D_R be such that θ_{u,v} = 0, t_v ≥ t_u and d(u, w) ≤ R. Then d(v, w) ≤ R.

Proof. Using basic properties of the geometry of the hyperbolic plane, it follows that the geodesic between two points of radius at most R uses only points of radius at most R. Also note that the geodesic between the origin and any point is the ray from the origin through that point. Let O be the origin, and consider points u, v, w as in the statement of the lemma. Consider some isometric mapping which maps w to O. Since w ∈ D_R, we have d(O, w) ≤ R, and by the requirements of the lemma, d(u, w) ≤ R. So O and u are within the disk of radius R around w and, applying the first observation after the isometry, so is their geodesic. Since θ_{u,v} = 0 and t_v ≥ t_u, the point v lies on this geodesic, and therefore d(v, w) ≤ R.
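The statement of the lemma can also be verified numerically on a grid of configurations: moving a point radially inward (increasing its type) at a fixed angle never disconnects it from a neighbour. A sketch (the grid resolution and the value of R are arbitrary choices of ours):

```python
import math

def hyp_dist(r1, theta1, r2, theta2):
    """Hyperbolic distance via the law of cosines."""
    g = math.pi - abs(math.pi - abs(theta1 - theta2))
    ch = math.cosh(r1) * math.cosh(r2) - math.cos(g) * math.sinh(r1) * math.sinh(r2)
    return math.acosh(max(ch, 1.0))

def check_radial_monotonicity(R=10.0, steps=8):
    """For u at (r_u, 0), v at (r_v, 0) with r_v <= r_u (so t_v >= t_u),
    and any w in D_R with d(u, w) <= R, confirm d(v, w) <= R."""
    for iu in range(1, steps + 1):
        r_u = R * iu / steps
        for iv in range(iu + 1):
            r_v = R * iv / steps
            for iw in range(steps + 1):
                r_w = R * iw / steps
                for ia in range(steps + 1):
                    th = math.pi * ia / steps
                    if hyp_dist(r_u, 0.0, r_w, th) <= R:
                        if hyp_dist(r_v, 0.0, r_w, th) > R + 1e-9:
                            return False
    return True
```

The check succeeds on every grid configuration, in line with the convexity argument in the proof.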

Theorem 1.4: the subcritical case
To show the first part of Theorem 1.4, we construct a process that exposes the angle that the component of a given vertex v covers. We already discussed that vertices that are close to the centre have a higher expected degree than those close to the periphery. The connectivity in a hyperbolic random geometric graph depends highly on the structure of the subgraph that is induced by the vertices of high type. When α > 1/2, the types of all vertices of G(N; α, ν) are bounded away from R a.a.s. This is made precise in the following lemma.
Lemma 3.1. Let α > 1/2 and let ω : N → R be any function with ω(N) → ∞ as N → ∞. Then a.a.s. every vertex of G(N; α, ν) has type at most t_0 := R/(2α) + ω(N).

The proof of this lemma can be found in [11] (Corollary 2.2). Thus, it suffices to consider vertices of type no larger than this bound. Note that all vertices then have type smaller than, and asymptotically bounded away from, R/2, since α > 1. We will consider a vertex u of type R/(2α) + ω(N) and analyse a breadth exploration process, through which we will bound the total angle of the component which contains u: if C(u) denotes the connected component u belongs to, we define

Θ(u) := max over v ∈ C(u) of θ_{u,v}.

This quantity represents the "width" of the component u belongs to from the point of view of u itself. When working with it we generally need to double it, as it only accounts for the direction of maximum extent, whereas we need to take both directions into account. Let 0 ≤ θ_c(u, v) < 2π be the angle between the points u and v, in the clockwise direction from the point u. We define a bounding path, which is a path in D_R that is not crossed by any edge. The length of a bounding path essentially bounds the number of vertices of the component in which it is rooted. In particular, if a bounding path induces a partition of D_R into two parts, one of which covers an angle of at most o(1), then a.a.s. any component in this part will be of sublinear size.

Definition 3.2. We call a sequence of points P = (p_1, p_2, ..., p_m) in D_R a bounding path for G(N; α, ν) if the following hold:
(i) The points p_1 and p_m are on the boundary of D_R, i.e. their radius is R.
(ii) Any two consecutive points p_i and p_{i+1} differ either only in their type or only in their angle, and the angles θ_c(p_1, p_i) are non-decreasing in i.
(iii) Let A ∪ B be the partition of D_R incurred by P, using radial lines to connect points that only differ in type and arcs to connect points that only differ in angle. Let B be the part containing the origin and let A contain all points on the connections.
There is no pair of adjacent vertices a ∈ A and b ∈ B in G(N ; α, ν).
Note that (ii) ensures that P does not cross itself, so it does partition the disk into two parts and (iii) makes sense. Also, for 1 < i < m and any vertex v with θ_{v,p_i} = 0 and t_v < t_{p_i}, the component of G(N; α, ν) that v belongs to covers an angle of at most θ_c(p_1, p_m).
We will now proceed with the definition of the breadth exploration process that we will use to obtain a short bounding path. Throughout this section we will need several small constants ε. We will assume that we choose one ε for all of these and require N to be large enough to satisfy all the corresponding conditions. Given some constant C > 0, let i_0 be the minimum i such that λ^i t_0 ≤ λC. We partition the disk D_R into three bands: B_0, consisting of the points of type at least t_0; B_{Cλ}, consisting of the points of type between λC and t_0, which we subdivide further into bands B^(i)_{Cλ} for i = 1, ..., i_0, where B^(i)_{Cλ} consists of the points of type between λ^i t_0 and λ^{i−1} t_0; and B_−, consisting of the remaining points. By Lemma 3.1, a.a.s. B_0 does not contain any vertices. We define two phases for our random process, one on B_{Cλ} and one on B_−. We start the process from a point u ∈ D_R with 0 < t_u ≤ t_0. We know that there exists i_u ∈ {1, ..., i_0} such that u ∈ B^(i_u)_{Cλ}. For i = 1, ..., i_0 we consider the domain of attraction A^(i)_u around u: the set of points of the band B^(i)_{Cλ} that are in the clockwise direction from u and which, by Lemma 2.1, must necessarily be within distance R of u. We define the first phase of the breadth exploration process in the clockwise direction started at u as follows. Note that the auxiliary points defined in the process do not necessarily (in fact, with probability 1 they do not) correspond to vertices of the graph.
1. v := u and Θ* := 0; let i_v be such that v ∈ B^(i_v)_{Cλ}.
2. Let j_0 be the smallest i such that A^(i)_v contains a vertex; if such an index does not exist, then go to Phase II; if j_0 ≤ i_v, then go to Step 5 (we then say that a backward jump occurs).
3. Let Θ_1 := 2ν(1 + ε) e^{(t_v + t_{j_0−1})/2}/N. Let w be the point of polar coordinates (R − t_{j_0−1}, θ_v + Θ_1).
Note that this process does not involve any points of type higher than t_0. Indeed, this is not necessary, as by Lemma 3.1 a.a.s. all vertices in V_N have types no more than t_0.
We call a single execution of Steps 2-4 of Phase I a round. A maximal series of consecutive rounds is called a cycle. Thus, if at the end of a cycle a backward jump occurs, then Phase I proceeds to Step 5, initiating a cycle starting at a point of type t_0. This ensures that no matter where the backward jump takes place, vertices that are within distance R from the new root will be covered.
The set of rounds up to the end of Phase II is called an epoch. Hence, an epoch consists of repeated cycles, whose repetitions stop with an execution of Phase II. The breadth exploration process starting at a vertex/point u is the process consisting of repeated epochs, with the initial root v being the point of type t_0 and relative angle 0 with respect to u. (Thus, in fact, the process does not start from u but at the "image" of u that has type t_0.) Recall that Θ(u) is the maximum relative angle between u and any vertex in the component that contains u. We prove the following lemma.

Lemma 3.3. For any vertex u ∈ V_N of type less than t_0, if Θ* denotes the maximum of the angles gained during the breadth exploration process started at u in the clockwise and the anticlockwise direction, then Θ(u) ≤ 2Θ*.
Proof. Using the breadth exploration process in the clockwise direction, we get a series of root points; these are the vertices at the beginning of Phase I. Let u_1, ..., u_m be the part of this series that corresponds to the last cycle, i.e. there was a backward jump before u_1 and there is no further backward jump from there on. Let û_i be the radial projection of the point u_i to type t_{u_{i+1}}. The series u_1, û_1, u_2, û_2, ..., û_{m−1}, u_m thus always alternates between changing the type and changing the relative angle, as required in condition (ii). Similarly, in the anticlockwise direction, we get the series u′_1, û′_1, u′_2, û′_2, ..., û′_{ℓ−1}, u′_ℓ. Letting û_m and û′_ℓ be the radial projections of u_m and u′_ℓ to the boundary of D_R, we get the path P = (û′_ℓ, u′_ℓ, û′_{ℓ−1}, ..., u′_1, u_1, û_1, ..., û_{m−1}, u_m, û_m). If the breadth exploration process only uses a total angle that is o(1), which is the case a.a.s., then (i) and (ii) are naturally true; if not, almost surely (with probability 1) we can push u′_1 in the clockwise direction by some small amount to fix this problem without causing further problems elsewhere (i.e. all the adjacencies of u′_1 stay the same).
To prove that P is a bounding path for G(N; α, ν), we need to show that there is no pair of vertices (v, w) such that v ∈ A, w ∈ B and v ∼ w. Assume for a contradiction that there is such an edge vw. Without loss of generality we only consider the clockwise part of the series (where, for convenience, we set û_0 = u_1). But as v ∈ A, we have t_v ≤ t_{u_i}, so by Lemma 2.3 and the fact that decreasing angles decreases the distance, we have that u_i is adjacent to w. By the choice of u_{i+1} in the breadth exploration process, we thus have θ_c(p_1, w) ≤ θ_c(p_1, u_{i+1}) ≤ θ_c(p_1, u_m) and t_w < t_{u_{i+1}}, so w ∈ A, a contradiction. Note that i < m, since as u_i is adjacent to w the breadth exploration process cannot have stopped at u_i.
So, using the breadth exploration process twice, we indeed find a bounding path. In particular, the angle gained in both directions gives a slice of the disk that contains the entire component of u.
We now want to bound from above the angle that can be gained during the execution of the process. Note that increasing the type of a vertex u keeps intact all the edges incident to u. Thus, if u′ is a vertex replacing u, of type t_{u′} > t_u and with relative angle 0 to u, we have Θ(u′) ≥ Θ(u). This justifies the choice of the type in the following lemma.

Lemma 3.4. Let u be a vertex of type t_0 = R/(2α) + ω(N). Then, with probability 1 − o(N^{1/α−1}), we have Θ(u) = O(R² log² R · N^{1/α−1}).

Proof. Let us consider the breadth exploration process started at a vertex u having type t_0 = R/(2α) + ω(N). For an ε > 0 we let T_ε denote the first round at the end of which Θ* ≥ ε, if there is such a round; otherwise T_ε = ∞. We also denote by u_0(t) the root vertex at the beginning of the t-th round and let i_{u_0} denote the index of the band this vertex belongs to. We will first bound from below the probability that the exploration process does not backtrack during the t-th round. Let B^(i_{u_0})_t be the indicator random variable that is equal to 1 if and only if backtracking does occur during the t-th round, assuming that the root vertex is in band i_{u_0}.

Claim 3.5. For ε ∈ (0, 2π), let t < T_ε. There exists a constant K = K(α, ν) > 0 such that for any N that is sufficiently large we have

Pr(B^(i_{u_0})_t = 0) ≥ exp(−K e^{−(ε/2) t_{i_{u_0}}}).

Proof of Claim 3.5. Let us write u_0 = u_0(t). For t < T_ε we give a stochastic upper bound on the number of vertices that fall into the domains of attraction at or below the band of u_0. Hence, we will be able to give a lower bound on the probability that this region is empty; in other words, we will bound from below the probability that no backtracking occurs during the t-th round. Let N_t denote the number of vertices that have not been exposed at the beginning of the t-th round, and let p^(i_{u_0})_t be the probability, obtained through Lemma 1.3, that any one of them belongs to this region. Hence, the number of vertices which during round t fall into the region is binomially distributed with parameters N_t, p^(i_{u_0})_t. In turn, this is stochastically bounded from above by a binomially distributed random variable with parameters N, p^(i_{u_0})_t. Note also that if the number of vertices that fall into the union of the A^(j)_{u_0} for j ≤ i_{u_0} is positive, then
backtracking occurs. Hence, the probability of no backtracking during round t is at least Pr(Bin(N, p^(i_{u_0})_t) = 0) = (1 − p^(i_{u_0})_t)^N. Bounding p^(i_{u_0})_t through Lemma 1.3, we obtain the asymptotic estimate Pr(Bin(N, p^(i_{u_0})_t) = 0) ≥ exp(−K′ e^{−(ε/2) t_{i_{u_0}}}), for some constant K′ = K′(α, ν) > 0 and any N sufficiently large, uniformly over all possible values of i_{u_0}. (The latter is the case since always t_{i_{u_0}} < R/2.) Taking K = 2K′, the claim now follows. Now, observe that the above claim implies that the probability of no backtracking at a certain round can become very close to 1. Indeed, note that t_{u_0} ≥ λC and, therefore, the exponent on the right-hand side of the bound obtained in Claim 3.5 can be made as close to 0 as we want, provided we choose C large enough. Moreover, if t_{u_0} is bounded from below by a function of N that increases as N → ∞, then the probability of no backtracking is in fact 1 − o(1). These observations are key to the deduction of the first part of the lemma.
We first show that, provided that Θ* is much less than ε, the number of cycles within an epoch is essentially stochastically dominated by a geometrically distributed random variable that has probability of success 1 − ε, provided that the parameter C = C(ε) is large enough. Suppose that an epoch starts with Θ* ≤ g(N), where g(N) = o(1). Recall that a cycle starts at a vertex that has type t_0 = R/(2α) + ω(N). Let T_{Cλ} denote the random variable that is the length of a cycle. We say that a cycle is successful if it exits to Phase II. Note that a cycle is successful, that is, no backtracking occurs, if and only if B^(i_{u_0})_t = 0 for all t ≤ T_{Cλ}. We will bound the probability that, conditional on Θ* ≤ g(N) at the beginning of the epoch, the number of cycles is at least R. In particular, we will show that for every ε there exists a C such that this probability is at most ε^{R−1}.

Claim 3.6. Let g(N) = o(1). For every ε > 0 there exists a C = C(ε) such that for any N sufficiently large, conditional on Θ* ≤ g(N) at the beginning of an epoch, with probability at least 1 − ε^{R−1} the total angle gained during the epoch is at most 2R log² R · N^{1/α−1}.
Proof. To bound this probability, we will repeatedly apply Claim 3.5. However, in order to do this we need to ensure that Θ does not exceed ε whenever at most R cycles have been executed.
Hence, we first need to give an upper bound on the angle that is gained during the execution of a cycle. Let Θ_{T_{C_λ}} denote this angle. Using the bound that holds for all i, together with the choice ω(N) = log log^{1/2} R, the above sum can be further bounded from above, if N is large enough. Therefore, after r <= R cycles the angle gained will be at most R log² R · N^{1/α−1} < ε, for any N that is sufficiently large. Note also that this quantity bounds the total angle that is gained during an epoch consisting of at most R cycles.
Hence, applying Bayes' rule repeatedly, Claim 3.5 yields a bound on the probability of backtracking. Let a_i := e^{−(ε/2) t_i} and note that, since t_i = λ^i t_0, we can compare a_{i−1} with a_i for i >= i_0. Thus, if C is large enough, the resulting product is small. Substituting this bound into the right-hand side of (3.2), we obtain the required estimate, choosing C large enough so that the last inequality holds. Hence, the probability that backtracking occurs before T_{C_λ} is at most ε, for C = C(ε) that is sufficiently large. In other words, the probability that the cycle is not successful, conditional on Θ <= g(N) at the beginning of the epoch, is at most ε. Therefore, the conditional probability of having R cycles during the epoch is at most ε^{R−1}.
As we pointed out above, the total angle gained during the execution of the R cycles is no more than R log² R · N^{1/α−1}. During Phase II, the angle gained is at most as much again. Hence, an epoch having at most R cycles adds at most 2R log² R · N^{1/α−1} to Θ, provided that N is sufficiently large. Now, we will show that as long as Θ has not grown too much, the probability that an epoch is the final one is asymptotically bounded away from 0. To see this, we will bound from above the probability that T⁺_v, which was defined in Step 3 of Phase II, contains at least one vertex, conditional on Θ <= ε. Under this conditioning, the number of vertices that belong to T⁺_v is stochastically bounded from above by a binomially distributed random variable with parameters N and p_{II}. Hence, the probability that T⁺_v is empty, conditional on Θ <= ε, is bounded away from 0, provided that ε < π and N is sufficiently large. Now, we set E := −((1 − 1/α)/ln δ) R. We will finish the proof by showing that the probability that fewer than E epochs take place, each having at most R cycles, is 1 − o(N^{1/α−1}). This together with Claim 3.6 implies that with this probability the total angle gained during the process is at most 2ER log² R · N^{1/α−1}.
Indeed, the probability of having E epochs, each one having at most R cycles, is bounded appropriately, and the resulting error probability is o(N^{1/α−1}). Also, arguing as in the proof of Claim 3.6, we deduce that the probability that there exists one among the first E epochs having more than R cycles is at most Eε^R = o(N^{1/α−1}), provided that ε is chosen small enough.
The above lemma together with Lemma 3.3 imply the following.
Lemma 3.7. For any u ∈ V_N we have Θ(u) <= 2R² log³ R · N^{1/α−1} with probability 1 − o(N^{1/α−1}).
We will now deduce the first part of Theorem 1.4 from Lemma 3.7. Let B denote the set of vertices u for which Θ(u) > 2R² log³ R · N^{1/α−1}; we call these vertices bad. Thus, Lemma 3.7 implies that E(|B|) = o(N^{1/α}). Markov's inequality in turn implies that for any δ > 0, with probability at least 1 − δ, we have |B| = o(N^{1/α}); this is the event (3.4). Assume now that G(N; α, ν) has a component C of order greater than 8R² log³ R · N^{1/α}. Hence, on the event (3.4), there is at least one (in fact, many) vertex u ∈ C in this component that is not bad.
A sector of D_R is the area between two radii of D_R whose relative angle is less than π; we call this angle the angle of the sector. Hence, since u is not bad, it turns out that there is a sector of angle at most 2R² log³ R · N^{1/α−1} which contains at least 4R² log³ R · N^{1/α} vertices (in fact, our assumption implies that it contains almost twice as many vertices as this). But the next lemma shows that this is not the case with probability 1 − o(1), and the first part of Theorem 1.4 follows.
Proof. Consider a partition P of D_R into 2π/θ(N) sectors of angle θ(N). If D_R contains a sector as in the statement of the lemma, then one of the sectors in P must contain at least N θ(N) vertices. Now, note that the number of vertices in a sector σ ∈ P, which we denote by N_σ, is binomially distributed with parameters N and θ(N)/2π. Hence, E(N_σ) = N θ(N)/2π and, since N θ(N) → ∞, applying a Chernoff-type bound we deduce that Pr(N_σ > N θ(N)) = exp(−Ω(N θ(N))).
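The sector-count concentration in this proof is easy to illustrate numerically; the sketch below uses uniform angles and illustrative values of N and the number of sectors, not the quasi-uniform distribution of the model:

```python
import random

# Partition the circle of angles into sectors; the number of points in a
# fixed sector is Bin(N, theta/(2*pi)) with mean N * theta / (2*pi).
random.seed(1)
N = 100_000
num_sectors = 100  # plays the role of 2*pi / theta(N)
counts = [0] * num_sectors
for _ in range(N):
    counts[int(random.random() * num_sectors)] += 1

assert sum(counts) == N
mean = N / num_sectors
# Chernoff-type concentration: no sector holds twice its expected count.
assert max(counts) < 2 * mean
```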
Therefore, using Markov's inequality we obtain the claimed estimate, which concludes the proof of the lemma.

Theorem 1.4: the supercritical case
In this section, we will show the second part of Theorem 1.4. Namely, we shall assume that α < 1 and, with |L₁| denoting the order of a largest component of G(N; α, ν), we will show that there exists a c = c(α, ν) such that a.a.s. |L₁| > cN.

Proof overview
We will consider a set of homocentric bands in D_R. The innermost band consists of those vertices of type at least R/2. Note that the subgraph of G(N; α, ν) that is induced by the vertices which belong to this part of D_R is a clique. This follows from the triangle inequality, which implies that the distance between any two such vertices is at most R.
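This clique property can be checked numerically with the hyperbolic law of cosines (curvature −1): a vertex of type t sits at radial coordinate r = R − t <= R/2, and the triangle inequality through the origin gives d <= r₁ + r₂ <= R. A sketch with uniformly sampled radii (illustrative, not the model's quasi-uniform law):

```python
import math
import random

def hyp_dist(r1, th1, r2, th2):
    # Hyperbolic law of cosines (curvature -1):
    # cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(angle difference)
    dth = math.pi - abs(math.pi - abs(th1 - th2) % (2 * math.pi))
    x = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dth)
    return math.acosh(max(x, 1.0))

random.seed(0)
R = 20.0
# Type t corresponds to radial coordinate r = R - t, so type >= R/2 means r <= R/2.
pts = [(random.uniform(0.0, R / 2), random.uniform(0.0, 2 * math.pi)) for _ in range(50)]
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        d = hyp_dist(pts[i][0], pts[i][1], pts[j][0], pts[j][1])
        # Triangle inequality through the origin: d <= r1 + r2 <= R.
        assert d <= pts[i][0] + pts[j][0] + 1e-9
        assert d <= R + 1e-9
```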
The remaining bands are determined by a sequence of numbers t_i, with t_0 = R/2 and t_i defined by the recursion (4.1), where now λ := 2(α − 1/2); we assume that α > 1/2. We shall assume that i < T, where T = T(α, ν, ε) and ε is a positive real number which we will assume to be small enough for the purposes of our calculations. We will determine T in Subsection 4.2. Observe that (4.1) implies a lower bound on t_i, provided that t_{i−1} is large enough. We denote by 𝒩_i the set of vertices which belong to the ith band B_i, for i >= 0, and let N_i denote its size. Furthermore, for i > 0 we denote by 𝒩′_i the set of vertices in B_i that have at least one neighbour in 𝒩′_{i−1}; here we set 𝒩′_0 = 𝒩_0. We say that these belong to the active area of B_i. This definition, together with the fact that a clique is formed in 𝒩_0, implies that the graph induced by the union of the 𝒩′_i, for 0 <= i <= T, is connected and contains Σ_{i=0}^T |𝒩′_i| vertices. Our aim is to show that a.a.s. this quantity is linear in N. More specifically, we show that the number of vertices in 𝒩′_i stochastically dominates the number of vertices in a subset of B_i that has arc Θ_i. This makes working with sizes a lot easier, as implications for the size of 𝒩′_i can be deduced from the angle Θ_{i−1}; see (4.3), whose proof can be found in Section 4.4. Next we argue (cf. Section 4.4) that, conditional on N′_{i−1} as above and Θ_{i−1} > π, with high probability Θ_i is at least a certain fraction of Θ_{i−1}.
To derive the stochastic domination we will assume that the following conditions hold:
• any vertex of type t with t >= R/2 is taken to have type R/2;
• any vertex of type t with t_i < t <= t_{i−1} is taken to have type t_i.
Lemma 2.3 ensures that, for a vertex v ∈ B_{i−1}, the area consisting of all points that belong to B_i and have distance at most R from v becomes smaller if the type of v, within the bounds of B_{i−1}, decreases. Now, using the first part of Lemma 2.1, we can use the inner tubes to obtain a further lower bound on this area. In particular, we will consider only vertices that fall within the inner tube of v ∈ 𝒩′_{i−1}, assuming that the type of v is t_{i−1}, and deduce a stochastic lower bound on the size of 𝒩′_i.
Using concentration arguments we will show that a.a.s. N′_i is bounded from below, and for some C > 0 it will then follow by (4.4) that |L₁| >= cN for some c = c(α, ν).

The definition of T
Firstly, we will require that t_i > B_1, where B_1 = B_1(α, ν, ε) > e is large enough so that (4.8) holds. We use (4.8) in order to deduce (4.9), where γ = γ(α, ε) > 0 will be specified later. Thus, by (4.8) we deduce that T = O(log R). Hence, (4.9) implies that (4.12) holds for any i < T. The definition of T also implies, for i < T, the bound required above. Recall that this ensures that the second term on the left-hand side of (4.1) is positive, and thereby (4.2) holds for all i < T.
Secondly, we will require that (4.13) holds.
As we shall see in the next section, this will imply the recursive lower bound on Θ_i. As Θ_0 = 2π, we need the product on the right-hand side of the above to be at least 1/2. To bound the relevant sum, we will give an upper bound on the difference t_j − t_{j−1}; together with the third condition in (4.10), this yields the required estimate.

Some concentration results
In this sub-section, we will show that the number of vertices that belong to each band is almost determined. Note that by Lemma 1.3 (since t_i <= R/2, for all i >= 0) we have (4.17) uniformly for all i. (Here we take t_{−1} = R.) We need to show that this quantity grows fast enough as a function of N. To see that this is indeed the case, note that since t_i <= R/2 it is at least of order N^{1−α}, which tends to infinity as N grows, since α < 1.
Hence, applying a standard Chernoff bound we deduce that with probability 1 − exp(−Ω(N^{1−α})) the size N_i is concentrated around its mean. Hence, since T = O(ln R) (cf. (4.8)), a simple first-moment argument shows that with probability 1 − exp(−Ω(N^{1−α})) this holds for all 0 <= i <= T. In what follows, we shall condition on this event, which we denote by C_N.
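The Chernoff bound used here admits a short generic sketch; the binomial parameters below are illustrative, not those of the proof:

```python
import math
import random

def chernoff_lower_tail(mu, delta):
    # Standard Chernoff bound: Pr(Bin <= (1 - delta) * mu) <= exp(-delta^2 * mu / 2).
    return math.exp(-delta * delta * mu / 2.0)

random.seed(7)
n, p, delta = 2_000, 0.05, 0.5
mu = n * p
trials = 200
hits = 0
for _ in range(trials):
    x = sum(1 for _ in range(n) if random.random() < p)
    if x <= (1 - delta) * mu:
        hits += 1

# The empirical lower-tail frequency is dominated by the Chernoff bound.
assert hits / trials <= chernoff_lower_tail(mu, delta)
```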

The inductive step
Throughout this section, we will work with angles θ^{(i)}, for 0 < i <= T, and θ^{(0)} = 2(1 − ε). Assume that there are N′_{i−1} vertices in the active area of B_{i−1}. For a vertex v ∈ 𝒩′_{i−1}, let S(v) denote the arc of angle θ^{(i)} around the projection of v on the circle of radius R − t_i (in other words, the set of points of type t_i); we denote this circle by C_i. We call this the shadow of v. Let S_i denote the union of the shadows of the vertices in 𝒩′_{i−1}; this is the active area of the band B_i. Let Θ_i be the total angle of S_i.
We will determine Θ_i conditional on Θ_{i−1}, assuming that we have not specified which vertices among those in 𝒩_{i−1} belong to S_{i−1}. Consider the projection of S_{i−1} on the circle C_i, which we again denote by S_{i−1} by a slight abuse of notation. Note that this projection is the disjoint union of arcs, each of them having angle at least θ^{(i−1)}. Moreover, the total angle covered by it is Θ_{i−1} as well.
Assuming that the vertices of 𝒩′_{i−1} all have type t_{i−1}, we expose their positions on C_{i−1} and consider the shadows of those points that fall into S_{i−1}. Recall that the number of these points is a stochastic lower bound on N′_{i−1}. Furthermore, since Θ_{i−1} > π, as i − 1 < T, and N′_{i−1} = Ω(N^{1−α}), as the event C_N is realised, an application of the Chernoff bound implies concentration with probability 1 − o(1). We will show that, conditional on N′_{i−1} as above and Θ_{i−1}, with high probability Θ_i is at least a certain fraction of Θ_{i−1}.
Proof. To show this statement, we divide each subinterval of S_{i−1} into segments of a certain prescribed angle. It is possible that each of these subintervals contains at least one segment of smaller angle. However, each subinterval of S_{i−1} contains many segments of the prescribed angle. We denote by P the collection of all those segments. We will use a bounded-differences concentration inequality in order to show that with high probability most of them are contained in S(v) for some v ∈ 𝒩′_{i−1}.
Firstly, let us bound from below the size of P. Recall that each subinterval of S_{i−1} has angle at least θ^{(i−1)}. Therefore, there are at most Θ_{i−1}/θ^{(i−1)} subintervals. Each such subinterval contains at most one segment of angle less than the prescribed one. Since the event C_N is realised, (4.18) applies, and together with (4.22) this yields a lower bound on |P|. It is now immediate that the total angle covered by the union of the segments in P, which we denote by Θ_P, satisfies the corresponding lower bound. To be more precise, recall that each subinterval has angle at least θ^{(i−1)}, and since the event C_N is realised we have, uniformly for all i < T, θ^{(i−1)} larger than θ^{(i)} times the segment angle. For a segment σ ∈ P, let E_σ denote the event that the segment σ is not covered by S(v) for any v ∈ 𝒩′_{i−1}. The probability that the segment is indeed covered for a certain v ∈ 𝒩′_{i−1} is at least of order θ^{(i)}. Now, on the event C_N, (4.18) and (4.20) give the required estimate, and by (4.1) this, substituted in (4.26), implies the bound on Pr(E_σ). Let P′ denote the subset of segments of P that are covered by S(v) for some v ∈ 𝒩′_{i−1}. For any vertex v ∈ 𝒩′_{i−1}, S(v) covers at most 2θ^{(i)} divided by the segment angle many segments, by Lemma 2.1. Thus, changing the position of one vertex in 𝒩′_{i−1} changes the number of covered segments by at most this quantity. Hence, applying the Hoeffding-Azuma concentration bound (cf. [16], Theorem 2.25, p.
37) we deduce concentration of |P′|. We will now show the required lower bound. We estimate the relevant quantity up to absolute multiplicative constants; we write A ≳ B to denote that A/B is bounded from below by a constant that depends only on α, ν and ε. To derive the lower bound we will need a stronger upper bound on t_i in terms of t_{i−1}, which follows from (4.1). We bound the right-hand side of the resulting expression from below and, using (4.1) again and substituting this bound into the last expression of (4.32), we deduce the following: there exists a constant γ = γ(ε) > 0 such that for all N sufficiently large and for all i = 1, . . ., T the desired inequality holds.
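The bounded-differences step has a simple generic form: if resampling any one of n independent vertex positions changes the number of covered segments by at most c, then McDiarmid's inequality gives Pr(|f − Ef| >= t) <= 2 exp(−2t²/(nc²)). The constants below are illustrative stand-ins, not the paper's:

```python
import math

def mcdiarmid_bound(t, n, c):
    # McDiarmid's inequality for a function with bounded differences c in
    # each of n independent coordinates: Pr(|f - Ef| >= t) <= 2 exp(-2 t^2 / (n c^2)).
    return 2.0 * math.exp(-2.0 * t * t / (n * c * c))

# Illustrative stand-ins: n vertex positions, each resampling moving the
# number of covered segments by at most c (playing the role of 2*theta^(i)
# over the segment angle).
n, c = 10_000, 3.0
bound = mcdiarmid_bound(t=1_000.0, n=n, c=c)
assert bound < 1e-9
```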

Proof of Theorem 1.4
For i = 1, . . ., T let E_i denote the event that the required lower bound on Θ_j holds for all 0 <= j <= i. Note that, conditional on C_N, the latter inequality together with (4.17) and (4.20) implies the corresponding bound on N′_i for any N sufficiently large. Now, by (4.21) and Lemma 4.2, we can bound the conditional probability of E_i given E_{i−1}. But as the sequence {t_i}_{i=1,...,T} decreases exponentially fast (cf. (4.8)), we have T = O(ln R). Hence, since the events {E_i}_{i=1,...,T} form a decreasing sequence, we deduce a lower bound on the probability of E_T. On the event E_T, we have Θ_i > π for all i = 1, . . ., T (cf. (4.14)). Therefore, a linear number of vertices is contained in the component we constructed, which in turn implies the theorem.

Proof of Theorem 1.5

In the critical case, that is, when α = 1, the probability of having a giant component turns out to depend on the value of ν. It will be convenient to work with the Poisson model P(N; α, ν), in which the vertex set is the set of points of a Poisson point process inside D_R. In Lemma 5.1 below, we show that for certain graph properties, if they hold a.a.s. in P(N; α, ν), then this is also the case in the G(N; α, ν) model.

Poissonisation
Sometimes it is easier to work in the setting where, instead of N (fixed) random points, our vertex set consists of Po(N) points on D_R, still sampled independently according to (1.1). We denote the resulting graph by P(N; α, ν). The benefit is that in this way our vertex set forms a Poisson point process (see for example [17]). In particular, the numbers of points in any finite collection of pairwise disjoint measurable subsets of D_R are independent Poisson-distributed random variables. The term asymptotically almost surely (a.a.s.) has the same meaning in this context as before. We prove the following lemma, which allows us to transfer results from the Poisson model to the G(N; α, ν) model. Let A_n denote a set of graphs on V_n := {1, . . ., n} that is closed under automorphisms. We call a family A = (A_n)_{n>=1} (vertex-) non-decreasing if adding vertices to a graph in the family yields a graph that is again in the family.
Lemma 5.1. Assume that α > 0 is fixed. Let A be a (vertex-) non-decreasing family of graphs. For N large enough we have P(G(N; α, ν) ∉ A) < 4 P(P(N; α, ν) ∉ A). The same holds if A is (vertex-) non-increasing.
Proof. Denote by E_Po and E the events that P(N; α, ν) ∉ A and G(N; α, ν) ∉ A, respectively. We condition on the number of points of the Poisson process and use, in the last step, that since A is non-decreasing, the conditional probability of E_Po given the number of points is monotone in that number. Let us also note that P(E_Po | Po(N) = N) = P(E). Thus, P(E_Po) >= P(E)/4, where the last inequality holds for N large enough (by an application of, say, the central limit theorem). The second part of the lemma follows similarly, bounding the sum by taking only the terms where the number of points is at least N.
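The factor 4 comes from the fact that P(Po(N) >= N) (and likewise P(Po(N) <= N)) tends to 1/2 for large N, so its reciprocal is safely below 4. A quick check from the exact Poisson pmf:

```python
import math

def pr_poisson_at_least_mean(N):
    # Pr(Po(N) >= N), from the exact pmf via log-factorials (tail truncated at 4N).
    logs = [k * math.log(N) - N - math.lgamma(k + 1) for k in range(N, 4 * N)]
    return sum(math.exp(v) for v in logs)

# Pr(Po(N) >= N) tends to 1/2, so conditioning on it costs less than a factor 4.
for N in (10, 100, 1000):
    q = pr_poisson_at_least_mean(N)
    assert q > 0.25
    assert 1.0 / q < 4.0
```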
This implies that if P(P(N; α, ν) ∉ A) = o(1), then P(G(N; α, ν) ∉ A) = o(1). The families we consider will be A = {G : G has a component of order at least C} or B = {G : G has no component of order at least C}, where C does not depend on the number of vertices in G. In particular, in the second part of Theorem 1.5, having chosen the parameter N, we consider the family {G : G has a component of order at least N/65}, where |G| = N is not required.

The subcritical case
We prove the first part of Theorem 1.5 for P(N; α, ν) by contradiction, assuming that there is a component of order at least N/log log R. We will use a smooth breadth exploration process starting at a point v_0 of type at most log log R, a continuous-type version of the process defined in Section 3. The total angle explored is at most 5 exp(log R − R/2) log² log R. (This justifies the name V_small: the total angle of the component of v is bounded by this expression, which is a decaying function of N.) Now, Lemmas 5.2 and 5.3 imply that there exists a positive constant c such that for a vertex v ∈ V_low the analogous bound holds. Hence, by Markov's inequality, the claimed bound holds a.a.s.
Also, using Lemma 1.3 and the concentration of the Poisson distribution, we can deduce that a.a.s. the corresponding bound holds as well.
Now, by Lemma 3.8, a.a.s. every vertex in V_small is contained in a component with at most 10N exp(log R − R/2) log² log R < 10νR log² log R vertices. Hence, any component of large order must be induced by vertices in V_large ∪ V_high, whereby |L₁| < N/log log R. To prove the lemmas, we simplify the process in a way that allows arbitrary real types, dropping the requirement 0 <= t <= R. We use the cdf and pdf in (5.1) for the type T_i of v_i given the type t_{i−1} of v_{i−1}.
Claim 5.4. Lemmas 5.2 and 5.3 hold if they hold for the extended and simplified distribution of types in Equation (5.1), where not finding a next neighbour in the original distribution corresponds to a negative type in the extended one.
Proof. We prove the result by showing that the given cdf is a lower bound on the actual cdf at any point. This means there is a coupling in which any vertex of the actual distribution is coupled with a vertex of higher or equal type and the same angle in the simplified distribution. We later prove that the amount by which we change the type does not depend on the type of the active vertex for the vertices that we consider, so a higher type cannot have a negative influence on the result we want to prove. If the type of a vertex is less than log R, we can use Lemma 2.1 with ε = 0.009 to get a bound on the relative angle for possibly adjacent vertices up to type R − 2 log R < R − log R − c_0(0.009) for sufficiently large N. Note that the expected number of vertices of type larger than R − 2 log R is o(1); thus the probability of having such a vertex is o(1) and we can condition on no such vertex existing. We use outer tubes to estimate the expected value of the number of neighbours of type at least t of a vertex of type t_0 <= log R, using the outer tubes for ε = 0.009 as an upper bound, taking N large enough and using Lemma 1.3. With this, as we are using the Poisson distribution, we obtain a cdf for the distribution of the next type, given that we are at step i with type t_0. To prove the two lemmas we need to introduce some notation. Instead of looking at the types of the vertices at some step, we analyse the jump J_i = T_i − T_{i−1} at each step, the difference in types from one step to the next. This makes sense as F_{T_i}(t) only depends on the difference of T_i and t; each jump is distributed with cdf F_J. Starting at a vertex of type T_0 (= log log R), we couple the process with a sequence of independent random variables having F_J as their cdf. The type T_i is thus coupled with the sum of independent copies of the jump.
We are now ready to prove Lemma 5.3.
Proof of Lemma 5.3. We first calculate, for s > 0, the expectation Ee^{sJ_k}. Given a starting vertex of type log log R, we calculate, for large N, the probability of reaching type at least log R in any step i < log log R, bounding it via Ee^{sJ_k} and Markov's inequality. Choosing 0 < s < 1/2 arbitrarily, we get some constant C > 1 such that Ee^{sJ_k} = C. Thus the relevant probability decays geometrically with rate some 0 < c = c(s) < 1. With this, we can use the union bound to bound the probability that the smooth breadth exploration process has type at least log R at one of the first log² log R steps. We use the same technique to prove Lemma 5.2. Recall (cf. Lemma 5.5) that two vertices u and v are connected if their relative angle is at most 1.92 e^{(t_u + t_v − R)/2}, provided that N is large enough.
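The exponential-moment argument is the standard Chernoff bound for a random walk: Pr(S_i >= a) <= e^{−sa}(Ee^{sJ})^i, summed over the first few steps. A sketch with an illustrative jump law (a Gaussian with negative drift, not the paper's F_J):

```python
import math
import random

def walk_tail_bound(level, steps, s, mgf):
    # Union bound over steps plus Markov applied to exp(s * S_i):
    # Pr(S_i >= level for some i <= steps) <= sum_i exp(-s * level) * mgf**i.
    return sum(math.exp(-s * level) * mgf**i for i in range(1, steps + 1))

# Illustrative jump law: J ~ Normal(-1, 1), so E exp(sJ) = exp(-s + s^2 / 2).
s = 0.5
mgf = math.exp(-s + s * s / 2.0)
bound = walk_tail_bound(level=50.0, steps=30, s=s, mgf=mgf)

random.seed(3)
exceed = 0
for _ in range(2000):
    S = 0.0
    for _ in range(30):
        S += random.gauss(-1.0, 1.0)
        if S >= 50.0:
            exceed += 1
            break
# The empirical frequency of ever reaching the level is dominated by the bound.
assert exceed / 2000 <= bound
```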

The vertices in C^{(i)}_j have type at least t_{i−1} = (i − 1) log 2, and the relative angle of a vertex v in C^{(i)}_j to a vertex u in the same cell is small enough, so v and u are adjacent. The same bounds on the maximum relative angle and minimum type hold for vertices u in C^{(i)}_j and v in C^{(i+1)}_{j/2}, so u and v are adjacent. By a simple change of variables we get all the desired adjacencies.
We define two auxiliary graphs, a blue graph G_b and a red graph G_r. The vertices of the blue graph are those cells that contain at least one vertex, whereas the vertices of the red graph are those that contain no vertex. Two vertices of G_b are connected if the corresponding cells share an edge. This means that vertices of P(N; α, ν) in the cells corresponding to adjacent vertices in G_b are themselves adjacent, by Lemma 5.5. Two vertices of G_r are connected if their corresponding cells share at least one point. This corresponds to the same adjacency as in G_b but with added "diagonal" edges. These adjacencies are illustrated in Figure 4. We denote by G_c the union of the two graphs. For the graph resulting from the example in Figure 3, see Figure 5. Whenever necessary we will refer to the vertices of the graphs as cells. A blue component is a component of G_c that consists of blue vertices. A red path is a path in G_c that consists of red vertices. These are the main structures we use, but we also use the red/blue notation for other structures. Because of the adjacency rules, a blue component is always surrounded by a red path or a collection of red paths, together with the periphery and the inside of the disk. We are now interested in the probability of a vertex being blue or red. Note that each cell is red with probability at most e^{−ν/(5π)}. This means the probability of having a red path of length at least L is o(1).
With this we are able to make statements about the structure of G_c. We prove that a.a.s. there is a blue lollipop L_b: a blue cycle surrounding the origin of D_R that contains a cell in B_1 or is connected to such a cell by a blue path (see Figure 6). We call the relevant cell of B_1 the base of the lollipop.
• There is an index i such that there are two cells c_1 and c_2, both in the band B_i, that are connected on the path via a subpath of length at least 2 using only cells in the bands B_1, . . ., B_i. In this case we can create a new subpath of the same length that uses only cells in the band B_{i+1} as inner vertices, leading to a new cell c′_2. Because the number of cells doubles in each band, this new subpath must cover more cells than the old one, and thus we can create a path of length ℓ that covers more cells, a contradiction.
So the optimal path has the desired form, as shown in Figure 7. Note that each cell in band B_i covers 2^{i−1} cells in band B_1. If ℓ is odd, this yields (ℓ − 1)/2 upward/downward edges each and one edge (2 cells) staying at the same level, yielding Σ_{i=1}^{(ℓ−1)/2} 2^{i−1} + 2 · 2^{(ℓ−1)/2} = 3 · 2^{(ℓ−1)/2} − 1 < 2^{ℓ/2+2} covered cells. If ℓ is even, the path uses ℓ/2 upward/downward edges each and no edge (one cell) staying at the same level, yielding Σ_{i=1}^{ℓ/2} 2^{i−1} + 2^{ℓ/2} = 2^{ℓ/2+1} − 1 < 2^{ℓ/2+2} covered cells. Let ℓ_i be the length of the red path connecting c_i and r_i. We define independent random variables K_i distributed as Geom(1 − 8e^{−ν/(5π)}). Note that this independence can only make the bound larger, as no two of the corresponding paths can meet, so a path cannot use anything that has been exposed already by a different path. Also, the number of available next cells at any step can only go down if we consider the dependent case. Because every red cell has at most 8 neighbours and every cell is red with probability at most e^{−ν/(5π)}, the length ℓ_i is stochastically bounded from above by K_i. Also, by Claim 5.9(ii), R_i is stochastically bounded by 2^{ℓ_i/2+2}, which in turn is stochastically bounded from above by 2^{K_i/2+2}. We denote the latter by Y_i.
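The domination argument above reduces to simple arithmetic: each red cell has at most 8 neighbours and is red with probability at most e^{−ν/(5π)}, so once 8e^{−ν/(5π)} < 1 both the expected number of long red paths and E[2^{ℓ/2}] under the geometric domination are controlled. A sketch with the value ν = 20π from the theorem:

```python
import math

# Each cell is red with probability at most p_red = exp(-nu / (5*pi)), and a
# red cell has at most 8 neighbours, so the expected number of red paths of
# length l from a fixed cell is at most (8 * p_red) ** l; the path length is
# stochastically dominated by Geom(1 - 8 * p_red) once 8 * p_red < 1.
nu = 20 * math.pi
p_red = math.exp(-nu / (5 * math.pi))  # equals e^{-4}
assert 8 * p_red < 1  # 8 e^{-4} is about 0.147

def expected_long_paths(length):
    return (8 * p_red) ** length

assert expected_long_paths(20) < 1e-16

# A path of length l covers at most 2**(l/2 + 2) cells (Claim 5.9(ii)); with
# l dominated by the geometric variable, E[2**(l/2)] is finite because
# sqrt(2) * 8 * p_red < 1.
assert math.sqrt(2) * 8 * p_red < 1
```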
Proof of the second part of Theorem 1.5. Let ν > 20π and let G_c be defined as above. By Claim 5.8 there is a blue lollipop a.a.s. By Claims 5.9.1 and 5.10, a.a.s. the blue lollipop extends into a blue component of order at least (1.9π/(ν(e^{ν/(5π)} + 3))) N. Setting ν = 20π, this quantity is at least (1.9π/(20π(e^4 + 3))) N > N/610. Note that increasing ν can only stochastically increase the order of the largest connected component. Thus, for any such ν > 20π, the order of the largest component is a.a.s. at least N/610.

Conclusion
In this paper we have studied the component structure of the KPKVB model, a geometric model of random graphs on the hyperbolic plane, which can be viewed as a dependent version of the well-known Chung-Lu model of inhomogeneous random graphs. The model was recently introduced by Krioukov et al. [19], whose aim was to develop a geometric framework for the study of complex networks. We determine the critical parameters which control the typical component structure of such a random graph; namely, we determine the critical parameter which controls the emergence of a giant component. We show that in the regime where the random graph exhibits a power-law degree distribution with exponent greater than 3, all components are sublinear. On the other hand, when the exponent of the power law is less than 3, a component that contains a positive fraction of the vertices exists with high probability.
However, our results are not precise as far as the size of the giant component is concerned. Showing, for example, a law of large numbers seems to be a challenging problem, mainly due to the dependencies that are present in this model. Also, for the case α = 1, we conjecture that there is a critical value for ν above which the giant component emerges. (Note that in this way the ρ_i's have exactly the distribution with pdf (1.1) and the ρ′_i's have the same pdf but with α′, R′ in place of α, R.) The points used in the construction of G(N; ζ, α, ν) will be (θ_1, ρ_1), . . ., (θ_N, ρ_N), while the points used in the construction of G(N; ζ′, α′, ν) will be (θ_1, ρ′_1), . . ., (θ_N, ρ′_N).
It remains to be seen that this way we get two isomorphic graphs.
Since cosh(x) is strictly increasing for x >= 0, it follows that we must have αρ_i = α′ρ′_i.
Let us write d_{ij} for the distance between (θ_i, ρ_i) and (θ_j, ρ_j) in the curvature-ζ surface, and let d′_{ij} be defined analogously.

Figure 2: Sets of the points of distance R from certain points. (Depicted in the native model.)

4. go to Step 2, setting v := w and Θ := Θ + Θ_1;
5. let v′ be the point of polar coordinates (R − t_0, θ_v − 2ν(1 + ε) e^{(t_v + t_0)/2}/N); set v := v′;
Phase II
1. let v and Θ have their final values after the execution of Phase I;
2. let w ∈ D_R be the point of type C_λ and θ_{v,w} = 2ν(1 + ε) e^{(t_v + C_λ)/2}/N in the clockwise direction;
3. set Θ := Θ + 2ν(1 + ε) e^{(t_v + C_λ)/2}/N and let T⁺_w be the half-tube containing every point u that has relative angle at most 2ν(1 + ε) e^{(t_u + C_λ)/2}/N with w in the clockwise direction;
4. if T⁺_w is empty, then exit; else start the process again from Step 2 of Phase I with the current value of Θ.

Lemma 3.4. Let u ∈ D_R be a point having t_u = t_0. If the breadth exploration process starts at the point u, then by the end of it Θ <= R² log³ R · N^{1/α−1} with probability 1 − o(N^{1/α−1}).

N/(2 log log R) vertices of type at most T = log log R must be contained in that component, as a.a.s. at most N/(2 log log R) vertices have type larger than T.

Figure 5: G_c resulting from the graph in Figure 3.

Claim 5.8. Let ν > 20π. A.a.s. G_c contains a blue lollipop.
Proof. By Lemma 5.7, a.a.s. there is no red path of length L. The number of bands is T > L, so a.a.s. there is no red path connecting B_1 and B_T. This implies that there must be a blue cycle C surrounding the origin of D_R. Now, the number of cells in any band is at least M/2^T, which is at least e^{R/4} >= L for N sufficiently large, as L = O(R). Thus any red cycle surrounding C would have length at least L. But by Lemma 5.7 a.a.s. there is no such cycle. This implies that C must either contain a cell of B_1 or there must be a blue path P connecting C and some cell in B_1. In either case we have a blue lollipop.

Figure 7: Red path of length 9 covering the maximum number of cells.