Limit theory for isolated and extreme points in hyperbolic random geometric graphs

Given $\alpha \in (0, \infty )$ and $r \in (0, \infty )$, let ${\mathcal {D}}_{r, \alpha }$ be the disc of radius $r$ in the hyperbolic plane having curvature $-\alpha ^{2}$. Consider the Poisson point process having uniform intensity density on ${\mathcal {D}}_{R, \alpha }$, with $R = 2 \log (n/ \nu )$, $n \in \mathbb {N}$, and $\nu < n$ a fixed constant. The points are projected onto ${\mathcal {D}}_{R, 1}$, preserving polar coordinates, yielding a Poisson point process ${\mathcal {P}}_{\alpha , n}$ on ${\mathcal {D}}_{R, 1}$. The hyperbolic geometric graph ${\mathcal {G}}_{\alpha , n}$ on ${\mathcal {P}}_{\alpha , n}$ puts an edge between pairs of points of ${\mathcal {P}}_{\alpha , n}$ which are distant at most $R$. This model has been used to express fundamental features of complex networks in terms of an underlying hyperbolic geometry. For $\alpha \in (1/2, \infty )$ we establish expectation and variance asymptotics as well as asymptotic normality for the number of isolated and extreme points in ${\mathcal {G}}_{\alpha , n}$ as $n \to \infty $. The limit theory and renormalization for the number of isolated points are highly sensitive to the curvature parameter. In particular, for $\alpha \in (1/2, 1)$ the variance is super-linear, for $\alpha = 1$ the variance is linear with a logarithmic correction, whereas for $\alpha \in (1, \infty )$ the variance is linear. The central limit theorem fails for $\alpha \in (1/2, 1)$ but it holds for $\alpha \in (1, \infty )$.


Hyperbolic random geometric graphs
We study in this paper the random geometric graph on the hyperbolic plane H²_{−1}, as introduced by Krioukov et al. [16]. The standard Poincaré disk representation of H²_{−1} is the open unit disk D := {(u, v) ∈ R² : u² + v² < 1} equipped with the hyperbolic (Riemannian) metric d_H given by

ds² = 4 (du² + dv²) / (1 − u² − v²)².

Recall that the arclength of the boundary of a disk D_r ⊂ D of radius r centered at the origin is 2π sinh(r), whereas the area of D_r is 2π(cosh(r) − 1). Given a fixed constant ν ∈ (0, ∞) and a natural number n > ν, we let R := 2 log(n/ν), i.e., n = ν exp(R/2). For every α ∈ (0, ∞), consider the probability density function

ρ_{α,n}(r) := α sinh(αr) / (cosh(αR) − 1) for 0 ≤ r ≤ R, and 0 otherwise. (1.1)

Let θ be uniformly distributed on (−π, π]. When α = 1 the distribution of (r, θ) given by (1.1) is the uniform distribution on D_R under the metric d_H. For general α ∈ (0, ∞) Krioukov et al. [16] call this the quasi-uniform distribution on D_R, since it arises as the projection of the uniform distribution on a disc of hyperbolic radius R in H²_{−α²}, the hyperbolic plane having curvature −α², equipped with the correspondingly rescaled metric. Denote by κ_{α,n} the Borel measure on D_R given by

κ_{α,n}(A) := (1/2π) ∫_A ρ_{α,n}(r) dr dθ, (1.2)

where A is a Borel subset of D_R. We let P_{α,n} denote the Poisson point process on D_R with intensity measure nκ_{α,n}. Denote by (Ω_n, F_n, P_n) the probability space on which the point process P_{α,n} is realised. Let E := E_n denote expectation with respect to P := P_n. We join two points in P_{α,n} with an edge if and only if they are within hyperbolic distance R of each other. The resulting hyperbolic random geometric graph on D_R is denoted by G_{α,n} := G_{α,n,ν}. Figure 1 illustrates the disc B_R(v) of radius R centered at v ∈ D_R. An equivalent construction of G_{α,n} goes as follows. Given α ∈ (0, ∞) and r ∈ (0, ∞), let D_{r,α} be a disc of radius r in H²_{−α²}. Consider the Poisson point process having uniform intensity density on D_{R,α}.
The points are projected onto D R,1 , preserving polar coordinates, and the hyperbolic geometric graph on D R,α is created by putting an edge between the points of the Poisson point process whose projections are distant at most R. The projection of this graph onto D R,1 is G α,n,ν .
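The two-step construction above (quasi-uniform radii, uniform angles, connection within hyperbolic distance R) can be simulated directly. The following sketch samples the Poisson process by inverting the distribution function of (1.1) and builds the adjacency matrix via the hyperbolic law of cosines; the function name and the seed are our own illustrative choices, not from the paper.

```python
import numpy as np

def sample_hyperbolic_rgg(n, nu, alpha, rng=None):
    """Sample G_{alpha,n}: radii from (1.1) by inverse-CDF, uniform angles,
    edges between points at hyperbolic distance <= R (law of cosines)."""
    rng = np.random.default_rng(rng)
    R = 2.0 * np.log(n / nu)
    N = rng.poisson(n)                     # Poisson(n) points, as in n * kappa_{alpha,n}
    u = rng.uniform(size=N)
    # CDF of (1.1): F(r) = (cosh(alpha r) - 1) / (cosh(alpha R) - 1); invert it.
    r = np.arccosh(1.0 + u * (np.cosh(alpha * R) - 1.0)) / alpha
    theta = rng.uniform(-np.pi, np.pi, size=N)
    d = np.abs(theta[:, None] - theta[None, :])
    rel = np.minimum(d, 2.0 * np.pi - d)   # relative angle in [0, pi]
    cosh_dist = (np.cosh(r[:, None]) * np.cosh(r[None, :])
                 - np.sinh(r[:, None]) * np.sinh(r[None, :]) * np.cos(rel))
    adj = cosh_dist <= np.cosh(R)
    np.fill_diagonal(adj, False)
    return r, theta, adj, R
```

For instance, the number of isolated points of a sample is `np.sum(~adj.any(axis=1))`.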
When P_{α,n} is replaced by n i.i.d. random variables having density ρ_{α,n}(r)/2π, we obtain the model of Krioukov et al. [16]. The underlying hyperbolic geometry gives rise to a power-law degree distribution tuned by the parameter α, whereas the parameter ν tunes the average degree; G_{α,n} becomes increasingly connected for small α, as described in Section 1.3.
The limit constants appearing in our first and second order results (1.3), (1.5), and (1.6) are given in terms of expectations and covariances of scores involving isolated and extreme points of a Poisson point process on the upper half-plane, which appears to be a natural setting for studying such problems. Put γ := 8να/(π(2α − 1)); this constant enters the variance growth rates (1.4) for the number of isolated points. On the other hand, for all α ∈ (1/2, ∞), the expectation and variance asymptotics for the number of extreme points exhibit linear scaling in n; that is to say, the renormalization is the standard one in stochastic geometric models. As seen in Section 2, the half-plane representation facilitates the evaluation of the probability content of the intersection of two radius-R balls, which is essential to evaluating the covariance of scores at distinct points. Variance calculations are based on the covariance formula for two points. In the case of isolated points, the upper bound is based on the Poincaré inequality. The lower bound, which turns out to be tight, is based on a careful analysis of the intersection of the balls of radius R around two typical points. We refer the reader to Sections 3 and 4 for all details.
The determination of variance asymptotics for S^ext(P_{α,n}) is handled by extending stabilization methods. That is, for a given point p, we define a radius of stabilization R_ξ := R_{ξ^ext} for ξ^ext, in the sense that points at distance greater than R_ξ from p do not affect the property of p being extreme. We show that the covariance of the scores at two points (depending on their interpoint distance and heights) is rather small. By stabilization, this covariance converges to the covariance of the scores at two points in the infinite hyperbolic plane.
We show in Section 5 that though the constants describing the tail behavior of R ξ grow exponentially fast with the height of p, it is still possible to extract an explicit integral formula for the scaled variance.
To prove the asymptotic normality (1.7) we use the Poincaré inequality for Poisson functionals [17], which bounds the Wasserstein distance in terms of first- and second-order difference operators. When α ∈ (1, ∞) there is a high probability event on which these difference operators may be controlled, as vertices of high degree are fewer in this regime. For α ∈ (1/2, 1), conditionally on the likely event of having no vertex sufficiently close to the origin (equivalently, no vertex of significantly high degree), the variance is much smaller than the unconditional variance, and the convergence to the standard normal fails in this regime. Intuitively, vertices close to the origin generate radius-R balls which cover a relatively large part of D_R, and any vertex lying therein will not be isolated. Surprisingly, when α ∈ (1, ∞), this phenomenon ceases to have a significant effect and one can therefore deduce the asymptotic normality of S^iso(P_{α,n}).
The case of extreme points is different, since the extremality status of a point is influenced only by points of larger radius lying in the ball of radius R around it. This region is typically quite small, which makes the corresponding score functions almost independent. To prove the central limit theorem (1.8), we cut the plane into rectangles and define a dependency graph on the vertex set of such rectangles, so that no two points lying in non-adjacent rectangles can be connected, and we use the central limit theorem of Baldi and Rinott [3].
Remarks. (i) We are unaware of results treating the limit theory for statistics of G α,n in the regime α ∈ (1/2, 1). The paper [24] establishes variance asymptotics and asymptotic normality for the number of copies of trees in G α,n with at least two vertices, but the authors require α ∈ (1, ∞), save for when counting trees close to the boundary of D R . The methods of [24] do not appear to treat the limit theory of S iso (P α,n ) and S ext (P α,n ), as n → ∞.
(ii) It is an interesting problem whether the number of isolated points asymptotically follows a normal law when α = 1. The methods in this paper do not apply, as the estimates they yield are too weak at this critical value. To deal with this case, one likely needs a more detailed treatment of the variance of S^iso(P_{α,n}), giving not only the order of magnitude but also the multiplicative constant.
(v) It is unclear whether there exists a limiting distribution for S^iso(P_{α,n}) when α ∈ (1/2, 1). As we are going to see in Section 6, the variance of S^iso(P_{α,n}) is highly sensitive to conditioning on the high probability event of having no points within a certain radius in D_R. It is plausible that a central limit theorem holds in such a conditional space.
(vi) As elaborated upon in the next subsection, the degree distribution in G α,n follows a power-law with exponent 2α + 1 when α ∈ (1/2, ∞). In particular, the degree distribution belongs to L 2 when α ∈ (1, ∞). It would be worthwhile to better understand the connection between asymptotic normality of the number of isolated vertices and moments of the degree distribution.
Notation and terminology. We say that a sequence of events E_n ∈ F_n occurs asymptotically almost surely (a.a.s.) if lim_{n→∞} P(E_n) = 1. Given two sequences a_n and b_n of positive real numbers, we write a_n ∼ b_n to denote that a_n/b_n → 1 as n → ∞.

Degree and connectivity properties of the graph G α,n
For α ∈ (1/2, ∞), the tails of the distribution of the degrees in G_{α,n} follow a power law with exponent 2α + 1; see Krioukov et al. [16]. This was verified rigorously in [12]. For α ∈ (1/2, 1), the exponent is between 2 and 3, as is the case in a number of networks arising in applications (see for example [2] for a list of experimental observations). The paper [16] observes that the average degree of G_{α,n} is determined through the parameter ν for α ∈ (1/2, ∞). This was rigorously shown in [12]; in particular, the average degree tends to 8α²ν/(π(2α − 1)²) in probability. However, when α ∈ (0, 1/2], the average degree tends to infinity as n → ∞. Thus, in this sense, the regime α ∈ (1/2, ∞) corresponds to the thermodynamic regime in the context of random geometric graphs on the Euclidean plane [21]. In [8] the degree distribution of a soft version of this model is determined; there, pairs of points that are distant at most R are joined with some probability that is not identically equal to 1.
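The exponent 2α + 1 can be recovered from a well-known heuristic (this is not the rigorous argument of [12]): the degree of a vertex with defect radius y is approximately Poisson with mean of order γe^{y/2} (cf. the estimate µ_α(B(p)) ≤ 1_{+ε} · γe^{y(p)/2} in Section 4), while by Lemma 2.2 the defect radius is approximately Exp(α)-distributed. Then

```latex
\[
\mathbb{P}(\deg > k) \approx \mathbb{P}\big(\gamma e^{y/2} > k\big)
= \mathbb{P}\big(y > 2\log(k/\gamma)\big)
\approx e^{-2\alpha \log(k/\gamma)} = (\gamma/k)^{2\alpha},
\]
```

so the complementary distribution function decays like k^{−2α} and the probability mass function like k^{−(2α+1)}.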
When α is small, one expects more points of P α,n to be near the origin and one may expect increased graph connectivity. The paper [6] establishes that α = 1 is the critical point for the emergence of a giant component in G α,n . In particular, when α ∈ (0, 1), the fraction of the vertices contained in the largest component is bounded away from 0 a.a.s. [6], whereas if α ∈ (1, ∞), the largest component is sublinear in n a.a.s. For α = 1, the component structure depends on ν. If ν is large enough, then a giant component exists a.a.s., but if ν is small enough, then a.a.s. all components have sublinear size [6].
Apart from the component structure, the geometry of this model has also been considered. In [13] and [11] polylogarithmic upper bounds on the diameter are shown. These were improved shortly afterwards in [19], where a logarithmic upper bound on the diameter is established. Furthermore, in [1] it is shown that for α ∈ (1/2, 1) the largest component has doubly logarithmic typical distances and forms what is called an ultra-small world.

Figure 2: Samples of G_{α,n} for n = 300, ν = 3 and α = 0.7 and 2, respectively, from left to right. Average degree decreases as α increases.

Figure 3: Samples of G_{α,n} for n = 300, α = 1 and ν = 3 and 5, respectively, from left to right. Average degree increases as ν increases.

Approximating a hyperbolic ball
We characterize when two points in D_R are within hyperbolic distance R. In particular the next lemma approximates hyperbolic balls by analytically more tractable sets, reducing a statement about hyperbolic distances between two points to a statement about their relative angle. For a point p ∈ D_R, we let θ(p) ∈ (−π, π] be the angle ∠pOq between p and a (fixed) reference point q ∈ D_R (where positive angle is determined by moving from q to p in the anti-clockwise direction). For points p, p′ ∈ D_R we denote by θ(p, p′) their relative angle: θ(p, p′) := min{|θ(p) − θ(p′)|, 2π − |θ(p) − θ(p′)|}. For any p ∈ D_R recall that r(p) denotes its radius (hyperbolic distance to the origin), whereas y(p) := R − r(p), or more succinctly y := R − r. Thus for p ∈ D_R, we shall write p := (θ(p), y(p)). The hyperbolic law of cosines relates the relative angle θ(p, p′) between two points with their hyperbolic distance:

cosh(d_H(p, p′)) = cosh(r(p)) cosh(r(p′)) − sinh(r(p)) sinh(r(p′)) cos(θ(p, p′)). (2.1)

For r, r′ ∈ [0, R], we let θ_R(r, r′) be the value of θ(p, p′) ∈ [0, π] satisfying (2.1), having set d_H(p, p′) = R, for two points p, p′ ∈ D_R with r(p) = r and r(p′) = r′. As cos(·) is decreasing in [0, π], it follows that d_H(p, p′) ∈ [0, R] if and only if θ(p, p′) ≤ θ_R(r(p), r(p′)). When y(p) and y(p′) are not too large, our next result estimates θ_R(r(p), r(p′)) as a function of y(p) and y(p′). To prepare for mapping D_R to a rectangle in R × R_+ having length proportional to (1/2)e^{R/2}, we re-scale θ_R(r(p), r(p′)) by a factor of (1/2)e^{R/2}. The following lemma appears in a stronger form in [9]. Here and elsewhere we put H := 4 log R. (2.2) The proof of the next lemma is in Section A.
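Solving (2.1) with d_H(p, p′) = R for the relative angle gives a computable expression for θ_R(r, r′). The sketch below also checks it against the first-order approximation θ_R(r, r′) ≈ 2e^{(y(p) + y(p′) − R)/2}; the constant 2 is a standard estimate of the kind Lemma 2.1 quantifies, assumed here for illustration.

```python
import numpy as np

def theta_R(r1, r2, R):
    # Solve (2.1) with d_H(p, p') = R for the relative angle in [0, pi]:
    # cos(theta_R) = (cosh(r1) cosh(r2) - cosh(R)) / (sinh(r1) sinh(r2))
    c = (np.cosh(r1) * np.cosh(r2) - np.cosh(R)) / (np.sinh(r1) * np.sinh(r2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

R = 20.0                                      # illustrative value
exact = theta_R(R - 2.0, R - 2.0, R)          # both points at defect radius y = 2
approx = 2.0 * np.exp((2.0 + 2.0 - R) / 2.0)  # 2 e^{(y1 + y2 - R)/2}
```

Two points are then adjacent in the graph if and only if their relative angle is at most θ_R of their radii.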
Recall that D_r denotes the disc of (hyperbolic) radius r centered at the origin O. For any h ∈ [0, R) we let A_h denote the annulus D_R \ D_{R−h}. Throughout we shall use calligraphic letters to denote subsets of D_R. For p ∈ D_R we let B(p) := B_R(p) ∩ D_R. We now approximate B(p) whenever r(p) ∈ (C, R], with C as in Lemma 2.1. This goes as follows. By the triangle inequality, given p ∈ D_R, any point of radius at most y(p) := R − r(p) is also within distance R from p. To approximate B(p) from above, we will take a superset of this set, namely the set of points of radius at most y(p) + C, with C := C(ε) as in Lemma 2.1. We set 1_{+ε} := 1 + ε and 1_{−ε} := 1 − ε. For ε ∈ (0, 1/3), C := C(ε) > 0 as in Lemma 2.1(i), and p ∈ D_R with r(p) ∈ (C, R], the inequality (2.3) yields the corresponding inclusions. In our calculations for E[ξ^ext(p, P_{α,n})] we will need the truncated subset of B(p) consisting of points with height coordinates at most y(p). A point p ∈ D_R ∩ P_{α,n} is extreme with respect to P_{α,n} if and only if D(p) ∩ P_{α,n} = {p}.

Properties of G α,n
The density of the defect radius is close to the exponential density with parameter α.
The proof of this fact is based on elementary algebraic manipulations and appears in Section A.

Lemma 2.2. Let ρ̄_{α,n}(y) := ρ_{α,n}(R − y), y ∈ [0, R), be the probability density of the defect radii. For all α ∈ (1/2, ∞) we have (2.8).

One does not expect to observe isolated and extreme points close to the origin. The following lemma makes this precise and shows that the isolated and extreme points a.a.s. have defect radii less than H := 4 log R.
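The approximation of Lemma 2.2 can be checked numerically: writing the density (1.1) in the defect variable y = R − r, its relative error against the exponential density αe^{−αy} is tiny on any bounded range of y. The parameter values below (n = 10^6, ν = 3) are illustrative assumptions.

```python
import numpy as np

def defect_density(y, alpha, R):
    # \bar rho_{alpha,n}(y) = rho_{alpha,n}(R - y) = alpha sinh(alpha (R - y)) / (cosh(alpha R) - 1)
    return alpha * np.sinh(alpha * (R - y)) / (np.cosh(alpha * R) - 1.0)

R = 2.0 * np.log(1e6 / 3.0)     # R = 2 log(n / nu) for assumed n = 10^6, nu = 3
y = np.linspace(0.0, 10.0, 50)
max_rel_err = {a: np.abs(defect_density(y, a, R) / (a * np.exp(-a * y)) - 1.0).max()
               for a in (0.6, 1.0, 2.0)}
```

The maximal relative error over y ∈ [0, 10] is far below 10⁻³ for each tested α.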
These upper bounds are also valid for E[ξ ext (p, P α,n ∪ {p})], with the exception of the last integral which would start from r(p) ∈ (0, R] instead of from r(p) = 0. However, the asymptotic growth of this integral is still Θ(1).

Mapping
To further simplify our calculations, we will transfer our analysis from D_R to R², making use of a mapping introduced in [9]. We set I_n := (π/2)e^{R/2} and D := (−I_n, I_n] × [0, R). For p ∈ D_R, recall that we write p := (θ(p), y(p)), with y(p) the defect radius and θ(p) the angle with respect to a reference point. We re-scale the angle θ(p) by 2e^{−R/2}, setting x(p) := (1/2)θ(p)e^{R/2}. This defines the map Φ : D_R → D, mapping (θ(p), y(p)) → (x(p), y(p)).
Put β := 2να/π. The map Φ sends P_{α,n} to the Poisson point process P̃_{α,n} on D with intensity density

dP̃_{α,n}(x, y) = (βe^{−αy} + ε_n) dy dx, (x, y) ∈ D, (2.10)

where, recalling Lemma 2.2, we have ε_n = O(n^{−2α}) = o(n^{−1}), since α ∈ (1/2, ∞). The analogue of the relative angle is defined as follows: for x, x′ ∈ (−I_n, I_n], the relative distance of x and x′ is their distance modulo 2I_n. When considering the geometry of hyperbolic balls inside D, it will be convenient to use arithmetic on the x-axis modulo 2I_n; in particular, inequalities between x-coordinates x_1, x_2 ∈ (−I_n, I_n] are understood modulo 2I_n, and this convention naturally extends to all other types of inequalities.

Approximating S^iso(P_{α,n}) and S^ext(P_{α,n}) on D

Let p̃ := (x(p), y(p)) be the image of p under Φ. For p̃ ∈ P̃_{α,n} we define ξ̃^iso(p̃, P̃_{α,n}) := ξ^iso(p, P_{α,n}), and similarly when ξ̃^iso is replaced by ξ̃^ext.
Proof. This follows from Lemma 2.3.
We put S̃^iso_H(P̃_{α,n}) to be the sum of the scores ξ̃^iso over the points of P̃_{α,n} of height at most H.

Proof. For brevity we write S_n for S^iso(P_{α,n}) and S̃_n for S̃^iso_H(P̃_{α,n}). We first assert that

P(S_n ≠ S̃_n) = O(n^{−15}). (2.14)

To see this, we condition on the event that |P̃_{α,n}| ≤ 2n (note that the complementary event has probability which is generously bounded by O(n^{−16})) and then use Boole's inequality together with Lemma 2.4.
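The "generous" bound on the complementary event follows from a standard Chernoff bound for the Poisson distribution (a routine step, sketched here): since |P̃_{α,n}| is Poisson-distributed with parameter n,

```latex
\[
\mathbb{P}\big(\mathrm{Po}(n) > 2n\big)
\le e^{-2n\theta}\,\mathbb{E}\,e^{\theta\,\mathrm{Po}(n)}
= \exp\big(n(e^{\theta} - 1 - 2\theta)\big)\Big|_{\theta = \log 2}
= e^{-(2\log 2 - 1)n},
\]
```

which decays faster than any polynomial in n^{−1}, and in particular is O(n^{−16}).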

Now write
Var S_n = Var[S̃_n 1(S_n = S̃_n) + S_n 1(S_n ≠ S̃_n)].
By Hölder's inequality and (2.14), the contribution of the event {S_n ≠ S̃_n} to the variance is negligible.

Proof of Lemma 2.6. Let X̌_n be S̃^iso_H(P̃_{α,n}) and let X_n be S̃^iso_H(P̃_α ∩ D). We denote by F_n the event that P̃_{α,n} ≠ P̃_α ∩ D. By Lemma 2.2, there is a coupling of the point processes P̃_{α,n} and P̃_α ∩ D such that P(F_n) = O(n^{−2α}) = o(n^{−1}), since α ∈ (1/2, ∞). We let A_n := {|P̃_{α,n}| > 2n} and B_n := {|P̃_α ∩ D| > 2n}. Then P(A_n ∪ B_n) = o(n^{−1}). Setting Y̌_n := X̌_n − EX̌_n and Y_n := X_n − EX_n, and using that |P̃_{α,n}| is Poisson-distributed with parameter equal to n, we obtain Var X̌_n ≤ Var X_n + o(n). The bound remains valid if we interchange X̌_n with X_n, P̃_{α,n} with P̃_α ∩ D, and A_n with B_n. We thus obtain |Var X̌_n − Var X_n| = o(n). The proof of the bound for |E[X̌_n − X_n]| is identical, except that second moments are replaced by first moments, and this yields |E[X̌_n − X_n]| = o(1). This completes the proof of the estimates involving S̃^iso_H(P̃_{α,n}) and S̃^iso_H(P̃_α ∩ D). The proofs of the assertions involving S̃^ext_H(P̃_{α,n}) and S̃^ext_H(P̃_α ∩ D) are identical.

A covariance formula forξ iso
We establish a basic covariance formula needed for the calculation of Var[S̃^iso_H(P̃_α ∩ D)]. If ξ(p, (P̃_α ∩ D) ∪ {p}) is a score function, we define the covariance of ξ at the points p_1, p_2 accordingly, and we will give an expression for c_{ξ̃^iso}. Given p_1, p_2 ∈ D([0, H]), we have the basic covariance formula (3.3). Consider the second case in (3.3), where the covariance is negative. By Lemma 2.1(ii), given points p_1 and p_2 with y(p_1), y(p_2) ∈ [0, H], we may re-state the above accordingly. Before focussing on the first case in (3.3) we need some geometric preliminaries.

ejp531.tex; 2020/12/08; 11:39 p. 14 EJP 25 (2020), paper 1.

The geometry of balls with height coordinate at most H
Our aim now is to estimate the measures of the two ball intersections under consideration. We continue to assume that p_1 and p_2 belong to D([0, H]). We assume without loss of generality that x(p_1) < x(p_2) and y(p_1) ∈ (y(p_2), H]. Henceforth, C := C(ε) is as in Lemma 2.1; cf. Figure 5. Furthermore, the definitions of B^+(p) and B^−(p) and the assumption on the heights yield (3.12). We also note the relation between h, β, and I_n given by their definitions, and denote by 1_{±ε} either 1_{+ε} or 1_{−ε}, depending on which of the two cases we are considering. The following lemma characterises when two balls are disjoint.
The threshold x_max is of order e^{(y(p)+y(p′))/2}; cf. Figure 5. The first part of the next lemma shows that ∂_r B^±_h(p_1) and ∂_ℓ B^±_h(p_2) intersect whenever the x-coordinates of p_1 and p_2 are far enough apart with respect to the exponentiated height coordinates.
(ii) On the contrary, assume that the two boundary curves intersect. Using the definition of the left boundary, we deduce that the intersection point p_12 (cf. Figure 6) satisfies the bound which yields (3.15), completing the proof of Lemma 3.2.

Consider now the union of the two balls
it follows that y(p) < y(p′). In other words, the curves ∂B^±_h(p_1) and ∂B^±_h(p_2) do not intersect and ∂B^±_h(p_2) stays "above" ∂B^±_h(p_1).
The calculations of the µ_α-measure of these two intersections are similar, as the considered sets differ only by the constant factors 1_{+ε} and 1_{−ε}. We provide a generic calculation covering both cases. The inequality (3.12) shows that the µ_α-content of S^±_{p_1 p_2} controls the growth of Cov(p_1, p_2). The following lemma gives quantitative bounds on µ_α(S^±_{p_1 p_2}). We will use the first part of the lemma to lower bound Var[S̃^iso_H(P̃_α ∩ D)]. It turns out that this gives the main contribution to the variance bound of Theorem 1.1. We will give a matching upper bound on the variance through the Poincaré inequality. The second part of the lemma gives an upper bound on the intensity measure of S^±_{p_1 p_2}, which will be used in the proof of the central limit theorem for S̃^iso.

Proof. Part (i). We express S^±_{p_1 p_2} as the disjoint union of the sets D((y_U, R)) ∩ S^±_{p_1 p_2} and its complement in S^±_{p_1 p_2}. Combining (3.23) and (3.24), and substituting the result into (3.26), completes the proof of part (i).

Part (ii). We will consider three different subsets of the interval under consideration. In the first case, we have y_U ∈ (h, R]. Thus, any point p with y(p) = y and y ∈ [y_L, h] belongs to S^±_{p_1 p_2} under the corresponding constraint involving e^{(y(p_1)+y)/2}; hence, we will use a modified version of (3.21). Using (3.24) and (3.25), we obtain the stated bound. (Note that when t = (Y_1 + Y_2)e^{h/2}, this expression is equal to 0.)

We next recall the Mecke formula for Poisson point processes (see [21, 23, 18]). Let S be a measurable space and N(S) the set of all locally finite point configurations on S. For a Poisson point process P on S with intensity ρ and a measurable non-negative function h : S^r × N(S) → [0, ∞),

E[ Σ_{(p_1, ..., p_r) ∈ P^r_{≠}} h(p_1, ..., p_r; P) ] = ∫_{S^r} E[ h(p_1, ..., p_r; P ∪ {p_1, ..., p_r}) ] ρ(dp_1) ⋯ ρ(dp_r), (4.1)

where the sum ranges over all pairwise distinct r-tuples of points of P. The definition of the variance together with (4.1) yield an expression for the variance, where the last equality holds since ξ² = ξ (in fact the first equality does not require that the score function is an indicator, but this is the case throughout our paper).

Proof.
We use the inclusions in (2.11). By Lemma 2.1(iii) we may put C := C(n) := 5 log R and ε = O(e^{−5 log R}) = O(R^{−5}). Now (2.11) yields that µ_α(B(p)) ≤ 1_{+ε} · γe^{y(p)/2} + o(1). (Recall that 1_{+ε} := 1 + ε and ε = O(R^{−5}).) Notice that y(p)/2 ∈ [0, 2 log R) since y(p) ∈ [0, H). Thus ε · e^{y(p)/2} = o(1).
We now establish the expectation asymptotics (1.3). Let F be a functional on a space S hosting a Poisson process P of intensity measure λ. For a point p ∈ S we define the first order linear operator ∇_p F := F(P ∪ {p}) − F(P).
We change variables and, as before, put Y_i = e^{y_i/2}, i = 1, 2. Hence, 2dY_i = e^{y_i/2} dy_i. Moreover, as y_1 ranges from 0 to H, the variable Y_1 ranges from 1 to e^{H/2} = e^{2 log R} = R². Thus we obtain (4.10). To simplify notation we shall absorb the factor 1_{−ε} into e^{h/2}; this amounts to transferring the term 1_{−ε} inside h, changing the constant C to C − 2 ln 1_{−ε}. It will make no difference.

Let us observe that
where, recalling Y_12 defined at (3.4), the inner integrals are denoted J_1 and J_2. By (3.5) the covariance is negative only when t belongs to the range covered by the J_1 integral. For t ∈ (Y_12, I_n], the covariance is positive; thus, for the range (Y_12, I_n] it suffices to use the subset given by the smaller range of J_2. We now show that |L_1| = O(1) for all α ∈ (1/2, ∞), whereas we derive lower bounds on L_2 which match the upper bounds in (4.9).

The lower bound on integral L 2
Let s := max{5α/(2α − 1), 5} and r := R^s e^{−h/2}. It suffices to consider the contribution to L_2 that comes from the domain W; that is, we will bound from below the integral restricted to W. For simplicity, we set t_1 as in (4.14). Consider the integral in (4.14) when (Y_1, Y_2) ∈ W. The following lemma shows that its value changes radically as α crosses 1. The regimes for this lemma induce three regimes for L_2.
For (Y_1, Y_2) ∈ W and n sufficiently large we have Y_12 ∨ 1_{+ε} · (Y_1 + Y_2) ≤ R^4; in this domain the definition of s applies. For any ε > 0 sufficiently small (in terms of δ), we have 1_{−ε} · (1 + δ) > 1 + δ/2. The resulting bounds imply that, for α ∈ (1/2, 1), the integral in (4.14) is bounded below as stated, where the last inequality holds for n sufficiently large if we put ε = O(e^{−5 log R}) (cf. Lemma 2.1(iii)). Also, recall that h := R − y(p_1) − C + 2 ln 1_{−ε}. Since y(p_1) < H, it follows that for n sufficiently large we have h ∈ (R/2, R]. Combining this observation with the above lower bound, (4.14) yields the claimed estimate. Using the third case in Lemma 4.2, the integral in (4.14) is bounded from below analogously.

Proof of growth rates for Var[S iso (P α,n )]
We have now estimated the two summands that bound the main term of Var[S̃^iso_H(P̃_α ∩ D)] from below. Our findings are summarised in (4.20). By (4.10) we have V_1 = Ω(I_n(L_1 + L_2)), and 2I_n = πe^{R/2} = Θ(n). Combining (4.9) and (4.21), and recalling Corollary 2.7, we thus establish the desired growth rates for Var[S^iso(P_{α,n})], completing the proof of (1.4).

Proof of variance asymptotics for S ext (P α,n )
The determination of variance asymptotics for S ext (P α,n ) is handled by extending existing stabilization methods. We show that when the constants describing the tail behavior of the stabilization radius at a point p are allowed to grow exponentially fast with the height of p, as at (5.5) below, then one may nonetheless establish explicit variance asymptotics as n → ∞, as shown in the analysis between (5.12) and (5.14) below.
We first require several auxiliary lemmas. For all r > 0 and p := (x(p), y(p)) ∈ R × R_+ we let B(p, r) denote the closed Euclidean ball of radius r centered at p. We set s_n := s_n(p) := 1 + λ_n(p′, p), where λ_n(p′, p) = o(1). Put p_0 := (0, y_0). We let d_0 be the Euclidean distance between p_0 and the point of (P̃_α ∩ D) ∩ D(p_0) which is closest to p_0. Set d_n := diam(D(p_0)) and note d_n ≤ 2s_n e^{y_0}. The extremality status of p_0 depends only on the point set (P̃_α ∩ D) ∩ B(p_0, R_ξ), in the sense that points outside this set will not modify ξ̃^ext(p_0, P̃_α ∩ D); that is to say, R_ξ is a radius of stabilization for ξ := ξ̃^ext. Clearly, for t ∈ (d_n, ∞), we have P(R_ξ(p_0, P̃_α ∩ D) ≥ t) = 0. We seek to control P(R_ξ(p_0, P̃_α ∩ D) > t), t ∈ [0, d_n], as a function of both t and the height parameter y_0. Put c_0 := √3 β(1 − e^{−8α})/α and set φ(t) := min{αt/4, c_0 t/3} for t ∈ (0, ∞). We assert there is a constant c_1 such that for y_0 ∈ [0, H] the tail bound (5.5) holds. We first compute lower bounds on the µ_α probability content of the regions R_t(p_0).

Proof. First assume t ∈ [2y_0, e^{y_0/2}]. Now assume t ∈ [e^{y_0/2}, d_n]. Since y_0 exceeds 8, we have e^{y_0/2} ≥ 2y_0. As above, it follows that µ_α(R_t(p_0)) ≥ c_0 t/3, where the last inequality uses t ≤ d_n ≤ 2s_n e^{y_0}. Hence, µ_α(R_t(p_0)) ≥ c_0 t/3, as desired.

Proof of Theorem 1.3
To prove (1.7) and (1.8), we first assert that it suffices to prove central limit theorems for the random variables S̃^ext_H(P̃_α ∩ D) and S̃^iso_H(P̃_α ∩ D), defined at (2.16) and (2.17), respectively. We prove this assertion for S̃_n := S̃^iso_H(P̃_α ∩ D), as the proof for S̃^ext_H(P̃_α ∩ D) is identical.
Set S_n := S^iso(P_{α,n}) = S̃^iso(P̃_{α,n}). Recall that S_n is determined by the Poisson process P̃_{α,n} on D defined at (2.10), whereas S̃_n is determined by P̃_α ∩ D defined at (2.16). By Lemma 2.2, the intensities of these two processes differ by ε_n = O(n^{−2α}). We can couple these two processes using a sprinkling argument. Let P̃ be the Poisson process on D with intensity equal to λ(x, y) := min{βe^{−αy}, βe^{−αy} + ε_n} at (x, y) ∈ D, in other words the minimum of the intensities of P̃_α ∩ D and P̃_{α,n}. Now, we define two other independent processes on D: P̃_1 of intensity βe^{−αy} − λ(x, y) at (x, y) and P̃_2 of intensity βe^{−αy} + ε_n − λ(x, y) at (x, y). The union of P̃ and P̃_1 is distributed as P̃_α ∩ D, whereas the union of P̃ and P̃_2 is distributed as P̃_{α,n}. We will use the symbols P̃_α ∩ D and P̃_{α,n} to denote the copies of these processes in the coupling space. For each n, we may define the n-th coupling space to be the product of the spaces on which P̃, P̃_1, and P̃_2 are all defined. Let P_n denote the product probability measure on the coupling space.
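The sprinkling coupling can be sketched in code: a base process carries the common (minimum) intensity, and independent residual processes account for the differences, so the two coupled processes coincide unless a residual point occurs. The one-dimensional intensities below are illustrative stand-ins for βe^{−αy} and βe^{−αy} + ε_n, not the paper's exact parameters.

```python
import numpy as np

def sample_pp(lam, lam_max, L, rng):
    """Inhomogeneous Poisson process on [0, L] by thinning a rate-lam_max process."""
    k = rng.poisson(lam_max * L)
    y = rng.uniform(0.0, L, size=k)
    return y[rng.uniform(size=k) < lam(y) / lam_max]

rng = np.random.default_rng(1)
L = 10.0
lam_a = lambda y: np.exp(-y)            # plays the role of beta e^{-alpha y}
lam_b = lambda y: np.exp(-y) + 0.01     # the same intensity plus epsilon_n
lam_min = lambda y: np.minimum(lam_a(y), lam_b(y))

base  = sample_pp(lam_min, 1.01, L, rng)                          # common part
res_a = sample_pp(lambda y: lam_a(y) - lam_min(y), 1.01, L, rng)  # residual for A
res_b = sample_pp(lambda y: lam_b(y) - lam_min(y), 1.01, L, rng)  # residual for B
A = np.concatenate([base, res_a])       # distributed as PP(lam_a)
B = np.concatenate([base, res_b])       # distributed as PP(lam_b)
# A == B unless a residual point occurs; here E|res_a| = 0 and E|res_b| = 0.01 * L.
```

Since lam_a ≤ lam_b pointwise, the first residual process is almost surely empty, mirroring the fact that one of P̃_1, P̃_2 vanishes when one intensity dominates the other.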
Thus, for any α ∈ (1/2, ∞), on the coupling space we have P̃_{α,n} = P̃_α ∩ D with probability tending to 1 as n → ∞. Also, the coupling allows us to assume that S_n and S̃_n are defined on the same probability space. In particular, the former implies that P(S_n ≠ S̃_n) = o(1). Henceforth, if X_n, n ≥ 1, is a sequence of random variables with X_n defined on the n-th coupling space, then by X_n → 0 in P_n-probability we mean that for all ε > 0 we have P_n(|X_n| ≥ ε) → 0 as n → ∞. If X_n, n ≥ 1, and Y_n, n ≥ 1, are sequences of random variables with |X_n − Y_n| → 0 in P_n-probability, if sup_n E|Y_n| < ∞, and if α_n, n ≥ 1, is a sequence of scalars with lim_{n→∞} α_n = 1, then |X_n − α_n Y_n| → 0 in P_n-probability. Since lim_{n→∞} √(Var S̃_n)/√(Var S_n) = 1, the asymptotic normality of (S̃_n − ES̃_n)/√(Var S̃_n) implies the asymptotic normality of (S_n − ES_n)/√(Var S_n) as n → ∞. In the following sub-sections, we will show that S̃^iso_H(P̃_α ∩ D) and S̃^ext_H(P̃_α ∩ D) satisfy a central limit theorem, the former for all α ∈ (1, ∞) and the latter for all α ∈ (1/2, ∞).
These imply (1.7) and (1.8). On the other hand, in the final sub-section, we show that S iso H (P α ∩D) does not satisfy a central limit theorem for α ∈ (1/2, 1). The above argument implies that S iso (P α,n ) also does not satisfy a central limit theorem in the same range of α.

The central limit theorem forS ext
We define a dependency graph G_n := G^ext_n := (V_n, E_n) as follows. Firstly, we partition the interval (−I_n, I_n] into Θ(n/R) consecutive intervals of equal length, which we enumerate J_1, ..., J_{[2I_n/R]}. For each i = 1, ..., [2I_n/R], we set C_i := J_i × [0, R]. The collection of axis-parallel rectangles {C_i}, i = 1, ..., [2I_n/R], partitions D. The vertex set V_n consists of the rectangles C_1, ..., C_{[2I_n/R]}. We put an edge (C_i, C_j) between any two rectangles whenever C_i and C_j are at x-distance at most 2x_max, and we let E_n be the collection of all such edges. For all i = 1, ..., [2I_n/R], let Z_{C_i} denote the contribution to S̃^ext_H(P̃_α ∩ D) of the points in C_i. By the definition of x_max, if C_1 and C_2 are disjoint collections of rectangles in V_n such that no edge in E_n has one endpoint in C_1 and the other endpoint in C_2, then the random variables {Z_{C_i}, C_i ∈ C_1} and {Z_{C_i}, C_i ∈ C_2} are independent. Note that a rectangle C having x-side equal to x_max will have non-empty intersection with at most O(R³) of the rectangles C_i. Standard tail estimates for Poisson random variables give P(|C_1 ∩ (P̃_α ∩ D)| > R²) ≤ e^{−R²}, for n sufficiently large. The maximal degree D_n of the dependency graph G^ext_n satisfies D_n = O(R³). We also set B_n to be a deterministic bound on the |Z_{C_i}| on the event A_n. We have shown that Var[S̃^ext_H(P̃_α ∩ D)] = Θ(n) and thus also Var[S̃^ext_H(P̃_α ∩ D) | A_n] = σ²_n = Θ(n). The Baldi–Rinott central limit theorem for dependency graphs [3] now applies: since σ_n = Θ(n^{1/2}), we have D²_n B³_n V_n / σ³_n = o(1). This shows a central limit theorem for S̃^ext_H(P̃_α ∩ D) conditional on A_n. Since S̃^ext_H(P̃_α ∩ D) conditional on A_n satisfies a central limit theorem by (6.6), the probability on the right-hand side converges to Φ(x), and we deduce a central limit theorem for S̃^ext_H(P̃_α ∩ D).
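A minimal sketch of the dependency-graph construction on blocks of the x-axis, with arithmetic modulo 2I_n: blocks are joined when their x-distance is within the interaction range, so the maximal degree is of order x_max divided by the block width. The numerical values of I_n, K, and x_max below are illustrative assumptions.

```python
import numpy as np

def dependency_edges(I_n, K, x_max):
    """Partition (-I_n, I_n] into K equal blocks; join blocks whose centers are
    within 2*x_max + width of each other, with distances taken modulo 2*I_n."""
    width = 2.0 * I_n / K
    centers = -I_n + width * (np.arange(K) + 0.5)
    edges = []
    for i in range(K):
        for j in range(i + 1, K):
            d = abs(centers[i] - centers[j])
            d = min(d, 2.0 * I_n - d)          # x-distance modulo 2*I_n
            if d <= 2.0 * x_max + width:       # blocks within interaction range
                edges.append((i, j))
    return edges

edges = dependency_edges(50.0, 20, 5.0)        # illustrative I_n, K, x_max
```

With these values every block has the same degree, reflecting the translation invariance of the partition on the circle of circumference 2I_n.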

The central limit theorem forS iso
The above approach turns out not to be strong enough to show the asymptotic normality of the number of isolated vertices. For a certain range of α, a dependency graph defined as above has high maximum degree, making the bounds (6.6) of little use. We will instead prove a central limit theorem for S̃^iso_H(P̃_α ∩ D) using a Poincaré-type inequality for Poisson functionals due to Last, Peccati and Schulte [17].
Let P denote a Poisson point process on a space S having intensity measure λ. Let F denote a functional on locally finite point sets in S. Recall that for a point p ∈ S we defined the first order linear operator ∇_p F := F(P ∪ {p}) − F(P). Here, we will also use the second order operator ∇²_{p_1,p_2} F := F(P ∪ {p_1, p_2}) − F(P ∪ {p_1}) − F(P ∪ {p_2}) + F(P). The functional F belongs to the domain of ∇ if E[F(P)²] < ∞ and ∫_S E[(∇_p F(P))²] λ(dp) < ∞. Theorem 1.1 of [17] uses these difference operators to approximate the normalised version of F by the standard normal N. For two real-valued random variables X and Y, let d_W(X, Y) denote the Wasserstein distance between the measures on R induced by X and Y. We will apply Theorem 6.1 on the conditional space of the event E_n. A calculation similar to the one in (6.2) shows that for any α ∈ (1, ∞) we have P(E_n) = 1 − O(n^{1−α}).
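The first- and second-order difference operators can be illustrated on a toy functional: here F counts isolated vertices of a Euclidean geometric graph, a stand-in for the paper's hyperbolic setting (all point values below are our own illustrative choices). Adding one point near an isolated vertex decreases F; for two added points that interact with disjoint parts of the configuration, the second-order difference vanishes, which is exactly the near-independence that the bounds on γ_1, γ_2, γ_3 quantify.

```python
import numpy as np

def F(points, radius=1.0):
    """Number of isolated vertices of the Euclidean geometric graph on `points`."""
    pts = np.asarray(points, dtype=float)
    if pts.size == 0:
        return 0
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return int(np.sum(d.min(axis=1) > radius))

def grad(F, P, p):
    # First order operator: nabla_p F = F(P u {p}) - F(P)
    return F(P + [p]) - F(P)

def grad2(F, P, p1, p2):
    # Second order operator: nabla^2_{p1,p2} F
    return F(P + [p1, p2]) - F(P + [p1]) - F(P + [p2]) + F(P)

P = [(0.0, 0.0), (5.0, 0.0)]    # two far-apart, hence isolated, vertices
```

Adding (0.5, 0) de-isolates (0, 0) without itself being isolated, so the first-order difference is −1; adding (0.5, 0) and (4.5, 0), which affect disjoint neighbourhoods, gives a vanishing second-order difference.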
We shall apply Theorem 6.1, setting λ to be µ_α and letting F be the appropriately centred and normalised functional, so that on E_n one has E[F | E_n] = 0 and Var[F | E_n] = 1. We will verify that F is in the domain of ∇ later on, using the estimate on γ_3. We will only check the second condition; the requirement that E[F(P̃_α ∩ D)²] < ∞ follows from our bounds on the variance. To apply Theorem 6.1 we shall bound |∇_p F| by the number of points of (P̃_α ∩ D) ∪ {p} which are inside the hyperbolic ball around p and have height at most H. By the inclusion-exclusion principle, the second order operator ∇²_{p_1,p_2} F is proportional to the number of isolated points of P̃_α ∩ D which are contained in the intersection of the hyperbolic balls around p_1 and p_2 and have height at most H. Set λ(p) := η · e^{y(p)/2}. Thus, |∇_p F| is stochastically dominated from above by a random variable X(p) + 1, where X(p) is distributed as Po(λ(p)). Analogously, |∇²_{p_1,p_2} F| is stochastically dominated by a Poisson-distributed random variable X(p_1, p_2) with parameter λ(p_1, p_2) := η · e^{(y(p_1) ∧ y(p_2))/2}. We now bound γ_3, γ_2 and γ_1, in this order.

Proof. Recall that n = νe^{R/2} and σ_n = Θ(n^{1/2}) by the first part of Lemma 6.2 and (4.21); the case α ∈ (1, 3/2] is handled accordingly. Let us point out that the bound on ∫ E|∇_p F|³ λ(dp) is also a bound on ∫ E|∇_p F|² λ(dp), and thus F is in the domain of ∇.

Proof. The second inequality in (6.8) implies the claimed estimate. Now, Lemma 3.1 implies that there exists a constant γ > 0 such that B^+_H(p_1) ∩ B^+_H(p_2) = ∅ if (6.9) holds. This implies that when we integrate with respect to x(p_2) and x(p_3), relative distances with respect to p_1 greater than this quantity have no contribution to the integral defining γ_2. In other words, p_2 (and p_3, respectively) contributes to this integral only if its relative distance from p_1 is at most this quantity.
The proof of this lemma is almost identical to the proof of the previous lemma. We postpone it to Section C.
We now establish the central limit theorem at (1.7). Consider the random variable obtained by this normalisation; for any x ∈ R we have (6.14). We are going to show that (6.14) cannot converge in distribution to a standard normally distributed random variable N.
By Lemma 6.3 and the discussion immediately after its statement, |∇_p F(P)| is stochastically bounded by X(p) + 1, where X(p) is a Poisson-distributed random variable with parameter η · e^{y(p)/2}. Hence, recalling h_1 = R/(2α) + (log log R)/α and n = νe^{R/2}, we obtain ∫_S E[(∇_p F(P))²] λ(dp) = O(log