The number of unbounded components in the Poisson Boolean model in hyperbolic space

We consider the Poisson Boolean continuum percolation model in n-dimensional hyperbolic space. In 2 dimensions we show that there are intensities for the underlying Poisson process for which there are infinitely many unbounded components in the covered and vacant regions. In n dimensions we show that if the radius of the balls is big enough, then there are intensities for the underlying Poisson process for which there are infinitely many unbounded components.


Introduction
We begin by describing the fixed radius version of the so-called Poisson Boolean model in R^n, arguably the most studied continuum percolation model. For a detailed study of this model, we refer to [18]. Let X be a Poisson point process in R^n with some intensity λ. At each point of X, place a closed ball of radius R. Let C be the union of all balls, and V the complement of C. The sets V and C will be referred to as the vacant and covered regions. We say that percolation occurs in C (respectively in V) if C (respectively V) contains unbounded (connected) components. For the Poisson Boolean model in R^n, it is known that there is a critical intensity λ_c ∈ (0, ∞) such that for λ < λ_c, percolation does not occur in C, and for λ > λ_c, percolation occurs in C. Also, there is a critical intensity λ*_c ∈ (0, ∞) such that percolation occurs in V if λ < λ*_c and does not occur if λ > λ*_c. Furthermore, if we denote by N_C and N_V the number of unbounded components of C and V respectively, then N_C and N_V are both almost sure constants which are either 0 or 1. In R^2 it is also known that λ_c = λ*_c and that at λ_c, percolation occurs in neither C nor V. For n ≥ 3, Sarkar [21] showed that λ_c < λ*_c, so that there exists an interval of intensities for which there is an unbounded component in both C and V.
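For readers who want to experiment with the fixed-radius model, it can be simulated directly in a bounded window of R^2. The following sketch is our own illustration (all function names are hypothetical): it samples the Poisson points and extracts the components of the covered region with union-find, using the fact that two balls of radius R overlap exactly when their centres are within distance 2R.

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method for sampling a Poisson random variable (fine for small means)."""
    limit, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def boolean_model_components(lam, R, box, rng):
    """Fixed-radius Poisson Boolean model in the window [0, box]^2.

    Returns the sampled Poisson points and the list of components of the
    covered region C; two balls overlap iff their centres are within 2R.
    """
    n = poisson_sample(lam * box * box, rng)        # |X(A)| ~ Poisson(lam * area(A))
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]

    parent = list(range(n))                         # union-find over ball centres
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= 2 * R:
                parent[find(i)] = find(j)

    components = {}
    for i in range(n):
        components.setdefault(find(i), []).append(pts[i])
    return pts, list(components.values())
```

In a window this small one only sees the qualitative dichotomy (many small components for small λR², one dominant component for large λR²); the phase transition itself is of course an infinite-volume statement.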
It is possible to consider the Poisson Boolean model in more exotic spaces than R^n, and one might ask if there are spaces for which several unbounded components coexist with positive probability. The main result of this paper is that this is indeed the case for n-dimensional hyperbolic space H^n. We show that there are intensities for which there are almost surely infinitely many unbounded components in the covered region if R is big enough. In H^2 we also show the existence of three distinct phases regarding the number of unbounded components, for any R. It turns out that the main difference between R^n and H^n which causes this is the linear isoperimetric inequality in H^n, a consequence of the constant negative curvature of the space. In H^2, the linear isoperimetric inequality says that the circumference of a bounded simply connected set is always bigger than its area. The main result in H^2 is inspired by a theorem due to Benjamini and Schramm. In [6] they show that for a large class of nonamenable planar transitive graphs, there are infinitely many infinite clusters for some parameters in Bernoulli bond percolation. For H^2 we also show that the model does not percolate at λ_c. The discrete analogue of this theorem is due to Benjamini, Lyons, Peres and Schramm and can be found in [4]. It turns out that several techniques from the aforementioned papers can be adapted to the continuous setting in H^2.
There is also a discrete analogue of the main result in H^n. In [17], Pak and Smirnova show that for certain Cayley graphs, there is a non-uniqueness phase for the number of unbounded components. In this case, while it is still possible to adapt their main idea to the continuous setting, doing so is more difficult than for H^2.
The rest of the paper is organized as follows. In section 2 we give a very short review of uniqueness and non-uniqueness results for infinite clusters in Bernoulli percolation on graphs (for a more extensive review, see the survey paper [14]), including the results by Benjamini, Lyons, Peres, Schramm, Pak and Smirnova. In section 3 we review some elementary properties of H n . In section 4 we introduce the model, and give some basic results. Section 5 is devoted to the proof of the main result in H 2 and section 6 is devoted to the proof of the main theorem for the model in H n .

Non-uniqueness in discrete percolation
Let G = (V, E) be an infinite connected transitive graph with vertex set V and edge set E. In p-Bernoulli bond percolation on G, each edge in E is kept with probability p and deleted with probability 1 − p, independently of all other edges. All vertices are kept. Let P p be the probability measure on the subgraphs of G corresponding to p-Bernoulli percolation. (It is also possible to consider p-Bernoulli site percolation in which it is the vertices that are kept or deleted, and all results we present in this section are valid in this case too.) In this section, ω will denote a random subgraph of G. Connected components of ω will be called clusters.
Let I be the event that p-Bernoulli bond percolation contains infinite clusters. One of the most basic facts in the theory of discrete percolation is that there is a critical probability p_c = p_c(G) ∈ [0, 1] such that P_p(I) = 0 for p < p_c(G) and P_p(I) = 1 for p > p_c(G). What happens at p_c depends on the graph. Above p_c it is known that for transitive graphs the number of infinite clusters is either 1 or ∞. If we let p_u = p_u(G) be the infimum of the set of p ∈ [0, 1] such that p-Bernoulli bond percolation has a unique infinite cluster, Schonmann [22] showed that for all transitive graphs one has uniqueness for all p > p_u. Thus there are at most three phases for p ∈ [0, 1] regarding the number of infinite clusters: one for which this number is 0, one where the number is ∞, and finally one where uniqueness holds.
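As a concrete finite-volume illustration of the quantities in this section, the following sketch (our own, with hypothetical names) runs p-Bernoulli bond percolation on an n × n box of the square lattice and counts the resulting clusters with union-find.

```python
import random

def bond_percolation_clusters(n, p, rng):
    """p-Bernoulli bond percolation on an n x n box of the square lattice:
    keep each nearest-neighbour edge independently with probability p, then
    count the connected clusters among the n*n vertices (all vertices kept)."""
    parent = list(range(n * n))                  # union-find over vertices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for x in range(n):
        for y in range(n):
            v = x * n + y
            if x + 1 < n and rng.random() < p:   # edge to the right neighbour kept
                parent[find(v)] = find((x + 1) * n + y)
            if y + 1 < n and rng.random() < p:   # edge to the upper neighbour kept
                parent[find(v)] = find(x * n + y + 1)
    return len({find(v) for v in range(n * n)})
```

On Z² (an amenable transitive graph) p_c = 1/2 and the infinite cluster is unique above p_c; on a finite box one sees this as a rapid collapse of the cluster count as p passes 1/2.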
A problem which in recent years has attracted much interest is to decide for which graphs p_c < p_u. It turns out that whether a graph is amenable or not is central in settling this question. For K ⊂ V, the inner vertex boundary of K is defined as ∂_V K := {v ∈ K : v has a neighbour in V \ K}, and G is said to be amenable if the infimum of |∂_V K|/|K| over all finite connected K ⊂ V equals 0. Benjamini and Schramm [7] have made the following general conjecture:

Conjecture 2.1. If G is transitive, then p_c(G) < p_u(G) if and only if G is nonamenable.

Of course, one direction of the conjecture is the well-known theorem by Burton and Keane [8] which says that any transitive, amenable graph G has a unique infinite cluster for all p > p_c.
The other direction of Conjecture 2.1 has only been partially solved. Here is one such result that will be of particular interest to us, due to Benjamini and Schramm [6]. This can be considered as the discrete analogue of our main theorem in H^2. First, another definition is needed.
Such a general result is not yet available for non-planar graphs. However, below we present a theorem by Pak and Smirnova [17] which proves non-uniqueness for a certain class of Cayley graphs. Let Γ be a group with finite generating set S, and let S^k be the multiset of elements of Γ of the form g_1 g_2 · · · g_k, g_1, ..., g_k ∈ S, each such element taken with multiplicity equal to the number of ways to write it in this way. Then S^k generates Γ.
Theorem 2.5 is the inspiration for our main result in H^n.

Hyperbolic space
We consider the unit ball model of n-dimensional hyperbolic space H^n, that is, we consider H^n as the open unit ball in R^n equipped with the hyperbolic metric. The hyperbolic metric is the metric which to a curve γ = {γ(t)}_{t=0}^{1} assigns length

L(γ) = ∫_0^1 2|γ′(t)| / (1 − |γ(t)|²) dt

and to a set E assigns volume

µ(E) = ∫_E 2^n / (1 − |x|²)^n dx.

The linear isoperimetric inequality for H^2 says that for all measurable A ⊂ H^2 with L(∂A) and µ(A) well defined,

(3.1) L(∂A) ≥ µ(A).

Denote by d(x, y) the hyperbolic distance between the points x and y. Let S(x, r) := {y : d(x, y) ≤ r} be the closed hyperbolic ball of radius r centered at x. In what follows, area (resp. length) will always mean hyperbolic area (resp. hyperbolic length). The volume of a ball is given by

µ(S(x, r)) = B(n) ∫_0^r sinh^{n−1}(t) dt,

where B(n) > 0 is a constant depending only on the dimension. We will make use of the fact that for any ǫ > 0 there is a constant K(ǫ, n) > 0 independent of r such that

(3.3) µ(S(0, r) \ S(0, r − ǫ)) ≥ K(ǫ, n) µ(S(0, r)) for all r.

For more facts about H^n, we refer to [20].
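In H^2 the standard closed forms (hyperbolic distance in the unit disc model, area 2π(cosh r − 1) and circumference 2π sinh r of a disc of radius r) make the inequalities (3.1) and (3.3) easy to check numerically on discs. The sketch below is our own illustration; the function names are ours.

```python
import math

def hyp_dist(x, y):
    """Hyperbolic distance between two points of the open unit disc
    (Poincare disc model of H^2)."""
    sq = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
    return math.acosh(1 + 2 * sq / ((1 - x[0]**2 - x[1]**2) * (1 - y[0]**2 - y[1]**2)))

def ball_area(r):
    """Hyperbolic area of a disc of radius r in H^2."""
    return 2 * math.pi * (math.cosh(r) - 1)

def circumference(r):
    """Hyperbolic length of the boundary circle of radius r in H^2."""
    return 2 * math.pi * math.sinh(r)
```

For discs, (3.1) is the inequality sinh r ≥ cosh r − 1, and the ratio in (3.3) with ǫ = 1 tends to 1 − e⁻¹ ≈ 0.632 as r → ∞, so a uniform constant K exists; both are visible numerically.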

Mass transport
Next, we present an essential ingredient in our proofs in H^2, the mass transport principle, which is due to Benjamini and Schramm [6]. We denote the group of isometries of H^2 by Isom(H^2).
The intuition behind the mass transport principle can be described as follows. One may think of ν(A × B) as the amount of mass that goes from A to B. Thus the mass transport principle says that the amount of mass that goes out of A equals the mass that goes into A.

The Poisson Boolean model in hyperbolic space
Definition 4.1. A point process X on H^n distributed according to the probability measure P such that for every k ∈ N, λ ≥ 0, and every measurable A ⊂ H^n one has

P[|X(A)| = k] = e^{−λµ(A)} (λµ(A))^k / k!

is called a Poisson process with intensity λ on H^n. Here X(A) = X ∩ A and | · | denotes cardinality.
In the Poisson Boolean model in H^n, at every point of a Poisson process X we place a ball of fixed radius R. More precisely, we let C = ∪_{x∈X} S(x, R) and V = C^c, and refer to C and V as the covered and vacant regions of H^n respectively. For A ⊂ H^n we let C[A] := ∪_{x∈X(A)} S(x, R) and V[A] := C[A]^c. For x, y ∈ H^n we write x ↔ y if there is some curve connecting x to y which is completely covered by C. Let d_C(x, y) be the length of the shortest curve connecting x and y lying completely in C if such a curve exists, and otherwise let d_C(x, y) = ∞. Similarly, let d_V(x, y) be the length of the shortest curve connecting x and y lying completely in V if such a curve exists, and otherwise let d_V(x, y) = ∞. The collection of all components of C is denoted by C and the collection of all components of V is denoted by V. Let N_C denote the number of unbounded components in C and N_V the number of unbounded components in V. Next we introduce four critical intensities: we let λ_c := inf{λ : N_C > 0 a.s.} and λ_u := inf{λ : N_C = 1 a.s.}, and dually λ*_c := sup{λ : N_V > 0 a.s.} and λ*_u := sup{λ : N_V = 1 a.s.}. Our main result in H^2 is Theorem 4.2, for the Poisson Boolean model with fixed radius in H^2; the main result in H^n for any n ≥ 3 is Theorem 4.3. In what follows, we present several quite basic results. The proofs of the following two lemmas, which give the possible values of N_C and N_V, are the same as in the R^n case, see Propositions 3.3 and 4.2 in [18], and are therefore omitted. Next we present some results concerning λ_c and λ*_c.
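Definition 4.1 suggests a direct way to sample the process on a ball S(0, r) ⊂ H^2: draw a Poisson(λµ(S(0, r))) number of points and place them i.i.d. uniformly, sampling the radial coordinate by inverting its CDF (cosh t − 1)/(cosh r − 1). A minimal sketch of this construction (our own, with hypothetical names):

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method for sampling a Poisson random variable (fine for small means)."""
    limit, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def uniform_point_in_ball(r, rng):
    """Uniform point of S(0, r) in H^2, in hyperbolic polar coordinates.
    The area element is sinh(t) dt dtheta, so the radius has CDF
    (cosh t - 1)/(cosh r - 1), which we invert directly."""
    t = math.acosh(1 + rng.random() * (math.cosh(r) - 1))
    return t, rng.uniform(0, 2 * math.pi)

def poisson_process_in_ball(lam, r, rng):
    """Poisson process of intensity lam on S(0, r) in H^2: a Poisson number
    of points, placed independently and uniformly on the ball."""
    area = 2 * math.pi * (math.cosh(r) - 1)       # mu(S(0, r)) in H^2
    return [uniform_point_in_ball(r, rng)
            for _ in range(poisson_sample(lam * area, rng))]
```

The empirical mean number of points over many samples should match λµ(S(0, r)), in accordance with Definition 4.1.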

Lemma 4.6. For the Poisson Boolean model with balls of radius
The proof is identical to the R^n case, see Theorem 3.2 in [18].

Proposition 4.7. Consider the Poisson Boolean model with balls of radius
Proof. We prove the proposition using a supercritical branching process, the individuals of which are points in H^n. The branching process is constructed by randomly distorting a regular tree embedded in the space. Without loss of generality we assume that there is a ball centered at the origin, and the origin is taken to be the 0th generation. Let a be such that a six-regular tree with edge length a can be embedded in H^2 in such a way that the angles between edges at each vertex all equal π/3, and d(u, v) ≥ a for all vertices u and v in the tree. Suppose R is so large that 2R − 1 > a.
Next pick three points x_1, x_2, x_3 on ∂S(0, 2R) ∩ H^2 such that the angles between the geodesics from the origin to these points all equal 2π/3. We define the cell associated to x_i as the region of S(0, 2R) \ S(0, 2R − 1) which can be reached by a geodesic from the origin that deviates from the geodesic from the origin to x_i by an angle of at most π/6. For every cell that contains a Poisson point, we pick one of these points uniformly at random, and take these points to be the individuals of the first generation. We continue building the branching process in this manner. Given an individual y in the nth generation, we consider an arbitrary hyperbolic plane containing y and its parent, and pick two points at distance 2R from y in this plane such that the angles between the geodesics from y to these two points and the geodesic from y to its parent are all equal to 2π/3. Then to each of the new points we associate a cell as before, and check if there are any Poisson points in them. If so, one is picked uniformly at random from each cell, and these points are the children of y.
We now verify that all the cells in which the individuals of the branching process were found are disjoint. By construction, if y is an individual in the branching process, the angles between the geodesics from y to its two possible children and to its parent are all in the interval (π/3, π), and therefore greater than the angles in a six-regular tree. Also, the lengths of these geodesics are in the interval (2R − 1, 2R), and therefore larger than a. Thus, by the choice of a, if all the individuals were in the same hyperbolic plane, the cells would all be disjoint.
Suppose all individuals are in H^2, with the first individual at the origin. For each child of the origin we may pick two geodesics from the origin to infinity, with an angle θ less than π/3 between them, that define a sector which contains the child and all of its descendants and no other individuals, such that the angle between either of these two geodesics and the geodesic from the origin to the child is θ/2. In the same way, for each child the grandchildren and their corresponding descendants can be divided into sectors bounded by infinite geodesics emanating from the child, and so on. Such a sector emanating from an individual will then contain all the sectors that emanate from its descendants.
From a sector emanating from an individual, we get an n-dimensional sector by rotating it around the geodesic through the individual and its corresponding child. This n-dimensional sector will then contain the corresponding n-dimensional sectors emanating from the child. From this it follows that the cells are always disjoint. Now, if the probability that a cell contains a Poisson point is greater than 1/2, then the expected number of children of an individual is greater than 1, and so there is a positive probability that the branching process never dies out, which in turn implies that there is an unbounded connected component in the covered region. By the above, the claimed bound on the intensity follows, completing the proof.

In H^2, we will need a correlation inequality for increasing and decreasing events. If ω and ω′ are two realizations of a Poisson Boolean model, we write ω ⪯ ω′ if any ball present in ω is also present in ω′. An event A measurable with respect to the Poisson process is said to be increasing (respectively decreasing) if ω ∈ A and ω ⪯ ω′ imply ω′ ∈ A (respectively if ω ∈ A and ω′ ⪯ ω imply ω′ ∈ A). The proof is almost identical to the proof in the R^n case, see Theorem 2.2 in [18]. In particular, we will use the following simple corollary to Theorem 4.9, the proof of which can be found in [12], which says that if A_1, A_2, ..., A_m are increasing events with the same probability, then

P[A_1] ≥ 1 − (1 − P[A_1 ∪ A_2 ∪ · · · ∪ A_m])^{1/m}.

The same holds when A_1, A_2, ..., A_m are decreasing. For the proof of Theorem 4.2 we need the following lemma, the proof of which is identical to the discrete case, see [14].
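The survival criterion invoked in the proof of Proposition 4.7 is the standard one for Galton-Watson processes: with each of the two cells of an individual independently containing a Poisson point with probability q, the offspring distribution is Binomial(2, q), and the process survives with positive probability exactly when the mean 2q exceeds 1. This can be checked numerically; the sketch below (our own) computes the extinction probability as the smallest fixed point of the offspring generating function.

```python
def extinction_probability(q, iters=200):
    """Extinction probability of a Galton-Watson process with offspring
    distribution Binomial(2, q): the smallest fixed point of the probability
    generating function f(s) = (1 - q + q*s)**2, found by iterating f from 0."""
    s = 0.0
    for _ in range(iters):
        s = (1 - q + q * s) ** 2
    return s
```

For q ≤ 1/2 the iteration converges to 1 (almost sure extinction), while for q > 1/2 it converges to ((1 − q)/q)² < 1, so the process survives with positive probability, as used in the proof.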

The number of unbounded components in H^2
The aim of this section is to prove Theorem 4.2. We perform the proof in the case R = 1, but the arguments are the same for any R. We first determine the possible values of (N_C, N_V) for the model in H^2. The first lemma is an application of the mass transport principle. First, some notation is needed.
Before the proof we describe the intuition behind it: we place mass of unit density in all of H^2. Then, if h is a component of H, the mass inside h is transported to the boundary of h. Then we use the mass transport principle: the expected amount of mass transported out of a subset A equals the expected amount of mass transported into it. Finally we combine this with the isoperimetric inequality (3.1).

Proof. For
Thus, ν is a diagonally invariant positive measure on H^2 × H^2, where the last inequality follows from the linear isoperimetric inequality. Hence, the claim follows by Theorem 3.2.
In the following lemmas, we exclude certain combinations of N_C and N_V. The first lemma can be considered as a continuous analogue of Lemma 3.3 in [6]. Since the distribution of A_r is Isom(H^2)-invariant, we get by Lemma 5.2 that there is r_1 < ∞ such that for r ≥ r_1, P[A_r has unbounded components] > 0.
But by construction, for any r it is the case that A_r has only finite components. Hence the initial assumption is false.

The proof of the next lemma is very similar to the discrete case, see Lemma 11.12 in [12], but is included for the convenience of the reader.

Proof. Assume (N_C, N_V) = (1, 1) a.s. Fix x ∈ H^2. Denote by A^u_C(k) (respectively A^d_C(k), A^r_C(k), A^l_C(k)) the event that the uppermost (respectively lowermost, rightmost, leftmost) quarter of ∂S(x, k) intersects an unbounded component of C \ S(x, k). Clearly, these events are increasing. Since N_C = 1 a.s., lim_{k→∞} P[A^u_C(k) ∪ A^d_C(k) ∪ A^r_C(k) ∪ A^l_C(k)] = 1. Hence by the corollary to the FKG inequality, lim_{k→∞} P[A^t_C(k)] = 1 for t ∈ {u, d, r, l}. Let A^t_V(k), t ∈ {u, d, r, l}, be the event that the uppermost (respectively lowermost, rightmost, leftmost) quarter of ∂S(x, k) intersects an unbounded component of V \ S(x, k). Since these events are decreasing, we get in the same way as above that lim_{k→∞} P[A^t_V(k)] = 1 for t ∈ {u, d, r, l}. Thus we may pick k_0 so big that P[A^t_C(k_0)] > 7/8 and P[A^t_V(k_0)] > 7/8 for t ∈ {u, d, r, l}.
Let A := A^u_C(k_0) ∩ A^d_C(k_0) ∩ A^l_V(k_0) ∩ A^r_V(k_0). Bonferroni's inequality implies P[A] > 1/2. On A, C \ S(x, k_0) contains two disjoint unbounded components. Since N_C = 1 a.s., these two components must almost surely on A be connected. The existence of such a connection implies that there are at least two unbounded components of V, an event with probability 0. This gives P[A] = 0, a contradiction.

5.1 The situation at λ_c and λ*_c

It turns out that to prove the main theorem, it is necessary to investigate what happens regarding N_C and N_V at the intensities λ_c and λ*_c. Our proofs are inspired by the proof of Theorem 1.1 in [4], which says that critical Bernoulli bond and site percolation on nonamenable Cayley graphs does not contain infinite clusters.

Proof. We begin by ruling out the possibility of a unique unbounded component of C at λ_c. Suppose λ = λ_c and that N_C = 1 a.s. Denote the unique unbounded component of C by U. By Proposition 5.6, V contains only finite components a.s. Let ǫ > 0 be small, remove each point of X with probability ǫ, and denote by X_ǫ the remaining points. Furthermore, let C_ǫ = ∪_{x∈X_ǫ} S(x, 1). Since X_ǫ is a Poisson process with intensity λ_c(1 − ǫ), it follows that C_ǫ contains only bounded components a.s. Let C_ǫ be the collection of all components of C_ǫ. We will now construct H_ǫ as a union of elements from C_ǫ and V such that the distribution of H_ǫ is Isom(H^2)-invariant. For each z ∈ H^2 we let U_ǫ(z) be the union of the components of U ∩ C_ǫ closest to z. We let each h from C_ǫ ∪ V be in H_ǫ if and only if sup_{z∈h} d(z, U) < 1/ǫ and U_ǫ(x) = U_ǫ(y) for all x, y ∈ h. We want to show that for ǫ small enough, H_ǫ contains unbounded components with positive probability. Let B be some ball. It is clear that L(B ∩ ∂H_ǫ) → 0 a.s. and also that µ(B ∩ H_ǫ) → µ(B) a.s. as ǫ → 0. Also L(B ∩ ∂H_ǫ) ≤ L(B ∩ (∂C_ǫ ∪ ∂C)) and E[L(B ∩ (∂C_ǫ ∪ ∂C))] ≤ K < ∞ for some constant K independent of ǫ. By the dominated convergence theorem, we have E[L(B ∩ ∂H_ǫ)] → 0 and E[µ(B ∩ H_ǫ)] → µ(B) as ǫ → 0. Therefore we get by Lemma 5.2 that H_ǫ contains unbounded components with positive probability when ǫ is small enough. Suppose h_1, h_2, ... is an infinite sequence of distinct elements from C_ǫ ∪ V which together constitute an unbounded component of H_ǫ. Then U_ǫ(x) = U_ǫ(y) for all x, y in this component.
Hence U ∩ C_ǫ contains an unbounded component (this particular conclusion could not have been drawn without the condition sup_{z∈h} d(z, U) < 1/ǫ in the definition of U_ǫ(z)). Therefore the existence of an unbounded component in H_ǫ implies the existence of an unbounded component in C_ǫ. Hence C_ǫ contains an unbounded component with positive probability, a contradiction.
We move on to rule out the case of infinitely many unbounded components of C at λ c . Assume N C = ∞ a.s. at λ c . As in the proof of Lemma 5.4, we choose r such that for x ∈ H 2 the event • A(y, r) ∩ B(y, r) occurs; The third condition above means that if y 1 and y 2 are two encounter points, then S(y 1 , r + 1) and S(y 2 , r + 1) are disjoint sets. By the above, it is clear that given y ∈ Y , the probability that y is an encounter point is positive.
We now move on to show that if y is an encounter point and U is the unbounded component of C containing y, then each of the disjoint unbounded components of U\S(y, r + 1) contains a further encounter point. The proof now continues with the construction of a forest F , that is a graph without loops or cycles. Denote the set of encounter points by T , which is a.s. infinite by the above. We let each t ∈ T represent a vertex v(t) in F . For a given t ∈ T , let U(t) be the unbounded component of C containing t. Then let k be the number of unbounded components of U(t)\S(t, r + 1) and denote these unbounded components by C 1 , C 2 ,..., C k . For i = 1, 2, ..., k put an edge between v(t) and the vertex corresponding to the encounter point in C i which is closest to t in C (this encounter point is unique by the nature of the Poisson process).
Next, we verify that F constructed as above is indeed a forest. If v is a vertex in F, denote by t(v) the encounter point corresponding to it. Suppose v_0, v_1, ..., v_n = v_0 is a cycle of length ≥ 3, and that d_C(t(v_0), t(v_1)) < d_C(t(v_1), t(v_2)). Then the construction of F forces two points of Y to be at the same distance in C from some point of Y; the case d_C(t(v_0), t(v_1)) > d_C(t(v_1), t(v_2)) obviously leads to the same conclusion. But if y ∈ Y, the probability that there are two other points in Y at the same distance in C to y is 0. Hence, cycles exist with probability 0, and therefore F is almost surely a forest. Now define a bond percolation F_ǫ ⊂ F: define C_ǫ in the same way as above, and let each edge of F be in F_ǫ if and only if both encounter points corresponding to its end-vertices are in the same component of C_ǫ. Since C_ǫ contains only bounded components, F_ǫ contains only finite connected components.
For any vertex v in F we let K(v) denote the connected component of v in F_ǫ and let ∂_F K(v) denote the inner vertex boundary of K(v) in F. Since the degree of each vertex in F is at least 3, and F is a forest, it follows that at least half of the vertices in K(v) are also in ∂_F K(v). Thus we conclude that P[v ∈ ∂_F K(v)] ≥ 1/2. The right-hand side of this bound is positive and independent of ǫ. But the left-hand side tends to 0 as ǫ tends to 0, since when ǫ is small, it is unlikely that an edge in F is not in F_ǫ. This is a contradiction.
By Proposition 5.6, if N_C = 0 a.s., then N_V = 1 a.s. Thus we have an immediate corollary to Theorem 5.7.
Next, we show the corresponding results for λ*_c. Obviously, the nature of V is quite different from that of C, but still the proof of Theorem 5.9 below differs only in details from that of Theorem 5.7. We include it for the convenience of the reader.
Proof. Suppose N_V = 1 a.s. at λ*_c and denote the unbounded component of V by U. Then C contains only finite components a.s. by Proposition 5.6. Let ǫ > 0 and let Z be a Poisson process independent of X with intensity ǫ. Let C_ǫ := ∪_{x∈X∪Z} S(x, 1) and V_ǫ := C_ǫ^c. Since X ∪ Z is a Poisson process with intensity λ*_c + ǫ, it follows that C_ǫ has a unique unbounded component a.s. and hence V_ǫ contains only bounded components a.s. Let V_ǫ be the collection of all components of V_ǫ. Define H_ǫ in the following way: for each z ∈ H^2 we let U_ǫ(z) be the union of the components of U ∩ V_ǫ closest to z. We let each h ∈ C ∪ V_ǫ be in H_ǫ if and only if sup_{z∈h} d(z, U) < 1/ǫ and U_ǫ(x) = U_ǫ(y) for all x, y ∈ h. As in the proof of Theorem 5.7, for ǫ > 0 small enough, H_ǫ contains an unbounded component with positive probability, and therefore V_ǫ contains an unbounded component with positive probability, a contradiction.

Now suppose that N_V = ∞ a.s. at λ*_c. Then also N_C = ∞ by Proposition 5.6. Therefore, for x ∈ H^2, we can choose r > 1 so big that the intersection of the two independent events • A(y, r) ∩ B(y, r) occurs; By the above discussion, if y is an encounter point, then y is contained in an unbounded component U of V and U \ S(y, r) contains at least 3 disjoint unbounded components. Again we construct a forest F using the encounter points and define a bond percolation F_ǫ ⊂ F. Let V_ǫ be defined as above. Each edge of F is declared to be in F_ǫ if and only if both its end-vertices are in the same component of V_ǫ. The proof is now finished in the same way as that of Theorem 5.7.
Again, Proposition 5.6 immediately implies the following corollary:

Proof of Theorem 4.2
Here we combine the results from the previous sections to prove our main theorem in H^2.

The number of unbounded components in H^n
This section is devoted to the proof of Theorem 4.3.
First part of proof of Theorem 4.3. In view of Lemma 4.10, it is enough to show that P[u ↔ v] → 0 as d(u, v) → ∞ for some intensity above λ_c. We use a duplication trick. Let X_1 and X_2 be two independent copies of the Poisson Boolean model. If, for some ǫ > 0, we can find points u and v at arbitrarily large distance from each other such that u is connected to v in X_1 with probability at least ǫ, then by independence the event B(u, v) that u is connected to v in both X_1 and X_2 has probability at least ǫ². So it is enough to show that P[B(u, v)] → 0 as d(u, v) → ∞ at some intensity above λ_c. Fix points u and v and suppose d(u, v) = d. Let k = ⌈d/(2R)⌉; that is, k is the smallest number of balls of radius R needed to connect the points u and v. Thus, for B(u, v) to occur, there must be at least one sequence of at least k distinct connected balls in X_1, such that the first ball contains u and the last ball contains v, and at least one such sequence of balls in X_2. This in turn implies that there is at least one sequence of at least k connected balls in X_1 such that the first ball contains u, and the last ball intersects the first ball of a sequence of at least k connected balls in X_2, the last ball of which contains u. In this combined sequence of at least 2k balls, the center of the first ball is at distance at most 2R from the center of the last ball.
Let l ≥ 2k. Next we estimate the expected number of sequences of balls as above of length l; denote this number by N(l). If we consider sequences of balls as above of length l, without the condition that the last ball contains u, then the expected number of such sequences is easily seen to be bounded by λ^l µ(S(0, 2R))^l (as for example in the proof of Theorem 3.2 in [18]). Let P_R(l) be the probability that the center of the last ball in such a sequence is at distance at most 2R from the center of the first ball. Then N(l) ≤ λ^l µ(S(0, 2R))^l P_R(l), and hence

P[B(u, v)] ≤ ∑_{l=2k}^{∞} (λµ(S(0, 2R)))^l P_R(l).
We will now estimate the terms in the sum above.
The distribution of Y_i will be defined in the proof.
First part of proof of Lemma 6.1. Note that given the point X_i, the distribution of the point X_{i+1} is the uniform distribution on S(X_i, 2R). Put d_i := d(X_i, X_{i+1}). Then d_0, d_1, ... is a sequence of independent random variables with density B(n) sinh^{n−1}(t)/µ(S(0, 2R)), 0 ≤ t ≤ 2R. Next we write

(6.2) d(0, X_l) = ∑_{i=0}^{l−1} (d(0, X_{i+1}) − d(0, X_i)).

The terms in the sum (6.2) are neither independent nor identically distributed. However, we will see that the sum is always larger than a sum of i.i.d. random variables with positive mean. Suppose without loss of generality that X_0 is at the origin. Let γ_i be the geodesic between 0 and X_i and let ϕ_i be the geodesic between X_i and X_{i+1} for i ≥ 1. Let θ_i be the angle between γ_i and ϕ_i for i ≥ 1 and let θ_0 = π. Then θ_1, θ_2, ... is a sequence of independent random variables, uniformly distributed on [0, π]. Since the geodesics γ_i, γ_{i+1} and ϕ_i lie in the same hyperbolic plane, we can express d(0, X_{i+1}) in terms of d(0, X_i), d(X_i, X_{i+1}) and θ_i using the first law of cosines for triangles in hyperbolic space (see [20], Theorem 3.5.3), which gives that

cosh(d(0, X_{i+1})) = cosh(d(0, X_i)) cosh(d_i) − sinh(d(0, X_i)) sinh(d_i) cos(θ_i).

Next we prove a lemma that states that the random variable d(0, X_{i+1}) − d(0, X_i) dominates a random variable which is independent of d(0, X_i). Put

f(x, y, θ) := cosh⁻¹(cosh(x) cosh(y) − sinh(x) sinh(y) cos(θ)) − y.
Proof. For simplicity write a = a(x) := cosh(x) and b = b(x, θ) := sinh(x) cos(θ). Then, rewriting, easy calculations give that the limit as y → ∞ is as desired. It remains to show that f′_y(x, y, θ) < 0 for all x, y and θ; computing f′_y, this reduces to the inequality (6.6). If the right-hand side in (6.6) is negative then we are done; otherwise, taking squares and simplifying shows that (6.6) is equivalent to the simpler inequality a² − b² > 1, which holds since a² − b² = cosh²(x) − sinh²(x) cos²(θ) > cosh²(x) − sinh²(x) = 1, completing the proof of the lemma.
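Both claims about f are easy to sanity-check numerically. Since cosh⁻¹(z) = log(2z) + o(1) and cosh y, sinh y ~ e^y/2 as y → ∞, the limit of f(x, y, θ) is log(cosh x − sinh x cos θ); the sketch below (our own) evaluates f and this claimed limit.

```python
import math

def f(x, y, theta):
    """f(x, y, theta) = acosh(cosh x cosh y - sinh x sinh y cos theta) - y,
    as in the text (the first hyperbolic law of cosines, recentred at y)."""
    return math.acosh(math.cosh(x) * math.cosh(y)
                      - math.sinh(x) * math.sinh(y) * math.cos(theta)) - y

def limit_in_y(x, theta):
    """Claimed limit of f(x, y, theta) as y -> infinity."""
    return math.log(math.cosh(x) - math.sinh(x) * math.cos(theta))
```

For a fixed x and θ ∈ (0, π), f is numerically decreasing in y and converges quickly to the limit, in agreement with the lemma.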
Second part of proof of Lemma 6.1. Letting Y_i := g(d_i, θ_i), where g is as in Lemma 6.2, we obtain the desired domination (using that Y_0 > 0), which concludes the proof.
We now want to bound the probability in Lemma 6.1, and for this we have the following technical lemma, which, in a slightly different form, is due to Patrik Albin.

Lemma 6.3. Let Y_i be defined as above. There is a function h(R, ǫ) such that for any ǫ ∈ (0, 1) we have h(R, ǫ) ∼ Ae^{−R(1−ǫ)} as R → ∞ for some constant A = A(ǫ) ∈ (0, ∞) independent of R, and such that for any R > 0,

Proof. Let K be the complete elliptic integral of the first kind (see [11], pp. 313-314).
Second part of proof of Theorem 4.3. By the estimates in Proposition 4.7 and Lemma 6.3 we get that

∑_{l=2k}^{∞} (λ_c(R)µ(S(0, 2R)))^l P_R(l) ≤ e^R ∑_{l=2k}^{∞} K^l h(R, ǫ)^{l−1}

for any ǫ ∈ (0, 1) and some constant K ∈ (0, ∞). Thus if we take R big enough, the sum goes to 0 as k → ∞. This is also the case if we replace λ_c with tλ_c for some t > 1, proving that there are intensities above λ_c for which there are infinitely many unbounded connected components in the covered region of H^n for R big enough.
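Since h(R, ǫ) ~ Ae^{−R(1−ǫ)}, the ratio Kh(R, ǫ) of the geometric series above is smaller than 1 once R is large, and the tail then vanishes as k → ∞. The closed form of the tail makes this a one-line check; the sketch below is ours, with arbitrary illustrative values of K, h and R.

```python
import math

def tail_bound(K, h, R, k):
    """Closed form of e^R * sum_{l >= 2k} K**l * h**(l-1), the geometric tail
    bounding P[B(u, v)] in the proof; requires the ratio K*h < 1."""
    assert K * h < 1, "the bound is only useful when K * h < 1"
    return math.exp(R) * K ** (2 * k) * h ** (2 * k - 1) / (1 - K * h)
```

For fixed K, h and R with Kh < 1 the bound decays geometrically in k, which is exactly what drives P[B(u, v)] → 0 as d(u, v) → ∞.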