Poisson cylinders in hyperbolic space

We consider the Poisson cylinder model in $d$-dimensional hyperbolic space. We show that in contrast to the Euclidean case, there is a phase transition in the connectivity of the collection of cylinders as the intensity parameter varies. We also show that for any non-trivial intensity, the diameter of the collection of cylinders is infinite.


Introduction
In the recent paper [6], the authors considered the so-called Poisson cylinder model in Euclidean space. Informally, this model can be described as a Poisson process $\omega$ on the space of bi-infinite lines in $\mathbb{R}^d$. The intensity of this Poisson process is $u$ times a normalized Haar measure on this space of lines. One then places a cylinder $c$ of radius one around every line $L \in \omega$, and with a slight abuse of notation, we say that $c \in \omega$. The main result of [6] was that for any $0 < u < \infty$ and any two cylinders $c_1, c_2 \in \omega$, there exists a sequence $\tilde{c}_1, \ldots, \tilde{c}_{d-2} \in \omega$ such that $c_1 \cap \tilde{c}_1 \neq \emptyset$, $\tilde{c}_1 \cap \tilde{c}_2 \neq \emptyset, \ldots, \tilde{c}_{d-2} \cap c_2 \neq \emptyset$. In words, any two cylinders in the process are connected via a sequence of at most $d-2$ other cylinders. Furthermore, it was proven that with probability one, there exists a pair of cylinders not connected in $d-3$ steps. The result holds for any $0 < u < \infty$, and therefore there is no connectivity phase transition. This is in sharp contrast to what happens for other percolation models. For example, ordinary discrete percolation (see [8]), the Gilbert disc model (see [11]), and the Voronoi percolation model (see [5]) all have a connectivity phase transition. A common property that all the above listed models exhibit is something that we informally refer to as a "locality property", which can be described as follows. Having knowledge of the configuration in some region $A$ gives no, or almost no, information about the configuration in some other region $B$, as long as $A$ and $B$ are well separated. For instance, in ordinary discrete percolation the configurations are independent if the two regions $A, B$ are disjoint, while for the Gilbert disc model with fixed disc radius $r$, the regions need to be at Euclidean distance at least $2r$ in order to have independence. For Voronoi percolation, there is a similar decay of dependence between well-separated regions. Part of Theorem 1.1 below is a monotonicity statement: when $u$ is so large that the union of all cylinders $C$ is a connected set, then we cannot have that $C$ is again disconnected for an even larger $u$.
In [16] a result similar to Theorem 1.1 was proven for the random interlacements model on certain non-amenable graphs. The random interlacements model (which was introduced in [15]) is a discrete percolation model exhibiting long-range dependence. However, the dependence structure for this model is very different from that of the Poisson cylinder model. To see this, consider three points $x, y, z \in \mathbb{H}^d$ (or $\mathbb{R}^d$ in the Euclidean case). If we know that there is a geodesic $L \in \omega$ such that $x, y \in L$, then this determines whether $z \in L$. For a random interlacement process, the objects studied are essentially trajectories of bi-infinite simple random walks, and so knowing that a trajectory contains the points $x, y \in \mathbb{Z}^d$ will give some information about whether the trajectory contains $z \in \mathbb{Z}^d$, but not "full" information. Thus, the dependence structure is in some sense more rigid for the cylinder process.
Knowing that $C$ is connected, it is natural to consider the diameter of $C$, defined as follows. For any two cylinders $c_a, c_b \in \omega$, let $\mathrm{Cdist}(c_a, c_b)$ be the minimal number $k$ of cylinders $c_1, \ldots, c_k \in \omega$ such that $c_a \cup c_1 \cup \cdots \cup c_k \cup c_b$ is a connected set. If no such sequence exists, we say that $\mathrm{Cdist}(c_a, c_b) = \infty$. We then define the diameter of $C$ as $\mathrm{diam}(C) = \sup\{\mathrm{Cdist}(c_a, c_b) : c_a, c_b \in \omega\}$.
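Since $\mathrm{Cdist}$ and $\mathrm{diam}$ are purely combinatorial once the intersection pattern of the cylinders is known, they can be illustrated on a finite toy configuration. The sketch below is our own illustration, not part of the paper: intervals on the line stand in for cylinders, interval overlap stands in for cylinder intersection, and $\mathrm{Cdist}$ is computed by breadth-first search.

```python
from collections import deque
from itertools import combinations

def cdist(cyls, a, b, meets):
    """Cdist(a, b): minimal number k of intermediate 'cylinders' c_1, ..., c_k
    such that cyls[a], c_1, ..., c_k, cyls[b] form a connected chain."""
    dist = {a: 0}                       # number of chain edges from cyls[a]
    queue = deque([a])
    while queue:
        i = queue.popleft()
        for j in range(len(cyls)):
            if j not in dist and meets(cyls[i], cyls[j]):
                dist[j] = dist[i] + 1
                if j == b:
                    return dist[b] - 1  # intermediates = edges - 1
                queue.append(j)
    return float("inf")                 # no connecting chain: Cdist = infinity

def overlap(s, t):
    # toy stand-in for cylinder intersection: closed intervals on the line
    return max(s[0], t[0]) <= min(s[1], t[1])

cyls = [(0, 1), (0.5, 2), (1.8, 3), (2.9, 4), (10, 11)]
d01 = cdist(cyls, 0, 1, overlap)  # intersect directly -> 0 intermediates
d03 = cdist(cyls, 0, 3, overlap)  # chain via (0.5, 2) and (1.8, 3) -> 2
d04 = cdist(cyls, 0, 4, overlap)  # (10, 11) is isolated -> inf
diam = max(cdist(cyls, i, j, overlap) for i, j in combinations(range(4), 2))
```

The returned value is the graph distance in the intersection graph minus one, matching the definition of $\mathrm{Cdist}$ above; disconnected pairs give $\infty$, so a disconnected configuration automatically has infinite diameter.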
Our second main result, Theorem 1.2, states that for any $0 < u < \infty$, we have $\mathrm{diam}(C) = \infty$ almost surely. Remark: Of course, the result is trivial for $u < u_c$.
When 0 < u < u c (d), it is natural to ask about the number of unbounded components. Our next proposition addresses this. Proposition 1.3. For any u ∈ (0, u c ) the number of infinite connected components of C is a.s. infinite.
One of the main tools will be the following discrete time particle process. Since we believe that it may be of some independent interest, we present it here in the introduction, along with our main result concerning it. In essence, it behaves like a branching process where every particle gives rise to an infinite number of offspring whose types can take any positive real value.
Formally, let $\xi_0, (\xi_{k,n})_{k,n=1}^{\infty}$ be an i.i.d. collection of Poisson processes on $\mathbb{R}$ with intensity measure $u e^{\min(0,x)}\,dx$. Let $\zeta_0 = \{0\}$, and we think of this as the single particle in generation 0. Then, let $\zeta_1 = \{x \geq 0 : x \in \xi_0\}$ be the particles of generation 1, and let $Z_{1,1} = \min\{x \in \zeta_1\}$ and inductively for any $k \geq 2$, let $Z_{k,1} = \min\{x \in \zeta_1 : x > Z_{k-1,1}\}$. Thus $Z_{1,1} < Z_{2,1} < \cdots$ and $\{Z_{1,1}, Z_{2,1}, \ldots\} = \zeta_1$. We think of these as the offspring of $Z_{1,0} = 0$. In general, if $\zeta_n$ has been defined, and $Z_{1,n} < Z_{2,n} < \cdots$ are the points in $\zeta_n$, we let
$$\zeta_{k,n+1} = \bigcup_{x \in \xi_{k,n} :\, x + Z_{k,n} \geq 0} \{x + Z_{k,n}\}, \qquad (1.3)$$
and $\zeta_{n+1} = \bigcup_{k=1}^{\infty} \zeta_{k,n+1}$. We think of $\zeta_{n+1}$ as the particles of generation $n+1$, and $\zeta_{k,n+1}$ as the offspring of $Z_{k,n} \in \zeta_n$. From (1.3), we see that $\zeta_{k,n+1} \subset \mathbb{R}_+$. Furthermore, conditioned on $Z_{k,n} = x$, the particle $Z_{k,n}$ gives rise to new particles in generation $n+1$ according to a Poisson process with intensity measure $d\mu_x = I(y \geq 0)\, u e^{-(x-y)_+}\,dy$ (where $I$ is an indicator function and $(x-y)_+ = \max(0, x-y)$). We let $\zeta = (\zeta_n)_{n=1}^{\infty}$ denote this particle process. We point out that in our definition, any enumeration of the particles of $\zeta_n$ would be as good as our ordering $Z_{1,n} < Z_{2,n} < \cdots$, as long as the enumeration does not depend on "the future", i.e. $(\xi_{k,n+1})_{k=1}^{\infty}$ or such. Informally, the above process can be described as follows. Thinking of a particle as a point in $\mathbb{R}_+$ corresponding to the type of that particle, it gives rise to new points at a homogeneous rate forward of the position of the point, but at an exponentially decaying rate backward of the position of the point. Of course, since any individual gives rise to an infinite number of offspring, the process will never die out. However, it can still die out weakly in the sense that for any $R$ there will eventually be no new points of type $R$ or smaller.
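As a sanity check on these definitions, expected generation sizes can be computed directly from $\mu_x$: a short calculation (ours, not carried out in the text) gives $E[X_1^{[0,R]}] = uR$ and $E[X_2^{[0,R]}] = u^2(R^2/2 + R)$ for an ancestor of type 0. The sketch below estimates the second value by simulating generation 1 as a rate-$u$ Poisson process and drawing, for each first-generation particle of type $x$, a Poisson number of second-generation particles with mean $\mu_x([0,R])$.

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's product-of-uniforms sampler; adequate for the small means here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

u, R = 0.5, 4.0
X_MAX = R + 20.0   # parents of type > X_MAX contribute only O(e^{-X_MAX})

def mu_0R(x):
    # mu_x([0, R]) for the intensity u * exp(-(x - y)_+) dy on y >= 0
    if x <= R:
        return u * ((1.0 - math.exp(-x)) + (R - x))
    return u * math.exp(-x) * (math.exp(R) - 1.0)

trials, total = 30000, 0
for _ in range(trials):
    # generation 1: offspring of the type-0 ancestor form a homogeneous
    # rate-u Poisson process on [0, infinity); truncated at X_MAX
    parents = [random.uniform(0.0, X_MAX) for _ in range(poisson(u * X_MAX))]
    # generation 2: each parent of type x adds Poisson(mu_x([0, R])) particles
    total += sum(poisson(mu_0R(x)) for x in parents)

est = total / trials   # close to u^2 * (R^2/2 + R) = 3.0 for u = 0.5, R = 4
```

Note that only offspring counts, not positions, are needed for this check, since conditionally on generation 1 the number of generation-2 particles in $[0,R]$ is Poisson with mean $\sum_x \mu_x([0,R])$.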
For any $n$ and $0 \leq a \leq b$, let
$$X_n^{[a,b]} = \#\big(\zeta_n \cap [a,b]\big). \qquad (1.4)$$
Thus, $X_n^{[a,b]}$ is the number of individuals in generation $n$ of type between $a$ and $b$. We have the following theorem.
Theorem 1.4. There exists a constant $C < \infty$ such that for $u < 1/4$ and any $R$, we have $E[X_n^{[0,R]}] \leq C(4u)^n e^R$ for every $n \geq 1$. That is, $\zeta$ dies out weakly. Furthermore, for any $u > 1/4$, $\zeta$ does not die out weakly.
Theorem 1.4 will be used to prove that $u_c(d) > 0$ (part of Theorem 1.1) through a coupling procedure informally described in the following way (see Section 6.2 for the formal definition). Consider a deterministic cylinder $c_0$ passing through the origin $o \in \mathbb{H}^d$ and a Poisson process of cylinders in $\mathbb{H}^d$ as described above. Let $c_{1,1}, c_{2,1}, \ldots$ be the set of cylinders in this process that intersect $c_0$. These are the first generation of cylinders (and correspond to $\zeta_1$). In the next step, we consider independent Poisson processes $(\omega_{k,1})_{k=1}^{\infty}$ and the collection of cylinders in $\omega_{k,1}$ that intersect $c_{k,1}$ (these collections will correspond to $(\zeta_{k,2})_{k=1}^{\infty}$ and the union of them corresponds to $\zeta_2$). We then proceed for future generations in the obvious way. By a straightforward coupling of this "independent cylinder process" and the original one described above (and since in every step we use an independent process in the entire space $\mathbb{H}^d$), we get that the set of cylinders connected to $c_0$ through this procedure will contain the set of cylinders in $C \cup c_0$ connected to $c_0$. With some work, the independent cylinder process can be compared to the particle process as indicated. By Theorem 1.4, for $u < 1/4$, the latter dies out weakly. We will show that this implies that the number of cylinders (in the independent cylinder process) connected to $c_0$ and intersecting $B(o, R)$ will be of order at most $e^{4ucR}$ where $c < \infty$. However, the number of cylinders in $C$ intersecting $B(o, R)$ must be of order $e^{(d-1)R}$, which of course is strictly larger than $e^{4ucR}$ for $u > 0$ small enough. Assuming that $C$ is connected then leads to a contradiction.
We end the introduction with an outline of the rest of the paper. In Section 2 we give some background on hyperbolic geometry and define the cylinder model. In Section 3, we establish some preliminary results on connectivity probabilities that will be useful in later sections. In Section 4, we prove that u c (d) < ∞ and the monotonicity part of Theorem 1.1. In Section 5, we prove Theorem 1.4, which (as described) will be a key ingredient in proving u c (d) > 0, which is done in Section 6. In Sections 7 and 8 we prove Theorem 1.2 and Proposition 1.3 respectively.

The model
In this section we will start with some preliminaries of hyperbolic space which we will have use for later, and proceed by defining the model.

Some facts about d-dimensional hyperbolic space
There are many models for $d$-dimensional hyperbolic space (see for instance [2], [12] or [13]). In this paper, we prefer to consider the so-called Poincaré ball model. Therefore, we consider the unit ball $U^d = \{x \in \mathbb{R}^d : |x| < 1\}$.
We refer to $U^d$ equipped with the metric $d_H$ as the Poincaré ball model of $d$-dimensional hyperbolic space, and denote it by $\mathbb{H}^d$. For future convenience, we now state two well known (see for instance Chapter 7.12 of [2]) rules from hyperbolic geometry. Here, we consider a triangle (consisting of segments of geodesics in $\mathbb{H}^d$) with side lengths $a, b, c$ and we let $\alpha, \beta, \gamma$ denote the angles opposite of the segments corresponding to $a$, $b$ and $c$ respectively. These rules are usually referred to as hyperbolic cosine rules:
$$\cosh(c) = \cosh(a)\cosh(b) - \sinh(a)\sinh(b)\cos(\gamma), \qquad (2.2)$$
$$\cos(\gamma) = -\cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta)\cosh(c). \qquad (2.3)$$
Let $S^{d-1}$ denote the unit sphere in $\mathbb{R}^d$. We will identify $\partial \mathbb{H}^d$ with $S^{d-1}$. Any point $x \in \mathbb{H}^d$ is then uniquely determined by the distance $\rho = d_H(o, x)$ of $x$ from the origin $o$, and a point $s \in S^{d-1}$, by going along the geodesic from $o$ to $s$ a distance $\rho$ from $o$. If we let $d\nu_{d-1}$ denote the solid angle element so that $O_{d-1} = \int_{S^{d-1}} d\nu_{d-1}$ is the $(d-1)$-dimensional volume of the sphere $S^{d-1}$, then the volume measure in $\mathbb{H}^d$ can be expressed in hyperbolic spherical coordinates (see [13], Chapter 17) as
$$dv_d = \sinh^{d-1}(\rho)\, d\rho\, d\nu_{d-1}. \qquad (2.4)$$

Thus, for any $x \in \mathbb{H}^d$ and $r > 0$,
$$v_d(B(x, r)) = O_{d-1} \int_0^r \sinh^{d-1}(t)\, dt. \qquad (2.5)$$
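The two hyperbolic cosine rules above can be checked numerically by building a geodesic triangle in the hyperboloid model of $\mathbb{H}^2$, where $d_H(x, y) = \cosh^{-1}(-\langle x, y\rangle)$ for the Minkowski product $\langle\cdot,\cdot\rangle$. The sketch below is our own illustration, not part of the text.

```python
import math

# Hyperboloid model of H^2: points x with <x, x> = -1 and x0 > 0, where
# <x, y> = -x0*y0 + x1*y1 + x2*y2 is the Minkowski product.
def mink(x, y):
    return -x[0] * y[0] + x[1] * y[1] + x[2] * y[2]

def dist(x, y):
    return math.acosh(-mink(x, y))

def exp_o(r, theta):
    # point at hyperbolic distance r from o = (1, 0, 0) in direction theta
    return (math.cosh(r), math.sinh(r) * math.cos(theta),
            math.sinh(r) * math.sin(theta))

# triangle: vertex at o, sides a and b enclosing the angle gamma at o
a, b, gamma = 1.3, 0.7, 2.1
B, C = exp_o(a, 0.0), exp_o(b, gamma)
c = dist(B, C)

# first law of cosines (side form, (2.2))
err1 = abs(math.cosh(c)
           - (math.cosh(a) * math.cosh(b)
              - math.sinh(a) * math.sinh(b) * math.cos(gamma)))

# recover the remaining angles from the side form, then check the
# second law of cosines (angle form, (2.3))
alpha = math.acos((math.cosh(b) * math.cosh(c) - math.cosh(a))
                  / (math.sinh(b) * math.sinh(c)))
beta = math.acos((math.cosh(a) * math.cosh(c) - math.cosh(b))
                 / (math.sinh(a) * math.sinh(c)))
err2 = abs(math.cos(gamma)
           - (-math.cos(alpha) * math.cos(beta)
              + math.sin(alpha) * math.sin(beta) * math.cosh(c)))
```

Both residuals are at the level of floating-point rounding, confirming the two identities on a concrete triangle.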
Let $A(d, 1)$ be the set of all geodesics in $\mathbb{H}^d$. As mentioned in the introduction, a geodesic $L \in A(d, 1)$ will sometimes be referred to as a line. Although it will have no direct relevance to the paper, we note that it is well known (see [7], Section 9) that in the Poincaré ball model, $A(d, 1)$ consists of diameters and boundary orthogonal circular segments of the unit ball $U^d$.

The process
We consider the following space of point measures on $A(d, 1)$: for $A \subset \mathbb{H}^d$, let $L_A$ denote the set of lines in $A(d,1)$ intersecting $A$, and let
$$\Omega = \Big\{\omega = \sum_{i} \delta_{L_i} : \omega(L_K) < \infty \text{ for every compact } K \subset \mathbb{H}^d\Big\}.$$
Here, $\delta_L$ of course denotes Dirac's point measure at $L$. We will often use the following standard abuse of notation: if $\omega$ is some point measure, then we will write "$L \in \omega$" instead of "$L \in \mathrm{supp}(\omega)$". We will draw an element $\omega$ from $\Omega$ according to a Poisson point process with intensity measure $u\mu_{d,1}$ where $u > 0$. We call $\omega$ a (homogeneous) Poisson line process of intensity $u$ in $\mathbb{H}^d$.
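In the planar case $d = 2$ the line process is easy to simulate: a classical fact from integral geometry (Santaló) is that the isometry-invariant line measure can be written as $\cosh(p)\,dp\,d\theta$, where $p$ is the distance from $o$ to the line and $\theta$ the direction of its closest point, and with this normalization $\mu_{2,1}(L_{B(o,R)}) = 2\pi\sinh(R)$, the circumference of the hyperbolic circle of radius $R$. The sketch below is our illustration; the parametrization and normalization are assumptions, not taken from the text.

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's product-of-uniforms sampler
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

u, R = 0.3, 3.0
# invariant line measure cosh(p) dp dtheta gives mass 2*pi*sinh(R) to L_B(o,R)
mass = u * 2.0 * math.pi * math.sinh(R)

def sample_omega():
    # a line is recorded as (p, theta): p = d_H(o, L), theta = direction of
    # the point of L closest to o
    lines = []
    for _ in range(poisson(mass)):
        p = math.asinh(random.random() * math.sinh(R))  # density prop. to cosh(p)
        lines.append((p, random.uniform(0.0, 2.0 * math.pi)))
    return lines

omega = sample_omega()
# lines with p <= 1 are exactly those whose unit cylinder c(L) covers o
covering_o = [line for line in omega if line[0] <= 1.0]
avg = sum(len(sample_omega()) for _ in range(2000)) / 2000.0  # ~ mass
```

The inverse-CDF step uses that the normalized density of $p$ on $[0, R]$ is $\cosh(p)/\sinh(R)$, whose CDF is $\sinh(p)/\sinh(R)$.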
If L ∈ A(d, 1), we denote by c(L, s) the cylinder of base radius s centered around L, i.e. c(L, s) = {x ∈ H d : d H (x, L) ≤ s}.
If $s = 1$ we will simplify the notation and write $c(L, 1) = c(L)$. When convenient, we will write $c \in \omega$ instead of $c(L)$ where $L \in \omega$. Recall that the union of all cylinders is denoted by $C$, and that the vacant set $V$ is the complement $\mathbb{H}^d \setminus C$. For an isometry $g$ on $\mathbb{H}^d$ and an event $B \subset \Omega$, we define $gB := \{\omega' \in \Omega : \omega' = g\omega \text{ for some } \omega \in B\}$. We say that an event $B \subset \Omega$ is invariant under isometries if $gB = B$ for every isometry $g$. Furthermore, we have the following $0$-$1$ law.
Proposition 2.1. Let $B \subset \Omega$ be invariant under isometries. Then $P[B] \in \{0, 1\}$.
The proof of Proposition 2.1 is fairly standard, so we only give a sketch based on the proofs of Lemma 3.3 of [17] and Lemma 2.6 of [9]. Below, $\omega_{B(x,k)}$ denotes the restriction of $\omega$ to $L_{B(x,k)}$. We note that the laws of the random objects $\omega$, $C$ and $V$ are all invariant under isometries of $\mathbb{H}^d$.

Preliminary results on connectivity probabilities

The purpose of this section is to establish some preliminary estimates on connectivity probabilities, and in particular to establish (1.2). This result will then be used many times in the following sections.
For any two sets $A, B \subset \mathbb{H}^d$, we let $A \leftrightarrow B$ denote the event that there exists a cylinder $c \in \omega$ such that $A \cap c \neq \emptyset$ and $B \cap c \neq \emptyset$. We have the following key estimate.
Lemma 3.1. Let $s \in (0, \infty)$. There exist two constants $0 < c(s) < C(s) < \infty$ such that for any $x, y \in \mathbb{H}^d$ and $u \leq 2/\mu_{d,1}(L_{B(o,s+1)})$, we have that
$$c(s)\, u\, e^{-(d-1)d_H(x,y)} \leq P[B(x,s) \leftrightarrow B(y,s)] \leq C(s)\, u\, e^{-(d-1)d_H(x,y)}.$$
Lemma 3.1 will follow easily from Lemmas 3.2 and 3.3 below, and we defer the proof of Lemma 3.1 till later.
Recall that we identify $S^{d-1}$ with $\partial \mathbb{H}^d$ in the Poincaré ball model. Fix a half-line $L_{1/2}$ emanating from the origin. For $0 < \theta < \pi$, let $\mathcal{L}_{L_{1/2},\theta}$ be the set of all half-lines $L'_{1/2}$ such that $L'_{1/2}$ emanates from the origin and such that the angle between $L_{1/2}$ and $L'_{1/2}$ is at most $\theta$. Let $S_\theta(L_{1/2})$ be the set of all points $s \in \partial \mathbb{H}^d$ such that $s$ is the limit point of some half-line in $\mathcal{L}_{L_{1/2},\theta}$. We have the following lemma.
Lemma 3.2. For $0 < \theta \leq \pi/2$, the $(d-1)$-dimensional Euclidean volume of $S_\theta$ is given by
$$\nu_{d-1}(S_\theta) = \frac{O_{d-1}}{2}\, I_{\sin^2(\theta)}\Big(\frac{d-1}{2}, \frac{1}{2}\Big),$$
where $I_x(a, b)$ is a regularized incomplete beta function (this follows from [10], equation (1), by noting that $\sin^2(\theta) = 2h - h^2$, where $h$ is the Euclidean height of the corresponding hyperspherical cap).
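The cap-area formula can be verified numerically: for $0 < \theta \leq \pi/2$, the fraction of $S^{d-1}$ within angle $\theta$ of a fixed direction equals $\frac{1}{2} I_{\sin^2\theta}\big(\frac{d-1}{2}, \frac{1}{2}\big)$. The sketch below (ours, for illustration) compares direct quadrature of the spherical surface measure, proportional to $\sin^{d-2}(\varphi)\,d\varphi$, against a quadrature evaluation of the regularized incomplete beta function.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def reg_inc_beta(x, a, b):
    # regularized incomplete beta I_x(a, b) for 0 <= x < 1, by quadrature
    num = simpson(lambda t: t ** (a - 1) * (1 - t) ** (b - 1), 0.0, x)
    return num * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def cap_fraction(theta, d):
    # fraction of the surface of S^{d-1} within angle theta of a pole
    num = simpson(lambda p: math.sin(p) ** (d - 2), 0.0, theta)
    den = simpson(lambda p: math.sin(p) ** (d - 2), 0.0, math.pi)
    return num / den

worst = max(
    abs(cap_fraction(t, d) - 0.5 * reg_inc_beta(math.sin(t) ** 2, (d - 1) / 2, 0.5))
    for d in (3, 4, 7)
    for t in (0.3, 0.8, 1.3)
)
# worst is within quadrature error of 0
```

For $d = 3$ this reduces to the familiar identity $\frac{1}{2}\big(1 - \sqrt{1 - \sin^2\theta}\big) = (1 - \cos\theta)/2$, the normalized area of a spherical cap.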
Proof. For convenience, we perform the proof in the case $s = 1$. The general case is dealt with in the same way. The proof is somewhat similar to the proof of Lemma 3.1 in [17]. Recall that we use the Poincaré ball model, and keep in mind that $\partial \mathbb{H}^d$ is identified with $S^{d-1}$. Let $R = d_H(x, y)$ and without loss of generality assume that $x = o$, so that $y \in \partial B(o, R)$. We can assume that $R > 2$, as the case $R \leq 2$ follows by adjusting the constants $c, C$. For any $R \in (0, \infty]$ and $A \subset \partial B(o, R)$, the projection $\Pi_{\partial \mathbb{H}^d}(A)$ of $A$ onto $\partial \mathbb{H}^d$ is defined as the set of all points $y \in \partial \mathbb{H}^d$ for which there is a half-line emanating from $o$, passing through $A$ and with its end-point at infinity at $y$.
We now argue for (3.6) below, where $\sigma_R$ is the unique rotationally invariant probability measure on $\partial B(o, R)$. Here, $\sigma_\infty$ is the rotationally invariant probability measure on $\partial \mathbb{H}^d$, which is just a constant multiple of the Lebesgue measure on $S^{d-1}$. In addition, every line intersecting $B(o, 1)$ intersects $\partial B(o, R)$ exactly twice. Taking expectations in (3.7), we obtain (3.9). It is easily verified that $\rho_R(A)$ is invariant under rotations; hence, $\rho_R$ is a constant multiple of $\sigma_R$. Taking expectations in (3.8), we obtain (3.10). Combining (3.9) and (3.10), we obtain (3.6).
Having proved (3.6) and (3.11), we now proceed to prove the lower bound. In view of (3.6), we need to estimate $\sigma_R(E)$ from below, where $E = B(y, 1) \cap \partial B(o, R)$. Let $L_1$ be any line containing $o$ and intersecting $\partial B(y, 1) \cap \partial B(o, R)$, and let $L_y$ be the line through $o$ and $y$. Denote the angle between $L_1$ and $L_y$ by $\theta = \theta(R)$.
Observe that Π ∂H d (E) is the intersection of ∂H d and a hyperspherical cap of Euclidean height 1 − cos(θ), and so we need to find bounds on θ.
Applying (2.2) to the triangle defined by $L_1 \cap B(o, R)$, the line segment between $o$ and $y$, and the line segment between $L_1 \cap \partial B(o, R)$ and $y$, we obtain (3.12). Solving (3.12) for $\theta$ gives (3.13). Using the elementary bound (3.14), valid for any $0 \leq x \leq 1$ and $R \geq 1$, the lower bound follows by combining (3.11), (3.13) and (3.14). We turn to the upper bound. Let $y'$ be the point on $\partial B(y, 1)$ closest to the origin, and let $H$ be the $(d-1)$-dimensional hyperbolic space orthogonal to $L_y$ and containing $y'$. Next we find an upper bound of $\sigma_\infty(\Pi_{\partial \mathbb{H}^d}(H))$, which will imply the upper bound of $\mu_{d,1}(L_{B(o,1)} \cap L_{B(y,1)})$. Let $L_2$ be any geodesic in $H$, and let $s$ and $s'$ be the two end-points at infinity of $L_2$. Let $L_3$ be the half-line between $o$ and $s$, and let $\gamma = \gamma(R)$ be the angle between $L_3$ and $L_y$. Applying (2.3) to the triangle defined by $L_3$, the half-line between $s$ and $y'$ and the line segment between $o$ and $y'$, we obtain
$$\cos(0) = -\cos(\pi/2)\cos(\gamma) + \sin(\pi/2)\sin(\gamma)\cosh(R - 1),$$
which gives $1 = \sin(\gamma)\cosh(R - 1)$.
Observe that we here applied (2.3) to an infinite triangle, which can be justified by a limit argument. Hence $\sin(\gamma) = 1/\cosh(R - 1)$.
We observe that $\Pi_{\partial \mathbb{H}^d}(H)$ is the intersection between a hyperspherical cap of Euclidean height $1 - \cos(\gamma)$ and $\partial \mathbb{H}^d$. Hence, according to Lemma 3.2, (3.16) holds. The upper bound follows by combining (3.11), (3.15) and (3.16), which concludes the proof.
Proof. The proof is nearly identical to the proof of the upper bound in Lemma 3.3, and therefore we leave the details to the reader.
We can now prove Lemma 3.1. Proof of Lemma 3.1. We perform the proof in the case $s = 1$, as the general case follows similarly. First observe that the number of cylinders intersecting both $B(x, 1)$ and $B(y, 1)$ is Poisson distributed, so the probability in question is at most $u$ times the corresponding $\mu_{d,1}$-measure, and the upper bound follows from Lemma 3.3, with $C$ as in the same lemma. Since $u\mu_{d,1}(L_{B(o,2)}) \leq 2$ by assumption, the matching lower bound follows in the same way, by again using Lemma 3.3 and letting $c$ be half of that of Lemma 3.3.

Proof of u c < ∞ and monotonicity of uniqueness
We start by proving the monotonicity of uniqueness. For convenience, in this section we denote by ω u a Poisson line process with intensity u. In addition, we will let E and P denote expectation and probability measure for several Poisson processes simultaneously.
Recall also that A(d, 1) is the set of all geodesics in H d .
Proof. It is straightforward to show that (4.1) holds for any line $L \in A(d, 1)$. Let $u' = u_2 - u_1$ and let $\omega_{u'}$ be a Poisson line process of intensity $u'$, independent of $\omega_{u_1}$. By the Poissonian nature of the process, $C(\omega_{u_2})$ has the same law as $C(\omega_{u_1}) \cup C(\omega_{u'})$. Hence it suffices to show that the a.s. connectedness of $C(\omega_{u_1})$ implies the a.s. connectedness of $C(\omega_{u_1}) \cup C(\omega_{u'})$. To show this, it suffices to show that a.s., every line in $\omega_{u'}$ is connected to $C(\omega_{u_1})$. We will show that $P[D^c] = 0$. For clarity, we let $E_{\omega_{u'}}$ and $E_{\omega_{u_1}}$ denote expectation with respect to the processes $\omega_{u'}$ and $\omega_{u_1}$ respectively, and we will let $E$ denote expectation with respect to $\omega_{u'} \cup \omega_{u_1}$. We use similar notation for probability. A computation then gives $P[D^c] = 0$, where we use the independence between $\omega_{u'}$ and $\omega_{u_1}$ in the penultimate equality, and that $P[S(L)^c] = 0$, which follows from (4.1). This finishes the proof of the proposition.
The aim of the rest of this section is to prove the following proposition, which is a part of Theorem 1.1. In order to prove Proposition 4.2, we will need some preliminary results and terminology. Recall the definition of $L^+_A$ for $A \subset \mathbb{H}^d$ and the definitions of $a(L)$ and $\rho(L)$, all from Section 2.2. Using the line process $\omega$, we define a point process $\tau$ in $\mathbb{H}^d$ as the point process induced by the points that minimize the distance between the origin and the lines of $\omega$. We observe that since $\omega$ is a Poisson process, it follows that $\tau$ is also a Poisson process (albeit inhomogeneous). We will consider a percolation model with balls in place of cylinders, using $\tau$ as the underlying point process. Our aim is to prove that $V$ does not percolate for $u < \infty$ large enough by analyzing this latter model. For this, we will need Lemma 4.4, which provides a uniform bound (in $z \in \mathbb{H}^d$) on the probability that a point of $\tau$ falls in the ball of radius $1/2$ centered at $z \in \mathbb{H}^d$. Before that, we present the following lemma, which will be useful on several occasions. It provides a set $D \subset \mathbb{H}^d$ of well-separated points whose balls of radius $1/2$ cover $\mathbb{H}^d$; furthermore, for any such set, there exist constants $0 < c_1(d) < c_2(d) < \infty$ so that for any $x \in \mathbb{H}^d$ and $r \geq 1$, the bound (4.2) holds.
Let $S_z$ be the line segment from $o$ to $z$, and observe that since $o \in \bigcup_{x \in D_m} B(x, 1/2)$, there must be some point $s = s(E_m, z)$ belonging to $S_z \cap E_m$. Because of (4.3), we see that for some $\epsilon > 0$, we have that $d_H(o, z) = d_H(o, s) + \epsilon$, and so we get a contradiction. We now turn to (4.2) and start with the upper bound. Let $y_1, \ldots, y_N$ be an enumeration of $D \cap B(x, r)$. By construction, the balls $B(y_k, 1/5)$ are all disjoint, and so $N v_d(B(o, 1/5)) \leq v_d(B(o, r + 1))$, from which the upper bound follows with $c_2 = 1/v_d(B(o, 1/5))$.
For the lower bound, it suffices to observe that, from the construction, the balls $\{B(y_k, 1)\}$ cover $B(x, r)$. Hence, the lower bound follows with $c_1 = 1/v_d(B(o, 1))$.
where the second equality uses (2.5), and where we used that $\mu_{d,1}$ is invariant under rotations in the last equality. From (4.6) we conclude that $\mu_{d,1}(L^+_{B(z,1/2)}) \geq C'(d) e^{(d-1)r}/N_r \geq c(d) > 0$, finishing the proof of the lemma.
It is clear that $W \supset V$, so it suffices to show that $W$ does not percolate when $u$ is large.
For $z \in \mathbb{H}^d$ let $Q(z)$ be the event that $z$ is within distance $1/2$ from $W$. Then $Q(z)$ is determined by $\tau \cap B(z, 3/2)$, so that $Q(z)$ and $Q(z')$ are independent if $d_H(z, z') \geq 3$. Let $A$ be the event that $o$ belongs to an infinite component of $W$. If $A$ occurs, then there exists an infinite continuous curve $\gamma : [0, \infty) \to W$ with the properties that $\gamma(0) = o$ and $d_H(o, \gamma(t)) \to \infty$ as $t \to \infty$. Let $t_0 = 0$ and $y_0 = o$, and for $k \geq 1$ let inductively $t_k = \sup\{t : d_H(\gamma(t), y_{k-1}) = 6\}$ and $y_k = \gamma(t_k)$. Let $D$ be as in Lemma 4.3, and for each $k$, let $y'_k$ be a point in $D$ which minimizes the distance to $y_k$. By definition, $d_H(y_j, y_k) \geq 6$ if $j \neq k$, and hence $d_H(y'_j, y'_k) \geq 5$. Observe that since $y_k \in W$, the event $Q(y'_k)$ occurs. Let $X_n$ be the set of sequences $x_0, \ldots, x_n$ of points in $D$ such that $d_H(o, x_0) \leq 1/2$, $d_H(x_k, x_{k+1}) \leq 7$ for $0 \leq k < n$, and $d_H(x_j, x_k) \geq 5$ if $j \neq k$. Furthermore, let $N_n$ denote the number of such sequences. We have that $N_n \leq c(d)^n$ for some constant $c(d) < \infty$, and, by Lemma 4.4, $P[Q(z)]$ is bounded above, uniformly in $z$, by a quantity tending to $0$ as $u \to \infty$. By independence, the probability that all of $Q(x_0), \ldots, Q(x_n)$ occur for some sequence in $X_n$ tends to $0$ as $n \to \infty$ if $u < \infty$ is large enough. We conclude that $P[A] = 0$ for $u$ large enough but finite.
We can now prove Proposition 4.2.
Proof of Proposition 4.2: If $C$ is disconnected, then it consists of more than one infinite connected component. Since any two disjoint infinite components of $C$ must be separated by some infinite component of $V$, we get that the disconnectedness of $C$ implies that $V$ percolates. According to Proposition 4.5, there is no percolation in $V$ when $u$ is large enough. Hence $C$ is connected when $u$ is large enough.

Proof of Theorem 1.4
Before we can prove Theorem 1.4, we will need to do some preliminary work. To that end, let $\{c_{k,n}\}_{n \geq 0, -1 \leq k \leq n}$ be defined by letting $c_{0,0} = c_{0,1} = c_{1,1} = 1$ and $c_{-1,n} = 0$ for every $n$, and then inductively for every $0 \leq k \leq n$ letting
$$c_{k,n} = c_{k-1,n-1} + c_{k+1,n}, \qquad (5.1)$$
where we define $c_{n+1,n} = 0$ (so that each row is computed downward in $k$). These numbers constitute (a version of) the Catalan triangle, and it is easy to verify that
$$c_{k,n} = \frac{k+1}{n+1}\binom{2n-k}{n} \qquad (5.2)$$
for every $n$ and $0 \leq k \leq n$. This follows by using that if (5.2) holds for $c_{k-1,n-1}$ and $c_{k+1,n}$, then (5.1) shows that it holds for $c_{k,n}$ as well; by an induction argument, we see that (5.2) holds for every $0 \leq k \leq n$. Consider the following sequence $\{g_n(x)\}_{n \geq 0}$ of functions such that $g_n : \mathbb{R}_+ \to \mathbb{R}_+$ for every $n$. Let $g_0(x) \equiv 1$, and define $g_1(x), g_2(x), \ldots$ inductively by letting
$$g_{n+1}(x) = \int_0^x g_n(y)\,dy + \int_x^\infty e^{x-y} g_n(y)\,dy \qquad (5.3)$$
for every $n \geq 0$.
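The recursion and the closed form can be cross-checked by machine. In the sketch below (ours), the triangle is generated row by row, computing downward in $k$ from the boundary value $c_{n+1,n} = 0$, and compared with the ballot-number expression $c_{k,n} = \frac{k+1}{n+1}\binom{2n-k}{n}$; the left column $c_{0,n}$ then runs through the Catalan numbers.

```python
from math import comb

N = 12
c = {(0, 0): 1}
for n in range(1, N + 1):
    c[(n + 1, n)] = 0                      # boundary value c_{n+1,n} = 0
    for k in range(n, -1, -1):             # downward in k: c_{k+1,n} is known
        c[(k, n)] = (c[(k - 1, n - 1)] if k >= 1 else 0) + c[(k + 1, n)]

def closed(k, n):
    # ballot-number closed form (k+1)/(n+1) * binom(2n-k, n)
    return (k + 1) * comb(2 * n - k, n) // (n + 1)

matches = all(c[(k, n)] == closed(k, n)
              for n in range(N + 1) for k in range(n + 1))
catalan = [c[(0, n)] for n in range(6)]    # 1, 1, 2, 5, 14, 42
```

Note that the recursion regenerates the stated initial values $c_{0,1} = c_{1,1} = 1$ from $c_{0,0} = 1$ alone, so the triangle is fully determined by its apex.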
Proposition 5.1. With definitions as above, we have that
$$g_n(x) = \sum_{k=0}^{n} c_{k,n} \frac{x^k}{k!}$$
for every $n \geq 1$.
Proof. We start by noting that $g_1(x) = x + \int_x^\infty e^{x-y}\,dy = x + 1$, and since $c_{1,1} = c_{0,1} = 1$ the statement holds for $n = 1$. Assume therefore that it holds for $n - 1$, and observe that with $c_{-1,n} = 0$, applying (5.3) to the claimed form of $g_{n-1}$ yields a sum of the same form. By using (5.1), we conclude the proof.
Our next result provides a link between the particle process $\zeta$ defined in the introduction and the functions $g_n(x)$. Recall the interpretation that a particle at position $Z_{k,n} = x$ independently gives rise to new particles according to a Poisson process with intensity measure $d\mu_x = I(y \geq 0)\, u e^{-(x-y)_+}\,dy$, so that in particular the entire process is restricted to $\mathbb{R}_+$. Recall also the definition of $X_n^{[a,b]}$ in (1.4).
Proposition 5.2. Let $F_n(R) = E[X_n^{[0,R]}]$. For any $u < \infty$, $F_n(R)$ is differentiable with respect to $R$, and we have that, with $f_n(R) := F'_n(R)$,
$$f_n(R) = u^n g_{n-1}(R)$$
for every $n \geq 1$.
Proof of Proposition 5.2. We will prove the statement by induction, and so we start by noting that $F_1(R) = E[X_1^{[0,R]}] = uR$, which follows since $Z_{1,0}$ is of type 0. Therefore, the statement holds for $n = 1$.
Assume now that the statement holds for some fixed $n \geq 1$. Let $R, \Delta R > 0$ and consider $F_{n+1}(R + \Delta R) - F_{n+1}(R)$. Any particle in generation $n$ of type smaller than $R$ gives rise to individuals in $[R, R + \Delta R]$ (in generation $n+1$) at rate $u$. Furthermore, any individual of type $x \in [R, R + \Delta R]$ gives rise to individuals in $[R, R + \Delta R]$ at rate at most $u$, while individuals of type $x > R + \Delta R$ produce individuals in $[R, R + \Delta R]$ at rate at most $u e^{R + \Delta R - x}$. We therefore get the upper bound (5.4), where $N$ is an arbitrary number. By assumption, $F_n(R)$ is differentiable, and by the mean value theorem, since $f_n(x)$ is increasing, we conclude from (5.4), using the dominated convergence theorem, that
$$\limsup_{\Delta R \to 0} \frac{F_{n+1}(R + \Delta R) - F_{n+1}(R)}{\Delta R} \leq u F_n(R) + u \int_R^\infty e^{R-x} f_n(x)\,dx. \qquad (5.5)$$
Similarly, we get a matching lower bound, which together with (5.5) gives us that $F_{n+1}(R)$ is differentiable and that
$$f_{n+1}(R) = u F_n(R) + u \int_R^\infty e^{R-x} f_n(x)\,dx = u^{n+1} g_n(R),$$
where the last equality follows from (5.3).
Remarks: The proof shows that for $u = 1$, the functions $f_n(x) = F'_n(x)$ satisfy (5.3), which is of course why (5.3) is introduced in the first place.
For future reference, we observe that $F_n(R)$ in fact depends on $u$, and we sometimes stress this by writing $F_n(R, u)$. Furthermore, it is easy to see that for any $0 < u < \infty$, we have that $F_n(R, u) = u^n F_n(R, 1)$ for every $n \geq 1$.
We have the following result.
Proposition 5.3. For any $u < 1/4$ and any $R < \infty$, we have $\lim_{n \to \infty} F_n(R, u) = 0$. In fact, there exists a constant $C < \infty$ such that $F_n(R, u) \leq C(4u)^n e^R$ for all $n \geq 1$ and $R > 0$.
Proof. By Propositions 5.1 and 5.2,
$$F_n(R, u) = u^n \int_0^R g_{n-1}(x)\,dx = u^n \sum_{k=0}^{n-1} \frac{c_{k,n-1}}{(k+1)!} R^{k+1}. \qquad (5.7)$$
Furthermore, by using that $\binom{m}{n}$ is increasing in $m \geq n$, we see from (5.2) that
$$c_{k,n} \leq \binom{2n}{n} \leq 4^n. \qquad (5.8)$$
Combining (5.7) and (5.8), we see that
$$F_n(R, u) \leq u^n 4^{n-1} \sum_{k=0}^{n-1} \frac{R^{k+1}}{(k+1)!} \leq C(4u)^n e^R,$$
which tends to $0$ for $u < 1/4$, finishing the proof.
Remark: As pointed out to us by an anonymous referee, a variant of Proposition 5.3 can be proved along the following lines. Let $T$ be the integral operator defined by
$$T(g)(x) = \int_0^x g(y)\,dy + \int_x^\infty e^{x-y} g(y)\,dy.$$
It is easy to check that $g(x) = (x + 2)e^{x/2}$ is an eigenfunction of $T$ satisfying $T(g) = 4g$. Thus, since $g_0(x) \equiv 1 \leq g(x)$, we get that $g_1 = T(g_0) \leq T(g) = 4g$, and iterating, we see that $g_{n+1} = T(g_n) \leq T(4^n g) = 4^{n+1} g$. This can then be used in conjunction with Proposition 5.2 to prove the desired result. The justification for obtaining and using the explicit forms of $g_n$, $f_n$ and $F_n$ is twofold. Firstly, these forms will be convenient when proving the second part of Theorem 1.4 and also when proving Lemma 7.1 below. Secondly, we believe that the infinite type branching process $\zeta$ is of independent interest, and therefore a detailed analysis is intrinsically of value.
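The eigenfunction identity can be confirmed numerically. The sketch below (ours) evaluates $T(g)(x) = \int_0^x g(y)\,dy + \int_x^\infty e^{x-y} g(y)\,dy$ by quadrature, truncating the exponentially damped tail, and compares it with $4g(x)$ for $g(x) = (x+2)e^{x/2}$; this integral form of $T$, matching the offspring intensity $\mu_x$ at $u = 1$, is our reading of the operator.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def T(g, x, tail=80.0):
    # T(g)(x) = int_0^x g + int_x^inf e^{x-y} g(y) dy, tail truncated:
    # the integrand decays like e^{-y/2} for the g used below
    head = simpson(g, 0.0, x) if x > 0 else 0.0
    return head + simpson(lambda y: math.exp(x - y) * g(y), x, x + tail)

def g(x):
    return (x + 2.0) * math.exp(x / 2.0)

rel_err = max(abs(T(g, x) - 4.0 * g(x)) / (4.0 * g(x))
              for x in (0.0, 0.5, 1.7, 3.0))
# rel_err is within quadrature/truncation error of 0
```

The same quadrature applied to $g_0 \equiv 1$ reproduces $g_1(x) = x + 1$, consistent with the explicit Catalan-triangle expansion of the $g_n$.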
We can now prove Theorem 1.4. Proof of Theorem 1.4. We have that $E[X_n^{[0,R]}] = F_n(R, u)$. Furthermore, for $u < 1/4$, we can use Proposition 5.3 and the dominated convergence theorem to conclude that the number of individuals in generation $n$ of type at most $R$ is eventually zero, so that $\zeta$ dies out weakly.

Proof of $u_c(d) > 0$

The aim of this section is to prove the lower bound of Theorem 1.1. We will do this by establishing a link between the cylinder process ω and the particle process of Section 5. As an intermediate step, we will in Section 6.1 consider particle processes with offspring distributions that can be weakly bounded above by ζ. In Section 6.2, these new particle processes and the cylinder process in H d will be compared. Thereafter, this link is used in Section 6.3 to obtain the required lower bound.

Particle processes weakly dominated by ζ
Recall that $d\mu_x(y) = 1_{(y \geq 0)}\, u e^{-(x-y)_+}\,dy$, and suppose that $(\nu_x)_{x \in \mathbb{R}_+}$ is a family of measures with the following property: there is a constant $c \in (0, \infty)$ such that (6.1) holds for all integers $k, l \geq 0$, and moreover, $\nu_x(\{0\}) = 0$ for all $x \geq 0$. This last assumption is made only for convenience; if one allows the measures to have an atom at 0, what follows below can be modified fairly easily to get similar conclusions. The particle processes that we consider here are defined as the one in Theorem 1.4, but using $\nu_x$ as the offspring distribution in place of $\mu_x$ for a particle of type $x$. Recall that we think of the position of a particle in $\mathbb{R}_+$ as being the type of that particle. Of course, we still assume that every particle produces offspring independently. For this process, let $\tilde{X}_n^D$ be the number of individuals in generation $n$ of type in $D \subset \mathbb{R}_+$. Furthermore, let $\tilde{F}_n(R) = E[\tilde{X}_n^{[0,R]}]$.
Lemma 6.1. With $c \in (0, \infty)$ as in (6.1), we have that for every $R \in \mathbb{N}_+$, $\tilde{F}_n(R) \leq c^n F_n(R)$.
Proof. Let $R \in \mathbb{N}_+$. It suffices to show that, with $c$ as in (6.1), and any integers $n \geq 1$ and $k \geq 0$, the bound (6.2) holds. Since $\tilde{F}_n(0) = F_n(0) = 0$, the claim of the lemma will then follow by summing the two sides of (6.2) from $k = 0$ to $k = R - 1$. We proceed by induction in $n$. For any $k \in \mathbb{N}$, (6.2) holds for $n = 1$ by (6.1). Assume therefore that (6.2) holds for some $n \geq 1$ and every $k \geq 0$. Let $\tilde{Y}^n_{k,l}$ denote the number of individuals in generation $n$ of type in $(k, k+1]$ with parents of type in $(l, l+1]$. Summing over $l$ and using the induction hypothesis together with (6.1) finishes the proof of the lemma.

The independent cylinder process
We now turn to the independent cylinder process discussed in the introduction. We start by defining the process itself, and the coupling with the ordinary line process ω.
Fix some $L_{k,n} \in \eta_n$ and consider an offspring $L \in \eta_{k,n+1}$. If $L$ is at distance between $l$ and $l+1$ from the origin, then this corresponds to an offspring $\tilde{Z} \in \tilde{\zeta}_{k,n+1}$ of $\tilde{Z}_{k,n}$ such that $\tilde{Z} \in (l, l+1]$. Furthermore, the expected number of offspring (of $L_{k,n}$) belonging to $L_{B(o,l+1)} \setminus L_{B(o,l)}$ equals $u\mu_{d,1}\big(L_{c(L_{k,n},2)} \cap (L_{B(o,l+1)} \setminus L_{B(o,l)})\big)$, and so we see that the particle process $\tilde{\zeta}$ can be described using the intensity measures $\{\tau_x\}_{x \geq 0}$, where
$$\tau_x\big((l, l+1]\big) = u\mu_{d,1}\big(L_{c(L_x,2)} \cap (L_{B(o,l+1)} \setminus L_{B(o,l)})\big),$$
with $L_x$ satisfying $x = d_H(o, L_x)$. Our next result will be used to prove that $\{\tau_x\}_{x \geq 0}$ satisfies (6.1) for some $c < \infty$.
Lemma 6.2. There exists a constant $C(d) \in (0, \infty)$ such that for any $k \geq 0$ and any line $L$ with $x = d_H(o, L)$,
$$\mu_{d,1}\big(L_{c(L,2)} \cap (L_{B(o,k+1)} \setminus L_{B(o,k)})\big) \leq C(d)\, e^{-(x-k)_+}.$$
Proof. Fix $k \in \mathbb{N}$. Suppose that $L = \{\gamma(t) : -\infty < t < \infty\}$, where the parametrization of $\gamma$ is chosen to be unit speed and so that $d_H(o, L) = d_H(o, \gamma(0))$. For $i \in \mathbb{Z}$, let $y_i = \gamma(i)$ and $B_i = B(y_i, 3)$. Then $c(L, 2) \subset \bigcup B_i$, since any point in $c(L, 2)$ is at distance at most 2 from $L$, and any point in $L$ is at distance at most $1/2$ from some $y_i$. We now claim that for every $i \geq 0$ and some constant $c_1 > 0$,
$$d_H(o, y_i) \geq x + c_1 i. \qquad (6.7)$$
To see this, assume that $i \geq 0$, and observe that by (2.2), $d_H(o, y_i) = \cosh^{-1}(\cosh(x)\cosh(i))$, since the angle between $L$ and the geodesic from $o$ to $L$ is $\pi/2$. Hence
$$\cosh(d_H(o, y_{i+1})) = \cosh(x)\cosh(i+1) \geq \cosh(x)\cosh(i)\cosh(1) = \cosh(d_H(o, y_i))\cosh(1),$$
where we use that $\cosh(i+1) \geq \cosh(i)\cosh(1)$, which holds since $i \geq 0$. Hence, (6.7) follows with $c_1 = \log(\cosh(1))$. Assume first that $k < x$. From (6.7) and symmetry, we get that $d_H(y_i, o) \geq x + c_1 |i|$ for every $i$. We get that
$$\mu_{d,1}\big(L_{c(L,2)} \cap (L_{B(o,k+1)} \setminus L_{B(o,k)})\big) \leq \sum_{i \in \mathbb{Z}} \mu_{d,1}\big(L_{B_i} \cap L_{B(o,k+1)}\big) \leq C(d)\, e^{-(x-k)},$$
where the penultimate inequality follows from Lemma 3.4. Now assume instead that $x \leq k$. Let $p = \inf\{|i| : d_H(o, y_i) \geq k - 3\}$. Using the union bound and that $\mu_{d,1}\big(L_{B_i} \cap (L_{B(o,k+1)} \setminus L_{B(o,k)})\big) = 0$ when $|i| \leq p$, we get the desired bound in this case as well.
Proof. Since $H_n(R, u) = E[Z_n^{[0,R]}]$ and the particle process $\tilde{\zeta}$ uses $\{\tau_x\}_{x \geq 0}$ as intensity measures, it suffices, in view of Lemma 6.1, to show that there is a constant $c < \infty$ such that for every integer $k, l \geq 0$, the family $\{\tau_x\}_{x \geq 0}$ satisfies (6.1); this follows from Lemma 6.2. In what follows, we drop the explicit dependence on $u$ from the notation and simply write $H_n(R)$ and $F_n(R)$.

Proof of Theorem 1.1
We now have all the ingredients to prove our main result. Recall that $C_0(\omega)$ is the maximal connected component of $c(L_{1,0}) \cup C(\omega)$ containing $c(L_{1,0})$, and recall also the definition of $C_0(\eta)$ from (6.4). By (6.5) we can couple $C_0(\omega)$ and $C_0(\eta)$ so that $C_0(\omega) \subset C_0(\eta)$, and so we have, as in the proof of Theorem 1.4, that for $u < 1/(4c)$ with $c = c(d)$ as in Lemma 6.3, the corresponding expected counts are summable in $n$, by using Lemma 6.3 in the second inequality.
Hence, we see that when $0 < u < 1/(4c)$, $V(R)$ grows at most exponentially in $R$ at a rate which is strictly smaller than 1. On the other hand, by (2.5), the number of cylinders in $C$ intersecting $B(o, R)$ grows exponentially at rate $(d-1)R$, and so we see that with probability one, $C_0(\omega)$ is a strict subset of $c(L_{1,0}) \cup C(\omega)$. We conclude that $C(\omega)$ is a.s. not connected for this choice of $u$.

Proof of Theorem 1.2
Similarly to the notation of Section 3, we let $A \stackrel{m}{\leftrightarrow} B$ denote the event that there exist $1 \leq l \leq m$ and a sequence of cylinders $c_1, \ldots, c_l \in \omega$ such that $A \cap c_1 \neq \emptyset$, $c_1 \cap c_2 \neq \emptyset, \ldots, c_l \cap B \neq \emptyset$. That is, the sequence $c_1, \ldots, c_l$ connects $A$ to $B$ in $l$ steps. We observe that $\{A \leftrightarrow B\} = \{A \stackrel{1}{\leftrightarrow} B\}$. We start with the following lemma.
There exists a collection $\mathcal{B}_R$ of balls of radius $1/4$ with centers in $\partial B(o, R)$ such that $|\mathcal{B}_R| \geq c e^{(d-1)R}$ for some $c > 0$, and such that any cylinder intersecting $B(o, R)$ intersects at most $c(d) < \infty$ balls in $\mathcal{B}_R$. To construct such a collection $\mathcal{B}_R$, we consider first $D$ as in Lemma 4.3. Let $G_R = D \cap (B(o, R + 3/2) \setminus B(o, (R - 1/2)_+))$. By a slight modification of the lower bound in (4.2), we get that $|G_R| \geq c e^{(d-1)R}/v_d(B(o, 1/2)) = c' e^{(d-1)R}$. For $x \in G_R$, let $x'$ be the point on $\partial B(o, R)$ closest to $x$, and let $G'_R$ be the collection of all such $x'$. Obviously, the collection of balls $\mathcal{B}_R := \{B(x', 1/4)\}_{x' \in G'_R}$ satisfies $|\mathcal{B}_R| \geq c' e^{(d-1)R}$. Now let $L$ be a line intersecting $B(o, R + 5/4)$ (only cylinders centered around such lines might intersect some ball in $\mathcal{B}_R$). Using (6.7), there is a universal constant $c_2 < \infty$ and two points $x_1, x_2 \in \partial B(o, R)$ (these points depend on $L$) such that $c(L) \cap (B(o, R + 1/4) \setminus B(o, (R - 1/4)_+)) \subset B(x_1, c_2) \cup B(x_2, c_2)$. Hence the number of balls from $\mathcal{B}_R$ intersecting $c(L)$ is bounded by the number of points in $D \cap (B(x_1, c_2 + 2) \cup B(x_2, c_2 + 2))$. This in turn is bounded by some constant $c_3(d) < \infty$, by the upper bound of (4.2). Hence, the existence of $\mathcal{B}_R$ is verified.
Using $\mathcal{B}_R$, we can bound the probability that a fixed ball at distance $R$ from $o$ is intersected by some cylinder in $\eta$ of generation less than or equal to $m$. The statement follows by using that $C_0(\omega) \subset C_0(\eta)$ and noting that any cylinder that intersects $B(o, 1)$ must also intersect the cylinder $c(L_{1,0})$.
Proof of Theorem 1.2. Let $m \in \mathbb{N}_+$ and fix $\epsilon \in (0, 1)$. Using Lemma 7.1, we can choose $r = r(m, \epsilon) < \infty$ so large that the probability that any two fixed cylinders separated by distance $r$ are connected in at most $m$ steps is less than $\epsilon$. Indeed, take $r$ so large that the probability that $B(o, 1)$ and $B(y, 1)$ (where $y \in \partial B(o, r)$) are connected in at most $m + 2$ steps is less than $\epsilon$. Consider then two cylinders $c_1, c_2$ separated by distance $r$, and assume without loss of generality that $c_1 \cap B(o, 1) \neq \emptyset$ and $c_2 \cap B(y, 1) \neq \emptyset$. If the probability that $c_1, c_2$ are connected in at most $m$ steps were at least $\epsilon$, then $B(o, 1)$ and $B(y, 1)$ would be connected in at most $m + 2$ steps with probability at least $\epsilon$, a contradiction.
For lines $L_1, L_2 \in A(d, 1)$, let $E_m(L_1, L_2)$ be the event that $c(L_1)$ and $c(L_2)$ are connected in at most $m$ steps. Define the event
$$H = \bigcup_{L_1, L_2 \in \omega,\ L_1 \neq L_2} E_m(L_1, L_2)^c,$$
where the union is over all 2-tuples of distinct lines in $\omega$. In words, $H$ is the event that there is at least one pair of lines in $\omega$ whose corresponding cylinders are not connected in at most $m$ steps. We now let $E_{L_1,L_2}$ denote the expectation with respect to $\omega + \delta_{L_1} + \delta_{L_2}$. Using the Slivnyak-Mecke formula (see [14]), one obtains a lower bound on the expected number of such pairs of lines. Obviously, the expression on the right-hand side diverges for any $0 < u < \infty$, so that $P[H] = 1$ for every $m$, and hence $\mathrm{diam}(C) = \infty$ a.s.

Proof of Proposition 1.3

In this section, we prove that when $u < u_c(d)$, there are a.s. infinitely many connected components in $C$. Let $N(\omega)$ denote the number of connected components in $C$.
Proof of Proposition 1.3. Obviously, the event $\{N(\omega) = k\}$ is invariant under isometries of $\mathbb{H}^d$, and so, using Proposition 2.1, we have that for any $u$ there is $k = k(u) \in \mathbb{N} \cup \{\infty\}$ such that $P[N(\omega) = k] = 1$. Suppose $u < u_c(d)$ and suppose that $1 < k(u) < \infty$. Write $\omega = \omega_1 + \omega_2$, where $\omega_1$ and $\omega_2$ are independent Poisson line processes. It is not hard to show that there exist points $y_1, \ldots, y_k \in \mathbb{H}^d$ such that the event $B$, that every infinite component of $C(\omega_2)$ comes within distance 1 of some $y_i$, has positive probability. It is easy to see that $P[C] > 0$, where $C$ is the event that the lines of $\omega_1$ connect $o$ to all of $y_1, \ldots, y_k$ (indeed, $\omega_1$ might consist of $k$ lines $L_1, \ldots, L_k$ such that $c(L_i)$ contains $o$ and $y_i$). Since $\omega_1$ and $\omega_2$ are independent, it follows that $B$ and $C$ are independent, and hence $P[B \cap C] > 0$. The event $B \cap C$ implies that $N(\omega) = 1$, whence $P[N(\omega) = 1] > 0$, which contradicts $P[N(\omega) = k] = 1$. We conclude that $N(\omega) \in \{1, \infty\}$, and since $u < u_c$ by assumption, it follows that a.s. $N(\omega) = \infty$.