The extremal process of two-speed branching Brownian motion

We construct and describe the extremal process for variable speed branching Brownian motion, studied recently by Fang and Zeitouni, in the case of piecewise constant speeds; in fact, for simplicity we concentrate on the case where the speed is $\sigma_1$ for $s\leq bt$ and $\sigma_2$ for $bt< s\leq t$. In the case $\sigma_1>\sigma_2$, the process is the concatenation of two BBM extremal processes, as expected. In the case $\sigma_1<\sigma_2$, a new family of cluster point processes arises that is similar to, but distinctly different from, the BBM process. Our proofs follow the strategy of Arguin, Bovier, and Kistler.


INTRODUCTION
A standard branching Brownian motion (BBM) is a continuous-time Markov branching process that is constructed as follows: start with a single particle which performs a standard Brownian motion $x(t)$ with $x(0)=0$, continued for a standard exponentially distributed holding time $T$, independent of $x$. At time $T$, the particle splits, independently of $x$ and $T$, into $k$ offspring with probability $p_k$, where $\sum_{k=1}^\infty p_k=1$, $\sum_{k=1}^\infty k p_k=2$, and $K=\sum_{k=1}^\infty k(k-1)p_k<\infty$. These particles continue along independent Brownian paths starting from $x(T)$ and are subject to the same splitting rule. And so on.
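As an illustration (not part of the paper), the branching construction just described can be sketched in a few lines of Python. The function name and parameters are ours, and binary splitting ($p_2=1$) is assumed for simplicity:

```python
import numpy as np

def simulate_bbm(t, sigma=1.0, seed=0):
    """Crude sketch of standard BBM up to time t with binary branching (p_2 = 1).

    Each particle lives an Exp(1) holding time, then splits into two; between
    splits it diffuses as a Brownian motion with variance sigma^2 per unit time.
    Returns the particle positions at time t.
    """
    rng = np.random.default_rng(seed)
    stack = [(0.0, 0.0)]          # (birth time, birth position) of unprocessed particles
    final_positions = []
    while stack:
        s, x = stack.pop()
        lifetime = rng.exponential(1.0)
        if s + lifetime >= t:
            # particle survives to the horizon: diffuse over the remaining time
            final_positions.append(x + sigma * np.sqrt(t - s) * rng.standard_normal())
        else:
            # diffuse up to the split time, then branch into two offspring
            y = x + sigma * np.sqrt(lifetime) * rng.standard_normal()
            stack.append((s + lifetime, y))
            stack.append((s + lifetime, y))
    return np.array(final_positions)

pos = simulate_bbm(t=6.0)
m = np.sqrt(2) * 6.0 - 3.0 / (2 * np.sqrt(2)) * np.log(6.0)
print(f"{len(pos)} particles, max = {pos.max():.2f}, m(t) = {m:.2f}")
```

For moderate $t$ the sample maximum already tracks the centering $m(t)=\sqrt2 t-\frac{3}{2\sqrt2}\log t$ discussed below, though the $O(1)$ fluctuations are substantial.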
Branching Brownian motion has received a lot of attention over the last decades, with a strong focus on the properties of extremal particles. We mention the seminal contributions of McKean [18], Bramson, Lalley and Sellke, and Chauvin and Rouault [7,6,15,8] on the connection to the Fisher-Kolmogorov-Petrovsky-Piscounov (F-KPP) equation and on the distribution of the rescaled maximum. In recent years, there has been a revival of interest in BBM with numerous contributions, including the construction of the full extremal process [3,1]. For a review of these developments see, e.g., the recent survey by Gouéré [13].
BBM can be seen as a Gaussian process with covariances depending on an ultrametric distance, in this case the ultrametric associated to the genealogical structure of an underlying Galton-Watson process. In that respect it is closely related to another class of Gaussian processes, the Generalised Random Energy Models (GREM) introduced by Derrida and Gardner [12]. While in BBM the covariance of the process is a linear function of the ultrametric distance, in the GREM one considers more general functions. One of the reasons that makes BBM interesting in this context is the fact that the linear function appears as a borderline where the correlation starts to modify the behaviour of extremes [4,5].
In the context of BBM, different covariances can be achieved by varying the speed (i.e. diffusivity) of the Brownian motions as a function of time (see also [5]). This model was introduced by Derrida and Spohn [9] and has recently been investigated by Fang and Zeitouni [11,10], see also [16,17]. The entire family of models obtained as time changes of BBM is a splendid test ground to further develop the theory of extremes of correlated random variables. Understanding fully the possible extremal processes that arise in this class should also provide us with candidate processes for even wider classes of random structures.
1.1. Results. In [11], Fang and Zeitouni showed that in the case when the covariance is a piecewise linear function, the maximum of BBM is tight and behaves as expected from the analogous GREM. In this paper we refine and extend their analysis: we obtain the precise law of the maximum, and we give the full characterisation of the extremal process.
For simplicity we consider the following variable speed BBM. Fix a time horizon $t$. Then we consider the BBM model where, at time $s\le t$, all particles move independently as Brownian motions with variance
$$\sigma^2(s)=\begin{cases}\sigma_1^2, & 0\le s< bt,\\ \sigma_2^2, & bt\le s\le t.\end{cases}\qquad (1.1)$$
We normalise the total variance by assuming $b\sigma_1^2+(1-b)\sigma_2^2=1$. Note that in the case $b=1$, $\sigma_2=\infty$ is allowed. We denote by $n(s)$ the number of particles at time $s$ and by $\{x_i(s);\,1\le i\le n(s)\}$ the positions of the particles at time $s$.
Remark. Strictly speaking, we are not talking about a single stochastic process, but about a family $\{x_k(s),\,k\le n(s)\}_{s\le t}$, $t\in\mathbb R_+$, of processes with finite time horizon, indexed by that horizon, $t$.
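To fix ideas, the variance profile and the normalisation can be checked numerically. This is a throwaway sketch with hypothetical parameter values, not code from the paper:

```python
import numpy as np

def speed_profile(s, t, b, sigma1, sigma2):
    """Two-speed variance profile: sigma_1^2 for s <= bt, sigma_2^2 for bt < s <= t."""
    return np.where(s <= b * t, sigma1**2, sigma2**2)

# Hypothetical parameters; sigma2 is chosen so that b*sigma1^2 + (1-b)*sigma2^2 = 1.
b, sigma1, t = 0.5, 0.8, 10.0
sigma2 = np.sqrt((1.0 - b * sigma1**2) / (1.0 - b))

# The total accumulated variance int_0^t sigma^2(s) ds then equals t.
s = np.linspace(0.0, t, 1_000_001)
total_variance = np.trapz(speed_profile(s, t, b, sigma1, sigma2), s)
print(sigma2, total_variance)  # total_variance ~ t = 10
```

The normalisation means that the time-$t$ marginal of each particle has variance $t$, exactly as for standard BBM; only the correlation structure differs.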
In this case, Fang and Zeitouni [10] showed that $\max_{k\le n(t)}x_k(t)-\tilde m(t)$ is tight, where, in the case $\sigma_1<\sigma_2$,
$$\tilde m(t)=\sqrt 2\,t-\frac{1}{2\sqrt 2}\log t,\qquad (1.2)$$
while, in the case $\sigma_1>\sigma_2$, up to corrections of order one,
$$\tilde m(t)=\sqrt 2\big(\sigma_1 b+\sigma_2(1-b)\big)t-\frac{3}{2\sqrt 2}(\sigma_1+\sigma_2)\log t.\qquad (1.3)$$
The second case has a simple interpretation: the maximum is achieved by adding to the maxima of BBM at time $bt$ the maxima of their offspring at time $(1-b)t$ later. The first case looks even simpler, but is far more interesting. The order of the maximum is that of the REM, a fact to be expected from the corresponding results for the GREM (see [12,4]). But what is the law of the rescaled maximum, and what is the corresponding extremal process? The purpose of this paper is primarily to answer these questions.
For standard BBM $\bar x(t)$ (i.e. $\sigma_1=\sigma_2=1$), Bramson [7] and Lalley and Sellke [15] showed that
$$\lim_{t\uparrow\infty}\mathbb P\Big(\max_{k\le n(t)}\bar x_k(t)-m(t)\le y\Big)=\mathbb E\Big[e^{-CZe^{-\sqrt 2 y}}\Big],$$
where $m(t)\equiv\sqrt 2\,t-\frac{3}{2\sqrt 2}\log t$, $C$ is a positive constant, and $Z$ is the almost sure limit of the derivative martingale. In [3] (see also [1] for a different proof) it was shown that the extremal process,
$$\lim_{t\uparrow\infty}\sum_{k\le n(t)}\delta_{\bar x_k(t)-m(t)}=\mathcal E,\qquad (1.5)$$
exists in law, and $\mathcal E$ is of the form
$$\mathcal E=\sum_{k,i}\delta_{\eta_k+\Delta_i^{(k)}},\qquad (1.6)$$
where $\eta_k$ is the $k$-th atom of a mixture of Poisson point processes with intensity measure $CZe^{-\sqrt 2 y}dy$, with $C$ and $Z$ as before, and $\Delta_i^{(k)}$ are the atoms of independent and identically distributed point processes $\Delta^{(k)}$, which are the limits in law of the process of particles seen from the maximum, conditioned on the maximum being unusually high.

The main result of the present paper is similar, but different.

Theorem 1.1. Let $x_k(t)$ be branching Brownian motion with variable speed $\sigma^2(s)$ as given in (1.1). Assume that $\sigma_1<\sigma_2$. Then the limit
$$\lim_{t\uparrow\infty}\sum_{k\le n(t)}\delta_{x_k(t)-\tilde m(t)}=\sum_{k,i}\delta_{\eta_k+\sigma_2\Lambda_i^{(k)}}$$
exists in law, where $\eta_k$ is the $k$-th atom of a mixture of Poisson point processes with intensity measure $C'Ye^{-\sqrt 2 y}dy$, with $C'$ and $Y$ as in (i), and $\Lambda_i^{(k)}$ are the atoms of independent and identically distributed point processes $\Lambda^{(k)}$, which are the limits in law of the extremal process of a standard BBM seen from its maximum, conditioned on the maximum exceeding $\sqrt 2\sigma_2 t$.

To complete the picture, we give the result for the limiting extremal process in the case $\sigma_1>\sigma_2$. This result is much simpler and entirely unsurprising.

Theorem 1.2. Let $x_k(t)$ be as in Theorem 1.1, but with $\sigma_2<\sigma_1$. Let $\mathcal E\equiv\mathcal E_0$ and $\mathcal E^{(i)}$, $i\in\mathbb N$, be independent copies of the extremal process (1.6) of standard branching Brownian motion. Then the limit
$$\lim_{t\uparrow\infty}\sum_{k\le n(t)}\delta_{x_k(t)-\tilde m(t)}=\sum_{i,j}\delta_{\sigma_1 e_i+\sigma_2 e_j^{(i)}}\qquad (1.14)$$
exists in law, where $e_i$ and $e_j^{(i)}$ are the atoms of the point processes $\mathcal E$ and $\mathcal E^{(i)}$, respectively.

Remark. In the case $\sigma_1<1$, we see that the limiting process depends only on the value of $\sigma_1$ (through the martingale limit $Y$) and on $\sigma_2$ (through the cluster processes $\sigma_2\Lambda^{(k)}$). As $\sigma_2$ grows, the clusters become spread out, and in the limit $\sigma_2=\infty$ the cluster processes degenerate to the Dirac mass at zero. Hence, in that case, the extremal process is just a mixture of Poisson point processes. When $\sigma_1=0$ and $b>0$, the martingale limit is just an exponential random variable, the limit of the martingale $n(t)e^{-t}$.
The case b = 1, σ 1 = 0 corresponds to the random REM, where there is just a random number of iid random variables of variance one present.

Remark.
We have decided to write this paper only for the case of two speeds. It is fairly straightforward to extend our results to the general case of piecewise constant speed with a fixed number of change points. The details will be presented in a separate paper [14]. The general case of variable speed still offers more challenges, in spite of recent progress [16,17].

1.2. Outline of the proof. The proof of our result follows the strategy used in [3]. The main difference is that we show that particles that will reach the level of the extremes at time $t$ must, at the time of the speed change, $bt$, lie in a $\sqrt t$-neighbourhood of a value $\sqrt 2(\sigma_2-1)bt$ below the straight line of slope $\sqrt 2$. This is done in Section 2. Then two pieces of information are needed: in Section 3 we get precise bounds on the probabilities of BBM to reach values at excessively large heights, and more generally we control the behaviour of solutions of the F-KPP equation very far ahead of the travelling wave front. The final result comes from combining this information with the precise distribution of particles at the time of the speed change. This is done in Section 4 by proving the convergence of a certain martingale, analogous to, but distinct from, the derivative martingale that appears in normal BBM. The identification of this martingale and the proof of its $L^1$ convergence is the key idea. Using this information, in Sections 5 and 6 the convergence of the maximum and of the Laplace functional of the extremal process are proven, much along the lines of [3]. Section 7 provides various characterisations of the limiting process, as in [3]. In particular, we describe the extremal process in terms of an auxiliary process, constructed from a Poisson point process with an unusual intensity; to these atoms we attach BBMs with negative drift. Interestingly, the process of the cluster extremes of this auxiliary process is again Poisson, with random intensity driven by the new martingale. The results stated above then follow from looking at the clusters from their maximal points. In the final Section 8, we give the simple proof of Theorem 1.2.

Acknowledgements. This paper uses many of the ideas from the long collaboration with Louis-Pierre Arguin and Nicola Kistler. We are very grateful for their input. A.B. thanks Ofer Zeitouni for discussions.
Finally, we thank an anonymous referee for a very careful reading of our paper and for numerous valuable suggestions.

POSITION OF EXTREMAL PARTICLES AT TIME bt
The key to understanding the behaviour of the two-speed BBM is to control the positions at time $bt$ of the particles that are in the top at time $t$. This is done using Gaussian estimates.

Proposition 2.1. Let $\sigma_1<\sigma_2$. For any $d\in\mathbb R$ and any $\epsilon>0$, there exists a constant $A>0$ such that for all $t$ large enough (2.1) holds.

Proof. Using a first-order Chebyshev inequality, we bound (2.1), where $P_{w_2}$ denotes the law of the random variable $w_2$. Introducing into the last line the identity in the form (2.3), we can write it as $(R1)+(R2)$. We first show that $\lim_{t\to\infty}(R1)=0$, using the standard Gaussian tail estimate. For $(R2)$ we can use (2.4) again to show that $(R2)$ is smaller than (2.6). We change variables $w=\sqrt 2\sigma_1\sqrt{bt}+z$. Then the integral in (2.6) can be bounded from above by $M$, where $M$ is some positive constant. The resulting bound (2.7) can be made as small as desired by taking $A$ (and thus $A'$) sufficiently large.
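The standard Gaussian tail estimate invoked here is the textbook bound, recorded for the reader's convenience: for a standard normal $X$ and $a>0$,

```latex
\frac{1}{\sqrt{2\pi}}\Big(\frac{1}{a}-\frac{1}{a^{3}}\Big)e^{-a^{2}/2}
\;\le\;\mathbb P(X>a)\;\le\;\frac{1}{\sqrt{2\pi}\,a}\,e^{-a^{2}/2}.
```

Both sides follow from integrating $e^{-x^2/2}$ by parts; for the Gaussian computations in this section only the exponential order $e^{-a^2/2}$ and the polynomial prefactor matter.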
Remark. The point here is that since $\sigma_1^2<\sigma_1$, these particles are way below $\max_{k\le n(bt)}x_k(bt)$, which is near $\sqrt 2\sigma_1 bt$. The offspring of these particles that want to be on top at time $t$ will have to travel much faster (at speed $\sqrt 2\sigma_2^2$, rather than just $\sqrt 2\sigma_2$) than normal. Fortunately, there are lots of particles to choose from. We will have to control precisely how many.
We need a slightly finer control on the path of the extremal particles up to the time of the speed change. To this end we define two sets on the space of paths, $X:\mathbb R_+\to\mathbb R$ (the sets $\mathcal G_{s,A,\gamma}$ and $\mathcal T_{r,s}$ used below). The first controls that the position of the path stays in a certain tube up to time $s$, and the second controls the position of the particle at time $s$.
Recall [7] that the ancestral path from $0$ to $x_k(s)$ can be written as $x_k(q)=\frac qs x_k(s)+z_k(q)$, where $z_k$ is a Brownian bridge from $0$ to $0$ in time $s$, independent of $x_k(s)$. We need the following simple fact about Brownian bridges.

Lemma 2.2. For all $\gamma>1/2$, the following is true: for all $\epsilon>0$ there exists $r$ such that (2.9) holds.

Proposition 2.3. Let $\sigma_1<\sigma_2$. For any $d\in\mathbb R$, $A>0$, $\gamma>\frac12$ and any $\epsilon>0$, there exists a constant $B>0$ such that, for all $t$ large enough, (2.10) holds.

Proof. For $B$ and $t$ sufficiently large, the probability in (2.10) is bounded from above by (2.11). Let $w_1$ and $w_2$ be independent $N(0,1)$-distributed random variables and $z$ a Brownian bridge starting in zero and ending in zero at time $bt$. Using a first moment method as in the proof of Proposition 2.1, together with the independence of the Brownian bridge from its endpoint, one sees that (2.11) is bounded from above as claimed, where the last bound follows from Lemma 2.2 (with $\epsilon$ replaced by $\epsilon/M$) and the bound (2.7) obtained in the proof of Proposition 2.1.

Proposition 2.4.
Let σ 1 < σ 2 . For any d ∈ R, A, B > 0, γ > 1 2 and any ǫ > 0, there exists a constant r > 0 such that for all t large enough Proof. The proof of this proposition is essentially identical to the proof of Proposition 2.3.

ASYMPTOTIC BEHAVIOUR OF BBM
Let $\bar x(t)$ denote a standard BBM. We are interested in the asymptotic behaviour of $\mathbb P\big(\max_{k\le n(t)}\bar x_k(t)>\sqrt 2\sigma_2 t+y\big)$, which is governed by the solution of the F-KPP equation with initial condition $u(0,x)=\mathbb 1_{x<0}$, evaluated at values of $x$ far ahead of the travelling wave front. We are more generally interested in the behaviour of solutions for such large values of $x$. The following proposition is an extension of Lemma 4.5 in [3] to these values of $x$.
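For the reader's orientation we recall (our transcription, in the convention of [6,3]) the F-KPP equation connected to BBM via McKean's representation, under which $u(t,x)=\mathbb P\big(\max_{k\le n(t)}\bar x_k(t)>x\big)$ solves

```latex
\partial_t u \;=\; \tfrac12\,\partial_x^2 u \;+\;(1-u)\;-\;\sum_{k\ge1}p_k\,(1-u)^k,
\qquad u(0,x)=\mathbb 1_{x<0},
```

which for binary branching ($p_2=1$) reduces to the logistic form $\partial_t u=\frac12\partial_x^2u+u(1-u)$, whose fronts travel at speed $\sqrt 2$.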

Proposition 3.1. Let u be a solution to the F-KPP equation with initial data satisfying
where C(a) is a strictly positive constant. The convergence is uniform for a in compact intervals.
Next we show that we can use dominated convergence to take the limit $t\to\infty$ inside the integral. First, the integrand is bounded, where $B>0$. As was shown by Bramson [6] (and used in [3]), the solution of the F-KPP equation can be bounded by the solution $u^{(2)}(t,x)$ of the linearised F-KPP equation with the same initial condition $u^{(2)}(0,x)=u(0,x)$. Moreover, there exists $y_0$ such that the corresponding bound holds, where $B(r)$ is a constant that depends only on $r$. Hence we can apply dominated convergence to (3.6) and obtain the claim. This proves the proposition.

THE MCKEAN MARTINGALE
In this section we pick up an idea of [15] and consider a suitable convergent martingale for the time-inhomogeneous BBM with $\sigma_1<\sigma_2$. Let $x_i(s)$, $1\le i\le n(s)$, be the particles of a BBM where the Brownian motions have variance $\sigma_1^2$ with $\sigma_1^2<1$. Define
$$Y_s\equiv\sum_{i=1}^{n(s)}e^{-(1+\sigma_1^2)s+\sqrt 2\,x_i(s)}.$$
This turns out to be a uniformly integrable martingale that converges almost surely to a positive limit $Y$.
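Writing the martingale as $Y_s=\sum_{i=1}^{n(s)}e^{-(1+\sigma_1^2)s+\sqrt2\,x_i(s)}$ (our transcription of the definition), the normalisation can be checked by a many-to-one computation: since $\mathbb E[n(s)]=e^{s}$ and each $x(s)\sim\mathcal N(0,\sigma_1^2 s)$,

```latex
\mathbb E[Y_s]
\;=\;\mathbb E[n(s)]\;e^{-(1+\sigma_1^2)s}\;\mathbb E\big[e^{\sqrt2\,x(s)}\big]
\;=\;e^{s}\,e^{-(1+\sigma_1^2)s}\,e^{\sigma_1^2 s}\;=\;1 .
```

For $\sigma_1=1$ this reduces to the mean-one martingale $\sum_i e^{\sqrt2\,x_i(s)-2s}$ originally considered by McKean, and for $\sigma_1=0$ to $e^{-s}n(s)$.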
Remark. Note that, in terms of statistical mechanics, $Y_s$ can be thought of as a normalised partition function at inverse temperature $\sqrt 2\sigma_1$ (for ordinary BBM). Here the critical value is $\sqrt 2$, so that we are in the high-temperature case. In the case of the GREM, where the underlying tree is deterministic, this quantity is even known to converge to a constant [4]. The assertion of this theorem follows immediately from the following proposition.
Remark. We would like to call this martingale the McKean martingale, since McKean [18] had originally conjectured that this martingale (with $\sigma_1=1$) appears in the representation of the extremal distribution of BBM. As Lalley and Sellke showed, this is wrong: it is actually the derivative martingale that appears there. We find it nice to see that in the time-inhomogeneous case with $\sigma_1<1$, McKean turns out to be right! We will see in the proof that the uniform integrability of this martingale breaks down at $\sigma_1=1$.
Remark. Note further that if $\sigma_1=0$, then $Y_t=e^{-t}n(t)$, which is well known to converge to an exponential random variable.
It remains to show that $Y_s$ is uniformly integrable. We will write, abusively, $x_k(r)$ for the ancestor of $x_k(s)$ at time $r\le s$, and write $x_k$ for the entire ancestral path of $x_k(s)$. Define the truncated variable $Y_s^A$. A simple computation using the independence of $x_k(s)$ and the bridge yields a bound that can be made as small as desired by taking $A$ and $r$ to infinity. The key point is that the second moment of $Y_s^A$ is uniformly bounded in $s$. We start by controlling $(T1)$.
Now we control $(T2)$. By the sometimes so-called "many-to-two lemma" (see e.g. [6], Lemma 10), and dropping the unneeded parts of the conditions on the Brownian bridges, we obtain a bound in which we change variables in $w$. The integral with respect to $w$ is bounded by $1$. Hence (4.12) is bounded from above by the remaining integral over $q$. We split the integral over $q$ into the three parts $R_1$, $R_2$, and $R_3$, according to the integration from $0$ to $r$, from $r$ to $s-r$, and from $s-r$ to $s$, respectively. For $R_1$, the integral over $z$ can only be bounded by one. $R_3$ can be treated in the same way as $R_2$. Putting all three estimates together, we see that $\sup_s\mathbb E\big[(Y_s^A)^2\big]\le D_2(r)$. From this it follows that $Y_s$ is uniformly integrable. Namely, the last term in (4.18) is also bounded by $\mathbb E|Y_s-Y_s^A|$. Choosing now $A$ and $r$ such that $\mathbb E|Y_s-Y_s^A|\le\epsilon/3$, and then $z$ so large that $\frac2z D_2(r)\le\epsilon/3$, we obtain that $\mathbb E[Y_s\mathbb 1_{Y_s>z}]\le\epsilon$ for large enough $z$, uniformly in $s$. Thus $Y_s$ is uniformly integrable, as we wanted to show.

We will also need to control the processes $\tilde Y_{s,\gamma}^A=\sum_{i=1}^{n(s)}e^{-(1+\sigma_1^2)s+\sqrt 2\,x_i(s)}\mathbb 1_{x_i\in\mathcal G_{s,A,\gamma}}$.

Lemma 4.3. The family of random variables $\tilde Y_{s,\gamma}^A$, $s,A\in\mathbb R_+$, $1>\gamma>1/2$, is uniformly integrable and converges, as $s\uparrow\infty$ and $A\uparrow\infty$, to $Y$, both in probability and in $L^1$.

Proof. The proof of uniform integrability is a rerun of the proof of Proposition 4.2, noting that the bounds on the truncated second moments are uniform in $A$. Moreover, the same computation as in Eq. (4.6) shows that $\mathbb E|Y_s-\tilde Y_{s,\gamma}^A|\le\epsilon$, uniformly in $s$, for $A$ large enough. Therefore $Y_s-\tilde Y_{s,\gamma}^A$ converges to zero in probability. Since $Y_s$ converges to $Y$ almost surely, we arrive at the second assertion of the lemma.

CONVERGENCE OF THE MAXIMUM OF TWO-SPEED BBM
Using the results established in the last three sections, we now show the convergence of the law of the maximum of two-speed BBM in the case $\sigma_1<\sigma_2$.
Theorem 5.1. Let $\{x_k(t),\,1\le k\le n(t)\}$ be the particles of a time-inhomogeneous BBM with $\sigma_1<\sigma_2$ and the normalising assumption $b\sigma_1^2+(1-b)\sigma_2^2=1$. Then
$$\lim_{t\uparrow\infty}\mathbb P\Big(\max_{k\le n(t)}x_k(t)-\tilde m(t)\le y\Big)=\mathbb E\Big[e^{-C'Ye^{-\sqrt 2 y}}\Big],$$
where $C'$ is a positive constant and $Y$ is the limit of the McKean martingale.

Proof. Denote by $\{x_i(bt),\,1\le i\le n(bt)\}$ the particles of a BBM with variance $\sigma_1^2$ at time $bt$, and by $\mathcal F_{bt}$ the $\sigma$-algebra generated by this BBM. Moreover, for each $1\le i\le n(bt)$, we consider the offspring of particle $i$. Let us first observe that, by the analogue of Theorem 1.1 of [10] for two-speed BBM (as pointed out in [11], the arguments used for branching random walks all carry over to BBM), we know that the maximum of our process is not too small, namely that for any $\epsilon>0$ there exists $d<\infty$ such that the probability that the maximum falls below is small. Therefore we may restrict to $x_k(t)-\tilde m(t)\le y+\epsilon/2$.
On the other hand, by Proposition 2.1, we have that there exists $A<\infty$ such that (5.5) holds. Combining (5.4) and (5.5) gives the upper bound. Of course, the corresponding lower bound holds without the $\epsilon$.
Observe that the last probability in (5.7) is equal to the corresponding quantity for a standard BBM, where the $\bar x^i_j$ are the particles of a standard BBM. Using Proposition 3.1 for time $(1-b)t$ and $u(t,x)=\mathbb P\big(\max_j\bar x^i_j(t)>x\big)$, we can write the probabilities in the last line of (5.8) in terms of this solution. Now all the $x_i(bt)$ that appear are of the required form. But then, by Proposition 3.1, this is equal to an expectation over the particles $1\le i\le n(bt)$. For particles with $x_i\in\mathcal G_{bt,A,1/2}$ we have uniform bounds. Observe that the right-hand side of Eq. (5.16) tends to $0$ as $t\uparrow\infty$, since $\sigma_2^2>1$. Hence (5.15) is equal to an expectation over $1\le i\le n(bt)$. Expanding the square in the exponent in the last line and keeping only the relevant terms yields terms involving $\sqrt 2 y+t\sigma_2^2$. The terms up to the last one would nicely combine to produce the McKean martingale as the coefficient of $C(a)$. However, the last terms are of order one and cannot be neglected. To deal with them, we split the process at time $b\sqrt t$. We write, somewhat abusively, $x_i(b\sqrt t)$ for the ancestor at time $b\sqrt t$ of the particle that at time $t$ is labelled $i$, if we think backwards from time $t$, while the labels of the particles at time $b\sqrt t$ run only over the distinct particles, i.e. up to $n(b\sqrt t)$, if we think in the forward direction. No confusion should occur if this is kept in mind.
Using Propositions 2.3 and 2.4 we can further localise the path of the particles. Recalling the definitions of $\mathcal G_{s,A,\gamma}$ and $\mathcal T_{r,s}$, we rewrite (5.17), up to a term of order $\epsilon$, in localised form, and we can rewrite the terms multiplying $C(a)$ in (5.19) as (5.21). Using elementary inequalities we are able to bound (5.21) from below and above. First, there appears the truncated McKean martingale defined in (4.11); note that its second moment is bounded by $D_2(r)$ (see (4.19)). Second, we compute the conditional expectation. Performing the change of variables $z=w+K_t$, this becomes a Gaussian integral. Using Lemma 2.2, together with the independence of the Brownian bridge from its endpoint, we obtain that the right-hand side of (5.26), multiplied by an additional factor $(1-\epsilon)$, is also a lower bound. Comparing this to (5.27), one sees that the difference tends to zero uniformly as $t\uparrow\infty$. Thus the second moment term is negligible. Hence we only have to control $Y^B_{b\sqrt t,\gamma}$, which converges in probability and in $L^1$ to the random variable $Y$ when we let first $t$ and then $B$ tend to infinity. Since $Y^B_{b\sqrt t,\gamma}\ge 0$ and $C(a)>0$, the claimed convergence follows.
Finally, letting $r$ tend to $+\infty$, all the $\epsilon$-errors (which are still implicitly present) vanish. This concludes the proof of Theorem 5.1.

EXISTENCE OF THE LIMITING PROCESS
The following existence theorem is the basic step in the proof of Theorem 1.1.
Theorem 6.1. Let $\sigma_1<\sigma_2$. Then the point process $\mathcal E_t=\sum_{k\le n(t)}\delta_{x_k(t)-\tilde m(t)}$ converges in law to a non-trivial point process $\mathcal E$.
Proof. It suffices to show that, for positive $\phi\in C_c(\mathbb R)$, the Laplace functionals of the processes $\mathcal E_t$ converge. First observe that the limit cannot be zero, since the maximum of the time-inhomogeneous BBM converges by Theorem 5.1. As for standard BBM (see e.g. [3]), one obtains a bound which implies the local finiteness of the limiting point process. As in [3] we decompose the Laplace functional. Hence it remains to analyse the behaviour of $\Psi_t^{<\delta}(\phi)$. We claim that the limit exists, where $\{\bar x_j(t),\,1\le j\le n(t)\}$ are the particles of a standard BBM with variance 1.
We observe that, by [18], $u_\delta(t,x)$ solves the F-KPP equation (3.2) with initial condition $u_\delta(0,x)=1-g_\delta(x)$. Next we verify Assumptions (i)-(iv) of Proposition 3.1. (i) is clear. Moreover, $g_\delta(x)=1$ for $x$ large enough, and $g_\delta(x)=0$ for $-x$ large enough, so that Conditions (ii)-(iv) of Proposition 3.1 are satisfied. Now, for each $i$, the $\bar x^i_j$ are the particles of iid standard BBMs. By Proposition 3.1 and the same calculations as in the proof of Theorem 5.1, we have that this converges, as $t\to\infty$, to the claimed limit, where $C(a,\phi,\delta)$ is the constant that appears in Lemma 3.2, with initial condition $g_\delta(z)$, i.e.
Then $\lim_{\delta\to\infty}C(a,\phi,\delta)$ exists.

Proof. First we note (6.19). To see this, note that for any $K<\infty$, (6.20) holds. Hence the bound holds for any $K<\infty$. Since this holds for all $K$, Eq. (6.19) follows. It remains to control the limit as $\delta\uparrow\infty$ of the right-hand side of (6.19). But an exact rerun of the proof of Lemma 4.10 in [3], using Lemma 6.4 below instead of Lemma 4.8 of [3], yields that the limit exists. By (6.11) and (6.22) we have (6.23). If $F$ were zero, this would converge, for $\delta\to\infty$, to a limit which contradicts (6.15) and Theorem 6.1. Hence $F>0$. Moreover, (6.24) implies (6.18), which concludes the proof of Proposition 6.2.
We recall the following estimate for the tail probabilities of standard BBM.

Moreover, for any bounded continuous function $h(x)$ that is zero for $x$ small enough,
where {x i (t), i ≤ n(t)} are the particles of a standard BBM with variance 1. Here C(a) is the constant from (6.27) for u satisfying the initial condition ½ {x≤0} .
Proof. We have, by a simple change of variables, the identity which proves (6.28). Moreover, (6.28) with initial condition $\mathbb 1_{\{x\le 0\}}$ implies that (6.29) holds for $h(x)=\mathbb 1_{[b,\infty)}(x)$, $b\in\mathbb R$. For general $h$, (6.29) follows in the same way as Lemma 4.11 in [3], by linearity and a monotone class argument.

THE AUXILIARY PROCESS
We define the following auxiliary process, which has the same limiting behaviour as the extremal process of the two-speed BBM. We will denote the law of these processes by $P$ and expectations by $E$. If desired, all ingredients of the auxiliary process can be thought of as being defined on a new probability space. Let $(\eta_i;\,i\in\mathbb N)$ be the atoms of a Poisson point process $\eta$ on $(-\infty,0)$ with a $t$-dependent intensity measure.

Remark. The form of the auxiliary process is similar to the one in the case of standard BBM, but with a different intensity of the Poisson process. In particular, the intensity decays exponentially in $t$. This is a consequence of the fact that the particles at the time of the speed change were forced to be $O(t)$ below the line $\sqrt 2 t$, in contrast to $O(\sqrt t)$ in the case of ordinary BBM. The reduction of the intensity with $t$ reflects the fact that the relevant particles have to be selected from these locations.

Proof. Using the notation $\bar\phi(z)=\phi(\sigma_2 z)$ and the form of the Laplace functional of a Poisson point process, we obtain the desired representation. By Lemma 6.4 we have (7.6), which exists and is strictly positive by Proposition 6.2. This implies that the Laplace functionals of $\lim_{t\to\infty}\Pi_t$ and of the extremal process of the two-speed BBM are equal.
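The formula used in this computation is the standard Laplace functional of a Poisson point process: if $\eta$ is a Poisson point process with intensity measure $\mu$ and $\phi\ge 0$ is measurable, then

```latex
E\Big[\exp\Big(-\sum_i\phi(\eta_i)\Big)\Big]
\;=\;\exp\Big(-\int\big(1-e^{-\phi(x)}\big)\,\mu(dx)\Big).
```

This identity reduces all the Laplace-functional computations for the auxiliary process to one-dimensional integrals against the intensity measure.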
The following proposition shows that in spite of the different Poisson ingredients, when we look at the process of the extremes of each of the x i (t), we end up with a Poisson point process just like in the standard BBM case.

Proposition 7.2. Define the point process
where $P_Y$ is the Poisson point process on $\mathbb R$ with intensity measure given below.

Proof. We consider the Laplace functional of $\Pi_t^{\mathrm{ext}}$. Let $M^{(i)}(t)=\max_k\bar x^{(i)}_k(t)$ and, as before, $\bar\phi(z)=\phi(\sigma_2 z)$. We want to show the convergence of this functional. Since $\eta$ is a Poisson point process and the $M^{(i)}$ are i.i.d., we obtain a product representation, where $M(t)$ has the same distribution as any one of the variables $M^{(i)}(t)$. Now we apply Lemma 6.4 with $h(z)=1-e^{-\bar\phi(z)}$. Hence the result follows by using that $\bar\phi(z)=\phi(\sigma_2 z)$ and $\sqrt 2+a=\sqrt 2\sigma_2$, together with the change of variables $x=\sigma_2 z$.
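A Poisson point process with an exponential intensity of the form $CYe^{-\sqrt2 y}dy$ can be sampled explicitly on a half-line $[y_0,\infty)$, since the total mass there is finite. The following sketch is ours; the constant C, the martingale value Y, and the cutoff y0 are hypothetical inputs:

```python
import numpy as np

def sample_ppp_exponential_intensity(C, Y, y0, seed=0):
    """Sample the atoms of a PPP with intensity C*Y*exp(-sqrt(2)*y) dy on [y0, inf).

    The total mass above y0 is (C*Y/sqrt(2)) * exp(-sqrt(2)*y0); given the number
    of atoms, they are iid of the form y0 + Exp(sqrt(2)).
    """
    rng = np.random.default_rng(seed)
    mass = C * Y / np.sqrt(2) * np.exp(-np.sqrt(2) * y0)
    n = rng.poisson(mass)                                   # number of atoms above y0
    return y0 + rng.exponential(1.0 / np.sqrt(2), size=n)   # their positions

atoms = sample_ppp_exponential_intensity(C=1.0, Y=2.0, y0=-3.0, seed=1)
print(len(atoms), atoms.min())
```

The random intensity of the mixture process is captured by feeding in a realisation of Y; conditionally on Y, the process is an ordinary Poisson point process.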
The following proposition states that the Poisson points of the auxiliary process that contribute to the limiting process come from a neighbourhood of $-at$.
Proof. By a first-order Chebyshev inequality we obtain (7.12), by the change of variables $x\to -x$. Using the asymptotics of Lemma 6.3, we can bound (7.12) from above by a Gaussian integral, which can be made as small as we want by choosing $B$ large enough. Similarly one bounds the complementary term.

Proposition 7.4. Let $x=-at+o(t)$. Then $\bar{\mathcal E}_t$ converges in law under $P\big(\,\cdot\mid\max_k\bar x_k(t)-\sqrt 2 t-x>0\big)$, as $t\to\infty$, to a well-defined point process $\bar{\mathcal E}$. The limit does not depend on $x$, and the maximum of $\bar{\mathcal E}$ shifted by $x$ has the law of an exponential random variable with parameter $\sqrt 2+a$.
Proof. Set $\bar{\mathcal E}_t=\sum_k\delta_{\bar x_k(t)-\sqrt 2 t-x}$. To see (7.16), we rewrite the conditional probability and use the uniform bounds of Proposition 4.3 in [3]. Observing the identity involving $\Psi$, which is defined in Equation (3.4), we get (7.16) by first taking $t\to\infty$ and then $r\to\infty$. The general claim of Proposition 7.4 follows from (7.16) in exactly the same way as Theorem 3.4 in [3].
Define the gap process $\mathcal D_t$. Denote by $\xi_i$ the atoms of the limiting process $\bar{\mathcal E}$, i.e. $\bar{\mathcal E}\equiv\sum_j\delta_{\xi_j}$, and define $\mathcal D$ accordingly; $\mathcal D$ is a point process on $(-\infty,0]$ with an atom at $0$.

Corollary 7.5. Let $x=-at+o(t)$. In the limit $t\to\infty$, the random variables $\mathcal D_t$ and $x+\max\bar{\mathcal E}_t$ are conditionally independent on the event $\{x+\max\bar{\mathcal E}_t>b\}$, for any $b\in\mathbb R$. More precisely, for any bounded functions $f,h$ and $\bar\phi\in C_c(\mathbb R)$, the corresponding factorisation holds.

Proof. The proof is essentially identical to that of Corollary 4.12 in [3]. Let us outline, for the benefit of the reader, the structure of the proof. First, by Proposition 7.4, the pair $(\bar{\mathcal E}_t,\max\bar{\mathcal E}_t-x)$ converges, under the law conditioned on $\max\bar{\mathcal E}_t-x>0$, to $(\mathcal E,e)$, where $e$ is an exponential random variable with parameter $\sqrt 2+a$ and $\mathcal E$ is independent of the precise value of the conditioning. A general continuity lemma, stated and proven as Lemma 4.13 in [3], shows that this implies the convergence of the processes $(\mathcal D_t,\max\bar{\mathcal E}_t-x)$ to a pair $(\mathcal D,e)$, where $\mathcal D_t$, given in (7.18), is related to $\bar{\mathcal E}_t$ by a random shift of its atoms. The fact that $\mathcal D$ and $e$ are independent follows from an explicit computation, just as in the proof of Corollary 4.12 in [3]. We do not repeat the details.
Finally we come to the description of the extremal process as seen from the Poisson process of cluster extremes, which is the formulation of Theorem 1.1.

Theorem 7.6. Let $P_Y$ be as in (7.8) and let $\{\mathcal D^{(i)},\,i\in\mathbb N\}$ be a family of independent copies of the gap process (7.19) with atoms $\Lambda^{(i)}_j$. Then the point process $\mathcal E_t$ converges in law, as $t\to\infty$, to the Poisson cluster point process $\mathcal E$ given by (7.21).

Proof. This proof, too, is very close to that of Theorem 2.1 in [3]. First note that the Laplace functional of the process $\mathcal E$ is given by (7.22). Thus, by Theorem 7.1, we have to show that the Laplace functionals of the processes $\Pi_t$ converge to this expression. In the proof of that theorem we have seen (7.23). We rewrite it, where $f(x)=1-e^{-x}$, $T_z\bar\phi(x)=\bar\phi(z+x)$, and $f(0)=0$. Using the localisation estimate of Proposition 7.3, we have that (7.24) is equal to (7.25), where $\lim_{B\to\infty}\sup_{t\ge t_0}\Omega_t(B)=0$. Let $m_{\bar\phi}$ be the minimum of the support of $\bar\phi$. We observe that the integrand vanishes when $z+\max\bar{\mathcal E}_t<m_{\bar\phi}$. Moreover, $P\big(z+\max\bar{\mathcal E}_t=m_{\bar\phi}\big)=0$. Hence, by Corollary 7.5, for $z$ in the range of integration in (7.25), on the event we are conditioning on in (7.27), the random variables $\mathcal D_t$ and $\max\bar{\mathcal E}_t+z-m_{\bar\phi}$ converge to independent random variables $(\mathcal D,e)$, where $e$ is exponential with parameter $\sqrt 2+a$. Note that the resulting expression is independent of $z$. Thus it remains to compute the integral of $P\big(z+\max\bar{\mathcal E}_t>m_{\bar\phi}\big)$. But this converges to $e^{-(\sqrt 2+a)m_{\bar\phi}}$ by (6.28) in Lemma 6.4, together with the localisation estimates of Proposition 7.3 (which this time allow us to re-extend the range of integration). Putting this together with (7.28), and changing variables $x=\sigma_2 z$, shows that the right-hand side of (7.23) is indeed equal to the right-hand side of (7.22). This proves the theorem.

THE CASE σ 1 > σ 2
In this section we prove Theorem 1.2. The existence of the process $\mathcal E$ from (1.15) will be a by-product of the proof.
The following result is contained in the calculation of the maximal displacement in [10]; see (8.3). Using that the maximal displacement is $\tilde m(t)$ in this case, we can proceed as in the proof of Theorem 6.1 up to (6.9), and only have to control (8.4), where the $\bar x^i_j$ are the particles of a standard BBM at time $(1-b)t$ and the $x_i(bt)$ are the particles of a BBM with variance $\sigma_1^2$ at time $bt$. Using Lemma 8.1 and Theorem 1.2 of [10] as in the proof of Theorem 5.1 above, we obtain that (8.4), for $t$ sufficiently large, equals (8.5).

The rest of the proof has an iterated structure. In a first step we show that, conditioned on $\mathcal F_{bt}$, the expectation in (8.7) can be written as a product (we ignore the error term $o(1)$, which is easily controlled using the fact that the probability that the number of terms in the product is larger than $N$ tends to zero as $N\uparrow\infty$, uniformly in $t$). By the same argument as in the standard BBM setting (see [3]), one obtains that
$$\tilde C(Z,C(\phi,\delta))\equiv\lim_{D\uparrow\infty}\tilde C(D,Z,C(\phi,\delta))=\lim_{t\uparrow\infty}\sqrt{\tfrac 2\pi}\int_0^\infty v_\delta(t,y+\sqrt 2 t)\,y\,e^{\sqrt 2 y}\,dy,\qquad (8.14)$$
where $v_\delta$ is the solution of the F-KPP equation with initial condition $v_\delta(0,z)=1-h_\delta(z)$, with $h_\delta(z)=\mathbb E\exp\big(-C(\phi,\delta)Ze^{\cdot}\big)$.

The next step is to take the limit $\delta\to\infty$. Using Lemma 4.10 of [3], we have that $C(\phi,\delta)$ is monotone decreasing in $\delta$ and $\lim_{\delta\to\infty}C(\phi,\delta)=C(\phi)$ exists and is strictly positive. To see that the constants $\tilde C(Z,C(\phi))$ are strictly positive, one uses that the Laplace functionals $\Psi_t(\phi)$ are bounded from above by
$$\mathbb E\exp\Big(-\phi\Big(\max_{i\le n(bt)}x_i(bt)+\max_j\bar x^i_j((1-b)t)\Big)\Big).\qquad (8.19)$$
Here we used that the offspring of any of the particles at time $bt$ have the same law, so the sum of the two maxima in the expression above has the same distribution as the largest descendant at time $t$ of the largest particle at time $bt$. The limit of Eq. (8.19) as $t\uparrow\infty$ exists and is strictly smaller than $1$, by the convergence in law of the recentred maximum of a standard BBM. But this implies the positivity of the constants $\tilde C$. Hence a limiting point process exists.
Finally, one may easily check that the right-hand side of (8.18) coincides with the Laplace functional of the point process defined in (1.15), by essentially repeating the computations above.
Remark. Note that, in particular, the structure of the variance profile is contained in the constant $\tilde C(D,Z,C(\phi,\delta))$, and that the information on the structure of the limiting point process is also contained in this constant. In fact, we see that in all cases we have considered in this paper, the Laplace functional of the limiting process has the form
$$\mathbb E\big[e^{-C(\phi)M}\big],\qquad (8.20)$$
where $M$ is a martingale limit (either $Y$ or $Z$) and $C$ is a map from the space of positive continuous functions with compact support to the real numbers. This map contains all the information on the specific limiting process. This is compatible with the findings in [16] in the case where the speed is a concave function of $s/t$. The universal form (8.20) is thus somewhat misleading: without knowledge of the specific form of $C(\phi)$, it contains almost no information.