Probability tilting of compensated fragmentations

Fragmentation processes are part of a broad class of models describing the evolution of a system of particles which split apart at random. These models are widely used in biology, materials science and nuclear physics, and their asymptotic behaviour at large times is interesting both mathematically and practically. The spine decomposition is a key tool in the study of such processes. In this work, we consider the class of compensated fragmentations, or homogeneous growth-fragmentations, recently defined by Bertoin. We give a complete spine decomposition of these processes in terms of a Lévy process with immigration, and apply our result to study the asymptotic properties of the derivative martingale.


Introduction
Fragmentation processes offer a random model for particles which break apart as time passes. Informally, we imagine a single particle, characterised by its mass, which after some random time splits into two or more daughter particles, distributing its mass between them according to some law. The new particles act independently of one another and evolve in the same way. Variants of such processes have been studied over many years, with applications across the natural sciences [22, 3, 17]. One large class of fragmentation models, encompassing the so-called homogeneous fragmentation processes, has been particularly successful, and a comprehensive discussion can be found in the book of Bertoin [6].
Compensated fragmentation processes were defined by Bertoin [7] as a generalisation of homogeneous fragmentations, and permit high-intensity fragmentation and Gaussian fluctuations of the sizes of fragments. The processes arise as the limits of homogeneous fragmentations under dilation [7, Theorem 2], and may also be thought of as being related to a type of branching Lévy process, for which the branching occurs at the jump times of the process. From this viewpoint, they may be regarded as the simplest example in the class of so-called Markovian growth-fragmentation processes [8], and for this reason they are sometimes called homogeneous growth-fragmentation processes. Other examples in the class of Markovian growth-fragmentations can be obtained by slicing planar random maps with boundary, as discovered by Bertoin et al. [14], or by considering the destruction of an infinite recursive tree, as in Baur and Bertoin [2].
The main purpose of this work is to give a complete spine decomposition for compensated fragmentation processes. This is motivated by the many applications that such decompositions have found in proving powerful results across the spectrum of branching process models. Since the foundational work of Lyons et al. [34] on 'conceptual' proofs of the L log L criterion for Galton-Watson processes, a large literature has emerged, of which we offer here only a selection, focusing on the applications we have in mind.
In the context of branching random walks, the spine decomposition has been used to prove martingale convergence theorems and to study the asymptotics, fluctuations and genealogy of the largest particle; see [38] for a detailed monograph with historical references. For branching Brownian motion, spine techniques were used by Chauvin and Rouault [20] to describe asymptotic presence probabilities, and by Kyprianou [32] and Yang and Ren [40] to study solutions of reaction-diffusion equations of Fisher-Kolmogorov-Petrowski-Piscounov (FKPP) type. In the context of superprocesses, we mention the study of strong laws of large numbers by Eckhoff et al. [25], which also contains a thorough review of the literature.
Spine techniques have lent themselves well to the study of homogeneous (pure) fragmentation processes. Convergence theorems were proved by Bertoin and Rouault [11], and the decomposition was used by Haas [26] to study the fragmentation equation, by Harris et al. [28] for the proof of strong laws of large numbers, and by Berestycki et al. [4] to look at solutions of FKPP equations. Returning to the topic of growth-fragmentation processes, Bertoin et al. [14, §4] used a spine decomposition in order to study certain random planar maps, and the results presented in this article overlap with theirs under certain parameter choices (see Remark 5.3). Bertoin and Stephenson [12, §3.2] gave an explicit decomposition for compensated fragmentation processes in the case of finite fragmentation rate and applied it to the phenomenon of local explosion, and Bertoin and Watson [13, §6] made implicit use of a spine decomposition in studying the growth-fragmentation equation.
Our object of study is the compensated fragmentation process Z = (Z(t), t ≥ 0), where Z(t) = (Z_1(t), Z_2(t), …) is a nonincreasing sequence of nonnegative numbers. The values Z_1(t), Z_2(t), … are regarded as the ranked sizes of fragments as seen at time t. Unless otherwise specified, we will assume that Z(0) = (1, 0, …).
The law of Z is characterised by a triple (a, σ, ν) of characteristics, where a ∈ R, σ ≥ 0 and ν is a measure on the space of mass-partitions

P = {p = (p_1, p_2, …) : p_1 ≥ p_2 ≥ ⋯ ≥ 0, ∑_{i≥1} p_i ≤ 1},

satisfying the moment condition

∫_P (1 − p_1)² ν(dp) < ∞.    (1)

Loosely speaking, a describes deterministic growth or decay of the fragments and σ describes the magnitude of Gaussian fluctuations in their sizes. The measure ν is called the dislocation measure, and ν(dp) represents the rate at which a fragment of size x splits into a cloud of particles of sizes xp_1, xp_2, …. The connection between Z and the triple is given by the cumulant κ, which is defined by the equation e^{tκ(q)} = E[∑_{i≥1} Z_i(t)^q]. It is given by the following expression, akin to the Lévy-Khintchine formula for Lévy processes:

κ(q) = aq + σ²q²/2 + ∫_P (∑_{i≥1} p_i^q − 1 + q(1 − p_1)) ν(dp).    (2)

The function κ takes values in R ∪ {∞}. We regard dom κ := {q ∈ R : κ(q) < ∞} as the function's domain. Condition (1) entails that q ∈ dom κ if and only if ∫_P ∑_{i≥2} p_i^q ν(dp) < ∞, and that [2, ∞) ⊂ dom κ. One notable property of κ is that it is strictly convex and smooth on the interior of dom κ.
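As a concrete illustration of the cumulant κ(q) = aq + σ²q²/2 + ∫_P (∑_i p_i^q − 1 + q(1 − p_1)) ν(dp), here is a small numerical check of its strict convexity for a hypothetical discrete dislocation measure with a single binary atom p = (0.6, 0.4) of unit rate; the characteristics a = 0, σ = 1 and the atom are illustrative assumptions, not taken from the paper.

```python
import math

def kappa(q, a=0.0, sigma=1.0, atoms=(((0.6, 0.4), 1.0),)):
    """Cumulant kappa(q) = a*q + sigma^2 q^2/2 + sum over atoms (p, rate) of
    rate * (sum_i p_i^q - 1 + q*(1 - p_1)), for a toy discrete measure nu."""
    jump = sum(rate * (sum(pi**q for pi in p) - 1.0 + q * (1.0 - p[0]))
               for p, rate in atoms)
    return a * q + 0.5 * sigma**2 * q**2 + jump

# Strict convexity, checked via positive second differences on a grid.
h = 0.1
qs = [0.5 + 0.1 * k for k in range(30)]
second_diffs = [kappa(q + h) - 2 * kappa(q) + kappa(q - h) for q in qs]
```

Here kappa(0) equals the total dislocation rate (1.0 for this toy measure), since each atom contributes its rate at q = 0.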
If the measure ν satisfies the stronger moment condition ∫_P (1 − p_1) ν(dp) < ∞, and σ = 0, then κ is the cumulant of a homogeneous fragmentation process Z in the sense of [6], with additional deterministic exponential growth or decay.
We shall prove a spine decomposition for Z under a change of measure. In particular, for ω ∈ dom κ, we define the (exponential) additive martingale W(ω, •) as follows:

W(ω, t) = e^{−tκ(ω)} ∑_{i≥1} Z_i(t)^ω, t ≥ 0.

Since this is a unit-mean martingale (see the forthcoming Lemma 2.9), we may define a new, 'tilted' probability measure P_ω, as follows. Fix t ≥ 0, and let A be an event depending only on the path of Z up to time t. Then, define P_ω(A) = E[1_A W(ω, t)]. Our first main result is Theorem 5.2, in which we show that under P_ω, the process Z may be regarded as the exponential of a single spectrally negative Lévy process (the spine) with Laplace exponent κ(• + ω) − κ(ω), onto whose jumps are grafted independent copies of Z (under the original measure P). This is the spine decomposition, also known as a full many-to-one theorem.
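The unit-mean property of the exponential additive martingale can be checked by hand in a toy discrete-time analogue: a conservative binary fragmentation in which each fragment splits uniformly at random. This simplified model, and its per-step mean m = E[V^ω + (1 − V)^ω] = 2/(ω + 1) for V uniform, is our own illustration, not the compensated fragmentation itself.

```python
import random

def fragments_after(n_steps, rng):
    """Toy discrete-time fragmentation: at each step, every fragment of
    size x splits into (x*V, x*(1-V)) with V uniform on (0, 1)."""
    sizes = [1.0]
    for _ in range(n_steps):
        new = []
        for x in sizes:
            v = rng.random()
            new.extend((x * v, x * (1.0 - v)))
        sizes = new
    return sizes

def additive_martingale(sizes, omega, n_steps):
    # per-step mean of sum_i X_i^omega is m = 2/(omega + 1)
    m = 2.0 / (omega + 1.0)
    return sum(x**omega for x in sizes) / m**n_steps

rng = random.Random(0)
# omega = 1: mass is conserved, so W = 1 exactly along every path.
w1 = [additive_martingale(fragments_after(6, rng), 1.0, 6) for _ in range(50)]
# omega = 2: W is random, but its empirical mean should be close to 1.
w2 = [additive_martingale(fragments_after(6, rng), 2.0, 6) for _ in range(4000)]
mean2 = sum(w2) / len(w2)
```

The ω = 1 case is an exact identity (mass conservation), while the ω = 2 case is a seeded Monte Carlo check of unit mean.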
In order to illustrate the power of this spine decomposition, we study the derivative martingale associated with Z. For ω in the interior of dom κ, this is defined by differentiating W in ω:

∂W(ω, t) := (∂/∂ω) W(ω, t) = e^{−tκ(ω)} ∑_{i≥1} (log Z_i(t) − tκ′(ω)) Z_i(t)^ω.

Since this martingale can take both positive and negative values, it is not immediately obvious whether its limit as t → ∞ exists.
Using our decomposition, we prove our second main result, Theorem 6.1, which states that the derivative martingale converges to a strictly negative limit under certain conditions. This limit is closely related to the process representing the largest fragment of the compensated fragmentation. Our theorem is the counterpart of results on the asymptotics of the derivative martingale which have been found in the context of homogeneous (pure) fragmentation processes [11], branching random walks [15, 38] and branching Brownian motion [32]. In the case of compensated fragmentation processes, Dadoun [23] studied the discrete-time skeletons of the derivative martingale via a branching random walk, and used their convergence to obtain asymptotics for the largest fragment. Our work complements and extends this by showing the almost sure convergence of the martingale in continuous time and showing that the expectation of the terminal value is infinite; we also obtain somewhat weaker conditions. This work lays the foundations for future research in two principal directions. The first concerns more general Markovian growth-fragmentations, and in particular we anticipate that it should be possible to extend the spine decomposition to growth-fragmentations based on generalised Ornstein-Uhlenbeck processes, as studied in [37, 2]. The second concerns applications for the homogeneous processes studied here. Our asymptotic results for the derivative martingale may be used to study the size of the largest fragment and the existence and uniqueness of travelling wave solutions to FKPP equations, much as in [4].
The organisation of this paper is as follows. In section 2, we give a rigorous definition of the branching Lévy process, outlining the truncation argument of [7], and simultaneously define a new labelling scheme for its particles. In section 3, we consider the measure P_ω just presented, additionally distinguishing a single particle by picking from those particles alive at time t in a size-biased way. In section 4, we present a complete construction of a Markov process with a single distinguished particle, which we claim gives the law of the process Z with distinguished particle under P_ω; this claim is then proven in section 5. Finally, we discuss the asymptotic properties of the derivative martingale in section 6.

The branching Lévy process
Our goal in this section is to establish a genealogical structure for the compensated fragmentation process Z, that is, to represent it as a random infinite marked tree. This is what allows us to study the spine decomposition. To be specific, we will define a family of Lévy processes, (Z_u, u ∈ U), labelled by the nodes of a tree U. For t ≥ 0, let U_t be the set of individuals present at time t, whose elements we will be able to list explicitly. We also define a related point measure-valued process, called the branching Lévy process:

Z(t) = ∑_{u∈U_t} δ_{Z_u(t)}, t ≥ 0.

One can easily recover the compensated fragmentation process from the branching Lévy process, by ranking the exponentials of its atoms. Therefore, for convenience, we shall always work with the branching Lévy process Z from now on and state all our results in terms of Z.

Lévy processes
Since our main object of study is a branching Lévy process, it is unsurprising that Lévy processes play a key role. We give a short summary of the relevant definitions and properties.
A stochastic process ξ = (ξ(t), t ≥ 0) under a probability measure P is called a Lévy process if it has stationary, independent increments and càdlàg paths, and satisfies ξ(0) = 0 almost surely. The process ξ is said to be spectrally negative if the only points of discontinuity of its paths are negative jumps. The usual way to characterise the law of such a process is through its Laplace exponent; this is a function Ψ : R → R ∪ {∞} such that, for every t ≥ 0, E[e^{qξ(t)}] = e^{tΨ(q)}. It is well-known that Ψ satisfies the so-called Lévy-Khintchine formula, as follows:

Ψ(q) = aq + γ²q²/2 + ∫_{(−∞,0)} (e^{qx} − 1 − qx 1_{{x > −1}}) Π(dx),

and Ψ(q) < ∞ if q ≥ 0. Here, a ∈ R is called the centre of ξ, γ ≥ 0 is the Gaussian coefficient, and Π is a measure, called the Lévy measure, on (−∞, 0), which satisfies the moment condition ∫_{(−∞,0)} (1 ∧ x²) Π(dx) < ∞. The classification of Lévy processes is made more precise by the Lévy-Itô decomposition, which we now describe. Let M be a Poisson random measure on [0, ∞) × (−∞, 0) with intensity measure Leb × Π. Let B = (B(t), t ≥ 0) be a standard Brownian motion independent of M. Then, a Lévy process ξ with Laplace exponent Ψ can be constructed as

ξ(t) = at + γB(t) + ∫_{[0,t]×(−∞,−1]} x M(ds, dx) + lim_{ε↓0} ∫_{[0,t]×(−1,−ε)} x (M(ds, dx) − ds Π(dx)),

and the limit of compensated small jumps which appears as the last term is guaranteed to exist in the L² sense. We refer to the measure M as the jump measure of ξ.
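The existence of the compensated-small-jumps limit can be seen numerically: truncating the jump integral at level ε, the Laplace exponents stabilise as ε decreases. The sketch below uses a hypothetical infinite-activity Lévy measure Π(dx) = |x|^{−2} dx on (−1, 0) and full compensation e^{qx} − 1 − qx; the measure, parameter values and compensation convention are all illustrative assumptions.

```python
import math

def psi_trunc(q, eps, alpha=1.0, a=0.1, gamma=0.3, n=100000):
    """Laplace exponent with jumps of size < eps removed, for the toy
    Levy measure Pi(dx) = |x|^(-1-alpha) dx on (-1, 0), with full
    compensation e^{qx} - 1 - qx (midpoint-rule quadrature)."""
    lo, hi = -1.0, -eps
    h = (hi - lo) / n
    s = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        s += (math.exp(q * x) - 1.0 - q * x) * abs(x) ** (-1.0 - alpha) * h
    return a * q + 0.5 * gamma**2 * q**2 + s

vals = [psi_trunc(2.0, eps) for eps in (0.1, 0.01, 0.001)]
gaps = [abs(vals[1] - vals[0]), abs(vals[2] - vals[1])]
```

The successive gaps shrink as the truncation level decreases, reflecting convergence of the compensated integral.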

Construction and truncation of the branching Lévy process
In this section, we give a rigorous definition of the branching Lévy process. Our presentation is inspired by Bertoin [7], and the main idea is first to define, given a sequence of numbers b_n ≥ 0, a collection of truncated processes Z^{(b_n)} representing the positions, and attached labels, of particles which do not land 'too far' (i.e., at a distance greater than b_n) from their parent. This is necessary since the rate of fragmentation is, in general, infinite. These processes will be constructed such that they are consistent with one another, in a sense which will shortly be made precise, and such that taking n → ∞ reveals all of the particles. The main innovation compared to [7] is the inclusion of labels for the particles, and this is what allows us to study the spine decomposition.
Readers who are already familiar with the construction of [7] may wish to skip this section on first reading, and simply assume the existence of a set of particle labels which is consistent under truncation.
Let us introduce some notation. The set of labels will be given by U = ∪_{j≥0} (N³)^j, where we use the convention (N³)⁰ = {∅}, and we will denote elements of this set in the following way: if u_i ∈ N³ for i = 1, …, I, then we will write (u_1, u_2, …, u_I) as u_1 u_2 ⋯ u_I. The label ∅ represents the progenitor particle which is alive at time 0, sometimes called the 'Eve' particle; and each offspring of the particle with label u ∈ U receives a label u(m, k, i), for some choice of m, k, i which will be explained shortly. Note that we use a Crump-Mode-Jagers type labelling scheme, in which the closest of the 'offspring' of a particle at each branching event retains the parent's identity; see [30] for a discussion of this so-called 'general branching process' framework. Our system is reminiscent of the one adopted in [8], which also uses immortal particles with labels based on the size of the jumps, but for which the labels are purely generational. We mention here also an alternative approach to the genealogy by Bertoin and Mallein [9], based upon a restriction to dyadic rational times, which is of quite a different style.
Let (a, σ, ν) be a triple of characteristics satisfying the conditions outlined in the introduction, and let κ be the cumulant given by (2). We assume throughout that ν({0}) = 0, where 0 := (0, 0, …). Our results would still hold without this condition, but it simplifies notation and proofs by allowing us to ignore the possibility that particles are killed outright.
Let (b_n)_{n≥0} ⊂ [0, ∞) be a strictly increasing sequence such that b_0 = 0 and b_n → ∞; this will be a fixed sequence of truncation levels, which will be assumed given throughout this work. For b ≥ 0, we let k_b : P → P be given by

k_b(p) = (p_1, p_2 1_{{p_2 > e^{−b}}}, p_3 1_{{p_3 > e^{−b}}}, …),    (6)

so that k_b suppresses every fragment, other than the largest, of size at most e^{−b}; and we define the truncated dislocation measure via the pushforward ν^{(b)} = ν ∘ k_b^{−1}. We now consider n ≥ 0 to be fixed; we are going to define the branching Lévy process truncated at level b_n. Since the labelling is a little more complex than usual, let us first give an intuitive description of this process. The process begins at time zero with a single particle having label ∅, positioned at the origin. The spatial position of the particle follows a spectrally negative Lévy process ξ_∅ with Laplace exponent Ψ^{(b_n)}, whose Gaussian coefficient is σ and whose Lévy measure is the pushforward ν^{(b_n)}|_{P_1} ∘ log^{−1}, where P_1 is the set of sequences with at most one non-zero element. Crucially, the moment condition (1) implies that ν^{(b_n)}|_{P_1} ∘ log^{−1} is indeed a Lévy measure, so Ψ^{(b_n)} is the Laplace exponent of a Lévy process. Moreover, ν^{(b_n)} restricted to P \ P_1 is finite. At time T_{∅,1}, having an exponential distribution with parameter λ_{b_n} := ν^{(b_n)}(P \ P_1) < ∞, the particle ∅ branches. Take p to be a random variable with distribution ν^{(b_n)}|_{P\P_1}/λ_{b_n}, and scatter particles in locations ξ_∅(T_{∅,1}−) + log p_i, for i ≥ 1. The particle in location ξ_∅(T_{∅,1}−) + log p_1 retains the label ∅. The particles in the other locations receive labels ∅(m, 1, j) = (m, 1, j), where m ≤ n is the unique natural number such that e^{−b_{m−1}} ≥ p_i > e^{−b_m}, and j is the minimal natural number such that the initial location of (m, 1, j) in R is less than or equal to that of (m, 1, j − 1) (recall that particles are scattered downwards).
After this first branching event, the particle ∅ continues to perform a Lévy process, and then at time T_{∅,1} + T_{∅,2}, with T_{∅,2} independent of and equal in distribution to T_{∅,1}, it branches again. Particles are scattered according to the same rule, this time receiving labels (m, 2, j), with the 2 indicating that this is the second branching event for ∅. The particle then proceeds in this manner.
Meanwhile, each particle u which has already been born evolves in the same way. It performs a Lévy process ξ_u with the same law as ξ_∅, and after waiting a period T_{u,1}, independent of and equal in distribution to T_{∅,1}, it branches. Its children are scattered in the same way as before, but they receive labels u(m, 1, j); and, subsequently, at the k-th branching event of u, the children receive labels u(m, k, j).
A sketch illustrating the labelling scheme appears in Figure 1.
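The truncation mechanism above can be sketched numerically for a discrete dislocation measure. The exact truncation map k_b is the one given in (6); the version below, which suppresses every fragment other than the largest of size at most e^{−b}, is our assumption of its natural form, and the atoms and rates are hypothetical.

```python
import math

def k_b(p, b):
    """Hypothetical truncation map: keep the largest fragment, and suppress
    any other fragment of size at most e^{-b}."""
    thr = math.exp(-b)
    return (p[0],) + tuple(pi for pi in p[1:] if pi >= thr)

def branching_rate(atoms, b):
    """lambda_b = nu^{(b)}(P \\ P_1): total rate of dislocation events that
    still produce at least two fragments after truncation at level b."""
    return sum(rate for p, rate in atoms if len(k_b(p, b)) >= 2)

atoms = [((0.5, 0.3, 0.2), 1.0), ((0.9, 0.05), 2.0), ((0.999, 0.001), 10.0)]
rates = [branching_rate(atoms, b) for b in (1.0, 3.0, 8.0)]
```

As the truncation level b grows, more dislocations become visible, so the branching rate λ_b is nondecreasing in b (and always finite for a finite atomic measure).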
Having established the main idea, we now give a rigorous definition of the branching Lévy process truncated at level b n .
Strictly speaking, all the symbols we define in the next few paragraphs should carry an annotation of the sort •^{(b_n)}, but this would be rather cumbersome. The notations a_•, ξ_•, N_•, T_•, ∆^{(•)} and Q_•, shortly to be defined, will not appear again in the sequel, so we warn that they depend implicitly on n; all other notations will either receive an annotation or will turn out not to depend on n after all.
Emulating [7], we define the following random elements: a family (ξ_u, u ∈ U) of independent Lévy processes, each with Laplace exponent Ψ^{(b_n)}; a family (N_u, u ∈ U) of independent Poisson processes of rate λ_{b_n}, whose successive inter-jump times we denote T_{u,1}, T_{u,2}, …; and a family (∆^{(u,p)}, u ∈ U, p ∈ N) of independent mass-partitions, each with distribution ν^{(b_n)}|_{P\P_1}/λ_{b_n}.
In the above list, ξ_u represents the motion of the particle with label u, ignoring the times at which it branches; N_u jumps at the branching times of u; and the mass-partition ∆^{(u,p)} = (∆^{(u,p)}_i)_{i≥1} encodes the relative locations of u and its children at the p-th time that u branches. Moreover, these three families are independent of one another.
Our first step is to divide the ∆^{(u,p)}_• into (disjoint) classes, which correspond to the truncation level of the children they represent. Define L(y), for y ∈ (0, 1], to be the unique m ≥ 1 such that e^{−b_{m−1}} ≥ y > e^{−b_m}. For l ≥ 1, let ∆^{(u,p,l)} = (∆^{(u,p)}_j : j ≥ 2, L(∆^{(u,p)}_j) = l)↓, where •↓ indicates decreasing rearrangement of the sequence. For every l ≥ 1, we regard the finite sequence ∆^{(u,p,l)} as being an element of P, by filling the tail with zeroes. Note that ∆^{(u,p,l)} = 0 for all l > n.
Next, for each label u, we will give definitions for certain random elements. These are: a_u ∈ [0, ∞), the birth time of u; Z_u = (Z_u(t), t ≥ 0), with Z_u(t) ∈ R representing the position of u at time t; and K^{(b_n)}_u = (K^{(b_n)}_u(t), t ≥ 0), with K^{(b_n)}_u(t) = (K^{(b_n)}_u(t, l) : l ≥ 1) ∈ (N ∪ {0})^N. The latter sequence has the interpretation that K^{(b_n)}_u(t, l) is the number of branching events which particle u has had up to time t in which at least one child with label of the form u(l, k, i), for some k, i ∈ N, was born.
For the particle ∅, let a_∅ = 0, let Z_∅(t) = ξ_∅(t) + ∑_{p ≤ N_∅(t)} log ∆^{(∅,p)}_1, and let K^{(b_n)}_∅(t, l) = #{p ≤ N_∅(t) : ∆^{(∅,p,l)} ≠ 0}. For the remaining particles, we first need a bit of notation: let

Q_{u,m}(k) = inf{p ≥ 1 : #{p′ ≤ p : ∆^{(u,p′,m)} ≠ 0} = k},

with the convention that inf ∅ = ∞. Thus, Q_{u,m}(k) is the number of birth events of u which take place until the k-th event at which the sequence ∆^{(u,p)} contains at least one element y with L(y) = m. Fix u ∈ U and (m, k, i) ∈ N³ arbitrary, and write u′ = u(m, k, i).
Then, for u′ = u(m, k, i), the birth time, position and mark of u′ are defined as follows: u′ is born at the Q_{u,m}(k)-th branching event of u, so that a_{u′} = a_u + T_{u,1} + ⋯ + T_{u,Q_{u,m}(k)}; its position starts from Z_u(a_{u′}−) + log ∆^{(u,Q_{u,m}(k),m)}_i and thereafter evolves in the same way as for the particle ∅, driven by ξ_{u′}, N_{u′} and the ∆^{(u′,•)}; and K^{(b_n)}_{u′} is defined accordingly. We are now in a position to define the following elements:

Z^{(b_n)}(t) = ∑_{u∈U^{(b_n)}_t} δ_{Z_u(t)}, t ≥ 0,

which is the branching Lévy process truncated at level b_n, and its labelled counterpart,

∑_{u∈U^{(b_n)}_t} δ_{(u, K^{(b_n)}_u(t), Z_u(t))}, t ≥ 0,

which is the labelled branching Lévy process truncated at level b_n.
From the latter, let us also define U^{(b_n)}_t := {u ∈ U : a_u ≤ t}, which is the set of labels of particles present at time t.
We introduce now the following function, which will be required to understand the un-truncated process. For u = (m_1, k_1, i_1) ⋯ (m_j, k_j, i_j) ∈ U, define ML(u) := max_{1≤r≤j} m_r, with ML(∅) := 0. Thus, ML(u) is the maximal truncation level appearing in the label u, and a particle with label u can appear in the construction of Z^{(b_r)} precisely when r ≥ ML(u); indeed, if u ∈ U^{(b_n)}_t, then ML(u) ≤ n. Of these processes, Z^{(b_n)} is a branching Lévy process with characteristics (a, σ, ν^{(b_n)}) in the sense of Bertoin [7, Definition 1], and the others are our extensions. In particular, we have by [7, Theorem 1] that

E[∑_{u∈U^{(b_n)}_t} e^{qZ_u(t)}] = e^{tκ^{(b_n)}(q)}, for all q ∈ R, where

κ^{(b_n)}(q) = aq + σ²q²/2 + ∫_P (∑_{i≥1} p_i^q − 1 + q(1 − p_1)) ν^{(b_n)}(dp).

This function represents the cumulant of the truncated branching Lévy process.
Remark 2.2. In the construction above, the role of the component K^{(b_n)}_u, which records some information about the children of u, is simply to ensure that the labelled process is Markov (see the forthcoming Lemma 2.9). Without the inclusion of this mark, if a particle u branches at time t, it is not possible to determine the labels of its children solely from the state of the labelled process at time t. We emphasise that the unlabelled process, Z^{(b_n)}, is always Markov [7, p. 1272].
Having defined the truncated labelled branching Lévy process, we introduce the idea of further truncating it at level b_m ≤ b_n. That is, we consider keeping, at each branching event, the child particle which is the closest to the parent, and suppressing the other children, together with their descendants, if and only if their distance to the position of the parent prior to branching is larger than or equal to b_m. Mathematically, for m ≤ n, we write (Z^{(b_n)})^{(b_m)} for the truncation of Z^{(b_n)} to level b_m, and similarly for the labelled process. With this definition, we get the following lemma.

[Figure 1. A sketch of the labelling scheme; the paths in dotted blue represent the particles u for which ML(u) = 2.]

Lemma 2.3. The process (Z^{(b_n)})^{(b_m)} is equal in law to Z^{(b_m)}, and likewise the b_m-truncation of the labelled process at level b_n is equal in law to the labelled process truncated at level b_m.
Proof. The first statement is [7, Lemma 3], and the second follows by considering the intuitive description of the labels beginning on page 6: if all u with ML(u) > m are removed, then those elements do not appear in the labelled process, and the truncated mark (K^{(b_n)}_u(t))^{(b_m)} for the remaining u simply erases the record of birth events that would have given rise to those erased u.
We therefore see that both the labels and the positions of the particles are consistent under truncation, as are the marks K^{(b_•)}_u. By the Kolmogorov extension theorem, we can construct, simultaneously on the same probability space, a collection of processes (Z^{(b_n)})_{n≥0}, together with their labelled versions, with the property that the equality in law of Lemma 2.3 is replaced by almost sure equality. Thus, we are able to define the following (un-truncated) processes:

Definition 2.4. The branching Lévy process with characteristics (a, σ, ν) is Z(t) := lim_{n→∞} Z^{(b_n)}(t), t ≥ 0, the limit being an increasing limit of point measures.

For the (un-truncated) process Z, the set of labels of particles present up to time t is U_t := ∪_{n≥0} U^{(b_n)}_t.

Definition 2.5. The labelled branching Lévy process with characteristics (a, σ, ν) is defined analogously, as the increasing limit of the labelled truncated processes.

In particular, since κ^{(b_n)}(q) ↑ κ(q) whenever q ∈ dom κ, we have that

E[∑_{u∈U_t} e^{qZ_u(t)}] = e^{tκ(q)},

which is an important property of the process.
Remark 2.6. (i) In [9, 14], growth-fragmentations are studied in which upward jumps of the particle locations (with or without associated branching) are permitted. This can be accommodated in our construction as well, simply by removing the restriction that the processes ξ_• be spectrally negative (and, if necessary, incorporating branching at upward jumps), thereby giving versions of these processes with labels and genealogies.
(iii) We wish to emphasise that, despite the technical appearance of our label definitions, the labels can be found deterministically once the unlabelled branching Lévy process is known. In particular, if we have all Z^{(b_n)} defined on the same probability space, and we are given a single sample from this space, then a sample of the labelled process can be constructed, without extra randomness, using the intuitive definition of the labels on page 6. This will be important in section 4.

Regularity and the branching property
One of the key results of [7] was the branching property of the compensated fragmentation Z. This result extends naturally to the labelled process, and we shall shortly give an explicit statement of it. However, we first elaborate a little on the state space, and consider the regularity of the process. We first expand on the space U. Some of the definitions here will not be needed until the next section, but we give them here for ease of reference. We define relations ⪯ and ≺ on U to denote ancestry: u ⪯ v if there exists some u′ ∈ U such that v = uu′, and u ≺ v if u ⪯ v and u ≠ v. Using this, we define ancestors and descendants as follows, which is a little subtle due to the immortality of particles. If s < t and v ∈ U_t, we define u = Anc(s; v) to be the largest (with respect to ⪯) element of U_s such that u ⪯ v. Conversely, for u ∈ U_s, we define Desc(s, u; t) = {v ∈ U_t : u = Anc(s; v)}. We also define |u| to be the unique n ∈ N ∪ {0} such that u ∈ (N³)^n, that is, the generation of u; and (u_i)_{1≤i≤n} to be those elements of N³ such that u = u_1 ⋯ u_n. We extend this so that u_i = (0, 0, 0) if |u| < i. Finally, we consider U to be endowed with the metric ρ(u, v) = ∑_{i≥1} ‖u_i − v_i‖, where ‖•‖ is the usual Euclidean norm on R³. Define the space L to consist of those sequences K = (K(l))_{l≥1} in the set (N ∪ {0})^N for which the function l ↦ K(l) takes finite values; this is a complete, separable metric space when given the usual product metric. We then set X := U × L × R. It will prove useful to define M_p(X) to be the set of point measures on X which are finite on bounded subsets of X. We give this a metric as follows (see [24, §A2.6]). Let q ∈ (dom κ)° be chosen arbitrarily, and let x_0 = (∅, 0, 0) ∈ X. If µ, µ′ are point measures on X, let

d_q(µ, µ′) = ∫_0^∞ q e^{−qr} (1 ∧ d^{(r)}(µ_r, µ′_r)) dr,

where µ_r = µ|_{B_r(x_0)} is the measure µ restricted to the open ball B_r(x_0) of radius r ≥ 0 around x_0, and d^{(r)} is the Lévy-Prokhorov metric on B_r(x_0); this is defined as

d^{(r)}(µ, µ′) = inf{ε > 0 : µ(A) ≤ µ′(A^ε) + ε and µ′(A) ≤ µ(A^ε) + ε for all closed A ⊂ B_r(x_0)},

where A^ε denotes the ε-neighbourhood of A. For a labelled branching Lévy process Z(t) = ∑_{u∈U_t} δ_{(u, K_u(t), Z_u(t))}, one may show that for any u ∈ U and t ≥ 0, K_u(t) ∈ L almost surely. Therefore, we may regard the labelled process as taking values in the complete separable metric space M_p(X) with metric d_q. Furthermore, we have the following pair of results:

Lemma 2.7. For q ∈ (dom κ)° and t ≥ 0, sup_{s≤t} d_q(Z(s), Z^{(b_n)}(s)) → 0 in probability.
Proof. Fix q ∈ (dom κ)° and t ≥ 0. To begin with, we bound the distance d_q(Z(s), Z^{(b_n)}(s)) from above by a sum of two terms, which we label (9). We study the two terms on the right-hand side separately.
We first look at the second term. Using the definition of the Lévy-Prokhorov metric and the fact that Z^{(b_n)} ⊂ Z, we find that for every r ≥ 0 the distance d^{(r)} is bounded by the number of particles present in Z(t) but not in Z^{(b_n)}(t). Taking expectations, the resulting bound tends to zero as n → ∞, for fixed t ≥ 0. This ensures that the second term of (9) converges to zero in probability.
Turning to the first term in (9), we bound the number of particles of Z(s) missing from Z^{(b_n)}(s) within the ball of radius r. We now integrate in order to study the d_q-distance, and use the bound 1_{{Z_u(s)∈(−r,r)}} ≤ e^{q′(r+Z_u(s))}, where q′ ∈ dom κ is chosen arbitrarily such that q′ < q holds:

∫_0^∞ q e^{−qr} ∑_{u∈U_s \ U^{(b_n)}_s} e^{q′(r+Z_u(s))} dr = q/(q − q′) (∑_{u∈U_s} e^{q′Z_u(s)} − ∑_{u∈U^{(b_n)}_s} e^{q′Z_u(s)}).    (10)

Now, the proof is completed using Doob's maximal inequality exactly as in [7, Proof of Lemma 4].

Corollary 2.8 (regularity of Z). The process Z possesses a càdlàg version in M_p(X).
Proof. This follows from the above lemma exactly as in [7, Proposition 2].
Thanks to this result, we can consider P to be defined on the space Ω = D([0, ∞), M_p(X)) of càdlàg functions from [0, ∞) to M_p(X), endowed with the Skorokhod topology; we refer the reader to [16] for more details on this space.
The process Z has the Markov property, which in this context is usually called the branching property and which we now explain. We first define translation operators for u ∈ U and t ≥ 0, as follows. Let θ_{u,t} : Ω → Ω be the map which retains only the particle with label u and those of its descendants born strictly after t, relabelling u as ∅ and shifting it to start at the origin at time 0. That is, θ_{u,t} shifts the particle process such that one only observes the particle with label u and its descendants which are born strictly after t; and the particle represented by u is shifted to start at the origin, at time 0, with label ∅ and no recollection of its genealogical history.
Let (F_t)_{t≥0} be the natural filtration of Z, namely F_t = σ(Z(s), s ≤ t), and define F_∞ = σ(∪_{t≥0} F_t). We then have the following simple result.

Lemma 2.9 (branching property). For each u ∈ U, let F_u be a bounded, measurable functional. Then

E[∏_{u∈U_t} F_u(θ_{u,t} Z) | F_t] = ∏_{u∈U_t} E[F_u(Z)].

Proof. This follows directly from the branching property of Z in [7, p. 1272] and the construction of the labels.
We remark that, as a consequence of Corollary 2.8, the constant time t in the above lemma may be replaced by any (F_t)-stopping time, or indeed by a stopping line in the sense of [4, §4].

Change of measure and backward selection of the spine
For ω ∈ dom κ, we define the exponential additive martingale W(ω, •) just as we did in the introduction:

W(ω, t) = e^{−tκ(ω)} ∑_{u∈U_t} e^{ωZ_u(t)}, t ≥ 0.

It has been proved in [7, Corollary 3] that this is a martingale with unit mean. As such, we may make a martingale change of measure, as follows. We define a measure P_ω on F_∞ by setting, for A ∈ F_t,

P_ω(A) = E[1_A W(ω, t)].    (11)

The martingale property of W(ω, •) ensures that this change of measure is consistent across different choices of t, and also implies that the process Z under P_ω remains a Markov process. P_ω is often referred to as an 'exponential tilting' of the probability measure P.
Under this tilted measure, we isolate a single particle as the 'spine'. We first expand the basic probability space Ω to produce Ω̂ = Ω × U^{[0,∞)}, and introduce for each t ≥ 0 a random variable U_t taking values in U. We may then extend the definition of P_ω to sets in F̂_∞. For A ∈ F_t and u ∈ U, let

P_ω(A ∩ {U_t = u}) = e^{−tκ(ω)} E[1_A e^{ωZ_u(t)}].    (12)

It is well-known (see, for instance, [27, Theorem 4.2]) that events Â ∈ F̂_t may be written as Â = ∪_{u∈U} (A_u ∩ {U_t = u}), with A_u ∈ F_t, and so (12) is equivalent to defining

P_ω(Â) = e^{−tκ(ω)} ∑_{u∈U} E[1_{A_u} e^{ωZ_u(t)}].    (13)

The measure P_ω is well-defined, in that, if Â ∈ F_t, then the right-hand side of (13) reduces simply to (11). However, in terms of the definition on F̂_∞, P_ω distinguishes the label U_t at time t, and we call this the spine label.
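The choice of the spine label amounts to a size-biased pick given F_t: each particle u is chosen with probability proportional to e^{ωZ_u(t)}. A minimal sketch with hypothetical particle positions (the labels and values below are illustrative only):

```python
import math

def spine_probs(positions, omega):
    """Size-biased pick of the spine: P(U_t = u | F_t) is proportional
    to exp(omega * Z_u(t))."""
    w = {u: math.exp(omega * z) for u, z in positions.items()}
    total = sum(w.values())
    return {u: wu / total for u, wu in w.items()}

positions = {"u1": 0.0, "u2": -0.7, "u3": -2.3}  # toy fragment log-sizes
probs = spine_probs(positions, omega=2.0)
```

For ω > 0, larger fragments are more likely to carry the spine, which is the mechanism behind the size-biased interpretation of the tilted measure.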
For each fixed t ≥ 0, if we define U_t via (12), we can project it backward by setting U_s = Anc(s; U_t) for s ≤ t. Due to the branching property of Z, this is consistent with evaluating P_ω on F̂_s, as is made precise in the following lemma.

Lemma 3.1 (consistency of P_ω). Let s < t and u ∈ U. Let P^t_ω indicate the measure P_ω defined on F̂_t by means of (12) and back-projection of U_t, and P^s_ω similarly for P_ω defined on F̂_s. If A ∈ F_s, then P^t_ω(A ∩ {U_s = u}) = P^s_ω(A ∩ {U_s = u}).

Proof. Firstly, we have

e^{−tκ(ω)} E[∑_{v∈Desc(s,u;t)} e^{ωZ_v(t)} | F_s] = e^{−sκ(ω)} e^{ωZ_u(s)},

due to the branching property. Then, summing (12) over v ∈ Desc(s, u; t) and applying this identity yields the claim.

We refer to the process (Z, U) = ((Z(t), U_t), t ≥ 0) as the branching Lévy process with spine. In order for it to be useful, it is important that (Z, U) retain the branching property.
For the sake of clarity, we keep the time-annotation P^t_ω which was introduced in the last lemma.
Lemma 3.2 (branching property of (Z, U)). Fix t ≥ s ≥ 0. Let F_v be an F_{t−s}-measurable functional for each v ∈ U, and let G be σ(U_{t−s})-measurable. Then the conditional expectation under P^t_ω, given F̂_s, of a functional of the shifted process and spine after time s coincides with the corresponding expectation under P^{t−s}_ω; this identity is displayed as (14).

Proof. By Kolmogorov's definition of conditional expectation and the definition of F̂_s, it is sufficient to prove (14) with K an F_s-measurable functional and u ∈ U. Fixing G = 1_{{U_{t−s} = u′}}, for some u′ ∈ U, the left-hand side is equal to

e^{−tκ(ω)} E[K 1_{{u = Anc(s; uu′)}} e^{ωZ_{uu′}(t)}],

where we have used Lemma 2.9 together with the fact that the event u = Anc(s; uu′) is equivalent to the event that uu′ is born after time s (or u′ = ∅), and then the definition of P^•_ω. An appeal to Lemma 3.1 yields (14), which completes the proof.
From now on we will drop the time-annotations P^t_ω and simply use the notation P_ω. Our primary goal in the remainder of the article is to characterise the law of the process (Z, U) in terms of well-understood objects.

Forward construction of the process with spine
In this section, we give a construction of a Markov process with values in the set of point measures and with a certain distinguished line of descent. The process, which we will write as (Y, V), is regarded as being defined under P_ω, and we call it the decorated spine process with parameters (a, σ, ν, ω). In the next section, we will show that it coincides in law with the process (Z, U) described in section 3.
We start with a candidate for the motion of the spine particle itself. Let ξ be a spectrally negative Lévy process whose Laplace exponent has the Lévy-Khintchine representation

Ψ_ω(q) := κ(q + ω) − κ(ω),

where π is the measure on (0, 1) given by π(dy) := ∫_P ∑_{i≥1} p_i^ω δ_{p_i}(dy) ν(dp). Note that in particular, the Lévy measure of ξ is given by the pushforward Π := π ∘ log^{−1}. The motivation for this definition of ξ is that, if ν(P \ P_1) < ∞, then by [12, Proposition 3.4] the process (Z_{U_t}(t), t ≥ 0) under P_ω is known to be equal in law to the process ξ; this is not difficult to prove even in the absence of said finiteness condition, but it will be a corollary of the main theorem in the next section, so we do not pursue this here.
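Since the spine's Laplace exponent is κ(• + ω) − κ(ω), one can sanity-check that this function behaves like a Laplace exponent: it vanishes at 0 and is convex. The sketch below uses a toy cumulant built from a hypothetical one-atom dislocation measure; all parameter values are illustrative assumptions.

```python
def kappa(q, a=0.0, sigma=1.0, atoms=(((0.6, 0.4), 1.0),)):
    """Toy cumulant kappa(q) = a q + sigma^2 q^2/2 + sum over atoms
    (p, rate) of rate*(sum_i p_i^q - 1 + q(1 - p_1))."""
    jump = sum(r * (sum(pi**q for pi in p) - 1.0 + q * (1.0 - p[0]))
               for p, r in atoms)
    return a * q + 0.5 * sigma**2 * q**2 + jump

def psi_spine(q, omega=1.5):
    """Candidate spine Laplace exponent: kappa(q + omega) - kappa(omega)."""
    return kappa(q + omega) - kappa(omega)

# Convexity via second differences on a grid of positive q.
diffs = [psi_spine(q + 0.1) - 2 * psi_spine(q) + psi_spine(q - 0.1)
         for q in [0.2 * k for k in range(1, 15)]]
```

This Esscher-type shift of the cumulant is the standard way a tilted process acquires its Laplace exponent.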
We regard ξ as representing the position of the spine particle, and our goal now is to construct the rest of the branching Lévy process around it. There will be three steps. First, we take the Poisson random measure giving the jump times and sizes of ξ. We then add decorations to this measure which indicate the additional offspring that should be present due to the branching structure; and in the final step, we graft independent branching Lévy processes (under P) onto this structure.
Next we require a short lemma establishing the existence of a conditional measure.

Lemma 4.1. For each i ∈ N and y > 0, there exists a probability measure ν_i(dp | y) on P such that

Proof. Let ℓ_i : P → (0, 1) be the coordinate map given by ℓ_i(p_1, p_2, …) = p_i, and let

Thus, h_i(p_i) ≤ (1 − p_1)² for all i and p, and in particular

Then by standard results on the disintegration of measures (see [39], for instance) there exist measures ν_i(· | y), for each y, such that ν_i(P \ ℓ_i^{−1}(y) | y) = 0 and

This completes the proof.
This has the following consistency properties:

(iv) Let N(ds, dy, di, dp) be the q-randomisation of M̄, in the sense of [31, Ch. 12].

This completes the definition of the decorations, and we will now define a process Y = (Y(t), t ≥ 0). We regard the definition as being given under the probability measure P_ω, and we assume that the underlying probability space has been enlarged as required to accommodate it.

Definition 4.3. Let (Z^{[s,j]})_{s∈R, j∈N} denote a collection of independent branching Lévy processes with triple (a, σ, ν). Under the probability measure P_ω, the decorated Lévy process Y, with parameters (a, σ, ν, ω), is defined as follows:

where the sum appearing on the right-hand side runs only over those j for which p_j > 0. The summand has the following interpretation: if µ = Σ_{i∈I} δ_{µ_i} is a point measure and z ∈ R, then µ + z := Σ_{i∈I} δ_{µ_i + z}.
Let us consider the process Y under truncation. Formally, this is required in order to give the particles labels; however, the truncated processes will also be a vital component in showing the equivalence of the two spine constructions. Let b > 0, and recall that k_b is given by (6). We define a random measure N_b by the mapping

and define the first entry time

which is a stopping time in the natural filtration of N. Then τ_b is the time at which the spine is killed under truncation at level b, and it has an exponential distribution with parameter θ_b :=

We define the process Y^{(b)} by the expression

where (Z^{[s,j]})^{(b)} indicates that the immigrated copy of Z is truncated at level b.
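The exponential law of τ_b is the standard fact that the first atom of a Poisson random measure falling in a set of finite intensity arrives at an exponential time, with rate equal to that intensity. A small simulation illustrates this; the rates below are arbitrary stand-ins for the intensity η(A_b), not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
rate, p_hit = 3.0, 0.4        # illustrative: total atom rate, P(mark lands in A_b)
theta_b = rate * p_hit        # killing rate of the spine under truncation

def first_entry_time():
    """First atom of the marked Poisson process whose mark falls in A_b."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # waiting time to the next atom
        if rng.random() < p_hit:           # thinning: this atom lies in A_b
            return t

samples = np.array([first_entry_time() for _ in range(100_000)])
print(samples.mean())  # ≈ 1 / theta_b ≈ 0.833
```

By the thinning property, the atoms landing in A_b form a Poisson process of rate rate × p_hit, so the first such atom is exponential with that rate.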
With this definition, we have all the processes Y^{(b_n)}, for n ≥ 0, defined on the same probability space as Y. Moreover, following Remark 2.6(iii), we also have the processes Ȳ^{(b_n)} all defined on the same space. Now suppose that m < n, and denote by (Y^{(b_n)})^{(b_m)} the result of applying the truncation method of (8) to the process Y^{(b_n)}. It follows that (Y^{(b_n)})^{(b_m)} = Y^{(b_m)} almost surely; this can be verified by comparing the particles present at the first branching time T_{b_n} := inf{t ≥ 0 : #Y^{(b_n)} ≥ 2}, and then proceeding iteratively. Thus we have that, for every t ≥ 0, Y(t) = ∪_{n≥0} Y^{(b_n)}(t) almost surely, with Ȳ(t) defined similarly.
We now specify a distinguished line of descent in Ȳ, which we denote by V = (V t , t ≥ 0) with V t ∈ U. We want it to track the particle whose position is given by ξ, and it may be found explicitly as follows.
We remark that, by its construction, (Ȳ, V) = ((Ȳ(t), V_t), t ≥ 0) is a Markov process, and in particular it possesses a branching property exactly analogous to Lemma 3.2. Moreover, it has similar regularity properties, as we now show. We need the following lemma, whose proof is quite technical but requires nothing more than the definition of Y and an understanding of the additive martingale W(ω, ·) of Z.

Lemma 4.5. For q ∈ (dom κ)° ∩ (1, ∞) and t ≥ 0, sup_{s≤t} d_q(Ȳ(s), Ȳ^{(b_n)}(s)) → 0 in probability.
Proof. In the proof, we will use notation for the atoms of Ȳ (U_s, K_u(s), etc.) similar to that used for the atoms of Z̄.
The proof follows lines very similar to the proof of Lemma 2.7, and we again begin by using the triangle inequality to obtain

d_q(Ȳ(s), Ȳ^{(b_n)}(s)) ≤ d_q(Σ_{u∈U_s} δ_{(u, K_u(s), Y_u(s))}, Σ_{u∈U_s^{(b_n)}} δ_{(u, K_u(s), Y_u(s))}) + ⋯

To show that the second term vanishes as n → ∞, we can use the same method as in Lemma 2.7, provided we can adequately bound E_ω[#Y^{(b_n)}(t)]. We do this as follows, beginning with:

where in the final equality we use the fact that N_{b_n} restricted to [0, ∞) × A^c_{b_n} is independent of τ_{b_n}, together with the compensation formula for the Poisson random measure N_{b_n}, whose intensity measure η_{b_n} is η restricted to [0, ∞) × A^c_{b_n}. Recall that τ_b is an exponentially distributed random variable with rate θ_b. Moreover, η_{b_n} can be expressed in terms of η^{(b_n)}, where η^{(b)} denotes the measure η constructed as in Definition 4.2 from the parameters (a, σ, ν^{(b)}, ω); that is,

Thus we can rewrite the previous expression to obtain that

where #p is the number of non-zero elements of p. Continuing to evaluate the components of this expression, we obtain

We observe that if ω ≥ 0, then p_1^ω ≤ 1, whereas if ω < 0, then 1_{p_1 > e^{−b_n}} p_1^ω ≤ e^{−ωb_n}. In either case, we have

= max(1, e^{−ωb_n}) κ^{(b_n)}(0).

It follows that

Recall from the proof of Lemma 2.7 that κ^{(b_n)}(0) < Ce^{2b_n} for some C > 0 depending only on ν; thus, for some C > 0, we have

This is sufficient for our method of bounding the second term in (17) to work.
We turn now to the first term of (17). Using the same trick as in (10), we select q′ arbitrarily such that q′ > q and q′ ∈ (dom κ)°, and obtain

We now use the definitions of Y and Y^{(b_n)} to write Σ_{u∈U_s \ U_s^{(b_n)}} e^{q′Y_u(s)} = I_1(s) + I_2(s) + I_3(s), where for brevity the terms I_i will be defined as we proceed. The first of these is

for arbitrary v ≥ 0 and j ≥ 1. Observing that this is a non-negative martingale in its own filtration, it then follows that

We first claim that if q′ ≥ 1 and q′ ∈ dom κ, then

We begin with the estimate

If ω ≥ 0, then (20) follows from the fact that q′ ∈ dom κ and q′ ≥ 1. If ω < 0, then since p ∈ P \ P_1, we have p_1^ω ≤ p_2^ω ≤ Σ_{i≥2} p_i^ω, and (20) again follows. Finally, using Doob's maximal inequality just as in Lemma 2.7, we see that sup_{w≤t} M^{(n)}_{0,1}(w) converges to 0 in probability as n → ∞. Thus the right-hand side of (19) approaches 0 also, and so sup_{s≤t} I_1(s) tends to 0 in probability.
This deals with the term I_1, which is the main difficulty. The term I_2 is defined as

N_{b_n}(dv, dy, di, dp)

Using a technique similar to the one for the term I_1, we obtain

We can then make the estimate

where in the second inequality we use a variation on Doob's L²-inequality (see the proof of Corollary II.1.6 in [35]). Moreover, taking ε > 0 such that q′ − ε > 1 and q′ − ε ∈ dom κ, we have

Σ_{i≥1} Σ_{j≠i} p_i^ω p_j^{q′} 1_{j≠1 and p_j < e^{−b_n}} ≤ e^{−εb_n} Σ_{i≥1} Σ_{j≠i} p_i^ω p_j^{q′−ε},

and, just as in the I_1 case, we know that ∫_P Σ_{i≥1} Σ_{j≠i} p_i^ω p_j^{q′−ε} ν(dp) < ∞. It follows that the right-hand side of (21) tends to zero, and thus sup_{s≤t} I_2(s) → 0 in probability.
Lastly, we turn to I_3. This term is defined as

In particular,

Making a change of variable in the integral, and using the independence properties of the Poisson point process N_{b_n}, we obtain

By (20) and (22), we are left with just the first expectation, for which we have:

The second term on the right-hand side may be bounded using Doob's L²-inequality for the exponential martingale of the Lévy process ξ; and the first term approaches zero as n → ∞, since τ_{b_n} has an exponential distribution whose parameter approaches zero. It follows that sup_{s≤t} I_3(s) → 0 in probability.
Having shown the necessary convergence for each term I i , we have now proved that the first term in (17) converges to zero in probability, and this completes the proof.
We will fix from now on a metric d q with q ∈ (dom κ) • ∩ (1, ∞), and assume that the process Ȳ is càdlàg.

The spine decomposition theorem
We now show that the forward and backward constructions of the process with distinguished spine under P_ω, i.e. (Z̄(t), U_t)_{t≥0} and (Ȳ(t), V_t)_{t≥0}, in fact have the same law.
We use a truncation technique, recalling the definitions of Z̄^{(b)}, ν^{(b)} and the sequence (b_n) from section 2.2. In order to simplify notation in the proof, we define the measure P^{(b_n)} such that the law of (Z̄, U_•) under P^{(b_n)} is that of (Z̄^{(b_n)}, U^{(b_n)}_•) under P. For n ≥ 1, we consider on the one hand the measure P^{(b_n)}_ω constructed from P^{(b_n)} as follows:

where F is a continuous bounded functional on D([0, ∞), M_p(X)), and we use the convention e^{ωZ_u(t)} = 0 if u ∉ U_t. On the other hand, we regard the process (Ȳ, V) under P^{(b_n)}_ω as the decorated spine process with parameters (a, σ, ν^{(b_n)}, ω).
Proof. We verify that the two processes have the same decomposition at the first branching time; since both are Markov, this is sufficient. We start with (Z̄, U) under P^{(b_n)}_ω. Let T denote the time of the first branching event, that is,

where #Z̄(t) = Z̄(t)(X) is the number of atoms in Z̄(t). From the construction of the truncated processes, we know that under P^{(b_n)}, T has an exponential distribution with rate λ_{b_n} = ν^{(b_n)}(P \ P_1). The point measure Z̄(T) has a countable number of atoms; let (u^{(j)})_{j≥1} be their labels, such that u^{(1)} = ∅ and (u^{(j)})_{j≥2} is lexicographically ordered; in particular, this implies that Z_{u^{(j)}}(T) is equal in law to Z_∅(T−) + log p_j, where p is sampled from ν^{(b_n)}|_{P\P_1}/λ_{b_n}. Furthermore, the translates Z̄ ∘ θ_{u^{(j)},T} are independent of each other and of F̄_T, where we recall that F̄_t = σ(Z̄(s), U_s; s ≤ t) for t ≥ 0. Additionally, (Z_∅(s), s < T) is independent of T and p, and has the law of a Lévy process with Laplace exponent Ψ^{(b_n)} killed at an independent exponential time of rate λ_{b_n}.
All of these facts add up to the following computation, in which F_j is an F̄_t-measurable functional, G_j is a measurable function on R, J is a measurable functional on path space, and u ∈ U. Let i be such that u = u^{(i)}v, with i = 1 only if u^{(j)} ≺ u for all j ≥ 2, and as a shorthand write ∆Z_{u^{(j)}}(T) = Z_{u^{(j)}}(T) − Z_∅(T−).
We now turn to the process (Ȳ, V), again under P^{(b_n)}_ω. We again define the branching time,

where A = (0, 1) × N × (P \ P_1); that is, T is the first time that a jump of ξ is accompanied by immigration. We consider the quantity

where F_j, G_j, J are measurable functionals as above.
Observe that, under P^{(b_n)}_ω, N is a Poisson random measure with intensity η^{(b_n)}, as defined in (18). Now, by the definition of T and standard properties of Poisson random measures [5, §O.5], we know that T has an exponential distribution with rate

In fact, we can say more: the restriction N|_{[0,T)×(0,1)×N×P} has the same law as the restriction N|_{[0,τ)×A^c}, where τ is an exponentially distributed random variable with rate µ_{b_n} which is independent of N, and A^c = (0, 1) × N × P_1. This has implications for the process (Y_∅(s), s < T), which, importantly, coincides with the spine process ξ on the time interval in question; it remains a Lévy process with Gaussian coefficient σ, but with two changes: first, it is killed independently at rate µ_{b_n}; second, the law of its jump measure, which we recall is the pushforward of N(ds, dy, N, P) by the map (s, y) → (s, log y), is altered because the law of N is altered. Working with the Lévy–Itô decomposition, we see that (Y_∅(s), s < T) has Laplace exponent given by

where

Note that the centre c_{b_n,ω} differs from the centre of ξ owing to the change in compensation of the small jumps. It follows that (Y_∅(s), s < T) has the law of χ^{(b_n)}_ω killed at an independent exponential time with rate µ_{b_n}. Considering the particles born at time T, define the children (u^{(j)})_{j≥1} of Y_∅ as in the previous part of the proof, and assume again that u = u^{(i)}v, with the convention that i = 1 only if u^{(j)} ≺ u for all j ≥ 2.
Using again the properties of Poisson random measures, the atom (T, y, k, p) of N appearing at time T is such that (y, k, p) has distribution η^{(b_n)}([0, 1], ·)|_A / µ_{b_n}; moreover, since in (24) we are restricted to the event V_{T+t} = u, we are here restricted to the event {k = i}. Finally, from the construction of the decorated spine process, we know that each child u^{(j)} is initially positioned at Y_∅(T−) + log p_j, that the translate Y ∘ θ_{u^{(i)},T} has the law of Y under P^{(b_n)}_ω, and that the translates Y ∘ θ_{u^{(j)},T}, for j ≠ i, are independent of one another and have the law of Z under P^{(b_n)}.
The discussion above essentially proves the required decomposition, but for clarity we provide the following calculation, in which J, F j , G j are measurable functionals as above.
This completes the proof.
Having established the result for these truncated processes, we must now remove the truncation; doing so proves the following theorem on the spine decomposition, which is our main result.

Theorem 5.2 (Spine decomposition). Under P_ω, (Z̄(t), U_t)_{t≥0} is equal in law to (Ȳ(t), V_t)_{t≥0}.
Proof.Since the processes ( Z, U ) and ( Ȳ, V ) are both Markov, it is sufficient to fix t ≥ 0 and prove that ( Z(t), U t ) has the same distribution as ( Ȳ(t), V t ).
For the measure P^{(b_n)}_ω, (23) implies

for continuous bounded F. Under P, Z̄^{(b_n)}(t) → Z̄(t) weakly on M_p(X). Furthermore, for every ω ∈ dom κ, κ^{(b_n)}(ω) ↑ κ(ω). Hence, certainly the distribution of (Z̄(t), U_t) under P^{(b_n)}_ω converges weakly to the distribution of (Z̄(t), U_t) under P_ω. We now address the convergence of the law of (Ȳ(t), V_t). Consider first the process Ȳ^{(b_n)}, which was defined in section 4 using the notation A_{b_n} and τ_{b_n}. We may consider the joint process (Ȳ^{(b_n)}, V^{(b_n)}) by adjoining a 'cemetery' element ∂ to the collection of labels; the event {V_t^{(b_n)} = ∂} indicates that the distinguished line of descent has been killed before time t in the process Ȳ^{(b_n)}.
We will start by showing that, for F a continuous bounded functional on M_p(X) and u ∈ U,

The second equality is an immediate corollary of Lemma 5.1, so we have only to prove the first equality.
The conditioning on the left-hand side of (26) is the same as conditioning on the event {t < τ_{b_n}}, where τ_{b_n} is the hitting time of the set A_{b_n} for the Poisson random measure N. We notice that, given {t < τ_{b_n}}, we have the equality

where N_{b_n} is defined in (15).
The change in the law of the measure N induced by this conditioning causes a corresponding change to the jump measure of ξ, which we again recall is the pushforward of N(ds, dy, N, P) under (s, y) → (s, log y). Using the Lévy–Itô decomposition much as in the proof of (25), we may show that under P_ω(· | t < τ_{b_n}), the pair (ξ, N_{b_n}) has the same law as (ξ, N) does under P^{(b_n)}_ω. Finally, Ȳ^{(b_n)} is measurable with respect to N_{b_n} and ξ, and the same is true of V_t^{(b_n)} on the event {t < τ_{b_n}}. This completes the proof of (26). We now need to take n → ∞. The right-hand side of (26) converges to E_ω[F(Z̄(t)) 1_{U_t=u}], as discussed at the beginning of the proof. The left-hand side of (26) is equal to

For every t ≥ 0 and every realisation of the process, the event {V_t^{(b_n)} ≠ ∂} holds for all large enough n; moreover, by Lemma 4.5 we have Ȳ^{(b_n)}(t) → Ȳ(t) in probability, and hence (extracting a subsequence if necessary) also almost surely. It follows from the dominated convergence theorem that (27) converges to P_ω[F(Ȳ(t)) 1_{V_t=u}]. This completes the proof.
Remark 5.3. The theorem above establishes a 'full many-to-one theorem' in the language of [27]. We stress that a version of this theorem has been proved, by Bertoin et al. [14], for the case of binary branching (ν(dp) supported by those p such that p_3 = 0) under the condition that κ(ω) = 0, though their description of the decomposition differs somewhat from ours owing to their view of the genealogy.
An immediate corollary is the following useful expression for certain functionals of Z̄(t).

Corollary 5.4 (Many-to-one formula). For a non-negative Borel function f : R → [0, ∞),

where ξ under the measure P is a Lévy process with Laplace exponent E_ω κ.
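The displayed formula of Corollary 5.4 is not reproduced above; for orientation, the standard shape of a many-to-one identity in this tilted setting (as in the homogeneous fragmentation literature) is

```latex
\mathbb{E}\Big[\sum_{u \in \mathcal{U}_t} e^{\omega Z_u(t)}\, f\big(Z_u(t)\big)\Big]
  \;=\; e^{t\kappa(\omega)}\;\mathbf{E}\big[f(\xi(t))\big],
```

where ξ is the Esscher-transformed spine, typically with Laplace exponent E_ω κ(q) = κ(q + ω) − κ(ω). The exact normalisation and indexing conventions of the paper may differ from this schematic form.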
We also point out the following consequence for the process Ȳ. Recall that (Ȳ, V) is Markov; this result says that the same is true even if we forget V.

Corollary 5.5. Ȳ is a Markov process under P_ω.
Proof. Z̄ is defined (without the distinguished particle U) by a change of measure of a Markov process with respect to the martingale W(ω, ·), and is therefore a Markov process in its own right. Ȳ is equal in distribution to Z̄ under P_ω, and this completes the proof.
The derivative martingale

Our purpose is to study the asymptotic properties of this martingale. Before stating our main result, Theorem 6.1, let us first distinguish two regimes of ω. By the convexity of κ, we observe that the function q ↦ qκ′(q) − κ(q) is increasing on (dom κ)° and has at most one sign change. From now on, we assume that there exists a (necessarily unique) ω̄ > 0 such that ω̄ ∈ (dom κ)° and

ω̄κ′(ω̄) − κ(ω̄) = 0.  (H)

The value ω̄ has proved to be critical for the study of the uniform integrability of the exponential additive martingale W(ω, ·); see [23, 10]. We point out that assumption (H) entails that κ(0) ∈ (0, +∞], so the non-extinction event has strictly positive probability:

where #Z(t) := Z(t)(R) denotes the number of atoms at time t. Write P* for the probability measure P conditioned on non-extinction. We recall our standing assumption ν({0}) = 0, implying that particles are never killed; this in fact implies that non-extinction occurs P-almost surely, and so in fact P* = P for us. However, we retain the notation P* in order to make clear how our results would look without this assumption.
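To make the criticality condition (H) concrete: for the toy cumulant κ(q) = q²/2 + β (a branching-Brownian-motion-like choice with branching rate β > 0, used here purely for illustration and not the paper's κ), the equation ω̄κ′(ω̄) = κ(ω̄) reads ω̄²/2 = β, so ω̄ = √(2β). Since q ↦ qκ′(q) − κ(q) is increasing, a plain bisection recovers the root:

```python
import math

# Toy cumulant kappa(q) = q^2/2 + beta (illustrative, BBM-like; not the paper's kappa)
beta = 1.0
kappa = lambda q: q * q / 2 + beta

def g(q):
    # q * kappa'(q) - kappa(q), with kappa'(q) = q; increasing on (0, inf)
    return q * q - kappa(q)

lo, hi = 1e-6, 10.0          # g(lo) < 0 < g(hi), so the root is bracketed
for _ in range(200):         # bisection to machine precision
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
omega_bar = (lo + hi) / 2

print(omega_bar)             # ≈ sqrt(2 * beta) ≈ 1.41421356
```

For a general convex κ one would replace `kappa` and its derivative accordingly; monotonicity of g guarantees that bisection finds the unique ω̄ when (H) holds.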
We now state the main result of this section.
In [23, Corollary 2.10(b)], Dadoun has shown the P*-almost sure negativity of the random variable ∂W(ω̄, ∞), identified there as the almost sure limit of the discrete-time martingale (∂W(ω̄, n), n = 0, 1, …). Our theorem improves upon [23] by proving convergence of the continuous-time martingale and identifying the expected value of the limit random variable. Furthermore, we do not require condition (2.7) of [23].
The limit ∂W(ω̄, ∞) has an intimate connection with the asymptotic behaviour of the largest fragment and with Seneta–Heyde norming for W(ω̄, ·); see [23, Corollary 2.10 and Remark 2.11]. Analogues of Theorem 6.1 were proved for multitype branching random walks by Biggins and Kyprianou [15], for branching Brownian motion by Kyprianou [32], and for pure fragmentation processes by Bertoin and Rouault [11]. A thorough exposition of the theory for branching random walks is given in the monograph of Shi [38].
The common approach of the works described above is a technique based upon stopping particles moving at a certain speed, and we stress that the spine decomposition plays a central role in these arguments. Our proof, which is modelled primarily on that of Bertoin and Rouault [11], is postponed to section 6.2; in section 6.1, we prepare for it by investigating a related family of martingales.

Remark 6.2. For generic branching Lévy processes [9], in which upward jumps of the particle locations are permitted, the same arguments apply to prove (i) and (ii) of Theorem 6.1, but not (iii). For ω = ω̄, we expect that an additional assumption on the dislocation measure ν is needed to make the limit non-trivial. In the case of branching random walks [21] and branching Brownian motion [40], optimal moment conditions have been found, and the martingale limits are proven to be zero when these conditions fail.

The stopped martingales
In this subsection we fix a > 0 and ω ∈ (dom κ)°, and define a process

where Anc(r; u) denotes the ancestor of u at time r, as in section 2.3. It is clear that ∂W_a(ω, t) is always non-negative. We use this to define a new measure on F_∞ by

and extend it to F̄_∞ by

E[ Σ_{u∈U_t} (a + tκ′(ω) − Z_u(t)) e^{−tκ(ω)+ωZ_u(t)} 1_{a + rκ′(ω) − Z_{Anc(r;u)}(r) > 0 for r ≤ t} 1_A ].  (28)

To justify that the measure Q_ω is well defined and does not depend on the choice of t, we consider the interpretation of Q_ω as having a density with respect to P_ω on F̄_∞. Recall that under P_ω, we have a process Z̄ together with a spine label U, and the spine (Z_{U_t}(t), t ≥ 0) is a Lévy process with Laplace exponent E_ω κ. Write

λ(t) := a + tκ′(ω) − Z_{U_t}(t), t ≥ 0;  (29)

then it follows that λ under P_ω is a Lévy process with respect to the filtration (F̄_t)_{t≥0}, started at a. The process λ is spectrally positive, in the sense that it has only positive jumps, and it has Laplace exponent κ′(ω)q − E_ω κ(q), meaning that E_ω[e^{−q(λ(t)−a)}] = e^{−t(κ′(ω)q − E_ω κ(q))} for q ≥ 0. (This is a slight change of notation compared with (5), but it follows the usual convention for the Laplace exponent of a spectrally positive process.) In particular, E_ω[λ(t)] = a for every t ≥ 0, which implies that λ is a P_ω-martingale with respect to (F̄_t)_{t≥0}. Let ζ = inf{t ≥ 0 : λ(t) < 0}; then it follows from Corollary 5.4 that

Using the fact that the stopped martingale (λ(t ∧ ζ) = λ(t)1_{t<ζ}, t ≥ 0) remains a P_ω-martingale (see [35, Corollary II.3.6]), we justify the previous definition of Q_ω as a consistent change of measure. As a consequence, ∂W_a(ω, ·) is a non-negative P-martingale, and therefore converges P-almost surely to a limit ∂W_a(ω, ∞) as t → ∞.
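The optional-stopping fact used here — that the stopped process λ(t ∧ ζ) keeps constant mean a — can be sanity-checked numerically. The sketch below uses Brownian motion as the simplest centred, (trivially) spectrally positive Lévy process standing in for λ − a; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
a, t = 1.0, 1.0
n_paths, n_steps = 10_000, 1_000
dt = t / n_steps

# Discretised Brownian paths started at a, standing in for lambda
lam = a + np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# zeta: first (discrete) time the path goes below 0; freeze each path there,
# paths that never hit keep their terminal value
hit = lam < 0
first = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps - 1)
stopped = lam[np.arange(n_paths), first]

# discrete optional stopping: the stopped martingale keeps mean a
print(stopped.mean())  # ≈ a = 1.0 (up to Monte Carlo error)
```

In discrete time the stopping time is bounded, so optional stopping applies exactly to the random walk; the empirical mean of the stopped values hovers around a.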
The main object of this subsection is to establish the following result, which will be crucially used in the proof of Theorem 6.1.
To prove Proposition 6.3, the key idea is to use the 'forward' construction (Definition 4.4 and Theorem 5.2) of (Z̄, U) under P_ω: a Lévy process ξ with Laplace exponent E_ω κ, whose jumps are decorated with independent branching Lévy processes with law P, each positioned according to the atoms of a random measure N. By a slight abuse of notation, the measure N under P_ω can be seen as an integer-valued random measure on [0, ∞) × E, with E = (0, 1) × N × P, whose support is a random set of the form {(s, (e^{−∆ξ(s)}, i_s, p_s)) : ∆ξ(s) ≠ 0}. Further, N is Poisson with the (non-random) intensity measure η. Since Q_ω is absolutely continuous with respect to P_ω on every F̄_t, the process under Q_ω has the same structure; however, the laws of the process ξ and of the random measure N may be different.
The following pair of lemmas provides more detail on the discussion above; we refer to Jacod and Shiryaev [29, §II.1] for a thorough discussion of random measures, and in particular the notion of the predictable compensator of a random measure. Note that hereafter, when we say predictable, we always mean predictable with respect to the filtration (F̄_t)_{t≥0}.

Lemma 6.4. Under Q_ω, the process λ defined in (29) is a spectrally positive Lévy process starting from a with Laplace exponent q ↦ κ′(ω)q − E_ω κ(q), conditioned to be positive in the sense of [18, 19]. In particular, we have inf_{t≥0} λ(t) > 0, Q_ω-almost surely.
Proof. Recall that λ is an (unconditioned) Lévy process with the given Laplace exponent under P_ω. In the work of Chaumont and Doney [19], it is shown that conditioning the Lévy process λ to remain positive is equivalent to performing a martingale change of measure with respect to the martingale U^−(λ(t))1_{t<ζ}, where U^− is the potential function of the downward ladder height subordinator. Since λ has no negative jumps and constant mean a, it follows that U^−(x) = x1_{x>0} (see [33, §6.5.2] for the analogous case of processes with no positive jumps). Therefore, conditioning λ to remain positive gives rise exactly to Q_ω as the conditioned measure.
This completes the characterisation of λ under Q_ω. Finally, since λ under P_ω is a centred Lévy process with only positive jumps, the fact that the overall infimum of λ under Q_ω is positive is implied by [19].

For any predictable random function (s, (y, i, p)) ↦ U_s(y, i, p), we have that

We now need one final technical result to prepare for the main proposition of this section.

Lemma 6.6. For every p > 0,

where W is the scale function of the spectrally negative Lévy process −λ, with the convention that W(x) = 0 for x < 0, and k > 0 is a constant.
Thus, we have that

Finally, by [5, equation (VII.4)] and the renewal theorem [5, Theorem III.21], we know that W(y) − W(y − a) → ac/m⁺ as y → ∞, where m⁺ is the mean of the ascending ladder height process of λ and c is a meaningless constant. This implies that the integral above converges at ∞. We then note that the integrand is equivalent to ka^{−1}(y² + y + 1)W(0) as y → 0, and this completes the proof.

Remark 6.7. In [11], the result inf_{t≥0} λ(t) > 0, together with the limit stated in [11, equation (21)], is used. This would suffice for our purposes also; however, since the proof of Lemma 6.6 is not very long, we offer it for the sake of completeness.
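As a sanity check on the renewal-theorem limit (an illustration, not the paper's setting): if −λ is a Brownian motion with variance parameter σ², its scale function is linear, so the increment W(y) − W(y − a) is already constant in y:

```latex
\psi(q) = \tfrac{\sigma^2}{2}\,q^2, \qquad
W(x) = \frac{2x}{\sigma^2} \quad (x \ge 0), \qquad
W(y) - W(y - a) = \frac{2a}{\sigma^2} \quad (y \ge a),
```

since W is characterised by ∫_0^∞ e^{−qx} W(x) dx = 1/ψ(q) = 2/(σ²q²). This matches the limit ac/m⁺, with the constants c and m⁺ normalised accordingly.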
We are now in a position to prove Proposition 6.3.
The claim follows as a consequence.
We now come back to the proof of the proposition. By [38, Lemma 4.2] it suffices to show a lim inf bound, where G_∞ := σ(Z_{U_t}(t), U_t; t ≥ 0) ⊂ F̄_∞. Recall that Q_ω is related to P_ω via the change of measure (30), and that Z̄ under P_ω can be described as a decorated spine process as in Definition 4.3. With the notation therein, we claim that

where, with λ(t) = a − ξ(t) + tκ′(ω) and ζ = inf{t ≥ 0 : λ(t) < 0},

We postpone for a moment the proof of (33) and turn our attention to S_t. Let

Then it is clear that

We shall study the asymptotics of the five terms separately.
Let us start with B_t. Using the compensator of N under Q_ω given in Lemma 6.5, together with Definition 4.2, we deduce that

where C_1, C_0 and C_{−1} are given by

By the inequality |log y| ≤ ε^{−1}y^{−ε} for y ∈ (0, 1], there is

We also note that p_1^{ω−ε} ≤ 1. It follows that the relevant integral over P \ P_1 is finite.
By arguments similar to those in the proof of Lemma 6.6, with the notation therein, we know that W(y) − W(y − a) → ac/m⁺ as y → ∞, where m⁺ is the mean of the ascending ladder height process of λ and c is a meaningless constant. This implies that there exists a sufficiently large constant C_3 such that

Since X log⁺X ≤ (X log⁺X + X log⁺X), by (31) the right-hand side of the above expression is finite. Hence H_∞ is Q_ω-a.s. finite, which yields that sup_{t≥0} D_t < ∞ holds Q_ω-a.s. Indeed, the right-hand side is an integral against a random point measure whose total mass is H_∞; so the fact that H_∞ is Q_ω-a.s. finite yields that the integral is Q_ω-a.s. a finite sum.
In the same manner, we can also deduce that sup_{t≥0} D_t < ∞ holds Q_ω-a.s.; this requires that ∫_P X(log⁺X)² ν(dp) < ∞, which is again a consequence of (31). Granting (33), this completes the proof of (32).
Summarising, we have that E_ω[∂W_a(ω, t) | G_∞] is equal to S_t as in (34).
The second expectation is finite, and so is the first. Fix an enumeration of U, and for every u ∈ U denote its index by I_u ∈ N. Then for every ε, δ > 0, there exists n_0 ∈ N, depending on ε and δ, such that

Furthermore, by conditioning on F̄_1 and using the branching property, Lemma 2.9, we deduce an identity involving the quantities |Z_v(1) − κ′(ω)| e^{ωZ_v(1)}.

Figure 1: A sketch of the construction and labels of a (truncated) branching Lévy process, with truncation levels marked at certain birth events. The path in solid black represents the process Z^{(b_1)}, which in this particular instance includes only the Eve particle ∅. The paths in dashed red represent the particles in the process Z^{(b_2)} \ Z^{(b_1)}; note that these are precisely the particles u for which ML(u) = 2. The paths in dotted blue represent the particles in the process Z^{(b_3)} \ Z^{(b_2)}.

Definition 4.2. (i) Let M(ds, dz) be the jump measure of ξ, that is, a Poisson random measure with intensity ds Π(dz). Define M̄ to be the pushforward of M under the function (s, z) → (s, e^z); thus, M̄(ds, dy) is a Poisson random measure with intensity ds π(dy).
[29, Theorem 1(a)].

Lemma 6.5. The predictable compensator of the random measure N under Q_ω is given by

Proof. We first point out that ζ is predictable: since λ is a spectrally positive Lévy process under P_ω, it can only pass below 0 continuously. Thus, defining T_n = inf{t ≥ 0 : λ(t) < 1/n}, so that T_n < ζ, we have ζ = sup_n T_n, which implies in particular that ζ is predictable (by [29, Theorem I.2.15(a)]). Now, since N is Poisson under P_ω, its compensator under P_ω is the (non-random) intensity measure η; moreover, the density process for the change of measure is dQ_ω