A Williams' Decomposition for Spatially Dependent Superprocesses

We present a genealogy for superprocesses with a non-homogeneous quadratic branching mechanism, relying on a weighted version of the superprocess and a Girsanov theorem. We then decompose this genealogy with respect to the last individual alive (Williams' decomposition). Letting the extinction time tend to infinity, we get the Q-process by looking at the superprocess from the root, and define another process by looking from the top. Examples including the multitype Feller diffusion and the superdiffusion are provided.


Introduction
Although superprocesses with very general branching mechanisms are known, most of the works devoted to the study of their genealogy are concerned with homogeneous branching mechanisms, that is, populations of identical individuals. Four distinct approaches have been proposed for describing these genealogies. When there is no spatial motion, superprocesses reduce to continuous state branching processes, whose genealogy can be understood via a flow of subordinators, see Bertoin and Le Gall [5], or via growing discrete trees, see Duquesne and Winkel [13]. With a spatial motion, the description of the genealogy can be done using the lookdown process of Donnelly and Kurtz [11] or the snake process of Le Gall [22]. Some works generalize both constructions to non-homogeneous branching mechanisms: Kurtz and Rodriguez [20] recently extended the lookdown process in this direction, whereas Dhersin and Serlet proposed in [10] modifications of the snake.
Using the genealogy, it is natural to consider the corresponding Williams' decomposition, which is named after the work of Williams [32] on the Brownian excursion. After Aldous recognized in [3] the genealogy of a branching process in this excursion, such decompositions came to designate decompositions of branching processes with respect to their height, see Serlet [31] for the quadratic branching mechanism or Abraham and Delmas [1] for general branching mechanisms. Their interest is twofold: they allow one to understand the behavior of processes at the top, see Goldschmidt and Haas [18] for an application of this approach, and to investigate the process conditioned on non-extinction, or Q-process, see [31] and Overbeck [25].
For Markov processes with absorbing states, the Q-process is defined as the process conditioned on non absorption in remote time, see Darroch and Seneta [9]. Lamperti and Ney [21] found a simple construction in the case of discrete branching processes. Later on, Roelly and Rouault [28] provided a superprocess version of this result. Q-processes have intrinsic interest as a model of stochastic population, see Chen and Delmas [7]. They also find application in the study of the associated martingale, see Lyons, Pemantle and Peres [24] in a discrete setting.
Understanding this martingale allows one to better understand the original process, see Engländer and Kyprianou [16] for superprocesses with non-homogeneous branching mechanisms.
Our primary interest is to present a genealogy for superprocesses with a non-homogeneous quadratic branching mechanism, to condition it with respect to its height (this is the Williams' decomposition), and to study the associated Q-process.
Let X = (X_t, t ≥ 0) be an (L, β, α) superprocess over a Polish space E. The underlying spatial motion Y = (Y_t, t ≥ 0) is a Markov process with infinitesimal generator L, started at x under P_x. The non-homogeneous quadratic branching mechanism is denoted by ψ(x, λ) = β(x)λ + α(x)λ², for suitable functions β and α (explicit conditions can be found in Section 2). Let P_ν be the distribution of X started from the finite measure ν on E, and N_x the corresponding canonical measure of X with initial state x. In particular, the process X under P_ν is distributed as ∑_{i∈I} X^i, where ∑_{i∈I} δ_{X^i}(dX) is a Poisson point measure with intensity ∫_{x∈E} ν(dx) N_x(dX). We define the extinction time of X, H_max = inf{t > 0 : X_t = 0}, and assume that X suffers almost sure extinction, that is N_x[H_max = ∞] = 0 for all x ∈ E. Using an h-transform from Engländer and Pinsky [17] and a Girsanov transformation from Perkins [26], we provide a genealogical structure for the superprocess X, see Proposition 3.12, by transferring the genealogical structure of a homogeneous superprocess.
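For orientation, the link between P_ν and the canonical measures implied by this Poissonian decomposition can be written out explicitly (a standard consequence of the exponential formula, stated here as a worked identity):

```latex
\mathrm{E}_{\nu}\!\left[\mathrm{e}^{-X_t(f)}\right]
  = \exp\Big(-\int_E \nu(\mathrm{d}x)\,
      \mathbb{N}_x\big[1-\mathrm{e}^{-X_t(f)}\big]\Big),
  \qquad f \ \text{non-negative, bounded, measurable.}
```

Letting f increase to +∞ gives P_ν(X_t = 0) = exp(−∫_E ν(dx) N_x[X_t ≠ 0]), the identity through which extinction probabilities under P_ν are expressed via the canonical measure.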
We define the function v_h(x) = N_x[X_h ≠ 0] = N_x[H_max > h] and a family of probability measures (P^{(h)}_x, h > 0), absolutely continuous with respect to P_x and expressed in terms of v on the natural filtration (D_t, t ≥ 0) of Y, see Lemma 4.10. Using the genealogical structure of X, we give a decomposition of the superprocess X with respect to an individual chosen at random, also called a Bismut decomposition, in Proposition 4.4. The following Theorem, see Corollary 4.13 for a precise statement and Corollary 4.14 for a statement under P_ν, gives a Williams' decomposition of X, that is a spine decomposition with respect to its extinction time H_max.
Theorem (Williams' decomposition under N_x). Assume that the (L, β, α) superdiffusion X suffers almost sure extinction and that some regularity assumptions on α and β hold.
(i) The distribution of H_max under N_x is characterized by N_x[H_max > h] = v_h(x), h > 0.

(ii) Conditionally on {H_max = h_0}, the (L, β, α) superdiffusion X under N_x is distributed as X^{(h_0)}, constructed as follows. Let x ∈ E and let Y_{[0,h_0)} be distributed according to P^{(h_0)}_x. Conditionally on Y, consider a Poisson point measure N = ∑_{j∈J} δ_{(s_j, X^j)} on [0, h_0) × Ω whose intensity is given in Corollary 4.13. The process X^{(h_0)} = (X^{(h_0)}_t, t ≥ 0) is then defined, for all t ≥ 0, by superposing the atoms of N alive at time t.

The proof of this Theorem relies on a Williams' decomposition of the genealogy of X, see Theorem 4.12. Notice it also implies the existence of a measurable family (N^{(h)}_x, h > 0) of probability measures such that N^{(h)}_x is the distribution of X under N_x conditionally on {H_max = h}.

We shall from now on consider the case of Y a diffusion on R^d or a pure jump process on a finite state space. The generalized eigenvalue λ_0 of the operator β − L is defined in Pinsky [27] for diffusions on R^d. For a finite state space, it reduces to the Perron–Frobenius eigenvalue, see Seneta [30]. In both cases, we have:

λ_0 = sup {ℓ ∈ R : ∃u ∈ D(L), u > 0, such that (β − L)u = ℓu}.

We assume that the space of positive harmonic functions for (β − λ_0) − L is one-dimensional, generated by a function φ_0. Under these assumptions, the space of positive harmonic functions of the adjoint of (β − λ_0) − L is also one-dimensional, and we denote by φ̃_0 a generator of this space. We also assume that φ_0 is bounded from below and from above by positive constants and that the operator (β − λ_0) − L is product-critical, that is, ∫_E dx φ_0(x) φ̃_0(x) < ∞. Thanks to the product-criticality property, the probability measure P^{φ_0}, obtained by an h-transform of P based on φ_0, defines a recurrent Markov process (in the sense given by (70)). Since φ_0 is bounded from below and from above by two positive constants, the non-negativity of λ_0 implies the weak convergence of the spine, that is, the weak convergence of P^{(h)}_x towards P^{(∞)}_x, which is given by P^{φ_0}_x, see Proposition 6.8.
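In the finite state space case, λ_0 is computable directly: with L given by the jump-rate matrix Q, λ_0 is the unique eigenvalue of diag(β) − Q admitting a positive eigenvector (the Perron–Frobenius eigenvalue of β − L). A minimal numerical sketch, with a hypothetical two-state generator (all numerical values are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical 2-state jump generator Q (rows sum to zero) and branching
# rates beta; every numerical value here is purely illustrative.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
beta = np.array([0.5, 1.5])

A = np.diag(beta) - Q      # matrix form of the operator beta - L

# -A is a Metzler matrix, so by Perron-Frobenius exactly one eigenvalue
# of A carries a positive eigenvector: the one with smallest real part.
eigvals, eigvecs = np.linalg.eig(A)
i0 = np.argmin(eigvals.real)
lam0 = eigvals[i0].real
phi0 = eigvecs[:, i0].real
phi0 = phi0 / phi0[0]      # normalized; all entries come out positive

print(lam0, phi0)
```

The same computation applied to the adjoint (the transpose of A) produces φ̃_0; in the finite case, product criticality is automatic since the "integral" is a finite sum.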
An explicit expression is given for P^{(h)}_x and P^{φ_0}_x in Lemmas 7.4 and 7.7. The non-negativity of λ_0 implies the almost sure extinction of X, see Lemma 6.2. Under very general conditions, the weak convergence of the spine implies the convergence of the superprocess (Corollary 5.8) and of its genealogy (Theorem 5.5). These statements take a simple form in the particular case of an underlying motion that is a diffusion or a pure jump process on a finite state space. We also see that N^{(∞)}_x, defined below, is actually the law of the Q-process, defined as the weak limit of the probability measures N^{(≥h)}_x = N_x[ · | H_max ≥ h], see also Lemma 5.1.
Theorem (Q-process under N_x). Assume that λ_0 ≥ 0. Let Y be distributed according to P^{φ_0}_x and, conditionally on Y, let N = ∑_{j∈J} δ_{(s_j, X^j)} be a Poisson point measure whose intensity is given in Theorem 5.5; the process X^{(∞)} = (X^{(∞)}_t, t ≥ 0) built from these atoms is then the Q-process.

We also prove the weak convergence of the probability measures (N^{(h)}_x, h > 0) backward from the extinction time. Let P^{(−h)} denote the push-forward probability measure of P^{(h)} under time reversal at h. The product-criticality assumption yields the existence of a probability measure P^{(−∞)} arising as the limit of P^{(−h)} as h → ∞, for all x ∈ E, t ≥ 0, and f bounded measurable. Once again, the convergence of the spine implies the convergence of the superprocess. The following result corresponds to the second item of Theorem 5.9.
Theorem (Asymptotic distribution at the extinction time). Assume that λ_0 > 0. Then the process, seen backward from its extinction time, converges; in the limiting description, Y has distribution P^{(−∞)} and, conditionally on Y, ∑_{j∈J} δ_{(s_j, X^j)} is a Poisson point measure whose intensity is specified in Theorem 5.9.

Remark 1.3. For a superprocess with homogeneous branching mechanism, the Q-process may easily be defined from the well-known Q-process of the total mass process (see for instance [7] in the case of a general branching mechanism). Thus the recurrence condition imposed on the spatial motion is not necessary for the Williams' decomposition, but it seems more natural in order to get the asymptotic distribution at the extinction time.
Remark 1.4. The genealogy of X defined in Proposition 3.12 allows us to interpret the following probability measure P^{(B,t)}_x as the law of the ancestral lineage of an individual sampled at random at height t (see the Bismut decomposition, Proposition 4.4). We prove in Lemma 6.13 that, if φ_0 is bounded from below and from above by positive constants and the operator (β − λ_0) − L is product-critical, then the ancestral lineage of an individual sampled at random at height t under N_x converges as t → ∞ to the law of the spine, that is, P^{(B,t)}_x converges weakly to P^{φ_0}_x. This Feynman–Kac-type penalization result (see Chapter 2 of Roynette and Yor [29]) relies heavily on the product-criticality assumption, but holds without restriction on the sign of λ_0. It may be interpreted as an example of the so-called globular state in random polymers, investigated in Cranston, Koralov and Molchanov [8].
Outline. We give some background on superprocesses with a non-homogeneous branching mechanism in Section 2. Section 3 begins with the definition of the h-transform in the sense of Engländer and Pinsky, Definition 3.4, continues with a Girsanov Theorem, Proposition 3.7, and ends with the definition of the genealogy, Proposition 3.12, by combining both tools. Section 4 is mainly devoted to the proof of the Williams' decomposition, Theorem 4.12. Along the way, we give a decomposition with respect to a randomly chosen individual, also known as a Bismut decomposition, in Proposition 4.2. Section 5 gives some applications of the Williams' decomposition: we first prove in Lemma 5.1 that the limit of the superprocesses conditioned to become extinct at a remote time coincides with the Q-process (the superprocess conditioned to become extinct after a remote time), and actually show in Theorem 5.5 that such a limit exists. We also consider in Theorem 5.9 the convergence of the process seen from the top (that is, backward from the extinction time). All previous results are stated under a set of assumptions. We then give in Section 6 sufficient conditions for these assumptions to hold in terms of the generalized eigenvector and eigenvalue, and check in Section 7 that they hold in two examples: the finite state space superprocess (whose mass process is the multitype Feller diffusion) and the superdiffusion.

Notations and definitions
This section, based on the lecture notes of Perkins [26], provides us with basic material about superprocesses, relying on their characterization via the Log Laplace equation.
We first introduce some definitions:
• E is the set of real-valued measurable functions and bE ⊂ E the subset of bounded functions.
• C(E, R), or simply C, is the set of continuous real-valued functions on E, and C_b ⊂ C the subset of continuous bounded functions.
• D(R_+, E), or simply D, is the set of càdlàg paths of E equipped with the Skorokhod topology, D is the Borel sigma field on D, and (D_t, t ≥ 0) the canonical right-continuous filtration on D.
• For each set of functions, the superscript + denotes the subset of non-negative functions: for instance, bE_+ stands for the subset of non-negative functions of bE.
• M_f(E) is the space of finite measures on E. The standard inner product notation will be used: ν(f) = ∫_E f(x) ν(dx) for ν ∈ M_f(E) and f ∈ E.

We can now introduce the two main ingredients which enter the definition of a superprocess, the spatial motion and the branching mechanism:
• Assume Y = (D, D, D_t, Y_t, P_x) is a Borel strong Markov process; "Borel" means that x ↦ P_x(A) is Borel measurable for every A ∈ D. Let E_x denote the expectation operator, and (P_t, t ≥ 0) the semi-group defined by P_t f(x) = E_x[f(Y_t)]. We impose the additional assumption that P_t : C_b → C_b; in particular the process Y has no fixed discontinuities. The generator associated to the semi-group will be denoted by L. Remember that f belongs to the domain D(L) of L if f ∈ C_b and (P_t f − f)/t converges to some g ∈ C_b as t → 0, in which case g = L(f).
• The functions α and β being elements of C_b, with α bounded from below by a positive constant, the non-homogeneous quadratic branching mechanism ψ^{β,α} is defined by ψ^{β,α}(x, λ) = β(x)λ + α(x)λ² for all x ∈ E and λ ∈ R. We will just write ψ for ψ^{β,α} when there is no possible confusion. If α and β are constant functions, we call the branching mechanism (and, by extension, the corresponding superprocess) homogeneous.
The mild form of the Log Laplace equation is given by the integral equation, for φ, f ∈ bE_+, t ≥ 0, x ∈ E:

u^{f,φ}_t(x) + E_x[∫_0^t ψ(Y_s, u^{f,φ}_{t−s}(Y_s)) ds] = E_x[f(Y_t) + ∫_0^t φ(Y_s) ds].   (3)

Theorem 2.1 ([26], Theorem II.5.11). Let φ, f ∈ bE_+. There is a unique jointly (in t and x) Borel measurable solution u^{f,φ}_t(x) of equation (3) such that u^{f,φ}_t is bounded on [0, T] × E for all T > 0.

We shall write u^f for u^{f,0} when φ is null. We introduce the canonical space of continuous applications from [0, ∞) to M_f(E), denoted by Ω := C(R_+, M_f(E)), endowed with its Borel sigma field F and the canonical right-continuous filtration (F_t, t ≥ 0). Notice that F = F_∞.

Theorem 2.2 ([26]). There exists a unique Markov process X = (Ω, F, F_t, X_t, (P^{(L,β,α)}_ν, ν ∈ M_f(E))) such that, for all f ∈ bE_+, ν ∈ M_f(E) and t ≥ 0: E^{(L,β,α)}_ν[e^{−X_t(f)}] = e^{−ν(u^f_t)}. X is called the (L, β, α)-superprocess.
We now state the existence theorem for the canonical measures.

Theorem 2.3 ([26]). There exists a family (N^{(L,β,α)}_x, x ∈ E) of σ-finite measures on Ω such that, if ∑_{j∈J} δ_{X^j} is a Poisson point measure with intensity ∫_E ν(dx) N^{(L,β,α)}_x, then ∑_{j∈J} X^j is an (L, β, α)-superprocess started at ν.

We will often abuse notation by writing P_ν (resp. N_x) instead of P^{(L,β,α)}_ν (resp. N^{(L,β,α)}_x), and P_x instead of P_{δ_x} when starting from δ_x, the Dirac mass at point x.
Let X be an (L, β, α)-superprocess. The exponential formula for Poisson point measures yields the following equality, for f ∈ bE_+ and t ≥ 0: N_x[1 − e^{−X_t(f)}] = u^f_t(x), where u^f_t is (uniquely) defined by equation (4). Denote by H_max the extinction time of X: H_max = inf{t > 0 : X_t = 0}.

Definition 2.4 (Global extinction). The superprocess X suffers global extinction if P_ν(H_max < ∞) = 1 for all ν ∈ M_f(E).
We will need the following assumption: (H1) The (L, β, α)-superprocess satisfies the global extinction property.
We shall be interested in the function v_t(x) = N_x[X_t ≠ 0] = N_x[H_max > t]. We set v_∞(x) = lim_{t→∞} ↓ v_t(x). The global extinction property is easily stated using v_∞: it holds if and only if v_∞ = 0, as the following proof shows. See also Lemma 4.9 for other properties of the function v.
Proof. The exponential formula for Poisson point measures yields: P_ν(X_t = 0) = e^{−ν(v_t)}. To conclude, let t go to infinity in the previous equality to get: P_ν(H_max < ∞) = e^{−ν(v_∞)}.

For homogeneous superprocesses (α and β constant), the function v is easy to compute, and global extinction holds if and only if β is non-negative. Then, using a stochastic domination argument, one gets that an (L, β, α)-superprocess with non-negative β exhibits global extinction (see [16], p. 80, for details).
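To make the homogeneous computation explicit: for constant β > 0 and α > 0, v_t solves ∂_t v_t = −ψ(v_t) = −βv_t − αv_t² with v_{0+} = +∞, which gives v_t = β/(α(e^{βt} − 1)) and hence v_∞ = 0. A quick numerical sanity check of this closed form (the constants below are illustrative):

```python
import math

# Constant mechanism psi(v) = beta*v + alpha*v**2 (illustrative constants).
beta, alpha = 0.7, 1.3

def v(t):
    # candidate closed form for v_t when alpha and beta are constant
    return beta / (alpha * (math.exp(beta * t) - 1.0))

def psi(u):
    return beta * u + alpha * u * u

# v should solve dv/dt = -psi(v); check by centered finite differences
for t in (0.5, 1.0, 2.0):
    h = 1e-6
    dv = (v(t + h) - v(t - h)) / (2 * h)
    assert abs(dv + psi(v(t))) < 1e-4

# and v_infinity = 0, i.e. global extinction, since beta > 0
print(v(10.0))
```

For β < 0 the analogous formula has a positive limit as t → ∞, consistent with the claim that global extinction holds if and only if β is non-negative.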

A genealogy for the spatially dependent superprocess
We first recall (Section 3.1) the h-transform for superprocesses introduced in [17] and then (Section 3.2) a Girsanov theorem previously introduced in [26] for interactive superprocesses. These two transformations allow us to give a Radon–Nikodym derivative of the distribution of a superprocess with non-homogeneous branching mechanism with respect to the distribution of a superprocess with a homogeneous branching mechanism. The genealogy of the superprocess with a homogeneous branching mechanism can be described using a Brownian snake, see [12]. Then, in Section 3.3, we use the Radon–Nikodym derivative to transport this genealogy and obtain a genealogy for the superprocess with non-homogeneous branching mechanism.
3.1. h-transform for superprocesses. We first introduce a new probability measure on (D, D) using the next Lemma.

Lemma 3.1. Let g be a positive function of D(L), bounded from below by a positive constant. Then the process ((g(Y_t)/g(x)) e^{−∫_0^t ds (Lg/g)(Y_s)}, t ≥ 0) is a positive martingale under P_x.
Proof. Let g be as in Lemma 3.1 and f ∈ D_g(L). The process ((fg)(Y_t) − ∫_0^t L(fg)(Y_s) ds, t ≥ 0) is a P_x-martingale by definition of the generator L. We then set M^{f,g} as in (8): Itô's lemma yields that the process (M^{f,g}_t, t ≥ 0) is another P_x-martingale. Take f constant equal to 1 to get the result.
Let P^g_x denote the probability measure on (D, D) defined, for t ≥ 0, by:

dP^g_x/dP_x |_{D_t} = (g(Y_t)/g(x)) e^{−∫_0^t ds (Lg/g)(Y_s)}.   (9)

Note that in the case where g is harmonic for the linear operator L (that is, Lg = 0), the probability distribution P^g is the usual Doob h-transform of P for h = g.
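As a purely illustrative example of Lemma 3.1 (setting aside the requirement that g be bounded from below, which fails here), take Y a standard Brownian motion, L = ½ d²/dx², and g(x) = e^{θx}, so that Lg/g = θ²/2:

```latex
\frac{g(Y_t)}{g(x)}\,\mathrm{e}^{-\int_0^t (Lg/g)(Y_s)\,\mathrm{d}s}
  \;=\; \exp\Big(\theta\,(Y_t - Y_0) - \tfrac{\theta^2}{2}\,t\Big),
```

the classical exponential martingale; the corresponding P^g is then the law of Brownian motion with drift θ (Cameron–Martin).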
We also introduce the generator L^g of the canonical process Y under P^g and the expectation operator E^g associated to P^g.

Lemma 3.2. Let g be a positive function of D(L) such that g is bounded from below by a positive constant. Then we have D_g(L) ⊂ D(L^g) and, for f ∈ D_g(L):

L^g(f) = (L(fg) − f Lg)/g.

Proof. As, for f ∈ D_g(L), the process (M^{f,g}_t, t ≥ 0) defined by (8) is a martingale under P_x, we get that the process (f(Y_t) − ∫_0^t ((L(fg) − f Lg)/g)(Y_s) ds, t ≥ 0) is a martingale under P^g_x. This gives the result.
Remark 3.3. Let g = ((t, x) ↦ g(t, x)) be a function bounded from below by a positive constant, differentiable in t, such that g(t, ·) ∈ D(L) for each t and (t, x) ↦ ∂_t g(t, x) is bounded from above. By considering the process (t, Y_t) instead of Y_t, we have the immediate counterpart of Lemma 3.1 for the time-dependent function g(t, ·). In particular, we may define the following probability measure on (D, D) (still denoted by P^g_x, by a small abuse of notation):

dP^g_x/dP_x |_{D_t} = (g(t, Y_t)/g(0, x)) e^{−∫_0^t ds ((∂_s g + Lg)/g)(s, Y_s)},

where L acts on g as a function of x.
We now define the h-transform for superprocesses, as introduced in [17] (notice this does not correspond to the Doob h-transform for superprocesses).
Definition 3.4. Let X = (X_t, t ≥ 0) be an (L, β, α) superprocess. For g ∈ bE_+, we define the h-transform of X (with h = g) as the measure-valued process X^g = (X^g_t, t ≥ 0) given, for all t ≥ 0, by:

X^g_t(dx) = g(x) X_t(dx).   (11)

Note that (11) holds point-wise, and that the law of the h-transform of a superprocess may be singular with respect to the law of the initial superprocess.
We first give an easy generalization of a result of Section 2 of [17] to a general spatial motion.
Proposition 3.5. Let g be a positive function of D(L) such that g is bounded from below by a positive constant. Then the process X^g is an (L^g, (−L + β)g/g, αg)-superprocess.
Proof. The Markov property of X^g is clear. We compute, for f ∈ bE_+: where, by Theorem 2.2, u satisfies: which can also be written: But (12) written at time t − s gives: By comparing the two previous equations, we get: and the Markov property now implies that the process: with s ∈ [0, t], is a P_x-martingale. Itô's lemma now yields that the process: with s ∈ [0, t], is another P_x-martingale (the integrability comes from the assumptions Lg ∈ C_b and 1/g ∈ C_b). Taking expectations at time s = 0 and at time s = t, we have: We divide both sides by g(x) and expand ψ according to its definition: By definition of P^g_x from (9), we get that: We conclude from Theorem 2.2 that X^g is an (L^g, (−L + β)g/g, αg)-superprocess.
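The branching part of Proposition 3.5 can be read off directly from the definition X^g_t(f) = X_t(gf): replacing λ by g(x)λ in ψ and renormalizing by g gives

```latex
\frac{\psi\big(x,\, g(x)\lambda\big)}{g(x)}
  \;=\; \beta(x)\,\lambda \;+\; \alpha(x)\,g(x)\,\lambda^{2},
```

so the quadratic coefficient becomes αg, while the linear coefficient additionally picks up the correction coming from the change of spatial motion, yielding (−L + β)g/g in total. In particular, for g = 1/α the quadratic coefficient is constant equal to 1, which is the point of the h-transform used below.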
In order to perform the h-transform of interest, we shall consider the following assumption.
Notice that (H2) implies that αL(1/α) ∈ C_b. Proposition 3.5 and Lemma 3.1 then yield the following Corollary.

Corollary 3.6. Let X be an (L, β, α)-superprocess. Assume (H2). The process X^{1/α} is an (L̃, β̃, 1)-superprocess with:

L̃ = L^{1/α} and β̃ = β − αL(1/α).

Moreover, for all t ≥ 0, the law P̃_x of the process Y with generator L̃ is absolutely continuous on D_t with respect to P_x, and its Radon–Nikodym derivative is given by:

dP̃_x/dP_x |_{D_t} = (α(x)/α(Y_t)) e^{−∫_0^t ds (αL(1/α))(Y_s)}.   (14)

We will write P̃ for the law of X^{1/α} on the canonical space (that is, P̃ = P^{(L̃,β̃,1)}) and Ñ for its canonical measure. Observe that the branching mechanism of X under P̃, which we shall write ψ̃, is given by:

ψ̃(x, λ) = β̃(x)λ + λ²,   (15)

and the quadratic coefficient no longer depends on x. Notice that P_{αν}(X ∈ ·) = P̃_ν(αX ∈ ·). This implies the following relationship on the canonical measures (use Theorem 2.3 to check it):

α(x) N_x[X ∈ ·] = Ñ_x[αX ∈ ·].   (16)

As α is positive, equality (16) implies in particular that, for all t > 0 and x ∈ E:

α(x) v_t(x) = ṽ_t(x).   (17)
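As a worked check of the relation between the canonical measures, combine P_{αν}(X ∈ ·) = P̃_ν(αX ∈ ·) with the Poissonian decomposition of Theorem 2.3 and match the intensities of the two Poisson point measures:

```latex
\alpha(x)\,\mathbb{N}_x[X \in \cdot\,]
  = \tilde{\mathbb{N}}_x[\alpha X \in \cdot\,],
\qquad\text{hence}\qquad
\alpha(x)\,v_t(x)
  = \tilde{\mathbb{N}}_x[\alpha X_t \neq 0]
  = \tilde{\mathbb{N}}_x[X_t \neq 0]
  = \tilde v_t(x),
```

since α is positive. This is the relation αv_t = ṽ_t used repeatedly in Section 4.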

3.2. A Girsanov-type theorem. The following assumption will be used to perform the Girsanov change of measure.
For z ∈ R, we set z_+ = max(z, 0). Under (H2) and (H3), we define the constant β_0 and the non-negative function q through equation (18); notice that q ≥ 0. We shall consider the distribution of the homogeneous (L, β_0, 1)-superprocess, which we will denote by P^0 (P^0 = P^{(L,β_0,1)}), and its canonical measure N^0. Note that the branching mechanism of X under P^0 is homogeneous (it does not depend on x). We set ψ_0 for ψ^{β_0,1}. Since ψ_0 no longer depends on x, we shall also write ψ_0(λ) for ψ_0(x, λ):

ψ_0(λ) = β_0 λ + λ².   (19)

Proposition 3.7 below is a Girsanov-type theorem which allows us to finally reduce the distribution P̃ to the homogeneous distribution P^0. We introduce the process M = (M_t, t ≥ 0) defined by:

M_t = exp(X_0(q) − X_t(q) − ∫_0^t ds X_s(ϕ)),   (20)

where the function ϕ is defined by (21).

Proposition 3.7 (Girsanov-type theorem). Assume (H2) and (H3) hold. Let X be an (L̃, β̃, 1)-superprocess.
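The reason a Girsanov-type change of measure can relate P̃ and P^0 is that both branching mechanisms have quadratic coefficient 1, so they differ only through their linear parts:

```latex
\tilde\psi(x,\lambda) - \psi_0(\lambda)
  \;=\; \big(\tilde\beta(x) - \beta_0\big)\,\lambda ,
```

and an exponential tilt of the superprocess only shifts the linear (drift) coefficient of the branching mechanism, leaving the quadratic part unchanged.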
(i) The process M is a bounded F-martingale under P̃_ν which converges a.s. to a limit M_∞.

(iii) If moreover (H1) holds, then P^0_ν-a.s. we have M_∞ > 0, and the probability measure P̃_ν is absolutely continuous with respect to P^0_ν on F, with:

dP̃_ν/dP^0_ν = e^{−X_0(q) + ∫_0^∞ ds X_s(ϕ)}.
We also have: The first two points are a particular case of Theorem IV.1.6, p. 252, of [26] on interactive drift. For the sake of completeness, we give a proof based on the mild form of the Log Laplace equation (3) introduced in Section 2. Notice that: Thus, Proposition 3.7 appears as a non-homogeneous generalization of Corollary 4.4 in [2]. We first give an elementary Lemma.
Lemma 3.8. Assume (H2) and (H3). Then the function ϕ defined by (21) is non-negative.

Proof. The following computation and the definition (18) of β_0 ensure that the function ϕ is non-negative:
Proof of Proposition 3.7. First observe that M is F-adapted. As the function q is also non-negative, we deduce from Lemma 3.8 that the process M is bounded by e^{X_0(q)}. Let f ∈ bE_+. On the one hand, we have: On the other hand, we have: Using (23), rewrite the previous equation in the form: We now make use of Dynkin's formula with (H3): and sum the equations (25) and (26) term by term to get: The functions r_t(x) and w_t(x) + q(x) are bounded on [0, T] × E for all T > 0 and satisfy the same equation, see equations (24) and (27). By uniqueness, see Theorem 2.1, we finally get that w_t + q = r_t. This gives: The Poissonian decomposition of the superprocess, see Theorem 2.3, and the exponential formula enable us to extend this relation to arbitrary initial measures ν: This equality with f = 1 and the Markov property of X prove the first part of item (i). Now, a direct induction based on the Markov property yields that, for all positive integers n and f_1, ..., f_n ∈ bE_+, 0 ≤ s_1 ≤ ... ≤ s_n ≤ t: and we conclude with an application of the monotone class theorem that the same identity holds for all non-negative F_t-measurable random variables. The martingale M is bounded and thus converges a.s. to a limit M_∞. We deduce that, for all non-negative F_t-measurable random variables Z: This also holds for any non-negative F_∞-measurable random variable Z. This gives the second item (ii).
Notice that, for all positive integers n and f_1, ..., f_n ∈ bE_+, 0 ≤ s_1 ≤ ... ≤ s_n, we have: Taking f_i = 0 for all i gives (22). This implies: The monotone class theorem then gives the last part of item (iii).

3.3. Genealogy for superprocesses.
We now recall the genealogy of X under P^0 given by the Brownian snake from [12]. We assume (H2) and (H3) hold. Let W denote the set of all càdlàg killed paths in E. An element w ∈ W is a càdlàg path w : [0, η(w)) → E, with η(w) the lifetime of the path w. By convention the trivial path {x}, with x ∈ E, is a killed path with lifetime 0, and it belongs to W. The space W is Polish for the distance: where d_s refers to the Skorokhod metric on the space D([0, s], E), and w_I is the restriction of w to the interval I. Denote by W_x the set of killed paths w such that w(0) = x. We work on the canonical space of continuous applications from [0, ∞) to W, denoted by Ω̄ := C(R_+, W), endowed with the Borel sigma field Ḡ for the distance d, and the canonical right-continuous filtration Ḡ_t = σ{W_s, s ≤ t}, where (W_s, s ∈ R_+) is the canonical coordinate process. Notice Ḡ = Ḡ_∞ by construction. We set H_s = η(W_s), the lifetime of W_s.

Definition 3.9 (Proposition 4.1.1 and Theorem 4.1.2 of [12]). Fix W_0 ∈ W_x. There exists a unique W_x-valued Markov process W = (Ω̄, Ḡ, Ḡ_t, W_t, P^0_{W_0}), called the Brownian snake, starting at W_0 and satisfying the two following properties: (i) The lifetime process H = (H_s, s ≥ 0) is a reflecting Brownian motion with non-positive drift −β_0, starting from H_0 = η(W_0). (ii) Conditionally given the lifetime process H, the process (W_s, s ≥ 0) is distributed as an inhomogeneous Markov process, with transition kernel specified by the two following prescriptions, for 0 ≤ s ≤ s′: This process will be called the β_0-snake started at W_0, and its law will be denoted by P^0_{W_0}.
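The lifetime process in item (i) can be sketched numerically: reflecting Brownian motion with drift −β_0 is obtained from a free drifted path through the Skorokhod reflection map. A minimal Euler sketch (all parameters illustrative):

```python
import random

# Euler sketch of the lifetime process: reflecting Brownian motion with
# non-positive drift -beta0, via the Skorokhod map H = Z - min(0, inf Z)
# applied to the free (non-reflected) path Z.  beta0, dt, n illustrative.
random.seed(0)
beta0, dt, n = 0.5, 1e-3, 5000

z, running_min, path = 0.0, 0.0, []
for _ in range(n):
    z += -beta0 * dt + random.gauss(0.0, dt ** 0.5)
    running_min = min(running_min, z)
    path.append(z - running_min)   # reflected value, always >= 0

print(max(path))
```

The reflection guarantees non-negativity of every sampled value; the excursions of this path away from 0 are the objects whose Itô measure appears in the definition of the excursion measure N^0_x below.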
We will just write P^0_x for the law of the snake started at the trivial path {x}. The corresponding excursion measure N^0_x of W is given as follows: the lifetime process H is distributed according to the Itô measure of positive excursions of reflecting Brownian motion with non-positive drift −β_0, and, conditionally given the lifetime process H, the process (W_s, s ≥ 0) is distributed according to (ii) of Definition 3.9. Let σ = inf{s > 0 : H_s = 0} denote the length of the excursion under N^0_x. Let (l^r_s, r ≥ 0, s ≥ 0) be the bicontinuous version of the local time process of H, where l^r_s refers to the local time at level r at time s. We also set ŵ = w(η(w)−) for the left-end position of the path w. We consider the measure-valued process X(W) = (X_t(W), t ≥ 0) defined by:

X_t(W)(dx) = ∫_0^σ d_s l^t_s δ_{Ŵ_s}(dx).

The β_0-snake gives the genealogy of the (L, β_0, 1) superprocess in the following sense. We have:

sup_{s ∈ [0,σ]} H_s = inf{t > 0 : X_t(W) = 0},

and we shall write this quantity H_max, or H_max(W) if we need to stress the dependence on W. This notation is coherent with (6). We now transport the genealogy of X under N^0 to a genealogy of X under Ñ. In order to simplify notations, we shall write X for X(W) when there is no confusion.
Notice the second equality in the previous definition is the third item of Proposition 3.7. At this point, the genealogy defined for X under Ñ_x gives the genealogy of X under N_x only up to a weight. We set:

Proposition 3.12. We have: We may write X^{weight}(W) for X^{weight} to emphasize the dependence on the snake W.
Proof. This is a direct consequence of Definition 3.11 and (16).
We shall say that W under N x provides through (34) a genealogy for X under N x .

A Williams' decomposition
In Section 4.1, we give a decomposition of the genealogy of the superprocesses (L, β, α) and (L̃, β̃, 1) with respect to a randomly chosen individual. In Section 4.2, we give a Williams' decomposition of the genealogy of the superprocesses (L, β, α) and (L̃, β̃, 1) with respect to the last individual alive.

4.1. Bismut's decomposition. A decomposition of the genealogy of the superprocess with respect to a randomly chosen individual is well known in the homogeneous case, even for a general branching mechanism (see Lemmas 4.2.5 and 4.6.1 in [12]).
We now explain how to decompose the snake process under the excursion measure (Ñ_x or N^0_x) with respect to its value at a given time. Recall that σ = inf{s > 0 : H_s = 0} denotes the length of the excursion. Fix a real number t ∈ [0, σ]. We consider the process: We also define, for the excursion j, the corresponding excursion of the snake: We consider the following two point measures on R_+ × Ω̄, for ε ∈ {g, d}: Notice that under N^0_x (and under Ñ_x if (H1) holds), the process W can be reconstructed from the triplet (W_t, R^g_t, R^d_t). We are interested in the probabilistic structure of this triplet when t is chosen according to the Lebesgue measure on the excursion time interval of the snake. Under N^0, this result is a consequence of Lemmas 4.2.4 and 4.2.5 from [12]. We recall it in the next Proposition.
For a point measure R = ∑_{j∈J} δ_{(s_j, x_j)} on a space R × X and A ⊂ R, we shall consider the restriction of R to A × X given by R|_A = ∑_{j∈J} 1_A(s_j) δ_{(s_j, x_j)}.

Proposition 4.1. For every measurable non-negative function F, the following formulas hold: where under Ẽ_x and conditionally on Y, R̃^{B,g} and R̃^{B,d} are two independent Poisson point measures with intensity ν̃^B(ds, dW) = ds N^0_{Y_s}[dW].

The next Proposition gives a similar result in the non-homogeneous case.

Proposition 4.2. Under (H1)-(H3), for every measurable non-negative function F, the two following formulas hold: where under Ẽ_x and conditionally on Y, R^{B,g}_{[0,r)} and R^{B,d}_{[0,r)} are two independent Poisson point measures with intensity: and where under E_x and conditionally on Y:

Observe there is a weight α(Ŵ_s) in (40) (see also (34), where this weight appears), which modifies the law of the individual picked at random, changing the modified diffusion P̃_x in (38) into the original one P_x.
We shall use the following elementary Lemma on Poisson point measure.
Lemma 4.3. Let R be a Poisson point measure on a Polish space with intensity ν. Let f be a non-negative measurable function such that ν(e^f − 1) < +∞. Then, for any non-negative measurable function F, we have:

E[F(R) e^{R(f)}] = e^{ν(e^f − 1)} E[F(R^f)],

where R^f is a Poisson point measure with intensity e^f ν.

Proof of Proposition 4.2. We keep the notations introduced in Propositions 4.1 and 4.2. We have: where the first equality comes from (H1) and item (iii) of Proposition 3.7; we set f(s, W) = ∫_0^{+∞} dr X_r(W)(ϕ) for the second equality; we use Proposition 4.1 for the third equality, Lemma 4.3 for the fourth, (22) for the fifth, and the definition (18) of q for the last. This proves (38). Start from (38) and use (14) as well as (33) to get (40).
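Lemma 4.3 can be sanity-checked in the simplest possible case: a Poisson point measure whose intensity is a single weighted Dirac mass, where the exponential formula E[e^{R(f)}] = e^{ν(e^f − 1)} reduces to an elementary series identity (all values below are illustrative):

```python
import math

# Intensity nu = c * delta_a: the Poisson point measure R has N ~ Poisson(c)
# atoms, all located at a, so R(f) = N * f(a).  Values are illustrative.
c, fa = 0.8, 0.3

# left side: E[exp(R(f))] via the (truncated) Poisson series
lhs = sum(math.exp(-c) * c**n / math.factorial(n) * math.exp(n * fa)
          for n in range(60))

# right side: the exponential formula exp(nu(e^f - 1))
rhs = math.exp(c * (math.exp(fa) - 1.0))

assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```

The tilting statement of Lemma 4.3 is the same computation with a test functional F inserted: the factor e^{R(f)} reweights the Poisson law of the atom count, which amounts to replacing the intensity ν by e^f ν.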
The proof of the following Proposition is similar to the proof of Proposition 4.2 and is not reproduced here.
Proposition 4.4. Under (H1)-(H3), for every measurable non-negative function F and every fixed t > 0, the two following formulas hold: where under Ẽ_x and conditionally on Y, R^{B,g} and R^{B,d} are two independent Poisson point measures with intensity ν^B defined in (39), and where under E_x and conditionally on Y, R^{B,g} and R^{B,d} are two independent Poisson point measures with intensity ν^B.
As an example of application of this Proposition, we can recover easily the following well known result.
In particular, we recover the so-called "many-to-one" formula (with g = 0 in Corollary 4.5):

Remark 4.6. Equation (44) justifies the introduction of the following family of probability measures indexed by t ≥ 0: which can be understood as the law of the ancestral lineage of an individual sampled at random at height t under the excursion measure N_x, and also corresponds to a Feynman–Kac penalization of the original spatial motion P_x (see [29]). Notice that this law does not depend on the parameter α. These probability measures are not compatible as t varies, but will be shown in Lemma 6.13 to converge as t → ∞, in restriction to D_s for fixed s ≤ t, under some ergodic assumption (see (H9) in Section 6).
Proof. For w ∈ W with η(w) = t and r_1, r_2 two point measures on R_+ × Ω̄, we set F(w, r_1, r_2) = f(ŵ) e^{h(r_1) + h(r_2)}, where h(∑_{i∈I} δ_{(s_i, W^i)}) = ∑_{s_i < t} X^{weight}(W^i)_{t−s_i}(g). We have: where we used item (ii) of Proposition 3.12 for the first and last equalities, (43) with the F defined above for the second, and the formula for exponentials of Poisson point measures together with (33) for the third.
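For the quadratic mechanism ψ(x, λ) = β(x)λ + α(x)λ², the first-moment ("many-to-one") formula behind Corollary 4.5 and Remark 4.6 takes, for f ∈ bE_+, the standard form

```latex
\mathbb{N}_x\big[X_t(f)\big]
  \;=\; \mathrm{E}_x\Big[f(Y_t)\,
      \mathrm{e}^{-\int_0^t \beta(Y_s)\,\mathrm{d}s}\Big],
```

which makes visible both the Feynman–Kac penalization of the spatial motion by e^{−∫_0^t β(Y_s) ds} and the fact that the law of the sampled ancestral lineage does not involve α.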

4.2. Williams' decomposition. We first recall the Williams' decomposition for the Brownian snake (see [32] for Brownian excursions, [31] for the Brownian snake, and [1] for a general homogeneous branching mechanism without spatial motion). The next result is a straightforward adaptation of Theorem 3.3 of [1] and gives the distribution of (H_max, W_{T_max}, R^g_{T_max}, R^d_{T_max}) under N^0_x.
Proposition 4.7 (Williams' decomposition under N^0_x). We have: In other words, for any non-negative measurable function F, we have: where, under Ẽ:

Remark 4.8. According to [19], the Esty time reversal "is obtained by conditioning a [discrete time] Galton Watson process in negative time upon entering state 0 (extinction) at time 0 when starting at state 1 at time −n and letting n tend to infinity". The authors then observe that in the linear fractional case (modified geometric offspring distribution) the Esty time reversal has the law of the same Galton Watson process conditioned on non-extinction. Notice that in our continuous setting, the relevant process is a Bessel process up to its first hitting time of h, and thus is reversible. A related statement is also well known, see Corollary 3.1.6 of [12]. This result, which holds at fixed h, gives a pre-limiting version of the Esty time reversal in continuous time. Passing to the limit as h → ∞, see Section 5.2, we get the equivalent of the Esty time reversal in a continuous setting.
Before stating the Williams' decomposition, Theorem 4.12, let us prove some properties which will play a significant role in the next Section. Recall that (17) states αv_t = ṽ_t.
Notice also that (18) implies that q is bounded from above by (β_0 + β_∞)/2. Lemma 4.9. Assume (H1)-(H3). We have: is of class C^1 in t and we have: where the function Σ is defined by: We deduce from item (iii) of Proposition 3.7 that, as ϕ ≥ 0 (see Lemma 3.8): We also have ṽ (…), where we used (22) for the third equality. This proves (46).
Using the Williams' decomposition under N^0_x, we get an expression for N^0_x[e^{−∫_0^{+∞} ds X_s(ϕ)}]. Using again the Williams' decomposition under N^0_x, we have: We deduce that, for fixed x, r ↦ N^{0,(r)}_x[e^{−∫_0^{+∞} ds X_s(ϕ)}] is non-decreasing and continuous, as N^0_y[H_max = t] = 0 for t > 0. Therefore, we deduce that, for fixed x, ṽ_t(x) is of class C^1 in t: Thanks to item (iii) of Proposition 3.7, we have: where the last equality follows from (15), (18) and (19). Thus, with Σ_s(y) = ∂_λψ^0(v^0_s) − ∂_λψ̃(y, ṽ_s(y)), we deduce that: This implies (47). Notice that, thanks to (46), Σ is non-negative and bounded from above by 2q.
Fix h > 0. We define the probability measures P^(h) and P̃^(h), absolutely continuous with respect to P and P̃ respectively on D_h, with Radon-Nikodym derivative: Notice that this Radon-Nikodym derivative is 1 if the branching mechanism ψ is homogeneous. We deduce from (47) and (48) that: and, using (14): In the next Lemma, we give an intrinsic representation of the Radon-Nikodym derivatives (52) and (53), which does not involve β_0 or v^0.
are non-negative bounded D_t-martingales respectively under P_x and P̃_x. Furthermore, we have for 0 ≤ t < h: Notice the limit M (…). Proof. First of all, the process M̃^(h) is clearly D_t-adapted. Using (47), we get: In the homogeneous setting, v^0 simply solves the ordinary differential equation: and thus: We deduce that Ẽ (…). Therefore, M̃^(h) is a D_t-martingale under P̃_x, and the second part of (54) is a consequence of (52). Then, use (14) to get that M^(h) is a D_t-martingale under P_x, and the first part of (54).
We now give the Williams' decomposition: the distribution of (H_max, W_Tmax, R^g_Tmax, R^d_Tmax) under N_x, or equivalently under Ñ_x/α(x). Recall the distribution P^(h)_x defined in (52) or (53). Theorem 4.12 (Williams' decomposition under N_x). Assume (H1)-(H3). We have: (…) (iii) Conditionally on {H_max = h_0} and W_Tmax, R^g_Tmax and R^d_Tmax are, under N_x, independent Poisson point measures on R_+ × Ω with intensity: In other words, for any non-negative measurable function F, we have: where, under E^(h)_x and conditionally on Y_[0,h), R^{W,(h),g} and R^{W,(h),d} are two independent Poisson point measures with intensity:

Notice that items (ii) and (iii) in the previous Proposition imply the existence of a measurable family (N^(h)_x, h > 0) of probability measures such that N^(h)_x is the distribution of W (more precisely of (W_Tmax, R^g_Tmax, R^d_Tmax)) under N_x conditionally on {H_max = h}. Proof. We keep the notations introduced in Proposition 4.7 and Theorem 4.12. We have: where the first equality comes from (H1) and item (iii) of Proposition 3.7; we set f(s, W) = ∫_0^{+∞} X_r(W)(ϕ) dr for the second equality; we use Proposition 4.7 for the third equality; we use (…). Consider the Poisson point measure N = ∑_{j∈J} δ_{(s_j, X^j)} on [0, h) × Ω with intensity: The process (…). We now give the superprocess counterpart of Theorem 4.12.
(i) Sample a positive number h_0 according to the law of H_max under P_ν: (iii) Conditionally on h_0 and x_0, sample X^(h_0) according to the probability measure N^(h_0)_{x_0}. (iv) Conditionally on h_0, sample X′, independent of x_0 and X^(h_0), according to the probability measure P_ν(· | H_max < h_0).
Then the measure-valued process X′ + X^(h_0) has distribution P_ν.
In particular, the distribution of X′ + X^(h_0) conditionally on h_0 (which is given by (ii)-(iv) from Corollary 4.14) is a regular version of the distribution of the (L, β, α) superprocess conditioned to die at the fixed time h_0, which we shall write P^(h_0)_ν. Proof. Let µ be a finite measure on R_+ and f a non-negative measurable function defined on R_+ × E. For a measure-valued process Z = (Z_t, t ≥ 0) on E, we set Z(fµ) = ∫ f(t, x) Z_t(dx) µ(dt).
We also write f s (t, x) = f (s + t, x).
Let X′ and X^(h_0) be defined as in Corollary 4.14. In order to characterize the distribution of the process X′ + X^(h_0), we shall compute E[e^{−(X′+X^(h_0))(fµ)}]. We shall use the notations of Corollary 4.13. We have: where we used the definition of X′ and N for the first equality, and the equality P_ν(H_max < h) = P_ν(H_max ≤ h) = e^{−ν(h)} for the second. Recall the notations of Theorem 4.12. We set: and g(h) = E_ν[e^{−X(fµ)} 1_{{H_max<h}}]. We have: where we used the definition of G and g for the first and third equalities, Theorem 4.12 for the second equality, the master formula for the Poisson point measure ∑_{i∈I} δ_{X^i} with intensity ν(dx) N_x[dX] for the fourth equality (with the obvious notation H^i_max = inf{t ≥ 0; X^i_t = 0}), and Theorem 2.3 for the last equality. Thus we get: This readily implies that the process X′ + X^(h_0) is distributed as X under P_ν.

Some applications
5.1. The law of the Q-process. Recall that P^(h)_ν, defined after Corollary 4.14, is the distribution of the (L, β, α)-superprocess started at ν ∈ M_f(E) conditionally on {H_max = h}. We also consider P^(≥h)_ν, the distribution of the superprocess conditioned on {H_max ≥ h}. The distribution of the Q-process, when it exists, is defined as the weak limit of P^(≥h)_ν when h goes to infinity. The next Lemma ensures that if P^(h)_ν weakly converges to a limit P^(∞)_ν, then this limit is also the distribution of the Q-process.
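In symbols, the conditionings involved in this subsection can be summarized as follows (a sketch in the paper's notation):

```latex
\[
  \mathrm{P}^{(h)}_{\nu} = \mathrm{P}_{\nu}\big(\cdot \mid H_{\max}=h\big),
  \qquad
  \mathrm{P}^{(\geq h)}_{\nu} = \mathrm{P}_{\nu}\big(\cdot \mid H_{\max}\geq h\big),
\]
% The Q-process, when it exists, is the weak limit tested on each F_t:
\[
  \mathrm{P}^{(\infty)}_{\nu}
  = \operatorname*{w\text{-}lim}_{h\to\infty} \mathrm{P}^{(\geq h)}_{\nu}
  \quad\text{on } \mathcal{F}_t,\ t\ge 0 .
\]
```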
Proof. Let Z = 1_A with A ∈ F_t such that P^(∞)_ν(∂A) = 0. Using the Williams' decomposition under P_ν given by Corollary 4.14, we have for h > t: We write down the difference: which gives the result. The proof is similar for the conditioned excursion measures.
We now address the question of convergence of the family of probability measures (P^(h)_x, h ≥ 0). Recall from (54) that for all 0 ≤ t < h: We shall consider the following assumption on the convergence in law of the spine. Note that Scheffé's lemma implies that the convergence also holds in L^1(P_x). Furthermore, since (M^(h)) (…). By construction, the probability measure P^(h)_x converges weakly to P^(∞)_x on D_t, for all t ≥ 0. Let ν ∈ M_f(E). We shall consider the following assumption: (H5)_ν There exists a measurable function ρ such that the following convergence holds in L^1(ν): In particular, we have ν(ρ) = 1. Under (H4) and (H5)_ν, we set: and P^(h)_ν(dY) then converges weakly to P^(∞)_ν. Remark 5.4. If ν is a constant multiple of the Dirac mass δ_x, for some x ∈ E, then (H5)_ν holds if (H4) holds, and in this case we have P^(∞)_ν = P^(∞)_x.
We can now state the result on the convergence of N (h) x .
and conditionally on Y, R^{B,g} and R^{B,d} are two independent Poisson point measures with intensity ν^B given by (39). We even have the following slightly stronger result. For any bounded measurable function F, we have: Proof. Let h > t. We use the notations of Theorems 5.5 and 4.12. Let F be a bounded measurable function on W × (R_+ × Ω)^2. From the Williams' decomposition, Theorem 4.12, we have: We also set: We want to control: Notice that: We first prove that the first term on the right-hand side of (60) converges to 0. We have: Then use that ϕ_h is bounded by ‖F‖_∞ and the convergence of (M^(h)) (…). We then prove that the second term on the right-hand side of (60) converges to 0. Conditionally on Y, R (…) given by (39). And we have: Thanks to (17) and (46), we get that: The proof of the next Lemma is postponed to the end of this Section. Lemma 5.6. Let R and R̃ be two Poisson point measures on a Polish space with respective intensities ν and ν̃. Assume that ν̃(dx) = 1_A(x) ν(dx), where A is measurable and ν(A^c) < +∞. Then for any bounded measurable function F, we have: Using this Lemma with ν given by 1_[0,t](s) ν^B(ds, dW) and A given by {H_max(W) < h − s}, we deduce that: We deduce that: Recall that (H1) implies that v_{h−s}(x) converges to 0 as h goes to infinity. Since v is bounded (use (17) and (46)), by dominated convergence we get: Therefore, we deduce from (60) that lim_{h→+∞} ∆_h = 0, which gives (59).
We now define a superprocess with spine distribution P (∞) ν .
Definition 5.7. Let Y be distributed according to P^(∞)_ν and, conditionally on Y, let N = ∑_{j∈J} δ_{(s_j, X^j)} be a Poisson point measure with intensity: Consider the process X^(∞) = (X^(∞)_t, t ≥ 0), which is defined for all t ≥ 0 by: (i) Let X′ be independent of X^(∞) and distributed according to P_ν. Then, we write P^(∞)_ν for the distribution of X′ + X^(∞).
(ii) If ν is the Dirac mass at x, we write N (∞) x for the distribution of X (∞) .
As a consequence of Theorem 5.5, we get the convergence of P (h) ν . We shall write P (h) x when ν is the Dirac mass at x. Corollary 5.8. Under (H1)-(H4), we have that, for all t ≥ 0: Proof. Point (i) is a direct consequence of Theorem 5.5, Definition 5.7 and Proposition 3.12.
Point (ii) is a direct consequence of point (i), Corollary 4.14 and the weak convergence of P^(≤h)_x to P_x as h goes to infinity. According to Corollary 4.14, under P (…). Assumption (H5)_ν implies that this distribution converges weakly to: (because of the convergence of the densities in L^1(ν)) on (Ω, F_t) as h goes to infinity. This, together with the weak convergence of P^(≤h)_ν to P_ν as h goes to infinity, gives point (iii).
Proof of Lemma 5.6. Similarly to Lemma 4.3 (formally, take f = −∞ 1_{A^c}), we have: We deduce that: This gives the result.
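The display of Lemma 5.6 is not reproduced above; a standard version of it, which follows from the independence of the restrictions of a Poisson point measure to A and A^c, reads E[F(R) 1_{{R(A^c)=0}}] = e^{−ν(A^c)} E[F(R̃)]. This version can be sanity-checked by simulation; the intensity, the set A and the functional F below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Intensity nu(dx) = RATE * dx on [0, 1]; A = [0, A_RIGHT], so nu(A^c) = RATE * (1 - A_RIGHT).
RATE, A_RIGHT = 2.0, 0.5
NU_AC = RATE * (1.0 - A_RIGHT)

def sample_ppm(rate, lo, hi, n, rng):
    """Sample n independent Poisson point measures of intensity rate*dx on [lo, hi];
    each realization is returned as a 1-d array of atom locations."""
    counts = rng.poisson(rate * (hi - lo), size=n)
    return [lo + (hi - lo) * rng.random(c) for c in counts]

def F(atoms):
    """A bounded functional of the point measure: exp(-sum of atom locations)."""
    return np.exp(-atoms.sum())

n = 200_000
# Left-hand side: full-intensity R, kept only on the event {no atom of R in A^c}.
lhs = np.mean([F(r) * (r.max(initial=0.0) <= A_RIGHT)
               for r in sample_ppm(RATE, 0.0, 1.0, n, rng)])
# Right-hand side: R~ with intensity restricted to A, weighted by exp(-nu(A^c)).
rhs = np.exp(-NU_AC) * np.mean([F(r) for r in sample_ppm(RATE, 0.0, A_RIGHT, n, rng)])
print(lhs, rhs)
```

Both Monte Carlo estimates agree up to sampling error, as the Lemma predicts.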

5.2.
Backward from the extinction time. We shall work in this section with the space D_− = D(R_−, E) equipped with the Skorokhod topology. We also consider the σ-fields D_I = σ(Y_r, r ∈ I) for I an interval of (−∞, 0]. Let us denote by θ the translation operator, which maps any process R to the shifted process θ_h(R) defined by: The process R may be a path, a killed path or a point measure, in which case we set, for R = ∑_{j∈J} δ_{(s_j, x_j)}, θ_h(R) = ∑_{j∈J} δ_{(h+s_j, x_j)}. We also denote by P^(−h) the push-forward of the probability measure P^(h) by θ_h, defined on D_[−h,0] by: We introduce the following assumptions.
(H6) There exists a probability measure on (D_−, D_(−∞,0]), denoted P^(−∞), such that for all x ∈ E, t ≥ 0, and f bounded and D_[−t,0] measurable: (H7) For all t > 0, there exists a non-negative function g such that for all x ∈ E and all h > 0: Note that the probability measure P^(−∞) in (H6) does not depend on the starting point x.
We can now state the result on the convergence of the superprocess backward from the extinction time.
where Y has distribution P^(−∞) and, conditionally on Y, R^{W,g} and R^{W,d} are two independent Poisson point measures with intensity: We even have the following slightly stronger result. For any bounded measurable function F, we have: and conditionally on Y with distribution P^(−∞), ∑_{j∈J} δ_{(s_j, X^j)} is a Poisson point measure with intensity: Remark 5.10. We provide in Lemmas 7.3 and 7.6 sufficient conditions for (H6) and (H7) to hold in the case of the multitype Feller diffusion and of the superdiffusion. These conditions are stated in terms of the generalized eigenvalue λ_0 defined in (57) and its associated eigenfunction.
Proof. Let 0 < t < h. We use the notations of Theorems 4.12, 5.5 and 5.9. Let F be a bounded measurable function on W_− × (R_− × Ω)^2, with W_− the set of killed paths indexed by negative times. We want to control δ_h defined by: We set: We deduce from the Williams' decomposition, Theorem 4.12, and the definition of R^{W,g} and R^{W,d}, that: We can thus rewrite δ_h as: The function Υ being bounded by ‖F‖_∞ and measurable, we may conclude under assumption (H6) that lim_{h→+∞} δ_h = 0. This proves point (i).
We now prove point (ii). Let t > 0 and ε > 0 be fixed. Let F be a bounded measurable function on the space of continuous measure-valued applications indexed by negative times. For a point measure M = ∑_{i∈I} δ_{(s_i, W^i)} on R_− × Ω, we set: For h > t, we want a control of δ̃_h defined by: By Corollary 4.13, we have: Thus, we get: For a > s fixed, we introduce δ̃^a_h, for h > a, defined by: Notice the restriction of the point measures to [−a, 0]. Point (i) directly yields that lim_{h→+∞} δ̃^a_h = 0. Thus, there exists h_a > 0 such that for all h ≥ h_a: We now consider the difference δ̃_h − δ̃^a_h. We associate to the point measures M introduced above the most recent common ancestor of the population alive at time −t: Let us observe that: with (…) in the left- and right-hand sides. Similarly, we have: with A = A(R^{W,g} + R^{W,d}) in the left- and right-hand sides. We thus deduce the following bound on δ̃_h − δ̃^a_h: where we used (65), (66), (67) and (68) for the first inequality, the definition of A for the first equality, as well as (H7) and the fact that 1 − e^{−x} ≤ x for x ≥ 0 for the last inequality. From (H7), we can choose a large enough such that |δ̃_h − δ̃^a_h| ≤ ε/2. We deduce that for all h ≥ max(a, h_a): This proves point (ii).

The assumptions (H4), (H5) ν and (H6)
We assume in all this section that P is the distribution of a diffusion in R^K, K an integer, or the law of a Markov chain on a finite state space, see Section 7 and the references therein. In particular, the generalized eigenvalue λ_0 of (β − L) (see (86) or (88)) is known to exist. We will denote by φ_0 the associated right eigenvector. We shall consider the assumption: (H8) There exist two positive constants C_1 and C_2 such that C_1 ≤ φ_0(x) ≤ C_2 for all x ∈ E; and φ_0 ∈ D(L).
Under (H8), let P^{φ_0}_x be the probability measure on (D, D) defined by (9) with g replaced by φ_0: We shall also consider the assumption: (H9) The probability measure P^{φ_0} admits a stationary measure π, and we have: Notice that the two hypotheses (H8) and (H9) hold for the examples of Section 7, see Lemmas 7.1 and 7.5.
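Assuming (9) is the usual Feynman-Kac h-transform (which is consistent with (β − L)φ_0 = λ_0 φ_0 and makes the density below a martingale), the measure P^{φ_0}_x can be sketched as:

```latex
\[
  \frac{\mathrm{d}\mathrm{P}^{\phi_0}_x}{\mathrm{d}\mathrm{P}_x}\Big|_{\mathcal{D}_t}
  = \frac{\phi_0(Y_t)}{\phi_0(x)}
    \exp\!\Big(\int_0^t \big(\lambda_0-\beta(Y_s)\big)\,\mathrm{d}s\Big),
  \qquad t\ge 0.
\]
% Martingale check: d/dt E_x[ e^{\int_0^t(\lambda_0-\beta)(Y_s)ds} \phi_0(Y_t) ]
% produces the factor ( L\phi_0 - (\beta-\lambda_0)\phi_0 )(Y_t) = 0.
```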
Let us mention at this point that we will check that P^{φ_0}_x coincides with P^(∞)_x defined by (58), see Proposition 6.8. 6.1. Proof of (H4)-(H6). Notice that (H9) implies that the probability measure P^{φ_0}_π admits a stationary version on D(R, E), which we still denote by P^{φ_0}_π. We introduce a specific h-transform of the superprocess. From Proposition 3.5 and the definition of the generalized eigenvalue (86) and (88), we have that the h-transform given by Definition 3.4 with g = φ_0 of the (L, β, α) superprocess is the (L^{φ_0}, λ_0, αφ_0) superprocess. We define v^{φ_0} for all t > 0 and x ∈ E by: Observe that, as in (17), the following normalization holds between v^{φ_0} and v: Our first task is to give precise bounds on the decay of v^{φ_0}_t as t goes to ∞. We first give bounds for the case λ_0 = 0 in Lemma 6.1, relying on a coupling argument; this in turn gives a sufficient condition under which (H1) holds, see Lemma 6.2. We then give Feynman-Kac representation formulae, Lemma 6.3, which yield exponential bounds in the case λ_0 > 0, see Lemma 6.4. We finally strengthen in Lemma 6.6 the bound of Lemma 6.4 by proving the exponential behavior of v^{φ_0}_t in the case λ_0 > 0. The proofs of Lemmas 6.1, 6.2, 6.3, 6.4 and 6.6 are given in Section 6.2.
We first give a bound in the case λ 0 = 0.
The following Proposition gives that (H6) holds. Proof. Let t > 0 and let F be a bounded and D_[−t,0] measurable function. For h large enough, we have: where we used the definition of P^(−h) and the Markov property for the first equality, Lemma 6.7 together with the bound |F| ≤ ‖F‖_∞ for the third, and assumption (H9) for the fourth. We continue the computations as follows: where we used Lemma 6.10 for the second equality. This gives (H6) with P^(−∞) = P^{(−∞),φ_0}. 6.2. Proofs of Lemmas 6.1, 6.2, 6.3, 6.4 and 6.6.
Proof of Lemma 6.1. From (H2) and (H8), there exist m, M ∈ R such that (…). Let W be a (M/(αφ_0) L, 0, M) Brownian snake and define the time change Φ for every w ∈ W by Φ_t(w) = ∫_0^t ds (M/(αφ_0))(w(s)). Let Φ^{−1} denote its inverse. Then, using the first step of the proof of Proposition 12 of [10], we have that the time-changed snake W ∘ Φ^{−1}, with value (…) at time s, is a (L, 0, αφ_0) Brownian snake. Noting the obvious bound Φ^{−1}_t(w) ≤ t on the time change, we have, according to Theorem 14 of [10]: which implies: from the exponential formula for Poisson point measures. Now, the left-hand side of this inequality can be computed explicitly and equals 1/(Mt), and the right-hand side is v^{φ_0}_t(x) by (71). We have thus proved that: and this yields the first part of the inequality of Lemma 6.1. The second part is obtained in the same way, using the coupling with the (m/(αφ_0) L^{φ_0}, 0, m) Brownian snake.
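The explicit value for the homogeneous comparison snake comes from the quadratic case: for a branching mechanism ψ(λ) = Mλ², the extinction function solves a closed ODE (a sketch):

```latex
\[
  \partial_t u_t = -M\,u_t^{2},\qquad u_{0^+}=+\infty
  \quad\Longrightarrow\quad u_t=\frac{1}{Mt},
\]
```

which is the value 1/(Mt) appearing in the proof above.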
Proof of Lemma 6.3. Let ε > 0. The function v^{φ_0} is known to solve the following mild form of the Laplace equation, see equation (3): Differentiating with respect to s and replacing t by t − s, we deduce, by dominated convergence and the bounds (46), (47) and (49) on v^{φ_0} = v/φ_0 and its time derivative (valid under the assumptions (H1)-(H3)), the following mild form for the time derivative ∂_t v^{φ_0}: From the Markov property, for fixed t > 0, the two following processes: are D_s-martingales under P^{φ_0}_π. A Feynman-Kac manipulation, as done in the proof of Lemma 3.1, enables us to conclude that, for fixed t > 0: are D_s-martingales under P^{φ_0}_π. Taking expectations at times s = 0 and s = h with t = h + ε, we get the representation formulae stated in the Lemma: Proof of Lemma 6.4. Since v^{φ_0}_ε = v_ε/φ_0 = ṽ_ε/(αφ_0), we can conclude from (46), (H2) and (H8) that v^{φ_0}_ε is bounded from above and from below by positive constants. Similarly, we also get from (47), (48) and (49) that |∂_h ṽ_ε| is bounded from above and from below by positive constants. Thus, there exist four positive constants D_1, D_2, D_3 and D_4 such that, for all x ∈ E: From equations (73), (82) and the positivity of v^{φ_0}, we deduce that: Plugging (84) back into (73), we get the converse inequality: Similar arguments, using (74) and (83) instead of (73) and (82), give (76).
Proof of Lemma 6.6. Using the Feynman-Kac representation of ∂_h v^{φ_0}_{h+ε} from (73) as well as the Markov property, we have: Notice that (…), according to Lemma 6.4 if λ_0 > 0 and Lemma 6.1 if λ_0 = 0. We get: where we used (85) for the first equality, (H9) for the second, the stationarity of Y under P^{φ_0}_π for the third, and (85) again for the last. This gives (77).
6.3. About the Bismut spine. Choosing an individual uniformly at random at height t under N_x and letting t → ∞, we will see that the law of its ancestral lineage should converge, in some sense, to the law of the oldest ancestral lineage, which itself converges to P^(∞)_x defined in (58), according to Lemma 6.8.
We have defined in (45) the following family of probability measures indexed by t ≥ 0: the Radon-Nikodym derivative dP^(t)_x|_{D_{t_0}} / dP_x|_{D_{t_0}} converges, P_x-a.s. and in L^1(P_x).
Note that there is no restriction on the sign of λ 0 for this Lemma to hold.
Remark 6.14. This result corresponds to the so-called globular phase in the random polymers literature (see [8], Theorem 8.3).
Proof. We have: where we use the Markov property for the first equality, make λ_0 appear in the second equality and φ_0 in the third, in order to obtain the Radon-Nikodym derivative of P^{φ_0}_x with respect to P_x: this observation gives the fourth equality. The ergodicity assumption (H9) ensures the P_x-a.s. convergence to 1 of the fraction in the fourth equality as t goes to ∞. Since this fraction is bounded according to (H8), the convergence also holds in L^1(P_x). Then use Lemma 6.8 to get that P^{φ_0}_x = P^(∞)_x.

Two examples
In this section, we specialize the results of the previous sections to the case of the multitype Feller process and of the superdiffusion. 7.1. The multitype Feller diffusion. The multitype Feller diffusion is the superprocess with finite state space E = {1, . . . , K}, K an integer. In this case, the spatial motion is a pure-jump Markov process, which will be assumed irreducible. Its generator L is a K × K matrix (q_ij)_{1≤i,j≤K} with rows summing to 0, where q_ij gives the transition rate from i to j for i ≠ j. The functions β and α defining the branching mechanism (2) are vectors of size K: this implies that (H2) and (H3) automatically hold. For more details about the construction of finite state space superprocesses, we refer to [14], example 2, p. 10, and to [6] for an investigation of the Q-process.
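In this finite-state setting the generalized eigenvalue defined in (86) below can be computed numerically: it is the eigenvalue of Diag(β) − L admitting a positive eigenvector, by Perron-Frobenius theory. A sketch with a hypothetical 2-type example (the matrix L and the vector beta are illustrative choices only):

```python
import numpy as np

# Generator of an irreducible 2-state jump process: rows sum to 0,
# off-diagonal entries are the transition rates.
L = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
beta = np.array([0.5, 0.5])  # constant beta, so lambda_0 = 0.5 is known in advance

def generalized_eigenvalue(beta, L):
    """lambda_0 = sup{l : (Diag(beta) - L) u = l u for some u > 0}.
    For an irreducible generator L, this is the eigenvalue of Diag(beta) - L
    whose eigenvector can be chosen positive."""
    M = np.diag(beta) - L
    vals, vecs = np.linalg.eig(M)
    for i in range(len(vals)):
        v = np.real(vecs[:, i])
        # A positive eigenvector may come out with either global sign.
        if np.all(v > 0) or np.all(v < 0):
            return float(np.real(vals[i]))
    raise RuntimeError("no positive eigenvector found")

lam0 = generalized_eigenvalue(beta, L)
print(lam0)  # approximately 0.5
```

For constant β ≡ b, the vector u = (1, . . . , 1) is an eigenvector since the rows of L sum to 0, so λ_0 = b; the script checks this on the 2-type example.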
The generalized eigenvalue λ_0 is defined by: (86) λ_0 = sup{ℓ ∈ R, ∃u > 0 such that (Diag(β) − L)u = ℓu}, where Diag(β) is the diagonal K × K matrix with diagonal coefficients given by the vector β. We stress that the generalized eigenvalue is also the Perron-Frobenius eigenvalue.

7.2. The superdiffusion. The superprocess associated to a diffusion is called a superdiffusion. We first define the diffusion and the relevant quantities associated to it, taking the general setup from [27]. Here E is an arbitrary domain of R^K, K an integer. Let the a_ij and b_i be in C^{1,µ}(E), the usual Hölder space of order µ ∈ [0, 1), which consists of functions whose first order derivatives are locally Hölder continuous with exponent µ, for each i, j in {1, . . . , K}. Moreover, assume that the matrix (a_ij)_{(i,j)∈{1,...,K}^2} is positive definite. Define now the generator L of the diffusion to be the elliptic operator: The generalized eigenvalue λ_0 of the operator β − L is defined by: λ_0 = sup{ℓ ∈ R, ∃u ∈ D(L), u > 0 such that (β − L)u = ℓu}. (88) Denoting by E the expectation operator associated to the process with generator L, we recall an equivalent probabilistic definition of the generalized eigenvalue λ_0: for any x ∈ R^K, where τ_{A^c} = inf{t > 0 : Y(t) ∉ A} and the supremum runs over the compactly embedded subsets A of R^K. We assume that the operator (β − λ_0) − L is critical, in the sense that the space of positive harmonic functions for (β − λ_0) − L is one-dimensional, generated by φ_0. In that case, the space of positive harmonic functions of the adjoint of (β − λ_0) − L is also one-dimensional, and we denote by φ̃_0 a generator of this space. We further assume that the operator (β − λ_0) − L is product-critical, i.e. ∫_E dx φ_0(x) φ̃_0(x) < ∞, in which case we can normalize the eigenvectors in such a way that ∫_E dx φ_0(x) φ̃_0(x) = 1.
This assumption (already appearing in [15]) is a rather strong one and implies in particular that P^{φ_0} is the law of a recurrent Markov process, see Lemma 7.5 below.
Concerning the branching mechanism, we will assume, in addition to the conditions stated in Section 2, that α ∈ C^4(E).
Note that the non-negativity of the generalized eigenvalue of the operator (β − L) characterizes in general the local extinction property (the superprocess X suffers local extinction if its restrictions to compact domains of E suffer global extinction); see [16] for more details on this topic. However, under the boundedness assumptions we just made on α and φ_0, the extinction property (H1) holds, as will be proved (among other things) in the following Lemma. Proof. The assumption α ∈ C^4(E) ensures that (H2) and (H3) hold. The end of the proof is then similar to the end of the proof of Lemma 7.2 and to the proof of Lemma 7.3. Lemma 7.7. We have:

Recall that P
• P^(h)_x is an inhomogeneous diffusion on [0, h) started from x, with generator at time t ∈ [0, h):
• P^(∞)_x is a homogeneous diffusion on [0, ∞) started from x, with generator L + a (∇φ_0/φ_0)·∇.
Proof. The proof is similar to the proof of Lemma 7.4.
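The drift correction displayed above is the usual Doob h-transform identity; as a sketch, with L = (1/2)Σ a_ij ∂²_ij + Σ b_i ∂_i and (β − L)φ_0 = λ_0 φ_0:

```latex
\[
  L^{\phi_0} f
  \;=\; \frac{1}{\phi_0}\,\big(L-(\beta-\lambda_0)\big)(\phi_0 f)
  \;=\; L f + a\,\frac{\nabla\phi_0}{\phi_0}\cdot\nabla f ,
\]
```

the zeroth-order term canceling because (β − λ_0 − L)φ_0 = 0; in particular the spine generator involves β only through φ_0 and not α.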
Williams' decomposition under N^(h)_x (Corollary 4.14) together with the convergence of this decomposition (Theorem 5.5) then hold under the assumption λ_0 ≥ 0 and (H8). Convergence of the distribution of the superprocess near its extinction time under N^(h)_x (Theorem 5.9) holds under the stronger assumption λ_0 > 0.
Remark 7.8. Engländer and Pinsky offer in [17] a decomposition of supercritical non-homogeneous superdiffusions using immigration on the backbone formed by the prolific individuals (as they are called in Bertoin, Fontbona and Martinez [4]). It is interesting to note that the generator of the backbone is L^w, where w formally satisfies the evolution equation Lw = ψ(w), whereas the generator of the spine of the Q-process investigated in Theorem 5.5 is L^{φ_0}, where φ_0 formally satisfies Lφ_0 = βφ_0. In particular, we notice that the generator of the backbone L^w depends on both β and α, while the generator of our spine L^{φ_0} does not depend on α.