Approximation of a generalized continuous-state branching process with interaction

In this work, we consider a continuous-time branching process with interaction, where the birth and death rates are nonlinear functions of the population size. We prove that, after a proper renormalization, our model converges to a generalized continuous-state branching process, solution of the SDE (1.1), where W is a space-time white noise on (0, ∞)² and M̄(ds, dz, du) = M(ds, dz, du) − ds µ(dz) du, with M a Poisson random measure on (0, ∞)³, independent of W, with mean measure ds µ(dz) du, where (1 ∧ z²) µ(dz) is a finite measure on (0, ∞).


Introduction
Consider a population evolving in continuous time with m ancestors at time t = 0, in which each individual carries a random vector describing her lifetime and her number of offspring. We assume that these random vectors are independent and identically distributed. The rate of reproduction is governed by a finite measure ν on Z₊ = {0, 1, 2, ...}, satisfying ν(1) = 0. More precisely, each individual lives for an exponential time with parameter ν(Z₊), and is replaced by a random number of children distributed according to the probability ν(Z₊)⁻¹ν. On top of this branching mechanism, we superimpose additional birth and death rates due to interactions, at a rate which depends upon the rest of the population. More precisely, given a function f : R₊ → R which satisfies assumption (H2) below, whenever the total size of the population is k, the total additional birth rate due to interactions is ∑_{i=1}^k (f(i) − f(i−1))⁺, while the total additional death rate due to interactions is ∑_{i=1}^k (f(i) − f(i−1))⁻. After a suitable renormalization, the population process converges to the solution of the SDE (1.1), where W is a space-time white noise on R₊ × R₊, M(ds, dz, du) is a Poisson random measure with mean measure ds µ(dz) du, independent of W, c ≥ 0, and µ is a σ-finite measure on (0, ∞) which satisfies (H1): ∫₀^∞ (1 ∧ z²) µ(dz) < ∞.
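In this setting, the SDE (1.1) for the limiting process Z^x takes the following generalized CSBP-with-interaction form; the splitting of the jump integral at the level z = 1 is our convention here, chosen to be consistent with (H1):

```latex
Z^x_t = x + \int_0^t f(Z^x_s)\,ds
      + \sqrt{2c}\int_0^t\!\int_0^{Z^x_s} W(ds,du)
      + \int_0^t\!\int_0^1\!\int_0^{Z^x_{s^-}} z\,\bar M(ds,dz,du)
      + \int_0^t\!\int_1^\infty\!\int_0^{Z^x_{s^-}} z\,M(ds,dz,du).
```

The compensated measure M̄ is used only for the small jumps, since (H1) does not require the large jumps z > 1 to be integrable against µ.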
Under the assumptions (H1) and (H2), the existence and uniqueness of a strong solution of (1.1) is proved in [5]. We thus generalize the convergence result in [3], see also [12], where the limit was a continuous process.
We will also need to consider the CSBP Y^x, solution of the SDE (1.2), whose branching mechanism we denote by ψ. In this work, we assume that Y^x does not explode, which is equivalent to assumption (H3) below (see [8]). The paper is organised as follows. We first define a discrete model jointly for all initial population sizes; this imposes a non-symmetric competition rule between the individuals, which we describe in Section 2 below. We perform a suitable renormalization of the parameters of the discrete model in Section 3, and we prove the convergence of the renormalized model in the large population limit in Section 4.
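With the coefficients of (1.1), the branching mechanism of Y^x and the associated Laplace functional take the classical spectrally positive form below; the truncation 1_{z≤1} inside ψ is our convention, matching the jump splitting above:

```latex
\psi(\lambda)=c\lambda^2+\int_0^\infty\bigl(e^{-\lambda z}-1+\lambda z\,\mathbf 1_{\{z\le 1\}}\bigr)\mu(dz),
\qquad
\mathbb E\bigl[e^{-\lambda Y^x_t}\bigr]=e^{-x\,u_t(\lambda)},
\qquad
\frac{\partial u_t(\lambda)}{\partial t}=-\psi\bigl(u_t(\lambda)\bigr),\quad u_0(\lambda)=\lambda.
```

Assumption (H3) is the corresponding non-explosion condition on ψ from [8].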
Note that, due to our weak assumption (H1), Z^x need not have a finite moment of order 1. This induces difficulties in checking tightness of the approximation, which we overcome by comparison with two branching processes.

The model
We consider a continuous-time Z₊-valued population process {X^m_t, t ≥ 0}, which starts at time zero from X^m_0 = m ancestors, arranged from left to right, and evolves in continuous time. The left/right order is passed on to their offspring, and at each birth event all newborns are inserted in an arbitrary left/right order. These rules apply inside each genealogical tree, so that distinct branches of a tree never cross. The forest of genealogical trees of the population is thus a planar forest, where the ancestor of the population X¹_t is placed on the far left, the ancestor of X²_t − X¹_t immediately on her right, etc. This defines unambiguously a left-to-right order within the population alive at each time t. We decree that each individual feels the interaction with the individuals placed on her left, but not with those on her right. In order to simplify our formulas, we suppose moreover that the first individual in the left/right order gives birth to ℓ offspring at rate ν(ℓ) + f⁺(1)1_{ℓ=2}, and dies at rate ν(0) + f⁻(1). The process {X^m_t, t ≥ 0} is then a Z₊-valued Markov process which evolves as follows. If X^m_t = 0, then X^m_s = 0 for all s ≥ t. While at state k ≥ 1, the process jumps according to the rates given in (2.1). Hence the total interaction birth rate minus the total interaction death rate endured by the population at time t is ∑_{i=1}^{X^m_t} (f(i) − f(i−1)) = f(X^m_t) − f(0).
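The jump rates just described lend themselves to a Gillespie-type simulation. The sketch below is illustrative only: the offspring measure ν = ½δ₀ + ½δ₂ and the logistic-type interaction function f used in the usage example are our choices, not the paper's.

```python
import random

def simulate_population(m, nu, f, t_max, rng):
    """Gillespie simulation of the interacting branching process X^m.

    nu: dict {offspring number j: rate nu(j)}, with nu(1) = 0.
    While the population size is k, the additional interaction birth
    rate is sum_{i<=k} (f(i)-f(i-1))^+ and the additional interaction
    death rate is sum_{i<=k} (f(i)-f(i-1))^-.
    """
    t, k = 0.0, m
    path = [(0.0, m)]
    nu_total = sum(nu.values())
    while t < t_max and k > 0:
        incr = [f(i) - f(i - 1) for i in range(1, k + 1)]
        b_k = sum(max(d, 0.0) for d in incr)   # interaction births
        d_k = sum(max(-d, 0.0) for d in incr)  # interaction deaths
        total = k * nu_total + b_k + d_k       # total jump rate at state k
        if total == 0.0:
            break
        t += rng.expovariate(total)
        if t >= t_max:
            break
        u = rng.random() * total
        if u < k * nu_total:
            # branching event: one individual is replaced by j children
            r = rng.random() * nu_total
            for j, w in nu.items():
                r -= w
                if r <= 0.0:
                    k += j - 1
                    break
        elif u < k * nu_total + b_k:
            k += 1                             # interaction birth
        else:
            k -= 1                             # interaction death
        path.append((t, k))
    return path
```

For instance, `simulate_population(5, {0: 0.5, 2: 0.5}, lambda x: 2*x - 0.05*x**2, 5.0, random.Random(1))` produces a nonnegative integer-valued trajectory, absorbed at 0 if the population dies out.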

Coupling over ancestral population size
The above description specifies the joint evolution of all {X^m_t, t ≥ 0}_{m≥1}, or in other words of the two-parameter process {X^m_t, t ≥ 0, m ≥ 1}. In the case of a linear function f, for each fixed t > 0, {X^m_t, m ≥ 1} is a process with independent increments. Here {X^m_·, m ≥ 1} is a Markov chain with values in the space D([0, ∞); Z₊) of càdlàg functions from [0, ∞) into Z₊, which starts from 0 at m = 0. Consequently, in order to describe the law of the two-parameter process {X^m_t, t ≥ 0, m ≥ 1}, it suffices to describe the conditional law of X^n_· given X^{n−1}_·, for each n ≥ 1. We now describe the conditional law of X^n_· given X^m_·, for arbitrary 1 ≤ m < n. Let V^{m,n}_t = X^n_t − X^m_t, t ≥ 0. Conditionally upon {X^j_·, j ≤ m}, and given that X^m_t = x(t), t ≥ 0, {V^{m,n}_t, t ≥ 0} is a Z₊-valued time-inhomogeneous Markov process starting from V^{m,n}_0 = n − m, whose time-dependent infinitesimal generator {Q_{k,j}(t), k, j ∈ Z₊} has its nonzero off-diagonal terms given below. This description of the conditional law of {X^n_t − X^m_t, t ≥ 0} given X^m_· is prescribed by what we have just said, and {X^m_·, m ≥ 1} is indeed a Markov chain. Note that V^{0,n}_t evolves as X^n_t. Our nonlinear function f is very general: it can model both the Allee effect and competition, in case it is increasing for moderate values of x and decreasing for large x.

A renormalized process
In this section, we first construct our continuous-time branching process with interaction, and then proceed to its renormalisation. Let N ≥ 1 be an integer which will eventually go to infinity.

Preliminaries
Let us start with a construction which will allow us to separate the small jumps and the big jumps of the population process. Let us define ψ₁ and ψ₂ ∈ C([0, +∞)) as below, where µ satisfies (H1). In what follows, this notation is used for |s| ≤ 1.
For any ℓ ≥ 0, we define the probability π_N(ℓ) as below, and it is easy to check that, for all N ≥ 1, π_N is indeed a probability on Z₊. We denote by h_N the probability generating function of π_N. In what follows, we will need Remark 3.1, which gives an estimate valid for any λ > 0. Consider g(s) = ½(1 + s²), which is the generating function of the probability π = ½(δ₀ + δ₂). We define one more probability ν_N(ℓ), ℓ ≥ 0, on Z₊. We denote by L_N the probability generating function of ν_N; its expression is given in (3.7).

Renormalized discrete model
Now we proceed with the renormalization of the model defined by (2.1). For x ∈ R₊ and N ∈ Z₊, we choose m = [Nx] ancestors, renormalize the reproduction measure ν accordingly, and multiply f by N while dividing its argument by N. We attach to each individual in the population a mass equal to 1/N. Then the total mass process Z^{N,x}, which starts from [Nx]/N at time t = 0, is a Markov process whose evolution can be described as follows: Z^{N,x} jumps from k/N according to the rates given in (3.8). Let P₁ and P₂ be two standard Poisson processes, such that M_N, P₁ and P₂ are independent. From (3.8), Z^{N,x} can be expressed as in (3.9). We introduce the notations given below, and for the rest of this subsection we define the martingale M^{N,x}, so that (3.9) can be rewritten as (3.11). Since M^{N,x} is purely discontinuous, its quadratic variation [M^{N,x}] is the sum of the squares of its jumps. From this, (3.2), (3.4) and the identities ∫_{Z₊} z² π^{−,N}(dz) = h''_{1,N}(1) + h'_{1,N}(1) and its analogue for π^{+,N}, it now follows from Assumption (H2) that the quadratic variation is controlled as stated below. In the direction x, we could only obtain the convergence in the sense of finite-dimensional distributions. Our result is nevertheless sufficient to assert that the coupling of the various initial conditions described by (1.1) is the natural one. For the proof of this theorem, we first consider Z^{N,x} for a fixed x > 0.
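To see why this renormalization produces the interaction drift f in the limit, note that (assuming f(0) = 0, as the interaction rates suggest) the positive and negative parts of the rescaled increments telescope:

```latex
\sum_{i=1}^{k}\Bigl(Nf\bigl(\tfrac{i}{N}\bigr)-Nf\bigl(\tfrac{i-1}{N}\bigr)\Bigr)^{+}
-\sum_{i=1}^{k}\Bigl(Nf\bigl(\tfrac{i}{N}\bigr)-Nf\bigl(\tfrac{i-1}{N}\bigr)\Bigr)^{-}
= N\,f\Bigl(\tfrac{k}{N}\Bigr),
```

and since each interaction event moves the total mass by ±1/N, the net interaction drift of Z^{N,x} at mass z = k/N is f(z), matching the drift term of (1.1).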

Tightness of Z^{N,x}
The main difficulty in proving tightness of the sequence Z^{N,x} comes from the fact that, as a result of our very weak assumption on the Lévy measure µ, the limiting process Z^x need not have a first moment (the large jumps may fail to be integrable). Hence we cannot hope for a uniform estimate of the first moment of Z^{N,x} as in Section 7.1 of [12], and another method is necessary for establishing tightness. We have chosen to compare Z^{N,x} (resp. Z^x) with a branching process Y^{N,x} (resp. with a CSBP Y^x).
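The comparison processes are continuous-time Markov branching processes, for which we shall use the classical generating-function description (Athreya–Ney [2]): if each individual lives an exponential time with parameter a and is replaced by a number of offspring with generating function L, and F_t(s) = E[s^{X_t} | X_0 = 1], then

```latex
\mathbb E\bigl[s^{X_t}\,\big|\,X_0=m\bigr]=F_t(s)^m,
\qquad
\frac{\partial F_t(s)}{\partial t}=a\bigl(L(F_t(s))-F_t(s)\bigr),\qquad F_0(s)=s.
```

This is the classical fact behind Proposition 4.2 below, applied with L = L_N.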
To prove the tightness criterion for Z^{N,x}, we will proceed in several steps. Let Y^{N,x} be the Markov process which starts from N at time t = 0, and evolves as described below for every ℓ ≥ 2. Hence the dynamics of the continuous-time Markov process X^{N,x} is entirely characterized by the measure µ_N. We have the following Proposition, which can be found in Athreya–Ney [2], (4) page 106, and in Pardoux [12], Prop. 3, page 10.

Proposition 4.2.
The generating function of the process X^{N,x} is given by the formula below, where L_N is the generating function given by (3.6). The function u^N_t solves equation (4.3). We fix λ > 0. Since ψ and ψ_N are locally Lipschitz, −ψ_N(u) ≤ −ψ(u) and u^N_0(λ) ≤ u_0(λ), it follows from a well-known comparison theorem for one-dimensional ODEs that u^N_t(λ) ≤ u_t(λ). We also use the fact, which follows from (H3), that Y^x_t does not explode, and that t ↦ u_t(λ) is continuous. We have

Proposition 4.3. Let (t, λ) ↦ u_t(λ) be the unique locally bounded positive solution of (4.5). For every λ > 0, as N → ∞, u^N_t(λ) → u_t(λ) locally uniformly in t.
Proof. We take the difference between (4.3) and (4.5), and use (4.4) to deduce the bound below, where k_N(λ) = λ − N(1 − e^{−λ/N}) → 0 as N → ∞, and K_λ is the Lipschitz constant of ψ on [u^N_0(λ), ū(λ)]. From (4.6), the last term tends to 0 as N goes to infinity. We conclude thanks to Gronwall's lemma. Now some simple algebra yields the identity below (recall that Y^x is the CSBP given by (1.2)). We next establish (4.7), where M̄^{N,x} is a purely discontinuous martingale, whose predictable quadratic variation reads as below. We have

Lemma 4.6. For all T > 0, x ≥ 0, there exists a constant C₀ > 0 such that for all N ≥ 1, sup_{0≤t≤T} E[Y^{N,x}_t] ≤ C₀.

Proof. Taking the expectation on both sides of equation (4.7), we obtain E[Y^{N,x}_t] ≤ x + β ∫₀ᵗ E[Y^{N,x}_r] dr.
It remains to use Gronwall's Lemma to conclude, with C₀ = x e^{βT}.
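The ODE comparison behind Proposition 4.3 is easy to illustrate numerically. Below, ψ(u) = u² (i.e. c = 1, µ = 0) is a toy branching mechanism with closed-form solution u_t(λ) = λ/(1 + λt); it is not the ψ of this paper.

```python
def solve_u(psi, lam, t_max, dt=1e-4):
    """Explicit Euler scheme for the backward equation
    du/dt = -psi(u), u_0(lambda) = lambda (cf. (4.5))."""
    u = lam
    for _ in range(int(round(t_max / dt))):
        u -= dt * psi(u)
    return u
```

Replacing ψ by ψ_N and λ by N(1 − e^{−λ/N}) gives the approximation u^N_t(λ); Gronwall's lemma then controls the difference between the two solutions, as in the proof of Proposition 4.3.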
We next establish the second moment bound below. From Cauchy–Schwarz, Doob's inequality for the L² norm, the bound |y| ≤ 1 + y² and (4.8), we obtain the estimate that follows. Since, by (H3), Y^x_T < ∞ a.s., we can choose M such that (4.10) holds. The arguments leading to Proposition 4.4 yield that Ȳ^{N,x}_T ⇒ Ȳ^x_T, where Ȳ^x is defined as Y^x, but with µ replaced by µ̄. Using again the Portmanteau theorem and (4.10), we obtain a further bound. Finally, combining this inequality with (4.11), (4.12) and Proposition 4.7, we deduce the claimed estimate. The result follows by choosing k = 2C₁/(1 − ε/2).
Thanks to these results, we are in a position to establish the tightness of Z^{N,x}.
We want to check tightness of the sequence {Z^{N,x}, N ≥ 1} using Aldous' criterion. Let {τ_N, N ≥ 1} be an arbitrary sequence of [0, T]-valued stopping times. We deduce from the above Corollary the following.

Lemma 4.9. For any T > 0 and η, ε > 0, there exists δ > 0 such that the bound below holds.

Proof. Let J : R₊ → R₊ be the continuous increasing function defined by J(z) = sup_{0≤r≤z} |f(r)|.
The result follows by using Corollary 4.8 and choosing δ < η/J(k_ε).
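Recall the form of Aldous' criterion [1] that these lemmas verify: the sequence {Z^{N,x}, N ≥ 1} is tight in D([0, ∞); R₊) as soon as Z^{N,x}_t is tight in R₊ for each fixed t and, for all T, ε, η > 0, there exists δ > 0 such that

```latex
\sup_{N\ge 1}\;\sup_{\substack{\tau\ \text{stopping time},\ \tau\le T\\ 0\le\theta\le\delta}}
\mathbb P\Bigl(\bigl|Z^{N,x}_{\tau+\theta}-Z^{N,x}_{\tau}\bigr|>\varepsilon\Bigr)\le\eta.
```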
We also need to check tightness of the sequence of processes Q^{N,x}. We have

Lemma 4.10. For any T > 0 and η, ε > 0, there exists θ₀ > 0 such that the bound below holds.

Proof. From (3.10), we notice the identity below. From Lemma 4.9 and Lemma 4.10, we deduce that the second and fourth terms on the right-hand side of (3.11) satisfy Aldous' criterion of [1]. Corollary 4.8, (3.12) and (3.13) imply that ⟨M^{N,x}⟩ is both tight and continuous, hence C-tight in the terminology of [10], and from Theorem VI.4.13 in [10], M^{N,x} is then tight. This establishes the tightness of Z^{N,x}, stated as Proposition 4.11.

Convergence of Z^{N,x} for fixed x
For the rest of this section we adopt the notation (4.13). The argument leading to (3.13) yields the analogous bound here. It follows from (1.1) and Itô's formula that, for λ ≥ 0, the process in (4.14) is a martingale. Thanks to these results, we can now establish the convergence of Z^{N,x}.

Proof. By Proposition 4.11, along a subsequence (still denoted as the whole sequence) {Z^{N,x}_t, t ≥ 0} converges weakly to a process {Z^x_t, t ≥ 0} for the Skorohod topology of D([0, ∞); R₊). For λ ≥ 0, let F(u) = e^{−λu}. From (3.9), (4.13) and Itô's formula, we deduce that the process in (4.15) is a martingale, where we have used the decomposition (2cN + d_N)ν_N = d_N π_N + 2cN π. Using Taylor's formula, an easy adaptation of the argument of the proof of Proposition 3.40 of Li [11], and Remark 3.1, we deduce that the coefficients of (4.15) converge to those of (4.14). Now, by combining the above results with Lemma 4.12 and Proposition 4.11, we obtain (4.14) by letting N → ∞ in (4.15).

Let g ∈ C²_K(R₊) (the space of C² functions from R₊ into R with compact support) and h(x) = g(−log(x)), so that h ∈ C²([0, 1]). Let h_n(x) = ∑_{k=0}^n \binom{n}{k} h(k/n) x^k (1−x)^{n−k} be its Bernstein polynomial approximation, which converges uniformly to h on [0, 1]. Consequently g_n(x) = ∑_{k=0}^n \binom{n}{k} g(−log(k/n)) e^{−kx} (1 − e^{−x})^{n−k} is a linear combination of exponential functions with negative exponents, which converges to g uniformly on R₊ as n → ∞. A lengthy but elementary computation shows that g'_n(x) → g'(x) and g''_n(x) → g''(x) pointwise. Consequently, if L denotes the generator of the Markov process Z^x, we have Lg_n(x) → Lg(x). This being true for any g ∈ C²_K(R₊), it is easy to conclude that Z^x solves the martingale problem associated to (1.1). The result follows.
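The Bernstein approximation step can be checked numerically. The test function h(x) = x² below is an arbitrary illustration, not the h of the proof; for it, the n-th Bernstein polynomial equals x² + x(1 − x)/n, so the uniform error is exactly 1/(4n).

```python
from math import comb

def bernstein(h, n, x):
    """n-th Bernstein polynomial of h, evaluated at x in [0, 1]:
    sum_k C(n, k) h(k/n) x^k (1-x)^(n-k)."""
    return sum(comb(n, k) * h(k / n) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))
```

Substituting x = e^{−t} turns the Bernstein polynomial of h into the combination g_n of exponentials with negative exponents used in the proof.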

Proof of Theorem 4.1
We shall prove the statement in the case n = 2 only; the general case is very similar. Recall (3.8). We now describe the law of the pair (Z^{N,x}, Z^{N,y}), for any 0 < x < y. From an easy adaptation of the arguments above, we deduce the following.

Proposition 4.14. For any fixed 0 ≤ x < y, V^{N,x,y} ⇒ V^{x,y} as N → ∞ in D([0, ∞); R₊), where V^{x,y} is the unique solution of the SDE given below.