Electronic Communications in Probability

A RESUMMED BRANCHING PROCESS REPRESENTATION FOR A CLASS OF NONLINEAR ODES

We study probabilistic representations, based on branching processes, of a simple nonlinear differential equation, namely u' = λu(au^R − 1). The first approach is essentially the one used by Le Jan and Sznitman for the 3-d Navier-Stokes equations, which requires small initial data to work. In our much simpler setting we are able to make this precise, finding all the cases where their method fails to give the solution. The second approach is based on a resummed representation, which we can prove gives all the solutions of the problem, even those with large initial data.


Introduction
This research was initially motivated by the desire to understand the limitations on initial conditions imposed by Le Jan and Sznitman in their outstanding work [3] on branching process representations of the solution to the 3-d Navier-Stokes equations. They provide, for "small" initial conditions, global-in-time existence and uniqueness of solutions in suitable function spaces. This paper is devoted to an elementary toy model, a class of simple ODEs where the picture of existence and uniqueness is clear a priori by analytical methods. In this setting it is not difficult (this is the aim of Section 2) to provide examples of ODEs that can be solved for all initial conditions while the probabilistic formula is meaningful only for small data. This raises the question, also suggested to us by Sznitman in private conversation, of whether one can "resum" the classical probabilistic formula to get a new one that provides the solution of the differential equation for all (or at least more of) the admissible initial conditions. In Section 3 we provide such a formula, obtained by a resummation, which gives all the correct solutions to our class of ordinary differential equations. Proposition 3 states that whenever this new object can be defined, it is indeed a solution of the given equation. Proposition 4 finally shows that this happens whenever we know by analytical methods that a solution exists.

Cauchy problem and branching representation
The simple nonlinear problem we shall be dealing with is the following:

    u'(t) = λ u(t) (a u(t)^R − 1),    u(0) = u_0.    (1)

Here R is a positive integer, a and u_0 are real numbers and λ > 0. The system above is equivalent to the integral equation

    u(t) = e^{−λt} u_0 + a λ ∫_0^t e^{−λ(t−s)} u(s)^{R+1} ds.    (2)

We will produce a solution of the latter as the expected value of a random process built on a Yule branching. Consider a population of branching particles, starting at time 0 with exactly one ancestor (labelled with the empty string ∅), in which each particle i independently, after an exponential time τ_i of rate λ, is removed and replaced with R + 1 new particles labelled with the strings i0, i1, . . . , iR.
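For the reader who wants to experiment, problem (1) admits a closed form: the substitution w = u^R turns it into a logistic equation. The following sketch (ours; the function names and the quadrature scheme are illustrative, not part of the paper) evaluates this closed form and checks numerically that it satisfies the mild formulation u(t) = e^{−λt} u_0 + aλ ∫_0^t e^{−λ(t−s)} u(s)^{R+1} ds:

```python
import math

def u_exact(t, lam=1.0, a=1.0, R=2, u0=0.5):
    """Closed-form solution of u' = lam*u*(a*u**R - 1), u(0) = u0.

    Obtained via w = u**R, which satisfies the logistic ODE
    w' = R*lam*w*(a*w - 1); valid while the denominator stays positive.
    """
    e = math.exp(-R * lam * t)
    return u0 * math.exp(-lam * t) / (1.0 - a * u0**R * (1.0 - e)) ** (1.0 / R)

def integral_residual(t, lam=1.0, a=1.0, R=2, u0=0.5, n=2000):
    """Residual of the mild form
    u(t) = e^{-lam t} u0 + a*lam * int_0^t e^{-lam(t-s)} u(s)^{R+1} ds,
    evaluated with the trapezoidal rule; should be ~0."""
    h = t / n
    acc = 0.0
    for k in range(n + 1):
        s = k * h
        w = 0.5 if k in (0, n) else 1.0
        acc += w * math.exp(-lam * (t - s)) * u_exact(s, lam, a, R, u0) ** (R + 1)
    rhs = math.exp(-lam * t) * u0 + a * lam * h * acc
    return u_exact(t, lam, a, R, u0) - rhs
```

The residual stays at quadrature-error level for any t before blow-up, confirming that the differential and integral formulations agree.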
Let I be the set of all particle labels (i.e. the finite strings on the alphabet 0, 1, . . . , R, including ∅), and let τ = {τ_i}_{i∈I}, taking values in T := R_+^I, be the entire history of the population. We will denote by σ_i : T → T, for i = 0, 1, . . . , R, the shift operators that return the history of the population generated by the i-th child of the ancestor, i.e. for all h ∈ T, for all j ∈ I, and i = 0, 1, . . . , R,

    (σ_i(h))_j = h_{ij}.

Clearly for i = 0, 1, . . . , R, the random variables σ_i(τ) are independent and have the same distribution as τ.
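Concretely, a (finite fragment of a) history can be pictured as a map from label strings to clock values, and σ_i simply strips the leading letter i from every label. An illustrative sketch (ours; the names are not from the paper):

```python
def shift(h, i):
    """Shift operator sigma_i: the history seen by the i-th child of the
    ancestor, i.e. (sigma_i(h))_j = h_{ij} for every label string j."""
    prefix = str(i)
    return {j[len(prefix):]: t for j, t in h.items() if j.startswith(prefix)}

# Finite fragment of a history with R + 1 = 3 children per branching:
# labels are strings over {0, 1, 2}; '' is the ancestor.
h = {'': 0.3, '0': 0.9, '1': 0.5, '2': 1.2, '10': 0.1, '11': 0.4, '12': 0.7}
h1 = shift(h, 1)   # history generated by child 1: its clock becomes h1['']
```

Here h1 equals {'': 0.5, '0': 0.1, '1': 0.4, '2': 0.7}: the subtree rooted at label 1, relabelled so that particle 1 becomes the new ancestor.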
A very important process is N_t(τ), the total number of branchings up to time t. For any h ∈ T, let

    N_t(h) := #{ i ∈ I : Σ_{j≤i} h_j ≤ t },

where we say that i ≤ j, when both belong to I, if and only if j descends from i, that is, if j is the concatenation of i with another string of I. We will often write N_t for N_t(τ) when there is no danger of misunderstanding. Note also that N_t(τ) < ∞ a.s. for all t.
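Since the population size at time t is 1 + R N_t and its mean is e^{Rλt} (a standard fact for Markov branching processes), one gets E[N_t] = (e^{Rλt} − 1)/R, which gives a quick sanity check for a simulation of N_t. A sketch (ours; names and parameters are illustrative):

```python
import math
import random

def sample_N(t, lam=1.0, R=2, rng=random):
    """One sample of N_t: total number of branchings up to time t, starting
    from a single ancestor; each particle branches into R + 1 new ones after
    an independent Exp(lam) lifetime."""
    n = 0
    stack = [t]                     # remaining time horizons of live particles
    while stack:
        rem = stack.pop()
        tau = rng.expovariate(lam)
        if tau <= rem:              # this particle branches before the horizon
            n += 1
            stack.extend([rem - tau] * (R + 1))
    return n

rng = random.Random(0)
t, lam, R = 0.5, 1.0, 2
n_samples = 100_000
est = sum(sample_N(t, lam, R, rng) for _ in range(n_samples)) / n_samples
# mean population size is e^{R*lam*t} and equals 1 + R*E[N_t], so
expected = (math.exp(R * lam * t) - 1) / R
```

With these parameters `expected` is about 0.859, and the Monte Carlo estimate agrees to within statistical error.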
We are now ready to construct the random process whose expectation will turn out to solve Equation (2). For h ∈ T and t > 0, let

    X_t(h) := u_0 1_{h_∅ > t} + a ∏_{i=0}^R X_{t−h_∅}(σ_i(h)) 1_{h_∅ ≤ t}.    (4)

Note that the recursion ends after N_t(h) steps. Finally, let X_t := X_t(τ). It is easy to prove that the above iterative definition is equivalent to the much simpler

    X_t = a^{N_t} u_0^{R N_t + 1},    (6)

since each branching contributes one factor a, and the number of particles alive at time t, namely 1 + R N_t, determines the power of u_0. We shall need both, the former in particular being useful when conditioning on the time of the first branching. Let ū(t) be the expected value of X_t.

Proposition 1. If X_t ∈ L^1 for all t, then ū is a solution of Equation (2).

Proof. Since X_t ∈ L^1 there exists E[X_t | τ_∅], a version of which is given by

    E[X_t | τ_∅] = u_0 1_{τ_∅ > t} + a ū(t − τ_∅)^{R+1} 1_{τ_∅ ≤ t},    (8)

and hence, taking expectations and performing the change of variable s ↦ t − s in the resulting integral, ū is a solution of Equation (2).
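When |a u_0^R| < 1 the representation can be tested directly by Monte Carlo: sample N_t by simulating the branching, form a^{N_t} u_0^{R N_t + 1}, and compare the empirical mean with the explicit solution of (1) (the closed form below, obtained via the substitution w = u^R, is our addition):

```python
import math
import random

def sample_N(t, lam, R, rng):
    """Number of branchings up to time t (one ancestor, Exp(lam) lifetimes,
    R + 1 children per branching)."""
    n, stack = 0, [t]
    while stack:
        rem = stack.pop()
        tau = rng.expovariate(lam)
        if tau <= rem:
            n += 1
            stack.extend([rem - tau] * (R + 1))
    return n

def u_exact(t, lam, a, R, u0):
    """Explicit solution of u' = lam*u*(a*u**R - 1), u(0) = u0
    (via w = u**R, which satisfies a logistic equation)."""
    e = math.exp(-R * lam * t)
    return u0 * math.exp(-lam * t) / (1.0 - a * u0**R * (1.0 - e)) ** (1.0 / R)

rng = random.Random(1)
lam, a, R, u0, t = 1.0, 1.0, 2, 0.5, 0.5
n_samples = 100_000
acc = 0.0
for _ in range(n_samples):
    n = sample_N(t, lam, R, rng)
    acc += a ** n * u0 ** (R * n + 1)   # X_t = a^{N_t} u0^{R N_t + 1}
est = acc / n_samples
```

Here a u_0^R = 0.25, so X_t is integrable and the empirical mean of X_t matches u(t) up to Monte Carlo error.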
We now investigate the condition X_t ∈ L^1. Let us consider

    l(t) := E[ |X_t| ] = E[ |a|^{N_t} |u_0|^{R N_t + 1} ].

The same approach as in Proposition 1 leads to

    l(t) = e^{−λt} |u_0| + |a| λ ∫_0^t e^{−λ(t−s)} l(s)^{R+1} ds,

so that l satisfies

    l' = λ l (|a| l^R − 1),    l(0) = |u_0|.    (11)

An elementary study of Equations (1) and (11) tells us that l becomes infinite in finite time whenever |a u_0^R| > 1, whereas u blows up (at the same time as l does) only if a u_0^R > 1, a global solution existing when a u_0^R < −1. This means that ū is defined for all t for which a solution exists if a u_0^R ≥ −1, while this is not true when a u_0^R < −1.
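For completeness, Equation (11) can be integrated explicitly (a routine computation, not spelled out in the original): setting m = l^R reduces it to a logistic equation, which makes the blow-up threshold quantitative.

```latex
% Set m = l^R; then (11) becomes m' = R\lambda\, m\,(|a|\, m - 1),
% m(0) = |u_0|^R, a logistic equation with explicit solution
\[
  m(t) = \frac{|u_0|^R\, e^{-R\lambda t}}{1 - |a|\,|u_0|^R\,\bigl(1 - e^{-R\lambda t}\bigr)} .
\]
% The denominator reaches zero, and hence l blows up, if and only if
% |a|\,|u_0|^R > 1, at the finite time
\[
  t^{*} = \frac{1}{R\lambda}\,\log\frac{|a|\,|u_0|^R}{|a|\,|u_0|^R - 1} .
\]
```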

Resummed representation
The next approach, which is reminiscent of Borel summation, uses a resummation of the expected value of X_t. When it makes sense, define

    ũ(t) := ∫_0^∞ e^{−x} E[ X_t x^{N_t} / N_t! ] dx.

Note that when X_t ∈ L^1 we can exchange integral and expectation; since ∫_0^∞ e^{−x} x^n / n! dx = 1 for every n, in that case ū(t) = ũ(t).
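The mechanism behind such a resummation is easiest to see on the geometric series: formally Σ_k q^k = ∫_0^∞ e^{−x} Σ_k q^k x^k / k! dx, and the inner series is entire in x, so the right-hand side converges to 1/(1 − q) for every q < 1, even when the series on the left diverges. A numerical toy check (ours, unrelated to the specific ϕ_t of the paper):

```python
import math

def borel_geometric(q, x_max=60.0, n=200_000):
    """Borel-style resummation of sum_k q^k: integrate e^{-x} * e^{q x},
    where e^{q x} = sum_k q^k x^k / k! is the (entire) Borel transform of
    the geometric series; trapezoidal rule on [0, x_max]."""
    h = x_max / n
    acc = 0.0
    for k in range(n + 1):
        x = k * h
        w = 0.5 if k in (0, n) else 1.0
        acc += w * math.exp((q - 1.0) * x)
    return acc * h

val = borel_geometric(-2.0)   # sum_k (-2)^k diverges, yet val ~ 1/(1-(-2))
```

The divergent series Σ_k (−2)^k is resummed to 1/3, exactly as the analytic continuation of 1/(1 − q) predicts.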
In fact we will show that the existence of ũ is a weaker condition than the integrability of X_t (Proposition 4), and that it is nevertheless nearly enough to yield an existence result (Proposition 3) similar to Proposition 1. Let

    H_k(t) := E[ X_t 1_{N_t = k} ],

so that

    ϕ_t(x) := E[ X_t x^{N_t} / N_t! ] = Σ_{k≥0} H_k(t) x^k / k!,

and ũ(t) = ∫_0^∞ e^{−x} ϕ_t(x) dx. By Equation (6), H_k(t) is always defined and |H_k(t)| ≤ |u_0| (|a| |u_0|^R)^k, so that ϕ_t(x) is analytic.

Lemma 2.
For all x ≥ 0 and all choices of k_0, . . . , k_R ∈ N one has

    x^{R + k_0 + ··· + k_R} e^{−x} / (R + k_0 + ··· + k_R)! = ∫_Δ ∏_{i=0}^R [ x_i^{k_i} e^{−x_i} / k_i! ] dx_0 · · · dx_{R−1},

where Δ := { (x_0, . . . , x_{R−1}) : x_i ≥ 0, x_0 + ··· + x_{R−1} ≤ x } and x_R := x − x_0 − ··· − x_{R−1}.

Proof. The left-hand side is the density of a Gamma r.v. with parameters R + 1 + Σ_{i=0}^R k_i = Σ_{i=0}^R (k_i + 1) and 1. The right-hand side is the (R+1)-fold convolution of the densities of R + 1 Gamma r.v.'s with parameters k_i + 1 and 1, i = 0, . . . , R, that is, the same.
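The convolution identity is easy to verify numerically in the two-factor case (our sketch; the paper only needs the statement of the lemma):

```python
import math

def gamma_density(x, n):
    """Density of a Gamma(n, 1) random variable at x: x^{n-1} e^{-x} / (n-1)!."""
    return x ** (n - 1) * math.exp(-x) / math.factorial(n - 1)

def convolved(x, k0, k1, n=20_000):
    """Convolution of the Gamma(k0+1, 1) and Gamma(k1+1, 1) densities at x,
    by the trapezoidal rule -- the two-factor (R + 1 = 2) case of Lemma 2."""
    h = x / n
    acc = 0.0
    for j in range(n + 1):
        y = j * h
        w = 0.5 if j in (0, n) else 1.0
        acc += w * gamma_density(y, k0 + 1) * gamma_density(x - y, k1 + 1)
    return acc * h
```

By the lemma, `convolved(x, k0, k1)` should agree with `gamma_density(x, k0 + k1 + 2)` up to quadrature error.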
Proposition 3. Let T > 0 and assume that

    ∫_0^∞ e^{−x} sup_{t∈[0,T]} |ϕ_t(x)| dx < ∞,

so that, in particular, ũ is defined on the whole interval [0, T]. Then ũ solves Equation (2) on the same interval.
Proof. We integrate by parts R − 1 times in the definition of ũ (recall that ϕ_t is analytic). Next we condition on the first branching time as in (8): let N^i_s := N_s(σ_i(τ)) denote the total number of branchings in the i-th subtree during the time interval (τ_∅, τ_∅ + s]; then by construction N_t = 1 + Σ_{i=0}^R N^i_{t−τ_∅} a.s. on {τ_∅ ≤ t}, and the conditional expectation factorizes over the R + 1 independent subtrees. To compute the resulting expectation, we partition according to the values of all the N^i_{t−s} and then apply Lemma 2, obtaining a sum over k_0, . . . , k_R ≥ 0 in which we could exchange integral and sum by Fubini's theorem, because the integration is over a compact domain and the bound in Equation (17) is a continuous function of the x_i's. Changing s into t − s in the integral of Equation (20) and substituting (21) for the expectation, we get Equation (2). Here, in the first passage, we could exchange the order of the integrals in ds and dx thanks to the bound in Equation (19).

Finally we show that for a u_0^R < −1, ũ provides a global solution of (1). This is a region of values for which ū could not be defined.
The numbers Υ(R, k) are the coefficients of the Taylor expansion

    1/(1 − x)^{1/R} = Σ_k Υ(R, k) x^k.    (23)

On the other hand, a simple calculation shows that for R ≥ 2, Υ(R, k) is the k-th moment of a Beta(1/R, 1 − 1/R) random variable, with density

    f(y) = (sin(π/R)/π) y^{1/R − 1} (1 − y)^{−1/R},    y ∈ (0, 1).    (24)

Let Y denote a random variable with this distribution and let x ≥ 0; then, using Equations (23) and (24), the fact that Y ≥ 0 a.s. and the hypothesis a u_0^R < 0,
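This identification can be checked directly: the Taylor coefficients of (1 − x)^{−1/R} are Υ(R, k) = (1/R)(1/R + 1)···(1/R + k − 1)/k!, while the k-th moment of a Beta(1/R, 1 − 1/R) variable equals Γ(1/R + k)/(Γ(1/R) k!). The sketch below (ours, assuming this Beta identification) compares the two:

```python
import math

def upsilon(R, k):
    """Taylor coefficient of (1 - x)^{-1/R}: the generalized binomial
    coefficient (1/R)(1/R + 1)...(1/R + k - 1) / k!."""
    c = 1.0
    for j in range(k):
        c *= (1.0 / R + j) / (j + 1)
    return c

def beta_moment(R, k):
    """k-th moment of a Beta(1/R, 1 - 1/R) random variable:
    E[Y^k] = Gamma(1/R + k) / (Gamma(1/R) * Gamma(1 + k)),
    using that the Beta parameters sum to 1."""
    a = 1.0 / R
    return math.gamma(a + k) / (math.gamma(a) * math.gamma(1.0 + k))
```

The two quantities agree to machine precision for all R ≥ 2 and k ≥ 0.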