Tail asymptotics for the total progeny of the critical killed branching random walk

We consider a branching random walk on $\mathbb{R}$ with a killing barrier at zero. At criticality, the process becomes eventually extinct, and the total progeny $Z$ is therefore finite. We show that the tail distribution of $Z$ decays like $(n\ln^2(n))^{-1}$, which confirms the prediction of Addario-Berry and Broutin.


Introduction
We look at the branching random walk on $\mathbb{R}_+$ killed below zero. Let $b \ge 2$ be a deterministic integer which represents the number of children of the branching random walk, and let $x \ge 0$ be the position of the (unique) ancestor. We introduce the rooted $b$-ary tree $T$, and we attach to every vertex $u$ except the root an independent random variable $X_u$ picked from a common distribution (we denote by $X$ a generic random variable having this distribution). We define the position of the vertex $u$ by $S(u) := x + \sum_{\varnothing < v \le u} X_v$, where $\varnothing$ denotes the root and $v < u$ means that the vertex $v$ is an ancestor of $u$. We say that a vertex (or particle) $u$ is alive if $S(v) \ge 0$ for any ancestor $v$ of $u$, including $u$ itself.
The process can be seen in the following way. At every time $n$, the living particles split into $b$ children. These children make independent and identically distributed steps. The children which enter the negative half-line are immediately killed and leave no descendants. We are interested in the behaviour of the surviving population. At criticality (see below for the definition), the population ultimately dies out. We define the total progeny $Z$ of the killed branching random walk by $Z := \#\{u \in T : S(v) \ge 0 \ \forall\, v \le u\}$.
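To make the model concrete, here is a minimal simulation sketch with assumed parameters that are not taken from the source: $b = 2$ and steps $\pm 1$ with $P(X = +1) = (2-\sqrt{3})/4$, a choice which is critical in the sense defined below. A cap on the population size guards against rare long runs.

```python
import random

def total_progeny(x=1.0, b=2, max_pop=10**6, seed=0):
    """Total progeny Z of the killed branching random walk: every living
    particle branches into b children, each child makes an i.i.d. step,
    and children entering the negative half-line are killed."""
    rng = random.Random(seed)
    p_up = (2 - 3 ** 0.5) / 4   # critical: inf_t E[e^{tX}] = 1/b for b = 2
    alive = [x]                 # positions of the living particles
    z = 1                       # the ancestor itself is counted in Z
    while alive and z < max_pop:
        nxt = []
        for pos in alive:
            for _ in range(b):
                child = pos + (1.0 if rng.random() < p_up else -1.0)
                if child >= 0:  # otherwise killed, no descendants
                    nxt.append(child)
                    z += 1
        alive = nxt
    return z
```

Repeated calls with different seeds give independent samples of $Z$, whose heavy tail is the object of Theorem 1.1.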
Aldous [2] conjectured that in the critical case, $E[Z] < \infty$ and $E[Z\ln(Z)] = \infty$. In [1], Addario-Berry and Broutin proved this conjecture (in a more general setting where the number of children may be random). As stated there, this is a strong hint that $P(Z = n)$ behaves asymptotically like $1/(n^2\ln^2(n))$, which is a typical behaviour of critical killed branching random walks. Here, we look at the tail distribution $P(Z \ge n)$. We mention that branching Brownian motion, which can be seen as a continuous analogue of our model, has already attracted some interest. Kesten [6] and Harris and Harris [5] studied the extinction time of the population, whereas Berestycki et al. [3] obtained a scaling limit of the process near criticality. Maillard [7] investigated the tail distribution of $Z$, and proved that $P(Z = n) \sim \frac{c}{n^2 \ln^2 n}$ as expected.
Before stating our result, we introduce the Laplace transform $\varphi(t) := E[e^{tX}]$ and we suppose that:
• $\varphi(t)$ reaches its infimum at a point $t = \rho > 0$ which belongs to the interior of $\{t : \varphi(t) < \infty\}$;
• the distribution of $X$ is non-lattice.
The second assumption is made for convenience in the proof, but the theorem remains true in the lattice case. The probability that the population lives forever is zero or positive depending on whether $E[e^{\rho X}]$ is less than or greater than the critical value $1/b$. In the present work, we consider the critical branching random walk, which corresponds to the case $E[e^{\rho X}] = 1/b$. For $x \ge 0$, we call $P_x$ the distribution of the killed branching random walk starting from $x$.
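For concreteness, here is a worked example (not from the source) of a step law meeting the criticality condition for $b = 2$; it is lattice, but the non-lattice assumption is only for convenience, as noted above. Take $X = +1$ with probability $p$ and $X = -1$ with probability $1-p$:

```latex
% phi attains its infimum where phi'(rho) = 0, i.e. e^{2\rho} = (1-p)/p:
\varphi(t) = p\,e^{t} + (1-p)\,e^{-t}, \qquad \varphi(\rho) = 2\sqrt{p(1-p)} .
% Criticality phi(rho) = 1/b = 1/2 forces p(1-p) = 1/16, whence
p = \frac{2-\sqrt{3}}{4}, \qquad \rho = \ln\!\left(2+\sqrt{3}\right) .
```

With this choice, $E[e^{\rho X}] = 1/2 = 1/b$, so the process is exactly critical.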
Theorem 1.1 There exist two positive constants $C_1$ and $C_2$ such that for any $x \ge 0$, we have for $n$ large enough
$$ C_1\,\frac{(1+x)e^{\rho x}}{n\ln^2(n)} \;\le\; P_x(Z > n) \;\le\; C_2\,\frac{(1+x)e^{\rho x}}{n\ln^2(n)} . $$
Hence, the tail distribution has the expected order. Nevertheless, the question of finding an asymptotic equivalent for $P(Z = n)$ remains open. As observed in [1], in order to have a large population, a particle of the branching random walk needs to travel far to the right, so that its number of descendants exceeds $n$ with probability large enough (roughly a positive constant). The theorem then follows from the study of the tail distribution of the maximum of the killed branching random walk. By looking at the branching random walk with two killing barriers, we are able to improve the estimates already given in [1].
The paper is organised as follows. Section 2 gives some elementary results for one-dimensional random walks on an interval. Section 3 gives estimates on the first and second moments of the killed branching random walk, while Section 4 contains the asymptotics of the tail distribution of the maximal position reached by the branching random walk before its extinction. Finally, Theorem 1.1 is proved in Section 5.

Results for one-dimensional random walks
Let $R_n = R_0 + Y_1 + \dots + Y_n$ be a one-dimensional random walk and let $P_x$ be the distribution of the random walk starting from $x$. For any $k \in \mathbb{R}$, we define $\tau_k^+$ (resp. $\tau_k^-$) as the first time the walk hits the domain $(k, +\infty)$ (resp. $(-\infty, k)$). All the results of this section are stated under condition (H). The results remain naturally true after renormalization as long as $E[e^{tY_1}]$ is finite on a neighborhood of zero (and $E[Y_1] = 0$). Throughout the paper, the variables $C_1, C_2, \dots$ represent positive constants. We first look at the moments of the overshoot $U_k$ and the undershoot $L_k$, defined respectively by $U_k := R_{\tau_k^+} - k$ and $L_k := k - R_{\tau_k^-}$.
Proof. This is a consequence of Proposition 4.2 in Chang [4].
The following lemma concerns the well-known hitting probabilities of R.
as $k \to \infty$. Moreover, there exist two positive constants $C_4$ and $C_5$ such that, for any real $k \ge 0$ and any $z \in [0, k]$, we have
Proof. Let $k > 0$ and $x \in [0, k]$. By Lemma 2.1, we are allowed to apply the optional stopping theorem to $(R_n, n \le \min(\tau_0^-, \tau_k^+))$, and we get
We can rewrite this as
where $A_1$ and $A_2$ are nonnegative and defined by
By the Cauchy-Schwarz inequality and Lemma 2.1, we observe that
Since $P_x(\tau_k^+ < \tau_0^-)$ goes to zero as $k$ tends to infinity, we deduce that
By dominated convergence, we also have
Similarly,
We notice also that
Thus equation (2.2) holds with $C_5 := C_8$ and $C_4 := C_{11}$.
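The order of magnitude of these hitting probabilities can be checked numerically. The sketch below is an illustration only, not the paper's walk: Gaussian steps are an arbitrary centered choice with all exponential moments. It estimates $P_x(\tau_k^+ < \tau_0^-)$ by Monte Carlo and exhibits the classical gambler's-ruin behaviour of order $x/k$ (up to overshoot corrections):

```python
import random

def exits_at_top(x, k, rng):
    """One trajectory of a centered walk started at x: True iff it
    enters (k, +inf) before (-inf, 0)."""
    pos = x
    while 0 <= pos <= k:
        pos += rng.gauss(0.0, 1.0)  # centered step, all exponential moments
    return pos > k

rng = random.Random(1)
k = 40.0
estimates = {}
for x in (5.0, 10.0, 20.0):
    runs = 2000
    estimates[x] = sum(exits_at_top(x, k, rng) for _ in range(runs)) / runs
print(estimates)  # roughly proportional to x/k
```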
Throughout the paper, we will write $\Delta_k(1)$ for any function such that $D_1 \le \Delta_k(1) \le D_2$ for some positive constants $D_1$ and $D_2$ and $k$ large enough. The following lemma provides us with the estimates used to compute the moments of the branching random walk in Sections 3 and 4.

Lemma 2.3
We have for any $x > 0$,
Proof. First, let us explain how these estimates can be found intuitively. The terms of the sum within the expectation are big when $R_\ell$ is close to $0$, and the time that the random walk spends in the neighborhood of $0$ before hitting level $0$ is roughly a constant. Moreover, by Lemma 2.1, we know that the overshoot $U_k$ and the undershoot $L_0$ behave like constants. From here, we can deduce the different estimates. In (2.4), the optimal path makes the particle stay a constant time near zero and then hit level $k$, which has cost $1/k$. In (2.5), the particle first goes close to $0$, which gives a term in $(1+x)/k$, then goes back to level $k$, which gives a term in $1/k$. Finally, looking at (2.6), we see that the particle goes directly to $0$, which brings a term of order $k$ because of the sum, and a term of order $(1+x)/k$ because of the cost of hitting $0$ before $k$. The proofs of the three equations being rather similar, we restrict our attention to the proof of (2.4) for the sake of concision.
We introduce the function $g(z) := e^{-z}(1+z)$ and we observe that $g$ is decreasing. Let also
Let $a > 0$ be such that $P(Y_1 > a) > 0$ and $P(Y_1 < -a) > 0$. For ease of notation, we suppose that we can take $a = 1$. For any integer $i$ such that $0 \le i < k$, we denote by $I_i$ the interval $[i, i+1)$, and we define
which respectively stand for the first time the walk enters $I_i$ and the number of visits to the interval before hitting level $k$ or level $0$. We observe that
Let $i$ be an integer between $1$ and $k-1$, and let $z \in [i, i+1)$. We have
We use the Markov property to get
By equation (2.1) of Lemma 2.2 (applied to $-R$), there exists a positive constant $C_{12}$ such that
When $i \le k/2$ (and $z \in [i, i+1)$), we notice that
where the last two inequalities come from Lemmas 2.1 and 2.2. For $i \ge k/2$, we simply write
Therefore, we have for any $i \le k$,
We obtain that for any integer $i$ between $1$ and $k-1$ and any $z \in I_i$,
by (2.7) and (2.8). We have to deal with the extreme cases $i = 0$ and $i > k-1$. For $z \in I_0$, we see that $P_z(T_i > \min(\tau_0^-, \tau_k^+)) \ge P(Y_1 < -1)$, which yields, by the same reasoning as before,
Similarly ($\lfloor k \rfloor$ being the largest integer not exceeding $k$),
Therefore, (2.9) still holds for any integer $i \in [0, k)$, as long as $C_{17}$ is taken large enough. By the strong Markov property, we deduce that
This gives the following upper bound for $A$:
In particular,
with $C_{20} := C_{17} \sum_{i \ge 0} (i+1)^3 g(i+1)$. This proves the upper bound of (2.4). For the lower bound, we write (beware that $U_k \ge 0$)
We apply (2.1) to get the lower bound of (2.4).

Some moments of the killed branching random walk
For any $a \ge 0$ and any integer $n$, we call $Z_n(a)$ the number of particles who hit level $a$ for the first time at time $n$,
$$Z_n(a) := \#\{|u| = n : \tau_a^-(u) = n\},$$
where, for any $a$, $\tau_a^-(u)$ is the hitting time of $(-\infty, a)$ by the particle $u$. We notice that particles counted in $Z_n(a)$ can be dead at time $n$, but their father at time $n-1$ is necessarily alive. Let also $Z(a) := \sum_{n \ge 0} Z_n(a)$.
Similarly, for any $k > a \ge 0$ and any integer $n \ge 0$, we introduce
$$Z_n(a, k) := \#\{|u| = n : n = \tau_a^-(u) < \tau_k^+(u)\},$$
where $\tau_k^+(u)$ is the hitting time of $(k, +\infty)$ by the particle $u$. In words, $Z_n(a, k)$ stands for the number of particles who hit level $a$ at time $n$ and did not touch level $k$ before.
We denote by $S_n = X_0 + X_1 + \dots + X_n$ the random walk whose steps are distributed like $X$. We define the probability $Q_y$ as the probability which verifies, for every $n$,
$$\frac{dQ_y}{dP_y}\Big|_{\sigma(X_0, \dots, X_n)} = b^n e^{\rho(S_n - y)} . \qquad (3.1)$$
Under $Q_y$, the random walk $S_n$ is centered and starts at $y$: indeed, $E[b\,e^{\rho X}] = b\,\varphi(\rho) = 1$, and the mean of a step under $Q_y$ is $b\,E[X e^{\rho X}] = b\,\varphi'(\rho) = 0$, since $\rho$ is an interior minimizer of $\varphi$.
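The centering of the walk under $Q_y$ can be verified numerically for a concrete step law. The snippet below uses an assumed two-point example, not from the source: $b = 2$ and $X = \pm 1$ with $P(X = +1) = (2-\sqrt{3})/4$, for which the minimizer is $\rho = \ln(2+\sqrt{3})$.

```python
import math

# Hypothetical two-point step distribution: X = +1 w.p. p, -1 w.p. 1-p,
# with b = 2 and p chosen so that inf_t E[e^{tX}] = 1/b (criticality).
b = 2
p = (2 - math.sqrt(3)) / 4
rho = 0.5 * math.log((1 - p) / p)     # minimiser of phi(t) = p e^t + (1-p) e^{-t}

phi_rho = p * math.exp(rho) + (1 - p) * math.exp(-rho)
assert abs(phi_rho - 1 / b) < 1e-12   # E[e^{rho X}] = 1/b

# Each step is reweighted by b e^{rho x} under Q; the tilted mean vanishes:
tilted_mean = b * (p * math.exp(rho) * 1 + (1 - p) * math.exp(-rho) * (-1))
assert abs(tilted_mean) < 1e-12       # the Q-walk is centered
print(rho)  # ln(2 + sqrt(3)), about 1.317
```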

Proposition 3.1
We have for any $x \ge 0$ and any $a \ge 0$,
Proof. Let $y$ be any real in $[0, k]$ and let $a \in [0, y]$. We observe that

Summing over n leads to
Suppose that $y > k/2$. We observe that
We know by Lemma 2.1 that $\sup_{\ell \le 0} E_0^Q[e^{\rho L_\ell}] \le C_{22}$. We deduce that
We use Lemma 2.2 (applied to $R_\ell = k - S_\ell$) to see that, for $k$ greater than some constant $K(a)$ (whose value may change during the proof),
For $y \le k/2$, we see that
We deduce the existence of a constant $C_{24}$ such that for any $0 \le a \le y \le k$ and any
Therefore, using (3.5), we get that
Equations (3.6) and (3.7) give (3.2) by taking $y = k$. We turn to the proof of (3.3) and (3.4).
We decompose $Z(a, k)$ along the particle $u$ to get
where $u_\ell$ is the ancestor of $u$ at time $\ell$ and $Z^{u_\ell}(a, k)$ is the number of descendants $v$ of $u_\ell$ at time $n$ which are not descendants of $u_{\ell+1}$ and such that $n = \tau_a^-(v) < \tau_k^+(v)$. In particular, $S(u_\ell) \ge a$. This decomposition leads to
Then equation (3.8) becomes
where we used the change of measure from $P_y$ to $Q_y$ defined in (3.1). Take $y = k$. It implies that
We apply equation (2.4) of Lemma 2.3 to the walk $R_\ell := \rho(k - S_\ell)$ to get (3.3). If we take $y = x$, we obtain
and we apply (2.5) of Lemma 2.3 to complete the proof of (3.4).

Tail distribution of the maximum
We are interested in the large deviations of the maximum $M$ of the branching random walk before its extinction,
$$M := \max\{S(u) : u \in T, \ S(v) \ge 0 \ \forall\, v \le u\} .$$
To this end, we introduce
$$H(k) := \#\{u \in T : S(u) > k, \ 0 \le S(v) \le k \ \forall\, v < u\} .$$
The variable $H(k)$ is the number of particles of the branching random walk on $[0, k]$ with two killing barriers which were absorbed at level $k$.
Proposition 4.1 shows that $H(k)$ is strongly concentrated. Our result on the maximal position reads as follows.

Corollary 4.2 The tail distribution of M verifies
Proof. The corollary follows easily from the following inequalities.
We turn to the proof of Proposition 4.1. Since it is very similar to the proof of Proposition 3.1, we allow ourselves to skip some of the details.

Proof of Proposition 4.1. We verify that
On the other hand, observe that
We see that
for some $\varepsilon(M)$ which goes to zero as $M$ goes to infinity, by Lemma 2.1. Therefore,
for $M$ large enough. Equations (4.3), (4.4) and (4.5) give (4.1). We then look at the second moment of $H(k)$. As before (see (3.9)), we can write
We apply (2.6) of Lemma 2.3 to complete the proof.

Proof of Theorem 1.1
Proof of Theorem 1.1: lower bound. Let $a \in (0, x)$. We observe that
$$P_x(Z > n) \ge P_x(M \ge k)\, P_k(Z(k, a) > n) .$$
By the choice of $k$, we notice that
$$P_k(Z(k, a) > n) \ge P_k\Big(Z(k, a) > \tfrac{E[Z(k, a)]}{2}\Big) .$$
We turn to the proof of the upper bound. We recall that Z(0) represents the number of particles who hit the domain (−∞, 0).
Proof of Theorem 1.1: upper bound. First, we notice that $Z(0) = 1 + (b-1)Z$. Indeed, $Z(0)$ is the number of leaves of a tree of size $Z + Z(0)$, in which any vertex has either zero or $b$ children. Therefore, it is equivalent to find an upper bound for $P_x(Z(0) > n)$. For any $k$, we have
$$P_x(Z(0) > n) \le P_x(M < k,\ Z(0, k) > n) + P_x(M \ge k) \le P_x(Z(0, k) > n) + P_x(M \ge k) .$$
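The identity $Z(0) = 1 + (b-1)Z$ can be sanity-checked by simulation. The sketch below uses assumed illustrative parameters, not from the source ($b = 2$ and critical two-point $\pm 1$ steps); the identity itself is deterministic, since every vertex of the killed tree has either zero or $b$ children.

```python
import random

def progeny_counts(x, b, p_up, rng):
    """Run the killed BRW to extinction; return (Z, Z0), where Z counts
    particles that stay nonnegative and Z0 counts killed particles."""
    alive = [x]          # positions of living particles
    z, z0 = 1, 0         # the ancestor itself belongs to Z
    while alive:
        nxt = []
        for pos in alive:
            for _ in range(b):
                child = pos + (1.0 if rng.random() < p_up else -1.0)
                if child >= 0:
                    nxt.append(child)
                    z += 1
                else:
                    z0 += 1      # absorbed below zero: a leaf of the tree
        alive = nxt
    return z, z0

rng = random.Random(2)
b = 2
p_up = (2 - 3 ** 0.5) / 4        # critical choice: inf_t E[e^{tX}] = 1/b
for _ in range(5):
    z, z0 = progeny_counts(2.0, b, p_up, rng)
    assert z0 == 1 + (b - 1) * z # leaf count of a tree with 0 or b children
```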