Branching random walks in random environment are diffusive in the regular growth phase

We treat branching random walks in random environment using the framework of Linear Stochastic Evolution. In spatial dimensions three or larger, we establish diffusive behaviour in the entire growth phase. This can be seen through a Central Limit Theorem with respect to the population density as well as through an invariance principle for a path measure we introduce.


Background
Branching random walks (and their continuous-time counterpart, branching Brownian motion) were treated, with a central limit theorem (CLT) as the main result, by Watanabe in [Wat67] and [Wat68]. Smith and Wilkinson introduced the notion of a random (in time) environment for branching processes in [SW69], and in 1972 the book by Athreya and Ney [AN72] appeared, giving an excellent overview of the knowledge of the time.
A closely related model, directed polymers in random environment (DPRE), has been studied since the eighties, when the question of diffusivity was treated by Imbrie and Spencer [IS88] as well as by Bolthausen [Bol89]. A review can be found in [CSY04].
It took until the new millennium for the time-space random environment known from DPRE to be applied to branching random walks, by Birkner, Geiger and Kersting [BGK05]. A CLT in probability is proven in [Yos08a] and improved to an almost sure statement in [Nak11] with the help of Linear Stochastic Evolutions (LSE), which were introduced in [Yos08b] and [Yos10]. Linear stochastic evolutions provide a framework for a variety of models, including DPRE. For LSE, the CLT was proven in [Nak09]. Shiozawa treats the continuous-time counterpart, namely branching Brownian motion in random environment [Shi09a, Shi09b].
The present article uses [CY06], which proves a CLT for DPRE, as a blueprint; the wider angle of view allowed by the LSE provides the crucial ingredients for our result, which is a CLT on the event of survival in the entire regular growth phase, albeit under integrability conditions slightly more restrictive than those of [Nak11]. Compared to the case of DPRE, the necessary notational overhead is unfortunately significantly bigger. Speaking of DPRE, it is possible to extend the results of [CY06] to the case where completely repulsive sites are allowed, using the same conditioning techniques as here.
A localization result in the slow growth phase is proven by two of the authors of the present work in [HN11].

Branching random walks in random environment
We denote the natural numbers by N_0 = {0, 1, 2, ...} and N = {1, 2, ...}. We will need at various places sets of probability measures, which we write as 𝒫(·); for instance, 𝒫(N_0) stands for the set of probability measures on N_0. We consider particles in Z^d, d ≥ 1, each performing a simple random walk and branching into independent copies at each time-step.
i) At time n = 0, there is one particle born at the origin x = 0.
ii) A particle born at site x ∈ Z^d at time n ∈ N_0 is equipped with k eggs with probability q_{n,x}(k), k ∈ N_0, independently of other particles.
iii) In the next time step, it takes its k eggs to a uniformly chosen nearest-neighbour site and dies. The eggs then hatch.
The offspring distributions q_{n,x} = (q_{n,x}(k))_{k∈N_0} are assumed to be i.i.d. in time-space (n, x). This model is called Branching Random Walk in Random Environment (BRWRE). Let N_{n,y} be the number of particles which occupy the site y ∈ Z^d at time n.
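For intuition, the three rules above can be sketched in a small simulation. This is a minimal illustrative sketch, not the construction used in the article: as a concrete i.i.d. environment Q, each time-space site (n, x) draws its offspring law uniformly from two fixed distributions; the laws, the dimension and the time horizon are all arbitrary choices.

```python
import random
from collections import defaultdict

# Two possible offspring laws; each time-space site (n, x) picks one
# uniformly at random -- an illustrative i.i.d. environment, not the
# general Q of the article.
LAWS = [
    {0: 0.2, 1: 0.3, 2: 0.5},   # mean 1.3
    {0: 0.4, 1: 0.4, 2: 0.2},   # mean 0.8
]

def sample(law, rng):
    """Draw an offspring number k from a law given as {k: probability}."""
    u, acc = rng.random(), 0.0
    for k, p in law.items():
        acc += p
        if u < acc:
            return k
    return max(law)

def step(population, d, rng):
    """One BRWRE step: every particle at site x draws its egg number k
    from q_{n,x} and carries all k eggs to one uniform nearest
    neighbour, where they hatch."""
    env = defaultdict(lambda: rng.choice(LAWS))   # q_{n,x}, fresh each step
    nxt = defaultdict(int)
    for x, count in population.items():
        law = env[x]
        for _ in range(count):                    # each particle branches
            k = sample(law, rng)
            i, s = rng.randrange(d), rng.choice((-1, 1))
            y = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
            nxt[y] += k
    return {y: c for y, c in nxt.items() if c > 0}

rng = random.Random(1)
d = 3
pop = {(0,) * d: 1}               # one particle born at the origin
for n in range(5):
    pop = step(pop, d, rng)
N_n = sum(pop.values())           # total population at time n = 5
```

Note that, matching rule iii), all k eggs of a particle travel together to the same uniformly chosen neighbour.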
For the proofs in this article, a modelling down to the level of individual particles is needed. First, we define name spaces 𝕀_n, n ∈ N_0, for the n-th generation particles, and 𝕀 := ⋃_{n∈N_0} 𝕀_n for the particles of all generations together. Then, we label all particles as follows:
i) At time n = 0, there is just one particle, which we call 1 = (1) ∈ 𝕀_0.
By using this naming procedure, we define the branching of the particles rigorously. This definition is based on the one in [Yos08a].
Note that the particle with name 𝐱 can be located anywhere in Z^d. As both pieces of information, genealogy and place, are usually needed together, it is convenient to combine them into x̂ = (x, 𝐱); think of x and 𝐱 written very closely together.
• Random environment of offspring distributions: We define Ω_q as the set of all maps from N_0 × Z^d to 𝒫(N_0), the latter carrying the natural Borel σ-field induced by the one of [0, 1]^{N_0}. We denote by ℱ_q the product σ-field on Ω_q.
We fix a product measure Q ∈ 𝒫(Ω_q, ℱ_q) which describes the i.i.d. offspring distributions assigned to each time-space location.
Each environment q ∈ Ω_q is a function (n, x) ↦ q_{n,x} = (q_{n,x}(k))_{k∈N_0} from N_0 × Z^d to 𝒫(N_0). We interpret q_{n,x} as the offspring distribution for each particle which occupies the time-space location (n, x).
• Spatial motion: A particle at time-space location (n, x̂) jumps to some neighbouring location (n + 1, ŷ) before it is replaced by its children there. Therefore, the spatial motion should be described by assigning a destination to each particle at each time-space location. We define the measurable space (Ω_X, ℱ_X) as the set (Z^d)^{N_0 × (Z^d × 𝕀)} with the product σ-field, and X_{n,x̂}: Ω_X → Z^d, for each (n, x̂) ∈ N_0 × (Z^d × 𝕀), as the corresponding projection. We define P_X ∈ 𝒫(Ω_X, ℱ_X) as the product measure under which each X_{n,x̂} is uniformly distributed on the nearest neighbours of the origin. Here, we interpret X_{n,x̂} as the step at time n + 1 if the particle 𝐱 is located at space location x.
• Offspring realization: We define the measurable space (Ω_K, ℱ_K) as the set N_0^{N_0 × (Z^d × 𝕀)} with the product σ-field, and K_{n,x̂}: Ω_K → N_0, for each (n, x̂) ∈ N_0 × (Z^d × 𝕀), as the projection. For each fixed q ∈ Ω_q, we define P^q_K ∈ 𝒫(Ω_K, ℱ_K) as the product measure under which K_{n,x̂} is distributed according to q_{n,x}. We interpret K_{n,x̂} as the number of eggs of the particle 𝐱 if it is located at time-space location (n, x); one could directly speak of its children as well.
The first steps of such a BRWRE are shown in Figure 1.
Putting everything together, we arrive at the
• Overall construction: We define (Ω, ℱ) as the product Ω := Ω_q × Ω_X × Ω_K with the product σ-field ℱ, carrying, for q ∈ Ω_q, the probability measure P given by P(dq dX dK) := Q(dq) P_X(dX) P^q_K(dK). Now that the BRWRE is completely modeled, we can have a look at where the particles are. This enables the
• Placement of BRWRE into the framework of Linear Stochastic Evolutions: We set the starting condition N_{0,ŷ} = 1_{ŷ=(0,1)}. Then, defining the matrices (A_n)_n via their entries in the manner indicated below, we can describe N_{n,ŷ} inductively by N_{n,ŷ} = Σ_{x̂} N_{n−1,x̂} A_n(x̂, ŷ), where, for names 𝐱, 𝐲 ∈ 𝕀, 𝐲/𝐱 denotes that 𝐲 is a child name of 𝐱. One-site and overall population can be defined respectively as N_{n,y} := Σ_{𝐲} N_{n,(y,𝐲)} and N_n := Σ_{y∈Z^d} N_{n,y}, for n ∈ N_0, y ∈ Z^d. Other quantities needed later are the moments of the local offspring distributions, m^{(p)}_{n,x} := Σ_{k∈N_0} k^p q_{n,x}(k) for n ∈ N_0 and x ∈ Z^d, p ∈ N_0, their environment averages m^{(p)} := Q[m^{(p)}_{0,0}], m := m^{(1)}, and the normalized one-site and overall populations N̄_{n,y} := N_{n,y}/m^n and N̄_n := N_n/m^n, n ∈ N_0, y ∈ Z^d.
It is easy to see that the expectation a(x̂, ŷ) := P[A_n(x̂, ŷ)] of the matrix entries, which is an important parameter in the setting of LSE, can be computed explicitly for x̂, ŷ ∈ Z^d × 𝕀. Taking sums over ŷ, we obtain the corresponding row sums.

Preliminaries
In this and the following subsection, we gather properties of BRWRE that are already known. First, we introduce the Markov chain (Ŝ, P^x̂_Ŝ) = ((S, 𝐒), P^{(x,𝐱)}_Ŝ) on Z^d × 𝕀, whose transition probability from (x, 𝐱) to (y, 𝐲) is positive if |x − y| = 1 and 𝐲/𝐱 = k for some k ∈ N_0, and vanishes otherwise.
Note that we can regard S and 𝐒 as independent Markov chains on Z^d and 𝕀, respectively, with S the simple random walk on Z^d.
Next, we introduce a process (ζ_n)_{n∈N_0}, defined in (1.2), which is essential to the proof of our results.

Lemma 1.3.1. ζ_n is a martingale with respect to the filtration given there. Moreover, we have that N_{n,ŷ} = m^n P^{(0,1)}_Ŝ(ζ_n ; Ŝ_n = ŷ), P-a.s., for n ∈ N_0, ŷ ∈ Z^d × 𝕀.

From this Lemma follows an important result: the next Lemma shows that a phase transition occurs for the growth rate of the total population.

Lemma 1.3.3. N̄_n is a martingale with respect to ℱ_n := σ(A_m : m ≤ n). Hence, the limit N̄_∞ := lim_n N̄_n exists P-a.s.

The proofs of Lemmas 1.3.1 and 1.3.3 can be found in [Nak11].
We refer to the case P[N̄_∞] = 1 as the regular growth phase and to the other one, P[N̄_∞] = 0, as the slow growth phase. In the regular growth phase, the growth rate of the total population has the same order as the growth rate m^n of its expectation; in the slow growth phase, the population almost surely grows at a lower rate than its expectation.
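Lemma 1.3.3 says in particular that the normalized total population N_n/m^n has constant expectation one. A quick Monte Carlo sketch of this fact, in a simplified caricature in which every particle draws its own offspring law i.i.d. (this changes the correlation structure of the BRWRE, but not the expectation, which is all we check here); the laws and parameters are arbitrary choices:

```python
import random

# Offspring laws (probabilities of k = 0, 1, 2 eggs) and the
# environment-averaged mean m; the concrete numbers are arbitrary.
LAWS = [(0.2, 0.3, 0.5), (0.4, 0.4, 0.2)]
m = sum(sum(k * p for k, p in enumerate(law)) for law in LAWS) / len(LAWS)

def total_population(n, rng):
    """Total population after n generations; every particle draws its
    own offspring law i.i.d. -- a caricature with the same expectation
    m**n as the BRWRE, though a different correlation structure."""
    pop = 1
    for _ in range(n):
        nxt = 0
        for _ in range(pop):
            law = rng.choice(LAWS)
            u, acc, k = rng.random(), 0.0, 0
            for kk, p in enumerate(law):
                acc += p
                if u < acc:
                    k = kk
                    break
            nxt += k
        pop = nxt
    return pop

rng = random.Random(0)
n, trials = 5, 20000
est = sum(total_population(n, rng) for _ in range(trials)) / trials / m ** n
# est is a Monte Carlo estimate of the expectation of N_n / m^n,
# which equals one.
```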
One can also introduce the notions of 'survival' and 'extinction'.

Definition 1.3.4. The event of survival is the existence of particles at all times, {survival} := {N_n > 0 for all n ∈ N_0}.
The extinction event is the complement of survival.

The result
Definition 1.4.1. An important quantity of the model is the population density ρ_n(y) := N_{n,y}/N_n, y ∈ Z^d, which, on the event {N_n > 0}, is a probability measure supported on Z^d. Our main result is the following CLT, proven as Corollary 2.2.4 of the invariance principle Theorem 2.2.2.

Theorem 1.4.2. Assume d ≥ 3 and regular growth, as well as the moment condition m^{(3)} < ∞. Then, for all bounded continuous F: R^d → R,
lim_{n→∞} Σ_{y∈Z^d} (N_{n,y}/N_n) F(y/√n) = ∫ F dν, in P(·|survival)-probability,
where ν stands for the Gaussian measure with mean 0 and covariance matrix (1/d) I.

Remark 1.4.3. The hypothesis d ≥ 3 is in fact not necessary, because in dimensions one and two, regular growth cannot occur. Instead of a CLT, localized behaviour is then observed; see [HY09, HN11].
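Assuming the standard definition ρ_n(y) = N_{n,y}/N_n indicated in Definition 1.4.1, the population density can be computed directly from the occupation numbers; the counts below are hypothetical, chosen only for illustration:

```python
def population_density(counts):
    """Population density: site -> N_{n,y} / N_n, defined on the event
    {N_n > 0}; a probability measure supported on Z^d."""
    total = sum(counts.values())
    if total == 0:
        raise ValueError("density undefined on the extinction event")
    return {y: c / total for y, c in counts.items()}

# Hypothetical occupation numbers N_{3,y} at time n = 3:
counts = {(1, 0, 0): 4, (-1, 2, 0): 2, (1, -2, 0): 2}
rho = population_density(counts)
```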
It is the following equivalence, recently proven as [CY, Proposition 2.2.2], that enables us to speak easily of P(·|survival)-probability: {regular growth} := {N̄_∞ > 0} = {survival}, P-a.s. [CY] also handles the case of slow growth. Using this, we can, with m ≤ n, express µ_n on a finite time-horizon, see (2.1). Note that for B ∈ ℱ^1_∞, the limit exists P-a.s. because of the martingale limit theorem applied to P_Ŝ(ζ_n ; B), which is indeed a positive martingale with respect to the filtration (ℱ_n)_n, as can easily be checked; for N̄_n, see Lemma 1.3.3.
Remark 2.1.2. We can write, for B ∈ ℱ^1_n, the measure µ_n in the form below, where P_S denotes the measure of a simple random walk. The reader who cares to return to the lower part of Figure 1 will be rewarded with an intuitive picture of how we can run our BRW up to time n = 3 and plug in the shifted processes there, indicated by the dotted cones.
In order to prove this Lemma, we need the following observation.

Proof. We first prove the first equality. For δ > 0, we split according to whether N̄_n ≤ δ and estimate the two parts separately. On the other hand, as N̄_n^{-1} converges P-a.s., the distributions are tight, and lim_{δ→0} sup_n P(N̄_n ≤ δ) = 0.
The second equality follows directly by an application of dominated convergence.
Proof of Lemma 2.1.4. The statement (2.2) is in some sense an affirmation of well-definedness. The proof consists in verifying that Pµ_∞ is finitely additive, that Pµ_∞(Ω_1 × Ω_2) = 1, and that it is σ-additive. The first two points are quite obvious, and the third one is a trivial application of the preceding Lemma 2.1.5, as is the absolute continuity (2.3).
In the following Proposition, we introduce the variational norm ‖ν − ν′‖ between probability measures ν and ν′. This norm will be applied to µ_{n+m}(· × Ω_2) and µ_∞(· × Ω_2), which are indeed, P-a.s., probability measures on ℱ^1_r because of the finiteness of ℱ^1_r, for all r, m, n ∈ N_0.
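As a concrete sketch of a variational norm between two probability measures on a countable set, one common convention (an assumption here; the paper's normalization may differ by a factor 1/2) is ‖ν − ν′‖ = Σ_x |ν(x) − ν′(x)|:

```python
def variation_norm(nu1, nu2):
    """Variational distance sum_x |nu1(x) - nu2(x)| between two
    probability measures given as dicts {point: mass}; the convention
    without the factor 1/2 is assumed here."""
    support = set(nu1) | set(nu2)
    return sum(abs(nu1.get(x, 0.0) - nu2.get(x, 0.0)) for x in support)

nu1 = {0: 0.5, 1: 0.5}
nu2 = {0: 0.25, 1: 0.25, 2: 0.5}
dist = variation_norm(nu1, nu2)   # 0.25 + 0.25 + 0.5 = 1.0
```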
Proposition 2.1.6. In the regular growth phase, sup_n P‖µ_{n+m}(· × Ω_2) − µ_∞(· × Ω_2)‖ vanishes as m → ∞.

Proof. From (2.1) and its analogue for µ_∞, we obtain, for n, m ≥ 0, a decomposition into two terms. Note that in the first of the right-hand terms, the denominator is cancelled out against the factor P_Ŝ(ζ_n N̄_{n,Ŝ_n}); so, as N̄_n converges in L^1(P), the P-expectation of the first term vanishes as m → ∞, and the second one yields the stated bound. This proves the claim for fixed n. Now, we use the same trick with the Chebyshev inequality that gives us an N̄_∞ in front of the norm as in Lemma 2.1.4: we control δ and m appropriately, independently of n.

The main statements
Definition 2.2.1. For n ≥ 1, the rescaling of the path S is defined by S^{(n)}_t := S_{nt}/√n, t ≥ 0 (with linear interpolation between integer times). Furthermore, we will denote by W the Wiener space, equipped with the topology induced by the supremum norm. The probability space (W, ℱ_W, P_W) features the Borel σ-algebra ℱ_W and the Wiener measure P_W; we will be using a Wiener process W = (W_t)_{t≥0} on this probability space.

Theorem 2.2.2. Assume d ≥ 3, regular growth and m^{(3)} < ∞. Then µ_n(S^{(n)} ∈ ·) converges weakly to P_W(W/√d ∈ ·), in P-probability.
Remark 2.2.3. This is equivalent to L^p(P)-convergence for any finite p.
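The rescaling in Definition 2.2.1 can be checked numerically: by Donsker's theorem, each coordinate of S^{(n)}_1 = S_n/√n should have mean close to 0 and variance close to 1/d, matching the law of W_1/√d. A Monte Carlo sketch, with all parameters arbitrary:

```python
import random

def srw_endpoint(n, d, rng):
    """Endpoint S_n of a simple random walk on Z^d started at 0."""
    pos = [0] * d
    for _ in range(n):
        pos[rng.randrange(d)] += rng.choice((-1, 1))
    return pos

rng = random.Random(42)
n, d, trials = 100, 3, 10000
# Diffusive rescaling of the endpoint, S^{(n)}_1 = S_n / sqrt(n); each
# coordinate has mean 0 and variance exactly 1/d, so the empirical
# moments should be close to these values.
samples = [srw_endpoint(n, d, rng)[0] / n ** 0.5 for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
```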
This Theorem implies the following CLT:

Corollary 2.2.4. Under the same assumptions as in the Theorem, for all bounded continuous F: R^d → R,
lim_{n→∞} Σ_{y∈Z^d} (N_{n,y}/N_n) F(y/√n) = ∫ F dν, in P(·|survival)-probability,
where ν designates the Gaussian measure with mean 0 and covariance matrix (1/d) I.
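The covariance matrix (1/d) I of ν reflects a single step of the simple random walk: the step equals ±e_i with probability 1/(2d) each, so E[X_i X_j] = δ_{ij}/d. A short exact enumeration (here with d = 3):

```python
from fractions import Fraction

d = 3
# The 2d unit steps of the simple random walk on Z^d, each taken with
# probability 1/(2d).
steps = [tuple((s if j == i else 0) for j in range(d))
         for i in range(d) for s in (1, -1)]
p = Fraction(1, 2 * d)

# Exact one-step covariance matrix: E[X_i X_j] = delta_ij / d.
cov = [[sum(p * x[i] * x[j] for x in steps) for j in range(d)]
       for i in range(d)]
```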

Some easier analogue of the main Theorem
The following Proposition is not needed for the proof of our result. We nevertheless propose it to the reader's attention because its proof is much easier than that of Theorem 2.2.2.

Proposition 2.3.1. In the regular growth phase,
Pµ_n(S^{(n)} ∈ ·) → P_W(W/√d ∈ ·), weakly, (2.6)
Pµ_∞(S^{(n)} ∈ ·) → P_W(W/√d ∈ ·), weakly. (2.7)

The following notation will prove useful.
Definition 2.3.2. We define, for w ∈ W:

Proof of Proposition 2.3.1. The second statement is easier to prove. We attack it first, and use it later to handle the first one.
Two ingredients from outside this article will help us to prove (2.7). First, (2.7) is equivalent to the statement (2.8) that Pµ_∞(F(S^{(n)})) converges to 0 for every centered F ∈ BL(W). One of the key ideas of the proof is that in the last line, due to (2.3), we can replace 'P_S-a.s.' by 'Pµ_∞(· × Ω_2)-a.s.', and the statement still holds.
This enables us to prove (2.8) by contradiction. Assume that (2.8) does not hold. Then there is some subsequence with a_{m_l} := Pµ_∞(F(S^{(m_l)})) > c > 0 (or < c < 0). This sequence is bounded, so it has a convergent subsequence a_{m_{l_k}}, which can be chosen such that n_k := m_{l_k} satisfies inf_{k≥1} n_{k+1}/n_k > 1.
To these n_k, we apply (2.9) and integrate with respect to Pµ_∞. By dominated convergence, we can exchange integration and limit. But this is a contradiction to the assumption that all the Pµ_∞(F(S^{(n_k)})) = Pµ_∞(F(S^{(m_{l_k})})) > c (or < c). So we conclude that (2.8) does hold, indeed.

Now, it remains to prove (2.6) with the help of (2.7). We need to show the analogue of (2.8):
lim_{n→∞} Pµ_n(F(S^{(n)})) = 0 for all F ∈ BL(W). (2.10)

For 0 ≤ k ≤ n, we add some telescopic terms. We apply what we just proved, i.e. (2.8), and conclude that the last line vanishes for fixed k as n → ∞. The middle one does the same due to Proposition 2.1.6. As for the first line, we note that F is uniformly continuous. Hence, (2.10) holds, so that we conclude (2.6) and thus the Proposition.

The real work
In order to prove the statement of Theorem 2.2.2 'in probability', we take the path via 'L^2'. While the procedure is basically the same as in the last section, the notation becomes much more complicated. As a start, we take a copy of our path S.

Definition 2.4.1. Let (S̃, P_S̃) be an independent copy of (S, P_S), defined on a copy (Ω̃ = Ω̃_1 × Ω̃_2, ℱ̃) of the underlying probability space; later on, we will similarly use independent copies indexed by i = 1, 2, 3, 4. Similarly, we write ζ̃ = ζ(S̃), and denote the simultaneous product measures by P_{S S̃}, and so on.

Lemma 2.4.2.
For all B ∈ ℱ^1_∞ ⊗ ℱ^1_∞, with the notation B̄ := B × Ω_2 × Ω̃_2, the following limit exists P-a.s. in the regular growth phase. Moreover, we have that for all n ∈ N, P-a.s. on {N̄_∞ > 0}, the representation (2.13) holds.

For the proof, we need a few Definitions and Lemmas.

Lemma 2.4.4. Y_n converges to 0 P-almost surely, independently of B.
Proof. A consequence of the construction of the BRWRE is that ζ_n ζ̃_n 1_{(S_{n−1},𝐒_{n−1}) = (S̃_{n−1},𝐒̃_{n−1}), S_n ≠ S̃_n} = 0, P ⊗ P_{S S̃}-a.s., so that we have
0 ≤ P[Y_n] ≤ P P_{S S̃}[ζ_n ζ̃_n 1_{𝐒_{n−1} = 𝐒̃_{n−1}}] = P P_{S S̃}[ζ_n ζ̃_n 1_{(S_k,𝐒_k) = (S̃_k,𝐒̃_k), 0 ≤ k ≤ n−1}]. (2.14)
We made use of the fact that, in the third line, because the A_k(Ŝ_{k−1}, Ŝ_k)'s are indicators, we can erase the square. Also erasable is the condition in the inner P-expectation. After that, the outermost P-expectation can be taken into the first fraction, cancelling out one of the a(Ŝ_{k−1}, Ŝ_k)'s. To what remains, we apply the definition of the expectation, using (1.1). This technique is hinted at in the second part of the fifth line, and applied similarly to the first part.
The assertion now follows from the Borel-Cantelli lemma.

Lemma 2.4.5. X_n is a submartingale with respect to (ℱ_n)_n.
Proof. We start by calculating
P(X_n | ℱ_{n−1}) = P[P_{S S̃}(ζ_n ζ̃_n 1_{B ∩ {𝐒_{n−1} ≠ 𝐒̃_{n−1}}}) | ℱ_{n−1}] = P_{S S̃}[ζ_{n−1} ζ̃_{n−1} 1_{B ∩ {𝐒_{n−1} ≠ 𝐒̃_{n−1}}} P((A_n(Ŝ_{n−1}, Ŝ_n)/a(Ŝ_{n−1}, Ŝ_n)) (A_n(S̃_{n−1}, 𝐒̃_{n−1}; S̃_n, 𝐒̃_n)/a(S̃_{n−1}, 𝐒̃_{n−1}; S̃_n, 𝐒̃_n)))]. (2.15)
We do not use the following definition again, but we should like to point out its similarity to W, to be defined later. The inner P-expectation computes as a function w of the four locations involved, set to 0 whenever the denominator of the corresponding fractions vanishes.
Using this, we note that, under the condition {𝐒_{n−1} ≠ 𝐒̃_{n−1}}, w, evaluated along the two walks, depends only on S_{n−1} − S̃_{n−1}, 𝐒_n/𝐒_{n−1} and 𝐒̃_n/𝐒̃_{n−1}. Thus, we pursue the computation (2.16), where α := P[(m_{0,0})^2]/m^2 > 1. This last equality is obtained by introducing a P_{S S̃}(· | ·)-conditional expectation and remarking that the event B depends only on the random-walk part, while the corresponding fraction above depends only on the children part; the two are thus independent. The calculus reads as follows: the BRWRE has, due to the strict construction of the ancestry, the feature that particles with different names have children with different names, i.e. 1_{𝐒_{n−1} ≠ 𝐒̃_{n−1}} ≥ 1_{𝐒_{n−2} ≠ 𝐒̃_{n−2}}. So, we continue (2.16) and finish the proof of the submartingale property by (2.16) ≥ P_{S S̃}[ζ_{n−1} ζ̃_{n−1} 1_{B ∩ {𝐒_{n−2} ≠ 𝐒̃_{n−2}}}] = X_{n−1}.
Notation 2.4.6. For a sequence (a_n)_{n≥0}, we set ∆a_n := a_n − a_{n−1} for n ≥ 1.
This notation is convenient when we treat the Doob decomposition of the process X_n from Definition 2.4.3, i.e.

Definition 2.4.7.
X_n = X_n(B) =: M_n + A_n, (2.17) with M_n a martingale, M_0 = A_0 = 0, and A_n the increasing process defined by its increments ∆A_n := P(∆X_n | ℱ_{n−1}). By 〈M〉_n, we denote the quadratic variation of (M_n)_n, defined by ∆〈M〉_n := P((∆M_n)^2 | ℱ_{n−1}). Passing to the limit, we define the limit of (X_n + Y_n) 1_{N̄_∞>0} as well, P-a.s. On the event of extinction, the statement is trivial, and we conclude (2.12).
The second statement (2.13) follows immediately from the definition.
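The Doob decomposition used in Definition 2.4.7 can be illustrated on the textbook submartingale X_n = S_n^2 for a one-dimensional ±1 walk, where ∆A_n = P(∆X_n | ℱ_{n−1}) = 1, so A_n = n and M_n = S_n^2 − n is a martingale. A sketch (the choice of process is ours, purely for illustration):

```python
import random

def doob_decomposition(path):
    """Doob decomposition X_n = M_n + A_n for X_n = S_n**2 along a
    +/-1 random-walk path: Delta A_n = E[Delta X_n | F_{n-1}] = 1,
    hence A_n = n (predictable, increasing) and M_n = S_n**2 - n."""
    X = [s * s for s in path]
    A = list(range(len(path)))           # A_n = n, A_0 = 0
    M = [x - a for x, a in zip(X, A)]    # martingale part
    return X, M, A

rng = random.Random(7)
path = [0]
for _ in range(50):
    path.append(path[-1] + rng.choice((-1, 1)))
X, M, A = doob_decomposition(path)

# Exact martingale check: averaging M_n over the two equally likely
# next positions returns M_{n-1}.
checks = []
for n in range(1, len(path)):
    s = path[n - 1]
    checks.append(((s + 1) ** 2 + (s - 1) ** 2) / 2 - n == s * s - (n - 1))
```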
In order to prove Lemma 2.4.8, we also need the so-called replica overlap, which is the probability that two particles, sampled independently from the population, meet at the same place. This replica overlap can be related to the event of survival via a Corollary of the following general result for martingales [Yos10, Proposition 2.1.2].
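Assuming the standard form of the overlap, the probability that two particles drawn independently (with replacement) from the time-n population occupy the same site is Σ_y (N_{n,y}/N_n)^2; a sketch with hypothetical occupation numbers:

```python
def replica_overlap(counts):
    """Probability that two particles sampled independently (with
    replacement) from the population occupy the same site:
    sum_y (N_{n,y} / N_n)**2."""
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

spread = {y: 1 for y in range(10)}   # ten sites, one particle each
packed = {0: 10}                     # all ten particles on one site
o_spread = replica_overlap(spread)   # 10 * (1/10)**2 = 0.1
o_packed = replica_overlap(packed)   # 1.0
```

Concentration of the population raises the overlap, which is why its summability discriminates between the growth phases.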
Proposition 2.4.9. Let (Y_n)_{n∈N_0} be a mean-zero martingale on a probability space with expectation E and filtration (ℱ_n)_{n∈N_0} such that −1 ≤ ∆Y_n, E-a.s., and set X_n := ∏_{m≤n}(1 + ∆Y_m). Then, the inclusion {X_∞ > 0} ⊇ {Σ_n E[(∆Y_n)^2 | ℱ_{n−1}] < ∞} holds if Y_n is square-integrable and E[(∆Y_n)^2 | ℱ_{n−1}] is uniformly bounded. The opposite inclusion is provided by Y_n being cube-integrable and E[|∆Y_n|^3 | ℱ_{n−1}] being uniformly bounded.
Corollary 2.4.10. Suppose P(N̄_∞ > 0) > 0 and m^{(3)} < ∞. Then the event {N̄_∞ > 0} coincides P-a.s. with the event that the replica overlaps are summable. For proving this Corollary, we start with some notation.
Notation 2.4.11. Define U_{n+1,x} := 1_{N_n>0} (m N_n)^{−1} Σ K_{n,x̂}, the sum running over the particles x̂ located at x. It is important to note that the sum in this definition is taken over exactly N_{n,x} random variables. The (U_{n+1,x})_{x∈Z^d} are independent under P(·|ℱ_n). It is not difficult to see that, on the event {N_n > 0}, the increments of N̄ can be expressed through the U's.

Proof of Corollary 2.4.10. We need to verify the prerequisites of Proposition 2.4.9, which we apply to X_n := N̄_n and the corresponding mean-zero martingale Y_n. The second moments can be computed directly; using the observations after Notation 2.4.11, we hence obtain the required uniform bound. Similar observations lead to an estimate for the third moment. This proves that all hypotheses of Proposition 2.4.9 are fulfilled, and in fact equality holds for (2.20).
Proof of Lemma 2.4.8. We make a slight abuse of notation, writing B^{(m)} as a template for both the cases B and B_m, and similarly for related pieces of notation. We can make use of (2.16) and, splitting 1 twice into complementary indicators (according to whether 𝐒_{n−1} = 𝐒̃_{n−1} and whether S_{n−2} = S̃_{n−2}), arrive at (2.21). In the last term, 1_{𝐒_{n−1} ≠ 𝐒̃_{n−1}} is implied by the following indicator, while the second term vanishes, due to the fact that ζ_{n−1} ζ̃_{n−1} 1_{𝐒_{n−1} = 𝐒̃_{n−1}} 1_{S_{n−2} ≠ S̃_{n−2}} = 0, P ⊗ P_{S S̃}-a.s. Thus, we can continue with (2.21) = P_{S S̃}[ζ_{n−1} ζ̃_{n−1} 1_{B^{(m)}} (α − 1) 1_{S_{n−1} = S̃_{n−1}} 1_{S_{n−2} = S̃_{n−2}}]. The sum of these increments can then be estimated. Now, the same sort of estimates will be carried out for M_n, but this involves much more work.
First, we note that ∆M^{(m)}_n can be written as in the following display.

Definition 2.4.12. For convenience, we define the quantities below. This is the point where we cannot maintain our easy notation of S and S̃, for we need four independent random walks S^{[j]}, j = 1, 2, 3, 4. The probability spaces and other notations are adjusted accordingly; refer to Definition 2.4.1. We compute the expectations involving the ζ^{[j]}_{n−1}, j = 1, 2, 3, 4; cases that can be obtained by symmetry are not listed here. Case 0 yields W(X, Y) = 0, for it is impossible in the BRWRE model: particles with the same name at the same place are blown by the wind to the same site, so their children cannot be born at different sites.
The notation with the small squares is solely for ease of understanding; all information is fully contained in the written part. For how to read it, let us take case number 5 as an example: the first square corresponds to the 'x'-part, the second one to the '𝐱'-part, and the last one to the 'y'-part of the restriction. Each • corresponds to an index j = 1, ..., 4. If one changes the mapping between bullet positions and indices, one obtains all the symmetries immediately.
A missing square has the same meaning as a square with only dotted lines would have. Now, we can compute W(X, Y), which, in the respective cases, equals products of matrix entries a and of tail sums Σ_{i≥k^{[j]}} q_{0,0}(i) of the offspring distribution. Clearly, the first term on the right-hand side vanishes as m → ∞ because of (2.19), and so does the second term, as can be seen from the following application of Doob's inequality (for instance [Dur91, p. 248]). Since ε > 0 is arbitrary, (2.28) follows, and hence we conclude (2.27).
Note that the L^2-techniques in this paragraph that lead to (2.30) are needed only for the treatment of the last line; the other two can be dealt with by the same arguments as after (2.11).