A branching random walk among disasters

We consider a branching random walk in a random space-time environment of disasters where each particle is killed when meeting a disaster. This extends the model of the "random walk in a disastrous random environment" introduced by [15]. We obtain a criterion for positive survival probability, see Theorem 1. The proofs for the subcritical and the supercritical cases follow standard arguments, which involve moment methods and a comparison with an embedded branching process with i.i.d. offspring distributions. The proof of almost sure extinction in the critical case is more difficult and uses the techniques from [8]. We also show that, in the case of survival, the number of particles grows exponentially fast.


Introduction
In this work we introduce a branching random walk on Z d in a killing random environment. The process consists of particles performing a branching random walk in continuous time. All particles jump independently at rate κ and give birth to children at rate λ. The jump rate κ, the birth rate λ and the distribution q of the number of children do not change over time and space, and are the parameters of the model.
We then consider this process in a random environment ω given by disasters in space-time, defined as follows: The environment ω consists of a collection (ω^(x))_{x∈Z^d} of i.i.d. random variables, where ω^(x) = (ω^(x)(t))_{t≥0} is a Poisson process of rate one. Whenever ω^(x) has a jump at time t, all the particles occupying x at time t are killed.
We give an answer to the following question: For which values of λ, κ and q is the probability that the branching random walk survives strictly positive?

* Technical University of Munich, Germany. E-mail: gantert@ma.tum.de, junk@tum.de
A priori, the answer might depend on the realization of the random environment, but we will see that the survival probability is either zero, for almost all environments, or strictly positive, for almost all environments.
Let us comment on the dependence on the parameters of the model: It is clear by a coupling argument that increasing λ will increase the probability of survival, simply because there are more particles. Similarly, replacing the distribution q of the number of descendants by a distribution with a larger mean should also increase the chance of survival. The dependence on κ is trickier: If the jump rate is small, the process is essentially frozen and remains concentrated on few sites, so it can be killed quickly if the environment is particularly unfavorable in a small area. If we increase κ, the process will jump away from any small area that is atypical and see an environment that is closer to average. However, even in the best case, particles will be killed at rate 1.
We will not fully resolve the dependence on κ, but instead connect the problem to the survival rate in the one-particle model, which was studied in [15]. This correspondence is similar to the connection between the random polymer model and branching random walks in random space-time environments, as explained in Section 1.3 in [5]. The proof of extinction in the critical case borrows heavily from the proof given in [8], which confirmed Conjecture 1 in [5].
Branching random walks in time-dependent environments have been studied extensively in the context of the parabolic Anderson model, see [9], [6]. However, most papers consider the solution to an SDE with random potential which describes the behavior of the expectation of the number of particles in a branching random walk in random environment, and not the actual particle system (a notable exception, where the two models are compared, is [14]). In addition, most papers have non-degeneracy conditions on the killing rates which are violated by our environment. In particular, we point out that our model differs from the branching random walks considered in [5] not only because time is continuous instead of discrete, but also because disasters in the environment were excluded in [5] (see formula (1.7)). The possibility of killing many particles at the same site at once makes our model interesting but also creates some technical difficulties. For a survey on the parabolic Anderson model and random walk in random potential, we refer to [10].
The paper is organized as follows. In the remainder of Section 1 we define the process and recall some previously known results about the one-particle model. Our main result, stated in Section 1.3, is Theorem 1.1 which characterizes the set of parameters where the survival probability is strictly positive.
The subcritical case of Theorem 1.1 follows immediately from the first moment method, see Section 2.
In Section 3 we handle the supercritical case by comparing our process to an embedded Galton-Watson process with i.i.d. offspring distributions. While this argument is relatively short, it needs an auxiliary result (Proposition 3.1) about the one-particle model. To prove the auxiliary result, we need uniform moment bounds (see Proposition 3.3) and a concentration inequality (see Proposition 3.9). The proofs of these propositions make use of stochastic domination. These results can be found in Sections 3.1, 3.2 and 3.3, in which no branching processes occur.
Finally the critical case follows from a standard comparison to oriented site percolation, presented in Section 4.1. To implement this argument we need two propositions, the proofs of which are carried out in the remainder of Section 4.

Definition and notation
We first define the branching random walk introduced above: We identify the nodes of a tree with the set

N* := ⋃_{k=0}^∞ N^k = { x = (x_1, ..., x_k) : k ∈ N, x_1, ..., x_k ∈ N }.

We call |(x_1, ..., x_k)| =: k the height of x and write ∅ for the unique element of height 0, which we call the root. Proceeding recursively, we interpret (x_1, ..., x_k) as the x_k-th child of (x_1, ..., x_{k−1}), for k ≥ 1. Fix now positive values κ and λ as well as a distribution q = (q(k))_{k∈N} on the natural numbers satisfying

m := ∑_{k=0}^∞ k q(k) < ∞ and q(1) < 1. (1.1)

We associate to every node an exponential clock of rate λ, and whenever a clock rings the node is removed and replaced by its children, where the number of children is distributed according to q. The clocks and the numbers of descendants are independent. We will write V(t) for the set of nodes that are alive at time t, starting with V(0) = {∅}.
Next, we extend this by associating to each node v alive at time t a position X(t, v) in Z d . We let each particle perform a simple random walk in continuous time of jump rate κ between its birth and the time when it is replaced by its children, independently from everything else. The root initially starts in the origin, and all other nodes start at the position occupied by their parent node at the time of birth.
For v ∈ V (t), it will be convenient to extend X(t, v) to a function X(·, v) : [0, t] → Z d , where for s ∈ [0, t] we set X(s, v) equal to the position occupied at time s by the unique ancestor of v in V (s).
The process described so far is well-studied. Recall that the environment ω = (ω^(x))_{x∈Z^d} consists of independent Poisson processes of rate 1 indexed by the sites of Z^d, which are independent of the random variables defined before. Let

δ(t, x) := 1{ω^(x) has a jump at time t}.

If δ(t, x) = 1, we say that there is a disaster at time t at x. The process we are interested in is denoted (Z(t))_{t≥0}, with

Z(t) := {v ∈ V(t) : δ(s, X(s, v)) = 0 for all s ∈ [0, t]}.

So Z(t) contains all particles v where no disaster occurred along the trajectory of v before time t. Note that since we did not assume q(0) = 0, it is possible that a particle has zero children, and the process may die out even without the influence of the environment.
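To make the definitions concrete, here is a minimal quenched simulation sketch in d = 1, assuming a deterministic binary offspring law q(2) = 1; all names (`simulate`, `disaster_times`) and parameter choices are ours, purely illustrative, and not part of the paper's construction.

```python
import random
random.seed(1)

def disaster_times(env, x, T):
    # lazily sample the rate-1 Poisson process of disasters at site x on [0, T]
    if x not in env:
        times, s = [], 0.0
        while True:
            s += random.expovariate(1.0)
            if s > T:
                break
            times.append(s)
        env[x] = times
    return env[x]

def simulate(kappa, lam, offspring, T, env):
    """Branching random walk on Z among disasters, up to time T, quenched in env.
    Returns the number of particles alive at time T."""
    alive = 0
    stack = [(0.0, 0)]  # (current time, position) of particles still to follow
    while stack:
        t, x = stack.pop()
        while True:
            dt = random.expovariate(kappa + lam)
            # check for a disaster at the current site before the next event
            if any(t < s <= min(t + dt, T) for s in disaster_times(env, x, T)):
                break  # particle killed by a disaster
            if t + dt > T:
                alive += 1
                break
            t += dt
            if random.random() < kappa / (kappa + lam):
                x += random.choice((-1, 1))   # jump of the random walk
            else:
                k = offspring()               # replaced by k children at (t, x)
                for _ in range(k - 1):
                    stack.append((t, x))
                if k == 0:
                    break                     # died without children
    return alive

env = {}
n = simulate(kappa=1.0, lam=2.0, offspring=lambda: 2, T=3.0, env=env)
print(n)
```

The disasters are sampled per site on first visit, so particles at the same site are killed by the same disasters, as in the model.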
We will use Q to denote the law of the environment, and P for the law of the branching random walk. Typically we consider the process Z(t) for a fixed realization of ω, and then we write P_ω for the conditional or quenched law. The annealed or averaged law P is given by

P(·) := ∫ P_ω(·) Q(dω).

Previous results about the one-particle model
There is a close relationship between our model and the model considered in [15].
There, the process consists of a single particle performing random walk at rate κ among disasters in the same way that particles in our model do. In this section we summarize some known results.
Let (X(t))_{t≥0} be a simple random walk in continuous time, moving in Z^d at jump rate κ > 0, with the corresponding probability measure denoted P. The environment ω = (ω^(x))_{x∈Z^d} is the same as before. We let τ be the first time the random walk hits any of the disasters, that is,

τ := inf{t ≥ 0 : δ(t, X(t)) = 1}.

We are interested in the probability P_ω(τ ≥ t) to survive until time t for a fixed realization of the environment. Note that by averaging over the environments one easily gets the annealed survival rate: disasters strike the currently occupied site at rate 1 along any trajectory, so P(τ ≥ t) = e^{−t}. We summarize the results of [15] in the following theorem.

(i) For almost all environments ω, the quenched survival rate

p(κ) := lim_{t→∞} (1/t) log P_ω(τ ≥ t) (1.2)

exists and is deterministic.

(ii) For all κ > 0 we have p(κ) ≤ −1.
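The annealed identity P(τ ≥ t) = e^{−t} can be illustrated by a small Monte Carlo sketch (our own naive implementation, not from [15]): disasters hit the walker's current site at rate 1 no matter how the walk moves, so the empirical survival frequency at t = 1 should be close to e^{−1} ≈ 0.368.

```python
import math
import random
random.seed(0)

def survive_once(kappa, t):
    """One trial: does a rate-kappa walk on Z survive among rate-1 disasters up to time t?"""
    env = {}  # site -> disaster times on [0, t], sampled lazily per trial

    def disasters(x):
        if x not in env:
            times, s = [], 0.0
            while True:
                s += random.expovariate(1.0)
                if s > t:
                    break
                times.append(s)
            env[x] = times
        return env[x]

    s, x = 0.0, 0
    while s < t:
        dt = random.expovariate(kappa)
        hop = min(s + dt, t)
        # killed if a disaster falls on the current site during this holding interval
        if any(s < u <= hop for u in disasters(x)):
            return False
        s, x = hop, x + random.choice((-1, 1))
    return True

N = 20000
est = sum(survive_once(1.0, 1.0) for _ in range(N)) / N
print(est)  # close to exp(-1)
```

Revisits to a site reuse its Poisson process on disjoint time intervals, so the first disaster on the occupied site is indeed exponential of rate 1 under the annealed law.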

The main result
We are interested in the event

{Z survives} := {Z(t) ≠ ∅ for all t ≥ 0}.

Using the exponent p(κ) we prove the following criterion.

Theorem 1.1. If λ(m − 1) + p(κ) > 0, then P(Z survives) > 0. If λ(m − 1) + p(κ) ≤ 0, then P(Z survives) = 0.

In analogy to classical branching processes, we define three regimes.

Definition 1.2.
We say that the process Z(t) is supercritical if λ(m − 1) + p(κ) > 0, critical if λ(m − 1) + p(κ) = 0, and subcritical if λ(m − 1) + p(κ) < 0.

We define the event of local survival to be {Z survives locally} := {0 is occupied for arbitrarily large times}.
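Assuming the trichotomy is governed by the sign of λ(m − 1) + p(κ), as in the criterion above, a trivial helper (our own, not the paper's notation) makes it explicit:

```python
def regime(lam, m, p_kappa):
    """Classify the process by the sign of lam*(m-1) + p(kappa)."""
    x = lam * (m - 1) + p_kappa
    if x > 0:
        return "supercritical"
    if x < 0:
        return "subcritical"
    return "critical"

# since p(kappa) <= -1 always, with m = 2 survival requires lam > -p(kappa) >= 1
print(regime(2.0, 2.0, -1.0), regime(0.5, 2.0, -1.0))
```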
Our proof of Theorem 1.1 shows in fact that the process survives locally in the supercritical case, so that in this case also P(Z survives locally) > 0; for the proof see Remark 3.2.

Remark 1.6. By an obvious truncation argument, the assumption m < ∞ can be dropped; if m = ∞, we are in the supercritical case.
We do not make any assumption on the shape of p, so a priori it may be discontinuous or may fail to be increasing in κ. In Corollary 4.1 in [7], continuity of p is proven for a related class of models, but the relevant case of hard obstacles is excluded. However, if we interpret p as the free energy of a polymer in a random environment as in Section 3 of [4], it is reasonable to conjecture that p is concave. A proof might be attempted by showing the following conjecture.

Conjecture 1.7. Fix a branching mechanism with m > 1, and set

U := {(κ, λ) ∈ (0, ∞)^2 : λ(m − 1) + p(κ) > 0}.

Then U is a convex set.

Some more notation
Before we start with the proof of Theorem 1.1, we collect some notation that will be useful at various points throughout this work. We first extend the definition of Z to the case where we may have more than one initial particle.
We call η = (η_x)_{x∈Z^d} a configuration, and let Z^η denote the process as defined before, except that we start with η_x particles in x, all of which evolve independently but in the same environment. If A ⊆ Z^d and R ≥ 0 is an integer, we write (A, R) for the special configuration in which each site x ∈ A is occupied by R particles, that is, (A, R)_x := R · 1{x ∈ A}, and we write Z^A := Z^{(A,1)} for the process started from exactly one particle on every site in A. For t > 0 and a configuration η we use Z^{{t}×η} to denote the process started at time t with η_x particles occupying each site x, and we use Z^{{t}×A} if η is equal to (A, 1).
Moreover, if (Z(t))_{t≥0} is some branching process and B ⊆ Z^d, we let (Z^B(t))_{t≥0} denote the truncated process consisting of all particles that have never left B (1.7). In the simple case where B = {−L, ..., L}^d for some L ∈ N we simply write (Z^L(t))_{t≥0}. We also use the following notation for the set of particles of (Z(t))_{t≥0} occupying a site x at time t:

Z(t) ∩ {x} := {v ∈ Z(t) : X(t, v) = x}.

If η is a configuration, we denote by {Z(t) ≥ η} the event that at time t every site x is occupied by at least η_x particles. In the case where η = 1_C for some C ⊆ Z^d, this event is simply written as {Z(t) ≥ C}.
The subcritical case

Let ε > 0 be small enough that λ(m − 1) + p(κ) + ε < 0. For almost all environments ω, we can find T = T(ω) such that

P_ω(τ ≥ t) ≤ e^{(p(κ)+ε)t} for all t ≥ T(ω).

Then we have for t ≥ T(ω)

E_ω[|Z(t)|] = e^{λ(m−1)t} P_ω(τ ≥ t) ≤ e^{(λ(m−1)+p(κ)+ε)t}, (2.1)

where the factor e^{λ(m−1)t} = E[m^M] accounts for the branching and M is a random variable whose law is Poisson with parameter λt. This implies E_ω[|Z(t)|] → 0, and hence extinction, for almost all environments.
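The branching factor arises from the identity E[m^M] = e^{λ(m−1)t} for M ∼ Poisson(λt), which can be checked numerically; this quick sketch (our own helper name) sums the Poisson generating function directly.

```python
import math

def poisson_pgf(m, rate):
    """E[m^M] for M ~ Poisson(rate), by direct summation of m^k * P(M = k)."""
    total, term, k = 0.0, math.exp(-rate), 0
    while term > 1e-15 * max(total, 1.0):
        total += term
        k += 1
        term *= m * rate / k   # term_k = exp(-rate) * (m*rate)^k / k!
    return total

lam, m, t = 2.0, 1.5, 3.0
lhs = poisson_pgf(m, lam * t)
rhs = math.exp(lam * (m - 1) * t)
print(lhs, rhs)
```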

The supercritical case
For the proof in the supercritical case we will need to consider the random variable

S(t) := P_ω(τ ≥ t, X(t) = 0). (3.1)

It is intuitively clear that S(t) should decay to zero with the same exponential rate as P_ω(τ ≥ t), since the event {X(t) = 0} has probability decaying only at a polynomial rate, and therefore its contribution should be dominated by the contribution of the event {τ ≥ t}. This is stated in Proposition 3.1 below. That is, for the process A we only consider particles that return to the origin at times T, 2T, 3T, .... Note that every particle that contributes to A(k) is a descendant of a particle that contributed to A(k − 1). To see that (A(k))_k has i.i.d. offspring distributions, we recall from Section 1.4 the notation Z(t) ∩ {0} and Z^{{t}×A}. Using those, we can define the sequence (q^(k))_{k∈N} of offspring distributions. Note that q^(k) only depends on the environment in the interval [(k − 1)T, kT), and (q^(k))_k is therefore an i.i.d. sequence in the space of probability measures on N.
We let m^(k) denote the expectation of q^(k). By a well-known result on branching processes with i.i.d. offspring distributions, see [16, 17], the survival probability of (A(k))_{k∈N} is positive for almost all environments if E[log m^(1)] > 0. (3.7) Recall the definition of S(t) in (3.1); by the same computation as in (2.1) we get m^(1) = e^{λ(m−1)T} S(T). In order to give a lower bound for the quantity in (3.6), we compare the branching process to the random walk of a single particle: We choose a path by starting in the root, and whenever there is more than one descendant, we follow its first child. Let F(t) be the event that this construction succeeds up to time t, that is, the currently observed particle always has at least one descendant. Its probability can be bounded below in terms of M, the number of branching events along this path; note that M has distribution Poisson(λt). We can now conclude: By (3.2) and (3.8), we find for every ε > 0 some T large enough. By (3.5), we can satisfy (3.7) by choosing ε small enough, finishing the proof.
Remark 3.2. The proof shows in fact that in the supercritical case, the process survives locally with positive probability. Using results of [17] about branching processes with i.i.d. offspring distributions we also see that in the supercritical case, the number of particles grows exponentially fast.
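The comparison rests on the classical fact that a branching process with i.i.d. offspring distributions survives with positive probability when the means satisfy E[log m^(1)] > 0 ([16, 17]). A toy simulation (our own offspring laws, purely illustrative and not the paper's embedded process) shows the effect of the sign of E[log m^(1)]:

```python
import random
random.seed(2)

def survives(p_choices, gens=40, cap=1000):
    """One run of a branching process whose offspring law is Binomial(2, p_k),
    with p_k drawn i.i.d. uniformly from p_choices each generation."""
    z = 1
    for _ in range(gens):
        if z == 0:
            return False
        if z > cap:          # very large populations essentially never die out
            return True
        p = random.choice(p_choices)
        z = sum(1 for _ in range(2 * z) if random.random() < p)
    return z > 0

N = 500
sup = sum(survives((0.4, 0.9)) for _ in range(N)) / N   # E[log m] ≈ +0.18
sub = sum(survives((0.2, 0.6)) for _ in range(N)) / N   # E[log m] ≈ -0.37
print(sup, sub)
```

Here m_k = 2 p_k, so the first environment has E[log m^(1)] > 0 and survives with positive frequency, while the second has E[log m^(1)] < 0 and dies out.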
It remains to prove Proposition 3.1. This will take up most of Section 3: We start by proving a uniform moment bound in Section 3.1 using comparison techniques from [15] and some results about stochastic orders. In Section 3.2 we use this to get a concentration inequality, which is necessary for the proof of Proposition 3.1 in Section 3.3.

A uniform moment bound
The following proposition is key to proving the concentration inequality in the next section. For the proof we use an equivalence relation ≡ on Z^d defined by (y_1, ..., y_d) ≡ (z_1, ..., z_d) ⟺ y_1 = z_1 mod 2.
We will identify Z^d/≡ with Z_2 = {0, 1}, and we use π : Z^d → {0, 1} to denote the projection. Let ω̃ be an environment on {0, 1}, consisting as usual of two independent Poisson processes ω̃^(0) and ω̃^(1) of rate 1. We write π^{−1}(ω̃) for the environment on Z^d given by

π^{−1}(ω̃)^(x) := ω̃^(π(x)), x ∈ Z^d.

Note that this is a degenerate environment on Z^d, where all sites that share an equivalence class of ≡ experience the same disasters. We will slightly abuse notation by writing E for the law of ω̃ as well.
First we need the following auxiliary lemma.
Proof. This is a modification of the proof of Lemma 2.4 in [15], where the integrability of P_ω(τ ≥ 1)^{−p} is shown. We quickly sketch how the proof can be modified: Note that the bound in (2.24) of [15] is actually a bound for P_ω(τ ≥ t, X(t) = 1). By slightly modifying the argument we obtain a similar bound for P_ω(τ ≥ t, X(t) = 0), where on the right-hand side we have to replace C(t)C_1(t)^n t_1^{1−2p} by C(t)^2 C_1(t)^{n−1} t_1^{−p}. It is clear that this does not make a difference for the convergence of the sum appearing in the display after (2.26), where the coefficients β_n have to be adapted accordingly. This gives the first claim, and the second claim follows because the coefficients in that sum can be chosen increasing in t. This is clear for C(t) and C_1(t), and for n ≥ 1 and p ∈ (0, 1/2) the β_n are increasing as well.
Proof of Proposition 3.3. By Lemma 2.2 in [15] we can compare with the projected environment, so the claim follows once we show the corresponding statement for ω̃. For simplicity we only treat the case where x ≡ 0, noting that the case x ≡ (1, 0, ..., 0) is similar. For a fixed environment ω̃, let N be the number of disasters in [0, 1]. We write T_1, ..., T_N for the disaster times in increasing order, and E_1, ..., E_N for their locations. Let us write P_{T_1,...,T_N} resp. E_{T_1,...,T_N} for the law resp. expectation of E_1, ..., E_N conditioned on N and T_1, ..., T_N, which is simply the uniform distribution on {0, 1}^N. Notice that probabilities and expectations of functions of the locations can be computed by conditioning in this way. Before we make use of this observation, we introduce a different encoding of the disaster locations which will be convenient later: Given a realization ω̃, we define its signature I_ω̃. The intuition is that I_ω̃ encodes the necessary jumps for the random walk: if I_ω̃(i) = 1 for some i ∈ {0, ..., N}, the process has to switch sites in [T_i, T_{i+1}) if it wants to survive and at time 1 end up in a location equivalent to 0. (Recall that T_0 = 0 and T_{N+1} = 1.) We let Σ ⊆ {0, 1}^{N+1} be the set of configurations with an even number of 1s. Observe that P_{T_1,...,T_N}(I_ω̃ ∈ Σ) = 1, and that I_ω̃ has the uniform distribution on Σ.
On the other hand, we define for a càdlàg process X on Z^d its signature I analogously, recording between which disaster times the process switches equivalence classes. So we have two probability measures which are evaluated at a random point I_ω̃, and we want to compare the expectations of f(µ(I_ω̃)) and f(ν(I_ω̃)) for the convex function f. For this we recall some results about stochastic orders: For two probability measures µ and ν on Σ, we say that µ is majorized by ν and write µ ⪯_M ν. The intuition for µ ⪯_M ν is that the mass of µ is more spread out than the mass of ν, so that the random evaluation µ(I_ω̃) should be more random than ν(I_ω̃). The following result makes this precise in terms of the convex stochastic order.

Lemma 3.5 (Corollary 1.5.37 in [13]). We have µ ⪯_M ν if and only if the corresponding evaluations are comparable in the convex stochastic order.

Indeed we have

Lemma 3.6. Let µ and ν be defined as in (3.12). Then µ ⪯_M ν.
It remains to show Lemma 3.6. If Z is a càdlàg process, we call t a jump time of Z if Z(t) ≢ Z(t−), and we write R_Z for the number of jump times of Z in [0, 1].

Lemma 3.7. Let X resp. Y be simple random walks on Z^d with jump rate κ resp. κ/2. Then, conditioned on ending at time 1 in a fixed equivalence class, R_Y is stochastically dominated by R_X.
Proof. It is easier to show domination in the likelihood ratio order ⪯_lr, see for example Chapter 1.4 in [13], where it is also shown that ⪯_lr is stronger than ⪯_st.
We have to check that the likelihood ratios are ordered for k, l ∈ N of the same parity as x_1 and such that |x_1| ≤ k ≤ l. We apply the definition of conditional probability and cancel the terms that appear on both sides (note that P_{κ/2}(Y(1) ≡ x | R_Y = l) = 1 since l has the same parity as x_1). Then we can rewrite the inequality in terms of a discrete time simple random walk (Z_i)_{i∈N} on Z and Poisson random variables A resp. Ã of parameter κ/d resp. κ/(2d). But this inequality holds by the Markov property. Now we are ready to show Lemma 3.6.
Proof. Let us define weights p_0, ..., p_N by p_0 := T_1, p_N := 1 − T_N and p_i := T_{i+1} − T_i for all other values of i. We note that µ and ν do not depend on the order of T_1, ..., T_N, and therefore we can rearrange them suitably. Now for k ∈ N, let M_k = (M_k(0), ..., M_k(N)) denote a random variable having the multinomial distribution with k trials, and write P_k for its law. That is, k indistinguishable balls are thrown into bins numbered 0, ..., N such that each ball independently lands in bin i with probability p_i, and M_k(i) is the final number of balls in bin i. We define

I_k(i) := M_k(i) mod 2, i = 0, ..., N. (3.14)

We will often use I_k interchangeably with the set {i : I_k(i) = 1}. Observe that by conditioning on R_X and R_Y we get the mixture representation (3.15). Indeed, conditional on a random walk having K jumps in [0, 1], each jump occurs in [T_i, T_{i+1}) with probability p_i, independently of the other jump times, and the process switches sites between T_i and T_{i+1} exactly if there is an odd number of jumps in [T_i, T_{i+1}).
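The parity mechanism can be sketched in a few lines: k balls (jumps) are thrown into bins (the intervals between disaster times), and the vector of bin parities always has an even number of 1s when k is even, since the parities sum to k mod 2. (The helper below uses our own names, not the paper's notation.)

```python
import random
random.seed(3)

def signature(k, p):
    """Parity vector of a multinomial sample: k jumps thrown into bins with
    probabilities p; coordinate i records whether an odd number of jumps
    lands in the i-th interval between disaster times."""
    counts = [0] * len(p)
    for _ in range(k):
        r, acc = random.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if r < acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1   # guard against floating-point rounding of acc
    return tuple(c % 2 for c in counts)

p = (0.5, 0.3, 0.2)
print(signature(6, p))
```

Since the coordinates sum to k modulo 2, an even number of jumps always produces an element of Σ.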
One might be tempted to think that we are done now, since for all fixed values k ≤ l we can easily show that P_l(I_l ∈ ·) ⪯_M P_k(I_k ∈ ·) holds: The distribution of I_l can be obtained from the distribution of I_k by the application of a doubly stochastic matrix, and this is an equivalent characterization of ⪯_M, see for example Theorem 1.5.34 in [13].
Moreover, from Lemma 3.7 we know that there exists a coupling between K and L such that K ≤ L holds with probability one. However, the majorization order is not stable under taking mixtures, so this does not give the conclusion.
Instead we define a partial order ⪯ on Σ by comparing, for every k = 0, ..., N, the number of 1s among the first k + 1 coordinates.
We will show in Lemma 3.8 that if we increase the number of jumps from 2k to 2k + 2, the mass in Σ becomes less concentrated on the "small values" with respect to this partial order, which is what we need to conclude: First note that both µ and ν are decreasing in ⪯, as defined in part (i) of Lemma 3.8 below. From (3.15) we see that this follows by taking expectations in (3.16), noting that both K and L are supported on the even numbers. Moreover, the two mixing distributions are comparable: recall from Lemma 3.7 that we can couple K and L such that K ≤ L holds with probability one, and apply (3.15) together with (3.17). We have checked conditions (1) and (2) from [2], and µ ⪯_M ν now follows from Theorem 3 in that work.

It remains to show
Proof. For S ⊆ ⟦N⟧ we write M_k(S) := ∑_{i∈S} M_k(i). We recall the following fact about a binomial random variable B_{n,p} with n trials and success probability p: P(B_{n,p} is odd) = (1 − (1 − 2p)^n)/2. Whenever S or T is the empty set, we drop it from the notation and just write f_T or f_S.
We first show (3.16) in two special cases. Next we assume that |I| = |J| and that I and J only differ in two coordinates, that is, I = I_0 ∪ {a} and J = I_0 ∪ {b} for some b < a. Let B denote the corresponding event. For the inequality we have used the above binomial fact for m odd. Now the general case follows from the observation that for any I ⪯ J we can find I_0 ⪯ ... ⪯ I_r such that I_0 = I and I_r ⊆ J, and with the property that I_{i+1} and I_i only differ in two coordinates, as defined above.
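The parity identity for binomial variables, P(B_{n,p} odd) = (1 − (1 − 2p)^n)/2, can be verified by direct summation; this quick check is our own code, added for illustration.

```python
import math

def p_odd(n, p):
    """P(Binomial(n, p) takes an odd value), by summing over odd k."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(1, n + 1, 2))

n, p = 7, 0.3
closed_form = (1 - (1 - 2 * p) ** n) / 2
print(p_odd(n, p), closed_form)
```

The identity follows from evaluating the probability generating function at −1, which is why parities of independent counts are so tractable in the computations above.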

Part (ii):
We do this by constructing a coupling (I_{2k}, I_{2k+2}) with the property that I_{2k} ⪯ I_{2k+2} holds with probability one, from which (3.17) follows.
For this, let A and B denote the (random) bins of two additional balls, and let M be independently sampled with the multinomial distribution with 2k trials. On the event {A = B} we define I_{2k} from M according to the definition, and set I_{2k+2} equal to I_{2k}.
In the case where B < A, we first fix the coupling on ⟦N⟧ \ {A, B}. We claim that this is indeed the desired coupling. First note that we can sample a realization of the multinomial distribution M_{2k+2} with 2k + 2 trials by sampling M together with two additional balls A and B as described above. If the extra balls end up in the same bin, then the parity of all coordinates of M and M_{2k+2} will agree, and we can take I_{2k} = I_{2k+2}.
Otherwise, adding A and B will flip the parities of M(A) and M(B). So conditionally on {M(A) + M(B) = R} we have sampled I_{2k}(B) and I_{2k+2}(B) with the correct laws, which then forces us to choose I_{2k}(A) and I_{2k+2}(A) as in (3.19).

A concentration inequality
We write S(t, x) := P_ω(τ ≥ t, X(t) = x). With the previous moment bound at hand, we can now prove a concentration inequality for the sequences (S(t, x))_{t≥0} in which the bounds do not depend on x. We follow the proof of Proposition 3.2.1 in [5].
for all t ∈ N and x ∈ Z d .
Proof. We will drop the dependence on t and x in the notation, and only write S for S(t, x). Let ω_i be the environment that contains all disasters (t, y) of ω except for those with t ∈ [i − 1, i). We now consider the filtration (F_i)_{i=0}^t and the random variables (S_i)_{i=1}^t. Here P^{(r,y),(s,z)} is the law of a random walk starting at time r in y and conditioned to end up in z at time s. To see that (3.21) holds true, note that

α_{y,z} = P_{ω_i}(X(i − 1) = y, X(i) = z, τ ≥ t, X(t) = x)/S_i.

To compute the expectation of the right-hand side of (3.21), consider, for i fixed, the sigma algebra F*. From our choice of ω_i we clearly have F_{i−1} ⊆ F*, and η_{y,z} is independent of F* while α_{y,z} is F*-measurable. So using Jensen's inequality we obtain the claimed bound.

Proof of Proposition 3.1
Equipped with this concentration inequality we can now prove Proposition 3.1. We follow the proof of Proposition 2.4 in [3].
Proof of Proposition 3.1. We start with (3.4), where we first argue that it is enough to consider t ∈ N. We take any t > 0 and set s := ⌈t⌉. Here we take δ ∈ (0, 1/2), and for the second line we use Jensen's inequality. The second term in (3.22) is finite by (3.10). For the first term we use P^(i), the law of a random walk started at time i − 1 at the origin, and note that the first term in the last line does not depend on ω. But for all δ ∈ (0, 1) the remaining expression is integrable, where we again used Jensen's inequality, and the integrability follows from Proposition 3.3. From (3.2) and the concentration inequality (3.20) we obtain (3.3) by a simple Borel–Cantelli argument. Now to prove (3.2), we remark that the existence of the limit can be shown by subadditivity as usual, but this is not even necessary for our claim; the bound on the limsup is clear. We now prove the other direction, where by (3.22) it is enough to consider t ∈ N. Note that for any x ∈ Z^d, since P^{t,x}(τ ≥ 2t, X(2t) = 0) has the same law as P_ω(τ ≥ t, X(t) = x), we conclude (3.23). For γ > 0 we consider the box B_t := {x ∈ Z^d : ‖x‖ ≤ γt} and the event A_t := {X(t) ∈ B_t}.
Using standard large deviation techniques, we can choose γ large enough such that the complement of A_t is negligible on the relevant exponential scale.
This finishes the proof.

The critical case 4.1 Proof of Theorem 1.1 in the critical case
In this section we apply the technique going back to [1], where it was used to show that the critical contact process dies out. We consider the critical process and assume that it survives, showing that this leads to a contradiction.
For this we find a supercritical oriented site percolation process induced by the branching process in such a way that an infinite cluster in the percolation implies global survival of the branching process. In this coupling the event that a site is open can be decided by considering local events of the branching process, i.e. an event that only depends on a finite space-time box. The probability of this local event therefore depends continuously on the parameters of the model, so that the comparison to supercritical percolation still holds true if we push the parameters slightly into the subcritical phase. Since we know that the process dies out in this case, we have a contradiction.
This technique was also used in [8] for a discrete time, non-degenerate version of our model. For the contradiction we first consider the process in an environment with a higher disaster rate, making the process subcritical: Let us introduce the rate at which disasters appear as a new parameter of the model (until now, it was fixed to be 1). Denote by Q_α the law such that (ω^(x))_{x∈Z^d} is a collection of independent Poisson processes of rate α > 0, and write P_{α,κ,λ} for the annealed measure Q_α ⊗ P^{κ,λ}_ω. Let p(α, κ) be the survival rate of a single particle in this environment (defined as in (1.2) but with an environment with disaster rate α). We show at the end of this section that for any δ > 0 we have

λ(m − 1) + p(1 + δ, κ) < 0.

The contradiction will come from a coupling with oriented percolation, showing that for δ small enough we have P_{1+δ,κ,λ}(Z survives) > 0. (The reason to use S^2 on the right-hand side of (4.6) will become clear later.) In words, A_{0,0} = A_{0,0}(L, T, n, S) is the event that, starting from configuration (D_n, S^2) at time 0, those particles will propagate such that at some time t ∈ [5T, 6T] we find a copy x + D_n of D_n where again every site is occupied by at least S^2 particles. Because we consider the truncated process, the event has to be achieved by particles which do not leave a certain space-time box. We now state the key proposition, which says that under the assumption (4.2) we can make the probability of (an auxiliary version of) A_{0,0} arbitrarily large. We will use this to give an estimate for the probability of A_{s,y} that holds uniformly for all s and y in some space-time box: Note that A_{s,y} is a local event, i.e. it depends only on the process in some finite space-time box. Therefore its probability depends continuously on the parameters, and we get the corollary below. We will now argue that Corollary 4.3 ensures that the process survives with positive probability by a comparison to oriented percolation on N^2.
We follow the arguments from Chapter I.2 in [12]. This defines a random process (η(k, l))_{(k,l)∈N^2} ∈ {0, 1}^{N^2} from every realization of the process (Z^{(D_n,S^2)}(t))_{t≥0}, and an easy observation is that if η(k, l) = 1 for infinitely many points (k, l), then the original process must have survived. The next proposition compares (η(k, l))_{(k,l)∈N^2} to an independent oriented site percolation (η̃(k, l))_{(k,l)∈N^2}.
Corollary 4.3 shows that for any ε > 0, we can choose L, T > 0, n ∈ N, s ∈ N and δ > 0 such that this happens with probability at least 1 − ε, and it is clear that this probability does not depend on K or l. So we have constructed a percolation (η(k, l))_{(k,l)∈N^2} where each point is open with high probability, but not independently. To address this we define a distance between two sets S_1, S_2 ⊆ N by

d(S_1, S_2) := min{|k − l| : k ∈ S_1, l ∈ S_2}.

We notice that the restriction to a truncated process in Corollary 4.3 ensures that the percolation is 2-dependent. This means that conditioned on {η(k, l) : k ≤ K}, the collections (η(K + 1, l))_{l∈S_1} and (η(K + 1, l))_{l∈S_2} are independent for any sets S_1, S_2 ⊆ N with d(S_1, S_2) > 2. Theorem B26 in [12] then ensures that we can couple (η(K + 1, l))_{l≤K+1} with an independent family of Bernoulli random variables (η̃(K + 1, l))_{l≤K+1} such that η dominates η̃, and such that η̃(K + 1, l) = 1 holds with probability at least (1 − 5√ε)^2 if either η(K, l − 1) = 1 or η(K, l) = 1.
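The final step can be illustrated with a toy independent oriented site percolation on N^2, in which a site in row k + 1 is reached from columns l − 1 or l of row k whenever it is open; once the opening probability is close enough to 1, the cluster survives with high probability. This is our own simplified sketch (independent sites, no 2-dependence), not the coupling used in the proof.

```python
import random
random.seed(4)

def front_alive(p, rows, width):
    """Oriented site percolation on a strip: track which columns of each row are
    reached, where a site is open independently with probability p."""
    reached = [True] * width
    for _ in range(rows):
        reached = [(reached[l] or (l > 0 and reached[l - 1]))
                   and random.random() < p
                   for l in range(width)]
        if not any(reached):
            return False
    return True

trials = 200
est_hi = sum(front_alive(0.9, 80, 80) for _ in range(trials)) / trials
est_lo = sum(front_alive(0.3, 80, 80) for _ in range(trials)) / trials
print(est_hi, est_lo)
```

For p = 0.9 the front survives the strip in almost every trial, while for p = 0.3 it dies out quickly, mirroring the sub/supercritical dichotomy of oriented percolation used in the comparison.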

Some technical lemmas
Recall that we have fixed λ, κ and q such that (4.2) holds. We first show that we can make the survival probability arbitrarily close to 1 by enlarging the set of initially occupied sites. This is part (i) of Lemma 4.5 below.
Part (ii) concerns particles that survive locally until time 1 by using only two sites.
We obtain that (with high probability) many particles will achieve this if we start with a large enough number of particles at the origin. Part (iii) shows that, with high probability, starting from N particles occupying the origin, at time 1 we end up with a configuration where every site of D_n is occupied by many particles. We need this to be a local event, so we restrict ourselves to particles that do not leave certain boxes.
Now, due to the spatial ergodic theorem (see Theorem 4.9 in [11]), we obtain (4.9), where we used the fact that {Y_x, x ∈ Z^d} are independent with respect to P_ω. We conclude from (4.9) that P_ω(S_n = 0) → 0 almost surely and therefore P(S_n = 0) → 0 as well.

Part (ii):
For v ∈ N* a node in our tree (recall Section 1.1), let B(v) denote the event that v survives up to time 1 in the manner just described. Note that the events A(α) are increasing as α ↓ 0 and that their union over all α ∈ (0, 1] ∩ Q has probability 1. So for any η > 0 we can find α > 0 small enough that Q(A(α)) ≥ 1 − η. Now, starting with N initial particles at the origin in an environment ω ∈ A(α), the number of particles v such that B(v) occurs dominates the number of successes of a binomial random variable with N trials and success probability α. Clearly we can choose N large enough such that P(Bin(N, α) ≥ M) ≥ 1 − η.
Then we can conclude, since the resulting bound holds for η small enough.

Part (iii):
Let D̃_n be equal to either D_n or ne_1 + D_n. We fix an enumeration D̃_n = {x_1, ..., x_{(2n+1)^d}} of the sites, and introduce the quantity S(x). Here we use P_ω for the law of a single particle which does not branch and which is killed by the environment ω, with τ denoting its extinction time. For α ∈ (0, 1] we consider the events A(α) := {min{S(x) : x ∈ D̃_n} ≥ α}.

Fix η > 0. By the same argument as before we find that Q(A(α)) ≥ 1 − η holds for some α > 0 small enough. We now choose N := m(2n + 1)^d for some large m. Letting W ⊆ N* denote the set of initial particles, we partition W (deterministically) in such a way that m of the initial particles are assigned to each site x_i. Noticing that P(B_i(w) = 1) = e^{−λ} S(x_i), we conclude the desired bound for ω ∈ A(α).

In the following, we think of A ⊆ Z^d as a large set, so that {Z^A dies out} is an event of small probability. In the next lemma we state the familiar property that survival can only happen if the number of particles goes to infinity. Looking at the process as a random tree embedded in space-time, this means that there are many particles occupying the top of a space-time box. Let F_t be the sigma algebra generated by the environment, the branching times and the particle positions up to time t. Letting t go to infinity, the left-hand side of the resulting inequality converges to the indicator function 1{Z^A dies out} ∈ {0, 1}. However, if for some K we have |Z^A(t)| < K for arbitrarily large t, the limit inferior of the right-hand side is bounded away from 0. Therefore the event {Z^A survives, |Z^A(t)| < K for arbitrarily large t} has probability 0.

We also need the following general result: Since the function on the left-hand side is convex in p_⟦m⟧ while the right-hand side is linear, the conclusion follows by checking that the inequality indeed holds for p_⟦m⟧ equal to 0 and to 1.

Space-time boxes and an FKG inequality
Let us define the random variables mentioned at the end of Section 4.1. Note that we can think of the process (Z^η(t))_{0≤t≤T} as a process in space-time, which we want to emphasize in the notation. For convenience we also define the sign of zero to be 1, that is, sign(x) := 1_{x≥0} − 1_{x<0} for x ∈ Z. Note that the bottom {0} × {−L, ..., L}^d of the box is not part of the boundary. For all these quantities we sometimes omit the dependence on L and T if it is clear from the context. See also Figure 1 for an example in d = 2. Let η be a configuration as defined in Section 1.4. For u ∈ U and θ ∈ Θ let N^η(L, T, u, θ) count the number of particles leaving B through F(L, T, u, θ). That is, N^η(L, T, u, θ) is the number of particles of Z^η that hit ∂B for the first time at some (t, x) ∈ F(L, T, u, θ). Furthermore, for u ∈ {±1} and θ ∈ Θ let M^η(L, T, u, θ) count the particles exiting B through T(L, T, u, θ), so that M^η(L, T, u, θ) := |{v ∈ Z^η(T) : X(T, v) ∈ T(L, T, u, θ), X(s, v) ∉ ∂B for all s < T}|. We use M^η and N^η to refer to the vectors M^η(L, T, ·, ·) ∈ N^{2^d} and N^η(L, T, ·, ·) ∈ N^{d2^d}.
Moreover we record some shorthand notation for later use. We then have the following FKG inequality.
Theorem 4.8. Let η_1 and η_2 be two configurations, and denote by V^{η_1} and V^{η_2} two independent realizations of the process started from η_1 resp. η_2. We let Z^{η_1}, M^{η_1} and N^{η_1} (resp. Z^{η_2}, M^{η_2} and N^{η_2}) be defined as above for the process started from η_1 (resp. η_2). Moreover let f and g be increasing functions. An intuitive explanation is that if many particles of V^{η_1} survive and occupy any given orthant, then this increases the chance that many particles of V^{η_2} are alive in any other orthant, since they are affected by the same disasters.
Proof of Theorem 4.8. We will show that (4.17) holds for almost all realizations of V^{η_1} and V^{η_2}. Taking expectations with respect to the law of V^{η_1} and V^{η_2} then yields the claim. Think of the processes as trees, recalling Section 1.1. Conditioned on V^{η_1} and V^{η_2} we can find K ∈ N and 0 = U_0 < U_1 < ... < U_K < U_{K+1} = T such that both trees are constant on [U_k, U_{k+1}) for all k = 0, ..., K. That is, neither V^{η_1} nor V^{η_2} jumps or branches in [0, T] \ {U_1, ..., U_K}. Consider χ(k, x) := 1{no disaster occurs at x in the interval [U_k, U_{k+1})}.
Let G := σ(χ(k, x) : 0 ≤ k ≤ K, x ∈ Λ) and note that M^{η_1}, N^{η_1}, M^{η_2} and N^{η_2} are G-measurable and increasing in χ. Since f and g are increasing, this means that both f(M^{η_1}, N^{η_1}) and g(M^{η_2}, N^{η_2}) are also increasing in χ. Therefore (4.17) follows from the FKG inequality, see Corollary 2.12 in [11]. In our case the law of {χ(k, x) : 0 ≤ k ≤ K, x ∈ Λ} trivially satisfies the FKG lattice condition, since it is a product measure.
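Since the χ(k, x) form an independent family, the lattice condition is immediate, and the resulting inequality E[fg] ≥ E[f]E[g] for increasing f, g can be sanity-checked by brute force on a small product space. A sketch (our own illustration, not the paper's setup), with i.i.d. Bernoulli(p) coordinates and two illustrative increasing functions:

```python
from itertools import product

def fkg_gap(f, g, n, p):
    """E[f*g] - E[f]*E[g] under i.i.d. Bernoulli(p) coordinates on {0,1}^n.
    The FKG inequality predicts this is >= 0 when f and g are both increasing."""
    Ef = Eg = Efg = 0.0
    for x in product((0, 1), repeat=n):
        w = 1.0
        for xi in x:
            w *= p if xi else 1.0 - p
        fx, gx = f(x), g(x)
        Ef += w * fx
        Eg += w * gx
        Efg += w * fx * gx
    return Efg - Ef * Eg

f = lambda x: min(x[0] + x[1], 1)  # increasing: one of the first two coordinates is 1
g = lambda x: sum(x)               # increasing: total number of 1-coordinates
print(fkg_gap(f, g, n=4, p=0.3))   # non-negative, as FKG predicts
```

For functions depending on disjoint coordinate sets the gap vanishes, as one expects from independence.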
We obtain the following corollary.

Corollary 4.9. For any L, K, K′ ∈ N, T > 0, any configuration η and any S ∈ N we have the estimates (4.18)–(4.20).

Proof. We show only the proof of (4.18), since the other claims follow in the same way. Let I := U × Θ, so that |I| = d2^d. Fix an environment ω and for (u, θ) ∈ I define X_{u,θ} := 1{M^η(L, T, u, θ) > K}.
Now consider independent copies of the tree indexed by I × {1, ..., S}, each of which is started from configuration η and evolves in the same environment. We use X_{u,θ,i} to denote the realization of X_{u,θ} corresponding to the tree (u, θ, i) ∈ I × {1, ..., S}; these form an independent family. Observe that P_ω(X_{u,θ,i} = 0 for all i = 1, ..., S) ≥ P_ω(M^{Sη}(L, T, u, θ) ≤ K).
Together with Lemma 4.7 this implies the claim.

The next lemma shows that we can make the probability on the right-hand side of (4.20) arbitrarily small. That is, if the process survives, then there will be many particles occupying the boundary of any space-time box.

Lemma 4.10. Let (T_j)_j and (L_j)_j be two sequences increasing to infinity. Then for any K > 0 and any configuration η we have (4.21).

Proof. Let Λ_j := {−L_j + 1, ..., L_j − 1}^d, and consider the space-time box B_j := [0, T_j] × Λ_j. We denote by F_{L_j,T_j} the sigma-algebra generated by the environment in B_j as well as the branching times and positions of particles inside B_j. We will consider the process of particles in Z^η that have never left B_j; here ‖·‖_∞ denotes the maximum norm. Note that (s, v) ∈ E_j implies that the particle v has just left B_j (for the first time) at time s, either through one of the sides or through the top. Clearly E_j is F_{L_j,T_j}-measurable. Then P(D(s, v) = 1) = αβ, with the same α and β as in (4.10) and (4.11). For the last estimate, note that for (s, v) ∈ E_j the event {D(s, v) = 1} is independent of F_{L_j,T_j}, and that for (s_1, v) ≠ (s_2, w) ∈ E_j we have P(D(s_1, v) = D(s_2, w) = 1) ≥ P(D(s_1, v) = 1) P(D(s_2, w) = 1).
Now the same argument as in the proof of Lemma 4.6 applies: for j → ∞ the left-hand side of (4.21) converges to 1{Z^η dies out}, while the right-hand side is bounded away from zero whenever |E_j| ≤ K for infinitely many j. Therefore we have lim sup_{j→∞} P(|E_j| < K) ≤ P(|E_j| ≤ K i.o.) ≤ P(Z^η dies out).
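For intuition about statements like Lemma 4.10, the model itself is easy to simulate: particles jump at rate κ and branch at rate λ, and a disaster at an occupied site (disasters strike each site at rate 1) kills every particle there. The sketch below uses binary branching (an illustrative choice for the offspring law q) and illustrative parameter values; it is a toy Monte Carlo estimate of the survival probability, not part of the paper's argument:

```python
import random
from collections import Counter

def survives(kappa, lam, T, rng, d=1):
    """One run of the branching random walk among disasters, started from a
    single particle at the origin of Z^d; returns True if it is alive at time T.
    Branching is binary; disasters at empty sites are ignored (thinning),
    which does not change the law of the particle system."""
    particles = [tuple([0] * d)]
    t = 0.0
    while particles:
        occupied = Counter(particles)
        # total event rate: jumps + births + disasters at occupied sites
        total = len(particles) * (kappa + lam) + len(occupied)
        t += rng.expovariate(total)
        if t >= T:
            return True
        u = rng.random() * total
        if u < len(particles) * kappa:            # a uniform particle jumps
            i = rng.randrange(len(particles))
            x = list(particles[i])
            j = rng.randrange(d)
            x[j] += rng.choice((-1, 1))
            particles[i] = tuple(x)
        elif u < len(particles) * (kappa + lam):  # a particle gives birth on its site
            particles.append(particles[rng.randrange(len(particles))])
        else:                                     # disaster: kill everyone at one occupied site
            x = rng.choice(sorted(occupied))
            particles = [p for p in particles if p != x]
    return False

rng = random.Random(0)
runs = 400
p_hat = sum(survives(kappa=1.0, lam=1.5, T=3.0, rng=rng) for _ in range(runs)) / runs
print(p_hat)
```

As a check, with κ = λ = 0 a single frozen particle survives until T exactly when no disaster hits the origin before T, i.e. with probability e^{−T}.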

Proof of the key propositions
We are now in a position to prove the missing propositions from Section 4.1. Note that there we only used Proposition 4.2; however, we obtain it by repeatedly applying Proposition 4.1. For the proof of this first result we need to consider two cases depending on the value of ε. Since ε will in turn depend on the value ε′ in the second proposition, we choose to state those two cases in terms of ε′ from the beginning: given ε′ > 0 we choose ε > 0 such that (1 − ε)^10 ≥ 1 − ε′. With this value of ε we can find δ > 0 with the required property. Now one of the following two statements, (case 1) or (case 2), will be true, and we prove both propositions separately in each case.

Proof in case 1
Proof of Proposition 4.1 in case 1. We first have to find a number R ∈ N that is large enough for our purposes. Choose R_1 and R_2 such that any set A ⊆ Z^d with |A| ≥ R_2 contains a subset A′ ⊆ A with |A′| ≥ R_1 such that any two sites x ≠ y ∈ A′ satisfy ‖x − y‖_∞ ≥ 4n. By part (iii) of Lemma 4.5 we find a suitable R_3. The next step is to find L and T. From Lemma 4.6 and the definition of n we obtain an estimate which we can rewrite by saying that for all T ≥ T_0 there exists L(T) with the corresponding property. Note that our space-time box has 2^d orthants in the top and d2^d orthants in the faces. We therefore apply Lemma 4.10 with K equal to (1 + d)2^d R + 1 and the sequences (L_k)_k and (T_k)_k defined before. We find that there exists k_0 such that the resulting bound holds for all k ≥ k_0, and we set L := L_{k_0} and T := T_{k_0}. For the second inequality we have used (4.20) and the definition of S. Together with (4.27) we get (4.28). Applying (4.19) together with the definition of S, and using the fact that by symmetry the value of P(N^{(D_n,S^2)}(L, T, u, θ) ≤ R) does not depend on θ and u, we obtain the corresponding estimate. On the other hand, (4.18) together with (4.27) and the definition of S yields the analogous bound. Now we have to verify that the claim of Proposition 4.1 is indeed satisfied with this choice of L and T. That is, we need to bound the probability that we find a copy of D_n shifted to the correct space-time location, such that every site is occupied by at least S^2 particles of the truncated tree. We show that each of the following steps occurs with high probability, independently of the choice of θ ∈ Θ:
1. The tree Z^{(D_n,S^2)} has many particles leaving through F(L, T, e_1, θ).
2. There exists (t, x) ∈ F(L, T, e_1, θ) such that the particles occupying x at time t grow into a fully occupied copy {t + 1} × (x + ne_1 + D_n, S^2) of (D_n, S^2).

3. Consider now the shifted box B: the tree growing from {t + 1} × (x + ne_1 + D_n, S^2) will have many descendants that leave through its top T(1, −θ).
4. There is one particle at (t, x) ∈ T(1, −θ) that grows into a new copy {t + 1} × (x + D_n, S^2) of the box, which then satisfies the necessary conditions.

On {N^{(D_n,S^2)}(L, T, e_1, θ) > R} at least one of the following two statements, (case A) or (case B), will be true. For both cases we let E_{t,v} be the indicator function of the event that (t, v) ∈ R grows into a shifted copy of D_n. In (case A), note that √R ≥ (4n)^d R_1, so we can find at least R_1 distinct indices (t_1, x_1), ..., (t_{R_1}, x_{R_1}) ∈ I such that |t_i − t_j| ≥ 2 and ‖x_i − x_j‖_∞ ≥ 4n hold for all i ≠ j.
Because of the truncation, the events {E_{s_i,v_i} = 1} and {E_{s_j,v_j} = 1} are independent for i ≠ j. Moreover, the probability that E_{s_i,v_i} = 1 is at least α, defined in (4.25). By our choice of R_1 we obtain the required bound. In (case B) we find y ∈ x_0 + {L} × {0, ..., n − 1} such that at least √R/n ≥ R_4 particles arrive at [t_0, t_0 + 1] × {y}. Let G be the event that
• at least R_3 of those particles survive until time t_0 + 1,
• while not leaving the set {y, y + e_1},
• and occupy y at time t_0 + 1.
By our choice of R_4 and part (ii) of Lemma 4.5 we obtain a lower bound on the probability of G. Let now G′ be the event that at time t_0 + 2 every site of y + ne_1 + D_n is occupied by at least S^2 descendants of the particles occupying y at time t_0 + 1. By our choice of R_3 and part (iii) of Lemma 4.5 we obtain a corresponding bound for G′. In (case A′) we note that √R ≥ (4n)^d R_1, and thus we find at least R_1 sites x_1, ..., x_{R_1} in T(e_1, −θ), each occupied by at least one particle, with the property that ‖x_i − x_j‖_∞ ≥ 2n + 1 for all i ≠ j. For x ∈ T(e_1, −θ) we let E_x be the indicator function of the event {(x + D_n, S^2) ≤ Z^{{t+T}×{x}}_{x+D_n}(t + T + 1)}.
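The selection of well-separated sites in (case A) and (case A′) is a standard packing argument: from any finite A ⊆ Z^d one can greedily extract at least |A|/(2r − 1)^d points at pairwise ℓ∞-distance ≥ r, since each chosen point excludes at most (2r − 1)^d candidates. A small self-contained sketch (the concrete numbers are illustrative):

```python
from itertools import product

def well_separated(points, r):
    """Greedily select points at pairwise l-infinity distance >= r.
    Each selected point rules out at most (2r - 1)**d candidates, so the
    selection has size at least len(points) / (2r - 1)**d."""
    chosen = []
    for p in points:
        if all(max(abs(a - b) for a, b in zip(p, q)) >= r for q in chosen):
            chosen.append(p)
    return chosen

pts = list(product(range(12), repeat=2))  # a 12 x 12 block of Z^2
sub = well_separated(pts, 4)              # separation r = 4 (i.e. 4n with n = 1)
print(len(sub), len(pts) / (2 * 4 - 1) ** 2)
```

Here the greedy selection returns the sublattice {0, 4, 8}^2, comfortably above the guaranteed lower bound 144/49.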
Proof of Proposition 4.2 in case 1. Set L′ := 2L + n and T′ := 2T. Recall that in the previous proof we chose θ ∈ Θ and u ∈ {±e_1}, and then bounded the probability of the event that
• we find R particles in the orthant F(u, θ), in (4.30),
• starting from those particles, we again find R particles in the orthant T(u, −θ) of the top of a shifted box, in (4.29).
We now repeatedly apply this result, each time making a convenient choice of θ and u. We start with s^(0), y^(0). Note that this requires between 4 and 10 applications of the proposition, so we have a success probability of at least (1 − ε)^10 ≥ 1 − ε′.

Proof in case 2
Proof of Proposition 4.1 in case 2: Take L ∈ 2N large enough for (case 2) to hold and fix some large t ∈ N. We introduce the two sites in some deterministic way, say by choosing the minimal element in the lexicographical order. This sequence enables us to make infinitely many trials to find a fully occupied box at the required position: for every k, denote by (Z_k(s))_{s≥tk} the process obtained by taking v_k as the new root and considering only its descendants. We define random variables A^i_k := 1{(z_i + D_n, S^2) ≤ Z_k(t(k + 1))} for k ∈ N, i ∈ {0, 1}. So {B̃_i = 1} is (up to shifts) the same event as {A_1 = 1} with z_1 replaced by z_2 and started from (z^(i) + D_n, S^2) at some time t^(i), which we have not specified yet. Note that from our choice of α in (4.32), the same argument as before yields the analogous bound. We now define (t^(i))_{i∈N} recursively. Start from t^(1) := K_1 t, and assume we have found t^(1), ..., t^(i). On {B̃_i = 1} we find a minimal value K_{i+1} such that z^(i+1) + D_n is occupied by at least S^2 particles at time t^(i) + tK_{i+1}. Then we set t^(i+1) := t^(i) + K_{i+1}t. The claim then follows from our choice of ε in (4.22), since the event on the left-hand side has probability at least (1 − ε)^6.