Absorbing-state transition for Stochastic Sandpiles and Activated Random Walks

We study the dynamics of two conservative lattice gas models on the infinite d-dimensional hypercubic lattice: the Activated Random Walks (ARW) and the Stochastic Sandpiles Model (SSM), introduced in the physics literature in the early nineties. Theoretical arguments and numerical analysis predicted that the ARW and SSM undergo a phase transition between an absorbing phase and an active phase as the initial density crosses a critical threshold. However, a rigorous proof of the existence of an absorbing phase was known only for one-dimensional systems. In the present work we establish the existence of such a phase transition in any dimension. Moreover, we obtain several quantitative bounds for how fast the activity ceases at a given site or on a finite system. The multi-scale analysis developed here can be extended to other contexts, providing an efficient tool to study non-equilibrium phase transitions.


Introduction
Models of avalanches in sandpiles introduced in [1] became a paradigm example of a phenomenon called self-organized criticality. In contrast with typical statistical mechanics systems, models of this type are not equipped with a tuning parameter, such as temperature, for which a phase transition is observed. Instead, they are expected to drive themselves towards a critical state. As we mentioned above, the techniques developed in the current paper go beyond the mere existence of a phase transition for FES (fixed-energy sandpile) models. To illustrate this fact, we show two quantitative results concerning the expected time of absorption and the warm-up phase of DDM (driven-dissipative) models.
Let τ be the last time of activity at the origin for either the ARW or the SSM on the infinite lattice. Then, for ζ small enough, we show that

P_ζ[τ > l] ≤ c exp{−c log²(l)}, for every l ≥ 0, (1.2)

see Theorem 2.1. This result motivates the definition of a density ζ_* (see (1.3)), which can be seen as a strengthening of the definition of ζ_c. From (1.2), one sees that ζ_* > 0, leaving open the question of whether it equals ζ_c, in analogy with the case of percolation, see [18].
Another interesting quantitative statement that we establish concerns the DDM-type models away from equilibrium. For this, consider either the ARW or the SSM on a finite box with side length n, in the presence of driving and dissipation at the boundary. Then, if we start from an empty configuration and if ζ is small enough, after the insertion of ζn^d particles, no more than n^{d−1+o(1)} of them get dissipated at the boundary. (1.4)

See Theorem 2.2 for a more precise statement of the above. In Figure 1 we illustrate the above result with a simulation: the box starts empty and particles are successively introduced uniformly over the square, one by one, toppling until the configuration stabilizes. Whenever a particle exits the box it is eliminated (dissipated). Observe that the overwhelming majority of particles get absorbed at the early stages of particle introduction; see Theorem 2.2 for a precise statement supporting this claim.
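The driven-dissipative experiment described above can be mimicked by a short simulation. The sketch below is illustrative only: it uses a small two-dimensional box, a discrete-time rule in which a uniformly chosen active site acts next (the continuous-time model assigns exponential clocks), and arbitrary values for the box size, the number of inserted particles and the sleep rate λ.

```python
import random

def stabilize(awake, asleep, n, lam, rng):
    """Topple until no awake particles remain; return the number of
    particles dissipated at the boundary of the n-by-n box."""
    dissipated = 0
    while True:
        sites = [s for s, c in awake.items() if c > 0]
        if not sites:
            return dissipated
        x, y = sites[rng.randrange(len(sites))]
        lone = awake[(x, y)] == 1 and (x, y) not in asleep
        if lone and rng.random() < lam / (1.0 + lam):
            awake[(x, y)] = 0          # a lone active particle falls asleep
            asleep[(x, y)] = 1
        else:
            awake[(x, y)] -= 1         # an active particle jumps to a neighbour
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:
                # the arriving particle wakes any sleeper it finds
                awake[(nx, ny)] = awake.get((nx, ny), 0) + 1 + asleep.pop((nx, ny), 0)
            else:
                dissipated += 1        # the particle exits the box: dissipation

def driven_arw(n=8, n_particles=20, lam=1.0, seed=0):
    """Insert particles one by one, uniformly over the box, stabilizing
    after each insertion; return (dissipated, remaining) counts."""
    rng = random.Random(seed)
    awake, asleep = {}, {}
    dissipated = 0
    for _ in range(n_particles):
        site = (rng.randrange(n), rng.randrange(n))
        awake[site] = awake.get(site, 0) + 1 + asleep.pop(site, 0)
        dissipated += stabilize(awake, asleep, n, lam, rng)
    return dissipated, sum(asleep.values())
```

Informally, the commutativity of the graphical representation discussed in Subsection 2.1 suggests that the simple site-selection rule used above does not change the law of the final configuration.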
We now briefly comment on the difficulties encountered in proving such results. Various proofs of phase transition in statistical mechanics follow what is known as the "energy versus entropy" approach.

EJP 22 (2017), paper 33.

The essence of this approach can be described along the following lines. In order to show that a certain event does not occur (an event that one does not expect at a given phase), one first characterizes a geometric/combinatorial structure that is necessary for its occurrence. If the number of such structures is overwhelmed by the high energy price to construct them, one can show that such a class of events has vanishing probability. This approach is very successful in perturbative statistical mechanics, appearing in different forms such as the Peierls argument or the Pirogov-Sinai theory. However, the conservation of particles in our system gives rise to long-range dependence in time. This makes it difficult to find a suitable structure behind perpetually continued activity in this model, whose occurrence could be ruled out in this way.
The main techniques that we employ in order to prove our main results are renormalization and sprinkling, that we describe below.
Renormalization and sprinkling Renormalization techniques provide a way to control how certain parameters of the model vary as we change the scales of the system. In our case, the existence of an absorbing phase is intuitive if the initial density of particles is low enough: at time zero, most regions of space have only a few particles, which quickly become inactive. However, fluctuations of the initial density give rise to spatial boxes with an exceptionally high number of particles that will remain active for an extended period of time; alternatively, fluctuations of the dynamics produce boxes where the initial density is low, but particles are unusually long-term active. It is expected, though, that such exceptional boxes should not be able to create much damage if we look at them at a larger scale. Thus, the important parameter to control in our context is the probability p_k that a given box at scale k leaks particles far away in space, see (4.8). Our multi-scale analysis elucidates the mechanism behind the decay of the probabilities p_k as the scale increases. Turning this description into a rigorous proof involves controlling the long-range dependencies of the model, as well as the complications that appear once one conditions on the occurrence of rare events. We deal with these issues using the Sieving Lemma 4.5, which allows us to start from an "almost worst case" configuration and recover some independence.
The main advantage of this technique is that it relies very little on the details of the dynamics, making us hopeful that it could be adapted to work on other problems. Moreover, the recursive inequalities that we devise can provide interesting quantitative results, such as Theorems 2.1 and 2.2. The renormalization picture of why absorption happens is very instructive and fits our intuition that no matter how big a defect is, it will only affect a neighborhood of comparable size.
Another important ingredient that appears in the proof is sprinkling. It consists of slightly increasing the density of particles in the system when needed, in order to blur the dependence that may appear through conditioning. This idea already appeared in [19] and [21]. These techniques become remarkably powerful when combined with the concept of soft local times introduced in [23], see the key Lemma 4.5.
We believe that the methods developed here will help deepen our understanding of absorption transitions and will open various possibilities for future works towards understanding the relation between FES and DDM models from a rigorous perspective. Moreover, following the general theory of the Renormalization Group, one could expect that a similar multi-scale analysis could work not only in perturbative regimes, but also lead to results close to criticality, although this would require several new ingredients.

Remark 1.1.
Nearly one year after the submission of this work, the very interesting article [32] was written on the subject. There, Stauffer and Taggi show that the critical density ζ_c is positive for the ARW model. On the one hand, their proofs are much shorter and apply to any amenable transitive graph (instead of only Z^d, as we present here). However, their techniques do not yield quantitative results like (1.2), and they do not seem to work directly for the stochastic sandpiles model.
Organization This paper is organized as follows. In Section 2 we establish some notation and define precisely the model of ARW. Section 3 is devoted to showing a triggering statement, which is later bootstrapped into our main quantitative estimates, see Theorem 4.6 in Section 4. Our main results are then proved in Section 5. Sections 6 and 7 contain some auxiliary results that are needed throughout the paper. Finally, we show in Section 8 how to extend our results to other FES models, such as the SSM.

Notation
We write d(x, y) for the ℓ^∞-distance (also called supremum distance) in Z^d. Given x ∈ Z^d, we let P_x denote the law on D(R_+, Z^d) of a continuous-time simple random walk starting from x. On the space D(R_+, Z^d), we denote by X_t the canonical projections, that is, for w ∈ D(R_+, Z^d), X_t(w) = w(t).
Throughout this article we will consider a particle system on Z^d that is referred to as activated random walks, or the sleepy random walkers, in the literature. To make its definition precise, we will describe its generator in detail.
Each site in our system can assume a state in the space S = {0, 1, . . .} ∪ {s}, where s indicates the presence of a sleeping particle there, while {0, 1, . . .} stores the number of active particles at that site. Observe from the above that we do not allow more than one sleeping particle at any site.
We denote a configuration of our system by η : Z^d → S, and given such η, we define two modifications of this configuration, representing the allowed transformations of our dynamics. The first of them is denoted by η^{(y)} (for y ∈ Z^d) and indicates that the particle at site y attempted to "sleep". Note below that sleeping is only allowed if η(y) = 1.
η^{(y)}(x) = η(x), if x ≠ y or η(y) ≠ 1, and η^{(y)}(x) = s, if x = y and η(y) = 1. (2.1)

The second transformation in our dynamics involves a jump attempt from site y ∈ Z^d to one of its neighbors z. Observe below that such a jump is only allowed if there is at least one active (that is, not sleeping) particle at y. Moreover, once a jump is performed, it will wake up a sleepy particle that may be present at z. More precisely, if y and z are neighbors, the configuration η^{y→z} is obtained by removing one active particle from y and adding one particle at z, with the convention s + 1 = 2, so that a sleeping particle at z is woken up by the arrival (see (2.2)). The transformations above allow us to define the generator of our process as follows.
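As a concrete illustration, the two transformations can be encoded on configurations represented by a dictionary mapping sites to elements of S; the convention s + 1 = 2 for a jump onto a sleeping particle is our reading of the waking rule described above, and the one-dimensional sites are only for brevity.

```python
SLEEP = "s"  # the state s: one sleeping particle

def sleep_transform(eta, y):
    """The configuration eta^(y): the particle at y falls asleep,
    which is allowed only when eta(y) == 1."""
    out = dict(eta)
    if out.get(y, 0) == 1:
        out[y] = SLEEP
    return out

def jump_transform(eta, y, z):
    """The configuration eta^(y -> z): one active particle jumps from y
    to the neighbouring site z, waking a sleeper found there (s + 1 = 2)."""
    out = dict(eta)
    n_y = out.get(y, 0)
    if n_y == SLEEP or n_y == 0:
        return out               # no active particle at y: jump not allowed
    out[y] = n_y - 1
    n_z = out.get(z, 0)
    out[z] = 2 if n_z == SLEEP else n_z + 1
    return out
```

For instance, a jump onto a sleeping particle produces two active particles at the target site, while a sleep attempt at a site holding two particles leaves the configuration unchanged.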
Fixing some sleep rate λ > 0, for f : S^{Z^d} → R depending on finitely many coordinates, we let

Lf(η) = Σ_y λ [f(η^{(y)}) − f(η)] + Σ_y Σ_{z∼y} (η(y)/2d) [f(η^{y→z}) − f(η)]. (2.3)

Observe that the factor η(y) multiplying the last parenthesis above indicates that the rate of jump from y to z is proportional to the number of active particles at y. Observe also that s is only multiplied with zero, and we tacitly assume that this product vanishes.
Our particle system is defined on the space D(R_+, S^{Z^d}), endowed with the canonical projections (η_t)_{t≥0} representing the configuration of particles at time t. Given an initial collection of particles (x_j)_{j∈J}, we can define η_0(y) = |{j; x_j = y}| to be the number of particles at y at time zero. Supposing that η_0(y) does not grow exponentially as y → ∞, we define P_{(x_j)_{j∈J}} to be the law on the space D(R_+, S^{Z^d}), starting from η_0 and under which η_t evolves according to the generator L in (2.3). (2.4)
To see why such a process exists, one could use the theory developed in [14], or alternatively use the graphical construction given by the Diaconis-Fulton representation of the process, which we introduce in Subsection 2.1. Implicitly, this means that every particle at time zero is active (since the value s does not appear in η_0). Given a function σ : Z^d → R_+, we can define a Poisson point process with intensity measure σ. Most of the time, the intensity σ will be taken to be a constant ζ > 0, or ζ1_A, where 1_A stands for the indicator function of A ⊆ Z^d.
Then, we can define η_0(x) as the number of points at x in the above Poisson process and define P_σ to be the law on the space D(R_+, S^{Z^d}) governing the evolution of η_t under L, starting from a Poisson point process with intensity σ. (2.5) This is well defined whenever σ is bounded (which will always be the case throughout this article). We also write, with a slight abuse of notation, P_ζ (where ζ > 0) to indicate that the process starts from a Poisson point process with uniform density ζ.
Throughout this paper, we will be interested in the existence of an absorbing phase for our particle system at low densities. The relevant event will be

{the system fixates} = ∩_{x∈Z^d} {η_t(x) changes only finitely many times}. (2.6)

In the literature, this is sometimes referred to as local fixation.
With this notation in place, we can state our main results.

Theorem 2.1. There exists c_0 > 0 such that, for every ζ < c_0,

P_ζ[the system fixates locally] = 1,

which is a consequence of

P_ζ[number of times that η_t(0) changes is larger than l] ≤ c exp{−c log²(l)}. (2.8)

For the model on a finite box with driving/dissipation, we have

Theorem 2.2. Fix any ζ < c_0 and ε > 0. Then there exists a constant c_1 = c_1(ε) > 0 such that, on a box B of side length n,

P_{ζ1_B}[more than n^{d−1+ε} particles exit B] ≤ c exp{−c_1 log²(n)}.
Throughout the text we let c denote positive constants, which may depend on d and λ and may change from time to time. Further dependence will be denoted explicitly. So, for instance c(α) depends on α and possibly on d and λ. Moreover, numbered constants such as c 1 , c 2 , . . . refer to their first appearance in the text.

Diaconis-Fulton representation
This section is dedicated to an important graphical representation of the stochastic process described above. Such a construction was developed in various articles, such as [6], [8], [13] and [24]. However, the most complete presentation of this graphical construction for our current purposes is given in Rolla's PhD thesis [27]. The two main advantages of this representation are that it provides notions of monotonicity and commutativity for this system, which are not apparent at first.
We give here a somewhat informal description of the construction, while referring the reader to the more complete presentation in Section 1.3 of [27]. Intuitively speaking, in this graphical construction, instructions telling what the particles should do (attempt to sleep, jump in a given direction, ...) are stored at each site. So let us fix a collection (F_{x,j})_{x∈Z^d, j≥1} of elements in the set of instructions Γ = {s, e_1, −e_1, . . . , e_d, −e_d}. Later we will assign an i.i.d. distribution to the instructions F_{x,j}, but for now they could be arbitrary.
We start by describing how the construction goes for a finite initial number n of particles at positions x_1, . . . , x_n, having states τ_1, . . . , τ_n ∈ {s, a}, where s indicates that the particle is sleeping and a stands for active. Whenever a given particle's clock rings, if it is active, it reads the envelope at its current site, executes the corresponding instruction and "burns that envelope". The notion of "burning" is implemented through the local time (or total activity) of particles J ∈ {0, 1, . . .}^{Z^d}, which typically starts with value zero everywhere. The value J_x tells us how many envelopes have been read at site x.
To finish our graphical construction, we need a process T giving the time of updates T = {0 = t 0 < t 1 < . . . } (later we will take t 1 , t 2 , . . . to be a Poisson point process) and a sequence of particle indices N = (n 1 , n 2 , . . . ) ∈ {1, . . . , n} Z+ . The index n i tells which particle is supposed to act at time t i (these will later be chosen as i.i.d. and uniformly distributed indices).
We now proceed iteratively as follows: 1) at time t_i, the particle n_i reads the envelope F_{x_{n_i}, J_{x_{n_i}}+1}; 2) then, the total activity J is increased by one at position x_{n_i}; and 3) the position x_{n_i} and the state τ_{n_i} of that particle are updated accordingly.
It is not hard to see that the above construction results in the same distribution of the particles' evolution (starting at positions x_1, . . . , x_n with states τ_1, . . . , τ_n) as long as: 1) the F_{x,j} are i.i.d. with the proper distribution to reflect the sleep rate; 2) T is a Poisson point process with rate n(1 + λ); and 3) N is i.i.d. and uniformly distributed over {1, . . . , n}.
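The iterative procedure above fits in a few lines of code. The sketch below is a simplified, one-dimensional reading of the construction (sites are integers, and since the update times t_i play no role for the final state, we only keep the order in which particles act):

```python
def diaconis_fulton(positions, states, F, order):
    """Run the construction with fixed envelopes F[x] (one list per site x)
    and a fixed sequence `order` of particle indices; return the final
    positions, states and the odometer J (envelopes burned per site)."""
    pos, st, J = list(positions), list(states), {}
    for i in order:
        if st[i] != "a":
            continue                     # only active particles act
        x = pos[i]
        instr = F[x][J.get(x, 0)]        # read the next unburned envelope
        J[x] = J.get(x, 0) + 1           # ... and burn it
        if instr == "s":
            if sum(1 for p in pos if p == x) == 1:
                st[i] = "s"              # sleeping requires a lone particle
        else:
            pos[i] = x + instr           # jump instruction: a signed step
            for j, p in enumerate(pos):  # wake any sleeper at the new site
                if j != i and p == pos[i] and st[j] == "s":
                    st[j] = "a"
    return pos, st, J
```

Running two particles started at the same site in two different orders burns exactly the same envelopes, in line with the commutativity property of this representation.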
Although the above construction is completely equivalent to the one provided by the generator, it possesses two crucial features which are stated in the theorems below.
We say that a given realization of the graphical construction ω = ((x, τ), F, T, N) stabilizes if, by following the above described procedure, we eventually obtain an inactive configuration (that is, τ_i = s for every i). This clearly occurs almost surely if n is finite.

Theorem 2.3 (Commutativity, [27]). Let ω = ((x, τ), F, T, N) be a realization of the graphical construction that stabilizes. Suppose that ω′ = ((x′, τ′), F′, T′, N′) has the same envelopes and initial state as ω, that is, (x, τ) = (x′, τ′) and F = F′. Then ω′ also stabilizes and, at the final configuration, the total activities coincide, that is, J_x = J′_x for every x ∈ Z^d.

Another very important feature of this construction is related to monotonicity. Intuitively speaking, if we add more particles or remove s envelopes, we sustain activity for a longer time. Given two starting configurations ω and ω′, with particles at positions (x_i) and (x′_i) respectively, we say that ω′ ≼ ω if: 1) Σ_i δ_{x′_i} ≤ Σ_i δ_{x_i}, that is, ω′ has no more particles on each site than ω; 2) the analogous inequality holds when restricted to active particles, that is, ω′ has no more active particles on each site than ω; and 3) the envelopes of F can be obtained by removing some s envelopes from F′.
We can now state the following. Theorem 2.4 (Monotonicity, Theorem 1.3 in [27]). Let ω = ((x, τ), F, T, N) be a realization of the graphical construction that stabilizes. Suppose that ω′ ≼ ω. Then ω′ also stabilizes and the final activity of ω′ (number of burned envelopes) is smaller than or equal to that of ω, that is, J′_x ≤ J_x for every x ∈ Z^d.
These theorems will be used several times throughout the text and their proofs can be found in [27], Section 1.3. An important consequence of the above stated theorems is the following.

Corollary 2.5. Fix a finite collection of particles (x_i)_{i∈I} and a stopping time T_i for each of them. Then, for any event A which is increasing in the final accumulated activity (J_x)_{x∈Z^d},

P_{(x_i)_{i∈I}}[A] ≤ E[P_{(X^i_{T_i})_{i∈I}}[A]],

where the expectation above is taken with respect to an independent collection (X^i)_{i∈I} of simple random walks started at the x_i's and run until the times T_i.
Intuitively speaking, the above corollary says that we can turn off the "sleeping envelopes", wait until each particle reaches its stopping time and then turn back to the original dynamics. This procedure can only increase the total activity of the system.
It is also important to note that restricting ourselves to finite starting configurations as above is not a big restriction, due to the following. Suppose that we are interested in the variable R giving the number of times that η_t(0) changed during the whole evolution of the system. Then R = lim_{s→∞} R_s, where R_s is the number of updates of η_t(0) for t ≤ s.

Vanishing density and triggering
In this section, we will prove two statements concerning the absorption of particles in a finite box. These statements are important steps towards our final result, and their proofs illustrate well some of the techniques used throughout the paper.
These two results can be viewed as a weaker form of our final Theorem 4.6, as they deal with a vanishing density of particles in the box (as the size of the box grows). But they are necessary in order to trigger our renormalization scheme later; see Lemma 3.1.

Proof. Recall that d(x, y) stands for the supremum norm on Z^d. We now define a sequence of disjoint annuli by

A_j = {x ∈ Z^d ; d(x, B_L) ∈ (jL^{δ/2}, (j + 1)L^{δ/2})}, for j = 0, . . . , k. (3.2)

Note that for L > c all the sets A_j are disjoint and contained inside B_{L′} \ B_L, see Figure 2. We chose the number of annuli to equal the number of particles, so that we can associate them and define, for each particle, the stopping time at which it first reaches the middle of its corresponding annulus (see (3.3)). In this way, each particle X^j is stopped in the middle of its corresponding annulus A_j.
Finally, using Corollary 2.5, we finish the proof of the lemma by taking c(δ) large enough.
In the above lemma, we strongly relied on the fact that the number of particles is much smaller than the side length of the cube in order to show that they get absorbed before exiting. In what follows, we will prove a theorem that deals with a much larger number of particles (now of smaller order than the volume of the cube), which moreover follows a Poisson distribution.
If P_L denotes the law of the activated random walks process starting from a Poisson point process of particles in B_L with density L^{−ε}, then

P_L[some particle reaches ∂_i B_{L′}] ≤ c(ε) exp{−cL^{ε/4d}}. (3.5)

In the proof of the above theorem, we will consider a fine paving of the lattice Z^d by disjoint boxes, allowing us to use the previous Lemma 3.1 for each of them. Let us use this opportunity to introduce some definitions which will be useful through the rest of the paper. We say that a collection of boxes {C_i}_{i∈I} is a disjoint paving of Z^d with side length ℓ if there is some x ∈ Z^d such that the C_i are the translates of x + [0, ℓ)^d along ℓZ^d (see (3.6)). Before going to the proof of Theorem 3.2, we will analyze what we call the hopping process of a simple random walk and its hitting time of the union of half-kernels ∪_{i∈I} C′_i.
The hopping process will be nothing more than a discrete time Markov chain, obtained when one only looks at the random walk position at some multiples of a fixed constant. Although one could also prove Theorem 3.2 without using the hopping process, we decided to employ it here, since it will be again important in Section 7.
The next lemma deals with a single random walker and will be used to prove Theorem 3.2 and in other parts of the article.

Lemma 3.3.
Fix the paving {C_i}_{i∈I} and the half-kernels {C′_i}_{i∈I} as above. Now fix the arithmetic sequence t_l = lr, for l ≥ 0, where r ≥ ℓ² is given. Then the stopping time

S = inf{t_l ; l ≥ 0 and X_{t_l} ∈ ∪_{i∈I} C′_i} (3.7)

is P_x-almost surely finite, and its tail satisfies the bound in (3.8).

Proof. We denote by π the canonical projection from Z^d to the torus (Z/ℓZ)^d. Using the invariance principle on the torus, since r ≥ ℓ², we see that the probability that π(X_{t_{l+1}}) belongs to a half-kernel, given the walk up to time t_l, is bounded from below uniformly. In view of the above, it becomes clear that S is almost surely finite. Moreover, by the Markov property,

P_x[S > t_l] ≤ c exp{−cl}. (3.10)

We now need to bound the maximal distance traveled by the walker during this time. For this we use Azuma's inequality for the maximum of a martingale (see for instance [16], (41), p. 28) and, comparing the continuous-time random walk with the discrete one, we can bound the probability that the walk moves further than a√(rl) before time t_l (see (3.11)). We now join the above with (3.10), taking a = D/√r, to finish the proof of the lemma.
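A hedged sketch of the hopping process for a single walker: we use a discrete-time walk on Z² observed along the arithmetic sequence t_l = l·r with r = ℓ², and take as half-kernels the lower-left quarters of the paving boxes (the exact geometry of the half-kernels is an assumption made here for concreteness; only the torus-projection picture matters).

```python
import random

def hopping_stop(start, ell, rng, max_hops=10**4):
    """Observe a discrete-time simple random walk on Z^2 only at times
    0, r, 2r, ... with r = ell**2, and stop at the first observation that
    falls in a half-kernel (ell*Z + [0, ell//2)) x (ell*Z + [0, ell//2))."""
    r = ell * ell
    x, y = start
    for hop in range(max_hops):
        if x % ell < ell // 2 and y % ell < ell // 2:
            return hop, (x, y)          # S = hop * r, stopping position
        for _ in range(r):              # advance r elementary steps
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
    raise RuntimeError("did not stop within max_hops (extremely unlikely)")
```

The projection of the walk to the torus (Z/ℓZ)² mixes in time of order ℓ², which is why each observation lands in a half-kernel with probability bounded from below; the number of hops is then roughly geometric, matching the bound (3.10).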
We are now in position to provide the proof of Theorem 3.2. We define, for r = ℓ², the stopping time S_j as in (3.7) corresponding to the j-th particle.
We will use Corollary 2.5 in order to let the particles walk without sleeping (and consequently without interacting) until the stopping times S_j. Note that each particle stops at its own time. At the end of this procedure, we hope that the particles will be in a configuration that allows us to use Lemma 3.1. The first issue we have to rule out is that some particle X^j escapes B_{L′} before it stops (at S_j). This can be bounded using Lemma 3.3, see (3.12). The first term there is bounded by a simple large deviations estimate, and all of the above can be accommodated in the error term of (3.5).
We now claim that ω = Σ_{j∈J} δ_{X^j_{S_j}} is a Poisson point process with an intensity measure ν that: 1) has support on ∪_{i∈I} C′_i, and 2) satisfies

ν(C′_i) ≤ ℓ^d L^{−ε}, for every i ∈ I. (3.13)

To see why ω is a Poisson point process, one simply observes that ω is a random mapping of another Poisson process, where each particle is mapped independently of the others. To estimate ν(C′_i), we first note that we can stochastically dominate ω by the process ω′, obtained in the same way as ω, but starting with a Poisson point process of particles on the entire lattice (with intensity L^{−ε}). Thus, by ergodicity of ω′, we can infer that its intensity measure satisfies ν′(C′_i) = ℓ^d L^{−ε}. This, together with the domination of ω by ω′, establishes (3.13).
We now show that, with high probability, no half-kernel C′_i ends up with more than L^{ε/2d} particles. For this, we use (3.13) and a tail estimate on a Poisson(1) random variable to obtain the bound (3.14) for any given i ∈ I, where E stands for the expectation governing (x_j)_{j∈J}. Finally, fix any possible configuration (y_j)_{j∈J} for the (X^j_{S_j})_{j∈J} such that #{j ∈ J; y_j ∈ C′_i} ≤ L^{ε/2d} for every i ∈ I. Then we use Lemma 3.1 to obtain that

P_{(y_j)_{j∈J}}[some particle reaches ∂_i B_{L′}] ≤ c(ε) exp{−cL^{ε/4d}}. (3.15)

This, combined with (3.12) and (3.14), establishes the desired result.

Renormalization
In this section we present the core argument for our main Theorem 4.6. For this we will assume the validity of an auxiliary result, the so-called Sieving Lemma, which will be later proved in Section 7.
To establish our main result we follow a multi-scale renormalization argument, which employs the following scale sequence. Fix L_0 = 10000 and define the subsequent scales L_{k+1} through (4.1). We should think of this scale sequence as growing roughly like L_{k+1} ∼ L_k^{1.2}; this can be made precise using (4.1). We also introduce the set M_k, which serves to index the boxes at scale k, as well as the intermediate scale R_k, which lies between L_k and L_{k+1}. Note that, for k ≥ c_5, we have 5R_k ≤ L_k, so that we can define the (non-empty) nested boxes C^0_m ⊇ C^1_m ⊇ C^2_m indexed by m ∈ M_k, see Figure 3. The boxes C^1_m and C^2_m are kernels (in a sense similar to that introduced in Theorem 3.2), and the annuli C^0_m \ C^1_m and C^1_m \ C^2_m can be thought of as buffer zones that start without particles. We can now define the main quantity we want to bound, the probability p_k(ζ) in (4.8) that a box at scale k leaks particles far away in space; recall the definition of P_σ in (2.5).
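The precise recursion defining L_{k+1} in (4.1) is not reproduced here; purely as an illustration, the assumed rule L_{k+1} = ⌊L_k^{1/5}⌋ · L_k produces the stated growth L_{k+1} ∼ L_k^{1.2}:

```python
def scales(L0=10000, n=6):
    """Illustrative scale sequence (an assumption standing in for (4.1)):
    L_{k+1} = floor(L_k ** (1/5)) * L_k, so that L_{k+1} ~ L_k ** 1.2."""
    L = [L0]
    for _ in range(n):
        L.append(int(L[-1] ** 0.2) * L[-1])
    return L
```

Starting from L_0 = 10000, each step multiplies the scale by its own fifth root, so the exponent of growth stays close to 1.2 at every scale.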
Using the above definition, we can obtain the following consequence of Theorem 3.2, see (4.9). The above corollary will be used to trigger the recursion relation on the p_k's in Theorem 4.6 below.
Proof. Note first that the relevant quantity grows much faster than L_k^{γ/4}. Therefore, we choose ε = γ/4 in Theorem 3.2, finishing the proof of the corollary.
The next result improves on the one above, since it works for non-vanishing densities. Throughout our proof, we will employ an argument called "sprinkling", in which a slight change in the density of the system compensates for the interdependence between boxes. For this reason, given ζ_0 > 0, we introduce a decreasing sequence of densities (ζ_k)_{k≥0}, built from ζ_0 through increments proportional to 1/j². The exact choice of the ζ_k's is not crucial for our arguments to work. What we need from this sequence is that inf_k ζ_k ≥ ζ_0/2 and that the increments ζ_{k−1} − ζ_k are not too small, so that we can apply a large deviations estimate in the proof of the Sieving Lemma 4.5 in Section 7.
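Since the text stresses that the exact choice of the ζ_k's is not crucial, here is one concrete, illustrative choice (our assumption, not necessarily the paper's formula): ζ_k = ζ_0(1/2 + (3/π²) Σ_{j>k} 1/j²), which decreases from ζ_0 to ζ_0/2 with increments ζ_{k−1} − ζ_k = (3/π²) ζ_0 / k².

```python
import math

def zeta(k, zeta0, tail_terms=100000):
    """Illustrative density sequence: zeta0 * (1/2 + (3/pi^2) * sum_{j>k} 1/j^2).
    Decreasing in k, with zeta(0) ~= zeta0 and inf_k zeta(k) = zeta0 / 2,
    approximating the infinite tail sum by `tail_terms` terms."""
    tail = sum(1.0 / (j * j) for j in range(k + 1, k + 1 + tail_terms))
    return zeta0 * (0.5 + (3.0 / math.pi ** 2) * tail)
```

Since Σ_{j≥1} 1/j² = π²/6, this choice satisfies ζ_0-normalization and the lower bound ζ_k > ζ_0/2 required above.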
The following result is the key ingredient in the proof of our main theorem. It relates the probability p_{k+1} with the probability p_k at the previous scale.

Theorem 4.2.
There exist constants c_6, c_7, c_8 such that, if k ≥ c_6 and ζ_0 satisfies (4.16), then p_{k+1}(ζ_{k+1}) can be bounded in terms of p_k(ζ_k). The main steps in the proof of the above theorem are schematically summarized in Figure 4. Roughly speaking, each of the boxes in that picture represents a starting configuration under which we are evaluating the probability that some particle escapes. In Figure 4, regions are either gray, hatched, or filled with small squares. These patterns have different meanings, which we now explain. Gray regions represent a configuration starting from a Poisson point process (as in the definition of p_k(ζ)). The hatched areas stand for balanced configurations. Finally, the areas filled with small gray squares represent the so-called sieved configurations, where all particles are positioned in squares of type C^2_m, with m in scale k − 1. Having the particles in such a situation will allow us to bound the probability that some of them escape in terms of p_{k−1}, which is the main part of our induction argument. We now define precisely what these configurations are.
Definition 4.4. Given a density ζ, an index q ∈ {0, 1} and m ∈ M_k (k ≥ c_5), we define the q-sieved law in C_m with density ζ as the law of a Poisson point process with intensity μ^q_m(ζ) given in (4.14). The support of the above measure is illustrated in Figure 4 (c) and (e), for q = 0 and 1 respectively (gray area). Another important aspect of our proof is that, in every step depicted in Figure 4, we need to perform a sprinkling of our density ζ_k. This motivates us to introduce intermediate densities ζ^r_k between ζ_{k+1} and ζ_k. We are now ready to state our main auxiliary result in the proof of Theorem 4.2, the so-called Sieving Lemma. Roughly speaking, in this lemma we start from a balanced configuration of particles and let them run until a certain stopping time, ending up with a configuration which is dominated by a sieved law as in Definition 4.4. For this to work, we have to allow for some sprinkling in the density of particles (r + 1 → r) and for an enlargement of the box (C^{q+1}_m → C^q_m). This is all made precise below.
For the following, fix either (r, q) = (0, 0) or (r, q) = (2, 1). The Sieving Lemma then states that we can find a coupling Q between ⊗_{j∈J} P_{x_j} and the sieved law with intensity μ^q_m(ζ^r_k), in such a way that, with high probability, the particles stopped at the times T_j are dominated by the sieved configuration; here T_j is a stopping time for the j-th particle (defined in (7.2)).
Proof of Theorem 4.2. In order to use Lemma 4.5, recall that we assumed (4.16) in Theorem 4.2. We start by estimating p_{k+1}(ζ_{k+1}) (recall (4.8)) using Theorem 2.4. For this, fix any given m ∈ M_{k+1} and split p_{k+1}(ζ_{k+1}) according to whether or not the starting configuration is balanced. The resulting terms correspond to the arrows leaving the box (a) in Figure 4. We start by estimating the first term: by a crude large deviations estimate, it is exponentially small. This bound corresponds to the arrow going down from (a) in Figure 4.
We now turn to the second term. Using the Diaconis-Fulton representation, together with (4.16) and Lemma 4.5, we obtain two further terms, corresponding to the two arrows leaving (b) in the figure.
We now turn to the bound on the second term above. Every particle in the starting configuration lies inside one of the kernels, and we consider for each of them the stopping time defined in (7.2); in the last inequality of the resulting estimate, we use the fact that no particle can exit its corresponding box before this stopping time. Observe now that the same calculation as in (a) can be used to bound the last probability in (4.19), so that the above can be bounded accordingly; similarly to (4.19), we can bound the term P_{μ^0_{(k,0)}(ζ^0_k)}[·] as in (4.20), finishing the proof of Theorem 4.2.
We now can use the above recursion to prove the decay of p_k in the following.

Theorem 4.6. There exist c_10, c_11 > 0 such that, for ζ < c_10,

p_k(ζ_k) ≤ c exp{−c_11 log²(L_k)}, for every k ≥ 0.

Proof. We will use Theorem 4.2 to prove the desired result by induction. For that, let us first pick c_12 so that the hypotheses of Theorem 4.2 are satisfied for any k ≥ c_12, and proceed by induction in k. We use (4.23) and the definition of ζ_0 to establish (4.25) for k = k̄. Now, supposing that we have established (4.25) for some k ≥ k̄, we recall (4.24) and Theorem 4.2 to obtain (4.25) for k + 1, completing the induction. In the next section, we will employ the above results to show the existence of an absorption phase for the infinite system.

Remark 4.7. The condition (4.16) is essential in proving the Sieving Lemma 4.5 (see Section 7). This is justifiable, since a small value of ζ_0 would make the sprinkling parameter smaller as well, consequently breaking the good decay that appears in Lemma 6.5. This restriction is the main reason for the existence of Section 3, where we deal with particle densities that vanish, but which are still large enough to satisfy (4.16).

The absorption phase
We start this section with an application of Theorem 4.6, which estimates the maximal displacement of particles when we start from sets other than a box. This will later be used in the proof of Theorem 2.1.

Proof. Given s as in the statement of the theorem, we choose k = k(s) so that L_k is comparable to s, recall (4.1).
Note that, with this choice, ζ ≤ ζ_k ≤ 2ζ for every k ≥ 1. Consider a paving {C^1_i}_{i∈I} of Z^d by boxes of side length L_k − 2R_k (which coincides with that of the C^1_m in (4.7)), see Figure 5. Note that this paving is not made of C^0_m boxes as in the previous section. In particular, the C^0_m boxes corresponding to the C^1_m above overlap.
This paving can be subdivided into the finer (scale k − 1) paving {C^0_m}_{m∈M_{k−1}}, as indicated (for one box) in Figure 5. Our first requirement is for the initial Poisson point process of particles to be balanced with respect to this finer paving. More precisely, we observe that A can touch at most |A| boxes in {C^0_m}_{m∈M_{k−1}}, and arguing as in (a) we obtain the estimate (5.7), where the term balanced above refers of course to the paving {C^0_m}_{m∈M_{k−1}}. Thus, we get by the Diaconis-Fulton representation that P_{ζ1_A}[some particle exits B(A, s)] is controlled by the corresponding probability for balanced starting configurations. We are now prepared to employ the Sieving Lemma 4.5 for the balanced collection above. We restrict our attention to one box C^1_i intersecting A. If the points (x_j) are 2ζ-balanced (with respect to {C^0_m}_{m∈M_{k−1}}) as in (5.7), we can split them into two collections that are ζ^1_k-balanced, so that we can use the Sieving Lemma 4.5 and (5.3) twice, to obtain that

sup_{(x_j)_{j∈J} ⊆ C^1_i, 2ζ-balanced} P_{(x_j)_{j∈J}}[some particle exits B(A, s)] ≤ P_{μ_i(4ζ)}[some particle exits B(A, s)] + c exp{−cL^{γ/3}_{k−1}}, (5.9)

where μ_i(4ζ) stands for the corresponding sieved measure with density 4ζ. Taking the union over all possible choices of C^1_i intersecting A and using (5.8), we obtain

sup_{(x_j)_{j∈J} ⊆ A, 2ζ-balanced} P_{(x_j)_{j∈J}}[some particle exits B(A, s)] ≤ P_{μ(4·3^d ζ)}[some particle exits B(A, s)] + c|A| exp{−cL^{γ/3}_{k−1}}. (5.10)

To finish the proof, we use the following observation: start the particles with law given by the intensity measure μ(4·3^d ζ); if every particle that starts at a given C^1_i does not escape far from it (in the sense controlled by p_k), then no particle exits B(A, s). We can therefore join (5.7), (5.10) and Theorem 4.6 to bound P_{ζ1_A}[some particle exits B(A, s)]. This finishes the proof of Theorem 5.1 (choosing the constants so that 4·3^d ζ ≤ c_10). We may need to adjust the constants in order to take care of the case s < c.
We are now in position to prove the existence of an absorbing phase for our model. But before that, let us quickly present the

Proof of Theorem 2.2. Given a box B of side length n, let B′ ⊂ B denote a slightly smaller concentric box; without loss of generality we may assume n to be large. We now use the Diaconis-Fulton representation to estimate the probability that, starting from P_{ζ 1_B}, more than n^{d−1+ε} particles leave B. For this, we first deactivate the sleep envelopes and let only the particles that started in B \ B′ run (the particles that started in B′ stay put) until they all leave B. Then we activate the original dynamics, concluding that

P_{ζ 1_B}[more than n^{d−1+ε} particles leave B] ≤ P_{ζ 1_B}[more than n^{d−1+ε} particles start in B \ B′] + P_{ζ 1_{B′}}[some particle exits B].
For n large enough, the volume of B \ B′ is at most (1/2)n^{d−1+ε}. Therefore, the first term above is bounded by c exp{−c n^{d−1+ε}}; recall that ζ ≤ 1.
The bound on the second term is a simple application of Theorem 5.1. This finishes the proof of Theorem 2.2.
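To fix ideas about the quantity "number of particles that leave B", here is a toy, discrete-time caricature of ARW stabilization in a finite interval with killing at the boundary. This is not the construction used in the paper; `sleep_prob` is an illustrative stand-in for the sleep rate, and the interval plays the role of B:

```python
import random

def arw_stabilize_interval(n, starts, sleep_prob, rng):
    """Toy ARW stabilization in {0, ..., n-1}: active particles perform
    nearest-neighbour jumps, an isolated active particle may fall asleep,
    a particle stepping outside the interval is killed.
    Returns (number of particles that left, set of sites with sleepers)."""
    active = list(starts)
    sleeping = set()
    left = 0
    while active:
        k = rng.randrange(len(active))      # pick an active particle at random
        x = active.pop(k)
        if x not in sleeping and x not in active and rng.random() < sleep_prob:
            sleeping.add(x)                 # isolated particle falls asleep
            continue
        y = x + rng.choice((-1, 1))         # nearest-neighbour jump
        if not 0 <= y < n:
            left += 1                       # killed outside the interval
            continue
        if y in sleeping:
            sleeping.remove(y)              # arriving particle wakes the sleeper
            active.append(y)
        active.append(y)
    return left, sleeping
```

Conservation holds by construction: every starting particle either leaves the interval or ends up asleep inside it, mirroring the dichotomy exploited in the proof above.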
We now turn to the proof of our main result. The proof has two steps. First we "turn off sleeping" and let the particles walk until a stopping time (depending on the particle), in order to accommodate them into a nested collection of annuli surrounding the origin, see the right-hand side of Figure 6. Then we "turn sleeping back on" and let the particles run, hoping that different annuli will not interact. For this we use Theorem 5.1.
Having fixed such a value i_0 ≥ 4, we partition the space Z^d into disjoint annuli together with the ball B_{i_0} = B(0, 2^{i_0}).
We start by decomposing the desired probability as follows:

P_ζ[#{times η(0) changes} > l] ≤ P_ζ[some particle visits more than three B_i's] + P_ζ[#{visits to the origin when particles are killed on B_{i_0+3}} > l].   (5.14)

Let us start by bounding the second term on the right-hand side. We do this by first bounding the probability that more than 2ζ · vol(B(0, 2^{i_0+2})) particles start in B_{i_0} ∪ B_{i_0+1} ∪ B_{i_0+2}.
We add to this the probability that some of them takes longer than 2^{3(i_0+2)} to hit B_{i_0+3}. Observe that any particle starting in B_{i_0} ∪ B_{i_0+1} ∪ B_{i_0+2} has a positive probability of hitting B_{i_0+3} within 2^{2(i_0+2)} steps. Moreover, this probability is bounded away from zero, uniformly over the starting point and independently of i_0. Therefore,

P_ζ[#{visits to the origin when particles are killed on B_{i_0+3}} > l] ≤ c exp{−c l^{10^{−d}}}.   (5.15)

In order to bound the first term on the right-hand side of (5.14) we need some further notation. Given i ≥ i_0 + 1 and any point x ∈ B_i, we associate to x two spheres in Z^d, see Figure 6 (left). Finally, if x ∈ B_{i_0}, then we set

S_in(x) = ∅ and S_out(x) = {y; |y| = |x|/2 + 5 · 2^{i_0−2}}.   (5.17)

The exact form of the above definition is not important; the only properties of these sets that we need are (5.21), which are easy to verify from the definitions.
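The way the uniform hitting estimate above produces an exponential tail is the standard restart argument: writing $T$ for the hitting time of $B_{i_0+3}$ and $p>0$ for the uniform lower bound, the Markov property applied at the times $j\,2^{2(i_0+2)}$ gives

```latex
P_x\!\left[T > k\, 2^{2(i_0+2)}\right]
  \;\le\; \Big(\sup_{y} P_y\!\left[T > 2^{2(i_0+2)}\right]\Big)^{k}
  \;\le\; (1-p)^{k},
```

so taking $k = 2^{i_0+2}$ bounds the probability that a particle takes longer than $2^{3(i_0+2)}$ to hit $B_{i_0+3}$ by $(1-p)^{2^{i_0+2}}$.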
Figure 6: On the left, we depict a point x ∈ B_i and its corresponding S_in(x) and S_out(x) (the dashed lines represent the boundaries of the B_i's). On the right, the gray area represents the union of S_in(x) and S_out(x), for all x ∈ B_i.

We now define the stopping time in (5.23); see Figure 6 (right), depicting two annuli from D.
For i ≥ i_0 and x ∈ B_i, we use Lemma 7.6 in [23] to conclude that P_x[X_T = y] ≤ c|y|^{−(d−1)} for any y ∈ S_in(x) ∪ S_out(x). We can now bound the first term on the right-hand side of (5.14). For this, fix any ζ ≤ c_10/(c_13 · 4 · 3^d), and note that

P_ζ[some particle visits more than three B_i's]   (5.25)
≤ P_{c_13 ζ 1_D}[some particle leaves its corresponding B_i]
≤ Σ_{i ≥ i_0} P_{c_13 ζ 1_{D∩B_i}}[some particle leaves B_i],

which, by Theorem 5.1, is at most (5.26), where x = 2^{i_0}. This, together with (5.12), (5.14) and (5.15), finishes the proof of the theorem.

Simulation and domination of particles
This section is devoted to the construction of a coupling between a collection of independent random walks and a Poisson point process. Its main result is Lemma 6.5, which is the main ingredient in the proof of the Sieving Lemma 4.5 in Section 7; the Sieving Lemma was used several times during the proofs of our main results.
Remark 6.1. Lemma 6.5 below is very similar in spirit to Proposition 4.1 in [21]. However, the results in [21] are not formulated in the most convenient way for our use here (most notably, they concern Brownian motions instead of random walks). Therefore, for the reader's convenience, we present a self-contained proof of this coupling here. We would also like to point out that our proof follows a different approach from that of [21], employing the concept of soft local times from [23] instead.
Let us collect some simple facts concerning the heat kernel of a continuous-time simple random walk on Z^d. Let p_t(x, y) stand for the heat kernel, given by p_t(x, y) = P_x[X_t = y]. When confusion may arise, we write p^d_t instead of p_t, to make explicit the dimension under consideration.
Proof. The first claim (6.1) is a consequence of the independence of the evolution of the coordinates of a continuous-time random walk, while the second follows from standard random walk theory.
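The coordinate factorization in (6.1) can be checked numerically. Taking, for concreteness, total jump rate one on Z^2 (an assumption for this illustration), each coordinate is an independent one-dimensional walk run at rate 1/2, so p^2_t(0,(a,b)) = p^1_{t/2}(0,a) · p^1_{t/2}(0,b). A self-contained sketch, conditioning on the Poisson number of jumps (the truncation levels `nmax` are numerical conveniences):

```python
import math

def p1(t, x, nmax=80):
    """Heat kernel of the rate-1 continuous-time SRW on Z, summing over
    the Poisson number of discrete steps."""
    s = 0.0
    for n in range(abs(x), nmax + 1):
        if (n - x) % 2:                 # parity of an n-step walk
            continue
        s += math.exp(-t) * t**n / math.factorial(n) \
             * math.comb(n, (n + x) // 2) / 2**n
    return s

def p2(t, target, nmax=40):
    """Heat kernel of the rate-1 continuous-time SRW on Z^2 (each jump
    uniform over the 4 neighbours), by dynamic programming in the
    number of jumps."""
    dist = {(0, 0): 1.0}
    total = math.exp(-t) * (1.0 if target == (0, 0) else 0.0)
    for n in range(1, nmax + 1):
        nxt = {}
        for (a, b), p in dist.items():
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                key = (a + da, b + db)
                nxt[key] = nxt.get(key, 0.0) + p / 4
        dist = nxt
        total += math.exp(-t) * t**n / math.factorial(n) * dist.get(target, 0.0)
    return total

# factorization (6.1): p^2_t(0,(a,b)) = p^1_{t/2}(0,a) * p^1_{t/2}(0,b)
```

The factorization holds because thinning the driving Poisson clock into coordinate-1 and coordinate-2 jumps yields two independent rate-1/2 clocks.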
The above lemma will be used in the proof of Lemma 6.4, which deals with the integration of the heat kernel under a balanced cloud of starting particles. More precisely, Lemma 6.4 is crucial in the proof of Lemma 6.5.
We know that the heat kernel p_t sums to one over its second coordinate. The lemma below approximates this feature when we sum not over all possible points in the second coordinate, but over a balanced collection (x_j)_{j∈J}; recall Definition 4.3. More precisely,

Lemma 6.4. (d ≥ 1) Let {C_i}_{i∈I} be a paving of Z^d by disjoint boxes of side length L in the sense of (3.6), see Figure 7. Consider also a ζ-balanced collection (x_j)_{j∈J} with respect to {C_i}_{i∈I}. Then (6.3) holds for any t ≥ 0.

Indeed, this is a simple consequence of Lemma 6.2 and the fact that the absolute value of any given coordinate of a point x ∈ C_i is smaller than that of a point x′ ∈ C_φ(i). We now split the balanced set (x_j)_{j∈J} according to the indices J′ = {j ∈ J; x_j ∈ ∪_{i∈I′} C_φ(i)} and J″ = J \ J′, and first estimate, for any t ≥ 0, the contribution of J′. Given a point x = (x_1, . . . , x_d) with index in J″, we first claim that there exists a coordinate k_x ∈ {1, . . . , d} for which

|x_{k_x}| ≤ 3L.   (6.9)

Supposing that the above does not hold, the signs s = (x_1/|x_1|, . . . , x_d/|x_d|) are well defined; let i be the index of the box containing y = x − Ls. We will prove that i belongs to I′ and x ∈ C_φ(i), contradicting the fact that the index of x lies in J″. For this, observe that all the coordinates of y have absolute value at least 2L, in which case any other point z ∈ C_i has coordinates with absolute value at least L, implying that i ∈ I′. The fact that x ∈ C_φ(i) follows from the definition of φ below (6.5). This contradiction establishes (6.9). For x with index in J″, we define the projection x̂ = (x_1, . . . , x_{k_x−1}, 0, x_{k_x+1}, . . . , x_d), where we nullify the k_x-th coordinate.
We will now show (6.3) by induction on d. For d = 1, the result is obvious from (6.9), since there are at most 6ζL points in J″. Now suppose that (6.3) holds for d − 1 and that we are given a ζ-balanced collection (x_j)_{j∈J} ⊂ Z^d, as in (6.10). Observe that for each j ∈ J″ the point x̂_j belongs to a coordinate hyperplane; we therefore partition J″ into J″_1, . . . , J″_d, corresponding to each of these hyperplanes. Moreover, (x̂_j)_{j∈J″_k} is a 6ζL-balanced collection in Z^{d−1}. Thus, we can use our induction hypothesis to conclude the estimate, finishing the proof of Lemma 6.4.
Given a sequence (x_j)_{j∈J} of points in Z^d, we denote by ⊗_{j∈J} P_{x_j} the law of a sequence of independent random walks starting at these points. Still in this context, we denote by X^j_t the canonical coordinate of the j-th particle at time t.
The next lemma provides us with a way to couple a collection of independent particles with a Poisson point process. This procedure is very much inspired by the works [21] and [23].

Lemma 6.5. Let (x_j)_{j∈J} be a collection in Z^d which is ζ-balanced with respect to the paving {C_i}_{i∈I} of side length L. Then, given ζ′ ≥ ζ, we can find a coupling Q between ⊗_{j∈J} P_{x_j} and the law of a Poisson point process Σ_{j′∈J′} δ_{Z_{j′}} on Z^d with intensity ζ′, in such a way that (6.12) holds for any set D ⊂ Z^d and every t ≥ c_14 L^2.

EJP 22 (2017), paper 33.
Proof. Using Corollary A.3, we obtain a coupling Q such that (6.13) holds, where G_J(z) = Σ_{j∈J} ξ_j p_t(x_j, z) and (ξ_j) is an i.i.d. sequence of Exp(1)-distributed random variables.
We are going to estimate the right-hand side of (6.13) using concentration inequalities. For this, let us first estimate the expectation in (6.14). Given z ∈ D, we write p_j for p_t(x_j, z). Using the formula for the moment generating function of an exponential random variable and expanding log((1 − x)^{−1}) around zero, we obtain, for t ≥ cL^2, a bound on the exponential moments of G_J(z); then, for t ≥ c_14 L^2, combining this with (6.14) and (6.2) yields the result.
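For reference, the two elementary facts invoked in this step: if $\xi \sim \mathrm{Exp}(1)$ and $0 \le \theta < 1$, then

```latex
\mathbb{E}\big[e^{\theta \xi}\big] = \frac{1}{1-\theta},
\qquad\text{so}\qquad
\log \mathbb{E}\Big[e^{\theta \sum_j p_j \xi_j}\Big]
 = \sum_j \log\big((1-\theta p_j)^{-1}\big)
 = \sum_j \big(\theta p_j + O(\theta^2 p_j^2)\big),
```

where the quadratic error term is small precisely because the heat-kernel values $p_j = p_t(x_j, z)$ are small for $t \ge cL^2$.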

Proof of the Sieving Lemma 4.5
The proofs in this section make use of the "soft local time" technique, introduced in [23] and also used in [11] for a similar purpose.
Fix q ∈ {0, 1} for the rest of this section and define the set which is the support of the sieved measure Y_j in Lemma 4.5. Given a collection (x_j)_{j∈J}, we define the stopping time for the j-th particle (called the sieving time) as in (7.2), where t_s stands for the equally spaced sequence t_s = s L_k^{2.02}, s ≥ 0. Note that we have defined t_0 = 0, but it does not feature in (7.2), so that T_j is at least L_k^{2.02}. The stopping times T_j that appeared in the statement of Lemma 4.5 are nothing more than the ones above.
Before starting the proof of Lemma 4.5, let us present an auxiliary result, which encompasses the main difficulty of this section. It looks very similar to the Sieving Lemma; however, here we start with a very sparse collection of particles and we are allowed to dominate them by a sieved law with much higher density (which is not the case in Lemma 4.5).
Below, let the sequences L_k, R_k and ζ^r_k be as in (4.1), (4.6) and (4.15), and fix q ∈ {0, 1}. Lemma 7.1 then provides a coupling, built in such a way that its conclusion holds with the stopping times T_j defined as in (7.2).
We postpone the proof of the above lemma to later in the text. For now, let us see how it helps in proving the Sieving Lemma 4.5.
The above lemma is very useful in the process of "sieving"; however, it is rough in the sense that it needs a high density of particles in the sieved law for the domination to work (L_k^{−γ/3} versus L_k^{−2γ/3}). We now turn to the proof of the Sieving Lemma 4.5, which was essential in the proof of Theorem 4.6.
Recall the definitions (4.8), (4.15) and the statement of the Sieving Lemma 4.5 in Section 4.
The proof of this lemma relies strongly on the fact that the density of the x_j's is slightly smaller than that of the sieved law. The two important densities to keep in mind during this proof are ζ^r_k > ζ^{r+1}_k.
Proof of the Sieving Lemma 4.5. The first thing to observe is that the stopping times defined in (7.2) will rarely invoke the minimum with L_k^{2.04}. More precisely,

⊗_{j∈J} P_{x_j}[T_j = L_k^{2.04} for some j ∈ J] ≤ c exp{−c L_k^{0.02}}.

The proof goes in two steps: we first use Lemma 6.5 to couple the majority of the particles (those with T_j = t_1) with a sieved law, then we apply Lemma 7.1 to deal with the remaining dust.
We first let the random walks (starting at (x_j)_{j∈J}) run for time t_1 = L_k^{2.02} and apply Lemma 6.5 to obtain a coupling Q′ with a Poisson point process Σ_{j′∈J′} δ_{Z_{j′}}, having intensity (ζ^r_k + ζ^{r+1}_k)/2 ≤ 1, in such a way that (7.5) holds. Observe that the Poisson point process Σ_{j′∈J′} δ_{Z_{j′}} above is not supported on the set B = ∪_{m′∈M_k} C^0_{m′}, as the sieved law is. This will be dealt with later by restricting the sum to the indices {j′ ∈ J′; Z_{j′} ∈ B}. When we do this, we will be left with some remaining walks (those with indices in J_1 = {j ∈ J; T_j > L_k^{2.02}}). The points X^j_{L_k^{2.02}}, for j ∈ J_1, are exactly those which have fallen in the collection of annuli A = ∪_{m′∈M_k} C^0_{m′} \ C^2_{m′}, and they should be sparse, as the fraction of points of each C^0_{m′} that A intersects is not larger than 8dL_k^{−γ}. Using the above, we employ Lemma 7.1 to couple the remaining particles (with indices in J_1). For k > c we know that 8dL_k^{−γ} < L_k^{−2γ/3}, so we are in a good position to use Lemma 7.1. On the event in (7.7), this lemma provides us with a coupling Q″ between ⊗_{j∈J_1} P_{X^j_{t_1}} and a suitable sieved law. Observe that we need the sprinkling L_k^{−γ/3} used in defining the Y_j's to be compatible with our choice of densities (4.15). More precisely, we need (7.9), which is precisely the hypothesis on ζ_0 in the statement of the lemma. This condition plays a central role in this article, see Remark 4.7 for more details. Using (7.9), we can define Q by combining Q′ with Q″, in such a way that the desired bound holds, where we used Lemma 3.3, (7.5), (7.7) and (7.8). This finishes the proof of the lemma.
We now give a proof of Lemma 7.1, which was used in the proof of the Sieving Lemma 4.5. Definition (7.2) allows us to split the index set J_0 into the disjoint sets

F_s = {j ∈ J_0; T_j = t_s}, for s ≥ 1,   (7.10)

corresponding to the particles that have finished sieving at time t_s. We also define the indices of particles that are not yet sieved at stage s, that is, J_0 = J and J_s = J_0 \ ∪_{s′≤s} F_{s′}, for s ≥ 1.
Lemma 6.5 alone would have been enough to couple the particles at time t_1 with a Poisson point process on the entire lattice. The challenge now is to make sure that the dominating Poisson point process is restricted to B, the so-called "sieving". For this we apply Lemma 6.5 repeatedly, S = L_k^{γ/3}/2 times; throughout the following proof, S denotes this number.

Comparing this intensity measure with that of (Y_j)_{j∈J}, we see that, in order to prove the lemma, it is enough to construct a coupling Q satisfying (7.12). For the construction of Q and the proof of (7.12) we make repeated use of Lemma 6.5.
Using the fact that (x_j)_{j∈J_0} is L_k^{−2γ/3}-balanced, we can employ Lemma 6.5 to obtain a coupling Q_1 between ⊗_{j∈J_0} P_{x_j} and a Poisson point process. Note that if we restrict the corresponding sum to the j's such that X^j_{t_1} ∈ B, we obtain (7.14), which already resembles (7.12).
We are now left with the particles (X^j_{t_1})_{j∈J_1}, which have not been sieved at this first stage; recall that J_1 = J_0 \ F_1. We could restart the coupling from them, using Lemma 6.5 again, as long as we are on the event A_1. We expect this event to have high probability, since most particles will not fall into the annuli, see (7.16). Let us now recall the whole strategy of the proof. We are constructing the coupling Q, and for this we have first built Q_1. In case Q_1 fails, i.e. on the complement of the event in (7.14), we declare that Q has also failed (in case of failure, we let (X^j_t) and (Y_j) be independent under Q).
On the complement of A_1 we will also declare that Q failed; this, however, is improbable. On the event A_1 we can proceed as above, using Lemma 6.5 to construct Q_2. Proceeding by induction, one finally obtains the corresponding couplings for every s ≤ S. We thus construct the coupling Q by iterating Q_s for s = 1, . . . , S. More precisely, Q is defined as follows:
• if A_s does not occur for some s ≤ S, or J_{S+1} is non-empty, the coupling has failed, and in this case the random walks and the sieved law are independent under Q;
• if A_s holds for every s ≤ S and J_{S+1} = ∅, Q is given by all the couplings Q_s, conditionally independent given (J_s)_{s≤S} and the positions of all particles (X^j_{t_s})_{s≤S, j∈J_s}.
It is simple to verify that Q is a coupling between ⊗_{j∈J_0} P_{x_j} and the Poisson point processes (Z^s_j)_{s≤S, j∈J_s}. Summing up the probabilities that Q failed, we prove (7.12), finishing the proof of Lemma 7.1.

Other models
In this section we comment on other models for which our techniques apply. Although we have stated Theorems 2.1 and 2.2 for the process of Activated Random Walks, let us now show how the same proof can be used in other cases.
The first trivial observation is that if the total activity of a certain model is stochastically dominated by that of the Activated Random Walks, then that model also falls within the scope of this work. However, this simple argument is very limited; for instance, it does not apply to the Stochastic Sandpile Model, as we now observe. This can be easily seen by starting with κ particles (the site's capacity) at the origin. For such a starting configuration, the total activity of the SSM will be at least κ, as the origin will topple all its particles to random neighbors after an exponential time. For the ARW, on the other hand, it could happen that the first κ − 1 particles jump and sleep and then the remaining particle at the origin immediately goes to sleep, resulting in total activity κ − 1.
Here we show how to extend our main results to other models. In order to keep the exposition simple, we focus on the Stochastic Sandpile Model, but our intention is to illustrate the main ingredients one needs for such an extension. Let us first observe which properties of the Activated Random Walks model were used throughout the previous proofs: the commutativity and monotonicity provided by Theorems 2.3 and 2.4, and, several times, the "turning off" of the dynamics provided by Corollary 2.5.
The proofs of the above properties can often be tedious for different models; we will therefore refer to the excellent work [2], where the authors provide a collection of results that hold for a large class of models. Following their notation, we sometimes call the particles messages.
Let us first put some definitions in place. Each site x ∈ Z^d will be assigned an infinite sequence of instructions (y^x_i)_{i≥1}, where y^x_i ∈ {±e_j; j = 1, . . . , d}. Later, the y^x_i's will be chosen randomly, but for now we can pick an arbitrary sequence.
The site x will react to the arrival of two types of messages, corresponding to ordinary and activation particles. The number of particles sent by x to its neighbors will be determined by the function

f(q, r) = min{q, max{q − (q mod κ), r}},   (8.1)

see Figure 8. Note that f(q, r) is non-decreasing in both arguments.
Informally speaking, after a site x receives q ordinary particles and r activation particles, it will have launched f(q, r) ordinary particles to its neighbors, in directions determined by the unit vectors y^x_j. Let us now make this description precise. For x ∈ Z^d, we define a dictionary of messages A_x = {x_o, x_a}, the state (q, r) ∈ Q_x of counters, and the update rules T^{(x)}: A_x × Q_x → Q_x given by

T^{(x)}(x_o, (q, r)) = (q + 1, r),   T^{(x)}(x_a, (q, r)) = (q, r + 1),   (8.2)

which simply increase the counter of the corresponding particle type. After updating its counters, the site x will send some particles to its neighbors (only ordinary particles are sent). The number of particles emanating from x after it receives a message will be given by the increment of f(q, r), that is, f(T^{(x)}(·, (q, r))) − f(q, r) (recall that f is monotone), and these particles will jump in directions given by the instructions y^x_j, where j runs over fresh indices. More precisely, for |y| = 1, one defines the rules T^{(x,x+y)}; in these definitions, one can replace · by either x_o or x_a. In order to apply the results of [2], we need to prove that the rules T^{(x)} and T^{(x,x+y)} above are Abelian. Indeed, independently of the order in which messages arrive, the total number of particles sent by x is simply f evaluated at the total number of messages of each type.
We can now apply Theorem 4.8 of [2] to conclude that the final configuration of particles, as well as the total occupation field (J_x)_{x∈Z^d} (that is, the total number of messages processed by each site), does not depend on the order in which the processors act. It is clear that if there are no activation messages in the system, the process is determined simply by f(q, 0) = q − (q mod κ), which sends κ particles whenever q reaches {κ, 2κ, . . . }. This is nothing more than the original Stochastic Sandpile Model.
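A minimal sketch of (8.1), checking the properties used in this section: f(q, 0) = q − (q mod κ) (pure SSM behavior), f(q, q) = q (used in the proof of Lemma 8.2 below), and monotonicity in both arguments:

```python
def f(q, r, kappa):
    """Number of ordinary particles emitted by a site after receiving
    q ordinary and r activation messages, as in (8.1)."""
    return min(q, max(q - (q % kappa), r))
```

With no activation messages, particles are released in batches of κ, recovering the SSM; on the diagonal, f(q, q) = q, so every ordinary message is passed along. Note that the increments of f in q need not be 0 or 1: when q reaches a multiple of κ, a batch of κ particles fires at once.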
Applying Lemma 4.2 of [2], we conclude that adding more messages to the system (of type either x o or x a ) can only increase its final occupation field (J x ) x∈Z d .
We have seen that our representation of the process provides analogues of Theorems 2.3 and 2.4. Let us finish by establishing a result similar to Corollary 2.5.

Lemma 8.2. Fix a finite collection of particles (x_i)_{i∈I} and a stopping time T_i for each of them. Then, for any event A which is increasing in the final accumulated activity (J_x),

sup P_{(x_i)_{i∈I}}(A) ≤ E_{(x_i)_{i∈I}}[sup P_{(X^i_{T_i})_{i∈I}}(A)],   (8.4)

where the suprema are taken over all starting configurations in which the state of each site is on the diagonal. As previously, the expectation above is taken with respect to an independent collection of simple random walks.
Proof. Suppose that the system starts with a finite collection (x^o_i)_{i∈I} of ordinary messages, endowed with stopping times T_i that may depend on i. Moreover, each site x ∈ Z^d has state (q_x, q_x), on the diagonal. Throughout the proof we will manage to keep the state of every site on the diagonal, so that particles never accumulate on sites.
Suppose that at some given time a site x has at least one ordinary message x_o waiting to be processed, and we want this message to move to some neighboring site at random. What we do is send this ordinary message to be processed at x together with an activation message x_a, so that Q_x stays on the diagonal. Then, since f(q, q) = q, we are guaranteed that the x_o message jumps to some random neighbor. After we have repeated this procedure up to the stopping time T_i of each ordinary message, we stop feeding the system with activation messages, so that we are back to the original dynamics ruled by P (and the state of each site remains on the diagonal).
Since we only added activation messages to the system, our procedure can only increase the probability of the event A, finishing the proof of the lemma.
To repeat the proof of absorption, one should only observe that Lemma 3.1 can be proven in the same way for this new particle system (even with the supremum appearing in Lemma 8.2). Another important change is in the definition of p_k(ζ) in (4.8), which should include this supremum as well. All other steps of the proof are identical to what has been presented.

Soft local times

Let Σ be a separable space endowed with a measure µ, and let m = Σ_{i≥1} δ_{(z_i, v_i)} be a Poisson point process on Σ × R_+ with intensity given by µ ⊗ dv, where dv is the Lebesgue measure on R_+. For more details on this construction, see for instance [26], Proposition 3.6 on p. 130.
The result below provides us with a way to simulate a random element of Σ using the Poisson point process m. Although this result is very intuitive, we provide here its proof for the sake of completeness and the reader's convenience.
Proof. Let us first define, for any measurable A ⊂ Σ, the random variable

ξ_A = inf{t ≥ 0; there exists i ≥ 1 such that t 1_A(z_i) g(z_i) ≥ v_i}.

Elementary properties of Poisson point processes (see for instance (a) and (b) in [26], page 130) yield that

ξ_A is exponentially distributed (with parameter ∫_A g(z) µ(dz)) and, if A and B are disjoint, ξ_A and ξ_B are independent.   (A.8)

Property (1) now follows from (A.8), using that Σ is separable and the fact that two independent exponential random variables are almost surely distinct. Observe also that

Q[ξ ≥ α, z_î ∈ A] = Q[ξ_{Σ\A} > ξ_A ≥ α].   (A.9)

Thus, using (A.8), we can prove property (2) from simple properties of the minimum of independent exponential random variables.
Finally, let us establish property (3). We first claim that, given ξ, m′ := Σ_{i≠î} δ_{(z_i, v_i)} is a Poisson point process which is independent of z_î and, conditionally on ξ, has intensity measure 1_{v > ξ g(z)} · µ(dz) ⊗ dv. This is a consequence of the strong Markov property for Poisson point processes and the fact that {(z, v) ∈ Σ × R_+; v ≤ ξ g(z)} is a stopping set, see Theorem 4 of [29].
To finish the proof, we observe that, given ξ, m′ is a mapping of m (in the sense of Proposition 3.7 of [26], p. 134). This mapping pulls the measure 1_{v > ξ g(z)} · µ(dz) ⊗ dv back to µ(dz) ⊗ dv. Noting that the latter distribution does not involve ξ, we conclude the proof of (3), and therefore of the lemma.
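For a finite Σ with µ a weighted counting measure, the variable ξ in the proof above can be simulated directly: the smallest v-coordinate of m over {z} × R_+ is Exp(µ({z})), so ξ = min_z v_z/g(z), attained at ẑ. A Monte Carlo sketch with a fixed seed; the two-point Σ and the weights below are illustrative choices of ours, not from the text:

```python
import random

def sample_xi(weights, g, rng):
    """One draw of (xi, z_hat) for finite Sigma: for each z, the smallest
    v-coordinate of the Poisson process over {z} x R_+ is Exp(mu({z})),
    and xi = min_z v_z / g(z), attained at z_hat."""
    best, arg = float("inf"), None
    for z, mu_z in weights.items():
        v = rng.expovariate(mu_z)        # first point of the column above z
        if v / g[z] < best:
            best, arg = v / g[z], z
    return best, arg

rng = random.Random(1)
weights = {"a": 1.0, "b": 2.0}           # mu({a}), mu({b})
g = {"a": 3.0, "b": 0.5}                 # the density g
draws = [sample_xi(weights, g, rng) for _ in range(200_000)]
mean_xi = sum(x for x, _ in draws) / len(draws)
freq_a = sum(z == "a" for _, z in draws) / len(draws)
# (A.8): xi ~ Exp(mu(a)g(a) + mu(b)g(b)) = Exp(4),
# and z_hat = "a" with probability mu(a)g(a)/4 = 3/4
```

The empirical mean of ξ should be close to 1/4 and the frequency of ẑ = "a" close to 3/4, matching properties (1) and (2).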
In Proposition A.2 below, we use Lemma A.1 in order to simulate a collection of independent random elements Z j of Σ using the same Poisson point process m as above.
Let us set up the required definitions.
Suppose that in some probability space (M, M, P) we are given an independent collection of (not necessarily identically distributed) random elements (Z_j)_{j≥1} of Σ such that, for any given j ≥ 1,

the distribution of Z_j is given by g_j(z) µ(dz).   (A.10)

In what follows, we are going to use a single Poisson point process m to simulate the above sequence (Z_j). In the same spirit as the definition of ξ in Proposition A.1, we now define what we call the soft local time G: with G_0 ≡ 0,

ξ_k = inf{t ≥ 0; for at least k values of i, t g_k(z_i) + G_{k−1}(z_i) ≥ v_i},
G_k(z) = ξ_1 g_1(z) + · · · + ξ_k g_k(z),   (A.11)

see Figure 9 for an illustration of this procedure.
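The iteration (A.11) is easy to implement for a finite Σ with µ the counting measure. The sketch below builds the cloud m restricted to Σ × [0, v_max] and runs the greedy selection of (ξ_k, i_k); the names and the finiteness assumptions are ours, and the truncation at v_max is a numerical convenience (it must be large enough that every step finds a fresh point):

```python
import random

def poisson_cloud(sigma, vmax, rng):
    """Sample m restricted to Sigma x [0, vmax] for mu = counting measure:
    for each z, the v-coordinates form a rate-1 Poisson process."""
    pts = []
    for z in sigma:
        v = rng.expovariate(1.0)
        while v < vmax:
            pts.append((z, v))
            v += rng.expovariate(1.0)
    return pts

def soft_local_time(points, gs):
    """Run (A.11): at step k, xi_k is the smallest t such that
    t*g_k(z_i) + G_{k-1}(z_i) >= v_i for some fresh index i, and the
    point realizing it is selected.  Returns the selected z's and G."""
    G = {z: 0.0 for z in gs[0]}          # G_0 = 0
    used, selected = set(), []
    for g in gs:
        xi, pick = float("inf"), None
        for i, (z, v) in enumerate(points):
            if i in used or g[z] == 0.0:
                continue
            t = max(0.0, (v - G[z]) / g[z])
            if t < xi:
                xi, pick = t, i
        used.add(pick)
        selected.append(points[pick][0])
        for z in G:                      # G_k = G_{k-1} + xi_k * g_k
            G[z] += xi * g[z]
    return selected, G
```

Previously selected points satisfy G_{k−1}(z_i) ≥ v_i automatically, so the "at least k values of i" clause of (A.11) reduces to the minimum over fresh indices, which is what the greedy loop computes.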
Applying Proposition A.1 repeatedly, we conclude by induction on J that the construction indeed simulates (Z_1, . . . , Z_J). We would like to finish this section by giving a flavor of how the above proposition can help with the proof of Lemma 6.5. The next corollary shows that performing the above construction for two collections (say Z_k and Z′_k) of independent elements of Σ, while using the same Poisson point process as a basis, provides a powerful coupling between them. More precisely,

Corollary A.3. Suppose we are given a family of densities (g_j(·))_{j=1}^J and the corresponding ξ_j, G_j and i_j, for j = 1, . . . , J, as in (A.11) and (A.12). Then, for any ζ > 0, the corresponding coupling inequality holds.

Note that the right-hand side of this bound depends only on the soft local time, which may be estimated, for instance, through large deviation bounds.