Absorbing-state phase transition in biased activated random walk

We consider the activated random walk (ARW) model on $\mathbb{Z}^d$, which undergoes a transition from an absorbing regime to a regime of sustained activity. In any dimension we prove that the system is in the active regime when the particle density is less than one, provided that the jump distribution is biased and that the sleeping rate is small enough. This answers a question from Rolla and Sidoravicius (2012) and Dickman, Rolla and Sidoravicius (2010) in the case of biased jump distribution. Furthermore, we prove that the critical density depends on the jump distribution.


Introduction
In this paper we consider the activated random walk (ARW) model on the lattice. This is a continuous-time interacting particle system with a conserved number of particles, where each particle can be in one of two states: A (active) or S (inactive, sleeping). Each A-particle performs an independent, continuous-time random walk on Z^d with jump rate 1 and jump distribution p(·). Moreover, every A-particle carries a Poisson clock with rate λ > 0 (the sleeping rate). When the clock rings, if the particle does not share its site with other particles, the transition A → S occurs; otherwise nothing happens. S-particles do not move and remain sleeping until the instant when another particle is present at the same vertex. At such an instant, the particle in the S-state flips to the A-state, giving the transition A + S → 2A. The initial particle configuration is distributed according to a product of Bernoulli distributions with expectation µ ∈ [0, 1], which we call the particle density. As we consider initial configurations with only active particles, it follows from the previous rules that a sleeping particle can be observed only if it occupies its site alone.
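As an illustration (not part of the original analysis), the dynamics just described can be mimicked by a toy discrete-event simulation on a finite segment of Z, where particles stepping outside the segment are discarded. All parameter values and the helper name `simulate_arw` are our own illustrative choices; the function returns whether the box fixated within the step budget, together with the number of remaining particles.

```python
import random

def simulate_arw(n_sites, mu, lam, p_right=0.7, max_steps=100_000, seed=0):
    """Toy simulation of ARW on {0, ..., n_sites-1} with absorbing
    boundaries.  Each event: pick an active particle; with probability
    lam/(1+lam) it tries to sleep (succeeds only when alone), otherwise
    it jumps right with probability p_right, left otherwise.
    Returns (fixated, remaining_particles)."""
    rng = random.Random(seed)
    count = [1 if rng.random() < mu else 0 for _ in range(n_sites)]
    asleep = [False] * n_sites          # True iff the lone particle at the site sleeps
    for _ in range(max_steps):
        active = [x for x in range(n_sites) if count[x] > 0 and not asleep[x]]
        if not active:
            return True, sum(count)     # no active particles left: local fixation
        # sites with more particles ring proportionally more often
        x = rng.choices(active, weights=[count[s] for s in active])[0]
        if rng.random() < lam / (1 + lam):
            if count[x] == 1:
                asleep[x] = True        # A -> S, only when the particle is alone
        else:
            count[x] -= 1
            y = x + 1 if rng.random() < p_right else x - 1
            if 0 <= y < n_sites:
                count[y] += 1
                asleep[y] = False       # A + S -> 2A: an arrival wakes a sleeper
    return False, sum(count)
```

Note that in any finite box all particles eventually sleep or exit; the competition between activity and fixation described above is meaningful only in the infinite-volume limit.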
In ARW a phase transition arises from a conflict between the spread of activity and a tendency of the activity to die out. We say that ARW exhibits local fixation if, for any finite set V ⊂ Z^d, there exists a finite time t_V such that after this time the set V contains no active particles. We say that ARW stays active if local fixation does not occur.
Some of the central questions for this model involve the estimation of the critical density which separates the two regimes, µ_c(λ, p(·)) := inf {µ ∈ [0, 1] : P(ARW stays active) > 0}, where P(ARW stays active) is regarded as a function of the parameter µ. The 0-1 law and the monotonicity properties proved in the seminal article by Rolla and Sidoravicius [9] imply that if µ > µ_c, then ARW sustains activity almost surely.
In several articles an estimation of µ_c has been provided. In one dimension, it has been proved by Rolla and Sidoravicius [9] that µ_c ∈ [λ/(1+λ), 1]. Our definition of µ_c implies that µ_c ≤ 1, since particles are initially distributed as Bernoulli random variables. However, even if we replace this with any product measure of density µ > 0, it is intuitive that µ_c ≤ 1, since at most one particle can fall asleep at any given vertex. This fact has been proved in [6, 9, 12] in wide generality. A fundamental question for this model is whether µ_c < 1 for any sleeping rate λ. This question has been asked by Dickman, Rolla and Sidoravicius [4] and by Rolla and Sidoravicius [9], and its answer is expected to be positive in wide generality. In this article we provide a positive answer to this question in any dimension in the case of a biased jump distribution. In particular, in one dimension we prove a stronger statement, i.e., that µ_c → 0 as λ → 0.
We are now ready to state our results. We let m = Σ_{z ∈ Z^d} p(z) z be the expected jump of the random walk, we let e_j be the axis direction for which m · e_j is maximal, we let H = {z ∈ Z^d s.t. e_j · z ≤ 0} and we define the number,

F(λ, p(·)) := E[(1 + λ)^{−H}], (1.1)

where, with a slight abuse of notation, H also denotes the total time spent in H by a discrete-time random walk with jump distribution p(·). This number is the probability that a continuous-time random walk never deactivates, if it jumps at rate 1 and deactivates at rate λ only when it is in H. As a consequence of the law of large numbers, for any jump distribution such that m ≠ 0 and for any λ > 0, this probability is positive and, furthermore, lim_{λ→0} F(λ, p(·)) = 1, as the walker spends only a finite amount of time in H.

Theorem 1.1. Consider ARW on Z with jump distribution p(·) having finite support and such that m ≠ 0. Then µ_c(λ, p(·)) < 1 for every λ > 0 and, moreover, µ_c(λ, p(·)) → 0 as λ → 0.

The next theorem provides an upper bound for the critical density in dimension d ≥ 2.

Theorem 1.2. Consider ARW on Z^d with jump distribution p(·) having finite support and such that m ≠ 0. Then µ_c(λ, p(·)) < 1 for every λ > 0.

Although µ_c is conjectured to be strictly less than one for any positive λ and for any jump distribution, our proof techniques allow us to answer this question only under the assumption of a biased jump distribution. A second, natural question is how and whether the critical density depends on the jump distribution. Our third theorem states that the critical density is not a constant function of the jump distribution.

Theorem 1.3. Consider ARW with nearest-neighbour jump distribution p(1) = q and p(−1) = 1 − q, where q ∈ [0, 1]. For any fixed λ ∈ R_+, the critical density µ_c(λ, q) is not a constant function of q.

The proof of the theorem uses the stabilization procedure of Rolla and Sidoravicius [9] and is based on a simple observation.
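The quantity F(λ, p(·)) defined in (1.1) is easy to approximate numerically. Below is a minimal Monte Carlo sketch for the nearest-neighbour case p(1) = p_right > 1/2; the cutoff and sample sizes are our own illustrative choices, not taken from the paper.

```python
import random

def estimate_F(lam, p_right=0.7, n_samples=20_000, seed=1):
    """Monte Carlo estimate of F(lam, p) = E[(1 + lam)^(-H)], where H is
    the number of discrete time steps a p_right-biased walk started at 0
    spends in the half-line {z <= 0}.  Once the walk is far to the right
    it returns to {z <= 0} with negligible probability, so we stop
    tracking it beyond a fixed cutoff."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z, H = 0, 0
        while z <= 30:              # beyond 30, returns to {z <= 0} are negligible
            if z <= 0:
                H += 1              # one more unit of time spent in H
            z += 1 if rng.random() < p_right else -1
        total += (1 + lam) ** (-H)
    return total / n_samples
```

Since H ≥ 1 (the walk starts at 0), the estimate is strictly below 1, and it increases towards 1 as λ → 0, in agreement with the limit stated above.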
In particular, we provide a new lower bound for the critical density as a function of the sleeping rate and of the bias parameter (see Figures 1 and 2) and we prove that µ_c(λ, q) > λ/(1+λ) when q ∉ {0, 1}. The statement of Theorem 1.3 follows from our lower bound, as it is known [7] that µ_c(λ, q) = λ/(1+λ) when q ∈ {0, 1}. Remark 1.4. Our Theorems 1.1 and 1.3 hold for any distribution of the initial particle configuration which is a product of identical distributions parametrized by their expectation µ. On the contrary, if we fixed beforehand a distribution different from Bernoulli, the statement of Theorem 1.2 would be that µ_c < 1 only for small enough λ.
We end this introductory section by presenting the structure of the article. In Section 2 we outline the proofs of Theorems 1.1 and 1.2. In Section 3 we present the Diaconis-Fulton graphical representation, which is a fundamental framework for the analysis of ARW. In Section 4 we prove our upper bound for the critical density in one dimension. In Section 5 we prove our upper bound in dimension two and higher. In Section 6 we sketch the stabilization algorithm of Rolla and Sidoravicius and we present our observation for the proof of Theorem 1.3.

Some words on the proofs
Our proofs rely on the discrete Diaconis-Fulton representation for the dynamics of ARW. As it has been proved in [9], local fixation for ARW is related to the stability properties of this representation, which leaves aside the chronological order of events.
At every site x ∈ Z^d, an infinite sequence of independent and identically distributed random variables is defined. Their outcomes are operators ("instructions") acting on the current particle configuration, either by moving one particle from one site to another or by trying to let the particle switch to the S-state.
Local fixation for the dynamics of ARW is related to the number of instructions that must be used in order to stabilize the initial particle configuration. Denote by B_L a compact subset of Z^d such that B_L ↑ Z^d as L → ∞. For every x ∈ Z^d, let m_{B_L,η,τ}(x) be the number of instructions that must be used at x in order to make the configuration stable in B_L. If, with positive probability, m_{B_L,η,τ}(0) → ∞ as L → ∞, then ARW stays active almost surely. The proof of our results is based on the definition of stabilization algorithms for the set B_L and on counting the number of particles crossing the origin, which is chosen to belong to the inner boundary of B_L. In order to prove the upper bound (resp. the lower bound), we provide an estimation of the choice of parameters such that (2.1) (resp. (2.2)) holds for every L large enough.
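The odometer m_{B_L,η,τ} can be computed directly on a toy example. The sketch below stabilizes a finite interval B = {0, ..., n−1} by repeatedly using an instruction at an unstable site; instructions are sampled on first use, which has the same law as a pre-committed instruction array, and the parameters are illustrative choices of ours.

```python
import random

def stabilize(n, mu, lam, p_right=0.7, seed=0, cap=10**6):
    """Diaconis-Fulton stabilization of B = {0, ..., n-1}: use the next
    instruction at an unstable site (one hosting an active particle)
    until every site of B is empty or holds one sleeping particle.
    Particles stepping outside B are discarded.  Returns the odometer
    m(x) plus the final configuration."""
    rng = random.Random(seed)
    count = [1 if rng.random() < mu else 0 for _ in range(n)]
    asleep = [False] * n
    m = [0] * n
    steps = 0
    while steps < cap:
        unstable = [x for x in range(n) if count[x] > 0 and not asleep[x]]
        if not unstable:
            break
        x = unstable[0]        # the abelian property makes the order irrelevant
        m[x] += 1
        steps += 1
        if rng.random() < lam / (1 + lam):
            if count[x] == 1:
                asleep[x] = True            # sleep succeeds only when alone
        else:
            count[x] -= 1
            y = x + (1 if rng.random() < p_right else -1)
            if 0 <= y < n:
                count[y] += 1
                asleep[y] = False           # an arriving particle wakes a sleeper
    return m, count, asleep
```

The "abelian property" comment refers to the fact, recalled in Section 3, that for a fixed instruction array the order in which unstable sites are chosen does not affect the resulting odometer.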
The proof of Theorem 1.2 is based on the following idea. In two dimensions, assuming without loss of generality (by symmetry) that m · e_1 > 0, we introduce the set B_L = [−L + 1, 0] × [−L^3, L^3].
We define a stabilization procedure where particles are moved one by one until a certain "stopping" event occurs. By "moving", we mean that we always use the instruction on the site where the particle is located until such an event occurs. We say that a particle is "good" if it stops on one of the sites which are empty in the initial particle configuration or if it leaves B_L from the boundary side containing the origin. Because of the choice of our stopping events, of the order according to which particles are moved and of the bias of the jump distribution, we can provide a uniform positive lower bound F for the probability that a particle is good. Thus, we show that, if the density µ · F of good particles is higher than the density 1 − µ of empty sites, then a positive density of particles leaves B_L by crossing the boundary side containing the origin. In one dimension this would be enough to prove almost sure activity when µ < 1 with B_L = [−L, 0], as the number of sites belonging to the inner boundary of B_L does not grow with L.
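The density bookkeeping behind this argument is a one-line computation: good particles either fill empty sites or exit through the boundary side containing the origin, so a positive density of particles exits precisely when

```latex
\mu F - (1-\mu) > 0
\quad\Longleftrightarrow\quad
\mu\,(1+F) > 1
\quad\Longleftrightarrow\quad
\mu > \frac{1}{1+F}.
```

This is only the heuristic threshold of the mechanism, not the exact constant appearing in the theorems; it already shows that a uniform bound F > 0 yields activity at some density strictly below one.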
Instead, in two or more dimensions we need to control which boundary sites are crossed by the particles jumping away from B_L. To obtain such a control, we adapt to our setting the method of ghost explorers [8] and we exploit the symmetry properties of the random walk. Thus, we prove that the number of particles crossing the origin before leaving B_L is larger than cL for some c > 0 with high probability.
EJP 21 (2016), paper 13.
This idea applies also to the one-dimensional case, but the stabilization procedure employed in the proof of Theorem 1.1 (one dimension) is different from the one described above, as the same particle is "moved" several times in the course of the procedure and, every time it fills an empty site, it paves the way for the particles that are moved subsequently. This allows us to prove a stronger result, i.e., that activity is sustained at arbitrarily low density by setting λ small enough.
The proof of Theorem 1.3 uses the stabilization procedure that has been developed by Rolla and Sidoravicius [9] and it is based on an observation. We refer the reader to Section 6.

Diaconis-Fulton representation
In this section we describe the Diaconis-Fulton graphical representation for the dynamics of ARW. We follow [9]. Let η ∈ N_{0ρ}^{Z^d} denote the particle configuration, where N_{0ρ} = N_0 ∪ {ρ}. The symbol ρ represents the presence of one S-particle at a site, and we define an order relation by setting 0 < ρ < 1 < 2 < ⋯. We also let |ρ| = 1, so that |η_t(x)| counts the number of particles regardless of their state. Addition is defined by ρ + 0 = ρ and ρ + n = n + 1 for n ≥ 1, since an arriving particle wakes up a sleeping one. We introduce two operators, "move" from x to y, denoted by τ_{xy}, and "sleep" at x, denoted by τ_{xρ}. These operators act on the particle configuration. For any η ∈ N_{0ρ}^{Z^d} with |η(x)| ≥ 1, the configuration τ_{xy}η is obtained from η by moving one particle from x to y, i.e., (τ_{xy}η)(z) = η(z) if z ≠ x and z ≠ y, (τ_{xy}η)(x) = |η(x)| − 1 and (τ_{xy}η)(y) = η(y) + 1, (3.1) and the configuration τ_{xρ}η ∈ N_{0ρ}^{Z^d} is defined by (τ_{xρ}η)(x) = ρ if η(x) = 1 and (τ_{xρ}η)(x) = η(x) otherwise, with τ_{xρ}η agreeing with η off x. For each x ∈ Z^d, an array of instructions (τ^{x,j})_{j∈N} is given, and the counters h(x), x ∈ Z^d, count the number of instructions used at each site. We say that we use an instruction at x when we act on the current particle configuration η through the operator Φ_x, which is defined as Φ_x η = τ^{x, h(x)+1} η, after which h(x) is increased by one. Properties. We now describe the properties of this representation. Later we discuss how they are related to the stochastic dynamics of ARW.
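The ρ-arithmetic above is concrete enough to transcribe directly. The sketch below encodes ρ as a sentinel value (our own illustrative choice) and implements the two operators on configurations stored as dictionaries; it is a minimal transcription, not the paper's formalism.

```python
RHO = "rho"  # marks a site holding exactly one sleeping particle

def size(v):
    """|rho| = 1, |n| = n: number of particles regardless of state."""
    return 1 if v == RHO else v

def tau_move(eta, x, y):
    """Instruction 'move from x to y'; legal only if x hosts an active
    particle.  Uses the addition rule rho + 1 = 2: an arrival wakes a
    sleeper."""
    assert eta.get(x, 0) not in (0, RHO), "illegal: no active particle at x"
    out = dict(eta)
    out[x] = eta[x] - 1
    vy = eta.get(y, 0)
    out[y] = 2 if vy == RHO else vy + 1
    return out

def tau_sleep(eta, x):
    """Instruction 'sleep at x': effective only when x hosts exactly one
    (active) particle; otherwise the instruction is used with no effect."""
    out = dict(eta)
    if eta.get(x, 0) == 1:
        out[x] = RHO
    return out
```

For instance, moving a particle onto a site in state ρ produces state 2 (the transition A + S → 2A), while a sleep instruction used at a site with two particles leaves the configuration unchanged.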
A site x is said to be stable for the configuration η if η(x) ∈ {0, ρ}, that is, if it hosts no active particles; otherwise it is unstable, and using an instruction at x is legal. A sequence of sites α = (x_1, ..., x_k) is legal for η if each instruction use is legal when performed in this order; we write Φ_α = Φ_{x_k} ⋯ Φ_{x_1} and we let m_α(x) be the number of times the site x appears in α. Let V be a finite subset of Z^d. A configuration η is said to be stable in V if all the sites x ∈ V are stable. We say that α is contained in V if all its elements are in V and we say that α stabilizes η in V if every x ∈ V is stable in Φ_α η. For the proof of the following lemmas we refer to [9]. Lemma 1 (Abelian Property). If α and β are both legal sequences for η that are contained in V and stabilize η in V, then m_α = m_β. In particular, Φ_α η = Φ_β η.
By the Abelian Property, we may write m_{V,η,τ}(x) for the common value m_α(x) of any legal sequence α stabilizing η in V. By monotonicity, the limit of m_{V,η,τ}(x) as V ↑ Z^d exists and does not depend on the particular sequence V ↑ Z^d.
We now introduce a probability measure on the space of instructions and of particle configurations. We denote by P the probability measure according to which, for any x ∈ Z^d and j ∈ N, independently, τ^{x,j} = τ_{xy} with probability p(y − x)/(1 + λ) and τ^{x,j} = τ_{xρ} with probability λ/(1 + λ). Finally we denote by P_ν the joint law of η and τ, where η has distribution ν and is independent of τ. The following lemma relates the dynamics of ARW to the stability properties of the representation.
The next lemma states that, by replacing an instruction "sleep" with a neutral instruction, the number of instructions used at the origin for stabilization cannot decrease. Thus, besides the τ_{xy} and τ_{xρ}, consider in addition the neutral instruction I, given by Iη = η. Given two arrays τ = (τ^{x,j})_{x,j} and τ' = (τ'^{x,j})_{x,j}, we write τ ≤ τ' if, for every x ∈ Z^d and j ∈ N, either τ'^{x,j} = τ^{x,j}, or τ'^{x,j} = I and τ^{x,j} = τ_{xρ}.

Proof of Theorem 1.1
Without loss of generality we assume m > 0 and we consider the set B_L = [−2L, 0]. The case m < 0 can be recovered by reflection symmetry. We stabilize only the particles in [−L, 0], but we consider the site −2L − 1 as the outer boundary of the set, i.e., once a particle is at a site ≤ −2L − 1 it is "lost".
Let Ñ_0^L be the number of particles in [−L, 0]. First, we "move" every particle starting in [−L, 0] until every site of [−L, 0] is either empty or hosts exactly one active particle. This means that, if a site initially hosts n > 1 particles, we move n − 1 of them until each fills an empty site. By "moving", we mean that we always use the instruction on the site where the particle is located until the particle reaches an empty site. Now, every site in [−L, 0] either hosts one particle or is empty. Let N_0^L be the number of particles in [−L, 0]. The next proposition states that, with uniformly positive probability, we lose a number of particles that is bounded from above by a quantity that does not depend on L: the sum Σ_z m_{[−L,0],η,τ}(z) over the sites z on the inner boundary of the interval [−L, 0] is tight with respect to L. Since each particle leaving [−L, 0] must perform a jump from a site of its inner boundary, the result follows.
Now every site in [−L, 0] hosts at most one particle, which is necessarily active. We stabilize the set [−L, 0] according to the following rule. Let z_0 = −L. If the site is empty, we do nothing. If z_0 hosts one particle, then we move it until one of the following events occurs: (1) the particle sleeps somewhere in [−2L, z_0], (2) the particle reaches a site x ≤ −2L − 1, (3) the particle reaches the first empty site in [z_0 + 1, 0], (4) the particle reaches a site x ≥ 0. If (3) or (4) occurs, we say that a successful jump has been performed.
As the random walk is biased to the right, we can bound the probability of a successful jump from below, uniformly, by a constant F_L. Indeed, consider a random walk (Z(j))_{j∈N} starting from Z(0) = z_0 in the following environment: if y > z_0, the walker located at y jumps to y + z with probability p(z); if y ≤ z_0, the walker jumps to y + z with probability p(z)/(1 + λ) and it sleeps with probability λ/(1 + λ). As the random walk (Z(j))_{j∈N} can sleep at any site in (z_0 − L, z_0] and as z_0 − L ≥ −2L, the probability of a successful jump in the activated random walk model cannot be smaller than F_L. Now let z_1 = z_0 + 1 and observe that every site in [z_1, 0] is either empty or hosts one active particle. Let N_1^L be the number of particles in [z_1, 0]. If z_1 hosts no particle, we do nothing. Instead, if z_1 hosts one particle, we move it as before, until one of the four events above occurs. Again, a successful jump occurs with probability at least F_L. We then define z_2 = z_1 + 1 and we continue in this way until we reach z_L. We observe that, at every step i, N_{i+1}^L = N_i^L with probability at least F_L. Thus, by using translation invariance and by Lemma 3, we conclude that ARW stays active almost surely.

Proof of Theorem 1.2
We present the proof in the case of two dimensions. The same arguments can be adapted to the case of more than two dimensions. We assume that m · e_1 > 0 and we introduce the set B_L = {(x, y) ∈ Z^2 : x ∈ [−L + 1, 0], y ∈ [−L^3, L^3]}. We order the sites of B_L by writing B_L = {z_1, z_2, ..., z_{|B_L|}}, requiring that sites with smaller x-coordinate appear first. We stabilize the set B_{2L}, but we "move" only particles which start from sites in B_L, as we want them to be "far" from the boundary of the set. By "moving", we mean that we always use the instruction on the site where the particle is located until a certain event occurs. In our stabilization procedure, we say that a particle is "good" if it occupies one of the sites that are empty in the initial configuration or if it leaves B_L by crossing the line x = 0. Because of the bias and of the order according to which particles are moved, we can provide a uniform positive lower bound for the probability of a particle being good. The general goal of the proof is to show that, if the density of empty sites in the initial configuration is less than the density of good particles, then a positive density of particles must leave B_L by crossing the line x = 0. We then use translation invariance to show that at least cL particles cross the origin with high probability for some c > 0, which in turn implies almost sure activity by Lemma 3.
The stabilization procedure is defined as follows. We consider the first site in the order, z_1 = (x_1, y_1), and we move one of its particles until one of the following events occurs: (1) the particle reaches an empty site (x, y) such that x > x_1, (2) the particle leaves B_L, or (3) the particle uses an instruction "sleep" at a site (x, y) such that x ≤ x_1.
Then, we consider the other particles on the same site and for each of them we employ the same procedure. At the next step, we consider the second site z_2 in the order and we repeat the same procedure for all its particles. We proceed in this way until all the particles have been moved once.
We let N_L be the number of particles that visit the origin at least once. Clearly, m_{B_L,η,τ}(0) ≥ N_L. In order to estimate N_L, we adapt the idea of "ghost" explorers [8, 12] to our setting. Namely, every time a particle starting from z_i = (x_i, y_i) stops at an empty site (x, y) (which, by definition of the stabilization procedure, must satisfy x > x_i), we let a ghost start from (x, y) and perform a random walk until it reaches the inner boundary of B_{2L}, i.e., ∂_i B_{2L} := {x ∈ B_{2L} s.t. ∃ y ∈ Z^2 \ B_{2L} with y ∼ x}. Ghosts do not interact with other particles. We let W_L be the number of particles visiting the origin as a ghost or as an original particle and we let R_L be the number of particles visiting the origin only as a ghost. Then N_L ≥ W_L − R_L. The variables W_L and R_L are of course dependent. We first provide sufficient conditions for E[W_L] − E[R_L] ≥ cL for some c > 0 and we then prove that such a condition implies that N_L ≥ cL/3 with high probability. We now provide an estimation of the expectations of W_L and R_L. For any z ∈ B_{2L} and for any j ∈ N, we introduce the sequence {S_{z,j}(t), Y_{z,j}(t)}_{t∈N}, where S_{z,j}(t) is a random walk with jump distribution p(·) starting from z and {Y_{z,j}(t)}_{t∈N} is an infinite sequence of independent and identically distributed random variables such that Y_{z,j}(t) = 1 with probability λ/(1+λ) and Y_{z,j}(t) = 0 with probability 1/(1+λ). We start with the estimation of E[W_L]. We let a random walk start from every particle (z, j), z = (x, y) ∈ B_L, 1 ≤ j ≤ η(z), and we count the number of walks visiting the origin before leaving B_{2L} and before using any instruction "sleep" on the set H_x := {(x', y') ∈ Z^2 : x' ≤ x}; here η is the initial particle configuration and, for z = (x, y), τ_{z,j}{·} denotes the hitting time of {·} for the random walk S_{z,j}. The (stochastic) inequality holds
as on the right-hand side we count only the walks that hit the inner boundary of B_{2L} for the first time at the origin and as, once the particle starting from (x, y) turns into a ghost somewhere, it can explore the region H_x without any restriction related to the outcome of the instructions "sleep". Thus, the condition on the right-hand side is more restrictive.
The term R_L is more difficult to handle. However, note that every ghost necessarily starts its walk from a site of B_L that is empty in the initial configuration η, due to the order according to which particles are moved. Thus, we provide a (stochastic) upper bound for R_L by letting a random walk start from every empty site and by counting the number of walks hitting the origin, without any further restriction. We denote this number by R̃_L. We now let G_k = {(x, y) ∈ Z^2 s.t. x = k} and D_k = {(x, y) ∈ Z^2 s.t. y = k}. By using independence and translation invariance, E[W_L] can be bounded from below by a sum of probabilities of disjoint events for a single random walk started from the origin (we omit any superscript for this walk); the inequality holds as the sum runs over probabilities of disjoint events and as the condition on the right-hand side is more restrictive. By the law of large numbers and as the random walk spends only a finite amount of time in H_0, the probability of the event on the right-hand side converges, as L → ∞, to the quantity F(λ, p(·)) defined in (1.1). By using the same arguments, we obtain the corresponding estimate for E[R̃_L]. These estimates imply that, whenever the density µF(λ, p(·)) of good particles exceeds the density 1 − µ of empty sites (cf. Section 2), E[W_L] − E[R_L] ≥ cL for some c > 0, hence N_L ≥ cL/3 with high probability, which in turn implies that ARW stays active almost surely by Lemma 3. Taking the limit L → ∞ concludes the proof of the theorem.
Our goal is to estimate under which conditions on µ, λ and q the following condition holds: ∃ c > 0 s.t. ∀ L ∈ N, P_ν(m_{V_L,η,τ}(0) = 0) > c. (6.1) Without loss of generality, we consider q ≤ 1/2; the case q ≥ 1/2 can be recovered by reflection symmetry. First, we consider the stabilization of [−L, −1]. If q < 1/2 and V_L = [−L, −1], it is easy to prove that (6.1) holds for any value of µ and λ.
Indeed, recall that, by Lemma 4, by erasing from the instruction array all the instructions "sleep" at sites x ≤ 0, the number of instructions used at the origin for stabilization can only increase. Then, we move the particles at sites x ≤ 0 one by one, until each of them leaves the set [−L, −1]. The trajectory of each of them follows a simple random walk without any interaction, as the instructions "sleep" have been erased. As the bias is to the left, the probability that no particle hits the origin is uniformly positive in L.
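The uniformity in L can be made concrete via the standard gambler's-ruin formula: a q-biased walk (q < 1/2) started at −k ever reaches 0 with probability (q/(1−q))^k, so the probability that no particle hits the origin is an infinite product converging to a positive limit. A small numerical sketch (the helper name and parameter values are ours):

```python
def prob_no_visit(mu, q, n_terms=400):
    """P(no particle placed with density mu at sites -1, -2, ... ever hits
    the origin) after the 'sleep' instructions left of the origin have been
    erased: independent q-biased walks (q < 1/2), gambler's-ruin hitting
    probability r**k from distance k, with r = q/(1-q) < 1.  Truncating the
    product at n_terms is harmless since r**k decays geometrically."""
    r = q / (1 - q)
    prod = 1.0
    for k in range(1, n_terms + 1):
        prod *= 1.0 - mu * r**k
    return prod
```

As expected, the bound improves (the probability grows) as the leftward bias gets stronger, i.e., as q decreases.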
Next, we consider the stabilization of [0, L] and we prove that (6.1) holds whenever µ is below an explicit function of λ and q. For this, we modify the stabilization procedure that has been developed by Rolla and Sidoravicius [9], which is sketched in Section 6.1. Our stabilization algorithm is presented in Section 6.2.

The stabilization procedure of Rolla and Sidoravicius
In this section we briefly describe the stabilization procedure that has been developed by Rolla and Sidoravicius [9]. The procedure explores a certain set of instructions of τ and identifies a suitable trap for every particle. The trap is a site where the particle finds an instruction "sleep" and turns to the S-state. The trap is chosen in such a way that, when a particle is moved to its trap, it does not wake up any of the particles that have already turned to the S-state. In the absence of a suitable trap, the algorithm fails.
If a suitable trap is found for every particle, then we say that the algorithm is successful and this implies that m_{[0,L],η,τ}(0) = 0. The goal is to prove that the probability of success is uniformly positive in L.
We let X_1 ≤ X_2 ≤ ... ≤ X_{N_L} be the positions of the particles in [0, L] at time 0, ordered from left to right, where N_L is the total number of particles in [0, L]. We assume X_1 > 0, which occurs with positive probability. We start from the leftmost particle in the set and we "explore" its putative trajectory until the origin is reached. As the exploration starts from a site which is to the right of the origin, the last "explored" instruction at any site must be "go left". The trap is defined as the leftmost instruction "sleep" among those right below the last instructions "go left". We denote by T_1 the site where the trap is located. Then, the particle is moved until such an instruction "sleep" is reached. For this, all the instructions "sleep" belonging to the set of explored instructions which are not the trap are ignored. Lemma 4 guarantees that, if instructions "sleep" of τ are ignored, then the total number of instructions that must be used at 0 to stabilize [0, L] cannot be smaller than m_{[0,L],η,τ}(0). This is important, as we need to provide sufficient conditions for m_{[0,L],η,τ}(0) = 0.
At the second step, we consider the second leftmost particle in [0, L]. Starting from X_2, we explore its putative trajectory until the site T_1 is reached. As before, we let the trap be the leftmost instruction "sleep" among those right below the last instructions "go left". We let T_2 be the site where the trap of the second particle is located. We move such a particle to its trap, ignoring all the instructions "sleep" on the way to the trap.
Moving from the left to the right, we repeat this procedure for every particle in [0, L].
The algorithm fails when no suitable trap is found for some particle. This might occur only in two cases: when we explore the putative trajectory of the particle starting from X_i, if no instruction "sleep" is found right below the last instruction "go left" at any of the explored sites, or if such an instruction "sleep" is found but it is not located to the left of X_{i+1}, then the algorithm fails.
Note that not all the instructions belonging to the explored path are "used" by the particle. If the algorithm is successful, then no particle ever visits sites hosting instructions that belong to previous explorations and that have not been used (the corrupted region).
Indeed, for all i, the region of sites explored for X_i is always to the right of the trap T_{i−1}, while the corrupted region lies on sites ≤ T_{i−1}. This is necessary in order to control the joint distribution of the outcomes of different explorations by using the independence of the instructions.

Our algorithm
The difference between our stabilization algorithm and the one developed by Rolla and Sidoravicius lies in the criterion according to which the trap is chosen. By looking only at the instructions located right below the last instruction "go left", as in the algorithm by Rolla and Sidoravicius, one ignores most of the instructions "sleep" which belong to the set of explored instructions. In order to save space, we provide a different definition of traps which takes such instructions "sleep" into account as well. This allows us to stabilize particles closer to one another than in [9].
We move from the leftmost particle in [0, L] to the right and we explore the putative trajectory of every particle, as before. Our trap is defined as the last instruction "sleep" that has been discovered during the whole exploration (without requiring it to be right below the last instruction "go left"). In order to separate the region of corrupted sites from the region of unexplored sites, we introduce barriers. The barrier is defined as the rightmost site on the explored path that has been visited after the last instruction "sleep" (see Figures 3 and 4). We let T_i and A_i be the sites where the trap and the barrier of the i-th exploration are located, respectively. Every exploration is carried out until the barrier identified at the previous step is reached. The barrier A_i must always be to the left of X_{i+1}. If during the exploration no instruction "sleep" is found, or if such an instruction is found but A_i ≥ X_{i+1}, then we declare the algorithm to have failed. Thus, the barrier separates the corrupted region from the space that is available for the next exploration. Our stabilization procedure is sensitive to the bias of the jump distribution: the weaker the bias, the larger the number of times the exploration visits the same site. This in turn implies that, the weaker the bias, the higher the chance of finding instructions "sleep" close to the previous barrier.

Probability of successful stabilization:
We let X_1 ≤ X_2 ≤ ... ≤ X_{N_L} be the positions of the particles at time 0, ordered from left to right. We let A_i and T_i be the positions of the barrier and of the trap for the particle X_i, respectively.
As success of the algorithm is a sufficient condition for m_{[0,L],η,τ}(0) = 0, it suffices to bound the probability of success from below (see (6.3)). We now prove that if µ < B(λ, q), where B(λ, q) is a function such that B(λ, q) > λ/(1+λ) for every λ and every q ∉ {0, 1}, then the right-hand side of (6.3) is uniformly positive in L.

Figure 4: Representation of the second step of the stabilization procedure. Left: the dark region represents the first exploration. The instructions below the continuous line in the non-dark region represent the second exploration. Right: representation of the second exploration as a simple random walk path. Red circles represent the steps of the path that are related to the presence of an instruction "sleep". Referring to the path in the figure as an example, according to the criterion employed in [9] the trap would be taken as the site hosting the rightmost instruction "sleep" between the two. Instead, in our algorithm the trap is identified as the site denoted by T_2 in the figure. Furthermore, the barrier is identified as the site denoted by A_2.
The probability of success of the algorithm cannot increase with L, as particles are "killed" at the boundary. Thus, for a lower bound on (6.3), we refer to the stabilization of the set [0, ∞). We claim that the position A_1 of the first barrier follows a distribution whose expectation can be characterized explicitly. To be more precise, exactly as in [9], the claim is that the probability space can be enlarged so that we can define a random variable Y_1, independent of η, whose expectation E[Y_1] has the property above and such that the first step of the construction is successful only if Y_1 ≤ X_1, in which case the position A_1 of the first barrier is given by A_1 = Y_1. Indeed, if at least one instruction "sleep" has been found in [0, X_1] before hitting the barrier A_0 = 0, we take Y_1 to be the rightmost site that has been visited starting from the last instruction "sleep" found before hitting A_0. Namely, we let S_y(t) be a random walk starting from y ∈ N and we let {R(t)}_{t∈N} be a sequence of i.i.d. random variables such that R(t) = 1 with probability λ/(1+λ) and R(t) = 0 with probability 1/(1+λ). As after any exploration step the probability to "discover" an instruction "sleep" is λ/(1+λ), independently, from the considerations above we conclude that the law of Y_1 is described as follows: denoting by τ_0^y the hitting time of the origin for the random walk S_y and by τ̂^y = max{t ≤ τ_0^y : R(t) = 1} the last time an instruction "sleep" has been found before hitting the barrier A_0, the law of Y_1 is the limit, as y → ∞, of the law of max{x ∈ N s.t. S_y(t) = x for some t with τ̂^y ≤ t < τ_0^y}, where the limit follows from the Markov property. Instead, if no instruction "sleep" is found before hitting A_0, the first step of the construction fails.
Thus, for any k ∈ N, P_ν(Y_1 = k) = lim_{y→∞} P_ν(max{x ∈ N s.t. S_y(t) = x for some t with τ̂^y ≤ t < τ_0^y} = k). In particular, by using standard probability tools, one can prove that, for any λ ∈ (0, ∞), B(λ, q) is strictly increasing with respect to q on [0, 1/2), and one can derive its analytical expression, which is plotted in Figures 1 and 2 for some values of λ and q.
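The law of Y_1 described above can be sampled directly by approximating the y → ∞ limit with a large finite starting point; the cutoff, sample sizes and helper name below are our own numerical choices, intended only as a sketch.

```python
import random

def sample_Y1(q, lam, y_start=300, rng=None):
    """One (approximate) sample of the barrier variable Y_1: run a
    q-biased walk (q < 1/2) from y_start down to 0, flip an independent
    lam/(1+lam) coin at every step ('an instruction sleep is discovered'),
    and return the rightmost site visited from the last head before
    hitting 0 onwards.  Returns None if no head occurs, an event of
    vanishing probability as y_start grows."""
    rng = rng or random.Random()
    z, t = y_start, 0
    path = [z]
    last_head = None
    while z > 0:
        if rng.random() < lam / (1 + lam):
            last_head = t               # an instruction "sleep" is discovered here
        z += 1 if rng.random() < q else -1
        t += 1
        path.append(z)
    if last_head is None:
        return None
    return max(path[last_head:])        # rightmost site visited after the last "sleep"
```

Averaging such samples for various q gives a numerical impression of the monotonicity of B(λ, q) in the bias, though the exact analytical expression plotted in Figures 1 and 2 is derived in the paper, not from this sketch.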

Concluding remarks
We shall end this article with a few comments related to our work. First of all, our results show that, in the case of a biased jump distribution, by "stabilizing" the interval [−L, L], the expected number of visits at the origin is at least linear in L for any µ > µ_1, where µ_1 is some number with µ_1 ≥ µ_c. On the other hand, such a number is bounded from above by the number of visits in the case of no interaction (λ = 0), which is linear in L for any µ ∈ (0, ∞). Hence, it is reasonable to conjecture that E_ν[m_{[−L,L],η,τ}(0)] = O(L) for any µ > µ_c.
The question whether µ c < 1 has received considerable attention recently. In their recent article [10], Rolla and Tournier consider ARW with biased jump distribution on Z d and they prove that µ c (λ) → 0 as λ → 0 even when d ≥ 2. Concerning the case of unbiased jumps, the question whether µ c < 1 for any λ is still open in wide generality.
The only positive answer to such a question has been provided by Stauffer and Taggi [11] on graphs where the random walk has a positive speed. The simpler question of µ c < 1 for λ small enough has been positively answered by Basu