Collisions of several walkers in recurrent random environments

We consider d independent walkers on Z, m of them performing simple symmetric random walk and r = d − m of them performing recurrent RWRE (Sinai walk), in I independent random environments. We show that the product is recurrent, almost surely, if and only if m ≤ 1 or m = d = 2. In the transient case with r ≥ 1, we prove that the walkers meet infinitely often, almost surely, if and only if m = 2 and r ≥ I = 1. In particular, while I has no influence on recurrence or transience, it does play a role for the probability of infinitely many meetings. To obtain these statements, we prove two subtle localization results for a single walker in a recurrent random environment, which are of independent interest.


Introduction and statement of the main results
Recurrence and transience of products of simple symmetric random walks on Z d is wellknown since the works of Pólya [P21]. If the product of several walks is transient, one may ask if they meet infinitely often. It is also well-known and goes back to Dvoretzky and Erdös, see ( [DE51], p. 367) that 3 independent simple symmetric random walks (SRW) in dimension 1 meet infinitely often almost surely while 4 walks meet only finitely often, almost surely. In fact, Pólya's original interest in recurrence/transience of simple random walk came from a question about collisions of two independent walkers on the same grid, see [P84], "Two incidents".
The classical topic of meetings/collisions of two or more walkers on the same graph has found recent interest, see [KP04], [BSP12], where the grid is replaced by more general graphs. It is well known that if a graph is recurrent for simple random walk, two independent walkers do not necessarily meet infinitely often, see [KP04]. Since on a transitive recurrent graph two independent walkers do meet infinitely often, almost surely, see [KP04], the "infinite collision property" describes how far a recurrent graph is from being transitive. For motivation from physics, see [CC12].
We investigate this question for products of recurrent random walks in random environment (RWRE) and of simple symmetric random walks on Z. It is already known that, for any n, a product of n independent RWRE in n i.i.d. recurrent random environments is recurrent, see [Z01], and that n independent walkers in the same recurrent random environment meet infinitely often at the origin, see [GKP14]. Here, we consider several walkers, each performing either a Sinai walk or a simple symmetric random walk, with the additional twist that not all Sinai walkers necessarily use the same environment.
We set Y_n := (S^{(1)}_n, ..., S^{(m)}_n, Z^{(1)}_n, ..., Z^{(r)}_n), n ∈ N, and make the following assumptions. Given ω = (ω^{(1)}, ..., ω^{(r)}) and x ∈ Z^d, under P^x_ω, the processes S^{(1)}, ..., S^{(m)}, Z^{(1)}, ..., Z^{(r)} are independent Markov chains such that P^x_ω(Y_0 = x) = 1 and, for all y ∈ Z and n ∈ N,

P^x_ω[S^{(i)}_{n+1} = y ± 1 | S^{(i)}_n = y] = 1/2,   1 ≤ i ≤ m,
P^x_ω[Z^{(j)}_{n+1} = y + 1 | Z^{(j)}_n = y] = ω^{(j)}_y = 1 − P^x_ω[Z^{(j)}_{n+1} = y − 1 | Z^{(j)}_n = y],   1 ≤ j ≤ r.   (2)

Note that, for every j, Z^{(j)} = (Z^{(j)}_n)_n is a random walk on Z in the environment ω^{(j)}, and that the S^{(i)}'s are independent SRW, independent of the Z^{(j)}'s and of their environments. We call P_ω := P^0_ω the quenched law. Here and in the sequel we write 0 for the origin in Z^d. We also define the annealed law as follows: P[·] := ∫ P_ω[·] P(dω).
Setting ρ^{(j)}_k := (1 − ω^{(j)}_k)/ω^{(j)}_k for j ∈ {1, ..., r} and k ∈ Z, we assume moreover that there exists ε_0 ∈ (0, 1/2) such that for every j ∈ {1, ..., r},

P(ε_0 ≤ ω^{(j)}_0 ≤ 1 − ε_0) = 1,   E[log ρ^{(j)}_0] = 0   and   σ_j^2 := E[(log ρ^{(j)}_0)^2] > 0,

where E is the expectation with respect to P. Under these assumptions, the Z^{(j)} are recurrent RWRE, often called Sinai's walks due to the famous result of [S82]. Solomon [S75] proved the recurrence of Z^{(j)} for P-almost every environment. We stress in particular that the assumption σ_j^2 > 0 excludes the case of deterministic environments; hence, when we say "Sinai's walk", we always refer to a random walk in a "truly" random environment.
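As a quick illustration of these assumptions (not part of the paper), one can simulate a Sinai walk in a sampled environment; the uniform law below is just one convenient choice, since any law of ω_0 with values in [ε_0, 1 − ε_0] and symmetric under ω ↔ 1 − ω satisfies E[log ρ_0] = 0 and σ² > 0:

```python
import random

def sample_environment(n_sites, eps0=0.1, rng=random):
    # i.i.d. environment with omega_x uniform on [eps0, 1 - eps0]; the law is
    # symmetric under omega <-> 1 - omega, so E[log rho_0] = 0 and
    # Var(log rho_0) > 0 (Sinai's regime); eps0 and the law are illustrative
    return {x: rng.uniform(eps0, 1 - eps0) for x in range(-n_sites, n_sites + 1)}

def run_walk(n_steps, omega, rng=random):
    # quenched dynamics: from z, step to z + 1 with probability omega_z, else to z - 1
    z, path = 0, [0]
    for _ in range(n_steps):
        z = z + 1 if rng.random() < omega[z] else z - 1
        path.append(z)
    return path
```

A typical trajectory stays confined near the bottom of a deep valley of the potential for very long stretches of time, which is the localization phenomenon exploited throughout the paper.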
Our first result concerns the recurrence/transience of Y := (Y_n)_n. Recurrence of Y means that S^{(1)}, ..., S^{(m)}, Z^{(1)}, ..., Z^{(r)} meet simultaneously at 0 infinitely often. As explained previously, this result is known for SRW (i.e. if m = d) since [P21] and more recently for RWRE (i.e. if r = d, that is, if m = 0), both in the case where the environments ω^{(j)} are independent (i.e. I = r = d, see [Z01, GKP14]) and in the case where the environment ω^{(j)} is the same for all the RWRE (i.e. r = d, I = 1, see [GKP14]). See also [Ga13] for related results.
Theorem 1.1. If m ≤ 1, or if m = d = 2, then, for P-almost every ω, the random walk Y is recurrent with respect to P 0 ω . Otherwise, for P-almost every ω, the random walk Y is transient with respect to P 0 ω .
In particular, a product of two recurrent RWRE and one SRW is recurrent, while a product of two SRW and one recurrent RWRE is transient.
When Y is transient, a natural question is the study of the simultaneous meetings (i.e., collisions) of S^{(1)}, ..., S^{(m)}, Z^{(1)}, ..., Z^{(r)}. That is, we would like to extend the results of [P21, DE51] to the case in which some of the random walks are in random environments (when r ≥ 1). We recall that when r = 0, the number of collisions is, by [P21, DE51], almost surely infinite if m ≤ 3 and almost surely finite when m ≥ 4. Interestingly, compared to Theorem 1.1, the behaviour depends on whether I = 1 (all the RWRE are in the same environment) or I ≥ 2 (at least two RWRE are in independent environments).
Theorem 1.2. Assume r ≥ 1. We distinguish the 3 following different cases.
(i) If m ≥ 3, then, P-almost surely, S^{(1)}, ..., S^{(m)}, Z^{(1)}, ..., Z^{(r)} meet simultaneously only finitely often.
(ii) If m = 2 and r ≥ I = 1, then, P-almost surely, S^{(1)}, S^{(2)}, Z^{(1)}, ..., Z^{(r)} meet simultaneously infinitely often.
(iii) If m = 2 and I ≥ 2, then, P-almost surely, S^{(1)}, S^{(2)}, Z^{(1)}, ..., Z^{(r)} meet simultaneously only finitely often.
This last result can be summarized in the following manner. Assume that r ≥ 1 and that Y is transient (i.e. m ≥ 2 and r ≥ 1); then S^{(1)}, ..., S^{(m)}, Z^{(1)}, ..., Z^{(r)} meet simultaneously infinitely often if and only if m = 2 and I = 1. Hence our results cover collisions of an arbitrary number of random walks in equal or independent random (or deterministic) recurrent environments.
Remark 1.3. The results of Theorem 1.2 remain true if the simple random walks are replaced by random walks on Z with i.i.d. centered increments with finite and strictly positive variance. However, we write the proof of this theorem only in the case of SRW to keep the proof more readable and less technical.
The case of transient RWRE in the same subballistic random environment is investigated in [DGP18] (in preparation).
In order to demonstrate Theorem 1.2, we prove the two following propositions. The first one deals with two independent recurrent RWRE in two independent environments.

Proposition 1.4. Let ε > 0. For every n large enough,

P[Z^{(1)}_n = Z^{(2)}_n] ≤ (log n)^{−2+ε}.
The second proposition deals with r independent recurrent RWRE in the same environment.
Proposition 1.5. Assume r > I = 1. For P-almost every ω, there exists c(ω) > 0 such that, for every (y_1, ..., y_r) ∈ (2Z)^r ∪ (2Z + 1)^r, we have

These two propositions are based on two new localization results for recurrent RWRE, which are of independent interest. These localization results use the potential of the environment (see (5)) and its valleys; these quantities were introduced by Sinai in [S82] and are crucial for the investigation of the RWRE.
In the first one, stated in Proposition 4.5 and used to prove Proposition 1.4, we localize a recurrent RWRE at time n with (annealed) probability 1 − (log n)^{−2+ε} for ε > 0, whereas previous localization results for such RWRE held with probability 1 − o(1) (see [S82], [G84], [KTT89], [BF08] and [F15]), or with probability 1 − C (log log log n / log log n)^{1/2} for some C > 0 (see [A05], eq. (2.23)), and they localize the RWRE inside one valley. In order to get our more precise localization probability, which is necessary to apply the Borel-Cantelli lemma in the proof of Item (iii) of Theorem 1.2, we localize the RWRE in an area of low potential defined with several valleys instead of just one. To this aim, we study and describe the typical trajectories of the recurrent RWRE in these different valleys.
In our second localization result, stated in Proposition 5.1 and used to prove Proposition 1.5, we prove that for large N ∈ N, with high probability on ω (for P), the quenched probability P_ω[Z_n = b(N)] is larger than a positive constant, uniformly for any even n ∈ [N^{1−ε}, N] for some ε > 0, where b(N) is the (even) bottom of some valley of the potential V of a recurrent RWRE Z (defined in (77)). In order to get this uniform estimate, we use a method different from that of previous localization results, based on a coupling between recurrent RWRE.
The article is organized as follows. In Section 2, we give an estimate on the return probability of recurrent RWRE, see Proposition 2.1, which is of independent interest. Our main results for direct products of walks are proved in Section 3. The proofs concerning the simultaneous meetings of random walks are based on the above-mentioned two key localization results for recurrent RWRE, proved in Sections 4 and 5.

A return probability estimate for the RWRE
We consider a recurrent one-dimensional RWRE Z = (Z_n)_n in the random environment ω = (ω_x)_{x∈Z}, where the ω_x ∈ (0, 1), x ∈ Z, are i.i.d. (that is, Z_0 = 0 and (2) is satisfied with Z and ω instead of Z^{(j)} and ω^{(j)}). We assume the existence of ε_0 ∈ (0, 1/2) such that

P(ε_0 ≤ ω_0 ≤ 1 − ε_0) = 1,   (3)
E[log ρ_0] = 0 and σ^2 := E[(log ρ_0)^2] > 0,   (4)

where ρ_k := (1 − ω_k)/ω_k, k ∈ Z. The following result completes [GKP14, Theorem 1.1], which says that, for every 0 ≤ ϑ < 1, we have, for P-almost every environment ω,

Σ_{n≥1} n^{−ϑ} P^0_ω[Z_{2n} = 0] = +∞.

Proposition 2.1. For P-almost every environment ω,

Σ_{n≥1} n^{−1} P^0_ω[Z_{2n} = 0] < +∞.

Before proving this result, we introduce some more notation. First, let

τ(x) := inf{n ≥ 0 : Z_n = x},   x ∈ Z.

In words, τ(x) is the hitting time of the site x by the RWRE Z. As usual, we consider the potential V, which is a function of the environment ω and is defined on Z as follows:

V(x) := Σ_{k=1}^{x} log ρ_k if x ≥ 1,   V(0) := 0,   V(x) := −Σ_{k=x+1}^{0} log ρ_k if x ≤ −1.   (5)

Moreover, recalling ε_0 from (3) and (4), we have the quenched expectation bound (6), where E^b_ω denotes the expectation with respect to P^b_ω and u ∧ v := min(u, v), (u, v) ∈ R^2. For symmetry reasons, we also have the analogous bounds (7) and (8). Moreover, we have, for k ≥ 1, the tail estimate (9) (see Golosov [G84], Lemma 7), and, by symmetry, we get (10) (similarly as in Shi and Zindy [SZ07], eq. (2.5), but with some slight differences in the values of the constants).

Lemma 2.2. Let γ > 0. For P-almost every ω, there exists N(ω) such that the stated inequalities hold for every n ≥ N(ω), and such that the same inequalities hold with {−n, . . . , 0} instead of {0, . . . , n}.
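The potential can be computed directly from a finite stretch of environment; a minimal sketch (the constant environment in the test below is purely illustrative):

```python
import math

def potential(omega, x):
    # V(x) = sum_{k=1}^{x} log rho_k for x >= 1, V(0) = 0,
    # V(x) = -sum_{k=x+1}^{0} log rho_k for x <= -1, rho_k = (1 - omega_k) / omega_k
    log_rho = lambda k: math.log((1 - omega[k]) / omega[k])
    if x > 0:
        return sum(log_rho(k) for k in range(1, x + 1))
    if x < 0:
        return -sum(log_rho(k) for k in range(x + 1, 1))
    return 0.0
```

In particular V(x) − V(x − 1) = log ρ_x for every x ∈ Z, so V itself is a random walk in the space variable, which is what makes the law of the iterated logarithm and the Brownian approximation of later sections available.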
Proof. Observe that it is enough to prove that, P-almost surely, the inequalities (11) hold if n is large enough (up to a change of log ρ_i into −log ρ_i, into log ρ_{1−i} or into −log ρ_{1−i}). The first inequality of (11) is given by [H65, Theorem 2]. The second inequality of (11) is a consequence of the law of the iterated logarithm for V, as explained in [C01], end of p. 248.
Proof of Proposition 2.1. Let η ∈ (0, 1) and n ≥ 2. We define two levels z_+ > 0 > z_−, depending on n and ω. Due to the previous lemma, choosing γ small enough, we have that, P-almost surely, if n is large enough, the inequalities (12) hold. We have, by the strong Markov property, the decomposition (13). Recall that, given ω, the Markov chain Z is an electrical network where, for every x ∈ Z, the conductance of the bond (x, x + 1) is C_{(x,x+1)} = e^{−V(x)} (in the sense of Doyle and Snell [DS84]). In particular, the reversible measure µ_ω (unique up to multiplication by a constant) is given by

µ_ω(x) = e^{−V(x−1)} + e^{−V(x)},   x ∈ Z.

So we have (15). Moreover, due to (7) and to Markov's inequality, the corresponding hitting times are controlled. Now, using (12), P-almost surely, we have

P^0_ω[τ(z_+) > n, τ(z_−) > n] ≤ ε_0^{−1} n^{−1} (log n)^{4−2η} exp((log n)^{1−η/10})

for every n large enough. This, combined with (13), (15) and e^{−V(−1)} ≤ ε_0^{−1}, gives, P-almost surely, an upper bound on P^0_ω[Z_{2n} = 0] for large n which is summable against n^{−1}; hence Σ_n n^{−1} P^0_ω[Z_{2n} = 0] < ∞ P-almost surely, which ends the proof of Proposition 2.1.
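The electrical-network correspondence used here (conductance C_{(x,x+1)} = e^{−V(x)}, reversible measure µ_ω(x) = e^{−V(x−1)} + e^{−V(x)}) can be verified numerically: the transition probability computed from the conductances must equal ω_x. A sketch on an illustrative periodic environment (not from the paper):

```python
import math

def check_reversibility(omega, xs):
    # potential V from (5): V(x) - V(x-1) = log rho_x, rho_x = (1 - omega_x) / omega_x
    lo, hi = min(xs) - 1, max(xs) + 1
    V = {0: 0.0}
    for x in range(1, hi + 1):
        V[x] = V[x - 1] + math.log((1 - omega[x]) / omega[x])
    for x in range(0, lo, -1):
        V[x - 1] = V[x] - math.log((1 - omega[x]) / omega[x])
    ok = True
    for x in xs:
        # reversible measure: sum of the conductances of the two adjacent bonds
        mu = math.exp(-V[x - 1]) + math.exp(-V[x])
        # probability of stepping right, computed from the conductances
        p_right = math.exp(-V[x]) / mu
        ok = ok and abs(p_right - omega[x]) < 1e-9
    return ok
```

The identity e^{−V(x)}/(e^{−V(x−1)} + e^{−V(x)}) = 1/(1 + ρ_x) = ω_x holds exactly, so the check passes up to floating-point error.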

Direct products of walks
We start with a proof of Theorem 1.1. With a slight abuse of notation, we will write 0 for the origin in Z^k, whatever k is.
Proof. 1. If m ≥ 1 and r = 0, then (Y_n)_n is a product of m independent simple random walks on Z. It is well known that it is recurrent if m ∈ {1, 2}, and transient if m ≥ 3. This follows from elementary calculations and the crucial fact that any irreducible Markov chain (G_n)_n is recurrent if and only if

Σ_{n≥0} P[G_n = x] = +∞,   (16)

where x is one of the states of the Markov chain.

2. If m ≥ 3 and r ≥ 1, then the 3-tuple of the first three coordinates of (Y_n)_n is (S^{(1)}_n, S^{(2)}_n, S^{(3)}_n)_n, which is a product of 3 independent simple random walks on Z, hence is transient. So (Y_n)_n is transient for P-almost every ω.

3. If m = 2 and r ≥ 1, then, applying the local limit theorem (see e.g. Lawler and Limic [LL10], Prop. 2.5.3) to S^{(1)} and S^{(2)}, we get for n ∈ N*,

P^0_ω[S^{(1)}_n = 0] P^0_ω[S^{(2)}_n = 0] ≤ c/n,

where c > 0 is a constant. This and Proposition 2.1 yield Σ_{n=0}^∞ P^0_ω[Y_n = 0] < ∞ for P-almost every ω. Hence (using the Borel-Cantelli lemma or (16)), (Y_n)_n is P-almost surely transient.

4. We now assume m ∈ {0, 1}. We choose some δ ∈ (0, 1/5) such that 3δr < (1 − 2δ)/2. We denote by ⌊x⌋ the integer part of x ∈ R. For L ∈ N, we have the lower bound (17). Due to [GKP14] (Propositions 3.2, 3.4 and (3.22)), since δ ∈ (0, 1/5), there exist C(δ) > 0 and a sequence (Γ(L, δ))_{L∈N} of events in F (that is, depending only on ω), of probability tending to 1, such that, for every L ∈ N, on Γ(L, δ), we have

∀i ∈ {1, . . . , r}, ∀k_i ∈ {⌊e^{3δL}⌋ + 1, · · · , ⌊e^{(1−2δ)L}⌋},   the lower bound (18).

Due to the local limit theorem, this gives, on Γ(L, δ), for L large enough so that ⌊e^{(1−2δ)L}/2⌋ ≥ e^{3δL}, a lower bound which goes to infinity as L goes to infinity due to our choice of δ, c_1(δ) being a positive constant. Thanks to (17), this gives Σ_{n≥0} P^0_ω[Y_n = 0] = +∞ for P-almost all ω. Consequently, due to (16), (Y_n)_n is recurrent for P-almost every environment ω.
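Step 3 relies on the local limit theorem for the SRW; the (πn)^{−1/2} decay of the return probability can be checked against the exact binomial formula (an illustrative computation, not from the paper):

```python
import math

def p_return(n):
    # exact return probability of simple symmetric random walk at time 2n:
    # P[S_{2n} = 0] = binom(2n, n) / 2^{2n}
    return math.comb(2 * n, n) / 4 ** n

# local limit theorem: p_return(n) * sqrt(pi * n) -> 1 as n grows
```

Multiplying two such factors gives the c/n bound used in step 3, and cubing it gives the n^{−3/2} decay behind the Dvoretzky-Erdős trichotomy recalled in the introduction.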
Remark 3.1. Recall that Sinai [S82] (see also Golosov [G84]) proved the convergence in distribution of Z_n/(log n)^2 under the annealed law.

Proof of Theorem 1.2.

Proof of (i). Assume m = 3 and r = 1. Observe that for large n,

P^0[S^{(1)}_n = S^{(2)}_n = S^{(3)}_n = Z^{(1)}_n] ≤ C n^{−3/2}

for some C > 0, since for every k ∈ Z and n ∈ N, P[S^{(1)}_{2n} = 2k] ≤ P[S^{(1)}_{2n} = 0] ∼_{n→+∞} (πn)^{−1/2} due to the local limit theorem. Hence Σ_n P^0[S^{(1)}_n = S^{(2)}_n = S^{(3)}_n = Z^{(1)}_n] < ∞, and (i) follows by the Borel-Cantelli lemma in this case, and a fortiori when m ≥ 3 and r ≥ 1.

Proof of (ii). Assume m = 2 and r ≥ I = 1. Since I = 1, all the RWRE evolve in the same environment; this is necessary to apply Proposition 1.5, which is essential for the proof of (ii). We use the generalization of the second Borel-Cantelli lemma due to Kochen and Stone [KS64], combined with a result by Doob. To simplify notation, we also write ω for ω^{(1)}, so that ω^{(i)} = ω for every 1 ≤ i ≤ r.
We first prove that Σ_n P_ω[A_n] = ∞ a.s. More precisely, we fix an initial condition x = (x_1, x_2, y_1, ..., y_r) ∈ (2Z)^{2+r} ∪ (2Z + 1)^{2+r}. We have, for all n and ω, a lower bound on P^x_ω[A_n], and consequently, for large even n, for every ω, the estimate leading to (19); this remains true for large odd n. Hence (19) holds for large n. Recall that (Z^{(1)}_n/(log n)^3)_n converges almost surely to 0 with respect to the annealed law (see [DR86], Theorem 4, or more recently [HS98], Theorem 3). This also holds true under P^{y_1}_ω for P-almost every ω, so the last probability in (19) goes to 0 as n → +∞, which yields (20) with the constant c(ω). If r = 1, then c(ω) = 1. If r > 1 then, due to Proposition 1.5, c(ω) > 0 for P-almost every environment ω. This implies that Σ_n P_ω[A_n] = ∞ for P-almost every ω.

Moreover, let C > 0 be such that for all n ≥ 1 and k ∈ Z, P[S^{(1)}_{2n} = 2k] ≤ C n^{−1/2}; such a constant exists since P[S^{(1)}_{2n} = 0] ∼_{n→+∞} (πn)^{−1/2} by the local limit theorem. So, for 1 ≤ n < m, we have, by the Markov property, an upper bound on P_ω[A_n ∩ A_m]. Consequently, for large N, we can compare Σ_{1≤n,m≤N} P_ω[A_n ∩ A_m] with (Σ_{n≤N} P_ω[A_n])^2. Applying this and (20), we get the required second-moment estimate for P-almost every ω and every initial condition x. Due to the Kochen-Stone extension of the second Borel-Cantelli lemma (see Item (iii) of the main theorem of [KS64]), applied with the events A_n, (ii) follows for P-almost every environment ω.

Proof of (iii). Assume m = 2 and r = I = 2. We have, due to Proposition 1.4 and the local limit theorem, Σ_n P^0[A_n] < ∞, and (iii) follows due to the Borel-Cantelli lemma.
So it only remains to prove Propositions 1.4 and 1.5.

Probability of meeting for two independent recurrent RWRE in independent environments
The aim of this section is to prove Proposition 1.4, which is a key result in the proof of case (iii) of Theorem 1.2.
The main idea of the proof is that Z^{(1)}_n and Z^{(2)}_n are localized with high (annealed) probability in two areas (depending on the environments, see Proposition 4.5) which have no common point with high probability (see Lemma 4.6). Due to [S82], we know that, with high probability, Z^{(i)}_n is close to the bottom B^{(i)}_n of some valley (containing 0 and of height larger than log n) for the potential V^{(i)}. Here and in the following, V^{(i)} denotes the potential corresponding to ω^{(i)}, defined as in (5) with ω replaced by ω^{(i)}. An intuitive idea to prove Proposition 1.4 would then be to show that p_n := P[B^{(1)}_n = B^{(2)}_n] is very small; more precisely, we would like to prove that p_n = O((log n)^{−1−ε}). (In view of the proof of (iii) above, it would suffice to show that Σ_n p_n/n < ∞.) However, this seems difficult to prove, and we are not even sure that it is true. Indeed, in view of Lemma 4.4 below (proved for a continuous approximation W^{(i)} ≈ V^{(i)}), we think that, with probability greater than 1/log n, 0 belongs to a valley of height between log n − 2 log log n and log n, and that the annealed probability that Z^{(i)}_n is close to the bottom of this valley (which is not B^{(i)}_n) should be greater than 1/log n. Hence, to prove Proposition 1.4, we work with several valleys instead of a single one.
4.1. Proof of Proposition 1.4. In this subsection, we use a Brownian motion W^{(i)} approximating the potential V^{(i)}, and we localize Z^{(i)}_n in a domain Ξ_n(W^{(i)}) built from the valleys of W^{(i)}. This localization is stated in Proposition 4.5 and is crucial to prove Proposition 1.4.
In order to construct our localization domain Ξ_n(W^{(i)}), we use the notion of h-extrema, defined as follows.
Definition 4.1 ([NP89]). If w : R → R is a continuous function and h > 0, we say that y_0 ∈ R is an h-minimum for w if there exist real numbers a and c such that a < y_0 < c, w(y_0) = min_{[a,c]} w, w(a) ≥ w(y_0) + h and w(c) ≥ w(y_0) + h. We say that y_0 is an h-maximum for w if y_0 is an h-minimum for −w. In any of these two cases, we say that y_0 is an h-extremum for w.
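On a discrete path, the defining property of an h-minimum (the path reaches level w(y_0) + h on both sides of y_0 before going strictly below w(y_0)) can be tested naively; a sketch, not from the paper:

```python
def is_h_minimum(w, y0, h):
    # y0 is an h-minimum for the discrete path w (a list of floats) if, on each
    # side of y0, the path reaches level w[y0] + h before going strictly below w[y0];
    # this is equivalent to the existence of a < y0 < c as in Definition 4.1
    def side_ok(indices):
        for x in indices:
            if w[x] < w[y0]:
                return False
            if w[x] >= w[y0] + h:
                return True
        return False
    return side_ok(range(y0 + 1, len(w))) and side_ok(range(y0 - 1, -1, -1))
```

Applied to a sampled potential, scanning all sites with this test recovers the valley structure (alternating h-minima and h-maxima) that the next definitions formalize.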
We also use the following notation.
Definition 4.2. As in [C05], we denote by W the set of functions w : R → R such that the three following conditions are satisfied: (a) w is continuous on R; (b) for every h > 0, the set of h-extrema of w can be written {x_k(w, h), k ∈ Z}, with (x_k(w, h))_{k∈Z} strictly increasing, unbounded from below and above, and with x_0(w, h) ≤ 0 < x_1(w, h), a notation that we use in the rest of the paper on W; (c) for all k ∈ Z and h > 0, consecutive h-extrema are alternately h-minima and h-maxima.

We now introduce, for w ∈ W, i ∈ Z and h > 0, the restriction of w − w(x_i(w, h)) to the interval [x_i(w, h), x_{i+1}(w, h)]; it is denoted by T_i(w, h) and is called an h-slope, as in [C05]. If x_i(w, h) is an h-minimum (resp. h-maximum), then T_i(w, h) is a nonnegative (resp. nonpositive) function, and its maximum (resp. minimum) is attained at x_{i+1}(w, h). We also introduce the height of each slope; the interval around an h-minimum x_i(w, h) will sometimes be called a valley of height at least h and of bottom x_i(w, h). The height of this valley is defined as min{w(x_{i−1}(w, h)), w(x_{i+1}(w, h))} − w(x_i(w, h)). These h-extrema are useful to localize RWRE and diffusions in a random potential. Indeed, a diffusion in a two-sided Brownian potential W (resp. in a (−κ/2)-drifted Brownian potential W_κ with 0 < κ < 1) is localized at large time t, with high probability, in a small neighbourhood of the bottom of a valley, see [C05] and [C08] (resp. [AD15]). For some applications to recurrent RWRE, see e.g. [BF08] and [D14].
Let C_1 > 2 and α > 2. Define log^{(2)} x := log log x for x > 1. As in [D14], we use the Komlós-Major-Tusnády almost sure invariance principle [KMT75], which ensures the following:

Lemma 4.3. Up to an enlargement of (Ω, F, P), there exist two independent two-sided Brownian motions W^{(1)} and W^{(2)} and a real number C̃_1 > 0 such that the stated approximation holds for all n large enough.

Proof. Notice that V^{(1)} and V^{(2)} are independent, since ω^{(1)} and ω^{(2)} are independent. Due to [KMT75, Thm. 1], there exist positive constants a, b and c such that for N ∈ N, up to an enlargement of (Ω, F, P), there exist two independent two-sided Brownian motions satisfying (24). Applying this result to N := ⌊(log n)^α⌋ + 1 and x := (log(2b) + C_1 log^{(2)} n)/c, and taking C̃_1 large enough, gives the desired bound for all n large enough. This combined with (24) proves the lemma.
In the rest of the paper, we use the Brownian motions W^{(i)} introduced in Lemma 4.3, and we will use the valleys of the W^{(i)}. Fix some C_2 ≥ 2α + 2 + 10C_1, and let h_n be defined as in (25). We know from [C05, Lemma 8] that W^{(i)} ∈ W almost surely (recall Definition 4.2). Moreover, using [HS98] and applying it several times to W^{(i)} and −W^{(i)} with t = (log n)^α/10 and b = h_n, the following holds with a probability given in (26). The following lemma shows that Proposition 1.4 is more subtle than it may seem at first sight.
Lemma 4.4. Let W be a two-sided standard Brownian motion and σ > 0. Then, for every n large enough, the estimates (27), (28) and (29) hold.

In particular, the probability that the height of the central valley for W^{(i)} is less than log n is not negligible. However, with large enough probability, all the valleys close to 0, except maybe one, are large, with height greater than log n + C_2 log^{(2)} n.
Proof of Lemma 4.4. Let h̃_n := h_n − 2C_2 log^{(2)} n. First, due to [NP89, Prop. 1] (see also [C05], eq. (8)), e(T_i(σW, h̃_n))/h̃_n is, for i ≠ 0, an exponential random variable with mean 1. Consequently, for i ≠ 0 and large n, the corresponding probability estimate holds. Observe that e(T_0(σW, h̃_n))/h̃_n is, by scaling, equal in law to e(T_0(W, 1)), which has a density equal to (2x + 1)e^{−x} 1_{(0,∞)}(x)/3 due to [C05], formula (11). Hence the analogous estimate holds for large n. This remains true if h̃_n is replaced by h_n. These inequalities already prove (27) and (29). Moreover, due to [NP89, Prop. 1], the slopes T_i(σW, h̃_n), i ∈ Z, are independent up to their sign, so the random variables H(T_i(σW, h̃_n)), i ∈ Z, are independent. This and the previous inequalities lead to (28).
Because of (27), it seems reasonable to consider strictly more than one valley of height at least h_n if we want to localize a recurrent RWRE with probability ≥ 1 − (log n)^{−2+ε} for ε > 0. We first introduce some notation. Let, for i ∈ {1, 2}, j ∈ Z and n ≥ 3,

Ξ_{n,j}(W^{(i)}) := {k ∈ [M_{j−1}(W^{(i)}, h_n), M_j(W^{(i)}, h_n)] : W^{(i)}(k) ≤ W^{(i)}(b_j(W^{(i)}, h_n)) + C_2 log^{(2)} n}.

Loosely speaking, Ξ_{n,j}(W^{(i)}) is the set of points with low potential in the j-th valley for W^{(i)}. We also define Ξ_n(W^{(i)}) := ∪_{j=−2}^{2} Ξ_{n,j}(W^{(i)}). In Proposition 4.5 (proved in Section 4.2), we localize the RWRE Z^{(i)} in a set of points which are close to the bottoms b_j(·) "vertically", instead of "horizontally" as in Sinai's theorem (see [S82]).
Proposition 4.5. Let ε > 0 and i ∈ {1, 2}. For all n large enough, we have

P[Z^{(i)}_n ∈ Ξ_n(W^{(i)})] ≥ 1 − (log n)^{−2+ε}.

Proposition 1.4 is then an easy consequence of Proposition 4.5 and of the following estimate on the environments.
Lemma 4.6. Let ε > 0. For large n,

P[Ξ_n(W^{(1)}) ∩ Ξ_n(W^{(2)}) ≠ ∅] ≤ (log n)^{−2+ε}.

Proof of Lemma 4.6. First, let k ∈ Ξ_n(W^{(i)}) for some i ∈ {1, 2} and n ≥ 3. Hence k ∈ [M_{j−1}(W^{(i)}, h_n), M_j(W^{(i)}, h_n)] and W^{(i)}(k) ≤ W^{(i)}(b_j(W^{(i)}, h_n)) + C_2 log^{(2)} n for some j ∈ {−2, −1, 0, 1, 2}. By definition of h_n-minima, we notice that the two Brownian motions (W^{(i)}(k + x) − W^{(i)}(k))_{x≥0} and (W^{(i)}(k − x) − W^{(i)}(k))_{x≥0} hit h_n − C_2 log^{(2)} n before −2C_2 log^{(2)} n. By independence, it follows that, for n large enough, for every k ∈ Z and i ∈ {1, 2}, the probability that k ∈ Ξ_n(W^{(i)}) is suitably small. Consequently, since W^{(1)} and W^{(2)} are independent, we have, uniformly in k ∈ Z, a bound on P[k ∈ Ξ_n(W^{(1)}) ∩ Ξ_n(W^{(2)})]. Finally, (26) applied with 2 + ε > 2 instead of α, and (30), lead to the claimed bound for every n large enough. Since this is true for every ε > 0, this proves the lemma.
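The proof uses the probability that a Brownian motion hits one level before another; for the symmetric SRW the analogous probability is exactly B/(A + B) by optional stopping, which a small Monte Carlo experiment illustrates (parameters are arbitrary, not from the paper):

```python
import random

def hits_top_first(A, B, rng):
    # symmetric SRW started from 0: does it hit +A before -B?
    # optional stopping applied to the martingale (S_n) gives P = B / (A + B)
    x = 0
    while -B < x < A:
        x += 1 if rng.random() < 0.5 else -1
    return x == A
```

With A = 3 and B = 5, the exact value is 5/8 = 0.625, and the empirical frequency over many trials concentrates around it.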
Proof of Proposition 1.4. We have, for large n, due to Proposition 4.5,

P[Z^{(1)}_n = Z^{(2)}_n] ≤ P[Z^{(1)}_n ∉ Ξ_n(W^{(1)})] + P[Z^{(2)}_n ∉ Ξ_n(W^{(2)})] + P[Ξ_n(W^{(1)}) ∩ Ξ_n(W^{(2)}) ≠ ∅].

This and Lemma 4.6 prove Proposition 1.4.

4.2. Proof of Proposition 4.5.

We fix i ∈ {1, 2}. To simplify notation, we write V for V^{(i)}, Z_n for Z^{(i)}_n and W for W^{(i)}. The difficulty of this proof is that we have to localize Z_n with probability 1 − (log n)^{−2+ε} instead of 1 − o(1) as Sinai did in [S82]. For this reason we need to take into account some cases which are usually considered negligible. In order to prove Proposition 4.5, we first build a set G_n of good environments, having high probability. We prove that, on such a good environment, the RWRE Z = (Z_n)_n quickly reaches the bottom b_{I_1} of one of the two valleys of W surrounding 0. We need to consider these two valleys because we cannot neglect the case in which 0 is close to the maximum of the potential between them.
Also, we cannot exclude that the valley surrounding b_{I_1} is "small", that is, its height is close to log n. We then have to consider two situations. If the height of this valley is quite a bit larger than log n, then, with large probability, Z stays in this valley up to time n (see Lemma 4.9). Otherwise (the most difficult case, Lemma 4.11), Z can escape the valley surrounding b_{I_1} before time n, and in this case, with large probability, it reaches before time n the bottom b_{I_2} of a neighbouring valley and stays in that valley up to time n. In both situations, we prove that Z_n is localized in Ξ_n(W), and more precisely in the deepest places of the last valley visited before time n. In order to prove this localization, we use the invariant measure of a RWRE in our environment, started at b_{I_1} or b_{I_2}.
with h_n defined in (25). For every n large enough, we have, due to (26) and to Lemmas 4.3 and 4.4, and since C_1 > 2, the lower bound (33) on P[G_n].
We now consider ω ∈ G_n^+, where G_n^+ := G_n ∩ {x_0(W, h_n) is an h_n-maximum}. Indeed, the other case, that is, x_0(W, h_n) is an h_n-minimum, or equivalently ω ∈ G_n^− with G_n^− := G_n \ G_n^+, is similar by symmetry.
Proof of Proposition 4.5. Let us see how we can derive Proposition 4.5 from (33) and from Lemmas 4.7, 4.9 and 4.11 below. Applying Lemma 4.7 with y = 0 and j = −1 on G_n^+, the random walk Z goes quickly to b_{−1} or b_0 with high probability; more precisely, we get (34). Due to Lemmas 4.9 and 4.11, there exists ñ_1 ∈ N such that the required estimates hold for every n ≥ ñ_1, and so, using (34), the localization follows on G_n^+. By symmetry, this remains true with G_n^+ replaced by G_n^−. Therefore, due to (33), the claimed bound holds for every n large enough. Since this is true for every ε > 0, this proves Proposition 4.5.
We consider I_1 ∈ {−1, 0} such that τ(b_{I_1}) = τ(b_{−1}) ∧ τ(b_0). We already saw in (34) that, thanks to Lemma 4.7 with y = 0 and j = −1, τ(b_{I_1}) is small with high probability. We consider the event E_2 = E_2(n) on which Z first goes to the bottom of a "deep" valley. Notice that this event depends on ω but also on the first steps of Z up to time τ(b_{I_1}). Similarly as in (29), this event happens with probability 1 − O((log n)^{−1} log^{(2)} n), so we cannot neglect E_2^c. We will treat separately the two events E_2 and E_2^c (the study of E_2^c being the more complicated). Before considering these two events, we state the following useful result. We introduce, for j ∈ Z, the quantities defined below, where u ∨ v := max(u, v), (u, v) ∈ R^2.

Lemma 4.8. For every n large enough, ∀ω ∈ G_n, ∀j ∈ {−2, . . . , 2}, the supremum bound (47) holds.

Proof. We claim that, due to (31) and (26), the intermediate estimate holds. This follows from the definition of Ξ_{n,j}(W) if x ∈ [M_{j−1}, M_j], and from (46) and the fact that |V(y) − V(y − 1)| ≤ log((1 − ε_0)/ε_0) otherwise. So, due to (36), (26) and (31), the conclusion follows for large n, for all ω ∈ G_n and j ∈ {−2, . . . , 2}.

In the next lemma, we consider the case where Z quickly reaches a deep valley.
For the event E_2^c, we will use the following lemma, which is actually true for any Markov chain.
Applying (47) as in the simplest case, we get (48) for large n, for all ω ∈ G_n. Finally, (51) and the controls on the events E_i, 3 ≤ i ≤ 7, prove Lemma 4.11, which ends the proof of Proposition 4.5.

Probability of simultaneous meetings of independent recurrent RWRE in the same environment
This section is devoted to the proof of Proposition 1.5, which is a consequence of the following proposition whose proof is deferred.
Now, it remains to prove Proposition 5.1.

5.1. Main idea of the proof of Proposition 5.1.

Let Z be a RWRE as in Section 2.
In order to prove that Z_n is localized at b(N) with a quenched probability P^{y_j}_ω greater than a positive constant, we use a coupling argument between a copy of Z starting from b(N) and a RWRE Z̃ reflected in some valley around b(N), under its invariant probability measure. To this aim, we approximate the potential V by a Brownian motion W, use W to build the set of good environments ∆_N(δ) and estimate its probability P[∆_N(δ)], and then define b(N).
We build ∆_N as the intersection of 7 events ∆_N^{(1)}, . . . , ∆_N^{(7)}. The event ∆_N^{(1)} guarantees that the central valley (containing the origin) of height log N in fact has a height much larger than log N, so that Z will not escape this valley before time N (see Lemma 5.6). ∆_N^{(1)} also ensures that this central valley does not contain sub-valleys of height close to log N, so that, with high quenched probability, Z reaches the bottom of this valley quickly, without being trapped in such sub-valleys (see Lemma 5.5). To this aim, we also need that the bottom of this valley is not too far from 0, which is given by ∆_N^{(2)}. The next events are useful to provide estimates for the invariant probability measure ν and to prove that the coupling occurs quickly (Lemma 5.9, using Lemmas 5.7 and 5.8). Finally, ∆_N^{(6)} says that ν(b(N)), which is roughly the invariant probability measure at the bottom of the central valley, is larger than a positive constant.

5.2. Construction of ∆_N(δ).

Let δ ∈ (0, 1). The aims of this section are the construction of the set of environments ∆_N(δ) satisfying (61) and (62), and the proof of (61). We will construct ∆_N(δ) as the intersection ∆_N(δ) = ∩_{i=1}^{7} ∆_N^{(i)}, where the sets ∆_N^{(i)}, defined below, also depend on δ. In what follows, ε_i is, for i > 0, a positive constant depending on δ and used to define the set ∆_N^{(i)}. As in the previous section, we will approximate the potential V by a two-sided Brownian motion W such that Var(W(1)) = Var(V(1)) (see Figure 2 for patterns of the potential V and of W on ∆_N(δ)). We start with the events ∆_N^{(1)}, . . . , ∆_N^{(5)}, which are W-measurable and are defined, using the same notation as before for h-extrema of the two-sided Brownian motion W, by explicit conditions on the valleys of W.

Lemma 5.2. Let W be a two-sided Brownian motion such that Var(W(1)) = Var(V(1)).
We introduce, recalling (66), the quantities defined below. We will carry out the proof in the case ω ∈ ∆_N ∩ ∆_N^{(R)}; the case ω ∈ ∆_N ∩ ∆_N^{(L)} is similar by symmetry. We define x̃_i := x_i(W, (1 − 2ε_1) log N), and the events D^{(1)}_N and D^{(2)}_N. In the following lemma, we prove that Z goes quickly to b(N), which is nearly the bottom of the potential V in the central valley [x̃_0, x̃_2], with large probability under P^{y_j}_ω, uniformly on ∆_N ∩ ∆_N^{(R)} and in j.

Lemma 5.5. There exists N_3 ∈ N such that the stated bounds hold for all N ≥ N_3.

Proof. Since ω ∈ ∆_N and since, similarly, max_{[0,y_j]} V ≤ ε_4 (log N)/4 and ε_4 < 1, we get successively y_j ≤ θ and y_j < b(N). If y_j < 0, we prove similarly that x̃_0 < y_j, since V(x̃_0) ≥ 8ε_4 (log N)/9. Hence, in every case, x̃_0 < y_j < b(N).
So, by (6) and (79), we get the claimed bound uniformly on ∆_N ∩ ∆_N^{(R)} and in j for large N, since ω ∈ ∆_N and x̃_0 < y_j < b(N) < x̃_2. This proves the first inequality of the lemma.
We now turn to D^{(2)}_N. Since ω ∈ ∆_N, we get a bound on the maximum of the potential over the relevant interval. Hence, we have by (7) a bound on the corresponding quenched expectation, so, due to Markov's inequality, P^{y_j}_ω[D^{(2)}_N] is suitably small, uniformly on ∆_N ∩ ∆_N^{(R)} and in j, for large N.
In the following lemma, we prove that, with large quenched probability, uniformly on ∆_N ∩ ∆_N^{(R)}, Z does not leave the central valley before time N.

Lemma 5.6. We have, for large N, the stated bounds uniformly on ∆_N ∩ ∆_N^{(R)}.

Proof. We recall that |V(k) − V(k − 1)| ≤ a_0 for every k ∈ Z. Since ω ∈ ∆_N, we have an upper bound on the potential barrier on one side of b(N), and similarly on the other side. Hence (9) and (10) lead respectively to the two required estimates, which proves the lemma.

Now, similarly as in Brox [B86] for diffusions in random potentials (see also [AD15], p. 45), we introduce a coupling between Z (under P^{b(N)}_ω) and a reflected random walk Z̃ defined below. More precisely, we define, for fixed N, ω̃_{x̃_0} := 1, ω̃_x := ω_x if x̃_0 < x < x̃_2, and ω̃_{x̃_2} := 0. We consider a random walk (Z̃_n)_n in the environment ω̃, starting from x ∈ [x̃_0, x̃_2], and denote its law by P̃^x_ω. That is, Z̃ satisfies (2) with ω̃ instead of ω^{(j)} and Z̃ instead of Z^{(j)}. In words, Z̃ is a random walk in the environment ω, starting from x ∈ [x̃_0, x̃_2], and reflected at x̃_0 and x̃_2. Also, let µ̃ denote the corresponding reversible measure, defined from the conductances as in Section 2. Notice that µ̃(·)/µ̃(Z) is an invariant probability measure for Z̃. As a consequence, the probability measure ν obtained by restricting µ̃ to the sites with the parity of b(N) and renormalizing is an invariant probability measure for (Z̃_{2n})_n for fixed ω. That is, P̃^ν_ω[Z̃_{2k} = x] = ν(x) for every x ∈ Z and k ∈ N, where P̃^ν_ω(·) := Σ_{x∈Z} ν(x) P̃^x_ω(·). Notice that ν and µ̃ depend on N and ω.
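The stationarity of the conductance measure for the reflected walk can be sanity-checked numerically: build the transition kernel of the chain reflected at the two endpoints and verify that µ̃(x) = e^{−V(x−1)} + e^{−V(x)} (with the boundary bonds removed) satisfies µ̃P = µ̃. A sketch with illustrative values (the interval and the environment are not from the paper):

```python
import math

def stationary_check(omega, a, b):
    # reflected walk on {a, ..., b}: forced step to the right at a and to the
    # left at b, mimicking the modified environment with omega~_a = 1, omega~_b = 0
    V = {a: 0.0}  # potential from (5), up to an additive constant
    for x in range(a + 1, b + 1):
        V[x] = V[x - 1] + math.log((1 - omega[x]) / omega[x])
    c = {x: math.exp(-V[x]) for x in range(a, b)}  # conductance of edge (x, x+1)
    mu = {x: c.get(x - 1, 0.0) + c.get(x, 0.0) for x in range(a, b + 1)}
    def p(x, y):  # transition probabilities of the reflected chain
        if x == a:
            return 1.0 if y == a + 1 else 0.0
        if x == b:
            return 1.0 if y == b - 1 else 0.0
        if y == x + 1:
            return omega[x]
        if y == x - 1:
            return 1 - omega[x]
        return 0.0
    # stationarity: sum_x mu(x) p(x, y) == mu(y) for every y in the interval
    return all(abs(sum(mu[x] * p(x, y) for x in mu) - mu[y]) < 1e-9 for y in mu)
```

Detailed balance µ̃(x)p(x, x+1) = e^{−V(x)} = µ̃(x+1)p(x+1, x) holds exactly, including at the two reflecting endpoints, so the check passes up to floating-point error.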
We can now, again for fixed N and ω, build a coupling Q_ω of Z and Z̃ such that, under Q_ω, these two Markov chains move independently until their first meeting time τ_{Z=Z̃}, then Z̃_k = Z_k for every τ_{Z=Z̃} ≤ k < τ_exit, where τ_exit is the next exit time of Z from the central valley [x̃_0, x̃_2], and then Z and Z̃ move independently again after τ_exit. Now, we would like to prove that, under Q_ω, Z and Z̃ collide quickly, that is, τ_{Z=Z̃} is very small compared to N. To this aim, we introduce the levels L_− and L_+ defined below. Let u ∨ v := max(u, v). We have the following:

Lemma 5.7. We have, for large N, the stated bounds, τ(·) denoting hitting times by Z as before.

Proof. First, b(N) ≤ x̃_2, similarly as after (78). Because ω ∈ ∆_N and due to (80), we have the required potential estimates. Notice also, for further use, that the analogous bound holds for every k ∈ [θ^{(R)}, x̃_2]. Putting together these inequalities gives in particular min_{[x̃_0, x̃_2]} V ≥ V(b(N)) − a_0. Furthermore, the maximum of the potential is suitably controlled. This, (7), Markov's inequality and ω ∈ ∆_N give the required estimate uniformly for large N. Consequently, Q_ω[τ(L_+) > N^{1−ε_1}/2] ≤ 2N^{−ε_1/4}. We prove similarly that Q_ω[τ(L_−) > N^{1−ε_1}/2] ≤ 2N^{−ε_1/4} uniformly for large N, using (8) and (83) instead of (7) and (84) respectively, and because min_{[x̃_0, x̃_2]} V ≥ V(b(N)) − a_0, which we proved after (88). This proves Lemma 5.7.
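The coupling idea (run the two chains independently until they first meet, then let them move together) can be sketched as follows; for simplicity, both walks below are reflected at the endpoints of the interval, which is only a toy version of Q_ω (in the paper, Z itself is not reflected and the coalescence is undone at τ_exit):

```python
import random

def first_meeting_time(omega, a, b, x0, y0, n_max, rng):
    # two walks in the same environment omega, both reflected at a and b,
    # moving independently; returns the first time they occupy the same site
    def step(x):
        if x == a:
            return a + 1
        if x == b:
            return b - 1
        return x + 1 if rng.random() < omega[x] else x - 1
    x, y = x0, y0
    for k in range(n_max):
        if x == y:
            return k
        x, y = step(x), step(y)
    return None  # no meeting observed within n_max steps
```

Note that the two walks can only meet if x0 and y0 have the same parity, since the difference of the two positions changes by an even amount at each joint step; this parity constraint is the reason for the even/odd initial conditions appearing in Proposition 1.5.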