Localization and number of visited valleys for a transient diffusion in random environment

We consider a transient diffusion in a $(-\kappa/2)$-drifted Brownian potential $W_{\kappa}$ with $0<\kappa<1$. We prove its localization at time $t$ in the neighborhood of some random points depending only on the environment, namely the positive $h_t$-minima of the environment, for $h_t$ slightly smaller than $\log t$. We also prove an aging phenomenon for the diffusion, a renewal theorem for the hitting time of the farthest visited valley, and a central limit theorem for the number of valleys visited up to time $t$. The proof relies on a decomposition of the trajectory of $W_{\kappa}$ in the neighborhood of the $h_t$-minima, with the help of results of A. Faggionato, and on a precise analysis of exponential functionals of $W_{\kappa}$ and of $W_{\kappa}$ Doob-conditioned to stay positive.


Introduction and notation
1.1. Presentation of the model. We are interested in a diffusion (X(t), t ≥ 0) in a random càdlàg potential (V(x), x ∈ R), defined informally by X(0) = 0 and

dX(t) = dβ(t) − (1/2) V′(X(t)) dt,

where β is a Brownian motion independent of V. More rigorously, X is defined by its conditional generator given V, which is

(1/2) e^{V(x)} (d/dx) ( e^{−V(x)} (d/dx) ).

These diffusions in random potentials are considered as continuous-time analogues of random walks in random environment (RWRE); see e.g. P. Révész [31], B. D. Hughes [22], Z. Shi [35] and O. Zeitouni [42] for reviews on RWRE.
The study of such a process starts with a choice for V. A classic one, originally introduced by S. Schumacher [33] and T. Brox [5], is to take for V a Lévy process. In fact only a few papers deal with the discontinuous case, see for example P. Carmona [6] or A. Singh [36, 37], and most of the results concern continuous V, that is to say V(x) = W_κ(x) := W(x) − (κ/2)x, with κ ∈ R and W a two-sided Brownian motion. We denote by P the probability measure associated to W_κ(.). The probability conditionally on the potential W_κ is denoted by P_{W_κ} and is called the quenched probability. We also define the annealed probability as the measure P(·) := ∫ P_{W_κ}(·) P(dW_κ), obtained by averaging the quenched probability over the environment. In the case κ = 0, X is recurrent, and [5] shows that it is sub-diffusive with asymptotic behavior in (log t)², and moreover that X is localized, at time t, in the neighborhood of a random point b_{log t} depending only on t and W. This result can be written as follows:

Theorem 1.1 (Brox [5]). Assume κ = 0. Then for all ε > 0,

lim_{t→+∞} P( X(t) ∈ [b_{log t} − ε(log t)², b_{log t} + ε(log t)²] ) = 1.   (1.1)

The limit law of b_{log t}/(log t)², and therefore of X(t)/(log t)², was made explicit independently by H. Kesten [25] and A. O. Golosov [23]. For recent results on this case see for example [1] and [11].
In the case κ > 0, the diffusion X is a.s. transient, with a wide range of limiting behaviors depending on the value of κ. It was first studied by K. Kawazu and H. Tanaka; [24] proved in particular that when 0 < κ < 1, H(r)/r^{1/κ} converges in law to a stable distribution, where H(r) denotes the first hitting time of r by X (see also Y. Hu et al. [21] and H. Tanaka [40]). More recently, we mention the results on large and moderate deviations by M. Taleb ([38] and [39]), A. Devulder [8] and G. Faraud [18].
In this paper we follow a different approach from [21] and [24]. Indeed we focus on a quenched study, which has attracted much interest for transient RWRE in the last few years, see for example the works of N. Enriquez et al. [14], [15], [16], [17], D. Dolgopyat et al. [12], and J. Peterson et al. [27], [28], [29]. Heuristically, the diffusion goes to locations where the potential is low, hence it goes to +∞, but it is slowed by "valleys" of the potential, which trap the diffusion for some time.
1.2. Main results. The goals of this paper are to localize the diffusion X, when 0 < κ < 1, in some valleys of the potential W_κ, to understand the difference with Brox's result given by (1.1), and to prove an aging phenomenon, as was done in [16] for transient zero-speed RWRE. We moreover obtain a central limit theorem for the number of valleys visited up to time t. We also prove some intermediate results, which we think will be useful for obtaining new results about the maximum local time of X.
Let t ↦ φ(t) be a positive increasing function of t such that φ(t) = o(log t) and log log t = o(φ(t)), where f(t) = o(g(t)) means lim_{t→+∞} f(t)/g(t) = 0. We prove the following aging phenomenon:

Proposition 1.2. Assume 0 < κ < 1. For all α > 1, we have

lim_{t→+∞} P(|X(αt) − X(t)| ≤ φ(t)) = (sin(κπ)/π) ∫_0^{1/α} u^{κ−1} (1−u)^{−κ} du.

This is actually a consequence of Theorem 1.3. Before stating it, we first introduce the notion of h-extrema, which were first introduced by Neveu and Pitman [26], and studied in the case of drifted Brownian motion by Faggionato [19]. For h > 0, we say that x ∈ R is an h-minimum for a given process V if there exist u < x < v such that V(y) ≥ V(x) for all y ∈ [u, v], V(u) ≥ V(x) + h and V(v) ≥ V(x) + h. Moreover, x is an h-maximum for V if x is an h-minimum for −V, and x is an h-extremum for V if it is an h-maximum or an h-minimum for V. As we study the process X until time t, we are especially interested in the h_t-extrema of W_κ, where h_t := log t − φ(t).
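The limit in Proposition 1.2 is presumably of generalized arcsine type, (sin(κπ)/π) ∫_0^{1/α} u^{κ−1}(1−u)^{−κ} du, as for transient zero-speed RWRE in [16]; since B(κ, 1−κ) = π/sin(κπ), this expression is a probability, equal to 1 at α = 1. A hedged numerical sketch of that candidate limit (the quadrature scheme and the function name are ours, not the paper's):

```python
import math

# Evaluate sin(kappa*pi)/pi * \int_0^{1/alpha} u^{kappa-1} (1-u)^{-kappa} du,
# the generalized arcsine law conjectured to be the aging limit (cf. [16]).
# The substitution u = sin(theta)^2 tames both endpoint singularities:
# the integrand becomes 2*sin(theta)^(2*kappa-1)*cos(theta)^(1-2*kappa).
def aging_limit(kappa: float, alpha: float, n: int = 20000) -> float:
    theta_max = math.asin(math.sqrt(1.0 / alpha))
    h = theta_max / n
    s = 0.0
    for i in range(n):  # midpoint rule, which avoids the endpoints
        th = (i + 0.5) * h
        s += 2.0 * math.sin(th) ** (2.0 * kappa - 1.0) * math.cos(th) ** (1.0 - 2.0 * kappa)
    return math.sin(kappa * math.pi) / math.pi * s * h
```

At α = 1 the value is 1 (the full Beta integral), and the probability decreases as α grows, consistent with an aging statement.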
It is known (see [19]) that almost surely, the h_t-extrema of W_κ form a sequence indexed by Z, unbounded from below and above, and that the h_t-minima and h_t-maxima alternate. We denote respectively by (m_j, j ∈ Z) and (M_j, j ∈ Z) the increasing sequences of h_t-minima and of h_t-maxima of W_κ, such that m_0 ≤ 0 < m_1 and m_j < M_j < m_{j+1} for every j ∈ Z. We also define N_t := max{k ∈ N, sup_{0≤s≤t} X(s) ≥ m_k}, so that m_{N_t} is the largest h_t-minimum visited by X up to time t. The main result of this paper concerns the localization of the diffusion. It is stated as follows:

Theorem 1.3. Assume 0 < κ < 1. There exists a constant C_1 > 0 such that

lim_{t→+∞} P(|X(t) − m_{N_t}| ≤ C_1 φ(t)) = 1.
We first recall that X(t) is asymptotically of order t^κ (see e.g. [21]). So the size φ(t) of the intervals in which X is localized, which can nearly be of order log log t, is small, and is related to the minimum height h_t of our valleys. We cannot say, however, whether it is the best that can be obtained. The main difference with Brox's result (1.1) is the appearance of the (random) integer N_t, which is the number of typical valleys of height h_t visited before time t. In the recurrent case of Brox, the diffusion is, with large probability, localized near the bottom of a unique valley of the potential, whereas in our transient case, the diffusion is localized near the bottom of one among several valleys of the potential.
We also prove a renewal theorem for the time needed to reach the last valley visited by X before time t:

Proposition 1.4. We have the following convergence in law under the annealed probability P, as t → +∞, As a consequence, we get the following results, which are useful for the proofs of Proposition 1.2 and Theorem 1.3: We will see in Sections 4 and 5 that this is due to the fact that for any integer k ≤ n_t, with n_t := e^{κφ(t)(1+δ)}, δ > 0, H(m_k) can be approximated by a sum of i.i.d. random variables, each of these random variables having the law of U. We show that U is the product of a random variable depending only on the environment and t, and an independent variable with exponential law of parameter 2. This first random variable can itself be approximated by a product of sums of functionals of 3-dimensional (−κ/2)-drifted Bessel processes and of W_κ (see Proposition 4.3).
These results are in accordance with those obtained by Enriquez et al. ([14], [15] and [16]) for transient RWRE. The work we present here is self-contained; in particular, we present in this same paper the technical study of the Laplace transform of the first exit time U. The study of the environment only requires arguments from continuous stochastic calculus, starting with a Williams decomposition of the trajectory of W_κ which mainly comes from the work of A. Faggionato [19].
The number N_t of valleys visited goes to +∞ as t → +∞. More precisely, we prove the following central limit theorem for N_t, with renormalization e^{κφ(t)}:

Proposition 1.6. Assume 0 < κ < 1. Then N_t e^{−κφ(t)} converges, as t → +∞, to N in law under the annealed law P, where N is a r.v. determined by its Laplace transform, in which C_κ > 0 is explicitly known (see Proposition 4.7).
Moreover, we expect that the results of this paper will be useful to study other properties of the diffusion. In particular, let (L_X(t, x), t ≥ 0, x ∈ R) be a bicontinuous version of the local time of X. It is known that the maximum local time of X at time t, that is, L*_X(t) := max_{x∈R} L_X(t, x), satisfies lim sup_{t→+∞} L*_X(t)/t = +∞ a.s. in the case κ = 0 (see [34] and [11]) and even in the transient case 0 < κ < 1 (see [10]). Hence the maximum local time of X exhibits very interesting properties, very different from those of the maximum local time of RWRE at time t, which is naturally bounded by t/2. We expect that the better understanding of the localization of X and some intermediate results provided in this paper will be useful to prove new results about L*_X (work in progress). The rest of the paper is organized as follows. First, we give in Section 2 the main properties of the environment that will be useful for our study; in particular we present a Williams decomposition of the trajectory of W_κ, close to the one detailed in [19]. Then, in Section 3, we approximate the trajectory of X by a sum of i.i.d. random variables, and we study one of these random variables in Section 4. We study the asymptotic behavior of the Laplace transform of the first exit time U in Section 4.3. Finally, we prove the renewal results stated in Proposition 1.4, Theorem 1.3 and Proposition 1.2 in Section 5. Moreover, Sections 2, 3 and 4 start with basic facts on diffusions in random media and/or estimates on the drifted Brownian motion and Bessel processes.

Williams' decomposition and Standard valleys
We use a Williams-type decomposition ([41]), based on the results of A. Faggionato [19].
2.1. Williams' decomposition in the neighborhood of the h_t-minima (m_i, i ∈ N*). We now recall Williams' decomposition of the trajectory in a neighborhood of the h_t-minima m_i, i ∈ N*.
Let a > 0. For any process (U(t), t ∈ R_+), we denote by τ_U(a) := inf{t > 0, U(t) = a} the first time this process hits a, with the convention inf ∅ := +∞. We denote by L_U a bicontinuous version of the local time of U when it exists. We also denote by U^a the process U starting from a, and by P_a the law of U^a, with the notation U = U^0.

Definition 2.1. We recall the definition of a (−ζ/2)-drifted Brownian motion W_ζ Doob-conditioned to stay positive (see [3]), where E_z and P_z are the expectation and probability related to W^z_ζ. This induces a unique probability measure P^{−ζ/2,↑}_z on σ(W_ζ(u), u ≥ 0). Moreover, P^{−ζ/2,↑}_z converges weakly as z → 0+, in the Skorokhod space D(R_+, R_+) (see [3], Prop. 14) and in C(R_+, R_+) (see [19]), to a probability measure denoted by P^{−ζ/2,↑}_0. The canonical process, which we denote by R, is a Feller process for the family (P^{−ζ/2,↑}_z, z ≥ 0); it takes values in R_+, and its infinitesimal generator is given for every x > 0 by (see [19], Lemma 6)

(1/2) f″(x) + (ζ/2) coth(ζx/2) f′(x).   (2.1)

In the following, we call R under P^{−ζ/2,↑}_z, for z ≥ 0, a 3-dimensional (−ζ/2)-drifted Bessel process starting from z. Of course this is a misuse of language, as this process is not drifted directly but is obtained from a drifted process. We notice in particular that, by (2.1), the law of R is the same if ζ is replaced by −ζ; that is, 3-dimensional (−κ/2)-drifted Bessel processes have the same law as 3-dimensional (κ/2)-drifted Bessel processes. Finally, when ζ < 0, we have P

We consider V^(i)(x) := W_κ(m_i + x) − W_κ(m_i), x ∈ R, which is the potential re-centered at the local minimum m_i. We also define for h > 0 We have, for a < h_t. Then the truncated processes (P^(i)_1, P^(i)_2, P^(i)_3) are independent as well. Let us denote by (R, h) the killed process (R(s), 0 ≤ s ≤ τ_R(h)). Let (R^(1), h_t) and (R^(2), h_t) be two independent copies of (R, h_t), and let (W^b_κ, a) := (W^b_κ(s), 0 ≤ s ≤ τ_{W^b_κ}(a)) be a (−κ/2)-drifted Brownian motion starting from b and killed when it first hits a < b, independent of R^(1) and R^(2).
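The generator of the conditioned process ([19], Lemma 6) and the ζ ↔ −ζ symmetry invoked in Definition 2.1 can be recovered by a short Doob h-transform computation; a sketch (standard, not specific to this paper):

```latex
% W_zeta is a (-zeta/2)-drifted Brownian motion, with generator
%   L f = (1/2) f'' - (zeta/2) f'.
% An increasing harmonic function (L h = 0) is h(x) = (e^{zeta x} - 1)/zeta.
% The Doob h-transform of L is
\[
\mathcal{L}^h f(x) \;=\; \frac{1}{h(x)}\,\mathcal{L}(hf)(x)
\;=\; \frac12 f''(x) + \Big(\frac{h'(x)}{h(x)} - \frac{\zeta}{2}\Big) f'(x),
\]
% and the drift simplifies to a hyperbolic cotangent:
\[
\frac{h'(x)}{h(x)} - \frac{\zeta}{2}
= \frac{\zeta e^{\zeta x}}{e^{\zeta x}-1} - \frac{\zeta}{2}
= \frac{\zeta}{2}\,\frac{e^{\zeta x}+1}{e^{\zeta x}-1}
= \frac{\zeta}{2}\coth\Big(\frac{\zeta x}{2}\Big).
\]
% Since u \mapsto \coth(u) is odd, (zeta/2) coth(zeta x / 2) is invariant
% under zeta -> -zeta, which is exactly the symmetry used in the text.
```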
Then for all i ≥ 2, P^(i)_1 is equal in law to (R^(1), h_t); for all i ≥ 1, P^(i)_2 is equal in law to (R^(2), h_t); and P^(i)_3 is equal in law to (W^{h_t}_κ, a), for a < h_t.
The results for the sequence ((P^(i)_2), i ≥ 1) come from Theorems 1 and 2 in [19]. The result for P^(i)_3 comes from the fact that τ_i(h_t) is a stopping time. We now treat the central slope, that is to say P^(1)_1.

2.2.
Standard h_t-minima (m̃_i, i). Among the h_t-minima (m_i, i), only some of them are interesting for the analysis of the process X: the standard h_t-minima (m̃_i, i). Recall that δ > 0 is a positive real that can be chosen as small as needed (see the definition of n_t just after Corollary 1.5). Let We define L̃^+_0 := 0, m̃_0 := 0, and recursively for i ≥ 1 (see Figure 1),

Figure 1. h_t-standard valleys.

We also introduce the equivalent of V^(i) for the (m̃_i, i). We call i-th valley the re-centered truncated trajectory (Ṽ^(i)(x), L̃^+_{i−1} ≤ x < L̃^+_i). The next step is to show that with overwhelming probability the first n_t positive h_t-minima In all the paper, C_+ and c_+ (resp. C_− and c_−) denote positive constants that may grow (resp. decrease) from line to line.

Lemma 2.3. For any 0 < δ < 1, and any t large enough, P(V_t) ≥ 1 − C_1 w_t, where w_t := n_t e^{−κh_t/2} and C_1 is a positive constant. Moreover, the sequence

Hence on the complement of V_t, there would exist 1 ≤ i ≤ n_t and 1 ≤ j ≤ n_t such that m̃_{i−1} < m_j < m̃_i. If for such i and j, L̃_i < m_j < m̃_i, there would be a v > m_j such that ([19], Prop. 1, Thm. 1 and the remark before (2.26)) equal in law to ζ_+ − ζ_−, where ζ_+ and ζ_− are independent exponential r.v.'s such that the mean of ζ_± is 2κ^{−1} sinh(κh_t/2) e^{∓κh_t/2}. So for j ≥ 2, For j = 1, we notice that either there is an h_t-maximum between 0 and m_1, with probability ≤ h_t e^{−κh_t} by ([19], Thm. 1 and (2.25)), or follows directly from the strong Markov property applied at the times L̃^+_{i−1}, which are stopping times. The following remark will be useful in the sequel: is equal in law to (R^(1), h_t), and by Corollary 1 in [30] this result can be extended up to τ^−_i(h^+_t), as long as
So for every event A which belongs to the σ-algebra generated by the truncated We also need the following intermediate random variables We have the following result for the distance between the points of a given valley: (2.3) Before giving the proof, we detail a basic result and its short proof:

Lemma 2.6. Let 0 < α < ω. For all h large enough, we have

Proof of Lemma 2.5: Working on the event V_t allows us to write that and then use Faggionato's results. This idea is used several times in this proof and all along the paper. For i ≥ 1, thanks to ([19], Thm. 1), the law of −, which is defined in [19], p. 1769. Applying ([19], Proposition 1, p. 1769, and especially formula (2.14)), m_{i+1} − M_i has the same law as a r.v. called −, whose Laplace transform is given by E[e^{−α −}] = ᾱ e^{−κh_t/2} / [ᾱ cosh(ᾱh_t) − (κ/2) sinh(ᾱh_t)] for α > 0, with ᾱ := (2α + κ²/4)^{1/2}. In particular, with a Markov inequality, To get the last inequality, we just need an upper bound for L̃^+_i − τ_i(h_t). Since L̃^+_i is a stopping time for W_κ (in particular we do not need to work on V_t for this part), we have, by using (2.7), Combining these inequalities yields (2.6).
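The Laplace transform quoted above from [19], formula (2.14), can be sanity-checked numerically. We read the garbled exponent as ᾱ := sqrt(2α + κ²/4) (our assumption); with this reading the expression equals 1 at α = 0 and decreases in α, as the Laplace transform of a nonnegative random variable must. A hedged sketch (the function name is ours):

```python
import math

# Laplace transform of the law of m_{i+1} - M_i, as quoted from [19] (2.14):
#   abar * exp(-kappa*h/2) / (abar*cosh(abar*h) - (kappa/2)*sinh(abar*h)),
# with abar = sqrt(2*alpha + kappa^2/4) (our reading of the extracted text;
# note abar = kappa/2 at alpha = 0, which makes the expression equal 1).
def laplace_gap(alpha: float, kappa: float, h: float) -> float:
    abar = math.sqrt(2.0 * alpha + kappa ** 2 / 4.0)
    num = abar * math.exp(-kappa * h / 2.0)
    den = abar * math.cosh(abar * h) - (kappa / 2.0) * math.sinh(abar * h)
    return num / den
```

At α = 0 the numerator and denominator both reduce to (κ/2)e^{−κh/2}, so the transform equals 1, which supports this reading of ᾱ.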

Quasi-Independence in the trajectories of X
In this section we show that the times needed to escape from the different valleys are, asymptotically in t, independent under the annealed measure. Then we prove that the time spent by X between the valleys is negligible.
We start with some basic facts about hitting times by X, R and W κ .
3.1. About hitting times. We first introduce some notation. Let and A_∞ := lim_{r→+∞} A(r) < ∞ a.s. As in Brox [5], there exists a Brownian motion B such that With these notations, we recall the following expression of H(r), for all r ≥ 0, We also need some estimates on hitting times by W_κ and a (−κ/2)-drifted Bessel process R: Proof: We recall that R has the same law as the (κ/2)-drifted Brownian motion W^0_{−κ} = W_{−κ} Doob-conditioned to stay positive, and more precisely that ([19], Lem. 6 and the discussion before), and then for every Λ ∈ G_τ, where τ is an a.s. finite stopping time. Moreover, we know that a scale function of W_{−κ} is given by s_κ(u), where LHS means left-hand side. This gives (3.2) for large h.
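The elided displays here are presumably the standard time-change representation of a diffusion in a potential, as in Brox [5] and Shi [35]; a sketch under that assumption:

```latex
% Scale function of X given the environment, and its a.s. finite limit
% (finiteness uses the negative drift -kappa/2 of the potential):
\[
A(r) := \int_0^r e^{W_\kappa(x)}\,\mathrm{d}x, \qquad
A_\infty := \lim_{r\to+\infty} A(r) < \infty \ \text{a.s.}
\]
% There exists a Brownian motion B such that
\[
X(t) = A^{-1}\big(B(T^{-1}(t))\big), \qquad
T(u) := \int_0^u \exp\big\{-2\,W_\kappa\big(A^{-1}(B(s))\big)\big\}\,\mathrm{d}s,
\]
% which gives, for all r >= 0, the expression of the hitting time H(r):
\[
H(r) = T\big(\tau_B(A(r))\big)
     = \int_0^{\tau_B(A(r))} \exp\big\{-2\,W_\kappa\big(A^{-1}(B(s))\big)\big\}\,\mathrm{d}s .
\]
```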
We now turn to the proof of (3.4). We have, if 0 < γ < 1, This yields (3.4). Now, we notice that the left-hand side of (3.3) is less than Moreover where the inequality is proved in the same way as (3.4).

3.2.
Independence in a trajectory of X. We prove that the sequence (U_i := H(L̃_i) − H(m̃_i), i ≥ 1) is "nearly" i.i.d. under P for large t, in the following sense: First we need some more points which belong to the standard valleys, for all i ≥ 1 There exists a constant C_2 > 0 such that for large t, where u(t, n) := n e^{−δκh_t} if δ is chosen small enough, by the strong Markov property and since m̃_i < L̃_i < m̃_n for 1 ≤ i < n. Hence we obtain by induction We now need to prove that P_{W_κ}(E_i) is close to one with large probability, so the next step is to get a lower bound for this probability. We now work on V_t, which allows us to use V^(i) and its Williams decomposition instead of Ṽ^(i); we have

Let us give an upper bound for
Recall that A is a scale function of X under P Wκ (see e.g. [35] formula (2.2)), that is and ω = 1 gives for t large enough, with a probability larger than 1 − C + e −κ 2 ht/4 , For the numerator Q i , first by (2.6) and Lemma 2.
where the last inequality comes from formula 1.1.4 (1) page 251 of [4]. Finally for δ small enough and t large enough, with probability greater than Collecting what we did above, we get Using Lemma 2.3 and considering (3.10), With similar ideas for the upper bound, we finally get For every fixed W κ , we have under P Wκ

andB is a standard Brownian motion. This
and (3.11) show that the left hand side of (3.11) and E Wκ Using (3.13), and the fact that give the upper bound of (3.8). Notice that we choose the second valley in the definition of U in order to avoid the central slope (see Fact 2.2) when working under V t .
For (3.9), we obtain Since U n is equal in law to U and P(E n ) ≤ C + e −δht , we get (3.9).
3.3. Negligible parts in the trajectory of X. We now prove that the total time spent between the first n t large valleys is negligible compared to t.
We first need to give estimates concerning the hitting times of m̃_1 and τ̃_1(h_t). To this aim, Actually, H^−(r) (resp. H^+(r)) is the time spent by X in R_− (resp. in R_+) before it hits r for the first time. We start with the following lemma about H^−; it comes from [10] and the proof is given for the sake of completeness: (3.14) Proof: For a > 0, α > 0 and b > 0, let We first prove an inequality with regard to L^{*,−}_X(+∞). We notice that By the first Ray-Knight theorem, there exist two Bessel processes R_2 and R_0, of dimensions 2 and 0 respectively, starting from 0 and Consequently, where γ_κ is a gamma variable of parameter (κ, 1). We have for z large enough, Moreover, for c > 2/κ and ε > 0, for all large z. Moreover, P(E 3. Choosing c large enough, this, together with (3.15), (3.16) and (3.17), gives (3.14).
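The first Ray-Knight theorem invoked here is the classical one (see e.g. Revuz–Yor [32]); for reference, a sketch of the statement in the form matching the dimensions 2 and 0 mentioned above (the exact display used by the authors is elided):

```latex
% First Ray-Knight theorem: for a standard Brownian motion B, a > 0, and
% tau_B(a) its first hitting time of a, the local-time profile at time
% tau_B(a), read downward from level a, is a squared Bessel process:
\[
\big( L_B(\tau_B(a),\, a-x),\ 0 \le x \le a \big)
\;\stackrel{\mathcal{L}}{=}\;
\big( R_2^2(x),\ 0 \le x \le a \big),
\]
% a squared Bessel process of dimension 2 started at 0; and, given its
% value at x = a (i.e. the local time at level 0), the profile at the
% negative levels continues as a squared Bessel process of dimension 0:
\[
\big( L_B(\tau_B(a),\, -x),\ x \ge 0 \big)
\;\stackrel{\mathcal{L}}{=}\;
\big( R_0^2(x),\ x \ge 0 \big),
\qquad R_0^2(0) = L_B(\tau_B(a), 0).
\]
```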
Lemma 3.4. There exists a constant C 5 > 0 such that for every h > 0, which is a consequence of the first Ray-Knight theorem (see e.g. [32]). We notice that by the scale property of B, recalling that A(u) ≥ 0 for all u ≥ 0 and A is independent of B, we have for every r ≥ 0, which can depend on the environment W κ , by Fubini. Hence, applying this to r = τ * 1 (h), we get Applying Fubini followed by the Markov property at time u, we get where, similarly as in Enriquez et al. ( [15], Lem. 4.9), Using [19] (formula (2.3) and (2.7)) we have β 1 (h) ≤ c 1 e κh , with c 1 > 0.
We now cut the integral which appears in the definition of β_2(h) into several parts, to show that β_2(h) ≤ C_+ e^{(1−κ)h} for h large enough. To this aim, we introduce e_0 := 0 and We have, Hence, applying the Markov property at the times e_i, and since W_κ(e_i) = −i, we get To finish, first, by formula 3.10.7 (a), page 317 in [4], taking x = 0, a = −1, α = 0 and b = h, we easily get In the same way, using formula 3.10.7 (b) of the same reference with the same parameters except b = h − 1, we also get the analogous estimate for E(J^0_κ). This, combined with q ∼_{h→+∞} C e^{−κh}, gives β_2(h) ≤ C e^{(1−κ)h} for large h, which together with (3.21) and β_1(h) ≤ c_1 e^{κh} gives (3.18).
We now have all the tools needed to bound the time spent between the deep valleys:

Lemma 3.5. For any δ small enough (δ < 2^{−3/2} and κ(1 + 2δ) < 1) and t large enough Proof: We have, for every 1 ≤ k ≤ n_t, is less than or equal to ṽ_t with large probability. We consider which is a diffusion in the environment W_κ, starting from L̃_i (resp. L̃*_i). We also denote by H_{X_i}(r) the hitting time of r by X_i, for r ≥ L̃_i, and A^x_∞ := ∫_x^∞ e^{W_κ(u)−W_κ(x)} du. We introduce the following events: A_∞ has the same law as 2/γ_κ, where γ_κ is a gamma variable of parameter (κ, 1), with density e^{−x} x^{κ−1} 1_{R_+}(x)/Γ(κ) (see [13], or [4] IV.48). Hence, P(A_∞ ≥ y) ≤ C y^{−κ} for y > 0 and C > 0, and P(A_∞ ≤ y) ≤ e^{−1/y} for small y > 0. Moreover, since L̃_i, τ̃_{i+1}(h_t) and L̃ 3) ≤ C_+ (n_t e^{−κh_t/2})^{3/2} by (2.4) and Lemma 2.3.
Since L̃*_i is a stopping time, using the strong Markov property, we , which is, on E^{3.5}_1, the total time spent by X_i in [L̃*_i, +∞) before hitting τ̃_{i+1}(h_t). This last quantity is less than or equal to the total time spent in [L̃*_i, +∞) by X*_i before hitting τ̃*_{i+1}(h_t), which has the same law as H^+(τ*_1(h_t)) under the annealed probability P, since L̃*_i is a stopping time for W_κ, and then This, together with Lemma 3.4 and a Markov inequality, leads to where E 3.5
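The law of A_∞ recalled in the proof above (A_∞ distributed as 2/γ_κ, citing Dufresne [13]) makes both stated tail bounds easy to check numerically; a hedged sketch (the series implementation of the regularized incomplete gamma function is ours):

```python
import math

# The text recalls that A_inf = \int_0^inf e^{W_kappa(u)} du has the law of
# 2/gamma_kappa, with gamma_kappa a Gamma(kappa, 1) variable, and deduces
#   P(A_inf >= y) <= C y^{-kappa}   and   P(A_inf <= y) <= e^{-1/y} (small y).
# We express both probabilities through the regularized lower incomplete
# gamma function P(a, x) = gamma(a, x)/Gamma(a), via its power series.
def reg_lower_gamma(a: float, x: float) -> float:
    """P(a, x) by the standard series x^a e^{-x} sum_n x^n / (a(a+1)...(a+n))."""
    term = 1.0 / a
    s = term
    n = 0
    while term > 1e-16 * s:
        n += 1
        term *= x / (a + n)
        s += term
    return s * math.exp(-x + a * math.log(x) - math.lgamma(a))

def tail_A_inf(kappa: float, y: float) -> float:
    """P(A_inf >= y) = P(gamma_kappa <= 2/y)."""
    return reg_lower_gamma(kappa, 2.0 / y)

def cdf_A_inf(kappa: float, y: float) -> float:
    """P(A_inf <= y) = P(gamma_kappa >= 2/y)."""
    return 1.0 - reg_lower_gamma(kappa, 2.0 / y)
```

The polynomial bound holds with C = 2^κ/Γ(κ+1), since γ(κ, x) ≤ x^κ/κ for all x > 0.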

Time spent in a standard valley
The aim of this section is to prove Proposition 4.6. First we need additional estimates given below.
Note that this expression can be deduced from the fact that G + (ωy, y) eW κ(x) dx and [13].

Approximation of the time to escape from a typical valley.
We now prove that a standard exit time can be approximated by products and sums of independent, well-known random variables. We recall that U is defined in Proposition 3.2. and I^−_2, depending on t and independent of e_1, a random variable with exponential law with mean 2, such that I^+_1 is distributed as F^+(h_t), I^+_2 as G^+(h_t/2, h_t), and I^−_1 and The proof of this proposition involves three lemmas; the first two are straightforward consequences of what we have already discussed or proved, while the last one is more technical.
End of the proof. To finish we have to prove thatĴ 1 is nearly equal to I − .
We now consider, possibly on an enlarged probability space, a process (R^(1)(s), 0 ≤ s ≤ τ_{R^(1)}(h_t/2)), independent of W_κ and e_1, and hence independent of I^−_1, I^−_2 and I^+_2, and distributed as (R(s), 0 ≤ s ≤ τ_R(h_t/2)). We now extend this process by setting R^(1) e^{R^(1)(s)} ds. By the strong Markov property, R^(1) (and hence also I^+_1) is independent of I^+_2, I^−_1, I^−_2 and e_1. Moreover, with the same notation as in Lemma 4.6, we have on V_t, I^− = I^−_1 + I^−_2 and A(L̃_2) = A^+_1 + I^+_2, where I^− is defined in Lemma 4.6.
Proof of Proposition 1.4 and Corollary 1.5: Let Ñ_t be the unique integer such that H(m_{Ñ_t}) ≤ t < H(m_{Ñ_t+1}). First, by Lemma 2.3 and (3.22), Finally, Propositions 3.2 and 4.7 give, since φ(t) = o(log t), Assume first that 0 < r < s < 1 and a > 0. Then Lemmata 2.3, 3.5 and (5.3) yield where s_t := s + 2/log h_t. We now use (3.9) of Proposition 3.2 and get, for any ε > 0 and large t, Let 0 < r < s < 1 and a > 0. Using first the uniform convergence of u ↦ e^{κφ(t)} P(U/t > u) on the compact [a + r, a + s] ⊂ (0, ∞), and then the vague convergence of μ_t (see Lemma 5.1), we get Consequently, by letting ε → 0, we obtain the first inequality of the following line: lim sup We prove the second inequality similarly. Since we consider probability measures, the cases r = 0, s = 1 or a = 0 follow, which concludes the proof of Proposition 1.4. Corollary 1.5 follows by straightforward computations.
Proof of Proposition 1.6: Let us denote by ν_t a positive measure on R_+ such that for every We have, with the arguments already used between (5.4) and (5.5), for any a > 0 and ε > 0, We prove the lower bound similarly. We now show that the Laplace transform of the measure ν_t converges as t goes to infinity. We consider α such that 0 < λ < α. We get, by Propositions 3.2 and then 4.7, We also notice that where ν is the measure defined by dν(u) = 1 This pointwise convergence of the Laplace transform of ν_t leads to the vague convergence of ν_t to ν. Using the uniform convergence of x ↦ e^{κφ(t)} P(U/t > 1 + a − x) on [0, 1] provided by Lemma 5.1, we get and this remains true for a = 0. Since for every a > 0 and b > 0, changing C_κ λ^κ into u gives the pointwise convergence of E[exp(−u N_t/e^{κφ(t)})] to the right-hand side of (1.4), which ends the proof of Proposition 1.6.

5.2.
The localization: proof of Theorem 1.3. Let φ*(t) := φ(t)/ζ, where 0 < ζ < 1 will be chosen later. Let us define H_{x→y} := H(y) − H(x) for 0 < x < y, t* := t − e^{φ*(t)(1+2δ)}, We also introduce I_j := [m̃_j − φ*(t)/ζ, m̃_j + φ*(t)/ζ], j ∈ N*. Let ε > 0. We have: We split the proof into three parts. We start with Part 1: we prove that there exists c_4 > 0 such that for large t, If t is large enough, on B_j, after first hitting m̃_j, X stays in [L̃^−_j, L̃_j] at least until time t(1 + ε/2). Therefore, conditioning on H(m̃_j) and using the strong Markov property, So, as in ([5], proof of Prop. 4.1), we now introduce a coupling between X (under P^{W_κ}_{m̃_j}) and a reflected process Y_j defined below. To this aim, let (Y Brownian motion starting from A(x) and reflected at A(L̃^−_j) and A(L̃_j), independent of W_κ, and T^{x,j} is defined like T with B replaced by B̃ is a diffusion in the potential W_κ, starting from x ∈ [L̃^−_j, L̃_j] and reflected at L̃^−_j and L̃_j. We denote its law by P^{W_κ}_{j,x}.
This enables us to define Y_j by P^{W_κ}_j(Y_j ∈ ·) := As in ([5], proof of Prop. 4.1), μ̃_j is invariant for the semi-group of Y_j; in particular, P^{W_κ}_j(Y_j(s) ∈ U) = μ̃_j(U) for every s ≥ 0 and U ⊂ [L̃^−_j, L̃_j]. We can now, as in [5], build a coupling Q^{W_κ}_{m̃_j} of X and Y_j such that Q^{W_κ}_{m̃_j}(Y_j ∈ ·) = P^{W_κ}_j(Y_j ∈ ·) and Q^{W_κ}_{m̃_j}(X ∈ ·) = P^{W_κ}_{m̃_j}(·); these two Markov processes Y_j and X move independently until the first collision H_j := inf{u ≥ 0, and then X and Y_j move independently again.
We deduce from (5.10) that Hence, since X and Y_j are continuous, where the last line comes from the independence of X and Y_j until H_j and the fact that μ̃_j is the invariant probability measure for Y_j.
Let s ∈ [0, t*]. Since X(u) = Y_j(u) for every H_j ≤ u ≤ H^e_j and t_1 ≤ t − s ≤ t(1 + ε/2) − s, and notice that we can replace Q^{W_κ}_{m̃_j} by P^{W_κ}_{m̃_j} in the first line. We now prove a lemma about μ̃_j: Notice that for any ζ, the right-hand sides of (5.13) and (5.14) go to 0 as t → +∞.