Slowdown estimates for one-dimensional random walks in random environment with holding times

We consider a one-dimensional random walk in random environment that is uniformly biased in one direction. In addition to the transition probabilities, the jump rates of the random walk are assumed to be spatially inhomogeneous and random. We study the probability that the random walk travels slower than its typical speed and determine the asymptotics of its decay rate.


Setting and preliminaries
Let (ω = {ω(x)}_{x∈Z}, P) be independent and identically distributed random variables taking values in (0, 1). For a given ω, the random walk in random environment {X_n}_{n=0}^∞ is the Markov chain with transition probabilities

ω(x) = P_ω(X_{n+1} = x + 1 | X_n = x) = 1 − P_ω(X_{n+1} = x − 1 | X_n = x). (1.1)

It is said to be uniformly biased (to the right) if P-ess inf ω(0) > 1/2. In this case, the law of large numbers is known to hold with a positive speed (see [19]):

lim_{n→∞} (1/n) X_n = v_P > 0. (1.2)

In this paper, we consider a variant of this process whose jump rates are spatially inhomogeneous and random. Specifically, as in [5], let (µ = {µ(x)}_{x∈Z}, P) be independent and identically distributed strictly positive random variables with mean one. For a given µ, we consider a continuous-time random walk (X = {X_t}_{t≥0}, {P^{ω,µ}_z}_{z∈Z}) on Z whose jump rates from x to x + 1 and to x − 1 are given by ω(x)/µ(x) and (1 − ω(x))/µ(x), respectively. This type of random walk (usually with ω ≡ 1/2) is sometimes called a random hopping time dynamics. If in addition the mean of µ(0) is infinite, it is also called Bouchaud's trap model. Since we assume a finite mean, there is no trapping effect and it is easy to check that the law of large numbers holds:

P^{ω,µ}_0( lim_{t→∞} (1/t) X_t = v_P ) = 1, P ⊗ P-almost surely. (1.3)

A large deviation principle at speed t also holds for the laws of {t^{−1} X_t}_{t>0}, as a special case of the results of [5]. However, when µ is unbounded, it is easily seen that the slowdown probability

P^{ω,µ}_0(X_t < vt) for v ∈ (0, v_P) (1.4)

exhibits sub-exponential decay. The aim of this work is to establish the precise sub-exponential decay rate of the slowdown probability and relate it to the tail of the law of µ(x). While our methods apply to quite general distributions of µ, we consider three representative classes (Pareto, Intermediate and Weibull) to keep the statements concise.
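As an illustration of the model just defined, the continuous-time walk can be simulated by alternating an exponential holding time of mean µ(x) with a ±1 step taken according to ω(x), since the total jump rate at x is ω(x)/µ(x) + (1 − ω(x))/µ(x) = 1/µ(x). The following sketch uses illustrative choices for the laws of ω(0) and µ(0) (not prescribed by the text; any law with P-ess inf ω(0) > 1/2 and strictly positive, mean-one µ(0) fits the setting):

```python
import random

def run_walk(t_max, rng):
    """Simulate X_t up to time t_max in a lazily sampled environment."""
    omega, mu = {}, {}

    def env(x):
        # Illustrative environment laws (assumptions for this sketch):
        # omega(x) uniform on (0.6, 0.9), so P-ess inf omega(0) > 1/2;
        # mu(x) uniform on (0.1, 1.9), strictly positive with mean one.
        if x not in omega:
            omega[x] = rng.uniform(0.6, 0.9)
            mu[x] = rng.uniform(0.1, 1.9)
        return omega[x], mu[x]

    x, t = 0, 0.0
    while True:
        w, m = env(x)
        # Total jump rate at x is 1/mu(x): wait an Exp(1/mu(x)) holding
        # time, then step right with probability omega(x), else left.
        t += rng.expovariate(1.0 / m)
        if t >= t_max:
            return x
        x += 1 if rng.random() < w else -1

rng = random.Random(1)
t_max = 2000.0
speed = run_walk(t_max, rng) / t_max
print(speed)  # strictly positive, in line with the law of large numbers (1.3)
```

With these choices the empirical speed is comfortably positive; the slowdown event {X_t < vt} for v below this speed is rare, and its decay rate is what Theorems 1.4 and 1.6 quantify.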
To be precise, suppose

P(µ(0) > r) = exp{−g(r)} (1.5)

and that g has one of the following forms:

(P) e^{g(r)} is regularly varying at ∞ with index α > 1;
(I) g is slowly varying at ∞ and satisfies lim_{r→∞} g(r)/log r = ∞;
(W) g is regularly varying at ∞ with index α > 0.

Furthermore, when g satisfies (W) with α = 1, we assume in addition that the so-called Cramér condition (C) holds. In the quenched slowdown estimate, the extreme value of µ plays an important role.
Let us recall two results from extreme value theory. Let g^{−1} denote the left-continuous inverse of g. The first result gives a condition under which the running maxima of {µ(x)}_{x∈Z} can be approximated by a deterministic sequence up to a multiplicative constant.
This can be found in [16, Corollary 1]. Any distribution in the class (W) satisfies (1.6) with l = 1. For the class (I), the condition is satisfied for g(r) = (log r)^β 1_{{r≥1}} (β > 1) with l = 1 and for g(r) = (log r)(log log r) 1_{{r≥e}} with l = e, but not for g(r) = (log r)(log log log r) 1_{{r≥e^e}}.
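For intuition on the role of extreme values, recall that the maximum of n i.i.d. variables with tail P(µ > r) = exp{−g(r)} concentrates around g^{−1}(log n). A quick numerical check in the Weibull class, with the hypothetical instance g(r) = √r (the mean-one normalisation is ignored here, as it only affects constants):

```python
import math
import random

rng = random.Random(0)

# Class (W) instance with g(r) = sqrt(r): since P(mu > r) = exp(-g(r)),
# inverse-transform sampling gives mu = g^{-1}(E) = E^2 for E ~ Exp(1).
def sample_mu():
    return rng.expovariate(1.0) ** 2

for n in (10**3, 10**5):
    running_max = max(sample_mu() for _ in range(n))
    predicted = math.log(n) ** 2  # deterministic approximation g^{-1}(log n)
    ratio = running_max / predicted
    print(n, round(ratio, 2))  # the ratio stays of order one as n grows
```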

Results
To simplify the presentation, we introduce the following notation: for two functions f, g : (0, ∞) → (0, ∞), we write f(t) ≍_log g(t) when there exists a c ∈ (0, 1] such that for all sufficiently large t,

g(t)^{1/c} ≤ f(t) ≤ g(t)^{c}. (1.12)

When only the left inequality holds, we write f(t) ≳_log g(t), and when only the right one holds, f(t) ≲_log g(t).
Our first result is the following quenched slowdown estimate.
Theorem 1.4. Let P be a uniformly biased environment: P-ess inf ω(0) > 1/2. For any v ∈ (0, v_P), P ⊗ P-almost surely, (1.13). Moreover, if µ satisfies the assumption of Lemma 1.1, then (1.14).

Our second result is the corresponding annealed slowdown estimate. To this end, for each t, let h(t) be the largest h > 0 satisfying

t/h ≥ g(h) − log t. (1.15)

Such an h exists for all large t. Indeed, when t is large, the inequality (1.15) holds for h = log t and fails for h = t, whereas for fixed t > 0 the left-hand side of (1.15) is decreasing in h and its right-hand side is eventually increasing in h.

Remark 1.5.
In general, both h(t) and g(h(t)) grow sub-linearly in t. Furthermore, it is straightforward to check that (1.16) holds, where the function ℓ(t) is slowly varying at ∞. There is no such simple formula in the case (I), but for the representative example g(r) = (log r)^β (β > 1), we have (1.17).

Theorem 1.6. Let P be a uniformly biased environment: P-ess inf ω(0) > 1/2. Suppose v ∈ (0, v_P). Then for h(·) as in (1.15), (1.18) holds, where P ⊗ P[ · ] denotes the expectation with respect to P ⊗ P.
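Under our reading of (1.15), h(t) is the largest h with t/h ≥ g(h) − log t; since the left side decreases in h while the right side eventually increases, h(t) can be located numerically by bisection. A sketch for the Weibull case (the choice g(r) = r^α and the value of α are illustrative):

```python
import math

def h_of_t(t, g):
    """Bisection for the crossing of t/h (decreasing in h) and
    g(h) - log t (increasing in h); (1.15) holds at h = log t and
    fails at h = t for large t, so h(t) lies in between."""
    lo, hi = math.log(t), t
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if t / mid >= g(mid) - math.log(t):
            lo = mid   # (1.15) still holds here: h(t) is to the right
        else:
            hi = mid
    return lo

alpha = 2.0
g = lambda r: r ** alpha
for t in (1e6, 1e9):
    h = h_of_t(t, g)
    # In this case one expects h(t) = (t / ell(t))^{1/(1+alpha)} up to a
    # slowly varying ell, so the last ratio printed should be near one.
    print(t, h, h / t ** (1.0 / (1.0 + alpha)))
```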

Related works
Sub-exponential tail estimates are established in [6] for the annealed slowdown probability of the random walk in random environment (X_n)_{n≥0} governed by (1.1). When the environment admits both positive and negative drifts, that is, P-ess inf ω(0) < 1/2 < P-ess sup ω(0), the annealed slowdown probability exhibits polynomial decay.
In the case of positive and zero drifts, that is, P-ess inf ω(0) = 1/2, under the additional assumption that P(ω(0) = 1/2) > 0, the slowdown probability is shown to decay stretched-exponentially with exponent 1/3. The rate of decay of the corresponding quenched slowdown probability is determined in [8], based on the annealed result and a block argument. In the case of positive and zero drifts, both the annealed and quenched results are refined to the precision of the usual large deviation principle in [15] and [14], respectively. On the other hand, in the case of positive and negative drifts, it has recently been shown in [1] that the leading order of the quenched slowdown probability oscillates and hence does not satisfy a large deviation principle. For more details, we refer the reader to the survey article [9] as well as the introduction of [1]. We focus here on the uniformly biased situation with (inhomogeneous) holding times. Indeed, for uniformly biased environments, the result of [10] shows that without holding times the slowdown probability decays exponentially. Thus, in our setting the sub-exponential decay of the slowdown probability is caused solely by the inhomogeneity of the holding times. Note that in the case of positive and negative drifts, since the annealed slowdown probability decays polynomially even without holding times, the holding times can have a visible effect only if µ has a power-law tail. Similarly, in the case of positive and zero drifts, the most natural choice of µ would be the Weibull distribution. We leave these two cases for future research.
Finally, [20,21,2] provide estimates for the decay rate of the slowdown probability for random walks in random environment in higher dimensions. While it would be interesting to see how holding times affect such slowdown probabilities, our method relies on a certain renewal structure which is limited to the one-dimensional setting.

Outline
Section 2 provides the relatively easy proofs of our lower bounds: in both the quenched and annealed settings, we simply let the random walk stay until time t at the site with the highest µ-value within [0, vt − 1]. In the quenched case, the highest µ-value behaves as described in Lemmas 1.1 and 1.2, while in the annealed setting we can make it larger at a suitable cost in log P-probability, and we optimize the sum of the corresponding cost and gain.
The derivation of the upper bounds is more involved. Since a sub-exponential slowdown decay for a uniformly biased random walk can only be caused by the inhomogeneity of the holding times µ, we introduce in Section 3 a suitable time change and thereby reduce such upper bounds on the slowdown decay to a tail bound for certain additive functionals. Section 4 provides our main technical contribution, showing that conditioning on some good events with respect to ω and µ yields the stated upper bounds. Finally, in Section 5, we show that (i) in the quenched setting, the good event has probability one; (ii) in the annealed setting, the good event has probability comparable to the upper bound in Theorem 1.6.

Lower bounds
Proof of the lower bound in Theorem 1.4. Let us define a regularly varying function by (2.1) when (1.6) holds. Then, due to Lemmas 1.1 and 1.2, the following holds P-almost surely for all sufficiently large t.

Proof of the lower bound in Theorem 1.6. As in the quenched case, if there is a point x with a sufficiently high µ-value, then we can slow the random walk down by using the first holding time at x. Therefore, by the definition of h(t), we obtain the desired lower bound.
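The confinement strategy used in both proofs can be summarised in a single display (a sketch under our reading of the argument; the constant C and the event chosen are illustrative):

```latex
% Keep the walk at a maximiser x^* of mu over [0, vt-1]: the total jump
% rate at x^* is 1/mu(x^*), so staying put on [H(x^*), t] costs at most a
% factor exp{-t/mu(x^*)}, while reaching x^* (< vt) before time t has
% probability bounded below, the walk being transient to the right.  Hence
\log P^{\omega,\mu}_0\bigl(X_t < vt\bigr)
  \;\ge\; \log P^{\omega,\mu}_0\bigl(H(x^*) < t,\ X_s = x^* \text{ for all } s \in [H(x^*), t]\bigr)
  \;\ge\; -C - \frac{t}{\mu(x^*)}.
```

In the quenched case, µ(x*) is of order g^{−1}(log t) by extreme value theory (Lemmas 1.1 and 1.2); in the annealed case one can additionally pay log P-probability −g(h) to make µ(x*) exceed any level h, and optimising the total cost t/h + g(h) over h is, roughly, how h(t) of (1.15) enters.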

Reduction to tail estimate for additive functional
Let us first translate the problem in terms of the hitting time H(x) of x by our process. For any u > v, on the slowdown event {X_t < vt}, either the walk hits ut before time t and thereafter goes back to (−∞, vt], or it does not reach ut before time t. Hence,

P^{ω,µ}_0(X_t < vt) ≤ P^{ω,µ}_0(H(ut) ≤ t, X_t < vt) + P^{ω,µ}_0(H(ut) > t). (3.1)

The first term on the right-hand side is exponentially small in t. Indeed, since the random walk then must backtrack over a distance (u − v)t, it follows that (3.4)

Green function estimates
The Green function G^ω_{(a,b)}(x, y) measures the expected time spent at y by the walk started at x before it exits (a, b); note that it depends only on ω. Since our random walk is transient, this quantity is finite P-almost surely.
Lemma 3.1. There exist positive constants c_1, c_2 and a function η(ε) → 0 as ε → 0, all depending only on P-ess inf ω(0) > 1/2, such that the following hold P-almost surely:

Proof. Let ({S_t}_{t≥0}, {P_x}_{x∈Z}) be the biased random walk corresponding to the deterministic environment ω ≡ P-ess inf ω(0) > 1/2. It is standard to construct a coupling (S_t, X_t) so that the two walks jump at the same times and S_t ≤ X_t for all t ≥ 0. The first assertion (i) readily follows from this coupling since, for any ε > 0, Next, turning to the proof of (ii), by our coupling, P^ω_z(H(z − 1) = ∞) ≥ η uniformly in ω and z ∈ Z, where η is the corresponding escape probability for {S_t}_{t≥0}. Further, by our coupling with {P^ω_x}_{x∈Z}, the return probability of the process (S_t)_{t≥0} is bounded away from one, uniformly in its starting point S_0 = y and the environment ω. This implies the same uniform boundedness of G^ω_{(−∞,∞)}(y, y), which in view of (3.7) and (3.8) completes the proof.
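The uniform Green function bound in the proof above can be sanity-checked in the simplest case of a homogeneous environment ω ≡ p with unit rates: the number of visits to the starting point of a p-biased walk (p > 1/2) is geometric with success probability p − q, so the expected number of visits is 1/(p − q). A toy Monte Carlo check (parameters are illustrative, and this is a check of the homogeneous comparison walk, not of the paper's proof):

```python
import random

def visits_to_origin(p, rng, cutoff=200):
    """Count visits to 0 of a p-biased nearest-neighbour walk started
    at 0, stopping once the walk reaches `cutoff` (it is transient to
    the right, so later returns to 0 are negligibly rare)."""
    x, visits = 0, 1  # the initial visit counts
    while x < cutoff:
        x += 1 if rng.random() < p else -1
        if x == 0:
            visits += 1
    return visits

rng = random.Random(7)
p, runs = 0.7, 5000
est = sum(visits_to_origin(p, rng) for _ in range(runs)) / runs
print(est, 1.0 / (p - (1.0 - p)))  # both close to 2.5
```

The return probability of the p-biased walk from its starting point is 2(1 − p) < 1, which is the quantitative content of the "bounded away from one" step; in the lemma this bound is transferred to a general ω via the coupling.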

Upper bound on a good event
Let us fix u ∈ (v, v_P) and a regularly varying, increasing function M with lim_{t→∞} M(t) = ∞. Throughout this section, we fix ω and µ and assume that they satisfy the following conditions: (4.1) holds, and there exists δ ∈ (0, 1) such that (4.2) holds for all sufficiently small ε > 0. Let us introduce the quantity f_{ε,t}(x, y) for 0 ≤ x < y ≤ ut. By the strong Markov property and the fact that f_{ε,t}(x, y) ≥ 1, this quantity is sub-multiplicative in the following sense: for any 0 ≤ x < y < z ≤ ut, The assumption (4.1) and Lemma 3.1 imply (4.6). By the Feynman–Kac formula (Theorem 6.7 in [4]), we have Now, using log(1 + x) ≤ x for x ≥ 0, we obtain from (4.5) and (4.7) that (4.8). Next, by Lemma 3.1(ii), we have, uniformly in ω, t and y, as ε → 0, Further, c^ω_{−εt}(y) ≥ G^ω_{(−εt, y+1)}(y, y) ≥ 1 (the latter being at least the expected first jump time of {S_t}_{t≥0} out of y). Consequently, it follows from (4.9) that Thus, using Chebyshev's inequality and recalling (3.3) and (4.4), we obtain (4.12). Recall from Lemma 3.1(ii) that P^ω_0(H(−εt) < ∞) decays exponentially in t, so we can choose ε > 0 such that for all sufficiently large t, the desired bound holds under the assumptions (4.1) and (4.2) (where we also used that t → M(t) is regularly varying at infinity).

Proofs of upper bounds
Proof of the upper bound in Theorem 1.4. In view of (4.14), we only have to show that The last expression is P-almost surely of size ((u + ε)/v_P)(t + o(t)) as t → ∞. This can be seen as follows: in the same way as in [19, (1.16)], we find that for P-a.e. ω, t^{−1} H(ut) → (u + ε)/v_P in P^ω_{−εt}-probability as t → ∞. On the other hand, by the same coupling as in the proof of Lemma 3.1, it follows that sup_{t,ω} E^ω_{−εt}[(H(ut)/t)^2] < ∞. This implies the uniform integrability of {H(ut)/t}_{t>0} with respect to P^ω_{−εt} for every ω, and hence the above convergence in probability can be upgraded to convergence in L^1.
For any fixed u < v_P, we can choose δ > 0 such that (u + ε)/v_P < 1 − δ for all sufficiently small ε, so it remains to control the discrepancy on the left-hand side of (4.2) caused by replacing µ(y) by E[µ(y)] = 1. To this end, note from Lemma 3.1(ii) that the weights c^ω_{−εt}(y) are uniformly bounded. Thus, applying the strong law of large numbers for the weighted sum of the zero-mean i.i.d. variables {µ(y) − 1} with weights {c^ω_{−εt}(y)} yields the required smallness P ⊗ P-almost surely as t → ∞, and we are done.
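The weighted strong law invoked here is easy to visualise numerically: for deterministic weights in [0, κ] and i.i.d. mean-one µ(y), the normalised weighted sum of µ(y) − 1 vanishes. A minimal sketch (the weight and µ distributions are illustrative, not the paper's):

```python
import random

rng = random.Random(3)
for n in (10**3, 10**5):
    w = [rng.uniform(0.0, 2.0) for _ in range(n)]   # weights in [0, kappa], kappa = 2
    mu = [rng.expovariate(1.0) for _ in range(n)]   # i.i.d., strictly positive, mean one
    s = sum(wi * (m - 1.0) for wi, m in zip(w, mu)) / n
    print(n, s)  # tends to 0 as n grows
```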
In order to prove the upper bound in Theorem 1.6, we need the following lemma, which states that (1.15) is not far from an equality.

Lemma 5.1. Let h(t) be as in (1.15). Then for all sufficiently large t,

t/h(t) ≤ 2 (g(h(t)) − log t). (5.4)
Proof. We claim that h(t) ≤ ct/log t for some c < ∞ and all t large enough. Indeed, for h(t) > ct/log t the left-hand side of (1.15) is smaller than c^{−1} log t. On the other hand, since eventually g(h) ≥ β log h for some β > 1 in all three cases (P), (I) and (W), the right-hand side of (1.15) must then be at least (1/2)(β − 1) log t for all large t. Thus, by (1.15), we must have h(t) ≤ ct/log t with c = 2/(β − 1) for all t large enough. Now, by the definition of h(t), it follows that for any λ > 1,

λh(t) (g(λh(t)) − log t) > t, (5.5)

which implies, where in the second line we have used that g(·) is increasing and h(t) ≤ ct/log t. The stated conclusion (5.4) thus holds whenever To complete the proof, recall that for increasing and regularly varying g(·), the left-hand side of the latter condition gets arbitrarily close to one as h(t) → ∞.
Proof of the upper bound in Theorem 1.6. Again in view of (4.14) and (5.4), it remains to show that for any fixed u ∈ (v, v_P) and small δ, ε > 0 such that (u + ε)/v_P < 1 − 2δ, one has

P( max_{−εt≤z≤ut} µ(z) > h(t) ) ≲_log exp{−g(h(t)) + log t}, (5.8)

together with the corresponding bound (5.9). The bound (5.8) follows from the definition of g(·) and the union bound. Turning to (5.9), recall from (5.2) that the event on its left-hand side is contained in the union of the event (5.10) and the event

∑_{y∈(−εt,ut)} c^ω_{−εt}(y) (µ(y) − 1) > δt. (5.11)

To bound the probability of (5.10), we use Jensen's inequality to find that The last expression is precisely the large deviation upper bound for the hitting time, which is shown in [3] to decay exponentially in t whenever 1 − 2δ > (u + ε)/v_P. Recall from Remark 1.5 that t → g(h(t)) grows sub-linearly; hence any event whose probability decays exponentially in t is negligible for the purpose of verifying (5.9). Turning to the probability of the event in (5.11), recall that the positive weights c^ω_{−εt}(y) are bounded away from zero and infinity, uniformly in ω, t and y. Thus, standard large deviation estimates for such weighted sums yield that

P( ∑_{y∈(−εt,ut)} c^ω_{−εt}(y) (µ(y) − 1) > δt ) ≲_log exp{−g(h(t)) + log t}, (5.13)

as claimed. Indeed, if µ(y) has a finite exponential moment (which is the case for (W) when α ≥ 1; see (C) in case α = 1), then the Chernoff bound yields exponential decay in t of the probability on the left-hand side, whereas, appealing to Remark 1.5 for the sub-linear growth of t → h(t), in case µ(y) has no finite exponential moments we get (5.13) as a special case of Lemma 5.2 below.

Lemma 5.2. Let ({µ_k}_{k∈N}, P) be a family of i.i.d. mean-one random variables obeying either (P), (I) or (W) with α < 1. Then, for any sequence {w_k}_{k∈N} ⊂ [0, κ] and δ > 0, there exists c < ∞ depending only on δ, κ < ∞ and α (which appears in conditions (P) and (W)) such that

P( ∑_{k=1}^{n} w_k (µ_k − 1) > δn ) ≤ exp{−c^{−1} g(δn)}, for (I) and (W) with α < 1.
Such behavior of large deviation estimates for sums of independent random variables is well known in the literature. However, we were not able to find a result in this specific form, hence for the reader's convenience we include its proof in the appendix.
The cases (W) with α < 1 and (I) are studied in [11,12] and [17], respectively, in the i.i.d. setting. Utilizing a standard truncation argument, we extend their results to our weighted case in the large deviation regime. Specifically, note first that for any 0 < ε < δ, The first term has the desired form since g(·) is regularly varying and grows faster than the logarithm. It thus suffices to show that the product term on the right of (A.4) is bounded by exp{ε g(n)}. To this end, recalling that e^x ≤ 1 + x + x^2 e^κ for x ≤ κ, whereas E[ν_k ; µ_k ≤ n/g(n)] ≤ 0 and (g(n)/n) ν_k ≤ κ when µ_k ≤ n/g(n), we deduce that

E[ exp{(g(n)/n) ν_k} ; µ_k ≤ n/g(n) ] ≤ 1 + e^κ (g(n)/n)^2 E[ν_k^2] = 1 + o(g(n)/n) (A.5)

as n → ∞ (since g(n)/n → 0 and sup_k E[ν_k^2] < ∞). Next, ν_k ≤ κ µ_k, hence using integration by parts and the definition of g(·),

E[ exp{(g(n)/n) ν_k} ; n/g(n) ≤ µ_k ≤ n ] (A.6)
≤ E[ exp{(κ g(n)/n) µ_1} ; n/g(n) ≤ µ_1 ≤ n ]
≤ e^{κ − g(n/g(n))} + (κ g(n)/n) ∫_{n/g(n)}^{n} exp{(κ g(n)/n) r} e^{−g(r)} dr. (A.7)

The first term in (A.7) is o(g(n)/n) thanks to our assumption that g(·) grows faster than the logarithm. Furthermore, from the representation formula for slowly varying functions [18, Theorem 1.2], it follows that for all n ≥ n_0(κ, ε),

(κ g(n)/n) r ≤ g(r)/2 for all r ∈ [n/g(n), n]. (A.8)

The integral in (A.7) is thus at most ∫_{n/g(n)}^{∞} e^{−g(r)/2} dr = o(1) as n → ∞. Collecting the preceding estimates, we conclude that for all sufficiently large n,

∏_{k=1}^{n} E[ exp{(g(n)/n) ν_k} ; µ_k ≤ n ] ≤ (1 + ε g(n)/n)^n ≤ exp{ε g(n)}, (A.9)

and the proof is complete.