The speed of critically biased random walk in a one-dimensional percolation model

We consider biased random walks in a one-dimensional percolation model. This model goes back to Axelson-Fisk and H\"aggstr\"om and exhibits the same phase transition as biased random walk on the infinite cluster of supercritical Bernoulli bond percolation on $\mathbb{Z}^d$, namely, for some critical value $\lambda_{\mathrm{c}}>0$ of the bias, it holds that the asymptotic linear speed $\overline{\mathrm{v}}$ of the walk is strictly positive if the bias $\lambda$ is strictly smaller than $\lambda_{\mathrm{c}}$, whereas $\overline{\mathrm{v}}=0$ if $\lambda \geq \lambda_{\mathrm{c}}$. We show that at the critical bias $\lambda = \lambda_{\mathrm{c}}$, the displacement of the random walk from the origin is of order $n/\log n$. This is in accordance with simulation results by Dhar and Stauffer for biased random walk on the infinite cluster of supercritical bond percolation on $\mathbb{Z}^d$. Our result is based on fine estimates for the tails of suitable regeneration times. As a by-product of these estimates we also obtain the order of fluctuations of the walk in the sub-ballistic and in the ballistic, nondiffusive phase.


1. Introduction and main results
1.1. Introduction. In the physics literature, biased random walk on a percolation cluster is considered as a model for transport in an inhomogeneous medium. The mathematically rigorous study of biased random walk on the infinite cluster of supercritical Bernoulli bond percolation on Z^d was initiated in two parallel papers by Berger, Gantert and Peres [6], and Sznitman [19]. Both papers establish an interesting phenomenon, namely, if the strength of the bias is positive but small, then the linear speed of the walk is positive, whereas it is zero if the strength of the bias is sufficiently large. The sharpness of the phase transition, which had been conjectured in the physics literature by Barma and Dhar [4], remained open. An indication for the validity of the conjecture was provided by work of Lyons, Pemantle and Peres [18], who had shown that there is an analogous phase transition for the simpler model of biased random walk on a Galton-Watson tree with leaves, and that the phase transition in this model is indeed sharp. Moreover, the result of Lyons, Pemantle and Peres includes the statement that the speed at the critical bias equals zero. A rigorous proof of the sharpness of the phase transition for biased random walk on the infinite cluster of supercritical Bernoulli bond percolation on Z^d was eventually given by Fribergh and Hammond [12]. In that paper, the authors conjecture that the speed at the critical bias equals zero. What is more, in the physics literature, Dhar and Stauffer [9] conjectured that the displacement of the critically biased random walk from the origin at time n (in the direction of the bias) is of the order n/log n.
In the present paper, we shall prove this conjecture for biased random walk on a one-dimensional percolation cluster. This model was created by Axelson-Fisk and Häggström in [2, 3] to be simpler than biased random walk on the infinite cluster of supercritical Bernoulli bond percolation on Z^d, but to display qualitatively similar phenomena. Moreover, the initial hope might have been to construct a model that is even amenable to explicit calculations. And indeed, Axelson-Fisk and Häggström [2] were able to express the critical bias as an elementary function of a percolation parameter of the model. However, more complicated quantities such as the asymptotic linear speed as a function of the percolation parameter and the strength of the bias have withstood explicit calculation so far. Our proof of the fact that the displacement of the critically biased random walk at time n is of the order n/log n is based on refined estimates for the tails of suitable regeneration times that were introduced and studied in a joint paper of the second author with Gantert and Müller [13]. Our bounds on the tails of the regeneration times do not only hold for the critical bias but for a large range of biases including the whole sub-ballistic and the ballistic, nondiffusive phase. This allows us to deduce the order of the fluctuations of the walk in these phases. Our result on the fluctuations of the biased random walk in the sub-ballistic phase parallels the corresponding results for biased random walk on a Galton-Watson tree with leaves due to Ben Arous et al. [5] and is more precise than the corresponding result for random walk on the infinite cluster of supercritical Bernoulli bond percolation on Z^d obtained in [12].

1 Technische Universität Darmstadt, Germany. Email: luebbers@mathematik.tu-darmstadt.de
2 Universität Innsbruck, Austria. Email: matthias.meiners@uibk.ac.at
1.2. Model description. In this section, we give a brief introduction to the model and review some results that are required for the formulation of our main results.
Consider the ladder graph G = (V, E) with vertex set V = Z × {0, 1} and edge set E = {⟨u, v⟩ ∈ V × V : |u − v| = 1}, where | · | denotes the usual Euclidean norm on R^2. If v = (x, y) ∈ V, we write x(v) = x and y(v) = y, and call x and y the x- and y-coordinate of v, respectively.
In a first step, we consider i.i.d. bond percolation with retention parameter p ∈ (0, 1) on G, i.e., each edge e ∈ E is retained independently of all other edges with probability p, and deleted with probability 1 − p. As usual, we call an edge e ∈ E open if it is retained and closed if it is deleted. The state space of the percolation process is Ω = {0, 1}^E, which we endow with the product σ-algebra F. The elements ω ∈ Ω are called configurations. We interpret ω(e) = 1 for ω ∈ Ω and e ∈ E as the edge e being open in the configuration ω. A path between u, v ∈ V is a finite sequence P = (e_1, . . . , e_n) of edges e_1 = ⟨u_0, u_1⟩, . . . , e_n = ⟨u_{n−1}, u_n⟩ ∈ E with u_0 = u and u_n = v. The path P is called open if ω(e_k) = 1 for k = 1, . . . , n. Let Ω_{N_1,N_2} be the event that there exists an open path connecting a vertex with x-coordinate −N_1 to a vertex with x-coordinate N_2, and let P_{p,N_1,N_2} be the probability measure on (Ω, F) arising from conditioning i.i.d. bond percolation with parameter p on the event Ω_{N_1,N_2}. Then P_{p,N_1,N_2} converges weakly as N_1, N_2 → ∞ to a probability measure P*_p on (Ω, F).
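For concreteness, the finite-window percolation just described can be sampled directly and the conditioning event Ω_{N_1,N_2} checked by a graph search. The following Python sketch is an illustration only (it plays no role in the construction of P*_p); edge keys and the window convention are choices made here:

```python
import random
from collections import deque

def sample_ladder_config(n1, n2, p, rng):
    """Sample i.i.d. bond percolation on the ladder graph restricted to
    x-coordinates in [-n1, n2]; each edge is open with probability p."""
    config = {}
    for x in range(-n1, n2 + 1):
        for y in (0, 1):
            if x < n2:  # horizontal edge to the right neighbour
                config[((x, y), (x + 1, y))] = rng.random() < p
        config[((x, 0), (x, 1))] = rng.random() < p  # vertical edge
    return config

def has_crossing(config, n1, n2):
    """Check the conditioning event Omega_{N1,N2}: an open path from some
    vertex with x-coordinate -n1 to some vertex with x-coordinate n2."""
    adj = {}
    for (u, v), is_open in config.items():
        if is_open:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    queue = deque((-n1, y) for y in (0, 1))
    seen = set(queue)
    while queue:  # breadth-first search along open edges
        u = queue.popleft()
        if u[0] == n2:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```

Conditioning on Ω_{N_1,N_2} then amounts to rejection sampling: resample until `has_crossing` returns True.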
The distribution P_{ω,λ} is the quenched law of (Y_n)_{n∈N_0} (given ω). The corresponding annealed law is obtained by averaging the quenched laws over ω ∈ Ω using P_p. Formally, we define the probability measure P on {0, 1}^E × V^{N_0} as follows. For A ∈ F, B ∈ G, set

P(A × B) := ∫_A P_{ω,λ}(B) P_p(dω).    (1.1)

Notice that P depends on λ and p even though neither parameter figures in the notation. For λ > 0, under P, the walk (Y_n)_{n∈N_0} is transient and there exists a critical value λ_c for the bias such that X_n := x(Y_n) has positive linear speed if λ < λ_c, and zero linear speed if λ ≥ λ_c. This comes from the fact that the larger the bias, the more time the walk needs to leave dead-ends in the direction of the bias.
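The quenched walk can be simulated once a transition rule is fixed. The sketch below assumes, for illustration only, the usual conductance form c(⟨u, w⟩) = e^{λ(x(u)+x(w))} on open edges (the precise definition of P_{ω,λ} is in [2, 13]); note that with this choice a degree-3 vertex steps right, vertically, and left with probabilities proportional to e^λ, 1 and e^{−λ}, matching the lazy-walk rates appearing in Section 4:

```python
import math

def step_probs(v, config, lam):
    """Quenched one-step distribution of the biased walk at vertex v = (x, y).
    Assumption (for illustration; see [2, 13] for the precise model): each
    open edge e = <u, w> carries conductance exp(lam * (x(u) + x(w))) and the
    walk moves through an open edge with probability proportional to its
    conductance."""
    x, y = v
    neighbours = [(x + 1, y), (x - 1, y), (x, 1 - y)]
    weights = {}
    for u in neighbours:
        e = (min(v, u), max(v, u))  # store each edge with sorted endpoints
        if config.get(e, False):    # closed or absent edges carry no weight
            weights[u] = math.exp(lam * (v[0] + u[0]))
    total = sum(weights.values())
    return {u: w / total for u, w in weights.items()}
```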
Existence of a critical value for the bias has been proven in similar models, e.g., in [18] for biased random walks on Galton-Watson trees and in [12] for biased random walk on the supercritical percolation cluster in Z^d. In the present setting, λ_c is given as an elementary function of p.

1.3. Main results.
The main results of this paper concern the speed of biased random walk in the sub-ballistic regime. If the bias is critical (λ = λ_c), X_n is of order n/log n. This is in alignment with simulation results for biased random walk on the infinite cluster of supercritical bond percolation in Z^d in [9].

Theorem 1.3. In the case λ = λ_c, there exist constants 0 < a < b < ∞ such that

lim_{n→∞} P(X_n/(n/log n) ∈ [a, b]) = 1.
We derive this theorem from fine estimates for the tails of suitable regeneration times to be introduced below. Less accurate estimates for the tails of these regeneration times derived in [13] revealed a second phase transition at λ = λ_c/2, namely, a central limit theorem for (X_n)_{n∈N_0} with square-root scaling holds if and only if λ < λ_c/2, see [13, Theorem 2.6]. Our tail estimates also give control over the fluctuations of (X_n)_{n∈N_0} in the remaining parameter range λ ∈ [λ_c/2, ∞).
Theorem 1.4. (a) Let λ = λ_c/2, i.e., α = 2. Then the laws of ((X_n − nv)/√(n log n))_{n≥2} under P are tight.
(b) Let λ ∈ (λ_c/2, λ_c), i.e., α ∈ (1, 2). Then the laws of ((X_n − nv)/n^{1/α})_{n∈N} under P are tight.
(c) Let λ > λ_c, i.e., α ∈ (0, 1). Then the laws of (X_n/n^α)_{n∈N} under P are tight.

In all three cases covered by Theorem 1.4, we do not expect that tightness can be strengthened to convergence in distribution due to a lack of regular variation of the tails of the regeneration times, see Lemma 4.8 and the proof thereof. Instead, we expect only convergence along certain subsequences as found for biased random walk on Galton-Watson trees, cf. [5]. We refrain from further investigating this phenomenon, as our main goal in this paper is to derive the speed of biased random walk at the critical bias.
We continue with an overview of the organization of the paper. In Section 2, we introduce regeneration points and times that go back to [13]. We review known results about the regeneration points and times and state our main technical result, Proposition 2.5, which provides the precise order of the tails of the regeneration times. Based on these tail bounds, we prove the main results in Section 3. Section 4 is devoted to the proof of Proposition 2.5. Finally, in Appendix A, we provide an auxiliary result from renewal theory.

2. Regeneration points and times
We use the decomposition of the percolation cluster at regeneration points from [13]. Regeneration points are defined in two steps. Given a configuration ω ∈ Ω, a vertex v = (x(v), 0) ∈ V is called a pre-regeneration point if v ∈ C and (x(v), 1) is an isolated vertex in ω, that is, all three edges adjacent to (x(v), 1) are closed in ω.
Lemma 2.1 (Lemma 5.1 and Corollary 5.2 in [2]). With P p -probability one, there exist infinitely many pre-regeneration points both left and right of the origin.
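Pre-regeneration points are a purely local feature of the configuration and are easy to detect in a finite window. The sketch below (illustration only) checks the isolation of (x, 1); it deliberately omits the global condition that (x, 0) belongs to the infinite cluster C, which cannot be decided from a finite window:

```python
def pre_regeneration_candidates(config, x_min, x_max):
    """Return the x-coordinates x in [x_min, x_max] for which the vertex
    (x, 1) is isolated, i.e. its two horizontal edges and its vertical edge
    are all closed.  A pre-regeneration point additionally requires (x, 0)
    to lie on the infinite cluster; that global condition is omitted here."""
    result = []
    for x in range(x_min, x_max + 1):
        top_edges = [
            ((x - 1, 1), (x, 1)),  # horizontal edge to the left of (x, 1)
            ((x, 1), (x + 1, 1)),  # horizontal edge to the right of (x, 1)
            ((x, 0), (x, 1)),      # vertical edge below (x, 1)
        ]
        if not any(config.get(e, False) for e in top_edges):
            result.append(x)
    return result
```

Here `config` maps edges (with sorted endpoint pairs) to open/closed booleans; missing edges count as closed.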
We enumerate the pre-regeneration points in ω by . . . , R^pre_{−1}, R^pre_0, R^pre_1, . . . from left to right. The pre-regeneration points can be used to decompose the percolation cluster into independent pieces. For a, b ∈ Z with a < b, we denote the subgraph of ω with vertex set {v ∈ V : a ≤ x(v) < b} and edge set {e = ⟨u, v⟩ ∈ E : a ≤ x(u), x(v) < b, ω(e) = 1} by [a, b) and call [a, b) a piece or block (of ω). We then define ω_n := [x(R^pre_{n−1}), x(R^pre_n)), n ∈ Z. Using this definition, we may introduce the cycle-stationary percolation law P•_p.

Definition 2.2. The cycle-stationary percolation law P•_p is defined to be the unique probability measure on (Ω, F) such that the cycles ω_n, n ∈ Z, are i.i.d. under P•_p with each ω_n having the same law under P•_p as ω_1 under P*_p, and such that R^pre_0 = 0. We write P• for the annealed law of the biased random walk and the percolation configuration when the latter is drawn using P•_p instead of P_p. To be more precise, P• is defined as P in (1.1), but with P_p replaced by P•_p.
A vertex v ∈ V is called a regeneration point if 1. it is a pre-regeneration point and 2. the random walk (Y_n)_{n∈N_0} visits v exactly once.
It follows from the discussion in Section 4 of [13] that there are infinitely many regeneration points to the right of 0. We set R_0 := 0 and, for n ∈ N, define R_n to be the first regeneration point to the right of R_{n−1}. Thus, ρ_{n−1} < ρ_n for all n ∈ N, where ρ_n := x(R_n), n ∈ N_0. Furthermore, let τ_0 := 0 and, for n ≥ 1, let τ_n be the unique time at which the nth regeneration point R_n is visited by the walk (Y_k)_{k∈N_0}. In particular, 0 = τ_0 < τ_1 < . . . . We call τ_n the nth regeneration time. The following assertions about the regeneration times and points are known from [13].
The bulk of the work in this paper is required to prove this proposition. Before we turn to its proof, we first demonstrate in the subsequent section how the main results of the paper, Theorems 1.3 and 1.4, can be derived from it. The proofs of these theorems are generic in the sense that they do not use the particular definition of X n , but will apply to any random walk X n for which there are regeneration points and times satisfying the conclusions of Lemma 2.4 and Proposition 2.5.

3. Proofs of the main results
3.1. Preliminaries and notation. For random variables X and Y with distribution functions F and G, respectively, we say that X is stochastically dominated by Y, and write X ≼ Y, if F(x) ≥ G(x) for all x ∈ R. Convergence in distribution of a sequence (X_n)_{n∈N} of random variables towards a random variable X is denoted X_n →d X. Analogously, convergence in probability of X_n to X under P is denoted by X_n →P X. As usual, for sequences a, b : N → [0, ∞), we write a = o(b) or a_n = o(b_n) as n → ∞ if for every ε > 0 there is an n_0 ∈ N with a_n ≤ εb_n for all n ≥ n_0. We say that a and b are asymptotically equivalent and write a ∼ b or a_n ∼ b_n as n → ∞ if a_n, b_n > 0 for all sufficiently large n and lim_{n→∞} a_n/b_n = 1. Finally, we write a = O(b) or a_n = O(b_n) as n → ∞ if there exists some C > 0 such that a_n ≤ Cb_n for all sufficiently large n.
From Lemma 2.4, we infer that the τ_n, n ∈ N, are the points of a delayed renewal process on the integers. We denote the corresponding renewal counting process and first passage times by k(n) := max{k ∈ N_0 : τ_k ≤ n} and ν(n) := k(n) + 1, respectively, where n ∈ N_0. Notice that k(n) = max{k ∈ N_0 : ρ_k ≤ X_n}, n ∈ N_0. To infer Theorems 1.3 and 1.4 from Proposition 2.5, we shall choose a sequence (ξ_k)_{k∈N} of independent random variables such that the ξ_k, k ≥ 2, are i.i.d., τ_2 − τ_1 ≼ ξ_2 and P(ξ_2 > n) ∼ dn^{−α} as n → ∞ (where d is chosen as in Proposition 2.5). Then the law of ξ_2 is in the (normal) domain of attraction of an α-stable law. From general theory it then follows that, after a suitable renormalisation, the first passage times ν_ξ(t) := inf{k ∈ N : Σ_{i=1}^k ξ_i > t} converge in distribution as t → ∞. This will imply tightness of the first passage times ν(n) with the same renormalisation. From this, we shall derive the dual results for X_n, which translate into the statements of Theorems 1.3 and 1.4.
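The renewal quantities k(n) and ν(n) are elementary to compute from the increments of the process. The following sketch illustrates the definitions with deterministic increments in place of the random regeneration-time increments:

```python
import itertools
import bisect

def renewal_counts(increments, horizon):
    """Given positive increments tau_k - tau_{k-1} of a delayed renewal
    process with tau_0 = 0, return the counting process
    k(n) = max{k in N_0 : tau_k <= n} and the first passage times
    nu(n) = k(n) + 1, for n = 0, ..., horizon."""
    taus = list(itertools.accumulate(increments))  # tau_1, tau_2, ...
    # number of renewal epochs tau_k (k >= 1) that are <= n
    ks = [bisect.bisect_right(taus, n) for n in range(horizon + 1)]
    nus = [k + 1 for k in ks]
    return ks, nus
```

For example, increments (2, 3, 5) give renewal epochs τ_1 = 2, τ_2 = 5, τ_3 = 10, so k(4) = 1 and ν(4) = 2.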

3.2. Proofs of Theorems 1.3 and 1.4. We begin with the proof of the results in the sub-ballistic regimes.
Further, we may choose ξ_1 independent of ξ_2, ξ_3, . . . such that P(ξ_1 > n) ∼ dn^{−α} log n as n → ∞. We set ν_η(n) := inf{k ∈ N : Σ_{i=1}^k η_i > n} for a corresponding sequence (η_k)_{k∈N} of independent random variables whose increments are stochastically dominated by those of the regeneration times. Then it holds that ν_ξ(n) ≼ ν(n) ≼ ν_η(n) for all n ∈ N_0. Furthermore, Theorem 3a in [7] yields convergence in distribution of the suitably renormalised first passage times ν_ξ(n). (Notice that, other than in [7], here we allow ξ_1 to have a distribution different from that of ξ_2, ξ_3, . . ., but the contribution of the first step vanishes as n → ∞.) The difference of the upper and the lower bound in (3.1) satisfies

ρ_{ν(n)}/a_n − ρ_{k(n)}/a_n = ((ρ_{ν(n)} − ρ_{ν(n)−1})/ν(n)) · (ν(n)/a_n) →P 0 as n → ∞.    (3.3)

Indeed, the first factor on the right-hand side converges to 0 P-a.s. as n → ∞ due to Lemma 2.4(b) and [14, Theorem 1.2.3(i)], while the family of laws corresponding to the second factor is tight by (3.2). Consequently, the difference in (3.3) converges to 0 in distribution and thus in P-probability. Now suppose α = 1. Then Y_1(t) = t P-a.s. and hence X_1 = 1 P-a.s. The convergence in (3.2) thus is in fact convergence in probability. This completes the proof of Theorem 1.3.
Finally, if 0 < α < 1, then (3.2) and ν_ξ(n) ≼ ν(n) ≼ ν_η(n) for all n ∈ N_0 imply that the family of laws of (ν(n)/n^α)_{n∈N} is tight. From (3.1) and (3.3) we conclude that this carries over to the family of laws of (X_n/n^α)_{n∈N}.
We now turn to the proof of the main results for ballistic, nondiffusive biases.
Proof of Theorem 1.4(a) and (b). We prove (a) and (b) simultaneously. Let a_n := n^{1/α} in the case α ∈ (1, 2) and a_n := √(n log n) if α = 2. For n ∈ N, we have

(ρ_{k(n)} − nv)/a_n ≤ (X_n − nv)/a_n ≤ (ρ_{ν(n)} − nv)/a_n.

4. Proof of the tail estimate for regeneration times
It remains to prove the tail estimate for regeneration times, Proposition 2.5. This will be done in this section. We begin with the analysis of traps, which will almost immediately result in a proof of the lower bound in Proposition 2.5.

4.1. Traps and biased random walk on a line segment. As for biased random walk on the supercritical percolation cluster, the slowdown in the model considered here is due to traps. These are dead-end regions stretching in the direction of the bias. For (conditional) percolation on the ladder graph, this boils down to parallel finite open horizontal line segments with no vertical connections.
To give a formal definition of a trap, we introduce some notation. For a vertex u ∈ V, we write u′ for (x(u), 1 − y(u)). Further, if e = ⟨u, v⟩ ∈ E, we let e′ := ⟨u′, v′⟩. In particular, e = e′ if e is a vertical edge, and e′ is the horizontal edge parallel to e if e is a horizontal edge. Now we define a trap (in ω) to be an open path P = (e_1, . . . , e_m) of length m ∈ N with edges e_1 = ⟨u_0, u_1⟩, . . . , e_m = ⟨u_{m−1}, u_m⟩ all lying in one horizontal line such that the vertical edges at u_1, . . . , u_m and the horizontal edge extending the path beyond u_m are closed. Here, m is called the length of the trap, u_0 is called the trap entrance and u_m is called the bottom of the trap.

[Figure: a trap with its entrance, bottom and trap end.]

We define the backbone B to be the subgraph of the infinite cluster C obtained by deleting from C all edges and all vertices in traps except the trap entrance vertices. Clearly, B is connected and contains all pre-regeneration points.

Figure 3. The original percolation configuration and the backbone.

Due to the Markovian structure of the percolation process under P_p, there are infinitely many traps both to the left and to the right of the origin 0. Let T_n, n ∈ Z, be an enumeration of all trap pieces such that T_n is strictly to the left of T_{n+1} for each n ∈ Z and such that T_1 is the trap piece with minimal nonnegative x-coordinate of the trap entrance. Denoting the length of the trap in the trap piece T_n by ℓ_n, the following result holds.
An excursion of the random walk (Y_n)_{n∈N_0} into a fixed trap of length m can be identified with an excursion of a biased random walk (S_n)_{n∈N_0} on the line graph {0, 1, . . . , m}, where m is the length of the trap. Therefore, we study biased random walk on {0, 1, . . . , m}. Let p_λ := e^λ/(e^λ + e^{−λ}), q_λ := 1 − p_λ, and γ := q_λ/p_λ = e^{−2λ}. We write P^k_{m,λ} for the law of a biased random walk (S_n)_{n∈N_0} on {0, . . . , m} starting at k ∈ {0, . . . , m}, moving to the right with probability p_λ and moving left with probability q_λ from any vertex other than 0, m. The origin 0 is supposed to be absorbing and at m the walk stays put with probability p_λ and moves left with probability q_λ. We write E^k_{m,λ} for the corresponding expectation. We drop the superscript k, both in P^k_{m,λ} and in E^k_{m,λ}, if k = 1. For k, l ∈ {0, . . . , m} we write σ_k := inf{j ∈ N_0 : S_j = k}, σ^+_k := inf{j ∈ N : S_j = k}, and σ_{k→l} := inf{j ≥ 0 : S_j = l} on {S_0 = k}. Let e_m := P^m_{m,λ}(σ^+_0 < σ^+_m) be the escape probability from the rightmost node in the trap to the trap entrance without a rebound to the rightmost node in the trap. By the well-known Gambler's ruin formula, this is

e_m = (p_λ − q_λ) γ^m / (1 − γ^m).

4.2. The proof of the lower bound. We are ready to prove the lower bound.
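The Gambler's ruin formula for the escape probability e_m can be checked against an independent numerical solution of the harmonic equation for the ruin probabilities. A sketch (the closed form used here, e_m = (p_λ − q_λ)γ^m/(1 − γ^m), is reconstructed from the surrounding definitions, as the displayed formula is lost in the source):

```python
import math

def escape_prob_closed_form(m, lam):
    """Closed form (reconstructed from the Gambler's ruin formula) for
    e_m = P^m_{m,lam}(sigma_0^+ < sigma_m^+):
        e_m = (p - q) * gamma^m / (1 - gamma^m),  gamma = q/p = exp(-2*lam)."""
    p = math.exp(lam) / (math.exp(lam) + math.exp(-lam))
    q = 1.0 - p
    gamma = q / p
    return (p - q) * gamma ** m / (1.0 - gamma ** m)

def escape_prob_numeric(m, lam):
    """Independent check: solve h(k) = p*h(k+1) + q*h(k-1), h(0) = 1,
    h(m) = 0, by a shooting method; h(k) is the probability of hitting 0
    before m when started at k.  Starting from m, the walk escapes without
    rebound iff its first step goes left (probability q) and it then hits 0
    before m, so e_m = q * h(m-1)."""
    p = math.exp(lam) / (math.exp(lam) + math.exp(-lam))
    q = 1.0 - p
    if m == 1:
        return q  # h(0) = 1: a single step to the left is already an escape

    def h_end(h1):
        # propagate h(0) = 1, h(1) = h1 through the recursion up to h(m)
        prev, cur = 1.0, h1
        for _ in range(m - 1):
            prev, cur = cur, (cur - q * prev) / p
        return cur

    # h_end is affine in h1, so one linear solve enforces h(m) = 0
    f0, f1 = h_end(0.0), h_end(1.0)
    h1 = -f0 / (f1 - f0)
    prev, cur = 1.0, h1
    for _ in range(m - 2):
        prev, cur = cur, (cur - q * prev) / p
    return q * cur  # cur == h(m-1)
```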

Lemma 4.2. There exists some c > 0 such that, for all n ∈ N, the asserted lower bound holds.

In the next proof and throughout the paper, for a random variable Z and p̄ ∈ (0, 1), we write Z ∼ geom(p̄) if Z is geometrically distributed with success parameter p̄.

Proof. According to Lemma 2.4 and since P(R^pre_0 = 0) > 0, it suffices to provide a lower bound for P•(τ_1 ≥ n, Y_k ≠ 0 for all k ≥ 1). Under P•_p, there is a pre-regeneration point at 0 as depicted in the figure below.

[Figure: the percolation configuration near the pre-regeneration point at 0, with a trap of length m directly to the right of the origin.]
Given there is a pre-regeneration point at 0 (as is always the case under P•_p), the law of the percolation cluster to the right of the origin is the same under P_p and P•_p, since the ω_n, n ∈ N, have the same law under P_p and P•_p. We may thus argue as on p. 3404 of [2] to conclude that the probability that directly to the right of the origin there is a trap of length m as in the picture above is γ(p)e^{−2λ_c m} for some constant γ(p) ∈ (0, 1).
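The trap probability γ(p)e^{−2λ_c m} just derived, combined with the escape probability e_m ≍ γ^m = e^{−2λm} from Section 4.1, already suggests the tail exponent. The following back-of-the-envelope computation is a heuristic only; it sets α := λ_c/λ, which is consistent with the parameter ranges appearing in Theorems 1.3 and 1.4:

```latex
% A trap of length m occurs with probability of order e^{-2\lambda_c m},
% and the walk needs of order \gamma^{-m} = e^{2\lambda m} steps to leave it.
% Choosing the dominant depth \bar{x} with e^{2\lambda \bar{x}} \approx n:
\mathrm{P}(T \ge n) \;\gtrsim\; \mathrm{P}\bigl(\text{trap of length } \bar{x}\bigr)
  \;\asymp\; e^{-2\lambda_{\mathrm{c}} \bar{x}},
\qquad \bar{x} = \frac{\log n}{|\log \gamma|} = \frac{\log n}{2\lambda},
\qquad\text{so}\qquad
\mathrm{P}(T \ge n) \;\gtrsim\; n^{-\lambda_{\mathrm{c}}/\lambda} = n^{-\alpha}.
```

In particular, the heuristic exponent equals 1 exactly at the critical bias λ = λ_c, matching the n/log n displacement of Theorem 1.3.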
We write T for the time spent on the first excursion of (Y_n)_{n∈N_0} into the trap right of the origin. It thus remains to bound from below the probability of {T ≥ n} given there is a trap directly to the right of the origin.
Typically, after entering the trap the walk drifts towards the bottom of the trap and then requires a geometric number of trials to leave again. It follows from the Gambler's ruin formula that, for all m, hitting the bottom before leaving the trap has positive probability bounded from below. The probability of leaving the trap from the bottom without rebound to the bottom is e_m. In order to visit the trap in the situation as depicted above, two steps to the right at the start suffice. Thus we obtain a lower bound given by a sum over the possible trap lengths. Restricting this sum to the term of order x̄ := log n/|log γ| leads to the desired bound.

Let (S′_n)_{n∈N_0} be a biased random walk on Z that mimics the steps of (S_n)_{n∈N_0} without staying put. More precisely, set S′_0 := 0 and, for n < σ_0, let the increments of (S′_n)_{n∈N_0} be those of (S_n)_{n∈N_0} with the lazy steps at m removed. After (S_n)_{n∈N_0} hits the absorbing state 0, we let (S′_n)_{n∈N_0} move along as the usual biased random walk on Z with probability p_λ to jump right. For z ∈ Z, write P^z_{Z,λ} and E^z_{Z,λ} for the law of (S′_n)_{n∈N_0} starting at S′_0 = z and the corresponding expectation, respectively. For k ∈ Z, set σ^Z_k := inf{l ≥ 0 : S′_l = k}. We start with a well-known fact about biased random walk on Z.
For completeness, we include a brief proof.
On the one hand, the Markov property gives the functional equation (4.2). On the other hand, lim_{x↘0} f(x) = 0 due to dominated convergence. Hence, solving (4.2) for f(x) yields the stated formula.
We decompose the time τ_2 − τ_1 spent between the visits to the first and the second regeneration point as follows.
This and Markov's inequality imply the following result.
To obtain an upper bound on P(τ_2 − τ_1 ≥ n), we thus need to consider the time spent in traps. We write

(τ_2 − τ_1)_traps = Σ_{i=1}^{T} Σ_{j=1}^{V_i} T_{ij},

where T is the number of traps in [ρ_1, ρ_2), V_i is the number of visits to the ith trap in [ρ_1, ρ_2) and T_{ij} is the time (Y_n)_{n∈N_0} spends during the jth excursion into the ith trap in [ρ_1, ρ_2).

4.3. Tail estimates for the time spent in a single trap. If we fix a percolation environment ω, the time spent in a single trap of length m can be split into the time spent on bottom-to-bottom excursions and the time spent to reach or leave the bottom without a rebound to the left- or rightmost node of the trap, respectively. This leads to the following result for a fixed number of excursions into a single trap.
Then, for any l ∈ N, there exist independent Z_1, . . . , Z_l ∼ geom(e_m) and m_0 ∈ N such that, for m ≥ m_0 and n ∈ N, the corresponding tail bound holds.

Proof. Let Z^(j) be the number of returns to m of (S_{n,j})_{n∈N_0} before absorption. For completeness, we define Z^(j) := 0 on the event where (S_{n,j})_{n∈N_0} visits m at most once. We write T̄_{jk}, k = 1, . . . , Z^(j), for the durations of consecutive excursions of (S_{n,j})_{n∈N_0} from m to m, and let T̄_{jk}, k > Z^(j), be a family of i.i.d. random variables distributed as the duration of an excursion of (S_n)_{n∈N_0} from m to m conditioned on the event {σ^+_m < σ_0}. When starting at 1, the walk (S_n)_{n∈N_0} either hits the absorbing state 0 before reaching the trap bottom, or hits the bottom, does a geometric number of bottom-to-bottom excursions, and then gets absorbed. By the strong Markov property, we can safely replace Z^(j), j = 1, . . . , l, by an independent family of i.i.d. random variables Z_j with law geom(e_m) under P_{m,λ}. As the T̄_{jk}, j = 1, . . . , l, k ∈ N, are nonnegative and i.i.d., the total excursion time is bounded by the corresponding i.i.d. sum. Using Markov's inequality, the Markov property, stochastic domination and Lemma 4.3, for µ > 0, we obtain an exponential bound involving a function f of µ. The function f is differentiable on (0, (1/2) log(1/(2q_λ))), and one checks that there exists μ̄ > 0 with f(μ̄) < 1. As e_m → 0 for m → ∞, there exists m_0 such that f(μ̄)/(1 − e_m) < 1 for all m ≥ m_0. This and (4.3) lead to the assertion.

Lemma 4.6 can be adapted to the case where the random walk is allowed to take lazy steps. Let (S^lazy_n)_{n∈N_0} be the lazy biased random walk on the line graph {0, 1, . . . , m} that moves to the right with probability e^λ/(e^λ + 1 + e^{−λ}), to the left with probability e^{−λ}/(e^λ + 1 + e^{−λ}) and stays put with probability 1/(e^λ + 1 + e^{−λ}) from any vertex other than 0, m. The origin 0 is again supposed to be absorbing and at m, the walk stays put with probability (e^λ + 1)/(e^λ + 1 + e^{−λ}) and moves left with probability e^{−λ}/(e^λ + 1 + e^{−λ}).
Slightly abusing notation, we again write P_{m,λ} for the law of (S^lazy_n)_{n∈N_0} starting at S^lazy_0 = 1, and E_{m,λ} for the corresponding expectation.
Lemma 4.7. Let (S^lazy_{n,j})_{n∈N_0}, j ∈ N, be i.i.d. copies of (S^lazy_n)_{n∈N_0} starting at 1. Further, let T^qu_{ij} be the absorption time at 0 of the walk (S^lazy_{n,j})_{n∈N_0}, j ∈ N. Let R := E^0_{Z,λ}[σ^Z_1] = 1/(1 − 2q_λ) and r_λ > e^{2λ} + e^λ. Then, for any l ∈ N, there exist independent Z_1, . . . , Z_l ∼ geom(e_m) and m_1 ∈ N such that, for m ≥ m_0 ∨ m_1 and n ∈ N, the corresponding tail bound holds, where the T^qu,a_{ij}, j ∈ N, are as in Lemma 4.6, and Z̄_{k,j}, k, j ∈ N, are independent random variables distributed as the number of times the walk (S^lazy_{n,j})_{n∈N_0} stays put before it changes its position for the kth time.

Proof. Since the probability for (S^lazy_{n,j})_{n∈N_0} to change its position at any vertex other than the absorbing state 0 is bounded from below by p̄ := e^{−λ}/(e^λ + 1 + e^{−λ}), we have Z̄_{k,j} ≼ Z_{k,j}, where Z_{k,j}, k, j ∈ N, is a family of i.i.d. geometric random variables with success probability p̄. Notice that E_{m,λ}[Z_{1,1}] = (1 − p̄)/p̄ = e^{2λ} + e^λ > 2. Choose r_λ > e^{2λ} + e^λ. Then, as the Z_{k,j}, k, j ∈ N, are nonnegative and i.i.d., standard large deviation estimates yield that P_{m,λ}(Σ_{k=1}^{⌊n/r_λ⌋} Z_{k,1} ≥ n) decays exponentially fast as n → ∞ (with a rate which is independent of m). Hence, as e_m → 0 for m → ∞, there exists m_1 ∈ N such that the asserted bound holds for all m ≥ m_1. The remainder of the proof now follows from Lemma 4.6.
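The two explicit constants entering Lemma 4.7, the mean e^{2λ} + e^λ of the geometric holding times and R = 1/(1 − 2q_λ), can be sanity-checked numerically. A sketch (the holding-time identity is exact; R is estimated by Monte Carlo, so the comparison is approximate):

```python
import math
import random

def mean_holding_steps(lam):
    """Mean of a geometric random variable with success probability
    p_bar = exp(-lam)/(exp(lam) + 1 + exp(-lam)); algebraically this equals
    exp(2*lam) + exp(lam), the constant appearing in the proof of Lemma 4.7."""
    p_bar = math.exp(-lam) / (math.exp(lam) + 1.0 + math.exp(-lam))
    return (1.0 - p_bar) / p_bar

def estimate_R(lam, n_samples, rng):
    """Monte Carlo estimate of R = E^0_{Z,lam}[sigma^Z_1], the expected time
    for the biased walk on Z (right-step probability p_lam) to first hit +1;
    the claimed closed form is 1/(1 - 2*q_lam)."""
    p = math.exp(lam) / (math.exp(lam) + math.exp(-lam))
    total = 0
    for _ in range(n_samples):
        pos, steps = 0, 0
        while pos < 1:  # walk until the level +1 is first reached
            pos += 1 if rng.random() < p else -1
            steps += 1
        total += steps
    return total / n_samples
```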
In the annealed case, Lemma 4.7 translates into a tail probability of basically order n −α (given the trap is actually seen).
Lemma 4.8. Let R, r_λ, m_0, m_1 be as in Lemma 4.7 and let µ > 0 be such that E^0_{Z,λ}[e^{µσ^Z_1}] < ∞. Further, let T^ann_{ij}, i ∈ Z, j ∈ N, be a family of random variables which are independent given ω and such that T^ann_{ij} given ω is distributed as the hitting time of the entrance of the trap in T_i by (Y_n)_{n∈N_0} under P_{ω,λ} when (Y_n)_{n∈N_0} starts at the right neighbor of the trap entrance. Then the corresponding tail bounds hold, where the constants involved are positive, finite and depend neither on n nor on l.
Proof. Using Lemmas 4.1 and 4.7, we can estimate P(Σ_{j=1}^l T^ann_{ij} ≥ n, ℓ_i ≥ m_0 ∨ m_1) using independent Z_1, . . . , Z_l ∼ geom(e_m) and T^qu_{ij}, j = 1, . . . , l, with r_λ and R as defined in Lemma 4.7. In other words, conditioned on σ_0 < σ^+_m, the walk (S_n)_{n∈N_0} drifts to the left at least as strongly as the unconditioned walk drifts to the right. Estimating all three quantities in the max-term by corresponding quantities for (S′_n)_{n∈N_0}, the biased random walk on Z, and using Markov's inequality and Lemma 4.3, we arrive at a bound consisting of two series. The latter series is finite. To see this, notice that if λ < λ_c, we have e^{−2λ_c} < e^{−2λ} = q_λ/p_λ and the series converges; in the remaining case the series converges by the same argument. For the first series on the right-hand side of (4.4), we use the union bound. To find the asymptotic behavior of the two expressions in the lemma, we apply residue calculus. Define the complex function φ via φ(z) := (p_λ − q_λ)^z γ^{tz}/(1 − γ^{α+z}) for z ∈ C. Then φ is holomorphic in C except at the poles z_k := 2πik/log γ − α, k ∈ Z. Moreover, by the choice of t, |φ(z)| remains bounded as |Re(z)| → ∞. Consequently, Theorem 2(i) in [11] applies and yields the corresponding residue expansion.
Along the lines of Example 3 in [11], we obtain an expression in terms of the complex gamma function Γ. From Stirling's formula, e.g. [1, Theorem 1.4.2], we know that for z ∈ C \ (−∞, 0],

log Γ(z) = (z − 1/2) log z − z + (1/2) log(2π) + R(z),

where log is the branch of the complex logarithm, defined on C \ (−∞, 0], with log x ∈ R for all x > 0, and where R(z) satisfies |R(z)| ≤ c/|z| for some constant c > 0. Hence, in the resulting product, the first and second factors are bounded in absolute value by 1, the third by e^α, and the fourth by e^{2c}. Using Corollary 1.4.4 in [1], we conclude that |Γ(−z_k)| → 0 exponentially fast as |k| → ∞ and that the bi-infinite series in (4.5) is finite and can be bounded by a finite constant c_1 that depends neither on n nor on l. For i = 0, we again use Theorem 2(i) in [11], with poles z_k := 2πik/log γ − α as above. Evaluating the residues and arguing along the same lines as above, we can show that the resulting bi-infinite series has a finite value and the whole term can be bounded by c′_1 l^α n^{−α} log n, where c′_1 ∈ (0, ∞) does not depend on n or l.

4.4. A coupling. As the times spent in different traps are not independent, further work is needed to transfer the tail estimate for the time spent in a single trap to the time spent in the possibly several traps inside a block [ρ_i, ρ_{i+1}). Therefore, we introduce a random walk on a subgraph ω_p of the initial environment ω as follows. We take the initial graph ω sampled according to P_p or P•_p and modify it as follows. For each trap P = (e_1, . . . , e_m) in ω with trap entrance u_0 and edges e_1 = ⟨u_0, u_1⟩, . . . , e_m = ⟨u_{m−1}, u_m⟩, we delete the edges e_1, . . . , e_m from ω and also the vertices u_1, . . . , u_m. We further delete the opposite vertices u′_1, . . . , u′_m and replace the parallel edges e′_1, . . . , e′_m, ⟨u′_m, u′_m + (1, 0)⟩ with a single edge connecting u′_0 and u′_m + (1, 0) with resistance given by the sum of the resistances of the single edges. We shall call the vertex u′_0 opposite the former trap entrance an obstacle.
Should this procedure lead to the deletion of 0, we assign x-coordinate 0 in ω_p to the obstacle that replaced the trap piece which contained 0 in ω. In this way, we also obtain new conductances c_s on ω_p. By the series law, the corresponding resistances r_s between the first obstacle v to the right of 0 that replaces a trap piece covering x-levels k to k + m + 1 and its neighbors u to the left and w to the right are given by the corresponding sums of resistances. Based on this, we define the pruned random walk as the lazy random walk (Y^p_n)_{n∈N_0} on ω_p with transition probabilities proportional to the conductances c_p(⟨u, v⟩), where x(u) ≤ x(v) and p(v) is the number of obstacles with x-coordinate in [0, x(v)). More precisely, if Y^p_n = u, then the walk attempts to step from u to v with probability proportional to c_p(⟨u, v⟩). If the edge between u and v is present in ω_p, then the step is actually performed; otherwise the walk stays put.
Roughly speaking, (Y^p_n)_{n∈N_0} is the lazy random walk on the non-trap pieces of ω when all traps are set to have infinite length. Intuitively, as the traps in ω have finite lengths, the embedding of (Y^p_n)_{n∈N_0} into ω will lag behind the random walk (Y_n)_{n∈N_0}. Regenerations of (Y^p_n)_{n∈N_0} also amount to regenerations of (Y_n)_{n∈N_0}, without any implications for the lengths of the traps in the underlying piece of ω. Furthermore, (Y^p_n)_{n∈N_0} can be used to bound the number of visits to any trap by a quantity independent of the trap lengths, thus greatly reducing the difficulties in transferring the estimate of Lemma 4.8 to an estimate for the time spent in the whole block [ρ_i, ρ_{i+1}) in ω. To make this precise, we give a coupling of (Y^p_n)_{n∈N_0} and (Y_n)_{n∈N_0} with the described properties. Technically, the coupling is such that we obtain processes with the same distributions as (Y_n)_{n∈N_0} and (Y^p_n)_{n∈N_0} and the desired properties, but we shall again refer to them as (Y_n)_{n∈N_0} and (Y^p_n)_{n∈N_0}, respectively, once equality of the corresponding laws is established.
First, let (O_i)_{i∈Z} be an enumeration of the obstacles in ω_p such that O_i lies strictly to the left of O_{i+1} for each i ∈ Z.
Starting from ω_p, take an independent family (L_i)_{i∈Z} of random variables, with (L_i)_{i≠0} independent of ω. We re-insert at O_i a trap piece with a trap of length L_i. Here, we let L_i have the same distribution as ℓ_i for i ≠ 0. For i = 0, let the law of L_0 given x(O_0) > 0 be the law of ℓ_1. Further notice that if x(O_0) = 0, then, by the definition of T_0 and T_1, either 0 is one of the two leftmost vertices in T_1, or 0 ∈ int(T_0), which consists of all vertices from T_0 except the two leftmost and the two rightmost vertices. Thus, we define the law of L_0 given x(O_0) = 0 as follows. We toss a coin with probability P_p(0 ∈ T_1 | 0 ∈ T_1 ∪ int(T_0)) for heads. If the coin comes up heads, we sample the value of L_0 using an independent copy of ℓ_1 (under P_p). If the coin comes up tails, we sample the value of L_0 using an independent copy of ℓ_0 (under P_p given that 0 ∈ int(T_0); this random variable satisfies the bound in Lemma 4.1(b)). Additionally, if the coin comes up tails, we shift horizontally by a value k ∈ {1, . . . , L_0} according to the distribution under P_p of the position of 0 in T_0 given 0 ∈ int(T_0). This gives a new configuration ω̃. By construction, ω̃ has the same law as ω.
Slightly abusing notation, we write $\omega^p$ both for $\omega^p$ and for the subset of $\tilde\omega$ corresponding to it. We further write $V(\omega^p)$ and $V(\tilde\omega)$ for the corresponding vertex sets. Consequently, we write $u = v$ for vertices $u \in V(\omega^p)$, $v \in V(\tilde\omega)$ if $v$ is the node in $\tilde\omega$ corresponding to $u$ in $\omega^p$. Given $\omega^p$ and $\tilde\omega$, we define a random walk $(Y_n)_{n\in\mathbb{N}_0}$ on $V(\omega^p) \times V(\tilde\omega) \times \{-1, 0, 1\}$ whose first and second components (up to random waiting times) behave like $(Y^p_n)_{n\in\mathbb{N}_0}$ and $(Y_n)_{n\in\mathbb{N}_0}$, respectively, while the third component exclusively acts as a memory of the directions taken at certain nodes. This is to ensure that $(Y_n)_{n\in\mathbb{N}_0}$ is a Markov chain.
(1) If $u = v$ when regarding $\omega^p$ as a subset of $\tilde\omega$, and if $u \neq O_i$ for all $i \in \mathbb{Z}$, we let $(Y_n)_{n\in\mathbb{N}_0}$ attempt to do exactly the same steps in its first two components. Note that if $v$ is a trap entrance in $\tilde\omega$, a step to the right by $(Y^{\mathrm{cand}}_{n+1,1}, Y^{\mathrm{cand}}_{n+1,2})$ induces a lazy step of $(Y_{k,1})_{k\in\mathbb{N}_0}$, whereas $(Y_{k,2})_{k\in\mathbb{N}_0}$ moves into the trap. In that case, as will be described in detail below, $(Y_{k,2})_{k\in\mathbb{N}_0}$ will make an excursion into the trap afterwards, whereas $(Y_{k,1})_{k\in\mathbb{N}_0}$ will stay put at $u$ until $(Y_{k,2})_{k\in\mathbb{N}_0}$ returns to the trap entrance $v$. Similarly, when a step of $(Y_{k,1})_{k\in\mathbb{N}_0}$ to the left means moving to an obstacle, $(Y_{k,2})_{k\in\mathbb{N}_0}$ will then step onto a backbone node in $\tilde\omega \setminus \omega^p$. In this case, $(Y_{k,1})_{k\in\mathbb{N}_0}$ will also stay put until $(Y_{k,2})_{k\in\mathbb{N}_0}$ reaches a node in $\tilde\omega \cap \omega^p$. (2) If $u = v$, but $u = O_i$ for some $i \in \mathbb{Z}$, then the step in the first component is taken according to the conductances $c^p$. The second component mimics this, but with the additional option to move right even if the first component does not. This adjusts the transition probabilities of the second component to match those of $(Y_n)_{n\in\mathbb{N}_0}$. If the first component moves right, we demand that the second component leaves the coming trap piece at the right end, which we encode in the third component. Since we further want the walk in the second component to have the same law as $(Y_n)_{n\in\mathbb{N}_0}$, we have to make sure that, in total, it leaves the trap piece at the right resp. left end with the correct probability. These restrictions lead to a system of linear equations for the transition probabilities whose solution is given as follows.
Here, $L_i$ denotes the length of the trap to the right of $v$, and the probability in question is that of the biased random walk $(S'_n)_{n\in\mathbb{N}_0}$ on $\mathbb{Z}$, started from $0$, first making a step to the right and then hitting $m$ before $0$.
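The hitting probability entering these transition rules is a classical gambler's-ruin quantity: for a walk on $\{0,\ldots,m\}$ stepping right resp. left with probability proportional to $e^{\lambda}$ resp. $e^{-\lambda}$ (lazy steps do not affect hitting probabilities), one has $P_1(\sigma_m < \sigma_0) = (1-e^{-2\lambda})/(1-e^{-2\lambda m})$. The following sketch (ours, with illustrative function names, not part of the paper) verifies this closed form against a direct solve of the harmonic recursion.

```python
import math

def hit_prob_closed(lmbda, m):
    """Gambler's ruin: P_1(hit m before 0) for a walk with step ratio e^{-2*lmbda}."""
    r = math.exp(-2 * lmbda)
    return (1 - r) / (1 - r ** m)

def hit_prob_recursion(lmbda, m):
    """Solve h(j) = p*h(j+1) + q*h(j-1), h(0)=0, h(m)=1 for the embedded non-lazy walk."""
    p = math.exp(lmbda) / (math.exp(lmbda) + math.exp(-lmbda))
    q = 1 - p
    n = m - 1                      # unknowns h(1), ..., h(m-1)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for j in range(n):
        A[j][j] = 1.0
        if j > 0:
            A[j][j - 1] = -q
        if j < n - 1:
            A[j][j + 1] = -p
    b[n - 1] = p                   # boundary term from h(m) = 1
    for j in range(1, n):          # forward elimination (tridiagonal system)
        f = A[j][j - 1] / A[j - 1][j - 1]
        A[j][j] -= f * A[j - 1][j]
        b[j] -= f * b[j - 1]
    h = [0.0] * n
    h[n - 1] = b[n - 1] / A[n - 1][n - 1]
    for j in range(n - 2, -1, -1): # back substitution
        h[j] = (b[j] - A[j][j + 1] * h[j + 1]) / A[j][j]
    return h[0]                    # h(1)
```

Both evaluations agree to machine precision, e.g. for $\lambda = 0.7$ and $m = 10$.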
Figure 6. Transitions from obstacles. Depending on the value of $Y_{n+1,3}$, after a step to the right it is already determined whether the random walk on $\tilde\omega$ hits the boundary of the trap piece at $v$ or $v^*$.
(3) If $v$ is in the interior of the backbone part of a trap piece in $\tilde\omega$ (and thus not in $\omega^p$), then we write $L_v$ for the length of the corresponding trap. In this case, the first component of $(Y_n)_{n\in\mathbb{N}_0}$ stays put, while the second component moves in the trap piece with transition probabilities according to the biased random walk $(Y_n)_{n\in\mathbb{N}_0}$, possibly conditioned on the event that the boundary of the trap piece is first hit at the left- or rightmost end, respectively. Let $p^{k,0}$, $p^{k,-1}$, $p^{k,1}$ be the transition matrices of the lazy biased random walk $(S_n)_{n\in\mathbb{N}_0}$ on $\{0, \ldots, k\}$ (which steps to the right, steps to the left or stays put with probability proportional to $e^{\lambda}$, $e^{-\lambda}$ and $1$, respectively) and of the lazy biased random walk on $\{0, \ldots, k\}$ conditioned on $\{\sigma_0 < \sigma_k\}$ resp. $\{\sigma_0 > \sigma_k\}$, where $\sigma_j := \inf\{n \in \mathbb{N}_0 : S_n = j\}$. The transition probabilities of the second component are then given in terms of these matrices. (4) If $v$ is a trap node in $\tilde\omega$, the first component of $(Y_n)_{n\in\mathbb{N}_0}$ stays put, while the second component moves inside the trap with transition probabilities according to the biased random walk $(Y_n)_{n\in\mathbb{N}_0}$; that is, it steps to the right, stays put or steps to the left with probability $\frac{e^{\lambda}}{e^{\lambda}+1+e^{-\lambda}}$, $\frac{1}{e^{\lambda}+1+e^{-\lambda}}$ and $\frac{e^{-\lambda}}{e^{\lambda}+1+e^{-\lambda}}$, respectively.
Figure 8. Transitions in the dead end part of trap pieces.
(5) Finally, when $v \in \tilde\omega \cap \omega^p$ but the positions of the two components of $(Y_n)_{n\in\mathbb{N}_0}$ do not correspond, the second component stays put, while the first component moves with transition probabilities given by the conductances $c^p$: it moves to $(u + (1,0), v, 0)$ with probability proportional to $c^p(\{u, u + (1,0)\})$, to $(u - (1,0), v, 0)$ with probability proportional to $c^p(\{u, u - (1,0)\})$, and to $(u', v, 0)$ with probability proportional to $c^p(\{u, u'\})$.
Figure 9. Transitions on the backbone when coordinates do not coincide. In this case, the walk on $\tilde\omega$ waits at a trap end or at a vertex opposite a trap entrance. This vertex must be passed by the walk on $\omega^p$ provided that this walk is transient to the right. The walk on $\omega^p$ pauses until the walk on $\tilde\omega$ hits its position.
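The conditioned matrices $p^{k,-1}$ and $p^{k,1}$ can equivalently be obtained from the unconditioned lazy walk by a Doob $h$-transform with $h(j) = P_j(\sigma_k < \sigma_0)$. The sketch below (our own illustration, not the construction used in the paper) assembles the analogue of $p^{k,1}$, the lazy biased walk conditioned on $\{\sigma_0 > \sigma_k\}$, and uses the fact that lazy steps are unaffected by the transform; the boundary convention at $k$ is our assumption.

```python
import math

def conditioned_matrix(lmbda, k):
    """Transition matrix of the lazy biased walk on {0,...,k} conditioned to hit k before 0."""
    z = math.exp(lmbda) + 1 + math.exp(-lmbda)
    pr, pl, ps = math.exp(lmbda) / z, math.exp(-lmbda) / z, 1 / z
    r = math.exp(-2 * lmbda)
    # h(j) = P_j(sigma_k < sigma_0), the gambler's-ruin harmonic function
    h = [(1 - r ** j) / (1 - r ** k) for j in range(k + 1)]
    P = [[0.0] * (k + 1) for _ in range(k + 1)]
    for i in range(1, k):
        P[i][i + 1] = pr * h[i + 1] / h[i]   # right steps are reweighted upward
        P[i][i - 1] = pl * h[i - 1] / h[i]   # left steps are reweighted downward
        P[i][i] = ps                         # lazy steps keep their probability
    P[k][k] = 1.0                            # target boundary made absorbing (our convention)
    return P
```

Rows of interior states are stochastic, and the transition into the forbidden boundary state $0$ vanishes since $h(0) = 0$.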
We write $P'_p$ for the distribution of the environment $(\omega^p, \tilde\omega)$ and $P'_{\omega^p, \tilde\omega, \lambda}$ for the quenched law of $(Y_n)_{n\in\mathbb{N}_0}$ as described above. With these, we define the annealed measure $P'$ in the usual way. Sometimes, the walks on $\omega^p$ and $\tilde\omega$ are at different positions (when $\omega^p$ is embedded in $\tilde\omega$). Then, depending on the particular situation, one of the walks waits while the other moves until they meet again. The times at which each of the walks moves without being forced to hold as described above are collected in the following sets:
$$N_1 := \{n \in \mathbb{N}_0 : Y_{n,2} \text{ is at a vertex in } \tilde\omega \text{ corresponding to a vertex in } \omega^p\},$$
$$N_2 := \{n \in \mathbb{N}_0 : Y_{n,1} = Y_{n,2}\} \cup \{n \in \mathbb{N}_0 : Y_{n,2} \text{ is in the interior of a trap piece}\}.$$
Let $(s_{1,k})_{k\in\mathbb{N}}$ resp. $(s_{2,k})_{k\in\mathbb{N}}$ be the enumerations of $N_1$ resp. $N_2$ in ascending order. Then the following processes coincide in law with $(Y^p_n)_{n\in\mathbb{N}_0}$ and $(Y_n)_{n\in\mathbb{N}_0}$, respectively. More precisely, with $(\tilde Y^p_n)_{n\in\mathbb{N}_0} := (Y_{s_{1,n},1})_{n\in\mathbb{N}_0}$ and $(\tilde Y_n)_{n\in\mathbb{N}_0} := (Y_{s_{2,n},2})_{n\in\mathbb{N}_0}$, the following lemma holds.
Lemma 4.9. We have $(\tilde Y^p_n)_{n\in\mathbb{N}_0} \overset{\mathrm{law}}{=} (Y^p_n)_{n\in\mathbb{N}_0}$ and $(\tilde Y_n)_{n\in\mathbb{N}_0} \overset{\mathrm{law}}{=} (Y_n)_{n\in\mathbb{N}_0}$.
Proof. Since $(\tilde Y^p_n)_{n\in\mathbb{N}_0}$ and $(Y^p_n)_{n\in\mathbb{N}_0}$ are defined on the same environment, and the environments of $(\tilde Y_n)_{n\in\mathbb{N}_0}$ and $(Y_n)_{n\in\mathbb{N}_0}$ are identically distributed by construction, it suffices to check the quenched transition probabilities of $(\tilde Y_n)_{n\in\mathbb{N}_0}$ and $(\tilde Y^p_n)_{n\in\mathbb{N}_0}$, respectively. One can check that the transition probabilities of $(\tilde Y^p_n)_{n\in\mathbb{N}_0}$ coincide with those of $(Y^p_n)_{n\in\mathbb{N}_0}$; thus, the equality in law of $(\tilde Y^p_n)_{n\in\mathbb{N}_0}$ and $(Y^p_n)_{n\in\mathbb{N}_0}$ follows from the Markov property of $(Y_n)_{n\in\mathbb{N}_0}$. For $(\tilde Y_n)_{n\in\mathbb{N}_0}$, this is also obvious at most nodes, the exceptions being transitions at obstacles and inside trap pieces. However, it suffices to show that at obstacles, steps into the different directions are taken with the correct probabilities, and that excursions into the following trap pieces end at the left resp. right end with the correct probability, i.e., that $(Y_{n,3})_{n\in\mathbb{N}_0}$ takes the value $-1$ or $1$ with the correct probability. This amounts to a system of linear equations which is solved by the transition probabilities defined under (2). The result now also follows from the Markov property of $(Y_n)_{n\in\mathbb{N}_0}$.
From now on, all results concerning $(Y_n)_{n\in\mathbb{N}_0}$ will be discussed in terms of the process $(\tilde Y_n)_{n\in\mathbb{N}_0}$ under $P'$. To ease notation, we shall write $(Y_n)_{n\in\mathbb{N}_0}$ and $P$ for $(\tilde Y_n)_{n\in\mathbb{N}_0}$ and $P'$, respectively. We shall also write $\ell_i$ though technically referring to $L_i$. Consequently, we shall not distinguish between $(Y^p_n)_{n\in\mathbb{N}_0}$ and $(\tilde Y^p_n)_{n\in\mathbb{N}_0}$, nor between $\omega$ and $\tilde\omega$.
Lemma 4.10. For $\lambda > \lambda^* := \frac{\log 2}{2}$, in particular for $\lambda \geq \frac{\lambda_{\mathrm{c}}}{2}$, it holds that $\lim_{n\to\infty} x(Y^p_n) = \infty$ a.s.
The proof of the lemma is very similar to that of Proposition 3.1 in [2]. We include it for completeness.
Proof. It is sufficient to show that $0$ is a transient state for the biased random walk on $V(\omega^p)$. We use electrical network theory. Write $R^p(0 \leftrightarrow \infty)$ for the effective resistance between $0$ and $+\infty$ in the random conductance model on $\omega^p$ with conductances $c^p(e)$ for $e \in E$ with $\omega^p(e) = 1$. Using Thomson's principle [17, Theorem 9.10], we infer
$$R^p(0 \leftrightarrow \infty) \leq \mathcal{E}^p(\theta) \quad \text{for all unit flows } \theta \text{ from } 0 \text{ to } \infty,$$
where $\mathcal{E}^p(\theta)$ is the energy of the flow $\theta$. Here, a flow $\theta$ from $u$ to $\infty$ is an antisymmetric mapping $\theta : V(\omega^p) \times V(\omega^p) \to \mathbb{R}$ satisfying Kirchhoff's node law at every vertex other than $u$. Since there are no traps in $\omega^p$, there exists an infinite open self-avoiding path $P = (e_1, e_2, \ldots)$ connecting $0$ with $\infty$. This path never backtracks in the sense that the sequence of $x$-coordinates of the vertices on this path is nondecreasing. Now define a flow $\theta$ from $0$ to $\infty$ by pushing a unit current through $P$. More precisely, if $e_n = \{u_{n-1}, u_n\}$ with $u_0 := 0$, then let $\theta(u_{n-1}, u_n) = 1 = -\theta(u_n, u_{n-1})$ for all $n \in \mathbb{N}$ and $\theta(v, w) = 0$ whenever $\{v, w\}$ is not on the path $P$. For every $x$-level $n \in \mathbb{N}_0$, there is at most one edge $e$ in $P$ connecting the two vertices with $x$-value $n$. The resistance of this edge is bounded by $r^p(e) \leq e^{-2\lambda n}(1 - e^{-2\lambda})^{-p(n)}$, where $p(n)$ is the number of obstacles with $x$-value $< n$. There are at most $n$ such obstacles, hence $r^p(e) \leq e^{-2\lambda n}(1 - e^{-2\lambda})^{-n}$. Further, for every $n \in \mathbb{N}$, there is exactly one edge on $P$ leading from a vertex with $x$-value $n-1$ to a vertex with $x$-value $n$. The resistance of this edge is bounded by $r^p(e) \leq e^{-\lambda(2n-1)}(1 - e^{-2\lambda})^{-p(n)} \leq e^{-\lambda(2n-1)}(1 - e^{-2\lambda})^{-n}$. Consequently, the energy $\mathcal{E}^p(\theta)$ is bounded by
$$\mathcal{E}^p(\theta) \leq \sum_{n \in \mathbb{N}} \big(e^{-2\lambda n} + e^{-\lambda(2n-1)}\big)(1 - e^{-2\lambda})^{-n} \leq (1 + e^{\lambda}) \sum_{n \in \mathbb{N}} \Big(\frac{e^{-2\lambda}}{1 - e^{-2\lambda}}\Big)^n.$$
The latter series is finite iff $\frac{e^{-2\lambda}}{1-e^{-2\lambda}} < 1$ or, equivalently, $\lambda > \frac{\log 2}{2} =: \lambda^*$.
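The convergence criterion at the end of the proof is easy to check numerically. The following sketch (ours; names are illustrative) evaluates the ratio $q(\lambda) = e^{-2\lambda}/(1-e^{-2\lambda})$ of consecutive terms and the resulting geometric bound on the energy, which is finite precisely for $\lambda > \lambda^* = \log(2)/2$.

```python
import math

LAMBDA_STAR = math.log(2) / 2  # critical threshold from Lemma 4.10

def energy_ratio(lmbda):
    """Ratio e^{-2*lmbda} / (1 - e^{-2*lmbda}) of consecutive terms in the energy bound."""
    return math.exp(-2 * lmbda) / (1 - math.exp(-2 * lmbda))

def energy_bound(lmbda):
    """Geometric bound (1 + e^lmbda) * sum_{n>=1} q^n on E^p(theta); infinite iff q >= 1."""
    q = energy_ratio(lmbda)
    return math.inf if q >= 1 else (1 + math.exp(lmbda)) * q / (1 - q)
```

At $\lambda = \lambda^*$ the ratio equals $1$ and the bound degenerates, while any $\lambda > \lambda^*$ yields a finite energy and hence transience.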
Comparing this with $\lambda_{\mathrm{c}}/2$, for which we have an explicit formula in terms of $p$ given in Proposition 1.2 with unique minimizer $p = 1/2$, we obtain $\lambda_{\mathrm{c}}/2 > \lambda^*$. It also follows from the proof of Lemma 4.10 that for $u \in \omega^p$ and $\lambda \geq \lambda_{\mathrm{c}}/2$, the escape probability at $u$, i.e., the probability to leave $u$ and never return, is uniformly bounded from below. For $u \in \omega^p$, let $\sigma^p_u := \inf\{n > 0 : Y^p_n = u\}$. Also, let $R^p(u \leftrightarrow \infty)$ and $c^p(u)$ be the effective resistance between $u$ and $+\infty$ and the sum of the conductances of all edges incident to $u$, respectively, in the random conductance model on $\omega^p$ with conductances $c^p(e)$ for $e \in E$ with $\omega^p(e) = 1$. Then, pushing a unit current from $u$ to $+\infty$ as in the proof of Lemma 4.10, we get
$$P_{\omega^p,\lambda}(\sigma^p_u = \infty \mid Y^p_0 = u) = \frac{1}{c^p(u)\, R^p(u \leftrightarrow \infty)} > 0, \tag{4.6}$$
with a lower bound that is uniform in $u$. Let $R^p_1, R^p_2, \ldots$ be an enumeration from left to right of the pre-regeneration points in $\omega^p$ which are visited exactly once by $(Y^p_n)_{n\in\mathbb{N}_0}$. Further, let $\rho^p_0 = 0$ and $\rho^p_n := x(R^p_n)$ for $n \in \mathbb{N}$. Finally, for $n \in \mathbb{N}$, let $\tau^p_n$ be the unique time $k$ with $X^p_k = \rho^p_n$. We refer to the $R^p_n$'s and $\tau^p_n$'s as regeneration points and times, respectively, of the pruned walk.
Lemma 4.11. Let $\lambda \geq \lambda_{\mathrm{c}}/2$. Then, $P^{\bullet}$-a.s., the regeneration points $R^p_1, R^p_2, \ldots$ exist; moreover, $\rho^p_1$ has exponentially decaying tails and $E^{\bullet}[(\tau^p_1)^{\kappa}] < \infty$ for every $\kappa > 0$.
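The identity behind (4.6), escape probability equals $1/(c(u)R(u \leftrightarrow \infty))$, can be illustrated on the translation-invariant biased walk on $\mathbb{Z}$ with conductances $c(\{n, n+1\}) = e^{\lambda(2n+1)}$, where the escape probability is also known in closed form as $\tanh(\lambda)$. The sketch below (our own check, not part of the proof) truncates the resistance series numerically.

```python
import math

def escape_prob_network(lmbda, depth=200):
    """1 / (c(0) R(0 <-> infinity)) for the biased walk on Z.

    The resistance to +infinity is the series sum of e^{-lmbda(2n+1)};
    the resistance to -infinity diverges, so that side carries no current.
    """
    R = sum(math.exp(-lmbda * (2 * n + 1)) for n in range(depth))
    c0 = math.exp(lmbda) + math.exp(-lmbda)  # conductances of the two edges at 0
    return 1.0 / (c0 * R)

def escape_prob_closed(lmbda):
    """p - q = tanh(lmbda) for the embedded non-lazy walk (laziness is irrelevant)."""
    return math.tanh(lmbda)
```

Both expressions agree, confirming that the network formula reproduces the classical no-return probability of the biased walk.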
Proof. We shall only give an informal description of the proof, as the details can be adapted from the proofs of Lemmas 6.3 through 6.5 in [13]. The basic idea is to consider the walk $(Y^p_n)_{n\in\mathbb{N}_0}$ at fresh points. The first fresh point $F^p_1$ is the first pre-regeneration point to the right of the origin visited by the walk $(Y^p_n)_{n\in\mathbb{N}_0}$. If after the first visit to this fresh point the random walk never returns to it, then $F^p_1 = R^p_1$. Otherwise, the random walk will return to $F^p_1$. In this case, the second fresh point $F^p_2$ is the first pre-regeneration point to the right of $F^p_1$ that has not been visited by the random walk before hitting $F^p_1$ for the second time, and so on (see Lemma 6.4 in [13] for the construction for the original walk). By the strong Markov property (for the walk and the cluster, where a cycle to the right of the origin in the pruned cluster is revealed upon the first visit of the walk to this cycle), the distances between two fresh points are i.i.d. given that they are finite. Using the uniform bound on the resistance to $+\infty$ given in the proof of Lemma 4.10, valid for $\lambda > \lambda^*$, the walk visits at most a geometric number of fresh points before hitting a fresh point from which it escapes to $+\infty$ without ever returning. If, on the other hand, the distance between two consecutive fresh points, a left and a right one, is large, say $\geq 2m$, then there are two options. Either the walk made an excursion of length at least $m$ to the right between the first two visits to the left fresh point, or there is no pre-regeneration point on the percolation cluster between distance $m$ and distance $2m$ to the right of the left fresh point. Both possibilities are exponentially unlikely in $m$. The first one because it requires the walk to backtrack at least $m$ steps to the left, which has probability bounded by a constant times $\big(\frac{e^{-2\lambda}}{1-e^{-2\lambda}}\big)^{m}$ (adapt the proof of Lemma 6.3 in [13] with the new conductances to see this).
The second one because of the Markov property of the original percolation cluster $\omega$, which implies that when exploring the cluster from left to right, at any point, the next pre-regeneration point to the right is only a geometric distance away. Consequently, $\rho^p_1$ can be bounded from above by a geometric number of independent random variables, all stochastically bounded by a nonnegative integer-valued random variable with some finite exponential moment. From this, standard large deviation estimates imply that $\rho^p_1$ has exponentially decaying tails. The proof of $E^{\bullet}[(\tau^p_1)^{\kappa}] < \infty$ for arbitrary $\kappa > 0$ can be adapted from the proof of Lemma 6.5 in [13], a brute-force estimate which carries over immediately.
4.5. Proof of Proposition 2.5. We are now ready to give the proof of the tail result for the regeneration times.
Proof of Proposition 2.5. For each $n \in \mathbb{N}$, we decompose the regeneration time into the time spent on the backbone and the time spent in traps. The time spent on the backbone can be neglected due to Lemma 4.5. We now estimate the time spent in traps using the bound from Lemma 4.1 in [13]. If $0$ is a pre-regeneration point (or just connected to $+\infty$ via a path that does not visit vertices with $x$-coordinate strictly smaller than $0$), the argument that leads to (24) in [2] gives
$$P_{\omega,\lambda}(Y_n \neq 0 \text{ for all } n \geq 1) \geq \frac{1 - e^{-\lambda}}{e^{\lambda} + 1 + e^{-\lambda}} =: p_{\mathrm{esc}}.$$
Integration with respect to $P^{\bullet}_p$ gives $p_{\mathrm{esc}} \leq P^{\bullet}(Y_n \neq 0 \text{ for all } n \geq 1) \leq 1$.
Notice that the same bound holds when $P^{\bullet}$ is replaced by $P$. Analogously, when estimating $P(\tau_1 \geq n)$, the time spent on the backbone can be neglected by Lemma 4.12, so that it suffices to bound $P(\tau^{\mathrm{traps}}_1 \geq n)$ in this case. We shall only estimate $P^{\bullet}(\tau^{\mathrm{traps}}_1 \geq n,\ X_k \geq 1 \text{ for all } k \in \mathbb{N})$, as $P(\tau^{\mathrm{traps}}_1 \geq n)$ can be estimated similarly. To this end, we consider $(Y_n)_{n\in\mathbb{N}_0}$ and $(Y^p_n)_{n\in\mathbb{N}_0}$ as constructed in Section 4.4. Further, we use the family $T^{\mathrm{ann}}_{ij}$, $i \in \mathbb{Z}$, $j \in \mathbb{N}$, of random variables introduced in Lemma 4.8. By construction, the number of times $(Y_n)_{n\in\mathbb{N}_0}$ visits any node in $\omega$ which is not in the interior of a trap piece can be bounded by the number of times $(Y^p_n)_{n\in\mathbb{N}_0}$ visits the corresponding node in $\omega^p$. This holds in particular for all trap entrances. By Lemma 4.11, there exist regeneration points of $(Y^p_n)_{n\in\mathbb{N}_0}$; these are also regeneration points for $(Y_n)_{n\in\mathbb{N}_0}$. We have
$$\tau^{\mathrm{traps}}_1 \leq \sum_{i=1}^{T} \sum_{j=1}^{V_i} T_{ij},$$
where $T$ is the number of traps in $[0, \rho_1)$, $V_i$ is the number of visits to the $i$th trap, $T_{ij}$ is the time $(Y_n)_{n\in\mathbb{N}_0}$ spends during the $j$th excursion into the $i$th trap, and $(T^{\mathrm{ann}}_{ij})_{i,j\in\mathbb{N}}$ is a family of random variables independent of $(\omega^p, (Y^p_n)_{n\in\mathbb{N}_0})$ such that the $T^{\mathrm{ann}}_{ij}$, $i, j \in \mathbb{N}$, are independent given the family $(L_i)_{i\in\mathbb{N}}$, with $T^{\mathrm{ann}}_{ij}$ being distributed as the duration of one excursion of $(Y_n)_{n\in\mathbb{N}_0}$ under $P_{\omega,\lambda}$ into a trap of length $L_i$. Since $(\rho^p_1, \tau^p_1)$ and $(T^{\mathrm{ann}}_{ij})_{i,j\in\mathbb{N}}$ are independent, we can condition on the former and bound the two factors separately. First, look at $P(\sum_{j=1}^{l} T^{\mathrm{ann}}_{ij} \geq n)$ for fixed $i$ and $l \in \mathbb{N}$, splitting according to whether $L_i$ exceeds the thresholds $m_0, m_1$ from Lemma 4.8. With $P_{m,\lambda}$ and $T^{\mathrm{qu}}_{ij}$, $i, j \in \mathbb{N}$, as in Lemma 4.7, we then apply Markov's inequality and the convexity of $x \mapsto x^{\alpha+1}$. Let $N(k)$ be the number of times the walk $(S_n)_{n\in\mathbb{N}_0}$ visits vertex $k \in \{1, \ldots, m\}$. Note that in order to describe $T^{\mathrm{qu}}_{i1}$, we also need to take lazy steps into account.
This means that, under $P_{m,\lambda}$, we have the identity in law
$$T^{\mathrm{qu}}_{i1} \overset{\mathrm{law}}{=} \sum_{k=1}^{m} \sum_{l=1}^{N(k)} Z_{k,l},$$
where $N(k)$ has distribution $\mathrm{geom}(e_k)$ and the $Z_{k,l}$'s are a family of independent random variables, independent of $(N(1), \ldots, N(m))$, with distribution $\mathrm{geom}\big(\frac{e^{\lambda}+e^{-\lambda}}{e^{\lambda}+1+e^{-\lambda}}\big)$ for $k = 1, \ldots, m-1$, $l \in \mathbb{N}$, and $\mathrm{geom}\big(\frac{e^{-\lambda}}{e^{\lambda}+1+e^{-\lambda}}\big)$ for $k = m$, $l \in \mathbb{N}$, respectively. Since $m < m_0 \vee m_1$ and the escape probability $e_k$ is nonincreasing in $k$, we can bound $e_k$ from below by $e_{m_0 \vee m_1}$ for all $k \in \{1, \ldots, m\}$ and thereby stochastically bound $N(k)$. In combination with the convexity of $x \mapsto x^{\alpha+1}$ on $[0, \infty)$, this leads to the desired moment bound. For $k, l \in \mathbb{N}$, we write $P^{\bullet}(\rho^p_1 = k, \tau^p_1 = l) = P^{\bullet}(\tau^p_1 = l) \cdot P^{\bullet}(\rho^p_1 = k \mid \tau^p_1 = l)$. As the second factor vanishes for $k > l$, the double sum $\sum_{k,l=1}^{\infty}$ may be restricted to $k \leq l$. Hence, it follows from Lemma 4.12 that the first sum on the right-hand side of (4.9) is bounded by a constant times $n^{-\alpha}$. For $\tau_1$ under $P$, this becomes a constant times $n^{-\alpha} \log n$. It also follows from Lemma 4.12 and Markov's inequality that the remaining terms are of smaller order for sufficiently large $\kappa$.
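The decomposition of $T^{\mathrm{qu}}_{i1}$ into per-vertex geometric holding times can be sanity-checked numerically via Wald's identity: the expected excursion duration computed by first-step analysis must equal $\sum_k E[N(k)]\,E[Z_{k,1}]$, with $E[N(k)]$ the Green's function of the embedded non-lazy walk. The sketch below is our own check, not from the paper; it assumes that blocked right attempts at the dead end count as lazy steps, consistent with the geometric parameter stated above.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def excursion_time_direct(lmbda, m):
    """E[steps until hitting 0] for the lazy biased walk on {0,...,m} started at 1."""
    z = math.exp(lmbda) + 1 + math.exp(-lmbda)
    pr, pl = math.exp(lmbda) / z, math.exp(-lmbda) / z
    A = [[0.0] * m for _ in range(m)]
    for i in range(m):
        k = i + 1
        stay = 1.0 / z if k < m else (1 + math.exp(lmbda)) / z  # blocked right steps at the dead end
        A[i][i] = 1.0 - stay
        if k > 1:
            A[i][i - 1] = -pl
        if k < m:
            A[i][i + 1] = -pr
    return solve(A, [1.0] * m)[0]

def excursion_time_via_visits(lmbda, m):
    """Wald: sum over vertices of E[N(k)] * E[holding time per visit]."""
    z = math.exp(lmbda) + 1 + math.exp(-lmbda)
    p = math.exp(lmbda) / (math.exp(lmbda) + math.exp(-lmbda))
    q = 1 - p
    Q = [[0.0] * m for _ in range(m)]  # embedded non-lazy walk on 1..m, absorbed at 0
    for i in range(m):
        k = i + 1
        if k < m:
            Q[i][i + 1] = p
        if k > 1:
            Q[i][i - 1] = q if k < m else 1.0  # from the dead end, the next move is left
    # E[N(k)] starting from 1: solve (I - Q)^T x = e_1
    A = [[(1.0 if i == j else 0.0) - Q[j][i] for j in range(m)] for i in range(m)]
    e1 = [1.0] + [0.0] * (m - 1)
    visits = solve(A, e1)
    hold = [z / (math.exp(lmbda) + math.exp(-lmbda))] * (m - 1) + [z * math.exp(lmbda)]
    return sum(v * h for v, h in zip(visits, hold))
```

Both computations agree to machine precision, confirming the bookkeeping behind the identity in law.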
Appendix A. Uniform integrability of renewal counting processes
In our proof of Theorem 1.4, we use that the suitably renormalised renewal counting process of a delayed renewal process is uniformly integrable. The following result is (more than) sufficient for our purposes.
Proposition A.1. Let $\xi_2, \xi_3, \ldots$ be a sequence of i.i.d. random variables independent of $\xi_1$ such that $P(\xi_k > 0) = 1$ for $k \in \mathbb{N}$, where $P$ denotes the underlying probability measure. Suppose there are constants $d > 0$ and $\alpha \in (1, 2]$ such that $P(\xi_2 > t) \leq d t^{-\alpha}$ for all $t \geq 1$. Then, with $\mu := E[\xi_2]$, $S_n := \sum_{k=1}^{n} \xi_k$, $\nu(t) := \inf\{n \in \mathbb{N} : S_n > t\}$ and $a(t) := t^{1/\alpha}$ if $\alpha \in (1, 2)$ and $a(t) := \sqrt{t \log t}$ if $\alpha = 2$, it holds that
$$\Big(\exp\Big(\theta\, \frac{\nu(t) - t/\mu}{a(t)}\Big)\Big)_{t \geq 2} \text{ is uniformly integrable for every } \theta > 0 \tag{A.1}$$
and
$$\Big(\Big|\frac{\nu(t) - t/\mu}{a(t)}\Big|^p\Big)_{t \geq 2} \text{ is uniformly integrable for every } p \in (1, \alpha) \tag{A.2}$$
for which there exists an $r > p$ with $E[\xi_1^r] < \infty$. The statements (A.1) and (A.2) have been shown in [16] in the case where the $\xi_k$, $k \in \mathbb{N}$, are i.i.d. and $\xi_1$ is in the domain of attraction of an $\alpha$-stable law. Unfortunately, we have not been able to apply a coupling argument in order to deduce the uniform integrability needed here from the main results in the cited source. However, the proofs given in [16] apply. We shall provide a sketch of these proofs with the necessary changes.
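Proposition A.1 in particular implies a law of large numbers for $\nu(t)$ at scale $a(t)$. As a quick illustration (our own simulation, not part of the proof), the following sketch draws i.i.d. Pareto increments with tail exponent $\alpha = 2$, so that $\mu = 2$ and $a(t) = \sqrt{t \log t}$, and checks that $\nu(t)$ stays within a few multiples of $a(t)$ of $t/\mu$.

```python
import random

def simulate_nu(t, alpha=2.0, seed=7):
    """First n with S_n > t for i.i.d. increments with P(xi > x) = x^{-alpha}, x >= 1."""
    rng = random.Random(seed)
    s, n = 0.0, 0
    while s <= t:
        s += rng.random() ** (-1.0 / alpha)  # inverse-CDF sampling of Pareto(alpha)
        n += 1
    return n
```

For $t = 10^5$, the deviation $|\nu(t) - t/\mu|$ is of the order of $a(t) \approx 10^3$, far below $t/\mu = 5 \cdot 10^4$.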
Turning to the second assertion, pick $1 < p < \alpha$ and $r \in (p, \alpha)$ such that $E[\xi_1^r] < \infty$. Following the proof of (2.5) in [16] with mild adaptations, we obtain the analogous moment bound. The rest of the proof is as in [16].